<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Your Site's RSS Feed]]></title><description><![CDATA[The HPE Developer portal]]></description><link>https://developer.hpe.com</link><generator>GatsbyJS</generator><lastBuildDate>Tue, 28 Apr 2026 17:02:36 GMT</lastBuildDate><item><title><![CDATA[Implementing a complete Workshop-on-Demand infrastructure in less than an hour]]></title><description><![CDATA[Implementing a complete Workshop-on-Demand infrastructure in less than an hour December 17, 2025 Join our next meetup session to witness how…]]></description><link>https://developer.hpe.com/Implementing-a-complete-Workshop-on-Demand-infrastructure-in-less-than-an-hour/</link><guid isPermaLink="false">https://developer.hpe.com/Implementing-a-complete-Workshop-on-Demand-infrastructure-in-less-than-an-hour/</guid><content:encoded>&lt;h2&gt;Implementing a complete Workshop-on-Demand infrastructure in less than an hour&lt;/h2&gt;
&lt;p&gt;December 17, 2025&lt;/p&gt;
&lt;p&gt;Join our next meetup session to witness how quick and easy it can be to deploy a Workshop-on-Demand infrastructure so that you can deliver your own on-demand training sessions. Bruno Cornec (a former HPE Linux Distinguished Technologist) and Frederic Passeron from HPE Developer Community will demonstrate how you can achieve this goal.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Network Operations Spotlight with PyCentralv2]]></title><description><![CDATA[Network Operations Spotlight with PyCentralv2 April 08, 2026 Return for our second session of the HPE Networking Automation Team’s Developer…]]></description><link>https://developer.hpe.com/Network-operations-spotlight-with-pycentralv2/</link><guid isPermaLink="false">https://developer.hpe.com/Network-operations-spotlight-with-pycentralv2/</guid><content:encoded>&lt;h2&gt;Network Operations Spotlight with PyCentralv2&lt;/h2&gt;
&lt;p&gt;April 08, 2026&lt;/p&gt;
&lt;p&gt;Return for our second session of the HPE Networking Automation Team’s Developer Meetup series! This session will focus on the Automation Team’s PyCentral software development kit (SDK) for HPE Aruba Networking Central. PyCentral is the Automation Team’s very own Python SDK built to integrate automation with network operations. PyCentral handles authentication, request formatting, and error handling with HPE Aruba Networking Central&apos;s REST APIs while exposing simple Python functions. The SDK has received numerous enhancements, including the integration of a full suite of support materials for configuration, monitoring, troubleshooting, and streaming. Follow along as the Automation Team guides you through a series of demos showcasing the power of automating network operations with PyCentral.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Accelerate public sector AI use cases using a powerful ML Ops platform]]></title><description><![CDATA[Accelerate public sector AI use cases using a powerful ML Ops platform September 21, 2022 Join us for a free, 60-minute session where you…]]></description><link>https://developer.hpe.com/accelerate-public-sector-ai-use-cases-using-a-powerful-ml-ops-platform/</link><guid isPermaLink="false">https://developer.hpe.com/accelerate-public-sector-ai-use-cases-using-a-powerful-ml-ops-platform/</guid><content:encoded>&lt;h2&gt;Accelerate public sector AI use cases using a powerful ML Ops platform&lt;/h2&gt;
&lt;p&gt;September 21, 2022&lt;/p&gt;
&lt;p&gt;Join us for a free, 60-minute session where you can connect with experts who offer valuable insights into today’s most popular technologies. This month, learn how the French governmental agency, Pole Emploi, is collaborating with HPE to build and implement an effective and scalable AI/ML Ops platform that helps companies find and hire workers. Hear from Pole Emploi Software Engineer, François Réthoré, and HPE Software Engineer, Dietrich Zinsou, as they explore different use cases where this AI/ML Ops platform has been put into action.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[AI Inferencing at the Edge: Use Cases from Earth to Space]]></title><description><![CDATA[AI Inferencing at the Edge: Use Cases from Earth to Space February 12, 2025 High-stakes operating environments can pose additional challenges…]]></description><link>https://developer.hpe.com/ai-inferencing-at-the-edge-use-cases-from-earth-to-space/</link><guid isPermaLink="false">https://developer.hpe.com/ai-inferencing-at-the-edge-use-cases-from-earth-to-space/</guid><content:encoded>&lt;h2&gt;AI Inferencing at the Edge: Use Cases from Earth to Space&lt;/h2&gt;
&lt;p&gt;February 12, 2025&lt;/p&gt;
&lt;p&gt;High-stakes operating environments can pose additional challenges when it comes to implementing AI technologies. Whether your use case is focused on speeding up time-to-target while maintaining data security or handling rugged and remote computing environments outside your data center, there are important considerations you need to take into account. Join us in our next session as we explore public sector and space-deployed AI use cases, spanning the use of RAG for knowledge retrieval to the use of computer vision for image identification.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Accelerating scientific research through high performance computing democratization]]></title><description><![CDATA[Accelerating scientific research through high performance computing democratization February 15, 2023 Large, complex computer simulations of…]]></description><link>https://developer.hpe.com/accelerating-scientific-research-through-high-performance-computing-democratization/</link><guid isPermaLink="false">https://developer.hpe.com/accelerating-scientific-research-through-high-performance-computing-democratization/</guid><content:encoded>&lt;h2&gt;Accelerating scientific research through high performance computing democratization&lt;/h2&gt;
&lt;p&gt;February 15, 2023&lt;/p&gt;
&lt;p&gt;Large, complex computer simulations of the physical world are becoming more commonplace thanks to the broader availability of high performance computing resources. Contemporaneously, the advent of open-source software, data science libraries, and machine-learning frameworks means that some aspect of software engineering is now required of domain scientists. In this session, you’ll learn how these advances necessitate collaborations amongst software engineers, data scientists, and domain scientists and explore a specific use case delivering the first realistic global simulations of the world’s oceans.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[APIs for HPE GreenLake for block storage and next-gen platforms]]></title><description><![CDATA[APIs for HPE GreenLake for block storage and next-gen platforms March 27, 2024 Join us for an introduction to HPE GreenLake for block…]]></description><link>https://developer.hpe.com/apis-for-hpe-greenlake-for-block-storage-and-next-gen-platforms/</link><guid isPermaLink="false">https://developer.hpe.com/apis-for-hpe-greenlake-for-block-storage-and-next-gen-platforms/</guid><content:encoded>&lt;h2&gt;APIs for HPE GreenLake for block storage and next-gen platforms&lt;/h2&gt;
&lt;p&gt;March 27, 2024&lt;/p&gt;
&lt;p&gt;Join us for an introduction to HPE GreenLake for block storage automation using Data Services Cloud Console APIs. This session will guide you through the first steps of automating your processes using Data Services Cloud Console. The APIs covered in this talk enable automation of HPE GreenLake for block storage management using scripts, and we’ll also show how they can be integrated into custom-built applications.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Automate what happens next using HPE OpsRamp Process Automation]]></title><description><![CDATA[Automate what happens next using HPE OpsRamp Process Automation May 20, 2026 Join our next HPE Developer Community Meetup to learn more…]]></description><link>https://developer.hpe.com/automate-what-happens-next-using-hpe-opsramp-process-automation/</link><guid isPermaLink="false">https://developer.hpe.com/automate-what-happens-next-using-hpe-opsramp-process-automation/</guid><content:encoded>&lt;h2&gt;Automate what happens next using HPE OpsRamp Process Automation&lt;/h2&gt;
&lt;p&gt;May 20, 2026&lt;/p&gt;
&lt;p&gt;Join our next HPE Developer Community Meetup to learn more about HPE OpsRamp Process Automation.
Process Automation enables you to set up and automatically run tasks related to platform events in the system. Use it to trigger workflows on signals, timers, or events to save time and reduce manual operations. It can also be used as a scheduler platform for customers who need to configure tasks that run on various schedules.
In the session, you’ll walk through an HPE OpsRamp Process Automation implementation and use case. We look forward to seeing you there!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Automated continuous deployment of container-based applications onto HPE GreenLake for Private Cloud Enterprise]]></title><description><![CDATA[Automated continuous deployment of container-based applications onto HPE GreenLake for Private Cloud Enterprise November 29, 2023 You need…]]></description><link>https://developer.hpe.com/automated-continuous-deployment-of-container-based-applications-onto-hpe-greenlake-for-private-cloud-enterprise/</link><guid isPermaLink="false">https://developer.hpe.com/automated-continuous-deployment-of-container-based-applications-onto-hpe-greenlake-for-private-cloud-enterprise/</guid><content:encoded>&lt;h2&gt;Automated continuous deployment of container-based applications onto HPE GreenLake for Private Cloud Enterprise&lt;/h2&gt;
&lt;p&gt;November 29, 2023&lt;/p&gt;
&lt;p&gt;You need to develop and deploy applications faster. Key to this is having an automated mechanism that takes care of application deployment, allowing you to focus your time on actual application development. With HPE GreenLake for Private Cloud Enterprise, you get built-in workflows that enable you to build and deliver applications with agility, reducing the need to worry about continuous deployment and allowing you to shift your focus to application development and business outcomes. With these DevOps CD pipelines, you can deliver applications onto your private cloud faster and in a continuous and automated fashion.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Boost Spark AI workloads with Pepperdata]]></title><description><![CDATA[Boost Spark AI workloads with Pepperdata October 26, 2022 HPE combines the power and versatility of Apache Spark with the robust, enterprise…]]></description><link>https://developer.hpe.com/boost-spark-ai-workloads-with-pepperdata/</link><guid isPermaLink="false">https://developer.hpe.com/boost-spark-ai-workloads-with-pepperdata/</guid><content:encoded>&lt;h2&gt;Boost Spark AI workloads with Pepperdata&lt;/h2&gt;
&lt;p&gt;October 26, 2022&lt;/p&gt;
&lt;p&gt;HPE combines the power and versatility of Apache Spark with the robust, enterprise-grade HPE Ezmeral Runtime Enterprise to support running analytics at scale against large data sources. But what do you do once adoption scales to the point where dozens or hundreds of data scientists and data analysts are running massive numbers of Spark applications?
In this talk, we’ll learn how HPE is partnering with Pepperdata to bring detailed observability via Pepperdata’s Platform and Application Spotlight, and more importantly, deliver automated, near real-time, autonomous optimization of cluster container resources via Pepperdata’s Capacity Optimizer, without the need for Spark developers to change a single line of code.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Calling all citizen developers: Can low-code platforms accelerate your impact? ]]></title><description><![CDATA[Calling all citizen developers: Can low-code platforms accelerate your impact? November 16, 2022 Across industries, low-code citizen…]]></description><link>https://developer.hpe.com/calling-all-citizen-developers-can-low-code-platforms-accelerate-your-impact/</link><guid isPermaLink="false">https://developer.hpe.com/calling-all-citizen-developers-can-low-code-platforms-accelerate-your-impact/</guid><content:encoded>&lt;h2&gt;Calling all citizen developers: Can low-code platforms accelerate your impact?&lt;/h2&gt;
&lt;p&gt;November 16, 2022&lt;/p&gt;
&lt;p&gt;Across industries, low-code citizen development platforms are unleashing creative talent to solve problems. In this session, HPE colleagues from the Office of the CTO, Education Services, and HR will provide an introduction into this powerful technology and how they have been able to build award-winning and enterprise-grade apps and automations—often with little-to-no budget. Even full-time developers can benefit from learning about low-code citizen development platforms to create “full stack” solutions outside of their areas of expertise.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[ChapelCon '24]]></title><description><![CDATA[ChapelCon '24 - The Chapel Event of the Year June 5-7, 2024 ChapelCon ’24 welcomes anyone with computing challenges that demand performance…]]></description><link>https://developer.hpe.com/chapelcon-24/</link><guid isPermaLink="false">https://developer.hpe.com/chapelcon-24/</guid><content:encoded>&lt;h2&gt;ChapelCon &apos;24 - The Chapel Event of the Year&lt;/h2&gt;
&lt;p&gt;June 5-7, 2024&lt;/p&gt;
&lt;p&gt;ChapelCon ’24 welcomes anyone with computing challenges that demand performance, particularly through parallelism and scalability. Our open-source community includes those interested in parallel programming, programming languages, and high-performance computing. Building on 10 years of experience from CHIUW (Chapel Implementers and Users Workshop), ChapelCon &apos;24 brings together Chapel users, enthusiasts, researchers, and developers to exchange ideas, present their work, and forge new collaborations.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[ChatHPE Hub: Enabling Secure and Scalable AI Transformation at HPE]]></title><description><![CDATA[ChatHPE Hub: Enabling Secure and Scalable AI Transformation at HPE March 19, 2025 Join us to learn more about an internal project HPE has…]]></description><link>https://developer.hpe.com/chathpe-hub-enabling-secure-and-scalable-ai-transformation-at-hpe/</link><guid isPermaLink="false">https://developer.hpe.com/chathpe-hub-enabling-secure-and-scalable-ai-transformation-at-hpe/</guid><content:encoded>&lt;h2&gt;ChatHPE Hub: Enabling Secure and Scalable AI Transformation at HPE&lt;/h2&gt;
&lt;p&gt;March 19, 2025&lt;/p&gt;
&lt;p&gt;Join us to learn more about an internal project HPE has developed using HPE Private Cloud AI that helps its teams use Generative AI (GenAI) to achieve its business goals. In this session, we will introduce you to the ChatHPE GenAI Hub – a revolutionary platform from HPE IT that provides secure, private access to large language models (LLMs) for HPE organizations. The ChatHPE GenAI Hub empowers employees to quickly develop and deploy impactful use cases, accelerating innovation and delivering transformative insights across the organization.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Collaborating at scale with Postman]]></title><description><![CDATA[Collaborating at scale with Postman March 29, 2023 Having evolved from everyone’s favorite desktop API endpoint testing utility to a true…]]></description><link>https://developer.hpe.com/collaborating-at-scale-with-postman/</link><guid isPermaLink="false">https://developer.hpe.com/collaborating-at-scale-with-postman/</guid><content:encoded>&lt;h2&gt;Collaborating at scale with Postman&lt;/h2&gt;
&lt;p&gt;March 29, 2023&lt;/p&gt;
&lt;p&gt;Having evolved from everyone’s favorite desktop API endpoint testing utility to a true enterprise API development platform, Postman’s recently released version 10 now delivers a broad range of new enterprise features. In this session, you’ll learn how Postman can help include the broadest possible spectrum of stakeholders involved in API development throughout the entire API lifecycle, including development, testing, deployment and management.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Cool AI Use Case, Now What?]]></title><description><![CDATA[Cool AI Use Case, Now What? October 2, 2024 You’ve come up with a great use case where AI technologies can help your business. Now your…]]></description><link>https://developer.hpe.com/cool-ai-use-case-now-what/</link><guid isPermaLink="false">https://developer.hpe.com/cool-ai-use-case-now-what/</guid><content:encoded>&lt;h2&gt;Cool AI Use Case, Now What?&lt;/h2&gt;
&lt;p&gt;October 2, 2024&lt;/p&gt;
&lt;p&gt;You’ve come up with a great use case where AI technologies can help your business. Now your leadership is saying “show me something.” Join this session to understand best practices on getting started with AI and learn some shortcuts from the experts on how to get AI projects from data experiments to AI deployments. Begin your journey: Grab your toolbox, work with your data experts, build some results, and then amplify success on what you have accomplished.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[DayN+ - A new way to look at observability]]></title><description><![CDATA[DayN+ - A new way to look at observability April 16, 2025 Traditional monitoring systems focus mainly on metrics, predefined thresholds, and…]]></description><link>https://developer.hpe.com/dayn-a-new-way-to-look-at-observability/</link><guid isPermaLink="false">https://developer.hpe.com/dayn-a-new-way-to-look-at-observability/</guid><content:encoded>&lt;h2&gt;DayN+ - A new way to look at observability&lt;/h2&gt;
&lt;p&gt;April 16, 2025&lt;/p&gt;
&lt;p&gt;Traditional monitoring systems focus mainly on metrics, predefined thresholds, and manual log analysis to monitor system and application health. With the integration of cloud technologies, IT environments have become significantly more distributed and complex. This complexity reduces visibility into resources, forcing IT operations into an inefficient, reactive problem-solving mode. This situation has led to the emergence of observability.
Join this session to learn how an effective observability practice integrates people, processes, and tools to foster collaboration, break down silos, and enable proactive detection, diagnosis, and resolution of issues.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Developing and deploying AI in the enterprise]]></title><description><![CDATA[Developing and deploying AI in the enterprise April 23, 2025 Integrating the right software into your enterprise AI infrastructure is…]]></description><link>https://developer.hpe.com/developing-and-deploying-ai-in-the-enterprise/</link><guid isPermaLink="false">https://developer.hpe.com/developing-and-deploying-ai-in-the-enterprise/</guid><content:encoded>&lt;h2&gt;Developing and deploying AI in the enterprise&lt;/h2&gt;
&lt;p&gt;April 23, 2025&lt;/p&gt;
&lt;p&gt;Integrating the right software into your enterprise AI infrastructure is crucial to deploying AI applications successfully today. In this session, we&apos;ll discuss the ever-changing developer ecosystem, how machine learning workflows have evolved, and how HPE Private Cloud AI can give your teams the ability to move more quickly. Lastly, we will show how to create agentic workflows using tools like Langflow and explore recent advancements in the ecosystem with ready-to-deploy blueprints.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Determined AI Model Training Hackathon]]></title><description><![CDATA[Determined AI Model Training Hackathon February 20, 2023 - April 17, 2023 The Determined Community Team is running a hackathon! Build and…]]></description><link>https://developer.hpe.com/determined-ai-model-training-hackathon/</link><guid isPermaLink="false">https://developer.hpe.com/determined-ai-model-training-hackathon/</guid><content:encoded>&lt;h2&gt;Determined AI Model Training Hackathon&lt;/h2&gt;
&lt;p&gt;February 20, 2023 - April 17, 2023&lt;/p&gt;
&lt;p&gt;The Determined Community Team is running a hackathon! Build and train an ML model using Determined to win cash prizes.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Going from containers, to pods, to Kubernetes – help for your developer environments! ]]></title><description><![CDATA[Going from containers, to pods, to Kubernetes – help for your developer environments!  October 25, 2023 Today, Kubernetes is the undisputed…]]></description><link>https://developer.hpe.com/devops-alert-tool-sprawl-complexity-burnout-help-1/</link><guid isPermaLink="false">https://developer.hpe.com/devops-alert-tool-sprawl-complexity-burnout-help-1/</guid><content:encoded>&lt;h2&gt;&lt;strong&gt;Going from containers, to pods, to Kubernetes – help for your developer environments!&lt;/strong&gt; &lt;/h2&gt;
&lt;p&gt;October 25, 2023&lt;/p&gt;
&lt;p&gt;Today, Kubernetes is the undisputed go-to platform for scaling containers. But for developers, Kubernetes can be daunting, particularly when working with the discrepancies between local and production environments. In this talk, learn how Podman and Podman Desktop bridge this gap, serving as a beginner-friendly launch pad to Kubernetes. Get a demo and learn how you can use them to streamline your container development processes!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[DevOps Alert: Tool Sprawl. Complexity. Burnout. Help!]]></title><description><![CDATA[DevOps Alert: Tool Sprawl. Complexity. Burnout. Help! September 27, 2023 Today’s DevOps teams are finding it even more difficult to develop…]]></description><link>https://developer.hpe.com/devops-alert-tool-sprawl-complexity-burnout-help/</link><guid isPermaLink="false">https://developer.hpe.com/devops-alert-tool-sprawl-complexity-burnout-help/</guid><content:encoded>&lt;h2&gt;DevOps Alert: Tool Sprawl. Complexity. Burnout. Help!&lt;/h2&gt;
&lt;p&gt;September 27, 2023&lt;/p&gt;
&lt;p&gt;Today’s DevOps teams are finding it even more difficult to develop and deploy features to meet customers’ expectations, even with the ubiquity of free and “easy-to-implement” tools at their disposal. In this session, learn how to address DevOps challenges like tool sprawl and managing Kubernetes application operations across hybrid and multi-cloud landscapes through a centralized GitOps-centric approach.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Digital twins, the Metaverse, and augmented reality: Developer insights and IT foundations for immersive technologies powered by AI]]></title><description><![CDATA[Digital twins, the Metaverse, and augmented reality: Developer insights and IT foundations for immersive technologies powered by AI June 1…]]></description><link>https://developer.hpe.com/digital-twins-the-metaverse-and-augmented-reality-developer-insights-and-it-foundations-for-immersive-technologies-powered-by-ai/</link><guid isPermaLink="false">https://developer.hpe.com/digital-twins-the-metaverse-and-augmented-reality-developer-insights-and-it-foundations-for-immersive-technologies-powered-by-ai/</guid><content:encoded>&lt;h2&gt;Digital twins, the Metaverse, and augmented reality: Developer insights and IT foundations for immersive technologies powered by AI&lt;/h2&gt;
&lt;p&gt;June 14, 2023&lt;/p&gt;
&lt;p&gt;In this Munch &amp;#x26; Learn session, learn about digital twins, the metaverse, and augmented reality, including their application in industries, the role of AI, and developer factors. You’ll get to view real-world examples showing how they can transform industries and create amazing immersive experiences.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Divide and conquer with MicroFrontends]]></title><description><![CDATA[Divide and conquer with MicroFrontends April 26, 2023 Micro Frontend, a design paradigm that extends the concepts of micro services to the…]]></description><link>https://developer.hpe.com/divide-and-conquer-with-microfrontends/</link><guid isPermaLink="false">https://developer.hpe.com/divide-and-conquer-with-microfrontends/</guid><content:encoded>&lt;h2&gt;Divide and conquer with MicroFrontends&lt;/h2&gt;
&lt;p&gt;April 26, 2023&lt;/p&gt;
&lt;p&gt;Micro Frontend, a design paradigm that extends the concepts of micro services to the frontend world, helps scale team development processes to deliver complex frontends. It allows teams to develop multiple micro frontends in parallel, helping build modern complex applications and empowering users to transform legacy apps in a gradual manner by rebuilding the app in parts. Join us to hear Vishal Sharma discuss using micro frontends and how to best split teams to achieve the desired results.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Embracing large language model tools like ChatGPT for iterative problem solving]]></title><description><![CDATA[Embracing large language model tools like ChatGPT for iterative problem solving May 24, 2023 Learn how coders and project architects alike…]]></description><link>https://developer.hpe.com/embracing-large-language-model-tools-like-chatgpt-for-iterative-problem-solving/</link><guid isPermaLink="false">https://developer.hpe.com/embracing-large-language-model-tools-like-chatgpt-for-iterative-problem-solving/</guid><content:encoded>&lt;h2&gt;Embracing large language model tools like ChatGPT for iterative problem solving&lt;/h2&gt;
&lt;p&gt;May 24, 2023&lt;/p&gt;
&lt;p&gt;Learn how coders and project architects alike are using Large Language Model tools like ChatGPT to transform software development. Tailored for those with some developer experience, this Munch &amp;#x26; Learn session explores the highly iterative nature of these tools and their potential to serve as your 24/7 developer assistant.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Enabling business automation using HPE GreenLake platform foundational APIs]]></title><description><![CDATA[Enabling business automation using HPE GreenLake platform foundational APIs March 27, 2024 The HPE GreenLake edge-to-cloud platform, now…]]></description><link>https://developer.hpe.com/enabling-business-automation-using-hpe-greenlake-platform-foundational-apis/</link><guid isPermaLink="false">https://developer.hpe.com/enabling-business-automation-using-hpe-greenlake-platform-foundational-apis/</guid><content:encoded>&lt;h2&gt;Enabling business automation using HPE GreenLake platform foundational APIs&lt;/h2&gt;
&lt;p&gt;March 27, 2024&lt;/p&gt;
&lt;p&gt;The HPE GreenLake edge-to-cloud platform now offers application programming interfaces (APIs) for many common services, such as workspace management, identity and access management, device and subscription, locations, audit logs, and wellness. In this session, explore using the HPE GreenLake Developer Portal to access them and see how to invoke these APIs using an OpenAPI tool, such as Postman. We’ll demonstrate this using some of the most popular scripting languages: Bash, PowerShell, and Python.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE Technology and Solution Summit 2022]]></title><description><![CDATA[HPE Technology and Solutions Summit 2022 March 28-31, 2022 HPE TSS 2022 helps build a collaborative, open discussion where presales…]]></description><link>https://developer.hpe.com/engage-with-the-hpe-dev-team-at-hpe-technology-and-solution-summit-2022/</link><guid isPermaLink="false">https://developer.hpe.com/engage-with-the-hpe-dev-team-at-hpe-technology-and-solution-summit-2022/</guid><content:encoded>&lt;h2&gt;HPE Technology and Solutions Summit 2022&lt;/h2&gt;
&lt;p&gt;March 28-31, 2022&lt;/p&gt;
&lt;p&gt;HPE TSS 2022 helps build a collaborative, open discussion where presales consultants and solution architects from the HPE and HPE partner presales community can each learn from each other to address today’s challenges.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Enhancing NLP with Retrieval-Augmented Generation: A Practical Demonstration]]></title><description><![CDATA[Enhancing NLP with Retrieval-Augmented Generation: A Practical Demonstration September 18, 2024 In the evolving landscape of Natural…]]></description><link>https://developer.hpe.com/enhancing-nlp-with-retrieval-augmented-generation-a-practical-demonstration/</link><guid isPermaLink="false">https://developer.hpe.com/enhancing-nlp-with-retrieval-augmented-generation-a-practical-demonstration/</guid><content:encoded>&lt;h2&gt;Enhancing NLP with Retrieval-Augmented Generation: A Practical Demonstration&lt;/h2&gt;
&lt;p&gt;September 18, 2024&lt;/p&gt;
&lt;p&gt;In the evolving landscape of Natural Language Processing (NLP), Retrieval-Augmented Generation (RAG) stands out as a powerful technique that enhances NLP applications by incorporating relevant external information. In this session, we will delve into the fundamentals and applications of RAG, providing a comprehensive overview of how it integrates retrieval mechanisms with generative capabilities to produce more accurate and contextually aware responses. We will then transition into a live demonstration of the practical usage of RAG and the process of augmenting a generative model with external knowledge sources, showing how RAG improves the relevance and quality of generated outputs in real-time applications.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Exploring HPE GreenLake Platform APIs through use cases]]></title><description><![CDATA[Exploring HPE GreenLake Platform APIs through use cases December 13, 2023 The HPE GreenLake edge-to-cloud platform empowers businesses to…]]></description><link>https://developer.hpe.com/exploring-hpe-greenlake-platform-apis-through-use-cases/</link><guid isPermaLink="false">https://developer.hpe.com/exploring-hpe-greenlake-platform-apis-through-use-cases/</guid><content:encoded>&lt;h2&gt;Exploring HPE GreenLake Platform APIs through use cases&lt;/h2&gt;
&lt;p&gt;December 13, 2023&lt;/p&gt;
&lt;p&gt;The HPE GreenLake edge-to-cloud platform empowers businesses to efficiently manage a diverse set of IT resources no matter their location. Working with the platform is made simple through its collection of RESTful APIs, which enable developers to interact with and manage different aspects of the HPE GreenLake platform. Join this session to learn how these APIs will equip your business with the tools necessary for automating the seamless integration of HPE GreenLake services into your custom applications, streamlining processes, and enhancing control and visibility over your infrastructure.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Exploring the HPE Sustainability Insight Center: Key features, innovations, and API capabilities]]></title><description><![CDATA[Exploring the HPE Sustainability Insight Center: Key features, innovations, and API capabilities September 25, 2024 Discover how the HPE…]]></description><link>https://developer.hpe.com/exploring-the-hpe-sustainability-insight-center-key-features-innovations-and-sic-api-capabilities/</link><guid isPermaLink="false">https://developer.hpe.com/exploring-the-hpe-sustainability-insight-center-key-features-innovations-and-sic-api-capabilities/</guid><content:encoded>&lt;h2&gt;Exploring the HPE Sustainability Insight Center: Key features, innovations, and API capabilities&lt;/h2&gt;
&lt;p&gt;September 25, 2024&lt;/p&gt;
&lt;p&gt;Discover how the HPE Sustainability Insight Center can optimize energy efficiency, reduce costs, and drive impactful sustainability initiatives for your customers&apos; organizations. This session will highlight key product features, including a unified dashboard for energy and carbon emissions reporting, real-time insights for data-driven decisions, and seamless integration with HPE GreenLake. You will have the opportunity to learn about the SIC API, exploring its use to enhance sustainable IT solutions. Join us and take the first step toward a greener, more efficient future. Register now!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Exploring Transformative AI Use Cases Across Industries]]></title><description><![CDATA[Exploring Transformative AI Use Cases Across Industries August 21, 2024 Artificial Intelligence (AI) is rapidly reshaping industries…]]></description><link>https://developer.hpe.com/exploring-transformative-ai-use-cases-across-industries/</link><guid isPermaLink="false">https://developer.hpe.com/exploring-transformative-ai-use-cases-across-industries/</guid><content:encoded>&lt;h2&gt;Exploring Transformative AI Use Cases Across Industries&lt;/h2&gt;
&lt;p&gt;August 21, 2024&lt;/p&gt;
&lt;p&gt;Artificial Intelligence (AI) is rapidly reshaping industries, driving innovation, and creating new opportunities for growth and efficiency. This panel discussion brings together thought leaders, industry experts, and technologists to explore transformative AI use cases across diverse sectors, including healthcare, finance, manufacturing, and retail. In this talk, panelists will share insights on the critical factors for AI adoption, including data management, ethical considerations, integration with existing systems, and emerging trends, such as the use of AI for predictive analytics, personalization, automation, and decision-making. Book your seat for this insightful session with our AI subject matter experts.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Extraordinary claims require extraordinary engineering (and TAMO isn’t an option)]]></title><description><![CDATA[Extraordinary claims require extraordinary engineering (and TAMO isn’t an option) March 15, 2023 So many challenges we face as a species…]]></description><link>https://developer.hpe.com/extraordinary-claims-require-extraordinary-engineering-and-tamo-isn’t-an-option/</link><guid isPermaLink="false">https://developer.hpe.com/extraordinary-claims-require-extraordinary-engineering-and-tamo-isn’t-an-option/</guid><content:encoded>&lt;h2&gt;Extraordinary claims require extraordinary engineering (and TAMO isn’t an option)&lt;/h2&gt;
&lt;p&gt;March 15, 2023&lt;/p&gt;
&lt;p&gt;So many challenges we face as a species demand an unprecedented ability to simulate and engineer at the scale where quantum mechanics dominate. In this session, Kirk Bresniker of Hewlett Packard Labs will give us his perspective on moving from quantum computing research to productization. His thoughts on approaching quantum computing development as a global community from the application down to the Qubits, as opposed to the other direction, offer insights on how to further its advancement.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Finding vulnerabilities in production with open source ThreatMapper]]></title><description><![CDATA[Finding vulnerabilities in production with open source ThreatMapper July 27, 2022 In this talk, we’ll explore how open source Deepfence…]]></description><link>https://developer.hpe.com/finding-vulnerabilities-in-production-with-open-source-threatmapper/</link><guid isPermaLink="false">https://developer.hpe.com/finding-vulnerabilities-in-production-with-open-source-threatmapper/</guid><content:encoded>&lt;h2&gt;Finding vulnerabilities in production with open source ThreatMapper&lt;/h2&gt;
&lt;p&gt;July 27, 2022&lt;/p&gt;
&lt;p&gt;In this talk, we’ll explore how open source Deepfence ThreatMapper discovers and visualizes the external and internal attack surface for your applications and infrastructure. See how it extends the goodness of Shift Left into your production platforms in our Kubernetes cluster demo as it scans for vulnerabilities and exposed secrets to identify the greatest risks.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[From log files to AI insights: The 60-year evolution of observability and AIOps]]></title><description><![CDATA[From log files to AI insights: The 60-year evolution of observability and AIOps January 15, 2025 In an era where systems are increasingly…]]></description><link>https://developer.hpe.com/from-log-files-to-ai-insights-the-60-year-evolution-of-observability-and-aiops/</link><guid isPermaLink="false">https://developer.hpe.com/from-log-files-to-ai-insights-the-60-year-evolution-of-observability-and-aiops/</guid><content:encoded>&lt;h2&gt;From log files to AI insights: The 60-year evolution of observability and AIOps&lt;/h2&gt;
&lt;p&gt;January 15, 2025&lt;/p&gt;
&lt;p&gt;In an era where systems are increasingly complex and interconnected, the ability to monitor, understand, and optimize your IT landscape has evolved dramatically. This session takes you on a 60-year journey through the evolution of observability and the rise of Artificial Intelligence for IT Operations (AIOps).&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Galadriel -  An alternative approach to SPIRE federation]]></title><description><![CDATA[Galadriel -  An alternative approach to SPIRE federation February 22, 2023 SPIFFE and SPIRE contribute to strong identification and…]]></description><link>https://developer.hpe.com/galadriel-an-alternative-approach-to-spire-federation/</link><guid isPermaLink="false">https://developer.hpe.com/galadriel-an-alternative-approach-to-spire-federation/</guid><content:encoded>&lt;h2&gt;Galadriel -  An alternative approach to SPIRE federation&lt;/h2&gt;
&lt;p&gt;February 22, 2023&lt;/p&gt;
&lt;p&gt;SPIFFE and SPIRE contribute to strong identification and attestation of workloads in cloud native environments. Galadriel, a more scalable alternative to SPIRE’s current federation method, proposes to facilitate federation of multiple trust domains via a central exchange hub. Join this Meetup to learn more about some of the limitations of the current federation method and how Galadriel’s architecture aims to address these, providing a simpler, more scalable solution.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Getting Started with Aruba Central automation]]></title><description><![CDATA[Getting Started with Aruba Central automation February 28, 2024 Learn how to configure, manage, and retrieve information using the various…]]></description><link>https://developer.hpe.com/getting-started-with-aruba-central-automation/</link><guid isPermaLink="false">https://developer.hpe.com/getting-started-with-aruba-central-automation/</guid><content:encoded>&lt;h2&gt;Getting Started with Aruba Central automation&lt;/h2&gt;
&lt;p&gt;February 28, 2024&lt;/p&gt;
&lt;p&gt;Learn how to configure, manage, and retrieve information using the various automation interfaces of Aruba Central in this session. We’ll explain how to access and utilize the REST APIs using the built-in user interface (UI) and Postman, how to manage the webhooks, and how to view the data from the streaming APIs.  These areas will be showcased in demos so that the audience can recreate the capabilities in their own environments.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Getting started with HPE GreenLake for Compute Ops Management APIs]]></title><description><![CDATA[Getting started with HPE GreenLake for Compute Ops Management APIs January 31, 2024 Join this session to learn how to take advantage of HPE…]]></description><link>https://developer.hpe.com/getting-started-with-hpe-greenlake-for-compute-ops-management-apis/</link><guid isPermaLink="false">https://developer.hpe.com/getting-started-with-hpe-greenlake-for-compute-ops-management-apis/</guid><content:encoded>&lt;h2&gt;Getting started with HPE GreenLake for Compute Ops Management APIs&lt;/h2&gt;
&lt;p&gt;January 31, 2024&lt;/p&gt;
&lt;p&gt;Join this session to learn how to take advantage of HPE GreenLake for Compute Ops Management capabilities, simplifying and unifying operations across the server lifecycle for your whole environment, no matter where your compute infrastructure lies. See how it provides a consistent, secure cloud experience that scales elastically and unifies your compute management programmatically.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[How digital twins help companies reach their sustainability goals]]></title><description><![CDATA[How digital twins help companies reach their sustainability goals June 12, 2024 As virtual replicas of physical systems, digital twins, help…]]></description><link>https://developer.hpe.com/how-digital-twins-help-companies-reach-their-sustainability-goals/</link><guid isPermaLink="false">https://developer.hpe.com/how-digital-twins-help-companies-reach-their-sustainability-goals/</guid><content:encoded>&lt;h2&gt;How digital twins help companies reach their sustainability goals&lt;/h2&gt;
&lt;p&gt;June 12, 2024&lt;/p&gt;
&lt;p&gt;As virtual replicas of physical systems, digital twins help companies reach their sustainability goals by optimizing resource usage, reducing waste, and improving efficiency across the datacenter - from infrastructure to workloads. They enable predictive maintenance, energy conservation, and sustainable design, paving the way to optimize usage. In this session, we’ll discuss some of the ongoing projects inside HPE Labs and the Technology Incubations teams that investigate optimization and efficiency prediction for datacenters and associated workloads.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[How to fix your biggest security hole]]></title><description><![CDATA[How to fix your biggest security hole November 20, 2024 Are your users doing everything right to keep your systems secure?
What we need is…]]></description><link>https://developer.hpe.com/how-to-fix-your-biggest-security-hole/</link><guid isPermaLink="false">https://developer.hpe.com/how-to-fix-your-biggest-security-hole/</guid><content:encoded>&lt;h2&gt;How to fix your biggest security hole&lt;/h2&gt;
&lt;p&gt;November 20, 2024&lt;/p&gt;
&lt;p&gt;Are your users doing everything right to keep your systems secure?
What we need is an access control system that is understandable, works on premises, in multiple clouds, object stores, file systems, relational databases and specialized hardware. This access control system must be able to be deployed incrementally, integrate with existing systems, simplify audits, and support formal verification of compliance. This session will describe how you can have all of that in a simple and understandable access control system.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE AI Developer Day Bogota, Colombia]]></title><description><![CDATA[HPE AI Developer Day Crea tu propio AI Agent con HPE & Nvidia Bogota, Colombia February 24, 2026 Únase a nosotros en un taller práctico y…]]></description><link>https://developer.hpe.com/hpe-ai-developer-day-bogota-colombia/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-ai-developer-day-bogota-colombia/</guid><content:encoded>&lt;h1&gt;&lt;strong&gt;HPE AI Developer Day&lt;/strong&gt;&lt;/h1&gt;
&lt;h2&gt;&lt;strong&gt;Build your own AI Agent with HPE &amp;#x26; Nvidia&lt;/strong&gt;&lt;/h2&gt;
&lt;h3&gt;Bogota, Colombia&lt;/h3&gt;
&lt;p&gt;February 24, 2026&lt;/p&gt;
&lt;p&gt;Join us for an immersive, hands-on workshop where you’ll learn how to design and deploy real-world generative AI use cases on HPE Private Cloud AI, powered by NVIDIA AI Enterprise Software.
This workshop is designed for developers, data scientists, and business innovators who want to see how enterprise-grade AI solutions are built, scaled, and applied to solve real business challenges.&lt;/p&gt;
&lt;h6&gt;&lt;a href=&quot;https://events.bizzabo.com/797906/page/5472192/columbia-information&quot;&gt;Regístrate&lt;/a&gt;&lt;/h6&gt;</content:encoded></item><item><title><![CDATA[HPE AI Developer Day Monterrey, Mexico]]></title><description><![CDATA[HPE AI Developer Day Crea tu propio AI Agent con HPE & Nvidia Monterrey, Mexico January 29, 2026 Sumérgete en una exploración práctica de…]]></description><link>https://developer.hpe.com/hpe-ai-developer-day-monterrey-mexico/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-ai-developer-day-monterrey-mexico/</guid><content:encoded>&lt;h1&gt;&lt;strong&gt;HPE AI Developer Day&lt;/strong&gt;&lt;/h1&gt;
&lt;h2&gt;&lt;strong&gt;Build your own AI Agent with HPE &amp;#x26; Nvidia&lt;/strong&gt;&lt;/h2&gt;
&lt;h3&gt;Monterrey, Mexico&lt;/h3&gt;
&lt;p&gt;January 29, 2026&lt;/p&gt;
&lt;p&gt;Dive into a hands-on exploration of HPE AI Essentials and NVIDIA AI Enterprise Software, where you’ll gain practical experience with cutting-edge tools and frameworks. Start by learning about the technology stack and touring the development environment to understand its capabilities. In this session, we’ll use a flight-delay management use case as an example. With this knowledge, however, you’ll be able to develop your own AI agents to solve your own business case.&lt;/p&gt;
&lt;h6&gt;&lt;a href=&quot;https://events.bizzabo.com/797906/page/5472191/monterrey-information&quot;&gt;Regístrate&lt;/a&gt;&lt;/h6&gt;</content:encoded></item><item><title><![CDATA[HPE AI Developer Day Paris, France]]></title><description><![CDATA[HPE AI Developer Day Créez votre propre agent AI avec HPE et Nvidia Paris, France Le 10 Mars 2026 Construisez et déployez une véritable…]]></description><link>https://developer.hpe.com/hpe-ai-developer-day-paris-france/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-ai-developer-day-paris-france/</guid><content:encoded>&lt;h1&gt;&lt;strong&gt;HPE AI Developer Day&lt;/strong&gt;&lt;/h1&gt;
&lt;h2&gt;&lt;strong&gt;Build your own AI Agent with HPE and Nvidia&lt;/strong&gt;&lt;/h2&gt;
&lt;h3&gt;Paris, France&lt;/h3&gt;
&lt;p&gt;March 10, 2026&lt;/p&gt;
&lt;p&gt;Build and deploy a real AI solution in a small-group workshop led by HPE and NVIDIA advisors, and leave with the tools, skills, and confidence to do it again on your own.
HPE Developer Day is a one-day event designed to help you move from your AI ambitions to concrete results.&lt;/p&gt;
&lt;h6&gt;&lt;a href=&quot;https://events.bizzabo.com/797906/page/5526657/paris-information&quot;&gt;Inscription&lt;/a&gt;&lt;/h6&gt;</content:encoded></item><item><title><![CDATA[HPE AI Developer Day Stockholm, Sweden]]></title><description><![CDATA[HPE AI Developer Day Build your own AI Agent with HPE & Nvidia Stockholm, Sweden March 24, 2026 Join us for an immersive, hands-on workshop…]]></description><link>https://developer.hpe.com/hpe-ai-developer-day-stockholm-sweeden/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-ai-developer-day-stockholm-sweeden/</guid><content:encoded>&lt;h1&gt;&lt;strong&gt;HPE AI Developer Day&lt;/strong&gt;&lt;/h1&gt;
&lt;h2&gt;&lt;strong&gt;Build your own AI Agent with HPE &amp;#x26; Nvidia&lt;/strong&gt;&lt;/h2&gt;
&lt;h3&gt;Stockholm, Sweden&lt;/h3&gt;
&lt;p&gt;March 24, 2026&lt;/p&gt;
&lt;p&gt;Join us for an immersive, hands-on workshop where you’ll learn how to design and deploy real-world generative AI use cases on HPE Private Cloud AI, powered by NVIDIA AI Enterprise Software.&lt;/p&gt;
&lt;h6&gt;&lt;a href=&quot;https://edgetocloud.se/event/hpe-private-cloud-ai-developer-day-framtidens-ai-losningar-for-offentlig-sektor&quot;&gt;Register&lt;/a&gt;&lt;/h6&gt;</content:encoded></item><item><title><![CDATA[HPE AI Developer Day Göteborg, Sweden]]></title><description><![CDATA[HPE AI Developer Day Build your own AI Agent with HPE & Nvidia Göteborg, Sweden March 25, 2026 Join us for an immersive, hands-on workshop…]]></description><link>https://developer.hpe.com/hpe-ai-developer-day-goteborg-sweeden/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-ai-developer-day-goteborg-sweeden/</guid><content:encoded>&lt;h1&gt;&lt;strong&gt;HPE AI Developer Day&lt;/strong&gt;&lt;/h1&gt;
&lt;h2&gt;&lt;strong&gt;Build your own AI Agent with HPE &amp;#x26; Nvidia&lt;/strong&gt;&lt;/h2&gt;
&lt;h3&gt;Göteborg, Sweden&lt;/h3&gt;
&lt;p&gt;March 25, 2026&lt;/p&gt;
&lt;p&gt;Join us for an immersive, hands-on workshop where you’ll learn how to design and deploy real-world generative AI use cases on HPE Private Cloud AI, powered by NVIDIA AI Enterprise Software.&lt;/p&gt;
&lt;h6&gt;&lt;a href=&quot;https://events.bizzabo.com/797906/page/5605984/sweden-information&quot;&gt;Register&lt;/a&gt;&lt;/h6&gt;</content:encoded></item><item><title><![CDATA[HPE AI Developer Days San Jose, California]]></title><description><![CDATA[HPE AI Developer Day Build your own AI Agent with HPE & Nvidia San Jose, California January 26, 2026 Join us for an immersive, hands-on…]]></description><link>https://developer.hpe.com/hpe-ai-developer-days-san-jose-california/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-ai-developer-days-san-jose-california/</guid><content:encoded>&lt;h1&gt;&lt;strong&gt;HPE AI Developer Day&lt;/strong&gt;&lt;/h1&gt;
&lt;h2&gt;&lt;strong&gt;Build your own AI Agent with HPE &amp;#x26; Nvidia&lt;/strong&gt;&lt;/h2&gt;
&lt;h3&gt;San Jose, California&lt;/h3&gt;
&lt;p&gt;January 26, 2026&lt;/p&gt;
&lt;p&gt;Join us for an immersive, hands-on workshop where you’ll learn how to design and deploy real-world generative AI use cases on HPE Private Cloud AI, powered by NVIDIA AI Enterprise Software.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://events.bizzabo.com/797906/page/5472193/bay-area-information&quot;&gt;Register&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE AI Foundations workshop]]></title><description><![CDATA[HPE AI Foundations workshop The innovation workshop is an invaluable opportunity for all IT technologists looking to learn how to design…]]></description><link>https://developer.hpe.com/hpe-ai-foundations-workshop/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-ai-foundations-workshop/</guid><content:encoded>&lt;h2&gt;HPE AI Foundations workshop&lt;/h2&gt;
&lt;p&gt;This innovation workshop is an invaluable opportunity for all IT technologists looking to learn how to design, implement, and operate a hybrid cloud AI platform. The HPE AI Foundations workshop shows you how to establish a comprehensive and extensible foundation to empower integration, strategic deployment, responsible data management &amp;#x26; governance, machine &amp;#x26; deep learning, and generative AI. During the workshop, you will gain the knowledge and skills necessary to understand HPE GreenLake cloud and best-of-breed AI ecosystem technologies incorporated within it.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.hpe.com/skillup/&quot;&gt;&lt;strong&gt;Register now!&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE and 451 Research joint webinar]]></title><description><![CDATA[Hybrid Cloud by Design : Optimizing Enterprise IT for Modern Workloads September 17, 2024 Given the varying capacity, cost, performance…]]></description><link>https://developer.hpe.com/hpe-and-451-research-joint-webinar/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-and-451-research-joint-webinar/</guid><content:encoded>&lt;h2&gt;Hybrid Cloud by Design : Optimizing Enterprise IT for Modern Workloads&lt;/h2&gt;
&lt;p&gt;September 17, 2024&lt;/p&gt;
&lt;p&gt;Given the varying capacity, cost, performance, security, governance, and data locality requirements of enterprise workloads, hybrid cloud estates spanning on-premises private and off-premises public environments have become the default operating model for enterprise IT. The resulting increase in IT complexity often results in infrastructure silos instead of systems that deliver a unified cloud operating experience. Join Latha Vishnubhotla, Chief Platform Officer at HPE, and Melanie Posey, Research Director, 451 Research, for insights on how IT leaders formulate their hybrid cloud strategies to deal with this challenge.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE Compute Ops Management APIs and the DevOps ecosystem: Updates and developer tools]]></title><description><![CDATA[HPE Compute Ops Management APIs and the DevOps ecosystem: Updates and developer tools June 18, 2025 Explore the latest updates to the HPE…]]></description><link>https://developer.hpe.com/hpe-compute-ops-management-apis-and-the-devops-ecosystem-updates-and-developer-tools/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-compute-ops-management-apis-and-the-devops-ecosystem-updates-and-developer-tools/</guid><content:encoded>&lt;h2&gt;HPE Compute Ops Management APIs and the DevOps ecosystem: Updates and developer tools&lt;/h2&gt;
&lt;p&gt;June 18, 2025&lt;/p&gt;
&lt;p&gt;Explore the latest updates to the HPE Compute Ops Management (COM) APIs and discover essential resources for developers. This session will highlight tools such as Postman collections, GitHub projects and their role in enabling seamless DevOps integration. Additionally, we will introduce the PowerShell library for COM, featuring an overview, a live demonstration, and a roadmap for future enhancements. Join us to maximize your productivity and leverage the HPE GreenLake ecosystem for your projects.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE Data World 2021]]></title><description><![CDATA[HPE Data World 2021 3D virtual event December 7, 2021 Let us help you solve your data challenges! In this virtual event, we will show how…]]></description><link>https://developer.hpe.com/hpe-data-world-2021/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-data-world-2021/</guid><content:encoded>&lt;h1&gt;HPE Data World 2021&lt;/h1&gt;
&lt;h3&gt;3D virtual event&lt;/h3&gt;
&lt;p&gt;December 7, 2021&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Let us help you solve your data challenges!&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In this virtual event, we will show how HPE can help you to unify, modernize, analyze and protect all of your data, from edge-to-cloud, in any and every place it&apos;s stored, so that you get a sustainable competitive advantage.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Streamlit - The fastest way to build and share data science apps]]></title><description><![CDATA[Meetups Streamlit - The fastest way to build and share data science apps February 23, 2022 Poor tooling slows down data science and machine…]]></description><link>https://developer.hpe.com/hpe-dev-meetups-1/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-dev-meetups-1/</guid><content:encoded>&lt;h2&gt;Meetups&lt;/h2&gt;
&lt;h3&gt;Streamlit - The fastest way to build and share data science apps&lt;/h3&gt;
&lt;p&gt;February 23, 2022&lt;/p&gt;
&lt;p&gt;Poor tooling slows down data science and machine learning projects. Projects often develop a unique ecosystem of bug-ridden and unmaintainable internal tools to analyze data through a patchwork of Jupyter Notebooks and Flask apps.
In this talk, you’ll be introduced to &lt;strong&gt;Streamlit&lt;/strong&gt;, the fastest way to build and share data apps as Python scripts to help alleviate this issue.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE + vFunction: Modernizing Legacy Applications and Data Sources Faster]]></title><description><![CDATA[Meetups HPE + vFunction: Modernizing Legacy Applications and Data Sources Faster March 30, 2022 Today, most application migrations focus on…]]></description><link>https://developer.hpe.com/hpe-dev-meetups-2/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-dev-meetups-2/</guid><content:encoded>&lt;h2&gt;Meetups&lt;/h2&gt;
&lt;h3&gt;HPE + vFunction: Modernizing Legacy Applications and Data Sources Faster&lt;/h3&gt;
&lt;p&gt;March 30, 2022&lt;/p&gt;
&lt;p&gt;Today, most application migrations focus on lift-and-shift strategies to get to the cloud faster, yet neglect the parallel challenge of harnessing value from massive amounts of data.&lt;/p&gt;
&lt;p&gt;Come to this technology talk with HPE and vFunction to see in action how &lt;strong&gt;vFunction&lt;/strong&gt; uses AI, data science, and automation to significantly expand the number of applications that can be refactored into microservices at a much faster speed and much lower risk as part of an integration for &lt;strong&gt;HPE Ezmeral&lt;/strong&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Styra - Decoupled policy enforcement with Open Policy Agent]]></title><description><![CDATA[Meetups Styra - Decoupled policy enforcement with Open Policy Agent April 27, 2022 In just a few years, the CNCF graduated Open Policy Agent…]]></description><link>https://developer.hpe.com/hpe-dev-meetups-3/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-dev-meetups-3/</guid><content:encoded>&lt;h2&gt;Meetups&lt;/h2&gt;
&lt;h3&gt;Styra - Decoupled policy enforcement with Open Policy Agent&lt;/h3&gt;
&lt;p&gt;April 27, 2022&lt;/p&gt;
&lt;p&gt;In just a few years, the CNCF graduated Open Policy Agent (OPA) project has established itself as the de-facto standard for policy based guard rails around Kubernetes clusters - now it’s moving into microservices! In this talk, we’ll explore the benefits of decoupling policy from application logic, and how OPA can help bring order to an increasingly distributed, heterogeneous, and complex tech stack.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Scaling Language Training to Trillion-parameter Models on a GPU Cluster]]></title><description><![CDATA[Today, natural language processing (NLP) powers the latest conversational AI and translation apps. Join us to explore the collaborative…]]></description><link>https://developer.hpe.com/hpe-dev-meetups-4/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-dev-meetups-4/</guid><content:encoded>&lt;p&gt;Today, natural language processing (NLP) powers the latest conversational AI and translation apps. Join us to explore the collaborative experimentation process that machine learning teams leverage and the challenges they face while training these large-scale NLP models. See how Determined’s open-source deep learning training platform helps model developers train models faster and easier using tools such as resource management, fault tolerance, and model optimization.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Quarkus - Supersonic Subatomic Java]]></title><description><![CDATA[Meetups Quarkus - Supersonic Subatomic Java January 26, 2022 Java based software development has been a winning proposition for the past 2…]]></description><link>https://developer.hpe.com/hpe-dev-meetups/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-dev-meetups/</guid><content:encoded>&lt;h2&gt;Meetups&lt;/h2&gt;
&lt;h3&gt;Quarkus - Supersonic Subatomic Java&lt;/h3&gt;
&lt;p&gt;January 26, 2022&lt;/p&gt;
&lt;p&gt;Java based software development has been a winning proposition for the past 20+ years. Come to this technology talk to get an introduction to &lt;strong&gt;Quarkus&lt;/strong&gt;, a Kubernetes native Java stack that tailors apps to make them shine in the cloud-native universe.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Chapel: Making parallel computing as easy as Py(thon), from laptops to supercomputers]]></title><description><![CDATA[Chapel: Making parallel computing as easy as Py(thon), from laptops to supercomputers April 20, 2022 Join us for a free, 60-minute session…]]></description><link>https://developer.hpe.com/hpe-dev-munch-learn-series-april-2022/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-dev-munch-learn-series-april-2022/</guid><content:encoded>&lt;h3&gt;Chapel: Making parallel computing as easy as Py(thon), from laptops to supercomputers&lt;/h3&gt;
&lt;p&gt;April 20, 2022&lt;/p&gt;
&lt;p&gt;Join us for a free, 60-minute session where you can connect with experts who offer valuable insights into today’s most popular technologies. This month, meet with Brad Chamberlain, HPE, who will discuss the Chapel parallel programming language, pioneered by Cray and HPE. Chapel supports parallel programs that are similarly readable/writable as Python while providing performance and portability similar to Fortran, C, C++, and other common techniques for writing parallel and/or scalable code. Brad will also describe an open-source Python library written in Chapel that uses parallelism to accelerate common NumPy and Pandas operations used in data science.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Golden Age of AI, Dark Ages of AI Infrastructure]]></title><description><![CDATA[Golden Age of AI, Dark Ages of AI Infrastructure February 16, 2022 This month, meet with Neil Conway, senior director of engineering…]]></description><link>https://developer.hpe.com/hpe-dev-munch-learn-series-february-2022/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-dev-munch-learn-series-february-2022/</guid><content:encoded>&lt;h3&gt;Golden Age of AI, Dark Ages of AI Infrastructure&lt;/h3&gt;
&lt;p&gt;February 16, 2022&lt;/p&gt;
&lt;p&gt;This month, meet with Neil Conway, senior director of engineering for Determined AI at HPE, and learn why practical use of deep learning (DL) remains difficult, what problems DL infrastructure tools solve today, where they fall short, and how they can improve. This talk draws on academic work done at Carnegie Mellon, UC Berkeley, and UCLA, as well as experiences at Determined AI, a startup recently acquired by HPE that builds open-source software to make deep learning engineers dramatically more productive.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Location, location, location! Succeed at the Edge with HPE Ezmeral and NVIDIA]]></title><description><![CDATA[Location, location, location! With data everywhere, location matters more than ever. Learn how to succeed at the Edge with HPE Ezmeral and…]]></description><link>https://developer.hpe.com/hpe-dev-munch-learn-series-january-2022/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-dev-munch-learn-series-january-2022/</guid><content:encoded>&lt;h3&gt;Location, location, location! With data everywhere, location matters more than ever. Learn how to succeed at the Edge with HPE Ezmeral and NVIDIA&lt;/h3&gt;
&lt;p&gt;January 19, 2022&lt;/p&gt;
&lt;p&gt;Join us for a free, 60-minute session where you can connect with experts who offer valuable insights into today’s most popular technologies. This month, NVIDIA and HPE partner to help you accelerate your success at the Edge while enjoying a friction-free distributed cloud experience across Edge, Core, and multi-cloud. Register now to learn how you can benefit from this data-first, Edge-in revolution as Denis Vilfort from HPE and William Benton from NVIDIA describe the use of GPU accelerated Apache Spark, HPE Ezmeral Data Fabric, and the HPE GreenLake edge-to-cloud platform.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Mithril: Introducing Robust Identities into Istio by integrating with SPIRE]]></title><description><![CDATA[Mithril: Introducing Robust Identities into Istio by integrating with SPIRE March 23, 2022 Join us for a free, 60-minute session where you…]]></description><link>https://developer.hpe.com/hpe-dev-munch-learn-series-march-2022/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-dev-munch-learn-series-march-2022/</guid><content:encoded>&lt;h3&gt;Mithril: Introducing Robust Identities into Istio by integrating with SPIRE&lt;/h3&gt;
&lt;p&gt;March 23, 2022&lt;/p&gt;
&lt;p&gt;Join us for a free, 60-minute session where you can connect with experts who offer valuable insights into today’s most popular technologies. This month, meet with the Mithril engineering team at HPE, who will discuss the fundamentals of zero trust and workload identities and introduce the concept of a Service Mesh. They will then explain how the new open source solution, Mithril, leverages the widely adopted Service Mesh solution Istio and the SPIRE secure identities framework to secure the Istio Service Mesh.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Why Open Source is more than Software: The example of The Linux Foundation's AgStack project]]></title><description><![CDATA[Why Open Source is more than Software: The example of The Linux Foundation's AgStack project May 18, 2022 Join us for a free, 60-minute…]]></description><link>https://developer.hpe.com/hpe-dev-munch-learn-series-may-2022/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-dev-munch-learn-series-may-2022/</guid><content:encoded>&lt;h2&gt;Why Open Source is more than Software: The example of The Linux Foundation&apos;s AgStack project&lt;/h2&gt;
&lt;p&gt;May 18, 2022&lt;/p&gt;
&lt;p&gt;Join us for a free, 60-minute session where you can connect with experts who offer valuable insights into today’s most popular technologies. This month, learn about open services through the intriguing use case of the Linux Foundation’s AgStack project for the world&apos;s agricultural ecosystem. Hear from AgStack Founder, Sumer Johal, and HPE AgStack Developer, Ted Dunning, as they explore how open source enables services to benefit the world.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE Discover 2022]]></title><description><![CDATA[HPE Discover 2022 Join the HPE Developer Community team in the Hack Shack Las Vegas, June 28-30 From the latest insights in secure…]]></description><link>https://developer.hpe.com/hpe-discover-2022/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-discover-2022/</guid><content:encoded>&lt;h2&gt;HPE Discover 2022&lt;/h2&gt;
&lt;h3&gt;Join the HPE Developer Community team in the Hack Shack&lt;/h3&gt;
&lt;p&gt;Las Vegas, June 28-30&lt;/p&gt;
&lt;p&gt;From the latest insights in secure connectivity, hybrid cloud, AI and unified data analytics, HPE Discover 2022 is the best place to stay ahead of the trends and technologies that will move your business forward, faster. Join HPE experts, leading companies, and industry luminaries and learn how to accelerate your data-first modernization across edge to cloud. You don’t want to miss this opportunity to once again gather with HPE Developer Community members in the Hack Shack!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE Discover 2021]]></title><description><![CDATA[HPE Discover 2021 Join the HPE Developer team in the Hack Shack June 24, 2021 HPE Developer will offer five Hack Shack Workshops on Day 3 at…]]></description><link>https://developer.hpe.com/hpe-discover/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-discover/</guid><content:encoded>&lt;h2&gt;HPE Discover 2021&lt;/h2&gt;
&lt;h3&gt;Join the HPE Developer team in the Hack Shack&lt;/h3&gt;
&lt;p&gt;June 24, 2021&lt;/p&gt;
&lt;p&gt;HPE Developer will offer five Hack Shack Workshops on Day 3 at Discover 2021. In each session, experts will cover topics from Kubernetes to data fabric and security, followed by hands-on workshops. These workshops will be available to all levels with some aimed specifically at beginners. The team will be available throughout the week to answer any questions you may have.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE GreenLake and Infrastructure-as-Code (IaC)]]></title><description><![CDATA[HPE GreenLake and Infrastructure-as-Code (IaC) January 25, 2023 Discover the work that is underway at HPE to support IaC in HPE GreenLake…]]></description><link>https://developer.hpe.com/hpe-greenlake-and-infrastructure-as-code-iac/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-greenlake-and-infrastructure-as-code-iac/</guid><content:encoded>&lt;h2&gt;HPE GreenLake and Infrastructure-as-Code (IaC)&lt;/h2&gt;
&lt;p&gt;January 25, 2023&lt;/p&gt;
&lt;p&gt;Discover the work that is underway at HPE to support IaC in HPE GreenLake, enabling developers to extend automation within the DevOps toolchain. It will allow them to easily spin up, tear down, and scale infrastructure, making software development, testing, and deployment of applications faster and easier.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE Machine Learning Development Environment and the Open Source ML Advantage]]></title><description><![CDATA[HPE Machine Learning Development Environment and the Open Source ML Advantage May 31, 2023 Built upon the widely popular open source…]]></description><link>https://developer.hpe.com/hpe-machine-learning-development-environment-and-the-open-source-ml-advantage/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-machine-learning-development-environment-and-the-open-source-ml-advantage/</guid><content:encoded>&lt;h2&gt;HPE Machine Learning Development Environment and the Open Source ML Advantage&lt;/h2&gt;
&lt;p&gt;May 31, 2023&lt;/p&gt;
&lt;p&gt;Built upon the widely popular open source Determined Training Platform, the HPE Machine Learning Development Environment (MLDE) helps developers and scientists focus on innovation by removing the complexity and cost associated with machine learning model development. In this meetup, you’ll be exposed to the Determined platform and how it integrates the latest, innovative open source AI/ML technologies, including PyTorch, TensorFlow, and DeepSpeed, and new libraries to further accelerate distributed ML model training.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE Private Cloud AI Technical Demo]]></title><description><![CDATA[HPE Private Cloud AI Technical Demo April 30, 2025 This technical meetup will delve into the HPE Private Cloud AI platform, offering role…]]></description><link>https://developer.hpe.com/hpe-private-cloud-ai-technical-demo/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-private-cloud-ai-technical-demo/</guid><content:encoded>&lt;h2&gt;HPE Private Cloud AI Technical Demo&lt;/h2&gt;
&lt;p&gt;April 30, 2025&lt;/p&gt;
&lt;p&gt;This technical meetup will delve into the HPE Private Cloud AI platform, offering role-specific walkthroughs for administrators focusing on user and GPU management, data engineers exploring data pipeline construction with Airflow and Spark, and data scientists learning about model deployment via HPE Machine Learning Inference Software (MLIS) and AI Essentials Solution Accelerators.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE Sustainability Insight Center: A Win for both Business and the Environment]]></title><description><![CDATA[HPE Sustainability Insight Center: A Win for both Business and the Environment May 21, 2025 Although IT sustainability is a top priority for…]]></description><link>https://developer.hpe.com/hpe-sustainability-insight-center-a-win-for-both-business-and-the-environment/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-sustainability-insight-center-a-win-for-both-business-and-the-environment/</guid><content:encoded>&lt;h2&gt;HPE Sustainability Insight Center: A Win for both Business and the Environment&lt;/h2&gt;
&lt;p&gt;May 21, 2025&lt;/p&gt;
&lt;p&gt;Although IT sustainability is a top priority for many businesses today, significant challenges often stand in the way of achieving their goals. IT sustainability reporting is often disaggregated and handled mainly offline through email communications. In addition, research from Deloitte shows that 88% of customers aiming to achieve IT sustainability are challenged with data quality.
Join this session to learn how the HPE Sustainability Insight Center addresses these challenges through a unified real-time dashboard featuring automated data feeds.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE Sustainability Strategy and Sustainability Research at Hewlett Packard Labs]]></title><description><![CDATA[HPE Sustainability Strategy and Sustainability Research at Hewlett Packard Labs October 19, 2022 Join us for a free, 60-minute session where…]]></description><link>https://developer.hpe.com/hpe-sustainability-strategy-and-sustainability-research-at-hewlett-packard-labs/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-sustainability-strategy-and-sustainability-research-at-hewlett-packard-labs/</guid><content:encoded>&lt;h2&gt;HPE Sustainability Strategy and Sustainability Research at Hewlett Packard Labs&lt;/h2&gt;
&lt;p&gt;October 19, 2022&lt;/p&gt;
&lt;p&gt;Join us for a free, 60-minute session where you can connect with experts who offer valuable insights into today’s most popular technologies. This month, learn about HPE’s sustainability strategy, Net Zero commitments, and Hewlett Packard Labs’ rich history of sustainability research, which spans IT systems and beyond and leads the technological advancements required to achieve a low-carbon future.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE Webinar: Is there life after Hadoop?]]></title><description><![CDATA[HPE Webinar: Is there life after Hadoop? Of course there is! August 26, 2021 Business-critical analytical applications depend on Hadoop…]]></description><link>https://developer.hpe.com/hpe-webinar-is-there-life-after-hadoop/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-webinar-is-there-life-after-hadoop/</guid><content:encoded>&lt;h2&gt;HPE Webinar:&lt;/h2&gt;
&lt;h2&gt;Is there life after Hadoop?&lt;/h2&gt;
&lt;h3&gt;Of course there is!&lt;/h3&gt;
&lt;h4&gt;August 26, 2021&lt;/h4&gt;
&lt;p&gt;Business-critical analytical applications depend on Hadoop – but technology innovations like object-based data platforms, hybrid cloud deployment, and open-source compute options (i.e., Apache Spark and Presto SQL) have emerged, leaving enterprises questioning the future of their data lake investment. Find out how to de-risk your business with a new strategy to modernize your data analytics platform – allowing you to gain more value from your data. This 45-minute live session highlights best practices from enterprises who realized that there is life after Hadoop and explains how HPE Ezmeral software and solutions can help.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[What's a data fabric and how does it work?]]></title><description><![CDATA[Munch & Learn series What's a data fabric and how does it work? January 27, 2021 Run by the HPE Developer Community, the Munch & Learn…]]></description><link>https://developer.hpe.com/hpedev-much-learn-1/</link><guid isPermaLink="false">https://developer.hpe.com/hpedev-much-learn-1/</guid><content:encoded>&lt;h2&gt;Munch &amp;#x26; Learn series&lt;/h2&gt;
&lt;p&gt;What&apos;s a data fabric and how does it work?&lt;/p&gt;
&lt;p&gt;January 27, 2021&lt;/p&gt;
&lt;p&gt;Run by the HPE Developer Community, the Munch &amp;#x26; Learn monthly technology talks feature industry technology experts who offer valuable insights into today’s most popular technologies. Take advantage of this free 60-minute session opportunity to listen to and engage with these leading technologists on a variety of subjects. Each month we will be presenting a different topic. For January 2021, meet with Ted Dunning and learn about the value of a data fabric and how it works.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Explore Containerization and MLOps]]></title><description><![CDATA[Munch & Learn series Explore Containerization and MLOps February 24, 2021 Run by the HPE Developer Community, the Munch & Learn monthly…]]></description><link>https://developer.hpe.com/hpedev-much-learn-2/</link><guid isPermaLink="false">https://developer.hpe.com/hpedev-much-learn-2/</guid><content:encoded>&lt;h2&gt;Munch &amp;#x26; Learn series&lt;/h2&gt;
&lt;p&gt;Explore Containerization and MLOps&lt;/p&gt;
&lt;p&gt;February 24, 2021&lt;/p&gt;
&lt;p&gt;Run by the HPE Developer Community, the Munch &amp;#x26; Learn monthly technology talks feature industry technology experts who offer valuable insights into today’s most popular technologies. Take advantage of this free 60-minute session opportunity to listen to and engage with these leading technologists on a variety of subjects. Each month we will be presenting a different topic. For February 24, 2021, meet with Tom Phelan and learn about container architectures and how they can leverage the Kubernetes Container Orchestrator to deploy and manage stateful, as well as microservice-based, applications.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Hybrid Classical-Quantum Workflows on HPE Supercomputers]]></title><description><![CDATA[Hybrid Classical-Quantum Workflows on HPE Supercomputers October 16, 2024 Quantum computing is an active area of research that has attracted…]]></description><link>https://developer.hpe.com/hybrid-classical-quantum-workflows-on-hpe-supercomputers/</link><guid isPermaLink="false">https://developer.hpe.com/hybrid-classical-quantum-workflows-on-hpe-supercomputers/</guid><content:encoded>&lt;h2&gt;Hybrid Classical-Quantum Workflows on HPE Supercomputers&lt;/h2&gt;
&lt;p&gt;October 16, 2024&lt;/p&gt;
&lt;p&gt;Quantum computing is an active area of research that has attracted attention from key players in academia and industry. While quantum devices are expected to perform well for certain problems involving a small amount of data but high complexity, classical supercomputers continue to better serve researchers in areas with data-intensive workloads relying on software tools and infrastructure developed over decades. In conjunction with several possibly tightly integrated Noisy Intermediate-Scale Quantum (NISQ) devices, classical supercomputers would be valuable candidates for the execution of hybrid workflows while sustaining the classical portion of the computation and communication of potentially large data volumes. Join this talk to get an overview of the collaboration between Classiq and HPE on this topic.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Implementing Centralized Key Management in HPE GreenLake with Thales CipherTrust]]></title><description><![CDATA[Implementing Centralized Key Management in HPE GreenLake with Thales CipherTrust June 26, 2024 Driven by the proliferation in data silos…]]></description><link>https://developer.hpe.com/implementing-centralized-key-management-in-hpe-greenlake-with-thales-ciphertrust/</link><guid isPermaLink="false">https://developer.hpe.com/implementing-centralized-key-management-in-hpe-greenlake-with-thales-ciphertrust/</guid><content:encoded>&lt;h2&gt;Implementing Centralized Key Management in HPE GreenLake with Thales CipherTrust&lt;/h2&gt;
&lt;p&gt;June 26, 2024&lt;/p&gt;
&lt;p&gt;Driven by the proliferation in data silos across edge-to-cloud estates, deepening security compliance regulations, and the expansion of data sovereignty requirements, organizations and governments worldwide are mandating the deployment of data protection technologies, such as Centralized Key Management. In this session, you will learn about the exciting new partnership between HPE GreenLake and Thales, the worldwide leader in digital trust and data security. Here, you’ll be introduced to the new Centralized Key Management solution offering on HPE GreenLake.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Implementing Your AI Breakthroughs Effectively - The Infrastructure to your AI]]></title><description><![CDATA[Implementing Your AI Breakthroughs Effectively - The Infrastructure to your AI November 6, 2024 Grab a coffee together with your IT Ops…]]></description><link>https://developer.hpe.com/implementing-your-ai-breakthroughs-effectively-the-infrastructure-to-your-ai/</link><guid isPermaLink="false">https://developer.hpe.com/implementing-your-ai-breakthroughs-effectively-the-infrastructure-to-your-ai/</guid><content:encoded>&lt;h2&gt;Implementing Your AI Breakthroughs Effectively - The Infrastructure to your AI&lt;/h2&gt;
&lt;p&gt;November 6, 2024&lt;/p&gt;
&lt;p&gt;Grab a coffee together with your IT Ops Manager, as we discuss different AI infrastructure environment options to suit your use case. Consider what processes are necessary, who will be involved, and what resources you already have to build a toolbox that integrates all the AI components together seamlessly. You don’t have to be an AI infrastructure expert to get started, so let’s explore how you’ll get your AI proof of concept implemented.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Intel Innovation 2022]]></title><description><![CDATA[Intel Innovation 2022 September 27-28, 2022 San Jose, California Attend the Intel Innovation developer conference and explore the latest…]]></description><link>https://developer.hpe.com/intel-innovation-2022/</link><guid isPermaLink="false">https://developer.hpe.com/intel-innovation-2022/</guid><content:encoded>&lt;h2&gt;Intel Innovation 2022&lt;/h2&gt;
&lt;h4&gt;September 27-28, 2022 San Jose, California&lt;/h4&gt;
&lt;p&gt;Attend the Intel Innovation developer conference and explore the latest breakthroughs in accelerated computing. Visit the HPE booth and hear more about what the HPE Developer Community is doing by engaging with the booth staff.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Introduction to HPE GreenLake cloud webhooks]]></title><description><![CDATA[Introduction to HPE GreenLake cloud webhooks February 26, 2025 This presentation introduces a new webhook functionality within HPE GreenLake…]]></description><link>https://developer.hpe.com/introduction-to-hpe-greenlake-webhooks/</link><guid isPermaLink="false">https://developer.hpe.com/introduction-to-hpe-greenlake-webhooks/</guid><content:encoded>&lt;h2&gt;Introduction to HPE GreenLake cloud webhooks&lt;/h2&gt;
&lt;p&gt;February 26, 2025&lt;/p&gt;
&lt;p&gt;This presentation introduces a new webhook functionality within HPE GreenLake cloud; one that enables platform services and applications to seamlessly publish events to external customer HTTP endpoints. Join this session to learn about the architecture, features, and benefits of this new webhook functionality, highlighting its potential to enhance platform extensibility and streamline integrations with customer systems.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Introduction to HPE Networking Central's new APIs with Postman and PyCentralv2]]></title><description><![CDATA[Introduction to HPE Networking Central's new APIs with Postman and PyCentralv2 February 25, 2026 Join our next meetup session to learn about…]]></description><link>https://developer.hpe.com/introduction-to-hpe-networking-central-new-apis-with-postman-and-pycentralv2/</link><guid isPermaLink="false">https://developer.hpe.com/introduction-to-hpe-networking-central-new-apis-with-postman-and-pycentralv2/</guid><content:encoded>&lt;h2&gt;Introduction to HPE Networking Central&apos;s new APIs with Postman and PyCentralv2&lt;/h2&gt;
&lt;p&gt;February 25, 2026&lt;/p&gt;
&lt;p&gt;Join our next meetup session to learn about the automation capabilities of HPE Networking Central, which have just received a big upgrade; we’ll show you how to get started! Learn how the latest Central APIs make it easier than ever to configure and monitor your network, all within a refreshed Central interface.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Introduction to Kubeflow]]></title><description><![CDATA[Introduction to KubeFlow November 30, 2022 Learn the basics of Kubeflow, the machine learning toolkit for Kubernetes dedicated to making…]]></description><link>https://developer.hpe.com/introduction-to-kubeflow/</link><guid isPermaLink="false">https://developer.hpe.com/introduction-to-kubeflow/</guid><content:encoded>&lt;h2&gt;Introduction to KubeFlow&lt;/h2&gt;
&lt;p&gt;November 30, 2022&lt;/p&gt;
&lt;p&gt;Learn the basics of Kubeflow, the machine learning toolkit for Kubernetes dedicated to making deployments of ML workflows on Kubernetes simple, portable, and scalable. See how it works and what it brings to both data scientists and MLOps engineers. Find out what external add-ons may be important in certain use cases, and hear what Arrikto’s Enterprise Kubeflow distribution adds to the equation.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[IT New Year’s resolution: Build an ethical and trustworthy AI system]]></title><description><![CDATA[IT New Year’s resolution: Build an ethical and trustworthy AI system January 22, 2025 The "era of AI" provides new technological…]]></description><link>https://developer.hpe.com/it-new-year-resolution-building-an-ethical-and-trustworthy-ai-system/</link><guid isPermaLink="false">https://developer.hpe.com/it-new-year-resolution-building-an-ethical-and-trustworthy-ai-system/</guid><content:encoded>&lt;h2&gt;IT New Year’s resolution: Build an ethical and trustworthy AI system&lt;/h2&gt;
&lt;p&gt;January 22, 2025&lt;/p&gt;
&lt;p&gt;The &quot;era of AI&quot; provides new technological capabilities accompanied by new responsibilities. Learn how we developed AI principles to guide our organization&apos;s development decisions and how to put these principles into action throughout your AI system lifespan and follow-on projects.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[KubeCon Europe 2021]]></title><description><![CDATA[KubeCon Europe 2021 May 04-07, 2021 Europe The Cloud Native Computing Foundation’s flagship conference gathers adopters and technologists…]]></description><link>https://developer.hpe.com/kube-con-europe-2021/</link><guid isPermaLink="false">https://developer.hpe.com/kube-con-europe-2021/</guid><content:encoded>&lt;h2&gt;KubeCon Europe 2021&lt;/h2&gt;
&lt;p&gt;May 04-07, 2021 Europe&lt;/p&gt;
&lt;p&gt;The Cloud Native Computing Foundation’s flagship conference gathers adopters and technologists from leading open source and cloud native communities virtually from May 4 – 7, 2021. Join containerd, CoreDNS, Envoy, Fluentd, Harbor, Helm, Jaeger, Kubernetes, Prometheus, Rook, TiKV, TUF, Vitess, Argo, CloudEvents, CNI, Contour, Cortex, CRI-O, Dragonfly, etcd, Falco, gRPC, KubeEdge, Linkerd, NATS, Notary, Open Policy Agent, OpenTracing, Operator Framework, SPIFFE, SPIRE, and Thanos as the community gathers for four days to further the education and advancement of cloud native computing.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[KubeCon NA 2020]]></title><description><![CDATA[KubeCon NA 2020 November 17-20, 2020 NA The Cloud Native Computing Foundation’s flagship conference gathers adopters and technologists from…]]></description><link>https://developer.hpe.com/kube-con-na-2020/</link><guid isPermaLink="false">https://developer.hpe.com/kube-con-na-2020/</guid><content:encoded>&lt;h2&gt;KubeCon NA 2020&lt;/h2&gt;
&lt;p&gt;November 17-20, 2020 NA&lt;/p&gt;
&lt;p&gt;The Cloud Native Computing Foundation’s flagship conference gathers adopters and technologists from leading open source and cloud native communities virtually from November 17 – 20, 2020. Join Kubernetes, Prometheus, Envoy, CoreDNS, containerd, Fluentd, Jaeger, Vitess, TUF, OpenTracing, gRPC, CNI, Notary, NATS, Linkerd, Helm, Rook, Harbor, etcd, Open Policy Agent, CRI-O, TiKV, CloudEvents, Falco, Argo, Dragonfly, SPIFFE and SPIRE as the community gathers for four days to further the education and advancement of cloud native computing.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[KubeCon EU 2022]]></title><description><![CDATA[KubeCon EU 2022 May 16-20, 2022 EU Valencia, Spain + Virtual The Cloud Native Computing Foundation’s flagship conference gathers adopters…]]></description><link>https://developer.hpe.com/kubecon-eu-2022/</link><guid isPermaLink="false">https://developer.hpe.com/kubecon-eu-2022/</guid><content:encoded>&lt;h2&gt;KubeCon EU 2022&lt;/h2&gt;
&lt;h4&gt;May 16-20, 2022 EU Valencia, Spain + Virtual&lt;/h4&gt;
&lt;p&gt;The Cloud Native Computing Foundation’s flagship conference gathers adopters and technologists from leading open source and cloud native communities in Valencia, Spain from 16 – 20 May 2022. Join thousands of cloud-native leaders, including Hewlett Packard Enterprise (HPE), as the community gathers for four days to further the education and advancement of cloud native computing. Visit the HPE booth to learn more about the HPE Developer Community and HPE’s open source efforts.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Lessons in AI for retailers: Transforming the shopping experience]]></title><description><![CDATA[Lessons in AI for retailers: Transforming the shopping experience May 14, 2025 AI is reshaping the retail industry, from reducing shrink and…]]></description><link>https://developer.hpe.com/lessons-in-ai-for-retailers-transforming-the-shopping-experience/</link><guid isPermaLink="false">https://developer.hpe.com/lessons-in-ai-for-retailers-transforming-the-shopping-experience/</guid><content:encoded>&lt;h2&gt;Lessons in AI for retailers: Transforming the shopping experience&lt;/h2&gt;
&lt;p&gt;May 14, 2025&lt;/p&gt;
&lt;p&gt;AI is reshaping the retail industry, from reducing shrink and optimizing inventory management to delivering faster, more personalized customer experiences. However, the journey from AI adoption to achieving success can feel overwhelming. Join us to learn how AI is transforming retail operations, from improving store-level decision-making to reimagining the shopping journey. Whether you&apos;re new to AI or refining your strategy, this webinar will give you actionable insights to help meet customer expectations, while protecting customer trust.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[KubeCon NA 2022]]></title><description><![CDATA[KubeCon NA 2022 October 24-28, 2022 Detroit, Michigan USA + Virtual The Cloud Native Computing Foundation’s flagship conference gathers…]]></description><link>https://developer.hpe.com/kubecon-na-2022/</link><guid isPermaLink="false">https://developer.hpe.com/kubecon-na-2022/</guid><content:encoded>&lt;h2&gt;KubeCon NA 2022&lt;/h2&gt;
&lt;h4&gt;October 24-28, 2022 Detroit, Michigan USA + Virtual&lt;/h4&gt;
&lt;p&gt;The Cloud Native Computing Foundation’s flagship conference gathers adopters and technologists from leading open source and cloud native communities in Detroit, Michigan USA from 24 – 28 Oct 2022. Join thousands of cloud-native leaders, including Hewlett Packard Enterprise (HPE), as the community gathers for five days to further the education and advancement of cloud native computing. Visit the HPE booth to learn more about the HPE Developer Community and HPE’s open source efforts.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Learn how AI hackers detect fragility and how to thwart them with AI model resilience]]></title><description><![CDATA[Learn how AI hackers detect fragility and how to thwart them with AI model resilience March 20, 2024 Learn how AI hackers detect fragility…]]></description><link>https://developer.hpe.com/learn-how-ai-hackers-detect-fragility-and-how-to-thwart-them-with-ai-model-resilience/</link><guid isPermaLink="false">https://developer.hpe.com/learn-how-ai-hackers-detect-fragility-and-how-to-thwart-them-with-ai-model-resilience/</guid><content:encoded>&lt;h2&gt;Learn how AI hackers detect fragility and how to thwart them with AI model resilience&lt;/h2&gt;
&lt;p&gt;March 20, 2024&lt;/p&gt;
&lt;p&gt;
On every lab test, your AI was superhuman. But how will it fare in the real world of smog, smears, and nation state hackers? In this session, we&apos;ll explore how AI hackers can measure the fragility of today&apos;s AI models, covering the model&apos;s vulnerability under real-world conditions across applications of varying data dimensions, from signals to images to videos. We’ll then show how to engineer robustness into models and sketch out tomorrow&apos;s AI supply chain where confidence is measurable and the model&apos;s perception can be inspected.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[KubeCon NA 2021]]></title><description><![CDATA[KubeCon NA 2021 Oct 11-15, 2021 NA Los Angeles + Virtual The Cloud Native Computing Foundation’s flagship conference gathers adopters and…]]></description><link>https://developer.hpe.com/kubecon-na-2021/</link><guid isPermaLink="false">https://developer.hpe.com/kubecon-na-2021/</guid><content:encoded>&lt;h2&gt;KubeCon NA 2021&lt;/h2&gt;
&lt;h4&gt;Oct 11-15, 2021 NA Los Angeles + Virtual&lt;/h4&gt;
&lt;p&gt;The &lt;a href=&quot;https://www.cncf.io/&quot;&gt;Cloud Native Computing Foundation’s&lt;/a&gt; flagship conference gathers adopters and technologists from leading open source and cloud native communities from October 11-15, 2021. Join containerd, CoreDNS, Envoy, etcd, Fluentd, Harbor, Helm, Jaeger, Kubernetes, Open Policy Agent, Prometheus, Rook, TiKV, TUF, Vitess, Argo, Buildpacks, CloudEvents, CNI, Contour, Cortex, CRI-O, Dragonfly, Falco, Flux, gRPC, KubeEdge, Linkerd, NATS, Notary, OpenTracing, Operator Framework, SPIFFE, SPIRE, and Thanos as the community gathers for four days to further the education and advancement of cloud native computing.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Leveraging Tech to Address Global Challenges & Health]]></title><description><![CDATA[Leveraging Tech to Address Global Challenges & Health April 19, 2023 The power of technology, digital transformation, and partnerships can…]]></description><link>https://developer.hpe.com/leveraging-tech-to-address-global-challenges-health/</link><guid isPermaLink="false">https://developer.hpe.com/leveraging-tech-to-address-global-challenges-health/</guid><content:encoded>&lt;h2&gt;Leveraging Tech to Address Global Challenges &amp;#x26; Health&lt;/h2&gt;
&lt;p&gt;April 19, 2023&lt;/p&gt;
&lt;p&gt;The power of technology, digital transformation, and partnerships can drive significant advancement in building a more sustainable and equitable world. Join us to hear Fred Tan, Deputy Director of the Hewlett Packard Enterprise Foundation, who is helping to drive the Force for Good program, discuss specific initiatives HPE is involved with to achieve our mission of advancing the way people live and work by addressing pressing global challenges.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Leveraging the open source ML advantage]]></title><description><![CDATA[Leveraging the open source ML advantage May 1, 2024 Join this virtual session from HPE Data Science Community for discussions around…]]></description><link>https://developer.hpe.com/leveraging-the-open-source-ml-advantage/</link><guid isPermaLink="false">https://developer.hpe.com/leveraging-the-open-source-ml-advantage/</guid><content:encoded>&lt;h2&gt;Leveraging the open source ML advantage&lt;/h2&gt;
&lt;p&gt;May 1, 2024&lt;/p&gt;
&lt;p&gt;Join this virtual session from HPE Data Science Community for discussions around:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Scaling ML model training using state-of-the-art hyperparameter tuning and distributed training methods - using all open source software.&lt;/li&gt;
&lt;li&gt;Getting started with open-source LLMs in our playground feature: GenAI studio.&lt;/li&gt;
&lt;li&gt;Smoothly transitioning from foundation model selection and prompt engineering to fine-tuning and deployment.&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[LL-Mesh – Democratizing Gen AI]]></title><description><![CDATA[LL-Mesh – Democratizing Gen AI December 18, 2024 LL-Mesh is a pioneering initiative by HPE aimed at democratizing Generative Artificial…]]></description><link>https://developer.hpe.com/ll-mesh-–-democratizing-gen-ai/</link><guid isPermaLink="false">https://developer.hpe.com/ll-mesh-–-democratizing-gen-ai/</guid><content:encoded>&lt;h2&gt;LL-Mesh – Democratizing Gen AI&lt;/h2&gt;
&lt;p&gt;December 18, 2024&lt;/p&gt;
&lt;p&gt;LL-Mesh is a pioneering initiative by HPE aimed at democratizing Generative Artificial Intelligence (Gen AI). It empowers users to create tools and web applications using Gen AI with low or no coding. This approach addresses the technical challenges by simplifying the integration process. Join this session to learn how the LL-Mesh platform allows for the creation of a &quot;Mesh&quot; of Gen AI tools, providing orchestration capabilities through an agentic Reasoning Engine based on Large Language Models (LLMs).&lt;/p&gt;</content:encoded></item><item><title><![CDATA[LLM finetuning for mere mortals]]></title><description><![CDATA[LLM finetuning for mere mortals August 28, 2024 Everyone wants to use Large Language Models (LLMs), and for good reason. With applications…]]></description><link>https://developer.hpe.com/llm-finetuning-for-mere-mortals/</link><guid isPermaLink="false">https://developer.hpe.com/llm-finetuning-for-mere-mortals/</guid><content:encoded>&lt;h2&gt;LLM finetuning for mere mortals&lt;/h2&gt;
&lt;p&gt;August 28, 2024&lt;/p&gt;
&lt;p&gt;Everyone wants to use Large Language Models (LLMs), and for good reason. With applications ranging from content creation to automated software development, LLMs have the potential to transform nearly every industry. How can you make the most of this technology when applying it to your own use cases? Finetuning is one highly effective approach, but it can be challenging to implement correctly. In this talk, you&apos;ll learn about the challenges involved and how we mere mortals can tackle them using HPE&apos;s new software that leverages the open-source machine learning (ML) ecosystem.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Machine Learning Data Version Control (DVC): Reproducibility and Collaboration in your ML Projects ]]></title><description><![CDATA[Machine Learning Data Version Control (DVC): Reproducibility and Collaboration in your ML Projects September 28, 2022 In this session, we’ll…]]></description><link>https://developer.hpe.com/machine-learning-data-version-control-dvc-reproducibility-and-collaboration-in-your-ml-projects/</link><guid isPermaLink="false">https://developer.hpe.com/machine-learning-data-version-control-dvc-reproducibility-and-collaboration-in-your-ml-projects/</guid><content:encoded>&lt;h2&gt;Machine Learning Data Version Control (DVC): Reproducibility and Collaboration in your ML Projects&lt;/h2&gt;
&lt;p&gt;September 28, 2022&lt;/p&gt;
&lt;p&gt;In this session, we’ll do a demo where you’ll learn how to manage and make your machine learning projects reproducible with our open-source tool DVC and the DVC extension for VS Code IDE. We will see how to track datasets and models, run, compare, visualize, and track machine learning experiments right in VS Code. We&apos;ll then go over our GitOps-based model registry solution. Search, share, and manage all models with full context around model lineage, version, production status, data used to train model, and more.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Machines learn from data to be artificially intelligent]]></title><description><![CDATA[Machines learn from data to be artificially intelligent August 17, 2022 Join us for a free, 60-minute session where you can connect with…]]></description><link>https://developer.hpe.com/machine-learns-from-data-to-be-artificially-intelligent-use-cases-lessons-learned/</link><guid isPermaLink="false">https://developer.hpe.com/machine-learns-from-data-to-be-artificially-intelligent-use-cases-lessons-learned/</guid><content:encoded>&lt;h2&gt;Machines learn from data to be artificially intelligent&lt;/h2&gt;
&lt;p&gt;August 17, 2022&lt;/p&gt;
&lt;p&gt;Join us for a free, 60-minute session where you can connect with experts who offer valuable insights into today’s most popular technologies. This month, hear from HPE’s own Dr. Eng Lim Goh, senior VP and CTO of AI, on the importance of sharing data to gain insights and how to do so responsibly. In his talk, Dr. Goh will illustrate how AI can advance the human condition, exploring a variety of industry use cases and lessons we’ve learned.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[ML-at-Scale ‘23]]></title><description><![CDATA[ML-at-Scale ‘23 September 12-13, 2023 What does it mean to truly achieve machine learning (ML) at scale? Determined AI is hosting a virtual…]]></description><link>https://developer.hpe.com/ml-at-scale-‘23/</link><guid isPermaLink="false">https://developer.hpe.com/ml-at-scale-‘23/</guid><content:encoded>&lt;h2&gt;ML-at-Scale ‘23&lt;/h2&gt;
&lt;p&gt;September 12-13, 2023&lt;/p&gt;
&lt;p&gt;What does it mean to truly achieve machine learning (ML) at scale? Determined AI is hosting a virtual conference where you can find out. If you work on solving critical problems with ML, you won’t want to miss this. ML-at-Scale ’23 will feature talks and workshops about pushing the scale of AI – from open source development to high-performance computing.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[No more blind spots: How eBPF transforms observability]]></title><description><![CDATA[No more blind spots: How eBPF transforms observability March 26, 2025 In the world of modern cloud-native applications, traditional…]]></description><link>https://developer.hpe.com/no-more-blind-spots-how-ebpf-transforms-observability/</link><guid isPermaLink="false">https://developer.hpe.com/no-more-blind-spots-how-ebpf-transforms-observability/</guid><content:encoded>&lt;h2&gt;No more blind spots: How eBPF transforms observability&lt;/h2&gt;
&lt;p&gt;March 26, 2025&lt;/p&gt;
&lt;p&gt;In the world of modern cloud-native applications, traditional observability tools often fall short, leaving critical blind spots in performance monitoring, security, and troubleshooting. Enter eBPF (Extended Berkeley Packet Filter), a revolutionary technology that allows deep, low-overhead visibility into the Linux kernel without modifying application code or adding intrusive instrumentation. Join this session to explore how eBPF can transform observability by enabling real-time insights into system behavior, network traffic, and application performance.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[NonStop Technical Boot Camp (TBC) 2023]]></title><description><![CDATA[NonStop Technical Boot Camp (TBC) 2023 September 12-14, 2023 Do not miss this opportunity to be the first to discover solutions for cyber…]]></description><link>https://developer.hpe.com/nonstop-technical-boot-camp-tbc-2023/</link><guid isPermaLink="false">https://developer.hpe.com/nonstop-technical-boot-camp-tbc-2023/</guid><content:encoded>&lt;h2&gt;NonStop Technical Boot Camp (TBC) 2023&lt;/h2&gt;
&lt;p&gt;September 12-14, 2023&lt;/p&gt;
&lt;p&gt;Do not miss this opportunity to be the first to discover solutions for cyber resilience and hybrid cloud to help you stay ahead of the pack. At this event, system admins, architects, DBAs, DevOps practitioners, and newcomers will meet and hear from a thriving community that continues to define the gold standard for building the most reliable IT solutions! Register for the conference and book your hotel room now!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[NVIDIA Deep Learning Institute]]></title><description><![CDATA[Get a free course on AI, HPC or Data Science from NVIDIA Advance your knowledge in AI, accelerated computing, accelerated data science…]]></description><link>https://developer.hpe.com/nvidia-deep-learning-institute/</link><guid isPermaLink="false">https://developer.hpe.com/nvidia-deep-learning-institute/</guid><content:encoded>&lt;h2&gt;Get a free course on AI, HPC or Data Science from NVIDIA&lt;/h2&gt;
&lt;p&gt;Advance your knowledge in AI, accelerated computing, accelerated data science, graphics, and simulation with a free NVIDIA Deep Learning Institute course (offer valid while supplies last).&lt;/p&gt;</content:encoded></item><item><title><![CDATA[NVIDIA GTC – the developer conference for the era of AI]]></title><description><![CDATA[NVIDIA GTC – the developer conference for the era of AI Interested in artificial intelligence and high performance application development…]]></description><link>https://developer.hpe.com/nvidia-gtc-–-the-developer-conference-for-the-era-of-ai/</link><guid isPermaLink="false">https://developer.hpe.com/nvidia-gtc-–-the-developer-conference-for-the-era-of-ai/</guid><content:encoded>&lt;h2&gt;NVIDIA GTC – the developer conference for the era of AI&lt;/h2&gt;
&lt;p&gt;Interested in artificial intelligence and high performance application development? Hewlett Packard Enterprise (HPE) is as well and is excited to announce its participation as a Diamond sponsor of online event NVIDIA GTC22, September 19-22. NVIDIA GTC is the developer conference for the Era of AI and the Metaverse, exploring development, design and simulation, robotics, and applied real-time technologies.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[NVIDIA NIM Agents Blueprints - Do your own AI: Multimodal PDF Data Extraction 101]]></title><description><![CDATA[NVIDIA NIM Agents Blueprints - Do your own AI: Multimodal PDF Data Extraction 101 October 30, 2024 Pursuant to the launch of HPE Private…]]></description><link>https://developer.hpe.com/nvidia-nim-agents-blueprints-do-your-own-ai-multimodal-pdf-data-extraction-101/</link><guid isPermaLink="false">https://developer.hpe.com/nvidia-nim-agents-blueprints-do-your-own-ai-multimodal-pdf-data-extraction-101/</guid><content:encoded>&lt;h2&gt;NVIDIA NIM Agents Blueprints - Do your own AI: Multimodal PDF Data Extraction 101&lt;/h2&gt;
&lt;p&gt;October 30, 2024&lt;/p&gt;
&lt;p&gt;Following the launch of HPE Private Cloud AI, which leverages NVIDIA AI Enterprise and NIM Agent Blueprints, NVIDIA will highlight how it eases and accelerates the process of building a multimodal PDF data extraction workflow, using NVIDIA NeMo™ Retriever NIM microservices to unlock highly accurate insights from massive volumes of enterprise data.
Join this session to learn about the enterprise-scale multimodal document retrieval workflow designed to enhance generative AI applications with RAG capabilities that can be connected to proprietary data–wherever it resides.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Observability in action]]></title><description><![CDATA[Observability in Action July 26, 2023 In this session, learn how OpsRamp approaches observability in a hybrid-cloud world to help customers…]]></description><link>https://developer.hpe.com/observability-in-action/</link><guid isPermaLink="false">https://developer.hpe.com/observability-in-action/</guid><content:encoded>&lt;h2&gt;Observability in Action&lt;/h2&gt;
&lt;p&gt;July 26, 2023&lt;/p&gt;
&lt;p&gt;In this session, learn how OpsRamp approaches observability in a hybrid-cloud world to help customers answer why problems are occurring with their infrastructure and applications by examining events, metrics, logs and traces. This talk will also explain how OpsRamp avoids vendor lock-in through the use of open standards.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Open Source at HPE: Three Pillars, Many Paths]]></title><description><![CDATA[Open Source at HPE: Three Pillars, Many Paths 03/18/2026 Join us for a cross-pillar conversation on how HPE uses open source to enable…]]></description><link>https://developer.hpe.com/open-source-at-hpe-three-pillars-many-paths/</link><guid isPermaLink="false">https://developer.hpe.com/open-source-at-hpe-three-pillars-many-paths/</guid><content:encoded>&lt;h2&gt;Open Source at HPE: Three Pillars, Many Paths&lt;/h2&gt;
&lt;p&gt;March 18, 2026&lt;/p&gt;
&lt;p&gt;Join us for a cross-pillar conversation on how HPE uses open source to enable platforms, power science, and scale infrastructure.&lt;/p&gt;
&lt;p&gt;Our panel will explore how HPE engages with open source across networking, high-performance computing, and storage—showing that open source isn’t a single strategy but a spectrum: an entry point, a trust mechanism, a catalyst for scientific collaboration, and the foundation for scalable infrastructure.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Open Source Summit Europe 2022]]></title><description><![CDATA[Open Source Summit Europe 2022 13-16 September, 2022 Open Source Summit is the premier event for open source developers, technologists, and…]]></description><link>https://developer.hpe.com/open-source-summit-europe-2022/</link><guid isPermaLink="false">https://developer.hpe.com/open-source-summit-europe-2022/</guid><content:encoded>&lt;h2&gt;&lt;strong&gt;Open Source Summit Europe 2022&lt;/strong&gt;&lt;/h2&gt;
&lt;h4&gt;13-16 September, 2022&lt;/h4&gt;
&lt;p&gt;Open Source Summit is the premier event for open source developers, technologists, and community leaders to collaborate, share information, solve problems, and gain knowledge, furthering open source innovation and ensuring a sustainable open source ecosystem. It is the gathering place for open-source code and community contributors.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[OpenSearch – The open-source search and analytics suite you can run yourself 	]]></title><description><![CDATA[OpenSearch – The open-source search and analytics suite you can run yourself June 22, 2022 With systems getting larger, it’s more important…]]></description><link>https://developer.hpe.com/opensearch-–-the-open-source-search-and-analytics-suite-you-can-run-yourself/</link><guid isPermaLink="false">https://developer.hpe.com/opensearch-–-the-open-source-search-and-analytics-suite-you-can-run-yourself/</guid><content:encoded>&lt;h1&gt;OpenSearch – The open-source search and analytics suite you can run yourself&lt;/h1&gt;
&lt;p&gt;June 22, 2022&lt;/p&gt;
&lt;p&gt;With systems getting larger, it’s more important than ever to be able to search through all your data. Whether that involves adding a search functionality for your users or creating a distributed log store to solve problems faster, OpenSearch can help you get there. The best part is that it’s open source, so the community drives what’s built next.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[OpenTelemetry: Getting Started and The Road to Production ]]></title><description><![CDATA[OpenTelemetry: Getting Started and The Road to Production August 31, 2022 In this talk, you will experience a complete OpenTelemetry…]]></description><link>https://developer.hpe.com/opentelemetry-getting-started-and-the-road-to-production/</link><guid isPermaLink="false">https://developer.hpe.com/opentelemetry-getting-started-and-the-road-to-production/</guid><content:encoded>&lt;h2&gt;OpenTelemetry: Getting Started and The Road to Production&lt;/h2&gt;
&lt;p&gt;August 31, 2022&lt;/p&gt;
&lt;p&gt;In this talk, you will experience a complete OpenTelemetry Bootcamp, from the very basics, including what OpenTelemetry is and how it can help you, all the way to advanced concepts such as managing cost with sampling, the collector, and what to know before deploying in production. This session will serve as your roadmap for getting started with OpenTelemetry and your guide to understanding its core concepts.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Opportunities and challenges of self-hosting GenAI]]></title><description><![CDATA[Opportunities and challenges of self-hosting GenAI January 30, 2024 Join us on 30 January with TitanML as we bring together the HPE data…]]></description><link>https://developer.hpe.com/opportunities-and-challenges-of-self-hosting-genai/</link><guid isPermaLink="false">https://developer.hpe.com/opportunities-and-challenges-of-self-hosting-genai/</guid><content:encoded>&lt;h2&gt;Opportunities and challenges of self-hosting GenAI&lt;/h2&gt;
&lt;p&gt;January 30, 2024&lt;/p&gt;
&lt;p&gt;Join us on 30 January with TitanML as we bring together the HPE data science user group to discuss opportunities and challenges of self-hosting GenAI.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Optimizing deep neural network inference workloads]]></title><description><![CDATA[Optimizing deep neural network inference workloads November 15, 2023 AI, and specifically deep learning, is driving significant advancements…]]></description><link>https://developer.hpe.com/optimizing-deep-neural-network-inference-workloads/</link><guid isPermaLink="false">https://developer.hpe.com/optimizing-deep-neural-network-inference-workloads/</guid><content:encoded>&lt;h2&gt;Optimizing deep neural network inference workloads&lt;/h2&gt;
&lt;p&gt;November 15, 2023&lt;/p&gt;
&lt;p&gt;AI, and specifically deep learning, is driving significant advancements in many vertical markets. Whether within the computer vision or natural language processing domain, deploying this technology at scale requires overcoming many challenges and constraints. In this session, learn about deep neural network inference optimization techniques, their pros and cons, and how to use them together to optimize inference performance.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Running reliable systems Part 1: An overview of SRE]]></title><description><![CDATA[Running reliable systems Part 1: An overview of SRE D﻿ecember 7, 2022 Learn about the Site Reliability Engineering (SRE) approach to…]]></description><link>https://developer.hpe.com/running-reliable-systems-part-1-an-overview-of-sre/</link><guid isPermaLink="false">https://developer.hpe.com/running-reliable-systems-part-1-an-overview-of-sre/</guid><content:encoded>&lt;h2&gt;Running reliable systems Part 1: An overview of SRE&lt;/h2&gt;
&lt;p&gt;December 7, 2022&lt;/p&gt;
&lt;p&gt;Learn about the Site Reliability Engineering (SRE) approach to ensuring reliability as part of the software product development process.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Running reliable systems Part 2: Service-Level Objective (SLO) Math]]></title><description><![CDATA[Running reliable systems Part 2: SLO Math D﻿ecember 14, 2022 Learn how to set Service-Level Objectives (SLOs) that reflect the expectations…]]></description><link>https://developer.hpe.com/running-reliable-systems-part-2-service-level-objective-slo-math/</link><guid isPermaLink="false">https://developer.hpe.com/running-reliable-systems-part-2-service-level-objective-slo-math/</guid><content:encoded>&lt;h2&gt;Running reliable systems Part 2: SLO Math&lt;/h2&gt;
&lt;p&gt;December 14, 2022&lt;/p&gt;
&lt;p&gt;Learn how to set Service-Level Objectives (SLOs) that reflect the expectations of your customers by using mathematical probability and implementing SLO-based reliability at many layers in the stack.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Secure GenAI Adoption for all!]]></title><description><![CDATA[Secure GenAI Adoption for all! February 21, 2024 GenAI can bring great value to a business by providing a new interface that can be used to…]]></description><link>https://developer.hpe.com/secure-genai-adoption-for-all/</link><guid isPermaLink="false">https://developer.hpe.com/secure-genai-adoption-for-all/</guid><content:encoded>&lt;h2&gt;Secure GenAI Adoption for all!&lt;/h2&gt;
&lt;p&gt;February 21, 2024&lt;/p&gt;
&lt;p&gt;GenAI can bring great value to a business by providing a new interface that can be used to accelerate creativity, run knowledge-centric tasks, or simply provide more rapid access to information. However, rapid adoption of this technology has led to security snafus, privacy issues, and hallucinated data reaching the wrong people. In this session, we will look at how some of these risks have made it into the critical path for GenAI and how we might start to mitigate these issues through technology and processes. Finally, we will review &quot;Project Ethan&quot;, which was highlighted on the mainstage at Discover Barcelona by HPE CTO, Fidelma Russo, to see how this project will bridge the gap from rapid innovation to secure business value. Join us for a lively presentation and discussion.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[State of the Nation – Linux distributions]]></title><description><![CDATA[State of the Nation – Linux distributions September 20, 2023 Hear about the latest trends in Linux. Learn what happened to CentOS, the…]]></description><link>https://developer.hpe.com/state-of-the-nation-–-linux-distributions/</link><guid isPermaLink="false">https://developer.hpe.com/state-of-the-nation-–-linux-distributions/</guid><content:encoded>&lt;h2&gt;State of the Nation – Linux distributions&lt;/h2&gt;
&lt;p&gt;September 20, 2023&lt;/p&gt;
&lt;p&gt;Hear about the latest trends in Linux. Learn what happened to CentOS, the reasons why Red Hat no longer posts source code, and explore new minimal container deployment operating systems like CoreOS and SLE-Micro. In this session, Craig Lamparter of the HPE Linux Enablement Lab will discuss these topics as well as identify which Linux distributions are supported on HPE servers and why.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Storage & Applications in Kubernetes]]></title><description><![CDATA[Storage & Applications in Kubernetes Live Webinar March 17, 2022 10am PT / 1pm ET / 6pm GT In this 30-minute interactive webinar, you’ll be…]]></description><link>https://developer.hpe.com/storage-applications-in-kubernetes/</link><guid isPermaLink="false">https://developer.hpe.com/storage-applications-in-kubernetes/</guid><content:encoded>&lt;h2&gt;Storage &amp;#x26; Applications in Kubernetes&lt;/h2&gt;
&lt;h3&gt;Live Webinar&lt;/h3&gt;
&lt;p&gt;March 17, 2022&lt;br&gt;
10am PT / 1pm ET / 6pm GMT&lt;/p&gt;
&lt;p&gt;In this 30-minute interactive webinar, you’ll be introduced to Kubernetes storage fundamentals, key Kubernetes storage terminology, real-world use cases and best practices.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Sustainably Scaling AI Adoption]]></title><description><![CDATA[Sustainably Scaling AI Adoption August 6, 2025 AI-capable infrastructure configurations are introducing new computing, energy, and cooling…]]></description><link>https://developer.hpe.com/sustainably-scaling-ai-adoption/</link><guid isPermaLink="false">https://developer.hpe.com/sustainably-scaling-ai-adoption/</guid><content:encoded>&lt;h2&gt;Sustainably Scaling AI Adoption&lt;/h2&gt;
&lt;p&gt;August 6, 2025&lt;/p&gt;
&lt;p&gt;AI-capable infrastructure configurations are introducing new computing, energy, and cooling demands on our data centers and technology stacks. Our engineering capabilities around efficiency, design, and cooling are now at the forefront of progress as we balance resources, costs, and outcomes. Join leading minds at HPE as we explore sustainable AI solutions for the enterprise and data center technologies that can enable the AI workloads of the future.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE Technology and Solutions Summit 2021]]></title><description><![CDATA[HPE Technology and Solutions Summit 2021 March 15-18, 2021 - Virtual HPE Technology and Solutions Summit 2021 is Hewlett Packard Enterprise…]]></description><link>https://developer.hpe.com/technology-and-solutions-summit-2020-paris/</link><guid isPermaLink="false">https://developer.hpe.com/technology-and-solutions-summit-2020-paris/</guid><content:encoded>&lt;h2&gt;HPE Technology and Solutions Summit 2021&lt;/h2&gt;
&lt;p&gt;March 15-18, 2021 - Virtual&lt;/p&gt;
&lt;p&gt;HPE Technology and Solutions Summit 2021 is Hewlett Packard Enterprise&apos;s largest and most comprehensive annual technical and solutions knowledge-transfer event. Since its inception in 2006, HPE TSS has established its reputation as a renowned training initiative. Presales Consultants and Solutions Architects from HPE and partner communities in Europe, Middle East and Africa benefit from direct access to HPE technology experts, engineers and Chief Technologists. You will receive up-to-the-minute training on product developments, roadmap updates, and strategic insights, and be able to practice and apply your knowledge under non-disclosure.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Texas Children’s Hospital Healthcare Hackathon]]></title><description><![CDATA[Texas Children’s Hospital Healthcare Hackathon Sponsored by HPE May 14-24, 2021 Showcase your innovative, tech-based solutions to help build…]]></description><link>https://developer.hpe.com/texas-children’s-hospital-healthcare-hackathon-sponsored-by-hpe/</link><guid isPermaLink="false">https://developer.hpe.com/texas-children’s-hospital-healthcare-hackathon-sponsored-by-hpe/</guid><content:encoded>&lt;h2&gt;Texas Children’s Hospital Healthcare Hackathon&lt;/h2&gt;
&lt;p&gt;Sponsored by HPE&lt;/p&gt;
&lt;p&gt;May 14-24, 2021&lt;/p&gt;
&lt;p&gt;Showcase your innovative, tech-based solutions to help build the hospital of the future! To celebrate the groundbreaking of its new hospital, Texas Children’s Hospital is hosting a two-week long virtual Healthcare Hackathon. HPE is proud to sponsor this event where coders can submit innovative solutions to address its biggest day-to-day challenges and win prizes. A special category for kids allows younger coders to participate, too. Register today!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[The 10th Annual Chapel Implementers and Users Workshop (CHIUW)]]></title><description><![CDATA[CHIUW 2023 The 10th Annual Chapel Implementers and Users Workshop J﻿une 01-02, 2023 CHIUW 2023 is the 10th annual Chapel Implementers and…]]></description><link>https://developer.hpe.com/the-10th-annual-chapel-implementers-and-users-workshop-chiuw/</link><guid isPermaLink="false">https://developer.hpe.com/the-10th-annual-chapel-implementers-and-users-workshop-chiuw/</guid><content:encoded>&lt;h2&gt;CHIUW 2023&lt;/h2&gt;
&lt;h3&gt;The 10th Annual Chapel Implementers and Users Workshop&lt;/h3&gt;
&lt;p&gt;June 01-02, 2023&lt;/p&gt;
&lt;p&gt;CHIUW 2023 is the 10th annual Chapel Implementers and Users Workshop, which serves as a forum where users and developers of the general-purpose Chapel programming language (chapel-lang.org) can meet to report on work being done with Chapel, exchange ideas, and forge new collaborations. Anyone interested in parallel programming and/or Chapel is encouraged to attend CHIUW, from long-term enthusiasts to those simply curious to learn more. This year&apos;s CHIUW will be online and there will be no registration fees.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[The 9th Annual Chapel Implementers and Users Workshop (CHIUW)]]></title><description><![CDATA[CHIUW 2022 The 9th Annual Chapel Implementers and Users Workshop June 09-10, 2022 CHIUW 2022 is the 9th annual Chapel Implementers and Users…]]></description><link>https://developer.hpe.com/the-9th-annual-chapel-implementers-and-users-workshop-chiuw/</link><guid isPermaLink="false">https://developer.hpe.com/the-9th-annual-chapel-implementers-and-users-workshop-chiuw/</guid><content:encoded>&lt;h2&gt;CHIUW 2022&lt;/h2&gt;
&lt;h3&gt;The 9th Annual Chapel Implementers and Users Workshop&lt;/h3&gt;
&lt;p&gt;June 09-10, 2022&lt;/p&gt;
&lt;p&gt;CHIUW 2022 is the 9th annual Chapel Implementers and Users Workshop, which serves as a forum where users and developers of the Chapel programming language (chapel-lang.org) can meet to report on work being done with Chapel, exchange ideas, forge new collaborations, and engage in coding activities. Anyone interested in parallel programming and Chapel is encouraged to attend CHIUW, from long-term enthusiasts to those simply curious to learn more. This year&apos;s CHIUW will be online and there will be no registration fees.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[The data services family of APIs for HPE GreenLake – Putting it all together]]></title><description><![CDATA[The data services family of APIs for HPE GreenLake – Putting it all together April 24, 2024 HPE recently announced a set of data services…]]></description><link>https://developer.hpe.com/the-family-of-hpe-greenlake-data-services-cloud-console-dscc-apis-–-putting-it-all-together/</link><guid isPermaLink="false">https://developer.hpe.com/the-family-of-hpe-greenlake-data-services-cloud-console-dscc-apis-–-putting-it-all-together/</guid><content:encoded>&lt;h2&gt;The data services family of APIs for HPE GreenLake – Putting it all together&lt;/h2&gt;
&lt;p&gt;April 24, 2024&lt;/p&gt;
&lt;p&gt;HPE recently announced a set of data services APIs for the HPE GreenLake platform for Backup and Recovery, Private Cloud Business Edition, Virtualization and common Data Services. In this session, a team of product managers and architects will present common API service interactions (e.g. hybrid services, data services, etc.) to accelerate your consumption of these APIs for use cases such as automation and monitoring. They’ll help you understand the benefits of these API modules, how to interact with them, and their relation to the Storage APIs published earlier.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[The Hack Shack at Discover 2023]]></title><description><![CDATA[HPE Discover 2023 The Hack Shack at Discover 2023 Las Vegas, June 20-22 This year, the Hack Shack will serve as the technology hub for all…]]></description><link>https://developer.hpe.com/the-hack-shack-at-discover-2023/</link><guid isPermaLink="false">https://developer.hpe.com/the-hack-shack-at-discover-2023/</guid><content:encoded>&lt;h2&gt;HPE Discover 2023&lt;/h2&gt;
&lt;h3&gt;The Hack Shack at Discover 2023&lt;/h3&gt;
&lt;p&gt;Las Vegas, June 20-22&lt;/p&gt;
&lt;p&gt;This year, the Hack Shack will serve as the technology hub for all technologists and home base for the HPE Office of the CTO. Join us at HPE Discover 2023 to take advantage of opportunities to learn more about the HPE GreenLake platform and meet with HPE executives and technology experts, sharing your experiences and hopes for the platform. While you’re there, take advantage of opportunities to win prizes with activities like our Treasure Hunt and Hack Shack challenges and join us for our annual celebration party!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[The last mile of the self-hosted LLM]]></title><description><![CDATA[The last mile of the self-hosted LLM November 7, 2023 Join us on 7 November with TitanML as we bring together the HPE data science user…]]></description><link>https://developer.hpe.com/the-last-mile-of-the-self-hosted-llm/</link><guid isPermaLink="false">https://developer.hpe.com/the-last-mile-of-the-self-hosted-llm/</guid><content:encoded>&lt;h2&gt;The last mile of the self-hosted LLM&lt;/h2&gt;
&lt;p&gt;November 7, 2023&lt;/p&gt;
&lt;p&gt;Join us on 7 November with TitanML as we bring together the HPE data science user group to discuss the latest trends, best approaches and practical experiences.&lt;/p&gt;
&lt;p&gt;We’ll have technical talks, live demos and plenty of opportunity to network with your community while enjoying drinks, food and making the most of the table tennis at Bounce Farringdon.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[The open-source advantage: Exploring machine learning through thought leadership]]></title><description><![CDATA[The open-source advantage: Exploring machine learning through thought leadership October 18, 2023 In our next Munch & Learn, Chase…]]></description><link>https://developer.hpe.com/the-open-source-advantage-exploring-machine-learning-through-thought-leadership/</link><guid isPermaLink="false">https://developer.hpe.com/the-open-source-advantage-exploring-machine-learning-through-thought-leadership/</guid><content:encoded>&lt;h2&gt;The open-source advantage: Exploring machine learning through thought leadership&lt;/h2&gt;
&lt;p&gt;October 18, 2023&lt;/p&gt;
&lt;p&gt;In our next Munch &amp;#x26; Learn, Chase Christensen and Amber Graner from the HPE Ezmeral team get real about why machine learning is a big deal today. They break down what machine learning actually means and how thought leadership fits into this tech scene. They’ll explore all the perks of open-source machine learning and even name-drop some tools and libraries that are shaking things up in the field. Join us to hear them share some success stories showing how big thinkers can spark more use and development of these forward-thinking principles. To wrap things up, they’ll throw out a challenge to everyone listening, encouraging you to get involved and help shape the future of open-source machine learning!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[The Transformative Impact of Generative AI on Telco Products]]></title><description><![CDATA[The Transformative Impact of Generative AI on Telco Products April 17, 2024 This session will showcase a PoC for an innovative application…]]></description><link>https://developer.hpe.com/the-transformative-impact-of-generative-ai-on-telco-products/</link><guid isPermaLink="false">https://developer.hpe.com/the-transformative-impact-of-generative-ai-on-telco-products/</guid><content:encoded>&lt;h2&gt;The Transformative Impact of Generative AI on Telco Products&lt;/h2&gt;
&lt;p&gt;April 17, 2024&lt;/p&gt;
&lt;p&gt;This session will showcase a PoC for an innovative application that demonstrates a bot capable of retrieving information and executing commands on Athonet 5G Networks. Utilizing an LLM as a core Reasoning Engine, our PoC highlights the integration of advanced AI capabilities without compromising human or customer data security. We’ll demonstrate how to maximize the benefits of our products based on Generative AI by establishing automatic data feedback loops. Thanks to AI, these loops will enhance our product with the ability to self-improve through the analysis of customer interactions, thereby boosting personalization and operational efficiency.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Unleashing AI Innovation: A deep dive into the HPE Private Cloud AI Software Stack]]></title><description><![CDATA[Unleashing AI Innovation: A deep dive into the HPE Private Cloud AI Software Stack January 29, 2025 Join us for an in-depth exploration of…]]></description><link>https://developer.hpe.com/unleashing-ai-innovation-a-deep-dive-into-the-hpe-private-cloud-ai-software-stack/</link><guid isPermaLink="false">https://developer.hpe.com/unleashing-ai-innovation-a-deep-dive-into-the-hpe-private-cloud-ai-software-stack/</guid><content:encoded>&lt;h2&gt;Unleashing AI Innovation: A deep dive into the HPE Private Cloud AI Software Stack&lt;/h2&gt;
&lt;p&gt;January 29, 2025&lt;/p&gt;
&lt;p&gt;Join us for an in-depth exploration of the cutting-edge AI software stack powering the HPE Private Cloud AI (PCAI). This webinar will demonstrate how our platform enables enterprises to accelerate AI development while maintaining data privacy and operational control. Designed for data scientists, data engineers, AI/ML developers, IT managers, and business leaders, this session will reveal how PCAI revolutionizes the deployment and management of generative AI within your organization.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Unlocking Private AI Power: Insurance Fraud Detection and Beyond]]></title><description><![CDATA[Unlocking Private AI Power: Insurance Fraud Detection and Beyond February 19, 2025 Join us to explore the potential of HPE Private Cloud AI…]]></description><link>https://developer.hpe.com/unlocking-private-ai-power-insurance-fraud-detection-and-beyond/</link><guid isPermaLink="false">https://developer.hpe.com/unlocking-private-ai-power-insurance-fraud-detection-and-beyond/</guid><content:encoded>&lt;h2&gt;Unlocking Private AI Power: Insurance Fraud Detection and Beyond&lt;/h2&gt;
&lt;p&gt;February 19, 2025&lt;/p&gt;
&lt;p&gt;Join us to explore the potential of HPE Private Cloud AI in tackling real-world challenges, such as insurance fraud detection. We&apos;ll delve into the benefits of leveraging Private AI services. This webinar will demonstrate how one can train and deploy AI models to detect fraud in auto insurance claims, utilizing techniques like data preprocessing, model training, and deployment as an API endpoint. We will also discuss the role of generative AI in enhancing fraud detection capabilities.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Using HPE GreenLake edge-to-cloud platform to build a private cloud]]></title><description><![CDATA[Using HPE GreenLake edge-to-cloud platform to build a private cloud January 24, 2024 Learn how to onboard & manage your HPE hardware devices…]]></description><link>https://developer.hpe.com/using-hpe-greenlake-edge-to-cloud-platform-to-build-a-private-cloud/</link><guid isPermaLink="false">https://developer.hpe.com/using-hpe-greenlake-edge-to-cloud-platform-to-build-a-private-cloud/</guid><content:encoded>&lt;h2&gt;Using HPE GreenLake edge-to-cloud platform to build a private cloud&lt;/h2&gt;
&lt;p&gt;January 24, 2024&lt;/p&gt;
&lt;p&gt;Learn how to onboard &amp;#x26; manage your HPE hardware devices with the HPE GreenLake edge-to-cloud platform and how to configure and use RESTful API commands to automate IT management via the platform. Get to view a demo on using the web console to access an array of integrated services such as HPE GreenLake for Private Cloud Business Edition.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Using HPE GreenLake for Red Hat OpenShift to migrate, modernize and run your applications smoothly]]></title><description><![CDATA[Using HPE GreenLake for Red Hat OpenShift to migrate, modernize and run your applications smoothly May 29, 2024 Many Applications are…]]></description><link>https://developer.hpe.com/using-hpe-greenlake-for-red-hat-openshift-to-migrate-modernize-and-run-your-applications-smoothly/</link><guid isPermaLink="false">https://developer.hpe.com/using-hpe-greenlake-for-red-hat-openshift-to-migrate-modernize-and-run-your-applications-smoothly/</guid><content:encoded>&lt;h2&gt;Using HPE GreenLake for Red Hat OpenShift to migrate, modernize and run your applications smoothly&lt;/h2&gt;
&lt;p&gt;May 29, 2024&lt;/p&gt;
&lt;p&gt;Many applications are running in legacy virtualization environments today. Join this session to learn how to migrate these applications smoothly to Red Hat OpenShift. Learn about the application modernization potential the platform offers: containerizing applications, moving to microservices, and using the developer tooling. Also learn how HPE GreenLake offerings help you get started, set things up, and manage and run your environment.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Vendor-Neutral GPU Programming in Chapel]]></title><description><![CDATA[Vendor-Neutral GPU Programming in Chapel July 31, 2024 Writing programs on modern computers requires parallelism to achieve maximum…]]></description><link>https://developer.hpe.com/vendor-neutral-gpu-programming-in-chapel/</link><guid isPermaLink="false">https://developer.hpe.com/vendor-neutral-gpu-programming-in-chapel/</guid><content:encoded>&lt;h2&gt;Vendor-Neutral GPU Programming in Chapel&lt;/h2&gt;
&lt;p&gt;July 31, 2024&lt;/p&gt;
&lt;p&gt;Writing programs on modern computers requires parallelism to achieve maximum performance. This is complicated by GPUs, which provide great parallel performance at the price of more complex programming. Chapel is an open-source parallel programming language that supports portable, performant software on CPUs and GPUs using a single unified set of language features. In this talk, we will showcase Chapel&apos;s vendor-neutral GPU support, share user experiences writing GPU-enabled programs in Chapel, and show how you can write vendor-neutral GPU programs today. We will conclude with how you can get involved in our open-source work and what our future plans are.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Watch Party: NVIDIA'S EARTH-2: DIGITAL TWINS FOR WEATHER AND CLIMATE]]></title><description><![CDATA[NVIDIA Watch Party: NVIDIA'S EARTH-2: DIGITAL TWINS FOR WEATHER AND CLIMATE September 22, 2022 Join this interactive session to hear…]]></description><link>https://developer.hpe.com/watch-party-nvidias-earth-2-digital-twins-for-weather-and-climate/</link><guid isPermaLink="false">https://developer.hpe.com/watch-party-nvidias-earth-2-digital-twins-for-weather-and-climate/</guid><content:encoded>&lt;h2&gt;NVIDIA Watch Party: NVIDIA&apos;S EARTH-2: DIGITAL TWINS FOR WEATHER AND CLIMATE&lt;/h2&gt;
&lt;h4&gt;September 22, 2022&lt;/h4&gt;
&lt;p&gt;Join this interactive session to hear industry experts from NVIDIA and HPE discuss NVIDIA’s Earth-2 digital twin project, which continues to advance what is possible in the application of AI to meteorology.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Work Smarter: Natural Language Access to your infrastructure via GreenLake MCP]]></title><description><![CDATA[Work Smarter: Natural Language Access to your infrastructure via GreenLake MCP 01/28/2026 Managing infrastructure often means navigating…]]></description><link>https://developer.hpe.com/work-smarter-natural-language-access-to-your-infrastructure-via-greenlake-mcp/</link><guid isPermaLink="false">https://developer.hpe.com/work-smarter-natural-language-access-to-your-infrastructure-via-greenlake-mcp/</guid><content:encoded>&lt;h2&gt;Work Smarter: Natural Language Access to your infrastructure via GreenLake MCP&lt;/h2&gt;
&lt;p&gt;January 28, 2026&lt;/p&gt;
&lt;p&gt;Managing infrastructure often means navigating complex consoles, learning APIs, and manually generating reports. But what if there was an easier way?&lt;/p&gt;
&lt;p&gt;GreenLake MCP (Model Context Protocol) servers enable AI assistants to securely access your infrastructure data, letting you ask questions in natural language. In this session, we&apos;ll explain what MCP is, show you how it works, and demonstrate practical examples: from handling device inventory to security audits.&lt;/p&gt;
&lt;p&gt;No special technical expertise is needed in order to get valuable insights out of this session. If you&apos;ve ever wished that working with infrastructure could be simpler, this session is for you.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[World Artificial Intelligence Cannes Festival]]></title><description><![CDATA[World Artificial Intelligence Cannes Festival This event brings together the top 120 AI players in the market to showcase their latest…]]></description><link>https://developer.hpe.com/world-artificial-intelligence-cannes-festival/</link><guid isPermaLink="false">https://developer.hpe.com/world-artificial-intelligence-cannes-festival/</guid><content:encoded>&lt;h2&gt;World Artificial Intelligence Cannes Festival&lt;/h2&gt;
&lt;p&gt;This event brings together the top 120 AI players in the market to showcase their latest products and technologies. In partnership with NVIDIA, HPE will showcase state-of-the-art AI technologies such as swarm learning, ML/DL development environments, and a software suite for genomic analysis. See what HPE’s AI solutions can bring to your business through demos, workshops and keynote presentations, and how HPE, together with NVIDIA and their rich ecosystem of partners and startups, can help shape your AI strategy.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Activation and Onboarding in HPE Complete Care Service ITOps]]></title><link>https://developer.hpe.com/activation-and-onboarding-in-hpe-complete-care-service-itops/home/</link><guid isPermaLink="false">https://developer.hpe.com/activation-and-onboarding-in-hpe-complete-care-service-itops/home/</guid><content:encoded>&lt;style&gt;
.action-button {
   display: inline-block;
   background-color: #01a982;
   color: white;
   padding: 12px 24px;
   border-radius: 5px;
   text-decoration: none;
   font-size: 16px;
   font-weight: bold;
}
.action-button:hover {
   background-color: #005a3c;
}
#flex-container {
   display: flex;
}

#flex-content {
   flex: 1;
}
.video-box {
   flex: 1;
   max-width: 40%;
   min-width: 200px;
   text-align: center;
}
.video-thumbnail {
   width: 80%;
   height: 200px;
   border-radius: 10px;
   box-shadow: 0 4px 8px rgba(0, 0, 0, 0.2);
}
#center-align {
   text-align: center;
}
#button-icon {
   height: 20px;
   width: auto;
}
li {
   font-size: 20px;
   line-height: 28px;
   margin-bottom: 10px;
}
#action-link {
    background-color: #01a982;
    color: white;
    padding: 10px 20px;
    border-radius: 20px;
    text-decoration: none;
    display: inline-flex;
    align-items: center;
    gap: 10px;
}
#section-header,
#learn-from-experts,
#additional-resources,
#contact
 {
   color: #01a982;
   font-size: 28px;
   margin-top: 30px;
   border-bottom: 3px solid #01a982;
   padding-bottom: 10px;
}
#content-list {
   list-style-type: decimal;
   padding-left: 20px;
   font-size: 18px;
   line-height: 1.8;
}
.video-title {
   display: block;
   margin: 0;
   font-size: 20px;
   font-weight: bold;
}
#content-table {
   border-collapse: collapse;
   width: 100%;
   border: 1px solid black;
}

#table-header-cell {
   border: 1px solid black !important;
   padding: 8px !important;
   text-align: center !important;
   font-weight: bold !important;
}

#table-cell {
   border: 1px solid black;
   padding: 8px;
   text-align: center;
}

#section-title {
   border: 1px solid black;
   padding: 8px;
   text-align: center;
   font-weight: bold;
   background-color: #f1f1f1;
}
&lt;/style&gt;
  &lt;p style=&quot;font-size: 22px;&quot;&gt;
   HPE Complete Care Service - ITOps is a feature for all service levels of HPE Complete Care. It delivers proactive services to customers by taking a strategic approach to incident management, moving beyond monitoring and alerting to leveraging observability, automation, and artificial intelligence for IT operations (AIOps) while improving IT operational efficiencies.
&lt;/p&gt;
&lt;p style=&quot;font-size: 22px;&quot;&gt;
HPE Complete Care Service – ITOps is intended to help HPE Complete Care customers monitor, manage, and operate their entire IT environment regardless of the location of those IT assets – on-premises or cloud native. This service enables the visibility and control customers need to monitor and optimize performance and infrastructure operating costs, and to create observability of incidents and actions that drive the best outcomes for their IT business.
&lt;/p&gt;
&lt;p style=&quot;font-size: 22px;&quot;&gt;
One outstanding feature that comes standard with HPE Complete Care Service - ITOps is enabled by HPE OpsRamp Software, an AI-powered SaaS solution.
&lt;/p&gt;
&lt;p style=&quot;font-size: 22px;&quot;&gt;
Hewlett Packard Enterprise provides all HPE Complete Care Service - ITOps customers with HPE OpsRamp Software enterprise licenses, giving them a single pane of glass to discover, dashboard, and monitor physical assets (compute, storage, and networking resources) and virtual assets (VMs, cloud instances, storage buckets, and databases) within the HPE Complete Care Service.
&lt;/p&gt;
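&lt;p style=&quot;font-size: 22px;&quot;&gt;
For teams that want to script against that single pane of glass, the sketch below shows one possible way to pull the discovered resource inventory with Python over the HPE OpsRamp REST API. It is illustrative only: the placeholder subdomain, tenant ID, and credentials must be replaced with your own values, and the endpoints and response fields should be confirmed in the HPE OpsRamp Software documentation listed under Additional Resources below.
&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Illustrative sketch only; replace the UPPER_CASE placeholders with values
# from your own OpsRamp tenant and verify the routes against the OpsRamp docs.
import requests

BASE = &quot;https://YOUR_SUBDOMAIN.api.opsramp.com&quot;

# Obtain an OAuth2 token using client credentials
token = requests.post(
    f&quot;{BASE}/auth/oauth/token&quot;,
    data={&quot;grant_type&quot;: &quot;client_credentials&quot;,
          &quot;client_id&quot;: &quot;YOUR_KEY&quot;, &quot;client_secret&quot;: &quot;YOUR_SECRET&quot;},
).json()[&quot;access_token&quot;]

# List discovered resources (compute, storage, network, VMs, ...)
resp = requests.get(
    f&quot;{BASE}/api/v2/tenants/YOUR_TENANT_ID/resources/search&quot;,
    headers={&quot;Authorization&quot;: f&quot;Bearer {token}&quot;},
).json()
for resource in resp.get(&quot;results&quot;, []):
    print(resource.get(&quot;hostName&quot;), resource.get(&quot;resourceType&quot;))
&lt;/code&gt;&lt;/pre&gt;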
&lt;h3 id=&quot;learn-from-experts&quot;&gt;Getting Started&lt;/h3&gt;
  &lt;p style=&quot;font-size: 22px;&quot;&gt;Direct your IT operations teams to the activation and onboarding resources on this page to start taking full advantage today. You’ll quickly recognize the value of HPE Complete Care Service - ITOps powered by HPE OpsRamp Software and want to expand its functionality throughout your hybrid environment – including virtual machines, containers and workloads running on any manufacturer’s hardware.&lt;/p&gt;
  &lt;p style=&quot;font-size: 25px;&quot;&gt;Choose between HPE-assisted or self-service activation and onboarding:&lt;/p&gt;
&lt;h4 id=&apos;hpe-assisted&apos;&gt;HPE-Assisted Activation and Onboarding&lt;/h4&gt;
&lt;p style=&quot;font-size: 24px;&quot;&gt;Your HPE Account Service Manager (ASM) and a Global Support Remote specialist (GSR) are standing by to help guide your activation and onboarding process.&lt;/p&gt;
&lt;p style=&quot;font-size: 22px;&quot;&gt;Please engage your HPE Account Service Manager (ASM) to guide you through the following steps:&lt;/p&gt;
&lt;ol id=&quot;content-list&quot;&gt;
      &lt;li&gt;Complete the quick setup questionnaire that your HPE ASM will share with you via email.&lt;/li&gt;
      &lt;li&gt;A GSR will then be assigned to provide guidance through the onboarding and customization of your HPE Complete Care Service - ITOps command center. This may include assistance with setting up user access to resources, gateways, auto-discovery and dashboarding, customizing monitoring templates and reports, as well as setting up patching, scripting and alert automation.&lt;/li&gt;
&lt;/ol&gt;
&lt;h4 id=&apos;self-service-activation&apos;&gt;Self-service Activation and Onboarding&lt;/h4&gt;
&lt;p style=&quot;font-size: 24px;&quot;&gt;Resources for onboarding, activating, and configuring the HPE Complete Care Service - ITOps:&lt;/p&gt;
&lt;table id=&quot;content-table&quot;&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;td id=&quot;table-header-cell&quot;&gt;Video Tutorials&lt;/td&gt;
      &lt;td id=&quot;table-header-cell&quot;&gt;Blog Tutorials&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
&lt;tbody&gt;
    &lt;tr id=&quot;provisioning-activation&quot;&gt;
      &lt;td colspan=&quot;2&quot; id=&quot;section-title&quot;&gt;
        Activation and Onboarding
      &lt;/td&gt;
    &lt;/tr&gt;
&lt;tr&gt;
      &lt;td id=&quot;table-cell&quot;&gt;
        &lt;a href=&quot;https://www.youtube.com/watch?v=c0ZmdwACq2A&amp;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;
          &lt;img src=&quot;/img/stepsflex/CCITOps-video1.png&quot; alt=&quot;Step 1 Image&quot; style=&quot;width: 120px; height: 75px; margin: 0;&quot;&gt;
        &lt;/a&gt;
        &lt;span class=&quot;video-title&quot;&gt;Part 1: Configuring a Gateway&lt;/span&gt;
      &lt;/td&gt;
      &lt;td id=&quot;table-cell&quot; rowspan=&quot;4&quot;&gt;
        &lt;a href=&quot;https://developer.hpe.com/blog/self-activation-and-onboarding-in-hpe-complete-care-service-%E2%80%93-itops/&quot;&gt;
          Self-activation and onboarding in HPE Complete Care Service - ITOps
        &lt;/a&gt;
      &lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td id=&quot;table-cell&quot;&gt;
        &lt;a href=&quot;https://www.youtube.com/watch?v=a1GVV-b9hCI&amp;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;
          &lt;img src=&quot;/img/stepsflex/CCITOps-video2.png&quot; alt=&quot;Step 2 Image&quot; style=&quot;width: 120px; height: 75px; margin: 0;&quot;&gt;
        &lt;/a&gt;
        &lt;span class=&quot;video-title&quot;&gt;Part 2: Agentless SSH integration and monitoring templates&lt;/span&gt;
      &lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td id=&quot;table-cell&quot;&gt;
        &lt;a href=&quot;https://www.youtube.com/watch?v=htZwkW-zG00&amp;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;
          &lt;img src=&quot;/img/stepsflex/CCITOps-video3.png&quot; alt=&quot;Step 3 Image&quot; style=&quot;width: 120px; height: 75px; margin: 0;&quot;&gt;
        &lt;/a&gt;
        &lt;span class=&quot;video-title&quot;&gt;Part 3: Redfish Server integration and configuration&lt;/span&gt;
      &lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td id=&quot;table-cell&quot;&gt;
        &lt;a href=&quot;https://www.youtube.com/watch?v=MPTq-3EA60E&amp;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;
          &lt;img src=&quot;/img/stepsflex/CCITOps-video4.png&quot; alt=&quot;Step 4 Image&quot; style=&quot;width: 120px; height: 75px; margin: 0;&quot;&gt;
        &lt;/a&gt;
        &lt;span class=&quot;video-title&quot;&gt;Part 4: Creating a customized dashboard&lt;/span&gt;
      &lt;/td&gt;
    &lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3 id=&quot;additional-resources&quot;&gt;Additional Resources&lt;/h3&gt;
  &lt;ul&gt;
   &lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/platform/hpe-opsramp/home/&quot;&gt;HPE OpsRamp APIs and Integration Capabilities&lt;/a&gt;&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://docs.opsramp.com/&quot;&gt;HPE OpsRamp Software documentation&lt;/a&gt;&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=p0uA79qnLuk&quot;&gt;HPE Complete Care Service - ITOps video&lt;/a&gt;&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://www.hpe.com/psnow/doc/a00134891enw&quot;&gt;HPE Complete Care Service - ITOps Solution Brief&lt;/a&gt;&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://www.hpe.com/us/en/collaterals/collateral.a50009342enw.html&quot;&gt;HPE Complete Care Service - ITOps Contractual Support Service&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
    &lt;strong&gt;Webinar replays&lt;/strong&gt;
    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=qH93yD5KSL8&amp;list=PLtS6YX0YOX4f5TyRI7jUdjm7D9H4laNlF&quot;&gt;No More Blind Spots: How eBPF Transforms Observability&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=VnKSnf7G4-4&amp;list=PLtS6YX0YOX4f5TyRI7jUdjm7D9H4laNlF&quot;&gt;From log files to AI insights: The 60-year evolution of observability and AIOps&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=nDa_NQPbbVY&amp;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;DayN+ : A new way to look at observability&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;/ul&gt;
&lt;h3 id=&quot;contact&quot;&gt;Any Questions About HPE Complete Care Service - ITOps?&lt;/h3&gt;
  &lt;p style=&quot;font-size: 22px;&quot;&gt;Need help getting started with the observability service in HPE Complete Care Service - ITOps? Join our &lt;a href=&quot;https://developer.hpe.com/slack-signup/&quot; style=&quot;font-size: 22px;&quot;&gt;Slack Workspace&lt;/a&gt; and start a discussion in our &lt;a href=&quot;https://hpedev.slack.com/archives/C099G0ZBZTJ&quot; style=&quot;font-size: 22px;&quot;&gt;#hpe-complete-care-itops&lt;/a&gt; Slack channel.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Chapel]]></title><description><![CDATA[What is Chapel? Chapel is a programming language designed for productive parallel computing on large-scale systems. Chapel's design and…]]></description><link>https://developer.hpe.com/chapel/home/</link><guid isPermaLink="false">https://developer.hpe.com/chapel/home/</guid><content:encoded>&lt;h1&gt;What is Chapel?&lt;/h1&gt;
&lt;p&gt;Chapel is a programming language designed for productive parallel computing on large-scale systems. Chapel&apos;s design and implementation have been undertaken with portability in mind, permitting Chapel to run on multicore desktops and laptops, commodity clusters, and the cloud, in addition to the high-end supercomputers for which it was designed. Chapel&apos;s design and development are being led by Cray Inc. in collaboration with contributors from academia, computing centers, industry, and the open-source community.&lt;/p&gt;
&lt;p&gt;Chapel supports a multithreaded execution model via high-level abstractions for data parallelism, task parallelism, concurrency, and nested parallelism. Chapel&apos;s locale type enables users to specify and reason about the placement of data and tasks on a target architecture in order to tune for locality and affinity. Chapel supports global-view data aggregates with user-defined implementations, permitting operations on distributed data structures to be expressed in a natural manner. In contrast to many previous higher-level parallel languages, Chapel is designed around a multiresolution philosophy, permitting users to initially write very abstract code and then incrementally add more detail until they are as close to the machine as their needs require. Chapel supports code reuse and rapid prototyping via object-oriented design, type inference, and features for generic programming. Existing code can be integrated into Chapel programs (or vice-versa) via interoperability features.&lt;/p&gt;
&lt;p&gt;Chapel was designed from first principles rather than by extending an existing language. It is an imperative block-structured language, designed to be easy to learn for users of Python, C, C++, Fortran, Java, Matlab, and the like. While Chapel builds on concepts and syntax from many previous languages, its parallel features are most directly influenced by ZPL, High-Performance Fortran (HPF), and the Cray MTA™/Cray XMT™ extensions to C and Fortran.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://chapel-lang.org/docs/examples/index.html&quot;&gt;Try Chapel Tutorials for Yourself&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://chapel-lang.org/&quot;&gt;Learn more about Chapel&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://chapel-lang.org/publications/PMfPC-Chapel.pdf&quot;&gt;Read the Introduction to Chapel Whitepaper&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://chapel-lang.org/performance.html&quot;&gt;See Chapel&apos;s Performance&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://chapel.discourse.group/c/newsletters&quot;&gt;Check out Chapel Newsletters&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/chapel-lang/chapel&quot;&gt;Join HPE and the active Chapel community on GitHub &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h1&gt;Projects Powered by Chapel&lt;/h1&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/mhmerrill/arkouda&quot;&gt;Arkouda &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Arkouda allows a user to interactively issue massively parallel computations on distributed data from the Python3 interpreter, using functions and syntax that mimic NumPy, the underlying computational library used in the vast majority of Python data science workflows. The computational heart of Arkouda is a Chapel interpreter that accepts a pre-defined set of commands from a client (currently implemented in Python) and uses Chapel&apos;s built-in machinery for multi-locale and multithreaded execution. Arkouda has benefited greatly from Chapel&apos;s distinctive features and has also helped guide the development of the language.&lt;/p&gt;
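&lt;p&gt;As a rough illustration of that NumPy-like interface, the short Python sketch below connects to a running arkouda_server and performs a parallel computation on a distributed array; the host, port, and array size are arbitrary examples rather than recommended settings.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Minimal sketch: drive a running arkouda_server from the Python3 interpreter.
# The host, port, and sizes below are illustrative.
import arkouda as ak

ak.connect(&quot;localhost&quot;, 5555)      # connect to the Chapel-backed server
a = ak.randint(0, 10, 10**8)       # distributed array with a NumPy-style API
b = ak.arange(10**8)
print((a + b).sum())               # computed in parallel by the server
ak.disconnect()
&lt;/code&gt;&lt;/pre&gt;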
&lt;p&gt;&lt;a href=&quot;https://www.polymtl.ca/expertises/en/laurendeau-eric&quot;&gt;CHAMPS &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;CHAMPS is multi-physics software oriented toward fluid dynamics, written completely in Chapel and developed to solve systems of equations in a general manner. A key theme is to easily expand the capabilities of the software while keeping good performance on distributed memory. The software currently handles 2D and 3D unstructured grids to solve RANS equations with a finite volume approach using the Spalart-Allmaras turbulence model for closure. Different spatial discretization schemes and linear solvers can be used, including a variety of solvers from the PETSc library for which an API has been developed. Other C libraries for which APIs were developed in this project include the CGNS (CFD General Notation System) library, the Intel MKL library and the METIS library. The overall performance achieved with Chapel is comparable to equivalent C/C++/MPI approaches. Future developments will include multi-fidelity simulations in aerodynamics, aero-elasticity, and aero-icing.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/pnnl/chgl&quot;&gt;CHGL &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The Chapel Hypergraph Library (CHGL) is a library for hypergraph computation in the Chapel language. Hypergraphs generalize graphs, where a hypergraph edge can connect any number of vertices. Thus, hypergraphs capture high-order, high-dimensional interactions between multiple entities that are not directly expressible in graphs. CHGL is designed to provide HPC-class computation with high-level abstractions and modern language support for parallel computing on shared- and distributed memory systems.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.sciencedirect.com/science/article/abs/pii/S0167739X1930946X&quot;&gt;ChOp &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This project aims at programming distributed algorithms for solving big instances of combinatorial optimization problems, taking into account productivity, parallel efficiency, heterogeneity, and fault tolerance. This project comprises heuristic and exact optimization algorithms, and its main application is a distributed Branch-and-Bound for solving permutation-based combinatorial problems.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/sourceryinstitute/PAW/raw/gh-pages/PAW-ATM19/extendedAbstracts/PAW-ATM2019_abstract2.pdf&quot;&gt;ChplUltra &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;chplUltra is designed to simulate the dynamics of ultra-light dark matter for astrophysics. Ultralight dark matter is a relatively new proposal designed to alleviate some of the challenges faced by the more traditional WIMP (weakly interacting massive particles) candidates. It has a rich phenomenology, including the formation of Bose-Einstein condensate solitons and interference effects from the wave-like behavior of the particles. chplUltra is a pseudo-spectral fixed grid code designed to evolve the Schrodinger-Poisson equations. The code uses a Chapel distributed FFT routine, built around the serial FFTW library. Using Chapel allows the authors to rapidly extend the code and to simultaneously scale it out to ~100s of nodes. The code has been run on up to 512 nodes (18k cores) on a Cray XC system.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://cray.github.io/crayai/&quot;&gt;CrayAI &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;CrayAI is a suite of distributed machine learning workflow libraries designed with HPC in mind. These libraries are portable, running on anything from a laptop up to a supercomputer. The core back-end of these libraries is written in Chapel, while the user-facing interface is Python. CrayAI currently consists of Cray HPO and Cray FS. Cray HPO is a distributed black-box hyperparameter optimization framework and Cray FS is a distributed feature selection library.&lt;/p&gt;
&lt;hr&gt;
&lt;h1&gt;Any questions on Chapel?&lt;/h1&gt;
&lt;p&gt;Join the conversation by chat on our &lt;a href=&quot;https://gitter.im/chapel-lang/chapel&quot;&gt;Chapel Gitter Channel&lt;/a&gt; or by posting to our &lt;a href=&quot;https://chapel.discourse.group/latest&quot;&gt;Chapel Discourse Forum&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Determined AI]]></title><description><![CDATA[Building and training optimized models at scale is considered the most demanding and critical stage of machine learning (ML) development…]]></description><link>https://developer.hpe.com/determined-ai/home/</link><guid isPermaLink="false">https://developer.hpe.com/determined-ai/home/</guid><content:encoded>&lt;p&gt;Building and training optimized models at scale is considered the most demanding and critical stage of machine learning (ML) development. &lt;a href=&quot;https://www.determined.ai/&quot;&gt;Determined AI&lt;/a&gt; accelerates time-to-production with an open-source platform to build and train models faster and easier.&lt;/p&gt;
&lt;p&gt;Determined AI helps researchers and scientists focus on innovation and accelerate their time-to-delivery by removing the complexity and cost associated with machine learning development. This includes making it easy to set up, configure, manage and share workstations or AI clusters that run on-premises or in the cloud.&lt;/p&gt;
&lt;p&gt;Determined enables you to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Train models faster&lt;/strong&gt; using state-of-the-art distributed training without changing your model code.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Automatically find high-quality models&lt;/strong&gt; with advanced hyperparameter tuning from the creators of Hyperband.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Get more from your GPUs&lt;/strong&gt; with smart scheduling and cut cloud GPU costs by seamlessly using spot instances.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Track and reproduce your work&lt;/strong&gt; with experiment tracking that works out-of-the-box, covering code versions, metrics, checkpoints, and hyperparameters.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Determined integrates these features into an easy-to-use, high-performance machine learning environment — which means you can spend your time building models instead of managing infrastructure.&lt;/p&gt;
&lt;p&gt;Since the open-source project launched in 2020, and as a result of its focus on model training, Determined AI has quickly emerged as a leading training tool in the evolving machine learning software ecosystem. Its solution has been adopted by users across a wide range of industries, such as biopharmaceuticals, autonomous vehicles, defense contracting, and manufacturing.&lt;/p&gt;
&lt;p&gt;Call-to-action: &lt;a href=&quot;https://docs.determined.ai/latest/#get-started-locally&quot;&gt;Get Started With Determined&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;Projects&lt;/h1&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/determined-ai/determined&quot;&gt;Determined &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Determined is an open-source deep learning training platform that makes building models fast and easy.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.determined.ai/latest/&quot;&gt;Explore the Determined AI Documentation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
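&lt;p&gt;As a quick taste of the workflow, the sketch below submits an experiment with the Determined Python SDK. It assumes a reachable Determined master and a local my_model directory containing training code and an experiment configuration file; the URL, user, and paths are placeholders, so see the documentation above for complete, working examples.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Minimal sketch, not a full tutorial: master URL, credentials, and paths are placeholders.
from determined.experimental import client

client.login(master=&quot;http://localhost:8080&quot;, user=&quot;determined&quot;, password=&quot;&quot;)
exp = client.create_experiment(
    config=&quot;my_model/const.yaml&quot;,   # experiment configuration (hyperparameters, resources)
    model_dir=&quot;my_model&quot;,           # directory containing the training code
)
print(&quot;Started experiment&quot;, exp.id)
&lt;/code&gt;&lt;/pre&gt;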
&lt;hr&gt;
&lt;h2&gt;Workshops-on-Demand&lt;/h2&gt;
&lt;p&gt;Take advantage of our free, Jupyter-Notebook based workshops available in the HPE Developer &lt;a href=&quot;https://developer.hpe.com/hackshack/&quot;&gt;Hack Shack&lt;/a&gt;. These technical workshops provide you with an in-depth, hands-on learning experience where you can interact with and learn from the experts. Designed to fit your schedule, these workshops are available 24/7 – any time, from anywhere.&lt;/p&gt;
&lt;link rel=&quot;stylesheet&quot; href=&quot;https://www.w3schools.com/w3css/4/w3.css&quot;&gt;
&lt;div class=&quot;w3-container w3-center w3-margin-bottom&quot;&gt;
  &lt;a href=&quot;/hackshack/workshops&quot;&gt;&lt;button type=&quot;button&quot; class=&quot;button&quot;&gt;Try now!&lt;/button&gt;&lt;/a&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;h2&gt;Any questions on Determined AI or about the Determined open-source project?&lt;/h2&gt;
&lt;p&gt;Join the &lt;a href=&quot;https://determined-community.slack.com/join/shared_invite/zt-1f4hj60z5-JMHb~wSr2xksLZVBN61g_Q#/shared-invite/email&quot;&gt;Determined AI Slack Workspace&lt;/a&gt; and start a discussion.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[DragonHPC]]></title><description><![CDATA[DragonHPC is a programmable distributed runtime for HPC & AI workflows. The DragonHPC distributed runtime will power complex post-exascale…]]></description><link>https://developer.hpe.com/dragonhpc/home/</link><guid isPermaLink="false">https://developer.hpe.com/dragonhpc/home/</guid><content:encoded>&lt;p&gt;DragonHPC is a programmable distributed runtime for HPC &amp;#x26; AI workflows. The DragonHPC distributed runtime will power complex post-exascale workflows that can scale, that are cloud-native, that can access data efficiently, that can span multiple systems, and that can be used with multiple languages on heterogeneous hardware.&lt;/p&gt;
&lt;h2&gt;Key advantages of using DragonHPC&lt;/h2&gt;
&lt;p&gt;DragonHPC removes much of the friction developers and scientists face when approaching a computational problem with other frameworks and libraries:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Program across widely distributed resources (e.g., laptops, servers, cloud, supercomputers) as if they were all one computer.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Implement applications and workflows across Python, C, C++, and Fortran with interoperable high-performance objects instead of being locked into a single language.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Implement Python-based applications and workflows using the standard multiprocessing API instead of non-standard interfaces (see the sketch after this list).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Develop applications and highly dynamic workflows without the limitations of static execution graphs.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Orchestrate individual or collections of Python functions, binaries, and MPI processes.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Leverage RDMA and leading-edge HPC communication techniques through simple high-level interfaces and communication objects instead of communication through a filesystem.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
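&lt;p&gt;To make the multiprocessing point above concrete, here is a minimal sketch of the documented pattern: a standard multiprocessing program that selects the Dragon start method so its worker processes can be placed across the allocated nodes. It assumes the dragon package is installed and that the script is started through the Dragon launcher.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Minimal sketch of DragonHPC with the standard multiprocessing API.
# Assumes the dragon package is installed and the script is run via the
# Dragon launcher (for example: dragon pool_demo.py).
import dragon                      # makes the &quot;dragon&quot; start method available
import multiprocessing as mp

def square(x):
    return x * x

if __name__ == &quot;__main__&quot;:
    mp.set_start_method(&quot;dragon&quot;)  # workers may be spread across multiple nodes
    with mp.Pool(8) as pool:
        print(pool.map(square, range(16)))
&lt;/code&gt;&lt;/pre&gt;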
&lt;p&gt;Users can also access DragonHPC at any level of its architecture. This allows users to develop new components within Dragon that natively interoperate with existing components. Flexibility, composability, and the ability to adopt DragonHPC for applications, tools, and workflows with strict performance requirements dramatically improve user productivity.&lt;/p&gt;
&lt;h2&gt;Online resources for getting up and running with DragonHPC&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/DragonHPC/dragon&quot;&gt;Public open source code repository with example use cases &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://dragonhpc.github.io/dragon/doc/_build/html/index.html&quot;&gt;Public online documentation&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Grommet]]></title><description><![CDATA[Build awesome apps with Grommet Grommet helps you create responsive and accessible mobile-first projects for the web with an easy-to-use…]]></description><link>https://developer.hpe.com/grommet/home/</link><guid isPermaLink="false">https://developer.hpe.com/grommet/home/</guid><content:encoded>&lt;h2&gt;Build awesome apps with Grommet&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://v2.grommet.io/&quot;&gt;Grommet&lt;/a&gt; helps you create responsive and accessible mobile-first projects for the web with an &lt;a href=&quot;https://v2.grommet.io/components&quot;&gt;easy-to-use&lt;/a&gt;, &lt;a href=&quot;https://reactjs.org/&quot;&gt;react&lt;/a&gt;-based component library that is part design system and part development framework.&lt;/p&gt;
&lt;p&gt;Grommet is used by developers and designers alike to build both enterprise-class and consumer-grade applications that can be used in web, desktop, and mobile-friendly formats. It underpins many of HPE’s products such as &lt;a href=&quot;https://developer.hpe.com/greenlake/hpe-greenlake-platform/home/&quot;&gt;HPE GreenLake edge-to-cloud platform&lt;/a&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://v2.grommet.io/components&quot;&gt;View Grommet Components&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://grommet.slack.com/&quot;&gt;Join the conversation on Slack&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/grommet/grommet&quot;&gt;Join HPE and the active Grommet Community on GitHub &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;What’s new with Grommet v2?&lt;/h2&gt;
&lt;p&gt;HPE product designer at grommet.io, Chris Carlozzi, explains how Grommet is evolving.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://medium.com/grommet-io/whats-new-with-grommet-2-2f1883a91acb&quot;&gt;Read his blog post&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=WOy7qdNN1Fg&amp;#x26;t=5108s&quot;&gt;Watch Grommet v2 Launch&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Learn more about Grommet&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://v2.grommet.io/&quot;&gt;Go to the Grommet Homepage&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://codesandbox.io/s/github/grommet/grommet-sandbox?initialpath=box&amp;#x26;module=%2Fsrc%2FBox.js&quot;&gt;Use Grommet in CodeSandBox&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://storybook.grommet.io/?path=/story/components--all&quot;&gt;Access the Grommet Storybook&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://v2.grommet.io/play&quot;&gt;Check out the Grommet PlayGround&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h2&gt;HPE Design System&lt;/h2&gt;
&lt;p&gt;The HPE Design System demonstrates how Grommet can be themed and used to build user interfaces with your own brand. The HPE Design System is used within HPE to guide the design of the user interfaces that HPE creates.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://design-system.hpe.design/&quot;&gt;Go to the HPE Design System Homepage&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h1&gt;Workshops-on-Demand&lt;/h1&gt;
&lt;p&gt;Take advantage of our free, Jupyter-Notebook based Workshops-on-Demand available in the &lt;a href=&quot;/hackshack&quot;&gt;Hack Shack&lt;/a&gt;. These technical workshops provide you with an in-depth, hands-on learning experience where you can interact with and learn from the experts. Designed to fit your schedule, these workshops are available 24/7 – any time, from anywhere. Grommet workshops are available today.&lt;/p&gt;
&lt;link rel=&quot;stylesheet&quot; href=&quot;https://www.w3schools.com/w3css/4/w3.css&quot;&gt;
&lt;div class=&quot;w3-container w3-center w3-margin-bottom&quot;&gt;
  &lt;a href=&quot;/hackshack/workshops&quot;&gt;&lt;button type=&quot;button&quot; class=&quot;button&quot;&gt;Try now!&lt;/button&gt;&lt;/a&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;h2&gt;Any questions on Grommet?&lt;/h2&gt;
&lt;p&gt;Join the &lt;a href=&quot;https://grommet.slack.com/&quot;&gt;Grommet Slack Workspace&lt;/a&gt; and start a discussion.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE Alletra]]></title><description><![CDATA[Shed the complexity and silos inherent in conventional hybrid cloud environments with category-defining cloud-native data infrastructure…]]></description><link>https://developer.hpe.com/hpe-alletra/home/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-alletra/home/</guid><content:encoded>&lt;p&gt;Shed the complexity and silos inherent in conventional hybrid cloud environments with category-defining cloud-native data infrastructure that delivers a cloud operating and consumption experience wherever data lives. Stop managing infrastructure — and start simply accessing and utilizing it, as a service and on demand.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Get the agility of cloud — everywhere&lt;/li&gt;
&lt;li&gt;Run any app — without compromise&lt;/li&gt;
&lt;li&gt;Free your data across hybrid cloud&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;HPE Alletra Storage MP B10000 Web Service API v3&lt;/h1&gt;
&lt;p&gt;The HPE Alletra Storage MP B10000 platform offers a rich set of REST APIs to manage the system configuration, provision storage, and run other administrative operations. See the links below for additional information about these REST APIs and how to use them:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://www.hpe.com/support/AlletraMP-B10000-WSAPIV3-devguide&quot;&gt;HPE Alletra Storage MP B10000: Web Services API Developer Guide v3&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;/ws/api-spec&quot;&gt;HPE Alletra Storage MP B10000: Web Services API Documentation and v3 OpenAPI specification&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.hpe.com/psnow/doc/a00148521enw&quot;&gt;HPE Alletra Storage MP B10000 Web Services API v3 FAQ&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
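&lt;p&gt;To give a feel for the request flow, here is a minimal, hedged sketch in Python using the requests library. The array address, endpoint paths, payload fields, and header name below are placeholders rather than the documented v3 routes; take the actual authentication details and resource URIs from the Web Services API Developer Guide linked above.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import requests

ARRAY = &quot;https://b10000.example.com&quot;   # placeholder array address

# 1. Exchange array credentials for a session key (path and payload are assumptions)
auth = requests.post(ARRAY + &quot;/api/v3/credentials&quot;,
                     json={&quot;user&quot;: &quot;admin&quot;, &quot;password&quot;: &quot;secret&quot;},
                     verify=False, timeout=30)
session_key = auth.json().get(&quot;key&quot;)

# 2. Use the session key on later calls, for example to list volumes
#    (path, header name, and response shape are assumptions)
vols = requests.get(ARRAY + &quot;/api/v3/volumes&quot;,
                    headers={&quot;X-HP3PAR-WSAPI-SessionKey&quot;: session_key},
                    verify=False, timeout=30)
print(vols.status_code, len(vols.json().get(&quot;members&quot;, [])))
&lt;/code&gt;&lt;/pre&gt;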
&lt;p&gt;Join the HPE DEV Slack channel &lt;a href=&quot;https://hpedev.slack.com/archives/C08URLVQRRR&quot;&gt;#hpe-alletra-b10k-api&lt;/a&gt; to start a discussion.&lt;/p&gt;
&lt;h1&gt;Projects&lt;/h1&gt;
&lt;p&gt;Plugins, SDKs and documentation.&lt;/p&gt;
&lt;h2&gt;&lt;a href=&quot;https://github.com/hpe-storage/csi-driver&quot;&gt;CSI Driver &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;A Container Storage Interface (CSI) Driver for Kubernetes. The HPE CSI Driver for Kubernetes allows you to use a Container Storage Provider to perform data management operations on storage resources.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/api/hpe-nimble-csp/&quot;&gt;View the API documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://artifacthub.io/packages/helm/hpe-storage/hpe-csi-driver&quot;&gt;Helm Chart&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://artifacthub.io/packages/olm/community-operators/hpe-csi-operator&quot;&gt;Operator for Kubernetes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://access.redhat.com/containers/#/registry.connect.redhat.com/hpestorage/csi-driver-operator&quot;&gt;Operator for OpenShift&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://scod.hpedev.io/csi_driver/index.html&quot;&gt;Visit documentation on SCOD&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
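&lt;p&gt;To illustrate how the CSI driver is consumed from Kubernetes, here is a minimal sketch that uses the official kubernetes Python client to request a PersistentVolumeClaim. The StorageClass name is an assumption; substitute a class in your cluster that is backed by the HPE CSI Driver (see the SCOD documentation linked above).&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;from kubernetes import client, config

config.load_kube_config()   # or config.load_incluster_config() when running inside a pod

# Request a 10 GiB volume from a StorageClass served by the HPE CSI Driver
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name=&quot;demo-pvc&quot;),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=[&quot;ReadWriteOnce&quot;],
        storage_class_name=&quot;hpe-standard&quot;,   # assumed StorageClass name
        resources=client.V1ResourceRequirements(requests={&quot;storage&quot;: &quot;10Gi&quot;}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(namespace=&quot;default&quot;, body=pvc)
&lt;/code&gt;&lt;/pre&gt;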
&lt;h2&gt;&lt;a href=&quot;https://github.com/hpe-storage/scod&quot;&gt;Storage Container Orchestrator Documentation &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;The definitive source for end-user documentation on using Kubernetes and neighboring partner ecosystems with HPE Alletra.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://scod.hpedev.io/&quot;&gt;Explore the SCOD portal&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;&lt;a href=&quot;https://github.com/hpe-storage/array-exporter&quot;&gt;Prometheus Array Exporter &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;A Prometheus array exporter that may be deployed as a standalone binary or directly on Kubernetes. There&apos;s also an exporter for the CSI driver that may be deployed separately.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://hpe-storage.github.io/array-exporter&quot;&gt;Read the documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://scod.hpedev.io/csi_driver/metrics.html&quot;&gt;Learn about the CSI info metrics provider on SCOD&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
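&lt;p&gt;As a quick way to see what the exporter publishes, the sketch below fetches and parses its metrics endpoint with Python. The hostname and port are placeholders; check the exporter documentation linked above for the actual listen address and metric names.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import requests
from prometheus_client.parser import text_string_to_metric_families

# Placeholder address; point this at wherever the array exporter is listening
resp = requests.get(&quot;http://array-exporter.example.com:8080/metrics&quot;, timeout=10)
resp.raise_for_status()

for family in text_string_to_metric_families(resp.text):
    for sample in family.samples:
        print(sample.name, sample.labels, sample.value)
&lt;/code&gt;&lt;/pre&gt;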
&lt;h1&gt;Workshops-on-Demand&lt;/h1&gt;
&lt;p&gt;Take advantage of our free, Jupyter-Notebook based Workshops-on-Demand available in the &lt;a href=&quot;/hackshack/&quot;&gt;Hack Shack&lt;/a&gt;. These technical workshops provide you with an in-depth, hands-on learning experience where you can interact with and learn from the experts. Designed to fit your schedule, these workshops are available 24/7 – any time, from anywhere. A CSI workshop for HPE Alletra is available today.&lt;/p&gt;
&lt;link rel=&quot;stylesheet&quot; href=&quot;https://www.w3schools.com/w3css/4/w3.css&quot;&gt;
&lt;div class=&quot;w3-container w3-center w3-margin-bottom&quot;&gt;
  &lt;a href=&quot;/hackshack/workshops&quot;&gt;&lt;button type=&quot;button&quot; class=&quot;button&quot;&gt;Try now!&lt;/button&gt;&lt;/a&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;h2&gt;Any questions on HPE Alletra?&lt;/h2&gt;
&lt;p&gt;Join the &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;HPE DEV Slack Workspace&lt;/a&gt; and start a discussion in the &lt;a href=&quot;https://hpedev.slack.com/archives/C025D75HHGC&quot;&gt;#alletra&lt;/a&gt; channel.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE 3PAR and Primera]]></title><description><![CDATA[Power your mission-critical apps with extreme resiliency and unprecedented simplicity App-aware resiliency sees beyond the walls of storage…]]></description><link>https://developer.hpe.com/hpe-3par-and-primera/home/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-3par-and-primera/home/</guid><content:encoded>&lt;p&gt;Power your mission-critical apps with extreme resiliency and unprecedented simplicity&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;App-aware resiliency sees beyond the walls of storage to predict/prevent disruptions&lt;/li&gt;
&lt;li&gt;Predictive acceleration safely consolidates every mission-critical app onto the same platform with extreme low-latency performance&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Projects&lt;/h2&gt;
&lt;p&gt;SDKs, Plugins and Language Bindings&lt;/p&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/HewlettPackard/hpe3par_pstoolkit&quot;&gt;PowerShell Toolkit &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;HPE 3PAR and Primera StoreServ Storage PowerShell Toolkit provides storage administrators the convenience of managing HPE 3PAR StoreServ Storage Systems from a Microsoft Windows PowerShell environment.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://myenterpriselicense.hpe.com/cwp-ui/free-software/3PARPSToolkit&quot;&gt;View Product Details&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/HewlettPackard/hpe3par_pstoolkit&quot;&gt;Chef &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Chef Cookbook and examples for HPE 3PAR StoreServ.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://supermarket.chef.io/cookbooks/hpe3par&quot;&gt;Go to Chef Supermarket&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/HewlettPackard/hpe3par_ansible_module&quot;&gt;Ansible &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;The HPE 3PAR and Primera modules for Ansible enable automation of storage provisioning for HPE 3PAR and Primera StoreServ arrays.&lt;/p&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/HewlettPackard/hpe3par_puppet_module&quot;&gt;Puppet &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Puppet module and examples for HPE 3PAR StoreServ.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://forge.puppet.com/modules/hewlettpackardenterprise/hpe3par&quot;&gt;Go to Puppet Forge&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/hpe-storage/array-exporter&quot;&gt;Prometheus Array Exporter &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;A Prometheus array exporter that may be deployed as a standalone binary or directly on Kubernetes. There&apos;s also an exporter for the CSI driver that may be deployed separately.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://hpe-storage.github.io/array-exporter&quot;&gt;Read the documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://scod.hpedev.io/csi_driver/metrics.html&quot;&gt;Learn about the CSI info metrics provider on SCOD&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/hpe-storage/python-hpedockerplugin&quot;&gt;Docker &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;The HPE Docker Volume Plugin provides persistent block storage for containerized applications using HPE 3PAR and Primera StoreServ.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://hub.docker.com/r/hpestorage/legacyvolumeplugin&quot;&gt;Go to Docker Store&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/HewlettPackard&quot;&gt;Language SDKs &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Client libraries in different languages that provide access to the HPE 3PAR array over WSAPI.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/hpe-storage/python-3parclient&quot;&gt;Go to Python Client&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/hpe3par_ruby_sdk&quot;&gt;Go to Ruby Client&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
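&lt;p&gt;As an example, here is a hedged sketch using the Python client (pip install python-3parclient). The WSAPI URL, credentials, and CPG name are placeholders for your own environment.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;from hpe3parclient import client, exceptions

cl = client.HPE3ParClient(&quot;https://array.example.com:8080/api/v1&quot;)   # placeholder WSAPI URL

try:
    cl.login(&quot;user&quot;, &quot;password&quot;)
    # Create a 10 GiB thin-provisioned volume in an existing CPG, then read it back
    cl.createVolume(&quot;demo-vol&quot;, &quot;SSD_r6&quot;, 10240, {&quot;tpvv&quot;: True})
    print(cl.getVolume(&quot;demo-vol&quot;))
except exceptions.ClientException as err:
    print(&quot;WSAPI error:&quot;, err)
finally:
    cl.logout()
&lt;/code&gt;&lt;/pre&gt;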
&lt;h3&gt;&lt;a href=&quot;https://github.com/hpe-storage/&quot;&gt;OpenStack &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;The HPE 3PAR and Primera Cinder storage driver for use with OpenStack implementations.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.openstack.org/cinder/pike/configuration/block-storage/drivers/hpe-3par-driver.html&quot;&gt;Go to OpenStack Driver&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h2&gt;Any questions on 3PAR and Primera?&lt;/h2&gt;
&lt;p&gt;Join the &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;HPEDEV Slack Workspace&lt;/a&gt; and start a discussion in our &lt;a href=&quot;https://hpedev.slack.com/archives/CRU01FTRS&quot;&gt;#3par-primera&lt;/a&gt; channel.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE Cray Programming Environment]]></title><description><![CDATA[HPE Cray Programming Environment (CPE) suite offers programmers a comprehensive set of tools for developing, porting, debugging, and tuning…]]></description><link>https://developer.hpe.com/hpe-cray-programming-environment/home/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-cray-programming-environment/home/</guid><content:encoded>&lt;style&gt;
li {
    font-size: 27px;
    line-height: 33px;
    max-width: none;
}
&lt;/style&gt;
&lt;p&gt;HPE Cray Programming Environment (CPE) suite offers programmers a comprehensive set of tools for developing, porting, debugging, and tuning applications. The programming environment simplifies the transition to new hardware architectures and configurations by automatically applying optimizations on HPC applications that use existing programming models with a simple recompile.&lt;/p&gt;
&lt;p&gt;The environment provides:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;User environment (compiler drivers, hugepages, craype-api)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Compilers, programming languages, and models&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Scalable communication libraries&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Scientific and math libraries&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Debugging tools&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Profiling and performance optimization tools&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Learn more about the HPE Cray Programming Environment components by exploring the &lt;a href=&quot;https://cpe.ext.hpe.com/docs/&quot;&gt;CPE documentation&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;HPE Cray user environment&lt;/h2&gt;
&lt;p&gt;Our user environment provides libraries that support code compilation and development environment setup. It includes compiler drivers, the hugepages utility, and the CPE API (craype-api).&lt;/p&gt;
&lt;p&gt;PrgEnv modules provide wrappers (cc, CC, ftn) for both CCE and third-party compiler drivers. These wrappers call the correct compiler with appropriate options to build and link applications with relevant libraries as required by loaded modules. Note that only dynamic linking is supported. These wrappers replace direct calls to compiler drivers in Makefiles and build scripts.&lt;/p&gt;
&lt;p&gt;Hugepages: A standard Linux operating system typically supports base pages of 4 KiB. For high-performance parallel workloads that involve data movement operations across a high-speed network, larger page sizes generally yield improved communication performance. On HPE Cray EX supercomputing systems, the Cray Operating System (COS) supports non-standard large page sizes to offer an additional layer of optimization. The HPE CPE software stack is tightly integrated with COS and exposes this feature via craype-hugepages modules. Users can leverage this important optimization by loading specific craype-hugepages modules in their build and runtime environments.&lt;/p&gt;
&lt;p&gt;The CrayPE API (craype-api) provides software package integration with enhanced control when building multiple versions of a software product modulefile.&lt;/p&gt;
&lt;h2&gt;HPE Cray Compiling Environment&lt;/h2&gt;
&lt;p&gt;Our Fortran, C, and C++ compilers are designed to help extract maximum performance from the systems regardless of the underlying architecture, supporting ARM and x86-64 (Intel and AMD) processors as well as AMD and NVIDIA accelerators.&lt;/p&gt;
&lt;p&gt;The compilers identify regions of computation that are either sequential scalar or vector parallel, and automatically exploit these capabilities for the targeted system.&lt;/p&gt;
&lt;p&gt;The compilers give programmers optimization feedback with an annotated listing of source code for easier application tuning. Our compilers integrate with debuggers and performance tools in the HPE Cray Programming Environment suite—enhancing each other&apos;s capability to generate correct and performant code.&lt;/p&gt;
&lt;p&gt;The suite also includes integration with GNU, Intel, AMD, and NVIDIA programming environments, so developers can choose between multiple compilers and still use the libraries, debuggers, and performance analysis tools included in our suite to help optimize application performance.&lt;/p&gt;
&lt;p&gt;We focus on standards compliance for code safety, application portability, and investment protection. Our compilers support standard programming languages (Fortran, C/C++, and UPC) and standard programming models such as OpenMP and OpenACC.&lt;/p&gt;
&lt;h2&gt;HPE Cray Message Passing Toolkit (CMPT)&lt;/h2&gt;
&lt;p&gt;CMPT is a collection of libraries that provide portable, efficient, and flexible mechanisms for performing data transfers between parallel processes. It comprises HPE Cray MPI, HPE Cray OpenSHMEMX, HPE Cray PMI, and HPE Cray DSMML libraries.&lt;/p&gt;
&lt;p&gt;HPE Cray MPI is an MPICH ABI compatible library tuned for Intel, AMD, and ARM CPUs as well as AMD and NVIDIA GPUs. It is a highly scalable implementation, customized for low latency and high bandwidth, both on-node and off-node, for point-to-point and collective communications. It is also highly optimized and tuned for the HPE Slingshot network architecture. Strategic optimizations for MPI I/O, MPI_THREAD_MULTIPLE, remote memory access (RMA), and integration with the performance analysis tools in the suite help deliver ideal application performance for today&apos;s HPC codes.&lt;/p&gt;
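&lt;p&gt;To make the point-to-point model concrete, here is a minimal example using mpi4py, a common Python binding that sits on top of the system MPI library (mpi4py is not part of CPE itself and is only used here for illustration). Launch it with the job launcher on your system, for example srun or mpiexec, across at least two ranks.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    # Send a small Python object from rank 0 to rank 1
    comm.send({&quot;payload&quot;: list(range(4))}, dest=1, tag=7)
elif rank == 1:
    data = comm.recv(source=0, tag=7)
    print(&quot;rank 1 received:&quot;, data)
&lt;/code&gt;&lt;/pre&gt;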
&lt;p&gt;HPE Cray OpenSHMEMX is a highly scalable implementation of the OpenSHMEM standards library interface specification. It supports Partitioned Global Address Space (PGAS) style of programming to cater to the needs of distributed memory applications with highly irregular and random communication models. It is optimized specifically for system architectures involving Intel, AMD, and ARM CPUs. HPE Cray OpenSHMEMX is tuned for exposing the rich set of features available in HPE Slingshot interconnect through operations like remote memory access (RMA), atomic memory updates, put-with-signal, scalable-collective communication, memory ordering, and effective multithreading.&lt;/p&gt;
&lt;h2&gt;HPE Cray Scientific and Math Libraries&lt;/h2&gt;
&lt;p&gt;This suite offers a comprehensive collection of highly tuned linear algebra subroutines designed to help extract maximum performance from the system with the least amount of effort.&lt;/p&gt;
&lt;p&gt;Customized Cray LibSci (including optimized BLAS, LAPACK, ScaLAPACK, and IRT), Cray LibSci_ACC (GPU accelerated BLAS, and LAPACK), and Cray FFTW (optimized fast Fourier transform routines) are designed to take full advantage of the underlying hardware, optimizing for both intra-node and inter-node performance on all HPE HPC systems.&lt;/p&gt;
&lt;p&gt;The libraries are highly tuned and optimized to select performant algorithms at runtime for a variety of HPE HPC systems. They also feature simplified interfaces into complex software (no source code changes required to access optimized algorithms) and integrate with the HPE Cray Compiling Environment for better productivity.&lt;/p&gt;
&lt;p&gt;NetCDF, HDF5, and Parallel NetCDF I/O libraries are built with the supported compiling environments and are integrated with the HPE Cray Compiling Environment.&lt;/p&gt;
&lt;h2&gt;Debugging tools&lt;/h2&gt;
&lt;p&gt;The HPE Cray Programming Environment offers traditional debuggers combined with new innovative techniques. Together, these technologies allow users to address debugging problems at a broader range and scale than conventional techniques. This means that programmers can spend less time debugging and more time creating. Included are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Comparative Debugger: This market-unique tool helps programmers uncover issues by running two applications side by side. If the values of the selected data structures diverge, the user is notified that an error may exist. This capability is useful for locating errors that are introduced when applications are modified through code, compiler, or library changes, and for application porting between architectures (for example, between CPUs and GPUs) or programming models.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;GDB for HPC: Based on the popular GDB command-line debugger used to debug applications compiled with Fortran, C, and C++ compilers, with enhancements that provide a GDB debugging experience for applications that run at scale across many nodes. The tool enables users to run a traditional scalable debugging session, either by launching an application or by attaching to an already-running application. A GDB for HPC debug session retrieves debug information from thousands of processes and presents merged backtraces and data, removing vast amounts of duplicate information.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Valgrind for HPC: Parallel memory analysis tool based on Valgrind debugger used for applications compiled with Fortran, C, and C++ compilers—it aggregates common errors into a single output record for easier analysis of potential memory problems within applications that run at scale.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Stack Trace Analysis Tool (STAT): Helps developers identify if an application is hung or still making progress when running. Generates a merged backtrace for applications so users can get a better insight into application behavior at a function level.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Tool for Abnormal Termination Processing (ATP): When an application crashes, the tool detects a signal and generates a merged backtrace resulting in a minimal core file set so that programmers do not have to plough through an enormous number of core files when debugging the application.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Sanitizers for HPC: Help developers detect memory and thread errors for easier analysis and debugging of their applications at scale by aggregating and analyzing the output of LLVM sanitizers.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;We also offer support for traditional debugging mechanisms via integration with TotalView by Perforce and Arm Forge.&lt;/p&gt;
&lt;h2&gt;Profiling and performance optimization tools&lt;/h2&gt;
&lt;p&gt;A comprehensive collection of tools designed to reduce the time and effort associated with porting and tuning applications on HPE and HPE Cray systems. We offer different tools and experiments to fit different developer needs, along with a choice of interfaces for ease of use.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Performance analysis tool (PAT) brings valuable insight when analyzing bottlenecks to improve performance of applications that run across the whole system. The tool exposes a wide set of indicators, such as computation, communication, I/O, and memory statistics and displays a program’s top time consumers and bottlenecks (via unique and critical load balance analysis) for jobs at scale. It then automatically generates observations and suggestions to improve code performance.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;As ease of use is an important feature of the tool suite, both simple and advanced interfaces are available, offering both a simple path to get started and a wealth of capability for analyzing the most complex codes.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Programmers can quickly assess the type and severity of performance issues by using our visualization tool, which complements text reports and summarizes programs’ performance data in graphs and charts, allowing users to easily drill down to get to the bottom of issues.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Code parallelization assistant helps developers reveal hidden potential of their application via code restructuring. The tool extends our existing performance analysis and visualization technology by combining performance statistics and program source code visualization with our compiling environment optimization feedback. This tool can easily navigate through source code to highlight dependencies or bottlenecks during the optimization phase of program development or porting.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Using the program library provided by our compiling environment and the performance data collected by our performance measurement and analysis tools, users can navigate through their source code to understand which high-level loops could benefit from OpenMP parallelism.&lt;/p&gt;
&lt;hr&gt;
&lt;h1&gt;Any questions on HPE Cray Programming Environment?&lt;/h1&gt;
&lt;p&gt;Join the &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;HPE Developer Slack Workspace&lt;/a&gt; and start a discussion in our &lt;a href=&quot;https://hpedev.slack.com/archives/C04TG4XJBL7&quot;&gt;#hpe-cray-programming-environment&lt;/a&gt; channel.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE Deep Learning Cookbook]]></title><description><![CDATA[It is a common wisdom today, that to start a deep learning exploration one needs a GPU-enabled system and one of the existing open source…]]></description><link>https://developer.hpe.com/hpe-deep-learning-cookbook/home/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-deep-learning-cookbook/home/</guid><content:encoded>&lt;p&gt;It is a common wisdom today, that to start a deep learning exploration one needs a GPU-enabled system and one of the existing open source deep learning frameworks. But which GPU box to choose? How many GPUs to put in a system? How many systems to put in a cluster and which interconnect to use? Which framework to pick? Answers to these questions are not obvious. That’s why we decided to create HPE Deep Learning Cookbook – a set of tools to characterize deep learning workloads and to recommend optimal hardware/software (HW/SW) stack for any given workload. Our Cookbook consists of several key assets:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;HPE Deep Learning Benchmarking Suite:&lt;/strong&gt; automated benchmarking tool to collect performance measurements on various HW/SW configurations in a unified way.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;HPE Deep Learning Performance Guide:&lt;/strong&gt; a web-based tool which provides access to a knowledge base of benchmarking results. It enables querying and analysis of measured results as well as performance prediction based on analytical performance models.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reference Designs:&lt;/strong&gt; hardware/software recipes for selected workloads.&lt;/p&gt;
&lt;p&gt;Recommendations with our Deep Learning Cookbook are based on a massive collection of performance results for various deep learning workloads on different HW/SW stacks, and analytical performance models. The combination of real measurements and analytical performance models enables us to estimate the performance of any workload and to recommend an optimal hardware/software stack for that workload. Additionally, we use the Cookbook internally to detect bottlenecks in existing hardware and to guide the design of future systems for artificial intelligence and deep learning.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://community.hpe.com/t5/Behind-the-scenes-Labs/The-Deep-Learning-Cookbook/ba-p/6967323#.WhX-xVWnFhF&quot;&gt;Read a blog post about the HPE Deep Learning Cookbook&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://youtu.be/ao_DeE9lxvk&quot;&gt;Watch a short video introducing the HPE Deep Learning Cookbook&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h1&gt;Components of the Cookbook&lt;/h1&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/HewlettPackard/dlcookbook-dlbs&quot;&gt;HPE Deep Learning Benchmarking Suite &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;An automated benchmarking tool that makes it easy to run performance tests with the most popular deep learning frameworks. It enables consistent and reproducible benchmark experiments on various hardware/software combinations.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://hewlettpackard.github.io/dlcookbook-dlbs/#/index?id=deep-learning-benchmarking-suite&quot;&gt;Read the documentation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;HPE Deep Learning Performance Guide&lt;/h3&gt;
&lt;p&gt;A web-based tool which provides access to a knowledge base of benchmarking results.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://dlpg.labs.hpe.com/&quot;&gt;Check the tool&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h1&gt;Related resources&lt;/h1&gt;
&lt;p&gt;Characterisation and Benchmarking of Deep Learning&lt;/p&gt;
&lt;p&gt;A talk at HPC User Forum introducing HPE Deep Learning Cookbook&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=lgK0BlXdOCw&quot;&gt;Watch the talk&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;HPE Introduces New Set of Artificial Intelligence Offerings&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://news.hpe.com/hpe-introduces-new-set-of-artificial-intelligence-platforms-and-services/&quot;&gt;Read the press release&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[HPE Ezmeral Software]]></title><description><![CDATA[HPE Ezmeral Software is a unified platform that streamlines AI and GenAI development and deployment. Its open architecture and flexible…]]></description><link>https://developer.hpe.com/hpe-ezmeral/home/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-ezmeral/home/</guid><content:encoded>&lt;p&gt;&lt;a id=&quot;top&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;HPE Ezmeral Software is a unified platform that streamlines AI and GenAI development and deployment. Its open architecture and flexible deployment options empower organizations to efficiently build and deploy AI solutions across hybrid cloud environments. By providing a comprehensive suite of tools for data management, model training, and inference, HPE Ezmeral Software helps address common AI challenges such as data heterogeneity, security compliance, scalability, and management of a diverse ecosystem of AI tools.&lt;/p&gt;
&lt;p&gt;HPE Ezmeral Software is composed of two components:&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;http://www.hpe.com/datafabric&quot;&gt;HPE Ezmeral Data Fabric&lt;/a&gt; is a centralized data management platform that simplifies access, governance, and security across diverse data types. By federating data silos and enabling instant access, HPE Ezmeral Data Fabric empowers organizations to unlock the full potential of their data for AI and data-driven initiatives.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/hpe-ezmeral-unified-analytics.html&quot;&gt;HPE Ezmeral Unified Analytics&lt;/a&gt; is a comprehensive platform that simplifies AI and ML development, deployment, and management across hybrid multicloud environments. By providing self-service access to diverse data sources and a unified set of tools, HPE Ezmeral Unified Analytics accelerates time-to-value for AI initiatives. Its scalable architecture and support for popular open-source frameworks enable organizations to train, tune, and deploy models efficiently, while its centralized monitoring capabilities ensure optimal performance and governance.&lt;/p&gt;
&lt;h2&gt;Any questions on HPE Ezmeral Software?&lt;/h2&gt;
&lt;p&gt;Join the &lt;a href=&quot;https://developer.hpe.com/slack-signup/&quot;&gt;HPE Developer Community Slack Workspace&lt;/a&gt; and start a discussion in our &lt;a href=&quot;https://hpedev.slack.com/archives/C01BB50LG4W&quot;&gt;#hpe-ezmeral-runtime-enterprise&lt;/a&gt;, &lt;a href=&quot;https://hpedev.slack.com/archives/C055D1EECAK&quot;&gt;#hpe-ezmeral-unified-analytics-software&lt;/a&gt;, and &lt;a href=&quot;https://hpedev.slack.com/archives/CU3JRBTB7&quot;&gt;#hpe-ezmeral-data-fabric-software&lt;/a&gt; Slack channels.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE Ezmeral Data Fabric]]></title><description><![CDATA[HPE Ezmeral Data Fabric centralizes different data types across on-premises, multiple clouds, and edge deployments into a single logical…]]></description><link>https://developer.hpe.com/hpe-ezmeral-data-fabric/home/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-ezmeral-data-fabric/home/</guid><content:encoded>&lt;p&gt;HPE Ezmeral Data Fabric centralizes different data types across on-premises, multiple clouds, and edge deployments into a single logical data store.  It breaks down silos and stops the endless cycle of merging, remerging, and redeploying data silos into new silos.&lt;/p&gt;
&lt;p&gt;HPE Ezmeral Data Fabric allows data analysts, engineers, and scientists to deliver real-time and predictive models organizations can trust. Built-in industry-standard APIs, languages, and protocols increase productivity by preserving existing access mechanisms without low-value refactoring.&lt;/p&gt;
&lt;p&gt;Supports files, objects, NoSQL databases, and streams:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.datafabric.hpe.com/70/MapROverview/File-Store.html#MapR-XD&quot;&gt;Data Fabric file store&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.datafabric.hpe.com/70/MapROverview/HPE-Ezmeral-Data-Fabric-Object-Store.html&quot;&gt;Data Fabric object store&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.datafabric.hpe.com/70/MapROverview/maprDB-overview.html#maprDB-overview&quot;&gt;Data Fabric database&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.datafabric.hpe.com/70/MapROverview/c_mapr_streams.html&quot;&gt;Data Fabric data streams&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
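&lt;p&gt;Because the data fabric presents a global namespace through standard interfaces, existing file-based code keeps working unchanged. Below is a minimal sketch using plain Python file I/O against an assumed POSIX/NFS mount point and cluster name; adjust the path for your environment.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import csv

# Assumed mount point and cluster name in the global namespace
path = &quot;/mapr/my.cluster.com/user/demo/readings.csv&quot;

# Write a small CSV file into the data fabric using ordinary file I/O
with open(path, &quot;w&quot;, newline=&quot;&quot;) as f:
    writer = csv.writer(f)
    writer.writerow([&quot;sensor&quot;, &quot;value&quot;])
    writer.writerow([&quot;s1&quot;, 42])

# Read it back the same way
with open(path) as f:
    for row in csv.reader(f):
        print(row)
&lt;/code&gt;&lt;/pre&gt;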
&lt;p&gt;The &lt;a href=&quot;https://docs.datafabric.hpe.com/70/c_ecosystem_intro.html&quot;&gt;HPE Ezmeral Ecosystem Pack&lt;/a&gt; includes certified open-source tools and engines that integrate directly on top of the data fabric. It enables in-place analytics no matter where data is located and reduces time spent integrating and configuring open-source tools.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://docs.datafabric.hpe.com/home/&quot;&gt;API Documentation&lt;/a&gt; for app development.&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;Tutorials on GitHub&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/mapr-demos/mapr-music&quot;&gt;Music Catalog: REST and GraphQL tutorial &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;a href=&quot;https://github.com/mapr-demos/mapr-smart-home&quot;&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/mapr-demos/mapr-smart-home&quot;&gt;Smart Home: IoT tutorial &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/mapr-demos/predictive-maintenance&quot;&gt;Ezmeral Data Fabric for Predictive Maintenance &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/mapr-demos/customer360&quot;&gt;Customer 360 View &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/mapr-demos/finserv-application-blueprint&quot;&gt;Application for Processing Stock Market Trade Data &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h2&gt;Munch &amp;#x26; Learn&lt;/h2&gt;
&lt;h4&gt;What&apos;s a data fabric and how does it work?&lt;/h4&gt;
&lt;div class=&quot;gatsby-resp-iframe-wrapper&quot; style=&quot;padding-bottom: 56.25%; position: relative; height: 0; overflow: hidden; margin-bottom: 1.0725rem&quot; &gt; &lt;iframe src=&quot;https://www.youtube.com/embed/qi6sTvu8osk&quot; frameborder=&quot;0&quot; allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture&quot; allowfullscreen=&quot;&quot; style=&quot; position: absolute; top: 0; left: 0; width: 100%; height: 100%; &quot;&gt;&lt;/iframe&gt; &lt;/div&gt;
&lt;hr&gt;
&lt;h2&gt;YouTube Videos&lt;/h2&gt;
&lt;h4&gt;How to size a data fabric system&lt;/h4&gt;
&lt;div class=&quot;gatsby-resp-iframe-wrapper&quot; style=&quot;padding-bottom: 56.25%; position: relative; height: 0; overflow: hidden; margin-bottom: 1.0725rem&quot; &gt; &lt;iframe src=&quot;https://www.youtube.com/embed/6khp9SanXhY&quot; frameborder=&quot;0&quot; allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture&quot; allowfullscreen=&quot;&quot; style=&quot; position: absolute; top: 0; left: 0; width: 100%; height: 100%; &quot;&gt;&lt;/iframe&gt; &lt;/div&gt;
&lt;h4&gt;Practical Erasure Coding in a Data Fabric&lt;/h4&gt;
&lt;div class=&quot;gatsby-resp-iframe-wrapper&quot; style=&quot;padding-bottom: 56.25%; position: relative; height: 0; overflow: hidden; margin-bottom: 1.0725rem&quot; &gt; &lt;iframe src=&quot;https://www.youtube.com/embed/-6IBKLiOb_Q&quot; frameborder=&quot;0&quot; allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture&quot; allowfullscreen=&quot;&quot; style=&quot; position: absolute; top: 0; left: 0; width: 100%; height: 100%; &quot;&gt;&lt;/iframe&gt; &lt;/div&gt;
&lt;h4&gt;Data Fabric File and Object Store Overview&lt;/h4&gt;
&lt;div class=&quot;gatsby-resp-iframe-wrapper&quot; style=&quot;padding-bottom: 56.25%; position: relative; height: 0; overflow: hidden; margin-bottom: 1.0725rem&quot; &gt; &lt;iframe src=&quot;https://www.youtube.com/embed/S19rkDF_oPs&quot; frameborder=&quot;0&quot; allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture&quot; allowfullscreen=&quot;&quot; style=&quot; position: absolute; top: 0; left: 0; width: 100%; height: 100%; &quot;&gt;&lt;/iframe&gt; &lt;/div&gt;
&lt;p&gt;To learn more about HPE Ezmeral Data Fabric File and Object Store, check out the HPE article &lt;a href=&quot;https://community.hpe.com/t5/HPE-Ezmeral-Uncut/HPE-Ezmeral-Data-Fabric-File-and-Object-Store-Benefits-and/ba-p/7168604#.YrHKV3ZByXI&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;Free On-Demand Training&lt;/h2&gt;
&lt;p&gt;Learn for free with online courses that teach you how to build applications, secure, and administer HPE Ezmeral. Visit &lt;a href=&quot;https://learn.software.hpe.com/&quot;&gt;HPE Ezmeral Learn On-Demand&lt;/a&gt; to enroll.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://learn.software.hpe.com/page/developer-essentials&quot;&gt;Big Data Essentials&lt;/a&gt;. The challenges of modernizing the IT landscape are covered in these courses on big data, open source tools, Apache Spark, Data Fabric, Database, Event Streams, and more.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://learn.software.hpe.com/page/artificial-intelligence&quot;&gt;Artificial Intelligence &amp;#x26; Machine Learning&lt;/a&gt;. Learn the basics of AI and ML, machine learning project planning, and preparing and implementing the ML pipeline.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://learn.software.hpe.com/page/security&quot;&gt;Zero Trust &amp;#x26; Data Security&lt;/a&gt;. Courses outlining emerging data security threats, SPIFFE-SPIRE, what zero trust architecture is, and technical details of HPE Ezmeral Data Fabric.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://learn.software.hpe.com/page/analytics&quot;&gt;Apache Spark &amp;#x26; SQL Analytics&lt;/a&gt;. Intro courses covering Apache Spark 3 essentials, Spark and Kubernetes, Apache Drill, and SQL Analytics using Apache Drill.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://learn.software.hpe.com/page/developer-essentials&quot;&gt;Kubernetes and Stateful Applications&lt;/a&gt;. What applications do, how Kubernetes works, plus access an HPE Workshop On-demand covering HPE Ezmeral Runtime.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://learn.software.hpe.com/page/data-fabric&quot;&gt;HPE Ezmeral Data Fabric&lt;/a&gt;. Technical overview courses on HPE Ezmeral Data Fabric architecture and implementation, database essentials, and event streams pub-sub messaging.&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h2&gt;Workshops-on-Demand&lt;/h2&gt;
&lt;p&gt;Take advantage of our free, Jupyter-Notebook based Workshops-on-Demand available in the &lt;a href=&quot;/hackshack/&quot;&gt;Hack Shack&lt;/a&gt;. These technical workshops provide you with an in-depth, hands-on learning experience where you can interact with and learn from the experts. Designed to fit your schedule, these workshops are available 24/7 – any time, from anywhere. HPE Ezmeral Data Fabric workshops are available today.&lt;/p&gt;
&lt;link rel=&quot;stylesheet&quot; href=&quot;https://www.w3schools.com/w3css/4/w3.css&quot;&gt;
&lt;div class=&quot;w3-container w3-center w3-margin-bottom&quot;&gt;
  &lt;a href=&quot;/hackshack/workshops&quot;&gt;&lt;button type=&quot;button&quot; class=&quot;button&quot;&gt;Try now!&lt;/button&gt;&lt;/a&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;h2&gt;Any questions on Ezmeral Data Fabric?&lt;/h2&gt;
&lt;p&gt;Join the &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;HPEDEV Slack Workspace&lt;/a&gt; and start a discussion in our &lt;a href=&quot;https://hpedev.slack.com/archives/CU3JRBTB7&quot;&gt;#ezmeral-data-fabric&lt;/a&gt; channel.&lt;/p&gt;
&lt;p&gt;Not a Slack user? You can also ask your questions in our &lt;a href=&quot;https://hpe.com/forum/ezmeral&quot;&gt;Ezmeral Forum&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE GreenLake]]></title><description><![CDATA[Powered by the HPE GreenLake edge-to-cloud platform, HPE GreenLake helps you achieve data-first modernization by bringing the cloud…]]></description><link>https://developer.hpe.com/hpe-greenlake/home/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-greenlake/home/</guid><content:encoded>&lt;p&gt;Powered by the HPE GreenLake edge-to-cloud platform, HPE GreenLake helps you achieve data-first modernization by bringing the cloud experience to your distributed apps and data wherever they are. With a pay per use, scalable, point &amp;#x26; click self-service experience that is managed for you, HPE GreenLake can help you conserve your capital, reduce complexity, and achieve a faster time to market. You can run your business on HPE GreenLake.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/greenlake.html&quot;&gt;Learn more about HPE GreenLake&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;APIs and Documentation&lt;/h2&gt;
&lt;p&gt;HPE GreenLake customers and partners can take advantage of our well-documented, secure, and scalable framework of APIs for HPE GreenLake, found in our developer portal. Learn more about the unified HPE GreenLake experience offered through the HPE GreenLake Developer Portal by following the link below.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.greenlake.hpe.com&quot;&gt;Check out the HPE GreenLake Developer Portal&lt;/a&gt;&lt;/p&gt;
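&lt;p&gt;As a hedged illustration of a typical workflow, the sketch below uses the standard OAuth2 client-credentials flow in Python. The token URL, API base URL, and endpoint path are placeholders; the real endpoints, credentials, and scopes are documented on the HPE GreenLake Developer Portal linked above.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import requests

TOKEN_URL = &quot;https://sso.example.com/oauth2/token&quot;        # placeholder token endpoint
API_BASE = &quot;https://greenlake-api.example.com&quot;            # placeholder API base URL

# Exchange client credentials for a bearer token
token = requests.post(
    TOKEN_URL,
    data={&quot;grant_type&quot;: &quot;client_credentials&quot;,
          &quot;client_id&quot;: &quot;YOUR_CLIENT_ID&quot;,
          &quot;client_secret&quot;: &quot;YOUR_CLIENT_SECRET&quot;},
    timeout=30,
).json()[&quot;access_token&quot;]

# Call an API endpoint (placeholder path) with the bearer token
resp = requests.get(API_BASE + &quot;/example/v1/resources&quot;,
                    headers={&quot;Authorization&quot;: &quot;Bearer &quot; + token},
                    timeout=30)
print(resp.status_code)
&lt;/code&gt;&lt;/pre&gt;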
&lt;h2&gt;Manage workloads with the HPE GreenLake edge-to-cloud platform&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/greenlake/demos.html&quot;&gt;Learn how with HPE GreenLake edge-to-cloud platform demos and resources&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Workshops-on-Demand&lt;/h2&gt;
&lt;p&gt;Take advantage of our free, Jupyter-Notebook based Workshops-on-Demand available in the &lt;a href=&quot;https://developer.hpe.com/hackshack/&quot;&gt;Hack Shack&lt;/a&gt;. These technical workshops provide you with an in-depth, hands-on learning experience where you can interact with and learn from the experts. Designed to fit your schedule, these workshops are available 24/7 – any time, from anywhere. HPE GreenLake workshops are available today.&lt;/p&gt;
&lt;link rel=&quot;stylesheet&quot; href=&quot;https://www.w3schools.com/w3css/4/w3.css&quot;&gt;
&lt;div class=&quot;w3-container w3-center w3-margin-bottom&quot;&gt;
  &lt;a href=&quot;/hackshack/workshops&quot;&gt;&lt;button type=&quot;button&quot; class=&quot;button&quot;&gt;Try now!&lt;/button&gt;&lt;/a&gt;
&lt;/div&gt;
&lt;h2&gt;Any questions on HPE GreenLake?&lt;/h2&gt;
&lt;p&gt;Join the &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;HPE Developer Community Slack Workspace&lt;/a&gt; and start a discussion in our &lt;a href=&quot;https://hpedev.slack.com/archives/C02EG5XFK8Q&quot;&gt;#HPEGreenLake&lt;/a&gt; channel.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE Machine Learning Development Environment]]></title><description><![CDATA[Machine learning (ML) engineers and data scientists are on a never-ending search for new solutions that will enable them to better focus on…]]></description><link>https://developer.hpe.com/hpe-machine-learning-development-environment/home/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-machine-learning-development-environment/home/</guid><content:encoded>&lt;p&gt;Machine learning (ML) engineers and data scientists are on a never-ending search for new solutions that will enable them to better focus on innovation and accelerate their time to production—and this is what &lt;a href=&quot;https://www.hpe.com/us/en/solutions/artificial-intelligence/machine-learning-development-environment.html&quot;&gt;HPE Machine Learning Development Environment&lt;/a&gt; is all about.&lt;/p&gt;
&lt;p&gt;By removing the complexity and cost associated with ML model development, this comprehensive platform speeds time to value for model developers by:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Removing the need to write infrastructure code&lt;/li&gt;
&lt;li&gt;Making it easier for IT administrators to set up, manage, secure, and share AI compute clusters&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;With the HPE Machine Learning Development Environment, ML practitioners can:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Train models faster using state-of-the-art distributed training, without changing their model code&lt;/li&gt;
&lt;li&gt;Automatically find high-quality models with advanced hyperparameter tuning from the creators of state-of-the-art tuning algorithms such as Hyperband&lt;/li&gt;
&lt;li&gt;Get more from their GPUs with smart scheduling, as well as reduce cloud GPU costs by seamlessly using spot instances&lt;/li&gt;
&lt;li&gt;Track and reproduce their work with experiment tracking that works out of the box, covering code versions, metrics, checkpoints, and hyperparameters&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;With a comprehensive array of features integrated into an easy-to-use, high-performance ML environment, ML engineers can focus on building better models instead of managing IT infrastructure. Because HPE Machine Learning Development Environment supports both cloud and on-premises deployment infrastructure, practitioners can develop models using PyTorch, TensorFlow, or Keras. HPE Machine Learning Development Environment also integrates seamlessly with today&apos;s most popular ML tools for data preparation and model deployment.&lt;/p&gt;
&lt;p&gt;HPE Machine Learning Development Environment is built upon the widely popular open source training platform, &lt;a href=&quot;https://www.determined.ai/&quot;&gt;Determined&lt;/a&gt;. Check out &lt;a href=&quot;https://developer.hpe.com/platform/determined-ai/home&quot;&gt;these related articles&lt;/a&gt; about Determined on the HPE Developer portal.&lt;/p&gt;
&lt;p&gt;We also invite you to check out the &lt;a href=&quot;https://hpe-mlde.determined.ai/latest/&quot;&gt;documentation&lt;/a&gt; for HPE Machine Learning Development Environment.&lt;/p&gt;
  .button {
    background-color: rgba(23,235,160,1);
    box-sizing: border-box;
    color: #000000; 
    font-size: 18px; 
    display: inline-block;
    padding: 6px 12px;
    vertical-align: middle;
    overflow: hidden;
    text-decoration: none;
    text-align: center;
    cursor: pointer;
    white-space: nowrap;
    border-radius: 4px;
    border: none;
    margin: 0;
    line-height: 24px;
    font-weight: 700;
  } 
&lt;/style&gt;
&lt;p&gt;HPE Nimble Storage customers and partners have full access to the REST API of the arrays. We also provide open source projects for various automation platforms, language SDKs and container ecosystems.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://infosight.hpe.com/InfoSight/media/cms/active/public/pubs_REST_API_Reference_NOS_53x.whz&quot;&gt;HPE Nimble Storage REST API hosted on HPE InfoSight&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://infosight.hpe.com/InfoSight/media/cms/active/public/pubs_HPE_infosight_wellness_spec.pdf&quot;&gt;Download the HPE InfoSight Wellness API (PDF)&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://docs.cloudvolumes.hpe.com/help/rest/api-overview/&quot;&gt;Explore the HPE Cloud Volumes REST API&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;Projects&lt;/h1&gt;
&lt;p&gt;Plugins, SDKs and documentation.&lt;/p&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/hpe-storage/csi-driver&quot;&gt;CSI Driver &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;A Container Storage Interface (CSI) Driver for Kubernetes. The HPE CSI Driver for Kubernetes allows you to use a Container Storage Provider to perform data management operations on storage resources.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.hpe.com/api/hpe-nimble-csp/&quot;&gt;View the API documentation&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://hub.helm.sh/charts/hpe-storage/hpe-csi-driver&quot;&gt;Helm Chart&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://operatorhub.io/operator/hpe-csi-driver-operator&quot;&gt;Operator for Kubernetes&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://access.redhat.com/containers/#/registry.connect.redhat.com/hpestorage/csi-driver-operator&quot;&gt;Operator for OpenShift&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://scod.hpedev.io/csi_driver/index.html&quot;&gt;Visit documentation on SCOD&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/hpe-storage/scod&quot;&gt;Storage Container Orchestrator Documentation &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;The definitive source for end-user documentation for HPE storage integration with Kubernetes, Docker and neighboring partner ecosystems, including the HPE Container Platform.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://scod.hpedev.io/&quot;&gt;Explore the SCOD portal&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/hpe-storage/array-exporter&quot;&gt;Prometheus Array Exporter &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;A Prometheus array exporter that may be deployed as a standalone binary or directly on Kubernetes. There&apos;s also an exporter for the CSI driver that may be deployed separately.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://hpe-storage.github.io/array-exporter&quot;&gt;Read the documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://scod.hpedev.io/csi_driver/metrics.html&quot;&gt;Learn about the CSI info metrics provider on SCOD&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/hpe-storage/flexvolume-driver&quot;&gt;Volume Driver for Kubernetes FlexVolume Plugin &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Legacy FlexVolume driver that integrates Container Provider-based storage systems, Nimble Storage and Cloud Volumes, with the Kubernetes FlexVolume plugin.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://scod.hpedev.io/flexvolume_driver/container_provider/index.html&quot;&gt;View documentation on SCOD&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/hpe-storage/common-host-utils/tree/master/cmd/dockervolumed/managedplugin&quot;&gt;Docker Volume plugin &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Comprehensive Docker Volume plugin that serves as a foundation for all major container orchestration frameworks, including Docker Swarm, Kubernetes and Mesos.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://store.docker.com/plugins/nimble&quot;&gt;Visit on the Docker Store&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://scod.hpedev.io/docker_volume_plugins/hpe_nimble_storage/index.html&quot;&gt;View documentation on SCOD&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/hpe-storage/nimble-python-sdk&quot;&gt;SDK for Python &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;A client Python Software Development Kit for HPE Nimble Storage arrays leveraging the REST APIs.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://pypi.org/project/nimble-sdk/&quot;&gt;Check out the package on PyPi&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://hpe-storage.github.io/nimble-python-sdk/&quot;&gt;View documentation&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
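&lt;p&gt;Typical usage looks roughly like the hedged sketch below. The hostname and credentials are placeholders, and the exact method names, attributes, and size units should be confirmed against the SDK documentation linked above.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;from nimbleclient import NimOSClient

# Placeholder host and credentials
api = NimOSClient(&quot;array.example.com&quot;, &quot;admin&quot;, &quot;password&quot;)

# Create a volume (size assumed to be in MiB), then list volumes; attribute access
# via .attrs follows the SDK README pattern and should be verified against the docs
api.volumes.create(name=&quot;demo-vol&quot;, size=10240)
for vol in api.volumes.list():
    print(vol.attrs.get(&quot;name&quot;), vol.attrs.get(&quot;size&quot;))
&lt;/code&gt;&lt;/pre&gt;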
&lt;h3&gt;&lt;a href=&quot;https://github.com/hpe-storage/nimble-golang-sdk&quot;&gt;SDK for Go &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;A client Go Software Development Kit for HPE Nimble Storage arrays leveraging the REST APIs.&lt;/p&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/hpe-storage/nimble-ansible-modules&quot;&gt;Ansible Content Collection &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;HPE Nimble Storage Content Collection for Ansible is a certified collection of Ansible modules to manage HPE Nimble Storage array resources.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://galaxy.ansible.com/hpe/nimble&quot;&gt;Download from Ansible Galaxy&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://hpe-storage.github.io/nimble-ansible-modules&quot;&gt;Read the documentation&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://cloud.redhat.com/ansible/automation-hub/hpe/nimble/&quot;&gt;View on Red Hat Automation Hub&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;PowerShell Toolkit&lt;/h3&gt;
&lt;p&gt;Windows PowerShell scripting toolkit for HPE Nimble Storage arrays.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://www.powershellgallery.com/packages/HPENimblePowerShellToolkit/3.0.0&quot;&gt;Visit the PowerShell Gallery&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/NimbleStorage/nimble-puppet&quot;&gt;Puppet Module &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Puppet module to manage Nimble Storage arrays.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://forge.puppet.com/nimblestorage/nimblestorage&quot;&gt;View on Puppet Forge&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/NimbleStorage/automation-examples&quot;&gt;Storage and Infrastructure Automation &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;This repository contains examples on how you can automate daunting management tasks for HPE Nimble Storage and adjacent technologies.&lt;/p&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/NimbleStorage/Nemo&quot;&gt;Nemo &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Nemo uses OpenZFS to emulate the snapshot and cloning capabilities of the HPE Nimble Storage Docker Volume Plugin.&lt;/p&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/NimbleStorage/nimble-sap-hana-agent&quot;&gt;Application Snapshot Agent for SAP HANA &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;The HPE Nimble Storage Application Snapshot Agent for SAP HANA uses the HPE Nimble Storage Snapshot Framework to support application consistent and integrated storage level snapshots of SAP HANA.&lt;/p&gt;
&lt;h3&gt;OpenStack Cinder Driver&lt;/h3&gt;
&lt;p&gt;Use HPE Nimble Storage with OpenStack Cinder.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://docs.openstack.org/cinder/latest/configuration/block-storage/drivers/nimble-volume-driver.html&quot;&gt;Documentation&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://opendev.org/openstack/cinder&quot;&gt;View OpenStack Cinder source&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/NimbleStorage/nimble-fuel-cinder-plugin&quot;&gt;OpenStack Fuel Plugin&lt;/a&gt; (Nimble Storage Cinder integration with OpenStack Fuel)&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;Workshops-on-Demand&lt;/h1&gt;
&lt;p&gt;Take advantage of our free, Jupyter-Notebook based Workshops-on-Demand available in the &lt;a href=&quot;/hackshack/&quot;&gt;Hack Shack&lt;/a&gt;. These technical workshops provide you with an in-depth, hands-on learning experience where you can interact with and learn from the experts. Designed to fit your schedule, these workshops are available 24/7 – any time, from anywhere. Nimble/CSI workshops are available today.&lt;/p&gt;
&lt;link rel=&quot;stylesheet&quot; href=&quot;https://www.w3schools.com/w3css/4/w3.css&quot;&gt;
&lt;div class=&quot;w3-container w3-center w3-margin-bottom&quot;&gt;
  &lt;a href=&quot;/hackshack/workshops&quot; style=&quot;box-shadow: none;&quot;&gt;&lt;button type=&quot;button&quot; class=&quot;button&quot;&gt;&lt;b&gt;Try Now!&lt;/b&gt;&lt;/button&gt;&lt;/a&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;h2&gt;Any questions on Nimble?&lt;/h2&gt;
&lt;p&gt;Join the &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;HPEDEV Slack Workspace&lt;/a&gt; and start a discussion in our &lt;a href=&quot;https://hpedev.slack.com/archives/C7TTAHRUN&quot;&gt;#nimblestorage&lt;/a&gt; channel.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE NonStop]]></title><description><![CDATA[HPE NonStop is a platform that runs some of the world’s most exciting workloads in our day-to-day life. From producing luxury cars, to…]]></description><link>https://developer.hpe.com/hpe-nonstop/home/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-nonstop/home/</guid><content:encoded>&lt;p&gt;HPE NonStop is a platform that runs some of the world’s most exciting workloads in our day-to-day life. From producing luxury cars, to making payments in our grocery shopping, to helping people travel and executing massive amounts of transactions in global payments networks, HPE NonStop is the platform that lets our customers, and their engineers get their sleep, while their mission-critical applications continue relentlessly in data centres and on private clouds.&lt;/p&gt;
&lt;h2&gt;YouTube Videos&lt;/h2&gt;
&lt;h4&gt;HPE NonStop NS8: Uncompromising availability, performance and scale for mission-critical workloads&lt;/h4&gt;
&lt;p&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=M5vq2OxwTDI&quot;&gt;&lt;img src=&quot;https://img.youtube.com/vi/M5vq2OxwTDI/hqdefault.jpg&quot; alt=&quot;HPE NonStop NS8&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Resources&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/servers/nonstop.html&quot;&gt;HPE NonStop Home Page&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/psnow/doc/4aa4-2988enw&quot;&gt;HPE NonStop Family of Systems brochure&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://shaniceabigail.github.io/nonstop101/&quot;&gt;NonStop 101: Training Wheels for NonStop OS&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/HewlettPackard/NonStop&quot;&gt;Sample Code for HPE NonStop products &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE OneView]]></title><description><![CDATA[HPE OneView Developers Hub HPE OneView takes a software-defined, programmatic approach to managing infrastructure with efficient workflow…]]></description><link>https://developer.hpe.com/hpe-oneview/home/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-oneview/home/</guid><content:encoded>&lt;h2&gt;HPE OneView Developers Hub&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://hpe.com/info/oneview&quot;&gt;HPE OneView&lt;/a&gt; takes a software-defined, programmatic approach to managing infrastructure with efficient workflow automation, a modern RESTful API, and a comprehensive partner ecosystem.&lt;/p&gt;
&lt;p&gt;Here you will find repositories, demos, guides and other technical resources from HPE and the Composable Ecosystem Partners to automate infrastructure management and eliminate complex manual processes.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/hewlettpackard/&quot;&gt;Develop together: Visit the HPE GitHub organization&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://hpe.com/info/composableprogram&quot;&gt;Looking for solutions: Visit the Composable Ecosystem Partners&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.hpe.com/us/en/resources/integrated-systems/oneview-trial.html?parentPage=/us/en/products/integrated-systems/management-software&quot;&gt;Download a free trial&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;HPE OneView Integrations&lt;/h2&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/HewlettPackard/oneview-ansible-collection&quot;&gt;Ansible &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://www.ansible.com/home&quot;&gt;Ansible&lt;/a&gt; by Red Hat automates the provisioning of physical infrastructure on-demand using software-defined templates from HPE OneView.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://www.redhat.com/en/resources/automate-container-deployment-with-hpe-datasheet&quot;&gt;Read the Deployment Guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://h20195.www2.hpe.com/v2/GetDocument.aspx?docname=4AA6-6229ENW&quot;&gt;Read the Accelerating DevOps White Paper&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;http://h17007.www1.hpe.com/us/en/enterprise/integrated-systems/info-library/index.aspx?cat=ci_mgmt&amp;#x26;subcat=ansible#.XJVbZCdMEio&quot;&gt;View Additional Resources&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=PVJgUEH0Quw&amp;#x26;feature=youtu.be&quot;&gt;Watch the Demo&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-ansible-collection&quot;&gt;Ansible Docker Image&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://psnow.ext.hpe.com/doc/a50003411enw?jumpid=in_lit-psnow-red&quot;&gt;Installation and user guide&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h3&gt;Docker&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://www.docker.com/&quot;&gt;Docker&lt;/a&gt; integrates with HPE OneView to bring containerization out of the cloud and onto your bare-metal infrastructure.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://h20195.www2.hpe.com/V2/GetDocument.aspx?docname=a00047301enw&quot;&gt;Reference Configuration for CaaS on HPE Synergy&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/HewlettPackard/terraform-provider-oneview&quot;&gt;HashiCorp Terraform &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hashicorp.com/&quot;&gt;HashiCorp Terraform&lt;/a&gt; provides a common workflow to provision hybrid infrastructure and applications so users can seamlessly and efficiently deploy HPE infrastructure.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://www.terraform.io/intro/index.html&quot;&gt;Terraform Overview&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://learn.hashicorp.com/terraform/getting-started/install.html&quot;&gt;Getting started with Terraform&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-terraform&quot;&gt;Terraform Docker Image&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h3&gt;Microsoft&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/product-catalog/detail/pip.5390822.html&quot;&gt;HPE OneView for Microsoft System Center&lt;/a&gt; integrates HPE ProLiant and HPE BladeSystem manageability features with Microsoft System Center.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://www.hpe.com/us/en/product-catalog/detail/pip.5390822.html&quot;&gt;View the Product&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h3&gt;Red Hat® OpenShift®&lt;/h3&gt;
&lt;p&gt;Enable IT operations and application development teams to deliver applications faster using the &lt;a href=&quot;https://www.redhat.com/en/technologies/cloud-computing/openshift&quot;&gt;OpenShift&lt;/a&gt; integration.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://h20195.www2.hpe.com/V2/GetDocument.aspx?docname=A00038916ENW&quot;&gt;Read the Synergy Reference Configuration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://access.redhat.com/documentation/en-us/reference_architectures/2017/html-single/automate_red_hat_openshift_container_platform_deployment_on_hpe_proliant_servers_with_ansible_tower_and_hpe_oneview/&quot;&gt;Read the Technical White Paper&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;http://h17007.www1.hpe.com/us/en/enterprise/integrated-systems/info-library/index.aspx?cat=ci_mgmt&amp;#x26;subcat=ansible&quot;&gt;View Additional Resources&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h3&gt;VMware&lt;/h3&gt;
&lt;p&gt;HPE OneView for &lt;a href=&quot;https://vmware.com/&quot;&gt;VMware&lt;/a&gt; vCenter seamlessly integrates manageability features with VMware virtualization solutions.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://www.hpe.com/us/en/product-catalog/detail/pip.4152978.html&quot;&gt;View the Product&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h2&gt;SDKs and Language Bindings&lt;/h2&gt;
&lt;p&gt;Access Software Development Kits (SDKs) and language bindings for integrating HPE OneView with common programming languages and frameworks.&lt;/p&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/HewlettPackard/oneview-golang&quot;&gt;Go &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;HPE OneView allows you to treat your physical infrastructure as code. Now you can integrate your favorite Golang-based tools with HPE OneView.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://hub.docker.com/r/hewlettpackardenterprise/hpe-oneview-sdk-for-golang&quot;&gt;OneView Golang SDK Docker Image&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/HewlettPackard/hpe-oneview-hubot&quot;&gt;Hubot &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Interface with HPE OneView using Hubot, a chat-bot automation tool that integrates with popular chat services.&lt;/p&gt;
&lt;hr&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/HewlettPackard/POSH-HPOneView&quot;&gt;PowerShell &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;This library provides a pure Windows PowerShell interface to the HPE OneView REST APIs.&lt;/p&gt;
&lt;hr&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/HewlettPackard/oneview-python&quot;&gt;Python &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;This library provides a pure Python interface to the HPE OneView REST APIs.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-python&quot;&gt;OneView Python SDK Docker Image&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
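&lt;p&gt;As an illustration only, here is a minimal sketch of connecting with the Python SDK and listing server hardware. The appliance address, credentials, and API version are placeholders, and resource method names can vary between SDK releases, so treat the repository examples as authoritative.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Minimal sketch (not an official example): list server hardware with the OneView Python SDK.
# Assumes pip install hpeOneView and placeholder appliance credentials.
from hpeOneView.oneview_client import OneViewClient

config = {
    &quot;ip&quot;: &quot;oneview.example.com&quot;,          # placeholder appliance address
    &quot;credentials&quot;: {
        &quot;userName&quot;: &quot;administrator&quot;,       # placeholder account
        &quot;password&quot;: &quot;secret&quot;
    },
    &quot;api_version&quot;: 2400                    # adjust to the appliance API version
}

client = OneViewClient(config)
for server in client.server_hardware.get_all():
    print(server[&quot;name&quot;], server[&quot;powerState&quot;])
&lt;/code&gt;&lt;/pre&gt;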
&lt;hr&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/HewlettPackard/oneview-redfish-toolkit&quot;&gt;Redfish &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;The Redfish toolkit allows customers to take automations that use the Redfish specification and apply them to HPE OneView without the need for extensive scripting.&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;HPE Synergy Image Streamer Tools&lt;/h2&gt;
&lt;p&gt;Access tools, reference architectures, artifact bundles, and other technical resources for implementing HPE Synergy Image Streamer.&lt;/p&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/HewlettPackard?q=image-streamer&quot;&gt;Developer Resources &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Resources for achieving fast software-defined control using the Image Streamer management appliance.&lt;/p&gt;
&lt;hr&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/HewlettPackard/image-streamer-reference-architectures&quot;&gt;Reference Architectures &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Best practices and configuration guidance for developing and executing Image Streamer deployment plans.&lt;/p&gt;
&lt;hr&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/HewlettPackard/image-streamer-rhel&quot;&gt;RHEL Artifacts &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Plan scripts and sample artifact bundles for personalizing and deploying Red Hat Enterprise Linux.&lt;/p&gt;
&lt;hr&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/HewlettPackard/image-streamer-sles&quot;&gt;SLES Artifacts &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Plan scripts and sample artifact bundles for personalizing and deploying SUSE Linux Enterprise Server.&lt;/p&gt;
&lt;hr&gt;
&lt;h1&gt;Workshops-on-Demand&lt;/h1&gt;
&lt;p&gt;Take advantage of our free, Jupyter-Notebook based Workshops-on-Demand available in the &lt;a href=&quot;/hackshack/&quot;&gt;Hack Shack&lt;/a&gt;. These technical workshops provide you with an in-depth, hands-on learning experience where you can interact with and learn from the experts. Designed to fit your schedule, these workshops are available 24/7 – any time, from anywhere. HPE OneView workshops are available today.&lt;/p&gt;
&lt;link rel=&quot;stylesheet&quot; href=&quot;https://www.w3schools.com/w3css/4/w3.css&quot;&gt;
&lt;div class=&quot;w3-container w3-center w3-margin-bottom&quot;&gt;
  &lt;a href=&quot;/hackshack/workshops&quot;&gt;&lt;button type=&quot;button&quot; class=&quot;button&quot;&gt;Try now!&lt;/button&gt;&lt;/a&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;h2&gt;Any questions on HPE OneView?&lt;/h2&gt;
&lt;p&gt;Join the &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;HPEDEV Slack Workspace&lt;/a&gt; and start a discussion in our &lt;a href=&quot;https://hpedev.slack.com/archives/C5TMA1TK5&quot;&gt;#oneview&lt;/a&gt; channel.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE OneView Global Dashboard]]></title><description><![CDATA[Unified View of Infrastructure Across Data Centers in Multiple Locations The HPE OneView Global Dashboard provides a unified view of the…]]></description><link>https://developer.hpe.com/hpe-oneview-global-dashboard/home/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-oneview-global-dashboard/home/</guid><content:encoded>&lt;h1&gt;Unified View of Infrastructure Across Data Centers in Multiple Locations&lt;/h1&gt;
&lt;p&gt;The HPE OneView Global Dashboard provides a unified view of the health and inventory of Hewlett Packard Enterprise servers, profiles, enclosures, HPE Synergy frames, HPE 3PAR and HPE Primera storage systems across multiple appliances for ease of management.&lt;/p&gt;
&lt;p&gt;It aggregates critical activities from multiple appliances into a single feed so you can quickly identify issues occurring on monitored hardware for prompt resolution.&lt;/p&gt;
&lt;p&gt;It generates inventory reports for monitored assets, including firmware versions, as well as compliance reports that let you verify that your equipment meets corporate standards.&lt;/p&gt;
&lt;p&gt;It provides one-click navigation of managed resources with single sign-on to the managing appliance.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://hpe.com/products/ovglobaldashboard&quot;&gt;Learn more about Global Dashboard&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;API Reference&lt;/h1&gt;
&lt;p&gt;Global Dashboard provides a public REST API that can be used to retrieve information about resources. For example, you can retrieve server-hardware information for thousands of servers in data centers scattered across the globe with a single API call.&lt;/p&gt;
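&lt;p&gt;As a rough, hedged illustration of that idea, the sketch below uses Python with the requests package. The login path, header name, and server-hardware path shown here are placeholders patterned after OneView-style REST APIs; the authoritative routes are in the OpenAPI definition and SwaggerHub reference linked below.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Illustrative sketch only: endpoint paths and header names are assumptions, not the
# documented Global Dashboard API. Check the OpenAPI definition for the real routes.
import requests

GD = &quot;https://globaldashboard.example.com&quot;   # placeholder appliance address

# Hypothetical login call that returns a session token
session = requests.post(GD + &quot;/rest/login-sessions&quot;,
                        json={&quot;userName&quot;: &quot;administrator&quot;, &quot;password&quot;: &quot;secret&quot;},
                        verify=False).json()

headers = {&quot;Auth&quot;: session[&quot;sessionID&quot;]}
servers = requests.get(GD + &quot;/rest/server-hardware&quot;, headers=headers, verify=False).json()
print(len(servers.get(&quot;members&quot;, [])), &quot;servers reported&quot;)
&lt;/code&gt;&lt;/pre&gt;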
&lt;h3&gt;&lt;a href=&quot;https://github.com/HewlettPackard/hpe-globaldashboard-swagger&quot; title=&quot;OpenAPI (Swagger) definition of the Global Dashboard API&quot;&gt;OpenAPI Definition &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;h3&gt;&lt;a href=&quot;http://app.swaggerhub.com/apis/hpe-global-dashboard/hpe-one_view_global_dashboard_rest_api/2.1&quot;&gt;API Reference on SwaggerHub&lt;/a&gt;&lt;/h3&gt;</content:encoded></item><item><title><![CDATA[HPE OpsRamp]]></title><description><![CDATA[The Cloud and Cloud-Native Observability solution within OpsRamp, a Hewlett Packard Enterprise company, is designed to provide organizations…]]></description><link>https://developer.hpe.com/hpe-opsramp/home/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-opsramp/home/</guid><content:encoded>&lt;p&gt;The Cloud and Cloud-Native Observability solution within OpsRamp, a Hewlett Packard Enterprise company, is designed to provide organizations with deep visibility and actionable insights into their hybrid cloud and cloud-native environments. By leveraging a comprehensive set of powerful monitoring and observability capabilities, OpsRamp enables ITOps and DevOps teams to ensure the reliability, performance, and security of their business-critical services, applications and infrastructure.&lt;/p&gt;
&lt;p&gt;OpsRamp is a comprehensive autonomous IT operations platform designed to help service providers and enterprises modernize and streamline their IT operations. OpsRamp integrates artificial intelligence (AI), automation, and analytics to provide visibility, monitoring, and management across hybrid and multi-cloud environments. It addresses the challenges of complex IT infrastructures by offering tools for unified observability, proactive monitoring, incident management, and remediation.&lt;/p&gt;
&lt;p&gt;Learn more about &lt;a href=&quot;https://www.hpe.com/us/en/opsramp.html&quot;&gt;OpsRamp&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;API Documentation&lt;/h2&gt;
&lt;p&gt;OpsRamp offers a robust set of RESTful APIs designed to integrate and automate various IT operations management processes. These APIs allow developers to interact programmatically with the OpsRamp platform, enabling advanced customization, data exchange, and workflow automation across IT environments. Detailed information can be found on the &lt;a href=&quot;https://develop.opsramp.com/v2&quot;&gt;OpsRamp documentation&lt;/a&gt; site.&lt;/p&gt;
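&lt;p&gt;As a minimal sketch of what programmatic access can look like, the Python snippet below obtains an OAuth2 token with client credentials and issues one GET request. The base URL, tenant ID, and resource path are placeholders; the exact routes, parameters, and scopes are defined in the OpsRamp API documentation linked above.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Rough sketch, not an official OpsRamp example. Base URL, tenant ID, and the resource
# search path are placeholders; consult the OpsRamp API documentation for actual routes.
import requests

BASE = &quot;https://example.api.opsramp.com&quot;   # placeholder tenant API endpoint
TENANT = &quot;client_0000&quot;                      # placeholder tenant or client ID

token = requests.post(BASE + &quot;/auth/oauth/token&quot;,
                      data={&quot;grant_type&quot;: &quot;client_credentials&quot;,
                            &quot;client_id&quot;: &quot;KEY&quot;,
                            &quot;client_secret&quot;: &quot;SECRET&quot;}).json()[&quot;access_token&quot;]

resp = requests.get(BASE + &quot;/api/v2/tenants/&quot; + TENANT + &quot;/resources/search&quot;,
                    headers={&quot;Authorization&quot;: &quot;Bearer &quot; + token})
print(resp.status_code)
&lt;/code&gt;&lt;/pre&gt;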
&lt;h2&gt;Integrations&lt;/h2&gt;
&lt;p&gt;OpsRamp offers an extensive range of integrations that enhance its capabilities for IT operations management. These integrations span across various domains, including service management platforms like ServiceNow and Jira, monitoring and observability tools such as Prometheus, backup and recovery systems like Zerto, and cloud-native systems such as Kubernetes.&lt;/p&gt;
&lt;p&gt;Additionally, OpsRamp supports integration with popular automation tools like Ansible and comprehensive export options to cloud storage solutions such as AWS S3 and Azure Blob Storage. Detailed documentation on all integrations can be found at &lt;a href=&quot;https://docs.opsramp.com/integrations/&quot;&gt;www.docs.opsramp.com/integrations&lt;/a&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Custom integrations:&lt;/strong&gt; OpsRamp&apos;s custom integration framework enables users to create webhook and OAuth2 integrations for data ingestion and event capture within the platform. It offers flexibility to build integrations through either the OpsRamp APIs or the web interface. This framework is designed to accommodate diverse use cases, such as integrating custom alert sources, enabling ticketing workflows, or syncing with external systems, all while providing tools for validation and auditing of integration setups. For further details, visit the &lt;a href=&quot;https://docs.opsramp.com/integrations/a2r/custom-integration/custom-integration/&quot;&gt;OpsRamp Custom Integration documentation&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;OpenTelemetry integration:&lt;/strong&gt; OpsRamp&apos;s &lt;a href=&quot;https://opentelemetry.io/&quot;&gt;OpenTelemetry&lt;/a&gt; integration enables organizations to achieve seamless distributed tracing across their IT environments. By leveraging OpenTelemetry, an open-source observability framework, OpsRamp provides a vendor-agnostic solution for capturing and visualizing trace data. This integration helps teams gain end-to-end visibility into the lifecycle of requests spanning microservices, APIs, and other system components, allowing for the identification of latency issues and performance bottlenecks.&lt;br/&gt; &lt;br/&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The platform supports easy instrumentation using OpenTelemetry’s libraries and SDKs, ensuring compatibility with a wide range of programming languages and environments. The resulting trace data can be visualized using OpsRamp&apos;s intuitive interfaces, such as flame graphs and service insights, for efficient performance analysis and troubleshooting. This integration empowers IT and development teams to optimize their distributed systems while maintaining flexibility and industry standards in their observability strategies. For further details, visit the &lt;a href=&quot;https://docs.opsramp.com/integrations/a2r/3rd-party/opentelemetry-integration/&quot;&gt;OpenTelemetry integration documentation&lt;/a&gt;.&lt;/p&gt;
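&lt;p&gt;Since the instrumentation side is standard OpenTelemetry, a minimal Python sketch looks like the following. The exporter endpoint is a placeholder for whatever collector or gateway address your OpsRamp tracing integration provides; everything else is stock OpenTelemetry SDK usage.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Stock OpenTelemetry Python instrumentation (opentelemetry-sdk, opentelemetry-exporter-otlp).
# The OTLP endpoint is a placeholder for the collector address used by your OpsRamp setup.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider(resource=Resource.create({&quot;service.name&quot;: &quot;demo-service&quot;}))
provider.add_span_processor(BatchSpanProcessor(
    OTLPSpanExporter(endpoint=&quot;http://otel-collector.example.com:4317&quot;, insecure=True)))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span(&quot;checkout&quot;):
    pass  # application work happens inside the span
&lt;/code&gt;&lt;/pre&gt;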
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Kubernetes integration:&lt;/strong&gt; OpsRamp’s Kubernetes (K8s) integration provides robust monitoring and management of containerized environments. It supports Kubernetes clusters deployed across various cloud providers and on-premises setups. By integrating with Kubernetes, OpsRamp offers capabilities such as automated resource discovery, real-time monitoring of nodes, pods, and services, and visualization of cluster health and performance metrics. The platform enables users to customize monitoring thresholds, set alert conditions, and leverage detailed dashboards for better insight into their Kubernetes workloads. This integration helps ensure operational efficiency and scalability in managing containerized applications across diverse environments. For further details, visit the &lt;a href=&quot;https://docs.opsramp.com/integrations/container-orchestration/kubernetes-new/&quot;&gt;Kubernetes integration documentation&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Ansible integration:&lt;/strong&gt; OpsRamp&apos;s &lt;a href=&quot;https://www.ansible.com/&quot;&gt;Ansible&lt;/a&gt; integration enables automation of a wide range of tasks across hybrid infrastructures. It supports agent deployment, updates, and uninstallation, but also goes further by automating diagnostic health checks and performing remediation actions to maintain system stability. Through the use of Ansible playbooks, IT teams can automate the identification and resolution of issues, enforce compliance, and ensure system health. This integration also includes secure management of sensitive data with Ansible Vault, making it a powerful tool for streamlining IT operations and ensuring consistency and efficiency across diverse environments. For further details, visit the &lt;a href=&quot;https://docs.opsramp.com/integrations/automation-integration/ansible-integration/&quot;&gt;Ansible integration documentation&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Resources&lt;/h2&gt;
&lt;p&gt;Ready to learn more about OpsRamp? Browse through the &lt;a href=&quot;https://www.opsramp.com/resources/&quot;&gt;OpsRamp resources portal&lt;/a&gt;.&lt;/p&gt;
&lt;br/&gt;
&lt;hr&gt;
&lt;br/&gt;
&lt;h2&gt;Any questions on HPE OpsRamp?&lt;/h2&gt;
&lt;p&gt;Join the &lt;a href=&quot;https://developer.hpe.com/slack-signup/&quot;&gt;HPE Developer Community Slack Workspace&lt;/a&gt; and start a discussion in our &lt;a href=&quot;https://hpedev.slack.com/archives/C082P2Q3811&quot;&gt;#hpe-opsramp&lt;/a&gt; Slack channel.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE Private Cloud AI]]></title><description><![CDATA[You’re an AI/ML practitioner, not an infrastructure manager. But AI development is bogged down by: Spending excessive amounts of time…]]></description><link>https://developer.hpe.com/hpe-private-cloud-ai/home/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-private-cloud-ai/home/</guid><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;p&gt;You’re an AI/ML practitioner, not an infrastructure manager. But AI development is bogged down by:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Spending excessive amounts of time waiting for data that delays model training and iteration cycles&lt;/li&gt;
&lt;li&gt;Limited compute and storage resources that slow down exploring new model architectures, tuning, and scaling experiments&lt;/li&gt;
&lt;li&gt;Security risks that slow down development as well as the ability to quickly test and deploy new models&lt;/li&gt;
&lt;li&gt;Struggling to keep pace with an ecosystem of constantly changing tool sets and frameworks&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;HPE Private Cloud AI is a pre-configured, end-to-end solution for enterprise AI that integrates high-performance compute, storage, networking and AI software so you can focus on what you need to do: build innovative AI solutions. A built-in software ecosystem of tools and frameworks streamlines lifecycles so you can deploy into production without wrangling data, tools, or infrastructure.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://hpe.com/private-cloud-ai&quot;&gt;Learn more about HPE Private Cloud AI&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/hpe-private-cloud-ai-picture1-new.jpg&quot; alt=&quot;HPE Private Cloud AI&quot; title=&quot;HPE Private Cloud AI&quot;&gt;&lt;/p&gt;
&lt;h1&gt;HPE Private Cloud AI developer system - Accelerate AI development&lt;/h1&gt;
&lt;p&gt;The HPE Private Cloud AI developer system aims to abstract away the underlying infrastructure complexity, data wrangling, and environment setup. Being freed from these tasks enables you to focus on building, training, and deploying AI solutions that prove and validate AI projects faster. How do we do that?&lt;/p&gt;
&lt;p&gt;This system is ready to run out-of-the-box to reduce the burden of infrastructure provisioning and management. With its built-in, end-to-end software platform, you gain direct, unified access to open-source and NVIDIA tools, enabling you to focus on critical tasks by eliminating hardware-software interoperability headaches. Got custom or third-party tools you want to use? No problem. Import them into the software model catalog along with NVIDIA Blueprints to accelerate your specialized AI workflows.&lt;/p&gt;
&lt;p&gt;But this development environment isn’t just for ideation; it’s designed to be the starting point for production AI. As your models mature and demand enterprise-grade scale and reliability, you can seamlessly transition AI workflows to the HPE Private Cloud family, leveraging its proven architecture and unified management for true enterprise-grade AI.&lt;/p&gt;
&lt;h1&gt;Technical Demos&lt;/h1&gt;
&lt;h3&gt;Simplify AI from Infrastructure to Model Deployment&lt;/h3&gt;
&lt;p&gt;Join Randy Thomasson as he demonstrates how HPE Private Cloud AI removes the complexities of AI infrastructure, streamlines data pipelines, and simplifies model deployment.&lt;/p&gt;
&lt;a href=&quot;https://www.brighttalk.com/webcast/19535/640132?utm_source=HPE&amp;utm_medium=brighttalk&amp;utm_campaign=640132&quot; target=&quot;_blank&quot;&gt;
&lt;p&gt;&lt;img src=&quot;/img/simplify-ai-from-infrastructure-to-model-deployment-500-281.png&quot; alt=&quot;Simplify AI from Infrastructure to Model&quot;&gt;&lt;/p&gt;
&lt;/a&gt;
&lt;h3&gt;Simplify Data Pipelines: HPE AI Essentials Demo for Data Teams&lt;/h3&gt;
&lt;p&gt;See how HPE AI Essentials simplifies the creation of powerful data pipelines using Apache Airflow and Apache Spark.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=e0IUMJKpqGg&quot;&gt;&lt;img src=&quot;https://img.youtube.com/vi/e0IUMJKpqGg/hqdefault.jpg&quot; alt=&quot;Simplify Data Pipelines&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;Fast track innovation: how HPE simplifies model deployment&lt;/h3&gt;
&lt;p&gt;This demo showcases the built-in machine learning services that simplify and automate model development and deployment.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=THeg2DwrF4c&quot;&gt;&lt;img src=&quot;https://img.youtube.com/vi/THeg2DwrF4c/hqdefault.jpg&quot; alt=&quot;Fast track innovation&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;Building a generative AI foundation with HPE Private Cloud AI&lt;/h3&gt;
&lt;p&gt;In this video, Alex Ollman provides a deep dive into the HPE Private Cloud AI architecture and infrastructure needed to deploy generative AI at enterprise scale.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=AIG4-O9ZVRY&quot;&gt;&lt;img src=&quot;https://img.youtube.com/vi/AIG4-O9ZVRY/hqdefault.jpg&quot; alt=&quot;HPE Private Cloud AI architecture for GenAI&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;HPE Private Cloud AI technical demo&lt;/h3&gt;
&lt;p&gt;Join Randy Thomasson for role-specific walkthroughs: administrators focusing on user and GPU management, data engineers exploring data pipeline construction, and data scientists learning about model deployment.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=5uRoHXD2Sks&quot;&gt;&lt;img src=&quot;https://img.youtube.com/vi/5uRoHXD2Sks/hqdefault.jpg&quot; alt=&quot;HPE Private Cloud AI technical demo&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;Resources&lt;/h1&gt;
&lt;h2&gt;HPE Private Cloud AI Documentation&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://www.hpe.com/psnow/product-documentation?oid=1014847366&amp;#x26;cc=my&amp;#x26;lc=en&amp;#x26;jumpid=in_pdp-psnow-docs&quot;&gt;Product Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://hpe.com/support/PCAIUserGuide&quot;&gt;Administration Guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.hpe.com/support/AIEDocs&quot;&gt;HPE AI Essentials Software&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Tutorials – HPE AI Essentials&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/HPEEzmeral/aie-tutorials/tree/aie-1.7.0&quot;&gt;Tutorials for AI Essentials Software on HPE Private Cloud AI &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Analyst Reports&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Technical validation: &lt;a href=&quot;https://psnow.ext.hpe.com/doc/a00146294enw&quot;&gt;Accelerate AI business value with NVIDIA Computing for HPE&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Economic validation: &lt;a href=&quot;https://www.hpe.com/psnow/doc/a00146433enw&quot;&gt;The economic benefits of HPE Private Cloud AI with NVIDIA AI Computing by HPE&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;Any questions about HPE Private Cloud AI?&lt;/h1&gt;
&lt;p&gt;Need help getting started with HPE Private Cloud AI? Join the &lt;a href=&quot;https://developer.hpe.com/slack-signup/&quot;&gt;HPE Developer Community Slack Workspace&lt;/a&gt; and start a discussion in our &lt;a href=&quot;https://hpedev.slack.com/archives/C08MBCD6ER5&quot;&gt;#hpe-private-cloud-ai&lt;/a&gt; Slack channel.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[iLO RESTful API]]></title><description><![CDATA[One Interface for Server Management Automation HPE Server management provides intelligent remote control automation through HPE Integrated…]]></description><link>https://developer.hpe.com/ilo-restful-api/home/</link><guid isPermaLink="false">https://developer.hpe.com/ilo-restful-api/home/</guid><content:encoded>&lt;h1&gt;One Interface for Server Management Automation&lt;/h1&gt;
&lt;p&gt;HPE Server management provides intelligent remote control automation through &lt;a href=&quot;https://www.hpe.com/us/en/hpe-integrated-lights-out-ilo.html&quot;&gt;HPE Integrated Lights-Out&lt;/a&gt; (iLO) and the Redfish® iLO RESTful API. Gain even more capabilities that go beyond scripting by leveraging one API to manage your complete lifecycle of HPE Gen10, Gen10 Plus and Gen11 servers.&lt;/p&gt;
&lt;p&gt;A single API interface integrates server management components and full compute power. Use it with HPE iLO 5 and iLO 6 to perform remote server provisioning, configuration, inventory, and monitoring using industry standards through Redfish API conformance.&lt;/p&gt;
&lt;h1&gt;HPE Redfish API Implementation&lt;/h1&gt;
&lt;p&gt;Obtain simple, secure management of today’s scalable data center hardware with the Redfish® API ecosystem. It’s an open industry-standard specification and schema that helps you integrate solutions within your existing tools. Published by the Distributed Management Task Force (&lt;a href=&quot;http://www.dmtf.org/standards/redfish&quot;&gt;DMTF&lt;/a&gt;), it&apos;s ideal for cloud and web-based infrastructures, which typically have large quantities of servers in heterogeneous environments.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://h50146.www5.hpe.com/products/software/oe/linux/mainstream/support/whitepaper/pdfs/4AA6-1727ENW.pdf&quot;&gt;Read about Redfish on HPE iLO&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://ilorestfulapiexplorer.ext.hpe.com/&quot;&gt;Explore the API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://servermanagementportal.ext.hpe.com/&quot;&gt;Consult the API reference documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://youtu.be/ur9UKRV_0S8&quot;&gt;Choose a Redfish Client Tool&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
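&lt;p&gt;Because the API conforms to Redfish, you can explore it with nothing more than an HTTPS client. The sketch below is a minimal Python illustration, with the iLO address and credentials as placeholders; it simply reads the standard Redfish service root and lists the systems collection.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Minimal sketch (no SDK): query the standard Redfish service root and list systems.
# The iLO address and credentials are placeholders; verify=False mirrors a lab setup
# with self-signed certificates.
import requests

ILO = &quot;https://ilo.example.com&quot;

root = requests.get(ILO + &quot;/redfish/v1/&quot;, verify=False).json()
print(&quot;Redfish version:&quot;, root.get(&quot;RedfishVersion&quot;))

systems = requests.get(ILO + &quot;/redfish/v1/Systems/&quot;,
                       auth=(&quot;administrator&quot;, &quot;password&quot;), verify=False).json()
for member in systems.get(&quot;Members&quot;, []):
    print(member[&quot;@odata.id&quot;])
&lt;/code&gt;&lt;/pre&gt;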
&lt;h1&gt;GitHub and PowerShell Repositories&lt;/h1&gt;
&lt;p&gt;Find tools you need to help you leverage the iLO RESTful API SDKs.&lt;/p&gt;
&lt;h2&gt;SDKs and Language Bindings&lt;/h2&gt;
&lt;h3&gt;&lt;a href=&quot;https://servermanagementportal.ext.hpe.com&quot;&gt;iLO RESTful API Documentation&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;HPE Reference documentation with examples to help you write Redfish client programs and scripts.&lt;/p&gt;
&lt;hr&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/HewlettPackard/python-ilorest-library&quot;&gt;The Python library &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;The Python library provides a rich Redfish library and examples for developers to easily interact with the iLO RESTful API.&lt;/p&gt;
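&lt;p&gt;A minimal, hedged sketch of the library in use follows; the iLO address, credentials, and resource path are placeholders, and the exact URIs depend on the managed server.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Minimal sketch using the python-ilorest-library (pip install python-ilorest-library).
# Host, credentials, and the system path are placeholders.
from redfish import RedfishClient

client = RedfishClient(base_url=&quot;https://ilo.example.com&quot;,
                       username=&quot;administrator&quot;, password=&quot;password&quot;)
client.login()

response = client.get(&quot;/redfish/v1/Systems/1/&quot;)   # read the first system resource
print(response.dict.get(&quot;Model&quot;), response.dict.get(&quot;PowerState&quot;))

client.logout()
&lt;/code&gt;&lt;/pre&gt;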
&lt;hr&gt;
&lt;h3&gt;&lt;a href=&quot;https://www.powershellgallery.com/packages?q=hpe*cmdlets&quot;&gt;The PowerShell Gallery&lt;/a&gt; and &lt;a href=&quot;https://github.com/HewlettPackard/PowerShell-ProLiant-SDK&quot;&gt;library &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;The PowerShell Gallery and library provide Cmdlets and scripts to interact with the Windows PowerShell Interface to the iLO RESTful API.&lt;/p&gt;
&lt;hr&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/HewlettPackard/ilo-sdk-ruby&quot;&gt;The Ruby library &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;The Ruby library enables Ruby developers to interact with the iLO RESTful API.&lt;/p&gt;
&lt;hr&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/HewlettPackard/javascript-ilorest-library&quot;&gt;The JavaScript library &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;The JavaScript library enables JavaScript developers to easily integrate with the iLO RESTful API.&lt;/p&gt;
&lt;h2&gt;DevOps&lt;/h2&gt;
&lt;h3&gt;&lt;a href=&quot;https://galaxy.ansible.com/hpe/ilo&quot;&gt;Ansible &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/HewlettPackard/ilo-ansible-collection/&quot;&gt;Ansible playbooks and roles&lt;/a&gt; for HPE iLO using the Redfish® API.&lt;/p&gt;
&lt;hr&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/HewlettPackard/chef-ilorest-cookbook&quot;&gt;Chef Cookbook &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Chef Cookbook for installing the Python iLOrest library and examples.&lt;/p&gt;
&lt;hr&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/HewlettPackard/puppet-ilorest-module&quot;&gt;Puppet module &lt;img src=&quot;Github&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Puppet module for installing the Python iLOrest library and examples.&lt;/p&gt;
&lt;h2&gt;IT Operations&lt;/h2&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/HewlettPackard/python-redfish-utility/releases/latest&quot;&gt;RESTful Interface Tool &lt;img src=&quot;Github&quot; alt=&quot;iLOrest&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/HewlettPackard/python-redfish-utility/releases/latest&quot;&gt;HPE iLOrest&lt;/a&gt;, the HPE RESTful Interface Tool, is an open source Redfish client scripting tool that also features interactive and debug modes. Packages are available for Windows and many Linux flavors, as well as a &lt;a href=&quot;https://pypi.org/project/ilorest&quot;&gt;PyPI project&lt;/a&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://servermanagementportal.ext.hpe.com/docs/redfishclients/ilorest-userguide/&quot;&gt;Read the documentation guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=xfEN95pNNfY&quot;&gt;Watch the Demo&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/HewlettPackard/nagios-hpeilo-restful-extension&quot;&gt;Nagios Plug-in &lt;img src=&quot;Github&quot; alt=&quot;Redfish Nagios plug-in&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Nagios plug-in for the industry standard in IT infrastructure monitoring.&lt;/p&gt;
&lt;hr&gt;
&lt;h1&gt;Workshops-on-Demand&lt;/h1&gt;
&lt;p&gt;Take advantage of our free, Jupyter-Notebook based Workshops-on-Demand available in the &lt;a href=&quot;/hackshack&quot;&gt;Hack Shack&lt;/a&gt;. These technical workshops provide you with an in-depth, hands-on learning experience where you can interact with and learn from the experts. Designed to fit your schedule, these workshops are available 24/7 – any time, from anywhere. iLO/Redfish workshops are available today.&lt;/p&gt;
&lt;link rel=&quot;stylesheet&quot; href=&quot;https://www.w3schools.com/w3css/4/w3.css&quot;&gt;
&lt;div class=&quot;w3-container w3-center w3-margin-bottom&quot;&gt;
  &lt;a href=&quot;/hackshack/workshops&quot;&gt;&lt;button type=&quot;button&quot; class=&quot;button&quot;&gt;Try now!&lt;/button&gt;&lt;/a&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;h2&gt;Any questions on iLO or Redfish?&lt;/h2&gt;
&lt;p&gt;Join the &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;HPEDEV Slack Workspace&lt;/a&gt; and start a discussion in our &lt;a href=&quot;https://hpedev.slack.com/archives/C9RRCL9TJ&quot;&gt;#redfish&lt;/a&gt; channel.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[KubeDirector]]></title><description><![CDATA[KubeDirector is an open source Kubernetes “custom controller” that simplifies implementing stateful scaleout application clusters on…]]></description><link>https://developer.hpe.com/kubedirector/home/</link><guid isPermaLink="false">https://developer.hpe.com/kubedirector/home/</guid><content:encoded>&lt;p&gt;KubeDirector is an open source Kubernetes “custom controller” that simplifies implementing stateful scaleout application clusters on Kubernetes. Using standard Kubernetes (K8s) facilities to implement stateful scale out application clusters, it enables the transparent integration of K8s user/resource management and existing K8s clients and tools.&lt;/p&gt;
&lt;p&gt;KubeDirector does not tie a custom resource definition to a particular type of application, or contain hardcoded application-specific logic within the controller. Application characteristics are instead defined by metadata and an associated package of configuration artifacts, simplifying application deployment.&lt;/p&gt;
&lt;p&gt;KubeDirector provides:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;One operator for stateful apps&lt;/strong&gt;, enabling the deployment of legacy, stateful enterprise applications without writing or implementing application-specific operators&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;An abstract framework&lt;/strong&gt;, allowing the definition of application characteristics through metadata and configuration artifacts&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Continuous validation of resources&lt;/strong&gt;, automating app deployment and validation&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Hooks for “Day two” operations&lt;/strong&gt;, simplifying lifecycle management&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Introduction to KubeDirector&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://kubedirector.io/&quot;&gt;Project web site&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/bluek8s/kubedirector/wiki&quot;&gt;KubeDirector Wiki&lt;/a&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;GitHub repositories&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/bluek8s/kubedirector&quot;&gt;KubeDirector on GitHub&lt;/a&gt;: Main KubeDirector repository.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://hpe-container-platform-community.github.io/learn-kubedirector/docs/&quot;&gt;Learn KubeDirector on GitHub&lt;/a&gt;: Get hints on how to use KubeDirector.&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;Documentation and tutorials&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/bluek8s/kubedirector/blob/master/doc/quickstart.md&quot;&gt;Quickstart Guide&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/bluek8s/kubedirector/blob/master/HISTORY.md&quot;&gt;Update release info&lt;/a&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;Workshops-on-Demand&lt;/h2&gt;
&lt;p&gt;Take advantage of our free, Jupyter-Notebook based Workshops-on-Demand available in the &lt;a href=&quot;https://developer.hpe.com/hackshack/&quot;&gt;Hack Shack&lt;/a&gt;. These technical workshops provide you with an in-depth, hands-on learning experience where you can interact with and learn from the experts. Designed to fit your schedule, these workshops are available 24/7 – any time, from anywhere. KubeDirector workshops are available today.&lt;/p&gt;
&lt;link rel=&quot;stylesheet&quot; href=&quot;https://www.w3schools.com/w3css/4/w3.css&quot;&gt;
&lt;div class=&quot;w3-container w3-center w3-margin-bottom&quot;&gt;
  &lt;a href=&quot;/hackshack/workshops&quot;&gt;&lt;button type=&quot;button&quot; class=&quot;button&quot;&gt;Try now!&lt;/button&gt;&lt;/a&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;h2&gt;Any questions on KubeDirector?&lt;/h2&gt;
&lt;p&gt;You can raise any questions and issues you might have on our &lt;a href=&quot;https://github.com/bluek8s/kubedirector/issues&quot;&gt;GitHub issues&lt;/a&gt; link.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Simplivity]]></title><description><![CDATA[Adding certificates to the trust store Add certificates to the trust store by using the Add-HPESvtCertificate cmdlet. You can specify the…]]></description><link>https://developer.hpe.com/hpe-simplivity/adding-certificates/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-simplivity/adding-certificates/</guid><content:encoded>&lt;h1&gt;Adding certificates to the trust store&lt;/h1&gt;
&lt;p&gt;Add certificates to the trust store by using the &lt;code&gt;Add-HPESvtCertificate&lt;/code&gt; cmdlet. You can specify the path to the certificate file or an X509Certificate2 object. The certificate file can be a PEM or a DER file. To use the &lt;code&gt;Add-HPESvtCertificate&lt;/code&gt; cmdlet, you must first authenticate.&lt;/p&gt;
&lt;p&gt;The following example shows how to add a certificate called &lt;code&gt;test-node.pem&lt;/code&gt; by path. It assumes that you have already authenticated.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;PS C:\&gt; Add-HPESvtCertificate -Path c:\temp\test-node.pem

certificate : -----BEGIN CERTIFICATE-----
              MIIELTCCAxWgAwIBAgIJAMptn/02hezjMA0GCSqGSIb3DQEBCwUAMIGsMQswCQYD
              VQQGEwJVUzEWMBQGA1UECAwNTWFzc2FjaHVzZXR0czEUMBIGA1UEBwwLV2VzdGJv
              W8rc/ZJmzmxtOmM1DPDCL6RpINS1V4ouJIF5DHo8kzP1UjZjHQcCAwEAAaNQME4w
              CQYDVR0TBAIwADALBgNVHQ8EBAMCBeAwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsG
              AQUFBwMCMBUGA1UdEQQOMAyHBAqQAWeHBArPAfswDQYJKoZIhvcNAQELBQADggEB
              AKkXh9z6wo5Mt2zyDrh2p252AwUTmILLD7+YgGxU1stpjFaVlKL7wiEsW4/g37+J

     .
     .
     .
              -----END CERTIFICATE-----
hash        : 47ca6deb5880fd26ec4f546b3884aa65c78590df
subject     : EMAILADDRESS=security@myco.com,CN=omnicube.xxx.us-east.myco.local,O=myco,L=mycity,ST=mystate,C=US
issuer      : EMAILADDRESS=security@myco.com, CN=omnicubexx.us-east.myco.local, O=myco, L=mycity, ST=mystate, C=US
serialno    : ca6d9ffd3685ece2
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This example shows how to add a root certificate by using an X509Certificate2 object:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$newrootcert = [System.Security.Cryptography.X509Certificates.X509Certificate2]::New($newrootpath)
Add-HPESvtCertificate -Certificate $newrootcert
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Optionally, you can pipe the parameter to &lt;code&gt;Add-HPESvtCertificate&lt;/code&gt; instead of passing it as an argument. For example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&apos;test.pem&apos; | Add-HPESvtCertificate
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;or&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$newrootcert | Add-HPESvtCertificate
&lt;/code&gt;&lt;/pre&gt;</content:encoded></item><item><title><![CDATA[Simplivity]]></title><description><![CDATA[Architecture The HPE OmniStack REST API is available on every Virtual Controller. To issue a REST request, target the network address (IP…]]></description><link>https://developer.hpe.com/hpe-simplivity/architecture-and-object-model/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-simplivity/architecture-and-object-model/</guid><content:encoded>&lt;h1&gt;Architecture&lt;/h1&gt;
&lt;p&gt;The HPE OmniStack REST API is available on every Virtual Controller. To issue a REST request, target the network address (IP address or DNS name) of the Virtual Controller. The REST API provides a secure portal into the Virtual Controller. Consequently, address REST API requests to secure port 443. The following diagram illustrates the basic architecture of the REST API:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://developer.hpe.com/uploads/media/2018/7/svt-rest-api-arch-1532712778003.png&quot; alt=&quot;architecture diagram&quot;&gt;&lt;/p&gt;
&lt;h1&gt;Object model&lt;/h1&gt;
&lt;p&gt;The REST API object model enables you to monitor and manage a set of HPE OmniStack objects, including backups, datastores, hosts, clusters, policies, tasks, and virtual machines. The following table describes each of the types of objects that the REST API supports:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Object type&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;backup&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;A complete, standalone image of a virtual machine, taken at a specific point in time. You can retrieve all of the backups that are defined in the federation. You can copy, delete, lock, rename, restore, and set the retention time for backups.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;datastore&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;A repository that contains the files of one or more virtual machines. You can retrieve all of the datastores that are defined in the federation. You can create new datastores, and you can delete, resize, and set policies for existing datastores.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;host&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;An HPE OmniStack host in a federation. You can retrieve all of the hosts that are defined in a federation along with their capacity, hardware, and metrics data.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;omnistack_cluster&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;A logical grouping of systems that are running the HPE OmniStack software. You define an &lt;code&gt;omnistack_cluster&lt;/code&gt; to enable resources to be shared efficiently across the HPE OmniStack hosts in a federation. You can retrieve all of the &lt;code&gt;omnistack_cluster&lt;/code&gt; objects that are defined in a federation. You can retrieve &lt;code&gt;omnistack_cluster&lt;/code&gt; metrics, throughput, and connected clusters data, and you can set the time zone for an &lt;code&gt;omnistack_cluster&lt;/code&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;policy&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Contains backup rules that can be applied to an individual datastore or virtual machine. You can retrieve all the policies that are defined in the federation. You can create, delete, and rename policies. You can also create, edit, and delete the rules associated with policies.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;task&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Tracks the progress of an HPE OmniStack operation. When the status of a task is &lt;code&gt;COMPLETED&lt;/code&gt;, the &lt;code&gt;affected_objects&lt;/code&gt; indicate any created or modified objects. You can retrieve tasks.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;virtual_machine&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Represents a single virtual machine that has been created within an HPE OmniStack datastore. You can retrieve all of the virtual machines in the federation. You can back up, clone, and move virtual machines. You can set policies for virtual machines and retrieve virtual machine backup and metrics data.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;REST API object type relationships&lt;/h2&gt;
&lt;p&gt;The REST API supports the relationship of one object to another object of a different type by referencing the ID of the related object as a value of one of the properties of the base object. The following figure shows these object type relationships:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://developer.hpe.com/uploads/media/2018/7/svt-rest-api-object-type-relationships-1532712788492.png&quot; alt=&quot;object type relationships&quot;&gt;&lt;/p&gt;
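&lt;p&gt;To make the ID-based linkage concrete, here is a small illustrative Python sketch that lists virtual machines and prints the datastore each one references. The host, token, and the name of the reference property (assumed here to be &lt;code&gt;datastore_id&lt;/code&gt;) are placeholders; the authoritative property names are in the interactive REST API reference.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Illustrative sketch only. The reference property name (&quot;datastore_id&quot;) is an assumption
# for illustration; consult the interactive REST API reference for the exact schema.
import requests

HOST = &quot;https://10.0.0.1&quot;                      # placeholder Virtual Controller address
HEADERS = {&quot;Authorization&quot;: &quot;Bearer TOKEN&quot;}    # token obtained as described under Authenticating

vms = requests.get(HOST + &quot;/api/virtual_machines&quot;, headers=HEADERS, verify=False).json()
for vm in vms.get(&quot;virtual_machines&quot;, []):
    print(vm.get(&quot;name&quot;), &quot;references datastore&quot;, vm.get(&quot;datastore_id&quot;))
&lt;/code&gt;&lt;/pre&gt;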
&lt;h2&gt;REST API tasks as managed objects&lt;/h2&gt;
&lt;p&gt;The control plane treats tasks as true managed objects, which enables clients to efficiently determine the changes that requests have made on the system; for example, during the creation of new objects. Tasks contain meaningful information that enables clients to post-process requests in an intuitive and deterministic manner. The REST API task managed object has the following characteristics:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A task has an ID value that enables a client to query for a task.&lt;/li&gt;
&lt;li&gt;The state of a task conveys where the task currently is in the processing sequence. Based on the current state, other properties of the task might not be set.&lt;/li&gt;
&lt;li&gt;If the task ends in failure, the &lt;code&gt;error_code&lt;/code&gt; and &lt;code&gt;message&lt;/code&gt; properties contain information about the failure.&lt;/li&gt;
&lt;li&gt;If the task completes successfully, the &lt;code&gt;affected_objects&lt;/code&gt; array contains information about the object(s) that the task impacted. Each &lt;code&gt;affected_object&lt;/code&gt; entry has a type-significant ID, as well as a type string. For example, if the move of a virtual machine completes successfully, the task that is associated with the move has an &lt;code&gt;affected_object&lt;/code&gt; of the &lt;code&gt;virtual_machine&lt;/code&gt; type, and the ID of that affected object is the system-assigned ID of the new virtual machine.&lt;/li&gt;
&lt;li&gt;Multiple &lt;code&gt;affected_objects&lt;/code&gt; can be associated with a task (thus, the array of &lt;code&gt;affected_objects&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Simplivity]]></title><description><![CDATA[About the API What's New Feature and function support by REST API version Architecture and object model Enumerations Interactive REST API…]]></description><link>https://developer.hpe.com/hpe-simplivity/aside/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-simplivity/aside/</guid><content:encoded>&lt;h3&gt;About the API&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/whats-new/&quot;&gt;&lt;strong&gt;What&apos;s New&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/feature-and-function-support-by-rest-api-version&quot;&gt;&lt;strong&gt;Feature and function support by REST API version&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/architecture-and-object-model&quot;&gt;&lt;strong&gt;Architecture and object model&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/enumerations&quot;&gt;&lt;strong&gt;Enumerations&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/interactive-rest-api-reference&quot;&gt;&lt;strong&gt;Interactive REST API reference&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/errors-and-exceptions&quot;&gt;&lt;strong&gt;About REST API errors and exceptions&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/http-error-code-reference&quot;&gt;&lt;strong&gt;HTTP error code reference&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/authorization-related-rest-api-log-messages-and-responses&quot;&gt;&lt;strong&gt;Authorization-related REST API log messages and responses&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;Explore the API&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.hpe.com/api/simplivity/&quot;&gt;&lt;strong&gt;HPE OmniStack REST API Reference&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;Use the API&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/sample-code&quot;&gt;&lt;strong&gt;Getting started&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/get-backups&quot;&gt;&lt;strong&gt;Getting backups&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/restore-virtual-machines&quot;&gt;&lt;strong&gt;Restoring virtual machines&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/versioning&quot;&gt;&lt;strong&gt;Getting the API version&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/optimize-get-requests&quot;&gt;&lt;strong&gt;Optimizing GET requests&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/metrics&quot;&gt;&lt;strong&gt;Getting performance and capacity metrics&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/powershell&quot;&gt;&lt;strong&gt;Using the API with PowerShell&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/python&quot;&gt;&lt;strong&gt;Using the API with Python&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/java&quot;&gt;&lt;strong&gt;Using the API with Java&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;Authenticating&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/authenticating-against-hpe-omnistack-api&quot;&gt;&lt;strong&gt;Authenticating against the API&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/revoke-oath-2-token&quot;&gt;&lt;strong&gt;Revoking OAuth 2 tokens&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/request-authentication-by-emergency-grant&quot;&gt;&lt;strong&gt;Requesting authentication by emergency grant&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;Manage certificates using PowerShell&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/hpesvtcli-powershell-commands&quot;&gt;&lt;strong&gt;Installing the certificate management PowerShell cmdlets&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/ps-getoathtoken&quot;&gt;&lt;strong&gt;Authenticating&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/adding-certificates&quot;&gt;&lt;strong&gt;Adding certificates to the trust store&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/getting-a-certificate&quot;&gt;&lt;strong&gt;Getting certificates from the trust store&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/getting-a-root-certificate&quot;&gt;&lt;strong&gt;Getting root certificates&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/remove-certificate&quot;&gt;&lt;strong&gt;Removing certificates from the trust store&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/tls-certificate-validation&quot;&gt;&lt;strong&gt;Disabling/enabling TLS certificate validation on HPE OmniStack for vSphere&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Simplivity]]></title><description><![CDATA[Authenticating against the API About OAuth 2 authentication and authorization The REST API uses OAuth 2 authentication. To perform any…]]></description><link>https://developer.hpe.com/hpe-simplivity/authenticating-against-hpe-omnistack-api/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-simplivity/authenticating-against-hpe-omnistack-api/</guid><content:encoded>&lt;h1&gt;Authenticating against the API&lt;/h1&gt;
&lt;hr&gt;
&lt;p&gt;About OAuth 2 authentication and authorization&lt;/p&gt;
&lt;p&gt;The REST API uses OAuth 2 authentication. To perform any operation using the REST API, you must log in and obtain a valid OAuth token. To log in, use the same username/password combination that you would use to create a session on the Virtual Controller using the &lt;code&gt;svt-session-start&lt;/code&gt; CLI command. Once you have received an OAuth token, pass this token in with every REST API operation to authorize the operation. The privilege level of a REST API session is equivalent to the privilege level of a CLI session that uses the same credentials. For example, a read-only user can perform GET operations but cannot perform most POST/PUT/DELETE operations.&lt;/p&gt;
&lt;h1&gt;Requesting an OAuth 2 token&lt;/h1&gt;
&lt;p&gt;This example shows how to request an OAuth 2 token for use with the REST API using &lt;code&gt;curl&lt;/code&gt;. OAuth 2 token requests are the only HTTP requests that use basic authentication. For these OAuth 2 token requests, use &lt;code&gt;simplivity&lt;/code&gt; as the username and leave the password blank.&lt;/p&gt;
&lt;p&gt;You need a username/password combination to use when requesting an OAuth 2 token.&lt;/p&gt;
&lt;p&gt;The following &lt;code&gt;curl&lt;/code&gt; command requests an OAuth 2 token:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;curl -k https://simplivity@[host]/api/oauth/token -d grant_type=password -d
username=[username] -d password=[password]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Where:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;host&lt;/em&gt; is the IP address of the Virtual Controller to which you want to authenticate.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;username&lt;/em&gt; is the username for a user account with the appropriate privileges for the REST API operations that you plan to perform.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;password&lt;/em&gt; is the password that corresponds to the &lt;em&gt;username&lt;/em&gt; that you have specified.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You must supply the &lt;code&gt;-k&lt;/code&gt; switch due to the use of self-signed certificates. You can import these certificates into your local certificate store.&lt;/p&gt;
&lt;p&gt;This call returns a JSON response. For example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{
&quot;access_token&quot;:&quot;d3c4782f-7fd9-496f-969b-29b7f7715972&quot;,
&quot;token_type&quot;:&quot;bearer&quot;,
&quot;expires_in&quot;:81301,
&quot;scope&quot;:&quot;read write&quot;,
&quot;updated_at&quot;:1455232016377
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note the value for the &lt;code&gt;access_token&lt;/code&gt; in this response.&lt;/p&gt;
&lt;p&gt;Pass the value for the &lt;code&gt;access_token&lt;/code&gt; in every HTTP request header, using the following format: &lt;code&gt;Authorization: Bearer [access_token]&lt;/code&gt;&lt;/p&gt;
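&lt;p&gt;As a minimal sketch of a complete request, the token received above can be passed to any endpoint with &lt;code&gt;curl&lt;/code&gt;; the &lt;code&gt;/api/backups&lt;/code&gt; endpoint and host placeholder are used here purely for illustration:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# List backups, authorizing with the OAuth 2 token obtained earlier
curl -k https://[host]/api/backups \
  -H &quot;Authorization: Bearer d3c4782f-7fd9-496f-969b-29b7f7715972&quot;
&lt;/code&gt;&lt;/pre&gt;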
&lt;p&gt;For example, the header on its own reads:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;Authorization: Bearer d3c4782f-7fd9-496f-969b-29b7f7715972&lt;/code&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Simplivity]]></title><description><![CDATA[Authorization-related REST API log messages and responses Successful OAuth 2 token request log messages If you request and receive an OAuth…]]></description><link>https://developer.hpe.com/hpe-simplivity/authorization-related-rest-api-log-messages-and-responses/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-simplivity/authorization-related-rest-api-log-messages-and-responses/</guid><content:encoded>&lt;h1&gt;Authorization-related REST API log messages and responses&lt;/h1&gt;
&lt;h3&gt;Successful OAuth 2 token request log messages&lt;/h3&gt;
&lt;p&gt;If you request and receive an OAuth 2 token for the REST API successfully, &lt;code&gt;svt-rest-api.log&lt;/code&gt; displays relevant messages. For example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;2016-04-29 14:36:48.379 DEBUG 13430 --- [tp1872627924-22]
c.s.restapi.security.SvtSessionMap : User count after add: 1
2016-04-29 14:36:48.384 INFO 13430 --- [tp1872627924-22]
o.s.b.a.audit.listener.AuditListener : AuditEvent [timestamp=Fri Apr
29 14:36:48 EDT 2016, principal=administrator, type=AUTHENTICATION_SUCCESS,
data={details={grant_type=password, username=administrator}}]
2016-04-29 14:36:48.387 DEBUG 13430 --- [tp1872627924-22]
c.s.restapi.security.SvtTokenEnhancer : enhancing accessToken
28b42cf2-4d6b-40a4-a91e-101a9e21c1cc with updated_at=1461955008387
2016-04-29 14:36:48.387 DEBUG 13430 --- [tp1872627924-22]
c.s.r.security.SvtInMemoryTokenStore : Storing access token
28b42cf2-4d6b-40a4-a91e-101a9e21c1cc
2016-04-29 14:36:48.387 DEBUG 13430 --- [tp1872627924-22]
c.s.restapi.security.SvtTokenEnhancer : enhancing accessToken
28b42cf2-4d6b-40a4-a91e-101a9e21c1cc with updated_at=1461955008387
2016-04-29 14:36:48.388 DEBUG 13430 --- [tp1872627924-22]
c.s.restapi.security.SvtSessionMap : Adding HMS session map with
OAuth=28b42cf2-4d6b-40a4-a91e-101a9e21c1cc
2016-04-29 14:36:48.389 DEBUG 13430 --- [tp1872627924-22]
c.s.restapi.security.SvtSessionMap : Session count after add: 1
2016-04-29 14:36:48.389 DEBUG 13430 --- [tp1872627924-22]
c.s.restapi.security.SvtSessionMap : Expired session count after add: 0
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;JSON response indicates unsuccessful OAuth 2 token request&lt;/h3&gt;
&lt;p&gt;If an authorization error occurs during the token request, the JSON response includes a series of fields that provide information about the cause of the failure. For example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{
&quot;timestamp&quot;: 1457970690535,
&quot;status&quot;: 401,
&quot;error&quot;: &quot;Unauthorized&quot;,
&quot;exception&quot;: &quot;com.simplivity.restapi.exceptions.SvtExceptions
$UnauthorizedException&quot;,
&quot;message&quot;: &quot;Unauthorized&quot;,
&quot;path&quot;: &quot;/api/backups&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Normal OAuth 2 token audit log messages&lt;/h3&gt;
&lt;p&gt;During normal REST API operations, &lt;code&gt;svt-rest-api.log&lt;/code&gt; displays token audit messages. For example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;2016-04-29 14:53:29.895 DEBUG 13430 --- [tp1872627924-35]
c.s.restapi.security.SvtTokenServices : Time elapsed (ms) since last update of
access token: 53388
2016-04-29 14:53:29.895 INFO 13430 --- [tp1872627924-35]
o.s.b.a.audit.listener.AuditListener : AuditEvent [timestamp=Fri Apr
29 14:53:29 EDT 2016, principal=administrator, type=AUTHENTICATION_SUCCESS,
data={details=remoteAddress=172.16.43.1, tokenType=BearertokenValue=&amp;#x3C;TOKEN&gt;}]
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;JSON response indicates expired OAuth 2 token&lt;/h3&gt;
&lt;p&gt;Each OAuth 2 token expires after ten minutes of inactivity. The following example shows an HTTP 401 error response that indicates an expired token:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{
&quot;error&quot;: &quot;invalid_token&quot;,
&quot;message&quot;: &quot;Access token expired due to inactivity: 4a1da31f-5405-4a93-
af5c-799403ea70d6&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Tokens expire completely after 24 hours even with continuous activity during this time period. The following example shows an HTTP 401 error response that indicates a token that has expired after 24 hours:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{
&quot;error&quot;: &quot;invalid_token&quot;,
&quot;message&quot;: &quot;Access token expired: 4a1da31f-5405-4a93-af5c-799403ea70d6&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If a token expires, you can request a new token.&lt;/p&gt;
&lt;h3&gt;Expired OAuth 2 token log messages&lt;/h3&gt;
&lt;p&gt;&lt;code&gt;svt-rest-api.log&lt;/code&gt; contains messages to indicate a token that has expired due to inactivity. For example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;2016-04-29 16:58:07.953 DEBUG 14950 --- [ qtp39959931-18]
c.s.restapi.security.SvtTokenServices : Time elapsed (ms: 66702 since last
update of access token: ec7b184f-b591-4609-bee8-96777339cf0e
2016-04-29 16:58:09.372 DEBUG 14950 --- [ SessionTickler]
c.s.restapi.security.SessionTickler : HMS Session 16bc9cec-37eb-4a11-
b620-189c9d420410 tickled.
2016-04-29 16:58:11.126 DEBUG 14950 --- [ qtp39959931-18]
c.s.restapi.security.SvtTokenServices : Removing token ec7b184f-b591-4609-
bee8-96777339cf0e due to inactivity. inactiveTokenExpiration is 20000
2016-04-29 16:58:14.827 DEBUG 14950 --- [ SessionTickler]
c.s.restapi.security.SessionTickler : HMS Session 16bc9cec-37eb-4a11-
b620-189c9d420410 tickled.
2016-04-29 16:58:15.540 DEBUG 14950 --- [ qtp39959931-18]
c.s.r.security.SvtInMemoryTokenStore : Removing access token ec7b184f-b591-
4609-bee8-96777339cf0e
2016-04-29 16:58:16.573 DEBUG 14950 --- [ qtp39959931-18]
c.s.restapi.security.SvtSessionMap : Removing session with oauth
id=ec7b184f-b591-4609-bee8-96777339cf0e
2016-04-29 16:58:16.574 DEBUG 14950 --- [ qtp39959931-18]
c.s.restapi.security.SvtSessionMap : Session count after remove: 0
2016-04-29 16:58:16.575 DEBUG 14950 --- [ qtp39959931-18]
c.s.restapi.security.SvtSessionMap : Expired session count after remove: 0
2016-04-29 16:58:20.820 INFO 14950 --- [ qtp39959931-18]
o.s.b.a.audit.listener.AuditListener : AuditEvent [timestamp=Fri Apr
29 16:58:20 EDT 2016, principal=access-token, type=AUTHENTICATION_FAILURE,
data={type=org.springframework.security.authentication.BadCredentialsException,
message=Access token expired due to inactivity: ec7b184f-b591-4609-
bee8-96777339cf0e}]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;code&gt;svt-rest-api.log&lt;/code&gt; contains a message to indicate a token that has expired after 24 hours. For example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;2016-04-28 10:17:27.605 INFO 23921 --- [qtp1909078861-30]
o.s.b.a.audit.listener.AuditListener : AuditEvent [timestamp=Thu Apr
28 10:17:27 EDT 2016, principal=access-token, type=AUTHENTICATION_FAILURE,
data={type=org.springframework.security.authentication.BadCredentialsException,
message=Access token expired: 2d65e35f-6306-40ce-a8b8-5a8278e44e73}]
&lt;/code&gt;&lt;/pre&gt;</content:encoded></item><item><title><![CDATA[Simplivity]]></title><description><![CDATA[About REST API errors and exceptions Shallow validation REST API errors The service performs shallow validation to identify errors…]]></description><link>https://developer.hpe.com/hpe-simplivity/errors-and-exceptions/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-simplivity/errors-and-exceptions/</guid><content:encoded>&lt;h1&gt;About REST API errors and exceptions&lt;/h1&gt;
&lt;h2&gt;Shallow validation REST API errors&lt;/h2&gt;
&lt;p&gt;The service performs shallow validation to identify errors immediately upon the submission of an operation. In this case, the service does not create a task instance for the operation. Rather, it returns an HTTP response, such as a 400 (Bad Request) error, that includes a JSON body to explain the error.&lt;/p&gt;
&lt;p&gt;The following example shows the JSON body of an error response for an attempt to rename a policy using a name that is already in use:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{
  &quot;exception&quot;: &quot;com.simplivity.rpc.common.TaskException&quot;,
  &quot;path&quot;: &quot;/api/policies/0fa0d890-6847-4eb0-8480-78d6fc3a9913/rename&quot;,
  &quot;error_code&quot;: &quot;13&quot;,
  &quot;message&quot;: &quot;Duplicate name exists.&quot;,
  &quot;timestamp&quot;: 1460129027118,
  &quot;status&quot;: &quot;400&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In this example, the error occurred because the request attempted to rename a policy using a name that already existed. The JSON body includes the following information:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;path: displays the affected object and the attempted operation&lt;/li&gt;
&lt;li&gt;error_code: displays the numeric code that uniquely identifies the error that occurred&lt;/li&gt;
&lt;li&gt;message: displays a description of the error that occurred&lt;/li&gt;
&lt;li&gt;timestamp: displays the date and time at which the error occurred&lt;/li&gt;
&lt;li&gt;status: displays the status of the operation, that is, the HTTP response code&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The REST API service also performs certain additional validation checks before creating a task for the operation. These additional checks serve to identify crucial errors immediately.&lt;/p&gt;
&lt;h2&gt;Deep validation REST API errors&lt;/h2&gt;
&lt;p&gt;The REST API performs deep validation to identify errors after the submission of an operation. If an operation fails after the service has created a task instance for the operation, the task instance reports error information. In this case, you can GET the task instance, and this GET returns HTTP response 200. The following example shows the JSON body of a task instance that reports an error caused by an attempt to move a virtual machine while it is powered on:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{
  &quot;task&quot;: {
    &quot;id&quot;: &quot;420b50a4-99b0-d27a-4f28-ea2c9e899da0:420b50a4-99b0-d27a-4f28-ea2c9e899da0:20102ad9-69c1-46b3-b70d-171fc5df033f&quot;,
    &quot;state&quot;: &quot;FAILED&quot;,
    &quot;message&quot;: &quot;The VM power state is not acceptable.&quot;,
    &quot;affected_objects&quot;: [],
    &quot;error_code&quot;: 130,
    &quot;start_time&quot;: &quot;2016-05-09T20:37:08Z&quot;,
    &quot;end_time&quot;: &quot;2016-05-09T20:37:09Z&quot;
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The JSON body of the task instance for a failed operation includes the following information:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;id: displays the unique identifier for the task&lt;/li&gt;
&lt;li&gt;state: displays the state of the task, that is, FAILED&lt;/li&gt;
&lt;li&gt;message: displays a description of the error that occurred&lt;/li&gt;
&lt;li&gt;affected_objects: displays information about the objects that were the targets of the failed operation&lt;/li&gt;
&lt;li&gt;error_code: displays the numeric code that uniquely identifies the error that occurred&lt;/li&gt;
&lt;li&gt;start_time: displays the date and time at which the failed task started&lt;/li&gt;
&lt;li&gt;end_time: displays the date and time at which the failed task ended with an error&lt;/li&gt;
&lt;/ul&gt;
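&lt;p&gt;To check the outcome of a submitted operation, you can GET the task instance by its ID and inspect the &lt;code&gt;state&lt;/code&gt; field. The following is a minimal sketch; it assumes the &lt;code&gt;tasks&lt;/code&gt; object type is exposed at &lt;code&gt;/api/tasks/[task_id]&lt;/code&gt;, and the host, token, and task ID are all placeholders:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Retrieve a task instance; the state field reports the outcome (for example, FAILED)
curl -k https://[host]/api/tasks/[task_id] \
  -H &quot;Authorization: Bearer [access_token]&quot;
&lt;/code&gt;&lt;/pre&gt;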
&lt;h2&gt;ID not found errors&lt;/h2&gt;
&lt;p&gt;If you specify an ID in a GET or other operation, and this ID does not exist for the object type that you have specified, the REST API service returns HTTP response 404, along with a JSON error body, such as the following example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{
  &quot;exception&quot;: &quot;com.simplivity.restapi.exceptions.ObjectNotFoundException&quot;,
  &quot;path&quot;: &quot;/api/backups/badid/rename&quot;,
  &quot;message&quot;: &quot;Object type: backup id: badid could not be found&quot;,
  &quot;timestamp&quot;: 1460140207690,
  &quot;status&quot;: &quot;404&quot;
}
&lt;/code&gt;&lt;/pre&gt;
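&lt;p&gt;For instance, a request that targets a nonexistent object ID, as in the following hedged sketch, produces this kind of 404 response (the host, token, and &lt;code&gt;badid&lt;/code&gt; value are placeholders):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# GET a backup using an ID that does not exist; the service returns HTTP 404
curl -k https://[host]/api/backups/badid \
  -H &quot;Authorization: Bearer [access_token]&quot;
&lt;/code&gt;&lt;/pre&gt;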
&lt;p&gt;The JSON body of the response for an error reporting an ID that the REST API service did not find includes the following information:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;path: displays badid as the affected object and the attempted operation&lt;/li&gt;
&lt;li&gt;message: displays a description of the error that occurred&lt;/li&gt;
&lt;li&gt;timestamp: displays the date and time at which the error occurred&lt;/li&gt;
&lt;li&gt;status: displays the status of the operation, that is, the HTTP response code&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Simplivity]]></title><description><![CDATA[Feature and function support by REST API version The tables in this section list the features and functions that each version of the REST…]]></description><link>https://developer.hpe.com/hpe-simplivity/feature-and-function-support-by-rest-api-version/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-simplivity/feature-and-function-support-by-rest-api-version/</guid><content:encoded>&lt;h1&gt;Feature and function support by REST API version&lt;/h1&gt;
&lt;p&gt;The tables in this section list the features and functions that each version of the REST API supports for the following object types:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;backups&lt;/li&gt;
&lt;li&gt;cluster_groups&lt;/li&gt;
&lt;li&gt;datastores&lt;/li&gt;
&lt;li&gt;hosts&lt;/li&gt;
&lt;li&gt;omnistack_clusters&lt;/li&gt;
&lt;li&gt;policies&lt;/li&gt;
&lt;li&gt;security/certificates&lt;/li&gt;
&lt;li&gt;tasks&lt;/li&gt;
&lt;li&gt;virtual_machines&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h1&gt;&lt;code&gt;backups&lt;/code&gt;&lt;/h1&gt;
&lt;p&gt;The REST API supports the following features and functions for the &lt;code&gt;backups&lt;/code&gt; object type:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Operation&lt;/th&gt;
&lt;th&gt;1.13&lt;/th&gt;
&lt;th&gt;1.12&lt;/th&gt;
&lt;th&gt;1.11&lt;/th&gt;
&lt;th&gt;1.10&lt;/th&gt;
&lt;th&gt;1.9&lt;/th&gt;
&lt;th&gt;1.8&lt;/th&gt;
&lt;th&gt;1.7&lt;/th&gt;
&lt;th&gt;1.6&lt;/th&gt;
&lt;th&gt;1.5&lt;/th&gt;
&lt;th&gt;1.4&lt;/th&gt;
&lt;th&gt;1.3&lt;/th&gt;
&lt;th&gt;1.2&lt;/th&gt;
&lt;th&gt;1.1&lt;/th&gt;
&lt;th&gt;1&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;GET&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Additional returned properties:&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;compute_cluster_parent_hypervisor_object_id&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;compute_cluster_parent_name&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;consistency_type&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;hypervisor_type&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;sent_completion_time&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;sent_duration&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;unique_size_bytes&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;unique_size_timestamp&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;virtual_machine_state&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;virtual_machine_type&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GET virtual_disks&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GET virtual_disk_partitions&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GET virtual_disk_partition_files&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DELETE&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST calculate_unique_size&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST cancel&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST copy&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST delete&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST lock&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST rename&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST restore&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST restore_file&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST restore_files&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST set_retention&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h1&gt;&lt;code&gt;cluster_groups&lt;/code&gt;&lt;/h1&gt;
&lt;p&gt;The REST API supports the following features and functions for the &lt;code&gt;cluster_groups&lt;/code&gt; object type:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Operation&lt;/th&gt;
&lt;th&gt;1.13&lt;/th&gt;
&lt;th&gt;1.12&lt;/th&gt;
&lt;th&gt;1.11&lt;/th&gt;
&lt;th&gt;1.10&lt;/th&gt;
&lt;th&gt;1.9&lt;/th&gt;
&lt;th&gt;1.8&lt;/th&gt;
&lt;th&gt;1.7&lt;/th&gt;
&lt;th&gt;1.6&lt;/th&gt;
&lt;th&gt;1.5&lt;/th&gt;
&lt;th&gt;1.4&lt;/th&gt;
&lt;th&gt;1.3&lt;/th&gt;
&lt;th&gt;1.2&lt;/th&gt;
&lt;th&gt;1.1&lt;/th&gt;
&lt;th&gt;1&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;GET&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST rename&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h1&gt;&lt;code&gt;datastores&lt;/code&gt;&lt;/h1&gt;
&lt;p&gt;The REST API supports the following features and functions for the &lt;code&gt;datastore&lt;/code&gt; object type:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Operation&lt;/th&gt;
&lt;th&gt;1.13&lt;/th&gt;
&lt;th&gt;1.12&lt;/th&gt;
&lt;th&gt;1.11&lt;/th&gt;
&lt;th&gt;1.10&lt;/th&gt;
&lt;th&gt;1.9&lt;/th&gt;
&lt;th&gt;1.8&lt;/th&gt;
&lt;th&gt;1.7&lt;/th&gt;
&lt;th&gt;1.6&lt;/th&gt;
&lt;th&gt;1.5&lt;/th&gt;
&lt;th&gt;1.4&lt;/th&gt;
&lt;th&gt;1.3&lt;/th&gt;
&lt;th&gt;1.2&lt;/th&gt;
&lt;th&gt;1.1&lt;/th&gt;
&lt;th&gt;1&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;GET&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Additional returned properties:&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;compute_cluster_parent_hypervisor_object_id&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;compute_cluster_parent_name&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;hypervisor_management_system&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;hypervisor_management_system_name&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;hypervisor_object_id&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;hypervisor_type&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DELETE&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GET standard_hosts&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST resize&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST set_policy&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST share&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST unshare&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h1&gt;&lt;code&gt;hosts&lt;/code&gt;&lt;/h1&gt;
&lt;p&gt;The REST API supports the following features and functions for the &lt;code&gt;host&lt;/code&gt; object type:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Operation&lt;/th&gt;
&lt;th&gt;1.13&lt;/th&gt;
&lt;th&gt;1.12&lt;/th&gt;
&lt;th&gt;1.11&lt;/th&gt;
&lt;th&gt;1.10&lt;/th&gt;
&lt;th&gt;1.9&lt;/th&gt;
&lt;th&gt;1.8&lt;/th&gt;
&lt;th&gt;1.7&lt;/th&gt;
&lt;th&gt;1.6&lt;/th&gt;
&lt;th&gt;1.5&lt;/th&gt;
&lt;th&gt;1.4&lt;/th&gt;
&lt;th&gt;1.3&lt;/th&gt;
&lt;th&gt;1.2&lt;/th&gt;
&lt;th&gt;1.1&lt;/th&gt;
&lt;th&gt;1&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;GET&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Additional returned properties:&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;can_rollback&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;compute_cluster_hypervisor_object_id&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;compute_cluster_name&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;compute_cluster_parent_hypervisor_object_id&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;compute_cluster_parent_name&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;current_feature_level&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;date&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;federation_ip&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;federation_mask&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;federation_mtu&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;hypervisor_management_system&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;hypervisor_management_system_name&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;infosight_configuration&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;life_remaining&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;management_mask&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;management_mtu&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;omnistack_cluster_id&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;policy_enabled&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;potential_feature_level&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;storage_ip&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;storage_mask&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;storage_mtu&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;upgrade_state&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;virtual_controller_name&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GET capacity&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GET hardware&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GET metrics&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GET virtual_controller_shutdown_status&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST cancel_virtual_controller_shutdown&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST remove_from_federation&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST shutdown_virtual_controller&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h1&gt;&lt;code&gt;omnistack_clusters&lt;/code&gt;&lt;/h1&gt;
&lt;p&gt;The REST API supports the following features and functions for the &lt;code&gt;omnistack_cluster&lt;/code&gt; object type:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Operation&lt;/th&gt;
&lt;th&gt;1.13&lt;/th&gt;
&lt;th&gt;1.12&lt;/th&gt;
&lt;th&gt;1.11&lt;/th&gt;
&lt;th&gt;1.10&lt;/th&gt;
&lt;th&gt;1.9&lt;/th&gt;
&lt;th&gt;1.8&lt;/th&gt;
&lt;th&gt;1.7&lt;/th&gt;
&lt;th&gt;1.6&lt;/th&gt;
&lt;th&gt;1.5&lt;/th&gt;
&lt;th&gt;1.4&lt;/th&gt;
&lt;th&gt;1.3&lt;/th&gt;
&lt;th&gt;1.2&lt;/th&gt;
&lt;th&gt;1.1&lt;/th&gt;
&lt;th&gt;1&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;GET&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Additional returned properties:&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;arbiter_address&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;arbiter_connected&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;cluster_feature_level&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;connected_cluster&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;hypervisor_management_system&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;hypervisor_management_system_name&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;hypervisor_object_id&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;hypervisor_object_parent_id&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;hypervisor_object_parent_name&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;hypervisor_type&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;infosight_configuration&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;time_zone&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;upgrade_state&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;upgrade_task_id&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;version&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GET connected_clusters&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GET metrics&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GET throughput&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GET time_zone_list&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST set_time_zone&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h1&gt;&lt;code&gt;policies&lt;/code&gt;&lt;/h1&gt;
&lt;p&gt;The REST API supports the following features and functions for the &lt;code&gt;policy&lt;/code&gt; object type:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Operation&lt;/th&gt;
&lt;th&gt;1.13&lt;/th&gt;
&lt;th&gt;1.12&lt;/th&gt;
&lt;th&gt;1.11&lt;/th&gt;
&lt;th&gt;1.10&lt;/th&gt;
&lt;th&gt;1.9&lt;/th&gt;
&lt;th&gt;1.8&lt;/th&gt;
&lt;th&gt;1.7&lt;/th&gt;
&lt;th&gt;1.6&lt;/th&gt;
&lt;th&gt;1.5&lt;/th&gt;
&lt;th&gt;1.4&lt;/th&gt;
&lt;th&gt;1.3&lt;/th&gt;
&lt;th&gt;1.2&lt;/th&gt;
&lt;th&gt;1.1&lt;/th&gt;
&lt;th&gt;1&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;GET&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GET datastores&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GET /policies/policy_schedule_report&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GET virtual_machines&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DELETE&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST policies&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST rename&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST resume&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST rules&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X (multiple rules)&lt;/td&gt;
&lt;td&gt;X (one rule)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST suspend&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST impact_report/create_rules&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST impact_report/edit_rules&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST impact_report/delete_rule&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PUT rule&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
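&lt;p&gt;The version columns in this table correspond to the REST API version you request in the &lt;code&gt;Accept&lt;/code&gt; header. As a rough sketch only (it assumes that the &lt;code&gt;/api/policies&lt;/code&gt; collection follows the same &lt;code&gt;/api/&lt;/code&gt; pattern as &lt;code&gt;/api/backups&lt;/code&gt; and that minor versions use a &lt;code&gt;vN.N&lt;/code&gt; form of the media type; see Getting the API version for the exact values your federation supports), a request pinned to version 1.11 might look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Sketch only: list policies while pinning the request to an assumed 1.11 media type
curl -H &quot;Accept: application/vnd.simplivity.v1.11+json&quot; \
     -H &quot;Authorization: Bearer [*access_token*]&quot; \
     -X GET -k -i https://[*host*]/api/policies
&lt;/code&gt;&lt;/pre&gt;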
&lt;h1&gt;&lt;code&gt;security/certificates&lt;/code&gt;&lt;/h1&gt;
&lt;p&gt;The REST API supports the following features and functions for the &lt;code&gt;security/certificates&lt;/code&gt; object type:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Operation&lt;/th&gt;
&lt;th&gt;1.13&lt;/th&gt;
&lt;th&gt;1.12&lt;/th&gt;
&lt;th&gt;1.11&lt;/th&gt;
&lt;th&gt;1.10&lt;/th&gt;
&lt;th&gt;1.9&lt;/th&gt;
&lt;th&gt;1.8&lt;/th&gt;
&lt;th&gt;1.7&lt;/th&gt;
&lt;th&gt;1.6&lt;/th&gt;
&lt;th&gt;1.5&lt;/th&gt;
&lt;th&gt;1.4&lt;/th&gt;
&lt;th&gt;1.3&lt;/th&gt;
&lt;th&gt;1.2&lt;/th&gt;
&lt;th&gt;1.1&lt;/th&gt;
&lt;th&gt;1&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;GET&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Additional returned properties:&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;hash&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;certificate&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;subject&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;issuer&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;serialno&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DELETE&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
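&lt;p&gt;For example, a GET request against this object type returns the trust store certificates along with the additional properties listed above (&lt;code&gt;hash&lt;/code&gt;, &lt;code&gt;certificate&lt;/code&gt;, &lt;code&gt;subject&lt;/code&gt;, &lt;code&gt;issuer&lt;/code&gt;, and &lt;code&gt;serialno&lt;/code&gt;). The path below is an assumption that the collection follows the same &lt;code&gt;/api/&lt;/code&gt; pattern as the other object types; check the interactive REST API reference for the exact URI.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Sketch only: list the certificates in the trust store (GET is supported from REST API 1.10 on)
curl -H &quot;Accept: application/vnd.simplivity.v1+json&quot; \
     -H &quot;Authorization: Bearer [*access_token*]&quot; \
     -X GET -k -i https://[*host*]/api/security/certificates
&lt;/code&gt;&lt;/pre&gt;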
&lt;h1&gt;tasks&lt;/h1&gt;
&lt;p&gt;The REST API supports the following features and functions for the &lt;code&gt;task&lt;/code&gt; object type:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Operation&lt;/th&gt;
&lt;th&gt;1.13&lt;/th&gt;
&lt;th&gt;1.12&lt;/th&gt;
&lt;th&gt;1.11&lt;/th&gt;
&lt;th&gt;1.10&lt;/th&gt;
&lt;th&gt;1.9&lt;/th&gt;
&lt;th&gt;1.8&lt;/th&gt;
&lt;th&gt;1.7&lt;/th&gt;
&lt;th&gt;1.6&lt;/th&gt;
&lt;th&gt;1.5&lt;/th&gt;
&lt;th&gt;1.4&lt;/th&gt;
&lt;th&gt;1.3&lt;/th&gt;
&lt;th&gt;1.2&lt;/th&gt;
&lt;th&gt;1.1&lt;/th&gt;
&lt;th&gt;1&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;GET&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
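&lt;p&gt;Task objects are typically what you poll to follow the progress of an asynchronous operation; the &lt;code&gt;state&lt;/code&gt; enumeration for the &lt;code&gt;task&lt;/code&gt; object lists the possible values. The sketch below assumes that the collection follows the same &lt;code&gt;/api/&lt;/code&gt; pattern as &lt;code&gt;/api/backups&lt;/code&gt; and that a task is addressed by its ID.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Sketch only: check whether a task is COMPLETED, FAILED, or IN_PROGRESS
curl -H &quot;Accept: application/vnd.simplivity.v1+json&quot; \
     -H &quot;Authorization: Bearer [*access_token*]&quot; \
     -X GET -k -i https://[*host*]/api/tasks/[*task_id*]
&lt;/code&gt;&lt;/pre&gt;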
&lt;h1&gt;virtual_machines&lt;/h1&gt;
&lt;p&gt;The REST API supports the following features and functions for the &lt;code&gt;virtual_machine&lt;/code&gt; object type:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Operation&lt;/th&gt;
&lt;th&gt;1.13&lt;/th&gt;
&lt;th&gt;1.12&lt;/th&gt;
&lt;th&gt;1.11&lt;/th&gt;
&lt;th&gt;1.10&lt;/th&gt;
&lt;th&gt;1.9&lt;/th&gt;
&lt;th&gt;1.8&lt;/th&gt;
&lt;th&gt;1.7&lt;/th&gt;
&lt;th&gt;1.6&lt;/th&gt;
&lt;th&gt;1.5&lt;/th&gt;
&lt;th&gt;1.4&lt;/th&gt;
&lt;th&gt;1.3&lt;/th&gt;
&lt;th&gt;1.2&lt;/th&gt;
&lt;th&gt;1.1&lt;/th&gt;
&lt;th&gt;1&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;GET&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Additional returned properties:&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;adapter_type&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;app_aware_vm_status&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;compute_cluster_parent_hypervisor_object_id&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;compute_cluster_parent_name&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;deleted_at&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;device_number&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;network_interfaces&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;network_label&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ha_resynchronization_progress&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;host_id&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;hypervisor_allocated_capacity&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;hypervisor_allocated_cpu&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;hypervisor_consumed_cpu&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;hypervisor_consumed_memory&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;hypervisor_cpu_count&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;hypervisor_free_space&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;hypervisor_folder_name&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;hypervisor_is_template&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;hypervisor_management_system&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;hypervisor_management_system_name&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;hypervisor_total_memory&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;hypervisor_type&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;hypervisor_virtual_disk_count&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;hypervisor_virtual_machine_power_state&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;modified_at&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ipv4_addresses&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mac_address&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mac_generation&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;network_interfaces&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;replica_set&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GET backups&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GET metrics&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST backup&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST backup_parameters&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST clone&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST move&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST set_policy&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST validate_backup_credentials&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST policy_impact_report/apply_policy&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST power_off&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;</content:encoded></item><item><title><![CDATA[Simplivity]]></title><description><![CDATA[Enumerations Use this section as a reference for the REST API enumerations (enums). The REST API always performs sorting and filtering on…]]></description><link>https://developer.hpe.com/hpe-simplivity/enumerations/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-simplivity/enumerations/</guid><content:encoded>&lt;h1&gt;Enumerations&lt;/h1&gt;
&lt;p&gt;Use this section as a reference for the REST API enumerations (enums).&lt;br&gt;
The REST API always performs sorting and filtering on enumerated values using the full uppercase enumerations. The REST API ignores the value of the &lt;code&gt;case&lt;/code&gt; parameter when you sort or filter by fields with enumerated values.&lt;/p&gt;
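&lt;p&gt;For example, when filtering a collection on a field that holds an enumerated value, supply the full uppercase enumeration (the &lt;code&gt;case&lt;/code&gt; parameter has no effect on these fields). The query parameter below is a hypothetical illustration using the &lt;code&gt;state&lt;/code&gt; field of the &lt;code&gt;backup&lt;/code&gt; object; see the interactive REST API reference for the filter parameters each collection accepts.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Hypothetical sketch: filter backups by the PROTECTED enumeration value
curl -H &quot;Accept: application/vnd.simplivity.v1+json&quot; \
     -H &quot;Authorization: Bearer [*access_token*]&quot; \
     -X GET -k -i &quot;https://[*host*]/api/backups?state=PROTECTED&quot;
&lt;/code&gt;&lt;/pre&gt;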
&lt;hr&gt;
&lt;h2&gt;adapter_type&lt;/h2&gt;
&lt;p&gt;Used by: &lt;code&gt;NetworkInterface&lt;/code&gt; object&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Element&lt;/th&gt;
&lt;th&gt;Definition&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;E1000&lt;/code&gt;, &lt;code&gt;E1000E&lt;/code&gt;, &lt;code&gt;PCNET32&lt;/code&gt;, &lt;code&gt;VMXNET&lt;/code&gt;, &lt;code&gt;VMXNET2&lt;/code&gt;, and &lt;code&gt;VMXNET3&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;The &lt;code&gt;adapter_type&lt;/code&gt; assigned to the virtual machine. For more information about these values, see the VMware vSphere documentation.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;UNKNOWN&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;The &lt;code&gt;adapter_type&lt;/code&gt; cannot be determined.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;app_aware_type&lt;/h2&gt;
&lt;p&gt;Used by: &lt;code&gt;backup_parameters&lt;/code&gt; object&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Element&lt;/th&gt;
&lt;th&gt;Definition&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;DEFAULT&lt;/td&gt;
&lt;td&gt;Crash-consistent.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;NONE&lt;/td&gt;
&lt;td&gt;Application-consistent backup using a VMware snapshot.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;VSS&lt;/td&gt;
&lt;td&gt;An application aware backup with Microsoft VSS.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;app_aware_vm_status&lt;/h2&gt;
&lt;p&gt;Used by: &lt;code&gt;virtual_machines&lt;/code&gt; object&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Element&lt;/th&gt;
&lt;th&gt;Definition&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;CAPABLE&lt;/td&gt;
&lt;td&gt;The virtual machine is ready and can take application aware snapshots.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;INVALID_CREDS&lt;/td&gt;
&lt;td&gt;A valid set of credentials for the virtual machine is not available, which prevents application aware snapshots of this virtual machine.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;INVALID_OS&lt;/td&gt;
&lt;td&gt;The virtual machine is not running an appropriate operating system (OS), which prevents application aware snapshots of this virtual machine.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;UNKNOWN&lt;/td&gt;
&lt;td&gt;The state of the virtual machine with respect to readiness for an application aware snapshot is not known.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;UNKNOWN_FAULT&lt;/td&gt;
&lt;td&gt;A general failure has occurred that prevents application aware snapshots of the virtual machine.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;VALID_CREDS&lt;/td&gt;
&lt;td&gt;A valid set of credentials for the virtual machine is available, which enables application aware snapshots of this virtual machine.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;VMWARE_TOOLS_UNAVAILABLE&lt;/td&gt;
&lt;td&gt;VMware Tools are not installed or not running on the virtual machine, which prevents application aware snapshots of this virtual machine.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;consistency_type&lt;/h2&gt;
&lt;p&gt;Used by: &lt;code&gt;backup&lt;/code&gt; object, &lt;code&gt;BackupVMMO&lt;/code&gt; object, Policy rule, Policy rule POST body, create_or_edit_rule, rule&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Element&lt;/th&gt;
&lt;th&gt;Definition&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;DEFAULT&lt;/td&gt;
&lt;td&gt;An application consistent backup with VMware Snapshots.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;NONE&lt;/td&gt;
&lt;td&gt;A crash consistent backup or a backup that is not application consistent.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;VSS&lt;/td&gt;
&lt;td&gt;An application aware backup with Microsoft VSS.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;ha_status&lt;/h2&gt;
&lt;p&gt;Used by: &lt;code&gt;virtual_machine&lt;/code&gt; object&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Element&lt;/th&gt;
&lt;th&gt;Definition&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;DEFUNCT&lt;/td&gt;
&lt;td&gt;Object no longer exists but is still being referenced.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DEGRADED&lt;/td&gt;
&lt;td&gt;Object is not fully replicated.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;NOT_APPLICABLE&lt;/td&gt;
&lt;td&gt;Object is intentionally not replicated.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;OUT_OF_SCOPE&lt;/td&gt;
&lt;td&gt;Status not available from this location.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SAFE&lt;/td&gt;
&lt;td&gt;Object is replicated.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SYNCING&lt;/td&gt;
&lt;td&gt;Object is becoming replicated.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;UNKNOWN&lt;/td&gt;
&lt;td&gt;Status could not be determined.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;hypervisor_type&lt;/h2&gt;
&lt;p&gt;Used by: &lt;code&gt;backup&lt;/code&gt; object, &lt;code&gt;datastore&lt;/code&gt; object, &lt;code&gt;omnistack_cluster&lt;/code&gt;, &lt;code&gt;virtual_machine&lt;/code&gt; object, &lt;code&gt;BackupMO&lt;/code&gt;&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Element&lt;/th&gt;
&lt;th&gt;Definition&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;VSPHERE&lt;/td&gt;
&lt;td&gt;The Hypervisor Management System (HMS) associated with this backup is vSphere.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;HYPERV&lt;/td&gt;
&lt;td&gt;The HMS associated with this backup is Hyper-V.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;hypervisor_virtual_machine_power_state&lt;/h2&gt;
&lt;p&gt;Used by: &lt;code&gt;virtual_machine&lt;/code&gt; object&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Element&lt;/th&gt;
&lt;th&gt;Definition&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;OFF&lt;/td&gt;
&lt;td&gt;The Hypervisor Management System (HMS) considers the virtual machine to be powered off.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ON&lt;/td&gt;
&lt;td&gt;The HMS considers the virtual machine to be powered on.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SUSPENDED&lt;/td&gt;
&lt;td&gt;The HMS considers the virtual machine to be suspended.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;UNKNOWN&lt;/td&gt;
&lt;td&gt;The hypervisor-based power state of the virtual machine cannot be determined.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;mac_generation&lt;/h2&gt;
&lt;p&gt;Used by: &lt;code&gt;NetworkInterface&lt;/code&gt;&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Element&lt;/th&gt;
&lt;th&gt;Definition&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;ASSIGNED&lt;/td&gt;
&lt;td&gt;The MAC address is automatically assigned when you power on the virtual machine.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GENERATED&lt;/td&gt;
&lt;td&gt;The MAC address was automatically generated.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MANUAL&lt;/td&gt;
&lt;td&gt;The MAC address was statically assigned.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;UNKNOWN&lt;/td&gt;
&lt;td&gt;The mac_generation type could not be determined.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;media_type&lt;/h2&gt;
&lt;p&gt;Used by: &lt;code&gt;physical_drive&lt;/code&gt;&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Element&lt;/th&gt;
&lt;th&gt;Definition&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;HDD&lt;/td&gt;
&lt;td&gt;Hard disk drive.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SSD&lt;/td&gt;
&lt;td&gt;Solid-state drive.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;UNKNOWN&lt;/td&gt;
&lt;td&gt;The &lt;code&gt;media_type&lt;/code&gt; could not be determined.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;role&lt;/h2&gt;
&lt;p&gt;Used by: &lt;code&gt;ReplicaInfo&lt;/code&gt;&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Element&lt;/th&gt;
&lt;th&gt;Definition&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;PRIMARY&lt;/td&gt;
&lt;td&gt;The ReplicaInfo id represents the ID of the host that contains the primary data replica of the virtual machine.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SECONDARY&lt;/td&gt;
&lt;td&gt;The ReplicaInfo id represents the ID of the host that contains the secondary data replica of the virtual machine.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;state&lt;/h2&gt;
&lt;p&gt;Used by: &lt;code&gt;backup&lt;/code&gt; object&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Element&lt;/th&gt;
&lt;th&gt;Definition&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;CANCELED&lt;/td&gt;
&lt;td&gt;The backup operation was canceled successfully.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CANCELING&lt;/td&gt;
&lt;td&gt;The backup operation is responding to a manual cancellation of a backup in progress.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DEGRADED&lt;/td&gt;
&lt;td&gt;The backup is in an unprotected high availability state. This situation can occur when an HPE OmniStack host in the backup replica set is replaced by another HPE OmniStack host, and the backup has been saved to one HPE OmniStack host in a datacenter with multiple HPE OmniStack hosts.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DELETED&lt;/td&gt;
&lt;td&gt;The backup data was deleted before the expiration of the backup. This situation can occur when removing an HPE OmniStack host removes the last copy of a backup in a datacenter.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;FAILED&lt;/td&gt;
&lt;td&gt;The backup was unsuccessful.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;NEW&lt;/td&gt;
&lt;td&gt;The backup operation started, but the initial backup of the virtual machine and processing of the backup on the source datacenter is not complete.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PROTECTED&lt;/td&gt;
&lt;td&gt;The backup is successful and is in a protected high availability (HA) state. If the backup was a remote backup, successful replication to the remote site has also completed.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;QUEUED&lt;/td&gt;
&lt;td&gt;The backup is waiting to be replicated to a remote datacenter.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;REBUILDING&lt;/td&gt;
&lt;td&gt;The backup data is rebuilding onto a second HPE OmniStack host to ensure high availability for the backup in a multi-node datacenter.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SAVING&lt;/td&gt;
&lt;td&gt;The backup replication is in progress. The state then changes to &lt;code&gt;PROTECTED&lt;/code&gt;, &lt;code&gt;QUEUED&lt;/code&gt;, &lt;code&gt;FAILED&lt;/code&gt;, or &lt;code&gt;DEGRADED&lt;/code&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;UNKNOWN&lt;/td&gt;
&lt;td&gt;The backup state cannot be determined.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;state&lt;/h2&gt;
&lt;p&gt;Used by: &lt;code&gt;host&lt;/code&gt; object&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Element&lt;/th&gt;
&lt;th&gt;Definition&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;ALIVE&lt;/td&gt;
&lt;td&gt;The HPE OmniStack host is healthy.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;FAULTY&lt;/td&gt;
&lt;td&gt;The HPE OmniStack host is in a critical error state, and operations have failed over to an alternate HPE OmniStack host in the federation. It is likely that one or more error or event messages were logged.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MANAGED&lt;/td&gt;
&lt;td&gt;The Virtual Controller for this host is offline but can still be managed.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;REMOVED&lt;/td&gt;
&lt;td&gt;The HPE OmniStack host has been removed from the federation but is still being recognized.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SUSPECTED&lt;/td&gt;
&lt;td&gt;The HPE OmniStack host has one or more components that show degraded performance.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;UNKNOWN&lt;/td&gt;
&lt;td&gt;The HPE OmniStack host status is indeterminate, perhaps because it is unable to communicate with other federation HPE OmniStack hosts. It is possible that one or more error or event messages were logged.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;state&lt;/h2&gt;
&lt;p&gt;Used by: &lt;code&gt;task&lt;/code&gt; object&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Element&lt;/th&gt;
&lt;th&gt;Definition&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;COMPLETED&lt;/td&gt;
&lt;td&gt;The task has completed successfully.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;FAILED&lt;/td&gt;
&lt;td&gt;The task has failed.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;IN_PROGRESS&lt;/td&gt;
&lt;td&gt;The task is currently being processed.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;state&lt;/h2&gt;
&lt;p&gt;Used by: &lt;code&gt;virtual_machine&lt;/code&gt; object&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Element&lt;/th&gt;
&lt;th&gt;Definition&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;ALIVE&lt;/td&gt;
&lt;td&gt;An active hypervisor-based virtual machine is associated with the &lt;code&gt;virtual_machine&lt;/code&gt; object.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DELETED&lt;/td&gt;
&lt;td&gt;The hypervisor-based virtual machine that is associated with the &lt;code&gt;virtual_machine&lt;/code&gt; object has been deleted with at least one backup of that virtual machine still existing.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;REMOVED&lt;/td&gt;
&lt;td&gt;The hypervisor-based virtual machine that is associated with the &lt;code&gt;virtual_machine&lt;/code&gt; object has been removed from the virtual machine inventory of the hypervisor.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;status&lt;/h2&gt;
&lt;p&gt;Used by: &lt;code&gt;credential_validation&lt;/code&gt; object&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Element&lt;/th&gt;
&lt;th&gt;Definition&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;VM_POWERED_OFF&lt;/td&gt;
&lt;td&gt;The virtual machine is powered off, so it is not possible to determine if the credentials are valid.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;INVALID&lt;/td&gt;
&lt;td&gt;Credentials are invalid.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;VALID&lt;/td&gt;
&lt;td&gt;Credentials are valid.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;type&lt;/h2&gt;
&lt;p&gt;Used by: &lt;code&gt;backup&lt;/code&gt;&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Element&lt;/th&gt;
&lt;th&gt;Definition&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;MANUAL&lt;/td&gt;
&lt;td&gt;A user created this backup manually. Manual backups are not deleted automatically.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POLICY&lt;/td&gt;
&lt;td&gt;An automatic policy created this backup. The backup is subject to automatic deletion when the retention time for the backup expires or the maximum number of backups is exceeded. The oldest backups are deleted first.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;UNSPECIFIED&lt;/td&gt;
&lt;td&gt;The backup type cannot be determined.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;type&lt;/h2&gt;
&lt;p&gt;Used by: &lt;code&gt;omnistack_cluster&lt;/code&gt; object&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Element&lt;/th&gt;
&lt;th&gt;Definition&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;CLOUD&lt;/td&gt;
&lt;td&gt;The &lt;code&gt;omnistack_cluster&lt;/code&gt; consists of HPE OmniStack Cloud hosts.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;OMNISTACK&lt;/td&gt;
&lt;td&gt;The &lt;code&gt;omnistack_cluster&lt;/code&gt; consists of HPE OmniStack hosts that share resources and provide high availability and load-balancing services.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;UNKNOWN&lt;/td&gt;
&lt;td&gt;The type could not be determined.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;upgrade_state&lt;/h2&gt;
&lt;p&gt;Used by: &lt;code&gt;host&lt;/code&gt; object&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Element&lt;/th&gt;
&lt;th&gt;Definition&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;FAIL&lt;/td&gt;
&lt;td&gt;The HPE OmniStack host upgrade task failed, and an error code and message indicate the reason for the failure.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;IN_PROGRESS&lt;/td&gt;
&lt;td&gt;The HPE OmniStack upgrade task is proceeding.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;NOOP&lt;/td&gt;
&lt;td&gt;The HPE OmniStack host upgrade is incomplete. For example, a host may have failed to upgrade successfully, and the upgrade needs to be repeated, but this host is already at the correct version and does not need to be upgraded again.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SUCCESS&lt;/td&gt;
&lt;td&gt;The HPE OmniStack host upgrade task completed successfully.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;UNKNOWN&lt;/td&gt;
&lt;td&gt;It is not possible to determine the status of the HPE OmniStack host upgrade task.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;upgrade_state&lt;/h2&gt;
&lt;p&gt;Used by: &lt;code&gt;omnistack_cluster&lt;/code&gt;&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Element&lt;/th&gt;
&lt;th&gt;Definition&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;FAIL_CAN_ROLLBACK&lt;/td&gt;
&lt;td&gt;At least one software upgrade for this &lt;code&gt;omnistack_cluster&lt;/code&gt; is in a state that cannot be upgraded, committed, or rolled back. This state should rarely occur. Contact Customer Support (support.hpe.com).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;FAIL_CANNOT_ROLLBACK&lt;/td&gt;
&lt;td&gt;One or more HPE OmniStack hosts failed the upgrade. The upgrade either rolled back automatically or failed before a rollback was required. Attempt the upgrade and, if it fails again, contact Customer Support (support.hpe.com).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;IN_PROGRESS&lt;/td&gt;
&lt;td&gt;The upgrade task is proceeding.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SUCCESS_COMMIT_NEEDED&lt;/td&gt;
&lt;td&gt;The upgrade is ready to commit (or roll back).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SUCCESS_COMMITTED&lt;/td&gt;
&lt;td&gt;The &lt;code&gt;omnistack_cluster&lt;/code&gt; is committed to the current software version. Commits occur at the federation level, so all &lt;code&gt;omnistack_cluster&lt;/code&gt; instances in the federation should be in this state.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SUCCESS_MIXED_VERSION&lt;/td&gt;
&lt;td&gt;None of the HPE OmniStack hosts in the &lt;code&gt;omnistack_cluster&lt;/code&gt; have an upgrade in progress, but the &lt;code&gt;omnistack_cluster&lt;/code&gt; has mixed versions of software on different HPE OmniStack hosts. This state can occur when a node with a different software version is added to the &lt;code&gt;omnistack_cluster&lt;/code&gt;. An upgrade is required to make the &lt;code&gt;omnistack_cluster&lt;/code&gt; consistent and to ensure that all HPE OmniStack hosts are running the same version.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;UNKNOWN&lt;/td&gt;
&lt;td&gt;It is not possible to determine the status of the previous upgrade task.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;virtual_machine_state&lt;/h2&gt;
&lt;p&gt;Used by: &lt;code&gt;backup&lt;/code&gt; object, &lt;code&gt;BackupMO&lt;/code&gt; object&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Element&lt;/th&gt;
&lt;th&gt;Definition&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;ALIVE&lt;/td&gt;
&lt;td&gt;An active hypervisor-based virtual machine is associated with the &lt;code&gt;virtual_machine&lt;/code&gt; object.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DELETED&lt;/td&gt;
&lt;td&gt;The hypervisor-based virtual machine that is associated with the &lt;code&gt;virtual_machine&lt;/code&gt; object has been deleted with at least one backup of that virtual machine still existing.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;REMOVED&lt;/td&gt;
&lt;td&gt;The hypervisor-based virtual machine that is associated with the &lt;code&gt;virtual_machine&lt;/code&gt; object has been removed from the virtual machine inventory of the hypervisor.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;virtual_machine_type&lt;/h2&gt;
&lt;p&gt;Used by: &lt;code&gt;BackupMO&lt;/code&gt;&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Element&lt;/th&gt;
&lt;th&gt;Definition&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;TEMPLATE&lt;/td&gt;
&lt;td&gt;The virtual machine instance associated with the backup is a virtual machine template.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;UNKNOWN&lt;/td&gt;
&lt;td&gt;The type of the virtual machine instance associated with the backup is unknown.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;VM&lt;/td&gt;
&lt;td&gt;The virtual machine instance associated with the backup is a virtual machine.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;</content:encoded></item><item><title><![CDATA[Simplivity]]></title><description><![CDATA[Getting backups This example uses curl to retrieve a list of all the backups in the federation along with details about each backup. Where…]]></description><link>https://developer.hpe.com/hpe-simplivity/get-backups/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-simplivity/get-backups/</guid><content:encoded>&lt;h1&gt;Getting backups&lt;/h1&gt;
&lt;p&gt;This example uses &lt;code&gt;curl&lt;/code&gt; to retrieve a list of all the backups in the federation along with details about each backup.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;curl -H &quot;Accept: application/vnd.simplivity.v1+json&quot; \
     -H &quot;Authorization: Bearer [*access_token*]&quot; \
     -X GET -k -i https://[*host*]/api/backups
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Where:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;-H&lt;/code&gt; (or &lt;code&gt;--header&lt;/code&gt;) enables curl to add request headers.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The &lt;code&gt;&quot;Accept: application/vnd.simplivity.v1+json&quot;&lt;/code&gt; header indicates that the response should use version 1 of the HPE OmniStack JSON extension.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;em&gt;access_token&lt;/em&gt; is the complete access token, for example, &lt;code&gt;f93f2059-afef-4310-9147-447645992a5d&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;-X GET&lt;/code&gt; indicates an HTTP GET operation.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;-k&lt;/code&gt; allows the use of self-signed SSL/TLS certificates.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;em&gt;host&lt;/em&gt; is the IP address of the Virtual Controller that you want to authenticate with.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The GET request returns a JSON response similar to the following example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{
&quot;offset&quot;: 0,
&quot;limit&quot;: 500,
&quot;count&quot;: 21,
&quot;backups&quot;: [
{
&quot;id&quot;: &quot;f6d5980c-478c-4292-aabf-67da627dbc57&quot;,
&quot;name&quot;: &quot;vm32_A_6-backup-2016-03-13T21:03:46-04:00&quot;,
&quot;sent&quot;: 0,
&quot;state&quot;: &quot;PROTECTED&quot;,
&quot;type&quot;: &quot;MANUAL&quot;,
&quot;omnistack_cluster_id&quot;: &quot;0bf3988b-b684-4002-a04d-55bb8831b8c4&quot;,
&quot;omnistack_cluster_name&quot;: &quot;Boston -:- OmniCube CN-2000 5.5.0&quot;,
&quot;datastore_id&quot;: &quot;c1d172e2-bd89-4bad-ae2a-848e7fbe3dc4&quot;,
&quot;datastore_name&quot;: &quot;ds2&quot;,
&quot;expiration_time&quot;: &quot;NA&quot;,
&quot;virtual_machine_id&quot;: &quot;729861fd-a006-4849-bf08-b232551989c0&quot;,
&quot;virtual_machine_name&quot;: &quot;vm32_A_6&quot;,
&quot;size&quot;: 12574720,
&quot;application_consistent&quot;: false,
&quot;created_at&quot;: &quot;2016-03-14T01:03:46Z&quot;
},
...
]
}
&lt;/code&gt;&lt;/pre&gt;
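&lt;p&gt;The &lt;code&gt;offset&lt;/code&gt;, &lt;code&gt;limit&lt;/code&gt;, and &lt;code&gt;count&lt;/code&gt; values at the top of this response are explained below. Assuming that &lt;code&gt;offset&lt;/code&gt; and &lt;code&gt;limit&lt;/code&gt; can also be supplied as query parameters, as the echoed defaults suggest (see Optimizing GET requests for the supported options), a paged request might look like this sketch:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Sketch only: fetch the second page of backups, 10 objects at a time
curl -H &quot;Accept: application/vnd.simplivity.v1+json&quot; \
     -H &quot;Authorization: Bearer [*access_token*]&quot; \
     -X GET -k -i &quot;https://[*host*]/api/backups?offset=10&amp;limit=10&quot;
&lt;/code&gt;&lt;/pre&gt;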
&lt;p&gt;In the response, the count value indicates that there are a total of 21 backup objects (that is, resources). The response returns these backups in a JSON array of objects. The offset and limit values indicate that the response array starts at the backup with an offset of 0 and includes backups up to the maximum limit of 500 objects. These values for the offset and limit are the default values.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Simplivity]]></title><description><![CDATA[Getting certificates from the trust store Use the Get-HPESvtCertificate cmdlet to list the all certificates in the trust store or to search…]]></description><link>https://developer.hpe.com/hpe-simplivity/getting-a-certificate/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-simplivity/getting-a-certificate/</guid><content:encoded>&lt;h1&gt;Getting certificates from the trust store&lt;/h1&gt;
&lt;p&gt;Use the &lt;code&gt;Get-HPESvtCertificate&lt;/code&gt; cmdlet to list all the certificates in the trust store or to search for a specific one by the certificate&apos;s SHA1 thumbprint. To use the &lt;code&gt;Get-HPESvtCertificate&lt;/code&gt; cmdlet, you must be an authenticated user.&lt;/p&gt;
&lt;p&gt;This example shows how to get the list of certificates. It assumes that you have already authenticated.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;PS C:\&gt; Get-HPESvtCertificate | Format-List

Thumbprint                                Subject
----------                                -------
7E084B44FCDAB6EF74AC40E8D9BD508337D96CDE  CN=omnicubexx.us-east.myco.local
2F43831D62C01FD2AF448DFA8BF86E126B8190DD  CN=omnicubexx.us-east.myco.local
77E44C4DFF0E565665B6CB27E77BA8FBAFFF88A2  CN=omnicubexx.us-east.myco.local
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This example shows how to search by SHA1 thumbprint:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;PS C:\&gt; Get-HPESvtCertificate -Thumbprint EB2CD509BA53...D4C6

Thumbprint              Subject      EnhancedKeyUsageList
---------------        ---------           ---------------------------
EB2CD509BA53...D4C6    OU=VMware Enginee...
&lt;/code&gt;&lt;/pre&gt;</content:encoded></item><item><title><![CDATA[Simplivity]]></title><description><![CDATA[Getting root certificates The Get-HPESvtRootCertificate cmdlet returns an array of X509Certificate2 objects from the vCenter Server. This…]]></description><link>https://developer.hpe.com/hpe-simplivity/getting-a-root-certificate/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-simplivity/getting-a-root-certificate/</guid><content:encoded>&lt;h1&gt;Getting root certificates&lt;/h1&gt;
&lt;p&gt;The &lt;code&gt;Get-HPESvtRootCertificate&lt;/code&gt; cmdlet returns an array of X509Certificate2 objects from the vCenter Server. This cmdlet does not require that you authenticate.&lt;/p&gt;
&lt;p&gt;When you use this cmdlet, you must independently verify that the certificate is what you expect it to be because the command does not perform validation checks on the certificate.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Get-HPESvtRootCertificate -hmsHostname 192.0.2.2 | Format-list

Subject      : OU=VMware Engineering, O=vc-147-50-53, S=California, C=US, DC=local, DC=vsphere, CN=CA
Issuer       : OU=VMware Engineering, O=vc-147-50-53, S=California, C=US, DC=local, DC=vsphere, CN=CA
Thumbprint   : 83754908A5088D883FBC86275EF1FD964E2153C0
FriendlyName :
NotBefore    : 7/9/2017 10:48:12 PM
NotAfter     : 7/7/2027 10:48:12 PM
Extensions   : {System.Security.Cryptography.Oid, System.Security.Cryptography.Oid, System.Security.Cryptography.Oid, System.Security.Cryptography.Oid}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Use the &lt;code&gt;Get-HPESvtRootCertificate&lt;/code&gt; cmdlet in conjunction with the &lt;code&gt;Add-HPESvtCertificate&lt;/code&gt; cmdlet to populate the certificate trust store on the Virtual Controller.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE SimpliVity]]></title><description><![CDATA[HPE OmniStack REST API The HPE OmniStack REST API enables tool developers and integrators to manage HPE OmniStack assets efficiently…]]></description><link>https://developer.hpe.com/hpe-simplivity/home/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-simplivity/home/</guid><content:encoded>&lt;h2&gt;HPE OmniStack REST API&lt;/h2&gt;
&lt;p&gt;The HPE OmniStack REST API enables tool developers and integrators to manage HPE OmniStack assets efficiently, intuitively, and securely. Anyone can use the REST API to gather information about, analyze, configure, and troubleshoot HPE OmniStack hosts and federations. Moreover, tools developers can use the REST API to automate these tasks.&lt;/p&gt;
&lt;hr&gt;
&lt;h3&gt;About the API&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/whats-new/&quot;&gt;&lt;strong&gt;What&apos;s New&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/feature-and-function-support-by-rest-api-version&quot;&gt;&lt;strong&gt;Feature and function support by REST API version&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/architecture-and-object-model&quot;&gt;&lt;strong&gt;Architecture and object model&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/enumerations&quot;&gt;&lt;strong&gt;Enumerations&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/interactive-rest-api-reference&quot;&gt;&lt;strong&gt;Interactive REST API reference&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/errors-and-exceptions&quot;&gt;&lt;strong&gt;About REST API errors and exceptions&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/http-error-code-reference&quot;&gt;&lt;strong&gt;HTTP error code reference&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/authorization-related-rest-api-log-messages-and-responses&quot;&gt;&lt;strong&gt;Authorization-related REST API log messages and responses&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h3&gt;Use the API&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/sample-code&quot;&gt;&lt;strong&gt;Getting started&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/get-backups&quot;&gt;&lt;strong&gt;Getting backups&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/restore-virtual-machines&quot;&gt;&lt;strong&gt;Restoring virtual machines&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/versioning&quot;&gt;&lt;strong&gt;Getting the API version&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/optimize-get-requests&quot;&gt;&lt;strong&gt;Optimizing GET requests&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/metrics&quot;&gt;&lt;strong&gt;Getting performance and capacity metrics&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/powershell&quot;&gt;&lt;strong&gt;Using the API with PowerShell&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/python&quot;&gt;&lt;strong&gt;Using the API with Python&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/java&quot;&gt;&lt;strong&gt;Using the API with Java&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h3&gt;Authenticating&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/authenticating-against-hpe-omnistack-api&quot;&gt;&lt;strong&gt;Authenticating against the API&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/revoke-oath-2-token&quot;&gt;&lt;strong&gt;Revoking OAuth 2 tokens&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/request-authentication-by-emergency-grant&quot;&gt;&lt;strong&gt;Requesting authentication by emergency grant&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h3&gt;Manage certificates using PowerShell&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/hpesvtcli-powershell-commands&quot;&gt;&lt;strong&gt;Installing the certificate management PowerShell cmdlets&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/ps-getoathtoken&quot;&gt;&lt;strong&gt;Authenticating&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/adding-certificates&quot;&gt;&lt;strong&gt;Adding certificates to the trust store&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/getting-a-certificate&quot;&gt;&lt;strong&gt;Getting certificates from the trust store&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/getting-a-root-certificate&quot;&gt;&lt;strong&gt;Getting root certificates&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/remove-certificate&quot;&gt;&lt;strong&gt;Removing certificates from the trust store&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/platform/hpe-simplivity/tls-certificate-validation&quot;&gt;&lt;strong&gt;Disabling/enabling TLS certificate validation on HPE OmniStack for vSphere&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;Projects&lt;/h2&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/HewlettPackard/Docker-SimpliVity&quot;&gt;Docker&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;See the HPE SimpliVity REST API-based solution to stand up a Docker environment automatically on top of the HPE SimpliVity HCI.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://community.hpe.com/t5/Shifting-to-Software-Defined/Go-from-zero-to-Docker-in-minutes-with-HPE-hyperconverged/ba-p/6990234#.W2IksxYpDDs&quot;&gt;Go from zero to Docker in minutes with HPE hyperconverged&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/HewlettPackard/simplivity-vra-plugin&quot;&gt;HPE SimpliVity plugin for VMware vRealize Automation&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Learn more about the HPE SimpliVity plugin for VMware vRealize Automation (VRA) which enables delivery and administration of hybrid cloud applications and services. The plugin is based on the HPE SimpliVity REST API.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://community.hpe.com/t5/Shifting-to-Software-Defined/Simplified-Management-Operate-HPE-SimpliVity-data-services/ba-p/7013599#.W2CpIH58thE&quot;&gt;Simplified Management – Operate HPE SimpliVity data services directly through vRealize Automation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h3&gt;&lt;a href=&quot;http://github.com/HewlettPackard/simplivity-microfocus-hcm-plugin&quot;&gt;HPE SimpliVity plugin for Micro Focus Hybrid Cloud Management&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Learn more about the HPE SimpliVity plugin for Micro Focus Hybrid Cloud Management.&lt;/p&gt;
&lt;p&gt;The plugin is based on the HPE SimpliVity REST API.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://community.hpe.com/t5/Shifting-to-Software-Defined/HPE-SimpliVity-integration-for-Micro-Focus-Hybrid-Cloud/ba-p/7021447&quot;&gt;HPE SimpliVity integration for Micro Focus Hybrid Cloud Management&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h3&gt;HPE SimpliVity and the Citrix Cloud&lt;/h3&gt;
&lt;p&gt;Learn how to host and administer any Citrix service on HPE SimpliVity.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://community.hpe.com/t5/Shifting-to-Software-Defined/Just-announced-Automation-for-HPE-SimpliVity-and-Citrix-Cloud/ba-p/7022957#.W-L_m5NKiUl&quot;&gt;Read the article&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/SimpliVity-Citrix-HyperV-Plugin&quot;&gt;Download the HPE OmniStack for Hyper-V plugin for Citrix Cloud&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/SimpliVity-Citrix-VCenter-Plugin&quot;&gt;Download the HPE OmniStack for vSphere plugin for Citrix Cloud&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Language bindings&lt;/h2&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/HewlettPackard/hpe-simplivity-powershell&quot;&gt;PowerShell certificate management cmdlets&lt;/a&gt;&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://www.powershellgallery.com/packages/HPESvtCmdlets&quot;&gt;Download the cmdlets from the PowerShell Gallery&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/atkinsroy/HPESimpliVity&quot;&gt;PowerShell Module&lt;/a&gt;&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/atkinsroy/HPESimpliVity&quot;&gt;Visit the project website on GitHub&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/HewlettPackard/simplivity-python&quot;&gt;Python SDK&lt;/a&gt;&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://pypi.org/project/simplivity/&quot;&gt;Download the library from pypi&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/HewlettPackard/simplivity-go&quot;&gt;Go SDK&lt;/a&gt;&lt;/h3&gt;
&lt;h3&gt;&lt;a href=&quot;https://github.com/HewlettPackard/simplivity-ansible&quot;&gt;Ansible Library&lt;/a&gt;&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/simplivity-ansible/releases/tag/v1.0.0&quot;&gt;Download the library from GitHub&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Simplivity]]></title><description><![CDATA[Installing the certificate management PowerShell module You can download and install the HPE SimpliVity certificate management PowerShell…]]></description><link>https://developer.hpe.com/hpe-simplivity/hpesvtcli-powershell-commands/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-simplivity/hpesvtcli-powershell-commands/</guid><content:encoded>&lt;h1&gt;Installing the certificate management PowerShell module&lt;/h1&gt;
&lt;p&gt;You can download and install the HPE SimpliVity certificate management PowerShell module from the PowerShell Gallery.&lt;/p&gt;
&lt;p&gt;The module works with PowerShell Core (PS6) on Ubuntu or Windows 10 and with Windows PowerShell 5.1 (PS5.1).&lt;/p&gt;
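&lt;p&gt;If you are not sure which version you are running, you can check the built-in &lt;code&gt;$PSVersionTable&lt;/code&gt; variable before installing:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Display the PowerShell version in the current session.
$PSVersionTable.PSVersion
&lt;/code&gt;&lt;/pre&gt;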
&lt;h2&gt;Installing from the PowerShell Gallery&lt;/h2&gt;
&lt;p&gt;To install directly from PowerShell, run the following command from a PowerShell window:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;Install-Module -Name HPESvtCmdlets&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;If you are reinstalling to obtain a newer version, you might need to use the &lt;code&gt;-Force&lt;/code&gt; option. You can always uninstall a module using the &lt;code&gt;Uninstall-Module&lt;/code&gt; cmdlet.&lt;/p&gt;
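&lt;p&gt;For example, a minimal sketch of both operations using the standard PowerShellGet cmdlets:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Reinstall or upgrade to the latest published version of the module.
Install-Module -Name HPESvtCmdlets -Force

# Remove the module entirely.
Uninstall-Module -Name HPESvtCmdlets
&lt;/code&gt;&lt;/pre&gt;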
&lt;h2&gt;Loading the module&lt;/h2&gt;
&lt;p&gt;After you have downloaded the module, load it by using the &lt;code&gt;Import-Module&lt;/code&gt; cmdlet. For example:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;Import-Module .\HpeSvtCmdlets\HPESvtCmdlets.psm1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;To verify that the module imported properly, run the following command and verify that it contains the full set of cmdlets.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;get-module HpeSvtCmdlets&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;You see the list of cmdlets in the current version. For example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ModuleType Version Name ExportedCommands
---------- ------- ---- ----------------
Script 1.1.21.0 HPESvtCmdlets {Add-HPESvtCertificate, Get-HPESvtAuthToken, Get-HPESvtCertificate, Get-HPESvtRootCertificate...}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can get help on any command by calling &lt;code&gt;Get-Help&lt;/code&gt; (or just &lt;code&gt;help&lt;/code&gt; for short). For example:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;Get-Help Add-HPESvtCertificate&lt;/code&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Simplivity]]></title><description><![CDATA[HTTP error code reference Use this reference for information about the HTTP error codes returned by the REST API. The error codes are…]]></description><link>https://developer.hpe.com/hpe-simplivity/http-error-code-reference/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-simplivity/http-error-code-reference/</guid><content:encoded>&lt;h2&gt;HTTP error code reference&lt;/h2&gt;
&lt;p&gt;Use this reference for information about the HTTP error codes returned by the REST API. The error codes are returned in the &lt;code&gt;status&lt;/code&gt; field of the JSON error response body.&lt;/p&gt;
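&lt;p&gt;As an illustration only, the following PowerShell sketch shows one way a client can read the &lt;code&gt;status&lt;/code&gt; field from a failed request; the URL and token are placeholders, and the exact contents of &lt;code&gt;ErrorDetails.Message&lt;/code&gt; can vary by PowerShell version:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;try {
    # Placeholder URL and token; any request that fails returns a JSON error body.
    Invoke-RestMethod -Uri &apos;https://[host]/api/policies&apos; -Headers @{Authorization = &apos;Bearer &apos; + $access_token}
}
catch {
    # The JSON error body is usually surfaced in ErrorDetails.Message.
    $err = $_.ErrorDetails.Message | ConvertFrom-Json
    Write-Host (&apos;HTTP status: &apos; + $err.status)
}
&lt;/code&gt;&lt;/pre&gt;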
&lt;h3&gt;400---Bad Request&lt;/h3&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Exception&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;InvalidBodyException&lt;/td&gt;
&lt;td&gt;You tried to pass invalid information in the body of a POST or PUT request.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;InvalidParameterException&lt;/td&gt;
&lt;td&gt;You passed an invalid query parameter.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;InvalidSortCriteriaException&lt;/td&gt;
&lt;td&gt;You passed an invalid limit or sort criteria query parameter.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ObjectInUseException&lt;/td&gt;
&lt;td&gt;You tried to DELETE a policy that is in use by a datastore or virtual machine.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PermissionDeniedException&lt;/td&gt;
&lt;td&gt;You tried to view or modify an object without the required permissions.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;TypeMismatchException&lt;/td&gt;
&lt;td&gt;You passed in an incorrect object type. For example, you passed in a String, but an Integer was expected.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3&gt;401---Unauthorized&lt;/h3&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Exception&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;SvtExceptions$UnauthorizedException&lt;/td&gt;
&lt;td&gt;You submitted an invalid access token or the access token has expired. Tokens expire after 24 hours or after 10 minutes of inactivity.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3&gt;404---Not Found&lt;/h3&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Exception&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;ObjectNotFoundException&lt;/td&gt;
&lt;td&gt;You tried to GET or act on an object, but the object cannot be found.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3&gt;405---Method Not Allowed&lt;/h3&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Exception&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;HttpRequestMethodNotSupportedException&lt;/td&gt;
&lt;td&gt;You tried to perform an HTTP operation on a resource that is not supported.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3&gt;413---Payload Too Large&lt;/h3&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Exception&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;BodyTooLargeException&lt;/td&gt;
&lt;td&gt;You submitted a request body that is too large.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3&gt;415---Unsupported Media Type&lt;/h3&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Exception&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;HttpMediaTypeNotSupportedException&lt;/td&gt;
&lt;td&gt;You submitted a PUT or POST request with a content type that is not valid. For example: &lt;code&gt;Content type &apos;application/xml&apos; not supported.&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3&gt;500---Internal Server Error&lt;/h3&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Exception&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;InternalServerException&lt;/td&gt;
&lt;td&gt;You submitted a request, but the server encountered unexpected problems. Check the log for more details.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3&gt;502---Bad Gateway&lt;/h3&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Exception&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Bad Gateway&lt;/td&gt;
&lt;td&gt;There are too many simultaneous REST requests.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3&gt;504---Gateway Timeout&lt;/h3&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Exception&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Gateway Time-out&lt;/td&gt;
&lt;td&gt;The REST request took more than 60 seconds to complete.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;</content:encoded></item><item><title><![CDATA[Simplivity]]></title><description><![CDATA[Interactive REST API reference The REST API service provides a comprehensive, intuitive, and interactive real-time reference as a web page…]]></description><link>https://developer.hpe.com/hpe-simplivity/interactive-rest-api-reference/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-simplivity/interactive-rest-api-reference/</guid><content:encoded>&lt;h1&gt;Interactive REST API reference&lt;/h1&gt;
&lt;p&gt;The REST API service provides a comprehensive, intuitive, and interactive real-time reference as a web page on all Virtual Controllers that are running the REST API service. The reference describes each object type, along with the supported operations, request input parameters, response model, and response codes. Furthermore, the REST API reference provides the Try it out! option that enables you to perform REST API operations on a federation. You must log in to an HPE OmniStack host to use the Try it out! option.&lt;/p&gt;
&lt;p&gt;To access the interactive REST API reference, obtain the IP address of a Virtual Controller in the federation that you want to manage. Then, in a web browser, navigate to the following URL:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt; https://[Virtual Controller IP address]/api/index.html
&lt;/code&gt;&lt;/pre&gt;</content:encoded></item><item><title><![CDATA[Simplivity]]></title><description><![CDATA[Using the API with Java This sample Java code performs authentication, issues example GET requests, performs a POST operation (in this case…]]></description><link>https://developer.hpe.com/hpe-simplivity/java/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-simplivity/java/</guid><content:encoded>&lt;h1&gt;Using the API with Java&lt;/h1&gt;
&lt;p&gt;This sample Java code performs authentication, issues example GET requests, performs a POST operation (in this case, renaming a backup), and monitors the status of the operation using a task instance.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;package main.java;
import java.security.SecureRandom;
import java.security.cert.X509Certificate;
import java.util.Arrays;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;
import javax.net.ssl.SSLSession;
import javax.net.ssl.HostnameVerifier;
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpMethod;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.util.LinkedMultiValueMap;
import org.springframework.util.MultiValueMap;
import org.springframework.web.client.RestTemplate;
import org.json.JSONObject;
import com.sun.org.apache.xerces.internal.impl.dv.util.Base64;

public class RestClient {

    private static String access_token;
    private String BASE_URL;
    static final String HMS_USERNAME = &quot;HMS_USER&quot;;
    static final String HMS_PASSWORD = &quot;HMS_PASS&quot;;
    public RestClient(String hostIp)
    {
        BASE_URL = &quot;https://&quot;+hostIp+&quot;/api/&quot;;
    }

    // Create a trust manager that does not validate certificate chains.
    private void enableSSL()
    {
        TrustManager[] trustAllCerts = new TrustManager[]
        { new X509TrustManager()
        {
            public X509Certificate[] getAcceptedIssuers()
            {
                return new X509Certificate[0];
            }
            public void checkClientTrusted(X509Certificate[] certs, String authType)
            {
            }
            public void checkServerTrusted(X509Certificate[] certs, String authType)
            {
            }
        } };
        try {
            SSLContext sc = SSLContext.getInstance(&quot;TLSv1.2&quot;);
            sc.init(null, trustAllCerts, new SecureRandom());
            HttpsURLConnection.setDefaultSSLSocketFactory(sc.getSocketFactory());
            HttpsURLConnection.setDefaultHostnameVerifier(new HostnameVerifier() {
                public boolean verify(String hostname, SSLSession session) {
                    return true;
                }
            });

        } catch (Exception e) {
        }
    }

    /*
     * Authenticate user and retrieve access token.
     */
     public String getAccessToken()
    {
        enableSSL();

        String encoding = Base64.encode(&quot;simplivity:&quot;.getBytes());

        RestTemplate restTemplate = new RestTemplate();
        MultiValueMap&amp;#x3C;String, String&gt; body = new LinkedMultiValueMap&amp;#x3C;String, String&gt;();
        body.add(&quot;username&quot;, HMS_USERNAME);
        body.add(&quot;password&quot;, HMS_PASSWORD);
        body.add(&quot;grant_type&quot;, &quot;password&quot;);
        HttpHeaders headers = new HttpHeaders();
        headers.set(&quot;Accept&quot;, &quot;application/json&quot;);
        headers.set(&quot;Authorization&quot;, &quot;Basic &quot; + encoding);
        HttpEntity&amp;#x3C;?&gt; entity = new HttpEntity&amp;#x3C;Object&gt;(body, headers);
        ResponseEntity&amp;#x3C;String&gt; res = restTemplate.exchange(
            BASE_URL+&quot;oauth/token&quot;, HttpMethod.POST, entity,
            String.class);
        JSONObject jsonObj = new JSONObject(res.getBody());
        access_token = (String) jsonObj.get(&quot;access_token&quot;);
        System.out.println(&quot;Authenticated user and retrieved access token: &quot;+ access_token);
        return access_token;
    }

    /*
     * Issue a GET request: GET /policies.
     */
    public Object getPolicies()
    {
        RestTemplate restTemplate = new RestTemplate();
        HttpHeaders headers = new HttpHeaders();
        headers.setAccept(Arrays.asList(MediaType.APPLICATION_JSON));
        headers.set(&quot;Authorization&quot;, &quot;Bearer &quot; + access_token);
        HttpEntity&amp;#x3C;?&gt; entity = new HttpEntity&amp;#x3C;Object&gt;(&quot;parameters&quot;, headers);
        ResponseEntity&amp;#x3C;String&gt; res = restTemplate.exchange(BASE_URL+&quot;policies&quot;, HttpMethod.GET, entity, String.class);
        JSONObject jsonObj = new JSONObject(res.getBody());
        Object policies=  jsonObj.get(&quot;policies&quot;);
        System.out.println(policies.toString());
        return policies;
    }

    /*
     * Issue a GET request with sorting and filtering:
     * GET the first 100 policies
     * sorted in ascending order by name
     * and show only the name and rules fields.
     */
    public Object getFirst100Policies()
    {
        RestTemplate restTemplate = new RestTemplate();
        HttpHeaders headers = new HttpHeaders();
        headers.set(&quot;Accept&quot;, &quot;application/json&quot;);
        headers.set(&quot;Authorization&quot;, &quot;Bearer &quot; + access_token);
        HttpEntity&amp;#x3C;?&gt; entity = new HttpEntity&amp;#x3C;Object&gt;(&quot;parameters&quot;, headers);
        ResponseEntity&amp;#x3C;String&gt; res = restTemplate.exchange(BASE_URL+&quot;policies?fields=name,rules&amp;#x26;limit=100&amp;#x26;offset=0&amp;#x26;sort=name&amp;#x26;order=ascending&quot;,
            HttpMethod.GET, entity, String.class);
        JSONObject jsonObj = new JSONObject(res.getBody());
        Object policies=  jsonObj.get(&quot;policies&quot;);
        System.out.println(policies.toString());
        System.out.println(&quot;Limit: &quot;+ jsonObj.get(&quot;limit&quot;));
        System.out.println(&quot;Count: &quot;+ jsonObj.get(&quot;count&quot;));
        return policies;
    }

    /*
     * Issue a POST request: Create a new policy.
     */
    public void createNewPolicy()
    {
        RestTemplate restTemplate = new RestTemplate();
        // Set a custom media type.
        MediaType myMediaType = new MediaType(&quot;application&quot;, &quot;vnd.simplivity.v1+json&quot;);

        // Set the headers.
        HttpHeaders headers = new HttpHeaders();
        headers.setAccept(Arrays.asList(myMediaType));
        headers.setContentType(myMediaType);
        headers.set(&quot;Authorization&quot;, &quot;Bearer &quot; + access_token);

        // Form the POST body.
        String policyMo =  &quot;{\&quot;name\&quot;: \&quot;randomPolicyName4\&quot;}&quot;;
        HttpEntity&amp;#x3C;?&gt; entity = new HttpEntity&amp;#x3C;String&gt;(policyMo, headers);

        // Issue the POST operation and expect a task object in return.
        ResponseEntity&amp;#x3C;String&gt; res = restTemplate.exchange(BASE_URL+&quot;policies&quot;, HttpMethod.POST, entity, String.class);
        JSONObject jsonObj = new JSONObject(res.getBody());
        Object task = jsonObj.get(&quot;task&quot;);
        JSONObject taskJson = new JSONObject(task.toString());
        String taskId = taskJson.getString(&quot;id&quot;);
        String state = taskJson.getString(&quot;state&quot;);

        // Monitor the status of the policy creation operation by using a loop to query
        // the task while this task is IN_PROGRESS.
        // The state field in the JSON response body indicates the status.
        while(state.equals(&quot;IN_PROGRESS&quot;))
        {
            // Wait one second.
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            res = restTemplate.getForEntity(BASE_URL+&quot;tasks/&quot;+taskId, String.class);
            jsonObj = new JSONObject(res.getBody());
            task = jsonObj.get(&quot;task&quot;);
            taskJson = new JSONObject(task.toString());
            state = taskJson.getString(&quot;state&quot;);
        }
        System.out.println(&quot;Task object: &quot; +task.toString());
    }

    public static void main(String[] args)
    {
        RestClient restClient = new RestClient(&quot;10.150.1.71&quot;);
        // Authenticate user and retrieve access token.
        restClient.getAccessToken();
        // GET policies.
        restClient.getPolicies();
        // GET first 100 policies sorted in ascending order by name
        // and showing only the name and rules fields.
        restClient.getFirst100Policies();
        // POST /policies: Create a new policy.
        restClient.createNewPolicy();
    }
}
&lt;/code&gt;&lt;/pre&gt;</content:encoded></item><item><title><![CDATA[Simplivity]]></title><description><![CDATA[Getting performance and capacity metrics You use the REST API to view IOPs, latency, and throughput data over time. This information is…]]></description><link>https://developer.hpe.com/hpe-simplivity/metrics/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-simplivity/metrics/</guid><content:encoded>&lt;h1&gt;Getting performance and capacity metrics&lt;/h1&gt;
&lt;p&gt;You use the REST API to view IOPS, latency, and throughput data over time. This information is available for individual &lt;code&gt;hosts&lt;/code&gt;, &lt;code&gt;omnistack_clusters&lt;/code&gt;, and &lt;code&gt;virtual_machines&lt;/code&gt;. The metrics that the REST API provides include data about both reads and writes, as well as the date of the metrics sample.&lt;/p&gt;
&lt;p&gt;The REST API collects metrics at five-second intervals.&lt;/p&gt;
&lt;p&gt;When requesting metrics data, you can specify the following parameters:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;resolution&lt;/code&gt;: Specify one of the following values:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;SECOND&lt;/li&gt;
&lt;li&gt;MINUTE&lt;/li&gt;
&lt;li&gt;HOUR&lt;/li&gt;
&lt;li&gt;DAY&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;time_offset&lt;/code&gt;: Specify the &lt;code&gt;time_offset&lt;/code&gt; from the current date and time or a &lt;code&gt;time_offset&lt;/code&gt; as a specific date/time.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;range&lt;/code&gt;: Specify the desired range as a number of seconds from the &lt;code&gt;time_offset&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The following example shows how to find the last ten seconds of metrics data for a specific &lt;code&gt;omnistack_cluster&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;GET /api/omnistack_clusters/f25bc244-e70f-4ffb-833d-e797bc2bc231/metrics?range=10&amp;#x26;resolution=SECOND
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This request returns an array of data points similar to the following example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{
  &quot;metrics&quot;: [
    {
      &quot;name&quot;: &quot;iops&quot;,
      &quot;data_points&quot;: [
        {
          &quot;reads&quot;: 0,
          &quot;writes&quot;: 6,
          &quot;date&quot;: &quot;2016-09-08T16:11:20Z&quot;
        },
        {
          &quot;reads&quot;: 0,
          &quot;writes&quot;: 0,
          &quot;date&quot;: &quot;2016-09-08T16:11:25Z&quot;
        }
      ]
    },
    {
      &quot;name&quot;: &quot;throughput&quot;,
      &quot;data_points&quot;: [
        {
          &quot;reads&quot;: 0,
          &quot;writes&quot;: 537,
          &quot;date&quot;: &quot;2016-09-08T16:11:20Z&quot;
        },
        {
          &quot;reads&quot;: 0,
          &quot;writes&quot;: 0,
          &quot;date&quot;: &quot;2016-09-08T16:11:25Z&quot;
        }
      ]
    },
    {
      &quot;name&quot;: &quot;latency&quot;,
      &quot;data_points&quot;: [
        {
          &quot;reads&quot;: 0,
          &quot;writes&quot;: 1259,
          &quot;date&quot;: &quot;2016-09-08T16:11:20Z&quot;
        },
        {
          &quot;reads&quot;: 0,
          &quot;writes&quot;: 0,
          &quot;date&quot;: &quot;2016-09-08T16:11:25Z&quot;
        }
      ]
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;
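&lt;p&gt;The same request can be issued from a script. The following PowerShell sketch is illustrative only; it assumes you have already obtained an OAuth access token (as shown in the PowerShell examples in this documentation) and reuses the cluster ID from the example above:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Illustrative sketch: assumes $access_token already holds a valid OAuth token and that
# TLS trust for the self-signed certificate has been handled (see Using the API with PowerShell).
$clusterId = &apos;f25bc244-e70f-4ffb-833d-e797bc2bc231&apos;
$headers   = @{Authorization = &apos;Bearer &apos; + $access_token}
$uri       = &apos;https://[host]/api/omnistack_clusters/&apos; + $clusterId + &apos;/metrics?range=10&amp;#x26;resolution=SECOND&apos;

$response = Invoke-RestMethod -Uri $uri -Headers $headers

# Print each IOPS data point (reads and writes per sample date).
$iops = $response.metrics | Where-Object { $_.name -eq &apos;iops&apos; }
$iops.data_points | ForEach-Object { $_.date + &apos;  reads=&apos; + $_.reads + &apos;  writes=&apos; + $_.writes }
&lt;/code&gt;&lt;/pre&gt;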
&lt;h1&gt;Performance chart&lt;/h1&gt;
&lt;p&gt;The REST API provides a simple one page performance chart that allows you to request host metrics and display them using &lt;code&gt;Chart.js.&lt;/code&gt; For example:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://developer.hpe.com/uploads/media/2018/7/svt-rest-api-perf-chart-1532710805262.png&quot; alt=&quot;Example of the performance chart&quot;&gt;&lt;/p&gt;
&lt;p&gt;The code to generate the sample page looks like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;#x3C;!DOCTYPE html&gt;
&amp;#x3C;!-- Copyright © 2017
SimpliVity Corporation - All Rights Reserved --&gt;
&amp;#x3C;html&gt;
&amp;#x3C;head&gt;
    &amp;#x3C;script src=&quot;https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.17.1/moment.min.js&quot;&gt;&amp;#x3C;/script&gt;
   &amp;#x3C;script src=&quot;lib/jquery-1.11.3.min.js&quot;&gt;&amp;#x3C;/script&gt;
    &amp;#x3C;script src=&quot;lib/Chart.js&quot;&gt;&amp;#x3C;/script&gt;
&amp;#x3C;/head&gt;

&amp;#x3C;body&gt;
    &amp;#x3C;canvas id=&quot;myChart&quot; width=&quot;400&quot; height=&quot;90%&quot;&gt;&amp;#x3C;/canvas&gt;
    &amp;#x3C;div id=&quot;query params&quot;&gt;
        &amp;#x3C;form&gt;
        Host:
        &amp;#x3C;select name=&quot;host&quot; id=&quot;host&quot; onchange=&quot;getHosts()&quot;&gt;
            &amp;#x3C;option&gt;Fetching Hosts&amp;#x3C;/option&gt;
        &amp;#x3C;/select&gt;
        Resolution:
        &amp;#x3C;select name=&quot;resolution&quot; id=&quot;resolution&quot;&gt;
            &amp;#x3C;option&gt;Second&amp;#x3C;/option&gt;
            &amp;#x3C;option&gt;Minute&amp;#x3C;/option&gt;
            &amp;#x3C;option&gt;Hour&amp;#x3C;/option&gt;
            &amp;#x3C;option&gt;Day&amp;#x3C;/option&gt;
        &amp;#x3C;/select&gt;
        &amp;#x3C;input type=button value=&quot;Show Metrics&quot; onclick=&quot;fetchHostMetrics();&quot;&gt;
        &amp;#x3C;/form&gt;
    &amp;#x3C;/div&gt;

&amp;#x3C;script&gt;
    fetchHosts();

    function fetchHosts() {
        $.ajax({
            url: &apos;hosts?fields=name,id&apos;,
            type: &apos;GET&apos;,
            dataType : &quot;json&quot;,
            statusCode: { 401: function() {alert(&quot;Invalid Credentials - Login Again&quot;); }}
        })
        .done(function(json) {
            // reset the combo box with the list of hosts
            var hosts = json.hosts;
            var sel = document.getElementById(&apos;host&apos;);
            sel.innerHTML = &quot;&quot;;
            hosts.forEach(function(host) {
                var opt = document.createElement(&apos;option&apos;);
                opt.innerHTML = host.name;
                opt.value = host.id;
                sel.appendChild(opt);
            })
        })

        .fail(function( xhr, status, errorThrown ) {
            alert( &quot;Sorry, there was a problem!&quot; );
            console.log( &quot;Error: &quot; + errorThrown );
            console.log( &quot;Status: &quot; + status );
            console.dir( xhr );
        })
    }

    function fetchHostMetrics() {
        // find the selected host
        var sel = document.getElementById(&apos;host&apos;);
        var hostid = sel.value;
        // find the selected resolution
        var sel = document.getElementById(&apos;resolution&apos;);
        var resolution = sel.value;

        $.ajax({
            url: &apos;hosts/&apos;+hostid+&apos;/metrics?resolution=&apos;+resolution,
            type: &apos;GET&apos;,
            dataType : &quot;json&quot;,
            statusCode: { 401: function() {alert(&quot;Invalid Credentials - Login Again&quot;); }}
        })
        .done(function(json) {createLineChart(json)})
        .fail(function( xhr, status, errorThrown ) {
            alert( &quot;Sorry, there was a problem!&quot; );
            console.log( &quot;Error: &quot; + errorThrown );
            console.log( &quot;Status: &quot; + status );
            console.dir( xhr );
        })
    }

    function processJSONintoDatasets(json) {
        // create 6 datasets from the JSON
        var iops_reads = [];
        var iops_writes = [];
        var latency_reads = [];
        var latency_writes = [];
        var throughput_reads = [];
        var throughput_writes = [];

        // process all the metrics
        json.metrics.forEach(function(metric) {
            if (metric.name == &quot;iops&quot;) {
                metric.data_points.forEach(function(point) {
                    iops_reads.push({x:point.date, y: point.reads});
                    iops_writes.push({x:point.date, y: point.writes});
                })
            }

            else if (metric.name == &quot;latency&quot;) {
                 metric.data_points.forEach(function(point) {
                    latency_reads.push({x: point.date, y: point.reads});
                    latency_writes.push({x: point.date, y: point.writes});
                })
            }

            else if (metric.name == &quot;throughput&quot;) {
                 metric.data_points.forEach(function(point) {
                    throughput_reads.push({x:point.date, y: point.reads});
                    throughput_writes.push({x:point.date, y: point.writes});
                })
            }
        })

        // create the data object
        var data = {
            datasets: [
                { label: &apos;IOPS read&apos;, data: iops_reads, yAxisID: &quot;y-axis-left&quot;, borderColor: &apos;rgba(31,120,180,1)&apos;,backgroundColor: &apos;rgba(31,120,180, 0.1)&apos;},
                { label: &apos;IOPS written&apos;, data: iops_writes, yAxisID: &quot;y-axis-left&quot;, borderColor: &quot;#33a02c&quot;},
                { label: &apos;latency read&apos;, data: latency_reads, yAxisID: &quot;y-axis-left&quot;, borderColor: &quot;#e31a1c&quot; },
                { label: &apos;latency writes&apos;, data: latency_writes, yAxisID: &quot;y-axis-left&quot;, borderColor:&quot;#ff7f00&quot; },
                { label: &apos;throughput read&apos;, data:throughput_reads, yAxisID: &quot;y-axis-right&quot;, borderColor:&quot;#6a3d9a&quot; },
                { label: &apos;throughput writes&apos;, data: throughput_writes, yAxisID: &quot;y-axis-right&quot;, borderColor: &quot;#b15928&quot; }
            ]
        }

        // now set the common options
        data.datasets.forEach(function(dataset) {
            dataset.fill = true;
            dataset.lineTension = 0;
        })
        return data;
    }

    var myLineChart;
    function createLineChart(json) {
        var ctx = document.getElementById(&quot;myChart&quot;);
        var data =processJSONintoDatasets(json);

        // remove any existing chart before creating a new one
        if (myLineChart)
            myLineChart.destroy();

        myLineChart = new Chart(ctx, {
            type: &apos;line&apos;,
            data: data,
            options: {
                responsive: true,
                title:{
                    display:true,
                    text:&quot;Metrics&quot;
                },
                scales: {
                    xAxes: [{
                        type: &quot;time&quot;,
                        display: true,
                        scaleLabel: {
                            display: true,
                            labelString: &apos;Time (GMT)&apos;
                        }
                    }],

                    yAxes: [{
                        display: true,
                        scaleLabel: {
                            display: true,
                            labelString:&apos;IOPS/Latency&apos;
                        },
                        position: &quot;left&quot;,
                        &quot;id&quot;: &quot;y-axis-left&quot;
                    }, {
                        display: true,
                        scaleLabel: {
                            display: true,
                            labelString: &apos;Throughput&apos;
                        },
                        position: &quot;right&quot;,
                        &quot;id&quot;: &quot;y-axis-right&quot;
                    }]
                }
            }
        });
    }
    &amp;#x3C;/script&gt;
&amp;#x3C;/body&gt;
&amp;#x3C;/html&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;Capacity metrics&lt;/h1&gt;
&lt;p&gt;Each HPE OmniStack &lt;code&gt;host&lt;/code&gt; and &lt;code&gt;omnistack_cluster&lt;/code&gt; object returns a set of capacity metrics when you perform a GET operation. You can query for a specific period of time. The REST API provides the following capacity metrics for &lt;code&gt;host&lt;/code&gt; and &lt;code&gt;omnistack_cluster&lt;/code&gt; objects:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;allocated_capacity&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;capacity_savings&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;compression_ratio&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;deduplication_ratio&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;efficiency_ratio&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;free_space&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;local_backup_capacity&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;remote_backup_capacity&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;stored_compressed_data&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;stored_uncompressed_data&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;stored_virtual_machine_data&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;used_capacity&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;used_logical_capacity&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The following example returns &lt;code&gt;free_space&lt;/code&gt; (in bytes) for the last five seconds for a specific &lt;code&gt;host&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;GET /api/hosts/422ac72f-8437-6e09-eb44-6ffc8bde0aad/capacity?fields=free_space&amp;#x26;range=5&amp;#x26;resolution=SECOND
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The JSON response:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{
  &quot;metrics&quot;: [
    {
      &quot;name&quot;: &quot;free_space&quot;,
      &quot;data_points&quot;: [
        {
          &quot;value&quot;: 5170371316326,
          &quot;date&quot;: &quot;2016-09-08T16:29:50Z&quot;
        },
        {
          &quot;value&quot;: 5170369843492,
          &quot;date&quot;: &quot;2016-09-08T16:29:55Z&quot;
        }
      ]
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;</content:encoded></item><item><title><![CDATA[Simplivity]]></title><description><![CDATA[Optimizing GET requests You can increase the efficiency of the REST API by minimizing the amount of processing that occurs on the server and…]]></description><link>https://developer.hpe.com/hpe-simplivity/optimize-get-requests/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-simplivity/optimize-get-requests/</guid><content:encoded>&lt;h1&gt;Optimizing GET requests&lt;/h1&gt;
&lt;p&gt;You can increase the efficiency of the REST API by minimizing the amount of processing that occurs on the server and the amount of data that is transmitted over the network. This page describes a number of ways that you can sort and filter the output of GET requests to optimize them. You can increase the speed of these transactions by eliminating the transfer of unnecessary content.&lt;/p&gt;
&lt;h2&gt;Request specific fields&lt;/h2&gt;
&lt;p&gt;If you only need a small number of fields, you can request only these fields. The following example requests only the &lt;code&gt;id&lt;/code&gt; and &lt;code&gt;name&lt;/code&gt; fields:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;GET /api/backups?fields=id,name
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Limit the number of objects by relationship&lt;/h2&gt;
&lt;p&gt;If you only want to retrieve a limited number of objects, you can use filtering parameters to return only objects associated with a particular resource. The following example queries for all backups of a specific virtual machine:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;GET /api/backups?fields=id,name&amp;#x26;virtual_machine_id=729861fd-a006-4849-bf08-b232551989c0
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Sort returned data&lt;/h2&gt;
&lt;p&gt;To increase the efficiency of the returned data, you can sort these results by most fields. The following example sorts results by the &lt;code&gt;name&lt;/code&gt; field:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;GET /api/backups?fields=id,name&amp;#x26;virtual_machine_id=729861fd-a006-4849-bf08-b232551989c0&amp;#x26;sort=name
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;For most fields, the sort is a case-sensitive lexicographic sort. Numbers come before capital letters, which come before lowercase letters. The REST API service sorts backup names case-insensitively.&lt;/p&gt;
&lt;h2&gt;Request objects in sorted/filtered chunks&lt;/h2&gt;
&lt;p&gt;You can use the &lt;code&gt;offset&lt;/code&gt; and &lt;code&gt;limit&lt;/code&gt; fields to request objects in sorted/filtered chunks. You can set the limit to any integer value up to 5000. The following example sorts results by the &lt;code&gt;name&lt;/code&gt; field and requests chunks of 50 results at a time:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;GET /api/backups?fields=id,name&amp;#x26;virtual_machine_id=729861fd-a006-4849-bf08-b232551989c0&amp;#x26;sort=name&amp;#x26;offset=50&amp;#x26;limit=50
&lt;/code&gt;&lt;/pre&gt;
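&lt;p&gt;To walk an entire collection, a client can keep advancing the offset until a page comes back with fewer objects than the limit. The following PowerShell sketch is illustrative only; it assumes &lt;code&gt;$access_token&lt;/code&gt; already holds a valid OAuth token and that TLS trust has been handled as described in the PowerShell section:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$headers = @{Authorization = &apos;Bearer &apos; + $access_token}
$limit   = 50
$offset  = 0
$all     = @()

do {
    $uri  = &apos;https://[host]/api/backups?fields=id,name&amp;#x26;sort=name&apos; +
            &apos;&amp;#x26;offset=&apos; + $offset + &apos;&amp;#x26;limit=&apos; + $limit
    $page = (Invoke-RestMethod -Uri $uri -Headers $headers).backups
    $all += $page
    $offset += $limit
} while ($page.Count -eq $limit)

# $all now contains every backup returned by the filtered query.
&apos;Retrieved &apos; + $all.Count + &apos; backups&apos;
&lt;/code&gt;&lt;/pre&gt;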
&lt;p&gt;The REST API always performs sorting and filtering on enumerated values using the full uppercase enumerations. The REST API ignores the value of the case parameter when you sort or filter by fields with enumerated values.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Simplivity]]></title><description><![CDATA[Using the API with Windows PowerShell This sample Windows PowerShell code performs authentication, issues an example GET request, performs a…]]></description><link>https://developer.hpe.com/hpe-simplivity/powershell/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-simplivity/powershell/</guid><content:encoded>&lt;h1&gt;Using the API with Windows PowerShell&lt;/h1&gt;
&lt;p&gt;This sample Windows PowerShell code performs authentication, issues an example GET request, performs a POST operation (in this case, renaming a backup), and monitors the status of the operation using a task instance.&lt;/p&gt;
&lt;p&gt;When using Windows PowerShell to develop clients:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Allow the use of self-signed SSL certificates, as follows: &lt;code&gt;[System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $True }&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Alternatively, you can use &lt;code&gt;https://github.com/Jaykul/Tunable-SSL-Validator&lt;/code&gt; with the &lt;code&gt;-Insecure&lt;/code&gt; option.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use PowerShell version 3 or above to support &lt;code&gt;Invoke-RestMethod&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;HTTP Basic Authentication requires the manual creation of the base 64 encoding, as follows:&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code&gt;$base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes((&quot;{0}:{1}&quot; -f &quot;simplivity&quot;,&quot;&quot;)))
$body = @{grant_type=&apos;password&apos;;username=&apos;username&apos;;password=&apos;password&apos;}
Invoke-RestMethod -Headers @{Authorization=(&quot;Basic {0}&quot; -f $base64AuthInfo)} -Body $body -Method Post
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Consider the following code:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;[System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $True }
$token =  Invoke-RestMethod -Uri https://10.150.1.93/api/oauth/token -Headers @{Authorization=(&quot;Basic {0}&quot; -f $base64AuthInfo)} -Body $body -Method Post
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This code returns a response similar to the following:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;access_token  : 45718cb1-528b-430f-8a6d-89e25ae9c7ea
token_type    : bearer
expires_in    : 83649
scope         : read write
updated_at    : 1459805657628
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This example shows how to make a GET call for all backups:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$header = @{Authorization=&apos;Bearer &apos;+$token.access_token}
$backups = Invoke-RestMethod -Insecure -Header $header -Uri https://10.150.1.93/api/backups
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This example finds the type of the backup at index 22: &lt;code&gt;$backups.backups[22].type&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;This example renames this backup:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$header[&apos;Content-Type&apos;]=&apos;application/vnd.simplivity.v1+json&apos;
$uri = &apos;https://10.150.1.93/api/backups/&apos;
$uri += $backups.backups[22].id
$uri += &apos;/rename&apos;
$body = @{backup_name=&apos;new_name&apos;}
$body = $body | ConvertTo-Json
TunableSSLValidator\Invoke-RestMethod -Insecure -Header $header -Uri $uri -Method Post -Body $body
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Sample code&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;# Set the base URL for REST API requests.
$BASE_URL = &apos;https://[host]/api/&apos;

# Set the username and password.
$hms_username = &apos;HMS_USER&apos;
$hms_password = &apos;HMS_PASS&apos;

# Allow the use of self signed certificates.
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $True }

# Create a base64 encoding for HTTP Authentication.
$base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes((&quot;{0}:{1}&quot; -f &quot;simplivity&quot;,&quot;&quot;)))

# Create a JSON body with username, password, and grant_type.
$body = @{grant_type=&apos;password&apos;;username=$hms_username;password=$hms_password}

# Authenticate user and generate access token.
$url = $BASE_URL+&apos;oauth/token&apos;
$header = @{Authorization=(&quot;Basic {0}&quot; -f $base64AuthInfo)}
$response= Invoke-RestMethod -Uri $url -Headers $header -Body $body -Method Post
$access_token = $response.access_token;

# Add the access_token to the header.
$header =@{Authorization=&apos;Bearer &apos;+$access_token}

# Issue a GET request: GET /backups.
$url = $BASE_URL+&apos;backups&apos;
$backups = Invoke-RestMethod -Header $header -Uri $url

# Issue a POST request: Rename the backup at index 2 from the GET results.
# Find the ID of that backup from the GET results.
$backupid = $backups.backups[2].id

# Create a JSON body for the rename action.
$body = @{backup_name=&apos;new_name&apos;}
$body = $body | ConvertTo-Json

# Form the URI.
$url = $BASE_URL+&apos;backups/&apos;
$url += $backupid
$url += &apos;/rename&apos;

# Issue the POST operation and expect a task object in return.
$response = Invoke-RestMethod -Header $header -Uri $url -Method Post -Body $body -ContentType &apos;application/vnd.simplivity.v1+json&apos;

# Monitor the status of the rename operation by using a loop to query the task while this task is IN_PROGRESS.
# The state field in the JSON response body indicates the status.
$taskid = $response.task.id
$state = $response.task.state
$url = $BASE_URL+&apos;tasks/&apos;+$taskid
  while ($state -eq &apos;IN_PROGRESS&apos;)
  {
    # Wait one second and try again.
    Start-Sleep -s 1
    $response = Invoke-RestMethod -Header $header -Uri $url
    $state = $response.task.state
  }

# Print out the task result.
$response
&lt;/code&gt;&lt;/pre&gt;</content:encoded></item><item><title><![CDATA[Simplivity]]></title><description><![CDATA[Authenticating The certificate management cmdlets use OAuth to authenticate. Use the Get-HPESvtAuthToken cmdlet to obtain an OAuth access…]]></description><link>https://developer.hpe.com/hpe-simplivity/ps-getoathtoken/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-simplivity/ps-getoathtoken/</guid><content:encoded>&lt;h1&gt;Authenticating&lt;/h1&gt;
&lt;p&gt;The certificate management cmdlets use OAuth to authenticate.&lt;/p&gt;
&lt;p&gt;Use the &lt;code&gt;Get-HPESvtAuthToken&lt;/code&gt; cmdlet to obtain an OAuth access_token. The access_token is stored in an encrypted file and is automatically used by any of the PowerShell cmdlets during the same session. The access_token has a ten-minute idle lifetime.&lt;/p&gt;
&lt;p&gt;To add or delete a certificate, you must authenticate as a user in the hypervisor management system (HMS) administrator role or as the &lt;code&gt;svtcli&lt;/code&gt; user. To get certificates, you need to authenticate with the HMS, but the HMS administrator role is not required.&lt;/p&gt;
&lt;p&gt;You need to obtain a token from each Virtual Controller where you want to manage certificate trust stores by using the PowerShell cmdlets.&lt;/p&gt;
&lt;h2&gt;Authenticating as the HMS administrator&lt;/h2&gt;
&lt;p&gt;At the command line, you can pass the username then provide the password when prompted by &lt;code&gt;Get-HPESvtAuthToken&lt;/code&gt;. For example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;PS C:\&gt; Get-HPESvtAuthToken 192.0.2.2  admin-username
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you are scripting the authentication, you can combine the credentials into an object, then pass the object. The following example uses the &lt;code&gt;$creds&lt;/code&gt; object to pass the credentials.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# securely store the password
PS C:\HPESvtCmdlets&gt; $secpasswd = ConvertTo-SecureString &quot;password&quot; -AsPlainText -Force

PS C:\HPESvtCmdlets&gt; $creds = New-Object System.Management.Automation.PSCredential(&quot;admin-username&quot;, $secpasswd)
PS C:\HPESvtCmdlets&gt; Get-HPESvtAuthToken 192.0.2.2 $creds
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It returns an access_token. For example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;access_token : bd96becc-71e7-4eca-8859-cbbdfb6d1cdd
token_type   : bearer
expires_in   : 86399
scope        : read write
updated_at   : 1544475704159
&lt;/code&gt;&lt;/pre&gt;
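&lt;p&gt;Because a token is needed from each Virtual Controller whose trust store you want to manage, you can repeat the same call per controller. A minimal sketch, in which the IP addresses are placeholders:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Placeholder Virtual Controller addresses; prompt once for the HMS administrator credentials.
$controllers = &apos;192.0.2.2&apos;, &apos;192.0.2.3&apos;
$creds = Get-Credential &apos;admin-username&apos;

foreach ($ovcip in $controllers) {
    Get-HPESvtAuthToken -HostName $ovcip -credential $creds
}
&lt;/code&gt;&lt;/pre&gt;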
&lt;h2&gt;Authenticating as the &lt;code&gt;svtcli&lt;/code&gt; user&lt;/h2&gt;
&lt;p&gt;You might need to authenticate as the &lt;code&gt;svtcli&lt;/code&gt; user when the hypervisor management system is not reachable by the Virtual Controller.&lt;/p&gt;
&lt;p&gt;You can use the same technique of combining the credentials into the &lt;code&gt;$creds&lt;/code&gt; object and then passing &lt;code&gt;$creds&lt;/code&gt; on the command line, but when you authenticate as the &lt;code&gt;svtcli&lt;/code&gt; user, you must pass the &lt;code&gt;-emergency&lt;/code&gt; parameter. For example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$emergencypwd = ConvertTo-SecureString &quot;password&quot; -AsPlainText -Force
$emergencycreds = New-Object System.Management.Automation.PSCredential(&quot;svtcli&quot;,$emergencypwd)
$oauthcred = Get-HPESvtAuthToken -HostName $ovcip -credential $emergencycreds -emergency
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It returns an access_token. For example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;access_token : bd96becc-71e7-4eca-8859-cbbdfb6d1cdd
token_type   : bearer
expires_in   : 86399
scope        : read write
updated_at   : 1544475704159
&lt;/code&gt;&lt;/pre&gt;</content:encoded></item><item><title><![CDATA[Simplivity]]></title><description><![CDATA[Using the API with Python This sample Python code performs authentication, issues three example GET requests, performs a POST operation (in…]]></description><link>https://developer.hpe.com/hpe-simplivity/python/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-simplivity/python/</guid><content:encoded>&lt;h1&gt;Using the API with Python&lt;/h1&gt;
&lt;p&gt;This sample Python code performs authentication, issues three example GET requests, performs a POST operation (in this case, renaming a backup), and monitors the status of the operation using a task.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import requests
import time
# Set the base URL for REST API requests.
url = &apos;https://[host]/api/&apos;

# Set the username and password.
hms_username = &apos;HMS_USER&apos;
hms_password = &apos;HMS_PASS&apos;

# Authenticate user and generate access token.
response = requests.post(url+&apos;oauth/token&apos;, auth=(&apos;simplivity&apos;, &apos;&apos;), verify=False, data={
  &apos;grant_type&apos;:&apos;password&apos;,
  &apos;username&apos;:hms_username,
  &apos;password&apos;:hms_password})
access_token = response.json()[&apos;access_token&apos;]

# Add the access_token to the header.
headers = {&apos;Authorization&apos;:  &apos;Bearer &apos; + access_token, &apos;Accept&apos; : &apos;application/vnd.simplivity.v1+json&apos;}

# Issue a GET request: GET /hosts.
response = requests.get(url+&apos;hosts&apos;, verify=False, headers=headers)
print(response.json())

# Issue a GET request: GET /datastores.
response = requests.get(url+&apos;datastores&apos;, verify=False, headers=headers)
print(response.json())

# Issue a GET request with sorting and filtering:
# GET first 100 backups of the MANUAL type
# sorted in ascending order by name
# and show only the name, id, and created_at fields.
response = requests.get(url+&apos;backups?type=MANUAL&amp;#x26;fields=name%2C%20id%2C%20created_at&amp;#x26;limit=100&amp;#x26;offset=0&amp;#x26;sort=name&amp;#x26;order=ascending&apos;, verify=False, headers=headers)
print(response.json())

# Issue a POST request: Rename the first backup from the GET results.
# Define a container to hold the JSON response.
json = response.json()
# Find the ID of the first backup from the GET results.
backupid = json[&apos;backups&apos;][0][&apos;id&apos;]
# Create a JSON body for the rename action.
body = &apos;{&quot;backup_name&quot;:&quot;newbackupname&quot;}&apos;
# Specify the correct MIME type for the body.
headers[&apos;Content-Type&apos;] = &apos;application/vnd.simplivity.v1+json&apos;
# Issue the POST operation and expect a task object in return.
response = requests.post(url+&apos;backups/&apos;+backupid+&apos;/rename&apos;, data=body, verify=False, headers=headers)
json = response.json()

# Monitor the status of the rename operation by using a loop to query
# the task while this task is IN_PROGRESS.
# The state field in the JSON response body indicates the status.
taskid = json[&apos;task&apos;][&apos;id&apos;]
state = json[&apos;task&apos;][&apos;state&apos;]
while (state == &quot;IN_PROGRESS&quot;):
    # Wait one second.
    time.sleep(1)
    response = requests.get(url+&apos;tasks/&apos;+taskid, verify=False, headers=headers)
    json = response.json()
    state = json[&apos;task&apos;][&apos;state&apos;]
print(json)
&lt;/code&gt;&lt;/pre&gt;</content:encoded></item><item><title><![CDATA[Simplivity]]></title><description><![CDATA[Removing certificates from the trust store You might need to delete a certificate from the trust store on an HPE OmniStack host if the…]]></description><link>https://developer.hpe.com/hpe-simplivity/remove-certificate/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-simplivity/remove-certificate/</guid><content:encoded>&lt;h1&gt;Removing certificates from the trust store&lt;/h1&gt;
&lt;p&gt;You might need to delete a certificate from the trust store on an HPE OmniStack host if the existing certificate has expired or is corrupted, and you want to replace it. If the certificate has expired or is corrupted, then you must authenticate to the HPE OmniStack Virtual Controller by using the emergency access account (&lt;code&gt;svtcli&lt;/code&gt;).&lt;/p&gt;
&lt;p&gt;You can remove certificates from the trust store by using the &lt;code&gt;Remove-HPESvtCertificate&lt;/code&gt; cmdlet and specifying the certificate&apos;s SHA1 thumbprint.&lt;/p&gt;
&lt;p&gt;The following sample code shows how to delete a certificate from the trust store by thumbprint. It assumes that you have already authenticated.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Remove-HPESvtCertificate -Thumbprint 7E084B44FCDAB6EF74AC40E8D9BD508337D96CDE
&lt;/code&gt;&lt;/pre&gt;</content:encoded></item><item><title><![CDATA[Simplivity]]></title><description><![CDATA[Requesting authentication by emergency grant The emergency grant type functions in a way that is similar to the password grant type. However…]]></description><link>https://developer.hpe.com/hpe-simplivity/request-authentication-by-emergency-grant/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-simplivity/request-authentication-by-emergency-grant/</guid><content:encoded>&lt;h1&gt;Requesting authentication by emergency grant&lt;/h1&gt;
&lt;p&gt;The emergency grant type functions in a way that is similar to the password grant type. However, this grant type uses the emergency local svtcli account for authentication. This functionality is similar to the &lt;code&gt;--emergency&lt;/code&gt; option in CLI commands.&lt;/p&gt;
&lt;p&gt;This grant type is intended for use only when the Hypervisor Management System has become unavailable, and you want to access or restore backups of &lt;code&gt;virtual_machine&lt;/code&gt; objects. Using this type of token, clients can only access the following limited set of API operations:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;GET /hosts&lt;/li&gt;
&lt;li&gt;GET /omnistack_clusters&lt;/li&gt;
&lt;li&gt;GET /backups&lt;/li&gt;
&lt;li&gt;GET /datastores&lt;/li&gt;
&lt;li&gt;GET /virtual_machines&lt;/li&gt;
&lt;li&gt;POST /backups/{bkpId}/restore&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;With this grant type, the REST API does not support sorting and filtering GET requests or querying for optional properties.&lt;/p&gt;
&lt;p&gt;Use a &lt;code&gt;curl&lt;/code&gt; command like the following to request a token for the HPE OmniStack CLI (svtcli) account from a Virtual Controller with &lt;em&gt;host&lt;/em&gt; as the IP address, svtcli as the username, and &lt;em&gt;password&lt;/em&gt; as the password:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;curl -k https://simplivity@[host]/api/oauth/token -d grant_type=emergency -d username=svtcli -d password=[password]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Where:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;host&lt;/em&gt; is the IP address of the Virtual Controller you want to authenticate to.&lt;/li&gt;
&lt;li&gt;svtcli is the username for the emergency local HPE OmniStack CLI account.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;password&lt;/em&gt; is the password for the emergency local HPE OmniStack CLI (svtcli) account.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You must supply the &lt;code&gt;-k&lt;/code&gt; switch due to the use of self-signed certificates. You can import these certificates into your local certificate store.&lt;/p&gt;
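&lt;p&gt;If you are scripting against the REST API rather than using &lt;code&gt;curl&lt;/code&gt;, the same request can be made from Python. The following is a minimal sketch using the &lt;code&gt;requests&lt;/code&gt; library; the host and password values are placeholders, and certificate verification is disabled only because of the self-signed certificates mentioned above:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import requests

host = &apos;virtual-controller-ip&apos;   # placeholder: IP address of the Virtual Controller
password = &apos;svtcli-password&apos;     # placeholder: password for the svtcli account

# Request an emergency token; &apos;simplivity&apos; with an empty password is the
# OAuth client, exactly as in the curl example above.
response = requests.post(&apos;https://&apos; + host + &apos;/api/oauth/token&apos;,
                         auth=(&apos;simplivity&apos;, &apos;&apos;),
                         data={&apos;grant_type&apos;: &apos;emergency&apos;,
                               &apos;username&apos;: &apos;svtcli&apos;,
                               &apos;password&apos;: password},
                         verify=False)
print(response.json())
&lt;/code&gt;&lt;/pre&gt;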
&lt;p&gt;This call returns a JSON response similar to the following:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{
&quot;access_token&quot;:&quot;6f9166c2-bc7f-4cfc-a304-56aadd85e214&quot;,
&quot;token_type&quot;:&quot;bearer&quot;,
&quot;expires_in&quot;:86399,
&quot;scope&quot;:&quot;read write&quot;,
&quot;updated_at&quot;:1489421129188
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note the value for the &lt;code&gt;access_token&lt;/code&gt; in this response. Tokens expire either after ten minutes of inactivity or after 24 hours even with continuous activity. If you pass incorrect credentials, then the client receives a response similar to the following:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{
&quot;error&quot;:&quot;invalid_grant&quot;,
&quot;error_description&quot;:&quot;Invalid credentials&quot;
}
&lt;/code&gt;&lt;/pre&gt;</content:encoded></item><item><title><![CDATA[Simplivity]]></title><description><![CDATA[Restoring virtual machines This example uses curl to issue a POST request to restore a specific virtual machine from a backup. The example…]]></description><link>https://developer.hpe.com/hpe-simplivity/restore-virtual-machines/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-simplivity/restore-virtual-machines/</guid><content:encoded>&lt;h1&gt;Restoring virtual machines&lt;/h1&gt;
&lt;p&gt;This example uses &lt;code&gt;curl&lt;/code&gt; to issue a POST request to restore a specific virtual machine from a backup. The example assumes that you have obtained an OAuth 2 token and that you have retrieved the identifier of the object that you want to perform the operation on.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;curl -X POST -H &quot;Content-Type: application/vnd.simplivity.v1+json&quot; \
--header &quot;Authorization: Bearer [access_token]&quot; \
-d &quot;{ \&quot;virtual_machine_name\&quot;: \&quot;new_vm_name\&quot; }&quot; \
&quot;https://[host]/api/backups/[id]/restore?restore_original=false&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Where:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;-H&lt;/code&gt; (or &lt;code&gt;--header&lt;/code&gt;) enables &lt;code&gt;curl&lt;/code&gt; to add request headers.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;&quot;Content-Type: application/vnd.simplivity.v1+json&quot;&lt;/code&gt; indicates that the data in the body of the request uses version 1 of the HPE OmniStack JSON extension.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;access_token&lt;/code&gt; is the complete access token. For example: &lt;code&gt;f93f2059-afef-4310-9147-447645992a5d&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The body of the POST request is a JSON object that includes the required &lt;code&gt;virtual_machine_name&lt;/code&gt; field that specifies the desired name for the new virtual machine that you want to create from the backup.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/api/backups/[id]/restore&lt;/code&gt; is the base URI for the POST restore operation. For example: &lt;code&gt;/api/backups/0f123f92-2d75-4640-aab1-fa22b0a037cd/restore&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;host&lt;/code&gt; is the IP address of the Virtual Controller to authenticate against.&lt;/li&gt;
&lt;li&gt;The query parameter &lt;code&gt;restore_original&lt;/code&gt; indicates whether to restore the original virtual machine. Setting this parameter to &lt;code&gt;false&lt;/code&gt; specifies that you want to create a new virtual machine from the backup, rather than restoring the backup to an existing virtual machine.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The REST API service returns a &lt;code&gt;task&lt;/code&gt; instance as the response to this POST request. For example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{
&quot;task&quot;: {
&quot;id&quot;: &quot;422a3b5d-9125-467b-1909-9fb5f5c9c65e:422a3b5d-9125-467b-1909-9fb5f5c9c65e:841eb5cf-19d2-4fbb-a7ec-20638373ccfc&quot;,
&quot;state&quot;: &quot;IN_PROGRESS&quot;,
&quot;affected_objects&quot;: [],
&quot;error_code&quot;: 0,
&quot;start_time&quot;: &quot;2016-03-15T15:43:24Z&quot;,
&quot;end_time&quot;: &quot;1970-01-01T00:00:00Z&quot;
}
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The following &lt;code&gt;curl&lt;/code&gt; command monitors the status of this operation by retrieving the task instance:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;curl -H &quot;Accept: application/vnd.simplivity.v1+json&quot; \
-H &quot;Authorization: Bearer [access_token]&quot; -X GET -k -i https://[host]/api/tasks/[task_id]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Where:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;-H&lt;/code&gt; (or &lt;code&gt;--header&lt;/code&gt;) enables curl to add request headers.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The &lt;code&gt;&quot;Accept: application/vnd.simplivity.v1+json&quot;&lt;/code&gt; header indicates that you want the response to use version 1 of the HPE OmniStack JSON extension.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;access_token&lt;/code&gt; is the complete access token.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;-X GET&lt;/code&gt; indicates an HTTP GET operation.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;-k&lt;/code&gt; allows the use of self-signed SSL/TLS certificates.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;host&lt;/code&gt; is the IP address of the Virtual Controller to authenticate against.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;task_id&lt;/code&gt; is the complete ID of the task to monitor. For example: &lt;code&gt;422a3b5d-9125-467b-1909-9fb5f5c9c65e:422a3b5d-9125-467b-1909-9fb5f5c9c65e:841eb5cf-19d2-4fbb-a7ec-20638373ccfc&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You can issue this command multiple times, as necessary, to monitor the state of the operation. Possible states include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;IN_PROGRESS&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;COMPLETED&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;FAILED&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
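&lt;p&gt;In a client script, the restore request and the task polling can be combined in a few lines of Python. The following is a minimal sketch using the &lt;code&gt;requests&lt;/code&gt; library; the host, access token, and backup ID are placeholders, and &lt;code&gt;verify=False&lt;/code&gt; accounts for the self-signed certificates:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import time
import requests

host = &apos;virtual-controller-ip&apos;                         # placeholder
access_token = &apos;f93f2059-afef-4310-9147-447645992a5d&apos;  # placeholder token
backup_id = &apos;0f123f92-2d75-4640-aab1-fa22b0a037cd&apos;     # placeholder backup ID

headers = {&apos;Authorization&apos;: &apos;Bearer &apos; + access_token,
           &apos;Content-Type&apos;: &apos;application/vnd.simplivity.v1+json&apos;}
body = &apos;{&quot;virtual_machine_name&quot;:&quot;new_vm_name&quot;}&apos;

# Issue the restore; restore_original=false creates a new virtual machine.
url = &apos;https://&apos; + host + &apos;/api/backups/&apos; + backup_id + &apos;/restore?restore_original=false&apos;
response = requests.post(url, data=body, verify=False, headers=headers)
task = response.json()[&apos;task&apos;]

# Poll the task until it leaves the IN_PROGRESS state.
while task[&apos;state&apos;] == &apos;IN_PROGRESS&apos;:
    time.sleep(1)
    response = requests.get(&apos;https://&apos; + host + &apos;/api/tasks/&apos; + task[&apos;id&apos;],
                            verify=False, headers=headers)
    task = response.json()[&apos;task&apos;]
print(task[&apos;state&apos;], task[&apos;affected_objects&apos;])
&lt;/code&gt;&lt;/pre&gt;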
&lt;p&gt;Once the operation completes, the response to this GET request includes any affected objects. In this example, the &lt;code&gt;affected_objects&lt;/code&gt; body includes the object type and ID of the new virtual machine that operation created.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Simplivity]]></title><description><![CDATA[Getting started You can develop REST API clients in a variety of languages, including Python, Windows PowerShell, Java, and others. Before…]]></description><link>https://developer.hpe.com/hpe-simplivity/sample-code/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-simplivity/sample-code/</guid><content:encoded>&lt;h1&gt;Getting started&lt;/h1&gt;
&lt;p&gt;You can develop REST API clients in a variety of languages, including Python, Windows PowerShell, Java, and others. Before starting to create a REST API client, it is important to understand the following basic client workflow:&lt;/p&gt;
&lt;p&gt;1 - Identify the user credentials that you plan to use to authenticate to an HPE OmniStack host.&lt;/p&gt;
&lt;p&gt;2 - Authenticate to the HPE OmniStack host to generate an access token.&lt;/p&gt;
&lt;p&gt;3 - Store the access token for use in all REST API request headers.&lt;/p&gt;
&lt;p&gt;4 - Issue a GET request to get a set of instances of an object type, based on the filtering and sorting criteria that you specify.&lt;/p&gt;
&lt;p&gt;5 - Issue a POST request or other operation to perform an action that affects this instance.&lt;/p&gt;
&lt;p&gt;6 - Poll the returned task instance to monitor the status of the operation.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Simplivity]]></title><description><![CDATA[Revoking an OAuth 2 token This example uses curl to illustrate how to revoke an OAuth 2 token. You might revoke a token when you are done…]]></description><link>https://developer.hpe.com/hpe-simplivity/revoke-oath-2-token/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-simplivity/revoke-oath-2-token/</guid><content:encoded>&lt;h1&gt;Revoking an OAuth 2 token&lt;/h1&gt;
&lt;p&gt;This example uses &lt;code&gt;curl&lt;/code&gt; to illustrate how to revoke an OAuth 2 token. You might revoke a token when you are done using it and you want to free up any outstanding resources, such as vCenter Server sessions.&lt;/p&gt;
&lt;p&gt;The following &lt;code&gt;curl&lt;/code&gt; command revokes an OAuth 2 token:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;curl -k https://[host]/api/oauth/revoke -H &quot;Authorization: Bearer 0a08c809-17ff-479f-b0a8-aedd4d8305a0&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Where:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;host&lt;/em&gt; is the IP address of the Virtual Controller.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You must supply the &lt;code&gt;-k&lt;/code&gt; switch due to the use of self-signed certificates. You can import these certificates into your local certificate store.&lt;/p&gt;
&lt;p&gt;The call returns HTTP status 200 without any body.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Simplivity]]></title><description><![CDATA[Disabling/enabling TLS certificate validation on HPE OmniStack for vSphere In some cases, it may be necessary to temporarily disable TLS…]]></description><link>https://developer.hpe.com/hpe-simplivity/tls-certificate-validation/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-simplivity/tls-certificate-validation/</guid><content:encoded>&lt;h1&gt;Disabling/enabling TLS certificate validation on HPE OmniStack for vSphere&lt;/h1&gt;
&lt;p&gt;In some cases, it may be necessary to temporarily disable TLS certification validation while you are fixing various certificate issues. However, use these steps only when it is necessary to re-establish communication to the Virtual Controller. Re-enable validation immediately after you diagnose and fix the certificate issues.&lt;/p&gt;
&lt;p&gt;WARNING&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Removing all certificates from the trust store causes HPE SimpliVity to stop performing TLS certificate validation which is a potential security risk.&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;Disabling TLS certificate validation&lt;/h2&gt;
&lt;p&gt;If you need to turn off certificate validation across reboots, you can disable TLS validation by removing all of the certificates from the trust store by using the PowerShell certificate management cmdlets.&lt;/p&gt;
&lt;p&gt;When TLS is not working properly in your environment, you must connect to the Virtual Controller using the &lt;code&gt;svtcli&lt;/code&gt; account because HMS entities cannot be validated when the connection is down.&lt;/p&gt;
&lt;p&gt;This procedure assumes that you have downloaded and installed the certificate management cmdlets.&lt;/p&gt;
&lt;p&gt;To begin, open a PowerShell window and run the following script to remove all of the certificates from the trust store. Update the script to use the IP addresses and credentials for your system.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ErrorActionPreference = &quot;Stop&quot;

$hmsip = &quot;192.0.2.2&quot;
$ovcip = &quot;192.0.2.5&quot;

# HMS administrator credentials, used to confirm logon after validation is disabled
$secpasswd = ConvertTo-SecureString &quot;password&quot; -AsPlainText -Force
$creds = New-Object System.Management.Automation.PSCredential(&quot;administrator&quot;,$secpasswd)

#
# When the certificates are bad, use the &quot;emergency&quot; account
#
$emergencypwd = ConvertTo-SecureString &quot;password&quot; -AsPlainText -Force
$emergencycreds = New-Object System.Management.Automation.PSCredential(&quot;svtcli&quot;,$emergencypwd)
$oauthcred = Get-HPESvtAuthToken -HostName $ovcip -credential $emergencycreds -emergency

#
# delete all of the certificates - this turns off TLS validation
#

Get-HPESvtCertificate | ForEach-Object -Process {
    # remove from OVC trust store
    Remove-HPESvtCertificate -Thumbprint $_.Thumbprint
}

#
# The HMS user should be able to logon now
#

$oauthcred = Get-HPESvtAuthToken -HostName $ovcip -credential $creds
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Enabling TLS certificate validation&lt;/h2&gt;
&lt;p&gt;Leaving validation off is not an acceptable long term solution. Fix the issues with the HMS certificate and then re-enable TLS validation.&lt;/p&gt;
&lt;p&gt;This procedure assumes that you have downloaded and installed the certificate management cmdlets.&lt;/p&gt;
&lt;p&gt;To begin, open a PowerShell window and run the following script to add the HMS certificate to the trust store to restore TLS validation. Update the script to use the IP addresses and credentials for your system.&lt;/p&gt;
&lt;p&gt;The script gets the vCenter certificates then adds them to the Virtual Controller&apos;s trust store. When certificate validation is turned off or not working, you must connect using the &lt;code&gt;svtcli&lt;/code&gt; account.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ErrorActionPreference = &quot;Stop&quot;

$hmsip = &quot;192.0.2.2&quot;
$ovcip =  &quot;192.0.2.5&quot;

$secpasswd = ConvertTo-SecureString &quot;password&quot; -AsPlainText -Force
$creds = New-Object System.Management.Automation.PSCredential(&quot;administrator&quot;,$secpasswd)

#
# When the certificates are bad, use the &quot;emergency&quot; account
#
$emergencypwd = ConvertTo-SecureString &quot;password&quot; -AsPlainText -Force
$emergencycreds = New-Object System.Management.Automation.PSCredential(&quot;svtcli&quot;,$emergencypwd)
$oauthcred = Get-HPESvtAuthToken -HostName $ovcip -credential $emergencycreds -emergency

#
# grab vmware certs from HMS
#
$vmwarecerts = Get-HPESvtRootCertificate -HostName $hmsip

#
# add them into trust store
#
$vmwarecerts | ForEach-Object -Process {
    Add-HPESvtCertificate -Certificate $_
}

#
# The HMS user should still be able to logon
#
$oauthcred = Get-HPESvtAuthToken -HostName $ovcip -credential $creds
Write-Host $oauthcred
&lt;/code&gt;&lt;/pre&gt;</content:encoded></item><item><title><![CDATA[Simplivity]]></title><description><![CDATA[Getting the API version The REST API is backward compatible. As new attributes and operations are added in each release, older clients…]]></description><link>https://developer.hpe.com/hpe-simplivity/versioning/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-simplivity/versioning/</guid><content:encoded>&lt;h1&gt;Getting the API version&lt;/h1&gt;
&lt;p&gt;The REST API is backward compatible. As new attributes and operations are added in each release, older clients continue to work without any changes. Clients should strive to be backward compatible and should query the version of the REST API they are targeting to make sure that this version supports the needed functionality. The first version of the REST API was v1, followed by v1.1, v1.2, and so on.&lt;/p&gt;
&lt;p&gt;You can issue a GET request to determine the version running on a Virtual Controller. To determine the version, issue a &lt;code&gt;GET /api/version&lt;/code&gt; request. This request does not need to be authenticated. For example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;curl -k https://&amp;#x3C;host&gt;/api/version
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The response includes the REST API version and the SVTFS version. For example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{
REST_API_Version: &quot;1.1&quot;,
SVTFS_Version: &quot;3.5.9904.284&quot;
}
&lt;/code&gt;&lt;/pre&gt;
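&lt;p&gt;The same check can be scripted in Python. The following is a minimal sketch using the &lt;code&gt;requests&lt;/code&gt; library; the host value is a placeholder, and &lt;code&gt;verify=False&lt;/code&gt; accounts for the self-signed certificates:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import requests

host = &apos;virtual-controller-ip&apos;   # placeholder

# The version request does not need to be authenticated.
response = requests.get(&apos;https://&apos; + host + &apos;/api/version&apos;, verify=False)
print(response.json())
&lt;/code&gt;&lt;/pre&gt;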
&lt;h3&gt;Accept header&lt;/h3&gt;
&lt;p&gt;Use the &lt;code&gt;Accept&lt;/code&gt; header to request a particular version of the REST API response objects. For example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Accept: application/vnd.simplivity.v1.1+json
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You always receive the current version representation, even if you request an older version.&lt;/p&gt;
&lt;h3&gt;Content type&lt;/h3&gt;
&lt;p&gt;Each new version of the REST API introduces new operations. For detailed information about the operations that a particular REST API version supports, see the interactive REST API reference on the Virtual Controller you plan to interact with.&lt;/p&gt;
&lt;p&gt;You must specify the correct minimum version using the &lt;code&gt;Content-Type&lt;/code&gt; header. For example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Content-Type: application/vnd.simplivity.v1.1+json
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The following example shows how to use &lt;code&gt;curl&lt;/code&gt; to specify the desired minimum version in the &lt;code&gt;Content-Type&lt;/code&gt; header within a POST request to lock a backup:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;curl -k -X POST --header &quot;Content-Type: application/vnd.simplivity.v1.1+json&quot; \
&quot;https://&amp;#x3C;host&gt;/api/backups/a5f3e4ae-ae54-48cd-86fc-d168a542ef3f/lock&quot;
&lt;/code&gt;&lt;/pre&gt;
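&lt;p&gt;In Python, the equivalent request simply adds the &lt;code&gt;Content-Type&lt;/code&gt; header to the headers dictionary. The following is a minimal sketch using the &lt;code&gt;requests&lt;/code&gt; library; the host, access token, and backup ID are placeholders, and a valid bearer token obtained as described earlier is assumed:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import requests

host = &apos;virtual-controller-ip&apos;                         # placeholder
access_token = &apos;f93f2059-afef-4310-9147-447645992a5d&apos;  # placeholder token
backup_id = &apos;a5f3e4ae-ae54-48cd-86fc-d168a542ef3f&apos;

# Ask for at least version 1.1 behavior by setting the Content-Type header.
headers = {&apos;Authorization&apos;: &apos;Bearer &apos; + access_token,
           &apos;Content-Type&apos;: &apos;application/vnd.simplivity.v1.1+json&apos;}
response = requests.post(&apos;https://&apos; + host + &apos;/api/backups/&apos; + backup_id + &apos;/lock&apos;,
                         verify=False, headers=headers)
print(response.status_code, response.text)
&lt;/code&gt;&lt;/pre&gt;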
&lt;p&gt;If you specify a &lt;code&gt;Content-Type&lt;/code&gt; that is too old while issuing a POST or PUT request, the REST API returns an exception. For example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{
&apos;status&apos;: &apos;415&apos;,
&apos;path&apos;: &apos;/api/backups/f61a3e0f-829b-4739-86da-c82de96c2c85/lock&apos;,
&apos;exception&apos;: &apos;org.springframework.web.HttpMediaTypeNotSupportedException&apos;,
&apos;message&apos;: &quot;Content type &apos;application/vnd.simplivity.v1+json&apos; not supported&quot;,
&apos;timestamp&apos;: &apos;2016-05-16T18:25:01Z&apos;
}
&lt;/code&gt;&lt;/pre&gt;</content:encoded></item><item><title><![CDATA[Simplivity]]></title><description><![CDATA[What's new in each version The following sections list the features added in each version of the HPE OmniStack REST API. Version 1.1…]]></description><link>https://developer.hpe.com/hpe-simplivity/whats-new/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-simplivity/whats-new/</guid><content:encoded>&lt;h1&gt;What&apos;s new in each version&lt;/h1&gt;
&lt;p&gt;The following sections list the features added in each version of the HPE OmniStack REST API.&lt;/p&gt;
&lt;h2&gt;Version 1.16 (released with HPE OmniStack 4.1.0)&lt;/h2&gt;
&lt;p&gt;Version 1.16 of the REST API added the following new features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;connected_clusters&lt;/code&gt; field for &lt;code&gt;GET omnistack_clusters&lt;/code&gt; has been deprecated. Use the REST call  &lt;code&gt;GET  /api/omnistack_clusters/{clusterid}/connected_clusters&lt;/code&gt; instead.&lt;/li&gt;
&lt;li&gt;You can now set the IWO status for an HPE OmniStack cluster.&lt;/li&gt;
&lt;li&gt;You can now create and identify single_replica datastores.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Version 1.15 (released with HPE OmniStack 4.0.1)&lt;/h2&gt;
&lt;p&gt;Version 1.15 of the REST API added the following new features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;You can now update credentials for external stores and unregister external stores from a cluster.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;availability_zone_effective&lt;/code&gt; and &lt;code&gt;availability_zone_planned&lt;/code&gt; properties were added to the &lt;code&gt;omnistack_hosts&lt;/code&gt; object.&lt;/li&gt;
&lt;li&gt;The REST call &lt;code&gt;GET  /api/omnistack_clusters/throughput&lt;/code&gt; has been deprecated. Use the replacement REST call &lt;code&gt;GET  /api/omnistack_clusters/{clusterId}/throughput&lt;/code&gt; instead.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Version 1.14 (released with HPE OmniStack 4.0.0)&lt;/h2&gt;
&lt;p&gt;Version 1.14 of the REST API added the following new features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;You can now back up to an external store and restore a backup from an external store by creating a new virtual machine.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;arbiter_required&lt;/code&gt; and &lt;code&gt;arbiter_configured&lt;/code&gt; properties were added to the &lt;code&gt;omnistack_clusters&lt;/code&gt; object.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Version 1.13 (released with HPE OmniStack 3.7.10)&lt;/h2&gt;
&lt;p&gt;Version 1.13 of the REST API added no new features.&lt;/p&gt;
&lt;h2&gt;Version 1.12 (released with HPE OmniStack 3.7.9)&lt;/h2&gt;
&lt;p&gt;Version 1.12 of the REST API added the following features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The certificates URI was renamed to security/certificates&lt;/li&gt;
&lt;li&gt;cluster_feature_level was added to omnistack_clusters&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Version 1.11 (released with HPE OmniStack 3.7.8)&lt;/h2&gt;
&lt;p&gt;Version 1.11 of the REST API added the following features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Added infosight_configuration properties to the &lt;code&gt;hosts&lt;/code&gt; and &lt;code&gt;omnistack_clusters&lt;/code&gt; objects&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Version 1.10 (released with HPE OmniStack 3.7.7)&lt;/h2&gt;
&lt;p&gt;Version 1.10 of the REST API added the following features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Certificate management&lt;/li&gt;
&lt;li&gt;Improved performance for GET backups&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Version 1.9 (released with HPE OmniStack 3.7.6)&lt;/h2&gt;
&lt;p&gt;Version 1.9 of the REST API added the following features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Perform file level restore from backups&lt;/li&gt;
&lt;li&gt;Shut down a Virtual Controller, cancel the shutdown, and check the status of the shutdown operation&lt;/li&gt;
&lt;li&gt;Remove an HPE OmniStack host from a federation&lt;/li&gt;
&lt;li&gt;Backup rule impact reporting&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Version 1.8 (released with HPE OmniStack 3.7.5)&lt;/h2&gt;
&lt;p&gt;Version 1.8 of the REST API added the following features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Share a datastore with a standard host (host without HPE OmniStack software)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Stop sharing a datastore with a standard host&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Provide details on standard hosts that can share a datastore (unique ID, IP address, host name, sharing status, virtual machine count)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Save virtual machine backup parameters (includes backup type application consistent or crash consistent and virtual machine credentials to access VSS if necessary to create an application-consistent backup with VSS instead of a VMware snapshot)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Show the name of the hypervisor management system for clusters, hosts, datastores, and virtual machines&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Version 1.7 (released with HPE OmniStack 3.7.3)&lt;/h2&gt;
&lt;p&gt;Version 1.7 of the REST API added the following new operations for &lt;code&gt;backups&lt;/code&gt; objects:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Calculates the unique size of a specified backup&lt;/li&gt;
&lt;li&gt;Cancels a specific running backup&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Version 1.6 (released with HPE OmniStack 3.7.2)&lt;/h2&gt;
&lt;p&gt;Version 1.6 of the REST API added the following features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Added the following fields for &lt;code&gt;host&lt;/code&gt; objects:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;federation_mask&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;federation_mtu&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;management_mask&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;management_mtu&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;storage_mask&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;storage_mtu&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Added the following field for &lt;code&gt;datastore&lt;/code&gt; and &lt;code&gt;omnistack_clusters&lt;/code&gt; objects:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;hypervisor_management_system&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Version 1.5 (released with HPE OmniStack 3.7.0)&lt;/h2&gt;
&lt;p&gt;Version 1.5 of the REST API added the following features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Set backup policy for multiple &lt;code&gt;virtual_machines&lt;/code&gt; with one operation&lt;/li&gt;
&lt;li&gt;Renamed &lt;code&gt;hypervisor_management_system&lt;/code&gt; to &lt;code&gt;hypervisor_type&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Added support for an emergency &lt;code&gt;grant_type&lt;/code&gt; to enable you to restore virtual machines in situations in which the Hypervisor Management System has become unavailable&lt;/li&gt;
&lt;li&gt;Added support for revoking OAuth tokens&lt;/li&gt;
&lt;li&gt;Added support for searching for objects by one or more IDs&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Version 1.4 (released with HPE OmniStack 3.6.2)&lt;/h2&gt;
&lt;p&gt;Version 1.4 of the REST API added the following features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Suspend or resume policy-based backups for a &lt;code&gt;host&lt;/code&gt;, &lt;code&gt;omnistack_cluster&lt;/code&gt;, or entire federation&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Delete multiple backups with one POST request&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Added &lt;code&gt;hypervisor_management_system&lt;/code&gt; for &lt;code&gt;backup&lt;/code&gt;, &lt;code&gt;datastore&lt;/code&gt;, &lt;code&gt;omnistack_cluster&lt;/code&gt;, and &lt;code&gt;virtual_machine&lt;/code&gt; objects&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Added the following fields for &lt;code&gt;backup&lt;/code&gt;, &lt;code&gt;datastore&lt;/code&gt;, and &lt;code&gt;virtual_machine&lt;/code&gt; objects:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;compute_cluster_parent_hypervisor_object_id&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;compute_cluster_parent_name&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Added the following fields for &lt;code&gt;backup&lt;/code&gt; objects:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;unique_size_bytes&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;unique_size_timestamp&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;virtual_machine_type&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Added the following optional fields for &lt;code&gt;virtual_machine&lt;/code&gt; objects:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;hypervisor_allocated_capacity&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;hypervisor_cpu_count&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;hypervisor_free_space&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;hypervisor_is_template&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;hypervisor_total_memory&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;hypervisor_virtual_disk_count&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Added the following fields for &lt;code&gt;host&lt;/code&gt; objects:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;can_rollback&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;compute_cluster_parent_hypervisor_object_id&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;compute_cluster_parent_name&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;current_feature_level&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;policy_enabled&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;potential_feature_level&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;upgrade_state&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Added &lt;code&gt;version&lt;/code&gt; for &lt;code&gt;omnistack_cluster&lt;/code&gt; objects&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Faster performance and additional query options for &lt;code&gt;omnistack_cluster&lt;/code&gt; objects&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Filter &lt;code&gt;backup&lt;/code&gt;, &lt;code&gt;datastore&lt;/code&gt;, and &lt;code&gt;omnistack_cluster&lt;/code&gt; objects using multiple values in a comma-separated (OR) list&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Version 1.3 (released with HPE OmniStack 3.6.1)&lt;/h2&gt;
&lt;p&gt;Version 1.3 of the REST API added the following features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Association of &lt;code&gt;omnistack_cluster&lt;/code&gt; objects with Hypervisor Management System (HMS) clusters&lt;/li&gt;
&lt;li&gt;The new &lt;code&gt;hypervisor_object_parent_name&lt;/code&gt; and &lt;code&gt;hypervisor_object_parent_id&lt;/code&gt; properties identify the parent of an &lt;code&gt;omnistack_cluster&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Set the retention time for one or more backups&lt;/li&gt;
&lt;li&gt;Cluster throughput&lt;/li&gt;
&lt;li&gt;Cluster connectivity&lt;/li&gt;
&lt;li&gt;Set the time zone for a cluster&lt;/li&gt;
&lt;li&gt;Request a list of valid time zones for use when setting the time zone for a cluster&lt;/li&gt;
&lt;li&gt;Request the amount of life remaining for solid state drives (SSDs)&lt;/li&gt;
&lt;li&gt;Get the upgrade status for clusters&lt;/li&gt;
&lt;li&gt;Increased security resulting from removing the ability to refresh OAuth 2 tokens&lt;/li&gt;
&lt;li&gt;Filter &lt;code&gt;virtual_machine&lt;/code&gt; objects using multiple values in a comma-separated (OR) list&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Version 1.2 (released with HPE OmniStack 3.5.3)&lt;/h2&gt;
&lt;p&gt;Version 1.2 of the REST API added the following features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Ability to set a backup policy for a datastore&lt;/li&gt;
&lt;li&gt;Host capacity metrics&lt;/li&gt;
&lt;li&gt;&lt;code&gt;virtual_machine&lt;/code&gt; hypervisor power state&lt;/li&gt;
&lt;li&gt;&lt;code&gt;omnistack_cluster&lt;/code&gt; associated hypervisor object ID and time zone&lt;/li&gt;
&lt;li&gt;Host associated compute cluster information&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Version 1.1 (released with HPE OmniStack 3.5.2)&lt;/h2&gt;
&lt;p&gt;Version 1.1 of the REST API added the following features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Ability to query the REST version&lt;/li&gt;
&lt;li&gt;vCenter Server linked mode support&lt;/li&gt;
&lt;li&gt;Rename, copy, and lock backups&lt;/li&gt;
&lt;li&gt;Performance metrics&lt;/li&gt;
&lt;li&gt;Host hardware reporting&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Version 1 (released with HPE OmniStack 3.5.1)&lt;/h2&gt;
&lt;p&gt;The initial version of the REST API focused on orchestration tools and included the following features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Management: Perform common operations, such as clone, back up, restore, move, and set policy&lt;/li&gt;
&lt;li&gt;Reporting: Retrieve storage utilization per node and datacenter efficiency data&lt;/li&gt;
&lt;li&gt;Simple query support: Retrieve information using straightforward, intuitive queries, and perform sorting, filtering, and paging for all object types&lt;/li&gt;
&lt;li&gt;Interactive documentation: Access an intuitive, comprehensive, real-time HTML5 reference&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Morpheus]]></title><description><![CDATA[Morpheus, a Hewlett Packard Enterprise company, is a vendor-agnostic management platform for multi-cloud orchestration, unified operations…]]></description><link>https://developer.hpe.com/morpheus/home/</link><guid isPermaLink="false">https://developer.hpe.com/morpheus/home/</guid><content:encoded>&lt;p&gt;Morpheus, a Hewlett Packard Enterprise company, is a vendor-agnostic management platform for multi-cloud orchestration, unified operations, and self-service provisioning. It bridges the gap between teams, tools, and processes, independent of where and how applications are deployed. As a centralized control plane, Morpheus enables enterprise-wide governance, policy enforcement, and self-service application provisioning access across virtually any hybrid cloud environment.&lt;/p&gt;
&lt;p&gt;Learn more at &lt;a href=&quot;https://morpheusdata.com/&quot;&gt;www.morpheusdata.com&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;API Documentation&lt;/h2&gt;
&lt;p&gt;The Morpheus platform offers an extensive REST API to manage the configuration of the platform as well as deploy workloads to the various clouds that the platform supports. Detailed information about the REST API and how to use it can be found in the &lt;a href=&quot;https://apidocs.morpheusdata.com&quot;&gt;Morpheus API documentation&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Plugin Developer Portal&lt;/h2&gt;
&lt;p&gt;The Morpheus platform offers a robust plugin framework that enables developers to build integrations to extend the native platform functionality. The framework includes providers for creating custom reports, cloud integrations, UI integrations, and more. Start building integrations with your environment by browsing through the &lt;a href=&quot;https://developer.morpheusdata.com&quot;&gt;Morpheus developer portal&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Morpheus Integrations&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Morpheus Golang SDK:&lt;/strong&gt; The Morpheus Golang library provides an interface for interacting with the Morpheus platform within a Golang application. Details on using the library can be found at &lt;a href=&quot;https://github.com/gomorpheus/morpheus-go-sdk&quot;&gt;morpheus-go-sdk&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Morpheus Terraform Provider:&lt;/strong&gt; The Morpheus Terraform provider enables the Morpheus platform to be managed in a declarative fashion using HashiCorp Terraform. Documentation for the Terraform provider can be found at &lt;a href=&quot;https://registry.terraform.io/providers/gomorpheus/morpheus/latest/docs&quot;&gt;HashiCorp Terraform registry&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Morpheus CLI:&lt;/strong&gt; The Morpheus CLI is a Ruby-based command line tool for interacting with the Morpheus platform. The CLI can be used to programmatically manage the platform or deploy workloads using a YAML or JSON payload. Detailed information about the Morpheus CLI and how to use it can be found in the &lt;a href=&quot;https://clidocs.morpheusdata.com&quot;&gt;CLI documentation&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Gain hands-on experience with Morpheus&lt;/h2&gt;
&lt;p&gt;The Morpheus platform includes a community edition that enables users to get hands-on experience with the Morpheus platform. The platform can be quickly deployed on a single Linux server running in a home lab, datacenter, or public cloud environment. Get started with the community edition at &lt;a href=&quot;https://morpheusdata.com/community&quot;&gt;https://morpheusdata.com/community&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[OpenCHAMI]]></title><description><![CDATA[Open Composable Heterogeneous Adaptable Management Infrastructure OpenCHAMI (Open Composable Heterogeneous Adaptable Management…]]></description><link>https://developer.hpe.com/openchami/home/</link><guid isPermaLink="false">https://developer.hpe.com/openchami/home/</guid><content:encoded>&lt;h2&gt;Open Composable Heterogeneous Adaptable Management Infrastructure&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;OpenCHAMI&lt;/strong&gt; (Open Composable Heterogeneous Adaptable Management Infrastructure) is an open-source system management platform designed to bring &lt;strong&gt;cloud-like flexibility and security&lt;/strong&gt; to High Performance Computing (HPC) environments. It provides modern tools to efficiently &lt;strong&gt;deploy, manage, and scale&lt;/strong&gt; HPC clusters, making it easier for organizations to support both traditional scientific workloads and emerging &lt;strong&gt;AI, ML, and data-driven applications&lt;/strong&gt;.&lt;/p&gt;
&lt;h3&gt;&lt;strong&gt;Key Features&lt;/strong&gt;&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;🛡 &lt;strong&gt;Security-First Architecture&lt;/strong&gt; – Implements &lt;strong&gt;zero-trust authentication&lt;/strong&gt;, fine-grained &lt;strong&gt;access control&lt;/strong&gt;, and &lt;strong&gt;OIDC-based authorization&lt;/strong&gt; to safeguard HPC environments.&lt;/li&gt;
&lt;li&gt;🧩 &lt;strong&gt;Composable &amp;#x26; Scalable&lt;/strong&gt; – Modular, cloud-native design supports &lt;strong&gt;heterogeneous compute infrastructures&lt;/strong&gt; across on-prem and cloud environments.&lt;/li&gt;
&lt;li&gt;🔧 &lt;strong&gt;Microservices-Based&lt;/strong&gt; – Built on &lt;strong&gt;lightweight, containerized services&lt;/strong&gt;, making deployment and scaling more efficient.&lt;/li&gt;
&lt;li&gt;🌐 &lt;strong&gt;Community-Driven &amp;#x26; Open&lt;/strong&gt; – Developed &lt;strong&gt;in collaboration with leading HPC sites&lt;/strong&gt; under the &lt;strong&gt;Linux Foundation HPC initiative&lt;/strong&gt;, ensuring transparency and innovation.&lt;/li&gt;
&lt;li&gt;🚀 &lt;strong&gt;Fast Boot&lt;/strong&gt; - Faster and more secure boot times with cloud-init. POST + 40 seconds!&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;&lt;strong&gt;Why OpenCHAMI?&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;HPC system management has traditionally relied on &lt;strong&gt;monolithic, complex, and rigid&lt;/strong&gt; solutions that are difficult to adapt to modern workloads. OpenCHAMI &lt;strong&gt;bridges the gap between HPC and cloud-native technologies&lt;/strong&gt; by providing:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Security &amp;#x26; Compliance&lt;/strong&gt; – Designed with modern security best practices for protecting critical infrastructure.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Cloud-Like Flexibility&lt;/strong&gt; – Enables dynamic, on-demand scaling and &lt;strong&gt;automation&lt;/strong&gt; of system resources.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Open &amp;#x26; Transparent Governance&lt;/strong&gt; – Unlike proprietary solutions, OpenCHAMI is &lt;strong&gt;community-led&lt;/strong&gt; and continuously evolving.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Learn how OpenCHAMI simplifies HPC system management with &lt;strong&gt;security-first architecture, composability, and microservices-based deployment.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Watch OpenCHAMI in action &lt;a href=&quot;https://youtu.be/UbBdbhzXjbA&quot;&gt;here&lt;/a&gt;:&lt;/p&gt;
&lt;h3&gt;Projects&lt;/h3&gt;
&lt;h4&gt;&lt;a href=&quot;https://github.com/openCHAMI&quot;&gt;OpenCHAMI&lt;/a&gt;&lt;/h4&gt;
&lt;p&gt;HPC System Management for Cloud Engineers and HPC Sysadmins&lt;/p&gt;
&lt;p&gt;Learn more:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://openchami.org/&quot;&gt;OpenCHAMI Website&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://openchami.org/docs/&quot;&gt;Explore the OpenCHAMI Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://openchami.org/docs/tutorial/&quot;&gt;Check the tutorials&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://openchami.org/blog/&quot;&gt;Read blogs&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h2&gt;Workshops-on-Demand&lt;/h2&gt;
&lt;p&gt;Take advantage of our free, Jupyter-Notebook based workshops available in the HPE Developer &lt;a href=&quot;https://developer.hpe.com/hackshack/&quot;&gt;Hack Shack&lt;/a&gt;. These technical workshops provide you with an in-depth, hands-on learning experience where you can interact with and learn from the experts. Designed to fit your schedule, these workshops are available 24/7 – any time, from anywhere.&lt;/p&gt;
&lt;link rel=&quot;stylesheet&quot; href=&quot;https://www.w3schools.com/w3css/4/w3.css&quot;&gt;
&lt;div class=&quot;w3-container w3-center w3-margin-bottom&quot;&gt;
  &lt;a href=&quot;/hackshack/workshops&quot;&gt;&lt;button type=&quot;button&quot; class=&quot;button&quot;&gt;Try now!&lt;/button&gt;&lt;/a&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;h2&gt;Any questions on OpenCHAMI open-source project?&lt;/h2&gt;
&lt;p&gt;Join the &lt;a href=&quot;https://join.slack.com/share/enQtMTAzNjA1NDMxMjI2NzItYTY1Yjk3NzgzNjhlZWEyZThjY2RiNGU3ZTM1MzhlMGQxYmRmZTJiMTIyMTZkYjQ4MDYzMTU0NTUzZmZmYTNmZQ&quot;&gt;open source slack channel&lt;/a&gt; and start a discussion.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Project Data Map]]></title><description><![CDATA[With so much data expanding exponentially at the edge, it’s challenging to find, use, manage, and share the right data when you need it. To…]]></description><link>https://developer.hpe.com/project-data-map/home/</link><guid isPermaLink="false">https://developer.hpe.com/project-data-map/home/</guid><content:encoded>&lt;p&gt;With so much data expanding exponentially at the edge, it’s challenging to find, use, manage, and share the right data when you need it. To acquire the true meaning of your data and take action within your distributed enterprise, HPE is developing Project Data Map, which will allow you to navigate to the data you’re looking for with governance, lineage and trust.&lt;/p&gt;
&lt;div class=&quot;gatsby-resp-iframe-wrapper&quot; style=&quot;padding-bottom: 56.25%; position: relative; height: 0; overflow: hidden; margin-bottom: 1.0725rem&quot; &gt; &lt;iframe title=&quot;Brightcove Player&quot; src=&quot;//players.brightcove.net/4119874060001/tViCJfxWJ_default/index.html?videoId=ref:2f8dce2b-5585-48e3-a9bb-95e4a7f04562&quot; allowfullscreen=&quot;&quot; frameborder=&quot;0&quot; style=&quot; position: absolute; top: 0; left: 0; width: 100%; height: 100%; &quot;&gt;&lt;/iframe&gt; &lt;/div&gt;
&lt;p&gt;Project Data Map includes an open Common Metadata Framework, and correlates and scores metadata relationships derived implicitly from data usage and signals that continuously track pattern changes. This allows it to get smarter data, enabling precision AI and machine learning models to make the best possible decisions.&lt;/p&gt;
&lt;p&gt;HPE is developing Project Data Map to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Democratize access to data, data analytics, and AI to domain experts, citizen users, and data scientists.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a standard and open approach to meta data, enabling easier cross-vertical discovery and sharing of distributed diverse data sets.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use data governance of rights and conditions to improve efficiency, promote transparency, and enable business insight.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Accelerate the data and analytics exchange.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Reduce the extreme expense of cloud egress.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Secure data movement from every layer of the stack, continuously and automatically.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Enable a distributed enterprise to minimize unnecessary data movement for more efficient data management and help with cloud egress.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Features&lt;/h2&gt;
&lt;p&gt;With an open source foundation and a global data fabric, like HPE Ezmeral software, Project Data Map provides seamless, unified access to distributed data and unified control of distributed Kubernetes clusters. Project Data Map features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;A data preview browser&lt;/strong&gt;, enabling you to see fragments of data and access its lineage to check on validity and quality.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Data exchange and marketplace access&lt;/strong&gt;, providing a mechanism to publish contextual meta data to help define policy, license and exchange contracts, and provide access rights to data when criteria are met.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;An open Common Metadata Framework&lt;/strong&gt; that standardizes the way for you to find the most meaningful data based on personalized recommendations.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;A data policy engine&lt;/strong&gt;, making data sovereignty possible through dynamic policy generation and governance integrated with standard AAA services and versioned persistence on-platform and enforceable license models.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Data search and filter/meta data standards&lt;/strong&gt;, automating meta data generation on ingest with the ability to provide rich meta data to describe the data and update it for personalization.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;An Open API&lt;/strong&gt;, allowing a service ecosystem to both consume data and orchestrate the creation of meta data for the mutual benefit of data producers and consumers.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Trustworthy Data Foundation&lt;/strong&gt;, tracking and analyzing meta data from edge to core in order to shape data and pipelines with trustworthy elements throughout the entire data lifecycle.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Resources&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://youtu.be/71_dEAWuBPw?t=1144&quot;&gt;New Innovations: Project Edge Cluster, Project Sustainability Edge Cluster, and Project Data Map&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://youtu.be/71_dEAWuBPw?t=2305&quot;&gt;Project Data Map demo&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=9VTLA1nxpoo&quot;&gt;Lightboard video&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/insights/articles/getting-the-most-from-your-data-driven-transformation-2109.html&quot;&gt;Getting the most from your data-driven transformation: 10 key principles&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/insights/articles/getting-value-from-your-data-shouldn-t-be-this-hard-2106.html&quot;&gt;Getting value from your data shouldn&apos;t be this hard&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://community.hpe.com/t5/Advancing-Life-Work/Dataspaces-how-an-open-metadata-layer-can-establish-a/ba-p/7149075#.Yw01QXbMKUk&quot;&gt;How an open metadata layer can establish a trustworthy data pipeline&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/insights/articles/why-the-future-of-ai-hinges-on-trust-2205.html&quot;&gt;Why the future of AI hinges on trust&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Collaborations&lt;/h2&gt;
&lt;p&gt;Explore how Project Data Map can transform the flow of data in these real-world projects:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;CGIAR (formerly the Consultative Group for International Agricultural Research), Digital Green, The AgStack Foundation and HPE&lt;/strong&gt; are on a global mission to build solutions to address the exponentially growing need for food. To learn more, watch the &lt;a href=&quot;https://www.hpe.com/us/en/discover-more-network/series/scale-for-good.html?media-id=/us/en/resources/discover/dmn/scale-for-good/thefoodcrisiscantechnologyscaletofeedtheworld/_jcr_content.details.json&amp;#x26;media-strategy=delegate&quot;&gt;Scale for Good video&lt;/a&gt;, and the &lt;a href=&quot;https://www.youtube.com/watch?v=g0cGYXg11Os&quot;&gt;AI for Good webinar&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Novartis Global Health and HPE&lt;/strong&gt; are seeking ways to develop a disease surveillance solution for dengue fever by collecting and integrating complex data sources across organizations globally, providing data transparency at scale to inform targeted response strategies. To learn more, &lt;a href=&quot;https://www.hpe.com/us/en/newsroom/news-advisory/2021/06/hewlett-packard-enterprise-and-novartis-join-forces-to-advance-novartis-global-health-efforts.html&quot;&gt;read the press release&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Gaia-X and HPE&lt;/strong&gt; are creating an interoperable data exchange in which data can be shared under the protection of European data privacy laws. Hear the &lt;a href=&quot;https://share.transistor.fm/s/b465abf0&quot;&gt;HPE Tech Talk podcast&lt;/a&gt; to learn more.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h2&gt;Workshops-on-Demand&lt;/h2&gt;
&lt;p&gt;Take advantage of our free, Jupyter-Notebook based Workshops-on-Demand available in the &lt;a href=&quot;https://developer.hpe.com/hackshack/&quot;&gt;Hack Shack&lt;/a&gt;. These technical workshops provide you with an in-depth, hands-on learning experience where you can interact with and learn from the experts. Designed to fit your schedule, these workshops are available 24/7 – any time, from anywhere. Project Data Map workshops are coming soon.&lt;/p&gt;
&lt;link rel=&quot;stylesheet&quot; href=&quot;https://www.w3schools.com/w3css/4/w3.css&quot;&gt;
&lt;div class=&quot;w3-container w3-center w3-margin-bottom&quot;&gt;
  &lt;a href=&quot;/hackshack/workshops&quot;&gt;&lt;button type=&quot;button&quot; class=&quot;button&quot;&gt;Try now!&lt;/button&gt;&lt;/a&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;h2&gt;Any questions on Project Data Map?&lt;/h2&gt;
&lt;p&gt;Join the &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;HPE Developer Community Slack Workspace&lt;/a&gt; and start a discussion in our &lt;a href=&quot;https://hpedev.slack.com/archives/C03LU2V1CSJ&quot;&gt;#HPEDataMap&lt;/a&gt; channel.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[SmartSim]]></title><description><![CDATA[What is SmartSim? Recently, there is growing interest in applying machine learning (ML) algorithms to improve scientific simulation…]]></description><link>https://developer.hpe.com/smartsim/home/</link><guid isPermaLink="false">https://developer.hpe.com/smartsim/home/</guid><content:encoded>&lt;h1&gt;What is SmartSim?&lt;/h1&gt;
&lt;p&gt;Recently, there has been growing interest in applying machine learning (ML) algorithms to improve scientific simulation efficiency and accuracy. New software approaches are needed to couple existing scientific applications, traditionally written in Fortran/C/C++ and MPI, to rapidly evolving ML and data analytics libraries, typically written in Python. Currently, the diversity of programming languages, dependence on file input/output (I/O), and large variance in compute resource requirements for scientific applications make it difficult to perform online analysis, training, and inference with most ML and data analytics packages at the scale needed for numerical simulations.&lt;/p&gt;
&lt;p&gt;How does one connect the two programming paradigms of numerical model development and machine learning? While this question seems, on the surface, like the right way to approach the problem, we believe the true difficulty (and opportunity) in bridging these workloads is better reformulated in terms of data exchange: how do you pass data between a simulation and an ML model at scale? SmartSim provides the answer to this challenge.&lt;/p&gt;
&lt;p&gt;SmartSim is a software framework that facilitates the convergence of numerical simulations and AI workloads on heterogeneous architectures. SmartSim enables simulations in Fortran, C, C++, and Python to execute ML models hosted within in-memory storage (DRAM), facilitating online inference at simulation runtime. In addition, SmartSim’s API orchestrates the movement of data between simulation and learning components with single-line put/get semantics. SmartSim can host ML models on CPU-only or GPU-enabled compute nodes adjacent to or co-located with the simulation. SmartSim is portable: it can run on laptops and scales to thousands of processors.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/smartsim-architecture.png&quot; alt=&quot;&quot; title=&quot;SmartSim Architecture&quot;&gt;&lt;/p&gt;
&lt;p&gt;SmartSim comprises two libraries: a single lightweight client library, SmartRedis, that is compiled into the end user&apos;s simulation, and an Infrastructure Library (IL) that facilitates workflows around simulations. SmartSim users can run ML models written in Python in TensorFlow, TensorFlow Lite, Keras, PyTorch, or any framework that can serialize to ONNX (e.g., scikit-learn). In addition to online inference, SmartSim also facilitates online analysis, visualization, learning, and computational steering. Because simulation data can be held in co-located DRAM, scientists can interact with and perturb model data manually or programmatically during the course of a simulation. SmartSim enables users to execute nearly any simulation from a Jupyter notebook on HPC systems that support PBSPro, Slurm, or Cobalt, as well as on laptops and workstations.&lt;/p&gt;
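&lt;p&gt;To illustrate the single-line put/get semantics, the following is a minimal sketch that assumes the SmartRedis Python client is installed and that an in-memory Orchestrator database has been launched by the Infrastructure Library; the address and tensor name are illustrative placeholders:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import numpy as np
from smartredis import Client

# Connect to the in-memory Orchestrator (address is a placeholder).
client = Client(address=&apos;127.0.0.1:6379&apos;, cluster=False)

# Simulation side: publish a field with a single put call.
client.put_tensor(&apos;temperature_field&apos;, np.random.rand(64, 64).astype(np.float32))

# Analysis/ML side: retrieve the same tensor for online inference or analysis.
field = client.get_tensor(&apos;temperature_field&apos;)
print(field.shape)
&lt;/code&gt;&lt;/pre&gt;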
&lt;p&gt;Interested to learn more? Check out these resources:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=JsSgq-fq44w&quot;&gt;Watch a recent presentation to Pangeo&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.craylabs.org/docs/overview.html&quot;&gt;Read the SmartSim documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/CrayLabs/SmartSim&quot;&gt;SmartSim Github open-source repository&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://arxiv.org/abs/2104.09355&quot;&gt;Read our recent paper&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://join.slack.com/t/craylabs/shared_invite/zt-nw3ag5z5-5PS4tIXBfufu1bIvvr71UA&quot;&gt;Join our Slack community&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[SPIFFE and SPIRE Projects]]></title><description><![CDATA[Inspired by production infrastructure at Facebook, Google, Netflix, and more, SPIFFE is a set of open-source standards for securely…]]></description><link>https://developer.hpe.com/spiffe-and-spire-projects/home/</link><guid isPermaLink="false">https://developer.hpe.com/spiffe-and-spire-projects/home/</guid><content:encoded>&lt;p&gt;Inspired by production infrastructure at Facebook, Google, Netflix, and more, SPIFFE is a set of open-source standards for securely authenticating software services in dynamic and heterogeneous infrastructures through platform-agnostic, cryptographic identities. SPIRE is an open-source system that implements the SPIFFE specification in a wide variety of environments.&lt;/p&gt;
&lt;p&gt;Together, the projects deliver a foundational capability, &lt;em&gt;service identity&lt;/em&gt;, for cloud- and container-deployed microservices. They enable organizations to deploy consistent, fine-grained cross-service authentication via a “dial-tone” API across heterogeneous environments.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.cncf.io/announcements/2022/09/20/spiffe-and-spire-projects-graduate-from-cloud-native-computing-foundation-incubator/&quot;&gt;SPIFFE and SPIRE are graduated projects&lt;/a&gt; from the Cloud Native Computing Foundation (CNCF). Joining the group of already &lt;a href=&quot;https://www.cncf.io/projects/&quot;&gt;graduated projects&lt;/a&gt;, including Helm and Kubernetes, the SPIFFE and SPIRE projects have received contributions from Bloomberg, Google, Pinterest, Square, Uber, HPE, and others, and have grown to become a foundational layer within the cloud native ecosystem. These projects integrate with multiple cloud native technologies and projects, such as Istio, Envoy, gRPC, Sigstore, and OPA (Open Policy Agent).&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://spiffe.io/&quot;&gt;Project web site at CNCF&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You can download an eBook that presents the SPIFFE standard for service identity, and SPIRE, the reference implementation for SPIFFE &lt;a href=&quot;https://spiffe.io/book/&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;hr&gt;
&lt;h1&gt;Learn from the experts&lt;/h1&gt;
&lt;h3&gt;Introduction to SPIFFE and SPIRE&lt;/h3&gt;
&lt;p&gt;In this lightboard video, Evan Gilman, co-author of O’Reilly’s book Zero Trust Networks and a maintainer for SPIRE, provides an overview of CNCF’s SPIFFE and SPIRE Projects. Evan goes into the security issues that SPIFFE and SPIRE solve and how they solve them through workload identity attestation.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=-XGKybqTfZo&quot;&gt;&lt;img src=&quot;https://img.youtube.com/vi/-XGKybqTfZo/hqdefault.jpg&quot; alt=&quot;Introduction to Spiffe and Spire&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;Service Authentication for Zero Trust Model with SPIRE&lt;/h3&gt;
&lt;p&gt;In this video, Evan Gilman, co-author of O’Reilly’s book Zero Trust Networks and a maintainer for SPIRE, explains how SPIRE addresses zero trust challenges in a distributed environment.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=QNDWRQY0t-o&quot;&gt;&lt;img src=&quot;https://img.youtube.com/vi/QNDWRQY0t-o/hqdefault.jpg&quot; alt=&quot;Zero Trust challenges&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h1&gt;GitHub repositories&lt;/h1&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/spiffe/spiffe&quot;&gt;spiffe&lt;/a&gt;: This repository includes the SPIFFE ID, SVID and Workload API specifications, example code, and tests, as well as project governance, policies, and processes.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/spiffe/spire&quot;&gt;spire&lt;/a&gt;: This is a reference implementation of SPIFFE and the SPIFFE Workload API that can be run on and across varying hosting environments.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/spiffe/go-spiffe/tree/main/v2&quot;&gt;go-spiffe&lt;/a&gt;: Golang client libraries.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/spiffe/java-spiffe&quot;&gt;java-spiffe&lt;/a&gt;: Java client libraries&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/py-spiffe&quot;&gt;py-spiffe&lt;/a&gt;: Python client libraries (a short usage sketch follows this list)&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/c-spiffe&quot;&gt;c-spiffe&lt;/a&gt;: C client libraries&lt;/li&gt;
&lt;/ul&gt;
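&lt;p&gt;To give a feel for what consuming the Workload API looks like from application code, here is a minimal, illustrative sketch using the py-spiffe library listed above. It assumes a SPIRE agent is running locally and exposing its Workload API socket; the module, class, and socket path shown follow the repository&apos;s published examples and may differ between releases, so treat them as assumptions and check the py-spiffe README for the current API.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Minimal sketch: fetch this workload&apos;s X.509 SVID over the SPIFFE Workload API.
# Assumptions: a local SPIRE agent exposes its Workload API at the socket path
# below, and the py-spiffe module/class names match the repository README.
from pyspiffe.workloadapi.default_workload_api_client import DefaultWorkloadApiClient

# Deployment-specific agent socket path (illustrative value).
client = DefaultWorkloadApiClient(&quot;unix:///tmp/spire-agent/public/api.sock&quot;)

# The agent attests this process and returns its SVID (certificate plus key).
svid = client.fetch_x509_svid()
print(&quot;SPIFFE ID:&quot;, svid.spiffe_id())
&lt;/code&gt;&lt;/pre&gt;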
&lt;hr&gt;
&lt;h1&gt;Integrations&lt;/h1&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://spiffe.io/docs/latest/microservices/envoy/&quot;&gt;Tutorial on how to configure the Envoy proxy with SPIFFE and SPIRE&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://istio.io/latest/docs/ops/integrations/spire/&quot;&gt;Tutorial on how to configure Istio with SPIRE&lt;/a&gt;. The Istio integration was contributed to by HPE engineers, and is now part of Istio, since V1.14.&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h1&gt;Workshops-on-Demand&lt;/h1&gt;
&lt;p&gt;Take advantage of our free, Jupyter-Notebook based Workshops-on-Demand available in the &lt;a href=&quot;https://developer.hpe.com/hackshack/workshops/&quot;&gt;HPE Developer Community Hack Shack&lt;/a&gt;. These technical workshops provide you with an in-depth, hands-on learning experience where you can interact with and learn from the experts. Designed to fit your schedule, these workshops are available 24/7 – any time, from anywhere. SPIFFE and SPIRE workshops are available today.&lt;/p&gt;
&lt;link rel=&quot;stylesheet&quot; href=&quot;https://www.w3schools.com/w3css/4/w3.css&quot;&gt;
&lt;div class=&quot;w3-container w3-center w3-margin-bottom&quot;&gt;
  &lt;a href=&quot;/hackshack/workshops&quot;&gt;&lt;button type=&quot;button&quot; class=&quot;button&quot;&gt;Try now!&lt;/button&gt;&lt;/a&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;h2&gt;Any questions on SPIFFE?&lt;/h2&gt;
&lt;p&gt;Join the &lt;strong&gt;&lt;a href=&quot;https://slack.spiffe.io/&quot;&gt;SPIFFE Slack Workspace&lt;/a&gt;&lt;/strong&gt; and start a discussion.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE Swarm Learning]]></title><description><![CDATA[Editor’s Note: HPE Swarm Learning product is now offered only as a community edition. HPE Swarm Learning is a decentralized, privacy…]]></description><link>https://developer.hpe.com/swarm-learning/home/</link><guid isPermaLink="false">https://developer.hpe.com/swarm-learning/home/</guid><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note: HPE Swarm Learning product is now offered only as a community edition.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;HPE Swarm Learning is a decentralized, privacy-preserving Machine Learning framework. This ML framework utilizes the computing power at, or near, the distributed data sources to run the Machine Learning algorithms that train the models. It uses the security of a blockchain platform to share learnings with peers in a safe and secure manner. In HPE Swarm Learning, training of the model occurs at the edge, where data is most recent and where prompt, data-driven decisions are most necessary. In this completely decentralized architecture, only the insights learned are shared with the collaborating ML peers, not the raw data. This tremendously enhances data security and privacy.&lt;/p&gt;
&lt;p&gt;Each Swarm Learning node works in collaboration with the other Swarm Learning nodes in the network. It regularly shares its learnings with the other nodes and incorporates their insights. This process continues until the Swarm Learning nodes have trained the model to the desired state. Users can monitor the progress of the current training as shown in the image below. It shows all running Swarm nodes, the loss, the model metric (for example, accuracy), and the overall training progress for each user ML node. Hovering over the &quot;progress bar&quot; shows the number of completed epochs and the total number of epochs.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/guid-cb6f59c9-7cd9-4ee8-ba7c-3082f07b8491-high.png&quot; alt=&quot;&quot; title=&quot;HPE Swarm Learning Topology&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/guid-899b556f-d33f-42d1-8d0d-37f191715709-high.jpg&quot; alt=&quot;HPE Swarm Learning architecture&quot; title=&quot;HPE Swarm Learning architecture&quot;&gt;&lt;/p&gt;
&lt;p&gt;Users can now extend the Swarm client to support other machine learning platforms as well. Currently, the Swarm client supports PyTorch and Keras (backed by TensorFlow 2). Instructions for extending the Swarm client can be found &lt;a href=&quot;https://github.com/HewlettPackard/swarm-learning/blob/master/lib/src/README.md&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
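&lt;p&gt;As a rough illustration of what this integration looks like in a Keras training script, here is a condensed sketch modeled on the examples in the community repository. The import path and callback parameters are assumptions based on those published examples and should be verified against the repository before use.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Condensed sketch based on the Keras examples in the swarm-learning repository:
# a standard tf.keras model is trained as usual, with a Swarm callback added so
# that model weights are periodically merged with peer nodes. The import path
# and parameter names are assumptions taken from the published examples.
import numpy as np
import tensorflow as tf
from swarmlearning.tf import SwarmCallback  # shipped with the Swarm Learning library

# Placeholder node-local data; in Swarm Learning, raw data never leaves the node.
x_train = np.random.rand(256, 784).astype(&quot;float32&quot;)
y_train = np.random.randint(0, 10, size=(256,))

model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(128, activation=&quot;relu&quot;),
    tf.keras.layers.Dense(10, activation=&quot;softmax&quot;),
])
model.compile(optimizer=&quot;adam&quot;, loss=&quot;sparse_categorical_crossentropy&quot;, metrics=[&quot;accuracy&quot;])

swarm_cb = SwarmCallback(
    syncFrequency=128,  # merge learnings with peers every 128 batches (example value)
    minPeers=2,         # wait for at least two nodes before merging
)

# Only model parameters are exchanged during the merge; the training data stays local.
model.fit(x_train, y_train, batch_size=32, epochs=5, callbacks=[swarm_cb])
&lt;/code&gt;&lt;/pre&gt;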
&lt;p&gt;We are happy to announce the &lt;a href=&quot;https://github.com/HewlettPackard/swarm-learning/releases/tag/v2.2.0&quot;&gt;Swarm 2.2.0 community release&lt;/a&gt;. This release delivers key UI/UX enhancements, including experiment tracking for an easier &quot;birds-eye&quot; view of past training rounds, parallel Swarm installation on multiple hosts, and Podman support via the SLM-UI, all of which significantly enhance the user experience. We have also added powerful features to the Swarm manageability framework for better management of user ML workloads.&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;Articles&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://www.nature.com/articles/s41586-021-03583-3&quot;&gt;2021 Nature Paper - Swarm Learning for decentralized and confidential clinical machine learning&lt;/a&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://www.researchgate.net/publication/342495847_Swarm_Learning_as_a_privacy-preserving_machine_learning_approach_for_disease_classification&quot;&gt;Research gate - Swarm Learning as a privacy-preserving machine learning approach for disease classification&lt;/a&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h2&gt;Learn more about HPE Swarm Learning and its underlying technology&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;- The big shift: What is swarm learning?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Swarm learning is the next gold rush for machine intelligence - training at the edge so the edge devices get smarter and also train their peers. With no central authority, Blockchain is integrated to add control, privacy, and security. Learn more by watching &lt;strong&gt;What is swarm learning?&lt;/strong&gt;:&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=6Fep6Lw5t-U&quot;&gt;&lt;img src=&quot;https://img.youtube.com/vi/6Fep6Lw5t-U/hqdefault.jpg&quot; alt=&quot;What is swarm learning?&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;Listen to what the experts are saying&lt;/h2&gt;
&lt;p&gt;- &lt;strong&gt;How to optimize machine learning at the edge with HPE Swarm Learning&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In this video Ronald van Loon and HPE Chief Technologist, Krishnaprasad Shastry, talk about optimizing machine learning at the edge with swarm learning. Learn more by watching &lt;strong&gt;How to optimize machine learning at the edge with HPE Swarm Learning&lt;/strong&gt;:&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=paBt6nvyTHQ&quot;&gt;&lt;img src=&quot;https://img.youtube.com/vi/paBt6nvyTHQ/hqdefault.jpg&quot; alt=&quot;Optimize ML at the edge with HPE Swarm learning&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;Resources&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/swarm-learning&quot;&gt;&lt;strong&gt;HPE Swarm Learning community edition&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/swarm-learning/tree/master/examples&quot;&gt;&lt;strong&gt;Try HPE Swarm Learning examples&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/swarm-learning/blob/master/lib/src/README.md&quot;&gt;&lt;strong&gt;HPE Swarm Learning client&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h2&gt;Any questions on HPE Swarm Learning?&lt;/h2&gt;
&lt;p&gt;Join the &lt;a href=&quot;https://developer.hpe.com/slack-signup/&quot;&gt;HPE Developer Slack Workspace&lt;/a&gt; and start a discussion in our &lt;a href=&quot;https://hpedev.slack.com/archives/C04A5DK9TUK&quot;&gt;#hpe-swarm-learning&lt;/a&gt; channel.&lt;/p&gt;
&lt;p&gt;Learn more about &lt;a href=&quot;https://www.hpe.com/us/en/zerto.html&quot;&gt;Zerto&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Welcome to the Zerto Hacker Hub!&lt;/h2&gt;
&lt;p&gt;This page provides the tools to help you get started deploying and managing Zerto’s CDP, which leverages its proprietary near-synchronous replication capabilities to protect and restore virtual machines and public cloud instances. Zerto offers an extensive REST API with Swagger support that enables users to programmatically create and manage end-to-end disaster recovery workflows.&lt;/p&gt;
&lt;p&gt;Get started by diving into &lt;a href=&quot;https://github.com/ZertoPublic&quot;&gt;our GitHub page&lt;/a&gt; or check out the documentation at &lt;a href=&quot;https://help.zerto.com&quot;&gt;https://help.zerto.com&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;Simplify automation with Zerto’s REST Swagger APIs&lt;/h3&gt;
&lt;p&gt;Zerto’s REST Swagger APIs offer a powerful and user-friendly way to automate and integrate Zerto into your existing IT workflows. These APIs provide a comprehensive set of endpoints that allow you to programmatically manage, monitor, and control your Zerto environment with ease. Whether you are looking to automate routine tasks, integrate with third-party tools, or develop custom applications, Zerto’s REST APIs make it straightforward and efficient.&lt;/p&gt;
&lt;p&gt;With Swagger, developers can interactively explore and test the APIs, making it easier to understand their capabilities and integrate them into your systems. The Swagger interface provides clear documentation and real-time testing capabilities, ensuring that you can quickly get up to speed and start leveraging Zerto’s powerful features.&lt;/p&gt;
&lt;p&gt;Unlock the full potential of your disaster recovery strategy with Zerto’s REST Swagger APIs today!&lt;/p&gt;
&lt;p&gt;For more information and to access the Zerto REST Swagger APIs, visit the &lt;a href=&quot;https://help.zerto.com/category/ZVM_REST_API_Swagger&quot;&gt;Zerto API Documentation&lt;/a&gt;.&lt;/p&gt;
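&lt;p&gt;As a small, hedged illustration of driving the API from a script, the Python sketch below follows the classic ZVM REST API flow: open a session, capture the session token from a response header, then query a resource. The host, port, endpoint paths, and header name are assumptions based on that classic flow; newer Zerto releases use a different authentication mechanism, so confirm the details against the Swagger documentation for your version.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Hedged sketch of the classic ZVM REST API flow. The ZVM address, credentials,
# endpoint paths, and header name below are illustrative assumptions; check the
# Zerto Swagger documentation for the exact API of your release.
import requests

ZVM = &quot;https://zvm.example.local:9669&quot;   # hypothetical ZVM address
AUTH = (&quot;api-user&quot;, &quot;api-password&quot;)      # hypothetical credentials

# 1. Open an API session; the session token is returned in a response header.
#    verify=False is only for lab environments with self-signed certificates.
resp = requests.post(f&quot;{ZVM}/v1/session/add&quot;, auth=AUTH, verify=False)
resp.raise_for_status()
session_token = resp.headers[&quot;x-zerto-session&quot;]

# 2. Reuse the session token on subsequent calls, for example listing all VPGs.
headers = {&quot;x-zerto-session&quot;: session_token}
vpgs = requests.get(f&quot;{ZVM}/v1/vpgs&quot;, headers=headers, verify=False).json()
for vpg in vpgs:
    print(vpg.get(&quot;VpgName&quot;), vpg.get(&quot;Status&quot;))
&lt;/code&gt;&lt;/pre&gt;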
&lt;h3&gt;Unlock powerful insights with Zerto Resiliency Observation Console&lt;/h3&gt;
&lt;p&gt;Meet the Zerto Resiliency Observation Console (zROC) – a dynamic, Docker-compose based software stack designed to transform your Zerto API data into rich, visual insights using Prometheus and Grafana. This innovative tool empowers you to monitor and analyze your data effortlessly.&lt;/p&gt;
&lt;p&gt;What sets zROC apart is its custom Prometheus exporter code, crafted to seamlessly integrate with standard Prometheus and Grafana containers. The comprehensive configuration files included in the repository ensure that you can start deriving value immediately, straight out of the box.&lt;/p&gt;
&lt;p&gt;While zROC comes with pre-built dashboards to get you started, we encourage you to unleash your creativity by building custom dashboards or tweaking existing ones to perfectly match your unique needs. Dive into the world of intuitive data visualization and take your monitoring capabilities to the next level.&lt;/p&gt;
&lt;p&gt;Explore the Zerto Resiliency Observation Console today on our &lt;a href=&quot;https://github.com/ZertoPublic/zroc&quot;&gt;GitHub page&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Zerto In-Cloud for AWS&lt;/h3&gt;
&lt;p&gt;Zerto In-Cloud for AWS provides disaster recovery for EC2 instances between regions and availability zones. It blends the familiar Zerto approach to disaster recovery for workloads with native AWS platform capabilities and services, featuring virtual protection groups, failover testing to isolated environments that does not disrupt production, and disaster recovery failover orchestration.&lt;/p&gt;
&lt;p&gt;Zerto In-Cloud for AWS supports 1,000+ instance types and is developed 100% API-first to enable integration with automation tools.&lt;/p&gt;
&lt;p&gt;Linked to our &lt;a href=&quot;https://github.com/ZertoPublic/ZIC-AWS&quot;&gt;GitHub here&lt;/a&gt; are the Terraform files you can use to start experimenting with Zerto In-Cloud for AWS.  &lt;/p&gt;
&lt;p&gt;Ready to learn more? Read the &lt;a href=&quot;https://www.zerto.com/wp-content/uploads/2021/11/Zerto-In-Cloud-for-AWS-Data-Sheet.pdf&quot;&gt;Zerto-In Cloud for AWS datasheet&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Resources&lt;/h2&gt;
&lt;p&gt;Ready to learn more about Zerto? Browse through the &lt;a href=&quot;https://www.zerto.com/&quot;&gt;Zerto portal&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Looking to try Zerto out? Sign up for our &lt;a href=&quot;https://www.zerto.com/myzerto/labs&quot;&gt;Hands on Labs! &lt;/a&gt;&lt;/p&gt;
&lt;br /&gt;
&lt;hr&gt;
&lt;h2&gt;Any questions about Zerto?&lt;/h2&gt;
&lt;p&gt;Join the &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;HPEDEV Slack Workspace&lt;/a&gt; and start a discussion in our &lt;a href=&quot;https://hpedev.slack.com/archives/C03J3EGDDM0&quot;&gt;#zerto&lt;/a&gt; channel.&lt;/p&gt;
&lt;p&gt;Not a Slack user? Send us an email to &lt;a href=&quot;mailto:zertotm@hpe.com&quot;&gt;zertotm@hpe.com&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Arkouda]]></title><link>https://developer.hpe.com/Arkouda/</link><guid isPermaLink="false">https://developer.hpe.com/Arkouda/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Chapel]]></title><link>https://developer.hpe.com/Chapel/</link><guid isPermaLink="false">https://developer.hpe.com/Chapel/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Cloud Native Computing Foundation]]></title><link>https://developer.hpe.com/Cloud_Native_Computing_Foundation/</link><guid isPermaLink="false">https://developer.hpe.com/Cloud_Native_Computing_Foundation/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Debian]]></title><link>https://developer.hpe.com/Debian/</link><guid isPermaLink="false">https://developer.hpe.com/Debian/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Distributed Management Task Force]]></title><link>https://developer.hpe.com/Distributed_Management_Task_Force/</link><guid isPermaLink="false">https://developer.hpe.com/Distributed_Management_Task_Force/</guid><content:encoded></content:encoded></item><item><title><![CDATA[ETSI]]></title><link>https://developer.hpe.com/ETSI/</link><guid isPermaLink="false">https://developer.hpe.com/ETSI/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Gen-Z support on Linux]]></title><link>https://developer.hpe.com/Gen-Z_support_on_Linux/</link><guid isPermaLink="false">https://developer.hpe.com/Gen-Z_support_on_Linux/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Grommet]]></title><link>https://developer.hpe.com/Grommet/</link><guid isPermaLink="false">https://developer.hpe.com/Grommet/</guid><content:encoded></content:encoded></item><item><title><![CDATA[HPE Deep Learning Benchmarking Suite]]></title><link>https://developer.hpe.com/HPE_Deep_Learning_Benchmarking_Suite/</link><guid isPermaLink="false">https://developer.hpe.com/HPE_Deep_Learning_Benchmarking_Suite/</guid><content:encoded></content:encoded></item><item><title><![CDATA[HPE Software Delivery Repository]]></title><link>https://developer.hpe.com/HPE_Software_Delivery_Repository/</link><guid isPermaLink="false">https://developer.hpe.com/HPE_Software_Delivery_Repository/</guid><content:encoded></content:encoded></item><item><title><![CDATA[HudsonAlpha Synergy Contribution]]></title><link>https://developer.hpe.com/HudsonAlpha_Synergy_Contribution/</link><guid isPermaLink="false">https://developer.hpe.com/HudsonAlpha_Synergy_Contribution/</guid><content:encoded></content:encoded></item><item><title><![CDATA[KubeEdge]]></title><link>https://developer.hpe.com/KubeEdge/</link><guid isPermaLink="false">https://developer.hpe.com/KubeEdge/</guid><content:encoded></content:encoded></item><item><title><![CDATA[LF EDGE]]></title><link>https://developer.hpe.com/LF_EDGE/</link><guid isPermaLink="false">https://developer.hpe.com/LF_EDGE/</guid><content:encoded></content:encoded></item><item><title><![CDATA[LinuxCOE]]></title><link>https://developer.hpe.com/LinuxCOE/</link><guid isPermaLink="false">https://developer.hpe.com/LinuxCOE/</guid><content:encoded></content:encoded></item><item><title><![CDATA[LinuxKI]]></title><link>https://developer.hpe.com/LinuxKI/</link><guid isPermaLink="false">https://developer.hpe.com/LinuxKI/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Linux 
Foundation]]></title><link>https://developer.hpe.com/Linux_Foundation/</link><guid isPermaLink="false">https://developer.hpe.com/Linux_Foundation/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Linux Software Raid Redundant Boot]]></title><link>https://developer.hpe.com/Linux_Software_Raid_Redundant_Boot/</link><guid isPermaLink="false">https://developer.hpe.com/Linux_Software_Raid_Redundant_Boot/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Mageia Linux Distribution]]></title><link>https://developer.hpe.com/Mageia_Linux_Distribution/</link><guid isPermaLink="false">https://developer.hpe.com/Mageia_Linux_Distribution/</guid><content:encoded></content:encoded></item><item><title><![CDATA[MapR Demonstration Projects]]></title><description><![CDATA[A collection of open source projects developed by MapR Technologies before it was acquired by HPE.]]></description><link>https://developer.hpe.com/MapR_Technologies/</link><guid isPermaLink="false">https://developer.hpe.com/MapR_Technologies/</guid><content:encoded>&lt;p&gt;A collection of open source projects developed by MapR Technologies before it was acquired by HPE.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Memory-Driven Computing Developer Toolkit]]></title><link>https://developer.hpe.com/Memory-Driven_Computing_Developer_Toolkit/</link><guid isPermaLink="false">https://developer.hpe.com/Memory-Driven_Computing_Developer_Toolkit/</guid><content:encoded></content:encoded></item><item><title><![CDATA[OpenSHMEM]]></title><link>https://developer.hpe.com/OpenSHMEM/</link><guid isPermaLink="false">https://developer.hpe.com/OpenSHMEM/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Open Compute Project]]></title><link>https://developer.hpe.com/Open_Compute_Project/</link><guid isPermaLink="false">https://developer.hpe.com/Open_Compute_Project/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Python-redfish]]></title><link>https://developer.hpe.com/Python-redfish/</link><guid isPermaLink="false">https://developer.hpe.com/Python-redfish/</guid><content:encoded></content:encoded></item><item><title><![CDATA[SmartSim]]></title><link>https://developer.hpe.com/SmartSim/</link><guid isPermaLink="false">https://developer.hpe.com/SmartSim/</guid><content:encoded></content:encoded></item><item><title><![CDATA[RSFO (Rapid Setting for Oracle)]]></title><link>https://developer.hpe.com/RSFO_(Rapid_Setting_for_Oracle)/</link><guid isPermaLink="false">https://developer.hpe.com/RSFO_(Rapid_Setting_for_Oracle)/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Spark]]></title><link>https://developer.hpe.com/Spark/</link><guid isPermaLink="false">https://developer.hpe.com/Spark/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Spiffe]]></title><link>https://developer.hpe.com/Spiffe/</link><guid isPermaLink="false">https://developer.hpe.com/Spiffe/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Spire]]></title><link>https://developer.hpe.com/Spire/</link><guid isPermaLink="false">https://developer.hpe.com/Spire/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Storage drivers]]></title><link>https://developer.hpe.com/Storage_drivers/</link><guid isPermaLink="false">https://developer.hpe.com/Storage_drivers/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Topology Framework]]></title><link>https://developer.hpe.com/Topology_Framework/</link><guid 
isPermaLink="false">https://developer.hpe.com/Topology_Framework/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Determined AI]]></title><description><![CDATA[Determined is an open-source deep learning training platform that makes building models fast and easy]]></description><link>https://developer.hpe.com/determined-ai/</link><guid isPermaLink="false">https://developer.hpe.com/determined-ai/</guid><content:encoded>&lt;p&gt;Determined is an open-source deep learning training platform that makes building models fast and easy&lt;/p&gt;</content:encoded></item><item><title><![CDATA[DragonHPC]]></title><description><![CDATA[DragonHPC is a programmable distributed runtime for HPC & AI workflows.]]></description><link>https://developer.hpe.com/dragonhpc/</link><guid isPermaLink="false">https://developer.hpe.com/dragonhpc/</guid><content:encoded>&lt;p&gt;DragonHPC is a programmable distributed runtime for HPC &amp;#x26; AI workflows.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[KubeDirector]]></title><description><![CDATA[Helps deploy and manage stateful applications on Kubernetes.]]></description><link>https://developer.hpe.com/kubedirector/</link><guid isPermaLink="false">https://developer.hpe.com/kubedirector/</guid><content:encoded>&lt;p&gt;Helps deploy and manage stateful applications on Kubernetes.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[OpenCHAMI]]></title><description><![CDATA[HPC System Management for Cloud Engineers and HPC Sysadmins]]></description><link>https://developer.hpe.com/openchami/</link><guid isPermaLink="false">https://developer.hpe.com/openchami/</guid><content:encoded>&lt;p&gt;HPC System Management for Cloud Engineers and HPC Sysadmins&lt;/p&gt;</content:encoded></item><item><title><![CDATA[SPIFFE/SPIRE]]></title><link>https://developer.hpe.com/spiffe-and-spire-projects/</link><guid isPermaLink="false">https://developer.hpe.com/spiffe-and-spire-projects/</guid><content:encoded></content:encoded></item><item><title><![CDATA[HPE Aruba Networking Central]]></title><description><![CDATA[Supercharge your network operations with HPE Aruba Networking Developer Hub. Find help and resources to automate networks and quickly build…]]></description><link>https://developer.hpe.com/aruba-central/home/</link><guid isPermaLink="false">https://developer.hpe.com/aruba-central/home/</guid><content:encoded>&lt;p&gt;Supercharge your network operations with HPE Aruba Networking Developer Hub. Find help and resources to automate networks and quickly build powerful applications by integrating with HPE Aruba Networking products. Find APIs and guides, visit the Code Exchange, and join our community of Airheads.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://devhub.arubanetworks.com/&quot;&gt;Check out the HPE Aruba Networking Developer Hub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://community.arubanetworks.com/community-home/digestviewer?communitykey=ea467413-8db4-4c49-b5f8-1a12f193e959&amp;#x26;tab=digestviewer&quot;&gt;Join the conversation at the Airheads Developer Community&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.ansible.com/integrations/networks/aruba&quot;&gt;Simplify network operations with HPE Aruba Networking and Ansible&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/aruba&quot;&gt;Leverage the HPE Aruba Networking GitHub repository&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Resources&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.arubanetworks.com/hpe-aruba-networking-central/docs/rest-api-getting-started&quot;&gt;HPE Aruba Networking Central API reference&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.arubanetworks.com/aruba-central/docs/ansible-getting-started&quot;&gt;Getting Started with Ansible and HPE Aruba Networking Central&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/aruba/aruba-postman-collections&quot;&gt;HPE Aruba Networking Central Postman collections&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://devhub.arubanetworks.com/code-exchange&quot;&gt;HPE Aruba Networking Code Exchange&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Any questions on Aruba?&lt;/h2&gt;
&lt;p&gt;Join the &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;HPE Developer Slack Workspace&lt;/a&gt; and start a discussion in our &lt;a href=&quot;https://hpedev.slack.com/archives/C0164BJHKJP&quot;&gt;#aruba-central&lt;/a&gt; Slack channel.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Data Services Cloud Console]]></title><description><![CDATA[To eliminate the silos and complexity of data management and infrastructure, HPE provides the Data Services Cloud Console, a SaaS-based…]]></description><link>https://developer.hpe.com/data-services-cloud-console/home/</link><guid isPermaLink="false">https://developer.hpe.com/data-services-cloud-console/home/</guid><content:encoded>&lt;p&gt;To eliminate the silos and complexity of data management and infrastructure, HPE provides the Data Services Cloud Console, a SaaS-based cloud console that delivers cloud operational agility and unified data operations as a service. Data Services Cloud Console also offers a unified API that gives developers access to infrastructure and data as code.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/a-single-destination-for-data-and-infra-services.png&quot; alt=&quot;DSCC UI&quot; title=&quot;DSCC UI diagram&quot;&gt;&lt;/p&gt;
&lt;p&gt;The cloud console brings the cloud experience to wherever data lives and streamlines data management across your hybrid cloud. It provides a suite of cloud services across your edge, core, and public cloud to accelerate data, agility, and innovation for everyone, from data managers to data innovators. Delivered as a SaaS-based console, Data Services Cloud Console speeds customer innovation because there is nothing to deploy, everything is always updated to the latest software, and ownership risk is reduced by virtue of it being a subscription. The console is a single pane of glass for managing the data and infrastructure services found in the cloud data services and cloud infrastructure services catalogs.&lt;/p&gt;
&lt;h3&gt;Introduction to Data Services Cloud Console&lt;/h3&gt;
&lt;p&gt;To access the Data Services Cloud Console, click on this URL &lt;a href=&quot;https://console.greenlake.hpe.com&quot;&gt;https://console.greenlake.hpe.com&lt;/a&gt; and sign in using your HPE GreenLake account.&lt;/p&gt;
&lt;p&gt;For a detailed overview of the console, and to get started using it, please click on this link &lt;a href=&quot;https://www.hpe.com/us/en/storage/data-services-cloud-console.html&quot;&gt;https://www.hpe.com/us/en/storage/data-services-cloud-console.html&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;For a quick overview of the console, please click on the videos below.&lt;/p&gt;
&lt;div class=&quot;gatsby-resp-iframe-wrapper&quot; style=&quot;padding-bottom: 56.25%; position: relative; height: 0; overflow: hidden; margin-bottom: 1.0725rem&quot; &gt; &lt;iframe title=&quot;HPE Data Services Cloud Console Chalk Talk&quot; src=&quot;https://www.youtube.com/embed/AxUE89X3Sy0&quot; frameborder=&quot;0&quot; allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture&quot; allowfullscreen=&quot;&quot; style=&quot; position: absolute; top: 0; left: 0; width: 100%; height: 100%; &quot;&gt;&lt;/iframe&gt; &lt;/div&gt;
&lt;div class=&quot;gatsby-resp-iframe-wrapper&quot; style=&quot;padding-bottom: 56.25%; position: relative; height: 0; overflow: hidden; margin-bottom: 1.0725rem&quot; &gt; &lt;iframe title=&quot;A Closer Look at HPE Data Services Cloud Console&quot; src=&quot;https://www.youtube.com/embed/lzOWapX0m5U&quot; frameborder=&quot;0&quot; allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture&quot; allowfullscreen=&quot;&quot; style=&quot; position: absolute; top: 0; left: 0; width: 100%; height: 100%; &quot;&gt;&lt;/iframe&gt; &lt;/div&gt;
&lt;h3&gt;Data Services Cloud Console Public REST API&lt;/h3&gt;
&lt;p&gt;The Data Services Cloud Console public REST API provides a resource for customers looking to enhance their infrastructure management and data ops using programmatic extensions of the console. More blog posts are coming that will help customers adopt this API, providing examples, code snippets, and other helpful information.&lt;/p&gt;
&lt;p&gt;For a detailed exploration of the console&apos;s &lt;a href=&quot;https://console-us1.data.cloud.hpe.com/doc/api/v1/&quot;&gt;public REST API&lt;/a&gt;, please take a look at the following blog posts and the video of the API demo on YouTube.&lt;/p&gt;
&lt;div class=&quot;gatsby-resp-iframe-wrapper&quot; style=&quot;padding-bottom: 56.25%; position: relative; height: 0; overflow: hidden; margin-bottom: 1.0725rem&quot; &gt; &lt;iframe title=&quot;Introduction to HPE Data Services Cloud Console public API&quot; src=&quot;https://www.youtube.com/embed/g3UO0S-4r6I&quot; frameborder=&quot;0&quot; allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture&quot; allowfullscreen=&quot;&quot; style=&quot; position: absolute; top: 0; left: 0; width: 100%; height: 100%; &quot;&gt;&lt;/iframe&gt; &lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;Blog: &lt;a href=&quot;https://developer.hpe.com/blog/getting-started-with-the-hpe-data-services-cloud-console-public-rest-api/&quot;&gt;Getting Started with Data Services Cloud Console API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Blog: &lt;a href=&quot;https://developer.hpe.com/blog/api-console-for-data-services-cloud-console/&quot;&gt;Using HPE GreenLake Console&apos;s API Gateway for Data Services Cloud Console&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Blog: &lt;a href=&quot;https://developer.hpe.com/blog/oauth2-for-hpe-greenlake-data-services-cloud-console/&quot;&gt;Implementing OAuth 2.0 Flow for Data Services Cloud Console&apos;s Client Application&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
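&lt;p&gt;The posts above walk through the OAuth 2.0 client-credentials flow in detail. As a quick, hedged illustration, a minimal Python sketch of that flow might look like the following; the token endpoint, base URL, and resource path are example values drawn from the getting-started material and should be confirmed against the API documentation for your HPE GreenLake region.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Minimal sketch of the OAuth 2.0 client-credentials flow used by the Data
# Services Cloud Console public REST API. The token URL, base URL, and resource
# path are illustrative assumptions; confirm them for your GreenLake region.
import requests

TOKEN_URL = &quot;https://sso.common.cloud.hpe.com/as/token.oauth2&quot;  # assumed endpoint
BASE_URL = &quot;https://us1.data.cloud.hpe.com&quot;                      # region-specific example
CLIENT_ID = &quot;your-client-id&quot;                                     # from your API client credentials
CLIENT_SECRET = &quot;your-client-secret&quot;

# 1. Exchange the API client credentials for a bearer access token.
token_resp = requests.post(
    TOKEN_URL,
    data={
        &quot;grant_type&quot;: &quot;client_credentials&quot;,
        &quot;client_id&quot;: CLIENT_ID,
        &quot;client_secret&quot;: CLIENT_SECRET,
    },
)
token_resp.raise_for_status()
access_token = token_resp.json()[&quot;access_token&quot;]

# 2. Call a DSCC endpoint with the token, for example listing storage systems.
headers = {&quot;Authorization&quot;: f&quot;Bearer {access_token}&quot;}
systems = requests.get(f&quot;{BASE_URL}/api/v1/storage-systems&quot;, headers=headers).json()
for system in systems.get(&quot;items&quot;, []):
    print(system.get(&quot;name&quot;), system.get(&quot;id&quot;))
&lt;/code&gt;&lt;/pre&gt;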
&lt;h3&gt;Any Question on Data Services Cloud Console?&lt;/h3&gt;
&lt;p&gt;Please join the &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;HPEDEV Slack Workspace&lt;/a&gt; and start a discussion in our &lt;a href=&quot;https://hpedev.slack.com/archives/C02D6H623JP&quot;&gt;#hpe-greenlake-data-services-cloud-console&lt;/a&gt; channel.&lt;/p&gt;
&lt;p&gt;Data Services on the HPE GreenLake edge-to-cloud platform is a group of services which are part of the service catalogues in the HPE GreenLake edge-to-cloud platform. These services bring the cloud experience to HPE GreenLake customers wherever the data lives, across on-premises and public cloud, throughout its lifecycle. With streamlined ordering, provisioning, management, protection, analysis, and archiving, customers can achieve higher agility to innovate using the latest trends in IT, such as artificial intelligence. For more information on how to leverage Data Services on the HPE GreenLake edge-to-cloud platform, please visit the HPE Data Management &lt;a href=&quot;https://www.hpe.com/us/en/storage/data-services-cloud-console.html&quot;&gt;website&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Introduction to Data Services on the HPE GreenLake edge-to-cloud platform&lt;/h2&gt;
&lt;p&gt;There are two major categories of services from Data Services on the HPE GreenLake edge-to-cloud platform:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Infrastructure Platform suites&lt;/strong&gt; for the HPE Storage family (for more info, &lt;a href=&quot;https://www.hpe.com/us/en/greenlake/storage.html&quot;&gt;visit&lt;/a&gt; HPE GreenLake for Data Storage) to enable configuration, management, monitoring, and optimization of the HPE Alletra Storage families for block, file, and object. These services are known as &lt;a href=&quot;https://www.hpe.com/us/en/hpe-greenlake-compute-ops-management.html&quot;&gt;HPE GreenLake Data Ops Manager&lt;/a&gt;, &lt;a href=&quot;https://www.hpe.com/us/en/hpe-greenlake-block-storage.html&quot;&gt;HPE GreenLake for Block Storage&lt;/a&gt;, &lt;a href=&quot;https://www.hpe.com/us/en/hpe-greenlake-file-storage.html&quot;&gt;HPE GreenLake for File Storage&lt;/a&gt;, and &lt;a href=&quot;https://www.hpe.com/us/en/hpe-greenlake-storage-fabric-management.html&quot;&gt;HPE GreenLake for Storage Fabric Management&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Cloud Data Services suites&lt;/strong&gt; for hybrid cloud workload lifecycle management, to enable data protection, to provide disaster recovery, to deploy self-service hybrid cloud, and to enable infrastructure analysis. These services are known as &lt;a href=&quot;https://www.hpe.com/us/en/hpe-greenlake-backup-recovery.html&quot;&gt;HPE GreenLake for Backup and Recovery&lt;/a&gt;, &lt;a href=&quot;https://www.hpe.com/us/en/hpe-greenlake-private-cloud-business-edition.html&quot;&gt;HPE GreenLake for Private Cloud Business Edition&lt;/a&gt;, and &lt;a href=&quot;https://www.hpe.com/us/en/hpe-greenlake-disaster-recovery.html&quot;&gt;HPE GreenLake for Disaster Recovery&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/data-services-catalogues.png&quot; alt=&quot;HPE GreenLake Data Services Catalogue&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The above figure displays the list of the services that are part of the Data Services on the &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00120892en_us&amp;#x26;page=index.html&quot;&gt;HPE GreenLake edge-to-cloud platform&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;Data Services REST APIs&lt;/h2&gt;
&lt;h3&gt;The history&lt;/h3&gt;
&lt;p&gt;In &lt;strong&gt;November 2021&lt;/strong&gt;, after the release of the HPE GreenLake edge-to-cloud platform, Data Services introduced the first set of APIs to manipulate resources made available under HPE GreenLake Data Ops Manager. Today, this set of APIs has been expanded to include the Block Storage and File Storage services and the current storage family of HPE Primera, Nimble Gen5, and Alletra (6K, 9K, MP). This set of APIs, associated with the &lt;strong&gt;Infrastructure Platform suites&lt;/strong&gt;, is also known as the &lt;strong&gt;Data Services Cloud Console API v1.5 (March 2024)&lt;/strong&gt;. This existing API conforms to the &lt;strong&gt;OpenAPI Standard 3.0&lt;/strong&gt; specification, and the spec file can be downloaded in &lt;strong&gt;either YAML or JSON OpenAPI Specification&lt;/strong&gt; format from the following &lt;a href=&quot;https://console-us1.data.cloud.hpe.com/doc/api/v1/&quot;&gt;website&lt;/a&gt;. Blog posts about getting started with this set of existing APIs are available in the &lt;a href=&quot;https://developer.hpe.com/blog/getting-started-with-the-hpe-data-services-cloud-console-public-rest-api/&quot;&gt;HPE Developer Forum&lt;/a&gt;. There are also additional blog posts that cover topics such as &lt;a href=&quot;https://developer.hpe.com/blog/oauth2-for-hpe-greenlake-data-services-cloud-console/&quot;&gt;authentication using OAuth 2.0&lt;/a&gt; to generate the required &lt;a href=&quot;https://developer.hpe.com/blog/getting-started-with-the-hpe-data-services-cloud-console-public-rest-api/&quot;&gt;access token&lt;/a&gt;, &lt;a href=&quot;https://developer.hpe.com/blog/get-started-building-dscc-api-client-libraries-for-python-using-openapi-generator/&quot;&gt;using openapi-generator-cli&lt;/a&gt; to convert the DSCC API OpenAPI specifications into &lt;a href=&quot;https://developer.hpe.com/blog/getting-started-with-the-hpe-data-services-cloud-console-powershell-sdk/&quot;&gt;PowerShell&lt;/a&gt; and &lt;a href=&quot;https://developer.hpe.com/blog/get-started-building-dscc-api-client-libraries-for-python-using-openapi-generator/&quot;&gt;Python client libraries&lt;/a&gt;, a &lt;a href=&quot;https://developer.hpe.com/blog/new-powershell-toolkit-available-for-managing-hpe-data-services-cloud-console/&quot;&gt;PowerShell toolkit&lt;/a&gt;, and &lt;a href=&quot;https://developer.hpe.com/blog/automating-operations-on-dscc-using-ansible-playbooks/&quot;&gt;an example of Ansible playbooks&lt;/a&gt; to manage these storage resources.&lt;/p&gt;
&lt;h3&gt;The current family of the APIs&lt;/h3&gt;
&lt;p&gt;With the announcement of additional API support for the Data Services suites on the HPE GreenLake edge-to-cloud platform in &lt;strong&gt;March 2024&lt;/strong&gt;, Cloud Data Services introduced a family of API sets to expand support to Data Services such as HPE GreenLake for Backup and Recovery and HPE GreenLake for Private Cloud Business Edition. These API sets conform to the &lt;strong&gt;OpenAPI Standard 3.1&lt;/strong&gt; specification and can be downloaded as &lt;strong&gt;JSON OpenAPI Specification&lt;/strong&gt; files from the &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/&quot;&gt;HPE GreenLake developer website&lt;/a&gt;. These later API sets use the same authentication as the existing API (OAuth 2.0 Client Credential) and the same access token for all services inside an HPE GreenLake region. There are blog posts that introduce the new API sets, along with tips on using available OpenAPI tools to convert the OpenAPI 3.1 spec to 3.0 so that it can be converted, using the open-source openapi-generator, into client libraries for multiple scripting languages.&lt;/p&gt;
&lt;h3&gt;The new set of Data Services APIs&lt;/h3&gt;
&lt;p&gt;This family of APIs is broken down into different groups:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Data Services:&lt;/strong&gt; Group of APIs covering common resources used by both the Data Services and Infrastructure Platform suites, such as Task List (Asynchronous Operations), Dual Authorization, Issues, Settings, Storage Locations, and displaying DSCC Tags.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Virtualization:&lt;/strong&gt; Group of APIs covering common interactions with both on-premises and public cloud hypervisors, such as registering a VMware vCenter or AWS account, discovering virtual machines or AWS EC2 instances, and migrating virtual machines, among many other use cases (a short example follows this list).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Private Cloud Business Edition:&lt;/strong&gt; Group of APIs designed to view the inventory of a DHCI 2.0, SimpliVity, or Azure cloud account, to perform upgrades of storage and hypervisor software, and to add, update, and delete provisioning policies, among many other use cases in HPE GreenLake for Private Cloud Business Edition.&lt;/li&gt;
&lt;/ol&gt;
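&lt;p&gt;To make the grouping concrete, the short sketch below reuses an access token obtained through the same OAuth 2.0 client-credentials flow used by the existing API and calls one endpoint from the Virtualization group. The base URL and resource path are assumptions modeled on the getting-started blog posts; verify them against the OpenAPI specifications on the HPE GreenLake developer website.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Sketch: calling one endpoint from the Virtualization group of the Data
# Services family of APIs, reusing a bearer token obtained via the OAuth 2.0
# client-credentials flow. The base URL and path are assumptions; check the
# published OpenAPI specs for the exact resources and versions.
import requests

BASE_URL = &quot;https://us1.data.cloud.hpe.com&quot;  # region-specific example
access_token = &quot;bearer-token-obtained-as-shown-earlier&quot;  # placeholder

headers = {&quot;Authorization&quot;: f&quot;Bearer {access_token}&quot;}
resp = requests.get(
    f&quot;{BASE_URL}/virtualization/v1beta1/virtual-machines&quot;,
    headers=headers,
    params={&quot;limit&quot;: 20},
)
resp.raise_for_status()
for vm in resp.json().get(&quot;items&quot;, []):
    print(vm.get(&quot;name&quot;), vm.get(&quot;state&quot;))
&lt;/code&gt;&lt;/pre&gt;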
&lt;h3&gt;The new documentation about the family of Data Services APIs&lt;/h3&gt;
&lt;p&gt;This is the pivoting point for the HPE GreenLake APIs for Data Services on the HPE GreenLake edge-to-cloud platform: API clients can now perform more complex automation that involves all categories of services made available in Data Services on the HPE GreenLake edge-to-cloud platform. To accommodate documentation for the combination of multiple sets of APIs, the HPE GreenLake documentation about the Data Services APIs has been updated to provide directions on how to access the family of HPE GreenLake APIs for Data Services on the HPE GreenLake edge-to-cloud platform, available at this &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=sd00003533en_us&amp;#x26;page=ps_api_dscc.html&quot;&gt;link&lt;/a&gt;. The help menu inside HPE GreenLake for those data services was also updated as of March 2024, as shown below.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/the-link-to-the-documentation-of-all-hpe-gl-apis-for-data-services-on-hpegl.png&quot; alt=&quot;HPE data services help menu after the update&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The above figure shows the link that was added to point to the collection of HPE GreenLake APIs for Data Services on the HPE GreenLake platform, which leads further to the Help Menu for using the APIs.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/landing-page-that-describes-all-of-the-rest-api-for-data-services-on-hpegl.png&quot; alt=&quot;Using the API links to the GL API for the family of data services&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The above figure shows the links to each set of HPE GreenLake APIs that is part of the family of Data Services on the HPE GreenLake platform.&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;The future for Data Services APIs&lt;/h3&gt;
&lt;p&gt;In the future, more sets of HPE GreenLake APIs related to additional Data Services will be published at the HPE GreenLake Developer Website, conforming to the OpenAPI 3.1 specification. Furthermore, the Data Services Cloud Console API will eventually be published to conform to the OpenAPI 3.1 specification, just like the rest of the OpenAPI specs. To accommodate powerful and comprehensive DataOps use cases, such as those presented in this blog &lt;a href=&quot;https://developer.hpe.com/blog/getting-started-with-hpe-greenlake-api-for-virtualization/&quot;&gt;post&lt;/a&gt; and another blog &lt;a href=&quot;https://developer.hpe.com/blog/getting-started-with-hpe-greenlake-api-for-backup-and-recovery/&quot;&gt;post&lt;/a&gt;, multiple HPE GreenLake APIs from different sets of Data Services on the HPE GreenLake platform have been combined.&lt;/p&gt;
&lt;h3&gt;Learn more about the Data Services APIs&lt;/h3&gt;
&lt;p&gt;To get started with the Data Services APIs and improve your agility in managing your data, please look at the following blog posts, the Data Services documentation, and the video of the Developer Forum webinar on the HPE YouTube channel.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Blog &lt;a href=&quot;https://developer.hpe.com/blog/getting-started-with-hpe-greenlake-api-for-data-services/&quot;&gt;post&lt;/a&gt;: Getting Started with HPE GreenLake APIs for Data Services.&lt;/li&gt;
&lt;li&gt;Blog &lt;a href=&quot;https://developer.hpe.com/blog/getting-started-with-hpe-greenlake-api-for-virtualization/&quot;&gt;post&lt;/a&gt;: Getting Started with HPE GreenLake APIs for Virtualization.&lt;/li&gt;
&lt;li&gt;Blog &lt;a href=&quot;https://developer.hpe.com/blog/getting-started-with-private-cloud-business-edition-apis/&quot;&gt;post&lt;/a&gt;: Getting Started with HPE GreenLake APIs for HPE GreenLake Private Cloud Business Edition.&lt;/li&gt;
&lt;li&gt;The &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=sd00003533en_us&amp;#x26;page=ps_api_dscc.html&quot;&gt;link&lt;/a&gt; to the documentation of the HPE GreenLake APIs at HPE GreenLake help menu.&lt;/li&gt;
&lt;li&gt;Watch the replay of the Meetup webinar “&lt;strong&gt;The data services family of APIs for HPE GreenLake – Putting it all together&lt;/strong&gt;”&lt;/li&gt;
&lt;/ol&gt;
&lt;div class=&quot;gatsby-resp-iframe-wrapper&quot; style=&quot;padding-bottom: 56.25%; position: relative; height: 0; overflow: hidden; margin-bottom: 1.0725rem&quot; &gt; &lt;iframe src=&quot;https://www.youtube.com/embed/-ffzhc6TzoA?si=fzg4jiakx_L2GxZB&quot; title=&quot;YouTube video player&quot; frameborder=&quot;0&quot; allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share&quot; referrerpolicy=&quot;strict-origin-when-cross-origin&quot; allowfullscreen=&quot;&quot; style=&quot; position: absolute; top: 0; left: 0; width: 100%; height: 100%; &quot;&gt;&lt;/iframe&gt; &lt;/div&gt;
&lt;p&gt;&lt;br /&gt; &lt;br /&gt;&lt;/p&gt;
&lt;h3&gt;Any questions on HPE GreenLake APIs for Data Services on HPE GreenLake platform?&lt;/h3&gt;
&lt;p&gt;Join the HPE Developer Community Slack &lt;a href=&quot;https://developer.hpe.com/slack-signup/&quot;&gt;Workspace&lt;/a&gt;, and start a discussion in our &lt;a href=&quot;https://hpedev.slack.com/archives/C02D6H623JP&quot;&gt;#hpe-greenlake-data-services&lt;/a&gt; Slack channel.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE GreenLake edge-to-cloud platform]]></title><description><![CDATA[The HPE GreenLake platform, along with its marketplace and partner ecosystem, provides customers with a consistent, unified foundation for…]]></description><link>https://developer.hpe.com/hpe-greenlake-cloud-platform/home/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-greenlake-cloud-platform/home/</guid><content:encoded>&lt;p&gt;The HPE GreenLake platform, along with its marketplace and partner ecosystem, provides customers with a consistent, unified foundation for application development and hybrid cloud innovation. This helps customers reduce costs and complexity, while enhancing stability and performance across multiple IT environments.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/greenlake.html&quot;&gt;Learn more about HPE GreenLake&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;APIs and Documentation&lt;/h2&gt;
&lt;p&gt;HPE GreenLake customers and partners can take advantage of our well-documented, secure, and scalable framework of APIs for HPE GreenLake found in our developer portal. Learn more about the unified HPE GreenLake experience for developers by visiting the &lt;a href=&quot;https://developer.greenlake.hpe.com&quot;&gt;HPE GreenLake Developer Portal&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Demos and resources&lt;/h2&gt;
&lt;p&gt;Get an overview of how you can manage workloads with the HPE GreenLake platform using our &lt;a href=&quot;https://www.hpe.com/us/en/greenlake/demos.html&quot;&gt;quick demos&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Test Drive for developers&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://auth.hpe.com/hpe/cf/registration&quot;&gt;Register for an HPE account&lt;/a&gt; and check what is available for developers in our &lt;a href=&quot;https://testdrive.greenlake.hpe.com/&quot;&gt;Test Drive&lt;/a&gt; platform.&lt;/p&gt;
&lt;h2&gt;Workshops-on-Demand&lt;/h2&gt;
&lt;p&gt;Take advantage of our free, Jupyter-Notebook based Workshops-on-Demand available in the &lt;a href=&quot;https://developer.hpe.com/hackshack/&quot;&gt;Hack Shack&lt;/a&gt;. These technical workshops provide you with an in-depth, hands-on learning experience where you can interact with and learn from the experts. Designed to fit your schedule, these workshops are available 24/7 – any time, from anywhere. HPE GreenLake workshops are available today.&lt;/p&gt;
&lt;br/&gt;
&lt;link rel=&quot;stylesheet&quot; href=&quot;https://www.w3schools.com/w3css/4/w3.css&quot;&gt;
&lt;div class=&quot;w3-container w3-center w3-margin-bottom&quot;&gt;
  &lt;a href=&quot;/hackshack/workshops&quot;&gt;&lt;button type=&quot;button&quot; class=&quot;button&quot;&gt;Try now!&lt;/button&gt;&lt;/a&gt;
&lt;/div&gt;
&lt;br/&gt;
&lt;h2&gt;Any questions on HPE GreenLake?&lt;/h2&gt;
&lt;p&gt;Join the &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;HPE Developer Community Slack Workspace&lt;/a&gt; and start a discussion in our &lt;a href=&quot;https://hpedev.slack.com/archives/C02EG5XFK8Q&quot;&gt;#HPEGreenLake&lt;/a&gt; channel.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE Compute Ops Management]]></title><description><![CDATA[Compute powers business today, and organizations need compute resources to be available in more environments than ever before -- at the edge…]]></description><link>https://developer.hpe.com/hpe-greenlake-for-compute-ops-management/home/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-greenlake-for-compute-ops-management/home/</guid><content:encoded>&lt;p&gt;Compute powers business today, and organizations need compute resources to be available in more environments than ever before -- at the edge, close to where data is generated and consumed, and in the cloud where they can provide greater agility.&lt;/p&gt;
&lt;p&gt;Every business is facing tough challenges – to digitally transform, address security risks, and operate efficiently. Complex server management is a distraction from these challenges, consuming IT resources and slowing innovation. HPE Compute Ops Management solves these challenges by simplifying and unifying operations across the server lifecycle, for the whole environment, no matter where your compute infrastructure lives. The service provides a consistent, secure cloud experience that scales elastically and unifies compute management.&lt;/p&gt;
&lt;p&gt;With HPE Compute Ops Management, you can:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Unify compute management&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Streamline your compute management operations using a seamless as-a-service single console experience from edge-to-cloud with self-service and real-time access to servers.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Simplify and automate&lt;/strong&gt;&lt;br&gt;
Simplify and bring agility to your compute lifecycle management to lower your Total Cost of Ownership (TCO).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Secure cloud operations&lt;/strong&gt;&lt;br&gt;
Securely control your distributed compute lifecycle tasks using a cloud-native architecture to manage and monitor your servers seamlessly.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;&lt;a href=&quot;https://www.hpe.com/us/en/compute/management-software.html&quot;&gt;Learn more about HPE Compute Ops Management&lt;/a&gt;&lt;/h3&gt;
&lt;h2&gt;APIs and documentation&lt;/h2&gt;
&lt;p&gt;HPE Compute Ops Management customers and partners can take advantage of our well-documented, secure, and scalable APIs found on the HPE GreenLake developer portal.&lt;/p&gt;
&lt;h3&gt;&lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/compute-ops/public/&quot;&gt;Check out the HPE Compute Ops Management APIs on the HPE GreenLake Developer Portal&lt;/a&gt;&lt;/h3&gt;
&lt;h3&gt;&lt;a href=&quot;https://www.hpe.com/info/com-gsg&quot;&gt;Reference the HPE Compute Ops Management Getting Started Guide for configuration and usage guidance&lt;/a&gt;&lt;/h3&gt;
&lt;h2&gt;Any questions on HPE Compute Ops Management?&lt;/h2&gt;
&lt;p&gt;Please join the &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;HPE Developer Slack Workspace&lt;/a&gt; and start a discussion in our &lt;a href=&quot;https://hpedev.slack.com/archives/C03QTQWC213&quot;&gt;&lt;/a&gt;&lt;a href=&quot;https://hpedev.slack.com/archives/C03QTQWC213&quot;&gt;#hpe-compute-ops-management&lt;/a&gt; channel.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE Private Cloud Enterprise]]></title><description><![CDATA[HPE Private Cloud Enterprise reimagines the private cloud experience with a scalable, pay-per-use, enterprise-grade solution delivered to…]]></description><link>https://developer.hpe.com/hpe-greenlake-for-private-cloud-enterprise/home/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-greenlake-for-private-cloud-enterprise/home/</guid><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/hpe-private-cloud-enterprise.html&quot;&gt;HPE Private Cloud Enterprise&lt;/a&gt; reimagines the private cloud experience with a scalable, pay-per-use, enterprise-grade solution delivered to you as a managed service across your locations — from edge to cloud. Built for both cloud-native and traditional applications, it supports the self‑service deployment of container, bare metal, and virtual machine services.&lt;/p&gt;
&lt;p&gt;Its design principles are centered on leveraging open standards and open systems, preventing vendor lock-in with the ability to place your workloads in the environment of your choice based on cost and performance. You also get the full advantage of modern DevOps and automation with infrastructure-as-code (IaC) configuration management, REST APIs, and cloud command shell, for streamlined infrastructure provisioning and integration with existing DevOps/CI toolchains—speeding deployments for cloud admins and developers alike.&lt;/p&gt;
&lt;h2&gt;HPE Private Cloud Enterprise services&lt;/h2&gt;
&lt;h3&gt;Containers&lt;/h3&gt;
&lt;p&gt;Provision Kubernetes clusters on-demand to deploy and scale containerized applications and cloud-native workloads. The default container service is built on the HPE Ezmeral Runtime Environment and based on CNCF-compliant Kubernetes. You can create container clusters using VMs and/or bare-metal compute instances to meet a range of performance requirements.&lt;/p&gt;
&lt;p&gt;HPE Private Cloud Enterprise also supports select third-party container platforms such as Amazon Elastic Kubernetes Service (EKS), so you can leverage the same container runtimes in your public and private cloud, streamlining workload portability and providing a consistent cloud-native experience across hybrid clouds for cloud or tenant admins (cloud system admins, DevOps admins, and more) and cloud consumers (developers, project contributors, and such).&lt;/p&gt;
&lt;h3&gt;Bare Metal&lt;/h3&gt;
&lt;p&gt;Provision bare-metal instances on-demand to support workloads that require the performance of a dedicated physical server. You can organize bare-metal instances into compute groups; define a unique set of resources for each group (compute instances, storage volumes, VLAN segments, IP pools, SSH keys, and more).  Easily bring your sanctioned, hardened operating system images or your own virtualization or container technology stacks to meet your corporate IT standards and versioning policies.&lt;/p&gt;
&lt;h3&gt;Virtual machines&lt;/h3&gt;
&lt;p&gt;Provision virtual machines on-demand to support traditional virtualized workloads.  The service supports the popular VMware ESXi™ hypervisor. Choose from a variety of virtual machine instances with different compute instance types and sizes to meet the price-performance requirements of different workloads. In addition, HPE supports predefined VM instance families aligned to compute modules with various memory, storage, and CPU characteristics. You can also define and size your VM instances to address the specific requirements of any application.  HPE Private Cloud Enterprise offers integrations with popular configuration management platforms, including Ansible, Ansible Tower, Chef, and Puppet.&lt;/p&gt;
&lt;p&gt;To learn more, review the &lt;a href=&quot;https://www.hpe.com/psnow/doc/a50007892enw&quot;&gt;HPE Private Cloud Enterprise technical white paper&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Developer experience&lt;/h2&gt;
&lt;p&gt;HPE Private Cloud Enterprise includes powerful DevOps tools that streamline and automate repetitive tasks with repeatable workflows, super-charging productivity and increasing agility for faster, more consistent code collaboration.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Get fast, self-service access to the platform and its resources&lt;/li&gt;
&lt;li&gt;Reduce manual DevOps tasks with automation&lt;/li&gt;
&lt;li&gt;Integrate with familiar DevOps toolchains (currently GitHub)&lt;/li&gt;
&lt;li&gt;Easily monitor and manage project status&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/hpegl4pce-architecture-v2.png&quot; alt=&quot;HPE Private Cloud Enterprise architecture&quot; title=&quot;HPE Private Cloud Enterprise architecture&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Infrastructure as code (IaC)&lt;/h3&gt;
&lt;p&gt;Leverage the IaC functionality offered by the HPE GreenLake Terraform Provider to provision and manage your private cloud resources. The provider and its documentation are available in the &lt;a href=&quot;https://registry.terraform.io/providers/HPE/hpegl/latest/docs&quot;&gt;Terraform Registry&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Get some real, hands-on experience by registering for this Workshop-on-Demand:
&lt;a href=&quot;https://developer.hpe.com/hackshack/workshop/36&quot;&gt;Introduction to Virtual Machine Infrastructure-as-Code using Terraform and HPE Private Cloud Enterprise&lt;/a&gt;&lt;/p&gt;
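&lt;p&gt;As a minimal, non-authoritative sketch of how that IaC workflow can be scripted: the Python example below simply drives the standard Terraform CLI plan-and-apply cycle. It assumes the &lt;code&gt;terraform&lt;/code&gt; binary (pre-installed in the cloud shell described below) and an &lt;code&gt;hpegl&lt;/code&gt; provider configuration with your resource definitions are already present in the working directory.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import subprocess

def terraform(*args, workdir=&quot;.&quot;):
    # Run a Terraform CLI command in the directory that holds the
    # hpegl provider configuration and resource definitions.
    subprocess.run([&quot;terraform&quot;, *args], cwd=workdir, check=True)

# Typical plan-and-apply cycle for the HPE GreenLake Terraform provider.
terraform(&quot;init&quot;)                  # downloads the HPE/hpegl provider
terraform(&quot;plan&quot;, &quot;-out=tfplan&quot;)   # previews the changes
terraform(&quot;apply&quot;, &quot;tfplan&quot;)       # provisions the private cloud resources
&lt;/code&gt;&lt;/pre&gt;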
&lt;h3&gt;REST APIs&lt;/h3&gt;
&lt;p&gt;HPE GreenLake APIs provide a secure, standardized interface for automating common and repetitive tasks, programmatically configuring and provisioning services, and integrating applications with cloud services. The APIs are based on the OpenAPI specification and governed by administratively defined role-based access control (RBAC) and strong authentication methods.&lt;/p&gt;
&lt;h3&gt;Cloud shell&lt;/h3&gt;
&lt;p&gt;Cloud shell is an interactive, browser-based shell that enables secure access to your HPE Private Cloud Enterprise resources. Development packages, orchestration tools, and the latest IaC libraries are pre-installed as part of the cloud shell.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Access an environment pre-loaded with the latest libraries and utilities, with built-in authentication&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Pre-packaged orchestration &amp;#x26; IaC tools and templates (HPE GreenLake-specific resources and tooling)&lt;/li&gt;
&lt;li&gt;Development packages such as Terraform, Git, Docker / Docker Compose / Docker CLI, Golang (latest), Python 3.x&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Clone GitHub repositories in the cloud shell, then debug and deploy applications onto HPE Private Cloud Enterprise resources&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create and manage VM and container resources via IaC using HPE GreenLake Terraform modules&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Each cloud shell instance is backed by 1 GB of persistent storage provisioned for the user&apos;s $HOME directory&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/hpegl4pce-cloud-shell.png&quot; alt=&quot;HPE Private Cloud Enterprise cloud shell&quot; title=&quot;HPE Private Cloud Enterprise cloud shell&quot;&gt;&lt;/p&gt;
&lt;h3&gt;DevOps&lt;/h3&gt;
&lt;p&gt;Cloud or tenant admins and cloud consumers can create DevOps projects—workspaces where authorized administrators and contributors can configure external connections such as GitHub, create and manage automated CD pipelines for external accounts, and associate Kubernetes container clusters with the deployment pipelines.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/hpegl4pce-devops.png&quot; alt=&quot;HPE Private Cloud Enterprise DevOps CI/CD pipeline&quot; title=&quot;HPE Private Cloud Enterprise DevOps CI/CD pipeline&quot;&gt;&lt;/p&gt;
&lt;p&gt;For more information on the HPE GreenLake DevOps capabilities, please watch this video:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Create a CD pipeline with a release trigger with HPE Private Cloud Enterprise&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=i95-FO0bvgg&quot;&gt;&lt;img src=&quot;https://img.youtube.com/vi/i95-FO0bvgg/hqdefault.jpg&quot; alt=&quot;Create a CD pipeline&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Resources&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/psnow/doc/a50007892enw&quot;&gt;Technical white paper&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00092451en_us&amp;#x26;page=index.html&quot;&gt;Support documentation&lt;/a&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;Workshops-on-Demand&lt;/h2&gt;
&lt;p&gt;Take advantage of our free, Jupyter-Notebook based workshops available in the HPE Developer &lt;a href=&quot;https://developer.hpe.com/hackshack/&quot;&gt;Hack Shack&lt;/a&gt;. These technical workshops provide you with an in-depth, hands-on learning experience where you can interact with and learn from the experts. Designed to fit your schedule, these workshops are available 24/7 – any time, from anywhere. &lt;a href=&quot;https://developer.hpe.com/hackshack/workshop/36&quot;&gt;Introduction to Virtual Machine Infrastructure-as-Code using Terraform and HPE Private Cloud Enterprise&lt;/a&gt; is available today.&lt;/p&gt;
&lt;link rel=&quot;stylesheet&quot; href=&quot;https://www.w3schools.com/w3css/4/w3.css&quot;&gt;
&lt;div class=&quot;w3-container w3-center w3-margin-bottom&quot;&gt;
  &lt;a href=&quot;/hackshack/workshops&quot;&gt;&lt;button type=&quot;button&quot; class=&quot;button&quot;&gt;Try now!&lt;/button&gt;&lt;/a&gt;
&lt;/div&gt;</content:encoded></item><item><title><![CDATA[HPE GreenLake cloud]]></title><description><![CDATA[HPE GreenLake cloud is a modular hybrid cloud platform that enables enterprises to unlock insights and innovate faster by deploying a single…]]></description><link>https://developer.hpe.com/hpe-greenlake-platform/home/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-greenlake-platform/home/</guid><content:encoded>&lt;style&gt;
ul li{
 font-size:27px;
 margin-top: 10px;
}
ul li:first-child {
    margin-top:0;
}
&lt;/style&gt;
&lt;p&gt;HPE GreenLake cloud is a modular hybrid cloud platform that enables enterprises to unlock insights and innovate faster by deploying a single cloud operating model everywhere.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/greenlake.html&quot;&gt;Learn more about HPE GreenLake&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;HPE GreenLake cloud APIs and documentation&lt;/h2&gt;
&lt;p&gt;HPE GreenLake customers and partners can take advantage of our well-documented, secure, and scalable framework of APIs for HPE GreenLake found in the &lt;a href=&quot;https://developer.greenlake.hpe.com&quot;&gt;HPE GreenLake Developer Portal&lt;/a&gt;, covering both the platform and the cloud services that run on it.&lt;/p&gt;
&lt;p&gt;The HPE GreenLake cloud APIs:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Conform to the OpenAPI 3.x specification. This makes them easy to learn, discoverable by code, and accessible with any programming language.&lt;/li&gt;
&lt;li&gt;Use a single endpoint. All calls go to a single unified API global domain endpoint: &lt;em&gt;https://global.api.greenlake.hpe.com&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;Use a single OAuth token to access all APIs.&lt;/li&gt;
&lt;li&gt;Are built to be secure and highly available.&lt;/li&gt;
&lt;li&gt;Are RESTful for flexible implementation.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;We provide a &lt;a href=&quot;https://developer.greenlake.hpe.com&quot;&gt;dedicated developer portal&lt;/a&gt; to support you with documentation, code, community, and more.&lt;/p&gt;
&lt;p&gt;For a full list of HPE GreenLake APIs, please visit &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/&quot;&gt;the HPE GreenLake API catalog&lt;/a&gt;. Some of the APIs currently available are listed below:&lt;/p&gt;
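&lt;p&gt;As a minimal, non-authoritative sketch of this request pattern, the Python example below exchanges API client credentials created in your HPE GreenLake workspace for an OAuth access token and then calls the unified global endpoint. The token URL and the resource path shown are assumptions for illustration only; check the API catalog and the HPE GreenLake Developer Portal for the exact values used by each service.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import requests

# Assumption: OAuth 2.0 client-credentials flow; verify the token URL in the
# HPE GreenLake Developer Portal before relying on it.
TOKEN_URL = &quot;https://sso.common.cloud.hpe.com/as/token.oauth2&quot;
API_BASE = &quot;https://global.api.greenlake.hpe.com&quot;  # single unified endpoint

def get_token(client_id, client_secret):
    # Exchange client credentials for a bearer token accepted by the APIs.
    resp = requests.post(TOKEN_URL, data={
        &quot;grant_type&quot;: &quot;client_credentials&quot;,
        &quot;client_id&quot;: client_id,
        &quot;client_secret&quot;: client_secret,
    })
    resp.raise_for_status()
    return resp.json()[&quot;access_token&quot;]

def get_json(token, path):
    # Generic GET helper; the path used in the example below is hypothetical,
    # so look up the real paths and versions for each service in the catalog.
    resp = requests.get(API_BASE + path,
                        headers={&quot;Authorization&quot;: &quot;Bearer &quot; + token})
    resp.raise_for_status()
    return resp.json()

# Example (hypothetical path):
# token = get_token(client_id, client_secret)
# workspaces = get_json(token, &quot;/workspaces/v1/workspaces&quot;)
&lt;/code&gt;&lt;/pre&gt;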
&lt;h3&gt;Workspace management&lt;/h3&gt;
&lt;p&gt;Learn the details of a workspace, and discover how to perform full create, read, update, and delete (CRUD) operations on managed service provider (MSP) tenant workspaces. You can also perform full CRUD operations on the roster of users, including sending invitations to join a workspace.&lt;/p&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/workspace/public/&quot;&gt;Visit Workspace management API Reference&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;h3&gt;Authorization&lt;/h3&gt;
&lt;p&gt;Familiarize yourself with the details of a permissions role, or perform full CRUD operations on user assignments to roles.&lt;/p&gt;
&lt;h3&gt;Devices and subscriptions management&lt;/h3&gt;
&lt;p&gt;Add and modify hardware devices in a workspace, initiate subscriptions, and assign them to devices.&lt;/p&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/device-management/public&quot;&gt;Visit device management API Reference&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/subscription-management/public&quot;&gt;Visit subscription management API Reference&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;h3&gt;Locations&lt;/h3&gt;
&lt;p&gt;Create, read, and update physical locations used for shipping and support.&lt;/p&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/location-management/public&quot;&gt;Visit locations management API Reference&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;h3&gt;Audit Log&lt;/h3&gt;
&lt;p&gt;Read details about user actions for accountability.&lt;/p&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/audit-logs/public/&quot;&gt;Visit Audit log API reference&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;h3&gt;Wellness&lt;/h3&gt;
&lt;p&gt;Get a list of wellness events via searches and filters, organize them with tags, read event details, pull KPIs, and open support cases.&lt;/p&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/wellness/public/&quot;&gt;Visit Wellness API reference&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;p&gt;Learn more about the unified HPE GreenLake cloud experience for developers by visiting the &lt;a href=&quot;https://developer.greenlake.hpe.com&quot;&gt;HPE GreenLake Developer Portal&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Demos and resources&lt;/h2&gt;
&lt;p&gt;Get an overview of how you can manage workloads with HPE GreenLake cloud using our &lt;a href=&quot;https://hpe.com/greenlake/demos&quot;&gt;interactive demos&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Workshops-on-Demand&lt;/h2&gt;
&lt;p&gt;Take advantage of our free, Jupyter Notebook-based Workshops-on-Demand available in the &lt;a href=&quot;https://developer.hpe.com/hackshack/workshops/?activeTab=4&quot;&gt;Hack Shack&lt;/a&gt;. These technical workshops provide you with an in-depth, hands-on learning experience where you can interact with and learn from the experts. Designed to fit your schedule, these workshops are available 24/7 – any time, from anywhere. HPE GreenLake workshops are available today.&lt;/p&gt;
&lt;br/&gt;
&lt;link rel=&quot;stylesheet&quot; href=&quot;https://www.w3schools.com/w3css/4/w3.css&quot;&gt;
&lt;div class=&quot;w3-container w3-center w3-margin-bottom&quot;&gt;
  &lt;a href=&quot;/hackshack/workshops&quot;&gt;&lt;button type=&quot;button&quot; class=&quot;button&quot;&gt;Try now!&lt;/button&gt;&lt;/a&gt;
&lt;/div&gt;
&lt;br/&gt;
&lt;h2&gt;Any questions on HPE GreenLake APIs?&lt;/h2&gt;
&lt;p&gt;Join the &lt;a href=&quot;https://developer.hpe.com/slack-signup/&quot;&gt;HPE Developer Community Slack Workspace&lt;/a&gt; and start a discussion in our &lt;a href=&quot;https://hpedev.slack.com/archives/C02EG5XFK8Q&quot;&gt;#hpe-greenlake-api&lt;/a&gt; Slack channel.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Hybrid Observability in HPE GreenLake Flex Solutions]]></title><link>https://developer.hpe.com/hybrid-observability-flex-solutions/home/</link><guid isPermaLink="false">https://developer.hpe.com/hybrid-observability-flex-solutions/home/</guid><content:encoded>&lt;style&gt;
.action-button {
   display: inline-block;
   background-color: #01a982;
   color: white;
   padding: 12px 24px;
   border-radius: 5px;
   text-decoration: none;
   font-size: 16px;
   font-weight: bold;
}
.action-button:hover {
   background-color: #005a3c;
}
#flex-container {
   display: flex;
}

#flex-content {
   flex: 1;
}
.video-box {
   flex: 1;
   max-width: 40%;
   min-width: 200px;
   text-align: center;
}
.video-thumbnail {
   width: 80%;
   height: 200px;
   border-radius: 10px;
   box-shadow: 0 4px 8px rgba(0, 0, 0, 0.2);
}
#center-align {
   text-align: center;
}
#button-icon {
   height: 20px;
   width: auto;
}
li {
   font-size: 20px;
   line-height: 28px;
   margin-bottom: 10px;
}
#action-link {
    background-color: #01a982;
    color: white;
    padding: 10px 20px;
    border-radius: 20px;
    text-decoration: none;
    display: inline-flex;
    align-items: center;
    gap: 10px;
}
#section-header,
#learn-from-experts,
#additional-resources,
#contact
 {
   color: #01a982;
   font-size: 28px;
   margin-top: 30px;
   border-bottom: 3px solid #01a982;
   padding-bottom: 10px;
}
#content-list {
   list-style-type: decimal;
   padding-left: 20px;
   font-size: 18px;
   line-height: 1.8;
}
.video-title {
   display: block;
   margin: 0;
   font-size: 20px;
   font-weight: bold;
}
#content-table {
   border-collapse: collapse;
   width: 100%;
   border: 1px solid black;
}

#table-header-cell {
   border: 1px solid black !important;
   padding: 8px !important;
   text-align: center !important;
   font-weight: bold !important;
}

#table-cell {
   border: 1px solid black;
   padding: 8px;
   text-align: center;
}

#section-title {
   border: 1px solid black;
   padding: 8px;
   text-align: center;
   font-weight: bold;
   background-color: #f1f1f1;
}
&lt;/style&gt;
&lt;h1 id=&apos;overview&apos;&gt;Activation &amp; Onboarding of Hybrid Observability in HPE GreenLake Flex Solutions&lt;/h1&gt;
  &lt;p style=&quot;font-size: 22px;&quot;&gt;
    Your HPE GreenLake Flex Solution gives you full access to the tools and intelligence inherent within HPE GreenLake cloud – bringing the cloud experience to all your IT so you can deploy and manage resources across your private and public clouds while retaining control of your data and flexibility over how you consume and manage your services.
  &lt;/p&gt;
&lt;div id=&quot;flex-container&quot;&gt;
  &lt;div id=&quot;flex-content&quot;&gt;
    &lt;p style=&quot;font-size: 22px;&quot;&gt;
One outstanding feature that comes standard with HPE GreenLake Flex Solutions is hybrid observability – for monitoring the compute, storage, and networking resources within your Flex solution. &lt;/p&gt;
&lt;p style=&quot;font-size: 22px;&quot;&gt;&lt;a href=&quot;/greenlake/hybrid-observability-flex-solutions/why-greenLake-flex-solutions&quot; target=&quot;_blank&quot; style=&quot;font-size: 22px;&quot;&gt;Hybrid observability&lt;/a&gt; is an AI-powered SaaS solution – powered by &lt;a href=&quot;https://www.hpe.com/us/en/opsramp.html&quot; target=&quot;_blank&quot; style=&quot;font-size: 22px;&quot;&gt; HPE OpsRamp Software&lt;/a&gt; – that provides your IT operations teams full visibility and control to optimize uptime and infrastructure operating costs while improving process efficiencies. 
    &lt;/p&gt;
  &lt;/div&gt;
  &lt;div class=&quot;video-box&quot;&gt;
      &lt;a href=&quot;https://www.youtube.com/watch?v=3Jp4MbsNydM&quot; target=&quot;_blank&quot;&gt;
        &lt;img src=&quot;/img/stepsflex/main_video_img.png&quot; alt=&quot;HPE GreenLake Flex Solutions Video&quot; class=&quot;video-thumbnail&quot;&gt;
      &lt;/a&gt;
  &lt;/div&gt;
&lt;/div&gt;
  &lt;h3 id=&quot;learn-from-experts&quot;&gt;Getting Started&lt;/h3&gt;
  &lt;p style=&quot;font-size: 22px;&quot;&gt;Direct your IT operations teams to the activation and onboarding resources on this page to start taking full advantage today. You’ll quickly recognize the value of hybrid observability from HPE OpsRamp and want to expand its functionality throughout your hybrid environment – including virtual machines, containers and workloads running on any manufacturer’s hardware.&lt;/p&gt;
  &lt;p style=&quot;font-size: 25px;&quot;&gt;Choose between HPE-assisted and self-service activation and onboarding:&lt;/p&gt;
  &lt;h4 id=&apos;hpe-assisted&apos;&gt;HPE-Assisted Activation and Onboarding&lt;/h4&gt;
&lt;p style=&quot;font-size: 24px;&quot;&gt;Your HPE Account Service Manager (ASM) and a Global Support Remote specialist (GSR) are standing by to help guide your activation and onboarding process.&lt;/p&gt;
&lt;div id=&quot;content-container&quot; style=&quot;display: flex; align-items: flex-start; gap: 20px;&quot;&gt;
  &lt;div id=&quot;text-content&quot; style=&quot;flex: 2;&quot;&gt;
    &lt;p style=&quot;font-size: 22px;&quot;&gt;After your hardware has been installed and configured in your environment, engage your HPE Account Service Manager (ASM) to guide you through the following steps:&lt;/p&gt;
    &lt;ol id=&quot;content-list&quot;&gt;
      &lt;li&gt;&lt;a href=&quot;https://forms.office.com/pages/responsepage.aspx?id=YSBbEGm2MUuSrCTTBNGV3MNwzZIB3zBHp5x5QZ1HakFUOE9NMFdWU0RYRjIzSkRDVzFCVUgzVTM1My4u&quot; target=&quot;_blank&quot;&gt;Complete the quick setup questionnaire&lt;/a&gt; that your ASM will share with you via email.&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00120892en_us&amp;page=GUID-C335D790-5166-406F-B0B8-C93AB46A2C76.html#ariaid-title1&quot; target=&quot;_blank&quot;&gt;Create a workspace&lt;/a&gt; within HPE GreenLake cloud. After creating your first workspace, you’ll be taken to your HPE GreenLake cloud page where you’ll see featured services, like hybrid observability powered by HPE OpsRamp.&lt;/li&gt;
      &lt;li&gt;Invite additional users from your IT operations team by using the Identity and Access Management (IAM) tool.&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00120892en_us&amp;page=GUID-9EDAAB42-9182-488D-A06F-6E8CB4BFAB60.html&quot; target=&quot;_blank&quot;&gt;Provision OpsRamp&lt;/a&gt; in your HPE GreenLake cloud.&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00120892en_us&amp;page=GUID-7CDEB995-EAAB-4A2B-BC1F-2EB6F20B594B.html%23ariaid-title1&quot; target=&quot;_blank&quot;&gt;Specify the region&lt;/a&gt; where you’d like your OpsRamp SaaS instance deployed.&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00120892en_us&amp;page=GUID-46819916-2559-4A35-B10A-42E982026A6C.html&quot; target=&quot;_blank&quot;&gt;Launch OpsRamp using the subscription key&lt;/a&gt; received within your welcome letter or from your ASM.&lt;/li&gt;
      &lt;li&gt;A GSR will then be assigned to provide guidance through the onboarding and customization of your hybrid observability command center. This may include assistance with setting up gateways, auto-discovery and dashboarding, customizing monitoring templates and reports, as well as setting up patching, scripting and alert automation.&lt;/li&gt;
    &lt;/ol&gt;
  &lt;/div&gt;
  &lt;div id=&quot;image-content&quot; style=&quot;flex: 1; text-align: right;&quot;&gt;
    &lt;img src=&quot;/img/stepsflex/assisted.png&quot; alt=&quot;Activation and Onboarding Steps&quot; style=&quot;max-width: 100%; height: auto; border-radius: 8px;&quot;&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;h4 id=&apos;self-service-activation&apos;&gt;Self-service Activation and Onboarding&lt;/h4&gt;
&lt;p style=&quot;font-size: 24px;&quot;&gt;Resources for onboarding, activating, and configuring the hybrid observability service&lt;/p&gt;
&lt;table id=&quot;content-table&quot;&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;td id=&quot;table-header-cell&quot;&gt;Video Tutorials&lt;/td&gt;
      &lt;td id=&quot;table-header-cell&quot;&gt;Blog Tutorials&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr id=&quot;provisioning-activation&quot;&gt;
      &lt;td colspan=&quot;2&quot; id=&quot;section-title&quot;&gt;
        Provisioning and activation
      &lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td id=&quot;table-cell&quot;&gt;
        &lt;a href=&quot;https://www.youtube.com/watch?v=HFRDexDyr88&amp;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;
          &lt;img src=&quot;/img/stepsflex/video_img.jpg&quot; alt=&quot;Step 1 Image&quot; style=&quot;width: 120px; height: 75px; margin: 0;&quot;&gt;
        &lt;/a&gt;
        &lt;span class=&quot;video-title&quot;&gt;Part 1: Provisioning the hybrid observability service in your workspace&lt;/span&gt;
      &lt;/td&gt;
      &lt;td id=&quot;table-cell&quot; rowspan=&quot;4&quot;&gt;
        &lt;a href=&quot;https://developer.hpe.com/blog/hybrid-observability-service-%E2%80%93-part-1-provisioning-and-activation-in-hpe-greenlake-flex-solutions/&quot;&gt;
          Hybrid observability service –  Provisioning and activation in HPE GreenLake Flex Solutions
        &lt;/a&gt;
      &lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td id=&quot;table-cell&quot;&gt;
        &lt;a href=&quot;https://www.youtube.com/watch?v=7lBc9_2TC3s&amp;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;
          &lt;img src=&quot;/img/stepsflex/video_img.jpg&quot; alt=&quot;Step 2 Image&quot; style=&quot;width: 120px; height: 75px; margin: 0;&quot;&gt;
        &lt;/a&gt;
        &lt;span class=&quot;video-title&quot;&gt;Part 2: Activate hybrid observability  subscription key in your workspace&lt;/span&gt;
      &lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td id=&quot;table-cell&quot;&gt;
        &lt;a href=&quot;https://www.youtube.com/watch?v=oinaiu1zeBo&amp;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;
          &lt;img src=&quot;/img/stepsflex/video_img.jpg&quot; alt=&quot;Step 3 Image&quot; style=&quot;width: 120px; height: 75px; margin: 0;&quot;&gt;
        &lt;/a&gt;
        &lt;span class=&quot;video-title&quot;&gt;Part 3: Assign access roles for desired users&lt;/span&gt;
      &lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td id=&quot;table-cell&quot;&gt;
        &lt;a href=&quot;https://www.youtube.com/watch?v=vviJ_2B_C_o&amp;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;
          &lt;img src=&quot;/img/stepsflex/video_img.jpg&quot; alt=&quot;Step 4 Image&quot; style=&quot;width: 120px; height: 75px; margin: 0;&quot;&gt;
        &lt;/a&gt;
        &lt;span class=&quot;video-title&quot;&gt;Part 4: Launching the hybrid observability service from your workspace&lt;/span&gt;
      &lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr id=&quot;client-settings-gateway-configs&quot;&gt;
      &lt;td colspan=&quot;2&quot; id=&quot;section-title&quot;&gt;
        Configuring client settings and gateways
      &lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td id=&quot;table-cell&quot;&gt;
        &lt;a href=&quot;https://www.youtube.com/watch?v=4jYTVIbUAU4&amp;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;
          &lt;img src=&quot;/img/stepsflex/video_img.jpg&quot; alt=&quot;Step 5 Image&quot; style=&quot;width: 120px; height: 75px; margin: 0;&quot;&gt;
        &lt;/a&gt;
        &lt;span class=&quot;video-title&quot;&gt;Part 5: Preparing the client/user settings&lt;/span&gt;
      &lt;/td&gt;
      &lt;td id=&quot;table-cell&quot; rowspan=&quot;2&quot;&gt;
        &lt;a href=&quot;https://developer.hpe.com/blog/hybrid-observability-service-%E2%80%93-part-2-initial-configuration-to-enable-the-discovery-of-resources-in-hpe-greenlake-flex-solutions/&quot;&gt;
          Hybrid observability service – Initial configuration to enable the discovery of resources in HPE GreenLake Flex Solutions 
        &lt;/a&gt;
      &lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td id=&quot;table-cell&quot;&gt;
        &lt;a href=&quot;https://www.youtube.com/watch?v=AtQCNt67SIA&amp;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;
          &lt;img src=&quot;/img/stepsflex/video_img.jpg&quot; alt=&quot;Step 6 Image&quot; style=&quot;width: 120px; height: 75px; margin: 0;&quot;&gt;
        &lt;/a&gt;
        &lt;span class=&quot;video-title&quot;&gt;Part 6: Configuring a Gateway&lt;/span&gt;
      &lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr id=&quot;monitoring-dashboards&quot;&gt;
      &lt;td colspan=&quot;2&quot; id=&quot;section-title&quot;&gt;
        Monitoring, integration, and dashboards
      &lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td id=&quot;table-cell&quot;&gt;
        &lt;a href=&quot;https://www.youtube.com/watch?v=JtiWrHUqzdk&amp;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;
          &lt;img src=&quot;/img/stepsflex/video_img.jpg&quot; alt=&quot;Step 7 Image&quot; style=&quot;width: 120px; height: 75px; margin: 0;&quot;&gt;
        &lt;/a&gt;
        &lt;span class=&quot;video-title&quot;&gt;Part 7: Agentless SSH integration and monitoring templates&lt;/span&gt;
      &lt;/td&gt;
      &lt;td id=&quot;table-cell&quot;&gt;
        &lt;a href=&quot;https://developer.hpe.com/blog/hybrid-observability-service-%E2%80%93-part-3-enabling-the-monitoring-of-agentless-ssh-enabled-systems-in-hpe-greenlake-flex-solutions/&quot;&gt;
          Hybrid observability service – Enabling the monitoring of agentless SSH-enabled systems in HPE GreenLake Flex Solutions
        &lt;/a&gt;
      &lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td id=&quot;table-cell&quot;&gt;
        &lt;a href=&quot;https://www.youtube.com/watch?v=7E1-QvFdeE4&amp;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;
          &lt;img src=&quot;/img/stepsflex/video_img.jpg&quot; alt=&quot;Step 8 Image&quot; style=&quot;width: 120px; height: 75px; margin: 0;&quot;&gt;
        &lt;/a&gt;
        &lt;span class=&quot;video-title&quot;&gt;Part 8: Redfish Server integration and configuration&lt;/span&gt;
      &lt;/td&gt;
&lt;td id=&quot;table-cell&quot; rowspan=&quot;3&quot;&gt;
        &lt;a href=&quot;https://developer.hpe.com/blog/hybrid-observability-service-%E2%80%93-part-4-enabling-the-monitoring-of-physical-devices-in-hpe-greenlake-flex-solutions/&quot;&gt;
          Hybrid observability service – Enabling the monitoring of physical devices in HPE GreenLake Flex Solutions
        &lt;/a&gt;
      &lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td id=&quot;table-cell&quot;&gt;
        &lt;a href=&quot;https://www.youtube.com/watch?v=JpsGVEG_eVA&amp;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;
          &lt;img src=&quot;/img/stepsflex/video_img.jpg&quot; alt=&quot;Step 9 Image&quot; style=&quot;width: 120px; height: 75px; margin: 0;&quot;&gt;
        &lt;/a&gt;
        &lt;span class=&quot;video-title&quot;&gt;Part 9: Storage array integration and configuration&lt;/span&gt;
      &lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td id=&quot;table-cell&quot;&gt;
        &lt;a href=&quot;https://www.youtube.com/watch?v=6ahkkdmGsi8&amp;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;
          &lt;img src=&quot;/img/stepsflex/video_img.jpg&quot; alt=&quot;Step 10 Image&quot; style=&quot;width: 120px; height: 75px; margin: 0;&quot;&gt;
        &lt;/a&gt;
        &lt;span class=&quot;video-title&quot;&gt;Part 10: Creating a customized dashboard&lt;/span&gt;
      &lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
  &lt;h3 id=&quot;additional-resources&quot;&gt;Additional Resources&lt;/h3&gt;
  &lt;ul&gt;
   &lt;li&gt;&lt;a href=&quot;https://www.hpe.com/us/en/collaterals/collateral.a50010160enw.html&quot;&gt;Hybrid observability in HPE GreenLake Flex Solutions Service Description&lt;/a&gt;&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00092451en_us&quot;&gt;HPE GreenLake Flex Solutions User Guide&lt;/a&gt;&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://www.hpe.com/psnow/doc/a50010620enw?jumpid=in_pdfviewer-psnow&quot;&gt;HPE GreenLake Flex Solutions Briefcase&lt;/a&gt;&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/platform/hpe-opsramp/home/&quot;&gt;HPE OpsRamp APIs and Integration Capabilities&lt;/a&gt;&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://glp.docs.opsramp.com/&quot;&gt;HPE OpsRamp documentation for Hybrid Observability in HPE GreenLake Flex Solutions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
    &lt;strong&gt;Webinar replays&lt;/strong&gt;
    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=qH93yD5KSL8&amp;list=PLtS6YX0YOX4f5TyRI7jUdjm7D9H4laNlF&quot;&gt;No More Blind Spots: How eBPF Transforms Observability&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=VnKSnf7G4-4&amp;list=PLtS6YX0YOX4f5TyRI7jUdjm7D9H4laNlF&quot;&gt;From log files to AI insights: The 60-year evolution of observability and AIOps&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=nDa_NQPbbVY&amp;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;DayN+ : A new way to look at observability&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;/ul&gt;
  &lt;h3 id=&quot;contact&quot;&gt;Any Questions About Hybrid Observability in HPE GreenLake Flex Solutions?&lt;/h3&gt;
  &lt;p style=&quot;font-size: 22px;&quot;&gt;Need help getting started with the bundled hybrid observability in HPE GreenLake Flex Solutions? Join our &lt;a href=&quot;https://developer.hpe.com/slack-signup/&quot; style=&quot;font-size: 22px;&quot;&gt;Slack Workspace&lt;/a&gt; and start a discussion in our &lt;a href=&quot;https://hpedev.slack.com/archives/C08K4GV7YN5&quot; style=&quot;font-size: 22px;&quot;&gt;#hpe-greenlake-flex-observability&lt;/a&gt; Slack channel. You can also email us at &lt;a href=&quot;mailto:gsccitops@hpe.com&quot; style=&quot;font-size: 22px;&quot;&gt;gsccitops@hpe.com&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Why Hybrid Observability?]]></title><link>https://developer.hpe.com/hybrid-observability-flex-solutions/why-greenLake-flex-solutions/</link><guid isPermaLink="false">https://developer.hpe.com/hybrid-observability-flex-solutions/why-greenLake-flex-solutions/</guid><content:encoded>&lt;h1 id =&quot;overview&quot;&gt;Why Hybrid observability?&lt;/h1&gt;
&lt;p style=&quot;font-size: 22px;&quot;&gt;
IT operations teams need hybrid observability to gain unified visibility and insights across complex, distributed IT infrastructure environments. Hybrid observability enables faster issue detection and remediation, improved performance, and more efficient management. Hybrid observability within your HPE GreenLake Flex Solutions is a powerful value-added service, delivered via an AI-powered, SaaS-based application, that helps your IT operations teams proactively ensure system performance and optimize infrastructure operating costs while improving process efficiencies.
&lt;/p&gt;
&lt;h3 id=&quot;1-optimize-costs-and-utilization&quot;&gt;1. Optimize Costs and Utilization&lt;/h3&gt;
&lt;div style=&quot;display: flex; align-items: flex-start; gap: 20px;&quot;&gt;
  &lt;div style=&quot;font-size: 22px; line-height: 32px;&quot;&gt;
    Optimize resource utilization and forecast future requirements for more accurate budgeting, and drive better cost efficiencies and predictability.
    &lt;ul&gt;
      &lt;li&gt;Capacity and usage analytics&lt;/li&gt;
      &lt;li&gt;Customizable dashboards and reporting&lt;/li&gt;
      &lt;li&gt;Asset lifecycle management&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/div&gt;
    &lt;img src=&quot;/img/stepsflex/capacity_insights.png&quot; alt=&quot;Capacity Insights&quot; style=&quot;width: 500px; height: auto;&quot;&gt;
&lt;/div&gt;
&lt;h3 id=&quot;2-assure-uptime-and-performance&quot;&gt;2. Assure Uptime and Performance&lt;/h3&gt;
&lt;div style=&quot;display: flex; align-items: flex-start; gap: 20px;&quot;&gt;
  &lt;div style=&quot;font-size: 22px; line-height: 32px;&quot;&gt;
    Rapidly identify issues and the potential impact on application experience. Move from reactive to proactive to quickly resolve incidents before they impact business performance.
    &lt;ul&gt;
      &lt;li&gt;Full-stack observability&lt;/li&gt;
      &lt;li&gt;Health and performance data&lt;/li&gt;
      &lt;li&gt;Service dependency mapping&lt;/li&gt;
      &lt;li&gt;Predictive analytics on trending health data with proactive alerting&lt;/li&gt;
      &lt;li&gt;Policy-based resolution workflows&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/div&gt;
    &lt;img src=&quot;/img/stepsflex/full_stack_observability.png&quot; alt=&quot;Uptime and Performance&quot; style=&quot;width: 500px; height: auto;&quot;&gt;
&lt;/div&gt;
&lt;h3 id=&quot;3-streamline-it-operations-and-service-delivery&quot;&gt;3. Streamline IT Operations and Service Delivery&lt;/h3&gt;
&lt;div style=&quot;display: flex; align-items: flex-start; gap: 20px;&quot;&gt;
  &lt;div style=&quot;font-size: 22px; line-height: 32px;&quot;&gt;
    Automate routine operational tasks with policy-based workflows that leverage AI and ML analytics, freeing your IT team to focus on innovation and business-critical needs.
    &lt;ul&gt;
      &lt;li&gt;Unified visibility across hybrid estate&lt;/li&gt;
      &lt;li&gt;Intuitive process automation&lt;/li&gt;
      &lt;li&gt;Event and incident management&lt;/li&gt;
      &lt;li&gt;Bi-directional integration with Service Desk&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/div&gt;
    &lt;img src=&quot;/img/stepsflex/AI_Insights.png&quot; alt=&quot;IT Operations and Service Delivery&quot; style=&quot;width: 500px; height: auto;&quot;&gt;
&lt;/div&gt;
&lt;h3 id=&quot;4-simplify-compliance-and-governance&quot;&gt;4. Simplify Compliance and Governance&lt;/h3&gt;
&lt;div style=&quot;display: flex; align-items: flex-start; gap: 20px;&quot;&gt;
  &lt;div style=&quot;font-size: 22px; line-height: 32px;&quot;&gt;
    Mitigate risk and safeguard sensitive data by automating policy enforcement in line with compliance and regulatory requirements.
    &lt;ul&gt;
      &lt;li&gt;Prescribed monitoring templates&lt;/li&gt;
      &lt;li&gt;Automated OS upgrades and patching&lt;/li&gt;
      &lt;li&gt;Role-based consoles&lt;/li&gt;
      &lt;li&gt;Audit trails and session recordings&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/div&gt;
    &lt;img src=&quot;/img/stepsflex/automated_patching.png&quot; alt=&quot;Compliance and Governance&quot; style=&quot;width: 500px; height: auto;&quot;&gt;
&lt;/div&gt;</content:encoded></item><item><title><![CDATA[HPE Ezmeral Software Forum]]></title><description><![CDATA[...]]></description><link>https://developer.hpe.com/EzmeralSoftwareForum/</link><guid isPermaLink="false">https://developer.hpe.com/EzmeralSoftwareForum/</guid><content:encoded>&lt;p&gt;...&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Be an Open Source Contributor]]></title><description><![CDATA[.]]></description><link>https://developer.hpe.com/OSScontribute/</link><guid isPermaLink="false">https://developer.hpe.com/OSScontribute/</guid><content:encoded>&lt;p&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE Developer Slack Channel]]></title><link>https://developer.hpe.com/Slack/</link><guid isPermaLink="false">https://developer.hpe.com/Slack/</guid><content:encoded></content:encoded></item><item><title><![CDATA[X]]></title><description><![CDATA[Text]]></description><link>https://developer.hpe.com/Twitter/</link><guid isPermaLink="false">https://developer.hpe.com/Twitter/</guid><content:encoded>&lt;p&gt;Text&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Be a Blogger]]></title><description><![CDATA[.]]></description><link>https://developer.hpe.com/contribute/</link><guid isPermaLink="false">https://developer.hpe.com/contribute/</guid><content:encoded>&lt;p&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Newsletter]]></title><description><![CDATA[...]]></description><link>https://developer.hpe.com/newsletter/</link><guid isPermaLink="false">https://developer.hpe.com/newsletter/</guid><content:encoded>&lt;p&gt;...&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Open Source]]></title><description><![CDATA[...]]></description><link>https://developer.hpe.com/openSource/</link><guid isPermaLink="false">https://developer.hpe.com/openSource/</guid><content:encoded>&lt;p&gt;...&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Welcome to the HPE Dev Community DevTalks]]></title><description><![CDATA[DevTalks are developer/data scientist focused 1-hour sessions delivered by the HPE Developer Community
every other week on Wednesdays, at…]]></description><link>https://developer.hpe.com/devtalks/</link><guid isPermaLink="false">https://developer.hpe.com/devtalks/</guid><content:encoded>&lt;p&gt;DevTalks are developer/data scientist focused 1-hour sessions delivered by the HPE Developer Community
every other week on Wednesdays, at 5 PM CET.&lt;/p&gt;
&lt;p&gt;Please note that, for now, these talks are open to HPE internal and partner audiences only.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;    Date       &lt;/th&gt;
&lt;th&gt;Title&lt;/th&gt;
&lt;th&gt;Speaker(s)&lt;/th&gt;
&lt;th&gt;Link         &lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;09-Dec-20&lt;/td&gt;
&lt;td&gt;What’s a data fabric and how does it work?&lt;/td&gt;
&lt;td&gt;Ted Dunning&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/qi6sTvu8osk&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;06-Jan-21&lt;/td&gt;
&lt;td&gt;Managing HPE Cloud Volumes with API&lt;/td&gt;
&lt;td&gt;Yann Allandit&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/aReR7DF1iIY&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;20-Jan-21&lt;/td&gt;
&lt;td&gt;Building a dynamic Machine Learning pipeline with KubeDirector&lt;/td&gt;
&lt;td&gt;Denis Choukroun&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/AO0x7pxEw98&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;03-Feb-21&lt;/td&gt;
&lt;td&gt;Advanced PowerShell scripting with HPE OneView PowerShell library&lt;/td&gt;
&lt;td&gt;Dung Hoang Khac&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/qp_gmOj5OX0&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;17-Feb-21&lt;/td&gt;
&lt;td&gt;How to start developing NLP applications&lt;/td&gt;
&lt;td&gt;Rohit Rawat &amp;#x26; Vineet Gundecha&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://vimeo.com/514054456/fc11ffd8cf&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;03-Mar-21&lt;/td&gt;
&lt;td&gt;Delivering the AI Stack for the Modern Enterprise&lt;/td&gt;
&lt;td&gt;Vinod Iyengar (H2O.ai) &amp;#x26;  Ellen Friedman&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/lLxy03I3qrE&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;31-Mar-21&lt;/td&gt;
&lt;td&gt;Industrialising and transforming Data Science scope and scale&lt;/td&gt;
&lt;td&gt;Doug Cackett&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://vimeo.com/532641045/d498467501&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;14-Apr-21&lt;/td&gt;
&lt;td&gt;Delivering hands-on experience with Jupyter Notebooks&lt;/td&gt;
&lt;td&gt;Frédéric Passeron&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/lAlNNUkuPc8?list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;28-Apr-21&lt;/td&gt;
&lt;td&gt;Kubernetes 101&lt;/td&gt;
&lt;td&gt;Didier Lalli / Nigel Poulton&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/PWVJKK1obKQ?list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;12-May-21&lt;/td&gt;
&lt;td&gt;Redfish programming made easy and secure with Ansible and HPE OneView&lt;/td&gt;
&lt;td&gt;François Donzé&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/HEU29-Nf4J8&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;26-May-21&lt;/td&gt;
&lt;td&gt;Simplify GPU Sharing and Utilization with Run:AI on HPE Ezmeral&lt;/td&gt;
&lt;td&gt;Omri Geller, Run:AI&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/3KMdV0CcvRE&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;09-Jun-21&lt;/td&gt;
&lt;td&gt;Deploying end-to-end machine learning workflows​ with HPE Ezmeral ML Ops&lt;/td&gt;
&lt;td&gt;Denis Choukroun&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/MoqQTvwH0p8&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;07-Jul-21&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/7/DevTalk-SmartSim-jul-7-2021.pdf&quot;&gt;SmartSim – Bringing Simulation and AI Together at Scale&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Samuel Partee, Matt Ellis&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/-K5yFCP6wTg&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;21-Jul-21&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/7/DevTalk32-TrilioVault-21-july-2021.pdf&quot;&gt;Recovering and Porting Cloud-Native Applications in the Fast-Paced DevOps World&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Prashanto Kochavara, Trilio&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/wVnDk-JMMwI&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;04-Aug-21&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/8/DevTalk33-Faster-apps-and-app-delivery-with-GraphQL.pdf&quot;&gt;Faster apps and app delivery with GraphQL&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Phil Prasek / Michael Watson, apollographql.com&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/koIPQUK-i6E&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;18-Aug-21&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/8/Sysdig-and-HPE-final.pdf&quot;&gt;Security and Observability for HPE Ezmeral with Sysdig&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Alex Lawrence, Sysdig​&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/nMvrgeQRkEw&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;15-Sep-21&lt;/td&gt;
&lt;td&gt;Learn about the upcoming HPE GreenLake Cloud Services Developer Portal&lt;/td&gt;
&lt;td&gt;Travis Tripp, Pulkit Sindhwani&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://psnow.ext.hpe.com/asset?id=e800d2f9-1e6c-4fd7-aef3-b76c70ca194e&amp;#x26;preview=true&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;29-Sep-21&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/HPE_SmartIO.pdf&quot;&gt;Discover hardware-based networking and security acceleration with HPE Pensando SmartNIC&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Olivier Vallois&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=j45I21Chakg&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;13-Oct-21&lt;/td&gt;
&lt;td&gt;Using the Aruba CX pyaoscx Python SDK&lt;/td&gt;
&lt;td&gt;Alvin Castro&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/vB3DEpRDLOk&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10-Nov-21&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/11/gopaddle-introduction-HPEDev-Final-10-Nov-2021.pdf&quot;&gt;No-Code Acceleration on HPE Ezmeral&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Vinothini Raju, &amp;#x26; Ashvin Kumar, gopaddle&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/2A195hVR5Qk&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;</content:encoded></item><item><title><![CDATA[Get Real with AI Jam Series]]></title><description><![CDATA[AI is now revolutionizing how enterprises create business value from data. New technical capabilities are disrupting infrastructure…]]></description><link>https://developer.hpe.com/get-real-with-ai-jam-series/</link><guid isPermaLink="false">https://developer.hpe.com/get-real-with-ai-jam-series/</guid><content:encoded>&lt;p&gt;AI is now revolutionizing how enterprises create business value from data. New technical capabilities are disrupting infrastructure configurations, business processes, and organizational structures. This is now enabling companies to harness the power and value of their data more efficiently, leading to enhanced decision-making, improved customer experiences, and the creation of new business models.&lt;/p&gt;
&lt;p&gt;This new interactive series is focused on “keeping it real with AI” and will provide you with insights on how to make AI a functional reality in your organization. Mark your calendar to engage with HPE’s leading innovators, developers and architects as they share what they have learned traversing industry hype, while transforming AI into technical and business value.&lt;/p&gt;
&lt;style&gt;
table {
    display: block;
    width: 100%;
    width: max-content;
    max-width: 100%;
    overflow: auto;
     -webkit-box-shadow: none;
    -moz-box-shadow: none;
    box-shadow: none;
}
td {
   -webkit-box-shadow: none;
    -moz-box-shadow: none;
    box-shadow: none;
    border:1px solid grey;
    text-align: left !important;
    padding: 10px !important;
}
thead tr:first-child td {
  -webkit-box-shadow: none;
  -moz-box-shadow: none;
  box-shadow: none;
  border:1px solid grey;
  text-align: center !important;
  padding: 20px !important;
  font-weight: bold !important;
}


&lt;/style&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;     Date     &lt;/th&gt;
&lt;th&gt;Title&lt;/th&gt;
&lt;th&gt;Speaker(s)&lt;/th&gt;
&lt;th&gt;Link&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;06-Aug-25&lt;/td&gt;
&lt;td&gt;Sustainably Scaling AI Adoption&lt;/td&gt;
&lt;td&gt;Audrey Scribner /  Anthony Delli Colli / Jason Zeiler / Lin Nease / Karim Abou Zahab&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=AmL_k-I6dSY&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&amp;#x26;index=1&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;14-May-25&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.us-east-1.amazonaws.com/%5B%23007%5D+AI+Jam+Show+-+Lessons+in+AI+for+Retailers.pdf&quot;&gt;Lessons in AI for retailers: Transforming the shopping experience&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Audrey Scribner /  Anthony Delli Colli / Karen Rhodes / Reginald Height&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=_8gnUunB7bM&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;23-Apr-25&lt;/td&gt;
&lt;td&gt;Developing and deploying AI in the enterprise&lt;/td&gt;
&lt;td&gt;Audrey Scribner /  Anthony Delli Colli / Jimmy Whitaker / Derek Bekebrede / Alejandro Martinez&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=Dau7swlAkJY&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;12-Feb-25&lt;/td&gt;
&lt;td&gt;AI Inferencing at the Edge: Use cases from Earth to Space&lt;/td&gt;
&lt;td&gt;Audrey Scribner / Michael Greene / Mark Fernandez.&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=9QxCAaAKN3c&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;22-Jan-25&lt;/td&gt;
&lt;td&gt;IT New Year’s resolution: Build an ethical and trustworthy AI system&lt;/td&gt;
&lt;td&gt;Audrey Scribner / Pam Wood / Sahand Ghorbanpour / Jeff Oxenberg&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=S9SfJIUgIx4&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;06-Nov-24&lt;/td&gt;
&lt;td&gt;Implementing your AI breakthroughs effectively - The infrastructure to your AI&lt;/td&gt;
&lt;td&gt;Audrey Scribner / Anthony Delli Colli / Shep Bostin / Tyler Britten&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=7pGOYlA5eyI&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;02-Oct-24&lt;/td&gt;
&lt;td&gt;Cool AI Use Case, Now What?&lt;/td&gt;
&lt;td&gt;Audrey Scribner / Hayden Barnes / Chad Smykay / Anthony Delli Colli&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=gxpcBISePhE&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;21-Aug-24&lt;/td&gt;
&lt;td&gt;Exploring Transformative AI Use Cases Across Industries&lt;/td&gt;
&lt;td&gt;Audrey Scribner / Anthony Delli Colli / Mark Perreira / Peter Moser / Travis Tripp&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=XEJqcdWj790&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;</content:encoded></item><item><title><![CDATA[HPE Innovation workshops]]></title><description><![CDATA[The HPE Innovation workshops are an invaluable opportunity for all IT technologists looking to learn how to design, implement, and operate a…]]></description><link>https://developer.hpe.com/hpe-innovation-workshops/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-innovation-workshops/</guid><content:encoded>&lt;p&gt;The HPE Innovation workshops are an invaluable opportunity for all IT technologists looking to learn how to design, implement, and operate a hybrid cloud AI platform.&lt;/p&gt;
&lt;p&gt;One of the first workshops, HPE AI Foundations, shows you how to establish a comprehensive and extensible foundation to empower integration, strategic deployment, responsible data management &amp;#x26; governance, machine &amp;#x26; deep learning, and generative AI.&lt;/p&gt;
&lt;p&gt;During the workshop, you will gain the knowledge and skills necessary to understand HPE GreenLake cloud and the best-of-breed AI ecosystem technologies incorporated within it.&lt;/p&gt;
&lt;style&gt;
table {
    display: block;
    width: 100%;
    width: max-content;
    max-width: 100%;
    overflow: auto;
     -webkit-box-shadow: none;
    -moz-box-shadow: none;
    box-shadow: none;
}
td {
   -webkit-box-shadow: none;
    -moz-box-shadow: none;
    box-shadow: none;
    border:1px solid grey;
    text-align: left !important;
    padding: 10px !important;
}
thead tr:first-child td {
  -webkit-box-shadow: none;
  -moz-box-shadow: none;
  box-shadow: none;
  border:1px solid grey;
  text-align: center !important;
  padding: 20px !important;
  font-weight: bold !important;
}
&lt;/style&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;    Date    &lt;/th&gt;
&lt;th&gt;Title&lt;/th&gt;
&lt;th&gt;Speaker(s)&lt;/th&gt;
&lt;th&gt;Company&lt;/th&gt;
&lt;th&gt;Link&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;13-Aug-24&lt;/td&gt;
&lt;td&gt;HPE AI Foundations Workshop - APAC&lt;/td&gt;
&lt;td&gt;Steven Fatigante&lt;/td&gt;
&lt;td&gt;HPE&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;14-Aug-24&lt;/td&gt;
&lt;td&gt;HPE AI Foundations Workshop - EMEA&lt;/td&gt;
&lt;td&gt;Steven Fatigante&lt;/td&gt;
&lt;td&gt;HPE&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;15-Aug-24&lt;/td&gt;
&lt;td&gt;HPE AI Foundations Workshop - AMA&lt;/td&gt;
&lt;td&gt;Steven Fatigante&lt;/td&gt;
&lt;td&gt;HPE&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;</content:encoded></item><item><title><![CDATA[Welcome to the HPE Developer Community (Links)]]></title><description><![CDATA[We are glad you followed that QR Code link. Now please join the HPE Developer Community using your favourite communication channel:
 On…]]></description><link>https://developer.hpe.com/links/</link><guid isPermaLink="false">https://developer.hpe.com/links/</guid><content:encoded>&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/1/hpedevprogram-qrcode-1584535759606.png&quot; width=&quot;400&quot;&gt;
&lt;p&gt;We are glad you followed that QR Code link. Now please join the HPE Developer Community using your favourite communication channel:
&lt;br /&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;On &lt;a href=&quot;https://www.yammer.com/hpe.com/groups/HPEDeveloperCommunityProgram&quot;&gt;Yammer&lt;/a&gt; (HPE)&lt;/li&gt;
&lt;li&gt;On &lt;a href=&quot;https://www.yammer.com/presalesone/groups/HPEDeveloperCommunity&quot;&gt;Yammer&lt;/a&gt; (Partners)&lt;/li&gt;
&lt;li&gt;On &lt;a href=&quot;https://slack.hpedev.io&quot;&gt;Slack&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;On &lt;a href=&quot;https://twitter.com/HPE_Developer&quot;&gt;Twitter&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;On the &lt;a href=&quot;https://developer.hpe.com&quot;&gt;Web&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;By &lt;a href=&quot;mailto:hpedev@hpe.com&quot;&gt;email&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Subscribe to our monthly &lt;a href=&quot;https://developer.hpe.com/newsletter-signup&quot;&gt;Newsletter&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Register to one of our &lt;a href=&quot;/hackshack/workshops&quot;&gt;Workshops-on-Demand&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;br /&gt;
&lt;p&gt;Thank you!&lt;/p&gt;
&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/rsz_1rsz_stack-waving-1557501164196.png&quot; width=&quot;400&quot;&gt;</content:encoded></item><item><title><![CDATA[Meetups]]></title><description><![CDATA[Connect with the experts to dive deep and learn more about some of today’s most exciting technologies. In these meetups, speakers will…]]></description><link>https://developer.hpe.com/meetups/</link><guid isPermaLink="false">https://developer.hpe.com/meetups/</guid><content:encoded>&lt;p&gt;Connect with the experts to dive deep and learn more about some of today’s most exciting technologies. In these meetups, speakers will discuss specific products in detail and collaborate with you to determine how they may help in your particular situation.&lt;/p&gt;
&lt;p&gt;Hosted by the HPE Developer Community, these meetups are held on a monthly basis at 5PM CET (8AM PST).  Read more about Meetups &lt;a href=&quot;https://developer.hpe.com/blog/new-for-2022-hpe-dev-meetups/&quot;&gt;in this blog&lt;/a&gt; post.&lt;/p&gt;
&lt;style&gt;
table {
    display: block;
    width: 100%;
    width: max-content;
    max-width: 100%;
    overflow: auto;
     -webkit-box-shadow: none;
    -moz-box-shadow: none;
    box-shadow: none;
    border:1px solid grey;
}
td {
   -webkit-box-shadow: none;
    -moz-box-shadow: none;
    box-shadow: none;
    border:1px solid grey;
    text-align: left !important;
    padding: 10px !important;
}
thead tr:first-child td {
  -webkit-box-shadow: none;
  -moz-box-shadow: none;
  box-shadow: none;
  border:1px solid grey;
  text-align: center !important;
  padding: 20px !important;
  font-weight: bold !important;
}
&lt;/style&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;    Date    &lt;/th&gt;
&lt;th&gt;Title&lt;/th&gt;
&lt;th&gt;Speaker(s)&lt;/th&gt;
&lt;th&gt;Company&lt;/th&gt;
&lt;th&gt;Link&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;20-May-26&lt;/td&gt;
&lt;td&gt;Automate what happens next using HPE OpsRamp Process Automation&lt;/td&gt;
&lt;td&gt;Singh S B, Simmaranpal / J C Prasad Vasireddy / Ravi Kiran Srirangam Sadananda&lt;/td&gt;
&lt;td&gt;HPE&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe.zoom.us/webinar/register/3217769572987/WN_COR_W9XMQtq4FYC5JhL__g&quot;&gt;Register&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;08-Apr-26&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.us-east-1.amazonaws.com/HPEDEV+Meetup+26-04+-+Network+Operations+Spotlight+with+PyCentralv2.pdf&quot;&gt;Network Operations Spotlight with PyCentralv2&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Collin Koss / Alvin Castro&lt;/td&gt;
&lt;td&gt;HPE&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=tEQZqb3C98I&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;25-Feb-26&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.us-east-1.amazonaws.com/HPE+Dev+Meetup+Feb+26+-+Introduction+to+HPE+Networking+Central&amp;#x27;s+new+APIs+with+Postman+and+PyCentralv2.pdf&quot;&gt;Introduction to HPE Networking Central&apos;s new APIs with Postman and PyCentralv2&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Alvin Castro / Lucas Voron&lt;/td&gt;
&lt;td&gt;HPE&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=5QFTzwk9VrU&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;17-Dec-25&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.us-east-1.amazonaws.com/WOD_infrastructure_less_than_an_hour.pdf&quot;&gt;Implementing a complete Workshop-on-Demand infrastructure in less than an hour&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Frederic Passeron / Bruno Cornec&lt;/td&gt;
&lt;td&gt;HPE&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=RSt7ewTulPE&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;18-Jun-25&lt;/td&gt;
&lt;td&gt;HPE Compute Ops Management APIs and the DevOps ecosystem: Updates and developer tools&lt;/td&gt;
&lt;td&gt;Lionel Jullien&lt;/td&gt;
&lt;td&gt;HPE&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=TKl_gRJnoxk&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;30-Apr-25&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.us-east-1.amazonaws.com/HPE+PCAI+Webinar+04-30-25.pdf&quot;&gt;HPE Private Cloud AI Technical Demo&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Randy Thomasson&lt;/td&gt;
&lt;td&gt;HPE&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=5uRoHXD2Sks&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;26-Mar-25&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.us-east-1.amazonaws.com/No+More+Blindspots.pdf&quot;&gt;No More Blind Spots: How eBPF Transforms Observability&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Neil Pearson&lt;/td&gt;
&lt;td&gt;HPE&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=qH93yD5KSL8&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;26-Feb-25&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.us-east-1.amazonaws.com/HPE+GreenLake+Cloud+Webhooks.pdf&quot;&gt;Introduction to HPE GreenLake cloud webhooks&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Vandewilly Silva / Roman Nersisyan&lt;/td&gt;
&lt;td&gt;HPE&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=CPiMcIjXyVA&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;29-Jan-25&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.us-east-1.amazonaws.com/HPEDEV_HPE+Private+Cloud+AI+deep+dive.pdf&quot;&gt;Unleashing AI Innovation: A deep dive into the HPE Private Cloud AI Software Stack&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Srikanth Venkata Seshu / Thiru Palanichamy&lt;/td&gt;
&lt;td&gt;HPE&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=6zj7biG74uk&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;18-Dec-24&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.us-east-1.amazonaws.com/LLM+Agentic+Tool+Mesh+%E2%80%93+Democratizing+Gen+AI+-+Meetup+18th+December.pdf&quot;&gt;LLM Agentic Tool Mesh – Democratizing Gen AI&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Antonio Fin / Cong Xu&lt;/td&gt;
&lt;td&gt;HPE&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=vDQAVIuEsVo&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;30-Oct-24&lt;/td&gt;
&lt;td&gt;NVIDIA NIM Agents Blueprints - Do your own AI: Multimodal PDF Data Extraction 101&lt;/td&gt;
&lt;td&gt;Miguel Martinez&lt;/td&gt;
&lt;td&gt;NVIDIA&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;25-Sep-24&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/HPE_SIC_KashishPahwa.pdf&quot;&gt;Exploring the HPE Sustainability Insight Center: Key features, innovations, and API capabilities&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Kevin Tronkowski / Kashish Pahwa / Pallavi Pathak&lt;/td&gt;
&lt;td&gt;HPE&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=jzYjTrnbmsE&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;28-Aug-24&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/Kevin+HPE+Developer+Community+Presentation.pdf&quot;&gt;LLM finetuning for mere mortals&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Kevin Musgrave&lt;/td&gt;
&lt;td&gt;HPE&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=GpjCOrlYt0M&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;31-Jul-24&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/Vendor-Neutral-GPU-Programming-in-Chapel-Meetup-31-July-2024-v1.0.pdf&quot;&gt;Vendor-Neutral GPU Programming in Chapel&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Jade Abraham / Engin Kayraklioglu&lt;/td&gt;
&lt;td&gt;HPE&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=nj-WqhGEy24&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;26-Jun-24&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/Implementing+Centralized+Key+Management+in+HPE+GreenLake+with+Thales+CipherTrust+Hackshack-v1.pdf&quot;&gt;Implementing Centralized Key Management in HPE GreenLake with Thales CipherTrust&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Jeff Ceason&lt;/td&gt;
&lt;td&gt;Thales DIS CPL&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=A77yVzNtIKs&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;29-May-24&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/SlidesMeetup-May24.zip&quot;&gt;Using HPE GreenLake for Red Hat OpenShift to migrate, modernize and run your applications smoothly&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Russell Clark / Ian Lawson&lt;/td&gt;
&lt;td&gt;HPE / Red Hat&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=ScHOpRElCBE&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;24-Apr-24&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/DSCC+API+Family.pdf&quot;&gt;The data services family of APIs for HPE GreenLake – Putting it all together&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Paul Murray / Edgard Lopez / Ronald Dharma&lt;/td&gt;
&lt;td&gt;HPE&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=-ffzhc6TzoA&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;27-Mar-24&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/GLPAPI.pdf&quot;&gt;Enabling business automation using HPE GreenLake Platform foundational APIs&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;HPE Dev Team&lt;/td&gt;
&lt;td&gt;HPE&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/Uv0kKyVfjgg&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;28-Feb-24&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/HPE-Developer-Central.pdf&quot;&gt;Getting Started with HPE Aruba Networking Central automation&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Alvin Castro / Karthik Satheesh Kumar&lt;/td&gt;
&lt;td&gt;HPE&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=R4oo-7ACuLg&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;31-Jan-24&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/Getting_Started_With_HPE_GL_for_ComputeOpsMgmt.pdf&quot;&gt;Getting started with HPE GreenLake for Compute Ops Management APIs&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Sebastian Schagerer&lt;/td&gt;
&lt;td&gt;HPE&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=uwpxzHNXKvE&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;29-Nov-23&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/Dev+Community+Meetup-PCE+DevOps+-+WhiteBackground.pdf&quot;&gt;Automated continuous deployment of container-based applications onto HPE GreenLake for Private Cloud Enterprise&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Pavan Belur Gopalakrishna Upadhya, Ruma Nandi, Sonu Sudhakaran&lt;/td&gt;
&lt;td&gt;HPE&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=WlFohNTqv8w&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;25-Oct-23&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/%5BHPE%5D+Simplifying+containers+and+Kubernetes+on+your+laptop+with+Podman+Desktop.pdf&quot;&gt;Going from containers, to pods, to Kubernetes – help for your developer environments!&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Cedric Clyburn / Evan Shortiss&lt;/td&gt;
&lt;td&gt;Red Hat&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=Hy4MsJgxG8k&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;27-Sep-23&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/Nethopper-AvoidK8sToolSprawlWithKAOPS-092723%5B34%5D.pdf&quot;&gt;DevOps Alert: Tool Sprawl. Complexity. Burnout. Help!&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Chris Munford&lt;/td&gt;
&lt;td&gt;Nethopper&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=zlRJUl_aF04&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;26-Jul-23&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/Observability+in+Action+26th+July+2023.pdf&quot;&gt;Observability in Action&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Neil Pearson&lt;/td&gt;
&lt;td&gt;OpsRamp/HPE&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=ogBb_IPXorM&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;31-May-23&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/HPE+Developer+Meetup+-+May+.pdf&quot;&gt;HPE Machine Learning Development Environment and the open source ML advantage&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Isha Ghodgaonkar&lt;/td&gt;
&lt;td&gt;Determined AI/HPE&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=e8Fv4IWl38s&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;26-Apr-23&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/Divide+and+Conquer+with+Micro-frontends.pdf&quot;&gt;Divide and conquer with Micro Frontends&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Vishal Sharma&lt;/td&gt;
&lt;td&gt;ThoughtWorks&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=LbCw7Z7KT1U&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;29-Mar-23&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/HPE+Postman+Intro.pdf&quot;&gt;Collaborating at scale with Postman&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Bryan Cross&lt;/td&gt;
&lt;td&gt;Postman&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=LuXNpIEzYgg&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;22-Feb-23&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/HPE+Dev+meetup+Feb+23+-+Project+Galadriel.pdf&quot;&gt;Galadriel - An alternative approach to SPIRE federation&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Maximiliano Churichi / Max Lambrecht&lt;/td&gt;
&lt;td&gt;HPE&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=IfaBCBXJBdw&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;25-Jan-23&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/iac-developer-2023.pdf&quot;&gt;HPE GreenLake and Infrastructure-as-Code (IaC)&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Eamonn O’Toole&lt;/td&gt;
&lt;td&gt;HPE&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=zUo8Ag2IXqk&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;17-Jan-23&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/HPE+Ezmeral+Unified+Analytics_HPEDEV+-+Read-Only.pdf&quot;&gt;HPE Ezmeral Unified Analytics&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Matt Morris&lt;/td&gt;
&lt;td&gt;HPE&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=1Z4fNOHGYlk&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&amp;#x26;index=1&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;14-Dec-22&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/Part+2_Running+Reliable+systems_SLO+Math.pdf&quot;&gt;Running reliable systems - Part 2: Service level objective (SLO) Math&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Leonid Yankulin&lt;/td&gt;
&lt;td&gt;Google Cloud&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=ZDxptOGs-ow&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&amp;#x26;index=1&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;7-Dec-22&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/Part+1_+Running+Reliable+systems_+SRE+Overview.pdf&quot;&gt;Running reliable systems - Part 1: An Overview of Site Reliability Engineering (SRE)&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Leonid Yankulin&lt;/td&gt;
&lt;td&gt;Google Cloud&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=XhhqEjUaLjE&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;30-Nov-22&lt;/td&gt;
&lt;td&gt;Introduction to Kubeflow&lt;/td&gt;
&lt;td&gt;Souheil Inati / Chase Christensen&lt;/td&gt;
&lt;td&gt;Arrikto&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;26-Oct-22&lt;/td&gt;
&lt;td&gt;Boost Spark AI workloads with Pepperdata&lt;/td&gt;
&lt;td&gt;Heidi Carson / Kirk Lewis&lt;/td&gt;
&lt;td&gt;Pepperdata&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=N36DTliNmck&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&amp;#x26;index=1&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;28-Sep-22&lt;/td&gt;
&lt;td&gt;Machine Learning Data Version Control (DVC): Reproducibility and Collaboration in your ML Projects&lt;/td&gt;
&lt;td&gt;Alex Kim&lt;/td&gt;
&lt;td&gt;Iterative&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=sgkN09LkCP4&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&amp;#x26;index=1&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;31-Aug-22&lt;/td&gt;
&lt;td&gt;OpenTelemetry: Getting Started and The Road to Production&lt;/td&gt;
&lt;td&gt;Michael Haberman&lt;/td&gt;
&lt;td&gt;Aspecto&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=odi9isyZOrU&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&amp;#x26;index=1&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;27-Jul-22&lt;/td&gt;
&lt;td&gt;Finding vulnerabilities in production with open source ThreatMapper&lt;/td&gt;
&lt;td&gt;Owen Garrett&lt;/td&gt;
&lt;td&gt;Deepfence&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=r62VLwT6w3Y&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&amp;#x26;index=1&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;22-Jun-22&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/opensearch-project.pdf&quot;&gt;OpenSearch – The open-source search and analytics suite you can run yourself&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;David Tippett&lt;/td&gt;
&lt;td&gt;AWS&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=KdssEOIdO_0&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&amp;#x26;index=1&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;25-May-22&lt;/td&gt;
&lt;td&gt;Scaling Language Training to Trillion-parameter Models on a GPU Cluster&lt;/td&gt;
&lt;td&gt;Evan Sparks&lt;/td&gt;
&lt;td&gt;Hewlett Packard Enterprise&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=rIPqCvvMmms&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&amp;#x26;index=1&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;27-Apr-22&lt;/td&gt;
&lt;td&gt;Decoupled policy enforcement with Open Policy Agent&lt;/td&gt;
&lt;td&gt;Anders Eknert&lt;/td&gt;
&lt;td&gt;Styra&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=_0XJnr8U0sU&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&amp;#x26;index=1&amp;#x26;t=15s&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;30-Mar-22&lt;/td&gt;
&lt;td&gt;HPE + vFunction: Modernizing Legacy Applications and Data Sources Faster&lt;/td&gt;
&lt;td&gt;Samantha Cartwright / Amir Rapson&lt;/td&gt;
&lt;td&gt;vFunction&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=UvcyIjzml7s&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&amp;#x26;index=1&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;23-Feb-22&lt;/td&gt;
&lt;td&gt;Streamlit - The fastest way to build and share data science apps&lt;/td&gt;
&lt;td&gt;Arnaud Miribel&lt;/td&gt;
&lt;td&gt;Streamlit.io&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/sdgTYy3BJiM&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;26-Jan-22&lt;/td&gt;
&lt;td&gt;Quarkus - Supersonic Subatomic Java&lt;/td&gt;
&lt;td&gt;Dimitris Andreadis&lt;/td&gt;
&lt;td&gt;Red Hat&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=mY1z9OC0y54&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;</content:encoded></item><item><title><![CDATA[Munch & Learn Technology Talks]]></title><description><![CDATA[Join our free, one-hour webinars to hear from and engage with renowned technologists as they share thought-leadership insights on popular…]]></description><link>https://developer.hpe.com/munch-and-learn/</link><guid isPermaLink="false">https://developer.hpe.com/munch-and-learn/</guid><content:encoded>&lt;p&gt;Join our free, one-hour webinars to hear from and engage with renowned technologists as they share thought-leadership insights on popular HPE and open source technologies. Get your questions answered while sharing a recipe or two.&lt;/p&gt;
&lt;p&gt;Hosted by the HPE Developer Community, these gatherings are held monthly at 5PM CET (8AM PST). You can read more about Munch &amp;#x26; Learn Technology Talks &lt;a href=&quot;https://developer.hpe.com/blog/hpe-dev-launches-its-munch-learn-technical-talks&quot;&gt;in this blog post&lt;/a&gt;.&lt;/p&gt;
&lt;style&gt;
table {
    display: block;
    width: max-content !important;
    max-width: 100%;
    overflow: auto;
     -webkit-box-shadow: none;
    -moz-box-shadow: none;
    box-shadow: none;
    border:1px solid grey;
}
td {
   -webkit-box-shadow: none;
    -moz-box-shadow: none;
    box-shadow: none;
    border:1px solid grey;
    text-align: left !important;
     font-weight: normal !important;
    padding: 10px !important;
}
thead tr:first-child td {
  -webkit-box-shadow: none;
  -moz-box-shadow: none;
  box-shadow: none;
  border:1px solid grey;
  text-align: center !important;
  padding: 20px !important;
  font-weight: bold !important;
}


&lt;/style&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;       Date &lt;/th&gt;
&lt;th&gt;Title&lt;/th&gt;
&lt;th&gt;Speaker(s)&lt;/th&gt;
&lt;th&gt;Link&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;18-Mar-26&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.us-east-1.amazonaws.com/munch-and-learn_Open-Source-at-HPE_Three+Pillars.pdf&quot;&gt;Open Source at HPE: Three Pillars, Many Paths&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Ria Farrell Schalnat / Alvin Castro / Dr. Andrew Shao / Shaun Tancheff&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=c3L13ELQ51o&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;28-Jan-26&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.us-east-1.amazonaws.com/Munch-and-Learn-GreenLake-MCP-Jan2026.pdf&quot;&gt;Work Smarter: Natural Language Access to your infrastructure via GreenLake MCP&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Vandewilly Silva / Roman Nersyan&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=-RYmpXaGl2o&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;21-May-25&lt;/td&gt;
&lt;td&gt;HPE Sustainability Insight Center: A Win for both Business and the Environment&lt;/td&gt;
&lt;td&gt;Kashish Pahwa / Kevin Tronkowski&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=1H0M-c7g744&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;16-Apr-25&lt;/td&gt;
&lt;td&gt;DayN+ - A new way to look at observability&lt;/td&gt;
&lt;td&gt;Raj Mistry&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=nDa_NQPbbVY&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;19-Mar-25&lt;/td&gt;
&lt;td&gt;ChatHPE Hub: Enabling Secure and Scalable AI Transformation at HPE&lt;/td&gt;
&lt;td&gt;Jose Mejias / Manmeet Dhillon&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=cAXwX3Yb62c&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&amp;#x26;index=1&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;19-Feb-25&lt;/td&gt;
&lt;td&gt;Unlocking Private AI Power: Insurance Fraud Detection and Beyond&lt;/td&gt;
&lt;td&gt;Jordan Nanos&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=BYzF2Twg8UY&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;15-Jan-25&lt;/td&gt;
&lt;td&gt;From log files to AI insights: The 60-year evolution of observability and AIOps&lt;/td&gt;
&lt;td&gt;Neil Pearson&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=VnKSnf7G4-4&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;20-Nov-24&lt;/td&gt;
&lt;td&gt;How to fix your biggest security hole&lt;/td&gt;
&lt;td&gt;Ted Dunning&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=mgla3ovDlXA&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;16-Oct-24&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.us-east-1.amazonaws.com/Munch_and_Learn_Classiq_HPE_final.pdf&quot;&gt;Hybrid Classical-Quantum Workflows on HPE Supercomputers&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Aniello Esposito / Erik Garcell / Tamuz Danzig&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=dO_5qXqfu5o&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;18-Sep-24&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/RAG+-Munch+%26+Learn.pptx.zip&quot;&gt;Enhancing NLP with Retrieval-Augmented Generation: A Practical Demonstration&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Rajesh P E / Umang Kedia&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=TQPvK-pFAD8&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;21-Aug-24&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/August+21+2024+AI+Munch+and+Learn.pdf&quot;&gt;Exploring Transformative AI Use Cases Across Industries&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Panel Discussion&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=XEJqcdWj790&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;12-Jun-24&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/M%26L+Data+Center+Digital+Twins_June_2024+rev+01.pdf&quot;&gt;How digital twins help companies reach their sustainability goals&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Cullen Bash / Gallig Renaud&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=WUlH17ruIek&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;17-Apr-24&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/HPE+Developer+Munch+%26+Learn+-+The+Transformative+Impact+of+Generative+AI+on+Telco+Products+-+April+2024.pdf&quot;&gt;The Transformative Impact of Generative AI on Telco Products&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Antonio Fin&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=61ghjMnc-JU&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;20-Mar-24&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/HackShack-HPE-Trust-ML.pdf&quot;&gt;Learn how AI hackers detect fragility and how to thwart them with AI model resilience&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Soumyendu Sarkar / Ashwin Ramesh Babu / Sajad Mousavi&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=CMASNlKuTao&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;21-Feb-24&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/SecureGenAIAdoption.pdf&quot;&gt;Secure GenAI Adoption for all!&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Glyn Bowden / Tom Phelan / Saad Zaher&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=FSEMz8fvpYE&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;24-Jan-24&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/munch-n-learn_build_private_cloud_HPE_GL_edge-to-cloud-Platform.pdf&quot;&gt;Using HPE GreenLake edge-to-cloud platform to build a private cloud&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Don Wake&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=Rm1z2pHtyw0&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;13-Dec-23&lt;/td&gt;
&lt;td&gt;Exploring HPE GreenLake Platform APIs through use cases&lt;/td&gt;
&lt;td&gt;Pallavi Pathak&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=IM7W89Vt7zQ&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;15-Nov-23&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/Inference-Optimization-HPEDevelopers-Nov15-2023.pdf&quot;&gt;Optimizing deep neural network inference workloads&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Hana Malha / Lindsey Hillesheim&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=Ck5dVgp68uA&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;18-Oct-23&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/The+Open+Source+advantage+Exploring+Machine+Learning+Through+Thought+Leadership.pdf&quot;&gt;The open-source advantage: Exploring machine learning through thought leadership&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Amber Graner / Chase Christensen&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=BgirPJNDtxs&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;20-Sep-23&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/Linux_Trends_07.pdf&quot;&gt;State of the Nation – Linux distributions&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Craig Lamparter&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=dFYLyy7oL-Q&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&amp;#x26;index=1&amp;#x26;t=3s&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;14-Jun-23&lt;/td&gt;
&lt;td&gt;Digital Twins, the Metaverse, and Augmented Reality: Developer Insights and IT Foundations for Immersive Technologies Powered by AI&lt;/td&gt;
&lt;td&gt;James Hopfenblatt / Marcus Bonner / Garry Orsolini&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=T1aWHB0-4kA&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;24-May-23&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/ChatGPTCollaboration.pdf&quot;&gt;A New Era of Software Development: Embracing Large Language Model Tools like ChatGPT for Iterative Problem Solving&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Jeff Fougere / Andrew Nieuwsma / Jim Schreckengast&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=zAm5CpOHfH4&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;19-Apr-23&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/Tech+for+Good+presentation.pdf&quot;&gt;Leveraging Tech to Address Global Challenges &amp;#x26; Health&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Fred Tan&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=Wu04-dz81Pc&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;15-Mar-23&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/BRESNIKER+-+Dev+Community+-+Extra.pdf&quot;&gt;Quantum computing insights from HPE Labs: Extraordinary Claims Require Extraordinary Engineering (and TAMO isn’t an option)&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Kirk Bresniker&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=wVY7uZstDWA&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&amp;#x26;index=1&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;15-Feb-23&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/2023_OSS_MunchandLearn.pdf&quot;&gt;Accelerating scientific research through high performance computing democratization&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Andrew Shao / Scott Bachman&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=DnmhTj1PVIU&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;16-Nov-22&lt;/td&gt;
&lt;td&gt;Calling all citizen developers: Can low-code platforms accelerate your impact?&lt;/td&gt;
&lt;td&gt;Jeffrey Fougere / Richard Kerridge / Colin Lue King&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=zc_54fq8PoY&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&amp;#x26;index=1&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;19-Oct-22&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/HPE_Munch%26Learn_Sustainability_final.pdf&quot;&gt;HPE Sustainability Strategy and Sustainability Research at Hewlett Packard Labs&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Tiffani Jarnigan / Cullen Bash&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=SUgdVsncWrk&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&amp;#x26;index=1&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;21-Sep-22&lt;/td&gt;
&lt;td&gt;Accelerate public sector AI use cases using a powerful ML Ops platform&lt;/td&gt;
&lt;td&gt;François Réthoré / Dietrich Zinsou&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=5pejLKu32Js&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&amp;#x26;index=1&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;17-Aug-22&lt;/td&gt;
&lt;td&gt;Machines learn from data to be artificially intelligent&lt;/td&gt;
&lt;td&gt;Dr. Eng Lim Goh&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/3KOFDciS3WU&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&amp;#x26;index=1&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;18-May-22&lt;/td&gt;
&lt;td&gt;Why Open Source is more than Software: The example of The Linux Foundation&apos;s AgStack project&lt;/td&gt;
&lt;td&gt;Ted Dunning / Sumer Johal&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=dnhjRF5dr6M&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&amp;#x26;index=1&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;20-Apr-22&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/ChapelForHPEMunchAndLearn.pdf&quot;&gt;Chapel: Making parallel computing as easy as Py(thon), from laptops to supercomputers&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Brad Chamberlain&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=7Qk8T7_bevo&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&amp;#x26;index=1&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;23-Mar-22&lt;/td&gt;
&lt;td&gt;Mithril: Introducing Robust Identities into Istio by integrating with SPIRE&lt;/td&gt;
&lt;td&gt;Praneetha Manthravadi / Mithril engineering team&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/xhd8MhG4Vvw&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;16-Feb-22&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/munch-and-learn-feb-2022.pdf&quot;&gt;Golden Age of AI, Dark Ages of AI Infrastructure&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Neil Conway&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/ktZFLD-9qgw&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;19-Jan-22&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/JanuaryMunchAndLearn.zip&quot;&gt;Location, location, location!  Succeed at the Edge with HPE Ezmeral and NVIDIA&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Denis Vilfort / William Benton&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=C5HfiLatauQ&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;08-Dec-21&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/DecemberMunchAndLearn-Jeff.pdf&quot;&gt;Redfish: Past, Present and Future&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Jeff Hilland / Jeff Autor / François Donzé&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=Q1Qeb24lpKg&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;17-Nov-21&lt;/td&gt;
&lt;td&gt;The Great Unification: Building Analytic pipelines with Apache Spark Workloads&lt;/td&gt;
&lt;td&gt;Matt Hausmann / Donald (Don) Wake / Matthew Morris&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/TxZP_T9CC5Y&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;22-Sep-21&lt;/td&gt;
&lt;td&gt;Digital Transformation.Next: Data &amp;#x26; Analytics Workloads&lt;/td&gt;
&lt;td&gt;Matt Maccaux / Randy Thomasson&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/Q4kJKCS7rbo&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;25-Aug-21&lt;/td&gt;
&lt;td&gt;Kubernetes 101&lt;/td&gt;
&lt;td&gt;Nigel Poulton / Didier Lalli&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/PWVJKK1obKQ&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;28-Jul-21&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/7/HPE-Munch-and-Learn-7-28-july-2021.pdf&quot;&gt;How to make data consumable for real-world data science&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Ellen Friedman / Ted Dunning&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/4WKjRqflF7M&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;30-Jun-21&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/4/fundamentals-of-microservices-1625131973756.pdf&quot;&gt;Microservices Architecture 101&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Owen Garrett&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/qyyxQU37ZyQ&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;19-May-21&lt;/td&gt;
&lt;td&gt;Data Science Unplugged: Part 2&lt;/td&gt;
&lt;td&gt;Doug Cackett / Ellen Friedman&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/Va4tSr__Yok&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;21-Apr-21&lt;/td&gt;
&lt;td&gt;Building a foundation for zero trust with SPIFFE&lt;/td&gt;
&lt;td&gt;Daniel Feldman / Frederick Kautz&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/G1ceKr16nn8&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;24-Mar-21&lt;/td&gt;
&lt;td&gt;Data Science Unplugged: Part 1&lt;/td&gt;
&lt;td&gt;Doug Cackett / Ellen Friedman&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/Inh6eXM0EbA&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;24-Feb-21&lt;/td&gt;
&lt;td&gt;Explore Containerization and MLOps&lt;/td&gt;
&lt;td&gt;Tom Phelan / Nigel Poulton&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/9PvKpe7yMpI&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;27-Jan-21&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/munch-and-learn-dunning-1611939333032.pdf&quot;&gt;What’s a data fabric and how does it work?&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Ted Dunning / Ellen Friedman&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/qi6sTvu8osk&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;</content:encoded></item><item><title><![CDATA[HPE GreenLake Developer Portal]]></title><description><![CDATA[Discover the unified HPE GreenLake experience through well-documented, secure, and scalable APIs.]]></description><link>https://developer.hpe.com/01-hpe-greenlake/</link><guid isPermaLink="false">https://developer.hpe.com/01-hpe-greenlake/</guid><content:encoded>&lt;p&gt;Discover the unified HPE GreenLake experience through well-documented, secure, and scalable APIs.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Missed a technology talk? Get the replay]]></title><description><![CDATA[Browse our YouTube channel to learn new things and expand your skill set.]]></description><link>https://developer.hpe.com/02-hpe-greenlake-testdrive/</link><guid isPermaLink="false">https://developer.hpe.com/02-hpe-greenlake-testdrive/</guid><content:encoded>&lt;p&gt;Browse our YouTube channel to learn new things and expand your skill set.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE AI Developer Days]]></title><description><![CDATA[HPE Developer Day is a focused one-day event designed to help you move from AI ambition to real-world results.
Check it out!]]></description><link>https://developer.hpe.com/03-role/</link><guid isPermaLink="false">https://developer.hpe.com/03-role/</guid><content:encoded>&lt;p&gt;HPE Developer Day is a focused one-day event designed to help you move from AI ambition to real-world results.
Check it out!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Meetups]]></title><description><![CDATA[Connect with the experts to dive deep and learn more about some of today’s most exciting technologies.]]></description><link>https://developer.hpe.com/05-meetups/</link><guid isPermaLink="false">https://developer.hpe.com/05-meetups/</guid><content:encoded>&lt;p&gt;Connect with the experts to dive deep and learn more about some of today’s most exciting technologies.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Munch & Learn Technology Talks]]></title><description><![CDATA[Attend our monthly community meetups where you can interact with experts regarding popular new technologies and get your questions answered.]]></description><link>https://developer.hpe.com/04-munch-learn-technology-talks/</link><guid isPermaLink="false">https://developer.hpe.com/04-munch-learn-technology-talks/</guid><content:encoded>&lt;p&gt;Attend our monthly community meetups where you can interact with experts regarding popular new technologies and get your questions answered.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Workshops-on-Demand]]></title><description><![CDATA[Familiarize yourself with popular HPE and open source technologies through on-demand technical training. These workshops use Jupyter…]]></description><link>https://developer.hpe.com/06-workshops-on-demand/</link><guid isPermaLink="false">https://developer.hpe.com/06-workshops-on-demand/</guid><content:encoded>&lt;p&gt;Familiarize yourself with popular HPE and open source technologies through on-demand technical training. These workshops use Jupyter Notebooks to give you a unique, hands-on learning experience.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Implementing a local LLM using S3-based model storage and vLLM in HPE Private Cloud AI]]></title><description><![CDATA[Deploying a local Large Language Model (LLM) architecture using S3‑compatible object storage and vLLM as the inference engine provides a…]]></description><link>https://developer.hpe.com/07-spiffe-spire-graduates-enabling-greater-security-solutions/</link><guid isPermaLink="false">https://developer.hpe.com/07-spiffe-spire-graduates-enabling-greater-security-solutions/</guid><content:encoded>&lt;p&gt;Deploying a local Large Language Model (LLM) architecture using S3‑compatible object storage and vLLM as the inference engine provides a scalable, cost‑efficient, and secure foundation for enterprise AI adoption.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Hack Shack]]></title><description><![CDATA[Explore our Virtual Experience to learn about different technologies and have some fun in the process.]]></description><link>https://developer.hpe.com/08-hpe-hackshack/</link><guid isPermaLink="false">https://developer.hpe.com/08-hpe-hackshack/</guid><content:encoded>&lt;p&gt;Explore our Virtual Experience to learn about different technologies and have some fun in the process.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[SPIFFE and SPIRE Projects]]></title><description><![CDATA[Hewlett Packard Enterprise (HPE) is the leading contributor to Cloud Native Computing Foundation’s (CNCF) SPIFFE and SPIRE open source…]]></description><link>https://developer.hpe.com/09-featured-platform-1/</link><guid isPermaLink="false">https://developer.hpe.com/09-featured-platform-1/</guid><content:encoded>&lt;p&gt;Hewlett Packard Enterprise (HPE) is the leading contributor to Cloud Native Computing Foundation’s (CNCF) SPIFFE and SPIRE open source 
projects.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Determined AI]]></title><description><![CDATA[Determined AI accelerates innovation with open source AI solutions to build and train models faster and easier.]]></description><link>https://developer.hpe.com/10-featured-platform-2/</link><guid isPermaLink="false">https://developer.hpe.com/10-featured-platform-2/</guid><content:encoded>&lt;p&gt;Determined AI accelerates innovation with open source AI solutions to build and train models faster and easier.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Home]]></title><description><![CDATA[HPE Developer Community Where you’ll find all things software at HPE. Join us to collaborate and build applications and integrations with…]]></description><link>https://developer.hpe.com/</link><guid isPermaLink="false">https://developer.hpe.com/</guid><content:encoded>&lt;h1&gt;HPE Developer Community&lt;/h1&gt;
&lt;p&gt;Where you’ll find all things software at HPE. Join us to collaborate and build applications and integrations with HPE products using the latest software and open source technologies.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE Learn On-Demand]]></title><description><![CDATA[ok]]></description><link>https://developer.hpe.com/EzmeralLearnOnDemand/</link><guid isPermaLink="false">https://developer.hpe.com/EzmeralLearnOnDemand/</guid><content:encoded>&lt;p&gt;ok&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Munch & Learn Technology Talks]]></title><description><![CDATA[Join our free, one-hour webinars to hear from and engage with renowned technologists as they share thought-leadership insights on popular…]]></description><link>https://developer.hpe.com/MunchandLearn/</link><guid isPermaLink="false">https://developer.hpe.com/MunchandLearn/</guid><content:encoded>&lt;p&gt;Join our free, one-hour webinars to hear from and engage with renowned technologists as they share thought-leadership insights on popular HPE and open source technologies.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Workshops-on-Demand]]></title><link>https://developer.hpe.com/Workshops-on-Demand/</link><guid isPermaLink="false">https://developer.hpe.com/Workshops-on-Demand/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Meetups]]></title><description><![CDATA[Connect with the experts to dive deep and learn more about some of today’s most exciting technologies.]]></description><link>https://developer.hpe.com/skillup-1/</link><guid isPermaLink="false">https://developer.hpe.com/skillup-1/</guid><content:encoded>&lt;p&gt;Connect with the experts to dive deep and learn more about some of today’s most exciting technologies.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Get Real with AI – Jam Series]]></title><description><![CDATA[This informative series will provide you with insights on how to make AI a functional reality in your organization, transforming industry…]]></description><link>https://developer.hpe.com/skillup-2/</link><guid isPermaLink="false">https://developer.hpe.com/skillup-2/</guid><content:encoded>&lt;p&gt;This informative series will provide you with insights on how to make AI a functional reality in your organization, transforming industry hype into technical and business value.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE GreenLake Test Drive]]></title><description><![CDATA[Done]]></description><link>https://developer.hpe.com/skillup-3/</link><guid isPermaLink="false">https://developer.hpe.com/skillup-3/</guid><content:encoded>&lt;p&gt;Done&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE Digital Learner]]></title><description><![CDATA[Unlock unlimited access to strategically curated training on HPE technologies, IT, business, and power skills.
Learn continuously. Lead…]]></description><link>https://developer.hpe.com/skillup-4/</link><guid isPermaLink="false">https://developer.hpe.com/skillup-4/</guid><content:encoded>&lt;p&gt;Unlock unlimited access to strategically curated training on HPE technologies, IT, business, and power skills.
Learn continuously. Lead confidently.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Missed a session? - Get the replay]]></title><description><![CDATA[Browse replays of our most successful technology talks.]]></description><link>https://developer.hpe.com/skillup/</link><guid isPermaLink="false">https://developer.hpe.com/skillup/</guid><content:encoded>&lt;p&gt;Browse replays of our most successful technology talks.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Cloud Architect]]></title><description><![CDATA[In a highly distributed environment with diverse needs, cloud architects today are challenged to design, support and maintain a hybrid cloud…]]></description><link>https://developer.hpe.com/cloud-architect/home/</link><guid isPermaLink="false">https://developer.hpe.com/cloud-architect/home/</guid><content:encoded>&lt;style&gt;
.row {
  display: grid;
	grid-template-columns: 1fr 1fr;
  column-gap: 50px;
}
&lt;/style&gt;
&lt;p&gt;In a highly distributed environment with diverse needs, cloud architects today are challenged to design, support, and maintain a hybrid cloud infrastructure that delivers maximum efficiency and cost-effectiveness. On this page, we focus on the HPE GreenLake edge-to-cloud platform and provide practical tips to address key issues.&lt;/p&gt;
&lt;br&gt;
&lt;hr style=&quot;background: #7630EA; height: 5px; border: none&quot;&gt;
&lt;br&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
    &lt;h3&gt;Get Inspired&lt;/h3&gt;
    &lt;p&gt;&lt;strong&gt;Customize solutions to your specific needs:&lt;/strong&gt;&lt;/p&gt;
    &lt;p&gt;&lt;em&gt;In an Everything-as-a-Service model.&lt;/em&gt;&lt;/p&gt;
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
    &lt;h3&gt;Building a Foundation&lt;/h3&gt;
    &lt;p&gt;&lt;strong&gt;Today’s cloud strategy&lt;/strong&gt;&lt;/p&gt;
    &lt;p&gt;&lt;em&gt;A mix of public cloud, private cloud, and legacy infrastructure&lt;/em&gt;&lt;/p&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
    &lt;ul&gt;
    &lt;li&gt;See how you can &lt;a href=&quot;/blog/hpe-ezmeral-early-access-gives-developers-hands-on-experience-with-the-new-hpe-greenlake-for-data-fabric/&quot;&gt;get hands-on experience with the new HPE GreenLake for Data Fabric&lt;/a&gt;&lt;/li&gt;
    &lt;li&gt;Learn how to &lt;a href=&quot;/blog/curate-and-expose-service-catalog-items-using-hpe-greenlake-for-private-cloud/&quot;&gt;curate and expose service catalog items using HPE GreenLake for private cloud&lt;/a&gt;&lt;/li&gt;
    &lt;li&gt;Discover how you can &lt;a href=&quot;/blog/mlops-–-deploying-an-ml-model-in-greenlake-platform-mlops-service/&quot;&gt;deploy an ML model in the HPE GreenLake Platform ML Ops service&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
    &lt;ul&gt;
    &lt;li&gt;Explore the service &lt;a href=&quot;/greenlake/hpe-greenlake-for-compute-ops-management/home/&quot;&gt;HPE GreenLake for Compute Ops Management&lt;/a&gt;&lt;/li&gt;
    &lt;li&gt;Learn about &lt;a href=&quot;/greenlake/data-services-cloud-console/home/&quot;&gt;Data Services Cloud Console&lt;/a&gt;&lt;/li&gt;
    &lt;li&gt;Learn to work with &lt;a href=&quot;/greenlake/aruba-central/home/&quot;&gt;HPE Aruba Central&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;br&gt;
&lt;hr style=&quot;background: #33DAC8; height: 5px; border: none&quot;&gt;
&lt;br&gt;
&lt;h3&gt;Addressing Key Concerns&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Supporting an increasingly distributed workforce and data&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Learn about &lt;a href=&quot;https://www.hpe.com/us/en/newsroom/press-release/2022/12/hpe-greenlake-adds-application-analytics-and-developer-services-to-modernize-workloads-across-the-hybrid-cloud.html&quot;&gt;new applications and developer services&lt;/a&gt; that drive a data-first modernization strategy from edge to cloud&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;See how &lt;a href=&quot;https://community.hpe.com/t5/the-cloud-experience-everywhere/modern-private-cloud-made-easy-hpe-unveils-hpe-greenlake-for/ba-p/7169009#.Y9tUKOzMJGM&quot;&gt;HPE GreenLake for Private Cloud Enterprise makes a modern private cloud easy&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Explore the industry’s first &lt;a href=&quot;/greenlake/hpe-greenlake-for-data-fabric/home/&quot;&gt;hybrid analytics-ready data fabric&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Dealing with networking concerns&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Read how &lt;a href=&quot;https://community.arubanetworks.com/community-home/digestviewer/viewthread?GroupId=67&amp;#x26;MessageKey=c59874f5-eab5-44e4-b7c0-59e7762410e8&amp;#x26;CommunityKey=e1202040-11b3-4eea-9f57-d903f67db2f9&quot;&gt;Aruba Central is now part of HPE GreenLake&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Learn &lt;a href=&quot;/blog/how-to-create-a-virtual-network-in-hpe-greenlake-for-private-cloud/&quot;&gt;how to create a virtual network in HPE GreenLake for Private Cloud&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Secure connections by &lt;a href=&quot;/blog/configuring-azure-ad-with-greenlake-cloud-platform-and-aruba-central/&quot;&gt;configuring Azure AD as the SAML IDP with HPE GreenLake Cloud Platform and Aruba Central&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Addressing security and access management&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;See &lt;a href=&quot;/blog/how-to-use-an-api-access-token-for-hpe-greenlake-for-compute-ops-management/&quot;&gt;how to use an API access token for HPE GreenLake for Compute Ops Management&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Read &lt;a href=&quot;/blog/understanding-hpe-greenlake-identity-access-management/&quot;&gt;Understanding HPE GreenLake Central Identity &amp;#x26; Access Management&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Explore the CNCF &lt;a href=&quot;/platform/spiffe-and-spire-projects/home/&quot;&gt;SPIFFE and SPIRE projects&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Watch a video explaining how to &lt;a href=&quot;https://www.hpe.com/h22228/video-gallery/us/en/10b3350c-52fe-45ca-9232-71ddc7185c77/video?jumpId=in_videogallery_e8826c6e-76e3-4c2e-b751-a257a47788f5_gaiw&quot;&gt;access iLO through HPE GreenLake Compute Ops Management&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;/blog/api-console-for-data-services-cloud-console/&quot;&gt;Use HPE GreenLake Console’s API Gateway for Data Services Cloud Console&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Automation and orchestration of cloud infrastructure&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Explore automation through &lt;a href=&quot;/blog/infrastructure-as-code-on-hpe-greenlake-using-terraform/&quot;&gt;HPE GreenLake and Infrastructure-as-code using Terraform&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;See &lt;a href=&quot;/blog/how-to-integrate-chef-automation-with-hpe-greenlake-for-private-cloud/&quot;&gt;how to integrate Chef automation with HPE GreenLake for private cloud&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Controlling costs and managing cloud governance&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Check out &lt;a href=&quot;https://docs.consumption.support.hpe.com/HPE_GreenLake_Billing_Docs/HPE_GreenLake_Billing_User_Guide/Viewing_monthly_usage_and_charges&quot;&gt;HPE GreenLake Usage and Analytics documentation&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Refer to the &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00092451en_us&amp;#x26;page=HPE-Consumption-Analytics.html&quot;&gt;HPE GreenLake Cloud Consumption Analytics User Guide&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Monitoring and observability&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;See how &lt;a href=&quot;https://www.hpe.com/us/en/newsroom/blog-post/2022/03/unified-server-management-from-edge-to-cloud-drives-greater-performance-and-efficiency.html&quot;&gt;unified server management from edge-to-cloud drives greater performance and efficiency&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Learn &lt;a href=&quot;/blog/how-to-monitor-hpe-compute-ops-management/&quot;&gt;how to monitor HPE GreenLake for Compute Ops Management infrastructure with Grafana Metrics Dashboards&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Read &lt;a href=&quot;/blog/greenlake-platform-infrastructure-monitoring-with-apache-skywalking/&quot;&gt;HPE GreenLake for Private Cloud Enterprise monitoring with Apache SkyWalking and OpenTelemetry&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
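&lt;p&gt;As a taste of what feeding such dashboards involves, here is a tiny, generic Python sketch that exposes a custom metric over HTTP for a Prometheus/Grafana stack to scrape. It uses the prometheus_client library, and the metric name and data source are invented for illustration; the linked article describes the actual Compute Ops Management setup.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Generic sketch: expose a custom metric for a Prometheus/Grafana stack to scrape.
# The metric name and the source of its value are invented for illustration.
import random
import time

from prometheus_client import Gauge, start_http_server

# Hypothetical gauge tracking how many managed servers currently report healthy.
healthy_servers = Gauge(&quot;example_healthy_servers&quot;, &quot;Number of servers reporting healthy&quot;)


def collect_once() -&gt; None:
    # In a real exporter this value would come from a management API rather than random data.
    healthy_servers.set(random.randint(40, 50))


if __name__ == &quot;__main__&quot;:
    start_http_server(8000)  # Prometheus scrapes http://localhost:8000/metrics
    while True:
        collect_once()
        time.sleep(30)
&lt;/code&gt;&lt;/pre&gt;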
&lt;p&gt;&lt;strong&gt;Filling skills gaps&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Explore the &lt;a href=&quot;/skillup/&quot;&gt;HPE Developer Skill Up page&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Take a &lt;a href=&quot;https://testdrive.greenlake.hpe.com/&quot;&gt;Test Drive&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Discover &lt;a href=&quot;https://education.hpe.com/ww/en/training/portfolio/greenlake.html&quot;&gt;HPE GreenLake training and certification&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Join in on &lt;a href=&quot;https://community.arubanetworks.com/community-home?CommunityKey=403fcb2e-4306-4eee-9bc7-bd6d0309ff7e&quot;&gt;Aruba Central Community discussions&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Learn more about streamlining hybrid cloud management by reading &lt;a href=&quot;https://www.hpe.com/us/en/collaterals/collateral.a00112667.Streamlining-hybrid-cloud-management.html?rpv=cpf&amp;#x26;parentPage=/us/en/greenlake/hybrid-multi-cloud-services&quot;&gt;HPE GreenLake Management Services&lt;/a&gt;, checking out this &lt;a href=&quot;https://www.hpe.com/us/en/services/remote-infrastructure-monitoring.html&quot;&gt;site&lt;/a&gt;, or listening to this &lt;a href=&quot;https://community.hpe.com/t5/the-cloud-experience-everywhere/hybrid-cloud-simplified-a-podcast-intro-to-hpe-greenlake-hybrid/ba-p/7163196#.Y9tX2uzMJGM&quot;&gt;podcast&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Case studies&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/h22228/video-gallery/us/en/67b545ce-0d1c-43c7-8f85-ee03b4bf1754/western-canada-lottery-corporation--winning-with-insight-and-new-ways-to-play/video/&quot;&gt;Western Canada Lottery Corporation: winning with insight and new ways to play&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/psnow/doc/a50001216enw?jumpid=in_hpesitesearch&quot;&gt;HPE GreenLake helps Wibmo adopt agile, scalable infrastructure model&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;More HPE GreenLake &lt;a href=&quot;https://www.hpe.com/us/en/greenlake/customer-stories.html&quot;&gt;customer stories&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;br&gt;
&lt;hr style=&quot;background: #FF8300; height: 5px; border: none&quot;&gt;
&lt;br&gt;
&lt;h3&gt;Skill Up&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.hpe.com/campaign/munch-and-learn/&quot; style=&quot;font-weight: 700; font-size: 27px&quot;&gt;Meetups&lt;/a&gt;&lt;/p&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
    A series of in-depth talks on open source developer technologies.
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
    - [HPE GreenLake and Infrastructure-as-Code (IaC)](https://www.youtube.com/watch?v=zUo8Ag2IXqk&amp;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&amp;ab_channel=HewlettPackardEnterprise)
&lt;pre&gt;&lt;code&gt;- [HPE GreenLake for Data Fabric](https://www.youtube.com/watch?v=rzLxGZIraTg&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&amp;#x26;ab_channel=HewlettPackardEnterprise)
&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;p&gt;&lt;br&gt;&lt;br&gt;
&lt;a href=&quot;https://hackshack.hpedev.io/workshops&quot; style=&quot;font-weight: 700; font-size: 27px&quot;&gt;Workshops-on-Demand&lt;/a&gt;&lt;/p&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
    Free, in-depth, hands-on workshops that allow you to explore details of a technology by interacting with it. Designed to fit your schedule, these workshops are available 24/7 – from anywhere at any time.
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
    - [HPE Ezmeral Data Fabric 101 – Get to know the basics around the data fabric](/hackshack/workshop/26)
&lt;pre&gt;&lt;code&gt;- [Introduction to Virtual Machine Infrastructure-as-Code using Terraform and HPE GreenLake for Private Cloud Enterprise](/hackshack/replays/36)

- [Creating a Zero Trust Model for Microservices Architectures with SPIRE and Envoy](/hackshack/replays/32)


 
&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;p&gt;&lt;br&gt;&lt;br&gt;&lt;/p&gt;
&lt;div style=&quot;font-weight: 700; font-size: 27px&quot;&gt;Demos, resources, APIs and documentation&lt;/div&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
&lt;pre&gt;&lt;code&gt;The [HPE GreenLake developer portal](https://developer.greenlake.hpe.com/) offers documentation and API information. You can also find additional resources on the demos and resources page.
&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
    - [HPE GreenLake edge-to-cloud platform demos and resources](https://www.hpe.com/us/en/greenlake/demos.html)
&lt;pre&gt;&lt;code&gt;- [HPE GreenLake API and documentation portal](https://developer.greenlake.hpe.com/)

- [HPE GreenLake Test Drive](https://www.hpe.com/us/en/greenlake/demos.html)


 
&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;p&gt;&lt;br&gt;&lt;br&gt;&lt;/p&gt;
&lt;div style=&quot;font-weight: 700; font-size: 27px&quot;&gt;Engage&lt;/div&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
&lt;pre&gt;&lt;code&gt;Ping us with your comments, questions, and requests for information.

 
&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
    - [HPE Developer Community Slack]( https://slack.hpedev.io/)
  &lt;/div&gt;  
&lt;/div&gt;</content:encoded></item><item><title><![CDATA[Data-Driven Developer]]></title><description><![CDATA[N/A]]></description><link>https://developer.hpe.com/data-driven-developer/home/</link><guid isPermaLink="false">https://developer.hpe.com/data-driven-developer/home/</guid><content:encoded>&lt;p&gt;N/A&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Cloud/Datacenter Owner]]></title><description><![CDATA[Deliver modern, scalable and secure on-prem and cloud infrastructures managed through centralized dashboards that monitor multiple vendors…]]></description><link>https://developer.hpe.com/cloud-datacenter-owner/home/</link><guid isPermaLink="false">https://developer.hpe.com/cloud-datacenter-owner/home/</guid><content:encoded>&lt;p&gt;Deliver modern, scalable and secure on-prem and cloud infrastructures managed through centralized dashboards that monitor multiple vendors and services.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Data/ML Engineer]]></title><description><![CDATA[It’s your job to ensure the right data powers the right applications at the right time and in the right place. With an increased number and…]]></description><link>https://developer.hpe.com/data-ml-engineer/home/</link><guid isPermaLink="false">https://developer.hpe.com/data-ml-engineer/home/</guid><content:encoded>&lt;style&gt;
.row {
  display: grid;
	grid-template-columns: 1fr 1fr;
  column-gap: 50px;
}
&lt;/style&gt;
&lt;p&gt;It’s your job to ensure the right data powers the right applications at the right time and in the right place.&lt;/p&gt;
&lt;p&gt;With an increased number and variety of workloads, how can you address all aspects of data logistics and processing that can make or break the success of any data-intensive project, including analytics and AI/machine learning? And do it easily and reliably?&lt;/p&gt;
&lt;p&gt;On this page, we provide content to help you meet these challenges. You will find a rotating selection of foundational material, ideas to help you get inspired, as well as practical tips on key issues to improve efficiency and performance. You’ll also learn what Hewlett Packard Enterprise (HPE) offers.&lt;/p&gt;
&lt;p&gt;The roles of the Data/ML Engineer and Data Scientist can overlap. You may also find content of interest to you on the &lt;a href=&quot;/role/data-scientist/home/&quot;&gt;Data Scientist&lt;/a&gt; page. Content on this page changes as new material becomes available or new topics arise, so check back regularly.&lt;/p&gt;
&lt;br&gt;
&lt;hr style=&quot;background: #7630EA; height: 5px; border: none&quot;&gt;
&lt;br&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
    ### Get Inspired
    **A sampler of new ideas related to data/ML engineering:**
&lt;pre&gt;&lt;code&gt;*Learn how industry innovation may affect your job.*
&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
    ### Building a Foundation 
    **Key to data science projects is a unifying data infrastructure to handle logistics and the containerization of applications**
&lt;pre&gt;&lt;code&gt;*Simplify operations and workflows with the right data fabric and orchestrate containerized applications with open source Kubernetes.*
&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
    - Unit testing isn’t just for code: you need to unit test your data. [Watch Deequ: Unit Tests for Data](https://www.youtube.com/watch?v=2f_JewK79GI) (a small illustrative check follows this section)
&lt;pre&gt;&lt;code&gt;- Data locality helps support GPUs and other accelerators from a data point of view. Read [How fine-grained data placement helps optimize application performance](https://developer.hpe.com/blog/how-fine-grained-data-placement-helps-optimize-application-performance/)

- Better connections between data producers and data consumers make data science more successful. Read [Getting value from your data shouldn’t be this hard](https://www.hpe.com/us/en/insights/articles/getting-value-from-your-data-shouldn-t-be-this-hard-2106.html)
&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
    - Study the technical paper [HPE Ezmeral Data Fabric: Modern infrastructure for data storage and management](https://www.hpe.com/psnow/doc/a00110846enw)
&lt;pre&gt;&lt;code&gt;- Read [What’s your superpower for data management?](https://community.hpe.com/t5/HPE-Ezmeral-Uncut/What-s-your-superpower-for-data-management/ba-p/7100920#.Ya5RTb3ML0p)

- View the [HPE Ezmeral Data Fabric platform page](https://developer.hpe.com/platform/hpe-ezmeral-data-fabric/home/)

- Read [Kubernetized machine learning and AI using Kubeflow](https://developer.hpe.com/blog/kubernetized-machine-learning-and-ai-using-kubeflow/)

- Learn how management of large scale Kubernetes clusters is made easier with [HPE Ezmeral Runtime Enterprise](https://developer.hpe.com/platform/hpe-ezmeral/home/) 
&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
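&lt;p&gt;To make the idea above of unit testing your data concrete, here is a tiny, framework-agnostic Python sketch of the kinds of checks such tests encode: completeness, uniqueness, and value ranges. It uses pandas rather than Deequ, and the column names are invented for illustration.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Tiny illustration of unit tests for data: assert properties of a dataset
# just as you would assert properties of code. Column names are invented;
# Deequ (covered in the linked talk) expresses similar checks declaratively at scale.
import pandas as pd


def check_orders(df: pd.DataFrame) -&gt; list:
    # Return a list of human-readable data-quality failures (an empty list means pass).
    failures = []
    if df[&quot;order_id&quot;].isna().any():
        failures.append(&quot;order_id contains nulls&quot;)
    if df[&quot;order_id&quot;].duplicated().any():
        failures.append(&quot;order_id is not unique&quot;)
    if not df[&quot;amount&quot;].between(0, 1_000_000).all():
        failures.append(&quot;amount outside the expected range&quot;)
    if not df[&quot;status&quot;].isin([&quot;NEW&quot;, &quot;SHIPPED&quot;, &quot;CANCELLED&quot;]).all():
        failures.append(&quot;status contains unexpected values&quot;)
    return failures


if __name__ == &quot;__main__&quot;:
    sample = pd.DataFrame({
        &quot;order_id&quot;: [1, 2, 3],
        &quot;amount&quot;: [19.99, 250.0, 42.5],
        &quot;status&quot;: [&quot;NEW&quot;, &quot;SHIPPED&quot;, &quot;CANCELLED&quot;],
    })
    problems = check_orders(sample)
    print(&quot;data checks passed&quot; if not problems else problems)
&lt;/code&gt;&lt;/pre&gt;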
&lt;br&gt;
&lt;hr style=&quot;background: #33DAC8; height: 5px; border: none&quot;&gt;
&lt;br&gt;
&lt;h3&gt;Addressing Key Concerns&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;What can I do to lower the entry barriers to developing new AI/ML/data science projects?&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;AI/ML projects can and should be run on the same system as analytics projects: Read “Chapter 3: AI and Analytics Together” in the free eBook &lt;a href=&quot;https://www.hpe.com/us/en/resources/software/ai-and-analytics-systems.html&quot;&gt;AI and Analytics at Scale: Lessons from Real-World Production Systems&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Who should be included on the team to ensure the success of the project?&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Read &lt;a href=&quot;https://community.hpe.com/t5/HPE-Ezmeral-Uncut/The-New-Data-Science-Team-Who-s-on-First/ba-p/7154783#.Ybi1pb3MI2y&quot;&gt;The New Data Science Team: Who’s on First?&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;How do I handle data movement?&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Read &lt;a href=&quot;https://community.hpe.com/t5/HPE-Ezmeral-Uncut/A-better-approach-to-major-data-motion-Efficient-built-in/ba-p/7135056#.Ya5Xqb3ML0p&quot;&gt;A better approach to major data motion: built-in data mirroring&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Watch the webinar &lt;a href=&quot;https://www.hpe.com/h22228/video-gallery/us/en/5a1ff1b7-faf8-43f2-98a3-d5b7331616b6/video?jumpid=em_4pbhacrk27_aid-520049397&amp;#x26;utm_source=RE&quot;&gt;Data Motion at Scale: the Untold Story&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;What makes it easier to deal with edge computing in large-scale systems?&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Read &lt;a href=&quot;https://community.hpe.com/t5/HPE-Ezmeral-Uncut/To-the-edge-and-back-again-Meeting-the-challenges-of-edge/ba-p/7132609#.Ya5X3r3ML0o&quot;&gt;To the edge and back again: Meeting the challenges of edge computing&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;How do I ensure data trust and security?&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;New approaches are improving the connection between data producers and data consumers. See how in the video &lt;a href=&quot;https://www.youtube.com/watch?v=9VTLA1nxpoo&quot;&gt;Dataspaces: connecting to data you can trust&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Learn about the &lt;a href=&quot;https://developer.hpe.com/platform/spiffe-and-spire-projects/home/&quot;&gt;SPIFFE and SPIRE projects&lt;/a&gt; that are hosted by the CNCF Foundation&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;How are others doing this?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Check out these real-world case studies:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/psnow/doc/a50003176enw?jumpid=in_lit-psnow-red&quot;&gt;Accelerating Autonomous Car Development with Ready Access to Global Data Fabric&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/psnow/doc/a50003827enw&quot;&gt;Accelerating Data Insight for a Better Work Life&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;br&gt;
&lt;hr style=&quot;background: #FF8300; height: 5px; border: none&quot;&gt;
&lt;br&gt;
&lt;h3&gt;Skill Up&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.hpe.com/campaign/munch-and-learn/&quot; style=&quot;font-weight: 700; font-size: 27px&quot;&gt;Munch &amp;#x26; Learn technology talk&lt;/a&gt;&lt;/p&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
    Monthly meetups where you can hear from experts on the newest technologies. Catch up on any you may have missed and register for upcoming talks.
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
    - [What’s a data fabric and how does it work?](https://www.youtube.com/watch?v=qi6sTvu8osk)
&lt;pre&gt;&lt;code&gt;- [Data Science Unplugged: Part 1](https://www.youtube.com/watch?v=Inh6eXM0EbA)

- [Data Science Unplugged: Part 2](https://www.youtube.com/watch?v=Va4tSr__Yok)

- [How to make data consumable for real-world data science](https://www.youtube.com/watch?v=4WKjRqflF7M)
&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;p&gt;&lt;br&gt;&lt;br&gt;
&lt;a href=&quot;https://hackshack.hpedev.io/workshops&quot; style=&quot;font-weight: 700; font-size: 27px&quot;&gt;Workshops-on-Demand&lt;/a&gt;&lt;/p&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
    Free, in-depth, hands-on workshops that allow you to explore details of a technology by interacting with it. Designed to fit your schedule, these workshops are available 24/7 – from anywhere at any time.
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
    - [HPE Ezmeral Data Fabric 101 – Get to know the basics around the data fabric](https://hackshack.hpedev.io/workshop/26)
  &lt;/div&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;p&gt;&lt;br&gt;&lt;br&gt;&lt;/p&gt;
&lt;div style=&quot;font-weight: 700; font-size: 27px&quot;&gt;Documentation&lt;/div&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
&lt;pre&gt;&lt;code&gt;The [HPE Ezmeral Data Fabric platform page](https://developer.hpe.com/platform/hpe-ezmeral-data-fabric/home/) offers documentation and API information along with informative videos and tutorials. Additional documentation can be found here.
&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
    - [HPE Ezmeral Data Fabric 7.0 documentation](https://docs.datafabric.hpe.com/70/index.html)
  &lt;/div&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;p&gt;&lt;br&gt;&lt;br&gt;&lt;/p&gt;
&lt;div style=&quot;font-weight: 700; font-size: 27px&quot;&gt;Engage&lt;/div&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
&lt;pre&gt;&lt;code&gt;Ping us with your comments, questions, and requests for information.

 
&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
    - [HPE Dev Slack]( https://slack.hpedev.io/)
  &lt;/div&gt;  
&lt;/div&gt;</content:encoded></item><item><title><![CDATA[Data Scientist]]></title><description><![CDATA[Data science is an intriguing field, and recognition of its potential is rapidly expanding. But there are challenges. You need access to the…]]></description><link>https://developer.hpe.com/data-scientist/home/</link><guid isPermaLink="false">https://developer.hpe.com/data-scientist/home/</guid><content:encoded>&lt;style&gt;
.row {
  display: grid;
	grid-template-columns: 1fr 1fr;
  column-gap: 50px;
}
&lt;/style&gt;
&lt;p&gt;Data science is an intriguing field, and recognition of its potential is rapidly expanding. But there are challenges. You need access to the right data and the flexibility to use a variety of tools of your own choice. The pipeline for data preparation and for model and application deployment needs to be reliable and efficient. And you need to increase the likelihood that stakeholders and IT managers will green-light new projects.&lt;/p&gt;
&lt;p&gt;On this page, we provide a range of content – for advanced data scientists to those just getting started – to help you meet these challenges. You will find a rotating selection of foundational material, ideas to help you get inspired, as well as practical tips on key issues that help make your data science projects easier to build and more likely to be successful. You’ll also learn what Hewlett Packard Enterprise (HPE) offers.&lt;/p&gt;
&lt;p&gt;The roles of the Data/ML Engineer and Data Scientist can overlap. You may also find content of interest to you on the &lt;a href=&quot;/role/data-ml-engineer/home/&quot;&gt;Data/ML Engineer&lt;/a&gt; page. Content on this page changes as new material becomes available or new topics arise, so check back regularly.&lt;/p&gt;
&lt;br&gt;
&lt;hr style=&quot;background: #7630EA; height: 5px; border: none&quot;&gt;
&lt;br&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
    ### Get Inspired
    **A sampler of new ideas**
&lt;pre&gt;&lt;code&gt;*Learn how innovation may affect your job.*
&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
    ### Building a Foundation 
    **Optimize data access**
&lt;pre&gt;&lt;code&gt;*The right data infrastructure gives you direct access to data via a wide range of APIs for a choice in tools.*
&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
    - [Watch Deequ: Unit Tests for Data](https://www.youtube.com/watch?v=2f_JewK79GI) to learn why unit testing isn’t just for code
&lt;pre&gt;&lt;code&gt;- Read [Swarm Learning: Turn Your Distributed Data into Competitive Edge](https://www.hpe.com/us/en/pdfViewer.html?docId=a50000344&amp;#x26;parentPage=/us/en/products/compute/hpc/deep-learning&amp;#x26;resourceTitle=Swarm+Learning:+Turn+Your+Distributed+Data+into+Competitive+Edge+technical+white+paper) to see how innovative architectures take advantage of the increasingly distributed nature of data
&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
    - Study the technical paper [HPE Ezmeral Data Fabric: Modern infrastructure for data storage and management](https://www.hpe.com/psnow/doc/a00110846enw)
&lt;pre&gt;&lt;code&gt;- Learn best practices in [Getting the most from your data-driven transformation: 10 key principles](https://www.hpe.com/us/en/insights/articles/getting-the-most-from-your-data-driven-transformation-2109.html)

- View the [HPE Ezmeral Data Fabric platform](https://developer.hpe.com/platform/hpe-ezmeral-data-fabric/home/) page
&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
    **Working together**
&lt;pre&gt;&lt;code&gt;*Domain expertise helps frame questions, identify useful data, and take action on insights.*
&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
    **Containerization of applications**
&lt;pre&gt;&lt;code&gt;*The open source Kubernetes framework orchestrates containers.*
&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
    - Discover [The New Data Science Team: Who’s on First?](https://community.hpe.com/t5/HPE-Ezmeral-Uncut/The-New-Data-Science-Team-Who-s-on-First/ba-p/7154783#.Ybi1pb3MI2y) and learn how multiple roles contribute to a successful data science project
&lt;pre&gt;&lt;code&gt;- Combine diverse data sets to advance healthcare. Watch [Data Saves Lives](https://www.hpe.com/us/en/discover-more-network/events/discover-2021/results.html/types/sessions/search/b4372?media-id=/us/en/resources/discover/las-vegas-2021/B4372/_jcr_content.details.json) to learn how

- Watch [Data Feeds People](https://www.hpe.com/us/en/discover-more-network/events/discover-2021/results.html/types/sessions/search/b4373) to see how combined stakeholder expertise puts advanced agricultural knowledge to work in the field - literally
&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
    - Read [Kubernetized machine learning and AI using Kubeflow](https://developer.hpe.com/blog/kubernetized-machine-learning-and-ai-using-kubeflow/)
&lt;pre&gt;&lt;code&gt;- Learn how management of large scale Kubernetes clusters is made easier with [HPE Ezmeral Runtime Enterprise](https://developer.hpe.com/platform/hpe-ezmeral/home/)
&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;br&gt;
&lt;hr style=&quot;background: #33DAC8; height: 5px; border: none&quot;&gt;
&lt;br&gt;
&lt;h3&gt;Addressing Key Concerns&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;How do I find and get access to the right data?&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Read &lt;a href=&quot;https://community.hpe.com/t5/HPE-Ezmeral-Uncut/Avoiding-pitfalls-Tips-for-better-data-science/ba-p/7144228&quot;&gt;Avoiding pitfalls: Tips for better data science&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Better connections between data producers and data consumers make data science more successful. Check out &lt;a href=&quot;https://www.hpe.com/us/en/insights/articles/getting-value-from-your-data-shouldn-t-be-this-hard-2106.html&quot;&gt;Getting value from your data shouldn’t be this hard&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;What can I do to lower the entry barriers to developing new AI/ML/data science projects?&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;AI/ML projects can and should be run on the same system as analytics projects: Read “Chapter 3: AI and Analytics Together” in the free eBook &lt;a href=&quot;https://www.hpe.com/us/en/resources/software/ai-and-analytics-systems.html&quot;&gt;AI and Analytics at Scale: Lessons from Real-World Production Systems&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Read &lt;a href=&quot;https://community.hpe.com/t5/HPE-Ezmeral-Uncut/Second-project-advantage-Lowering-barriers-to-AI-and-machine/ba-p/7154034#.YZYX2elKhE4&quot;&gt;Second project advantage: Lowering barriers to AI and machine learning&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;How do I optimize data logistics and preparation efforts to keep them from overwhelming the data science project?&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Read &lt;a href=&quot;https://community.hpe.com/t5/HPE-Ezmeral-Uncut/Budgeting-time-for-AI-ML-projects/ba-p/7090807#.YZYZVelKhE4&quot;&gt;Budgeting time for AI/ML projects&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;See content for the &lt;a href=&quot;/role/data-ml-engineer/home/&quot;&gt;Data/ML Engineer role&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;What makes it easier to deal with edge computing in large-scale systems?&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Read &lt;a href=&quot;https://community.hpe.com/t5/HPE-Ezmeral-Uncut/To-the-edge-and-back-again-Meeting-the-challenges-of-edge/ba-p/7132609#.Ya5X3r3ML0o&quot;&gt;To the edge and back again: Meeting the challenges of edge computing&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;How are others doing this?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Check out these real-world case studies:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/psnow/doc/a50003176enw?jumpid=in_lit-psnow-red&quot;&gt;Accelerating Autonomous Car Development with Ready Access to Global Data Fabric&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/psnow/doc/a50003827enw&quot;&gt;Accelerating Data Insight for a Better Work Life&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.hpe.com/blog/architecting-the-worlds-largest-biometric-identity-system-the-aadhaar-ex/&quot;&gt;Architecting the World’s Largest Biometric Identity System: The Aadhaar Experience&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;br&gt;
&lt;hr style=&quot;background: #FF8300; height: 5px; border: none&quot;&gt;
&lt;br&gt;
&lt;h3&gt;Skill Up&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.hpe.com/campaign/munch-and-learn/&quot; style=&quot;font-weight: 700; font-size: 27px&quot;&gt;Munch &amp;#x26; Learn technology talk&lt;/a&gt;&lt;/p&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
    Monthly meetups where you can hear from experts on the newest technologies. Catch up on any you may have missed and register for upcoming talks.
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
    - [What’s a data fabric and how does it work?](https://www.youtube.com/watch?v=qi6sTvu8osk)
&lt;pre&gt;&lt;code&gt;- [Data Science Unplugged: Part 1](https://www.youtube.com/watch?v=Inh6eXM0EbA)

- [Data Science Unplugged: Part 2](https://www.youtube.com/watch?v=Va4tSr__Yok)

- [How to make data consumable for real-world data science](https://www.youtube.com/watch?v=4WKjRqflF7M)
&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;p&gt;&lt;br&gt;&lt;br&gt;
&lt;a href=&quot;https://hackshack.hpedev.io/workshops&quot; style=&quot;font-weight: 700; font-size: 27px&quot;&gt;Workshops-on-Demand&lt;/a&gt;&lt;/p&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
    Free, in-depth, hands-on workshops that allow you to explore details of a technology by interacting with it. Designed to fit your schedule, these workshops are available 24/7 – from anywhere at any time.
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
    - [HPE Ezmeral Data Fabric 101 – Get to know the basics around the data fabric](https://hackshack.hpedev.io/workshop/26)
  &lt;/div&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;p&gt;&lt;br&gt;&lt;br&gt;&lt;/p&gt;
&lt;div style=&quot;font-weight: 700; font-size: 27px&quot;&gt;Documentation&lt;/div&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;    
    The [HPE Ezmeral Data Fabric platform page](https://developer.hpe.com/platform/hpe-ezmeral-data-fabric/home/) offers documentation and API information along with informative videos and tutorials. Additional documentation can be found here.
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
    - [HPE Ezmeral Data Fabric 7.0 documentation](https://docs.datafabric.hpe.com/70/index.html)
  &lt;/div&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;p&gt;&lt;br&gt;&lt;br&gt;&lt;/p&gt;
&lt;div style=&quot;font-weight: 700; font-size: 27px&quot;&gt;Engage&lt;/div&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
&lt;pre&gt;&lt;code&gt;Ping us with your comments, questions, and requests for information.

 
&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
    - [HPE Dev Slack]( https://slack.hpedev.io/)
  &lt;/div&gt;  
&lt;/div&gt;</content:encoded></item><item><title><![CDATA[Developer]]></title><description><![CDATA[Every software developer, computer programmer, software engineer, and coder shares the same goal: to create the best software for customers…]]></description><link>https://developer.hpe.com/developer/home/</link><guid isPermaLink="false">https://developer.hpe.com/developer/home/</guid><content:encoded>&lt;style&gt;
.row {
  display: grid;
	grid-template-columns: 1fr 1fr;
  column-gap: 50px;
}
&lt;/style&gt;
&lt;p&gt;Every software developer, computer programmer, software engineer, and coder shares the same goal: to create the best software for customers. This involves user experience, security, serviceability, and reusability, as well as commercial success. In addition to knowing the languages and tools at your disposal, it also requires an understanding of your deployment model - considering the teams in place, the corporate context and the delivery mechanism. And in this quickly evolving environment, it’s not always easy to keep up.&lt;/p&gt;
&lt;p&gt;Here, we cover the latest software trends, new programming languages, and new techniques and methodologies to help you improve the way you build software for your customers – whether it’s through open source projects or commercial offerings.&lt;/p&gt;
&lt;p&gt;The role of the Developer may overlap with other roles like the &lt;a href=&quot;https://developer.hpe.com/role/data-driven-developer/home/&quot;&gt;Data-driven Developer&lt;/a&gt; or even the &lt;a href=&quot;https://developer.hpe.com/role/data-scientist/home/&quot;&gt;Data Scientist&lt;/a&gt;. On this page, we focus on developers who build the applications that generate data, which Data Scientists can use to support business outcomes.&lt;/p&gt;
&lt;br&gt;
&lt;hr style=&quot;background: #7630EA; height: 5px; border: none&quot;&gt;
&lt;br&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
    ### Get Inspired
    **Build the right user experience**
&lt;pre&gt;&lt;code&gt;*Delivering a consistent, enjoyable user experience (UX) is key to developing successful applications today.*
&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
    ### Building a Foundation 
    **Kubernetes**
&lt;pre&gt;&lt;code&gt;*Open source Kubernetes is the de facto standard for managing containerized applications at scale. But it can be complex and take time and experience to fully master.*
&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
    - Bringing UX designers into a project early can significantly improve your chances of success. Read [Wow – A practiced and perfected design process.](https://developer.hpe.com/blog/wow-a-practiced-and-perfected-design-process-part-1-uncovering-the-merit/)
&lt;pre&gt;&lt;code&gt;- Implementing validated designs isn’t always easy. See how [HPE DesignSystem](https://design-system.hpe.design/), based on open source Grommet, can help you get there faster and easier.
&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
    - Learn how to migrate an existing application by reading [Deploying Complex Stateful Applications on Kubernetes with KubeDirector](https://developer.hpe.com/blog/deploying-complex-stateful-applications-on-kubernetes-with-kubedirector/) or [Best Practices for Migrating Your Apps to Containers and Kubernetes](https://developer.hpe.com/blog/best-practices-for-migrating-your-apps-to-containers-and-kubernetes/)
&lt;pre&gt;&lt;code&gt;- Read [Introducing a multi-vendor CSI driver for Kubernetes](https://developer.hpe.com/blog/introducing-a-multi-vendor-csi-driver-for-kubernetes/) and [Using Raw Block and Ephemeral Inline Volumes on Kubernetes](https://developer.hpe.com/blog/using-raw-block-and-ephemeral-inline-volumes-on-kubernetes/) to learn how to leverage the CSI driver for persistent storage for your applications
&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
    **Containerized apps**
&lt;pre&gt;&lt;code&gt;*Complex applications can be assembled from small, independent containerized modules with REST API-based interaction. Choose the right platform for your deployment.*
&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
    **Zero-Trust Security**
&lt;pre&gt;&lt;code&gt;*Securing an application in heterogeneous, distributed environments is a daunting task. A CNCF-incubated project, SPIFFE/SPIRE, can help.*
&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
&lt;pre&gt;&lt;code&gt;Modern, cloud-native application development relies heavily on containers and Kubernetes. When determining the right platform for your use, be sure to check out [HPE Ezmeral Runtime Enterprise.](https://developer.hpe.com/platform/hpe-ezmeral/home/)
&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
    - [Learn about the concept of Zero-Trust security in the ebook Solving the Bottom Turtle](https://spiffe.io/book/)
&lt;pre&gt;&lt;code&gt;- [Discover some of the more recently added SPIFFE/SPIRE capabilities](https://developer.hpe.com/blog/top-13-capabilities-within-spiffe-and-spire-released-in-2019/)
&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;br&gt;
&lt;hr style=&quot;background: #33DAC8; height: 5px; border: none&quot;&gt;
&lt;br&gt;
&lt;h3&gt;Addressing Key Concerns&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;How can I easily prototype my designs?&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Learn about the &lt;a href=&quot;https://developer.hpe.com/platform/grommet/home/&quot;&gt;open source React framework Grommet&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://designer.grommet.io/&quot;&gt;Test the Grommet Designer&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;How do I best handle parallel programming?&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Explore &lt;a href=&quot;https://developer.hpe.com/platform/chapel/home/&quot;&gt;Chapel&lt;/a&gt;, an open source parallel programming language created by engineers at Cray, now part of Hewlett Packard Enterprise&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://chapel.discourse.group/latest&quot;&gt;Join the Chapel community&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Read about the HPE Cray Programming Environment &lt;a href=&quot;https://community.hpe.com/t5/Advantage-EX/Make-your-apps-run-faster-with-HPE-Cray-Programming-Environment/ba-p/7144400?jumpId=in_videogallery_0f352d12-b606-4fae-97db-be760f0fda7a_gaiw&quot;&gt;here&lt;/a&gt; or watch &lt;a href=&quot;https://www.hpe.com/h22228/video-gallery/us/en/4ee817bd-3edc-45db-aed0-ae054d9d8712/take-the-frustration-out-of-hpc-software-development-with-hpe-cray-programming-environment-/video/&quot;&gt;this&lt;/a&gt; 25-minute seminar&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;How do I secure workloads in distributed environments?&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Learn about the CNCF project, &lt;a href=&quot;https://developer.hpe.com/platform/spiffe-and-spire-projects/home/&quot;&gt;SPIFFE/SPIRE&lt;/a&gt;, and how to achieve zero-trust security for your application&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://slack.spiffe.io/&quot;&gt;Join the SPIFFE/SPIRE community&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;How can I deploy my stateful application on my modern Kubernetes platform?&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;If migrating a legacy application to Kubernetes is one of your concerns, &lt;a href=&quot;https://developer.hpe.com/blog/best-practices-for-migrating-your-apps-to-containers-and-kubernetes/&quot;&gt;read about the KubeDirector project&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;How can I focus more on the application and less on the data?&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/resources/software/ai-and-analytics-systems.html&quot;&gt;Read the eBook&lt;/a&gt; to learn how to simplify using and sharing your data with a common infrastructure&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Make sure you check our &lt;a href=&quot;https://developer.hpe.com/platform/hpe-ezmeral-data-fabric/home/&quot;&gt;HPE Ezmeral Data Fabric platform&lt;/a&gt; pages&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;How do you handle collaboration and sharing of code in your organization?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Mastering Git (including GitHub, GitLab, etc.) is a key element of the modern developer toolkit and mandatory for being a contributor or consumer of open source projects. We have designed this series of blog posts to help with Git wizardry:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.hpe.com/blog/get-involved-in-the-open-source-community-part-1-getting-started-with-git/&quot;&gt;Get started with Git and get involved in the open source community&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.hpe.com/blog/get-involved-in-the-open-source-community-part-2-sharing-with-the-commun/&quot;&gt;Learn to share with the community using Git&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.hpe.com/blog/get-involved-in-the-open-source-community-part-3-contributing-back-to-th/&quot;&gt;Start contributing back to the community&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;br&gt;
&lt;hr style=&quot;background: #FF8300; height: 5px; border: none&quot;&gt;
&lt;br&gt;
&lt;h3&gt;Skill Up&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.hpe.com/campaign/munch-and-learn/&quot; style=&quot;font-weight: 700; font-size: 27px&quot;&gt;Munch &amp;#x26; Learn technology talk&lt;/a&gt;&lt;/p&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
    Monthly meetups where you can hear from experts on the newest technologies. Catch up on any you may have missed and register for upcoming talks.
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
    - [Microservices Architecture 101](https://www.youtube.com/watch?v=qyyxQU37ZyQ)
&lt;pre&gt;&lt;code&gt;- [Building a foundation for zero trust with SPIFFE](https://www.youtube.com/watch?v=G1ceKr16nn8)

- [Kubernetes 101](https://www.youtube.com/watch?v=PWVJKK1obKQ&amp;#x26;feature=youtu.be)
&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;p&gt;&lt;br&gt;&lt;br&gt;
&lt;a href=&quot;https://developer.hpe.com/campaign/meetups/&quot; style=&quot;font-weight: 700; font-size: 27px&quot;&gt;Meetups&lt;/a&gt;&lt;/p&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
    A series of in-depth talks on open source developer technologies.
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
    - [Quarkus - Supersonic Subatomic Java](https://hpe.zoom.us/webinar/register/5716414626617/WN_VS7nBF_qQze0G64XLzBilw)
&lt;pre&gt;&lt;code&gt;- [Streamlit - The fastest way to build and share data science apps](https://hpe.zoom.us/webinar/register/2016414625150/WN_FzzTDsTjQBSw-UFwD6UTdw)

 
&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;hr&gt;
&lt;p&gt;&lt;br&gt;&lt;br&gt;
&lt;a href=&quot;https://hackshack.hpedev.io/workshops&quot; style=&quot;font-weight: 700; font-size: 27px&quot;&gt;Workshops-on-Demand&lt;/a&gt;&lt;/p&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
    Free, in-depth, hands-on workshops that allow you to explore details of a technology by interacting with it. Designed to fit your schedule, these workshops are available 24/7 – from anywhere at any time.
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
&lt;pre&gt;&lt;code&gt;- [Kubernetes 101 - Introduction to Kubernetes Concepts](https://hackshack.hpedev.io/workshop/24)

- [SPIFFE - SPIRE 101 – An introduction to SPIFFE server and SPIRE agent security concepts](https://hackshack.hpedev.io/workshop/27)

- [API 101 - REST API basics and the value they provide](https://hackshack.hpedev.io/workshop/9)

- [Python 101 - Introduction to Python programming language](https://hackshack.hpedev.io/workshop/15)

- [Streamline app development with open source Grommet](https://hackshack.hpedev.io/workshop/14)

- [RUST 101 - Introduction to the Rust programming language](https://hackshack.hpedev.io/workshop/16)

- [GIT 101 – Get involved in the open source community](https://hackshack.hpedev.io/workshop/17)
&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;p&gt;&lt;br&gt;&lt;br&gt;&lt;/p&gt;
&lt;div style=&quot;font-weight: 700; font-size: 27px&quot;&gt;Engage&lt;/div&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
    Ping us with your comments, questions, and requests for information. We recommend starting with the HPE Developer Slack workspace.
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
&lt;pre&gt;&lt;code&gt;- [HPE Developer Slack](https://slack.hpedev.io/)

- [Grommet Slack](https://grommet.slack.com/)

- [Chapel Discourse Forum](https://chapel.discourse.group/latest)
&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;</content:encoded></item><item><title><![CDATA[DevOps & Site Reliability Engineer]]></title><description><![CDATA[Setting up the software delivery pipeline, streamlining processes, and ensuring it all works is your responsibility. And automation is key…]]></description><link>https://developer.hpe.com/devops-sre/home/</link><guid isPermaLink="false">https://developer.hpe.com/devops-sre/home/</guid><content:encoded>&lt;style&gt;
.row {
  display: grid;
	grid-template-columns: 1fr 1fr;
  column-gap: 50px;
}
&lt;/style&gt;
&lt;p&gt;Setting up the software delivery pipeline, streamlining processes, and ensuring it all works is your responsibility. And automation is key to your success. Here, you’ll not only find tips and tricks to help you deliver what’s expected from you but also ideas that will help you expand your capabilities.&lt;/p&gt;
&lt;p&gt;You may also find content of interest to you on the &lt;a href=&quot;/role/data-scientist/home/&quot;&gt;Data Scientist&lt;/a&gt; and &lt;a href=&quot;/role/developer/home/&quot;&gt;Developer pages&lt;/a&gt;, so be sure to check them out.&lt;/p&gt;
&lt;br&gt;
&lt;hr style=&quot;background: #7630EA; height: 5px; border: none&quot;&gt;
&lt;br&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
    ### Get Inspired
    **Solve future problems now**
&lt;pre&gt;&lt;code&gt;*Learn how industry innovation affects you*
&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
    ### Building a Foundation 
    **Setting up the pipeline**
&lt;pre&gt;&lt;code&gt;*Ease operations and workflows*
&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
    - Check out [Infrastructure-as-code on HPE GreenLake using Terraform](/blog/infrastructure-as-code-on-hpe-greenlake-using-terraform/)
&lt;pre&gt;&lt;code&gt;- Read [Kubernetes Cluster as Code - Part 1](/blog/kubernetes-clusters-as-code-part1/)

- Better understand [HPE GreenLake Central Identity &amp;#x26; Access Management](/blog/understanding-hpe-greenlake-identity-access-management/)

- Explore [From DevOps to AIOps with the power of HPC: Why and how it’s time to make the move](https://community.hpe.com/t5/tech-insights/from-devops-to-aiops-with-the-power-of-hpc-why-and-how-it-s-time/ba-p/7116668#.Y9tcoezMJGM)
&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
    - Read [Lift and Transform Apps with HPE CSI Driver for Kubernetes](/blog/lift-and-transform-apps-with-hpe-csi-driver-for-kubernetes/)
&lt;pre&gt;&lt;code&gt;- Learn more about [Autopilot Kubernetes Deployments on HPE Ezmeral Runtime Enterprise](/blog/autopilot-kubernetes-deployments-on-hpe-ezmeral-runtime-enterprise/)

- Check out how to [Scale Kubernetes Clusters using HPE GreenLake Terraform Provider](/blog/scale-kubernetes-cluster-using-hpe-greenlake-terraform-provider/)

- Watch [Microservices Architecture 101](https://www.youtube.com/watch?v=qyyxQU37ZyQ&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&amp;#x26;feature=youtu.be&amp;#x26;ab_channel=HewlettPackardEnterprise)
&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;br&gt;
&lt;hr style=&quot;background: #33DAC8; height: 5px; border: none&quot;&gt;
&lt;br&gt;
&lt;h3&gt;Addressing Key Concerns&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Leveraging automation for maximum efficiency, repeatability, and cost effectiveness&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Watch &lt;a href=&quot;https://www.youtube.com/watch?v=zUo8Ag2IXqk&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&amp;#x26;ab_channel=HewlettPackardEnterprise&quot;&gt;HPE GreenLake and Infrastructure-as-Code (IaC)&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Read &lt;a href=&quot;/blog/infrastructure-as-code-on-hpe-greenlake-using-terraform/&quot;&gt;Infrastructure-as-code on HPE GreenLake using Terraform&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Study the technical paper &lt;a href=&quot;/blog/infrastructure-as-code-with-hpe-oneview-and-ansible-by-red-hat/&quot;&gt;Infrastructure as code with HPE OneView and Ansible by Red Hat&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Securing and automating app development&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Watch &lt;a href=&quot;https://www.youtube.com/watch?v=XhhqEjUaLjE&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&amp;#x26;ab_channel=HewlettPackardEnterprise&quot;&gt;Running Reliable systems Part 1: An overview of Site Reliability Engineering (SRE)&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Watch &lt;a href=&quot;https://www.youtube.com/watch?v=ZDxptOGs-ow&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&amp;#x26;index=1&amp;#x26;ab_channel=HewlettPackardEnterprise&quot;&gt;Running Reliable systems Part 2: Service level objective (SLO) math&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;View &lt;a href=&quot;https://www.youtube.com/watch?v=G1ceKr16nn8&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&amp;#x26;feature=youtu.be&amp;#x26;ab_channel=HewlettPackardEnterprise&quot;&gt;Building a foundation for zero trust with SPIFFE&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Learn about &lt;a href=&quot;/blog/galadriel-a-spire-federation-alternative/&quot;&gt;Galadriel – a SPIRE Federation Alternative&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Read &lt;a href=&quot;/blog/the-advent-of-ephemeral-infrastructure-as-code/&quot;&gt;The advent of ephemeral Infrastructure-as-Code&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Check out &lt;a href=&quot;/blog/deploying-mongodb-application-on-hpe-greenlake-for-containers/&quot;&gt;A guide to deploying MongoDB applications using HPE GreenLake for Containers&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Read how to &lt;a href=&quot;/blog/secure-containerized-and-traditional-apps-concurrently/&quot;&gt;Secure containerized and traditional apps concurrently&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Setting up pipelines and streamlining processes&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Explore the HPE GreenLake for Compute Ops Management REST API – &lt;a href=&quot;/blog/explore-the-hpe-greenlake-for-compute-ops-management-rest-api-using-curl-and-postman/&quot;&gt;using cURL and Postman&lt;/a&gt; or by &lt;a href=&quot;/blog/explore-the-hpe-greenlake-for-compute-ops-management-rest-api-using-python-and-powershell/&quot;&gt;using Python and PowerShell&lt;/a&gt; (a short Python sketch follows this list)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Learn how &lt;a href=&quot;https://community.hpe.com/t5/the-cloud-experience-everywhere/big-news-for-devops-teams-new-capabilities-in-hpe-greenlake-for/ba-p/7178793#.Y9te5uzMJGM&quot;&gt;HPE GreenLake for Private Cloud Enterprise offers expanded container deployment&lt;/a&gt; options for Kubernetes to support DevOps and CI/CD environments&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
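&lt;p&gt;When you explore the API from Python, the skeleton usually looks like the sketch below: a requests session carrying a bearer token, plus a loop over a paginated collection. The base URL, endpoint path, pagination parameters, and response fields are assumptions for illustration; the linked posts walk through the real endpoints step by step.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Sketch: walk a paginated REST collection from Python with a bearer token.
# The base URL, path, pagination parameters (offset/limit), and response field
# names (items) are assumptions for illustration. Check the API reference.
import requests

API_BASE = &quot;https://example-region-api.compute.cloud.hpe.com&quot;  # placeholder base URL


def iter_collection(session: requests.Session, path: str, page_size: int = 100):
    # Yield every item from a paginated collection endpoint.
    offset = 0
    while True:
        resp = session.get(
            API_BASE + path,
            params={&quot;offset&quot;: offset, &quot;limit&quot;: page_size},
            timeout=30,
        )
        resp.raise_for_status()
        items = resp.json().get(&quot;items&quot;, [])
        if not items:
            return
        yield from items
        offset += len(items)


if __name__ == &quot;__main__&quot;:
    s = requests.Session()
    s.headers[&quot;Authorization&quot;] = &quot;Bearer YOUR-ACCESS-TOKEN&quot;  # token from your API client credentials
    for server in iter_collection(s, &quot;/compute-ops/v1beta2/servers&quot;):  # hypothetical path
        print(server.get(&quot;name&quot;))
&lt;/code&gt;&lt;/pre&gt;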
&lt;p&gt;&lt;strong&gt;Monitoring and ensuring applications are stable&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Review &lt;a href=&quot;/blog/greenlake-platform-infrastructure-monitoring-with-apache-skywalking/&quot;&gt;HPE GreenLake for Private Cloud Enterprise monitoring with Apache SkyWalking and OpenTelemetry&lt;/a&gt; (a minimal OpenTelemetry sketch follows this list)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Read &lt;a href=&quot;/blog/monitor-application-performance-across-hybrid-cloud-environment-using-apache-skywalking-and-service-mesh/&quot;&gt;Addressing hybrid cloud application challenges using HPE GreenLake for Private Cloud Enterprise – Part 2: Application monitoring&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Check out &lt;a href=&quot;/blog/containers-best-practices-for-running-in-production/&quot;&gt;Containers: Best practices for running in production&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
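&lt;p&gt;For readers new to OpenTelemetry itself, the minimal Python sketch below shows the core instrumentation pattern referenced in those posts: configure a tracer provider, attach an exporter, and wrap units of work in spans. The console exporter is for illustration only; a production setup would export spans to a collector feeding a backend such as Apache SkyWalking.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Minimal OpenTelemetry tracing sketch using the Python SDK. The console exporter
# is illustrative only; real deployments export spans to a collector/backend.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(&quot;example.app&quot;)  # hypothetical instrumentation scope name


def handle_request(order_id: int) -&gt; None:
    # Each unit of work becomes a span; attributes make traces searchable later.
    with tracer.start_as_current_span(&quot;handle-request&quot;) as span:
        span.set_attribute(&quot;order.id&quot;, order_id)
        with tracer.start_as_current_span(&quot;query-database&quot;):
            pass  # placeholder for the real work


if __name__ == &quot;__main__&quot;:
    handle_request(42)
&lt;/code&gt;&lt;/pre&gt;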
&lt;p&gt;&lt;strong&gt;Modernizing and migrating legacy apps&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;View &lt;a href=&quot;https://www.youtube.com/watch?v=UvcyIjzml7s&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&amp;#x26;index=1&amp;#x26;ab_channel=HewlettPackardEnterprise&quot;&gt;HPE + vFunction: Modernizing Legacy Applications and Data Sources Faster&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Read &lt;a href=&quot;/blog/application-modernization-with-the-application-workbench/&quot;&gt;Application modernization with the Application Workbench&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Check out our &lt;a href=&quot;/blog/best-practices-for-migrating-your-apps-to-containers-and-kubernetes/&quot;&gt;Best practices for migrating your apps to containers and Kubernetes&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;See how to &lt;a href=&quot;/blog/lift-and-transform-apps-with-hpe-csi-driver-for-kubernetes/&quot;&gt;Lift and transform apps with HPE CSI Driver for Kubernetes&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Case studies&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/collaterals/collateral.a50000267.T%C3%BCbingen-University-Hospital-future-proofs-its-SAP-environment-case-study.html?rpv=cpf&amp;#x26;parentPage=/us/en/greenlake/customer-stories&quot;&gt;Tübingen University Hospital future-proofs its SAP environment case study&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/collaterals/collateral.a50006818.Packaging-giant-builds-platform-for-sustainable-growth--EPL-Limited.html?rpv=cpf&amp;#x26;parentPage=/us/en/customer-case-studies/modal-fragment/e6818&quot;&gt;Packaging giant builds platform for sustainable growth – EPL Limited&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;br&gt;
&lt;hr style=&quot;background: #FF8300; height: 5px; border: none&quot;&gt;
&lt;br&gt;
&lt;h3&gt;Skill Up&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.hpe.com/campaign/munch-and-learn/&quot; style=&quot;font-weight: 700; font-size: 27px&quot;&gt;Munch &amp;#x26; Learn technology talk&lt;/a&gt;&lt;/p&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
    Monthly meetups where you can hear from experts on the newest technologies. Catch up on any you may have missed and register for upcoming talks.
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
    - [Calling all citizen developers: Can low-code platforms accelerate your impact?](https://www.youtube.com/watch?v=zc_54fq8PoY&amp;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&amp;index=1&amp;ab_channel=HewlettPackardEnterprise)
&lt;pre&gt;&lt;code&gt;- [Kubernetes 101](https://www.youtube.com/watch?v=PWVJKK1obKQ&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&amp;#x26;feature=youtu.be&amp;#x26;ab_channel=HewlettPackardEnterprise)

- [Microservices Architecture 101](https://www.youtube.com/watch?v=qyyxQU37ZyQ&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&amp;#x26;feature=youtu.be&amp;#x26;ab_channel=HewlettPackardEnterprise)
&lt;/code&gt;&lt;/pre&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;p&gt;&lt;br&gt;&lt;br&gt;
&lt;a href=&quot;/campaign/meetups/&quot; style=&quot;font-weight: 700; font-size: 27px&quot;&gt;Meetups&lt;/a&gt;&lt;/p&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
    A series of in-depth talks on open source developer technologies.
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=zUo8Ag2IXqk&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;HPE GreenLake and Infrastructure-as-Code (IaC)&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=XhhqEjUaLjE&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&amp;#x26;ab_channel=HewlettPackardEnterprise&quot;&gt;Running reliable systems - Part 1: An Overview of Site Reliability Engineering (SRE)&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=ZDxptOGs-ow&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&amp;#x26;index=2&amp;#x26;ab_channel=HewlettPackardEnterprise&quot;&gt;Running reliable systems - Part 2: Service level objective (SLO) Math&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=odi9isyZOrU&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&amp;#x26;index=1&amp;#x26;ab_channel=HewlettPackardEnterprise&quot;&gt;OpenTelemetry: Getting started and the road to production&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=r62VLwT6w3Y&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&amp;#x26;index=1&amp;#x26;ab_channel=HewlettPackardEnterprise&quot;&gt;Finding vulnerabilities in production with open source ThreatMapper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=UvcyIjzml7s&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&amp;#x26;index=1&amp;#x26;ab_channel=HewlettPackardEnterprise&quot;&gt;HPE + vFunction: Modernizing Legacy Applications and Data Sources Faster&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;p&gt;&lt;br&gt;&lt;br&gt;
&lt;a href=&quot;https://hackshack.hpedev.io/workshops&quot; style=&quot;font-weight: 700; font-size: 27px&quot;&gt;Workshops-on-Demand&lt;/a&gt;&lt;/p&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
    Free, in-depth, hands-on workshops that allow you to explore details of a technology by interacting with it. Designed to fit your schedule, these workshops are available 24/7 – from anywhere at any time.
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;/hackshack/workshop/24&quot;&gt;Kubernetes 101 – Introduction to Kubernetes Concepts&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;/hackshack/workshop/33&quot;&gt;Docker 101 – Introduction to Docker Concepts&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;/hackshack/workshop/31&quot;&gt;Ansible 101 – Introduction to Ansible Concepts&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;/hackshack/replays/36&quot;&gt;Introduction to Virtual Machine Infrastructure-as-Code using Terraform and HPE GreenLake for Private Cloud Enterprise&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;/hackshack/replays/32&quot;&gt;Creating a Zero Trust Model for Microservices Architectures with SPIRE and Envoy&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;/hackshack/workshop/0&quot;&gt;Intro to the HPE Ezmeral Runtime Enterprise REST API&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;/hackshack/workshop/9&quot;&gt;API 101 - API basics and the value they provide&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;/hackshack/workshop/21&quot;&gt;StackStorm 101 - Introduction to the StackStorm Framework&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;p&gt;&lt;br&gt;&lt;br&gt;&lt;/p&gt;
&lt;div style=&quot;font-weight: 700; font-size: 27px&quot;&gt;Documentation&lt;/div&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
&lt;p&gt;The &lt;a href=&quot;https://developer.greenlake.hpe.com/&quot;&gt;HPE GreenLake API portal&lt;/a&gt; offers documentation and API information along with informative videos and tutorials.&lt;/p&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;p&gt;&lt;br&gt;&lt;br&gt;&lt;/p&gt;
&lt;div style=&quot;font-weight: 700; font-size: 27px&quot;&gt;Engage&lt;/div&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
&lt;p&gt;Ping us with your comments, questions, and requests for information.&lt;/p&gt;
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;HPE Developer Community Slack&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://testdrive.greenlake.hpe.com/&quot;&gt;HPE GreenLake Test Drive&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
  &lt;/div&gt;  
&lt;/div&gt;</content:encoded></item><item><title><![CDATA[Open Source Advocate]]></title><description><![CDATA[To stay on the cutting edge, you know your organization needs the agility afforded by open source software and community collaboration. And…]]></description><link>https://developer.hpe.com/open-source-advocate/home/</link><guid isPermaLink="false">https://developer.hpe.com/open-source-advocate/home/</guid><content:encoded>&lt;style&gt;
.row {
  display: grid;
	grid-template-columns: 1fr 1fr;
  column-gap: 50px;
}
&lt;/style&gt;
&lt;p&gt;To stay on the cutting edge, you know your organization needs the agility afforded by open source software and community collaboration. And that’s what you’ll find here. Hewlett Packard Enterprise (HPE) has always had a culture of open collaboration and giving back to the community. It’s in our DNA. Open source collaboration is the whole idea behind the HPE Developer Community, offering a place where HPE and external engineers can come together to accelerate innovation.&lt;/p&gt;
&lt;p&gt;Our open source projects today focus on common problems customers are trying to solve: making hybrid IT simple, enabling cloud economics, programming data analyses on laptops through supercomputers, and securing digital infrastructure. But they go beyond providing innovative enterprise solutions to tackle global issues, such as ensuring the right people can access the right data wherever it’s needed to combat disease and hunger. We do so by contributing to projects focused on cloud-native application development, data fabric, and high performance computing (HPC), and by investing in future growth areas such as AI, machine learning, data science, and edge computing.&lt;/p&gt;
&lt;p&gt;Explore community-powered innovation here as we highlight a few of the latest open source contributions from HPE. We encourage you to join these community efforts, contributing where and when you can.&lt;/p&gt;
&lt;br&gt;
&lt;hr style=&quot;background: #7630EA; height: 5px; border: none&quot;&gt;
&lt;br&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
    &lt;h3&gt;Get Inspired&lt;/h3&gt;
    &lt;p&gt;&lt;strong&gt;Data science de facto standards emerge&lt;/strong&gt;&lt;/p&gt;
    &lt;p&gt;&lt;em&gt;Explore open standards for AI/ML tool and application development.&lt;/em&gt;&lt;/p&gt;
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
    &lt;h3&gt;Building a Foundation&lt;/h3&gt;
    &lt;p&gt;&lt;strong&gt;Secure from the start&lt;/strong&gt;&lt;/p&gt;
    &lt;p&gt;&lt;em&gt;Start with a zero-trust policy where everyone needs to be authenticated and authorized.&lt;/em&gt;&lt;/p&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
&lt;ul&gt;
&lt;li&gt;Improve scientific simulation efficiency and accuracy at scale using high-performance compute simulations and the open source framework of &lt;a href=&quot;https://developer.hpe.com/platform/smartsim/home/&quot;&gt;SmartSim&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Accelerate time-to-production with &lt;a href=&quot;https://developer.hpe.com/platform/determined-ai/home/&quot;&gt;Determined AI&lt;/a&gt;, an open source deep learning training platform to build and train models faster and easier&lt;/li&gt;
&lt;/ul&gt;
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
&lt;ul&gt;
&lt;li&gt;Use &lt;a href=&quot;https://developer.hpe.com/platform/spiffe-and-spire-projects/home/&quot;&gt;SPIFFE&lt;/a&gt; (Secure Production Identity Framework for Everyone) to securely authenticate services in dynamic and heterogeneous environments through platform-agnostic, cryptographic identification. This ensures distributed workloads can continuously establish mutual trust and encrypted communications within and across organizational boundaries, especially when communicating over untrusted networks (a short sketch of what this looks like in practice follows this section)&lt;/li&gt;
&lt;/ul&gt;
  &lt;/div&gt;
&lt;/div&gt;
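&lt;p&gt;As a minimal, hypothetical sketch of the mutual trust described above: assuming a SPIRE agent has already delivered an X.509 SVID certificate, private key, and trust bundle to local files (the file paths, host name, and port below are placeholders, not part of any SPIFFE API), a workload can open a mutually authenticated TLS connection with nothing more than the Python standard library:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Hedged sketch: mutual TLS between two workloads using X.509 SVIDs.
# Assumes the SPIRE agent has already written the SVID certificate, private
# key, and trust bundle to the placeholder paths below.
import socket, ssl

SVID_CERT = &apos;/tmp/svid.pem&apos;        # this workload&apos;s certificate (hypothetical path)
SVID_KEY = &apos;/tmp/svid_key.pem&apos;     # matching private key (hypothetical path)
TRUST_BUNDLE = &apos;/tmp/bundle.pem&apos;   # trust-domain CA bundle (hypothetical path)

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.load_cert_chain(SVID_CERT, SVID_KEY)    # present our identity to the peer
ctx.load_verify_locations(TRUST_BUNDLE)     # verify the peer against the trust bundle
ctx.check_hostname = False                  # SPIFFE peers are authorized by SPIFFE ID
ctx.verify_mode = ssl.CERT_REQUIRED         # (URI SAN), not DNS name, but a cert is required

with socket.create_connection((&apos;backend.internal&apos;, 8443)) as raw:
    with ctx.wrap_socket(raw) as tls:
        tls.sendall(b&apos;hello over mutually authenticated TLS&apos;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;A production workload would additionally fetch and rotate its SVID through the SPIFFE Workload API and check the peer certificate&apos;s SPIFFE ID before trusting it; the snippet only illustrates the mTLS handshake itself.&lt;/p&gt;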
&lt;hr&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
    &lt;p&gt;&lt;strong&gt;Prototype-then-port parallel processing apps&lt;/strong&gt;&lt;/p&gt;
    &lt;p&gt;&lt;em&gt;ML/DL and AI apps requiring large-scale simulation often need the power of parallel processing, but access to HPC hardware can be limited.&lt;/em&gt;&lt;/p&gt;
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
    &lt;p&gt;&lt;strong&gt;Connect seamlessly to trustworthy data&lt;/strong&gt;&lt;/p&gt;
    &lt;p&gt;&lt;em&gt;Dataspaces provide the building blocks to integrate diverse data sets from multiple distributed owners and gain broader discovery and access with improved governance and trust.&lt;/em&gt;&lt;/p&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
&lt;ul&gt;
&lt;li&gt;Consider writing your app in &lt;a href=&quot;https://developer.hpe.com/platform/chapel/home/&quot;&gt;Chapel&lt;/a&gt;, which allows you to prototype your app on a laptop and then run it at scale on larger systems&lt;/li&gt;
&lt;/ul&gt;
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
&lt;ul&gt;
&lt;li&gt;Create a standard and open approach to metadata, enabling easier cross-vertical discovery and sharing of distributed diverse data sets with &lt;a href=&quot;https://www.hpe.com/us/en/what-is/dataspaces.html&quot;&gt;HPE Dataspaces&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
    &lt;p&gt;&lt;strong&gt;Apply open source solutions to solve global issues&lt;/strong&gt;&lt;/p&gt;
    &lt;p&gt;&lt;em&gt;In an interconnected world, technology can enhance and protect our existence&lt;/em&gt;&lt;/p&gt;
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
    &lt;p&gt;&lt;strong&gt;Modernize the SQL stack for analytics transformations&lt;/strong&gt;&lt;/p&gt;
    &lt;p&gt;&lt;em&gt;Enterprises increasingly require data analytics services with cloud-like operations&lt;/em&gt;&lt;/p&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/agstack&quot;&gt;AgStack&lt;/a&gt; Foundation aims to improve global agriculture efficiency through the creation, maintenance and enhancement of free, reusable, open and specialized digital infrastructure for data and applications&lt;/li&gt;
&lt;/ul&gt;
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
&lt;ul&gt;
&lt;li&gt;Current on-premises SQL technologies don’t account for the new requirements around hybrid cloud and scale. &lt;a href=&quot;https://github.com/prestodb/presto&quot;&gt;Presto&lt;/a&gt;, a fast, distributed SQL query engine, is built to meet these requirements&lt;/li&gt;
&lt;/ul&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;br&gt;
&lt;hr style=&quot;background: #33DAC8; height: 5px; border: none&quot;&gt;
&lt;br&gt;
&lt;h3&gt;Addressing Key Concerns&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;I have data everywhere, including supplier and customer sites. I need to take advantage of it for AI and machine learning applications. What open source technologies can help me do this?&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;View this video to learn how &lt;a href=&quot;https://www.hpe.com/h22228/video-gallery/us/en/505e5730-c199-4ca8-9ee7-d70ba6f6f7fa/dataspaces--connecting-you-to-data-you-can-trust/video/&quot;&gt;HPE Dataspaces&lt;/a&gt; connects data consumers with data producers offering a clean and trusted exchange of data&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;How can open, interconnected data access solve global issues?&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Explore how HPE’s research partnership with &lt;a href=&quot;https://www.cgiar.org/how-we-work/governance/system-organization/&quot;&gt;CGIAR System Organization&lt;/a&gt;, &lt;a href=&quot;https://agstack.org/&quot;&gt;The AgStack Foundation&lt;/a&gt; and &lt;a href=&quot;https://www.digitalgreen.org/&quot;&gt;Digital Green&lt;/a&gt; enables farmers to find the data they need to advance their agricultural goals in this &lt;a href=&quot;https://community.hpe.com/t5/Advancing-Life-Work/Dataspaces-how-an-open-metadata-layer-can-establish-a/ba-p/7149075#.YjUz4hDMLlx&quot;&gt;article&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Where can I find an open source tool that helps me build and train ML models faster and easier?&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Read &lt;a href=&quot;https://developer.hpe.com/blog/determined-ai-is-joining-hewlett-packard-enterprise/&quot;&gt;Determined AI is Joining Hewlett Packard Enterprise&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Learn more about &lt;a href=&quot;https://developer.hpe.com/platform/determined-ai/home/&quot;&gt;Determined AI&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;My machine-learning application requires the power of parallel processing. But access to HPC machines is limited. Is there a way I can prototype my app on less expensive hardware and then port it over to a cluster or supercomputer?&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Explore &lt;a href=&quot;https://developer.hpe.com/platform/chapel/home/&quot;&gt;Chapel&lt;/a&gt;, an open source programming language that aims to solve the “prototype then port” issue, developed by Cray, an HPE company&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Learn more about &lt;a href=&quot;https://chapel-lang.org/&quot;&gt;Chapel&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://chapel.discourse.group/latest&quot;&gt;Join the Chapel community&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;How do I secure an application in heterogeneous, distributed environments?&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Learn how the CNCF-incubated project, &lt;a href=&quot;https://developer.hpe.com/platform/spiffe-and-spire-projects/home/&quot;&gt;SPIFFE/SPIRE&lt;/a&gt;, can help in the eBook &lt;a href=&quot;https://spiffe.io/book/&quot;&gt;Solving the Bottom Turtle&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Explore some of the &lt;a href=&quot;https://developer.hpe.com/blog/top-13-capabilities-within-spiffe-and-spire-released-in-2019/&quot;&gt;top capabilities of SPIFFE/SPIRE&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://slack.spiffe.io/&quot;&gt;Join the SPIFFE/SPIRE community&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;How can I deploy my stateful application on my modern, open Kubernetes platform?&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Learn about and contribute to the HPE-sponsored open source project &lt;a href=&quot;https://kubedirector.io/&quot;&gt;KubeDirector&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Deploy cloud-native and non-cloud-native apps running on bare-metal or virtualized infrastructure, on-premises, on any cloud and at the edge with KubeDirector-powered &lt;a href=&quot;https://developer.hpe.com/platform/hpe-ezmeral/home/&quot;&gt;HPE Ezmeral Runtime Enterprise&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;How can I manage my systems using open source standards?&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Simply and securely manage your data center hardware with the &lt;a href=&quot;https://developer.hpe.com/platform/ilo-restful-api/home/&quot;&gt;Redfish API ecosystem&lt;/a&gt; (a short example follows this list)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Learn why the &lt;a href=&quot;https://www.hpe.com/psnow/doc/4AA6-1727ENW?jumpid=in_lit-psnow-red&quot;&gt;industry is transitioning to Redfish&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
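&lt;p&gt;As a minimal, hedged sketch of what a Redfish call looks like, the following Python snippet uses the requests library to list the systems exposed by a management controller. The BMC address and credentials are placeholders, not real endpoints:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Minimal Redfish sketch: list the systems exposed by a management controller.
# The BMC address and credentials below are placeholders, not real values.
import requests

BMC = &apos;https://bmc.example.net&apos;          # hypothetical iLO/BMC address
AUTH = (&apos;demo-user&apos;, &apos;demo-password&apos;)    # placeholder credentials

# The Redfish service root is always /redfish/v1/; Systems is a standard collection.
resp = requests.get(BMC + &apos;/redfish/v1/Systems&apos;, auth=AUTH, verify=False)
resp.raise_for_status()

for member in resp.json().get(&apos;Members&apos;, []):
    print(member[&apos;@odata.id&apos;])           # e.g. /redfish/v1/Systems/1
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In production you would verify TLS certificates and authenticate through a Redfish session token rather than basic authentication; the snippet only shows the shape of the API.&lt;/p&gt;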
&lt;p&gt;&lt;strong&gt;Where can I find open source design system and development framework tools to create responsive, accessible mobile-first apps?&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Build awesome apps with the HPE-sponsored open source project, &lt;a href=&quot;/platform/grommet/home/&quot;&gt;Grommet&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;br&gt;
&lt;hr style=&quot;background: #FF8300; height: 5px; border: none&quot;&gt;
&lt;br&gt;
&lt;h3&gt;Skill Up&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.hpe.com/campaign/munch-and-learn/&quot; style=&quot;font-weight: 700; font-size: 27px&quot;&gt;Munch &amp;#x26; Learn technology talk&lt;/a&gt;&lt;/p&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
    Monthly meetups where you can hear from experts on the newest technologies. Catch up on any you may have missed and register for upcoming talks.
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
&lt;ul&gt;
&lt;li&gt;Mithril: Introducing Robust Identities into Istio by integrating with SPIRE&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=ktZFLD-9qgw&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Golden Age of AI, Dark Ages of AI Infrastructure&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=Q1Qeb24lpKg&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Redfish: Past, Present and Future&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=4WKjRqflF7M&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&amp;#x26;ab_channel=HPETechnology&quot;&gt;How to make data consumable for real-world data science&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=G1ceKr16nn8&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&amp;#x26;ab_channel=HPETechnology&quot;&gt;Building a foundation for zero trust with SPIFFE&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=qi6sTvu8osk&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;What’s a data fabric and how does it work?&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;p&gt;&lt;br&gt;&lt;br&gt;
&lt;a href=&quot;https://developer.hpe.com/campaign/meetups/&quot; style=&quot;font-weight: 700; font-size: 27px&quot;&gt;Meetups&lt;/a&gt;&lt;/p&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
    A series of in-depth talks on open source developer technologies.
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
&lt;ul&gt;
&lt;li&gt;Decoupled policy enforcement with Open Policy Agent&lt;/li&gt;
&lt;li&gt;HPE + vFunction: Modernizing Legacy Applications and Data Sources Faster&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://hpe.zoom.us/webinar/register/5716414626617/WN_VS7nBF_qQze0G64XLzBilw&quot;&gt;Quarkus - Supersonic Subatomic Java&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://hpe.zoom.us/webinar/register/2016414625150/WN_FzzTDsTjQBSw-UFwD6UTdw&quot;&gt;Streamlit - The fastest way to build and share data science apps&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;p&gt;&lt;br&gt;&lt;br&gt;
&lt;a href=&quot;https://hackshack.hpedev.io/workshops&quot; style=&quot;font-weight: 700; font-size: 27px&quot;&gt;Workshops-on-Demand&lt;/a&gt;&lt;/p&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
    Free, in-depth, hands-on workshops that allow you to explore details of a technology by interacting with it. Designed to fit your schedule, these workshops are available 24/7 – from anywhere at any time.
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://hackshack.hpedev.io/workshop/24&quot;&gt;Kubernetes 101 - Introduction to Kubernetes Concepts&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://hackshack.hpedev.io/workshop/27&quot;&gt;SPIFFE - SPIRE 101 – An introduction to SPIFFE server and SPIRE agent security concepts&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://hackshack.hpedev.io/workshop/9&quot;&gt;API 101 - REST API basics and the value they provide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://hackshack.hpedev.io/workshop/15&quot;&gt;Python 101 - Introduction to Python programming language&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://hackshack.hpedev.io/workshop/14&quot;&gt;Streamline app development with open source Grommet&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://hackshack.hpedev.io/workshop/16&quot;&gt;RUST 101 - Introduction to the Rust programming language&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://hackshack.hpedev.io/workshop/17&quot;&gt;GIT 101 – Get involved in the open source community&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/hackshack/workshop/21&quot;&gt;StackStorm 101 – Introduction to the StackStorm automation features&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/hackshack/workshop/25&quot;&gt;Jupyter Notebooks 101 – A simple how to on Jupyter Notebooks&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;p&gt;&lt;br&gt;&lt;br&gt;&lt;/p&gt;
&lt;div style=&quot;font-weight: 700; font-size: 27px&quot;&gt;Documentation&lt;/div&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
    Find documentation, API information, videos and tutorials on our &lt;a href=&quot;/opensource/&quot;&gt;open source platform pages&lt;/a&gt;.
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://chapel-lang.org/&quot;&gt;Chapel&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;/platform/determined-ai/home/&quot;&gt;Determined AI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;/platform/grommet/home/&quot;&gt;Grommet&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;/platform/hpe-ezmeral/home/&quot;&gt;KubeDirector&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;/platform/smartsim/home/&quot;&gt;SmartSim&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;/platform/spiffe-and-spire-projects/home/&quot;&gt;SPIFFE/SPIRE&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
  &lt;/div&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;p&gt;&lt;br&gt;&lt;br&gt;&lt;/p&gt;
&lt;div style=&quot;font-weight: 700; font-size: 27px&quot;&gt;Engage&lt;/div&gt;
&lt;div class=&quot;row&quot;&gt;
  &lt;div class=&quot;column&quot;&gt;
    Ping us with your comments, questions, and requests for information. We recommend starting with the HPE Developer Slack workspace.
  &lt;/div&gt;
  &lt;div class=&quot;column&quot;&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;HPE Developer Slack&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://grommet.slack.com/&quot;&gt;Grommet Slack&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://chapel.discourse.group/latest&quot;&gt;Chapel Discourse Forum&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
  &lt;/div&gt;
&lt;/div&gt;</content:encoded></item><item><title><![CDATA[Build data pipelines to inform decision-making processes]]></title><description><![CDATA[Data is growing in all dimensions, including its importance to a business. Building an end-to-end data analytics pipeline is the only way to…]]></description><link>https://developer.hpe.com/build-data-pipelines-to-inform-descision-making-processes/home/</link><guid isPermaLink="false">https://developer.hpe.com/build-data-pipelines-to-inform-descision-making-processes/home/</guid><content:encoded>&lt;p&gt;Data is growing in all dimensions, including its importance to a business. Building an end-to-end data analytics pipeline is the only way to connect the dots. Now that data is distributed over multiple sites in different locations with data processing taking place outside the traditional data center environment, you need to find new ways to make information available from the edge to the cloud quickly, securely, and reliably.&lt;/p&gt;
&lt;p&gt;Today’s advanced data fabric can help you build the data pipelines you need to inform your real-time decision-making processes. When you use &lt;a href=&quot;https://community.hpe.com/t5/HPE-Ezmeral-Uncut/If-HPE-Ezmeral-Data-Fabric-is-the-answer-what-is-the-question/ba-p/7092812#.YUjRDmZKj0q&quot;&gt;HPE Ezmeral Data Fabric&lt;/a&gt;, you can build data pipelines that support modern workloads that demand data to be handled wherever it is – from edge-to-core and edge-to-cloud, in any direction, at any speed, and in any format. Check out the resources found here on the HPE Developer portal to get started. Ping us on &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;Slack&lt;/a&gt; if you have any questions or need more information.&lt;/p&gt;
&lt;h2&gt;Documentation and tutorials&lt;/h2&gt;
&lt;p&gt;The &lt;a href=&quot;https://developer.hpe.com/platform/hpe-ezmeral-data-fabric/home/#tutorials&quot;&gt;HPE Ezmeral Data Fabric platform page&lt;/a&gt; on the HPE Developer portal offers documentation and API information along with informative videos and tutorials.&lt;/p&gt;
&lt;h2&gt;Munch &amp;#x26; Learn&lt;/h2&gt;
&lt;p&gt;The Munch &amp;#x26; Learn Technology Talks are monthly community meetups where you can hear from experts on the newest technologies. Catch up on any you may have missed and register for upcoming talks &lt;a href=&quot;https://developer.hpe.com/campaign/munch-and-learn/&quot;&gt;here&lt;/a&gt;. For those interested in data science, you’ll find a number of interesting sessions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=qi6sTvu8osk&quot;&gt;What’s a data fabric and how does it work?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=Inh6eXM0EbA&quot;&gt;Data Science Unplugged: Part 1&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=Va4tSr__Yok&quot;&gt;Data Science Unplugged: Part 2&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=4WKjRqflF7M&quot;&gt;How to make data consumable for real-world data science&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Workshops-on-Demand&lt;/h2&gt;
&lt;p&gt;The &lt;a href=&quot;/hackshack/workshops&quot;&gt;Workshops-on-Demand&lt;/a&gt; are free, Jupyter Notebook-based workshops that offer an in-depth, hands-on learning experience. Explore details of a technology by interacting with it. Designed to fit your schedule, these workshops are available 24/7 – from anywhere at any time. Check out these interesting HPE Ezmeral Data Fabric workshops:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;/hackshack/workshop/28&quot;&gt;Building a dynamic Machine Learning pipeline with KubeDirector&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;/hackshack/workshop/29&quot;&gt;Deploying end-to-end machine learning workflows with HPE Ezmeral MLOps&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Blog articles&lt;/h2&gt;
&lt;p&gt;The HPE Developer blog offers many articles and tutorials to help you learn about data fabric and build data pipelines that deliver the data you need, where and when you need it. Explore our rich library of articles:&lt;/p&gt;
&lt;p&gt;(Blog feed)&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Design a website with interactive experiences]]></title><link>https://developer.hpe.com/design-a-website-with-interactive/home/</link><guid isPermaLink="false">https://developer.hpe.com/design-a-website-with-interactive/home/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Employ DevOps practices to accelerate]]></title><link>https://developer.hpe.com/employ-devlops-practives-to-accelerate/home/</link><guid isPermaLink="false">https://developer.hpe.com/employ-devlops-practives-to-accelerate/home/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Deploy apps across private, public, and hybrid clouds]]></title><link>https://developer.hpe.com/deploy-apps-across-private-public-and-hybrid/home/</link><guid isPermaLink="false">https://developer.hpe.com/deploy-apps-across-private-public-and-hybrid/home/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Provision and configure cloud-based and on-prem services]]></title><link>https://developer.hpe.com/provision-and-configure-cloud-based-and-on-prem-services/home/</link><guid isPermaLink="false">https://developer.hpe.com/provision-and-configure-cloud-based-and-on-prem-services/home/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Secure your infrastructure]]></title><link>https://developer.hpe.com/secure-your-infrastructure/home/</link><guid isPermaLink="false">https://developer.hpe.com/secure-your-infrastructure/home/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Transform manual into automated processes]]></title><link>https://developer.hpe.com/transform-manual-into-automated/home/</link><guid isPermaLink="false">https://developer.hpe.com/transform-manual-into-automated/home/</guid><content:encoded></content:encoded></item><item><title><![CDATA[What other use cases are missing? AI?]]></title><link>https://developer.hpe.com/what-other-use-cases-are-missing/home/</link><guid isPermaLink="false">https://developer.hpe.com/what-other-use-cases-are-missing/home/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Be a Blogger!]]></title><description><![CDATA[Read instructions found in Be a Blogger. Create a Personal GitHub account or sign into your GitHub account. Review tips offered in the <a…]]></description><link>https://developer.hpe.com/contribute/</link><guid isPermaLink="false">https://developer.hpe.com/contribute/</guid><content:encoded>&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Read instructions found in &lt;a href=&quot;https://developer.hpe.com/blog/be-an-hpe-dev-blogger/&quot;&gt;Be a Blogger&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a &lt;a href=&quot;https://github.com/signup&quot;&gt;Personal GitHub account&lt;/a&gt; or sign into your GitHub account.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Review tips offered in the &lt;a target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot; href=&quot;https://github.com/hpe-dev-incubator/hpe-dev-portal/blob/master/docs/ContributorGuide-v2.md&quot;&gt;HPE Developer External Contributor Guide&lt;/a&gt; regarding the Content Management System (CMS).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Download and review the &lt;a href=&quot;https://brandcentral.hpe.com/uploads/media/2026/2/writing-style-guide-aug-2025v2-1755635696557-7-1771886948866.pdf&quot;&gt;HPE Brand Writing Style Guide&lt;/a&gt; for accurate and relevant styling.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click the &lt;strong&gt;Start Now&lt;/strong&gt; button to connect to our Content Management System, fork our repository, and start working on your blog.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;link rel=&quot;stylesheet&quot; href=&quot;https://www.w3schools.com/w3css/4/w3.css&quot;&gt;
&lt;style&gt;
  .button {
    background-color: rgba(23,235,160,1);
    box-sizing: border-box;
    color: #000000; 
    font-size: 18px; 
    display: inline-block;
    padding: 6px 12px;
    vertical-align: middle;
    overflow: hidden;
    text-decoration: none;
    text-align: center;
    cursor: pointer;
    white-space: nowrap;
    border-radius: 4px;
    border: none;
    margin: 0;
    line-height: 24px;
    font-weight: 700;
  } 
&lt;/style&gt;
&lt;div class=&quot;w3-container w3-center w3-margin-bottom&quot;&gt;
  &lt;a target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot; href=&quot;https://developer.hpe.com/admin&quot;&gt;&lt;button type=&quot;button&quot; class=&quot;button&quot;&gt;Start Now&lt;/button&gt;&lt;/a&gt;
&lt;/div&gt;</content:encoded></item><item><title><![CDATA[Be an Open Source Contributor]]></title><description><![CDATA[In the world of open source, you need to be fluent with Git. If you are not yet, we recommend you read the following 3-part blog series…]]></description><link>https://developer.hpe.com/osscontribute/</link><guid isPermaLink="false">https://developer.hpe.com/osscontribute/</guid><content:encoded>&lt;p&gt;In the world of open source, you need to be fluent with &lt;a href=&quot;https://git-scm.com/&quot;&gt;Git&lt;/a&gt;. If you are not yet, we recommend you read the following 3-part blog series:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/blog/get-involved-in-the-open-source-community-part-1-getting-started-with-git/&quot;&gt;Getting started with Git&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/blog/get-involved-in-the-open-source-community-part-2-sharing-with-the-commun/&quot;&gt;Sharing with the community&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/blog/get-involved-in-the-open-source-community-part-3-contributing-back-to-th/&quot;&gt;Contributing back&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;We also offer a Workshop-on-Demand that you can take to get hands-on experience with Git. Feel free to register by clicking below:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/hackshack/workshop/17&quot;&gt;Getting started with Git&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;When you get started with open source, it means that you will either contribute to an existing project or create your own project. If you want to create your own project, don’t forget to create a public repo for it. For HPE employees, be aware that there is an open source review process you need to follow before releasing any code to open source. Make sure you check this &lt;a href=&quot;https://opensource.corp.hpecorp.net/osrp_process_upstream.html&quot;&gt;HPE-only website&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The HPE Developer Community has highlighted a number of contributors/maintainers in a series of blog posts. Read about their respective journeys and join the team in our Hall of Fame:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/blog/open-source-contributor-explains-how-kubedirector-empowers-data-intensive-apps/&quot;&gt;Kartik Mathur&lt;/a&gt; (KubeDirector)&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/blog/becoming-a-linux-kernel-contributor-following-the-journey-of-souptick-joarder/&quot;&gt;Souptick Joarder&lt;/a&gt; (Linux kernel)&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/blog/meet-linux-distinguished-technologist-and-open-source-evangelist-bruno-cornec/&quot;&gt;Bruno Cornec&lt;/a&gt; (MondoRescue)&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/blog/spire-maintainer-agustn-martnez-fay-reveals-his-passion-for-information-/&quot;&gt;Agustín Martínez Fayó&lt;/a&gt; (SPIRE)&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/blog/chapel-technical-lead-brad-chamberlain-opens-up-about-open-source/&quot;&gt;Brad Chamberlain&lt;/a&gt; (Chapel)&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/blog/meet-open-source-grommet-lead-developer-and-architect-shimrit-yacobi/&quot; title=&quot;https://developer.hpe.com/blog/meet-open-source-grommet-lead-developer-and-architect-shimrit-yacobi/&quot;&gt;Shimrit Yacobi&lt;/a&gt; (Grommet)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;One last piece of advice. When joining an existing open source project in order to contribute, you will have to make yourself known to other contributors and maintainers. This might take time. Be patient. Start small by documenting issues in the code or in the documentation and propose a solution. You can also review and comment on proposed changes. In most cases, there will be a Slack or a Gitter forum dedicated to the project. Don’t hesitate to join and start a discussion there. Once your name becomes associated with good feedback and proposals, it will be a lot easier for you to contribute code and get it approved and merged.&lt;/p&gt;
&lt;p&gt;And that’s when it gets really exciting.&lt;/p&gt;
&lt;p&gt;We estimate that there are more than 200 million repositories on GitHub today, so finding the right place to engage is not that easy. If you are looking for a good project to contribute to, we suggest the following list (in alphabetical order). Enjoy the journey and check out our &lt;strong&gt;&lt;a href=&quot;https://developer.hpe.com/opensource&quot;&gt;Open Source&lt;/a&gt;&lt;/strong&gt; page to learn more about some of the key projects HPE supports.&lt;/p&gt;
&lt;style&gt; table { display: block; width: 100%; width: max-content; max-width: 100%; overflow: auto; -webkit-box-shadow: none; -moz-box-shadow: none; box-shadow: none; border:1px solid grey; } td { -webkit-box-shadow: none; -moz-box-shadow: none; box-shadow: none; border:1px solid grey; text-align: left !important; padding: 10px !important; } thead tr:first-child td { -webkit-box-shadow: none; -moz-box-shadow: none; box-shadow: none; border:1px solid grey; text-align: center !important; padding: 20px !important; font-weight: bold !important; } &lt;/style&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;   Name/Repo&lt;/th&gt;
&lt;th&gt;   Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/agstack&quot;&gt;AgStack&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;AgStack consists of an open repository to create and publish models, with free and easy access to public data, interoperable frameworks for cross-project use, and topic-specific extensions and toolboxes. It leverages existing technologies, such as agriculture standards (AgGateway, UN-FAO, CAFA, USDA and NASA-AR); public data (Landsat, Sentinel, NOAA and Soilgrids); models (UC-ANR IPM); and open source projects like Hyperledger, Kubernetes, Open Horizon, Postgres, Django and more.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/apache/logging-log4j2&quot;&gt;Apache Logging (log4j)&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Apache Log4j is a Java-based logging utility originally   written by Ceki Gülcü. It is part of the Apache Logging Services, a project of the Apache Software Foundation.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/apache/airflow&quot;&gt;Airflow&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Apache Airflow (or simply Airflow) is a platform to   programmatically author, schedule, and monitor workflows.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/arrow-kt/arrow&quot;&gt;Arrow&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Arrow is a library for Typed Functional Programming in Kotlin.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/gchq/Bailo&quot;&gt;Bailo&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Managing the lifecycle of machine learning to support scalability, impact, collaboration, compliance and sharing.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/apache/calcite&quot;&gt;Calcite&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Apache Calcite is a dynamic data management framework.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/chapel-lang/chapel&quot;&gt;Chapel&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Chapel is a modern programming language designed for productive parallel computing at scale. Chapel&apos;s design and implementation have been undertaken with portability in mind, permitting Chapel to run on multicore desktops and laptops, commodity clusters, and the cloud, in   addition to the high-end supercomputers for which it was originally created.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/HewlettPackard/cmf&quot;&gt;CMF&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Common metadata framework (CMF) addresses the problems associated with tracking of pipeline metadata from distributed sites and tracks code, data and metadata together for end-to-end traceability.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/hpe-storage/csi-driver&quot;&gt;HPE CSI&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;A Container Storage Interface (CSI) Driver for Kubernetes. The HPE CSI Driver for Kubernetes allows you to use a Container Storage Provider   (CSP) to perform data management operations on storage resources.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/datahub-project/datahub&quot;&gt;DataHub&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;The Metadata Platform for the Modern Data Stack.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/determined-ai/determined&quot;&gt;Determined AI &lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Determined is an open source deep learning training platform that makes building models fast and easy.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/apache/drill&quot;&gt;Drill&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Apache Drill is a distributed MPP query layer that supports SQL and alternative query languages against NoSQL and Hadoop data storage systems. It was inspired, in part, by Google&apos;s Dremel.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/apache/druid&quot;&gt;Druid&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Druid is a high performance real-time analytics database.   Druid&apos;s main value add is to reduce time to insight and action.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/falcosecurity/falco&quot;&gt;Falco&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;The Falco Project, originally created by Sysdig, is an incubating CNCF open source cloud native runtime security tool. Falco makes it easy to consume kernel events, and enrich those events with information from Kubernetes and the rest of the cloud native stack. Falco can also be   extended to other data sources by using plugins. Falco has a rich set of   security rules specifically built for Kubernetes, Linux, and cloud-native. If a rule is violated in a system, Falco will send an alert notifying the user of the violation and its severity.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/fluent&quot;&gt;Fluent&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Fluentd is a cloud native logging solution to unify data   collection and consumption.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/grafana&quot;&gt;Grafana&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;The open and composable observability and data visualization platform. Visualize metrics, logs, and traces from multiple sources like Prometheus, Loki, Elasticsearch, InfluxDB, Postgres and many more.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/grommet/&quot;&gt;Grommet&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;A react-based framework that provides accessibility,   modularity, responsiveness, and theming in a tidy package.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/istio&quot;&gt;Istio&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;An open platform to connect, manage, and secure microservices.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/jaegertracing/jaeger&quot;&gt;Jaeger&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Jaeger, inspired by Dapper and OpenZipkin, is a distributed tracing platform created by Uber Technologies and donated to Cloud Native   Computing Foundation. It can be used for monitoring microservices-based  distributed systems.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/JuliaLang&quot;&gt;Julia&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Julia is a high-level, high-performance, dynamic programming   language. While it is a general-purpose language and can be used to write any application, many of its features are well suited for numerical analysis and   computational science.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/jupyter/notebook&quot;&gt;Jupyter Notebook&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;The Jupyter notebook is a web-based notebook environment for interactive computing.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/apache/kafka&quot;&gt;Kafka&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Apache Kafka is an open-source distributed event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/bluek8s/kubedirector&quot;&gt;KubeDirector&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;KubeDirector uses standard Kubernetes (K8s) facilities of   custom resources and API extensions to implement stateful scaleout application clusters. This approach enables transparent integration with K8s user/resource management and existing K8s clients and tools.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/kubernetes/kubernetes&quot;&gt;Kubernetes&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Kubernetes, also known as K8s, is an open source system for managing containerized applications across multiple hosts. It provides basic   mechanisms for deployment, maintenance, and scaling of applications. Kubernetes builds upon a decade and a half of experience at Google running production workloads at scale using a system called Borg, combined with best-of-breed ideas and   practices from the community.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/lf-edge&quot;&gt;LFEdge&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;LF Edge aims to establish an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/torvalds/linux&quot;&gt;Linux&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;The Linux operating system.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/HewlettPackard/LinuxKI&quot;&gt;Linux KI&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;The LinuxKI Toolset (or LinuxKI for short) is an open source, advanced, mission-critical performance troubleshooting tool for Linux. It is designed to identify performance issues beyond the typical performance metrics, enabling faster root-cause analysis for many performance problems.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/llvm/llvm-project&quot;&gt;LLVM&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;The LLVM Project is a collection of modular and reusable compiler and toolchain technologies.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/ODIM-Project/ODIM&quot;&gt;ODIM&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Open Distributed Infrastructure Management (ODIM). A bold collaborative open source initiative to bring together a critical mass of infrastructure management and orchestration stakeholders to define and execute the collaborative work in several areas.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/OpenLineage/OpenLineage&quot;&gt;OpenLineage&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;An Open Standard for lineage metadata collection.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/open-metadata/OpenMetadata&quot;&gt;OpenMetaData&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Open Standard for Metadata. A Single place to Discover, Collaborate and Get your data right.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/open-telemetry&quot;&gt;OpenTelemetry&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;OpenTelemetry is a vendor-neutral standard way to collect telemetry data for applications, their supporting infrastructures, and services.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/openshmem-org&quot;&gt;OpenSHMEM&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;OpenSHMEM is an effort to create a specification for a   standardized API for parallel programming in the Partitioned Global Address Space. Along with the specification the project is also creating a reference   implementation of the API.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/openbmc&quot;&gt;OpenBMC&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;OpenBMC is a Linux distribution for management controllers used in devices such as servers, top of rack switches or RAID appliances. It uses Yocto, OpenEmbedded, systemd, and D-Bus to allow easy customization for your platform.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/prestodb&quot;&gt;PrestoDB&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Presto is a distributed SQL query engine for big data.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/prometheus&quot;&gt;Prometheus&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Prometheus, a Cloud Native Computing Foundation project, is a systems and service monitoring system. It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts when specified conditions are observed.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/rstudio&quot;&gt;R Studio&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;RStudio is an integrated development environment (IDE) for the R programming language.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/CrayLabs/SmartSim&quot;&gt;SmartSim&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;SmartSim is a workflow library that makes it easier to use common Machine Learning (ML) libraries, like PyTorch and TensorFlow, in High Performance Computing (HPC) simulations and applications. SmartSim launches ML infrastructure on HPC systems alongside user workloads.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/apache/spark&quot;&gt;Spark&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Spark is a unified analytics engine for large-scale data   processing. It provides high-level APIs in Scala, Java, Python, and R, and an optimized engine that supports general computation graphs for data analysis. It also supports a rich set of higher-level tools including Spark SQL for SQL   and DataFrames, pandas API on Spark for pandas workloads, MLlib for machine   learning, GraphX for graph processing, and Structured Streaming for stream processing.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/spiffe&quot;&gt;SPIFFE/SPIRE&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;SPIRE (the SPIFFE Runtime Environment) is a toolchain of APIs for establishing trust between software systems across a wide variety of hosting platforms. SPIRE exposes the SPIFFE Workload API, which can attest running software systems and issue SPIFFE IDs and SVIDs to them. This in turn allows two workloads to establish trust between each other, for example by establishing an mTLS connection or by signing and verifying a JWT token. SPIRE can also enable workloads to securely authenticate to a secret store, a database, or a cloud provider service.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/apache/zookeeper&quot;&gt;Zookeeper&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;ZooKeeper is a centralized service for maintaining   configuration information, naming, providing distributed synchronization, and providing group services.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;</content:encoded></item><item><title><![CDATA[HPE Developer Evangelist]]></title><description><![CDATA[Are you a fan of HPE Developer? Become an HPE Developer Evangelist and help us spread the word on how developers, data scientists, and IT…]]></description><link>https://developer.hpe.com/evangelist/</link><guid isPermaLink="false">https://developer.hpe.com/evangelist/</guid><content:encoded>&lt;p&gt;Are you a fan of HPE Developer? Become an HPE Developer Evangelist and help us spread the word on how developers, data scientists, and IT technologists can write applications and develop integrations for Hewlett Packard Enterprise enabled environments. As an HPE Developer Evangelist, you’ll be the primary HPE Developer contact for your region/country for colleagues, HPE partners and customers. You’ll forward communications to them, alerting them of upcoming meetups, Munch &amp;#x26; Learn sessions and promoting the newest Workshops-on-Demand.&lt;/p&gt;
&lt;p&gt;Being an HPE Developer Evangelist means you’ll get special attention. When you represent HPE Developer at selected local events, you’ll be supported with swag to help promote how HPE focuses on open, collaborative software development. You’ll be able to propose new subjects for meetups and workshops, influencing our roadmap. And you can rely on the HPE Developer Slack Channel and Newsletter to provide you with the support you need to help answer questions and identify the newest blog posts, workshops, and events to promote.&lt;/p&gt;
&lt;p&gt;What are you waiting for? Sign up today!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE Sustainability Strategy and Sustainability Research at Hewlett Packard Labs]]></title><link>https://developer.hpe.com/hackshackhome-1/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-1/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Collaborating at scale with Postman]]></title><link>https://developer.hpe.com/hackshackhome-10/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-10/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Divide and conquer with MicroFrontends]]></title><link>https://developer.hpe.com/hackshackhome-11/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-11/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Leveraging Tech to Address Global Challenges & Health]]></title><link>https://developer.hpe.com/hackshackhome-12/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-12/</guid><content:encoded></content:encoded></item><item><title><![CDATA[    WIN IN THE HPE DEV TREASURE HUNT!]]></title><link>https://developer.hpe.com/hackshackhome-14/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-14/</guid><content:encoded></content:encoded></item><item><title><![CDATA[HPE Machine Learning Development Environment and the open source ML advantage]]></title><link>https://developer.hpe.com/hackshackhome-13/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-13/</guid><content:encoded></content:encoded></item><item><title><![CDATA[A new era of software development using large language model tools like ChatGPT ]]></title><link>https://developer.hpe.com/hackshackhome-15/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-15/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Digital twins, the Metaverse, and augmented reality: Immersive technologies powered by AI]]></title><link>https://developer.hpe.com/hackshackhome-16/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-16/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Observability in action]]></title><link>https://developer.hpe.com/hackshackhome-17/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-17/</guid><content:encoded></content:encoded></item><item><title><![CDATA[DevOps Alert: Tool Sprawl]]></title><link>https://developer.hpe.com/hackshackhome-18/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-18/</guid><content:encoded></content:encoded></item><item><title><![CDATA[State of the Nation – Linux distributions]]></title><link>https://developer.hpe.com/hackshackhome-19/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-19/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Introduction to KubeFlow]]></title><link>https://developer.hpe.com/hackshackhome-2/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-2/</guid><content:encoded></content:encoded></item><item><title><![CDATA[From containers, to pods, to Kubernetes]]></title><link>https://developer.hpe.com/hackshackhome-20/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-20/</guid><content:encoded></content:encoded></item><item><title><![CDATA[The open-source advantage: Exploring machine learning through thought 
leadership]]></title><link>https://developer.hpe.com/hackshackhome-21/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-21/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Optimizing deep neural network inference workloads]]></title><link>https://developer.hpe.com/hackshackhome-22/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-22/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Automated continuous deployment of container-based applications]]></title><link>https://developer.hpe.com/hackshackhome-23/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-23/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Exploring HPE GreenLake Platform APIs through use cases]]></title><link>https://developer.hpe.com/hackshackhome-24/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-24/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Getting started with HPE GreenLake for Compute Ops Management APIs]]></title><link>https://developer.hpe.com/hackshackhome-25/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-25/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Getting Started with Aruba Central automation]]></title><link>https://developer.hpe.com/hackshackhome-26/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-26/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Using HPE GreenLake edge-to-cloud platform to build a private cloud]]></title><link>https://developer.hpe.com/hackshackhome-28/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-28/</guid><content:encoded></content:encoded></item><item><title><![CDATA[APIs for HPE GreenLake for block storage and next-gen platforms ]]></title><link>https://developer.hpe.com/hackshackhome-27/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-27/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Secure GenAI Adoption for all!]]></title><link>https://developer.hpe.com/hackshackhome-29/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-29/</guid><content:encoded></content:encoded></item><item><title><![CDATA[An overview of SRE]]></title><link>https://developer.hpe.com/hackshackhome-3/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-3/</guid><content:encoded></content:encoded></item><item><title><![CDATA[ Enabling business automation using HPE GreenLake platform foundational APIs]]></title><link>https://developer.hpe.com/hackshackhome-30/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-30/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Learn how AI hackers detect fragility and how to thwart them with AI model resilience]]></title><link>https://developer.hpe.com/hackshackhome-31/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-31/</guid><content:encoded></content:encoded></item><item><title><![CDATA[The data services family of APIs for HPE GreenLake – Putting it all together]]></title><link>https://developer.hpe.com/hackshackhome-32/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-32/</guid><content:encoded></content:encoded></item><item><title><![CDATA[The Transformative Impact of Generative AI on Telco Products]]></title><link>https://developer.hpe.com/hackshackhome-33/</link><guid 
isPermaLink="false">https://developer.hpe.com/hackshackhome-33/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Using HPE GreenLake for Red Hat OpenShift to migrate, modernize and run your applications smoothly]]></title><link>https://developer.hpe.com/hackshackhome-34/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-34/</guid><content:encoded></content:encoded></item><item><title><![CDATA[How digital twins help companies reach their sustainability goals]]></title><link>https://developer.hpe.com/hackshackhome-35/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-35/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Implementing Centralized Key Management in HPE GreenLake with Thales CipherTrust]]></title><link>https://developer.hpe.com/hackshackhome-36/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-36/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Vendor-Neutral GPU Programming in Chapel]]></title><link>https://developer.hpe.com/hackshackhome-37/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-37/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Enhancing NLP with RAG: A Practical Demonstration]]></title><link>https://developer.hpe.com/hackshackhome-38/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-38/</guid><content:encoded></content:encoded></item><item><title><![CDATA[LLM finetuning for mere mortals]]></title><link>https://developer.hpe.com/hackshackhome-39/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-39/</guid><content:encoded></content:encoded></item><item><title><![CDATA[HPE GreenLake and IaC]]></title><link>https://developer.hpe.com/hackshackhome-4/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-4/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Hybrid Classical-Quantum Workflows on HPE Supercomputers]]></title><link>https://developer.hpe.com/hackshackhome-40/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-40/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Exploring the HPE Sustainability Insight Center: Key features, innovations, and API capabilities]]></title><link>https://developer.hpe.com/hackshackhome-41/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-41/</guid><content:encoded></content:encoded></item><item><title><![CDATA[How to fix your biggest security hole]]></title><link>https://developer.hpe.com/hackshackhome-42/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-42/</guid><content:encoded></content:encoded></item><item><title><![CDATA[NVIDIA NIM Agents Blueprints - Do your own AI: Multimodal PDF Data Extraction 101]]></title><link>https://developer.hpe.com/hackshackhome-43/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-43/</guid><content:encoded></content:encoded></item><item><title><![CDATA[LL-Mesh – Democratizing Gen AI]]></title><link>https://developer.hpe.com/hackshackhome-44/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-44/</guid><content:encoded></content:encoded></item><item><title><![CDATA[A deep dive into the HPE Private Cloud AI Software Stack]]></title><link>https://developer.hpe.com/hackshackhome-45/</link><guid 
isPermaLink="false">https://developer.hpe.com/hackshackhome-45/</guid><content:encoded></content:encoded></item><item><title><![CDATA[From log files to AI insights: The 60-year evolution of observability and AIOps]]></title><link>https://developer.hpe.com/hackshackhome-46/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-46/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Introduction to HPE GreenLake cloud webhooks]]></title><link>https://developer.hpe.com/hackshackhome-47/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-47/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Unlocking Private AI Power: Insurance Fraud Detection and Beyond]]></title><link>https://developer.hpe.com/hackshackhome-48/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-48/</guid><content:encoded></content:encoded></item><item><title><![CDATA[ChatHPE Hub: Enabling Secure and Scalable AI Transformation at HPE]]></title><link>https://developer.hpe.com/hackshackhome-49/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-49/</guid><content:encoded></content:encoded></item><item><title><![CDATA[HPE Ezmeral Unified Analytics]]></title><link>https://developer.hpe.com/hackshackhome-5/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-5/</guid><content:encoded></content:encoded></item><item><title><![CDATA[No more blind spots: How eBPF transforms observability]]></title><link>https://developer.hpe.com/hackshackhome-50/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-50/</guid><content:encoded></content:encoded></item><item><title><![CDATA[DayN+ - A new way to look at observability]]></title><link>https://developer.hpe.com/hackshackhome-51/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-51/</guid><content:encoded></content:encoded></item><item><title><![CDATA[HPE Private Cloud AI Technical Demo]]></title><link>https://developer.hpe.com/hackshackhome-52/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-52/</guid><content:encoded></content:encoded></item><item><title><![CDATA[HPE Sustainability Insight Center: A Win for both Business and the Environment]]></title><link>https://developer.hpe.com/hackshackhome-53/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-53/</guid><content:encoded></content:encoded></item><item><title><![CDATA[HPE Compute Ops Management APIs and the DevOps ecosystem: Updates and developer tools]]></title><link>https://developer.hpe.com/hackshackhome-54/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-54/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Implementing a complete Workshop-on-Demand infrastructure in less than an hour]]></title><link>https://developer.hpe.com/hackshackhome-55/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-55/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Work Smarter: Natural Language Access to your infrastructure via GreenLake MCP]]></title><link>https://developer.hpe.com/hackshackhome-56/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-56/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Introduction to HPE Networking Central's new APIs with Postman and PyCentralv2]]></title><link>https://developer.hpe.com/hackshackhome-57/</link><guid 
isPermaLink="false">https://developer.hpe.com/hackshackhome-57/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Open Source at HPE: Three Pillars, Many Paths]]></title><link>https://developer.hpe.com/hackshackhome-58/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-58/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Network Operations Spotlight with PyCentralv2]]></title><link>https://developer.hpe.com/hackshackhome-59/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-59/</guid><content:encoded></content:encoded></item><item><title><![CDATA[HPE GreenLake for Data Fabric]]></title><link>https://developer.hpe.com/hackshackhome-6/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-6/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Automate what happens next using HPE OpsRamp Process Automation]]></title><link>https://developer.hpe.com/hackshackhome-60/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-60/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Galadriel – A new take on SPIRE federation]]></title><link>https://developer.hpe.com/hackshackhome-7/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-7/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Accelerating scientific research]]></title><link>https://developer.hpe.com/hackshackhome-8/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-8/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Extraordinary claims require extraordinary engineering]]></title><link>https://developer.hpe.com/hackshackhome-9/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome-9/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Running reliable systems Part 2: SLO Math]]></title><link>https://developer.hpe.com/hackshackhome/</link><guid isPermaLink="false">https://developer.hpe.com/hackshackhome/</guid><content:encoded></content:encoded></item><item><title><![CDATA[JOIN IN THE FUN & GAMES AT HPE DISCOVER 2022]]></title><link>https://developer.hpe.com/hpediscover2022/</link><guid isPermaLink="false">https://developer.hpe.com/hpediscover2022/</guid><content:encoded></content:encoded></item><item><title><![CDATA[Mithril: Introducing Robust Identities into Istio by integrating with SPIRE]]></title><link>https://developer.hpe.com/munchlearn/</link><guid isPermaLink="false">https://developer.hpe.com/munchlearn/</guid><content:encoded></content:encoded></item><item><title><![CDATA[KubeCon NA 2022 developers Find your cache!]]></title><link>https://developer.hpe.com/treasurehunt/</link><guid isPermaLink="false">https://developer.hpe.com/treasurehunt/</guid><content:encoded></content:encoded></item><item><title><![CDATA[DISCOVERING WORKSHOPS-ON-DEMAND]]></title><link>https://developer.hpe.com/workshop/</link><guid isPermaLink="false">https://developer.hpe.com/workshop/</guid><content:encoded></content:encoded></item><item><title><![CDATA[HPE Private Cloud AI: Natural Language to Structured Query Language]]></title><description><![CDATA[In today's data-driven world, two fundamental languages enable us to interact with information: Natural Language and SQL (Structured Query…]]></description><link>https://developer.hpe.com/hpe-private-cloud-ai-interact-with-sql-database-using-natural-language/</link><guid 
isPermaLink="false">https://developer.hpe.com/hpe-private-cloud-ai-interact-with-sql-database-using-natural-language/</guid><pubDate>Thu, 23 Apr 2026 11:35:26 GMT</pubDate><content:encoded>&lt;p&gt;In today&apos;s data-driven world, two fundamental languages enable us to interact with information: &lt;strong&gt;Natural Language&lt;/strong&gt; and &lt;strong&gt;SQL (Structured Query Language)&lt;/strong&gt;. Natural language is the way humans naturally communicate. It allows us to express ideas, ask questions, and convey intentions effortlessly, whether through speech or text. On the other hand, &lt;strong&gt;SQL&lt;/strong&gt;  is a specialized language designed for managing and querying structured data stored in databases. While SQL is powerful for precise data retrieval, it often requires technical expertise and familiarity with database schemas.&lt;/p&gt;
&lt;p&gt;Bridging the gap between these two languages is crucial for making data accessible and actionable. This is where &lt;strong&gt;generative AI (GenAI)&lt;/strong&gt; comes into play. By leveraging advanced artificial intelligence (AI) models, we can translate natural language queries into SQL commands, enabling anyone, regardless of technical background to unlock valuable insights from complex, structured datasets. Using GenAI to interpret and generate queries democratizes data analysis, accelerates decision-making, and helps organizations harness their data&apos;s full potential for strategic advantage.&lt;/p&gt;
&lt;p&gt;This blog post walks you through the steps to deploy and configure the tools required to demonstrate a natural language (NL) to SQL use case, using a manufacturing dataset, on &lt;strong&gt;HPE Private Cloud AI&lt;/strong&gt;. By leveraging these technologies, organizations can unlock valuable insights from their data.&lt;/p&gt;
&lt;h2&gt;HPE Private Cloud AI&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.hpe.com/platform/hpe-private-cloud-ai/home/&quot;&gt;HPE Private Cloud AI (HPE PCAI)&lt;/a&gt; offers a comprehensive, turnkey AI solution designed to address key enterprise challenges, from selecting the appropriate large language models (LLMs) to efficiently hosting and deploying them. Beyond these core functions, HPE Private Cloud AI empowers organizations to take full control of their AI adoption journey by offering a curated set of pre-integrated &lt;em&gt;&lt;a href=&quot;https://www.nvidia.com/en-us/ai-data-science/products/nim-microservices/&quot;&gt;NVIDIA Inference Microservices (NIM)&lt;/a&gt;&lt;/em&gt; LLMs, along with a powerful suite of AI tools and frameworks for data engineering, analytics, and data science.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00aie112hen_us&amp;#x26;page=About/aie-overview.html&quot;&gt;HPE AI Essentials&lt;/a&gt;&lt;/strong&gt; is a software and data foundation layer designed to accelerate the development, deployment, and management of artificial intelligence (AI) and GenAI applications. It is part of the &lt;strong&gt;HPE Private Cloud AI&lt;/strong&gt; portfolio and provides a curated, ready-to-run suite of open-source and proprietary tools, enabling organizations to move from AI pilots to production quickly.&lt;/p&gt;
&lt;h2&gt;Architecture&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-04-23-151822.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Prerequisites&lt;/h3&gt;
&lt;p&gt;Ensure that the following prerequisites are fulfilled:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;HPE AI Essentials version 1.12+, which has PrestoMCP&lt;/li&gt;
&lt;li&gt;OpenWebUI version v0.6.31+, which supports the MCP server as an external tool.&lt;/li&gt;
&lt;li&gt;The &apos;Private Cloud AI Administrator&apos; role assigned to the user. This is required to import the PostgreSQL framework.&lt;/li&gt;
&lt;li&gt;A Hugging Face user access token to download the LLM.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Prepare data source&lt;/h2&gt;
&lt;p&gt;You may connect your existing database to HPE PCAI using the Data Services Connector, available in HPE AI Essentials. Otherwise, you may choose to deploy a database using the &lt;em&gt;Import Framework&lt;/em&gt; feature, described in the next section.&lt;/p&gt;
&lt;h3&gt;Deploy database and load data&lt;/h3&gt;
&lt;p&gt;A sample Helm chart for PostgreSQL is available in the GitHub repo, &lt;a href=&quot;https://github.com/ai-solution-eng/frameworks/tree/main/postgresql&quot;&gt;ai-solution-eng/frameworks.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Deploy PostgreSQL on HPE Private Cloud AI using the &lt;em&gt;Import Framework&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-04-23-184640.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Provide the Name, Description, and Icon (from the GitHub repo).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-04-23-184320.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Upload the Helm chart that has been downloaded from GitHub.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-04-23-184131.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Update the PostgreSQL password in line #38.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-04-23-184251.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Review and submit.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-04-23-184352.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Within a few minutes, you&apos;ll find that the PostgreSQL framework is in the &apos;Ready&apos; state.&lt;/p&gt;
&lt;h3&gt;Load database file&lt;/h3&gt;
&lt;p&gt;Use the script &lt;em&gt;&lt;a href=&quot;https://github.com/ai-solution-eng/ai-solution-demos/blob/main/nl-to-sql-mcp-manufacturing/create_manufacturing_data.py&quot;&gt;create_manufacturing_data.py&lt;/a&gt;&lt;/em&gt; to create sample data. Executing the script creates a new &quot;.db&quot; file, which will be used in the following steps.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;python ./create_manufacturing_data.py
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;On HPE AI Essentials, open your Jupyter notebook server and upload the generated &quot;.db&quot; file and the &lt;em&gt;&lt;a href=&quot;https://github.com/ai-solution-eng/ai-solution-demos/blob/main/nl-to-sql-mcp-manufacturing/loaddata.py&quot;&gt;loaddata.py&lt;/a&gt;&lt;/em&gt; script. You will need to update the password (the one provided while importing PostgreSQL) in line #9 of the &lt;em&gt;loaddata.py&lt;/em&gt; script.&lt;/p&gt;
&lt;p&gt;After uploading these two files, open a terminal in the Jupyter notebook server and execute the following commands:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;pip install psycopg2
python loaddata.py
&lt;/code&gt;&lt;/pre&gt;
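&lt;p&gt;If you are curious what &lt;em&gt;loaddata.py&lt;/em&gt; does before running it, the snippet below is a minimal sketch of the general approach: read each table from the generated SQLite &quot;.db&quot; file and write it into the PostgreSQL instance deployed earlier. The file name and the exact loading logic are illustrative; always use the script from the GitHub repository as the authoritative version.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Minimal sketch (illustrative, not the actual loaddata.py from the repo).
# Requires pandas and sqlalchemy in addition to psycopg2.
import sqlite3
import pandas as pd
from sqlalchemy import create_engine

POSTGRES_PASSWORD = &quot;YOUR_DEFINED_PASSWORD&quot;  # same value set while importing the framework
PG_URL = (
    f&quot;postgresql+psycopg2://postgres:{POSTGRES_PASSWORD}&quot;
    &quot;@postgresql.postgres.svc.cluster.local:5432/manufacturing&quot;
)

sqlite_conn = sqlite3.connect(&quot;manufacturing.db&quot;)  # file generated by create_manufacturing_data.py
engine = create_engine(PG_URL)

# Copy the three demo tables into the public schema of the manufacturing database.
for table in (&quot;machines&quot;, &quot;operators&quot;, &quot;machine_metrics&quot;):
    df = pd.read_sql_query(f&quot;SELECT * FROM {table}&quot;, sqlite_conn)
    df.to_sql(table, engine, if_exists=&quot;replace&quot;, index=False)
    print(f&quot;Loaded {len(df)} rows into {table}&quot;)
&lt;/code&gt;&lt;/pre&gt;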
&lt;h3&gt;Connect database to HPE AI Essentials&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Navigate to &lt;strong&gt;Data Engineering &gt; Data Sources&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In &lt;strong&gt;Structured Data&lt;/strong&gt; click on &lt;strong&gt;Add New Data Source&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select &lt;strong&gt;PostgreSQL&lt;/strong&gt; and click &lt;strong&gt;Create Connection&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Fill in the details as below:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Name* : manufacturingdb&lt;/li&gt;
&lt;li&gt;Connection URL*: jdbc:postgresql://postgresql.postgres.svc.cluster.local:5432/manufacturing&lt;/li&gt;
&lt;li&gt;Connection User*: postgres&lt;/li&gt;
&lt;li&gt;Connection Password*: &quot;YOUR_DEFINED_PASSWORD&quot;&lt;/li&gt;
&lt;li&gt;Click on &apos;PostgreSQL Advanced Settings&apos;&lt;/li&gt;
&lt;li&gt;Case Insensitive Name Matching: Tick&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-04-23-191659.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Explore the data catalog&lt;/h3&gt;
&lt;p&gt;Once the data is loaded and the database is connected, you can explore the available data via the Data Catalog.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Data Engineering &gt; Data Catalog&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;manufacturingdb&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Tick &lt;strong&gt;public&lt;/strong&gt; schema&lt;/li&gt;
&lt;li&gt;You will see three tables: machine_metrics, machines, and operators.&lt;/li&gt;
&lt;li&gt;Select one of the tables and click &lt;strong&gt;Data Preview&lt;/strong&gt; to get an overview of the data.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-04-23-153224.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-04-23-153549.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;LLM deployment&lt;/h2&gt;
&lt;p&gt;Deploy the &lt;strong&gt;Qwen/Qwen3-8B&lt;/strong&gt; LLM using the HPE MLIS (Machine Learning Inference Software) framework, available in HPE AI Essentials. You may replace Qwen/Qwen3-8B with any other model of your choice.&lt;/p&gt;
&lt;p&gt;In HPE MLIS, go to &lt;strong&gt;Packaged models&lt;/strong&gt; -&gt; Create packaged model.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-04-23-154513.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-04-23-154632.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-04-23-154714.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Under the &apos;Advanced&apos; tab, set the following:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Environment Variables:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;HUGGING_FACE_HUB_TOKEN: &amp;#x3C;&amp;#x3C;&lt;strong&gt;Your Hugging Face Token&lt;/strong&gt;&gt;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Arguments:&lt;/strong&gt; &lt;em&gt;--model Qwen/Qwen3-8B --enable-reasoning --reasoning-parser qwen3 --enable-auto-tool-choice --tool-call-parser hermes --port 8080&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-04-23-200031.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;After creating the packaged model, you may deploy the model.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Deployments&lt;/strong&gt; -&gt; Create deployment&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-04-23-200126.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-04-23-200220.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-04-23-200318.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-04-23-200419.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;After the model gets deployed, go to &lt;strong&gt;HPE AI Essentials -&gt; GenAI -&gt; Model Endpoints.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Click on &apos;Action -&gt; Generate API Token&apos;. Copy the Model Endpoint and the API token to a text file, as you will need them in the next steps.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-04-23-161519.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
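&lt;p&gt;Before moving on to the chat interface, you can optionally sanity-check the deployment from a notebook or terminal. The snippet below is a minimal sketch that assumes the MLIS deployment exposes an OpenAI-compatible chat completions route (consistent with the vLLM-style arguments used above); the endpoint and token placeholders stand for the values you just copied.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Minimal sketch: send one chat completion request to the deployed model endpoint.
# MODEL_ENDPOINT and API_TOKEN are placeholders for the values copied from Model Endpoints.
import requests

MODEL_ENDPOINT = &quot;https://&lt;your-model-endpoint&gt;&quot;
API_TOKEN = &quot;&lt;your-api-token&gt;&quot;

resp = requests.post(
    f&quot;{MODEL_ENDPOINT}/v1/chat/completions&quot;,
    headers={&quot;Authorization&quot;: f&quot;Bearer {API_TOKEN}&quot;},
    json={
        &quot;model&quot;: &quot;Qwen/Qwen3-8B&quot;,
        &quot;messages&quot;: [{&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: &quot;Reply with the word ready.&quot;}],
        &quot;max_tokens&quot;: 32,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()[&quot;choices&quot;][0][&quot;message&quot;][&quot;content&quot;])
&lt;/code&gt;&lt;/pre&gt;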
&lt;h2&gt;Chat interface&lt;/h2&gt;
&lt;p&gt;Use the Open WebUI framework as the chat interface. Follow these steps to configure Open WebUI:&lt;/p&gt;
&lt;h4&gt;Connect to LLM&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;Launch Open WebUI&lt;/li&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Admin Panel &gt;&gt; Settings &gt;&gt; Connections&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Add a new &apos;OpenAI API&apos; connection and provide the Model Endpoint URL and API token saved in the previous step.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-04-23-161912.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h4&gt;Configure MCP server&lt;/h4&gt;
&lt;p&gt;Navigate in Open WebUI to &lt;strong&gt;Admin Panel &gt;&gt; Settings &gt;&gt; External Tools&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Add a new Tool Server&lt;/li&gt;
&lt;li&gt;Change the type to MCP Streamable HTTP&lt;/li&gt;
&lt;li&gt;Add the URL of the ezPresto MCP server. This can be retrieved from Data Engineering &gt;&gt; Data Sources &gt;&gt; MCP server: open the menu and click Copy endpoint.&lt;/li&gt;
&lt;li&gt;Add the &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00aie112hen_us&amp;#x26;page=Security/k8s-secret-auth-token.html&quot;&gt;JWT token&lt;/a&gt; of the user whose Presto connections you want to use. The MCP agent gets access to all data sources this user has access to.&lt;/li&gt;
&lt;li&gt;Provide the ID and Name, for example, PrestoMCP&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-04-23-162520.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;As the final step in Open WebUI, you need to &lt;strong&gt;Create a Model&lt;/strong&gt; that leverages the &lt;strong&gt;Qwen3-8B&lt;/strong&gt; base model and has the MCP server as a tool.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;In Open WebUI, click &lt;strong&gt;Workspace&lt;/strong&gt;, then click &lt;strong&gt;New Model&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Give your model a name, for example &lt;em&gt;Manufacturing&lt;/em&gt;, and select Qwen/Qwen3-8B as the base model.&lt;/li&gt;
&lt;li&gt;Set Visibility to &lt;em&gt;Public&lt;/em&gt; if you want the model to be available for everyone to chat with.&lt;/li&gt;
&lt;li&gt;Add a System Prompt for example: &quot;Always use &apos;manufacturingdb&apos; catalog and the schema &apos;public&apos; for SQL queries. Syntax: catalog.schema.table is how you reference a table in presto&quot;&lt;/li&gt;
&lt;li&gt;Click on Advanced Params and set &lt;em&gt;Function Calling&lt;/em&gt; to &lt;em&gt;Native&lt;/em&gt;; otherwise it will only make one call. You can also edit this within your chat. Under Tools, tick the &lt;em&gt;PrestoMCP Tool&lt;/em&gt; and click Save &amp;#x26; Create.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-04-23-163728.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Natural Language to SQL&lt;/h2&gt;
&lt;p&gt;In Open WebUI, click on &apos;New Chat&apos;, select &apos;Manufacturing&apos; and enable the &apos;PRESTOMCP&apos; tool inside the chat interface.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-04-23-164049.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Using chat, you may now interact with the SQL database using natural language.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-04-23-190106.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;By using the tools in HPE AI Essentials (HPE MLIS&apos;s robust model management, PrestoMCP, and Open WebUI&apos;s intuitive chat interface), one can create a powerful ecosystem for transforming natural language queries into actionable insights. This comprehensive approach democratizes data access, allowing users to effortlessly interact with complex datasets through conversational interfaces.&lt;/p&gt;
&lt;p&gt;By enabling natural language to SQL translation, organizations can unlock the full potential of their data: it accelerates decision-making, fosters a data-driven culture, and surfaces valuable insights without requiring deep technical expertise. As AI and data technologies continue to evolve, such implementations will become essential tools for making data more accessible, understandable, and impactful across all levels of an organization.&lt;/p&gt;
&lt;p&gt;Stay tuned to the &lt;a href=&quot;https://developer.hpe.com/blog/&quot;&gt;HPE Developer Community blog&lt;/a&gt; for more guides and best practices on leveraging HPE Private Cloud AI for your AI use cases.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Chapel Project Seeks New Funding]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/chapel-project-seeks-new-funding/</link><guid isPermaLink="false">https://developer.hpe.com/chapel-project-seeks-new-funding/</guid><pubDate>Tue, 21 Apr 2026 17:11:24 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Redefining storage operations with AI and MCP]]></title><description><![CDATA[The shift from scripts to conversations in storage management Interacting with storage infrastructure today often means navigating…]]></description><link>https://developer.hpe.com/redefining-storage-operations-with-ai-and-mcp/</link><guid isPermaLink="false">https://developer.hpe.com/redefining-storage-operations-with-ai-and-mcp/</guid><pubDate>Sun, 19 Apr 2026 19:15:10 GMT</pubDate><content:encoded>&lt;h3&gt;The shift from scripts to conversations in storage management&lt;/h3&gt;
&lt;p&gt;Interacting with storage infrastructure today often means navigating dashboards, writing scripts, or manually stitching together API calls. While powerful, these approaches can slow down operations and create a gap between intent and execution, especially when quick insights or actions are needed.&lt;/p&gt;
&lt;p&gt;What if you could request:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Which arrays are running low on capacity?&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Create a volume for this workload.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;...and have those actions carried out reliably?&lt;/p&gt;
&lt;p&gt;In this post, I’ll show you how to use the Model Context Protocol (MCP) with &lt;strong&gt;Visual Studio Code&lt;/strong&gt; and &lt;strong&gt;GitHub Copilot&lt;/strong&gt; to enable natural language interaction with storage systems. By leveraging Data Service Cloud Console (DSCC)&apos;s open API specification, you can expose storage operations as AI-understandable capabilities, turning everyday management tasks into simple, conversational workflows.&lt;/p&gt;
&lt;h2&gt;From queries to actions: AI-driven storage control&lt;/h2&gt;
&lt;h3&gt;The foundation: Why OpenAPI matters&lt;/h3&gt;
&lt;p&gt;Before MCP and AI come into the picture, there is a critical enabler: a strong OpenAPI specification.&lt;/p&gt;
&lt;p&gt;A well-designed OpenAPI spec provides:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Clearly defined endpoints&lt;/li&gt;
&lt;li&gt;Structured request/response schemas&lt;/li&gt;
&lt;li&gt;Consistent naming and semantics&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This structure is what makes it possible to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Programmatically understand capabilities&lt;/li&gt;
&lt;li&gt;Automatically generate tools&lt;/li&gt;
&lt;li&gt;Reliably map user intent to actions&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Without a solid API foundation, exposing capabilities to AI would be inconsistent and error-prone.&lt;/p&gt;
&lt;h3&gt;The shift: From APIs to intent-driven operations&lt;/h3&gt;
&lt;p&gt;Most storage platforms already provide rich APIs. However, APIs are typically structured and require users to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Understand endpoints and payloads&lt;/li&gt;
&lt;li&gt;Refer to documentation&lt;/li&gt;
&lt;li&gt;Write and execute requests&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/hdd.png&quot; alt=&quot;&quot; title=&quot;High level workflow&quot;&gt;&lt;/p&gt;
&lt;p&gt;MCP serves as the translation layer, converting human intent into executable API calls.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; While this blog article demonstrates the setup using &lt;strong&gt;Visual Studio Code&lt;/strong&gt; and &lt;strong&gt;GitHub Copilot&lt;/strong&gt;, MCP clients are not limited to Visual Studio Code. Any compatible AI client can interact with the MCP server. For more information, refer to this &lt;a href=&quot;https://github.com/ivo-toby/mcp-openapi-server&quot;&gt;page&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Setting up MCP with Visual Studio Code&lt;/h3&gt;
&lt;p&gt;To get started, you configure an MCP server locally and connect it to your API layer. Before setting up and using the MCP server, ensure the following prerequisites are met:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Install Node.js (version 18.0 or later)&lt;/li&gt;
&lt;li&gt;Install and sign in to GitHub Copilot with an active license&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;Step 1: Configuring a Model Context Protocol (MCP) server&lt;/h4&gt;
&lt;p&gt;In Visual Studio Code, start by opening an empty folder. Inside this folder, create a ‘.vscode’ directory to store your workspace configurations. Within the .vscode folder, add a JSON configuration file with the following details:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &quot;servers&quot;: {
    &quot;fleet-openapi&quot;: {
      &quot;type&quot;: &quot;stdio&quot;,
      &quot;command&quot;: &quot;npx&quot;,
      &quot;args&quot;: [
        &quot;-y&quot;,
        &quot;@ivotoby/openapi-mcp-server&quot;,
        &quot;--api-base-url&quot;, &quot;https://fleetscale-app.qa.cds.hpe.com&quot;,
        &quot;--headers&quot;, &quot;Authorization:Bearer BEARER_TOKEN&quot;,
        &quot;--openapi-spec&quot;, &quot;https://console-us1.data.cloud.hpe.com/doc/api/v1/storage-api.yaml&quot;,
        &quot;--name&quot;, &quot;fleet-openapi&quot;,
        &quot;--tools&quot;, &quot;all&quot;,
        &quot;--tag&quot;, &quot;storage-systems&quot;,
        &quot;--tag&quot;, &quot;headroom&quot;,
        &quot;--tag&quot;, &quot;capacity&quot;,
        &quot;--tag&quot;, &quot;performance&quot;
      ]
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Generate the access token as mentioned in this &lt;a href=&quot;https://developer.hpe.com/blog/oauth2-for-hpe-greenlake-data-services-cloud-console/&quot;&gt;blog&lt;/a&gt;. Replace the BEARER_TOKEN with the generated access token. Bearer tokens are short-lived and security-scoped; they should not be treated as static secrets. When a token expires, update it in the JSON file, reload the Visual Studio Code window, and restart the MCP server.&lt;/p&gt;
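&lt;p&gt;If you prefer to script the token retrieval instead of generating it manually each time it expires, the snippet below is a minimal sketch of the client-credentials exchange. It assumes the HPE GreenLake SSO token endpoint described in the linked blog post and uses placeholder credentials; adapt it to your environment and keep the client secret out of source control.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Minimal sketch: obtain a short-lived bearer token via the OAuth2 client-credentials flow.
# TOKEN_URL is the endpoint described in the linked blog post; CLIENT_ID and CLIENT_SECRET
# are placeholders for the API client credentials created in HPE GreenLake.
import requests

TOKEN_URL = &quot;https://sso.common.cloud.hpe.com/as/token.oauth2&quot;
CLIENT_ID = &quot;&lt;your-client-id&gt;&quot;
CLIENT_SECRET = &quot;&lt;your-client-secret&gt;&quot;

resp = requests.post(
    TOKEN_URL,
    data={
        &quot;grant_type&quot;: &quot;client_credentials&quot;,
        &quot;client_id&quot;: CLIENT_ID,
        &quot;client_secret&quot;: CLIENT_SECRET,
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()[&quot;access_token&quot;])  # use this value in place of BEARER_TOKEN
&lt;/code&gt;&lt;/pre&gt;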
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;server-name&lt;/strong&gt; → A unique identifier for the MCP server instance.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;type&lt;/strong&gt; → Specifies how the MCP server communicates (e.g., stdio, http).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;command&lt;/strong&gt; → The executable used to start the MCP server (e.g., npx).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;args&lt;/strong&gt; → Command-line arguments passed to the MCP server at startup.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;api-base-url&lt;/strong&gt; → Base endpoint of the backend service that the MCP server will call.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;openapi-spec&lt;/strong&gt; → Path to the OpenAPI file used to generate tools.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;headers&lt;/strong&gt; → HTTP headers (like authentication tokens) sent with API requests.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;tools&lt;/strong&gt; → Controls which capabilities (APIs/actions) are exposed to the AI.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&quot;tools&quot;: &quot;all&quot;&lt;/strong&gt; → Exposes all available tools without filtering.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;tag&lt;/strong&gt; → Filters tools based on OpenAPI tags (logical grouping).&lt;/li&gt;
&lt;/ul&gt;
&lt;h5&gt;Restricting Operations for Safer Execution:&lt;/h5&gt;
&lt;p&gt;Since MCP enables AI-driven interaction with your storage systems, it’s important to control what actions are allowed, especially in production environments.&lt;/p&gt;
&lt;p&gt;Using the &lt;strong&gt;--operation&lt;/strong&gt; flag, you can restrict the MCP server to specific HTTP methods.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &quot;servers&quot;: {
    &quot;fleet-openapi&quot;: {
      &quot;type&quot;: &quot;stdio&quot;,
      &quot;command&quot;: &quot;npx&quot;,
      &quot;args&quot;: [
        &quot;-y&quot;,
        &quot;@ivotoby/openapi-mcp-server&quot;,
        &quot;--api-base-url&quot;, &quot;https://fleetscale-app.qa.cds.hpe.com&quot;,
        &quot;--headers&quot;, &quot;Authorization:Bearer BEARER_TOKEN&quot;,
        &quot;--openapi-spec&quot;, &quot;https://console-us1.data.cloud.hpe.com/doc/api/v1/storage-api.yaml&quot;,
        &quot;--name&quot;, &quot;fleet-openapi&quot;,
        &quot;--tools&quot;, &quot;all&quot;,
        &quot;--operation&quot;, &quot;get&quot;,
        &quot;--tag&quot;, &quot;storage-systems&quot;,
        &quot;--tag&quot;, &quot;headroom&quot;,
        &quot;--tag&quot;, &quot;capacity&quot;,
        &quot;--tag&quot;, &quot;performance&quot;
      ]
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This limits MCP to read-only APIs, preventing any create, update, or delete actions.&lt;/p&gt;
&lt;p&gt;This is useful for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Safe initial adoption&lt;/li&gt;
&lt;li&gt;Monitoring and reporting use cases&lt;/li&gt;
&lt;li&gt;Security-sensitive environments&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You can gradually enable additional operations as needed, with proper guardrails in place.&lt;/p&gt;
&lt;p&gt;This configuration starts the MCP server locally, loads the OpenAPI specification, and exposes the APIs as MCP tools. It also passes the required authentication headers for API calls. To apply the changes, reload the Visual Studio Code window using &lt;strong&gt;Ctrl + Shift + P → “Developer: Reload Window.”&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Ensure your organization approves the use of MCP and AI tools, and that all integrations comply with internal security and cybersecurity policies before use.&lt;/p&gt;
&lt;h4&gt;Step 2: Connect with GitHub Copilot and start the MCP server&lt;/h4&gt;
&lt;p&gt;In Visual Studio Code, install the GitHub Copilot extension and sign in with your GitHub account to start using it.&lt;/p&gt;
&lt;p&gt;To start the MCP server, open the Command Palette (&lt;strong&gt;Ctrl+Shift+P&lt;/strong&gt;) and select ‘&lt;em&gt;&lt;strong&gt;MCP: List Servers&lt;/strong&gt;&lt;/em&gt;’. You should see the server configured in your JSON file (for example, &lt;em&gt;fleet-openapi&lt;/em&gt;) listed there.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/2listservers.png&quot; alt=&quot;&quot; title=&quot;List Servers&quot;&gt;&lt;/p&gt;
&lt;p&gt;Select the configured server and click “&lt;em&gt;&lt;strong&gt;Start Server&lt;/strong&gt;&lt;/em&gt;”. Once the server is up and running, you should see a confirmation in the console similar to the example below.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/3serverstart.png&quot; alt=&quot;&quot; title=&quot;MCP server running&quot;&gt;&lt;/p&gt;
&lt;p&gt;The GitHub Copilot extension allows you to choose from supported LLMs based on your subscription and configuration, as shown below.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/4listllms.png&quot; alt=&quot;&quot; title=&quot;List the LLM models&quot;&gt;&lt;/p&gt;
&lt;p&gt;GitHub Copilot can leverage the tools discovered by the MCP extension in Visual Studio Code. You can also view the discovered tools using the Command Palette (&lt;strong&gt;Ctrl + Shift + P&lt;/strong&gt;) by searching for MCP-related commands.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/5listtools.png&quot; alt=&quot;&quot; title=&quot;List the tools&quot;&gt;&lt;/p&gt;
&lt;h4&gt;Step 3: Interacting using natural language&lt;/h4&gt;
&lt;p&gt;Once everything is set up, you can begin issuing natural language queries.&lt;/p&gt;
&lt;p&gt;Examples:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;List all storage arrays&lt;/li&gt;
&lt;li&gt;Identify arrays above 80% capacity&lt;/li&gt;
&lt;li&gt;Create a 100GB volume on a specific array&lt;/li&gt;
&lt;li&gt;Check overall system health&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These queries are interpreted, mapped to MCP tools, and executed via DSCC APIs.&lt;/p&gt;
&lt;p&gt;For example, consider the following prompt where the user asks to list storage system details:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/6liststoragesystems.png&quot; alt=&quot;&quot; title=&quot;List storage systems-part1&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/7liststoragesystems.png&quot; alt=&quot;&quot; title=&quot;List Storage systems-part2&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The behaviour and responses may vary depending on the LLM used, as different models can interpret prompts and execute actions with varying accuracy and reasoning.&lt;/p&gt;
&lt;h5&gt;What happens behind the scenes&lt;/h5&gt;
&lt;ul&gt;
&lt;li&gt;The user enters a natural language query (in this case, requesting storage system details)&lt;/li&gt;
&lt;li&gt;Copilot interprets the intent and identifies the relevant operation (e.g., list storage systems)&lt;/li&gt;
&lt;li&gt;The MCP client discovers available tools from the MCP server and selects the most relevant one based on the intent.&lt;/li&gt;
&lt;li&gt;The MCP server maps the request to the corresponding API (derived from the OpenAPI specification) and executes it via DSCC, passing the required authentication headers configured in the server JSON.&lt;/li&gt;
&lt;li&gt;The API returns a structured response, which is then relayed back through the MCP server.&lt;/li&gt;
&lt;li&gt;The response is presented to the user, often enriched with additional insights (for example, highlighting potential issues such as performance headroom overutilization).&lt;/li&gt;
&lt;/ul&gt;
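&lt;p&gt;To make the contrast concrete, the snippet below is a rough sketch of the hand-written call that the MCP tool replaces for the &quot;list all storage arrays&quot; request. The endpoint path and response fields are illustrative and should be verified against the OpenAPI specification referenced in the MCP configuration; the base URL is the same value passed as &lt;em&gt;--api-base-url&lt;/em&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Rough sketch of the manual equivalent of a natural language request such as
# &quot;list all storage arrays&quot;. Path and field names are illustrative; verify them
# against the published OpenAPI specification.
import requests

BASE_URL = &quot;https://&lt;your-api-base-url&gt;&quot;  # same value as --api-base-url in the MCP config
BEARER_TOKEN = &quot;&lt;your-bearer-token&gt;&quot;

resp = requests.get(
    f&quot;{BASE_URL}/api/v1/storage-systems&quot;,
    headers={&quot;Authorization&quot;: f&quot;Bearer {BEARER_TOKEN}&quot;},
    timeout=30,
)
resp.raise_for_status()
for system in resp.json().get(&quot;items&quot;, []):
    print(system.get(&quot;name&quot;), system.get(&quot;id&quot;))
&lt;/code&gt;&lt;/pre&gt;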
&lt;p&gt;This removes the need to manually:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Write API calls&lt;/li&gt;
&lt;li&gt;Look up documentation&lt;/li&gt;
&lt;li&gt;Handle request formatting&lt;/li&gt;
&lt;li&gt;Parse responses&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; Please ensure that the user does not have admin or superuser privileges, as the project is still in its early stages and requires further enhancements in RBAC and guardrails. For safer execution, use the &lt;em&gt;--operation&lt;/em&gt; flag appropriately.&lt;/p&gt;
&lt;h4&gt;Relationship to HPE GreenLake MCP documentation&lt;/h4&gt;
&lt;p&gt;The DSCC MCP integration follows the same core patterns as the GreenLake MCP setup.&lt;/p&gt;
&lt;p&gt;You can refer to the GreenLake documentation for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;MCP server setup&lt;/li&gt;
&lt;li&gt;Tool definitions&lt;/li&gt;
&lt;li&gt;Request/response flow&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;HPE GreenLake MCP references can be found &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/mcp-server/public&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;While DSCC APIs and authorization differ, the overall MCP concepts and configuration remain largely the same and can be reused with minimal changes.&lt;/p&gt;
&lt;h4&gt;Conclusion: From APIs to conversation&lt;/h4&gt;
&lt;p&gt;What makes this transformation possible is not just MCP or AI, it’s the combination of a strong OpenAPI foundation and a protocol that can leverage it effectively. By layering MCP on top of a well-defined API ecosystem like DSCC, you can unlock a new interaction model where users express intent and systems handle execution.&lt;/p&gt;
&lt;p&gt;With tools like Visual Studio Code and GitHub Copilot, users can now manage storage systems using natural language, reducing complexity and speeding up operations.&lt;/p&gt;
&lt;h4&gt;Key takeaway&lt;/h4&gt;
&lt;p&gt;Great APIs don’t just enable integrations—they enable entirely new ways of interaction.&lt;/p&gt;
&lt;p&gt;And MCP doesn’t replace APIs—it enhances how we interact with them.&lt;/p&gt;
&lt;p&gt;It does so by combining:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;OpenAPI specifications&lt;/li&gt;
&lt;li&gt;MCP server&lt;/li&gt;
&lt;li&gt;AI assistants&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This enables a shift from script-driven operations to intent-driven interactions.&lt;/p&gt;
&lt;h4&gt;Call to action:&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;Start by exploring your existing OpenAPI specifications. Identify a few key operations and expose them through an MCP server. Try interacting with them using natural language in Visual Studio Code with GitHub Copilot.&lt;/li&gt;
&lt;li&gt;If you have questions or want to explore deeper integrations, feel free to reach out to &lt;a href=&quot;mailto:anusha.y@hpe.com&quot;&gt;anusha.y@hpe.com&lt;/a&gt; or continue experimenting. This is just the beginning of conversational infrastructure.&lt;/li&gt;
&lt;li&gt;Please check out the &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE DEV blog&lt;/a&gt; for more articles on this topic.&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Integrating Dagster as a modern data orchestration framework in HPE Private Cloud AI]]></title><description><![CDATA[HPE Private Cloud AI (PCAI) provides a curated set of pre‑integrated orchestration and machine‑learning (ML) frameworks, including Airflow…]]></description><link>https://developer.hpe.com/integrating-dagster-as-a-modern-data-orchestration-framework-in-hpe-private-cloud-ai/</link><guid isPermaLink="false">https://developer.hpe.com/integrating-dagster-as-a-modern-data-orchestration-framework-in-hpe-private-cloud-ai/</guid><pubDate>Sun, 19 Apr 2026 06:11:38 GMT</pubDate><content:encoded>&lt;style&gt; li { font-size: 27px; line-height: 33px; max-width: none; } &lt;/style&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.hpe.com/platform/hpe-private-cloud-ai/home/&quot;&gt;HPE Private Cloud AI (PCAI)&lt;/a&gt; provides a curated set of pre‑integrated orchestration and machine‑learning (ML) frameworks, including &lt;em&gt;Airflow&lt;/em&gt;, &lt;em&gt;Kubeflow&lt;/em&gt;, &lt;em&gt;Spark&lt;/em&gt; and &lt;em&gt;Ray&lt;/em&gt;, to streamline the development and operationalization of AI workloads. However, teams that require stronger data‑centric orchestration, asset lineage, and reproducibility may find gaps in the existing toolchain. Traditional task‑based orchestrators such as &lt;em&gt;Airflow&lt;/em&gt; don’t always provide the asset‑level visibility, modularity, or developer‑friendly workflow needed for modern data engineering practices.&lt;/p&gt;
&lt;p&gt;This blog post introduces &lt;em&gt;Dagster&lt;/em&gt; as an additional, asset‑oriented orchestration framework that augments the existing HPE Private Cloud AI tool stack without replacing any component. Using the &lt;em&gt;Import Framework&lt;/em&gt;, &lt;em&gt;Dagster&lt;/em&gt; can be deployed within minutes and integrated seamlessly into the HPE Private Cloud AI environment. &lt;em&gt;Dagster&lt;/em&gt;’s modular architecture cleanly separates the infrastructure layer from the user code layer, allowing the user code package to be built as an independent container image, pushed to the local PCAI image registry, and deployed entirely within the HPE Private Cloud AI boundary. This integration approach reinforces data protection and supports strict data sovereignty requirements. Once integrated, &lt;em&gt;Dagster&lt;/em&gt; provides advanced orchestration capabilities, including asset-level lineage tracking, deterministic reproducibility, and comprehensive observability. Its developer‑focused design enables teams to build, execute, and monitor data assets with greater reliability and maintainability, making it a valuable optional addition within HPE Private Cloud AI for workflows that benefit from modern data‑centric orchestration.&lt;/p&gt;
&lt;h3&gt;Why Dagster?&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://dagster.io/&quot;&gt;&lt;em&gt;Dagster&lt;/em&gt;&lt;/a&gt; is a modern data orchestration framework centered on the concept of data assets. Rather than focusing primarily on tasks, &lt;em&gt;Dagster&lt;/em&gt; encourages teams to model pipelines as interconnected datasets with explicit lineage and dependencies. Its strong developer oriented features, such as type‑checking, local development tooling, built‑in testing, and rich observability, make it especially effective for contemporary data stacks involving &lt;a href=&quot;https://www.getdbt.com/&quot;&gt;&lt;em&gt;dbt&lt;/em&gt;&lt;/a&gt;, cloud warehouses, and ML workflows. It’s widely used by data engineering and analytics teams to make pipelines more reliable, observable, and maintainable.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Dagster&lt;/em&gt; does not replace existing orchestrators such as &lt;em&gt;Airflow&lt;/em&gt;, &lt;em&gt;Kubeflow&lt;/em&gt;, or &lt;em&gt;Ray&lt;/em&gt;. Instead, it complements them by offering a more data‑centric option for teams that prioritize lineage, observability, and long‑term maintainability. In practice, &lt;em&gt;Dagster&lt;/em&gt;, &lt;em&gt;Airflow&lt;/em&gt;, and other orchestration tools often coexist within the same ecosystem, each bringing strengths suited to different workflow styles. &lt;em&gt;Dagster&lt;/em&gt; excels in asset‑driven, lineage‑aware environments, while tools like &lt;em&gt;Airflow&lt;/em&gt; remain reliable choices for task‑driven scheduling with extensive integrations. Rather than competing, these tools work together to give teams the flexibility to choose the orchestration model that best fits each part of their data platform.&lt;/p&gt;
&lt;p&gt;The following sections describe the process for integrating &lt;em&gt;Dagster&lt;/em&gt; into HPE Private Cloud AI using the &lt;em&gt;Import Framework&lt;/em&gt;. Once integrated, &lt;em&gt;Dagster&lt;/em&gt; becomes an additional orchestration framework within the PCAI environment, providing users with more options to select the approach that best aligns with their workflow and orchestration requirements.&lt;/p&gt;
&lt;h3&gt;Prerequisites&lt;/h3&gt;
&lt;p&gt;Ensure that the following prerequisites are fulfilled:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;HPE Private Cloud AI version 1.5.0 or later, running HPE AI Essentials version 1.9.1 or later.&lt;/li&gt;
&lt;li&gt;Access to an HPE Private Cloud AI workspace (with the &lt;em&gt;Private Cloud AI Administrator&lt;/em&gt; role), allowing you to perform administrative operations.&lt;/li&gt;
&lt;li&gt;Docker Engine version 27.3.1 or later, including the default docker CLI, which will be used for building and pushing &lt;em&gt;Dagster&lt;/em&gt; user code images.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The deployment examples in the following sections use the &lt;em&gt;kubectl&lt;/em&gt; CLI and kubeconfig to display deployment details in the PCAI Kubernetes (K8s) cluster for illustration purposes only. Direct cluster access via &lt;em&gt;kubectl&lt;/em&gt; is generally not required, as the full framework setup can be completed through the &lt;em&gt;Import Framework&lt;/em&gt; UI.&lt;/p&gt;
&lt;h3&gt;Integrate &lt;em&gt;Dagster&lt;/em&gt; framework using the &lt;em&gt;Import Framework&lt;/em&gt;&lt;/h3&gt;
&lt;p&gt;The official &lt;a href=&quot;https://github.com/dagster-io/dagster/tree/master/helm&quot;&gt;&lt;em&gt;Dagster&lt;/em&gt; Helm charts&lt;/a&gt; contain the main &lt;em&gt;dagster&lt;/em&gt; chart and the &lt;em&gt;dagster-user-deployments&lt;/em&gt; subchart. The &lt;em&gt;dagster&lt;/em&gt; chart deploys the core &lt;em&gt;Dagster&lt;/em&gt; infrastructure, specifically the &lt;em&gt;Dagster Webserver&lt;/em&gt; and the &lt;em&gt;Dagster daemon&lt;/em&gt;, using prebuilt images available from &lt;em&gt;DockerHub&lt;/em&gt;. The &lt;em&gt;dagster-user-deployments&lt;/em&gt; subchart, by contrast, is responsible for deploying &lt;em&gt;Dagster&lt;/em&gt; user code, which contains the user-defined pipelines and asset definitions. Because this code is user-specific, customers must build their own user code image and use that image when deploying &lt;em&gt;Dagster&lt;/em&gt; user code. In many cases, customers prefer to store this image locally within their environment.&lt;/p&gt;
&lt;p&gt;The following sections describe how to build a sample user code image, deploy &lt;em&gt;Harbor&lt;/em&gt; and configure it as the local image registry within HPE Private Cloud AI, and push the built &lt;em&gt;Dagster&lt;/em&gt; user code image to &lt;em&gt;Harbor&lt;/em&gt; so it can be used for the &lt;em&gt;Dagster&lt;/em&gt; deployment.&lt;/p&gt;
&lt;h4&gt;Build the &lt;em&gt;Dagster&lt;/em&gt; user code image&lt;/h4&gt;
&lt;p&gt;This section describes the process for building the &lt;em&gt;Dagster&lt;/em&gt; user code image in preparation for the &lt;em&gt;Dagster&lt;/em&gt; deployment. For demonstration purposes, the &lt;em&gt;&lt;a href=&quot;https://github.com/dagster-io/dagster/tree/master/examples/deploy_k8s&quot;&gt;&apos;iris_analysis&apos;&lt;/a&gt;&lt;/em&gt; example project from the official &lt;em&gt;&lt;a href=&quot;https://github.com/dagster-io/dagster&quot;&gt;Dagster GitHub repository&lt;/a&gt;&lt;/em&gt; is used as the sample for the user code image build.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ cd examples/deploy_k8s/
$ ls -al
total 24
drwxrwxr-x  3 guoping guoping 4096 mars  20 14:01 .
drwxrwxr-x 37 guoping guoping 4096 mars  19 11:13 ..
-rw-rw-r--  1 guoping guoping  516 mars  20 14:01 Dockerfile
drwxrwxr-x  2 guoping guoping 4096 mars  19 11:13 iris_analysis
-rw-rw-r--  1 guoping guoping  361 mars  19 11:13 pyproject.toml
-rw-rw-r--  1 guoping guoping    0 mars  19 11:13 py.typed
-rw-rw-r--  1 guoping guoping  477 mars  19 11:13 README.md
&lt;/code&gt;&lt;/pre&gt;
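&lt;p&gt;The &lt;em&gt;iris_analysis&lt;/em&gt; directory is an ordinary Python package of &lt;em&gt;Dagster&lt;/em&gt; definitions. As a point of reference, the snippet below is a minimal sketch of what such a user code module typically looks like; the asset names, data source, and logic are illustrative rather than the exact contents of the example project.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Minimal sketch of a Dagster user code module (asset names and logic are illustrative).
import pandas as pd
from dagster import Definitions, asset


@asset
def iris_dataset() -&gt; pd.DataFrame:
    # Root data asset: load the classic Iris dataset from a public source (illustrative URL).
    return pd.read_csv(
        &quot;https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data&quot;,
        names=[&quot;sepal_length&quot;, &quot;sepal_width&quot;, &quot;petal_length&quot;, &quot;petal_width&quot;, &quot;species&quot;],
    )


@asset
def iris_summary(iris_dataset: pd.DataFrame) -&gt; pd.DataFrame:
    # Downstream asset: per-species averages, with lineage tracked automatically by Dagster.
    return iris_dataset.groupby(&quot;species&quot;).mean()


defs = Definitions(assets=[iris_dataset, iris_summary])
&lt;/code&gt;&lt;/pre&gt;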
&lt;p&gt;The following &lt;em&gt;Dockerfile&lt;/em&gt; is used to build the sample &lt;em&gt;Dagster&lt;/em&gt; user code image. In addition to installing the required libraries (e.g., &lt;em&gt;dagster&lt;/em&gt;, &lt;em&gt;dagster-postgres&lt;/em&gt;, &lt;em&gt;dagster-k8s&lt;/em&gt;, and &lt;em&gt;pandas&lt;/em&gt;), be sure to update the file paths and port settings to match your own &lt;em&gt;Dagster&lt;/em&gt; project.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ cat Dockerfile
FROM python:3.11
COPY . /
ENV PYTHONUNBUFFERED=1
RUN pip install --upgrade pip
RUN \
    pip install \
        dagster \
        dagster-postgres \
        dagster-k8s \
        pandas
WORKDIR /iris_analysis/
EXPOSE 80
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Run the following &lt;em&gt;Docker&lt;/em&gt; command to build the image &lt;em&gt;&apos;pcaidemo/user-code-example&apos;&lt;/em&gt; with the tag &lt;em&gt;&apos;1.12.19&apos;&lt;/em&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ docker build . -t pcaidemo/user-code-example:1.12.19
[+] Building 280.0s (11/11) FINISHED                                                                                                                                 docker:default
 =&gt; [internal] load build definition from Dockerfile                                                                                                                           0.1s
 =&gt; =&gt; transferring dockerfile: 555B                                                                                                                                           0.0s
 =&gt; [internal] load metadata for docker.io/library/python:3.11                                                                                                                 6.7s
 =&gt; [auth] library/python:pull token for registry-1.docker.io                                                                                                                  0.0s
 =&gt; [internal] load .dockerignore                                                                                                                                              0.0s
 =&gt; =&gt; transferring context: 2B                                                                                                                                                0.0s
 =&gt; [internal] load build context                                                                                                                                              0.1s
 =&gt; =&gt; transferring context: 2.08kB                                                                                                                                            0.1s
 =&gt; [1/5] FROM docker.io/library/python:3.11@sha256:ff461875d046c85ecc529e93cf2a0004f29df70566194936214115b36703d866                                                         158.7s
 =&gt; =&gt; resolve docker.io/library/python:3.11@sha256:ff461875d046c85ecc529e93cf2a0004f29df70566194936214115b36703d866                                                           0.0s
 =&gt; =&gt; sha256:ff461875d046c85ecc529e93cf2a0004f29df70566194936214115b36703d866 10.32kB / 10.32kB                                                                               0.0s
 =&gt; =&gt; sha256:990d2ceca3883d62ee15b9e7c06da32a9f9a6bb95d5c0b47548581e6e0a38d50 2.32kB / 2.32kB                                                                                 0.0s
 =&gt; =&gt; sha256:e1388005fc3d7fd4f5611bd6b70464d8dc602a6189c9d5689def96add4b74a3a 6.35kB / 6.35kB                                                                                 0.0s
 =&gt; =&gt; sha256:ee3a0e7d77f0c84203cab438fcf345647c8121bbd80506a3c692f8608a14c4f4 67.78MB / 67.78MB                                                                              17.9s
 =&gt; =&gt; sha256:8f6ad858d0a46fa8ee628532c70b8dc82d06179d543b0b09ec19fc03d4c5b373 49.30MB / 49.30MB                                                                              13.4s
 =&gt; =&gt; sha256:b012eb15dff0bce418c03ec940325aee6aa4300d771c325728855697e620c63a 25.62MB / 25.62MB                                                                               5.8s
 =&gt; =&gt; sha256:8688d0f2f567884eb217c6f80efa063bdb13a1951e92e6c5cac1ae5b736f5e1b 236.08MB / 236.08MB                                                                            42.6s
 =&gt; =&gt; sha256:66063df90a44c93620e1790b680bad5509bc860518ee257a157d6262916b680a 6.09MB / 6.09MB                                                                                15.9s
 =&gt; =&gt; extracting sha256:8f6ad858d0a46fa8ee628532c70b8dc82d06179d543b0b09ec19fc03d4c5b373                                                                                     31.6s
 =&gt; =&gt; sha256:1589b6e505d3cd8ceb2b87cebc53c22b3bd9b858def90d5c108605bbd58d8b28 23.98MB / 23.98MB                                                                              24.7s
 =&gt; =&gt; sha256:8cd65a420aac1587da1c19e7cc7bd6f61b226f69fe9ec6f9d3f6215b9bf33cf2 249B / 249B                                                                                    18.8s
 =&gt; =&gt; extracting sha256:b012eb15dff0bce418c03ec940325aee6aa4300d771c325728855697e620c63a                                                                                     10.8s
 =&gt; =&gt; extracting sha256:ee3a0e7d77f0c84203cab438fcf345647c8121bbd80506a3c692f8608a14c4f4                                                                                     24.9s
 =&gt; =&gt; extracting sha256:8688d0f2f567884eb217c6f80efa063bdb13a1951e92e6c5cac1ae5b736f5e1b                                                                                     66.5s
 =&gt; =&gt; extracting sha256:66063df90a44c93620e1790b680bad5509bc860518ee257a157d6262916b680a                                                                                      1.4s
 =&gt; =&gt; extracting sha256:1589b6e505d3cd8ceb2b87cebc53c22b3bd9b858def90d5c108605bbd58d8b28                                                                                      4.2s
 =&gt; =&gt; extracting sha256:8cd65a420aac1587da1c19e7cc7bd6f61b226f69fe9ec6f9d3f6215b9bf33cf2                                                                                      0.0s
 =&gt; [2/5] COPY . /                                                                                                                                                             4.1s
 =&gt; [3/5] RUN pip install --upgrade pip                                                                                                                                       19.0s
 =&gt; [4/5] RUN     pip install         dagster         dagster-postgres         dagster-k8s         pandas                                                                     82.4s
 =&gt; [5/5] WORKDIR /iris_analysis/                                                                                                                                              0.1s
 =&gt; exporting to image                                                                                                                                                         8.5s
 =&gt; =&gt; exporting layers                                                                                                                                                        8.3s
 =&gt; =&gt; writing image sha256:e5ccb2007d4d8eb9ab2d964c08a45d343a6775ceb7d012021902dc0dc51dc247                                                                                   0.0s
 =&gt; =&gt; naming to docker.io/pcaidemo/user-code-example:1.12.19 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Run the command below to verify that the image &lt;em&gt;&apos;pcaidemo/user-code-example&apos;&lt;/em&gt; has been built with the proper tag &lt;em&gt;&apos;1.12.19&apos;&lt;/em&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ docker images
REPOSITORY                   TAG       IMAGE ID       CREATED              SIZE
pcaidemo/user-code-example   1.12.19   e5ccb2007d4d   About a minute ago   1.51GB
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Set up &lt;em&gt;Harbor&lt;/em&gt; as a local image registry&lt;/h4&gt;
&lt;p&gt;&lt;em&gt;&lt;a href=&quot;https://goharbor.io/&quot;&gt;Harbor&lt;/a&gt;&lt;/em&gt; is an open-source container registry designed for cloud-native environments like K8s. It securely stores and manages container images with policies and role-based access control (RBAC), scans images for vulnerabilities, and signs images as trusted.&lt;/p&gt;
&lt;p&gt;You can install &lt;em&gt;Harbor&lt;/em&gt; and set it up as a local image registry in HPE Private Cloud AI by following the instructions in the blog post &lt;a href=&quot;https://developer.hpe.com/blog/setting-up-harbor-as-a-local-container-registry-in-hpe-private-cloud-ai/&quot;&gt;Setting up Harbor as a local container registry in HPE Private Cloud AI&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;After &lt;em&gt;Harbor&lt;/em&gt; is deployed via the &lt;em&gt;Import Framework&lt;/em&gt;, an &lt;em&gt;imported&lt;/em&gt; &lt;em&gt;Harbor&lt;/em&gt; tile appears under &lt;strong&gt;Tools &amp;#x26; Frameworks&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/harbor.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Click &lt;em&gt;&lt;strong&gt;Open&lt;/strong&gt;&lt;/em&gt; to launch the &lt;em&gt;Harbor&lt;/em&gt; console. After creating a project (e.g., &lt;em&gt;&apos;pcaidemo&apos;&lt;/em&gt;) under &lt;strong&gt;Projects&lt;/strong&gt; and a user (e.g., &lt;em&gt;&apos;pcai-admin&apos;&lt;/em&gt;) under &lt;strong&gt;Administration -&gt; Users&lt;/strong&gt;, add the user to the project &lt;em&gt;&apos;pcaidemo&apos;&lt;/em&gt; with the &lt;em&gt;Maintainer&lt;/em&gt; role from its &lt;em&gt;Members&lt;/em&gt; tab.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/harbor-user.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h4&gt;Push the &lt;em&gt;Dagster&lt;/em&gt; user code image to &lt;em&gt;Harbor&lt;/em&gt;&lt;/h4&gt;
&lt;p&gt;From the &lt;em&gt;Linux&lt;/em&gt; client, run the following command to log in to the &lt;em&gt;Harbor&lt;/em&gt; registry using the user credentials configured above.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ docker login harbor.ai-application.pcai0104.ld7.hpecolo.net
Username: pcai-admin
Password:

WARNING! Your credentials are stored unencrypted in &apos;/home/guoping/.docker/config.json&apos;.
Configure a credential helper to remove this warning. See
https://docs.docker.com/go/credential-store/

Login Succeeded
&lt;/code&gt;&lt;/pre&gt;
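&lt;p&gt;If you prefer a non-interactive login (for example, in a script), you can pipe the password to &lt;code&gt;docker login&lt;/code&gt; using &lt;code&gt;--password-stdin&lt;/code&gt;; the environment variable name below is only an example.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ export HARBOR_PASSWORD=&apos;&amp;#x3C;hidden&gt;&apos;
$ echo &quot;$HARBOR_PASSWORD&quot; | docker login harbor.ai-application.pcai0104.ld7.hpecolo.net --username pcai-admin --password-stdin
&lt;/code&gt;&lt;/pre&gt;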
&lt;p&gt;Run the following commands to tag the built image with the &lt;em&gt;Harbor&lt;/em&gt; registry URL (e.g., &lt;em&gt;&apos;harbor.ai-application.pcai0104.ld7.hpecolo.net&apos;&lt;/em&gt;) and the project name &lt;em&gt;&apos;pcaidemo&apos;&lt;/em&gt;, and to push the image to the &lt;em&gt;Harbor&lt;/em&gt; registry.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ docker tag pcaidemo/user-code-example:1.12.19 harbor.ai-application.pcai0104.ld7.hpecolo.net/pcaidemo/user-code-example:1.12.19
$ docker push harbor.ai-application.pcai0104.ld7.hpecolo.net/pcaidemo/user-code-example:1.12.19
The push refers to repository [harbor.ai-application.pcai0104.ld7.hpecolo.net/pcaidemo/user-code-example]
5f70bf18a086: Pushed
72974f5579e5: Pushed
e4701c8f7c5b: Pushed
2b8a04da403b: Pushed
4ec5e33e2b38: Pushed
5c262981bdb5: Pushed
30d39f2c6455: Pushed
6afcfc3ecd04: Pushed
817e939a94eb: Pushed
dd6e353abeff: Pushed
c5864b4cf4c9: Pushed
1.12.19: digest: sha256:b877e86abeea7c509dfb029a1d9fba51c45aaa9e84ca84399a92e79c2e2ac442 size: 2634
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Run the following command to view the repositories from the &lt;em&gt;Harbor&lt;/em&gt; registry.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ curl -k -sS --user &apos;pcai-admin:&amp;#x3C;hidden&gt;&apos; https://harbor.ai-application.pcai0104.ld7.hpecolo.net/v2/_catalog | jq
{
  &quot;repositories&quot;: [
    &quot;pcaidemo/user-code-example&quot;
  ]
}
&lt;/code&gt;&lt;/pre&gt;
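&lt;p&gt;As an additional check, you can list the tags of the pushed repository through the same registry API; the command below assumes the credentials used earlier and should return the tag &lt;em&gt;&apos;1.12.19&apos;&lt;/em&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ curl -k -sS --user &apos;pcai-admin:&amp;#x3C;hidden&gt;&apos; https://harbor.ai-application.pcai0104.ld7.hpecolo.net/v2/pcaidemo/user-code-example/tags/list | jq
&lt;/code&gt;&lt;/pre&gt;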
&lt;p&gt;In the &lt;em&gt;Harbor&lt;/em&gt; console, under the &lt;em&gt;Repositories&lt;/em&gt; tab of the project &lt;em&gt;&apos;pcaidemo&apos;&lt;/em&gt;, the image &lt;em&gt;&apos;pcaidemo/user-code-example&apos;&lt;/em&gt; appears in the list.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/harbor-pacidemo-user-code.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;This user code image, once pushed to the local &lt;em&gt;Harbor&lt;/em&gt; registry, will be used for the &lt;em&gt;Dagster&lt;/em&gt; user code deployment.&lt;/p&gt;
&lt;h4&gt;Deploy &lt;em&gt;Dagster&lt;/em&gt; framework&lt;/h4&gt;
&lt;p&gt;A revised version of the official &lt;a href=&quot;https://github.com/dagster-io/dagster/tree/master/helm&quot;&gt;&lt;em&gt;Dagster&lt;/em&gt; Helm charts&lt;/a&gt;, available in the GitHub repository &lt;a href=&quot;https://github.com/GuopingJia/pcai-helm-examples/tree/main/dagster&quot;&gt;pcai-helm-examples&lt;/a&gt;, provides HPE Private Cloud AI-compatible deployment configurations. This updated chart includes the required &lt;em&gt;Istio VirtualService&lt;/em&gt; and &lt;em&gt;Kyverno ClusterPolicy&lt;/em&gt; manifests to ensure alignment with PCAI’s service mesh and policy controls. It also incorporates modifications for pulling the user code image from the local &lt;em&gt;Harbor&lt;/em&gt; registry.&lt;/p&gt;
&lt;p&gt;Follow the steps below to deploy &lt;em&gt;Dagster&lt;/em&gt; into HPE Private Cloud AI using the &lt;em&gt;Import Framework&lt;/em&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;In the PCAI left navigation panel, select &lt;strong&gt;Tools &amp;#x26; Frameworks&lt;/strong&gt;. Click &lt;em&gt;&lt;strong&gt;Import Framework&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/pcai-tools-frameworks-import-framework.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;By following the Import Framework wizard workflow, &lt;em&gt;Dagster&lt;/em&gt; can be deployed into the PCAI environment within minutes.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/import-framework-dagster.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Run the following command to verify the &lt;em&gt;Dagster&lt;/em&gt; deployment in the namespace &apos;dagster&apos; of the PCAI K8s cluster.&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get all -n dagster
NAME                                                                  READY   STATUS    RESTARTS   AGE
pod/dagster-daemon-66c46866f8-sc4n7                                   1/1     Running   0          30h
pod/dagster-dagster-user-deployments-k8s-example-user-code-1-8k86gs   1/1     Running   0          30h
pod/dagster-dagster-webserver-66447f9b57-9rzmh                        1/1     Running   0          30h
pod/dagster-postgresql-0                                              1/1     Running   0          30h

NAME                                  TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
service/dagster-dagster-webserver     ClusterIP   10.96.1.6    &amp;#x3C;none&gt;        80/TCP     30h
service/dagster-postgresql            ClusterIP   10.96.3.66   &amp;#x3C;none&gt;        5432/TCP   30h
service/dagster-postgresql-headless   ClusterIP   None         &amp;#x3C;none&gt;        5432/TCP   30h
service/k8s-example-user-code-1       ClusterIP   10.96.3.96   &amp;#x3C;none&gt;        3030/TCP   30h

NAME                                                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/dagster-daemon                                             1/1     1            1           30h
deployment.apps/dagster-dagster-user-deployments-k8s-example-user-code-1   1/1     1            1           30h
deployment.apps/dagster-dagster-webserver                                  1/1     1            1           30h

NAME                                                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/dagster-daemon-66c46866f8                                             1         1         1       30h
replicaset.apps/dagster-dagster-user-deployments-k8s-example-user-code-1-85d9d56f44   1         1         1       30h
replicaset.apps/dagster-dagster-webserver-66447f9b57                                  1         1         1       30h

NAME                                  READY   AGE
statefulset.apps/dagster-postgresql   1/1     30h
&lt;/code&gt;&lt;/pre&gt;
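&lt;p&gt;Because the revised Helm chart also ships the &lt;em&gt;Istio VirtualService&lt;/em&gt; and &lt;em&gt;Kyverno ClusterPolicy&lt;/em&gt; manifests, you can optionally confirm that those objects were created as well. The commands below are generic checks; the exact resource names depend on the chart release.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get virtualservices.networking.istio.io -n dagster
$ kubectl get clusterpolicies.kyverno.io | grep -i dagster
&lt;/code&gt;&lt;/pre&gt;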
&lt;h3&gt;Access &lt;em&gt;Dagster&lt;/em&gt; framework&lt;/h3&gt;
&lt;p&gt;After &lt;em&gt;Dagster&lt;/em&gt; is deployed via the &lt;em&gt;Import Framework&lt;/em&gt;, an &lt;em&gt;imported&lt;/em&gt; &lt;em&gt;Dagster&lt;/em&gt; tile appears under &lt;strong&gt;Tools &amp;#x26; Frameworks&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/dagster.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h4&gt;Connect to &lt;em&gt;Dagster&lt;/em&gt; deployment&lt;/h4&gt;
&lt;p&gt;Clicking &lt;em&gt;&lt;strong&gt;Open&lt;/strong&gt;&lt;/em&gt; on the &lt;em&gt;Dagster&lt;/em&gt; tile launches the &lt;em&gt;Dagster&lt;/em&gt; Webserver and directs you to the &lt;strong&gt;Overview&lt;/strong&gt; page.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/dagster-overview.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Navigate to &lt;strong&gt;Deployment&lt;/strong&gt; and open the &lt;em&gt;Code locations&lt;/em&gt; tab. The entry &lt;em&gt;&apos;k8s-example-user-code-1&apos;&lt;/em&gt; shows the user code image &lt;em&gt;&apos;harbor.ai-application.pcai0104.ld7.hpecolo.net/pcaidemo/user-code-example:1.12.19&apos;&lt;/em&gt; pulled from the &lt;em&gt;Harbor&lt;/em&gt; registry.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/dagster-deployment.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In the &lt;em&gt;Harbor&lt;/em&gt; console, under the &lt;em&gt;Logs&lt;/em&gt; tab of the project &lt;em&gt;&apos;pcaidemo&apos;&lt;/em&gt;, you can see the artifact pull operations for the image &lt;em&gt;&apos;pcaidemo/user-code-example&apos;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/harbor-audit-logs.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h4&gt;Materialize &lt;em&gt;Dagster&lt;/em&gt; assets&lt;/h4&gt;
&lt;p&gt;In the &lt;em&gt;Dagster&lt;/em&gt; Webserver, navigate to &lt;strong&gt;Catalog&lt;/strong&gt;, select the asset &lt;em&gt;&apos;iris_dataset_size&apos;&lt;/em&gt;, and click the &lt;em&gt;&lt;strong&gt;Materialize selected&lt;/strong&gt;&lt;/em&gt; button. This triggers the materialization process for the asset &lt;em&gt;&apos;iris_dataset_size&apos;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/dagster-catalog.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Run the following command to view the K8s job that was started on the cluster to materialize the asset.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get jobs -n dagster
NAME                                               STATUS     COMPLETIONS   DURATION   AGE
dagster-run-044deadf-141d-4b25-89b1-dd74f0a44f89   Complete   1/1           8s         25s
&lt;/code&gt;&lt;/pre&gt;
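&lt;p&gt;If you want to inspect the run output, you can view the logs of the completed job (the job name comes from the listing above).&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl logs -n dagster job/dagster-run-044deadf-141d-4b25-89b1-dd74f0a44f89
&lt;/code&gt;&lt;/pre&gt;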
&lt;p&gt;Once the K8s job completes, the asset &lt;em&gt;&apos;iris_dataset_size&apos;&lt;/em&gt; appears with a &lt;em&gt;Materialized&lt;/em&gt; status in the &lt;strong&gt;Catalog&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/dagster-catalog-materialization.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Click the asset &lt;em&gt;&apos;iris_dataset_size&apos;&lt;/em&gt; to view its overview.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/dagster-catalog-materialization-overview.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;With this materialized data asset, you can begin capturing how data is transformed, joined, filtered, and aggregated throughout each stage of the data pipeline, providing a complete record of the data’s journey and its lineage. For more details on these advanced capabilities, refer to the &lt;a href=&quot;https://docs.dagster.io/&quot;&gt;&lt;em&gt;Dagster&lt;/em&gt; documentation&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Click &lt;em&gt;&lt;strong&gt;Wipe materializations&lt;/strong&gt;&lt;/em&gt; from the selected asset to remove its materializations.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/dagster-catalog-materialization-wipe.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;This blog post explored the pre-curated orchestration toolchain available within PCAI and introduced &lt;em&gt;Dagster&lt;/em&gt; as a modern, asset-centric framework that can be integrated seamlessly into the HPE Private Cloud AI environment via the &lt;em&gt;Import Framework&lt;/em&gt;. When deployed alongside existing orchestration services such as &lt;em&gt;Airflow&lt;/em&gt;, &lt;em&gt;Kubeflow&lt;/em&gt;, and &lt;em&gt;Ray&lt;/em&gt;, &lt;em&gt;Dagster&lt;/em&gt; operates as an additional, fully compatible orchestration layer within PCAI. Its modular architecture and clear separation between infrastructure and user code allow all user-defined pipeline definitions to be deployed and executed locally within the HPE Private Cloud AI environment, ensuring strong data sovereignty guarantees. By aligning naturally with PCAI&apos;s service model and operational patterns, &lt;em&gt;Dagster&lt;/em&gt; enriches the platform with a clean, asset-oriented orchestration approach that enhances pipeline reliability while remaining fully compliant with PCAI’s security and governance expectations.&lt;/p&gt;
&lt;p&gt;Please keep coming back to the &lt;a href=&quot;https://developer.hpe.com/blog/&quot;&gt;HPE Developer Community blog&lt;/a&gt; to learn more about HPE Private Cloud AI and get more ideas on how you can use it in your everyday operations.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Observe any observable using HPE OpsRamp — Part 2: Setting up the stack]]></title><description><![CDATA[Part 2 of a series on vendor-neutral observability with HPE OpsRamp and OpenTelemetry Introduction In Part 1, I described the full…]]></description><link>https://developer.hpe.com/siva-observe-any-observable-using-opsramp-—-part-2-setting-up-the-stack/</link><guid isPermaLink="false">https://developer.hpe.com/siva-observe-any-observable-using-opsramp-—-part-2-setting-up-the-stack/</guid><pubDate>Fri, 17 Apr 2026 12:36:31 GMT</pubDate><content:encoded>&lt;p&gt;&lt;em&gt;Part 2 of a series on vendor-neutral observability with HPE OpsRamp and OpenTelemetry&lt;/em&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;In &lt;a href=&quot;https://developer.hpe.com/blog/siva-observe-any-observable-using-opsramp-open-telemetry-as-the-universal-ingestion-standard/&quot;&gt;Part 1&lt;/a&gt;, I described the full architecture of the observability stack — five components wired together through open protocols to deliver end-to-end telemetry from a Redfish hardware domain into HPE OpsRamp. Now it is time to build that stack and verify every component before sending a single signal anywhere.&lt;/p&gt;
&lt;p&gt;This article covers installation, configuration, and verification of every component in the local stack: &lt;a href=&quot;https://github.com/DMTF/Redfish-Interface-Emulator&quot;&gt;the Redfish emulator&lt;/a&gt;, &lt;a href=&quot;https://github.com/open-telemetry&quot;&gt;the OpenTelemetry (OTel) Collector&lt;/a&gt;, &lt;a href=&quot;https://github.com/prometheus/prometheus&quot;&gt;Prometheus&lt;/a&gt;, &lt;a href=&quot;https://github.com/grafana/grafana&quot;&gt;Grafana&lt;/a&gt;, and &lt;a href=&quot;https://github.com/jaegertracing/jaeger&quot;&gt;Jaeger&lt;/a&gt;. I deliberately exclude the HPE OpsRamp configuration from this article — I want to establish that the local stack is fully functional and producing real data before introducing the cloud management plane. That separation of concerns makes troubleshooting dramatically easier.&lt;/p&gt;
&lt;p&gt;By the end of this article, you will have a running stack that you can interact with through five different browser UIs and a smoke test that validates every component automatically.&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;Prerequisites: What you need before starting&lt;/h2&gt;
&lt;p&gt;Before beginning, you will need an EC2 instance running Ubuntu 22.04 or later with the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;At least 4GB RAM and 2 vCPUs (the full stack with all containers needs headroom)&lt;/li&gt;
&lt;li&gt;Docker and Docker Compose installed&lt;/li&gt;
&lt;li&gt;Python 3.11 or later with &lt;code&gt;uv&lt;/code&gt; package manager&lt;/li&gt;
&lt;li&gt;Ports 3000, 5000, 9090, 16686, 4317, 4318, 8888, 8889, 55679, 13133 open in your security group&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I use EC2 Instance Connect in the browser as my terminal throughout this series, so there is no local SSH key management required.&lt;/p&gt;
&lt;h3&gt;Installing uv&lt;/h3&gt;
&lt;p&gt;&lt;code&gt;uv&lt;/code&gt; is the modern Python package manager I use throughout this project. &lt;a href=&quot;https://github.com/astral-sh/uv&quot;&gt;uv&lt;/a&gt; is significantly faster than pip and handles virtual environments cleanly.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;curl -LsSf https://astral.sh/uv/install.sh | sh
source ~/.bashrc
uv --version
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Installing Docker on Ubuntu&lt;/h3&gt;
&lt;p&gt;If Docker is not already installed, use the convenience script:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker ubuntu
newgrp docker
docker --version
docker compose version
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;h2&gt;The project structure&lt;/h2&gt;
&lt;p&gt;I organize everything under a single directory to keep the two PoC versions isolated from each other.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;~/sivabala/siva-sdk/
├── poc-v1/                    ← original PoC (v1)
│   └── ...
├── poc-v2/                    ← this article&apos;s stack
│   ├── docker-compose.yml
│   ├── otel-collector-config.yaml
│   ├── prometheus.yml
│   ├── grafana/
│   │   └── provisioning/
│   │       ├── datasources/
│   │       └── dashboards/
│   └── opsramp-sdk/
│       ├── pyproject.toml
│       └── src/opsramp_sdk/redfish/
│           ├── agent.py
│           ├── event_listener.py
│           ├── opsramp_resources.py
│           └── resource_context.py
├── start-v1.sh
├── start-v2.sh
├── stop-all.sh
└── status.sh
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I first create the base directory structure:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;mkdir -p ~/sivabala/siva-sdk/poc-v2/opsramp-sdk/src/opsramp_sdk/redfish
mkdir -p ~/sivabala/siva-sdk/poc-v2/grafana/provisioning/datasources
mkdir -p ~/sivabala/siva-sdk/poc-v2/grafana/provisioning/dashboards
mkdir -p ~/sivabala/siva-sdk/poc-v2/grafana/dashboards
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;h2&gt;Component 1: Docker Compose — the orchestration layer&lt;/h2&gt;
&lt;p&gt;All five local components run in Docker containers managed by a single &lt;code&gt;docker-compose.yml&lt;/code&gt;. This file defines the services, their relationships, port mappings, and shared network.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;# docker-compose.yml
services:

  # 1. Redfish emulator — hardware simulation
  redfish-emulator:
    image: dmtf/redfish-interface-emulator:latest
    container_name: redfish-emulator
    ports:
      - &quot;5000:5000&quot;
    restart: unless-stopped
    healthcheck:
      test: [&quot;CMD&quot;, &quot;curl&quot;, &quot;-sf&quot;, &quot;http://127.0.0.1:5000/redfish/v1/&quot;]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 15s
    networks:
      - redfish-poc

  # 2. OpenTelemetry Collector — signal processing and routing
  otel-collector:
    image: otel/opentelemetry-collector-contrib:latest
    container_name: otel-collector
    command: [&quot;--config=/etc/otel/config.yaml&quot;]
    volumes:
      - ./otel-collector-config.yaml:/etc/otel/config.yaml:ro
    ports:
      - &quot;4317:4317&quot;    # OTLP gRPC
      - &quot;4318:4318&quot;    # OTLP HTTP
      - &quot;8888:8888&quot;    # Collector self-metrics
      - &quot;8889:8889&quot;    # Prometheus scrape endpoint
      - &quot;55679:55679&quot;  # zpages debug UI
      - &quot;13133:13133&quot;  # health check
    restart: unless-stopped
    depends_on:
      redfish-emulator:
        condition: service_healthy
    networks:
      - redfish-poc

  # 3. Jaeger — distributed trace visualization
  jaeger:
    image: jaegertracing/all-in-one:latest
    container_name: jaeger
    environment:
      - COLLECTOR_OTLP_ENABLED=true
    ports:
      - &quot;16686:16686&quot;  # Jaeger UI
      - &quot;14317:4317&quot;   # OTLP gRPC for Jaeger
    restart: unless-stopped
    networks:
      - redfish-poc

  # 4. Prometheus — metrics storage and remote write
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    command:
      - &quot;--config.file=/etc/prometheus/prometheus.yml&quot;
      - &quot;--storage.tsdb.path=/prometheus&quot;
      - &quot;--web.enable-lifecycle&quot;
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - prometheus_data:/prometheus
    ports:
      - &quot;9090:9090&quot;
    restart: unless-stopped
    depends_on:
      - otel-collector
    networks:
      - redfish-poc

  # 5. Grafana — local dashboard visualization
  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
      - GF_USERS_ALLOW_SIGN_UP=false
    volumes:
      - grafana_data:/var/lib/grafana
      - ./grafana/provisioning:/etc/grafana/provisioning:ro
      - ./grafana/dashboards:/etc/grafana/provisioning/dashboards:ro
    ports:
      - &quot;3000:3000&quot;
    restart: unless-stopped
    depends_on:
      - prometheus
      - jaeger
    networks:
      - redfish-poc

volumes:
  prometheus_data:
  grafana_data:

networks:
  redfish-poc:
    driver: bridge
&lt;/code&gt;&lt;/pre&gt;
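&lt;p&gt;Before starting anything, it is worth confirming that the Compose file parses cleanly. This step is optional; &lt;code&gt;docker compose config&lt;/code&gt; only validates and renders the file without starting any containers.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;cd ~/sivabala/siva-sdk/poc-v2
sudo docker compose config --quiet &amp;#x26;&amp;#x26; echo &quot;docker-compose.yml is valid&quot;
&lt;/code&gt;&lt;/pre&gt;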
&lt;p&gt;I start all containers:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;cd ~/sivabala/siva-sdk/poc-v2
sudo docker compose up -d
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;h2&gt;Component 2: OpenTelemetry Collector — the pipeline configuration&lt;/h2&gt;
&lt;p&gt;The OpenTelemetry (OTel) Collector configuration defines receivers, processors, exporters, and pipelines. I will focus here on the local stack configuration — the HPE OpsRamp exporters are introduced in Part 4.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;# otel-collector-config.yaml

extensions:
  zpages:
    endpoint: 0.0.0.0:55679
  health_check:
    endpoint: 0.0.0.0:13133
    path: &quot;/health&quot;

receivers:
  # Receives OTLP signals from the Python agent
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

  # Polls Redfish endpoints directly for health metrics
  httpcheck:
    targets:
      - endpoint: http://redfish-emulator:5000/redfish/v1/
        method: GET
      - endpoint: http://redfish-emulator:5000/redfish/v1/Chassis/Chassis-1/Power
        method: GET
      - endpoint: http://redfish-emulator:5000/redfish/v1/Chassis/Chassis-1/Thermal
        method: GET
    collection_interval: 15s

  # Scrapes the Collector&apos;s own metrics
  prometheus/self:
    config:
      scrape_configs:
        - job_name: opentelemetry-collector
          scrape_interval: 30s
          static_configs:
            - targets:
                - localhost:8888

processors:
  resource:
    attributes:
      - action: insert
        key: deployment.environment
        value: &quot;redfish-poc-v2&quot;
      - action: insert
        key: resource.type
        value: &quot;RESOURCE&quot;

  batch:
    send_batch_size: 512
    timeout: 5s

exporters:
  debug:
    verbosity: normal

  prometheus:
    endpoint: &quot;0.0.0.0:8889&quot;
    namespace: &quot;redfish&quot;
    send_timestamps: true
    metric_expiration: 3m
    resource_to_telemetry_conversion:
      enabled: true

  otlp/jaeger:
    endpoint: &quot;jaeger:4317&quot;
    tls:
      insecure: true

service:
  extensions: [zpages, health_check]
  pipelines:
    metrics/agent:
      receivers:  [otlp]
      processors: [resource, batch]
      exporters:  [debug, prometheus]
    metrics/httpcheck:
      receivers:  [httpcheck]
      processors: [resource, batch]
      exporters:  [debug, prometheus]
    metrics/self:
      receivers:  [prometheus/self]
      processors: [batch]
      exporters:  [prometheus]
    logs:
      receivers:  [otlp]
      processors: [resource, batch]
      exporters:  [debug]
    traces:
      receivers:  [otlp]
      processors: [resource, batch]
      exporters:  [debug, otlp/jaeger]
  telemetry:
    logs:
      level: warn
&lt;/code&gt;&lt;/pre&gt;
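&lt;p&gt;Recent Collector builds include a &lt;code&gt;validate&lt;/code&gt; subcommand that catches configuration mistakes before the container starts crash-looping. The sketch below runs it against the file on the host using the same contrib image; adjust the image tag if you pin a specific version.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;docker run --rm \
  -v &quot;$PWD/otel-collector-config.yaml:/etc/otel/config.yaml:ro&quot; \
  otel/opentelemetry-collector-contrib:latest validate --config=/etc/otel/config.yaml
&lt;/code&gt;&lt;/pre&gt;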
&lt;hr&gt;
&lt;h2&gt;Component 3: Prometheus — metrics configuration&lt;/h2&gt;
&lt;p&gt;The &lt;code&gt;prometheus.yml&lt;/code&gt; configures Prometheus to scrape the OTel Collector&apos;s Prometheus exporter endpoint. The &lt;code&gt;remote_write&lt;/code&gt; section is added in Part 4.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;# prometheus.yml
global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: &quot;otel-collector&quot;
    static_configs:
      - targets: [&quot;otel-collector:8889&quot;]
    metric_relabel_configs:
      - source_labels: [__name__]
        regex: &quot;redfish_.*&quot;
        action: keep
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;h2&gt;Component 4: Grafana — datasource provisioning&lt;/h2&gt;
&lt;p&gt;Grafana is pre-configured with Prometheus and Jaeger datasources through provisioning files, eliminating manual setup.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;# grafana/provisioning/datasources/datasources.yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    uid: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true
    editable: true

  - name: Jaeger
    type: jaeger
    uid: jaeger
    access: proxy
    url: http://jaeger:16686
    editable: true
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;h2&gt;Component 5: The Python SDK — project setup&lt;/h2&gt;
&lt;p&gt;The Python agent uses &lt;code&gt;uv&lt;/code&gt; for dependency management. The &lt;code&gt;pyproject.toml&lt;/code&gt; defines all OTel dependencies.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-toml&quot;&gt;# opsramp-sdk/pyproject.toml
[project]
name = &quot;opsramp-sdk&quot;
version = &quot;2.0.0&quot;
description = &quot;OpsRamp SDK v2 — OTel-native resource and telemetry ingestion&quot;
requires-python = &quot;&gt;=3.11&quot;
dependencies = [
    &quot;requests&gt;=2.31.0&quot;,
    &quot;opentelemetry-api&gt;=1.24.0&quot;,
    &quot;opentelemetry-sdk&gt;=1.24.0&quot;,
    &quot;opentelemetry-exporter-otlp-proto-grpc&gt;=1.24.0&quot;,
    &quot;opentelemetry-instrumentation-requests&gt;=0.45b0&quot;,
    &quot;python-dotenv&gt;=1.0&quot;,
]

[project.scripts]
opsramp-redfish-agent-v2  = &quot;opsramp_sdk.redfish.agent:main&quot;
opsramp-redfish-events-v2 = &quot;opsramp_sdk.redfish.event_listener:main&quot;
opsramp-redfish-provision = &quot;opsramp_sdk.redfish.opsramp_resources:provision_all_resources&quot;

[build-system]
requires = [&quot;uv_build&gt;=0.10.9,&amp;#x3C;0.11.0&quot;]
build-backend = &quot;uv_build&quot;

[tool.uv]
package = true

[dependency-groups]
dev = [
    &quot;pytest&gt;=8.0&quot;,
    &quot;pytest-timeout&gt;=2.3&quot;,
    &quot;python-dotenv&gt;=1.0&quot;,
]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Install dependencies:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;cd ~/sivabala/siva-sdk/poc-v2/opsramp-sdk
uv sync
&lt;/code&gt;&lt;/pre&gt;
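&lt;p&gt;A quick way to confirm that the OpenTelemetry packages landed in the project environment (the package names match the dependencies declared in &lt;code&gt;pyproject.toml&lt;/code&gt;):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;uv pip list | grep -i opentelemetry
&lt;/code&gt;&lt;/pre&gt;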
&lt;hr&gt;
&lt;h2&gt;Environment configuration: The .env file&lt;/h2&gt;
&lt;p&gt;All credentials and endpoints are stored in a &lt;code&gt;.env&lt;/code&gt; file that is loaded automatically by each Python module.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# poc-v2/.env

# HPE OpsRamp endpoints
OPSRAMP_API_URL=https://api.opsramp.com
OPSRAMP_TENANT_ID=&amp;#x3C;your-tenant-id&gt;

# Resource API credentials (scope: global:manage) — used for provisioning
OPSRAMP_CLIENT_ID=&amp;#x3C;client-id&gt;
OPSRAMP_CLIENT_SECRET=&amp;#x3C;client-secret&gt;

# Collector credentials (scope: logs:write, metrics:write) — used by OTel Collector
OPSRAMP_COLLECTOR_CLIENT_ID=&amp;#x3C;collector-client-id&gt;
OPSRAMP_COLLECTOR_CLIENT_SECRET=&amp;#x3C;collector-client-secret&gt;

# Redfish emulator
REDFISH_BASE_URL=http://localhost:5000

# OTel Collector
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317

# Agent behaviour
POLL_INTERVAL_SEC=15
LOG_LEVEL=INFO
SKIP_PROVISION=false

# Event listener
EVENT_SIMULATOR_ENABLED=true
SIM_INTERVAL_SEC=30
&lt;/code&gt;&lt;/pre&gt;
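&lt;p&gt;The Python modules load this file automatically through &lt;code&gt;python-dotenv&lt;/code&gt;, but if you ever want the same values in your interactive shell (for ad-hoc &lt;code&gt;curl&lt;/code&gt; tests, for example), a standard pattern is:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;set -a                                      # export every variable assigned from here on
source ~/sivabala/siva-sdk/poc-v2/.env
set +a                                      # stop auto-exporting
&lt;/code&gt;&lt;/pre&gt;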
&lt;hr&gt;
&lt;h2&gt;Verification: Confirming every component is healthy&lt;/h2&gt;
&lt;p&gt;I verify each component step by step and finish with a comprehensive smoke test that validates everything automatically. Here is what to check at each stage:&lt;/p&gt;
&lt;h3&gt;Step 1: Verify Docker containers&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo docker compose ps
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Expected output — all five containers running:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;NAME               STATUS
redfish-emulator   Up (healthy)
otel-collector     Up
jaeger             Up
prometheus         Up
grafana            Up
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If &lt;code&gt;otel-collector&lt;/code&gt; shows &lt;code&gt;Restarting&lt;/code&gt;, check its logs:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo docker logs otel-collector 2&gt;&amp;#x26;1 | grep -i error | tail -10
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The most common issue is an invalid &lt;code&gt;address:&lt;/code&gt; field in the &lt;code&gt;telemetry.metrics&lt;/code&gt; section of the collector config. Remove that field if present.&lt;/p&gt;
&lt;h3&gt;Step 2: Verify the Redfish emulator&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;curl -s http://localhost:5000/redfish/v1/ | python3 -m json.tool | head -10
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Expected: JSON response with &lt;code&gt;RedfishVersion&lt;/code&gt;, &lt;code&gt;Chassis&lt;/code&gt;, &lt;code&gt;Systems&lt;/code&gt;, &lt;code&gt;Managers&lt;/code&gt; links.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Verify all 13 chassis are present
curl -s http://localhost:5000/redfish/v1/Chassis | \
  python3 -c &quot;import sys,json; d=json.load(sys.stdin); print(f&apos;Chassis count: {d[\&quot;Members@odata.count\&quot;]}&apos;)&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Expected: &lt;code&gt;Chassis count: 13&lt;/code&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Verify power data for Chassis-1
curl -s http://localhost:5000/redfish/v1/Chassis/Chassis-1/Power | \
  python3 -c &quot;
import sys,json
d=json.load(sys.stdin)
for pc in d.get(&apos;PowerControl&apos;,[]):
    print(f&apos;Consumed: {pc.get(\&quot;PowerConsumedWatts\&quot;)}W&apos;)
&quot;
&lt;/code&gt;&lt;/pre&gt;
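&lt;p&gt;The thermal endpoint can be checked the same way. The field names below follow the standard Redfish &lt;code&gt;Thermal&lt;/code&gt; schema, so they should match what the emulator returns:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Verify thermal readings for Chassis-1
curl -s http://localhost:5000/redfish/v1/Chassis/Chassis-1/Thermal | \
  python3 -c &quot;
import sys,json
d=json.load(sys.stdin)
for t in d.get(&apos;Temperatures&apos;,[]):
    print(f&apos;{t.get(\&quot;Name\&quot;)}: {t.get(\&quot;ReadingCelsius\&quot;)} C&apos;)
&quot;
&lt;/code&gt;&lt;/pre&gt;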
&lt;h3&gt;Step 3: Verify the OTel Collector&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Health check endpoint
curl -s http://localhost:13133/health
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Expected: &lt;code&gt;{&quot;status&quot;:&quot;Server available&quot;}&lt;/code&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# zpages pipeline status — shows all active pipelines
curl -s http://localhost:55679/debug/pipelinez | grep -o &quot;pipeline[^&amp;#x3C;]*&quot; | head -10
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Verify httpcheck metrics are flowing
curl -s http://localhost:8889/metrics | grep &quot;redfish_httpcheck_status&quot; | head -5
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Expected: Multiple lines with &lt;code&gt;redfish_httpcheck_status{...} 1&lt;/code&gt; (1 = endpoint up)&lt;/p&gt;
&lt;h3&gt;Step 4: Verify Prometheus&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Check scrape targets are healthy
curl -s http://localhost:9090/api/v1/targets | \
  python3 -c &quot;
import sys,json
d=json.load(sys.stdin)
for t in d[&apos;data&apos;][&apos;activeTargets&apos;]:
    print(f&apos;{t[\&quot;health\&quot;]:10s} {t[\&quot;scrapeUrl\&quot;]}&apos;)
&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Expected: &lt;code&gt;up&lt;/code&gt; for all targets.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Query httpcheck metrics
curl -s &quot;http://localhost:9090/api/v1/query?query=redfish_httpcheck_status&quot; | \
  python3 -c &quot;
import sys,json
d=json.load(sys.stdin)
print(f&apos;httpcheck series: {len(d[\&quot;data\&quot;][\&quot;result\&quot;])}&apos;)
&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Step 5: Verify Grafana&lt;/h3&gt;
&lt;p&gt;Open &lt;code&gt;http://&amp;#x3C;EC2-IP&gt;:3000&lt;/code&gt; in your browser. Log in with &lt;code&gt;admin&lt;/code&gt; / &lt;code&gt;admin&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Navigate to &lt;strong&gt;Connections → Data sources&lt;/strong&gt; and verify both Prometheus and Jaeger datasources show a green &quot;Data source connected&quot; status.&lt;/p&gt;
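&lt;p&gt;If you prefer to stay in the terminal, the same check can be done against the Grafana HTTP API using the default admin credentials configured in &lt;code&gt;docker-compose.yml&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;curl -s -u admin:admin http://localhost:3000/api/datasources | \
  python3 -c &quot;
import sys,json
for ds in json.load(sys.stdin):
    print(f&apos;{ds[\&quot;type\&quot;]:12s} {ds[\&quot;name\&quot;]:12s} {ds[\&quot;url\&quot;]}&apos;)
&quot;
&lt;/code&gt;&lt;/pre&gt;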
&lt;h3&gt;Step 6: Run the automated smoke test&lt;/h3&gt;
&lt;p&gt;I wrote a comprehensive shell script that validates all endpoints automatically:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;chmod +x ~/sivabala/siva-sdk/poc-v2/smoke-test.sh
bash ~/sivabala/siva-sdk/poc-v2/smoke-test.sh
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The script checks every component and reports pass/fail with HTTP status codes. A clean stack shows zero failures before starting the agent.&lt;/p&gt;
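&lt;p&gt;The full script is not reproduced here, but a minimal sketch of the idea looks like the following: a list of endpoints, each probed with &lt;code&gt;curl&lt;/code&gt;, with the HTTP status compared against an expected value. The endpoint list is illustrative; the real script covers more checks.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;#!/usr/bin/env bash
# smoke-test.sh: minimal sketch of the per-endpoint check
set -u
fail=0

check() {
  local name=$1 url=$2 expected=${3:-200}
  code=$(curl -s -o /dev/null -w &quot;%{http_code}&quot; &quot;$url&quot;)
  if [ &quot;$code&quot; = &quot;$expected&quot; ]; then
    echo &quot;PASS  $name ($code)&quot;
  else
    echo &quot;FAIL  $name (got $code, expected $expected)&quot;
    fail=1
  fi
}

check &quot;Redfish service root&quot; http://localhost:5000/redfish/v1/
check &quot;Collector health&quot;     http://localhost:13133/health
check &quot;Collector metrics&quot;    http://localhost:8889/metrics
check &quot;Prometheus ready&quot;     http://localhost:9090/-/ready
check &quot;Grafana health&quot;       http://localhost:3000/api/health
check &quot;Jaeger UI&quot;            http://localhost:16686/
exit $fail
&lt;/code&gt;&lt;/pre&gt;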
&lt;hr&gt;
&lt;h2&gt;The zpages debug UI: Live pipeline visibility&lt;/h2&gt;
&lt;p&gt;One of the most useful tools for understanding what the OTel Collector is doing in real time is the zpages debug interface. I added this specifically for demonstration purposes.&lt;/p&gt;
&lt;p&gt;Open &lt;code&gt;http://&amp;#x3C;EC2-IP&gt;:55679/debug/pipelinez&lt;/code&gt; in your browser. You will see the complete pipeline topology — every receiver, processor, and exporter — with live counters showing signals flowing through each stage.&lt;/p&gt;
&lt;p&gt;The key pages are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;/debug/pipelinez&lt;/code&gt; — Pipeline status with signal counts&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/debug/tracez&lt;/code&gt; — Live trace sampling — watch spans arrive in real time&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/debug/servicez&lt;/code&gt; — Collector version and running extensions&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/debug/rpcz&lt;/code&gt; — gRPC connection stats from the agent&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This is the single best tool for demonstrating the OTel Collector pipeline to an audience because it shows the data moving in real time without requiring any additional tooling.&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;Browser access summary&lt;/h2&gt;
&lt;p&gt;After a successful stack startup, you should have access to five browser UIs:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;UI&lt;/th&gt;
&lt;th&gt;URL&lt;/th&gt;
&lt;th&gt;What it shows&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Redfish API&lt;/td&gt;
&lt;td&gt;&lt;code&gt;http://&amp;#x3C;EC2-IP&gt;:5000/redfish/v1/&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Raw hardware data&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;zpages&lt;/td&gt;
&lt;td&gt;&lt;code&gt;http://&amp;#x3C;EC2-IP&gt;:55679/debug/pipelinez&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Live pipeline&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Grafana&lt;/td&gt;
&lt;td&gt;&lt;code&gt;http://&amp;#x3C;EC2-IP&gt;:3000&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Dashboards&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Jaeger&lt;/td&gt;
&lt;td&gt;&lt;code&gt;http://&amp;#x3C;EC2-IP&gt;:16686&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Trace waterfall&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Prometheus&lt;/td&gt;
&lt;td&gt;&lt;code&gt;http://&amp;#x3C;EC2-IP&gt;:9090&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Metrics queries&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;hr&gt;
&lt;h2&gt;Conclusion and what comes next&lt;/h2&gt;
&lt;p&gt;The local stack is now running and verified. Using the steps above, I now have five healthy containers, a working Redfish emulator producing real hardware data, an OTel Collector routing signals through its pipelines, and Prometheus, Grafana, and Jaeger ready to visualize that data.&lt;/p&gt;
&lt;p&gt;Importantly, I have verified each component independently before combining them with the Python agent or HPE OpsRamp. This incremental verification approach is what makes debugging tractable when something goes wrong.&lt;/p&gt;
&lt;p&gt;In Part 3, I will introduce two powerful testing tools — &lt;code&gt;otel-cli&lt;/code&gt; and &lt;code&gt;promtool&lt;/code&gt; — and show how to use them to test the Redfish emulator, validate the collector pipeline, and trigger test traces manually. These tools let you probe every part of the stack without writing a single line of Python.&lt;/p&gt;
&lt;p&gt;Stay tuned to the &lt;a href=&quot;https://developer.hpe.com/blog/&quot;&gt;HPE Developer Community blog&lt;/a&gt; for more insights on &lt;a href=&quot;https://www.hpe.com/us/en/opsramp.html&quot;&gt;HPE OpsRamp&lt;/a&gt; (Hybrid Cloud Observability) and practical ideas to apply it in your daily operations.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Reflections on 30 Years of HPC Programming: So many hardware advances, so little adoption of new languages]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/reflections-on-30-years-of-hpc-programming-so-many-hardware-advances-so-little-adoption-of-new-languages/</link><guid isPermaLink="false">https://developer.hpe.com/reflections-on-30-years-of-hpc-programming-so-many-hardware-advances-so-little-adoption-of-new-languages/</guid><pubDate>Thu, 09 Apr 2026 22:39:07 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Explore HPE Private Cloud AI tutorials, managing application accounts with iLOrest, Chapel news, & PyCentral!]]></title><link>https://developer.hpe.com/2026-april-06/</link><guid isPermaLink="false">https://developer.hpe.com/2026-april-06/</guid><pubDate>Mon, 06 Apr 2026 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[AI DevCon 2026: Why the future of enterprise AI is a data and storage engineering problem]]></title><description><![CDATA[AI DevCon 2026, held on 12–13 March 2026 at the NIMHANS Convention Centre in Bengaluru, brought together developers, data engineers…]]></description><link>https://developer.hpe.com/ai-devcon-2026-why-the-future-of-enterprise-ai-is-a-data-and-storage-engineering-problem/</link><guid isPermaLink="false">https://developer.hpe.com/ai-devcon-2026-why-the-future-of-enterprise-ai-is-a-data-and-storage-engineering-problem/</guid><pubDate>Fri, 03 Apr 2026 20:51:38 GMT</pubDate><content:encoded>&lt;p&gt;AI DevCon 2026, held on 12–13 March 2026 at the NIMHANS Convention Centre in Bengaluru, brought together developers, data engineers, platform architects, DevOps leaders, and AI practitioners working at every layer of the stack. While the conference agenda spanned topics such as GenAI, agentic AI, DevOps with AI, edge inference, and open-source tooling, one theme consistently surfaced across keynotes, technical talks, and hands‑on sessions: AI systems succeed or fail based far more on data engineering and infrastructure than on model sophistication.&lt;/p&gt;
&lt;p&gt;Across both days, speakers repeatedly demonstrated that many production AI failures—hallucinations, incorrect answers, unreliable agents, or non‑deterministic behavior—are not caused by faulty models. Instead, they originate upstream, in how data is stored, curated, versioned, refreshed, and retrieved. For organizations building enterprise AI systems, this reframes the challenge: AI reliability has become a storage‑adjacent engineering concern.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;From models to momentum: The shift toward data-centric AI engineering&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Several sessions, particularly in the &lt;strong&gt;Succeeding with AI and GenAI &amp;#x26; LLMs&lt;/strong&gt; tracks, emphasized the industry’s shift away from model‑centric thinking. Large language models (LLMs) are increasingly commoditized—open, swappable, or accessible via APIs. What differentiates successful AI deployments is no longer which model is used, but how effectively organizations manage the data that feeds those models.&lt;/p&gt;
&lt;p&gt;Presenters traced the evolution of engineering pipelines across eras. Traditional analytics relied on structured data warehouses. Machine learning systems introduced feature pipelines and offline training loops. In contrast, modern GenAI systems depend on knowledge pipelines—continuous data ingestion, semantic chunking, embedding generation, vector indexing, and retrieval‑augmented generation (RAG). Each step builds directly on stored data, and each step is sensitive to data quality, freshness, and lineage.&lt;/p&gt;
&lt;p&gt;The key insight shared repeatedly was simple but profound: LLMs reason over whatever context they are given—and they do so with complete confidence, even when that context is outdated, duplicated, or wrong.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Where GenAI systems actually break in production&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Real production examples shared at AI DevCon highlighted how GenAI systems fail in subtle but damaging ways. Many RAG implementations rely on naïve chunking strategies—splitting documents by token count rather than by semantic structure. This fragments meaning, separates definitions from constraints, and causes incomplete context to be retrieved at inference time. Weak or outdated embeddings further amplify the problem, surfacing text that is lexically similar but semantically incorrect.&lt;/p&gt;
&lt;p&gt;Perhaps the most concerning failure mode discussed was knowledge freshness drift. Policies, documentation, and procedures evolve continuously, yet embeddings and vector indexes are often refreshed infrequently. As a result, AI systems confidently return answers that reflect last quarter’s truth, not today’s reality. One real‑world case discussed involved an airline chatbot providing incorrect refund guidance based on outdated stored content—an error that ultimately resulted in legal liability.&lt;/p&gt;
&lt;p&gt;These failures make an uncomfortable truth unavoidable: RAG systems are only as reliable as the stored data they retrieve, and storage systems that lack strong versioning, provenance, and refresh guarantees directly undermine AI correctness.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The emergence of semantic data quality pipelines&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;To address these challenges, multiple speakers introduced what is best described as a semantic data quality layer—a new architectural component that sits between raw storage and AI models. Unlike traditional ETL pipelines focused on schema and format, this layer focuses on meaning.&lt;/p&gt;
&lt;p&gt;Incoming data is semantically validated, deduplicated, enriched, and scored before it is allowed to influence AI outputs. Low‑value or low‑trust content is filtered out entirely. Embedding similarity is used not just for retrieval, but also for semantic deduplication and drift detection. In practice, presenters showed that more than half of raw data often never reaches the AI model, because it is irrelevant, redundant, or insufficiently trustworthy.&lt;/p&gt;
&lt;p&gt;This reframes data engineering for AI as a quality‑first discipline. Instead of maximizing data volume, successful teams aggressively curate data relevance, ensuring that models see less data—but better data.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Agentic AI raises the stakes for infrastructure&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Agentic AI, a major focus of Day 2, further increases pressure on data and storage foundations. Unlike traditional AI applications, agents are long‑running, stateful, and action‑oriented. They retrieve context, call tools, store intermediate reasoning artifacts, and revisit prior decisions. This creates access patterns that are far more dynamic than classic read‑heavy inference workloads.&lt;/p&gt;
&lt;p&gt;From an infrastructure perspective, agents introduce continuous read‑write cycles, ephemeral knowledge states, and frequent context recomposition. Storage systems must handle high concurrency, small object access, frequent updates, and strong consistency—while still supporting compliance, auditability, and isolation. The message from the conference was clear: agentic AI magnifies any weaknesses in the underlying data platform.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Why this matters deeply for HPE Storage&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;For HPE Storage teams, the themes from AI DevCon 2026 are directly relevant. Nearly every AI failure described at the conference maps back to core storage concerns: data freshness, version control, metadata quality, lineage tracking, governance, and performance consistency at scale. In the GenAI era, storage is no longer a passive repository—it actively shapes AI outcomes.&lt;/p&gt;
&lt;p&gt;As enterprises deploy RAG systems and AI agents on top of proprietary data, storage becomes the system of record for organizational truth. Vector databases and embeddings are derivative views; the authoritative content still resides in object, file, or block storage. If that foundational data is duplicated, stale, or poorly governed, AI systems will reflect those flaws—no matter how advanced the model.&lt;/p&gt;
&lt;p&gt;From an HPE perspective, this creates a strategic opportunity. Enterprise customers increasingly need storage platforms that are AI‑aware by design: optimized for rapid re‑indexing, capable of exposing rich metadata, supportive of frequent refresh cycles, and resilient under embedding‑heavy workloads. Storage that enables strong provenance, policy‑driven lifecycle management, and auditability becomes a trust anchor for AI—not just an infrastructure component.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Storage as an AI trust layer&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The conference made it clear that trust is the critical currency of enterprise AI adoption. Models may generate answers, but organizations are accountable for correctness. Storage platforms that support immutable records, versioned datasets, and explainable data sourcing directly contribute to AI trust and regulatory readiness.&lt;/p&gt;
&lt;p&gt;For HPE Storage engineering teams, aligning storage capabilities with AI pipelines—rather than treating AI as an external consumer—positions HPE to play a foundational role in customers’ AI strategies. The opportunity is not to compete with AI models or vector databases, but to enable them to operate reliably, securely, and at enterprise scale.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Closing thoughts&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;AI DevCon 2026 reinforced a pivotal industry transition. As AI systems become more capable and autonomous, correctness and reliability are moving upstream—from model architectures into data pipelines and infrastructure design. In this new reality, storage engineering is inseparable from AI engineering.&lt;/p&gt;
&lt;p&gt;For HPE, this is not a peripheral trend but a defining moment. The teams that design, build, and evolve HPE Storage platforms are increasingly shaping how enterprise AI systems behave in the real world. By embracing this responsibility, HPE can help ensure that the AI systems built on top of its platforms are not just powerful—but dependable.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Streamlining hybrid cloud: Announcing the unified HPE/hpe Terraform provider v1.1.0]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/Streamlining-hybrid-cloud-Announcing-the-unified-HPE-Terraform-provider v1.1.0/</link><guid isPermaLink="false">https://developer.hpe.com/Streamlining-hybrid-cloud-Announcing-the-unified-HPE-Terraform-provider v1.1.0/</guid><pubDate>Thu, 02 Apr 2026 23:06:40 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Observe any observable using HPE OpsRamp: Open telemetry as the universal ingestion standard]]></title><description><![CDATA[Part 1 of a series on vendor-neutral observability with HPE OpsRamp and OpenTelemetry Introduction I have spent considerable time thinking…]]></description><link>https://developer.hpe.com/siva-observe-any-observable-using-opsramp-open-telemetry-as-the-universal-ingestion-standard/</link><guid isPermaLink="false">https://developer.hpe.com/siva-observe-any-observable-using-opsramp-open-telemetry-as-the-universal-ingestion-standard/</guid><pubDate>Mon, 30 Mar 2026 12:23:01 GMT</pubDate><content:encoded>&lt;p&gt;&lt;em&gt;Part 1 of a series on vendor-neutral observability with HPE OpsRamp and OpenTelemetry&lt;/em&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;I have spent considerable time thinking about a fundamental question in modern infrastructure management: what does it mean for a platform to be truly observable? Not just monitorable — but observable in the sense that any signal, from any source, using any open standard, can flow into a single management plane without vendor lock-in.&lt;/p&gt;
&lt;p&gt;In this article, I want to share what I built to answer that question: a proof-of-concept that demonstrates how HPE OpsRamp can serve as a universal observability backend for any infrastructure domain, using OpenTelemetry as the sole ingestion standard. I will walk you through the objective, the architecture, every component in the stack, and exactly how they are wired together.&lt;/p&gt;
&lt;h2&gt;What is OpenTelemetry?&lt;/h2&gt;
&lt;p&gt;OpenTelemetry (OTel) is a &lt;a href=&quot;https://www.cncf.io/&quot;&gt;CNCF&lt;/a&gt; open-source project that provides a vendor-neutral standard for collecting and exporting telemetry — metrics, logs, and traces — from any application or infrastructure component. It defines a common data model, APIs, SDKs, and a wire protocol (OTLP) that any observability backend can consume.&lt;/p&gt;
&lt;p&gt;Rather than instrumenting your code differently for every monitoring tool, you instrument once with OpenTelemetry and route signals wherever you need them.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Project home:&lt;/strong&gt; &lt;a href=&quot;https://opentelemetry.io&quot;&gt;opentelemetry.io&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;By the end of this article you will understand the full picture — the what and the why. In subsequent articles I will go deeper into the how: installation, verification, testing, and signal ingestion.&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;The objective: Observe anything, from anywhere, using open standards&lt;/h2&gt;
&lt;p&gt;The central premise of this proof-of-concept is deceptively simple.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Before ingestion:&lt;/strong&gt; HPE OpsRamp has no knowledge of the target infrastructure. No resources pre-created, no templates configured, no proprietary agents installed.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;After ingestion:&lt;/strong&gt; HPE OpsRamp automatically has a complete operational picture — resources with full domain attributes, topology relationships, live metrics, structured logs, distributed traces, and correlated alerts — all derived purely from OpenTelemetry signals.&lt;/p&gt;
&lt;p&gt;The domain does not matter. I used Redfish — the DMTF standard for hardware management — purely as a convenient resource simulator. The same architecture works for Kubernetes workloads, VMware vSphere, bare-metal servers, IoT sensors, or any system that can emit OpenTelemetry signals. Redfish is the stand-in, not the subject.&lt;/p&gt;
&lt;p&gt;The key constraint I imposed on myself: &lt;strong&gt;no HPE OpsRamp proprietary agents, no vendor-specific SDKs for signal collection, no webhook-based event ingestion&lt;/strong&gt;. Everything flows through open standards.&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;The stack: Five components, one open pipeline&lt;/h2&gt;
&lt;p&gt;The architecture consists of five components arranged in a clean signal pipeline. Let me introduce each one and explain its role before describing how they interconnect.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/five-stack.jpg&quot; alt=&quot;The stack: Five components, one open pipeline&quot; title=&quot;The stack: Five components, one open pipeline&quot;&gt;&lt;/p&gt;
&lt;h3&gt;The DMTF Redfish emulator — resource simulation layer&lt;/h3&gt;
&lt;p&gt;The DMTF Redfish Interface Emulator is an open-source Python application that implements the Redfish API specification. It simulates a complete hardware infrastructure: 13 chassis, 7 compute systems, and 13 BMC managers — 33 resources in total — each exposing real Redfish endpoints for power consumption, thermal readings, fan speeds, memory, CPU, and hardware events.&lt;/p&gt;
&lt;p&gt;I chose Redfish because it is a genuine industry standard used in real data centers, it produces rich structured data, and it runs entirely in Docker with no external dependencies. Every chassis exposes endpoints like &lt;code&gt;/redfish/v1/Chassis/Chassis-1/Power&lt;/code&gt; and &lt;code&gt;/redfish/v1/Chassis/Chassis-1/Thermal&lt;/code&gt; that return real numeric data suitable for metric collection.&lt;/p&gt;
&lt;p&gt;The emulator runs on port 5000 and is the sole source of truth for infrastructure state in this stack.&lt;/p&gt;
&lt;h3&gt;The OTel Python agent — instrumentation layer&lt;/h3&gt;
&lt;p&gt;This is the custom Python application I wrote for this proof-of-concept. It is the heart of the open-standard ingestion story.&lt;/p&gt;
&lt;p&gt;The agent uses the &lt;strong&gt;OpenTelemetry Python SDK&lt;/strong&gt; to instrument Redfish polling in a way that produces three distinct signal types:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Metrics&lt;/strong&gt; — The agent creates one &lt;code&gt;MeterProvider&lt;/code&gt; per chassis and one per system. Each provider has its own &lt;code&gt;Resource&lt;/code&gt; object carrying the chassis or system identity. Power consumption, thermal temperatures, fan speeds, memory, and CPU metrics are emitted as OTel gauge instruments with full resource context attached.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Logs and events&lt;/strong&gt; — Redfish hardware events are captured as OTel &lt;code&gt;LogRecord&lt;/code&gt; objects. Each log record carries the host.name of the originating resource, enabling HPE OpsRamp to associate the log with the correct resource entry. A separate &lt;code&gt;LoggerProvider&lt;/code&gt; per resource ensures the association is precise.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Traces&lt;/strong&gt; — Every poll cycle is wrapped in an OTel trace span using &lt;code&gt;RequestsInstrumentor&lt;/code&gt; for automatic HTTP instrumentation. The trace waterfall shows exactly which Redfish API calls were made, their latency, and any errors — giving full visibility into the agent&apos;s operational behavior.&lt;/p&gt;
&lt;p&gt;The agent sends all three signal types to the OTel Collector using &lt;strong&gt;OTLP/gRPC&lt;/strong&gt; on port 4317.&lt;/p&gt;
&lt;h3&gt;The OTel Collector — processing and routing layer&lt;/h3&gt;
&lt;p&gt;The OpenTelemetry Collector is the central routing engine of the stack. It is the only component that speaks to both the local observability tools and HPE OpsRamp simultaneously.&lt;/p&gt;
&lt;p&gt;The collector runs three parallel pipelines:&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;metrics pipeline&lt;/strong&gt; receives OTLP metrics from the agent and the httpcheck receiver, processes them through a resource attribute processor, and exports them to Prometheus on port 8889. Prometheus then pushes to HPE OpsRamp via remote_write.&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;logs pipeline&lt;/strong&gt; receives OTLP log records from both the agent and the event listener, batches them for efficiency, and exports them directly to HPE OpsRamp&apos;s log ingestion endpoint using OTLP/gRPC with OAuth2 authentication managed by the oauth2client extension.&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;traces pipeline&lt;/strong&gt; receives OTLP spans and exports them to Jaeger for local visualization.&lt;/p&gt;
&lt;p&gt;The Collector also runs the &lt;strong&gt;httpcheck receiver&lt;/strong&gt; — a built-in receiver that polls the Redfish API endpoints directly, producing endpoint health metrics (up/down status and response time) without any agent code.&lt;/p&gt;
&lt;h3&gt;Prometheus — metrics buffering and remote write&lt;/h3&gt;
&lt;p&gt;Prometheus serves a dual purpose in this stack. Locally, it provides a queryable time-series database that Grafana uses for dashboards. Remotely, it acts as the push mechanism for metrics into HPE OpsRamp via &lt;code&gt;remote_write&lt;/code&gt; — HPE OpsRamp&apos;s recommended ingestion path for Prometheus-format metrics.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;remote_write&lt;/code&gt; configuration in &lt;code&gt;prometheus.yml&lt;/code&gt; includes the HPE OpsRamp endpoint, OAuth2 credentials, and the critical resource association labels: &lt;code&gt;type=&quot;RESOURCE&quot;&lt;/code&gt; and &lt;code&gt;uuid=&quot;&amp;#x3C;resourceUUID&gt;&quot;&lt;/code&gt; that tell HPE OpsRamp which managed resource each metric belongs to.&lt;/p&gt;
&lt;h3&gt;HPE OpsRamp — the observability backend&lt;/h3&gt;
&lt;p&gt;HPE OpsRamp is the management plane that consumes all signals and provides the operational intelligence layer: resource lifecycle management, topology visualization, metric correlation, log management, alert generation, and AIOps capabilities.&lt;/p&gt;
&lt;p&gt;HPE OpsRamp is not configured in advance for this domain. It learns everything about the Redfish infrastructure from the signals it receives, augmented by a one-time resource provisioning step that uses the HPE OpsRamp REST API to create resource entries with full domain attributes before telemetry begins.&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;The communication map: How everything is wired&lt;/h2&gt;
&lt;p&gt;Understanding the signal flow is essential before diving into installation and code. Here is the complete communication topology.&lt;/p&gt;
&lt;h3&gt;Signal flow: Metrics&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;Redfish Emulator :5000
      │  HTTP GET (every 15s)
      ▼
OTel Agent (agent.py)
      │  OTLP/gRPC :4317  (metrics per chassis/system)
      ▼
OTel Collector :4317
      │  Prometheus exporter :8889
      ▼
Prometheus :9090
      │  remote_write HTTPS
      │  Authorization: Bearer &amp;#x3C;token&gt;
      │  labels: type=&quot;RESOURCE&quot;, uuid=&quot;&amp;#x3C;resourceUUID&gt;&quot;
      ▼
HPE OpsRamp Metrics Explorer
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Signal flow: Logs and events&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;Redfish Emulator :5000
      │  HTTP GET (EventService polling)
      │  HTTP POST (event push to :9999)
      ▼
Event Listener (event_listener.py)
      │  OTLP/gRPC :4317  (LogRecords with host.name, type, uuid)
      ▼
OTel Collector :4317
      │  OTLP/gRPC :443  (batched, gzip compressed)
      │  Authorization: Bearer &amp;#x3C;oauth2 token&gt;
      │  Header: tenantId: &amp;#x3C;tenantId&gt;
      ▼
HPE OpsRamp Log Management
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Signal flow: Traces&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;OTel Agent (RequestsInstrumentor auto-instrumentation)
      │  OTLP/gRPC :4317  (spans for every HTTP call)
      ▼
OTel Collector :4317
      │  OTLP/gRPC to Jaeger :14317
      ▼
Jaeger UI :16686
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Signal flow: Resource provisioning&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;HPE OpsRamp REST API (one-time, on agent startup)
      │  POST /tenancy/auth/oauth/token  →  Bearer token
      │  POST /api/v2/tenants/{id}/resources  →  resourceUUID per resource
      │  POST /api/v2/tenants/{id}/topologies  →  parent-child relationships
      ▼
ResourceContext (in-memory registry: hostname → uuid)
      │
      ▼
All subsequent OTel signals carry: type=&quot;RESOURCE&quot;, uuid=&quot;&amp;#x3C;resourceUUID&gt;&quot;
&lt;/code&gt;&lt;/pre&gt;
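&lt;p&gt;The flow above maps onto a handful of REST calls. Here is a hedged Python sketch of the token and resource-creation steps; the endpoint paths follow the diagram, while the hostname, credentials, and request payload fields are placeholders you would replace with your tenant&apos;s values.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Sketch of Phase 1: obtain an OAuth2 token, create a resource, remember its UUID.
# OPSRAMP_URL, TENANT_ID, credentials, and payload fields are placeholders.
import requests

OPSRAMP_URL = &quot;https://your-tenant.api.opsramp.example&quot;
TENANT_ID = &quot;client_xxxx&quot;

def get_token(client_id, client_secret):
    resp = requests.post(
        f&quot;{OPSRAMP_URL}/tenancy/auth/oauth/token&quot;,
        data={&quot;grant_type&quot;: &quot;client_credentials&quot;,
              &quot;client_id&quot;: client_id,
              &quot;client_secret&quot;: client_secret},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()[&quot;access_token&quot;]

def create_resource(token, host_name):
    &quot;&quot;&quot;Create a resource entry and return the resourceUUID assigned by HPE OpsRamp.&quot;&quot;&quot;
    resp = requests.post(
        f&quot;{OPSRAMP_URL}/api/v2/tenants/{TENANT_ID}/resources&quot;,
        headers={&quot;Authorization&quot;: f&quot;Bearer {token}&quot;},
        json={&quot;resourceName&quot;: host_name, &quot;hostName&quot;: host_name, &quot;resourceType&quot;: &quot;Server&quot;},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()[&quot;resourceUUID&quot;]

registry = {}  # hostname to uuid; later read by every telemetry signal
token = get_token(&quot;my-client-id&quot;, &quot;my-client-secret&quot;)
for name in [&quot;Chassis-1&quot;, &quot;System-1&quot;]:
    registry[name] = create_resource(token, name)
&lt;/code&gt;&lt;/pre&gt;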
&lt;h3&gt;Operational protocols&lt;/h3&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Hop&lt;/th&gt;
&lt;th&gt;Protocol&lt;/th&gt;
&lt;th&gt;Port&lt;/th&gt;
&lt;th&gt;Auth&lt;/th&gt;
&lt;th&gt;Direction&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Agent → Collector&lt;/td&gt;
&lt;td&gt;OTLP/gRPC&lt;/td&gt;
&lt;td&gt;4317&lt;/td&gt;
&lt;td&gt;None (same host)&lt;/td&gt;
&lt;td&gt;Push&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Collector → HPE OpsRamp logs&lt;/td&gt;
&lt;td&gt;OTLP/gRPC&lt;/td&gt;
&lt;td&gt;443&lt;/td&gt;
&lt;td&gt;OAuth2 Bearer&lt;/td&gt;
&lt;td&gt;Push&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Prometheus → HPE OpsRamp metrics&lt;/td&gt;
&lt;td&gt;HTTPS remote_write&lt;/td&gt;
&lt;td&gt;443&lt;/td&gt;
&lt;td&gt;OAuth2 Bearer&lt;/td&gt;
&lt;td&gt;Push&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Collector httpcheck → Redfish&lt;/td&gt;
&lt;td&gt;HTTP&lt;/td&gt;
&lt;td&gt;5000&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;td&gt;Pull&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Agent → Redfish&lt;/td&gt;
&lt;td&gt;HTTP&lt;/td&gt;
&lt;td&gt;5000&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;td&gt;Pull&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Agent → HPE OpsRamp REST API&lt;/td&gt;
&lt;td&gt;HTTPS&lt;/td&gt;
&lt;td&gt;443&lt;/td&gt;
&lt;td&gt;OAuth2 Bearer&lt;/td&gt;
&lt;td&gt;Push&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;hr&gt;
&lt;h2&gt;The resource association problem and how I solved it&lt;/h2&gt;
&lt;p&gt;This deserves special attention because it is the most important design decision in the entire stack.&lt;/p&gt;
&lt;p&gt;HPE OpsRamp can receive metrics and logs as raw telemetry and display them in its explorer views. But to associate those signals with a specific managed resource — to show them in the resource&apos;s Metrics tab or Logs tab, and to enable topology-aware correlation — HPE OpsRamp requires two mandatory attributes on every signal:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;type = &quot;RESOURCE&quot;
uuid = &quot;5ce1fab8-b706-46bc-8941-47eb32a8f571&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;type&lt;/code&gt; must be exactly the string &lt;code&gt;&quot;RESOURCE&quot;&lt;/code&gt; in uppercase. The &lt;code&gt;uuid&lt;/code&gt; must be the &lt;code&gt;resourceUUID&lt;/code&gt; that HPE OpsRamp assigned when the resource was created via its REST API.&lt;/p&gt;
&lt;p&gt;I solved this with a module called &lt;code&gt;resource_context.py&lt;/code&gt; — a shared in-memory registry that maps each resource hostname to its full identity attributes. Phase 1 (provisioning) populates this registry. Phase 2 (telemetry) reads from it to enrich every metric data point, log record, and trace span.&lt;/p&gt;
&lt;p&gt;The critical timing constraint is that &lt;code&gt;agent_resource&lt;/code&gt; — the OTel Resource object for the agent — must be constructed &lt;strong&gt;after&lt;/strong&gt; Phase 1 completes, not at module import time. If built at import time, the registry is empty and &lt;code&gt;uuid&lt;/code&gt; is an empty string. This is a subtle but consequential bug I had to identify and fix explicitly.&lt;/p&gt;
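&lt;p&gt;To make both points concrete, here is a minimal sketch of the pattern: a shared registry in the spirit of &lt;code&gt;resource_context.py&lt;/code&gt;, and the agent Resource built only after Phase 1. It illustrates the idea rather than reproducing the actual module.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# resource_context.py (sketch): shared in-memory registry, hostname to identity attributes.
_REGISTRY = {}

def register(host_name, resource_uuid):
    _REGISTRY[host_name] = {&quot;host.name&quot;: host_name, &quot;type&quot;: &quot;RESOURCE&quot;, &quot;uuid&quot;: resource_uuid}

def attributes_for(host_name):
    # Raises KeyError instead of silently emitting an empty uuid.
    return _REGISTRY[host_name]

# agent.py (sketch): build the agent&apos;s OTel Resource only AFTER Phase 1 has
# populated the registry; building it at module import time would freeze uuid=&quot;&quot;.
from opentelemetry.sdk.resources import Resource

def build_agent_resource(agent_host):
    return Resource.create(attributes_for(agent_host))
&lt;/code&gt;&lt;/pre&gt;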
&lt;hr&gt;
&lt;h2&gt;The observability stack in context&lt;/h2&gt;
&lt;p&gt;What makes this architecture compelling for a demonstration is not any single component — it is the combination. Each component is an open standard or open-source tool:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Redfish&lt;/strong&gt; — DMTF open standard for hardware management&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;OpenTelemetry&lt;/strong&gt; — CNCF open standard for telemetry&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Prometheus&lt;/strong&gt; — CNCF open-source metrics engine&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Jaeger&lt;/strong&gt; — CNCF open-source distributed tracing&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Grafana&lt;/strong&gt; — Open-source visualization&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;HPE OpsRamp&lt;/strong&gt; — Commercial management plane consuming all of the above&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The PoC demonstrates that HPE OpsRamp can serve as the observability backend for any infrastructure domain instrumented with OpenTelemetry, without requiring its proprietary agents for signal collection. The ingestion path is fully open standard.&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;Conclusion and what comes next&lt;/h2&gt;
&lt;p&gt;I have laid out the full architecture — the components, the signal flows, the protocols, and the resource association mechanism. The picture should be clear: any infrastructure domain that can be instrumented with OpenTelemetry can be observed in HPE OpsRamp using this pattern.&lt;/p&gt;
&lt;p&gt;In Part 2, I will walk through the complete installation and verification of every component in the local stack — the Redfish emulator, OTel Collector, Prometheus, Grafana, and Jaeger — running in Docker Compose on a single EC2 instance. I will show exactly how to verify each component is healthy before sending a single signal to HPE OpsRamp.&lt;/p&gt;
&lt;p&gt;Stay tuned to the HPE Developer Community blog for more insights on HPE OpsRamp (Hybrid Cloud Observability) and practical ideas to apply it in your daily operations.&lt;/p&gt;
&lt;h2&gt;Want to know more?&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;OpenTelemetry project:&lt;/strong&gt; &lt;a href=&quot;https://opentelemetry.io&quot;&gt;opentelemetry.io&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;OTel specification:&lt;/strong&gt; &lt;a href=&quot;https://opentelemetry.io/docs/specs/otel/&quot;&gt;opentelemetry.io/docs/specs/otel&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;OTel Python SDK:&lt;/strong&gt; &lt;a href=&quot;https://opentelemetry-python.readthedocs.io/&quot;&gt;opentelemetry-python.readthedocs.io&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;OTel Collector:&lt;/strong&gt; &lt;a href=&quot;https://opentelemetry.io/docs/collector/&quot;&gt;opentelemetry.io/docs/collector&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;OTLP protocol:&lt;/strong&gt; &lt;a href=&quot;https://opentelemetry.io/docs/specs/otlp/&quot;&gt;opentelemetry.io/docs/specs/otlp&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;DMTF Redfish standard:&lt;/strong&gt; &lt;a href=&quot;https://www.dmtf.org/standards/redfish&quot;&gt;dmtf.org/standards/redfish&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Redfish emulator:&lt;/strong&gt; &lt;a href=&quot;https://github.com/DMTF/Redfish-Interface-Emulator&quot;&gt;github.com/DMTF/Redfish-Interface-Emulator&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;HPE OpsRamp OTLP integration:&lt;/strong&gt; &lt;a href=&quot;https://docs.opsramp.com/integration/opentelemetry/&quot;&gt;docs.opsramp.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Prometheus remote_write:&lt;/strong&gt; &lt;a href=&quot;https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write&quot;&gt;prometheus.io/docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Jaeger tracing:&lt;/strong&gt; &lt;a href=&quot;https://www.jaegertracing.io&quot;&gt;jaegertracing.io&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[7 Questions for CHAMPS Developers: Empowering Academic R&D to Create Cutting-Edge CFD Apps in Chapel]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/7-questions-for-champs-developers-empowering-academic-r-d-to-create-cutting-edge-cfd-apps-in-chapel/</link><guid isPermaLink="false">https://developer.hpe.com/7-questions-for-champs-developers-empowering-academic-r-d-to-create-cutting-edge-cfd-apps-in-chapel/</guid><pubDate>Thu, 26 Mar 2026 23:06:40 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Cascading dropdown lists for HPE Morpheus Enterprise]]></title><description><![CDATA[Introduction Authors frequently need to capture additional information during the provisioning of workloads and services. This information…]]></description><link>https://developer.hpe.com/cascading-dropdown-lists-on-hpe-morpheus-enterprise-forms/</link><guid isPermaLink="false">https://developer.hpe.com/cascading-dropdown-lists-on-hpe-morpheus-enterprise-forms/</guid><pubDate>Fri, 20 Mar 2026 11:16:09 GMT</pubDate><content:encoded>&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Authors frequently need to capture additional information during the provisioning of workloads and services. This information is typically collected through user interface controls such as drop-down lists, often populated dynamically from external systems. Downstream, these selections are consumed by automation workflows that tailor deployments and integrate with ancillary systems such as CMDBs, IPAM platforms, or other operational tooling.&lt;/p&gt;
&lt;p&gt;HPE Morpheus Enterprise provides extensive flexibility for customizing form elements across instance provisioning, workflows, and service catalog items. Drop-down Option Lists can be populated using a variety of Option Source types, including REST-based endpoints and plugin-backed integrations.&lt;/p&gt;
&lt;p&gt;However, in real-world environments, form inputs are rarely independent. The selection made in one field often determines the valid values in another.&lt;/p&gt;
&lt;p&gt;This is where cascading (interdependent) drop-down lists become essential.&lt;/p&gt;
&lt;p&gt;By dynamically filtering or populating one field based on the selection of another, cascading drop-downs introduce context awareness into provisioning forms. This reduces user error, improves data integrity, and ensures that deployments align with environmental constraints and governance requirements.&lt;/p&gt;
&lt;p&gt;This article demonstrates how to implement cascading drop-down lists in Morpheus using both REST-based and plugin-based Option Lists.&lt;/p&gt;
&lt;h2&gt;HPE Morpheus terms explained&lt;/h2&gt;
&lt;p&gt;Some of the terminology used in this article may be misleading or confusing due to its ubiquitous use across a wide range of products and technology domains, including ITSM, virtualization platforms, and service orchestration systems. This section explains the common terms used for HPE Morpheus Enterprise concepts throughout the article:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Option List&lt;/strong&gt;&lt;br/&gt;
A list of &lt;em&gt;&lt;strong&gt;name&lt;/strong&gt;&lt;/em&gt; and &lt;em&gt;&lt;strong&gt;value&lt;/strong&gt;&lt;/em&gt; pairs used to populate UI controls such as drop-downs, radio lists, and type-ahead fields. In the HPE Morpheus Enterprise UI, Option Lists are defined under &lt;em&gt;&lt;strong&gt;Library &gt; Options &gt; Option Lists&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Option Source&lt;/strong&gt;&lt;br/&gt;
The underlying data source used to populate an Option List. The Option Source type may be static data (JSON or CSV), REST response data, LDAP query results, HPE Morpheus Enterprise API data, or a plugin-based provider. Option Sources are defined as part of creating Option Lists.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Input&lt;/strong&gt;&lt;br/&gt;
A web UI control used in an HPE Morpheus Enterprise wizard. A wizard typically contains multiple Input controls, such as when provisioning a VM Instance. Input types include checkbox, hidden value, number, password, radio list, select list, text, text area, and type-ahead. Inputs are defined under &lt;em&gt;&lt;strong&gt;Library &gt; Options &gt; Inputs&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;A select list Input is populated by a corresponding Option List. This article focuses exclusively on select list Inputs.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Forms&lt;/strong&gt;&lt;br/&gt;
HPE Morpheus Forms are collections of Inputs organized in sections called Field Groups. Forms are used exclusively with Catalog Items in the HPE Morpheus Enterprise Service Catalog. Forms are created and configured under &lt;em&gt;&lt;strong&gt;Library &gt; Options &gt; Forms&lt;/strong&gt;&lt;/em&gt; in the UI. Forms &lt;em&gt;&lt;strong&gt;will not be covered&lt;/strong&gt;&lt;/em&gt; by this article.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;HPE Morpheus Plugin&lt;/strong&gt;&lt;br/&gt;
A compiled &lt;em&gt;&lt;strong&gt;.jar&lt;/strong&gt;&lt;/em&gt; file containing logic that extends the functionality of HPE Morpheus. Plugins are typically written in Groovy and compiled using the Java toolchain.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Option Source Provider&lt;/strong&gt;&lt;br/&gt;
A plugin class responsible for retrieving and constructing the data used to populate Option Lists programmatically.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Wizard&lt;/strong&gt;&lt;br/&gt;
This article uses the term &lt;strong&gt;wizard&lt;/strong&gt; to indicate a section of the web user interface where the end user provides input values. An example of this would be the input fields when a user provisions an instance. In this article an operational workflow wizard is used to show and test Input controls.&lt;/p&gt;
&lt;h2&gt;Demo environment&lt;/h2&gt;
&lt;p&gt;To illustrate how to reference external data sources on wizard drop-down Inputs, and then make them interdependent, this article makes use of two demo lab VMs. These include a &lt;em&gt;&lt;strong&gt;JSON Server&lt;/strong&gt;&lt;/em&gt; VM and an &lt;em&gt;&lt;strong&gt;HPE Morpheus Enterprise&lt;/strong&gt;&lt;/em&gt; appliance. This lab has been tested on HPE Morpheus Enterprise 8.0.&lt;/p&gt;
&lt;p&gt;To demonstrate how a plugin can supply the data via custom integration, a development environment assumes plugin compile capability as described in the article &lt;a href=&quot;https://developer.hpe.com/blog/morpheus-plugin-tutorial-how-to-build-and-compile/&quot;&gt;A Beginner’s Guide to Building and Compiling HPE Morpheus Enterprise Plugins&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;For the purposes of this article, the &lt;em&gt;&lt;strong&gt;JSON Server&lt;/strong&gt;&lt;/em&gt; was set up on a clean Debian 12 install, on the same network segment as the &lt;em&gt;&lt;strong&gt;HPE Morpheus Enterprise&lt;/strong&gt;&lt;/em&gt; appliance. Data for the &lt;em&gt;&lt;strong&gt;JSON Server&lt;/strong&gt;&lt;/em&gt; web endpoint below is supplied by a &lt;em&gt;&lt;strong&gt;locations.json&lt;/strong&gt;&lt;/em&gt; file. The content for this file can be found &lt;a href=&quot;https://github.com/neilvrhpe/OptionSourceDemo/blob/main/locations.json&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;JSON Server&lt;/strong&gt;&lt;/em&gt; &lt;em&gt;&lt;strong&gt;v1.0.0&lt;/strong&gt;&lt;/em&gt; was set up using the following commands:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;apt update&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;apt install nodejs npm -y&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;npm install json-server&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;vi locations.json&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;npx json-server --host 0.0.0.0 --port 80 locations.json&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;(Bear in mind that minimal Debian doesn&apos;t install with the &lt;em&gt;&lt;strong&gt;sudo&lt;/strong&gt;&lt;/em&gt; command by default. Prepend &lt;em&gt;&lt;strong&gt;sudo&lt;/strong&gt;&lt;/em&gt; to administrative commands where appropriate on your OS distribution)&lt;/p&gt;
&lt;p&gt;In this demo, the &lt;em&gt;&lt;strong&gt;JSON Server&lt;/strong&gt;&lt;/em&gt; web endpoint responds on &lt;em&gt;&lt;strong&gt;&lt;a href=&quot;http://demojsonserver&quot;&gt;http://demojsonserver&lt;/a&gt;&lt;/strong&gt;&lt;/em&gt; and renders this:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/jsonserver_home.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The 3 sub-pages: &lt;em&gt;&lt;strong&gt;/countries&lt;/strong&gt;&lt;/em&gt;, &lt;em&gt;&lt;strong&gt;/states&lt;/strong&gt;&lt;/em&gt; and &lt;em&gt;&lt;strong&gt;/cities&lt;/strong&gt;&lt;/em&gt; reflect the base objects in the &lt;em&gt;&lt;strong&gt;locations.json&lt;/strong&gt;&lt;/em&gt; file:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/locations_json.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;As you can see, the endpoint &lt;em&gt;&lt;strong&gt;&lt;a href=&quot;http://demojsonserver/countries&quot;&gt;http://demojsonserver/countries&lt;/a&gt;&lt;/strong&gt;&lt;/em&gt; provides the 3 countries in the &lt;em&gt;&lt;strong&gt;locations.json&lt;/strong&gt;&lt;/em&gt; file:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/countries_json.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Option Source Types&lt;/h2&gt;
&lt;p&gt;There are several types of Option Lists. These represent the different ways by which an Option Source can populate the resulting drop-down control. Consider the TYPE input on the NEW OPTION LIST wizard:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/option_list_types.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;REST:&lt;/strong&gt; Web endpoint requests.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Morpheus Api:&lt;/strong&gt; Query HPE Morpheus Enterprise platform elements, such as VMs or networks, directly.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;LDAP:&lt;/strong&gt; Use LDAP Search Filter syntax to populate Option Lists.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Manual:&lt;/strong&gt; Define Option List data with CSV or JSON.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Plugin:&lt;/strong&gt; OptionSourceProvider classes within uploaded plugins supply data in name / value pairs.&lt;/p&gt;
&lt;p&gt;In this article, I am focusing on the &lt;em&gt;&lt;strong&gt;REST Web&lt;/strong&gt;&lt;/em&gt; endpoints and an &lt;em&gt;&lt;strong&gt;OptionSourceProvider Plugin&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;h2&gt;Creating the Option Lists&lt;/h2&gt;
&lt;p&gt;To create the &lt;em&gt;&lt;strong&gt;Option Lists&lt;/strong&gt;&lt;/em&gt; in &lt;em&gt;&lt;strong&gt;HPE Morpheus Enterprise&lt;/strong&gt;&lt;/em&gt;, navigate to &lt;em&gt;&lt;strong&gt;Library &gt; Options &gt; Options Lists&lt;/strong&gt;&lt;/em&gt;. Click &lt;em&gt;&lt;strong&gt;Add&lt;/strong&gt;&lt;/em&gt;. The following dialog will be displayed:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/add_optionlist.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;For you to obtain proper results, replace the URL hostname with the appropriate hostname or IP address of the &lt;em&gt;&lt;strong&gt;JSON Server&lt;/strong&gt;&lt;/em&gt; in your environment. Then create three Option Lists that reflect the values below:&lt;/p&gt;
&lt;hr&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;NAME:&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Countries&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;TYPE:&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;REST&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;SOURCE URL:&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;http://demojsonserver/countries&quot;&gt;http://demojsonserver/countries&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;SOURCE METHOD:&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;GET&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;REAL TIME:&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Checked&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;hr&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;NAME:&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;States&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;TYPE:&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;REST&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;SOURCE URL:&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;http://demojsonserver/states&quot;&gt;http://demojsonserver/states&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;SOURCE METHOD:&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;GET&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;REAL TIME:&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Checked&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;hr&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;NAME:&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Cities&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;TYPE:&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;REST&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;SOURCE URL:&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;http://demojsonserver/cities&quot;&gt;http://demojsonserver/cities&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;SOURCE METHOD:&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;GET&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;REAL TIME:&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Checked&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Verify that the three Option Lists reflect the below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/option_lists.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Create the Inputs&lt;/h2&gt;
&lt;p&gt;On their own, Option Lists aren&apos;t useful as wizard inputs. To use the values in a UI control, the Option List is attached to a list-based Input Type. These include HTML drop-downs, option lists and type-ahead fields. &lt;br /&gt;
Inputs are also variables, with the &lt;em&gt;&lt;strong&gt;fieldName&lt;/strong&gt;&lt;/em&gt; property as the variable name. This means that an Input with a &lt;em&gt;&lt;strong&gt;fieldName&lt;/strong&gt;&lt;/em&gt; of &lt;em&gt;&lt;strong&gt;country&lt;/strong&gt;&lt;/em&gt; will have its selected value stored as &lt;em&gt;&lt;strong&gt;input.country&lt;/strong&gt;&lt;/em&gt;. These variables will be used for filtering in the next section.&lt;br /&gt;
To create the wizard Inputs, navigate to &lt;em&gt;&lt;strong&gt;Library &gt; Options &gt; Inputs&lt;/strong&gt;&lt;/em&gt;. Click &lt;em&gt;&lt;strong&gt;Add&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/create_input.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Create 3 Inputs that correspond to the Option Lists above, with the following values:&lt;/p&gt;
&lt;hr&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;NAME:&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Country&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;FIELD NAME:&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;country&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;TYPE:&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Select List&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;LABEL:&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Country&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;OPTION LIST:&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Countries&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;hr&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;NAME:&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;State&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;FIELD NAME:&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;state&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;TYPE:&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Select List&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;LABEL:&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;State&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;OPTION LIST:&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;States&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;hr&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;NAME:&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;City&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;FIELD NAME:&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;city&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;TYPE:&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Select List&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;LABEL:&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;City&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;OPTION LIST:&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Cities&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Inputs should be created as shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/inputs.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The rest of the available Input types are not driven by Option Source list data. These include HTML text, textarea, checkbox, hidden, number and password. Text based Inputs can be validated using regular expressions.&lt;/p&gt;
&lt;h2&gt;How to test Option List Inputs&lt;/h2&gt;
&lt;p&gt;A simple way to test Inputs in HPE Morpheus Enterprise is to create an Operational Workflow. These workflows can use Inputs as Workflow wizard Inputs.
Navigate to &lt;em&gt;&lt;strong&gt;Library &gt; Automation &gt; Workflows&lt;/strong&gt;&lt;/em&gt;. Click &lt;strong&gt;&lt;em&gt;Add&lt;/em&gt; &gt; &lt;em&gt;Operational Workflow&lt;/em&gt;&lt;/strong&gt;. Provide &lt;em&gt;&lt;strong&gt;Test Inputs&lt;/strong&gt;&lt;/em&gt; as the &lt;em&gt;&lt;strong&gt;Name&lt;/strong&gt;&lt;/em&gt; and add &lt;em&gt;&lt;strong&gt;Country&lt;/strong&gt;&lt;/em&gt;, &lt;em&gt;&lt;strong&gt;State&lt;/strong&gt;&lt;/em&gt; and &lt;em&gt;&lt;strong&gt;City&lt;/strong&gt;&lt;/em&gt; to the type-ahead &lt;em&gt;&lt;strong&gt;Inputs&lt;/strong&gt;&lt;/em&gt; field:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/new_workflow.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Under &lt;strong&gt;&lt;em&gt;Library&lt;/em&gt; &gt; &lt;em&gt;Automation&lt;/em&gt; &gt; &lt;em&gt;Workflows&lt;/em&gt;&lt;/strong&gt;, click the name of the &lt;em&gt;&lt;strong&gt;Test Inputs&lt;/strong&gt;&lt;/em&gt; workflow. Click the &lt;em&gt;&lt;strong&gt;EXECUTE&lt;/strong&gt;&lt;/em&gt; button. Check that all three drop-downs contain data from the JSON Server endpoints:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/execute_workflow.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Filtering data&lt;/h2&gt;
&lt;p&gt;When Option Lists are populated by Option Source data, the list items are stored against an inherent &lt;em&gt;&lt;strong&gt;results&lt;/strong&gt;&lt;/em&gt; object. This &lt;em&gt;&lt;strong&gt;results&lt;/strong&gt;&lt;/em&gt; construct consists of a list of &lt;em&gt;&lt;strong&gt;name&lt;/strong&gt;&lt;/em&gt; and &lt;em&gt;&lt;strong&gt;value&lt;/strong&gt;&lt;/em&gt; pairs.&lt;/p&gt;
&lt;p&gt;Consider the current country JSON data:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;[
        {
            &quot;value&quot;: 1,
            &quot;name&quot;: &quot;United States&quot;
        },
        {
            &quot;value&quot;: 2,
            &quot;name&quot;: &quot;Canada&quot;
        },
        {
            &quot;value&quot;: 3,
            &quot;name&quot;: &quot;Australia&quot;
        }
]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If the JSON keys in the list are not exactly &lt;em&gt;&lt;strong&gt;name&lt;/strong&gt;&lt;/em&gt; and &lt;em&gt;&lt;strong&gt;value&lt;/strong&gt;&lt;/em&gt;, then the drop-down will not be correctly populated. Should the JSON keys be different, an additional step is needed to populate the values correctly. This step is covered in the translation script section below.&lt;/p&gt;
&lt;p&gt;When the Option List is populated, each entry in the JSON Option Source list will be added onto the &lt;em&gt;&lt;strong&gt;results&lt;/strong&gt;&lt;/em&gt; object, causing the corresponding HTML page &lt;em&gt;&lt;strong&gt;select&lt;/strong&gt;&lt;/em&gt; tag to be populated with &lt;em&gt;&lt;strong&gt;option&lt;/strong&gt;&lt;/em&gt; tags.&lt;/p&gt;
&lt;p&gt;Navigate to &lt;em&gt;&lt;strong&gt;Library &gt; Automation &gt; Workflows&lt;/strong&gt;&lt;/em&gt; and click on the name of the &lt;em&gt;&lt;strong&gt;Test Inputs&lt;/strong&gt;&lt;/em&gt; workflow. Click the &lt;em&gt;&lt;strong&gt;EXECUTE&lt;/strong&gt;&lt;/em&gt; button.
Using developer tools on your browser, inspect the &lt;em&gt;&lt;strong&gt;country&lt;/strong&gt;&lt;/em&gt; drop-down HTML element on the web UI page. This reveals that the drop-down control is populated with country name and value IDs.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/dropdown_html_after.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Filter the state by country using a translation script&lt;/h3&gt;
&lt;p&gt;At this stage, the &lt;em&gt;&lt;strong&gt;state&lt;/strong&gt;&lt;/em&gt; drop-down contains all states in the data source, regardless of which &lt;em&gt;&lt;strong&gt;country&lt;/strong&gt;&lt;/em&gt; is selected. Similarly, &lt;em&gt;&lt;strong&gt;cities&lt;/strong&gt;&lt;/em&gt; also remain unfiltered, regardless of the selected &lt;em&gt;&lt;strong&gt;country&lt;/strong&gt;&lt;/em&gt; and &lt;em&gt;&lt;strong&gt;state&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;This section will look at the two available mechanisms for filtering Option List data based on the values of other UI wizard Inputs, &lt;em&gt;&lt;strong&gt;Translation Scripts&lt;/strong&gt;&lt;/em&gt; and &lt;em&gt;&lt;strong&gt;Request Scripts&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Translation scripts use JavaScript syntax and consist of these components:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;An &lt;em&gt;&lt;strong&gt;input&lt;/strong&gt;&lt;/em&gt; object map that represents the selected values of other inputs on the same UI wizard&lt;/li&gt;
&lt;li&gt;A &lt;em&gt;&lt;strong&gt;data&lt;/strong&gt;&lt;/em&gt; list/array object that contains the raw data from the Option Source&lt;/li&gt;
&lt;li&gt;A &lt;em&gt;&lt;strong&gt;results&lt;/strong&gt;&lt;/em&gt; list/array object that the script needs to add &lt;em&gt;&lt;strong&gt;name&lt;/strong&gt;&lt;/em&gt; and &lt;em&gt;&lt;strong&gt;value&lt;/strong&gt;&lt;/em&gt; objects onto&lt;/li&gt;
&lt;li&gt;&lt;em&gt;&lt;strong&gt;name&lt;/strong&gt;&lt;/em&gt; and &lt;em&gt;&lt;strong&gt;value&lt;/strong&gt;&lt;/em&gt; objects that get added onto the &lt;em&gt;&lt;strong&gt;results&lt;/strong&gt;&lt;/em&gt; object. E.g., {&quot;name&quot;: &quot;USA&quot;, &quot;value&quot;: 1}&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Navigate to &lt;em&gt;&lt;strong&gt;Library &gt; Options &gt; Options Lists&lt;/strong&gt;&lt;/em&gt; and edit the previously created &lt;em&gt;&lt;strong&gt;States&lt;/strong&gt;&lt;/em&gt; Option List using the corresponding pencil icon on the right:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/edit_optionslist.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Add the following code to the &lt;em&gt;&lt;strong&gt;TRANSLATION SCRIPT&lt;/strong&gt;&lt;/em&gt; field and click &lt;em&gt;&lt;strong&gt;SAVE CHANGES&lt;/strong&gt;&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;for (var i = 0; i &amp;#x3C; data.length; i++) {
  if (data[i].countryId == input.country) {
 	 results.push({&quot;name&quot;: data[i].name, &quot;value&quot;: data[i].id})  
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/translation_script_code.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The above script loops through the state &lt;em&gt;&lt;strong&gt;data&lt;/strong&gt;&lt;/em&gt; set and pushes entries onto the &lt;em&gt;&lt;strong&gt;results&lt;/strong&gt;&lt;/em&gt; list. &lt;br /&gt;
This time there is no &lt;em&gt;&lt;strong&gt;value&lt;/strong&gt;&lt;/em&gt; field in the data. The &lt;em&gt;&lt;strong&gt;id&lt;/strong&gt;&lt;/em&gt; field is used for the value instead. It is necessary to provide &lt;em&gt;&lt;strong&gt;name&lt;/strong&gt;&lt;/em&gt; and &lt;em&gt;&lt;strong&gt;value&lt;/strong&gt;&lt;/em&gt; objects via translation scripts if the JSON data doesn&apos;t specifically use &lt;em&gt;&lt;strong&gt;name&lt;/strong&gt;&lt;/em&gt; and &lt;em&gt;&lt;strong&gt;value&lt;/strong&gt;&lt;/em&gt; keys. &lt;br /&gt;
The conditional if statement ensures that the selected value of the &lt;em&gt;&lt;strong&gt;country&lt;/strong&gt;&lt;/em&gt; Input matches the &lt;em&gt;&lt;strong&gt;countryId&lt;/strong&gt;&lt;/em&gt; of the JSON list entry before it can be added to the &lt;em&gt;&lt;strong&gt;results&lt;/strong&gt;&lt;/em&gt; list:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/country_id.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Inputs use &lt;em&gt;&lt;strong&gt;DEPENDENT FIELD&lt;/strong&gt;&lt;/em&gt; to trigger an Option List refresh when another field changes. Supply the &lt;em&gt;&lt;strong&gt;FIELD NAME&lt;/strong&gt;&lt;/em&gt; value of the other Input (country in this case) to trigger the refresh of the state Input. &lt;br /&gt;
To set up the refresh trigger of the &lt;em&gt;&lt;strong&gt;state&lt;/strong&gt;&lt;/em&gt; drop-down field, navigate to &lt;em&gt;&lt;strong&gt;Library &gt; Options &gt; Inputs&lt;/strong&gt;&lt;/em&gt; and edit the &lt;em&gt;&lt;strong&gt;State&lt;/strong&gt;&lt;/em&gt; Input. Set the value of the &lt;em&gt;&lt;strong&gt;DEPENDENT FIELD&lt;/strong&gt;&lt;/em&gt; input to &lt;strong&gt;country&lt;/strong&gt;. Click &lt;em&gt;&lt;strong&gt;SAVE CHANGES&lt;/strong&gt;&lt;/em&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/dependent_on_country.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Navigate back to &lt;em&gt;&lt;strong&gt;Library &gt; Automation &gt; Workflows&lt;/strong&gt;&lt;/em&gt; and open the workflow execution dialog for the &lt;em&gt;&lt;strong&gt;Test Inputs&lt;/strong&gt;&lt;/em&gt; workflow again. This time, &lt;em&gt;&lt;strong&gt;states&lt;/strong&gt;&lt;/em&gt; are filtered by the selected &lt;em&gt;&lt;strong&gt;country&lt;/strong&gt;&lt;/em&gt; value:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/filtered_states.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Filter the city by state using a request script&lt;/h3&gt;
&lt;p&gt;Some REST web endpoints support filtering by URL parameters. As an example, consider the HPE Morpheus Enterprise REST API endpoint for servers. Here is an example where the GET request URL is used to filter the list of servers on their &lt;em&gt;&lt;strong&gt;parentServerId&lt;/strong&gt;&lt;/em&gt;:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;https://some.morpheus.appliance/api/servers?parentServerId=42&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;When the filter value is fixed, you don&apos;t need a request script at all; the parameter can simply be appended to the SOURCE URL. For example, to filter &lt;em&gt;&lt;strong&gt;cities&lt;/strong&gt;&lt;/em&gt; by a &lt;em&gt;&lt;strong&gt;state&lt;/strong&gt;&lt;/em&gt; id of 3, the URL would look like this:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;http://demojsonserver/cities?stateId=3&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Here is the GET request result:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/cities_filtered_by_url.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;To implement URL request parameter filtering on the &lt;em&gt;&lt;strong&gt;cities&lt;/strong&gt;&lt;/em&gt; Option Source, navigate to &lt;em&gt;&lt;strong&gt;Library &gt; Options &gt; Options Lists&lt;/strong&gt;&lt;/em&gt; and edit the &lt;em&gt;&lt;strong&gt;Cities&lt;/strong&gt;&lt;/em&gt; Option Source. Populate the &lt;em&gt;&lt;strong&gt;REQUEST SCRIPT&lt;/strong&gt;&lt;/em&gt; text box with the below code:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;results.push({ name: &apos;stateId&apos;, value: data.state || &quot;NoState&quot; });
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/request_script.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;This line of code effectively sets the &lt;em&gt;&lt;strong&gt;stateId&lt;/strong&gt;&lt;/em&gt; request parameter to the value of the &lt;em&gt;&lt;strong&gt;state&lt;/strong&gt;&lt;/em&gt; Input, or to &quot;NoState&quot; if no state is selected. The reason for this is that a blank &lt;strong&gt;stateId&lt;/strong&gt; request parameter value causes JSON Server to remove the filter entirely, thus showing all entries, instead of no entries.&lt;/p&gt;
&lt;p&gt;To trigger the refresh of &lt;em&gt;&lt;strong&gt;cities&lt;/strong&gt;&lt;/em&gt; upon the selection of a &lt;em&gt;&lt;strong&gt;state&lt;/strong&gt;&lt;/em&gt;, navigate to &lt;em&gt;&lt;strong&gt;Library &gt; Options &gt; Inputs&lt;/strong&gt;&lt;/em&gt; and edit the &lt;strong&gt;City&lt;/strong&gt; Input using the pencil icon on the right. Set the value of the &lt;em&gt;&lt;strong&gt;DEPENDENT FIELD&lt;/strong&gt;&lt;/em&gt; to &lt;strong&gt;state&lt;/strong&gt;. Click &lt;em&gt;&lt;strong&gt;SAVE CHANGES&lt;/strong&gt;&lt;/em&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/dependent_on_state.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The selection of &lt;em&gt;&lt;strong&gt;city&lt;/strong&gt;&lt;/em&gt; is now based on &lt;em&gt;&lt;strong&gt;state&lt;/strong&gt;&lt;/em&gt;, which is based on the selected &lt;em&gt;&lt;strong&gt;country&lt;/strong&gt;&lt;/em&gt;. Navigate back to &lt;em&gt;&lt;strong&gt;Library &gt; Automation &gt; Workflows&lt;/strong&gt;&lt;/em&gt; and open the workflow execution dialog for the &lt;em&gt;&lt;strong&gt;Test Inputs&lt;/strong&gt;&lt;/em&gt; workflow again. This time, &lt;em&gt;&lt;strong&gt;cities&lt;/strong&gt;&lt;/em&gt; are filtered by the selected &lt;em&gt;&lt;strong&gt;state&lt;/strong&gt;&lt;/em&gt; value:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/country_state_city.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Explore, compile, and upload the Option Source plugin&lt;/h2&gt;
&lt;p&gt;HPE Morpheus Enterprise uses plugins to extend platform functionality, usually to integrate with third-party platforms such as hypervisors or IPAM systems. This is achieved through Groovy code projects that compile to Java archives (.jar files). The .jar files are uploaded via the HPE Morpheus Enterprise UI or API. Plugins implement domain-specific class files called providers. To programmatically populate Option Lists from plugins, you need to implement an Option Source Provider.&lt;br /&gt;
This section explores an example of an Option Source plugin by adding zip codes to the above countries, states, cities example. &lt;br /&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/plugin_providers_diag.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;For more information pertaining to the anatomy of HPE Morpheus Enterprise Plugins, please refer to official plugin documentation at &lt;a href=&quot;https://developer.morpheusdata.com&quot;&gt;developer.morpheusdata.com&lt;/a&gt; or have a look at the blog article &lt;a href=&quot;https://developer.hpe.com/blog/morpheus-plugin-tutorial-how-to-build-and-compile/&quot;&gt;A Beginner’s Guide to Building and Compiling HPE Morpheus Enterprise Plugins&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Download or clone the plugin repository from &lt;a href=&quot;https://github.com/neilvrhpe/OptionSourceDemo&quot;&gt;https://github.com/neilvrhpe/OptionSourceDemo&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Open the project directory using a simple IDE, like Visual Studio code, or even a text editor tool. Expand the &lt;em&gt;&lt;strong&gt;src &gt; main &gt; groovy &gt; com &gt; hpe &gt; morpheus &gt; demo&lt;/strong&gt;&lt;/em&gt; directory. View the &lt;em&gt;&lt;strong&gt;OptionsSourceDemoPlugin.groovy&lt;/strong&gt;&lt;/em&gt; class file:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/plugin_class.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;This file is the HPE Morpheus Enterprise appliance&apos;s entry point into the plugin. &lt;em&gt;&lt;strong&gt;HPE Morpheus Enterprise plugins&lt;/strong&gt;&lt;/em&gt; always register one or more &lt;em&gt;&lt;strong&gt;provider classes&lt;/strong&gt;&lt;/em&gt;. The &lt;em&gt;&lt;strong&gt;provider classes&lt;/strong&gt;&lt;/em&gt; supply the plugin functionality. The method call on line 30, &lt;em&gt;&lt;strong&gt;this.registerProvider&lt;/strong&gt;&lt;/em&gt;, registers the &lt;em&gt;&lt;strong&gt;OptionSourceProvider&lt;/strong&gt;&lt;/em&gt;, which will provide a list of zip codes to the &lt;em&gt;&lt;strong&gt;ZipCodes Option List&lt;/strong&gt;&lt;/em&gt; in the next section.&lt;/p&gt;
&lt;p&gt;View the &lt;em&gt;&lt;strong&gt;DemoOptionSourceProvider.groovy&lt;/strong&gt;&lt;/em&gt; class file:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/provider_class.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;This class extends &lt;em&gt;&lt;strong&gt;AbstractOptionSourceProvider&lt;/strong&gt;&lt;/em&gt;, which enables the plugin to provide a list of &lt;em&gt;&lt;strong&gt;name&lt;/strong&gt;&lt;/em&gt; and &lt;em&gt;&lt;strong&gt;value&lt;/strong&gt;&lt;/em&gt; pairs for an Option List through a collection of methods. The methods are made available to the platform via the &lt;em&gt;&lt;strong&gt;getMethodNames&lt;/strong&gt;&lt;/em&gt; method (line 21). In this example, there is only one method called &lt;em&gt;&lt;strong&gt;listZipCodes&lt;/strong&gt;&lt;/em&gt;, which is defined on line 45. It returns static &lt;strong&gt;name&lt;/strong&gt; and &lt;strong&gt;value&lt;/strong&gt; pairs, although the plugin provides flexibility on how the list is built. Data can easily be retrieved from other systems via SDKs, APIs, or database connections.&lt;/p&gt;
&lt;p&gt;Open the project directory in a command line terminal and compile the plugin with the relevant &lt;em&gt;&lt;strong&gt;gradlew&lt;/strong&gt;&lt;/em&gt;(Linux) or &lt;em&gt;&lt;strong&gt;gradlew.bat&lt;/strong&gt;&lt;/em&gt;(Windows) script using the &lt;em&gt;&lt;strong&gt;shadowJar&lt;/strong&gt;&lt;/em&gt; argument:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/compile_plugin.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The compiled &lt;em&gt;&lt;strong&gt;.jar&lt;/strong&gt;&lt;/em&gt; file will be found in the &lt;em&gt;&lt;strong&gt;build/libs&lt;/strong&gt;&lt;/em&gt; subdirectory:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/jar_file.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Upload the &lt;em&gt;&lt;strong&gt;.jar&lt;/strong&gt;&lt;/em&gt; file to the &lt;em&gt;&lt;strong&gt;Administration &gt; Integrations &gt; Plugins &gt; Add&lt;/strong&gt;&lt;/em&gt; dialog. The plugin should appear in the list as shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/uploaded_plugin.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Edit the uploaded plugin using the pencil icon to confirm that the &lt;em&gt;&lt;strong&gt;Option Source Provider&lt;/strong&gt;&lt;/em&gt; is present:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/plugin_provider.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Add the zip code Option List and Input&lt;/h2&gt;
&lt;p&gt;At this stage, only the name of the city is added to the &lt;em&gt;&lt;strong&gt;City&lt;/strong&gt;&lt;/em&gt; drop-down. This is due to the missing &lt;em&gt;&lt;strong&gt;value&lt;/strong&gt;&lt;/em&gt; property, as seen earlier with the &lt;em&gt;&lt;strong&gt;state&lt;/strong&gt;&lt;/em&gt; Option List.&lt;/p&gt;
&lt;p&gt;Add the following code snippet to the &lt;em&gt;&lt;strong&gt;cities&lt;/strong&gt;&lt;/em&gt; Option List &lt;em&gt;&lt;strong&gt;TRANSLATION SCRIPT&lt;/strong&gt;&lt;/em&gt; field by navigating to &lt;strong&gt;Library&lt;/strong&gt; &gt; &lt;strong&gt;Options&lt;/strong&gt; &gt; &lt;strong&gt;Options Lists&lt;/strong&gt; and clicking the corresponding pencil icon on the right:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;for (var x = 0; x &amp;#x3C; data.length; x++) {
  results.push({&quot;name&quot;: data[x].name,&quot;value&quot;: data[x].name})
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/cities_translation_script.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The above code sets the &lt;em&gt;&lt;strong&gt;name&lt;/strong&gt;&lt;/em&gt; property of a city to both the &lt;em&gt;&lt;strong&gt;name&lt;/strong&gt;&lt;/em&gt; and &lt;em&gt;&lt;strong&gt;value&lt;/strong&gt;&lt;/em&gt; of the city drop-down entry. You can verify this by opening the execution dialog of the &lt;em&gt;&lt;strong&gt;Test Inputs&lt;/strong&gt;&lt;/em&gt; workflow. Inspecting the HTML once a &lt;strong&gt;city&lt;/strong&gt; is selected confirms this:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/cities_dropdown_html.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The &lt;em&gt;&lt;strong&gt;listZipCodes&lt;/strong&gt;&lt;/em&gt; method returns &lt;em&gt;&lt;strong&gt;name&lt;/strong&gt;&lt;/em&gt; and &lt;em&gt;&lt;strong&gt;value&lt;/strong&gt;&lt;/em&gt; pairs with the &lt;em&gt;&lt;strong&gt;city name&lt;/strong&gt;&lt;/em&gt; and the &lt;em&gt;&lt;strong&gt;zip code value&lt;/strong&gt;&lt;/em&gt;. As the Translation Script will match using the &lt;em&gt;&lt;strong&gt;city name&lt;/strong&gt;&lt;/em&gt;, the value of the drop-down has to be the &lt;em&gt;&lt;strong&gt;city name&lt;/strong&gt;&lt;/em&gt; (not the city &lt;em&gt;&lt;strong&gt;id&lt;/strong&gt;&lt;/em&gt;).&lt;/p&gt;
&lt;p&gt;Navigate to &lt;em&gt;&lt;strong&gt;Library &gt; Options &gt; Option&lt;/strong&gt;&lt;/em&gt; &lt;em&gt;&lt;strong&gt;Lists&lt;/strong&gt;&lt;/em&gt; and click &lt;em&gt;&lt;strong&gt;Add&lt;/strong&gt;&lt;/em&gt;. Supply the field values as shown below:&lt;/p&gt;
&lt;hr&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;NAME:&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;ZipCodes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;TYPE:&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Plugin&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;OPTION LIST:&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Option Source Demo: listZipCodes&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;&lt;strong&gt;TRANSLATION SCRIPT:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;for (var i = 0; i &amp;#x3C; data.length; i++) {
  if (data[i].name == input.city) {
 	 results.push({&quot;name&quot;: data[i].value, &quot;value&quot;: data[i].value})  
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;The above populates the &lt;em&gt;&lt;strong&gt;ZipCodes&lt;/strong&gt;&lt;/em&gt; Option List with the &lt;em&gt;&lt;strong&gt;zip code&lt;/strong&gt;&lt;/em&gt; as both the &lt;em&gt;&lt;strong&gt;name&lt;/strong&gt;&lt;/em&gt; and &lt;em&gt;&lt;strong&gt;value&lt;/strong&gt;&lt;/em&gt; (the drop-down &lt;em&gt;&lt;strong&gt;label&lt;/strong&gt;&lt;/em&gt; and &lt;em&gt;&lt;strong&gt;value&lt;/strong&gt;&lt;/em&gt;). Click &lt;em&gt;&lt;strong&gt;SAVE CHANGES&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/zipcodes_option_list.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Navigate to &lt;em&gt;&lt;strong&gt;Library &gt; Options &gt; Inputs&lt;/strong&gt;&lt;/em&gt; and click &lt;em&gt;&lt;strong&gt;Add&lt;/strong&gt;&lt;/em&gt;. Use the below field values:&lt;/p&gt;
&lt;hr&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;NAME:&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;ZipCode&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;FIELD NAME:&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;zipCode&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;DEPENDENT FIELD:&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;city&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;VISIBILITY FIELD:&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;city&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;TYPE:&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Select List&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;LABEL:&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Zip Code&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;OPTION LIST:&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;ZipCodes&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;hr&gt;
&lt;p&gt;&lt;img src=&quot;/img/zipcodes_input.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Navigate to &lt;em&gt;&lt;strong&gt;Library &gt; Automation &gt; Workflows&lt;/strong&gt;&lt;/em&gt; and edit the &lt;em&gt;&lt;strong&gt;Test Inputs&lt;/strong&gt;&lt;/em&gt; workflow. As before, add the ZipCode Input to the workflow and save.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/zipcode_workflow_input.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Click the name of the &lt;em&gt;&lt;strong&gt;Test Inputs&lt;/strong&gt;&lt;/em&gt; workflow and click &lt;em&gt;&lt;strong&gt;EXECUTE&lt;/strong&gt;&lt;/em&gt;. The &lt;em&gt;&lt;strong&gt;Zip Code drop-down&lt;/strong&gt;&lt;/em&gt; now appears in the workflow execution dialog, once the &lt;strong&gt;city&lt;/strong&gt; is selected:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/zipcode_dropdown.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;To implement cascading visibility, set the corresponding &lt;em&gt;&lt;strong&gt;VISIBILITY FIELD&lt;/strong&gt;&lt;/em&gt; values on the other inputs to make the visibility of each subsequent drop-down depend on the one before. &lt;em&gt;&lt;strong&gt;State to country&lt;/strong&gt;&lt;/em&gt;, and &lt;em&gt;&lt;strong&gt;city to state&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;This post only explores HPE Morpheus Enterprise Inputs, Option Lists, and Option Source Provider plugins. Under &lt;em&gt;&lt;strong&gt;Library &gt; Options &gt; Forms&lt;/strong&gt;&lt;/em&gt;, you can find more advanced Forms, which group collections of Inputs into Field Groups. These form controls enrich UI functionality and enable further customization of the inputs for workloads and services.&lt;/p&gt;
&lt;p&gt;At the more advanced end of the spectrum are other Plugin Provider Types that model core infrastructure components. These include integrations for Clouds, Networks, Storage systems, and many others. Such Providers tend to be more complex because they interact deeply with HPE Morpheus Enterprise’s provisioning, synchronization, and lifecycle management layers. Understanding how these Provider Types fit together is key to building powerful, production-grade Plugins.&lt;/p&gt;
&lt;p&gt;Explore the following resources for more information on the different Plugin/Provider types:&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.morpheusdata.com/&quot;&gt;https://developer.morpheusdata.com&lt;/a&gt; (The official plugin documentation)&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://share.morpheusdata.com/&quot;&gt;https://share.morpheusdata.com&lt;/a&gt; (Follow the repository link under the details page of a plugin to see the corresponding source code)&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/hewlettpackard&quot;&gt;https://github.com/hewlettpackard&lt;/a&gt; (Several repositories with source code for various plugins and automation code samples)&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://youtu.be/1twoNvPoEV4?si=elUEzCYGo88TIffX&quot;&gt;https://youtu.be/1twoNvPoEV4?si=elUEzCYGo88TIffX&lt;/a&gt; (Plugin code generator demo video)&lt;/p&gt;</content:encoded></item><item><title><![CDATA[7 Questions for Akihiro Hayashi: Early Chapel GPU Support through Multiresolution Abstractions]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/7-questions-for-akihiro-hayashi-early-chapel-gpu-support-through-multiresolution-abstractions/</link><guid isPermaLink="false">https://developer.hpe.com/7-questions-for-akihiro-hayashi-early-chapel-gpu-support-through-multiresolution-abstractions/</guid><pubDate>Wed, 18 Mar 2026 17:10:29 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Implementing a local LLM using S3-based model storage and vLLM in HPE Private Cloud AI]]></title><description><![CDATA[Deploying a local Large Language Model (LLM) architecture using S3‑compatible object storage and vLLM as the inference engine provides a…]]></description><link>https://developer.hpe.com/implementing-a-local-llm-using-s3-based-model-storage-and-vllm-in-hpe-private-cloud-ai/</link><guid isPermaLink="false">https://developer.hpe.com/implementing-a-local-llm-using-s3-based-model-storage-and-vllm-in-hpe-private-cloud-ai/</guid><pubDate>Tue, 17 Mar 2026 14:18:20 GMT</pubDate><content:encoded>&lt;style&gt; li { font-size: 27px; line-height: 33px; max-width: none; } &lt;/style&gt;
&lt;p&gt;Deploying a local Large Language Model (LLM) architecture using S3‑compatible object storage and &lt;em&gt;vLLM&lt;/em&gt; as the inference engine provides a scalable, cost‑efficient, and secure foundation for enterprise AI adoption. This approach enables organizations to operationalize AI workloads while maintaining full control over data, performance, and model lifecycle management.&lt;/p&gt;
&lt;p&gt;This blog post outlines the implementation of a fully local LLM deployment within the HPE Private Cloud AI (PCAI) environment. It deploys &lt;em&gt;MinIO&lt;/em&gt;, a high-performance, S3-compatible object storage platform optimized for cloud-native and containerized workloads, as the local model store, providing fast, parallel access to large model artifacts. The architecture then uses &lt;em&gt;vLLM&lt;/em&gt; as the optimized inference engine to deliver high-throughput model execution and efficient GPU utilization. Together, these components form a fully self-hosted LLM pipeline within the PCAI environment, enabling organizations to serve models reliably, scale efficiently, and run secure, high‑performance generative AI workloads entirely on‑premises without dependence on external cloud services. &lt;/p&gt;
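&lt;p&gt;As a small taste of what the end state looks like, the sketch below shows an application calling a locally hosted &lt;em&gt;vLLM&lt;/em&gt; server through its OpenAI-compatible API. The URL, model name, and token are placeholders; the actual endpoint exposed by your PCAI deployment will differ.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Sketch: query a self-hosted vLLM server via its OpenAI-compatible chat endpoint.
# The base URL, model name, and API key are placeholders for your deployment.
import requests

VLLM_URL = &quot;http://vllm.example.internal:8000/v1/chat/completions&quot;

payload = {
    &quot;model&quot;: &quot;my-local-model&quot;,
    &quot;messages&quot;: [{&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: &quot;Summarize why on-prem inference matters.&quot;}],
    &quot;max_tokens&quot;: 200,
}
resp = requests.post(
    VLLM_URL,
    headers={&quot;Authorization&quot;: &quot;Bearer placeholder-token&quot;},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
print(resp.json()[&quot;choices&quot;][0][&quot;message&quot;][&quot;content&quot;])
&lt;/code&gt;&lt;/pre&gt;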
&lt;h3&gt;HPE Private Cloud AI&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.hpe.com/platform/hpe-private-cloud-ai/home/&quot;&gt;HPE Private Cloud AI (PCAI)&lt;/a&gt; is a unified, private, full-stack AI cloud platform built for enterprises that require complete control over how they deploy, govern, and scale AI. It is designed to give organizations ownership of their AI environment while enabling rapid innovation across diverse use cases. PCAI directly addresses the most pressing challenges enterprises face as AI adoption accelerates: maintaining data sovereignty, ensuring security, reducing operational complexity, and avoiding the escalating costs and inefficiencies of fragmented, multi-vendor AI infrastructure.&lt;/p&gt;
&lt;p&gt;PCAI provides a comprehensive, turnkey foundation for end-to-end enterprise AI. It delivers a secure, scalable, and ready-to-use private cloud environment that includes a curated set of pre-built NVIDIA NIM-optimized LLMs and an integrated suite of AI/ML tools and frameworks for data engineering, analytics, and data science. This creates a consistent, governed operational layer for building and running AI services, ensuring that organizations retain full sovereignty over their data, models, and infrastructure while benefiting from a modern, production-ready AI platform.&lt;/p&gt;
&lt;h3&gt;AI managed by you&lt;/h3&gt;
&lt;p&gt;At the core of PCAI’s &lt;em&gt;&apos;AI managed by you&apos;&lt;/em&gt; philosophy are two complementary approaches that give enterprises both flexibility and operational rigor when deploying and operating AI services:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Import Framework&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Machine Learning Inference Software (MLIS)&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;&lt;em&gt;Import Framework&lt;/em&gt;&lt;/h4&gt;
&lt;p&gt;The PCAI Import Framework feature offers an open, extensible mechanism for organizations to integrate any AI application, framework or third-party tool into the PCAI environment. It enables customers to import partner or open-source AI frameworks and add domain-specific or business-specific AI applications. Once imported, these components are managed through PCAI’s unified lifecycle management, ensuring consistent deployment, monitoring, and governance across the entire private cloud platform.&lt;/p&gt;
&lt;p&gt;This Import Framework feature is what makes PCAI truly AI-application agnostic. Customers are not limited to prepackaged tools and can freely build the AI ecosystem that fits their strategy.&lt;/p&gt;
&lt;h4&gt;&lt;em&gt;Machine Learning Inference Software (MLIS)&lt;/em&gt;&lt;/h4&gt;
&lt;p&gt;HPE Machine Learning Inference Software (MLIS) is an enterprise‑grade platform designed to streamline and operationalize large-scale deployment, management, and monitoring of machine learning (ML) models. It supports the full AI service lifecycle, including model registry configuration, LLM model onboarding, deployment creation, and high-throughput inference serving. Delivered as a prepackaged PCAI‑integrated application, similar to components such as &lt;em&gt;Kubeflow&lt;/em&gt; and &lt;em&gt;Ray&lt;/em&gt;, MLIS provides a controlled execution layer that handles model versioning, GPU scheduling, performance tuning, and comprehensive observability across availability, latency, and compliance metrics.&lt;/p&gt;
&lt;p&gt;A core capability of PCAI is its vendor-agnostic support for heterogeneous LLM models. Beyond pre-built NVIDIA NIM models, PCAI can deploy open-source LLMs (e.g., Hugging Face), third-party or proprietary models, and artifacts stored in external or internal object stores such as &lt;em&gt;MinIO&lt;/em&gt;. This flexibility enables organizations to consolidate diverse model sources within a unified deployment and governance framework while maintaining enterprise-grade reliability, visibility, and operational consistency.&lt;/p&gt;
&lt;p&gt;The following sections detail the implementation of a local LLM deployment within the PCAI environment using the Import Framework feature and MLIS.&lt;/p&gt;
&lt;h3&gt;Prerequisites&lt;/h3&gt;
&lt;p&gt;Ensure that the following prerequisites are fulfilled:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;HPE Private Cloud AI version 1.5.0 or later, running HPE AI Essentials version 1.9.1 or later.&lt;/li&gt;
&lt;li&gt;Access to an HPE Private Cloud AI workspace (with the &lt;em&gt;Private Cloud AI Administrator&lt;/em&gt; role), allowing you to perform administrative operations.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The deployment examples in the following sections use the &lt;em&gt;kubectl&lt;/em&gt; CLI and &lt;em&gt;kubeconfig&lt;/em&gt; to interact with the PCAI Kubernetes (K8s) cluster. However, direct cluster access via &lt;em&gt;kubectl&lt;/em&gt; is generally not required.&lt;/p&gt;
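&lt;p&gt;If you do want to follow along with the &lt;em&gt;kubectl&lt;/em&gt; verification steps, a minimal setup looks like the sketch below (the &lt;em&gt;kubeconfig&lt;/em&gt; path is only an example; use the one provided for your PCAI cluster):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Point kubectl at the PCAI K8s cluster (kubeconfig path is an example placeholder)
export KUBECONFIG=$HOME/.kube/pcai-kubeconfig
kubectl cluster-info
&lt;/code&gt;&lt;/pre&gt;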
&lt;h3&gt;Setting up model storage using &lt;em&gt;MinIO&lt;/em&gt;&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://www.min.io/&quot;&gt;&lt;em&gt;MinIO&lt;/em&gt;&lt;/a&gt; is a high‑performance, S3‑compatible object storage platform designed for cloud‑native and containerized workloads, offering a lightweight, K8s-friendly architecture that makes it exceptionally easy to deploy and scale. Beyond simple S3 compatibility, &lt;em&gt;MinIO&lt;/em&gt; provides enterprise‑grade durability through distributed erasure coding, high aggregate throughput for parallel I/O, and consistent operational behavior across local, on‑premises, and cloud deployments. These characteristics make it particularly effective for AI/ML and LLM pipelines, where fast, reliable access to large, immutable model artifacts is critical for efficient training and inference execution.&lt;/p&gt;
&lt;p&gt;The following sections describe how to deploy &lt;em&gt;MinIO&lt;/em&gt; within the PCAI environment using the Import Framework and configure it as a local model repository for storing and managing LLM artifacts.&lt;/p&gt;
&lt;h4&gt;Deploy &lt;em&gt;MinIO&lt;/em&gt; via &lt;em&gt;Import Framework&lt;/em&gt;&lt;/h4&gt;
&lt;p&gt;Using a revised &lt;em&gt;MinIO&lt;/em&gt; Helm chart, available in the GitHub repository &lt;a href=&quot;https://github.com/GuopingJia/pcai-helm-examples/tree/main/minio&quot;&gt;pcai-helm-examples&lt;/a&gt;, &lt;em&gt;MinIO&lt;/em&gt; can be deployed into the PCAI environment through the Import Framework by following the steps outlined below. This Helm chart is derived from the official &lt;a href=&quot;https://github.com/minio/minio/tree/master/helm/minio&quot;&gt;MinIO charts&lt;/a&gt; and augmented with the required &lt;em&gt;Istio&lt;/em&gt; &lt;em&gt;VirtualService&lt;/em&gt; and Kyverno &lt;em&gt;ClusterPolicy&lt;/em&gt; manifests to ensure compatibility with PCAI’s service mesh and policy controls.&lt;/p&gt;
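&lt;p&gt;As a rough sketch, the revised chart can be fetched and packaged locally before being supplied to the Import Framework wizard (the exact wizard inputs may differ in your environment):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Fetch the revised MinIO chart and package it as a .tgz for the Import Framework wizard
git clone https://github.com/GuopingJia/pcai-helm-examples.git
helm package pcai-helm-examples/minio   # produces minio-&amp;#x3C;version&gt;.tgz
&lt;/code&gt;&lt;/pre&gt;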
&lt;ul&gt;
&lt;li&gt;In the PCAI left navigation panel, select &lt;strong&gt;Tools &amp;#x26; Frameworks&lt;/strong&gt;. Click &lt;em&gt;&lt;strong&gt;Import Framework&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/pcai-tools-frameworks-import-framework.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;By following the Import Framework wizard workflow, &lt;em&gt;MinIO&lt;/em&gt; can be deployed into the PCAI environment within minutes.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/import-framework-minio.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Run the following commands to verify the &lt;em&gt;MinIO&lt;/em&gt; deployment in the namespace &lt;em&gt;&apos;minio&apos;&lt;/em&gt; of the PCAI K8s cluster:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get all -n minio
NAME          READY   STATUS    RESTARTS   AGE
pod/minio-0   1/1     Running   0          4h30m
pod/minio-1   1/1     Running   0          4h30m
pod/minio-2   1/1     Running   0          4h30m
pod/minio-3   1/1     Running   0          4h30m

NAME                    TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
service/minio           ClusterIP   10.96.2.234   &amp;#x3C;none&gt;        9000/TCP   4h30m
service/minio-console   ClusterIP   10.96.1.181   &amp;#x3C;none&gt;        9001/TCP   4h30m
service/minio-svc       ClusterIP   None          &amp;#x3C;none&gt;        9000/TCP   4h30m

NAME                     READY   AGE
statefulset.apps/minio   4/4     4h30m

$ kubectl get pvc -n minio
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE
export-minio-0   Bound    pvc-8b5e262b-3667-4da8-a886-8e8596b1cbe0   500Gi      RWO            gl4f-filesystem   &amp;#x3C;unset&gt;                 4h30m
export-minio-1   Bound    pvc-04770c71-301c-47f1-b92f-d7461c8bfca0   500Gi      RWO            gl4f-filesystem   &amp;#x3C;unset&gt;                 4h30m
export-minio-2   Bound    pvc-527387c5-0ca8-4991-8d0f-340dfdead8b4   500Gi      RWO            gl4f-filesystem   &amp;#x3C;unset&gt;                 4h30m
export-minio-3   Bound    pvc-d52bfe41-b7a4-4f06-8275-d2b566198012   500Gi      RWO            gl4f-filesystem   &amp;#x3C;unset&gt;                 4h30m
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Access &lt;em&gt;MinIO&lt;/em&gt; console via its endpoint&lt;/h4&gt;
&lt;p&gt;After &lt;em&gt;MinIO&lt;/em&gt; is deployed via the Import Framework, an imported &lt;em&gt;MinIO&lt;/em&gt; tile appears under &lt;strong&gt;Tools &amp;#x26; Frameworks&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/import-framework-minio-imported.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Clicking &lt;em&gt;&lt;strong&gt;Open&lt;/strong&gt;&lt;/em&gt; on the &lt;em&gt;MinIO&lt;/em&gt; tile opens the &lt;em&gt;MinIO&lt;/em&gt; console login page.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/import-framework-minio-login.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h4&gt;Create a dedicated bucket&lt;/h4&gt;
&lt;p&gt;After logging into the &lt;em&gt;MinIO&lt;/em&gt; console, click &lt;em&gt;&lt;strong&gt;Create a Bucket&lt;/strong&gt;&lt;/em&gt; to create an S3 bucket for LLM models.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/create-minio-bucket.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Enter a &lt;em&gt;Name&lt;/em&gt;, for example &lt;em&gt;&apos;s3-ai-models&apos;&lt;/em&gt;, and select optional features such as &lt;em&gt;Versioning&lt;/em&gt;, &lt;em&gt;Object Locking&lt;/em&gt;, or &lt;em&gt;Quota&lt;/em&gt;. Click &lt;em&gt;&lt;strong&gt;Create Bucket&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/create-ai-model-bucket.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h4&gt;Upload model artifacts to the bucket&lt;/h4&gt;
&lt;p&gt;As an example LLM, the &lt;em&gt;Qwen3-0.6B-Base&lt;/em&gt; model has been retrieved from &lt;a href=&quot;https://huggingface.co/Qwen/Qwen3-0.6B-Base&quot;&gt;&lt;em&gt;Hugging Face&lt;/em&gt;&lt;/a&gt; and stored locally in the directory &lt;em&gt;&apos;Qwen3-0.6B-Base&apos;&lt;/em&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ git clone https://huggingface.co/Qwen/Qwen3-0.6B-Base
Cloning into &apos;Qwen3-0.6B-Base&apos;...
remote: Enumerating objects: 46, done.
remote: Counting objects: 100% (3/3), done.
remote: Compressing objects: 100% (3/3), done.
remote: Total 46 (delta 0), reused 0 (delta 0), pack-reused 43 (from 1)
Unpacking objects: 100% (46/46), done.
Checking out files: 100% (10/10), done.

$ ls -al  Qwen3-0.6B-Base/
total 1175909
drwxr-xr-x 1 GUJ 1049089          0 Mar  4 18:54 .
drwxr-xr-x 1 GUJ 1049089          0 Mar  4 18:52 ..
drwxr-xr-x 1 GUJ 1049089          0 Mar  4 18:54 .git
-rw-r--r-- 1 GUJ 1049089       1554 Mar  4 18:52 .gitattributes
-rw-r--r-- 1 GUJ 1049089        757 Mar  4 18:52 config.json
-rw-r--r-- 1 GUJ 1049089        144 Mar  4 18:52 generation_config.json
-rw-r--r-- 1 GUJ 1049089      11544 Mar  4 18:52 LICENSE
-rw-r--r-- 1 GUJ 1049089    1823241 Mar  4 18:52 merges.txt
-rw-r--r-- 1 GUJ 1049089 1192135096 Mar  4 18:54 model.safetensors
-rw-r--r-- 1 GUJ 1049089       3030 Mar  4 18:52 README.md
-rw-r--r-- 1 GUJ 1049089    7334926 Mar  4 18:53 tokenizer.json
-rw-r--r-- 1 GUJ 1049089       9916 Mar  4 18:53 tokenizer_config.json
-rw-r--r-- 1 GUJ 1049089    2776833 Mar  4 18:53 vocab.json
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In the &lt;em&gt;MinIO&lt;/em&gt; console, select the created bucket &lt;em&gt;&apos;s3-ai-models&apos;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/s3-ai-model-bucket.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Click &lt;em&gt;&lt;strong&gt;Upload&lt;/strong&gt;&lt;/em&gt;, choose &lt;em&gt;Upload Folder&lt;/em&gt;, and select the &lt;em&gt;&apos;Qwen3-0.6B-Base&apos;&lt;/em&gt; directory to upload the model from local storage.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/import-framework-minio-ai-bucket.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;After a few minutes, the bucket &lt;em&gt;&apos;s3-ai-models&apos;&lt;/em&gt; displays the uploaded LLM model weights along with the associated configuration and tokenizer files from the local directory.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/import-framework-minio-ai-model.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h4&gt;Configure access credentials&lt;/h4&gt;
&lt;p&gt;In the &lt;em&gt;MinIO&lt;/em&gt; console, navigate to &lt;strong&gt;Access Keys&lt;/strong&gt; and click &lt;em&gt;&lt;strong&gt;Create access key +&lt;/strong&gt;&lt;/em&gt;. Specify the &lt;em&gt;Expiry&lt;/em&gt;, &lt;em&gt;Name&lt;/em&gt;, &lt;em&gt;Description&lt;/em&gt;, and &lt;em&gt;Comments&lt;/em&gt; fields. Click &lt;em&gt;&lt;strong&gt;Create&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/import-framework-minio-create-access-key.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Store the newly generated &lt;em&gt;Access Key&lt;/em&gt; and &lt;em&gt;Secret Key&lt;/em&gt; in a secure location.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/bucket-access-key.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;These &lt;em&gt;Access Key&lt;/em&gt; and &lt;em&gt;Secret Key&lt;/em&gt; values will be required in the subsequent configuration steps to enable secure model retrieval.&lt;/p&gt;
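&lt;p&gt;As an optional check, the &lt;em&gt;MinIO&lt;/em&gt; Client (&lt;em&gt;mc&lt;/em&gt;) can be used to confirm that the new keys work and that the model files are present in the bucket. A minimal sketch, assuming &lt;em&gt;mc&lt;/em&gt; is installed on a host that can reach the &lt;em&gt;MinIO&lt;/em&gt; service endpoint (the endpoint and key values below are placeholders):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Register the MinIO endpoint under an alias using the newly created keys (placeholders)
mc alias set pcai-minio http://minio.minio.svc.cluster.local:9000 &amp;#x3C;ACCESS_KEY&gt; &amp;#x3C;SECRET_KEY&gt;
# List the uploaded model artifacts in the bucket
mc ls pcai-minio/s3-ai-models/Qwen3-0.6B-Base/
&lt;/code&gt;&lt;/pre&gt;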
&lt;h4&gt;Connect S3 data source&lt;/h4&gt;
&lt;p&gt;The following steps describe how to register &lt;em&gt;MinIO&lt;/em&gt; as an S3 data source in PCAI. Instead of accessing &lt;em&gt;MinIO&lt;/em&gt; directly through its internal service endpoint, PCAI connects to a configured S3-compatible &lt;em&gt;MinIO&lt;/em&gt; endpoint using the provided access and secret keys to retrieve model artifacts. This S3 data source serves as the object-storage backend for model ingestion and runtime access, and can also be accessed by external clients such as &lt;em&gt;Spark&lt;/em&gt; or &lt;em&gt;Kubeflow&lt;/em&gt; notebooks to ensure consistent, interoperable data access across the platform.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;In the PCAI left navigation panel, go to &lt;strong&gt;Data Engineering&lt;/strong&gt; and select &lt;em&gt;Data Sources&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/data-sources.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Select &lt;strong&gt;Object Store Data&lt;/strong&gt; tab. Click &lt;em&gt;&lt;strong&gt;Add New Object Store&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/object-store-data.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Locate the &lt;strong&gt;MinIO S3&lt;/strong&gt; tile. Click &lt;em&gt;&lt;strong&gt;Add MinIO S3&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/add-minio-s3-data-source-type.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Enter &lt;em&gt;Name&lt;/em&gt; as &lt;em&gt;&apos;s3-minio&apos;&lt;/em&gt;, specify &lt;em&gt;Endpoint&lt;/em&gt; (for example, &lt;em&gt;&apos;&lt;a href=&quot;http://minio.minio.svc.cluster.local:9000&quot;&gt;http://minio.minio.svc.cluster.local:9000&lt;/a&gt;&apos;&lt;/em&gt;), provide the &lt;em&gt;Access Key&lt;/em&gt; and &lt;em&gt;Secret Key&lt;/em&gt; created earlier, and select the &lt;em&gt;Insecure&lt;/em&gt; option. Click &lt;em&gt;&lt;strong&gt;Add&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/add-minio-s3-data-source.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A new tile named &lt;em&gt;&apos;s3-minio&apos;&lt;/em&gt;, displaying its endpoint URL (for example, &lt;em&gt;&apos;&lt;a href=&quot;http://s3-minio-service.ezdata-system.svc.cluster.local:30000&quot;&gt;http://s3-minio-service.ezdata-system.svc.cluster.local:30000&lt;/a&gt;&apos;&lt;/em&gt;) now appears on the &lt;strong&gt;Data Sources&lt;/strong&gt; page.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/minio-s3-data-source.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Model registry configuration and deployment in &lt;em&gt;MLIS&lt;/em&gt;&lt;/h3&gt;
&lt;p&gt;HPE Machine Learning Inference Software (MLIS) is natively integrated into PCAI to provide a production-ready, standardized runtime for large-scale AI inference. The following sections detail the configuration of the S3-based model registry, the creation of an LLM packaged model using this registry, and its deployment through &lt;em&gt;vLLM&lt;/em&gt; within MLIS.&lt;/p&gt;
&lt;h4&gt;Define a local S3 model registry&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;In the PCAI left navigation panel, go to &lt;strong&gt;Tools &amp;#x26; Frameworks&lt;/strong&gt;, and select the &lt;strong&gt;HPE MLIS&lt;/strong&gt; tile. Click &lt;em&gt;&lt;strong&gt;Open&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/pcai-mlis.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Registries&lt;/strong&gt;. Click &lt;em&gt;&lt;strong&gt;Create Registry&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/create-new-registry.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Select &lt;strong&gt;Internal S3 registry&lt;/strong&gt; as the model registry provider. Click &lt;em&gt;&lt;strong&gt;Continue&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/mlis-internal-s3-registry.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Enter &lt;em&gt;Name&lt;/em&gt; as &lt;em&gt;&apos;s3-minio-registry&apos;&lt;/em&gt;, choose &lt;em&gt;Object store&lt;/em&gt; as &lt;em&gt;&apos;s3-minio&apos;&lt;/em&gt;, and select &lt;em&gt;Bucket&lt;/em&gt; as &lt;em&gt;&apos;s3-ai-models&apos;&lt;/em&gt;. Click &lt;em&gt;&lt;strong&gt;Create registry&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/create-s3-minio-registry.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A registry named &lt;em&gt;&apos;s3-minio-registry&apos;&lt;/em&gt; now appears on the &lt;strong&gt;Registries&lt;/strong&gt; page.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/s3-minio-registry.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h4&gt;Create a packaged model&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;In MLIS, navigate to &lt;strong&gt;Packaged Models&lt;/strong&gt; and click &lt;em&gt;&lt;strong&gt;Create Packaged Model&lt;/strong&gt;&lt;/em&gt;. Under the &lt;strong&gt;Your model&lt;/strong&gt; tab, enter &lt;em&gt;Name&lt;/em&gt; as &lt;em&gt;&apos;qwen3-06b-base&apos;&lt;/em&gt; and &lt;em&gt;Description&lt;/em&gt; as &lt;em&gt;&apos;Qwen3-0.6B-Base&apos;&lt;/em&gt;. Click &lt;em&gt;&lt;strong&gt;Next&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/create-s3-packaged-model.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Under the &lt;strong&gt;Storage&lt;/strong&gt; tab, set &lt;em&gt;Registry&lt;/em&gt; to &lt;em&gt;&apos;s3-minio-registry&apos;&lt;/em&gt;, choose &lt;em&gt;Model format&lt;/em&gt; as &lt;em&gt;&apos;Custom&apos;&lt;/em&gt;, specify &lt;em&gt;image&lt;/em&gt; as &lt;em&gt;&apos;vllm/vllm-openai:latest&apos;&lt;/em&gt;, set &lt;em&gt;URL&lt;/em&gt; to &lt;em&gt;&apos;s3://s3-ai-models/Qwen3-0.6B-Base&apos;&lt;/em&gt;, and select &lt;em&gt;Model category&lt;/em&gt; as &lt;em&gt;&apos;llm&apos;&lt;/em&gt;. Click &lt;em&gt;&lt;strong&gt;Next&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/create-s3-packaged-model-storage.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Under the &lt;strong&gt;Resources&lt;/strong&gt; tab, select a &lt;em&gt;Resource Template&lt;/em&gt;, for example &lt;em&gt;&apos;gpu-tiny&apos;&lt;/em&gt;. Click &lt;em&gt;&lt;strong&gt;Next&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/create-s3-packaged-model-resources.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Under the &lt;strong&gt;Advanced&lt;/strong&gt; tab, set &lt;em&gt;Arguments&lt;/em&gt; to &lt;em&gt;&apos;--model /mnt/models --port 8080&apos;&lt;/em&gt;. Click &lt;em&gt;&lt;strong&gt;Done&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/create-s3-packaged-model-advanced.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;MLIS retrieves the model from the mount point &lt;em&gt;&apos;/mnt/models&apos;&lt;/em&gt;.&lt;/p&gt;
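&lt;p&gt;For context, the &lt;em&gt;&apos;vllm/vllm-openai&apos;&lt;/em&gt; image together with the arguments configured above is roughly equivalent to running the &lt;em&gt;vLLM&lt;/em&gt; OpenAI-compatible server against a local copy of the model, as in the sketch below (the local path is a placeholder; in MLIS the model is mounted at &lt;em&gt;&apos;/mnt/models&apos;&lt;/em&gt; automatically):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Roughly what MLIS runs: the vLLM OpenAI-compatible server with the configured arguments
docker run --gpus all -v /path/to/Qwen3-0.6B-Base:/mnt/models \
  vllm/vllm-openai:latest --model /mnt/models --port 8080
&lt;/code&gt;&lt;/pre&gt;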
&lt;h4&gt;Create model deployment&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;In MLIS, navigate to &lt;strong&gt;Deployments&lt;/strong&gt; and click &lt;em&gt;&lt;strong&gt;Create Deployment&lt;/strong&gt;&lt;/em&gt;. Under the &lt;strong&gt;Deployment&lt;/strong&gt; tab, enter &lt;em&gt;Name&lt;/em&gt; as &lt;em&gt;&apos;s3-minio-registry&apos;&lt;/em&gt; and select &lt;em&gt;Namespace&lt;/em&gt; as &lt;em&gt;&apos;project-user-guoping-jia&apos;&lt;/em&gt;. Click &lt;em&gt;&lt;strong&gt;Next&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/create-model-deployment.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Under the &lt;strong&gt;Packaged Model&lt;/strong&gt; tab, select &lt;strong&gt;Packaged model&lt;/strong&gt; as &lt;em&gt;&apos;qwen3-06b-base&apos;&lt;/em&gt; from the drop-down. Click &lt;em&gt;&lt;strong&gt;Next&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/create-model-deployment-packaged-model.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Under the &lt;strong&gt;Scaling&lt;/strong&gt; tab, select an &lt;em&gt;Auto scaling template&lt;/em&gt;, such as &lt;em&gt;&apos;fixed-1&apos;&lt;/em&gt;. Click &lt;em&gt;&lt;strong&gt;Next&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/create-model-deployment-scaling.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Under the &lt;strong&gt;Advanced&lt;/strong&gt; tab, add &lt;em&gt;Environment Variables&lt;/em&gt;, for example &lt;em&gt;&apos;AIOLI_DISABLE_LOGGER = 1&apos;&lt;/em&gt;. Click &lt;em&gt;&lt;strong&gt;Done&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/create-model-deployment-advanced.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;After a few minutes, the deployment &lt;em&gt;&apos;s3-minio-registry&apos;&lt;/em&gt; appears in &lt;em&gt;Ready&lt;/em&gt; status on the &lt;strong&gt;Deployments&lt;/strong&gt; page, along with its configured model deployment endpoint, for example: &lt;em&gt;&apos;&lt;a href=&quot;https://s3-minio-registry.project-user-guoping-jia.serving.ai-application.pcai0109.dc15.hpecolo.net/&quot;&gt;https://s3-minio-registry.project-user-guoping-jia.serving.ai-application.pcai0109.dc15.hpecolo.net/&lt;/a&gt;&apos;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/create-model-deployment-s3-minio-registry.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Click the &lt;strong&gt;...&lt;/strong&gt; menu next to the deployment &lt;em&gt;&apos;s3-minio-registry&apos;&lt;/em&gt; and select &lt;strong&gt;Open&lt;/strong&gt;. The &lt;strong&gt;Timeline&lt;/strong&gt; tab displays all model deployment details.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/create-model-deployment-s3-minio-registry-open.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;You have successfully configured a new registry backed by the local &lt;em&gt;MinIO&lt;/em&gt; S3 data source, packaged the model, and deployed it through MLIS. With managed access tokens provided during configuration, the resulting inference service endpoint can be integrated into a wide range of AI tooling, including the &lt;em&gt;VSCode AI Toolkit&lt;/em&gt; and LLM-serving frameworks such as &lt;em&gt;Open WebUI&lt;/em&gt;, to enable code generation workflows and virtual assistant capabilities.&lt;/p&gt;
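&lt;p&gt;Because &lt;em&gt;vLLM&lt;/em&gt; exposes an OpenAI-compatible API, the deployment endpoint can also be exercised with a simple &lt;em&gt;curl&lt;/em&gt; call once a managed access token is available. A minimal sketch (endpoint, token, and served model name are placeholders; by default &lt;em&gt;vLLM&lt;/em&gt; serves the model under the path passed to &lt;em&gt;&apos;--model&apos;&lt;/em&gt;):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Call the OpenAI-compatible completions API exposed by the MLIS/vLLM deployment (placeholders)
export MLIS_ENDPOINT=https://s3-minio-registry.project-user-guoping-jia.serving.ai-application.pcai0109.dc15.hpecolo.net
export MLIS_TOKEN=&amp;#x3C;managed-access-token&gt;
curl -sk &quot;$MLIS_ENDPOINT/v1/completions&quot; \
  -H &quot;Authorization: Bearer $MLIS_TOKEN&quot; \
  -H &quot;Content-Type: application/json&quot; \
  -d &apos;{&quot;model&quot;: &quot;/mnt/models&quot;, &quot;prompt&quot;: &quot;Hello&quot;, &quot;max_tokens&quot;: 32}&apos;
&lt;/code&gt;&lt;/pre&gt;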
&lt;h3&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;This blog post discussed and demonstrated the implementation of a fully local, privacy-preserving LLM deployment using &lt;em&gt;MinIO&lt;/em&gt; for centralized S3-compatible model storage and &lt;em&gt;vLLM&lt;/em&gt; for high-performance inference, integrated into the PCAI environment through the Import Framework and MLIS. This architecture enables scalable, secure, and cost-efficient LLM operations while eliminating reliance on external APIs or third-party model-hosting services.&lt;/p&gt;
&lt;p&gt;A local LLM stack provides a robust foundation for advanced AI initiatives, including &lt;em&gt;RAG&lt;/em&gt; pipelines, domain-specific fine-tuning, agent-based systems, and multimodal model integration. By combining S3-compatible storage with &lt;em&gt;vLLM&lt;/em&gt;&apos;s optimized inference runtime, organizations gain a flexible and extensible platform capable of evolving with emerging AI capabilities without requiring major architectural changes or introducing new vendor dependencies.&lt;/p&gt;
&lt;p&gt;Please keep coming back to the &lt;a href=&quot;https://developer.hpe.com/blog/&quot;&gt;HPE Developer Community blog&lt;/a&gt; to learn more about HPE Private Cloud AI and get more ideas on how you can use it in your everyday operations.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Managing application accounts on HPE iLO7+ servers with iLOrest - A complete guide]]></title><description><![CDATA[Introduction Starting with iLO 7, HPE introduced a new security paradigm for host-based applications that need to communicate with the…]]></description><link>https://developer.hpe.com/managing-application-accounts-on-hpe-ilo7-servers-with-ilorest-a-complete-guide/</link><guid isPermaLink="false">https://developer.hpe.com/managing-application-accounts-on-hpe-ilo7-servers-with-ilorest-a-complete-guide/</guid><pubDate>Sun, 15 Mar 2026 14:32:16 GMT</pubDate><content:encoded>&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Starting with &lt;strong&gt;iLO 7&lt;/strong&gt;, HPE introduced a new security paradigm for host-based applications that need to communicate with the Integrated Lights-Out (iLO) management processor: &lt;strong&gt;Application Accounts&lt;/strong&gt; (appaccounts). Unlike traditional iLO user accounts that humans use to log in via the iLO web GUI or SSH, application accounts are purpose-built credentials that allow &lt;em&gt;software running on the host OS&lt;/em&gt; to authenticate with iLO securely — with the secret material stored inside the server&apos;s &lt;strong&gt;Trusted Platform Module (TPM)&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;ilorest&lt;/code&gt; command-line tool provides a dedicated &lt;code&gt;appaccount&lt;/code&gt; command that lets administrators create, delete, check the existence of, view details of, and reactivate these application accounts. This guide walks through every capability in depth, with real-world examples and explanations of the internal mechanics.&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;Terminology: Application account vs. Application token&lt;/h2&gt;
&lt;p&gt;Before diving in, it&apos;s important to understand two related but distinct concepts:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th align=&quot;left&quot;&gt;Term&lt;/th&gt;
&lt;th align=&quot;left&quot;&gt;What It Is&lt;/th&gt;
&lt;th align=&quot;left&quot;&gt;Where It Lives&lt;/th&gt;
&lt;th align=&quot;left&quot;&gt;Lifecycle&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;&lt;strong&gt;Application account&lt;/strong&gt; (appaccount)&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;The application&apos;s identity record in iLO. Includes app name, ID, and authorization configuration.&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;&lt;strong&gt;iLO&lt;/strong&gt; (Redfish API)&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Created / deleted explicitly by the admin&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;&lt;strong&gt;Application token&lt;/strong&gt; (apptoken)&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;The cryptographic secret the application uses to authenticate. Has a &lt;strong&gt;defined lifespan&lt;/strong&gt;.&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;&lt;strong&gt;TPM&lt;/strong&gt; (hardware)&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Created with the account; expires and is renewed via &lt;code&gt;reactivate&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;&lt;strong&gt;When to use which term:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Use &lt;strong&gt;&lt;code&gt;appaccount create&lt;/code&gt;&lt;/strong&gt; / &lt;strong&gt;&lt;code&gt;appaccount delete&lt;/code&gt;&lt;/strong&gt; when you&apos;re managing the &lt;em&gt;full lifecycle&lt;/em&gt; of an application&apos;s registration — creating or removing it from both TPM and iLO.&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;&lt;code&gt;appaccount reactivate&lt;/code&gt;&lt;/strong&gt; when the &lt;em&gt;token has expired&lt;/em&gt; but the account itself is still valid — you&apos;re renewing the secret, not recreating the account.&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;&lt;code&gt;appaccount exists&lt;/code&gt;&lt;/strong&gt; / &lt;strong&gt;&lt;code&gt;appaccount details&lt;/code&gt;&lt;/strong&gt; to inspect the state of accounts and their tokens across both stores.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Think of it this way: the &lt;strong&gt;appaccount&lt;/strong&gt; is the &quot;identity card&quot; and the &lt;strong&gt;apptoken&lt;/strong&gt; is the &quot;password&quot; on that card. The card persists, but the password expires and needs periodic renewal.&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;Which applications are supported?&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; As of today, application accounts can only be created for the following HPE-recognized applications:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th align=&quot;left&quot;&gt;Application&lt;/th&gt;
&lt;th align=&quot;left&quot;&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;&lt;strong&gt;SUM&lt;/strong&gt; (Smart Update Manager)&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;HPE&apos;s firmware and driver update tool&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;&lt;strong&gt;SUT&lt;/strong&gt; (Smart Update Tools)&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Automated OS-level update agent&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;&lt;strong&gt;AMS&lt;/strong&gt; (Agentless Mgmt Service)&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Collects host OS information for iLO&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;&lt;strong&gt;iLOrest&lt;/strong&gt; (self-registered)&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;HPE&apos;s RESTful interface tool itself&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;&lt;strong&gt;Custom or third-party applications are not yet supported.&lt;/strong&gt; You cannot register an arbitrary application with its own &lt;code&gt;hostappid&lt;/code&gt; / &lt;code&gt;hostappname&lt;/code&gt; / &lt;code&gt;salt&lt;/code&gt; unless it is one of the recognized HPE tools listed above.&lt;/p&gt;
&lt;p&gt;When using the &lt;code&gt;--self&lt;/code&gt; flag, iLOrest creates a self-registered account with the reserved ID prefix &lt;code&gt;00b5&lt;/code&gt;, which is exclusively allocated to iLOrest&apos;s own self-registration mechanism.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;hr&gt;
&lt;h2&gt;Prerequisites&lt;/h2&gt;
&lt;p&gt;Before running any &lt;code&gt;appaccount&lt;/code&gt; command, ensure the following:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;iLO 7 v1.11.00 or later firmware&lt;/strong&gt; is installed on the server. The command explicitly checks the iLO generation and will fail with an &lt;code&gt;IncompatibleiLOVersionError&lt;/code&gt; on iLO 5 or iLO 6.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Virtual NIC (VNIC) is enabled&lt;/strong&gt; in iLO. All appaccount operations communicate with iLO over the internal VNIC interface at &lt;code&gt;https://16.1.15.1&lt;/code&gt;. If VNIC is not enabled or misconfigured, you will see a &lt;code&gt;VnicExistsError&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Root or Administrator privileges&lt;/strong&gt; are required on the host OS. The command checks for &lt;code&gt;root&lt;/code&gt; on Linux and &lt;code&gt;Administrator&lt;/code&gt; on Windows; unprivileged users are blocked with a &lt;code&gt;UserNotAdminError&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;iLO Administrator credentials&lt;/strong&gt; (&lt;code&gt;-u&lt;/code&gt; / &lt;code&gt;-p&lt;/code&gt;) are required for create, reactivate, and delete operations. Even for self-registered account deletion, credentials are strongly recommended to ensure the account is fully removed from both TPM and iLO (see the detailed explanation under the &lt;strong&gt;Deleting an application account&lt;/strong&gt; section).&lt;/li&gt;
&lt;/ol&gt;
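&lt;p&gt;A couple of quick host-side checks can confirm the first three prerequisites before you start. A minimal sketch, assuming a Linux host (16.1.15.1 is the VNIC address mentioned above):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Pre-flight checks before running any appaccount command
id -u                 # must print 0 (root)
ping -c 1 16.1.15.1   # the iLO VNIC endpoint must be reachable
&lt;/code&gt;&lt;/pre&gt;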
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note on CHIF vs. VNIC:&lt;/strong&gt; The CHIF (Channel Interface) driver used in iLO 5 and iLO 6 is &lt;strong&gt;not exposed in iLO7&lt;/strong&gt;. All host-to-iLO communication on iLO 7 is performed exclusively through the &lt;strong&gt;VNIC (Virtual NIC)&lt;/strong&gt; interface. The appaccount command uses the VNIC for both local TPM operations and iLO REST API calls.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;hr&gt;
&lt;h2&gt;Understanding dual storage: TPM + iLO&lt;/h2&gt;
&lt;p&gt;Every application account lives in &lt;strong&gt;two places simultaneously&lt;/strong&gt;:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th align=&quot;left&quot;&gt;Storage&lt;/th&gt;
&lt;th align=&quot;left&quot;&gt;What it holds&lt;/th&gt;
&lt;th align=&quot;left&quot;&gt;Access mechanism&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;&lt;strong&gt;TPM&lt;/strong&gt; (Trusted Platform Module)&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;The &lt;strong&gt;apptoken&lt;/strong&gt; — cryptographic secret for authentication. Has an expiry lifecycle.&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;VNIC driver call from the host OS (no REST auth needed)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;&lt;strong&gt;iLO&lt;/strong&gt; (Redfish REST API)&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;The &lt;strong&gt;appaccount&lt;/strong&gt; — identity record iLO uses to authorize REST API requests.&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Authenticated REST session (username + password)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;This distinction is critical: &lt;strong&gt;TPM operations are local VNIC driver calls&lt;/strong&gt; that don&apos;t require iLO REST authentication, while &lt;strong&gt;iLO-side operations always require a valid authenticated REST session&lt;/strong&gt;. This has real implications for commands like &lt;code&gt;delete --self&lt;/code&gt;, as explained below.&lt;/p&gt;
&lt;p&gt;When everything is in sync, both locations agree. But situations like a &lt;strong&gt;TPM clear&lt;/strong&gt; can cause the two to go out of sync — the appaccount still exists in iLO but the apptoken is no longer in TPM. The &lt;code&gt;appaccount&lt;/code&gt; command handles these &lt;strong&gt;orphaned accounts&lt;/strong&gt; transparently during the &lt;code&gt;create&lt;/code&gt; workflow.&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;Subcommands at a glance&lt;/h2&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th align=&quot;left&quot;&gt;Subcommand&lt;/th&gt;
&lt;th align=&quot;left&quot;&gt;Purpose&lt;/th&gt;
&lt;th align=&quot;left&quot;&gt;Does it require iLO Credentials?&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;&lt;code&gt;create&lt;/code&gt;&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Create a new appaccount (apptoken in TPM + appaccount in iLO). Silently cleans up orphans.&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;&lt;strong&gt;Yes&lt;/strong&gt; — always&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;&lt;code&gt;delete&lt;/code&gt;&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Remove an appaccount from both TPM and iLO&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;&lt;strong&gt;Yes&lt;/strong&gt; — always recommended&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;&lt;code&gt;exists&lt;/code&gt;&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Check whether an appaccount / apptoken exists in TPM or iLO&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;&lt;code&gt;details&lt;/code&gt;&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;List all appaccounts with their TPM / iLO presence status&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;No (more complete with credentials)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;&lt;code&gt;reactivate&lt;/code&gt;&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Renew an &lt;strong&gt;expired apptoken&lt;/strong&gt; in TPM (token rotation). Does &lt;strong&gt;not&lt;/strong&gt; handle orphans.&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;&lt;strong&gt;Yes&lt;/strong&gt; — always&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note on &lt;code&gt;delete --self&lt;/code&gt;:&lt;/strong&gt; The CLI allows running &lt;code&gt;delete --self&lt;/code&gt; without credentials, but this is &lt;strong&gt;not recommended&lt;/strong&gt;. Without credentials, only the TPM token is deleted. The iLO-side account &lt;strong&gt;cannot&lt;/strong&gt; be removed without an authenticated REST session, leaving an &lt;strong&gt;orphaned account in iLO&lt;/strong&gt;. Always provide &lt;code&gt;-u&lt;/code&gt; and &lt;code&gt;-p&lt;/code&gt; for a complete deletion.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;hr&gt;
&lt;h2&gt;1. Creating an application account&lt;/h2&gt;
&lt;h3&gt;What it does&lt;/h3&gt;
&lt;p&gt;The &lt;code&gt;appaccount create&lt;/code&gt; subcommand generates a new apptoken and saves it in TPM, while simultaneously registering the corresponding appaccount in iLO. This is the primary way to set up application-level authentication.&lt;/p&gt;
&lt;p&gt;Behind the scenes, the command:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Validates prerequisites&lt;/strong&gt; — checks for root/admin, VNIC access, iLO7+.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Silently handles orphans&lt;/strong&gt; — if a previous account with the same ID exists in iLO but &lt;em&gt;not&lt;/em&gt; in TPM (e.g., after a TPM clear), the command &lt;strong&gt;automatically and silently cleans up&lt;/strong&gt; the stale iLO account, then proceeds to create a fresh account in both stores as if nothing happened. The user sees only the success message — no extra steps are required.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Generates and saves&lt;/strong&gt; — calls &lt;code&gt;generate_save_token()&lt;/code&gt; to create the apptoken in TPM and register the appaccount in iLO in a single operation.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Why &lt;code&gt;create&lt;/code&gt; is the recommended recovery path after TPM clear&lt;/h3&gt;
&lt;p&gt;After a TPM clear, all apptokens are wiped but the corresponding appaccounts remain in iLO (orphaned). Rather than requiring a manual delete-then-create sequence, the &lt;code&gt;create&lt;/code&gt; command detects this situation automatically:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;TPM clear occurs → apptokens wiped → appaccounts remain in iLO (orphaned)
                                        ↓
         ilorest appaccount create (with same parameters)
                                        ↓
         Orphaned iLO account silently deleted → fresh account created in both TPM and iLO
                                        ↓
         &quot;Application account has been generated and saved successfully.&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The user doesn&apos;t need to know about the orphan — &lt;code&gt;create&lt;/code&gt; handles it transparently.&lt;/p&gt;
&lt;h3&gt;Syntax&lt;/h3&gt;
&lt;p&gt;There are two modes:&lt;/p&gt;
&lt;h4&gt;Mode 1: Named application (for SUM, SUT, or AMS)&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;ilorest appaccount create --hostappname &amp;#x3C;name&gt; --hostappid &amp;#x3C;id&gt; --salt &amp;#x3C;salt&gt; -u &amp;#x3C;ilo_user&gt; -p &amp;#x3C;ilo_password&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;All three parameters (&lt;code&gt;--hostappname&lt;/code&gt;, &lt;code&gt;--hostappid&lt;/code&gt;, &lt;code&gt;--salt&lt;/code&gt;) are &lt;strong&gt;mandatory&lt;/strong&gt; in this mode. They must correspond to one of the recognized HPE applications (SUM, SUT, AMS). The &lt;code&gt;hostappid&lt;/code&gt; is a hex string of 4 or more characters that uniquely identifies the application.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Example — Creating an account for SUM:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;ilorest appaccount create --hostappname SUM --hostappid a1b2c3d4 --salt sumsecret -u admin -p iLOpassw0rd
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Expected output:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Application account has been generated and saved successfully.
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Mode 2: Self-registration (for iLOrest itself)&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;ilorest appaccount create --self -u &amp;#x3C;ilo_user&gt; -p &amp;#x3C;ilo_password&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;--self&lt;/code&gt; flag tells iLOrest to create an account for &lt;em&gt;itself&lt;/em&gt;, using the reserved ID prefix &lt;code&gt;00b5&lt;/code&gt;. You &lt;strong&gt;cannot&lt;/strong&gt; combine &lt;code&gt;--self&lt;/code&gt; with &lt;code&gt;--hostappname&lt;/code&gt;, &lt;code&gt;--hostappid&lt;/code&gt;, or &lt;code&gt;--salt&lt;/code&gt; — doing so will produce an error.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Example — Self-registering iLOrest:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;ilorest appaccount create --self -u admin -p iLOpassw0rd
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Expected output:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Application account has been generated and saved successfully.
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;What if the account already exists?&lt;/h3&gt;
&lt;p&gt;If you try to create an account that already exists in both TPM and iLO, the command does &lt;strong&gt;not&lt;/strong&gt; fail — it simply informs you:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Application account already exists for the specified host application.
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Error scenarios&lt;/h3&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th align=&quot;left&quot;&gt;Error&lt;/th&gt;
&lt;th align=&quot;left&quot;&gt;Meaning&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;&lt;code&gt;SavinginTPMError&lt;/code&gt;&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;The apptoken could not be written to TPM. Delete the account and retry.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;&lt;code&gt;SavinginiLOError&lt;/code&gt;&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;The appaccount could not be registered in iLO. Delete and retry.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;&lt;code&gt;InvalidCredentialsError&lt;/code&gt;&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;The iLO username / password is wrong.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;&lt;code&gt;GenerateAndSaveAccountError&lt;/code&gt;&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;A general failure occurred during generation. Retry later.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;In all failure cases, the error message also suggests: &lt;em&gt;&quot;Alternatively, you can use the &lt;code&gt;--no_app_account&lt;/code&gt; option in the &lt;strong&gt;login&lt;/strong&gt; Command to log in using your iLO user account credentials.&quot;&lt;/em&gt;&lt;/p&gt;
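&lt;p&gt;For reference, that fallback looks roughly like the sketch below (example credentials; the exact login syntax for your environment may differ):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Fall back to a regular iLO user login instead of an application account
ilorest login -u admin -p iLOpassw0rd --no_app_account
&lt;/code&gt;&lt;/pre&gt;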
&lt;hr&gt;
&lt;h2&gt;2. Deleting an application account&lt;/h2&gt;
&lt;h3&gt;What it does&lt;/h3&gt;
&lt;p&gt;The &lt;code&gt;appaccount delete&lt;/code&gt; subcommand removes an application account from &lt;strong&gt;both&lt;/strong&gt; TPM (the apptoken) and iLO (the appaccount). The command follows a two-step process:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Delete the apptoken from TPM&lt;/strong&gt; — removes the cryptographic secret via a VNIC driver call. This does &lt;strong&gt;not&lt;/strong&gt; require iLO REST authentication.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Delete the appaccount from iLO&lt;/strong&gt; — removes the account via an authenticated &lt;code&gt;DELETE&lt;/code&gt; call to &lt;code&gt;/redfish/v1/AccountService/Oem/Hpe/AppAccounts/&amp;#x3C;id&gt;&lt;/code&gt;. This &lt;strong&gt;requires a valid iLOrest session&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;If the account exists in only one location (e.g., iLO but not TPM after a TPM clear), the command still succeeds as long as it was removed from at least one.&lt;/p&gt;
&lt;h3&gt;Why credentials are always recommended&lt;/h3&gt;
&lt;p&gt;iLO does not allow any modifications to its REST API resources without authentication. This means:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;TPM deletion&lt;/strong&gt; (apptoken) works without credentials — it&apos;s a local VNIC driver operation.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;iLO deletion&lt;/strong&gt; (appaccount) requires credentials to create a REST session, query the account list, and perform the DELETE call.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you run &lt;code&gt;delete --self&lt;/code&gt; without &lt;code&gt;-u&lt;/code&gt; / &lt;code&gt;-p&lt;/code&gt;, the command will:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;✅ Successfully delete the apptoken from TPM.&lt;/li&gt;
&lt;li&gt;❌ &lt;strong&gt;Silently skip&lt;/strong&gt; the iLO-side appaccount deletion (no REST session available).&lt;/li&gt;
&lt;li&gt;✅ Report &quot;success&quot; because the TPM deletion succeeded.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;The result is an orphaned appaccount left in iLO.&lt;/strong&gt; While this orphan is harmless (it can&apos;t be used without the matching TPM token) and will be silently cleaned up by a subsequent &lt;code&gt;create&lt;/code&gt;, it clutters the account list. To ensure a &lt;strong&gt;clean, immediate deletion&lt;/strong&gt;, always provide credentials.&lt;/p&gt;
&lt;h3&gt;Credential rules&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Deleting another application&apos;s account&lt;/strong&gt; (e.g., SUM, SUT, AMS): iLO Administrator credentials (&lt;code&gt;-u&lt;/code&gt; and &lt;code&gt;-p&lt;/code&gt;) are &lt;strong&gt;mandatory&lt;/strong&gt;. The command will reject the request without them.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Deleting your own self-registered account&lt;/strong&gt; (&lt;code&gt;--self&lt;/code&gt; or ID containing &lt;code&gt;00b5&lt;/code&gt;): Credentials are not enforced by the CLI, but are &lt;strong&gt;strongly recommended&lt;/strong&gt; for a complete deletion.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Short appid expansion&lt;/h3&gt;
&lt;p&gt;When you provide a 4-character &lt;code&gt;hostappid&lt;/code&gt;, the command automatically expands it to the full ID using the &lt;code&gt;ExpandAppId()&lt;/code&gt; function. This makes it convenient to use the short IDs shown in the &lt;code&gt;details&lt;/code&gt; output.&lt;/p&gt;
&lt;h3&gt;Examples&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Delete a SUT account (credentials required):&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;ilorest appaccount delete --hostappid a1b2 -u admin -p iLOpassw0rd
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Delete the iLOrest self-registered account (credentials recommended):&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;ilorest appaccount delete --self -u admin -p iLOpassw0rd
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Expected output on success:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Application account has been deleted successfully.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;If the account does not exist:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;The application account you are trying to delete does not exist.
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;h2&gt;3. Checking if an application account exists&lt;/h2&gt;
&lt;h3&gt;What it does&lt;/h3&gt;
&lt;p&gt;The &lt;code&gt;appaccount exists&lt;/code&gt; subcommand checks whether an appaccount/apptoken is present in &lt;strong&gt;either&lt;/strong&gt; TPM or iLO (or both). It is a read-only operation.&lt;/p&gt;
&lt;p&gt;Internally, it:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Expands a short (4-char) &lt;code&gt;hostappid&lt;/code&gt; to the full ID if possible.&lt;/li&gt;
&lt;li&gt;Checks for the apptoken in &lt;strong&gt;TPM&lt;/strong&gt; (this also detects &lt;em&gt;expired/inactive&lt;/em&gt; tokens — they still &quot;exist&quot;).&lt;/li&gt;
&lt;li&gt;Checks for the appaccount in &lt;strong&gt;iLO&lt;/strong&gt; by querying the REST API (requires credentials for a complete check).&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;If found in either location, the account is reported as existing.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Without credentials, only the TPM check is performed. Provide &lt;code&gt;-u&lt;/code&gt; / &lt;code&gt;-p&lt;/code&gt; for a complete existence check across both stores.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;Examples&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Check by application ID:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;ilorest appaccount exists --hostappid a1b2
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Check the self-registered iLOrest account:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;ilorest appaccount exists --self
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Output when found:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Application account exists for this host application.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Output when not found:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Application account does not exist for this hostapp.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The command returns a distinct exit code (&lt;code&gt;ACCOUNT_DOES_NOT_EXIST_ERROR&lt;/code&gt;) when the account is not found, which is useful for scripting and automation.&lt;/p&gt;
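&lt;p&gt;For example, a provisioning script could use that exit code to create the self-registered account only when it is missing (a sketch; it assumes a non-zero exit code when the account is absent, as described above):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Create the iLOrest self-registered account only if it does not already exist
if ! ilorest appaccount exists --self; then
  ilorest appaccount create --self -u admin -p iLOpassw0rd
fi
&lt;/code&gt;&lt;/pre&gt;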
&lt;hr&gt;
&lt;h2&gt;4. Viewing application account details&lt;/h2&gt;
&lt;h3&gt;What it does&lt;/h3&gt;
&lt;p&gt;The &lt;code&gt;appaccount details&lt;/code&gt; subcommand provides a &lt;strong&gt;consolidated view&lt;/strong&gt; of all application accounts, showing which ones have apptokens in TPM, appaccounts in iLO, or both. This is the most powerful diagnostic tool for understanding the state of appaccounts on a server.&lt;/p&gt;
&lt;p&gt;Internally, the command:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Calls &lt;code&gt;ListAppIds()&lt;/code&gt; to enumerate all apptokens known to TPM.&lt;/li&gt;
&lt;li&gt;Queries the iLO REST API at &lt;code&gt;/redfish/v1/AccountService/Oem/Hpe/AppAccounts/&lt;/code&gt; to find accounts registered in iLO.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Merges&lt;/strong&gt; the two lists — any appaccount found only in iLO (orphaned after a TPM clear) is included with &lt;code&gt;ExistsInTPM: no&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Without credentials, only TPM-side data is returned. Providing &lt;code&gt;-u&lt;/code&gt; / &lt;code&gt;-p&lt;/code&gt; enables the iLO REST query, giving you the complete picture.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;Viewing all accounts&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;ilorest appaccount details --hostappid all -u admin -p iLOpassw0rd
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Sample output (table format):&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Application name: SUM
Application Id: **c3d4
App account exists in TPM: yes
App account exists in iLO: yes

Application name: SUT
Application Id: **e5f6
App account exists in TPM: yes
App account exists in iLO: yes

Application name: iLOrest
Application Id: **00b5
App account exists in TPM: yes
App account exists in iLO: yes
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Security note:&lt;/strong&gt; Application IDs are &lt;strong&gt;masked&lt;/strong&gt; — only the last 4 characters are shown, prefixed with &lt;code&gt;**&lt;/code&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;JSON output (for scripting and automation)&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;ilorest appaccount details --hostappid all --json -u admin -p iLOpassw0rd
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Sample JSON output:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;[
  {
    &quot;ApplicationName&quot;: &quot;SUM&quot;,
    &quot;ApplicationID&quot;: &quot;**c3d4&quot;,
    &quot;ExistsInTPM&quot;: true,
    &quot;ExistsIniLO&quot;: true
  },
  {
    &quot;ApplicationName&quot;: &quot;SUT&quot;,
    &quot;ApplicationID&quot;: &quot;**e5f6&quot;,
    &quot;ExistsInTPM&quot;: true,
    &quot;ExistsIniLO&quot;: true
  }
]
&lt;/code&gt;&lt;/pre&gt;
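&lt;p&gt;The JSON form pairs naturally with tools like &lt;em&gt;jq&lt;/em&gt;. For example, the following sketch lists orphaned accounts (present in iLO but missing from TPM), assuming the field names shown in the sample output above:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# List accounts that still exist in iLO but have lost their TPM apptoken (orphans)
ilorest appaccount details --hostappid all --json -u admin -p iLOpassw0rd \
  | jq &apos;.[] | select(.ExistsInTPM == false and .ExistsIniLO == true)&apos;
&lt;/code&gt;&lt;/pre&gt;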
&lt;h3&gt;Filtering: TPM-only or iLO-only&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Show only apptoken (TPM) status:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;ilorest appaccount details --hostappid all --only_token
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Show only appaccount (iLO) status:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;ilorest appaccount details --hostappid all --only_account -u admin -p iLOpassw0rd
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Viewing a specific account&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;ilorest appaccount details --hostappid a1b2
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Supports both 4-char short IDs and full IDs.&lt;/p&gt;
&lt;h3&gt;Viewing the self-registered account&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;ilorest appaccount details --self
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If no self-registered account exists:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;No self-registered iLOrest app account found.
Use &apos;appaccount details --hostappid all&apos; to see all app accounts.
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Detecting orphaned accounts&lt;/h3&gt;
&lt;p&gt;After a TPM clear, you might see:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Application Name: SUM
Application Id: **c3d4
App account exists in TPM: no
App account exists in iLO: yes
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This tells you the SUM apptoken has been wiped from TPM but the appaccount persists in iLO. To restore it, simply run &lt;code&gt;appaccount create&lt;/code&gt; with the same parameters — the orphaned iLO account will be cleaned up automatically and silently.&lt;/p&gt;
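&lt;p&gt;For the SUM example used earlier, that recovery is a single command (same parameters as the original &lt;code&gt;create&lt;/code&gt;):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;ilorest appaccount create --hostappname SUM --hostappid a1b2c3d4 --salt sumsecret -u admin -p iLOpassw0rd
&lt;/code&gt;&lt;/pre&gt;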
&lt;hr&gt;
&lt;h2&gt;5. Reactivating an expired apptoken&lt;/h2&gt;
&lt;h3&gt;What it does&lt;/h3&gt;
&lt;p&gt;The &lt;code&gt;appaccount reactivate&lt;/code&gt; subcommand is designed &lt;strong&gt;exclusively&lt;/strong&gt; for renewing &lt;strong&gt;expired or inactive apptokens&lt;/strong&gt; as part of the token &lt;strong&gt;expiry and rotation&lt;/strong&gt; lifecycle. Application tokens stored in TPM have a defined lifespan. When a token expires, the application can no longer authenticate with iLO. The &lt;code&gt;reactivate&lt;/code&gt; command renews the token in place without requiring a full delete-and-recreate cycle.&lt;/p&gt;
&lt;h3&gt;What reactivate does not do&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;❌ &lt;strong&gt;Does not handle orphaned accounts.&lt;/strong&gt; If the apptoken has been removed from TPM entirely (e.g., after a TPM clear), &lt;code&gt;reactivate&lt;/code&gt; will fail with an error. Use &lt;code&gt;appaccount create&lt;/code&gt; instead — it handles orphan cleanup automatically and silently.&lt;/li&gt;
&lt;li&gt;❌ &lt;strong&gt;Does not create new accounts.&lt;/strong&gt; It only renews existing, expired tokens.&lt;/li&gt;
&lt;li&gt;❌ &lt;strong&gt;Does not modify the iLO-side appaccount.&lt;/strong&gt; Only the TPM-side apptoken is renewed.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;When to use reactivate vs. create&lt;/h3&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th align=&quot;left&quot;&gt;Scenario&lt;/th&gt;
&lt;th align=&quot;left&quot;&gt;What happened&lt;/th&gt;
&lt;th align=&quot;left&quot;&gt;Recommended command&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;Token expired (still in TPM)&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Apptoken&apos;s lifespan has elapsed&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;&lt;code&gt;appaccount reactivate&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;Periodic token rotation&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Scheduled credential renewal&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;&lt;code&gt;appaccount reactivate&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;TPM was cleared (orphan in iLO)&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Apptoken wiped; appaccount remains in iLO&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;&lt;code&gt;appaccount create&lt;/code&gt; (silently cleans up the orphan)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;Account does not exist at all&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Fresh setup&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;&lt;code&gt;appaccount create&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;Account needs to be removed&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Decommissioning an application&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;&lt;code&gt;appaccount delete&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3&gt;How it works internally&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;Checks whether the apptoken exists in TPM. The &lt;code&gt;_check_exists_in_tpm()&lt;/code&gt; function returns &lt;code&gt;True&lt;/code&gt; for both active and inactive/expired tokens — an expired token still &lt;em&gt;exists&lt;/em&gt;, it&apos;s just no longer valid for authentication.&lt;/li&gt;
&lt;li&gt;If the token does &lt;strong&gt;not&lt;/strong&gt; exist in TPM at all (it was wiped, not just expired), the command raises an error and directs the user to use &lt;code&gt;create&lt;/code&gt; instead.&lt;/li&gt;
&lt;li&gt;If the token exists but is expired/inactive, it calls &lt;code&gt;reactivate_token()&lt;/code&gt; to renew it in place.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Syntax&lt;/h3&gt;
&lt;pre&gt;&lt;code&gt;ilorest appaccount reactivate --hostappname &amp;#x3C;name&gt; --hostappid &amp;#x3C;id&gt; --salt &amp;#x3C;salt&gt; -u &amp;#x3C;ilo_user&gt; -p &amp;#x3C;ilo_password&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Or for self-registered:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ilorest appaccount reactivate --self -u &amp;#x3C;ilo_user&gt; -p &amp;#x3C;ilo_password&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;iLO Administrator credentials are &lt;strong&gt;always&lt;/strong&gt; required — no exceptions.&lt;/p&gt;
&lt;h3&gt;Examples&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Reactivate an expired SUM token:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;ilorest appaccount reactivate --hostappname SUM --hostappid a1b2c3d4 --salt sumsecret -u admin -p iLOpassw0rd
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Output on success:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Application account has been reactivated successfully.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;If the token doesn&apos;t exist in TPM (e.g., after TPM clear):&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;The application account you are trying to reactivate does not exist.
If the account was orphaned after a TPM clear, please use &apos;appaccount delete&apos; followed by &apos;appaccount create&apos; to recreate it.
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Tip:&lt;/strong&gt; If you see this error after a TPM clear, just run &lt;code&gt;appaccount create&lt;/code&gt; with the same parameters. It will silently clean up the orphaned iLO account and create everything fresh.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;hr&gt;
&lt;h2&gt;Orphaned accounts: what they are and how they&apos;re handled&lt;/h2&gt;
&lt;h3&gt;What is an orphaned account?&lt;/h3&gt;
&lt;p&gt;An orphaned account is one where the &lt;strong&gt;appaccount exists in iLO&lt;/strong&gt; but the &lt;strong&gt;apptoken is missing from TPM&lt;/strong&gt;. The account can&apos;t be used for authentication because the secret is gone.&lt;/p&gt;
&lt;h3&gt;How orphans occur&lt;/h3&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th align=&quot;left&quot;&gt;Trigger&lt;/th&gt;
&lt;th align=&quot;left&quot;&gt;Result&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;TPM clear (via BIOS or iLO)&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;All apptokens wiped from TPM; appaccounts remain in iLO&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;Failed creation (partial write)&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Apptoken may exist in TPM but not in iLO, or vice versa&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;Deleting &lt;code&gt;--self&lt;/code&gt; without credentials&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Apptoken removed from TPM; appaccount left in iLO (no REST session)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3&gt;How the tool handles orphans&lt;/h3&gt;
&lt;p&gt;The &lt;code&gt;appaccount create&lt;/code&gt; command is the &lt;strong&gt;primary and recommended recovery mechanism&lt;/strong&gt; for orphaned accounts. When you run &lt;code&gt;create&lt;/code&gt; for an application that has an orphaned account:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;It detects that the appaccount exists in iLO but the apptoken is missing from TPM.&lt;/li&gt;
&lt;li&gt;It &lt;strong&gt;silently deletes&lt;/strong&gt; the orphaned appaccount from iLO.&lt;/li&gt;
&lt;li&gt;It creates a fresh apptoken in TPM and a fresh appaccount in iLO.&lt;/li&gt;
&lt;li&gt;The user sees only: &lt;code&gt;&quot;Application account has been generated and saved successfully.&quot;&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;No manual cleanup is required.&lt;/strong&gt; The entire orphan recovery is invisible to the user.&lt;/p&gt;
&lt;h3&gt;Summary by subcommand&lt;/h3&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th align=&quot;left&quot;&gt;Subcommand&lt;/th&gt;
&lt;th align=&quot;left&quot;&gt;Orphan behavior&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;&lt;code&gt;create&lt;/code&gt;&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;&lt;strong&gt;Silently cleans up&lt;/strong&gt; the orphaned iLO account, then creates fresh in both stores. User sees a normal success message.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;&lt;code&gt;delete&lt;/code&gt;&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Attempts deletion from both TPM and iLO independently. Succeeds if removed from at least one.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;&lt;code&gt;details&lt;/code&gt;&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Shows orphans clearly — &lt;code&gt;ExistsInTPM: no&lt;/code&gt; alongside &lt;code&gt;ExistsIniLO: yes&lt;/code&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;&lt;code&gt;exists&lt;/code&gt;&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Reports the account as existing if found in either store.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;&lt;code&gt;reactivate&lt;/code&gt;&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;&lt;strong&gt;Does not handle orphans.&lt;/strong&gt; Fails if the apptoken is missing from TPM. Only works on expired tokens still in TPM.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;hr&gt;
&lt;h2&gt;Credential encoding support&lt;/h2&gt;
&lt;p&gt;For security-sensitive automation pipelines, the &lt;code&gt;appaccount&lt;/code&gt; command supports &lt;strong&gt;encoded credentials&lt;/strong&gt;. When the &lt;code&gt;--encode&lt;/code&gt; flag is active, the provided &lt;code&gt;-u&lt;/code&gt; (username) and &lt;code&gt;-p&lt;/code&gt; (password) values are treated as encoded strings and decoded internally using the &lt;code&gt;Encryption.decode_credentials()&lt;/code&gt; utility before use.&lt;/p&gt;
&lt;p&gt;This prevents plaintext passwords from appearing in process listings, shell history, or CI/CD logs.&lt;/p&gt;
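&lt;p&gt;As a rough sketch of what this can look like in an automation pipeline (the environment variables holding the pre-encoded values are hypothetical and depend on how your pipeline stores its secrets):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Hypothetical pipeline step: the encoded username and password come from the
# pipeline&apos;s secret store; --encode tells ilorest to decode them before use,
# so no plaintext credential appears on the command line or in job logs.
ilorest appaccount create --self -u &quot;$ILO_USER_ENCODED&quot; -p &quot;$ILO_PASS_ENCODED&quot; --encode
&lt;/code&gt;&lt;/pre&gt;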
&lt;hr&gt;
&lt;h2&gt;Session management under the hood&lt;/h2&gt;
&lt;p&gt;The &lt;code&gt;appaccount&lt;/code&gt; command communicates with iLO in two fundamentally different ways:&lt;/p&gt;
&lt;h3&gt;1. VNIC driver calls (for TPM operations)&lt;/h3&gt;
&lt;p&gt;Operations like &lt;code&gt;generate_save_token&lt;/code&gt;, &lt;code&gt;delete_token&lt;/code&gt;, &lt;code&gt;token_exists&lt;/code&gt;, &lt;code&gt;reactivate_token&lt;/code&gt;, and &lt;code&gt;ListAppIds&lt;/code&gt; are performed through the &lt;strong&gt;VNIC driver&lt;/strong&gt; — a direct host-to-iLO channel that does not go through the REST API. These calls do not require REST authentication.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; In iLO7, the CHIF (Channel Interface) driver used in iLO5/iLO6 is &lt;strong&gt;not available&lt;/strong&gt;. All host-to-iLO communication is through VNIC exclusively.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;2. Redfish REST API over VNIC (for iLO account operations)&lt;/h3&gt;
&lt;p&gt;Querying, creating, and deleting appaccounts in iLO is done via the Redfish REST API at &lt;code&gt;https://16.1.15.1&lt;/code&gt; (the VNIC IP). &lt;strong&gt;iLO requires authentication for all REST API operations&lt;/strong&gt; — both reads and writes.&lt;/p&gt;
&lt;p&gt;Since &lt;code&gt;appaccount&lt;/code&gt; runs without a prior &lt;code&gt;ilorest login&lt;/code&gt; session, it creates &lt;strong&gt;temporary REST sessions&lt;/strong&gt; on demand:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;_create_ilo_session()&lt;/code&gt; — POSTs to &lt;code&gt;/redfish/v1/SessionService/Sessions/&lt;/code&gt; with the provided credentials, obtaining an &lt;code&gt;X-Auth-Token&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;_delete_ilo_session()&lt;/code&gt; — DELETEs the session when done.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These temporary sessions are always cleaned up in a &lt;code&gt;finally&lt;/code&gt; block, even if an error occurs — preventing session exhaustion on iLO.&lt;/p&gt;
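&lt;p&gt;For illustration only, the temporary session lifecycle is roughly equivalent to the following Redfish calls, sketched here with &lt;code&gt;curl&lt;/code&gt; (iLOrest performs these steps internally over the VNIC channel):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# 1. Create a session; iLO returns an X-Auth-Token header and a session URI in Location.
curl -sk -D headers.txt -o /dev/null -X POST \
  -H &quot;Content-Type: application/json&quot; \
  -d &apos;{&quot;UserName&quot;: &quot;admin&quot;, &quot;Password&quot;: &quot;iLOpassw0rd&quot;}&apos; \
  https://16.1.15.1/redfish/v1/SessionService/Sessions/

TOKEN=$(grep -i &apos;^X-Auth-Token:&apos; headers.txt | awk &apos;{print $2}&apos; | tr -d &apos;\r&apos;)
SESSION=$(grep -i &apos;^Location:&apos; headers.txt | awk &apos;{print $2}&apos; | tr -d &apos;\r&apos;)

# 2. ...perform the account operation, sending the X-Auth-Token header...

# 3. Delete the session so it is not left open on iLO
#    (prefix with https://16.1.15.1 if the Location header is a relative path).
curl -sk -X DELETE -H &quot;X-Auth-Token: $TOKEN&quot; &quot;$SESSION&quot;
&lt;/code&gt;&lt;/pre&gt;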
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Key takeaway:&lt;/strong&gt; Any operation that touches the iLO REST API (create, delete from iLO, details with iLO data, orphan cleanup) requires credentials. Operations that only touch TPM (token existence check, token deletion, token reactivation) use the VNIC driver and don&apos;t need REST credentials — though the CLI still requires them for create and reactivate to ensure the full operation succeeds.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;hr&gt;
&lt;h2&gt;Quick reference: Common workflows&lt;/h2&gt;
&lt;h3&gt;Workflow 1: First-time setup of iLOrest self-registration&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Create the self-registered account (credentials required)
ilorest appaccount create --self -u admin -p iLOpassw0rd

# Verify it was created
ilorest appaccount exists --self

# View the details
ilorest appaccount details --self
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Workflow 2: Recovery after TPM clear&lt;/h3&gt;
&lt;p&gt;No manual cleanup needed — just re-run &lt;code&gt;create&lt;/code&gt;. Orphans are handled silently.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Simply recreate the account (orphaned iLO account is cleaned up automatically)
ilorest appaccount create --hostappname SUM --hostappid a1b2c3d4 --salt sumsecret -u admin -p iLOpassw0rd

# Verify recovery
ilorest appaccount details --hostappid a1b2 -u admin -p iLOpassw0rd
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Workflow 3: Token expiry — reactivating an expired apptoken&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Reactivate the expired token (token must still exist in TPM, just expired)
ilorest appaccount reactivate --hostappname SUM --hostappid a1b2c3d4 --salt sumsecret -u admin -p iLOpassw0rd

# Verify the account is active
ilorest appaccount exists --hostappid a1b2
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Workflow 4: Audit and cleanup&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# List all accounts in JSON (credentials enable iLO-side data)
ilorest appaccount details --hostappid all --json -u admin -p iLOpassw0rd

# Delete a stale account (credentials required for iLO-side removal)
ilorest appaccount delete --hostappid e5f6 -u admin -p iLOpassw0rd

# Confirm deletion
ilorest appaccount exists --hostappid e5f6
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Workflow 5: Deleting the self-registered account&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Always provide credentials for a complete deletion from both TPM and iLO
ilorest appaccount delete --self -u admin -p iLOpassw0rd

# Verify
ilorest appaccount exists --self
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;p&gt;⚠️ &lt;strong&gt;Reminder:&lt;/strong&gt; Running &lt;code&gt;delete --self&lt;/code&gt; without credentials only removes the apptoken from TPM. The appaccount in iLO will remain as an orphan until the next &lt;code&gt;create --self&lt;/code&gt; silently cleans it up. Always include credentials for an immediate, complete deletion.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;hr&gt;
&lt;h2&gt;Error reference&lt;/h2&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th align=&quot;left&quot;&gt;Error&lt;/th&gt;
&lt;th align=&quot;left&quot;&gt;Cause&lt;/th&gt;
&lt;th align=&quot;left&quot;&gt;Resolution&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;&lt;code&gt;IncompatibleiLOVersionError&lt;/code&gt;&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Running on iLO5 or iLO6&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Upgrade to iLO7 firmware&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;&lt;code&gt;VnicExistsError&lt;/code&gt;&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;VNIC not enabled or misconfigured&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Enable VNIC in iLO settings; verify host OS NIC config&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;&lt;code&gt;UserNotAdminError&lt;/code&gt;&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Not running as root / Administrator&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Use &lt;code&gt;sudo&lt;/code&gt; (Linux) or run as Administrator (Windows)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;&lt;code&gt;UsernamePasswordRequiredError&lt;/code&gt;&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Missing &lt;code&gt;-u&lt;/code&gt; or &lt;code&gt;-p&lt;/code&gt; for a command that requires them&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Provide iLO admin credentials&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;&lt;code&gt;InvalidCommandLineError&lt;/code&gt;&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Wrong flag combination (e.g., &lt;code&gt;--self&lt;/code&gt; with &lt;code&gt;--hostappid&lt;/code&gt;)&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Check syntax; use &lt;code&gt;-h&lt;/code&gt; for help&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;&lt;code&gt;NoAppAccountError&lt;/code&gt;&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Account / token not found for the given ID&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Verify &lt;code&gt;hostappid&lt;/code&gt;; use &lt;code&gt;create&lt;/code&gt; if the token was wiped&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;&lt;code&gt;SavinginTPMError&lt;/code&gt;&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;TPM write failed during creation&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Delete the account and retry&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;&lt;code&gt;SavinginiLOError&lt;/code&gt;&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;iLO REST API write failed during creation&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Delete the account and retry&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;&lt;code&gt;AppAccountExistsError&lt;/code&gt;&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Account already exists (create) or ID not found (details)&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Use &lt;code&gt;exists&lt;/code&gt; to verify; delete first if needed&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;&lt;code&gt;ReactivateAppAccountTokenError&lt;/code&gt;&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Token reactivation failed&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Retry; check iLO logs for details&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;&lt;code&gt;GenBeforeLoginError&lt;/code&gt;&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Cannot determine iLO version&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Ensure VNIC is enabled on the iLO7 server&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;hr&gt;
&lt;h2&gt;Security best practices&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Always provide iLO credentials&lt;/strong&gt; for create, delete, and reactivate operations to ensure both TPM and iLO stay in sync.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Never embed plaintext credentials in scripts.&lt;/strong&gt; Use the &lt;code&gt;--encode&lt;/code&gt; flag or integrate with a secrets manager.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Audit appaccounts regularly.&lt;/strong&gt; Use &lt;code&gt;appaccount details --hostappid all --json -u admin -p &amp;#x3C;password&gt;&lt;/code&gt; and feed the output into your SIEM or compliance tools (see the sketch after this list).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Clean up unused accounts.&lt;/strong&gt; If an application (e.g., SUM) is no longer deployed on a server, delete its appaccount with credentials.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;After TPM clear, just re-run &lt;code&gt;create&lt;/code&gt;.&lt;/strong&gt; The command silently handles orphaned iLO accounts — no manual cleanup needed.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Monitor token expiry.&lt;/strong&gt; When an application suddenly fails to authenticate with iLO, check if its apptoken has expired using &lt;code&gt;appaccount details&lt;/code&gt; and reactivate if needed.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Restrict iLO admin credentials.&lt;/strong&gt; Only users who manage appaccounts should have iLO admin access.&lt;/li&gt;
&lt;/ol&gt;
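&lt;p&gt;A minimal audit sketch, assuming &lt;code&gt;jq&lt;/code&gt; is available on the host and using the JSON field names shown earlier in this article, could flag orphaned accounts like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# List all app accounts and keep only orphans: present in iLO but missing from TPM.
ilorest appaccount details --hostappid all --json -u admin -p iLOpassw0rd \
  | jq &apos;[.[] | select(.ExistsIniLO == true and .ExistsInTPM == false)]&apos;
&lt;/code&gt;&lt;/pre&gt;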
&lt;hr&gt;
&lt;h2&gt;In summary&lt;/h2&gt;
&lt;p&gt;The &lt;code&gt;appaccount&lt;/code&gt; command in iLOrest is a powerful, security-first tool for managing application-level authentication on iLO7+ servers. By leveraging dual storage — &lt;strong&gt;apptokens in TPM&lt;/strong&gt; for hardware-backed secret security and &lt;strong&gt;appaccounts in iLO&lt;/strong&gt; for REST API authorization — it provides a robust foundation for automated server management.&lt;/p&gt;
&lt;p&gt;Key strengths of the design include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Transparent orphan recovery&lt;/strong&gt; — the &lt;code&gt;create&lt;/code&gt; command silently cleans up stale iLO accounts after a TPM clear, requiring no manual intervention.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Token lifecycle management&lt;/strong&gt; — the &lt;code&gt;reactivate&lt;/code&gt; command handles token expiry and rotation without disrupting the account.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Comprehensive diagnostics&lt;/strong&gt; — the &lt;code&gt;details&lt;/code&gt; command provides a unified view of both stores, making it easy to spot inconsistencies.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Currently limited to &lt;strong&gt;SUM, SUT, AMS, and iLOrest&lt;/strong&gt; (with custom application support not yet available), it covers the core HPE management stack comprehensively. Whether you&apos;re automating firmware updates with SUM, collecting host data with AMS, or scripting iLO configuration with iLOrest, the &lt;code&gt;appaccount&lt;/code&gt; command ensures your application credentials are secure, auditable, and recoverable.&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;Get started today&lt;/h2&gt;
&lt;h3&gt;Ready to secure your iLO7 application authentication? Here&apos;s how to take the first step:&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Verify your environment&lt;/strong&gt; — Confirm that your server is running iLO7 firmware and that VNIC is enabled.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Create your first appaccount&lt;/strong&gt; — Run &lt;code&gt;ilorest appaccount create --self -u &amp;#x3C;admin&gt; -p &amp;#x3C;password&gt;&lt;/code&gt; to register iLOrest itself.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Audit your existing accounts&lt;/strong&gt; — Run &lt;code&gt;ilorest appaccount details --hostappid all --json -u &amp;#x3C;admin&gt; -p &amp;#x3C;password&gt;&lt;/code&gt; to see what&apos;s already registered.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Integrate into your automation&lt;/strong&gt; — Add appaccount create/delete steps to your provisioning scripts and CI/CD pipelines so every server is configured consistently.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Bookmark this guide&lt;/strong&gt; — Refer back to the &lt;a href=&quot;#quick-reference-common-workflows&quot;&gt;Workflows&lt;/a&gt; and &lt;a href=&quot;#error-reference&quot;&gt;Error reference&lt;/a&gt; sections when troubleshooting.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Interested in learning more....stay tuned with the &lt;a href=&quot;https://developer.hpe.com/blog/&quot;&gt;HPE Developer Community&lt;/a&gt;....&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Announcing Chapel 2.8!]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/announcing-chapel-2-8/</link><guid isPermaLink="false">https://developer.hpe.com/announcing-chapel-2-8/</guid><pubDate>Thu, 12 Mar 2026 20:40:09 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[LLM observability and cost management on HPE Private Cloud AI]]></title><description><![CDATA[LLM (Large language Model) observability and cost management are critical for deploying reliable, secure, and financially sustainable AI…]]></description><link>https://developer.hpe.com/llm-observability-and-cost-management-on-hpe-private-cloud-ai/</link><guid isPermaLink="false">https://developer.hpe.com/llm-observability-and-cost-management-on-hpe-private-cloud-ai/</guid><pubDate>Fri, 06 Mar 2026 06:06:26 GMT</pubDate><content:encoded>&lt;p&gt;LLM (Large language Model) observability and cost management are critical for deploying reliable, secure, and financially sustainable AI applications. By tracking metrics like token usage, latency, and output quality, teams can prevent runaway costs, reduce hallucinations, and ensure regulatory compliance.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.litellm.ai/&quot;&gt;LiteLLM&lt;/a&gt; and &lt;a href=&quot;https://langfuse.com/&quot;&gt;Langfuse&lt;/a&gt; together provide a powerful, open-source stack for LLM observability and cost management (tokenomics), allowing developers to unify, trace, and monitor API usage across hundreds of models. LiteLLM acts as the proxy/SDK to normalize requests and track usage, while Langfuse records these interactions for detailed analysis of token usage, latency, and costs.&lt;/p&gt;
&lt;p&gt;This blog post walks you through deployment and configuration of LiteLLM and Langfuse on HPE Private Cloud AI. By leveraging these technologies, organizations can perform token-level cost tracking, granular tracing, output streaming and cost analysis of LLMs and AI applications deployed on HPE Private Cloud AI.&lt;/p&gt;
&lt;h2&gt;HPE Private Cloud AI&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.hpe.com/platform/hpe-private-cloud-ai/home/&quot;&gt;HPE Private Cloud AI (HPE PCAI)&lt;/a&gt; offers a comprehensive, turnkey AI solution designed to address key enterprise challenges, from selecting the appropriate LLMs to efficiently hosting and deploying them. Beyond these core functions, HPE Private Cloud AI empowers organizations to take full control of their AI adoption journey by offering a curated set of pre-integrated &lt;em&gt;NVIDIA Inference Microservices (NIM)&lt;/em&gt; LLMs, along with a powerful suite of AI tools and frameworks for data engineering, analytics, and data science.&lt;/p&gt;
&lt;p&gt;HPE Machine Learning Inference Software (MLIS) is an enterprise-grade solution designed to simplify the deployment, management, and monitoring of machine learning (ML) models at scale. It specifically targets the complexities of moving models from development into production, with a particular focus on large language models.&lt;/p&gt;
&lt;p&gt;HPE AI Essentials (AIE) Software is the integrated software layer that provides the tools for building, deploying, and managing generative AI applications, including HPE MLIS. It provides a flexible &lt;strong&gt;Import Framework&lt;/strong&gt; that enables organizations to deploy their own applications or third-party solutions, like LiteLLM and Langfuse.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-03-06-121708.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Deploy Langfuse and LiteLLM via Import Framework&lt;/h2&gt;
&lt;h3&gt;1. Prepare the Helm charts for Langfuse&lt;/h3&gt;
&lt;p&gt;Obtain the Helm chart for Langfuse from the &lt;a href=&quot;https://github.com/langfuse/langfuse-k8s&quot;&gt;GitHub repository&lt;/a&gt; and implement the prerequisites. Here&apos;s the &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00aie18hen_us&amp;#x26;page=ManageClusters/importing-applications.html&quot;&gt;reference document&lt;/a&gt; for the import framework prerequisites.&lt;/p&gt;
&lt;p&gt;These updates are implemented in the revised Langfuse Helm charts, and are available in the &lt;a href=&quot;https://github.com/ai-solution-eng/frameworks/tree/main/langfuse&quot;&gt;GitHub repository&lt;/a&gt;. With these customizations, &lt;em&gt;Langfuse&lt;/em&gt; can now be deployed on HPE Private Cloud AI using &lt;strong&gt;Import Framework&lt;/strong&gt;.&lt;/p&gt;
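&lt;p&gt;As a sketch, assuming a standard git/Helm toolchain on your workstation, pulling the upstream chart and the PCAI-ready customizations looks like this (repository layouts may change; check each repository&apos;s README):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Upstream Langfuse chart
git clone https://github.com/langfuse/langfuse-k8s.git

# Revised charts with the HPE PCAI Import Framework prerequisites already applied
git clone https://github.com/ai-solution-eng/frameworks.git
&lt;/code&gt;&lt;/pre&gt;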
&lt;h3&gt;2. Deploy and configure Langfuse&lt;/h3&gt;
&lt;p&gt;Use the &lt;strong&gt;Import Framework&lt;/strong&gt; in HPE Private Cloud AI to deploy Langfuse.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-03-06-152537.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-03-06-152941.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-03-06-153003.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-03-06-153018.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;After a few minutes, Langfuse is deployed in HPE Private Cloud AI and reaches the &lt;strong&gt;Ready&lt;/strong&gt; state.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-03-06-153639.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;3. Configure Langfuse and create API Keys&lt;/h3&gt;
&lt;p&gt;Access the Langfuse application deployed on HPE Private Cloud AI and create a new sign-in account. Set up your &lt;strong&gt;organization&lt;/strong&gt; and &lt;strong&gt;project&lt;/strong&gt; in Langfuse, then create a new API key for this project: &lt;em&gt;Project Settings -&gt; Project API Keys -&gt; Create new API keys&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-03-06-153759.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Secure the generated API keys; they will be used when deploying LiteLLM.&lt;/p&gt;
&lt;h3&gt;4. Prepare the Helm charts for LiteLLM&lt;/h3&gt;
&lt;p&gt;Obtain the Helm chart for LiteLLM from &lt;a href=&quot;https://github.com/BerriAI/litellm/tree/main/deploy/charts/litellm-helm&quot;&gt;litellm-helm&lt;/a&gt; repository and implement the prerequisites. Here&apos;s the &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00aie18hen_us&amp;#x26;page=ManageClusters/importing-applications.html&quot;&gt;reference document&lt;/a&gt; for the import framework prerequisites.&lt;/p&gt;
&lt;p&gt;These updates are implemented in the revised LiteLLM Helm charts, and are available in the &lt;a href=&quot;https://github.com/ai-solution-eng/frameworks/tree/main/litellm-helm&quot;&gt;GitHub repository&lt;/a&gt;.&lt;/p&gt;
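&lt;p&gt;As before, a quick sketch of pulling both the upstream chart and the customized one (paths follow the links above and may change over time):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Upstream LiteLLM chart lives under deploy/charts/litellm-helm in the litellm repository
git clone https://github.com/BerriAI/litellm.git

# Revised litellm-helm chart (same frameworks repository as in the Langfuse step)
git clone https://github.com/ai-solution-eng/frameworks.git
&lt;/code&gt;&lt;/pre&gt;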
&lt;h3&gt;5. Deploy and configure LiteLLM&lt;/h3&gt;
&lt;p&gt;Use the  &lt;strong&gt;Import Framework&lt;/strong&gt; in HPE Private Cloud AI to deploy LiteLLM.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-03-06-160236.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-03-06-160303.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Set the default username/password (UI_USERNAME/UI_PASSWORD) for the LiteLLM application and the Langfuse details (LANGFUSE_HOST, LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, obtained in Step 3) in &lt;em&gt;values.yaml&lt;/em&gt; as shown below.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-03-06-160517.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-03-06-160546.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;After a few minutes, LiteLLM is deployed in HPE Private Cloud AI and reaches the &lt;strong&gt;Ready&lt;/strong&gt; state.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-03-06-161531.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;6. Deploy LLM in HPE MLIS&lt;/h3&gt;
&lt;p&gt;Access HPE MLIS by clicking the &lt;strong&gt;HPE MLIS&lt;/strong&gt; tile in the &lt;strong&gt;Tools &amp;#x26; Frameworks&lt;/strong&gt; tab.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-03-06-162245.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;To deploy a pre-packaged LLM (llama-3.1-8b-instruct) in HPE MLIS, you need to create a new deployment as shown below.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-03-06-162547.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Click on &lt;strong&gt;Create Deployment&lt;/strong&gt;, give a name to the new deployment, choose the appropriate packaged model, and set the scaling factor.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-03-06-162617.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-03-06-162630.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-03-06-162649.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-03-06-162710.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;After a few minutes, the deployment status will be &lt;strong&gt;Ready&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-03-06-162732.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;7. LLM endpoint and API keys&lt;/h3&gt;
&lt;p&gt;LLM endpoint details can be obtained via &lt;strong&gt;GenAI&lt;/strong&gt;-&gt;&lt;strong&gt;Model Endpoints&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-03-06-163934.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Generate an API token for the LLM via &lt;strong&gt;Actions&lt;/strong&gt; -&gt; &lt;strong&gt;Generate API Token&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-03-06-164059.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;After generating and securing the API token, you will configure LiteLLM with the LLM details.&lt;/p&gt;
&lt;h3&gt;8. Configure LiteLLM&lt;/h3&gt;
&lt;p&gt;Launch the LiteLLM application deployed on HPE Private Cloud AI and sign in using the credentials set in &lt;em&gt;values.yaml&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-03-06-165129.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Add the LLM information via &lt;strong&gt;Models + Endpoints&lt;/strong&gt; -&gt; &lt;strong&gt;Add Model&lt;/strong&gt;, providing the LLM details such as Provider, LLM Model Name, API Base, and OpenAI API Key.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-03-06-165511.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can associate the cost to this model by updating &lt;strong&gt;Input Cost (per 1M tokens)&lt;/strong&gt; and &lt;strong&gt;Output Cost (per 1M tokens)&lt;/strong&gt; inside &lt;strong&gt;Model Settings&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-03-04-114358.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Now, create a new virtual key in LiteLLM to access the model:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-03-06-165904.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Using the LiteLLM virtual key and the LiteLLM URL, you can access the LLM (meta/llama-3.1) and use it in any AI application.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Note: In this example, the LiteLLM endpoint URL is not secured, and anyone can send requests to the proxy URL. You can protect it by putting it behind oauth2-proxy.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Here is a sample code snippet that calls meta/llama via LiteLLM (replace the placeholder with your LiteLLM API key):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import requests
import json

LITELLM_PROXY_API_KEY = &quot;sk-***********&quot;
url = &apos;https://litellm.ai-application.pcai0109.dc15.hpecolo.net/chat/completions&apos;

headers = {
    &apos;Content-Type&apos;: &apos;application/json&apos;,
    &apos;Authorization&apos;: f&apos;Bearer {LITELLM_PROXY_API_KEY}&apos;
}

data = {
    &quot;model&quot;: &quot;openai/meta/llama-3.1-8b-instruct&quot;,
    &quot;messages&quot;: [
        {
            &quot;role&quot;: &quot;user&quot;,
            &quot;content&quot;: &quot;Describe Angkor Wat in 300 words&quot;
        }
    ]
}
response = requests.post(url, headers=headers, json=data, verify=False)
print(json.dumps(response.json(), indent=2))
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;9. LLM observability and cost analysis in Langfuse&lt;/h3&gt;
&lt;p&gt;Access the Langfuse application deployed on HPE Private Cloud AI and log in using the credentials. The traces of the LLM calls will appear under &lt;strong&gt;Observability&lt;/strong&gt; -&gt; &lt;strong&gt;Tracing&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-03-09-113852.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The project&apos;s home page shows various metrics derived from the LLM traces, providing details on LLM usage, associated costs, and more.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-03-09-114413.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Dashboards with custom widgets can be created in Langfuse to observe various parameters of LLM traces. A sample custom dashboard created in Langfuse is shown below.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-03-09-114052.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;By combining the capabilities of LiteLLM and Langfuse with HPE AIE&apos;s robust model management, HPE Private Cloud AI empowers organizations to observe LLM usage and manage LLM costs in their AI solutions. This integrated approach ensures data privacy, operational control, and scalability for deployments.&lt;/p&gt;
&lt;p&gt;Stay tuned to the &lt;a href=&quot;https://developer.hpe.com/blog/&quot;&gt;HPE Developer Community blog&lt;/a&gt; for more guides and best practices on leveraging HPE Private Cloud AI for your AI.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[AI storage tips, open source at HPE, simplified permissions, and more!]]></title><link>https://developer.hpe.com/2026-march-04/</link><guid isPermaLink="false">https://developer.hpe.com/2026-march-04/</guid><pubDate>Wed, 04 Mar 2026 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Building an MCP server to take advantage of OpsRamp monitoring - A Step-by-Step Implementation Guide Part 2]]></title><description><![CDATA[Introduction In my previous article, I explored the Model Context Protocol (MCP) as the universal connector for AI applications. Now, let's…]]></description><link>https://developer.hpe.com/sivabala-building-opsramps-mcp-server-a-step-by-step-implementation-guide/</link><guid isPermaLink="false">https://developer.hpe.com/sivabala-building-opsramps-mcp-server-a-step-by-step-implementation-guide/</guid><pubDate>Wed, 25 Feb 2026 22:44:12 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;In my &lt;a href=&quot;https://developer.hpe.com/blog/sivabala-model-context-protocol-mcp-the-universal-connector-for-ai-applications/&quot;&gt;previous article&lt;/a&gt;, I explored the Model Context Protocol (MCP) as the universal connector for AI applications. Now, let&apos;s roll up our sleeves and dive into the actual implementation of OpsRamp&apos;s MCP server, transforming monitoring data into AI-accessible intelligence.&lt;/p&gt;
&lt;p&gt;This isn&apos;t just another code walkthrough – it&apos;s a practical guide that takes you from project setup to a fully functional MCP server that exposes OpsRamp&apos;s monitoring capabilities to AI applications like Claude Desktop.&lt;/p&gt;
&lt;h2&gt;The OpsRamp challenge: Bridging monitoring and AI&lt;/h2&gt;
&lt;p&gt;OpsRamp&apos;s monitoring platform generates thousands of alerts daily, tracks hundreds of devices, and processes massive volumes of operational telemetry. Yet despite this wealth of data, operations teams working with it have found themselves manually correlating information, parsing through dashboards, and spending valuable time translating monitoring data into actionable insights.&lt;/p&gt;
&lt;p&gt;The challenge wasn&apos;t lack of data—it was the cognitive overhead of making sense of that data quickly and accurately. What if operations teams could simply ask questions like:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&quot;Show me all critical alerts from the last hour&quot;&lt;/li&gt;
&lt;li&gt;&quot;What&apos;s the current status of our AWS EC2 instances?&quot;&lt;/li&gt;
&lt;li&gt;&quot;Which devices need attention today?&quot;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To make this vision a reality, we needed to build an MCP server that bridges OpsRamp&apos;s monitoring platform with AI applications.&lt;/p&gt;
&lt;h2&gt;Setting up the project with UV&lt;/h2&gt;
&lt;p&gt;Before diving into code walkthrough, let&apos;s set up a modern Python project using UV – a fast, reliable Python package installer and resolver that&apos;s become the go-to choice for modern Python development.&lt;/p&gt;
&lt;h3&gt;Why UV?&lt;/h3&gt;
&lt;p&gt;UV offers several advantages over traditional Python package management:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Lightning fast&lt;/strong&gt;: Up to 10-100x faster than pip&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Reliable&lt;/strong&gt;: Deterministic dependency resolution&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Modern&lt;/strong&gt;: Built-in virtual environment management&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Simple&lt;/strong&gt;: Straightforward commands and workflows&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Installing UV&lt;/h3&gt;
&lt;p&gt;To install UV on your system, start with this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# On macOS and Linux
curl -LsSf https://astral.sh/uv/install.sh | sh

# On Windows
powershell -c &quot;irm https://astral.sh/uv/install.ps1 | iex&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Creating the project structure&lt;/h3&gt;
&lt;p&gt;Once that is complete, you can begin creating your OpsRamp MCP server project:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Create a new project directory
mkdir opsramp-mcp-server
cd opsramp-mcp-server

# Initialize a new UV project
uv init

# Create the virtual environment
uv venv

# Activate the virtual environment
# On macOS/Linux:
source .venv/bin/activate
# On Windows:
.venv\Scripts\activate
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Installing dependencies&lt;/h3&gt;
&lt;p&gt;Create a &lt;code&gt;pyproject.toml&lt;/code&gt; file with our project dependencies:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-toml&quot;&gt;[project]
name = &quot;opsramp-mcp-server&quot;
version = &quot;0.1.0&quot;
description = &quot;MCP server for OpsRamp monitoring platform&quot;
readme = &quot;README.md&quot;
requires-python = &quot;&gt;=3.10&quot;
dependencies = [
    &quot;mcp&gt;=0.9.0&quot;,
    &quot;aiohttp&gt;=3.9.0&quot;,
    &quot;pydantic&gt;=2.0.0&quot;,
]

[project.scripts]
opsramp-mcp-server = &quot;opsramp_mcp_server:run_main&quot;

[build-system]
requires = [&quot;hatchling&quot;]
build-backend = &quot;hatchling.build&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Install the dependencies:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;uv pip install -e .
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Project structure&lt;/h3&gt;
&lt;p&gt;Your project structure should look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-text&quot;&gt;opsramp-mcp-server/
├── .venv/                    # Virtual environment (created by UV)
├── opsramp_mcp_server.py     # Main server implementation
├── pyproject.toml            # Project configuration
├── README.md                 # Documentation
└── .env                      # Environment variables (don&apos;t commit!)
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Architecture overview: The three-layer approach&lt;/h2&gt;
&lt;p&gt;The MCP server built in this guide follows a clean three-layer architecture:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Authentication layer&lt;/strong&gt;: Handles OAuth 2.0 token management and API security&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;API Communication layer&lt;/strong&gt;: Manages HTTP requests and responses with OpsRamp&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;MCP Interface layer&lt;/strong&gt;: Exposes tools and resources through the MCP protocol&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;This separation ensures maintainability, testability, and clear boundaries between concerns.&lt;/p&gt;
&lt;h2&gt;Implementation deep dive: Building the server&lt;/h2&gt;
&lt;p&gt;Now let&apos;s walk through the implementation step by step, understanding each component and design decision.&lt;/p&gt;
&lt;h3&gt;Step 1: Understanding the MCP Python SDK&lt;/h3&gt;
&lt;p&gt;Before diving into the imports, it&apos;s important to understand the MCP Python SDK.&lt;/p&gt;
&lt;p&gt;Anthropic provides an official Python SDK that simplifies building MCP servers by handling the protocol details, message serialization, and transport layer complexity. The SDK provides:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Server framework&lt;/strong&gt;: Core classes for building MCP servers&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Type definitions&lt;/strong&gt;: Strongly-typed interfaces for tools, resources, and prompts&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Transport layers&lt;/strong&gt;: Built-in support for stdio (standard input/output) communication&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Protocol handling&lt;/strong&gt;: Automatic serialization and deserialization of MCP messages&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This allows developers to focus on business logic rather than protocol implementation details.&lt;/p&gt;
&lt;h3&gt;Step 2: Imports and logging configuration&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;#!/usr/bin/env python3
&quot;&quot;&quot;
OpsRamp MCP Server
Provides access to OpsRamp OpsQL API with OAuth 2.0 client credentials authentication
&quot;&quot;&quot;

import asyncio
import json
import logging
import os
import sys

from typing import Any, Dict, Optional
from datetime import datetime, timedelta

import aiohttp
import mcp.server.stdio
import mcp.types as types
from mcp.server import Server
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Understanding the MCP imports:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;code&gt;import mcp.server.stdio&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;This module provides the standard input/output transport layer for MCP servers. It enables communication between your server and AI applications through stdin/stdout streams, which is the standard way MCP servers communicate with host applications like Claude Desktop. This transport mechanism allows your server to run as a subprocess that AI applications can interact with.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;code&gt;import mcp.types as types&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;This module contains all the type definitions for MCP protocol objects including &lt;code&gt;Tool&lt;/code&gt;, &lt;code&gt;Resource&lt;/code&gt;, &lt;code&gt;Prompt&lt;/code&gt;, and &lt;code&gt;TextContent&lt;/code&gt;. These types ensure type safety and provide clear interfaces for defining what your server exposes to AI applications. For example, &lt;code&gt;types.Tool&lt;/code&gt; is used to define each tool with its name, description, and input schema, while &lt;code&gt;types.TextContent&lt;/code&gt; represents the text responses your tools return.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;code&gt;from mcp.server import Server&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;Server&lt;/code&gt; class is the core of your MCP server implementation. It provides decorators like &lt;code&gt;@server.list_tools()&lt;/code&gt; and &lt;code&gt;@server.call_tool()&lt;/code&gt; that you use to register and implement your server&apos;s capabilities. The Server instance handles all the protocol-level details like capability negotiation, message routing, and error handling, allowing you to focus on implementing your specific tools and resources.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Key decisions here:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Async from the ground up&lt;/strong&gt;: We use &lt;code&gt;asyncio&lt;/code&gt; and &lt;code&gt;aiohttp&lt;/code&gt; for non-blocking I/O, essential for handling multiple concurrent AI requests efficiently&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Comprehensive logging&lt;/strong&gt;: Detailed logging helps troubleshoot issues in production environments&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Type hints&lt;/strong&gt;: Using Python&apos;s type system makes code more maintainable and catches errors early&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The logging configuration writes to both a file and console, crucial for debugging MCP servers since they run as background processes:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;logging.basicConfig(
    level=logging.DEBUG,
    format=&apos;%(asctime)s - %(name)s - %(levelname)s - %(filename)s:%(lineno)d - %(funcName)s - %(message)s&apos;,
    handlers=[
        logging.FileHandler(&apos;opsramp_opsql_mcp.log&apos;),
        # Log to stderr: stdout carries the MCP stdio protocol stream,
        # so writing log output there would corrupt it.
        logging.StreamHandler(sys.stderr)
    ]
)
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Step 3: The OpsRampClient class—authentication layer&lt;/h3&gt;
&lt;p&gt;The &lt;code&gt;OpsRampClient&lt;/code&gt; class handles all interactions with OpsRamp&apos;s API, starting with OAuth 2.0 authentication:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;class OpsRampClient:
    &quot;&quot;&quot;Client for OpsRamp API with OAuth 2.0 authentication&quot;&quot;&quot;
    
    def __init__(self, base_url: str, client_id: str, client_secret: str, tenant_id: str):
        self.base_url = base_url.rstrip(&apos;/&apos;)
        self.client_id = client_id
        self.client_secret = client_secret
        self.tenant_id = tenant_id
        self.access_token: Optional[str] = None
        self.token_expires_at: Optional[datetime] = None
        self.session = aiohttp.ClientSession()
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Design considerations:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Token caching&lt;/strong&gt;: The system stores the access token and its expiration time to avoid unnecessary authentication requests&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Session reuse&lt;/strong&gt;: A single &lt;code&gt;aiohttp.ClientSession&lt;/code&gt; is maintained for connection pooling and better performance&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Tenant isolation&lt;/strong&gt;: The tenant ID ensures proper data isolation in multi-tenant environments&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Step 4: OAuth 2.0 token management&lt;/h3&gt;
&lt;p&gt;The &lt;code&gt;_get_access_token&lt;/code&gt; method implements intelligent token management:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;async def _get_access_token(self) -&gt; str:
    &quot;&quot;&quot;Get or refresh the OAuth 2.0 access token&quot;&quot;&quot;
    # Check if we have a valid token
    if (self.access_token and self.token_expires_at and 
        datetime.now() &amp;#x3C; self.token_expires_at - timedelta(minutes=5)):
        logger.debug(&quot;Using existing access token&quot;)
        return self.access_token
    
    # Request new token
    token_url = f&quot;{self.base_url}/auth/oauth/token&quot;
    
    data = {
        &apos;grant_type&apos;: &apos;client_credentials&apos;,
        &apos;client_id&apos;: self.client_id,
        &apos;client_secret&apos;: self.client_secret
    }
    
    headers = {
        &apos;Content-Type&apos;: &apos;application/x-www-form-urlencoded&apos;,
        &apos;Accept&apos;: &apos;application/json&apos;
    }
    
    async with self.session.post(token_url, data=data, headers=headers) as response:
        if response.status == 200:
            token_data = await response.json()
            self.access_token = token_data[&apos;access_token&apos;]
            expires_in = token_data.get(&apos;expires_in&apos;, 3600)
            self.token_expires_at = datetime.now() + timedelta(seconds=expires_in)
            logger.info(&quot;Successfully obtained access token&quot;)
            return self.access_token
        else:
            error_text = await response.text()
            raise Exception(f&quot;Failed to get access token: {response.status} - {error_text}&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Smart token management:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;5-minute buffer&lt;/strong&gt;: We refresh tokens 5 minutes before expiration to prevent race conditions&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Automatic refresh&lt;/strong&gt;: Expired tokens are automatically refreshed transparently&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Error handling&lt;/strong&gt;: Failed authentication attempts are logged with detailed error messages&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Step 5: Authenticated API requests&lt;/h3&gt;
&lt;p&gt;The &lt;code&gt;_make_request&lt;/code&gt; method wraps all API calls with authentication:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;async def _make_request(self, method: str, endpoint: str, **kwargs) -&gt; Dict[str, Any]:
    &quot;&quot;&quot;Make an authenticated request to the OpsRamp API&quot;&quot;&quot;
    access_token = await self._get_access_token()
    
    headers = kwargs.get(&apos;headers&apos;, {})
    headers.update({
        &apos;Authorization&apos;: f&apos;Bearer {access_token}&apos;,
        &apos;Accept&apos;: &apos;application/json&apos;
    })
    kwargs[&apos;headers&apos;] = headers
    
    url = f&quot;{self.base_url}{endpoint}&quot;
    
    logger.debug(f&quot;Making {method} request to: {url}&quot;)
    
    async with self.session.request(method, url, **kwargs) as response:
        response_text = await response.text()
        
        if response.status &gt;= 200 and response.status &amp;#x3C; 300:
            try:
                return await response.json()
            except json.JSONDecodeError:
                return {&quot;response&quot;: response_text}
        else:
            raise Exception(f&quot;API request failed: {response.status} - {response_text}&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Robust request handling:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Automatic authentication&lt;/strong&gt;: Every request includes a valid Bearer token&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Flexible parameters&lt;/strong&gt;: The &lt;code&gt;**kwargs&lt;/code&gt; pattern allows passing any HTTP parameters&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;JSON parsing with fallback&lt;/strong&gt;: Attempts to parse JSON, falls back to raw text if needed&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Comprehensive error reporting&lt;/strong&gt;: Failed requests include status codes and response bodies&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Step 6: OpsQL query execution&lt;/h3&gt;
&lt;p&gt;The &lt;code&gt;execute_opsql_query&lt;/code&gt; method provides access to OpsRamp&apos;s query language:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;async def execute_opsql_query(self, query: str) -&gt; Dict[str, Any]:
    &quot;&quot;&quot;Execute an OpsQL query&quot;&quot;&quot;
    logger.info(f&quot;Preparing to execute OpsQL query: {query[:100]}&quot;)
    
    endpoint = f&quot;/v3/api/opsql/{self.tenant_id}/queries&quot;
    
    headers = {
        &apos;Content-Type&apos;: &apos;application/json&apos;
    }
    
    result = await self._make_request(&apos;POST&apos;, endpoint, json=json.loads(query), headers=headers)
    logger.info(&quot;OpsQL query executed successfully&quot;)
    return result
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;OpsQL integration:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Tenant-scoped queries&lt;/strong&gt;: Queries are automatically scoped to the authenticated tenant&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;JSON payload&lt;/strong&gt;: Queries are sent as structured JSON for parsing and validation&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Result streaming&lt;/strong&gt;: Large result sets are handled efficiently through async I/O&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Step 7: MCP server initialization&lt;/h3&gt;
&lt;p&gt;Now create the MCP server instance:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;server = Server(&quot;opsramp-mcp-server&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This single line creates an MCP server with the name &quot;opsramp-mcp-server&quot; that will be visible to AI applications.&lt;/p&gt;
&lt;h3&gt;Step 8: Tool registration&lt;/h3&gt;
&lt;p&gt;The &lt;code&gt;@server.list_tools()&lt;/code&gt; decorator registers available tools with the MCP protocol:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;@server.list_tools()
async def handle_list_tools() -&gt; list[types.Tool]:
    &quot;&quot;&quot;List available tools&quot;&quot;&quot;
    logger.info(&quot;Listing available tools&quot;)
    
    tools = [
        types.Tool(
            name=&quot;execute_opsql_query&quot;,
            description=&quot;Execute an OpsQL query against the OpsRamp platform&quot;,
            inputSchema={
                &quot;type&quot;: &quot;object&quot;,
                &quot;properties&quot;: {
                    &quot;query&quot;: {
                        &quot;type&quot;: &quot;string&quot;,
                        &quot;description&quot;: &quot;The OpsQL query to execute&quot;
                    }
                }
            }
        ),
        types.Tool(
            name=&quot;get_alerts&quot;,
            description=&quot;Get alerts from OpsRamp&quot;,
            inputSchema={
                &quot;type&quot;: &quot;object&quot;,
                &quot;properties&quot;: {
                    &quot;query_filter&quot;: {
                        &quot;type&quot;: &quot;string&quot;,
                        &quot;description&quot;: &quot;Query filter for alerts&quot;
                    },
                    &quot;limit&quot;: {
                        &quot;type&quot;: &quot;integer&quot;,
                        &quot;description&quot;: &quot;Maximum number of alerts to return&quot;,
                        &quot;default&quot;: 100
                    }
                }
            }
        ),
        types.Tool(
            name=&quot;get_minimal_resource_details&quot;,
            description=&quot;Get Minimal Resource Details from OpsRamp&quot;,
            inputSchema={
                &quot;type&quot;: &quot;object&quot;,
                &quot;properties&quot;: {
                    &quot;query_filter&quot;: {
                        &quot;type&quot;: &quot;string&quot;,
                        &quot;description&quot;: &quot;Query filter for alerts&quot;
                    }
                }
            }
        ),
        types.Tool(
            name=&quot;get_release_version&quot;,
            description=&quot;Get release version from OpsRamp&quot;,
            inputSchema={
                &quot;type&quot;: &quot;object&quot;
            }
        ),
        types.Tool(
            name=&quot;get_alert_statistics_dashboard&quot;,
            description=&quot;Get Alert Statistics dashboard from OpsRamp&quot;,
            inputSchema={
                &quot;type&quot;: &quot;object&quot;
            }
        )
    ]
    
    return tools
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Tool design principles:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Clear descriptions&lt;/strong&gt;: Each tool has a human-readable description that AI models can understand&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;JSON Schema validation&lt;/strong&gt;: Input schemas ensure AI applications provide properly formatted data&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Intuitive naming&lt;/strong&gt;: Tool names follow verb-noun patterns (get_alerts, execute_opsql_query)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Progressive disclosure&lt;/strong&gt;: Simple tools (get_release_version) require no parameters, while complex ones (execute_opsql_query) have detailed schemas&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Step 9: Tool execution handler&lt;/h3&gt;
&lt;p&gt;The &lt;code&gt;@server.call_tool()&lt;/code&gt; decorator implements the actual tool logic. This is where the magic happens – where AI requests transform into actual OpsRamp API calls:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;@server.call_tool()
async def handle_call_tool(
    name: str, arguments: dict[str, Any] | None
) -&gt; list[types.TextContent]:
    &quot;&quot;&quot;Handle tool calls&quot;&quot;&quot;
    global opsramp_client
    arguments = arguments or {}  # guard: the MCP client may pass None for tools without parameters
    
    if not opsramp_client:
        return [types.TextContent(
            type=&quot;text&quot;,
            text=&quot;Error: OpsRamp client not initialized. Please check your configuration.&quot;
        )]
    
    try:
        if name == &quot;execute_opsql_query&quot;:
            query = arguments.get(&quot;query&quot;, &quot;&quot;)
            
            if not query.strip():
                return [types.TextContent(
                    type=&quot;text&quot;,
                    text=&quot;Error: Query cannot be empty&quot;
                )]
            
            result = await opsramp_client.execute_opsql_query(query)
            
            return [types.TextContent(
                type=&quot;text&quot;,
                text=f&quot;Query executed successfully:\n\n```json\n{json.dumps(result, indent=2)}\n```&quot;
            )]
        
        elif name == &quot;get_alerts&quot;:
            endpoint = f&quot;/api/v2/tenants/{opsramp_client.tenant_id}/alerts/search&quot;
            query_filter = arguments.get(&quot;query_filter&quot;, &quot;&quot;)
            limit = arguments.get(&quot;limit&quot;, 100)
            
            result = await opsramp_client._make_request(
                method=&quot;GET&quot;,
                endpoint=endpoint,
                params={&quot;query&quot;: query_filter, &quot;limit&quot;: limit}
            )
            
            return [types.TextContent(
                type=&quot;text&quot;,
                text=f&quot;Query executed successfully:\n\n```json\n{json.dumps(result, indent=2)}\n```&quot;
            )]
        
        elif name == &quot;get_minimal_resource_details&quot;:
            endpoint = f&quot;/api/v2/tenants/{opsramp_client.tenant_id}/resources/minimal&quot;
            query_filter = arguments.get(&quot;query_filter&quot;, &quot;&quot;)
            
            result = await opsramp_client._make_request(
                method=&quot;GET&quot;,
                endpoint=endpoint,
                params={&quot;query&quot;: query_filter}
            )
            
            return [types.TextContent(
                type=&quot;text&quot;,
                text=f&quot;Query executed successfully:\n\n```json\n{json.dumps(result, indent=2)}\n```&quot;
            )]
        
        elif name == &quot;get_release_version&quot;:
            endpoint = &quot;/tenancy/api/v3/release-version&quot;
            result = await opsramp_client._make_request(method=&quot;GET&quot;, endpoint=endpoint)
            
            return [types.TextContent(
                type=&quot;text&quot;,
                text=f&quot;Query executed successfully:\n\n```json\n{json.dumps(result, indent=2)}\n```&quot;
            )]
        
        elif name == &quot;get_alert_statistics_dashboard&quot;:
            dashboard_link = &quot;https://pod7.opsramp.com/portal/dashboards/4dfc7792-d03d-11ec-9e13-0242ac120006&quot;
            
            return [types.TextContent(
                type=&quot;text&quot;,
                text=f&quot;Alert Statistics Dashboard: {dashboard_link}&quot;
            )]
        
        else:
            return [types.TextContent(
                type=&quot;text&quot;,
                text=f&quot;Unknown tool: {name}&quot;
            )]
    
    except Exception as e:
        logger.error(f&quot;Error in tool call {name}: {e}&quot;)
        return [types.TextContent(
            type=&quot;text&quot;,
            text=f&quot;Error executing {name}: {str(e)}&quot;
        )]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Robust tool execution:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Validation first&lt;/strong&gt;: Input parameters are validated before making API calls&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Consistent error handling&lt;/strong&gt;: All errors are caught and returned as user-friendly messages&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;JSON formatting&lt;/strong&gt;: Results are formatted in readable JSON with syntax highlighting&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Comprehensive logging&lt;/strong&gt;: Every tool call is logged for debugging and audit trails&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Step 10: Query examples helper&lt;/h3&gt;
&lt;p&gt;The &lt;code&gt;get_query_examples&lt;/code&gt; function provides AI applications with sample queries to learn from:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def get_query_examples(category: str) -&gt; str:
    &quot;&quot;&quot;Get OpsQL query examples by category&quot;&quot;&quot;
    examples = {
        &quot;alerts&quot;: [
            &apos;&apos;&apos;{
  &quot;objectType&quot;: &quot;alert&quot;,
  &quot;fields&quot;: [&quot;id&quot;, &quot;clientId&quot;, &quot;component&quot;, &quot;currentState&quot;],
  &quot;filterCriteria&quot;: &quot;currentState=critical&quot;
}&apos;&apos;&apos;
        ],
        &quot;devices&quot;: [
            &apos;&apos;&apos;{
  &quot;fields&quot;: [&quot;ipAddress&quot;, &quot;installedAppName&quot;, &quot;name&quot;, &quot;clientName&quot;],
  &quot;objectType&quot;: &quot;resource&quot;,
  &quot;filterCriteria&quot;: &quot;availableAppName = &apos;AWS&apos; and installedAppName = &apos;OpsRamp PM&apos;&quot;,
  &quot;pageNo&quot;: 1,
  &quot;pageSize&quot;: 50
}&apos;&apos;&apos;
        ],
        &quot;topology&quot;: [
            &apos;&apos;&apos;{
  &quot;objectType&quot;: &quot;topology&quot;,
  &quot;filterCriteria&quot;: &quot;(intAppId = &apos;INTG-73e5a7fa-8674-4f33-ae53-26a2b9c049ea&apos;)&quot;,
  &quot;fields&quot;: [&quot;id&quot;, &quot;sourceId&quot;, &quot;targetId&quot;, &quot;relation&quot;]
}&apos;&apos;&apos;
        ]
    }
    
    if category == &quot;all&quot;:
        result = &quot;&quot;
        for cat, queries in examples.items():
            result += f&quot;\n**{cat.upper()} Examples:**\n&quot;
            for i, query in enumerate(queries, 1):
                result += f&quot;{i}. {query}\n&quot;
        return result
    
    elif category in examples:
        result = f&quot;**{category.upper()} Examples:**\n&quot;
        for i, query in enumerate(examples[category], 1):
            result += f&quot;{i}. {query}\n&quot;
        return result
    
    return &quot;Category not found. Available categories: alerts, devices, topology, all&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Learning by example:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Categorized samples&lt;/strong&gt;: Examples are organized by domain (alerts, devices, topology)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Real-world queries&lt;/strong&gt;: Each example represents actual use cases&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Progressive complexity&lt;/strong&gt;: Examples range from simple to complex&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;AI-friendly format&lt;/strong&gt;: Structured to help AI models understand query patterns&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Step 11: Main entry point and server startup&lt;/h3&gt;
&lt;p&gt;The &lt;code&gt;main&lt;/code&gt; function orchestrates server initialization and lifecycle:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;async def main():
    &quot;&quot;&quot;Main entry point&quot;&quot;&quot;
    global opsramp_client
    
    # Get configuration from environment variables
    base_url = os.getenv(&quot;OPSRAMP_BASE_URL&quot;, &quot;https://develop.opsramp.com&quot;)
    client_id = os.getenv(&quot;OPSRAMP_CLIENT_ID&quot;)
    client_secret = os.getenv(&quot;OPSRAMP_CLIENT_SECRET&quot;)
    tenant_id = os.getenv(&quot;OPSRAMP_TENANT_ID&quot;)
    
    if not all([client_id, client_secret, tenant_id]):
        logger.error(&quot;Missing required environment variables:&quot;)
        logger.error(&quot;- OPSRAMP_CLIENT_ID&quot;)
        logger.error(&quot;- OPSRAMP_CLIENT_SECRET&quot;)
        logger.error(&quot;- OPSRAMP_TENANT_ID&quot;)
        logger.error(&quot;Optional: OPSRAMP_BASE_URL&quot;)
        return
    
    # Initialize the OpsRamp client
    opsramp_client = OpsRampClient(base_url, client_id, client_secret, tenant_id)
    
    logger.info(f&quot;Starting OpsRamp MCP Server&quot;)
    logger.info(f&quot;Base URL: {base_url}&quot;)
    logger.info(f&quot;Tenant ID: {tenant_id}&quot;)
    
    # Run the server
    async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
        await server.run(
            read_stream,
            write_stream,
            server.create_initialization_options()
        )

def run_main():
    &quot;&quot;&quot;Entry point wrapper&quot;&quot;&quot;
    asyncio.run(main())

if __name__ == &quot;__main__&quot;:
    run_main()
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Startup sequence:&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Environment validation&lt;/strong&gt;: Checks for required configuration before starting&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Client initialization&lt;/strong&gt;: Creates the OpsRamp client with credentials&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;STDIO transport&lt;/strong&gt;: Uses standard input/output for communication with AI applications&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Graceful error handling&lt;/strong&gt;: Missing configuration results in helpful error messages&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Configuration: Environment variables&lt;/h2&gt;
&lt;p&gt;Create a &lt;code&gt;.env&lt;/code&gt; file for your configuration (and add it to &lt;code&gt;.gitignore&lt;/code&gt;):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;OPSRAMP_BASE_URL=https://develop.opsramp.com
OPSRAMP_CLIENT_ID=your_client_id_here
OPSRAMP_CLIENT_SECRET=your_client_secret_here
OPSRAMP_TENANT_ID=your_tenant_id_here
&lt;/code&gt;&lt;/pre&gt;
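&lt;p&gt;Note that the server itself only reads plain environment variables through &lt;code&gt;os.getenv&lt;/code&gt;, so this file is not loaded automatically. One common way to load it during local development is the &lt;code&gt;python-dotenv&lt;/code&gt; package; the snippet below is an optional helper and not part of the server code in this series:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Optional local-development helper (assumes: pip install python-dotenv).
# The server only calls os.getenv, so something must put the values into the
# environment first -- either exports in your shell or a loader like this one.
import os

from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from a .env file in the current directory

client_id = os.getenv(&quot;OPSRAMP_CLIENT_ID&quot;)
print(&quot;Client ID loaded:&quot;, bool(client_id))
&lt;/code&gt;&lt;/pre&gt;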
&lt;p&gt;&lt;strong&gt;Security best practices:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Never commit credentials&lt;/strong&gt;: Add &lt;code&gt;.env&lt;/code&gt; to &lt;code&gt;.gitignore&lt;/code&gt; immediately&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Use environment-specific configs&lt;/strong&gt;: Different values for dev/staging/prod&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Rotate secrets regularly&lt;/strong&gt;: Change client secrets periodically&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Minimal permissions&lt;/strong&gt;: Use credentials with only necessary API access&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Testing the server locally&lt;/h2&gt;
&lt;p&gt;Before integrating with Claude Desktop, test your server locally:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Set environment variables
export OPSRAMP_BASE_URL=https://develop.opsramp.com
export OPSRAMP_CLIENT_ID=your_client_id
export OPSRAMP_CLIENT_SECRET=your_client_secret
export OPSRAMP_TENANT_ID=your_tenant_id

# Run the server
python opsramp_mcp_server.py
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Watch the log file for connection and authentication messages:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;tail -f opsramp_opsql_mcp.log
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You should see log entries like:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-text&quot;&gt;2025-08-12 10:30:15 - __main__ - INFO - Starting OpsRamp MCP Server
2025-08-12 10:30:15 - __main__ - INFO - Base URL: https://develop.opsramp.com
2025-08-12 10:30:15 - __main__ - INFO - Tenant ID: client_123
2025-08-12 10:30:16 - __main__ - INFO - Successfully obtained access token
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Key architectural decisions explained&lt;/h2&gt;
&lt;p&gt;Let&apos;s revisit some crucial design choices that make this implementation robust:&lt;/p&gt;
&lt;h3&gt;Asynchronous design&lt;/h3&gt;
&lt;p&gt;I chose &lt;code&gt;asyncio&lt;/code&gt; and &lt;code&gt;aiohttp&lt;/code&gt; for several compelling reasons:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Concurrent request handling&lt;/strong&gt;: Multiple AI applications can query simultaneously without blocking&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Efficient I/O&lt;/strong&gt;: Network requests don&apos;t block the event loop&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Scalability&lt;/strong&gt;: The server can handle many concurrent connections with minimal resource overhead&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Natural fit for MCP&lt;/strong&gt;: The MCP protocol itself is designed for async communication&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Token management strategy&lt;/h3&gt;
&lt;p&gt;The OAuth 2.0 implementation includes intelligent token caching (a minimal sketch of the pattern follows the list below):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;5-minute expiration buffer&lt;/strong&gt;: Prevents edge cases where tokens expire mid-request&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Automatic refresh&lt;/strong&gt;: Transparent to calling code&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Thread-safe&lt;/strong&gt;: Single token instance shared across all requests&lt;/li&gt;
&lt;/ul&gt;
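&lt;p&gt;Here is that sketch. The attribute and helper names are illustrative, not the exact ones used by the client earlier in this series:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import time
from typing import Callable, Optional, Tuple

TOKEN_EXPIRY_BUFFER = 300  # refresh five minutes before the token actually expires

class TokenCache:
    &quot;&quot;&quot;Minimal sketch of the caching pattern described above.&quot;&quot;&quot;

    def __init__(self) -&gt; None:
        self._token: Optional[str] = None
        self._expires_at: float = 0.0

    def get(self, fetch_token: Callable[[], Tuple[str, int]]) -&gt; str:
        # Reuse the cached token while it is still comfortably valid
        if self._token and time.time() &lt; self._expires_at - TOKEN_EXPIRY_BUFFER:
            return self._token
        # Otherwise fetch a new one and remember when it expires
        token, expires_in = fetch_token()  # caller supplies the actual OAuth call
        self._token = token
        self._expires_at = time.time() + expires_in
        return self._token
&lt;/code&gt;&lt;/pre&gt;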
&lt;h3&gt;Error handling philosophy&lt;/h3&gt;
&lt;p&gt;Every layer includes comprehensive error handling:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Validation errors&lt;/strong&gt;: Caught early with clear messages&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Network errors&lt;/strong&gt;: Logged with full context&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Authentication errors&lt;/strong&gt;: Specific messages for credential issues&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;API errors&lt;/strong&gt;: Include status codes and response bodies&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Logging strategy&lt;/h3&gt;
&lt;p&gt;Multi-level logging provides visibility without overwhelming the output (a minimal configuration sketch follows the list below):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;DEBUG level&lt;/strong&gt;: Detailed request/response information&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;INFO level&lt;/strong&gt;: Important lifecycle events&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;ERROR level&lt;/strong&gt;: Failures that need attention&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;File + console&lt;/strong&gt;: Useful for both development and production&lt;/li&gt;
&lt;/ul&gt;
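&lt;p&gt;Here is that sketch, using only the standard &lt;code&gt;logging&lt;/code&gt; module; the exact configuration earlier in this series may differ slightly:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import logging

# File + console handlers at different levels, matching the log format shown above.
logger = logging.getLogger(&quot;opsramp_mcp&quot;)
logger.setLevel(logging.DEBUG)

formatter = logging.Formatter(&quot;%(asctime)s - %(name)s - %(levelname)s - %(message)s&quot;)

file_handler = logging.FileHandler(&quot;opsramp_opsql_mcp.log&quot;)
file_handler.setLevel(logging.DEBUG)      # detailed request/response information
file_handler.setFormatter(formatter)

console_handler = logging.StreamHandler()
console_handler.setLevel(logging.INFO)    # important lifecycle events only
console_handler.setFormatter(formatter)

logger.addHandler(file_handler)
logger.addHandler(console_handler)

logger.info(&quot;Logging configured&quot;)
&lt;/code&gt;&lt;/pre&gt;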
&lt;h2&gt;What&apos;s next?&lt;/h2&gt;
&lt;p&gt;Congratulations! You&apos;ve successfully built a production-ready MCP server that exposes OpsRamp&apos;s monitoring capabilities to AI applications. The implementation demonstrates key patterns:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Clean separation of concerns&lt;/strong&gt; with distinct layers for auth, API, and MCP&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Robust error handling&lt;/strong&gt; at every level&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Comprehensive logging&lt;/strong&gt; for troubleshooting&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Async-first design&lt;/strong&gt; for performance&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Type safety&lt;/strong&gt; through Python type hints&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In my next article, &lt;strong&gt;Part 3: Testing and Integration&lt;/strong&gt;, I&apos;ll explore:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Testing the server with MCP Inspector&lt;/li&gt;
&lt;li&gt;Integrating with Claude Desktop&lt;/li&gt;
&lt;li&gt;Real-world usage scenarios and queries&lt;/li&gt;
&lt;li&gt;Debugging tips and common pitfalls&lt;/li&gt;
&lt;li&gt;Performance optimization strategies&lt;/li&gt;
&lt;li&gt;Production deployment considerations&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The journey from monitoring data to AI-accessible intelligence continues. Stay tuned for how you can bring this MCP server to life with real AI interactions!&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;em&gt;Part 3 of this series will demonstrate testing the OpsRamp MCP server with MCP Inspector and integrating it with Claude Desktop for real-world AI-powered operations workflows.&lt;/em&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Simplifying permission management using Kubernetes ClusterRole aggregation in HPE Private Cloud AI]]></title><description><![CDATA[When operating Kubernetes (K8s), Role‑Based Access Control (RBAC) serves as a foundational security mechanism, mapping users and workloads…]]></description><link>https://developer.hpe.com/simplifying-permission-management-using-kubernetes-clusterrole-aggregation-in-hpe-private-cloud-ai/</link><guid isPermaLink="false">https://developer.hpe.com/simplifying-permission-management-using-kubernetes-clusterrole-aggregation-in-hpe-private-cloud-ai/</guid><pubDate>Tue, 17 Feb 2026 07:36:43 GMT</pubDate><content:encoded>&lt;style&gt; li { font-size: 27px; line-height: 33px; max-width: none; } &lt;/style&gt;
&lt;p&gt;When operating Kubernetes (K8s), Role‑Based Access Control (RBAC) serves as a foundational security mechanism, mapping users and workloads to define permissions and enforcing the principle of least privilege. However, as clusters evolve and new access requirements arise, managing K8s Roles and ClusterRoles through manual updates becomes increasingly difficult and error-prone.&lt;/p&gt;
&lt;p&gt;This blog post introduces ClusterRole aggregation as an effective way to simplify that challenge. It explains the key concepts and advantages of aggregated ClusterRoles and shows how they streamline permission management by reducing manual RBAC updates on existing roles. The post also provides practical examples of applying ClusterRole aggregation in the HPE Private Cloud AI (PCAI) environment, demonstrating how this approach makes RBAC administration more scalable, maintainable, and efficient.&lt;/p&gt;
&lt;h3&gt;What is K8s RBAC?&lt;/h3&gt;
&lt;p&gt;K8s RBAC is a native authorization framework integrated directly into the K8s API server. It is composed of four primary object types: &lt;em&gt;Roles&lt;/em&gt;, which define permissions for namespaced resources; &lt;em&gt;ClusterRoles&lt;/em&gt;, which define permissions at the cluster scope; &lt;em&gt;RoleBindings&lt;/em&gt;, which associate a role with a user, group, or ServiceAccount within a specific namespace; and &lt;em&gt;ClusterRoleBindings&lt;/em&gt;, which bind subjects to a ClusterRole across the entire cluster. Together, these constructs provide fine‑grained, declarative control over resource access and enforce least-privilege authorization across the environment.&lt;/p&gt;
&lt;p&gt;Despite its flexibility, K8s RBAC introduces several operational challenges, largely due to its mixed namespace and cluster-scoped permission model. Because RBAC permissions must span both namespaced and global resources, teams often struggle to maintain strict least‑privilege boundaries without resorting to overly broad ClusterRoles, raising the risk of misconfiguration and privilege escalation. The high granularity of K8s API resources and verbs, combined with the separation of Roles/ClusterRoles from their bindings, makes it difficult to understand the effective permissions granted to a subject. As multiple teams modify RBAC objects over time, policies tend to drift, accumulate inconsistencies, and unintentionally propagate privileges through shared bindings. In large-scale environments such as PCAI, K8s RBAC management becomes complex, error‑prone, and operationally fragile.&lt;/p&gt;
&lt;p&gt;To address these challenges as environments grow and the number of roles increases, K8s provides &lt;em&gt;ClusterRole aggregation&lt;/em&gt;, a mechanism designed to simplify and streamline permission management across the cluster.&lt;/p&gt;
&lt;p&gt; &lt;/p&gt;
&lt;h3&gt;What is ClusterRole aggregation?&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles&quot;&gt;ClusterRole aggregation&lt;/a&gt;, introduced in K8s v1.9, is a mechanism that automatically aggregates several ClusterRoles into one combined ClusterRole based on label selectors. A controller, running in the cluster control plane, watches for ClusterRole objects that define an &lt;em&gt;aggregationRule&lt;/em&gt; section. This rule specifies a set of label selectors that the controller uses to match other ClusterRoles whose rules should be merged into the &lt;em&gt;rules&lt;/em&gt; field of the aggregated ClusterRole. The resulting ClusterRole is dynamically constructed by combining the permissions of all matching rules.&lt;/p&gt;
&lt;p&gt;K8s ships with several built-in &lt;em&gt;user-facing&lt;/em&gt; roles, such as &lt;em&gt;view&lt;/em&gt;, &lt;em&gt;edit&lt;/em&gt; and &lt;em&gt;admin&lt;/em&gt;, implemented using this aggregation mechanism. These default roles represent common permission tiers, ranging from read-only access to full namespace-level administrative capabilities. They are automatically assembled by the controller using labels of the form &lt;em&gt;&apos;rbac.authorization.k8s.io/aggregate-to-&apos;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;When additional permissions are required, cluster administrators can define them as standalone ClusterRoles and apply the appropriate label. The controller will automatically incorporate these roles into the corresponding aggregated roles (e.g., &lt;em&gt;edit&lt;/em&gt;, &lt;em&gt;view&lt;/em&gt;), eliminating the need to manually modify existing ClusterRoles whenever new access requirements arise. This approach shifts the operational focus toward managing small, purpose‑specific roles that are automatically composed into higher‑level permission sets, making RBAC policy management more scalable, maintainable, and efficient.&lt;/p&gt;
&lt;p&gt;The following sections will show you some practical examples of permission management using ClusterRole aggregation in the HPE PCAI environment.&lt;/p&gt;
&lt;h3&gt;HPE Private Cloud AI&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.hpe.com/platform/hpe-private-cloud-ai/home/&quot;&gt;HPE Private Cloud AI (PCAI)&lt;/a&gt; is a turnkey, enterprise‑ready platform that integrates HPE and NVIDIA technologies to simplify and accelerate AI workload deployment on a K8s foundation. By leveraging standard K8s constructs, PCAI deploys AI models, inference services, and supporting AI/ML components into dedicated K8s namespaces, ensuring clean resource separation, scalability, and lifecycle management.&lt;/p&gt;
&lt;p&gt;As part of its user‑centric design, PCAI automatically provisions a default Jupyter notebook environment for each authenticated user. This Jupyter notebook runs as a containerized Pod within the user’s personal K8s namespace, providing an isolated workspace for experimentation, data preparation, and model development.&lt;/p&gt;
&lt;h4&gt;Kubeflow Notebooks&lt;/h4&gt;
&lt;p&gt;As part of the pre‑integrated toolset in PCAI, &lt;a href=&quot;https://www.kubeflow.org/&quot;&gt;Kubeflow&lt;/a&gt; is deployed along with its associated custom resource definitions (CRDs) and built‑in ClusterRoles. These ClusterRoles follow K8s standard role patterns, &lt;em&gt;kubeflow-view&lt;/em&gt;, &lt;em&gt;kubeflow-edit&lt;/em&gt;, and &lt;em&gt;kubeflow-admin&lt;/em&gt;, and can be assigned by cluster administrators to users or ServiceAccounts to manage access control within the cluster.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;

# kubectl get clusterroles | grep -e &quot;kubeflow-edit&quot; -e &quot;kubeflow-view&quot; -e &quot;kubeflow-admin&quot;
kubeflow-admin                                                         2025-11-20T03:25:34Z
kubeflow-edit                                                          2025-11-20T03:25:34Z
kubeflow-view                                                          2025-11-20T03:25:34Z

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When a user logs into PCAI, a default Jupyter notebook named &lt;em&gt;&apos;default-notebook&apos;&lt;/em&gt; is already available under &lt;strong&gt;Notebook Servers&lt;/strong&gt;. This Jupyter notebook is pre-created using the tensorflow image through Kubeflow Notebooks and is deployed within the user&apos;s dedicated project namespace, such as &lt;em&gt;&apos;project-user-guoping-jia&apos;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/kubeflow-notebooks.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Within this namespace, a RoleBinding named &lt;em&gt;&apos;default-editor&apos;&lt;/em&gt; associates the ServiceAccount &lt;em&gt;&apos;default-editor&apos;&lt;/em&gt; with the Kubeflow ClusterRole &lt;em&gt;&apos;kubeflow-edit&apos;&lt;/em&gt;. This ClusterRole grants the standard set of permissions required for typical Kubeflow operations.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# kubectl get serviceaccount -n project-user-guoping-jia default-editor
NAME                      SECRETS   AGE

default-editor            0         24d

# kubectl get rolebinding -n project-user-guoping-jia default-editor
NAME                       ROLE                                   AGE
default-editor             ClusterRole/kubeflow-edit              24d
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With this default RoleBinding in place, the user obtains the &lt;em&gt;&apos;kubeflow-edit&apos;&lt;/em&gt; permissions within the namespace and can run &lt;em&gt;kubectl&lt;/em&gt; commands from the Jupyter notebook terminal to access most K8s objects, including Pods, Deployments, Services, and more.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/notebook-server-terminal.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h4&gt;Permission restriction in Jupyter notebooks&lt;/h4&gt;
&lt;p&gt;While the user can access most K8s objects in the project namespace, certain operations remain restricted. For example, they cannot list all Secrets or perform privileged actions such as executing commands inside a running Pod’s container. These elevated permissions are sometimes necessary, for instance, when verifying private container image configurations that rely on specific Secrets or when troubleshooting issues in a failed Pod.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;
(base) guoping-jia@default-notebook-0:~$ kubectl auth can-i list secrets
no
(base) guoping-jia@default-notebook-0:~$ kubectl auth can-i get pods --subresource=exec
no
(base) guoping-jia@default-notebook-0:~$ kubectl get secrets
Error from server (Forbidden): secrets is forbidden: User &quot;system:serviceaccount:project-user-guoping-jia:default-editor&quot; cannot list resource &quot;secrets&quot; in API group &quot;&quot; in the namespace &quot;project-user-guoping-jia&quot;
(base) guoping-jia@default-notebook-0:~$ kubectl exec -it default-notebook-0 -- sh
Error from server (Forbidden): pods &quot;default-notebook-0&quot; is forbidden: User &quot;system:serviceaccount:project-user-guoping-jia:default-editor&quot; cannot create resource &quot;pods/exec&quot; in API group &quot;&quot; in the namespace &quot;project-user-guoping-jia&quot;

&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Permission management for Jupyter notebook terminal access&lt;/h3&gt;
&lt;p&gt;This section explains how to grant additional permissions in a Jupyter notebook environment using ClusterRole aggregation to simplify RBAC permission management.&lt;/p&gt;
&lt;p&gt;Considering the fact that the ClusterRole &lt;em&gt;&apos;kubeflow-edit&apos;&lt;/em&gt; is bound to the ServiceAccount of every authenticated PCAI user, modifying this shared ClusterRole to add additional permissions is not an appropriate approach. Doing so would grant all users elevated privileges in their Jupyter notebooks, which violates the principle of least privilege.&lt;/p&gt;
&lt;h4&gt;Granting additional permissions&lt;/h4&gt;
&lt;p&gt;Rather than modifying the existing Kubeflow ClusterRoles, this section describes how to create a custom aggregated ClusterRole that can be easily extended by combining multiple smaller, purpose-specific ClusterRoles.&lt;/p&gt;
&lt;p&gt;All ClusterRole and RoleBinding YAML manifests referenced here are available in &lt;a href=&quot;https://github.com/GuopingJia/aggregate-clusterroles&quot;&gt;my GitHub repository&lt;/a&gt;. If you plan to configure this for your Jupyter notebook environment, remember to replace the project&apos;s user namespace with your own.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Create an aggregated ClusterRole&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Create the following aggregated ClusterRole named &lt;em&gt;&apos;custom-kubeflow-edit&apos;&lt;/em&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;
# cat custom-kubeflow-edit.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: custom-kubeflow-edit
aggregationRule:
  clusterRoleSelectors:
  - matchLabels:
      rbac.authorization.kubeflow.org/aggregate-to-custom-kubeflow-edit: &quot;true&quot;
rules: []

# kubectl apply -f custom-kubeflow-edit.yaml
clusterrole.rbac.authorization.k8s.io/custom-kubeflow-edit created

# kubectl get clusterroles custom-kubeflow-edit
NAME                   CREATED AT
custom-kubeflow-edit   2026-02-17T10:43:00Z
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Notice that the &lt;em&gt;&apos;rules&apos;&lt;/em&gt; field in this aggregated ClusterRole is intentionally left empty, with only the label &lt;em&gt;&apos;rbac.authorization.kubeflow.org/aggregate-to-custom-kubeflow-edit: &quot;true&quot;&apos;&lt;/em&gt; defined. The control plane automatically fills the &lt;em&gt;&apos;rules&apos;&lt;/em&gt; field by merging permissions from other ClusterRoles that match this label. Any rules you manually add to an aggregated ClusterRole will be overwritten. To modify or extend permissions, you must update or create the individual ClusterRole objects selected by the aggregation label.&lt;/p&gt;
&lt;p&gt;Run the following command to inspect the &lt;em&gt;&apos;rules&apos;&lt;/em&gt; section of the deployed ClusterRole &lt;em&gt;&apos;custom-kubeflow-edit&apos;&lt;/em&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# kubectl get clusterrole custom-kubeflow-edit -o jsonpath=&apos;{.rules}&apos; | jq .
null
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;Create a focused ClusterRole &lt;em&gt;&apos;custom-list-secrets&apos;&lt;/em&gt; to grant permission for listing Secrets.&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;

# cat custom-list-secrets.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: custom-list-secret
  labels:
    rbac.authorization.kubeflow.org/aggregate-to-custom-kubeflow-edit: &quot;true&quot;
rules:
  - apiGroups:
    - &quot;&quot;
    resources:
    - secrets
    verbs:
    - get
    - list

# kubectl apply -f custom-list-secrets.yaml
clusterrole.rbac.authorization.k8s.io/custom-list-secret created
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The ClusterRole includes the label &lt;em&gt;&apos;rbac.authorization.kubeflow.org/aggregate-to-custom-kubeflow-edit: &quot;true&quot;&apos;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Run the following command to verify that permission to list Secrets has been incorporated and merged into the aggregated ClusterRole &lt;em&gt;&apos;custom-kubeflow-edit&apos;&lt;/em&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# kubectl get clusterrole custom-kubeflow-edit -o jsonpath=&apos;{.rules}&apos; | jq .
[
  {
    &quot;apiGroups&quot;: [
      &quot;&quot;
    ],
    &quot;resources&quot;: [
      &quot;secrets&quot;
    ],
    &quot;verbs&quot;: [
      &quot;get&quot;,
      &quot;list&quot;
    ]
  }
]
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;Create another focused ClusterRole &lt;em&gt;&apos;custom-exec-pods&apos;&lt;/em&gt; to grant permission for executing commands inside a running Pod&apos;s container.&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# cat custom-exec-pods.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: custom-exec-pods
  labels:
    rbac.authorization.kubeflow.org/aggregate-to-custom-kubeflow-edit: &quot;true&quot;
rules:
  - apiGroups:
    - &quot;*&quot;
    resources:
    - pods/exec
    verbs:
    - &quot;*&quot;

# kubectl apply -f custom-exec-pods.yaml
clusterrole.rbac.authorization.k8s.io/custom-exec-pods created
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Run the following command to verify that the exec-command permissions have also been incorporated and merged into the aggregated ClusterRole &lt;em&gt;&apos;custom-kubeflow-edit&apos;&lt;/em&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# kubectl get clusterrole custom-kubeflow-edit -o jsonpath=&apos;{.rules}
&apos; | jq .
[
  {
    &quot;apiGroups&quot;: [
      &quot;*&quot;
    ],
    &quot;resources&quot;: [
      &quot;pods/exec&quot;
    ],
    &quot;verbs&quot;: [
      &quot;*&quot;
    ]
  },
  {
    &quot;apiGroups&quot;: [
      &quot;&quot;
    ],
    &quot;resources&quot;: [
      &quot;secrets&quot;
    ],
    &quot;verbs&quot;: [
      &quot;get&quot;,
      &quot;list&quot;
    ]
  }
]
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;Create a &lt;em&gt;custom&lt;/em&gt; RoleBinding named &lt;em&gt;&apos;custom-editor&apos;&lt;/em&gt; to bind the aggregated ClusterRole &lt;em&gt;&apos;custom-kubeflow-edit&apos;&lt;/em&gt; to the ServiceAccount &lt;em&gt;&apos;default-editor&apos;&lt;/em&gt; in the user namespace &lt;em&gt;&apos;project-user-guoping-jia&apos;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;

# cat custom-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: custom-editor
  namespace: project-user-guoping-jia
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: custom-kubeflow-edit
subjects:
- kind: ServiceAccount
  name: default-editor
  namespace: project-user-guoping-jia

# kubectl apply -f custom-rolebinding.yaml
rolebinding.rbac.authorization.k8s.io/custom-editor created

# kubectl get rolebinding -n project-user-guoping-jia custom-editor
NAME            ROLE                               AGE
custom-editor   ClusterRole/custom-kubeflow-edit   84s
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;Verify the added permissions&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;From the Jupyter notebook terminal, you should now be able to list Secrets and execute &lt;em&gt;bash&lt;/em&gt; commands inside a running Pod&apos;s container within the namespace.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;
(base) guoping-jia@default-notebook-0:~$ kubectl auth can-i list secrets
yes
(base) guoping-jia@default-notebook-0:~$ kubectl auth can-i get pods --subresource=exec
yes




(base) guoping-jia@default-notebook-0:~$ kubectl get secrets
NAME                    TYPE                             DATA   AGE
access-token            Opaque                           1      24d
af-cluster-airflowui    Opaque                           6      24d
hpe-imagepull-secrets   kubernetes.io/dockerconfigjson   1      24d
imagepull               kubernetes.io/dockerconfigjson   1      24d
ngc-cli-secret          Opaque                           2      24d
(base) guoping-jia@default-notebook-0:~$ kubectl get pods
NAME                                       READY   STATUS    RESTARTS   AGE
default-notebook-0                         2/2     Running   0          45h
fs-65cbb7b876-x9564                        2/2     Running   0          24d
ml-pipeline-ui-artifact-696cff4647-46slx   2/2     Running   0          24d
(base) guoping-jia@default-notebook-0:~$ kubectl exec -it default-notebook-0 -- bash
root@default-notebook-0:/# pwd
/
root@default-notebook-0:/# exit
exit
(base) guoping-jia@default-notebook-0:~$
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Granting user access to other namespaces&lt;/h4&gt;
&lt;p&gt;As another practical example of permission management, this section covers a more advanced requirement: granting a user access to a namespace other than their default one. In this additional namespace, where AI applications may be deployed, for example through the PCAI &lt;em&gt;Import Framework&lt;/em&gt;, the user may need access for debugging or inspection.&lt;/p&gt;
&lt;p&gt;By default, the ClusterRole and RoleBinding configuration applied in the Jupyter notebook environment restricts each authenticated user to their own namespace. They cannot access other namespaces, such as &lt;em&gt;&apos;custom-ns&apos;&lt;/em&gt;, unless additional permissions are explicitly granted.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;
# kubectl create ns custom-ns
namespace/custom-ns created
# kubectl get ns custom-ns
custom-ns                               Active   3s
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The following error appears when attempting to access the newly created namespace from the Jupyter notebook terminal.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;
(base) guoping-jia@default-notebook-0:~$ kubectl get ns custom-ns
Error from server (Forbidden): namespaces &quot;custom-ns&quot; is forbidden: User &quot;system:serviceaccount:project-user-guoping-jia:default-editor&quot; cannot get resource &quot;namespaces&quot; in API group &quot;&quot; in the namespace &quot;custom-ns&quot;
(base) guoping-jia@default-notebook-0:~$ kubectl get pods -n custom-ns
Error from server (Forbidden): pods is forbidden: User &quot;system:serviceaccount:project-user-guoping-jia:default-editor&quot; cannot list resource &quot;pods&quot; in API group &quot;&quot; in the namespace &quot;custom-ns&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;User access to the other namespace can be granted by creating a RoleBinding named &lt;em&gt;&apos;default-editor&apos;&lt;/em&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;
# cat ns-default-editor.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: default-editor
  namespace: custom-ns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubeflow-edit
subjects:
- kind: ServiceAccount
  name: default-editor
  namespace: project-user-guoping-jia


# kubectl apply -f ns-default-editor.yaml
rolebinding.rbac.authorization.k8s.io/default-editor created


# kubectl get rolebinding -n custom-ns
NAME             ROLE                        AGE
default-editor   ClusterRole/kubeflow-edit   10s
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The RoleBinding &lt;em&gt;&apos;default-editor&apos;&lt;/em&gt; binds the Kubeflow ClusterRole &lt;em&gt;kubeflow-edit&lt;/em&gt; to the ServiceAccount &lt;em&gt;default-editor&lt;/em&gt; in the namespace &lt;em&gt;custom-ns&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Run the following commands to verify that the user can now access the namespace &lt;em&gt;&apos;custom-ns&apos;&lt;/em&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;

(base) guoping-jia@default-notebook-0:~$ kubectl get ns custom-ns
NAME        STATUS   AGE
custom-ns   Active   13m
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;However, with the default RoleBinding &lt;em&gt;&apos;default-editor&apos;&lt;/em&gt; in place, the user still cannot list Secrets in the namespace &lt;em&gt;&apos;custom-ns&apos;&lt;/em&gt;, nor can they execute the commands inside any running Pod&apos;s container within that namespace.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;(base) guoping-jia@default-notebook-0:~$ kubectl auth can-i list secrets -n custom-ns
no
(base) guoping-jia@default-notebook-0:~$ kubectl get secrets -n custom-ns
Error from server (Forbidden): secrets is forbidden: User &quot;system:serviceaccount:project-user-guoping-jia:default-editor&quot; cannot list resource &quot;secrets&quot; in API group &quot;&quot; in the namespace &quot;custom-ns&quot;
(base) guoping-jia@default-notebook-0:~$ kubectl auth can-i get pods --subresource=exec -n custom-ns
no
(base) guoping-jia@default-notebook-0:~$ kubectl exec -it mybox-84bdf9578-b6tkh -n custom-ns -- bash
Error from server (Forbidden): pods &quot;mybox-84bdf9578-b6tkh&quot; is forbidden: User &quot;system:serviceaccount:project-user-guoping-jia:default-editor&quot; cannot create resource &quot;pods/exec&quot; in API group &quot;&quot; in the namespace &quot;custom-ns&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;These additional permissions can be granted in the namespace &lt;em&gt;&apos;custom-ns&apos;&lt;/em&gt; by creating another RoleBinding named &lt;em&gt;&apos;custom-editor&apos;&lt;/em&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;

# cat ns-custom-editor.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: custom-editor
  namespace: custom-ns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: custom-kubeflow-edit
subjects:
- kind: ServiceAccount
  name: default-editor
  namespace: project-user-guoping-jia


# kubectl apply -f ns-custom-editor.yaml
rolebinding.rbac.authorization.k8s.io/custom-editor created


# kubectl get rolebinding -n custom-ns
NAME             ROLE                               AGE
custom-editor    ClusterRole/custom-kubeflow-edit   8s
default-editor   ClusterRole/kubeflow-edit          12m
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The RoleBinding &lt;em&gt;&apos;custom-editor&apos;&lt;/em&gt; binds the aggregated ClusterRole &lt;em&gt;&apos;custom-kubeflow-edit&apos;&lt;/em&gt;, created in the previous section, to the ServiceAccount &lt;em&gt;&apos;default-editor&apos;&lt;/em&gt; in the namespace &lt;em&gt;&apos;custom-ns&apos;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;With the RoleBinding &lt;em&gt;&apos;custom-editor&apos;&lt;/em&gt; in place, and with &lt;em&gt;&apos;custom-kubeflow-edit&apos;&lt;/em&gt; ClusterRole already incorporating the two additional permissions, you can now verify that both listing Secrets and executing a &lt;em&gt;bash&lt;/em&gt; command inside a running Pod&apos;s container work as expected in the namespace &lt;em&gt;&apos;custom-ns&apos;&lt;/em&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;(base) guoping-jia@default-notebook-0:~$ kubectl auth can-i list secrets -n custom-ns
yes
(base) guoping-jia@default-notebook-0:~$ kubectl auth can-i get pods --subresource=exec -n custom-ns
yes



(base) guoping-jia@default-notebook-0:~$ kubectl create deploy nginx -n custom-ns --image=nginx 
deployment.apps/nginx created
(base) guoping-jia@default-notebook-0:~$ kubectl get pods -n custom-ns
NAME                     READY   STATUS    RESTARTS   AGE
nginx-5869d7778c-b4tcw   1/1     Running   0          56s
(base) guoping-jia@default-notebook-0:~$ kubectl exec -it nginx-5869d7778c-b4tcw -n custom-ns -- bash
root@nginx-5869d7778c-b4tcw:/# ls -al  
total 4
drwxr-xr-x.    1 root root   39 Feb  5 14:49 .
drwxr-xr-x.    1 root root   39 Feb  5 14:49 ..
lrwxrwxrwx.    1 root root    7 Jan  2 12:35 bin -&gt; usr/bin
drwxr-xr-x.    2 root root    6 Jan  2 12:35 boot
drwxr-xr-x     5 root root  360 Feb  5 14:49 dev

drwxr-xr-x.    1 root root   32 Feb  5 14:49 etc
drwxr-xr-x.    2 root root    6 Jan  2 12:35 home
lrwxrwxrwx.    1 root root    7 Jan  2 12:35 lib -&gt; usr/lib
lrwxrwxrwx.    1 root root    9 Jan  2 12:35 lib64 -&gt; usr/lib64

drwxr-xr-x.    2 root root    6 Feb  2 00:00 mnt
drwxr-xr-x.    2 root root    6 Feb  2 00:00 opt
dr-xr-xr-x. 3149 root root    0 Feb  5 14:49 proc
drwx------.    2 root root   37 Feb  2 00:00 root

lrwxrwxrwx.    1 root root    8 Jan  2 12:35 sbin -&gt; usr/sbin
drwxr-xr-x.    2 root root    6 Feb  2 00:00 srv
dr-xr-xr-x.   13 root root    0 Nov 14 17:04 sys
drwxrwxrwt.    2 root root    6 Feb  2 00:00 tmp
drwxr-xr-x.    1 root root   66 Feb  2 00:00 usr
drwxr-xr-x.    1 root root   19 Feb  2 00:00 var
root@nginx-5869d7778c-b4tcw:/# exit
exit
(base) guoping-jia@default-notebook-0:~$
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If additional permissions are needed in the namespace, you can follow the same approach by defining a purpose-specific ClusterRole with the appropriate aggregation label. Once the ClusterRole is applied, its permissions will be automatically merged into the aggregated ClusterRole.&lt;/p&gt;
&lt;h3&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;This blog post explored and demonstrated how permission management for accessing K8s resources in the HPE Private Cloud AI environment can be simplified and streamlined through ClusterRole aggregation. When additional permissions are required, they can be defined as independent, purpose‑built ClusterRoles, which are then automatically incorporated into an aggregated ClusterRole by applying the appropriate labels. This approach eliminates the need to modify existing ClusterRoles for each new permission request and significantly reduces RBAC maintenance overhead. By relying on smaller, focused roles that aggregate cleanly, permission management becomes more flexible, scalable, and easier to maintain.&lt;/p&gt;
&lt;p&gt;Please keep coming back to the &lt;a href=&quot;https://developer.hpe.com/blog/&quot;&gt;HPE Developer Community blog&lt;/a&gt; to learn more about HPE Private Cloud AI and get more ideas on how you can use it in your everyday operations.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[AI Tips: Why is storage important for KV cache?]]></title><description><![CDATA[Although KV (Key-value) cache is usually described as an LLM inference optimization, it is actually best understood as a specialized, high…]]></description><link>https://developer.hpe.com/why-storage-is-important-for-kv-cache/</link><guid isPermaLink="false">https://developer.hpe.com/why-storage-is-important-for-kv-cache/</guid><pubDate>Mon, 16 Feb 2026 08:29:33 GMT</pubDate><content:encoded>&lt;style&gt;
table {
    display: table;
    width: 100%;
    max-width: 100%;
    margin: 20px auto;
    border-collapse: collapse;
    -webkit-box-shadow: none;
    -moz-box-shadow: none;
    box-shadow: none;
    border: 1px solid grey;
}
th, td {
    -webkit-box-shadow: none;
    -moz-box-shadow: none;
    box-shadow: none;
    border: 1px solid grey;
    text-align: left !important;
    font-weight: normal !important;
    padding: 10px !important;
}
th {
    text-align: center !important;
    font-weight: bold !important;
    background-color: #f5f5f5;
}
&lt;/style&gt;
&lt;p&gt;Although KV (&lt;em&gt;Key-value)&lt;/em&gt; cache is usually described as an LLM inference optimization, it is actually best understood as a specialized, high‑performance storage layer that holds intermediate attention states. This article explores this aspect of KV cache and its relationship with storage.&lt;/p&gt;
&lt;br/&gt;
&lt;hr&gt;
&lt;br/&gt;
&lt;h2&gt;&lt;span style=&quot;color:blue; font-family:Arial; font-size:1em&quot;&gt; What KV cache is&lt;/span&gt;&lt;/h2&gt;
&lt;p&gt;You can think of KV cache as a volatile, GPU-resident key-value store that holds per-token features so they don’t need to be recomputed. In essence, it acts like a memory tier within a multi-layer storage hierarchy.&lt;/p&gt;
&lt;div align=&quot;center&quot;&gt;
Here’s how to map KV cache to conventional storage concepts:
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Storage concept&lt;/th&gt;
&lt;th&gt;LLM equivalent&lt;/th&gt;
&lt;th&gt;Explanation&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Registers&lt;/td&gt;
&lt;td&gt;Tensor cores, attention units&lt;/td&gt;
&lt;td&gt;Compute engines&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;L1/L2 cache&lt;/td&gt;
&lt;td&gt;KV cache slices currently in use&lt;/td&gt;
&lt;td&gt;Immediate access attention data&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;RAM&lt;/td&gt;
&lt;td&gt;Overall KV cache across all layers&lt;/td&gt;
&lt;td&gt;Working set for model inference&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SSD / object storage&lt;/td&gt;
&lt;td&gt;Prompt, documents&lt;/td&gt;
&lt;td&gt;Fed in before KV cache populates&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cold storage&lt;/td&gt;
&lt;td&gt;Archived corpora, vector DB, documents&lt;/td&gt;
&lt;td&gt;Retrieved only as needed&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;/div&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;span style=&quot;color:blue; font-family:Arial; font-size:1em&quot;&gt; &lt;em&gt;You can think of the KV cache as a model&apos;s “High-Bandwidth memory (HBM) scratchpad.”&lt;/em&gt;&lt;/span&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;br/&gt;
&lt;hr&gt;
&lt;br/&gt;
&lt;h2&gt;&lt;span style=&quot;color:blue; font-family:Arial; font-size:1em&quot;&gt; Why KV cache is important for AI and generative AI&lt;/span&gt;&lt;/h2&gt;
&lt;h3&gt;&lt;span style=&quot;color:blue; font-family:Arial; font-size:1em&quot;&gt;Why KV cache is essential to Retrieval Augmented Generation (RAG) and inference in general&lt;/span&gt;&lt;/h3&gt;
&lt;p&gt;KV cache is essential for RAG because it lets the model handle long-retrieved contexts without having to recompute attention over all those tokens at every decoding step. When a large block of retrieved text is inserted into a prompt, the model encodes it once, stores the keys and values, and then reuses them during generation. This means new tokens only attend to the cached prefix instead of reprocessing thousands of context tokens repeatedly. As the retrieved context grows, KV cache keeps latency stable, prevents compute from exploding with sequence length, and makes long-context RAG both feasible and efficient.&lt;/p&gt;
&lt;h3&gt;&lt;span style=&quot;color:blue; font-family:Arial; font-size:1em&quot;&gt; Why KV cache is important for agentic AI&lt;/span&gt;&lt;/h3&gt;
&lt;h4&gt;What agentic AI requires&lt;/h4&gt;
&lt;p&gt;Several characteristics of agentic AI workloads make the KV cache especially important:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Long contexts, long chains of thought, and multi-turn loops: Agents concatenate system prompts + chat history + retrieved chunks + tool outputs. Every added token expands the KV working set. The model then rereads prior tokens’ K/V at each step.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Sensitivity to bandwidth: The attention kernel’s speed is often limited by HBM bandwidth, not FLOPs. If KV reads stall, per token latency increases, tail latencies widen, and throughput collapses under concurrency.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Persistence across actions and memory of past steps: Agent steps (plan, call tools, reflect) frequently reuse the same conversation context. KV reuse avoids recomputing attention for the past, saving both time and power.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This means agents spend minutes inside a single inference session. That makes KV cache the central runtime storage system for agent state.
If the KV cache is slow, insufficient, or mismanaged:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The agent must recompute attention, resulting in a huge latency spike&lt;/li&gt;
&lt;li&gt;The model must truncate context, resulting in memory loss&lt;/li&gt;
&lt;li&gt;Multi-step reasoning grinds to a halt&lt;/li&gt;
&lt;/ul&gt;
&lt;br/&gt;
&lt;hr&gt;
&lt;br/&gt;
&lt;h3&gt;&lt;span style=&quot;color:blue; font-family:Arial; font-size:1em&quot;&gt;Why KV cache is a storage problem&lt;/span&gt;&lt;/h3&gt;
&lt;p&gt;When examined closely, the KV cache becomes a storage issue.&lt;/p&gt;
&lt;p&gt;When a large language model generates text one token at a time, it keeps a memory of everything it has already seen. This memory lives in the KV cache: a set of tensors that stores the keys and values for every layer and every past token.&lt;/p&gt;
&lt;p&gt;At first, this cache is small. A few tokens, a few layers, a bit of GPU memory. But as the conversation grows longer—say from 1,000 tokens to 32,000—this “memory” expands linearly with the size of the context. The model must keep every past key/value vector around so that new tokens can attend back to them. And suddenly, the KV cache, not the model weights, becomes the largest memory consumer.&lt;/p&gt;
&lt;p&gt;Because GPUs have limited High-Bandwidth Memory (HBM), when the KV cache grows too large, systems must weigh several trade-offs:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Moving it (offloading) to CPU RAM or even to fast storage helps with capacity but reduces speed, because every new token must fetch pieces of that cache across slower interconnects.&lt;/li&gt;
&lt;li&gt;Compressing it saves space but may reduce accuracy.&lt;/li&gt;
&lt;li&gt;Storing everything in GPU memory maintains performance but limits the number of users served simultaneously.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It is easy to see that KV cache becomes not just a compute issue but a storage orchestration challenge—balancing capacity, bandwidth, latency, and cost. Long-context models, multi-user serving, and high-throughput inference all rely on how cleverly we can store, move, compress, or reuse this cache. Ultimately, generating text efficiently depends as much on memory engineering as on math.&lt;/p&gt;
&lt;p&gt;Because KV cache is a storage orchestration challenge, the ability to move the cache back and forth across a fast storage system is a critical management strategy. The following points detail the reasons behind this.&lt;/p&gt;
&lt;h4&gt;KV cache requires extremely high bandwidth (HBM class)&lt;/h4&gt;
&lt;p&gt;KV cache access patterns behave in such a way that:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Every new token must retrieve all previous keys and values&lt;/li&gt;
&lt;li&gt;Accesses happen per layer × per head at every decoding step&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This means the KV cache cannot be served effectively from CPU memory, SSD, or network storage alone. During decoding, it must largely remain in GPU HBM, which is effectively the highest-performance “storage tier” available.&lt;/p&gt;
&lt;h4&gt;KV cache often can&apos;t fit in GPU memory&lt;/h4&gt;
&lt;p&gt;For effectiveness, KV cache must remain in GPU memory, but unfortunately, it often can’t fit there. The size of the KV cache increases linearly with both the context length and the number of layers, creating capacity and bandwidth challenges.&lt;/p&gt;
&lt;p&gt;For example:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A 70B model can require tens of gigabytes of KV cache just for 32k tokens.&lt;/li&gt;
&lt;li&gt;Larger contexts (100k–1M tokens) would require storage-tier thinking, such as sharding, compression, paging, etc.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;When the KV cache exceeds the HBM memory, it must be stored elsewhere (CPU memory first, then fast storage).&lt;/p&gt;
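&lt;p&gt;To make the capacity pressure concrete, here is a back-of-the-envelope sizing sketch. The model dimensions are illustrative assumptions for a 70B-class decoder (80 layers, 128-dimensional heads, fp16 values), not measurements of any specific model or product:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# KV cache size grows linearly with context length and with the number of layers:
# bytes = 2 (K and V) * layers * kv_heads * head_dim * seq_len * bytes_per_value * batch
def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bytes_per_value=2, batch=1):
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_value * batch

# Hypothetical 70B-class decoder: 80 layers, 64 heads of dimension 128,
# fp16 values, a 32k-token context, and a single sequence.
full_mha = kv_cache_bytes(layers=80, kv_heads=64, head_dim=128, seq_len=32_768)
print(f&quot;Full multi-head attention: {full_mha / 1e9:.1f} GB&quot;)  # ~85.9 GB

# The same model with grouped-query attention (8 KV heads) shrinks the cache ~8x.
gqa = kv_cache_bytes(layers=80, kv_heads=8, head_dim=128, seq_len=32_768)
print(f&quot;Grouped-query attention:   {gqa / 1e9:.1f} GB&quot;)       # ~10.7 GB
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Either way, a single long-context conversation can rival the HBM left over after loading the model weights, which is why spilling to CPU memory and then to fast storage becomes unavoidable.&lt;/p&gt;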
&lt;h4&gt;KV cache often doesn’t reside in a single GPU&lt;/h4&gt;
&lt;p&gt;But there is an additional complexity. The KV cache often doesn’t reside in a single GPU. When models run across multiple GPUs, the KV cache is sharded across them. KV cache “follows” the model parallelism:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Tensor parallelism: shard across layers/heads&lt;/li&gt;
&lt;li&gt;Sequence parallelism: shard by tokens&lt;/li&gt;
&lt;li&gt;Context parallelism: shard by ranges of inputs&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This makes KV management similar to distributed file systems:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Placement&lt;/li&gt;
&lt;li&gt;Replication&lt;/li&gt;
&lt;li&gt;Access path optimization&lt;/li&gt;
&lt;li&gt;Paging / eviction&lt;/li&gt;
&lt;/ul&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;span style=&quot;color:blue; font-family:Arial; font-size:1em&quot;&gt; &lt;em&gt;In conclusion, KV cache pagination requires memory-tiered storage&lt;/em&gt;&lt;/span&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Indeed, recent trends show storage systems being optimized for KV-cache tiering. Storage systems are, therefore, part of KV-cache pagination management (a toy tier-selection sketch follows the list):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;GPU HBM → hot KV&lt;/li&gt;
&lt;li&gt;CPU RAM → warm KV&lt;/li&gt;
&lt;li&gt;NVMe / SSD → cold KV&lt;/li&gt;
&lt;/ul&gt;
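&lt;p&gt;To see why these tiers matter, consider how quickly a fixed HBM budget is consumed. Continuing the hypothetical configuration from the earlier sketch, and ignoring the model weights that also occupy HBM, only a modest number of tokens fits before the cache must spill to the warm and cold tiers:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# How many tokens of KV cache fit in a given HBM budget? (same hypothetical model as above)
PER_TOKEN_BYTES=$(( 2 * 80 * 64 * 128 * 2 ))   # roughly 2.5 MiB of KV per token
HBM_BUDGET_GIB=40                              # illustrative budget left over after the weights
echo &quot;$(( HBM_BUDGET_GIB * 1024 * 1024 * 1024 / PER_TOKEN_BYTES )) tokens fit in $HBM_BUDGET_GIB GiB&quot;
# Prints 16384 -- everything beyond that must be paged out to CPU RAM or NVMe
&lt;/code&gt;&lt;/pre&gt;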
&lt;hr&gt;
&lt;br/&gt;
&lt;h3&gt;&lt;span style=&quot;color:blue; font-family:Arial; font-size:1em&quot;&gt;Finally, why RDMA/GPUDirect (for objects or files) is important for KV cache&lt;/span&gt;&lt;/h3&gt;
&lt;p&gt;This diagram shows the end-to-end RAG + inference flow and where RDMA / GPUDirect optimizes movement into the GPU—before the KV cache becomes active.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2026-02-16-at-10.41.09.png&quot; alt=&quot;&quot; title=&quot;Figure 1 - RDMA/GDS role in a RAG pipeline&quot;&gt;&lt;/p&gt;
&lt;p&gt;RDMA / GPUDirect help:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Storage → GPU ingestion:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;RDMA (object/file) + GPUDirect allow NIC or storage to DMA directly into GPU memory, bypassing CPU copies.&lt;/li&gt;
&lt;li&gt;This accelerates document load, embedding pipelines, vector index updates, and LLM input streaming.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Before KV activation:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;KV cache is populated after tokenization and initial forward passes occur on GPU. RDMA/GPUDirect primarily reduce the time to first token by accelerating data arrival to the GPU.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;During RAG loops:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Frequent retrieval (top-k) and reranking benefit from a GPU-resident vector DB and RDMA reads from warm storage. The faster the context assembly, the sooner the LLM can append to the KV cache.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In other words, RDMA/GPUDirect accelerate the front half (docs/embeddings/context into GPU memory). Once generation starts, the KV cache dominates the hot path, acting as the L1/L2-like working store of the decoder.&lt;/p&gt;
&lt;br/&gt;
&lt;hr&gt;
&lt;br/&gt;
&lt;h2&gt;&lt;span style=&quot;color:blue; font-family:Arial; font-size:1em&quot;&gt;Conclusion&lt;/span&gt;&lt;/h2&gt;
&lt;p&gt;KV cache management is a storage orchestration challenge that affects capacity, bandwidth, latency, and cost, and efficient text generation depends as much on memory engineering as on algorithms. Fast storage is essential for maximizing KV cache performance. By placing the right portions of the cache on fast, low‑latency, RDMA- and GDS-enabled storage, such as HPE Alletra MP X10000, and offloading overflow to cost‑efficient tiers, organizations can balance speed, scale, and efficiency.&lt;/p&gt;
&lt;p&gt;Stay tuned to the &lt;a href=&quot;https://developer.hpe.com/blog/&quot;&gt;HPE Developer Community blogs&lt;/a&gt;and AI Tips for more guides and best practices on AI and Storage for AI.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Monitoring GreenLake GPUs, GitOps for MKS, iLOrest bulk onboarding feature & Chapel user insights]]></title><link>https://developer.hpe.com/2026-february-09/</link><guid isPermaLink="false">https://developer.hpe.com/2026-february-09/</guid><pubDate>Mon, 09 Feb 2026 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[7 Questions for Oliver Alvarado Rodriguez: Exploiting Chapel's Distributed Arrays for Graph Analysis through Arachne]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/7-questions-for-oliver-alvarado-rodriguez-exploiting-chapels-distributed-arrays-for-graph-analysis-through-arachne/</link><guid isPermaLink="false">https://developer.hpe.com/7-questions-for-oliver-alvarado-rodriguez-exploiting-chapels-distributed-arrays-for-graph-analysis-through-arachne/</guid><pubDate>Thu, 22 Jan 2026 00:38:36 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Automating application deployments to MKS clusters using GitOps with Argo CD]]></title><description><![CDATA[While Kubernetes (K8s) is widely adopted, enterprises often face challenges when deploying and managing application workloads across K8s…]]></description><link>https://developer.hpe.com/automating-application-delivery-to-mks-clusters-using-gitops-with-argocd/</link><guid isPermaLink="false">https://developer.hpe.com/automating-application-delivery-to-mks-clusters-using-gitops-with-argocd/</guid><pubDate>Tue, 13 Jan 2026 13:52:48 GMT</pubDate><content:encoded>&lt;style&gt; li { font-size: 27px; line-height: 33px; max-width: none; } &lt;/style&gt;
&lt;p&gt;While Kubernetes (K8s) is widely adopted, enterprises often face challenges when deploying and managing application workloads across K8s clusters, including configuration drift, inconsistent deployment practices, and limited operational visibility. As environments grow, manual processes introduce variability and make compliance and rollback procedures increasingly difficult. A GitOps model addresses these challenges by using Git as the &lt;em&gt;single source of truth&lt;/em&gt; for all declarative configurations and maintaining alignment through continuous reconciliation. This approach enhances reliability, strengthens auditability, and enables predictable, repeatable deployments across complex K8s environments.&lt;/p&gt;
&lt;p&gt;This blog post describes how to automate application deployments to Morpheus Kubernetes Service (MKS) clusters using a GitOps workflow powered by &lt;em&gt;Argo CD&lt;/em&gt;. While the implementation example focuses on HPE Private Cloud Enterprise, the same approach applies seamlessly to HPE Morpheus Enterprise as long as a MKS cluster is provisioned. By taking advantage of &lt;em&gt;Argo CD&lt;/em&gt;’s real‑time monitoring and alerting features, the solution provides clear visibility into application deployment health and enforces strict alignment between the live cluster state and the declarative configuration stored in Git. This ensures reliable, consistent, and version‑controlled application delivery across both environments.&lt;/p&gt;
&lt;h3&gt;What is GitOps?&lt;/h3&gt;
&lt;p&gt;GitOps is an operational framework that extends core DevOps principles, such as version control, collaboration, compliance, and CI/CD (continuous integration and continuous delivery), and applies them to infrastructure automation. In a GitOps workflow, Git acts as the single source of truth for all declarative infrastructure and application configurations. The entire desired state of a Kubernetes (K8s) cluster, including resources such as &lt;em&gt;Deployments&lt;/em&gt;, &lt;em&gt;Services&lt;/em&gt;, &lt;em&gt;ConfigMaps&lt;/em&gt;, and more, is stored in Git. A GitOps controller continuously compares the cluster&apos;s actual state with the desired state defined in the repository and reconciles any differences to ensure consistency. Several well-known GitOps tools exist, including &lt;em&gt;Argo CD&lt;/em&gt;, &lt;em&gt;Flux CD&lt;/em&gt;, &lt;em&gt;Jenkins X&lt;/em&gt;, and &lt;em&gt;Spinnaker&lt;/em&gt;. For the purpose of this blog, &lt;em&gt;Argo CD&lt;/em&gt; will be used as the primary tool for demonstrating GitOps automation.&lt;/p&gt;
&lt;h3&gt;What is Argo CD?&lt;/h3&gt;
&lt;p&gt;&lt;em&gt;&lt;a href=&quot;https://argoproj.github.io/cd/&quot;&gt;Argo CD&lt;/a&gt;&lt;/em&gt; is a declarative, GitOps-driven continuous delivery platform for K8s. It automates consistent and repeatable application deployments by using Git as the single source of truth for all configuration artifacts. &lt;em&gt;Argo CD&lt;/em&gt; continuously monitors the live state of applications running in a K8s cluster and compares it against the desired state defined in a Git repository. When developers push changes to Git, &lt;em&gt;Argo CD&lt;/em&gt; detects the updates and synchronizes them to the cluster.&lt;/p&gt;
&lt;p&gt;Synchronization can be configured to run automatically, commonly used for development and test environments, or manually, which is typically preferred for production workflows. By defining the target environment state in Git, &lt;em&gt;Argo CD&lt;/em&gt; ensures that the applications deployed in the K8s cluster remain aligned with the declared configuration.&lt;/p&gt;
&lt;p&gt;Beyond synchronization, &lt;em&gt;Argo CD&lt;/em&gt; provides real-time insights into application health, deployment status, and configuration drift through its monitoring and alerting capabilities. It integrates seamlessly with existing CI/CD pipelines and enforces GitOps best practices throughout the application deployment lifecycle.&lt;/p&gt;
&lt;p&gt;The following sections provide a technical walkthrough for automating application deployments to MKS clusters using a GitOps workflow built around &lt;em&gt;Argo CD&lt;/em&gt;. This blog specifically demonstrates the end‑to‑end process in a &lt;a href=&quot;https://www.hpe.com/us/en/hpe-private-cloud-enterprise.html&quot;&gt;HPE Private Cloud Enterprise&lt;/a&gt; environment, including installing &lt;em&gt;Argo CD&lt;/em&gt; on a MKS cluster, connecting application source repositories, configuring deployment parameters, and monitoring the application state within the cluster.&lt;/p&gt;
&lt;h3&gt;Prerequisites&lt;/h3&gt;
&lt;p&gt;Ensure that the following prerequisites are fulfilled:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A MKS cluster has been provisioned from an HPE Private Cloud Enterprise workspace. You can refer to the blog post &lt;a href=&quot;https://developer.hpe.com/blog/provisioning-mks-clusters-in-hpe-greenlake-for-private-cloud-enterprise/&quot;&gt;Provisioning MKS clusters in HPE Private Cloud Enterprise&lt;/a&gt; to provision an MKS cluster.&lt;/li&gt;
&lt;li&gt;The &lt;em&gt;kubectl&lt;/em&gt; CLI tool has been properly installed, along with the kubeconfig file used to access the MKS cluster.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Install Argo CD&lt;/h3&gt;
&lt;p&gt;You can install &lt;em&gt;Argo CD&lt;/em&gt; using either the provided &lt;a href=&quot;https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml&quot;&gt;installation YAML file&lt;/a&gt; with &lt;em&gt;kubectl&lt;/em&gt; or the &lt;a href=&quot;https://argoproj.github.io/argo-helm&quot;&gt;Helm charts&lt;/a&gt; with &lt;em&gt;helm&lt;/em&gt;. Refer to the &lt;a href=&quot;https://argo-cd.readthedocs.io/en/stable/getting_started/&quot;&gt;Argo CD installation documentation&lt;/a&gt; for detailed instructions.&lt;/p&gt;
&lt;p&gt;For MKS clusters in HPE Private Cloud Enterprise, you can automate the installation by creating a &lt;em&gt;Shell&lt;/em&gt; script and adding it as a Morpheus task. Running this task from the MKS master node will deploy &lt;em&gt;Argo CD&lt;/em&gt; to the cluster.&lt;/p&gt;
&lt;p&gt;Below is the &lt;em&gt;Shell&lt;/em&gt; script used to install &lt;em&gt;Argo CD&lt;/em&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;#!/bin/bash

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3

chmod 700 get_helm.sh

./get_helm.sh

helm repo add argo https://argoproj.github.io/argo-helm

if [ -f &quot;/etc/kubernetes/admin.conf&quot; ];
then
  helm install argocd argo/argo-cd --version 9.1.3 -n argocd --create-namespace --kubeconfig /etc/kubernetes/admin.conf
else
  echo &quot;This is a worker node. Skipping installation.&quot;
fi
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Follow the steps below to install &lt;em&gt;Argo CD&lt;/em&gt; on the MKS cluster.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;In the Morpheus Dashboard, navigate to &lt;strong&gt;Library -&gt; Automation -&gt;&lt;/strong&gt; &lt;em&gt;Tasks&lt;/em&gt; tab. Click &lt;em&gt;&lt;strong&gt;+ Add&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/add-task.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Enter NAME as &lt;em&gt;Install ArgoCD&lt;/em&gt; and select TYPE as &lt;em&gt;Shell Script&lt;/em&gt;. Enable &lt;em&gt;SUDO&lt;/em&gt;, select SOURCE as &lt;em&gt;Local&lt;/em&gt;, paste the &lt;em&gt;Shell&lt;/em&gt; script into CONTENT. Click &lt;em&gt;&lt;strong&gt;Save changes&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/install-argocd-task.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Infrastructure -&gt; Clusters&lt;/strong&gt;. Click the target MKS cluster NAME, e.g., &lt;em&gt;mks-test&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/mks-test-cluster.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;Open the &lt;em&gt;Nodes&lt;/em&gt; tab. Click the MKS master node, e.g., &lt;em&gt;mks-test-master&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/mks-test-nodes.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;5&quot;&gt;
&lt;li&gt;Click &lt;em&gt;&lt;strong&gt;Actions&lt;/strong&gt;&lt;/em&gt;. Select &lt;em&gt;Run Task&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/mks-test-master-node.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;6&quot;&gt;
&lt;li&gt;Select &lt;em&gt;Install ArgoCD&lt;/em&gt; as TASK.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/mks-test-task-execute.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;7&quot;&gt;
&lt;li&gt;Open the &lt;em&gt;History&lt;/em&gt; tab. Click the info (&lt;strong&gt;&quot;i&quot;&lt;/strong&gt;) icon to view the logs for &lt;em&gt;Run Task: Install ArgoCD&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/mks-test-task-history.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can verify that &lt;em&gt;Argo CD&lt;/em&gt; has been successfully deployed to the namespace &lt;em&gt;&apos;argocd&apos;&lt;/em&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get ns argocd
NAME     STATUS   AGE
argocd   Active   52s

$ kubectl get all -n argocd
NAME                                                    READY   STATUS      RESTARTS   AGE
pod/argocd-application-controller-0                     1/1     Running     0          48s
pod/argocd-applicationset-controller-579f778f57-m4t4d   1/1     Running     0          48s
pod/argocd-dex-server-7d99b44d96-bjvxn                  1/1     Running     0          48s
pod/argocd-notifications-controller-57dd69d5d9-5dtw2    1/1     Running     0          48s
pod/argocd-redis-9ff9dddb8-x58gp                        1/1     Running     0          48s
pod/argocd-redis-secret-init-9hqtt                      0/1     Completed   0          52s
pod/argocd-repo-server-56dbf9bd9-xbx8z                  1/1     Running     0          48s
pod/argocd-server-5d56b98664-q2l5f                      1/1     Running     0          48s

NAME                                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
service/argocd-applicationset-controller   ClusterIP   172.30.219.75   &amp;#x3C;none&gt;        7000/TCP            50s
service/argocd-dex-server                  ClusterIP   172.30.32.229   &amp;#x3C;none&gt;        5556/TCP,5557/TCP   50s
service/argocd-redis                       ClusterIP   172.30.65.186   &amp;#x3C;none&gt;        6379/TCP            50s
service/argocd-repo-server                 ClusterIP   172.30.1.116    &amp;#x3C;none&gt;        8081/TCP            50s
service/argocd-server                      ClusterIP   172.30.64.166   &amp;#x3C;none&gt;        80/TCP,443/TCP      50s

NAME                                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/argocd-applicationset-controller   1/1     1            1           50s
deployment.apps/argocd-dex-server                  1/1     1            1           49s
deployment.apps/argocd-notifications-controller    1/1     1            1           50s
deployment.apps/argocd-redis                       1/1     1            1           49s
deployment.apps/argocd-repo-server                 1/1     1            1           50s
deployment.apps/argocd-server                      1/1     1            1           49s

NAME                                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/argocd-applicationset-controller-579f778f57   1         1         1       50s
replicaset.apps/argocd-dex-server-7d99b44d96                  1         1         1       49s
replicaset.apps/argocd-notifications-controller-57dd69d5d9    1         1         1       50s
replicaset.apps/argocd-redis-9ff9dddb8                        1         1         1       49s
replicaset.apps/argocd-repo-server-56dbf9bd9                  1         1         1       49s
replicaset.apps/argocd-server-5d56b98664                      1         1         1       49s

NAME                                             READY   AGE
statefulset.apps/argocd-application-controller   1/1     49s

NAME                                 STATUS     COMPLETIONS   DURATION   AGE
job.batch/argocd-redis-secret-init   Complete   1/1           3s         53s
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Keep in mind that the &lt;em&gt;&apos;argocd-server&apos;&lt;/em&gt; service is configured as the &lt;em&gt;ClusterIP&lt;/em&gt; type, meaning it is accessible only from within the cluster.&lt;/p&gt;
&lt;h3&gt;Configure Argo CD access&lt;/h3&gt;
&lt;p&gt;To use &lt;em&gt;Argo CD&lt;/em&gt; for application deployments, the &lt;em&gt;Argo CD&lt;/em&gt; service must be accessible from outside the MKS cluster. This can be achieved through several methods, including &lt;em&gt;port forwarding&lt;/em&gt;, &lt;em&gt;NodePort&lt;/em&gt; or &lt;em&gt;LoadBalancer&lt;/em&gt; type services, or a manual virtual private network (VPN) setup.&lt;/p&gt;
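&lt;p&gt;For a quick test from your own workstation, port forwarding is the simplest of these options; a minimal sketch using the standard &lt;em&gt;kubectl&lt;/em&gt; command (the local port is arbitrary):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Forward the Argo CD UI/API to a local port, then browse to https://localhost:8080
$ kubectl port-forward svc/argocd-server -n argocd 8080:443
&lt;/code&gt;&lt;/pre&gt;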
&lt;p&gt;In this blog, I show an alternative approach: exposing the &lt;em&gt;Argo CD&lt;/em&gt; service to the public Internet using &lt;em&gt;Tailscale&lt;/em&gt;. For background on &lt;em&gt;Tailscale&lt;/em&gt; and the service-exposure workflow, refer to the blog post &lt;a href=&quot;https://developer.hpe.com/blog/exposing-grafana-service-using-tailscale-for-mks-monitoring-in-hpe-private-cloud-enterprise/&quot;&gt;Exposing Grafana service using Tailscale for MKS monitoring in HPE Private Cloud Enterprise&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Follow the steps below to expose the &lt;em&gt;Argo CD&lt;/em&gt; service endpoint using &lt;em&gt;Tailscale&lt;/em&gt;.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Update the &lt;em&gt;Argo CD&lt;/em&gt; service type from &lt;em&gt;ClusterIP&lt;/em&gt; to &lt;em&gt;LoadBalancer&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl edit svc/argocd-server -n argocd 
service/argocd-server edited
&lt;/code&gt;&lt;/pre&gt;
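&lt;p&gt;If you prefer a non-interactive change over editing the service in an editor, a &lt;em&gt;kubectl patch&lt;/em&gt; command achieves the same result:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl patch svc argocd-server -n argocd -p &apos;{&quot;spec&quot;: {&quot;type&quot;: &quot;LoadBalancer&quot;}}&apos;
&lt;/code&gt;&lt;/pre&gt;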
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Confirm that the service type has been updated successfully.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get svc -n argocd
NAME                               TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
argocd-applicationset-controller   ClusterIP      172.30.219.75   &amp;#x3C;none&gt;          7000/TCP                     2m47s
argocd-dex-server                  ClusterIP      172.30.32.229   &amp;#x3C;none&gt;          5556/TCP,5557/TCP            2m47s
argocd-redis                       ClusterIP      172.30.65.186   &amp;#x3C;none&gt;          6379/TCP                     2m47s
argocd-repo-server                 ClusterIP      172.30.1.116    &amp;#x3C;none&gt;          8081/TCP                     2m47s
argocd-server                      LoadBalancer   172.30.64.166   172.20.20.242   80:30147/TCP,443:31601/TCP   2m47s
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Create an &lt;em&gt;Ingress&lt;/em&gt; YAML manifest and apply it to the cluster.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ cat ingress-argocd.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-argocd
  namespace: argocd
  annotations:
    tailscale.com/funnel: &quot;true&quot;
spec:
  defaultBackend:
    service:
      name: argocd-server
      port:
        number: 443
  ingressClassName: tailscale
  tls:
    - hosts:
        - argocd

$ kubectl apply -f ingress-argocd.yaml
ingress.networking.k8s.io/ingress-argocd created
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;Verify that the &lt;em&gt;Ingress&lt;/em&gt; resource &lt;em&gt;&apos;ingress-argocd&apos;&lt;/em&gt; has been created and assigned an address.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get ingress -n argocd
NAME             CLASS       HOSTS   ADDRESS                    PORTS     AGE
ingress-argocd   tailscale   *       argocd.qilin-beta.ts.net   80, 443   8s
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;5&quot;&gt;
&lt;li&gt;In the &lt;em&gt;Tailscale&lt;/em&gt; admin console, a new machine named &lt;em&gt;&apos;argocd&apos;&lt;/em&gt; should appear under the &lt;em&gt;Machines&lt;/em&gt; tab.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/tailscale-argocd.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;6&quot;&gt;
&lt;li&gt;Open a browser and point to the &lt;em&gt;Argo CD&lt;/em&gt; Funnel URL, e.g., &lt;em&gt;&apos;argocd.qilin-beta.ts.net&apos;&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/argocd-login.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Log in using the username &lt;em&gt;&apos;admin&apos;&lt;/em&gt;. Run the following command to retrieve the admin password.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get secret -n argocd argocd-initial-admin-secret -o jsonpath=&quot;{.data.password}&quot; | base64 -d
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Deploy applications using &lt;em&gt;Argo CD&lt;/em&gt;&lt;/h3&gt;
&lt;p&gt;You can now use the &lt;em&gt;Argo CD&lt;/em&gt; service endpoint to connect your application&apos;s code repository and begin deploying applications to the MKS cluster.&lt;/p&gt;
&lt;h4&gt;Connect application repository&lt;/h4&gt;
&lt;p&gt;The sample &lt;em&gt;WordPress&lt;/em&gt; application&apos;s Helm charts are available in the GitHub repository &lt;a href=&quot;https://github.com/GuopingJia/helm-demo&quot;&gt;helm-demo&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;In this section, I will outline the steps for connecting that repository to &lt;em&gt;Argo CD&lt;/em&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;In the &lt;em&gt;Argo CD&lt;/em&gt; UI, navigate to &lt;em&gt;&lt;strong&gt;Settings -&gt; Repository&lt;/strong&gt;&lt;/em&gt;. Click &lt;em&gt;&lt;strong&gt;+ CONNECT REPO&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/argocd-git.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Select &lt;em&gt;VIA HTTP/HTTPS&lt;/em&gt; as the connection method, choose &lt;em&gt;git&lt;/em&gt; as Type, set Project to &lt;em&gt;default&lt;/em&gt;, optional name as &lt;em&gt;wordpress&lt;/em&gt;, and enter Repository URL, e.g., &lt;em&gt;&lt;a href=&quot;https://github.com/GuopingJia/helm-demo&quot;&gt;https://github.com/GuopingJia/helm-demo&lt;/a&gt;&lt;/em&gt;. Click &lt;em&gt;&lt;strong&gt;CONNECT&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/argocd-git-connect.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;After a few moments, the repository&apos;s CONNECTION STATUS should display as &lt;em&gt;Successful&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/argocd-git-wordpress.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
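&lt;p&gt;If you prefer the command line, the same repository connection can be made with the &lt;em&gt;argocd&lt;/em&gt; CLI; a sketch, assuming you log in against the Funnel URL exposed earlier:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Log in to the Argo CD API server, then register the application repository
$ argocd login argocd.qilin-beta.ts.net --username admin
$ argocd repo add https://github.com/GuopingJia/helm-demo --name wordpress
&lt;/code&gt;&lt;/pre&gt;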
&lt;h4&gt;Create application&lt;/h4&gt;
&lt;p&gt;Follow the steps below to create a new application for deployment.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;In the &lt;em&gt;Argo CD&lt;/em&gt; UI, navigate to &lt;em&gt;&lt;strong&gt;Applications&lt;/strong&gt;&lt;/em&gt;. Click &lt;em&gt;&lt;strong&gt;CREATE APPLICATION&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/argocd-application.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Enter Application Name as &lt;em&gt;wordpress-app&lt;/em&gt;, set Project Name to &lt;em&gt;default&lt;/em&gt; and choose &lt;em&gt;Automatic&lt;/em&gt; for SYNC POLICY. For SOURCE, select Repository URL, set Revision to &lt;em&gt;HEAD&lt;/em&gt;, and specify Path as &lt;em&gt;wordpress&lt;/em&gt;. For DESTINATION, choose Cluster URL as &lt;em&gt;&lt;a href=&quot;https://kubernetes.default.svc&quot;&gt;https://kubernetes.default.svc&lt;/a&gt;&lt;/em&gt; and set Namespace to &lt;em&gt;wordpress&lt;/em&gt;. Enable additional options such as ENABLE AUTO-SYNC, SELF HEAL, AUTO-CREATE NAMESPACE and RETRY. Click &lt;em&gt;&lt;strong&gt;CREATE&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;Note&lt;/strong&gt;&lt;/em&gt;: If you choose &lt;em&gt;Manual&lt;/em&gt; under SYNC POLICY, &lt;em&gt;Argo CD&lt;/em&gt; will detect the changes in the Git repository but will not apply them automatically. You must manually click &lt;em&gt;SYNC&lt;/em&gt; on the Applications page to apply updates.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/argocd-application-create.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
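&lt;p&gt;For reference, the settings entered in this form correspond roughly to the following declarative &lt;em&gt;Application&lt;/em&gt; manifest, which could equally be stored in Git or applied with &lt;em&gt;kubectl&lt;/em&gt; (a sketch based on the values above; adjust it to your environment):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ cat wordpress-app.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: wordpress-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/GuopingJia/helm-demo
    targetRevision: HEAD
    path: wordpress
  destination:
    server: https://kubernetes.default.svc
    namespace: wordpress
  syncPolicy:
    automated:
      selfHeal: true
    syncOptions:
      - CreateNamespace=true

$ kubectl apply -f wordpress-app.yaml
&lt;/code&gt;&lt;/pre&gt;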
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;The &lt;em&gt;wordpress-app&lt;/em&gt; deployment will begin immediately.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/argocd-application-pogressing.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;After a short time, the application should complete deployment with APP HEALTH showing &lt;em&gt;Healthy&lt;/em&gt; and SYNC STATUS showing &lt;em&gt;Synced&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/argocd-application-running.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h4&gt;Make changes to application repository&lt;/h4&gt;
&lt;p&gt;Now let’s update the application repository by increasing the number of replicas for the &lt;em&gt;WordPress&lt;/em&gt; application from the default value of &lt;em&gt;1&lt;/em&gt; to &lt;em&gt;2&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/git-repo-changes.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Once the change is committed, &lt;em&gt;Argo CD&lt;/em&gt; will detect the update and automatically begin synchronizing it to the MKS cluster.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/argocd-application-syncing.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;After the sync completes, run the following command to verify that &lt;em&gt;WordPress&lt;/em&gt; is now running with two pods in the cluster.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get all -n wordpress
NAME                                READY   STATUS    RESTARTS   AGE
pod/wordpress-app-df478fc99-gwffz   1/1     Running   0          23s
pod/wordpress-app-df478fc99-qrrg4   1/1     Running   0          22s
pod/wordpress-app-mariadb-0         1/1     Running   0          162m

NAME                            TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
service/wordpress-app           LoadBalancer   172.30.18.82    172.20.20.243   80:30704/TCP,443:32092/TCP   162m
service/wordpress-app-mariadb   ClusterIP      172.30.18.199   &amp;#x3C;none&gt;          3306/TCP                     162m

NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/wordpress-app   2/2     2            2           162m

NAME                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/wordpress-app-69fdf9dcd9   0         0         0       162m
replicaset.apps/wordpress-app-df478fc99    2         2         2       23s

NAME                                     READY   AGE
statefulset.apps/wordpress-app-mariadb   1/1     162m
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Access WordPress application&lt;/h3&gt;
&lt;p&gt;You can expose the &lt;em&gt;WordPress&lt;/em&gt; application using &lt;em&gt;Tailscale&lt;/em&gt;.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Create an &lt;em&gt;Ingress&lt;/em&gt; YAML manifest.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ cat ingress-wordpress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-wordpress
  namespace: wordpress
  annotations:
    tailscale.com/funnel: &quot;true&quot;
spec:
  defaultBackend:
    service:
      name: wordpress-app
      port:
        number: 443
  ingressClassName: tailscale
  tls:
    - hosts:
        - wordpress
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Apply the &lt;em&gt;Ingress&lt;/em&gt; to the namespace &lt;em&gt;wordpress&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl apply -f ingress-wordpress.yaml
ingress.networking.k8s.io/ingress-wordpress created
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Verify that the &lt;em&gt;Ingress&lt;/em&gt; resource &lt;em&gt;&apos;ingress-wordpress&apos;&lt;/em&gt; has been created and assigned an address, e.g., &lt;em&gt;&apos;wordpress.qilin-beta.ts.net&apos;&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get ingress -n wordpress
NAME                CLASS       HOSTS   ADDRESS                       PORTS     AGE
ingress-wordpress   tailscale   *       wordpress.qilin-beta.ts.net   80, 443   7s
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;In the &lt;em&gt;Tailscale&lt;/em&gt; admin console, a new machine named &lt;em&gt;&apos;wordpress&apos;&lt;/em&gt; should appear under the &lt;em&gt;Machines&lt;/em&gt; tab.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/tailscale-wordpress.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can now access the &lt;em&gt;WordPress&lt;/em&gt; application by opening its &lt;em&gt;Tailscale Funnel&lt;/em&gt; URL in a browser.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/wordpress-ui.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;This blog post provided a detailed walkthrough of how to automate application deployments to MKS clusters using a GitOps workflow with &lt;em&gt;Argo CD&lt;/em&gt;. HPE Private Cloud Enterprise was selected as the example environment for this demonstration. It covered the process of deploying &lt;em&gt;Argo CD&lt;/em&gt; through a Morpheus task executed on the MKS master node in the cluster, configuring application deployments via a publicly accessible &lt;em&gt;Argo CD&lt;/em&gt; endpoint exposed with &lt;em&gt;Tailscale&lt;/em&gt;, and monitoring the real-time state of applications running in the MKS cluster.&lt;/p&gt;
&lt;p&gt;It&apos;s important to remember that while &lt;em&gt;Argo CD&lt;/em&gt; manages continuous delivery (CD), it still relies on a separate continuous integration (CI) pipeline. The CI pipeline is responsible for testing and building the application whenever developers update the source code. During this phase, the application is validated, container images are built and pushed to an image registry, and the CI system can update the configuration repository, often a separate repository from the application&apos;s source code and connected to &lt;em&gt;Argo CD&lt;/em&gt;. These updates then trigger &lt;em&gt;Argo CD&lt;/em&gt; to synchronize the desired state with the cluster. This workflow represents a common GitOps pattern used across many organizations. By integrating seamlessly with existing CI/CD systems, &lt;em&gt;Argo CD&lt;/em&gt; helps teams implement GitOps practices effectively.&lt;/p&gt;
&lt;p&gt;Please keep coming back to the &lt;a href=&quot;https://developer.hpe.com/blog/&quot;&gt;HPE Developer Community blog&lt;/a&gt; to learn more about HPE Private Cloud Enterprise and get more ideas on how you can use it in your everyday operations.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Streamlining server management: Bulk onboarding to HPE Compute Ops Management with iLOrest]]></title><description><![CDATA[This blog post has been moved to the Server Management portal.]]></description><link>https://developer.hpe.com/streamlining-server-management-bulk-onboarding-to-hpe-compute-ops-management-with-ilorest-1/</link><guid isPermaLink="false">https://developer.hpe.com/streamlining-server-management-bulk-onboarding-to-hpe-compute-ops-management-with-ilorest-1/</guid><pubDate>Fri, 09 Jan 2026 08:04:01 GMT</pubDate><content:encoded>&lt;br&gt;
&lt;p&gt;&lt;big&gt;This blog post has been moved to the &lt;a href=&quot;https://servermanagementportal.ext.hpe.com/docs/references_and_material/blogposts/firmware_updates/part1/firmware_update_part1&quot;&gt;Server Management portal&lt;/a&gt;.&lt;/big&gt;&lt;/p&gt;
&lt;br&gt;</content:encoded></item><item><title><![CDATA[Monitoring HPE GreenLake Servers Running GPUs Using Grafana Cloud]]></title><description><![CDATA[External blog]]></description><link>https://developer.hpe.com/monitoring-hpe-greenlake-servers-running-gpus-using-grafana-cloud/</link><guid isPermaLink="false">https://developer.hpe.com/monitoring-hpe-greenlake-servers-running-gpus-using-grafana-cloud/</guid><pubDate>Fri, 09 Jan 2026 06:36:37 GMT</pubDate><content:encoded>&lt;p&gt;External blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Try Morpheus Community Edition, get a Chapel Transformer guide & join our GreenLake MCP webinar]]></title><link>https://developer.hpe.com/2026-january-07/</link><guid isPermaLink="false">https://developer.hpe.com/2026-january-07/</guid><pubDate>Wed, 07 Jan 2026 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Announcing Chapel 2.7!]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/announcing-chapel-2-7/</link><guid isPermaLink="false">https://developer.hpe.com/announcing-chapel-2-7/</guid><pubDate>Thu, 18 Dec 2025 22:17:48 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE Morpheus Enterprise case study: Fix network pool links via a Custom Task Plugin]]></title><description><![CDATA[A previous blog post explored the basics around building and compiling HPE Morpheus Enterprise Plugins. This post expands on the subject by…]]></description><link>https://developer.hpe.com/hpe-morpheus-enterprise-case-study-fix-network-pool-links-via-a-custom-task-plugin/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-morpheus-enterprise-case-study-fix-network-pool-links-via-a-custom-task-plugin/</guid><pubDate>Thu, 18 Dec 2025 11:45:51 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;A previous &lt;a href=&quot;https://developer.hpe.com/blog/morpheus-plugin-tutorial-how-to-build-and-compile/&quot;&gt;blog&lt;/a&gt; post explored the basics around building and compiling HPE Morpheus Enterprise Plugins. This post expands on the subject by implementing a minimal Custom Task Plugin. The logic will target IP Pools, where links between the host entries and the VM workloads are broken or missing.&lt;/p&gt;
&lt;h2&gt;Problem statement&lt;/h2&gt;
&lt;p&gt;When an IP Pool is added to a Network, or changed, after Instance provisioning, the &lt;em&gt;&lt;strong&gt;NetworkPoolIp&lt;/strong&gt;&lt;/em&gt; entry for a Network Interface that falls within that &lt;em&gt;&lt;strong&gt;Network&lt;/strong&gt;&lt;/em&gt;/&lt;em&gt;&lt;strong&gt;NetworkPool&lt;/strong&gt;&lt;/em&gt; is not linked to a &lt;em&gt;&lt;strong&gt;ComputeServer&lt;/strong&gt;&lt;/em&gt; object.&lt;/p&gt;
&lt;p&gt;When a server is provisioned to a Network where an IP Pool is attached, reference links are created via the &lt;em&gt;&lt;strong&gt;refType&lt;/strong&gt;&lt;/em&gt; and &lt;em&gt;&lt;strong&gt;refId&lt;/strong&gt;&lt;/em&gt; properties of the &lt;em&gt;&lt;strong&gt;NetworkPoolIp&lt;/strong&gt;&lt;/em&gt; object, as shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/morphblog_linknetworks_reftype_refid_small.png&quot; alt=&quot;refType and refId relationship&quot;&gt;&lt;/p&gt;
&lt;p&gt;Upon provisioning, the &lt;em&gt;&lt;strong&gt;refType&lt;/strong&gt;&lt;/em&gt; property is set to the literal value of &apos;ComputeServer&apos; and the &lt;em&gt;&lt;strong&gt;refId&lt;/strong&gt;&lt;/em&gt; property is set to the &lt;em&gt;&lt;strong&gt;Id&lt;/strong&gt;&lt;/em&gt; of the &lt;em&gt;&lt;strong&gt;ComputeServer&lt;/strong&gt;&lt;/em&gt; object itself.&lt;/p&gt;
&lt;p&gt;When a &lt;em&gt;&lt;strong&gt;NetworkPool&lt;/strong&gt;&lt;/em&gt; is migrated/changed, or added to the &lt;em&gt;&lt;strong&gt;Network&lt;/strong&gt;&lt;/em&gt; after Instance provisioning, &lt;em&gt;&lt;strong&gt;NetworkPoolIp&lt;/strong&gt;&lt;/em&gt; records are created or synchronized from IPAM, without the &lt;em&gt;&lt;strong&gt;refId&lt;/strong&gt;&lt;/em&gt; link populated. This causes orphan host entries when workloads are later deleted.&lt;/p&gt;
&lt;p&gt;To reproduce this, provision a single VM Instance into a simple lab, then add the IP Pool to the related Network afterward.&lt;/p&gt;
&lt;h2&gt;Normal behavior&lt;/h2&gt;
&lt;p&gt;First, consider the normal day-to-day use case. The Network is associated with the IP Pool as shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/morphblog_linknetwork_network_with_pool.png&quot; alt=&quot;Network with IP Pool&quot;&gt;&lt;/p&gt;
&lt;p&gt;When provisioning a VM Instance into this Network, a Host Record entry is created in the IP Pool:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/morphblog_linknetwork_instance_provisioned.png&quot; alt=&quot;Instance provisioned&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/morphblog_linknetwork_host_record.png&quot; alt=&quot;Host record created&quot;&gt;&lt;/p&gt;
&lt;p&gt;Querying the Host Record via the REST API shows the link back to the &lt;em&gt;&lt;strong&gt;ComputeServer&lt;/strong&gt;&lt;/em&gt; within the Instance:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/morphblog_linknetwork_api_host_record.png&quot; alt=&quot;Host record via REST API&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Provision without IP Pool&lt;/h2&gt;
&lt;p&gt;To illustrate the broken reference issue, remove the Network-to-IP-Pool association:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/morphblog_linknetwork_network_without_pool.png&quot; alt=&quot;Network without IP Pool&quot;&gt;&lt;/p&gt;
&lt;p&gt;Then provision a VM Instance to the Network:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/morphblog_linknetwork_provision_vm.png&quot; alt=&quot;Provision VM to Network&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/morphblog_linknetwork_provisioned_vm.png&quot; alt=&quot;Provisioned VM to Network&quot;&gt;&lt;/p&gt;
&lt;p&gt;Next, create a Host Record entry manually within the IP Pool:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/morphblog_linknetworks_manual_host_record.png&quot; alt=&quot;New Pool IP&quot;&gt;&lt;/p&gt;
&lt;p&gt;Finally, add the IP Pool back onto the Network:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/morphblog_linknetwork_network_pool_readded.png&quot; alt=&quot;Network Pool re-added onto Network&quot;&gt;&lt;/p&gt;
&lt;p&gt;This time, the REST API response shows that the &lt;em&gt;&lt;strong&gt;refId&lt;/strong&gt;&lt;/em&gt; property is NOT populated on the &lt;em&gt;&lt;strong&gt;ComputeServer&lt;/strong&gt;&lt;/em&gt;, despite the matching hostname on the VM:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/morphblog_linknetwork_api_host_record_nolink.png&quot; alt=&quot;API host record with no link&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Steps to compile and upload the Custom Task Plugin&lt;/h2&gt;
&lt;p&gt;Download or clone the Plugin repository from &lt;a href=&quot;https://github.com/neilvrhpe/link-network-hosts&quot;&gt;https://github.com/neilvrhpe/link-network-hosts&lt;/a&gt;.
Open the project directory and compile with the relevant &lt;em&gt;&lt;strong&gt;gradlew&lt;/strong&gt;&lt;/em&gt; (Linux) or &lt;em&gt;&lt;strong&gt;gradlew.bat&lt;/strong&gt;&lt;/em&gt; (Windows) script using the &lt;em&gt;&lt;strong&gt;shadowJar&lt;/strong&gt;&lt;/em&gt; argument:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/morphblog_linknetworks_compile_jar.png&quot; alt=&quot;Compile the Plugin jar file&quot;&gt;&lt;/p&gt;
&lt;p&gt;The compiled &lt;em&gt;&lt;strong&gt;.jar&lt;/strong&gt;&lt;/em&gt; file will be found in the &lt;em&gt;&lt;strong&gt;build/libs&lt;/strong&gt;&lt;/em&gt; subdirectory:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/morphblog_linknetworks_compiled_jar.png&quot; alt=&quot;Compiled jar file in build/libs subdirectory&quot;&gt;&lt;/p&gt;
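&lt;p&gt;On a Linux workstation, the clone-and-compile steps above boil down to roughly the following (the exact &lt;em&gt;&lt;strong&gt;.jar&lt;/strong&gt;&lt;/em&gt; file name depends on the project version):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;git clone https://github.com/neilvrhpe/link-network-hosts
cd link-network-hosts
./gradlew shadowJar
ls build/libs/    # the compiled Plugin .jar file appears here
&lt;/code&gt;&lt;/pre&gt;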
&lt;p&gt;Upload the &lt;em&gt;&lt;strong&gt;.jar&lt;/strong&gt;&lt;/em&gt; file to the &lt;em&gt;&lt;strong&gt;Administration &gt; Integrations &gt; Plugins &gt; Add&lt;/strong&gt;&lt;/em&gt; dialog. The Plugin should appear in the list as shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/morphblog_linknetworks_Plugin_uploaded.png&quot; alt=&quot;Plugin uploaded and shown in list&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Setup the Custom Task Workflow&lt;/h2&gt;
&lt;p&gt;Uploading the Plugin in the previous step introduces a new &lt;em&gt;&lt;strong&gt;TaskType&lt;/strong&gt;&lt;/em&gt; to the HPE Morpheus Enterprise appliance. This can be seen under the edit dialog of the uploaded Plugin:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/morphblog_linknetworks_new_tasktype.png&quot; alt=&quot;New Task Type introduced&quot;&gt;&lt;/p&gt;
&lt;p&gt;To use this new Custom Task Type in the UI, provide an &lt;em&gt;&lt;strong&gt;OptionSource&lt;/strong&gt;&lt;/em&gt; Input. Create the corresponding &lt;em&gt;&lt;strong&gt;OptionList&lt;/strong&gt;&lt;/em&gt; under &lt;em&gt;&lt;strong&gt;Library &gt; Options &gt; Option Lists &gt; Add&lt;/strong&gt;&lt;/em&gt;. The type is &lt;em&gt;&lt;strong&gt;Plugin&lt;/strong&gt;&lt;/em&gt; and the Option List will be &lt;em&gt;&lt;strong&gt;Link Network Hosts: getNetworkPools&lt;/strong&gt;&lt;/em&gt;, as provided by the Plugin that was uploaded in the previous step. Provide &apos;ChooseNetworkPool&apos; as the name, so that it matches the name in the next step:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/morphblog_linknetworks_create_optionsource.png&quot; alt=&quot;Create the Option Source&quot;&gt;&lt;/p&gt;
&lt;p&gt;Next, create the &lt;em&gt;&lt;strong&gt;Input&lt;/strong&gt;&lt;/em&gt; that represents the &lt;em&gt;&lt;strong&gt;OptionList&lt;/strong&gt;&lt;/em&gt; entries to the end user dropdown in the UI. This is done using the &lt;em&gt;&lt;strong&gt;Library &gt; Options &gt; Inputs &gt; Add&lt;/strong&gt;&lt;/em&gt; button. Then provide &apos;networkPool&apos; as the &lt;em&gt;&lt;strong&gt;Field Name&lt;/strong&gt;&lt;/em&gt; for the Custom Task to reference in the next step. Choose &lt;em&gt;&lt;strong&gt;Select List&lt;/strong&gt;&lt;/em&gt; as the &lt;em&gt;&lt;strong&gt;Type&lt;/strong&gt;&lt;/em&gt; and use &lt;em&gt;&lt;strong&gt;ChooseNetworkPool&lt;/strong&gt;&lt;/em&gt; as the &lt;em&gt;&lt;strong&gt;Option List&lt;/strong&gt;&lt;/em&gt; field value:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/morphblog_linknetworks_form_input.png&quot; alt=&quot;Create the form input for the Option Source&quot;&gt;&lt;/p&gt;
&lt;p&gt;To set up the Task, navigate to &lt;em&gt;&lt;strong&gt;Library &gt; Automation &gt; Tasks &gt; Add&lt;/strong&gt;&lt;/em&gt;. Provide a Task &lt;em&gt;&lt;strong&gt;Name&lt;/strong&gt;&lt;/em&gt; and a &lt;em&gt;&lt;strong&gt;Network Pool Id&lt;/strong&gt;&lt;/em&gt; value of &lt;em&gt;&lt;strong&gt;&amp;#x3C;%=customOptions.networkPool%&gt;&lt;/strong&gt;&lt;/em&gt;. This reference will insert the value from the Input created in the previous step.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/morphblog_linknetworks_task_setup.png&quot; alt=&quot;Custom Task setup&quot;&gt;&lt;/p&gt;
&lt;h3&gt;MATCH FULL FQDN&lt;/h3&gt;
&lt;p&gt;Checking this box will include the full domain name suffixed to the hostname in the match comparison. Unchecked, only the hostname portion is matched. Matching is case insensitive.&lt;/p&gt;
&lt;h3&gt;REPLACE EXISTING LINKS&lt;/h3&gt;
&lt;p&gt;Checking this box will overwrite any existing &lt;em&gt;&lt;strong&gt;refId&lt;/strong&gt;&lt;/em&gt; links when the hostname matches.&lt;/p&gt;
&lt;p&gt;Next, create an &lt;em&gt;&lt;strong&gt;Operational Workflow&lt;/strong&gt;&lt;/em&gt; to run the Task with the correct Input context (Pool ID). Create the Workflow under &lt;em&gt;&lt;strong&gt;Library &gt; Automation &gt; Add &gt; Operational Workflow&lt;/strong&gt;&lt;/em&gt;. Provide a name for the Workflow, add the &lt;em&gt;&lt;strong&gt;Task from the previous step&lt;/strong&gt;&lt;/em&gt; under &lt;em&gt;&lt;strong&gt;Tasks&lt;/strong&gt;&lt;/em&gt; and add the &lt;em&gt;&lt;strong&gt;ChooseNetworkPool&lt;/strong&gt;&lt;/em&gt; Input under &lt;em&gt;&lt;strong&gt;Inputs&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/morphblog_linknetworks_workflow_setup.png&quot; alt=&quot;Operational Workflow setup&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Run the Workflow against the Network Pool&lt;/h2&gt;
&lt;p&gt;Review the REST API call to confirm that the host record &lt;em&gt;&lt;strong&gt;refId&lt;/strong&gt;&lt;/em&gt; property is not currently set against the provisioned &lt;em&gt;&lt;strong&gt;ComputeServer&lt;/strong&gt;&lt;/em&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/morphblog_linknetworks_link_ref_missing.png&quot; alt=&quot;Confirm missing refId link to ComputeServer&quot;&gt;&lt;/p&gt;
&lt;p&gt;Under &lt;em&gt;&lt;strong&gt;Library &gt; Automation &gt; Workflows&lt;/strong&gt;&lt;/em&gt;, click the name of the Workflow created in the previous step to view the Workflow details:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/morphblog_linknetworks_workflow_details.png&quot; alt=&quot;View Workflow details&quot;&gt;&lt;/p&gt;
&lt;p&gt;The &lt;em&gt;&lt;strong&gt;Execute&lt;/strong&gt;&lt;/em&gt; button brings up the Workflow execution dialog. Select the IP Pool that the test VM Instance is deployed to. The &lt;em&gt;&lt;strong&gt;Execution Config/Context&lt;/strong&gt;&lt;/em&gt; can be ignored, as this Task will always run on the local HPE Morpheus Enterprise appliance in its own context:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/morphblog_linknetworks_workflow_execute.png&quot; alt=&quot;Execute the Workflow&quot;&gt;&lt;/p&gt;
&lt;p&gt;Under the &lt;em&gt;&lt;strong&gt;Executions&lt;/strong&gt;&lt;/em&gt; tab, view the output of the Task, showing which &lt;em&gt;&lt;strong&gt;ComputeServer&lt;/strong&gt;&lt;/em&gt; objects have been allocated to Host Records within the IP Pool, via the &lt;em&gt;&lt;strong&gt;refId&lt;/strong&gt;&lt;/em&gt; property:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/morphblog_linknetworks_workflow_executed.png&quot; alt=&quot;Workflow execution results&quot;&gt;&lt;/p&gt;
&lt;p&gt;Re-running the REST API call confirms that the &lt;em&gt;&lt;strong&gt;refType&lt;/strong&gt;&lt;/em&gt; and &lt;em&gt;&lt;strong&gt;refId&lt;/strong&gt;&lt;/em&gt; link was created:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/morphblog_linknetwork_api_host_record_linked.png&quot; alt=&quot;RefType and refId link created&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Workflow via the API&lt;/h2&gt;
&lt;p&gt;In large environments it would be impractical to execute the Workflow for each IP Pool by hand in the UI. For these scenarios, execute the Workflow via the REST API. Provide the &lt;em&gt;&lt;strong&gt;id&lt;/strong&gt;&lt;/em&gt; of the &lt;em&gt;&lt;strong&gt;Workflow&lt;/strong&gt;&lt;/em&gt; in the request URL and pass the &lt;em&gt;&lt;strong&gt;id&lt;/strong&gt;&lt;/em&gt; of the IP Pool in the &lt;em&gt;&lt;strong&gt;networkPool&lt;/strong&gt;&lt;/em&gt; body parameter of the POST request:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/morphblog_linknetwork_execute_via_api.png&quot; alt=&quot;Execute Workflow via API&quot;&gt;&lt;/p&gt;
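&lt;p&gt;A &lt;em&gt;&lt;strong&gt;curl&lt;/strong&gt;&lt;/em&gt; sketch of such a request is shown below. The endpoint path and body structure are assumptions based on the screenshot and common Morpheus API conventions, so verify them against your appliance&apos;s API documentation; the ids and the token are placeholders.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Execute the Workflow (id 4, hypothetical) against the IP Pool with id 12 (hypothetical)
curl -k -X POST &quot;https://your-morpheus-appliance/api/task-workflows/4/execute&quot; \
  -H &quot;Authorization: Bearer $MORPHEUS_API_TOKEN&quot; \
  -H &quot;Content-Type: application/json&quot; \
  -d &apos;{&quot;job&quot;: {&quot;customOptions&quot;: {&quot;networkPool&quot;: &quot;12&quot;}}}&apos;
&lt;/code&gt;&lt;/pre&gt;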
&lt;h2&gt;Next steps&lt;/h2&gt;
&lt;p&gt;From here, it would make sense to explore the wider possibilities around Custom Tasks. While this example only ever runs on the local HPE Morpheus Enterprise appliance, Tasks can run on different targets (remote, resource) and against different contexts (instances, servers).&lt;/p&gt;
&lt;p&gt;In essence, this post explores one particular use case that can assist with IPAM migrations and day-2 IPAM adoption.&lt;/p&gt;
&lt;p&gt;At the more advanced end of the spectrum are Provider Types that model core infrastructure components. These include integrations for Clouds, Networks, Storage systems, and many others. Such Providers tend to be more complex because they interact deeply with HPE Morpheus Enterprise’s provisioning, synchronization, and lifecycle management layers. Understanding how these Provider Types fit together is key to building powerful, production-grade Plugins.&lt;/p&gt;
&lt;p&gt;Explore the following resources for more information on the different Plugin/Provider types:&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.morpheusdata.com&quot;&gt;https://developer.morpheusdata.com&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://share.morpheusdata.com&quot;&gt;https://share.morpheusdata.com&lt;/a&gt; (follow the repository link under the Plugin details to see the source code of a Plugin)&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/hewlettpackard&quot;&gt;https://github.com/hewlettpackard&lt;/a&gt; &lt;a href=&quot;https://youtu.be/1twoNvPoEV4?si=elUEzCYGo88TIffX&quot;&gt;https://youtu.be/1twoNvPoEV4?si=elUEzCYGo88TIffX&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Transformers From Scratch in Chapel and C++]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/transformers-from-scratch-in-chapel-and-c/</link><guid isPermaLink="false">https://developer.hpe.com/transformers-from-scratch-in-chapel-and-c/</guid><pubDate>Sat, 13 Dec 2025 03:07:14 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Dive into the latest resources & tutorials designed to empower your work with HPE technologies]]></title><link>https://developer.hpe.com/2025-december-08/</link><guid isPermaLink="false">https://developer.hpe.com/2025-december-08/</guid><pubDate>Mon, 08 Dec 2025 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Reflections on ChapelCon '25]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/reflections-on-chapelcon-25/</link><guid isPermaLink="false">https://developer.hpe.com/reflections-on-chapelcon-25/</guid><pubDate>Fri, 05 Dec 2025 20:07:37 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Bringing AI assistants to GreenLake with MCP Servers]]></title><description><![CDATA[Overview Modern cloud platforms generate vast amounts of operational data. GreenLake customers often find themselves toggling between…]]></description><link>https://developer.hpe.com/bringing-ai-assistants-to-hpe-greenlake-with-mcp-servers/</link><guid isPermaLink="false">https://developer.hpe.com/bringing-ai-assistants-to-hpe-greenlake-with-mcp-servers/</guid><pubDate>Wed, 03 Dec 2025 18:40:27 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h1&gt;Overview&lt;/h1&gt;
&lt;p&gt;Modern cloud platforms generate vast amounts of operational data. GreenLake customers often find themselves toggling between multiple interfaces, running complex API queries, and piecing together information from various services to understand their infrastructure state. What if your AI assistant could directly interact with your GreenLake environment in a secure, controlled manner?&lt;/p&gt;
&lt;p&gt;The Model Context Protocol (MCP) makes this possible by providing a standardized way for AI assistants to access external data sources and tools. This article explores how MCP servers bring intelligent, conversational access to GreenLake APIs while maintaining strict security boundaries.&lt;/p&gt;
&lt;h2&gt;The Challenge: Bridging AI and Enterprise APIs&lt;/h2&gt;
&lt;p&gt;AI assistants have transformed how we work with information, but they face a fundamental limitation when it comes to enterprise platforms: they don&apos;t have direct access to your live operational data. When you ask questions about your GreenLake environment, the AI can only provide general guidance, not specific insights based on your actual workspaces, devices, or audit logs.&lt;/p&gt;
&lt;p&gt;Traditional approaches to this problem involve building custom integration layers, managing authentication flows, and maintaining bespoke code for each API endpoint. This creates several challenges:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Development overhead&lt;/strong&gt;: Each integration requires custom code and ongoing maintenance&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Security concerns&lt;/strong&gt;: Storing credentials and managing access control adds complexity&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Limited reusability&lt;/strong&gt;: Custom integrations rarely work across different tools or AI platforms&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Slow iteration&lt;/strong&gt;: Adding new capabilities means writing more integration code&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Introducing the Model Context Protocol&lt;/h2&gt;
&lt;p&gt;The &lt;a href=&quot;https://modelcontextprotocol.io/&quot;&gt;Model Context Protocol&lt;/a&gt; is an open standard that enables AI applications to securely connect to external data sources and tools. Think of it as a universal adapter that allows AI assistants to &quot;plug in&quot; to your enterprise systems in a controlled, standardized way.&lt;/p&gt;
&lt;p&gt;MCP operates on a client-server architecture:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/mcp-architecture.png&quot; alt=&quot;mcp architecture&quot; title=&quot;mcp architecture&quot;&gt;&lt;/p&gt;
&lt;p&gt;The protocol defines three core primitives:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Resources&lt;/strong&gt;: Data sources that can be read (configuration files, API responses, documents)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Tools&lt;/strong&gt;: Functions that can be invoked (API calls, calculations, data transformations)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Prompts&lt;/strong&gt;: Reusable templates for common operations&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;For GreenLake integration, MCP servers primarily expose &lt;strong&gt;tools&lt;/strong&gt; that map to API endpoints, allowing AI assistants to query workspaces, retrieve device information, search audit logs, and more.&lt;/p&gt;
&lt;h2&gt;Why MCP Servers Are a Natural Fit for GreenLake&lt;/h2&gt;
&lt;p&gt;MCP servers bring a set of benefits that align well with GreenLake’s secure, enterprise-scale architecture. At a high level, they provide local execution, per-user credentials, and a standardized interface for AI tools to safely interact with GreenLake APIs.&lt;/p&gt;
&lt;h3&gt;Local Execution, Complete Control&lt;/h3&gt;
&lt;p&gt;An MCP server runs entirely on your machine. It doesn’t open network ports or send your credentials to an external service. Instead, it acts as a controlled bridge between your AI assistant and the GreenLake APIs.&lt;/p&gt;
&lt;p&gt;This design provides:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Data stays local&lt;/strong&gt;: API responses are processed on your device before the AI sees them.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Credential isolation&lt;/strong&gt;: OAuth secrets never leave your environment.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Read-only safety&lt;/strong&gt;: Only GET operations are exposed, preventing accidental changes.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Firewall friendly behavior&lt;/strong&gt;: Only outbound HTTPS requests are sent to GreenLake APIs.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;em&gt;Takeaway: MCP servers give AI assistants visibility into your GreenLake environment without sacrificing security, privacy, or control.&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;Personal API Client Pattern&lt;/h3&gt;
&lt;p&gt;MCP servers use your personal API credentials, similar to how you would interact with GreenLake through the CLI or web console. This means:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Workspace scoping&lt;/strong&gt;: You only access resources within your authorized workspaces&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Role-based access&lt;/strong&gt;: Your existing RBAC permissions apply to all API operations&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Individual accountability&lt;/strong&gt;: All actions are attributed to your user account&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;No shared secrets&lt;/strong&gt;: Each user maintains their own credentials&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Standardized Tool Interface&lt;/h3&gt;
&lt;p&gt;Once configured, the MCP server exposes GreenLake APIs as standardized tools that any MCP-compatible AI assistant can use. This provides:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Consistent experience&lt;/strong&gt;: The same tools work in Claude Desktop, VS Code, and other MCP clients&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Discoverable capabilities&lt;/strong&gt;: AI assistants automatically learn available operations&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Type-safe parameters&lt;/strong&gt;: Input validation happens before API calls&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Structured responses&lt;/strong&gt;: API data is returned in predictable formats&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Operating Modes: Static vs. Dynamic&lt;/h3&gt;
&lt;p&gt;MCP servers support two operating modes depending on the size and complexity of the API surface.&lt;/p&gt;
&lt;h4&gt;Static Mode&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Best for&lt;/strong&gt;: APIs with &amp;#x3C;50 endpoints&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Tool Model&lt;/strong&gt;: One tool per endpoint&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Validation&lt;/strong&gt;: Compile-time, type-safe&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Performance&lt;/strong&gt;: Fast execution&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Ideal Use&lt;/strong&gt;: Smaller, well-defined API sets&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;Dynamic Mode&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Best for&lt;/strong&gt;: APIs with 50+ endpoints&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Tool Model&lt;/strong&gt;: Three meta-tools handle everything&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Validation&lt;/strong&gt;: Runtime schema validation&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Performance&lt;/strong&gt;: Lower memory footprint&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Ideal Use&lt;/strong&gt;: Large, evolving API sets&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Both modes expose the same capabilities — the difference lies in how efficiently the server manages larger API collections.&lt;/p&gt;
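&lt;p&gt;To illustrate, the server can decide which tool set to advertise from a single setting. The sketch below is illustrative only and assumes the &lt;strong&gt;MCP_TOOL_MODE&lt;/strong&gt; environment variable shown in the configuration examples later in this article.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import os

# Illustrative sketch: choose which tools to advertise based on MCP_TOOL_MODE,
# the environment variable used in the configuration examples later in this article.
STATIC_TOOLS = [&quot;getAuditLogs&quot;]  # in practice, one generated tool per GET endpoint
DYNAMIC_META_TOOLS = [&quot;list_endpoints&quot;, &quot;get_endpoint_schema&quot;, &quot;invoke_dynamic_tool&quot;]

def tools_to_expose() -&gt; list:
    &quot;&quot;&quot;Return the tool names the server would advertise in the configured mode.&quot;&quot;&quot;
    mode = os.environ.get(&quot;MCP_TOOL_MODE&quot;, &quot;static&quot;).lower()
    return DYNAMIC_META_TOOLS if mode == &quot;dynamic&quot; else STATIC_TOOLS

print(tools_to_expose())
&lt;/code&gt;&lt;/pre&gt;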
&lt;h2&gt;Security Architecture&lt;/h2&gt;
&lt;p&gt;Security is paramount when connecting AI assistants to production infrastructure. MCP servers for GreenLake implement defense-in-depth principles.&lt;/p&gt;
&lt;h3&gt;Authentication and Authorization&lt;/h3&gt;
&lt;p&gt;&lt;img src=&quot;/img/authn-seq-diagram.png&quot; alt=&quot;authn authz sequence diagram&quot; title=&quot;Authentication - sequence diagram&quot;&gt;&lt;/p&gt;
&lt;p&gt;The MCP server handles OAuth2 authentication using the client credentials flow, automatically managing token lifecycle and refresh operations. Your credentials are read from local environment variables or configuration files, never transmitted to the AI service.&lt;/p&gt;
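&lt;p&gt;As a rough illustration of this flow, the sketch below shows a generic OAuth2 client-credentials token cache in Python using &lt;strong&gt;httpx&lt;/strong&gt;. The token endpoint URL is a placeholder, and the code is not the server&apos;s actual implementation.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import os
import time

import httpx

# Placeholder token endpoint: the real URL is defined by your GreenLake workspace setup.
TOKEN_URL = os.environ.get(&quot;GREENLAKE_TOKEN_URL&quot;, &quot;https://example.invalid/oauth2/token&quot;)

class TokenManager:
    &quot;&quot;&quot;Minimal client-credentials token cache with expiry-aware refresh (illustrative).&quot;&quot;&quot;

    def __init__(self) -&gt; None:
        self._token = None
        self._expires_at = 0.0

    def get_token(self) -&gt; str:
        # Refresh slightly early so in-flight requests never carry a stale token.
        if self._token is None or time.time() &gt; self._expires_at - 60:
            resp = httpx.post(
                TOKEN_URL,
                data={
                    &quot;grant_type&quot;: &quot;client_credentials&quot;,
                    &quot;client_id&quot;: os.environ[&quot;GREENLAKE_CLIENT_ID&quot;],
                    &quot;client_secret&quot;: os.environ[&quot;GREENLAKE_CLIENT_SECRET&quot;],
                },
            )
            resp.raise_for_status()
            payload = resp.json()
            self._token = payload[&quot;access_token&quot;]
            self._expires_at = time.time() + payload.get(&quot;expires_in&quot;, 900)
        return self._token
&lt;/code&gt;&lt;/pre&gt;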
&lt;h3&gt;Read-Only Operations&lt;/h3&gt;
&lt;p&gt;MCP servers for GreenLake deliberately expose only GET operations from the OpenAPI specifications. Because write operations are excluded, users can query extensively without any risk of unintended modifications. This also reduces the blast radius: even a misconfigured server cannot modify infrastructure.&lt;/p&gt;
&lt;h3&gt;Network Isolation&lt;/h3&gt;
&lt;p&gt;The MCP server communicates with AI assistants via standard input/output (stdio) rather than network protocols. This means:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;No listening ports&lt;/strong&gt;: The server doesn&apos;t expose network services&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Process-level isolation&lt;/strong&gt;: Communication happens through OS process pipes&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;No remote access&lt;/strong&gt;: The server cannot be accessed from other machines&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Firewall-friendly&lt;/strong&gt;: Only outbound HTTPS to GreenLake APIs&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Credential Management&lt;/h3&gt;
&lt;p&gt;Following security best practices, MCP servers support multiple credential sources in priority order:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Environment variables (recommended for development)&lt;/li&gt;
&lt;li&gt;Local configuration files with restricted permissions&lt;/li&gt;
&lt;li&gt;Secure credential stores (platform keychain integration)&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Credentials are never logged, and API responses are sanitized to remove sensitive tokens before being shared with the AI assistant.&lt;/p&gt;
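&lt;p&gt;A minimal sketch of this resolution order is shown below; the file location and key names are assumptions for illustration, not the server&apos;s actual configuration schema.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import json
import os
from pathlib import Path

# Assumed location of a local configuration file (illustrative only).
CONFIG_FILE = Path.home() / &quot;.greenlake&quot; / &quot;credentials.json&quot;

def resolve_client_id():
    &quot;&quot;&quot;Return the client ID from the highest-priority source that defines it.&quot;&quot;&quot;
    # 1. Environment variables (recommended for development)
    if os.environ.get(&quot;GREENLAKE_CLIENT_ID&quot;):
        return os.environ[&quot;GREENLAKE_CLIENT_ID&quot;]
    # 2. Local configuration file with restricted permissions
    if CONFIG_FILE.exists():
        return json.loads(CONFIG_FILE.read_text()).get(&quot;client_id&quot;)
    # 3. A platform keychain (for example, via the keyring package) would be consulted last.
    return None

print(resolve_client_id())
&lt;/code&gt;&lt;/pre&gt;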
&lt;h2&gt;Real-World Use Cases&lt;/h2&gt;
&lt;p&gt;MCP servers unlock powerful workflows by combining AI reasoning with live GreenLake data:&lt;/p&gt;
&lt;h3&gt;Audit Log Analysis&lt;/h3&gt;
&lt;p&gt;Investigating security events becomes conversational: &quot;Show me all failed login attempts from the last 24 hours and identify patterns.&quot;&lt;/p&gt;
&lt;p&gt;The MCP server enables:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Querying audit logs with time-based filters&lt;/li&gt;
&lt;li&gt;Natural language parsing of filter syntax&lt;/li&gt;
&lt;li&gt;Pattern recognition across multiple log entries&lt;/li&gt;
&lt;li&gt;Correlation with workspace and user data&lt;/li&gt;
&lt;li&gt;Summary reports with actionable recommendations&lt;/li&gt;
&lt;/ol&gt;
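&lt;p&gt;For example, the first capability above might translate into a single call against the audit log endpoint used later in this article. The sketch below is illustrative: the filter grammar and response field names are assumptions, not the server&apos;s actual code.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import httpx

BASE_URL = &quot;https://global.api.greenlake.hpe.com&quot;

async def recent_audit_logs(token: str, since_iso: str) -&gt; list:
    &quot;&quot;&quot;Fetch audit log entries created after a given timestamp.&quot;&quot;&quot;
    async with httpx.AsyncClient(base_url=BASE_URL) as client:
        resp = await client.get(
            &quot;/audit-log/v1/logs&quot;,
            headers={&quot;Authorization&quot;: f&quot;Bearer {token}&quot;},
            # The filter expression and the &apos;items&apos; field are assumptions for illustration;
            # consult the audit log API reference for the exact filter grammar.
            params={&quot;filter&quot;: f&quot;createdAt gt &apos;{since_iso}&apos;&quot;, &quot;limit&quot;: 100},
        )
        resp.raise_for_status()
        return resp.json().get(&quot;items&quot;, [])

# Example: import asyncio; asyncio.run(recent_audit_logs(&quot;your-access-token&quot;, &quot;2025-01-14T14:00:00Z&quot;))
&lt;/code&gt;&lt;/pre&gt;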
&lt;h3&gt;Intelligent Device Inventory Management&lt;/h3&gt;
&lt;p&gt;Managing thousands of devices across distributed infrastructure becomes conversational:
&quot;Show me all unassigned compute devices in the production environment.&quot;
&quot;Which devices haven&apos;t been updated in 30 days and are approaching warranty expiration?&quot;
&quot;Generate an inventory report showing all PCI-compliant devices with their support levels.&quot;&lt;/p&gt;
&lt;p&gt;The MCP server enables:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Querying devices with complex filter expressions&lt;/li&gt;
&lt;li&gt;Natural language parsing of device attributes&lt;/li&gt;
&lt;li&gt;Pattern recognition across device types and states&lt;/li&gt;
&lt;li&gt;Automated capacity planning and forecasting&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Subscriptions&lt;/h3&gt;
&lt;p&gt;Subscription and license management becomes equally conversational:&lt;/p&gt;
&lt;p&gt;&quot;Show me all subscriptions expiring in the next 90 days and their renewal status.&quot;
&quot;Which subscriptions have less than 20% utilization and could be right-sized?&quot;&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;Subscriptions&lt;/strong&gt; MCP server enables comprehensive subscription analytics: you can query subscriptions with time-based filters, monitor expirations, verify licenses, identify underutilized resources, and forecast renewal costs. This transforms subscription management from a reactive process into strategic planning conversations with your data.&lt;/p&gt;
&lt;h3&gt;User Management&lt;/h3&gt;
&lt;p&gt;Security teams identify access risks conversationally:&lt;/p&gt;
&lt;p&gt;&quot;Show me all active users who haven&apos;t logged in for 180 days and should be reviewed for deactivation.&quot;
&quot;How many new users were onboarded last month and what&apos;s their login activity?&quot;&lt;/p&gt;
&lt;p&gt;These capabilities support comprehensive user lifecycle management. The &lt;strong&gt;Users&lt;/strong&gt; MCP server can detect dormant accounts, analyze user growth trends to inform capacity planning, track onboarding metrics, and perform month-over-month comparisons to identify usage patterns.&lt;/p&gt;
&lt;h2&gt;Getting Started&lt;/h2&gt;
&lt;p&gt;Setting up an MCP server for GreenLake requires three steps: obtaining API credentials, configuring the server, and connecting your AI assistant.&lt;/p&gt;
&lt;h3&gt;Prerequisites&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;GreenLake workspace with API access&lt;/li&gt;
&lt;li&gt;Python 3.10 or higher&lt;/li&gt;
&lt;li&gt;An MCP-compatible AI client (Claude Desktop, VS Code with Claude Code extension)&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Step 1: Obtain API Credentials&lt;/h3&gt;
&lt;p&gt;From the GreenLake console:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Settings&lt;/strong&gt; &gt; &lt;strong&gt;API Clients&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Create a new API client with appropriate scopes&lt;/li&gt;
&lt;li&gt;Note the &lt;strong&gt;Client ID&lt;/strong&gt;, &lt;strong&gt;Client Secret&lt;/strong&gt;, and &lt;strong&gt;Workspace ID&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Step 2: Generate and Configure the MCP Server&lt;/h3&gt;
&lt;p&gt;For this example, we&apos;ll set up an audit logs MCP server. The MCP generator tool (used internally by HPE to accelerate development) creates production-ready servers from OpenAPI specifications, though the generated servers can be deployed and configured independently.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Clone or download the pre-generated MCP server
git clone https://github.com/HewlettPackard/gl-mcp
cd gl-mcp/src/audit-logs

# Install dependencies using uv (fast Python package manager)
uv sync

# Configure environment variables
cat &gt; .env.local &amp;#x3C;&amp;#x3C; EOF
GREENLAKE_API_BASE_URL=https://global.api.greenlake.hpe.com
GREENLAKE_CLIENT_ID=your-client-id
GREENLAKE_CLIENT_SECRET=your-client-secret
GREENLAKE_WORKSPACE_ID=your-workspace-id
MCP_TOOL_MODE=static
GREENLAKE_LOG_LEVEL=INFO
EOF

# Test the server
make test
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Step 3: Connect Your AI Assistant&lt;/h3&gt;
&lt;p&gt;For Claude Desktop, add the server to your configuration file (&lt;code&gt;~/Library/Application Support/Claude/claude_desktop_config.json&lt;/code&gt; on macOS). See the following code sample:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &quot;mcpServers&quot;: {
    &quot;greenlake-audit-logs&quot;: {
      &quot;command&quot;: &quot;uv&quot;,
      &quot;args&quot;: [&quot;run&quot;, &quot;python&quot;, &quot;__main__.py&quot;],
      &quot;cwd&quot;: &quot;/path/to/gl-mcp/src/audit-logs&quot;,
      &quot;env&quot;: {
        &quot;GREENLAKE_API_BASE_URL&quot;: &quot;https://global.api.greenlake.hpe.com&quot;,
        &quot;GREENLAKE_CLIENT_ID&quot;: &quot;your-client-id&quot;,
        &quot;GREENLAKE_CLIENT_SECRET&quot;: &quot;your-client-secret&quot;,
        &quot;GREENLAKE_WORKSPACE_ID&quot;: &quot;your-workspace-id&quot;,
        &quot;MCP_TOOL_MODE&quot;: &quot;static&quot;
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;For VS Code with the Claude Code extension, create or update &lt;code&gt;.vscode/mcp.json&lt;/code&gt; in your workspace. See the following code sample:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &quot;servers&quot;: {
    &quot;greenlake-audit-logs&quot;: {
      &quot;type&quot;: &quot;stdio&quot;,
      &quot;command&quot;: &quot;uv&quot;,
      &quot;args&quot;: [&quot;run&quot;, &quot;python&quot;, &quot;__main__.py&quot;],
      &quot;cwd&quot;: &quot;/path/to/gl-mcp/src/audit-logs&quot;,
      &quot;env&quot;: {
        &quot;GREENLAKE_API_BASE_URL&quot;: &quot;https://global.api.greenlake.hpe.com&quot;,
        &quot;GREENLAKE_CLIENT_ID&quot;: &quot;your-client-id&quot;,
        &quot;GREENLAKE_CLIENT_SECRET&quot;: &quot;your-client-secret&quot;,
        &quot;GREENLAKE_WORKSPACE_ID&quot;: &quot;your-workspace-id&quot;
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Restart your AI client, and the MCP server will automatically connect.&lt;/p&gt;
&lt;h3&gt;Example Interaction&lt;/h3&gt;
&lt;p&gt;Once configured, you can interact naturally:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;You&lt;/strong&gt;: &quot;Show me the 10 most recent audit log entries&quot;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Claude&lt;/strong&gt; (using the &lt;strong&gt;getAuditLogs&lt;/strong&gt; tool with &lt;strong&gt;limit=10&lt;/strong&gt;):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;Here are the 10 most recent audit log entries:

1. [2025-01-15 14:32:15] User Management - User admin@example.com logged in
2. [2025-01-15 14:28:43] Device Management - Device server-01 status changed to active
3. [2025-01-15 14:15:22] Workspace - Workspace Production updated
...
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The AI automatically invokes the appropriate MCP tool, formats parameters, makes the API call through the server, and presents results in a readable format.&lt;/p&gt;
&lt;h2&gt;Architecture Deep Dive&lt;/h2&gt;
&lt;p&gt;Understanding the internal architecture helps appreciate how MCP servers maintain security while providing powerful capabilities.&lt;/p&gt;
&lt;h3&gt;Component Overview&lt;/h3&gt;
&lt;p&gt;&lt;img src=&quot;/img/component-overview.png&quot; alt=&quot;component overview&quot; title=&quot;Component Overview&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Request Flow&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;AI Request&lt;/strong&gt;: The AI assistant formulates a tool invocation (e.g., &quot;get audit logs with category &apos;User Management&apos;&quot;)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;MCP Protocol&lt;/strong&gt;: The request is serialized as JSON-RPC and sent via stdio to the MCP server&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Tool Resolution&lt;/strong&gt;: The server&apos;s tool registry identifies the corresponding tool implementation&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Parameter Validation&lt;/strong&gt;: Input parameters are validated against the tool&apos;s schema&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Authentication Check&lt;/strong&gt;: The auth manager verifies token validity and freshness&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;API Call&lt;/strong&gt;: The HTTP client constructs and sends the HTTPS request to GreenLake&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Response Processing&lt;/strong&gt;: The raw API response is parsed and formatted&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;MCP Response&lt;/strong&gt;: Structured data is returned to the AI via JSON-RPC&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;AI Presentation&lt;/strong&gt;: The assistant formats and presents results to the user&lt;/li&gt;
&lt;/ol&gt;
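&lt;p&gt;To make step 2 concrete, the snippet below prints the approximate shape of a &lt;strong&gt;tools/call&lt;/strong&gt; JSON-RPC request as it travels over stdio; the field values are illustrative.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import json

# Approximate shape of an MCP tools/call request written to the server&apos;s stdin.
# The filter value follows the OData-style syntax described earlier and is illustrative.
request = {
    &quot;jsonrpc&quot;: &quot;2.0&quot;,
    &quot;id&quot;: 1,
    &quot;method&quot;: &quot;tools/call&quot;,
    &quot;params&quot;: {
        &quot;name&quot;: &quot;getAuditLogs&quot;,
        &quot;arguments&quot;: {&quot;filter&quot;: &quot;category eq &apos;User Management&apos;&quot;, &quot;limit&quot;: 10},
    },
}
print(json.dumps(request, indent=2))
&lt;/code&gt;&lt;/pre&gt;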
&lt;h3&gt;Tool Implementation&lt;/h3&gt;
&lt;p&gt;Each MCP tool implements a consistent interface:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;class GetAuditLogsTool(BaseTool):
    &quot;&quot;&quot;Tool for querying GreenLake audit logs.&quot;&quot;&quot;

    name = &quot;getAuditLogs&quot;
    description = &quot;Retrieve audit logs with optional filtering&quot;

    input_schema = {
        &quot;type&quot;: &quot;object&quot;,
        &quot;properties&quot;: {
            &quot;filter&quot;: {&quot;type&quot;: &quot;string&quot;, &quot;description&quot;: &quot;OData filter expression&quot;},
            &quot;limit&quot;: {&quot;type&quot;: &quot;integer&quot;, &quot;default&quot;: 100},
            &quot;offset&quot;: {&quot;type&quot;: &quot;integer&quot;, &quot;default&quot;: 0}
        }
    }

    async def execute(self, filter: str = None, limit: int = 100, offset: int = 0):
        &quot;&quot;&quot;Execute the audit log query.&quot;&quot;&quot;
        # Parameter validation
        params = {&quot;limit&quot;: limit, &quot;offset&quot;: offset}
        if filter:
            params[&quot;filter&quot;] = filter

        # Make authenticated API call
        response = await self.http_client.get(&quot;/audit-log/v1/logs&quot;, params=params)

        # Return structured data
        return response
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This abstraction allows the MCP server to expose dozens of API endpoints with minimal code duplication.&lt;/p&gt;
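&lt;p&gt;The &lt;strong&gt;GetAuditLogsTool&lt;/strong&gt; above inherits from a &lt;strong&gt;BaseTool&lt;/strong&gt; class that is not shown. A minimal sketch of what such a base class could look like follows, assuming an httpx-based async client and bearer-token authentication; the names and structure are illustrative rather than the repository&apos;s actual code.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import httpx

class BaseTool:
    &quot;&quot;&quot;Illustrative base class: shared HTTP plumbing for generated tools.&quot;&quot;&quot;

    name: str = &quot;&quot;
    description: str = &quot;&quot;
    input_schema: dict = {}

    def __init__(self, base_url: str, token: str) -&gt; None:
        # One authenticated client shared by every request the tool makes.
        self.http_client = httpx.AsyncClient(
            base_url=base_url,
            headers={&quot;Authorization&quot;: f&quot;Bearer {token}&quot;},
        )

    async def execute(self, **kwargs):
        raise NotImplementedError

    def to_mcp_definition(self) -&gt; dict:
        &quot;&quot;&quot;Shape of the tool metadata the server advertises to MCP clients.&quot;&quot;&quot;
        return {&quot;name&quot;: self.name, &quot;description&quot;: self.description, &quot;inputSchema&quot;: self.input_schema}
&lt;/code&gt;&lt;/pre&gt;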
&lt;h3&gt;Dynamic Mode Optimization&lt;/h3&gt;
&lt;p&gt;For large APIs, dynamic mode provides significant performance benefits. Instead of individual tool files for each endpoint, three meta-tools handle all operations:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;1. list_endpoints&lt;/strong&gt; - Fast discovery&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Returns: [&quot;GET /audit-logs/v1/logs&quot;, &quot;GET /devices/v1/servers&quot;, ...]
# Allows AI to browse available operations
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;2. get_endpoint_schema&lt;/strong&gt; - On-demand schema loading&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Input: &quot;GET /audit-logs/v1/logs&quot;
# Returns: Full parameter schema, response types, descriptions
# AI learns how to use an endpoint only when needed
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;3. invoke_dynamic_tool&lt;/strong&gt; - Validated execution&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Input: endpoint, parameters
# Validates parameters against schema
# Makes API call
# Returns structured response
&lt;/code&gt;&lt;/pre&gt;
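&lt;p&gt;A rough sketch of how &lt;strong&gt;invoke_dynamic_tool&lt;/strong&gt; could combine schema validation and execution is shown below, using the &lt;strong&gt;jsonschema&lt;/strong&gt; package for runtime validation. This is illustrative, not the generator&apos;s actual implementation.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import httpx
from jsonschema import validate  # runtime schema validation

async def invoke_dynamic_tool(client: httpx.AsyncClient, endpoint: str, schema: dict, parameters: dict) -&gt; dict:
    &quot;&quot;&quot;Validate parameters against the endpoint schema, then issue the GET request (illustrative).&quot;&quot;&quot;
    # 1. Runtime validation: reject malformed parameters before any network call.
    validate(instance=parameters, schema=schema)

    # 2. Only read-only operations are exposed, so the endpoint string looks like &quot;GET /audit-log/v1/logs&quot;.
    method, path = endpoint.split(&quot; &quot;, 1)
    if method != &quot;GET&quot;:
        raise ValueError(&quot;Only read-only GET endpoints are exposed&quot;)

    # 3. Make the authenticated API call and return the parsed response.
    resp = await client.get(path, params=parameters)
    resp.raise_for_status()
    return resp.json()
&lt;/code&gt;&lt;/pre&gt;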
&lt;p&gt;This architecture scales efficiently to APIs with hundreds of endpoints without overwhelming the AI&apos;s context window.&lt;/p&gt;
&lt;h2&gt;What&apos;s Next?&lt;/h2&gt;
&lt;p&gt;We are actively working on extending the capabilities of our GreenLake MCP servers. Stay tuned for future updates.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;MCP servers bridge the gap between AI assistants and enterprise APIs, enabling natural language interaction with GreenLake without sacrificing security or control. By running locally and using personal API credentials, they provide a secure, auditable way to extend AI capabilities into your infrastructure management workflows.&lt;/p&gt;
&lt;p&gt;The read-only nature of these servers makes them ideal for exploration, reporting, and analysis tasks. Combined with the power of large language models, they transform how teams interact with cloud platforms, shifting from manual API navigation to conversational queries and automated insights.&lt;/p&gt;
&lt;p&gt;Whether you&apos;re investigating audit logs, documenting infrastructure, or generating compliance reports, MCP servers make GreenLake data accessible where you need it: in your AI-powered development environment.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Get the HPE Morpheus Enterprise Software Community Edition for your home lab]]></title><description><![CDATA[I'm thrilled to announce the availability of the HPE Morpheus Enterprise Software Community Edition. HPE Morpheus Enterprise is a powerful…]]></description><link>https://developer.hpe.com/hpe-morpheus-enterprise-software-community-edition-available-now/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-morpheus-enterprise-software-community-edition-available-now/</guid><pubDate>Tue, 02 Dec 2025 09:29:28 GMT</pubDate><content:encoded>&lt;!--\\\[if gte mso 9]&gt;&lt;xml&gt;
   Name=&quot;Grid Table 6 Colorful Accent 4&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;52&quot;
   Name=&quot;Grid Table 7 Colorful Accent 4&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;46&quot;
   Name=&quot;Grid Table 1 Light Accent 5&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;47&quot; Name=&quot;Grid Table 2 Accent 5&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;48&quot; Name=&quot;Grid Table 3 Accent 5&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;49&quot; Name=&quot;Grid Table 4 Accent 5&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;50&quot; Name=&quot;Grid Table 5 Dark Accent 5&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;51&quot;
   Name=&quot;Grid Table 6 Colorful Accent 5&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;52&quot;
   Name=&quot;Grid Table 7 Colorful Accent 5&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;46&quot;
   Name=&quot;Grid Table 1 Light Accent 6&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;47&quot; Name=&quot;Grid Table 2 Accent 6&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;48&quot; Name=&quot;Grid Table 3 Accent 6&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;49&quot; Name=&quot;Grid Table 4 Accent 6&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;50&quot; Name=&quot;Grid Table 5 Dark Accent 6&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;51&quot;
   Name=&quot;Grid Table 6 Colorful Accent 6&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;52&quot;
   Name=&quot;Grid Table 7 Colorful Accent 6&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;46&quot; Name=&quot;List Table 1 Light&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;47&quot; Name=&quot;List Table 2&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;48&quot; Name=&quot;List Table 3&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;49&quot; Name=&quot;List Table 4&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;50&quot; Name=&quot;List Table 5 Dark&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;51&quot; Name=&quot;List Table 6 Colorful&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;52&quot; Name=&quot;List Table 7 Colorful&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;46&quot;
   Name=&quot;List Table 1 Light Accent 1&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;47&quot; Name=&quot;List Table 2 Accent 1&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;48&quot; Name=&quot;List Table 3 Accent 1&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;49&quot; Name=&quot;List Table 4 Accent 1&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;50&quot; Name=&quot;List Table 5 Dark Accent 1&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;51&quot;
   Name=&quot;List Table 6 Colorful Accent 1&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;52&quot;
   Name=&quot;List Table 7 Colorful Accent 1&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;46&quot;
   Name=&quot;List Table 1 Light Accent 2&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;47&quot; Name=&quot;List Table 2 Accent 2&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;48&quot; Name=&quot;List Table 3 Accent 2&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;49&quot; Name=&quot;List Table 4 Accent 2&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;50&quot; Name=&quot;List Table 5 Dark Accent 2&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;51&quot;
   Name=&quot;List Table 6 Colorful Accent 2&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;52&quot;
   Name=&quot;List Table 7 Colorful Accent 2&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;46&quot;
   Name=&quot;List Table 1 Light Accent 3&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;47&quot; Name=&quot;List Table 2 Accent 3&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;48&quot; Name=&quot;List Table 3 Accent 3&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;49&quot; Name=&quot;List Table 4 Accent 3&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;50&quot; Name=&quot;List Table 5 Dark Accent 3&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;51&quot;
   Name=&quot;List Table 6 Colorful Accent 3&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;52&quot;
   Name=&quot;List Table 7 Colorful Accent 3&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;46&quot;
   Name=&quot;List Table 1 Light Accent 4&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;47&quot; Name=&quot;List Table 2 Accent 4&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;48&quot; Name=&quot;List Table 3 Accent 4&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;49&quot; Name=&quot;List Table 4 Accent 4&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;50&quot; Name=&quot;List Table 5 Dark Accent 4&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;51&quot;
   Name=&quot;List Table 6 Colorful Accent 4&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;52&quot;
   Name=&quot;List Table 7 Colorful Accent 4&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;46&quot;
   Name=&quot;List Table 1 Light Accent 5&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;47&quot; Name=&quot;List Table 2 Accent 5&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;48&quot; Name=&quot;List Table 3 Accent 5&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;49&quot; Name=&quot;List Table 4 Accent 5&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;50&quot; Name=&quot;List Table 5 Dark Accent 5&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;51&quot;
   Name=&quot;List Table 6 Colorful Accent 5&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;52&quot;
   Name=&quot;List Table 7 Colorful Accent 5&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;46&quot;
   Name=&quot;List Table 1 Light Accent 6&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;47&quot; Name=&quot;List Table 2 Accent 6&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;48&quot; Name=&quot;List Table 3 Accent 6&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;49&quot; Name=&quot;List Table 4 Accent 6&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;50&quot; Name=&quot;List Table 5 Dark Accent 6&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;51&quot;
   Name=&quot;List Table 6 Colorful Accent 6&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;52&quot;
   Name=&quot;List Table 7 Colorful Accent 6&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; SemiHidden=&quot;true&quot; UnhideWhenUsed=&quot;true&quot;
   Name=&quot;Mention&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; SemiHidden=&quot;true&quot; UnhideWhenUsed=&quot;true&quot;
   Name=&quot;Smart Hyperlink&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; SemiHidden=&quot;true&quot; UnhideWhenUsed=&quot;true&quot;
   Name=&quot;Hashtag&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; SemiHidden=&quot;true&quot; UnhideWhenUsed=&quot;true&quot;
   Name=&quot;Unresolved Mention&quot;/&gt;
  &lt;w:LsdException Locked=&quot;false&quot; SemiHidden=&quot;true&quot; UnhideWhenUsed=&quot;true&quot;
   Name=&quot;Smart Link&quot;/&gt;
 &lt;/w:LatentStyles&gt;
&lt;/xml&gt;&lt;!\\\[endif]--&gt;
&lt;!--\\\[if gte mso 10]&gt;
&lt;style&gt;
 /* Style Definitions */
 table.MsoNormalTable
	{mso-style-name:&quot;Table Normal&quot;;
	mso-tstyle-rowband-size:0;
	mso-tstyle-colband-size:0;
	mso-style-noshow:yes;
	mso-style-priority:99;
	mso-style-parent:&quot;&quot;;
	mso-padding-alt:0in 5.4pt 0in 5.4pt;
	mso-para-margin-top:0in;
	mso-para-margin-right:0in;
	mso-para-margin-bottom:8.0pt;
	mso-para-margin-left:0in;
	line-height:107%;
	mso-pagination:widow-orphan;
	font-size:11.0pt;
	font-family:&quot;Calibri&quot;,sans-serif;
	mso-ascii-font-family:Calibri;
	mso-ascii-theme-font:minor-latin;
	mso-hansi-font-family:Calibri;
	mso-hansi-theme-font:minor-latin;
	mso-bidi-font-family:&quot;Times New Roman&quot;;
	mso-bidi-theme-font:minor-bidi;}
&lt;/style&gt;
&lt;!\\\[endif]--&gt;
&lt;!--StartFragment--&gt;
&lt;p&gt;I&apos;m thrilled to announce the availability of the &lt;strong&gt;HPE Morpheus Enterprise Software Community Edition.&lt;/strong&gt; HPE Morpheus Enterprise is a powerful and flexible solution designed to accelerate your multi-cloud and hybrid IT automation journey. Whether you’re just starting out or looking to optimize your existing cloud infrastructure, this is your chance to explore HPE Morpheus Enterprise and learn why &lt;a href=&quot;https://www.hpe.com/psnow/doc/a00141838enw&quot;&gt;ISG, in its latest quadrant report, named HPE Morpheus the leader in Hybrid Cloud Management Platforms&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What is HPE Morpheus Enterprise?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;HPE Morpheus Enterprise is a cutting-edge software platform that empowers IT teams to unify management, streamline automation, and simplify operations across private and public clouds. It&apos;s built to help organizations modernize their infrastructure while improving agility and reducing complexity. With features tailored for VM lifecycle management, self-service provisioning, and policy-based governance, HPE Morpheus Enterprise is the ultimate tool for managing hybrid cloud environments. Here&apos;s &lt;a href=&quot;http://www.hpe.com/morpheus&quot;&gt;a link to the hpe.com page&lt;/a&gt; where you can learn more by reading additional content and watching videos.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Community Edition highlights&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The HPE Morpheus Enterprise Software Community Edition includes all the features in HPE Morpheus VM Essentials, enabling you to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Deploy virtual machines quickly and efficiently through intuitive self-service workflows.&lt;/li&gt;
&lt;li&gt;Automate day-to-day VM operations with seamless provisioning and lifecycle management.&lt;/li&gt;
&lt;li&gt;Leverage policies for governance and compliance, ensuring consistent management across environments.&lt;/li&gt;
&lt;li&gt;Experiment with the platform at no cost and discover how it can transform your IT infrastructure.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Whether you’re an IT or VM administrator, a DevOps engineer, or a cloud architect, the Community Edition is the perfect way to experience the power of HPE Morpheus Enterprise firsthand. However, note that the Community Edition is a non-commercial, &quot;&lt;em&gt;personal use&lt;/em&gt;&quot; license. It has limited features and supplies a 3-socket license. If you are interested in testing Morpheus Enterprise in your environment, your HPE representative can assist you in finding ways to do that.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Get started today&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Ready to dive in? The HPE Morpheus Enterprise Software Community Edition is available now, and I can’t wait to see how you use it to innovate and streamline your IT operations. Join the HPE Community and explore the possibilities with this exciting release! Send an email to &lt;strong&gt;&lt;a href=&quot;mailto:morpheus_communityedition@hpe.com&quot;&gt;morpheus_communityedition@hpe.com&lt;/a&gt;&lt;/strong&gt; with your request. If you are eligible, I&apos;ll send you an invite to get the license and download the binaries. &lt;/p&gt;
&lt;!--EndFragment--&gt;</content:encoded></item><item><title><![CDATA[Configuring SAML SSO Authentication with HPE GreenLake: A Guide for the Top 3 Identity Providers and Passwordless Integration for HPECOMCmdlets]]></title><description><![CDATA[NONE]]></description><link>https://developer.hpe.com/configuring-saml-sso-authentication-with-hpe-greenlake-a-guide-for-the-top-3-identity-providers-and-passwordless-integration-for-hpecomcmdlets/</link><guid isPermaLink="false">https://developer.hpe.com/configuring-saml-sso-authentication-with-hpe-greenlake-a-guide-for-the-top-3-identity-providers-and-passwordless-integration-for-hpecomcmdlets/</guid><pubDate>Wed, 19 Nov 2025 12:17:54 GMT</pubDate><content:encoded>&lt;p&gt;NONE&lt;/p&gt;</content:encoded></item><item><title><![CDATA[A Beginner’s Guide to Building and Compiling HPE Morpheus Enterprise Plugins]]></title><description><![CDATA[Introduction HPE Morpheus Enterprise is a hybrid cloud platform that unifies diverse products and technologies into a consistent workload…]]></description><link>https://developer.hpe.com/morpheus-plugin-tutorial-how-to-build-and-compile/</link><guid isPermaLink="false">https://developer.hpe.com/morpheus-plugin-tutorial-how-to-build-and-compile/</guid><pubDate>Wed, 19 Nov 2025 08:43:36 GMT</pubDate><content:encoded>&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;HPE Morpheus Enterprise is a hybrid cloud platform that unifies diverse products and technologies into a consistent workload-lifecycle orchestration, governance, and control framework.&lt;/p&gt;
&lt;p&gt;This makes HPE Morpheus Enterprise ideally positioned to integrate with a broad ecosystem of cloud-related service vendors. These integrations are enabled through technology-specific plugin providers. HPE Morpheus Enterprise is extendable with custom plugins for clouds, task types, UI tabs, reports, approvals, cypher, IPAM, backups and more.&lt;/p&gt;
&lt;p&gt;This article covers the process of generating and compiling a basic HPE Morpheus Enterprise generic plugin project on Windows 11. To show how the workflow fits together, it walks through:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Generating a new project using the plugin code generator&lt;/li&gt;
&lt;li&gt;Unzipping and opening the project in an IDE&lt;/li&gt;
&lt;li&gt;Exploring main plugin file components&lt;/li&gt;
&lt;li&gt;Compiling the plugin on Windows&lt;/li&gt;
&lt;li&gt;Uploading the compiled plugin to HPE Morpheus Enterprise&lt;/li&gt;
&lt;li&gt;Compiling the plugin remotely on Linux, using Visual Studio Code&lt;/li&gt;
&lt;li&gt;Compiling the plugin using Docker&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;JDK Prerequisite&lt;/h2&gt;
&lt;p&gt;For the labs in this document, we assume a &lt;strong&gt;Windows 11 host&lt;/strong&gt; with internet access and &lt;strong&gt;Visual Studio Code&lt;/strong&gt; installed.&lt;/p&gt;
&lt;p&gt;You’ll also need to have &lt;strong&gt;Java JDK 11 or 17&lt;/strong&gt; installed. The vendor distribution of Java is not important — both OpenJDK and Oracle JDK are supported.&lt;/p&gt;
&lt;p&gt;When using JDK 17, the project’s compile &lt;strong&gt;compatibility level is set to Java 11&lt;/strong&gt; to maintain compatibility with earlier environments.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Open a Windows command prompt (Press Win + R or click Start, type cmd, press enter)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To install OpenJDK 17, run the following command and click yes to provide administrative privileges where needed:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;winget install jdkbuild.openjdk.17.jdk
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To verify your OpenJDK install, run:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;java -version
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/0installjdk.png&quot; alt=&quot;Installing and testing Java JDK&quot; title=&quot;Install and test java&quot;&gt;&lt;/p&gt;
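&lt;p&gt;If that package identifier is not available from your configured winget sources, you can search for an OpenJDK build and install one of the hits instead. The identifier below (&lt;strong&gt;Microsoft.OpenJDK.17&lt;/strong&gt;) is just one commonly available example; adapt it to whatever your search returns:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;winget search openjdk
winget install Microsoft.OpenJDK.17
&lt;/code&gt;&lt;/pre&gt;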
&lt;p&gt;Another build dependency is the &lt;strong&gt;Gradle build system&lt;/strong&gt;. This will automatically be provided by the &lt;strong&gt;Gradle wrapper configuration&lt;/strong&gt; under the &lt;strong&gt;gradle directory&lt;/strong&gt; of the project we will generate.&lt;/p&gt;
&lt;h2&gt;Creating a plugin project&lt;/h2&gt;
&lt;p&gt;Creating a project that compiles code into usable plugins can be a daunting task, especially for developers who are not familiar with Java, Groovy, or Gradle.&lt;/p&gt;
&lt;p&gt;To simplify this process and make it easier for potential plugin builders to get started, the HPE Morpheus Enterprise engineering team created the &lt;strong&gt;Morpheus Plugin Code Generator&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;We’ll use this handy tool as a starting point to create our new plugin project.
The preferred source language for HPE Morpheus Enterprise plugins is &lt;strong&gt;Groovy&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Groovy features a concise, flexible syntax and includes many helper methods that make coding easier. It’s fully interoperable with Java and compiles to the same JVM bytecode.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Using a web browser, navigate to &lt;a href=&quot;https://developer.morpheusdata.com/&quot;&gt;https://developer.morpheusdata.com/&lt;/a&gt;. Click the Get Started Now button.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/1developer_getting_started_button.png&quot; alt=&quot;Launch plugin code generator button&quot; title=&quot;Launch plugin code generator&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;For this lab, we provide the following field values:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Name&lt;/strong&gt;: Plugin Demo&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Code&lt;/strong&gt;: pluginDemo&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Providers&lt;/strong&gt;: Generic Integration&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/2developer_generate_plugin.png&quot; alt=&quot;Generate plugin project&quot; title=&quot;Generate plugin project&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Unzip the plugin project for use in an IDE. For this example, we will unzip the plugin to the Windows Documents folder.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/3unzip_plugin_project.png&quot; alt=&quot;Extract code project&quot; title=&quot;Extract code project&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Authoring plugin projects in an IDE&lt;/h2&gt;
&lt;p&gt;Adding logic and complexity to a working plugin is an exercise in object-oriented programming. Writing code in plain text editors can be tedious, time-consuming, and error prone. To make development easier, we use an IDE such as Visual Studio Code.&lt;/p&gt;
&lt;p&gt;Although we use Visual Studio Code in this example, several more powerful Java/Groovy IDEs are available, including IntelliJ IDEA, Eclipse, and NetBeans.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Open &lt;strong&gt;Visual Studio Code&lt;/strong&gt; and select &lt;strong&gt;Open Folder&lt;/strong&gt; from the &lt;strong&gt;File&lt;/strong&gt; menu.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/4vsc_open_folder.png&quot; alt=&quot;Visual Studio Code Open Folder&quot; title=&quot;VS Code Open Folder&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Respond &lt;strong&gt;Yes&lt;/strong&gt; to the trust prompt. If this is the first project of its type you have opened, VS Code will prompt you to install the Java-related extension packs. Respond by clicking &lt;strong&gt;Install&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/5trust_and_install_extensions.png&quot; alt=&quot;Trust and install extensions prompt&quot; title=&quot;Trust and install extensions prompt&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The extension pack will take a while to install and build/configure the project. This can be seen at the bottom left of the VS Code window.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/61configuring.png&quot; alt=&quot;Configuring the extension pack in Visual Studio Code&quot; title=&quot;Extension pack setup&quot;&gt;&lt;/p&gt;
&lt;p&gt;Wait for the &lt;strong&gt;Gradle: configure project&lt;/strong&gt; message to disappear and the &lt;strong&gt;Java: Ready&lt;/strong&gt; message to remain.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/62configured.png&quot; alt=&quot;Project successfully configured in Visual Studio Code&quot; title=&quot;Project build&quot;&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Open the &lt;strong&gt;Explorer&lt;/strong&gt; view by clicking the corresponding icon at the top left. The &lt;strong&gt;Welcome&lt;/strong&gt; and any open &lt;strong&gt;Extension&lt;/strong&gt; tabs can now be closed.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/7explorer_view.png&quot; alt=&quot;Explorer view in Visual Studio Code&quot; title=&quot;Open explorer view&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Important project files&lt;/h2&gt;
&lt;p&gt;We will briefly explore some of the most important files. In a separate article, we will cover the project structure and the build files in more detail.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;gradlew.bat&lt;/strong&gt;&lt;br&gt;
OS shell wrapper script used for compiling the plugin on Windows.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;gradlew&lt;/strong&gt;&lt;br&gt;
OS shell wrapper script used for compiling on Linux.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;gradle.properties&lt;/strong&gt;&lt;br&gt;
Variables used in the Gradle build, typically version numbers.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;build.gradle&lt;/strong&gt;&lt;br&gt;
This is the actual build script. Build dependencies are declared here. This configures how the .jar file is built.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;PluginDemoPlugin.groovy&lt;/strong&gt;&lt;br&gt;
i. The main plugin entry point class that HPE Morpheus Enterprise will load.&lt;br&gt;
ii. Extends &lt;strong&gt;com.morpheusdata.core.Plugin&lt;/strong&gt;&lt;br&gt;
iii. Specified in the build.gradle file as the &lt;strong&gt;Plugin-Class&lt;/strong&gt;&lt;br&gt;
iv. Registers &lt;strong&gt;provider&lt;/strong&gt; classes that add functionality to the plugin&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;PluginDemoGenericProvider&lt;/strong&gt;&lt;br&gt;
A provider class that adds generic functionality. Many types of providers can be added to plugins.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;pluginDemoShow.hbs&lt;/strong&gt;&lt;br&gt;
Handlebars markup to display UI elements in the HPE Morpheus Enterprise web UI.&lt;/li&gt;
&lt;/ul&gt;
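&lt;p&gt;Putting these pieces together, the generated project layout looks roughly like the sketch below. File and class names follow the plugin name and code entered in the generator, and exact subfolder paths may differ slightly between generator versions, so treat this purely as an orientation aid:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;pluginDemo/
  gradlew, gradlew.bat     wrapper scripts for Linux and Windows
  gradle/                  pinned Gradle wrapper distribution
  gradle.properties        version variables used by the build
  build.gradle             build script; declares dependencies and the Plugin-Class
  src/main/groovy/         PluginDemoPlugin.groovy and PluginDemoGenericProvider.groovy
  src/main/resources/      pluginDemoShow.hbs and other static assets
  build/libs/              compiled .jar files land here after a build
&lt;/code&gt;&lt;/pre&gt;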
&lt;h2&gt;Compiling locally on Windows&lt;/h2&gt;
&lt;p&gt;As a first compile option, we will look at the local Windows environment.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Start by opening a terminal. You may need to click the ellipsis to expose the entire menu.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/81terminal.png&quot; alt=&quot;Opening a terminal in Visual Studio Code&quot; title=&quot;Open terminal&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run the Windows wrapper batch script command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-batch&quot;&gt;.\gradlew.bat clean build
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;strong&gt;shadowJar&lt;/strong&gt; task can be used instead of &lt;strong&gt;build&lt;/strong&gt;; see the note after this list.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/8gradlew_bat.png&quot; alt=&quot;Running the gradlew.bat command in terminal&quot; title=&quot;Build/compile command&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Find the &lt;strong&gt;.jar&lt;/strong&gt; file generated by a successful build under the &lt;strong&gt;build &gt; libs&lt;/strong&gt; directory. Use the &lt;strong&gt;.jar&lt;/strong&gt; file suffixed with &lt;strong&gt;-all&lt;/strong&gt;. This file contains all compilation dependencies and is therefore safer to upload into HPE Morpheus Enterprise.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/9.-jar-file.png&quot; alt=&quot;Browse to the generated jar file&quot; title=&quot;Browse to the jar file&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Upload the plugin to the UI by navigating to &lt;strong&gt;Administration &gt; Integrations &gt; Plugins &gt; Add&lt;/strong&gt;. Drag the file onto the dialog or browse to the .jar file and click Upload.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/10.-upload-jar.png&quot; alt=&quot;Upload jar file to the UI&quot; title=&quot;Upload jar file to the UI&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
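&lt;p&gt;As a side note on step 2: running &lt;strong&gt;shadowJar&lt;/strong&gt; produces the dependency-bundled jar directly, and you can check the build output from the same terminal. The file names below are hypothetical; the actual names depend on the values in &lt;strong&gt;gradle.properties&lt;/strong&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-batch&quot;&gt;.\gradlew.bat shadowJar
dir .\build\libs
:: expect something like pluginDemo-0.0.1.jar and pluginDemo-0.0.1-all.jar (hypothetical names)
&lt;/code&gt;&lt;/pre&gt;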
&lt;p&gt;A successful upload will add the plugin name to the list:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/11.-uploaded.png&quot; alt=&quot;Plugin successfully uploaded to the list&quot; title=&quot;Uploaded plugin into list&quot;&gt;&lt;/p&gt;
&lt;p&gt;To view the registered providers of an uploaded plugin, click the edit pencil to view the dialog.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/12.-plugin-providers.png&quot; alt=&quot;Plugin providers dialog in HPE Morpheus Enterprise&quot; title=&quot;View plugin details&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Compiling the plugin on Linux&lt;/h2&gt;
&lt;p&gt;Another compile option is to connect to a &lt;strong&gt;remote Linux&lt;/strong&gt; machine using an &lt;strong&gt;SSH&lt;/strong&gt; session from &lt;strong&gt;Visual Studio Code.&lt;/strong&gt; As a prerequisite for this, unzip the initial project to your Linux machine and ensure remote SSH connectivity. The &lt;strong&gt;Java JDK v17&lt;/strong&gt; will need to be present in this environment as well.&lt;/p&gt;
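&lt;p&gt;As a minimal sketch of that preparation on an Ubuntu host (package names differ on other distributions, and the zip and folder names here are hypothetical):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Install the JDK and unzip utility, then unpack the generated project
sudo apt-get update &amp;#x26;&amp;#x26; sudo apt-get install -y openjdk-17-jdk unzip
unzip pluginDemo.zip -d ~/pluginDemo
java -version
&lt;/code&gt;&lt;/pre&gt;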
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Click the &lt;strong&gt;Open Remote Window&lt;/strong&gt; button at the bottom left of the Visual Studio Code window. In the top popup option box, choose &lt;strong&gt;Connect to Host&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/13.-connect-to-host.png&quot; alt=&quot;Connect to remote host in Visual Studio Code&quot; title=&quot;Connect to remote host&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Choose &lt;strong&gt;&apos;+&apos; Add New SSH Host&lt;/strong&gt; and enter your &lt;strong&gt;user@host&lt;/strong&gt; combo. Choose any SSH configuration file on the system to save connection details to.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/13.-specify-host.png&quot; alt=&quot;Specify SSH host configuration&quot; title=&quot;Provide user host pair&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click the &lt;strong&gt;Open Remote Window&lt;/strong&gt; button at the bottom left of the Visual Studio Code window &lt;strong&gt;again&lt;/strong&gt;. Choose &lt;strong&gt;Connect to Host&lt;/strong&gt; again. This time, &lt;strong&gt;select the host&lt;/strong&gt; that was entered and saved in the previous step.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/13choose_host.png&quot; alt=&quot;Choose the configured host connection&quot; title=&quot;Choose host connection&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Satisfy any possible certificate security prompts and enter your password when prompted. A new Visual Studio Code window will be opened. As with the previous exercise, &lt;strong&gt;Open Folder&lt;/strong&gt; from the main &lt;strong&gt;File menu&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/13open_folder.png&quot; alt=&quot;Open folder on remote system&quot; title=&quot;Open remote folder&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select your &lt;strong&gt;Plugin Demo&lt;/strong&gt; folder and click &lt;strong&gt;OK&lt;/strong&gt;. You will be prompted to supply the password again. As before, allow for the installation of the required Java Extension packs and wait for the &lt;strong&gt;Gradle: configure project&lt;/strong&gt; message to disappear.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/61configuring.png&quot; alt=&quot;Configuring project on remote system&quot; title=&quot;Configuring project&quot;&gt;&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;Java: Ready&lt;/strong&gt; status message should remain in the bottom status bar.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/62configured.png&quot; alt=&quot;Project configured successfully on remote system&quot; title=&quot;Project configured&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;From the top main menu, choose &lt;strong&gt;Terminal &gt; New Terminal&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/14open-terminal.png&quot; alt=&quot;Open new terminal in remote Visual Studio Code&quot; title=&quot;Open new terminal&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Ensure that the gradle &lt;strong&gt;wrapper script&lt;/strong&gt; is &lt;strong&gt;executable&lt;/strong&gt; by running the following:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;chmod +x ./gradlew
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;This time, we run the &lt;strong&gt;gradlew&lt;/strong&gt; script, &lt;strong&gt;instead of gradlew.bat&lt;/strong&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;./gradlew clean build
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/15linux_compile.png&quot; alt=&quot;Compile using gradlew on Linux&quot; title=&quot;Compile using gradlew&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
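&lt;p&gt;Just as on Windows, the jar is written to the &lt;strong&gt;build/libs&lt;/strong&gt; directory, this time on the Linux machine. If you prefer to upload it from your workstation, one option is to copy it back with scp (hypothetical user, host, and path):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;scp user@linuxhost:~/pluginDemo/build/libs/*-all.jar .
&lt;/code&gt;&lt;/pre&gt;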
&lt;h2&gt;Compiling using Docker&lt;/h2&gt;
&lt;p&gt;To avoid the need for a dedicated local development environment or specific JDK installations, you can compile the plugin inside a &lt;strong&gt;Docker container&lt;/strong&gt;. Using the official Gradle image, the build process runs in an isolated environment that already includes the correct JDK version and Gradle tooling. This ensures consistent results across different systems and eliminates dependency issues.&lt;/p&gt;
&lt;p&gt;The prerequisite to this is a &lt;strong&gt;working Docker installation&lt;/strong&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo docker run --rm \
  -u &quot;$(id -u):$(id -g)&quot; \
  -v &quot;$PWD&quot;:/home/gradle/project \
  -v gradle-cache:/home/gradle/.gradle \
  -w /home/gradle/project \
  gradle:8.10.2-jdk17 \
  bash -lc &quot;chmod +x ./gradlew &amp;#x26;&amp;#x26; ./gradlew clean build&quot;
&lt;/code&gt;&lt;/pre&gt;
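&lt;p&gt;Run the command from the root of the unzipped plugin project. Because the project directory is bind-mounted into the container, the compiled jar ends up in the usual location on the host, which you can verify with:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;ls build/libs/
&lt;/code&gt;&lt;/pre&gt;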
&lt;h2&gt;Next steps&lt;/h2&gt;
&lt;p&gt;From here, we can explore the &lt;strong&gt;mechanics of the interfaces&lt;/strong&gt; exposed by the HPE Morpheus Enterprise Plugin Core. These interfaces are organized into various &lt;strong&gt;provider types&lt;/strong&gt;, each defining a specific kind of integration point within HPE Morpheus Enterprise.&lt;/p&gt;
&lt;p&gt;In essence, a &lt;em&gt;provider type&lt;/em&gt; represents a particular extension area in the platform — such as a &lt;strong&gt;custom tab&lt;/strong&gt;, &lt;strong&gt;analytics page&lt;/strong&gt;, &lt;strong&gt;dashboard widget&lt;/strong&gt;, or &lt;strong&gt;custom report&lt;/strong&gt;. These allow developers to inject new functionality directly into the UI or automation workflows.&lt;/p&gt;
&lt;p&gt;At the more advanced end of the spectrum are provider types that model &lt;strong&gt;core infrastructure components&lt;/strong&gt;. These include integrations for &lt;strong&gt;clouds&lt;/strong&gt;, &lt;strong&gt;networks&lt;/strong&gt;, &lt;strong&gt;storage systems&lt;/strong&gt;, and many others. Such providers tend to be more complex because they interact deeply with HPE Morpheus Enterprise’s provisioning, synchronization, and lifecycle management layers. Understanding how these provider types fit together is key to building powerful, production-grade plugins.&lt;/p&gt;
&lt;p&gt;Explore the following resources for more information on the different plugin/provider types:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.morpheusdata.com&quot;&gt;https://developer.morpheusdata.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://share.morpheusdata.com&quot;&gt;https://share.morpheusdata.com&lt;/a&gt; (follow the repository link under the plugin details to see the source code of a plugin)&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/hewlettpackard&quot;&gt;https://github.com/hewlettpackard&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://youtu.be/1twoNvPoEV4?si=elUEzCYGo88TIffX&quot;&gt;https://youtu.be/1twoNvPoEV4?si=elUEzCYGo88TIffX&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[10 Myths About Scalable Parallel Programming Languages (Redux), Part 8: Striving Toward Adoptability]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/10-myths-about-scalable-parallel-programming-languages-redux-part-8-striving-toward-adoptability/</link><guid isPermaLink="false">https://developer.hpe.com/10-myths-about-scalable-parallel-programming-languages-redux-part-8-striving-toward-adoptability/</guid><pubDate>Thu, 13 Nov 2025 07:17:53 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Kubernetes How-Tos, agentic AI for project managers, open-source workshop development and more]]></title><link>https://developer.hpe.com/2025-nov-03/</link><guid isPermaLink="false">https://developer.hpe.com/2025-nov-03/</guid><pubDate>Mon, 03 Nov 2025 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[10 Myths About Scalable Parallel Programming Languages (Redux), Part 7: Minimalist Language Designs]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/10-myths-about-scalable-parallel-programming-languages-redux-part-7-minimalist-language-designs/</link><guid isPermaLink="false">https://developer.hpe.com/10-myths-about-scalable-parallel-programming-languages-redux-part-7-minimalist-language-designs/</guid><pubDate>Wed, 15 Oct 2025 18:23:22 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Open Sourcing Workshops-on-Demand part 2: How to Deploy the infrastructure]]></title><description><![CDATA[In the first article of this series, I described the reasons behind the decision to open source our Workshops-on-Demand (WoD) project and…]]></description><link>https://developer.hpe.com/open-sourcing-workshops-on-demand-part-2-how-to-deploy-the-infrastructure/</link><guid isPermaLink="false">https://developer.hpe.com/open-sourcing-workshops-on-demand-part-2-how-to-deploy-the-infrastructure/</guid><pubDate>Thu, 09 Oct 2025 12:50:13 GMT</pubDate><content:encoded>&lt;p&gt;In the first &lt;a href=&quot;https://developer.hpe.com/blog/willing-to-build-up-your-own-workshops-on-demand-infrastructure/&quot;&gt;article&lt;/a&gt; of this series, I described the reasons behind the decision to open source our Workshops-on-Demand (WoD) project and gave you a comprehensive picture of the project&apos;s overall infrastructure. In this second article, I will explain how to deploy it.&lt;/p&gt;
&lt;p&gt;The overall infrastructure can run on physical servers or Virtual Machines. We usually designate one server for the frontend and a second server for the backend. You could also decide to separate every single component of each side.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/howto-wod-5.png&quot; alt=&quot;&quot; title=&quot;wod infrastucture&quot;&gt;&lt;/p&gt;
&lt;h2&gt;How to deploy your own Workshops-on-Demand infrastructure...&lt;/h2&gt;
&lt;p&gt;As explained in the previous &lt;a href=&quot;https://developer.hpe.com/blog/willing-to-build-up-your-own-workshops-on-demand-infrastructure/&quot;&gt;article&lt;/a&gt;, the project is split into multiple repositories along architectural and public/private lines. Since the publication of the previous article, additional repositories have appeared as the installation process evolved. The architecture is divided between the frontend, api-db, and backend. Project admins will need to decide whether they want to offer only public content to participants or add proprietary, private content as well.&lt;/p&gt;
&lt;p&gt;I will start with the simplest scenario: a public-only approach. Then I will dive into the specifics of the private approach.&lt;/p&gt;
&lt;h3&gt;Public-only deployment: No private backend or private workshops&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Important Note:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;This part is compulsory for any type of deployment, public-only or public + private.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;First, you need a repository to clone. The Workshops-on-Demand GitHub projects can be found &lt;a href=&quot;https://github.com/Workshops-on-Demand/&quot;&gt;here&lt;/a&gt;. We have packaged the project across several GitHub repos. Each repository handles a specific role in the overall architecture.&lt;/p&gt;
&lt;p&gt;Here&apos;s a quick look at what can be found in each:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/wod-repository-2025.png&quot; alt=&quot;&quot; title=&quot;WOD Repositories&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://github.com/Workshops-on-Demand/wod-notebooks&quot;&gt;wod-notebooks&lt;/a&gt;:&lt;/strong&gt; Public Workshops-on-Demand based on Jupyter Notebooks.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;You can test them live at &lt;a href=&quot;https://hackshack.hpedev.io/workshops&quot;&gt;https://hackshack.hpedev.io/workshops&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://github.com/Workshops-on-Demand/wod-install&quot;&gt;wod-install&lt;/a&gt;:&lt;/strong&gt; Installer part of the Workshops-on-Demand project.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://github.com/Workshops-on-Demand/wod-backend&quot;&gt;wod-backend&lt;/a&gt;:&lt;/strong&gt; Back-end part of our Workshops-on-Demand setup.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://github.com/Workshops-on-Demand/wod-frontend&quot;&gt;wod-frontend&lt;/a&gt;:&lt;/strong&gt; Frontend part of the Workshops-on-Demand project.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Based on NGINX and NodeJS technologies, it provides the participants&apos; Registration Portal used to enable booking of the workshops.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://github.com/Workshops-on-Demand/wod-api-db&quot;&gt;wod-api-db&lt;/a&gt;:&lt;/strong&gt; Workshops-on-Demand registration portal application&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;OpenAPI 3.0-based API used to manage the Workshops-on-Demand project. It also provides a database hosting the statuses of participants, workshops, and students.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://github.com/Workshops-on-Demand/wod-private&quot;&gt;wod-private&lt;/a&gt;:&lt;/strong&gt; Example Private configuration for Workshops-on-Demand (WoD).&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://github.com/Workshops-on-Demand/wod-frontend-private&quot;&gt;wod-frontend-private&lt;/a&gt;:&lt;/strong&gt; Private Frontend part of the Workshops-on-Demand project.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://github.com/Workshops-on-Demand/wod-api-db-private&quot;&gt;wod-api-db-private&lt;/a&gt;:&lt;/strong&gt; Workshops-on-Demand registration portal application&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;This provides examples for creating your own customization layer on top of the standard public WoD Backend / WoD Notebooks content. Do not put any confidential data here, as this is a public repository!&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: There are currently 9 repositories available.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/wod-blogserie2-2repos.png&quot; alt=&quot;&quot; title=&quot;Workshops-on-Demand repositories&quot;&gt;&lt;/p&gt;
&lt;p&gt;The Workshops-on-Demand project provides:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;An Installer that allows you to install either the Backend, the Api-DB server, or the Frontend using a single command.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A complete JupyterHub server with some add-ons (additional JupyterHub kernels, Ansible galaxies, and PowerShell libraries) on your system, ready to use.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A Postfix server used for the procmail API&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;An Ansible engine to allow automation&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A Fail2Ban service&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;An Admin user to manage everything&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A set of scripts to handle different tasks such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Notebooks deployment&lt;/li&gt;
&lt;li&gt;JupyterHub compliancy&lt;/li&gt;
&lt;li&gt;Users compliancy&lt;/li&gt;
&lt;li&gt;Security Management&lt;/li&gt;
&lt;li&gt;Workshops updates&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;Backend server preparation:&lt;/h4&gt;
&lt;p&gt;The installation process is handled by a dedicated repo: wod-install. This repo needs to be cloned on every single machine constituting the WoD architecture. Before cloning the wod-install repository, you will need to prepare the server that will host the backend features. When ready, you will proceed with the cloning and then the installation process.&lt;/p&gt;
&lt;h5&gt;Prerequisites:&lt;/h5&gt;
&lt;p&gt;In order to setup the backend server, you will need:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A fresh OS install on a physical or virtualized server running Ubuntu 24.04 or CentOS 7.9, using any deployment mechanism of your choice (e.g. iLO, Vagrant, etc.). You may even use this Vagrant file to automatically generate a complete setup leveraging Vagrant, libvirt, and QEMU/KVM.&lt;/li&gt;
&lt;li&gt;A Linux account with sudo privileges on your Linux distro. Name it &lt;code&gt;install&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: In order to support 100 concurrent users, you will need:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A machine with 2 or more CPUs&lt;/li&gt;
&lt;li&gt;128 GB of RAM&lt;/li&gt;
&lt;li&gt;500 GB of storage&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;As a point of interest, we are currently using an HPE ProLiant DL360 Gen10 server on our different production sites.&lt;/p&gt;
&lt;p&gt;When done with the OS installation and preparation:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;From the WoD-backend server (aka JupyterHub server), as the install user, you will need to clone the wod-install repo first.&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;install$ git clone https://github.com/Workshops-on-Demand/wod-install.git
install$ cd wod-install/
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;Examine the default installation parameters and adapt them as necessary. The files are self-documented.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Look at the following files within the ansible/group_vars directory.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;all.yml&lt;/code&gt; file&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;vi all.yml
---
# We create fixed user accounts to provide an isolated execution environment to run the jupyter notebooks
# They are called studentXXX where XXX is set between USERMIN and USERMAX defined below potentially with the addition of an offset (UIDBASE) for their uid/gid
# Their home directory is located under /student and is thus named /student/studentXXX
# Corresponding JupyterHub accounts are also created
#
# USERMIN indicates the starting ID of the Linux and Jupyter user account range
#
USERMIN: 1
#
# USERMAX indicates the ending ID of the Linux and Jupyter user account range
#
USERMAX: 20
#
# UIDBASE is the offset used to create the Linux user account IDs
# Example when creating user 35 with UIDBASE of 2000, the uid created is 2035
#
UIDBASE: 2000
#
# GIDBASE is the offset used to create the Linux group IDs
# Example: When creating user 35 with GIDBASE of 2000, the gid created is 2035
#
GIDBASE: 2000
#
# Set CLEAN to true if you want all Liunx &amp;#x26; Jupyter user accounts to be removed before ansible check
#
CLEAN: false
#
# VAULTPWD is the password used to manage the Ansible vault
#
VAULTPWD: VeryComplexPasswd1234!
#
# NOCHECKSSH are ssh options used to dialog with appliances
# By default, avoid checking Host keys and Host file as they may change on a regular base
#
NOCHECKSSH: -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null
#
# Branding management - Use if you want to customize Logo and Notebooks branding
#
BRANDING: &quot;WoD Developer&quot;
BRANDINGWOD: &quot;WoD Developer&quot;
BRANDINGLOGO: &quot;![HPEDEVlogo](Pictures/hpe-dev-logo.png)&quot;
BRANDINGURL: &quot;https://wod.io&quot;
#
# Survey management - Use if you want to ask for feedback on your workshops - Look at existing conclusion notebooks
SURVEYURL: TBD
SURVEYCHALURL: TBD
#
# JPHUB  is the directory used to install the JupyterHub stack (a Python venv)
#
JPHUB: /opt/jupyterhub
#
#
# These variables are defined in Ansible playbooks. Do not change without knowing what you are doing.
#
STUDDIR: &quot;{{ ansible_env.STUDDIR }}&quot;
WODBEDIR: &quot;{{ ansible_env.WODBEDIR }}&quot;
WODPRIVDIR: &quot;{{ ansible_env.WODPRIVDIR }}&quot;
WODNOBO: &quot;{{ ansible_env.WODNOBO }}&quot;
WODAPIDBDIR: &quot;{{ ansible_env.WODAPIDBDIR }}&quot;
WODFEDIR: &quot;{{ ansible_env.WODFEDIR }}&quot;
SCRIPTDIR: &quot;{{ WODBEDIR }}/scripts&quot;
ANSIBLEDIR: &quot;{{ WODBEDIR }}/ansible&quot;
# This is the predefined structure for a private repo
WODPRIVNOBO: &quot;{{ WODPRIVDIR }}/notebooks&quot;
SCRIPTPRIVDIR: &quot;{{ WODPRIVDIR }}/scripts&quot;
ANSIBLEPRIVDIR: &quot;{{ WODPRIVDIR }}/ansible&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;wod-backend&lt;/code&gt; file&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;vi wod-backend
#
# These variables are located lower in the ansible tree to allow different values required for different backends while keeping a single frontend
#
# BASESTDID is the offset used to create users in the DB. It is required that each backend has a different non overlapping value.
# Overlap is defined by BASESTDID + USERMAX (from all.yml)
#
# Example:
# for student 35 in location A having BASESTDID to 0 the user is create as id 35
# for student 35 in location B having BASESTDID to 2000 the user is create as id 2035
# There is no overlap as long as you do not create more than 2000 users which should be the value of USERMAX in that case.
#
# This is different from the offset UIDBASE used for Linux uid
#
BASESTDID: 0
#
# POSTPORT is the Postfix Port on which the smtp service is listening to receive API mail requests from the frontend
#
POSTPORT: &quot;10025&quot;
#
# In case you are using an LDAP server to use, flag as such the corresponding workshops in the DB and use the following values:
#
LDAPSRVNAME: ldap.example.org
LDAPDMN: example.org
LDAPPWD: MotDePasseLDAPCompliquéAussi123!!!##
LDAPPORT: &quot;389&quot;
#
# For various existing public WoDs - These are needed. Adapt but do not remove!
#
SSHPORT-WKSHP-Docker101: 14101
SSHPORT-WKSHP-Ansible101: 16001
HTTPPORT-WKSHP-Docker101: 14151
HTTPPORT-WKSHP-Ansible101: 16051
HTTPPORT-WKSHP-Spark101: 17161
HTTPPORT-WKSHP-Concourse101: 19061
HTTPPORT-WKSHP-ML101: 18061
HTTPPORT-WKSHP-DataVisu101: 22161
CONCOURSEPORT-WKSHP-Concourse101: 19001
CONCOURSEPORT2-WKSHP-Concourse101: 19031
IP-WKSHP-DataVisu101: x.y.z.t
IP-WKSHP-Concourse101: x.y.z.t
IP-WKSHP-Docker101: x.y.z.t
IP-WKSHP-Ansible101: x.y.z.t
IP-WKSHP-Spark101: x.y.z.t
IP-WKSHP-ML101: x.y.z.t
IP-WKSHP-StackStorm101: x.y.z.t
SPARKPORT-WKSHP-Spark101: 17101
SPARKPORT2-WKSHP-Spark101: 17131
MLPORT-WKSHP-ML101: 18101
MLPORT2-WKSHP-ML101: 18031
DATAVISUPORT1-WKSHP-DataVisu101: 22101
DATAVISUPORT2-WKSHP-DataVisu101: 22131
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;wod-system&lt;/code&gt; file&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;vi wod-system
#
# Backend API management
#
# Do not change, as the port is fixed in the JupyterHub install
#
WODBEAPIURL: http://{{ WODBEFQDN }}:8000
#
# Replace with a random one - TODO Do that automatically at install time
#
WODBETOKEN: 2c0246e2c8564dc6ac7b12c544b25d77
#
# You may want to use these variables if you have an OPNSense server as a security FW and are allowing http comm internally
#
#OPNSENSEKEY:
#OPNSENSESEC:
#OPNSENSEIP:
#OPNSENSEPORT:
#
# Front-end API management
#
# Do not change, as the port is fixed in the JupyterHub install
#
WODFEAPIURL: https://{{ WODAPIDBFQDN }}/api
#
# Adapt to your setup - Used by installer to setup the frontend
#
WODFEAPIUSER: moderator
WODFEAPIPWD: MotDePasseCompliquéAussi125!!!##
&lt;/code&gt;&lt;/pre&gt;
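&lt;p&gt;For the WODBETOKEN value, any random hex string of the same length will do. Assuming OpenSSL is available on the backend server, one quick way to generate one is:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;install$ openssl rand -hex 16
&lt;/code&gt;&lt;/pre&gt;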
&lt;p&gt;See the example below for a backend server.&lt;/p&gt;
&lt;h3&gt;Backend server installation:&lt;/h3&gt;
&lt;p&gt;Once you are done with the files, you can proceed with the installation itself. The installation is based on a common install script, &lt;a href=&quot;https://github.com/Workshops-on-Demand/wod-backend/blob/main/install/install.sh&quot;&gt;install.sh&lt;/a&gt;, that allows the deployment of the different parts of the solution. The script is located under the &lt;code&gt;wod-install/install/&lt;/code&gt; directory.&lt;/p&gt;
&lt;p&gt;It can be called as follows:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;install.sh [-h][-t type][-g groupname][-b backend][-f frontend][-a api-db][-e external][-u user][-s sender]&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;As you can see on the command line, the -t parameter defines whether you install a backend, an api-db, or a frontend server. Based on this information, the script clones the relevant repository for the installation. If &lt;code&gt;t=backend&lt;/code&gt;, then the &lt;code&gt;wod-backend&lt;/code&gt; repository is cloned as part of the installation process, and the relevant installation scripts are called. The same goes for the api-db and frontend servers.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;-h&lt;/code&gt; parameter provides help.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;install@wod-backend2-u24:~/wod-install/install$ sudo ./install.sh -h
install.sh called with -h
install.sh [-h][-t type][-i ip][-g groupname][-b backend[:beport:[beproto]][-n number][-j backendext[:beportext[:beprotoext]]][-f frontend[:feport[:feproto]]][-w frontendext[:feportext[:feprotoext]]][-a api-db[:apidbport[:apidbproto]]][-e api-dbext[:apidbportext[:apidbprotoext]]][-u user][-p postport][-k][-c][-s sender]

where:
-a api-db    is the FQDN of the REST API/DB server
             potentially with a port (default 8021)
             potentially with a proto (default http)
             example: api.internal.example.org
             if empty using the name of the frontend

-b backend   is the FQDN of the backend JupyterHub server,
             potentially with a port (default 8000).
             potentially with a proto (default http)
             if empty uses the local name for the backend
             If you use multiple backend systems corresponding to
             multiple locations, use option -n to give the backend
             number currently being installed, starting at 1.

             When installing the api-db server you have to specify one
             or multiple backend servers, using their FQDN separated
             with &apos;,&apos; using the same order as given with the -n option
             during backend installation.

-e api-dbext is the FQDN of the REST API server accessible externally
             potentially with a port (default 8021)
             potentially with a proto (default http)
             example: api.external.example.org
             if empty using the name of the api-db
             useful when the name given with -a doesn&apos;t resolve from
             the client browser

-f frontend  is the FQDN of the frontend Web server
             potentially with a port (default 8000).
             potentially with a proto (default http)
             example: fe.external.example.org
             if empty using the name of the backend

-g groupname is the ansible group_vars name to be used
             example: production, staging, test, ...
             if empty using &apos;production&apos;

-i ip        IP address of the backend server being used
             if empty, try to be autodetected from FQDN
             of the backend server
             Used in particular when the IP can&apos;t be guessed (Vagrant)
             or when you want to mask the external IP returned
             by an internal one for /etc/hosts creation

-j backext   is the FQDN of the backend JupyterHub server accessible externally
             potentially with a port (default 8000).
             potentially with a proto (default http)
             example: jupyterhub.external.example.org
             if empty using the name of the backend
             useful when the name given with -b doesn&apos;t resolve from
             the client browser

-k           if used, force the re-creation of ssh keys for
             the previously created admin user
             if not used keep the existing keys in place if any
             (backed up and restored)
             if the name of the admin user is changed, new keys
             systematically re-created

-c           if used, force insecured curl communications
             this is particularly useful for self-signed certificate
             on https services
             if not used keep curl verification, preventing self-signed
             certificates to work

-n           if used, this indicates the number of the backend
             currently installed
             used for the backend installation only, when multiple
             backend systems will be used in the configuration
             example (single backend server install on port 9999):
              -b be.int.example.org:9999
             example (first of the 2 backends installed):
              -b be1.int.example.org:8888 -n 1
             example &apos;second of the 2 backends installed):
              -b be2.int.example.org:8888 -n 2
             example (install of the corresponding api-db server):
              -b be.int.example.org:8888,be2.int.example.org:8888

-p postport  is the port on which the postfix service is listening
             on the backend server
             example: -p 10030
             if empty using default (10025)

-s sender    is the e-mail address used in the WoD frontend to send
             API procmail mails to the WoD backend
             example: sender@example.org
             if empty using wodadmin@localhost

-t type      is the installation type
             valid values: appliance, backend, frontend or api-db
             if empty using &apos;backend&apos;

-u user      is the name of the admin user for the WoD project
             example: mywodadmin
             if empty using wodadmin
-w frontext  is the FQDN of the frontend JupyterHub server accessible externally
             potentially with a port (default 8000).
             potentially with a proto (default http)
             example: frontend.external.example.org
             if empty using the name of the frontend
             useful to solve CORS errors when external and internal names
             are different


Full installation example of a stack with:
- 2 backend servers be1 and be2 using port 8010
- 1 api-db server apidb on port 10000 using https
- 1 frontend server front on port 8000
- all declared on the .local network
- internal postfix server running on port 9000
- e-mail sender being wodmailer@local
- ansible groupname being test
- management user being wodmgr

On the be1 machine:
  ./install.sh -a apidb.local:10000:https -f front.local:8000 \
  -g test -u wodmgr -p 9000 -s wodmailer@local\
  -b be1.local:8010 -n 1 -t backend \
On the be2 machine:
  ./install.sh -a apidb.local:10000:https -f front.local:8000 \
  -g test -u wodmgr -p 9000 -s wodmailer@local\
  -b be2.local:8010 -n 2 -t backend \
On the apidb machine:
  ./install.sh -a apidb.local:10000:https -f front.local:8000 \
  -g test -u wodmgr -p 9000 -s wodmailer@local\
  -b be1.local:8010,be2.local:8010 -t api-db \
On the frontend machine:
  ./install.sh -a apidb.local:10000:https -f front.local:8000 \
  -g test -u wodmgr -p 9000 -s wodmailer@local\
  -t frontend \
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;code&gt;install.sh&lt;/code&gt; performs the following tasks:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Calls the &lt;code&gt;install-system-&amp;#x3C;&amp;#x3C; distribution name &gt;&gt;.sh&lt;/code&gt; script&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Installs the minimal required packages (&lt;code&gt;ansible, git, jq, openssh server, npm&lt;/code&gt;)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Creates an admin user as defined above (default is &lt;code&gt;wodadmin&lt;/code&gt;) with sudo rights&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Calls the &lt;code&gt;install-system-common.sh&lt;/code&gt; script that performs the following tasks:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Cleanup&lt;/li&gt;
&lt;li&gt;Clones the GitHub repositories (leveraging the &lt;code&gt;install.repo&lt;/code&gt; file): the public backend repo and the public private-content template repo&lt;/li&gt;
&lt;li&gt;Creates SSH keys for wodadmin&lt;/li&gt;
&lt;li&gt;Creates GROUPNAME variables&lt;/li&gt;
&lt;li&gt;Creates Ansible inventory files&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Calls the &lt;code&gt;install_system.sh&lt;/code&gt; script with the type (backend, frontend, etc.), which performs the following tasks:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Installs the necessary stack based on the selected type&lt;/li&gt;
&lt;li&gt;Creates a &lt;code&gt;wod.sh&lt;/code&gt; script in the &lt;code&gt;wod-backend&lt;/code&gt; directory to be used by all other scripts&lt;/li&gt;
&lt;li&gt;Sources the &lt;code&gt;wod.sh&lt;/code&gt; file&lt;/li&gt;
&lt;li&gt;Sets up the Ansible Galaxy collections (&lt;code&gt;community.general&lt;/code&gt; and &lt;code&gt;posix&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Sets up Ansible and calls the &lt;code&gt;install_&amp;#x3C;type&gt;.yml&lt;/code&gt; playbook followed by &lt;code&gt;ansible_check_&amp;#x3C;type&gt;.yml&lt;/code&gt; (see the sketch after this list)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
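&lt;p&gt;For orientation, the Ansible-related steps above roughly correspond to the commands below for a backend install. This is only a sketch: &lt;code&gt;install_system.sh&lt;/code&gt; runs the equivalent steps for you, and the inventory and playbook paths shown here are assumptions about the repository layout.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;# Sketch only: install_system.sh performs these steps automatically for the chosen type
wodadmin@wod-backend:~/wod-backend$ ansible-galaxy collection install community.general ansible.posix
wodadmin@wod-backend:~/wod-backend$ ansible-playbook -i ansible/inventory ansible/install_backend.yml
wodadmin@wod-backend:~/wod-backend$ ansible-playbook -i ansible/inventory ansible/ansible_check_backend.yml
&lt;/code&gt;&lt;/pre&gt;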
&lt;p&gt;At the end of the installation process:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;You will have a JupyterHub server running on port 8000&lt;/li&gt;
&lt;li&gt;You will get a new &lt;code&gt;wodadmin&lt;/code&gt; user (Default admin)&lt;/li&gt;
&lt;li&gt;You will get a set of 20 student accounts (default value)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;All playbooks are self-documented. Please check them for details.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note: A &lt;code&gt;wod-install.log&lt;/code&gt; file is available under &lt;code&gt;.wod-install&lt;/code&gt; in the home folder of the install user. It contains the installation log, along with another file containing the wodadmin credentials.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I&apos;ll leave it to you to handle the necessary port redirection and SSL certificate management when needed. In our case, I went for a simple yet efficient solution based on an OPNsense firewall along with an HAProxy setup to manage port redirection, HTTP-to-HTTPS redirection, and SSL certificates. The backend also includes a Fail2ban service for login security management.&lt;/p&gt;
&lt;p&gt;At this point, you should be able to access your JupyterHub environment with a set of pre-installed kernels such as &lt;code&gt;Bash, Python, ansible, ssh, PowerShell&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;You can then start developing new notebooks for your public content-based environment. And if you don&apos;t know how to do this, I will explain how in a future article.&lt;/p&gt;
&lt;p&gt;If you need to develop private content that cannot be shared with the wider Open Source Community because it contains dedicated intellectual property (IP), the next section of this article will explain how to handle this.&lt;/p&gt;
&lt;h3&gt;&lt;strong&gt;How to handle private-content based Workshops-on-Demand&lt;/strong&gt;&lt;/h3&gt;
&lt;h4&gt;&lt;em&gt;(private backend + private workshops on top of default public backend and notebooks)&lt;/em&gt;&lt;/h4&gt;
&lt;p&gt;The principle remains similar, with a few differences explained below.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;After cloning the wod-install repository, you will fork the following public &lt;a href=&quot;https://github.com/Workshops-on-Demand/wod-private.git&quot;&gt;wod-private repo&lt;/a&gt; on GitHub under your own GitHub account (we will refer to it as &lt;code&gt;Account&lt;/code&gt;).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Next, clone the forked repo.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Edit the &lt;code&gt;all.yml&lt;/code&gt; and &lt;code&gt;&amp;#x3C;groupname&gt;&lt;/code&gt; files to customize your setup. The &lt;code&gt;&amp;#x3C;groupname&gt;&lt;/code&gt; variable defines the possible backend servers in your environment. By default, the project comes with a sample working file named &lt;code&gt;production&lt;/code&gt; in &lt;code&gt;ansible/group-vars&lt;/code&gt;, but you can have multiple. In the case I&apos;ve presented, I have defined &lt;code&gt;sandbox&lt;/code&gt;, &lt;code&gt;test&lt;/code&gt;, &lt;code&gt;staging&lt;/code&gt; and several &lt;code&gt;production&lt;/code&gt; files, all defining a different backend environment. These files are used to override the default values specified by the public version delivered as part of the default public installation.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Commit and push changes to your repo.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create an &lt;code&gt;install.priv&lt;/code&gt; file in the &lt;code&gt;install&lt;/code&gt; directory when using a private repo (consider looking at the &lt;a href=&quot;https://github.com/Workshops-on-Demand/wod-backend/blob/main/install/install.repo&quot;&gt;install.repo&lt;/a&gt; file for a better understanding of the variables).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Define the WODPRIVREPO and WODPRIVBRANCH variables as follows (a consolidated sketch follows this list):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;WODPRIVBRANCH=&quot;main&quot;&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;WODPRIVREPO=&quot;git@github.com:Account/Private-Repo.git wod-private&quot;&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
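&lt;p&gt;Putting these two variables together, a minimal &lt;code&gt;install.priv&lt;/code&gt; for an SSH-based private repository could look like the sketch below, where &lt;code&gt;Account&lt;/code&gt; and the repository name are placeholders for your own fork:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;# install/install.priv -- overrides the defaults coming from install.repo
WODPRIVBRANCH=&quot;main&quot;
WODPRIVREPO=&quot;git@github.com:Account/wod-private.git wod-private&quot;
&lt;/code&gt;&lt;/pre&gt;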
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; When using a token:&lt;/p&gt;
&lt;p&gt;Please refer to the following &lt;a href=&quot;https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token&quot;&gt;URL&lt;/a&gt; to generate a personal access token, then save it in a &lt;code&gt;token&lt;/code&gt; file in the &lt;code&gt;install&lt;/code&gt; directory of wod-backend:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Edit the &lt;code&gt;install.priv&lt;/code&gt; file located in the &lt;code&gt;install&lt;/code&gt; directory of wod-install (a consolidated sketch follows this list):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Add the following line before the variable declarations: &lt;code&gt;token=`cat $EXEPATH/token`&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Use the token in the URL: &lt;code&gt;WODPRIVREPO=&quot;https://user:$token@github.com/Account/wod-private.git wod-private&quot;&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
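&lt;p&gt;With the token generated, the token-based &lt;code&gt;install.priv&lt;/code&gt; would then look something like the sketch below; &lt;code&gt;user&lt;/code&gt; and &lt;code&gt;Account&lt;/code&gt; are placeholders, and the &lt;code&gt;token&lt;/code&gt; file sits next to it in the &lt;code&gt;install&lt;/code&gt; directory:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;# install/install.priv -- HTTPS variant using a personal access token
token=`cat $EXEPATH/token`
WODPRIVBRANCH=&quot;main&quot;
WODPRIVREPO=&quot;https://user:$token@github.com/Account/wod-private.git wod-private&quot;
&lt;/code&gt;&lt;/pre&gt;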
&lt;p&gt;You are now ready to perform the installation again to support a private repository.&lt;/p&gt;
&lt;p&gt;Please note that this setup phase can be concurrent with the public setup phase. Indeed, the install script detects the presence of the private repository through the &lt;code&gt;install.priv&lt;/code&gt; file and automatically adjusts the different scripts and variables to add the relevant content, overriding some of the public variables with private ones.&lt;/p&gt;
&lt;p&gt;You now have a working Workshops-on-Demand backend server in place. In order to install the api-db server as well as the frontend server, please follow the same steps.&lt;/p&gt;
&lt;p&gt;In order to setup the api-db server, you will need:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A fresh OS install on a physical or virtualized server running Ubuntu 24.04 or CentOS 7.9, leveraging any deployment mechanism of your choice (e.g. iLO, Vagrant, etc.). You may even use this Vagrant file to automatically generate a complete setup leveraging Vagrant, libvirt and QEMU/KVM.&lt;/li&gt;
&lt;li&gt;A Linux account with sudo privileges on your Linux distro. Name it &lt;code&gt;install&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;A machine with 2 CPUs or more&lt;/li&gt;
&lt;li&gt;16 GB of RAM&lt;/li&gt;
&lt;li&gt;60 GB of storage&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In order to setup the frontend server, you will need:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A fresh OS install on a physical or virtualized server running Ubuntu 24.04 or CentOS 7.9, leveraging any deployment mechanism of your choice (e.g. iLO, Vagrant, etc.). You may even use this Vagrant file to automatically generate a complete setup leveraging Vagrant, libvirt and QEMU/KVM.&lt;/li&gt;
&lt;li&gt;A Linux account with sudo privileges on your Linux distro. Name it &lt;code&gt;install&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;A machine with 2 CPUs or more&lt;/li&gt;
&lt;li&gt;8 GB of RAM&lt;/li&gt;
&lt;li&gt;60 GB of storage&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Congratulations! You should now have a running Workshops-on-Demand infrastructure. The next article in the series will help you better understand the lifecycle of the backend server. How does a workshop registration work from the backend server&apos;s side? How do you manage this server on a daily basis? How and when do you need to update it? All these questions will be answered in the next article. And from there, I will finally help you move on to the workshop creation process.&lt;/p&gt;
&lt;p&gt;If you need support for this installation process, use our dedicated &lt;a href=&quot;https://hpedev.slack.com/archives/C01B60X8SSD&quot;&gt;slack channel&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Please be sure to check back on the &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE Developer blog site&lt;/a&gt; to read all the articles in this series. Also, check out the Hack Shack for new &lt;a href=&quot;https://developer.hpe.com/hackshack/workshops&quot;&gt;workshops&lt;/a&gt;. &lt;a href=&quot;https://developer.hpe.com/hackshack/replays/42&quot;&gt;Data Visualization 101&lt;/a&gt; is now available! Stay tuned for additional Workshops-on-Demand in our catalog.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Securely accessing public networks, configuring firmware updates, agentic AI & more!]]></title><link>https://developer.hpe.com/2025-oct-07/</link><guid isPermaLink="false">https://developer.hpe.com/2025-oct-07/</guid><pubDate>Tue, 07 Oct 2025 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Exposing Grafana service using Tailscale for MKS monitoring in HPE Private Cloud Enterprise]]></title><description><![CDATA[This blog post describes how to expose the Grafana service running in an MKS cluster within HPE Private Cloud Enterprise to the public…]]></description><link>https://developer.hpe.com/exposing-grafana-service-using-tailscale-for-mks-monitoring-in-hpe-private-cloud-enterprise/</link><guid isPermaLink="false">https://developer.hpe.com/exposing-grafana-service-using-tailscale-for-mks-monitoring-in-hpe-private-cloud-enterprise/</guid><pubDate>Fri, 03 Oct 2025 20:28:43 GMT</pubDate><content:encoded>&lt;p&gt;This blog post describes how to expose the &lt;em&gt;Grafana&lt;/em&gt; service running in an MKS cluster within HPE Private Cloud Enterprise to the public Internet using &lt;em&gt;Tailscale&lt;/em&gt; alongside &lt;em&gt;MetalLB&lt;/em&gt;. Without the usual complexity of networking or intricate security configurations, the exposed &lt;em&gt;Grafana&lt;/em&gt; dashboard becomes accessible both within the on-premises environment and from external networks. This approach offers a simple and effective way to monitor MKS clusters running in the HPE Private Cloud Enterprise environment.&lt;/p&gt;
&lt;h2&gt;Overview&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/hpe-private-cloud-enterprise.html&quot;&gt;HPE Private Cloud Enterprise&lt;/a&gt; is a fully managed &lt;em&gt;Infrastructure as a Service&lt;/em&gt; (IaaS) offering that brings a modern, cloud-like experience to on-premises environments. It combines the flexibility of hybrid cloud with the enterprise-grade control and security required by enterprise IT.&lt;/p&gt;
&lt;p&gt;Through integration with &lt;a href=&quot;https://www.hpe.com/us/en/morpheus-enterprise-software.html&quot;&gt;HPE Morpheus Enterprise&lt;/a&gt;, which serves as the cloud management and orchestration layer, HPE Private Cloud Enterprise delivers a unified self-service interface for provisioning virtual machines (VMs), creating containers, and deploying applications, all governed by role-based access control (RBAC). This integration now enables support for the Morpheus Kubernetes Service (MKS) feature, allowing users to deploy and manage Kubernetes (K8s) clusters with built-in automation and observability capabilities. You can refer to the blog post &lt;a href=&quot;https://developer.hpe.com/blog/provisioning-mks-clusters-in-hpe-greenlake-for-private-cloud-enterprise/&quot;&gt;Provisioning MKS clusters in HPE Private Cloud Enterprise&lt;/a&gt; to learn more about provisioning MKS clusters in HPE Private Cloud Enterprise.&lt;/p&gt;
&lt;p&gt;As you begin deploying applications in MKS clusters, networking quickly emerges as one of the key challenges. Traditional methods such as port forwarding, &lt;em&gt;NodePort&lt;/em&gt; or &lt;em&gt;LoadBalancer&lt;/em&gt; services, or manual virtual private network (VPN) setups can be cumbersome to configure, difficult to secure, and often require deep networking expertise. How can these applications be made accessible, both within the HPE Private Cloud Enterprise environment and from external networks, without the added complexity?&lt;/p&gt;
&lt;p&gt;The following sections describe how to expose services, running in MKS clusters within HPE Private Cloud Enterprise, to the public Internet using &lt;em&gt;Tailscale&lt;/em&gt; and &lt;em&gt;MetalLB&lt;/em&gt;, offering a streamlined and secure alternative to conventional approaches.&lt;/p&gt;
&lt;h2&gt;Prerequisites&lt;/h2&gt;
&lt;p&gt;Ensure that the following prerequisites are fulfilled:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;An MKS cluster has been provisioned from a HPE Private Cloud Enterprise workspace. You can refer to the blog post &lt;a href=&quot;https://developer.hpe.com/blog/provisioning-mks-clusters-in-hpe-greenlake-for-private-cloud-enterprise/&quot;&gt;Provisioning an MKS cluster in HPE Private Cloud Enterprise&lt;/a&gt; to provision an MKS cluster.&lt;/li&gt;
&lt;li&gt;The &lt;em&gt;kubectl&lt;/em&gt; CLI tool, together with the &lt;em&gt;kubeconfig&lt;/em&gt; file for accessing the MKS cluster.&lt;/li&gt;
&lt;li&gt;The &lt;em&gt;helm&lt;/em&gt; CLI tool, version 3.12.0 or later.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;&lt;em&gt;MetalLB&lt;/em&gt; and &lt;em&gt;Tailscale&lt;/em&gt;&lt;/h2&gt;
&lt;p&gt;&lt;em&gt;&lt;a href=&quot;https://metallb.io/&quot;&gt;MetalLB&lt;/a&gt;&lt;/em&gt; is a software solution that provides a network load balancer implementation for K8s clusters using standard routing protocols. Once installed, &lt;em&gt;MetalLB&lt;/em&gt; supports &lt;em&gt;LoadBalancer&lt;/em&gt;-type services by assigning external IPs to services within the K8s clusters. This makes applications easily reachable within your private network, without needing any special hardware or cloud services.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;a href=&quot;https://tailscale.com/&quot;&gt;Tailscale&lt;/a&gt;&lt;/em&gt; is a mesh VPN service that uses the &lt;a href=&quot;https://www.wireguard.com/&quot;&gt;WireGuard&lt;/a&gt; protocol to securely connect devices across different networks. Instead of routing traffic through a central server like traditional VPNs, &lt;em&gt;Tailscale&lt;/em&gt; creates encrypted peer-to-peer connections between devices. These connections form a private network called a &lt;em&gt;tailnet&lt;/em&gt;, where each device receives a unique &lt;em&gt;Tailscale&lt;/em&gt; IP address for direct communication. A tailnet provides a secure, interconnected space of users, devices, and resources, all managed through Tailscale&apos;s admin console, where you can configure access controls, &lt;em&gt;DNS&lt;/em&gt; settings, &lt;em&gt;TLS&lt;/em&gt; certificates, and more.&lt;/p&gt;
&lt;p&gt;By utilizing the external IP addresses assigned by &lt;em&gt;MetalLB&lt;/em&gt; to &lt;em&gt;LoadBalancer&lt;/em&gt;-type services within the local private network, &lt;em&gt;Tailscale&lt;/em&gt; securely exposes these services via publicly accessible URLs, without revealing their underlying service IP addresses.&lt;/p&gt;
&lt;h2&gt;Set up the load balancer with &lt;em&gt;MetalLB&lt;/em&gt;&lt;/h2&gt;
&lt;p&gt;You can install &lt;em&gt;MetalLB&lt;/em&gt; and set up the load balancer in the MKS cluster by following the instructions in the blog post &lt;a href=&quot;https://developer.hpe.com/blog/exposing-an-application-using-ingress-and-tls-termination-on-kubernetes-in-hpe-greenlake-for-private-cloud-enterprise/&quot;&gt;Setting up the load balancer with MetalLB&lt;/a&gt;.&lt;/p&gt;
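&lt;p&gt;If &lt;em&gt;MetalLB&lt;/em&gt; is not yet present in your cluster, the upstream Helm chart is one common way to install it. The commands below are a sketch; the blog post linked above walks through the complete setup, including the address pool definition.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ helm repo add metallb https://metallb.github.io/metallb
$ helm repo update
$ helm install metallb metallb/metallb --namespace metallb-system --create-namespace
&lt;/code&gt;&lt;/pre&gt;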
&lt;p&gt;Run the following command to view the deployed &lt;em&gt;MetalLB&lt;/em&gt; in the &lt;em&gt;metallb-system&lt;/em&gt; namespace of the MKS cluster &lt;em&gt;mks-test&lt;/em&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get all -n metallb-system
NAME                                      READY   STATUS    RESTARTS   AGE
pod/metallb-controller-8474b54bc4-gdgmx   1/1     Running   0          14d
pod/metallb-speaker-2f8zj                 4/4     Running   0          14d
pod/metallb-speaker-qgg5p                 4/4     Running   0          14d
pod/metallb-speaker-qsv45                 4/4     Running   0          14d
pod/metallb-speaker-xhhcv                 4/4     Running   0          14d

NAME                              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/metallb-webhook-service   ClusterIP   172.30.168.138   &amp;#x3C;none&gt;        443/TCP   14d

NAME                             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/metallb-speaker   4         4         4       4            4           kubernetes.io/os=linux   14d

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/metallb-controller   1/1     1            1           14d

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/metallb-controller-8474b54bc4   1         1         1       14d
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can view the virtual IP address range &quot;172.20.40.240-172.20.40.250&quot; defined in the custom resource definition (CRD) &lt;em&gt;IPAddressPool&lt;/em&gt;, along with the layer 2 service IP address announcement specified in the CRD resource &lt;em&gt;L2Advertisement&lt;/em&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get ipaddresspool -n metallb-system
NAME       AUTO ASSIGN   AVOID BUGGY IPS   ADDRESSES
cfe-pool   true          false             [&quot;172.20.40.240-172.20.40.250&quot;]

$ kubectl get l2advertisement -n metallb-system
NAME           IPADDRESSPOOLS   IPADDRESSPOOL SELECTORS   INTERFACES
cfe-l2advert   [&quot;cfe-pool&quot;]
&lt;/code&gt;&lt;/pre&gt;
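&lt;p&gt;For reference, the two CRD resources shown above are typically defined with manifests along the following lines. This is a sketch reconstructed from the output; the resource names and the address range simply mirror the ones listed.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ cat metallb-config.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: cfe-pool
  namespace: metallb-system
spec:
  addresses:
    - 172.20.40.240-172.20.40.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: cfe-l2advert
  namespace: metallb-system
spec:
  ipAddressPools:
    - cfe-pool
&lt;/code&gt;&lt;/pre&gt;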
&lt;h2&gt;Deploy &lt;em&gt;Tailscale&lt;/em&gt;&lt;/h2&gt;
&lt;h3&gt;Install &lt;em&gt;Tailscale&lt;/em&gt; client&lt;/h3&gt;
&lt;p&gt;In order to use &lt;em&gt;Tailscale&lt;/em&gt;, you need to install the &lt;em&gt;Tailscale&lt;/em&gt; client on your device. The &lt;em&gt;Tailscale&lt;/em&gt; client is open source and available for various platforms, such as &lt;em&gt;Linux&lt;/em&gt;, &lt;em&gt;Windows&lt;/em&gt;, &lt;em&gt;macOS&lt;/em&gt;, &lt;em&gt;iOS&lt;/em&gt;, and &lt;em&gt;Android&lt;/em&gt;, etc. It is used, via its admin console, to connect various devices securely to your private &lt;em&gt;Tailscale&lt;/em&gt; network (&lt;em&gt;tailnet&lt;/em&gt;). It is the bridge between your device and the rest of your tailnet.&lt;/p&gt;
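&lt;p&gt;On a Linux machine, for example, the client is commonly installed with Tailscale&apos;s one-line installer and then joined to your tailnet with &lt;em&gt;tailscale up&lt;/em&gt; (shown here as a sketch; Windows, macOS, and mobile platforms use the installers from the download page):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ curl -fsSL https://tailscale.com/install.sh | sh
$ sudo tailscale up
&lt;/code&gt;&lt;/pre&gt;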
&lt;p&gt;Here is the admin console of my Windows &lt;em&gt;Tailscale&lt;/em&gt; client installed using the package available from the &lt;a href=&quot;https://tailscale.com/download&quot;&gt;Tailscale download page&lt;/a&gt;. It uses a &lt;em&gt;Tailscale&lt;/em&gt; account by choosing &lt;em&gt;GitHub&lt;/em&gt; as the identity provider. You can integrate your &lt;em&gt;Tailscale&lt;/em&gt; account using your own identity provider for secure single sign-on (SSO) login and multi-factor authentication (MFA).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/tailscale-machines.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;My Windows laptop &lt;em&gt;guoping&lt;/em&gt; is connected to the tailnet associated with my &lt;em&gt;GitHub&lt;/em&gt; identity provider.&lt;/p&gt;
&lt;h3&gt;Generate &lt;em&gt;Tailscale&lt;/em&gt; auth key&lt;/h3&gt;
&lt;p&gt;After installing the &lt;em&gt;Tailscale&lt;/em&gt; client, you need to generate an auth key from the &lt;em&gt;Tailscale&lt;/em&gt; admin console.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Settings&lt;/strong&gt; -&gt; &lt;strong&gt;Keys&lt;/strong&gt;. Click &lt;em&gt;&lt;strong&gt;Generate auth key&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/tailscale-settings-keys.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Enter &lt;em&gt;Description&lt;/em&gt; as &lt;em&gt;mks-auth-key&lt;/em&gt; and set &lt;em&gt;Expiration&lt;/em&gt;. Click &lt;em&gt;&lt;strong&gt;Generate key&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/tailscale-generate-auth-key.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Copy and save the generated new key.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/tailscale-auth-key.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Create a &lt;em&gt;Secret&lt;/em&gt; YAML manifest file &lt;em&gt;&apos;tailscale-auth.yaml&apos;&lt;/em&gt; using the generated auth key.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;apiVersion: v1
kind: Secret
metadata:
  name: tailscale-auth
  namespace: tailscale
stringData:
  TS_AUTHKEY: tskey-auth-&amp;#x3C;hidden&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Apply the &lt;em&gt;Secret&lt;/em&gt; to the namespace &lt;em&gt;tailscale&lt;/em&gt; of the MKS cluster. This secret will be used to securely join the cluster to your &lt;em&gt;Tailscale&lt;/em&gt; network.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl create ns tailscale
$ kubectl apply -f tailscale-auth.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Create tag &lt;em&gt;k8s-operator&lt;/em&gt;&lt;/h3&gt;
&lt;p&gt;You need to create a tag named &lt;em&gt;k8s-operator&lt;/em&gt; in the &lt;em&gt;Tailscale&lt;/em&gt; admin console. This tag is used by &lt;em&gt;Tailscale&lt;/em&gt; to authenticate and identify the &lt;em&gt;Tailscale&lt;/em&gt; K8s operator that will be deployed to the MKS cluster.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Access controls -&gt;&lt;/strong&gt;  &lt;em&gt;Tags&lt;/em&gt; tab. Click &lt;em&gt;&lt;strong&gt;Create tag&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/tailscale-create-tag.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Enter &lt;em&gt;Tag name&lt;/em&gt; as &lt;em&gt;k8s-operator&lt;/em&gt; and select &lt;em&gt;Tag owner&lt;/em&gt; as &lt;em&gt;autogroup:member&lt;/em&gt;. Click &lt;em&gt;&lt;strong&gt;Save tag&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/tailscale-tag-k8s-operator.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Generate &lt;em&gt;Tailscale&lt;/em&gt; OAuth client&lt;/h3&gt;
&lt;p&gt;You need to generate an OAuth client in the &lt;em&gt;Tailscale&lt;/em&gt; admin console.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Settings&lt;/strong&gt; -&gt; &lt;strong&gt;OAuth clients&lt;/strong&gt;. Click &lt;em&gt;&lt;strong&gt;Generate OAuth client&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/tailscale-oauth-client.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Under &lt;strong&gt;Devices&lt;/strong&gt;, select &lt;em&gt;Core&lt;/em&gt; with &lt;em&gt;Read and Write&lt;/em&gt; and add tag &lt;em&gt;k8s-operator&lt;/em&gt;. Under Keys, select &lt;em&gt;Auth Keys&lt;/em&gt; with &lt;em&gt;Read and Write&lt;/em&gt; and add the tag &lt;em&gt;k8s-operator&lt;/em&gt;. Click &lt;em&gt;&lt;strong&gt;Generate client&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/tailscale-oauth-client-k8s-operator.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Copy and save the generated &lt;em&gt;Client ID&lt;/em&gt; and &lt;em&gt;Client secret&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/tailscale-oauth-client-details.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Deploy &lt;em&gt;Tailscale&lt;/em&gt; K8s operator&lt;/h3&gt;
&lt;p&gt;You can now install the &lt;em&gt;Tailscale&lt;/em&gt; K8s operator to the namespace &lt;em&gt;tailscale&lt;/em&gt; of the MKS cluster using &lt;em&gt;Helm&lt;/em&gt;, along with the generated &lt;em&gt;Tailscale&lt;/em&gt; OAuth client, specifically the &lt;em&gt;Client ID&lt;/em&gt; and &lt;em&gt;Client secret&lt;/em&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ helm repo add tailscale https://pkgs.tailscale.com/helmcharts
$ helm repo update

$ helm search repo tailscale
NAME                            CHART VERSION   APP VERSION     DESCRIPTION
tailscale/tailscale-operator    1.86.5          v1.86.5         A Helm chart for Tailscale Kubernetes operator

$ helm upgrade --install tailscale-operator tailscale/tailscale-operator \
--namespace=tailscale --set-string oauth.clientId=&amp;#x3C;hidden&gt; \
--set-string oauth.clientSecret=tskey-client-&amp;#x3C;hidden&gt; --wait
Release &quot;tailscale-operator&quot; does not exist. Installing it now.
NAME: tailscale-operator
LAST DEPLOYED: Wed Sep 24 15:02:41 2025
NAMESPACE: tailscale
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
You have successfully installed the Tailscale Kubernetes Operator!

Once connected, the operator should appear as a device within the Tailscale admin console:
https://login.tailscale.com/admin/machines

If you have not used the Tailscale operator before, here are some examples to try out:

* Private Kubernetes API access and authorization using the API server proxy
  https://tailscale.com/kb/1437/kubernetes-operator-api-server-proxy

* Private access to cluster Services using an ingress proxy
  https://tailscale.com/kb/1439/kubernetes-operator-cluster-ingress

* Private access to the cluster&apos;s available subnets using a subnet router
  https://tailscale.com/kb/1441/kubernetes-operator-connector

You can also explore the CRDs, operator, and associated resources within the tailscale namespace:

$ kubectl explain connector
$ kubectl explain proxygroup
$ kubectl explain proxyclass
$ kubectl explain recorder
$ kubectl explain dnsconfig

If you&apos;re interested to explore what resources were created:

$ kubectl --namespace=tailscale get all -l app.kubernetes.io/managed-by=Helm


$ kubectl --namespace=tailscale get all -l app.kubernetes.io/managed-by=Helm
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Type the following command to check the &lt;em&gt;Tailscale&lt;/em&gt; operator deployment details.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get all -n tailscale
NAME                           READY   STATUS    RESTARTS   AGE
pod/operator-945796556-cgg86   1/1     Running   0          41s

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/operator   1/1     1            1           41s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/operator-945796556   1         1         1       41s
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once the &lt;em&gt;Tailscale&lt;/em&gt; operator is successfully installed and running, a new machine named &lt;em&gt;tailscale-operator&lt;/em&gt; appears under the &lt;strong&gt;Machines&lt;/strong&gt; tab in your &lt;em&gt;Tailscale&lt;/em&gt; admin console.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/tailscale-operator-machine.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Expose &lt;em&gt;Grafana&lt;/em&gt; service&lt;/h2&gt;
&lt;p&gt;As part of the MKS cluster provisioning process, both &lt;em&gt;Prometheus&lt;/em&gt; and &lt;em&gt;Grafana&lt;/em&gt; are installed and configured in the namespace &lt;em&gt;monitoring&lt;/em&gt;. Use the following command to view the details.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get all -n monitoring
NAME                                      READY   STATUS    RESTARTS   AGE
pod/alertmanager-main-0                   2/2     Running   0           4d
pod/alertmanager-main-1                   2/2     Running   0           4d
pod/alertmanager-main-2                   2/2     Running   0           4d
pod/blackbox-exporter-84d969fb75-msbqd    3/3     Running   0           4d
pod/grafana-6698fc66bb-9rjk2              1/1     Running   0           4d
pod/kube-state-metrics-6f5f95b6bf-6b77k   3/3     Running   0           4d
pod/node-exporter-74nzh                   2/2     Running   0           4d
pod/node-exporter-89m4q                   2/2     Running   0           4d
pod/node-exporter-c699g                   2/2     Running   0           4d
pod/node-exporter-prmwt                   2/2     Running   0           4d
pod/node-exporter-vdfvj                   2/2     Running   0           4d
pod/prometheus-adapter-599c88b6c4-nd7xd   1/1     Running   0           4d
pod/prometheus-adapter-599c88b6c4-zh2z5   1/1     Running   0           4d
pod/prometheus-k8s-0                      2/2     Running   0           4d
pod/prometheus-k8s-1                      2/2     Running   0           4d
pod/prometheus-operator-75486dd88-pjdjh   2/2     Running   0           4d

NAME                            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/alertmanager-main       ClusterIP   172.30.103.195  &amp;#x3C;none&gt;        9093/TCP,8080/TCP             4d
service/alertmanager-operated   ClusterIP   None            &amp;#x3C;none&gt;        9093/TCP,9094/TCP,9094/UDP    4d
service/blackbox-exporter       ClusterIP   172.30.165.12   &amp;#x3C;none&gt;        9115/TCP,19115/TCP            4d
service/grafana                 ClusterIP   172.30.211.119  &amp;#x3C;none&gt;        3000/TCP                      4d
service/kube-state-metrics      ClusterIP   None            &amp;#x3C;none&gt;        8443/TCP,9443/TCP             4d
service/node-exporter           ClusterIP   None            &amp;#x3C;none&gt;        9100/TCP                      4d
service/prometheus-adapter      ClusterIP   172.30.199.24   &amp;#x3C;none&gt;        443/TCP                       4d
service/prometheus-k8s          ClusterIP   172.30.54.40    &amp;#x3C;none&gt;        9090/TCP,8080/TCP             4d
service/prometheus-operated     ClusterIP   None            &amp;#x3C;none&gt;        9090/TCP                      4d
service/prometheus-operator     ClusterIP   None            &amp;#x3C;none&gt;        8443/TCP                      4d

NAME                           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/node-exporter   5         5         5       5            5           kubernetes.io/os=linux   43d

NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/blackbox-exporter     1/1     1            1            4d
deployment.apps/grafana               1/1     1            1            4d
deployment.apps/kube-state-metrics    1/1     1            1            4d
deployment.apps/prometheus-adapter    2/2     2            2            4d
deployment.apps/prometheus-operator   1/1     1            1            4d

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/blackbox-exporter-84d969fb75    1         1         1        4d
replicaset.apps/grafana-6698fc66bb              1         1         1        4d
replicaset.apps/kube-state-metrics-6f5f95b6bf   1         1         1        4d
replicaset.apps/prometheus-adapter-599c88b6c4   2         2         2        4d
replicaset.apps/prometheus-operator-75486dd88   1         1         1        4d

NAME                                 READY   AGE
statefulset.apps/alertmanager-main   3/3      4d
statefulset.apps/prometheus-k8s      2/2      4d
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To expose the &lt;em&gt;Grafana&lt;/em&gt; service, change its service type from &lt;em&gt;ClusterIP&lt;/em&gt; to &lt;em&gt;LoadBalancer&lt;/em&gt; by running the command &lt;em&gt;&apos;kubectl edit svc grafana -n monitoring&apos;&lt;/em&gt;. Once updated, the &lt;em&gt;Grafana&lt;/em&gt; service appears as a &lt;em&gt;LoadBalancer&lt;/em&gt; type and is assigned an &lt;em&gt;EXTERNAL-IP&lt;/em&gt; address, such as &lt;em&gt;&apos;172.20.40.241&apos;&lt;/em&gt;.&lt;/p&gt;
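&lt;p&gt;If you prefer a non-interactive change, the same edit can be applied with a one-line patch, sketched below (adjust the service name and namespace to your cluster):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl patch svc grafana -n monitoring -p &apos;{&quot;spec&quot;: {&quot;type&quot;: &quot;LoadBalancer&quot;}}&apos;
&lt;/code&gt;&lt;/pre&gt;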
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get svc grafana -n monitoring
NAME      TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)          AGE
grafana   LoadBalancer   172.30.211.119   172.20.40.241   3000:31469/TCP    4d
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can now expose the &lt;em&gt;Grafana&lt;/em&gt; service and make it publicly accessible using &lt;a href=&quot;https://tailscale.com/kb/1223/funnel&quot;&gt;Tailscale Funnel&lt;/a&gt;. &lt;em&gt;Tailscale&lt;/em&gt; Funnel exposes a local service running in the MKS cluster via a unique &lt;em&gt;Funnel URL&lt;/em&gt;, formatted as &lt;em&gt;&amp;#x3C;service-name&gt;.&amp;#x3C;tailscale domain&gt;&lt;/em&gt;. When someone accesses the Funnel URL, the request is routed to the Funnel relay server, which then establishes an encrypted TCP tunnel to the local service. This ensures the data remains secure and the service&apos;s IP address stays hidden. The Funnel relay server cannot decrypt any data transmitted through the tunnel.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Tailscale&lt;/em&gt; Funnel is disabled by default. To enable it, use the &lt;em&gt;Tailscale&lt;/em&gt; CLI command &lt;em&gt;&apos;tailscale funnel&apos;&lt;/em&gt;. The &lt;em&gt;Tailscale&lt;/em&gt; CLI tool is included when you install the &lt;em&gt;Tailscale&lt;/em&gt; client.&lt;/p&gt;
&lt;p&gt;After &lt;em&gt;Tailscale&lt;/em&gt; Funnel is enabled, you need to create a tag named &lt;em&gt;k8s&lt;/em&gt; in the &lt;em&gt;Tailscale&lt;/em&gt; admin console.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/tailscale-tag-k8s.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;You also need to add a node attribute, under &lt;strong&gt;Access controls -&gt;&lt;/strong&gt;  &lt;em&gt;Node attributes&lt;/em&gt; tab, in the admin console.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/tailscale-node-attribute-k8s.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
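&lt;p&gt;In the tailnet policy file, the node attribute entry that grants Funnel to the proxies tagged &lt;em&gt;k8s&lt;/em&gt; typically looks like the snippet below (a sketch; merge it into your existing policy):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;&quot;nodeAttrs&quot;: [
  {
    &quot;target&quot;: [&quot;tag:k8s&quot;],
    &quot;attr&quot;: [&quot;funnel&quot;]
  }
]
&lt;/code&gt;&lt;/pre&gt;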
&lt;p&gt;Create the following &lt;em&gt;Ingress&lt;/em&gt; YAML manifest file with the annotation &lt;em&gt;tailscale.com/funnel: &quot;true&quot;&lt;/em&gt; and &lt;em&gt;ingressClassName: tailscale&lt;/em&gt;. Apply it to the &lt;em&gt;monitoring&lt;/em&gt; namespace.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ cat ingress-grafana.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-grafana
  namespace: monitoring
  annotations:
    tailscale.com/funnel: &quot;true&quot;
spec:
  defaultBackend:
    service:
      name: grafana
      port:
        number: 3000
  ingressClassName: tailscale
  tls:
    - hosts:
        - grafana

$ kubectl apply -f ingress-grafana.yaml
ingress.networking.k8s.io/ingress-grafana created
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After a few minutes, the deployed Ingress &lt;em&gt;ingress-grafana&lt;/em&gt; displays its assigned Funnel URL &lt;em&gt;grafana.qilin-beta.ts.net&lt;/em&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get ingress -n monitoring
NAME              CLASS       HOSTS   ADDRESS                     PORTS     AGE
ingress-grafana   tailscale   *       grafana.qilin-beta.ts.net   80, 443   36s
&lt;/code&gt;&lt;/pre&gt;
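&lt;p&gt;A quick way to confirm that the Funnel endpoint answers from outside the tailnet is a plain HTTP request against the URL shown above (sketch):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ curl -I https://grafana.qilin-beta.ts.net
&lt;/code&gt;&lt;/pre&gt;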
&lt;p&gt;The &lt;strong&gt;Machines&lt;/strong&gt; tab of the &lt;em&gt;Tailscale&lt;/em&gt; admin console shows the newly added device &lt;em&gt;grafana&lt;/em&gt;, as well as its &lt;em&gt;Tailscale&lt;/em&gt; IP address &lt;em&gt;100.110.103.12&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/grafana-machine.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Access &lt;em&gt;Grafana&lt;/em&gt; dashboard&lt;/h2&gt;
&lt;p&gt;You can point your browser to the Funnel URL &lt;em&gt;&apos;grafana.qilin-beta.ts.net&apos;&lt;/em&gt;. After logging in, you can navigate to one of the pre-configured &lt;em&gt;Grafana&lt;/em&gt; dashboards, e.g., &lt;em&gt;Kubernetes/API server&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/grafana-funnel.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can access the exposed &lt;em&gt;Grafana&lt;/em&gt; service from your mobile phone using the same Funnel URL to monitor your MKS cluster.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/grafana-mobile.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;This blog post showed the steps for exposing the &lt;em&gt;Grafana&lt;/em&gt; service, running within the MKS cluster in HPE Private Cloud Enterprise, using &lt;em&gt;Tailscale&lt;/em&gt; and &lt;em&gt;MetalLB&lt;/em&gt;. Without opening firewall ports or configuring reverse proxies, the &lt;em&gt;Grafana&lt;/em&gt; dashboard becomes publicly accessible via its Funnel URL. Whether you are developing, debugging or showcasing applications in MKS clusters within the HPE Private Cloud Enterprise environment, this approach offers a simple and secure way to expose services.&lt;/p&gt;
&lt;p&gt;However, when applying the &lt;em&gt;Tailscale&lt;/em&gt; setup in production environments, several security considerations remain important. Although traffic to the exposed Grafana service is encrypted end-to-end via &lt;em&gt;HTTPS&lt;/em&gt;, it is recommended to enforce strict access controls using &lt;em&gt;Tailscale&lt;/em&gt; ACLs. Additionally, you should configure &lt;em&gt;Tailscale&lt;/em&gt; to automatically disable Funnel if the connection drops, ensuring services are not unintentionally exposed. With recent updates, &lt;em&gt;Tailscale&lt;/em&gt; supports custom domain integration for Funnel URLs, ideal for creating production-grade public endpoints for your services.&lt;/p&gt;
&lt;p&gt;Please keep coming back to the &lt;a href=&quot;https://developer.hpe.com/blog/&quot;&gt;HPE Developer Community blog&lt;/a&gt; to learn more about HPE Private Cloud Enterprise and get more ideas on how you can use it in your everyday operations.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Provisioning Kubernetes clusters using app blueprint with Ansible integration in HPE Private Cloud Enterprise]]></title><description><![CDATA[This blog post provides a detailed step-by-step guide on how to provision a Kubernetes (K8s) cluster using an app blueprint within the HPE…]]></description><link>https://developer.hpe.com/deploying-a-kubernetes-cluster-using-app-blueprint-with-ansible-integration-in-hpe-greenlake-for-private-cloud-enterprise/</link><guid isPermaLink="false">https://developer.hpe.com/deploying-a-kubernetes-cluster-using-app-blueprint-with-ansible-integration-in-hpe-greenlake-for-private-cloud-enterprise/</guid><pubDate>Fri, 03 Oct 2025 07:25:49 GMT</pubDate><content:encoded>&lt;p&gt;This blog post provides a detailed step-by-step guide on how to provision a Kubernetes (K8s) cluster using an app blueprint within the HPE Private Cloud Enterprise environment. Together with other key Morpheus components, such as &lt;em&gt;Ansible Integration&lt;/em&gt; and &lt;em&gt;Automation Task and Workflow&lt;/em&gt;, an app blueprint for provisioning K8s clusters can be created. Once configured, this app blueprint enables the provisioning of K8s clusters directly in the Morpheus platform in HPE Private Cloud Enterprise.&lt;/p&gt;
&lt;h2&gt;Overview&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/hpe-private-cloud-enterprise.html&quot;&gt;HPE Private Cloud Enterprise&lt;/a&gt; is a fully managed &lt;em&gt;Infrastructure as a Service&lt;/em&gt; (IaaS) offering that brings a modern, cloud-like experience to on-premises environments. It combines the flexibility of hybrid cloud with the enterprise-grade control and security required by enterprise IT.&lt;/p&gt;
&lt;p&gt;Through integration with &lt;a href=&quot;https://www.hpe.com/us/en/morpheus-enterprise-software.html&quot;&gt;HPE Morpheus Enterprise&lt;/a&gt;, which serves as the cloud management and orchestration layer, HPE Private Cloud Enterprise delivers a unified self-service interface for provisioning virtual machines (VMs), creating containers, and deploying applications, all governed by role-based access control (RBAC).&lt;/p&gt;
&lt;p&gt;Among the key Morpheus components, &lt;em&gt;Ansible Integration&lt;/em&gt; and &lt;em&gt;Automation Task and Workflow&lt;/em&gt; can be used to create an app blueprint for provisioning K8s clusters using Ansible playbooks available from the &lt;em&gt;GitHub&lt;/em&gt; repository. It automatically creates a list of required virtual machine (VM) instances and deploys K8s on top of these VM instances. This blog post walks through the process of creating an app blueprint and using it for K8s cluster provisioning in HPE Private Cloud Enterprise.&lt;/p&gt;
&lt;h2&gt;Prerequisites&lt;/h2&gt;
&lt;p&gt;Ensure that the following prerequisites are fulfilled:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Access to a HPE Private Cloud Enterprise workspace with the &lt;em&gt;&apos;Private Cloud Tenant Owner&apos;&lt;/em&gt; role, allowing administrative actions in the &lt;em&gt;&lt;strong&gt;Virtual Machines&lt;/strong&gt;&lt;/em&gt; service.&lt;/li&gt;
&lt;li&gt;The group named &lt;em&gt;&apos;CFE Department B Group&apos;&lt;/em&gt; and the network &lt;em&gt;&apos;Green-Net&apos;&lt;/em&gt; have already been created.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Create app blueprint&lt;/h2&gt;
&lt;p&gt;To create an app blueprint, you need to log in to HPE GreenLake Cloud and launch the HPE Morpheus Enterprise Dashboard. For a detailed walkthrough of this process, refer to the blog post &lt;a href=&quot;https://developer.hpe.com/blog/provisioning-mks-clusters-in-hpe-greenlake-for-private-cloud-enterprise/&quot;&gt;Provisioning MKS clusters in HPE Private Cloud Enterprise&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;1.  Add Ansible integration&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;From Morpheus Dashboard, navigate to &lt;strong&gt;Administration&lt;/strong&gt; &gt; &lt;strong&gt;Integrations&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/morpheus-intg.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Click &lt;em&gt;&lt;strong&gt;+New Integration&lt;/strong&gt;&lt;/em&gt; and select &lt;em&gt;Ansible&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/morpheus-ansible-intg.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Enter NAME as &apos;cfe-ansible-k8s&apos; and specify ANSIBLE GIT URL, DEFAULT BRANCH, PLAYBOOKS PATH, ROLES PATH, GROUP VARIABLES PATH, DESCRIPTION, and HOST VARIABLES PATH. Click &lt;em&gt;&lt;strong&gt;SAVE CHANGES&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/k8s-ansible-intg.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;Note&lt;/strong&gt;&lt;/em&gt;: Sample Ansible playbooks are available from this &lt;a href=&quot;https://github.com/guopingjia/ansible-k8s&quot;&gt;&lt;em&gt;GitHub&lt;/em&gt; repository&lt;/a&gt; for provisioning a K8s cluster with one master and one worker node, using the native K8s distribution.&lt;/p&gt;
&lt;h3&gt;2. Create tasks and workflows&lt;/h3&gt;
&lt;h4&gt;Create tasks for K8s master and worker&lt;/h4&gt;
&lt;ol&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Library&lt;/strong&gt; -&gt; &lt;strong&gt;Automation&lt;/strong&gt; -&gt; &lt;em&gt;Tasks tab&lt;/em&gt;. Click &lt;em&gt;&lt;strong&gt;+Add&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Enter NAME as &lt;em&gt;cfe-k8s-master&lt;/em&gt; and select TYPE as &lt;em&gt;Ansible Playbook&lt;/em&gt;. Then specify ANSIBLE REPO as &lt;em&gt;cfe-ansible-k8s&lt;/em&gt; and PLAYBOOK as &lt;em&gt;master.yml&lt;/em&gt;. Click &lt;em&gt;&lt;strong&gt;SAVE CHANGES&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/k8s-master-task.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Repeat &lt;em&gt;step 1&lt;/em&gt; and &lt;em&gt;step 2&lt;/em&gt; to create a task for the K8s worker named &lt;em&gt;cfe-k8s-worker&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/k8s-worker-task.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Create workflows for K8s master and worker&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Library&lt;/strong&gt; -&gt; &lt;strong&gt;Automation&lt;/strong&gt; -&gt; &lt;em&gt;Workflows&lt;/em&gt; tab.&lt;/li&gt;
&lt;li&gt;Click &lt;em&gt;&lt;strong&gt;+Add&lt;/strong&gt;&lt;/em&gt; and select &lt;em&gt;Provisioning Workflow&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/morpheus-workflow.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Enter NAME as &lt;em&gt;cfe-k8s-master&lt;/em&gt; and select PLATFORM as &lt;em&gt;Linux&lt;/em&gt;. Then search and select the task &lt;em&gt;cfe-k8s-master&lt;/em&gt;. Click &lt;em&gt;&lt;strong&gt;SAVE CHANGES&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/k8s-master-workflow.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;Repeat &lt;em&gt;step 1&lt;/em&gt; to &lt;em&gt;step 3&lt;/em&gt; to create a workflow for the K8s worker named &lt;em&gt;cfe-k8s-worker&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/k8s-worker-workflow.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Create app blueprint&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Library&lt;/strong&gt; -&gt; &lt;strong&gt;Blueprints&lt;/strong&gt; -&gt; &lt;em&gt;App Blueprints&lt;/em&gt; tab. Click &lt;em&gt;&lt;strong&gt;+Add&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Enter NAME as &lt;em&gt;CFE-K8s-Ubuntu&lt;/em&gt; and select TYPE as &lt;em&gt;Morpheus&lt;/em&gt;. Click &lt;em&gt;&lt;strong&gt;Next&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/k8s-app-blueprint-summary.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Click &lt;em&gt;&lt;strong&gt;+&lt;/strong&gt;&lt;/em&gt; (next to &lt;em&gt;CFE-K8s-Ubuntu&lt;/em&gt;) and select &lt;em&gt;Tier Name&lt;/em&gt; as &lt;em&gt;App&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/k8s-app-blueprint-tier-name.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;Click &lt;em&gt;App&lt;/em&gt; and edit its CONFIGURATION with NAME as &lt;em&gt;CFE-K8s-master&lt;/em&gt; and BOOT ORDER as &lt;em&gt;0&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/k8s-app-blueprint-master-config.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;5&quot;&gt;
&lt;li&gt;Click &lt;em&gt;&lt;strong&gt;+&lt;/strong&gt;&lt;/em&gt; again (next to &lt;em&gt;CFE-K8s-Ubuntu&lt;/em&gt;) and select &lt;em&gt;Tier Name&lt;/em&gt; as &lt;em&gt;App&lt;/em&gt;. Then click &lt;em&gt;App&lt;/em&gt; and edit its CONFIGURATION with NAME as &lt;em&gt;CFE-K8s-worker&lt;/em&gt; and BOOT ORDER as &lt;em&gt;1&lt;/em&gt;. Under &lt;strong&gt;Connected Tiers&lt;/strong&gt;, select &lt;em&gt;CFE-K8s-master&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/k8s-app-blueprint-worker-config.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;6&quot;&gt;
&lt;li&gt;Click &lt;em&gt;&lt;strong&gt;+&lt;/strong&gt;&lt;/em&gt; (next to &lt;em&gt;CFE-K8s-master&lt;/em&gt;) and select &lt;em&gt;vmware&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/k8s-app-blueprint-vmware.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;7&quot;&gt;
&lt;li&gt;Click &lt;em&gt;&lt;strong&gt;+&lt;/strong&gt;&lt;/em&gt;  (next to &lt;em&gt;vmware&lt;/em&gt;) and select &lt;em&gt;Group&lt;/em&gt;, &lt;em&gt;Cloud&lt;/em&gt; and &lt;em&gt;Environment&lt;/em&gt;. Click &lt;em&gt;&lt;strong&gt;Add config&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/k8s-app-blueprint-vmware-config.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;8&quot;&gt;
&lt;li&gt;Click the added config and configure NAME, DESCRIPTION, LAYOUT, PLAN, VOLUMES, NETWORKS and IMAGE.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/k8s-app-blueprint-master.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;9&quot;&gt;
&lt;li&gt;Repeat &lt;em&gt;step 6&lt;/em&gt; to &lt;em&gt;step 8&lt;/em&gt; to configure K8s worker instance settings.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/k8s-app-blueprint-worker.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;10&quot;&gt;
&lt;li&gt;Click &lt;em&gt;&lt;strong&gt;Complete&lt;/strong&gt;&lt;/em&gt;. The final app blueprint structure and configuration are displayed.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/k8s-app-blueprint-final.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;11&quot;&gt;
&lt;li&gt;Review and click &lt;em&gt;&lt;strong&gt;Save&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Deploy K8s cluster&lt;/h2&gt;
&lt;p&gt;Perform the following steps to provision a K8s cluster using the app blueprint &lt;em&gt;CFE-K8s-Ubuntu&lt;/em&gt;:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Provisioning&lt;/strong&gt; -&gt; &lt;strong&gt;Apps&lt;/strong&gt;. Click &lt;em&gt;&lt;strong&gt;+Add&lt;/strong&gt;&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;Select blueprint &lt;em&gt;CFE-K8S-UBUNTU&lt;/em&gt;. Click &lt;em&gt;&lt;strong&gt;Next&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/k8s-app-template.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Enter NAME as &lt;em&gt;CFE-K8s&lt;/em&gt; and select GROUP, DEFAULT CLOUD, and ENVIRONMENT. Click &lt;em&gt;&lt;strong&gt;Next&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/k8s-app-summary.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;Click the config under &lt;em&gt;CFE-K8s-master&lt;/em&gt; and wait for the green check mark to appear (this indicates that all entries are up to date).&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/k8s-app-master-status.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;5&quot;&gt;
&lt;li&gt;Click the config under &lt;em&gt;CFE-K8s-worker&lt;/em&gt; and wait for the green check mark to appear (this indicates that all entries are up to date). Click &lt;em&gt;&lt;strong&gt;Next&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/k8s-app-worker-status.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;6&quot;&gt;
&lt;li&gt;Review and click &lt;em&gt;&lt;strong&gt;Complete&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/k8s-details.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;After a few minutes, the K8s cluster &lt;em&gt;CFE-K8s&lt;/em&gt; is successfully provisioned and displays a &lt;em&gt;Running&lt;/em&gt; status.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/k8s-cluster.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;7&quot;&gt;
&lt;li&gt;Click cluster &lt;em&gt;CFE-K8s&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/k8s-cluster-details.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The K8s cluster &lt;em&gt;CFE-K8s&lt;/em&gt; has been provisioned using the app blueprint &lt;em&gt;CFE-K8s-Ubuntu&lt;/em&gt; with 2 instances.&lt;/p&gt;
&lt;h2&gt;Access K8s cluster&lt;/h2&gt;
&lt;p&gt;Provision an Ubuntu VM instance with &lt;em&gt;kubectl&lt;/em&gt; and &lt;em&gt;helm&lt;/em&gt; installed, and set it up as a jumphost by adding a &lt;em&gt;DNAT&lt;/em&gt; rule to the &lt;em&gt;Router&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/k8s-jumpserver.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Copy the &lt;em&gt;kubeconfig&lt;/em&gt; file of the K8s cluster &apos;CFE-K8s&apos; from its master node at IP &lt;em&gt;172.20.20.116&lt;/em&gt;, and save it locally as &lt;em&gt;config&lt;/em&gt;. You can then access the cluster &apos;CFE-K8s&apos; using the command &lt;em&gt;&apos;kubectl --kubeconfig=./config get nodes&apos;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/k8s-access.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
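&lt;p&gt;From the jumphost, copying the file and checking the nodes might look like the sketch below. The remote user and the kubeconfig path depend on how the Ansible playbooks set up the cluster; &lt;em&gt;/etc/kubernetes/admin.conf&lt;/em&gt; is the usual kubeadm default and is only an assumption here.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ scp ubuntu@172.20.20.116:/etc/kubernetes/admin.conf ./config
$ kubectl --kubeconfig=./config get nodes
&lt;/code&gt;&lt;/pre&gt;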
&lt;h2&gt;Delete K8s cluster&lt;/h2&gt;
&lt;p&gt;Perform the following steps to remove the K8s cluster &lt;em&gt;CFE-K8s&lt;/em&gt; once it is no longer required.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Provisioning&lt;/strong&gt;-&gt; &lt;strong&gt;Apps&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Select &lt;em&gt;CFE-K8s&lt;/em&gt; and click &lt;em&gt;&lt;strong&gt;DELETE&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/k8s-delete.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;After a few minutes, the cluster is successfully deleted, and all associated VM instances are removed.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;This blog post provided a step-by-step walkthrough for provisioning a K8s cluster using an app blueprint integrated with Ansible in the HPE Private Cloud Enterprise environment. With the support of the Morpheus Kubernetes Service (MKS), HPE Private Cloud Enterprise now empowers users to deploy and manage K8s clusters with built-in automation and observability capabilities.&lt;/p&gt;
&lt;p&gt;You can refer to the blog post &lt;a href=&quot;https://developer.hpe.com/blog/provisioning-mks-clusters-in-hpe-greenlake-for-private-cloud-enterprise/&quot;&gt;Provisioning MKS clusters in HPE Private Cloud Enterprise&lt;/a&gt; for a guide to provisioning MKS clusters using predefined MKS cluster layouts. Whether you prefer the flexibility of app blueprints or the streamlined structure of cluster layouts, HPE Private Cloud Enterprise gives you the freedom to choose the approach that best fits your operational needs.&lt;/p&gt;
&lt;p&gt;Please keep coming back to the &lt;a href=&quot;https://developer.hpe.com/blog/&quot;&gt;HPE Developer Community blog&lt;/a&gt; to learn more about HPE Private Cloud Enterprise and get more ideas on how you can use it in your everyday operations.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Provisioning K3s clusters using custom cluster layouts in HPE Private Cloud Enterprise]]></title><description><![CDATA[This blog post outlines the steps to create a custom cluster layout for provisioning a Kubernetes (K8s) cluster using K3s, a lightweight K8s…]]></description><link>https://developer.hpe.com/provisioning-k3s-clusters-using-custom-cluster-layouts-in-hpe-private-cloud-enterprise/</link><guid isPermaLink="false">https://developer.hpe.com/provisioning-k3s-clusters-using-custom-cluster-layouts-in-hpe-private-cloud-enterprise/</guid><pubDate>Fri, 26 Sep 2025 15:47:53 GMT</pubDate><content:encoded>&lt;p&gt;This blog post outlines the steps to create a custom cluster layout for provisioning a Kubernetes (K8s) cluster using &lt;em&gt;K3s&lt;/em&gt;, a lightweight K8s distribution, within the HPE Private Cloud Enterprise environment. By utilizing a list of key Morpheus components, such as &lt;em&gt;Node Type&lt;/em&gt;, &lt;em&gt;File Template&lt;/em&gt;, &lt;em&gt;Option List&lt;/em&gt;, &lt;em&gt;Input&lt;/em&gt;, &lt;em&gt;Task&lt;/em&gt;, &lt;em&gt;Workflow&lt;/em&gt;, and &lt;em&gt;Cluster Layout&lt;/em&gt;, a custom cluster layout that incorporates the &lt;em&gt;K3s&lt;/em&gt; install script can be created. Once configured, this custom cluster layout enables provisioning and management of K3s clusters directly from the Morpheus Clusters page.&lt;/p&gt;
&lt;p&gt;Like the Morpheus Kubernetes Service (MKS), K3s clusters benefit from a curated set of built-in operations, including kubeconfig download, cluster scaling, K3s version upgrade, and cluster cleanup. These integrated capabilities simplify and streamline cluster management tasks, making K8s administration more efficient and user-friendly in HPE Private Cloud Enterprise.&lt;/p&gt;
&lt;h2&gt;Overview&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/hpe-private-cloud-enterprise.html&quot;&gt;HPE Private Cloud Enterprise&lt;/a&gt; is a fully managed &lt;em&gt;Infrastructure as a Service&lt;/em&gt; (IaaS) offering that brings a modern, cloud-like experience to on-premises environments. It combines the flexibility of hybrid cloud with the enterprise-grade control and security required by enterprise IT.&lt;/p&gt;
&lt;p&gt;Through the integration with &lt;a href=&quot;https://www.hpe.com/us/en/morpheus-enterprise-software.html&quot;&gt;HPE Morpheus Enterprise&lt;/a&gt;, which serves as the cloud management and orchestration layer, HPE Private Cloud Enterprise offers a unified self-service interface for provisioning virtual machines (VMs), creating containers, and deploying applications, all governed by role-based access control (RBAC). This integration now supports the Morpheus Kubernetes Service (MKS), enabling users to provision and manage MKS clusters using a set of prebuilt MKS cluster layouts based on the native K8s distribution. To learn more about provisioning MKS clusters in HPE Private Cloud Enterprise, refer to the blog post &lt;a href=&quot;https://developer.hpe.com/blog/provisioning-mks-clusters-in-hpe-greenlake-for-private-cloud-enterprise/&quot;&gt;Provisioning MKS clusters in HPE Private Cloud Enterprise&lt;/a&gt;. Additionally, users can define custom cluster layouts to provision K8s clusters using third-party K8s distributions such as &lt;em&gt;Amazon EKS Anywhere&lt;/em&gt; or &lt;em&gt;K3s&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;The following sections will guide you through the process of creating a custom cluster layout and using it to provision a K3s cluster within HPE Private Cloud Enterprise. Once the cluster is provisioned, a curated list of built-in operations becomes available from the cluster&apos;s &lt;em&gt;Actions&lt;/em&gt; menu. Among these, you will learn how to upgrade the K3s version using one of the supported actions. These integrated features streamline key cluster management tasks, making cluster administration easier, faster, and more consistent.&lt;/p&gt;
&lt;h2&gt;Prerequisites&lt;/h2&gt;
&lt;p&gt;Ensure that the following prerequisites are fulfilled:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Access to an HPE Private Cloud Enterprise workspace with the &lt;em&gt;&apos;Private Cloud Tenant Owner&apos;&lt;/em&gt; role, allowing administrative actions in the &lt;em&gt;&lt;strong&gt;Virtual Machines&lt;/strong&gt;&lt;/em&gt; service.&lt;/li&gt;
&lt;li&gt;The group named &lt;em&gt;&apos;Department B Group&apos;&lt;/em&gt; and the network &lt;em&gt;&apos;Green_network&apos;&lt;/em&gt; have already been created.&lt;/li&gt;
&lt;li&gt;HPE Morpheus Enterprise running &lt;em&gt;version 8.0.5&lt;/em&gt; or higher.&lt;/li&gt;
&lt;li&gt;The MKS feature is enabled in HPE Private Cloud Enterprise. You can confirm the presence of the &lt;em&gt;Clusters&lt;/em&gt; menu from the &lt;em&gt;&lt;strong&gt;Infrastructure&lt;/strong&gt;&lt;/em&gt; tab.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Create custom cluster layout&lt;/h2&gt;
&lt;p&gt;To create a custom cluster layout, you need to log in to &lt;em&gt;HPE GreenLake Cloud&lt;/em&gt; and launch the &lt;em&gt;HPE Morpheus Enterprise Dashboard&lt;/em&gt;. For a detailed walkthrough of this process, refer to the blog post &lt;a href=&quot;https://developer.hpe.com/blog/provisioning-mks-clusters-in-hpe-greenlake-for-private-cloud-enterprise/&quot;&gt;Provisioning MKS clusters in HPE Private Cloud Enterprise&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;1. Create node types&lt;/h3&gt;
&lt;p&gt;From the HPE Morpheus Enterprise &lt;strong&gt;Dashboard&lt;/strong&gt;, perform the following steps to create node types for both the K3s master and worker nodes.&lt;/p&gt;
&lt;h4&gt;Create a K3s primary master node type&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Library&lt;/strong&gt; &gt; &lt;strong&gt;Blueprints&lt;/strong&gt; &gt; &lt;em&gt;Node Types&lt;/em&gt; tab. Click &lt;em&gt;&lt;strong&gt;+Add&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Enter NAME as &lt;em&gt;k3s-primary-master&lt;/em&gt;, choose the &lt;em&gt;Virtual Image&lt;/em&gt; option and select VM IMAGE as &lt;em&gt;Morpheus Ubuntu 20.04 20250218 (vmdk) - 1178&lt;/em&gt;. Click &lt;em&gt;&lt;strong&gt;Save changes&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/k3s-node-type-primary-master.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h4&gt;Create a K3s secondary master node type&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Library&lt;/strong&gt; &gt; &lt;strong&gt;Blueprints&lt;/strong&gt; &gt; &lt;em&gt;Node Types&lt;/em&gt; tab. Click &lt;em&gt;&lt;strong&gt;+Add&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Enter NAME as &lt;em&gt;k3s-secondary-master&lt;/em&gt;, choose the &lt;em&gt;Virtual Image&lt;/em&gt; option and select VM IMAGE as &lt;em&gt;Morpheus Ubuntu 20.04 20250218 (vmdk) - 1178&lt;/em&gt;. Click &lt;em&gt;&lt;strong&gt;Save changes&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/k3s-node-type-secondary-master.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h4&gt;Create a K3s worker node type&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Library&lt;/strong&gt; &gt; &lt;strong&gt;Blueprints&lt;/strong&gt; &gt; &lt;em&gt;Node Types&lt;/em&gt; tab. Click &lt;em&gt;&lt;strong&gt;+Add&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Enter NAME as &lt;em&gt;k3s worker&lt;/em&gt;, choose the &lt;em&gt;Virtual Image&lt;/em&gt; option and select VM IMAGE as &lt;em&gt;Morpheus Ubuntu 20.04 20250218 (vmdk) - 1178&lt;/em&gt;. Click &lt;em&gt;&lt;strong&gt;Save changes&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/k3s-node-type-worker.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;2. Create file template&lt;/h3&gt;
&lt;h4&gt;Create a file template with K8s Secret YAML&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Library&lt;/strong&gt; &gt; &lt;strong&gt;Templates&lt;/strong&gt; &gt; &lt;em&gt;File Templates&lt;/em&gt; tab. Click &lt;em&gt;&lt;strong&gt;+Add&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Enter NAME and FILE NAME, specify FILE PATH, select PHASE and provide TEMPLATE. Click &lt;em&gt;&lt;strong&gt;Save changes&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/file-template.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
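&lt;p&gt;The screenshot does not show the TEMPLATE content itself, so the snippet below is purely illustrative: one possible shape for a file template that drops a K8s Secret manifest onto the node at the configured FILE PATH. The secret name, namespace, target path, and the assumption that it carries a Morpheus service-account token are all hypothetical.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Hypothetical example only: write a service-account token Secret manifest
# to a path on the provisioned node (names and path are assumptions)
cat &lt;&lt;&apos;EOF&apos; &gt; /etc/kubernetes/morpheus-sa-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: morpheus-sa-token
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: morpheus-sa
type: kubernetes.io/service-account-token
EOF
&lt;/code&gt;&lt;/pre&gt;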
&lt;h4&gt;Configure the K3s primary master node&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Library&lt;/strong&gt; &gt; &lt;strong&gt;Blueprints&lt;/strong&gt; &gt; &lt;em&gt;Node Types&lt;/em&gt; tab. Click &lt;em&gt;Edit&lt;/em&gt; for &lt;em&gt;k3s-primary-master&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Under &lt;strong&gt;VMware VM Options&lt;/strong&gt;, search and add &lt;em&gt;morpheus-sa-file-template&lt;/em&gt; to FILE TEMPLATES. Click &lt;em&gt;&lt;strong&gt;Save changes&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/primary-master-node-type-edit.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;3. Create option lists&lt;/h3&gt;
&lt;h4&gt;Create an option list for K3s versions&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Library&lt;/strong&gt; &gt; &lt;strong&gt;Options&lt;/strong&gt; &gt; &lt;em&gt;Option Lists&lt;/em&gt; tab. Click &lt;em&gt;&lt;strong&gt;+Add&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Enter NAME as &lt;em&gt;K3s Versions&lt;/em&gt;, select TYPE as &lt;em&gt;Manual&lt;/em&gt; and provide DATASET. Click &lt;em&gt;&lt;strong&gt;Save changes&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/option-list-k3s-version.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h4&gt;Create an option list for K3s networking&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Library&lt;/strong&gt; &gt; &lt;strong&gt;Options&lt;/strong&gt; &gt; &lt;em&gt;Option Lists&lt;/em&gt; tab. Click &lt;em&gt;&lt;strong&gt;+Add&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Enter NAME as &lt;em&gt;K3s Networking Options&lt;/em&gt;, select TYPE as &lt;em&gt;Manual&lt;/em&gt; and provide DATASET. Click &lt;em&gt;&lt;strong&gt;Save changes&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/option-list-k3s-networking.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;4. Create inputs&lt;/h3&gt;
&lt;h4&gt;Create an input for K3s version&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Library&lt;/strong&gt; &gt; &lt;strong&gt;Options&lt;/strong&gt; &gt; &lt;em&gt;Inputs&lt;/em&gt; tab. Click &lt;em&gt;&lt;strong&gt;+Add&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Enter NAME as &lt;em&gt;K3s Version&lt;/em&gt; and FIELD NAME as &lt;em&gt;k3sversion&lt;/em&gt;, set REQUIRE FIELD and HELP BLOCK, select TYPE as &lt;em&gt;Select List&lt;/em&gt; with LABEL as &lt;em&gt;Version&lt;/em&gt; and choose OPTION LIST as &lt;em&gt;K3s Versions&lt;/em&gt;. Click &lt;em&gt;&lt;strong&gt;Save changes&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/input-k3s-version.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h4&gt;Create an input for K3s networking&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Library&lt;/strong&gt; &gt; &lt;strong&gt;Options&lt;/strong&gt; &gt; &lt;em&gt;Inputs&lt;/em&gt; tab. Click &lt;em&gt;&lt;strong&gt;+Add&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Enter NAME as &lt;em&gt;K3s Networking&lt;/em&gt; and FIELD NAME as &lt;em&gt;k3snetworking&lt;/em&gt;, set REQUIRE FIELD and HELP BLOCK, select TYPE as &lt;em&gt;Select List&lt;/em&gt; with LABEL as &lt;em&gt;Networking&lt;/em&gt; and choose OPTION LIST as &lt;em&gt;K3s Networking Options&lt;/em&gt;. Click &lt;em&gt;&lt;strong&gt;Save changes&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/input-k3s-networking.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h4&gt;Create an input for K3s cluster VIP address&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Library&lt;/strong&gt; &gt; &lt;strong&gt;Options&lt;/strong&gt; &gt; &lt;em&gt;Inputs&lt;/em&gt; tab. Click &lt;em&gt;&lt;strong&gt;+Add&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Enter NAME as &lt;em&gt;K3s Cluster VIP Address&lt;/em&gt; and FIELD NAME as &lt;em&gt;k3svipaddress&lt;/em&gt;, set REQUIRE FIELD and HELP BLOCK, and select TYPE as &lt;em&gt;Text&lt;/em&gt; with LABEL as &lt;em&gt;VIP Address&lt;/em&gt;. Click &lt;em&gt;&lt;strong&gt;Save changes&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/input-k3s-vip.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h4&gt;Create an input for K3s cluster Classless Inter-Domain Routing (CIDR)&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Library&lt;/strong&gt; &gt; &lt;strong&gt;Options&lt;/strong&gt; &gt; &lt;em&gt;Inputs&lt;/em&gt; tab. Click &lt;em&gt;&lt;strong&gt;+Add&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Enter NAME as &lt;em&gt;K3s Cluster CIDR&lt;/em&gt; and FIELD NAME as &lt;em&gt;k3sclustercidr&lt;/em&gt;, set REQUIRE FIELD and HELP BLOCK, and select TYPE as &lt;em&gt;Text&lt;/em&gt; with LABEL as &lt;em&gt;Cluster CIDR&lt;/em&gt;. Click &lt;em&gt;&lt;strong&gt;Save changes&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/input-k3s-cluster-cidr.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h4&gt;Create an input for K3s service CIDR&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Library&lt;/strong&gt; &gt; &lt;strong&gt;Options&lt;/strong&gt; &gt; &lt;em&gt;Inputs&lt;/em&gt; tab. Click &lt;em&gt;&lt;strong&gt;+Add&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Enter NAME as &lt;em&gt;K3s Service CIDR&lt;/em&gt; and FIELD NAME as &lt;em&gt;k3sservicecidr&lt;/em&gt;, set REQUIRE FIELD and HELP BLOCK, and select TYPE as &lt;em&gt;Text&lt;/em&gt; with LABEL as &lt;em&gt;Service CIDR&lt;/em&gt;. Click &lt;em&gt;&lt;strong&gt;Save changes&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/input-k3s-service-cidr.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;5. Create automation task and workflow&lt;/h3&gt;
&lt;h4&gt;Create an automation task using K3s install script&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Library&lt;/strong&gt; &gt; &lt;strong&gt;Automation&lt;/strong&gt; &gt; &lt;em&gt;Tasks&lt;/em&gt; tab. Click &lt;em&gt;&lt;strong&gt;+Add&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Enter NAME as &lt;em&gt;K3s HA Install Script&lt;/em&gt;, select TYPE as &lt;em&gt;Shell Script&lt;/em&gt;, choose the &lt;em&gt;SUDO&lt;/em&gt; option, select SOURCE as &lt;em&gt;Local&lt;/em&gt;, and provide CONTENT. Click &lt;em&gt;&lt;strong&gt;Save changes&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/k3s-install-task.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The sample K3s HA install script is available from the &lt;a href=&quot;https://github.com/GuopingJia/k3s-demo/blob/main/K3s-Install-Script.sh&quot;&gt;&lt;em&gt;GitHub&lt;/em&gt; repository&lt;/a&gt;.&lt;/p&gt;
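&lt;p&gt;For orientation, here is a minimal sketch of a K3s HA bootstrap on the primary master node; it is not the linked sample script itself. The shell variables stand in for the inputs defined earlier (k3sversion, k3svipaddress, k3sclustercidr, and k3sservicecidr), which Morpheus makes available to the task at provisioning time, and the concrete values shown are only illustrative.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Minimal K3s HA bootstrap sketch (primary master only); illustrative values
K3S_VERSION=&quot;v1.31.13+k3s1&quot;    # would come from the K3s Version input
VIP_ADDRESS=&quot;172.20.20.200&quot;    # hypothetical free IP used as the cluster VIP
CLUSTER_CIDR=&quot;10.42.0.0/16&quot;    # K3s default pod network
SERVICE_CIDR=&quot;10.43.0.0/16&quot;    # K3s default service network

curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=&quot;$K3S_VERSION&quot; sh -s - server \
  --cluster-init \
  --tls-san &quot;$VIP_ADDRESS&quot; \
  --cluster-cidr &quot;$CLUSTER_CIDR&quot; \
  --service-cidr &quot;$SERVICE_CIDR&quot;

# Secondary masters and workers then join through the VIP with the cluster token,
# e.g. K3S_URL=https://$VIP_ADDRESS:6443 and K3S_TOKEN=... (elided here)
&lt;/code&gt;&lt;/pre&gt;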
&lt;h4&gt;Create an automation workflow&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Library&lt;/strong&gt; &gt; &lt;strong&gt;Automation&lt;/strong&gt; &gt; &lt;em&gt;Workflows&lt;/em&gt; tab. Click &lt;em&gt;&lt;strong&gt;+Add&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Enter NAME as &lt;em&gt;K3s HA Install&lt;/em&gt;, select PLATFORM as &lt;em&gt;Linux&lt;/em&gt;, search and add &lt;em&gt;K3s HA Install Script&lt;/em&gt; under &lt;strong&gt;Provision&lt;/strong&gt;. Click &lt;em&gt;&lt;strong&gt;Save changes&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/k3s-install-workflow.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;6. Create K3s custom cluster layouts&lt;/h3&gt;
&lt;p&gt;The following section outlines the creation of two custom cluster layouts: one for a K3s high-availability (HA) setup and another for a single-master K3s configuration.&lt;/p&gt;
&lt;h4&gt;Create a K3s HA cluster layout&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Library&lt;/strong&gt; &gt; &lt;strong&gt;Blueprints&lt;/strong&gt; &gt; &lt;em&gt;Cluster Layouts&lt;/em&gt; tab. Click &lt;em&gt;&lt;strong&gt;+Add&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Enter NAME as &lt;em&gt;K3S HA Cluster&lt;/em&gt;, select CLUSTER TYPE as &lt;em&gt;Kubernetes Cluster&lt;/em&gt; and TECHNOLOGY as &lt;em&gt;VMware&lt;/em&gt;, search and add &lt;strong&gt;Inputs&lt;/strong&gt;, &lt;strong&gt;Master Nodes&lt;/strong&gt; and &lt;strong&gt;Worker Nodes&lt;/strong&gt;. Click &lt;em&gt;&lt;strong&gt;Save changes&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/k3s-ha-cluster-layout.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h4&gt;Create a single-master K3s cluster layout&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Library&lt;/strong&gt; &gt; &lt;strong&gt;Blueprints&lt;/strong&gt; &gt; &lt;em&gt;Cluster Layouts&lt;/em&gt; tab. Click &lt;em&gt;&lt;strong&gt;+Add&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Enter NAME as &lt;em&gt;K3S Single-Master Cluster&lt;/em&gt;, select CLUSTER TYPE as &lt;em&gt;Kubernetes Cluster&lt;/em&gt; and TECHNOLOGY as &lt;em&gt;VMware&lt;/em&gt;, search and add &lt;strong&gt;Inputs&lt;/strong&gt;, &lt;strong&gt;Master Nodes&lt;/strong&gt; and &lt;strong&gt;Worker Nodes&lt;/strong&gt;. Click &lt;em&gt;&lt;strong&gt;Save changes&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/k3s-single-master-cluster-layout.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Provision a K3s cluster&lt;/h2&gt;
&lt;p&gt;The following section outlines the process to provision a K3s cluster using the custom cluster layout &lt;em&gt;K3S HA Cluster&lt;/em&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Infrastructure &gt; Clusters&lt;/strong&gt;. Click &lt;em&gt;&lt;strong&gt;+Add Cluster&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Select &lt;em&gt;Cluster Type&lt;/em&gt; as &lt;em&gt;KUBERNETES CLUSTER&lt;/em&gt;. Click &lt;em&gt;&lt;strong&gt;Next&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/k3s-cluster-type.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Select &lt;em&gt;Group&lt;/em&gt; as &lt;em&gt;Department B Group&lt;/em&gt;. Click &lt;em&gt;&lt;strong&gt;Next&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/k3s-group.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Select CLOUD, enter CLUSTER NAME as &lt;em&gt;k3s-ha&lt;/em&gt;, and configure optional DESCRIPTION, RESOURCE NAME and LABELS. Click &lt;em&gt;&lt;strong&gt;Next&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/k3s-cluster-name.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Select LAYOUT as &lt;em&gt;K3S HA Cluster&lt;/em&gt; and PLAN, configure VOLUMES, select NETWORKS, VERSION and NETWORKING, and configure VIP ADDRESS, CLUSTER CIDR and SERVICE CIDR. Click &lt;em&gt;&lt;strong&gt;Next&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/k3s-cluster-master-config.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Specify NUMBER OF WORKERS, along with PLAN, VOLUMES, and NETWORKS. You may retain the default settings or reuse the values previously configured. Click &lt;em&gt;&lt;strong&gt;Next&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/k3s-cluster-worker-config.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Skip the Automation settings step. The cluster review screen displays. Click &lt;em&gt;&lt;strong&gt;Complete&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/k3s-cluster-review.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;After a few minutes, the K3s cluster &lt;em&gt;k3s-ha&lt;/em&gt; is created using the specified custom cluster layout: &lt;em&gt;K3S HA Cluster&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/k3s-ha-cluster.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Verify K3s cluster&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Infrastructure &gt; Clusters&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Click the K3s cluster &lt;em&gt;k3s-ha&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/k3s-ha-cluster-details.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The K3s cluster page displays the count of controller and worker nodes, all showing an &lt;em&gt;Ok&lt;/em&gt; status.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Click the &lt;em&gt;History&lt;/em&gt; tab.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/k3s-ha-cluster-history.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;It contains all the details of the K3s cluster provisioning process, starting from the master nodes to the worker nodes.&lt;/p&gt;
&lt;h2&gt;Access K3s cluster&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Infrastructure &gt; Clusters&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Click the K3s cluster &lt;em&gt;k3s-ha&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;From the &lt;em&gt;Control&lt;/em&gt; tab, type a command, e.g., &lt;em&gt;get nodes&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/k3s-ha-cluster-access.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;It shows both master and worker nodes in the cluster, together with their K8s versions and status.&lt;/p&gt;
&lt;p&gt;From the cluster&apos;s &lt;em&gt;&lt;strong&gt;Actions&lt;/strong&gt;&lt;/em&gt; menu, you can click &lt;em&gt;Upgrade Cluster&lt;/em&gt; to upgrade the K3s cluster to a newer version, e.g., &lt;em&gt;1.31.13&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/k3s-ha-cluster-upgrade.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
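&lt;p&gt;For context, outside of Morpheus an in-place K3s upgrade typically amounts to re-running the installer with a newer version on the server nodes first and the agents afterwards, reusing the original flags; whether the &lt;em&gt;Upgrade Cluster&lt;/em&gt; action performs exactly these steps internally is an assumption.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Node-level view of a manual K3s upgrade (illustrative version string),
# run with the same server flags as the original install
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=&quot;v1.31.13+k3s1&quot; sh -s - server

# Afterwards, each node should report the new version
kubectl get nodes -o wide
&lt;/code&gt;&lt;/pre&gt;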
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;This blog post walked through the process of creating custom cluster layouts and using them to provision K3s clusters within HPE Private Cloud Enterprise. Supporting both native and third-party K8s distributions, along with preconfigured MKS cluster layouts and user-defined custom cluster layouts, HPE Private Cloud Enterprise offers flexible provisioning options to suit diverse deployment needs. Once a cluster is operational, administrators can simplify day-to-day management using the built-in &lt;em&gt;Actions&lt;/em&gt; menu, enabling the download of kubeconfig files, scaling of worker nodes, cluster version upgrades, and decommissioning of unused clusters. These integrated capabilities make K8s administration more streamlined, reliable, and intuitive across the enterprise.&lt;/p&gt;
&lt;p&gt;Please keep coming back to the &lt;a href=&quot;https://developer.hpe.com/blog/&quot;&gt;HPE Developer Community blog&lt;/a&gt; to learn more about HPE Private Cloud Enterprise and get more ideas on how you can use it in your everyday operations.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Self-activation and onboarding in HPE Complete Care Service – ITOps]]></title><description><![CDATA[One of the HPE Complete Care Service - ITOps slogans is Smart decisions start with good data. HPE Complete Care service - ITOps is an HPE…]]></description><link>https://developer.hpe.com/self-activation-and-onboarding-in-hpe-complete-care-service-–-itops/</link><guid isPermaLink="false">https://developer.hpe.com/self-activation-and-onboarding-in-hpe-complete-care-service-–-itops/</guid><pubDate>Fri, 26 Sep 2025 10:01:32 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;p&gt;One of the HPE Complete Care Service - ITOps slogans is &lt;strong&gt;Smart decisions start with good data&lt;/strong&gt;. HPE Complete Care service - ITOps is an HPE service that helps customers monitor, manage, and operate their entire IT environment regardless of the location of those IT assets — on-premises or cloud native.&lt;/p&gt;
&lt;p&gt;This service includes a cloud-based observability service, powered by &lt;a href=&quot;https://www.hpe.com/us/en/opsramp.html&quot;&gt;HPE OpsRamp Software&lt;/a&gt;, a SaaS-based IT Operations Management (ITOM) solution. HPE OpsRamp Software is an AI-powered command center that simplifies and optimizes IT operations. HPE OpsRamp Software gives customers complete control over their hybrid IT environment to discover, map, monitor, and troubleshoot infrastructure health and service performance, all from a single centralized command center.&lt;/p&gt;
&lt;p&gt;HPE OpsRamp Software leverages OpenTelemetry and the extended Berkeley Packet Filter (eBPF) technologies to collect telemetry data. It enables real-time monitoring, spots anomalies, and predicts potential issues, so customers can stay ahead of performance problems before they impact their business.&lt;/p&gt;
&lt;p&gt;HPE Complete Care service – ITOps provides customers with enterprise licenses, 24x7 access to the HPE OpsRamp platform, and 24x7 access to HPE OpsRamp experts. The number of licenses is determined by the configuration, such as the number of devices.&lt;/p&gt;
&lt;p&gt;The HPE Account Service Manager (ASM) will reach out to HPE Complete Care customers to gather the information needed to configure the observability service enabled by HPE OpsRamp Software and to begin the onboarding process for the HPE OpsRamp Software command center.&lt;/p&gt;
&lt;p&gt;After you get access to the HPE OpsRamp Software command center for the HPE Complete Care service – ITOps, you can get started with the journey to &lt;strong&gt;self-activate&lt;/strong&gt; the observability service enabled by HPE OpsRamp Software.&lt;/p&gt;
&lt;p&gt;This blog post walks you through the sequence of steps, using a series of &lt;strong&gt;audio-visual learning experiences&lt;/strong&gt;, that helps you get started with the setup of the HPE OpsRamp Software command center to discover, monitor, and observe the health, performance, and availability of the systems and devices included in your HPE Complete Care Service – ITOps contract.&lt;/p&gt;
&lt;h2&gt;Assumptions&lt;/h2&gt;
&lt;p&gt;In this scenario, it is assumed that the HPE Account Service Manager (ASM) has reached out to you to complete the HPE Complete Care Service – ITOps request form (provided by your HPE ASM) to set up and configure your HPE OpsRamp Software command center instance.&lt;/p&gt;
&lt;h2&gt;Prerequisites&lt;/h2&gt;
&lt;p&gt;Once you get your HPE OpsRamp Software command center instance, there are some setup requirements that must be fulfilled for a &lt;strong&gt;self-activation&lt;/strong&gt; of the instance to monitor, control, and manage resources:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Installing a gateway collector appliance in a virtual environment as a prerequisite to enable discovery of systems and resources before they can be monitored.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Installing integration modules to discover, monitor and manage agentless SSH-enabled resources and physical compute devices.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Creating and customizing modern dashboards to easily identify anomalies and troubleshoot issues quickly.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Installing and configuring a gateway collector appliance&lt;/h2&gt;
&lt;p&gt;Resources need to be discovered before they can be monitored and metrics collected. With HPE OpsRamp Software, you discover, monitor, and manage infrastructure resources (compute, storage, network) using an &lt;strong&gt;agentless&lt;/strong&gt; method with a &lt;strong&gt;gateway collector appliance&lt;/strong&gt; installed &lt;strong&gt;within your firewall&lt;/strong&gt; environment. This appliance can be a virtual machine or a cloud-native application that runs on your own Kubernetes environment.&lt;/p&gt;
&lt;p&gt;There is &lt;a href=&quot;https://www.youtube.com/watch?v=c0ZmdwACq2A&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;a short video on YouTube&lt;/a&gt; that walks through the installation of the gateway collector appliance in the HPE Complete Care Service – ITOps command center instance. For details about installing and registering a gateway collector appliance, you can refer to the blog post &lt;a href=&quot;https://developer.hpe.com/blog/hybrid-observability-service-%E2%80%93-part-2-initial-configuration-to-enable-the-discovery-of-resources-in-hpe-greenlake-flex-solutions/&quot;&gt;Initial configuration to enable the discovery of resources&lt;/a&gt;, section &lt;em&gt;&lt;strong&gt;Installing and configuring a gateway collector appliance&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;To learn more about the gateway collector appliance installation and activation procedures, and deployment requirements, refer to the &lt;a href=&quot;https://docs.opsramp.com/platform-features/&quot;&gt;HPE OpsRamp Software documentation&lt;/a&gt;.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Agent-based methods can also be used for the discovery of resources running Linux and Microsoft Windows Operating Systems. To learn more about agents, refer to the &lt;a href=&quot;https://docs.opsramp.com/platform-features/agents/&quot;&gt;HPE OpsRamp Software Agent documentation&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Installing an agentless SSH-enabled integration module&lt;/h2&gt;
&lt;p&gt;The HPE OpsRamp Software command center supports agentless monitoring. Agentless monitors use the &lt;strong&gt;gateway collector appliance&lt;/strong&gt; to discover resources via SSH and monitor agentless IT infrastructure resources to track their health, performance, and availability.&lt;/p&gt;
&lt;p&gt;The SSH Agentless Integration module discovers Linux/Unix-based systems &lt;strong&gt;without installing an agent&lt;/strong&gt;, by securely connecting to the device over SSH via the gateway collector appliance. A &lt;strong&gt;Monitoring Template&lt;/strong&gt; then needs to be assigned to the target agentless system for collecting data for monitoring the metrics and resource availability.&lt;/p&gt;
&lt;p&gt;There is &lt;a href=&quot;https://www.youtube.com/watch?v=a1GVV-b9hCI&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;a short video on YouTube&lt;/a&gt; that walks through the installation of the agentless SSH-enabled system integration module in the HPE Complete Care Service – ITOps command center instance. For details about installing and configuring an agentless SSH integration module, you can refer to the blog post &lt;a href=&quot;https://developer.hpe.com/blog/hybrid-observability-service-%E2%80%93-part-3-enabling-the-monitoring-of-agentless-ssh-enabled-systems-in-hpe-greenlake-flex-solutions/&quot;&gt;Enabling the monitoring of agentless SSH-enabled systems&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Discovering and monitoring physical computing devices via the Redfish API&lt;/h2&gt;
&lt;p&gt;The &lt;a href=&quot;https://docs.opsramp.com/integrations/compute/server-hardware-monitoring-redfish/redfish-server/&quot;&gt;Redfish - Server integration module&lt;/a&gt; monitors and manages physical servers via the Redfish API, a modern, standardized interface for out-of-band hardware management.&lt;/p&gt;
&lt;p&gt;The Redfish Server integration module enables the &lt;strong&gt;discovery&lt;/strong&gt; of physical server hardware and its components (CPU, memory, storage, power supplies, fans, temperature, and so on). Once discovered, Redfish Server monitors are &lt;strong&gt;automatically&lt;/strong&gt; applied to the resource via predefined &lt;strong&gt;Global Monitoring Templates&lt;/strong&gt; and &lt;strong&gt;Global Device Management Policies&lt;/strong&gt; to manage the health and status of the server’s hardware components.&lt;/p&gt;
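&lt;p&gt;As a standalone illustration of the Redfish API itself, independent of OpsRamp, the request below queries a server&apos;s BMC out of band; the BMC address and credentials are placeholders.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# List the systems exposed by the BMC through the standard Redfish service root
curl -sk -u admin:password https://bmc.example.net/redfish/v1/Systems | python3 -m json.tool

# Drilling into a member (commonly /redfish/v1/Systems/1 on HPE iLO) returns the
# processor, memory, power, and thermal health used by the integration module
&lt;/code&gt;&lt;/pre&gt;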
&lt;p&gt;There is &lt;a href=&quot;https://www.youtube.com/watch?v=htZwkW-zG00&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;a short video on YouTube&lt;/a&gt; that walks through the installation of the Redfish - Server integration module in the HPE Complete Care Service – ITOps command center instance. For details about installing a Redfish – Server integration module, you can refer to the blog post &lt;a href=&quot;https://developer.hpe.com/blog/hybrid-observability-service-%E2%80%93-part-4-enabling-the-monitoring-of-physical-devices-in-hpe-greenlake-flex-solutions/&quot;&gt;Enabling the monitoring of physical devices&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Creating and customizing modern dashboards&lt;/h2&gt;
&lt;p&gt;Once monitoring tools have collected metrics, the next step in creating actionable insights is to visualize them in a dashboard. A dashboard is a collection of charts based on tiles and widgets for visualizing metrics data measured over intervals of time.&lt;/p&gt;
&lt;p&gt;The purpose of using and creating customizable dashboards is to easily identify anomalies so IT operations can identify them and troubleshoot issues quickly.&lt;/p&gt;
&lt;p&gt;HPE &lt;strong&gt;recommends leveraging the modern dashboard 2.0&lt;/strong&gt; for its advanced capabilities.&lt;/p&gt;
&lt;p&gt;There is &lt;a href=&quot;https://www.youtube.com/watch?v=MPTq-3EA60E&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;a short video on YouTube&lt;/a&gt; that walks through the creation and customization of a modern dashboard 2.0 in the HPE Complete Care Service – ITOps command center instance. For details about creating and customizing a modern dashboard 2.0, you can refer to the blog post &lt;a href=&quot;https://developer.hpe.com/blog/hybrid-observability-service-%E2%80%93-part-4-enabling-the-monitoring-of-physical-devices-in-hpe-greenlake-flex-solutions/&quot;&gt;Enabling the monitoring of physical devices&lt;/a&gt;, section &lt;em&gt;&lt;strong&gt;Dashboard&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;To learn more about &lt;strong&gt;dashboard 2.0&lt;/strong&gt;, see the &lt;a href=&quot;https://docs.opsramp.com/platform-features/feature-guides/dashboards/dashboard20/&quot;&gt;HPE OpsRamp Software Dashboards documentation&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;The HPE Complete Care Service – ITOps, powered by HPE OpsRamp Software, discovers the compute, network, and storage infrastructure, the applications and workloads they host, and their dependencies. It then observes and monitors health, performance, and capacity events, metrics, logs, traces, and network flows, providing customers with true end-to-end visibility.&lt;/p&gt;
&lt;p&gt;This blog post walked you through the sequence of steps, leveraging a series of video tutorials, for a self-activation of the HPE Complete Care Service – ITOps command center instance. This set of videos goes through the setup of the HPE OpsRamp Software command center to discover, monitor, and observe the health, performance, and availability of resources through a gateway collector appliance, and creation of a modern dashboard 2.0.&lt;/p&gt;
&lt;p&gt;For &lt;strong&gt;the HPE-assisted activation and onboarding of the HPE Complete Care Service – ITOps&lt;/strong&gt;, refer to the &lt;a href=&quot;https://developer.hpe.com/platform/activation-and-onboarding-in-hpe-complete-care-service-itops/home/&quot;&gt;HPE Complete Care Service – ITOps landing page&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;To resolve issues with the HPE Complete Care Service – ITOps command center instance, contact the HPE support team. Refer to the &lt;a href=&quot;https://www.hpe.com/us/en/collaterals/collateral.a50009342enw.html&quot;&gt;HPE Contractual Support service documentation&lt;/a&gt; for detailed information.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Open sourcing Workshops-on-Demand - Part 5: Create a new workshop]]></title><description><![CDATA[In this article that is part of our series dedicated to the open sourcing of our Workshops-on-Demand project, I will focus on the steps…]]></description><link>https://developer.hpe.com/open-sourcing-workshops-on-demand-part-5-create-a-workshop/</link><guid isPermaLink="false">https://developer.hpe.com/open-sourcing-workshops-on-demand-part-5-create-a-workshop/</guid><pubDate>Fri, 26 Sep 2025 08:24:03 GMT</pubDate><content:encoded>&lt;p&gt;In this article that is part of our series dedicated to the &lt;a href=&quot;https://developer.hpe.com/blog/willing-to-build-up-your-own-workshops-on-demand-infrastructure/&quot;&gt;open sourcing of our Workshops-on-Demand project&lt;/a&gt;, I will focus on the steps necessary to build up a new workshop. In my previous posts, I have already covered most of the pieces on how to set up the infrastructure to support the workshops. Now let&apos;s focus a little more on the content creation.&lt;/p&gt;
&lt;h1&gt;Overview&lt;/h1&gt;
&lt;p&gt;Let&apos;s start with a simple flowchart describing the 10,000-foot view of the creation process:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/wod-b-process.png&quot; alt=&quot;&quot; title=&quot;Workshop&amp;#x27;s creation flow.&quot;&gt;&lt;/p&gt;
&lt;p&gt;As you can see, there&apos;s no rocket science here. Just common sense. Depending on the workshop you wish to create, some obvious requirements should show up. A workshop based on a programming language, for instance, may require the relevant kernel to be set up on the JupyterHub server, as sketched below. The following &lt;a href=&quot;https://gist.github.com/chronitis/682c4e0d9f663e85e3d87e97cd7d1624&quot;&gt;page&lt;/a&gt; lists all available kernels.&lt;/p&gt;
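&lt;p&gt;As an example, registering an extra kernel on the JupyterHub host generally follows an install-then-register pattern; the snippet below uses the Bash kernel and assumes a pip-managed Python environment.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Install the kernel package, register it with Jupyter, then verify it is visible
pip install bash_kernel
python -m bash_kernel.install
jupyter kernelspec list
&lt;/code&gt;&lt;/pre&gt;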
&lt;p&gt;Some workshops might need a specific infrastructure set up in order to run. A Kubernetes 101 workshop, for instance, could not exist without the presence of a proper Kubernetes cluster. The same thing goes for any HPE-related solutions.&lt;/p&gt;
&lt;p&gt;In setting up the infrastructure, there are a number of things you need at a minimum. The design of this open-sourcing project makes it very easy to deploy a development and test environment, a staging environment, and at least one production environment.&lt;/p&gt;
&lt;p&gt;Let&apos;s look at how each of these environments is defined:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Development Environment:&lt;/strong&gt; This is where application/system development tasks, such as designing, programming, debugging of a workshop, etc., take place.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Test Environment:&lt;/strong&gt; As the name implies, this is where the workshop testing is conducted to find and fix errors.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Staging Environment:&lt;/strong&gt; Here, all the work done in the development environment is merged into the built system (often used to automate the process of software compilation) before it is moved into the production environment.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Production Environment:&lt;/strong&gt; The last environment in workshop development, this is where new builds/updates are moved into production for end users.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;When the HPE Developer Community began implementing their Workshop-on-Demand program, they originally only had a development and a test &amp;#x26; staging environment on one end and a production environment on the other.&lt;/p&gt;
&lt;p&gt;In this post, I won&apos;t focus on the subject selection process. I&apos;ll leave that to you to figure out. I will, however, talk a little bit again about the infrastructure, especially the dedicated scripts and variables that you need to create to support the lifecycle of the workshop. As usual, there are two sides to the workshop&apos;s creation: what should be done on the backend and what needs to be done on the api-db server.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/wod-blogserie3-archi3.png&quot; alt=&quot;&quot; title=&quot;WOD Overview.&quot;&gt;&lt;/p&gt;
&lt;h2&gt;What is a workshop? What do you need to develop?&lt;/h2&gt;
&lt;p&gt;Let me use an example to better explain this. There&apos;s this engineer, Matt, who has a great deal of knowledge that he would like to share. He was kind enough to agree to work with me on creating a new workshop. After our first meeting, where I explained the creation process and the expectations, we were able to quickly start designing a workshop that would help him do this.&lt;/p&gt;
&lt;p&gt;We defined what was needed:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A set of notebooks that will be used by the student&lt;/li&gt;
&lt;li&gt;These notebooks contain instruction cells in markdown and runnable code cells leveraging the relevant kernel. If you are not familiar with Jupyter notebooks, a simple &lt;a href=&quot;https://developer.hpe.com/hackshack/workshop/25&quot;&gt;101 workshop&lt;/a&gt; is available in our Workshops-on-Demand catalog.&lt;/li&gt;
&lt;li&gt;A student range for the workshop.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;A workshop should contain at least:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;0-ReadMeFirst.ipynb&lt;/li&gt;
&lt;li&gt;1-WKSHP-LAB1.ipynb&lt;/li&gt;
&lt;li&gt;2-WKSHP-LAB2.ipynb&lt;/li&gt;
&lt;li&gt;3-WKSHP-Conclusion.ipynb&lt;/li&gt;
&lt;li&gt;LICENCE.MD&lt;/li&gt;
&lt;li&gt;A pictures folder (if any screenshot is required in lab instructions)&lt;/li&gt;
&lt;li&gt;A README.md (0-ReadMeFirst.ipynb in md format)&lt;/li&gt;
&lt;li&gt;A wod.yml&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To make the workshop compliant with our platform, Matt just needs to provide a final file that contains a set of metadata that will be used for the workshop&apos;s integration into the infrastructure. This file is called &lt;strong&gt;wod.yml&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Matt can leverage a simple &lt;strong&gt;wod.yml&lt;/strong&gt; file containing this metadata, which can later be parsed in order to feed the database with the relevant info. Quite handy, no? Moreover, the same script that creates the workshop entry in the database can also be used to update it.&lt;/p&gt;
&lt;p&gt;Here is an example of such a file:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;%YAML 1.1
# Meta data for the GO101 Workshop to populate seeder
---
name: &apos;GO 101 - A simple introduction to Go Programming Language&apos;
description: &apos;Go, also called Golang or Go language, is an open source programming language that Google developed. Software developers use Go in an array of operating systems and frameworks to develop web applications, cloud and networking services, and other types of software. This workshop will drive you through the basics of this programming language.&apos;
active: true
capacity: 20
priority: 1
range: [151-170]
reset: false
ldap: false
location: &apos;fully qualified domain name of default JupyterHub server&apos;
replayId: 31
varpass: false
compile: false
workshopImg: &apos;https://us-central1-grommet-designer.cloudfunctions.net/images/frederic-passeron-hpe-com/WOD-GO-101-A-simp-introduction-to-Go-programming-language.jpeg&apos;
badgeImg: &apos;https://us-central1-grommet-designer.cloudfunctions.net/images/frederic-passeron-hpe-com/go101-a-simple-introduction-to-go-programming-language.jpg&apos;
beta: false
category: [&apos;Open Source&apos;]
duration: 4
alternateLocation: [&apos;fully qualified domain name of an alternate JupyterHub server&apos;]
presenter: &apos;Matthew Doddler&apos;
role: &apos;FullStack developer&apos;
avatar: &apos;/img/SpeakerImages/MattD.jpg&apos;
replayLink: &apos;https://hpe-developer-portal.s3.amazonaws.com/Workshops-on-Demand-Coming-Soon-Replay.mp4&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This file will be used to update the &lt;strong&gt;workshops table&lt;/strong&gt; in the database. Let&apos;s have a look at what a new entry could look like:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/wod-db-go-1.png&quot; alt=&quot;&quot; title=&quot;GO 101 Workshop DB screenshot&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/wod-db-go-2.png&quot; alt=&quot;&quot; title=&quot;GO 101 Workshop DB screenshot&quot;&gt;&lt;/p&gt;
&lt;p&gt;As a contributor, Matt should be able to provide all the following details.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;ID:&lt;/strong&gt; A workshop ID used by the backend server automation and by the Replays table to reference the associated replay video of the workshop (automatically created during the wod.yml import process)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;name:&lt;/strong&gt; The workshop&apos;s name as it will be displayed on the registration portal&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;notebook:&lt;/strong&gt; The name of the folder containing all the workshop&apos;s notebooks (automatically created during the wod.yml import process)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;description:&lt;/strong&gt; The workshop&apos;s abstract as it will be displayed on the registration portal&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;avatar, role and replayLink&lt;/strong&gt; are superseded by entries in the replay table (I will explain later)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;replayId:&lt;/strong&gt; This entry links the dedicated replay video to the workshop and enables the replay to appear on the workshop&apos;s learn more page (automatically created during the wod.yml import process)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;category:&lt;/strong&gt; The workshops&apos; registration portal proposes several filters to display the catalog&apos;s content. You can view all workshops, the most popular ones, or filter by category. Use this field to sort workshops accordingly.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;duration:&lt;/strong&gt; All workshops are time-limited. You define here the time allocated to complete the workshop&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;active:&lt;/strong&gt; Flag to set to enable visibility of the workshop&apos;s tile in the registration portal&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;workshopImg:&lt;/strong&gt; As part of the lifecycle of the workshop, several emails are sent to the student. A workshop image is embedded in the first emails&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;session type:&lt;/strong&gt; Workshops-on-Demand by default (automatically created during the wod.yml import process)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The following fields are required by the infrastructure. In this example, I will work as the infrastructure Admin with Matt to define them.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;capacity:&lt;/strong&gt; The maximum number of concurrent students allowed to take the workshop&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;range:&lt;/strong&gt; The range within which students get picked at registration time&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;reset and ldap&lt;/strong&gt; entries are to be used by backend server automation if dedicated reset scripts and ldap authentication are required by the workshop&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;location:&lt;/strong&gt; If your setup includes multiple JupyterHub servers, use this field to allocate workshops according to your needs.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;compile:&lt;/strong&gt; This entry will be filled with the name of a script to be compiled at deployment time. This feature allows the admin to hide login scripts and credentials in non-editable executable files.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;varpass:&lt;/strong&gt; This defines whether or not a password variable needs to be leveraged by the workshop&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;badgeImg:&lt;/strong&gt; As part of the lifecycle of the workshop, several emails are sent to the student. In the final email, a badge is included. It allows the student to share their accomplishment on social media like LinkedIn, for instance.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;beta:&lt;/strong&gt; Not implemented yet :-)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;alternateLocation:&lt;/strong&gt; (Future development) The purpose is to allow automation of the relocation of a workshop in case of primary location&apos;s failure&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;replayLink:&lt;/strong&gt; YouTube link of the recorded video to be used as a replay&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;replayid:&lt;/strong&gt; This ID is used to link the correct video to the workshop. This is the replayId present in the workshops table (automatically created during the wod.yml import process)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;monoAppliance:&lt;/strong&gt; Some workshops require a single dedicated appliance&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;multiAppliance:&lt;/strong&gt; Some workshops require multiple dedicated appliances&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;/em&gt; Both workshopImg and badgeImg are delivered by the frontend web server.&lt;/p&gt;
&lt;p&gt;If you feel you need more details about the registration process, please take a look at the &lt;strong&gt;Register Phase&lt;/strong&gt; paragraph in &lt;a href=&quot;https://developer.hpe.com/blog/willing-to-build-up-your-own-workshops-on-demand-infrastructure/&quot;&gt;the introductory blog post&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Matt will create a simple workshop that does not require any infrastructure but the JupyterHub itself. As far as the infrastructure&apos;s requirements go, only a new kernel was needed. No additional scripts were required for this workshop.&lt;/p&gt;
&lt;p&gt;As an admin of the Workshops-on-Demand infrastructure, I have to perform several tasks on a development environment and a staging environment:&lt;/p&gt;
&lt;h3&gt;On the backend server:&lt;/h3&gt;
&lt;p&gt;Testing and validating installation of the new kernel on the staging backend server by:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Creating a new branch for this test&lt;/li&gt;
&lt;li&gt;Modifying the &lt;a href=&quot;https://github.com/Workshops-on-Demand/wod-backend/blob/main/ansible/install_backend.yml#L326&quot;&gt;backend server installation yaml file&lt;/a&gt; to include the new kernel&lt;/li&gt;
&lt;li&gt;Validating the changes by testing a new backend install process&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Pushing the changes to the GitHub repo:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Create a user for the workshop developer on the test/dev and staging backend servers&lt;/li&gt;
&lt;li&gt;Provide to the developer the necessary information to connect to the test/dev and staging backend servers&lt;/li&gt;
&lt;li&gt;Copy a workshop template containing examples of introduction, conclusion, and lab notebooks into the developer&apos;s home folder, allowing him to start his work&lt;/li&gt;
&lt;li&gt;Give the developer the wod-notebook repo url for him to fork the repo and work locally on his machine (when the workshop does not require an appliance but just a Jupyter kernel for instance)&lt;/li&gt;
&lt;li&gt;When ready, a pull request can be made. The admin can then review and accept it, and perform the necessary steps required to prepare the infrastructure to host the workshop&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;On the api-db server:&lt;/h3&gt;
&lt;p&gt;Connecting to the api-db server as wodadmin user:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Switch to the relevant branch for the new workshop and perform a git remote update / rebase in the relevant notebook directory.&lt;/li&gt;
&lt;li&gt;Move to wod-api-db/scripts directory&lt;/li&gt;
&lt;li&gt;Update the database by running the wod-update-db.sh script.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;This script will update the Database workshops table with the new workshop&apos;s entry.&lt;/p&gt;
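&lt;p&gt;Put together, the update on the api-db server looks roughly like the sketch below; the checkout locations and the branch name are assumptions used for illustration.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# As wodadmin on the api-db server
cd ~/wod-notebooks              # assumed location of the notebooks checkout
git checkout wkshp-go101        # hypothetical branch carrying the new workshop
git remote update
git rebase

cd ~/wod-api-db/scripts
./wod-update-db.sh              # updates the workshops table with the new entry
&lt;/code&gt;&lt;/pre&gt;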
&lt;p&gt;As the developer of the Workshops-on-Demand content, Matt had to perform several tasks:&lt;/p&gt;
&lt;h3&gt;On the backend server:&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;Log on to the backend server and clone the notebook repo in his home folder, or as explained earlier, fork the repo on his laptop and work from there&lt;/li&gt;
&lt;li&gt;Create a new branch for his workshop following the naming convention defined with the admin (a git sketch of this flow follows this list)&lt;/li&gt;
&lt;li&gt;Leverage the template provided by me to build up the content of his workshop&lt;/li&gt;
&lt;li&gt;Test the workshop locally on his laptop or on the dev server leveraging the &lt;code&gt;wod-test-action.sh&lt;/code&gt; script&lt;/li&gt;
&lt;li&gt;Test the workshop using the staging registration portal&lt;/li&gt;
&lt;li&gt;When all tests are green, create a pull request to merge content with the master repo&lt;/li&gt;
&lt;/ol&gt;
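&lt;p&gt;A minimal git sketch of that contributor flow is shown below; the repository URL, folder layout, and branch name are illustrative only.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Clone the notebooks repository (or a personal fork of it)
git clone https://github.com/Workshops-on-Demand/wod-notebooks.git
cd wod-notebooks

# Create a branch named per the convention agreed with the admin
git checkout -b WKSHP-GO101

# ...author the notebooks, pictures folder, and wod.yml in the workshop folder...
git add WKSHP-GO101/
git commit -m &quot;Add the GO 101 workshop&quot;

# Publish the branch, then open the pull request for the admin to review
git push -u origin WKSHP-GO101
&lt;/code&gt;&lt;/pre&gt;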
&lt;p&gt;As an admin, I would need to check the pull request and accept it. Once done, the test/dev and staging environments will require an update.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Log on to the backend server as wodadmin and update the notebook repository.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run wod-deliver to update the relevant backend server.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;git remote update   # fetch the latest changes from the remote repository
git rebase          # replay local work on top of the updated branch
wod-deliver         # update the relevant backend server with the new content
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The very same process applies to the move-to-production phase.&lt;/p&gt;
&lt;h1&gt;Complex workshop example:&lt;/h1&gt;
&lt;p&gt;If you&apos;ve ever had to develop a workshop for something you&apos;re not as familiar with, you&apos;ll want to meet with subject matter experts (SMEs) and explain what your goals are and how to achieve them. Once you have a clearer understanding of the technology involved, you can move on to determine the best platform on which to run the workshop. I can give you an example here focusing on Ansible, an automation tool. I was not originally familiar with it before I developed the Ansible 101 Workshop. In the steps below, I&apos;ll explain how I pulled it together.&lt;/p&gt;
&lt;p&gt;As the admin for the HPE Developer Community&apos;s Workshops-on-Demand infrastructure, I had to perform several tasks in order to set up this workshop. I had to determine what the workshop required from a backend perspective and fully test it.&lt;/p&gt;
&lt;h5&gt;On the backend server:&lt;/h5&gt;
&lt;p&gt;The workshop will require:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;A dedicated server running Linux to allow the student to run Ansible playbooks against it during the workshop&apos;s labs.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;This means preparing a server (VM or physical): we will consider it an appliance&lt;/li&gt;
&lt;li&gt;Updating the relevant variable file to associate the IP address of the server with the workshop (multiple servers can also be associated with a given workshop)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A set of scripts under the wod-backend/scripts or wod-private/scripts folders, depending on the nature of the workshop, to manage the workshop&apos;s lifecycle&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Some generic scripts apply to all workshops&apos; appliances: a general setup phase sets up common requirements for any appliance (student user creation, SSH keys, etc.), followed by some specific scripts dedicated to a given workshop&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;create-appliance.sh (ssh keys, ldap setup)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;setup-appliance.sh [WKSHP-NAME] (Student setup, Appliance Setup, Workshop setup) calls:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;setup-globalappliance.sh (global / generic setup)&lt;/li&gt;
&lt;li&gt;setup-[WKSHP-NAME].sh (Prepare appliance with workshop&apos;s reqs, Docker image for instance)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;create-[WKSHP-NAME].sh (called at deployment time to instantiate the necessary appliance(s) required by the workshop)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;reset-appliance (resets SSH keys and student credentials on the appliance)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;cleanup-[WKSHP-NAME].sh (takes care of cleaning up some workshop specifics)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;reset-[WKSHP-NAME].sh (resets the workshop&apos;s appliance, a docker compose down of a container for instance; see the sketch after this list)&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A set of variables to be leveraged by the notebooks. These variables are to be set in yml format. They will be parsed at deployment time to set student IDs, appliance IP addresses, and other relevant parameters like ports or simulated hardware information&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
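&lt;p&gt;To make the last category concrete, here is a hypothetical reset script along those lines; every name and path in it is an assumption, and the only grounded part is the idea of tearing down a per-student Docker Compose stack.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;#!/bin/bash
# Hypothetical reset-WKSHP-Ansible101.sh: returns the appliance to a clean state
# between students by tearing down the per-student Docker Compose stack
set -euo pipefail

STUDENT_ID=&quot;$1&quot;                                   # passed in by the backend automation
cd &quot;/opt/wkshp-ansible101/student${STUDENT_ID}&quot;   # hypothetical per-student directory
docker compose down --volumes --remove-orphans    # remove containers, networks, and volumes
&lt;/code&gt;&lt;/pre&gt;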
&lt;p&gt;When all the scripts are functional and the necessary actions have been performed on both the backend and frontend servers, some functional tests can be conducted using the CLI and later the web UI, as described earlier for the simple workshop example.&lt;/p&gt;
&lt;p&gt;Testing the workshop:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;One can leverage the wod-test-action.sh script to test a workshop lifecycle action from deployment (CREATE) to CLEANUP, RESET, or PURGE.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;dev@dev3:~$ wod-test-action.sh
Syntax: wod-test-action.sh &amp;#x3C;CREATE|CLEANUP|RESET|PURGE|PDF|WORD&gt; WKSHOP [MIN[,MAX]
ACTION is mandatory
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ul&gt;
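&lt;p&gt;For instance, assuming MIN and MAX designate the range of student IDs to provision, a test of the full lifecycle could look like this (the workshop name is a hypothetical example):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Deploy the workshop for students 1 to 5 (hypothetical workshop name)
dev@dev3:~$ wod-test-action.sh CREATE WKSHP-Ansible101 1,5

# Once testing is done, clean everything up again
dev@dev3:~$ wod-test-action.sh CLEANUP WKSHP-Ansible101 1,5
&lt;/code&gt;&lt;/pre&gt;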
&lt;p&gt;&lt;code&gt;Note: The available trace under ~/.mail/from will detail the different steps of the action and allow you to troubleshoot any issue.&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;When all tests have validated the workshop, it can follow the move-to-production cycle.&lt;/p&gt;
&lt;p&gt;You should now have a better understanding of the tasks associated with the creation of a workshop. As you can see, it requires steps on both the backend and frontend sides of the infrastructure.&lt;/p&gt;
&lt;p&gt;This was the last blog post of the series. The Workshops-on-Demand project is available &lt;a href=&quot;https://github.com/Workshops-on-Demand&quot;&gt;here&lt;/a&gt;. Further updates to the documentation will be made in this GitHub repository.&lt;/p&gt;
&lt;p&gt;If we can be of any help in clarifying any of this, please reach out to us on &lt;a href=&quot;https://developer.hpe.com/slack-signup/&quot;&gt;Slack&lt;/a&gt;. Please be sure to check back at &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE DEV&lt;/a&gt; for a follow up on this. Also, don&apos;t forget to check out also the Hack Shack for new &lt;a href=&quot;https://developer.hpe.com/hackshack/workshops&quot;&gt;workshops&lt;/a&gt;! Willing to collaborate with us? Contact us so we can build more workshops!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Announcing Chapel 2.6!]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/announcing-chapel-2-6/</link><guid isPermaLink="false">https://developer.hpe.com/announcing-chapel-2-6/</guid><pubDate>Thu, 18 Sep 2025 23:37:13 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[10 Myths About Scalable Parallel Programming Languages (Redux), Part 6: Performance of Higher-Level Languages]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/10-myths-about-scalable-parallel-programming-languages-redux-part-6-performance-of-higher-level-languages/</link><guid isPermaLink="false">https://developer.hpe.com/10-myths-about-scalable-parallel-programming-languages-redux-part-6-performance-of-higher-level-languages/</guid><pubDate>Thu, 18 Sep 2025 00:14:11 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[7 Questions for Marjan Asgari: Optimizing Hydrological Models with Chapel]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/7-questions-for-marjan-asgari-optimizing-hydrological-models-with-chapel/</link><guid isPermaLink="false">https://developer.hpe.com/7-questions-for-marjan-asgari-optimizing-hydrological-models-with-chapel/</guid><pubDate>Mon, 15 Sep 2025 18:34:43 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[AI agents as the meeting whisperers]]></title><description><![CDATA[Part 7 of our Gen AI for PM series Introduction Meetings often move fast, ideas overlap, action items get lost, and key insights can slip…]]></description><link>https://developer.hpe.com/ai-agents-as-the-meeting-whisperers-1/</link><guid isPermaLink="false">https://developer.hpe.com/ai-agents-as-the-meeting-whisperers-1/</guid><pubDate>Tue, 09 Sep 2025 15:01:50 GMT</pubDate><content:encoded>&lt;style&gt;

li {

   font-size: 27px;

   line-height: 33px;

   max-width: none;

}

&lt;/style&gt;
&lt;p&gt;Part 7 of our &lt;strong&gt;Gen AI for PM series&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Meetings often move fast, ideas overlap, action items get lost, and key insights can slip through the cracks. This is where &lt;strong&gt;AI agents step in as the meeting whisperers&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Working quietly in the background, they &lt;strong&gt;listen, summarize, and highlight what matters most&lt;/strong&gt;, from decisions and deadlines to risks and follow-ups. Instead of spending hours sifting through notes or trying to recall what was agreed upon, project managers (PMs) get &lt;strong&gt;concise, structured outputs&lt;/strong&gt; that keep everyone aligned.&lt;/p&gt;
&lt;p&gt;With AI handling the details, teams can focus on the conversation itself, making meetings more productive and outcomes far clearer.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fewer meetings, more impact&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;If you’re a project manager, this stat will hit home: PMs spend &lt;strong&gt;30–50% of their time in meetings.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Some days, it feels like an endless relay of Zoom calls, each one blending into the next. You jot down notes, chase action items, and try to keep everyone aligned. But by the end of the week, you&apos;re plagued by meeting fatigue, with precious little time left for strategic work.&lt;/p&gt;
&lt;p&gt;This is where the &lt;strong&gt;AI agent steps in as your “meeting whisperer”&lt;/strong&gt;, capturing what matters, automating the follow-up, and helping you cut meetings in half without losing alignment.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;As an example:&lt;/strong&gt; The side-by-side illustration highlights the difference AI makes in managing meetings. On the left, a frazzled project manager is shown juggling &lt;strong&gt;sticky notes and back-to-back Zoom calls&lt;/strong&gt;, struggling to keep track of decisions and action items. On the right, the same manager looks calm and focused while an &lt;strong&gt;AI agent projects live meeting notes and action items directly onto a dashboard.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;This contrast shows how &lt;strong&gt;AI meeting whisperers&lt;/strong&gt; turn chaos into clarity—freeing managers from the scramble of manual note-taking and giving them structured, real-time outputs that keep the whole team aligned.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/7.1.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;PM meeting&quot; title=&quot;PM meeting&quot;&gt;&lt;/center&gt;
&lt;h3&gt;Section 1: Agents as real-time note-takers and action trackers&lt;/h3&gt;
&lt;p&gt;Most PMs spend as much energy capturing what’s said in meetings as they do participating in them. That’s double work.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;AI agents change this dynamic&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;They &lt;strong&gt;transcribe meetings&lt;/strong&gt; live and highlight key points.&lt;/li&gt;
&lt;li&gt;They &lt;strong&gt;tag action items&lt;/strong&gt; and assign them to owners.&lt;/li&gt;
&lt;li&gt;They &lt;strong&gt;log commitments&lt;/strong&gt; into project management tools automatically.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Consider this example:&lt;/strong&gt; In a sprint review, instead of manually typing “Dev team to fix API bug by Tuesday,” the AI agent captures it in real time and adds it to Jira, assigned to the right person, with a deadline.&lt;/p&gt;
&lt;p&gt;The picture below shows how AI turns meeting conversations directly into structured outputs. On the dashboard, &lt;strong&gt;live meeting notes&lt;/strong&gt; are captured in real time, with &lt;strong&gt;key action items highlighted&lt;/strong&gt; for clarity. These action items are then &lt;strong&gt;automatically logged as tasks&lt;/strong&gt;, ensuring that nothing slips through the cracks.&lt;/p&gt;
&lt;p&gt;This workflow demonstrates how AI eliminates the need for manual note-taking and follow-up tracking, giving project managers a single, reliable source of truth after every meeting.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/7.2.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;Real-time note&quot; title=&quot;Real-time note&quot;&gt;&lt;/center&gt;
&lt;h3&gt;Section 2: Auto-generating follow-up tasks&lt;/h3&gt;
&lt;p&gt;We’ve all been there: someone says, &lt;em&gt;“I’ll check the budget tomorrow,”&lt;/em&gt; and then… it’s forgotten. Unless the PM follows up, the task vanishes into thin air.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;AI agents prevent this by&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Converting natural-language commitments into structured tasks.&lt;/li&gt;
&lt;li&gt;Adding deadlines automatically based on context.&lt;/li&gt;
&lt;li&gt;Sending reminders so nothing slips.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;As an example:&lt;/strong&gt; During a vendor negotiation, someone says, &lt;em&gt;“I’ll send the updated proposal by Friday.”&lt;/em&gt; The AI agent creates a task, assigns it to the vendor lead, and sends a reminder on Thursday. No PM chasing required.&lt;/p&gt;
&lt;p&gt;The flow chart below shows how AI can transform casual spoken commitments into actionable tasks. When a team member makes a &lt;strong&gt;verbal commitment&lt;/strong&gt;, for example, “I’ll send the report tomorrow,” the system’s &lt;strong&gt;AI detection&lt;/strong&gt; picks it up. From there, it automatically &lt;strong&gt;creates a task&lt;/strong&gt; in the project management system and sets a &lt;strong&gt;reminder&lt;/strong&gt;, ensuring the promise doesn’t get lost in conversation.&lt;/p&gt;
&lt;p&gt;This flow helps project managers capture important actions without relying on manual note-taking or memory, keeping accountability strong and projects moving forward.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/7.3.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;Auto-generating task&quot; title=&quot;Auto-generating task&quot;&gt;&lt;/center&gt;
&lt;h3&gt;Section 3: Sentiment and engagement analysis in meetings&lt;/h3&gt;
&lt;p&gt;Not all meeting insights are about tasks. Sometimes the tone of the conversation matters more than the words.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;AI agents analyze sentiment and engagement&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Flagging if a team sounds disengaged or stressed.&lt;/li&gt;
&lt;li&gt;Generating word clouds of recurring themes (e.g., “delays,” “handoffs,” “QA”).&lt;/li&gt;
&lt;li&gt;Helping PMs spot morale or alignment issues early.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;As an example:&lt;/strong&gt; In a cross-team sync, AI reports: &lt;em&gt;“Marketing expressed repeated concern about engineering handoffs”&lt;/em&gt; and highlights it visually. That insight helps the PM address friction before it festers.&lt;/p&gt;
&lt;p&gt;The dashboard below shows how AI makes meeting insights more visual and actionable. A &lt;strong&gt;sentiment chart&lt;/strong&gt; breaks down team mood into &lt;strong&gt;positive, neutral, and negative tones&lt;/strong&gt;, helping leaders quickly sense the overall atmosphere. Alongside it, a &lt;strong&gt;word cloud&lt;/strong&gt; highlights the most frequently mentioned terms—like “deadlines,” “QA,” or “communication”—so recurring themes stand out at a glance.&lt;/p&gt;
&lt;p&gt;Together, these visuals turn meeting discussions into structured intelligence, making it easier to identify concerns, celebrate wins, and take targeted follow-up actions.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/7.4.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;AI meeting&quot; title=&quot;AI meeting&quot;&gt;&lt;/center&gt;
&lt;h3&gt;Section 4: Meeting-free decision-making culture&lt;/h3&gt;
&lt;p&gt;The real power of AI meeting agents isn’t just in better notes — it’s in &lt;strong&gt;fewer meetings&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;When updates, tasks, and risks are captured automatically, you don’t need 10 weekly check-ins. Half of them can be replaced by AI-generated summaries. The remaining meetings can focus on &lt;strong&gt;strategic decisions, not status updates&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;As an example&lt;/strong&gt;: A software team cuts their weekly syncs from 10 to 5. Instead of rehashing status updates, they review AI-prepared summaries and spend meetings on problem-solving.&lt;/p&gt;
&lt;p&gt;The illustration below shows how AI meeting summaries can dramatically reduce the time spent in meetings. In the &lt;strong&gt;“Before”&lt;/strong&gt; view, project managers sit through around &lt;strong&gt;10 meetings per week&lt;/strong&gt;, often just to gather updates. In the &lt;strong&gt;“After”&lt;/strong&gt; view, with AI generating clear, reliable summaries, the number of meetings drops to &lt;strong&gt;5 per week&lt;/strong&gt;, freeing up time for deeper work while keeping everyone equally (or even better) informed.&lt;/p&gt;
&lt;p&gt;This comparison makes it clear: &lt;strong&gt;AI doesn’t just make meetings smarter—it helps cut them in half.&lt;/strong&gt;&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/7.5.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;Decision-making culture&quot; title=&quot;Decision-making culture&quot;&gt;&lt;/center&gt;
&lt;h3&gt;Conclusion: Less meeting fatigue, more results&lt;/h3&gt;
&lt;p&gt;Meetings will never disappear, but they don’t have to consume half your workweek. With &lt;strong&gt;Agentic AI as the meeting whisperer&lt;/strong&gt;, project managers can:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Spend less time taking notes and chasing action items.&lt;/li&gt;
&lt;li&gt;Gain richer insights through sentiment and engagement analysis.&lt;/li&gt;
&lt;li&gt;Cut meeting load without sacrificing alignment.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The result? &lt;strong&gt;Less fatigue, more clarity, and better outcomes.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Key takeaway&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&quot;AI remembers and follows up, so PMs don’t have to -- freeing them to focus on leadership, not logistics.&quot;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;This scene captures how AI ensures that no important decision or responsibility slips through the cracks—turning meeting outcomes into clear, actionable roadmaps without extra effort from the manager.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/7.6.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;Future AI meeting&quot; title=&quot;Future AI meeting&quot;&gt;&lt;/center&gt;</content:encoded></item><item><title><![CDATA[The Agentic AI command center for project managers]]></title><description><![CDATA[Part 6 of our Gen AI for PM series Imagine a single screen where every moving part of your project comes together—tasks, risks, resources…]]></description><link>https://developer.hpe.com/the-agentic-ai-command-center-for-project-managers/</link><guid isPermaLink="false">https://developer.hpe.com/the-agentic-ai-command-center-for-project-managers/</guid><pubDate>Mon, 08 Sep 2025 14:27:27 GMT</pubDate><content:encoded>&lt;style&gt;

li {

   font-size: 27px;

   line-height: 33px;

   max-width: none;

}

&lt;/style&gt;
&lt;p&gt;Part 6 of our &lt;em&gt;&lt;strong&gt;Gen AI for PM series&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Imagine a single screen where every moving part of your project comes together—tasks, risks, resources, and communications—all continuously monitored by AI. That’s the vision behind the Agentic AI command center.&lt;/p&gt;
&lt;p&gt;Rather than juggling spreadsheets, dashboards, and status meetings, project managers gain a central hub powered by intelligent agents. Each agent watches a different dimension of the project—tracking schedules, scanning for risks, analyzing sentiment, or monitoring budgets—and feeds insights into one unified view. The result is a real-time command center that helps managers move from reacting to problems to proactively steering projects toward success.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;PM dashboards vs. intelligent command centers&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Every project manager knows the struggle: fifteen open browser tabs, dashboards from five different tools, Slack buzzing with updates, and a Gantt chart that was outdated yesterday. Traditional dashboards are useful, but they’re static snapshots that show you what happened, not what’s about to happen.&lt;/p&gt;
&lt;p&gt;Now imagine this instead: A single command center powered by intelligent agents, orchestrating every aspect of your project in real time. Not just a dashboard but a digital control room that listens, predicts, and suggests.&lt;/p&gt;
&lt;p&gt;That’s the promise of the Agentic AI command center.&lt;/p&gt;
&lt;p&gt;The side-by-side illustration shows the difference between today’s fragmented workflows and the future of project management with Agentic AI. On the left, a project manager is overwhelmed, juggling multiple dashboards and sticky notes, each tracking only part of the project. On the right, everything comes together in a clean, futuristic AI command center—a single screen that consolidates tasks, risks, resources, and updates in real time. This shift makes project oversight simpler, smarter, and far less stressful.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/6.1.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;AI command center&quot; title=&quot;AI command center&quot;&gt;&lt;/center&gt;
&lt;h3&gt;Section 1: Unified view of project health with agent orchestration&lt;/h3&gt;
&lt;p&gt;Instead of siloed dashboards, an Agentic AI command center unifies everything. Think of it as a &lt;strong&gt;digital war room&lt;/strong&gt; where specialized agents handle different dimensions of project health:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Finance agent&lt;/strong&gt; → tracks budgets, forecasts, and alerts on overruns.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Resource agent&lt;/strong&gt; → monitors workloads and reallocates tasks.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Risk agent&lt;/strong&gt; → scans dependencies and external factors for early warnings.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Communication agent&lt;/strong&gt; → tailors updates for each stakeholder group.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;As an example&lt;/strong&gt;: If a supplier misses a milestone, the Risk agent flags it, the Resource agent suggests reallocation, the Finance agent recalculates the budget, and the Communication agent updates stakeholders, all seamlessly orchestrated in the command center.&lt;/p&gt;
&lt;p&gt;The illustration below shows how &lt;strong&gt;multiple specialized AI agents&lt;/strong&gt; work together to support project managers. Each agent, covering &lt;strong&gt;Finance, Resource, Risk, and Communication&lt;/strong&gt;, monitors its domain in real time and feeds insights into a &lt;strong&gt;central Project Management Dashboard Hub&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;This circular flow ensures that no critical aspect is overlooked. Finance agents track budgets, resource agents optimize workloads, risk agents flag potential blockers, and communication agents keep stakeholders aligned. By consolidating all of this intelligence into one hub, project managers gain a &lt;strong&gt;360° view of the project&lt;/strong&gt;, enabling faster, more confident decisions.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/6.2.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;Agent orchestration&quot; title=&quot;Agent orchestration&quot;&gt;&lt;/center&gt;
&lt;h3&gt;&lt;strong&gt;Section 2: Natural language queries for real-time insights&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Forget clicking through endless filters or waiting for reports. In the AI command center, you simply ask questions in plain language:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;“Which milestones are most at risk this quarter?”&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;“Show me the cost impact if Vendor X slips by two weeks.”&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;“Who’s overbooked for next sprint?”&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The system parses your question, pulls data from across tools, and gives you instant, visual insights.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;As an example:&lt;/strong&gt; A PM types, “Show me the top 3 risks for Project Alpha.” The command center instantly displays a ranked heatmap with probabilities, impact scores, and suggested mitigations.&lt;/p&gt;
&lt;p&gt;The illustration below shows how project management becomes more conversational with Agentic AI. A project manager simply types a question into the system, and within seconds the screen displays graphs, risk scores, and tailored recommendations. This natural, question-and-answer interaction allows managers to skip manual data digging and instead get instant, actionable insights—turning complex project analysis into a simple conversation.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/6.3.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;NL queries&quot; title=&quot;NL queries&quot;&gt;&lt;/center&gt;
&lt;h3&gt;Section 3: Integrated decision support&lt;/h3&gt;
&lt;p&gt;Traditional dashboards tell you the “what.” The Agentic AI command center also tells you the &lt;strong&gt;“what next.”&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;It doesn’t just show a risk — it &lt;strong&gt;suggests responses&lt;/strong&gt; based on data and simulations:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Adjust timelines.&lt;/li&gt;
&lt;li&gt;Reallocate resources.&lt;/li&gt;
&lt;li&gt;Increase budget buffers.&lt;/li&gt;
&lt;li&gt;Recommend alternate vendors.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Consider this example&lt;/strong&gt;: When a critical vendor signals a delay, the command center simulates scenarios:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Option A&lt;/strong&gt;: Wait for the vendor → project slips by 10 days.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Option B&lt;/strong&gt;: Switch to backup vendor → budget rises 5%.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Option C&lt;/strong&gt;: Reallocate internal resources → timeline preserved, but lower capacity for another workstream.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The PM chooses, but the AI provides the &lt;strong&gt;decision support&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;The decision-tree diagram below shows how AI helps project managers evaluate different choices when facing a delay. The decision tree outlines three possible actions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Switch Vendor&lt;/strong&gt;, which is predicted to result in &lt;strong&gt;increased costs&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Do nothing and accept the delay&lt;/strong&gt;, which leads to a &lt;strong&gt;missed deadline&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Reallocate Resources&lt;/strong&gt;, which keeps the &lt;strong&gt;project on track&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;By mapping out the likely outcomes of each option, the system gives managers the clarity they need to make faster and more informed decisions.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/6.4.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;AI decision support&quot; title=&quot;AI decision support&quot;&gt;&lt;/center&gt;
&lt;h3&gt;Section 4: Security and audit logs for compliance&lt;/h3&gt;
&lt;p&gt;In industries like finance, healthcare, and government, compliance is just as important as delivery. The AI command center builds &lt;strong&gt;compliance into the workflow&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Every decision is logged with a timestamp and rationale.&lt;/li&gt;
&lt;li&gt;Audit-ready reports are generated automatically.&lt;/li&gt;
&lt;li&gt;Sensitive data is flagged and access-controlled.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Consider this example&lt;/strong&gt;: A central bank project team uses the command center. When budget reallocations happen, the system auto-generates an audit report with “&lt;em&gt;Who approved, When, Why,&lt;/em&gt; &lt;em&gt;Impact Analysis.&lt;/em&gt;” Compliance is no longer a scramble at the end — it’s baked into daily operations.&lt;/p&gt;
&lt;p&gt;The illustration below shows how an &lt;strong&gt;AI audit log&lt;/strong&gt; keeps track of every action in a project for transparency and accountability. Each entry records the &lt;strong&gt;timestamp&lt;/strong&gt;, the &lt;strong&gt;action taken&lt;/strong&gt;, who initiated it—whether an &lt;strong&gt;AI agent&lt;/strong&gt; or a &lt;strong&gt;project manager&lt;/strong&gt;—and the &lt;strong&gt;status&lt;/strong&gt; of that action. For example, task approvals and budget changes are neatly logged and marked as &lt;strong&gt;approved&lt;/strong&gt;, making it easy to review decisions and maintain trust in AI-assisted project management.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/6.5.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;Audit logs&quot; title=&quot;Audit logs&quot;&gt;&lt;/center&gt;
&lt;h2&gt;Conclusion: The “One screen to rule them all” for PMs&lt;/h2&gt;
&lt;p&gt;For decades, project managers have been stuck hopping between tools, spreadsheets, and dashboards. The Agentic AI command center changes that.&lt;/p&gt;
&lt;p&gt;It becomes the &lt;strong&gt;one screen to rule them all&lt;/strong&gt; -- unifying project health, answering questions in natural language, providing decision support, and ensuring compliance.&lt;/p&gt;
&lt;p&gt;Instead of drowning in fragmented tools, project managers can finally focus on &lt;strong&gt;strategy, leadership&lt;/strong&gt;, &lt;strong&gt;and outcomes.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Key takeaway&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;AI command centers replace fragmented tool-hopping — giving project managers foresight, clarity, and control in one place.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/6.6.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;Future AI&quot; title=&quot;Future AI&quot;&gt;&lt;/center&gt;</content:encoded></item><item><title><![CDATA[Agentic AI in agile: Smarter sprints, faster retros]]></title><description><![CDATA[Part 5 of our Gen AI for PM series Intro Agile teams thrive on speed, collaboration, and adaptability—but even the best processes can get…]]></description><link>https://developer.hpe.com/agentic-ai-in-agile-smarter-sprints-faster-retros/</link><guid isPermaLink="false">https://developer.hpe.com/agentic-ai-in-agile-smarter-sprints-faster-retros/</guid><pubDate>Fri, 05 Sep 2025 15:29:28 GMT</pubDate><content:encoded>&lt;style&gt;

li {

   font-size: 27px;

   line-height: 33px;

   max-width: none;

}

&lt;/style&gt;
&lt;p&gt;Part 5 of our &lt;em&gt;&lt;strong&gt;Gen AI for PM series&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;Intro&lt;/h2&gt;
&lt;p&gt;Agile teams thrive on speed, collaboration, and adaptability—but even the best processes can get bogged down by manual tracking, missed dependencies, or lengthy retrospectives. This is where &lt;strong&gt;Agentic AI steps in.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;By acting as a smart co-pilot, Agentic AI can automatically &lt;strong&gt;analyze sprint progress, flag blockers in real time, and suggest adjustments&lt;/strong&gt; to keep velocity high. When the sprint ends, it can instantly pull together &lt;strong&gt;retrospective insights&lt;/strong&gt;—highlighting what worked, what slowed the team down, and how to improve next time. The result: smarter sprints and faster retros, giving teams more time to focus on building rather than chasing updates.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Agile is great — but still needs better data flow&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Agile has transformed how teams build products. Short sprints, daily stand-ups, and constant feedback loops keep teams adaptive and focused. But let’s be honest — even the most disciplined Agile teams still wrestle with messy data flow.&lt;/p&gt;
&lt;p&gt;Burndown charts lag behind reality because updates are manual. Sprint retrospectives sometimes feel like therapy sessions without data to back insights. Backlogs balloon into chaos, with “top priority” items buried under noise.&lt;/p&gt;
&lt;p&gt;Agile gives us the right philosophy, but execution still demands better visibility, accuracy, and speed. That’s where Agentic AI steps in — not as a replacement for scrum masters or project managers (PMs), but as a co-pilot that keeps Agile teams lean, data-driven, and one step ahead.&lt;/p&gt;
&lt;p&gt;The side-by-side comparison below shows the shift from traditional sprint planning to AI-enhanced sprint planning. On the left, sticky notes and a messy board represent the manual, error-prone process that often leaves teams overwhelmed. On the right, a clean digital board powered by Agentic AI provides real-time updates, capacity forecasts, and smarter tracking—helping teams plan with confidence and adapt quickly.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/5.1.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;Sprint Planning&quot; title=&quot;Sprint Planning&quot;&gt;&lt;/center&gt;
&lt;h3&gt;Section 1: Sprint planning with Agentic AI forecasting&lt;/h3&gt;
&lt;p&gt;Sprint planning is part science, part guessing game. Teams estimate velocity, but surprises (like unexpected bugs or absences) can derail the best-laid plans.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Agentic AI changes the game&lt;/strong&gt; by forecasting sprint outcomes with remarkable accuracy:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Analyzes &lt;strong&gt;historical sprint velocity&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Considers &lt;strong&gt;team availability&lt;/strong&gt; (holidays, PTO, workload).&lt;/li&gt;
&lt;li&gt;Factors in &lt;strong&gt;complexity of tasks&lt;/strong&gt; (based on past data).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;As an example&lt;/strong&gt;: Instead of debating whether the team can handle 40 story points, an AI agent predicts, “Based on the past 6 sprints and current workload, your realistic capacity is 32 points.” Suddenly, sprint planning shifts from guesswork to data-backed decisions.&lt;/p&gt;
&lt;p&gt;The illustration below shows how Agentic AI can make sprint planning smarter. On the sprint board, the AI overlay highlights that the &lt;strong&gt;predicted sprint capacity is 32 points&lt;/strong&gt;, giving teams a clear forecast of what can realistically be completed. This proactive insight helps prevent overcommitment and ensures more reliable sprint outcomes.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/5.2.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;Sprint board&quot; title=&quot;Sprint board&quot;&gt;&lt;/center&gt;
&lt;h3&gt;Section 2: Automated burndown tracking&lt;/h3&gt;
&lt;p&gt;Burndown charts are critical for visibility — but too often, they’re outdated or inaccurate because they depend on manual updates.&lt;/p&gt;
&lt;p&gt;With Agentic AI:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Task completions update the burndown chart in real time.&lt;/li&gt;
&lt;li&gt;Scope creep is flagged instantly when new work sneaks into the sprint.&lt;/li&gt;
&lt;li&gt;Velocity anomalies trigger alerts before they become blockers.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Consider this example&lt;/strong&gt;: Mid-sprint, an AI agent detects that only 20% of tasks are complete halfway through the timeline. It pings the Scrum Master with: “At this rate, sprint completion probability is 60%. Recommend re-scoping backlog.”&lt;/p&gt;
&lt;p&gt;The chart below shows how AI can make burndown charts more insightful. Instead of only showing completed work against the original plan, the chart includes a red dotted line that projects a likely delay based on current progress. This proactive signal gives teams an early warning so they can adjust resources, scope, or priorities before the milestone is missed.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/5.3.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;Burndown chart&quot; title=&quot;Burndown chart&quot;&gt;&lt;/center&gt;
&lt;h3&gt;Section 3: Instant retro summaries with AI sentiment analysis&lt;/h3&gt;
&lt;p&gt;Sprint retrospectives are gold mines for learning — but they’re also time-consuming, and insights are often anecdotal.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Agentic AI applies sentiment analysis and pattern detection&lt;/strong&gt; to team feedback:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;It scans chat channels, retro boards, and surveys.&lt;/li&gt;
&lt;li&gt;Identifies recurring blockers (e.g., “deployment delays mentioned 5 times”).&lt;/li&gt;
&lt;li&gt;Detects team morale trends via tone of feedback.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;As an example&lt;/strong&gt;: Instead of a retro filled with vague “communication issues,” the AI summarizes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Top blocker&lt;/strong&gt;: Delayed QA environment setup.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Sentiment trend&lt;/strong&gt;: Developer morale dropped 15% due to unclear requirements.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The illustration below shows how retrospectives become more &lt;strong&gt;data-driven and actionable&lt;/strong&gt; with AI support. Instead of relying only on memory or scattered notes, the system generates a &lt;strong&gt;summary dashboard&lt;/strong&gt; that highlights key themes through &lt;strong&gt;word clouds&lt;/strong&gt;—such as “delays,” “QA,” or “communication”—and presents &lt;strong&gt;sentiment graphs&lt;/strong&gt; to show team mood over the sprint. This makes it easier for teams to spot recurring issues, celebrate wins, and agree on concrete next steps for improvement.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/5.4.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;Retro board&quot; title=&quot;Retro board&quot;&gt;&lt;/center&gt;
&lt;h3&gt;Section 4: Backlog prioritization with AI recommendations&lt;/h3&gt;
&lt;p&gt;A bloated backlog is every PM’s nightmare. It often contains hundreds of tickets, each marked as “important,” while the team has only limited sprint capacity to handle them.&lt;/p&gt;
&lt;p&gt;Agentic AI helps by &lt;strong&gt;ranking backlog items&lt;/strong&gt; based on:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Business impact&lt;/li&gt;
&lt;li&gt;Dependencies&lt;/li&gt;
&lt;li&gt;Estimated effort&lt;/li&gt;
&lt;li&gt;Team capacity&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;As an example&lt;/strong&gt;: Instead of arguing over priorities, the AI generates a ranked list:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Fix payment gateway bug (high impact, high urgency)&lt;/li&gt;
&lt;li&gt;Optimize API response time (medium effort, high impact)&lt;/li&gt;
&lt;li&gt;Redesign landing page (low effort, medium impact)&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The picture below shows how the backlog transforms into a &lt;strong&gt;clear, data-backed roadmap&lt;/strong&gt; with the help of Agentic AI. Each item in the backlog is automatically assigned a &lt;strong&gt;priority score&lt;/strong&gt; (e.g., 95/100 or 80/100) based on factors like impact, urgency, and dependencies. This gives teams a transparent way to see what matters most, align on priorities, and focus their effort where it delivers the greatest value.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/5.5.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;Backlog&quot; title=&quot;Backlog&quot;&gt;&lt;/center&gt;
&lt;h3&gt;&lt;strong&gt;Conclusion: Agile meets intelligence&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Agile taught us to work smarter, not harder. But even Agile struggles when data is fragmented or delayed.&lt;/p&gt;
&lt;p&gt;With &lt;strong&gt;Agentic AI&lt;/strong&gt;, Agile becomes &lt;strong&gt;Agile + Intelligence:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Sprint planning shifts from guesswork to forecasting.&lt;/li&gt;
&lt;li&gt;Burndowns update themselves.&lt;/li&gt;
&lt;li&gt;Retros gain clarity with sentiment-driven insights.&lt;/li&gt;
&lt;li&gt;Backlogs transform from chaos into priority pipelines.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This isn’t about replacing Agile roles. Scrum Masters, PMs, and Agile coaches remain essential. But now, they’re equipped with AI-powered allies who free them from busywork and give them clarity at speed.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Key takeaway&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Agentic AI makes every sprint data-driven — helping Agile teams move faster, learn smarter, and deliver better.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/5.6.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;Agile future&quot; title=&quot;Agile future&quot;&gt;&lt;/center&gt;</content:encoded></item><item><title><![CDATA[Explore conversational AI solutions, Agentic AI, Ansible with DSCC usage, Chapel, and more!]]></title><link>https://developer.hpe.com/2025-sep-04/</link><guid isPermaLink="false">https://developer.hpe.com/2025-sep-04/</guid><pubDate>Thu, 04 Sep 2025 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Multi-agent systems for multi-stakeholder projects]]></title><description><![CDATA[Part 4 of our Gen AI for PM series Intro Large projects often involve multiple stakeholders, each with different priorities, risks, and…]]></description><link>https://developer.hpe.com/gen-ai-for-pm-series-part-4-multi-agent-systems-for-multi-stakeholder-projects/</link><guid isPermaLink="false">https://developer.hpe.com/gen-ai-for-pm-series-part-4-multi-agent-systems-for-multi-stakeholder-projects/</guid><pubDate>Sat, 30 Aug 2025 06:50:31 GMT</pubDate><content:encoded>&lt;style&gt;

li {

   font-size: 27px;

   line-height: 33px;

   max-width: none;

}

&lt;/style&gt;
&lt;p&gt;Part 4 of our &lt;em&gt;&lt;strong&gt;Gen AI for PM series&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;Intro&lt;/h2&gt;
&lt;p&gt;Large projects often involve multiple stakeholders, each with different priorities, risks, and information needs. Managing all of these moving parts with a single tool or process quickly becomes overwhelming. This is where &lt;strong&gt;multi-agent systems&lt;/strong&gt; come in.&lt;/p&gt;
&lt;p&gt;By assigning specialized AI agents to handle distinct responsibilities, such as scheduling, compliance, risk management, or communication, projects can scale more smoothly. Each agent works autonomously within its domain but shares insights with others, creating a &lt;strong&gt;coordinated ecosystem&lt;/strong&gt; that adapts to stakeholder demands in real time. For project managers, this means fewer blind spots, faster responses, and better alignment across diverse teams.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Coordinating across departments is chaotic&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;If you’ve ever managed a cross-department project, you know how messy things can get. Finance is tracking budgets in spreadsheets. Engineering is sprinting toward deadlines in Jira. Marketing is waiting for updates before they can launch campaigns. And operations? They’re buried in vendor calls.&lt;/p&gt;
&lt;p&gt;What ties it all together? Usually, it’s the project manager (PM) stuck in the middle — drowning in emails, calendar invites, and status requests.&lt;/p&gt;
&lt;p&gt;This &lt;strong&gt;coordination chaos&lt;/strong&gt; is where &lt;strong&gt;multi-agent systems&lt;/strong&gt; step in. Instead of a single PM being the communication bottleneck, digital agents act like specialized assistants, each handling a slice of responsibility and keeping everyone aligned.&lt;/p&gt;
&lt;p&gt;The image compares two scenarios of project management:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Left side (“Without AI agents”)&lt;/strong&gt;&lt;br&gt;
A project manager (PM) is overwhelmed, juggling multiple responsibilities at once: Finance, Engineering, Marketing, and Operations. This represents the traditional challenge of keeping all stakeholders aligned manually.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Right side (“With Agentic AI”)&lt;/strong&gt;&lt;br&gt;
Instead of juggling, AI agents work together seamlessly. Each ball (Finance, Engineering, Marketing, Operations) is smoothly passed around by specialized AI agents, with the PM overseeing rather than micromanaging.&lt;/li&gt;
&lt;/ul&gt;
&lt;center&gt;&lt;img src=&quot;/img/4.1.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;Multiple department projects&quot; title=&quot;Multiple department projects&quot;&gt;&lt;/center&gt;
&lt;h2&gt;Section 1: Defining multi-agent collaboration&lt;/h2&gt;
&lt;p&gt;So, what is multi-agent collaboration?&lt;/p&gt;
&lt;p&gt;Think of it this way: Instead of one AI tool doing everything, you have multiple AI agents, each with a defined role, working together like a team. They:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Collect data from their respective domains.&lt;/li&gt;
&lt;li&gt;Communicate findings automatically.&lt;/li&gt;
&lt;li&gt;Coordinate actions with other agents.&lt;/li&gt;
&lt;li&gt;Keep the PM in the loop with summaries, not overload.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It’s like moving from a single Swiss Army knife to a team of expert specialists — each focused, each efficient, and each accountable.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Consider this example&lt;/strong&gt;: In a construction project, a Resource Agent monitors workforce schedules, a Finance Agent tracks cost overruns, and a Compliance Agent ensures safety requirements are met. Together, they keep the project moving without endless manual updates.&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;circular diagram&lt;/strong&gt; below is a way to visualize how &lt;strong&gt;Agentic AI creates a connected ecosystem&lt;/strong&gt; around a project manager’s dashboard:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Finance Agent&lt;/strong&gt; → Feeds budget updates, cost forecasts, and expense alerts directly into the dashboard.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Resource Agent&lt;/strong&gt; → Monitors workload distribution, capacity, and availability of team members, updating the dashboard in real time.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Communication Agent&lt;/strong&gt; → Pulls in updates from team channels, emails, or tools like Slack and pushes them to the dashboard as structured insights.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;At the center sits the &lt;strong&gt;PM Dashboard&lt;/strong&gt; — the single source of truth where the project manager can see &lt;strong&gt;all three streams of intelligence combined&lt;/strong&gt;.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/4.2.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;PM dashboard&quot; title=&quot;PM dashboard&quot;&gt;&lt;/center&gt;
&lt;h2&gt;Section 2: Role-based agents in action&lt;/h2&gt;
&lt;p&gt;Let’s break down some of the most useful role-based agents:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Finance agent&lt;/strong&gt; – Tracks spending vs. budget in real time, sends alerts if costs exceed thresholds, and shares instant updates with stakeholders.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Scenario: A procurement request pushes spend 5% over budget. The Finance Agent flags it, informs the Resource Agent, and updates the PM automatically.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Resource agent&lt;/strong&gt; – Balances workloads, reallocates tasks when someone is overbooked, and ensures skills are matched to priorities.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Scenario: A key engineer is on sick leave. The Resource Agent shifts their tasks to the next available developer, updates Jira, and notifies the team.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Communication agent&lt;/strong&gt; – Tailors updates for each stakeholder group without flooding inboxes.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Scenario: Instead of a giant status email, the Communication Agent sends Finance a budget snapshot, Marketing a timeline update, and the PM a summary dashboard.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This image shows how &lt;strong&gt;different AI agents act as specialized assistants&lt;/strong&gt; for various project needs and how their insights flow directly to the right stakeholders:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Finance Agent&lt;/strong&gt; → Tracks budgets, expenses, and forecasts. Its updates flow to the &lt;strong&gt;Project Manager&lt;/strong&gt; and &lt;strong&gt;Finance Team&lt;/strong&gt;, ensuring they always know where the money is going.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Resource Agent&lt;/strong&gt; → Balances workloads, predicts availability, and flags bottlenecks. Its insights go to the &lt;strong&gt;Engineers&lt;/strong&gt; and &lt;strong&gt;PM&lt;/strong&gt;, helping optimize who works on what and when.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Communication Agent&lt;/strong&gt; → Monitors updates from meetings, emails, and chat tools. It delivers structured updates to the &lt;strong&gt;Marketing Team&lt;/strong&gt; and other stakeholders, cutting through noise and ensuring everyone stays aligned.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;At the center of it all is the &lt;strong&gt;Project Manager&lt;/strong&gt;, who no longer has to manually chase updates. Instead, each stakeholder gets the &lt;strong&gt;right information at the right time&lt;/strong&gt;, automatically delivered through these AI agents.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/4.3.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;AI agent&quot; title=&quot;AI agent&quot;&gt;&lt;/center&gt;
&lt;h2&gt;Section 3: Automatic cross-team updates&lt;/h2&gt;
&lt;p&gt;One of the biggest pain points in multi-stakeholder projects is &lt;strong&gt;information lag.&lt;/strong&gt; By the time Finance updates the budget, Engineering has already made a decision that conflicts with it.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Multi-agent systems eliminate this lag&lt;/strong&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Agents share updates across domains instantly.&lt;/li&gt;
&lt;li&gt;Dependencies update automatically in connected dashboards.&lt;/li&gt;
&lt;li&gt;Stakeholders see the latest info without needing to ask.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;In a global product launch:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Finance updates ad spend forecasts → Finance Agent updates Marketing instantly.&lt;/li&gt;
&lt;li&gt;Engineering delays a feature by one week → Resource Agent updates the timeline, and Communication Agent informs all stakeholders.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;No “Monday catch-up” required — the project updates itself.&lt;/p&gt;
&lt;p&gt;This flow diagram demonstrates how &lt;strong&gt;Agentic AI streamlines cross-team updates automatically&lt;/strong&gt;:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Finance Agent updates the budget&lt;/strong&gt; → Whenever there’s a change in cost or resource usage, the Finance Agent instantly logs it.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Communication Agent sends a timeline update&lt;/strong&gt; → The Communication Agent translates the finance change into a project impact update (e.g., “Timeline adjusted by 2 days”).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Marketing notified instantly&lt;/strong&gt; → Instead of waiting for a weekly sync or status meeting, the Marketing team gets the update in real time, ensuring they can adjust campaigns or customer communications without delay.&lt;/li&gt;
&lt;/ol&gt;
&lt;center&gt;&lt;img src=&quot;/img/4.4.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;Cross team update&quot; title=&quot;Cross team update&quot;&gt;&lt;/center&gt;
&lt;h2&gt;Section 4: Transparency without meeting overload&lt;/h2&gt;
&lt;p&gt;Meetings are the traditional fix for misalignment. But too many meetings kill productivity.&lt;/p&gt;
&lt;p&gt;With multi-agent systems, &lt;strong&gt;transparency is built-in:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Live dashboards show the current state of play.&lt;/li&gt;
&lt;li&gt;Agents generate concise, role-specific updates.&lt;/li&gt;
&lt;li&gt;PMs only call meetings for strategic discussions, not routine updates.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Consider this example:&lt;/strong&gt; A weekly 2-hour cross-department sync shrinks into a 20-minute strategic review. Why? Because agents have already updated budgets, tasks, and dependencies in real time.&lt;/p&gt;
&lt;p&gt;This graphic illustrates the productivity shift that &lt;strong&gt;AI meeting support brings&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Before:&lt;/strong&gt; A typical project meeting lasts two hours, with messy handwritten or scattered digital notes. Action items often get lost, follow-ups are inconsistent, and team members leave with different interpretations of what was decided.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;After:&lt;/strong&gt; With AI agents in place, the same meeting takes just 20 minutes. The AI captures the discussion in real time, generates a clear summary, assigns owners to tasks, and even prioritizes next steps. The result is a &lt;strong&gt;strategic, focused session&lt;/strong&gt; where everyone leaves aligned — without the fatigue of long, unstructured meetings.&lt;/li&gt;
&lt;/ul&gt;
&lt;center&gt;&lt;img src=&quot;/img/4.7.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;Transparency overload&quot; title=&quot;Transparency overload&quot;&gt;&lt;/center&gt;
&lt;h2&gt;Conclusion: Project updates become a living knowledge base&lt;/h2&gt;
&lt;p&gt;Multi-agent systems don’t just automate tasks — they transform how knowledge flows across a project. Updates are no longer buried in inboxes or trapped in one department’s tool. Instead, they become part of a &lt;strong&gt;living, evolving knowledge base&lt;/strong&gt;, accessible to everyone, at any time.&lt;/p&gt;
&lt;p&gt;For PMs, this means less time chasing updates and more time focusing on leadership, strategy, and outcomes. For stakeholders, it means the right information, in the right format, at the right time.&lt;/p&gt;
&lt;h3&gt;Key takeaway&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Each stakeholder gets the right update, at the right time — thanks to multi-agent systems.&lt;/strong&gt;&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/4.6.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;PM future dashboard&quot; title=&quot;PM future dashboard&quot;&gt;&lt;/center&gt;</content:encoded></item><item><title><![CDATA[Agentic AI as the risk radar for project managers]]></title><description><![CDATA[Part 3 of our Gen AI for PM series Intro Just as pilots rely on a radar to detect turbulence before it hits, project managers (PMs) can now…]]></description><link>https://developer.hpe.com/gen-ai-for-pm-series-part-3-agentic-ai-as-the-risk-radar-for-project-managers/</link><guid isPermaLink="false">https://developer.hpe.com/gen-ai-for-pm-series-part-3-agentic-ai-as-the-risk-radar-for-project-managers/</guid><pubDate>Sat, 30 Aug 2025 06:16:53 GMT</pubDate><content:encoded>&lt;style&gt;

li {

   font-size: 27px;

   line-height: 33px;

   max-width: none;

}

&lt;/style&gt;
&lt;p&gt;Part 3 of our &lt;em&gt;&lt;strong&gt;Gen AI for PM series&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;Intro&lt;/h3&gt;
&lt;p&gt;Just as pilots rely on a radar to detect turbulence before it hits, project managers (PMs) can now rely on &lt;strong&gt;Agentic AI as their risk radar&lt;/strong&gt;. Instead of waiting for issues to surface in weekly reviews or status meetings, the AI agent continuously scans project data, &lt;strong&gt;predicting risks early and raising alerts&lt;/strong&gt;. This proactive approach ensures that PMs can address problems before they escalate, keeping projects on course and stakeholders informed.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Why risk management is still manual in many orgs&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Risk management is one of the most critical responsibilities of a project manager — yet in many organizations, it’s still a manual, reactive process.&lt;/p&gt;
&lt;p&gt;Here’s how it often plays out: A supplier misses a delivery update. A developer is behind on a sprint task. A budget line creeps higher than expected. None of these issues get noticed until the &lt;strong&gt;weekly review meeting&lt;/strong&gt;, at which point the project is already veering off track.&lt;/p&gt;
&lt;p&gt;By then, the PM is in firefighting mode — pulling resources, rearranging timelines, and explaining to stakeholders why things slipped.&lt;/p&gt;
&lt;p&gt;But what if risks didn’t sneak up on you? What if you had a radar system scanning your project continuously, flagging issues before they turned into crises? That’s exactly what Agentic AI brings to project management.&lt;/p&gt;
&lt;p&gt;The comparison below highlights the difference between &lt;strong&gt;traditional&lt;/strong&gt; and &lt;strong&gt;AI-driven&lt;/strong&gt; risk management. On the left, project managers manually review risks during weekly check-ins—often discovering issues only after they’ve already caused delays. On the right, AI systems continuously monitor project data, providing &lt;strong&gt;predictive alerts&lt;/strong&gt; so risks are identified early and corrective action can be taken before they escalate.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/3.1.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;Data anomalies&quot; title=&quot;Data anomalies&quot;&gt;&lt;/center&gt;
&lt;h2&gt;Section 1: How AI agents scan project data for anomalies&lt;/h2&gt;
&lt;p&gt;Unlike human PMs who can only process limited information at a time, &lt;strong&gt;AI agents never sleep&lt;/strong&gt;. They monitor all project data streams — tasks, budgets, communications, even IoT data in some industries — to spot anomalies.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;In software development&lt;/strong&gt;: Agents monitor sprint velocity, backlog size, and code check-ins. If velocity drops suddenly, they flag it.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;In supply chain projects&lt;/strong&gt;: Agents track supplier updates and logistics data. If a shipment is late, they predict downstream impact immediately.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;In construction&lt;/strong&gt;: Agents monitor workforce schedules and equipment usage, alerting when resources fall below thresholds.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Consider this example&lt;/strong&gt;: A PM running a global product rollout doesn’t need to wait for a weekly sync. If a supplier in Asia fails to confirm shipment, the AI agent notices within hours and alerts the PM that downstream assembly tasks in Europe are at risk.&lt;/p&gt;
&lt;p&gt;The illustration below shows how the AI dashboard works in real time. Multiple data streams feed into the AI agent, which continuously monitors for irregularities. When an &lt;strong&gt;anomaly is detected&lt;/strong&gt;, the system immediately raises a &lt;strong&gt;red alert notification&lt;/strong&gt;, ensuring teams can act quickly before small issues turn into bigger problems.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/3.2.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;Multiple data streams&quot; title=&quot;Multiple data streams&quot;&gt;&lt;/center&gt;
&lt;h2&gt;Section 2: Predictive alerts for delays or budgetary overruns&lt;/h2&gt;
&lt;p&gt;Detecting risks is useful. But predicting them? That’s a game-changer.&lt;/p&gt;
&lt;p&gt;Agentic AI uses &lt;strong&gt;predictive analytics&lt;/strong&gt; to forecast delays and overspends before they happen:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;If the sprint velocity trend suggests the team won’t finish the backlog, the AI warns the PM early.&lt;/li&gt;
&lt;li&gt;If spending on cloud infrastructure grows faster than expected, the AI projects budget overrun for the quarter.&lt;/li&gt;
&lt;li&gt;If resource bottlenecks emerge, the AI highlights the likelihood of milestone delays.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Consider this example&lt;/strong&gt;: In a software project, the AI forecasts that the team is likely to miss the sprint deadline by three days. The PM gets the alert a week in advance and reallocates a senior developer to the critical path — preventing a late delivery.&lt;/p&gt;
&lt;p&gt;The illustration below shows how AI helps anticipate project delays before they happen. The &lt;strong&gt;red dotted line&lt;/strong&gt; marks a &lt;strong&gt;predicted delay&lt;/strong&gt;, flagged ahead of the actual milestone, giving teams enough time to adjust plans and keep the project on track.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/3.3.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;Milestone&quot; title=&quot;Milestone&quot;&gt;&lt;/center&gt;
&lt;h2&gt;Section 3: Scenario simulation &amp;#x26; “What-If” planning&lt;/h2&gt;
&lt;p&gt;One of the toughest parts of risk management is deciding how to respond. Should you add resources? Cut scope? Delay the milestone?&lt;/p&gt;
&lt;p&gt;Agentic AI helps by running &lt;strong&gt;scenario simulations&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;What if we delay Task A by two days?&lt;/strong&gt; → The AI shows the ripple effect on dependent tasks.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;What if we reassign 2 people from QA to Dev?&lt;/strong&gt; → The AI shows improved velocity but increased bug risk.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;What if we increase budget by 5%?&lt;/strong&gt; → The AI shows how much time can be saved with extra contractors.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;As an example&lt;/strong&gt;: In a construction project, AI simulates the impact of moving indoor painting tasks earlier to cover for upcoming bad weather. The simulation shows minimal disruption, so the PM confidently approves the adjustment.&lt;/p&gt;
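&lt;p&gt;A what-if engine can be reduced to a small dependency model. The sketch below uses hypothetical tasks and durations: it delays Task A by two days and propagates the shift through its dependents to show the effect on the final milestone.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Hypothetical plan: C depends on B, which depends on A (durations in days).
durations = {&quot;A&quot;: 3, &quot;B&quot;: 2, &quot;C&quot;: 4}
depends_on = {&quot;B&quot;: [&quot;A&quot;], &quot;C&quot;: [&quot;B&quot;]}

def finish_day(task, extra_delay=None):
    # Earliest finish day for a task, given optional per-task delays.
    delay = extra_delay.get(task, 0) if extra_delay else 0
    start = max((finish_day(d, extra_delay) for d in depends_on.get(task, [])), default=0)
    return start + durations[task] + delay

baseline = finish_day(&quot;C&quot;)
scenario = finish_day(&quot;C&quot;, extra_delay={&quot;A&quot;: 2})
print(f&quot;Milestone moves from day {baseline} to day {scenario}&quot;)
&lt;/code&gt;&lt;/pre&gt;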
&lt;p&gt;The chart below shows how AI supports decision-making by mapping out different choices and their likely outcomes. For instance, &lt;strong&gt;choosing Option A may lead to a delay&lt;/strong&gt;, while &lt;strong&gt;Option B reallocates resources&lt;/strong&gt; and &lt;strong&gt;Option C increases the budget&lt;/strong&gt;. By predicting the results of each path, the system helps project managers make more informed decisions before committing to an action.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/3.4.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;Planning&quot; title=&quot;Planning&quot;&gt;&lt;/center&gt;
&lt;h2&gt;Section 4: Combining AI alerts with human decision-making&lt;/h2&gt;
&lt;p&gt;Agentic AI doesn’t replace PMs — it augments them.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;AI surfaces risks&lt;/strong&gt; quickly and accurately.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;PM applies judgment&lt;/strong&gt; — weighing culture, stakeholder politics, and long-term strategy.&lt;/li&gt;
&lt;li&gt;Together, they make &lt;strong&gt;faster, better decisions&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;As an example&lt;/strong&gt;: An AI agent alerts that overspending is likely in a marketing campaign. The AI suggests reducing ad spend in underperforming channels. But the PM knows the client values brand visibility over efficiency, so they adjust strategy to meet both the AI’s warning and the client’s preferences.&lt;/p&gt;
&lt;p&gt;The decision-tree diagram below shows how decision-making is shared between AI and human project managers. The AI first &lt;strong&gt;scans data&lt;/strong&gt;, then &lt;strong&gt;predicts outcomes&lt;/strong&gt;, and finally &lt;strong&gt;recommends actions&lt;/strong&gt;. At that point, the &lt;strong&gt;project manager makes the final decision&lt;/strong&gt;, combining AI-driven insights with human judgment. This loop ensures faster, smarter, and more reliable project decisions.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/3.5.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;AI &amp; Human decision making&quot; title=&quot;AI &amp; Human decision making&quot;&gt;&lt;/center&gt;
&lt;h2&gt;Conclusion: Risk management shifts from reactive to proactive&lt;/h2&gt;
&lt;p&gt;For too long, risk management has been about looking backward — filling out risk logs, updating registers, and reacting when things go wrong.&lt;/p&gt;
&lt;p&gt;Agentic AI flips the model:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Risks are detected early.&lt;/li&gt;
&lt;li&gt;Delays and overspending are predicted, not discovered too late.&lt;/li&gt;
&lt;li&gt;Scenarios are tested instantly, helping PMs choose the best response.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Instead of firefighting, PMs become &lt;strong&gt;strategic leaders&lt;/strong&gt;, steering their projects with foresight. The ones who embrace AI won’t just manage risks — they’ll &lt;strong&gt;manage confidently, proactively, and with influence.&lt;/strong&gt;&lt;/p&gt;
&lt;h3&gt;Key takeaway&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Agents give PMs foresight, not just hindsight — transforming risk management into a proactive, always-on discipline.&lt;/strong&gt;&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/3.6.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;Futuristic risk highlight&quot; title=&quot;Futuristic risk highlight&quot;&gt;&lt;/center&gt;</content:encoded></item><item><title><![CDATA[Experimenting with the Model Context Protocol and Chapel]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/experimenting-with-the-model-context-protocol-and-chapel/</link><guid isPermaLink="false">https://developer.hpe.com/experimenting-with-the-model-context-protocol-and-chapel/</guid><pubDate>Thu, 28 Aug 2025 19:01:09 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Part 12: AgentOS - The invisible conductor of enterprise AI]]></title><description><![CDATA[Scene one: The orchestra without a conductor Picture an orchestra hall. The violinist plays beautifully, the percussionist pounds with…]]></description><link>https://developer.hpe.com/part-12-agentos-the-invisible-conductor-of-enterprise-ai/</link><guid isPermaLink="false">https://developer.hpe.com/part-12-agentos-the-invisible-conductor-of-enterprise-ai/</guid><pubDate>Thu, 28 Aug 2025 09:10:38 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;h2&gt;&lt;strong&gt;Scene one: The orchestra without a conductor&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Picture an orchestra hall. The violinist plays beautifully, the percussionist pounds with passion, the flutist adds magic — but there’s no conductor.&lt;/p&gt;
&lt;p&gt;Instead of harmony, you hear chaos.&lt;/p&gt;
&lt;p&gt;This is where most enterprises stand today with AI. They have chatbots for customer service, fraud detection systems in finance, diagnostic assistants in healthcare, HR copilots, and IT ticketing bots. Each plays its part, but without coordination, they overlap, conflict, and underperform.&lt;/p&gt;
&lt;p&gt;Enter &lt;strong&gt;AgentOS&lt;/strong&gt;: the conductor that turns fragmented AI tools into a well-orchestrated symphonic enterprise.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Why enterprises need AgentOS&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;For the past two decades, enterprises have ridden the waves of digital transformation through multiple stages of AI:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Traditional AI&lt;/strong&gt; – Predictive, rule-based systems solving narrow problems.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Generative AI&lt;/strong&gt; – Large language models that produce text, code, and content.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Agentic AI&lt;/strong&gt; – Autonomous helpers reasoning, planning, and acting. Agentic AI even introduced modes like:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Coordination Mode –&lt;/strong&gt; Agents interact to accomplish a shared task.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Collaboration Mode –&lt;/strong&gt; Agents divide work and merge results.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Routing Mode –&lt;/strong&gt; Agents pass tasks to the best-suited peer.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;These modes gave enterprises glimpses of what AI teamwork could look like. But in practice, enterprises quickly ran into scaling problems: lack of governance, resource wastage, compliance risks, and integration hurdles.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;AgentOS was designed to solve this.&lt;/strong&gt;
It is not just a tool. It is the operating system for agents, enabling enterprises to manage, orchestrate, and scale agent ecosystems responsibly.&lt;/p&gt;
&lt;p&gt;Just as operating systems defined enterprise computing, and Android/iOS defined enterprise mobility, &lt;strong&gt;AgentOS defines the enterprise era of agentic intelligence.&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;How AgentOS is superior to agentic AI modes&lt;/h2&gt;
&lt;p&gt;While Agentic AI modes (coordination, collaboration, routing) describe &lt;em&gt;how&lt;/em&gt; agents behave at the task level, they often remain limited to small clusters or localized workflows. &lt;strong&gt;AgentOS operates at the system level&lt;/strong&gt;, enabling these modes to run systematically, securely, and at scale across the enterprise. Rather than being an “either/or,” Agentic AI and AgentOS are &lt;strong&gt;complementary layers&lt;/strong&gt;: the former provides the micro-level behaviors of agents, while the latter provides the infrastructure, governance, and optimization needed to operationalize those behaviors across departments, business systems, and compliance requirements. The table below contrasts these perspectives—&lt;strong&gt;Agentic AI modes at the agent level vs. AgentOS at the enterprise level&lt;/strong&gt;—to highlight how they build on each other.&lt;/p&gt;
&lt;h3&gt;Agentic AI vs. AgentOS capabilities&lt;/h3&gt;
&lt;table style=&quot;border-collapse: collapse; width: 100%; text-align: left; background: linear-gradient(135deg, #f0f8ff, #e6f7ff); border: 2px solid #0073e6;&quot;&gt;
  &lt;thead&gt;
    &lt;tr style=&quot;background-color: #0073e6; color: white;&quot;&gt;
      &lt;th style=&quot;padding: 12px; font-weight: bold; text-align: left;&quot;&gt;Capability&lt;/th&gt;
      &lt;th style=&quot;padding: 12px; font-weight: bold; text-align: left;&quot;&gt;Agentic AI Modes (Coordination / Collaboration / Routing)&lt;/th&gt;
      &lt;th style=&quot;padding: 12px; font-weight: bold; text-align: left;&quot;&gt;AgentOS (Enterprise-Grade)&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td style=&quot;padding: 10px; background-color:#f9f9f9;&quot;&gt;&lt;b&gt;Coordination&lt;/b&gt;&lt;/td&gt;
      &lt;td style=&quot;padding: 10px;&quot;&gt;Agents communicate on a shared task&lt;/td&gt;
      &lt;td style=&quot;padding: 10px;&quot;&gt;Orchestration across thousands of agents simultaneously&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td style=&quot;padding: 10px; background-color:#f9f9f9;&quot;&gt;&lt;b&gt;Collaboration&lt;/b&gt;&lt;/td&gt;
      &lt;td style=&quot;padding: 10px;&quot;&gt;Agents divide work and merge results&lt;/td&gt;
      &lt;td style=&quot;padding: 10px;&quot;&gt;Workflow automation integrated with ERP, CRM, and APIs&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td style=&quot;padding: 10px; background-color:#f9f9f9;&quot;&gt;&lt;b&gt;Routing&lt;/b&gt;&lt;/td&gt;
      &lt;td style=&quot;padding: 10px;&quot;&gt;Routes tasks to relevant agents&lt;/td&gt;
      &lt;td style=&quot;padding: 10px;&quot;&gt;Policy-driven routing with compliance and audit&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td style=&quot;padding: 10px; background-color:#f9f9f9;&quot;&gt;&lt;b&gt;Scalability&lt;/b&gt;&lt;/td&gt;
      &lt;td style=&quot;padding: 10px;&quot;&gt;Limited to agent clusters or workflows&lt;/td&gt;
      &lt;td style=&quot;padding: 10px;&quot;&gt;Enterprise-wide scale with cross-departmental coverage&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td style=&quot;padding: 10px; background-color:#f9f9f9;&quot;&gt;&lt;b&gt;Resource Management&lt;/b&gt;&lt;/td&gt;
      &lt;td style=&quot;padding: 10px;&quot;&gt;Task-level optimization&lt;/td&gt;
      &lt;td style=&quot;padding: 10px;&quot;&gt;Enterprise-grade compute allocation, GPU scheduling, cost optimization&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td style=&quot;padding: 10px; background-color:#f9f9f9;&quot;&gt;&lt;b&gt;Governance&lt;/b&gt;&lt;/td&gt;
      &lt;td style=&quot;padding: 10px;&quot;&gt;Minimal logs at agent level&lt;/td&gt;
      &lt;td style=&quot;padding: 10px;&quot;&gt;Centralized compliance, execution history, and audit trails&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;&lt;strong&gt;In short:&lt;/strong&gt; Agentic AI lets agents talk. AgentOS ensures they collaborate securely, at scale, under enterprise governance.&lt;/p&gt;
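&lt;p&gt;The difference is easiest to see in code. The toy sketch below routes a task to an agent only when an access policy allows it, and records every attempt in an audit trail. All of the names here (the agents, the policy table, the log) are hypothetical illustrations, not the API of any actual AgentOS product.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Toy sketch of policy-driven routing with an audit trail (all names hypothetical).
AGENTS = {&quot;fraud&quot;: lambda req: &quot;fraud score computed&quot;,
          &quot;support&quot;: lambda req: &quot;customer reply drafted&quot;}
POLICY = {&quot;fraud&quot;: {&quot;requires_role&quot;: &quot;risk-analyst&quot;}}
audit_log = []

def route(task_type, request, user_role):
    policy = POLICY.get(task_type, {})
    allowed = policy.get(&quot;requires_role&quot;) in (None, user_role)
    result = AGENTS[task_type](request) if allowed else &quot;BLOCKED by policy&quot;
    audit_log.append({&quot;task&quot;: task_type, &quot;role&quot;: user_role, &quot;result&quot;: result})
    return result

print(route(&quot;fraud&quot;, {&quot;txn&quot;: 42}, user_role=&quot;intern&quot;))        # blocked, but logged
print(route(&quot;fraud&quot;, {&quot;txn&quot;: 42}, user_role=&quot;risk-analyst&quot;))  # executed and logged
&lt;/code&gt;&lt;/pre&gt;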
&lt;h2&gt;Features of AgentOS&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Orchestration beyond coordination –&lt;/strong&gt; Manages thousands of agents like a mission control center.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Resource management –&lt;/strong&gt; Allocates cloud, GPU, or edge resources intelligently.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Workflow automation –&lt;/strong&gt; Converts ad-hoc agent interactions into repeatable enterprise workflows.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Integration layer –&lt;/strong&gt; Connects to CRMs, ERPs, ITSM tools, data lakes, and APIs seamlessly.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Governance &amp;#x26; Trust –&lt;/strong&gt; Provides execution history, compliance enforcement, and role-based access.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Scene two: The smart hospital of tomorrow&lt;/h2&gt;
&lt;p&gt;Fast-forward to 2030. A major hospital runs with AI at its core:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;diagnostic agentic AI&lt;/strong&gt; system scans patient X-rays.&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;compliance agentic AI&lt;/strong&gt; system checks treatment against regulations.&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;scheduling agentic AI&lt;/strong&gt; system assigns the right doctors and resources.&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;finance agentic AI&lt;/strong&gt; system validates insurance in real time.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If these were run purely in Agentic AI modes, they might collaborate, but gaps would remain: duplicated tasks, inconsistent audit logs, and overlooked compliance checks.&lt;/p&gt;
&lt;p&gt;With &lt;strong&gt;AgentOS&lt;/strong&gt;, all these agents operate within a single, governed orchestration framework.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Every recommendation is automatically validated for compliance.&lt;/li&gt;
&lt;li&gt;Compute resources are allocated efficiently.&lt;/li&gt;
&lt;li&gt;End-to-end records are logged for transparency.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The result: safer patients, faster care, and reduced administrative costs.&lt;/p&gt;
&lt;h2&gt;Scene three: The corporate boardroom&lt;/h2&gt;
&lt;p&gt;A future enterprise boardroom operates like this:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;Sales agentic AI team&lt;/strong&gt; negotiates contracts with a client’s AI.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Finance agentic AI team&lt;/strong&gt; calculates profit margins in real-time.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Legal agentic AI team&lt;/strong&gt; ensures terms meet compliance.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Ops agentic AI team&lt;/strong&gt; checks global supply chain capacity.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Without AgentOS, each agent works in isolation — outputs fragmented across dashboards.&lt;/p&gt;
&lt;p&gt;With AgentOS, they function as a team under enterprise governance.
A CXO can simply ask: “Can we close this deal profitably within compliance?”
AgentOS synthesizes the inputs, runs checks, allocates compute, and produces a clear, enterprise-ready answer.&lt;/p&gt;
&lt;h2&gt;Enterprise benefits of AgentOS&lt;/h2&gt;
&lt;h4&gt;&lt;strong&gt;1. Operational efficiency at scale&lt;/strong&gt;&lt;/h4&gt;
&lt;p&gt;Unifies siloed AI projects across departments, preventing duplication and aligning outcomes.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Example:&lt;/strong&gt; A bank saves millions by linking fraud detection, customer support, and compliance agents under one roof.&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;&lt;strong&gt;2. Production-grade governance&lt;/strong&gt;&lt;/h4&gt;
&lt;p&gt;Tracks every decision, logs every interaction, enforces policies, and ensures regulations like GDPR, HIPAA, or SOX are met.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Example:&lt;/strong&gt; Healthcare providers ensure every diagnosis recommendation is audit-ready.&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;3. Enterprise-wide integration&lt;/h4&gt;
&lt;p&gt;Plug-and-play compatibility with ERP, CRM, HR, and supply chain systems.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Example:&lt;/strong&gt; A manufacturing firm syncs predictive maintenance with ERP procurement automatically.&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;4. Resource &amp;#x26; cost optimization&lt;/h4&gt;
&lt;p&gt;Allocates GPUs, cloud compute, and workloads smartly to reduce costs.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Example:&lt;/strong&gt; Retailers balance demand forecasting, supply optimization, and customer recommendations without runaway GPU bills.&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;5. Future-proofing investments&lt;/h4&gt;
&lt;p&gt;Like Windows or Linux, AgentOS is not transient. It is foundational infrastructure that will only advance, not disappear. Enterprises can adopt once and evolve continuously.&lt;/p&gt;
&lt;h2&gt;Types of AgentOS emerging&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Enterprise AgentOS (PwC)&lt;/strong&gt; – AI scaled across departments with compliance.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Research AgentOS (SyncIQ)&lt;/strong&gt; – Multi-agent R&amp;#x26;D orchestration.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Academic projects&lt;/strong&gt; – Distributed OS for decentralized, mobile agents.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Open-Source AgentOS (Prolog)&lt;/strong&gt; – Community-driven frameworks for coding and standards.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;AgentOS in enterprise production – The game changer&lt;/h2&gt;
&lt;p&gt;AgentOS transforms how organizations operationalize AI by addressing inefficiencies that once slowed adoption. It provides opportunities to unify scattered AI initiatives, reduce infrastructure waste, and embed governance directly into workflows. Consider how things looked before AgentOS—and how they have changed after its introduction.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Before AgentOS&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Siloed AI projects&lt;/li&gt;
&lt;li&gt;Expensive GPU wastage&lt;/li&gt;
&lt;li&gt;Long integration cycles&lt;/li&gt;
&lt;li&gt;Weak governance&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;After AgentOS&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Unified agent workforce&lt;/li&gt;
&lt;li&gt;Optimized compute usage&lt;/li&gt;
&lt;li&gt;Plug-and-play enterprise integration&lt;/li&gt;
&lt;li&gt;Audit-ready compliance&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Why AgentOS will define the next era&lt;/h2&gt;
&lt;p&gt;Every digital revolution had a backbone:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;PCs → Operating systems&lt;/li&gt;
&lt;li&gt;Web → Browsers and cloud platforms&lt;/li&gt;
&lt;li&gt;Smartphones → Android and iOS&lt;/li&gt;
&lt;li&gt;AI Agents → AgentOS&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;AgentOS is the infrastructure that ensures agents aren’t just clever individuals but a &lt;strong&gt;coordinated, compliant, enterprise-ready digital workforce.&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;Closing thought&lt;/h2&gt;
&lt;p&gt;The future of enterprise AI is not about isolated copilots or one-off pilots.
It is about &lt;strong&gt;agent ecosystems functioning as digital colleagues,&lt;/strong&gt; transforming how businesses operate.&lt;/p&gt;
&lt;p&gt;Agentic AI introduced task-level coordination.
AgentOS delivers system-wide orchestration, governance, and scalability.&lt;/p&gt;
&lt;p&gt;For enterprises, AgentOS is not a passing trend. It is the &lt;strong&gt;technology that will stay,&lt;/strong&gt; advancing year after year, as fundamental as the operating systems that run our laptops and servers today.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The real question for enterprise leaders is not “Should we adopt AgentOS?”
It is &lt;em&gt;&lt;strong&gt;“How fast can we adopt AgentOS to stay competitive in the AI-driven future?”&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;</content:encoded></item><item><title><![CDATA[ Your new PM assistant: The rise of Agentic AI in daily task management]]></title><description><![CDATA[Part 2 of our Gen AI for PM series Intro: Project managers juggle endless moving parts—status updates, task assignments, shifting priorities…]]></description><link>https://developer.hpe.com/gen-ai-for-pm-series-part-2-your-new-pm-assistant-the-rise-of-agent-ai-in-daily-task-management/</link><guid isPermaLink="false">https://developer.hpe.com/gen-ai-for-pm-series-part-2-your-new-pm-assistant-the-rise-of-agent-ai-in-daily-task-management/</guid><pubDate>Wed, 27 Aug 2025 17:39:15 GMT</pubDate><content:encoded> &lt;style&gt;

li {

   font-size: 27px;

   line-height: 33px;

   max-width: none;

}

&lt;/style&gt;
&lt;p&gt;Part 2 of our &lt;em&gt;&lt;strong&gt;Gen AI for PM series&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;h1&gt;&lt;strong&gt;Intro:&lt;/strong&gt;&lt;/h1&gt;
&lt;p&gt;Project managers juggle endless moving parts—status updates, task assignments, shifting priorities, and the constant chase for clarity. Too often, valuable time is wasted just piecing together information, instead of driving decisions forward.&lt;/p&gt;
&lt;p&gt;This is where &lt;strong&gt;Agentic AI&lt;/strong&gt; begins to change the game: by streamlining updates, automating repetitive follow-ups, and surfacing the insights you need, right when you need them.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The endless cycle of updates&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;If you’ve ever been a project manager, you know the drill. You open your inbox in the morning, and it’s flooded with updates: “Task delayed,” “Waiting for approval,” “Who’s picking this up?” Then come the Slack pings, calendar invites, and those dreaded status meetings.&lt;/p&gt;
&lt;p&gt;By lunchtime, you’ve spent more time chasing information than making decisions. And still, you find yourself sending the same email you’ve typed a hundred times before: “Where are we on this?”&lt;/p&gt;
&lt;p&gt;This is the reality of traditional project management — too much &lt;strong&gt;manual overhead&lt;/strong&gt;, not enough time for leadership. But things are changing. With &lt;strong&gt;Agentic AI&lt;/strong&gt;, task management is no longer a reactive grind. It’s becoming &lt;strong&gt;proactive, automated, and intelligent&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;The illustration below contrasts an overwhelmed PM surrounded by emails and sticky notes (left) with one who is calm, cool, and collected because an AI assistant updates tasks automatically (right).&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/ai-1.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;Task allocation&quot; title=&quot;Task allocation&quot;&gt;&lt;/center&gt;
&lt;h2&gt;Section 1: Agentic AI for task allocation and reminders&lt;/h2&gt;
&lt;p&gt;Allocating tasks sounds simple, but in practice, it’s a juggling act. You need to balance skills, availability, deadlines, and workload. Traditionally, that means spreadsheets, endless emails, and lots of guesswork.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;With Agentic AI, this process becomes effortless.&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Skill-based allocation&lt;/strong&gt; – AI matches tasks to the right team members based on expertise and bandwidth.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Load balancing&lt;/strong&gt; – If someone is overbooked, the AI reallocates tasks automatically.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Smart reminders&lt;/strong&gt; – Instead of the PM chasing, AI sends nudges when deadlines are near.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Consider this example&lt;/strong&gt;: Sarah, a PM, notices that her developer John already has three critical tasks. Instead of assigning him another bug fix, the AI routes it to Priya — who has the right skills and availability. John stays productive without burnout, and Priya feels trusted with meaningful work.&lt;/p&gt;
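&lt;p&gt;A minimal sketch of that allocation logic might look like the following. The team data, skill tags, and load threshold are invented for illustration; a real assistant would pull them from the project tooling.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Skill- and load-aware assignment (illustrative data only).
team = [
    {&quot;name&quot;: &quot;John&quot;,  &quot;skills&quot;: {&quot;backend&quot;, &quot;bugfix&quot;}, &quot;open_tasks&quot;: 3},
    {&quot;name&quot;: &quot;Priya&quot;, &quot;skills&quot;: {&quot;backend&quot;, &quot;bugfix&quot;}, &quot;open_tasks&quot;: 1},
]

def assign(task_skill, max_load=2):
    # Pick the least-loaded person who has the skill and spare capacity.
    candidates = [m for m in team
                  if task_skill in m[&quot;skills&quot;] and m[&quot;open_tasks&quot;] &lt; max_load]
    return min(candidates, key=lambda m: m[&quot;open_tasks&quot;])[&quot;name&quot;] if candidates else None

print(assign(&quot;bugfix&quot;))   # Priya (John is over the load threshold)
&lt;/code&gt;&lt;/pre&gt;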
&lt;p&gt;The graphic below shows how tasks move through the AI-driven workflow: They begin with &lt;strong&gt;Task Creation&lt;/strong&gt;, are followed by a &lt;strong&gt;Skill Match&lt;/strong&gt;, then proceed to &lt;strong&gt;Agent Assignment&lt;/strong&gt;. From there, the system ensures follow-through with &lt;strong&gt;Auto Reminders&lt;/strong&gt; and concludes with &lt;strong&gt;Status Updates&lt;/strong&gt; to keep everything transparent and on track.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/ai-2.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;AI Task allocation&quot; title=&quot;AI Task allocation&quot;&gt;&lt;/center&gt;
&lt;h2&gt;Section 2: Natural language inputs can trigger automatic task adjustments&lt;/h2&gt;
&lt;p&gt;One of the most exciting aspects of Agentic AI is its ability to understand &lt;strong&gt;plain human language.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Imagine this:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A developer types in Slack: “Feature A is blocked until the API fix goes live.”&lt;/li&gt;
&lt;li&gt;The AI picks it up, flags “Feature A” as blocked, shifts dependent tasks, and notifies affected stakeholders.&lt;/li&gt;
&lt;li&gt;No PM intervention needed.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Instead of PMs manually updating Jira boards or rescheduling, the AI listens, interprets, and acts in real time.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Consider this example&lt;/strong&gt;: During a standup, a designer says, “Wireframes are done, just waiting on review.” Instantly, the AI updates the design task as complete, creates a “Review” sub-task, and alerts the approver.&lt;/p&gt;
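&lt;p&gt;Behind the scenes, even a crude pattern-matching rule can turn a status message into a board update, as in the hypothetical sketch below. A production agent would use a language model rather than a regular expression, but the flow (interpret, update, notify) is the same.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import re

# Hypothetical board state and a simple rule for one kind of status message.
tasks = {&quot;Feature A&quot;: {&quot;status&quot;: &quot;in_progress&quot;, &quot;blocked_by&quot;: None}}

def apply_update(message):
    m = re.search(r&quot;(Feature \w+) is blocked until (.+)&quot;, message)
    if m:
        name, reason = m.group(1), m.group(2)
        tasks[name][&quot;status&quot;] = &quot;blocked&quot;
        tasks[name][&quot;blocked_by&quot;] = reason
        return f&quot;Flagged {name} as blocked ({reason}); notifying stakeholders&quot;
    return &quot;No actionable update detected&quot;

print(apply_update(&quot;Feature A is blocked until the API fix goes live.&quot;))
print(tasks)
&lt;/code&gt;&lt;/pre&gt;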
&lt;p&gt;The illustration below shows how the system responds when a task becomes &lt;strong&gt;blocked&lt;/strong&gt;: the AI agent steps in to automatically adjust the project timeline and trigger &lt;strong&gt;alert notifications&lt;/strong&gt;, ensuring the issue is surfaced and addressed without delays.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/picture-11.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;NL Auto task&quot; title=&quot;NL Auto task&quot;&gt;&lt;/center&gt;
&lt;h2&gt;Section 3: Reducing human error in task tracking&lt;/h2&gt;
&lt;p&gt;Manual task tracking is messy. People forget to update boards, mark things “done” that aren’t done, or miss dependencies. These errors snowball into delays, missed deadlines, and finger-pointing.&lt;/p&gt;
&lt;p&gt;Agentic AI tackles this by:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Auto-syncing across tools&lt;/strong&gt; – Updates in Jira, Asana, Trello, or email are reflected everywhere instantly.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Error detection&lt;/strong&gt; – If a task is marked complete but the code hasn’t been merged, AI raises a flag.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Audit trails&lt;/strong&gt; – Every update is logged for transparency and compliance.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;As an example&lt;/strong&gt;: Without AI, a QA engineer closes a ticket marked “passed,” but the bug reappears in staging. The AI catches the mismatch and reopens the task, preventing the bug from slipping into production unnoticed.&lt;/p&gt;
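&lt;p&gt;The mismatch check itself can be very simple, as in this hypothetical sketch that cross-references board status against merge status and reopens anything inconsistent.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Flag tasks marked done whose code has not actually been merged (invented data).
tasks = [
    {&quot;id&quot;: &quot;BUG-101&quot;, &quot;board_status&quot;: &quot;done&quot;, &quot;pr_merged&quot;: True},
    {&quot;id&quot;: &quot;BUG-102&quot;, &quot;board_status&quot;: &quot;done&quot;, &quot;pr_merged&quot;: False},
]

for task in tasks:
    if task[&quot;board_status&quot;] == &quot;done&quot; and not task[&quot;pr_merged&quot;]:
        task[&quot;board_status&quot;] = &quot;reopened&quot;
        print(f&quot;{task[&apos;id&apos;]}: marked done but PR not merged; reopened and flagged&quot;)
&lt;/code&gt;&lt;/pre&gt;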
&lt;p&gt;To compare and contrast, consider the image below that shows the differences between manual tracking versus AI-powered tracking.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/picture-12.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;Reducing error&quot; title=&quot;Reducing error&quot;&gt;&lt;/center&gt;
&lt;h2&gt;Section 4: Case Study – 20% time saved in sprint management&lt;/h2&gt;
&lt;p&gt;At a mid-sized software company, the PM team adopted an AI task assistant to manage sprint planning and daily updates.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;After three months, they realized the following results:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;20% reduction in sprint management time&lt;/strong&gt; (less manual backlog grooming, fewer status check-ins).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;30% fewer status meetings&lt;/strong&gt; (updates auto-logged by AI).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Higher developer satisfaction&lt;/strong&gt; (less time spent on admin, more on building features).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;In this case study&lt;/strong&gt;, we observed that instead of three 1-hour sprint syncs per week, the team only needed one. The AI managed the other updates asynchronously, saving the PMs nearly 8 hours per month.&lt;/p&gt;
&lt;p&gt;The bar graph below illustrates the sprint management time that was saved as a result of the adoption of Agentic AI.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/ai-5.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;Sprint Management&quot; title=&quot;Sprint Management&quot;&gt;&lt;/center&gt;
&lt;h2&gt;Focus on leadership, not micromanagement&lt;/h2&gt;
&lt;p&gt;Here’s the truth: AI isn’t coming to replace project managers. It’s coming to liberate them.&lt;/p&gt;
&lt;p&gt;By offloading repetitive work like task allocation, reminders, and updates, PMs can finally focus on the work that matters most:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Motivating teams&lt;/li&gt;
&lt;li&gt;Driving strategy&lt;/li&gt;
&lt;li&gt;Building relationships&lt;/li&gt;
&lt;li&gt;Guiding projects toward meaningful outcomes&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The next generation of PMs won’t be remembered as task chasers — they’ll be celebrated as &lt;strong&gt;strategic leaders powered by intelligent AI assistants&lt;/strong&gt;.&lt;/p&gt;
&lt;h2&gt;Key takeaway&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Agentic AI eliminates the “where are we on this?” emails, replacing them with real-time, self-updating project visibility.&lt;/strong&gt;&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/ai-6.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;Futuristic AI dashboard&quot; title=&quot;Futuristic AI dashboard&quot;&gt;&lt;/center&gt;</content:encoded></item><item><title><![CDATA[ From Gantt charts to Generative AI: How Agentic AI is revolutionizing project management]]></title><description><![CDATA[Part 1 of our Gen AI for PM series Intro For decades, project managers have relied on Gantt charts and other static tools to track timelines…]]></description><link>https://developer.hpe.com/1-from-gantt-charts-to-generative-ai-how-agentic-ai-is-revolutionizing-project-management/</link><guid isPermaLink="false">https://developer.hpe.com/1-from-gantt-charts-to-generative-ai-how-agentic-ai-is-revolutionizing-project-management/</guid><pubDate>Wed, 27 Aug 2025 15:04:35 GMT</pubDate><content:encoded>&lt;style&gt;

li {

   font-size: 27px;

   line-height: 33px;

   max-width: none;

}

&lt;/style&gt;
&lt;p&gt;Part 1 of our &lt;em&gt;&lt;strong&gt;Gen AI for PM series&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Intro&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;For decades, project managers have relied on &lt;strong&gt;Gantt charts&lt;/strong&gt; and other static tools to track timelines and dependencies. While these methods provided structure, they often struggled to keep pace with the speed and complexity of modern projects. Enter &lt;strong&gt;Generative AI and Agentic AI&lt;/strong&gt;—a leap forward that transforms project management from reactive tracking to &lt;strong&gt;proactive intelligence&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Agentic AI doesn’t just visualize timelines; it continuously analyzes data, predicts risks, reallocates resources, and even recommends next best actions. This shift marks a new era where project managers move beyond manual monitoring and become &lt;strong&gt;strategic leaders&lt;/strong&gt;, supported by AI-driven insights that keep projects adaptive, resilient, and future-ready.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;From static plans to intelligent partners&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Picture this: You’re a project manager (PM) juggling deadlines, budget limits, and a flood of emails. You’ve got your trusted Gantt chart open, but within hours, it’s already outdated. A supplier delay changes Task A, which cascades into Tasks B and C, and suddenly your neat timeline looks like spaghetti. Sound familiar?&lt;/p&gt;
&lt;p&gt;For decades, project management has relied on &lt;strong&gt;static tools&lt;/strong&gt; — from sticky notes on whiteboards to spreadsheets and Gantt charts. They’ve helped PMs visualize tasks and dependencies, but they’re fundamentally &lt;strong&gt;snapshots in time&lt;/strong&gt;. The moment reality shifts, the tools fall behind, and project managers are left manually updating and chasing information.&lt;/p&gt;
&lt;p&gt;Today, a new paradigm is emerging. &lt;strong&gt;Agentic AI&lt;/strong&gt; — intelligent agents that adapt, predict, and act — is turning project management from reactive firefighting into proactive leadership. Think of it not as another tool, but as your &lt;strong&gt;digital co-pilot&lt;/strong&gt; who keeps the plan alive, responsive, and self-updating.&lt;/p&gt;
&lt;p&gt;In this blog post, we&apos;ll compare traditional project management with project management optimized through Agentic AI, as illustrated in the example below.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;: A split-screen illustration showing Traditional Gantt Chart → Manual Updates → Bottlenecked PM vs. Agentic AI Dashboard → Real-Time Updates → Autonomous Adjustments.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/g1.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;Gantt chart&quot; title=&quot;Gantt chart&quot;&gt;&lt;/center&gt;
&lt;h2&gt;&lt;strong&gt;Section 1: What is Agentic AI?&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;At its core, &lt;strong&gt;Agentic AI&lt;/strong&gt; refers to AI systems that not only respond to queries but also &lt;strong&gt;take initiative&lt;/strong&gt;. Unlike chatbots that wait for prompts, Agentic AI can monitor, make decisions, and act autonomously within defined boundaries.&lt;/p&gt;
&lt;p&gt;In project management, this means:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Always-on monitoring&lt;/strong&gt;, i.e., scanning budgets, tasks, risks, and dependencies in real time.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Context-aware decision-making&lt;/strong&gt;, i.e., adjusting schedules when delays happen.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Proactive communication&lt;/strong&gt;, i.e., alerting stakeholders before problems escalate.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Think of using Agentic AI as hiring a &lt;strong&gt;junior project manager&lt;/strong&gt; who works 24/7, never forgets, and never gets tired.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;As an example&lt;/strong&gt;: If a teammate calls in sick, an &lt;strong&gt;AI scheduling agent&lt;/strong&gt; instantly reallocates their tasks based on skills and availability, updates the timeline, and notifies the team — before you even sip your morning coffee.&lt;/p&gt;
&lt;p&gt;A diagram showing Agents → Scheduling, Risk, Communication → Central PM Dashboard.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/g2.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;Scheduling agent&quot; title=&quot;Scheduling agent&quot;&gt;&lt;/center&gt;
&lt;h2&gt;Section 2: How agents automate planning &amp;#x26; scheduling&lt;/h2&gt;
&lt;p&gt;One of the biggest pain points for PMs is the constant back-and-forth of scheduling.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;How Agentic AI helps:&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Dynamic scheduling&lt;/strong&gt; – Agents adjust project timelines automatically when dependencies shift.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Example: A supplier delays material delivery by two days. As a consequence, the AI shifts downstream tasks, updates the Gantt chart, and sends alerts (a minimal sketch follows this list).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Skill-based allocation&lt;/strong&gt; – AI assigns tasks to the most suitable team members based on workload and expertise.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Example: A high-priority bug fix goes to the developer with a proven track record in similar issues.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Automated reminders&lt;/strong&gt; – AI sends nudges when deadlines approach, reducing PM’s email load.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
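&lt;p&gt;As a rough illustration of dynamic scheduling (the supplier-delay example above), the sketch below pushes every downstream task by the length of the slip and reports the new dates. The schedule, task names, and dates are invented for the example.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import datetime as dt

# Hypothetical schedule: when a delivery slips, shift the downstream tasks too.
schedule = {
    &quot;material delivery&quot;: dt.date(2025, 9, 1),
    &quot;framing&quot;:           dt.date(2025, 9, 2),
    &quot;inspection&quot;:        dt.date(2025, 9, 5),
}
downstream = [&quot;framing&quot;, &quot;inspection&quot;]

def apply_slip(task, days):
    slip = dt.timedelta(days=days)
    schedule[task] += slip
    for t in downstream:
        schedule[t] += slip
        print(f&quot;Rescheduled {t} to {schedule[t]} and alerted the owners&quot;)

apply_slip(&quot;material delivery&quot;, days=2)
&lt;/code&gt;&lt;/pre&gt;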
&lt;p&gt;&lt;strong&gt;The diagram below illustrates how Agentic AI streamlines task management.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;First, new tasks are &lt;strong&gt;created&lt;/strong&gt; and automatically &lt;strong&gt;assigned to the right agent&lt;/strong&gt;. The system then performs a &lt;strong&gt;skill match&lt;/strong&gt; to ensure the most capable resource is selected. Once assigned, the AI sends &lt;strong&gt;automatic reminders&lt;/strong&gt; to keep tasks on track. Finally, it provides &lt;strong&gt;real-time progress tracking&lt;/strong&gt;, ensuring transparency and accountability at every stage.&lt;/p&gt;
&lt;p&gt;All of this is orchestrated under the umbrella of &lt;strong&gt;Automated Task Management&lt;/strong&gt; &lt;strong&gt;powered by Agentic AI&lt;/strong&gt;, minimizing manual intervention and boosting efficiency.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/g3.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;Automate planning&quot; title=&quot;Automate planning&quot;&gt;&lt;/center&gt;
&lt;h2&gt;&lt;strong&gt;Section 3: Real-time risk detection &amp;#x26; mitigation&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Risk registers are often dusty spreadsheets that PMs update once a week. By the time risks surface, it’s usually too late.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Agentic AI acts as a real-time risk radar that offers:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Continuous scanning&lt;/strong&gt;: Agents monitor project velocity, costs, and communications for anomalies.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Predictive alerts&lt;/strong&gt;: AI forecasts likely delays or overspending before they occur.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Suggested mitigations&lt;/strong&gt;: Offers solutions, like shifting resources or adjusting scope.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt; &lt;strong&gt;As an example&lt;/strong&gt;: In a construction project, AI detects upcoming storms. It reschedules indoor tasks for those days, preventing downtime and cost overruns.&lt;/p&gt;
&lt;p&gt;The graphic below compares traditional risk management with AI-driven risk management. Viewed side by side, the traditional approach is manual and reactive, whilst the AI-driven approach is predictive and proactive.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/g4.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;Real-time risk&quot; title=&quot;Real-time risk&quot;&gt;&lt;/center&gt;
&lt;h2&gt;Section 4: Examples in complex, multi-team projects&lt;/h2&gt;
&lt;p&gt;The larger and more complex the project, the more powerful Agentic AI becomes.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Global product launch -&lt;/strong&gt;
Marketing, supply chain, and engineering must align across time zones. AI agents track dependencies, flag misalignments, and ensure simultaneous readiness.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Software development at scale -&lt;/strong&gt;
Agents monitor sprint velocity, bug queues, and release schedules. If delays in one module impact another, the AI re-prioritizes tasks automatically.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Healthcare facility rollout -&lt;/strong&gt;
Compliance checks, vendor schedules, and training sessions are intertwined. AI ensures compliance tasks are met and automates approval escalations.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The diagram below illustrates this ecosystem, showing how multiple specialized agents—Scheduling, Risk, and Communication—work together under one intelligent system to create a seamless, AI-driven project management experience.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/g5.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;Multi team project chart&quot; title=&quot;Multi team project chart&quot;&gt;&lt;/center&gt;
&lt;h2&gt;Conclusion: AI PM as the next career superpower&lt;/h2&gt;
&lt;p&gt;Will Agentic AI replace project managers? Absolutely not. Just like spreadsheets didn’t replace accountants, Agentic AI won’t replace PMs. Instead, it &lt;strong&gt;augments their role&lt;/strong&gt; — handling the repetitive grunt work so PMs can focus on &lt;strong&gt;strategy, leadership, and creativity&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;In fact, PMs who embrace Agentic AI are positioning themselves as the &lt;strong&gt;next generation of strategic leaders.&lt;/strong&gt; With less time spent on “status chasing” and more on stakeholder engagement, they’ll be the ones driving organizational impact.&lt;/p&gt;
&lt;h2&gt;Key takeaway&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Agentic AI turns project plans into living, self-updating systems — giving PMs foresight, agility, and the freedom to lead.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The visualization below captures this idea: a &lt;strong&gt;futuristic project manager at a unified AI-powered dashboard&lt;/strong&gt;, with supporting agents feeding their insights in real time. It symbolizes the partnership between &lt;strong&gt;human leadership&lt;/strong&gt; and &lt;strong&gt;AI execution&lt;/strong&gt;, where the manager focuses on strategic decisions while the agents handle executional intelligence.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/picture-1.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;Futuristic PM&quot; title=&quot;Futuristic PM&quot;&gt;&lt;/center&gt;</content:encoded></item><item><title><![CDATA[10 Myths About Scalable Parallel Programming Languages (Redux), Part 5: Productivity and Magic Compilers]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/10-myths-about-scalable-parallel-programming-languages-redux-part-5-productivity-and-magic-compilers/</link><guid isPermaLink="false">https://developer.hpe.com/10-myths-about-scalable-parallel-programming-languages-redux-part-5-productivity-and-magic-compilers/</guid><pubDate>Thu, 21 Aug 2025 00:19:53 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Model Context Protocol (MCP): The universal connector for AI applications]]></title><description><![CDATA[Introduction In today's AI-driven landscape, large language models (LLMs) are transforming how we approach writing, research, and problem…]]></description><link>https://developer.hpe.com/sivabala-model-context-protocol-mcp-the-universal-connector-for-ai-applications/</link><guid isPermaLink="false">https://developer.hpe.com/sivabala-model-context-protocol-mcp-the-universal-connector-for-ai-applications/</guid><pubDate>Tue, 12 Aug 2025 02:00:38 GMT</pubDate><content:encoded>&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;In today&apos;s AI-driven landscape, large language models (LLMs) are transforming how we approach writing, research, and problem-solving. Yet even the most sophisticated models face a fundamental limitation: they&apos;re only as effective as the context they operate within. This is where the Model Context Protocol (MCP) emerges as a revolutionary standard, bridging the gap between AI applications and real-world data ecosystems.&lt;/p&gt;
&lt;h2&gt;The challenge: AI applications in isolation&lt;/h2&gt;
&lt;p&gt;Picture this scenario: Your organization has deployed cutting-edge AI tools that can generate brilliant insights, write compelling content, and solve complex problems. Yet these same tools remain blind to your company&apos;s real-time data, can&apos;t access your proprietary systems, and operate without understanding your current operational context.&lt;/p&gt;
&lt;p&gt;This is the reality for most enterprises today. Traditional LLMs operate in silos, disconnected from live systems and dynamic data sources. This isolation severely constrains their utility in practical, enterprise-grade scenarios. Organizations find themselves with powerful AI tools that can&apos;t access their proprietary databases, integrate with existing APIs, or leverage real-time information streams.&lt;/p&gt;
&lt;p&gt;The result? AI applications that feel impressive in demonstrations but fall short in production environments where contextual awareness is paramount. It&apos;s like having a brilliant consultant who knows everything about general business practices but nothing about your specific company, industry challenges, or current operational state.&lt;/p&gt;
&lt;h2&gt;Enter MCP - the game changer&lt;/h2&gt;
&lt;p&gt;Model Context Protocol addresses this challenge head-on by functioning as the &quot;USB-C port&quot; for AI applications. Just as USB-C provides a universal standard for connecting diverse devices and peripherals, MCP provides a standardized interface that allows AI systems to connect seamlessly with external tools, APIs, and data sources.&lt;/p&gt;
&lt;p&gt;As content strategist Gary Vaynerchuk observed, &quot;Content is king, but context is god.&quot; MCP ensures that AI models transcend mere intelligence to achieve deep contextual understanding of your specific environment, data, and operational needs.&lt;/p&gt;
&lt;h2&gt;How MCP works - architecture deep dive&lt;/h2&gt;
&lt;p&gt;Understanding MCP requires grasping its elegant three-component architecture that works in perfect harmony:&lt;/p&gt;
&lt;h3&gt;The three pillars&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Host Application&lt;/strong&gt;&lt;br&gt;
Think of this as your AI command center – applications like Claude Desktop, Cursor, or any AI-powered tool that serves as the primary interface for users. The host houses the MCP Client functionality and acts as the orchestrator of AI-human interactions.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;MCP Client&lt;/strong&gt;&lt;br&gt;
This is the sophisticated translator and communication bridge. It facilitates seamless data exchange between your AI application and the various MCP servers, handling protocol negotiations and data formatting while ensuring that information flows smoothly.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;MCP Server&lt;/strong&gt;&lt;br&gt;
These are specialized gateways that expose your organization&apos;s tools, structured data, and computational capabilities to client applications through standardized interfaces. Each server can represent different aspects of your infrastructure – from databases to APIs to monitoring systems.&lt;/p&gt;
&lt;h3&gt;Three interfaces that make the magic happen&lt;/h3&gt;
&lt;p&gt;The protocol operates through three critical interfaces that transform static AI into dynamic, actionable intelligence:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Tools interface&lt;/strong&gt;&lt;br&gt;
This transforms passive AI models into active, agentic AI systems capable of real-world interactions. Unlike traditional AI that merely generates recommendations, agentic AI powered by MCP can autonomously execute actions based on context and user intent. Through the Tools Interface, AI agents can trigger deployments, create incident tickets, send notifications, schedule maintenance windows, or execute complex multi-step workflows – all while maintaining appropriate guardrails and approval processes. This represents a fundamental shift from AI as a consultative tool to AI as an operational participant in your infrastructure.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Resources interface&lt;/strong&gt;&lt;br&gt;
This provides structured access to your organization&apos;s data repositories, content management systems, and information databases. Your AI models operate with current, relevant information rather than stale training data, ensuring responses are both intelligent and immediately applicable.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Prompts interface&lt;/strong&gt;&lt;br&gt;
This facilitates the creation and deployment of reusable templates and workflows, enabling consistent AI behavior patterns across different use cases. Think of it as creating &quot;muscle memory&quot; for your AI applications.&lt;/p&gt;
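&lt;p&gt;To ground the three pillars and three interfaces, here is a minimal sketch of an MCP server that exposes one tool and one resource. It assumes the reference Python SDK (installed with &lt;code&gt;pip install mcp&lt;/code&gt;); the import path, decorators, and the stubbed weather data are assumptions based on that SDK at the time of writing and may differ in other versions or languages.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;from mcp.server.fastmcp import FastMCP

mcp = FastMCP(&quot;weather-demo&quot;)

@mcp.tool()
def forecast(city: str, days: int = 3) -&gt; str:
    # A stubbed tool a connected AI client can discover and call.
    return f&quot;{days}-day forecast for {city}: mild, chance of rain&quot;

@mcp.resource(&quot;weather://{city}/current&quot;)
def current_conditions(city: str) -&gt; str:
    # A resource exposing current conditions as readable context.
    return f&quot;Current conditions in {city}: 21 C, light wind&quot;

if __name__ == &quot;__main__&quot;:
    mcp.run()   # serves over stdio by default, ready for capability discovery
&lt;/code&gt;&lt;/pre&gt;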
&lt;h2&gt;The communication dance&lt;/h2&gt;
&lt;p&gt;The MCP communication process follows an elegant handshake protocol that ensures robust, secure interactions:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Capability discovery&lt;/strong&gt;: The client initiates contact by querying servers for available tools, prompts, and resources – essentially asking &quot;What can you help me with?&quot;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Capability acknowledgment&lt;/strong&gt;: Servers respond with comprehensive inventories of their offerings, like a detailed service menu&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Message exchange&lt;/strong&gt;: Once capabilities are established, ongoing communication enables dynamic interaction between AI models and server resources&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Let me illustrate with a practical example: Imagine a Weather API server that exposes forecasting tools and meteorological data templates. When an AI application needs weather information, the MCP client can leverage these resources to generate contextually rich, location-specific weather insights that go far beyond generic forecasts.&lt;/p&gt;
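&lt;p&gt;Expressed as JSON-RPC messages, the discovery step of that dance looks roughly like the sketch below. The payloads are illustrative Python dictionaries; consult the MCP specification for the authoritative message shapes and field names.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import json

# Illustrative capability-discovery exchange (example values only).
discovery_request = {
    &quot;jsonrpc&quot;: &quot;2.0&quot;,
    &quot;id&quot;: 1,
    &quot;method&quot;: &quot;tools/list&quot;,          # &quot;what can you help me with?&quot;
}

# The kind of answer a weather server might send back:
discovery_response = {
    &quot;jsonrpc&quot;: &quot;2.0&quot;,
    &quot;id&quot;: 1,
    &quot;result&quot;: {
        &quot;tools&quot;: [
            {&quot;name&quot;: &quot;forecast&quot;,
             &quot;description&quot;: &quot;N-day forecast for a city&quot;,
             &quot;inputSchema&quot;: {&quot;type&quot;: &quot;object&quot;,
                             &quot;properties&quot;: {&quot;city&quot;: {&quot;type&quot;: &quot;string&quot;}}}},
        ]
    },
}

print(json.dumps(discovery_response, indent=2))
&lt;/code&gt;&lt;/pre&gt;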
&lt;h2&gt;The universal value proposition&lt;/h2&gt;
&lt;p&gt;MCP delivers transformative advantages across the entire AI ecosystem:&lt;/p&gt;
&lt;h3&gt;For AI developers: Acceleration through standardization&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Universal compatibility&lt;/strong&gt;: Connect your applications to any MCP-compliant server without building custom integrations&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Reduced development overhead&lt;/strong&gt;: Focus on core AI functionality rather than wrestling with bespoke connectors&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Scalable architecture&lt;/strong&gt;: Easily expand application capabilities by integrating additional MCP servers as your needs grow&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;For tool and API providers: Build once, impact everywhere&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Universal reach&lt;/strong&gt;: Create a single MCP server that works across all compatible AI applications&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Expanded market access&lt;/strong&gt;: Make your tools and data accessible to the entire growing MCP ecosystem&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Standardized interface&lt;/strong&gt;: Reduce documentation burden and support overhead through consistent protocols&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;For end users: Intelligence meets reality&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Enhanced AI capabilities&lt;/strong&gt;: Access AI applications that truly understand and interact with your real-world data&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Seamless workflows&lt;/strong&gt;: Experience fluid integration between AI tools and your existing systems&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Context-aware responses&lt;/strong&gt;: Receive more relevant, actionable insights that reflect your current situation&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;For enterprise organizations: Governance meets innovation&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Clear separation of concerns&lt;/strong&gt;: Maintain distinct boundaries between AI development and product teams&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Security and governance&lt;/strong&gt;: Implement consistent access controls and data governance across AI integrations&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Scalable AI strategy&lt;/strong&gt;: Deploy AI capabilities across your organization without architectural complexity&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Market momentum - the numbers tell the story&lt;/h2&gt;
&lt;p&gt;MCP is rapidly establishing itself as the foundational protocol for AI integration, with adoption metrics that demonstrate serious industry traction:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Over 5,000 active MCP servers&lt;/strong&gt; deployed as of May 2025&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Major AI platforms&lt;/strong&gt; including OpenAI integrated MCP support within one week of release&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Enterprise adoption&lt;/strong&gt; accelerating across technology, healthcare, and financial services sectors&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Developer community&lt;/strong&gt; growing exponentially with contributions from leading tech companies&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This momentum positions MCP as the &quot;HTTP protocol&quot; of AI integration – a fundamental standard that enables the next generation of intelligent applications.&lt;/p&gt;
&lt;h2&gt;Real-world impact that goes beyond theory&lt;/h2&gt;
&lt;p&gt;Organizations implementing MCP are seeing immediate, measurable benefits:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;DevOps teams&lt;/strong&gt; can now ask AI assistants to &quot;Check the health of our production environment and create tickets for any critical alerts&quot; – with the AI actually accessing monitoring systems and ticketing platforms.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Sales teams&lt;/strong&gt; leverage AI that understands current pipeline data, customer interaction history, and market conditions to provide genuinely useful recommendations rather than generic advice.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Support organizations&lt;/strong&gt; deploy AI that can access knowledge bases, ticket systems, and customer data to provide contextual, actionable support guidance.&lt;/p&gt;
&lt;h2&gt;The path forward&lt;/h2&gt;
&lt;p&gt;The Model Context Protocol represents more than a technical specification – it embodies a fundamental shift in AI application architecture. By standardizing context exchange mechanisms, MCP enables a new class of AI applications that are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Deeply integrated&lt;/strong&gt;: Connected to real-world data and systems&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Contextually aware&lt;/strong&gt;: Operating with current, relevant information&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Dynamically capable&lt;/strong&gt;: Able to perform actions beyond text generation&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Enterprise ready&lt;/strong&gt;: Designed for production deployment with appropriate security and governance&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Getting started on your MCP journey&lt;/h2&gt;
&lt;p&gt;Organizations considering MCP adoption should focus on these strategic steps:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Identify high-value use cases&lt;/strong&gt;: Determine where contextual AI can deliver immediate business impact in your environment&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Assess integration points&lt;/strong&gt;: Catalog existing APIs and data sources that would benefit from AI accessibility&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Plan your pilot&lt;/strong&gt;: Start with a focused MCP server implementation that addresses specific organizational needs&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Engage the ecosystem&lt;/strong&gt;: Connect with the growing MCP community for best practices and shared learnings&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;What&apos;s next?&lt;/h2&gt;
&lt;p&gt;In my next article, I&apos;ll dive deep into a practical implementation story: Building HPE OpsRamp&apos;s MCP server. I&apos;ll explore the technical architecture decisions, implementation challenges, and real-world benefits that emerge when monitoring and operations data becomes truly AI-accessible.&lt;/p&gt;
&lt;p&gt;The future of AI isn&apos;t just about smarter models – it&apos;s about smarter connections between those models and the real world where business happens. MCP is making that future possible, one standardized connection at a time.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;em&gt;Part 2 of this series will explore the practical implementation of OpsRamp&apos;s MCP server, showcasing how these concepts translate into real-world monitoring and operations intelligence. Stay tuned for deep technical insights and practical guidance.&lt;/em&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Part 11: Agentic AI versus AI agent]]></title><description><![CDATA[Introduction Artificial Intelligence (AI) is evolving rapidly, and two terms — Agentic AI and AI agent — are increasingly appearing in…]]></description><link>https://developer.hpe.com/part-11-agentic-ai-vs-ai-agent/</link><guid isPermaLink="false">https://developer.hpe.com/part-11-agentic-ai-vs-ai-agent/</guid><pubDate>Mon, 11 Aug 2025 08:48:18 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Artificial Intelligence (AI) is evolving rapidly, and two terms — Agentic AI and AI agent — are increasingly appearing in business strategy documents, technical roadmaps, and boardroom discussions. While they sound similar, they represent distinct concepts with different implications for enterprise strategy, operations, and innovation.&lt;/p&gt;
&lt;p&gt;For business leaders and senior managers, understanding the distinction is not just academic — it can determine whether an AI initiative scales effectively, integrates seamlessly into your operations, and delivers measurable Return on investment (ROI).&lt;/p&gt;
&lt;h3&gt;This article breaks down Agentic AI vs AI agent with:&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Clear definitions and conceptual differences&lt;/li&gt;
&lt;li&gt;Technical underpinnings&lt;/li&gt;
&lt;li&gt;Business use cases&lt;/li&gt;
&lt;li&gt;Strategic considerations for adoption&lt;/li&gt;
&lt;li&gt;Risks and governance&lt;/li&gt;
&lt;li&gt;Future trends&lt;/li&gt;
&lt;li&gt;References for deeper exploration&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;1. Defining the terms&lt;/h2&gt;
&lt;h3&gt;1.1 AI agent&lt;/h3&gt;
&lt;p&gt;An &lt;strong&gt;AI agent is a single, autonomous software program&lt;/strong&gt; that perceives an environment, makes decisions, and takes actions toward a defined goal, often within a narrow domain.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Key characteristics:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Operates &lt;strong&gt;within a predefined scope&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Uses &lt;strong&gt;rules, heuristics, or ML models&lt;/strong&gt; for decision-making&lt;/li&gt;
&lt;li&gt;Limited ability to adapt beyond programmed or trained boundaries&lt;/li&gt;
&lt;li&gt;Often embedded into &lt;strong&gt;applications or workflows&lt;/strong&gt; for a specific function&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Examples:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A chatbot that answers HR policy questions&lt;/li&gt;
&lt;li&gt;A recommendation engine for an e-commerce site&lt;/li&gt;
&lt;li&gt;An autonomous trading bot&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;1.2 Agentic AI&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Agentic AI&lt;/strong&gt; is a system of multiple AI agents orchestrated to work collaboratively, often with dynamic planning, self-reflection, and multi-step reasoning capabilities. It moves beyond isolated automation toward goal-oriented, adaptive, and multi-role AI-driven ecosystems.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Key characteristics:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Multi-agent orchestration&lt;/strong&gt;: Different specialized agents work together&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Autonomy in task decomposition&lt;/strong&gt;: Breaks high-level goals into sub-tasks&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Reasoning loops&lt;/strong&gt;: Self-reflects, evaluates outcomes, retries or adjusts&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Tool integration&lt;/strong&gt;: Uses APIs, databases, and other systems dynamically&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Adaptability&lt;/strong&gt;: Learns and optimizes over time&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Examples:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;An AI-powered compliance team where:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Agent A scans documents&lt;/li&gt;
&lt;li&gt;Agent B applies regulatory rules&lt;/li&gt;
&lt;li&gt;Agent C drafts compliance reports&lt;/li&gt;
&lt;li&gt;Orchestrator Agent manages workflows and escalations&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;An industrial repair assistant that autonomously diagnoses, orders parts, and schedules technicians.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Quick analogy:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Let me offer a simple analogy to bring clarity to the difference between an AI agent and Agentic AI.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;AI Agent&lt;/strong&gt; = A skilled individual employee&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Agentic AI&lt;/strong&gt; = A &lt;strong&gt;self-managed, multi-skilled team&lt;/strong&gt; with a project manager, analysts, and doers — all AI-driven&lt;/li&gt;
&lt;/ul&gt;
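&lt;p&gt;To make the distinction concrete, here is a minimal, hypothetical Python sketch (no specific framework is assumed and all names are illustrative): a single AI agent answers one narrow class of questions, while an Agentic AI orchestrator decomposes a goal into sub-tasks and routes each one to a specialized agent.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def faq_agent(question: str) -&gt; str:
    &quot;&quot;&quot;A single AI agent: narrow, task-specific (answers HR FAQs).&quot;&quot;&quot;
    answers = {&quot;leave policy&quot;: &quot;Employees receive 20 days of paid leave per year.&quot;}
    return answers.get(question.lower(), &quot;Sorry, I can only answer HR FAQ questions.&quot;)

# Specialized agents that an Agentic AI system could orchestrate.
def scan_agent(document: str) -&gt; list:
    return [line for line in document.splitlines() if &quot;risk&quot; in line.lower()]

def rules_agent(findings: list) -&gt; list:
    return [f&quot;Possible violation: {finding}&quot; for finding in findings]

def report_agent(violations: list) -&gt; str:
    return &quot;Compliance report:\n&quot; + &quot;\n&quot;.join(violations or [&quot;No issues found.&quot;])

def orchestrator(document: str) -&gt; str:
    &quot;&quot;&quot;Agentic AI: decomposes the goal, calls specialized agents, reviews the outcome.&quot;&quot;&quot;
    findings = scan_agent(document)        # sub-task 1: scan the document
    violations = rules_agent(findings)     # sub-task 2: apply regulatory rules
    report = report_agent(violations)      # sub-task 3: draft the report
    if not findings:                       # simple reasoning loop: flag a retry condition
        report += &quot;\n(Note: rescan recommended with broader keywords.)&quot;
    return report

print(faq_agent(&quot;leave policy&quot;))
print(orchestrator(&quot;Line 1: all good\nLine 2: risk of data leak&quot;))
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In a real deployment, each of these functions would be an LLM-backed agent with its own tools and memory, coordinated by one of the frameworks listed in section 2.&lt;/p&gt;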
&lt;h2&gt;2. Technical architecture differences&lt;/h2&gt;
&lt;table border=&quot;1&quot; cellpadding=&quot;8&quot; cellspacing=&quot;0&quot; style=&quot;border-collapse: collapse; width: 100%;&quot;&gt;
  &lt;thead style=&quot;background-color:#f2f2f2&quot;&gt;
    &lt;tr&gt;
      &lt;th&gt;Feature&lt;/th&gt;
      &lt;th&gt;AI Agent&lt;/th&gt;
      &lt;th&gt;Agentic AI&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;Scope&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;Narrow, task-specific&lt;/td&gt;
      &lt;td&gt;Broad, multi-task, goal-oriented&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;Architecture&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;Single process or microservice&lt;/td&gt;
      &lt;td&gt;Multi-agent framework with orchestration layer&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;Decision-making&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;Rule-based or model-based within fixed scope&lt;/td&gt;
      &lt;td&gt;Multi-step reasoning, task decomposition&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;Adaptability&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;Limited&lt;/td&gt;
      &lt;td&gt;High (dynamic adaptation to changing contexts)&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;Integration&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;Usually integrates with one system&lt;/td&gt;
      &lt;td&gt;Connects to multiple tools, APIs, data sources&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;Examples of frameworks&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;Rasa, Botpress, Dialogflow&lt;/td&gt;
      &lt;td&gt;LangChain Agents, AutoGPT, BabyAGI, AGNO framework&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;3. Business use cases&lt;/h2&gt;
&lt;h3&gt;3.1 AI agent use cases&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Customer support bots&lt;/strong&gt; – Provide FAQs and simple troubleshooting&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Automated trading systems&lt;/strong&gt; – Execute trades based on pre-defined signals&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;HR chatbots&lt;/strong&gt; – Answer leave policy questions&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Business impact:&lt;/strong&gt; Quick to deploy, lower cost, but limited in complexity and scope.&lt;/p&gt;
&lt;h3&gt;3.2 Agentic AI use cases&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Regulatory compliance automation&lt;/strong&gt; – Multiple agents scan, analyze, summarize, and report&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Healthcare assistants&lt;/strong&gt; – Agents for symptoms checking, scheduling, and generating discharge summaries&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Complex industrial troubleshooting&lt;/strong&gt; – Agents for diagnostics, parts ordering, repair instructions&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Business impact:&lt;/strong&gt; Higher complexity but greater ROI potential through process automation at scale.&lt;/p&gt;
&lt;h2&gt;4. Strategic considerations for business leaders&lt;/h2&gt;
&lt;h3&gt;4.1 When to use an AI agent&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;You have a &lt;strong&gt;clear, narrow task&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;The process is &lt;strong&gt;repeatable with predictable inputs/outputs&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;ROI needs to be realized quickly with low implementation risk&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;4.2 When to use Agentic AI&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Multiple complex workflows need &lt;strong&gt;coordination&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;There is &lt;strong&gt;uncertainty and variability&lt;/strong&gt; in the environment&lt;/li&gt;
&lt;li&gt;Long-term scalability and adaptability are priorities&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Case example:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;A bank could deploy:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;AI agent:&lt;/strong&gt; To answer customer queries about loan status&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Agentic AI:&lt;/strong&gt; To orchestrate fraud detection, compliance checks, and customer communication in an integrated way&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;5. Risks, challenges, and governance&lt;/h2&gt;
&lt;h3&gt;5.1 AI agent risks&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Overfitting to narrow tasks&lt;/li&gt;
&lt;li&gt;Limited scalability&lt;/li&gt;
&lt;li&gt;Vulnerable to changing business requirements&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;5.2 Agentic AI risks&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Complexity in orchestration&lt;/li&gt;
&lt;li&gt;Higher cost of development and maintenance&lt;/li&gt;
&lt;li&gt;AI hallucinations amplified if orchestration lacks guardrails&lt;/li&gt;
&lt;li&gt;Governance challenges (data security, compliance, ethics)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Mitigation strategies:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;To reduce the potential risks and challenges identified, the following strategies can be implemented. Each one is aimed at ensuring operational resilience, improving outcomes, and minimizing negative impacts:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Guardrails&lt;/strong&gt;: NeMo Guardrails, policy frameworks&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Auditability&lt;/strong&gt;: Maintain decision logs&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Ethics&lt;/strong&gt;: Align with corporate AI principles&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Testing&lt;/strong&gt;: Continuous evaluation under real-world conditions&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;6. Technology enablers&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;For AI agents:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;When deploying AI agents, the following technologies help ensure reliable, safe, and efficient performance within their defined scope:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Rasa, Dialogflow, Botpress&lt;/li&gt;
&lt;li&gt;Domain-specific ML models&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;For Agentic AI:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Agentic AI systems operate with greater autonomy and complexity, so their enabling technologies must support adaptability, multi-step reasoning, and integration across multiple systems:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;LangChain multi-agent orchestration&lt;/li&gt;
&lt;li&gt;AutoGPT &amp;#x26; BabyAGI architectures&lt;/li&gt;
&lt;li&gt;AGNO Framework (for enterprise-grade agent teams)&lt;/li&gt;
&lt;li&gt;Vector databases (Qdrant, Milvus)&lt;/li&gt;
&lt;li&gt;LLMs (GPT-4, Claude, LLaMA variants)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;7. Future trends&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Hybrid Systems:&lt;/strong&gt; AI agents enhanced with Agentic AI orchestration&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Industry-Specific Agent Ecosystems:&lt;/strong&gt; Pre-built for finance, healthcare, logistics&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Agent Marketplaces:&lt;/strong&gt;  Plug-and-play agents that integrate into orchestrators&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Integration with IoT &amp;#x26; Edge AI:&lt;/strong&gt; Enabling real-time decision-making in physical environments&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;8. Decision framework for leaders&lt;/h2&gt;
&lt;table border=&quot;1&quot; cellpadding=&quot;8&quot; cellspacing=&quot;0&quot; style=&quot;border-collapse: collapse; width: 100%;&quot;&gt;
  &lt;thead style=&quot;background-color:#f2f2f2&quot;&gt;
    &lt;tr&gt;
      &lt;th&gt;Question&lt;/th&gt;
      &lt;th&gt;If “Yes” →&lt;/th&gt;
      &lt;th&gt;Answer&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Is the task narrow &amp;amp; predictable?&lt;/td&gt;
      &lt;td&gt;AI Agent&lt;/td&gt;
      &lt;td&gt;✅&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Does it require multi-step reasoning?&lt;/td&gt;
      &lt;td&gt;Agentic AI&lt;/td&gt;
      &lt;td&gt;✅&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Will it integrate with one system only?&lt;/td&gt;
      &lt;td&gt;AI Agent&lt;/td&gt;
      &lt;td&gt;✅&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Do you need adaptability to changing inputs?&lt;/td&gt;
      &lt;td&gt;Agentic AI&lt;/td&gt;
      &lt;td&gt;✅&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Is speed-to-market the top priority?&lt;/td&gt;
      &lt;td&gt;AI Agent&lt;/td&gt;
      &lt;td&gt;✅&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Is scalability across processes the goal?&lt;/td&gt;
      &lt;td&gt;Agentic AI&lt;/td&gt;
      &lt;td&gt;✅&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
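&lt;p&gt;If it helps to operationalize the table, the same framework can be expressed as a tiny, purely illustrative Python helper (the questions and labels simply mirror the rows above):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Purely illustrative: encode the decision framework above as a simple voting helper.
QUESTIONS = {
    &quot;Is the task narrow and predictable?&quot;: &quot;AI agent&quot;,
    &quot;Does it require multi-step reasoning?&quot;: &quot;Agentic AI&quot;,
    &quot;Will it integrate with one system only?&quot;: &quot;AI agent&quot;,
    &quot;Do you need adaptability to changing inputs?&quot;: &quot;Agentic AI&quot;,
    &quot;Is speed-to-market the top priority?&quot;: &quot;AI agent&quot;,
    &quot;Is scalability across processes the goal?&quot;: &quot;Agentic AI&quot;,
}

def recommend(yes_answers):
    &quot;&quot;&quot;Return a recommendation based on the questions answered with yes.&quot;&quot;&quot;
    votes = [QUESTIONS[question] for question in yes_answers]
    agent_votes, agentic_votes = votes.count(&quot;AI agent&quot;), votes.count(&quot;Agentic AI&quot;)
    if agent_votes == agentic_votes:
        return &quot;Mixed profile: consider deploying both (see the conclusion below).&quot;
    return &quot;AI agent&quot; if agent_votes &gt; agentic_votes else &quot;Agentic AI&quot;

print(recommend({&quot;Is the task narrow and predictable?&quot;, &quot;Is speed-to-market the top priority?&quot;}))
&lt;/code&gt;&lt;/pre&gt;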
&lt;h2&gt;9. Conclusion&lt;/h2&gt;
&lt;p&gt;The choice between &lt;strong&gt;AI agent&lt;/strong&gt; and &lt;strong&gt;Agentic AI&lt;/strong&gt; is not binary — many enterprises will deploy both. The key is &lt;strong&gt;understanding the maturity of your AI roadmap&lt;/strong&gt;, your operational complexity, and your scalability ambitions.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;AI agents&lt;/strong&gt; are quick wins for automation&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Agentic AI&lt;/strong&gt; is a long-term strategic play for transformation&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;By aligning your choice with business strategy and technical capability, you position your organization to move from isolated AI successes to enterprise-wide AI transformation.&lt;/p&gt;
&lt;h2&gt;References&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;Russell, S., &amp;#x26; Norvig, P. (2021). Artificial Intelligence: A Modern Approach. Pearson. &lt;a href=&quot;https://www.pearson.com/en-us/subject-catalog/p/artificial-intelligence%E2%80%91a%E2%80%91modern%E2%80%91approach/P200000003500/9780137505135&quot;&gt;https://www.pearson.com/en-us/subject-catalog/p/artificial-intelligence‑a‑modern‑approach/P200000003500/9780137505135&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;LangChain Documentation – &lt;a href=&quot;https://docs.langchain.com&quot;&gt;https://docs.langchain.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Auto-GPT – &lt;a href=&quot;https://github.com/Torantulino/Auto-GPT&quot;&gt;https://github.com/Torantulino/Auto-GPT&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;AGNO Framework – &lt;a href=&quot;https://docs.agno.com/introduction&quot;&gt;https://docs.agno.com/introduction&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;</content:encoded></item><item><title><![CDATA[Provisioning MKS clusters in HPE Private Cloud Enterprise]]></title><description><![CDATA[HPE Private Cloud Enterprise now includes the Morpheus Kubernetes Service (MKS) feature, allowing users to deploy and manage Kubernetes (K8s…]]></description><link>https://developer.hpe.com/provisioning-mks-clusters-in-hpe-greenlake-for-private-cloud-enterprise/</link><guid isPermaLink="false">https://developer.hpe.com/provisioning-mks-clusters-in-hpe-greenlake-for-private-cloud-enterprise/</guid><pubDate>Fri, 08 Aug 2025 08:09:07 GMT</pubDate><content:encoded>&lt;style&gt; li { font-size: 27px; line-height: 33px; max-width: none; } &lt;/style&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/hpe-private-cloud-enterprise.html&quot;&gt;HPE Private Cloud Enterprise&lt;/a&gt; now includes the Morpheus Kubernetes Service (MKS) feature, allowing users to deploy and manage Kubernetes (K8s) clusters directly through &lt;a href=&quot;https://www.hpe.com/us/en/morpheus-enterprise-software.html&quot;&gt;HPE Morpheus Enterprise software&lt;/a&gt;. With the MKS feature, currently in its &lt;em&gt;Beta&lt;/em&gt; phase, customers can take advantage of streamlined MKS cluster provisioning using predefined cluster layouts, making it easier to launch and manage their containerized workloads.&lt;/p&gt;
&lt;p&gt;In this blog post, I will guide you through the process of provisioning an MKS cluster in HPE Private Cloud Enterprise, followed by key post-deployment tasks. These include downloading the &lt;em&gt;kubeconfig&lt;/em&gt; file, scaling the cluster by adding worker nodes, upgrading the K8s cluster version, deploying applications by running workflows, and finally, deleting the MKS cluster when it&apos;s no longer needed.&lt;/p&gt;
&lt;h2&gt;Overview&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/hpe-private-cloud-enterprise.html&quot;&gt;HPE Private Cloud Enterprise&lt;/a&gt; is a fully managed &lt;em&gt;Infrastructure as a Service&lt;/em&gt; (IaaS) offering that brings a modern, cloud-like experience to on-premises environments. It combines the flexibility of hybrid cloud with the enterprise-grade control and security required by enterprise IT.&lt;/p&gt;
&lt;p&gt;Through the integration with &lt;a href=&quot;https://www.hpe.com/us/en/morpheus-enterprise-software.html&quot;&gt;HPE Morpheus Enterprise&lt;/a&gt;, which serves as the cloud management and orchestration layer, HPE Private Cloud Enterprise delivers a unified self-service interface for provisioning virtual machines (VMs), creating containers, and deploying applications, all governed by role-based access control (RBAC). This integration now enables support for the Morpheus Kubernetes Service (MKS) feature, allowing users to deploy and manage K8s clusters with built-in automation and observability capabilities.&lt;/p&gt;
&lt;p&gt;HPE Morpheus Enterprise provides a set of prebuilt MKS cluster layouts that support a variety of K8s versions and cluster types. These cluster layouts provision MKS clusters using the native K8s distribution, streamlining and accelerating deployment. This blog post walks through the process of creating an MKS cluster using one of these preconfigured cluster layouts.&lt;/p&gt;
&lt;h2&gt;Prerequisites&lt;/h2&gt;
&lt;p&gt;Ensure that the following prerequisites are fulfilled:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Access to an HPE Private Cloud Enterprise workspace with the &lt;em&gt;&apos;Private Cloud Tenant Owner&apos;&lt;/em&gt; role, allowing administrative actions in the &lt;em&gt;&lt;strong&gt;Virtual Machines&lt;/strong&gt;&lt;/em&gt; service.&lt;/li&gt;
&lt;li&gt;The group named &lt;em&gt;&apos;Customer Department B&apos;&lt;/em&gt; and the network &lt;em&gt;&apos;Green-Segment&apos;&lt;/em&gt; have already been created.&lt;/li&gt;
&lt;li&gt;HPE Morpheus Enterprise running version 8.0.5 or higher.&lt;/li&gt;
&lt;li&gt;The MKS feature is enabled in HPE Private Cloud Enterprise. You can confirm this by checking for the &lt;em&gt;Clusters&lt;/em&gt; menu under the &lt;em&gt;&lt;strong&gt;Infrastructure&lt;/strong&gt;&lt;/em&gt; tab.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Provisioning an MKS cluster&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;Log in to HPE GreenLake Cloud at &lt;em&gt;&lt;a href=&quot;https://common.cloud.hpe.com/&quot;&gt;https://common.cloud.hpe.com/&lt;/a&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Locate an HPE Private Cloud Enterprise workspace and click &lt;em&gt;&lt;strong&gt;Go to Workspace&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/workspace.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;From the &lt;em&gt;Getting Started&lt;/em&gt; screen, click &lt;em&gt;&lt;strong&gt;Find Services&lt;/strong&gt;&lt;/em&gt;. (If you&apos;ve already launched HPE GreenLake Flex Solutions, the service will appear under &lt;em&gt;Recent Services&lt;/em&gt;, from which you can click &lt;em&gt;&lt;strong&gt;Launch&lt;/strong&gt;&lt;/em&gt;, then skip to step &lt;em&gt;6&lt;/em&gt; below.)&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/get-started.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;From the &lt;em&gt;Services Catalog&lt;/em&gt;, enter &lt;em&gt;&apos;HPE GreenLake Flex Solutions&apos;&lt;/em&gt;. Click the &lt;em&gt;&lt;strong&gt;HPE GreenLake Flex Solutions&lt;/strong&gt;&lt;/em&gt; Workloads result.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/service-catalog.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;5&quot;&gt;
&lt;li&gt;From the Workloads &lt;em&gt;&lt;strong&gt;Overview&lt;/strong&gt;&lt;/em&gt; tab, click &lt;em&gt;&lt;strong&gt;Launch&lt;/strong&gt;&lt;/em&gt; to open the HPE GreenLake Flex Solutions.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/launch-glc.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;6&quot;&gt;
&lt;li&gt;From the Cloud Services &lt;em&gt;&lt;strong&gt;Dashboard&lt;/strong&gt;&lt;/em&gt;, locate the &lt;em&gt;Private Cloud Services&lt;/em&gt; card and ensure that the correct location is selected from the drop-down list.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/launch-morpheus.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;7&quot;&gt;
&lt;li&gt;Click &lt;em&gt;&lt;strong&gt;Launch HPE Morpheus Enterprise&lt;/strong&gt;&lt;/em&gt;. The Morpheus Dashboard screen (&lt;strong&gt;Operations&lt;/strong&gt; &gt; &lt;strong&gt;Dashboard&lt;/strong&gt;) displays.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/morpheus-dashboard.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;8&quot;&gt;
&lt;li&gt;In the &lt;em&gt;Service Console&lt;/em&gt;, click &lt;em&gt;&lt;strong&gt;Infrastructure&lt;/strong&gt;&lt;/em&gt; and select &lt;em&gt;Clusters&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/mks-feature.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;9&quot;&gt;
&lt;li&gt;From the clusters screen, click &lt;em&gt;&lt;strong&gt;+Add Cluster&lt;/strong&gt;&lt;/em&gt; to initiate MKS cluster provisioning.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/add-cluster.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;9.1 &lt;strong&gt;Choose cluster type&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In the &lt;em&gt;CREATE CLUSTER&lt;/em&gt; panel, select &lt;em&gt;&apos;KUBERNETES CLUSTER&apos;&lt;/em&gt; as the &lt;em&gt;Cluster Type&lt;/em&gt; and click &lt;em&gt;&lt;strong&gt;Next&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/cluster-type.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;9.2 &lt;strong&gt;Select a group&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Select a group, for example, &lt;em&gt;&apos;Customer Department B&apos;&lt;/em&gt; and click &lt;em&gt;&lt;strong&gt;Next&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/cluster-group.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;9.3 &lt;strong&gt;Specify the cluster name&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Enter &lt;em&gt;CLUSTER NAME&lt;/em&gt; as &lt;em&gt;&apos;mks-demo&apos;&lt;/em&gt;, and optionally specify &lt;em&gt;RESOURCE NAME&lt;/em&gt;, &lt;em&gt;DESCRIPTION&lt;/em&gt;, and &lt;em&gt;LABELS&lt;/em&gt;. Click &lt;em&gt;&lt;strong&gt;Next&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/cluster-name.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;9.4 &lt;strong&gt;Select cluster layout &amp;#x26; configure master&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Select &lt;em&gt;LAYOUT&lt;/em&gt; and &lt;em&gt;PLAN&lt;/em&gt;, configure &lt;em&gt;VOLUMES&lt;/em&gt;, and select the &lt;em&gt;NETWORKS&lt;/em&gt;, such as &lt;em&gt;&apos;Green-Segment&apos;&lt;/em&gt;. Click &lt;em&gt;&lt;strong&gt;Next&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/cluster-config.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;NOTE: For demonstration purposes, the &lt;em&gt;&apos;MKS Kubernetes 1.31 Cluster on Ubuntu 22.04&apos;&lt;/em&gt; layout is selected. This cluster layout provisions an MKS cluster using a single master with K8s version &lt;em&gt;1.31&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; The Classless Inter-Domain Routing (CIDR) ranges for the &lt;em&gt;POD&lt;/em&gt; and &lt;em&gt;SERVICE&lt;/em&gt; networks define the internal IP ranges assigned to K8s &lt;em&gt;Pods&lt;/em&gt; and &lt;em&gt;Services&lt;/em&gt;. To prevent conflicts, ensure these CIDRs are distinct from your existing network settings.&lt;/p&gt;
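&lt;p&gt;As a quick sanity check before provisioning, a few lines of Python can confirm that the ranges you plan to enter do not overlap. The CIDR values below are hypothetical examples, not recommendations:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import ipaddress

# Hypothetical example values; replace them with the ranges used in your environment.
node_network = ipaddress.ip_network(&quot;10.10.0.0/16&quot;)    # subnet behind the selected network
pod_cidr     = ipaddress.ip_network(&quot;172.16.0.0/16&quot;)   # POD CIDR entered in the cluster layout
service_cidr = ipaddress.ip_network(&quot;172.17.0.0/16&quot;)   # SERVICE CIDR entered in the cluster layout

ranges = {&quot;node network&quot;: node_network, &quot;POD CIDR&quot;: pod_cidr, &quot;SERVICE CIDR&quot;: service_cidr}
names = list(ranges)
for i, first in enumerate(names):
    for second in names[i + 1:]:
        if ranges[first].overlaps(ranges[second]):
            print(f&quot;Conflict: {first} {ranges[first]} overlaps {second} {ranges[second]}&quot;)
print(&quot;Check complete.&quot;)
&lt;/code&gt;&lt;/pre&gt;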
&lt;p&gt;9.5 &lt;strong&gt;Configure worker&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Specify &lt;em&gt;NUMBER OF WORKERS&lt;/em&gt;, along with &lt;em&gt;PLAN&lt;/em&gt;, &lt;em&gt;VOLUMES&lt;/em&gt;, and &lt;em&gt;NETWORKS&lt;/em&gt;. You may retain the default settings or reuse the values previously configured for the master. Click &lt;em&gt;&lt;strong&gt;Next&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/cluster-worker.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;9.6 &lt;strong&gt;Review cluster details&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;You can skip the &lt;strong&gt;Automation&lt;/strong&gt; settings step. The cluster review screen displays.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/cluster-review.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Click &lt;em&gt;&lt;strong&gt;Complete&lt;/strong&gt;&lt;/em&gt;. The MKS cluster provisioning process initiates.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/cluster-provisioning.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Verify MKS cluster&lt;/h2&gt;
&lt;p&gt;After a few minutes, the cluster &lt;em&gt;&apos;mks-demo&apos;&lt;/em&gt; is created using the specified cluster layout: &lt;em&gt;MKS Kubernetes 1.31 Cluster on Ubuntu 22.04&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/cluster-status.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Access MKS cluster&lt;/h2&gt;
&lt;p&gt;Click the &lt;em&gt;&apos;mks-demo&apos;&lt;/em&gt; cluster to view its details from the &lt;em&gt;&lt;strong&gt;Summary&lt;/strong&gt;&lt;/em&gt; tab.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/cluster-details.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Navigate to the &lt;em&gt;&lt;strong&gt;Control&lt;/strong&gt;&lt;/em&gt; tab and run the command &lt;em&gt;&apos;kubectl get nodes&apos;&lt;/em&gt; to view the cluster&apos;s node information.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/cluster-nodes.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In line with the selected cluster layout, &lt;em&gt;MKS Kubernetes 1.31 Cluster on Ubuntu 22.04&lt;/em&gt;, the &lt;em&gt;&apos;mks-demo&apos;&lt;/em&gt; cluster consists of one master and three workers.&lt;/p&gt;
&lt;h2&gt;Run daily cluster operations&lt;/h2&gt;
&lt;p&gt;From the provisioned MKS cluster, the &lt;em&gt;&lt;strong&gt;Actions&lt;/strong&gt;&lt;/em&gt; menu provides a curated set of supported operations that simplify and streamline day-to-day cluster management. From downloading the kubeconfig to scaling the cluster and performing upgrades, these built-in actions help automate key cluster operations, making cluster administration faster, easier, and more consistent.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/cluster-actions.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The following sections explore how to perform these day-to-day cluster operations.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;View and download kube config&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;From the &lt;em&gt;&apos;mks-demo&apos;&lt;/em&gt; cluster, click &lt;em&gt;&lt;strong&gt;Actions&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/view-kubeconfig-menu.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Click &lt;em&gt;&lt;strong&gt;View Kube Config&lt;/strong&gt;&lt;/em&gt; to view the Kube config of the cluster.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/view-kubeconfig.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Save the Kube config content to a file, for example &lt;em&gt;&apos;mks-demo.kubeconfig&apos;&lt;/em&gt;, on a &lt;em&gt;Linux&lt;/em&gt; client as shown in the following output. This file can then be used with the &lt;em&gt;kubectl&lt;/em&gt; CLI or the &lt;em&gt;Helm&lt;/em&gt; tool to access the MKS cluster and deploy applications using either YAML manifests or &lt;em&gt;Helm&lt;/em&gt; charts.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/kubectl-console.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
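&lt;p&gt;The same kubeconfig file also works for scripting against the cluster. Here is a minimal sketch using the official Kubernetes Python client (assuming the &lt;code&gt;kubernetes&lt;/code&gt; package is installed on the client) that lists the cluster nodes and their versions:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Minimal sketch: list the MKS cluster nodes using the kubeconfig saved above.
from kubernetes import client, config

config.load_kube_config(config_file=&quot;mks-demo.kubeconfig&quot;)  # file saved in the previous step
core_v1 = client.CoreV1Api()

for node in core_v1.list_node().items:
    roles = [label for label in node.metadata.labels if &quot;node-role&quot; in label]
    print(node.metadata.name, node.status.node_info.kubelet_version, roles)
&lt;/code&gt;&lt;/pre&gt;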
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Add additional worker&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In K8s, adding new workers to an existing cluster involves setting up self-registration, provisioning VM instances, and integrating them into the cluster. This process can be time-consuming and often demands custom automation scripts to streamline the workflow.&lt;/p&gt;
&lt;p&gt;From the MKS screen, navigate to a cluster and click &lt;em&gt;&lt;strong&gt;Actions&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/add-worker.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Click &lt;em&gt;&lt;strong&gt;Add VMware Kubernetes Worker&lt;/strong&gt;&lt;/em&gt; to initiate adding a new worker to the cluster.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Enter &lt;em&gt;NAME&lt;/em&gt; and an optional brief &lt;em&gt;DESCRIPTION&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/cluster-add-worker-name.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Select &lt;em&gt;PLAN&lt;/em&gt; and configure &lt;em&gt;VOLUMES&lt;/em&gt; and &lt;em&gt;NETWORKS&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/cluster-add-worker-config.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;To add multiple workers at once, set the desired value in the &lt;em&gt;NUMBER OF WORKERS&lt;/em&gt; field.&lt;/p&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Review worker details.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;You can skip the &lt;strong&gt;Automation&lt;/strong&gt; settings step. The worker review screen displays.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/cluster-add-worker-review.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Click &lt;em&gt;&lt;strong&gt;Complete&lt;/strong&gt;&lt;/em&gt; to initiate provisioning new VM instances and adding them to the cluster as new workers.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/cluster-new-worker-adding.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;Verify new worker.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Navigate to the &lt;em&gt;&lt;strong&gt;Nodes&lt;/strong&gt;&lt;/em&gt; tab to verify that the new worker &lt;em&gt;&apos;new-mks-worker&apos;&lt;/em&gt; is listed in the node list.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/cluster-new-worker.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Upgrade MKS cluster&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;K8s follows a frequent release cycle, every 4 months on average, to ensure stability, innovation, and timely security updates. While upgrading a K8s cluster is crucial for maintaining security, performance, and access to the latest features, it remains a complex and demanding task. The upgrade process presents significant challenges, including managing scale and complexity, minimizing downtime risks, and handling substantial operational overhead that can span weeks and require coordination across multiple teams.&lt;/p&gt;
&lt;p&gt;From the MKS screen, navigate to a cluster and click &lt;em&gt;&lt;strong&gt;Actions&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/upgrade-cluster-menu.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Click &lt;em&gt;&lt;strong&gt;Upgrade Cluster&lt;/strong&gt;&lt;/em&gt;. The &lt;em&gt;UPGRADE CLUSTER&lt;/em&gt; screen displays the list of supported versions available for upgrade.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/upgrade-cluster.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Select version &lt;em&gt;1.32.7&lt;/em&gt; and click &lt;em&gt;&lt;strong&gt;Apply&lt;/strong&gt;&lt;/em&gt; to initiate upgrading the cluster.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/upgrading-cluster.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;After a few minutes, the cluster status updates to &lt;em&gt;&lt;strong&gt;Ok&lt;/strong&gt;&lt;/em&gt; and displays its cluster layout as &lt;em&gt;&apos;MKS Kubernetes 1.32 Cluster on Ubuntu 22.04&apos;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/upgraded-cluster.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Deploy applications&lt;/h2&gt;
&lt;p&gt;After downloading the kubeconfig file as outlined earlier, you can easily deploy applications to the MKS cluster using the &lt;em&gt;kubectl&lt;/em&gt; CLI or &lt;em&gt;Helm&lt;/em&gt;. This section includes steps to deploy applications using the built-in features available from the provisioned MKS cluster in the &lt;em&gt;Service Console&lt;/em&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Deploy applications from the cluster&apos;s &lt;em&gt;Control&lt;/em&gt; tab&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Under the &lt;em&gt;&lt;strong&gt;Control&lt;/strong&gt;&lt;/em&gt; tab of the &lt;em&gt;&apos;mks-demo&apos;&lt;/em&gt; cluster, deploy a sample application &lt;em&gt;&apos;nginx-demo&apos;&lt;/em&gt; to the namespace &lt;em&gt;&apos;app-demo&apos;&lt;/em&gt; by running the commands &lt;em&gt;&apos;kubectl create namespace app-demo&apos;&lt;/em&gt; and &lt;em&gt;&apos;kubectl create deployment nginx-demo --image=nginx --namespace app-demo&apos;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/kubectl-create-deploy.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
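&lt;p&gt;If you prefer to script this step from a workstation using the kubeconfig downloaded earlier, the two commands above translate roughly into the following Kubernetes Python client calls (a sketch, assuming the &lt;code&gt;kubernetes&lt;/code&gt; package is installed):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Sketch: create the &apos;app-demo&apos; namespace and the &apos;nginx-demo&apos; deployment programmatically.
from kubernetes import client, config

config.load_kube_config(config_file=&quot;mks-demo.kubeconfig&quot;)
core_v1 = client.CoreV1Api()
apps_v1 = client.AppsV1Api()

core_v1.create_namespace(client.V1Namespace(metadata=client.V1ObjectMeta(name=&quot;app-demo&quot;)))

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name=&quot;nginx-demo&quot;, labels={&quot;app&quot;: &quot;nginx&quot;}),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={&quot;app&quot;: &quot;nginx&quot;}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={&quot;app&quot;: &quot;nginx&quot;}),
            spec=client.V1PodSpec(containers=[client.V1Container(name=&quot;nginx&quot;, image=&quot;nginx&quot;)]),
        ),
    ),
)
apps_v1.create_namespaced_deployment(namespace=&quot;app-demo&quot;, body=deployment)
print(&quot;nginx-demo deployment created in namespace app-demo&quot;)
&lt;/code&gt;&lt;/pre&gt;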
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Deploy applications from the cluster&apos;s &lt;em&gt;Actions&lt;/em&gt; menu&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;From the &lt;em&gt;&apos;mks-demo&apos;&lt;/em&gt; cluster screen, select &lt;strong&gt;Actions&lt;/strong&gt; &gt; &lt;strong&gt;Run Workload&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/run-workload-menu.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Define the following &lt;em&gt;Deployment&lt;/em&gt; YAML in &lt;em&gt;CUSTOM SPEC&lt;/em&gt; and click &lt;em&gt;&lt;strong&gt;Apply&lt;/strong&gt;&lt;/em&gt;. It will deploy the &lt;em&gt;&apos;nginx-demo&apos;&lt;/em&gt; application to the &lt;em&gt;&apos;default&apos;&lt;/em&gt; namespace.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/run-workload.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Run the following command from the cluster&apos;s &lt;em&gt;&lt;strong&gt;Control&lt;/strong&gt;&lt;/em&gt; tab to confirm the &lt;em&gt;&apos;nginx-demo&apos;&lt;/em&gt; has been deployed successfully to the &lt;em&gt;default&lt;/em&gt; namespace.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;kubectl get all -n default
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/get-workload.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;This blog post provided a step-by-step guide to provisioning an MKS cluster using the MKS Beta feature within the HPE Private Cloud Enterprise environment. By selecting from a list of preconfigured MKS cluster layouts, you can quickly deploy an MKS cluster with your preferred cluster type and K8s version. Once provisioned, adding more workers is as simple as clicking a button in the cluster&apos;s &lt;em&gt;Actions&lt;/em&gt; menu. Upgrading the cluster to a newer K8s version follows the same streamlined process. This makes cluster administration more efficient, consistent, and user-friendly.&lt;/p&gt;
&lt;p&gt;Please keep coming back to the &lt;a href=&quot;https://developer.hpe.com/blog/&quot;&gt;HPE Developer Community blog&lt;/a&gt; to learn more about HPE Private Cloud Enterprise and get more ideas on how you can use it in your everyday operations.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE GreenLake webhooks with Splunk, hybrid observability, agentic AI & more!]]></title><link>https://developer.hpe.com/2025-aug-05/</link><guid isPermaLink="false">https://developer.hpe.com/2025-aug-05/</guid><pubDate>Tue, 05 Aug 2025 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[How to configure iSUT and AMS for vLCM-Based firmware updates on HPE Gen12 servers (High Security Mode)]]></title><description><![CDATA[HPE Gen12 servers introduce enhanced security by supporting only High Security modes (SecureStandard, CNSA, FIPS). This impacts how you…]]></description><link>https://developer.hpe.com/configure-isut-and-ams-for-vlcm-based-firmware-updates-on-gen-12-servers/</link><guid isPermaLink="false">https://developer.hpe.com/configure-isut-and-ams-for-vlcm-based-firmware-updates-on-gen-12-servers/</guid><pubDate>Mon, 04 Aug 2025 10:04:54 GMT</pubDate><content:encoded>&lt;style&gt; li { font-size: 27px; line-height: 33px; max-width: none; } &lt;/style&gt;
&lt;p&gt;HPE Gen12 servers introduce enhanced security by supporting only High Security modes (SecureStandard, CNSA, FIPS). This impacts how you configure Intelligent System Update Tool &lt;strong&gt;(iSUT)&lt;/strong&gt; and Agentless Management Service &lt;strong&gt;(AMS)&lt;/strong&gt; for vSphere Lifecycle Manager &lt;strong&gt;(vLCM)&lt;/strong&gt; based firmware updates.&lt;/p&gt;
&lt;p&gt;Unlike previous generations, configuration through the HPE OneView for VMware vCenter &lt;strong&gt;(OV4VC)&lt;/strong&gt; and HPE Compute Ops Management plug-in for VMware vCenter &lt;strong&gt;(COM4VC)&lt;/strong&gt; vLCM Pre-Check page is not available in these modes, as iLO credentials are now required. Instead, you must manually configure AMS and iSUT by creating an application account and providing valid HPE iLO credentials.&lt;/p&gt;
&lt;p&gt;In this blog post, I’ll show you how to configure iSUT and AMS to enable vLCM-based firmware updates on HPE Gen12 servers.&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;HPE Gen12 server with iLO 7&lt;/li&gt;
&lt;li&gt;vSphere environment with vLCM enabled&lt;/li&gt;
&lt;li&gt;iLO credentials with sufficient privileges&lt;/li&gt;
&lt;li&gt;Access to server CLI (SSH or local console)&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h2&gt;&lt;strong&gt;Step 1: Create an Application Account on iLO 7&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Application accounts are service accounts in iLO 7, used by host applications (like iSUT and AMS) to securely authenticate and communicate with iLO.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;To create an application account using CLI:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;sut appaccount create -u &amp;#x3C;ilo_username&gt; -p &amp;#x3C;ilo_password&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Alternatively, to proceed without creating an application account, provide the iLO credentials using the following CLI command:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;sut -set ilousername=&amp;#x3C;ilo_username&gt; ilopassword=&amp;#x3C;ilo_password&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;h2&gt;&lt;strong&gt;Step 2: Set iSUT Mode to AutoDeploy&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Set the iSUT mode to &lt;code&gt;AutoDeploy&lt;/code&gt; to enable automated firmware updates:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;sut -set mode=AutoDeploy
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;h2&gt;&lt;strong&gt;Step 3: Configure AMS Application Account (for VMware)&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;For VMware environments, create the AMS application account:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;/opt/amsdv/bin/amsdCli appaccount create -u &amp;#x3C;iLO_username&gt; -p &amp;#x3C;iLO_password&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;h2&gt;&lt;strong&gt;Step 4: Verify Application Account in iLO&lt;/strong&gt;&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;Open the &lt;strong&gt;iLO GUI&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Navigate to &lt;strong&gt;iLO Settings&lt;/strong&gt; &gt; &lt;strong&gt;User Management&lt;/strong&gt; &gt; &lt;strong&gt;Users&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Application Account&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Confirm the application account details are present&lt;/li&gt;
&lt;/ol&gt;
&lt;hr&gt;
&lt;h2&gt;&lt;strong&gt;Step 5: Check AMS status in iLO GUI&lt;/strong&gt;&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Ensure AMS status is reported as &lt;strong&gt;Available&lt;/strong&gt; in the iLO GUI.&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h2&gt;&lt;strong&gt;Step 6: Verify iSUT and AMS status in vSphere&lt;/strong&gt;&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;Log in to &lt;strong&gt;VMware vSphere&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Select the required &lt;strong&gt;cluster&lt;/strong&gt; and click the &lt;strong&gt;Configure&lt;/strong&gt; tab.&lt;/li&gt;
&lt;li&gt;In the left panel, go to &lt;strong&gt;Cluster &gt; Configure &gt; HPE Server Hardware&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;On the &lt;strong&gt;vLCM Pre-Check&lt;/strong&gt; panel, check the &lt;strong&gt;iSUT mode&lt;/strong&gt; and &lt;strong&gt;AMS state&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Refresh the page and confirm both statuses are &lt;strong&gt;green&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;hr&gt;
&lt;h2&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;With AMS and iSUT properly configured, you are ready to proceed with vLCM-based firmware updates on HPE Gen12 servers, including both &lt;strong&gt;ProLiant&lt;/strong&gt; and &lt;strong&gt;Synergy&lt;/strong&gt; models. This ensures secure, automated, and compliant lifecycle management in high-security environments.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;strong&gt;Tip:&lt;/strong&gt; Always refer to the latest HPE and VMware documentation for updates on security practices and supported configurations.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[The Digital Layover: What happens before you’re allowed online at the airport]]></title><description><![CDATA[Picture this: You're at Gate 23. Your flight’s delayed again and all you want is to unwind with a Netflix binge or clear out your inbox. You…]]></description><link>https://developer.hpe.com/the-digital-layover-what-happens-before-you’re-allowed-online-at-the-airport/</link><guid isPermaLink="false">https://developer.hpe.com/the-digital-layover-what-happens-before-you’re-allowed-online-at-the-airport/</guid><pubDate>Fri, 01 Aug 2025 05:35:23 GMT</pubDate><content:encoded>&lt;p&gt;Picture this: You&apos;re at Gate 23. Your flight’s delayed again and all you want is to unwind with a Netflix binge or clear out your inbox. You connect to the “Airport_Free_WiFi,” open your browser, and BANG, you get redirected to a page asking you to accept terms and conditions, enter your email, or maybe even watch a short ad. Annoying? Perhaps. Necessary? Absolutely! And sometimes, oddly enough, it just works with no login page, no interaction whatsoever. So what’s really going on behind the scenes?&lt;/p&gt;
&lt;p&gt;Welcome to the world of captive portals, the digital gatekeepers of public Wi-Fi. In this post, I&apos;m going to pull back the curtain on how airport Wi-Fi actually works, why you’re asked to “sign in” sometimes, and how some devices skip that step altogether like VIPs at a club.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://lh7-rt.googleusercontent.com/docsz/AD_4nXcunUUNAp1SO7Fyx6GtdrnVcr15KEATXWXv_LSzcxRC7j3fybLlYLo2KB7dAixFhopQpfGHqZOUFY3tugNHJOeT7A8eFJ5qB4I_8qezNq0C-nbjRXlKySRoigTfIeDwtcPYWzjN?key=_tjnUcJBKaxknXG-TWFLfw&quot; alt=&quot;Image of a person trying to connect to the Airport WiFi&quot;&gt;&lt;/p&gt;
&lt;h2&gt;What is a captive portal?&lt;/h2&gt;
&lt;p&gt;Think of a captive portal like a hotel’s front desk. You&apos;ve stepped into the building, you’re technically inside, but before you can head to your room or enjoy the amenities, you have to stop at the reception. There, you&apos;re asked to provide some details, show identification, sign a few forms, or maybe even put down a deposit. Only after completing that interaction are you given your room key and granted full access.&lt;/p&gt;
&lt;p&gt;In the same way, a captive portal is a special web page that appears automatically when you connect to a configured Wi-Fi network. While your device is physically connected to the network, your access to the internet is temporarily blocked or “captured” until you interact with the portal. The network essentially puts you in a “digital lobby”. Just like different hotels have different check-in policies depending on their brand or pricing tier, captive portals vary based on how the network is configured. Some are simple and quick, others require more steps. Common requests include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Accepting terms of service, like a digital waiver&lt;/li&gt;
&lt;li&gt;Providing your email address or flight number&lt;/li&gt;
&lt;li&gt;Entering a voucher code, boarding pass, or access key&lt;/li&gt;
&lt;li&gt;Watching a sponsored video or promotional message&lt;/li&gt;
&lt;li&gt;Paying for access, especially for higher-speed or premium tiers&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The word &quot;captive&quot; might sound harsh, but that&apos;s literally what&apos;s happening. Your web browser is temporarily being captured and redirected until you complete the required action. Once that’s done, the network releases the restrictions and lets you roam the internet freely.&lt;/p&gt;
&lt;h2&gt;What&apos;s really happening in the background? &lt;/h2&gt;
&lt;p&gt;When you connect to public Wi-Fi, the process happening behind the scenes is a little more involved than just tapping “connect.” First, your device sends out a request to join the wireless network. This request is picked up by a wireless access point, which is basically a specialized router that handles connections from lots of people at once. If everything checks out, the network lets your device in. But even though you’re now connected to the network, you’re not yet &quot;connected&quot; to the internet.&lt;/p&gt;
&lt;p&gt;Next, your device needs to get an IP address, which acts like a temporary digital “home address” for your session on the network. It requests this from the airport’s system using something called Dynamic Host Configuration Protocol (DHCP). Once your device receives this address, it’s officially on the local network, but it still can’t browse the web just yet.&lt;/p&gt;
&lt;p&gt;At this point, you might open your browser or launch an app that tries to connect to the internet, like checking your email or visiting Google. But instead of taking you directly to your destination, the network steps in and intercepts your request. &lt;strong&gt;Why?&lt;/strong&gt; Because it needs you to &quot;check in&quot; first, just like a hotel front desk &lt;strong&gt;that won&apos;t&lt;/strong&gt; let you use the pool until you&apos;ve signed in.&lt;/p&gt;
&lt;p&gt;So, instead of loading the site you asked for, the network reroutes you to the captive portal configured. This is where you might be asked to accept terms and conditions, provide your email, or enter a flight number. Until you complete this step, the network won’t let your traffic reach the wider internet. Once you’ve fulfilled the requirement, the network clears you for full access.&lt;/p&gt;
&lt;p&gt;All of this is quietly managed behind the scenes by something called an access controller. You can think of it like a digital traffic cop. It monitors all the devices trying to connect and decides who gets through immediately, who needs to stop at the portal, and who might not be allowed on at all. It helps keep the network secure, fair, and manageable, especially in busy places like airports where thousands of people are trying to get online at the same time.&lt;/p&gt;
&lt;h2&gt;The VIP lane: MAC address authentication caching&lt;/h2&gt;
&lt;p&gt;Every single device that can connect to a network, your smartphone, laptop, tablet, or even a smartwatch, has a unique identifier called a Media Access Control (MAC) address. This isn&apos;t like your phone number or email; it&apos;s a permanent, hardware-level address, often described as your device&apos;s digital fingerprint. Unlike an IP address, which can change frequently, your MAC address is hard-coded into your device&apos;s network card and stays the same. Airport networks, or any public Wi-Fi network for that matter, can leverage MAC addresses to streamline the connection process for specific users or devices.&lt;/p&gt;
&lt;p&gt;Here&apos;s how it generally works: Network administrators maintain a pre-approved list of trusted MAC addresses. When your device tries to connect to the Wi-Fi network, the access controller immediately checks its MAC address. If your device&apos;s MAC address is on that list, perhaps because you&apos;ve successfully logged in during a previous visit or because you&apos;re authorized airport staff, the network instantly recognizes you. This means you skip the captive portal entirely, gaining immediate and seamless internet access without needing to re-enter details or watch ads. This clever use of MAC addresses makes for a much smoother and faster experience, especially for frequent travelers or those whose devices have been previously authenticated on the network.&lt;/p&gt;
&lt;h2&gt;Step-by-Step: What happens when you connect to airport Wi-Fi&lt;/h2&gt;
&lt;p&gt;Whether you&apos;re leveraging a pre-registered device or connecting for the first time, your device undergoes a series of well-defined steps. Here&apos;s how it goes:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 1: 802.11 association and IP assignment&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;When you choose &quot;Airport_Free_WiFi,&quot; your device reaches out to the airport&apos;s Wi-Fi network. They complete a quick DHCP handshake, and then your device is assigned a temporary IP address. At this point, you&apos;re connected to their internal network, but an access controller is still preventing you from reaching the wider internet.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 2: Access controller decision point – MAC address authentication vs. captive portal redirection&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Upon IP address assignment, the access controller steps in as a policy enforcer. It immediately intercepts all initial traffic coming from devices that haven&apos;t yet been authenticated. The access controller then performs a MAC Address Lookup: it inspects your device&apos;s unique MAC address, which it learned when your device first connected. This MAC address is checked against a list, which is a database of pre-approved or authorized MAC addresses, often stored internally. If your device&apos;s MAC address is recognized on this list, the access controller instantly applies a policy that grants direct internet access for your device&apos;s specific MAC and IP address combination. However, if the MAC address is unknown or not authorized, the access controller then forces a redirection of your device&apos;s web traffic to the captive portal.&lt;/p&gt;
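&lt;p&gt;A toy Python sketch of that decision logic (purely illustrative: the MAC address, allowlist, and portal address below are made up, and real controllers implement this inside the network operating system) looks like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Purely illustrative: how an access controller might choose between the two paths.
ALLOWED_MACS = {&quot;aa:bb:cc:dd:ee:ff&quot;}       # previously authenticated devices or staff
CAPTIVE_PORTAL_IP = &quot;10.0.0.1&quot;             # hypothetical portal address

def handle_new_client(mac: str, requested_host: str) -&gt; str:
    if mac.lower() in ALLOWED_MACS:
        return f&quot;forward {requested_host} traffic directly to the internet&quot;
    return f&quot;redirect {requested_host} to the captive portal at {CAPTIVE_PORTAL_IP}&quot;

print(handle_new_client(&quot;AA:BB:CC:DD:EE:FF&quot;, &quot;www.google.com&quot;))  # known device: straight through
print(handle_new_client(&quot;11:22:33:44:55:66&quot;, &quot;www.google.com&quot;))  # unknown device: portal first
&lt;/code&gt;&lt;/pre&gt;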
&lt;p&gt;&lt;strong&gt;Step 3A: MAC address authentication flow&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;If your device&apos;s unique identifier (MAC address) is on the network&apos;s special &quot;approved&quot; list, the access controller immediately opens the gate, allowing your device&apos;s internet requests to go straight out to the internet without any detours. This means when you type in a website address, the network doesn&apos;t mess with it; it goes directly to find that website. So, all your normal internet activities like browsing, using apps, and checking secure sites work instantly and without any hassle.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 3B: Captive portal flow&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;When a device that hasn’t yet been authenticated tries to access a website, say &lt;em&gt;&lt;a href=&quot;http://www.google.com&quot;&gt;www.google.com&lt;/a&gt;&lt;/em&gt;, the access controller intervenes using a technique known as DNS interception. Instead of resolving the domain to the real IP address of Google, the access controller returns the IP address of the captive portal, so the device ends up sending its request to the portal instead. Once the device sends an HTTP or HTTPS request to that spoofed address, the captive portal responds with an HTTP 302 redirect, a standard mechanism used to point browsers to a different URL. This redirect guides the browser away from the originally requested site and instead delivers the user to the captive portal.&lt;/p&gt;
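&lt;p&gt;The portal side of that exchange is essentially a web server that answers every request with a redirect. A few lines of Python illustrate the idea (the address is hypothetical and this is not how production portals are built):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Toy illustration: answer every request with an HTTP 302 pointing at the login page.
from http.server import BaseHTTPRequestHandler, HTTPServer

PORTAL_LOGIN = &quot;http://10.0.0.1/login&quot;   # hypothetical captive-portal login URL

class RedirectEverything(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(302)                     # standard redirect status code
        self.send_header(&quot;Location&quot;, PORTAL_LOGIN)  # the browser follows this to the portal
        self.end_headers()

if __name__ == &quot;__main__&quot;:
    HTTPServer((&quot;0.0.0.0&quot;, 8080), RedirectEverything).serve_forever()
&lt;/code&gt;&lt;/pre&gt;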
&lt;p&gt;&lt;strong&gt;Step 4: Authentication/Terms agreement via captive portal&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The user then interacts with the captive portal, a process that typically involves submitting information like an email address, flight number, or an acceptance of terms through an HTTP POST request to the captive portal server. This server performs backend validation, checking the provided data against its own internal databases, external authentication servers or even simply logging the acceptance of terms. Upon successful validation, the captive portal server sends a request to the access controller, saying &quot;my job is done here&quot;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 5: Authorization and internet access granted&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Once the user has completed what the captive portal asked for, the access controller gets the message that you&apos;re good to go. It then quickly changes its internal rules, essentially lifting the blockade it had on your device. This opens up the internet for you, allowing all your web requests and other online activities to flow freely out to the rest of the internet without any further restrictions.&lt;/p&gt;
&lt;h2&gt;How devices detect captive portals automatically&lt;/h2&gt;
&lt;p&gt;Ever notice how your phone automatically shows a login screen the moment you connect to a network? This automatic detection is achieved by what&apos;s commonly known as a captive portal detection mechanism. Your device isn&apos;t just passively waiting for a redirect; it&apos;s actively checking to see if it&apos;s being held behind a portal. Here&apos;s how it generally works:&lt;/p&gt;
&lt;p&gt;Every modern operating system (OS), be it iOS, Android, Windows, macOS, or Linux, has a designated trusted URL that it tries to reach shortly after connecting to a new Wi-Fi network. These URLs are specifically designed to return a very simple, predictable response when they are successfully accessed.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;For Apple devices, the operating system attempts to load &lt;em&gt;&lt;a href=&quot;http://captive.apple.com/hotspot-detect.html&quot;&gt;http://captive.apple.com/hotspot-detect.html&lt;/a&gt;.&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;Android devices use a similar mechanism, often checking a URL from Google&apos;s servers like &lt;em&gt;&lt;a href=&quot;http://connectivitycheck.gstatic.com/generate_204&quot;&gt;http://connectivitycheck.gstatic.com/generate_204&lt;/a&gt;.&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;Windows devices try to reach this URL: &lt;em&gt;&lt;a href=&quot;http://www.msftconnecttest.com/connecttest.txt&quot;&gt;http://www.msftconnecttest.com/connecttest.txt&lt;/a&gt;.&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The key here is what happens next. If your device successfully loads this dedicated URL and gets the expected response (HTTP 200 OK), the OS concludes that there&apos;s no captive portal blocking access, and it allows normal internet traffic to flow.&lt;/p&gt;
&lt;p&gt;However, if your device tries to reach those URLs and it doesn’t get the HTTP 200 OK response, it probably means the access controller has intercepted the request and redirected it to the captive portal&apos;s IP address. The OS immediately understands that it&apos;s behind a captive portal. In this scenario, your operating system, acting like a helpful guide, automatically opens a mini-browser window or a notification that directly leads you to the captive portal&apos;s login page. This pre-emptive action saves you the hassle of opening a browser yourself and trying to navigate to a website only to be redirected. It&apos;s a seamless user experience, making it feel as if your device has a built-in travel agent that instinctively knows where to take you when you need to &quot;check in&quot; to the network.&lt;/p&gt;
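&lt;p&gt;You can approximate this probe yourself with a few lines of Python. It is a rough imitation of what the OS does, using Android&apos;s public probe URL, not the actual OS implementation:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Rough approximation of an OS captive-portal probe.
import urllib.request

PROBE_URL = &quot;http://connectivitycheck.gstatic.com/generate_204&quot;  # Android&apos;s probe endpoint

with urllib.request.urlopen(PROBE_URL, timeout=5) as response:
    if response.status == 204 and response.geturl() == PROBE_URL:
        print(&quot;Got the expected 204: open internet, no captive portal.&quot;)
    else:
        # A portal typically redirects the probe and serves its own login page instead.
        print(f&quot;Status {response.status} from {response.geturl()}: likely behind a captive portal.&quot;)
&lt;/code&gt;&lt;/pre&gt;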
&lt;h2&gt;Why do airports use these systems?&lt;/h2&gt;
&lt;p&gt;It might seem like a hassle, but there are smart reasons behind these digital gatekeepers:&lt;/p&gt;
&lt;h3&gt;1. Legal compliance and liability protection&lt;/h3&gt;
&lt;p&gt;Requiring users to agree to terms of service helps limit the airport’s liability for how the network is used. It sets boundaries and expectations just like a waiver at a gym.&lt;/p&gt;
&lt;h3&gt;2. Bandwidth management&lt;/h3&gt;
&lt;p&gt;Airports handle thousands of simultaneous users. Captive portals allow administrators to throttle bandwidth, enforce usage limits, or offer paid tiers, ensuring that everyone gets a fair shot at connectivity.&lt;/p&gt;
&lt;h3&gt;3. Revenue generation&lt;/h3&gt;
&lt;p&gt;Captive portals often serve up ads or upsell faster internet access. It’s one way airports offset the cost of offering Wi-Fi for free.&lt;/p&gt;
&lt;h3&gt;4. Security and monitoring&lt;/h3&gt;
&lt;p&gt;By requiring logins or recognizing MAC addresses, airports can keep an eye on network usage and respond more quickly to unusual behavior or security threats.&lt;/p&gt;
&lt;p&gt;So the next time you’re stuck at an airport, waiting for your boarding call and connecting to public Wi-Fi, remember this invisible dance of digital infrastructure working behind the scenes. Whether you’re redirected through a captive portal or glide through with MAC authentication, it’s not just a matter of convenience, it’s a carefully engineered system balancing security and usability. These systems ensure millions of travelers each day can access the internet in a way that’s fast, safe, and fair. And now that you know what’s really happening behind the login screen, you might just appreciate that airport Wi-Fi a little more, buffering and all!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[7 Questions for Tiago Carneiro and Guillaume Helbecque: Combinatorial Optimization in Chapel]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/7-questions-for-tiago-carneiro-and-guillaume-helbecque-combinatorial-optimization-in-chapel/</link><guid isPermaLink="false">https://developer.hpe.com/7-questions-for-tiago-carneiro-and-guillaume-helbecque-combinatorial-optimization-in-chapel/</guid><pubDate>Thu, 31 Jul 2025 01:59:08 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[10 Myths About Scalable Parallel Programming Languages (Redux), Part 4: Syntax Matters]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/10-myths-about-scalable-parallel-programming-languages-redux-part-4-syntax-matters/</link><guid isPermaLink="false">https://developer.hpe.com/10-myths-about-scalable-parallel-programming-languages-redux-part-4-syntax-matters/</guid><pubDate>Wed, 23 Jul 2025 17:44:42 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Part 10: Agentic AI Serving — Hosting agents like LLMs with AGNO Playground]]></title><description><![CDATA[We’re all familiar with model serving — deploying LLMs like GPT, LLaMA, or Mistral behind APIs. But what if you could serve not just a model…]]></description><link>https://developer.hpe.com/part-10-agentic-ai-serving-—-hosting-agents-like-llms-with-agno-playground/</link><guid isPermaLink="false">https://developer.hpe.com/part-10-agentic-ai-serving-—-hosting-agents-like-llms-with-agno-playground/</guid><pubDate>Mon, 21 Jul 2025 11:54:31 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;p&gt;We’re all familiar with model serving — deploying LLMs like GPT, LLaMA, or Mistral behind APIs. But what if you could serve not just a model — but a &lt;strong&gt;complete AI agent with memory, tools, goals, and personality&lt;/strong&gt;?&lt;/p&gt;
&lt;p&gt;This is the essence of Agentic AI Serving using the AGNO Framework — a next-gen architecture where agents are hosted, monitored, and interacted with like full applications.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;You can learn more about it by reading &lt;a href=&quot;https://dineshr1493.medium.com/all-you-need-to-know-about-the-evolution-of-generative-ai-to-agentic-ai-part-10-agentic-ai-33a7ff1dd010&quot;&gt;my post on Medium&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h4&gt;This guide covers:&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;What Agentic AI Serving is&lt;/li&gt;
&lt;li&gt;How to serve agents via the AGNO Playground&lt;/li&gt;
&lt;li&gt;Hosting single or multi-agent systems&lt;/li&gt;
&lt;li&gt;Capturing session data for compliance and observability&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;What is Agentic AI Serving?&lt;/h2&gt;
&lt;h3&gt;In traditional GenAI:&lt;/h3&gt;
&lt;p&gt;LLMs are stateless tools. You orchestrate logic around them.&lt;/p&gt;
&lt;h3&gt;In Agentic AI:&lt;/h3&gt;
&lt;p&gt;The agent contains the logic — tools, reasoning, goals, and memory — and can be served as an interactive, stateful application.&lt;/p&gt;
&lt;h3&gt;Serving means:&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Hosting the agent as an interactive app&lt;/li&gt;
&lt;li&gt;Enabling real-time communication and control&lt;/li&gt;
&lt;li&gt;Monitoring its behavior, tools, and outputs&lt;/li&gt;
&lt;li&gt;Running agents like microservices — locally or in the cloud&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;AGNO enables:&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Agent-as-a-Service deployment&lt;/li&gt;
&lt;li&gt;A Browser-based chat interface&lt;/li&gt;
&lt;li&gt;Support for OpenAI, Claude, Ollama, DeepSeek, and more&lt;/li&gt;
&lt;li&gt;Full session monitoring and conversation logging&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Getting practical: Serving your first agent&lt;/h2&gt;
&lt;h3&gt;1. Set up AGNO Playground&lt;/h3&gt;
&lt;p&gt;Ensure you&apos;re using an AGNO-compatible environment such as autogen_py_3_11_11, and that Ollama or OpenAI is accessible.&lt;/p&gt;
&lt;h3&gt;2. Sample playground.py script&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import os
from agno.agent import Agent
from agno.models.ollama import Ollama
from agno.playground import Playground, serve_playground_app

# Set environment variables
os.environ[&apos;AGNO_API_KEY&apos;] = &apos;ag-**************************-U&apos;  # redacted key
os.environ[&apos;AGNO_MONITOR&apos;] = &apos;true&apos;  # Enable session tracking
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Define a news reporter agent&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;agent = Agent(
    model=Ollama(id=&quot;llama3.2&quot;, provider=&quot;Ollama&quot;),
    description=&quot;You are an enthusiastic news reporter with a flair for storytelling!&quot;,
    markdown=True,
)
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;(Optional) Test agent locally before serving&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;agent.print_response(&quot;Tell me about a breaking news story from New York.&quot;, stream=True)
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Create playground app&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;app = Playground(agents=agent).get_app()
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Launch local playground&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;if __name__ == &quot;__main__&quot;:
    serve_playground_app(&quot;playground:app&quot;, reload=True)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Access your hosted agent&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# After running:
#   python playground.py
# open the hosted chat UI in your browser:
#   https://app.agno.com/playground/chat?endpoint=localhost:7777&amp;#x26;agent=&amp;#x3C;your-agent-id&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You now have a fully interactive Agentic AI service — complete with memory, tools, and autonomy — accessible via browser.&lt;/p&gt;
&lt;h3&gt;Bonus: Team agent serving&lt;/h3&gt;
&lt;p&gt;Want to host multiple agents with different skills, roles, or tools?&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt; app = Playground(agents=team.members).get_app() 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Each agent maintains:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Its own LLM backend (OpenAI, Ollama, Claude, etc.)&lt;/li&gt;
&lt;li&gt;Independent tools and reasoning logic&lt;/li&gt;
&lt;li&gt;A unique personality or domain focus&lt;/li&gt;
&lt;li&gt;Access to shared memory or state if configured&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Perfect for multi-agent collaboration, delegation, or workflows.&lt;/p&gt;
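&lt;p&gt;For context, here is a minimal sketch of what that &lt;code&gt;team&lt;/code&gt; object could look like before it is handed to the Playground. The agent names, roles, and the use of &lt;code&gt;Team&lt;/code&gt; simply follow the patterns shown elsewhere in this series; treat it as an illustration rather than a prescribed setup, and adapt the models and roles to your own use case.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from agno.agent import Agent
from agno.models.ollama import Ollama
from agno.playground import Playground, serve_playground_app
from agno.team.team import Team

# Two illustrative agents with different roles
reporter = Agent(
    name=&quot;Reporter&quot;,
    model=Ollama(id=&quot;llama3.2&quot;, provider=&quot;Ollama&quot;),
    description=&quot;You are an enthusiastic news reporter.&quot;,
    markdown=True,
)
fact_checker = Agent(
    name=&quot;Fact Checker&quot;,
    model=Ollama(id=&quot;llama3.2&quot;, provider=&quot;Ollama&quot;),
    description=&quot;You verify claims and cite your sources.&quot;,
    markdown=True,
)

# Group them into a team, then serve every member through one Playground app
team = Team(name=&quot;Newsroom&quot;, mode=&quot;coordinate&quot;, members=[reporter, fact_checker])
app = Playground(agents=team.members).get_app()

if __name__ == &quot;__main__&quot;:
    # assumes this file is saved as playground.py
    serve_playground_app(&quot;playground:app&quot;, reload=True)
&lt;/code&gt;&lt;/pre&gt;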
&lt;h3&gt;Session logging &amp;#x26; monitoring&lt;/h3&gt;
&lt;h4&gt;AGNO includes built-in monitoring:&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;os.environ[&apos;AGNO_MONITOR&apos;] = &apos;true&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This activates:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Session logs&lt;/li&gt;
&lt;li&gt;Tool call traces&lt;/li&gt;
&lt;li&gt;Execution monitoring&lt;/li&gt;
&lt;li&gt;Replay/debug capabilities&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These capabilities are essential for enterprise use cases requiring reproducibility, auditing, or compliance.&lt;/p&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;As I&apos;ve illustrated in this post, the AGNO Playground offers multiple tools to build and serve AI agents with ease.&lt;/p&gt;
&lt;table border=&quot;1&quot; cellpadding=&quot;8&quot; cellspacing=&quot;0&quot; style=&quot;border-collapse: collapse; width: 100%;&quot;&gt;
  &lt;thead style=&quot;background-color:#f2f2f2&quot;&gt;
    &lt;tr&gt;
      &lt;th&gt;Component&lt;/th&gt;
      &lt;th&gt;Purpose&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;code&gt;Playground()&lt;/code&gt;&lt;/td&gt;
      &lt;td&gt;Initializes the app interface&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;code&gt;agents=agent&lt;/code&gt;&lt;/td&gt;
      &lt;td&gt;Serve a single agent instance&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;code&gt;agents=team.members&lt;/code&gt;&lt;/td&gt;
      &lt;td&gt;Serve a multi-agent team&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;code&gt;AGNO_MONITOR=true&lt;/code&gt;&lt;/td&gt;
      &lt;td&gt;Enables observability and logs&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;code&gt;AGNO_API_KEY&lt;/code&gt;&lt;/td&gt;
      &lt;td&gt;Authenticates with AGNO cloud if needed&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;code&gt;serve_playground_app()&lt;/code&gt;&lt;/td&gt;
      &lt;td&gt;Boots the local or hosted serving app&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;Pro tips&lt;/h2&gt;
&lt;p&gt;You can deploy AGNO agents in several ways (see the sketch after this list):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Locally, using models from Ollama&lt;/li&gt;
&lt;li&gt;Remotely, using cloud LLM APIs&lt;/li&gt;
&lt;li&gt;In production, with full-stack hosting&lt;/li&gt;
&lt;li&gt;As teams, where each agent plays a defined role&lt;/li&gt;
&lt;/ul&gt;
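&lt;p&gt;Switching between these deployment options is mostly a matter of swapping the model backend. The sketch below reuses the &lt;code&gt;Ollama&lt;/code&gt; and &lt;code&gt;OpenAIChat&lt;/code&gt; model classes used throughout this series; the agent description is just the running example from above.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from agno.agent import Agent
from agno.models.ollama import Ollama          # local deployment
from agno.models.openai import OpenAIChat      # remote / cloud deployment

# Local: the model runs on your own machine via Ollama
local_agent = Agent(
    model=Ollama(id=&quot;llama3.2&quot;, provider=&quot;Ollama&quot;),
    description=&quot;You are an enthusiastic news reporter with a flair for storytelling!&quot;,
    markdown=True,
)

# Remote: the same agent definition, only the model backend changes
cloud_agent = Agent(
    model=OpenAIChat(id=&quot;gpt-4o&quot;),
    description=&quot;You are an enthusiastic news reporter with a flair for storytelling!&quot;,
    markdown=True,
)
&lt;/code&gt;&lt;/pre&gt;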
&lt;h2&gt;Final thoughts&lt;/h2&gt;
&lt;p&gt;Agentic AI Serving is the bridge between prompt engineering and software deployment. You’re not just sending prompts — you’re hosting intelligent, tool-using entities with goals and context.&lt;/p&gt;
&lt;p&gt;When agents are deployed, monitored, and refined, GenAI evolves into true AI Systems.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Part 9 : Agentic AI with AGNO, Ollama, and local LLaMA3]]></title><description><![CDATA[As Agentic AI evolves, the need for local, private, and flexible inference becomes critical. Frameworks like AGNO provide orchestration, but…]]></description><link>https://developer.hpe.com/post-9-agentic-ai-with-agno-ollama-and-local-llama3/</link><guid isPermaLink="false">https://developer.hpe.com/post-9-agentic-ai-with-agno-ollama-and-local-llama3/</guid><pubDate>Mon, 21 Jul 2025 11:12:57 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;p&gt;As Agentic AI evolves, the need for local, private, and flexible inference becomes critical. Frameworks like AGNO provide orchestration, but the ability to plug in LLMs running locally is what sets the next-gen agentic stack apart.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;You can learn more about it by reading &lt;a href=&quot;https://dineshr1493.medium.com/all-you-need-to-know-about-the-evolution-of-generative-ai-to-agentic-ai-part-9-agentic-ai-agno-74d74cd0d9f3&quot;&gt;my post on Medium&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;In this walkthrough, I explore:&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;What is Ollama and how it powers local LLMs&lt;/li&gt;
&lt;li&gt;Running LLaMA 3.2 locally with minimal setup&lt;/li&gt;
&lt;li&gt;Connecting Ollama with AGNO framework&lt;/li&gt;
&lt;li&gt;Building an offline agent pipeline using only Python&lt;/li&gt;
&lt;li&gt;How this empowers fully private, offline, and customizable AI deployments&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;AGNO Meets local LLMs via Ollama&lt;/h2&gt;
&lt;p&gt;One of AGNO’s core strengths is modularity — it can interface with any LLM provider, including:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;OpenAI (GPT-4o, GPT-3.5)&lt;/li&gt;
&lt;li&gt;Claude (3.5 Sonnet, Haiku)&lt;/li&gt;
&lt;li&gt;DeepSeek&lt;/li&gt;
&lt;li&gt;Mistral&lt;/li&gt;
&lt;li&gt;Ollama (local)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This makes it possible to define agents using LLaMA 3, Mistral, or Gemma without cloud dependencies — while maintaining the full Agentic AI loop: &lt;strong&gt;Think → Plan → Act → Reflect&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;What is Ollama?&lt;/h2&gt;
&lt;p&gt;Ollama is a local inference server that can run transformer models on a CPU or GPU. It supports major open-source LLMs like LLaMA, Mistral, DeepSeek, QWEN, and Gemma.&lt;/p&gt;
&lt;p&gt;Once running, it exposes a REST API at &lt;code&gt;http://localhost:11434&lt;/code&gt;, compatible with OpenAI-style inference.&lt;/p&gt;
&lt;p&gt;You can find more information on this at Ollama&apos;s official site: &lt;a href=&quot;https://ollama.com&quot;&gt;ollama.com&lt;/a&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Install and run Ollama

# Mac
brew install ollama

# Linux
curl -fsSL https://ollama.com/install.sh | sh

# Pull and run the model used throughout this post
ollama run llama3.2
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Running the code above downloads and launches LLaMA 3.2 locally. Once active, Ollama exposes endpoints that work with both synchronous and streaming chat.&lt;/p&gt;
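&lt;p&gt;Before wiring Ollama into AGNO, it can help to confirm the server is reachable by calling the REST endpoint directly. This minimal check uses the standard &lt;code&gt;/api/generate&lt;/code&gt; endpoint with streaming disabled; it assumes the &lt;code&gt;requests&lt;/code&gt; package is installed and that the llama3.2 model has already been pulled.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import requests

# Quick sanity check against the local Ollama server
resp = requests.post(
    &quot;http://localhost:11434/api/generate&quot;,
    json={
        &quot;model&quot;: &quot;llama3.2&quot;,
        &quot;prompt&quot;: &quot;Reply with the single word: ready&quot;,
        &quot;stream&quot;: False,   # return one JSON object instead of a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()[&quot;response&quot;])   # the generated text
&lt;/code&gt;&lt;/pre&gt;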
&lt;h2&gt;AGNO + Ollama: An example of how to use them together to build an agent&lt;/h2&gt;
&lt;p&gt;Let’s build a storytelling agent using AGNO connected to Ollama.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from agno.agent import Agent
from agno.models.ollama import Ollama

agent = Agent(
    model=Ollama(id=&quot;llama3.2&quot;, provider=&quot;Ollama&quot;),
    description=&quot;You are an enthusiastic news reporter with a flair for storytelling!&quot;,
    markdown=True,
)

agent.print_response(&quot;Tell me about a breaking news story from New York.&quot;, stream=True)
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Parameters&lt;/h2&gt;
&lt;table border=&quot;1&quot; cellpadding=&quot;8&quot; cellspacing=&quot;0&quot; style=&quot;border-collapse: collapse; width: 100%;&quot;&gt;
  &lt;thead style=&quot;background-color:#f2f2f2&quot;&gt;
    &lt;tr&gt;
      &lt;th&gt;Parameter&lt;/th&gt;
      &lt;th&gt;Description&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;code&gt;model=Ollama(...)&lt;/code&gt;&lt;/td&gt;
      &lt;td&gt;Connects to a local LLaMA model via Ollama&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;code&gt;description&lt;/code&gt;&lt;/td&gt;
      &lt;td&gt;Agent personality and behavior&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;code&gt;markdown=True&lt;/code&gt;&lt;/td&gt;
      &lt;td&gt;Outputs markdown-formatted content&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;No framework? No problem.&lt;/h2&gt;
&lt;p&gt;Frameworks like AGNO offer orchestration, but what if you&apos;re running in:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Air-gapped networks&lt;/li&gt;
&lt;li&gt;Lightweight environments&lt;/li&gt;
&lt;li&gt;Custom experimental setups&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Here’s how to build a raw agent pipeline using just:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Ollama for LLM&lt;/li&gt;
&lt;li&gt;DuckDuckGo search for tool use&lt;/li&gt;
&lt;li&gt;Custom prompt logic&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Full Python agent pipeline&lt;/h2&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from ollama import Client
from duckduckgo_search import DDGS

ollama_client = Client(host=&apos;http://localhost:11434&apos;)

# Tool: Web search via DuckDuckGo
def duckduckgo_search(query, max_results=5):
    with DDGS() as ddgs:
        return [r for r in ddgs.text(query, region=&quot;wt-wt&quot;, safesearch=&quot;off&quot;, max_results=max_results)]
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Pipeline logic&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def agent_pipeline(user_input):
    # Think: decide whether the question needs fresh information from the web
    if &quot;what&quot; in user_input.lower() or &quot;happening&quot; in user_input.lower():
        print(&quot;[Tool] Searching DuckDuckGo...&quot;)
        search_results = duckduckgo_search(user_input)
        summary = &quot;\n&quot;.join([f&quot;- {r[&apos;title&apos;]}: {r[&apos;href&apos;]}&quot; for r in search_results])
    else:
        summary = &quot;&quot;

    # Act: build the prompt, with or without search context
    if summary:
        system_prompt = (
            &quot;You are an assistant. I found these recent search results:\n&quot;
            f&quot;{summary}\n&quot;
            &quot;Now generate a helpful answer for the user question below.\n&quot;
            f&quot;User question: {user_input}&quot;
        )
    else:
        system_prompt = f&quot;You are a helpful assistant. Answer this: {user_input}&quot;

    # Respond: let the local model generate the final answer
    response = ollama_client.chat(
        model=&apos;llama3.2&apos;,
        messages=[{&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: system_prompt}]
    )

    print(&quot;\n[Agent Response]:&quot;)
    print(response[&apos;message&apos;][&apos;content&apos;])
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Example outputs&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;agent_pipeline(&quot;What&apos;s happening in New York?&quot;)

agent_pipeline(&quot;Tell me a joke.&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Breakdown of components&lt;/h2&gt;
&lt;table border=&quot;1&quot; cellpadding=&quot;8&quot; cellspacing=&quot;0&quot; style=&quot;border-collapse: collapse; width: 100%;&quot;&gt;
  &lt;thead style=&quot;background-color:#f2f2f2&quot;&gt;
    &lt;tr&gt;
      &lt;th&gt;Component&lt;/th&gt;
      &lt;th&gt;Role&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;code&gt;LLM Backend&lt;/code&gt;&lt;/td&gt;
      &lt;td&gt;Ollama running LLaMA 3.2 locally&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;code&gt;Tool&lt;/code&gt;&lt;/td&gt;
      &lt;td&gt;DuckDuckGo search for real-time info&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;code&gt;Orchestration&lt;/code&gt;&lt;/td&gt;
      &lt;td&gt;Custom Python logic&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;code&gt;Agentic Behavior&lt;/code&gt;&lt;/td&gt;
      &lt;td&gt;Manual Think → Act → Reflect implementation&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;This pattern gives you full control with zero cloud dependency — and forms the base for private AI workflows.&lt;/p&gt;
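&lt;p&gt;To make the pipeline something you can actually converse with, wrap &lt;code&gt;agent_pipeline&lt;/code&gt; in a small read-eval loop. This is plain Python around the function defined above; nothing framework-specific is assumed.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Simple interactive loop around the pipeline defined above
if __name__ == &quot;__main__&quot;:
    print(&quot;Local agent ready. Type &apos;exit&apos; to quit.&quot;)
    while True:
        user_input = input(&quot;\nYou: &quot;).strip()
        if user_input.lower() in {&quot;exit&quot;, &quot;quit&quot;}:
            break
        if user_input:
            agent_pipeline(user_input)
&lt;/code&gt;&lt;/pre&gt;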
&lt;h2&gt;Pro tip&lt;/h2&gt;
&lt;p&gt;Looking for multi-agent orchestration with Ollama?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Check out Langmanus — a framework for:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Graph-based agent orchestration&lt;/li&gt;
&lt;li&gt;Streaming LLM outputs&lt;/li&gt;
&lt;li&gt;Agent-task dependencies and coordination&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Final thoughts&lt;/h2&gt;
&lt;p&gt;By combining AGNO, Ollama, and LLaMA3, developers can build fully private Agentic AI systems that:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Work offline&lt;/li&gt;
&lt;li&gt;Are modular and extensible&lt;/li&gt;
&lt;li&gt;Use both tools and models interchangeably&lt;/li&gt;
&lt;li&gt;Scale from simple scripts to complex workflows&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This stack represents the future of &lt;strong&gt;agent design — grounded, capable, and locally operable.&lt;/strong&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Part 8: Agentic AI and Qdrant: Building semantic memory with MCP protocol]]></title><description><![CDATA[As Agentic AI systems evolve from reactive language models into structured thinkers, a new challenge emerges: how do we give these agents…]]></description><link>https://developer.hpe.com/part-8-agentic-ai-and-qdrant-building-semantic-memory-with-mcp-protocol/</link><guid isPermaLink="false">https://developer.hpe.com/part-8-agentic-ai-and-qdrant-building-semantic-memory-with-mcp-protocol/</guid><pubDate>Mon, 21 Jul 2025 10:50:25 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;p&gt;As &lt;strong&gt;Agentic AI&lt;/strong&gt; systems evolve from reactive language models into structured thinkers, a new challenge emerges: &lt;strong&gt;how do we give these agents memory?&lt;/strong&gt; Not just basic logs or static files, but real, &lt;strong&gt;searchable memory&lt;/strong&gt; that understands and adapts to context over time.&lt;/p&gt;
&lt;p&gt;This is where tools like &lt;strong&gt;Qdrant&lt;/strong&gt; and the &lt;strong&gt;Model Context Protocol (MCP)&lt;/strong&gt; come in—a modular pairing that brings semantic search and long-term knowledge storage into agent workflows. Together, they enable agents to not only recall relevant information but to reason across past experiences, making &lt;strong&gt;Agentic AI&lt;/strong&gt; systems more intelligent, adaptive, and human-like in their decision-making.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://dineshr1493.medium.com/all-you-need-to-know-about-the-evolution-of-generative-ai-to-agentic-ai-part-8-agentic-ai-mcp-281567e26838&quot;&gt;Inspired by my Medium post&lt;/a&gt;, this article explores how &lt;strong&gt;MCP&lt;/strong&gt;, the &lt;strong&gt;Model Context Protocol&lt;/strong&gt;—a kind of connective tissue between LLMs and external tools or data sources—&lt;strong&gt;standardizes interactions&lt;/strong&gt; between intelligent agents and vector databases like &lt;strong&gt;Qdrant&lt;/strong&gt;. By enabling seamless storage and retrieval of embeddings, agents can now “remember” useful information and leverage it in future reasoning.&lt;/p&gt;
&lt;p&gt;Let’s walk through the full architecture and code implementation of this cutting-edge combination.&lt;/p&gt;
&lt;h2&gt;LLMs + MCP + Database = Thoughtful Agentic AI&lt;/h2&gt;
&lt;p&gt;In Agentic AI, a language model doesn’t just generate — it thinks, acts, and reflects using external tools. That’s where MCP comes in.&lt;/p&gt;
&lt;p&gt;Think of MCP as a “USB interface” for AI — it lets agents plug into tools like Qdrant, APIs, or structured databases using a consistent protocol.&lt;/p&gt;
&lt;p&gt;Qdrant itself is a high-performance vector database — capable of powering semantic search, knowledge retrieval, and acting as long-term memory for AI agents. However, direct integration with agents can be messy and non-standardized.&lt;/p&gt;
&lt;p&gt;This is solved by wrapping Qdrant inside an MCP server, giving agents a semantic API they can call like a function.&lt;/p&gt;
&lt;h3&gt;Architecture overview&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-cwl&quot;&gt;[LLM Agent]
    |
    |-- [MCP Client]
[MCP Protocol]
    |
    |-- [Qdrant MCP Server]
    |   |-- Tool: qdrant-store
    |   |-- Tool: qdrant-find
    |
[Qdrant Vector DB]
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Use case: Support ticket memory for AI assistants&lt;/h3&gt;
&lt;p&gt;Imagine an AI assistant answering support queries.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;It doesn&apos;t have all answers built-in.&lt;/li&gt;
&lt;li&gt;But it has semantic memory from prior support logs stored in Qdrant.&lt;/li&gt;
&lt;li&gt;It uses qdrant-find to semantically retrieve similar issues.&lt;/li&gt;
&lt;li&gt;It then formulates a contextual response.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Step-by-step implementation&lt;/h2&gt;
&lt;h3&gt;Step 1: Launch Qdrant MCP Server&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-cwl&quot;&gt;export COLLECTION_NAME=&quot;support-tickets&quot;
export QDRANT_LOCAL_PATH=&quot;./qdrant_local_db&quot;
export EMBEDDING_MODEL=&quot;sentence-transformers/all-MiniLM-L6-v2&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-cwl&quot;&gt;uvx mcp-server-qdrant --transport sse
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Key parameters:&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;COLLECTION_NAME: Name of the Qdrant collection&lt;/li&gt;
&lt;li&gt;QDRANT_LOCAL_PATH: Local vector DB storage path&lt;/li&gt;
&lt;li&gt;EMBEDDING_MODEL: Embedding model for vectorization&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Step 2: Connect the MCP Client&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    server_params = StdioServerParameters(
        command=&quot;uvx&quot;,
        args=[&quot;mcp-server-qdrant&quot;],
        env={
            &quot;QDRANT_LOCAL_PATH&quot;: &quot;./qdrant_local_db&quot;,
            &quot;COLLECTION_NAME&quot;: &quot;support-tickets&quot;,
            &quot;EMBEDDING_MODEL&quot;: &quot;sentence-transformers/all-MiniLM-L6-v2&quot;
        }
    )
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;    # ...still inside main()
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print(tools)
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-cwl&quot;&gt;Expected Output: Lists tools like qdrant-store, qdrant-find
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Step 3: Ingest a new memory&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;ticket_info = &quot;Order #1234 was delayed due to heavy rainfall in transit zone.&quot;
result = await session.call_tool(&quot;qdrant-store&quot;, arguments={
    &quot;information&quot;: ticket_info,
    &quot;metadata&quot;: {&quot;order_id&quot;: 1234}
})
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This stores an embedded version of the text in Qdrant.&lt;/p&gt;
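&lt;p&gt;In practice you would seed the memory with more than one ticket. Since qdrant-store takes one piece of information per call, a simple loop over historical tickets works; the sample tickets below are made up purely for illustration, and the loop is assumed to run inside the same async MCP session as above.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Hypothetical backlog of past tickets used to seed the semantic memory
past_tickets = [
    {&quot;text&quot;: &quot;Order #1234 was delayed due to heavy rainfall in transit zone.&quot;, &quot;order_id&quot;: 1234},
    {&quot;text&quot;: &quot;Order #2231 was refunded after the package arrived damaged.&quot;, &quot;order_id&quot;: 2231},
    {&quot;text&quot;: &quot;Order #3310 shipped late because the item was backordered.&quot;, &quot;order_id&quot;: 3310},
]

# One qdrant-store call per ticket (still inside the async session)
for ticket in past_tickets:
    await session.call_tool(&quot;qdrant-store&quot;, arguments={
        &quot;information&quot;: ticket[&quot;text&quot;],
        &quot;metadata&quot;: {&quot;order_id&quot;: ticket[&quot;order_id&quot;]},
    })
&lt;/code&gt;&lt;/pre&gt;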
&lt;h3&gt;Step 4: Perform a semantic search&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;query = &quot;Why was order 1234 delayed?&quot;
search_response = await session.call_tool(&quot;qdrant-find&quot;, arguments={
    &quot;query&quot;: &quot;order 1234 delay&quot;
})
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Example output:&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-cwl&quot;&gt;[
  {
    &quot;content&quot;: &quot;Order #1234 was delayed due to heavy rainfall in transit zone.&quot;,
    &quot;metadata&quot;: {&quot;order_id&quot;: 1234}
  }
]
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Step 5: Use with LLM&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import openai

context = &quot;\n&quot;.join([r[&quot;content&quot;] for r in search_response])

prompt = f&quot;&quot;&quot;
You are a helpful assistant. Use this context to answer:
{context}
Question: Why was order #1234 delayed?
&quot;&quot;&quot;

response = openai.ChatCompletion.create(
    model=&quot;gpt-3.5-turbo&quot;,
    messages=[{&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: prompt}]
)
print(response[&quot;choices&quot;][0][&quot;message&quot;][&quot;content&quot;])
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Final answer:&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-cwl&quot;&gt;&quot;Order #1234 was delayed due to heavy rainfall in the transit zone.&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Parameter reference&lt;/h2&gt;
&lt;table border=&quot;1&quot; cellpadding=&quot;8&quot; cellspacing=&quot;0&quot; style=&quot;border-collapse: collapse; width: 100%;&quot;&gt;
  &lt;thead style=&quot;background-color:#f2f2f2&quot;&gt;
    &lt;tr&gt;
      &lt;th&gt;Tool&lt;/th&gt;
      &lt;th&gt;Parameter&lt;/th&gt;
      &lt;th&gt;Description&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;code&gt;qdrant-store&lt;/code&gt;&lt;/td&gt;
      &lt;td&gt;&lt;code&gt;information&lt;/code&gt;&lt;/td&gt;
      &lt;td&gt;Raw string to embed&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;/td&gt;
      &lt;td&gt;&lt;code&gt;metadata&lt;/code&gt;&lt;/td&gt;
      &lt;td&gt;Optional metadata for filtering&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;code&gt;qdrant-find&lt;/code&gt;&lt;/td&gt;
      &lt;td&gt;&lt;code&gt;query&lt;/code&gt;&lt;/td&gt;
      &lt;td&gt;Natural language query&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;code&gt;env var&lt;/code&gt;&lt;/td&gt;
      &lt;td&gt;&lt;code&gt;EMBEDDING_MODEL&lt;/code&gt;&lt;/td&gt;
      &lt;td&gt;Model used to create embeddings&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;code&gt;env var&lt;/code&gt;&lt;/td&gt;
      &lt;td&gt;&lt;code&gt;COLLECTION_NAME&lt;/code&gt;&lt;/td&gt;
      &lt;td&gt;Qdrant vector collection name&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;Pro tip: Chain MCP servers&lt;/h2&gt;
&lt;p&gt;You can deploy multiple MCP servers for different tools and plug them into agent workflows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;qdrant-find for memory&lt;/li&gt;
&lt;li&gt;google-search for web data&lt;/li&gt;
&lt;li&gt;postgres-query for structured facts&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Then, orchestrate it all using Agentic AI Teams to perform high-level, multi-tool reasoning.&lt;/p&gt;
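&lt;p&gt;A rough sketch of what chaining looks like from the client side follows. It reuses the stdio connection pattern from Step 2; the second server&apos;s command and arguments are placeholders for whichever search MCP server you actually run.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# The Qdrant memory server from Step 2, plus a second (hypothetical) search server.
# Substitute the actual command/args of whichever MCP servers you run.
memory_server = StdioServerParameters(
    command=&quot;uvx&quot;,
    args=[&quot;mcp-server-qdrant&quot;],
    env={&quot;COLLECTION_NAME&quot;: &quot;support-tickets&quot;, &quot;QDRANT_LOCAL_PATH&quot;: &quot;./qdrant_local_db&quot;},
)
search_server = StdioServerParameters(command=&quot;uvx&quot;, args=[&quot;your-search-mcp-server&quot;])

async def list_all_tools():
    # Open a session per server and print the tools each one exposes
    for params in (memory_server, search_server):
        async with stdio_client(params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                print(await session.list_tools())

# asyncio.run(list_all_tools())
&lt;/code&gt;&lt;/pre&gt;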
&lt;h2&gt;Final thoughts&lt;/h2&gt;
&lt;p&gt;By pairing Qdrant with MCP, Agentic AI gains powerful, semantic memory — a critical enabler of contextual understanding and long-term knowledge retention. This pattern abstracts the complexity of vector DBs behind a unified protocol, empowering agents to think, recall, and act without manual data plumbing.&lt;/p&gt;
&lt;p&gt;As the AI stack modularizes further, approaches like this will form the backbone of scalable, pluggable, and intelligent multi-agent ecosystems.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Part 7: How collaborative teams of agents unlock new intelligence]]></title><description><![CDATA[The rapid shift from Generative AI to Agentic AI marks more than a technical milestone—it represents a philosophical change in how machines…]]></description><link>https://developer.hpe.com/part-7-how-collaborative-teams-of-agents-unlock-new-intelligence/</link><guid isPermaLink="false">https://developer.hpe.com/part-7-how-collaborative-teams-of-agents-unlock-new-intelligence/</guid><pubDate>Mon, 21 Jul 2025 10:34:35 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;p&gt;The rapid shift from &lt;strong&gt;Generative AI to Agentic AI&lt;/strong&gt; marks more than a technical milestone—it represents a philosophical change in how machines reason, collaborate, and solve problems. Instead of relying on a single, all-purpose model, &lt;strong&gt;Agentic AI&lt;/strong&gt; introduces a dynamic ecosystem of specialized agents that work together like human teams, each offering a distinct capability or perspective.&lt;/p&gt;
&lt;p&gt;One of the most transformative configurations in this space is &lt;strong&gt;Collaborate Mode&lt;/strong&gt;, where multiple agents contribute—either asynchronously or in parallel—to achieve a unified outcome. This mode enables more nuanced problem-solving, especially in complex workflows where different types of reasoning, tools, or perspectives must come together seamlessly.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://dineshr1493.medium.com/all-you-need-to-know-about-the-evolution-of-generative-ai-to-agentic-ai-part-7-agentic-ai-a-13ee0b43bc42&quot;&gt;Inspired by my Medium post,&lt;/a&gt; this blog breaks down the architecture, purpose, and code implementation of this mode using the AGNO framework, making the power of distributed machine collaboration more approachable and actionable.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/screenshot-2025-07-21-at-4.06.15 pm.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;LLM Mode&quot; title=&quot;LLM Mode&quot;&gt;&lt;/center&gt;
&lt;h2&gt;What is Collaborate Mode?&lt;/h2&gt;
&lt;p&gt;Collaborate Mode is an agent orchestration strategy where multiple intelligent agents receive the same task, operate independently, and deliver unique insights that are then synthesized by a coordinator. This design mirrors how effective human teams operate—through parallel expertise, independent judgment, and collaborative synthesis.&lt;/p&gt;
&lt;h3&gt;Ideal use cases:&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Brainstorming across different domains&lt;/li&gt;
&lt;li&gt;Aggregating cross-platform knowledge&lt;/li&gt;
&lt;li&gt;Speeding up research through parallelism&lt;/li&gt;
&lt;li&gt;Building consensus across diverse information sources&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;How it works visually&lt;/h2&gt;
&lt;p&gt;Imagine each agent as a researcher assigned to a unique platform:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Reddit Agent gathers opinions from communities&lt;/li&gt;
&lt;li&gt;HackerNews Agent scans developer insights&lt;/li&gt;
&lt;li&gt;Twitter Agent captures trending conversations&lt;/li&gt;
&lt;li&gt;Academic Agent retrieves scholarly context&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Each one returns findings from its ecosystem, which the coordinator blends into a single, meaningful response.&lt;/p&gt;
&lt;h2&gt;AGNO framework code implementation&lt;/h2&gt;
&lt;h3&gt;1. Import modules &amp;#x26; tools&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from textwrap import dedent
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.team.team import Team
from agno.tools.arxiv import ArxivTools
from agno.tools.duckduckgo import DuckDuckGoTools
from agno.tools.googlesearch import GoogleSearchTools
from agno.tools.hackernews import HackerNewsTools
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;2. Define specialized agents&lt;/h3&gt;
&lt;p&gt;Each agent is built for platform-specific intelligence gathering.&lt;/p&gt;
&lt;h3&gt;Reddit Agent&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;reddit_researcher = Agent(
    name=&quot;Reddit Researcher&quot;,
    role=&quot;Research a topic on Reddit&quot;,
    model=OpenAIChat(id=&quot;gpt-4o&quot;),
    tools=[DuckDuckGoTools()],
    add_name_to_instructions=True,
    instructions=dedent(&quot;&quot;&quot;You are a Reddit researcher...&quot;&quot;&quot;),
)
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;HackerNews Agent&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;hackernews_researcher = Agent(
    name=&quot;HackerNews Researcher&quot;,
    model=OpenAIChat(&quot;gpt-4o&quot;),
    role=&quot;Research a topic on HackerNews.&quot;,
    tools=[HackerNewsTools()],
    add_name_to_instructions=True,
    instructions=dedent(&quot;&quot;&quot;You are a HackerNews researcher...&quot;&quot;&quot;),
)
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Academic Agent&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;academic_paper_researcher = Agent(
    name=&quot;Academic Paper Researcher&quot;,
    model=OpenAIChat(&quot;gpt-4o&quot;),
    role=&quot;Research academic papers...&quot;,
    tools=[GoogleSearchTools(), ArxivTools()],
    add_name_to_instructions=True,
    instructions=dedent(&quot;&quot;&quot;You are an academic researcher...&quot;&quot;&quot;),
)
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Twitter - X Agent&lt;/h2&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;twitter_researcher = Agent(
    name=&quot;Twitter Researcher&quot;,
    model=OpenAIChat(&quot;gpt-4o&quot;),
    role=&quot;Research Twitter/X topics&quot;,
    tools=[DuckDuckGoTools()],
    add_name_to_instructions=True,
    instructions=dedent(&quot;&quot;&quot;You are a Twitter researcher...&quot;&quot;&quot;),
)
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;3. Define the team&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;agent_team = Team(
    name=&quot;Discussion Team&quot;,
    mode=&quot;collaborate&quot;,
    model=OpenAIChat(&quot;gpt-4o&quot;),
    members=[
        reddit_researcher,
        hackernews_researcher,
        academic_paper_researcher,
        twitter_researcher,
    ],
    instructions=[
        &quot;You are a discussion master.&quot;,
        &quot;You must conclude the discussion once consensus is reached.&quot;,
    ],
    success_criteria=&quot;The team has reached a consensus.&quot;,
    update_team_context=True,
    send_team_context_to_members=True,
    show_tool_calls=True,
    markdown=True,
    show_members_responses=True,
)
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;4. Running the discussion&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import asyncio

if __name__ == &quot;__main__&quot;:
    asyncio.run(
        agent_team.print_response(
            message=&quot;Start the discussion on the topic: &apos;What is the best way to learn to code?&apos;&quot;,
            stream=True,
            stream_intermediate_steps=True,
        )
    )
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Example output&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;* Reddit: Focus on project-building and freeCodeCamp
* HackerNews: Start with Python and open-source
* Academia: Reinforce with spaced repetition and mentorship
* Twitter/X: Emphasize consistency and public learning
* Team Consensus: Use beginner-friendly languages, build real-world projects, and immerse yourself in learning communities.
&lt;/code&gt;&lt;/pre&gt;
&lt;center&gt;&lt;img src=&quot;/img/screenshot-2025-07-21-at-4.06.02 pm.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;Agent Parameters&quot; title=&quot;Agent Parameters&quot;&gt;&lt;/center&gt;
&lt;h2&gt;Pro tip: Run agents in parallel&lt;/h2&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;asyncio.run(
    agent_team.print_response(
        message=&quot;Start the discussion on the topic: &apos;How should we improve remote team collaboration?&apos;&quot;,
        stream=True,
        stream_intermediate_steps=True,
    )
)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Using asyncio ensures agents work simultaneously, which dramatically boosts speed and output quality—especially in research-heavy or time-sensitive use cases.&lt;/p&gt;
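&lt;p&gt;The snippet below illustrates the underlying asyncio pattern with plain coroutines, independent of any framework: two slow tasks run concurrently under &lt;code&gt;asyncio.gather&lt;/code&gt; and finish in roughly the time of the slowest one, rather than the sum of both. In the team setting, the slow tasks are the individual agents&apos; research calls.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import asyncio

async def research(source: str) -&gt; str:
    await asyncio.sleep(2)  # stand-in for a slow LLM or web call
    return f&quot;findings from {source}&quot;

async def main():
    # Both coroutines run concurrently: total wall time is ~2s, not ~4s
    results = await asyncio.gather(research(&quot;Reddit&quot;), research(&quot;HackerNews&quot;))
    print(results)

asyncio.run(main())
&lt;/code&gt;&lt;/pre&gt;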
&lt;h2&gt;Final thoughts&lt;/h2&gt;
&lt;p&gt;Collaborate Mode is more than a clever orchestration pattern—it’s the embodiment of distributed intelligence. By mimicking the structure of human brainstorming, it allows AI to perform with greater breadth, depth, and creativity. With frameworks like AGNO making implementation seamless, the age of intelligent, agent-led collaboration is no longer speculative—it’s operational.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;As we continue evolving from single-shot prompts to structured autonomy, Collaborate Mode stands out as a key innovation for scalable, multi-perspective problem-solving in AI systems.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;</content:encoded></item><item><title><![CDATA[Part 6: Agentic AI teams in Router Mode: Multilingual routing with AGNO]]></title><description><![CDATA[One of the most powerful capabilities in Agentic AI is orchestrating multiple agents to work together—each with its own specialization. In…]]></description><link>https://developer.hpe.com/part-6-agentic-ai-teams-in-router-mode-multilingual-routing-with-agno/</link><guid isPermaLink="false">https://developer.hpe.com/part-6-agentic-ai-teams-in-router-mode-multilingual-routing-with-agno/</guid><pubDate>Mon, 21 Jul 2025 10:04:38 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;p&gt;One of the most powerful capabilities in Agentic AI is orchestrating multiple agents to work together—each with its own specialization. In this segment, we explore the &lt;strong&gt;Router Mode&lt;/strong&gt; pattern, a configuration where a central team detects &lt;strong&gt;context&lt;/strong&gt; (like language or domain) and routes queries to the right agent accordingly.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;What exactly is &quot;context&quot;?&lt;/strong&gt;
In Agentic AI, &lt;em&gt;context&lt;/em&gt; refers to the key details in a user&apos;s input that help the system understand how to respond appropriately. Think of it like clues that tell the AI what kind of help is needed. For example, if a user submits a query in Hindi, the language itself becomes part of the context. Similarly, if the query mentions &quot;insurance claims&quot; or &quot;server configuration,&quot; that reveals the domain or topic the user is focused on.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Simple Analogy:&lt;/strong&gt;
Imagine walking into a large helpdesk at an international airport. You speak to a receptionist and ask for assistance. Based on your language, destination, or issue (lost luggage vs. visa questions), they direct you to the right expert—someone who speaks your language or understands your problem. That receptionist is acting like the router agent. They’re not solving the issue themselves but are smart enough to know &lt;em&gt;who&lt;/em&gt; should help you based on &lt;em&gt;context&lt;/em&gt;. That’s exactly what the Router Mode does in Agentic AI.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This method is especially effective in scenarios requiring multilingual or domain-specific support. Using the &lt;strong&gt;AGNO framework&lt;/strong&gt;, we’ll see how to construct a language-routing team that handles diverse user inputs with precision and fallback logic—making it especially friendly for no-code or low-code setups.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://dineshr1493.medium.com/all-you-need-to-know-about-the-evolution-of-generative-ai-to-agentic-ai-part-6-agentic-ai-a-39714050857b&quot;&gt;Inspired by my Medium post.&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Router Mode: What is it?&lt;/h2&gt;
&lt;p&gt;In Router Mode, the team acts like a switchboard, rather than executing tasks itself. Its core responsibility is to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Analyze user input&lt;/li&gt;
&lt;li&gt;Detect the appropriate context (e.g., language)&lt;/li&gt;
&lt;li&gt;Route the request to a specialized agent&lt;/li&gt;
&lt;li&gt;Handle unsupported inputs gracefully&lt;/li&gt;
&lt;/ul&gt;
&lt;center&gt;&lt;img src=&quot;/img/screenshot-2025-07-21-at-3.38.34 pm.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;Route Mode&quot; title=&quot;Route Mode&quot;&gt;&lt;/center&gt;
&lt;h3&gt;Use case: Multilingual chat support&lt;/h3&gt;
&lt;p&gt;Imagine a chatbot that receives queries in different languages. Router Mode enables:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Language detection&lt;/li&gt;
&lt;li&gt;Delegation to language-specific agents (e.g., Japanese, French, German)&lt;/li&gt;
&lt;li&gt;Fallback messages for unsupported languages&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Implementation: AGNO framework setup&lt;/h3&gt;
&lt;p&gt;We’ll define a set of language-specific agents and create a routing team that delegates accordingly.&lt;/p&gt;
&lt;h4&gt;Step 1: Define language agents&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from agno.agent import Agent
from agno.models.anthropic import Claude
from agno.models.deepseek import DeepSeek
from agno.models.mistral.mistral import MistralChat
from agno.models.openai import OpenAIChat

english_agent = Agent(
    name=&quot;English Agent&quot;,
    role=&quot;You can only answer in English&quot;,
    model=OpenAIChat(id=&quot;gpt-4.5-preview&quot;),
    instructions=[&quot;You must only respond in English&quot;],
)

japanese_agent = Agent(
    name=&quot;Japanese Agent&quot;,
    role=&quot;You can only answer in Japanese&quot;,
    model=DeepSeek(id=&quot;deepseek-chat&quot;),
    instructions=[&quot;You must only respond in Japanese&quot;],
)

chinese_agent = Agent(
    name=&quot;Chinese Agent&quot;,
    role=&quot;You can only answer in Chinese&quot;,
    model=DeepSeek(id=&quot;deepseek-chat&quot;),
    instructions=[&quot;You must only respond in Chinese&quot;],
)

spanish_agent = Agent(
    name=&quot;Spanish Agent&quot;,
    role=&quot;You can only answer in Spanish&quot;,
    model=OpenAIChat(id=&quot;gpt-4.5-preview&quot;),
    instructions=[&quot;You must only respond in Spanish&quot;],
)

french_agent = Agent(
    name=&quot;French Agent&quot;,
    role=&quot;You can only answer in French&quot;,
    model=MistralChat(id=&quot;mistral-large-latest&quot;),
    instructions=[&quot;You must only respond in French&quot;],
)

german_agent = Agent(
    name=&quot;German Agent&quot;,
    role=&quot;You can only answer in German&quot;,
    model=Claude(&quot;claude-3-5-sonnet-20241022&quot;),
    instructions=[&quot;You must only respond in German&quot;],
)
 

 
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Step 2: Create the Router team&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from agno.team.team import Team

multi_language_team = Team(
    name=&quot;Multi Language Team&quot;,
    mode=&quot;route&quot;,
    model=OpenAIChat(&quot;gpt-4.5-preview&quot;),
    members=[
        english_agent,
        spanish_agent,
        japanese_agent,
        french_agent,
        german_agent,
        chinese_agent,
    ],
    show_tool_calls=True,
    markdown=True,
    show_members_responses=True,
    instructions=[
        &quot;You are a language router that directs questions to the appropriate language agent.&quot;,
        &quot;If the user asks in a language whose agent is not a team member, respond in English with:&quot;,
        &quot;&apos;I can only answer in the following languages: English, Spanish, Japanese, French and German. Please ask your question in one of these languages.&apos;&quot;,
        &quot;Always check the language of the user&apos;s input before routing to an agent.&quot;,
        &quot;For unsupported languages like Italian, respond in English with the above message.&quot;,
    ]
)
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Step 3: Run multilingual examples&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;multi_language_team.print_response(&quot;How are you?&quot;, stream=True)         # English
multi_language_team.print_response(&quot;你好吗？&quot;, stream=True)              # Chinese
multi_language_team.print_response(&quot;お元気ですか?&quot;, stream=True)         # Japanese
multi_language_team.print_response(&quot;Comment allez-vous?&quot;, stream=True) # French
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Output:&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;[English Agent]: I&apos;m doing great! How can I help you today?
[Chinese Agent]: 我很好，谢谢你的关心。你呢？
[Japanese Agent]: 元気です。あなたはお元気ですか？
[French Agent]: Je vais bien, merci. Et vous ?
&lt;/code&gt;&lt;/pre&gt;
&lt;center&gt;&lt;img src=&quot;/img/screenshot-2025-07-21-at-3.38.47 pm.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;Agent Mode Parameters&quot; title=&quot;Agent Mode Parameters&quot;&gt;&lt;/center&gt;
&lt;h2&gt;Key parameters explained&lt;/h2&gt;
&lt;table border=&quot;1&quot; cellpadding=&quot;8&quot; cellspacing=&quot;0&quot; style=&quot;border-collapse: collapse; width: 100%;&quot;&gt;
  &lt;thead style=&quot;background-color:#f2f2f2&quot;&gt;
    &lt;tr&gt;
      &lt;th&gt;Parameter&lt;/th&gt;
      &lt;th&gt;Description&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;code&gt;mode=&quot;route&quot;&lt;/code&gt;&lt;/td&gt;
      &lt;td&gt;Instructs the team to act as a switchboard, not an executor&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;code&gt;show_members_responses=True&lt;/code&gt;&lt;/td&gt;
      &lt;td&gt;Displays individual agent replies for traceability&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
&lt;td&gt;&lt;code&gt;instructions=[...]&lt;/code&gt;&lt;/td&gt;
      &lt;td&gt;Core router logic: detect language, enforce exclusivity, manage fallbacks&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;code&gt;model=OpenAIChat(...)&lt;/code&gt;&lt;/td&gt;
      &lt;td&gt;Backbone LLM used by the router team for input analysis&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;Why Router Mode works&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Context-awareness: Inputs are analyzed for language, not just keywords&lt;/li&gt;
&lt;li&gt;Agent exclusivity: Each agent strictly operates in its assigned language&lt;/li&gt;
&lt;li&gt;Fallback resilience: Unsupported queries are met with a clear, unified message&lt;/li&gt;
&lt;li&gt;Modularity: Each language agent is replaceable or extendable, as shown in the sketch after this list&lt;/li&gt;
&lt;/ul&gt;
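&lt;p&gt;For instance, extending the team with Italian support (the language used above as an unsupported example) only requires one more agent definition. The model choice here is illustrative; any of the backends used for the other language agents would work.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Hypothetical Italian agent, following the same pattern as the other language agents.
# To enable it, add italian_agent to the members=[...] list of multi_language_team
# and update the fallback instruction so Italian is no longer listed as unsupported.
italian_agent = Agent(
    name=&quot;Italian Agent&quot;,
    role=&quot;You can only answer in Italian&quot;,
    model=OpenAIChat(id=&quot;gpt-4.5-preview&quot;),
    instructions=[&quot;You must only respond in Italian&quot;],
)
&lt;/code&gt;&lt;/pre&gt;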
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Router Mode in Agentic AI introduces scalable intelligence by structuring agents like a multilingual team. Rather than overwhelming a single agent, you delegate responsibility across specialists and keep interactions clean, accurate, and context-driven.&lt;/p&gt;
&lt;p&gt;With the AGNO framework, creating such intelligent, language-aware teams becomes seamless — and your agents become not just reactive, but well-organized and self-aware of their boundaries.&lt;/p&gt;
&lt;p&gt;A structured team, strict instructions, and intelligent routing — that’s the future of responsive AI.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Part 5: Agentic AI: Team coordination mode in action]]></title><description><![CDATA[One of the most transformative patterns in Agentic AI is team-based orchestration — a collaborative approach where specialized agents work…]]></description><link>https://developer.hpe.com/part-5-agentic-ai-team-coordination-mode-in-action/</link><guid isPermaLink="false">https://developer.hpe.com/part-5-agentic-ai-team-coordination-mode-in-action/</guid><pubDate>Mon, 21 Jul 2025 07:24:24 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;p&gt;One of the most transformative patterns in Agentic AI is team-based orchestration — a collaborative approach where specialized &lt;strong&gt;agents work together to fulfill complex goals&lt;/strong&gt;. In this edition, we explore coordinate mode using the AGNO framework — a design where a team manager delegates, supervises, and integrates the contributions of each agent.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://dineshr1493.medium.com/all-you-need-to-know-about-the-evolution-of-generative-ai-to-agentic-ai-part-5-agentic-ai-a-2d6651c9cc5c&quot;&gt;Inspired by my Medium post.&lt;/a&gt;&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/screenshot-2025-07-21-at-12.57.22 pm.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;LLM Mode&quot; title=&quot;LLM Mode&quot;&gt;&lt;/center&gt;
&lt;h2&gt;What are agentic AI teams?&lt;/h2&gt;
&lt;p&gt;An agentic team is a structured collection of AI agents, each performing a specific role with autonomy and tool access. Teams can include roles like:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Researcher: Finds and filters relevant data&lt;/li&gt;
&lt;li&gt;Writer: Synthesizes content with tone and structure&lt;/li&gt;
&lt;li&gt;Translator: Converts content across languages&lt;/li&gt;
&lt;li&gt;Planner: Organizes execution based on goals&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;In Coordinate Mode:&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;A team manager Agent directs the flow of tasks&lt;/li&gt;
&lt;li&gt;Individual agents handle sub-tasks independently&lt;/li&gt;
&lt;li&gt;Final results are reviewed, refined, and unified by the manager&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;AGNO Framework: Coordinating a multi-agent content team&lt;/h2&gt;
&lt;p&gt;Let’s examine a professional-grade configuration of a New York Times-style editorial team, where search, writing, and editorial review are handled by distinct agents.&lt;/p&gt;
&lt;h3&gt;Imports&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.team.team import Team
from agno.tools.duckduckgo import DuckDuckGoTools
from agno.tools.newspaper4k import Newspaper4kTools
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Searcher agent&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;searcher = Agent(
    name=&quot;Searcher&quot;,
    role=&quot;Searches the top URLs for a topic&quot;,
    instructions=[
        &quot;Generate 3 search terms for a topic.&quot;,
        &quot;Search the web and return 10 high-quality, relevant URLs.&quot;,
        &quot;Prioritize credible sources, suitable for the New York Times.&quot;
    ],
    tools=[DuckDuckGoTools()],
    add_datetime_to_instructions=True,
)
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Writer agent&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;writer = Agent(
    name=&quot;Writer&quot;,
    role=&quot;Writes a high-quality article&quot;,
    description=&quot;Senior NYT writer tasked with long-form editorial content.&quot;,
    instructions=[
        &quot;Read all articles using `read_article`.&quot;,
        &quot;Write a structured, engaging article of at least 15 paragraphs.&quot;,
        &quot;Support arguments with factual citations and ensure clarity.&quot;,
        &quot;Never fabricate facts or plagiarize content.&quot;
    ],
    tools=[Newspaper4kTools()],
    add_datetime_to_instructions=True,
)
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Editor team (Manager agent in Coordinate Mode)&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;editor = Team(
    name=&quot;Editor&quot;,
    mode=&quot;coordinate&quot;,
    model=OpenAIChat(&quot;gpt-4o&quot;),
    members=[searcher, writer],
    description=&quot;You are a senior NYT editor coordinating the team.&quot;,
    instructions=[
        &quot;Delegate research to the search agent.&quot;,
        &quot;Delegate drafting to the writer.&quot;,
        &quot;Review, proofread, and enhance the final article.&quot;,
        &quot;Maintain NYT-level quality, structure, and tone.&quot;
    ],
    add_datetime_to_instructions=True,
    send_team_context_to_members=True,
    show_members_responses=True,
    markdown=True,
)
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Running the team&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Method 1: Print output directly
editor.print_response(&quot;Write an article about latest developments in AI.&quot;)

# Method 2: Get raw result
response = editor.run(&quot;Write an article about latest developments in AI.&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Key parameters explained&lt;/h3&gt;
&lt;table&gt;
  &lt;thead style=&quot;background-color:#f2f2f2&quot;&gt;
    &lt;tr&gt;
      &lt;th&gt;Parameter&lt;/th&gt;
      &lt;th&gt;Purpose&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;code&gt;mode=&quot;coordinate&quot;&lt;/code&gt;&lt;/td&gt;
      &lt;td&gt;Enables structured delegation and task flow&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
&lt;td&gt;&lt;code&gt;members=[...]&lt;/code&gt;&lt;/td&gt;
      &lt;td&gt;Assigns role-specific agents&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;code&gt;send_team_context_to_members&lt;/code&gt;&lt;/td&gt;
      &lt;td&gt;Shares global task context with all agents&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;code&gt;show_members_responses=True&lt;/code&gt;&lt;/td&gt;
      &lt;td&gt;Displays each member&apos;s intermediate output&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;code&gt;add_datetime_to_instructions&lt;/code&gt;&lt;/td&gt;
      &lt;td&gt;Contextualizes outputs with current date/time&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;Pro tip: Define success criteria&lt;/h2&gt;
&lt;p&gt;Adding success criteria helps agents align their efforts with measurable outcomes.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;strategy_team = Team(
    members=[market_analyst, competitive_analyst, strategic_planner],
    mode=&quot;coordinate&quot;,
    name=&quot;Strategy Team&quot;,
    description=&quot;A team that develops strategic recommendations&quot;,
    success_criteria=&quot;Produce actionable strategic recommendations supported by market and competitive analysis&quot;,
)
response = strategy_team.run(
    &quot;Develop a market entry strategy for our new AI-powered healthcare product&quot;
)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This ensures agents not only act — but act with strategic purpose and direction.&lt;/p&gt;
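&lt;p&gt;If you use the &lt;code&gt;run()&lt;/code&gt; form, the returned object carries the team&apos;s final output. A minimal sketch, assuming the response exposes its text through a &lt;code&gt;content&lt;/code&gt; attribute (check the AGNO response type in the documentation for your version):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;response = strategy_team.run(
    &quot;Develop a market entry strategy for our new AI-powered healthcare product&quot;
)
# Assumption: the run result exposes the final text via a content attribute
print(response.content)
&lt;/code&gt;&lt;/pre&gt;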
&lt;center&gt;&lt;img src=&quot;/img/screenshot-2025-07-21-at-12.57.44 pm.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;Agentic AI Parameters&quot; title=&quot;Agentic AI Parameters&quot;&gt;&lt;/center&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Coordinate Mode in Agentic AI exemplifies intelligent task distribution, where specialized agents work under centralized leadership to deliver complex, high-quality outputs. The AGNO framework simplifies this orchestration through agent roles, tool integration, and goal alignment &lt;strong&gt;—&lt;/strong&gt; &lt;strong&gt;enabling scalable, auditable AI workflows.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;From editorial pipelines to business strategy engines, multi-agent coordination is redefining how work gets done &lt;strong&gt;— autonomously, intelligently, and collaboratively.&lt;/strong&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Part 4: The rise of Agentic AI and the power of the AGNO framework]]></title><description><![CDATA[As artificial intelligence continues its rapid evolution, a new frontier has emerged — Agentic AI. This paradigm moves us beyond passive…]]></description><link>https://developer.hpe.com/part-4-the-rise-of-agentic-ai-and-the-power-of-the-agno-framework/</link><guid isPermaLink="false">https://developer.hpe.com/part-4-the-rise-of-agentic-ai-and-the-power-of-the-agno-framework/</guid><pubDate>Mon, 21 Jul 2025 07:02:23 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;p&gt;As artificial intelligence continues its rapid evolution, a new frontier has emerged — Agentic AI. This paradigm moves us beyond passive, prompt-based LLMs and into an era where AI doesn’t just respond — &lt;strong&gt;it thinks, plans, acts, and collaborates.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Building on insights inspired by &lt;a href=&quot;https://dineshr1493.medium.com/agentic-ai-framework-a4df29a8fc62&quot;&gt;my post on Medium,&lt;/a&gt; this guide explores what Agentic AI truly is, why it matters, and how modern frameworks like AGNO (formerly Phidata) are enabling intelligent agent-based systems that work autonomously in real-world settings.&lt;/p&gt;
&lt;p&gt;Let’s step into the mechanics of intelligent agents and discover how they’re transforming how work gets done.&lt;/p&gt;
&lt;h2&gt;What is Agentic AI?&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Agentic AI&lt;/strong&gt; refers to AI systems designed not just to generate content, but to &lt;strong&gt;autonomously reason, decide, and execute tasks —&lt;/strong&gt; often in coordination with external tools or other agents.&lt;/p&gt;
&lt;p&gt;Unlike basic LLMs or traditional “LLM + tool” stacks, Agentic AI systems can:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Deconstruct complex goals into sub-tasks&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Delegate and execute those sub-tasks via specialized agents&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Integrate tools, APIs, and live data sources to take meaningful actions&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Reflect on their outputs and improve over time&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This evolution from reactive chatbots to proactive agents is redefining automation and digital intelligence.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/screenshot-2025-07-21-at-12.45.30 pm.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;LLM Evolution&quot; title=&quot;LLM Evolution&quot;&gt;&lt;/center&gt;
&lt;h2&gt;The AGNO framework (Previously Phidata)&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;AGNO&lt;/strong&gt; is an open-source framework purpose-built to create modular, autonomous AI agents that &lt;strong&gt;think, plan, act, and adapt&lt;/strong&gt;. It’s one of the most advanced and flexible toolkits for building &lt;em&gt;&lt;strong&gt;real-world Agentic AI systems.&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Core capabilities:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Contextual reasoning&lt;/strong&gt; through logic chains&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Task planning and delegation&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Tool invocation&lt;/strong&gt; (APIs, databases, automation systems)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Result reflection&lt;/strong&gt; for improved decisions&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Multi-agent orchestration&lt;/strong&gt; at scale&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Streaming support&lt;/strong&gt; using protocols like Model Context Protocol (MCP)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Workflow visualization&lt;/strong&gt; and agent team configurations&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;🔗 GitHub: &lt;a href=&quot;https://github.com/agno-agi/agno&quot;&gt;AGNO Framework&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Agents, tools, and teams — The building blocks&lt;/strong&gt;&lt;/h2&gt;
&lt;h3&gt;&lt;strong&gt;1. Agents&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;An &lt;strong&gt;agent&lt;/strong&gt; is a self-contained AI module designed to handle a specific task or role.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Operates autonomously or as part of a team&lt;/li&gt;
&lt;li&gt;Can invoke tools, fetch data, or generate content&lt;/li&gt;
&lt;li&gt;Uses reasoning and memory to complete goals&lt;/li&gt;
&lt;/ul&gt;
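&lt;p&gt;To make this concrete, here is a minimal agent sketch modeled on the examples in the AGNO GitHub repository. Treat it as an illustration rather than a definitive reference: the module paths, model identifier, and tool class are assumptions that may differ between AGNO versions.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Minimal AGNO-style agent sketch (assumed API; check the AGNO docs for your version)
from agno.agent import Agent                       # assumed import path
from agno.models.openai import OpenAIChat          # assumed model wrapper
from agno.tools.duckduckgo import DuckDuckGoTools  # assumed web-search tool

# One self-contained agent: a model, a role, and the tools it is allowed to call
research_agent = Agent(
    name=&quot;Researcher&quot;,
    model=OpenAIChat(id=&quot;gpt-4o&quot;),               # any supported model id
    tools=[DuckDuckGoTools()],                   # lets the agent fetch live data
    instructions=&quot;Answer with sourced, up-to-date facts.&quot;,
    markdown=True,
)

# The agent reasons, calls its tools when needed, and streams back a response
research_agent.print_response(&quot;Summarize this week&apos;s AI agent news.&quot;, stream=True)
&lt;/code&gt;&lt;/pre&gt;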
&lt;h3&gt;&lt;strong&gt;2. Tools&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Agents in AGNO use &lt;strong&gt;tools&lt;/strong&gt; to interact with the real world. These can be:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;APIs (e.g., Google Search, Slack, Salesforce)&lt;/li&gt;
&lt;li&gt;Databases (e.g., Postgres, MongoDB, Qdrant)&lt;/li&gt;
&lt;li&gt;Custom internal services (e.g., CRMs, file systems)&lt;/li&gt;
&lt;li&gt;Processing modules (e.g., calculators, formatters)&lt;/li&gt;
&lt;/ul&gt;
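&lt;p&gt;As a framework-agnostic illustration (not AGNO-specific), a tool is often nothing more than a plain function with a documented signature that the agent is allowed to invoke. The weather and calculator functions below are hypothetical stand-ins for real integrations.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# A &quot;tool&quot; is just a callable with a clear contract that an agent can invoke
from typing import Callable, Dict

def get_weather(city: str) -&gt; str:
    &quot;&quot;&quot;Hypothetical tool: a real one would call a weather API.&quot;&quot;&quot;
    return f&quot;Sunny in {city}&quot;  # stubbed response for illustration

def add_numbers(a: float, b: float) -&gt; float:
    &quot;&quot;&quot;Hypothetical processing tool.&quot;&quot;&quot;
    return a + b

# A simple registry an agent (or framework) could consult when planning actions
TOOLS: Dict[str, Callable] = {
    &quot;get_weather&quot;: get_weather,
    &quot;add_numbers&quot;: add_numbers,
}

print(TOOLS[&quot;get_weather&quot;](&quot;Bangalore&quot;))
print(TOOLS[&quot;add_numbers&quot;](2, 3))
&lt;/code&gt;&lt;/pre&gt;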
&lt;h3&gt;&lt;strong&gt;3. Teams&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Agents can collaborate through structured &lt;strong&gt;team modes&lt;/strong&gt; for complex, multi-faceted workflows.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Modes of teamwork in AGNO&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Modes are the means by which agents communicate with each other. I will walk you through a few common modes of agent communication.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/screenshot-2025-07-21-at-12.45.57 pm.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;Modes of Teamwork in AGNO&quot; title=&quot;Modes of Teamwork in AGNO&quot;&gt;&lt;/center&gt;
&lt;h3&gt;&lt;strong&gt;Coordinator Mode&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;In &lt;strong&gt;Coordinator Mode&lt;/strong&gt;, a central agent takes charge of assigning and managing sub-tasks across a network of specialized agents. Think of it like a project manager in a team—delegating responsibilities, tracking progress, and assembling the final output.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Acts as an orchestrator&lt;/strong&gt;, breaking down complex goals into manageable parts&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Delegates tasks&lt;/strong&gt; to the most capable agents based on their expertise&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Aggregates results&lt;/strong&gt; and presents a unified final outcome&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Excels in hierarchical workflows&lt;/strong&gt;, such as multi-step reasoning, multi-stage content generation, or structured decision-making pipelines&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This mode becomes particularly powerful when tasks require sequencing, prioritization, or dependency handling across multiple agents.&lt;/p&gt;
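&lt;p&gt;The following framework-agnostic Python sketch illustrates the coordinator pattern itself (it does not use the AGNO API): a coordinator breaks a goal into ordered sub-tasks, delegates each to a specialist, and assembles the final output. The specialist functions are hypothetical stand-ins for real agents.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Framework-agnostic sketch of Coordinator Mode: delegate sub-tasks, then aggregate
from typing import Callable, List

# Hypothetical specialist agents represented as plain functions
def research_agent(topic: str) -&gt; str:
    return f&quot;[research notes on {topic}]&quot;

def writer_agent(notes: str) -&gt; str:
    return f&quot;Draft article based on {notes}&quot;

def editor_agent(draft: str) -&gt; str:
    return draft + &quot; (edited for tone and clarity)&quot;

def coordinator(goal: str) -&gt; str:
    &quot;&quot;&quot;Break the goal into ordered steps, delegate, and assemble the output.&quot;&quot;&quot;
    plan: List[Callable[[str], str]] = [research_agent, writer_agent, editor_agent]
    artifact = goal
    for step in plan:  # sequencing and dependency handling live here
        artifact = step(artifact)
    return artifact

print(coordinator(&quot;multi-agent systems in the enterprise&quot;))
&lt;/code&gt;&lt;/pre&gt;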
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;&lt;a href=&quot;https://developer.hpe.com/blog/part-5-agentic-ai-team-coordination-mode-in-action/&quot;&gt;Will be explored in depth in Part 5.&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;&lt;strong&gt;Router Mode&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;In &lt;strong&gt;Router Mode&lt;/strong&gt;, tasks are automatically routed to the most appropriate agent based on the type, language, or domain of the query—without requiring manual intervention.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Lightweight and fast&lt;/strong&gt;: It doesn’t require the central agent to deeply understand or process the query itself. Instead, it acts like a traffic controller—quickly identifying what the query is about and directing it to the right specialized agent. This makes it highly efficient, especially in high-volume environments.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Common in chatbots, support desks, and multi-skilled assistants&lt;/strong&gt;: For example, in a multilingual support bot, Router Mode can detect the language of a user query and route it to an agent that handles that language. Or it might detect whether a question is about billing, tech support, or product features and send it to the corresponding expert agent.&lt;/li&gt;
&lt;/ul&gt;
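&lt;p&gt;Again as a framework-agnostic sketch (the keyword classifier and the two agents below are hypothetical), the heart of Router Mode is a cheap classification step that picks one downstream agent and forwards the query untouched:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Framework-agnostic sketch of Router Mode: classify, then forward the query untouched
def billing_agent(query: str) -&gt; str:
    return f&quot;Billing team answer to: {query}&quot;

def tech_support_agent(query: str) -&gt; str:
    return f&quot;Tech support answer to: {query}&quot;

def route(query: str) -&gt; str:
    &quot;&quot;&quot;Naive keyword classifier; a real router might use an LLM or a language detector.&quot;&quot;&quot;
    if any(word in query.lower() for word in (&quot;invoice&quot;, &quot;refund&quot;, &quot;charge&quot;)):
        return billing_agent(query)
    return tech_support_agent(query)

print(route(&quot;Why was my card charged twice?&quot;))
print(route(&quot;The VPN client will not start.&quot;))
&lt;/code&gt;&lt;/pre&gt;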
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;&lt;a href=&quot;https://developer.hpe.com/blog/part-6-agentic-ai-teams-in-router-mode-multilingual-routing-with-agno/&quot;&gt;Detailed breakdown coming in Part 6.&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;&lt;strong&gt;Collaborator Mode&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;In &lt;strong&gt;Collaborator Mode&lt;/strong&gt;, agents work together dynamically—&lt;strong&gt;sharing knowledge, negotiating decisions, and contributing their perspectives&lt;/strong&gt;—to reach a common goal. Unlike Router or Coordinator modes, this pattern embraces simultaneous or iterative agent interactions that mirror how real-world teams brainstorm, refine ideas, or co-develop solutions.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Best for consensus-driven tasks&lt;/strong&gt;, where multiple viewpoints or skills need to be considered&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Ideal for creative and collective output&lt;/strong&gt;, such as writing, strategy development, or decision support&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Common in research, design, and system planning&lt;/strong&gt;, where exploration, feedback, and iteration are essential&lt;/li&gt;
&lt;/ul&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;&lt;a href=&quot;https://developer.hpe.com/blog/part-7-how-collaborative-teams-of-agents-unlock-new-intelligence/&quot;&gt;Deep dive ahead in Part 7.&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;center&gt;&lt;img src=&quot;/img/screenshot-2025-07-21-at-12.46.13 pm.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;Pro Insight: Langmanus — A Complementary Framework&quot; title=&quot;Pro Insight: Langmanus — A Complementary Framework&quot;&gt;&lt;/center&gt;
&lt;h2&gt;&lt;strong&gt;Pro Insight: Langmanus — A Complementary Framework&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;For developers seeking visual workflows and advanced task orchestration, Langmanus on GitHub offers:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Workflow graphs and dashboards&lt;/li&gt;
&lt;li&gt;Real-time task delegation&lt;/li&gt;
&lt;li&gt;Progress tracking across agent teams&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Its system architecture includes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Coordinator&lt;/strong&gt; — Routes initial queries&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Planner&lt;/strong&gt; — Builds strategies&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Supervisor&lt;/strong&gt; — Oversees agents&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Researcher&lt;/strong&gt; — Gathers info&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Coder&lt;/strong&gt; — Handles code tasks&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Browser&lt;/strong&gt; — Performs online searches&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Reporter&lt;/strong&gt; — Summarizes outcomes&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;GitHub: &lt;a href=&quot;https://github.com/langmanus/langmanus&quot;&gt;Langmanus Repository&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Agentic AI represents a turning point&lt;/strong&gt; in artificial intelligence — a shift from passive, text-based outputs to autonomous, context-aware action systems. With frameworks like AGNO, developers can create agents that plan, reason, and act just like humans would in complex workflows.&lt;/p&gt;
&lt;p&gt;These agents aren’t just smarter — they’re collaborative, modular, and capable of evolving with the task at hand. As more organizations adopt these systems, the future of automation will belong not to static scripts, but to dynamic agents working in harmony.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://developer.hpe.com/blog/part-5-agentic-ai-team-coordination-mode-in-action/&quot;&gt;Up next in Part 5:&lt;/a&gt;&lt;/strong&gt; We’ll dive deep into &lt;strong&gt;Coordinator Mode&lt;/strong&gt; and how AGNO orchestrates multi-agent task flows like a seasoned project manager.&lt;/p&gt;
&lt;/blockquote&gt;</content:encoded></item><item><title><![CDATA[Part 3: Model Context Protocol (MCP): The protocol that powers AI agents]]></title><description><![CDATA[As AI agents grow beyond text generation into autonomous problem-solvers, a new challenge emerges — communication. Not between humans and AI…]]></description><link>https://developer.hpe.com/model-context-protocol-mcp-the-protocol-that-powers-ai-agents/</link><guid isPermaLink="false">https://developer.hpe.com/model-context-protocol-mcp-the-protocol-that-powers-ai-agents/</guid><pubDate>Fri, 18 Jul 2025 14:23:55 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;p&gt;As AI agents grow beyond text generation into autonomous problem-solvers, a new challenge emerges — communication. Not between humans and AI, but between AI and the vast world of services, APIs, databases, and tools. That’s where &lt;strong&gt;Model Context Protocol (MCP)&lt;/strong&gt; steps in.&lt;/p&gt;
&lt;p&gt;Inspired by &lt;a href=&quot;https://dineshr1493.medium.com/all-you-need-to-know-about-the-evolution-of-generative-ai-to-agentic-ai-part-3-mcp-model-context-f026578ff0dd&quot;&gt;my post on medium&lt;/a&gt;, this blog post demystifies the MCP standard — reinterpreted with clarity, depth, and real-world relevance to help you understand how AI agents actually get things done.
If LLMs are the brains, MCP is the nervous system connecting them to the real world. Let’s unpack how this protocol makes agentic AI functional, contextual, and enterprise-ready.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/mcp1.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;MCP Arch&quot; title=&quot;MCP Arch&quot;&gt;&lt;/center&gt;
&lt;h2&gt;What is MCP, and why does it matter?&lt;/h2&gt;
&lt;p&gt;At its core, MCP is a standardized way for AI agents to communicate with external services. Instead of treating each tool or database as a black box, MCP defines a consistent interface — allowing the agent to send structured requests and receive contextual responses.&lt;/p&gt;
&lt;p&gt;Imagine an agent saying:&lt;/p&gt;
&lt;p&gt;“Here’s the context, here’s what I need — now act smartly based on it.”&lt;/p&gt;
&lt;p&gt;That’s the essence of MCP. It removes ambiguity, reduces dependency on ad hoc code, and enables agents to &lt;strong&gt;perform tasks with understanding, not just commands.&lt;/strong&gt;&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/mcp2.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;MCP Flow&quot; title=&quot;MCP Flow&quot;&gt;&lt;/center&gt;
&lt;h2&gt;The building blocks of MCP&lt;/h2&gt;
&lt;p&gt;MCP comprises three major components:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;MCP client: Resides inside the AI agent and is responsible for making requests.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;MCP server: Wraps around external tools or services and handles incoming requests.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;MCP protocol: Uses JSON-RPC over transport layers like:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Standard IO for local service calls&lt;/li&gt;
&lt;li&gt;Server-Sent Events (SSE) for remote or network-based integrations.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;center&gt;&lt;img src=&quot;/img/mcp3.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;MCP Working&quot; title=&quot;MCP Working&quot;&gt;&lt;/center&gt;
&lt;h2&gt;How MCP works — The flow&lt;/h2&gt;
&lt;p&gt;Here’s a simplified view of the interaction:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;The agent asks its MCP client to perform a task.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The MCP client sends a well-formed JSON-RPC request to the MCP server.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The MCP server either:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Executes a tool (e.g., semantic_search)&lt;/li&gt;
&lt;li&gt;Fetches data (e.g., a file or DB record)&lt;/li&gt;
&lt;li&gt;Returns a structured prompt (e.g., a Q&amp;#x26;A template)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The MCP server streams back results or updates.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The agent uses this data to reflect, re-plan, or execute the next step.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;This architecture ensures that AI agents don’t just interact with data — they do so with awareness and strategy.&lt;/p&gt;
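&lt;p&gt;For a feel of what actually travels over the wire, here is an illustrative JSON-RPC 2.0 exchange built in Python. The method and parameter names are assumptions loosely based on the public MCP specification; consult the current spec for the exact schema.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import json

# Illustrative JSON-RPC 2.0 request an MCP client might send (field names assumed)
request = {
    &quot;jsonrpc&quot;: &quot;2.0&quot;,
    &quot;id&quot;: 1,
    &quot;method&quot;: &quot;tools/call&quot;,             # assumed MCP method name
    &quot;params&quot;: {
        &quot;name&quot;: &quot;semantic_search&quot;,      # tool exposed by the MCP server
        &quot;arguments&quot;: {&quot;query&quot;: &quot;incident policy&quot;},
    },
}

# Illustrative response the MCP server could stream back
response = {
    &quot;jsonrpc&quot;: &quot;2.0&quot;,
    &quot;id&quot;: 1,
    &quot;result&quot;: {&quot;documents&quot;: [&quot;Incident policy v3 ...&quot;, &quot;Escalation matrix ...&quot;]},
}

print(json.dumps(request, indent=2))
print(json.dumps(response, indent=2))
&lt;/code&gt;&lt;/pre&gt;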
&lt;h2&gt;MCP + Reflection + Meta-Context = Smarter AI&lt;/h2&gt;
&lt;p&gt;What separates MCP from basic APIs is its inclusion of &lt;strong&gt;meta-context and reflection:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Meta-Context:&lt;/strong&gt; Includes user role, session history, intent, and environment details.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Reflection:&lt;/strong&gt; Agents can evaluate responses. If a query fails, they can retry with a better approach.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Context-Aware Tools:&lt;/strong&gt; MCP servers can use meta-data to dynamically tailor responses.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Tool Discovery:&lt;/strong&gt; Agents can ask, “What tools are available right now?” and adjust plans accordingly.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This turns the agent into a &lt;strong&gt;situationally aware operator&lt;/strong&gt;, not just a command runner.&lt;/p&gt;
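&lt;p&gt;A small, framework-agnostic sketch of the reflection idea: the agent inspects a result and retries with a refined query when the first attempt comes back empty. The search function here is a hypothetical stand-in for an MCP tool call.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Framework-agnostic sketch of reflection: evaluate the result, then retry smarter
from typing import List

def semantic_search(query: str) -&gt; List[str]:
    &quot;&quot;&quot;Hypothetical stand-in for an MCP tool call.&quot;&quot;&quot;
    return [] if len(query.split()) &lt; 3 else [f&quot;doc matching: {query}&quot;]

def answer_with_reflection(question: str, max_attempts: int = 2) -&gt; List[str]:
    query = question
    for _ in range(max_attempts):
        results = semantic_search(query)
        if results:  # reflection: did the tool return anything useful?
            return results
        query = f&quot;{question} policy details and scope&quot;  # refine and try again
    return [&quot;no relevant documents found&quot;]

print(answer_with_reflection(&quot;incident policy&quot;))
&lt;/code&gt;&lt;/pre&gt;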
&lt;h2&gt;The race of MCP&lt;/h2&gt;
&lt;p&gt;Curious about the groundbreaking &lt;em&gt;&lt;strong&gt;startups racing to develop the next wave of MCP&lt;/strong&gt;&lt;/em&gt; (Model Context Protocol) servers? In this roundup, we highlight the most innovative players redefining how AI agents access, interact with, and orchestrate information across tools, databases, financial platforms, and more. For each startup, you’ll find a brief overview of their core technology, real-world use cases, and direct links to explore their solutions further.&lt;/p&gt;
&lt;p&gt;Whether you&apos;re an AI developer, tech enthusiast, or enterprise looking to supercharge your workflows, discover how these emerging MCP platforms are shaping the future of AI-driven connectivity—unlocking seamless integrations and unprecedented automation across industries.&lt;/p&gt;
&lt;table&gt;
  &lt;thead style=&quot;background-color:#f2f2f2&quot;&gt;
    &lt;tr&gt;
      &lt;th&gt;Startup&lt;/th&gt;
      &lt;th&gt;Description&lt;/th&gt;
      &lt;th&gt;Tech Focus&lt;/th&gt;
      &lt;th&gt;Use Case&lt;/th&gt;
      &lt;th&gt;Website&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;Anthropic&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;Creators of MCP and Claude AI&lt;/td&gt;
      &lt;td&gt;AI Research &amp; Safety&lt;/td&gt;
      &lt;td&gt;Secure tool access via MCP for Claude AI&lt;/td&gt;
      &lt;td&gt;&lt;a href=&quot;https://anthropic.com&quot;&gt;anthropic.com&lt;/a&gt;&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;Replit&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;Cloud IDE with AI capabilities&lt;/td&gt;
      &lt;td&gt;Developer Tools &amp; AI Agents&lt;/td&gt;
      &lt;td&gt;MCP-powered code assistant in their IDE&lt;/td&gt;
      &lt;td&gt;&lt;a href=&quot;https://replit.com&quot;&gt;replit.com&lt;/a&gt;&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;Sourcegraph&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;Code intelligence &amp; search platform&lt;/td&gt;
      &lt;td&gt;Developer Productivity&lt;/td&gt;
      &lt;td&gt;MCP to connect AI to codebases &amp; tickets&lt;/td&gt;
      &lt;td&gt;&lt;a href=&quot;https://sourcegraph.com&quot;&gt;sourcegraph.com&lt;/a&gt;&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;Qdrant&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;Open-source vector database&lt;/td&gt;
      &lt;td&gt;AI Infrastructure (RAG)&lt;/td&gt;
      &lt;td&gt;MCP server for semantic memory in agents&lt;/td&gt;
      &lt;td&gt;&lt;a href=&quot;https://qdrant.tech&quot;&gt;qdrant.tech&lt;/a&gt;&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;Neon&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;Serverless Postgres provider&lt;/td&gt;
      &lt;td&gt;Databases (Postgres Cloud)&lt;/td&gt;
      &lt;td&gt;MCP for AI-driven Postgres analytics &amp; ops&lt;/td&gt;
      &lt;td&gt;&lt;a href=&quot;https://neon.tech&quot;&gt;neon.tech&lt;/a&gt;&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;Real-world applications of MCP&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Faster integrations&lt;/strong&gt;
Instead of hard-coding APIs, developers can plug agents into pre-wrapped MCP servers. This dramatically shortens time-to-integration.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Live data access&lt;/strong&gt;
Agents can now access up-to-date information from production-grade systems — avoiding stale, hallucinated responses.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Enterprise control&lt;/strong&gt;
MCP enables governance: every action is logged, controlled, and auditable — essential for security-conscious environments.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Cross-agent compatibility&lt;/strong&gt;
Build a tool once, and any MCP-compliant agent can use it. No more agent-specific wrappers.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;&lt;strong&gt;Case study: Qdrant with MCP&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Qdrant&lt;/strong&gt; is a vector database used for semantic search. Here’s how it operates under MCP:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;MCP server exposes a tool like semantic_search(query: str).&lt;/li&gt;
&lt;li&gt;Agent calls: semantic_search(&quot;incident policy&quot;).&lt;/li&gt;
&lt;li&gt;Qdrant streams back relevant documents in real-time.&lt;/li&gt;
&lt;li&gt;The agent uses those documents as dynamic context to reason or respond.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;This is vector search integrated into an agentic loop — not just storage, but intelligence.&lt;/p&gt;
&lt;h3&gt;Case study: PostgreSQL with MCP&lt;/h3&gt;
&lt;p&gt;A &lt;strong&gt;Postgres MCP Server&lt;/strong&gt; might expose methods such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;get_sales(region: str, quarter: str).&lt;/li&gt;
&lt;li&gt;run_query(sql: str).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;An agent could now answer a prompt like:&lt;/p&gt;
&lt;p&gt;“What were APAC sales in Q4?”&lt;/p&gt;
&lt;p&gt;The Postgres MCP Server abstracts the SQL, safely executes it, and returns clean, structured results — instantly usable by the agent.&lt;/p&gt;
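&lt;p&gt;To illustrate the kind of abstraction such a server provides, here is a hypothetical sketch of a get_sales handler written with psycopg2. The table, columns, and connection string are invented for the example and are not part of any published MCP server.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Hypothetical sketch of what a Postgres-backed MCP tool handler might do
import psycopg2

def get_sales(region: str, quarter: str) -&gt; list:
    &quot;&quot;&quot;Turn a structured tool call into a parameterized SQL query (schema invented).&quot;&quot;&quot;
    conn = psycopg2.connect(&quot;dbname=sales user=agent password=secret host=db.internal&quot;)
    try:
        with conn.cursor() as cur:
            cur.execute(
                &quot;SELECT product, SUM(amount) FROM sales &quot;
                &quot;WHERE region = %s AND quarter = %s GROUP BY product&quot;,
                (region, quarter),
            )
            return cur.fetchall()  # clean, structured rows the agent can reason over
    finally:
        conn.close()

# An agent asked &quot;What were APAC sales in Q4?&quot; would end up calling:
# get_sales(region=&quot;APAC&quot;, quarter=&quot;Q4&quot;)
&lt;/code&gt;&lt;/pre&gt;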
&lt;p&gt;&lt;strong&gt;Leading startups driving MCP adoption&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;While &lt;strong&gt;Part 8&lt;/strong&gt; will go deeper into startup ecosystems, here are some notable names in the industry who are building or supporting MCP infrastructure:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Qdrant&lt;/li&gt;
&lt;li&gt;LangChain&lt;/li&gt;
&lt;li&gt;AutoGen by Microsoft&lt;/li&gt;
&lt;li&gt;OpenDevin&lt;/li&gt;
&lt;li&gt;Auto-GPT (community forks)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These players are shaping a plug-and-play AI world where tools and agents speak a common protocol.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;MCP is more than a technical standard — it&apos;s a &lt;strong&gt;philosophy of interoperability&lt;/strong&gt; for the agentic era. It shifts AI from being a passive responder to an active participant in real-world systems. With MCP, agents don’t just have the ability to talk — they gain the &lt;strong&gt;power to think, act, adapt, and connect&lt;/strong&gt; meaningfully.&lt;/p&gt;
&lt;p&gt;As we continue this series, &lt;a href=&quot;https://developer.hpe.com/blog/part-4-the-rise-of-agentic-ai-and-the-power-of-the-agno-framework/&quot;&gt;the next chapter&lt;/a&gt; will spotlight a top Agentic AI framework and reveal how it uses MCP to orchestrate intelligent, autonomous workflows across environments.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;If you’re building with AI — or planning to — MCP is the connective tissue you can’t afford to ignore.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;</content:encoded></item><item><title><![CDATA[How to use the DSCC API and Ansible to collect a storage configuration]]></title><description><![CDATA[Capturing the current storage configuration to verify it against best practices or configuration rules is something that customer request…]]></description><link>https://developer.hpe.com/how-to-use-dscc-api-and-ansible-to-collect-the-storage-configuration/</link><guid isPermaLink="false">https://developer.hpe.com/how-to-use-dscc-api-and-ansible-to-collect-the-storage-configuration/</guid><pubDate>Tue, 15 Jul 2025 12:18:34 GMT</pubDate><content:encoded>&lt;style&gt; li { font-size: 27px; line-height: 33px; max-width: none; } &lt;/style&gt;
&lt;p&gt;Capturing the current storage configuration to verify it against best practices or configuration rules is something that customers request regularly. If the customer uses Ansible as their automation platform, the &lt;a href=&quot;https://github.com/HewlettPackard/hpe3par_ansible_module?tab=readme-ov-file&quot;&gt;HPE 3PAR Ansible module&lt;/a&gt; can be used to create and delete hosts, volumes, etc., but it is not really a solution for gathering the complete configuration.&lt;/p&gt;
&lt;p&gt;Furthermore, this module uses the WSAPI of individual Alletra storage systems. The HPE Data Services Cloud Console (DSCC) would be the better option to collect storage configuration data of multiple systems, even those that might be distributed across multiple sites. Through a single location, the DSCC would be able to get the data of all storage systems.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.hpe.com/blog/automating-operations-on-dscc-using-ansible-playbooks/&quot;&gt;Ansible playbooks for the DSCC&lt;/a&gt; were discussed in one of the previous HPE Developer Community blogs. The playbooks offer fact gathering for storage systems, hosts and volumes. However, once you dig into the details, you will find that the modules have not been updated for more than two years and do not support the HPE Alletra MP B10000 storage array. In this blog post, I will discuss a possible approach for DSCC data gathering using Ansible built-in functionality to overcome the lack of continuous playbook development.&lt;/p&gt;
&lt;h1&gt;Capture the storage system configuration&lt;/h1&gt;
&lt;p&gt;Upon learning that the playbooks for the DSCC were not well maintained, I looked for a different way to capture the configuration data of all arrays of the HPE Customer Technology Center in Böblingen. The &lt;a href=&quot;https://github.com/HewlettPackard/hpe3par_ansible_module?tab=readme-ov-file&quot;&gt;HPE 3PAR Ansible module&lt;/a&gt; requires one to connect to each array individually and does not provide a complete capture of the array configuration. Hence, it is not a solution to the problem. A way forward would be to use the HPE Data Services Cloud Console and the corresponding Data Services REST API (the basics are already discussed in previous posts on the HPE Developer Community blog: &lt;a href=&quot;https://developer.hpe.com/greenlake/data-services-on-the-hpe-greenlake-platform/home/&quot;&gt;Data Services on the HPE GreenLake platform | HPE Developer Portal&lt;/a&gt;). The Data Services REST API offers a complete list of commands that can be issued on the DSCC.&lt;/p&gt;
&lt;p&gt;The configuration of a storage system generally includes the configuration data of the storage system itself, the details of the configured volumes of a storage array, the host group, and the host details. The first step of gathering the configuration information would be to get a list of storage arrays connected to the Data Services Cloud Console. Once you have the list, you can gather the details of each storage array. The &lt;a href=&quot;https://console-us1.data.cloud.hpe.com/doc/api/v1/&quot;&gt;Data Services REST API&lt;/a&gt; supports this data gathering by supplying, with every array, a list of associated links that refer to the controller, disk, and other related information of this array. An example of a REST API call response is given below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/getstoragesystems.png&quot; alt=&quot;&quot; title=&quot;Get Storage System API call&quot;&gt;&lt;/p&gt;
&lt;p&gt;In order to be independent of any Python library (or the lack of updates to a Python library), I have decided to use Ansible&apos;s built-in functionality to create the DSCC capture playbooks. The basic tasks used by the playbooks are the DSCC REST API call, implemented with the ansible.builtin.uri module, and, as a special variant of that call, the retrieval of the DSCC access token (which differs only in the URI used to obtain the token).&lt;/p&gt;
&lt;h1&gt;Basic tasks&lt;/h1&gt;
&lt;h2&gt;Retrieving a DSCC access token&lt;/h2&gt;
&lt;p&gt;The steps to first generate the client ID and the client secret used to access the DSCC REST API were already described in a post on the HPE Developer Community blog: &lt;a href=&quot;https://developer.hpe.com/blog/api-console-for-data-services-cloud-console/&quot;&gt;Using HPE GreenLake Console&apos;s API Gateway for Data Services Cloud Console&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Once you have your client ID and client secret, you can generate an access token that is valid for two hours. This access token will allow you to issue REST API calls to the Data Services Cloud Console, as it identifies you as the user that is linked with the client ID and secret to create the access token.  Hence, it is best practice to store the client ID and secret in a secure place.&lt;/p&gt;
&lt;p&gt;The code example below has the client credentials stored in the credentials.yml file, which is encrypted using ansible-vault. The current Ansible playbook stores the access token in a file that grants access only to the current user (hence, the access mode 600 for this file) to avoid misuse of the retrieved access token.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;- name: Include encrypted vars
  include_vars: credentials.yml

- name: Get Access Token
  ansible.builtin.uri:
    url: &quot;{{ sso_url }}&quot;
    headers:
      Content-Type: &quot;application/x-www-form-urlencoded&quot;
      Authorization: &quot;Basic {{ (dscc_id + &apos;:&apos; + dscc_secret) | b64encode }}&quot;
    method: POST
    body: &quot;grant_type=client_credentials&quot;
    validate_certs: false
  register: oauth_response

- name: Define header
  ansible.builtin.set_fact:
    token: &quot;Bearer {{ oauth_response.json.access_token }}&quot;

- name: Store Token
  ansible.builtin.copy:
    content: &quot;{{ token }}&quot;
    dest: &apos;vars/token.txt&apos;
    mode: &quot;0600&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;DSCC REST API call&lt;/h2&gt;
&lt;p&gt;A DSCC REST API call can be made with or without a request body and can have multiple responses depending on the actual API call. Nevertheless, it is good practice to build a modular code approach that uses a generalized REST API call to access the Data Services Cloud Console. The generalized DSCC REST API call, shown in the following code block, takes these parameters:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;requestUri (as mentioned in the &lt;a href=&quot;https://console-us1.data.cloud.hpe.com/doc/api/v1/&quot;&gt;Data Services REST API&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;request method (Get, Post, Delete, Put)&lt;/li&gt;
&lt;li&gt;request body (optional)&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;- name: Include encrypted vars
  include_vars: vars/credentials.yml

- name: Get Access Token
  ansible.builtin.set_fact:
    token: &quot;{{ lookup(&apos;file&apos;, &apos;vars/token.txt&apos;) }}&quot;

- name: Check the Method
  ansible.builtin.fail:
    msg: &quot;DSCC-API-CALL: RestAPI Method is not defined!&quot;
  when: method is not defined

- name: Check for the request Uri 
  ansible.builtin.fail:
    msg: &quot;DSCC-API-Call: Request URI is not defined!&quot;
  when: request_uri is not defined

- name: DSCC Command - {{request_uri}}
  ansible.builtin.uri:
    url: &quot;{{ base_url }}{{ request_uri }}&quot;
    headers:
      Authorization: &quot;{{ token }}&quot;
      Content-Type: &quot;application/json&quot;
    method: &quot;{{ method }}&quot;
    validate_certs: false
    status_code: [200, 201, 202, 401, 404]
  register: result
  when: body is not defined

- name: Set result status
  ansible.builtin.set_fact:
    status: &quot;{{ result.status }}&quot;
    tmpres: &quot;{{ result }}&quot;
  when: body is not defined

- name: DSCC Command with body {{request_uri}}
  ansible.builtin.uri:
    url: &quot;{{ base_url }}{{ request_uri }}&quot;
    headers:
      Authorization: &quot;{{ token }}&quot;
      Content-Type: &quot;application/json&quot;
    method: &quot;{{ method }}&quot;
    body_format: json
    body: &quot;{{ body | to_json }}&quot;
    validate_certs: false
    status_code: [200, 201, 202, 400, 401, 404]
  register: result2
  when: body is defined

- name: Set result status
  ansible.builtin.set_fact:
    status: &quot;{{ result2.status }}&quot;
    tmpres: &quot;{{ result2 }}&quot;
  when: body is defined

- name: Set response when status in [200, 201, 202, 401]
  ansible.builtin.set_fact:
    response: &quot;{{ tmpres }}&quot;
  when: status in [&apos;200&apos;, &apos;201&apos;, &apos;202&apos;,&apos;401&apos;]

- name: Undefine Response when status not in [200...]
  ansible.builtin.set_fact:
    response: &quot;&quot;
  when: status not in [&apos;200&apos;, &apos;201&apos;, &apos;202&apos;,&apos;401&apos;]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can see that it first retrieves the stored access token, then checks that the method and the request URI are available. Next, it issues the API call either with or without a call body, before the possible response status is checked and the call response is set.&lt;/p&gt;
&lt;h1&gt;System configuration capture&lt;/h1&gt;
&lt;p&gt;The complete workflow of the DSCC data capture is shown in the following flow diagram. First, the list of connected storage arrays is compiled and stored in a dictionary. Next, the playbook will loop through the storage array dictionary in order to capture the details of each connected storage array (this includes looping through all associated links of a storage array and the gathering of all volumes that are defined on the storage array). Afterwards, the host group and host details are also captured and stored.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/capturestorage-flowdiagram.png&quot; alt=&quot;&quot; title=&quot;Capture Storage System Flow Diagram&quot;&gt;&lt;/p&gt;
&lt;p&gt;This system configuration capture flow chart can now be implemented using the above mentioned basic task in combination with the correct request URIs and request bodies. You can see in the example below that the playbook first gets the list of storage arrays (request uri: /api/v1/storage-systems). If the command returns a status code of 401 (i.e. unauthorized access), it repeats the same call after retrieving a refreshed access token (this is the difference between the DSCC-API-Call.yaml and the DSCC-API-401.yaml playbook).  After successfully retrieving the system list, a system dictionary is populated first, followed by looping through the dictionary (Loop-Systems.yml playbook) and storing the system configuration information. Afterwards, the host group and hosts details are retrieved and stored.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;- hosts: localhost
  vars:
    method: &quot;GET&quot;

  tasks:
  - name: DSCC API Call GET storage systems
    vars:
      request_uri: &quot;/api/v1/storage-systems&quot; 
    ansible.builtin.include_tasks:
      file: DSCC-API-Call.yaml

  - name: Retry the command if status 401
    vars:
      request_uri: &quot;/api/v1/storage-systems&quot; 
    ansible.builtin.include_tasks:
      file: DSCC-API-401.yaml
    when: status == &apos;401&apos;

  - name: Set Systems
    ansible.builtin.set_fact:
      systems: &quot;{{ response.json[&apos;items&apos;] }}&quot;
    when: status in [&apos;200&apos;, &apos;201&apos;]

  - name: Initialize Storage system dictionary if not defined
    ansible.builtin.set_fact:
      storage_systems: &quot;{{ storage_systems | default({}) }}&quot;
  - name: Create StorageSystems Dictionary
    ansible.builtin.set_fact:
      storage_systems: &quot;{{ storage_systems | combine({item.name: {&apos;id&apos;: item.id, &apos;resourceUri&apos;: item.resourceUri}}) }}&quot;
    with_items: &quot;{{ systems }}&quot;

  - name: Loop Systems
    ansible.builtin.include_tasks:
      file: Loop-Systems.yaml
    with_dict: &quot;{{storage_systems}}&quot;
    loop_control:
      loop_var: my_system
  
  - name: Get HostGroups
    vars:
      request_uri: &quot;/api/v1/host-initiator-groups&quot;
    ansible.builtin.include_tasks:
      file: DSCC-API-Call.yaml    
  - name: Store the HostGroups
    ansible.builtin.copy:
      content: &quot;{{ response.json | to_nice_json }}&quot;
      dest: &quot;../Outputs/hostGroups.json&quot;
      mode: &apos;0600&apos;
    when: response.json is defined
  
  - name: Get Hosts
    ansible.builtin.include_tasks:
      file: GetAllHosts.yaml
  - name: Store the Hosts
    ansible.builtin.copy:
      content: &quot;{{ response.json | to_nice_json }}&quot;
      dest: &quot;../Outputs/hosts.json&quot;
      mode: &apos;0600&apos;
    when: response.json is defined 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The Loop-Systems.yaml playbook retrieves the storage system details and loops for each system through all the associated links of this array, providing you with a complete capture of the storage array configuration. The captured data is stored in multiple files with the naming structure: &lt;strong&gt;SystemName.associatedLink-Keyname.json.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The Ansible playbooks used to capture the system configuration are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Capture-Systems.yaml&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;DSCC-API-Call.yaml&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;DSCC-API-401.yaml&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Loop-Systems.yaml&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Loop-Links.yaml&lt;/li&gt;
&lt;li&gt;GetAllSystemVolumes.yaml&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;GetAllHosts.yaml&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In order to keep this blog post readable and not overloaded with code, only a few of the playbooks used are shown, but all playbooks (and even some more) can be retrieved on GitHub at &lt;a href=&quot;https://github.com/tbeha/DSCC-Ansible&quot;&gt;https://github.com/tbeha/DSCC-Ansible&lt;/a&gt;.&lt;/p&gt;
&lt;h1&gt;Conclusion&lt;/h1&gt;
&lt;p&gt;It is possible to use Ansible playbooks to capture the storage array configuration using the HPE Data Services Cloud Console REST API and built-in  Ansible functions. Having the storage array captured in one or multiple JSON-files is  leading to an obvious next step: use the captured configuration information to automate the redeployment of a storage array. This is one of my planned next activities. Stay tuned to the &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE Developer Community blog&lt;/a&gt; for more.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Build your first AI Chatbot on HPE Private Cloud AI using Flowise and HPE MLIS]]></title><description><![CDATA[In today’s AI-driven landscape, conversational interfaces are transforming how organizations interact with users and automate workflows…]]></description><link>https://developer.hpe.com/build-your-first-ai-chatbot-on-hpe-private-cloud-ai-using-flowise-and-hpe-mlis/</link><guid isPermaLink="false">https://developer.hpe.com/build-your-first-ai-chatbot-on-hpe-private-cloud-ai-using-flowise-and-hpe-mlis/</guid><pubDate>Fri, 11 Jul 2025 13:38:06 GMT</pubDate><content:encoded>&lt;p&gt;In today’s AI-driven landscape, conversational interfaces are transforming how organizations interact with users and automate workflows. Building a secure, scalable, and customizable chatbot solution requires robust infrastructure and flexible AI tooling. HPE Private Cloud AI provides a powerful platform for deploying and managing AI workloads, while Flowise and HPE Machine Learning Inference Software offer the tools to rapidly build, deploy, and manage chatbots powered by large language models (LLMs).&lt;/p&gt;
&lt;p&gt;This blog post walks you through deploying FlowiseAI on HPE PCAI to build a modern chatbot solution. By leveraging these technologies, organizations can accelerate chatbot development, ensure data privacy, and maintain full control over their AI lifecycle.&lt;/p&gt;
&lt;h2&gt;HPE Private Cloud AI&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.hpe.com/platform/hpe-private-cloud-ai/home/&quot;&gt;HPE Private Cloud AI (HPE PCAI)&lt;/a&gt; offers a comprehensive, turnkey AI solution designed to address key enterprise challenges, from selecting the appropriate LLMs to efficiently hosting and deploying them. Beyond these core functions, HPE Private Cloud AI empowers organizations to take full control of their AI adoption journey by offering a curated set of pre-integrated &lt;em&gt;NVIDIA Inference Microservices (NIM)&lt;/em&gt; LLMs, along with a powerful suite of AI tools and frameworks for data engineering, analytics, and data science.&lt;/p&gt;
&lt;p&gt;HPE Machine Learning Inference Software is a user-friendly solution designed to simplify and control the deployment, management, and monitoring of machine learning (ML) models, including LLMs, at any scale.&lt;/p&gt;
&lt;p&gt;HPE Private Cloud AI has pre-integrated NVIDIA NIM LLMs, a suite of AI tools (including HPE Machine Learning Inference Software), and a flexible &lt;em&gt;Import Framework&lt;/em&gt; that enables organizations to deploy their own applications or third-party solutions, like FlowiseAI.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/importframework.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;What is Flowise?&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://flowiseai.com/&quot;&gt;Flowise&lt;/a&gt; is an open source generative AI development platform for building AI Agents and LLM workflows. It provides a visual interface for designing conversational flows, integrating data sources, and connecting to various LLM endpoints. Flowise provides modular building blocks for you to build any agentic systems, from simple compositional workflows to autonomous agents.&lt;/p&gt;
&lt;h2&gt;Deploying Flowise via import framework&lt;/h2&gt;
&lt;h3&gt;1. Prepare the Helm charts&lt;/h3&gt;
&lt;p&gt;Obtain the Helm chart for Flowise v5.1.1 from &lt;a href=&quot;https://artifacthub.io/packages/helm/cowboysysop/flowise&quot;&gt;artifacthub.io&lt;/a&gt;. The following changes to the Helm chart are needed to deploy it on HPE Private Cloud AI.&lt;/p&gt;
&lt;p&gt;Add the following YAML manifest files to &lt;em&gt;templates/ezua/&lt;/em&gt; directory:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;virtualService.yaml&lt;/em&gt;: Defines an Istio &lt;em&gt;VirtualService&lt;/em&gt; to configure routing rules for incoming requests.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;kyverno.yaml&lt;/em&gt;: A Kyverno &lt;em&gt;ClusterPolicy&lt;/em&gt; that automatically adds required labels to the deployment.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Updates to the &lt;em&gt;values.yaml&lt;/em&gt; file:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Set resource request/limits.&lt;/li&gt;
&lt;li&gt;Update the PVC size&lt;/li&gt;
&lt;li&gt;Add the following &lt;em&gt;&apos;ezua&apos;&lt;/em&gt; section to configure the &lt;em&gt;Istio Gateway&lt;/em&gt; and expose the endpoint.&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code&gt;ezua:
  virtualService:
    endpoint: &quot;flowise.${DOMAIN_NAME}&quot;
    istioGateway: &quot;istio-system/ezaf-gateway&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here&apos;s the &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00aie18hen_us&amp;#x26;page=ManageClusters/importing-applications.html&quot;&gt;reference document&lt;/a&gt; for the import framework prerequisites.&lt;/p&gt;
&lt;p&gt;These updates are implemented in the revised Flowise Helm charts, and are available in the GitHub repository &lt;a href=&quot;https://github.com/ai-solution-eng/frameworks/tree/main/flowise&quot;&gt;ai-solution-eng/frameworks&lt;/a&gt;. With these customizations, &lt;em&gt;Flowise&lt;/em&gt; can now be deployed on HPE Private Cloud AI using the &lt;em&gt;Import Framework&lt;/em&gt;.&lt;/p&gt;
&lt;h3&gt;2. Deploy Flowise via the import framework&lt;/h3&gt;
&lt;p&gt;Use the import framework in HPE Private Cloud AI to deploy Flowise.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/flowise-deploy-1.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/flowise-deploy-2.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/flowise-deploy-3.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/flowise-deploy-4.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;3. Access Flowise UI via its endpoint&lt;/h3&gt;
&lt;p&gt;After deployment, Flowise will appear as a tile under &lt;em&gt;Tools &amp;#x26; Frameworks / Data Engineering&lt;/em&gt; tab.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/flowsie-deployed.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Click the &lt;em&gt;Open&lt;/em&gt; button on the &lt;em&gt;Flowise&lt;/em&gt; tile, or click on the &lt;em&gt;Endpoint&lt;/em&gt; URL to launch the Flowise login page. Set up the credentials and log in.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/flowise-home-7-11-2025.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;Deploy a LLM in HPE MLIS&lt;/h2&gt;
&lt;p&gt;HPE MLIS is accessed by clicking on &lt;em&gt;HPE MLIS&lt;/em&gt; tile in &lt;em&gt;Tools &amp;#x26; Frameworks / Data Engineering&lt;/em&gt; tab.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/mlis.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;To deploy a pre-packaged LLM (Meta/Llama3-8b-instruct) in HPE MLIS, you need to know how to add a registry, a packaged model, and how to create deployments.&lt;/p&gt;
&lt;h3&gt;1. Adding a registry&lt;/h3&gt;
&lt;p&gt;You&apos;ll first want to add a new registry called &quot;NGC&quot;, which refers to NVIDIA GPU Cloud. This can be used to access pre-packaged LLMs.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/mlis-registry.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;2. Adding a packaged model&lt;/h3&gt;
&lt;p&gt;Create a new packaged model by clicking the &lt;em&gt;Add New Model&lt;/em&gt; tab. Fill in the details as shown in the below screen shots.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/package-model-1.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Choose the registry created in the previous step and select &apos;meta/llama-3.1-8b-instruct&apos; for the &lt;em&gt;NGC Supported Models&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/package-model-2.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Set the right resources required for the model. Do this by choosing from either a built-in template or &quot;custom&quot; in the &lt;em&gt;Resource Template&lt;/em&gt; section.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/package-model-3.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/package-model-4.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The newly created packaged model appears in the UI.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/package-model-final.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;3. Creating deployments&lt;/h3&gt;
&lt;p&gt;Using the packaged model created in the previous step, create a new deployment by clicking on &lt;em&gt;Create new deployment.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/deployment-1.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Give a name to the deployment and choose the packaged model created in the previous step.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/deployment-2.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/deployment-3.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Set auto scaling as required. In this example, we have used &apos;fixed-1&apos; template.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/deployment-4.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/deployment-5.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The LLM is now deployed and can be accessed using the endpoint and corresponding API token.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/deployment-6.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Create AI Chatbot in Flowise&lt;/h2&gt;
&lt;p&gt;Use Flowise&apos;s drag-and-drop interface to design your chatbot’s conversational flow. Integrate with HPE MLIS by adding an LLM node and configuring it to use the MLIS inference endpoint.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Add New Chatflow:&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/chatflow-1.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Save the Chatflow using the name &quot;AI Chatbot&quot; and add the following nodes, making the connections shown in the screenshot.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Chat Models (Chat NVIDIA NIM):&lt;/strong&gt; Set Deployment &apos;Endpoint&apos; from HPE MLIS as &apos;Base Path&apos;, corresponding &apos;Model Name&apos; and &apos;API Key&apos; from HPE MLIS for &apos;Connect Credential&apos;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Memory (Buffer Window Memory):&lt;/strong&gt; Set appropriate &apos;Size&apos;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Chains (Conversation Chain):&lt;/strong&gt; Connect &apos;Chat NVIDIA NIM&apos; and &apos;Buffer Window Memory&apos; nodes as shown.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/chatflow-2.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Your new AI Chatbot is now ready! You may quickly test it by clicking the chat icon on the top right corner of the screen.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/chatflow-3.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Accessing AI Chatbot from external applications&lt;/h3&gt;
&lt;p&gt;Flowise provides an API endpoint for the chatbot, with multiple ways of integrating it with your applications. Also, you may explore multiple configurations that are available to enhance the chatbot.&lt;/p&gt;
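&lt;p&gt;As a quick illustration, the snippet below calls the chatflow&apos;s prediction endpoint with Python requests. The base URL, chatflow ID, and API key are placeholders, and the exact path and payload should be confirmed against the Flowise API documentation for your version.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import requests

# Placeholders: replace with your Flowise endpoint, chatflow ID, and API key
FLOWISE_URL = &quot;https://flowise.example.com&quot;
CHATFLOW_ID = &quot;your-chatflow-id&quot;
API_KEY = &quot;your-api-key&quot;

def ask_chatbot(question: str) -&gt; str:
    # Prediction endpoint path as documented by Flowise; verify for your version
    resp = requests.post(
        f&quot;{FLOWISE_URL}/api/v1/prediction/{CHATFLOW_ID}&quot;,
        headers={&quot;Authorization&quot;: f&quot;Bearer {API_KEY}&quot;},
        json={&quot;question&quot;: question},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json().get(&quot;text&quot;, &quot;&quot;)

print(ask_chatbot(&quot;What can you help me with?&quot;))
&lt;/code&gt;&lt;/pre&gt;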
&lt;p&gt;&lt;img src=&quot;/img/chatflow-4.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;By combining Flowise’s intuitive chatbot builder with HPE MLIS’s robust model management, HPE Private Cloud AI empowers organizations to rapidly develop, deploy, and govern conversational AI solutions. This integrated approach ensures data privacy, operational control, and scalability for enterprise chatbot deployments.&lt;/p&gt;
&lt;p&gt;Stay tuned to the &lt;a href=&quot;https://developer.hpe.com/blog/&quot;&gt;HPE Developer Community blog&lt;/a&gt; for more guides and best practices on leveraging HPE Private Cloud AI for your AI.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Part 2: What makes AI agents truly intelligent]]></title><description><![CDATA[In the first part of this series, I discussed the shift from passive large language models to more capable, action-oriented AI. Now, I will…]]></description><link>https://developer.hpe.com/from-generative-to-agentic-ai-—-part-2-what-makes-ai-agents-truly-intelligent/</link><guid isPermaLink="false">https://developer.hpe.com/from-generative-to-agentic-ai-—-part-2-what-makes-ai-agents-truly-intelligent/</guid><pubDate>Wed, 09 Jul 2025 04:15:26 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;p&gt;In the &lt;a href=&quot;https://developer.hpe.com/blog/from-generative-to-agentic-ai-tracing-the-leap-from-words-to-actions/&quot;&gt;first part of this series&lt;/a&gt;, I discussed the shift from passive large language models to more capable, action-oriented AI. Now, I will provide a closer look at what actually powers this transformation — the concept of the AI agent. Far from being just an advanced chatbot, an agent is a structured system that can understand, plan, execute, and respond — much like a real-world assistant, only faster and smarter.&lt;/p&gt;
&lt;p&gt;Inspired by &lt;a href=&quot;https://dineshr1493.medium.com/all-you-need-to-know-about-the-evolution-of-generative-ai-to-agentic-ai-part-2-agentic-ai-74dcf045aff0&quot;&gt;my post on Medium&lt;/a&gt;, this post builds upon the original work with added clarity, practical examples, and a more conversational tone to help you truly grasp how agentic AI is reshaping automation across industries.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/screenshot-2025-07-18-at-4.10.42 pm.png&quot; width=&quot;500&quot; height=&quot;542&quot; alt=&quot;AI agent Framework&quot; title=&quot;AI agent Framework&quot;&gt;&lt;/center&gt;
&lt;h2&gt;What are AI agents?&lt;/h2&gt;
&lt;p&gt;An AI agent is not just something that responds to prompts — it’s something that takes initiative. Unlike traditional LLMs, which generate output only when asked, agents can independently decide what actions to take, how to take them, and when to stop.&lt;/p&gt;
&lt;p&gt;Here’s what makes an agent different:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;It can decide how to solve a problem based on context.&lt;/li&gt;
&lt;li&gt;It can use tools such as APIs, search engines, or databases.&lt;/li&gt;
&lt;li&gt;It can take real actions, like analyzing data, sending emails, or making reservations.&lt;/li&gt;
&lt;li&gt;It can break down large tasks into smaller, manageable steps and complete them autonomously.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In essence, an AI agent behaves more like a virtual assistant capable of doing actual work — not just holding a conversation.&lt;/p&gt;
&lt;h2&gt;The agent workflow: Think → Plan → Act → Respond&lt;/h2&gt;
&lt;p&gt;The heart of an agentic system lies in this continuous loop:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Think:&lt;/strong&gt; It starts with understanding the objective or problem at hand.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Plan:&lt;/strong&gt; Based on that understanding, it creates a strategy — often a sequence of steps.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Act:&lt;/strong&gt; It then begins executing the plan, calling tools, retrieving data, or initiating actions.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Respond:&lt;/strong&gt; Finally, it summarizes or communicates the results — or loops back to continue solving.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This cycle allows agents to operate with minimal human intervention, even on complex, multi-step workflows.&lt;/p&gt;
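&lt;p&gt;A tiny, framework-agnostic sketch can make the cycle easier to picture; every step below is a hypothetical stub that a real agent would replace with model calls and tool invocations.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Minimal Think -&gt; Plan -&gt; Act -&gt; Respond loop (all steps are hypothetical stubs)
from typing import List

def think(goal: str) -&gt; str:
    return f&quot;Objective understood: {goal}&quot;

def plan(understanding: str) -&gt; List[str]:
    return [&quot;gather data&quot;, &quot;analyze data&quot;, &quot;draft summary&quot;]

def act(step: str) -&gt; str:
    return f&quot;done: {step}&quot;  # a real agent would call tools or APIs here

def respond(results: List[str]) -&gt; str:
    return &quot;Summary of work: &quot; + &quot;, &quot;.join(results)

goal = &quot;prepare a weekly sales report&quot;
steps = plan(think(goal))
results = [act(step) for step in steps]
print(respond(results))
&lt;/code&gt;&lt;/pre&gt;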
&lt;h2&gt;Real-world impact: How agents are already changing industries&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Healthcare:&lt;/strong&gt; AI agents can retrieve patient history, summarize medical notes, and monitor for critical conditions, aiding doctors in real-time.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Finance:&lt;/strong&gt; Agents can analyze markets, detect fraud, and automate reporting — operating at speeds no human team could match.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Customer support&lt;/strong&gt;: Instead of generic replies, agents can pull data from CRM systems, open service tickets, or resolve technical issues directly.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;IT and DevOps:&lt;/strong&gt; Agents are now monitoring systems, fixing bugs, deploying updates — all without waiting on a human operator.&lt;/p&gt;
&lt;p&gt;These are not theoretical applications — they are happening right now, streamlining operations and reducing bottlenecks across the board.&lt;/p&gt;
&lt;h2&gt;Agents + Tools = Real-world superpowers&lt;/h2&gt;
&lt;p&gt;What truly empowers agents is their ability to interface with tools. Think of APIs, web services, internal databases, scripts, and even IoT systems. These integrations allow agents to interact with the real world, not just the digital one.&lt;/p&gt;
&lt;p&gt;For example:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Google Maps or Search APIs.&lt;/li&gt;
&lt;li&gt;CRM and ERP databases.&lt;/li&gt;
&lt;li&gt;Automation scripts for cloud platforms or internal workflows.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;When agents are connected to tools, they don’t just think — they execute.&lt;/p&gt;
&lt;h2&gt;Need for AI agents&lt;/h2&gt;
&lt;p&gt;The value of AI agents lies in their ability to scale thinking into action:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;They automate entire workflows — not just single responses.&lt;/li&gt;
&lt;li&gt;They handle decision-making on the fly.&lt;/li&gt;
&lt;li&gt;They adapt to changing inputs and data.&lt;/li&gt;
&lt;li&gt;They reduce repetitive manual work across industries.&lt;/li&gt;
&lt;li&gt;They can collaborate as multi-agent teams to solve broader, interconnected problems.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Top 5 frameworks to build AI agents&lt;/h2&gt;
&lt;p&gt;If you&apos;re ready to build with agents, here are the top frameworks that developers swear by:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Auto-GPT&lt;/li&gt;
&lt;li&gt;BabyAGI&lt;/li&gt;
&lt;li&gt;AGNO&lt;/li&gt;
&lt;li&gt;LangChain&lt;/li&gt;
&lt;li&gt;Autogen by Microsoft&lt;/li&gt;
&lt;/ul&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Framework&lt;/th&gt;
&lt;th&gt;What it does&lt;/th&gt;
&lt;th&gt;GitHub Link&lt;/th&gt;
&lt;th&gt;⭐ Stars&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;AGNO (Phidata)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Build fast, multi-modal agents (text + images)&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/phidatahq/phidata&quot;&gt;AGNO GitHub&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;⭐ 21.5k&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Auto-GPT&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;The first viral agent that automates tasks&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/Torantulino/Auto-GPT&quot;&gt;Auto-GPT GitHub&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;⭐ 174k&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;BabyAGI&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Agents that manage task lists autonomously&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/yoheinakajima/babyagi&quot;&gt;BabyAGI GitHub&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;⭐ 21.2k&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;LangChain Agents&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Powerful and flexible agent toolkit for developers&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/langchain-ai/langchain&quot;&gt;LangChain GitHub&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;⭐ 104k&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Microsoft Autogen&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Build multi-agent systems that work together&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/microsoft/autogen&quot;&gt;Autogen GitHub&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;⭐ 42k&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Each offers a different approach — some focus on chaining tasks, others on autonomy and memory. Together, they make it easier than ever to bring agentic AI to life.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;AI agents are no longer a futuristic idea — they’re here, and they’re transforming how work gets done. By combining decision-making, planning, and tool usage, agents represent the leap from intelligent text generation to intelligent action. They’re bridging the gap between knowing what needs to be done and actually doing it.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.hpe.com/blog/model-context-protocol-mcp-the-protocol-that-powers-ai-agents/&quot;&gt;In Part 3 of this series,&lt;/a&gt; I&apos;ll dig deeper into the architecture behind agentic systems — what components make them tick, how memory and feedback loops work, and how they can scale. If you&apos;re building the future or just trying to understand it, you&apos;re in the right place.&lt;/p&gt;
&lt;p&gt;Until then, keep watching the space where AI stops being a helper… and becomes a doer&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Morpheus VM Essentials, GreenLake webhooks, hybrid observability & Chapel news]]></title><link>https://developer.hpe.com/2025-july-07/</link><guid isPermaLink="false">https://developer.hpe.com/2025-july-07/</guid><pubDate>Mon, 07 Jul 2025 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[From generative to agentic AI: Tracing the leap from words to actions]]></title><description><![CDATA[AI has come a long way from simply finishing our sentences. Today, it’s not just generating content — it’s actively solving problems, making…]]></description><link>https://developer.hpe.com/from-generative-to-agentic-ai-tracing-the-leap-from-words-to-actions/</link><guid isPermaLink="false">https://developer.hpe.com/from-generative-to-agentic-ai-tracing-the-leap-from-words-to-actions/</guid><pubDate>Thu, 03 Jul 2025 10:33:21 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;p&gt;AI has come a long way from simply finishing our sentences. Today, it’s not just generating content — it’s actively solving problems, making decisions, and executing complex tasks. This blog post kicks off a 10-part series where I&apos;ll trace that incredible journey — from basic generative models to fully autonomous agents. Along the way, I’ll unpack the key shifts, architectures, and mindsets that shaped this evolution.&lt;/p&gt;
&lt;p&gt;Inspired by &lt;a href=&quot;https://dineshr1493.medium.com/all-you-need-to-know-about-the-evolution-of-generative-ai-to-agentic-ai-65de72254a86&quot;&gt;my post on Medium&lt;/a&gt;, this piece reimagines and expands on the original with a human-first lens and practical clarity.&lt;/p&gt;
&lt;p&gt;Whether you&apos;re an AI developer, tech leader, or just curious about where all this is headed — welcome. Let’s dive in.&lt;/p&gt;
&lt;h2&gt;Phase 1: LLMs — The linguistic powerhouse&lt;/h2&gt;
&lt;p&gt;Large Language Models (LLMs) like GPT, DeepSeek, QWEN, and LLaMA burst onto the scene with one incredible skill — understanding and generating human language. These models are trained on massive datasets and excel at:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Multilingual conversations&lt;/li&gt;
&lt;li&gt;Summarization, classification, and text generation&lt;/li&gt;
&lt;li&gt;Contextual prediction based on vast patterns&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;But here’s the catch:&lt;/p&gt;
&lt;p&gt;LLMs are great at “saying” things… but they don’t do anything.&lt;/p&gt;
&lt;p&gt;On their own, LLMs are like brilliant thinkers without hands — capable of deep analysis, but unable to act in the real world.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/llms.png&quot; width=&quot;600&quot; height=&quot;550&quot; alt=&quot;LLM Evolution&quot; title=&quot;LLM Evolution&quot;&gt;&lt;/center&gt;
&lt;h2&gt;Phase 2: LLMs + Tools — Giving the brain some hands&lt;/h2&gt;
&lt;p&gt;The next leap came when developers began connecting LLMs with external tools — APIs, plugins, databases, and custom workflows. This simple but powerful integration gave models the ability to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Search the web (like Perplexity AI)&lt;/li&gt;
&lt;li&gt;Execute code and commands&lt;/li&gt;
&lt;li&gt;Fetch real-time or contextual information&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This expanded what AI could do. Suddenly, the models weren’t just conversational — they became useful assistants.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;But there was still a problem:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Tool-based systems are fragile. APIs break, schemas change, and workflows can become unreliable.&lt;/p&gt;
&lt;p&gt;Think of it like giving a brain a set of hands — but the hands don’t always listen, or worse, they change shape every other week.&lt;/p&gt;
&lt;h2&gt;Phase 3: LLMs + Agents — The rise of agentic AI&lt;/h2&gt;
&lt;p&gt;This is where things get truly exciting.&lt;/p&gt;
&lt;p&gt;Agentic AI introduces a new layer of intelligence: autonomy. Instead of the model responding directly to every input, agentic systems:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Set goals&lt;/li&gt;
&lt;li&gt;Break them into tasks&lt;/li&gt;
&lt;li&gt;Select and operate tools&lt;/li&gt;
&lt;li&gt;Make iterative decisions&lt;/li&gt;
&lt;li&gt;Learn from outcomes&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In essence, AI stops being reactive and starts becoming proactive. These agents operate like digital coordinators — orchestrating actions, delegating responsibilities, and adjusting course as needed. They move beyond simple tasks and begin solving complex workflows.&lt;/p&gt;
&lt;p&gt;This isn’t just a better assistant — it’s the early form of AI co-workers.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;TL;DR Breakdown&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;LLMs = Great with words, but passive&lt;/li&gt;
&lt;li&gt;LLMs + Tools = Adds capabilities, but brittle and manual&lt;/li&gt;
&lt;li&gt;LLMs + Agents = Autonomous systems that think, plan, and act&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;We’ve moved from “talking AI” to “doing AI.”&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;The shift from generative to agentic AI is more than just a technical upgrade — it’s a philosophical turning point in how we think about artificial intelligence. We’re no longer training machines to just converse with us; we’re teaching them to collaborate, adapt, and even take initiative. Agentic AI is the foundation for everything from self-operating software agents to autonomous business logic.&lt;/p&gt;
&lt;p&gt;In the next part of this series, I’ll peel back the curtain on how agentic architectures actually work — the brains behind the autonomy. Until then, consider this: the next time you interact with an AI, it may not just be listening… it may already be planning your next move.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Setting up Harbor as a local container registry in HPE Private Cloud AI]]></title><description><![CDATA[A container registry serves as a centralized system for storing and managing container images. In today’s fast-paced containerized…]]></description><link>https://developer.hpe.com/setting-up-harbor-as-a-local-container-registry-in-hpe-private-cloud-ai/</link><guid isPermaLink="false">https://developer.hpe.com/setting-up-harbor-as-a-local-container-registry-in-hpe-private-cloud-ai/</guid><pubDate>Thu, 03 Jul 2025 07:21:44 GMT</pubDate><content:encoded>&lt;style&gt; li { font-size: 27px; line-height: 33px; max-width: none; } &lt;/style&gt;
&lt;p&gt;A container registry serves as a centralized system for storing and managing container images. In today’s fast-paced containerized application development landscape, speed, security and control over container workflows using a robust container registry are critical. While both cloud-based container registries, such as Google Container Registry (&lt;em&gt;GCR&lt;/em&gt;), Azure Container Registry (&lt;em&gt;ACR&lt;/em&gt;), and Amazon Elastic Container Registry (&lt;em&gt;ECR&lt;/em&gt;), and third-party services like &lt;em&gt;DockerHub&lt;/em&gt;, &lt;em&gt;GitHub&lt;/em&gt; / &lt;em&gt;GitLab&lt;/em&gt; Container Registry, and &lt;em&gt;JFrog&lt;/em&gt; Container Registry, offer convenience, organizations often face challenges with latency, external dependencies, and security compliance constraints.&lt;/p&gt;
&lt;p&gt;This blog post describes the process of deploying &lt;em&gt;Harbor&lt;/em&gt; and setting it up as a local container registry within &lt;em&gt;HPE Private Cloud AI&lt;/em&gt;. By using &lt;em&gt;Harbor&lt;/em&gt; as a local container registry, organizations gain faster image access, reduced dependence on external networks, improved security, and a tailored registry environment that aligns with internal compliance and governance needs.&lt;/p&gt;
&lt;h2&gt;HPE Private Cloud AI&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.hpe.com/platform/hpe-private-cloud-ai/home/&quot;&gt;HPE Private Cloud AI (HPE PCAI)&lt;/a&gt; offers a comprehensive, turnkey AI solution designed to address key enterprise challenges, from selecting the appropriate large language models (LLMs) to efficiently hosting and deploying them. Beyond these core functions, HPE PCAI empowers organizations to take full control of their AI adoption journey by offering a curated set of pre-integrated &lt;em&gt;NVIDIA NIM&lt;/em&gt; LLMs, along with a powerful suite of AI tools and frameworks for &lt;em&gt;Data Engineering&lt;/em&gt;, &lt;em&gt;Analytics&lt;/em&gt;, and &lt;em&gt;Data Science&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;The &lt;em&gt;Import Framework&lt;/em&gt; in HPE PCAI further enhances flexibility by enabling organizations to integrate their own applications or third-party solutions alongside pre-installed components, accommodating a wide range of enterprise-specific use cases.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/pcai-import-framework.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;This blog post guides you through the step-by-step process of deploying the open-source &lt;em&gt;Harbor&lt;/em&gt; into HPE PCAI using the &lt;em&gt;Import Framework&lt;/em&gt;. Once deployed and configured, &lt;em&gt;Harbor&lt;/em&gt; can serve as a local container registry within HPE PCAI. With key features such as policy management, role-based access control (RBAC), security scanning, and image signing, &lt;em&gt;Harbor&lt;/em&gt; strengthens container lifecycle security and governance.&lt;/p&gt;
&lt;h2&gt;Prerequisites&lt;/h2&gt;
&lt;p&gt;Before starting, make sure that &lt;a href=&quot;https://docs.docker.com/engine/install/&quot;&gt;Docker Engine&lt;/a&gt;, version &lt;em&gt;28.1.1&lt;/em&gt; or later, is installed, including the default &lt;em&gt;docker&lt;/em&gt; CLI, which will be used for building and pushing images.&lt;/p&gt;
&lt;p&gt;The following sections show application deployment details using the &lt;em&gt;kubectl&lt;/em&gt; CLI and &lt;em&gt;kubeconfig&lt;/em&gt; to access the HPE PCAI Kubernetes (K8s) cluster. However, direct cluster access via &lt;em&gt;kubectl&lt;/em&gt; is generally not required.&lt;/p&gt;
&lt;h2&gt;Harbor&lt;/h2&gt;
&lt;p&gt;&lt;em&gt;Harbor&lt;/em&gt; is an open-source container registry designed for cloud-native environments like K8s. It securely stores and manages container images with policies and RBAC, ensures images are scanned and free from vulnerabilities, and signs images as trusted.&lt;/p&gt;
&lt;p&gt;The following sections describe in detail how to deploy &lt;em&gt;Harbor&lt;/em&gt; into HPE PCAI using the &lt;em&gt;Import Framework&lt;/em&gt;. You will learn how to create a private project, create users and assign them with specific role permissions, and push images using &lt;em&gt;Harbor&lt;/em&gt; credentials. Used as a local image registry within HPE PCAI, &lt;em&gt;Harbor&lt;/em&gt; helps ensure your container images remain secure and well governed.&lt;/p&gt;
&lt;h3&gt;Harbor deployment via HPE PCAI &lt;em&gt;Import Framework&lt;/em&gt;&lt;/h3&gt;
&lt;p&gt;Based on the latest Helm charts from the official &lt;a href=&quot;https://helm.goharbor.io/harbor-1.17.0.tgz&quot;&gt;&lt;em&gt;Harbor&lt;/em&gt; site&lt;/a&gt;, the following YAML manifest files have been added under the &lt;em&gt;templates/ezua/&lt;/em&gt; directory:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;virtualService.yaml&lt;/em&gt;: Defines an Istio &lt;em&gt;VirtualService&lt;/em&gt; to configure routing rules for incoming requests.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;kyverno-cluster-policy&lt;/em&gt;: A Kyverno &lt;em&gt;ClusterPolicy&lt;/em&gt; that automatically adds required labels to the deployment.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Additionally, the default &lt;em&gt;values.yaml&lt;/em&gt; file has been modified with the following updates:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;introduced an &lt;em&gt;ezua&lt;/em&gt; section to configure the &lt;em&gt;Istio Gateway&lt;/em&gt; and expose the endpoint:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;      ezua:

        virtualService:
          endpoint: &quot;harbor.${DOMAIN_NAME}&quot;
          istioGateway: &quot;istio-system/ezaf-gateway&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;expanded &lt;em&gt;Harbor&lt;/em&gt; registry storage from the default &lt;em&gt;5G&lt;/em&gt; to &lt;em&gt;500G&lt;/em&gt;:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;      persistence.persistentVolumeClaim.registry.size = 500G
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;These updates are implemented in the revised &lt;em&gt;Harbor&lt;/em&gt; Helm charts, available in the &lt;em&gt;GitHub&lt;/em&gt; repository &lt;a href=&quot;https://github.com/GuopingJia/pcai-helm-examples/tree/main/harbor&quot;&gt;&lt;em&gt;pcai-helm-examples&lt;/em&gt;&lt;/a&gt;. With these customizations, &lt;em&gt;Harbor&lt;/em&gt; can be easily deployed into HPE PCAI using the &lt;em&gt;Import Framework&lt;/em&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/import-harbor.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
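&lt;p&gt;If you want to inspect or adapt the revised charts before importing them, you can fetch and package them locally first. The following is a minimal sketch; it assumes only git and the Helm CLI are installed and that the chart sits at the root of the &lt;em&gt;harbor/&lt;/em&gt; directory, and the exact version in the generated archive name will vary:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Clone the example charts and package the customized Harbor chart
$ git clone https://github.com/GuopingJia/pcai-helm-examples.git
$ cd pcai-helm-examples/harbor

# Produces a harbor-&amp;#x3C;version&gt;.tgz archive that can then be uploaded through the Import Framework
$ helm package .
&lt;/code&gt;&lt;/pre&gt;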
&lt;h3&gt;Harbor UI access via its endpoint&lt;/h3&gt;
&lt;p&gt;After &lt;em&gt;Harbor&lt;/em&gt; is deployed via the HPE PCAI &lt;em&gt;Import Framework&lt;/em&gt;, an &lt;strong&gt;Imported&lt;/strong&gt; &lt;em&gt;Harbor&lt;/em&gt; tile appears under &lt;em&gt;Tools &amp;#x26; Frameworks&lt;/em&gt; on the &lt;em&gt;Data Science&lt;/em&gt; tab. A service endpoint, e.g., &lt;em&gt;&lt;a href=&quot;https://harbor.ingress.pcai0104.ld7.hpecolo.net&quot;&gt;https://harbor.ingress.pcai0104.ld7.hpecolo.net&lt;/a&gt;&lt;/em&gt;, is automatically configured and exposed, providing access to &lt;em&gt;Harbor&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/harbor-deployment.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Click the &lt;em&gt;Open&lt;/em&gt; button, or paste the endpoint URL into your browser, to launch the &lt;em&gt;Harbor&lt;/em&gt; login page:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/harbor-login.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;From there, you can log in to the &lt;em&gt;Harbor&lt;/em&gt; projects page using the default &lt;em&gt;admin&lt;/em&gt; user credentials:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/harbor-ui.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Harbor project and user creation&lt;/h3&gt;
&lt;p&gt;&lt;em&gt;Harbor&lt;/em&gt; manages container images through projects, each of which hosts the image repositories for your application. Before pushing images to &lt;em&gt;Harbor&lt;/em&gt;, a project must first be created. A default public project named &lt;em&gt;library&lt;/em&gt; is pre-created, but new projects can be created by clicking &lt;em&gt;+ NEW PROJECT&lt;/em&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/create-project.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;To enhance security, it&apos;s recommended to create &lt;em&gt;private&lt;/em&gt; projects in &lt;em&gt;Harbor&lt;/em&gt; to prevent unauthorized image pulls. In this blog post, a private project named &lt;em&gt;demo&lt;/em&gt; is created with an unlimited quota (&lt;strong&gt;-1&lt;/strong&gt;). For production environments, applying a defined quota, e.g., &lt;em&gt;200G&lt;/em&gt;, can help manage registry storage capacity more effectively.&lt;/p&gt;
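&lt;p&gt;As an alternative to the UI, Harbor also exposes a REST API for project management. The sketch below is illustrative only; the path and payload reflect Harbor&apos;s v2.0 API as commonly documented, so adjust them to your Harbor version and replace the admin password:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Create a private project named &apos;demo&apos; with an unlimited storage quota (-1)
$ curl -k -u admin:YOUR_ADMIN_PASSWORD \
     -X POST https://harbor.ingress.pcai0104.ld7.hpecolo.net/api/v2.0/projects \
     -H &quot;Content-Type: application/json&quot; \
     -d &apos;{&quot;project_name&quot;: &quot;demo&quot;, &quot;metadata&quot;: {&quot;public&quot;: &quot;false&quot;}, &quot;storage_limit&quot;: -1}&apos;
&lt;/code&gt;&lt;/pre&gt;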
&lt;p&gt;After creating the project, users can be created and assigned to projects using role-based access control (RBAC). In this blog post, two sample users, &lt;em&gt;pcai-developer&lt;/em&gt; and &lt;em&gt;pcai-admin&lt;/em&gt;, are created:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/two-users-harbor.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;These users, along with the default &lt;em&gt;admin&lt;/em&gt; user, are added to the project &lt;em&gt;demo&lt;/em&gt; with distinct roles:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;pcai-developer&lt;/em&gt; has the &lt;strong&gt;Developer&lt;/strong&gt; role (with read/write access to project)&lt;/li&gt;
&lt;li&gt;&lt;em&gt;pcai-admin&lt;/em&gt; is assigned the &lt;strong&gt;Maintainer&lt;/strong&gt; role, with extended privileges including image scanning, replication job visibility and image deletion&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/project-member.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;For a detailed breakdown of each role&apos;s capabilities, refer to the official &lt;a href=&quot;https://goharbor.io/docs/2.13.0/administration/managing-users/&quot;&gt;Harbor Managing Users page&lt;/a&gt;. As a best practice, production deployments should enforce role separation to maintain security and operational clarity in &lt;em&gt;Harbor&lt;/em&gt;.&lt;/p&gt;
&lt;h3&gt;Pushing images to Harbor registry&lt;/h3&gt;
&lt;p&gt;With the project and users set up, you&apos;re ready to push the container images to &lt;em&gt;Harbor&lt;/em&gt; by following these steps:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Log in to Harbor registry&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Use the Docker client to authenticate with the &lt;em&gt;Harbor&lt;/em&gt; registry using the &lt;em&gt;pcai-admin&lt;/em&gt; user credentials by running the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ docker login harbor.ingress.pcai0104.ld7.hpecolo.net
Username: pcai-admin
Password:

WARNING! Your credentials are stored unencrypted in &apos;/home/guoping/.docker/config.json&apos;.
Configure a credential helper to remove this warning. See
https://docs.docker.com/go/credential-store/

Login Succeeded
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you get any certificate error when logging in from a Linux client, update the file &lt;em&gt;/etc/docker/daemon.json&lt;/em&gt; by adding the following entry, replacing the &lt;em&gt;Harbor&lt;/em&gt; registry URL with your own:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;{
  &quot;insecure-registries&quot; : [&quot;harbor.ingress.pcai0104.ld7.hpecolo.net&quot;]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After making this change, reload the daemon and restart the Docker service:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Tag an existing image&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Rather than building a Docker image from a Dockerfile, pull the sample CFE Nginx image, &lt;em&gt;&apos;pcaidemo/cfe-nginx&apos;&lt;/em&gt;, from &lt;em&gt;DockerHub&lt;/em&gt; and tag it with the &lt;em&gt;Harbor&lt;/em&gt; registry URL and project name:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ docker images
REPOSITORY           TAG       IMAGE ID       CREATED        SIZE
pcaidemo/cfe-nginx   v0.1.0    1e5f3c5b981a   2 months ago   192MB

$ docker tag pcaidemo/cfe-nginx:v0.1.0 harbor.ingress.pcai0104.ld7.hpecolo.net/demo/cfe-nginx:v0.1.0

$ docker images
REPOSITORY                                               TAG       IMAGE ID       CREATED        SIZE
pcaidemo/cfe-nginx                                       v0.1.0    1e5f3c5b981a   2 months ago   192MB
harbor.ingress.pcai0104.ld7.hpecolo.net/demo/cfe-nginx   v0.1.0    1e5f3c5b981a   2 months ago   192MB
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Push the image to Harbor registry&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Push the image to the &lt;em&gt;Harbor&lt;/em&gt; registry by running the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ docker push harbor.ingress.pcai0104.ld7.hpecolo.net/demo/cfe-nginx:v0.1.0
The push refers to repository [harbor.ingress.pcai0104.ld7.hpecolo.net/demo/cfe-nginx]
7e893c1b6ee8: Pushed
463308bed0c9: Pushed
4197a611afec: Pushed
3e96162769d5: Pushed
892e805f6f4f: Pushed
626ab8a5d57b: Pushed
7fb72a7d1a8e: Pushed
v0.1.0: digest: sha256:114dff0fc8ee3d0200c3a12c60e3e2b79d0920dd953175ecb78a0b157425b25e size: 1778
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Verify the image from Harbor registry&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;From the &lt;em&gt;Harbor&lt;/em&gt; UI, the image &lt;em&gt;cfe-nginx&lt;/em&gt; appears under the &lt;em&gt;Repositories&lt;/em&gt; tab of the &lt;em&gt;demo&lt;/em&gt; project:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/demo-project.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;From  the Docker client, log in to the &lt;em&gt;Harbor&lt;/em&gt; registry as the &lt;em&gt;pcai-developer&lt;/em&gt; user, then pull the image &lt;em&gt;cfe-nginx&lt;/em&gt; from the registry. The image should download successfully, confirming that the user has appropriate access and the &lt;em&gt;Harbor&lt;/em&gt; registry is functioning as expected.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ docker login harbor.ingress.pcai0104.ld7.hpecolo.net
Username: pcai-developer
Password:

WARNING! Your credentials are stored unencrypted in &apos;/home/guoping/.docker/config.json&apos;.
Configure a credential helper to remove this warning. See
https://docs.docker.com/go/credential-store/

Login Succeeded

$ docker images
REPOSITORY   TAG       IMAGE ID   CREATED   SIZE
$ docker pull harbor.ingress.pcai0104.ld7.hpecolo.net/demo/cfe-nginx:v0.1.0
v0.1.0: Pulling from demo/cfe-nginx
b895f377d09e: Already exists
3b00567da964: Pull complete
56b81cfa547d: Pull complete
1bc5dc8b475d: Pull complete
979e6233a40a: Pull complete
d2a7ba8dbfee: Pull complete
32e44235e1d5: Pull complete
Digest: sha256:114dff0fc8ee3d0200c3a12c60e3e2b79d0920dd953175ecb78a0b157425b25e
Status: Downloaded newer image for harbor.ingress.pcai0104.ld7.hpecolo.net/demo/cfe-nginx:v0.1.0
harbor.ingress.pcai0104.ld7.hpecolo.net/demo/cfe-nginx:v0.1.0

$ docker images
REPOSITORY                                               TAG       IMAGE ID       CREATED        SIZE
harbor.ingress.pcai0104.ld7.hpecolo.net/demo/cfe-nginx   v0.1.0    1e5f3c5b981a   2 months ago   192MB
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Application deployment using Harbor registry&lt;/h2&gt;
&lt;p&gt;With the container images pushed to the &lt;em&gt;Harbor&lt;/em&gt; registry, the next step is to deploy the application to HPE PCAI using the same &lt;em&gt;Import Framework&lt;/em&gt;. This demonstrates how images are pulled from &lt;em&gt;Harbor&lt;/em&gt; during deployment.&lt;/p&gt;
&lt;p&gt;The Helm charts of the sample &lt;em&gt;CFE Nginx&lt;/em&gt; application are available from the &lt;em&gt;GitHub&lt;/em&gt; repository &lt;a href=&quot;https://github.com/GuopingJia/pcai-helm-examples/tree/main/nginx-chart&quot;&gt;pcai-helm-examples&lt;/a&gt;. Alongside the required &lt;em&gt;virtualService&lt;/em&gt; and Kyverno &lt;em&gt;ClusterPolicy&lt;/em&gt; YAML files, the &lt;em&gt;values.yaml&lt;/em&gt; file includes the &lt;em&gt;imageCredentials&lt;/em&gt; section that specifies the &lt;em&gt;Harbor&lt;/em&gt; access credentials for the &lt;em&gt;pcai-developer&lt;/em&gt; user. It also references the &lt;em&gt;imagePullSecrets&lt;/em&gt; field that uses the Secret resource &lt;em&gt;harbor&lt;/em&gt;, which is created during deployment, to securely pull container images from the &lt;em&gt;Harbor&lt;/em&gt; registry.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;image:
  repository: harbor.ingress.pcai0104.ld7.hpecolo.net/demo/cfe-nginx
  pullPolicy: Always
  # Overrides the image tag whose default is the chart appVersion.
  tag: &quot;v0.1.0&quot;

imagePullSecrets: 
  - name: harbor

imageCredentials:
  registry: harbor.ingress.pcai0104.ld7.hpecolo.net
  username: pcai-developer
  password: PCAIDev12345
  email: glcs.cfe@hpe.com
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Using the provided sample Helm charts, the &lt;em&gt;CFE Nginx&lt;/em&gt; application can be easily deployed to HPE PCAI via the &lt;em&gt;Import Framework&lt;/em&gt;. After deployment, an &lt;strong&gt;Imported&lt;/strong&gt; &lt;em&gt;Nginx&lt;/em&gt; tile appears under &lt;em&gt;Tools &amp;#x26; Frameworks&lt;/em&gt;, along with its configured service endpoint:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/nginx-deployment.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Clicking the &lt;em&gt;Open&lt;/em&gt; button launches the &lt;em&gt;CFE Nginx&lt;/em&gt; main page:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/nginx-ui.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The &lt;em&gt;CFE Nginx&lt;/em&gt; application is deployed to the namespace &lt;em&gt;&apos;nginx&apos;&lt;/em&gt; in the K8s cluster. If you have access to the cluster, you can verify the deployment by running the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# kubectl get all -n nginx
NAME                               READY   STATUS    RESTARTS   AGE
pod/nginx-chart-546476cd99-2nqzz   1/1     Running   0          6s

NAME                  TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/nginx-chart   ClusterIP   10.99.78.114   &amp;#x3C;none&gt;        80/TCP    6s

NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-chart   1/1     1            1           6s

NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-chart-546476cd99   1         1         1       6s
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Within the &lt;em&gt;&apos;nginx&apos;&lt;/em&gt; namespace, a Secret named &lt;em&gt;&apos;harbor&apos;&lt;/em&gt;, of type &lt;em&gt;dockerconfigjson&lt;/em&gt;, is created. This secret is used to authenticate and pull images from the &lt;em&gt;Harbor&lt;/em&gt; registry during the deployment of the &lt;em&gt;CFE Nginx&lt;/em&gt; application:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# kubectl get secret harbor -n nginx
NAME     TYPE                             DATA   AGE
harbor   kubernetes.io/dockerconfigjson   1      3m41s
&lt;/code&gt;&lt;/pre&gt;
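&lt;p&gt;The Helm chart generates this Secret from the &lt;em&gt;imageCredentials&lt;/em&gt; values shown earlier. For reference, if you ever needed to recreate an equivalent pull secret by hand (for example, in another namespace), a standard &lt;em&gt;kubectl&lt;/em&gt; command along these lines would do it; the values simply mirror the sample configuration above:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Create a dockerconfigjson pull secret equivalent to the chart-generated &apos;harbor&apos; Secret
$ kubectl create secret docker-registry harbor \
    --docker-server=harbor.ingress.pcai0104.ld7.hpecolo.net \
    --docker-username=pcai-developer \
    --docker-password=&apos;YOUR_HARBOR_PASSWORD&apos; \
    --docker-email=glcs.cfe@hpe.com \
    -n nginx
&lt;/code&gt;&lt;/pre&gt;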
&lt;p&gt;Type the following command to observe the &lt;em&gt;cfe-nginx&lt;/em&gt; image, tagged &lt;em&gt;v0.1.0&lt;/em&gt;, being pulled from the &lt;em&gt;demo&lt;/em&gt; private project in the &lt;em&gt;Harbor&lt;/em&gt; registry:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;[root@ez-master01 ~]# k describe pod/nginx-chart-546476cd99-2nqzz -n nginx
Name:             nginx-chart-546476cd99-2nqzz
Namespace:        nginx
...
...
Events:
  Type    Reason     Age    From                         Message
  ----    ------     ----   ----                         -------
  Normal  Scheduled  2m18s  scheduler-plugins-scheduler  Successfully assigned nginx/nginx-chart-546476cd99-2nqzz to scs04.pcai0104.ld7.hpecolo.net
  Normal  Pulling    2m18s  kubelet                      Pulling image &quot;busybox&quot;
  Normal  Pulled     2m17s  kubelet                      Successfully pulled image &quot;busybox&quot; in 860ms (860ms including waiting)
  Normal  Created    2m17s  kubelet                      Created container web-content
  Normal  Started    2m17s  kubelet                      Started container web-content
  Normal  Pulling    2m16s  kubelet                      Pulling image &quot;harbor.ingress.pcai0104.ld7.hpecolo.net/demo/cfe-nginx:v0.1.0&quot;
  Normal  Pulled     2m16s  kubelet                      Successfully pulled image &quot;harbor.ingress.pcai0104.ld7.hpecolo.net/demo/cfe-nginx:v0.1.0&quot; in 377ms (377ms including waiting)
  Normal  Created    2m16s  kubelet                      Created container nginx-chart
  Normal  Started    2m16s  kubelet                      Started container nginx-chart 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;em&gt;Logs&lt;/em&gt; page of the &lt;em&gt;Harbor&lt;/em&gt; UI provides a comprehensive audit trail, capturing key activities such as project and user creation, as well as image push and pull operations:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/harbor-audit.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;In this blog post, I explored how to deploy &lt;em&gt;Harbor&lt;/em&gt; to HPE Private Cloud AI and configure it as a local container registry. By setting up a private &lt;em&gt;Harbor&lt;/em&gt; project and assigning user roles, organizations can securely manage, push and pull container images tailored to their application needs.&lt;/p&gt;
&lt;p&gt;More than just a container registry, &lt;em&gt;Harbor&lt;/em&gt; strengthens security with built-in vulnerability scanning, image signing, and content trust features, ensuring only verified, compliant images are used across deployments. With &lt;em&gt;Harbor&lt;/em&gt; integrated into HPE PCAI, organizations can confidently host container images internally, eliminating the need for external registries. The local container registry offers greater control over image provenance and aligns more effectively with organization security policies and regulatory requirements.&lt;/p&gt;
&lt;p&gt;Please keep coming back to the &lt;a href=&quot;https://developer.hpe.com/blog/&quot;&gt;HPE Developer Community blog&lt;/a&gt; to learn more about HPE Private Cloud AI and get more ideas on how you can use it in your everyday operations.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[10 Myths About Scalable Parallel Programming Languages (Redux), Part 3: New Languages vs. Language Extensions]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/10-myths-about-scalable-parallel-programming-languages-redux-part-3-new-languages-vs-language-extensions/</link><guid isPermaLink="false">https://developer.hpe.com/10-myths-about-scalable-parallel-programming-languages-redux-part-3-new-languages-vs-language-extensions/</guid><pubDate>Thu, 26 Jun 2025 22:20:33 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[New features in HPE Morpheus VM Essentials Software 8.0.5]]></title><description><![CDATA[external blog post]]></description><link>https://developer.hpe.com/new-features-in-hpe-morpheus-vm-essentials-software-8-0-5/</link><guid isPermaLink="false">https://developer.hpe.com/new-features-in-hpe-morpheus-vm-essentials-software-8-0-5/</guid><pubDate>Tue, 24 Jun 2025 15:33:52 GMT</pubDate><content:encoded>&lt;p&gt;external blog post&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Integrating HPE GreenLake webhooks with Splunk ]]></title><description><![CDATA[Overview This guide shows you how to connect HPE GreenLake webhooks with Splunk. Splunk is a data platform that collects, indexes, and…]]></description><link>https://developer.hpe.com/integrating-hpe-greenlake-webhooks-with-splunk/</link><guid isPermaLink="false">https://developer.hpe.com/integrating-hpe-greenlake-webhooks-with-splunk/</guid><pubDate>Tue, 24 Jun 2025 11:34:04 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;h2&gt;Overview&lt;/h2&gt;
&lt;p&gt;This guide shows you how to connect HPE GreenLake webhooks with &lt;a href=&quot;https://www.splunk.com/&quot;&gt;Splunk&lt;/a&gt;. Splunk is a data platform that collects, indexes, and analyzes machine-generated data to provide insights for various purposes, including security monitoring, IT operations, and business analytics. When the two are connected, you will be able to see your HPE GreenLake events through Splunk for improved data monitoring and analysis.&lt;/p&gt;
&lt;h2&gt;What you’ll learn&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;How to set up Splunk to receive data from HPE GreenLake&lt;/li&gt;
&lt;li&gt;How to handle HPE GreenLake&apos;s security requirements&lt;/li&gt;
&lt;li&gt;How to build a helper app that makes everything work together&lt;/li&gt;
&lt;li&gt;How to test and monitor your setup&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Overview of Splunk HTTP Event Collector (HEC)&lt;/h2&gt;
&lt;p&gt;The &lt;a href=&quot;https://dev.splunk.com/enterprise/docs/devtools/httpeventcollector/&quot;&gt;HTTP Event Collector (HEC)&lt;/a&gt; is a Splunk feature that lets you send data and application events to a Splunk deployment over the HTTP and Secure HTTP (HTTPS) protocols. HEC uses a token-based authentication model. You can generate a token and then configure a logging library or HTTP client with the token to send data to HEC in a specific format.&lt;/p&gt;
&lt;h3&gt;Key features of HEC&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Token-based authentication: Each token has a unique value, which is a 128-bit number that is represented as a 32-character globally unique identifier (GUID)&lt;/li&gt;
&lt;li&gt;Secure communication: Supports both HTTP and HTTPS protocols for data transmission&lt;/li&gt;
&lt;li&gt;API key support: Provides secure authentication mechanisms that align perfectly with HPE GreenLake&apos;s security requirements&lt;/li&gt;
&lt;li&gt;Flexible data formats: Accepts both JSON-formatted events and raw text data&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Overview of HPE GreenLake webhooks&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/event/public/webhooks/&quot;&gt;HPE GreenLake webhooks&lt;/a&gt; facilitate automated, real-time communication between HPE GreenLake cloud services and an external service of your choosing. For example, a webhook could notify your IT Operation Management platform when a new audit log is created, or when subscriptions are about to expire. A getting started guide to HPE GreenLake webhooks is available &lt;a href=&quot;https://developer.hpe.com/blog/getting-started-with-the-hpe-greenlake-cloud-eventing-framework/&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;HPE GreenLake webhook security features:&lt;/h3&gt;
&lt;p&gt;HPE GreenLake implements robust security measures to ensure webhook authenticity:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Challenge Request Validation: After registering a webhook, a verification challenge is sent to the destination (the webhook URL). The event type is &lt;code&gt;hpe.greenlake.events.v1beta1.webhooks.verification&lt;/code&gt;. The challenge request includes a unique, random string generated by the server that is sent in the body as a payload.&lt;/li&gt;
&lt;li&gt;HMAC SHA-256 Signature Verification: HPE GreenLake secures webhook notifications through HMAC (Hash-based Message Authentication Code) with SHA-256, a cryptographic hashing algorithm, so the destination can verify that each notification really originated from HPE GreenLake.&lt;/li&gt;
&lt;li&gt;Shared Secret Management: The platform supports dual secret rotation for zero-downtime security updates.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Challenge Request Example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &quot;specversion&quot; : &quot;1.0&quot;,
  &quot;type&quot; : &quot;hpe.greenlake.events.v1beta1.webhooks.verification&quot;,
  &quot;source&quot; : &quot;//global.api.greenlake.hpe.com/events&quot;,
  &quot;id&quot; : &quot;C234-1234-1234&quot;,
  &quot;time&quot; : &quot;2018-04-05T17:31:00Z&quot;,
  &quot;datacontenttype&quot; : &quot;application/json&quot;,
  &quot;data&quot; : {
	&quot;challengeRequest&quot; : &quot;&amp;#x3C;TOKEN&gt;&quot;
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Challenges and solutions&lt;/h2&gt;
&lt;p&gt;The primary challenge in integrating HPE GreenLake webhooks with Splunk HEC lies in the webhook verification process. The destination must read the value from the challengeRequest field and create an HMAC SHA-256 hash, using the webhook secret as the key and the challengeRequest value as the message to hash. When successful, the destination responds with a JSON object of the form &lt;code&gt;{&quot;verification&quot;: &quot;CREATED_HASH&quot;}&lt;/code&gt; and an HTTP 200 OK status.&lt;/p&gt;
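&lt;p&gt;Before diving into the details, it can help to see what that hash should look like. Assuming you have openssl available, you can reproduce the expected verification value for a given challenge string and webhook secret from the command line (the placeholders are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# HMAC SHA-256 of the challengeRequest value, keyed with the webhook secret
echo -n &apos;CHALLENGE_REQUEST_VALUE&apos; | openssl dgst -sha256 -hmac &apos;YOUR_WEBHOOK_SECRET&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The hexadecimal digest printed by openssl is the value the destination must return in the &lt;code&gt;verification&lt;/code&gt; field.&lt;/p&gt;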
&lt;h3&gt;A challenge&lt;/h3&gt;
&lt;p&gt;Splunk&apos;s HEC endpoint is designed for data ingestion and doesn&apos;t natively support the challenge-response mechanism required by HPE GreenLake webhooks. HEC expects to receive event data directly and cannot handle the initial verification handshake.&lt;/p&gt;
&lt;h3&gt;The solution&lt;/h3&gt;
&lt;p&gt;This is where &lt;a href=&quot;https://dev.splunk.com/enterprise/docs/devtools/customrestendpoints/&quot;&gt;Splunk&apos;s custom REST endpoints&lt;/a&gt; capability becomes invaluable. A custom REST endpoint is a developer-defined endpoint and associated handler that lets you build out the Splunk REST API to meet your specific needs. We can create a custom endpoint handler that:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Intercepts the initial challenge request from HPE GreenLake&lt;/li&gt;
&lt;li&gt;Validates the challenge using HMAC SHA-256&lt;/li&gt;
&lt;li&gt;Responds appropriately to complete the verification&lt;/li&gt;
&lt;li&gt;Forwards validated event data to HEC for ingestion&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Custom REST endpoints in Splunk&lt;/h2&gt;
&lt;p&gt;Splunk&apos;s custom REST endpoints provide powerful extensibility for scenarios exactly like ours. You use a custom endpoint to add a special feature that Splunk doesn&apos;t have built in, which in our case is handling the verification handshake from HPE GreenLake.&lt;/p&gt;
&lt;p&gt;Key benefits of our integration:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Flexible request handling: Can process both challenge requests and event data&lt;/li&gt;
&lt;li&gt;Custom logic implementation: Handler code implements HPE GreenLake&apos;s specific validation requirements&lt;/li&gt;
&lt;li&gt;Centralized management: Provides a single endpoint for webhook management&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Where to configure the endpoint handler: Splunk Enterprise vs Splunk Cloud&lt;/h2&gt;
&lt;p&gt;Splunk Enterprise is the self-hosted version that an organization deploys and manages on its own infrastructure, either on-premises (on-prem) or in a private cloud.&lt;/p&gt;
&lt;p&gt;Splunk Cloud Platform is the Software as a Service (SaaS) offering, where the Splunk platform is hosted, managed, and maintained by Splunk.&lt;/p&gt;
&lt;h3&gt;For Splunk Enterprise&lt;/h3&gt;
&lt;p&gt;You can install and configure the endpoint handler directly on your Splunk Enterprise instance by placing it in the &lt;code&gt;$SPLUNK_HOME/etc/apps/&lt;/code&gt; directory and following the steps in this guide. Splunk Enterprise supports custom REST endpoints out of the box.&lt;/p&gt;
&lt;h3&gt;For Splunk Cloud&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://docs.splunk.com/Documentation/SplunkCloud/latest/RESTTUT/RESTandCloud&quot;&gt;Splunk Cloud has extra security controls&lt;/a&gt;, so you might need to take additional steps to allow your helper to communicate with the Splunk REST API.&lt;/p&gt;
&lt;h2&gt;Generate a Splunk REST API key&lt;/h2&gt;
&lt;p&gt;In order to use the custom REST endpoint in Splunk, you need to get an API key which HPE GreenLake will use to authenticate against Splunk when initially setting up the webhook and sending events. You can use the following cURL command to generate a key.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;curl -k https://your-splunk-instance:8089/services/auth/login \
     --data-urlencode username=YOUR_USERNAME \
     --data-urlencode &apos;password=YOUR_PASSWORD&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The response will be:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;&amp;#x3C;response&gt;
  &amp;#x3C;sessionKey&gt;YOUR_SESSION_KEY&amp;#x3C;/sessionKey&gt;
  &amp;#x3C;messages&gt;
    &amp;#x3C;msg code=&quot;&quot;&gt;&amp;#x3C;/msg&gt;
  &amp;#x3C;/messages&gt;
&amp;#x3C;/response&gt;
&lt;/code&gt;&lt;/pre&gt;
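&lt;p&gt;The session key is then passed in the Authorization header of subsequent REST calls. As a quick, illustrative sanity check that the key works (the endpoint below is a standard Splunk management endpoint, not part of this integration), you could run:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# List installed apps using the session key for authentication
curl -k https://your-splunk-instance:8089/services/apps/local \
     -H &quot;Authorization: Splunk YOUR_SESSION_KEY&quot;
&lt;/code&gt;&lt;/pre&gt;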
&lt;h2&gt;Sample Python handler&lt;/h2&gt;
&lt;p&gt;Let&apos;s create a custom REST endpoint handler in Python that handles the HPE GreenLake webhook validation and, once validated, forwards events to Splunk HEC.&lt;/p&gt;
&lt;h3&gt;Directory structure&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;splunk_hpe_webhook_app/
├── bin/
│   └── hpe_webhook_handler.py
├── default/
│   ├── restmap.conf
│   └── web.conf
└── metadata/
    └── default.meta
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Python handler (bin/hpe_webhook_handler.py)&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import os
import sys
import json
import hmac
import hashlib
import urllib.request
import urllib.parse
from splunk.rest import BaseRestHandler
class HPEWebhookHandler(BaseRestHandler):
    def __init__(self, command_line, command_arg):
        super(HPEWebhookHandler, self).__init__(command_line, command_arg)
        # Configure your HEC endpoint and token
        self.hec_url = &quot;https://your-splunk-instance:8088/services/collector/event&quot;
        self.hec_token = &quot;YOUR_HEC_TOKEN_HERE&quot;
        self.webhook_secret = &quot;YOUR_WEBHOOK_SECRET_HERE&quot;

    def handle_POST(self):
        &quot;&quot;&quot;Handle POST requests from HPE GreenLake webhooks&quot;&quot;&quot;
        try:
            # Parse the incoming request body
            request_body = json.loads(self.request[&apos;payload&apos;])
            event_type = request_body.get(&apos;type&apos;, &apos;&apos;)
            # Check if this is a verification challenge
            if event_type == &apos;hpe.greenlake.events.v1beta1.webhooks.verification&apos;:
                return self._handle_challenge(request_body)
            else:
                # Validate signature for regular events
                if self._validate_signature():
                    return self._forward_to_hec(request_body)
                else:
                    return self._return_error(&quot;Invalid signature&quot;, 401)
        except Exception as e:
            return self._return_error(f&quot;Error processing request: {str(e)}&quot;, 500)

    def _handle_challenge(self, request_body):
        &quot;&quot;&quot;Handle HPE GreenLake webhook verification challenge&quot;&quot;&quot;
        try:
            # Extract challenge request token
            challenge_request = request_body[&apos;data&apos;][&apos;challengeRequest&apos;]
            # Create HMAC SHA-256 hash using the webhook secret as the key
            hmac_hash = hmac.new(
                key=self.webhook_secret.encode(&apos;utf-8&apos;),
                msg=challenge_request.encode(&apos;utf-8&apos;),
                digestmod=hashlib.sha256
            )
            # Create verification response
            verification_response = {
                &quot;verification&quot;: hmac_hash.hexdigest()
            }
            # Return successful verification
            self.response.setHeader(&apos;content-type&apos;, &apos;application/json&apos;)
            self.response.write(json.dumps(verification_response))
            return
        except Exception as e:
            return self._return_error(f&quot;Challenge validation failed: {str(e)}&quot;, 400)

    def _validate_signature(self):
        &quot;&quot;&quot;Validate HMAC signature for regular webhook events&quot;&quot;&quot;
        try:
            # Get signature from headers
            signature_header = self.request.get(&apos;headers&apos;, {}).get(&apos;hpe-webhook-signature&apos;, &apos;&apos;)
            if not signature_header.startswith(&apos;sha256=&apos;):
                return False
            expected_signature = signature_header[7:]  # Remove &apos;sha256=&apos; prefix
            # Calculate expected signature
            payload = self.request[&apos;payload&apos;]
            calculated_signature = hmac.new(
                key=self.webhook_secret.encode(&apos;utf-8&apos;),
                msg=payload.encode(&apos;utf-8&apos;),
                digestmod=hashlib.sha256
            ).hexdigest()
            # Compare signatures
            return hmac.compare_digest(expected_signature, calculated_signature)
        except Exception:
            return False

    def _forward_to_hec(self, event_data):
        &quot;&quot;&quot;Forward validated event data to Splunk HEC&quot;&quot;&quot;
        try:
            # Prepare HEC request
            hec_data = {
                &quot;event&quot;: event_data,
                &quot;sourcetype&quot;: &quot;hpe:greenlake:webhook&quot;,
                &quot;source&quot;: &quot;hpe_greenlake&quot;,
                &quot;index&quot;: &quot;main&quot;  # Configure as needed
            }
            # Create HTTP request to HEC
            req = urllib.request.Request(
                url=self.hec_url,
                data=json.dumps(hec_data).encode(&apos;utf-8&apos;),
                headers={
                    &apos;Authorization&apos;: f&apos;Splunk {self.hec_token}&apos;,
                    &apos;Content-Type&apos;: &apos;application/json&apos;
                }
            )
            # Send to HEC
            with urllib.request.urlopen(req) as response:
                if response.status == 200:
                    self.response.setHeader(&apos;content-type&apos;, &apos;application/json&apos;)
                    self.response.write(json.dumps({&quot;status&quot;: &quot;success&quot;}))
                    return
                else:
                    return self._return_error(&quot;Failed to forward to HEC&quot;, 500)
        except Exception as e:
            return self._return_error(f&quot;HEC forwarding failed: {str(e)}&quot;, 500)

    def _return_error(self, message, status_code):
        &quot;&quot;&quot;Return error response&quot;&quot;&quot;
        self.response.setStatus(status_code)
        self.response.setHeader(&apos;content-type&apos;, &apos;application/json&apos;)
        self.response.write(json.dumps({&quot;error&quot;: message}))
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Configuration files&lt;/h3&gt;
&lt;h4&gt;default/restmap.conf&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;[script:hpe_webhook_handler]
match = /hpe/webhook
script = hpe_webhook_handler.py
scripttype = persist
handler = hpe_webhook_handler.HPEWebhookHandler
requireAuthentication = false
output_modes = json
passPayload = true
passHttpHeaders = true
passHttpCookies = false
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;default/web.conf&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;[expose:hpe_webhook_handler]
pattern = hpe/webhook
methods = POST
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;metadata/default.meta&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;[restmap/hpe_webhook_handler]
export = system
[views]
export = system
&lt;/code&gt;&lt;/pre&gt;
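&lt;p&gt;With the handler script and configuration files in place, the app needs to be copied into the Splunk apps directory and Splunk restarted so the new endpoint is registered. On Splunk Enterprise, assuming &lt;code&gt;$SPLUNK_HOME&lt;/code&gt; points at your installation, that might look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Install the custom app and restart Splunk to load the new REST endpoint
cp -r splunk_hpe_webhook_app $SPLUNK_HOME/etc/apps/
$SPLUNK_HOME/bin/splunk restart
&lt;/code&gt;&lt;/pre&gt;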
&lt;h2&gt;Configuring Splunk HTTP Event Collector (HEC)&lt;/h2&gt;
&lt;p&gt;You need to create an API token to use HEC via its API. You can do this from:&lt;/p&gt;
&lt;p&gt;1.    &lt;strong&gt;Settings &gt; Data Inputs &gt; HTTP Event Collector&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;2.    Select &lt;strong&gt;New token&lt;/strong&gt;. Use this token to update the Python handler script (line 14).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/dccf7431-b83b-4799-a795-25bccc7637db.png&quot; alt=&quot;Data inputs settings &quot; title=&quot;Data inputs settings &quot;&gt;&lt;/p&gt;
&lt;p&gt;Your final configuration should look like this:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/03af2f59-ab50-443b-ae93-90d274738317.png&quot; alt=&quot;HEC final configuration &quot; title=&quot;HEC final configuration &quot;&gt;&lt;/p&gt;
&lt;p&gt;Verify your global settings so that they match the following:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/310bb0be-5363-428a-99a2-5f64bef2ca8a.png&quot; alt=&quot;Global settings&quot; title=&quot;Global settings&quot;&gt;&lt;/p&gt;
&lt;p&gt;This allows you to get your HEC endpoint, which is used in the Python handler to forward the HPE GreenLake events received via the webhook into Splunk. The URL of the endpoint should look like this:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;https://&amp;#x3C;splunk-host&gt;:8088/services/collector/event&lt;/code&gt;&lt;/p&gt;
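&lt;p&gt;To confirm the HEC side works independently of the webhook flow, you can send a test event straight to that endpoint. A minimal sketch, using the HEC token created above (the payload is an arbitrary example):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Send a test event directly to the HTTP Event Collector
curl -k https://your-splunk-instance:8088/services/collector/event \
     -H &quot;Authorization: Splunk YOUR_HEC_TOKEN&quot; \
     -d &apos;{&quot;event&quot;: {&quot;message&quot;: &quot;HEC connectivity test&quot;}, &quot;sourcetype&quot;: &quot;hpe:greenlake:webhook&quot;}&apos;
&lt;/code&gt;&lt;/pre&gt;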
&lt;h2&gt;Final integration flow&lt;/h2&gt;
&lt;p&gt;The complete integration flow works as follows:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Initial setup&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Deploy the custom Splunk endpoint handler using the above HPE webhook handler Python script.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Make sure to set the HEC endpoint in the Python script (line 13)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Make sure to set the Splunk HEC API token in the Python script (line 14)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Make sure to set the HPE GreenLake webhook secret in the Python script (line 15)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Register the webhook handler URL with HPE GreenLake:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;URL of handler: &lt;a href=&quot;https://your-splunk-instance:8089/servicesNS/-/your_app/hpe/webhook&quot;&gt;&lt;code&gt;https://your-splunk-instance:8089/servicesNS/-/your_app/hpe/webhook&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Set the same secret key as the one set up in the Python handler (line 15)&lt;/li&gt;
&lt;li&gt;Use API as Authentication type and set the API key to the Splunk REST API Key generated in the section above.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: See &lt;a href=&quot;https://developer.hpe.com/blog/getting-started-with-the-hpe-greenlake-cloud-eventing-framework/&quot;&gt;this blog&lt;/a&gt; to learn how to register a new webhook handler in HPE GreenLake&lt;/p&gt;
&lt;/blockquote&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;
&lt;p&gt;Webhook handler verification process&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;HPE GreenLake sends a verification challenge to your custom endpoint.&lt;/li&gt;
&lt;li&gt;The custom REST handler receives the challenge and validates it using HMAC SHA-256.&lt;/li&gt;
&lt;li&gt;The handler responds with the computed verification hash.&lt;/li&gt;
&lt;li&gt;HPE GreenLake confirms the webhook and marks it as active.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Event processing flow&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;HPE GreenLake sends event data to the custom handler endpoint.&lt;/li&gt;
&lt;li&gt;The custom REST handler validates the HMAC signature.&lt;/li&gt;
&lt;li&gt;The handler forwards validated events to HEC.&lt;/li&gt;
&lt;li&gt;Splunk HEC ingests the data for analysis and visualization.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Data flow diagram&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/diagram.jpg&quot; alt=&quot;Data flow diagram&quot; title=&quot;Data flow diagram&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Benefits of this architecture&lt;/h2&gt;
&lt;p&gt;Security: The custom endpoint handler ensures only validated, authentic events reach your Splunk environment.&lt;/p&gt;
&lt;p&gt;Reliability: If there are more than 20 failures in a 12-hour period, the webhook enters the critical state in HPE GreenLake. If there are no new failures in 12 hours, the webhook returns to the active state. The custom handler can implement robust error handling to maintain webhook health.&lt;/p&gt;
&lt;p&gt;Scalability: The solution can handle multiple webhook types and route them to different HEC endpoints or indexes as needed.&lt;/p&gt;
&lt;p&gt;Monitoring: All webhook interactions are logged within Splunk for troubleshooting and monitoring.&lt;/p&gt;
&lt;h2&gt;Testing and deployment&lt;/h2&gt;
&lt;h3&gt;Testing the integration&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;Verify custom endpoint: Test your custom REST endpoint using curl:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;curl -X POST https://your-splunk-instance:8089/servicesNS/-/your_app/hpe/webhook \
     -H &quot;Content-Type: application/json&quot; \
     -d &apos;{&quot;type&quot;: &quot;test.event&quot;, &quot;data&quot;: {&quot;message&quot;: &quot;Hello Splunk&quot;}}&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Webhook registration: Register your webhook with HPE GreenLake using the custom endpoint URL.&lt;/li&gt;
&lt;li&gt;Challenge validation: Monitor Splunk logs to ensure the challenge request is handled correctly.&lt;/li&gt;
&lt;li&gt;Event flow testing: Trigger test events from HPE GreenLake and verify they appear in your Splunk index.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Integrating HPE GreenLake webhooks with Splunk via HTTP Event Collector presents unique challenges due to the webhook verification requirements, but Splunk&apos;s custom REST endpoints capabilities provide an elegant solution. Such integration offers several key benefits:&lt;/p&gt;
&lt;p&gt;Enhanced security: The custom REST endpoint handler ensures that only validated, authentic events from HPE GreenLake reach your Splunk environment, maintaining the security standards required by both platforms.&lt;/p&gt;
&lt;p&gt;Seamless event flow: Once configured, events flow automatically from HPE GreenLake to Splunk, enabling real-time monitoring and analysis of your cloud infrastructure.&lt;/p&gt;
&lt;p&gt;Extensible architecture: The custom REST handler can be extended to support multiple webhook types, different routing logic, and additional validation mechanisms as your requirements evolve.&lt;/p&gt;
&lt;p&gt;Whether you&apos;re monitoring subscription changes, or audit events, this integration ensures that your HPE GreenLake data becomes a valuable part of your Splunk analytics ecosystem, empowering your organization with comprehensive, real-time insights into your cloud infrastructure.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Open Sourcing Workshops-on-Demand part 5: Deploying and Managing API-DB server]]></title><description><![CDATA[I﻿n previous articles of this series dedicated to the open sourcing of our Workshops-on-Demand project, I covered the reasons why we open…]]></description><link>https://developer.hpe.com/open-sourcing-workshops-on-demand-part-5-deploying-and-managing-api-db-server/</link><guid isPermaLink="false">https://developer.hpe.com/open-sourcing-workshops-on-demand-part-5-deploying-and-managing-api-db-server/</guid><pubDate>Tue, 24 Jun 2025 09:52:09 GMT</pubDate><content:encoded>&lt;p&gt;I﻿n previous articles of this series dedicated to the &lt;a href=&quot;https://developer.hpe.com/blog/willing-to-build-up-your-own-workshops-on-demand-infrastructure/&quot;&gt;open sourcing of our Workshops-on-Demand project&lt;/a&gt;, I covered the reasons why we open sourced the project and how we did it. I also explained in details how you could install your own Workshops-on-Demand backend server. I also took the time to detail the automation that was hosted on this backend server. I also described to you the management of this backend server. This is what is often referred to as Day2 operations. I plan now to explain how to deploy and manage the API-DB server.&lt;/p&gt;
&lt;p&gt;The api-db server provides an OpenAPI 3.0-based API used to manage the Workshops-on-Demand project. It also provides a database that stores the state of participants, workshops, and students.&lt;/p&gt;
&lt;p&gt;The following image describes the interactions between the different components of the WoD architecture.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/howto-wod-1.png&quot; alt=&quot;&quot; title=&quot;WOD Architecture&quot;&gt;&lt;/p&gt;
&lt;h2&gt;How to deploy your own api-db server...&lt;/h2&gt;
&lt;p&gt;As explained in the previous &lt;a href=&quot;https://developer.hpe.com/blog/willing-to-build-up-your-own-workshops-on-demand-infrastructure/&quot;&gt;article&lt;/a&gt;, the project is split into multiple repositories along architectural and public/private lines. The architecture is divided between the frontend and backend. Project admins will need to decide whether they want to offer public-only content to participants or add proprietary, private content as well.&lt;/p&gt;
&lt;p&gt;I will start with the simplest scenario: a public-only approach. Then we will dive into the specifics of the private approach.&lt;/p&gt;
&lt;h3&gt;Public-only Deployment: No private backend or private workshops&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Important Note:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;This part is compulsory for any type of deployment, public-only or public + private.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;First, you need a repository to clone. The Workshops-on-Demand GitHub projects can be found &lt;a href=&quot;https://github.com/Workshops-on-Demand/&quot;&gt;here&lt;/a&gt;. We have packaged the solution in several GitHub repos. Each repository handles a specific role in the overall architecture. We recently introduced a wod-install repository.&lt;/p&gt;
&lt;p&gt;This repository is the most important one when it comes to deploying the wod infrastructure. It contains all the installation scripts for every part of the solution.&lt;/p&gt;
&lt;p&gt;Here&apos;s a quick look at what can be found in each:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/wod-blogserie2-2repos.png&quot; alt=&quot;&quot; title=&quot;WOD repositories&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://github.com/Workshops-on-Demand/wod-notebooks&quot;&gt;w﻿od-notebooks&lt;/a&gt;:&lt;/strong&gt; Public Workshops-on-Demand based on Jupyter Notebooks.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;You can test them live at &lt;a href=&quot;https://hackshack.hpedev.io/workshops&quot;&gt;https://hackshack.hpedev.io/workshops&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://github.com/Workshops-on-Demand/wod-install&quot;&gt;w﻿od-install&lt;/a&gt;:&lt;/strong&gt; Installer part of the Workshops-on-Demand project.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://github.com/Workshops-on-Demand/wod-backend&quot;&gt;w﻿od-backend&lt;/a&gt;:&lt;/strong&gt; Back-end part of our Workshops-on-Demand setup.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://github.com/Workshops-on-Demand/wod-frontend&quot;&gt;w﻿od-frontend&lt;/a&gt;:&lt;/strong&gt; Frontend part of the Workshops-on-Demand project.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Based on NGINX and NodeJS technologies, it provides the participants&apos; Registration Portal used to enable booking of the workshops.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://github.com/Workshops-on-Demand/wod-api-db&quot;&gt;wod-api-db&lt;/a&gt;:&lt;/strong&gt; Workshops-on-Demand registration portal application&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;An OpenAPI 3.0-based API used to manage the Workshops-on-Demand project. It also provides a database hosting the status of participants, workshops, and students.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://github.com/Workshops-on-Demand/wod-private&quot;&gt;wod-private&lt;/a&gt;:&lt;/strong&gt; Example Private configuration for Workshops-on-Demand (WoD).&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://github.com/Workshops-on-Demand/wod-frontend-private&quot;&gt;wod-frontend-private&lt;/a&gt;:&lt;/strong&gt; Private Frontend part of the Workshops-on-Demand project.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://github.com/Workshops-on-Demand/wod-api-db-private&quot;&gt;wod-api-db-private&lt;/a&gt;:&lt;/strong&gt; Workshops-on-Demand registration portal application&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;This provides examples for creating your own customization layer on top of the public standard WoD backend / WoD notebooks content. Do not put any confidential data here, as this is a public repository!&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: There are currently eight repositories available.&lt;/p&gt;
&lt;p&gt;Altogether, the solution provides:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;An installer allowing you to install either the backend, the API-DB server, or the frontend with a single command.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A complete JupyterHub server with some add-ons (additional JupyterHub kernels, Ansible galaxies, and PowerShell libraries) on your system, ready to host the Workshops-on-Demand notebooks mentioned above.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;An api-db server to host workshop data and provide an API server to retrieve the relevant workshop data.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A frontend server to provide the registration process.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A postfix server used for the procmail API&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;An Ansible engine to allow automation&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A fail2ban service&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;An Admin user to manage everything&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A set of scripts to handle different tasks such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Notebooks deployment&lt;/li&gt;
&lt;li&gt;JupyterHub compliance&lt;/li&gt;
&lt;li&gt;User compliance&lt;/li&gt;
&lt;li&gt;Security Management&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;API-DB server preparation:&lt;/h4&gt;
&lt;p&gt;Before cloning the install repository, you will need to prepare the server that will host the api-db features. When ready, you will proceed with the cloning and then the installation process.&lt;/p&gt;
&lt;h5&gt;Prerequisites:&lt;/h5&gt;
&lt;p&gt;In order to set up the api-db server, you will need:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A fresh OS install on a physical or virtualized server running Ubuntu 24.04 or CentOS 7.9, deployed with any mechanism of your choice (e.g. iLO, Vagrant, etc.). You may even use this Vagrant file to automatically generate a complete setup leveraging Vagrant, libvirt, and QEMU/KVM.&lt;/li&gt;
&lt;li&gt;A Linux account with sudo privileges on your Linux distro. Name it &lt;code&gt;install&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Git installed on the machine&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Our current setup leverages the following specs for our api-db server:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A machine with 2 or more CPUs&lt;/li&gt;
&lt;li&gt;16 GB of RAM&lt;/li&gt;
&lt;li&gt;80 GB of storage&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;We are currently using a virtual machine on AWS for our different production sites.&lt;/p&gt;
&lt;p&gt;When done with the OS installation and preparation:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;From the WoD-api-db server, as the install user, you will need to clone the wod-install repo first.&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code&gt;install$ git clone https://github.com/Workshops-on-Demand/wod-install.git
install$ cd wod-install/install
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;The installation is based on a common install script, &lt;a href=&quot;https://github.com/Workshops-on-Demand/wod-backend/blob/main/install/install.sh&quot;&gt;install.sh&lt;/a&gt;, that allows the deployment of the different parts of the solution. It can be called as follows:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code&gt;install$ install.sh [-h][-t type][-g groupname][-b backend][-f frontend][-a api-db][-e external][-u user][-s sender] 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;code&gt;-h&lt;/code&gt; provides the help&lt;/p&gt;
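&lt;p&gt;For example, a typical invocation to deploy the api-db flavor might look like the following. This is a hypothetical illustration only: the group name and admin user shown here are placeholders, so run &lt;code&gt;install.sh -h&lt;/code&gt; to confirm the exact option semantics of your version before using it.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Hypothetical example: deploy the api-db server type for group &quot;mygroup&quot;
# with &quot;wodadmin&quot; as the admin user. Adjust the options to your environment.
install$ ./install.sh -t api-db -g mygroup -u wodadmin
&lt;/code&gt;&lt;/pre&gt;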
&lt;p&gt;&lt;code&gt;install.sh&lt;/code&gt; performs the following tasks depending on the type&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Calls the &lt;code&gt;install-system-&amp;#x3C;&amp;#x3C; distribution name &gt;&gt;.sh&lt;/code&gt; script&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Installs the minimal required packages (&lt;code&gt;ansible, git, jq, openssh server, npm&lt;/code&gt;)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Creates an admin user as defined above (default is &lt;code&gt;wodadmin&lt;/code&gt;) with sudo rights&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Calls the &lt;code&gt;install-system-common.sh&lt;/code&gt; script that performs the following tasks:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Cleanup&lt;/li&gt;
&lt;li&gt;GitHub repos cloning (leveraging the install.repo file): public Backend and public Private repos&lt;/li&gt;
&lt;li&gt;Creates SSH keys for wodadmin&lt;/li&gt;
&lt;li&gt;Creates GROUPNAME variables&lt;/li&gt;
&lt;li&gt;Creates Ansible inventory files&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Calls the &lt;code&gt;install_system.sh&lt;/code&gt; script with the type (api-db, backend, or frontend) that performs the following tasks:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Install the necessary stack based on selected type&lt;/li&gt;
&lt;li&gt;Create a &lt;code&gt;wod.sh&lt;/code&gt; script in &lt;code&gt;wod-backend&lt;/code&gt; directory to be used by all other scripts&lt;/li&gt;
&lt;li&gt;Source the &lt;code&gt;wod.sh&lt;/code&gt; file&lt;/li&gt;
&lt;li&gt;Setup Ansible-galaxies (&lt;code&gt;community.general&lt;/code&gt; and &lt;code&gt;posix&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Setup Ansible and call the playbook &lt;code&gt;install_&amp;#x3C;type&gt;.yml&lt;/code&gt; followed by the &lt;code&gt;ansible_check_&amp;#x3C;type&gt;.yml&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;At the end of the api-db installation process:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;You will have a PostgreSQL database running in a Docker container, populated with the data coming from the different workshop YAML files.&lt;/li&gt;
&lt;li&gt;You will have a Postgres Adminer instance running.&lt;/li&gt;
&lt;li&gt;You will have an API server running along with its Swagger description (see the example check below).&lt;/li&gt;
&lt;/ul&gt;
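&lt;p&gt;As a quick smoke test, you can query the freshly installed API server from the api-db host itself. This is only a sketch: the port and paths below are hypothetical placeholders, so check the Swagger description produced by your install for the actual values.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Hypothetical check: replace 8081, /api-docs and /api/workshops with the
# port and routes reported by your own deployment.
install$ curl -s http://localhost:8081/api-docs | head
install$ curl -s http://localhost:8081/api/workshops | jq length
&lt;/code&gt;&lt;/pre&gt;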
&lt;p&gt;Please note that this setup phase can be run concurrently with the public setup phase. Indeed, the install script should detect the presence of the private repository through the install.priv file. It will automatically adjust the different scripts and variables to add the relevant content and will override some of the public variables with private ones.&lt;/p&gt;
&lt;p&gt;You now have a working Workshops-on-Demand api-db server in place.&lt;/p&gt;
&lt;p&gt;Congratulations! The next article in the series will help you better understand the lifecycle of the backend server. How does a workshop registration work from the backend server&apos;s side? How do you manage this server on a daily basis? How and when do you need to update it? All these questions will be answered in the next article. From there, we will move to the frontend side of things and finally to a workshop&apos;s creation process.&lt;/p&gt;
&lt;p&gt;If you need support for this installation process, use our dedicated &lt;a href=&quot;https://hpedev.slack.com/archives/C01B60X8SSD&quot;&gt;Slack channel&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Please be sure to check back on the &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE Developer blog site&lt;/a&gt; to read all the articles in this series. Also, check out the Hack Shack for new &lt;a href=&quot;https://developer.hpe.com/hackshack/workshops&quot;&gt;workshops&lt;/a&gt;: &lt;a href=&quot;https://developer.hpe.com/hackshack/replays/42&quot;&gt;Data Visualization 101&lt;/a&gt; is now available! Stay tuned for additional Workshops-on-Demand in our catalog.&lt;/p&gt;
&lt;h2&gt;How to manage your own api-db server...&lt;/h2&gt;
&lt;p&gt;The main component of the api-db server is its database. There are two ways of managing the data content:&lt;/p&gt;
&lt;p&gt;Postgres Adminer:&lt;/p&gt;
&lt;p&gt;In order to access the Postgres Adminer console, browse to either the internal or external (if any) IP address of the server on port 8083.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2025-06-24-at-15.35.58.png&quot; alt=&quot;&quot; title=&quot;Postgres Adminer console login&quot;&gt;&lt;/p&gt;
&lt;p&gt;Using the console, the admin can update the relevant tables manually.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2025-06-24-at-15.38.11.png&quot; alt=&quot;&quot; title=&quot;Postgres DB Workshops table&quot;&gt;&lt;/p&gt;
&lt;p&gt;However, we now recommend that you leverage our scripts, which will take care of updating the relevant data automatically.&lt;/p&gt;
&lt;p&gt;Let&apos;s start with a simple example:&lt;/p&gt;
&lt;h3&gt;You want to add a new workshop to your catalog. How should you proceed?&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Start by developing a new workshop (a future blog will help you understand how to achieve this).&lt;/li&gt;
&lt;li&gt;As part of the workshop development, you will have to create a simple yaml file that will describe the workshop.&lt;/li&gt;
&lt;li&gt;Once the workshop content is ready, you will create a pull request to the wod-notebooks repository to update its content.&lt;/li&gt;
&lt;li&gt;From the api-db server, as the install user, now launch the seeders script to update the database (a hypothetical invocation is sketched just after this list). The script will parse the different YAML files and proceed with the necessary updates to the database.&lt;/li&gt;
&lt;/ul&gt;
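&lt;p&gt;As an illustration, a seeder run could look like the sketch below. This is purely hypothetical: the directory layout and script name depend on your wod-api-db checkout, so use the seeder entry point documented in that repository rather than these placeholder names.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Hypothetical sketch only: adjust the path and the seeder command
# to match the wod-api-db repository on your server.
install$ cd ~/wod-api-db
install$ npm run seed
&lt;/code&gt;&lt;/pre&gt;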
&lt;p&gt;Take a look at the following example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;%YAML 1.1
# Meta data for the API101 Workshop to populate seeder
---
name: &apos;API 101 - API basics and the value they provide&apos;
description: &apos;You may know that application programming interfaces (APIs) allow applications to talk to other apps, but have you ever used them? Today, APIs are available for most products and solutions. You can take advantage of them when writing automation scripts, integrating code, or defining infrastructure-as-code, as long as you understand the mechanisms used to consume an API. In this hands-on workshop, we’ll review all the jargon and technology used by REST APIs.&apos;
active: true
capacity: 20
priority: 1
range: [1,6]
reset: false
ldap: false
replayId: 9
varpass: false
compile: false
workshopImg: &apos;https://us-central1-grommet-designer.cloudfunctions.net/images/jay-giang-hpe-com/WOD_Opensource-API_101_REST_API_basics_and_the_value_they_provide.jpg&apos;
badgeImg: &apos;https://us-central1-grommet-designer.cloudfunctions.net/images/jay-giang-hpe-com/Opensource-API_101_REST_API_basics_and_the_value_they_provide.jpg&apos;
beta: false
category: [&apos;Open Source&apos;]
duration: 2
presenter: &apos;Didier Lalli&apos;
role: &apos;Distinguished Technologist&apos;
avatar: &apos;/img/wod/SpeakerImages/Didier.png&apos;
replayLink: &apos;https://youtu.be/T57L-6LfUgw&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As you can see, it contains a lot of information: the name of the workshop, its description, whether it should be active in the database, its capacity, and much more...&lt;/p&gt;
&lt;p&gt;Every single field in this file will be leveraged to add a workshop to the workshop table of the database. Using a seeding script, every single YAML file present in each workshop folder of the wod-notebooks repositories (public and private) gets imported into the database as part of the api-db server install process.&lt;/p&gt;
&lt;p&gt;Leveraging the very same mechanism, one can add or update a workshop.&lt;/p&gt;
&lt;h3&gt;You want to add a new field in the workshop table. How should you proceed?&lt;/h3&gt;
&lt;p&gt;In order to achieve this, you will need to update the workshops.js file in the model directory within the wod-api-db folder on the api-db server.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Announcing Chapel 2.5!]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/announcing-chapel-2-5/</link><guid isPermaLink="false">https://developer.hpe.com/announcing-chapel-2-5/</guid><pubDate>Fri, 13 Jun 2025 08:44:29 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE GreenLake cloud integration with ServiceNow using webhooks]]></title><description><![CDATA[ServiceNow, as a third-party platform, provides robust capabilities to develop custom webhook handlers that can directly receive and process…]]></description><link>https://developer.hpe.com/hpe-greenlake-cloud-integration-with-servicenow-using-webhooks/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-greenlake-cloud-integration-with-servicenow-using-webhooks/</guid><pubDate>Fri, 06 Jun 2025 14:45:12 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;p&gt;&lt;a href=&quot;https://servicenow.com/&quot;&gt;ServiceNow&lt;/a&gt;, as a third-party platform, provides robust capabilities to develop custom webhook handlers that can directly receive and process external events in real time. These webhook handlers can be easily integrated with HPE GreenLake webhooks, allowing for smooth communication between systems. This flexibility enables teams to automate a wide range of workflows and trigger specific actions based on customer-defined requirements. By enabling seamless integration with other tools and services, this feature significantly enhances operational efficiency, reduces manual intervention, and ensures faster response to critical business events. Below are the steps to set up ServiceNow, develop a webhook handler, and integrate with the HPE GreenLake unified events framework to receive real-time events.&lt;/p&gt;
&lt;h2&gt;Optional step 1: Login and setup of a ServiceNow instance&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Note: You can ignore Step 1 if you already have a ServiceNow instance.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;ol&gt;
&lt;li&gt;Create a ServiceNow developer instance in ServiceNow using the following link: &lt;a href=&quot;https://signon.service-now.com/&quot;&gt;https://signon.service-now.com/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;After signing in, click on the Request Instance found at the top right corner of the screen.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/servicenow-webhook-blog-eceaeba3-a264-4fa7-af6b-434a1d26832d.jpeg&quot; alt=&quot;Request ServiceNow instance&quot; title=&quot;Request ServiceNow instance&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Once requested, it will take a minute to provision the instance. Click on your profile to check the status; it should be Online.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/servicenow-webhook-blog-a8242a8f-67d7-474e-8f06-60985ab1719a.jpeg&quot; alt=&quot;Manage ServiceNow instance&quot; title=&quot;Manage ServiceNow instance&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;Click on Manage instance password from the INSTANCE ACTION list, as shown above. Here, you will find the details about your ServiceNow instance such as its URL, username, and password. Log in to your instance URL with the provided credentials.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/servicenow-webhook-blog-0017ecb8-a01f-4ab8-9261-4d626d983491.jpeg&quot; alt=&quot;ServiceNow instance details&quot; title=&quot;ServiceNow instance details&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Step 2: Create a webhook endpoint&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;Once logged into your ServiceNow instance, click on All, search for Scripted REST API, and click on it.&lt;/li&gt;
&lt;li&gt;Click on New in the top-right corner and fill in the Name. Click outside the box; the App ID will automatically pop up. Then, click on Submit.&lt;/li&gt;
&lt;li&gt;Once it has been successfully submitted, you can search the Scripted REST APIs for the same name you created (ex: unified events). Click on that record, scroll to the bottom, and click on New.&lt;/li&gt;
&lt;li&gt;Now, fill in the details: give it an appropriate Name and relative path, set the HTTP method to POST, uncheck Require authentication, and click on Submit.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/servicenow-webhook-blog-cfb0ec5a-f2f3-4982-bf02-361f5efe9d73.jpeg&quot; alt=&quot;Registering a scripted REST API handler&quot; title=&quot;Registering a scripted REST API handler&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;5&quot;&gt;
&lt;li&gt;Paste the code snippet below into the Script section of the above-created record (change the secret on line 144).&lt;/li&gt;
&lt;/ol&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Note: To better understand the code below, which implements the verification challenge sent to the ServiceNow handler by HPE GreenLake cloud, refer to the documentation page &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/event/public/webhooks/#webhook-verification&quot;&gt;here&lt;/a&gt; or the following &lt;a href=&quot;https://developer.hpe.com/blog/getting-started-with-the-hpe-greenlake-cloud-eventing-framework/&quot;&gt;blog post&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;// Code Snippet for GreenLake Webhook handler
(function process( /*RESTAPIRequest*/ request, /*RESTAPIResponse*/ response) {
    var requestBody = request.body.data;
    var setAuth = gs.getProperty(&apos;setAuth&apos;);
    if (setAuth == &quot;true&quot;) {
        var reqHeaderAuthorization = request.getHeader(&quot;Authorization&quot;);
        if (reqHeaderAuthorization == null || reqHeaderAuthorization == &quot;&quot;) {
            response.setStatus(401);
            response.setHeader(&quot;Content-Type&quot;, &quot;application/json&quot;);
            var respWriter = response.getStreamWriter();
            respWriter.writeString(JSON.stringify({
                httpStatusCode: &quot;401&quot;,
                error: &quot;Authorization header is missing&quot;
            }));
            respWriter.writeString(&quot;\n&quot;);
            return;
        } else {
            var includesPrefixBearer = reqHeaderAuthorization.includes(&quot;Bearer&quot;);
            gs.info(&quot;includesPrefixBearer &quot; + includesPrefixBearer);
            if (includesPrefixBearer) {
                // oauth validation
                var token = reqHeaderAuthorization.split(&apos;Bearer&apos;)[1];
                token = token ? token.trim() : &apos;&apos;;
                var validatedToken = validateToken(token);
                if (!validatedToken) {
                    response.setStatus(401);
                    response.setHeader(&quot;Content-Type&quot;, &quot;application/json&quot;);
                    var respWriter2 = response.getStreamWriter();
                    respWriter2.writeString(JSON.stringify({
                        httpStatusCode: &quot;401&quot;,
                        error: &quot;Authorization failed&quot;
                    }));
                    respWriter2.writeString(&quot;\n&quot;);
                    return;
                }
            } else {
                // apikey validation
                var validatedResponse = checkAPIKEYValidation(reqHeaderAuthorization);
                if (!validatedResponse) {
                    response.setStatus(401);
                    response.setHeader(&quot;Content-Type&quot;, &quot;application/json&quot;);
                    var respWriter4 = response.getStreamWriter();
                    respWriter4.writeString(JSON.stringify({
                        httpStatusCode: &quot;401&quot;,
                        error: &quot;Authorization failed&quot;
                    }));
                    respWriter4.writeString(&quot;\n&quot;);
                    return response;
                }

            }
        }
    }
    eventType = requestBody.type;
    gs.info(&quot;event type &quot; + eventType);
    eventData = requestBody.data;

    if (eventType.includes(&quot;verification&quot;)) {
        gs.info(&quot;Received code verification request&quot;);
        codeChallengeStr = requestBody.data.challengeRequest;
        gs.info(&quot;codeChallenge Request: &quot; + codeChallengeStr);
        var generated;
        var resp;
        generated, resp = getHMACSecretFromLambda(codeChallengeStr);
        if (generated == false) {
            response.setStatus(500);
            response.setContentType(&quot;application/json&quot;);
            var writer = response.getStreamWriter();
            writer.writeString(JSON.stringify({
                httpStatusCode: &quot;500&quot;,
                error: JSON.stringify(resp)
            }));
            writer.writeString(&quot;\n&quot;);
            return;
        }
        response.setStatus(201);
        response.setHeader(&quot;Content-Type&quot;, &quot;application/json&quot;);
        var writerResp2 = response.getStreamWriter();
        writerResp2.writeString(JSON.stringify({
            httpStatusCode: &quot;201&quot;,
            verification: resp
        }));
        writerResp2.writeString(&quot;\n&quot;);
        pushEventToIncidentTable(JSON.stringify(eventData), eventType);
        return;
    } else if (eventType.includes(&quot;com.hpe&quot;)) {
        gs.info(&quot;Received event with event type: &quot; + requestBody.type);
        hpeWebhookSignature = request.getHeader(&quot;HPE-Webhook-Signature&quot;);
        eventData = requestBody;
        var generatedSignature;
        generated, resp = getHMACSecretFromLambda(JSON.stringify(eventData));
        generatedSignature = &quot;sha256=&quot; + resp;
        if (generatedSignature == hpeWebhookSignature) {
            gs.info(&quot;Received webhook signature matches the generated one&quot;);
        } else {
            gs.info(&quot;Webhook signatures do not match, received: &quot; + hpeWebhookSignature + &quot;, generated: &quot; + resp);
            response.setStatus(412);
            response.setHeader(&quot;Content-Type&quot;, &quot;application/json&quot;);
            var writerResp4 = response.getStreamWriter();
            writerResp4.writeString(JSON.stringify({
                httpStatusCode: &quot;412&quot;,
                error: &quot;hmac signature failed to match&quot;
            }));
            writerResp4.writeString(&quot;\n&quot;);
            return;
        }
    } else {
        gs.warn(&quot;Not a valid request body doesn&apos;t contains type in req body&quot;);
        response.setStatus(400);
        response.setHeader(&quot;Content-Type&quot;, &quot;application/json&quot;);
        var respWriter3 = response.getStreamWriter();
        respWriter3.writeString(JSON.stringify({
            httpStatusCode: &quot;400&quot;,
            error: &quot;Not a valid request body, type field is missing&quot;
        }));
        respWriter3.writeString(&quot;\n&quot;);
        return;
    }

    // push incident to incident table
    pushEventToIncidentTable(JSON.stringify(eventData), eventType);

    response.setStatus(201);
    response.setHeader(&quot;Content-Type&quot;, &quot;application/json&quot;);
    var writerResp3 = response.getStreamWriter();
    writerResp3.writeString(JSON.stringify({
        httpStatusCode: &quot;201&quot;,
    }));
    writerResp3.writeString(&quot;\n&quot;);

})(request, response);

function getHMACSecretFromLambda(codechallengeReq) {
    function encodeToHex(byteArray) {
        var hexChars = &quot;0123456789abcdef&quot;;
        var hexStr = &quot;&quot;;
        for (var i = 0; i &amp;#x3C; byteArray.length; i++) {
            var bt = byteArray[i] &amp;#x26; 0xFF;
            hexStr += hexChars[(bt &gt;&gt; 4) &amp;#x26; 0x0F];
            hexStr += hexChars[bt &amp;#x26; 0x0F];
        }
        return hexStr;
    }
    key = &quot;your secret&quot;;
    var mac = new GlideCertificateEncryption();
    var base64Key = GlideStringUtil.base64Encode(key);
    var base64HMAC = mac.generateMac(base64Key, &quot;HmacSHA256&quot;, codechallengeReq);
    var byteArray = GlideStringUtil.base64DecodeAsBytes(base64HMAC);
    var hexHMAC = encodeToHex(byteArray);
    gs.info(&quot;hmac generated sig &quot; + hexHMAC);
    return true, hexHMAC;
}

function pushEventToIncidentTable(description, shortDescription) {
    try {
        var gr = new GlideRecord(&apos;incident&apos;);
        gr.initialize();
        gr.short_description = &quot;HPE GLCP: &quot; + shortDescription;
        gr.description = description;
        gr.assigned_to = &quot;admin&quot;;
        gr.assignment_group = &quot;HPE GLCP&quot;;
        gr.caller_id = &quot;System Administrator&quot;;
        gr.urgency = 1;
        gr.priority = 1;
        gr.insert();
        gs.info(&quot;✅ Pushed event to Incident table: &quot; + gr.number);
    } catch (ex) {
        gs.warn(&quot;Error in pushing event to Incident table: &quot; + ex.message);
    }
}

function checkAPIKEYValidation(apikey) {
    var gr = new GlideRecord(&quot;u_unified_events_api_key&quot;);
    gr.query();
    var response = sn_ws_int.RESTAPIResponse
    var apiKeyValid = false;
    while (gr.next()) {
        var now = new GlideDateTime();
        var expiryTime = new GlideDateTime(gr.u_expiry_time);
        gs.info(&quot;expiryTime.after(now) &quot; + expiryTime.after(now));
        if (apikey == gr.u_api_key &amp;#x26;&amp;#x26; expiryTime.after(now)) {
            gs.info(&quot;API key is valid&quot;);
            apiKeyValid = true;
            break;
        }
    }
    if (!apiKeyValid) {
        gs.info(&quot;The requested API key is not valid or might have expired: &quot; + apikey);
        return false;
    }
    return true;
}


function decodeBase64Url(base64Url) {
    var base64 = base64Url.replace(/-/g, &apos;+&apos;).replace(/_/g, &apos;/&apos;);
    while (base64.length % 4) base64 += &apos;=&apos;;
    return GlideStringUtil.base64Decode(base64);
}

function validateToken(token) {
    var parts = token.split(&apos;.&apos;);
    if (parts.length === 3) {
        var header = decodeBase64Url(parts[0]);
        var payload = decodeBase64Url(parts[1]);
        gs.info(&quot;JWT Header: &quot; + header);
        gs.info(&quot;JWT Payload: &quot; + payload);
    } else {
        gs.error(&quot;Invalid JWT token format.&quot;);
        return false;
    }

    var payloadObj = JSON.parse(payload);
    gs.info(&quot;Expires At (epoch): &quot; + payloadObj.exp);

    var now = new Date().getTime() / 1000;
    // Reject the token if it has expired; otherwise it is considered valid.
    if (payloadObj.exp &amp;#x26;&amp;#x26; payloadObj.exp &amp;#x3C; now) {
        gs.error(&quot;Token is expired.&quot;);
        return false;
    }
    gs.info(&quot;Token is still valid&quot;);
    return true;
}
&lt;/code&gt;&lt;/pre&gt;
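&lt;p&gt;The &lt;code&gt;getHMACSecretFromLambda&lt;/code&gt; function above simply computes a hex-encoded HMAC-SHA256 of the payload with your shared secret. If you want to sanity-check the secret you configured, you can reproduce the same digest from a terminal; this is just a local check, assuming OpenSSL is available, and the challenge string below is only an example value:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Compute the HMAC-SHA256 hex digest of a sample challenge with your secret.
# The result should match what the handler returns in the verification response.
echo -n &apos;e37dc544569d43c73d088&apos; | openssl dgst -sha256 -hmac &apos;your secret&apos;
&lt;/code&gt;&lt;/pre&gt;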
&lt;h2&gt;Step 3: Set up the API key&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;You can set up an API key as an authentication method. To do this, you need to create a new table.&lt;/li&gt;
&lt;li&gt;Search for tables in All (header bar), look for System Definitions → Tables, and click on it. Click on New (top-right corner) and fill in the label name (ex: unified-events-api-key); the remaining details auto-populate when you click outside the box. Click Submit.&lt;/li&gt;
&lt;li&gt;Now, search the tables for the above-created name, making sure to select for text as a filter. Click on the table in the matched results.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/servicenow-webhook-blog-4d7ccf04-4626-45a9-998f-35d9a03acf6f.jpeg&quot; alt=&quot;Setting up API key&quot; title=&quot;Setting up API key&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;Now, add two more columns to the table, api_key and expiry_time, with the proper types, as shown below, by clicking on insert new rows.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/servicenow-webhook-blog-f470575f-6d7a-48d1-a98c-ef161a54a827.jpeg&quot; alt=&quot;Adding api_key and expiry_time fields&quot; title=&quot;Adding api_key and expiry_time fields&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;5&quot;&gt;
&lt;li&gt;Next, search for this unified-events-api-key table in All using the search bar. Click on New to add an API key: generate a new alphanumeric key (for example with the generator at &lt;a href=&quot;https://generate-random.org/api-key-generator?count=1&amp;#x26;length=128&amp;#x26;type=mixed-numbers&amp;#x26;prefix=&quot;&gt;https://generate-random.org/api-key-generator?count=1&amp;#x26;length=128&amp;#x26;type=mixed-numbers&amp;#x26;prefix=&lt;/a&gt;, or with the command shown after this list) and select an expiry time with the calendar picker. You can add more API keys with other expiry times for easy rotation.&lt;/li&gt;
&lt;/ol&gt;
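&lt;p&gt;If you prefer to generate the key locally instead of using the online generator, any random 128-character string will do. For instance, OpenSSL can produce a 128-character hex string (this is just one convenient option, not a requirement of the integration):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Generate 64 random bytes and print them as a 128-character hex string.
openssl rand -hex 64
&lt;/code&gt;&lt;/pre&gt;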
&lt;h2&gt;Step 4: Configure the incidents table to get notifications in ServiceNow.&lt;/h2&gt;
&lt;p&gt;This step will allow you to trigger notifications in ServiceNow so one can easily verify the triggered events.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;In the All section, search for Business rules (System definitions → Business rules) and click on it, then search for Abort changes and select one, as shown below.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/servicenow-webhook-blog-b59f8ece-6b27-4b70-af46-2bc83a062a3a.jpeg&quot; alt=&quot;Configuring the incidents table&quot; title=&quot;Configuring the incidents table&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Once you have clicked on the abort changes on a group, uncheck Insert, as shown below.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/servicenow-webhook-blog-eaabb59e-9fbb-4ddc-907b-8796ecd3c5e6.jpeg&quot; alt=&quot;Modifying abort change on group&quot; title=&quot;Modifying abort change on group&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Navigate to Business rules again and click on New to create a new rule. Name it (ex: trigger notification), select the Incident table, enable the checks for Active, Insert, and Update, and add a filter condition (“Short description starts with HPE”), as shown below. Click on Submit.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/servicenow-webhook-blog-c6a22d4d-56b9-41cb-8dfa-a8fdac99ab6d.jpeg&quot; alt=&quot;Adding description for event&quot; title=&quot;Adding description for event&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Step 5: Call the webhook and test it with ServiceNow&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;Get the webhook endpoint. To get the resource path, navigate to Scripted REST API, then search for the REST API name you created.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;It should look like:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/servicenoew-webhook-note1.jpg&quot; alt=&quot;Finding the endpoint URL&quot; title=&quot;Finding the endpoint URL&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Once all the steps have been completed, call the webhook from Postman, or copy the cURL command below and paste it into a terminal. Make sure to change the --location URL to that of your instance.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;curl --location &apos;https://&amp;#x3C;dev instance&gt;.service-now.com/api/1762185/unified_events/webhook1&apos; \
--header &apos;Content-Type: application/json&apos; \
--data &apos;{
  &quot;name&quot;: &quot;Grills&quot;,
  &quot;status&quot;: &quot;SOLD&quot;,
  &quot;description&quot;: &quot;Grills is a Beer&quot;,
  &quot;photo_url&quot;: &quot;http://baseurl/resourceId?action=confirm&quot;,
  &quot;data&quot;:{&quot;challengeRequest&quot;:&quot;e37dc544569d43c73d088&quot;},
  &quot;type&quot;:&quot;com.hpe.verification&quot;
}&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Now you will receive events in ServiceNow. Search for Incidents in All using the search bar, click on Assigned to me, and you should see the triggered events in the table.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/servicenow-webhook-blog-e980f286-a395-48b7-9f8e-8596f0ab008b.jpeg&quot; alt=&quot;Receiving events from GreenLake&quot; title=&quot;Receiving events from GreenLake&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;Also, you will get an event notification in ServiceNow web console, as shown below.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/servicenow-webhook-blog-2743dab1-f57e-4e38-9878-b9d78860ad1b.jpeg&quot; alt=&quot;Event notification in web console&quot; title=&quot;Event notification in web console&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;5&quot;&gt;
&lt;li&gt;One can debug from the logs in case of any failures. To do so, search for System Log from All in the search bar, and click on All in System Log to see all levels of logs.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/servicenow-webhook-blog-5c9b0e45-419b-4cb2-ab5d-62cd1d753712.jpeg&quot; alt=&quot;events coming in&quot; title=&quot;events coming in&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;6&quot;&gt;
&lt;li&gt;Now, one can register the ServiceNow webhook with the unified events framework, using the secret that was set in the script, and start receiving events in ServiceNow.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Leveraging ServiceNow’s webhook handling capabilities allows us to create highly responsive, automated workflows tailored to customer needs. By integrating seamlessly with existing systems, it not only streamlines operations but also enhances agility and scalability in managing real-time events across platforms.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Learn how to leverage modern IT Operations Management (ITOM) tools you may already have]]></title><link>https://developer.hpe.com/2025-june-04/</link><guid isPermaLink="false">https://developer.hpe.com/2025-june-04/</guid><pubDate>Wed, 04 Jun 2025 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[10 Myths About Scalable Parallel Programming Languages (Redux), Part 2: Past Failures and Future Attempts]]></title><link>https://developer.hpe.com/10-myths-about-scalable-parallel-programming-languages-redux-part-2-past-failures-and-future-attempts/</link><guid isPermaLink="false">https://developer.hpe.com/10-myths-about-scalable-parallel-programming-languages-redux-part-2-past-failures-and-future-attempts/</guid><pubDate>Mon, 02 Jun 2025 09:07:43 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Hybrid observability service – Part 4: Enabling the monitoring of physical devices in HPE GreenLake Flex Solutions]]></title><description><![CDATA[In my previous blog post, I introduced the steps to install and configure an agentless SSH-enabled integration module and associated…]]></description><link>https://developer.hpe.com/hybrid-observability-service-–-part-4-enabling-the-monitoring-of-physical-devices-in-hpe-greenlake-flex-solutions/</link><guid isPermaLink="false">https://developer.hpe.com/hybrid-observability-service-–-part-4-enabling-the-monitoring-of-physical-devices-in-hpe-greenlake-flex-solutions/</guid><pubDate>Tue, 27 May 2025 15:36:05 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;p&gt;In &lt;a href=&quot;https://developer.hpe.com/blog/hybrid-observability-service-%E2%80%93-part-3-enabling-the-monitoring-of-agentless-ssh-enabled-systems-in-hpe-greenlake-flex-solutions/&quot;&gt;my previous blog post&lt;/a&gt;, I introduced the steps to install and configure an agentless SSH-enabled integration module and associated monitoring templates. This enables you to discover and monitor a Linux-based operating system in the hybrid observability service powered by HPE OpsRamp Software.&lt;/p&gt;
&lt;p&gt;Continuing from the third part of this series, I’ll dive into the observability of a physical HPE ProLiant compute server with an Integrated Lights-Out (iLO) 5 and above, out-of-band management interface, and an HPE storage array. I’ll also explore creating and importing a dashboard to visualize the collected metrics in charts.&lt;/p&gt;
&lt;h2&gt;Discovering and monitoring of HPE physical servers via the Redfish API&lt;/h2&gt;
&lt;p&gt;The &lt;a href=&quot;https://glp.docs.opsramp.com/integrations/compute/server-hardware-monitoring-redfish/redfish-server/&quot;&gt;Redfish - Server integration module&lt;/a&gt; monitors and manages physical servers via the Redfish API, a modern, standardized interface for out-of-band hardware management.&lt;/p&gt;
&lt;p&gt;Redfish is a RESTful API designed by the Distributed Management Task Force (DMTF) for managing servers, storage, and other hardware components over the network. It’s supported by the management controllers of most modern server vendors, such as Dell iDRAC, HPE iLO, and NVIDIA BMCs.&lt;/p&gt;
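&lt;p&gt;To give a feel for what the integration module does under the hood, a Redfish endpoint is just an HTTPS API rooted at &lt;code&gt;/redfish/v1/&lt;/code&gt;. The sketch below shows a manual query against an iLO; the host name and credentials are placeholders, and the hybrid observability service performs these calls for you once the integration module is configured:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Manual Redfish queries against an iLO (placeholder host and credentials).
# -k skips certificate validation; prefer a trusted certificate in production.
curl -k -u admin:password https://ilo-hostname/redfish/v1/
curl -k -u admin:password https://ilo-hostname/redfish/v1/Systems/
&lt;/code&gt;&lt;/pre&gt;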
&lt;p&gt;The Redfish Server integration module enables the &lt;strong&gt;discovery&lt;/strong&gt; of physical server hardware and its components (CPU, memory, storage, power supplies, fans, temperature, and so on). Once discovered, Redfish Server monitors are &lt;strong&gt;automatically&lt;/strong&gt; applied to the resource via predefined Global Monitoring Templates and Global Device Management Policies to manage the health and status of the server hardware components. You can customize the Global Monitoring Templates by cloning the templates as appropriate and applying the cloned monitoring template to your resource. Refer to &lt;a href=&quot;https://developer.hpe.com/blog/hybrid-observability-service-%E2%80%93-part-3-enabling-the-monitoring-of-agentless-ssh-enabled-systems-in-hpe-greenlake-flex-solutions/&quot;&gt;Part 3 of the series&lt;/a&gt; to learn how to clone and apply a monitoring template to a resource.&lt;/p&gt;
&lt;h3&gt;Installing the Redfish – Server integration module&lt;/h3&gt;
&lt;p&gt;The Redfish Server integration module installation process for an HPE compute server is as follows:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Select the &lt;strong&gt;client account&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;From the navigation bar go to &lt;strong&gt;Setup &gt; Account &gt; Integrations&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click &lt;strong&gt;+ADD&lt;/strong&gt; to search for &lt;em&gt;Redfish&lt;/em&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Choose to install &lt;strong&gt;Redfish – Server&lt;/strong&gt; by selecting &lt;strong&gt;ADD&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select the application version and select &lt;strong&gt;+ADD&lt;/strong&gt; to input basic account information, such as the &lt;strong&gt;Name&lt;/strong&gt; for the configuration and the &lt;strong&gt;IP address/Host name&lt;/strong&gt; of iLO for the HPE compute server that should be accessible from the gateway collector appliance. Specify the &lt;strong&gt;Port&lt;/strong&gt; (typically port 443) and check the &lt;strong&gt;Is Secure&lt;/strong&gt; checkbox.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You must assign credentials for iLO interface. You can either:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Select existing credentials for access to the iLO interface.&lt;/li&gt;
&lt;li&gt;Create new credentials. Click &lt;strong&gt;+ADD&lt;/strong&gt; and specify the credentials name, a description, the username and password to access the iLO interface of the compute server.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Although optional, it is recommended to select:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;App Failure Notifications&lt;/strong&gt; to be notified when the hybrid observability service can’t collect or process data from the Redfish-enabled server.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Event Polling/Alert Configuration&lt;/strong&gt; for fetching events and alerts from the Integrated Management Log (IML) from the iLO. You can also select &lt;strong&gt;Alert On Root Resource&lt;/strong&gt; for alerts to appear on the main server resource in the hybrid observability service.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Do not check the option &lt;strong&gt;CLI Credentials&lt;/strong&gt; because it is only valid for HPE Edgeline servers.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;ol start=&quot;8&quot;&gt;
&lt;li&gt;
&lt;p&gt;In &lt;strong&gt;RESOURCE TYPE&lt;/strong&gt;, check &lt;strong&gt;ALL&lt;/strong&gt; or specify the hardware components to be discovered.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Define a &lt;strong&gt;Discovery Schedule&lt;/strong&gt;. You can select minutes, hourly, daily, weekly, or monthly schedule.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select &lt;strong&gt;ADD&lt;/strong&gt; to save the configuration of the integration module.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; In the &lt;strong&gt;ADVANCED SETTINGS&lt;/strong&gt;, notice the &lt;em&gt;&lt;strong&gt;Bypass Resource Reconciliation&lt;/strong&gt;&lt;/em&gt; option is checked by default. It controls how HPE OpsRamp handles previously discovered resources by another integration module when it runs discovery again.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;ol start=&quot;11&quot;&gt;
&lt;li&gt;
&lt;p&gt;Click &lt;strong&gt;NEXT&lt;/strong&gt; and &lt;strong&gt;select&lt;/strong&gt; a collector profile. That is, the gateway collector appliance used to install the integration module and discover the Redfish – Server hardware resource.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click &lt;strong&gt;FINISH&lt;/strong&gt; to complete the installation of the integration module and initiate the resource discovery and monitoring.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The gateway collector appliance displays with the status &lt;strong&gt;CONFIGURED&lt;/strong&gt; for the Redfish – Server integration module. After a few minutes, the status of the gateway collector appliance will change to &lt;strong&gt;RUNNING&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Visualizing the components of the Redfish - Server&lt;/h3&gt;
&lt;p&gt;After a few iterations of discovery and monitoring of the newly installed Redfish Server integration module:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Navigate to &lt;strong&gt;Infrastructure &gt; Search&lt;/strong&gt; and select &lt;strong&gt;COMPUTE &gt; Redfish – Server&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select either &lt;strong&gt;MANAGER&lt;/strong&gt;, &lt;strong&gt;SYSTEM&lt;/strong&gt;, or &lt;strong&gt;LINUX&lt;/strong&gt; in the &lt;strong&gt;MORE&lt;/strong&gt; option to view the sub-components of the compute device and the monitoring status (the color indication is green, which means the elements are now monitored):&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/integration-redfish-img7.png&quot; alt=&quot;Visualizing the components of Redfish - Server&quot; title=&quot;Visualizing the components of Redfish - Server&quot;&gt;&lt;/p&gt;
&lt;p&gt;When the hybrid observability service connects to a Redfish-enabled system, it typically discovers and maps multiple components as separate resources:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;Manager&lt;/strong&gt; represents the baseboard management controller (BMC) — that is the iLO management controller of the HPE ProLiant server — used for out-of-band monitoring. The Hybrid observability service connects to the controller for inventory, control, and monitoring of the server hardware.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;physical server resource&lt;/strong&gt; to monitor system-level hardware such as the processor, memory, and network interfaces.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;server with the operating system&lt;/strong&gt; (Linux in our example) detected via the agentless SSH integration module previously installed.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Visualizing the metrics and monitoring templates&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;From the &lt;strong&gt;Infrastructure &gt; Search &gt; COMPUTE &gt; Redfish – Server&lt;/strong&gt;, select a sub-component of the Redfish - Server to open the resource details page. For example, select the &lt;strong&gt;Compute System&lt;/strong&gt; or the &lt;strong&gt;Manager&lt;/strong&gt;. Click on the component. You will see multiple tabs with detailed information about the Redfish - Server sub-component, such as Overview, Metrics, Attributes, Components, and Inventory.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click on the &lt;strong&gt;Metrics&lt;/strong&gt; tab to view the metric graphs. A graph is plotted for each metric that is enabled in the Monitoring Template automatically assigned to the sub-component.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;On the upper right side of the Metrics tab, click the &lt;strong&gt;Settings&lt;/strong&gt; icon to &lt;em&gt;View Monitoring Configuration&lt;/em&gt;, such as the monitored metrics, the thresholds, and the availability monitor.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Navigate to &lt;strong&gt;ASSIGNED TEMPLATES&lt;/strong&gt; to see the global monitoring template(s) assigned to the sub-component of the Redfish – Server.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/integration-redfish-img7-bis2.png&quot; alt=&quot;Visualizing the metrics and monitoring templates&quot; title=&quot;Visualizing the metrics and monitoring templates&quot;&gt;&lt;/p&gt;
&lt;p&gt;Based on your IT operational requirements, you can clone and customize the Global Monitoring Templates, then apply them to the Redfish - Server components as appropriate. Refer to &lt;a href=&quot;https://developer.hpe.com/blog/hybrid-observability-service-%E2%80%93-part-3-enabling-the-monitoring-of-agentless-ssh-enabled-systems-in-hpe-greenlake-flex-solutions/&quot;&gt;Part 3 of the series&lt;/a&gt; to learn how to clone and apply a monitoring template to a resource.&lt;/p&gt;
&lt;h3&gt;Getting a real-time view of the Redfish – Server components&lt;/h3&gt;
&lt;p&gt;You can also explore &lt;strong&gt;Topology Maps&lt;/strong&gt;, a visual representation of relationships and dependencies between your infrastructure resources:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Go to &lt;strong&gt;Infrastructure &gt; Topology Maps&lt;/strong&gt; and select your Redfish - Server.&lt;/li&gt;
&lt;li&gt;You can also select a resource for a sub-component of the Redfish - Server from &lt;strong&gt;Infrastructure &gt; Search &gt; COMPUTE &gt; Redfish – Server&lt;/strong&gt;. Click the ellipsis (…) at the top right and select &lt;strong&gt;View Topology&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/integration-redfish-img8-bis.png&quot; alt=&quot;Topology Maps for Redfish - Server&quot; title=&quot;Topology Maps for Redfish - Server&quot;&gt;&lt;/p&gt;
&lt;p&gt;When viewing &lt;strong&gt;Topology Maps&lt;/strong&gt;, you can customize how much detail is shown and how deep the relationships go using &lt;strong&gt;View Settings&lt;/strong&gt; (gear icon) on the top right and set the &lt;strong&gt;Depth&lt;/strong&gt; and &lt;strong&gt;Layout&lt;/strong&gt; options.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/integration-redfish-img9.png&quot; alt=&quot;Depth and Layout of the Topology Map&quot; title=&quot;Depth and Layout of the Topology Map&quot;&gt;&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;depth&lt;/strong&gt; setting controls how many levels of relationships you see in the topology view, and the &lt;strong&gt;Layout&lt;/strong&gt; defines how resources and their relationships are visually arranged on the screen:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/integration-redfish-img10.png&quot; alt=&quot;Topology Map with depth and layout&quot; title=&quot;Topology Map with depth and layout&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Visualizing monitoring data in a dashboard&lt;/h3&gt;
&lt;p&gt;You can use the &lt;strong&gt;dashboards&lt;/strong&gt; (classic dashboard and modern dashboard 2.0). A dashboard is a collection of widgets and tiles that present visualizations of your environment’s monitoring data, helping you gain a greater understanding of the state of the infrastructure.&lt;/p&gt;
&lt;h2&gt;Discovering and monitoring of HPE Storage Array&lt;/h2&gt;
&lt;p&gt;HPE OpsRamp provides integration modules to discover and monitor storage arrays from many vendors, including HPE, Dell, Hitachi, NetApp, Vast and more.&lt;/p&gt;
&lt;p&gt;The &lt;a href=&quot;https://glp.docs.opsramp.com/integrations/storage/hpe/hpe-alletra/&quot;&gt;HPE Alletra MP/9000 integration module&lt;/a&gt; is used for discovering, monitoring and managing HPE Alletra MP/9000 storage systems. This module helps you pull detailed health, performance, and inventory data from the storage arrays (disks, controllers, fans, capacity utilization, input/output operations per second, latency, throughput and so on).&lt;/p&gt;
&lt;p&gt;Once discovered, storage monitors are &lt;strong&gt;automatically&lt;/strong&gt; applied to the resource via predefined Global Monitoring Templates and Global Device Management Policies, to manage the health and status of the storage system components.&lt;/p&gt;
&lt;h3&gt;Installing the storage array integration module&lt;/h3&gt;
&lt;p&gt;The storage array integration module installation process is very similar to the Redfish – Server integration module:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Select the &lt;strong&gt;client account&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In the navigation bar, go to &lt;strong&gt;Setup &gt; Account &gt; Integrations&lt;/strong&gt; and click &lt;strong&gt;+ADD&lt;/strong&gt; to search for &lt;strong&gt;Storage&lt;/strong&gt; category.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Choose to install &lt;strong&gt;HPE Alletra MP/9000 Series&lt;/strong&gt; by selecting &lt;strong&gt;ADD&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select the &lt;strong&gt;application version&lt;/strong&gt; and select &lt;strong&gt;+ADD&lt;/strong&gt; to input basic account information such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;Name&lt;/strong&gt; for the configuration.&lt;/li&gt;
&lt;li&gt;The storage array type (Alletra MP or Alletra 9000).&lt;/li&gt;
&lt;li&gt;Check the &lt;strong&gt;Is Secure&lt;/strong&gt; checkbox.&lt;/li&gt;
&lt;li&gt;Specify the &lt;strong&gt;IP address/Host name&lt;/strong&gt; of the storage array that should be accessible from the gateway collector appliance.&lt;/li&gt;
&lt;li&gt;Specify the &lt;strong&gt;WSAPI port&lt;/strong&gt; (port 443) and &lt;strong&gt;SSH port&lt;/strong&gt; (port 22).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select existing credentials that apply to the storage array or click &lt;strong&gt;+ADD&lt;/strong&gt; to create the Credentials for the storage array account. Specify the credentials name, a description, the username, and password.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Although optional, it is recommended to select:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;App Failure Notification&lt;/strong&gt; to be notified by an event or an alert when availability is impacted due to an issue within the HPE Alletra MP/9000 storage environment.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Alert Polling&lt;/strong&gt; that allows HPE OpsRamp to check for and collect alerts generated by the storage array.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;API Timeouts&lt;/strong&gt; to define the maximum amount of time HPE OpsRamp will wait for a response from the storage array when making an API call to fetch data such as alerts, events, metrics, or device information.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In &lt;strong&gt;RESOURCE TYPE&lt;/strong&gt;, check &lt;strong&gt;ALL&lt;/strong&gt; or specify the hardware components to be discovered.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The last step is to define a &lt;strong&gt;Discovery Schedule&lt;/strong&gt;. You can select a minute, hourly, daily, weekly, or monthly schedule.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select &lt;strong&gt;ADD&lt;/strong&gt; to save the configuration of the integration module.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click &lt;strong&gt;NEXT&lt;/strong&gt; and &lt;strong&gt;select&lt;/strong&gt; a collector profile. That is, the gateway collector appliance that will be used to install the integration module and discover the storage array resource.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click &lt;strong&gt;FINISH&lt;/strong&gt; to complete the installation of the integration module and initiate the resource discovery.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The gateway collector appliance is displayed with the status &lt;strong&gt;CONFIGURED&lt;/strong&gt; for the storage array integration module. After a few minutes, the status of the gateway collector appliance will change to &lt;strong&gt;RUNNING&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Visualizing the components of the storage array&lt;/h3&gt;
&lt;p&gt;After a few iterations of discovery and monitoring of the newly installed storage array integration module:&lt;/p&gt;
&lt;p&gt;Navigate to &lt;strong&gt;Infrastructure &gt; Search&lt;/strong&gt; and select &lt;strong&gt;STORAGE &gt; HPE Alletra MP/9000 Series&lt;/strong&gt; to check the monitoring status (color indication is green) of the storage array:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/integration-alletramp-img5.png&quot; alt=&quot;Visualizing the components of the storage array&quot; title=&quot;Visualizing the components of the storage array&quot;&gt;&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;HPE Alletra Storage System&lt;/strong&gt; is the top-level resource being discovered and monitored. It refers to the entire managed storage array instance — a physical and logical collection of components that operate together to deliver enterprise-grade storage services. Components include controllers, disks, virtual storage pool, volumes, network interfaces, chassis, cache modules, and so on.&lt;/p&gt;
&lt;p&gt;You can select each component of the storage array to visualize its monitoring status, attributes, and metrics.&lt;/p&gt;
&lt;h3&gt;Checking the Monitoring Templates assigned to the storage array&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Select a component of the storage array to open the resource details page.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click on the &lt;strong&gt;Metrics&lt;/strong&gt; tab to view the metric graphs.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;On the upper right side of the &lt;strong&gt;Metrics&lt;/strong&gt; tab, click the &lt;strong&gt;Settings&lt;/strong&gt; icon to view monitoring configurations such as the monitored metrics, the thresholds, and the availability monitor.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Navigate to &lt;strong&gt;ASSIGNED TEMPLATES&lt;/strong&gt; to see the global monitoring templates assigned to the component of the storage array.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Based on your IT operational requirements, you can clone the Global Monitoring Templates and customize them, then apply them to the storage array components as appropriate.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Important Note:&lt;/strong&gt; For the HPE Alletra MP/9000 storage, although a Global Monitoring Template is automatically applied to the &lt;strong&gt;HPE Alletra Storage System&lt;/strong&gt; component, &lt;strong&gt;no availability monitor&lt;/strong&gt; is assigned. This feature will be made available in an upcoming HPE OpsRamp release. Meanwhile, to enable monitoring of the storage system component, you need to clone the monitoring template assigned to the &lt;strong&gt;HPE Alletra Storage System&lt;/strong&gt;. Then you must customize it to enable one or two availability monitors for the metrics that are more important for the resource. Refer to &lt;a href=&quot;https://developer.hpe.com/blog/hybrid-observability-service-%E2%80%93-part-3-enabling-the-monitoring-of-agentless-ssh-enabled-systems-in-hpe-greenlake-flex-solutions/&quot;&gt;part 3 of the series&lt;/a&gt; to learn how to clone and apply a monitoring template to a resource.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;Getting a real-time view of the storage array components&lt;/h3&gt;
&lt;p&gt;You can explore &lt;strong&gt;Topology Maps&lt;/strong&gt; to get a graphical, real-time view of the storage array components, their relationships, and dependencies.&lt;/p&gt;
&lt;p&gt;Navigate to &lt;strong&gt;Infrastructure &gt; Search&lt;/strong&gt; and select the storage system, click the ellipsis (…) at the top right, and select &lt;strong&gt;View Topology&lt;/strong&gt;. Use the &lt;strong&gt;View Settings&lt;/strong&gt; (gear icon) on the top right to set the &lt;strong&gt;Depth&lt;/strong&gt; and &lt;strong&gt;Layout&lt;/strong&gt; of the graphical representation.&lt;/p&gt;
&lt;p&gt;Finally, you can use the dashboards (the classic dashboard and the modern dashboard 2.0) to visualize the monitoring data of your environment.&lt;/p&gt;
&lt;h2&gt;Dashboard&lt;/h2&gt;
&lt;p&gt;Once monitoring tools have collected metrics, the next step in creating actionable insights is to visualize them in a dashboard. A dashboard is a collection of charts based on tiles and widgets for visualizing metrics data measured over intervals of time. The hybrid observability service powered by HPE OpsRamp leverages the open-source Prometheus Query Language (PromQL) for dashboard creation.&lt;/p&gt;
&lt;p&gt;The purpose of creating customizable dashboards is to surface anomalies so that team members can spot and troubleshoot issues quickly.&lt;/p&gt;
&lt;p&gt;You can choose from the classic dashboard or dashboard 2.0. HPE &lt;strong&gt;recommends leveraging the modern dashboard 2.0&lt;/strong&gt; for its advanced capabilities.&lt;/p&gt;
&lt;p&gt;Dashboard 2.0 allows you to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Create a dashboard with common chart types for observability such as line charts, bar charts, value charts, list charts, gauge charts, honeycomb charts, or pie charts.&lt;/li&gt;
&lt;li&gt;Leverage a set of predefined, &lt;strong&gt;curated&lt;/strong&gt; dashboards. These are pre-built, use-case-specific dashboards that give IT teams immediate visibility into key areas of IT operations without having to build dashboards from scratch.&lt;/li&gt;
&lt;li&gt;Group related dashboards by function, project, or team into the default collection (&lt;strong&gt;My Dashboards&lt;/strong&gt;) or create a new collection.&lt;/li&gt;
&lt;li&gt;Copy a curated dashboard and customize it according to your operational requirements.&lt;/li&gt;
&lt;li&gt;Designate a dashboard as the default dashboard.&lt;/li&gt;
&lt;li&gt;Export a dashboard as a JSON file and import it into another environment or client account for your partner domain.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To learn more about dashboards, see the &lt;a href=&quot;https://glp.docs.opsramp.com/platform-features/feature-guides/dashboards/&quot;&gt;HPE OpsRamp Dashboards documentation&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Creating a dashboard&lt;/h3&gt;
&lt;p&gt;As a &lt;em&gt;partner administrator&lt;/em&gt;, you can create and manage dashboards. In this example, I’ll show you the sequence of steps to create a new dashboard 2.0 in the default collection for the client account &lt;em&gt;DreamCompany&lt;/em&gt; to monitor and manage the discovered Redfish - Server.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Select the &lt;strong&gt;client account&lt;/strong&gt;, navigate to &lt;strong&gt;Dashboards&lt;/strong&gt;, and click &lt;strong&gt;Dashboard&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click the menu (&lt;strong&gt;three stacked horizontal lines&lt;/strong&gt;), also referred to as the &lt;em&gt;hamburger icon&lt;/em&gt;, on the left side of the dashboard.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click &lt;strong&gt;+CREATE DASHBOARD&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Specify a dashboard name; the dashboard is placed in the default collection &lt;strong&gt;My Dashboards&lt;/strong&gt; unless you specify a new collection name.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click &lt;strong&gt;CREATE&lt;/strong&gt; to start designing the dashboard by adding tiles and widgets.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/create-dashboard-img2.png&quot; alt=&quot;Creating a dashboard 2.0&quot; title=&quot;Creating a dashboard 2.0&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;6&quot;&gt;
&lt;li&gt;
&lt;p&gt;Click &lt;strong&gt;CREATE TILE&lt;/strong&gt; or &lt;strong&gt;+&lt;/strong&gt; on the toolbar.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select &lt;strong&gt;Text &amp;#x26; Images&lt;/strong&gt; tile to create a header.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Select &lt;strong&gt;Header&lt;/strong&gt; tab.&lt;/li&gt;
&lt;li&gt;You can set the font, the size, and the background color of the header.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;ADD TILE&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click &lt;strong&gt;+&lt;/strong&gt; in the toolbar to add another tile. Here, I will add a &lt;strong&gt;Metric&lt;/strong&gt; tile to visualize metrics coming from the Redfish - Server:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Select &lt;strong&gt;Build my own&lt;/strong&gt; tab.&lt;/li&gt;
&lt;li&gt;In the &lt;strong&gt;DATA&lt;/strong&gt; section, click &lt;strong&gt;+QUERY&lt;/strong&gt; and select the metric that is important for you. For example, I selected the &lt;strong&gt;ComputeSystem Average CPU&lt;/strong&gt; and the &lt;strong&gt;Line/Bar&lt;/strong&gt; chart. You can also select the duration over which the selected metric data is displayed and calculated, for example, last 1 hour or last 24 hours.&lt;/li&gt;
&lt;li&gt;Optionally, you can define &lt;strong&gt;filters and operations&lt;/strong&gt; to help refine what data is shown and how it’s calculated.&lt;/li&gt;
&lt;li&gt;Click the optional &lt;strong&gt;Legend&lt;/strong&gt; icon and enter double curly brackets &lt;strong&gt;{{&lt;/strong&gt; in the &lt;strong&gt;Query a Legend&lt;/strong&gt; field to see a list of options. For example, I specify the &lt;em&gt;name&lt;/em&gt; and &lt;em&gt;rootResourceIp&lt;/em&gt; of the resource from the list of options to identify the resource in the chart. Click &lt;strong&gt;DONE.&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/create-dashboard-img7.png&quot; alt=&quot;Metric tile&quot; title=&quot;Metric tile&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/create-dashboard-img8.png&quot; alt=&quot;Add a legend&quot; title=&quot;Add a legend&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;9&quot;&gt;
&lt;li&gt;In the &lt;strong&gt;VISUALIZATION&lt;/strong&gt; section, you can specify a header for the tile. For example, &lt;em&gt;Avg CPU Utilization&lt;/em&gt;. You can select &lt;strong&gt;Line&lt;/strong&gt; or &lt;strong&gt;Bar&lt;/strong&gt; as the graph type and its color.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/create-dashboard-img9.png&quot; alt=&quot;Select header and graph for the metric tile&quot; title=&quot;Select header and graph for the metric tile&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;10&quot;&gt;
&lt;li&gt;You can also define &lt;strong&gt;axis labels&lt;/strong&gt; for the chart and &lt;strong&gt;thresholds&lt;/strong&gt;. Thresholds define &lt;strong&gt;value ranges&lt;/strong&gt; for a metric and assign them a &lt;strong&gt;color/status&lt;/strong&gt; (OK, Warning, Critical) to reflect performance or health. They help visually flag performance degradation or anomalies in the metrics so that you can take corrective actions.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/create-dashboard-img10.png&quot; alt=&quot;Define axis labels and thresholds&quot; title=&quot;Define axis labels and thresholds&quot;&gt;&lt;/p&gt;
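&lt;p&gt;To make the idea of thresholds concrete, here is a minimal Python sketch using purely illustrative values (they are not HPE OpsRamp defaults) that shows how value ranges map a metric reading to an OK, Warning, or Critical status:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Illustrative only: HPE OpsRamp evaluates thresholds in the service itself;
# this sketch simply mirrors the concept of mapping value ranges to statuses.
def classify(value, warning=80.0, critical=90.0):
    &quot;&quot;&quot;Return the status for a metric value given warning/critical thresholds.&quot;&quot;&quot;
    if value &gt;= critical:
        return &quot;Critical&quot;
    if value &gt;= warning:
        return &quot;Warning&quot;
    return &quot;OK&quot;

for cpu_percent in (42.0, 85.5, 97.3):
    print(cpu_percent, classify(cpu_percent))  # OK, Warning, Critical
&lt;/code&gt;&lt;/pre&gt;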
&lt;ol start=&quot;11&quot;&gt;
&lt;li&gt;
&lt;p&gt;Click &lt;strong&gt;SAVE&lt;/strong&gt; or &lt;strong&gt;CREATE&lt;/strong&gt; to save the tile.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To resize a tile, hover the mouse over the bottom-right corner of the tile in the dashboard. You will see a resize handle shown by a diagonal arrow. Click and drag the corner to increase or decrease the tile size both horizontally and vertically.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To edit a tile, hover the mouse to the top right of the tile, click the ellipsis (…), and select &lt;strong&gt;Edit&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To move a tile, hover the mouse over the top of the tile, then drag it to the desired position on your dashboard.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You can add other tiles to the dashboard, such as additional metric tiles, a resource tile, an alert tile, and so on.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Here is an example of a basic dashboard for monitoring the status of the Redfish - Server for the client account &lt;em&gt;DreamCompany&lt;/em&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/create-dashboard-img14.png&quot; alt=&quot;Example of dashboard to monitor Redfish - Server&quot; title=&quot;Example of dashboard to monitor Redfish - Server&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Importing a dashboard&lt;/h3&gt;
&lt;p&gt;The &lt;strong&gt;Import Dashboard&lt;/strong&gt; feature allows you to upload and reuse dashboard configurations, usually in JSON format. This helps replicate dashboards, share dashboards between teams, and quickly deploy standard monitoring views.&lt;/p&gt;
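&lt;p&gt;Before importing, it can help to confirm that the exported file is valid JSON. Here is a minimal sketch, assuming a hypothetical file name exported from another environment (the dashboard JSON schema itself is defined by HPE OpsRamp and is not reproduced here):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Minimal sanity check of an exported dashboard file before importing it into
# another client account. The file name is a placeholder.
import json
from pathlib import Path

path = Path(&quot;exported-dashboard.json&quot;)
try:
    dashboard = json.loads(path.read_text())
except (OSError, json.JSONDecodeError) as exc:
    raise SystemExit(f&quot;{path} cannot be imported: {exc}&quot;)

print(f&quot;{path} parsed successfully; top-level keys: {list(dashboard)[:10]}&quot;)
&lt;/code&gt;&lt;/pre&gt;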
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Select the &lt;strong&gt;client account&lt;/strong&gt;, navigate to &lt;strong&gt;Dashboards&lt;/strong&gt; and click &lt;strong&gt;Dashboard&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click the menu icon (three stacked horizontal lines) on the left side.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click &lt;strong&gt;IMPORT DASHBOARD&lt;/strong&gt; and upload a dashboard JSON file into the default collection &lt;strong&gt;My Dashboards&lt;/strong&gt;. Click &lt;strong&gt;IMPORT&lt;/strong&gt; to import the dashboard.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The imported dashboard appears in the collection, and you can edit each tile to adjust the data it displays, just as you would when creating a new dashboard. You can also add, remove, move, and resize tiles.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;The hybrid observability solution, powered by HPE OpsRamp Software, discovers the compute, network, and storage infrastructure, the applications and workloads they host, and their dependencies. It then observes and monitors health, performance, and capacity through events, metrics, logs, traces, and network flows, providing customers with true end-to-end visibility.&lt;/p&gt;
&lt;p&gt;This blog series walked you through the sequence of steps to provision and activate the hybrid observability service in an HPE GreenLake workspace, and to set up the service to discover, monitor, and observe the health, performance, and availability of agentless SSH-enabled systems and physical infrastructure devices included in the HPE GreenLake Flex Solutions contract.&lt;/p&gt;
&lt;p&gt;There’s also a series of video tutorials on the &lt;a href=&quot;https://developer.hpe.com/greenlake/hybrid-observability-flex-solutions/home/&quot;&gt;Hybrid observability in HPE GreenLake Flex Solutions landing page&lt;/a&gt; that walks you through the contents described in this blog series for readers who prefer an audio-visual learning experience.&lt;/p&gt;
&lt;p&gt;To resolve issues with HPE GreenLake Flex Solutions or HPE hybrid observability service powered by HPE OpsRamp, contact the support team. While logged in to your HPE GreenLake workspace:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Click &lt;strong&gt;Help &amp;#x26; Support&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Under &lt;strong&gt;Help&lt;/strong&gt;, select &lt;strong&gt;OpsRamp&lt;/strong&gt; or &lt;strong&gt;HPE GreenLake Flex Solutions&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Create New Case&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;</content:encoded></item><item><title><![CDATA[Hybrid observability service – Part 3: Enabling the monitoring of agentless SSH-enabled systems in HPE GreenLake Flex Solutions]]></title><description><![CDATA[In my previous blog post, I introduced the steps to install and configure a gateway collector appliance for a specific tenant client account…]]></description><link>https://developer.hpe.com/hybrid-observability-service-–-part-3-enabling-the-monitoring-of-agentless-ssh-enabled-systems-in-hpe-greenlake-flex-solutions/</link><guid isPermaLink="false">https://developer.hpe.com/hybrid-observability-service-–-part-3-enabling-the-monitoring-of-agentless-ssh-enabled-systems-in-hpe-greenlake-flex-solutions/</guid><pubDate>Mon, 26 May 2025 15:35:32 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;p&gt;In &lt;a href=&quot;https://developer.hpe.com/blog/hybrid-observability-service-%E2%80%93-part-2-initial-configuration-to-enable-the-discovery-of-resources-in-hpe-greenlake-flex-solutions/&quot;&gt;my previous blog post&lt;/a&gt;, I introduced the steps to install and configure a gateway collector appliance for a specific tenant client account in the hybrid observability service powered by HPE OpsRamp Software in HPE GreenLake Flex Solutions.&lt;/p&gt;
&lt;p&gt;The gateway collector appliance is a prerequisite to enable the discovery of infrastructure devices in HPE GreenLake Flex Solutions before they can be monitored.&lt;/p&gt;
&lt;p&gt;Continuing from the second part of this series, I’ll now explore:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;How to deploy an integration module to discover an agentless SSH-enabled server running Linux as the operating system.&lt;/li&gt;
&lt;li&gt;How to apply a monitoring template to &lt;em&gt;non SDK resources&lt;/em&gt; such as the agentless SSH resource to monitor it.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In &lt;a href=&quot;https://developer.hpe.com/blog/hybrid-observability-service-%E2%80%93-part-4-enabling-the-monitoring-of-physical-devices-in-hpe-greenlake-flex-solutions/&quot;&gt;the last Part&lt;/a&gt; of the blog series, I’ll explore the observability of physical devices and their monitoring using dashboards to visualize the collected metrics in charts.&lt;/p&gt;
&lt;h2&gt;Applying integration modules and monitoring configuration to discover and monitor resources&lt;/h2&gt;
&lt;h3&gt;Integration modules&lt;/h3&gt;
&lt;p&gt;Once the gateway collector appliance is installed and configured to communicate with the hybrid observability service in the cloud, the next step is to install &lt;strong&gt;Integration modules&lt;/strong&gt; to discover IT infrastructure devices such as compute, storage, and network equipment. Integration modules also enable data exchange between the service in the cloud and the IT infrastructure devices included in the HPE GreenLake Flex Solutions contract.&lt;/p&gt;
&lt;p&gt;Once a resource is discovered, you can apply a &lt;strong&gt;Monitoring Template&lt;/strong&gt; to periodically track the resource&apos;s performance, availability, and health based on the configured metrics.&lt;/p&gt;
&lt;h3&gt;Monitoring Templates&lt;/h3&gt;
&lt;p&gt;Monitoring is a method for periodically querying IT resources and forwarding resource metrics to the service for processing and analysis. &lt;a href=&quot;https://glp.docs.opsramp.com/solutions/monitoring/template/&quot;&gt;Monitoring Templates&lt;/a&gt; are used to apply standardized monitoring configurations to resources. These templates define the &lt;strong&gt;metrics&lt;/strong&gt; and &lt;strong&gt;thresholds&lt;/strong&gt; that will be monitored.&lt;/p&gt;
&lt;p&gt;The hybrid observability service powered by HPE OpsRamp comes with a library of pre-configured &lt;strong&gt;Global Monitoring Templates&lt;/strong&gt; that cover popular resource types. These built-in Global Monitoring Templates apply &lt;strong&gt;automatically&lt;/strong&gt; through &lt;strong&gt;Device Management Policies&lt;/strong&gt; to a wide range of resource types such as Operating System (Linux, Windows), compute servers, network devices, storage arrays, applications, virtualization and databases.&lt;/p&gt;
&lt;h3&gt;Device Management Policies&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://glp.docs.opsramp.com/platform-features/feature-guides/resource-management/&quot;&gt;Device Management Policies&lt;/a&gt; are rules that &lt;strong&gt;automate&lt;/strong&gt; managing and monitoring resources after discovery.&lt;/p&gt;
&lt;p&gt;There are predefined &lt;strong&gt;Global Device Management Policies&lt;/strong&gt; available for common platforms and device types. These built-in policies are designed to help you &lt;strong&gt;automatically&lt;/strong&gt; bind the Monitoring Templates to discovered resources based on matching criteria (for example, &lt;em&gt;Native Type = Linux&lt;/em&gt;, &lt;em&gt;Operating System contains CentOS&lt;/em&gt;, &lt;em&gt;Make = HPE&lt;/em&gt;).&lt;/p&gt;
&lt;p&gt;There might be some scenarios where you need to define a Monitoring Template and apply it either &lt;strong&gt;manually&lt;/strong&gt; to a specific resource or &lt;strong&gt;automatically&lt;/strong&gt; via a Device Management Policy to monitor a set of resources of the same type. For example, discovered agentless resources (also referred to as Non SDK resources) require a Monitoring Template to be defined and applied manually or via a Device Management Policy to a single resource or a set of resources.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; You can navigate to &lt;strong&gt;Setup &gt; Setup &gt; Clients&lt;/strong&gt; to &lt;strong&gt;edit&lt;/strong&gt; the &lt;em&gt;client account&lt;/em&gt; and check the &lt;strong&gt;Enable Global Policies&lt;/strong&gt; checkbox to automatically apply a Global Device Management Policy, &lt;strong&gt;if one exists&lt;/strong&gt;, for &lt;strong&gt;Non SDK resources&lt;/strong&gt; such as the SSH-enabled system. In this post, I have &lt;strong&gt;not&lt;/strong&gt; enabled this option for the &lt;em&gt;client account&lt;/em&gt;, so that I can explore how to clone a built-in Global Monitoring Template, customize it, and assign it to the Linux system to enable its monitoring.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Agentless discovery and monitoring of SSH-enabled devices&lt;/h2&gt;
&lt;p&gt;Let us start by deploying an Integration module and creating a Monitoring Template to discover and monitor an &lt;a href=&quot;https://glp.docs.opsramp.com/integrations/os/linux-os-agentless-discovery/&quot;&gt;agentless SSH-enabled system&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The hybrid observability service supports agentless monitoring. Agentless monitors use the &lt;strong&gt;gateway collector appliance&lt;/strong&gt; to discover agentless IT infrastructure resources via SSH and monitor them to track their health, performance, and availability.&lt;/p&gt;
&lt;p&gt;The SSH Agentless Integration module discovers Linux/Unix-based systems &lt;strong&gt;without installing an agent&lt;/strong&gt;, by securely connecting to the device over SSH via the gateway collector appliance. The Monitoring Template then needs to be assigned to the target agentless system for monitoring.&lt;/p&gt;
&lt;h3&gt;Requirements:&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;A gateway collector appliance is installed and registered.&lt;/li&gt;
&lt;li&gt;The gateway collector appliance has network access (through SSH protocol – typically on port 22) to connect to the target Linux server.&lt;/li&gt;
&lt;li&gt;IP address of the target SSH-based machine(s).&lt;/li&gt;
&lt;li&gt;SSH credentials (username and password) of the target machine.&lt;/li&gt;
&lt;li&gt;A Monitoring Template and Device Management Policy to monitor the target resource through the gateway collector appliance and collect data for monitoring the metrics and resource availability.&lt;/li&gt;
&lt;/ul&gt;
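&lt;p&gt;Before installing the integration module, you may want to verify from the gateway collector appliance’s network segment that the target server meets the SSH requirements listed above. Here is a minimal sketch using the third-party &lt;em&gt;paramiko&lt;/em&gt; library (&lt;code&gt;pip install paramiko&lt;/code&gt;); the host, user name, and password are placeholders for your own values:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Minimal pre-check, run from a machine on the same network segment as the gateway
# collector appliance: verifies that the target server accepts the SSH credentials
# that the integration module will use. All values below are placeholders.
import paramiko  # third-party library: pip install paramiko

HOST = &quot;192.0.2.10&quot;        # IP address of the target SSH-enabled Linux server
USERNAME = &quot;monitor-user&quot;  # SSH user configured for discovery
PASSWORD = &quot;change-me&quot;     # SSH password for that user

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
try:
    client.connect(HOST, port=22, username=USERNAME, password=PASSWORD, timeout=10)
    _, stdout, _ = client.exec_command(&quot;uname -a&quot;)
    print(&quot;SSH check OK:&quot;, stdout.read().decode().strip())
finally:
    client.close()
&lt;/code&gt;&lt;/pre&gt;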
&lt;h3&gt;Install the Linux agentless integration module in a client account&lt;/h3&gt;
&lt;p&gt;As Partner Administrator, let’s install an &lt;strong&gt;agentless SSH Integration module&lt;/strong&gt; for the &lt;em&gt;client account&lt;/em&gt; to discover an HPE ProLiant server added to the HPE GreenLake workspace device inventory. This compute server runs the Linux operating system. The Linux OS agentless SSH Integration module facilitates agentless discovery of SSH-enabled devices using the gateway collector appliance. Here is the procedure:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;To discover a compute resource, select the &lt;strong&gt;client account&lt;/strong&gt;, then select &lt;strong&gt;Setup&lt;/strong&gt; in the navigation bar, and select &lt;strong&gt;Account&lt;/strong&gt; in the drop-down menu to navigate to the account details page.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select the &lt;strong&gt;Integrations&lt;/strong&gt; tile to access the Integrations App store and click &lt;strong&gt;+ADD&lt;/strong&gt; to display all the integration applications available for the &lt;em&gt;client account&lt;/em&gt;. If this is the first integration module you install, click &lt;strong&gt;+ADD&lt;/strong&gt; directly from the Integrations tile.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Filter the list of available integrations by a variety of &lt;strong&gt;Categories&lt;/strong&gt; or use the &lt;strong&gt;Search&lt;/strong&gt; feature. For agentless SSH-enabled devices, you can filter by the &lt;strong&gt;OS&lt;/strong&gt; category or type &lt;strong&gt;SSH&lt;/strong&gt; in the Search field.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/agentless-ssh-integration-img2.png&quot; alt=&quot;Filter by category or Search&quot; title=&quot;Filter by category or Search&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;
&lt;p&gt;Choose the &lt;strong&gt;Linux OS – Agentless (SSH)&lt;/strong&gt; integration module and select &lt;strong&gt;ADD&lt;/strong&gt; to install it.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Navigate to the agentless-SSH configuration page. The first step in the process is to input &lt;strong&gt;Basic Account Information&lt;/strong&gt;. Select &lt;strong&gt;+ADD&lt;/strong&gt; at the right of the &lt;strong&gt;Configuration&lt;/strong&gt; tab, then enter a &lt;strong&gt;Name&lt;/strong&gt; for the configuration. For example, &lt;em&gt;Name = SSH Compute integration&lt;/em&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In the &lt;strong&gt;Host Name/IP Address&lt;/strong&gt; box, enter an IP address, or the hostname of the target agentless system.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select the &lt;strong&gt;SSH Credential&lt;/strong&gt; box and click &lt;strong&gt;+ADD&lt;/strong&gt; to create new credentials to discover the agentless server. Specify:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The credential name and a brief description.&lt;/li&gt;
&lt;li&gt;The authentication type (typically Password) and appropriate credentials (username and password).&lt;/li&gt;
&lt;li&gt;The SSH port (typically port 22).&lt;/li&gt;
&lt;li&gt;Check the box &lt;strong&gt;Secure&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Finally, click &lt;strong&gt;ADD&lt;/strong&gt; to create the SSH credential.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; You can also select an existing SSH credential if one already exists that applies to your resource.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;img src=&quot;/img/agentless-ssh-integration-img3.png&quot; alt=&quot;SSH Credential&quot; title=&quot;SSH Credential&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;8&quot;&gt;
&lt;li&gt;
&lt;p&gt;In &lt;strong&gt;PERFORM ACTIONS&lt;/strong&gt;, select the &lt;strong&gt;Manage Device&lt;/strong&gt; checkbox.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Next, define a &lt;strong&gt;Discovery Schedule&lt;/strong&gt;. The schedule ensures that the inventory of all resources and components stays in sync between the hybrid observability service and the agentless (SSH) server. You can select an hourly, daily, weekly, or monthly schedule.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Now that all the configuration details are entered, select &lt;strong&gt;ADD&lt;/strong&gt; to save the configuration of the integration module.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click &lt;strong&gt;NEXT&lt;/strong&gt; and &lt;strong&gt;select&lt;/strong&gt; a collector profile, that is, the gateway collector appliance that will be used to install the integration module and discover the agentless resource.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click &lt;strong&gt;FINISH&lt;/strong&gt; to complete the installation of the integration module and initiate the resource discovery.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select the Integrations tile, and click on the &lt;strong&gt;Linux OS – Agentless (SSH)&lt;/strong&gt; tile to check the status of the agentless SSH integration module:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The gateway collector appliance displays with the status &lt;strong&gt;RUNNING&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Select the gateway collector appliance to check the status of the agentless SSH integration module. The discovery status should be &lt;strong&gt;Completed&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Based on your discovery schedule, the value for the devices might be set to &lt;strong&gt;0&lt;/strong&gt;. In this case, select the ellipsis (...) on the right and select &lt;strong&gt;Discover&lt;/strong&gt; to discover the device manually.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;DONE&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Connect again to the SSH integration module and select the gateway collector appliance. You should now see a discovered device in the &lt;strong&gt;Devices&lt;/strong&gt; tab. If a device is not discovered, ensure the prerequisites are fulfilled: the gateway collector appliance is installed and registered successfully, and it can connect to the target agentless SSH-enabled server.&lt;/li&gt;
&lt;li&gt;Click the &lt;strong&gt;1&lt;/strong&gt; shown for the devices to visualize the resource information for the Linux server. Notice that the status is &lt;strong&gt;UNDEFINED&lt;/strong&gt; (color indication brown). The reason is that, in our case, &lt;strong&gt;no&lt;/strong&gt; default Global Monitoring Template is automatically assigned to an agentless SSH-enabled system. In the next step, you will define a monitoring template and assign it to the agentless device.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/agentless-ssh-integration-img6.png&quot; alt=&quot;Resource information&quot; title=&quot;Resource information&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;14&quot;&gt;
&lt;li&gt;Select the server&apos;s name to view the attributes of the Linux server, such as the &lt;em&gt;Operating System (CentOS)&lt;/em&gt;, &lt;em&gt;Native type (Linux)&lt;/em&gt;, and &lt;em&gt;Make (HPE)&lt;/em&gt;. You will need this information in the next step to configure the filter criteria for the monitoring policy of the agentless system.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Define a monitoring template and assign it to agentless devices&lt;/h2&gt;
&lt;p&gt;When the option &lt;strong&gt;Enable Global Policies&lt;/strong&gt; for non SDK resources is not enabled for the &lt;em&gt;client account&lt;/em&gt;, and an agentless system is discovered through the gateway collector appliance using the appropriate Integration module, you need to define and apply a &lt;strong&gt;Monitoring Template&lt;/strong&gt; for the gateway collector appliance to monitor the device for a specific client account.&lt;/p&gt;
&lt;p&gt;The built-in Global Monitoring Templates are &lt;strong&gt;not editable&lt;/strong&gt;. &lt;strong&gt;Copy&lt;/strong&gt; an appropriate built-in Global Monitoring Template, then customize its &lt;em&gt;metrics&lt;/em&gt;, &lt;em&gt;thresholds&lt;/em&gt;, and &lt;em&gt;resource availability monitor&lt;/em&gt; for the metrics that matter most for the resource, to meet the client account&apos;s IT operational needs.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; &lt;a href=&quot;https://glp.docs.opsramp.com/solutions/availability/resource-availability/&quot;&gt;Resource availability&lt;/a&gt; refers to the uptime and operational status of a monitored resource. It tracks whether the resource is running or experiencing downtime, based on the metrics that have the availability monitor applied.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;You can then &lt;strong&gt;manually&lt;/strong&gt; assign the customized Monitoring Template to a particular resource, or via a &lt;strong&gt;Device Management Policy&lt;/strong&gt; to automatically bind your Monitoring Template to the discovered agentless devices.&lt;/p&gt;
&lt;p&gt;Let’s see how to define a Monitoring Template and the methods to assign it to the discovered agentless Linux device(s).&lt;/p&gt;
&lt;p&gt;The gateway collector appliance discovered the agentless SSH system and will monitor it using a Monitoring Template. Therefore, let’s select a Global Monitoring Template whose name contains the word &lt;em&gt;gateway&lt;/em&gt; and &lt;strong&gt;clone&lt;/strong&gt; this template:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Select a &lt;strong&gt;client account&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Go to &lt;strong&gt;Setup &gt; Setup &gt; Monitoring &gt; Templates&lt;/strong&gt; and click &lt;strong&gt;Advanced&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This feature will be made available under &lt;strong&gt;Setup &gt; Account &gt; Monitoring&lt;/strong&gt; in an upcoming HPE OpsRamp release.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;For the Scope, select &lt;em&gt;Global Templates&lt;/em&gt;, and enter &lt;em&gt;gateway&lt;/em&gt; or &lt;em&gt;gateway - linux&lt;/em&gt; as the Template Name. Then click &lt;strong&gt;Search&lt;/strong&gt; at the bottom of the &lt;strong&gt;Advanced&lt;/strong&gt; search.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/agentless-ssh-monitoring-img1.png&quot; alt=&quot;Search Global Monitoring Template for gateway - linux&quot; title=&quot;Search Global Monitoring Template for gateway - linux&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;Select the &lt;strong&gt;latest version&lt;/strong&gt; of the Global Monitoring Template &lt;strong&gt;Gateway - Linux OS Performance Remote Monitoring&lt;/strong&gt; available. You can click the arrow on the left of the template to visualize the metrics and thresholds.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/agentless-ssh-monitoring-img2.png&quot; alt=&quot;Metrics and thresholds of the monitoring template&quot; title=&quot;Metrics and thresholds of the monitoring template&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;5&quot;&gt;
&lt;li&gt;
&lt;p&gt;Select the &lt;strong&gt;Template&lt;/strong&gt; checkbox. Click &lt;strong&gt;Copy&lt;/strong&gt; to clone the template and click &lt;strong&gt;Yes&lt;/strong&gt; to duplicate the template.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select the &lt;strong&gt;client account&lt;/strong&gt; and give your template a &lt;strong&gt;Name&lt;/strong&gt;. Use the &lt;em&gt;client account&lt;/em&gt; name in the &lt;em&gt;Template Name&lt;/em&gt; to easily identify the template for a particular client account. For example, &lt;em&gt;DreamCompany-Agentless-Linux-Performance&lt;/em&gt;. Click &lt;strong&gt;Save&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To list the Monitoring Templates for the &lt;em&gt;client account&lt;/em&gt;, go to &lt;strong&gt;Monitoring &gt; Templates &gt; Advanced Search&lt;/strong&gt; and select &lt;strong&gt;Client Templates&lt;/strong&gt; for the &lt;em&gt;Scope&lt;/em&gt;. You can now customize the template to adjust parameters such as the &lt;em&gt;tags&lt;/em&gt;, the &lt;strong&gt;warning&lt;/strong&gt; and &lt;strong&gt;critical thresholds&lt;/strong&gt; of the metrics, and the &lt;strong&gt;availability monitor&lt;/strong&gt;, per your IT operational requirements.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To monitor the resource, enable an &lt;strong&gt;Availability Monitor&lt;/strong&gt; for a metric (for example, &lt;em&gt;os.uptime&lt;/em&gt;).&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Select the cloned Monitoring Template and click &lt;strong&gt;Edit&lt;/strong&gt; at the top right of the template.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Next&lt;/strong&gt; at the bottom of the page and select the metric that matters most for the resource by clicking the pencil icon to the right of the metric.&lt;/li&gt;
&lt;li&gt;Check the &lt;strong&gt;Apply Availability Monitor&lt;/strong&gt; checkbox and click &lt;strong&gt;Update&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;You can also modify the warning and critical thresholds of any metric per your operational requirements.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/agentless-ssh-monitoring-img4.png&quot; alt=&quot;Apply Availability Monitor&quot; title=&quot;Apply Availability Monitor&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;9&quot;&gt;
&lt;li&gt;Finally, click &lt;strong&gt;Save&lt;/strong&gt; to save the Monitoring Template.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;You can now assign the Monitoring Template to the agentless resource either manually or via a Device Management Policy.&lt;/p&gt;
&lt;p&gt;You may want to assign the Monitoring Template manually if you have a single resource. If you plan to monitor a set of resources of the same type, it is recommended that you apply the Monitoring Template via a &lt;strong&gt;Device Management Policy&lt;/strong&gt;. The policy automatically applies the Monitoring Template to all resources that match the filter criteria you specify in the policy.&lt;/p&gt;
&lt;p&gt;Let me walk you through the two methods to apply the Monitoring Template to a resource:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Method 1 — Manual assignment to a particular resource&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Select the &lt;strong&gt;client account&lt;/strong&gt; and go to &lt;strong&gt;Infrastructure &gt; Search&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select your Linux server.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Check the checkbox of your server, click &lt;strong&gt;ACTIONS&lt;/strong&gt; on the right, and select &lt;strong&gt;Assign Templates&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Search for the &lt;strong&gt;Monitoring Template&lt;/strong&gt; you have just created, check the box for the Monitoring Template, and click the &lt;strong&gt;ASSIGN TEMPLATE&lt;/strong&gt; button.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;Method 2 — Automatic assignment to a set of resources via a Device Management Policy&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Select the &lt;strong&gt;client account&lt;/strong&gt; and from the navigation bar go to &lt;strong&gt;Setup &gt; Setup &gt; Resources &gt; Device Management Policies&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This feature will be made available under &lt;strong&gt;Setup &gt; Account &gt; Monitoring&lt;/strong&gt; in an upcoming HPE OpsRamp release.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;
&lt;p&gt;If this is the first Device Management Policy you create, click &lt;strong&gt;Create New&lt;/strong&gt;. If you have already created policies, click &lt;strong&gt;+Add&lt;/strong&gt; to create a new Device Management Policy for the agentless SSH system. The Device Management Policy page is displayed.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Ensure the &lt;strong&gt;Client&lt;/strong&gt; option is selected for the &lt;strong&gt;Scope&lt;/strong&gt; and the appropriate &lt;strong&gt;client account&lt;/strong&gt; has been selected. Give the policy a &lt;strong&gt;Name&lt;/strong&gt; (for example, &lt;em&gt;Agentless Linux Monitoring Policy&lt;/em&gt;) and ensure the &lt;strong&gt;Add members automatically using filter criteria&lt;/strong&gt; checkbox is checked.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select one or more &lt;strong&gt;Filter Criteria&lt;/strong&gt; that apply to your agentless device(s) and click &lt;strong&gt;Show Matching Members&lt;/strong&gt; to ensure the resources matching the filter criteria are the expected devices. For example, you can filter on &lt;em&gt;Native Type&lt;/em&gt; and &lt;em&gt;Make&lt;/em&gt; of the device as shown here:&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/agentless-ssh-monitoring-img7.png&quot; alt=&quot;Filter criteria for the device management policy&quot; title=&quot;Filter criteria for the device management policy&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;5&quot;&gt;
&lt;li&gt;
&lt;p&gt;In the &lt;strong&gt;Perform Actions&lt;/strong&gt; section, select the &lt;strong&gt;Assign Monitoring Templates&lt;/strong&gt; checkbox. A list of available monitoring templates is displayed. Select the Monitoring Template you previously created and click the right arrow to move it to the &lt;strong&gt;Assigned Monitoring Templates&lt;/strong&gt; column on the right.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click &lt;strong&gt;Save &amp;#x26; Run Now&lt;/strong&gt; to run the policy. It applies to all resources that match the filter criteria you specified, assigning them the selected Monitoring Template.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;After a few minutes, check the agentless SSH device in &lt;strong&gt;Infrastructure &gt; Search&lt;/strong&gt; and select the device to check its status and metrics. The status should now be &lt;strong&gt;UP&lt;/strong&gt; (color indication is green), and you can start visualizing the metrics by navigating to the METRICS tab.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/agentless-ssh-monitoring-img10.png&quot; alt=&quot;Check status and metrics of the SSH agentless device&quot; title=&quot;Check status and metrics of the SSH agentless device&quot;&gt;&lt;/p&gt;
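&lt;p&gt;Conceptually, the filter criteria in a Device Management Policy act as a predicate evaluated against the attributes of discovered resources. The following is a simplified, hypothetical illustration of that matching logic (it is not HPE OpsRamp code):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Hypothetical illustration of how a policy&apos;s filter criteria select matching
# members; this is NOT HPE OpsRamp code, only a conceptual sketch.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    native_type: str
    make: str

def matches_policy(resource):
    # Filter criteria similar to the example above: Native Type = Linux AND Make = HPE
    return resource.native_type == &quot;Linux&quot; and resource.make == &quot;HPE&quot;

inventory = [
    Resource(&quot;linux-ssh-server-01&quot;, &quot;Linux&quot;, &quot;HPE&quot;),
    Resource(&quot;windows-server-01&quot;, &quot;Windows&quot;, &quot;HPE&quot;),
]
print([r.name for r in inventory if matches_policy(r)])  # members receiving the template
&lt;/code&gt;&lt;/pre&gt;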
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;This blog post walked you through the steps to set up the hybrid observability service powered by HPE OpsRamp for an agentless SSH-enabled server. With this integration service enabled, you can discover, monitor, and observe the health, performance, and availability of an agentless SSH-enabled system.&lt;/p&gt;
&lt;p&gt;In &lt;a href=&quot;https://developer.hpe.com/blog/hybrid-observability-service-%E2%80%93-part-4-enabling-the-monitoring-of-physical-devices-in-hpe-greenlake-flex-solutions/&quot;&gt;my last part&lt;/a&gt; of this blog series, I’ll dive into the observability of a physical server and a storage array as we continue the journey to set up the hybrid observability service to discover and monitor, through dashboards, physical IT infrastructure devices included in the HPE GreenLake Flex Solutions contract.&lt;/p&gt;
&lt;p&gt;There’s also a series of video tutorials on the &lt;a href=&quot;https://developer.hpe.com/greenlake/hybrid-observability-flex-solutions/home/&quot;&gt;Hybrid observability in HPE GreenLake Flex Solutions landing page&lt;/a&gt; that walks you through the contents described in this blog series for readers who prefer an audio-visual learning experience.&lt;/p&gt;
&lt;p&gt;To resolve issues with HPE GreenLake Flex Solutions or hybrid observability service powered by HPE OpsRamp, contact the support team. While logged in to your HPE GreenLake workspace:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Click &lt;strong&gt;Help &amp;#x26; Support&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Under &lt;strong&gt;Help&lt;/strong&gt;, select &lt;strong&gt;OpsRamp&lt;/strong&gt; or &lt;strong&gt;HPE GreenLake Flex Solutions&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Create New Case&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;</content:encoded></item><item><title><![CDATA[Hybrid observability service – Part 2: Initial configuration to enable the discovery of resources in HPE GreenLake Flex Solutions]]></title><description><![CDATA[In my previous blog post, I covered the steps to provision and activate the hybrid observability service powered by HPE OpsRamp Software in…]]></description><link>https://developer.hpe.com/hybrid-observability-service-–-part-2-initial-configuration-to-enable-the-discovery-of-resources-in-hpe-greenlake-flex-solutions/</link><guid isPermaLink="false">https://developer.hpe.com/hybrid-observability-service-–-part-2-initial-configuration-to-enable-the-discovery-of-resources-in-hpe-greenlake-flex-solutions/</guid><pubDate>Fri, 23 May 2025 15:34:55 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;p&gt;In &lt;a href=&quot;https://developer.hpe.com/blog/hybrid-observability-service-%E2%80%93-part-1-provisioning-and-activation-in-hpe-greenlake-flex-solutions/&quot;&gt;my previous blog post&lt;/a&gt;, I covered the steps to provision and activate the hybrid observability service powered by HPE OpsRamp Software in an HPE GreenLake cloud workspace as part of the HPE GreenLake Flex Solutions. You must complete this initial step before configuring the hybrid observability service to enable the observability of your IT infrastructure resources.&lt;/p&gt;
&lt;p&gt;This is Part 2 of the blog series, and it explores the &lt;strong&gt;initial configuration&lt;/strong&gt; of the hybrid observability service to &lt;strong&gt;enable the discovery&lt;/strong&gt; of resources before they can be monitored and metrics data can be collected from them.
To meet the requirements for discovering the physical resources included in your HPE GreenLake Flex Solutions contract, I’ll start setting up the hybrid observability service by:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Creating a &lt;strong&gt;client account&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Managing user accounts and permissions for your organization.&lt;/li&gt;
&lt;li&gt;Installing and registering a &lt;strong&gt;gateway collector appliance&lt;/strong&gt; in a virtual environment.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In &lt;a href=&quot;https://developer.hpe.com/blog/hybrid-observability-service-%E2%80%93-part-3-enabling-the-monitoring-of-agentless-ssh-enabled-systems-in-hpe-greenlake-flex-solutions/&quot;&gt;Part 3&lt;/a&gt; and &lt;a href=&quot;https://developer.hpe.com/blog/hybrid-observability-service-%E2%80%93-part-4-enabling-the-monitoring-of-physical-devices-in-hpe-greenlake-flex-solutions/&quot;&gt;Part 4&lt;/a&gt; of the blog series, I’ll explore how to enable the observability of IT infrastructure devices included in the HPE GreenLake Flex Solutions by deploying integration modules and assigning monitoring configurations to a set of resources.&lt;/p&gt;
&lt;h2&gt;Creating a client account&lt;/h2&gt;
&lt;p&gt;When the hybrid observability service powered by HPE OpsRamp is deployed in HPE GreenLake cloud by the &lt;strong&gt;Workspace Administrator&lt;/strong&gt;, the service uses a &lt;strong&gt;multi-tenant model&lt;/strong&gt; associated with a &lt;strong&gt;partner account&lt;/strong&gt;. A partner account is the &lt;strong&gt;master&lt;/strong&gt; tenant and includes clients, each of which has a &lt;em&gt;client account&lt;/em&gt;. The user who created the workspace is assigned the &lt;em&gt;Workspace Administrator&lt;/em&gt; role by default; invited users can also be assigned this role by a &lt;em&gt;Workspace Administrator&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;In a multi-tenant model, a &lt;em&gt;client account&lt;/em&gt; is a tenant of a partner account. It represents a monitoring and management instance for an individual organization, a business unit, or a specific customer environment.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; By default, a &lt;em&gt;client account&lt;/em&gt; is created with the name of the HPE GreenLake workspace. You may want to create additional &lt;em&gt;client accounts&lt;/em&gt; for an individual organization, business unit, or customer environment.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;To start setting up the environment, log in to the service using an account provisioned with the &lt;em&gt;Partner Administrator&lt;/em&gt; role. The &lt;em&gt;Workspace Administrator&lt;/em&gt; who provisioned the HPE OpsRamp service in the HPE GreenLake cloud has this role assigned by default.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The other users with the &lt;em&gt;OpsRamp Access&lt;/em&gt; role granted in the workspace will have the role &lt;em&gt;GLP Invited Partner User&lt;/em&gt; assigned in the service.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;To create a new client account:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;From the top navigation menu of HPE OpsRamp user interface, select &lt;strong&gt;Setup &gt; Setup&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This feature will be made available under &lt;strong&gt;Setup &gt; Account&lt;/strong&gt; in an upcoming HPE OpsRamp release.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;
&lt;p&gt;Select &lt;strong&gt;Clients&lt;/strong&gt; on the left navigation bar and click &lt;strong&gt;+Add&lt;/strong&gt; to add a new client account. If this is the first client of your partner account, click &lt;strong&gt;Create New&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Enter the &lt;em&gt;Client Name&lt;/em&gt;, &lt;em&gt;Address&lt;/em&gt;, &lt;em&gt;Country&lt;/em&gt;, and &lt;em&gt;Time Zone&lt;/em&gt;. Providing a contact email is also recommended. If you specify a client state, it must be at most 3 characters long. Then click &lt;strong&gt;Next&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Keep the default &lt;strong&gt;Product Package&lt;/strong&gt; selections: Hybrid Discovery and Monitoring, Event and Incident Management, and Remediation and Automation.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click &lt;strong&gt;Finish&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Once the new client account is created, select the new &lt;em&gt;client account&lt;/em&gt; from the list of &lt;strong&gt;Clients&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To see the new client and continue configuring the service, click the &lt;strong&gt;OpsRamp&lt;/strong&gt; icon in the upper left.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Alternatively, you can log out from the service and log in again from your HPE GreenLake workspace, and then select the &lt;strong&gt;Partner&lt;/strong&gt; domain for your organization. You should see the new client account, in our example, &lt;strong&gt;DreamCompany&lt;/strong&gt;, on the list of clients. Select the &lt;em&gt;client account&lt;/em&gt; to continue with the configuration of the hybrid observability service for the client account.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;img src=&quot;/img/opsramp-client-setup-img-6.png&quot; alt=&quot;Client Account&quot; title=&quot;Client Account&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;8&quot;&gt;
&lt;li&gt;
&lt;p&gt;Enabling advanced add-ons for the client account, as per the client’s operational requirements, is recommended:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Select &lt;strong&gt;Setup &gt; Setup &gt; Accounts &gt; Clients&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Select the &lt;strong&gt;client account&lt;/strong&gt; from the list of clients.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Edit&lt;/strong&gt; button for the &lt;strong&gt;ADDONS&lt;/strong&gt; section and enable the Add-ons. To learn more about the Add-ons, see the &lt;a href=&quot;https://glp.docs.opsramp.com/guides/component-model/#hybrid-discovery-and-monitoring&quot;&gt;HPE OpsRamp Hybrid Discovery and Monitoring documentation&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; To enable client account Add-ons, ensure your HPE GreenLake workspace details (name, address) do not contain any special characters.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Managing user accounts&lt;/h2&gt;
&lt;p&gt;The &lt;em&gt;Partner Administrator&lt;/em&gt; role is provided to the HPE GreenLake cloud user who provisioned the hybrid observability service in the workspace. Invited HPE GreenLake users with the &lt;em&gt;OpsRamp Access&lt;/em&gt; role granted in the workspace will have the role &lt;strong&gt;GLP Invited Partner User&lt;/strong&gt; assigned in the service.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;When you log in to the hybrid observability service user interface powered by HPE OpsRamp, ensure you have selected your &lt;em&gt;partner organization&lt;/em&gt; in the top-left menu.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;From the top navigation menu, select &lt;strong&gt;Setup &gt; Account&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select the &lt;strong&gt;Users and Permissions&lt;/strong&gt; tile.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select the &lt;strong&gt;Users&lt;/strong&gt; tile.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select an invited user who launched and logged in to the service to visualize the user&apos;s details and roles. You will see that the default role assigned to the user is &lt;strong&gt;GLP Invited Partner User&lt;/strong&gt;. You can assign the &lt;em&gt;Partner Administrator&lt;/em&gt; role to an invited user.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;In HPE GreenLake Flex Solutions, as &lt;em&gt;Partner Administrator&lt;/em&gt;, you can manage accounts for your partner organization. You can create permission sets, roles, and user groups, and assign them to users who have been granted the &lt;em&gt;OpsRamp Access&lt;/em&gt; role in your workspace and have launched the hybrid observability service.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; In HPE GreenLake Flex Solutions, you cannot assign roles to users at the &lt;em&gt;client account&lt;/em&gt; level.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;To learn more about account management, refer to the &lt;a href=&quot;https://glp.docs.opsramp.com/platform-features/feature-guides/account-management/&quot;&gt;HPE OpsRamp Account Management documentation&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Installing and configuring a gateway collector appliance&lt;/h2&gt;
&lt;p&gt;Resources need to be discovered before they can be monitored, and metrics collected.&lt;/p&gt;
&lt;p&gt;With the hybrid observability service, you discover, monitor, and manage infrastructure resources (compute, storage, network) included in the HPE GreenLake Flex Solutions using an &lt;strong&gt;agentless&lt;/strong&gt; method with a &lt;strong&gt;gateway collector appliance&lt;/strong&gt; installed &lt;strong&gt;within&lt;/strong&gt; your firewall environment. This appliance can be a virtual machine or a cloud-native application that runs on your own Kubernetes environment.&lt;/p&gt;
&lt;p&gt;To learn more about the gateway collector appliance installation and activation procedures, and deployment requirements, refer to the &lt;a href=&quot;https://glp.docs.opsramp.com/platform-features/&quot;&gt;HPE OpsRamp Platform documentation&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;There are two types of gateway collectors:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://glp.docs.opsramp.com/platform-features/gateways/&quot;&gt;Classic gateway&lt;/a&gt; that can be installed as a Virtual Machine (VM) in a hypervisor.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://glp.docs.opsramp.com/platform-features/nextgen-gateways/&quot;&gt;NextGen gateway&lt;/a&gt; that runs on a Kubernetes environment. The NextGen gateway is the new generation of gateway collector appliance and is &lt;strong&gt;HPE’s recommended&lt;/strong&gt; option.&lt;/li&gt;
&lt;/ul&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Agent-based methods can also be used for the discovery of physical devices running Linux and Microsoft Windows Operating Systems. To learn more about agents, refer to the &lt;a href=&quot;https://glp.docs.opsramp.com/platform-features/agents/&quot;&gt;HPE OpsRamp Agent documentation&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;Gateway Collector appliance prerequisites&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Ensure your HPE GreenLake workspace details (name, address) do not contain any special characters.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A &lt;strong&gt;virtual environment&lt;/strong&gt; (a hypervisor or a Kubernetes environment) is required within your firewall environment.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The gateway collector appliance needs to be appropriately sized based on the monitored resource count. Refer to the following documentation:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://glp.docs.opsramp.com/platform-features/gateways/gateway-deployment-requirements/#gateway-appliance&quot;&gt;Classic gateway collector appliance deployment sizing&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://glp.docs.opsramp.com/platform-features/nextgen-gateways/installation-nextgen-gateway/#gateway-capacity-parameters&quot;&gt;NextGen gateway collector appliance sizing&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Connectivity network protocol requirements:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Outbound communication from the gateway collector appliance to the HPE OpsRamp platform is fully secured using a TLS 1.2 tunnel over TCP port 443 to transfer data from the gateway collector appliance to the HPE OpsRamp service.&lt;/li&gt;
&lt;li&gt;SNMP: port 161 for outbound communication from the gateway collector appliance to the networking devices and on-premises infrastructure, and port 162 for inbound communication from networking devices to the gateway collector appliance. See the &lt;a href=&quot;https://glp.docs.opsramp.com/integrations/network/snmp-discovery/&quot;&gt;HPE OpsRamp SNMP documentation&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;TCP port 3128 for a proxy connection: outbound communication from compute servers’ agent (Microsoft Windows OS and Linux OS) to the gateway collector appliance’s embedded proxy (if proxy is enabled in the gateway). See the &lt;a href=&quot;https://glp.docs.opsramp.com/platform-features/agents/agent-reference/&quot;&gt;HPE OpsRamp agent connectivity requirement documentation&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;SSH and Windows Management Instrumentation (WMI) protocols are used to discover infrastructure devices.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/network-deployment-requirements.png&quot; alt=&quot;Network protocol requirements&quot; title=&quot;Network protocol requirements&quot;&gt;&lt;/p&gt;
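&lt;p&gt;As a quick sanity check before deployment, you can verify that the ports listed above are reachable from the host or network segment where the gateway collector appliance will run. Here is a minimal sketch; the host names are placeholders for values from your own environment:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Minimal reachability sketch (not an HPE OpsRamp tool): checks that the TCP ports
# listed above can be reached from the host that will run the gateway collector.
import socket

def tcp_port_open(host, port, timeout=3.0):
    &quot;&quot;&quot;Return True if a TCP connection to host:port succeeds within the timeout.&quot;&quot;&quot;
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholders: replace with your OpsRamp service endpoint and devices to be monitored.
checks = [
    (&quot;opsramp-service-endpoint.example.com&quot;, 443),  # outbound TLS tunnel to the service
    (&quot;network-device.example.com&quot;, 161),            # SNMP polling of networking devices
    (&quot;linux-server.example.com&quot;, 22),               # SSH discovery of agentless systems
]
for host, port in checks:
    print(host, port, &quot;reachable:&quot;, tcp_port_open(host, port))
&lt;/code&gt;&lt;/pre&gt;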
&lt;h3&gt;Gateway collector appliance installation and activation procedure&lt;/h3&gt;
&lt;p&gt;Proceed as follows to install the gateway collector appliance in your HPE OpsRamp service environment:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Select the &lt;strong&gt;client account&lt;/strong&gt; you created in the previous step.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In the top navigation menu, select &lt;strong&gt;Setup &gt; Account&lt;/strong&gt;, and click &lt;strong&gt;Collector Profiles&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click &lt;strong&gt;+ADD&lt;/strong&gt; to create a gateway collector profile. The gateway collector profile page opens.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/opsramp-gateway-install-img-2.png&quot; alt=&quot;Add the gateway collector profile&quot; title=&quot;Add the gateway collector profile&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;
&lt;p&gt;Enter a &lt;strong&gt;Profile Name&lt;/strong&gt; and a &lt;strong&gt;Description&lt;/strong&gt; for your gateway.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select a &lt;em&gt;Classic gateway&lt;/em&gt; or a &lt;em&gt;NextGen gateway&lt;/em&gt; image based on your environment. In our example, the &lt;strong&gt;recommended NextGen gateway&lt;/strong&gt; is deployed. You can use an ISO or OVA (Open Virtual Appliance) image to run the NextGen gateway on a VMware virtual machine. If you want to run the NextGen gateway in your &lt;strong&gt;own Kubernetes environment&lt;/strong&gt;, select the &lt;strong&gt;Cloud-Native Application (Installer)&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Here, we install the NextGen gateway collector OVA as a virtual machine on a VMware ESXi server.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/opsramp-gateway-install-img-4.png&quot; alt=&quot;Install NextGen gateway&quot; title=&quot;Install NextGen gateway&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; You can click the ellipsis (…) of the selected virtual appliance to view the installation and activation instructions.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;ol start=&quot;6&quot;&gt;
&lt;li&gt;
&lt;p&gt;Click &lt;strong&gt;Next&lt;/strong&gt;. The gateway profile is displayed.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click the arrow to download the virtual appliance image to your environment.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/opsramp-gateway-install-img-5.png&quot; alt=&quot;Download the gateway file&quot; title=&quot;Download the gateway file&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Follow the &lt;strong&gt;INSTALLATION&lt;/strong&gt; instructions to install the gateway collector appliance as a virtual machine on the hypervisor. In this example, the hypervisor is a VMware ESXi server. The OVA installs the Ubuntu Linux operating system in the virtual machine. Summary of the installation steps (a scripted alternative using ovftool is sketched after this list):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Log in to vSphere.&lt;/li&gt;
&lt;li&gt;Deploy the OVF template on your ESXi server or cluster environment. Select the downloaded OVA file.&lt;/li&gt;
&lt;li&gt;Specify a unique name for the virtual machine and a location in your environment.&lt;/li&gt;
&lt;li&gt;Select a compute resource such as an ESXi cluster, ESXi host, or a Resource Pool for the virtual machine.&lt;/li&gt;
&lt;li&gt;Review the detailed information.&lt;/li&gt;
&lt;li&gt;Select a datastore for the virtual machine and use the &lt;em&gt;Thick Provision Lazy Zeroed&lt;/em&gt; disk format.&lt;/li&gt;
&lt;li&gt;Select a destination Network for the virtual machine.&lt;/li&gt;
&lt;li&gt;Verify the deployment settings and click &lt;em&gt;FINISH&lt;/em&gt; to start creating the virtual machine in the hypervisor.&lt;/li&gt;
&lt;li&gt;Power on the virtual machine.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Next, once the virtual machine is deployed and powered on, follow the &lt;strong&gt;ACTIVATION&lt;/strong&gt; instructions to activate and register the gateway collector appliance on the virtual machine. The activation process takes a few minutes: it installs a Kubernetes cluster and activates the gateway pods in that cluster. The activation process will instruct you to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Step 1 — Log in to the gateway collector appliance.&lt;/li&gt;
&lt;li&gt;Step 2 — Set the hostname for the gateway collector appliance and change the default password.&lt;/li&gt;
&lt;li&gt;Step 3 — Install the K3s Kubernetes cluster on the gateway collector appliance.&lt;/li&gt;
&lt;li&gt;Step 4 — Register the gateway collector appliance in the HPE OpsRamp platform.&lt;/li&gt;
&lt;li&gt;Step 5 — Click &lt;em&gt;FINISH&lt;/em&gt; when the Activation is completed.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
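&lt;p&gt;If you prefer a scripted deployment for the installation step rather than the vSphere UI, VMware’s ovftool can deploy the downloaded OVA from the command line. This is only a sketch: the OVA file name, virtual machine name, datastore, network, and vCenter inventory path below are placeholders for illustration and must be adapted to your environment.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Deploy the NextGen gateway OVA as a thick-provisioned, powered-on VM (all names are placeholders)
$ ovftool --acceptAllEulas \
    --name=opsramp-nextgen-gw \
    --datastore=datastore1 \
    --network=&quot;VM Network&quot; \
    --diskMode=thick \
    --powerOn \
    nextgen-gateway.ova \
    vi://administrator%40vsphere.local@vcenter.example.com/DC1/host/Cluster1
&lt;/code&gt;&lt;/pre&gt;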
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Important note:&lt;/strong&gt; If you want to assign &lt;strong&gt;a static IP address&lt;/strong&gt; to the gateway appliance, refer to the &lt;a href=&quot;https://glp.docs.opsramp.com/platform-features/nextgen-gateways/installation-nextgen-gateway/iso-based-installation/#step-3-update-hostname-and-install-kubernetes&quot;&gt;HPE OpsRamp NextGen gateway installation documentation&lt;/a&gt;. Follow these steps before installing the Kubernetes cluster (step 3 above) on the gateway collector appliance.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The activation process installs the K3s Kubernetes cluster on the virtual machine with two pods: the main pod (&lt;em&gt;nextgen-gw-0&lt;/em&gt;), where the gateway collector code runs, and the storage pod (&lt;em&gt;nextgen-gw-redis-master-0&lt;/em&gt;).&lt;/p&gt;
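&lt;p&gt;Once the activation completes, you can verify the pods from the gateway appliance’s shell. This is a minimal sketch; depending on the installation you may need to prefix the command with &lt;em&gt;k3s&lt;/em&gt; (for example, &lt;em&gt;k3s kubectl&lt;/em&gt;):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Both gateway pods should be in the Running state
$ kubectl get pods -A | grep nextgen-gw

# Confirm the single-node K3s cluster itself is Ready
$ kubectl get nodes -o wide
&lt;/code&gt;&lt;/pre&gt;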
&lt;p&gt;When the gateway collector is installed and registered in your HPE OpsRamp service instance, go to &lt;strong&gt;Setup &gt; Account &gt; Collector Profiles&lt;/strong&gt;. The status of the gateway collector appliance should be &lt;strong&gt;CONNECTED&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/opsramp-gateway-install-img-8.png&quot; alt=&quot;Gateway is connected&quot; title=&quot;Gateway is connected&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can also navigate to the &lt;strong&gt;Infrastructure&lt;/strong&gt; tab and select &lt;strong&gt;Resources &gt; Gateway&lt;/strong&gt;, or use the &lt;strong&gt;recommended method&lt;/strong&gt;: &lt;strong&gt;Infrastructure &gt; Search &gt; OTHERS &gt; OpsRamp Gateway&lt;/strong&gt;, to view detailed information about the newly installed and registered gateway collector appliance.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/opsramp-gateway-install-img-9.png&quot; alt=&quot;Gateway resources&quot; title=&quot;Gateway resources&quot;&gt;&lt;/p&gt;
&lt;p&gt;If the main pod (nextgen-gw-0) status shows &lt;strong&gt;Running&lt;/strong&gt; (color indication &lt;strong&gt;green&lt;/strong&gt;), the NextGen gateway is running successfully.&lt;/p&gt;
&lt;p&gt;You can also navigate to &lt;strong&gt;Dashboard &gt; Classic Dashboard&lt;/strong&gt; to drill into infrastructure elements of the gateway collector appliance. The dashboard provides an overview of health, performance, and availability of resources and services.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/opsramp-gateway-install-img-10.png&quot; alt=&quot;Classic dashboard gateway resources status&quot; title=&quot;Classic dashboard gateway resources status&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The resource statuses for the &lt;em&gt;NextGen Gateway Cluster&lt;/em&gt; and the &lt;em&gt;NextGen Gateway Node&lt;/em&gt; are shown as &lt;strong&gt;Undefined&lt;/strong&gt; (color indication &lt;strong&gt;brown&lt;/strong&gt;). This is expected because no availability monitor is assigned to these two resources. To monitor the NextGen Gateway Node, you will need to assign a &lt;strong&gt;cloned&lt;/strong&gt; version of the Global Monitoring Template &lt;strong&gt;Agent G2 – Linux OS Performance Monitoring&lt;/strong&gt;. Refer to &lt;a href=&quot;https://developer.hpe.com/blog/hybrid-observability-service-%E2%80%93-part-3-enabling-the-monitoring-of-agentless-ssh-enabled-systems-in-hpe-greenlake-flex-solutions/&quot;&gt;part 3 of this blog series&lt;/a&gt; to learn how to assign a Monitoring Template to a resource.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Whitelisting IP addresses to improve network access security&lt;/h2&gt;
&lt;p&gt;HPE recommends restricting outbound connections to the hybrid observability service in the cloud to a specific set of IP addresses by whitelisting them in your firewall. The whitelist depends on the regional deployment of your HPE OpsRamp service in the HPE GreenLake workspace. See the &lt;a href=&quot;https://glp.docs.opsramp.com/support/reference/public-ip-addresses/&quot;&gt;HPE OpsRamp Collector Whitelisted IP Addresses documentation&lt;/a&gt;.&lt;/p&gt;
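&lt;p&gt;How you whitelist these addresses depends on your firewall. As a minimal illustration only, an outbound rule on a Linux-based firewall could look like the following; 203.0.113.0/24 is a documentation placeholder and must be replaced with the published ranges for your region from the documentation linked above:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Allow outbound HTTPS only towards the whitelisted HPE OpsRamp ranges (placeholder CIDR)
$ iptables -A OUTPUT -p tcp --dport 443 -d 203.0.113.0/24 -j ACCEPT

# Optionally block any other outbound HTTPS from the collector network segment
$ iptables -A OUTPUT -p tcp --dport 443 -j DROP
&lt;/code&gt;&lt;/pre&gt;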
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;This blog post helps you get started with the initial configuration of the hybrid observability service powered by HPE OpsRamp Software to enable the discovery of IT infrastructure devices included in HPE GreenLake Flex Solutions.&lt;/p&gt;
&lt;p&gt;The initial setup consists of:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Creating a tenant (client account) under the Partner account for your organization.&lt;/li&gt;
&lt;li&gt;Managing HPE GreenLake users’ roles.&lt;/li&gt;
&lt;li&gt;Installing and registering a gateway collector appliance, a prerequisite to enable the discovery and monitoring of physical devices.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Don’t miss &lt;a href=&quot;https://developer.hpe.com/blog/hybrid-observability-service-%E2%80%93-part-3-enabling-the-monitoring-of-agentless-ssh-enabled-systems-in-hpe-greenlake-flex-solutions/&quot;&gt;part 3&lt;/a&gt; and &lt;a href=&quot;https://developer.hpe.com/blog/hybrid-observability-service-%E2%80%93-part-4-enabling-the-monitoring-of-physical-devices-in-hpe-greenlake-flex-solutions/&quot;&gt;part 4&lt;/a&gt; of this blog series, where I’ll further explore the integration setup activities to discover infrastructure resources, start monitoring them, and view metrics and alerts.&lt;/p&gt;
&lt;p&gt;There’s also a series of video tutorials on the &lt;a href=&quot;https://developer.hpe.com/greenlake/hybrid-observability-flex-solutions/home/&quot;&gt;Hybrid observability in HPE GreenLake Flex Solutions landing page&lt;/a&gt; that walks you through the contents described in this blog series for readers who prefer an audio-visual learning experience.&lt;/p&gt;
&lt;p&gt;To resolve issues with HPE GreenLake Flex Solutions or hybrid observability service powered by HPE OpsRamp, contact the support team. While logged in to your HPE GreenLake workspace:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Click &lt;strong&gt;Help &amp;#x26; Support&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Under &lt;strong&gt;Help&lt;/strong&gt;, select &lt;strong&gt;OpsRamp&lt;/strong&gt; or &lt;strong&gt;HPE GreenLake Flex Solutions&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Create New Case&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;</content:encoded></item><item><title><![CDATA[Hybrid observability service – Part 1: Provisioning and activation in HPE GreenLake Flex Solutions]]></title><description><![CDATA[As organizations embrace cloud-native technologies, and adopt hybrid or multi-cloud environments, system architecture is becoming more…]]></description><link>https://developer.hpe.com/hybrid-observability-service-–-part-1-provisioning-and-activation-in-hpe-greenlake-flex-solutions/</link><guid isPermaLink="false">https://developer.hpe.com/hybrid-observability-service-–-part-1-provisioning-and-activation-in-hpe-greenlake-flex-solutions/</guid><pubDate>Thu, 22 May 2025 15:42:16 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;p&gt;As organizations embrace cloud-native technologies, and adopt hybrid or multi-cloud environments, system architecture is becoming more complex and distributed than ever. With this, IT teams are under constant pressure to detect and resolve issues faster — so they can keep services reliable and business operations running smoothly.&lt;/p&gt;
&lt;p&gt;To address these challenges, IT teams are turning to observability solutions to manage complex systems and proactively identify and resolve issues. Observability plays a critical role in modern IT Operations Management (ITOM), offering real-time visibility across hybrid and multi-cloud environments. With the observability tools, teams can understand what’s happening across their infrastructure, detect anomalies early, and resolve incidents quickly – keeping systems efficient, stable, and resilient.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/hpe-greenlake-flex-solutions.html&quot;&gt;HPE GreenLake Flex Solutions&lt;/a&gt; includes a cloud-based hybrid observability service, powered by &lt;a href=&quot;https://www.hpe.com/us/en/opsramp.html&quot;&gt;HPE OpsRamp Software&lt;/a&gt;, an AI-powered command center that simplifies and optimizes IT operations.&lt;/p&gt;
&lt;p&gt;Hybrid observability in HPE GreenLake Flex Solutions is a SaaS-based ITOM solution and AIOps monitoring and observability service. It is designed to handle the complexity of hybrid multi-cloud environments. By leveraging the hybrid observability service powered by HPE OpsRamp, you will have the visibility and control to centrally monitor and manage the &lt;strong&gt;physical&lt;/strong&gt; infrastructure resources (servers, storage, network equipment) that come with HPE GreenLake Flex Solutions.&lt;/p&gt;
&lt;p&gt;You can also expand your observability capabilities by purchasing additional subscriptions to provide coverage of &lt;strong&gt;logical resources&lt;/strong&gt; such as virtual machines, containers, and workloads, including those running on public clouds or non-HPE infrastructure.&lt;/p&gt;
&lt;p&gt;Let’s get started with the journey to set up the hybrid observability service powered by HPE OpsRamp Software.&lt;/p&gt;
&lt;h2&gt;Prerequisites&lt;/h2&gt;
&lt;p&gt;To enable monitoring and observability of physical devices in HPE GreenLake Flex Solutions, there are some requirements that must be fulfilled:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Create an HPE account, sign in to HPE GreenLake cloud and create a workspace for your organization. The workspace is the fundamental access management control boundary for HPE GreenLake resources. When you create a workspace, you are automatically assigned the role of &lt;em&gt;&lt;strong&gt;Workspace Administrator&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If you haven’t already set up your HPE account and workspace, refer to the &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00120892en_us&amp;#x26;page=GUID-497192AA-FDC2-49C5-B572-0D2F58A23745.html&quot;&gt;HPE GreenLake Cloud User Guide&lt;/a&gt; to create them.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Ensure you receive the hybrid observability subscription key by email as part of the HPE GreenLake Flex Solutions contract.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Provision and activate hybrid observability service on the HPE GreenLake workspace.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a client account.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Install a gateway collector appliance in a virtual environment as a prerequisite to enable the discovery of physical resources before they can be monitored.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Install integration modules to discover physical resources and enable monitoring and management of these resources.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Apply monitoring configurations to manage and track the health, performance, and availability status of the discovered resources’ components.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;In this blog post it is assumed that you have created an HPE account and a workspace. You will then learn how to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Provision the hybrid observability service powered by HPE OpsRamp Software in the HPE GreenLake workspace.&lt;/li&gt;
&lt;li&gt;Activate the associated subscription key.&lt;/li&gt;
&lt;li&gt;Assign the appropriate access role to the desired users in the HPE GreenLake workspace.&lt;/li&gt;
&lt;li&gt;Launch the hybrid observability service in HPE GreenLake Flex Solutions to begin your hybrid observability journey.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In the subsequent blog posts of the series, I will walk you through setting up the hybrid observability service through the HPE OpsRamp user interface to monitor, manage, and control your physical infrastructure resources.&lt;/p&gt;
&lt;h2&gt;Provisioning the hybrid observability service powered by HPE OpsRamp Software in HPE GreenLake Flex Solutions&lt;/h2&gt;
&lt;p&gt;To activate the hybrid observability service in HPE GreenLake Flex Solutions and the associated subscription key on your workspace, you need to:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Provision the hybrid observability service powered by HPE OpsRamp Software in a desired region in your workspace using your HPE account with the &lt;em&gt;Workspace Administrator&lt;/em&gt; role.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Activate the subscription key for your workspace.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Grant the &lt;em&gt;&lt;strong&gt;OpsRamp Access&lt;/strong&gt;&lt;/em&gt; role to the desired users in your workspace.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Launch the HPE OpsRamp Software from the workspace to use and configure the hybrid observability service.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;Let’s get started with this sequence of steps!&lt;/strong&gt;&lt;/p&gt;
&lt;h3&gt;Step 1 – Provision the HPE OpsRamp Software service in your workspace&lt;/h3&gt;
&lt;p&gt;Log in to your HPE GreenLake cloud workspace using your &lt;em&gt;Workspace Administrator&lt;/em&gt; account. Provision the HPE OpsRamp Software to your workspace and select a deployment region. You can provision the HPE OpsRamp Software before adding the subscription key.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Click &lt;strong&gt;Services &gt; Catalog&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/opsramp-activation-img-2.png&quot; alt=&quot;HPE GreenLake Service catalog&quot; title=&quot;HPE GreenLake Service catalog&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Under &lt;strong&gt;Management &amp;#x26; Governance&lt;/strong&gt;, select &lt;strong&gt;OpsRamp&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Provision&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Select a &lt;strong&gt;Deployment Region&lt;/strong&gt; from the drop-down. You can add more regions once the service is installed.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Terms of Service&lt;/strong&gt; to review the terms, and then select the checkbox to confirm your agreement.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Deploy&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/opsramp-activation-img-5.png&quot; alt=&quot;Provision HPE OpsRamp&quot; title=&quot;Provision HPE OpsRamp&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Step 2 – Activate the subscription key in your workspace&lt;/h3&gt;
&lt;p&gt;As the &lt;em&gt;Workspace Administrator&lt;/em&gt;, add the hybrid observability &lt;strong&gt;subscription key&lt;/strong&gt; that you received via email for your workspace. A subscription key is needed to launch the service in your workspace:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;In your HPE GreenLake workspace, click &lt;strong&gt;Services&lt;/strong&gt; and then &lt;strong&gt;Service Subscriptions&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Add Service Subscription&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Enter the &lt;em&gt;Subscription key&lt;/em&gt; you received by email.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Add&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Next&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;(Optional) You can assign tags to the subscription key or manage tags later. Tags allow you to categorize resources based on purpose, owner, location, or other criteria. If you add a tag, click &lt;strong&gt;Assign&lt;/strong&gt;, and then click &lt;strong&gt;Next&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Showback rates are optional; skip this step by clearing the &lt;strong&gt;Add showback rates&lt;/strong&gt; checkbox.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Next&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Finish&lt;/strong&gt;. The subscription key is automatically applied to the workspace and the hybrid observability service powered by HPE OpsRamp.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/opsramp-activation-img-12.png&quot; alt=&quot;Activate HPE OpsRamp subscription key&quot; title=&quot;Activate HPE OpsRamp subscription key&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Step 3 – Assign the OpsRamp Access role to the desired users&lt;/h3&gt;
&lt;p&gt;After you have provisioned the HPE OpsRamp Software to your workspace, you must give the desired users permission to access and run the hybrid observability service. Users must be granted the built-in &lt;em&gt;&lt;strong&gt;Opsramp Access&lt;/strong&gt;&lt;/em&gt; role. Once users are assigned this role, they have the authorization they need.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;In your workspace, on the &lt;strong&gt;Home&lt;/strong&gt; page, select &lt;strong&gt;Manage Workspace&lt;/strong&gt; from the &lt;strong&gt;Quick Links&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Workspace identity &amp;#x26; access&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Users&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Select the desired user or select the ellipsis (...) on the right of the user.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Assign Role&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Select the &lt;strong&gt;OpsRamp&lt;/strong&gt; service.&lt;/li&gt;
&lt;li&gt;Select the &lt;strong&gt;Opsramp Access&lt;/strong&gt; role.&lt;/li&gt;
&lt;li&gt;Do not slide the &lt;strong&gt;Limit Resource Access&lt;/strong&gt; toggle to the right.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Assign Role&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Change Role Assignment&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/opsramp-activation-img-19.png&quot; alt=&quot;Assign OpsRamp Access role&quot; title=&quot;Assign OpsRamp Access role&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Step 4 – Launch the hybrid observability service&lt;/h3&gt;
&lt;p&gt;With the HPE OpsRamp Software deployed in a region and attached to a valid subscription, users are ready to take advantage of the hybrid observability operations management solution.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Click the &lt;strong&gt;Launch&lt;/strong&gt; button to access and use the hybrid observability user interface powered by HPE OpsRamp. The hybrid observability service provides a unified view of observability across multi-cloud and on-premises infrastructure devices, resources, and services.&lt;/li&gt;
&lt;li&gt;The first time you access the service user interface, you must accept the &lt;strong&gt;End User Agreement&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/opsramp-activation-img-20.png&quot; alt=&quot;HPE OpsRamp service user interface&quot; title=&quot;HPE OpsRamp service user interface&quot;&gt;&lt;/p&gt;
&lt;p&gt;The service has now launched. You log in to the service with the &lt;em&gt;Partner Administrator&lt;/em&gt; role.&lt;/p&gt;
&lt;p&gt;By default, the Partner’s advanced monitoring &lt;strong&gt;Add-ons&lt;/strong&gt; are not enabled. It is recommended to enable the &lt;strong&gt;Add-ons&lt;/strong&gt; for the Product Packages at the Partner organization level. To enable the advanced monitoring Add-ons, select &lt;strong&gt;Setup &gt; Setup &gt; Accounts &gt; Partner details&lt;/strong&gt;. Click the &lt;strong&gt;Edit&lt;/strong&gt; button at the top right, select the &lt;strong&gt;Add Ons&lt;/strong&gt; tab, check all the Add-ons, and then click &lt;strong&gt;Save&lt;/strong&gt;. To learn more about the Add-ons, see the &lt;a href=&quot;https://glp.docs.opsramp.com/guides/component-model/#hybrid-discovery-and-monitoring&quot;&gt;HPE OpsRamp Hybrid Discovery and Monitoring documentation&lt;/a&gt;.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This feature will be made available under &lt;strong&gt;Setup &gt; Account&lt;/strong&gt; in an upcoming HPE OpsRamp release.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; To enable the Partner’s Add-ons, ensure your HPE GreenLake workspace details (name, address) do not contain any special characters.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The hybrid observability in HPE GreenLake Flex Solutions uses a multi-tenant model. As a &lt;em&gt;Partner Administrator&lt;/em&gt;, you can manage multiple client accounts, each client account representing an individual organization, business unit, or customer environment. You’ll start setting up your hybrid observability service by:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Defining a client account.&lt;/li&gt;
&lt;li&gt;Assigning specific access roles to users.&lt;/li&gt;
&lt;li&gt;Installing a virtual appliance, called a gateway collector appliance, that discovers and manages your on-premises physical devices.&lt;/li&gt;
&lt;li&gt;Applying integration modules and monitoring configurations to resources.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These steps will be explained in the subsequent blog posts of the series.&lt;/p&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;This blog post helps you get started with provisioning the hybrid observability service powered by HPE OpsRamp Software and activating the subscription key in your HPE GreenLake cloud workspace. For more information about provisioning the service and activating the subscription key, check out the &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00120892en_us&amp;#x26;page=GUID-9EDAAB42-9182-488D-A06F-6E8CB4BFAB60.html&quot;&gt;HPE GreenLake Cloud User Guide&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Stay tuned for &lt;a href=&quot;https://developer.hpe.com/blog/hybrid-observability-service-%E2%80%93-part-2-initial-configuration-to-enable-the-discovery-of-resources-in-hpe-greenlake-flex-solutions/&quot;&gt;my next post of the series&lt;/a&gt;, where I’ll delve into the hybrid observability service integration setup activities to &lt;strong&gt;discover&lt;/strong&gt; infrastructure resources. I’ll create a tenant, also referred to as a client account, for the Partner organization. I’ll also install and configure a gateway collector appliance as a prerequisite to enable the discovery of physical infrastructure devices included in the HPE GreenLake Flex Solutions contract.&lt;/p&gt;
&lt;p&gt;There’s also a series of video tutorials on the &lt;a href=&quot;https://developer.hpe.com/greenlake/hybrid-observability-flex-solutions/home/&quot;&gt;Hybrid observability in HPE GreenLake Flex Solutions landing page&lt;/a&gt; that walks you through the contents described in this blog series for readers who prefer an audio-visual learning experience.&lt;/p&gt;
&lt;p&gt;To resolve issues with HPE GreenLake Flex Solutions or hybrid observability service powered by HPE OpsRamp, contact the support team. While logged in to your HPE GreenLake workspace:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Click &lt;strong&gt;Help &amp;#x26; Support&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Under &lt;strong&gt;Help&lt;/strong&gt;, select &lt;strong&gt;OpsRamp&lt;/strong&gt; or &lt;strong&gt;HPE GreenLake Flex Solutions&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Create New Case&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;</content:encoded></item><item><title><![CDATA[From data centers to centers of data: Navigating the pull of data gravity]]></title><description><![CDATA[“Son, when I don’t have the data—people die.“ The room fell silent as a highly decorated Army colonel spoke about supporting frontline…]]></description><link>https://developer.hpe.com/from-data-centers-to-centers-of-data-navigating-the-pull-of-data-gravity/</link><guid isPermaLink="false">https://developer.hpe.com/from-data-centers-to-centers-of-data-navigating-the-pull-of-data-gravity/</guid><pubDate>Fri, 09 May 2025 11:31:10 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;“Son, when I don’t have the data—people die.“&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The room fell silent as a highly decorated Army colonel spoke about supporting frontline combat operations with real-time IT. His point landed hard: in the fog of war, data isn’t just a convenience—it’s a lifeline. Without it, decisions are delayed. And in combat, delays can cost lives.&lt;/p&gt;
&lt;p&gt;It may sound extreme, but business leaders face a strikingly similar reality. When data is missing, late, or incomplete, decisions suffer. Opportunities are missed. Customers leave. Competitors surge ahead. Whether in warfare or in business, the equation remains simple: &lt;strong&gt;data drives decisions—but only if it can be processed in time to matter&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;In my &lt;a href=&quot;https://developer.hpe.com/blog/the-rise-of-the-distributed-enterprise/&quot;&gt;previous blog post&lt;/a&gt;, I described the rise of the distributed enterprise and how businesses are moving away from centralized data centers and embracing a model that brings technology closer to where value is created. In this post, I&apos;ll get into details about data gravity and the effect it has had on this shift.&lt;/p&gt;
&lt;h2&gt;Ever expanding data—and why you can’t centralize your way out&lt;/h2&gt;
&lt;p&gt;Workloads are getting increasingly complex. The data needed to precisely calibrate a business decision today has grown by several orders of magnitude. New infrastructure capabilities drive new workloads, which in turn expand data mass exponentially. We have witnessed the required data mass more than double every year, a trend that has been observable for at least a decade. What used to be a manageable trickle has become a roaring flood. Real-time analytics, new AI models, IoT sensors, and always-on digital services all demand access to more—and faster—data.&lt;/p&gt;
&lt;p&gt;But here’s the problem: traditional, centralized data center architectures can’t keep up. Trying to move petabytes of data across wide-area networks (WANs) introduces costly delays. Even with WAN speeds improving, they’re simply not accelerating fast enough to match data’s pace. As a result, there is a growing impedance mismatch between data generated at a company’s location and the ability to get it processed at the other end of a network connection before the window of opportunity slams shut.&lt;/p&gt;
&lt;p&gt;Today’s reality? &lt;strong&gt;We’re generating too much data, too quickly, in too many places&lt;/strong&gt;. And that’s forcing us to rethink how, and where, we process it.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Enter: Data gravity&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Data isn’t just growing—it’s anchoring itself, an effect referred to as &lt;em&gt;&lt;strong&gt;data gravity&lt;/strong&gt;&lt;/em&gt;. Like a star with enough mass to pull planets into orbit, large datasets tend to attract everything around them: applications, services, even more data. And the larger the dataset, the harder—and more expensive—it becomes to move back to a centralized data center or public cloud availability zone for processing.&lt;/p&gt;
&lt;p&gt;Instead, the smarter, faster move is to &lt;strong&gt;bring processing power to the data&lt;/strong&gt;, not the other way around.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hybridcloud.hpecorp.net/assets/data-needed-to-make-decisions.png&quot; alt=&quot;Data needed to make a business decision&quot; title=&quot;Data needed to make a business decision&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Source: HPE research&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This is especially true at distributed enterprise locations, where data is being created at unprecedented rates: in hospitals, factories, smart cities, autonomous vehicles, and beyond. These environments demand &lt;strong&gt;real-time&lt;/strong&gt; decision making—and that means they can’t wait for data to go elsewhere to be processed.&lt;/p&gt;
&lt;h2&gt;The numbers tell the story&lt;/h2&gt;
&lt;p&gt;Over the last ten years, we’ve witnessed a staggering transformation in the way data is generated, consumed, and leveraged. This change hasn&apos;t just been a byproduct of digital transformation; it&apos;s been a foundational force behind the rise of artificial intelligence, particularly large language models (LLMs) and agentic AI systems.&lt;/p&gt;
&lt;p&gt;Over the past decade, global data volumes have increased by &lt;strong&gt;115% per year&lt;/strong&gt;. That’s not just exponential growth—it’s super-exponential. Think about it: a terabyte of data generated today will likely be a petabyte in just a few years given current trends.&lt;/p&gt;
&lt;p&gt;But as data mass expands, our ability to move that data hasn&apos;t kept pace. During this same period of time, wide-area network (WAN) bandwidth has grown at just &lt;strong&gt;69% per year&lt;/strong&gt;. While that might sound impressive in isolation, it pales in comparison to the data it&apos;s supposed to support.&lt;/p&gt;
&lt;p&gt;Here’s the stark reality:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Over the last 10 years, the mass of decision-making data has increased &lt;strong&gt;2,250X&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;WAN bandwidth? It’s only grown about &lt;strong&gt;200X&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Let that sink in. We’re facing a dramatic performance drag—at the exact moment when speed is becoming a critical business differentiator. This gap between data growth and bandwidth growth has led to a critical WAN infrastructure strain.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hybridcloud.hpecorp.net/assets/impedence-mismatch.png&quot; alt=&quot;Relative data mass to WAN bandwidth impedance mismatch&quot; title=&quot;Relative data mass to WAN bandwidth impedance mismatch&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Source: HPE research&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The bandwidth shortfall has real consequences. The lag in WAN infrastructure has introduced an additional 11 seconds of delay for every second of decision-making time over the past decade. In high-speed trading, autonomous vehicles, or real-time AI systems, those seconds are unacceptable. They&apos;re not just delays—they&apos;re the difference between relevance and irrelevance, or in some cases, safety and disaster.&lt;/p&gt;
&lt;p&gt;Take a quality control system in a manufacturing plant, for example. It might generate 100MB of camera data &lt;strong&gt;every second&lt;/strong&gt;. To ascertain product quality, that data needs to be processed instantly. Transmitting it to a central data center requires a 1.2 Gbps link—technically possible, but prohibitively expensive and complex, especially at scale.&lt;/p&gt;
&lt;p&gt;The smarter move? &lt;strong&gt;Process it on site&lt;/strong&gt;. Act on it in real time. Let the factory floor become the “center of data.”&lt;/p&gt;
&lt;h2&gt;A new equation for the real-time enterprise&lt;/h2&gt;
&lt;p&gt;Modern AI systems, especially large language models and agentic AI, are data-hungry by nature. Training a state-of-the-art LLM requires ingesting massive datasets that span languages, domains, and formats. But it&apos;s not just about training—it&apos;s about real-time inference, adaptation, and autonomy.&lt;/p&gt;
&lt;p&gt;Agentic AI, which refers to systems that can act autonomously and make decisions in complex environments, amplifies this need. These systems rely on vast amounts of localized, real-time data: sensor feeds, user interactions, environmental inputs, and more. Their effectiveness hinges not only on access to data but on how quickly and efficiently that data can be processed.&lt;/p&gt;
&lt;p&gt;We need a new way to measure value in a data-driven world.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Try this:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;Data mass / Processing time = Business value&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;It’s not just about how much data you have—it’s about how quickly you can turn that data into decisions. And the infrastructure to support that transformation is finally here.&lt;/p&gt;
&lt;p&gt;A few years ago, you needed 24 racks to get 768 CPU cores. Today, you can get that same horsepower in &lt;strong&gt;just three 1U servers&lt;/strong&gt;. Flash storage like NVMe drives is &lt;strong&gt;35x faster&lt;/strong&gt; than traditional disks. 400 Gbps networking between servers is no longer bleeding-edge—it’s becoming standard. But only at a given location.&lt;/p&gt;
&lt;p&gt;So, location-specific hardware is ready. The question is whether our thinking—and architecture—has caught up. To fully realize the potential of LLMs and agentic AI, we need to rethink how and where AI is deployed. The future isn&apos;t about massive, centralized models alone. It&apos;s about hybrid architectures that combine the power of the cloud with the immediacy of the edge.&lt;/p&gt;
&lt;p&gt;We’ve entered a new era—one where &lt;strong&gt;location&lt;/strong&gt; is just as important as &lt;strong&gt;speed&lt;/strong&gt; or &lt;strong&gt;scale&lt;/strong&gt;. As data volumes balloon and decision windows shrink, the old model of hauling everything to a central data center no longer works.&lt;/p&gt;
&lt;p&gt;The new model? Flip the script.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Build &lt;strong&gt;edge-native, distributed&lt;/strong&gt; systems that eliminate bottlenecks.&lt;/li&gt;
&lt;li&gt;Leverage &lt;strong&gt;real-time intelligence&lt;/strong&gt; at the point of creation so you can act on data where it lives, and then move it when it grows cold.&lt;/li&gt;
&lt;li&gt;Design &lt;strong&gt;for gravity&lt;/strong&gt;—not against it—by aligning compute and storage with the natural flow of data.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Taking your data’s temperature into account&lt;/h2&gt;
&lt;p&gt;To be clear, not all data needs to stay at the edge forever. Hot, high-value data—the kind that feeds real-time decisions—should absolutely be processed on site. However, when today’s data cools – or becomes less active – then it should move. What? We were just discussing how IT infrastructure like servers, networks, storage, and orchestration needed to move to the distributed locations and no longer remain in the classic old data center. That still stands. And precisely because IT infrastructure now is distributed, it must only be used for data that drives decisions where operational value is high. But as data cools, it can (and should) be archived or moved to lower-cost, centralized storage.&lt;/p&gt;
&lt;p&gt;Too often, organizations treat all data the same, hoarding everything on an expensive, high-performance infrastructure. That’s a recipe for inefficiency. Instead, take a smarter approach and implement &lt;strong&gt;data lifecycle management&lt;/strong&gt; strategies that balance performance, cost, and compliance.&lt;/p&gt;
&lt;p&gt;In other words: &lt;strong&gt;keep your fast storage clean. Let cold data go&lt;/strong&gt;. Make room for what matters now.&lt;/p&gt;
&lt;h2&gt;Data mass / Time to process × Probability of data reuse = Business value&lt;/h2&gt;
&lt;p&gt;For most data, the probability of reuse declines exponentially over time. Fresh data generated now has a 100% chance of being used, but the same data set 60 days from now may have only a ten percent chance of being accessed. This is a powerful insight.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hybridcloud.hpecorp.net/assets/data-scew.png&quot; alt=&quot;Data skew&quot; title=&quot;Data skew&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Source: Patent number: 9372636 - Dave Zeryck, Denis Vilfort, June 21, 2016&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Since WAN bandwidth is constant, the data set will take the same amount of time to move, but it now has 1/10th the impact on operations. Thus, there comes a point in the data lifecycle where the risk of moving the data is outweighed by the savings: freeing up hundreds of GBs of NVMe storage avoids considerable cost in purchasing more storage capacity for that location. Moving data promotes good ROI and lowers operating costs by enabling location-specific re-use of capacity. Data movement can be combined further with data de-duplication and data compression for deep archiving of cold data in a central location, while recycling location-specific IT infrastructure and making capacity available for fresh incoming data. Think of this strategy as “hot edge – cold core”.&lt;/p&gt;
&lt;h2&gt;A call to act on data where it lives – Now is the time to evolve&lt;/h2&gt;
&lt;p&gt;The last decade of data growth hasn&apos;t just enabled AI—it&apos;s made it inevitable. But it’s also made clear that our old architectures won’t scale. The math is against us: data growth is outpacing our ability to move it, and data gravity is only increasing.&lt;/p&gt;
&lt;p&gt;To keep up, we need a compute paradigm that respects the physics of data. That means putting AI where the data is, embracing distributed intelligence, and designing systems that can thrive in a world of fragmented, fast-moving, and increasingly immobile information.&lt;/p&gt;
&lt;p&gt;Modern infrastructure gives us the tools. High-performance compute, GPU scaling, NVMe flash, and next-gen networking let us act on data where it lives, unlocking insights without dragging data across expensive networks. And since the business value of a given data set decreases over time, we must modify the impact of data gravity for that data set as it cools, moving it away to make room for new insights.&lt;/p&gt;
&lt;p&gt;If your organization is still operating on a centralized data model, now is the time to evolve. The data gravity challenge isn’t going away—it’s only getting stronger. But with the right architecture, strategy, and partners, it’s also an incredible opportunity.&lt;/p&gt;
&lt;p&gt;At HPE, we help enterprises rethink their infrastructure to meet this challenge head-on—building &lt;strong&gt;centers of data&lt;/strong&gt; that maximize performance, reduce waste, and accelerate decision-making.&lt;/p&gt;
&lt;p&gt;Don’t wait. Don’t move your data.&lt;/p&gt;
&lt;p&gt;Act on it**—where it lives**.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Fuel AI innovation, reach your sustainability goals, and build an intelligent private cloud]]></title><link>https://developer.hpe.com/2025-may-05/</link><guid isPermaLink="false">https://developer.hpe.com/2025-may-05/</guid><pubDate>Mon, 05 May 2025 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[10 Myths About Scalable Parallel Programming Languages (Redux), Part 1: Productivity and Performance]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/10-myths-about-scalable-parallel-programming-languages-redux-part-1-productivity-and-performance/</link><guid isPermaLink="false">https://developer.hpe.com/10-myths-about-scalable-parallel-programming-languages-redux-part-1-productivity-and-performance/</guid><pubDate>Wed, 30 Apr 2025 15:05:02 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Chapel/Fortran Interop in an Ocean Model: Introduction]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/chapel-fortran-interop-in-an-ocean-model-introduction/</link><guid isPermaLink="false">https://developer.hpe.com/chapel-fortran-interop-in-an-ocean-model-introduction/</guid><pubDate>Fri, 25 Apr 2025 05:56:41 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Migrating vSphere Virtual Volumes to KubeVirt with HPE CSI Driver for Kubernetes]]></title><description><![CDATA[When we talk about moving to the cloud and in the advent of cloud repatriation there’s no denying that data has gravity. Sometimes we refer…]]></description><link>https://developer.hpe.com/migrating-vsphere-virtual-volumes-to-kubevirt-with-hpe-csi-driver-for-kubernetes/</link><guid isPermaLink="false">https://developer.hpe.com/migrating-vsphere-virtual-volumes-to-kubevirt-with-hpe-csi-driver-for-kubernetes/</guid><pubDate>Tue, 22 Apr 2025 15:04:21 GMT</pubDate><content:encoded>&lt;p&gt;When we talk about moving to the cloud and in the advent of cloud repatriation there’s no denying that data has gravity. Sometimes we refer to this phenomenon as &quot;Hotel California&quot; referencing the American rock band song by the Eagles about being held captive within a hotel. Without a doubt, this concept springs to mind when we think about migrating virtual machines (VMs) within our private cloud between hypervisors. Is your data being held captive by your legacy virtual infrastructure? The most likely answer is &quot;It depends&quot;.&lt;/p&gt;
&lt;p&gt;There are plenty of migration tools and options out there that help application owners either copy data out, restore from a backup, or do something similar. This works great for small workloads with a handful of gigabytes. The downtime involved in making the transition isn&apos;t very significant, so the migration can be performed within existing Service Level Agreements (SLAs).&lt;/p&gt;
&lt;p&gt;Consider the scenario where you have a large virtual file server with a complex file structure of tiny files where everyone knows that copying or restoring the file structure alone would take days and incremental updates would take hours to traverse. How do you tackle this challenge?&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/intro.png&quot; alt=&quot;&quot; title=&quot;Migration workflow&quot;&gt;&lt;/p&gt;
&lt;p&gt;In this blog post, I’ll revisit the concept of &lt;a href=&quot;https://developer.hpe.com/blog/lift-and-transform-apps-with-hpe-csi-driver-for-kubernetes/&quot;&gt;Lift and Transform Apps with HPE CSI Driver for Kubernetes&lt;/a&gt; and show you how to apply the same methodology to migrating large data volumes from VMware vSphere to KubeVirt using native features of the HPE CSI Driver for Kubernetes with HPE Alletra Storage MP B10000.&lt;/p&gt;
&lt;p&gt;The intermediary clones of the source volume that are used to test the migration before cutover are what make the solution compelling and very powerful. Iteratively testing the migration workflow and carefully planning the cutover will prevent mishaps and the unnecessary downtime that might occur with error-prone copy and restore procedures.&lt;/p&gt;
&lt;p&gt;This blog will go through each relevant step in detail. There’s also a &lt;a href=&quot;https://www.youtube.com/embed/5PYROAFVD-A?si=UBBOMb2FXdAgJuKi&quot;&gt;short demo on YouTube&lt;/a&gt; that walks through the contents described below for readers who prefer a more audio-visual learning experience.&lt;/p&gt;
&lt;div class=&quot;gatsby-resp-iframe-wrapper&quot; style=&quot;padding-bottom: 56.212424849699396%; position: relative; height: 0; overflow: hidden; margin-bottom: 1.0725rem&quot; &gt; &lt;iframe src=&quot;https://www.youtube.com/embed/5PYROAFVD-A?si=UBBOMb2FXdAgJuKi&quot; title=&quot;YouTube video player&quot; frameborder=&quot;0&quot; allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share&quot; referrerpolicy=&quot;strict-origin-when-cross-origin&quot; allowfullscreen=&quot;&quot; style=&quot; position: absolute; top: 0; left: 0; width: 100%; height: 100%; &quot;&gt;&lt;/iframe&gt; &lt;/div&gt;
&lt;h1&gt;Assumptions&lt;/h1&gt;
&lt;p&gt;In this scenario, we’ll work with a VM that uses a separate data volume for a MariaDB database. There’s one Fedora VM running on vSphere and one Fedora VM running on KubeVirt. A sample test database is deployed, cloned, and migrated, making it easy to validate the contents of the database from an application perspective at each checkpoint.&lt;/p&gt;
&lt;p&gt;For full disclosure, Ansible playbooks are provided in this blog post to better describe each step in the migration workflow. Since each application and environment will differ every step of the way, it’s assumed that automation workflows to conduct the migration will be written according to the application’s best practices.&lt;/p&gt;
&lt;p&gt;KubeVirt is running on OKD 4.17 (KubeVirt v1.3.1 and Kubernetes v1.30) and has HPE CSI Driver for Kubernetes v2.5.2 installed.&lt;/p&gt;
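&lt;p&gt;A quick way to confirm that the CSI driver is present on the cluster before starting is shown below. This is a sketch that assumes a default installation of the HPE CSI Driver; the namespace may differ in your environment:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# The CSIDriver object registered by the HPE CSI Driver for Kubernetes
$ kubectl get csidriver csi.hpe.com

# The driver pods (hpe-storage is the default namespace, also used later in the StorageClass)
$ kubectl get pods -n hpe-storage
&lt;/code&gt;&lt;/pre&gt;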
&lt;h1&gt;Environment&lt;/h1&gt;
&lt;p&gt;The Ansible inventory consists of two VMs: one grouped as “kubevirt” and one grouped as “legacy” to distinguish which VM the plays are executed on. Both share a “common” group to allow certain prerequisites to be fulfilled on both VMs.&lt;/p&gt;
&lt;p&gt;The entire migration workflow can easily be done with Ansible but certain steps are illustrated with the respective platform’s graphical user interfaces for demonstrative purposes.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;all:
  children:
    legacy:
      hosts: 
        mariadb-legacy:
          ansible_user: packer
          source_host_device: /dev/sdb
    kubevirt:
      hosts:
        mariadb-kubevirt:
          ansible_user: fedora
    common:
      children:
        legacy:
        kubevirt:
      vars:
        filesystem_label: mariadb-testdb
        var_lib_mysql: /var/lib/mysql
        test_db_tgz: https://github.com/datacharmer/test_db/releases/download/v1.0.7/test_db-1.0.7.tar.gz
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The prerequisites playbook is idempotent and can be run as a sanity check at any time to declare the VMs primed for either cloning or migration. It’s assumed there’s an empty data disk attached to the source VM residing on a VMware vSphere vVol datastore hosted by an HPE Alletra Storage MP B10000 or earlier system. Note that this workflow works with any HPE storage platform compatible with the HPE CSI Driver for Kubernetes. There are nuances in how cloning is executed and how to find the backing vVol volume name, but the principles remain the same.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
# Attach data disk to VM prior
- hosts: legacy
  become: yes
  tasks:
  - name: Assert variables
    assert:
      that:
      - source_host_device
      - filesystem_label
      - test_db_tgz

  - name: Gather device info
    stat:
      path: &quot;{{ source_host_device }}&quot;
    register: source_host_device_exist

  - name: Make sure device exist
    assert:
      that:
      - source_host_device_exist.stat.isblk is true

  - name: Create filesystem on device with LABEL
    filesystem:
      dev: &quot;{{ source_host_device }}&quot;
      fstype: ext4
      opts: &quot;-L {{ filesystem_label }}&quot;

  - name: Mount filesystem and persist
    mount:
      path: &quot;{{ var_lib_mysql }}&quot;
      state: mounted
      src: &quot;LABEL={{ filesystem_label }}&quot;
      fstype: ext4

# Install MariaDB on both VMs and download Test_DB
- hosts: common
  become: yes
  tasks:
  - name: Install prereqs manually
    command: dnf install -y virt-what mariadb-server

  - name: Download Test_DB
    unarchive:
      src: &quot;{{ test_db_tgz }}&quot;
      remote_src: yes
      dest: &quot;{{ ansible_user_dir }}&quot;
      creates: test_db

# Disable/stop MariaDB on destination and remove mountpoint
- hosts: kubevirt
  become: yes
  tasks:
  - name: Disable MariaDB
    service:
      enabled: no
      state: stopped
      name: mariadb
  - name: Remove default MariaDB datadir
    file: 
      path: &quot;{{ var_lib_mysql }}&quot;
      state: directory
      recurse: true

# Start/enable MariaDB and deploy TestDB on source
- hosts: legacy
  become: yes
  tasks:
  - name: Set permissions
    file:
      path: &quot;{{ var_lib_mysql }}&quot;
      owner: mysql
      group: mysql
      mode: &quot;0755&quot;
      seuser: system_u
      serole: object_r
      setype: mysqld_db_t
      selevel: s0

  - name: Enable MariaDB
    service:
      name: mariadb
      state: started
      enabled: true

  - name: Deploy Test_DB
    shell: mysql &amp;#x3C; employees_partitioned.sql
    args:
      creates: &quot;{{ var_lib_mysql }}/employees&quot;
      chdir: &quot;{{ ansible_user_dir }}/test_db&quot;

  - name: Test Test_DB
    shell: mysql -t &amp;#x3C; test_employees_sha.sql &amp;#x26;&amp;#x26; mysql -t &amp;#x3C;&amp;#x3C;&amp;#x3C; &apos;select @@hostname as &apos;VM&apos;;&apos;
    args:
      chdir: &quot;{{ ansible_user_dir }}/test_db&quot;
    register: test_db_results

  - name: Show results
    pause:
      prompt: &quot;{{ test_db_results.stdout }}&quot;
      seconds: 10
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Another detail in the playbook that might need clarification is that the Test_DB sources are downloaded to the destination VM only because the database queries are needed for validation of the cloning and migration.&lt;/p&gt;
&lt;p&gt;The last few output lines of the playbook reveal that the database was deployed and tested properly.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;...
+---------+--------+
| summary | result |
+---------+--------+
| CRC     | OK     |
| count   | OK     |
+---------+--------+
+----------------+
| VM             |
+----------------+
| mariadb-legacy |
+----------------+:
ok: [mariadb-legacy]

PLAY RECAP *************************************************************************
mariadb-kubevirt           : ok=6    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
mariadb-legacy             : ok=15   changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once the prerequisites have been installed, you can also inspect the hypervisor platform each VM is running on to confirm that each step is actually being executed where you expect.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ ansible -m command all -b -a virt-what
mariadb-legacy | CHANGED | rc=0 &gt;&gt;
vmware
mariadb-kubevirt | CHANGED | rc=0 &gt;&gt;
redhat
kvm
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;Preparing for the first failure&lt;/h1&gt;
&lt;p&gt;What is being described in this blog is a perfectly canned example. In reality, the first experiment always fails. Maybe even the second, and third. The point is that it should be non-disruptive and easy to recover from failures so you can iteratively improve as the migration is taken across the finish line with as little downtime as possible. What happens before the cutover may take days or weeks of planning and risk-free test execution.&lt;/p&gt;
&lt;p&gt;In the first iteration, the vSphere vVol needs to be cloned into a PersistentVolume (PV) using a Persistent Volume Claim (PVC). The PVC is then attached to the KubeVirt VM as a disk.&lt;/p&gt;
&lt;p&gt;PVC creation may rely on pre-provisioned “static” PVs but more commonly a StorageClass is used to dynamically provision the PV. To allow the PV to clone a pre-existing volume on the storage array, the HPE CSI Driver for Kubernetes needs to be supplied a parameter with the corresponding volume name.&lt;/p&gt;
&lt;p&gt;The CSI driver supports a construct called “allowOverrides” that allows Kubernetes users to supply their own values for selected StorageClass parameters. For this exercise, you&apos;ll need to craft a StorageClass that allows users to override “importVolumeName” (used for migration) and “importVolAsClone” (used for cloning).&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: legacy-migration
provisioner: csi.hpe.com
parameters:
  csi.storage.k8s.io/controller-expand-secret-name: hpe-backend
  csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
  csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-publish-secret-name: hpe-backend
  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-stage-secret-name: hpe-backend
  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
  csi.storage.k8s.io/provisioner-secret-name: hpe-backend
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
  csi.storage.k8s.io/fstype: ext4
  description: &quot;Volume cloned or imported by the HPE CSI Driver for Kubernetes&quot;
  allowOverrides: importVolumeName,importVolAsClone
  compression: &quot;false&quot;
  provisioningType: &quot;&quot;
reclaimPolicy: Delete
allowVolumeExpansion: true
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Users then call out the parameter and the value they desire in the PVC using an annotation.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: testdb-cloned
  namespace: hpe-vmi
  annotations:
    csi.hpe.com/importVolAsClone: dat-mariadbl-56480669
spec:
  volumeMode: Block
  storageClassName: legacy-migration
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 128Gi
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;So, where does “dat-mariadbl-56480669” come from? It’s the storage array volume name. It’s visible to storage administrators on the array, and it’s also available through the HPE Storage Integration Pack for VMware vCenter (SIP4VC) when used with HPE Alletra Storage MP B10000 and earlier platforms.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/sip4vc.png&quot; alt=&quot;&quot; title=&quot;Virtual machine view in SIP4VC&quot;&gt;&lt;/p&gt;
&lt;p&gt;Create the PVC and attach the PVC as a disk to the running KubeVirt VM.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl create -f pvc-clone.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, log in to OKD.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/attach-disk.png&quot; alt=&quot;&quot; title=&quot;Attach disk to VM&quot;&gt;&lt;/p&gt;
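&lt;p&gt;If you’d rather script this step than use the OKD UI, KubeVirt’s virtctl can hot-plug the PVC into the running VM. This is only a sketch, assuming the VM is named mariadb-kubevirt and lives in the hpe-vmi namespace used in this example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Hot-plug the cloned PVC as a disk on the running KubeVirt VM (names assumed from this example)
$ virtctl addvolume mariadb-kubevirt --volume-name=testdb-cloned -n hpe-vmi
&lt;/code&gt;&lt;/pre&gt;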
&lt;p&gt;Once attached, a “migrate” playbook mounts the disk inside the VM, starts the database, and tests it.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
# With PVC attached to the KubeVirt VM, mount and start MariaDB
- hosts: kubevirt
  become: yes
  tasks:
  - name: Mount filesystem and persist
    mount:
      path: &quot;{{ var_lib_mysql }}&quot;
      state: mounted
      src: &quot;LABEL={{ filesystem_label }}&quot;
      fstype: ext4

  - name: Ensure correct perms and SELinux labels
    file:
      path: &quot;{{ var_lib_mysql }}&quot;
      owner: mysql
      group: mysql
      mode: &quot;0755&quot;
      seuser: system_u
      serole: object_r
      setype: mysqld_db_t
      selevel: s0 
      recurse: yes
  - name: Enable MariaDB
    service:
      name: mariadb
      state: started
      enabled: true

  - name: Test Test_DB
    shell: mysql -t &amp;#x3C; test_employees_sha.sql &amp;#x26;&amp;#x26; mysql -t &amp;#x3C;&amp;#x3C;&amp;#x3C; &apos;select @@hostname as &apos;VM&apos;;&apos;
    args:
      chdir: &quot;{{ ansible_user_dir }}/test_db&quot;
    register: test_db_results

  - name: Show results
    pause:
      prompt: &quot;{{ test_db_results.stdout }}&quot;
      seconds: 10
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Executing the playbook should reveal the test results, confirming that the database is actually running on the KubeVirt VM.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;...
+---------+--------+
| summary | result |
+---------+--------+
| CRC     | OK     |
| count   | OK     |
+---------+--------+
+------------------+
| VM               |
+------------------+
| mariadb-kubevirt |
+------------------+:
ok: [mariadb-kubevirt]

PLAY RECAP *************************************************************************
mariadb-kubevirt           : ok=6    changed=4    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Success! The cloning process worked. Now, before finalizing the migration with the actual volume, let’s run a “detach” playbook to stop the database and unmount the filesystem.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
# With PVC attached to the KubeVirt VM, stop MariaDB and unmount the filesystem
- hosts: kubevirt
  become: yes
  tasks:
  - name: Make sure MariaDB is stopped and disabled
    service:
      name: mariadb
      state: stopped
      enabled: no

  - name: Remove data disk mounts
    mount:
      path: &quot;{{ var_lib_mysql }}&quot;
      state: absent
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In the OKD UI, detach the volume and delete the PVC.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/detach-disk.png&quot; alt=&quot;&quot; title=&quot;Detach disk from VM&quot;&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl delete -f pvc-clone.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, let’s migrate the workload permanently.&lt;/p&gt;
&lt;h1&gt;Finalizing&lt;/h1&gt;
&lt;p&gt;The last step simply involves shutting down the source vSphere VM to ensure the source volume isn’t accessed. Once the VM is properly shut down, create a PVC with the “importVolumeName” annotation to move the volume out of the vVol datastore and turn it into a PersistentVolume (PV).&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: testdb-migrated
  namespace: hpe-vmi
  annotations:
    csi.hpe.com/importVolumeName: dat-mariadbl-56480669
spec:
  volumeMode: Block
  storageClassName: legacy-migration
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 128Gi
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Create the PVC.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl create -f pvc-migrate.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, repeat the step within OKD to attach the PVC as a disk to the running VM. Once attached, run the “migrate” playbook.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;...
+---------+--------+
| summary | result |
+---------+--------+
| CRC     | OK     |
| count   | OK     |
+---------+--------+
+------------------+
| VM               |
+------------------+
| mariadb-kubevirt |
+------------------+:
ok: [mariadb-kubevirt]

PLAY RECAP *************************************************************************
mariadb-kubevirt           : ok=6    changed=4    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You should see results identical to those from the cloning procedure.&lt;/p&gt;
&lt;p&gt;Next, delete the source VM in vCenter and plan the next migration.&lt;/p&gt;
&lt;h1&gt;Learn more&lt;/h1&gt;
&lt;p&gt;I’ve barely scratched the surface of the HPE CSI Driver for Kubernetes features with HPE Alletra Storage MP B10000. The “allowOverrides” construct, along with “importVolAsClone” and “importVolumeName”, lends itself to many different use cases, data migration being just one of them. Importing a “foreign” volume into Kubernetes enables sophisticated DevOps processes where terabytes of data can be made available to developers, data analysts, and testing frameworks using CI/CD pipelines.&lt;/p&gt;
&lt;p&gt;Visit &lt;a href=&quot;http://scod.hpedev.io&quot;&gt;scod.hpedev.io&lt;/a&gt; to learn more about the HPE CSI Driver for Kubernetes and neighboring technologies. Sign up to the &lt;a href=&quot;https://developer.hpe.com/slack-signup/&quot;&gt;HPE Developer Community Slack&lt;/a&gt; to interact with HPE staff and technologists from customers and partners.&lt;/p&gt;
&lt;p&gt;New to KubeVirt? Check out the author’s previous blog post on &lt;a href=&quot;https://developer.hpe.com/blog/management-paradigms-for-virtual-machines-running-on-kubernetes/&quot;&gt;management paradigms for virtual machines running on Kubernetes&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Building the Intelligent Private Cloud with the Triple-O Loop]]></title><description><![CDATA[This is the second installment in our series that explores the evolution of the intelligent private cloud. From the foundation provided…]]></description><link>https://developer.hpe.com/building-the-intelligent-private-cloud-with-the-triple-o-loop/</link><guid isPermaLink="false">https://developer.hpe.com/building-the-intelligent-private-cloud-with-the-triple-o-loop/</guid><pubDate>Tue, 22 Apr 2025 10:24:51 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;p&gt;This is the second installment in our series that explores the evolution of the intelligent private cloud. From the foundation provided through virtualization to AI-driven automation, each blog post will examine a critical aspect of building a modern, efficient, and scalable private cloud infrastructure.&lt;/p&gt;
&lt;h2&gt;Building an intelligent private cloud&lt;/h2&gt;
&lt;p&gt;Today&apos;s private clouds are highly sophisticated combinations of hardware infrastructure, software, and services. They leverage advanced data observability and AI capabilities to enhance performance, optimize resource utilization, and enable as-a-service consumption models, all while ensuring robust security and compliance.&lt;/p&gt;
&lt;p&gt;In my &lt;a href=&quot;https://developer.hpe.com/blog/the-rise-commoditization-and-integration-of-virtualization-into-the-private-cloud/&quot;&gt;previous post&lt;/a&gt;, I covered the importance of virtualization in the development of these intelligent private clouds, and how it allows for the breaking down of the infrastructure into modular, abstracted layers using open-source tools like KVM, Open vSwitch, and Ceph. However, virtualization alone doesn’t make a private cloud intelligent. That transformation requires operationalizing the cloud using what I call the Triple-O Loop.&lt;/p&gt;
&lt;p&gt;In this post, I’ll explore how the Triple-O loop turns a virtualized environment into an adaptive, AI-enhanced platform — delivering today’s Intelligent Private Cloud.&lt;/p&gt;
&lt;h2&gt;Operationalizing intelligence: Introducing the Triple-O Loop&lt;/h2&gt;
&lt;p&gt;At the heart of the Intelligent Private Cloud is the Triple-O Loop — a closed, continuous cycle where cloud components work together to Orchestrate, Observe, and Optimize operations. This concept is somewhat akin to the decision-making model developed by United States Air Force Colonel John Boyd, known as the OODA loop. A major difference, however, is that in the Triple-O Loop, orchestration, observation, and optimization are used in a proactive manner to drive system behaviors, as opposed to the more reactive OODA model that starts by observing the field operations environment and then reacts to field events.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/adly-lakhdar-tiple-o-loop-img1.png&quot; width=&quot;410&quot; height=&quot;421&quot; alt=&quot;Introducing the Triple-O Loop&quot; title=&quot;Introducing the Triple-O Loop&quot;&gt;&lt;/center&gt;
&lt;p&gt;In the Triple-O Loop, each element of the loop enables the platform to take action, sense its environment, and adapt intelligently.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Orchestrate:&lt;/strong&gt; Executes provisioning, workflows, and policy actions.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Observe:&lt;/strong&gt; Gathers and correlates telemetry, events, and behavior.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Optimize:&lt;/strong&gt; Uses AI to tune resources and improve performance, sustainability, and compliance.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These elements operate not as standalone silos, but as a tightly integrated system. Each informs and enhances the others, forming the feedback engine behind a truly intelligent infrastructure.&lt;/p&gt;
&lt;h2&gt;Aligning elements to the private cloud&lt;/h2&gt;
&lt;p&gt;To visualize how these elements function within the architecture, imagine the Triple-O Loop converted into horizontal layers, with each mapped to a role in the private cloud stack:&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/adly-lakhdar-aligning-elements-img2.png&quot; width=&quot;479&quot; height=&quot;382&quot; alt=&quot;Elements to the Private Cloud&quot; title=&quot;Elements to the Private Cloud&quot;&gt;&lt;/center&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Top:&lt;/strong&gt; Optimize layer – intelligence and control plane with AI and compliance logic.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Upper middle:&lt;/strong&gt; Orchestrate layer – automation and policy enforcement.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Lower middle:&lt;/strong&gt; Observe layer – telemetry and system awareness.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Base:&lt;/strong&gt; Hardware and software infrastructure.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These layers correspond directly to the structure of a modern private cloud, from unified control and security at the top to observability and telemetry at the base. Here, the Triple-O Loop transforms the virtualized environment into an adaptive, AI-enhanced platform — delivering today’s Intelligent Private Cloud.&lt;/p&gt;
&lt;h2&gt;Agentic AI: Intelligence that acts&lt;/h2&gt;
&lt;p&gt;A defining feature of the Intelligent Private Cloud architecture is the presence of Agentic AI—intelligent agents embedded within each layer of the Triple-O Loop. These agents operate autonomously, performing crucial tasks that enhance the cloud&apos;s intelligence and responsiveness.&lt;/p&gt;
&lt;p&gt;These agents are capable of:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; infrastructure and application behavior continuously, ensuring that all components are functioning optimally.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Predicting&lt;/strong&gt; failures or inefficiencies based on learned patterns, thereby preventing potential issues before they impact operations.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Taking&lt;/strong&gt; corrective action by enforcing policies, or escalating issues without human intervention, ensuring that the cloud can self-manage and adhere to compliance requirements.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;By aligning Triple-O capabilities with architectural layers and empowering them with Agentic AI, the Intelligent Private Cloud transcends being a mere management system. It becomes a self-optimizing, mission-aware platform capable of sustaining operations in real-time, regulated, and mission-critical environments. This transformation is pivotal in creating a cloud environment that not only adapts to current conditions but also anticipates future needs and challenges.&lt;/p&gt;
&lt;h2&gt;The expanded loop&lt;/h2&gt;
&lt;p&gt;To further understand the intricacies of the Triple-O Loop, let&apos;s delve into each of its components: Orchestrate, Observe, and Optimize.&lt;/p&gt;
&lt;h3&gt;Orchestrate:&lt;/h3&gt;
&lt;p&gt;At its core, the orchestration layer executes provisioning, workflows, and policy actions. This involves automating the deployment and management of resources, ensuring that the infrastructure can scale dynamically and efficiently. Tools such as Kubernetes, Ansible and HPE Morpheus are instrumental in this layer, enabling seamless orchestration of containers, configurations, and infrastructure as code.&lt;/p&gt;
&lt;h3&gt;Observe:&lt;/h3&gt;
&lt;p&gt;The observation layer is crucial for maintaining system awareness. It gathers and correlates telemetry, events, and behavior from various components of the infrastructure. This data is then analyzed to provide insights into the performance and health of the cloud environment. Tools like Prometheus, ELK Stack, Grafana and HPE OpsRamp play a vital role in providing visibility and actionable insights.&lt;/p&gt;
&lt;h3&gt;Optimize:&lt;/h3&gt;
&lt;p&gt;The optimization layer leverages AI to fine-tune resources and improve overall cost, performance, sustainability, and compliance. By continuously analyzing data, AI algorithms can dynamically adjust resource allocation, identify inefficiencies, and implement improvements. This ensures that the cloud environment remains optimized and compliant with regulatory and sustainability standards.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/adly-lakhdar-agenti-ai-img3.png&quot; width=&quot;590&quot; height=&quot;668&quot; alt=&quot;Agentic AI - Intelligence That Acts&quot; title=&quot;Agentic AI - Intelligence That Acts&quot;&gt;&lt;/center&gt;
&lt;p&gt;In a Venn diagram representation, the three elements — &lt;strong&gt;Orchestrate&lt;/strong&gt;, &lt;strong&gt;Observe&lt;/strong&gt;, and &lt;strong&gt;Optimize&lt;/strong&gt; — form layered capabilities that interact continuously across the private cloud stack. Signals from observability trigger automation via orchestration, and insights generated through optimization feed both layers to refine behavior over time.&lt;/p&gt;
&lt;p&gt;At the center of the Venn diagram, where all three capabilities intersect, lies the autonomous, agentic platform — a zone of continuous feedback and intelligent action. Here, AI agents monitor, decide, and execute in real time, creating a living, learning system that adapts dynamically to workload, policy, and mission demands.&lt;/p&gt;
&lt;h2&gt;The outcome: The Intelligent Private Cloud&lt;/h2&gt;
&lt;p&gt;The Intelligent Private Cloud integrates open-source technologies, enterprise-grade orchestration, AI-augmented optimization, and deep observability to deliver a living infrastructure system.&lt;/p&gt;
&lt;p&gt;Key features include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A built-in data fabric that ensures seamless data management and integration across various environments.&lt;/li&gt;
&lt;li&gt;Integrated GPUs for AI/ML/GenAI workloads, enabling high-performance computing capabilities.&lt;/li&gt;
&lt;li&gt;Seamless module onboarding via orchestration, which simplifies the deployment and management of new services and applications.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Together, these features enable hybrid, edge, and multi-cloud operations with full visibility, intelligence, and control.&lt;/p&gt;
&lt;h2&gt;A note on organizational alignment&lt;/h2&gt;
&lt;p&gt;To effectively deliver such a closely integrated private cloud, organizations must rethink their team structures. Moving away from discrete product silos and aligning teams around the three core capabilities — Orchestrate, Observe, and Optimize — can enhance agility, quality, modularity, and efficiency.&lt;/p&gt;
&lt;p&gt;This concept, known as the reverse Conway law, will be explored in greater detail in a subsequent blog post. Organizations that follow the reverse Conway law intentionally design their team and organizational structures to align with the desired architecture, thereby ensuring that the technology and organizational strategies are in harmony.&lt;/p&gt;
&lt;h2&gt;Conclusion: A living cloud&lt;/h2&gt;
&lt;p&gt;The Intelligent Private Cloud represents more than just infrastructure — it is a dynamic system that senses, decides, and acts. Powered by the Triple-O Loop, it revolutionizes IT operations by transitioning from reactive management to proactive, intelligent automation.&lt;/p&gt;
&lt;p&gt;This transformation is critical for organizations seeking to stay ahead in an increasingly complex and fast-paced digital landscape. By embracing the principles of the Intelligent Private Cloud, enterprises can achieve unprecedented levels of efficiency, agility, and resilience.&lt;/p&gt;
&lt;p&gt;In my next blog post, I’ll do a deep dive into observability and AI-powered insight and show you the power of OpsRamp as demonstrated within this architecture.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Beyond generic AI: achieve contextual accuracy with HPE's Knowledge Bases]]></title><description><![CDATA[The demand for AI applications that deliver accurate, contextually relevant insights from enterprise data is rapidly increasing. However…]]></description><link>https://developer.hpe.com/beyond-generic-ai-achieve-contextual-accuracy-with-hpes-knowledge-bases/</link><guid isPermaLink="false">https://developer.hpe.com/beyond-generic-ai-achieve-contextual-accuracy-with-hpes-knowledge-bases/</guid><pubDate>Tue, 15 Apr 2025 08:45:24 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;p&gt;The demand for AI applications that deliver accurate, contextually relevant insights from enterprise data is rapidly increasing. However, the challenge of integrating fragmented, often sensitive, data into generative models remains a significant hurdle.&lt;/p&gt;
&lt;p&gt;To solve this, Hewlett Packard Enterprise (HPE) has introduced knowledge bases in HPE AI Essentials Software, the software foundation that makes HPE Private Cloud AI a comprehensive, user-friendly platform for enterprises seeking to deploy and scale AI solutions. This new feature provides a fully managed Retrieval Augmented Generation (RAG) experience, enabling secure and efficient connection between foundation models and internal data. This streamlined approach handles everything from vector database setup to sophisticated query handling, allowing data science professionals to create highly customized and accurate AI applications that are tailored to their specific business needs.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/abhipicture12-img1.png&quot; alt=&quot;HPE AI Essentials workflow to automate connecting LLMs to enterprise data&quot; title=&quot;HPE AI Essentials workflow to automate connecting LLMs to enterprise data&quot;&gt;&lt;/p&gt;
&lt;p&gt;HPE AI Essentials simplifies the implementation of RAG by automating the critical steps involved with connecting large language models (LLMs) to enterprise data. Users retain control over LLM selection and embedding model choice, while the platform manages the underlying infrastructure. HPE AI Essentials automatically handles the conversion of diverse data formats into vector embeddings and ensures efficient storage and retrieval through a managed vector database. This approach allows developers to focus on application logic, rather than the intricacies of RAG pipeline management.&lt;/p&gt;
&lt;p&gt;The platform manages the creation, storage, maintenance, and updates of vector embeddings, the numerical representations of semantic textual data. This automation simplifies data synchronization, allowing users to efficiently update source data. HPE AI Essentials Software provides granular control over the RAG pipeline through configurable parameters for chunking, retrieval, and response generation. This enables data science professionals to tailor a model&apos;s processing and understanding to specific use cases, resulting in improved retrieval accuracy and response coherence.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/abhipicture21-img2.png&quot; alt=&quot;HPE AI Essentials: Streamlining LLM Application with Intuitive Data Source and Model Management.&quot; title=&quot;HPE AI Essentials: Streamlining LLM Application with Intuitive Data Source and Model Management.&quot;&gt;&lt;/p&gt;
&lt;p&gt;Automated document chunking defaults to 512-word segments, optimized for question-answering tasks. Users can further customize chunk sizes and overlaps, with a recommended 0-20% overlap for accuracy gains, while being aware of the potential for reduced relevancy with excessive overlap.&lt;/p&gt;
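&lt;p&gt;To make these parameters concrete, here is a minimal, generic Python sketch of word-based chunking with a configurable overlap. This is not HPE AI Essentials code; the function and parameter names are illustrative only, but the 512-word default and the 0-20% overlap guidance above map directly onto the chunk size and overlap ratio used here.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def chunk_words(text, chunk_size=512, overlap_ratio=0.1):
    &quot;&quot;&quot;Split text into word-based chunks with a fractional overlap.

    A 0.1 overlap_ratio on 512-word chunks repeats roughly 51 words
    between consecutive chunks, in line with the 0-20% guidance above.
    &quot;&quot;&quot;
    words = text.split()
    overlap = int(chunk_size * overlap_ratio)
    step = max(1, chunk_size - overlap)  # advance by chunk size minus overlap
    chunks = []
    for start in range(0, len(words), step):
        chunk = words[start:start + chunk_size]
        if chunk:
            chunks.append(&quot; &quot;.join(chunk))
    return chunks

# Example: ~1,200 words yield two full 512-word chunks plus a shorter tail.
sample = &quot; &quot;.join([&quot;word&quot;] * 1200)
print([len(c.split()) for c in chunk_words(sample)])
&lt;/code&gt;&lt;/pre&gt;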
&lt;p&gt;HPE AI Essentials features a playground, a dedicated environment for interactive knowledge base exploration and management across multiple sessions. This tool enables iterative refinement of model behavior through customizable response parameters and prompt templates. Users can inject background data, define user-specific constraints, and implement detailed prompting strategies, providing the flexibility required for advanced AI development and optimization.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/abhipicture31-img3.png&quot; alt=&quot;Starting a playground for knowledge base exploration and management.&quot; title=&quot;Starting a playground for knowledge base exploration and management.&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/abhipicture42-img4-merged.png&quot; alt=&quot;Customization of knowledge base playground components &quot; title=&quot;Customization of knowledge base playground components &quot;&gt;&lt;/p&gt;
&lt;p&gt;To support applications requiring sophisticated, data-driven workflows, knowledge bases can be accessed programmatically via dedicated endpoints within HPE AI Essentials. Secure authorization is achieved using long-lived authorization tokens, allowing for sustained interaction with these endpoints. The following code example provides a clear demonstration of endpoint usage.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/abhipicture51-img5.png&quot; alt=&quot;Code to programmatically access knowledge base via endpoints&quot; title=&quot;Code to programmatically access knowledge base via endpoints &quot;&gt;&lt;/p&gt;
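&lt;p&gt;In case the code in the screenshot above is hard to read, the general pattern looks roughly like the following sketch. The endpoint path, payload fields, and token value below are placeholders made up for illustration, not the documented HPE AI Essentials API, so check the product documentation for the exact endpoint and request format exposed by your knowledge base.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import requests

# Placeholder values: substitute the endpoint URL and long-lived token
# exposed for your knowledge base in HPE AI Essentials. These names are
# illustrative assumptions, not the documented API.
KB_ENDPOINT = &quot;https://your-aie-instance.example.com/knowledge-bases/my-kb/query&quot;
AUTH_TOKEN = &quot;your-long-lived-authorization-token&quot;

headers = {
    &quot;Authorization&quot;: f&quot;Bearer {AUTH_TOKEN}&quot;,
    &quot;Content-Type&quot;: &quot;application/json&quot;,
}
payload = {&quot;query&quot;: &quot;Which storage policies apply to the analytics cluster?&quot;}

response = requests.post(KB_ENDPOINT, headers=headers, json=payload, timeout=30)
response.raise_for_status()
print(response.json())
&lt;/code&gt;&lt;/pre&gt;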
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;Leverage HPE AI Essentials knowledge bases to streamline the development of data-driven AI applications. The platform automates key RAG pipeline components, including vector embedding management and data synchronization, reducing operational overhead. Data scientists can focus on application logic and customization, utilizing programmable endpoints and the interactive playground for efficient development. For implementation details and API integration, refer to the technical resources listed below.&lt;/p&gt;
&lt;h2&gt;Learn more:&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://www.hpe.com/psnow/product-documentation?oid=1014847366&amp;#x26;cc=my&amp;#x26;lc=en&amp;#x26;jumpid=in_pdp-psnow-docs&quot;&gt;HPE Private Cloud AI Documentation&lt;/a&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://hpe.com/support/PCAIUserGuide&quot;&gt;Administration Guide&lt;/a&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://www.hpe.com/support/AIEDocs&quot;&gt;HPE AI Essentials Software &lt;/a&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00aie16hen_us&amp;#x26;page=Tutorials/Tutorials/Tutorials.html&quot;&gt;Tutorials for HPE AI Essentials on HPE Private Cloud AI&lt;/a&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://github.com/HPEEzmeral/aie-tutorials/tree/aie-1.7.0&quot;&gt;GitHub Tutorials in HPE AI Essentials Software&lt;/a&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://www.brighttalk.com/webcast/19535/640132?utm_source=HPE&amp;#x26;utm_medium=brighttalk&amp;#x26;utm_campaign=640132&quot;&gt;Technical Demo Video&lt;/a&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Memory Safety in Chapel]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/memory-safety-in-chapel/</link><guid isPermaLink="false">https://developer.hpe.com/memory-safety-in-chapel/</guid><pubDate>Thu, 10 Apr 2025 17:49:51 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Fueling AI innovation with HPE Data Fabric Software]]></title><description><![CDATA[external blog post]]></description><link>https://developer.hpe.com/fueling-ai-innovation-with-hpe-data-fabric-software/</link><guid isPermaLink="false">https://developer.hpe.com/fueling-ai-innovation-with-hpe-data-fabric-software/</guid><pubDate>Wed, 09 Apr 2025 12:00:06 GMT</pubDate><content:encoded>&lt;p&gt;external blog post&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Practical tips to streamline operations, AI deployment, product updates, and more!]]></title><link>https://developer.hpe.com/2025-april-04/</link><guid isPermaLink="false">https://developer.hpe.com/2025-april-04/</guid><pubDate>Thu, 03 Apr 2025 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Announcing ChapelCon '25!]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/announcing-chapelcon-25/</link><guid isPermaLink="false">https://developer.hpe.com/announcing-chapelcon-25/</guid><pubDate>Mon, 31 Mar 2025 19:25:45 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[The rise, commoditization, and integration of virtualization into the private cloud]]></title><description><![CDATA[This is the first installment in a four-part series exploring the evolution of the intelligent private cloud. From the foundation provided…]]></description><link>https://developer.hpe.com/the-rise-commoditization-and-integration-of-virtualization-into-the-private-cloud/</link><guid isPermaLink="false">https://developer.hpe.com/the-rise-commoditization-and-integration-of-virtualization-into-the-private-cloud/</guid><pubDate>Mon, 24 Mar 2025 16:40:57 GMT</pubDate><content:encoded>&lt;p&gt;This is the first installment in a four-part series exploring the evolution of the intelligent private cloud. From the foundation provided through virtualization to AI-driven automation, each blog post will examine a critical aspect of building a modern, efficient, and scalable private cloud infrastructure.&lt;/p&gt;
&lt;h2&gt;Virtualization: The foundation of private and hybrid cloud&lt;/h2&gt;
&lt;p&gt;Virtualization has been a cornerstone of IT infrastructure for decades, evolving from a call for greater mainframe efficiency in the 1960s to the x86-based hypervisors of the 2000s that enabled more scalable and flexible workloads. Initially, businesses relied on costly, inflexible bare-metal servers to handle their computing needs, but virtualization transformed IT infrastructure by allowing multiple virtual machines (VMs) to run on a single system, optimizing resources and paving the way for public cloud adoption.&lt;/p&gt;
&lt;p&gt;By the 2000s, companies like VMware, Red Hat, and IBM helped accelerate virtualization adoption, enabling resource sharing, multi-tenancy, and on-demand services to set the stage for cloud computing. As a result, businesses moved many workloads to the public cloud, taking advantage of the ease and flexibility of Infrastructure-as-a-Service. But while public cloud services offer greater flexibility, they also introduce cost unpredictability and security concerns, leading enterprises to invest in private clouds for better control. Private clouds allow them to replicate the cloud experience while maintaining governance and compliance.&lt;/p&gt;
&lt;p&gt;I remember the early days of cloud adoption when developers in my team started bypassing IT to spin up workloads in the public cloud. It was called shadow IT. It was fast and easy but also introduced risks — uncontrolled spending, security blind spots, and compliance challenges. To regain control, we built our first private cloud, mimicking the public cloud experience but with enterprise-grade governance. It was a turning point. The lessons learned from that transition shaped my understanding of how virtualization and other private cloud technologies must work together to balance agility with control.&lt;/p&gt;
&lt;h2&gt;Virtualization and commoditization&lt;/h2&gt;
&lt;p&gt;Today, virtualization remains essential, supporting modern hybrid cloud strategies where VMs and containers coexist. More than just a tool for efficiency, virtualization is the backbone of private cloud architectures, ensuring scalability, security, and cost management.&lt;/p&gt;
&lt;p&gt;Despite the rise of containerization, virtualization remains a $100 billion+ industry, proving its enduring value.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/virtualization-privatecloud-image1.png&quot; alt=&quot;Server Virtualization Software Global Market Report of 2025 from The Business Research Company&quot; title=&quot;Server Virtualization Software Global Market Report of 2025 from The Business Research Company&quot;&gt;&lt;/p&gt;
&lt;p&gt;The cloud-first (private and public) shift that virtualization initiated has led to it no longer being considered a standalone product — it is now an assumed layer of infrastructure much like networking and storage. As enterprises transition to infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS) models, virtualization remains the backbone for their self-service private cloud environments.&lt;/p&gt;
&lt;h2&gt;Open source as the standard, not an alternative&lt;/h2&gt;
&lt;p&gt;Virtualization was once considered a premium feature, but open-source innovation has commoditized it. Open-source technology now dominates the private cloud landscape. According to Gartner (2023), 70% of enterprises use open-source virtualization. Forrester (2023) agrees, stating that 80% of new virtualization deployments are open-source. As a result, proprietary virtualization markets are shrinking by 5-8% year-over-year (IDC). Gartner (2023) also reports that hybrid cloud adoption has reached 75%, driven by the adoption of open-source.&lt;/p&gt;
&lt;p&gt;HPE uses an open-core business model where the core software is open-source, while the management plane, enterprise-grade indemnification, and support offered are proprietary. This is a business model adopted by many other companies, including IBM, Red Hat, HashiCorp, and Docker. HPE adopts this model for many of its products like HPE AI Essentials, HPE Ezmeral Unified Analytics, and HPE Ezmeral Container Platform.&lt;/p&gt;
&lt;h2&gt;Cost: Per-socket vs. per-core&lt;/h2&gt;
&lt;p&gt;Licensing costs have played a significant role in virtualization’s evolution. Traditional per-core pricing penalizes hardware advancements, while per-socket pricing ensures predictable costs and higher return on investment.&lt;/p&gt;
&lt;p&gt;The Next Platform (2024) found that licensing costs can be reduced by 50% with single-socket, high-core CPUs. High-core CPUs also reduce power per virtual machine by more than 30%. IDC reports that per-core pricing increases costs exponentially as CPUs scale, making per-socket a preferred model.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/virtualization-privatecloud-image2.png&quot; alt=&quot;Comparing per-core versus per-socket pricing models&quot; title=&quot;Comparing per-core versus per-socket pricing models&quot;&gt;&lt;/p&gt;
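&lt;p&gt;As a purely illustrative back-of-the-envelope comparison (the prices and core counts below are invented for the example and are not HPE or vendor list prices), the following sketch shows why per-core licensing costs climb with CPU density while per-socket costs stay flat:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Illustrative only: invented unit prices and core counts, not real list prices.
per_core_price = 100     # hypothetical cost per licensed core
per_socket_price = 3000  # hypothetical cost per licensed socket
sockets = 2

for cores_per_socket in (16, 32, 64, 96):
    per_core_total = per_core_price * cores_per_socket * sockets
    per_socket_total = per_socket_price * sockets
    print(
        f&quot;{cores_per_socket} cores/socket: &quot;
        f&quot;per-core ${per_core_total:,} vs. per-socket ${per_socket_total:,}&quot;
    )
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With these made-up numbers, per-core licensing overtakes per-socket licensing as soon as core counts per socket grow, which is exactly the dynamic the chart above describes.&lt;/p&gt;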
&lt;h2&gt;Examining the layered infrastructure: Bare metal, virtual machines, containers, and orchestration&lt;/h2&gt;
&lt;p&gt;Let’s explore the HPE model a bit more, where open-core virtualization acts as a building block that bridges Linux kernel technologies with the HPE hybrid cloud orchestration platform. Modern cloud architectures leverage a layered approach where bare metal, virtual machines, and containers coexist. The mixture serves to accommodate many different types of workloads. Bare metal is well suited for high-performance computing, artificial intelligence, machine learning, and low-latency workloads. Virtual machines provide security, multi-tenancy, and flexibility. Containers enable cloud-native applications and DevOps agility.&lt;/p&gt;
&lt;p&gt;It is worth noting that both virtualization and containerization share foundational similarities, leveraging the Linux kernel’s namespaces, cgroups, SELinux, AppArmor, iptables, systemd, and OverlayFS for isolation, security, and resource management. While virtualization relies on KVM (kernel module) with user-space QEMU, containerization uses Kubernetes, Docker, and Ceph in user space, built on the same Linux primitives.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/hand-drawn-k8s-larger-2.png&quot; alt=&quot;Virtualization and containerization&quot; title=&quot;Virtualization and containerization&quot;&gt;&lt;/p&gt;
&lt;p&gt;This approach allows efficient resource allocation and dense application hosting on high-density servers. These technologies have now been fully commoditized at the platform level, ensuring seamless integration into private cloud environments.&lt;/p&gt;
&lt;p&gt;HPE’s new virtualization solution, HPE VM Essentials (VME), is an open-core virtualization building block that simultaneously integrates with the bare-metal Linux OS and the HPE private cloud orchestration platform (through the HPE VM Essentials Manager), enabling seamless workload deployment from edge to cloud in a distributed enterprise. The KVM-based HPE VM Essentials enhances virtualization with enterprise-grade cluster management and capabilities such as high availability, live migration, distributed workload placement, integrated data protection, secure hardening, and external storage support.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/virtualization-privatecloud-image4.png&quot; alt=&quot;HPE VM Essentials architecture&quot; title=&quot;HPE VM Essentials architecture&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Next steps&lt;/h2&gt;
&lt;p&gt;This blog post is the first in a series that will take us on a journey from traditional virtualization to lean, cost-efficient, AI-driven infrastructure that scales; where open-source is the default at its core, efficiency is the priority, and artificial intelligence is the future.&lt;/p&gt;
&lt;p&gt;My next post will explore orchestration as the engine of the modern cloud. Subsequent posts will cover observability and AI, leading to the intelligent and fully automated private cloud.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Announcing Chapel 2.4!]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/announcing-chapel-2-4/</link><guid isPermaLink="false">https://developer.hpe.com/announcing-chapel-2-4/</guid><pubDate>Thu, 20 Mar 2025 21:43:27 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[No-code integration of HPE GreenLake cloud with ServiceNow]]></title><description><![CDATA[In my previous tutorial on webhooks for HPE GreenLake cloud, I used a no-code/low-code platform called Make.com to implement a webhook…]]></description><link>https://developer.hpe.com/no-code-integration-of-hpe-greenlake-cloud-with-servicenow/</link><guid isPermaLink="false">https://developer.hpe.com/no-code-integration-of-hpe-greenlake-cloud-with-servicenow/</guid><pubDate>Tue, 18 Mar 2025 07:49:59 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;p&gt;In my &lt;a href=&quot;https://developer.hpe.com/blog/getting-started-with-the-hpe-greenlake-cloud-eventing-framework/&quot;&gt;previous tutorial&lt;/a&gt; on webhooks for HPE GreenLake cloud, I used a no-code/low-code platform called &lt;a href=&quot;https://www.make.com/&quot;&gt;Make.com&lt;/a&gt; to implement a webhook handler. The webhook handler’s URL endpoint was then registered with HPE GreenLake cloud to subscribe to audit log events from the platform. In my simplistic use case, I stored these events in a Google Sheet as they arrived.&lt;/p&gt;
&lt;p&gt;For this handler, I was responsible for taking care of the handshake initiated by HPE GreenLake cloud when the new webhook was registered. This is a security feature that uses a secret key shared by HPE GreenLake cloud and the webhook handler. The following diagram illustrates the mechanics involved in this process.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/slide-pour-blog-webhooks.jpg&quot; alt=&quot;HPE Greenlake cloud event framework&quot; title=&quot;HPE Greenlake cloud event framework&quot;&gt;&lt;/p&gt;
&lt;p&gt;As this implementation was working well, I thought I would reuse this low-code handler technique to demonstrate another use case of an integration with an IT service management (ITSM) platform. I decided to use ServiceNow because it’s a very popular ITSM platform and because it provides a very nice way for developers to fire up a &lt;a href=&quot;https://devportaluat.service-now.com/dev.do#!/learn/learning-plans/washingtondc/new_to_servicenow/app_store_learnv2_buildmyfirstapp_washingtondc_personal_developer_instances&quot;&gt;personal developer instance&lt;/a&gt; (PDI) and get their work done.&lt;/p&gt;
&lt;h2&gt;Problems, incidents and change orders&lt;/h2&gt;
&lt;p&gt;Most of these ITSM platforms manage the lifecycle of different types of tickets (New, In Progress, Resolved). One of the most common ticket types is called &lt;em&gt;Incident&lt;/em&gt;, which is the type of ticket that users might open when they face an issue and want to raise it to their IT team for resolution. In most cases, end users use a web portal to do this. However, there is also an API to do it programmatically, and this is exactly what I’m going to use in this blog post.&lt;/p&gt;
&lt;h2&gt;What’s the right API then?&lt;/h2&gt;
&lt;p&gt;To be fair, I must admit that I initially reached out to ChatGPT to get started, asking it &lt;em&gt;“how do I create an incident in ServiceNow?”&lt;/em&gt;. To my surprise, it returned a short Python script that, once edited to use my ServiceNow personal developer instance, username, and password, worked immediately. This confirmed my choice of ServiceNow.&lt;/p&gt;
&lt;p&gt;Here is the Python script returned by ChatGPT:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import requests
import json
# Replace these variables with your own information
instance_url = &quot;https://your-instance.service-now.com&quot;
username = &quot;your-username&quot;
password = &quot;your-password&quot;
service_id = &quot;service-id&quot;  # The Service ID related to the incident

# Define the API endpoint for creating an incident
url = f&quot;{instance_url}/api/now/table/incident&quot;

# Create the headers for the request (set the content-type to application/json)
headers = {
    &quot;Content-Type&quot;: &quot;application/json&quot;,
    &quot;Accept&quot;: &quot;application/json&quot;
}

# Define the data to be sent in the request body
data = {
    &quot;short_description&quot;: &quot;Issue with email service&quot;,
    &quot;description&quot;: &quot;The email service is down and needs urgent attention.&quot;,
    &quot;category&quot;: &quot;Inquiry&quot;,
    &quot;impact&quot;: &quot;3&quot;,  # 1=High, 2=Medium, 3=Low (you can adjust accordingly)
    &quot;urgency&quot;: &quot;2&quot;,  # 1=High, 2=Medium, 3=Low (you can adjust accordingly)
    &quot;service&quot;: service_id  # The ID of the service being impacted
}

# Make the POST request to create the incident
response = requests.post(url, auth=(username, password), headers=headers, data=json.dumps(data))

# Check the response
if response.status_code == 201:
    print(f&quot;Incident created successfully! Incident Number: {response.json()[&apos;result&apos;][&apos;number&apos;]}&quot;)
else:
    print(f&quot;Failed to create incident. Status Code: {response.status_code}&quot;)
    print(f&quot;Response: {response.text}&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: ServiceNow provides the full documentation for its API once you sign up as a ServiceNow developer, which is free.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;p&gt;The script it provided was very helpful because it helped me understand:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The API I need to use is the Table API, and the table to target is called incident, hence the API call: POST /api/now/table/incident&lt;/li&gt;
&lt;li&gt;The (minimal) payload I need to pass in the API call&lt;/li&gt;
&lt;li&gt;The necessary headers&lt;/li&gt;
&lt;li&gt;The response code expected if it worked should be 201&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;From event to incident&lt;/h2&gt;
&lt;p&gt;Now that I understand how to create an incident, it would be good to think about what represents an incident within HPE GreenLake cloud. In my &lt;a href=&quot;https://developer.hpe.com/blog/getting-started-with-the-hpe-greenlake-cloud-eventing-framework/&quot;&gt;previous tutorial&lt;/a&gt;, I subscribed to audit log events (workspace loaded by user X, user Y logged out, user Z changed these settings, etc.) but you most likely would not want to open an incident for each of these. First, they are not really incidents, and second, you don’t want to flood your incident table with inappropriate data, as real incidents might get lost. For this use case, I decided to use another event that could be triggered by the &lt;strong&gt;Subscriptions Management&lt;/strong&gt; service in HPE GreenLake cloud when a subscription is expiring in 1, 30, 60, and 90 days. The event is called &lt;strong&gt;Expiring Subscriptions&lt;/strong&gt;. The full description of this event can be found on the &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/subscription-management/public/catalog/subscriptions-unified-events-glcp-v1/paths/Expiring%20Subscriptions/post/&quot;&gt;HPE GreenLake Developer Portal&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This can be considered an incident and could be added to the ServiceNow incident table.&lt;/p&gt;
&lt;p&gt;The payload sent to the webhook handler when this event is triggered by the Subscriptions Management service has the following format:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &quot;specversion&quot;: &quot;1.0&quot;,
  &quot;id&quot;: &quot;fb25f344-5e20-4f13-9937-c3ecc0327dc3&quot;,
  &quot;source&quot;: &quot;https://global.api.greenlake.hpe.com/subscriptions&quot;,
  &quot;type&quot;: &quot;com.hpe.greenlake.subscriptions.v1.expiring-subscriptions&quot;,
  &quot;datacontenttype&quot;: &quot;application/json&quot;,
  &quot;time&quot;: &quot;2024-10-30T21:09:16.127443806Z&quot;,
  &quot;data&quot;: {
  	&quot;key&quot;: &quot;OFKIAASTEST168IZ&quot;,
  	&quot;expirydate&quot;: &quot;10/25/2024&quot;,
  	&quot;sku&quot;: &quot;S0B80AAE&quot;,
  	&quot;licensetier&quot;: &quot;ENHANCED PROLIANT&quot;,
  	&quot;quantity&quot;: 500,
  	&quot;username&quot;: &quot;John Doe Inc.&quot;,
  	&quot;subscriptionendinsecs&quot;: 1729848600,
  	&quot;subscriptiontype&quot;: &quot;Compute Subscription&quot;,
  	&quot;producttype&quot;: &quot;DEVICE&quot;,
  	&quot;platformcustomerid&quot;: &quot;22e7e552184511ef88e9cacb36fa7032&quot;,
  	&quot;devices&quot;: []
	}
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now that I know what my webhook handler will receive as input and how I can open an incident in ServiceNow, I can get started.&lt;/p&gt;
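&lt;p&gt;Before moving on to the no-code implementation, here is a minimal sketch of what an equivalent code-based webhook handler could look like, combining the Expiring Subscriptions payload above with the ServiceNow Table API call from the earlier script. The route path, instance URL, and field mapping are my own assumptions for illustration; the HPE GreenLake verification handshake is only stubbed out here and is covered in my previous tutorial.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

SNOW_INSTANCE = &quot;https://your-instance.service-now.com&quot;
SNOW_USER = &quot;your-username&quot;
SNOW_PASSWORD = &quot;your-password&quot;

@app.route(&quot;/greenlake-events&quot;, methods=[&quot;POST&quot;])
def handle_event():
    event = request.get_json(force=True)

    # The registration handshake with HPE GreenLake cloud is not shown here;
    # see the previous tutorial for how the shared secret is validated.

    if event.get(&quot;type&quot;) == &quot;com.hpe.greenlake.subscriptions.v1.expiring-subscriptions&quot;:
        data = event[&quot;data&quot;]
        incident = {
            &quot;short_description&quot;: f&quot;{data[&apos;subscriptiontype&apos;]} subscription is expiring soon&quot;,
            &quot;description&quot;: f&quot;License {data[&apos;licensetier&apos;]} (key {data[&apos;key&apos;]}) expires on {data[&apos;expirydate&apos;]}&quot;,
            &quot;category&quot;: &quot;Inquiry&quot;,
            &quot;impact&quot;: &quot;2&quot;,
            &quot;urgency&quot;: &quot;2&quot;,
        }
        response = requests.post(
            f&quot;{SNOW_INSTANCE}/api/now/table/incident&quot;,
            auth=(SNOW_USER, SNOW_PASSWORD),
            headers={&quot;Content-Type&quot;: &quot;application/json&quot;, &quot;Accept&quot;: &quot;application/json&quot;},
            json=incident,
            timeout=30,
        )
        response.raise_for_status()

    # Return a 200 status so HPE GreenLake cloud knows the event was received.
    return jsonify({&quot;status&quot;: &quot;received&quot;}), 200
&lt;/code&gt;&lt;/pre&gt;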
&lt;h2&gt;From code to no-code&lt;/h2&gt;
&lt;p&gt;Let me show you how to translate these learnings in Make.com. In my previous &lt;a href=&quot;https://developer.hpe.com/blog/getting-started-with-the-hpe-greenlake-cloud-eventing-framework/&quot;&gt;article&lt;/a&gt;, I created a scenario in Make.com that looked like this:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/second-route-taken.jpg&quot; alt=&quot;Original scenario in make &quot; title=&quot;Original scenario in make &quot;&gt;&lt;/p&gt;
&lt;p&gt;The topmost branch was used to handle the initial security handshake. I will keep that intact. If you need details about this branch, check out my &lt;a href=&quot;https://developer.hpe.com/blog/getting-started-with-the-hpe-greenlake-cloud-eventing-framework/&quot;&gt;previous tutorial&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;What I will do now is duplicate this scenario, delete the Google Sheets module, and replace it with another one called &lt;strong&gt;HTTP Make a request&lt;/strong&gt;. Then I will link the new module to the Webhook response.&lt;/p&gt;
&lt;p&gt;The HTTP Make a request module allows you to make any type of REST API call. It comes in very handy for integration use cases.&lt;/p&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Make.com also provides a native ServiceNow integration, but you need an Enterprise plan to use it, which I don’t have.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;p&gt;My scenario looks like this:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/webhook-blog-servicenow-picture-2.jpg&quot; alt=&quot;New scenario in make&quot; title=&quot;New scenario in make&quot;&gt;&lt;/p&gt;
&lt;p&gt;My new scenario includes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;Custom webhook&lt;/strong&gt; module as a trigger of your webhook scenario with &lt;em&gt;advanced option&lt;/em&gt; enabled&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Set variable&lt;/strong&gt; module and the &lt;strong&gt;Webhook response&lt;/strong&gt; module to validate the verification challenge and ensure the webhook communication between HPE GreenLake cloud and the webhook handler is authenticated&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;Router&lt;/strong&gt; module in the scenario to branch your flow into several routes and process the data within each route differently, based on the type of event received&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;HTTP Make a request&lt;/strong&gt; module connected to the bottom branch of the Router module along with the &lt;strong&gt;Webhook response&lt;/strong&gt; module connected to the HTTP Make a request module to create the incident in ServiceNow and return a 200 status to HPE GreenLake cloud&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For more information, you can refer to the chapter &lt;em&gt;“Getting started with Make”&lt;/em&gt; in my first &lt;a href=&quot;https://developer.hpe.com/blog/getting-started-with-the-hpe-greenlake-cloud-eventing-framework/&quot;&gt;blog post&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Next, I’ll dive into the properties of the &lt;strong&gt;HTTP Make a request&lt;/strong&gt; module that needs to be configured based on the details collected from the Python script:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The URL endpoint of my ServiceNow instance: &lt;em&gt;https://{your-instance}/api/now/table/incident&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;Method: &lt;em&gt;POST&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;Two custom headers for Accept and Content-Type both set to &lt;em&gt;application/json&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;Body type: &lt;em&gt;Raw&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/webhook-blog-servicenow-picture-3.jpg&quot; alt=&quot;Setting up HTTP Make a request properties - part 1&quot; title=&quot;Setting up HTTP Make a request properties - part 1&quot;&gt;&lt;/p&gt;
&lt;p&gt;To set up the JSON payload as shown below, I first need to configure the first step of my webhook scenario to use the JSON payload of the &lt;strong&gt;Expiring Subscriptions&lt;/strong&gt; event described earlier.&lt;/p&gt;
&lt;p&gt;I then need to set up the JSON payload (&lt;strong&gt;Request content&lt;/strong&gt; as shown below) with ServiceNow information such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The &lt;em&gt;short description&lt;/em&gt; (for example, the subscription type is expiring soon)&lt;/li&gt;
&lt;li&gt;The &lt;em&gt;description&lt;/em&gt; (for example, License &lt;license type&gt; expires on &lt;date&gt;)&lt;/li&gt;
&lt;li&gt;The &lt;em&gt;category&lt;/em&gt;, and the &lt;em&gt;subcategory&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;The level of &lt;em&gt;urgency&lt;/em&gt; (1=high, 2=medium, 3=low) and the level of the &lt;em&gt;impact&lt;/em&gt; (1=high, 2=medium, 3=low) which will be used to compute &lt;em&gt;priority&lt;/em&gt; in ServiceNow&lt;/li&gt;
&lt;li&gt;The &lt;em&gt;caller_id&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;The &lt;em&gt;service&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Once this is in place, I can build the Request content and drag/drop items from the JSON input payload (shown in red) into the Request content of the HTTP call as shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/webhook-blog-servicenow-picture-4.jpeg&quot; alt=&quot;Setting up HTTP Make a request properties - part 2&quot; title=&quot;Setting up HTTP Make a request properties - part 2&quot;&gt;&lt;/p&gt;
&lt;p&gt;It’s important not to forget to specify the username and password obtained when creating the ServiceNow personal developer instance.&lt;/p&gt;
&lt;h2&gt;Putting it all together&lt;/h2&gt;
&lt;p&gt;Voila! Now the Webhook scenario is in place. I need to subscribe to the event with HPE GreenLake cloud. As noted in my first &lt;a href=&quot;https://developer.hpe.com/blog/getting-started-with-the-hpe-greenlake-cloud-eventing-framework/&quot;&gt;blog&lt;/a&gt; post, I can apply the same logic, except that this time I will subscribe to &lt;strong&gt;Expiring Subscriptions&lt;/strong&gt; event as shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/webhook-blog-servicenow-picture-5.jpg&quot; alt=&quot;Subscribing to expiring subscriptions&quot; title=&quot;Subscribing to expiring subscriptions&quot;&gt;&lt;/p&gt;
&lt;p&gt;If everything goes well, I should see the following subscribed event configuration:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/webhook-blog-servicenow-picture-6.jpg&quot; alt=&quot;Webhook armed&quot; title=&quot;Webhook armed&quot;&gt;&lt;/p&gt;
&lt;p&gt;First, let’s take a look at the open incidents page of the ServiceNow console:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/webhook-blog-servicenow-picture-7.jpg&quot; alt=&quot;ServiceNow console before event received&quot; title=&quot;ServiceNow console before event received&quot;&gt;&lt;/p&gt;
&lt;p&gt;Once a subscription reaches an expiration threshold, HPE GreenLake cloud will send an event to my webhook handler, which will, in turn, open a new incident to track it in ServiceNow, as shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/webhook-blog-servicenow-picture-8.jpg&quot; alt=&quot;ServiceNow console after event received&quot; title=&quot;ServiceNow console after event received&quot;&gt;&lt;/p&gt;
&lt;p&gt;The new incident, visible at the top of the list above, was automatically opened by the webhook handler. I can open it to check out the details that were set up by the webhook handler:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/webhook-blog-servicenow-picture-9.jpg&quot; alt=&quot;Detail of incident created&quot; title=&quot;Detail of incident created&quot;&gt;&lt;/p&gt;
&lt;p&gt;This HPE GreenLake cloud event is now an incident in ServiceNow. It will continue its lifecycle until its closure in ServiceNow.&lt;/p&gt;
&lt;h2&gt;Call to action&lt;/h2&gt;
&lt;p&gt;Webhooks together with the HPE GreenLake cloud events framework provide a great way to integrate with HPE GreenLake cloud using modern technology. They provide great flexibility on where you can run the subscriber code and allow you to choose the language in which to write the webhook code. They also allow you to build a tight integration with an existing platform such as ServiceNow or HPE OpsRamp.&lt;/p&gt;
&lt;p&gt;Additional benefits I can see from this technique are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;No need to verify that a polling code is still running&lt;/li&gt;
&lt;li&gt;No risk of API token expiration&lt;/li&gt;
&lt;li&gt;No loss of events&lt;/li&gt;
&lt;li&gt;It’s a well-established industry standard mechanism&lt;/li&gt;
&lt;li&gt;There is a huge choice of implementation methods including low code/no code ones&lt;/li&gt;
&lt;/ul&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: While a low-code/no-code approach was taken in this tutorial, the same logic applies to any other programming language selected.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;p&gt;You can request access to this feature by signing up to the Automations/Webhook Access Beta Program &lt;a href=&quot;https://app.smartsheet.com/b/form/0e61e8c2bd6d48c7829845ab824c11d6&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Using structured outputs in vLLM]]></title><description><![CDATA[Generating predictable and reliable outputs from large language models (LLMs) can be challenging, especially when those outputs need to…]]></description><link>https://developer.hpe.com/using-structured-outputs-in-vllm/</link><guid isPermaLink="false">https://developer.hpe.com/using-structured-outputs-in-vllm/</guid><pubDate>Sun, 16 Mar 2025 19:28:00 GMT</pubDate><content:encoded>&lt;style&gt; li { font-size: 27px; line-height: 33px; max-width: none; } &lt;/style&gt;
&lt;p&gt;Generating predictable and reliable outputs from large language models (LLMs) can be challenging, especially when those outputs need to integrate seamlessly with downstream systems. Structured outputs solve this problem by enforcing specific formats, such as JSON, regex patterns, or even formal grammars. vLLM, an open source inference and serving engine for LLMs, has supported structured outputs for a while. However, there is little documentation on how to use it. This is why I decided to contribute and write the &lt;a href=&quot;https://docs.vllm.ai/en/latest/usage/structured_outputs.html&quot;&gt;Structured Outputs documentation page&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;In this blog post, I&apos;ll explain how structured outputs work in vLLM and walk you through how to use them effectively.&lt;/p&gt;
&lt;h2&gt;Why structured outputs?&lt;/h2&gt;
&lt;p&gt;LLMs are incredibly powerful, but their outputs can be inconsistent when a specific format is required. Structured outputs address this issue by restricting the model’s generated text to adhere to predefined rules or formats, ensuring:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Reliability:&lt;/strong&gt; Outputs are predictable and machine-readable.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Compatibility:&lt;/strong&gt; Seamless integration with APIs, databases, or other systems.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Efficiency:&lt;/strong&gt; No need for extensive post-processing to validate or fix outputs.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Imagine there is an external system that receives a JSON object with all the details needed to trigger an alert, and you want your LLM-based system to be able to use it. Of course, you could try to explain to the LLM what the output format should be and that it must be a valid JSON object, but LLMs are not deterministic, so you may still end up with invalid JSON. If you have tried something like this before, you have probably found yourself in exactly this situation.&lt;/p&gt;
&lt;p&gt;How do these tools work? The idea behind them is to filter the list of possible next tokens at each decoding step, so that only tokens that keep the output in the desired format (for example, a valid JSON object) can be generated.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/structured_outputs_thumbnail.png&quot; alt=&quot;Structured outputs in vLLM&quot; title=&quot;Structured outputs in vLLM&quot;&gt;&lt;/p&gt;
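&lt;p&gt;To build intuition for this token-filtering idea, here is a deliberately tiny, self-contained Python toy (not vLLM’s actual implementation, which relies on dedicated guided-decoding libraries) in which a pretend model that would rather chat is forced, step by step, to emit one of two valid JSON strings because every other token is masked out:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Toy illustration of constrained decoding: at every step, tokens that
# cannot lead to a valid completion are masked out before the next pick.
TARGETS = [&apos;{&quot;alert&quot;:true}&apos;, &apos;{&quot;alert&quot;:false}&apos;]
VOCAB = [&quot;hello&quot;, &quot;!&quot;, &quot;{&quot;, &apos;&quot;alert&quot;&apos;, &quot;:&quot;, &quot;true&quot;, &quot;false&quot;, &quot;}&quot;]

def allowed(prefix, token):
    &quot;&quot;&quot;Keep a token only if prefix + token is still a prefix of some target.&quot;&quot;&quot;
    candidate = prefix + token
    return any(target.startswith(candidate) for target in TARGETS)

def fake_model_scores(vocab):
    &quot;&quot;&quot;Stand-in for model logits: this &apos;model&apos; prefers chatting over JSON.&quot;&quot;&quot;
    return {token: -index for index, token in enumerate(vocab)}

output = &quot;&quot;
while output not in TARGETS:
    scores = fake_model_scores(VOCAB)
    # Mask: drop every token that would break the required format.
    valid = [token for token in VOCAB if allowed(output, token)]
    # Greedy pick among the remaining valid tokens.
    output += max(valid, key=lambda token: scores[token])

print(output)  # {&quot;alert&quot;:true}
&lt;/code&gt;&lt;/pre&gt;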
&lt;h2&gt;What is vLLM?&lt;/h2&gt;
&lt;p&gt;vLLM is a state-of-the-art, open-source inference and serving engine for LLMs. It’s built for performance and simplicity, offering:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;PagedAttention:&lt;/strong&gt; An innovative memory management mechanism for efficient attention key-value handling.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Continuous batching:&lt;/strong&gt; Supports concurrent requests dynamically.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Advanced optimizations:&lt;/strong&gt; Includes features like quantization, speculative decoding, and CUDA graphs.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These optimizations make vLLM one of the fastest and most versatile engines for production environments.&lt;/p&gt;
&lt;h2&gt;Structured outputs on vLLM&lt;/h2&gt;
&lt;p&gt;vLLM extends the OpenAI API with additional parameters to enable structured outputs. These include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;guided_choice:&lt;/strong&gt; Restricts output to a set of predefined choices.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;guided_regex:&lt;/strong&gt; Ensures outputs match a given regex pattern.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;guided_json:&lt;/strong&gt; Validates outputs against a JSON schema.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;guided_grammar:&lt;/strong&gt; Enforces structure using context-free grammars.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Here’s how each works, along with example outputs:&lt;/p&gt;
&lt;h3&gt;&lt;strong&gt;1. Guided choice&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Guided choice is the simplest form of structured output. It ensures the response is one of a set of predefined options.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from openai import OpenAI

client = OpenAI(base_url=&quot;http://localhost:8000/v1&quot;, api_key=&quot;-&quot;)

completion = client.chat.completions.create(
    model=&quot;Qwen/Qwen2.5-3B-Instruct&quot;,
    messages=[
        {&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: &quot;Classify this sentiment: vLLM is wonderful!&quot;}
    ],
    extra_body={&quot;guided_choice&quot;: [&quot;positive&quot;, &quot;negative&quot;]},
)
print(completion.choices[0].message.content)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Example output:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;positive
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;&lt;strong&gt;2. Guided Regex&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;A guided regex constrains the output to match a regex pattern, which is useful for formats like email addresses.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;completion = client.chat.completions.create(
    model=&quot;Qwen/Qwen2.5-3B-Instruct&quot;,
    messages=[
        {
            &quot;role&quot;: &quot;user&quot;,
            &quot;content&quot;: &quot;Generate an example email address for Alan Turing at Enigma. End in .com.&quot;,
        }
    ],
    extra_body={&quot;guided_regex&quot;: r&quot;\w+@\w+\.com\n&quot;, &quot;stop&quot;: [&quot;\n&quot;]},
)
print(completion.choices[0].message.content)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Example output:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;alan.turing@enigma.com
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;&lt;strong&gt;3. Guided JSON&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Guided JSON enforces a valid JSON format based on a schema, simplifying integration with other systems.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from pydantic import BaseModel
from enum import Enum

class CarType(str, Enum):
    sedan = &quot;sedan&quot;
    suv = &quot;SUV&quot;
    truck = &quot;Truck&quot;
    coupe = &quot;Coupe&quot;

class CarDescription(BaseModel):
    brand: str
    model: str
    car_type: CarType

json_schema = CarDescription.model_json_schema()

completion = client.chat.completions.create(
    model=&quot;Qwen/Qwen2.5-3B-Instruct&quot;,
    messages=[
        {&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: &quot;Generate a JSON for the most iconic car from the 90s.&quot;}
    ],
    extra_body={&quot;guided_json&quot;: json_schema},
)
print(completion.choices[0].message.content)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Example output:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &quot;brand&quot;: &quot;Toyota&quot;,
  &quot;model&quot;: &quot;Supra&quot;,
  &quot;car_type&quot;: &quot;coupe&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;&lt;strong&gt;4. Guided grammar&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Guided grammar uses an Extended Backus-Naur Form (EBNF) grammar syntax to define complex output structures, such as SQL queries.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;completion = client.chat.completions.create(
    model=&quot;Qwen/Qwen2.5-3B-Instruct&quot;,
    messages=[
        {&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: &quot;Generate a SQL query to find all users older than 30.&quot;}
    ],
    extra_body={
        &quot;guided_grammar&quot;: &quot;&quot;&quot;
        query ::= &quot;SELECT&quot; fields &quot;FROM users WHERE&quot; condition;
        fields ::= &quot;name, age&quot; | &quot;*&quot;;
        condition ::= &quot;age &gt;&quot; number;
        number ::= [0-9]+;
        &quot;&quot;&quot;
    },
)
print(completion.choices[0].message.content)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Example output:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-sql&quot;&gt;SELECT * FROM users WHERE age &gt; 30;
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;&lt;strong&gt;Next steps&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;To start integrating structured outputs into your projects:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Explore the documentation:&lt;/strong&gt; Check out the &lt;a href=&quot;https://docs.vllm.ai/en/latest/&quot;&gt;official documentation&lt;/a&gt; for more examples and detailed explanations.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Install vLLM locally:&lt;/strong&gt; Set up the inference server on your local machine using the &lt;a href=&quot;https://github.com/vllm-project/vllm&quot;&gt;vLLM GitHub repository&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Experiment with structured outputs:&lt;/strong&gt; Try out different formats (choice, regex, JSON, grammar) and observe how they can simplify your workflow.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Deploy in production:&lt;/strong&gt; Once comfortable, deploy vLLM to your production environment and integrate it with your applications.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Structured outputs make LLMs not only powerful but also practical for real-world applications. Dive in and see what you can build!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Management paradigms for virtual machines running on Kubernetes]]></title><description><![CDATA[With the rise of virtual machine containerization, it’s imperative to familiarize ourselves with the different aspects of performing VM…]]></description><link>https://developer.hpe.com/management-paradigms-for-virtual-machines-running-on-kubernetes/</link><guid isPermaLink="false">https://developer.hpe.com/management-paradigms-for-virtual-machines-running-on-kubernetes/</guid><pubDate>Wed, 12 Mar 2025 20:13:53 GMT</pubDate><content:encoded>&lt;style&gt; li { font-size: 27px; line-height: 33px; max-width: none; } &lt;/style&gt;
&lt;p&gt;With the rise of virtual machine containerization, it’s imperative to familiarize ourselves with the different aspects of performing VM management on Kubernetes. From crude CLIs, to declarative GitOps patterns, and further extending to lush UIs where your next VM is just a right-click away, having a handle on each of these disciplines is essential for Kubernetes VM management regardless of which role you&apos;re in.&lt;/p&gt;
&lt;p&gt;Whether you&apos;re a classic sysadmin, site reliability engineer (SRE) or have any kind of role in IT operations touching virtualization, the winds of change are catching up. Collectively we need to re-evaluate the VM estate, understand platform requirements for mission-critical applications and look for alternatives with the least amount of friction and resistance to ease migration.&lt;/p&gt;
&lt;p&gt;KubeVirt, an open source project governed by the Cloud-Native Computing Foundation (CNCF), is an add-on for Kubernetes that allows management of virtual machines alongside containers using a single API endpoint. A large chunk of the market is gravitating towards KubeVirt; whether its abstractions are disguised by a glossy frontend or deployed manually on existing Kubernetes clusters, KubeVirt needs to be considered for any new virtualization project.&lt;/p&gt;
&lt;p&gt;This blog post covers the basics of VM management on KubeVirt, walking through the most common patterns, from CLIs and graphical UIs to idempotent IT automation tools such as Ansible, to give you an idea of which tools and processes to adopt in your organization. Each interface has its strengths and weaknesses, but understanding how to operate them is fundamental for any VM management journey with KubeVirt.&lt;/p&gt;
&lt;p&gt;But first, let me give you a brief introduction to KubeVirt.&lt;/p&gt;
&lt;h1&gt;A KubeVirt crash course&lt;/h1&gt;
&lt;p&gt;KubeVirt provides abstractions to Kubernetes users for Linux Kernel Virtual Machines (KVM). KVM has been around for about two decades now with several successful commercial hypervisors built around the implementation and is, at this point, considered mature.&lt;/p&gt;
&lt;p&gt;KubeVirt itself does not have a user interface that most VM administrators are used to. The point of abstraction is through standard Kubernetes tools by manipulating API resources of different &lt;code&gt;Kinds&lt;/code&gt; provided by &lt;code&gt;CustomResourceDefinitions&lt;/code&gt; (CRDs).&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;CRDs&lt;/code&gt; allow users to manage VM resources through a set of KubeVirt’s controllers.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/kubevirt.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Deploying KubeVirt on upstream Kubernetes and other distributions is straightforward. The &lt;a href=&quot;https://kubevirt.io/user-guide/&quot;&gt;official documentation&lt;/a&gt; walks through the different distributions and the platform-specific quirks that need to be considered.&lt;/p&gt;
&lt;p&gt;The examples below use KubeVirt provided by the KubeVirt HyperConverged Cluster Operator installed on OKD, the community distribution of Kubernetes that powers Red Hat OpenShift.&lt;/p&gt;
&lt;p&gt;Most VM administrators connect VMs to existing networks that assign IP addresses and DNS names. Having the VM immediately reachable from your desktop computer or other already established infrastructure management tools makes the transition from legacy VM management platforms to KubeVirt much smoother.&lt;/p&gt;
&lt;p&gt;As a prerequisite for this exercise and examples, the following resources have been created prior:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;An SSH public key has been created on the cluster as a &lt;code&gt;Secret&lt;/code&gt; to be injected into the VM instance during initialization.&lt;/li&gt;
&lt;li&gt;A &lt;code&gt;NodeNetworkConfigurationPolicy&lt;/code&gt; using the Kubernetes NMState Operator that creates a bridge on a NIC connected to the data center management network.&lt;/li&gt;
&lt;li&gt;A &lt;code&gt;NetworkAttachmentDefinition&lt;/code&gt; in my VM instance &lt;code&gt;Namespace&lt;/code&gt; to connect virtual NICs to.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For the sake of completeness, this is what those resources look like:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;apiVersion: v1
kind: Namespace
metadata:
  name: hpe-vmi
---
apiVersion: v1
kind: Secret
metadata:
  name: desktop
  namespace: hpe-vmi
stringData:
  key: ssh-rsa &amp;#x3C;public key string&gt; you@yourdesktop
---
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br0-ens224
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: &quot;&quot;
  maxUnavailable: 3
  desiredState:
    interfaces:
      - name: br0
        type: linux-bridge
        state: up
        ipv4:
          enabled: false
        bridge:
          options:
            stp:
              enabled: false
          port:
            - name: ens224
---
apiVersion: &quot;k8s.cni.cncf.io/v1&quot;
kind: NetworkAttachmentDefinition
metadata:
  name: mgmt
  namespace: hpe-vmi
  annotations:
    k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br0
spec:
  config: |
    {
      &quot;cniVersion&quot;: &quot;0.3.1&quot;,
      &quot;name&quot;: &quot;mgmt&quot;,
      &quot;type&quot;: &quot;cnv-bridge&quot;,
      &quot;bridge&quot;: &quot;br0&quot;,
      &quot;ipam&quot;: {},
      &quot;macspoofchk&quot;: true,
      &quot;preserveDefaultVlan&quot;: false
    }
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Another essential prerequisite is that a &lt;code&gt;StorageClass&lt;/code&gt; exists on the cluster that supports KubeVirt. The examples below use the HPE CSI Driver for Kubernetes but that could be swapped out for any vendor or platform supporting the bare minimum requirements for KubeVirt (see the KubeVirt &lt;a href=&quot;https://kubevirt.io/user-guide/storage/clone_api/&quot;&gt;admin guide&lt;/a&gt; for details).&lt;/p&gt;
&lt;p&gt;Now that the environment is primed, let’s provision a VM and take KubeVirt for a spin.&lt;/p&gt;
&lt;h1&gt;The command line interface&lt;/h1&gt;
&lt;p&gt;It is entirely possible to use &lt;code&gt;kubectl&lt;/code&gt; out-of-the-box to deploy and manage VM resources. The &lt;code&gt;virtctl&lt;/code&gt; CLI offers a richer experience, with the ability to upload disk images, connect to the VM console, and manage power states more easily. Its most important task, though, is to render otherwise tedious manifests from just a few arguments when deploying new VMs.&lt;/p&gt;
&lt;p&gt;Installing &lt;code&gt;virtctl&lt;/code&gt; varies by platform and KubeVirt distribution. It’s advised at this point to have the same client and server version, which at the time of writing is 1.4.0. If using a Mac with Brew installed, it’s simply:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;brew install virtctl
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;First, we need to find out which &lt;code&gt;DataSources&lt;/code&gt; are available on the cluster. Building new &lt;code&gt;DataSources&lt;/code&gt; or importing new ones is out of scope for this blog post. VMs are cloned into new &lt;code&gt;PersistentVolumeClaims&lt;/code&gt; (PVCs) from &lt;code&gt;DataSources&lt;/code&gt;. List the existing &lt;code&gt;DataSources&lt;/code&gt; on the cluster:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get datasources -A
NAMESPACE            NAME             AGE
kubevirt-os-images   centos-stream8   13h
kubevirt-os-images   centos-stream9   13h
kubevirt-os-images   centos6          13h
kubevirt-os-images   centos7          13h
kubevirt-os-images   fedora           13h
kubevirt-os-images   opensuse         13h
kubevirt-os-images   rhel7            13h
kubevirt-os-images   rhel8            13h
kubevirt-os-images   rhel9            13h
kubevirt-os-images   ubuntu           13h
kubevirt-os-images   win10            13h
kubevirt-os-images   win11            13h
kubevirt-os-images   win2k16          13h
kubevirt-os-images   win2k19          13h
kubevirt-os-images   win2k22          13h 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Not all &lt;code&gt;DataSources&lt;/code&gt; are populated by default. On OKD, only “fedora” and “centos-stream9” are available. This can be checked by examining the &lt;code&gt;DataImportCrons&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get dataimportcrons -A
NAMESPACE            NAME                        FORMAT
kubevirt-os-images   centos-stream9-image-cron   pvc
kubevirt-os-images   fedora-image-cron           pvc
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Let’s create a Fedora VM, assign the SSH public key and connect it to the management LAN, but first create a manifest named “my-network.yaml” to describe the network we want to connect the VM to.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
            - bridge: {}
              model: virtio
              name: my-vnic-0
      networks:
        - multus:
            networkName: mgmt
          name: my-vnic-0
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, create the VM and attach it to the network:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;virtctl create vm --name my-vm-0 \
  --access-cred type:ssh,src:desktop,user:fedora \
  --volume-import=type:ds,src:kubevirt-os-images/fedora,size:64Gi \
| kubectl create -n hpe-vmi -f- &amp;#x26;&amp;#x26; \
kubectl patch vm/my-vm-0 -n hpe-vmi --type=merge --patch-file my-network.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Monitor the progress of the VM:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get vm -n hpe-vmi -w
NAME      AGE   STATUS         READY
my-vm-0   13s   Provisioning   False
my-vm-0   29s   Starting       False
my-vm-0   42s   Running        False
my-vm-0   42s   Running        True
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once the VM is running, it’s possible to log in with the SSH identity and the hostname given to the VM (assuming DHCP registers the hostname in DNS on the management network).&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ ssh fedora@my-vm-0
[fedora@my-vm-0 ~]$
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;So, what does the VM instance actually look like? Let’s install some tools and inspect.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ sudo dnf install -yq fastfetch virt-what
$ sudo virt-what
redhat
kvm
$ fastfetch --pipe --localip-show-ipv4 false
             .&apos;,;::::;,&apos;.                 fedora@my-vm-0
         .&apos;;:cccccccccccc:;,.             --------------
      .;cccccccccccccccccccccc;.          OS: Fedora Linux 41 (Cloud Edition) x86_64
    .:cccccccccccccccccccccccccc:.        Host: KubeVirt (RHEL-9.4.0 PC (Q35 + ICH9, 2009))
  .;ccccccccccccc;.:dddl:.;ccccccc;.      Kernel: Linux 6.11.4-301.fc41.x86_64
 .:ccccccccccccc;OWMKOOXMWd;ccccccc:.     Uptime: 7 mins
.:ccccccccccccc;KMMc;cc;xMMc;ccccccc:.    Packages: 550 (rpm)
,cccccccccccccc;MMM.;cc;;WW:;cccccccc,    Shell: bash 5.2.32
:cccccccccccccc;MMM.;cccccccccccccccc:    Terminal: /dev/pts/0
:ccccccc;oxOOOo;MMM000k.;cccccccccccc:    CPU: Intel Core (Haswell, no TSX, IBRS) @ 2.60 GHz
cccccc;0MMKxdd:;MMMkddc.;cccccccccccc;    GPU: Unknown Device 1111 (VGA compatible)
ccccc;XMO&apos;;cccc;MMM.;cccccccccccccccc&apos;    Memory: 435.27 MiB / 3.80 GiB (11%)
ccccc;MMo;ccccc;MMW.;ccccccccccccccc;     Swap: 0 B / 3.80 GiB (0%)
ccccc;0MNc.ccc.xMMd;ccccccccccccccc;      Disk (/): 805.66 MiB / 62.92 GiB (1%) - btrfs
cccccc;dNMWXXXWM0:;cccccccccccccc:,       Locale: en_US.UTF-8
cccccccc;.:odl:.;cccccccccccccc:,.
ccccccccccccccccccccccccccccc:&apos;.
:ccccccccccccccccccccccc:;,..
 &apos;:cccccccccccccccc::;,.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Except for the “Host” hint, this looks like any VM instance on a KVM hypervisor.&lt;/p&gt;
&lt;p&gt;With &lt;code&gt;virtctl&lt;/code&gt; it’s possible to live migrate, pause/unpause, stop/start and restart the VM. Deleting the VM requires &lt;code&gt;kubectl&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;kubectl delete -n hpe-vmi vm/my-vm-0
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will remove all resources created with &lt;code&gt;virtctl&lt;/code&gt;, including &lt;code&gt;PVCs&lt;/code&gt;.&lt;/p&gt;
&lt;h1&gt;User experience with web user interfaces&lt;/h1&gt;
&lt;p&gt;KubeVirt does not have an official graphical user interface. That is a high barrier for new users who are familiar with legacy VM management solutions, where everything is a right-click away and structured in an intuitive manner. In a way, the KubeVirt project assumes that the user has a fundamental knowledge of KVM and can get by managing Kubernetes resources through the CLI.&lt;/p&gt;
&lt;p&gt;Fortunately, there are KubeVirt implementations that heavily focus on a graphical user experience and provide a great way to learn and explore the capabilities, very similar to legacy hypervisors.&lt;/p&gt;
&lt;p&gt;Let&apos;s take a closer look at OKD, the upstream Kubernetes distribution of OpenShift, and Harvester, a Hyper-Converged Infrastructure (HCI) solution built for VMs on KubeVirt with striking simplicity.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2025-03-05-at-12.18.46-pm.png&quot; alt=&quot;OKD Virtualization landing page.&quot; title=&quot;OKD Virtualization landing page.&quot;&gt;&lt;/p&gt;
&lt;p&gt;OKD is the upstream open source project of Red Hat OpenShift. Enabling virtualization is a two-click operation, and the platform is considered the gold standard for managing VMs and containers with a unified control plane. KubeVirt has been part of OKD and OpenShift since 2020.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2025-03-05-at-12.48.03-pm.png&quot; alt=&quot;Harvester landing page&quot; title=&quot;Harvester landing page&quot;&gt;&lt;/p&gt;
&lt;p&gt;Harvester is an open source HCI solution primarily focused on running a highly opinionated stack of software and tools on Kubernetes designed solely for running VMs. Harvester can be consumed by Rancher to allow Rancher to deploy and manage Kubernetes clusters on Harvester in a symbiotic relationship.&lt;/p&gt;
&lt;p&gt;Walking through the UIs is out of scope for this blog post, but the same outcomes can be accomplished in a few clicks, similar to using the CLI with &lt;code&gt;virtctl&lt;/code&gt; and &lt;code&gt;kubectl&lt;/code&gt;.&lt;/p&gt;
&lt;h1&gt;Ansible&lt;/h1&gt;
&lt;p&gt;Using CLIs and graphical UIs is great for exploratory administration and one-offs, but they’re usually tedious and error prone when it comes to repeating the same set of tasks indefinitely. This is where Ansible comes in. Its idempotent and declarative interfaces distill very complex tasks across multiple layers of infrastructure, giving you full control all the way up to deploying the application. This kind of IT automation lends itself to GitOps and self-service patterns in large-scale environments: write once, then delegate and reuse with ease, like cookie-cutter templates.&lt;/p&gt;
&lt;p&gt;Ansible has historically been well integrated with other KVM-based hypervisors such as oVirt/RHEV and provides VM management at scale quite elegantly.&lt;/p&gt;
&lt;p&gt;Ansible can be installed on your desktop computer in a multitude of ways, which will not be covered in this blog post. Once Ansible is in place, install the KubeVirt collection:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;ansible-galaxy collection install kubevirt.core
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;There are a couple of distinct patterns for managing cloud compute instances (VMs on KubeVirt in this case) with Ansible.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Declaratively CRUD (Create, Read, Update, Delete) the instances from a pre-rendered inventory, preferably templatized with Ansible and idempotent with the desired parameters. Manage the OS and apps with playbooks using the rendered inventory.&lt;/li&gt;
&lt;li&gt;Imperatively CRUD the instances with some other tooling, either from the cloud provider directly or idempotently with something like OpenTofu. Employ dynamic inventory plugins to manage the OS and apps inside the instances.&lt;/li&gt;
&lt;li&gt;Imperatively CRUD the instances with Ansible playbooks and using a dynamic inventory plugin to manage OS and apps.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For the sake of simplicity and clarity the examples will imperatively CRUD the instances and showcase the dynamic inventory plugin with KubeVirt. In a production scenario where collaboration among engineers is required, the first option is the more elegant choice.&lt;/p&gt;
&lt;p&gt;Create a playbook named “create_vm.yaml” or similar.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
- hosts: localhost
  connection: local
  tasks:
  - name: Ensure VM name
    assert:
      that: vm is defined
  - name: Create a VM
    kubevirt.core.kubevirt_vm:
      state: present
      name: &quot;{{ vm }}&quot;
      namespace: hpe-vmi
      labels:
        app: my-example-label
      instancetype:
        name: u1.medium
      preference:
        name: fedora
      data_volume_templates:
        - metadata:
            name: &quot;{{ vm }}-0&quot;
          spec:
            sourceRef:
              kind: DataSource
              name: fedora
              namespace: kubevirt-os-images
            storage:
              resources:
                requests:
                  storage: 64Gi
      spec:
        domain:
          devices:
            interfaces:
            - name: mgmt
              bridge: {}
        networks:
        - name: mgmt
          multus:
            networkName: mgmt
        accessCredentials:
        - sshPublicKey:
            propagationMethod:
              qemuGuestAgent:
                users:
                - fedora
            source:
              secret:
                secretName: desktop
        volumes:
        - cloudInitConfigDrive:
            userData: |-
              #cloud-config
              # The default username is: fedora
              runcmd:
                - [ setsebool, -P, &apos;virt_qemu_ga_manage_ssh&apos;, &apos;on&apos; ]
          name: cloudinitdisk
        - dataVolume:
            name: &quot;{{ vm }}-0&quot;
          name: &quot;{{ vm }}-0&quot;
      wait: yes
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Many attributes have been hardcoded in this example, but it illustrates how similar the manifest is to what &lt;code&gt;virtctl&lt;/code&gt; outputs based on the parameters provided.&lt;/p&gt;
&lt;p&gt;Use the playbook to create a VM:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;ansible-playbook -e vm=my-vm-0 create_vm.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It takes a minute or so for the VM to come up. When the prompt comes back, create a file named “hosts.kubevirt.yaml” (the “kubevirt.yaml” part of the filename is mandatory):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;plugin: kubevirt.core.kubevirt
namespaces:
  - hpe-vmi
host_format: &quot;{name}&quot;
network_name: mgmt
label_selector: app=my-example-label
compose:
  ansible_user: &quot;&apos;fedora&apos;&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It’s now possible to use the KubeVirt inventory plugin to manage the OS and apps in the VM. Let’s see if it connects:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;ansible -i hosts.kubevirt.yaml -m ping my-vm-0
my-vm-0 | SUCCESS =&gt; {
    &quot;ansible_facts&quot;: {
        &quot;discovered_interpreter_python&quot;: &quot;/usr/bin/python3.13&quot;
    },
    &quot;changed&quot;: false,
    &quot;ping&quot;: &quot;pong&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;At this point it’s possible to manage the VM like any other host provisioned on any kind of server, hypervisor or cloud platform.&lt;/p&gt;
&lt;h1&gt;Summary&lt;/h1&gt;
&lt;p&gt;It doesn’t matter what your VM management workflow looks like; KubeVirt serves all the popular patterns. That said, current tools and processes will require an overhaul. Why not switch to idempotent VM management through GitOps while transitioning from your legacy hypervisor in the meantime? That&apos;s a topic for another day.&lt;/p&gt;
&lt;p&gt;Connect with the HPE Developer Community via &lt;a href=&quot;https://developer.hpe.com/slack-signup/&quot;&gt;Slack&lt;/a&gt; or sign up for the &lt;a href=&quot;https://developer.hpe.com/campaign/munch-and-learn/&quot;&gt;Munch &amp;#x26; Learn Technology Talks&lt;/a&gt; to immerse yourself in the latest breakthrough technologies from HPE, customers, and partners.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[OpsRamp OpsQL API: Fulfilling wishes for IT Ops management]]></title><description><![CDATA[In today's data centers, IT Ops managers face mounting challenges in managing and retrieving data efficiently. They've introduced so many…]]></description><link>https://developer.hpe.com/sivabala-opsql-opsramps-kalpavriksha-wish-fulfilling-tree/</link><guid isPermaLink="false">https://developer.hpe.com/sivabala-opsql-opsramps-kalpavriksha-wish-fulfilling-tree/</guid><pubDate>Mon, 10 Mar 2025 21:32:15 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;p&gt;In today&apos;s data centers, IT Ops managers face mounting challenges in managing and retrieving data efficiently. They&apos;ve introduced so many network, storage, and computing devices into their environment that those devices have become unmanageable. Performance issues keep cropping up, critical services go down, and they&apos;re left with headaches, higher costs, and poor performance. If only they had a Kalpavriksha, a magical wish-fulfilling tree that could answer their need to quickly search for any resource across their complex, sprawling data center, making all their dreams come true.&lt;/p&gt;
&lt;p&gt;With OpsRamp, a comprehensive IT operations management platform, they may very well get their wish. With its OpsRamp Query Language (OpsQL) and its powerful API, users can perform complex searches within the OpsRamp platform and retrieve specific data based on various attributes and conditions. For administrators who need to quickly access and manipulate data in order to maintain optimal system performance and resolve issues, OpsQL is a Kalpavriksha that truly grants wishes. In this blog post, I will explain the basics of OpsQL and how to use it.&lt;/p&gt;
&lt;h2&gt;What is OpsQL?&lt;/h2&gt;
&lt;p&gt;OpsQL is a flexible and powerful query language used to search objects within the OpsRamp platform. It allows users to retrieve specific data based on various attributes and conditions, making it essential for IT administrators who need to quickly access and manipulate data in order to maintain optimal system performance and resolve issues. IT Ops managers and users can run OpsQL queries in the intuitive OpsRamp UI or through the OpsQL API.&lt;/p&gt;
&lt;h2&gt;Basic syntax and structure&lt;/h2&gt;
&lt;p&gt;Let&apos;s say you want to make a query. The general syntax for an OpsQL query is very straightforward:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-PlainText&quot;&gt; &amp;#x3C;attribute&gt; &amp;#x3C;operator&gt; | &amp;#x3C;coperator&gt; &quot;&amp;#x3C;value&gt;&quot; [[&amp;#x3C;operator&gt; [&amp;#x3C;attribute&gt; | &amp;#x3C;coperator&gt; &quot;&amp;#x3C;value&gt;&quot;[)]] ... ]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can use logical operators  &lt;code&gt;AND&lt;/code&gt; and &lt;code&gt;OR&lt;/code&gt; to refine your search further.&lt;/p&gt;
&lt;h2&gt;Operators&lt;/h2&gt;
&lt;p&gt;The operator is the key part of a query: it relates the attribute to the value.
OpsQL supports a variety of operators to create precise queries.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Equality operators:&lt;/strong&gt; &lt;code&gt;=&lt;/code&gt;, &lt;code&gt;!=&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;String operators:&lt;/strong&gt; &lt;code&gt;CONTAINS&lt;/code&gt;, &lt;code&gt;STARTS WITH&lt;/code&gt;, &lt;code&gt;LIKE&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Logical operators:&lt;/strong&gt; &lt;code&gt;AND&lt;/code&gt;, &lt;code&gt;OR&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;For example, to find all resources with an agent installed and of type &quot;Windows,&quot; you would use:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-PlainText&quot;&gt;agentInstalled = &quot;true&quot; AND type = &quot;Windows&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Attributes&lt;/h2&gt;
&lt;p&gt;Attributes are different types of information available on an object. For instance, a resource might have attributes like &lt;code&gt;make&lt;/code&gt;, &lt;code&gt;ipAddress&lt;/code&gt;, and &lt;code&gt;agentInstalled&lt;/code&gt;, while an alert might have attributes like &lt;code&gt;priority&lt;/code&gt;, &lt;code&gt;currentState&lt;/code&gt;, and &lt;code&gt;createdTime&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;For more details, you can refer to the &lt;a href=&quot;https://docs.opsramp.com/platform-features/feature-guides/query-language-reference/query-language-ref/&quot;&gt;OpsRamp Query Language Reference Documentation.&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;More examples&lt;/h2&gt;
&lt;p&gt;Here are some OpsQL examples to search resources on the OpsRamp platform.&lt;/p&gt;
&lt;h3&gt;Search for resources that were discovered by an AWS integration&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-PlainText&quot;&gt;installedAppName = &quot;aws&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Search for resources tagged in AWS with the AWS tag OwnerName: SivaBalaSubramanian&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-PlainText&quot;&gt;installedAppName = &quot;aws&quot; AND tags.name = &quot;OwnerName&quot; AND tags.value = &quot;SivaBalaSubramanian&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Search for resources with alerts that have been open for the last 2 hours&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-PlainText&quot;&gt;createdTime &gt; &quot;-7200sec&quot;
createdTime &gt; &quot;-120min&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Search for open alerts that are critical&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-PlainText&quot;&gt;currentState = &quot;CRITICAL&quot; AND status = &quot;OPEN&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Search for open and critical alerts that have an open incident&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-PlainText&quot;&gt;currentState = &quot;CRITICAL&quot; AND status = &quot;OPEN&quot; AND incidentId IS NOT NULL
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Search for alerts that have been open for longer than 2 hours&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-PlainText&quot;&gt;createdTime &amp;#x3C; &quot;-7200sec&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Search for open and critical alerts on resources tagged with AWS tag “BU: bu-123”&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-PlainText&quot;&gt;currentState = &quot;CRITICAL&quot; AND status = &quot;OPEN&quot; AND resource.tags.name = &quot;BU&quot; AND resource.tags.value = &quot;bu-123&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;What is OpsQL API?&lt;/h2&gt;
&lt;p&gt;The OpsQL API is a powerful interface that allows users to execute OpsQL queries programmatically. It provides the flexibility to filter and search data within the OpsRamp platform, making it an indispensable tool for IT administrators and developers.
IT administrators can invoke the OpsQL API using tools such as Postman, cURL, or Python.&lt;/p&gt;
&lt;h4&gt;Key features&lt;/h4&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Comprehensive querying&lt;/strong&gt;: The OpsQL API supports a wide range of query operations, allowing users to filter data based on various attributes and conditions.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Flexibility&lt;/strong&gt;: Users can create complex queries using logical operators and a variety of comparison operators.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Integration&lt;/strong&gt;: The API can be integrated into custom applications, scripts, and workflows, enhancing automation and efficiency.&lt;/li&gt;
&lt;/ol&gt;
&lt;h4&gt;Basic syntax and structure&lt;/h4&gt;
&lt;p&gt;The general structure of an OpsQL API request involves specifying the tenant ID and the query payload.&lt;/p&gt;
&lt;p&gt;Here’s a basic example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;POST /opsql/api/v3/tenants/{tenantId}/queries
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The request body typically includes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;objectType&lt;/code&gt;: The type of object to query (e.g., resource, alert, ticket).&lt;/li&gt;
&lt;li&gt;&lt;code&gt;fields&lt;/code&gt;: The fields to retrieve.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;filterCriteria&lt;/code&gt;: The criteria to filter the objects.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Common use cases and code samples&lt;/h2&gt;
&lt;h3&gt;Filtering critical alerts&lt;/h3&gt;
&lt;p&gt;There are times that you would want to filter only critical alerts. This is how that would be done:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import requests
import json
accessToken=&apos;valid access token&apos;


url = &quot;https://server/opsql/api/v3/tenants/client_id/queries&quot;

payload = json.dumps({
  &quot;objectType&quot;: &quot;alert&quot;,
  &quot;fields&quot;: [
    &quot;id&quot;,
    &quot;clientId&quot;,
    &quot;component&quot;,
    &quot;currentState&quot;
  ],
  &quot;filterCriteria&quot;: &quot;currentState=critical&quot;
})
headers = {
  &apos;Accept&apos;: &apos;application/json&apos;,
  &apos;Content-Type&apos;: &apos;application/json&apos;,
  &apos;Authorization&apos;: accessToken
}

response = requests.request(&quot;POST&quot;, url, headers=headers, data=payload)

# The response body contains the matching alerts as JSON.
print(response.json())
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Saving OpsQL response as CSV file&lt;/h3&gt;
&lt;p&gt;There are times that you would want to save an OpsQL response as a CSV file, perhaps for further analysis. This is how that would be done:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Assumes url, headers, and payload are defined as in the previous example.
import csv
import json
import time

import requests

def invoke_opsql() -&gt; None:
    response = requests.request(&quot;POST&quot;, url, headers=headers, data=payload)
    timestr = time.strftime(&quot;%Y%m%d-%H%M%S&quot;)
    json_file_name = &quot;siva_aws_resources-&quot; + timestr + &quot;.json&quot;
    csv_file_name = &quot;siva_aws_resources-&quot; + timestr + &quot;.csv&quot;

    # Save the raw JSON response, then convert it to CSV.
    with open(json_file_name, &quot;wb&quot;) as file:
        file.write(response.content)

    json_to_csv(json_file_name, csv_file_name)

def json_to_csv(resources_json, file_csv) -&gt; None:
    with open(resources_json) as json_file:
        data = json.load(json_file)

    opsql_response = data[&apos;results&apos;]
    with open(file_csv, &apos;w&apos;, newline=&apos;&apos;) as data_file:
        csv_writer = csv.writer(data_file)
        count = 0

        for opsql_row in opsql_response:
            # Write the header row once, using the keys of the first result.
            if count == 0:
                csv_writer.writerow(opsql_row.keys())
                count += 1

            csv_writer.writerow(opsql_row.values())
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;The OpsQL API is a powerful tool that enhances the capabilities of the OpsRamp platform, providing users with the flexibility to perform complex queries and manage data efficiently. By leveraging the OpsQL API, IT administrators and developers can streamline their operations, improve data management, and enhance overall productivity. Thus, OpsQL grants the wishes of IT Ops managers, offering a single pane of glass and a unified search capability across their wide, complex IT estate.
For more details, you can refer to the &lt;a href=&quot;https://develop.opsramp.com/v3/api/opsql/tenantid-queries/&quot;&gt;OpsQL API Documentation&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;We know you love servers by Hewlett Packard Enterprise (HPE) for their security and ease of management. With the release of HPE iLO 7 on Gen12 servers, HPE has taken security to the next level. Unlike previous generations where Production was the default &lt;a href=&quot;https://servermanagementportal.ext.hpe.com/docs/redfishservices/ilos/supplementdocuments/securityservice/#ilo-security-state&quot; target=&quot;_blank&quot;&gt;security state&lt;/a&gt;, iLO 7 introduces the Secure Standard state by default, enhancing system security.&lt;/p&gt;
&lt;p&gt;Another major change is in the default login method - Virtual NIC (&lt;a href=&quot;https://servermanagementportal.ext.hpe.com/docs/redfishservices/ilos/supplementdocuments/vnic/#the-ilo-redfish-host-interface-virtual-nic&quot; target=&quot;_blank&quot;&gt;VNIC&lt;/a&gt;) replaces the &lt;a href=&quot;https://servermanagementportal.ext.hpe.com/docs/etc/glossaryterms/&quot; target=&quot;_blank&quot;&gt;CHIF&lt;/a&gt; interface, which was used in iLO 6 and earlier versions.&lt;/p&gt;
&lt;p&gt;In this article, I&apos;ll walk you through how to manage HPE iLO 7 using the iLOrest 6.0 tool via in-band access. We’ll start by installing &lt;a href=&quot;https://github.com/HewlettPackard/python-redfish-utility/releases/latest&quot; target=&quot;_blank&quot;&gt;iLOrest&lt;/a&gt; on different operating systems and then dive into logging into iLO 7. Since iLO 7 introduces a new &lt;a href=&quot;https://servermanagementportal.ext.hpe.com/docs/redfishservices/ilos/supplementdocuments/securityservice/#application-accounts&quot; target=&quot;_blank&quot;&gt;application account&lt;/a&gt; login method, I’ll also cover how iLOrest 6.0 fully supports this feature.&lt;/p&gt;
&lt;p&gt;Let’s get started! 🚀&lt;/p&gt;
&lt;h2&gt;Installation&lt;/h2&gt;
&lt;h3&gt;Linux:&lt;/h3&gt;
&lt;p&gt;On Linux, iLOrest can be installed as an RPM package. If you already have a previous version installed, you can upgrade it using the -Uvh option.
For a fresh installation, use:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;rpm -ivh ilorest-6.0.x86_64.rpm
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here is a screenshot of a successful installation.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/rpm_linux.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Application account creation during iLOrest installation&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;During the RPM installation of iLOrest 6.0, you might notice that the installer prompts for iLO credentials to create an Application account. While this step is optional, HPE strongly recommends creating the Application account during installation itself. Why?&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The Application account provides an additional method for in-band authentication with iLO 7, enhancing security and flexibility.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Once created, the Application account allows you to log in without needing traditional credentials every time.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you choose to skip this step, you can always create the Application account later using the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;ilorest appaccount create -u ilo-user -p password --self
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;By leveraging this new authentication method, managing iLO 7 becomes even more seamless!&lt;/p&gt;
&lt;h3&gt;Windows:&lt;/h3&gt;
&lt;p&gt;On Microsoft Windows, iLOrest is installed using an MSI package. During installation, a user interface will appear, prompting you to enter iLO credentials for the Application account creation.
Just like on Linux, this step is optional but recommended by HPE, as the Application account allows for in-band authentication with iLO 7 without requiring traditional credentials.
Here’s a screenshot of the Application account creation dialog box during installation:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/windows_msi.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;All other OSes:&lt;/h3&gt;
&lt;p&gt;On operating systems like Ubuntu and macOS, the iLOrest tool can be installed effortlessly via PyPI using the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;pip install ilorest
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;On VMware ESXi, it is installed using:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;esxcli software component apply -d ilorest-component.zip
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Creating an Application account&lt;/h4&gt;
&lt;p&gt;For these OSes, you can create an Application account using the following iLOrest command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;ilorest appaccount create -u ilo-user -p password --self
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Logging into iLO 7&lt;/h4&gt;
&lt;p&gt;Once the Application account is created, you can perform an in-band login with:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;ilorest login
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you prefer to log in without using an Application account, you can opt for credential-based login instead:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;ilorest login --no_app_account -u ilo-user -p password
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;In this guide, I have demonstrated how to install iLOrest 6.0 across different operating systems and leverage the new Application account login method introduced in iLO 7. Get started today! Download &lt;a href=&quot;https://github.com/HewlettPackard/python-redfish-utility/releases/latest&quot; target=&quot;_blank&quot;&gt;iLOrest&lt;/a&gt;, explore its exciting new features, and take full control of iLO 7 with ease.&lt;/p&gt;
&lt;p&gt;For more information on HPE iLO, along with some tips and tricks in working with it, make sure you check out the HPE Developer blog regularly.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Explore webhooks with HPE GreenLake cloud, HPE Private Cloud for AI, & the effects of the distributed enterprise on IT]]></title><link>https://developer.hpe.com/2025-march-04/</link><guid isPermaLink="false">https://developer.hpe.com/2025-march-04/</guid><pubDate>Tue, 04 Mar 2025 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[The Rise of the Distributed Enterprise]]></title><description><![CDATA[A major shift is underway in IT, and customers are leading the charge. Across industries — retail, logistics, healthcare, and manufacturing…]]></description><link>https://developer.hpe.com/the-rise-of-the-distributed-enterprise/</link><guid isPermaLink="false">https://developer.hpe.com/the-rise-of-the-distributed-enterprise/</guid><pubDate>Mon, 24 Feb 2025 18:31:55 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;p&gt;A major shift is underway in IT, and customers are leading the charge. Across industries — retail, logistics, healthcare, and manufacturing — businesses are moving away from centralized data centers and embracing a model that brings technology closer to where value is created. The distributed enterprise isn’t just an emerging trend; it’s a complete rethinking of IT strategy.&lt;/p&gt;
&lt;p&gt;In this blog series, I’ll explore why this shift matters, how it’s reshaping IT infrastructure, and what it means for organizations looking to stay ahead. The distributed model isn’t just about efficiency — it’s about unlocking new levels of agility, resilience, and innovation.&lt;/p&gt;
&lt;h2&gt;If the next slide has a cloud, this meeting is over!&lt;/h2&gt;
&lt;p&gt;In 1997, I found myself in the middle of a wide-area networking pitch to a key customer executive. My slides were packed with cloud diagrams, showing how devices would seamlessly connect. Five slides in, the executive stopped me cold.&lt;/p&gt;
&lt;p&gt;“If the next slide has a cloud, this meeting is over.”&lt;/p&gt;
&lt;p&gt;Without missing a beat, I smiled, shut my laptop, and said, “No problem. Let’s talk about what really matters. What are you trying to do?”&lt;/p&gt;
&lt;p&gt;Long before cloud computing was even a thing, that moment taught me a lesson that has stuck with me for decades: customers bring their own priorities, their own perspectives, and their own challenges to the table. What’s important to me as a vendor might be completely irrelevant to them. Their time is limited. Their goals are non-negotiable. And the most valuable insights don’t come from talking — they come from listening. And what they’re saying today may surprise you.&lt;/p&gt;
&lt;h2&gt;Fast forward to 2025: IT’s biggest challenge isn’t what you think&lt;/h2&gt;
&lt;p&gt;At Hewlett Packard Enterprise (HPE), we have some of the most forward-thinking customers in the world. What we’re hearing is that the biggest IT challenge today isn’t simply choosing between cloud, AI, or edge computing. It’s something deeper.&lt;/p&gt;
&lt;p&gt;Decades of IT decisions — investments in legacy architecture, vendor lock-in, and rigid operating models — have created an enormous weight that enterprises are struggling to escape. Many organizations find themselves not just burdened by technical debt, but by the outdated IT philosophies that shaped those decisions. Even modernization efforts, like shifting workloads to the cloud or navigating compliance challenges, have often resulted in fragmented, complex, and costly environments that are difficult to manage.&lt;/p&gt;
&lt;p&gt;It’s a frustrating paradox. The technologies meant to drive innovation are now causing bottlenecks. Instead of shaping bold, forward-looking strategies, many IT leaders are spending their time maintaining toolchains and platforms built for a different era.&lt;/p&gt;
&lt;p&gt;The solution isn’t another incremental upgrade or a slightly better cloud migration strategy. It requires something far more fundamental: a shift in mindset.&lt;/p&gt;
&lt;h2&gt;Breaking free: Clean-slate thinking&lt;/h2&gt;
&lt;p&gt;For years, IT has been shaped by a data center-first mindset. The enterprise computing model was built around large, centralized hubs that processed and stored everything, with branch locations, remote offices, and manufacturing sites acting as secondary nodes. That model worked well in an era when businesses were centralized, and network capacity was scarce.&lt;/p&gt;
&lt;p&gt;But today, enterprises operate as dynamic, distributed ecosystems. Retail chains, healthcare providers, and logistics companies don’t function as a single monolithic entity — they are sprawling, interconnected networks, each with unique operational needs and real-time data demands. IT should reflect that reality.&lt;/p&gt;
&lt;p&gt;Clean-slate thinking offers a different approach. Instead of trying to optimize an aging model, it asks: if we were starting from scratch today, how would we design IT for a distributed world? The answer isn’t about moving everything to the cloud or holding onto traditional infrastructure — it’s about rethinking IT as a flexible, location-aware, and real-time system that adapts to business needs rather than forcing businesses to adapt to IT limitations.&lt;/p&gt;
&lt;h2&gt;Cracks in the old model&lt;/h2&gt;
&lt;p&gt;Real-time applications are pushing IT to the edge — literally — and putting immense pressure on legacy IT infrastructure. AI-driven analytics, high-resolution cameras, and IoT sensors are generating massive amounts of data — data that must be processed instantly to be valuable.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/the-rise-of-the-distributed-enterprise-image-1.png&quot; alt=&quot;Real-time applications are pushing IT to the edge&quot; title=&quot;Real-time applications are pushing IT to the edge&quot;&gt;&lt;/p&gt;
&lt;p&gt;Consider modern quality assurance systems in manufacturing. A single AI-powered inspection system can generate &lt;strong&gt;100MB of data per second&lt;/strong&gt;, all of which needs to be analyzed immediately. Sending that data over a network to a distant data center introduces &lt;strong&gt;10 to 20 seconds of delay&lt;/strong&gt; — an eternity when a pass/fail decision must be made in under a second.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/the-rise-of-the-distributed-enterprise-image-2.png&quot; alt=&quot;Consider modern quality assurance systems in manufacturing&quot; title=&quot;Consider modern quality assurance systems in manufacturing&quot;&gt;&lt;/p&gt;
&lt;p&gt;Retailers face similar challenges. AI-driven theft detection systems rely on real-time video analysis. If video feeds must be sent to a centralized cloud or data center before an action can be taken, the system fails in its primary goal: preventing theft in the moment.&lt;/p&gt;
&lt;p&gt;For industries dependent on instant decision-making, the old model simply doesn’t work. A centralized processing approach turns real-time data into historical data before it can be acted upon. And in these cases, delayed insights are as good as no insights at all.&lt;/p&gt;
&lt;p&gt;Organizations that cling to a rigid, centralized model are putting themselves at a disadvantage. Those that embrace a distributed approach, where data is processed where it’s generated, will gain an operational edge.&lt;/p&gt;
&lt;h2&gt;Rethinking IT for 2025 and beyond&lt;/h2&gt;
&lt;p&gt;Enterprise IT wasn’t always built this way. In the early days of computing, a single &lt;strong&gt;central processing unit (CPU)&lt;/strong&gt; ran everything. Early mainframes were massive, expensive machines housed in specialized rooms, accessible to only a handful of people. Adjusted for 2025 dollars, running a single-CPU mainframe in the 1950s would cost upwards of &lt;strong&gt;$200,000 per month&lt;/strong&gt; — not including the space and personnel required to operate it.&lt;/p&gt;
&lt;p&gt;The world moved on. Computing became distributed. Processing power expanded beyond a single CPU to networks of interconnected systems. But despite these advances, enterprise IT remained largely centralized, tied to the idea that data must flow to a core processing hub before it can be useful.&lt;/p&gt;
&lt;p&gt;That mindset is outdated. Data is no longer just an asset to be stored and analyzed — it’s the lifeblood of a modern enterprise. And just like goods, services, and financial transactions, it needs to flow freely to be valuable.&lt;/p&gt;
&lt;p&gt;Shifting from a data center-centric approach to a distributed enterprise model requires more than just new technology. It demands a new way of thinking. Today’s use cases require a model where:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Instead of a few massive data centers, IT infrastructure is spread across a network of private clouds, edge locations, and intelligent local processing nodes, using spot instances, GPU-sharing, and dynamic workload placement to maximize efficiency.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Instead of relying on centralized security models and rigid VPNs, organizations embrace Zero Trust principles, where access is identity-based and data is protected at every level.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Instead of monolithic applications that require extensive backend processing, businesses build microservices that run dynamically in containers and serverless environments.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;And, instead of constantly pinging data back and forth to the cloud, AI and analytics run where data is generated, with distributed data fabrics and federated learning ensuring that insights are shared efficiently.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This is not just about efficiency. It’s about agility, resilience, and the ability to operate at the speed of business. Enterprises that distribute their compute, storage, and analytics capabilities will be positioned to make faster decisions, reduce costs, and unlock new opportunities for innovation.&lt;/p&gt;
&lt;h2&gt;What comes next?&lt;/h2&gt;
&lt;p&gt;The future of enterprise IT isn’t about holding on to outdated models—it’s about unlocking the full potential of real-time data where it’s actually needed. The distributed enterprise is the natural evolution of IT, and companies that embrace it will be at the forefront of the next wave of innovation.&lt;/p&gt;
&lt;p&gt;Watch for my next posts where I’ll delve into &lt;strong&gt;data gravity&lt;/strong&gt; — how it shapes IT strategy and why understanding the lifecycle of data is key to making the distributed enterprise a reality, as well as other aspects of this model.&lt;/p&gt;
&lt;p&gt;See you then.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Optimizing data processing with Apache Spark: Best practices and strategies]]></title><description><![CDATA[Big Data processing is at the core of modern analytics, and Apache Spark has emerged as a leading framework for handling large-scale data…]]></description><link>https://developer.hpe.com/optimizing-data-processing-with-apache-spark-best-practices-and-strategies/</link><guid isPermaLink="false">https://developer.hpe.com/optimizing-data-processing-with-apache-spark-best-practices-and-strategies/</guid><pubDate>Mon, 24 Feb 2025 13:39:56 GMT</pubDate><content:encoded>&lt;style&gt; li { font-size: 27px; line-height: 33px; max-width: none; } &lt;/style&gt;
&lt;p&gt;Big Data processing is at the core of modern analytics, and &lt;strong&gt;Apache Spark&lt;/strong&gt; has emerged as a leading framework for handling large-scale data workloads. However, optimizing Spark jobs for &lt;strong&gt;efficiency, performance, and scalability&lt;/strong&gt; remains a challenge for many data engineers. Traditional data processing systems struggle to keep up with the exponential growth of data, leading to issues like &lt;strong&gt;resource bottlenecks, slow execution, and increased complexity&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;This whitepaper explores &lt;strong&gt;best practices and optimization strategies&lt;/strong&gt; to enhance Spark’s performance, improve resource utilization, and ensure scalability. With &lt;strong&gt;data collection becoming cheaper and more widespread&lt;/strong&gt;, organizations must focus on extracting business value from massive datasets efficiently. &lt;strong&gt;Apache Spark was designed to solve some of the biggest challenges in Big Data&lt;/strong&gt;, enabling everything from basic data transformations to advanced machine learning and deep learning workloads.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Understanding Apache Spark&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Apache Spark, an open-source distributed data processing framework, addresses these challenges through its innovative architecture and in-memory computing capabilities, making it significantly faster than traditional data processing systems.&lt;/p&gt;
&lt;p&gt;Apache Spark was developed to address several limitations and challenges that were present in existing big data processing frameworks, such as Hadoop MapReduce. It supports multiple programming languages, including Python (PySpark), Scala, and Java, and is widely used in ETL, machine learning, and real-time streaming applications.&lt;/p&gt;
&lt;p&gt;Here are the key reasons why Spark came into existence and what sets it apart from other frameworks in the big data world:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;In-memory processing&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Iterative and interactive processing&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Ease of use&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Unified framework&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Resilient distributed datasets (RDDs)&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Lazy evaluation and DAG execution&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Interactive analytics&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Streaming&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Machine learning libraries&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Graph processing&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Advanced analytics&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Challenges in Spark optimization&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;While Spark is designed for speed and scalability, several challenges can impact its performance:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Inefficient data partitioning&lt;/strong&gt; - Poor partitioning can lead to data skew and uneven workload distribution.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;High shuffle costs&lt;/strong&gt; - Excessive shuffling of data can slow down performance.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Improper resource allocation&lt;/strong&gt; - Inefficient use of memory and CPU can cause bottlenecks.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Slow data reads and writes&lt;/strong&gt; - Suboptimal file formats and storage choices can degrade performance.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Poorly written code&lt;/strong&gt; - Unoptimized transformations and actions can increase execution time.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;No storage layer&lt;/strong&gt; – Spark does not have a built-in storage layer, so it relies on external storage systems for data persistence.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;Best practices for Spark optimization&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;To address these challenges, the following best practices should be adopted:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;1. Optimize data partitioning&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Use appropriate partitioning techniques based on data volume and usage patterns.&lt;/li&gt;
&lt;li&gt;Leverage bucketing and coalescing to manage partition sizes.&lt;/li&gt;
&lt;/ul&gt;
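&lt;p&gt;As a rough illustration of these points, the PySpark sketch below repartitions a dataset on a join key and coalesces a filtered result before writing it out. The paths and column names are hypothetical.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from pyspark.sql import SparkSession

spark = SparkSession.builder.appName(&quot;partitioning-example&quot;).getOrCreate()

# Hypothetical dataset; paths and column names are for illustration only
events = spark.read.parquet(&quot;/data/events&quot;)

# Repartition on a frequently used key so work is spread evenly across tasks
events = events.repartition(200, &quot;customer_id&quot;)

# After heavy filtering, coalesce to fewer partitions to avoid many tiny output files
recent = events.filter(&quot;event_date &gt;= &apos;2025-01-01&apos;&quot;).coalesce(50)
recent.write.mode(&quot;overwrite&quot;).parquet(&quot;/data/events_recent&quot;)
&lt;/code&gt;&lt;/pre&gt;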
&lt;p&gt;&lt;strong&gt;2. Reduce shuffle operations&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Avoid wide transformations like groupBy() and reduceByKey() when possible.&lt;/li&gt;
&lt;li&gt;Use broadcast joins for small datasets to minimize shuffling.&lt;/li&gt;
&lt;/ul&gt;
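&lt;p&gt;For example, a broadcast join ships the small table to every executor so the large table never has to be shuffled. Here is a minimal sketch; the table names and paths are hypothetical.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName(&quot;broadcast-join-example&quot;).getOrCreate()

orders = spark.read.parquet(&quot;/data/orders&quot;)        # large fact table (hypothetical)
countries = spark.read.parquet(&quot;/data/countries&quot;)  # small dimension table (hypothetical)

# Broadcasting the small side avoids shuffling the large side across the cluster
joined = orders.join(broadcast(countries), on=&quot;country_code&quot;, how=&quot;left&quot;)
joined.write.mode(&quot;overwrite&quot;).parquet(&quot;/data/orders_enriched&quot;)
&lt;/code&gt;&lt;/pre&gt;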
&lt;p&gt;&lt;strong&gt;3. Efficient memory management&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Tune Spark configurations like spark.executor.memory and spark.driver.memory.&lt;/li&gt;
&lt;li&gt;Optimize garbage collection settings for long-running jobs.&lt;/li&gt;
&lt;/ul&gt;
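&lt;p&gt;These settings can be passed to &lt;code&gt;spark-submit&lt;/code&gt; or set when the session is built, as in the sketch below. The values shown are placeholders; the right numbers depend entirely on your cluster and workload.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from pyspark.sql import SparkSession

# Placeholder sizing; tune these values to your cluster and workload
spark = (
    SparkSession.builder
    .appName(&quot;memory-tuning-example&quot;)
    .config(&quot;spark.executor.memory&quot;, &quot;8g&quot;)
    .config(&quot;spark.driver.memory&quot;, &quot;4g&quot;)
    .config(&quot;spark.executor.cores&quot;, &quot;4&quot;)
    # G1GC is a common garbage collector choice for long-running executors
    .config(&quot;spark.executor.extraJavaOptions&quot;, &quot;-XX:+UseG1GC&quot;)
    .getOrCreate()
)
&lt;/code&gt;&lt;/pre&gt;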
&lt;p&gt;&lt;strong&gt;4. Use optimized file formats&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Prefer columnar storage formats like Parquet or ORC over CSV and JSON.&lt;/li&gt;
&lt;li&gt;Enable compression to reduce I/O overhead.&lt;/li&gt;
&lt;/ul&gt;
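&lt;p&gt;As an example, converting a CSV source to compressed Parquet once lets every later job benefit from column pruning, predicate pushdown, and smaller I/O. Paths and column names below are hypothetical.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from pyspark.sql import SparkSession

spark = SparkSession.builder.appName(&quot;file-format-example&quot;).getOrCreate()

# Hypothetical CSV source converted once to compressed, columnar Parquet
raw = spark.read.option(&quot;header&quot;, True).csv(&quot;/data/raw/transactions.csv&quot;)
raw.write.mode(&quot;overwrite&quot;).option(&quot;compression&quot;, &quot;snappy&quot;).parquet(&quot;/data/curated/transactions&quot;)

# Downstream reads only touch the columns they need
curated = spark.read.parquet(&quot;/data/curated/transactions&quot;).select(&quot;txn_id&quot;, &quot;amount&quot;)
curated.show(5)
&lt;/code&gt;&lt;/pre&gt;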
&lt;p&gt;&lt;strong&gt;5. Leverage the Catalyst Optimizer and Tungsten execution engine&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Let Spark’s Catalyst Optimizer handle query optimization.&lt;/li&gt;
&lt;li&gt;Utilize Tungsten’s bytecode generation and memory management features.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;6. Optimize code for performance&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Use DataFrame API instead of RDDs for better optimization.&lt;/li&gt;
&lt;li&gt;Avoid unnecessary collect() and count() operations.&lt;/li&gt;
&lt;li&gt;Cache and persist intermediate results where necessary.&lt;/li&gt;
&lt;/ul&gt;
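&lt;p&gt;The sketch below illustrates these points with the DataFrame API: an intermediate result that is reused by several queries is cached, and results are written out rather than collected to the driver. Dataset and column names are hypothetical.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName(&quot;code-optimization-example&quot;).getOrCreate()

sales = spark.read.parquet(&quot;/data/sales&quot;)  # hypothetical dataset

# Cache an intermediate DataFrame that several downstream queries reuse
emea = sales.where(F.col(&quot;region&quot;) == &quot;EMEA&quot;).cache()

by_product = emea.groupBy(&quot;product_id&quot;).agg(F.sum(&quot;amount&quot;).alias(&quot;total&quot;))
by_month = emea.groupBy(&quot;month&quot;).agg(F.count(&quot;*&quot;).alias(&quot;orders&quot;))

# Write results out instead of collect()-ing large data back to the driver
by_product.write.mode(&quot;overwrite&quot;).parquet(&quot;/out/by_product&quot;)
by_month.write.mode(&quot;overwrite&quot;).parquet(&quot;/out/by_month&quot;)
&lt;/code&gt;&lt;/pre&gt;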
&lt;p&gt;&lt;strong&gt;7. Monitor and debug performance&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Use Spark UI and Event Logs to analyze job execution.&lt;/li&gt;
&lt;li&gt;Employ metrics and monitoring tools like Ganglia and Prometheus.&lt;/li&gt;
&lt;/ul&gt;
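&lt;p&gt;One small, concrete step is to persist event logs so completed jobs can still be reviewed in the Spark History Server after the application exits, in addition to watching the live Spark UI. The log directory below is a placeholder.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from pyspark.sql import SparkSession

# Keep event logs so finished jobs remain inspectable after the application exits
spark = (
    SparkSession.builder
    .appName(&quot;monitoring-example&quot;)
    .config(&quot;spark.eventLog.enabled&quot;, &quot;true&quot;)
    .config(&quot;spark.eventLog.dir&quot;, &quot;hdfs:///spark-event-logs&quot;)  # placeholder path
    .getOrCreate()
)
&lt;/code&gt;&lt;/pre&gt;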
&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Optimizing Apache Spark requires a strategic approach that combines efficient data handling, resource management, and code optimization. By implementing the best practices outlined in this whitepaper, organizations can enhance performance, reduce costs, and accelerate large-scale data processing.&lt;/p&gt;
&lt;p&gt;As big data continues to grow, mastering Spark’s fundamentals will empower organizations to unlock its full potential, drive innovation, and make smarter data-driven decisions in the digital era.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Deploying a Small Language Model in HPE Private Cloud AI using a Jupyter Notebook]]></title><description><![CDATA[Deploying new language models for users to interact with can be challenging for beginners. HPE developed Private Cloud AI to help users set…]]></description><link>https://developer.hpe.com/deploying-a-hugging-face-llm-in-hpe-private-cloud-ai/</link><guid isPermaLink="false">https://developer.hpe.com/deploying-a-hugging-face-llm-in-hpe-private-cloud-ai/</guid><pubDate>Thu, 20 Feb 2025 20:03:50 GMT</pubDate><content:encoded>&lt;p&gt;Deploying new language models for users to interact with can be challenging for beginners. HPE developed Private Cloud AI to help users set up and implement AI solutions quickly and easily.&lt;/p&gt;
&lt;p&gt;In this post, we will show how to use the HPE Machine Learning Inference Service (MLIS) as a part of HPE Private Cloud AI to add a new packaged model from a Hugging Face repository and create an endpoint to query the model. This is done using a Jupyter Notebook.&lt;/p&gt;
&lt;h3&gt;Prerequisites&lt;/h3&gt;
&lt;p&gt;This tutorial uses the &lt;a href=&quot;https://www.hpe.com/us/en/private-cloud-ai.html&quot;&gt;HPE Private Cloud AI&lt;/a&gt; (PCAI) platform. A PCAI system is required for these steps to work. It is assumed that the PCAI system is physically installed, patched and running with user accounts provisioned.&lt;/p&gt;
&lt;h3&gt;Steps to deploy&lt;/h3&gt;
&lt;p&gt;First, you will need to choose a model to deploy. In this case, we&apos;ve chosen a model hosted on Hugging Face called &lt;a href=&quot;https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct&quot;&gt;SmolLM2 1.7B&lt;/a&gt;. This compact model can handle a wide range of tasks despite weighing in at only 1.7B parameters.&lt;/p&gt;
&lt;h3&gt;Launching the interface&lt;/h3&gt;
&lt;p&gt;&lt;img src=&quot;/img/mlis.png&quot; alt=&quot;Computer screen showing the HPE Private Cloud AI user interface and the HPE MLIS tile is highlighted.&quot;&gt;&lt;/p&gt;
&lt;p&gt;From the HPE Private Cloud AI home screen, open the HPE MLIS tile. Next, select &quot;Add new model&quot;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/new-model.png&quot; alt=&quot;Computer screen showing packaged AI models and a selection to add a new model.&quot;&gt;&lt;/p&gt;
&lt;p&gt;This brings up the &quot;Add new packaged model&quot; dialog box. Fill in the name of the model, storage requirements, and resources. We have reduced the default resources, given that this is a small model.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/define-parameters.png&quot; alt=&quot;Dialog box for defining a new packaged model.&quot;&gt;&lt;/p&gt;
&lt;p&gt;Once the package is set up, you will receive a confirmation.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/package-running.png&quot; alt=&quot;Shows running packaged model.&quot;&gt;&lt;/p&gt;
&lt;p&gt;With the new packaged model complete, you will need to deploy it for use. Select &quot;create new deployment&quot; from the HPE MLIS &quot;Deployments&quot; tab. Select &quot;Submit&quot; when all tabs are filled out, as shown below.&lt;/p&gt;
&lt;p&gt;This will create an endpoint for use in the notebook and provide an API token.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/new-deployment.png&quot; alt=&quot;New deployment for AI model&quot;&gt;&lt;/p&gt;
&lt;p&gt;When the process is complete, an endpoint will be provided.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/endpoint.png&quot; alt=&quot;Endpoint provided by MLIS system&quot;&gt;&lt;/p&gt;
&lt;p&gt;Next up, let&apos;s take the now deployed model that&apos;s ready for inference and connect to it and interact with it from a Jupyter Notebook.&lt;/p&gt;
&lt;h3&gt;Building the Jupyter Notebook&lt;/h3&gt;
&lt;p&gt;First, install &lt;code&gt;openai&lt;/code&gt; if you do not already have it, then import it.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# vLLM Chat OpenAI
# !pip install openai
from openai import OpenAI
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then, using the endpoint and key generated by HPE MLIS, enter them into your Jupyter Notebook. Be sure to append /v1 to the URL.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Grab endpoint URL and API key from MLIS, remember to include &quot;/v1&quot; for latest version of the OpenAI-compatible API
model = &quot;HuggingFaceTB/SmolLM2-1.7B-Instruct&quot;
openai_api_base = &quot;https://smollm2-1-7b-vllm-predictor-dave-wright-hpe-1073f7cd.hpepcai-ingress.pcai.hpecic.net/v1&quot;
openai_api_key = &quot;eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpYXQiOjE3Mzk5MzgzMzAsImlzcyI6ImFpb2xpQGhwZS5jb20iLCJzdWIiOiI5MjNhM2JhOC1mMGU4LTQxOTQtODNkMS05ZWY4NzNjZGYxOWYiLCJ1c2VyIjoiZGF2ZS53cmlnaHQtaHBlLmNvbSJ9.YwH9gGPxTWxy4RSdjnQA9-U3_u7P0OIcarqw25DV8bOiftU1L4IvvyERHspj2lMGtZWbff1F3uh84wjAePHaHDcDTLoGtq6gJYwo_qRU03xV8Q2lwBetCCLUE4OHqS608gjJ-j1SLyqwxFxlXkqMOtnBY5_nswlAwCzHV28P8u8XxxfWuXFmoJpSA1egCWVVfEoTuK8CTz9kUJJ5opSp6m8qdqJmC2qxH0igcpKmL2H_MZ-62UHfEf240VRtc0DRNlOjeCoDM79aVPs3SjCtGeVkeEHimJwJbfGFIcu3LibX3QjbABUzWb5BPPZjzyEYUVM5ak12_sJ8j1mUW-r0sA&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You will now need to create an OpenAI client interface.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# create OpenAI client interface
client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In order to interact with the model, you will need to create a chat function. For the purposes of our example, let&apos;s give it a history feature as well as basic chat.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Interactive chat function with message history. 
def chat():
    # Initialize conversation history
    messages = []
    
    print(&quot;Chat with &quot;+model+&quot;! Type &apos;quit&apos; to exit.&quot;)
    
    while True:
        # Get user input
        user_input = input(&quot;\nYou: &quot;).strip()
        
        # Check for quit command
        if user_input.lower() == &apos;quit&apos;:
            print(&quot;Goodbye!&quot;)
            break
        
        # Add user message to history
        messages.append({&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: user_input})
        
        try:
            # Get model response using chat completion
            response = client.chat.completions.create(
                model=model,
                messages=messages
            )
            
            # Extract assistant&apos;s message
            assistant_message = response.choices[0].message.content
            
            # Add assistant&apos;s response to history
            messages.append({&quot;role&quot;: &quot;assistant&quot;, &quot;content&quot;: assistant_message})
            
            # Print the response
            print(&quot;\nAssistant:&quot;, assistant_message)
            
        except Exception as e:
            print(f&quot;\nError: {str(e)}&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/jupyter.png&quot; alt=&quot;Jupyter Notebook showing imported model endpoint and API key.&quot;&gt;&lt;/p&gt;
&lt;p&gt;Once this is done, you can interact with the model through a simple chat.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/chat-interface.png&quot; alt=&quot;Interaction with the SmolLM2 Small Language Model in a Jupyter Notebook&quot;&gt;&lt;/p&gt;
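&lt;p&gt;If you prefer to query the endpoint programmatically instead of through the interactive loop, the same client can issue a single-turn request. This is only a minimal sketch; the prompt is purely illustrative.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Single-turn request against the same MLIS endpoint (illustrative prompt)
response = client.chat.completions.create(
    model=model,
    messages=[{&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: &quot;Summarize what a small language model is in one sentence.&quot;}],
)
print(response.choices[0].message.content)
&lt;/code&gt;&lt;/pre&gt;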
&lt;p&gt;You can access &lt;a href=&quot;https://www.youtube.com/watch?v=oqjc-2c1Vtk&quot;&gt;this link&lt;/a&gt; to see a recorded demonstration that shows this process in real time.&lt;/p&gt;
&lt;h3&gt;Summary&lt;/h3&gt;
&lt;p&gt;With HPE Private Cloud AI, loading new models into the system and providing endpoints is just a few simple clicks and easily integrates with popular tools like Jupyter Notebooks. To learn more about HPE Private Cloud AI, please visit: &lt;a href=&quot;https://www.hpe.com/us/en/private-cloud-ai.html&quot;&gt;https://www.hpe.com/us/en/private-cloud-ai.html&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[7 Questions for Bill Reus: Interactive Supercomputing with Chapel for Cybersecurity]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/7-questions-for-bill-reus-interactive-supercomputing-with-chapel-for-cybersecurity/</link><guid isPermaLink="false">https://developer.hpe.com/7-questions-for-bill-reus-interactive-supercomputing-with-chapel-for-cybersecurity/</guid><pubDate>Thu, 13 Feb 2025 02:40:08 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Getting started with webhooks on HPE GreenLake cloud]]></title><description><![CDATA[Polling API or subscribing to events: That IS the question In one of my previous blog posts, I used the HPE GreenLake API to query the audit…]]></description><link>https://developer.hpe.com/getting-started-with-the-hpe-greenlake-cloud-eventing-framework/</link><guid isPermaLink="false">https://developer.hpe.com/getting-started-with-the-hpe-greenlake-cloud-eventing-framework/</guid><pubDate>Thu, 06 Feb 2025 13:53:06 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;h2&gt;Polling API or subscribing to events: That IS the question&lt;/h2&gt;
&lt;p&gt;In one of my previous blog posts, I used the HPE GreenLake API to query the audit log and, if anything appeared in the audit log over the course of a few minutes, I arranged for it to be displayed on screen. To do this, I had to continuously poll the API at a regular polling interval. While this method works, it is not ideal, since it is not real time: you might only learn of an important event up to one full polling interval after it occurs. A better approach that is often available on software platforms is called events, also referred to as webhooks. HPE GreenLake cloud provides this functionality and, in this post, I will explain how to leverage it.&lt;/p&gt;
&lt;h2&gt;It&apos;s a publisher/subscriber world&lt;/h2&gt;
&lt;p&gt;HPE GreenLake cloud provides an events framework in which event publishers (any of the HPE GreenLake cloud services) can register event types with the platform and event subscribers can declare what event types they would like to subscribe to. After they establish a security handshake, HPE GreenLake cloud forwards selected events to the subscriber in a close-to-real-time mode. No polling is necessary as the event handler (webhook) will be notified asynchronously.&lt;/p&gt;
&lt;p&gt;The following diagram illustrates the mechanism by which this works:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/slide-pour-blog-webhooks.jpg&quot; alt=&quot;HPE GreenLake cloud events framework&quot; title=&quot;HPE GreenLake cloud events framework&quot;&gt;&lt;/p&gt;
&lt;h2&gt;How do you write webhook handlers?&lt;/h2&gt;
&lt;p&gt;There are many ways you can write a webhook handler. You could use traditional languages such as Node.js (using Express.js), Python (using Flask) or Ruby (using Sinatra). Also very popular is the use of serverless functions such as AWS Lambda or Google Cloud Functions. You could also use low-code/no-code platforms such as Zapier, Make or Automate.io. Another option is to use an existing platform that supports webhooks such as ServiceNow or HPE OpsRamp.&lt;/p&gt;
&lt;p&gt;In any case, a webhook handler will do the following:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Wait for an HTTP POST request&lt;/li&gt;
&lt;li&gt;Verify request validity&lt;/li&gt;
&lt;li&gt;Parse and process data&lt;/li&gt;
&lt;li&gt;Return an HTTP status code&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Additional requirements for writing a webhook handler for HPE GreenLake include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The URL for the endpoint needs to be a Fully Qualified Domain Name (FQDN)&lt;/li&gt;
&lt;li&gt;The endpoint needs to use HTTPS&lt;/li&gt;
&lt;li&gt;The handler needs to respond to the initial challenge through the use of a shared secret key&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Taking the challenge&lt;/h2&gt;
&lt;p&gt;From the illustration above, you can see that the webhook handler is a piece of code that runs outside of the HPE GreenLake cloud, possibly posing a security risk. In order to avoid calling rogue code, HPE GreenLake will establish a trust relationship with the webhook handler by issuing a challenge request and expecting a very specific response. The challenge is an HTTP POST request that is sent to the webhook handler (via its URL) with a payload containing a &lt;strong&gt;challengeRequest&lt;/strong&gt; as shown below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &quot;data&quot;: {
  &quot;challengeRequest&quot;: &quot;8cbecfea-d708-4706-bba3-669225084a10&quot;
  },
  &quot;datacontenttype&quot;: &quot;application/json&quot;,
  &quot;id&quot;: &quot;9d2698f2-0dd4-4870-b026-7e18c755e121&quot;,
  &quot;source&quot;: &quot;https://global.api.greenlake.hpe.com/events&quot;,
  &quot;specversion&quot;: &quot;1.0&quot;,
  &quot;subject&quot;: &quot;3009de2825f211ec8a84fedebcb4a754&quot;,
  &quot;time&quot;: &quot;2024-12-12T19:09:05Z&quot;,
  &quot;type&quot;: &quot;com.hpe.greenlake.events.v1beta1.webhooks.verification&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Example of a challenge payload&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;p&gt;The mission of the challenge handler is to provide the correct answer to the challenge in a timely fashion and with an HTTP response in the following form, where &lt;strong&gt;&lt;CHALLENGE-RESPONSE&gt;&lt;/strong&gt; is the computed &lt;a href=&quot;https://en.wikipedia.org/wiki/SHA-2&quot;&gt;SHA-256&lt;/a&gt; &lt;a href=&quot;https://en.wikipedia.org/wiki/HMAC&quot;&gt;HMAC&lt;/a&gt; (Hash-based Message Authentication Code) of the &lt;strong&gt;challengeRequest&lt;/strong&gt; provided in the input payload.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Status code: &lt;strong&gt;200&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;JSON body: &lt;strong&gt;{ &quot;verification&quot; : &quot;&lt;CHALLENGE-RESPONSE&gt;&quot; }&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Header: &lt;strong&gt;content-type: application/json&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: There are &lt;a href=&quot;https://www.devglan.com/online-tools/hmac-sha256-online&quot;&gt;online tools&lt;/a&gt; to generate and test HMAC-SHA256 values, but remember that the hash must be returned in hexadecimal form and that a secret key is used as the key of the HMAC calculation. That secret key is shared by HPE GreenLake cloud and the webhook handler.&lt;/p&gt;
&lt;/blockquote&gt;
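&lt;p&gt;If you would rather write the handler in code than use a no-code tool, the snippet below is a minimal sketch in Python with Flask (one of the options mentioned earlier). It follows the four steps listed above and answers the challenge with the hex-encoded HMAC-SHA256 of the &lt;strong&gt;challengeRequest&lt;/strong&gt;. The route name and secret value are placeholders, and in practice the endpoint must be served over HTTPS on a fully qualified domain name.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Minimal sketch of a webhook handler in Flask; route and secret are placeholders
import hashlib
import hmac

from flask import Flask, jsonify, request

app = Flask(__name__)
SHARED_SECRET = b&quot;replace-with-your-webhook-secret-key&quot;

@app.route(&quot;/glcp-webhook&quot;, methods=[&quot;POST&quot;])
def handle_webhook():
    payload = request.get_json(force=True)   # 1. wait for the HTTP POST request
    data = payload.get(&quot;data&quot;, {})

    # 2. verify: answer the challenge with the hex HMAC-SHA256 of challengeRequest
    if &quot;challengeRequest&quot; in data:
        digest = hmac.new(SHARED_SECRET,
                          data[&quot;challengeRequest&quot;].encode(),
                          hashlib.sha256).hexdigest()
        return jsonify({&quot;verification&quot;: digest}), 200

    # 3. parse and process any other event (here it is simply logged)
    app.logger.info(&quot;Event %s: %s&quot;, payload.get(&quot;type&quot;), data.get(&quot;description&quot;))

    return &quot;&quot;, 200                           # 4. return an HTTP status code
&lt;/code&gt;&lt;/pre&gt;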
&lt;h3&gt;Processing events as they arrive&lt;/h3&gt;
&lt;p&gt;If the challenge was successful, meaning that the webhook handler has been recognized as a valid target for HPE GreenLake event forwarding, the next task of the webhook handler is to process events as they are received through subsequent POST requests. The payload shown below is an example of an event received from the HPE GreenLake audit log. It describes the fact that a user has logged out of the HPE GreenLake cloud. More details about other event payloads can be found in the &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/event/public/events/&quot;&gt;HPE GreenLake developer portal&lt;/a&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
     &quot;specversion&quot;: &quot;1.0&quot;,
     &quot;id&quot;: &quot;uJoQ5JMB_HAPiq2sSllt&quot;,
     &quot;source&quot;: &quot;//global.api.greenlake.hpe.com/audit-log&quot;,
     &quot;type&quot;: &quot;com.hpe.greenlake.audit-log.v1.logs.created&quot;,
     &quot;datacontenttype&quot;: &quot;application/json&quot;,
     &quot;dataschema&quot;: &quot;https://developer.greenlake.hpe.com/docs/greenlake/services/audit-logs/public/catalog/audit-log-event-latest/paths/Audit%20Log%20Created/post/&quot;,
     &quot;time&quot;: &quot;2024-12-20T12:34:53.161Z&quot;,
     &quot;data&quot;: {
         &quot;id&quot;: &quot;uJoQ5JMB_HAPiq2sSllt&quot;,
         &quot;user&quot;: {
             &quot;username&quot;: &quot;john.doe@hpedev.io&quot;
         },
         &quot;workspace&quot;: {
             &quot;id&quot;: &quot;3009de2825f211ec8a84fedebcb4a754&quot;,
             &quot;workspace_name&quot;: null,
             &quot;workspace_type&quot;: null
         },
         &quot;application&quot;: {
             &quot;id&quot;: &quot;00000000-0000-0000-0000-000000000000&quot;
         },
         &quot;category&quot;: &quot;user_management&quot;,
         &quot;description&quot;: &quot;User john.doe@hpedev.io logged out&quot;,
         &quot;created_at&quot;: 1734698093161,
         &quot;updated_at&quot;: 1734698093161,
         &quot;additional_info&quot;: {
             &quot;ip_address&quot;: &quot;194.9.99.5&quot;,
             &quot;account_name&quot;: &quot;HPEDEV -GLCP- Hackshack&quot;,
             &quot;ip_address_str&quot;: &quot;1.1.1.1&quot;
         },
         &quot;has_details&quot;: false
     }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Example of an event payload&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;p&gt;Now that you have a clear understanding of what has to be done within a webhook handler, let me show you how to build one.&lt;/p&gt;
&lt;h2&gt;Using Make.com to create a webhook handler&lt;/h2&gt;
&lt;p&gt;I decided to use Make (previously known as Integromat, a fitting name since it was a contraction of Integration-Automat, which is exactly what the tool does) to demonstrate this because:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;it doesn’t require any specific programming language knowledge&lt;/li&gt;
&lt;li&gt;it provides a FQDN of the webhook over HTTPS&lt;/li&gt;
&lt;li&gt;it’s free for simple prototyping usage&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It turned out to be a very elegant and fast way to get the work done.&lt;/p&gt;
&lt;h3&gt;Getting started with Make&lt;/h3&gt;
&lt;p&gt;First, you will need to create an account at &lt;a href=&quot;https://make.com&quot;&gt;https://make.com&lt;/a&gt;. Next you will create a new scenario (aka workflow).&lt;/p&gt;
&lt;p&gt;The first step in the scenario will be a Webhooks module:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/create-a-webhook.jpg&quot; alt=&quot;Create a webhook in Make&quot; title=&quot;Create a webhook in Make&quot;&gt;&lt;/p&gt;
&lt;p&gt;Give it a name and leave the rest of the settings as their defaults.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/create-webhook-2.jpg&quot; alt=&quot;Use URL of Webhook&quot; title=&quot;Use URL of Webhook&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can already &lt;strong&gt;Copy address to clipboard&lt;/strong&gt; to get the URL of your webhook. Save it for later.&lt;/p&gt;
&lt;p&gt;Select the &lt;strong&gt;Add&lt;/strong&gt; button, then click on &lt;strong&gt;Show Advanced Option&lt;/strong&gt; to add a data structure using the challenge payload example from the Take the challenge section above. Click &lt;strong&gt;generate,&lt;/strong&gt; paste the JSON and save it as &lt;strong&gt;challenge&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Make sure the &lt;strong&gt;challenge&lt;/strong&gt; data structure is selected in the advanced settings of the Webhooks module before continuing.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/select-challenge.jpg&quot; alt=&quot;Select challenge data structure &quot; title=&quot;Select challenge data structure &quot;&gt;&lt;/p&gt;
&lt;p&gt;Next, add another module to compute the HMAC. Click the plus sign and add a &lt;strong&gt;Tools Set&lt;/strong&gt; &lt;strong&gt;variable&lt;/strong&gt; module (use search to avoid the long list of modules).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/add-set-variabke-step.jpg&quot; alt=&quot;Add set variable step&quot; title=&quot;Add set variable step&quot;&gt;&lt;/p&gt;
&lt;p&gt;Configure the Set variable module with:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Variable name: &lt;strong&gt;hmac&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Variable value: &lt;strong&gt;sha256( challengeRequest ; ; &lt;SecretKey&gt; )&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: you can drag the &lt;strong&gt;challengeRequest&lt;/strong&gt; property from the &lt;strong&gt;Webhooks&lt;/strong&gt; module, and drop it as your first parameter. &lt;strong&gt;&lt;SecretKey&gt;&lt;/strong&gt; is a placeholder which you will replace later, once you know the real shared secret key.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;img src=&quot;/img/drag-drop-properties.jpg&quot; alt=&quot;drag and drop properties&quot; title=&quot;drag and drop properties&quot;&gt;&lt;/p&gt;
&lt;p&gt;The final step is to prepare the response of the webhook. For this, you need to add another module after the &lt;strong&gt;Set variable&lt;/strong&gt; module, called &lt;strong&gt;Webhook response&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/add-webhook-response.jpg&quot; alt=&quot;Add webhook response&quot; title=&quot;Add webhook response&quot;&gt;&lt;/p&gt;
&lt;p&gt;Set the status to &lt;strong&gt;200&lt;/strong&gt; and the body to &lt;strong&gt;{&quot;verification&quot;:&quot;hmac&quot;}&lt;/strong&gt;. Feel free to drag/drop the &lt;strong&gt;hmac&lt;/strong&gt; property from the &lt;strong&gt;Set variable&lt;/strong&gt; step in between the double quotes.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/setwebhook-response.jpg&quot; alt=&quot;Set webhook response&quot; title=&quot;Set webhook response&quot;&gt;&lt;/p&gt;
&lt;p&gt;In the advanced settings, add a custom header for &lt;strong&gt;content-type: application/json&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Your overall workflow basically looks like this:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/step1-final.jpg&quot; alt=&quot;Part 1 workflow&quot; title=&quot;Part 1 workflow&quot;&gt;&lt;/p&gt;
&lt;p&gt;In the scenario parameters (bottom of your editor), make sure the green &lt;strong&gt;Immediately as data arrives&lt;/strong&gt; toggle is checked.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/run-immediatly.jpg&quot; alt=&quot;Run immediately as data arrives &quot; title=&quot;Run immediately as data arrives &quot;&gt;&lt;/p&gt;
&lt;h3&gt;Debugging webhook using Postman&lt;/h3&gt;
&lt;p&gt;You can use Postman to test this workflow first before you declare it in HPE GreenLake cloud.&lt;/p&gt;
&lt;p&gt;For this, create a &lt;strong&gt;POST&lt;/strong&gt; request using the URL of the webhook (which you saved earlier) and the challenge example payload in JSON (we used it earlier). Make sure you add a &lt;strong&gt;content-type:application/json&lt;/strong&gt; header before clicking &lt;strong&gt;Send&lt;/strong&gt;. Check:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;That the status code is 200&lt;/li&gt;
&lt;li&gt;That the response body is the expected &lt;strong&gt;hmac&lt;/strong&gt; value&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/postman-challenge.jpg&quot; alt=&quot;Try challenge from Postman&quot; title=&quot;Try challenge from Postman&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Handling events in Make&lt;/h3&gt;
&lt;p&gt;Now that you know how to handle the initial challenge, you need to also take care of the other type of events that will be sent to the webhook handler.&lt;/p&gt;
&lt;p&gt;To make that distinction, you will use a new module in Make called a &lt;strong&gt;Router&lt;/strong&gt; and insert it in between the &lt;strong&gt;Webhooks&lt;/strong&gt; and the &lt;strong&gt;Tools Set variable&lt;/strong&gt; modules (right-click the &lt;strong&gt;Webhooks&lt;/strong&gt; module and add a router).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/add-a-router.jpg&quot; alt=&quot;Add a router&quot; title=&quot;Add a router&quot;&gt;&lt;/p&gt;
&lt;p&gt;Next, you will have to place a filter on this first branch (the challenge case) so that only the challenge call gets routed on the topmost branch.&lt;/p&gt;
&lt;p&gt;To do this, click on the topmost link and set up a filter to verify that property &lt;strong&gt;challengeRequest&lt;/strong&gt; exists in the payload. As earlier, you can use drag/drop to setup the condition of the filter, which is very convenient.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/setup-filter.jpg&quot; alt=&quot;Setup a filter on top most branch&quot; title=&quot;Setup a filter on top most branch&quot;&gt;&lt;/p&gt;
&lt;p&gt;Earlier, I showed an example of a payload received when an event is triggered. You can create another structure in the custom webhook (like what was done with the challenge JSON earlier). Call this structure &lt;strong&gt;event&lt;/strong&gt; and make sure it’s selected before continuing.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/select-event-struture.jpg&quot; alt=&quot;Select event structure&quot; title=&quot;Select event structure&quot;&gt;&lt;/p&gt;
&lt;p&gt;Now you can proceed and add a second branch to the router. This branch will be used to process all other events (when it’s not a challenge) received by the handler. For this blog post, I have decided to keep it simple and simply add a row within a Google Sheet.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/add-a-google-sheet.jpg&quot; alt=&quot;Add a Google Sheet&quot; title=&quot;Add a Google Sheet&quot;&gt;&lt;/p&gt;
&lt;p&gt;For this, you must add a Google Sheet module and configure it to fit your needs. As you can see below, I have configured mine with my Google account, and the path to an existing Google Sheet called AuditLog. I also mapped the fields from the data structure that I’d like to record in each new row.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/build-a-row.jpg&quot; alt=&quot;Build a new row in google sheet&quot; title=&quot;Build a new row in google sheet&quot;&gt;&lt;/p&gt;
&lt;p&gt;Finally, you also need to add to this branch a &lt;strong&gt;Webhook response&lt;/strong&gt; module (like we did in the topmost branch). This one is simpler, as it only returns a status 200 (no body, no header needed).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/second-webhook-response.jpg&quot; alt=&quot;Add a second webhook response&quot; title=&quot;Add a second webhook response&quot;&gt;&lt;/p&gt;
&lt;p&gt;The last thing you need to do is to place a fallback filter on the bottom branch, so it is used for all events except the challenge.&lt;/p&gt;
&lt;p&gt;To do this, select the properties of the bottom branch of the &lt;strong&gt;Router&lt;/strong&gt; and select &lt;strong&gt;yes&lt;/strong&gt; for &lt;strong&gt;fallback&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/set-fallback.jpg&quot; alt=&quot;Set a fallback branch&quot; title=&quot;Set a fallback branch&quot;&gt;&lt;/p&gt;
&lt;p&gt;Make sure you save your scenario.&lt;/p&gt;
&lt;h3&gt;Debugging webhook using Postman… again&lt;/h3&gt;
&lt;p&gt;Create a second Postman &lt;strong&gt;POST&lt;/strong&gt; request using the same webhook URL but this time, in the body, use the audit log event payload used earlier.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/post-event-in-postman.jpg&quot; alt=&quot;Post event in Postman&quot; title=&quot;Post event in Postman&quot;&gt;&lt;/p&gt;
&lt;p&gt;Click &lt;strong&gt;Send&lt;/strong&gt; in Postman, and check in Make (in the scenario’s history) that the second route was taken, as shown below.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/second-route-taken.jpg&quot; alt=&quot;Second route was taken&quot; title=&quot;Second route was taken&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can also verify that in the &lt;strong&gt;Google sheets&lt;/strong&gt; module, the content of the payload was mapped to different values (A:, B:, C:,…)&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/googlesheet-set.jpg&quot; alt=&quot;Google sheet property mapping &quot; title=&quot;Google sheet property mapping &quot;&gt;&lt;/p&gt;
&lt;p&gt;Finally, verify that your Google sheet was updated and a new row was added using the payload content, as shown in my Google sheet below.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/google-row-visible.jpg&quot; alt=&quot;Row was added in Google sheet&quot; title=&quot;Row was added in Google sheet&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Putting it all together&lt;/h2&gt;
&lt;p&gt;It’s now time to put it all together. Connect to HPE GreenLake cloud console and select to &lt;strong&gt;Manage Workspace&lt;/strong&gt;. From there, select the &lt;strong&gt;Automations&lt;/strong&gt; tile and finally the &lt;strong&gt;Webhooks&lt;/strong&gt; tile to access the webhooks configuration section:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/webhooksview.jpg&quot; alt=&quot;Webhooks in HPE GreenLake cloud&quot; title=&quot;Webhooks in HPE GreenLake cloud&quot;&gt;&lt;/p&gt;
&lt;p&gt;To register a new webhook as shown in the capture above, you need to provide:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A name&lt;/li&gt;
&lt;li&gt;A description&lt;/li&gt;
&lt;li&gt;The URL of the webhook&lt;/li&gt;
&lt;li&gt;The Webhook secret key. (When you set this secret key don’t forget to go back to Make and edit the &lt;strong&gt;Tool Set variable&lt;/strong&gt; module to use the same secret key. These keys MUST match exactly!)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/same-secret-key.jpg&quot; alt=&quot;You must use the same secret key on both ends&quot; title=&quot;You must use the same secret key on both ends&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Register your webhook&lt;/h3&gt;
&lt;p&gt;When you select &lt;strong&gt;Register webhook&lt;/strong&gt;, the first thing HPE GreenLake cloud will do is to establish a trust relationship with the webhook pointed to by the URL by sending a challenge payload in an HTTP POST request (as already discussed in the previous section).&lt;/p&gt;
&lt;p&gt;If the webhook response is the expected one, then the webhook is placed in &lt;strong&gt;Active&lt;/strong&gt; state. Otherwise, after a few attempts, the webhook state will be set to &lt;strong&gt;Critical&lt;/strong&gt;. At any point in time you can use the &lt;strong&gt;Test&lt;/strong&gt; button, to submit a new challenge (after fixing an issue with your webhook, for example).&lt;/p&gt;
&lt;h3&gt;It’s time to subscribe to events&lt;/h3&gt;
&lt;p&gt;From the web console, you can now subscribe to events available from the HPE GreenLake services. You can find the list of those already available from the &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/event/public/ui/#finding-events-on-hpe-greenlake-developer-portal&quot;&gt;HPE GreenLake Developer Portal&lt;/a&gt;. The list will grow over time as new services adopt this events framework.&lt;/p&gt;
&lt;p&gt;Select your webhook from the list and select &lt;strong&gt;Subscribe to event&lt;/strong&gt;. Select the source &lt;strong&gt;Service manager&lt;/strong&gt; (there is only HPE GreenLake Platform for now), then cut/paste the &lt;strong&gt;Event type&lt;/strong&gt; from the &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/#event-catalog&quot;&gt;event catalog&lt;/a&gt;. The following shows two examples of event types.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;com.hpe.greenlake.audit-log.v1.logs.created
com.hpe.greenlake.subscriptions.v1.expiring-subscriptions
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/subscribe.jpg&quot; alt=&quot;Subscribing to events&quot; title=&quot;Subscribing to events&quot;&gt;&lt;/p&gt;
&lt;p&gt;Once you select &lt;strong&gt;Subscribe to event&lt;/strong&gt;, your webhook handler is now registered to receive these types of events.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/event-subscribed.jpg&quot; alt=&quot;Event subscribed&quot; title=&quot;Event subscribed&quot;&gt;&lt;/p&gt;
&lt;p&gt;Let’s check what happened in the Google Sheet after a while.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/events-in-google-docs.jpg&quot; alt=&quot;Google Sheet starts being populated with events&quot; title=&quot;Google Sheet starts being populated with events&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can see different types of audit log events coming up from HPE GreenLake cloud. They could be processed on the receiving end, for example, to open incident tickets or take an automatic action.&lt;/p&gt;
&lt;h2&gt;Call to action&lt;/h2&gt;
&lt;p&gt;Webhooks together with the HPE GreenLake cloud events framework provide a great way to integrate with HPE GreenLake cloud using modern technology. It provides great flexibility on where you can run the subscriber code, and allows you to choose the language in which to write the webhook code. It also allows you to build a tight integration with an existing platform such as HPE OpsRamp or ServiceNow.&lt;/p&gt;
&lt;p&gt;Additional benefits I can see from this technique are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;no need to verify that a polling code is still running&lt;/li&gt;
&lt;li&gt;no risk of API token expiration&lt;/li&gt;
&lt;li&gt;no loss of events&lt;/li&gt;
&lt;li&gt;it’s a well-established, industry-standard mechanism&lt;/li&gt;
&lt;li&gt;there is a huge choice of implementation methods including low code/no code ones&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You can request access to this feature by signing up to the Automations/Webhook Access Beta Program &lt;a href=&quot;https://app.smartsheet.com/b/form/0e61e8c2bd6d48c7829845ab824c11d6&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Using the Chapel Compiler to Develop Language Tooling]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/using-the-chapel-compiler-to-develop-language-tooling/</link><guid isPermaLink="false">https://developer.hpe.com/using-the-chapel-compiler-to-develop-language-tooling/</guid><pubDate>Wed, 05 Feb 2025 04:33:04 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Explore AI and Private AI uses cases as well as webhooks, LLMs, and HPC debuggers]]></title><link>https://developer.hpe.com/2025-february-03/</link><guid isPermaLink="false">https://developer.hpe.com/2025-february-03/</guid><pubDate>Tue, 04 Feb 2025 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[DragonHPC featured at SC24]]></title><description><![CDATA[DragonHPC was featured again at this year's Supercomputing 2024 conference as part of a special workshop on High Performance Python for…]]></description><link>https://developer.hpe.com/dragonhpc-featured-at-sc24/</link><guid isPermaLink="false">https://developer.hpe.com/dragonhpc-featured-at-sc24/</guid><pubDate>Wed, 29 Jan 2025 16:53:37 GMT</pubDate><content:encoded>&lt;p&gt;DragonHPC was featured again at this year&apos;s Supercomputing 2024 conference as part of a special workshop on High Performance Python for Science at Scale. DragonHPC&apos;s presentation focused on telemetry and visualization for complex HPC and AI workflows. The workshop featured a range of different Python efforts for scientific computing at HPE, Argonne National Lab, Lawrence Berkeley National Lab, and the University of Delaware.&lt;/p&gt;
&lt;p&gt;Here is a link to this year&apos;s conference workshop presentation:&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://sc24.supercomputing.org/proceedings/workshops/workshop_pages/ws_hppss101.html&quot;&gt;https://sc24.supercomputing.org/proceedings/workshops/workshop_pages/ws_hppss101.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;A sample DragonHPC telemetry panel image from the workshop is shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/dragonhpc_telemetry_sc24.png&quot; alt=&quot;DragonHPC Telemetry Sample Panel&quot; title=&quot;DragonHPC Telemetry Sample Panel&quot;&gt;&lt;/p&gt;
&lt;p&gt;DragonHPC is a composable and highly versatile distributed runtime for HPC &amp;#x26; AI workflows. The DragonHPC distributed runtime is Python native but also extensible to other common programming languages. DragonHPC can help orchestrate distributed processes effectively, access distributed data efficiently, and can be used on heterogeneous hardware.&lt;/p&gt;
&lt;p&gt;DragonHPC would be of benefit and high interest for HPC &amp;#x26; AI software developers, architects, researchers, and data scientists.&lt;/p&gt;
&lt;p&gt;You can learn more about DragonHPC by visiting &lt;a href=&quot;https://developer.hpe.com/platform/dragonhpc/home/&quot;&gt;the landing page on HPE Developer Community portal&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[LLM Agentic Tool Mesh: Orchestrating agentic tools for the AI revolution]]></title><description><![CDATA[In my previous blog posts, I dolve into the Chat Service, Agent Service, and RAG Service of LLM Agentic Tool Mesh open source project. Today…]]></description><link>https://developer.hpe.com/llm-agentic-tool-mesh-orchestrating-agentic-tools-for-the-next-revolution/</link><guid isPermaLink="false">https://developer.hpe.com/llm-agentic-tool-mesh-orchestrating-agentic-tools-for-the-next-revolution/</guid><pubDate>Mon, 20 Jan 2025 08:36:14 GMT</pubDate><content:encoded>&lt;style&gt;
li {
    font-size: 27px !important;
    line-height: 33px !important;
    max-width: none !important;
}
&lt;/style&gt;
&lt;p&gt;In my previous blog posts, I delved into the &lt;a href=&quot;https://developer.hpe.com/blog/ll-mesh-exploring-chat-service-and-factory-design-pattern/&quot;&gt;Chat Service&lt;/a&gt;, &lt;a href=&quot;https://developer.hpe.com/blog/llm-agentic-tool-mesh-harnessing-agent-services-and-multi-agent-ai-for-next-level-gen-ai/&quot;&gt;Agent Service&lt;/a&gt;, and &lt;a href=&quot;https://developer.hpe.com/blog/llm-agentic-tool-mesh-empowering-gen-ai-with-retrieval-augmented-generation-rag/&quot;&gt;RAG Service&lt;/a&gt; of the &lt;a href=&quot;https://github.com/HewlettPackard/llmesh&quot;&gt;LLM Agentic Tool Mesh open source project&lt;/a&gt;. Today, I&apos;ll explore the system services of LLM Agentic Tool Mesh, which are essential for managing and orchestrating the mesh of agentic tools.&lt;/p&gt;
&lt;p&gt;I&apos;ll provide insights into these services, showcase an example of a Mesh available in the repository, discuss federated governance, and share our vision for the future evolution of the LLM Agentic Tool Mesh project.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/mesh.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h1&gt;Understanding the system services&lt;/h1&gt;
&lt;p&gt;The system services in LLM Agentic Tool Mesh are crucial for the seamless operation and orchestration of agentic tools and web applications. These services ensure consistency, ease of use, and flexibility across the platform. They include:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Tool Client Service&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Tool Server Service&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Let&apos;s explore each of these components in detail.&lt;/p&gt;
&lt;h2&gt;Tool Client Service&lt;/h2&gt;
&lt;p&gt;The Tool Client Service enables developers to transform any code function into an LLM Agentic Tool Mesh tool by applying a simple decorator. This service abstracts the complexities of tool integration, allowing for quick conversion of functions into reusable tools within the LLM Agentic Tool Mesh ecosystem.&lt;/p&gt;
&lt;p&gt;Key features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Decorator-based: Convert functions into tools using the &lt;code&gt;@AthonTool&lt;/code&gt; decorator&lt;/li&gt;
&lt;li&gt;Seamless integration: Decorated functions are fully integrated into the LLM Agentic Tool Mesh platform&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Example usage:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from athon.system import AthonTool, Logger

config = {
    &quot;greetings&quot;: &quot;Hello World! Welcome, &quot;
}
logger = Logger().get_logger()

@AthonTool(config, logger)
def hello_world(query: str) -&gt; str:
    &quot;&quot;&quot;Greets the user.&quot;&quot;&quot;
    greeting_message = f&quot;{config[&apos;greetings&apos;]} {query}!&quot;
    return greeting_message.capitalize()
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Tool Server Service&lt;/h2&gt;
&lt;p&gt;The Tool Server Service provides the necessary infrastructure to manage and run LLM Agentic Tool Mesh tools on the platform. It includes capabilities for tool discovery and execution, ensuring that tools are easily accessible and efficiently managed.&lt;/p&gt;
&lt;p&gt;Key features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Tool discovery: Automatically discover tools within the platform&lt;/li&gt;
&lt;li&gt;Execution management: Manage the execution of tools, ensuring efficient operation&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/tools.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Example usage:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from athon.agents import ToolRepository
from athon.system import ToolDiscovery

projects_config = [
    {
        &quot;name&quot;: &quot;Simple Project&quot;,
        &quot;tools&quot;: [
            &quot;examples/local_tool&quot;,          # A local tool path
            &quot;https://127.0.0.1:5003/&quot;,      # A remote tool URL
        ]
    }
]
tools_config = {
    &quot;type&quot;: &quot;LangChainStructured&quot;
}

def discover_and_load_tools(projects_config, tools_config):
    tool_repository = ToolRepository.create(tools_config)
    tool_discovery = ToolDiscovery()
    tool_id_counter = 1

    for project in projects_config:
        for tool_reference in project[&quot;tools&quot;]:
            tool_info = tool_discovery.discover_tool(tool_reference)
            if tool_info:
                tool_metadata = {
                    &quot;id&quot;: tool_id_counter,
                    &quot;project&quot;: project[&quot;name&quot;],
                    &quot;name&quot;: tool_info[&quot;name&quot;],
                    &quot;interface&quot;: tool_info.get(&quot;interface&quot;)
                }
                tool_repository.add_tool(tool_info[&quot;tool&quot;], tool_metadata)
                tool_id_counter += 1

    return tool_repository

# Run the tool discovery and loading process
tool_repository = discover_and_load_tools(projects_config, tools_config)

# Display the discovered tools
for tool in tool_repository.get_tools().tools:
    print(f&quot;Discovered tool: {tool[&apos;name&apos;]} from project: {tool[&apos;metadata&apos;][&apos;project&apos;]}&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;Building a mesh of LLM Agentic Tools&lt;/h1&gt;
&lt;p&gt;We have developed a series of web applications and tools, complete with examples, to demonstrate the capabilities of LLM Agentic Tool Mesh in our &lt;a href=&quot;https://github.com/HewlettPackard/llmesh&quot;&gt;GitHub repo&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Web applications:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Chatbot&lt;/strong&gt; (&lt;code&gt;examples/app_chatbot&lt;/code&gt;): This is a chatbot capable of reasoning and invoking appropriate LLM tools to perform specific actions. You can configure the chatbot using files that define LLM Agentic Tool Mesh platform services, project settings, toolkits, and memory configurations. The web app orchestrates both local and remote LLM tools, allowing them to define their own HTML interfaces, supporting text, images, and code presentations.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Admin panel&lt;/strong&gt; (&lt;code&gt;examples/app_backpanel&lt;/code&gt;): There&apos;s an admin panel that enables the configuration of basic LLM tools to perform actions via LLM calls. It allows you to set the system prompt, select the LLM model, and define the LLM tool interface, simplifying the process of configuring LLM tool interfaces.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Tools:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Basic copywriter&lt;/strong&gt; (&lt;code&gt;examples/tool_copywriter&lt;/code&gt;): This tool rewrites text, providing explanations for enhancements and changes.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Temperature finder&lt;/strong&gt; (&lt;code&gt;examples/tool_api&lt;/code&gt;): This tool fetches and displays the current temperature for a specified location by utilizing a public API.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Temperature analyzer&lt;/strong&gt; (&lt;code&gt;examples/tool_analyzer&lt;/code&gt;): Another tool generates code, using a language model to analyze historical temperature data and create visual charts for better understanding.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Telco expert&lt;/strong&gt; (&lt;code&gt;examples/tool_rag&lt;/code&gt;): The RAG tool provides quick and accurate access to 5G specifications.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;OpenAPI manager&lt;/strong&gt; (&lt;code&gt;examples/tool_agents&lt;/code&gt;): This multi-agent tool reads OpenAPI documentation and provides users with relevant information based on their queries.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Running the examples&lt;/h2&gt;
&lt;p&gt;You can run the tools and web applications individually or use the provided &lt;code&gt;run_examples.sh&lt;/code&gt; script to run them all together. Once everything is started:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Access the chatbot at &lt;code&gt;https://127.0.0.1:5001/&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Access the admin panel at &lt;code&gt;https://127.0.0.1:5011/&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/mesh-apps.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h1&gt;Federated governance and standards&lt;/h1&gt;
&lt;p&gt;In the LLM Agentic Tool Mesh platform, the management of LLM tools is decentralized, promoting flexibility and innovation. To ensure this decentralization does not compromise the platform&apos;s integrity, LLM Agentic Tool Mesh implements a unified framework of governance policies and standards.&lt;/p&gt;
&lt;p&gt;Key principles:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Interoperable standards&lt;/strong&gt;: Ensures all tools and services work together seamlessly while adhering to best practices&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Ethical compliance&lt;/strong&gt;: Emphasizes minimizing biases, ensuring fairness, and upholding ethical principles across all AI tools and models&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Security and privacy&lt;/strong&gt;: Maintains rigorous standards to protect data and ensure compliance with privacy regulations.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Continuous improvement&lt;/strong&gt;: Encourages feedback and collaboration to refine governance practices&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Automated governance&lt;/strong&gt;: Plans to extend code quality checks to enforce governance policies, ensuring comprehensive compliance across the platform&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;LLM Agentic Tool Mesh includes a dedicated repository containing text files that outline various policies and standards (&lt;code&gt;federated_governance/&lt;/code&gt;). These documents cover essential areas such as LLM model usage, RAG processes, and more.&lt;/p&gt;
&lt;h1&gt;Vision for LLM Agentic Tool Mesh project evolution&lt;/h1&gt;
&lt;p&gt;The LLM Agentic Tool Mesh platform is evolving into a comprehensive solution, providing several panel views tailored for various stages of the tool and application lifecycle.&lt;/p&gt;
&lt;p&gt;When creating a new web app, you can build upon the existing examples. With all services fully parameterized, there is unparalleled flexibility to design diverse user experience panels. For instance, current examples include a chatbot as a user interface and an admin panel for configuring an LLM tool. Additionally, web apps can be developed to support deployment tasks or facilitate experiments aimed at optimizing service parameters for specific objectives.&lt;/p&gt;
&lt;p&gt;Currently, the platform provides a user panel and a development panel.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;User panel&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Implemented features: This panel focuses on engaging with tools like Chat, RAG, and Agent services. It provides a user-friendly interface for interacting with these capabilities.&lt;/li&gt;
&lt;li&gt;Future goals: We plan to enrich the existing services, offering an even more seamless and feature-rich experience for end-users.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Development panel&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Implemented features: This panel has been partially tackled with the backpanel web app example, which allows users to modify the basic copywriter agentic tool and the RAG tool at runtime.&lt;/li&gt;
&lt;li&gt;Future goals: We aim to add more system services to support development, including real-time LLM tuning and configuration.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In the future, we hope to be able to offer a deployment panel and an experiment panel.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Deployment panel (future)&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Purpose: This panel will focus on deploying the LLM Agentic Tool Mesh tools seamlessly across one or more clusters, enabling large-scale and distributed deployments.&lt;/li&gt;
&lt;li&gt;Planned features: We hope to offer tools for monitoring deployed tools, orchestrating distributed systems, and managing deployment pipelines.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Experiment panel (future)&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Purpose: This panel will be designed to track and manage experiments to optimize LLM tool performance and suitability.&lt;/li&gt;
&lt;li&gt;Planned features: This panel will allow users to try different configurations and compare outcomes, helping teams evaluate the most effective settings for their use cases.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/usage.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h1&gt;Our mission ahead&lt;/h1&gt;
&lt;p&gt;As LLM Agentic Tool Mesh evolves, we aim to continue enhancing the platform by enriching existing services, especially in the user panel, and expanding the development panel with more robust system services. The addition of the deployment panel and experiment panel will complete the platform&apos;s vision, enabling a fully integrated lifecycle from development to deployment and optimization.&lt;/p&gt;
&lt;p&gt;Stay tuned as we advance toward democratizing Gen AI with a comprehensive, flexible, and user-centric platform!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Using the gdb4hpc debugger at scale with a CUDA/MPI HPC application]]></title><description><![CDATA[Command line debuggers for Linux have existed for decades. The GNU debugger (gdb) is the most famous and arguably the most powerful. But gdb…]]></description><link>https://developer.hpe.com/using-the-gdb4hpc-debugger-at-scale-with-a-cuda-mpi-hpc-application/</link><guid isPermaLink="false">https://developer.hpe.com/using-the-gdb4hpc-debugger-at-scale-with-a-cuda-mpi-hpc-application/</guid><pubDate>Fri, 17 Jan 2025 16:37:53 GMT</pubDate><content:encoded>&lt;p&gt;Command line debuggers for Linux have existed for decades. The GNU debugger (gdb) is the most famous and arguably the most powerful. But gdb has a weakness when it comes to high performance computing (HPC) applications - it can only run on one system at a time. A typical HPC application runs tens of thousands of processes on thousands of systems at once! Classic debuggers like gdb were never designed for that.&lt;/p&gt;
&lt;p&gt;This is where gdb4hpc, the gdb based debugger for HPC, comes in. The gdb4hpc debugger is part of the HPE Cray Programming Environment package. It captures and extends the powerful single-system features of gdb and provides a familiar command line debugging experience for HPC applications running on thousands of nodes.&lt;/p&gt;
&lt;p&gt;To provide these features, gdb4hpc connects the user to many instances of gdb at once. While connected, it provides tools enabling the user to control all of the instances of gdb simultaneously. Familiar debugging features like breakpoints, backtraces, and printing variables can be applied to all processes simultaneously. Additionally, more advanced debugging features like attaching to a running application and debugging GPU kernels are supported.&lt;/p&gt;
&lt;p&gt;To help the user manage the inherent complexities of debugging thousands of processes at once, the gdb4hpc debugger aggregates and filters the results from potentially thousands of gdb instances into representations that will comfortably fit on a single terminal screen. Additionally, it provides tools to inspect large amounts of data like the very large arrays that are typical in HPC applications.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/gdb4hpc-controlling-gdbs.png&quot; width=&quot;95%&quot; alt=&quot;Illustration showing how gdb4hpc connects to individual gdb instances. A single gdb4hpc instance is run on the login node. On each compute node, multiple application ranks are running. Each application rank has an instance of gdb attached to it. In turn, gdb4hpc remotely attaches to each individual gdb instance.&quot; title=&quot;gdb4hpc controlling multiple instances of gdb&quot;&gt;&lt;/center&gt;
&lt;p&gt;In this tutorial, you will learn how to debug a multinode MPI/CUDA application
with gdb4hpc in the HPE Cray Programming Environment. This tutorial uses a CUDA
application and NVIDIA GPUs as examples, but the concepts are applicable to HIP (AMD&apos;s equivalent of NVIDIA&apos;s CUDA) applications on AMD GPUs as well.&lt;/p&gt;
&lt;h2&gt;Prerequisites&lt;/h2&gt;
&lt;p&gt;This tutorial assumes you already have a familiarity with &lt;a href=&quot;https://sourceware.org/gdb/&quot;&gt;command line debuggers&lt;/a&gt;, &lt;a href=&quot;https://git-scm.com/&quot;&gt;Git&lt;/a&gt;, &lt;a href=&quot;https://en.wikipedia.org/wiki/Message_Passing_Interface&quot;&gt;MPI&lt;/a&gt;, and &lt;a href=&quot;https://developer.nvidia.com/cuda-toolkit&quot;&gt;CUDA&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;You will need a system with access to the Cray C++ compilers and gdb4hpc, which are part of the HPE Cray Programming Environment. You will also need access to the NVIDIA C++ compiler.&lt;/p&gt;
&lt;h2&gt;Setup&lt;/h2&gt;
&lt;p&gt;To set up the debugging session, you will need to obtain the sample application
and compile it for your chosen GPU with debug information.&lt;/p&gt;
&lt;h3&gt;Download the sample code&lt;/h3&gt;
&lt;p&gt;For this tutorial, you will use the &lt;code&gt;simpleMPI&lt;/code&gt; example from
&lt;a href=&quot;https://github.com/NVIDIA/cuda-samples&quot;&gt;NVIDIA&apos;s cuda-samples repository&lt;/a&gt;.
The &lt;code&gt;simpleMPI&lt;/code&gt; application does a distributed calculation involving square roots and averages:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;code&gt;simpleMPI&lt;/code&gt; generates millions of random floats (2,560,000 per node)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;simpleMPI&lt;/code&gt; uses MPI to distribute those numbers across multiple compute nodes&lt;/li&gt;
&lt;li&gt;&lt;code&gt;simpleMPI&lt;/code&gt; uses CUDA to calculate the square root of each number&lt;/li&gt;
&lt;li&gt;&lt;code&gt;simpleMPI&lt;/code&gt; uses MPI to collect the resulting square roots and average them&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Start by cloning the sample code repository.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ git clone https://github.com/NVIDIA/cuda-samples.git
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, navigate to the directory containing the &lt;code&gt;simpleMPI&lt;/code&gt; sample.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ cd cuda-samples/Samples/0_Introduction/simpleMPI
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The sample comes with some source files, some Visual Studio project files, a
Makefile, and a README.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ ls
Makefile       simpleMPI.cu          simpleMPI_vs2017.vcxproj  simpleMPI_vs2022.sln
README.md      simpleMPI.h           simpleMPI_vs2019.sln      simpleMPI_vs2022.vcxproj
simpleMPI.cpp  simpleMPI_vs2017.sln  simpleMPI_vs2019.vcxproj
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;For this tutorial, you will only need the following source files:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;simpleMPI.cpp  simpleMPI.cu  simpleMPI.h
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Remove the other files, if desired.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ rm *.sln *.vcxproj README.md Makefile
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Build the application&lt;/h3&gt;
&lt;p&gt;To compile the application, load the appropriate compilers and compile the
sample application for your chosen GPU architecture with debug information.&lt;/p&gt;
&lt;h4&gt;Load compilers&lt;/h4&gt;
&lt;p&gt;The versions of the &lt;code&gt;cce&lt;/code&gt; and &lt;code&gt;cuda&lt;/code&gt; compilers that you should load depend on
which GPU architecture you will be compiling for and the version of the CUDA
drivers on your system. Find your GPU on the &lt;a href=&quot;https://developer.nvidia.com/cuda-gpus&quot;&gt;NVIDIA GPU Compute Capability page&lt;/a&gt; and choose a version of CUDA that
supports the required compute capability. Then load an appropriate CCE version.
For information about which CCE versions support which versions of CUDA, see
the &lt;a href=&quot;https://cpe.ext.hpe.com/docs/latest/release_announcements/index.html&quot;&gt;HPE Cray Programming Environment release announcements&lt;/a&gt;. If in doubt, the default versions on your system
should work just fine.&lt;/p&gt;
&lt;p&gt;When writing this tutorial, I used CCE 17 and CUDA 12 and ran the application on NVIDIA A100 GPUs, but the debugging process is the same for any compiler combination.&lt;/p&gt;
&lt;p&gt;After finding which versions you need, use the &lt;code&gt;module&lt;/code&gt; command in a terminal to ensure that the Cray &lt;code&gt;CC&lt;/code&gt; compiler and the NVIDIA &lt;code&gt;nvcc&lt;/code&gt; compiler are available.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ module load cce/17
$ module load cuda/12
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Verify that your environment is set up correctly by confirming that CC
(the Cray CCE C++ compiler) and nvcc (the NVIDIA CUDA compiler driver)
are available in the terminal.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ CC --version
Cray clang version 17.0.1  (5ec9405551a8c8845cf14e81dc28bff7aa3935cb)
...

$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Thu_Jun__6_02:18:23_PDT_2024
Cuda compilation tools, release 12.5, V12.5.82
...
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Compile the application&lt;/h4&gt;
&lt;p&gt;The sample application comes with a Makefile, but it&apos;s not compatible with the
HPE Cray Programming Environment. You will compile the application manually.&lt;/p&gt;
&lt;h4&gt;Include debug info&lt;/h4&gt;
&lt;p&gt;When compiling an application, compilers apply optimizations to make the resulting
application faster and smaller. These optimizations often rearrange the order
of code or completely remove it which makes reasoning about the original code
at runtime difficult. When preparing to debug an application, you need to tell
the compiler to disable these optimizations and include extra information that
the debugger can use to reason about the original code.&lt;/p&gt;
&lt;h4&gt;Compile the .cu file&lt;/h4&gt;
&lt;p&gt;The simpleMPI.cu file contains the code that will be run on the GPU. Use
nvcc to compile it.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ nvcc -g -G -O0 -c -ccbin=cc -gencode arch=... simpleMPI.cu -o simpleMPI_gpu.o
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;-g&lt;/code&gt; flag tells nvcc to include debugging information for the host code
(the code that will run on the CPU) and the &lt;code&gt;-G&lt;/code&gt; flag tells nvcc to include
debugging information for the device code (the code that will run on the GPU).
&lt;code&gt;-O0&lt;/code&gt; disables optimizations for the host code. &lt;code&gt;-G&lt;/code&gt; implicitly disables
optimizations for the device code.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;-c&lt;/code&gt; tells the compiler to produce an object file instead of an executable. An
object file is a mostly-compiled program that only needs to be linked to create
an executable. After you compile the .cpp file in the next section, you will
link the object files together to create the final executable.&lt;/p&gt;
&lt;p&gt;The nvcc compiler does not fully compile C/C++ itself. It processes the CUDA
directives in the .cu file and produces a new C/C++ file in which those directives
are replaced with C/C++ code that interacts with the CUDA API (loading the correct
GPU driver, and so on). &lt;code&gt;-ccbin=cc&lt;/code&gt; tells
nvcc to use the &lt;code&gt;cc&lt;/code&gt; compiler to compile the generated
C/C++ code. When the Cray CCE module is loaded, cc is the Cray C/C++ compiler.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;-gencode&lt;/code&gt; tells nvcc which GPU architecture to target. The actual value for
this argument will vary based on which GPU you are targeting. Refer to the
&lt;a href=&quot;https://developer.nvidia.com/cuda-gpus&quot;&gt;NVIDIA compute capabilities table&lt;/a&gt; to
find the compute capability of your GPU. For example, the A100 GPU that I am
targeting while writing this tutorial has a compute capability of 8.0. Take the compute
capability of your GPU and pass it to nvcc via the &lt;code&gt;-gencode&lt;/code&gt; flag by
removing the decimal point and embedding it in an &lt;code&gt;arch=compute_XX,code=sm_XX&lt;/code&gt;
pair. For my A100 example, the complete option is &lt;code&gt;-gencode arch=compute_80,code=sm_80&lt;/code&gt;.&lt;/p&gt;
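&lt;p&gt;If you are unsure which GPU model your compute nodes contain, you can usually query it with &lt;code&gt;nvidia-smi&lt;/code&gt; on a compute node and then look the model up in the compute capability table. Here is a minimal sketch for a Slurm system; it reuses the partition name from later in this tutorial, so adjust it for your site:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Print the GPU model on one node of the target partition, then look it
# up in NVIDIA&apos;s compute capability table to choose the -gencode value.
$ srun -N1 -p griz256 nvidia-smi --query-gpu=name --format=csv,noheader
&lt;/code&gt;&lt;/pre&gt;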
&lt;p&gt;&lt;code&gt;simpleMPI.cu&lt;/code&gt; is the input file and &lt;code&gt;-o simpleMPI_gpu.o&lt;/code&gt; tells &lt;code&gt;nvcc&lt;/code&gt; to name
the output file &lt;code&gt;simpleMPI_gpu.o&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;My complete nvcc invocation for an A100 GPU looks like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ nvcc -g -G -c -ccbin=cc -gencode arch=compute_80,code=sm_80 simpleMPI.cu -o simpleMPI_gpu.o
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Refer to the &lt;a href=&quot;https://docs.nvidia.com/cuda/cuda-compiler-driver-nvcc/index.html&quot;&gt;&lt;code&gt;nvcc&lt;/code&gt; user manual&lt;/a&gt;
and the &lt;a href=&quot;https://docs.nvidia.com/cuda/cuda-gdb/index.html#compiling-the-application&quot;&gt;compilation section of the cuda-gdb documentation&lt;/a&gt;
for more information.&lt;/p&gt;
&lt;h4&gt;Compile the .cpp file&lt;/h4&gt;
&lt;p&gt;The simpleMPI.cpp file contains code that will be run on the CPU. It is the
entry point to the program and contains the MPI operations. Use CC to compile
it.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ CC -g -O0 -c simpleMPI.cpp -o simpleMPI_cpu.o
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The flags have the same meanings as in the nvcc compilation. The Cray CC
compiler will automatically include the flags needed for building with MPI.&lt;/p&gt;
&lt;h4&gt;Link the application and produce an executable&lt;/h4&gt;
&lt;p&gt;Finally, use CC to link the two object files into the final executable.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ CC simpleMPI_gpu.o simpleMPI_cpu.o -o simpleMPI
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Test the application&lt;/h4&gt;
&lt;p&gt;Test that the application was compiled correctly by running it on multiple
nodes. For example, on a Slurm system:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ srun -n8 -N2 --ntasks-per-node 4 --exclusive -p griz256 ./simpleMPI
Running on 8 nodes
Average of square roots is: 0.667337
PASSED
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Your srun command&apos;s flags may vary depending on your system. You should launch the
job on nodes with your targeted GPU with one rank per GPU. See
&lt;a href=&quot;https://slurm.schedmd.com/srun.html&quot;&gt;the &lt;code&gt;srun&lt;/code&gt; documentation&lt;/a&gt;
for how to achieve a correct job launch on your system.&lt;/p&gt;
&lt;p&gt;In my specific example:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;-n8 -N2 --ntasks-per-node 4&lt;/code&gt; tells Slurm to run 8 ranks across 2 nodes,
splitting the job evenly by running 4 ranks on each node. In this example I
am using nodes with 4 A100s each.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;--exclusive&lt;/code&gt; tells Slurm to run on nodes that aren&apos;t being used by anyone
else, and to not let anyone else use them while running the job. Only one
application can use each GPU at a time, so you need to make sure that you aren&apos;t
given nodes that have GPUs that are already in use.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;-p griz256&lt;/code&gt; tells Slurm to use the nodes in the griz256 partition. In my case,
the griz256 partition designates all the nodes with A100s. Your partition name
will probably be different.&lt;/p&gt;
&lt;h2&gt;Debugging&lt;/h2&gt;
&lt;h3&gt;Load gdb4hpc and cuda-gdb&lt;/h3&gt;
&lt;p&gt;You&apos;re ready to start debugging! Load the gdb4hpc debugger.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ module load gdb4hpc
$ gdb4hpc --version
gdb4hpc-4.16.3.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;For this tutorial, I am using gdb4hpc 4.16.3, but any version above 4.13.1
will have the commands used in this tutorial.&lt;/p&gt;
&lt;p&gt;The gdb4hpc debugger will internally use the cuda-gdb debugger for GPU debugging. cuda-gdb is
provided by the cuda module that you loaded for building the application.
Check that cuda-gdb is available.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ cuda-gdb --version
NVIDIA (R) cuda-gdb 12.5
Portions Copyright (C) 2007-2024 NVIDIA Corporation
Based on GNU gdb 13.2
Copyright (C) 2023 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later &amp;#x3C;http://gnu.org/licenses/gpl.html&gt;
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Launch the application in gdb4hpc&lt;/h4&gt;
&lt;p&gt;Start gdb4hpc. You will be dropped into a command line interface. gdb4hpc is operated
by issuing debugging commands on the command line. The commands should be familiar
to anyone who has used gdb before.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ gdb4hpc
gdb4hpc 4.16.3. - Cray Interactive Parallel Debugger
With Cray Comparative Debugging Technology.
Copyright 2007-2024 Hewlett Packard Enterprise Development LP.
Copyright 1996-2016 University of Queensland. All Rights Reserved.

Type &quot;help&quot; for a list of commands.
Type &quot;help &amp;#x3C;cmd&gt;&quot; for detailed help about a command.
dbg all&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Use the &lt;code&gt;launch&lt;/code&gt; command to start the application in the debugger.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;dbg all&gt; launch $simpleMPI{8} --launcher-args=&quot;-N2 --ntasks-per-node=4 --exclusive -p griz256&quot; --gpu ./simpleMPI
Starting application, please wait...
Launched application...
0/8 ranks connected... (timeout in 300 seconds)
..
8/8 ranks connected.
Created network...
Connected to application...
Launch complete.
simpleMPI{0..7}: Initial breakpoint, main at simpleMPI.cpp:63
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;code&gt;$simpleMPI{8}&lt;/code&gt; tells gdb4hpc to launch a job with 8 ranks and give it a
handle called &lt;code&gt;simpleMPI&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;--launcher-args&lt;/code&gt; tells gdb4hpc which WLM arguments to use to launch the job
correctly. Use the same arguments you used in the &lt;strong&gt;Test the application&lt;/strong&gt; section.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;--gpu&lt;/code&gt; flag tells gdb4hpc to use the appropriate GPU-aware debugger. Here, that&apos;s cuda-gdb.&lt;/p&gt;
&lt;p&gt;The final argument, &lt;code&gt;./simpleMPI&lt;/code&gt;, specifies the binary to launch.&lt;/p&gt;
&lt;p&gt;After the launch completes, gdb4hpc stops at &lt;code&gt;main&lt;/code&gt; and gives you control.&lt;/p&gt;
&lt;h4&gt;Stop at a breakpoint in a GPU kernel&lt;/h4&gt;
&lt;p&gt;The sample application contains one CUDA Kernel called &lt;code&gt;simpleMPIKernel&lt;/code&gt;
defined in simpleMPI.cu. The source code for the kernel looks like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;52  // Device code
53  // Very simple GPU Kernel that computes square roots of input numbers
54  __global__ void simpleMPIKernel(float *input, float *output) {
55    int tid = blockIdx.x * blockDim.x + threadIdx.x;
56    output[tid] = sqrt(input[tid]);
57  }
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Set a breakpoint at the first line of the kernel with the &lt;code&gt;break&lt;/code&gt; command, just
like you would in gdb:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;dbg all&gt; break simpleMPI.cu:55
simpleMPI{0..7}: Breakpoint 1: file simpleMPI.cu, line 55.
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;p&gt;You could have also set the breakpoint via the function name by typing &lt;code&gt;break simpleMPIKernel&lt;/code&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Now use the &lt;code&gt;continue&lt;/code&gt; command to run the application until it reaches the
breakpoint in the kernel.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;dbg all&gt; continue
dbg all&gt; &amp;#x3C;$simpleMPI&gt;: Running on 8 nodes
simpleMPI{0..7}: Breakpoint 1,  at simpleMPI.cu:55
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The application has stopped in a GPU kernel. From here you can use the usual
debugger commands like &lt;code&gt;print&lt;/code&gt;, &lt;code&gt;step&lt;/code&gt;, &lt;code&gt;next&lt;/code&gt;, etc. You can use &lt;code&gt;print&lt;/code&gt; to
inspect GPU memory as you would any other memory. Here I print the first 8 items
of the &lt;code&gt;input&lt;/code&gt; array. Since the program initializes the array with random data,
the result is different for each rank, and your data might be different.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;dbg all&gt; info locals
simpleMPI{0..7}: Name:tid                       Type:@register int
simpleMPI{0..7}: Name:input                     Type:@generic float * @parameter
simpleMPI{0..7}: Name:output                    Type:@generic float * @parameter
dbg all&gt; print input[0]@8 # print the first 8 items of the input array
simpleMPI{0}: {0.840188,0.394383,0.783099,0.79844,0.911647,0.197551,0.335223,0.76823}
simpleMPI{1}: {0.432718,0.407395,0.918358,0.348798,0.421943,0.527366,0.664573,0.899948}
simpleMPI{2}: {0.490992,0.530234,0.736292,0.0698217,0.825561,0.719689,0.26409,0.439787}
simpleMPI{3}: {0.404152,0.403803,0.990251,0.10481,0.267578,0.685576,0.794025,0.972998}
simpleMPI{4}: {0.113738,0.293829,0.443866,0.706545,0.762252,0.451777,0.93063,0.569985}
simpleMPI{5}: {0.386505,0.760585,0.354206,0.784775,0.329661,0.25768,0.0700815,0.955718}
simpleMPI{6}: {0.959459,0.299068,0.889675,0.0221367,0.809361,0.0220988,0.373968,0.589243}
simpleMPI{7}: {0.839492,0.778462,0.195212,0.832727,0.125331,0.211325,0.0624168,0.56848}
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Inspect CUDA threads and use CUDA commands&lt;/h4&gt;
&lt;p&gt;The gdb4hpc debugger supports cuda-gdb&apos;s &lt;code&gt;cuda&lt;/code&gt; commands.&lt;/p&gt;
&lt;p&gt;To get info about currently running CUDA threads, use &lt;code&gt;info cuda threads&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;dbg all&gt; info cuda threads
simpleMPI{0}: Showing one thread per location; use -v for full list.
simpleMPI{0}:    BlockIdx ThreadIdx   Count    Location
simpleMPI{0}: Kernel 0
simpleMPI{0}:   (649,0,0)   (0,0,0)   7840 simpleMPI.cu    54
simpleMPI{0}: *   (0,0,0)   (0,0,0) 213344 simpleMPI.cu    55
simpleMPI{1}: Showing one thread per location; use -v for full list.
simpleMPI{1}:    BlockIdx ThreadIdx   Count    Location
simpleMPI{1}: Kernel 0
simpleMPI{1}:   (649,0,0)   (0,0,0)   7936 simpleMPI.cu    54
simpleMPI{1}: *   (0,0,0)   (0,0,0) 213248 simpleMPI.cu    55
simpleMPI{2}: Showing one thread per location; use -v for full list.
simpleMPI{2}:    BlockIdx ThreadIdx   Count    Location
simpleMPI{2}: Kernel 0
simpleMPI{2}:   (651,0,0)   (0,0,0)   8896 simpleMPI.cu    54
simpleMPI{2}: *   (0,0,0)   (0,0,0) 212288 simpleMPI.cu    55
...
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can see that each rank&apos;s GPU threads are all stopped in the
&lt;code&gt;simpleMPIKernel&lt;/code&gt; in &lt;code&gt;simpleMPI.cu&lt;/code&gt; lines 54 or 55. The ranks have different
locations because gdb4hpc debugs CUDA applications in
&lt;a href=&quot;https://sourceware.org/gdb/current/onlinedocs/gdb.html/All_002dStop-Mode.html&quot;&gt;all-stop mode&lt;/a&gt;.
All-stop mode means that once one thread stops, all other threads are stopped
immediately. This means that every thread was stopped as soon as the first
thread hit the breakpoint, resulting in all threads halting wherever they
happened to be at the time. Because of this, your locations might be slightly
different.&lt;/p&gt;
&lt;p&gt;The current thread (the thread that subsequent gdb4hpc commands will apply to) is
indicated with a &lt;code&gt;*&lt;/code&gt; symbol. You can switch the current block or thread (or warp or
lane) by using the &lt;code&gt;cuda [block x] thread y&lt;/code&gt; command. Since this application
uses a one dimensional grid of one dimensional thread blocks, you can derive
block and thread IDs by simply using the first number in the block&apos;s/thread&apos;s
(x, 0, 0) tuple.&lt;/p&gt;
&lt;p&gt;The example above shows block 649 on rank 0 as being in a different location
than block 0 thread 0. Your exact block and thread numbers might be different.
Use &lt;code&gt;cuda block x thread y&lt;/code&gt; to switch to a different block and thread. You can
confirm the switch by printing the CUDA kernel built in variables &lt;code&gt;blockIdx&lt;/code&gt;
and &lt;code&gt;threadIdx&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;dbg all&gt; cuda block 649 thread 0
simpleMPI{0}: [Switching focus to CUDA kernel 0, grid 1, block (649,0,0), thread (0,0,0), device 2, sm 0, warp 59, lane 0]
simpleMPI{0}: 0x00007fcef725da10        54      in simpleMPI.cu
simpleMPI{1}: [Switching focus to CUDA kernel 0, grid 1, block (649,0,0), thread (0,0,0), device 0, sm 0, warp 59, lane 0]
simpleMPI{1}: 0x00007f282725da20        54      in simpleMPI.cu
simpleMPI{2}: [Switching focus to CUDA kernel 0, grid 1, block (649,0,0), thread (0,0,0), device 3, sm 0, warp 59, lane 0]
simpleMPI{2}: 55        in simpleMPI.cu
simpleMPI{3}: [Switching focus to CUDA kernel 0, grid 1, block (649,0,0), thread (0,0,0), device 1, sm 0, warp 59, lane 0]
simpleMPI{3}: 0x00007f9dc725da20        54      in simpleMPI.cu
simpleMPI{4}: [Switching focus to CUDA kernel 0, grid 1, block (649,0,0), thread (0,0,0), device 2, sm 0, warp 59, lane 0]
simpleMPI{4}: 0x00007ff3d725da20        54      in simpleMPI.cu
simpleMPI{5}: [Switching focus to CUDA kernel 0, grid 1, block (649,0,0), thread (0,0,0), device 0, sm 0, warp 59, lane 0]
simpleMPI{5}: 0x00007f79c325da20        54      in simpleMPI.cu
simpleMPI{6}: [Switching focus to CUDA kernel 0, grid 1, block (649,0,0), thread (0,0,0), device 3, sm 0, warp 59, lane 0]
simpleMPI{6}: 0x00007f410525da20        54      in simpleMPI.cu
simpleMPI{7}: [Switching focus to CUDA kernel 0, grid 1, block (649,0,0), thread (0,0,0), device 1, sm 4, warp 50, lane 0]
simpleMPI{7}: 55        in simpleMPI.cu
dbg all&gt; print blockIdx
simpleMPI{0..7}: {x = 649, y = 0, z = 0}
dbg all&gt; print threadIdx
simpleMPI{0..7}: {x = 0, y = 0, z = 0}
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Use more CUDA commands&lt;/h4&gt;
&lt;p&gt;To see what other &lt;code&gt;info&lt;/code&gt; commands are available, use &lt;code&gt;cuda help info&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;dbg all&gt; cuda help info
simpleMPI{0..7}: Print informations about the current CUDA activities. Available options:
simpleMPI{0..7}:          devices : information about all the devices
simpleMPI{0..7}:              sms : information about all the SMs in the current device
simpleMPI{0..7}:            warps : information about all the warps in the current SM
simpleMPI{0..7}:            lanes : information about all the lanes in the current warp
simpleMPI{0..7}:          kernels : information about all the active kernels
simpleMPI{0..7}:         contexts : information about all the contexts
simpleMPI{0..7}:           blocks : information about all the active blocks in the current kernel
simpleMPI{0..7}:          threads : information about all the active threads in the current kernel
simpleMPI{0..7}:     launch trace : information about the parent kernels of the kernel in focus
simpleMPI{0..7}:  launch children : information about the kernels launched by the kernels in focus
simpleMPI{0..7}:          managed : information about global managed variables
simpleMPI{0..7}:             line : information about the filename and linenumber for a given $pc
simpleMPI{0..7}:
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;There are also other top level &lt;code&gt;cuda&lt;/code&gt; commands other than &lt;code&gt;info&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;dbg all&gt; cuda help
simpleMPI{0..7}: Print or select the CUDA focus.
simpleMPI{0..7}:
simpleMPI{0..7}: List of cuda subcommands:
simpleMPI{0..7}:
simpleMPI{0..7}: cuda block -- Print or select the current CUDA block.
simpleMPI{0..7}: cuda device -- Print or select the current CUDA device.
simpleMPI{0..7}: cuda grid -- Print or select the current CUDA grid.
simpleMPI{0..7}: cuda kernel -- Print or select the current CUDA kernel.
simpleMPI{0..7}: cuda lane -- Print or select the current CUDA lane.
simpleMPI{0..7}: cuda sm -- Print or select the current CUDA SM.
simpleMPI{0..7}: cuda thread -- Print or select the current CUDA thread.
simpleMPI{0..7}: cuda warp -- Print or select the current CUDA warp.
simpleMPI{0..7}:
simpleMPI{0..7}: Type &quot;help cuda&quot; followed by cuda subcommand name for full documentation.
simpleMPI{0..7}: Type &quot;apropos word&quot; to search for commands related to &quot;word&quot;.
simpleMPI{0..7}: Type &quot;apropos -v word&quot; for full documentation of commands related to &quot;word&quot;.
simpleMPI{0..7}: Command name abbreviations are allowed if unambiguous.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;See &lt;a href=&quot;https://docs.nvidia.com/cuda/cuda-gdb/index.html&quot;&gt;the cuda-gdb documentation&lt;/a&gt; for more information.&lt;/p&gt;
&lt;h4&gt;Single step in a GPU kernel&lt;/h4&gt;
&lt;p&gt;The usual &lt;code&gt;step&lt;/code&gt; and &lt;code&gt;next&lt;/code&gt; commands are supported. In a GPU kernel, stepping
has the granularity of a warp. A warp is a group of 32 GPU threads and is the
granularity the GPU scheduler works at. See
&lt;a href=&quot;https://docs.nvidia.com/cuda/cuda-gdb/index.html#single-stepping&quot;&gt;the cuda-gdb documentation&lt;/a&gt;
for more info.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;simpleMPI{0..7}: Breakpoint 1,  at simpleMPI.cu:55
dbg all&gt; list 55
simpleMPI{0..7}: 50       }
simpleMPI{0..7}: 51
simpleMPI{0..7}: 52     // Device code
simpleMPI{0..7}: 53     // Very simple GPU Kernel that computes square roots of input numbers
simpleMPI{0..7}: 54     __global__ void simpleMPIKernel(float *input, float *output) {
simpleMPI{0..7}: 55       int tid = blockIdx.x * blockDim.x + threadIdx.x;
simpleMPI{0..7}: 56       output[tid] = sqrt(input[tid]);
simpleMPI{0..7}: 57     }
simpleMPI{0..7}: 58
simpleMPI{0..7}: 59     // Initialize an array with random data (between 0 and 1)
dbg all&gt; next # set tid
simpleMPI{0..7}: 56       output[tid] = sqrt(input[tid]);
dbg all&gt; p tid
simpleMPI{0..7}: 64
dbg all&gt; p output[tid]
simpleMPI{0..7}: 0
dbg all&gt; n
simpleMPI{0..7}: 57     }
dbg all&gt; p output[tid]
simpleMPI{0}: 0.516397
simpleMPI{1}: 0.392356
simpleMPI{2}: 0.883166
simpleMPI{3}: 0.806327
simpleMPI{4}: 0.458106
simpleMPI{5}: 0.999046
simpleMPI{6}: 0.366925
simpleMPI{7}: 0.646477
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Next steps&lt;/h2&gt;
&lt;p&gt;You now know how to start debugging a multinode MPI/CUDA application with gdb4hpc in the HPE Cray Programming Environment! You can now start tracking down bugs in HPC applications with thousands of processes just as easily as you would in a single process application, saving you time and sanity.&lt;/p&gt;
&lt;p&gt;In general, gdb4hpc&apos;s GPU debugging capabilities are those of the underlying GPU
debugger. See the &lt;a href=&quot;https://docs.nvidia.com/cuda/cuda-gdb/index.html&quot;&gt;cuda-gdb documentation&lt;/a&gt;
for more capabilities.&lt;/p&gt;
&lt;p&gt;For more information about gdb4hpc, see &lt;a href=&quot;https://cpe.ext.hpe.com/docs/latest/debugging-tools/index.html#gdb4hpc&quot;&gt;the gdb4hpc documentation&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Happy bug hunting!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Using the hpegl and Morpheus Terraform Providers to provision a VMaaS workload]]></title><description><![CDATA[Introduction Many customers of HPE GreenLake for Private Cloud Enterprise want to use Terraform or the open-source alternative OpenTofu to…]]></description><link>https://developer.hpe.com/using-the-hpegl-hpe-greenlake-and-morpheus-terraform-providers-in-concert-to-create-provision-and-manage-hpe-pce-private-cloud-enterprise-vmaas-instances/</link><guid isPermaLink="false">https://developer.hpe.com/using-the-hpegl-hpe-greenlake-and-morpheus-terraform-providers-in-concert-to-create-provision-and-manage-hpe-pce-private-cloud-enterprise-vmaas-instances/</guid><pubDate>Wed, 15 Jan 2025 14:58:20 GMT</pubDate><content:encoded>&lt;style&gt; li { font-size: 27px; line-height: 33px; max-width: none; } &lt;/style&gt;
&lt;h1&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/h1&gt;
&lt;p&gt;Many customers of HPE GreenLake for Private Cloud Enterprise want to use &lt;a href=&quot;https://www.terraform.io/&quot;&gt;Terraform&lt;/a&gt; or the open-source alternative &lt;a href=&quot;https://opentofu.org/&quot;&gt;OpenTofu&lt;/a&gt; to create, provision and manage VMaaS (Virtual Machines as a Service, a service offering in HPE GreenLake for Private Cloud Enterprise) VMs. The cloud-based VMaaS service interacts with &lt;a href=&quot;https://morpheusdata.com/&quot;&gt;Morpheus&lt;/a&gt; (recently acquired by HPE) running on-premise, which provides the core Virtual Machine functionality. There is an HPE GreenLake Terraform provider called &lt;em&gt;&lt;a href=&quot;https://registry.terraform.io/providers/HPE/hpegl/latest&quot;&gt;hpegl&lt;/a&gt;&lt;/em&gt; (HPE GreenLake) and a separate &lt;a href=&quot;https://registry.terraform.io/providers/gomorpheus/morpheus/latest&quot;&gt;Morpheus&lt;/a&gt; provider. Both providers complement each other and, when used in concert, offer a rich set of IaC capabilities.&lt;/p&gt;
&lt;p&gt;In this blog post, we will demonstrate how to use the two providers in concert to create, provision and manage VMaaS VM instances. Central to this is a hpegl data source that can be used to retrieve an access token and URL for the on-premise Morpheus instance, which are then passed to the Morpheus provider.&lt;/p&gt;
&lt;p&gt;Note that we plan to write and release a new Terraform provider which will combine the functionality of the hpegl and Morpheus providers into a single provider and replace both. This will simplify the process of creating, provisioning and managing VM instances.  It will also extend the provider functionality.  This new provider will be released at a future date and will be the subject of a follow-on blog post.&lt;/p&gt;
&lt;h1&gt;&lt;strong&gt;IaC and HPE GreenLake for Private Cloud Enterprise&lt;/strong&gt;&lt;/h1&gt;
&lt;p&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/Infrastructure_as_code&quot;&gt;IaC&lt;/a&gt; (Infrastructure as Code) is a well-established approach to configuring, creating, and managing resources. Terraform and its open-source alternative OpenTofu, along with per-service providers, are especially popular. For HPE GreenLake for Private Cloud Enterprise&apos;s VMaaS (VM as a Service) service, there are two relevant providers:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;hpegl (HPE GreenLake), which interacts with the HPE GreenLake for Private Cloud Enterprise Services such as Identity and Access Management (IAM) and VMaaS. &lt;/li&gt;
&lt;li&gt;Morpheus, which interacts with the on-premise Morpheus instance or instances that are associated with the VMaaS service.  Each Morpheus instance is a separate on-premise installation of Morpheus. &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Note that we plan to write and release a new unified provider which will replace these providers and offer a richer set of functionality.&lt;/p&gt;
&lt;p&gt;These two current providers (hpegl and Morpheus) complement each other: &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;hpegl deals with HPE GreenLake IAM and manages VMaaS resources, including VM instances. &lt;/li&gt;
&lt;li&gt;Morpheus deals with underlying Morpheus resources on which the VMaaS resources – including VM instances – depend. This covers resources such as Clouds, VM images, Node Types, Groups, Instance Types, and Instance Layouts. &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The glue between these two providers is a hpegl &lt;em&gt;data source&lt;/em&gt; called &lt;em&gt;hpegl_vmaas_morpheus_details&lt;/em&gt;. For a specific VMaaS &lt;em&gt;location&lt;/em&gt;, this data source will output: &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A Morpheus access token. &lt;/li&gt;
&lt;li&gt;The URL of the Morpheus instance. &lt;/li&gt;
&lt;li&gt;The time to expiry of the access token (in seconds). &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The first two outputs can then be passed into a Morpheus provider stanza. See the following example where there is just one &lt;em&gt;location&lt;/em&gt; for a VMaaS instance:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-hcl&quot;&gt;provider &quot;hpegl&quot; {
  vmaas {
    location   = var.location
    space_name = var.space
  }
}

data &quot;hpegl_vmaas_morpheus_details&quot; &quot;morpheus_details&quot; {}

provider &quot;morpheus&quot; {
  url          = data.hpegl_vmaas_morpheus_details.morpheus_details.url
  access_token = data.hpegl_vmaas_morpheus_details.morpheus_details.access_token
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;For VMaaS instances with multiple locations, &lt;em&gt;provider aliasing&lt;/em&gt; can be used to tie a specific VMaaS &lt;em&gt;location&lt;/em&gt; to the corresponding Morpheus instance:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-hcl&quot;&gt;# Location 1
provider &quot;hpegl&quot; {
  vmaas {
    location   = var.location_1
    space_name = var.space_1
  }

  alias = &quot;location_1&quot;
}

data &quot;hpegl_vmaas_morpheus_details&quot; &quot;location_1&quot; {
  provider = hpegl.location_1
}

provider &quot;morpheus&quot; {
  url          = data.hpegl_vmaas_morpheus_details.location_1.url
  access_token = data.hpegl_vmaas_morpheus_details.location_1.access_token

  alias = &quot;morpheus_location_1&quot;
}


# Location 2
provider &quot;hpegl&quot; {
  vmaas {
    location   = var.location_2
    space_name = var.space_2
  }

  alias = &quot;location_2&quot;
}

data &quot;hpegl_vmaas_morpheus_details&quot; &quot;location_2&quot; {
  provider = hpegl.location_2
}

provider &quot;morpheus&quot; {
  url          = data.hpegl_vmaas_morpheus_details.location_2.url
  access_token = data.hpegl_vmaas_morpheus_details.location_2.access_token

  alias = &quot;morpheus_location_2&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;&lt;strong&gt;Complete examples for one VMaaS location&lt;/strong&gt;&lt;/h1&gt;
&lt;p&gt;In the next two sections, we present complete HCL examples for two configurations, each with one VMaaS location:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;One VMaaS cloud, with one VMaaS instance. &lt;/li&gt;
&lt;li&gt;Two VMaaS clouds, each with one VMaaS instance for a total of two VMaaS instances. &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt; The clouds are VMware vSphere clouds.  In both cases: &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;hpegl is used to: &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Define the VMaaS location and space. &lt;/li&gt;
&lt;li&gt;Retrieve the relevant Morpheus token and URL. &lt;/li&gt;
&lt;li&gt;Retrieve details for the VMaaS Datastore, Network, Resource Pool, Plan, Environment, and Cloud Folder. &lt;/li&gt;
&lt;li&gt;Create a VMaaS instance. &lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Morpheus is used to: &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Retrieve details for the VMaaS Cloud. &lt;/li&gt;
&lt;li&gt;Create a Group for the above cloud. &lt;/li&gt;
&lt;li&gt;Create an Instance Type. &lt;/li&gt;
&lt;li&gt;Retrieve details for a VM Image. &lt;/li&gt;
&lt;li&gt;Create a Node Type for the VM Image. &lt;/li&gt;
&lt;li&gt;Create an Instance Layout for the above Node Type and Instance Type.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;&lt;strong&gt;One location, one VMaaS cloud, one VMaaS instance example&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;/img/hpegl-and-morpheus-terraform-blog-post-frame-1-1-.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;HCL&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/hpe-dev-incubator/GLP-API-Tooling/tree/main/Terraform/HPEGL-Morpheus-PCE-VMaaS/One-Location/One-Cloud-One-Instance/1_location_1_cloud&quot;&gt;View the example HCL&lt;/a&gt;&lt;/p&gt;
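&lt;p&gt;For orientation, the sketch below shows how the Morpheus and hpegl resources chain together in this example. It is a trimmed, illustrative fragment: the attribute names follow the plan output shown below, but the cross-references and placeholder values are assumptions, so refer to the linked HCL for the authoritative configuration.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-hcl&quot;&gt;# Illustrative sketch only; values and cross-references are placeholders.
resource &quot;morpheus_group&quot; &quot;tf_example_group&quot; {
  name      = &quot;CFE TF Group&quot;
  code      = &quot;CFE TF Group&quot;
  location  = &quot;galway&quot;
  cloud_ids = [data.morpheus_cloud.cloud.id]
}

resource &quot;morpheus_instance_type&quot; &quot;tf_example_instance_type&quot; {
  name       = &quot;cfe_tf_example_instance&quot;
  code       = &quot;cfe_tf_example_instance&quot;
  category   = &quot;web&quot;
  visibility = &quot;public&quot;
}

resource &quot;morpheus_node_type&quot; &quot;tf_example_node&quot; {
  name             = &quot;cfe_tf_example_node_type_1&quot;
  short_name       = &quot;tfexamplenodetype1&quot;
  technology       = &quot;vmware&quot;
  version          = &quot;2.0&quot;
  virtual_image_id = data.morpheus_virtual_image.example_virtual_image.id
}

resource &quot;morpheus_instance_layout&quot; &quot;tf_example_instance_layout&quot; {
  name             = &quot;cfe_todo_app_frontend&quot;
  version          = &quot;1.0&quot;
  technology       = &quot;vmware&quot;
  instance_type_id = morpheus_instance_type.tf_example_instance_type.id
  node_type_ids    = [morpheus_node_type.tf_example_node.id]
}

resource &quot;hpegl_vmaas_instance&quot; &quot;sample_vm&quot; {
  cloud_id           = data.morpheus_cloud.cloud.id
  group_id           = morpheus_group.tf_example_group.id
  layout_id          = morpheus_instance_layout.tf_example_instance_layout.id
  plan_id            = data.hpegl_vmaas_plan.g1_large.id
  instance_type_code = morpheus_instance_type.tf_example_instance_type.code
  # name, labels, tags, config, network, and volume blocks are omitted here;
  # see the linked example HCL for the full resource definition.
}
&lt;/code&gt;&lt;/pre&gt;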
&lt;h3&gt;The output from a Terraform (v1.5.7) run to create the VMaaS instance&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;➜  1_location_1_cloud git:(blog1) ✗ terraform apply -var-file variables_tenant.tfvars -auto-approve
data.hpegl_vmaas_environment.env: Reading...
data.hpegl_vmaas_network.blue_segment: Reading...
data.hpegl_vmaas_plan.g1_large: Reading...
data.hpegl_vmaas_morpheus_details.morpheus_details: Reading...
data.hpegl_vmaas_plan.g1_large: Read complete after 0s [id=407]
data.hpegl_vmaas_network.blue_segment: Read complete after 0s [id=14]
data.hpegl_vmaas_environment.env: Read complete after 1s [id=1]
data.hpegl_vmaas_morpheus_details.morpheus_details: Read complete after 2s [id=94de0616-db52-4737-8afe-aeef618db00b]
data.morpheus_cloud.cloud: Reading...
data.morpheus_virtual_image.example_virtual_image: Reading...
data.morpheus_cloud.cloud: Read complete after 2s [name=HPE GreenLake VMaaS Cloud-Trial3]
data.hpegl_vmaas_datastore.c_3par: Reading...
data.hpegl_vmaas_resource_pool.cl_resource_pool: Reading...
data.hpegl_vmaas_cloud_folder.compute_folder: Reading...
data.hpegl_vmaas_resource_pool.cl_resource_pool: Read complete after 0s [id=3]
data.hpegl_vmaas_datastore.c_3par: Read complete after 0s [id=1]
data.morpheus_virtual_image.example_virtual_image: Read complete after 2s [name=ubuntu-20]
data.hpegl_vmaas_cloud_folder.compute_folder: Read complete after 0s [id=6]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # hpegl_vmaas_instance.sample_vm[0] will be created
  + resource &quot;hpegl_vmaas_instance&quot; &quot;sample_vm&quot; {
      + cloud_id           = 1
      + containers         = (known after apply)
      + environment_code   = &quot;dev&quot;
      + group_id           = (known after apply)
      + history            = (known after apply)
      + hostname           = (known after apply)
      + id                 = (known after apply)
      + instance_type_code = &quot;cfe_tf_example_instance&quot;
      + labels             = [
          + &quot;sample_vm&quot;,
        ]
      + layout_id          = (known after apply)
      + name               = (known after apply)
      + plan_id            = 407
      + server_id          = (known after apply)
      + status             = (known after apply)
      + tags               = {
          + &quot;APPLICATION&quot;       = &quot;Custom Ubuntu&quot;
          + &quot;BillingCostCenter&quot; = &quot;999&quot;
          + &quot;Division&quot;          = &quot;AUK&quot;
          + &quot;PatchGroup&quot;        = &quot;None&quot;
          + &quot;ResourceContact&quot;   = &quot;john.lenihan@hpe.com&quot;
          + &quot;ResourcePurpose&quot;   = &quot;CFE&quot;
          + &quot;purpose&quot;           = &quot;sample&quot;
        }

      + config {
          + asset_tag        = &quot;vm_tag&quot;
          + create_user      = true
          + folder_code      = &quot;group-v41&quot;
          + no_agent         = false
          + resource_pool_id = 3
        }

      + network {
          + id          = 14
          + internal_id = (known after apply)
          + is_primary  = (known after apply)
          + name        = (known after apply)
        }

      + volume {
          + datastore_id = &quot;1&quot;
          + id           = (known after apply)
          + name         = &quot;root_vol&quot;
          + root         = true
          + size         = 30
        }
    }

  # morpheus_group.tf_example_group will be created
  + resource &quot;morpheus_group&quot; &quot;tf_example_group&quot; {
      + cloud_ids = [
          + 1,
        ]
      + code      = &quot;CFE TF Group&quot;
      + id        = (known after apply)
      + location  = &quot;galway&quot;
      + name      = &quot;CFE TF Group&quot;
    }

  # morpheus_instance_layout.tf_example_instance_layout will be created
  + resource &quot;morpheus_instance_layout&quot; &quot;tf_example_instance_layout&quot; {
      + creatable                  = true
      + description                = (known after apply)
      + id                         = (known after apply)
      + instance_type_id           = (known after apply)
      + labels                     = [
          + &quot;demo&quot;,
          + &quot;layout&quot;,
          + &quot;terraform&quot;,
        ]
      + minimum_memory             = (known after apply)
      + name                       = &quot;cfe_todo_app_frontend&quot;
      + node_type_ids              = (known after apply)
      + option_type_ids            = (known after apply)
      + price_set_ids              = (known after apply)
      + spec_template_ids          = (known after apply)
      + support_convert_to_managed = (known after apply)
      + technology                 = &quot;vmware&quot;
      + version                    = &quot;1.0&quot;
      + workflow_id                = (known after apply)
    }

  # morpheus_instance_type.tf_example_instance_type will be created
  + resource &quot;morpheus_instance_type&quot; &quot;tf_example_instance_type&quot; {
      + category           = &quot;web&quot;
      + code               = &quot;cfe_tf_example_instance&quot;
      + description        = &quot;Terraform Example Instance Type&quot;
      + enable_deployments = (known after apply)
      + enable_scaling     = (known after apply)
      + enable_settings    = (known after apply)
      + environment_prefix = (known after apply)
      + featured           = true
      + id                 = (known after apply)
      + labels             = [
          + &quot;demo&quot;,
          + &quot;instance&quot;,
          + &quot;terraform&quot;,
        ]
      + name               = &quot;cfe_tf_example_instance&quot;
      + option_type_ids    = (known after apply)
      + price_set_ids      = (known after apply)
      + visibility         = &quot;public&quot;
    }

  # morpheus_node_type.tf_example_node will be created
  + resource &quot;morpheus_node_type&quot; &quot;tf_example_node&quot; {
      + category            = &quot;tfexample&quot;
      + file_template_ids   = (known after apply)
      + id                  = (known after apply)
      + labels              = [
          + &quot;demo&quot;,
          + &quot;instance&quot;,
          + &quot;terraform&quot;,
        ]
      + name                = &quot;cfe_tf_example_node_type_1&quot;
      + script_template_ids = (known after apply)
      + short_name          = &quot;tfexamplenodetype1&quot;
      + technology          = &quot;vmware&quot;
      + version             = &quot;2.0&quot;
      + virtual_image_id    = 967
    }

  # random_integer.random will be created
  + resource &quot;random_integer&quot; &quot;random&quot; {
      + id     = (known after apply)
      + max    = 50000
      + min    = 1
      + result = (known after apply)
    }

Plan: 6 to add, 0 to change, 0 to destroy.
morpheus_group.tf_example_group: Creating...
morpheus_node_type.tf_example_node: Creating...
morpheus_instance_type.tf_example_instance_type: Creating...
random_integer.random: Creating...
random_integer.random: Creation complete after 0s [id=34008]
morpheus_node_type.tf_example_node: Creation complete after 1s [id=1968]
morpheus_instance_type.tf_example_instance_type: Creation complete after 1s [id=207]
morpheus_instance_layout.tf_example_instance_layout: Creating...
morpheus_group.tf_example_group: Creation complete after 2s [id=83]
morpheus_instance_layout.tf_example_instance_layout: Creation complete after 2s [id=1409]
hpegl_vmaas_instance.sample_vm[0]: Creating...
hpegl_vmaas_instance.sample_vm[0]: Still creating... [10s elapsed]
hpegl_vmaas_instance.sample_vm[0]: Still creating... [20s elapsed]
hpegl_vmaas_instance.sample_vm[0]: Still creating... [30s elapsed]
hpegl_vmaas_instance.sample_vm[0]: Still creating... [40s elapsed]
hpegl_vmaas_instance.sample_vm[0]: Still creating... [50s elapsed]
hpegl_vmaas_instance.sample_vm[0]: Still creating... [1m0s elapsed]
hpegl_vmaas_instance.sample_vm[0]: Still creating... [1m10s elapsed]
hpegl_vmaas_instance.sample_vm[0]: Still creating... [1m20s elapsed]
hpegl_vmaas_instance.sample_vm[0]: Still creating... [1m30s elapsed]
hpegl_vmaas_instance.sample_vm[0]: Still creating... [1m40s elapsed]
hpegl_vmaas_instance.sample_vm[0]: Still creating... [1m50s elapsed]
hpegl_vmaas_instance.sample_vm[0]: Creation complete after 1m51s [id=52688]

Apply complete! Resources: 6 added, 0 changed, 0 destroyed.

 
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;The output from an OpenTofu (v1.6.2) run  to create the VMaaS instance&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;➜  1_location_1_cloud git:(blog1) ✗ tofu apply -var-file variables_tenant.tfvars -auto-approve
data.hpegl_vmaas_network.blue_segment: Reading...
data.hpegl_vmaas_morpheus_details.morpheus_details: Reading...
data.hpegl_vmaas_plan.g1_large: Reading...
data.hpegl_vmaas_environment.env: Reading...
data.hpegl_vmaas_environment.env: Read complete after 0s [id=1]
data.hpegl_vmaas_plan.g1_large: Read complete after 0s [id=407]
data.hpegl_vmaas_network.blue_segment: Read complete after 1s [id=14]
data.hpegl_vmaas_morpheus_details.morpheus_details: Read complete after 2s [id=94de0616-db52-4737-8afe-aeef618db00b]
data.morpheus_cloud.cloud: Reading...
data.morpheus_virtual_image.example_virtual_image: Reading...
data.morpheus_cloud.cloud: Read complete after 1s [name=HPE GreenLake VMaaS Cloud-Trial3]
data.hpegl_vmaas_resource_pool.cl_resource_pool: Reading...
data.hpegl_vmaas_datastore.c_3par: Reading...
data.hpegl_vmaas_cloud_folder.compute_folder: Reading...
data.hpegl_vmaas_resource_pool.cl_resource_pool: Read complete after 1s [id=3]
data.hpegl_vmaas_cloud_folder.compute_folder: Read complete after 1s [id=6]
data.morpheus_virtual_image.example_virtual_image: Read complete after 2s [name=ubuntu-20]
data.hpegl_vmaas_datastore.c_3par: Read complete after 1s [id=1]

OpenTofu used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

OpenTofu will perform the following actions:

  # hpegl_vmaas_instance.sample_vm[0] will be created
  + resource &quot;hpegl_vmaas_instance&quot; &quot;sample_vm&quot; {
      + cloud_id           = 1
      + containers         = (known after apply)
      + environment_code   = &quot;dev&quot;
      + group_id           = (known after apply)
      + history            = (known after apply)
      + hostname           = (known after apply)
      + id                 = (known after apply)
      + instance_type_code = &quot;cfe_tf_example_instance&quot;
      + labels             = [
          + &quot;sample_vm&quot;,
        ]
      + layout_id          = (known after apply)
      + name               = (known after apply)
      + plan_id            = 407
      + server_id          = (known after apply)
      + status             = (known after apply)
      + tags               = {
          + &quot;APPLICATION&quot;       = &quot;Custom Ubuntu&quot;
          + &quot;BillingCostCenter&quot; = &quot;999&quot;
          + &quot;Division&quot;          = &quot;AUK&quot;
          + &quot;PatchGroup&quot;        = &quot;None&quot;
          + &quot;ResourceContact&quot;   = &quot;john.lenihan@hpe.com&quot;
          + &quot;ResourcePurpose&quot;   = &quot;CFE&quot;
          + &quot;purpose&quot;           = &quot;sample&quot;
        }

      + config {
          + asset_tag        = &quot;vm_tag&quot;
          + create_user      = true
          + folder_code      = &quot;group-v41&quot;
          + no_agent         = false
          + resource_pool_id = 3
        }

      + network {
          + id          = 14
          + internal_id = (known after apply)
          + is_primary  = (known after apply)
          + name        = (known after apply)
        }

      + volume {
          + datastore_id = &quot;1&quot;
          + id           = (known after apply)
          + name         = &quot;root_vol&quot;
          + root         = true
          + size         = 30
        }
    }

  # morpheus_group.tf_example_group will be created
  + resource &quot;morpheus_group&quot; &quot;tf_example_group&quot; {
      + cloud_ids = [
          + 1,
        ]
      + code      = &quot;CFE TF Group&quot;
      + id        = (known after apply)
      + location  = &quot;galway&quot;
      + name      = &quot;CFE TF Group&quot;
    }

  # morpheus_instance_layout.tf_example_instance_layout will be created
  + resource &quot;morpheus_instance_layout&quot; &quot;tf_example_instance_layout&quot; {
      + creatable                  = true
      + description                = (known after apply)
      + id                         = (known after apply)
      + instance_type_id           = (known after apply)
      + labels                     = [
          + &quot;demo&quot;,
          + &quot;layout&quot;,
          + &quot;terraform&quot;,
        ]
      + minimum_memory             = (known after apply)
      + name                       = &quot;cfe_todo_app_frontend&quot;
      + node_type_ids              = (known after apply)
      + option_type_ids            = (known after apply)
      + price_set_ids              = (known after apply)
      + spec_template_ids          = (known after apply)
      + support_convert_to_managed = (known after apply)
      + technology                 = &quot;vmware&quot;
      + version                    = &quot;1.0&quot;
      + workflow_id                = (known after apply)
    }

  # morpheus_instance_type.tf_example_instance_type will be created
  + resource &quot;morpheus_instance_type&quot; &quot;tf_example_instance_type&quot; {
      + category           = &quot;web&quot;
      + code               = &quot;cfe_tf_example_instance&quot;
      + description        = &quot;Terraform Example Instance Type&quot;
      + enable_deployments = (known after apply)
      + enable_scaling     = (known after apply)
      + enable_settings    = (known after apply)
      + environment_prefix = (known after apply)
      + featured           = true
      + id                 = (known after apply)
      + labels             = [
          + &quot;demo&quot;,
          + &quot;instance&quot;,
          + &quot;terraform&quot;,
        ]
      + name               = &quot;cfe_tf_example_instance&quot;
      + option_type_ids    = (known after apply)
      + price_set_ids      = (known after apply)
      + visibility         = &quot;public&quot;
    }

  # morpheus_node_type.tf_example_node will be created
  + resource &quot;morpheus_node_type&quot; &quot;tf_example_node&quot; {
      + category            = &quot;tfexample&quot;
      + file_template_ids   = (known after apply)
      + id                  = (known after apply)
      + labels              = [
          + &quot;demo&quot;,
          + &quot;instance&quot;,
          + &quot;terraform&quot;,
        ]
      + name                = &quot;cfe_tf_example_node_type_1&quot;
      + script_template_ids = (known after apply)
      + short_name          = &quot;tfexamplenodetype1&quot;
      + technology          = &quot;vmware&quot;
      + version             = &quot;2.0&quot;
      + virtual_image_id    = 967
    }

  # random_integer.random will be created
  + resource &quot;random_integer&quot; &quot;random&quot; {
      + id     = (known after apply)
      + max    = 50000
      + min    = 1
      + result = (known after apply)
    }

Plan: 6 to add, 0 to change, 0 to destroy.
morpheus_node_type.tf_example_node: Creating...
morpheus_instance_type.tf_example_instance_type: Creating...
morpheus_group.tf_example_group: Creating...
random_integer.random: Creating...
random_integer.random: Creation complete after 0s [id=38926]
morpheus_instance_type.tf_example_instance_type: Creation complete after 2s [id=208]
morpheus_node_type.tf_example_node: Creation complete after 2s [id=1969]
morpheus_instance_layout.tf_example_instance_layout: Creating...
morpheus_group.tf_example_group: Creation complete after 2s [id=84]
morpheus_instance_layout.tf_example_instance_layout: Creation complete after 2s [id=1410]
hpegl_vmaas_instance.sample_vm[0]: Creating...
hpegl_vmaas_instance.sample_vm[0]: Still creating... [10s elapsed]
hpegl_vmaas_instance.sample_vm[0]: Still creating... [20s elapsed]
hpegl_vmaas_instance.sample_vm[0]: Still creating... [30s elapsed]
hpegl_vmaas_instance.sample_vm[0]: Still creating... [40s elapsed]
hpegl_vmaas_instance.sample_vm[0]: Still creating... [50s elapsed]
hpegl_vmaas_instance.sample_vm[0]: Still creating... [1m0s elapsed]
hpegl_vmaas_instance.sample_vm[0]: Still creating... [1m10s elapsed]
hpegl_vmaas_instance.sample_vm[0]: Still creating... [1m20s elapsed]
hpegl_vmaas_instance.sample_vm[0]: Still creating... [1m30s elapsed]
hpegl_vmaas_instance.sample_vm[0]: Creation complete after 1m34s [id=52689]

Apply complete! Resources: 6 added, 0 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;One location, two VMaaS clouds, two VMaaS instances example&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;/img/hpegl-and-morpheus-terraform-blog-post-frame-2-1-.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The example HCL used here is based on the HCL for the first example. However, we have created two separate &lt;em&gt;Modules&lt;/em&gt; based on that HCL:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;morpheus_artefacts&lt;/em&gt; does the following. &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Takes as input: &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A VMaaS cloud name &lt;/li&gt;
&lt;li&gt;A group name &lt;/li&gt;
&lt;li&gt;An image name &lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Creates: &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A group &lt;/li&gt;
&lt;li&gt;An instance type &lt;/li&gt;
&lt;li&gt;An instance layout &lt;/li&gt;
&lt;li&gt;A node type &lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Outputs: &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Instance type details &lt;/li&gt;
&lt;li&gt;Instance layout details &lt;/li&gt;
&lt;li&gt;Node type details &lt;/li&gt;
&lt;li&gt;Group details &lt;/li&gt;
&lt;li&gt;Cloud ID&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;em&gt;vmaas_instance&lt;/em&gt; does the following. &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Takes inputs from an instance of the &lt;em&gt;morpheus_artefacts&lt;/em&gt; module &lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Retrieves details for the following VMaaS resources: &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;VMaaS datastore &lt;/li&gt;
&lt;li&gt;VMaaS network &lt;/li&gt;
&lt;li&gt;VMaaS resource pool &lt;/li&gt;
&lt;li&gt;VMaaS plan &lt;/li&gt;
&lt;li&gt;VMaaS environment &lt;/li&gt;
&lt;li&gt;VMaaS cloud folder &lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Creates a VMaaS instance &lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These modules can be combined in different ways. In our specific case we: &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Make two calls to &lt;em&gt;morpheus_artefacts&lt;/em&gt;. &lt;/li&gt;
&lt;li&gt;Make two calls to &lt;em&gt;vmaas_instance&lt;/em&gt; each using outputs from one of the calls to &lt;em&gt;morpheus_artefacts&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In this way, we create two VMaaS instances, each in a separate VMaaS cloud, all backed by the one underlying on-premise Morpheus instance. The root-module wiring is sketched below.&lt;/p&gt;
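&lt;p&gt;The module names in this sketch match the plan output below; the module sources and input names are illustrative assumptions, and the exact HCL is in the linked repository.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-hcl&quot;&gt;# Root-module sketch: one morpheus_artefacts/vmaas_instance pair per VMaaS cloud.
# Module paths and input names are illustrative assumptions.

module &quot;morpheus_artefacts_1&quot; {
  source     = &quot;./morpheus_artefacts&quot;
  cloud_name = &quot;HPE GreenLake VMaaS Cloud&quot;
  group_name = &quot;tf_example_group_1&quot;
  image_name = &quot;ubuntu20&quot;
}

module &quot;morpheus_artefacts_2&quot; {
  source     = &quot;./morpheus_artefacts&quot;
  cloud_name = &quot;ST01&quot;
  group_name = &quot;tf_example_group_2&quot;
  image_name = &quot;ubuntu-20&quot;
}

module &quot;instance_cloud_1&quot; {
  source             = &quot;./vmaas_instance&quot;
  cloud_id           = module.morpheus_artefacts_1.cloud_id
  group_id           = module.morpheus_artefacts_1.group_id
  layout_id          = module.morpheus_artefacts_1.instance_layout_id
  instance_type_code = module.morpheus_artefacts_1.instance_type_code
  # plus network, plan, datastore names, etc., omitted for brevity
}

module &quot;instance_cloud_2&quot; {
  source             = &quot;./vmaas_instance&quot;
  cloud_id           = module.morpheus_artefacts_2.cloud_id
  group_id           = module.morpheus_artefacts_2.group_id
  layout_id          = module.morpheus_artefacts_2.instance_layout_id
  instance_type_code = module.morpheus_artefacts_2.instance_type_code
  # plus network, plan, datastore names, etc., omitted for brevity
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Because each &lt;em&gt;vmaas_instance&lt;/em&gt; call consumes only the outputs of its paired &lt;em&gt;morpheus_artefacts&lt;/em&gt; call, extending this to a third cloud is just one more pair of module blocks.&lt;/p&gt;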
&lt;h3&gt;HCL&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/hpe-dev-incubator/GLP-API-Tooling/tree/main/Terraform/HPEGL-Morpheus-PCE-VMaaS/One-Location/One-Cloud-One-Instance/1_location_1_cloud&quot;&gt;View the example HCL&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;The output from a Terraform (v1.5.7) run to create the two VMaaS instances&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;➜  2_clouds_module_nest git:(blog1) ✗ terraform apply -var-file variables_tenant.tfvars -auto-approve
data.hpegl_vmaas_morpheus_details.morpheus_details: Reading...
module.instance_cloud_1.data.hpegl_vmaas_environment.env: Reading...
module.instance_cloud_1.data.hpegl_vmaas_network.blue_segment: Reading...
module.instance_cloud_2.data.hpegl_vmaas_plan.g1_large: Reading...
module.instance_cloud_1.data.hpegl_vmaas_plan.g1_large: Reading...
module.instance_cloud_2.data.hpegl_vmaas_environment.env: Reading...
module.instance_cloud_2.data.hpegl_vmaas_network.blue_segment: Reading...
module.instance_cloud_1.data.hpegl_vmaas_network.blue_segment: Read complete after 2s [id=22]
module.instance_cloud_1.data.hpegl_vmaas_environment.env: Read complete after 2s [id=1]
module.instance_cloud_2.data.hpegl_vmaas_network.blue_segment: Read complete after 3s [id=41]
module.instance_cloud_2.data.hpegl_vmaas_plan.g1_large: Read complete after 3s [id=877]
module.instance_cloud_1.data.hpegl_vmaas_plan.g1_large: Read complete after 3s [id=877]
module.instance_cloud_2.data.hpegl_vmaas_environment.env: Read complete after 3s [id=1]
data.hpegl_vmaas_morpheus_details.morpheus_details: Read complete after 5s [id=4c4fcdd1-e283-4d9e-8a43-d050a3d201f9]
module.morpheus_artefacts_1.data.morpheus_cloud.cloud: Reading...
module.morpheus_artefacts_1.data.morpheus_virtual_image.example_virtual_image: Reading...
module.morpheus_artefacts_2.data.morpheus_cloud.cloud: Reading...
module.morpheus_artefacts_2.data.morpheus_virtual_image.example_virtual_image: Reading...
module.morpheus_artefacts_1.data.morpheus_cloud.cloud: Read complete after 5s [name=HPE GreenLake VMaaS Cloud]
module.morpheus_artefacts_2.data.morpheus_cloud.cloud: Read complete after 5s [name=ST01]
module.instance_cloud_1.data.hpegl_vmaas_resource_pool.cl_resource_pool: Reading...
module.instance_cloud_1.data.hpegl_vmaas_datastore.c_3par: Reading...
module.instance_cloud_1.data.hpegl_vmaas_cloud_folder.compute_folder: Reading...
module.instance_cloud_2.data.hpegl_vmaas_cloud_folder.compute_folder: Reading...
module.instance_cloud_2.data.hpegl_vmaas_datastore.c_3par: Reading...
module.instance_cloud_2.data.hpegl_vmaas_resource_pool.cl_resource_pool: Reading...
module.morpheus_artefacts_1.data.morpheus_virtual_image.example_virtual_image: Read complete after 5s [name=ubuntu20]
module.morpheus_artefacts_2.data.morpheus_virtual_image.example_virtual_image: Read complete after 5s [name=ubuntu-20]
module.instance_cloud_1.data.hpegl_vmaas_resource_pool.cl_resource_pool: Read complete after 2s [id=1]
module.instance_cloud_1.data.hpegl_vmaas_cloud_folder.compute_folder: Read complete after 2s [id=4]
module.instance_cloud_1.data.hpegl_vmaas_datastore.c_3par: Read complete after 2s [id=4]
module.instance_cloud_2.data.hpegl_vmaas_resource_pool.cl_resource_pool: Read complete after 2s [id=8]
module.instance_cloud_2.data.hpegl_vmaas_cloud_folder.compute_folder: Read complete after 2s [id=11]
module.instance_cloud_2.data.hpegl_vmaas_datastore.c_3par: Read complete after 2s [id=11]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0] will be created
  + resource &quot;hpegl_vmaas_instance&quot; &quot;sample_vm&quot; {
      + cloud_id           = 1
      + containers         = (known after apply)
      + environment_code   = &quot;dev&quot;
      + group_id           = (known after apply)
      + history            = (known after apply)
      + hostname           = (known after apply)
      + id                 = (known after apply)
      + instance_type_code = (known after apply)
      + labels             = [
          + &quot;sample_vm&quot;,
        ]
      + layout_id          = (known after apply)
      + name               = (known after apply)
      + plan_id            = 877
      + server_id          = (known after apply)
      + status             = (known after apply)
      + tags               = {
          + &quot;APPLICATION&quot;       = &quot;Custom Ubuntu&quot;
          + &quot;BillingCostCenter&quot; = &quot;999&quot;
          + &quot;Division&quot;          = &quot;AUK&quot;
          + &quot;PatchGroup&quot;        = &quot;None&quot;
          + &quot;ResourceContact&quot;   = &quot;john.lenihan@hpe.com&quot;
          + &quot;ResourcePurpose&quot;   = &quot;CFE&quot;
          + &quot;purpose&quot;           = &quot;sample&quot;
        }

      + config {
          + asset_tag        = &quot;vm_tag&quot;
          + create_user      = true
          + folder_code      = &quot;group-v1009&quot;
          + no_agent         = false
          + resource_pool_id = 1
        }

      + network {
          + id          = 22
          + internal_id = (known after apply)
          + is_primary  = (known after apply)
          + name        = (known after apply)
        }

      + volume {
          + datastore_id = &quot;4&quot;
          + id           = (known after apply)
          + name         = &quot;root_vol&quot;
          + root         = true
          + size         = 30
        }
    }

  # module.instance_cloud_1.random_integer.random will be created
  + resource &quot;random_integer&quot; &quot;random&quot; {
      + id      = (known after apply)
      + keepers = {
          + &quot;cloud&quot; = &quot;1&quot;
        }
      + max     = 50000
      + min     = 1
      + result  = (known after apply)
    }

  # module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0] will be created
  + resource &quot;hpegl_vmaas_instance&quot; &quot;sample_vm&quot; {
      + cloud_id           = 5
      + containers         = (known after apply)
      + environment_code   = &quot;dev&quot;
      + group_id           = (known after apply)
      + history            = (known after apply)
      + hostname           = (known after apply)
      + id                 = (known after apply)
      + instance_type_code = (known after apply)
      + labels             = [
          + &quot;sample_vm&quot;,
        ]
      + layout_id          = (known after apply)
      + name               = (known after apply)
      + plan_id            = 877
      + server_id          = (known after apply)
      + status             = (known after apply)
      + tags               = {
          + &quot;APPLICATION&quot;       = &quot;Custom Ubuntu&quot;
          + &quot;BillingCostCenter&quot; = &quot;999&quot;
          + &quot;Division&quot;          = &quot;AUK&quot;
          + &quot;PatchGroup&quot;        = &quot;None&quot;
          + &quot;ResourceContact&quot;   = &quot;john.lenihan@hpe.com&quot;
          + &quot;ResourcePurpose&quot;   = &quot;CFE&quot;
          + &quot;purpose&quot;           = &quot;sample&quot;
        }

      + config {
          + asset_tag        = &quot;vm_tag&quot;
          + create_user      = true
          + folder_code      = &quot;group-v68&quot;
          + no_agent         = false
          + resource_pool_id = 8
        }

      + network {
          + id          = 41
          + internal_id = (known after apply)
          + is_primary  = (known after apply)
          + name        = (known after apply)
        }

      + volume {
          + datastore_id = &quot;11&quot;
          + id           = (known after apply)
          + name         = &quot;root_vol&quot;
          + root         = true
          + size         = 30
        }
    }

  # module.instance_cloud_2.random_integer.random will be created
  + resource &quot;random_integer&quot; &quot;random&quot; {
      + id      = (known after apply)
      + keepers = {
          + &quot;cloud&quot; = &quot;5&quot;
        }
      + max     = 50000
      + min     = 1
      + result  = (known after apply)
    }

  # module.morpheus_artefacts_1.morpheus_group.tf_example_group will be created
  + resource &quot;morpheus_group&quot; &quot;tf_example_group&quot; {
      + cloud_ids = [
          + 1,
        ]
      + code      = (known after apply)
      + id        = (known after apply)
      + location  = &quot;galway&quot;
      + name      = (known after apply)
    }

  # module.morpheus_artefacts_1.morpheus_instance_layout.tf_example_instance_layout will be created
  + resource &quot;morpheus_instance_layout&quot; &quot;tf_example_instance_layout&quot; {
      + creatable                  = true
      + description                = (known after apply)
      + id                         = (known after apply)
      + instance_type_id           = (known after apply)
      + labels                     = [
          + &quot;demo&quot;,
          + &quot;layout&quot;,
          + &quot;terraform&quot;,
        ]
      + minimum_memory             = (known after apply)
      + name                       = (known after apply)
      + node_type_ids              = (known after apply)
      + option_type_ids            = (known after apply)
      + price_set_ids              = (known after apply)
      + spec_template_ids          = (known after apply)
      + support_convert_to_managed = (known after apply)
      + technology                 = &quot;vmware&quot;
      + version                    = &quot;1.0&quot;
      + workflow_id                = (known after apply)
    }

  # module.morpheus_artefacts_1.morpheus_instance_type.tf_example_instance_type will be created
  + resource &quot;morpheus_instance_type&quot; &quot;tf_example_instance_type&quot; {
      + category           = &quot;web&quot;
      + code               = (known after apply)
      + description        = &quot;Terraform Example Instance Type&quot;
      + enable_deployments = (known after apply)
      + enable_scaling     = (known after apply)
      + enable_settings    = (known after apply)
      + environment_prefix = (known after apply)
      + featured           = true
      + id                 = (known after apply)
      + labels             = [
          + &quot;demo&quot;,
          + &quot;instance&quot;,
          + &quot;terraform&quot;,
        ]
      + name               = (known after apply)
      + option_type_ids    = (known after apply)
      + price_set_ids      = (known after apply)
      + visibility         = &quot;public&quot;
    }

  # module.morpheus_artefacts_1.morpheus_node_type.tf_example_node will be created
  + resource &quot;morpheus_node_type&quot; &quot;tf_example_node&quot; {
      + category            = &quot;tfexample&quot;
      + file_template_ids   = (known after apply)
      + id                  = (known after apply)
      + labels              = [
          + &quot;demo&quot;,
          + &quot;instance&quot;,
          + &quot;terraform&quot;,
        ]
      + name                = (known after apply)
      + script_template_ids = (known after apply)
      + short_name          = &quot;tfexamplenodetype-ubuntu20&quot;
      + technology          = &quot;vmware&quot;
      + version             = &quot;2.0&quot;
      + virtual_image_id    = 3065
    }

  # module.morpheus_artefacts_1.random_integer.random will be created
  + resource &quot;random_integer&quot; &quot;random&quot; {
      + id      = (known after apply)
      + keepers = {
          + &quot;cloud&quot; = &quot;HPE GreenLake VMaaS Cloud&quot;
        }
      + max     = 50000
      + min     = 1
      + result  = (known after apply)
    }

  # module.morpheus_artefacts_2.morpheus_group.tf_example_group will be created
  + resource &quot;morpheus_group&quot; &quot;tf_example_group&quot; {
      + cloud_ids = [
          + 5,
        ]
      + code      = (known after apply)
      + id        = (known after apply)
      + location  = &quot;galway&quot;
      + name      = (known after apply)
    }

  # module.morpheus_artefacts_2.morpheus_instance_layout.tf_example_instance_layout will be created
  + resource &quot;morpheus_instance_layout&quot; &quot;tf_example_instance_layout&quot; {
      + creatable                  = true
      + description                = (known after apply)
      + id                         = (known after apply)
      + instance_type_id           = (known after apply)
      + labels                     = [
          + &quot;demo&quot;,
          + &quot;layout&quot;,
          + &quot;terraform&quot;,
        ]
      + minimum_memory             = (known after apply)
      + name                       = (known after apply)
      + node_type_ids              = (known after apply)
      + option_type_ids            = (known after apply)
      + price_set_ids              = (known after apply)
      + spec_template_ids          = (known after apply)
      + support_convert_to_managed = (known after apply)
      + technology                 = &quot;vmware&quot;
      + version                    = &quot;1.0&quot;
      + workflow_id                = (known after apply)
    }

  # module.morpheus_artefacts_2.morpheus_instance_type.tf_example_instance_type will be created
  + resource &quot;morpheus_instance_type&quot; &quot;tf_example_instance_type&quot; {
      + category           = &quot;web&quot;
      + code               = (known after apply)
      + description        = &quot;Terraform Example Instance Type&quot;
      + enable_deployments = (known after apply)
      + enable_scaling     = (known after apply)
      + enable_settings    = (known after apply)
      + environment_prefix = (known after apply)
      + featured           = true
      + id                 = (known after apply)
      + labels             = [
          + &quot;demo&quot;,
          + &quot;instance&quot;,
          + &quot;terraform&quot;,
        ]
      + name               = (known after apply)
      + option_type_ids    = (known after apply)
      + price_set_ids      = (known after apply)
      + visibility         = &quot;public&quot;
    }

  # module.morpheus_artefacts_2.morpheus_node_type.tf_example_node will be created
  + resource &quot;morpheus_node_type&quot; &quot;tf_example_node&quot; {
      + category            = &quot;tfexample&quot;
      + file_template_ids   = (known after apply)
      + id                  = (known after apply)
      + labels              = [
          + &quot;demo&quot;,
          + &quot;instance&quot;,
          + &quot;terraform&quot;,
        ]
      + name                = (known after apply)
      + script_template_ids = (known after apply)
      + short_name          = &quot;tfexamplenodetype-ubuntu-20&quot;
      + technology          = &quot;vmware&quot;
      + version             = &quot;2.0&quot;
      + virtual_image_id    = 3056
    }

  # module.morpheus_artefacts_2.random_integer.random will be created
  + resource &quot;random_integer&quot; &quot;random&quot; {
      + id      = (known after apply)
      + keepers = {
          + &quot;cloud&quot; = &quot;ST01&quot;
        }
      + max     = 50000
      + min     = 1
      + result  = (known after apply)
    }

Plan: 14 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + instance_ids_cloud_1         = (known after apply)
  + instance_ids_cloud_2         = (known after apply)
  + instance_layout_id_cloud_1   = (known after apply)
  + instance_layout_id_cloud_2   = (known after apply)
  + instance_layout_name_cloud_1 = (known after apply)
  + instance_layout_name_cloud_2 = (known after apply)
  + instance_names_cloud_1       = (known after apply)
  + instance_names_cloud_2       = (known after apply)
  + node_type_id_cloud_1         = (known after apply)
  + node_type_id_cloud_2         = (known after apply)
  + node_type_name_cloud_1       = (known after apply)
  + node_type_name_cloud_2       = (known after apply)
module.morpheus_artefacts_2.random_integer.random: Creating...
module.morpheus_artefacts_1.random_integer.random: Creating...
module.morpheus_artefacts_2.random_integer.random: Creation complete after 0s [id=29896]
module.morpheus_artefacts_1.random_integer.random: Creation complete after 0s [id=17891]
module.morpheus_artefacts_1.morpheus_group.tf_example_group: Creating...
module.morpheus_artefacts_1.morpheus_instance_type.tf_example_instance_type: Creating...
module.morpheus_artefacts_2.morpheus_instance_type.tf_example_instance_type: Creating...
module.morpheus_artefacts_1.morpheus_node_type.tf_example_node: Creating...
module.instance_cloud_1.random_integer.random: Creating...
module.instance_cloud_2.random_integer.random: Creating...
module.morpheus_artefacts_2.morpheus_group.tf_example_group: Creating...
module.morpheus_artefacts_2.morpheus_node_type.tf_example_node: Creating...
module.instance_cloud_1.random_integer.random: Creation complete after 0s [id=39454]
module.instance_cloud_2.random_integer.random: Creation complete after 0s [id=48337]
module.morpheus_artefacts_1.morpheus_node_type.tf_example_node: Creation complete after 5s [id=5104]
module.morpheus_artefacts_2.morpheus_node_type.tf_example_node: Creation complete after 5s [id=5107]
module.morpheus_artefacts_1.morpheus_instance_type.tf_example_instance_type: Creation complete after 5s [id=328]
module.morpheus_artefacts_1.morpheus_instance_layout.tf_example_instance_layout: Creating...
module.morpheus_artefacts_2.morpheus_instance_type.tf_example_instance_type: Creation complete after 6s [id=325]
module.morpheus_artefacts_2.morpheus_instance_layout.tf_example_instance_layout: Creating...
module.morpheus_artefacts_2.morpheus_group.tf_example_group: Creation complete after 7s [id=88]
module.morpheus_artefacts_1.morpheus_group.tf_example_group: Creation complete after 8s [id=85]
module.morpheus_artefacts_1.morpheus_instance_layout.tf_example_instance_layout: Creation complete after 6s [id=3556]
module.morpheus_artefacts_2.morpheus_instance_layout.tf_example_instance_layout: Creation complete after 5s [id=3553]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Creating...
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Creating...
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [10s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [10s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [20s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [20s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [30s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [30s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [40s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [40s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [50s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [50s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [1m0s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [1m0s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [1m10s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [1m10s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [1m20s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [1m20s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [1m30s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [1m30s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [1m40s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [1m40s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [1m50s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [1m50s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [2m0s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [2m0s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [2m10s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [2m10s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [2m20s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [2m20s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [2m30s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [2m30s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [2m40s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [2m40s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [2m50s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [2m50s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [3m0s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [3m0s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [3m10s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [3m10s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [3m20s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [3m20s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [3m30s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [3m30s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [3m40s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [3m40s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [3m50s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [3m50s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [4m0s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [4m0s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [4m10s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [4m10s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Creation complete after 4m18s [id=628]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [4m20s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [4m30s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Creation complete after 4m35s [id=631]

Apply complete! Resources: 14 added, 0 changed, 0 destroyed.

Outputs:

instance_ids_cloud_1 = &quot;628&quot;
instance_ids_cloud_2 = &quot;631&quot;
instance_layout_id_cloud_1 = &quot;3556&quot;
instance_layout_id_cloud_2 = &quot;3553&quot;
instance_layout_name_cloud_1 = &quot;cfe_todo_app_frontend--17891&quot;
instance_layout_name_cloud_2 = &quot;cfe_todo_app_frontend--29896&quot;
instance_names_cloud_1 = &quot;CFE morpheus-vmaas sample VM 1-39454-0&quot;
instance_names_cloud_2 = &quot;CFE morpheus-vmaas sample VM 1-48337-0&quot;
node_type_id_cloud_1 = &quot;5104&quot;
node_type_id_cloud_2 = &quot;5107&quot;
node_type_name_cloud_1 = &quot;cfe_tf_example_node_type--17891&quot;
node_type_name_cloud_2 = &quot;cfe_tf_example_node_type--29896&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;The output from an OpenTofu (v1.6.2) run to create the two VMaaS instances&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;➜  2_clouds_module_nest git:(blog1) ✗ tofu apply -var-file variables_tenant.tfvars -auto-approve
module.instance_cloud_1.data.hpegl_vmaas_environment.env: Reading...
module.instance_cloud_2.data.hpegl_vmaas_environment.env: Reading...
module.instance_cloud_1.data.hpegl_vmaas_network.blue_segment: Reading...
module.instance_cloud_2.data.hpegl_vmaas_plan.g1_large: Reading...
module.instance_cloud_1.data.hpegl_vmaas_plan.g1_large: Reading...
module.instance_cloud_2.data.hpegl_vmaas_network.blue_segment: Reading...
data.hpegl_vmaas_morpheus_details.morpheus_details: Reading...
module.instance_cloud_2.data.hpegl_vmaas_environment.env: Read complete after 2s [id=1]
module.instance_cloud_2.data.hpegl_vmaas_network.blue_segment: Read complete after 2s [id=41]
module.instance_cloud_1.data.hpegl_vmaas_environment.env: Read complete after 3s [id=1]
module.instance_cloud_1.data.hpegl_vmaas_network.blue_segment: Read complete after 3s [id=22]
module.instance_cloud_1.data.hpegl_vmaas_plan.g1_large: Read complete after 3s [id=877]
module.instance_cloud_2.data.hpegl_vmaas_plan.g1_large: Read complete after 3s [id=877]
data.hpegl_vmaas_morpheus_details.morpheus_details: Read complete after 5s [id=4c4fcdd1-e283-4d9e-8a43-d050a3d201f9]
module.morpheus_artefacts_2.data.morpheus_cloud.cloud: Reading...
module.morpheus_artefacts_2.data.morpheus_virtual_image.example_virtual_image: Reading...
module.morpheus_artefacts_1.data.morpheus_cloud.cloud: Reading...
module.morpheus_artefacts_1.data.morpheus_virtual_image.example_virtual_image: Reading...
module.morpheus_artefacts_2.data.morpheus_cloud.cloud: Read complete after 5s [name=ST01]
module.instance_cloud_2.data.hpegl_vmaas_cloud_folder.compute_folder: Reading...
module.instance_cloud_2.data.hpegl_vmaas_resource_pool.cl_resource_pool: Reading...
module.instance_cloud_2.data.hpegl_vmaas_datastore.c_3par: Reading...
module.morpheus_artefacts_2.data.morpheus_virtual_image.example_virtual_image: Read complete after 5s [name=ubuntu-20]
module.morpheus_artefacts_1.data.morpheus_virtual_image.example_virtual_image: Read complete after 5s [name=ubuntu20]
module.morpheus_artefacts_1.data.morpheus_cloud.cloud: Read complete after 5s [name=HPE GreenLake VMaaS Cloud]
module.instance_cloud_1.data.hpegl_vmaas_cloud_folder.compute_folder: Reading...
module.instance_cloud_1.data.hpegl_vmaas_resource_pool.cl_resource_pool: Reading...
module.instance_cloud_1.data.hpegl_vmaas_datastore.c_3par: Reading...
module.instance_cloud_2.data.hpegl_vmaas_datastore.c_3par: Read complete after 2s [id=11]
module.instance_cloud_2.data.hpegl_vmaas_cloud_folder.compute_folder: Read complete after 2s [id=11]
module.instance_cloud_2.data.hpegl_vmaas_resource_pool.cl_resource_pool: Read complete after 3s [id=8]
module.instance_cloud_1.data.hpegl_vmaas_resource_pool.cl_resource_pool: Read complete after 3s [id=1]
module.instance_cloud_1.data.hpegl_vmaas_cloud_folder.compute_folder: Read complete after 3s [id=4]
module.instance_cloud_1.data.hpegl_vmaas_datastore.c_3par: Read complete after 3s [id=4]

OpenTofu used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

OpenTofu will perform the following actions:

  # module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0] will be created
  + resource &quot;hpegl_vmaas_instance&quot; &quot;sample_vm&quot; {
      + cloud_id           = 1
      + containers         = (known after apply)
      + environment_code   = &quot;dev&quot;
      + group_id           = (known after apply)
      + history            = (known after apply)
      + hostname           = (known after apply)
      + id                 = (known after apply)
      + instance_type_code = (known after apply)
      + labels             = [
          + &quot;sample_vm&quot;,
        ]
      + layout_id          = (known after apply)
      + name               = (known after apply)
      + plan_id            = 877
      + server_id          = (known after apply)
      + status             = (known after apply)
      + tags               = {
          + &quot;APPLICATION&quot;       = &quot;Custom Ubuntu&quot;
          + &quot;BillingCostCenter&quot; = &quot;999&quot;
          + &quot;Division&quot;          = &quot;AUK&quot;
          + &quot;PatchGroup&quot;        = &quot;None&quot;
          + &quot;ResourceContact&quot;   = &quot;john.lenihan@hpe.com&quot;
          + &quot;ResourcePurpose&quot;   = &quot;CFE&quot;
          + &quot;purpose&quot;           = &quot;sample&quot;
        }

      + config {
          + asset_tag        = &quot;vm_tag&quot;
          + create_user      = true
          + folder_code      = &quot;group-v1009&quot;
          + no_agent         = false
          + resource_pool_id = 1
        }

      + network {
          + id          = 22
          + internal_id = (known after apply)
          + is_primary  = (known after apply)
          + name        = (known after apply)
        }

      + volume {
          + datastore_id = &quot;4&quot;
          + id           = (known after apply)
          + name         = &quot;root_vol&quot;
          + root         = true
          + size         = 30
        }
    }

  # module.instance_cloud_1.random_integer.random will be created
  + resource &quot;random_integer&quot; &quot;random&quot; {
      + id      = (known after apply)
      + keepers = {
          + &quot;cloud&quot; = &quot;1&quot;
        }
      + max     = 50000
      + min     = 1
      + result  = (known after apply)
    }

  # module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0] will be created
  + resource &quot;hpegl_vmaas_instance&quot; &quot;sample_vm&quot; {
      + cloud_id           = 5
      + containers         = (known after apply)
      + environment_code   = &quot;dev&quot;
      + group_id           = (known after apply)
      + history            = (known after apply)
      + hostname           = (known after apply)
      + id                 = (known after apply)
      + instance_type_code = (known after apply)
      + labels             = [
          + &quot;sample_vm&quot;,
        ]
      + layout_id          = (known after apply)
      + name               = (known after apply)
      + plan_id            = 877
      + server_id          = (known after apply)
      + status             = (known after apply)
      + tags               = {
          + &quot;APPLICATION&quot;       = &quot;Custom Ubuntu&quot;
          + &quot;BillingCostCenter&quot; = &quot;999&quot;
          + &quot;Division&quot;          = &quot;AUK&quot;
          + &quot;PatchGroup&quot;        = &quot;None&quot;
          + &quot;ResourceContact&quot;   = &quot;john.lenihan@hpe.com&quot;
          + &quot;ResourcePurpose&quot;   = &quot;CFE&quot;
          + &quot;purpose&quot;           = &quot;sample&quot;
        }

      + config {
          + asset_tag        = &quot;vm_tag&quot;
          + create_user      = true
          + folder_code      = &quot;group-v68&quot;
          + no_agent         = false
          + resource_pool_id = 8
        }

      + network {
          + id          = 41
          + internal_id = (known after apply)
          + is_primary  = (known after apply)
          + name        = (known after apply)
        }

      + volume {
          + datastore_id = &quot;11&quot;
          + id           = (known after apply)
          + name         = &quot;root_vol&quot;
          + root         = true
          + size         = 30
        }
    }

  # module.instance_cloud_2.random_integer.random will be created
  + resource &quot;random_integer&quot; &quot;random&quot; {
      + id      = (known after apply)
      + keepers = {
          + &quot;cloud&quot; = &quot;5&quot;
        }
      + max     = 50000
      + min     = 1
      + result  = (known after apply)
    }

  # module.morpheus_artefacts_1.morpheus_group.tf_example_group will be created
  + resource &quot;morpheus_group&quot; &quot;tf_example_group&quot; {
      + cloud_ids = [
          + 1,
        ]
      + code      = (known after apply)
      + id        = (known after apply)
      + location  = &quot;galway&quot;
      + name      = (known after apply)
    }

  # module.morpheus_artefacts_1.morpheus_instance_layout.tf_example_instance_layout will be created
  + resource &quot;morpheus_instance_layout&quot; &quot;tf_example_instance_layout&quot; {
      + creatable                  = true
      + description                = (known after apply)
      + id                         = (known after apply)
      + instance_type_id           = (known after apply)
      + labels                     = [
          + &quot;demo&quot;,
          + &quot;layout&quot;,
          + &quot;terraform&quot;,
        ]
      + minimum_memory             = (known after apply)
      + name                       = (known after apply)
      + node_type_ids              = (known after apply)
      + option_type_ids            = (known after apply)
      + price_set_ids              = (known after apply)
      + spec_template_ids          = (known after apply)
      + support_convert_to_managed = (known after apply)
      + technology                 = &quot;vmware&quot;
      + version                    = &quot;1.0&quot;
      + workflow_id                = (known after apply)
    }

  # module.morpheus_artefacts_1.morpheus_instance_type.tf_example_instance_type will be created
  + resource &quot;morpheus_instance_type&quot; &quot;tf_example_instance_type&quot; {
      + category           = &quot;web&quot;
      + code               = (known after apply)
      + description        = &quot;Terraform Example Instance Type&quot;
      + enable_deployments = (known after apply)
      + enable_scaling     = (known after apply)
      + enable_settings    = (known after apply)
      + environment_prefix = (known after apply)
      + featured           = true
      + id                 = (known after apply)
      + labels             = [
          + &quot;demo&quot;,
          + &quot;instance&quot;,
          + &quot;terraform&quot;,
        ]
      + name               = (known after apply)
      + option_type_ids    = (known after apply)
      + price_set_ids      = (known after apply)
      + visibility         = &quot;public&quot;
    }

  # module.morpheus_artefacts_1.morpheus_node_type.tf_example_node will be created
  + resource &quot;morpheus_node_type&quot; &quot;tf_example_node&quot; {
      + category            = &quot;tfexample&quot;
      + file_template_ids   = (known after apply)
      + id                  = (known after apply)
      + labels              = [
          + &quot;demo&quot;,
          + &quot;instance&quot;,
          + &quot;terraform&quot;,
        ]
      + name                = (known after apply)
      + script_template_ids = (known after apply)
      + short_name          = &quot;tfexamplenodetype-ubuntu20&quot;
      + technology          = &quot;vmware&quot;
      + version             = &quot;2.0&quot;
      + virtual_image_id    = 3065
    }

  # module.morpheus_artefacts_1.random_integer.random will be created
  + resource &quot;random_integer&quot; &quot;random&quot; {
      + id      = (known after apply)
      + keepers = {
          + &quot;cloud&quot; = &quot;HPE GreenLake VMaaS Cloud&quot;
        }
      + max     = 50000
      + min     = 1
      + result  = (known after apply)
    }

  # module.morpheus_artefacts_2.morpheus_group.tf_example_group will be created
  + resource &quot;morpheus_group&quot; &quot;tf_example_group&quot; {
      + cloud_ids = [
          + 5,
        ]
      + code      = (known after apply)
      + id        = (known after apply)
      + location  = &quot;galway&quot;
      + name      = (known after apply)
    }

  # module.morpheus_artefacts_2.morpheus_instance_layout.tf_example_instance_layout will be created
  + resource &quot;morpheus_instance_layout&quot; &quot;tf_example_instance_layout&quot; {
      + creatable                  = true
      + description                = (known after apply)
      + id                         = (known after apply)
      + instance_type_id           = (known after apply)
      + labels                     = [
          + &quot;demo&quot;,
          + &quot;layout&quot;,
          + &quot;terraform&quot;,
        ]
      + minimum_memory             = (known after apply)
      + name                       = (known after apply)
      + node_type_ids              = (known after apply)
      + option_type_ids            = (known after apply)
      + price_set_ids              = (known after apply)
      + spec_template_ids          = (known after apply)
      + support_convert_to_managed = (known after apply)
      + technology                 = &quot;vmware&quot;
      + version                    = &quot;1.0&quot;
      + workflow_id                = (known after apply)
    }

  # module.morpheus_artefacts_2.morpheus_instance_type.tf_example_instance_type will be created
  + resource &quot;morpheus_instance_type&quot; &quot;tf_example_instance_type&quot; {
      + category           = &quot;web&quot;
      + code               = (known after apply)
      + description        = &quot;Terraform Example Instance Type&quot;
      + enable_deployments = (known after apply)
      + enable_scaling     = (known after apply)
      + enable_settings    = (known after apply)
      + environment_prefix = (known after apply)
      + featured           = true
      + id                 = (known after apply)
      + labels             = [
          + &quot;demo&quot;,
          + &quot;instance&quot;,
          + &quot;terraform&quot;,
        ]
      + name               = (known after apply)
      + option_type_ids    = (known after apply)
      + price_set_ids      = (known after apply)
      + visibility         = &quot;public&quot;
    }

  # module.morpheus_artefacts_2.morpheus_node_type.tf_example_node will be created
  + resource &quot;morpheus_node_type&quot; &quot;tf_example_node&quot; {
      + category            = &quot;tfexample&quot;
      + file_template_ids   = (known after apply)
      + id                  = (known after apply)
      + labels              = [
          + &quot;demo&quot;,
          + &quot;instance&quot;,
          + &quot;terraform&quot;,
        ]
      + name                = (known after apply)
      + script_template_ids = (known after apply)
      + short_name          = &quot;tfexamplenodetype-ubuntu-20&quot;
      + technology          = &quot;vmware&quot;
      + version             = &quot;2.0&quot;
      + virtual_image_id    = 3056
    }

  # module.morpheus_artefacts_2.random_integer.random will be created
  + resource &quot;random_integer&quot; &quot;random&quot; {
      + id      = (known after apply)
      + keepers = {
          + &quot;cloud&quot; = &quot;ST01&quot;
        }
      + max     = 50000
      + min     = 1
      + result  = (known after apply)
    }

Plan: 14 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + instance_ids_cloud_1         = (known after apply)
  + instance_ids_cloud_2         = (known after apply)
  + instance_layout_id_cloud_1   = (known after apply)
  + instance_layout_id_cloud_2   = (known after apply)
  + instance_layout_name_cloud_1 = (known after apply)
  + instance_layout_name_cloud_2 = (known after apply)
  + instance_names_cloud_1       = (known after apply)
  + instance_names_cloud_2       = (known after apply)
  + node_type_id_cloud_1         = (known after apply)
  + node_type_id_cloud_2         = (known after apply)
  + node_type_name_cloud_1       = (known after apply)
  + node_type_name_cloud_2       = (known after apply)
module.morpheus_artefacts_2.random_integer.random: Creating...
module.instance_cloud_2.random_integer.random: Creating...
module.instance_cloud_1.random_integer.random: Creating...
module.morpheus_artefacts_1.random_integer.random: Creating...
module.instance_cloud_2.random_integer.random: Creation complete after 0s [id=10027]
module.morpheus_artefacts_1.random_integer.random: Creation complete after 0s [id=40272]
module.morpheus_artefacts_2.random_integer.random: Creation complete after 0s [id=1661]
module.instance_cloud_1.random_integer.random: Creation complete after 0s [id=36444]
module.morpheus_artefacts_2.morpheus_group.tf_example_group: Creating...
module.morpheus_artefacts_2.morpheus_instance_type.tf_example_instance_type: Creating...
module.morpheus_artefacts_1.morpheus_instance_type.tf_example_instance_type: Creating...
module.morpheus_artefacts_1.morpheus_group.tf_example_group: Creating...
module.morpheus_artefacts_2.morpheus_node_type.tf_example_node: Creating...
module.morpheus_artefacts_1.morpheus_node_type.tf_example_node: Creating...
module.morpheus_artefacts_2.morpheus_instance_type.tf_example_instance_type: Creation complete after 5s [id=334]
module.morpheus_artefacts_1.morpheus_node_type.tf_example_node: Creation complete after 5s [id=5113]
module.morpheus_artefacts_2.morpheus_node_type.tf_example_node: Creation complete after 5s [id=5110]
module.morpheus_artefacts_2.morpheus_instance_layout.tf_example_instance_layout: Creating...
module.morpheus_artefacts_1.morpheus_instance_type.tf_example_instance_type: Creation complete after 5s [id=331]
module.morpheus_artefacts_1.morpheus_instance_layout.tf_example_instance_layout: Creating...
module.morpheus_artefacts_1.morpheus_group.tf_example_group: Creation complete after 7s [id=91]
module.morpheus_artefacts_2.morpheus_group.tf_example_group: Creation complete after 9s [id=94]
module.morpheus_artefacts_2.morpheus_instance_layout.tf_example_instance_layout: Creation complete after 5s [id=3559]
module.morpheus_artefacts_1.morpheus_instance_layout.tf_example_instance_layout: Creation complete after 5s [id=3562]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Creating...
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Creating...
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [10s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [10s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [20s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [20s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [30s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [30s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [40s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [40s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [50s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [50s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [1m0s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [1m0s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [1m10s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [1m10s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [1m20s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [1m20s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [1m30s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [1m30s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [1m40s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [1m40s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [1m50s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [1m50s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [2m0s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [2m0s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [2m10s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [2m10s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [2m20s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [2m20s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [2m30s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [2m30s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [2m40s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [2m40s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [2m50s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [2m50s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [3m0s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [3m0s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [3m10s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [3m10s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [3m20s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [3m20s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [3m30s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [3m30s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [3m40s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [3m40s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [3m50s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [3m50s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [4m0s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [4m0s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [4m10s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Still creating... [4m10s elapsed]
module.instance_cloud_1.hpegl_vmaas_instance.sample_vm[0]: Creation complete after 4m20s [id=637]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Still creating... [4m20s elapsed]
module.instance_cloud_2.hpegl_vmaas_instance.sample_vm[0]: Creation complete after 4m21s [id=634]

Apply complete! Resources: 14 added, 0 changed, 0 destroyed.

Outputs:

instance_ids_cloud_1 = &quot;637&quot;
instance_ids_cloud_2 = &quot;634&quot;
instance_layout_id_cloud_1 = &quot;3562&quot;
instance_layout_id_cloud_2 = &quot;3559&quot;
instance_layout_name_cloud_1 = &quot;cfe_todo_app_frontend--40272&quot;
instance_layout_name_cloud_2 = &quot;cfe_todo_app_frontend--1661&quot;
instance_names_cloud_1 = &quot;CFE morpheus-vmaas sample VM 1-36444-0&quot;
instance_names_cloud_2 = &quot;CFE morpheus-vmaas sample VM 1-10027-0&quot;
node_type_id_cloud_1 = &quot;5113&quot;
node_type_id_cloud_2 = &quot;5110&quot;
node_type_name_cloud_1 = &quot;cfe_tf_example_node_type--40272&quot;
node_type_name_cloud_2 = &quot;cfe_tf_example_node_type--1661&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;When using Terraform (or OpenTofu) to interact with HPE GreenLake for Private Cloud Enterprise VMaaS, there are two complementary providers:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://registry.terraform.io/providers/HPE/hpegl/latest&quot;&gt;hpegl&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://registry.terraform.io/providers/gomorpheus/morpheus/latest&quot;&gt;Morpheus&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Used in concert, these two providers enable rich IaC functionality. The hpegl provider has a &lt;em&gt;data source&lt;/em&gt; named &lt;em&gt;hpegl_vmaas_morpheus_details&lt;/em&gt;, which can be used to retrieve an access token and URL for the on-premise Morpheus instance associated with the specific VMaaS &lt;em&gt;location&lt;/em&gt;. These values can then be passed into the provider stanza for the Morpheus provider.&lt;/p&gt;
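&lt;p&gt;In HCL terms, the wiring looks roughly like the sketch below. The data source name is the one that appears in the run output above; the attribute names read from it (&lt;em&gt;url&lt;/em&gt; and &lt;em&gt;access_token&lt;/em&gt;) and the Morpheus provider arguments should be confirmed against the respective provider documentation.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-hcl&quot;&gt;# Retrieve the Morpheus URL and access token for this VMaaS location,
# then feed them into the Morpheus provider configuration.
# Attribute names here are assumptions; check the provider documentation.

data &quot;hpegl_vmaas_morpheus_details&quot; &quot;morpheus_details&quot; {}

provider &quot;morpheus&quot; {
  url          = data.hpegl_vmaas_morpheus_details.morpheus_details.url
  access_token = data.hpegl_vmaas_morpheus_details.morpheus_details.access_token
}
&lt;/code&gt;&lt;/pre&gt;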
&lt;p&gt;We have included example HCL for two separate VMaaS configurations: &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;One location, one VMaaS cloud with one VMaaS instance &lt;/li&gt;
&lt;li&gt;One location, two VMaaS clouds each with one VMaaS instance &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The HCL for the second configuration is built around a pair of HCL modules. These modules can be combined in various ways to work with a number of different VMaaS configurations, including configurations with more than one on-premise Morpheus instance. These modules are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/eamonnotoole/GLP-API-Tooling/tree/main/Terraform/HPEGL-Morpheus-PCE-VMaaS/One-Location/Two-Clouds-Two-Instances/morpheus_artefacts&quot;&gt;&lt;em&gt;morpheus_artefacts&lt;/em&gt;&lt;/a&gt; uses the Morpheus provider to create a group, instance type, instance layout, and node type in the target VMaaS cloud.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/eamonnotoole/GLP-API-Tooling/tree/main/Terraform/HPEGL-Morpheus-PCE-VMaaS/One-Location/Two-Clouds-Two-Instances/vmaas_instance&quot;&gt;&lt;em&gt;vmaas_instance&lt;/em&gt;&lt;/a&gt; takes inputs from &lt;em&gt;morpheus_artefacts&lt;/em&gt; and uses the hpegl provider to retrieve details of VMaaS resources and then create and launch a VMaaS VM instance.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;HPE GreenLake for Private Cloud Enterprise customers are encouraged to use these example HCL bundles as a starting point for their own investigations into using hpegl and Morpheus for IaC with their environments.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Why Private AI?]]></title><description><![CDATA[Everybody’s getting into AI. The visceral experience of using ChatGPT in professional or personal lives can raise a lot of concern. Concerns…]]></description><link>https://developer.hpe.com/why-private-ai/</link><guid isPermaLink="false">https://developer.hpe.com/why-private-ai/</guid><pubDate>Fri, 10 Jan 2025 13:21:41 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;p&gt;Everybody’s getting into AI. The visceral experience of using ChatGPT in our professional or personal lives can raise a lot of concern. Concerns about the future of work, creativity... even civilization&apos;s lifespan.&lt;/p&gt;
&lt;p&gt;One big concern that often comes up is about &lt;strong&gt;control&lt;/strong&gt;. How can an organization or individual maintain a certain amount of control over their experience when using AI?&lt;/p&gt;
&lt;p&gt;Not everyone cares about this. But for those who do, it’s critical.&lt;/p&gt;
&lt;p&gt;I think there are three major reasons why organizations want control over their AI-powered applications, and I’ll discuss them in this post: &lt;strong&gt;Data&lt;/strong&gt;, &lt;strong&gt;Model Performance&lt;/strong&gt;, and &lt;strong&gt;Cost&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;The only way to address these concerns is to run a model in a private, protected environment - one in which you have full control.&lt;/p&gt;
&lt;h2&gt;1. Data&lt;/h2&gt;
&lt;p&gt;Having control over your data means exactly what you think it does: you don&apos;t let anyone else see it.&lt;/p&gt;
&lt;h3&gt;Privacy and security guarantees&lt;/h3&gt;
&lt;p&gt;When you run your own large language model (LLM) endpoint, all data is processed locally, on your network. This allows you to minimize the risk of exposure in two ways: when the data is in transit, and when the data is stored in the LLM endpoint’s logs.&lt;/p&gt;
&lt;p&gt;When you depend on a service that is hosted externally to your organization, there is always a form of &lt;a href=&quot;https://www.investopedia.com/terms/c/counterpartyrisk.asp&quot;&gt;counterparty risk&lt;/a&gt;. Public services can fall victim to scalability issues, power outages, ransomware attacks, or other force majeure events. Also, counterparties can choose to update or change models without telling you. And you can&apos;t forget the cost of API calls.&lt;/p&gt;
&lt;p&gt;Processing data locally or in controlled environments minimizes these risks. Not because you’re any better at cybersecurity or running a datacenter than these counterparties… just because you’re &lt;strong&gt;already exposed&lt;/strong&gt; to issues on your side. Why increase the surface area? Why trust someone with your tokens if you don’t have to?&lt;/p&gt;
&lt;p&gt;In addition, if you are using a public LLM endpoint that provides a streaming response, it can be possible to recover the plain text from the encrypted network traffic while the data is in transit. More detail is available in &lt;a href=&quot;https://cdn.arstechnica.net/wp-content/uploads/2024/03/LLM-Side-Channel.pdf&quot;&gt;this paper&lt;/a&gt;, which I hope public endpoint providers have seen and addressed.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://jordannanos.github.io/images/2024-11-15-streaming-response.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Compliance with regulations&lt;/h3&gt;
&lt;p&gt;Beyond reputational risk, regulations and industry standards like HIPAA and PCI DSS force counterparties in healthcare and finance to store data for years, while the EU’s GDPR and India’s DPDPA force PII to be stored within their jurisdiction (at least, that’s how companies are interpreting the laws). Ensuring compliance with these rules is important in its own right, but it also compounds the security risks described earlier. Why rely on a third party that is forced to store every prompt and response for six years?&lt;/p&gt;
&lt;h3&gt;IP and copyright protection&lt;/h3&gt;
&lt;p&gt;The final example of losing control of your data is when some engineer goes and uploads a document describing trade secrets, debugs some code with a proprietary algorithm inside, or signs a waiver to download a confidential document, forgets about the waiver, and then uploads it to get a quick summary. Looking at you &lt;a href=&quot;https://www.bloomberg.com/news/articles/2023-05-02/samsung-bans-chatgpt-and-other-generative-ai-use-by-staff-after-leak&quot;&gt;Samsung&lt;/a&gt;…&lt;/p&gt;
&lt;h2&gt;2. Model Performance&lt;/h2&gt;
&lt;p&gt;I referred to this &lt;a href=&quot;https://developer.hpe.com/blog/how-to-pick-a-large-language-model-for-private-ai/&quot;&gt;in a previous blog&lt;/a&gt;: there are two ways to measure model performance: the &lt;strong&gt;speed&lt;/strong&gt; of the output, and the &lt;strong&gt;quality&lt;/strong&gt; of the output.&lt;/p&gt;
&lt;h3&gt;Improving Model Speed&lt;/h3&gt;
&lt;p&gt;There are two key metrics that impact user experience: latency and throughput. Latency is generally reported as “time to first token” (TTFT), which is constrained by how fast the model can process the input (i.e. the prompt), measured in tokens per second (tok/sec). Throughput is generally reported as “time per output token” (TPOT); it can also be measured as inter-token latency and is usually expressed in tokens per second (tok/sec).&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Tokens are the basic units of input and output in a large language model. Tokens typically represent words, sub-words, or characters. They are the smallest units of meaning in a text that can be processed by a large language model.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;When considering a minimum tok/sec of performance for language models, most people jump to a comparison with reading speed. The reasoning goes something like this: proficient readers manage around 300 words per minute. Given that current LLM tokenizers average roughly 1.5 tokens per word, that is 450 tokens per minute, or 7.5 tokens per second.&lt;/p&gt;
&lt;p&gt;However, a few comments:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;5 tok/sec feels slow to me&lt;/strong&gt;. 10 tok/sec feels good. 20+ tok/sec feels fast.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Many LLMs are used for things beyond just chatting back and forth&lt;/strong&gt;, where the user doesn’t read every line that was generated. For example, when I am editing a blog or generating an image, I am just waiting for the LLM to finish generating before I ctrl+C, ctrl+V. Or, when generating, refactoring, and adding comments to code, I am going to immediately insert it and move on.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Many “online” or “interactive” LLM-powered applications contain multiple prompts and responses&lt;/strong&gt;. For example, RAG involves an embedding model and a chat model. Text-to-SQL involves a prompt/response to generate the query, running the query, and an optional prompt/response for synthesis of the response. Tool use, agentic workflows, reflection, and chain-of-thought prompting are all growing in popularity. These approaches require multiple prompt/response turns before the user ever sees a final answer. Anything that involves audio (text-to-speech or speech-to-text) has a minimum speed requirement (people can speak English at around 150 words per minute, about half as fast as they read).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Many “offline” or “batch” use cases also exist&lt;/strong&gt; (e.g. summarize, classify, translate, or transcribe 1,000 files)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In just about every one of these cases, faster is better (or cheaper). Having full control over the hardware that your model runs on is really the only way to improve this.&lt;/p&gt;
&lt;h3&gt;Improving Model Quality&lt;/h3&gt;
&lt;p&gt;Three ways to improve model quality are prompt-tuning (or p-tuning), retrieval-augmented generation (RAG), and fine-tuning the model (i.e. changing some of the weights).&lt;/p&gt;
&lt;p&gt;In all of these cases, it is clear that the developer of the LLM-powered application would like to control things like the system prompt (kind of available via ChatGPT or Claude), the weights of the model (not really available for public endpoints), and, most importantly, the &lt;strong&gt;private data&lt;/strong&gt; used during RAG.&lt;/p&gt;
&lt;p&gt;As discussed earlier, the quality of your AI-powered application depends on the quality of your data. If you want to integrate private datasets without taking on risk, you have to host the endpoint(s) for yourself.&lt;/p&gt;
&lt;h2&gt;3. Cost&lt;/h2&gt;
&lt;p&gt;The biggest thing that bothers me about using public APIs is paying per token. It seems like cloud has gone too far. First, it was CPU cycles by the hour. Then, it was Functions-as-a-Service. Now, if the model is too chatty, I’m getting hit with a bill.&lt;/p&gt;
&lt;p&gt;This isn’t fantasy: many use cases for large context windows are popping up. The first Harry Potter book costs around 100k tokens, and so do a lot of my product spec sheets and user guides. Anthropic prices Claude 3.5 Sonnet at $3/M tokens. OpenAI charges $5/M tokens for GPT-4o. Google offers Gemini 1.5 Pro for $1.25/M tokens.&lt;/p&gt;
&lt;p&gt;So (oversimplifying and ignoring the pricing improvements of context caching), if I have 10 questions about the first Harry Potter book, it’s going to cost me between $1 and $5. And I have a few more questions than that.&lt;/p&gt;
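&lt;p&gt;A quick back-of-the-envelope check of that math, using the list prices above and ignoring context caching:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Cost of asking 10 questions about a ~100k-token book (input tokens only)
book_tokens = 100_000
questions = 10

for provider, price_per_million in [(&apos;Gemini 1.5 Pro&apos;, 1.25),
                                    (&apos;Claude 3.5 Sonnet&apos;, 3.00),
                                    (&apos;GPT-4o&apos;, 5.00)]:
    cost = book_tokens * questions * price_per_million / 1_000_000
    print(f&apos;{provider}: ${cost:.2f}&apos;)

# Gemini 1.5 Pro: $1.25
# Claude 3.5 Sonnet: $3.00
# GPT-4o: $5.00
&lt;/code&gt;&lt;/pre&gt;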
&lt;h2&gt;Conclusion: The world will be hybrid&lt;/h2&gt;
&lt;p&gt;It seems to me that Private AI will experience an evolution similar to that of Private Cloud. Public experiences will win hearts, minds, and popular culture. However, many companies will feel the need to recreate the same experience in private due to specific requirements for data privacy, model performance, and cost. The result will be a mix of both: a hybrid AI experience.&lt;/p&gt;
&lt;p&gt;Hunter and I discussed this in Episode 3 of &lt;em&gt;Things We Read This Week&lt;/em&gt;. You can watch it &lt;a href=&quot;https://www.youtube.com/watch?v=Byjlr0xplNI&quot;&gt;here on YouTube&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[LLM Agentic Tool Mesh: Empowering Gen AI with Retrieval-Augmented Generation (RAG)]]></title><description><![CDATA[In our previous blog posts, we explored the Chat Service and the Agents Service of LLM Agentic Tool Mesh, highlighting how they simplify the…]]></description><link>https://developer.hpe.com/llm-agentic-tool-mesh-empowering-gen-ai-with-retrieval-augmented-generation-rag/</link><guid isPermaLink="false">https://developer.hpe.com/llm-agentic-tool-mesh-empowering-gen-ai-with-retrieval-augmented-generation-rag/</guid><pubDate>Wed, 08 Jan 2025 11:46:13 GMT</pubDate><content:encoded>&lt;style&gt;
li {
    font-size: 27px !important;
    line-height: 33px !important;
    max-width: none !important;
}
&lt;/style&gt;
&lt;p&gt;In our previous blog posts, we explored the &lt;a href=&quot;https://developer.hpe.com/blog/ll-mesh-exploring-chat-service-and-factory-design-pattern/&quot;&gt;Chat Service&lt;/a&gt; and the &lt;a href=&quot;https://developer.hpe.com/blog/llm-agentic-tool-mesh-harnessing-agent-services-and-multi-agent-ai-for-next-level-gen-ai/&quot;&gt;Agents Service&lt;/a&gt; of &lt;a href=&quot;https://developer.hpe.com/blog/ll-mesh-democratizing-gen-ai-through-open-source-innovation-1/&quot;&gt;LLM Agentic Tool Mesh&lt;/a&gt;, highlighting how they simplify the integration of Generative AI (Gen AI) into applications.&lt;/p&gt;
&lt;p&gt;Today, we&apos;ll dive into another pivotal feature of LLM Agentic Tool Mesh: &lt;strong&gt;Retrieval-Augmented Generation (RAG) Service&lt;/strong&gt;. We&apos;ll explain what RAG is, how LLM Agentic Tool Mesh handles it, delve into the RAG services, and showcase an example of an agentic tool using RAG.&lt;/p&gt;
&lt;h1&gt;Understanding Retrieval-Augmented Generation (RAG)&lt;/h1&gt;
&lt;p&gt;RAG is a technique that enhances the capabilities of language models by providing them with access to external knowledge sources. Instead of relying solely on the information contained within the model&apos;s parameters, RAG allows models to retrieve and utilize relevant data from external documents or databases.&lt;/p&gt;
&lt;p&gt;This approach improves the accuracy and relevance of generated responses, especially in domains requiring up-to-date or specialized information. The key benefits of RAG are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Enhanced accuracy&lt;/strong&gt;: Provides more precise and factual responses by accessing more data available externally&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Domain specialization&lt;/strong&gt;: Enables models to handle specialized topics by leveraging domain-specific documents&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Reduced hallucinations&lt;/strong&gt;: Minimizes the generation of incorrect or nonsensical information by grounding responses in real data&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;RAG in LLM Agentic Tool Mesh&lt;/h1&gt;
&lt;p&gt;In the LLM Agentic Tool Mesh platform, RAG is a &lt;strong&gt;crucial tool&lt;/strong&gt; that &lt;strong&gt;can enhance&lt;/strong&gt; &lt;strong&gt;a language model&apos;s&lt;/strong&gt; capabilities by integrating external knowledge. LLM Agentic Tool Mesh implements RAG through two main stages:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Injection&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Retrieval&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Each stage is designed to standardize and optimize data use, ensuring generated content is both relevant and accurate.&lt;/p&gt;
&lt;h2&gt;The injection process&lt;/h2&gt;
&lt;p&gt;The injection process involves preparing and integrating data into a storage system where it can be efficiently retrieved when content is being generated.&lt;/p&gt;
&lt;p&gt;This process is abstracted into three key steps:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Extraction&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Transformation&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Loading&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/ingestion.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Extraction&lt;/h3&gt;
&lt;p&gt;As part of the extraction phase, data gathering involves collecting information from various sources, such as DOCX, PDF, or other formats, and converting it into a common format, typically JSON, to ensure consistency.&lt;/p&gt;
&lt;p&gt;Example usage&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from athon.rag import DataExtractor  # import path assumed, matching the other athon.rag examples

# Configuration for the Data Extractor
EXTRACTOR_CONFIG = {
    &apos;type&apos;: &apos;UnstructuredSections&apos;,
    &apos;document_type&apos;: &apos;Pdf&apos;,
    &apos;cache_elements_to_file&apos;: True,
    &apos;extract_text&apos;: True,
    &apos;exclude_header&apos;: True,
    &apos;exclude_footer&apos;: True,
    &apos;extract_image&apos;: False,
    &apos;image_output_folder&apos;: &apos;./images&apos;
}

# Initialize the Data Extractor
data_extractor = DataExtractor.create(EXTRACTOR_CONFIG)

# Parse a document file
file_path = &apos;example_document.pdf&apos;
result = data_extractor.parse(file_path)

# Handle the extraction result
if result.status == &quot;success&quot;:
    print(f&quot;EXTRACTED ELEMENTS:\n{result.elements}&quot;)
else:
    print(f&quot;ERROR:\n{result.error_message}&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Transformation&lt;/h3&gt;
&lt;p&gt;As part of the transformation phase, the process involves cleaning the data by removing irrelevant or redundant information, enriching it with metadata to enhance searchability during retrieval, and transforming the cleaned data using LLMs to generate summaries, question-and-answer pairs, or other structured outputs.&lt;/p&gt;
&lt;p&gt;Example usage&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from athon.rag import DataTransformer

# Configuration for the Data Transformer
TRANSFORMER_CONFIG = {
    &apos;type&apos;: &apos;CteActionRunner&apos;,
    &apos;clean&apos;: {
        &apos;headers_to_remove&apos;: [&apos;Confidential&apos;, &apos;Draft&apos;],
        &apos;min_section_length&apos;: 100
    },
    &apos;transform&apos;: {
        &apos;llm_config&apos;: {
            &apos;type&apos;: &apos;LangChainChatOpenAI&apos;,
            &apos;api_key&apos;: &apos;your-api-key-here&apos;,
            &apos;model_name&apos;: &apos;gpt-4o&apos;
        },
        &apos;system_prompt&apos;: &apos;Summarize the following content.&apos;,
        &apos;transform_delimeters&apos;: [&apos;```&apos;, &apos;```json&apos;]
    },
    &apos;enrich&apos;: {
        &apos;metadata&apos;: {
            &apos;source&apos;: &apos;LLM Agentic Tool Mesh Platform&apos;,
            &apos;processed_by&apos;: &apos;CteActionRunner&apos;
        }
    }
}

# Initialize the Data Transformer
data_transformer = DataTransformer.create(TRANSFORMER_CONFIG)

# List of extracted elements to be transformed
extracted_elements = [
    {&quot;text&quot;: &quot;Confidential Report on AI Development&quot;, &quot;metadata&quot;: {&quot;type&quot;: &quot;Header&quot;}},
    {&quot;text&quot;: &quot;AI is transforming industries worldwide...&quot;, &quot;metadata&quot;: {&quot;type&quot;: &quot;Paragraph&quot;}}
]

# Define the actions to be performed
actions = [&apos;RemoveSectionsByHeader&apos;, &apos;TransformInSummary&apos;, &apos;EnrichMetadata&apos;]

# Process the elements
result = data_transformer.process(actions, extracted_elements)

# Handle the transformation result
if result.status == &quot;success&quot;:
    print(f&quot;TRANSFORMED ELEMENTS:\n{result.elements}&quot;)
else:
    print(f&quot;ERROR:\n{result.error_message}&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Loading&lt;/h3&gt;
&lt;p&gt;As part of the loading phase, the process includes injecting the transformed data into the chosen storage solution, such as a vector database, and further adapting the data as needed, such as chunking it into smaller pieces for efficient retrieval.&lt;/p&gt;
&lt;p&gt;Example usage&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from athon.rag import DataLoader

# Configuration for the Data Loader
LOADER_CONFIG = {
    &apos;type&apos;: &apos;ChromaForSentences&apos;
}

# Initialize the Data Loader
data_loader = DataLoader.create(LOADER_CONFIG)

# Example collection (retrieved from a DataStorage instance)
collection = data_storage.get_collection().collection

# List of elements to be inserted
elements = [
    {&quot;text&quot;: &quot;Generative AI is transforming industries.&quot;, &quot;metadata&quot;: {&quot;category&quot;: &quot;AI&quot;, &quot;importance&quot;: &quot;high&quot;}},
    {&quot;text&quot;: &quot;This document discusses the impact of AI.&quot;, &quot;metadata&quot;: {&quot;category&quot;: &quot;AI&quot;, &quot;importance&quot;: &quot;medium&quot;}}
]

# Insert the elements into the collection
result = data_loader.insert(collection, elements)

# Handle the insertion result
if result.status == &quot;success&quot;:
    print(&quot;Data successfully inserted into the collection.&quot;)
else:
    print(f&quot;ERROR:\n{result.error_message}&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;The retrieval process&lt;/h2&gt;
&lt;p&gt;Once the data has been injected and is ready for use, the retrieval process focuses on fetching the most relevant information based on a given input query. This ensures that the language model has access to the right data to generate accurate and contextually relevant outputs.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Data retrieval&lt;/strong&gt;: Uses various methods, such as dense or sparse retrieval, to fetch the most relevant data from storage&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Metadata filtering&lt;/strong&gt;:  Applies metadata filters to narrow down search results, ensuring the retrieved data matches the specific needs of the query&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Chunk expansion&lt;/strong&gt;: Expands the retrieved data chunks to provide comprehensive information for the language model&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/retrieve.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Example usage&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from athon.rag import DataRetriever

# Configuration for the Data Retriever
RETRIEVER_CONFIG = {
    &apos;type&apos;: &apos;ChromaForSentences&apos;,
    &apos;expansion_type&apos;: &apos;Section&apos;,
    &apos;sentence_window&apos;: 3,
    &apos;n_results&apos;: 10,
    &apos;include&apos;: [&apos;documents&apos;, &apos;metadatas&apos;]
}

# Initialize the Data Retriever
data_retriever = DataRetriever.create(RETRIEVER_CONFIG)

# Example collection (retrieved from a DataStorage instance)
collection = data_storage.get_collection().collection

# Query to search within the collection
query = &quot;What is the impact of Generative AI on industries?&quot;

# Retrieve relevant data based on the query
result = data_retriever.select(collection, query)

# Handle the retrieval result
if result.status == &quot;success&quot;:
    for element in result.elements:
        print(f&quot;TEXT:\n{element[&apos;text&apos;]}\nMETADATA:\n{element[&apos;metadata&apos;]}\n&quot;)
else:
    print(f&quot;ERROR:\n{result.error_message}&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;LLM Agentic Tool Mesh in action: Agentic tool using RAG&lt;/h1&gt;
&lt;p&gt;In the &lt;a href=&quot;https://github.com/HewlettPackard/llmesh&quot;&gt;LLM Agentic Tool Mesh GitHub&lt;/a&gt;, there is an example of a RAG-based tool that provides quick and accurate access to 5G specifications: the &lt;strong&gt;telco expert&lt;/strong&gt; (inside folder &lt;code&gt;examples/tool_rag&lt;/code&gt;).&lt;/p&gt;
&lt;p&gt;This agentic tool leverages the RAG services in LLM Agentic Tool Mesh to read telco standards, build or use a vector store from them, and then uses a query engine to find and return relevant information based on user queries.&lt;/p&gt;
&lt;p&gt;For enhanced observability, the telco expert not only provides the answer but also displays the retrieved chunks used to formulate the response. This includes both the text of the chunks and their associated metadata, such as the document source, date, and other relevant details. This feature allows users to verify the origin of the information and gain deeper insights into the data supporting the answer.&lt;/p&gt;
&lt;p&gt;Tool code snippet&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;@AthonTool(config, logger)
def telco_expert(query: str) -&gt; str:
    &quot;&quot;&quot;
    This function reads the telco standards, builds or uses a vector store
    from them, and then uses a query engine to find and return relevant
    information to the input question.
    &quot;&quot;&quot;
    collection = _get_collection()
    if LOAD:
        _load_files_into_db(collection)
    augment_query = _augment_query_generated(query)
    rag_results = _retrieve_from_collection(collection, augment_query)
    ordered_rag_results = _rerank_answers(augment_query, rag_results)
    summary_answer = _summary_answer(augment_query, ordered_rag_results)
    chunk_answer = _create_chunk_string(ordered_rag_results)
    return summary_answer + &quot;\n\n&quot; + chunk_answer
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The functionalities shown are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Data injection&lt;/strong&gt;: Loads telco standards into a vector store&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Query augmentation&lt;/strong&gt;: Enhances the user&apos;s query for better retrieval&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Data retrieval&lt;/strong&gt;: Retrieves relevant chunks from the vector store&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Answer generation&lt;/strong&gt;: Summarizes and formats the retrieved information to provide a comprehensive answer&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/rag_tool.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h1&gt;Conclusion&lt;/h1&gt;
&lt;p&gt;The RAG Service in LLM Agentic Tool Mesh exemplifies how advanced design principles and innovative engineering simplify and enhance the adoption of Gen AI. By abstracting complexities and providing versatile examples, LLM Agentic Tool Mesh enables developers and users alike to unlock the transformative potential of Gen AI in various domains.&lt;/p&gt;
&lt;p&gt;Stay tuned for our next post, where we&apos;ll explore the System Service of LLM Agentic Tool Mesh, essential for creating and managing a mesh of tools, as we continue our journey to democratize Gen AI!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Start the new year right by exploring a broad range of topics in our newsletter]]></title><link>https://developer.hpe.com/2025-january-07/</link><guid isPermaLink="false">https://developer.hpe.com/2025-january-07/</guid><pubDate>Tue, 07 Jan 2025 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[SC24 from the Chapel Language Perspective]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/sc24-from-the-chapel-language-perspective/</link><guid isPermaLink="false">https://developer.hpe.com/sc24-from-the-chapel-language-perspective/</guid><pubDate>Wed, 18 Dec 2024 23:25:56 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Announcing Chapel 2.3!]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/announcing-chapel-2-3/</link><guid isPermaLink="false">https://developer.hpe.com/announcing-chapel-2-3/</guid><pubDate>Thu, 12 Dec 2024 21:25:00 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[LLM Agentic Tool Mesh: Harnessing agent services and multi-agent AI for next-level Gen AI]]></title><description><![CDATA[In my previous blog post, I explored the Chat Service of LLM Agentic Tool Mesh, an open-source project aimed at democratizing Generative AI…]]></description><link>https://developer.hpe.com/llm-agentic-tool-mesh-harnessing-agent-services-and-multi-agent-ai-for-next-level-gen-ai/</link><guid isPermaLink="false">https://developer.hpe.com/llm-agentic-tool-mesh-harnessing-agent-services-and-multi-agent-ai-for-next-level-gen-ai/</guid><pubDate>Thu, 12 Dec 2024 17:08:46 GMT</pubDate><content:encoded>&lt;style&gt;
li {
    font-size: 27px !important;
    line-height: 33px !important;
    max-width: none !important;
}
&lt;/style&gt;
&lt;p&gt;In my previous blog post, I explored the &lt;a href=&quot;https://developer.hpe.com/blog/ll-mesh-exploring-chat-service-and-factory-design-pattern/&quot;&gt;Chat Service&lt;/a&gt; of &lt;a href=&quot;https://developer.hpe.com/blog/ll-mesh-democratizing-gen-ai-through-open-source-innovation-1/&quot;&gt;LLM Agentic Tool Mesh&lt;/a&gt;, an &lt;a href=&quot;https://github.com/HewlettPackard/llmesh&quot;&gt;open-source project&lt;/a&gt; aimed at democratizing Generative AI (Gen AI).&lt;/p&gt;
&lt;p&gt;Today, I&apos;ll delve into another core feature: the &lt;strong&gt;Agent Service&lt;/strong&gt;. I&apos;ll discuss what agents are, explain the LLM Agentic Tool Mesh related services, and showcase examples from its repository.&lt;/p&gt;
&lt;h2&gt;Understanding LLM agents&lt;/h2&gt;
&lt;p&gt;In the context of Large Language Models (LLMs), an agent is an autonomous entity capable of:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Perceiving its environment&lt;/strong&gt;: Agents can gather and interpret information from their surroundings.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Making decisions&lt;/strong&gt;: Based on the perceived information, agents decide on the best course of action.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Acting on decisions&lt;/strong&gt;: Agents execute actions to achieve specific objectives.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These agents can operate independently or interact with one another to optimize their collective performance depending on the complexity of the task.
In fact, multi-agent AI involves coordinating multiple agents, each specialized in a specific domain or function, to collaborate and achieve a common goal. These agents handle:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Task division&lt;/strong&gt;: Dividing complex tasks into manageable parts.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Specialization&lt;/strong&gt;: Each agent specializes in a particular function, such as information retrieval or decision-making.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Collaboration&lt;/strong&gt;: Agents communicate and share information for effective and efficient task execution.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/multiagents.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Managing such agents typically requires advanced coding and deep knowledge of agent-based systems. However, LLM Agentic Tool Mesh simplifies this process by providing high-level abstractions through intuitive prompts and configuration files. Users can focus on defining tasks and desired outcomes while LLM Agentic Tool Mesh handles the coordination, task distribution, and result aggregation behind the scenes.&lt;/p&gt;
&lt;h2&gt;LLM Agentic Tool Mesh Agent Service&lt;/h2&gt;
&lt;p&gt;LLM Agentic Tool Mesh provides all the necessary tools to build a powerful agentic system by handling:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The &lt;strong&gt;tool repository&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;reasoning engine&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;multi-agent task force&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Tool repository&lt;/h3&gt;
&lt;p&gt;Agents in LLM Agentic Tool Mesh rely on tools to perform specialized tasks like information retrieval, document summarization, or data analysis. These tools extend the agents&apos; capabilities, allowing them to efficiently complete complex operations. The &lt;strong&gt;tool repository&lt;/strong&gt; service in LLM Agentic Tool Mesh simplifies and automates the storage, management, and retrieval of these tools.&lt;/p&gt;
&lt;p&gt;Key features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Dynamic tool storage&lt;/strong&gt;: Add tools with associated metadata, including tool name, description, function, and usage parameters.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Tool retrieval&lt;/strong&gt;: Flexible search and retrieval functionality, enabling agents to access tools based on specific criteria.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Metadata management&lt;/strong&gt;: Store relevant metadata for each tool, aiding in decision-making for task assignments.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Example usage:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from athon.agents import ToolRepository

# Configuration for the Tool Repository
REPO_CONFIG = {
    &apos;type&apos;: &apos;LangChainStructured&apos;
}

# Initialize the Tool Repository
tool_repository = ToolRepository.create(REPO_CONFIG)

# Adding a tool to the repository
from langchain.tools import tool

@tool
def text_summarizer(text: str) -&gt; str:
    &quot;&quot;&quot;A simple text summarizer function&quot;&quot;&quot;
    return text[:50] 

metadata = {
    &apos;category&apos;: &apos;NLP&apos;,
    &apos;version&apos;: &apos;1.0&apos;,
    &apos;author&apos;: &apos;John Doe&apos;
}

# Add the tool to the repository
add_result = tool_repository.add_tool(text_summarizer, metadata)

if add_result.status == &quot;success&quot;:
    print(&quot;Tool added successfully.&quot;)
else:
    print(f&quot;ERROR:\n{add_result.error_message}&quot;)

# Retrieve tools with a metadata filter
metadata_filter = {&apos;category&apos;: &apos;NLP&apos;}
get_result = tool_repository.get_tools(metadata_filter)

if get_result.status == &quot;success&quot;:
    print(f&quot;RETRIEVED TOOLS:\n{get_result.tools}&quot;)
else:
    print(f&quot;ERROR:\n{get_result.error_message}&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Reasoning engine&lt;/h3&gt;
&lt;p&gt;The &lt;strong&gt;reasoning engine&lt;/strong&gt; orchestrates interactions between the LLM and various tools, enabling agents to seamlessly combine decision-making capabilities with tool-based actions. It extends the chat capabilities by managing the dynamic integration of tools with the LLM, allowing for real-time decision-making and task execution.&lt;/p&gt;
&lt;p&gt;Key features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Tool orchestration&lt;/strong&gt;: Coordinates between the LLM and tools, deciding which tools to invoke based on context and user input.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Memory management&lt;/strong&gt;: Handles storage and retrieval of relevant memory for ongoing tasks or conversations.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Dynamic configuration&lt;/strong&gt;: Allows users to adjust the reasoning engine&apos;s behavior dynamically, tailoring interactions between LLMs and tools.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/reasoning.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
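&lt;p&gt;The repository excerpts in this post do not include a reasoning engine snippet, so the following is only a hypothetical sketch that mirrors the factory pattern of the other services; the class name, import path, and configuration keys are assumptions for illustration, not the documented API:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Hypothetical sketch only: the class name, import path, and configuration keys
# below are assumed by analogy with the other athon services shown in this post.
from athon.agents import ReasoningEngine

ENGINE_CONFIG = {
    &apos;type&apos;: &apos;LangChainAgentExecutor&apos;,   # assumed type name
    &apos;llm&apos;: {
        &apos;type&apos;: &apos;LangChainChatOpenAI&apos;,
        &apos;api_key&apos;: &apos;your-api-key-here&apos;,
        &apos;model_name&apos;: &apos;gpt-4o&apos;
    },
    &apos;memory&apos;: {
        &apos;type&apos;: &apos;LangChainBuffer&apos;,
        &apos;memory_key&apos;: &apos;chat_history&apos;
    }
}

# Create the engine and let it decide which repository tools to invoke at run time
reasoning_engine = ReasoningEngine.create(ENGINE_CONFIG)
result = reasoning_engine.run(&quot;What is the average temperature in Paris this week?&quot;)

if result.status == &quot;success&quot;:
    print(f&quot;COMPLETION:\n{result.completion}&quot;)
else:
    print(f&quot;ERROR:\n{result.error_message}&quot;)
&lt;/code&gt;&lt;/pre&gt;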
&lt;h3&gt;Task force&lt;/h3&gt;
&lt;p&gt;The &lt;strong&gt;multi-agent task force&lt;/strong&gt; service enables the orchestration of complex tasks through a network of specialized agents. This service allows users to define a structured workflow where each agent is assigned a specific task, executed in sequence or in parallel.&lt;/p&gt;
&lt;p&gt;Key features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;LLM-driven planning&lt;/strong&gt;: Integrates with an LLM to plan task sequences, ensuring intelligent coordination.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Agent specialization&lt;/strong&gt;: Each agent specializes in a particular task, tailored through prompts defining their role, backstory, and goals.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Task-oriented workflow&lt;/strong&gt;: Supports both sequential and parallel task execution, configurable through prompts and configuration files.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Tool integration&lt;/strong&gt;: Agents utilize a suite of tools to complete their tasks, dynamically loaded and executed during task completion.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Example usage:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from athon.agents import TaskForce
from custom_tools import DataFetcher, SalesSummarizer, PresentationBuilder

# Example configuration for the Task Force Multi-Agents
TASK_FORCE_CONFIG = {
    &apos;type&apos;: &apos;CrewAIMultiAgent&apos;,
    &apos;plan_type&apos;: &apos;Sequential&apos;,
    &apos;tasks&apos;: [
        {
            &apos;description&apos;: &apos;Analyze the recent sales data.&apos;,
            &apos;expected_output&apos;: &apos;A summary report of sales trends.&apos;,
            &apos;agent&apos;: {
                &apos;role&apos;: &apos;Data Analyst&apos;,
                &apos;goal&apos;: &apos;Summarize sales data&apos;,
                &apos;backstory&apos;: &apos;Experienced in sales data analysis&apos;,
                &apos;tools&apos;: [&apos;DataFetcher&apos;, &apos;SalesSummarizer&apos;]
            }
        },
        {
            &apos;description&apos;: &apos;Prepare a presentation based on the report.&apos;,
            &apos;expected_output&apos;: &apos;A presentation deck summarizing the sales report.&apos;,
            &apos;agent&apos;: {
                &apos;role&apos;: &apos;Presentation Specialist&apos;,
                &apos;goal&apos;: &apos;Create a presentation&apos;,
                &apos;backstory&apos;: &apos;Expert in creating engaging presentations&apos;,
                &apos;tools&apos;: [&apos;PresentationBuilder&apos;]
            }
        }
    ],
    &apos;llm&apos;: {
        &apos;type&apos;: &apos;LangChainChatOpenAI&apos;,
        &apos;api_key&apos;: &apos;your-api-key-here&apos;,
        &apos;model_name&apos;: &apos;gpt-4o-mini&apos;
    },
    &apos;verbose&apos;: True,
    &apos;memory&apos;: False
}

# Initialize the Task Force with the provided configuration
task_force = TaskForce.create(TASK_FORCE_CONFIG)

# Run the task force with an input message
input_message = &quot;Generate a sales analysis report and prepare a presentation.&quot;
result = task_force.run(input_message)

# Handle the response
if result.status == &quot;success&quot;:
    print(f&quot;COMPLETION:\n{result.completion}&quot;)
else:
    print(f&quot;ERROR:\n{result.error_message}&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;LLM Agentic Tool Mesh in action: Examples from the repository&lt;/h2&gt;
&lt;p&gt;The LLM Agentic Tool Mesh GitHub repository includes several examples demonstrating the versatility and capabilities of the agent services.&lt;/p&gt;
&lt;h3&gt;Chatbot application&lt;/h3&gt;
&lt;p&gt;This chatbot (examples/app_chatbot) is capable of reasoning and invoking appropriate LLM tools to perform specific actions. You can configure the chatbot using files that define LLM Agentic Tool Mesh platform services, project settings, toolkits, and memory configurations. The web app orchestrates both local and remote LLM tools, allowing them to define their own HTML interfaces, supporting text, images, and code presentations.&lt;/p&gt;
&lt;p&gt;Configuration example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;projects:
  - name: &quot;Personal Chat&quot;
    memory:
      type: LangChainBuffer
      memory_key: chat_history
      return_messages: true
    tools:
      - &quot;https://127.0.0.1:5002/&quot;     # Basic Copywriter
  - name: &quot;Project 5G Network&quot;
    memory:
      type: LangChainRemote
      memory_key: chat_history
      return_messages: true
      base_url: &quot;https://127.0.0.1:5010/&quot;
      timeout: 100
      cert_verify: false
    tools:
      - &quot;https://127.0.0.1:5005/&quot;     # OpenAPI Manager
      - &quot;https://127.0.0.1:5006/&quot;     # IMS Expert
  - name: &quot;Project Meteo&quot;
    memory:
      type: LangChainBuffer
      memory_key: chat_history
      return_messages: true
    tools:
      - &quot;https://127.0.0.1:5003/&quot;     # Temperature Finder
      - &quot;https://127.0.0.1:5004/&quot;     # Temperature Analyzer
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;OpenAPI manager&lt;/h3&gt;
&lt;p&gt;The OpenAPI manager (examples/tool_agents) is a multi-agent tool that reads OpenAPI documentation and provides users with relevant information based on their queries. It uses the Task Force service to answer questions related to 5G APIs.&lt;/p&gt;
&lt;p&gt;Capabilities:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;ListOpenApis&lt;/strong&gt;: Lists all OpenAPI specifications present in the system.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;SelectOpenApi&lt;/strong&gt;: Selects a specific OpenAPI specification.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;GetOpenApiVersion&lt;/strong&gt;: Returns the OpenAPI version of the selected specification.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;GetInfo&lt;/strong&gt;: Returns the information dictionary of the selected specification.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;GetMethodsByTag&lt;/strong&gt;: Lists all methods of the selected specification for a specific tag.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;GetMethodById&lt;/strong&gt;: Returns detailed information about a method selected by ID.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;GetRequestBody&lt;/strong&gt;: Returns the request body schema of the selected specification.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;GetResponse&lt;/strong&gt;: Returns the response schema of the selected specification.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/mesh.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;The &lt;strong&gt;LLM Agentic Tool Mesh Agent Service&lt;/strong&gt; exemplifies how advanced design principles and innovative prompt engineering simplify and enhance the adoption of Gen AI. By abstracting complexities and providing versatile examples, LLM Agentic Tool Mesh enables developers and users alike to unlock the transformative potential of Gen AI in various domains.&lt;/p&gt;
&lt;p&gt;Stay tuned for our next post, where we&apos;ll explore another key service of LLM Agentic Tool Mesh and continue our journey to democratize Gen AI!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[How to Transform IT Operations with AI-Infused, Full-Stack Observability]]></title><description><![CDATA[External blog post]]></description><link>https://developer.hpe.com/how-to-transform-it-operations-with-ai-infused-full-stack-observability/</link><guid isPermaLink="false">https://developer.hpe.com/how-to-transform-it-operations-with-ai-infused-full-stack-observability/</guid><pubDate>Mon, 02 Dec 2024 19:21:18 GMT</pubDate><content:encoded>&lt;p&gt;External blog post&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE OpsRamp Continues to Push Autonomous IT Operations Forward]]></title><description><![CDATA[External blog post]]></description><link>https://developer.hpe.com/hpe-opsramp-continues-to-push-autonomous-it-operations-forward/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-opsramp-continues-to-push-autonomous-it-operations-forward/</guid><pubDate>Mon, 02 Dec 2024 19:14:19 GMT</pubDate><content:encoded>&lt;p&gt;External blog post&lt;/p&gt;</content:encoded></item><item><title><![CDATA[The Schrödinger’s Cat Challenge of Observing Cloud-Native Applications]]></title><description><![CDATA[External blog post]]></description><link>https://developer.hpe.com/the-schrödinger’s-cat-challenge-of-observing-cloud-native-applications/</link><guid isPermaLink="false">https://developer.hpe.com/the-schrödinger’s-cat-challenge-of-observing-cloud-native-applications/</guid><pubDate>Mon, 02 Dec 2024 19:12:19 GMT</pubDate><content:encoded>&lt;p&gt;External blog post&lt;/p&gt;</content:encoded></item><item><title><![CDATA[This month, get tips on working with GenAI, Morpheus, Chapel and Ampere ARM processors]]></title><link>https://developer.hpe.com/2024-december-02/</link><guid isPermaLink="false">https://developer.hpe.com/2024-december-02/</guid><pubDate>Mon, 02 Dec 2024 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[LLM Agentic Tool Mesh: Exploring chat service and factory design pattern]]></title><description><![CDATA[In our previous blog post, we introduced LLM Agentic Tool Mesh, an open-source project aimed at democratizing Generative AI (Gen AI). The…]]></description><link>https://developer.hpe.com/ll-mesh-exploring-chat-service-and-factory-design-pattern/</link><guid isPermaLink="false">https://developer.hpe.com/ll-mesh-exploring-chat-service-and-factory-design-pattern/</guid><pubDate>Thu, 21 Nov 2024 12:21:15 GMT</pubDate><content:encoded>&lt;style&gt;
li {
    font-size: 27px !important;
    line-height: 33px !important;
    max-width: none !important;
}
&lt;/style&gt;
&lt;p&gt;In our previous blog post, we introduced &lt;strong&gt;&lt;a href=&quot;https://developer.hpe.com/blog/ll-mesh-democratizing-gen-ai-through-open-source-innovation-1/&quot;&gt;LLM Agentic Tool Mesh&lt;/a&gt;&lt;/strong&gt;, an &lt;strong&gt;&lt;a href=&quot;https://github.com/HewlettPackard/llmesh&quot;&gt;open-source project&lt;/a&gt;&lt;/strong&gt; aimed at democratizing Generative AI (Gen AI). The initiative addresses both the technical complexity and organizational challenges of adopting Gen AI. The vision behind LLM Agentic Tool Mesh is to make Gen AI accessible and beneficial to a broader audience, empowering users from diverse backgrounds to leverage cutting-edge AI technologies effortlessly.&lt;/p&gt;
&lt;p&gt;This blog dives deeper into one of LLM Agentic Tool Mesh&apos;s core features: the &lt;strong&gt;Chat Service&lt;/strong&gt;. The LLM Agentic Tool Mesh platform provides a robust foundation for creating chat applications using Large Language Models (LLMs).
First, we&apos;ll detail its key services. Then, we’ll explore some code that illustrates how the &lt;strong&gt;Factory Design Pattern&lt;/strong&gt; empowers it to handle diverse LLM integrations seamlessly. Finally, we’ll highlight some examples in the LLM Agentic Tool Mesh repository that utilize the Chat Service, such as the &lt;strong&gt;chatbot&lt;/strong&gt; application and &lt;strong&gt;agentic tools&lt;/strong&gt;, showcasing the flexibility and practical applications of these services.&lt;/p&gt;
&lt;h2&gt;Key components of LLM Agentic Tool Mesh&apos;s Chat Service&lt;/h2&gt;
&lt;p&gt;Let’s explore the key components and how they enable seamless integration of Gen AI into chat applications.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Prompt rendering&lt;/strong&gt;: This service simplifies the creation and management of prompts, which are the backbone of effective LLM interactions. It supports rendering prompts from templates or files and even saving custom prompts to the file system.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Model management&lt;/strong&gt;: It manages the initialization and utilization of different LLMs based on configuration parameters.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Memory management&lt;/strong&gt;: Maintaining context in a chat is essential for meaningful interactions. This service manages storage and retrieval of conversation history.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Message processing&lt;/strong&gt;: This service handles the serialization and deserialization of messages, converting between different data formats to maintain compatibility across components.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/chat-chat.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;The power of the Factory Design Pattern&lt;/h2&gt;
&lt;p&gt;All the previous services are implemented using the &lt;strong&gt;Factory Design Pattern&lt;/strong&gt;. This is a creational design approach that provides a flexible interface for object creation. In LLM Agentic Tool Mesh, this pattern ensures that the platform can handle multiple service types dynamically based on configuration parameters.&lt;/p&gt;
&lt;p&gt;One of LLM Agentic Tool Mesh&apos;s Chat Service features is the &lt;strong&gt;&lt;code&gt;ChatModel&lt;/code&gt;&lt;/strong&gt; factory, which simplifies the creation of specific chat models. Here&apos;s an example of how it works:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;class ChatModel:
    _models = {
        &apos;LangChainChatOpenAI&apos;: LangChainChatOpenAIModel,
        &apos;LangChainAzureChatOpenAI&apos;: LangChainAzureChatOpenAIModel,
        &apos;LangChainChatGoogleGenAI&apos;: LangChainChatGoogleGenAIModel,
        # Additional models...
    }

    @staticmethod
    def create(config):
        model_type = config.get(&apos;type&apos;)
        if not model_type:
            raise ValueError(&quot;Configuration must include &apos;type&apos;.&quot;)
        model_class = ChatModel._models.get(model_type)
        if not model_class:
            raise ValueError(f&quot;Unsupported model type: {model_type}&quot;)
        return model_class(config)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Definitions:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Model registry&lt;/strong&gt;: The &lt;code&gt;_models&lt;/code&gt; dictionary maps model types (e.g., &lt;code&gt;&apos;LangChainChatOpenAI&apos;&lt;/code&gt;) to their corresponding classes.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Dynamic model selection&lt;/strong&gt;: The &lt;code&gt;create&lt;/code&gt; method retrieves the desired model class based on the &lt;code&gt;type&lt;/code&gt; in the provided &lt;code&gt;config&lt;/code&gt;. If &lt;code&gt;type&lt;/code&gt; is missing or unsupported, it raises an error.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Instance creation&lt;/strong&gt;: The method initializes and returns the appropriate model class with the given configuration.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The main benefits of this approach are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Encapsulation&lt;/strong&gt;: Hides the complexity of model initialization.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Flexibility&lt;/strong&gt;: Switching models only requires updating the &lt;code&gt;type&lt;/code&gt; in &lt;code&gt;config&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Extensibility&lt;/strong&gt;: New models can be added by updating the &lt;code&gt;_models&lt;/code&gt; dictionary without modifying the logic.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Error handling&lt;/strong&gt;: By validating the configuration and supported model types, the design prevents runtime errors, ensuring the system is robust and user-friendly.&lt;/li&gt;
&lt;/ul&gt;
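&lt;p&gt;As a concrete illustration of that extensibility, supporting an additional backend only requires one new entry in the registry (the type and class names below are invented for illustration and are not part of the repository):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;class ChatModel:
    _models = {
        &apos;LangChainChatOpenAI&apos;: LangChainChatOpenAIModel,
        &apos;LangChainAzureChatOpenAI&apos;: LangChainAzureChatOpenAIModel,
        &apos;LangChainChatGoogleGenAI&apos;: LangChainChatGoogleGenAIModel,
        # Hypothetical new backend: registering it here is the only change needed
        &apos;MyCustomChat&apos;: MyCustomChatModel,
    }
    # The create() method stays exactly as shown above
&lt;/code&gt;&lt;/pre&gt;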
&lt;p&gt;Here’s how a developer might use LLM Agentic Tool Mesh&apos;s &lt;strong&gt;&lt;code&gt;ChatModel&lt;/code&gt;&lt;/strong&gt; factory:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from athon.chat import ChatModel
from langchain.schema import HumanMessage, SystemMessage

# Configuration for the Chat Model
LLM_CONFIG = {
    &apos;type&apos;: &apos;LangChainAzureChatOpenAI&apos;,
    &apos;api_key&apos;: &apos;your-api-key-here&apos;,
    &apos;azure_deployment&apos;: &apos;your-deployment-name&apos;,
    &apos;endpoint&apos;: &apos;your-endpoint-url&apos;,
    &apos;api_version&apos;: &apos;your-api-version&apos;,
    &apos;model_name&apos;: &apos;gpt-4o&apos;,
    &apos;temperature&apos;: 0.7
}

# Initialize the Chat Model
chat = ChatModel.create(LLM_CONFIG)

# Define prompts
prompts = [
    SystemMessage(content=&quot;Convert the message to pirate language&quot;),
    HumanMessage(content=&quot;Today is a sunny day and the sky is blue&quot;)
]

# Invoke the model
result = chat.invoke(prompts)

# Process the response
if result.status == &quot;success&quot;:
    print(f&quot;COMPLETION:\n{result.content}&quot;)
else:
    print(f&quot;ERROR:\n{result.error_message}&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Explanation:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The &lt;code&gt;ChatModel.create()&lt;/code&gt; method dynamically selects and initializes the appropriate chat model based on the configuration.&lt;/li&gt;
&lt;li&gt;Prompts are processed through the &lt;code&gt;invoke()&lt;/code&gt; method, which interacts with the LLM to generate responses.&lt;/li&gt;
&lt;li&gt;Developers handle responses easily, focusing on application logic rather than the complexities of model interaction.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The modular design of LLM Agentic Tool Mesh ensures scalability and maintainability. Here’s a glimpse into its structure:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-plaintext&quot;&gt;self_serve_platform/
│
└── chat/
    ├── model.py  # Factory method for creating chat model instances
    └── models/
        ├── base.py  # Abstract base class for chat models
        ├── langchain_chat_openai.py  # OpenAI chat model implementation
        ├── langchain_azure_chat_openai.py  # Azure chat model implementation
        └── ... # Additional implementations
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Examples of LLM Agentic Tool Mesh in action&lt;/h2&gt;
&lt;p&gt;In the &lt;a href=&quot;https://github.com/HewlettPackard/llmesh&quot;&gt;LLM Agentic Tool Mesh GitHub&lt;/a&gt;, there are a series of web applications and tools, complete with examples, to showcase the versatility and capabilities of LLM Agentic Tool Mesh. These examples are designed to demonstrate how the platform&apos;s services can be leveraged for real-world applications. Notably, the repository includes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Chatbot application&lt;/strong&gt; (&lt;code&gt;examples/app_chatbot&lt;/code&gt;): A web-based chatbot built using the chat service, offering a hands-on example of how to integrate LLM-powered conversational agents.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Agentic tools&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Basic copywriter&lt;/strong&gt; (&lt;code&gt;examples/tool_copywriter&lt;/code&gt;): A tool designed to rewrite and improve text, providing explanations for suggested enhancements. For instance, it can refine content while offering insights into the reasoning behind the changes.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Temperature analyzer&lt;/strong&gt; (&lt;code&gt;examples/tool_analyzer&lt;/code&gt;): A tool that generates Python code using an LLM to analyze historical temperature data. It creates visual charts, enabling users to gain a deeper understanding of temperature trends.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You can run these tools and applications individually or use the &lt;code&gt;run_examples.sh&lt;/code&gt; script to launch them all at once. Once initialized:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Access the &lt;strong&gt;chatbot application&lt;/strong&gt; via &lt;code&gt;https://127.0.0.1:5001/&lt;/code&gt; to start chatting or explore its additional features like the &lt;strong&gt;personal chat&lt;/strong&gt;. Within Personal Chat, users can enhance text using the copywriter tool by interacting directly in chat or clicking interface buttons.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;basic copywriter&lt;/strong&gt; also features a configurable backpanel app accessible at &lt;code&gt;https://127.0.0.1:5011/&lt;/code&gt;. Without needing to code, users can modify settings such as system prompts, LLM type, or even interface behavior—for example, transforming a formal copywriter into one with a pirate&apos;s flair, where  one might start by saying &quot;Welcome to our sale&quot; and it would be transformed into &quot;Ahoy, matey, grab the loot!&quot;.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;temperature analyzer&lt;/strong&gt; enriches its prompts by incorporating historical temperature datasets. It then uses the chat service to generate code as instructed in the system prompt, executes the analysis, and visualizes the results in a chart.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/tool-chat.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;These examples demonstrate the flexibility and potential of LLM Agentic Tool Mesh services, highlighting the power of LLM models and advanced prompt engineering in solving diverse problems.&lt;/p&gt;
&lt;h3&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;The &lt;strong&gt;LLM Agentic Tool Mesh Chat Service&lt;/strong&gt; and accompanying tools exemplify how advanced design principles like the Factory Design Pattern, combined with innovative prompt engineering, simplify and enhance the adoption of Gen AI. By abstracting complexities and providing versatile examples, LLM Agentic Tool Mesh enables developers and users alike to unlock the transformative potential of Gen AI in a variety of domains.&lt;/p&gt;
&lt;p&gt;Stay tuned for our next post, where we’ll delve into another key service of LLM Agentic Tool Mesh and explore how it continues to democratize Gen AI!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Use Redfish and IPv6 to find lost servers]]></title><description><![CDATA[Introduction Have you ever moved one or more servers from a Local Area Network (LAN) to another without configuring the target network…]]></description><link>https://developer.hpe.com/use-redfish-and-ipv6-to-find-lost-servers/</link><guid isPermaLink="false">https://developer.hpe.com/use-redfish-and-ipv6-to-find-lost-servers/</guid><pubDate>Wed, 20 Nov 2024 16:32:49 GMT</pubDate><content:encoded>&lt;style&gt; li { font-size: 27px; line-height: 35px; max-width: none; } &lt;/style&gt;
&lt;style&gt; figcaption {font-style: italic; font-size: 15px; line-height: 33px; max-width: none;} &lt;/style&gt;
&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Have you ever moved one or more servers from a Local Area Network (LAN) to another without configuring the target network details before the move?
Perhaps some servers in the corporate network &lt;code&gt;10.1.2.0/24&lt;/code&gt; have been moved to the lab network &lt;code&gt;192.168.1.0/22&lt;/code&gt;.
I am sure the answer is yes. As a result, the servers that have been moved are unreachable and might just as well be expensive bricks!&lt;/p&gt;
&lt;p&gt;A similar situation happens on LANs without a DHCP server when you initiate a
&lt;a href=&quot;https://servermanagementportal.ext.hpe.com/docs/redfishservices/ilos/supplementdocuments/securesystemerase/#initiating-secure-erase-through-redfish&quot;&gt;secure erase&lt;/a&gt; operation
on systems using a static iLO IP address.
Among other operations, the &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=sd00002007en_us&amp;#x26;docLocale=en_US&amp;#x26;page=GUID-8A7C60B7-A3A1-42C6-BEA5-0252F945B7C9.html#ariaid-title2&quot;&gt;One-button Secure Erase&lt;/a&gt; procedure resets the iLO to its factory defaults which includes the use of DHCP for
its network configuration. As a consequence, the iLOs are not easily reachable anymore.&lt;/p&gt;
&lt;p&gt;This blog post presents a procedure to recover from these unpleasant situations with the help of the
&lt;a href=&quot;https://datatracker.ietf.org/doc/html/rfc4291#section-2.5.6&quot; target=&quot;_blank&quot;&gt;IPv6 link local address&lt;/a&gt; of the affected iLO network ports. I will also explain what an IPv6 link local is.&lt;/p&gt;
&lt;h2&gt;Prerequisites&lt;/h2&gt;
&lt;p&gt;The &lt;a href=&quot;#recovery-process&quot;&gt;recovery process&lt;/a&gt; presented below is suitable for both the dedicated and the shared iLO network ports of lost iLOs. In this post, it is assumed that access to a system connected to the same LAN as the affected iLOs is possible. This system is called the &quot;brick-saver&quot; throughout this blog post. During my tests, I used a Linux brick-saver, but I am convinced that Microsoft Windows or Apple macOS could have been used as well.&lt;/p&gt;
&lt;p&gt;In this post, it is also assumed that it is not possible to change the brick-saver network configuration to match the LAN settings of lost iLOs. If that is not the case, this article is not relevant.&lt;/p&gt;
&lt;p&gt;When the Media Access Control (MAC) addresses of the impacted iLOs are unknown, an operator must be able to access and manipulate the iLO cables and power buttons of the affected servers.&lt;/p&gt;
&lt;p&gt;Then, in order to re-program the network configuration of lost iLOs, you must have the credentials of a privileged user in each of them. This user could be the factory administrator account and associated password mentioned on the &quot;toe-tag&quot; present on the front panel of HPE servers.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;TIP:&lt;br&gt;
If you know that the Link Layer Discovery Protocol (&lt;a href=&quot;https://standards.ieee.org/standard/802_1AB-2005.html&quot; target=&quot;_blank&quot;&gt;LLDP&lt;/a&gt;) is enabled in lost iLOs (&lt;a href=&quot;https://servermanagementportal.ext.hpe.com/docs/redfishservices/ilos/ilo6/ilo6_changelog/#schema-updates&quot;
target=&quot;_blank&quot;&gt;iLO 6&lt;/a&gt; and later) and you have a means to collect LLDP information, save their &lt;code&gt;ManagementAddressIPv6&lt;/code&gt; property in a list. It contains their link local address. With this list, the recovery process is shorter.
The next screenshot shows the content of an LLDP transmit object sent
over the network by an HPE iLO 6.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;img src=&quot;/img/fig1-lldp-transmit.png&quot; alt=&quot;Figure 1: Link local address in LLDP transmit object&quot; title=&quot;Figure 1: Link local address in LLDP transmit object&quot;&gt;&lt;/p&gt;
&lt;figcaption&gt;Figure 1: Link local address in LLDP transmit object&lt;/figcaption&gt;
&lt;h2&gt;IPv6 basics&lt;/h2&gt;
&lt;p&gt;If you are already familiar with IPv6, link local, and multicast pre-defined addresses like &lt;code&gt;FF02::1%ZoneId&lt;/code&gt;, you can skip this paragraph and go directly to the &lt;a href=&quot;#recovery-process&quot;&gt;Recovery process&lt;/a&gt; section.&lt;/p&gt;
&lt;p&gt;The basic IPv6 architecture is described in Request For Comments (RFC) &lt;a href=&quot;https://datatracker.ietf.org/doc/html/rfc4291&quot; target=&quot;_blank&quot;&gt;4291&lt;/a&gt;.
Briefly, IPv6 addresses are 128 bit long with the colon (&quot;:&quot;) character
separating eight chunks of sixteen bits. Each chunk is represented by four hexadecimal digits (i.e. &lt;code&gt;2001:1890:1109:4110:9618:82FF:FE71:D01E&lt;/code&gt;).&lt;/p&gt;
&lt;p&gt;Contiguous chunks of zeros can be compressed and represented with &quot;::&quot;.
Moreover, leading zeros can be omitted.
As an example, &lt;code&gt;2001:0DB8:0000:0000:0008:0800:200C:417A&lt;/code&gt; can be represented as &lt;code&gt;2001:DB8::8:800:200C:417A&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;IPv6 addresses are made of two parts: a subnet prefix and an identifier on the subnet. The
text representation of subnet prefixes has the form &lt;code&gt;IPv6-Address/subnet-prefix-length&lt;/code&gt;, which is similar to the Classless Inter-Domain
Routing &lt;a href=&quot;https://datatracker.ietf.org/doc/html/rfc1519&quot; target=&quot;_blank&quot;&gt;(CIDR)&lt;/a&gt; representation of IPv4 addresses.&lt;/p&gt;
&lt;p&gt;With this in mind, &lt;code&gt;2001:1890:1109:4110:9618:82FF:FE71:D01E/64&lt;/code&gt; is the text representation of subnet prefix &lt;code&gt;2001:1890:1109:4110::/64&lt;/code&gt;
and identifier &lt;code&gt;9618:82FF:FE71:D01E&lt;/code&gt;.&lt;/p&gt;
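&lt;p&gt;For readers who want to experiment, Python&apos;s standard &lt;code&gt;ipaddress&lt;/code&gt; module shows the same split between subnet prefix and identifier:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import ipaddress

# Split an address written as IPv6-Address/subnet-prefix-length into its parts
iface = ipaddress.IPv6Interface(&apos;2001:1890:1109:4110:9618:82FF:FE71:D01E/64&apos;)
print(iface.network)   # 2001:1890:1109:4110::/64  (the subnet prefix)
print(iface.ip)        # 2001:1890:1109:4110:9618:82ff:fe71:d01e
&lt;/code&gt;&lt;/pre&gt;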
&lt;h3&gt;IPv6 link local addresses&lt;/h3&gt;
&lt;p&gt;The recovery process presented in this blog post is entirely based on IPv6 link local addresses.
Hence, it is important to fully understand what they are and how to use them.&lt;/p&gt;
&lt;p&gt;IPv6 link local addresses are also defined in
&lt;a href=&quot;https://datatracker.ietf.org/doc/html/rfc4291#section-2.5.6&quot;
target=&quot;_blank&quot;&gt;RFC 4291&lt;/a&gt;.
They are called &quot;link local&quot; because packets coming from or going to such addresses cannot be forwarded to external networks.
They must stay on the same physical link.&lt;/p&gt;
&lt;p&gt;When IPv6 is enabled in an Operating System (OS),
network interfaces autoconfigure using the
well known prefix &lt;code&gt;FE80::/10&lt;/code&gt; and the MAC address, as explained in the
&lt;a href=&quot;https://datatracker.ietf.org/doc/html/rfc4862#section-5.3&quot;
target=&quot;_blank&quot;&gt;IPv6 Stateless Address Autoconfiguration&lt;/a&gt; (SLAAC)
RFC.&lt;/p&gt;
&lt;p&gt;Other link local autoconfiguration methods exist.
&lt;a href=&quot;https://datatracker.ietf.org/doc/html/rfc8981&quot; target=&quot;_blank&quot;&gt;RFC 8981&lt;/a&gt;
describes a temporary address generation mechanism that is used by Microsoft Windows
on personal computers to make them harder to find. This RFC is not implemented in HPE iLOs.&lt;/p&gt;
&lt;p&gt;Then, &lt;a href=&quot;https://datatracker.ietf.org/doc/html/rfc7217&quot; target=&quot;_blank&quot;&gt;RFC 7217&lt;/a&gt; proposes a method for generating opaque interface identifiers. This RFC is implemented in
&lt;a href=&quot;https://servermanagementportal.ext.hpe.com/docs/redfishservices/ilos/ilo6/ilo6_161/ilo6_network_resourcedefns161/#oemhpeipv6&quot;
target=&quot;_blank&quot;&gt;modern versions&lt;/a&gt; of iLO 6 and later, but disabled by default.&lt;/p&gt;
&lt;p&gt;IPv6 is enabled in the HPE iLO OS kernel and cannot be disabled. This is by design. Hence, the configured
management interface (dedicated or shared) autoconfigures as soon as it is physically connected to a network.
By default, the link local autoconfiguration process transforms the MAC address into an
&lt;a href=&quot;https://standards.ieee.org/faqs/regauth/&quot; target=&quot;_blank&quot;&gt;IEEE EUI-64&lt;/a&gt; address.
Simply explained, this process consists of stretching the MAC address from 48 bits to 64 bits by inserting &lt;code&gt;FF:FE&lt;/code&gt; between its two 24-bit halves
and inverting the universal/local bit of the MAC address. Refer to Appendix A of
&lt;a href=&quot;https://datatracker.ietf.org/doc/html/rfc4291&quot; target=&quot;_blank&quot;&gt;RFC 4291&lt;/a&gt; for more detail.&lt;/p&gt;
&lt;p&gt;The following two examples retrieve the EUI-64 link local address of a
dedicated iLO 6 network port, its MAC address and the value of the
&lt;code&gt;RFC7217Enabled&lt;/code&gt; property. Notice the &lt;code&gt;FF:FE&lt;/code&gt; string in the link
local address and the similarity between the identifier and the
MAC address.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# iLOrest example
ilorest login ilo-lio365g11-1 -u &amp;#x3C;username&gt; -p password
ilorest get IPv6Addresses/Address MACAddress Oem/Hpe/IPv6/RFC7217Enabled  \
        --select EthernetInterface.                                       \
        --filter Name=&quot;Manager Dedicated*&quot; --json

{
  &quot;IPv6Addresses&quot;: [
    {
      &quot;Address&quot;: &quot;FE80::5EED:8CFF:FE01:D7C&quot;
    }
  ],
  &quot;MACAddress&quot;: &quot;5C:ED:8C:01:0D:7C&quot;,
  &quot;Oem&quot;: {
    &quot;Hpe&quot;: {
      &quot;IPv6&quot;: {
        &quot;RFC7217Enabled&quot;: false
      }
    }
  }
}
ilorest logout
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Same example with cURL
curl --noproxy \* --insecure --silent --location --user &amp;#x3C;ilo-user&gt;:password  \
     https://ilo-lio365g11-1/redfish/v1/Managers/1/EthernetInterfaces/1 |    \
     jq &apos;{&quot;MACAddress&quot;:.MACAddress, &quot;IPv6Address&quot;:.IPv6Addresses[].Address, &quot;RFC7217Enabled&quot;: .Oem.Hpe.IPv6.RFC7217Enabled}&apos;

{
  &quot;MACAddress&quot;: &quot;5C:ED:8C:01:0D:7C&quot;,
  &quot;IPv6Address&quot;: &quot;FE80::5EED:8CFF:FE01:D7C&quot;,
  &quot;RFC7217Enabled&quot;: false
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The following example retrieves the same properties as in the previous
example, but against an iLO with the &lt;code&gt;RFC7217Enabled&lt;/code&gt; property set
to &lt;code&gt;true&lt;/code&gt;. In this example, the link local address does not contain
the &lt;code&gt;FF:FE&lt;/code&gt; string and the identifier has no similarities with the MAC address.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;ilorest login ilo-lio365g11-2 -u &amp;#x3C;username&gt; -p password
ilorest get IPv6Addresses/Address MACAddress Oem/Hpe/IPv6/RFC7217Enabled  \
        --select EthernetInterface.                                       \
        --filter Name=&quot;Manager Dedicated*&quot; --json
{
  &quot;IPv6Addresses&quot;: [
    {
      &quot;Address&quot;: &quot;FE80::8751:5A51:F30F:F676&quot;
    }
  ],
  &quot;MACAddress&quot;: &quot;5C:ED:8C:01:0E:BE&quot;,
  &quot;Oem&quot;: {
    &quot;Hpe&quot;: {
      &quot;IPv6&quot;: {
        &quot;RFC7217Enabled&quot;: true
      }
    }
  }
}
ilorest logout
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;How to completely disable IPv6 in iLO?&lt;/h3&gt;
&lt;p&gt;This question is a bit outside the core subject of this blog post but, as it is
a recurring question and there is enough context here to answer it clearly, I thought I would address it.&lt;/p&gt;
&lt;p&gt;The answer is: &lt;em&gt;Disable DHCPv6 as well as SLAAC&lt;/em&gt; as shown in Figure 2 below.
I can hear you reply that you did that, but you can still see the link local address,
as in Figure 2 below. This is expected.&lt;/p&gt;
&lt;p&gt;As stated previously, SLAAC is always enabled in the iLO OS kernel and cannot be disabled.
As a consequence, the IPv6 link local SLAAC address is always present in both the iLO Graphical User Interface (GUI)
and Redfish responses. However, you will not find any SLAAC addresses resulting from an
&lt;a href=&quot;https://radvd.litech.org/&quot; target=&quot;_blank&quot;&gt;advertised prefix&lt;/a&gt; or a DHCP server.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/fig2-ipv6-link-local.png&quot; alt=&quot;Figure 2: SLAAC link local visible when SLAAC disabled&quot; title=&quot;Figure 2: SLAAC link local visible when SLAAC disabled&quot;&gt;&lt;/p&gt;
&lt;figcaption&gt;Figure 2: SLAAC link local visible when SLAAC disabled&lt;/figcaption&gt;
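&lt;p&gt;For completeness, the same settings can also be changed through Redfish. The following cURL sketch assumes that your iLO firmware exposes the standard Redfish &lt;code&gt;DHCPv6&lt;/code&gt; and &lt;code&gt;StatelessAddressAutoConfig&lt;/code&gt; properties on the management interface; verify the exact property names against your iLO API reference before using it.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Hedged sketch: disable DHCPv6 and IPv6 SLAAC through Redfish.
# Property names follow the standard Redfish EthernetInterface schema and
# must be validated against your iLO firmware version.
curl --noproxy \* --insecure --silent --location   \
     --user username:password                      \
     --request PATCH                               \
     --header &quot;Content-Type: application/json&quot;     \
     --data &apos;{&quot;DHCPv6&quot;: {&quot;OperatingMode&quot;: &quot;Disabled&quot;}, &quot;StatelessAddressAutoConfig&quot;: {&quot;IPv6AutoConfigEnabled&quot;: false}}&apos; \
     https://ilo-lio365g11-1/redfish/v1/Managers/1/EthernetInterfaces/1
&lt;/code&gt;&lt;/pre&gt;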
&lt;h3&gt;How to compute an EUI-64 link local&lt;/h3&gt;
&lt;p&gt;The easiest way to compute the EUI-64 link local address from a MAC address
is with the &lt;code&gt;ipv6calc&lt;/code&gt;
&lt;a href=&quot;https://www.deepspace6.net/projects/ipv6calc.html&quot;
target=&quot;_blank&quot;&gt;utility&lt;/a&gt;. Use your favorite package manager on modern Linux
systems, or compile the sources to run it on Microsoft Windows.&lt;/p&gt;
&lt;p&gt;The following block of code transforms a MAC address into its EUI-64 equivalent.
By prepending &lt;code&gt;FE80::&lt;/code&gt; to the output of the &lt;code&gt;ipv6calc&lt;/code&gt; command,
you will get the complete SLAAC link-local address (second command of next example).&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;ipv6calc --quiet --in mac 94:18:82:71:A0:7A --out eui64
9618:82ff:fe71:a07a

echo -n &quot;fe80::&quot; ; ipv6calc --quiet --in mac 94:18:82:71:A0:7A --out eui64
fe80::9618:82ff:fe71:a07a
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Zone indexes&lt;/h3&gt;
&lt;p&gt;From your brick-saver, when you want to initiate a communication with a link-local IPv6 address,
you have to tell the operating system which network interface to use to send the packets.
This is because addresses starting with the &lt;code&gt;FE80::/10&lt;/code&gt; prefix are present on all
the networks directly connected to the system.&lt;/p&gt;
&lt;p&gt;You can retrieve the zone indexes (also called interface indexes) used by your brick-saver
with the following Windows PowerShell command:
&lt;code&gt;Get-NetAdapter | Where-Object { $_.Status -eq &apos;Up&apos; } | Select-Object Name, InterfaceIndex&lt;/code&gt;.
The output is generally an integer.&lt;/p&gt;
&lt;p&gt;The equivalent on Linux is:
&lt;code&gt;ip link list | awk -F: &apos;/LOWER_UP/ {print $2}&apos;&lt;/code&gt;. The output is generally a string like &lt;code&gt;eth0&lt;/code&gt;, &lt;code&gt;eno2&lt;/code&gt; or &lt;code&gt;enp1s0f4u4&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Once you have identified the interface to go through, the complete URL representation of a lost iLO IPv6 target is:
&lt;code&gt;https://[ip:v6::address%ZoneIndex]:PortNumber&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Although this syntax is standard, most browsers (Chrome, Brave, Firefox, Edge) don&apos;t support it. They don&apos;t treat this string as a valid URL
to follow, but as a string to search for on the Internet.&lt;/p&gt;
&lt;p&gt;However, command line tools like &lt;code&gt;cURL&lt;/code&gt;,
&lt;a href=&quot;https://github.com/HewlettPackard/python-redfish-utility/releases/latest&quot;
target=&quot;_blank&quot;&gt;HPE iLOrest&lt;/a&gt;,
the &lt;code&gt;Invoke-WebRequest&lt;/code&gt; and &lt;code&gt;Invoke-RestMethod&lt;/code&gt; PowerShell Cmdlets support it. Here are some examples that retrieve the IPv4 configuration of the iLO dedicated
network port using the IPv6 link local address and zone index.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;curl --noproxy \* --insecure --silent --location \
     --user username:password                    \
     https://[FE80::9618:82FF:FE71:A07B%eno1]/redfish/v1/Managers/1/EthernetInterfaces/1 | \
     jq &apos;.IPv4Addresses[]&apos;
{
  &quot;Address&quot;: &quot;192.168.1.45&quot;,
  &quot;AddressOrigin&quot;: &quot;Static&quot;,
  &quot;Gateway&quot;: &quot;192.168.1.1&quot;,
  &quot;SubnetMask&quot;: &quot;255.255.252.0&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;ilorest login https://[FE80::9618:82FF:FE71:A07B%eno1] -u &amp;#x3C;username&gt; -p password
ilorest select EthernetInterface.
ilorest get Ipv4Addresses --filter Name=&quot;Manager Dedicated*&quot; --json
{
  &quot;IPv4Addresses&quot;: [
    {
      &quot;Address&quot;: &quot;192.168.1.45&quot;,
      &quot;AddressOrigin&quot;: &quot;Static&quot;,
      &quot;Gateway&quot;: &quot;192.168.1.1&quot;,
      &quot;SubnetMask&quot;: &quot;255.255.252.0&quot;
    }
  ]
}

ilorest logout
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Use of PowerShell core (7) and Redfish basic authentication
$Credentials = &quot;username:password&quot;
$base64EncodedCreds = [Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($Credentials))
$Headers = @{}
$Headers[&quot;Content-Type&quot;] = &quot;application/json&quot;
$Headers[&quot;Authorization&quot;] = &quot;Basic $base64EncodedCreds&quot;

$Ipv6Address=&quot;FE80::9618:82FF:FE71:A07B%22&quot;
$Uri=&quot;https://[$Ipv6Address]/redfish/v1/Managers/1/EthernetInterfaces/1&quot;

$response = Invoke-WebRequest -Uri $Uri -Method GET -Headers $Headers -SkipCertificateCheck
$JsonResponse = $response.Content | ConvertFrom-Json
$JsonResponse.IPv4Addresses
Address      AddressOrigin Gateway     SubnetMask
-------      ------------- -------     ----------
192.168.1.47 Static        192.168.1.1 255.255.252.0
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Multicast addresses&lt;/h3&gt;
&lt;p&gt;An important concept in IPv6 is the absence of broadcast addresses. Instead, IPv6 uses multicast addresses. The exhaustive list of the pre-defined multicast addresses is present in &lt;a href=&quot;https://datatracker.ietf.org/doc/html/rfc4291#section-2.7.1&quot; target=&quot;_blank&quot;&gt;RFC 4291&lt;/a&gt;. Among them is the all nodes &lt;code&gt;FF02::1&lt;/code&gt; address.&lt;/p&gt;
&lt;p&gt;From your brick-saver, when you issue a &lt;code&gt;ping FF02::1%Index&lt;/code&gt; command, each and every IPv6 interface present on the LAN must reply with its link local address.
This lets you build the table of all your IPv6 neighbors. You will find more detail on how to use this table in the second scenario below.&lt;/p&gt;
&lt;h2&gt;Recovery process&lt;/h2&gt;
&lt;p&gt;The recovery process presented in this blog post relies entirely on EUI-64 IPv6 link local addresses, and thus on MAC addresses. If you don&apos;t have
the list of the affected iLO MAC addresses, you need to go through the next paragraph.&lt;/p&gt;
&lt;p&gt;You can skip the next paragraph and go directly to the second scenario if you already have that list and your systems are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;iLO 5 based&lt;/li&gt;
&lt;li&gt;iLO 6 (or later) based with &lt;a href=&quot;https://servermanagementportal.ext.hpe.com/docs/redfishservices/ilos/ilo6/ilo6_161/ilo6_network_resourcedefns161/#oemhpeipv6&quot; target=&quot;_blank&quot;&gt;RFC 7217&lt;/a&gt; turned off&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;MAC address discovery&lt;/h3&gt;
&lt;p&gt;The MAC address discovery of the lost iLOs is performed using the IPv6 neighbor table maintained on your brick-saver.&lt;/p&gt;
&lt;p&gt;The overall method used is the following:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Make sure affected servers are powered on. This step can only be done by a human being present in the computer room. Then, hide their iLO port from the network by physically disconnecting the network cable. You could unplug the server power cables to achieve this goal, but this would also hide OS interfaces connected to the management LAN. Those interfaces would then re-appear in step 5 below and pollute the discovered MAC address list, even though they are not iLO interfaces.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Connect to your brick-saver, flush the neighbor table, solicit IPv6 neighbors and save the table in a file.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Interface `eno1` is connected to the management network
ip -6 neigh flush dev eno1  # Flush interface eno1&apos;s neighbor table
ping -6 -c 3 ff02::1%eno1   # Start neighbor solicitation
sleep 15                    # Wait for solicitation process to finish
ip -6 neigh show dev eno1 | \
   sort -u  &gt; File1.txt     # Sort neighbor table and save it
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Reconnect affected iLO cables.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Repeat step 2.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Interface `eno1` is connected to the management network
ip -6 neigh flush dev eno1  # Flush interface eno1 neighbor table
ping -6 -c 3 ff02::1%eno1   # Start neighbor solicitation
sleep 15                    # Wait for neighbor solicitation to finish
ip -6 neigh show dev eno1 | \
   sort -u  &gt; File2.txt     # Sort neighbor table and save it
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The list of affected iLO link local and MAC addresses is obtained by computing the difference of the files saved in steps 2 and 4.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# The following command stores discovered link local IPv6 addresses
# in variable LL_LIST
LL_LIST=$(diff File1.txt File2.txt | awk &apos;/^&gt;/ {print $2}&apos;)
echo $LL_LIST
fe80::5eed:8cff:fe01:d7c fe80::4645:c007:6bc2:bac6 fe80::5eed:8cff:fe6b:617
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You should verify that the list collected above contains Redfish conformant addresses, and only
Redfish conformant addresses.
This can be done by performing a GET request to each IPv6 link-local address
and extracting its &lt;code&gt;RedfishVersion&lt;/code&gt; property. If the list contains a non-Redfish conformant address,
the following &lt;code&gt;cURL/jq&lt;/code&gt; commands will report an error.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;for ll in $LL_LIST ; do
  curl --noproxy \* --insecure --silent https://[${ll}%eno1]/redfish/v1 | \
  jq &apos;.RedfishVersion&apos;
done
&quot;1.20.0&quot;
parse error: Invalid numeric literal at line 1, column 10
&quot;1.13.0&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;If your brick-saver is running Microsoft Windows, here are some hints
to perform the above procedure.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Start a privileged PowerShell session and retrieve Zone indexes
Get-NetAdapter | Where-Object { $_.Status -eq &apos;Up&apos; } | Select-Object Name, InterfaceIndex

# Show neighbors:
netsh interface ipv6 show neighbors interface=22 &gt; File.txt

# Delete neighbor table
netsh interface ipv6 delete neighbors interface=22

# Populate neighbor table
Test-Connection -Count 1 -ComputerName ff02::1%22

Compare-Object (Get-Content File1.txt) (Get-Content File2.txt) |  Where-Object { $_.SideIndicator -eq &apos;=&gt;&apos; } | Select-Object InputObject
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Known MAC addresses and RFC 7217 disabled&lt;/h3&gt;
&lt;p&gt;If you have the list of lost MAC addresses and you know that
&lt;a href=&quot;https://datatracker.ietf.org/doc/html/rfc7217&quot; target=&quot;_blank&quot;&gt;RFC 7217&lt;/a&gt; is set to disabled, you just have to compute the corresponding IPv6 link-local addresses with the &lt;code&gt;ipv6calc&lt;/code&gt;
tool and verify their accuracy.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;NOTE: RFC 7217 is disabled by default as explained &lt;a href=&quot;#ipv6-link-local-addresses&quot;&gt;above&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The following example computes and stores IPv6 link local addresses
in a variable. Then it tests whether they correspond to a
Redfish conformant (iLO) address by requesting the property &lt;code&gt;RedfishVersion&lt;/code&gt; from the
&lt;code&gt;ServiceRoot&lt;/code&gt; URI.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;MacList=&quot;5C:ED:8C:01:0D:7C 5C:ED:8C:6B:06:17&quot;
LL_LIST=&quot;$(for mac in $MacList ;
              do echo -n &quot;fe80::&quot; ; ipv6calc --quiet --in mac $mac --out eui64
           done
         )&quot;

for ll in $LL_LIST ; do
    curl --noproxy \* --insecure --silent https://[${ll}%eno1]/redfish/v1 | \
         jq &apos;.RedfishVersion&apos;
done
&quot;1.20.0&quot;
&quot;1.13.0&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can use iLOrest to verify that the computed list of link local addresses
corresponds to iLO addresses.
However, you will need to provide credentials, even for accessing the Redfish root URI.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;u=&quot;ilo-user&quot;
p=&quot;ilo-password&quot;

for ll in $LL_LIST ; do
    ilorest --nologo login [${ll}%eno1] -u $u -p $p &gt;/dev/null
    ilorest --nologo get RedfishVersion --select ServiceRoot
    ilorest --nologo logout &gt;/dev/null
done
RedfishVersion=1.20.0
RedfishVersion=1.13.0
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Configuration recovery&lt;/h3&gt;
&lt;p&gt;The last step to completely recover the lost iLOs is to reconfigure their
network parameters. From the brick-saver, the following example
shows a generic PATCH request and its associated payload
that applies a minimal static IPv4 configuration to a lost iLO.&lt;/p&gt;
&lt;p&gt;With the previous examples, it should be easy for you to adapt it
for iLOrest, cURL or PowerShell. Other examples for configuring iLO network
parameters are presented in the
&lt;a href=&quot;https://servermanagementportal.ext.hpe.com/docs/redfishclients/ilorest-userguide/examplecommandsscripts/#configure-ilo-ip-addresses&quot;
target=&quot;_blank&quot;&gt;documentation&lt;/a&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-text&quot;&gt;PATCH request URL
https://[FE80::9618:82FF:FE71:A07B%eno1]/redfish/v1/Managers/1/EthernetInterfaces/1/

Payload:

{
    &quot;DHCPv4&quot;: {
        &quot;DHCPEnabled&quot;: false,
        &quot;UseDNSServers&quot;: false,
        &quot;UseDomainName&quot;: false,
        &quot;UseGateway&quot;: false,
        &quot;UseNTPServers&quot;: false,
        &quot;UseStaticRoutes&quot;: false
    },
    &quot;IPv4StaticAddresses&quot;: [
        {
            &quot;Address&quot;: &quot;192.168.1.44&quot;,
            &quot;Gateway&quot;: &quot;192.168.1.1&quot;,
            &quot;SubnetMask&quot;: &quot;255.255.252.0&quot;
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;
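&lt;p&gt;As a concrete illustration, here is one possible cURL adaptation of the generic request above. The credentials, the interface name (&lt;code&gt;eno1&lt;/code&gt;), the link local address and the IPv4 parameters are placeholders to replace with your own values.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Sketch: apply the minimal static IPv4 configuration shown above
curl --noproxy \* --insecure --silent --location   \
     --user username:password                      \
     --request PATCH                               \
     --header &quot;Content-Type: application/json&quot;     \
     --data &apos;{&quot;DHCPv4&quot;: {&quot;DHCPEnabled&quot;: false}, &quot;IPv4StaticAddresses&quot;: [{&quot;Address&quot;: &quot;192.168.1.44&quot;, &quot;Gateway&quot;: &quot;192.168.1.1&quot;, &quot;SubnetMask&quot;: &quot;255.255.252.0&quot;}]}&apos; \
     https://[FE80::9618:82FF:FE71:A07B%eno1]/redfish/v1/Managers/1/EthernetInterfaces/1/
&lt;/code&gt;&lt;/pre&gt;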
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;This article presented a method, based on simple PowerShell and Linux commands,
to find servers lost after a
move from one network to another, without any configuration preparation prior to the move.&lt;/p&gt;
&lt;p&gt;As this method uses IPv6, many of its concepts have been introduced,
including the autoconfiguration process. If you would like more detail on autoconfiguration,
I suggest reading the corresponding section I wrote some time ago in the
&lt;a href=&quot;https://ipj.dreamhosters.com/wp-content/uploads/issues/2004/ipj07-2.pdf&quot;
target=&quot;_blank&quot;&gt;Internet Protocol Journal&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;And don&apos;t forget to check out some of my other
&lt;a href=&quot;https://developer.hpe.com/search/?term=donze&quot; target=&quot;_blank&quot;&gt;blog posts&lt;/a&gt; on the HPE Developer Community portal to learn more about Redfish tips and tricks.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[VLAN Versus VXLAN]]></title><description><![CDATA[In a modern data center or large-scale network environment, network segmentation and isolation are key to optimizing both performance and…]]></description><link>https://developer.hpe.com/vlan-versus-vxlan/</link><guid isPermaLink="false">https://developer.hpe.com/vlan-versus-vxlan/</guid><pubDate>Mon, 18 Nov 2024 14:57:37 GMT</pubDate><content:encoded>&lt;style&gt; li { font-size: 27px; line-height: 33px; max-width: none; } &lt;/style&gt;
&lt;p&gt;In a modern data center or large-scale network environment, &lt;strong&gt;network segmentation&lt;/strong&gt; and isolation are key to optimizing both &lt;strong&gt;performance and security&lt;/strong&gt;. &lt;strong&gt;Two popular technologies&lt;/strong&gt; that address this need are &lt;strong&gt;VLANs and VXLANs&lt;/strong&gt;, but they serve slightly different purposes and have distinct configurations. Understanding the nuances of both is essential for network engineers, systems administrators, and architects when deciding how to structure their networks.&lt;/p&gt;
&lt;p&gt;While the two technologies appear very similar from a high level, they &lt;strong&gt;operate at different layers&lt;/strong&gt; in the networking stack and differ in capabilities, scalability, and use cases. In this post, I&apos;ll define what VLANs and VXLANs are, detail their headers, show how they are configured, and explain the benefits of using VXLAN over VLAN.&lt;/p&gt;
&lt;h3&gt;What is VLAN?&lt;/h3&gt;
&lt;p&gt;A &lt;strong&gt;VLAN (Virtual Local Area Network)&lt;/strong&gt; operates at &lt;strong&gt;Layer 2&lt;/strong&gt; and segments a physical network into multiple isolated networks through the use of a &lt;strong&gt;VLAN-ID&lt;/strong&gt;, a &lt;strong&gt;12-bit value ranging from 1 to 4094&lt;/strong&gt; (0 and 4095 are reserved). Different ports on a networking device can be assigned to different VLAN-IDs. The networking device ensures that incoming or outgoing traffic is forwarded based on the VLAN-ID present in the Ethernet frames.&lt;/p&gt;
&lt;p&gt;So, if the physical network corresponds to Layer 1, the segmentation of that network using VLANs happens at Layer 2, the data link layer.&lt;/p&gt;
&lt;h3&gt;What is VXLAN?&lt;/h3&gt;
&lt;p&gt;A &lt;strong&gt;VXLAN (Virtual Extensible Local Area Network)&lt;/strong&gt; is an encapsulation protocol designed to &lt;strong&gt;extend Layer 2 networks over Layer 3&lt;/strong&gt; infrastructure. Because of this encapsulation, VXLAN is also called an IP tunneling protocol: it carries Layer 2 traffic over a Layer 3 network for &lt;strong&gt;wide area deployments&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;A VXLAN segment is identified by a &lt;strong&gt;VNI (VXLAN Network Identifier)&lt;/strong&gt;, which is 24 bits long. The Layer 2 Ethernet frames are wrapped in &lt;strong&gt;outer headers carrying the VNI&lt;/strong&gt;. The VXLAN encapsulation is done by a &lt;strong&gt;VTEP (VXLAN tunnel endpoint)&lt;/strong&gt;: a networking device configured as a VTEP adds the VXLAN header to the Ethernet frame.&lt;/p&gt;
&lt;h4&gt;How does VXLAN encapsulation work?&lt;/h4&gt;
&lt;p&gt;A VTEP must be configured on a networking device, which encapsulates and decapsulates Ethernet frames into and out of VXLAN packets. Intermediate networking devices route these VXLAN packets like any other Layer 3 packet.&lt;/p&gt;
&lt;p&gt;The VXLAN-encapsulated packet, i.e. the outer headers plus the original frame, contains:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;1. Outer Ethernet Header (Layer 2)&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Destination MAC Address:&lt;/strong&gt; The MAC address of the next-hop device.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Source MAC address:&lt;/strong&gt; The MAC address of the VTEP device sending the encapsulated frame.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Ether Type:&lt;/strong&gt; Typically, &lt;code&gt;0x0800&lt;/code&gt; for IPv4 or &lt;code&gt;0x86DD&lt;/code&gt; for IPv6.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;2. Outer IP Header (Layer 3)&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Source IP Address&lt;/strong&gt;: The IP address of the VXLAN tunnel endpoint (VTEP) originating the packet.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Destination IP Address&lt;/strong&gt;: The IP address of the remote VTEP, where the VXLAN frame is being sent.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Protocol&lt;/strong&gt;: Usually, this is set to &lt;code&gt;0x11&lt;/code&gt;, which signifies UDP (User Datagram Protocol).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;3. Outer UDP Header (Layer 4)&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Source Port&lt;/strong&gt;: Typically randomized by the sending VTEP.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Destination Port&lt;/strong&gt;: Fixed at &lt;code&gt;4789&lt;/code&gt;, which is the well-known port for VXLAN.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Length&lt;/strong&gt;: Specifies the length of the UDP datagram, including the VXLAN header.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;4. VXLAN Header  (Layer 4)&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The VXLAN header is inserted within the UDP payload.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;VXLAN Network Identifier (VNI, 24 bits)&lt;/strong&gt;: A 24-bit field used to identify the VXLAN segment or virtual network.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;5. Original Ethernet Frame (Layer 2)&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;This is the original Ethernet frame, including the original source MAC address, destination MAC address, and EtherType; it may also contain a VLAN header, an IP header, a UDP header, and application data.&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;Why do we need VXLAN?&lt;/h4&gt;
&lt;p&gt;With VLANs being used for segmentation across and within data centers, the Spanning Tree Protocol (STP) is used to build a loop-free logical topology for Ethernet networks. The basic function of STP is to prevent bridge loops and the broadcast radiation that results from them. &lt;strong&gt;STP disables links&lt;/strong&gt; that are not part of the spanning tree, leaving a single active path between two network nodes. This can result in a &lt;strong&gt;large number of disabled links&lt;/strong&gt;, an issue that can be &lt;strong&gt;resolved by using a Layer 3 virtual tunnel over the Layer 2 network&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Another issue that &lt;strong&gt;VXLANs resolve&lt;/strong&gt; is that of &lt;strong&gt;scalability&lt;/strong&gt;. A 12-bit VLAN ID is used in the Ethernet data frames to divide the larger Layer 2 network into multiple broadcast domains. As such, VLANs can only be used for data centers that require fewer than 4094 VLANs, limiting their scalability.&lt;/p&gt;
&lt;p&gt;Consider a situation where there are 24 or 48 servers connected to a Top-of-the-Rack (ToR) switch. Instead of just one MAC address per server link, the &lt;strong&gt;switch must learn all the MAC addresses of the virtual machines (VMs)&lt;/strong&gt; communicating across the servers. This &lt;strong&gt;inadequate MAC address table issue&lt;/strong&gt; would be &lt;strong&gt;addressed with the Layer 3 tunnel over the Layer 2 frames&lt;/strong&gt;.&lt;/p&gt;
&lt;h3&gt;VLAN Header in Ethernet Frame&lt;/h3&gt;
&lt;p&gt;This image illustrates the structure of an &lt;strong&gt;Ethernet frame with a VLAN tag&lt;/strong&gt;. As you can see, the VLAN header is inserted after the source MAC address. VLAN header contains a &lt;strong&gt;4-byte VLAN tag field used to identify the VLAN-ID&lt;/strong&gt; from which the packet originated. The format of VLAN-tagged frames is defined in IEEE 802.1Q standard.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/picture1.png&quot; alt=&quot;VLAN Header&quot; title=&quot;VLAN Header&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The &lt;strong&gt;VLAN tag&lt;/strong&gt; displays the following fields:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;TPID&lt;/strong&gt; – By default, VLAN-tagged packets carry a TPID of 0x8100.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Priority&lt;/strong&gt; – Indicates the 802.1p priority of the frame.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;CFI (Canonical Format Indicator)&lt;/strong&gt; – 0 by default; 0 indicates MAC addresses are in the standard (canonical) format, 1 indicates a non-standard format.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;VLAN-ID&lt;/strong&gt; – A 12-bit value identifying the VLAN.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;VXLAN Header in Ethernet Frame&lt;/h3&gt;
&lt;p&gt;This image illustrates the structure of VXLAN encapsulated Ethernet frame including the VXLAN header.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/picture2.png&quot; alt=&quot;VXLAN Header&quot; title=&quot;VXLAN Header&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;VXLAN Header&lt;/strong&gt; contains the following fields:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Flags&lt;/strong&gt; –  The &lt;code&gt;&quot;I&quot;&lt;/code&gt; bit would be set to 1 for a valid VXLAN Network ID (VNI).  The other reserved 7 bits would be set to zero.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;VXLAN-ID&lt;/strong&gt; –  This is the 24 bit value used to designate the VXLAN overlay network.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;VLAN and VXLAN Configuration:&lt;/h3&gt;
&lt;p&gt;To understand VXLAN further, let us go through the &lt;strong&gt;configuration of VLAN, VXLAN, VNI and VTEP.&lt;/strong&gt; Each VNI is mapped to the VLAN it carries.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/picture3.png&quot; alt=&quot;VLAN, VXLAN Configuration&quot; title=&quot;VLAN, VXLAN Configuration&quot;&gt;&lt;/p&gt;
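&lt;p&gt;If you prefer a command-line illustration of the same ideas, the following Linux (iproute2) sketch creates a VLAN sub-interface and a VTEP, then bridges them so that VLAN 100 traffic is carried in VNI 5000. The interface names, IDs, and IP addresses are hypothetical; switch CLIs such as AOS-CX express the same mapping with their own syntax.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Hypothetical Linux example: map VLAN 100 to VXLAN VNI 5000

# 802.1Q sub-interface carrying VLAN 100 on eth0
ip link add link eth0 name eth0.100 type vlan id 100

# VTEP encapsulating Layer 2 frames into VNI 5000 over UDP port 4789
ip link add vxlan5000 type vxlan id 5000 dstport 4789 \
   local 10.1.1.1 remote 10.1.1.2 dev eth0

# Bridge the two interfaces so that VLAN 100 traffic enters the VXLAN tunnel
ip link add br100 type bridge
ip link set eth0.100 master br100
ip link set vxlan5000 master br100
ip link set dev br100 up
&lt;/code&gt;&lt;/pre&gt;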
&lt;h3&gt;FAQs&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Will VXLAN replace VLAN standard?&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The VLAN standard is specified by IEEE in 802.1Q, while the VXLAN standard is specified by the IETF in RFC 7348. VXLAN encapsulates the VLAN-tagged packet with four outer headers to transmit it over an IP-based network; it does not replace the VLAN standard.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Does VXLAN always have an outer VLAN header?&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The outer VLAN header in a VXLAN-encapsulated packet is optional; it depends on the deployment scenario.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Does VXLAN only solve the scalability issue?&lt;/p&gt;
&lt;p&gt;Apart from scalability issues, VXLAN addresses the issues below.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The VXLAN encapsulated packet contains an IP header, so VXLAN can be used to carry Layer 2 traffic across an IP-based core network.&lt;/li&gt;
&lt;li&gt;The VNI (VXLAN Network Identifier) serves as a unique identifier for a Layer 2 segment (or virtual network) within a VXLAN environment. VXLAN-based network segmentation solves the two issues below:&lt;/li&gt;
&lt;li&gt;It allows the MAC addresses of devices (such as virtual machines or VMs) within different VXLAN segments to overlap without causing any conflict or risk of traffic &quot;crossover&quot; between virtual networks.&lt;/li&gt;
&lt;li&gt;It allows multiple VMs to exist within the same physical Layer 3 network but remain logically isolated in their own VXLAN segments, even if they have overlapping IP addresses and MAC addresses.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;What’s the difference between QinQ and VXLAN?&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;QinQ (also known as 802.1ad or stacked VLANs) is a Layer 2 technology that adds an outer VLAN tag to the original Ethernet frame, allowing for a hierarchical VLAN structure. This technology is used in service provider environments, where multiple customers need to share the same physical network while maintaining their own isolated virtual networks.&lt;/li&gt;
&lt;li&gt;VXLAN is primarily used in data centers and cloud networks to create large-scale virtualized Layer 2 networks over a Layer 3 network.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h4&gt;Summary&lt;/h4&gt;
&lt;p&gt;To summarize, both &lt;strong&gt;VLAN and VXLAN&lt;/strong&gt; are valuable tools for &lt;strong&gt;network segmentation&lt;/strong&gt;, but they serve different needs. VLANs remain a simple and effective choice for smaller, less complex networks, while VXLAN provides the &lt;strong&gt;scalability and flexibility&lt;/strong&gt; needed for modern, distributed, and virtualized environments. Understanding the technical differences in configuration and their respective use cases will help you choose the right solution for your network architecture, ensuring optimal performance, security, and scalability. Both VLAN and VXLAN are used together to solve scalability issues and to &lt;strong&gt;overcome the limitations of VLAN&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;To learn more about VLANs and VXLANs, please check out this documentation here (&lt;a href=&quot;https://www.arubanetworks.com/techdocs/AOS-CX/10.12/PDF/vxlan.pdf&quot;&gt;https://www.arubanetworks.com/techdocs/AOS-CX/10.12/PDF/vxlan.pdf)&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Developer assistance for those looking to work with Ampere ARM processors]]></title><description><![CDATA[The HPE ProLiant RL300 Gen11 is renowned as a best-in-class compute platform for cloud-native workloads. Part of what makes it so special is…]]></description><link>https://developer.hpe.com/developer-assistance-for-those-looking-to-work-with-ampere-arm-processors/</link><guid isPermaLink="false">https://developer.hpe.com/developer-assistance-for-those-looking-to-work-with-ampere-arm-processors/</guid><pubDate>Mon, 18 Nov 2024 11:37:14 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;p&gt;The HPE ProLiant RL300 Gen11 is renowned as a best-in-class compute platform for cloud-native workloads. Part of what makes it so special is its use of the Ampere® Arm processors. Many of today’s progressive, open-source-centric customers are quickly pivoting to these AArch64 processors for the predictable performance they deliver at the scale required for cloud processing. Others are still looking for assistance on how to make the transition from x86.&lt;/p&gt;
&lt;p&gt;We recently published a couple of blog posts on the HPE Developer Community portal that provide some assistance. The first, &lt;a href=&quot;https://developer.hpe.com/blog/simplifying-code-migration-the-benefits-of-the-new-ampere-porting-advisor-for-x86-to-arm64/&quot;&gt;Simplifying code migration: The benefits of the new Ampere Porting Advisor for x86 to ARM64&lt;/a&gt;, describes a new open-source software porting advisor that promises to simplify the porting process. Another post introduces the &lt;a href=&quot;https://amperecomputing.com/blogs/ampere-performance-toolkit-announcement&quot;&gt;Ampere Performance Toolkit&lt;/a&gt;, which provides an automated way to run and benchmark important application data.&lt;/p&gt;
&lt;p&gt;In our research on how to best assist developers in dealing with these processors, we came across the &lt;a href=&quot;https://amperecomputing.com/blogs/ampere-launches-developer-program-for-the-cloud-computing-community&quot;&gt;Ampere Developer Program for the Cloud Computing Community&lt;/a&gt;. On this site, you’ll find:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Porting and tuning utilities&lt;/li&gt;
&lt;li&gt;Developer outreach technical documents and resources&lt;/li&gt;
&lt;li&gt;Customer quick start guides&lt;/li&gt;
&lt;li&gt;And resources for every stage of development&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;We understand that this site will continue to expand to include things like:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Workload briefs&lt;/li&gt;
&lt;li&gt;Transition and tuning guides&lt;/li&gt;
&lt;li&gt;Reference architectures&lt;/li&gt;
&lt;li&gt;Developer blog posts&lt;/li&gt;
&lt;li&gt;Tutorials&lt;/li&gt;
&lt;li&gt;And case studies&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you’re developing on any HPE systems that are using the Ampere ARM processor, you may wish to consult this Ampere Developer Program for more resources.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Navier-Stokes in Chapel — Distributed Cavity-Flow Solver]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/navier-stokes-in-chapel-—-distributed-cavity-flow-solver/</link><guid isPermaLink="false">https://developer.hpe.com/navier-stokes-in-chapel-—-distributed-cavity-flow-solver/</guid><pubDate>Thu, 14 Nov 2024 19:46:35 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Getting started with Retrieval Augmented Generation (RAG)]]></title><description><![CDATA[Keeping up with AI technology and understanding more about Generative AI and LLM (Large Language Models) is quickly becoming an imperative…]]></description><link>https://developer.hpe.com/getting-started-with-retrieval-augmented-generation-rag/</link><guid isPermaLink="false">https://developer.hpe.com/getting-started-with-retrieval-augmented-generation-rag/</guid><pubDate>Thu, 14 Nov 2024 07:42:05 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;p&gt;Keeping up with AI technology and understanding more about Generative AI and LLM (Large Language Models) is quickly becoming an imperative. If you are like me, curious but not that much of a data scientist or a mathematician, you don&apos;t really want to deep dive into the gory details of a model and, instead, would prefer to consider them as interchangeable black boxes with specific properties and application ranges. Still, it’s important to understand how to consume some of these foundation models in order to build something for your own business. Let’s say, for example, that you want to use AI to help customers on your commercial web site, something often referred to as a Chatbot.  &lt;/p&gt;
&lt;p&gt;This blog post will show you how to build a simple questions-answering generative AI application using the LangChain orchestration framework powered by a large language model with a private data source to augment the knowledge of the model and answer the question more effectively. &lt;/p&gt;
&lt;h2&gt;Using a foundation model &lt;/h2&gt;
&lt;p&gt;LLMs fall into a category called foundation models (FMs). They are large deep learning neural networks trained on massive datasets. You’ve probably heard about, and even used, foundation models like OpenAI’s GPT, Anthropic’s Claude, or Meta’s Llama. Models like these have been trained on such large datasets that they can answer all kinds of questions from users. You might wonder: why can’t I just use such a foundation model, like GPT or Llama3, on my website to create a custom generative AI application? Unfortunately, because the model was not trained on your private data, i.e. the details of the products you sell on your website, there is little chance that it would be able to provide a valid response to your customer inquiries. &lt;/p&gt;
&lt;p&gt;At this point, you have two choices:  &lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;You can invest time and money into finetuning (training) a large language model that includes your product information or private data. This can be complex, expensive, and resource intensive, especially if you finetune the model each time there is new data you want it to know: it requires the right skill set to do the work correctly, as well as a lot of compute resources, including CPU, GPU, power, and cooling. &lt;/li&gt;
&lt;li&gt;You can use the most appropriate pre-trained foundation model that’s available and augment its knowledge with your own data, all without the need to train the model. This is what we call Retrieval Augmented Generation (RAG). &lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;RAG to the rescue &lt;/h2&gt;
&lt;p&gt;RAG combines a natural language processing (NLP) technique with a large language model (LLM). A RAG system augments the knowledge an LLM acquired during training with its own knowledge base. This allows the model to provide responses based on more specific or private content that would not be found in the public data used to train the foundation model.&lt;/p&gt;
&lt;h2&gt;The components of the RAG system &lt;/h2&gt;
&lt;p&gt;In order to build a RAG system, you would need the following: &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Access to an LLM (locally or in the cloud) such as Meta’s Llama3, OpenAI’s GPT, or many others. &lt;/li&gt;
&lt;li&gt;The documents you want to put in the RAG database. &lt;/li&gt;
&lt;li&gt;A Text Embedding model such as NVIDIA NeMo Retriever Text Embedding NIM to convert the documents into numerical vectors. &lt;/li&gt;
&lt;li&gt;A vector database such as open-source ones including Milvus, Qdrant, or Chroma. A vector database is a collection of data stored as mathematical representations. This is used to store the private data in the form of embeddings, also known as vectors. &lt;/li&gt;
&lt;li&gt;A chain server such as LangChain or LlamaIndex. This is responsible for coordinating the dialog between the user, the vector database, and the LLM by connecting the LLM to external data sources. &lt;/li&gt;
&lt;li&gt;You will also need a way for users to enter their prompt and read the responses from the system along with a mechanism for populating your private data into the vector database. The LangChain framework implements the prompt using a prompt template. &lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Seeing is believing &lt;/h2&gt;
&lt;p&gt;NVIDIA provides a good way to build this and get your hands (a little) dirty. You can find it on &lt;a href=&quot;https://github.com/NVIDIA/GenerativeAIExamples&quot;&gt;this GitHub repository&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Once the repository is cloned, you can select &lt;a href=&quot;https://github.com/NVIDIA/GenerativeAIExamples/tree/main/RAG/examples/basic_rag/langchain&quot;&gt;the basic RAG using LangChain&lt;/a&gt; example.   &lt;/p&gt;
&lt;p&gt;As explained in the &lt;a href=&quot;https://github.com/NVIDIA/GenerativeAIExamples/blob/main/RAG/examples/basic_rag/langchain/README.md&quot;&gt;README.md&lt;/a&gt;, the example uses NeMo Retriever Text Embedding NIM as text embedding model, Milvus for its vector database, LangChain for its chain server, and meta/llama3-70b-instruct for its LLM. &lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://github.com/NVIDIA/GenerativeAIExamples/raw/main/docs/images/basic_rag_langchain_arch.png&quot; alt=&quot;Architecture of the RAG example&quot; title=&quot;Architecture of the RAG example&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;: To get this working you will need the following: &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A machine with Ubuntu, Docker and Docker Compose (a GPU is not mandatory) &lt;/li&gt;
&lt;li&gt;An API key from &lt;a href=&quot;https://build.nvidia.com/explore/discover&quot;&gt;https://build.nvidia.com/explore/discover&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;An NGC API key, which you can get by joining NVIDIA developer program. Refer to these instructions to &lt;a href=&quot;https://docs.nvidia.com/ngc/gpu-cloud/ngc-user-guide/index.html#generating-api-key&quot;&gt;generate an NGC API key&lt;/a&gt;. &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Once you have this ready, you can use Docker Compose to build the solution and start it. Docker Compose will download from the NVIDIA container registry, build, and run the following:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ docker ps --format &quot;table {{.ID}}\t{{.Names}}\t{{.Status}}&quot; 
CONTAINER ID   	NAMES                    		STATUS 
436a1e0e1f2a   	milvus-standalone        		Up 18 hours 
c3a5bf578654   	rag-playground           		Up 18 hours 
78be16cb2ef4   	milvus-minio             		Up 18 hours (healthy) 
dc3bcc2c7b16   	chain-server             		Up 18 hours 
af87e14ff763   	milvus-etcd              		Up 18 hours (healthy) 
&lt;/code&gt;&lt;/pre&gt;
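&lt;p&gt;The sequence of commands that gets you to this point looks roughly like the following. This is only a sketch: the exact directory layout, the environment variables for the API keys, and the compose options are defined in the example&apos;s README and may change between releases of the repository.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Sketch of the build/start step; follow the README for the exact variables
git clone https://github.com/NVIDIA/GenerativeAIExamples.git
cd GenerativeAIExamples/RAG/examples/basic_rag/langchain

# Export your NVIDIA and NGC API keys as instructed in the README, then:
docker compose up -d --build

# Verify that the five containers listed above are running
docker ps --format &quot;table {{.ID}}\t{{.Names}}\t{{.Status}}&quot;
&lt;/code&gt;&lt;/pre&gt;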
&lt;p&gt;Let me quickly run through what each container is all about: &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;milvus-standalone, milvus-etcd and milvus-minio: the vector database used as a knowledge base for private data &lt;/li&gt;
&lt;li&gt;rag-playground: the GUI to capture the user prompt and display the response from the LLM. It also has the GUI for selecting documents to add into the knowledge base  &lt;/li&gt;
&lt;li&gt;chain-server: the LangChain based chain server &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;As you can see from the port configuration of the rag-playground container, it&apos;s listening on port 8090. &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;docker port rag-playground 
8090/tcp -&gt; 0.0.0.0:8090 
8090/tcp -&gt; [::]:8090 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;So, let’s open a browser on this URL to see the RAG in action.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/ragblog-1.jpg&quot; alt=&quot;Connect to web UI&quot; title=&quot;Connect to web UI&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Step 1 - Query using a foundation LLM &lt;/h3&gt;
&lt;p&gt;You can see that the model used is meta/llama3-70b-instruct, so first try only using the LLM. To do this, enter some text in the prompt window. For the sake of the example here, I&apos;ll pretend to be a mountain bike dealer, carrying the Swiss brand of mountain bikes called Flyer. Imagine that I am looking for details of one model called the &lt;em&gt;Uproc&lt;/em&gt;. I ask the model: &lt;em&gt;What is a Flyer Uproc?&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/ragblog2.jpg&quot; alt=&quot;Asking what is a flyer uproc without context&quot; title=&quot;Asking what is a flyer uproc without context&quot;&gt;&lt;/p&gt;
&lt;p&gt;As you can see from the response, there is a problem, as the LLM was not trained with the information about the Flyer brand. The model did find something about &lt;em&gt;Flyer Uproc&lt;/em&gt;, but clearly this is not what my customer would expect to find on my web site. &lt;/p&gt;
&lt;h3&gt;Step 2 - Loading data into the knowledge base &lt;/h3&gt;
&lt;p&gt;To address this issue, use a PDF that describes the technical specifications of the Flyer Uproc in great detail. In the UI, select &lt;strong&gt;Knowledge base&lt;/strong&gt; from the upper right corner, then select the PDF (any PDF, TXT or markdown files can work, too). The system will check the content of the file, then build a series of embeddings to describe it, and then store these embeddings in the vector database. It only takes a few seconds to process the details contained in the PDF.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/ragblog3.jpg&quot; alt=&quot;Adding a PDF in the knowledge base&quot; title=&quot;Adding a PDF in the knowledge base&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Step 3 - Query using RAG &lt;/h3&gt;
&lt;p&gt;Now that the knowledge base includes details on the &lt;em&gt;Flyer Uproc&lt;/em&gt; product line, try the same query, but this time, make sure to check the &quot;use knowledge base&quot; checkbox. &lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/ragblog4.jpg&quot; alt=&quot;Asking what is a flyer uproc with context&quot; title=&quot;Asking what is a flyer uproc with context&quot;&gt;&lt;/p&gt;
&lt;p&gt;As you can see, the context, provided by the augmented knowledge base, provides additional knowledge to the LLM, and it is now able to provide a much better result, delivering a lot more value to my customers. The RAG system has retrieved the most relevant information from the vector database and passed the information to the LLM. The LLM then used the information to answer the question more effectively. &lt;/p&gt;
&lt;p&gt;Other examples of customer questions could be:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/ragblog5.jpg&quot; alt=&quot;Asking more questions to system&quot; title=&quot;Asking more questions to system&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/ragblog6.jpg&quot; alt=&quot;Asking more questions to system&quot; title=&quot;Asking more questions to system&quot;&gt;&lt;/p&gt;
&lt;p&gt;I think you get the point. If this was a Chatbot on my website, it would clearly be very efficient and helpful to my customers.&lt;/p&gt;
&lt;h2&gt;Next steps &lt;/h2&gt;
&lt;p&gt;This lab exercise is a great way to rapidly put a RAG system into play. Remember, however, that it calls the NVIDIA API to reach the Llama3 model, which has a cost associated with it. NVIDIA provides a number of credits to get you started, but these credits won’t last forever. &lt;/p&gt;
&lt;p&gt;The next step from here might be to run this entirely on premises, in your datacenter, on GPU-equipped hardware. There is another version of this lab dedicated to doing this &lt;a href=&quot;https://github.com/NVIDIA/GenerativeAIExamples/tree/main/RAG/examples/local_deploy&quot;&gt;here&lt;/a&gt;. &lt;/p&gt;
&lt;p&gt;If you are looking for the right platform to run an application like this, please check &lt;a href=&quot;https://www.hpe.com/us/en/private-cloud-ai.html&quot;&gt;HPE Private Cloud for AI,&lt;/a&gt; which has been developed in collaboration with NVIDIA. It is a platform that has been developed specifically for running these types of workloads and can be sized in accordance with your specific needs.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/pcai.webp&quot; alt=&quot;HPE Private Cloud for AI&quot; title=&quot;HPE Private Cloud for AI&quot;&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[How to Pick a Large Language Model for Private AI]]></title><description><![CDATA[Note: this blog is based on a video and slide deck available here on YouTube. As organizations continue to explore the potential of…]]></description><link>https://developer.hpe.com/how-to-pick-a-large-language-model-for-private-ai/</link><guid isPermaLink="false">https://developer.hpe.com/how-to-pick-a-large-language-model-for-private-ai/</guid><pubDate>Wed, 13 Nov 2024 16:48:01 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;p&gt;&lt;em&gt;Note: this blog is based on a video and slide deck available &lt;a href=&quot;https://www.youtube.com/watch?v=sNRqJOEKkCw&quot;&gt;here on YouTube&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;As organizations continue to explore the potential of generative AI, choosing the right Large Language Model (LLM) is crucial for performance, scalability, and integration with existing systems. This post explores key considerations for selecting an LLM, including understanding different classes of models, evaluating performance, and planning for hardware requirements.&lt;/p&gt;
&lt;h2&gt;Classes of Language Models: What Does “Large” Mean?&lt;/h2&gt;
&lt;p&gt;Language models vary significantly in scale, with &quot;large&quot; models generally characterized by the number of parameters they contain. However, &quot;large&quot; has become a misnomer, with many models capable of running on devices like laptops and cell phones. The size of the model corresponds with two types of &quot;performance&quot; -- quality and speed. Understanding these tradeoffs is important when considering a model&apos;s feasibility for certain applications. &lt;br&gt;
&lt;br&gt;
In the chart below I describe five T-Shirt sizes of language models.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/capture1.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;What Can These AI Models Do?&lt;/h2&gt;
&lt;p&gt;LLMs are remarkably versatile, but capabilities can still differ between models. Understanding these distinctions can help narrow down which model class best suits an organization&apos;s goals and budget. Performance on a given task that is meaningful to your business may depend on the model&apos;s architecture, training data, and parameter count.&lt;/p&gt;
&lt;p&gt;Let us define three classes of problems that LLMs can perform: Chat, Code, and everything else.&lt;/p&gt;
&lt;h3&gt;Chat&lt;/h3&gt;
&lt;p&gt;In the domain of Chat, models take in text and output text. Chat has been the killer app in NLP since ChatGPT was released, and there is no reason to think this trend is slowing down. It seems simple, yet there are endless ways in which it can be useful. Here are a few that I have worked on:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Summarize a long email thread and help me write a response&lt;/li&gt;
&lt;li&gt;Translate a pdf in a foreign language, such as an RFP or product specs and help answer some questions&lt;/li&gt;
&lt;li&gt;Create a study guide, flashcards, and a multiple-choice practice quiz based on a chapter of a textbook&lt;/li&gt;
&lt;li&gt;Classify the audio transcription of a patient&apos;s visit with a medical professional to determine who the follow up appointment would be with&lt;/li&gt;
&lt;li&gt;Determine if an auto insurance claim is likely to be fraudulent or not, and explain the reasoning why to a care agent&lt;/li&gt;
&lt;li&gt;Ask questions about the contents of a database, such as local restaurant reviews or the performance of a business over the past month&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I may have even used one to help write this blog post 😁&lt;/p&gt;
&lt;p&gt;Moving on...&lt;/p&gt;
&lt;h3&gt;Code&lt;/h3&gt;
&lt;p&gt;The second most popular LLM-powered application today seems to be GitHub Copilot. It&apos;s already spawned open-source projects (with much more flexibility) such as &lt;a href=&quot;https://www.cursor.com/&quot;&gt;Cursor&lt;/a&gt; and &lt;a href=&quot;https://continue.dev/&quot;&gt;Continue&lt;/a&gt;. I&apos;m a big Continue user myself.&lt;/p&gt;
&lt;p&gt;There are three main things that I do with these tools: Generate, Edit, and Explain.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Generate&lt;/strong&gt; is text in, code out. You write a description of the code you want to see, and the model generates it from scratch. For example, a new page on a website, an API call, or a new function in a library. Personally, I&apos;ve been writing a lot of yaml for k8s recently (is config code?). Also, I don&apos;t hit the docs to figure out slurm and linux commands as much anymore (is bash code? alas...)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Edit&lt;/strong&gt; is code + text in, code out. Editing is similar to generating code, but you&apos;re including some existing code in-context for the model to use. For example, you might want to write a unit test, create a function, refactor something, or add comments. The model is reasoning based on existing code.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Explain&lt;/strong&gt; is code in, text out. For example, you might be moving onto a new project and trying to understand the existing code base. Or you forgot what you were trying to do the night before. Explanations can be helpful.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Everything Else&lt;/h3&gt;
&lt;p&gt;Models are improving over time. With the introduction of &quot;multimodal&quot; LLMs, we now have both &quot;Large&quot; and &quot;Language&quot; being misnomers in this acronym. Anyway, this is what I&apos;ll call the domain of Everything Else:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Analyze the contents of an image (say, diagnose an X-Ray for a patient)&lt;/li&gt;
&lt;li&gt;Generate an image (text-to-image)&lt;/li&gt;
&lt;li&gt;Transcribe audio (speech-to-text)&lt;/li&gt;
&lt;li&gt;Generate audio (text-to-speech)&lt;/li&gt;
&lt;li&gt;Generate a video (text-to-video)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You can see that in general, text is still usually the modality of input or output. However, transformer models are being used in unique ways (audio-to-audio, image-to-video, etc.) and research continues. How long until this blog is out of date?&lt;/p&gt;
&lt;h2&gt;Types of Performance: Quality vs. Speed&lt;/h2&gt;
&lt;p&gt;Evaluating LLM performance involves two key metrics:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Quality&lt;/strong&gt;: Refers to the accuracy or relevance of the model’s outputs. Quality can be assessed through various benchmarks like the &lt;a href=&quot;https://lmarena.ai/?leaderboard&quot;&gt;LMSys Chatbot Arena&lt;/a&gt;, &lt;a href=&quot;https://github.com/hendrycks/test&quot;&gt;MMLU,&lt;/a&gt; &lt;a href=&quot;https://github.com/openai/human-eval&quot;&gt;HumanEval&lt;/a&gt;, and &lt;a href=&quot;https://github.com/openai/grade-school-math&quot;&gt;GSM8K&lt;/a&gt;, which measure models on complex reasoning and specific problem-solving tasks.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Speed&lt;/strong&gt;: Involves both throughput and latency. Throughput is the volume of outputs the model can generate per unit of time (request/sec or tokens/sec), while latency (time-to-first-token, or TTFT) indicates the delay before generating the first part of a response.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Comparing Quality&lt;/h2&gt;
&lt;p&gt;In terms of measuring quality, it&apos;s frankly just really hard to understand the objective quality of an LLM. Open source benchmarks exist, but they&apos;re usually not representative of your business objectives. To understand how a model performs on the specific downstream task that your organization will be performing, there is really no substitute for user feedback and custom evals. I&apos;m going to leave this topic for a future blog.&lt;/p&gt;
&lt;p&gt;Test your models, and talk to your users.&lt;/p&gt;
&lt;h2&gt;Comparing Speed&lt;/h2&gt;
&lt;h3&gt;Throughput&lt;/h3&gt;
&lt;p&gt;Throughput metrics offer insights into how well models handle large volumes of data. High throughput is especially valuable for batch processing tasks, where efficiency directly translates into time and cost savings.&lt;/p&gt;
&lt;h3&gt;Latency (TTFT)&lt;/h3&gt;
&lt;p&gt;Latency reflects the model’s responsiveness, which is critical for real-time applications. Lower latency means faster initial responses, which is crucial for user-facing applications, like chatbots or customer support systems.&lt;/p&gt;
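&lt;p&gt;If you want a rough feel for these numbers against your own deployment, you can time a streaming request from the command line. The sketch below assumes an OpenAI-compatible endpoint (for example, a local vLLM or similar server listening on port 8000) and an illustrative model name; with streaming enabled, the time to the first byte of the response is a reasonable proxy for TTFT. Adjust the URL, model, and payload for your environment.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Rough TTFT and total time for one streaming completion request.
# The endpoint, port, and model name are assumptions; change them to match your server.
$ curl -s -o /dev/null \
    -w &quot;TTFT (approx): %{time_starttransfer}s   total: %{time_total}s\n&quot; \
    -H &quot;Content-Type: application/json&quot; \
    -d &apos;{&quot;model&quot;: &quot;meta-llama/Llama-3.1-8B-Instruct&quot;, &quot;prompt&quot;: &quot;Say hello&quot;, &quot;max_tokens&quot;: 128, &quot;stream&quot;: true}&apos; \
    http://localhost:8000/v1/completions
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;For throughput, dedicated benchmark tooling that issues many concurrent requests and counts generated tokens per second will give you far more representative numbers than a single request like this.&lt;/p&gt;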
&lt;h2&gt;Continuous Improvement in Runtimes&lt;/h2&gt;
&lt;p&gt;The field of LLMs is rapidly advancing, with regular improvements in model efficiency and runtime performance. For instance, in a span of just over three months, throughput for certain models nearly doubled thanks to improvements in the vLLM runtime from v0.5.0 (released in June 2024) to v0.6.2 (released in September 2024).&lt;/p&gt;
&lt;p&gt;Whether you prefer &lt;a href=&quot;https://ollama.com/&quot;&gt;ollama&lt;/a&gt;, &lt;a href=&quot;https://docs.vllm.ai/en/latest/&quot;&gt;vLLM&lt;/a&gt;, &lt;a href=&quot;https://huggingface.co/docs/text-generation-inference/en/index&quot;&gt;tgi&lt;/a&gt;, &lt;a href=&quot;https://www.nvidia.com/en-us/ai/&quot;&gt;NIMs&lt;/a&gt;, or something else, staying updated with these advancements is important to keep deployments cost-effective over time.&lt;/p&gt;
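&lt;p&gt;In practice, keeping up with these gains can be as simple as periodically upgrading the runtime and re-running your benchmarks. A minimal sketch, assuming a pip-managed vLLM install (the version numbers are just the ones cited above):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Check the currently installed vLLM version, then move to a newer release.
$ pip show vllm | grep Version
$ pip install &quot;vllm==0.6.2&quot;
# Re-run your throughput and latency benchmarks afterwards to confirm the gain on your hardware.
&lt;/code&gt;&lt;/pre&gt;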
&lt;h2&gt;Conclusion: Infrastructure Sizing&lt;/h2&gt;
&lt;p&gt;Determining the right infrastructure to support an LLM is critical to deployment success. Hardware requirements are driven by two factors: model size and complexity, since larger models generally require more powerful infrastructure, and desired performance, since the need for high throughput or low latency affects both the choice of hardware and how much of it you need.&lt;/p&gt;
&lt;p&gt;The obvious conclusion to draw here is that there is a tradeoff:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;bigger models provide higher quality output, but are slower (or more expensive to run)&lt;/li&gt;
&lt;li&gt;smaller models provide lower quality output, but are faster (or less expensive to run)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Understanding this tradeoff is critical to ensure that you have control over the cost and performance (quality and speed) of your LLM-powered application.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Watch this video for more detail and a live demo comparing latency and throughput for three different sizes (8B, 70B, 405B) of the Llama 3.1 Instruct model: &lt;a href=&quot;https://www.youtube.com/watch?v=sNRqJOEKkCw&quot;&gt;https://www.youtube.com/watch?v=sNRqJOEKkCw&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[LLM Agentic Tool Mesh: Democratizing Gen AI through open source innovation]]></title><description><![CDATA[Generative AI (Gen AI) is revolutionizing technology, transforming industries, and creating unprecedented opportunities. At the heart of…]]></description><link>https://developer.hpe.com/ll-mesh-democratizing-gen-ai-through-open-source-innovation-1/</link><guid isPermaLink="false">https://developer.hpe.com/ll-mesh-democratizing-gen-ai-through-open-source-innovation-1/</guid><pubDate>Mon, 11 Nov 2024 07:58:18 GMT</pubDate><content:encoded>&lt;style&gt;
  li {
    font-size: 27px !important;
    line-height: 33px !important;
    max-width: none !important; 
  } 
&lt;/style&gt;
&lt;p&gt;Generative AI (Gen AI) is revolutionizing technology, transforming industries, and creating unprecedented opportunities. At the heart of this revolution are Large Language Models (LLMs), sophisticated Gen AI models designed to understand and generate human-like text. Trained on vast datasets of billions of words, LLMs grasp the patterns and nuances of language, enabling them to perform tasks ranging from answering questions and summarizing text to generating creative content, translating languages, and even coding.&lt;/p&gt;
&lt;p&gt;By enabling machines to understand and generate human-like language—a significant leap from traditional data storage—LLMs uniquely contribute to digital transformation by not only storing information but also understanding its context and reasoning.
However, these innovations bring significant challenges, including the complexity of adoption, ethical considerations, and the need for scalable, user-friendly solutions. Specifically, adopting Gen AI presents two main challenges:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Technical complexity&lt;/strong&gt;: Gen AI tools are powerful but often require expertise in coding and machine learning, making it difficult for companies to use them effectively without specialized skills.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Organizational challenges&lt;/strong&gt;: Simply adding a Gen AI team isn&apos;t enough. The real value comes from leveraging the knowledge of your existing teams, especially those who may not be tech experts. If not managed properly, integrating Gen AI can impact team dynamics. It&apos;s crucial to find ways to use Gen AI that enhance collaboration and maximize everyone&apos;s expertise.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Embrace change: Leverage existing skills with Gen AI&lt;/h2&gt;
&lt;p&gt;To turn these disruptive challenges into constructive opportunities, we must embrace change. This recalls the story of Michelangelo and the Sistine Chapel. In 1508, although renowned as a sculptor, Michelangelo was commissioned to paint the Sistine Chapel—a monumental task outside his comfort zone. Initially reluctant and feeling inadequate, he faced immense pressure, including working at great heights and competing with Raffaello, the era&apos;s leading painter.&lt;/p&gt;
&lt;p&gt;However, Michelangelo adapted by applying his sculptural techniques to fresco painting, bringing a lifelike, three-dimensional quality to his work. This transformation highlights the power of leveraging existing skills to overcome new challenges, ultimately resulting in one of history&apos;s greatest masterpieces.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/vision_1.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Inspired by Michelangelo&apos;s adaptation, Gen AI is an opportunity to craft our own modern masterpieces. Just as he embraced change to overcome challenges, businesses can leverage Gen AI to catalyze this disruptive revolution, unlocking new competitive advantages by building on their existing skills and unique differentiators. Embracing Gen AI can unlock new potential and drive innovation across all parts of the organization. It&apos;s more than a technical shift; it&apos;s a cultural transformation that requires embracing digital changes organization-wide.&lt;/p&gt;
&lt;p&gt;To support this digital transformation, HPE Athonet developed LLM Agentic Tool Mesh, an open-source platform. This pioneering initiative aims to democratize Gen AI and tackle both the technical complexity and the organizational challenges. The vision is to make Gen AI accessible and beneficial to a broader audience, enabling users from various backgrounds to leverage cutting-edge Gen AI technology effortlessly. LLM Agentic Tool Mesh made its debut at the beginning of 2024 at the Mobile World Congress in Barcelona, where it stood out for its innovative approach and significant impact on simplifying Gen AI adoption.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/athon_ux_tools.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Simplify complexity with LLM Agentic Tool Mesh&lt;/h2&gt;
&lt;p&gt;LLM Agentic Tool Mesh empowers users to create tools and web applications using Gen AI with low or no coding. This approach addresses technical challenges by simplifying the integration process.&lt;/p&gt;
&lt;p&gt;The Pareto principle, also known as the 80/20 rule, states that roughly 80% of outcomes come from 20% of causes. Leveraging this principle, LLM Agentic Tool Mesh focuses on the 20% of features that cover 80% of user needs. It achieves this by abstracting complex, low-level libraries into easy-to-understand, user-friendly configurations, effectively hiding the underlying complexity.&lt;/p&gt;
&lt;p&gt;For example, tasks that would traditionally require extensive coding and a deep understanding of machine learning models, such as implementing a chat feature with LLM or utilizing your own data through Retrieval-Augmented Generation (RAG), can now be accomplished through simple, descriptive configurations within LLM Agentic Tool Mesh. These features are particularly focused on delivering high value to the end user.&lt;/p&gt;
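&lt;p&gt;To make the idea of &quot;descriptive configuration instead of code&quot; concrete, here is a purely illustrative sketch of what such a configuration could look like. This is a hypothetical example, not the actual LLM Agentic Tool Mesh schema; refer to the project&apos;s Wiki and repository for real configuration examples.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Hypothetical example only: a chat-with-RAG tool described declaratively rather than coded.
$ cat chat_tool_example.yaml
tool:
  name: support-assistant
  type: chat
  llm:
    provider: openai-compatible
    model: your-model-of-choice
  rag:
    enabled: true
    documents: ./knowledge-base/
&lt;/code&gt;&lt;/pre&gt;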
&lt;p&gt;This simplicity not only helps technical teams but also empowers non-technical teams to develop tools tailored to their specific domain of expertise, effectively addressing organizational challenges. For instance, marketing and sales teams can create custom AI assistants without coding, enhancing customer engagement and streamlining internal workflows.&lt;/p&gt;
&lt;h2&gt;Core principles of LLM Agentic Tool Mesh&lt;/h2&gt;
&lt;p&gt;The solution is built on three foundational principles, as described in previous blog posts:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Team-centric collaborative framework&lt;/strong&gt;: it prioritizes collaborative efforts over individual achievements to enhance team synergy and productivity (see &lt;a href=&quot;https://developer.hpe.com/blog/hpe-athonet-llm-platform-first-pillar-from-personal-assistant-to-collaborative-corporate-tool/&quot;&gt;HPE Athonet LLM platform: From personal assistant to collaborative corporate tool&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;User experience&lt;/strong&gt;: the interface is designed to minimize distractions and support users in achieving deep concentration, or &quot;Flow,&quot; thereby enhancing focus and facilitating efficient workflows (see &lt;a href=&quot;https://developer.hpe.com/blog/hpe-athonet-llm-platform-driving-users-towards-peak-flow-efficiency/&quot;&gt;HPE Athonet LLM platform: Driving users toward peak flow efficiency&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Versatile architecture&lt;/strong&gt;: By incorporating Data Mesh and Microservices principles, the system is engineered for scalability and efficiency, supporting a wide range of operations from data management to service-oriented tasks (see &lt;a href=&quot;https://developer.hpe.com/blog/hpe-athonet-llm-platform-divide-et-impera-designing-the-ll-mesh/&quot;&gt;HPE Athonet LLM platform: Divide et impera, designing the LLM Agentic Tool Mesh&lt;/a&gt;).&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;LLM Agentic Tool Mesh realizes these principles through a robust self-service infrastructure, which empowers teams to independently develop, deploy, and manage their Gen AI-based web applications and tools. This simplicity enables both technical and non-technical teams to create tools tailored to their specific knowledge, fostering a &lt;strong&gt;decentralized ownership&lt;/strong&gt; model that is fundamental to LLM Agentic Tool Mesh&apos;s philosophy and effectively addresses organizational challenges.&lt;/p&gt;
&lt;p&gt;Aligning with &lt;strong&gt;Conway&apos;s Law&lt;/strong&gt;—which states that products mirror the communication structures of the organizations that create them—LLM Agentic Tool Mesh allows each department to leverage its unique knowledge and insights directly into tool development. This approach enhances efficiency and value by eliminating the need for separate teams dedicated solely to Gen AI products, which may overlook crucial domain-specific information.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/vision_2.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Importantly, we&apos;re &lt;strong&gt;not reinventing the wheel&lt;/strong&gt;. LLM Agentic Tool Mesh builds upon existing libraries to hide complexity, standardize interfaces, and convert coding logic into more descriptive configurations. This enables low-code or no-code development, making advanced Gen AI capabilities accessible to users without deep programming expertise. Each function in the LLM Agentic Tool Mesh library is implemented using the &lt;strong&gt;Factory Design Pattern&lt;/strong&gt;, which abstracts functionality and standardizes interfaces through domain-specific, high-level functions.&lt;/p&gt;
&lt;p&gt;This design pattern allows for the creation of objects without specifying the exact class of the object to be created, ensuring consistency and usability across various tools and applications. For readers unfamiliar with design patterns, this method improves code maintainability and scalability, making it easier to manage complex systems as they grow. Unlike many open-source libraries that prioritize coding flexibility, our user-centric approach emphasizes &lt;strong&gt;ease of use and accessibility&lt;/strong&gt;, focusing on enabling even non-technical users to create tools in a descriptive manner.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/vision_3.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The platform enables the creation of a &lt;strong&gt;&apos;Mesh&apos;&lt;/strong&gt; of these Gen AI tools, providing powerful orchestration capabilities through an agentic &lt;strong&gt;Reasoning Engine&lt;/strong&gt; based on LLMs. A Reasoning Engine is an AI system that mimics human-like decision-making and problem-solving by applying rules, data, and logic. It emulates different types of reasoning, such as deductive, inductive, and abductive, allowing the LLM to not only generate content but also understand context, draw logical inferences, and connect information to solve complex problems.&lt;/p&gt;
&lt;p&gt;Additionally, each tool developed through LLM Agentic Tool Mesh can be deployed as a standalone web application with a manifest that clearly exposes its information and endpoints, allowing for seamless integration and orchestration across a cohesive network of services. This orchestration ensures that all tools work together seamlessly, enhancing overall functionality and efficiency across the organization.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/vision_4.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;This seamless integration is possible because, while the management of LLM tools operates in a decentralized manner, LLM Agentic Tool Mesh implements a unified framework of governance policies and standards to maintain consistency, ethical integrity, and overall quality across the platform. &lt;strong&gt;Federated Governance&lt;/strong&gt; ensures a careful balance between fostering innovation and upholding rigorous standards for security, privacy, and compliance. This framework includes setting interoperable standards, enforcing ethical compliance, and utilizing automated governance tools to consistently apply policies. Continuous collaboration and feedback within LLM Agentic Tool Mesh are crucial for adapting and refining governance practices, promoting ongoing improvement and ensuring alignment with emerging technologies and challenges (these principles are inspired by Data Mesh architecture).&lt;/p&gt;
&lt;h2&gt;Join the LLM Agentic Tool Mesh Community&lt;/h2&gt;
&lt;p&gt;LLM Agentic Tool Mesh is now open source; you can download it from our &lt;a href=&quot;https://github.com/HewlettPackard/llmesh&quot;&gt;GitHub repository&lt;/a&gt; and explore its &lt;a href=&quot;https://github.com/HewlettPackard/llmesh/wiki&quot;&gt;Wiki documentation&lt;/a&gt;. In the coming weeks, we will publish additional blog posts providing more details about the services offered by LLM Agentic Tool Mesh, moving from theory to practical examples. We will also release a Workshop-on-Demand and host a Meetup to give you hands-on experience and share the latest updates and vision for the project.&lt;/p&gt;
&lt;p&gt;We invite you to join us on this journey to harness the transformative power of Gen AI. Let&apos;s collaborate to craft our own masterpieces and drive innovation forward. Engage with LLM Agentic Tool Mesh, contribute to the open-source project, and share your experiences with the community. Together, we can shape the future of Gen AI.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[7 Questions for David Bader: Graph Analytics at Scale with Arkouda and Chapel]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/7-questions-for-david-bader-graph-analytics-at-scale-with-arkouda-and-chapel/</link><guid isPermaLink="false">https://developer.hpe.com/7-questions-for-david-bader-graph-analytics-at-scale-with-arkouda-and-chapel/</guid><pubDate>Thu, 07 Nov 2024 04:55:50 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Morpheus Terraform Profiles]]></title><description><![CDATA[External blog]]></description><link>https://developer.hpe.com/morpheus-terraform-profiles/</link><guid isPermaLink="false">https://developer.hpe.com/morpheus-terraform-profiles/</guid><pubDate>Wed, 06 Nov 2024 16:25:49 GMT</pubDate><content:encoded>&lt;p&gt;External blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Morpheus Extensibility with Plugins]]></title><description><![CDATA[External blog]]></description><link>https://developer.hpe.com/morpheus-extensibility-with-plugins/</link><guid isPermaLink="false">https://developer.hpe.com/morpheus-extensibility-with-plugins/</guid><pubDate>Wed, 06 Nov 2024 16:24:19 GMT</pubDate><content:encoded>&lt;p&gt;External blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Managing Morpheus Plugins via REST API]]></title><description><![CDATA[External blog]]></description><link>https://developer.hpe.com/managing-morpheus-plugins-via-rest-api/</link><guid isPermaLink="false">https://developer.hpe.com/managing-morpheus-plugins-via-rest-api/</guid><pubDate>Wed, 06 Nov 2024 15:44:27 GMT</pubDate><content:encoded>&lt;p&gt;External blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Upcoming AI Jam webinars, Morpheus joins our Community portal, and Chapel use cases]]></title><link>https://developer.hpe.com/2024-november-04/</link><guid isPermaLink="false">https://developer.hpe.com/2024-november-04/</guid><pubDate>Mon, 04 Nov 2024 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Implementing your AI Breakthroughs Effectively – The Infrastructure to your AI]]></title><description><![CDATA[We’ve been here before. Think about the seemingly obscure and ever-evolving infrastructure technologies that have been introduced over the…]]></description><link>https://developer.hpe.com/implementing-your-ai-breakthroughs-effectively-–-the-infrastructure-to-your-a-i/</link><guid isPermaLink="false">https://developer.hpe.com/implementing-your-ai-breakthroughs-effectively-–-the-infrastructure-to-your-a-i/</guid><pubDate>Thu, 31 Oct 2024 18:58:58 GMT</pubDate><content:encoded>&lt;p&gt;We’ve been here before. Think about the seemingly obscure and ever-evolving infrastructure technologies that have been introduced over the years that only few interact with, learn, and even see, but are always expected to work and are foundational to our digitalized world.&lt;/p&gt;
&lt;p&gt;In my early days as a sales engineer, one of my favorite opening lines with friends and family was “the cloud really is a place.” Just because we store our data out of sight and out of mind doesn’t mean it’s vanished from the planet. That data is stored on a well-thought-out infrastructure in a well-planned data center facility that makes it accessible anywhere. And just like “the cloud”, AI runs in a real location, on a fully-configured infrastructure, where it’s deployed and secured by people. Well, many people.&lt;/p&gt;
&lt;p&gt;So, you see, we really have been here before. In this new AI Jam series, we always aim to “keep it real with AI”. And despite the artificial and surreal vocabulary used today, AI workloads, just like the cloud, still need specialized hardware, software, and networking to make them a reality.&lt;/p&gt;
&lt;p&gt;Join us for our next talk about &lt;a href=&quot;https://developer.hpe.com/campaign/get-real-with-ai-%E2%80%93-jam-series/&quot;&gt;Implementing your AI Breakthrough Effectively - The infrastructure to your AI&lt;/a&gt;. We’ll dive into the spectrum of use cases and infrastructure considerations that are challenging businesses today. As you know, different use cases and stages in a development cycle require different configurations. Infrastructure configurations and capacity for fine-tuning LLMs are not the same as what’s needed for inference, retrieval augmented generation (RAG), or even small language models.&lt;/p&gt;
&lt;p&gt;So how best to get started? Do you need access to large training clusters, which can be very challenging for anybody to deploy? And what type of budget do you need? After all, the common refrain you hear about AI projects is that they’re going to be expensive!&lt;/p&gt;
&lt;p&gt;Fortunately, this is not always the case. In fact, at HPE we help customers get started with AI every day by removing complexity. Large-sized clusters for training large language models are not the only entry point to enabling AI in your organization. If you are stuck between having a use case and knowing how to get started, this interactive session will help you navigate how operational improvements can help you get started today.&lt;/p&gt;
&lt;p&gt;We’ll also discuss how to show value to your organization today that will help enable a successful transformational AI adoption over time. So, Data Engineers, bring your IT Ops Manager, grab some coffee, and join us for this AI Jam session, where we discuss different AI infrastructure environments and software options to help get your use case started.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Navier-Stokes in Chapel — Distributed Poisson Solver]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/navier-stokes-in-chapel-—-distributed-poisson-solver/</link><guid isPermaLink="false">https://developer.hpe.com/navier-stokes-in-chapel-—-distributed-poisson-solver/</guid><pubDate>Mon, 28 Oct 2024 23:20:39 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[7 Questions for Nelson Luís Dias: Atmospheric Turbulence in Chapel]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/7-questions-for-nelson-luís-dias-atmospheric-turbulence-in-chapel/</link><guid isPermaLink="false">https://developer.hpe.com/7-questions-for-nelson-luís-dias-atmospheric-turbulence-in-chapel/</guid><pubDate>Tue, 15 Oct 2024 19:01:13 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Distributed Tuning in Chapel with a Hyperparameter Optimization Example]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/distributed-tuning-in-chapel-with-a-hyperparameter-optimization-example/</link><guid isPermaLink="false">https://developer.hpe.com/distributed-tuning-in-chapel-with-a-hyperparameter-optimization-example/</guid><pubDate>Wed, 09 Oct 2024 06:28:40 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Small Language Models: The Next Frontier with Private Cloud AI]]></title><description><![CDATA[Text]]></description><link>https://developer.hpe.com/small-language-models-the-next-frontier-with-private-cloud-ai/</link><guid isPermaLink="false">https://developer.hpe.com/small-language-models-the-next-frontier-with-private-cloud-ai/</guid><pubDate>Thu, 03 Oct 2024 17:13:41 GMT</pubDate><content:encoded>&lt;p&gt;Text&lt;/p&gt;</content:encoded></item><item><title><![CDATA[7 Questions for Scott Bachman: Analyzing Coral Reefs with Chapel]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/7-questions-for-scott-bachman-analyzing-coral-reefs-with-chapel/</link><guid isPermaLink="false">https://developer.hpe.com/7-questions-for-scott-bachman-analyzing-coral-reefs-with-chapel/</guid><pubDate>Tue, 01 Oct 2024 19:15:34 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Exciting webinars, HPE acquires Morpheus Data, HPE GreenLake Private Cloud tutorials, and more!]]></title><link>https://developer.hpe.com/2024-october-01/</link><guid isPermaLink="false">https://developer.hpe.com/2024-october-01/</guid><pubDate>Tue, 01 Oct 2024 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Introducing the Ampere® Performance Toolkit to optimize software]]></title><description><![CDATA[External Blog]]></description><link>https://developer.hpe.com/introducing-the-ampere®-performance-toolkit-to-optimize-software/</link><guid 
isPermaLink="false">https://developer.hpe.com/introducing-the-ampere®-performance-toolkit-to-optimize-software/</guid><pubDate>Fri, 27 Sep 2024 12:05:40 GMT</pubDate><content:encoded>&lt;p&gt;External Blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Announcing Chapel 2.2!]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/announcing-chapel-2-2/</link><guid isPermaLink="false">https://developer.hpe.com/announcing-chapel-2-2/</guid><pubDate>Thu, 26 Sep 2024 21:05:26 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Keeping it Real with a new AI Jam Series]]></title><description><![CDATA[It wasn’t that long ago we used to say “Artificial Intelligence". Now we just say A.I. These words are so pervasive in conversation that we…]]></description><link>https://developer.hpe.com/keeping-it-real-with-a-new-ai-jam-series/</link><guid isPermaLink="false">https://developer.hpe.com/keeping-it-real-with-a-new-ai-jam-series/</guid><pubDate>Thu, 26 Sep 2024 13:54:21 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;/img/hpe_story_548_1600_0_72_rgb.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;It wasn’t that long ago we used to say “Artificial Intelligence&quot;. Now we just say A.I. These words are so pervasive in conversation that we are now down to the acronym. But what I have been hearing is that companies are completely scattered on the topic. Some are launching experiments on the talent and technologies they have been maturing for years; others are scrambling to get in the game. Despite this, everyone is asking &quot;how do I get started&quot; or &quot;how do I move quickly&quot; right now. That’s why the HPE Developer Community is sponsoring a new series of Skill Up meetings called “&lt;a href=&quot;https://developer.hpe.com/campaign/get-real-with-ai-jam-series/&quot;&gt;Get real with AI&lt;/a&gt;” to help you. We want to get past the hype and bring you technical and business value that meets you where you are today. So, let’s just say “keep calm and get real with AI.”&lt;/p&gt;
&lt;h2&gt;Responding to requests for more help&lt;/h2&gt;
&lt;p&gt;Earlier this year, we held a workshop that focused on the basics companies need to successfully host and deploy AI projects. The success of this one workshop resulted in a clamor for more sessions on this same topic. We quickly came to realize that we needed a new series of interactive sessions that were a lot more basic in nature. And many of the subject matter experts we spoke with wanted to share more of their ideas and experiences about where and how to get started.&lt;/p&gt;
&lt;p&gt;This resulted in the premier of our first AI Jam session titled Exploring Transformative AI Use Cases Across Industries. The discussion brought together thought leaders, industry experts, and technologists who talked through transformative AI use cases across diverse sectors, including healthcare, finance, manufacturing, and retail.&lt;/p&gt;
&lt;p&gt;It turned out to be a very interactive and useful session with panelists sharing their insights on the critical factors for AI adoption, including data management, ethical considerations, and integration with existing systems. The discussion also highlighted emerging trends, such as the use of AI for predictive analytics, personalization, automation, and decision-making. If you missed this session you can catch the &lt;a href=&quot;https://developer.hpe.com/campaign/get-real-with-ai-jam-series/&quot;&gt;replay here&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Keeping it real in our next session&lt;/h2&gt;
&lt;p&gt;It was a good start. Going forward, we decided it was best to keep the series well aligned and focused on “keeping it real with AI” basics. To do that, we changed up the next session to focus on “Cool AI Use Case, Now What?”.&lt;/p&gt;
&lt;p&gt;I hear a lot about people coming up with great ideas where AI technologies could help their business. But as quickly as an idea is floated, all eyes go towards the technologist, the data scientists, and the IT operations teams, and leadership comes back saying “sounds good, now show me something.” At that point it can get complicated, as many people don’t know where to get started or what’s really needed to show success. That’s what we’re going to focus on in the next AI jam session. We’ll be discussing best practices on getting started with AI and illustrating shortcuts on how to get AI projects from data experiments to AI deployments. If this is something you might be interested in, don’t forget to register for the October 2nd AI jam session at 10:00 (CST).&lt;/p&gt;
&lt;p&gt;And yes, there’s more to come, so please be sure to check in on the new upcoming &lt;a href=&quot;https://developer.hpe.com/campaign/get-real-with-ai-jam-series/&quot;&gt;AI jam series&lt;/a&gt; that we will be running in November and December. See you there!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[7 Questions for Éric Laurendeau: Computing Aircraft Aerodynamics in Chapel]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/7-questions-for-éric-laurendeau-computing-aircraft-aerodynamics-in-chapel/</link><guid isPermaLink="false">https://developer.hpe.com/7-questions-for-éric-laurendeau-computing-aircraft-aerodynamics-in-chapel/</guid><pubDate>Tue, 17 Sep 2024 20:44:07 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Finding the best LoRA parameters]]></title><description><![CDATA[External blog post]]></description><link>https://developer.hpe.com/finding-the-best-lora-parameters/</link><guid isPermaLink="false">https://developer.hpe.com/finding-the-best-lora-parameters/</guid><pubDate>Wed, 11 Sep 2024 18:44:16 GMT</pubDate><content:encoded>&lt;p&gt;External blog post&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Programmatically fetch & analyze logs, integrate K8sGPT, configure long-lived tokens and more! ]]></title><link>https://developer.hpe.com/2024-september-05/</link><guid isPermaLink="false">https://developer.hpe.com/2024-september-05/</guid><pubDate>Thu, 05 Sep 2024 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[What’s New with Chapel? Nine Questions for the Development Team]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/what’s-new-with-chapel-nine-questions-for-the-development-team/</link><guid isPermaLink="false">https://developer.hpe.com/what’s-new-with-chapel-nine-questions-for-the-development-team/</guid><pubDate>Wed, 04 Sep 2024 16:54:10 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Setting up hierarchical namespaces in Kubernetes in HPE GreenLake for Private Cloud Enterprise]]></title><description><![CDATA[Multi-tenancy for Kubernetes (K8s) cluster requires sophisticated namespace management to enable robust tenant isolation and organization…]]></description><link>https://developer.hpe.com/setting-up-hierarchical-namespaces-in-kubernetes-in-hpe-greenlake-for-private-cloud-enterprise/</link><guid isPermaLink="false">https://developer.hpe.com/setting-up-hierarchical-namespaces-in-kubernetes-in-hpe-greenlake-for-private-cloud-enterprise/</guid><pubDate>Thu, 22 Aug 2024 16:16:41 GMT</pubDate><content:encoded>&lt;style&gt; li { font-size: 27px; line-height: 33px; max-width: none; } &lt;/style&gt;
&lt;p&gt;&lt;strong&gt;Multi-tenancy&lt;/strong&gt; for Kubernetes (K8s) cluster requires sophisticated namespace management to enable robust tenant isolation and organization. &lt;strong&gt;Hierarchical namespaces&lt;/strong&gt; allow you to model namespaces according to your own organizational hierarchy and allocate capabilities inside a single K8s cluster. This eliminates the need for a new cluster for each organizational unit. The utilization of hierarchical namespaces can result in a more streamlined namespace management and improved security in the complex K8s production environment.&lt;/p&gt;
&lt;p&gt;This blog post provides a step-by-step guide on how to set up hierarchical namespaces in K8s in HPE GreenLake for Private Cloud Enterprise. The simplicity of handling relationships between K8s namespaces, propagating configurations and resource constraints, and applying access control policies using hierarchical namespaces in K8s is demonstrated here.&lt;/p&gt;
&lt;h3&gt;Overview&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/greenlake/containers.html&quot;&gt;HPE GreenLake for Private Cloud Enterprise: Containers&lt;/a&gt;, one of the HPE GreenLake cloud services available on the HPE GreenLake for Private Cloud Enterprise, allows customers to create a K8s cluster and deploy containerized applications to the cluster. It provides an enterprise-grade container management service using open source K8s.&lt;/p&gt;
&lt;p&gt;A K8s &lt;a href=&quot;https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/&quot;&gt;&lt;em&gt;namespace&lt;/em&gt;&lt;/a&gt; is a fundamental API object within the K8s architecture. It provides a mechanism for grouping and isolating K8s resources within a single cluster. Each namespace has its own set of resources, policies and constraints. Cluster administrators can create roles using role-based access control (RBAC) to define permissions within a specific namespace. They can also enforce resource limits and quotas to control the consumption of resources by objects in a namespace. K8s namespaces facilitate the separation of resources and allow multiple users, teams, or projects to share a cluster within an organization. This approach is more cost-effective than creating a new cluster for each organizational unit. With the application of various K8s configurations and policies to namespaces, cluster administrators can ensure safe and fair resource isolation and cluster sharing.&lt;/p&gt;
&lt;p&gt;K8s namespaces created within a cluster are peers, each fully isolated from the others. This flat model does not align well with organizational structures. Cluster administrators must define roles and create policies and constraints for each individual namespace. At scale, this administrative overhead makes managing namespaces within the cluster tedious and prone to errors.&lt;/p&gt;
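&lt;p&gt;To illustrate the problem with the flat model (a hedged sketch, using made-up namespace and role names): every policy must be repeated once per namespace, which quickly becomes tedious as the namespace count grows.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Without hierarchy, the same role must be created in every namespace, one by one.
$ kubectl -n team-a-devops create role ops-admin --verb=* --resource=pod
$ kubectl -n team-a-iac    create role ops-admin --verb=* --resource=pod
$ kubectl -n team-b-devops create role ops-admin --verb=* --resource=pod
# ...and so on for every additional namespace, role, quota, or secret.
&lt;/code&gt;&lt;/pre&gt;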
&lt;p&gt;In 2020, K8s upstream introduced a K8s extension known as the &lt;a href=&quot;https://github.com/kubernetes-sigs/hierarchical-namespaces#the-hierarchical-namespace-controller-hnc&quot;&gt;&lt;em&gt;Hierarchical Namespace Controller&lt;/em&gt; (HNC)&lt;/a&gt;. HNC supports hierarchical namespaces and helps cluster administrators manage the security and capabilities of namespaces with less effort than the flat, peer-to-peer namespace model. Using HNC, administrators can organize namespaces according to an organizational hierarchy and allocate capabilities using a list of K8s resources, such as &lt;a href=&quot;https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole&quot;&gt;&lt;em&gt;Role&lt;/em&gt;&lt;/a&gt;/&lt;a href=&quot;https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding&quot;&gt;&lt;em&gt;RoleBinding&lt;/em&gt;&lt;/a&gt;, &lt;a href=&quot;https://kubernetes.io/docs/concepts/policy/resource-quotas/&quot;&gt;&lt;em&gt;ResourceQuota&lt;/em&gt;&lt;/a&gt; and &lt;a href=&quot;https://kubernetes.io/docs/concepts/configuration/secret/&quot;&gt;&lt;em&gt;Secret&lt;/em&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Prerequisites&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;A K8s cluster in HPE GreenLake for Private Cloud Enterprise, provisioned, for example, using the &lt;a href=&quot;https://developer.hpe.com/blog/kubernetes-clusters-as-code-part1/&quot;&gt;HPE GreenLake Terraform provider&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;The &lt;em&gt;kubectl&lt;/em&gt; CLI tool, together with the kubeconfig file for accessing the K8s cluster&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://curl.se/&quot;&gt;cURL&lt;/a&gt; CLI tool&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Set up hierarchical namespaces&lt;/h3&gt;
&lt;p&gt;K8s does not come with hierarchical namespace support by default. Two components must be installed to enable it: the &lt;em&gt;HNC manager&lt;/em&gt; and the optional kubectl plugin &lt;em&gt;kubectl-hns&lt;/em&gt;.&lt;/p&gt;
&lt;h4&gt;Install the HNC manager&lt;/h4&gt;
&lt;p&gt;Type the following commands to install the HNC manager within the control plane of the K8s cluster:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ HNC_VERSION=v1.1.0
$ HNC_VARIANT=default

$ kubectl apply -f https://github.com/kubernetes-sigs/hierarchical-namespaces/releases/download/${HNC_VERSION}/${HNC_VARIANT}.yaml 
namespace/hnc-system created
customresourcedefinition.apiextensions.k8s.io/hierarchicalresourcequotas.hnc.x-k8s.io created
customresourcedefinition.apiextensions.k8s.io/hierarchyconfigurations.hnc.x-k8s.io created
customresourcedefinition.apiextensions.k8s.io/hncconfigurations.hnc.x-k8s.io created
customresourcedefinition.apiextensions.k8s.io/subnamespaceanchors.hnc.x-k8s.io created
role.rbac.authorization.k8s.io/hnc-leader-election-role created
clusterrole.rbac.authorization.k8s.io/hnc-admin-role created
clusterrole.rbac.authorization.k8s.io/hnc-manager-role created
rolebinding.rbac.authorization.k8s.io/hnc-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/hnc-manager-rolebinding created
secret/hnc-webhook-server-cert created
service/hnc-controller-manager-metrics-service created
service/hnc-webhook-service created
deployment.apps/hnc-controller-manager created
mutatingwebhookconfiguration.admissionregistration.k8s.io/hnc-mutating-webhook-configuration created
validatingwebhookconfiguration.admissionregistration.k8s.io/hnc-validating-webhook-configuration created
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The above commands install the latest &lt;a href=&quot;https://github.com/kubernetes-sigs/hierarchical-namespaces/releases/tag/v1.1.0&quot;&gt;HNC v1.1.0&lt;/a&gt; to the namespace &lt;em&gt;&apos;hnc-system&apos;&lt;/em&gt; in the cluster.&lt;/p&gt;
&lt;p&gt;Type the following command to check the HNC manager installation:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get all -n hnc-system
NAME                                         READY   STATUS    RESTARTS      AGE
pod/hnc-controller-manager-9b5dbcd48-2268c   1/1     Running   0             29s

NAME                                             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/hnc-controller-manager-metrics-service   ClusterIP   10.96.255.60   &amp;#x3C;none&gt;        8080/TCP   32s
service/hnc-webhook-service                      ClusterIP   10.96.73.3     &amp;#x3C;none&gt;        443/TCP    32s

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/hnc-controller-manager   1/1     1            1           31s

NAME                                               DESIRED   CURRENT   READY   AGE
replicaset.apps/hnc-controller-manager-9b5dbcd48   1         1         1       32s
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Install the &lt;em&gt;kubectl-hns&lt;/em&gt; plugin&lt;/h4&gt;
&lt;p&gt;After installing the HNC manager, you can set up hierarchical namespaces directly using the kubectl CLI tool, in conjunction with a list of HNC custom resource definitions (CRDs), such as &lt;em&gt;HierarchyConfiguration&lt;/em&gt; and &lt;em&gt;SubnamespaceAnchor&lt;/em&gt;. You can also install the kubectl plugin &lt;em&gt;kubectl-hns&lt;/em&gt; in your client environment. This plugin works together with the kubectl CLI tool and simplifies many hierarchical namespace operations. This section shows the process to install the &lt;em&gt;kubectl-hns&lt;/em&gt; plugin on a Linux workstation in a local environment.&lt;/p&gt;
&lt;p&gt;Type the following commands to install the &lt;em&gt;kubectl-hns&lt;/em&gt; plugin using &lt;em&gt;cURL&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;


$ HNC_VERSION=v1.1.0
$ HNC_PLATFORM=linux_amd64

$ curl -L https://github.com/kubernetes-sigs/hierarchical-namespaces/releases/download/$HNC_VERSION/kubectl-hns_$HNC_PLATFORM -o ./kubectl-hns
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 62.2M  100 62.2M    0     0  2340k      0  0:00:27  0:00:27 --:--:-- 2336k

$ chmod +x kubectl-hns
$ sudo mv kubectl-hns /usr/local/bin/kubectl-hns

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Type the following command to verify the &lt;em&gt;kubectl-hns&lt;/em&gt; plugin is installed correctly and works with kubectl:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;

$ kubectl hns
Manipulates hierarchical namespaces provided by HNC

Usage:
  kubectl-hns [command]

Available Commands:
  completion  generate the autocompletion script for the specified shell
  config      Manipulates the HNC configuration
  create      Creates a subnamespace under the given parent.
  describe    Displays information about the hierarchy configuration
  help        Help about any command
  hrq         Display one or more HierarchicalResourceQuota
  set         Sets hierarchical properties of the given namespace
  tree        Display one or more hierarchy trees
  version     Show version of HNC plugin

Flags:
      --as string                      Username to impersonate for the operation. User could be a regular user or a service account in a namespace.
      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
      --as-uid string                  UID to impersonate for the operation.
      --cache-dir string               Default cache directory (default &quot;/home/guoping/.kube/cache&quot;)
      --certificate-authority string   Path to a cert file for the certificate authority
      --client-certificate string      Path to a client certificate file for TLS
      --client-key string              Path to a client key file for TLS
      --cluster string                 The name of the kubeconfig cluster to use
      --context string                 The name of the kubeconfig context to use
  -h, --help                           help for kubectl-hns
      --insecure-skip-tls-verify       If true, the server&apos;s certificate will not be checked for validity. This will make your HTTPS connections insecure
      --kubeconfig string              Path to the kubeconfig file to use for CLI requests.
  -n, --namespace string               If present, the namespace scope for this CLI request
      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don&apos;t timeout requests. (default &quot;0&quot;)
  -s, --server string                  The address and port of the Kubernetes API server
      --tls-server-name string         Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used
      --token string                   Bearer token for authentication to the API server
      --user string                    The name of the kubeconfig user to use
  -v, --version                        version for kubectl-hns

Use &quot;kubectl-hns [command] --help&quot; for more information about a command.

&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Create hierarchical namespaces&lt;/h3&gt;
&lt;p&gt;After installing both the HNC manager and the &lt;em&gt;kubectl-hns&lt;/em&gt; plugin, you can start creating hierarchical namespaces. This section sets up an imaginary hierarchical namespace structure in which an organization, named &lt;strong&gt;cfe-pce&lt;/strong&gt;, consists of two teams, &lt;em&gt;team-caas&lt;/em&gt; &amp;#x26; &lt;em&gt;team-vmaas&lt;/em&gt;, and each team runs its own &lt;em&gt;devops&lt;/em&gt; and &lt;em&gt;iac&lt;/em&gt; projects:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;cfe-pce:
 - team-caas
    - caas-devops
    - caas-iac
 - team-vmaas
    - vmaas-devops
    - vmaas-iac
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;A hierarchical namespace can be created using the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;
kubectl hns create &amp;#x3C;ns_child&gt; -n &amp;#x3C;ns_parent&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;where &lt;em&gt;&amp;#x3C;ns_child&gt;&lt;/em&gt; is the child namespace you want to create, while &lt;em&gt;&amp;#x3C;ns_parent&gt;&lt;/em&gt; is the parent namespace, created beforehand using the command &lt;em&gt;kubectl create ns &amp;#x3C;ns_parent&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;In case both &lt;em&gt;&amp;#x3C;ns_child&gt;&lt;/em&gt; and &lt;em&gt;&amp;#x3C;ns_parent&gt;&lt;/em&gt; already exist, you can still create a hierarchical structure by running the command below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;
kubectl hns set &amp;#x3C;ns_child&gt; --parent &amp;#x3C;ns_parent&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Type the command below to create the root namespace &lt;em&gt;&apos;cfe-pce&apos;&lt;/em&gt; representing the organization:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;
$ kubectl create ns cfe-pce
namespace/cfe-pce created
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then run the following commands to create the subnamespaces under the organization:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl hns create team-vmaas -n cfe-pce                                                                                                                                              
Successfully created &quot;team-vmaas&quot; subnamespace anchor in &quot;cfe-pce&quot; namespace

$ kubectl hns create vmaas-devops -n team-vmaas                                                                                                        
Successfully created &quot;vmaas-devops&quot; subnamespace anchor in &quot;team-vmaas&quot; namespace

$ kubectl hns create vmaas-iac -n team-vmaas                                                                                                                                            
Successfully created &quot;vmaas-iac&quot; subnamespace anchor in &quot;team-vmaas&quot; namespace    
                                                                                                     
$ kubectl hns create team-caas -n cfe-pce                                                                                                                                               
Successfully created &quot;team-caas&quot; subnamespace anchor in &quot;cfe-pce&quot; namespace

$ kubectl hns create caas-devops -n team-caas
Successfully created &quot;caas-devops&quot; subnamespace anchor in &quot;team-caas&quot; namespace

$ kubectl hns create caas-iac -n team-caas
Successfully created &quot;caas-iac&quot; subnamespace anchor in &quot;team-caas&quot; namespace
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Type the command &lt;em&gt;&apos;kubectl hns tree&apos;&lt;/em&gt; to check the hierarchical namespaces:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl hns tree cfe-pce
cfe-pce
├── team-caas
│   ├── caas-devops
│   └── caas-iac
└── team-vmaas
    ├── vmaas-devops
    └── vmaas-iac
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Create resources and apply propagating capabilities&lt;/h3&gt;
&lt;p&gt;After creating the hierarchical namespace structure, you can add roles and rolebindings, using RBAC, to enforce access control within namespaces. You can set up resource quotas and secrets to ensure safe cluster sharing. Hierarchical namespaces allow these configurations and resources to propagate from parent to child namespaces, so access control policies are applied consistently across the hierarchy.&lt;/p&gt;
&lt;p&gt;You can refer to the &lt;a href=&quot;https://github.com/kubernetes-sigs/hierarchical-namespaces/blob/master/docs/user-guide/README.md&quot;&gt;HNC user guide&lt;/a&gt; for setting up other K8s resources using HNC.&lt;/p&gt;
&lt;h4&gt;Cascade roles and rolebindings&lt;/h4&gt;
&lt;p&gt;&lt;a href=&quot;https://kubernetes.io/docs/reference/access-authn-authz/rbac/&quot;&gt;RBAC&lt;/a&gt; is commonly used in K8s to limit access to the appropriate namespaces. You must ensure that each user or workload has the correct access to only their designated namespaces. K8s &lt;a href=&quot;https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole&quot;&gt;&lt;em&gt;Role&lt;/em&gt;&lt;/a&gt; and &lt;a href=&quot;https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding&quot;&gt;&lt;em&gt;RoleBinding&lt;/em&gt;&lt;/a&gt; are two API objects used at the namespace level to enforce access control.&lt;/p&gt;
&lt;p&gt;Type the following commands to create an admin role &lt;em&gt;&apos;pce-admin&apos;&lt;/em&gt; across the whole organization &lt;em&gt;cfe-pce&lt;/em&gt; and two site reliability engineer (SRE) roles, &lt;em&gt;&apos;vmaas-sre&apos;&lt;/em&gt; and &lt;em&gt;&apos;caas-sre&apos;&lt;/em&gt;, for &lt;em&gt;team-vmaas&lt;/em&gt; and &lt;em&gt;team-caas&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;
$ kubectl -n cfe-pce create role pce-admin --verb=* --resource=pod
role.rbac.authorization.k8s.io/pce-admin created

$ kubectl -n team-vmaas create role vmaas-sre --verb=update --resource=pod
role.rbac.authorization.k8s.io/vmaas-sre created

$ kubectl -n team-caas create role caas-sre --verb=update --resource=pod
role.rbac.authorization.k8s.io/caas-sre created
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Type the following commands to create the rolebindings:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;
$ kubectl -n cfe-pce create rolebinding pce-admins --role pce-admin --serviceaccount=cfe-pce:default
rolebinding.rbac.authorization.k8s.io/pce-admins created

$ kubectl -n team-caas create rolebinding caas-sres --role caas-sre --serviceaccount=team-caas:default
rolebinding.rbac.authorization.k8s.io/caas-sres created

$ kubectl -n team-vmaas create rolebinding vmaas-sres --role vmaas-sre --serviceaccount=team-vmaas:default
rolebinding.rbac.authorization.k8s.io/vmaas-sres created

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Roles and rolebindings created in the parent namespaces are propagated to all associated subnamespaces at the team and project levels:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get role -n cfe-pce pce-admin
NAME        CREATED AT
pce-admin   2024-06-27T12:45:51Z

$ kubectl get rolebindings -n cfe-pce pce-admins
NAME         ROLE             AGE
pce-admins   Role/pce-admin   44s

$ kubectl get role -n team-caas
NAME                            CREATED AT
...
caas-sre                        2024-06-27T12:45:53Z
pce-admin                       2024-06-27T12:45:51Z

$ kubectl get rolebindings -n team-caas
NAME                            ROLE                                 AGE
...
caas-sres                       Role/caas-sre                        44s
pce-admins                      Role/pce-admin                       63s

$ kubectl get role -n caas-devops
NAME                            CREATED AT
...
caas-sre                        2024-06-27T12:45:53Z
pce-admin                       2024-06-27T12:45:51Z

$ kubectl get rolebindings -n caas-devops
NAME                            ROLE                                 AGE
...
caas-sres                       Role/caas-sre                        53s
pce-admins                      Role/pce-admin                       63s

$ kubectl get role -n caas-iac
NAME                            CREATED AT
...
caas-sre                        2024-07-29T12:10:59Z
pce-admin                       2024-08-08T08:23:42Z

$ kubectl get rolebindings -n caas-iac
NAME                            ROLE                                 AGE
...
caas-sres                       Role/caas-sre                        53s
pce-admins                      Role/pce-admin                       63s

$ kubectl get role -n team-vmaas
NAME                            CREATED AT
...
pce-admin                       2024-06-27T12:45:51Z
vmaas-sre                       2024-06-27T12:45:52Z

$ kubectl get rolebindings -n team-vmaas
NAME                            ROLE                                 AGE
...
pce-admins                      Role/pce-admin                       63s
vmaas-sres                      Role/vmaas-sre                       43s

$ kubectl get role -n vmaas-devops
NAME                            CREATED AT
...
pce-admin                       2024-08-03T14:08:45Z
vmaas-sre                       2024-07-29T12:10:59Z

$ kubectl get rolebindings -n vmaas-devops
NAME                            ROLE                                 AGE
pce-admins                      Role/pce-admin                       63s
vmaas-sres                      Role/vmaas-sre                       43s

$ kubectl get role -n vmaas-iac
NAME                            CREATED AT
...
pce-admin                       2024-06-27T12:45:51Z
vmaas-sre                       2024-06-27T12:45:52Z

$ kubectl get rolebindings -n vmaas-iac
NAME                            ROLE                                 AGE
...
pce-admins                      Role/pce-admin                       65s
vmaas-sres                      Role/vmaas-sre                       45s

&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Cascade resource quotas&lt;/h4&gt;
&lt;p&gt;K8s provides &lt;a href=&quot;https://kubernetes.io/docs/concepts/policy/resource-quotas/&quot;&gt;ResourceQuota&lt;/a&gt;, an API object that allows cluster administrators to define resource quotas per namespace. A resource quota tracks the aggregate usage of resources in the namespace and lets cluster operators set hard limits on how much a namespace may consume. A related object, the limit range, defines minimum and maximum constraints on the amount of resources a single entity can consume in a namespace; it&apos;s useful for making sure resource usage stays within certain bounds. This section shows the process to set up resource quotas using &lt;em&gt;ResourceQuota&lt;/em&gt;.&lt;/p&gt;
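&lt;p&gt;For completeness, a &lt;em&gt;LimitRange&lt;/em&gt; object (not used in the rest of this walkthrough) looks like the minimal sketch below; the names and values are purely illustrative:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ cat example-limits.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: example-limits
spec:
  limits:
  - type: Container
    defaultRequest:
      cpu: 250m
      memory: 256Mi
    default:
      cpu: 500m
      memory: 512Mi
&lt;/code&gt;&lt;/pre&gt;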
&lt;p&gt;Type the following commands to apply two resource quotas, &lt;em&gt;&apos;team-vmaas-quota&apos;&lt;/em&gt; &amp;#x26; &lt;em&gt;&apos;team-caas-quota&apos;&lt;/em&gt;, to the team namespaces &lt;em&gt;team-vmaas&lt;/em&gt; and &lt;em&gt;team-caas&lt;/em&gt;, respectively:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;


$ cat team-vmaas-quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-vmaas-quota
spec:
  hard:
    cpu: &quot;4&quot;
    memory: 20Gi
    pods: &quot;10&quot;
    persistentvolumeclaims: &quot;10&quot;
    services.nodeports: &quot;2&quot;
    requests.storage: 20Gi


$ kubectl -n team-vmaas apply -f team-vmaas-quota.yaml
resourcequota/team-vmaas-quota created

$ cat team-caas-quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-caas-quota
spec:
  hard:
    cpu: &quot;8&quot;
    memory: 60Gi
    pods: &quot;30&quot;
    persistentvolumeclaims: &quot;30&quot;
    services.nodeports: &quot;6&quot;
    requests.storage: 60Gi


$ kubectl -n team-caas apply -f team-caas-quota.yaml
resourcequota/team-caas-quota created


&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you check the resource quotas using the following commands, you will find that they exist only in each team namespace. The quota resources are not propagated to any projects under the team namespaces:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get resourcequota -n team-caas
NAME               AGE   REQUEST                                                                                                                 LIMIT
team-caas-quota   67s   cpu: 0/8, memory: 0/60Gi, persistentvolumeclaims: 0/30, pods: 0/30, requests.storage: 0/60Gi, services.nodeports: 0/6

$ kubectl get resourcequota -n team-vmaas
NAME              AGE   REQUEST                                                                                                                 LIMIT
team-vmaas-quota   74s   cpu: 0/4, memory: 0/20Gi, persistentvolumeclaims: 0/10, pods: 0/10, requests.storage: 0/20Gi, services.nodeports: 0/2

$ kubectl get resourcequota -n vmaas-iac
No resources found in vmaas-iac namespace.

$ kubectl get resourcequota -n caas-iac
No resources found in caas-iac namespace.

$ kubectl get resourcequota -n caas-devops
No resources found in caas-devops namespace.

$ kubectl get resourcequota -n vmaas-devops
No resources found in vmaas-devops namespace.

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The lack of propagation of resource quotas to the project namespaces is due to the default HNC configuration, which only propagates RBAC objects, specifically &lt;em&gt;roles&lt;/em&gt; and &lt;em&gt;rolebindings&lt;/em&gt;. You have to update the HNC configuration to propagate other K8s resources.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;
$ kubectl hns config describe
Synchronized resources:
* Propagating: rolebindings (rbac.authorization.k8s.io/v1)
* Propagating: roles (rbac.authorization.k8s.io/v1)

Conditions:

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Type the following command to update the HNC configuration to propagate the K8s &lt;em&gt;ResourceQuota&lt;/em&gt; resources:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;
$ kubectl hns config set-resource resourcequotas --mode Propagate

$ kubectl hns config describe
Synchronized resources:
* Propagating: resourcequotas (/v1)
* Propagating: rolebindings (rbac.authorization.k8s.io/v1)
* Propagating: roles (rbac.authorization.k8s.io/v1)

Conditions:

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you check the resource quotas again, you will notice that the quota resources have been propagated to all projects under each team namespace:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get resourcequota -n team-caas
NAME               AGE     REQUEST                                                                                                                 LIMIT
team-caas-quota   3m39s   cpu: 0/8, memory: 0/60Gi, persistentvolumeclaims: 0/30, pods: 0/30, requests.storage: 0/60Gi, services.nodeports: 0/6


$ kubectl get resourcequota -n caas-iac
NAME               AGE   REQUEST                                                                                                                 LIMIT
team-caas-quota   45s   cpu: 0/8, memory: 0/60Gi, persistentvolumeclaims: 0/30, pods: 0/30, requests.storage: 0/60Gi, services.nodeports: 0/6


$ kubectl get resourcequota -n caas-devops
NAME               AGE   REQUEST                                                                                                                 LIMIT
team-caas-quota   60s   cpu: 0/8, memory: 0/60Gi, persistentvolumeclaims: 0/30, pods: 0/30, requests.storage: 0/60Gi, services.nodeports: 0/6


$ kubectl get resourcequota -n team-vmaas
NAME              AGE     REQUEST                                                                                                                 LIMIT
team-vmaas-quota   4m10s   cpu: 0/4, memory: 0/20Gi, persistentvolumeclaims: 0/10, pods: 0/10, requests.storage: 0/20Gi, services.nodeports: 0/2


$ kubectl get resourcequota -n vmaas-iac
NAME              AGE   REQUEST                                                                                                                 LIMIT
team-vmaas-quota   75s   cpu: 0/4, memory: 0/20Gi, persistentvolumeclaims: 0/10, pods: 0/10, requests.storage: 0/20Gi, services.nodeports: 0/2


$ kubectl get resourcequota -n vmaas-devops
NAME              AGE   REQUEST                                                                                                                 LIMIT
team-vmaas-quota   79s   cpu: 0/4, memory: 0/20Gi, persistentvolumeclaims: 0/10, pods: 0/10, requests.storage: 0/20Gi, services.nodeports: 0/2

&lt;/code&gt;&lt;/pre&gt;
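&lt;p&gt;The &lt;em&gt;LimitRange&lt;/em&gt; object mentioned at the beginning of this section can complement these quotas by bounding what a single container may request. Below is a minimal, illustrative sketch; the file name and values are examples, and &lt;em&gt;limitranges&lt;/em&gt; would need to be added to the HNC propagation list in the same way as &lt;em&gt;resourcequotas&lt;/em&gt; if you want it cascaded to the project namespaces:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ cat team-vmaas-limits.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: team-vmaas-limits
spec:
  limits:
  - type: Container
    default:            # default limits applied to containers that specify none
      cpu: 500m
      memory: 512Mi
    defaultRequest:     # default requests applied to containers that specify none
      cpu: 100m
      memory: 128Mi
    max:                # upper bound any single container may set
      cpu: &quot;1&quot;
      memory: 1Gi

$ kubectl -n team-vmaas apply -f team-vmaas-limits.yaml
&lt;/code&gt;&lt;/pre&gt;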
&lt;h4&gt;Cascade secrets&lt;/h4&gt;
&lt;p&gt;This section demonstrates the process to configure sensitive data to be propagated through the namespace hierarchy. It uses the K8s &lt;a href=&quot;https://kubernetes.io/docs/concepts/configuration/secret/&quot;&gt;Secret&lt;/a&gt; API object.&lt;/p&gt;
&lt;p&gt;First, type the following command to update the HNC configuration to propagate the K8s &lt;em&gt;Secret&lt;/em&gt; resources:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl hns config set-resource secrets --mode Propagate

$ kubectl hns config describe
Synchronized resources:
* Propagating: resourcequotas (/v1)
* Propagating: secrets (/v1)
* Propagating: rolebindings (rbac.authorization.k8s.io/v1)
* Propagating: roles (rbac.authorization.k8s.io/v1)

Conditions:

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then execute the following commands to generate two K8s secrets of the &lt;em&gt;kubernetes.io/dockerconfigjson&lt;/em&gt; type, &lt;em&gt;&apos;team-caas-regcrd&apos;&lt;/em&gt; &amp;#x26; &lt;em&gt;&apos;team-vmaas-regcrd&apos;&lt;/em&gt;, within the &lt;em&gt;team-caas&lt;/em&gt; and &lt;em&gt;team-vmaas&lt;/em&gt; namespaces, respectively. These secrets can be used by the corresponding teams and their projects, enabling them to authenticate against their respective team Docker registries and retrieve the private images needed to deploy their applications.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt; $ kubectl -n team-caas create secret generic team-caas-regcrd --from-file=.dockerconfigjson=/home/guoping/.docker/config-team-caas.json --type=kubernetes.io/dockerconfigjson
secret/team-caas-regcrd created

$ kubectl -n team-vmaas create secret generic team-vmaas-regcrd --from-file=.dockerconfigjson=/home/guoping/.docker/config-team-vmaas.json --type=kubernetes.io/dockerconfigjson
secret/team-vmaas-regcrd created

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The secrets created within the team namespaces are automatically propagated to every project within each team namespace:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;
$ kubectl get secret -n team-caas team-caas-regcrd
NAME               TYPE                             DATA   AGE
team-caas-regcrd   kubernetes.io/dockerconfigjson   1      49s

$ kubectl get secret -n caas-devops team-caas-regcrd
NAME               TYPE                             DATA   AGE
team-caas-regcrd   kubernetes.io/dockerconfigjson   1      99s

$ kubectl get secret -n caas-iac team-caas-regcrd
NAME               TYPE                             DATA   AGE
team-caas-regcrd   kubernetes.io/dockerconfigjson   1      104s

$ kubectl get secret -n team-vmaas team-vmaas-regcrd
NAME                TYPE                             DATA   AGE
team-vmaas-regcrd   kubernetes.io/dockerconfigjson   1      40s

$ kubectl get secret -n vmaas-devops team-vmaas-regcrd
NAME                TYPE                             DATA   AGE
team-vmaas-regcrd   kubernetes.io/dockerconfigjson   1      59s

$ kubectl get secret -n vmaas-iac team-vmaas-regcrd
NAME                TYPE                             DATA   AGE
team-vmaas-regcrd   kubernetes.io/dockerconfigjson   1      65s




&lt;/code&gt;&lt;/pre&gt;
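&lt;p&gt;Once propagated, these registry secrets can be referenced by workloads in any project namespace through &lt;em&gt;imagePullSecrets&lt;/em&gt;. Here is a minimal, illustrative sketch; the deployment name and image path are hypothetical:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ cat caas-app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: caas-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: caas-app
  template:
    metadata:
      labels:
        app: caas-app
    spec:
      imagePullSecrets:
      - name: team-caas-regcrd      # the secret propagated from the team-caas namespace
      containers:
      - name: caas-app
        image: registry.example.com/team-caas/caas-app:latest

$ kubectl -n caas-devops apply -f caas-app.yaml
&lt;/code&gt;&lt;/pre&gt;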
&lt;h3&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;This blog post provides a detailed guide on how to set up hierarchical namespaces in K8s clusters in HPE GreenLake for Private Cloud Enterprise. It delves into the capabilities of hierarchical namespaces and their impact on managing K8s namespaces. Support for hierarchical namespaces simplifies K8s management and addresses the complexities of administering namespaces at scale.&lt;/p&gt;
&lt;p&gt;Keep visiting the &lt;a href=&quot;https://developer.hpe.com/blog/&quot;&gt;HPE Developer Community blog&lt;/a&gt; to learn more about HPE GreenLake for Private Cloud Enterprise and get more ideas on how you can use it in your everyday operations.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Migrate to cloud operations more easily with HPE GreenLake cloud and explore more on Chapel]]></title><link>https://developer.hpe.com/2024-august-08/</link><guid isPermaLink="false">https://developer.hpe.com/2024-august-08/</guid><pubDate>Thu, 08 Aug 2024 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Simplifying code migration: The benefits of the new Ampere Porting Advisor for x86 to ARM64]]></title><description><![CDATA[The demand for efficient software porting solutions is increasing. With the transition from legacy x86 to AArch64, and with Ampere…]]></description><link>https://developer.hpe.com/simplifying-code-migration-the-benefits-of-the-new-ampere-porting-advisor-for-x86-to-arm64/</link><guid isPermaLink="false">https://developer.hpe.com/simplifying-code-migration-the-benefits-of-the-new-ampere-porting-advisor-for-x86-to-arm64/</guid><pubDate>Mon, 05 Aug 2024 12:34:13 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;p&gt;The demand for efficient software porting solutions is increasing. With the transition from legacy x86 to AArch64, and with Ampere processors gaining adoption momentum, developers are looking for ways to expedite the migration for existing codebases. Built for sustainable cloud computing, Ampere’s Cloud Native Processors (CNPs) deliver predictable high performance, platform scalability, and power efficiency unprecedented in the industry.&lt;/p&gt;
&lt;p&gt;Today, we are announcing the Ampere Porting Advisor (APA), a new open-source software porting advisor available on our &lt;a href=&quot;https://github.com/AmpereComputing/ampere-porting-advisor&quot;&gt;GitHub&lt;/a&gt; page that promises to simplify this process. The new Ampere Porting Advisor represents a significant advancement in simplifying the migration of x86 code to the AArch64 architecture. By streamlining the migration process, reducing development costs, and enabling access to a wider ecosystem, the advisor empowers developers to embrace the benefits of the AArch64 architecture more quickly and effectively.&lt;/p&gt;
&lt;p&gt;The AArch64 architecture has gained significant traction across various software packages. By leveraging the APA, developers can tap into this expanding ecosystem and take advantage of the benefits offered by AArch64-based platforms. Migrating code from x86 to the AArch64 architecture does not have to be an intimidating process.&lt;/p&gt;
&lt;p&gt;The Ampere Porting Advisor tool:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Reduces time-to-market by minimizing the need for manual intervention. Developers can allocate their time and resources to other critical aspects of the project.&lt;/li&gt;
&lt;li&gt;Reduces development costs by automating various tasks involved in the migration.&lt;/li&gt;
&lt;li&gt;Minimizes risk of post-migration issues by eliminating the need for costly debugging and re-work.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The Ampere Porting Advisor offers a streamlined migration process, allowing developers to save time and effort. It automates many of the manual steps involved in porting code, reducing the risk of errors and ensuring consistency throughout the migration. The APA performs a comprehensive analysis of the source code and outputs recommendations, providing detailed insights into the changes required, highlighting potential pitfalls, and suggesting optimal modifications. This guidance enables developers to navigate the intricacies of transitioning between architectures more efficiently, accelerating the overall migration process.&lt;/p&gt;
&lt;p&gt;The APA includes the following features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Language support:&lt;/strong&gt; Python 3+, Java 8+, Go 1.11+, C, C++, Fortran&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Architecture specific code detection:&lt;/strong&gt; e.g., missing corresponding AArch64 assembly, architecture-specific instructions, architecture-specific flags in make files.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Dependency checks:&lt;/strong&gt; for versioning, JAR scanning, and dependency files.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Simple to run:&lt;/strong&gt; via python script, binary, or containers.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Multiple output formats:&lt;/strong&gt; terminal for quick checks, html for easy distribution, and CSV for post-processing.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The Ampere Porting Advisor will not make code modifications or API-level recommendations, and it will not send data back to Ampere. The APA is a static command-line tool that analyzes the make environment and source code for known code patterns and dependency libraries and generates a report with incompatibilities and recommendations.&lt;/p&gt;
&lt;p&gt;Download and try the Ampere Porting Advisor from &lt;a href=&quot;https://github.com/AmpereComputing/ampere-porting-advisor&quot;&gt;Ampere’s GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;To learn more about the developer efforts, find best practices, insights, you are invited to join the conversation at: &lt;a href=&quot;https://developer.amperecomputing.com&quot;&gt;https://developer.amperecomputing.com&lt;/a&gt;, and &lt;a href=&quot;https://community.amperecomputing.com/&quot;&gt;https://community.amperecomputing.com/&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE GreenLake API - How to fetch and analyze Audit Logs in Python]]></title><description><![CDATA[Accessing and organizing audit logs is crucial for keeping your system secure, meeting compliance requirements, and running operations…]]></description><link>https://developer.hpe.com/greenlake-api-how-to-generate-aceess-token-in-python/</link><guid isPermaLink="false">https://developer.hpe.com/greenlake-api-how-to-generate-aceess-token-in-python/</guid><pubDate>Fri, 26 Jul 2024 02:57:15 GMT</pubDate><content:encoded>&lt;p&gt;Accessing and organizing audit logs is crucial for keeping your system secure, meeting compliance requirements, and running operations smoothly. HPE GreenLake’s audit logs are essential for tracking activities and spotting any unusual behavior that might indicate a security threat.&lt;/p&gt;
&lt;p&gt;In this guide, we&apos;ll show you how to set up a Python script to automatically fetch and process HPE GreenLake audit logs. This will save you time and make it easier to keep an eye on your system.&lt;/p&gt;
&lt;p&gt;If you’re new to HPE GreenLake’s APIs, take a look at this &lt;a href=&quot;https://developer.hpe.com/blog/get-started-with-the-foundational-apis-for-the-hpe-greenlake-edge-to-cloud-platform-%E2%80%93-part-3-tracking-activities-and-monitoring-health/&quot;&gt;blog post&lt;/a&gt; to get started with the HPE GreenLake platform API Audit Logs.&lt;/p&gt;
&lt;h2&gt;Use cases for fetching audit logs&lt;/h2&gt;
&lt;p&gt;Audit logs are extremely useful in several scenarios:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Security Monitoring&lt;/strong&gt;: Keeping an eye on who accessed the system and what they did helps in spotting unusual activities that might be security threats.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Compliance Auditing&lt;/strong&gt;: Many industries have rules about keeping records of system activities. Audit logs are perfect for this and can be shown as proof during audits.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Operational Analysis&lt;/strong&gt;: By looking at the audit logs, companies can see patterns in how their systems are used, helping them to make better decisions about resources and planning.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Detailed steps to set up the automation&lt;/h2&gt;
&lt;p&gt;Here’s a step-by-step guide to getting your audit logs automatically using Python:&lt;/p&gt;
&lt;h3&gt;Step 1: Set up OAuth2 authentication&lt;/h3&gt;
&lt;p&gt;HPE GreenLake uses OAuth2 for security, which means you first need to get an access token.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Store Your Credentials&lt;/strong&gt;: Keep your client ID and secret safe but accessible to your script. A simple approach is to store them in a text file. In this case, I saved the client_id on the first line and the client_secret on the second line.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Request a Token&lt;/strong&gt;: Your script needs to read these credentials and use them to request an access token. This token proves to HPE GreenLake that your script has permission to access the logs.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from oauthlib.oauth2 import BackendApplicationClient
from requests_oauthlib import OAuth2Session

def generate_and_save_token():
    with open(&apos;credentials.txt&apos;, &apos;r&apos;) as file:
        client_id, client_secret = [line.strip() for line in file.readlines()]
    client = BackendApplicationClient(client_id=client_id)
    oauth = OAuth2Session(client=client)
    token = oauth.fetch_token(token_url=&apos;https://sso.common.cloud.hpe.com/as/token.oauth2&apos;, auth=(client_id, client_secret))
    return token[&apos;access_token&apos;]
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Step 2: Fetch the audit logs&lt;/h3&gt;
&lt;p&gt;Now that you have a token, you can fetch the logs from HPE GreenLake.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Make the Request&lt;/strong&gt;: Use the token to make a secure request to the audit logs endpoint. Handle any errors and ensure you process the logs only if the fetch was successful.&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import requests

def fetch_audit_logs(token):
    headers = {&apos;Authorization&apos;: f&apos;Bearer {token}&apos;}
    response = requests.get(&apos;https://global.api.greenlake.hpe.com/audit-log/v1beta1/logs&apos;, headers=headers)
    return response.json() if response.status_code == 200 else None
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Step 3: Process and save the data&lt;/h3&gt;
&lt;p&gt;After getting the logs, you&apos;ll want to convert them into a format that’s easy to read and use.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Process the JSON&lt;/strong&gt;: Convert the JSON data into a pandas DataFrame. This makes it easier to work with in Python.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Save to Excel&lt;/strong&gt;: Export this DataFrame to an Excel file. This file will be your audit log record that you can open, share, and analyze without needing specialised software.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;What does it look like in Excel?&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;/img/excel-output.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;I have used Excel in this script just for testing; the same data could be loaded into any database, including SQLite, PostgreSQL, or Oracle. Python can handle these workflows as well, but the results vary depending on the use case, especially when it comes to real-time data monitoring or handling very large volumes of logs.&lt;/p&gt;
&lt;p&gt;For these scenarios, other tools and platforms can be more suitable:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Splunk&lt;/strong&gt;: This powerful tool specializes in searching, monitoring, and analyzing machine-generated data via a Web-style interface.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;ELK Stack (Elasticsearch, Logstash, Kibana)&lt;/strong&gt;: This group of three open-source tools works together to allow users to collect logs from different sources, process them, and then visualize and analyze these data in real time. Elasticsearch acts as a search and analytics engine, Logstash processes and prepares your data, while Kibana lets you visualize the data with charts and graphs.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Tableau&lt;/strong&gt;: Known for its advanced visualization capabilities, Tableau can connect directly to nearly any database or log management solution to create complex dashboards and reports.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Full python script&lt;/h2&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import requests
from oauthlib.oauth2 import BackendApplicationClient
from requests.auth import HTTPBasicAuth
from requests_oauthlib import OAuth2Session
import pandas as pd
import json

# Constants
BASE_PATH = &apos;&amp;#x3C;Directory in your PC&gt;&apos;
CREDENTIALS_FILE = f&apos;{BASE_PATH}credentials.txt&apos;
TOKEN_FILE_PATH = f&apos;{BASE_PATH}access_token.txt&apos;
AUDIT_LOGS_URL = &apos;https://global.api.greenlake.hpe.com/audit-log/v1beta1/logs&apos;
EXCEL_FILE_PATH = f&apos;{BASE_PATH}audit_logs.xlsx&apos;

def generate_and_save_token():
    print(&quot;Reading credentials from file...&quot;)
    with open(CREDENTIALS_FILE, &apos;r&apos;) as file:
        lines = file.readlines()
        client_id = lines[0].strip()
        client_secret = lines[1].strip()

    print(&quot;Initializing OAuth client...&quot;)
    client = BackendApplicationClient(client_id=client_id)
    oauth = OAuth2Session(client=client)
    auth = HTTPBasicAuth(client_id, client_secret)

    print(&quot;Fetching access token...&quot;)
    token = oauth.fetch_token(token_url=&apos;https://sso.common.cloud.hpe.com/as/token.oauth2&apos;, auth=auth)
    access_token = token[&apos;access_token&apos;]
    
    print(&quot;Saving access token to file...&quot;)
    with open(TOKEN_FILE_PATH, &apos;w&apos;) as token_file:
        token_file.write(access_token)

    print(&quot;Token fetched and saved successfully.&quot;)
    return access_token

def fetch_audit_logs():
    print(&quot;Generating and saving access token...&quot;)
    access_token = generate_and_save_token()
    headers = {&apos;Authorization&apos;: f&apos;Bearer {access_token}&apos;}

    print(&quot;Fetching audit logs from API...&quot;)
    response = requests.get(AUDIT_LOGS_URL, headers=headers)
    
    if response.status_code == 200:
        print(&quot;Audit logs fetched successfully.&quot;)
        return response.json()
    else:
        print(f&quot;Failed to fetch audit logs: HTTP status code {response.status_code}&quot;)
        return None

def process_and_export_to_excel(data):
    print(&quot;Processing audit log data...&quot;)
    df = pd.json_normalize(data[&apos;items&apos;])
    df = df.reindex(columns=[  # Reordering and selecting columns
        &apos;id&apos;, &apos;type&apos;, &apos;user.username&apos;, &apos;workspace.id&apos;, &apos;workspace.workspaceName&apos;,
        &apos;category&apos;, &apos;application.id&apos;, &apos;application.applicationName&apos;, 
        &apos;description&apos;, &apos;generation&apos;, &apos;createdAt&apos;, &apos;updatedAt&apos;, 
        &apos;additionalInfo.ipAddress&apos;, &apos;hasDetails&apos;
    ])
    df.rename(columns={  # Renaming for clarity
        &apos;user.username&apos;: &apos;Username&apos;,
        &apos;workspace.id&apos;: &apos;Workspace ID&apos;,
        &apos;workspace.workspaceName&apos;: &apos;Workspace Name&apos;,
        &apos;category&apos;: &apos;Category&apos;,
        &apos;application.id&apos;: &apos;Application ID&apos;,
        &apos;application.applicationName&apos;: &apos;Application Name&apos;,
        &apos;description&apos;: &apos;Description&apos;,
        &apos;generation&apos;: &apos;Generation&apos;,
        &apos;createdAt&apos;: &apos;Created At&apos;,
        &apos;updatedAt&apos;: &apos;Updated At&apos;,
        &apos;additionalInfo.ipAddress&apos;: &apos;IP Address&apos;,
        &apos;hasDetails&apos;: &apos;Has Details&apos;
    }, inplace=True)
    
    print(&quot;Exporting data to Excel...&quot;)
    df.to_excel(EXCEL_FILE_PATH, index=False)
    print(f&quot;Audit logs have been successfully exported to {EXCEL_FILE_PATH}.&quot;)

def main():
    print(&quot;Starting audit log processing...&quot;)
    audit_logs_data = fetch_audit_logs()
    if audit_logs_data:
        process_and_export_to_excel(audit_logs_data)
    print(&quot;Process completed.&quot;)

if __name__ == &quot;__main__&quot;:
    main()
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;This post showed how to access and organize HPE GreenLake&apos;s audit logs using Python. It explained how to set up OAuth2 to get an access token, fetch the audit logs, and turn them into an easy-to-read Excel file. This makes it easier to keep your system secure, follow rules, and run things smoothly.&lt;/p&gt;
&lt;p&gt;In the next article, we will explore how to get additional details of an audit log and all audit logs of a particular application or user.&lt;/p&gt;
&lt;p&gt;If you still have any questions regarding the HPE GreenLake Audit Log APIs, join the &lt;a href=&quot;https://developer.hpe.com/slack-signup/&quot;&gt;HPE Developer Community Slack Workspace&lt;/a&gt; and start a discussion in our &lt;a href=&quot;https://hpedev.slack.com/archives/C02EG5XFK8Q&quot;&gt;#hpe-greenlake-api&lt;/a&gt; channel. We’re always here to help.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Configuring Azure Active Directory with long-lived tokens for user provisioning]]></title><description><![CDATA[Azure Active Directory (Azure AD) is Microsoft's cloud-based identity and access management service, designed to simplify user…]]></description><link>https://developer.hpe.com/configuring-azure-ad-with-long-term-token-for-scim-provisiong/</link><guid isPermaLink="false">https://developer.hpe.com/configuring-azure-ad-with-long-term-token-for-scim-provisiong/</guid><pubDate>Tue, 16 Jul 2024 14:33:05 GMT</pubDate><content:encoded>&lt;p&gt;Azure Active Directory (Azure AD) is Microsoft&apos;s cloud-based identity and access management service, designed to simplify user authentication and authorization across various applications and platforms. It offers a centralized solution for managing user identities, enforcing security policies, and facilitating seamless access to cloud-based resources. Azure AD automatic user provisioning simplifies the creation, maintenance, and removal of user identities in SaaS applications based on business rules.&lt;/p&gt;
&lt;p&gt;The Azure AD provisioning service provisions users to the HPE GreenLake portal by connecting to the user management API endpoints provided by HPE GreenLake Identity and Access Management (IAM). These user management API endpoints allow Azure AD to programmatically create, update, and remove users and groups. The Azure AD provisioning service uses an HPE GreenLake tenant API token to provision users and groups to the HPE GreenLake IAM.  The HPE tenant API tokens are only valid for fifteen minutes. Because Azure AD cannot automatically renew the token, long-term tokens are required.&lt;/p&gt;
&lt;p&gt;In this blog post, I&apos;ll explain the process for configuring Azure AD to use a long-term token for user and group provisioning.&lt;/p&gt;
&lt;h2&gt;Steps to configure long-term tokens in Azure AD&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;Connect to the HPE GreenLake portal and assign roles required for System for cross-domain identity management (SCIM)&lt;/li&gt;
&lt;li&gt;Get a personal access token&lt;/li&gt;
&lt;li&gt;Create a SCIM proxy token&lt;/li&gt;
&lt;li&gt;Update the SCIM proxy token and the tenant URL in Azure AD Enterprise Application&lt;/li&gt;
&lt;li&gt;Update the attribute mappings of Users and Groups&lt;/li&gt;
&lt;li&gt;User/Group Provisioning&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Step 1: Connect to HPE GreenLake portal and assign roles required for System for Cross-domain Identity Management (SCIM)&lt;/h2&gt;
&lt;p&gt;Assign the &quot;SCIM Proxy Token Contributor&quot; role to the user or user group that will create the long-term token:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Connect to the HPE GreenLake portal (&lt;a href=&quot;https://common.cloud.hpe.com/&quot;&gt;https://common.cloud.hpe.com&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;Cross-launch &quot;HPE GreenLake Flex Solutions&quot; service.&lt;/li&gt;
&lt;li&gt;Click the &quot;User Management&quot; icon on the top-right corner.&lt;/li&gt;
&lt;li&gt;Select the user/user group that will generate the SCIM proxy token.&lt;/li&gt;
&lt;li&gt;Select &quot;Actions&quot; and then &quot;Create Assignment&quot;.&lt;/li&gt;
&lt;li&gt;Select the &quot;SCIM Proxy Token Contributor&quot; role.&lt;/li&gt;
&lt;li&gt;Select the &quot;All Resources&quot; space and the &quot;greenlake.service.user&quot; scope.&lt;/li&gt;
&lt;li&gt;Enable &quot;I confirm that I want to create the assignments listed above&quot;.&lt;/li&gt;
&lt;li&gt;Click the &quot;Create Assignment&quot; button.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Step 2: Get a personal access token&lt;/h2&gt;
&lt;p&gt;An API token issued by the HPE GreenLake Flex Solutions platform must be used as the Bearer token in the Authorization header of HPE GreenLake Flex Solutions REST API requests. Perform the following steps to get API access token from HPE GreenLake Flex Solutions portal:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Log into HPE GreenLake Flex Solutions.&lt;/li&gt;
&lt;li&gt;Click the profile icon on the top-right corner.&lt;/li&gt;
&lt;li&gt;Select API Access.&lt;/li&gt;
&lt;li&gt;Copy the API access token.&lt;/li&gt;
&lt;li&gt;Save it for use with cURL or another REST API client.&lt;/li&gt;
&lt;li&gt;For example: export BEARER_TOKEN=&lt;paste token value&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: This token is valid for 15 minutes after generation.&lt;/p&gt;
&lt;h2&gt;Step 3: Create a SCIM proxy token&lt;/h2&gt;
&lt;p&gt;A SCIM Proxy Token is required for the SCIM integration to work. Run the following cURL command to generate the SCIM Proxy token:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;curl -H &quot;Authorization: bearer $BEARER_TOKEN&quot; -X POST https://sps.us1.greenlake-hpe.com/v1alpha1/proxytoken&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: This step must be performed once during the initial setup and every time a token is deleted.&lt;/p&gt;
&lt;h2&gt;Step 4: Update the SCIM proxy token and the tenant URL in Azure AD Enterprise Application&lt;/h2&gt;
&lt;p&gt;The generated SCIM Proxy Token should be copied and applied in the Azure AD Enterprise Application.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt; In Azure AD, go to the “Enterprise applications”.&lt;/li&gt;
&lt;li&gt; Click the “SSO-Integration” application.&lt;/li&gt;
&lt;li&gt; Click the “Provisioning” on the left navigation window.&lt;/li&gt;
&lt;li&gt; Click “Get started”.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/scim-page1.png&quot; alt=&quot;&quot; title=&quot;Application provisioning in Azure AD&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Set &quot;Provisioning Mode&quot; to &quot;Automatic&quot;.&lt;/li&gt;
&lt;li&gt;Click the “Admin Credentials”.&lt;/li&gt;
&lt;li&gt; Update the generated token in the “Secret Token” field.&lt;/li&gt;
&lt;li&gt; Update the URL &lt;a href=&quot;https://sps.us1.greenlake-hpe.com/v1alpha1/scimproxy&quot;&gt;https://sps.us1.greenlake-hpe.com/v1alpha1/scimproxy&lt;/a&gt; in the “Tenant URL” field.&lt;/li&gt;
&lt;li&gt;Test the connection - the connection to the HPE GreenLake platform should succeed.&lt;/li&gt;
&lt;li&gt;Save the configuration.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/scim-page2.png&quot; alt=&quot;&quot; title=&quot;Updating the tenant URL and Token&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Step 5: Update the attribute mappings of users and groups&lt;/h2&gt;
&lt;p&gt;Before provisioning the users/groups to HPE GreenLake platform, edit the attribute mappings:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Update the attribute mapping of Users&lt;/li&gt;
&lt;li&gt;Unselect the update options under &quot;Target Object Actions&quot;&lt;/li&gt;
&lt;li&gt;The customappsso mapping should have the following attributes configured:&lt;br&gt;
userName&lt;br&gt;
displayName&lt;br&gt;
name.givenName&lt;br&gt;
name.familyName&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/scim-page3.png&quot; alt=&quot;&quot; title=&quot;Attribute Mapping of user&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Update the attribute mapping of groups&lt;/li&gt;
&lt;li&gt;The customappsso mapping should have the following attributes configured:&lt;br&gt;
displayName&lt;br&gt;
externalid&lt;br&gt;
members&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/scim-page4.png&quot; alt=&quot;&quot; title=&quot;Attribute Mapping of Group&quot;&gt;&lt;/p&gt;
&lt;p&gt;Save the configuration and switch the provisioning status from &quot;OFF&quot; to &quot;ON&quot;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/scim-page6.png&quot; alt=&quot;&quot; title=&quot;Enabling the Provisioning status to ON&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Assign the Azure AD group to the Enterprise application&lt;/li&gt;
&lt;li&gt;Note: This step is very important because it grants access only to the subset of Azure AD groups and users that need access to the HPE GreenLake platform, rather than to all of the enterprise&apos;s groups in Azure AD.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/scim-page5.png&quot; alt=&quot;&quot; title=&quot;Assign the Azure AD group to the Enterprise application&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Step 6: User/Group Provisioning&lt;/h2&gt;
&lt;p&gt;You are now all set to provision groups/users to the HPE GreenLake platform.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Click &quot;Start Provisioning&quot; to start.&lt;/li&gt;
&lt;li&gt;Upon successful provisioning, verify that the users and groups are pushed to the HPE GreenLake platform.&lt;/li&gt;
&lt;li&gt;Click &quot;Stop Provisioning&quot; to stop.&lt;/li&gt;
&lt;li&gt;Click &quot;View provisioning logs&quot; to review any failures.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/scim-page7.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Users can rotate a long-lived token before its expiration date using the following API:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;curl -H &quot;Authorization: bearer $BEARER_TOKEN&quot; -X POST https://sps.us1.greenlake-hpe.com/v1alpha1/proxytoken/rotate?remove-current=true&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;When the &quot;remove-current&quot; flag is enabled, it replaces the current token with a new one. During this process, there might be a temporary disruption in user and group provisioning, which will automatically resolve itself in the subsequent provisioning cycle. Alternatively, if the &quot;remove-current&quot; flag is disabled, the current token is replaced only after the new token takes effect, ensuring an uninterrupted user experience without any provisioning failures.&lt;/p&gt;
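&lt;p&gt;As a sketch of the non-disruptive variant described above, the same endpoint can be called with the flag disabled; the parameter value shown here is an assumption based on the flag name, so check the API documentation for the exact syntax:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;curl -H &quot;Authorization: bearer $BEARER_TOKEN&quot; -X POST https://sps.us1.greenlake-hpe.com/v1alpha1/proxytoken/rotate?remove-current=false&lt;/code&gt;&lt;/p&gt;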
&lt;p&gt;I hope this blog post answers any questions you may have regarding configuration of SCIM with HPE GreenLake platform. Please return to the &lt;a href=&quot;https://developer.hpe.com/blog/&quot;&gt;HPE Developer Community blog&lt;/a&gt; for more tips and tricks on working with the HPE GreenLake platform.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Parallel Processing of a Billion Rows of Data in Chapel]]></title><description><![CDATA[External blog]]></description><link>https://developer.hpe.com/parallel-processing-of-a-billion-rows-of-data-in-chapel/</link><guid isPermaLink="false">https://developer.hpe.com/parallel-processing-of-a-billion-rows-of-data-in-chapel/</guid><pubDate>Fri, 12 Jul 2024 19:10:04 GMT</pubDate><content:encoded>&lt;p&gt;External blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Tensor Parallelism in Three Levels of Difficulty]]></title><description><![CDATA[External blog post]]></description><link>https://developer.hpe.com/tensor-parallelism-in-three-levels-of-difficulty/</link><guid isPermaLink="false">https://developer.hpe.com/tensor-parallelism-in-three-levels-of-difficulty/</guid><pubDate>Wed, 10 Jul 2024 13:44:29 GMT</pubDate><content:encoded>&lt;p&gt;External blog post&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Navier-Stokes in Chapel — 2D Simulations and Performance]]></title><description><![CDATA[External blog]]></description><link>https://developer.hpe.com/navier-stokes-in-chapel-—-2d-simulations-and-performance/</link><guid isPermaLink="false">https://developer.hpe.com/navier-stokes-in-chapel-—-2d-simulations-and-performance/</guid><pubDate>Tue, 09 Jul 2024 22:34:53 GMT</pubDate><content:encoded>&lt;p&gt;External blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Integrating K8sGPT to empower Kubernetes with AI in HPE GreenLake for Private Cloud Enterprise]]></title><description><![CDATA[This blog post describes the process to integrate K8sGPT serving a local large language model (LLM) as an artificial intelligence (AI…]]></description><link>https://developer.hpe.com/integrating-k8sgpt-to-empower-kubernetes-with-ai-in-hpe-greenlake-for-private-cloud-enterprise/</link><guid isPermaLink="false">https://developer.hpe.com/integrating-k8sgpt-to-empower-kubernetes-with-ai-in-hpe-greenlake-for-private-cloud-enterprise/</guid><pubDate>Tue, 09 Jul 2024 11:48:04 GMT</pubDate><content:encoded>&lt;style&gt; li { font-size: 27px; line-height: 33px; max-width: none; } &lt;/style&gt;
&lt;p&gt;This blog post describes the process to integrate &lt;a href=&quot;https://github.com/k8sgpt-ai/k8sgpt&quot;&gt;K8sGPT&lt;/a&gt; serving a local large language model (LLM) as an artificial intelligence (AI) backend to Kubernetes (K8s) in HPE GreenLake for Private Cloud Enterprise. It explores the convergence of K8s and AI for diagnosing and triaging issues in K8s clusters and providing actionable insights and recommendations from AI for K8s management.&lt;/p&gt;
&lt;h3&gt;Overview&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/greenlake/containers.html&quot;&gt;HPE GreenLake for Private Cloud Enterprise: Containers&lt;/a&gt;, one of the HPE GreenLake cloud services available on the HPE GreenLake for Private Cloud Enterprise, allows customers to create a K8s cluster and deploy containerized applications to the cluster. It provides an enterprise-grade container management service using open source K8s.&lt;/p&gt;
&lt;p&gt;K8s is celebrated for its scalability, self-healing capabilities and compatibility, which make it a preferred choice for developers and organizations. Though K8s dramatically simplifies application deployment in containers, it adds a new set of complexities for managing, securing and troubleshooting applications. Operating K8s clusters and managing the deployed applications can present challenges.&lt;/p&gt;
&lt;p&gt;AI is a technological innovation that equips computers and machines with the ability to mimic human intelligence and problem-solving skills. These AI systems are trained on vast volumes of data, enabling them to recognize patterns and perform tasks such as solving problems in a human-like manner. The evolution of AI, especially the advent of extensive LLM models, has expanded possibilities in many sectors, including K8s. The application of AI in enhancing diagnostics and troubleshooting workflows in K8s clusters has grown considerably.&lt;/p&gt;
&lt;p&gt;This blog post explores the convergence of K8s and AI through K8sGPT, a tool for scanning the K8s cluster, diagnosing and triaging K8s issues using AI. It describes the detailed process to integrate K8sGPT with local LLM models to empower K8s in HPE GreenLake for Private Cloud Enterprise.&lt;/p&gt;
&lt;h3&gt;Prerequisites&lt;/h3&gt;
&lt;p&gt;Before starting, make sure you have the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;A K8s cluster being provisioned, using &lt;a href=&quot;https://developer.hpe.com/blog/kubernetes-clusters-as-code-part1/&quot;&gt;HPE GreenLake Terraform provider&lt;/a&gt; for example, in HPE GreenLake for Private Cloud Enterprise&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The &lt;em&gt;kubectl&lt;/em&gt; CLI tool, together with the kubeconfig file for accessing the K8s cluster&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://www.python.org/downloads/&quot;&gt;Python 3.8 or higher&lt;/a&gt;, and &lt;em&gt;pip&lt;/em&gt; that’s included by default in Python&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;K8sGPT and LocalAI&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/k8sgpt-ai/k8sgpt&quot;&gt;K8sGPT&lt;/a&gt; is an open source project designed to address common and complex issues within K8s cluster using AI. It leverages large language models (LLMs) to enhance troubleshooting, streamline processes, and improve K8s management. K8sGPT supports various &lt;a href=&quot;https://docs.k8sgpt.ai/reference/providers/backend/&quot;&gt;AI backends&lt;/a&gt; (also called providers), including &lt;a href=&quot;https://openai.com/&quot;&gt;OpenAI&lt;/a&gt;, &lt;a href=&quot;https://aws.amazon.com/bedrock/&quot;&gt;Amazon Bedrock&lt;/a&gt;, &lt;a href=&quot;https://azure.microsoft.com/en-us/products/cognitive-services/openai-service&quot;&gt;Azure OpenAI&lt;/a&gt;, &lt;a href=&quot;https://ai.google.dev/docs/gemini_api_overview&quot;&gt;Google Gemini&lt;/a&gt; as well as &lt;a href=&quot;https://github.com/mudler/LocalAI&quot;&gt;LocalAI&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;LocalAI is an open source project that provides an alternative to OpenAI’s offerings for local inferencing. It does not require a GPU and can run on consumer-grade hardware without high-end computing resources. By deploying AI solutions within the local infrastructure and keeping all processes in-house, it avoids the costs associated with external AI services and ensures better data sovereignty and privacy.&lt;/p&gt;
&lt;p&gt;The following sections describe the process to deploy K8sGPT using LocalAI as its backend to empower K8s cluster with AI capabilities in HPE GreenLake for Private Cloud Enterprise.&lt;/p&gt;
&lt;h3&gt;Install K8sGPT&lt;/h3&gt;
&lt;p&gt;There are two options to install K8sGPT: as a CLI tool or as a K8s Operator in the K8s cluster. This section takes the CLI option, installing K8sGPT on the Linux workstation in my local environment that is used to manage the K8s clusters. You can refer to the &lt;a href=&quot;https://github.com/k8sgpt-ai/k8sgpt&quot;&gt;K8sGPT GitHub&lt;/a&gt; for additional information on installing K8sGPT as a K8s Operator or as a CLI tool on &lt;em&gt;Windows&lt;/em&gt; or &lt;em&gt;Mac&lt;/em&gt;. With the CLI option, the K8sGPT setup is independent of any specific K8s cluster and can be used to work with any existing K8s cluster. This is helpful, especially when multiple K8s clusters are created and managed in your environment, as they can all work with the same K8sGPT installation.&lt;/p&gt;
&lt;p&gt;Follow these instructions to install K8sGPT as a CLI tool on the Linux workstation:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ curl -LO https://github.com/k8sgpt-ai/k8sgpt/releases/download/v0.3.24/k8sgpt_386.deb
$ sudo dpkg -i k8sgpt_386.deb
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Type the following command to verify the K8sGPT installation:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ k8sgpt version
k8sgpt: 0.3.24 (eac9f07), built at: unknown

$ k8sgpt -h
Kubernetes debugging powered by AI

Usage:
  k8sgpt [command]

Available Commands:
  analyze     This command will find problems within your Kubernetes cluster
  auth        Authenticate with your chosen backend
  cache       For working with the cache the results of an analysis
  completion  Generate the autocompletion script for the specified shell
  filters     Manage filters for analyzing Kubernetes resources
  generate    Generate Key for your chosen backend (opens browser)
  help        Help about any command
  integration Integrate another tool into K8sGPT
  serve       Runs k8sgpt as a server
  version     Print the version number of k8sgpt

Flags:
      --config string        Default config file (/home/guoping/.config/k8sgpt/k8sgpt.yaml)
  -h, --help                 help for k8sgpt
      --kubeconfig string    Path to a kubeconfig. Only required if out-of-cluster.
      --kubecontext string   Kubernetes context to use. Only required if out-of-cluster.

Use &quot;k8sgpt [command] --help&quot; for more information about a command.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;K8sGPT comes with a list of &lt;a href=&quot;https://github.com/k8sgpt-ai/k8sgpt?tab=readme-ov-file#analyzers&quot;&gt;built-in analyzers&lt;/a&gt;, which is essentially a series of rules/checks that can be used to diagnose issues in various K8s API resources, such as &lt;em&gt;Pod&lt;/em&gt; crashes, &lt;em&gt;Service&lt;/em&gt; failures, etc.&lt;/p&gt;
&lt;p&gt;K8sGPT provides a list of &lt;em&gt;filters&lt;/em&gt; that can be used together with K8sGPT analyzer to scan issues for any specific K8s API resources:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ k8sgpt filters list
Active:
&gt; Node
&gt; Pod
&gt; Deployment
&gt; ReplicaSet
&gt; PersistentVolumeClaim
&gt; Service
&gt; Ingress
&gt; StatefulSet
&gt; ValidatingWebhookConfiguration
&gt; MutatingWebhookConfiguration
&gt; CronJob
Unused:
&gt; HorizontalPodAutoScaler
&gt; PodDisruptionBudget
&gt; NetworkPolicy
&gt; Log
&gt; GatewayClass
&gt; Gateway
&gt; HTTPRoute
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can add any other &lt;em&gt;unused&lt;/em&gt; filters, e.g., &lt;em&gt;HorizontalPodAutoScaler&lt;/em&gt; or &lt;em&gt;NetworkPolicy&lt;/em&gt;, by typing the command &lt;em&gt;&apos;k8sgpt filters add [filter(s)]&apos;&lt;/em&gt;, as shown below.&lt;/p&gt;
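&lt;p&gt;For example, to activate the &lt;em&gt;NetworkPolicy&lt;/em&gt; filter listed as unused above:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ k8sgpt filters add NetworkPolicy
&lt;/code&gt;&lt;/pre&gt;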
&lt;h3&gt;Set up LocalAI backend&lt;/h3&gt;
&lt;p&gt;This section will focus on setting up and utilizing LocalAI with a supported LLM model in the local environment. This LocalAI setup in the workstation will be integrated with K8sGPT.&lt;/p&gt;
&lt;h4&gt;Download a LLM model&lt;/h4&gt;
&lt;p&gt;LocalAI supports a list of LLM models, such as &lt;em&gt;LLaMA&lt;/em&gt;, &lt;em&gt;GPT4All&lt;/em&gt;, &lt;em&gt;Alpaca&lt;/em&gt; and &lt;em&gt;Koala&lt;/em&gt;. In this blog post, the LLM model &lt;em&gt;&lt;code&gt;Llama-2-13b-chat-hf&lt;/code&gt;&lt;/em&gt; from &lt;a href=&quot;https://huggingface.co/&quot;&gt;Hugging Face&lt;/a&gt; will be downloaded and used as the local AI backend for K8sGPT.&lt;/p&gt;
&lt;p&gt;After you create an account in &lt;em&gt;Hugging Face&lt;/em&gt;, log in to the site and share your contact information. You will then be granted access to this model.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/hf-llama.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Type the following command to clone the LLM model. Make sure you have &lt;a href=&quot;https://git-lfs.github.com&quot;&gt;&lt;em&gt;git-lfs&lt;/em&gt;&lt;/a&gt; installed.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ git-lfs clone https://huggingface.co/meta-llama/Llama-2-13b-chat-hf

$ tree Llama-2-13b-chat-hf/
Llama-2-13b-chat-hf/
├── config.json
├── generation_config.json
├── LICENSE.txt
├── model-00001-of-00003.safetensors
├── model-00002-of-00003.safetensors
├── model-00003-of-00003.safetensors
├── model.safetensors.index.json
├── pytorch_model-00001-of-00003.bin
├── pytorch_model-00002-of-00003.bin
├── pytorch_model-00003-of-00003.bin
├── pytorch_model.bin.index.json
├── README.md
├── Responsible-Use-Guide.pdf
├── special_tokens_map.json
├── tokenizer_config.json
├── tokenizer.json
├── tokenizer.model
└── USE_POLICY.md

0 directories, 18 files
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Install &lt;em&gt;Text Generation Web UI&lt;/em&gt;&lt;/h4&gt;
&lt;p&gt;There are many LLM serving frameworks and tools that support various LLM models and provide a way to interact with those models through OpenAI-compatible API endpoints.&lt;/p&gt;
&lt;p&gt;This section introduces the &lt;a href=&quot;https://github.com/oobabooga/text-generation-webui&quot;&gt;Text Generation Web UI (TGW)&lt;/a&gt; tool and shows how to set it up to serve the locally downloaded LLM model as the AI backend. The TGW is a &lt;a href=&quot;https://github.com/gradio-app/gradio/&quot;&gt;Gradio&lt;/a&gt;-based tool that builds a web UI and supports many large language model formats with OpenAI-compatible APIs.&lt;/p&gt;
&lt;p&gt;Type the following commands to clone the TGW repo and install it in the workstation:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
pip install -r requirements.txt
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Serve local LLM model&lt;/h4&gt;
&lt;p&gt;Type the following command to start serving the downloaded LLM model on your workstation. The option &lt;em&gt;&apos;--extensions openai&apos;&lt;/em&gt; enables the OpenAI extension, which exposes an OpenAI-compatible API. The options &lt;em&gt;&apos;--cpu&apos;&lt;/em&gt; &amp;#x26; &lt;em&gt;&apos;--load-in-4bit&apos;&lt;/em&gt; load the model in 4-bit precision and run inference on the CPU. This is more cost-effective for inference, and it’s helpful in case you don’t have GPUs installed in your environment.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ python3 server.py --listen-host 0.0.0.0 --listen --model Llama-2-13b-chat-hf --model-dir /home/guoping/CFE/AI/models --extensions openai --cpu --load-in-4bit
20:32:39-883617 INFO     Starting Text generation web UI                                                                                         
20:32:39-887086 WARNING                                                                                                                          
                         You are potentially exposing the web UI to the entire internet without any access password.                             
                         You can create one with the &quot;--gradio-auth&quot; flag like this:                                                             
                                                                                                                                                 
                         --gradio-auth username:password                                                                                         
                                                                                                                                                 
                         Make sure to replace username:password with your own.                                                                   
20:32:39-889820 INFO     Loading &quot;Llama-2-13b-chat-hf&quot;                                                                                           
20:32:39-923551 INFO     TRANSFORMERS_PARAMS=                                                                                                    
{&apos;low_cpu_mem_usage&apos;: True, &apos;torch_dtype&apos;: torch.float32}

Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████| 3/3 [00:47&amp;#x3C;00:00, 15.91s/it]
20:33:27-988083 INFO     LOADER: &quot;Transformers&quot;                                                                                                  
20:33:27-989519 INFO     TRUNCATION LENGTH: 4096                                                                                                 
20:33:27-990434 INFO     INSTRUCTION TEMPLATE: &quot;Custom (obtained from model metadata)&quot;                                                           
20:33:27-991414 INFO     Loaded the model in 48.10 seconds.                                                                                      
20:33:27-992350 INFO     Loading the extension &quot;openai&quot;                                                                                          
20:33:28-164441 INFO     OpenAI-compatible API URL:                                                                                              
                                                                                                                                                 
                         http://0.0.0.0:5000                                                                                                     
                                                                                                                                                 
Running on local URL:  http://0.0.0.0:7860
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The OpenAI-compatible API endpoint is hosted at &lt;em&gt;&apos;&lt;a href=&quot;http://0.0.0.0:5000&quot;&gt;http://0.0.0.0:5000&lt;/a&gt;&apos;&lt;/em&gt;.&lt;/p&gt;
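&lt;p&gt;Before wiring this endpoint into K8sGPT, you can optionally confirm it responds by querying the standard OpenAI-style routes. This is an illustrative check, assuming the default routes exposed by the TGW OpenAI extension:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# List the models exposed by the OpenAI-compatible API
$ curl http://localhost:5000/v1/models

# Send a simple chat completion request to the served model
$ curl http://localhost:5000/v1/chat/completions \
    -H &quot;Content-Type: application/json&quot; \
    -d &apos;{&quot;model&quot;: &quot;Llama-2-13b-chat-hf&quot;, &quot;messages&quot;: [{&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: &quot;Hello&quot;}]}&apos;
&lt;/code&gt;&lt;/pre&gt;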
&lt;p&gt;Apart from this API endpoint, the TGW also runs a local web UI at &lt;em&gt;&apos;&lt;a href=&quot;http://0.0.0.0:7860&quot;&gt;http://0.0.0.0:7860&lt;/a&gt;&apos;&lt;/em&gt;, providing three interface modes, &lt;em&gt;chat&lt;/em&gt;, &lt;em&gt;default&lt;/em&gt; &amp;#x26; &lt;em&gt;notebook&lt;/em&gt;, for text generation with the local LLM model backend.&lt;/p&gt;
&lt;p&gt;You can open a browser and navigate to the local URL &lt;em&gt;&apos;&lt;a href=&quot;http://0.0.0.0:7860&quot;&gt;http://0.0.0.0:7860&lt;/a&gt;&apos;&lt;/em&gt;. You then land on the &lt;em&gt;Chat&lt;/em&gt; page. Ask a question on the &lt;em&gt;Chat&lt;/em&gt; page by typing some text and clicking the &lt;strong&gt;Generate&lt;/strong&gt; button. You may notice it’s a bit slower if you serve the model using CPU inference, but everything should work and you will get the AI&apos;s response to your question.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/local-llm.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Integrate K8sGPT with LocalAI&lt;/h3&gt;
&lt;p&gt;Type the following command to configure K8sGPT to use the local API endpoint. Don’t forget to add the &lt;em&gt;“/v1”&lt;/em&gt; to the API URL.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ k8sgpt auth add --backend localai --model Llama-2-13b-chat-hf --baseurl http://localhost:5000/v1
localai added to the AI backend provider list


$ k8sgpt auth list
Default:
&gt; openai
Active:
&gt; localai
Unused:
&gt; openai
&gt; azureopenai
&gt; noopai
&gt; cohere
&gt; amazonbedrock
&gt; amazonsagemaker
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In case the LocalAI API endpoint has already been added to K8sGPT, type the command below to remove it. Then re-run &lt;em&gt;k8sgpt auth add&lt;/em&gt; to add it again with the new LLM model and its API endpoint.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ k8sgpt auth remove --backends localai
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Detect K8s issues using K8sGPT&lt;/h3&gt;
&lt;p&gt;With all components being installed and configured, it&apos;s time to detect K8s issues in the K8s cluster.&lt;/p&gt;
&lt;h4&gt;Deploy a sample application&lt;/h4&gt;
&lt;p&gt;This section will deploy a sample application using the &lt;em&gt;kubectl&lt;/em&gt; CLI.&lt;/p&gt;
&lt;p&gt;Type the following commands to deploy the sample application to the namespace &lt;em&gt;&apos;cfe-apps&apos;&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl create ns cfe-apps
namespace/cfe-apps created

$ kubectl create deployment app-with-no-image --image=cfe-image-not-exist -n cfe-apps
deployment.apps/app-with-no-image created
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Check the deployed application using the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get all -n cfe-apps
NAME                                     READY   STATUS             RESTARTS   AGE
pod/app-with-no-image-7ff65f5484-9bt4z   0/1     ImagePullBackOff   0           2m

NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/app-with-no-image   0/1     1            0            2m

NAME                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/app-with-no-image-7ff65f5484   1         1         0        2m
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The output shows some issues in the K8s &lt;em&gt;Pod&lt;/em&gt; &lt;em&gt;&apos;app-with-no-image-7ff65f5484-9bt4z&apos;&lt;/em&gt;. The K8s Deployment keeps showing as &lt;em&gt;&apos;0/1 READY&apos;&lt;/em&gt;.&lt;/p&gt;
&lt;h4&gt;Run &lt;em&gt;k8sgpt analyze&lt;/em&gt;&lt;/h4&gt;
&lt;p&gt;To start K8s issue detection, type the following &lt;em&gt;k8sgpt analyze&lt;/em&gt; command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ k8sgpt analyze --backend localai --filter=Pod --namespace cfe-apps --no-cache
AI Provider: localai

0 cfe-apps/app-with-no-image-7ff65f5484-9bt4z(Deployment/app-with-no-image)
- Error: Back-off pulling image &quot;cfe-image-not-exist&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Instead of running a scan with all the default analyzers, the above command uses the filter &lt;em&gt;Pod&lt;/em&gt; to detect K8s Pod issues in the namespace &lt;em&gt;&apos;cfe-apps&apos;&lt;/em&gt;. It detects the back-off error pulling the image &quot;cfe-image-not-exist&quot; in the Pod.&lt;/p&gt;
&lt;p&gt;Execute again the K8sGPT analyze with the option &lt;em&gt;&apos;--explain&apos;&lt;/em&gt; this time:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ k8sgpt analyze --backend localai --filter=Pod --namespace cfe-apps --explain --no-cache
 100% |█████████████████████████████████████████████████████████████████████████████████████████████████| (1/1, 25 it/hr)
AI Provider: localai

0 cfe-apps/app-with-no-image-7ff65f5484-9bt4z(Deployment/app-with-no-image)
- Error: Back-off pulling image &quot;cfe-image-not-exist&quot;
Error: Back-off pulling image &quot;cfe-image-not-exist&quot;
Solution:

1. Check the image name and ensure it exists in the registry.
2. Verify the image tag and ensure it matches the expected tag.
3. Try pulling the image again using the correct name and tag.
4. If the image still does not exist, check the spelling and ensure the name is correct.
5. If all else fails, try using the &quot;docker pull&quot; command to download the image locally and then push it to the registry. 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The option &lt;em&gt;&apos;--explain&apos;&lt;/em&gt; in the above command enables the AI backend. The analyzer establishes a connection to the AI backend and provides it with the error message it detected in K8s. With &lt;em&gt;&apos;--explain&apos;&lt;/em&gt;, K8sGPT not only detects the K8s issue but also provides step-by-step solutions, which can be used as actionable insights to fix it.&lt;/p&gt;
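&lt;p&gt;If you want to feed these insights into other tooling, it can be useful to capture the analysis in a machine-readable form. The sketch below assumes your K8sGPT version supports the &lt;em&gt;&apos;--output json&apos;&lt;/em&gt; option (run &lt;em&gt;k8sgpt analyze --help&lt;/em&gt; to confirm):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Run the same analysis but save the result as JSON for later processing
$ k8sgpt analyze --backend localai --filter=Pod --namespace cfe-apps --explain --no-cache --output json &gt; k8sgpt-report.json
&lt;/code&gt;&lt;/pre&gt;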
&lt;p&gt;Apart from the default English, K8sGPT can return the response in other languages. Here is an example of getting a response from K8sGPT in French:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ k8sgpt analyze --backend localai --filter=Pod --namespace cfe-apps --language french --explain --no-cache
 100% |█████████████████████████████████████████████████████████████████████████████████████████████████| (1/1, 16 it/hr)
AI Provider: localai

0 cfe-apps/app-with-no-image-7ff65f5484-9bt4z(Deployment/app-with-no-image)
- Error: Back-off pulling image &quot;cfe-image-not-exist&quot;

Sure! Here&apos;s the simplified error message and solution in French, within the 280 character limit:

Error: Impossible de tirer l&apos;image &quot;cfe-image-not-exist&quot;.

Solution:

1. Vérifiez si l&apos;image est disponible dans le registre de conteneurs.
2. Si l&apos;image n&apos;existe pas, créez-la manuellement en utilisant le commandes `docker pull` ou `docker build`.
3. Essayez à nouveau de tirer l&apos;image en utilisant la commande `kubectl apply`.
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;This blog post explores the integration of K8s with AI to address the complexities of managing, securing, and troubleshooting a K8s cluster and its applications. Integrating K8sGPT with LocalAI in K8s on HPE GreenLake for Private Cloud Enterprise eliminates the need for external AI services, leading to cost reductions and enhanced data sovereignty and privacy, since no data is transmitted to external AI providers.&lt;/p&gt;
&lt;p&gt;K8sGPT proves to be a valuable asset in diagnosing issues in a K8s cluster and enhancing operational efficiency. The tool can be further integrated with other organizational tools like &lt;em&gt;Slack&lt;/em&gt; and &lt;em&gt;Microsoft Teams&lt;/em&gt; to dispatch notifications and alerts, accompanied by suggested solutions as remediation steps. This further amplifies the practical value of K8sGPT within the organization.&lt;/p&gt;
&lt;p&gt;Please keep coming back to the &lt;a href=&quot;https://developer.hpe.com/blog/&quot;&gt;HPE Developer Community blog&lt;/a&gt; to learn more about HPE GreenLake for Private Cloud Enterprise and get more ideas on how you can use it in your everyday operations.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Learn what happened at Discover 2024, accelerate Day 0 operations, explore AI, and more!]]></title><link>https://developer.hpe.com/2024-july-08/</link><guid isPermaLink="false">https://developer.hpe.com/2024-july-08/</guid><pubDate>Mon, 08 Jul 2024 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[The Sleuth and the Storyteller: The Dynamic Duo behind RAG]]></title><description><![CDATA[External blog]]></description><link>https://developer.hpe.com/the-sleuth-and-the-storyteller-the-dynamic-duo-behind-rag/</link><guid isPermaLink="false">https://developer.hpe.com/the-sleuth-and-the-storyteller-the-dynamic-duo-behind-rag/</guid><pubDate>Thu, 04 Jul 2024 10:16:28 GMT</pubDate><content:encoded>&lt;p&gt;External blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Introduction to Redfish interoperability profiles]]></title><description><![CDATA[Introduction When I explain to an audience that the Redfish® standard
requires the implementation of only a subset of the properties…]]></description><link>https://developer.hpe.com/redfish-interoperability-profiles/</link><guid isPermaLink="false">https://developer.hpe.com/redfish-interoperability-profiles/</guid><pubDate>Wed, 03 Jul 2024 10:58:04 GMT</pubDate><content:encoded>&lt;style&gt; li { font-size: 27px; line-height: 35px; max-width: none; } &lt;/style&gt;
&lt;style&gt; figcaption {font-style: italic; font-size: 15px; line-height: 33px; max-width: none;} &lt;/style&gt;
&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;When I explain to an audience that the &lt;a href=&quot;https://redfish.dmtf.org&quot; target=&quot;_blank&quot;&gt;Redfish® standard&lt;/a&gt;
requires the implementation of only a subset of the properties mentioned in the entire &lt;a href=&quot;https://developer.hpe.com/blog/why-is-redfish%C2%AE-different-from-other-rest-apis-part-1/#data-model&quot; target=&quot;_blank&quot;&gt;data model&lt;/a&gt;, I can see people looking at me puzzled and asking themselves:&lt;/p&gt;
&lt;p&gt;&quot;&lt;em&gt;What? Are you telling me that there is a potential that some BMCs in my data center do not implement the &lt;code&gt;FirmwareVersion&lt;/code&gt; property and yet they are considered to be compliant with the standard?&lt;/em&gt;&quot;.&lt;/p&gt;
&lt;p&gt;The answer is yes. Redfish services that omit properties not labeled &quot;required&quot; are still considered compliant.&lt;/p&gt;
&lt;p&gt;Although they are considered to be compliant, there are instances where this can be problematic. In this blog post, I&apos;ll elaborate and provide examples of use cases where it can be a problem. Then I&apos;ll introduce the &lt;a href=&quot;https://www.dmtf.org/dsp/DSP0272&quot; target=&quot;_blank&quot;&gt; Redfish interoperability profiles specification&lt;/a&gt; that has been created by the Distributed Management Task Force (&lt;a href=&quot;https://dmtf.org&quot; target=&quot;_blank&quot;&gt;DMTF&lt;/a&gt;) to address those use cases.&lt;/p&gt;
&lt;p&gt;The Redfish interoperability profiles specification constitutes another Redfish specificity that could be added to the list presented in &lt;a href=&quot;https://developer.hpe.com/blog/why-is-redfish%C2%AE-different-from-other-rest-apis-part-1/&quot; target=&quot;_blank&quot;&gt;part 1&lt;/a&gt; and &lt;a href=&quot;https://developer.hpe.com/blog/why-is-redfish%C2%AE-different-from-other-rest-apis-part-2/&quot; target=&quot;_blank&quot;&gt;part 2&lt;/a&gt; of the &lt;em&gt;Why is Redfish different from other REST APIs&lt;/em&gt; blog posts.&lt;/p&gt;
&lt;h3&gt;Redfish services can omit defined properties&lt;/h3&gt;
&lt;p&gt;The DSP0266 standard document states in its &lt;a href=&quot;https://www.dmtf.org/sites/default/files/standards/documents/DSP0266_1.20.1.html#properties&quot; target=&quot;_blank&quot;&gt;Properties overview&lt;/a&gt; paragraph:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Required properties shall always be returned in a response.&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/fig1-dmtfpropertiesoverview.png&quot; alt=&quot;Figure 1: DMTF Properties overview paragraph from DSP0266&quot; title=&quot;Figure 1: DMTF Properties overview paragraph from DSP0266&quot;&gt;&lt;/p&gt;
&lt;figcaption&gt;Figure 1: DMTF Properties overview paragraph from DSP0266&lt;/figcaption&gt;
&lt;p&gt;This assertion suggests that some properties are not &quot;required&quot; in the implementation of the service. As an example, in the data model of the &lt;a href=&quot;https://redfish.dmtf.org/schemas/v1/Manager.v1_19_1.json&quot; target=&quot;_blank&quot;&gt;Baseboard Management Controller&lt;/a&gt; (BMC) (Figure 2), the only required properties are: &lt;code&gt;@odata.id&lt;/code&gt;, &lt;code&gt;@odata.type&lt;/code&gt;, &lt;code&gt;Id&lt;/code&gt; and &lt;code&gt;Name&lt;/code&gt;. Since the &lt;code&gt;FirmwareVersion&lt;/code&gt; property is not listed in this normative document, its implementation is not required in Redfish services.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/fig2-managerrequiredproperties.png&quot; alt=&quot;Manager required properties&quot; title=&quot;Manager required properties&quot;&gt;&lt;/p&gt;
&lt;figcaption&gt;Figure 2: Manager required properties&lt;/figcaption&gt;
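&lt;p&gt;If you want to check this list yourself, you can query the published schema file directly. The following is just a quick sketch; the exact JSON path inside the schema may differ between schema versions:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Fetch the Manager schema and print its list of required properties
$ curl -s https://redfish.dmtf.org/schemas/v1/Manager.v1_19_1.json | jq &apos;.definitions.Manager.required&apos;
&lt;/code&gt;&lt;/pre&gt;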
&lt;p&gt;Requiring the implementation of only a small number of properties &lt;em&gt;provide[s] significant flexibility, and allow conforming implementations on a wide variety of products&lt;/em&gt; as mentioned in the abstract of the &lt;a href=&quot;https://www.dmtf.org/sites/default/files/standards/documents/DSP0272_1.8.0.pdf&quot; target=&quot;_blank&quot;&gt;standard document&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Who cares?&lt;/h3&gt;
&lt;p&gt;Flexibility is great, but too much flexibility can become an issue for standard organizations, software projects, or end users willing to move away from the Intelligent Platform Management Interface (&lt;a href=&quot;https://www.intel.com/content/www/us/en/products/docs/servers/ipmi/ipmi-home.html&quot; target=&quot;_blank&quot;&gt;IPMI&lt;/a&gt;) to Redfish for hardware management.&lt;/p&gt;
&lt;p&gt;Standards organizations and software projects, like the &lt;a href=&quot;https://www.opencompute.org/about&quot; target=&quot;_blank&quot;&gt;Open Compute Project®&lt;/a&gt; (OCP) and the &lt;a href=&quot;https://wiki.openstack.org/wiki/Main_Page&quot; target=&quot;_blank&quot;&gt;OpenStack&lt;/a&gt; projects, can only adopt Redfish as their preferred management protocol if they can easily define some sort of &quot;baseline&quot; specifying which properties must, should, or could be implemented in their managed nodes.&lt;/p&gt;
&lt;p&gt;Specific to OCP, the charter of the &lt;a href=&quot;https://www.opencompute.org/projects/hardware-management&quot; target=&quot;_blank&quot;&gt;Hardware Management Project&lt;/a&gt; mentions: &quot;&lt;em&gt;The hardware management specification incorporates [...] tools and best practices [...] for remote machine management&lt;/em&gt;&quot;. This means that any server compliant with this specification must implement the network protocol(s) mentioned in the baseline.&lt;/p&gt;
&lt;p&gt;Systems supported by the OpenStack &lt;a href=&quot;https://wiki.openstack.org/wiki/Ironic&quot; target=&quot;_blank&quot;&gt;Ironic&lt;/a&gt; (bare metal machine provisioning) and &lt;a href=&quot;https://wiki.openstack.org/wiki/Valence&quot; target=&quot;_blank&quot;&gt;Valence&lt;/a&gt; (system lifecycle management) projects must likewise implement a baseline of features, including at least the ability to be powered on and off remotely.&lt;/p&gt;
&lt;p&gt;Redfish clients designed for managing multi-vendor systems also need a list of mandatory and recommended features. For example, if a system cannot return its BMC&apos;s firmware version, the client will have difficulties performing firmware updates.&lt;/p&gt;
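&lt;p&gt;As a concrete illustration of that last point, a client can quickly check whether a given BMC exposes the &lt;code&gt;FirmwareVersion&lt;/code&gt; property. The following sketch uses a hypothetical BMC hostname and credentials, and assumes the manager instance is reachable at &lt;code&gt;/redfish/v1/Managers/1&lt;/code&gt; (the manager identifier varies between vendors):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Query a manager resource and extract FirmwareVersion; jq prints null if the property is not implemented
$ curl -sk -u username:password https://bmc-hostname/redfish/v1/Managers/1 | jq &apos;.FirmwareVersion&apos;
&lt;/code&gt;&lt;/pre&gt;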
&lt;h2&gt;Redfish interoperability profiles&lt;/h2&gt;
&lt;p&gt;To address the baseline issue mentioned above, the DMTF created the
&lt;a href=&quot;https://www.dmtf.org/dsp/DSP0272&quot; target=&quot;_blank&quot;&gt;DSP0272&lt;/a&gt; specification document that defines interoperability profiles. A Redfish interoperability profile (or profile) is a JSON document enumerating resources and properties that must, should, or could be implemented in a Redfish service.&lt;/p&gt;
&lt;h3&gt;Didactic minimal profile example&lt;/h3&gt;
&lt;p&gt;The following example is a minimal profile created for didactic purposes. It is not intended for use in a production context.&lt;/p&gt;
&lt;p&gt;A summary of this example could be the following: &quot;To be compliant with this profile, Redfish services must model at least one manager (BMC) and must be able to return the manager&apos;s &lt;code&gt;FirmwareVersion&lt;/code&gt; value&quot;.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;NOTE: The content of interoperability profiles is described in a versioned schema file. All the profile versioned schema files are grouped in compressed bundles (&lt;code&gt;.zip&lt;/code&gt;) and can be &lt;a href=&quot;https://www.dmtf.org/dsp/DSP8013&quot; target=&quot;_blank&quot;&gt;downloaded from the DMTF&lt;/a&gt;. The following example is compliant with version 1.8.0 as specified in the &lt;code&gt;SchemaDefinition&lt;/code&gt; key of the following example (first line).&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
    &quot;SchemaDefinition&quot;: &quot;RedfishInteroperabilityProfile.v1_8_0&quot;,
    &quot;ProfileName&quot;: &quot;FDZ&apos;s minimal profile&quot;,
    &quot;ProfileVersion&quot;: &quot;1.0.0&quot;,
    &quot;Purpose&quot;: &quot;This is a minimal educational Redfish interoperability profile.&quot;,
    &quot;OwningEntity&quot;: &quot;Koulapic&quot;,
    &quot;ContributedBy&quot;: &quot;FDZ&quot;,
    &quot;License&quot;: &quot;CC BY-SA&quot;,
    &quot;ContactInfo&quot;: &quot;fdz@koulapic.com&quot;,
    &quot;Protocol&quot;: {
        &quot;MinVersion&quot;: &quot;1.6&quot;
    },
    &quot;Resources&quot;: {
        &quot;ManagerCollection&quot;: {
            &quot;Purpose&quot;: &quot;Every implementation must have at least one BMC.&quot;,
            &quot;PropertyRequirements&quot;: {
                &quot;Members&quot;: {
                    &quot;MinCount&quot;: 1
                }
            }
        },
        &quot;Manager&quot;: {
            &quot;Purpose&quot;: &quot;Make sure Manager is conformant to schema 1.5.1 or later and implements the `FirmwareVersion` property&quot;,
            &quot;MinVersion&quot;: &quot;1.5.1&quot;,
            &quot;PropertyRequirements&quot;: {
                &quot;FirmwareVersion&quot;: {
                    &quot;ReadRequirement&quot;: &quot;Mandatory&quot;
                }
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Profiles contain administrative, &quot;self-explanatory&quot; keys like &lt;code&gt;ProfileName&lt;/code&gt;, &lt;code&gt;ContactInfo&lt;/code&gt;, or &lt;code&gt;Purpose&lt;/code&gt;. The normative definition of those properties is in the schema mentioned in the above note. For this example, I extracted in the next JSON block the description of the &lt;code&gt;Protocol/MinVersion&lt;/code&gt; property, which mentions that it is to be compared to the &lt;code&gt;ServiceRoot/RedfishVersion&lt;/code&gt; of the evaluated Redfish service.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
        &quot;Protocol&quot;: {
            &quot;type&quot;: &quot;object&quot;,
            &quot;description&quot;: &quot;Requirements related to the Redfish protocol outside of the JSON resources.&quot;,
            &quot;additionalProperties&quot;: false,
            &quot;properties&quot;: {
                &quot;MinVersion&quot;: {
                    &quot;$ref&quot;: &quot;#/definitions/MinVersion&quot;,
                    &quot;description&quot;: &quot;Indicates the minimum version of the Redfish Specification protocol support required by this profile. This version shall be reported by the Redfish service in the `ServiceRoot` resource property `RedfishVersion`. The version shall be represented using a `&amp;#x3C;major&gt;.&amp;#x3C;minor&gt;.&amp;#x3C;errata&gt;` format, including an optional errata version.  If this property is absent, the minimum value shall be `1.0.0`.&quot;
                }
            }
        }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;Resources{}&lt;/code&gt; object of the above profile contains two members: &lt;code&gt;ManagerCollection{}&lt;/code&gt; and &lt;code&gt;Manager{}&lt;/code&gt;. The first one requires at least one BMC modeled in the evaluated Redfish service (&lt;code&gt;MinCount = 1&lt;/code&gt;). The second requires the implementation of the &lt;code&gt;Manager/FirmwareVersion&lt;/code&gt; property (&lt;code&gt;ReadRequirement = Mandatory&lt;/code&gt;).&lt;/p&gt;
&lt;p&gt;The other possible values for the &lt;code&gt;ReadRequirement&lt;/code&gt; property are listed and described in the profile schema. I pasted them in the following JSON block. It is interesting to note that, in addition to obvious values like &lt;code&gt;Mandatory&lt;/code&gt; or &lt;code&gt;Recommended&lt;/code&gt;, others (e.g. &lt;code&gt;Conditional&lt;/code&gt;, &lt;code&gt;IfImplemented&lt;/code&gt;) need more attention.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
        &quot;ReadRequirement&quot;: {
            &quot;type&quot;: &quot;string&quot;,
            &quot;description&quot;: &quot;The read (HTTP GET) requirements for this property.  The default value, or if `ReadRequirement` is not present, is `Mandatory`.  For object properties, requirements of the embedded properties will apply only if the object is present.&quot;,
            &quot;enum&quot;: [
                &quot;Mandatory&quot;, &quot;Supported&quot;, &quot;Recommended&quot;, &quot;IfImplemented&quot;, &quot;IfPopulated&quot;, &quot;Conditional&quot;, &quot;None&quot;
            ],
            &quot;enumDescriptions&quot;: {
                &quot;Mandatory&quot;: &quot;This property is required in all instances of this resource.  For array properties, the property is required in all non-null array items.  If `Values` is defined, at least one instance of each enumeration value is required among instances of this property.&quot;,
                &quot;Supported&quot;: &quot;This property is required to be supported by the service, but may not appear in all instances of this resource.  The requirement is met if the property appears in at least one instance of this resource.&quot;,
                &quot;Recommended&quot;: &quot;It is recommended, but not required, that this property be supported.&quot;,
                &quot;IfImplemented&quot;: &quot;This property is required if the underlying functionality is implemented.  For object properties, requirements on embedded properties within the object will only apply if the object is present.&quot;,
                &quot;IfPopulated&quot;: &quot;For property-level requirements, this property is required if the `State` property within the `Status` object for the object or resource does not contain `Absent`.  This value is useful for properties within absent resources where empty slots, sockets, or bays are rendered with minimal properties until they are populated by a device.  For resource-level requirements, this value indicates that the resource is required, but may not be present (populated) in the service at all times.&quot;,
                &quot;Conditional&quot;: &quot;This property is only required if `ConditionalRequirements` items apply to this instance of the resource.&quot;,
                &quot;None&quot;: &quot;This property is not required by this profile.  It is listed here for clarity.&quot;
            }
        }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Stacking profiles&lt;/h3&gt;
&lt;p&gt;An interesting feature of Redfish interoperability profiles is that you can extend existing profiles with your own definitions at will. To create a profile that extends the &lt;a href=&quot;https://github.com/openstack/ironic/tree/master/redfish-interop-profiles&quot; target=&quot;_blank&quot;&gt;Ironic profile&lt;/a&gt;, just use the &lt;code&gt;RequiredProfiles{}&lt;/code&gt; object as shown in the next example.&lt;/p&gt;
&lt;p&gt;This example specifies the repository URL and minimum version of both the Ironic profile and your own required profile.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;    &quot;RequiredProfiles&quot;: {
        &quot;Ironic&quot;: {
            &quot;Repository&quot;: &quot;https://github.com/openstack/ironic/tree/master/redfish-interop-profiles&quot;,
            &quot;MinVersion&quot;: &quot;1.0.0&quot;
        },
        &quot;MyRequiredProfile&quot;: {
                &quot;Repository&quot;: &quot;https://koulapic.com/MyInteropProfiles&quot;,
                &quot;MinVersion&quot;: &quot;1.0.0&quot;
        }
    }
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now that you understand the basic architecture of Redfish interoperability profiles, I encourage you to study the &lt;a href=&quot;https://github.com/opencomputeproject/HWMgmt-OCP-Profiles&quot; target=&quot;_blank&quot;&gt;OCP&lt;/a&gt; and OpenStack public &lt;a href=&quot;https://github.com/openstack/ironic/tree/master/redfish-interop-profiles&quot; target=&quot;_blank&quot;&gt;profiles&lt;/a&gt;. Don&apos;t forget to refer to the &lt;a href=&quot;https://www.dmtf.org/dsp/DSP8013&quot; target=&quot;_blank&quot;&gt;profile schemas&lt;/a&gt; in case you have problems understanding some properties, directives, or objects.&lt;/p&gt;
&lt;h2&gt;How to validate profiles?&lt;/h2&gt;
&lt;p&gt;Profile conformance validation can easily be performed with the &lt;a href=&quot;https://github.com/DMTF/Redfish-Interop-Validator&quot; target=&quot;_blank&quot;&gt;interoperability validator&lt;/a&gt; provided by the DMTF. It is a Python script that takes a configuration file and a profile as input. The main output is an HTML report.&lt;/p&gt;
&lt;p&gt;The following code block clones the validator GitHub repository, prompts you to create a configuration file and a profile, and then launches the validator with those two files as input.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;git clone https://github.com/DMTF/Redfish-Interop-Validator.git
cd Redfish-Interop-Validator
# Create configuration file and profile
python RedfishInteropValidator.py -c config/ilo-scott380g11-1.ini FdzMiniProfile.v1_0_0.json
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The profile used in the above code example is the minimal profile mentioned &lt;a href=&quot;#didactic-minimal-profile-example&quot;&gt;earlier&lt;/a&gt;. This profile requires at least one manager in the manager &lt;a href=&quot;https://servermanagementportal.ext.hpe.com/docs/concepts/datatypesandcollections/#resource-collections&quot; target=&quot;_blank&quot;&gt;collection&lt;/a&gt; and a &lt;code&gt;FirmwareVersion&lt;/code&gt; property in the &lt;code&gt;Manager&lt;/code&gt; resource. To be sure the validator verifies those requirements, the configuration file (next code block) specifies &lt;code&gt;payload = tree /redfish/v1/Managers&lt;/code&gt;. This line tells the validator to verify the profile directives at &lt;code&gt;/redfish/v1/Managers&lt;/code&gt; and then to recursively follow and process every link it finds. The exact crawling algorithm is explained in the &lt;a href=&quot;https://github.com/DMTF/Redfish-Interop-Validator/blob/main/README.md&quot; target=&quot;_blank&quot;&gt;validator GitHub README.md&lt;/a&gt; file.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Profile Validator configuration file.
#
# Parameter description at:
# https://github.com/DMTF/Redfish-Interop-Validator/blob/main/README.md 

[Tool]
Version = 2
Copyright = Redfish DMTF (c) 2021
verbose =

[Host]
ip = https://ilo-scott380g11-1
username = username
password = password
description = iLO 6
forceauth = False
authtype = Session
token =

[Validator]
# The following `tree` keyword tells the Validator
# to crawl the Redfish tree starting at the following
# starting point.
#
# An alternative is `single` to only validate the
# starting point.
payload =  tree /redfish/v1/Managers
logdir = ./logs
oemcheck = False
online_profiles = False
required_profiles_dir = 
debugging = False
collectionlimit =
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The validator outputs two files in the directory specified in the configuration file (&lt;code&gt;logdir&lt;/code&gt;): a report in HTML format and a text file containing the different steps of the validation process.&lt;/p&gt;
&lt;p&gt;Figure 3 below is a portion of the validator report showing three successful verifications. The first one (&lt;code&gt;Service level requirements&lt;/code&gt;) requires the existence of the &lt;code&gt;ManagerCollection&lt;/code&gt; and &lt;code&gt;Manager&lt;/code&gt; endpoints. It was automatically added by the validator as a mandatory condition before proceeding with the verification of the other requirements mentioned in the profile.&lt;/p&gt;
&lt;p&gt;In the figure below, you can also view the results of the two verifications required in the profile.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/fig3-conformancetestreport.png&quot; alt=&quot;Figure 3: Redfish conformance test report&quot; title=&quot;Redfish conformance test report&quot;&gt;&lt;/p&gt;
&lt;figcaption&gt;Figure 3: Redfish conformance test report&lt;/figcaption&gt;
&lt;h2&gt;Leveraging Redfish interoperability profiles&lt;/h2&gt;
&lt;p&gt;Although it has been around for a long time (since January 2018) and despite an introduction &lt;a href=&quot;https://www.youtube.com/watch?v=iVAYSEPwmV8&quot; target=&quot;_blank&quot;&gt;video&lt;/a&gt;, the Redfish interoperability profile specification is not very well known. It could be better leveraged by Redfish client programmers supervising heterogeneous data centers. Because this standard, along with the interoperability validator, highlights the differences between Redfish implementations, it can help them produce more efficient code quickly. If you already know a property is absent from a Redfish service, you can anticipate and adapt your code early in the development process.&lt;/p&gt;
&lt;p&gt;The OCP hardware management team is very active with regard to Redfish interoperability profiles. Several profiles are published in their &lt;a href=&quot;https://github.com/opencomputeproject/HWMgmt-OCP-Profiles&quot; target=&quot;_blank&quot;&gt;GitHub repository&lt;/a&gt; that you can use &quot;as-is&quot; or extend as mentioned in the &lt;a href=&quot;#stacking-profiles&quot;&gt;Stacking profiles&lt;/a&gt; paragraph. You can also help OCP further refine them by submitting ideas and feedback.&lt;/p&gt;
&lt;p&gt;With this blog post, I hope you discovered enough information about this Redfish standard to eventually use it or present it to friends or colleagues.&lt;/p&gt;
&lt;p&gt;And don&apos;t forget to check out some of my other &lt;a href=&quot;https://developer.hpe.com/search/?term=donze&quot; target=&quot;_blank&quot;&gt;blog posts&lt;/a&gt; on the HPE Developer portal to learn more about Redfish tips and tricks.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[RAG for an LLM-powered hologram]]></title><description><![CDATA[External blog post]]></description><link>https://developer.hpe.com/rag-for-an-llm-powered-hologram/</link><guid isPermaLink="false">https://developer.hpe.com/rag-for-an-llm-powered-hologram/</guid><pubDate>Tue, 02 Jul 2024 11:56:03 GMT</pubDate><content:encoded>&lt;p&gt;External blog post&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Reflections on ChapelCon '24: A Community Growing Together]]></title><description><![CDATA[External blog]]></description><link>https://developer.hpe.com/reflections-on-chapelcon-24-a-community-growing-together/</link><guid isPermaLink="false">https://developer.hpe.com/reflections-on-chapelcon-24-a-community-growing-together/</guid><pubDate>Mon, 01 Jul 2024 22:18:55 GMT</pubDate><content:encoded>&lt;p&gt;External blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Announcing Chapel 2.1!]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/announcing-chapel-2-1/</link><guid isPermaLink="false">https://developer.hpe.com/announcing-chapel-2-1/</guid><pubDate>Fri, 28 Jun 2024 06:27:51 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Announcements at HPE Discover 2024 accelerate AI adoption and offer optionality]]></title><description><![CDATA[Ever since OpenAI released ChatGPT in November 2022, there’s been no question that we have entered into the Age of AI. The possibilities…]]></description><link>https://developer.hpe.com/announcements-at-hpe-discover-2024-accelerate-ai-adoption-and-offer-optionality/</link><guid isPermaLink="false">https://developer.hpe.com/announcements-at-hpe-discover-2024-accelerate-ai-adoption-and-offer-optionality/</guid><pubDate>Tue, 25 Jun 2024 14:22:51 GMT</pubDate><content:encoded>&lt;p&gt;Ever since OpenAI released ChatGPT in November 2022, there’s been no question that we have entered into the Age of AI. The possibilities Generative AI affords to accelerating innovation is too compelling to ignore. But ever since then, enterprises have struggled with how to implement AI solutions. Their expense and complexity have proven to be significant barriers to entry.&lt;/p&gt;
&lt;p&gt;Hewlett Packard Enterprise (HPE) and NVIDIA took a bold step forward at HPE Discover 2024, making several important announcements that aim to enable enterprise AI adoption. With purpose-built AI solutions delivered through &lt;a href=&quot;https://www.hpe.com/us/en/greenlake.html&quot;&gt;HPE GreenLake cloud&lt;/a&gt; that combine the technical prowess of these two companies, they will introduce to the market a simplicity of experience that just may jump start mass corporate AI adoption.&lt;/p&gt;
&lt;p&gt;HPE GreenLake delivers a portfolio of cloud services and as-a-service solutions across locations, from the edge to the data center and to the cloud, with simple, agile, and secure hybrid multicloud management. It serves as the base for these solutions. Additional announcements related to the platform focused on offering customers new options for virtualization and providing cloud-type operations for highly regulated environments.&lt;/p&gt;
&lt;h2&gt;Leading the enterprise charge in AI&lt;/h2&gt;
&lt;h3&gt;Solving for humanity with humanity&lt;/h3&gt;
&lt;p&gt;Focusing an important corporate event on artificial intelligence can be risky. Fear of the unknown is bound to be in the hearts and minds of attendees. But, as so many pointed out during the event, we have already entered the Age of AI. Isn’t it better that we now take the lead and focus on using AI for the betterment of humanity, setting appropriate guardrails in place, and helping drive its adoption in a positive way?&lt;/p&gt;
&lt;p&gt;This was the bold stance taken by the two leaders as HPE CEO Antonio Neri and NVIDIA founder and CEO Jensen Huang took to the stage in the Sphere at Las Vegas to open Discover 2024. Delivering the first &lt;a href=&quot;https://www.youtube.com/watch?v=p28lHtjWn5k&quot;&gt;keynote&lt;/a&gt; ever in this amazing venue, Antonio spoke of the hope and limitless possibilities AI could bring for renewable energy, medical breakthroughs, and revolutionized agriculture.&lt;/p&gt;
&lt;p&gt;Accompanied by video emphasizing these points, Antonio broke the barrier and started the hard conversation. Imbuing hope, he did not shy away from mentioning the risks involved and was quick to point out how HPE has been preparing for AI for a very long time, developing the technology and guardrails to help make AI a positive force for our planet.&lt;/p&gt;
&lt;h3&gt;Accelerating enterprise GenAI adoption&lt;/h3&gt;
&lt;p&gt;During the keynote, Antonio and Jensen jointly announced the launch of NVIDIA AI Computing by HPE, a comprehensive portfolio of co-developed AI solutions designed to accelerate the adoption of generative AI in enterprises. The portfolio includes HPE Private Cloud AI, a turnkey solution that includes a complete infrastructure, software portfolio, and AI model library managed via &lt;a href=&quot;https://www.hpe.com/us/en/greenlake.html&quot;&gt;HPE GreenLake&lt;/a&gt;. Delivering a “private AI in a box”, it offers enterprises a solution to accelerate their AI project deployment while ensuring their data remains under their control.&lt;/p&gt;
&lt;p&gt;The foundation of the AI and data software stack integrated into this solution starts with the &lt;a href=&quot;https://www.nvidia.com/en-us/data-center/products/ai-enterprise/&quot;&gt;NVIDIA AI Enterprise software&lt;/a&gt; platform, which includes &lt;a href=&quot;https://www.nvidia.com/en-us/ai/#referrer=ai-subdomain?ncid=pa-srch-goog-772333&amp;#x26;_bt=697697685508&amp;#x26;_bk=nvidia%20nim&amp;#x26;_bm=e&amp;#x26;_bn=g&amp;#x26;_bg=165151891361&amp;#x26;gad_source=1&amp;#x26;gclid=EAIaIQobChMIlJ2kiNS4hgMVyi2tBh0XRw5KEAAYASAAEgI3ivD_BwE&quot;&gt;NVIDIA NIM™ inference microservices&lt;/a&gt;. HPE AI Essentials software adds a ready-to-run set of curated AI and data foundation tools with a unified control plane. The solution includes a fully integrated AI infrastructure stack with &lt;a href=&quot;https://www.nvidia.com/en-us/networking/spectrumx/&quot;&gt;NVIDIA Spectrum-X™ Ethernet networking&lt;/a&gt;, HPE GreenLake for File Storage, and HPE ProLiant servers with support for &lt;a href=&quot;https://www.nvidia.com/en-us/data-center/l40s/&quot;&gt;NVIDIA L40S&lt;/a&gt;, &lt;a href=&quot;https://www.nvidia.com/en-us/data-center/h100/&quot;&gt;NVIDIA H100 NVL Tensor Core GPUs&lt;/a&gt; and the &lt;a href=&quot;https://www.nvidia.com/en-us/data-center/grace-hopper-superchip/&quot;&gt;NVIDIA GH200 NVL2 platform&lt;/a&gt;. In addition, &lt;a href=&quot;https://www.hpe.com/us/en/opsramp.html?jumpid=ps_tvmvd4zkn_aid-521080156&amp;#x26;ef_id=CjwKCAjw1K-zBhBIEiwAWeCOF0lwzLIS76IvYTcWyzQOw935SmbZgRafDYLekOCalqkzT29SVp0WOxoC_FIQAvD_BwE:G:s&amp;#x26;s_kwcid=AL!13472!3!702006004480!e!!g!!hpe%20opsramp!21375753832!169113463891&amp;#x26;gad_source=1&amp;#x26;gclid=CjwKCAjw1K-zBhBIEiwAWeCOF0lwzLIS76IvYTcWyzQOw935SmbZgRafDYLekOCalqkzT29SVp0WOxoC_FIQAvD_BwE&quot;&gt;OpsRamp&apos;s&lt;/a&gt; IT operations are integrated with HPE GreenLake cloud, HPE’s hybrid cloud platform designed to help organizations unlock the power of data and AI, with OpsRamp delivering observability and AIOps to all HPE products and services. &lt;a href=&quot;https://www.hpe.com/us/en/newsroom/press-release/2024/06/hewlett-packard-enterprise-and-nvidia-announce-nvidia-ai-computing-by-hpe-to-accelerate-generative-ai-industrial-revolution.html&quot;&gt;Get additional details about the solution configurations in the press release&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The solution takes into account the gravity of data and its impact on AI and how private clouds can best serve enterprise AI solutions. Read more about the case for private clouds in AI deployment in this blog post, &lt;a href=&quot;https://community.hpe.com/t5/ai-unlocked/hpe-private-cloud-ai-a-strategic-necessity-for-enterprises/ba-p/7217830&quot;&gt;HPE Private Cloud AI: A strategic necessity for enterprises&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;For more information on HPE Private Cloud AI and NVIDIA AI computing by HPE, please visit the HPE.com &lt;a href=&quot;https://www.hpe.com/us/en/private-cloud-ai.html&quot;&gt;HPE Private Cloud AI home page&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Offering more choice&lt;/h2&gt;
&lt;h3&gt;New KVM offered by HPE&lt;/h3&gt;
&lt;p&gt;As part of HPE’s desire to offer its customers “optionality”, HPE also announced its decision to enter the hypervisor market with its own open-source virtualization capability. Executive Vice President and CTO Fidelma Russo explained that the decision to release one was in response to partner and customer demand. Providing the new HPE KVM hypervisor virtualization capability provides customers and partners with a choice in the virtualization market. The market response has been quite positive, with numerous industry outlets picking up on the news, such as &lt;a href=&quot;https://www.crn.com/news/virtualization/2024/cto-fidelma-russo-on-customer-choice-trust-and-why-hpe-now-has-its-own-virtualization-capability&quot;&gt;this article&lt;/a&gt; found in CRN.&lt;/p&gt;
&lt;h3&gt;Addressing regulated environments&lt;/h3&gt;
&lt;p&gt;HPE also announced a significant effort being made to address the needs of highly secure and regulated environments with its new dedicated platform. For customers who have data residency and sovereignty requirements and are prohibited from connecting to the public internet or external networks, HPE GreenLake will be offering a fully disconnected/isolated private cloud management platform, allowing customers to consume HPE services in a cloud operating model. You can find more details about this offering in this blog post, &lt;a href=&quot;https://community.hpe.com/t5/the-cloud-experience-everywhere/empowering-regulated-environments-with-hpe-greenlake-cloud/ba-p/7217339&quot;&gt;Empowering regulated environments with HPE GreenLake cloud&lt;/a&gt;. More can be found on this solution, as well as other enhancements to HPE GreenLake cloud, in this article, &lt;a href=&quot;https://community.hpe.com/t5/the-cloud-experience-everywhere/what-s-new-expanded-platform-capabilities-of-the-hpe-greenlake/ba-p/7217343&quot;&gt;What&apos;s new: Expanded platform capabilities of the HPE GreenLake cloud&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;HPE Discover 2024 was truly an event unlike any other. Whether it was being in one of the 17,000 seats at the Sphere to take in Antonio’s keynote, walking the floor through HPE’s Innovation neighborhood, or interacting with an &lt;a href=&quot;https://x.com/hpe/status/1803965843707170980?s=46&quot;&gt;AI avatar of Antonio Neri&lt;/a&gt; himself, the experience was indeed memorable. To catch any of the replays of keynote sessions or interviews with the Cube, be sure to check out the &lt;a href=&quot;https://www.youtube.com/watch?v=-FEzi51gs9g&quot;&gt;YouTube video library&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE Sustainability Insight Center API at a glance]]></title><description><![CDATA[HPE GreenLake provides a new service called HPE Sustainability Insight Center that can assist you in obtaining detailed information about…]]></description><link>https://developer.hpe.com/hpe-sustainability-insight-center-api-at-a-glance/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-sustainability-insight-center-api-at-a-glance/</guid><pubDate>Mon, 24 Jun 2024 12:44:18 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;p&gt;HPE GreenLake provides a new service called HPE Sustainability Insight Center that can assist you in obtaining detailed information about carbon emissions, energy consumption and the cost of the infrastructure that is managed by HPE GreenLake. In this blog post, I will explain how to extract data from this service in a programmatic way using cURL and the HPE Sustainability Insight Center API.&lt;/p&gt;
&lt;h2&gt;What is HPE Sustainability Insight Center?&lt;/h2&gt;
&lt;p&gt;HPE Sustainability Insight Center is a service that runs on HPE GreenLake. You can add it to your workspace from the HPE GreenLake catalog under the Management &amp;#x26; Governance category.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/sic-blog-1.jpg&quot; alt=&quot;HPE GreenLake catalogue&quot; title=&quot;HPE GreenLake catalogue&quot;&gt;&lt;/p&gt;
&lt;p&gt;Once deployed, the service provides a dashboard, which can be used to monitor carbon emissions, energy consumption, and energy cost. You can get more details about HPE Sustainability Insight Center from the &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00120892en_us&amp;#x26;page=GUID-64E2035A-138E-44E8-8A04-7968A272A97E.html&quot;&gt;User Guide&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/sic-blog-2.jpg&quot; alt=&quot;HPE Sustainability Insight Center Console&quot; title=&quot;HPE Sustainability Insight Center Console&quot;&gt;&lt;/p&gt;
&lt;p&gt;In addition to the dashboard, HPE Sustainability Insight Center provides an API to enable programmatic access to this same data.&lt;/p&gt;
&lt;h2&gt;What can I do with the HPE Sustainability Insight Center API?&lt;/h2&gt;
&lt;p&gt;Important use cases covered by the HPE Sustainability Insight Center API include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Achieving sustainability goals&lt;/strong&gt; — Use data retrieved from the HPE Sustainability Insight Center API to measure your organization&apos;s power consumption and carbon footprint. With this data, your organization can make data-infused, informed decisions to reduce its climate impact and ensure its IT assets operate in a way that meets regulatory and business environmental sustainability goals.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Monitoring IT costs&lt;/strong&gt; — Use data available from the HPE Sustainability Insight Center API to rationalize the power consumption of IT assets operations for cost efficiency. This will free up budget for innovative and growth-orientated investments.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Integrating into reporting workflows&lt;/strong&gt; — Use data retrieved from the HPE Sustainability Insight Center to incorporate into existing analytics, reporting, dashboards, and forecasting workflows to give your organization a robust understanding of its IT operations.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Where is the HPE Sustainability Insight Center API documented?&lt;/h2&gt;
&lt;p&gt;The &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/sustainability/public/&quot;&gt;API specifications&lt;/a&gt; are found on the &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/&quot;&gt;HPE GreenLake Developer Portal&lt;/a&gt; along with other HPE GreenLake APIs. For HPE Sustainability Insight Center, there are 3 API calls available:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;usageByEntity&lt;/strong&gt;: Retrieves an aggregated energy usage list grouped by individual entities over a defined time frame.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;usageTotals&lt;/strong&gt;: Returns the total aggregated power cost, power consumption, and carbon emissions over a defined time frame and supports filtering by entities.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;usageSeries&lt;/strong&gt;: Retrieves aggregated energy usage statistics grouped by time bucket over a defined time frame and supports filtering by entities.&lt;/li&gt;
&lt;/ul&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: You can download the &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/sustainability/public/openapi/sustainability-insight-ctr-latest/overview/&quot;&gt;OpenAPI specs&lt;/a&gt; from the &lt;a href=&quot;https://developer.greenlake.hpe.com/&quot;&gt;HPE GreenLake developer portal&lt;/a&gt;, if you want to use the API from a tool such as Postman.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Be careful with tokens!&lt;/h2&gt;
&lt;p&gt;A little word of advice about tokens. Because HPE Sustainability Insight Center is a service, you will need to create dedicated API client credentials for it in your workspace (under Manage Workspace/API). You cannot use API client credentials created for the platform, as doing so will result in a 403 error code. Also, you’ll need to be careful, as API client credentials are region-specific. When creating API client credentials, say for Europe (EU Central), make sure to use them to call the API endpoint for that region, as shown below (Connectivity Endpoint).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/sic-blog-3.jpg&quot; alt=&quot;HPE GreenLake platform API client credentials&quot; title=&quot;HPE GreenLake platform API client credentials&quot;&gt;&lt;/p&gt;
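&lt;p&gt;For completeness, here is a sketch of how a bearer token can be obtained from those API client credentials with cURL. It assumes the HPE GreenLake OAuth2 client credentials flow and its single sign-on token endpoint; refer to the HPE GreenLake Developer Portal documentation, or the scripting fundamentals blog post linked at the end of this article, for the authoritative details. Replace the placeholders with your own client ID and secret:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Exchange the API client credentials for a bearer token (the token expires after a short time)
$ curl -s -X POST https://sso.common.cloud.hpe.com/as/token.oauth2 \
  -H &quot;Content-Type: application/x-www-form-urlencoded&quot; \
  -d &quot;grant_type=client_credentials&amp;#x26;client_id=&amp;#x3C;my-client-id&gt;&amp;#x26;client_secret=&amp;#x3C;my-client-secret&gt;&quot; | jq -r &apos;.access_token&apos;
&lt;/code&gt;&lt;/pre&gt;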
&lt;h2&gt;Let&apos;s give it a try!&lt;/h2&gt;
&lt;p&gt;Let&apos;s start with the first of the 3 calls, &lt;strong&gt;usageByEntity&lt;/strong&gt;. From the &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/sustainability/public/openapi/sustainability-insight-ctr-latest/operation/getUsageByEntity&quot;&gt;documentation&lt;/a&gt;, note that only 2 parameters are mandatory, &lt;em&gt;start-time&lt;/em&gt; and &lt;em&gt;end-time&lt;/em&gt;.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: the format of the date used by the API, which is ISO 8601 of the form: YYYY-MM-DDTHH:MM:SS.ss-/+FF:ff. For example: &apos;2023-07-24T04:21:22.00Z&apos; for 4:21AM on the 24th of July, 2023 in UTC (Z=Zero Meridian)&lt;/p&gt;
&lt;/blockquote&gt;
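&lt;p&gt;If you build these timestamps in a script rather than typing them by hand, the GNU &lt;em&gt;date&lt;/em&gt; command can generate them in the expected format. This is only a small sketch; the options shown may differ on non-GNU systems such as macOS:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Current time and the time 10 days ago, both as UTC ISO 8601 timestamps
$ END_TIME=$(date -u +&quot;%Y-%m-%dT%H:%M:%SZ&quot;)
$ START_TIME=$(date -u -d &quot;10 days ago&quot; +&quot;%Y-%m-%dT%H:%M:%SZ&quot;)
$ echo &quot;$START_TIME $END_TIME&quot;
&lt;/code&gt;&lt;/pre&gt;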
&lt;p&gt;So, you could use the following command in Bash:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ curl -s -X GET &apos;https://eu-central.api.greenlake.hpe.com/sustainability-insight-ctr/v1beta1/usage-by-entity?end-time=2024-06-11T08%3A00%3A00Z&amp;#x26;start-time=2024-06-01T00%3A00%3A00Z&apos; -H &apos;Authorization: Bearer &amp;#x3C;my-token&gt;&apos; -H &quot;Accept:application/json&quot; | jq
{
  &quot;items&quot;: [
    {
      &quot;id&quot;: &quot;COMPUTE867962-B21CZJ93402YV&quot;,
      &quot;type&quot;: &quot;sustainability-insight-ctr/entities&quot;,
      &quot;entityId&quot;: &quot;COMPUTE867962-B21CZJ93402YV&quot;,
      &quot;entityMake&quot;: &quot;HPE&quot;,
      &quot;entityModel&quot;: &quot;ProLiant DL360 Gen10&quot;,
      &quot;entityType&quot;: &quot;COMPUTE&quot;,
      &quot;entitySerialNum&quot;: &quot;CZJ93402YV&quot;,
      &quot;entityProductId&quot;: &quot;867962-B21&quot;,
      &quot;entityManufactureTimestamp&quot;: &quot;2024-05-25T06:43:32.934Z&quot;,
      &quot;locationName&quot;: &quot;Sophia Antipolis&quot;,
      &quot;locationId&quot;: &quot;d1386776-e49d-4333-9ccc-c4bfc2029f40&quot;,
      &quot;locationCity&quot;: &quot;Mougins&quot;,
      &quot;locationState&quot;: &quot;PACA&quot;,
      &quot;locationCountry&quot;: &quot;France&quot;,
      &quot;name&quot;: &quot;centos82rf2&quot;,
      &quot;costUsd&quot;: 3.1437287,
      &quot;co2eMetricTon&quot;: 0.0015487487,
      &quot;kwh&quot;: 23.115652
    },
    {
      &quot;id&quot;: &quot;IAPVariousVarious&quot;,
      &quot;type&quot;: &quot;sustainability-insight-ctr/entities&quot;,
      &quot;entityId&quot;: &quot;IAPVariousVarious&quot;,
      &quot;entityMake&quot;: &quot;HPE&quot;,
      &quot;entityModel&quot;: &quot;Aggregated HPE Aruba access points&quot;,
      &quot;entityType&quot;: &quot;IAP&quot;,
      &quot;entitySerialNum&quot;: &quot;Various&quot;,
      &quot;entityProductId&quot;: &quot;Various&quot;,
      &quot;entityManufactureTimestamp&quot;: &quot;2024-04-25T09:32:06.996Z&quot;,
      &quot;locationName&quot;: null,
      &quot;locationId&quot;: null,
      &quot;locationCity&quot;: null,
      &quot;locationState&quot;: null,
      &quot;locationCountry&quot;: null,
      &quot;name&quot;: &quot;&quot;,
      &quot;costUsd&quot;: 0.77483493,
      &quot;co2eMetricTon&quot;: 0.0021369294,
      &quot;kwh&quot;: 4.842718
    }
  ],
  &quot;count&quot;: 2,
  &quot;total&quot;: 2,
  &quot;offset&quot;: 0
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here, in the JSON response obtained, you can see the energy cost, CO2 emissions, and kWh consumption for an HPE ProLiant DL360 and a group of Aruba access points over the selected period.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: You can also try this from the HPE GreenLake Developer Portal by using the &lt;strong&gt;Try it&lt;/strong&gt; function:&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;img src=&quot;/img/sic-blog-4.jpg&quot; alt=&quot;HPE GreenLake Developer Portal&quot; title=&quot;HPE GreenLake Developer Portal&quot;&gt;&lt;/p&gt;
&lt;p&gt;The next call to try out is &lt;strong&gt;usage-totals&lt;/strong&gt;, which is documented &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/sustainability/public/openapi/sustainability-insight-ctr-latest/operation/getUsageTotals&quot;&gt;here&lt;/a&gt;. The parameters of the call are the same as in the previous call, so let&apos;s give it a try:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ curl -s -X GET &apos;https://eu-central.api.greenlake.hpe.com/sustainability-insight-ctr/v1beta1/usage-totals?end-time=2024-06-11T08%3A00%3A00Z&amp;#x26;start-time=2024-06-01T00%3A00%3A00Z&apos; -H &apos;Authorization: Bearer &amp;#x3C;my-token&gt;&apos; -H &quot;Accept:application/json&quot; | jq
{
  &quot;items&quot;: [
    {
      &quot;type&quot;: &quot;sustainability-insight-ctr/totals&quot;,
      &quot;costUsd&quot;: 3.9185636,
      &quot;co2eMetricTon&quot;: 0.0036856781,
      &quot;kwh&quot;: 27.95837
    }
  ],
  &quot;count&quot;: 1
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This call returns the total cost, CO2, and kWh for the complete environment. You can apply filters, such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;filter=locationCountry eq &apos;France&apos;&lt;/code&gt; to reduce scope to only devices located in France&lt;/li&gt;
&lt;li&gt;&lt;code&gt;filter=entityType eq &apos;COMPUTE&apos;&lt;/code&gt; to reduce scope to only COMPUTE devices&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Let’s try these two:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ curl -s -X GET &apos;https://eu-central.api.greenlake.hpe.com/sustainability-insight-ctr/v1beta1/usage-totals?end-time=2024-06-11T08%3A00%3A00Z&amp;#x26;start-time=2024-06-01T00%3A00%3A00Z&amp;#x26;filter=locationCountry%20eq%20%27France%27&apos; -H &apos;Authorization: Bearer &amp;#x3C;my-token&gt;&apos; -H &quot;Accept:application/json&quot;  | jq
{
  &quot;items&quot;: [
    {
      &quot;type&quot;: &quot;sustainability-insight-ctr/totals&quot;,
      &quot;costUsd&quot;: 3.1437287,
      &quot;co2eMetricTon&quot;: 0.0015487487,
      &quot;kwh&quot;: 23.115652
    }
  ],
  &quot;count&quot;: 1
}


$ curl -s -X GET &apos;https://eu-central.api.greenlake.hpe.com/sustainability-insight-ctr/v1beta1/usage-totals?end-time=2024-06-11T08%3A00%3A00Z&amp;#x26;start-time=2024-06-01T00%3A00%3A00Z&amp;#x26;filter=entityType%20eq%20%27COMPUTE%27&apos; -H &apos;Authorization: Bearer &amp;#x3C;my-token&gt;&apos; -H &quot;Accept:application/json&quot;  | jq
{
  &quot;items&quot;: [
    {
      &quot;type&quot;: &quot;sustainability-insight-ctr/totals&quot;,
      &quot;costUsd&quot;: 3.1437287,
      &quot;co2eMetricTon&quot;: 0.0015487487,
      &quot;kwh&quot;: 23.115652
    }
  ],
  &quot;count&quot;: 1
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Finally, give the last call, &lt;strong&gt;usageSeries&lt;/strong&gt;, a try. You can see from the &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/sustainability/public/openapi/sustainability-insight-ctr-latest/operation/getUsageBySeries&quot;&gt;documentation&lt;/a&gt; that, in addition to &lt;em&gt;start-time&lt;/em&gt; and &lt;em&gt;end-time&lt;/em&gt;, there is an additional required parameter called &lt;em&gt;interval&lt;/em&gt;. &lt;em&gt;interval&lt;/em&gt; should be formatted as an integer value followed by a unit string. Units can be one of: day, hour, week, month, or year.&lt;/p&gt;
&lt;p&gt;Let&apos;s give this a try:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ curl -s -X GET &apos;https://eu-central.api.greenlake.hpe.com/sustainability-insight-ctr/v1beta1/usage-series?end-time=2024-06-11T08%3A00%3A00Z&amp;#x26;start-time=2024-06-01T00%3A00%3A00Z&amp;#x26;interval=1%20day&apos; -H &apos;Authorization: Bearer &amp;#x3C;my-token&gt;&apos; -H &quot;Accept:application/json&quot; | jq
{
  &quot;items&quot;: [
    {
      &quot;id&quot;: &quot;2024-06-01T00:00:00.000Z&quot;,
      &quot;type&quot;: &quot;sustainability-insight-ctr/timeseries&quot;,
      &quot;timeBucket&quot;: &quot;2024-06-01T00:00:00.000Z&quot;,
      &quot;costUsd&quot;: 0.403164,
      &quot;co2eMetricTon&quot;: 0.00041689575,
      &quot;kwh&quot;: 2.858163
    },
    {
      &quot;id&quot;: &quot;2024-06-02T00:00:00.000Z&quot;,
      &quot;type&quot;: &quot;sustainability-insight-ctr/timeseries&quot;,
      &quot;timeBucket&quot;: &quot;2024-06-02T00:00:00.000Z&quot;,
      &quot;costUsd&quot;: 0.40312916,
      &quot;co2eMetricTon&quot;: 0.0004167917,
      &quot;kwh&quot;: 2.8579495
    },
    {
      &quot;id&quot;: &quot;2024-06-03T00:00:00.000Z&quot;,
      &quot;type&quot;: &quot;sustainability-insight-ctr/timeseries&quot;,
      &quot;timeBucket&quot;: &quot;2024-06-03T00:00:00.000Z&quot;,
      &quot;costUsd&quot;: 0.4038577,
      &quot;co2eMetricTon&quot;: 0.00041880895,
      &quot;kwh&quot;: 2.8624988
    },
    {
      &quot;id&quot;: &quot;2024-06-04T00:00:00.000Z&quot;,
      &quot;type&quot;: &quot;sustainability-insight-ctr/timeseries&quot;,
      &quot;timeBucket&quot;: &quot;2024-06-04T00:00:00.000Z&quot;,
      &quot;costUsd&quot;: 0.40462032,
      &quot;co2eMetricTon&quot;: 0.00042090417,
      &quot;kwh&quot;: 2.8672693
    },
    {
      &quot;id&quot;: &quot;2024-06-05T00:00:00.000Z&quot;,
      &quot;type&quot;: &quot;sustainability-insight-ctr/timeseries&quot;,
      &quot;timeBucket&quot;: &quot;2024-06-05T00:00:00.000Z&quot;,
      &quot;costUsd&quot;: 0.40372863,
      &quot;co2eMetricTon&quot;: 0.00041844492,
      &quot;kwh&quot;: 2.861696
    },
    {
      &quot;id&quot;: &quot;2024-06-06T00:00:00.000Z&quot;,
      &quot;type&quot;: &quot;sustainability-insight-ctr/timeseries&quot;,
      &quot;timeBucket&quot;: &quot;2024-06-06T00:00:00.000Z&quot;,
      &quot;costUsd&quot;: 0.4039479,
      &quot;co2eMetricTon&quot;: 0.00041905773,
      &quot;kwh&quot;: 2.8630626
    },
    {
      &quot;id&quot;: &quot;2024-06-07T00:00:00.000Z&quot;,
      &quot;type&quot;: &quot;sustainability-insight-ctr/timeseries&quot;,
      &quot;timeBucket&quot;: &quot;2024-06-07T00:00:00.000Z&quot;,
      &quot;costUsd&quot;: 0.30680534,
      &quot;co2eMetricTon&quot;: 0.00015114676,
      &quot;kwh&quot;: 2.2559216
    },
    {
      &quot;id&quot;: &quot;2024-06-08T00:00:00.000Z&quot;,
      &quot;type&quot;: &quot;sustainability-insight-ctr/timeseries&quot;,
      &quot;timeBucket&quot;: &quot;2024-06-08T00:00:00.000Z&quot;,
      &quot;costUsd&quot;: 0.4034695,
      &quot;co2eMetricTon&quot;: 0.00041773028,
      &quot;kwh&quot;: 2.8600764
    },
    {
      &quot;id&quot;: &quot;2024-06-09T00:00:00.000Z&quot;,
      &quot;type&quot;: &quot;sustainability-insight-ctr/timeseries&quot;,
      &quot;timeBucket&quot;: &quot;2024-06-09T00:00:00.000Z&quot;,
      &quot;costUsd&quot;: 0.40351078,
      &quot;co2eMetricTon&quot;: 0.00041754392,
      &quot;kwh&quot;: 2.8604808
    },
     {
      &quot;id&quot;: &quot;2024-06-09T00:00:00.000Z&quot;,
      &quot;type&quot;: &quot;sustainability-insight-ctr/timeseries&quot;,
      &quot;timeBucket&quot;: &quot;2024-06-10T00:00:00.000Z&quot;,
      &quot;costUsd&quot;: 0.40351078,
      &quot;co2eMetricTon&quot;: 0.00041754392,
      &quot;kwh&quot;: 2.8604808
    },
    {
      &quot;id&quot;: &quot;2024-06-10T00:00:00.000Z&quot;,
      &quot;type&quot;: &quot;sustainability-insight-ctr/timeseries&quot;,
      &quot;timeBucket&quot;: &quot;2024-06-11T00:00:00.000Z&quot;,
      &quot;costUsd&quot;: 0.3068089,
      &quot;co2eMetricTon&quot;: 0.0001511485,
      &quot;kwh&quot;: 2.2559478
    }
  ],
  &quot;count&quot;: 11
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can see that I get one measure per day from June 1 at 00:00 until June 11th.&lt;/p&gt;
&lt;h2&gt;Using this data externally&lt;/h2&gt;
&lt;p&gt;You can use another set of data collected every hour, over a period of two days, within a tool such as Excel:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/sic-blog-5.jpg&quot; alt=&quot;Data imported in Excel&quot; title=&quot;Data imported in Excel&quot;&gt;&lt;/p&gt;
&lt;p&gt;And start providing graphs according to your needs:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/sic-blog-graphs.jpg&quot; alt=&quot;Excel graphs&quot; title=&quot;Excel graphs&quot;&gt;&lt;/p&gt;
&lt;p&gt;You could even go a step further and feed this time-series data into &lt;a href=&quot;https://www.elastic.co/elasticsearch&quot;&gt;Elasticsearch&lt;/a&gt; or &lt;a href=&quot;https://www.influxdata.com/&quot;&gt;InfluxDB&lt;/a&gt; to build more advanced dashboards with &lt;a href=&quot;https://www.elastic.co/kibana&quot;&gt;Kibana&lt;/a&gt; or &lt;a href=&quot;https://grafana.com/&quot;&gt;Grafana&lt;/a&gt;. The capture below shows an example of an InfluxDB dashboard showing kWh, CO2, and cost over time.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/sic-blog-9.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
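&lt;p&gt;To get the data into Excel or a time-series database, you can first flatten the JSON response into CSV. Here is a minimal sketch using &lt;em&gt;jq&lt;/em&gt; against the &lt;strong&gt;usageSeries&lt;/strong&gt; response shown earlier; adjust the selected fields to your needs:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Flatten the usage-series response into a CSV file ready for Excel, InfluxDB, or Elasticsearch ingestion
$ curl -s -X GET &apos;https://eu-central.api.greenlake.hpe.com/sustainability-insight-ctr/v1beta1/usage-series?end-time=2024-06-11T08%3A00%3A00Z&amp;#x26;start-time=2024-06-01T00%3A00%3A00Z&amp;#x26;interval=1%20day&apos; \
  -H &apos;Authorization: Bearer &amp;#x3C;my-token&gt;&apos; -H &quot;Accept:application/json&quot; \
  | jq -r &apos;[&quot;timeBucket&quot;,&quot;costUsd&quot;,&quot;co2eMetricTon&quot;,&quot;kwh&quot;], (.items[] | [.timeBucket, .costUsd, .co2eMetricTon, .kwh]) | @csv&apos; &gt; usage-series.csv
&lt;/code&gt;&lt;/pre&gt;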
&lt;h2&gt;Next step&lt;/h2&gt;
&lt;p&gt;In this article, I described how to retrieve the carbon emissions, energy consumption, and cost of an infrastructure managed with HPE GreenLake using the HPE Sustainability Insight Center API. I used Bash and cURL, but you could do the same using PowerShell or Python. For more details about how to use these two languages with HPE GreenLake, please check out my previous blog posts:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/blog/hpe-greenlake-edge-to-cloud-platform-scripting-fundamentals/&quot;&gt;HPE GreenLake edge-to-cloud platform scripting fundamentals&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/blog/bulk-onboarding-of-users-in-hpe-greenlake-edge-to-cloud-platform/&quot;&gt;Bulk onboarding of users in HPE GreenLake edge-to-cloud platform&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you’re interested in trying out the HPE GreenLake API, you might first want to check out one of our hands-on Workshops-on-Demand. The workshops are free, available 24/7, and very easy to use. They give you a real-world experience without any risk. Check out our &lt;a href=&quot;https://developer.hpe.com/hackshack/workshops&quot;&gt;catalog of workshops&lt;/a&gt;, register for the one you’re interested in and go! It’s as simple as that.&lt;/p&gt;
&lt;p&gt;If you still have any questions regarding the HPE GreenLake APIs, join the &lt;a href=&quot;https://developer.hpe.com/slack-signup/&quot;&gt;HPE Developer Community Slack&lt;/a&gt;  and start a discussion in our &lt;a href=&quot;https://hpedev.slack.com/archives/C02EG5XFK8Q&quot;&gt;#hpe-greenlake-api&lt;/a&gt; channel. We’re always here to help.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE GreenLake for Private Cloud Enterprise: Mastering cloud migration with the 6Rs approach]]></title><description><![CDATA[In today's digital landscape, organizations are increasingly turning to the cloud to drive innovation, agility, and efficiency. However, the…]]></description><link>https://developer.hpe.com/hpe-greenlake-for-private-cloud-enterprise-mastering-cloud-migration-with-the-6rs-approach/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-greenlake-for-private-cloud-enterprise-mastering-cloud-migration-with-the-6rs-approach/</guid><pubDate>Thu, 13 Jun 2024 17:36:32 GMT</pubDate><content:encoded>&lt;p&gt;In today&apos;s digital landscape, organizations are increasingly turning to the cloud to drive innovation, agility, and efficiency. However, the journey to the cloud is fraught with challenges, including complex legacy systems, data security concerns, and the need for uninterrupted operations. HPE GreenLake for Private Cloud Enterprise addresses these hurdles with its comprehensive 6Rs approach to cloud migration and management.&lt;/p&gt;
&lt;p&gt;The 6Rs—&lt;strong&gt;Rehost&lt;/strong&gt;, &lt;strong&gt;Replatform&lt;/strong&gt;, &lt;strong&gt;Refactor&lt;/strong&gt;, &lt;strong&gt;Repurchase&lt;/strong&gt;, &lt;strong&gt;Retire&lt;/strong&gt;, and &lt;strong&gt;Retain&lt;/strong&gt;—provide a structured framework that ensures a seamless transition to the cloud. This blog post offers an in-depth look into each R&apos;s methodology, outlining a step-by-step guide that helps organizations optimize their cloud migration strategies and enhance their cloud management capabilities.&lt;/p&gt;
&lt;p&gt;This post is the first part of a multipart series covering migrations in depth. Below, I will delve into each of the 6Rs, exploring their unique roles and the value they bring to the migration process.&lt;/p&gt;
&lt;h2&gt;Industry need and challenges in cloud migration &lt;/h2&gt;
&lt;p&gt;As organizations strive to stay competitive, the shift towards cloud adoption becomes imperative. The need for scalable infrastructure, cost optimization, and enhanced operational efficiency drives this transformation. However, the migration journey is not without its obstacles. Developers and IT teams often grapple with:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Complex legacy systems: Modernizing outdated and intricate legacy systems without disrupting business operations.&lt;/li&gt;
&lt;li&gt;Data security concerns: Ensuring robust security measures to protect sensitive data during and after migration.&lt;/li&gt;
&lt;li&gt;Operational continuity: Maintaining uninterrupted service delivery throughout the migration process.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;How HPE GreenLake addresses these challenges&lt;/h2&gt;
&lt;p&gt;HPE GreenLake for Private Cloud Enterprise is designed to tackle these challenges effectively. By leveraging the 6Rs framework, HPE GreenLake offers:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Tailored migration strategies: Customized solutions that align with organizational goals and existing IT landscapes.&lt;/li&gt;
&lt;li&gt;Enhanced security: Advanced security protocols and compliance measures to safeguard data integrity.&lt;/li&gt;
&lt;li&gt;Seamless integration: Tools and services that ensure a smooth transition with minimal downtime, preserving business continuity.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Let’s take a look at each of the 6Rs individually and how HPE GreenLake for Private Cloud Enterprise can help&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;1. Rehosting&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Goal:&lt;/strong&gt; Efficiently migrate applications to the cloud without making changes.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Strategy&lt;/strong&gt;: Use HPE GreenLake for Private Cloud Enterprise&apos;s integration capabilities for a comprehensive application inventory and assessment to identify suitable candidates for migration.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Outcome:&lt;/strong&gt; Achieve faster migration, immediate cost reduction, and enhanced scalability by relocating applications to a more flexible and scalable cloud environment.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;2. Replatforming&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Goal&lt;/strong&gt;: Ensure cloud compatibility with minimal application adjustments.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Strategy&lt;/strong&gt;: Utilize HPE GreenLake for Private Cloud Enterprise&apos;s adaptable cloud environment to make minor alterations to applications, ensuring they benefit from cloud features.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Outcome:&lt;/strong&gt; Attain improved cloud performance and long-term operational efficiency by ensuring applications operate optimally in a cloud environment.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;3. Refactoring (rearchitecting)&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Goal:&lt;/strong&gt; Revamp applications to maximize cloud functionality.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Strategy&lt;/strong&gt;: Utilize HPE GreenLake for Private Cloud Enterprise&apos;s tools and resources to redesign and reconstruct applications, integrating cloud-native features for optimal performance.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Outcome:&lt;/strong&gt; Realize enhanced application functionality, scalability, and superior cloud performance by leveraging tailored, cloud-centric designs.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;4. Repurchasing&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Goal:&lt;/strong&gt; Transition to cloud-optimized applications.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Strategy&lt;/strong&gt;: Evaluate existing applications and select from HPE GreenLake for Private Cloud Enterprise&apos;s cloud service offerings for better cloud compatibility and efficiency.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Outcome&lt;/strong&gt;: Streamline operations and bolster cloud compatibility and performance by adopting applications designed for the cloud environment.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;5. Retire&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Goal:&lt;/strong&gt; Eliminate redundant or underutilized applications.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Strategy:&lt;/strong&gt; Use HPE GreenLake for Private Cloud Enterprise&apos;s tools for an exhaustive application assessment to identify and safely retire unnecessary applications.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Outcome:&lt;/strong&gt; Minimize costs and enhance overall efficiency by discontinuing the use of unneeded applications, freeing up resources for critical operations.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;6. Retain&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Goal:&lt;/strong&gt; Identify and maintain applications that are not suitable for cloud migration.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Strategy:&lt;/strong&gt; Utilize HPE GreenLake for Private Cloud Enterprise&apos;s tools for an in-depth risk and performance assessment to identify applications best kept off-cloud.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Outcome:&lt;/strong&gt; Ensure sustained application performance and security by making informed decisions on which applications to keep in their current environment.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Pilots/POC&lt;/h2&gt;
&lt;p&gt;We recommend testing some of these strategies through a pilot or proof-of-concept (POC) approach to get a sense of how HPE GreenLake for Private Cloud Enterprise can assist you in modernizing your environment. A recommended methodology would be as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Approach:&lt;/strong&gt; Execute 2-3 pilot projects or proofs of concept for each R. Begin with &lt;strong&gt;Rehosting&lt;/strong&gt; and &lt;strong&gt;Replatforming&lt;/strong&gt; to assess and refine the migration process.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Outcome:&lt;/strong&gt; Gather valuable metrics and insights for evaluating migration success and failure, paving the way for continuous improvement in migration strategies.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;By embracing the structured approach of the HPE GreenLake for Private Cloud Enterprise 6Rs Migration Strategy, organizations can ensure a smooth and effective cloud migration journey. HPE GreenLake for Private Cloud Enterprise&apos;s approach delivers enhanced performance, scalability, and cost benefits, contributing to the overall success and growth of the business.&lt;/p&gt;
&lt;p&gt; For more insights and to stay updated on the remaining parts of our blog series, connect back to the &lt;a href=&quot;https://developer.hpe.com/blog/&quot;&gt;HPE Developer Community portal&lt;/a&gt;. Don&apos;t miss out on the in-depth discussions and expert advice that will help you navigate your cloud migration journey successfully.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Activation Memory: A Deep Dive using PyTorch]]></title><description><![CDATA[External blog post]]></description><link>https://developer.hpe.com/activation-memory-a-deep-dive-using-pytorch/</link><guid isPermaLink="false">https://developer.hpe.com/activation-memory-a-deep-dive-using-pytorch/</guid><pubDate>Wed, 12 Jun 2024 15:15:13 GMT</pubDate><content:encoded>&lt;p&gt;External blog post&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Fun computer games, more HPE GreenLake APIs, HPE Ezmeral Unified Analytics, and more!]]></title><link>https://developer.hpe.com/2024-june-06/</link><guid isPermaLink="false">https://developer.hpe.com/2024-june-06/</guid><pubDate>Thu, 06 Jun 2024 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[HPE GreenLake Flex Solutions SCIM API Integration with Okta SCIM Adapter]]></title><description><![CDATA[I am excited to announce that HPE have developed and promoted a way to bring users and groups from Okta to HPE GreenLake Flex Solutions…]]></description><link>https://developer.hpe.com/glc-scim-api-integration-with-okta-scim-adapter-1/</link><guid isPermaLink="false">https://developer.hpe.com/glc-scim-api-integration-with-okta-scim-adapter-1/</guid><pubDate>Mon, 27 May 2024 17:30:08 GMT</pubDate><content:encoded>&lt;style&gt; li { font-size: 27px; line-height: 33px; max-width: none; } &lt;/style&gt;
&lt;p&gt;I am excited to announce that HPE has developed and promoted a way to bring users and groups from Okta to HPE GreenLake Flex Solutions. Enterprises will now be able to sync all of their users and groups from Okta into HPE GreenLake Flex Solutions.&lt;/p&gt;
&lt;p&gt;In this blog post, I will walk you through the process of configuring Okta SCIM adapter to sync users and groups over to HPE GreenLake Flex Solutions.&lt;/p&gt;
&lt;h2&gt;Okta (SCIM) Adapter&lt;/h2&gt;
&lt;p&gt;You can synchronize users and groups from your Okta identity management service to HPE GreenLake Flex Solutions using the Okta System for Cross-domain Identity Management (SCIM) adapter.&lt;/p&gt;
&lt;p&gt;The Okta SCIM adapter application can be installed from the Okta Integration Network (OIN) into your Okta implementation to allow for integration with a SCIM-compliant API. Any user that needs to be synchronized to HPE GreenLake Flex Solutions must be assigned to the Okta SCIM adapter application in your Okta implementation. Groups whose memberships need to be synced to HPE GreenLake Flex Solutions must be added as a Push Group in the application. Users can be assigned to the application using the same groups that are synchronized to HPE GreenLake Flex Solutions.&lt;/p&gt;
&lt;h2&gt;Configuring a SCIM application in Okta&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; In the Okta Admin Console, deploy an application from the app catalog:&lt;br&gt;
a. Go to &lt;strong&gt;Applications&lt;/strong&gt; &gt; &lt;strong&gt;Browse App Catalog&lt;/strong&gt;.&lt;br&gt;
b. In the search bar, type SCIM 2.0 and find the app called SCIM 2.0 Test App (OAuth Bearer Token).&lt;br&gt;
c. Select the application and then click &lt;strong&gt;Add Integration&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; On the Add SCIM 2.0 Test App page, do the following:&lt;br&gt;
a. Change the application label name if you want, and make sure &lt;strong&gt;Automatically log in when user lands on login page&lt;/strong&gt; is checked.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/scimgeneral.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
  &lt;br /&gt;
&lt;p&gt;b. Click &lt;strong&gt;Next&lt;/strong&gt;. On the following page, click &lt;strong&gt;Done&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; After the application is created, configure the integration:&lt;br&gt;
a. Click the &lt;strong&gt;Provisioning&lt;/strong&gt; tab, then select &lt;strong&gt;Configure API Integration&lt;/strong&gt;.&lt;br&gt;
b. Select the &lt;strong&gt;Enable API Integration&lt;/strong&gt; check box.&lt;br&gt;
c. In the SCIM 2.0 Base Url field, enter: &lt;a href=&quot;https://sps.us1.greenlake-hpe.com/v1alpha1/scimproxy&quot;&gt;https://sps.us1.greenlake-hpe.com/v1alpha1/scimproxy&lt;/a&gt;.&lt;br&gt;
d. In the OAuth Bearer Token field: to create long-lived tokens for user provisioning, see step 2 and step 3 of the blog post &lt;a href=&quot;https://developer.hpe.com/blog/configuring-azure-ad-with-long-term-token-for-scim-provisiong/&quot;&gt;Configuring Azure Active Directory with long-lived tokens for user provisioning&lt;/a&gt;.&lt;br&gt;
e. Uncheck the box for the Import Groups option.&lt;br&gt;
f. Test that the URL and token are valid by clicking &lt;strong&gt;Test API Credentials&lt;/strong&gt;, then click &lt;strong&gt;Save&lt;/strong&gt;. If everything is correct, the following message is shown:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/scimtest.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
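&lt;p&gt;Outside of Okta, you can also sanity-check the integration directly, since the endpoint is a standard SCIM 2.0 service. The following cURL sketch is illustrative only: the &lt;code&gt;/Users&lt;/code&gt; resource path and the &lt;code&gt;count&lt;/code&gt; parameter come from the SCIM 2.0 specification rather than from HPE documentation, and the token is a placeholder for the long-lived token created earlier:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Illustrative sketch only: query the SCIM proxy directly with the bearer token.
# The /Users path and count parameter are standard SCIM 2.0 conventions (assumed here),
# and YOUR_LONG_LIVED_TOKEN is a placeholder for the token configured above.
curl -s &quot;https://sps.us1.greenlake-hpe.com/v1alpha1/scimproxy/Users?count=1&quot; \
  -H &quot;Authorization: Bearer YOUR_LONG_LIVED_TOKEN&quot; \
  -H &quot;Accept: application/scim+json&quot;
&lt;/code&gt;&lt;/pre&gt;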
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt; Configure the synchronization settings:&lt;br&gt;
a. Under the &lt;strong&gt;Provisioning&lt;/strong&gt; tab &gt; &lt;strong&gt;To App&lt;/strong&gt; section, enable these settings:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Create Users&lt;/li&gt;
&lt;li&gt;Deactivate Users&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/scim2app.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;b. Select the six attributes shown in the following screenshot and discard the rest.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/attributes.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;c. Assign the group you want to synchronize to HPE GreenLake Flex Solutions to the SCIM application under the &lt;strong&gt;Application&lt;/strong&gt; &gt; &lt;strong&gt;Assignments&lt;/strong&gt; tab and add it as a push group in  the &lt;strong&gt;Push Groups&lt;/strong&gt; tab.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Assignments&lt;/strong&gt; tab:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/scim-group.png&quot; alt=&quot;&quot; title=&quot;Assignments tab&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Push Groups&lt;/strong&gt; tab:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/scim-push.png&quot; alt=&quot;&quot; title=&quot;Push Groups tab:&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Please note:&lt;/strong&gt;
Adding the Everyone group to the SCIM application could have unintended effects on all users.&lt;/p&gt;
&lt;p&gt;These are all the steps required to configure a SCIM 2.0 application. Remember that users must be members of a group that is assigned to the SCIM application and that group must be included in a push group.
Now all configured groups can be pushed into HPE GreenLake Flex Solutions via the Okta SCIM Adapter.&lt;/p&gt;
&lt;p&gt;Please return to the &lt;a href=&quot;https://developer.hpe.com/blog/&quot;&gt;HPE Developer Community blog&lt;/a&gt; for more tips and tricks on working with the HPE GreenLake platform.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Converting HPE GreenLake API specifications in OAS 3.1 using OpenAPI tools]]></title><description><![CDATA[What are the HPE GreenLake APIs for Data Services on the HPE GreenLake Platform? These HPE GreenLake APIs are multiple sets of APIs that…]]></description><link>https://developer.hpe.com/converting-hpe-greenlake-api-specifications-in-oas-3-1-using-openapi-tools/</link><guid isPermaLink="false">https://developer.hpe.com/converting-hpe-greenlake-api-specifications-in-oas-3-1-using-openapi-tools/</guid><pubDate>Mon, 27 May 2024 09:42:19 GMT</pubDate><content:encoded>&lt;style&gt; li { font-size: 27px; line-height: 33px; max-width: none; } &lt;/style&gt;
&lt;h2&gt;What are the HPE GreenLake APIs for Data Services on the HPE GreenLake Platform?&lt;/h2&gt;
&lt;p&gt;These HPE GreenLake APIs are multiple sets of APIs that enable client applications to manipulate the REST API resources available as part of Data Services on HPE GreenLake edge-to-cloud platform. For more information about Data Services on HPE GreenLake platform, see the &lt;a href=&quot;https://developer.hpe.com/greenlake/data-services-on-the-hpe-greenlake-platform/home/&quot;&gt;Data Services landing page&lt;/a&gt; in the HPE Developer Forum. The landing page provides &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=sd00003533en_us&amp;#x26;page=ps_api_dscc.html&quot;&gt;information on accessing the documentation&lt;/a&gt; for each set of HPE GreenLake APIs related to the data services on the HPE GreenLake platform.&lt;/p&gt;
&lt;h2&gt;How can I use the HPE GreenLake APIs documentation for my automation projects?&lt;/h2&gt;
&lt;p&gt;All HPE GreenLake APIs provide example code snippets in different programming or scripting languages on the interactive website. Adopters can make use of these code snippets to create client applications or client libraries for their development purposes. For early adopters, this method is the quickest and easiest way to get started with an automation script. As time goes on, you will see more &lt;a href=&quot;https://developer.hpe.com/blog/&quot;&gt;blog posts&lt;/a&gt; on the HPE Developer forum that provide comprehensive code samples addressing common use cases.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/examples-provided-in-the-documentation.png&quot; alt=&quot;Code snippets in multiple languages including JavaScript, Python, GO, and cURL&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The previous figure displays the request and response samples pane that is present on every API documentation page in the &lt;a href=&quot;https://developer.greenlake.hpe.com&quot;&gt;HPE GreenLake Developer website&lt;/a&gt;. The pane contains examples of calling the particular API using cURL, JavaScript, Python, or Go code.&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;Do I have an alternative to use these REST APIs for my development?&lt;/h2&gt;
&lt;p&gt;From the documentation for each set of HPE GreenLake APIs, you will recognize that these APIs were documented using OpenAPI specification files in either JSON or YAML format. You will also notice that a single set of the HPE GreenLake APIs, called Data Services Cloud Console, is based on the OpenAPI Standard 3.0, while the rest of the HPE GreenLake APIs are based on the OpenAPI Standard 3.1.&lt;/p&gt;
&lt;p&gt;If you decide to use a tool to convert these OpenAPI spec files into a client library for a particular programming or scripting language, there are a couple of blog posts that discuss how to convert these files into &lt;a href=&quot;https://developer.hpe.com/blog/introducing-python-sdk-for-dscc/&quot;&gt;Python&lt;/a&gt; or &lt;a href=&quot;https://developer.hpe.com/blog/getting-started-with-the-hpe-data-services-cloud-console-powershell-sdk/&quot;&gt;PowerShell&lt;/a&gt; client libraries.&lt;/p&gt;
&lt;p&gt;This method enables agile adoption of the HPE GreenLake APIs because many of these APIs, as of this blog post, are still in an ongoing development cycle. So there is an expectation that existing APIs will be updated or deprecated, and that new resources will be introduced. This method also opens the opportunity to automate code development by pushing the generated client library to GitHub as each new spec is made available.&lt;/p&gt;
&lt;p&gt;For more information about the versioning for the APIs based on the OpenAPI Standard 3.1, see this &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/guides/public/standards/versioning_basics/&quot;&gt;link&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;First tool of the day: a converter from OpenAPI Standard 3.1 to the OpenAPI Standard 3.0&lt;/h2&gt;
&lt;p&gt;The OpenAPI initiative provides a framework to describe any APIs so that these APIs can be consumed by different organizations for documentation, client side, server-side mocks, and many other opportunities. This framework has evolved from standard version 3.0 to version 3.1 with all the benefits as described in this &lt;a href=&quot;https://www.youtube.com/live/Sflpzh_cAcA?si=zkAKqGNYQz-5C6oe&quot;&gt;video&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;As of this blog post, the HPE Developer forum uses &lt;a href=&quot;https://openapi-generator.tech/&quot;&gt;an open source tool&lt;/a&gt; to generate client libraries for blog posts. The challenge with this tool is that it only accepts OpenAPI standard version 3.0 specifications as input, whereas the majority of the HPE GreenLake APIs were documented using OpenAPI Standard version 3.1.&lt;/p&gt;
&lt;p&gt;Let me introduce a tool to convert the spec file in OpenAPI standard 3.1 to a spec file in OpenAPI standard 3.0 to enable conversion using the &lt;a href=&quot;https://www.npmjs.com/package/@openapitools/openapi-generator-cli&quot;&gt;openapi-generator-cli&lt;/a&gt;. The open-source tool is named &lt;strong&gt;apiture/openapi-down-convert&lt;/strong&gt; by David Biesack and Mike Ralphson. It is documented in this GitHub &lt;a href=&quot;https://github.com/apiture/openapi-down-convert&quot;&gt;site&lt;/a&gt;, and shown in the following figure.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/github-to-openapi-down-convert.png&quot; alt=&quot;The openapi-down-convert GitHub website&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The above figure shows the GitHub website for the documentation of the openapi-down-convert.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;To install this tool, follow the instructions on the GitHub &lt;a href=&quot;https://github.com/apiture/openapi-down-convert&quot;&gt;README&lt;/a&gt; using the JavaScript package manager called the &lt;strong&gt;npm&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;A lot of the tools used in this part of the blog post are easier to use with the npm (JavaScript) software deployment method. I used the following steps to deploy the npm for JavaScript package management into a Microsoft Windows desktop environment. After these steps, you can deploy the openapi-down-convert tool.&lt;/p&gt;
&lt;p&gt;Before I started this, I made sure that my Microsoft Windows desktop had access to the internet and could connect to the websites referenced in the steps below. In some cases, you may need to define an internet proxy so that you can connect to these websites.&lt;/p&gt;
&lt;p&gt;Additionally, prior to deploying the OpenJDK, I removed any older versions of the Java SDK or JRE from my Microsoft Windows environment. This ensured that the correct Java version was used for all the following steps.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;First, I deployed the Java SDK library into my Microsoft Windows desktop. My recommendation is to deploy the Microsoft Build version of the OpenJDK available from the Microsoft &lt;a href=&quot;https://learn.microsoft.com/en-us/java/openjdk/download&quot;&gt;website&lt;/a&gt; using the instructions provided.&lt;/li&gt;
&lt;li&gt;Afterward, I set the &lt;code&gt;JAVA_HOME&lt;/code&gt; system variable to &lt;code&gt;C:\Program Files\Eclipse Adoptium\jdk-21.0.1.12-hotspot&lt;/code&gt; using the following &lt;a href=&quot;https://confluence.atlassian.com/doc/setting-the-java_home-variable-in-windows-8895.html&quot;&gt;instruction&lt;/a&gt;. This step ensured that any applications that require Java would use the OpenJDK by default.&lt;/li&gt;
&lt;li&gt;Following the steps on the NodeJS &lt;a href=&quot;https://nodejs.org/en&quot;&gt;website&lt;/a&gt;, I deployed the NodeJS package, downloaded from the nodejs.org website, into my Microsoft Windows environment. I also ensured that I included the &lt;strong&gt;npm package manager&lt;/strong&gt; option offered by the NodeJS installation wizard, as shown in the figure below.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/nodejs-deployment-ensure-npm-is-available.png&quot; alt=&quot;The wizard section to deploy the npm package manager&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;After the deployment of NodeJS completed, I was able to use the npm CLI to install the &lt;code&gt;openapi-down-convert&lt;/code&gt; in the Microsoft Windows CLI as follows.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;C:\&gt;npm i -g @apiture/openapi-down-convert
added 5 packages in 4s
npm notice
npm notice New minor version of npm available! 10.5.0 -&gt; 10.7.0
npm notice Changelog: https://github.com/npm/cli/releases/tag/v10.7.0
npm notice Run npm install -g npm@10.7.0 to update!
npm notice
C:\&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;5&quot;&gt;
&lt;li&gt;After the preceding steps, I had the tool to convert any HPE GreenLake API spec files from OpenAPI Standard 3.1 to OpenAPI Standard 3.0.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Second tool of the day: deploying the openapi-generator-cli using the npm JavaScript&lt;/h2&gt;
&lt;p&gt;This time, I want to introduce a version of the openapi-generator that can be executed like any other CLI, &lt;a href=&quot;https://www.npmjs.com/package/@openapitools/openapi-generator-cli&quot;&gt;@openapitools/openapi-generator-cli&lt;/a&gt; shared by the OpenAPI Initiative team. Because we have already deployed the npm package deployment tool, as described previously, I can proceed to deploy this tool quickly. These are steps that I took to deploy openapi-generator-cli:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;I opened a Microsoft Windows command line interface and issued the following npm CLI command:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;C:\Users\Administrator&gt;npm install -g @openapitools/openapi-generator-cli

added 116 packages in 36s

23 packages are looking for funding
  run `npm fund` for details

C:\Users\Administrator&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Afterward, I tested the deployed openapi-generator-cli to validate the version of the generator that was used. By default, openapi-generator-cli uses the latest published version of the generator engine. At the time of publication of this blog post, the latest published version was 7.5.0, as shown below:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;   C:&gt;openapi-generator-cli
   Download 7.5.0 ...
   Downloaded 7.5.0
   Did set selected version to 7.5.0
   Usage: openapi-generator-cli &amp;#x3C;command&gt; [&amp;#x3C;args&gt;]

Options:
  --openapitools &amp;#x3C;openapitools.json&gt;  Use the specified openapi-generator-cli configuration file
  --custom-generator &amp;#x3C;generator&gt;      Custom generator jar

Commands:
  version-manager                     Manage used / installed generator version
  author                              Utilities for authoring generators or customizing templates.
  batch                               Generate code in batch via external configs.
  config-help                         Config help for chosen lang
  generate \[options]                  Generate code with the specified generator.
  help                                Display help information about openapi-generator
  list                                Lists the available generators
  meta                                MetaGenerator. Generator for creating a new template set and configuration for
                                      Codegen.  The output will be based on the language you specify, and includes
                                      default templates to include.
  validate                            Validate specification
  version                             Show version information used in tooling

  C:&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;With that, I had the tool to convert any of the HPE GreenLake APIs in OpenAPI Standard 3.0 to a client library for the popular scripting or programming languages listed in the &lt;a href=&quot;https://openapi-generator.tech/docs/generators&quot;&gt;website&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;But wait! What about converting those OpenAPI Standard 3.1 spec files to a client library?&lt;/h2&gt;
&lt;p&gt;Don&apos;t worry because I am not going to leave you stranded! 😊&lt;/p&gt;
&lt;p&gt;We now have the required tools to create a pipeline that converts an OpenAPI Standard 3.1 spec file into a client library for the scripting language of your choice. An overview of the process (condensed into a shell sketch after the list):&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Download the OpenAPI spec file from the HPE GreenLake developer website.&lt;/li&gt;
&lt;li&gt;Convert the spec file to OpenAPI Standard 3.0 spec file.&lt;/li&gt;
&lt;li&gt;Convert the 3.0 spec file to the client library.&lt;/li&gt;
&lt;/ol&gt;
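&lt;p&gt;Taken together, steps 2 and 3 boil down to two CLI invocations. Below is a condensed sketch of the same pipeline in a POSIX shell (on Windows I ran the equivalent commands in PowerShell, as shown later); the file names mirror the ones used in this example, and step 1, downloading &lt;code&gt;swagger.json&lt;/code&gt;, is done through the developer portal UI:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Condensed sketch of the pipeline. Step 1 (downloading swagger.json) is done via the portal UI,
# so the sketch starts from the downloaded file. File names mirror the walkthrough below.
mv swagger.json GL-dataservices-31.json

# Step 2: down-convert the spec from OpenAPI Standard 3.1 to OpenAPI Standard 3.0
openapi-down-convert -i GL-dataservices-31.json -o GL-dataservices-30.json

# Step 3: generate a PowerShell client library from the 3.0 spec
openapi-generator-cli generate -g powershell \
  --additional-properties=&quot;packageName=GLdataservices&quot; \
  -i GL-dataservices-30.json -o Posh-GL-dataservices
&lt;/code&gt;&lt;/pre&gt;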
&lt;p&gt;Let me give you an example of converting the HPE GreenLake API for Data Services spec file to a PowerShell client library.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;I downloaded the HPE GreenLake API for Data Services OpenAPI Standard 3.1 spec file from the HPE GreenLake Developer website using the UI:&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/download-the-openapi-3.1-spec-data-services-from-developer-website.png&quot; alt=&quot;Download OpenAPI 3.1 JSON spec file from HP GreenLake Data Services&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Afterward, I changed the name of the downloaded file from &lt;code&gt;swagger.json&lt;/code&gt; to &lt;code&gt;GL-dataservices-31.json&lt;/code&gt; so that I could recognize this OpenAPI Standard 3.1 spec file from HPE GreenLake APIs for Data Services.&lt;/li&gt;
&lt;li&gt;After I renamed the JSON file, I used the &lt;code&gt;openapi-down-convert&lt;/code&gt; tool to convert the spec from OpenAPI Standard 3.1 to OpenAPI Standard 3.0 using the command shown below. I named the converted file &lt;code&gt;GL-dataservices-30.json&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;   PS C:\Users\Administrator\Downloads&gt; move .\swagger.json ..\Scripting\GL-dataservices-31.json
   PS C:\Users\Administrator\Downloads&gt; cd ..\Scripting\
   PS C:\Users\Administrator\Scripting&gt; dir
Mode                LastWriteTime         Length Name

- - -

-a----         5/2/2024   2:46 PM         171264 GL-dataservices-31.json

PS C:\Users\Administrator\Scripting&gt; openapi-down-convert -i .\GL-dataservices-31.json -o .\GL-dataservices-30.json
PS C:\Users\Administrator\Scripting&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;Now it’s time to convert this OpenAPI Standard 3.0 spec file to a PowerShell client library using the &lt;code&gt;openapi-generator-cli&lt;/code&gt; tool. Additionally, I changed the name of the generated package from the standard name &lt;code&gt;OpenAPITools&lt;/code&gt; to the specific name &lt;code&gt;GLdataservices&lt;/code&gt; using the &lt;code&gt;--additional-properties&lt;/code&gt; argument.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;   PS C:\Users\Administrator\Scripting&gt; openapi-generator-cli generate -g powershell --additional-properties=&quot;packageName&quot;=&quot;GLdataservices&quot; -i .\GL-dataservices-30.json -o Posh-GL-dataservices
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;5&quot;&gt;
&lt;li&gt;After the conversion completed, I found a new folder named &lt;code&gt;Posh-GL-dataservices&lt;/code&gt; containing the files shown below, including a Markdown file called &lt;code&gt;README.md&lt;/code&gt;. I can use my favorite development editor, Microsoft Visual Studio Code, to investigate this generated PowerShell module.&lt;/li&gt;
&lt;/ol&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The information on how to install Microsoft Visual Studio Code is available at the Visual Studio Code &lt;a href=&quot;https://code.visualstudio.com/&quot;&gt;website&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;   PS C:\Users\Administrator\Scripting&gt; cd .\Posh-GL-dataservices\
   PS C:\Users\Administrator\Scripting\Posh-GL-dataservices&gt; dir
Mode                LastWriteTime         Length Name

- - -

d-----         5/2/2024   5:00 PM                .openapi-generator
d-----         5/2/2024   5:00 PM                docs
d-----         5/2/2024   5:40 PM                src
d-----         5/2/2024   5:00 PM                tests
-a----         5/2/2024   5:00 PM           1040 .openapi-generator-ignore
-a----         5/2/2024   5:40 PM           1224 appveyor.yml
-a----         5/2/2024   5:40 PM           2100 Build.ps1
-a----         5/2/2024   5:40 PM          12794 README.md

PS C:\Users\Administrator\Scripting\Posh-GL-dataservices&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;6&quot;&gt;
&lt;li&gt;Using Microsoft Visual Studio Code, I opened the folder where the module was located and could see the list of files generated by the openapi-generator. The first file that I opened was README.md, and using the keyboard shortcut &lt;code&gt;&quot;CTRL + Shift + v&quot;&lt;/code&gt;, I was able to render README.md in a readable preview. There, I found instructions on installation and uninstallation, along with detailed information on how to use this PowerShell module.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/hpe-gl-dataservices-posh-readme-md.png&quot; alt=&quot;HPE GreenLake Data Services POSH README.md&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;By following the guidance in README.md, I knew to click &lt;code&gt;documentation for API Endpoints&lt;/code&gt; to display information on how to use each API, for example, the &lt;code&gt;Invoke-ListAsyncOperations&lt;/code&gt; cmdlet that I decided to use for this blog post.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;img src=&quot;/img/hp-gl-data-services-help-for-async-events-list.png&quot; alt=&quot;HPE GreenLake Data Services list of async operations&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;7&quot;&gt;
&lt;li&gt;To use this PowerShell client library and call &lt;code&gt;Invoke-ListAsyncOperations&lt;/code&gt;, I followed the instructions described in the &lt;code&gt;README.md&lt;/code&gt; file to build and install the module on a PowerShell workstation.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;To install from the source, run the following command to build and install the PowerShell module locally:
C:&gt; Build.ps1
C:&gt; Import-Module -Name &apos;.\src\GLdataservices&apos; -Verbose
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;8&quot;&gt;
&lt;li&gt;Once this module was loaded, I created a short script based on PowerShell to call &lt;code&gt;Invoke-ListAsyncOperations&lt;/code&gt; and used that API to display a list of completed tasks.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$fileName = &quot;..\myCredential-rrd1.json&quot;
$secretFile = Get-Content -Path $fileName | ConvertFrom-Json

# general setting of the PowerShell module, e.g. base URL, authentication, etc
$Configuration = Get-Configuration
$Configuration.BaseUrl = &quot;https://us1.data.cloud.hpe.com&quot;
$Configuration.Username = $secretFile | select-object -ExpandProperty myId
$Configuration.Password = $secretFile | select-object -ExpandProperty mySecret
$token_url = &quot;https://sso.common.cloud.hpe.com/as/token.oauth2&quot;
$AuthenticationResult = Invoke-WebRequest $token_url -Method Post -Body @{
    grant_type = &quot;client_credentials&quot;
    client_id = $Configuration.Username
    client_secret = $Configuration.Password
}
$Configuration.AccessToken = $AuthenticationResult.Content | ConvertFrom-Json | select-object -ExpandProperty access_token

# Query parameters for the async-operations list: filter and property selection
$Filter = &quot;&apos;backup-and-recovery&apos; in services&quot; # String | Filter expression: only operations from the backup-and-recovery service
$Select = &apos;associatedResources,services,displayName,logMessages&apos; # String | A list of properties to include in the response. (optional)

# Returns a page of async-operations matching the filter and selection
try {
    $Result = Invoke-ListAsyncOperations -Offset 0 -Limit 10 -Filter $Filter -Select $Select
} catch {
    Write-Host (&quot;Exception occurred when calling Invoke-ListAsyncOperations: {0}&quot; -f ($_.ErrorDetails | ConvertFrom-Json))
    Write-Host (&quot;Response headers: {0}&quot; -f ($_.Exception.Response.Headers | ConvertTo-Json))
}
$Result.items | ConvertTo-Json
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The preceding script reads a separate &lt;code&gt;client-credentials&lt;/code&gt; file so that I could gain authorization to my HPE GreenLake workspace. That way, I didn&apos;t have to include my &lt;code&gt;client-secret&lt;/code&gt; and &lt;code&gt;client-id&lt;/code&gt; in this script file, which keeps the code secure. This file, called &lt;code&gt;myCredential-rrd1.json&lt;/code&gt;, contains the JSON structure shown below. For more information on providing this client-credentials information, please see the HPE GreenLake Developer &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/#authentication&quot;&gt;website&lt;/a&gt;. There is also a blog &lt;a href=&quot;https://developer.hpe.com/blog/api-console-for-data-services-cloud-console/&quot;&gt;post&lt;/a&gt; on the HPE Developer Forum website that describes the process as well.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-JSON&quot;&gt;{
  &quot;myWorkspace&quot;: &quot;xxxxxxxxxx-yyyy-yyyy-yyyy-zzzzzzzzzzzz&quot;, 
  &quot;myId&quot;: &quot;aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee&quot;, 
  &quot;mySecret&quot;: &quot;06cffff699deeeee9f1d92f7gggggggg&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;p&gt;As expected, the execution of the above script completed successfully. The API response returned the list of tasks based on the filtering and selection properties from the example. These were tasks created by the backup-and-recovery service in my workspace.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-JSON&quot;&gt;[
  {
    &quot;associatedResources&quot;: [],
    &quot;services&quot;: [
      &quot;backup-and-recovery&quot;
    ],
    &quot;displayName&quot;: &quot;Create_individual_file_backup&quot;,
    &quot;logMessages&quot;: [
      &quot;@{message=Performing incremental copy.; timestamp=04/27/2024 09:01:22}&quot;,
      &quot;@{message=Resources acquired, volume copy preparation done.; timestamp=04/27/2024 09:01:23}&quot;,
      &quot;@{message=Submitted all copy jobs to data mover.; timestamp=04/27/2024 09:02:31}&quot;,
      &quot;@{message=Initiated Vmdk copy operation.; timestamp=04/27/2024 09:02:33}&quot;,
      &quot;@{message=Failed with retryable error code: -3403. Retrying after 1 minutes.; timestamp=04/27/2024 09:07:17}&quot;,
      &quot;@{message=Performing optimized copy since Invalid parent backup: Parent backup with id 61c86d49-2b7c-4e0a-9619-dc9bda08586d is in Reading state.; timestamp=04/27/2024 09:08:18}&quot;,
      &quot;@{message=Resetting retries as data movement successfully progressed after previous failure.; timestamp=04/27/2024 09:21:55}&quot;,
      &quot;@{message=Failed with retryable error code: -3403. Retrying after 15 minutes.; timestamp=04/27/2024 09:30:26}&quot;,
      &quot;@{message=Allocation map collection took 0.002743 seconds with 1 threads.; timestamp=04/27/2024 10:58:21}&quot;,
      &quot;@{message=Successfully completed copy operation; timestamp=04/27/2024 10:58:22}&quot;,
      &quot;@{message=Backup operation completed.; timestamp=04/27/2024 10:58:23}&quot;
    ]
  },
  {
    &quot;associatedResources&quot;: [
      &quot;@{name=0-VM-01-VVOL-DS; resourceUri=/hybrid-cloud/v1beta1/virtual-machines/1d1438c2-3ae2-52e0-b5a1-fa643903f526; type=hybrid-cloud/virtual-machine}&quot;
    ],
    &quot;services&quot;: [
      &quot;backup-and-recovery&quot;
    ],
    &quot;displayName&quot;: &quot;Delete backup [0-VM-01-VVOL-DS - 13/04/2024 03:48:59]&quot;,
    &quot;logMessages&quot;: [
      &quot;@{message=Job execution started.; timestamp=04/16/2024 08:03:26}&quot;,
      &quot;@{message=Deleting local backup.; timestamp=04/16/2024 08:03:28}&quot;,
      &quot;@{message=Deleting backup successful.; timestamp=04/16/2024 08:03:30}&quot;,
      &quot;@{message=Job execution completed.; timestamp=04/16/2024 08:03:31}&quot;
    ]
  },
  {
    &quot;associatedResources&quot;: [],
    &quot;services&quot;: [
      &quot;backup-and-recovery&quot;
    ],
    &quot;displayName&quot;: &quot;Delete MSSQL snapshot [Array_Snapshot_2024-04-27-21:35:19-K1cqhLAg - 2024-04-28T01:36:25.000Z]&quot;,
    &quot;logMessages&quot;: [
      &quot;@{message=Updating snapshot state; timestamp=04/30/2024 02:01:05}&quot;,
      &quot;@{message=Deleting volume(s) snapshot.; timestamp=04/30/2024 02:01:06}&quot;,
      &quot;@{message=Deleting snapshot; timestamp=04/30/2024 02:01:18}&quot;,
      &quot;@{message=Job execution completed.; timestamp=04/30/2024 02:01:18}&quot;
    ]
  },
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;With the introduction of new HPE GreenLake APIs in March 2024, the HPE GreenLake APIs embrace OpenAPI Standard 3.1 for their specification files. Examples of the APIs released based on the OpenAPI Standard 3.1 include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Data Services&lt;/li&gt;
&lt;li&gt;Virtualization&lt;/li&gt;
&lt;li&gt;Backup and Recovery&lt;/li&gt;
&lt;li&gt;Private Cloud Business Edition&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This blog post introduced the tools that enable any adopter to use the open-source openapi-generator to build client libraries from these spec files, after first down-converting them from OpenAPI Standard 3.1 to OpenAPI Standard 3.0. I provided an example of how to convert the HPE GreenLake API for Data Services into a PowerShell client library, and an example of how to use this client library to display the list of tasks from HPE GreenLake for Backup and Recovery in my workspace.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This conversion will no longer be required once open-source tools such as openapi-generator-cli eventually support OpenAPI Standard 3.1 specs. For more information on SDK generation from OpenAPI Standard 3.1 using this open-source tool, please follow the updates to this &lt;a href=&quot;https://github.com/OpenAPITools/openapi-generator/issues/9083&quot;&gt;issue&lt;/a&gt;. If you are using another (or a commercial) OpenAPI SDK generator tool, such as those listed &lt;a href=&quot;https://openapi.tools/#sdk&quot;&gt;here&lt;/a&gt;, you may not need the conversion tool discussed in this blog post.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Please don’t hesitate to explore the APIs for Data Services on the HPE GreenLake platform and see how you can improve your agility in managing your data. Any questions on HPE GreenLake Data Services Cloud Console API? Please join the HPE Developer Community Slack &lt;a href=&quot;https://developer.hpe.com/slack-signup/&quot;&gt;Workspace&lt;/a&gt;, and start a discussion in our &lt;a href=&quot;https://hpedev.slack.com/archives/C02D6H623JP&quot;&gt;#hpe-greenlake-data-services&lt;/a&gt; Slack channel.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Do not miss The Innovation Campus at HPE Discover 2024!]]></title><description><![CDATA[We all recognize in today’s tech world; things are moving faster than ever. And in this uber-accelerated tech landscape, innovation cannot…]]></description><link>https://developer.hpe.com/do-not-miss-the-innovation-campus-at-discover-2024/</link><guid isPermaLink="false">https://developer.hpe.com/do-not-miss-the-innovation-campus-at-discover-2024/</guid><pubDate>Tue, 21 May 2024 14:09:57 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://www.hpe.com/content/dam/hpe/shared-publishing/images-norend/discover/2024/backgrounds/Discover-2024-Sphere-Customer-Story-2-9-6-1.jpg.hpetransform/bounded-resize:width=800/image.webp&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;We all recognize that in today’s tech world, things are moving faster than ever. And in this uber-accelerated tech landscape, innovation cannot be just a buzzword – it must be the driving force behind growth, competitiveness, and relevance.&lt;/p&gt;
&lt;p&gt;At HPE, innovation is in our DNA.  We understand that staying ahead means embracing change, pushing boundaries and continually evolving, iterating and refining. Innovation permeates throughout HPE and that’s why the HPE Developer Community is excited to announce our evolved presence at &lt;a href=&quot;https://www.hpe.com/us/en/discover.html&quot;&gt;HPE Discover Las Vegas 2024&lt;/a&gt;; a space that is dedicated to highlighting our company-wide focus and commitment to innovation – The Innovation Campus.  &lt;/p&gt;
&lt;p&gt;Differing from our traditional Hack Shack, the Innovation Campus will span a new neighbourhood on the event floor, providing an immersive experience that is designed to demonstrate our focus on innovation. The Campus will walk attendees through the various stages of the innovation process, inviting them to embark on their own journeys of discovery, advancement and transformation alongside us. &lt;/p&gt;
&lt;p&gt;While the format is different, the focus on collaboration and moving your business ahead remains the same.  From inspiring ideas to empowering individuals, each element of the Innovation Campus will offer a unique glimpse into the different facets of innovation:  &lt;/p&gt;
&lt;h2&gt;Ignite&lt;/h2&gt;
&lt;h4&gt;Investing in Technology Futures – Hewlett Packard Labs &lt;/h4&gt;
&lt;p&gt;While innovation happens across HPE, this is the epicenter that leads how we innovate and ensures we are making an impact. Our approach includes technology and industry horizon scanning, technology invention and customer co-creation. At this booth, attendees can explore technologies beyond the horizon to ignite their technology future. &lt;/p&gt;
&lt;h2&gt;Engage&lt;/h2&gt;
&lt;h4&gt;Co-creating Innovation – HPE Customer Innovation Centers &lt;/h4&gt;
&lt;p&gt;Our global customer innovation centers explore edge-to-cloud opportunities in a tailored engagement. Customer Engagement teams customize customer visits to bring your initiatives to life and advance your digital journey. This is a great opportunity for attendees to immerse themselves in emerging technologies and explore how to apply them to their innovation roadmaps. &lt;/p&gt;
&lt;h2&gt;Explore&lt;/h2&gt;
&lt;h4&gt;Uncovering What’s Next – Innovation Exploration Booth  &lt;/h4&gt;
&lt;p&gt;Building a culture of innovation ensures that every employee understands your innovation priorities. At HPE we empower intelligent and curated search and navigation of innovation engineering projects, ideas, and the startup / open-source ecosystem. We also disseminate innovation maps and gap analysis, providing valuable insights. At this booth, attendees will see a demonstration of how we brought this ecosystem to life and explore what it takes to build a tool to manage the beginning funnel for innovation. &lt;/p&gt;
&lt;h2&gt;Pioneer&lt;/h2&gt;
&lt;h4&gt;Innovating Through Partners – The Start-Up Innovators Pavilion &lt;/h4&gt;
&lt;p&gt;The next stage of discovering what’s next is forming partnerships with companies that are pioneering in your industry. In our Startup Pavilion, attendees will get ideas on how to build their partnership programs to contribute to their innovation initiatives. &lt;/p&gt;
&lt;h2&gt;Inspire&lt;/h2&gt;
&lt;h4&gt;See our Innovations in Action – The Innovation Theater &lt;/h4&gt;
&lt;p&gt;In the Innovation Theater, throughout each day we will have lightning rounds that take a deep dive into some of our technology explorations.  &lt;/p&gt;
&lt;h2&gt;Empower&lt;/h2&gt;
&lt;h4&gt;Investing in Incubation – The Innovation Studio &lt;/h4&gt;
&lt;p&gt;In the Innovation Studio, attendees can try out some of HPE innovation methodologies including customer empathy interviews and co-design, as well as explore AI innovation for their business with our subject matter experts.  &lt;/p&gt;
&lt;h2&gt;Transform&lt;/h2&gt;
&lt;h4&gt;Innovation Proof of Value – Ask the Experts and Digital Transformation Advisors &lt;/h4&gt;
&lt;p&gt;Our Transformation Advisors will be available at the Innovation Campus to assist attendees in advancing their digital transformation journey. How can you assess the impact of innovation to demonstrate its value to your organization? The key is to manage your innovation efforts as a portfolio of investments. Just like any well-managed portfolio, the decisions your company makes about what not to invest in are as crucial as those it does. Our advisors are here to help you consider and make those strategic innovation-led decisions. &lt;/p&gt;
&lt;p&gt;Through these interactive experiences, we hope to inspire, educate and empower attendees to embrace innovation and to either start, or continue their journey as innovators themselves.  We will be inviting all visitors to the Innovation Campus to become part of our Innovator Community, by booking into dedicated online Innovation Group sessions and in-person workshops at our HPE Customer Innovation Centers.  These workshops are dedicated to actively listening to our customers and collaboratively exploring the realm of possibilities. They’ll be devoted to undertaking a journey of evolution alongside HPE, characterized by exploration, advancement and solution-driven progress. &lt;/p&gt;
&lt;p&gt;For anyone attending &lt;a href=&quot;https://www.hpe.com/us/en/discover.html&quot;&gt;HPE Discover Las Vegas 2024&lt;/a&gt;, we invite you to stop by the Innovation Campus and embark on this journey of discovery and transformation with us!&lt;/p&gt;
&lt;p&gt;Last but not least, you can meet two members of the HPE Developer Community team in the hands-on-Lab section, right next to the Innovation Campus. They will be welcoming you to take one of the multiple hands-on-labs available during the week, including two on the HPE GreenLake APIs.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Demystifying machine learning at scale: HPE Ezmeral Unified Analytics in action]]></title><description><![CDATA[In 2024, businesses are scuffling to seek innovative ways to leverage Generative AI to elevate customer experiences and streamline…]]></description><link>https://developer.hpe.com/title-demystifying-machine-learning-at-scale-hpe-ezmeral-unified-analytics-in-action/</link><guid isPermaLink="false">https://developer.hpe.com/title-demystifying-machine-learning-at-scale-hpe-ezmeral-unified-analytics-in-action/</guid><pubDate>Mon, 20 May 2024 20:24:58 GMT</pubDate><content:encoded>&lt;p&gt;In 2024, businesses are scuffling to seek innovative ways to leverage Generative AI to elevate customer experiences and streamline operational workflows. Traditional machine learning workflows, tools and trades are the bedrock on which the data-driven world is built – and no matter how loud the hype, that will not change anytime soon.&lt;/p&gt;
&lt;p&gt;Whilst generative AI (such as Large Language Models and ChatGPT) can be leveraged to deliver very human-like experiences to end customers, they quite often miss the mark on delivering actual, specific business value. Smaller, custom models that analyze, predict, and deliver based on relevant, tailored data will always win out when counted on to deliver invaluable insights and real, helpful customer experiences that surprise and delight. The challenge isn&apos;t just having the skills to wrangle data and develop models, but also setting up the necessary tools and infrastructure, which becomes especially difficult when dealing with large amounts of data.&lt;/p&gt;
&lt;p&gt;Enter HPE Ezmeral Unified Analytics – a single-touch, scalable deployment of the best-in-class open-source data, analytics, and machine learning tools on top of any infrastructure.&lt;/p&gt;
&lt;p&gt;In this blog post I&apos;ll take you on a journey using a smart retail experience that showcases the capabilities of HPE Ezmeral Unified Analytics to gather, process, and interpret retail data from stores across several European countries. This journey will show you how easy it is to work with data and models using HPE Ezmeral Unified Analytics and how you can leverage these tools to create cutting-edge customer experiences.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/demystifying-ml-unifiedanalytics-img1.png&quot; alt=&quot;This screenshot shows the built-in ecosystem of open source tools segmented by data engineering, analytics, and data science personas. &quot; title=&quot;HPE managed ecosystem of open source tools built into HPE Ezmeral Unified Analytics.&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Embarking on the journey: Harnessing Apache Spark for customer insights&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The journey commences with a fundamental need: understanding diverse customer shopping experiences across European regions utilizing Apache Spark on HPE Ezmeral Unified Analytics. The objective is clear yet ambitious: to analyze extensive sales data and derive actionable insights. &lt;strong&gt;Apache Spark&lt;/strong&gt;, renowned for its agility and scalability in handling voluminous datasets, emerges as the natural choice. Seamlessly integrated within HPE Ezmeral Unified Analytics, it furnishes a scalable ecosystem for data processing, eliminating the complexities of manual setups.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/demystifying-ml-spark-img2.png&quot; alt=&quot;This screenshot show the ability to create a spark session from the Unified Analytics user interface. &quot; title=&quot;Simplified Apache Spark management from HPE Ezmeral Unified Analytics. &quot;&gt;&lt;/p&gt;
&lt;p&gt;In the Smart Retail Experience, a Spark Interactive session powered by &lt;strong&gt;Apache Livy&lt;/strong&gt; generates synthetic sales data reflecting diverse currencies and store locations, and stores it in Delta tables for enhanced data reliability and managed version control.&lt;/p&gt;
&lt;p&gt;This initial phase of data prep-work lays the foundation for informed decision-making. Insights into trends, such as regional product preferences and seasonal sales peaks can now be unearthed, setting the stage for advanced, data-driven strategies.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Advancing with Apache Presto to streamline data connectivity&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The journey progresses with &lt;strong&gt;Apache Presto&lt;/strong&gt;, an integral component of HPE Ezmeral Software that amplifies Presto&apos;s capabilities, enabling the connection to and querying of various data sources seamlessly within a unified environment. With Presto, data from different stores can be amalgamated into a single view that you can easily manipulate and analyze. This seamless integration empowers real-time access to insights from data lakes, allowing for federated queries on data without worrying about the underlying complexity of tying each application to each data source individually.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/demystifying-ml-presto-img3.png&quot; alt=&quot;This screenshot shows how HPE Ezmeral Software amplifies Presto&amp;#x27;s capabilities by connecting to and querying various data sources within a unified environment. &quot; title=&quot;From a unified interface, quickly connect to and query multiple data sources, then manipulate and analyze the results. &quot;&gt;&lt;/p&gt;
&lt;p&gt;What&apos;s really impressive about Presto is its integration within the HPE Ezmeral Unified Analytics ecosystem, allowing not just for querying but directly connecting processed outputs for further analysis or visualization in tools like Apache Superset. The convenience of managing data sources, running queries, and directly utilizing this data across various stages of the MLOps pipeline underlines the unified approach of HPE Ezmeral Unified Analytics, streamlining workflows and eliminating the need for disjointed tool management.&lt;/p&gt;
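&lt;p&gt;As an illustration, a minimal sketch of querying such a federated view from Python might look like the following. It assumes the presto-python-client package and an environment-specific Presto endpoint; the host, catalog, schema, and table names are placeholders, not values from the demo:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import prestodb

# Connection details are placeholders; use the Presto endpoint exposed by
# HPE Ezmeral Unified Analytics in your environment.
conn = prestodb.dbapi.connect(
    host=&quot;presto.example.internal&quot;,
    port=8080,
    user=&quot;analyst&quot;,
    catalog=&quot;hive&quot;,
    schema=&quot;retail&quot;,
)

cur = conn.cursor()
# A federated query over the amalgamated sales view
cur.execute(&quot;SELECT country, SUM(amount) AS revenue FROM sales GROUP BY country&quot;)
for country, revenue in cur.fetchall():
    print(country, revenue)
&lt;/code&gt;&lt;/pre&gt;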
&lt;p&gt;&lt;strong&gt;Visualizing Insights: Unveiling data using Apache Superset&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Next, visualization with &lt;strong&gt;Apache Superset&lt;/strong&gt; illustrates the ease of creating engaging, insightful dashboards that outline pivotal business metrics, like the top-selling fruits, variation in sales over different years, or customer purchasing behavior. Clear visual storytelling delivers powerful insights, informing both store operations and strategic decision-making.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/demystifying-ml-superset-img4.png&quot; alt=&quot;This shows a sample of visualization capabilities within HPE Ezmeral Unified Analytics. &quot; title=&quot;Quickly generate visualizations within HPE Ezmeral Unified Analytics to reduce redundant data preparation steps. &quot;&gt;&lt;/p&gt;
&lt;p&gt;The ability to generate these visualizations from Cached Assets within the HPE Ezmeral Unified Analytics platform adds layers of efficiency—data that is queried and transformed through Presto can directly feed into Superset without redundant steps of data preparation.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Towards enhanced checkout experiences: Harnessing machine learning&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Armed with substantial insights, attention turns to enhancing the customer checkout experience through machine learning. A MobileNetV2 object recognition model is trained using &lt;strong&gt;TensorFlow&lt;/strong&gt; to recognize fresh produce via vision-based machine learning, presenting a transformative solution for streamlining checkout processes, particularly in self-service scenarios.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/demystifying-ml-mlflow-img5.png&quot; alt=&quot;This screenshot highlights integration between MLflow and HPE Ezmeral Unified Analytics for managing model training, tracking, and version control. &quot; title=&quot;Effortless integration between MLflow and HPE Ezmeral Unified analytics reduces the complexity of managing the machine learning lifecycle. &quot;&gt;&lt;/p&gt;
&lt;p&gt;In the subsequent exercises, we delve into the model-building phase using TensorFlow, followed by MLflow for managing the machine learning lifecycle, including model training, tracking, and version control.&lt;/p&gt;
&lt;p&gt;The seamless integration of HPE Ezmeral Unified Analytics with &lt;strong&gt;MLflow&lt;/strong&gt; via internally hosted Jupyter notebooks removes all the pain of managing the machine learning lifecycle, from model training and tracking to version control and storage. This integrated workflow not only simplifies the complex process of machine learning model development but also ensures models are scalable, reproducible, and ready for deployment.&lt;/p&gt;
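&lt;p&gt;As a rough sketch of what this looks like in practice, the snippet below fine-tunes a MobileNetV2 backbone with TensorFlow while MLflow autologging records parameters, metrics, and the resulting model. The dataset path, experiment name, and hyperparameters are illustrative assumptions, not the exact code from the demo:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import mlflow
import tensorflow as tf

# Autologging records parameters, metrics, and the trained model for every run
mlflow.set_experiment(&quot;smart-retail-produce&quot;)
mlflow.tensorflow.autolog()

# The dataset directory is an illustrative placeholder
train_ds = tf.keras.utils.image_dataset_from_directory(
    &quot;/data/retail/produce_images&quot;, image_size=(224, 224), batch_size=32)
num_classes = len(train_ds.class_names)

# Transfer learning on a frozen MobileNetV2 backbone
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights=&quot;imagenet&quot;)
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects inputs in [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_classes, activation=&quot;softmax&quot;),
])
model.compile(optimizer=&quot;adam&quot;,
              loss=&quot;sparse_categorical_crossentropy&quot;,
              metrics=[&quot;accuracy&quot;])

with mlflow.start_run():
    model.fit(train_ds, epochs=5)
&lt;/code&gt;&lt;/pre&gt;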
&lt;p&gt;&lt;strong&gt;Seamless model deployment with KServe&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;With the model primed for deployment, the focus shifts to deployment logistics. KServe, a model serving framework on HPE Ezmeral, emerges as the solution, enabling efficient deployment and management of machine learning models. Noteworthy is its capability to auto-scale based on demand and seamlessly integrate with existing Kubernetes infrastructure, all under the umbrella of HPE Ezmeral Unified Analytics.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/demystifying-ml-kserve-img6.png&quot; alt=&quot;This screenshot depicts how KServe and HPE Ezmeral work together to efficiently deploy and manage machine learning models. &quot; title=&quot;Flawless integration between KServe and HPE Ezmeral Software simplifies ML model deployment, management, and auto-scaling with existing Kubernetes infrastructure. &quot;&gt;&lt;/p&gt;
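&lt;p&gt;Once the model is served, client applications can call the standard KServe v1 prediction endpoint over HTTP. The sketch below is illustrative only; the inference URL and model name are assumptions that depend on how your InferenceService is exposed:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import json

import requests

# The inference URL and model name are assumptions; use the values shown for
# your InferenceService in HPE Ezmeral Unified Analytics.
url = &quot;https://produce-classifier.models.example.com/v1/models/produce-classifier:predict&quot;

# One 224x224x3 image, encoded as nested lists per the TensorFlow Serving v1 protocol
payload = {&quot;instances&quot;: [[[[0.0] * 3] * 224] * 224]}

response = requests.post(url, data=json.dumps(payload), timeout=30)
print(response.json()[&quot;predictions&quot;])
&lt;/code&gt;&lt;/pre&gt;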
&lt;p&gt;&lt;strong&gt;Bringing it all together&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The Smart Retail Experience culminates with the transition from background operations to front-end application, where a custom-built smart retail application leverages the served model to identify fresh produce through a clean user interface and a connected webcam.&lt;/p&gt;
&lt;p&gt;Just like a self-checkout lane at a store, this application demonstrates how HPE Ezmeral Unified Analytics lets you easily deploy machine learning models you build and use.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/alex-blog-visual-ai-orange.png&quot; alt=&quot;This picture shows a human presenting an orange to a checkout webcam. &quot; title=&quot;Leveraging the served model, the smart retail application can identify fresh produce through a connected webcam and clean user interface. &quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/retail2-with-pricing-.png&quot; alt=&quot;This picture shows the end result of the ML model and webcam simplifying a retail checkout process by identifying and placing the cost of an orange in the customer&amp;#x27;s virtual basket. &quot; title=&quot;The end result: the model identifies the orange and adds its cost to the customer&amp;#x27;s checkout basket. &quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Summary&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;See how easy it is to deliver data analytics and machine learning projects with HPE Ezmeral Unified Analytics in our technical demonstration. By leveraging powerful open-source tools, businesses can create unique and impactful customer experiences.&lt;/p&gt;
&lt;p&gt;Thank you for accompanying us on this detailed exploration of data analytics, machine learning, and application deployment within the retail sector using HPE Ezmeral Unified Analytics. We trust this tutorial has informed and inspired you, propelling your business towards transformative possibilities.&lt;/p&gt;
&lt;p&gt;Don&apos;t miss future updates and tutorials from Hewlett Packard Enterprise as we explore how AI can revolutionize various industries.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Activation Memory: What is it?]]></title><description><![CDATA[External blog post]]></description><link>https://developer.hpe.com/activation-memory-what-is-it/</link><guid isPermaLink="false">https://developer.hpe.com/activation-memory-what-is-it/</guid><pubDate>Wed, 15 May 2024 17:54:38 GMT</pubDate><content:encoded>&lt;p&gt;External blog post&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE iLOrest as a PyPI package]]></title><description><![CDATA[Good news!!! The HPE iLOrest tool has been repackaged into both source and binary distributions and is now available on PyPI. This means it…]]></description><link>https://developer.hpe.com/ilorest-as-a-pypi-package/</link><guid isPermaLink="false">https://developer.hpe.com/ilorest-as-a-pypi-package/</guid><pubDate>Wed, 15 May 2024 07:52:27 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;p&gt;Good news!!! The &lt;a href=&quot;https://servermanagementportal.ext.hpe.com/docs/redfishclients/ilorest-userguide/&quot;  target=&quot;_blank&quot;&gt;HPE iLOrest tool&lt;/a&gt; has been repackaged into both source and binary distributions and is now available on &lt;a href=&quot;https://pypi.org/project/ilorest/&quot;&gt;PyPI&lt;/a&gt;. This means it can be easily utilized on any operating system that has Python 3 installed. The intention is for the PyPI package to replace the existing builds for macOS, Debian and Ubuntu distributions of HPE iLOrest.&lt;/p&gt;
&lt;p&gt;Here are the steps to install HPE iLOrest from &lt;a href=&quot;https://pypi.org/project/ilorest/&quot;&gt;PyPI&lt;/a&gt;:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Ensure that &lt;a href=&quot;https://www.python.org/downloads/&quot;&gt;Python 3&lt;/a&gt; is installed on your operating system.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Check if pip3 is installed. If not, on Ubuntu or Debian, you can run the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ sudo apt install python3-pip 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;On other Linux distributions, you can download and run the official pip installer:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ wget https://bootstrap.pypa.io/get-pip.py
$ python3 get-pip.py
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In most cases, pip3 will already be available on macOS and Microsoft Windows.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Once &lt;code&gt;pip3&lt;/code&gt; is installed, and before installing HPE iLOrest, make sure the DMTF Redfish library is not installed, as mentioned in the &lt;a href=&quot;https://servermanagementportal.ext.hpe.com/docs/redfishclients/python-redfish-library/installationguide/#pip-install&quot; target=&quot;_blank&quot;&gt;HPE iLOrest user guide&lt;/a&gt;, and then install the HPE iLOrest PyPI package using the following commands:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ pip3 uninstall redfish
$ pip3 install ilorest
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;With the PyPI package installation, &lt;a href=&quot;https://developer.hpe.com/blog/chif-driver-not-found/&quot;&gt;ilorest_chif.dll/.so&lt;/a&gt; will also be installed in site-packages.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If you installed the HPE iLOrest PyPI package on an iLO-based server, you can verify the local (in-band) login by running:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ iLOrest -v login
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You can check the location of iLOrest using the command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ find / -name iLOrest
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;br /&gt;
&lt;h2&gt;Notes:&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The PyPI package can also be used on ARM-based operating systems if &lt;a href=&quot;https://www.python.org/downloads/&quot;&gt;Python 3&lt;/a&gt; is present.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Additionally, the HPE iLOrest PyPI package can be utilized on RHEL, SLES and Microsoft Windows platforms as long as &lt;a href=&quot;https://www.python.org/downloads/&quot;&gt;Python 3&lt;/a&gt; is installed (preferably version &gt; 3.8).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;DMTF&apos;s &lt;a href=&quot;https://pypi.org/project/redfish/&quot;&gt;Redfish&lt;/a&gt; library cannot coexist with the HPE &lt;a href=&quot;https://pypi.org/project/python-ilorest-library/&quot;&gt;Python ilorest library&lt;/a&gt;, which is a dependency of the HPE iLOrest PyPI package. So, make sure to remove any &lt;a href=&quot;https://pypi.org/project/redfish/&quot;&gt;Redfish&lt;/a&gt; library you may have installed prior to installing the HPE Python iLOrest library, using the command shown below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ pip3 uninstall redfish
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In an air-gapped environment, the HPE iLOrest PyPI package can be downloaded from the &lt;a href=&quot;https://pypi.org/project/ilorest/&quot; target=&quot;_blank&quot;&gt;PyPI repository&lt;/a&gt; and installed using the following command. Dependencies may need to be installed separately.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ pip3 install &amp;#x3C;path to the downloaded PyPI package&gt;  
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To perform a clean removal of the HPE iLOrest PyPI package, don&apos;t forget to also uninstall the associated Python library, as shown in the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ pip3 uninstall --yes python-ilorest-library ilorest  
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Summary:&lt;/h2&gt;
&lt;p&gt;By following the instructions I&apos;ve outlined above, you can simplify the installation of and start using the HPE iLOrest tool on PyPI rather than using rpm or msi. It&apos;s really quick and easy to get started. For more information on HPE iLO, along with some tips and tricks in working with it, make sure you check out the HPE &lt;a href=&quot;https://developer.hpe.com/blog/&quot;&gt;Developer&lt;/a&gt; blog regularly.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Generic Linear Multistep Method Evaluator using Chapel]]></title><description><![CDATA[external blog]]></description><link>https://developer.hpe.com/generic-linear-multistep-method-evaluator-using-chapel/</link><guid isPermaLink="false">https://developer.hpe.com/generic-linear-multistep-method-evaluator-using-chapel/</guid><pubDate>Tue, 14 May 2024 00:29:12 GMT</pubDate><content:encoded>&lt;p&gt;external blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Getting started with Private Cloud Business Edition APIs]]></title><description><![CDATA[HPE GreenLake for Private Cloud Business Edition Deploy an agile, self-service private cloud wherever you need it. Simplify VM management…]]></description><link>https://developer.hpe.com/getting-started-with-private-cloud-business-edition-apis/</link><guid isPermaLink="false">https://developer.hpe.com/getting-started-with-private-cloud-business-edition-apis/</guid><pubDate>Mon, 13 May 2024 13:19:00 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
  line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;h2&gt;HPE GreenLake for Private Cloud Business Edition&lt;/h2&gt;
&lt;p&gt;Deploy an agile, self-service private cloud wherever you need it. Simplify VM management across on-prem and public clouds as you leverage HPE’s modern hyperconverged infrastructure to build a workload-optimized private cloud.&lt;/p&gt;
&lt;p&gt;Every organization wants to unleash the power of its data to drive digital transformation. But fragmented infrastructure complexity, manual processes across diverse hybrid cloud environments, and infrastructure silos spanning edge to cloud are impeding data-driven innovation and agility and creating business risk.
Customers are looking for a radically simplified experience for data infrastructure at scale across the lifecycle — from streamlined device deployment, to virtual machine (VM) provisioning, to global VM and infrastructure management.
They are looking for a cloud-native control plane that scales with the infrastructure, so that managing hundreds of systems across geographies is as simple as managing one.&lt;/p&gt;
&lt;h2&gt;Simplify infrastructure operations&lt;/h2&gt;
&lt;p&gt;With cloud agility in mind, Hewlett Packard Enterprise provides HPE GreenLake for Private Cloud Business Edition, delivering VMs on demand across the hybrid cloud. With this offering, customers can build their self-service cloud on demand, when they need it and where they need it: in data centers, at the edge, and beyond. HPE GreenLake for Private Cloud Business Edition changes the way customers procure and manage VMs with a cloud operational experience, and it helps eliminate complexity with a unified cloud service for VM-to-infrastructure management, including public cloud VMs.
The solution, delivered through the Data Services Cloud Console, enables global unified management and monitoring of VMs and infrastructure deployed at any location. It also supports one-click, multisite upgrades. This enables IT to reduce operating costs while helping optimize resource utilization, move to a generalist model, and shift from managing infrastructure to managing VMs and data, thereby refocusing resources and skills on higher-value strategic initiatives.&lt;/p&gt;
&lt;h2&gt;Automating management&lt;/h2&gt;
&lt;p&gt;Besides the browser-based Data Services Cloud Console, HPE GreenLake for Private Cloud Business Edition offers a set of Application Programming Interfaces (APIs) to automate its management or even integrate it with the customer&apos;s own management tools. These APIs are governed by role-based access controls (RBAC), just like regular users of the browser-based console. API calls authenticate using a token, which is created from the client_id and client_secret key pair and has the same permissions as the user who created that key pair. Each API call is audited and logged by the HPE GreenLake platform and is listed under the user&apos;s ID (usually the email address of the user).&lt;/p&gt;
&lt;p&gt;In this series of blog posts, I will demonstrate how to connect to the APIs and how they can be used in scripting or integrated into management software. Python will be used in the examples; however, any other programming language that supports the HTTP protocol can be used.&lt;/p&gt;
&lt;h3&gt;Configuring API client credentials&lt;/h3&gt;
&lt;p&gt;To get started with the API, you&apos;ll need to create a client_id/client_secret pair to authenticate each API request. This key pair is linked to the user who creates it, and every interaction using the resulting token is registered in the audit log. The token generated from this key pair has the same permissions as the user who created it, is valid for two hours, and expires automatically.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Sign in to HPE GreenLake, select your Workspace, then select Manage Workspace.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click the API card.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/workspace-small.png&quot; alt=&quot;&quot; title=&quot;Manage workspace&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click Create Credentials.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/create_credentials.png&quot; alt=&quot;&quot; title=&quot;Click create credentials&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select the Service Manager you want to access.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Enter the Credential Name.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click the Create Credentials button to continue. The Credentials Created screen displays showing your credentials were successfully created.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Copy the Client ID and the Client Secret to a safe and secure location. HPE GreenLake platform does not store your Client Secret.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select the Copy icon to save your information to your desktop.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click the Close button to continue. You are returned to the main API page, where you can now generate the access token using a sample code provided.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Example code:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from oauthlib.oauth2 import BackendApplicationClient         
from requests.auth import HTTPBasicAuth         
from requests_oauthlib import OAuth2Session         

client = BackendApplicationClient(&apos;e21f3028-8097-4a4f-b491-a49b1d102d4a&apos;)         
        
oauth = OAuth2Session(client=client)         
auth = HTTPBasicAuth(&apos;e21f3028-8097-4a4f-b491-a49b1d102d4a&apos;, &apos;05656940014f11ef9946a2408e898685&apos;)         
   
token = oauth.fetch_token(token_url=&apos;https://sso.common.cloud.hpe.com/as/token.oauth2&apos;, auth=auth)         
print(token[&quot;access_token&quot;])
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Note&lt;/em&gt;: Make sure the modules oauthlib, requests, and requests_oauthlib are installed in your environment. If not, you can install them using:&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;pip install oauthlib
pip install requests
pip install requests_oauthlib
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When running the example code, JSON-formatted data will be returned, like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
    &quot;access_token&quot;: &quot;eyJhbGciOiJSUzI1NiIsImtpZCI6IjJGSmhvZ1lRMDZNazNBc2Q4UU8zU09ZVE9wayIsInBpLmF0bSI6ImRlejAifQ.eyJjbGllbnRfaWQiOiJlMjFmMzAyOC04MDk3LTRhNGYtYjQ5MS1hNDliMWQxMDJkNGEiLCJpc3MiOiJodHRwczovL3Nzby5jb21tb24uY2xvdWQuaHBlLmNvbSIsImF1ZCI6ImV4dGVybmFsX2FwaSIsInN1YiI6Im1hcmsudmFuLnNpbGZob3V0QGhwZS5jb20iLCJ1c2VyX2N0eCI6ImVmMDc3MDVjOGVhNjExZWNiNzNiMWEyNTZjMDNiNTQ2IiwiYXV0aF9zb3VyY2UiOiJjY3NfdG9rZW5fbWFuYWdlbWVudCIsInBsYXRmb3JtX2N1c3RvbWVyX2lkIjoiZDI2YmFmMjY4ZWE2MTFlYzljZmUwMmMwMzQ1YzI0M2MiLCJpYXQiOjE3MTM4NjI4OTYsImFwcGxpY2F0aW9uX2luc3RhbmNlX2lkIjoiZWYwMTA4MjMtYjMwMS00MWJmLTk0OWMtYTQ3NzE0OTRiMzg1IiwiZXhwIjoxNzEzODcwMDk2fQ.arTNjVsVZ-wcW5Ic5fuGUvBKAupzWvRokuvdzlW1I2UqXFoqp0-jw1cHVr-QgDUlBPXG6uc-aILnYXWN_h-QWOQQu1c8aTbLaqpfXEL89MndPyErF0x4By21JLoR1mq-8zkMEEJ2CHGOeBYau_hBim-SBr1BtcetX3BcFl4GoGRGS5lzL1nbwkC-Hi_OOs_2UnDH4ajyyPsp_Ka4wmsgHn1aSQhJjDFbxx4WiLmRdp8aNZT5r250v5EWR9qVeqE4TDOfKAf_BBhC2RB00mWAt1F_Rd4rPmMyZm0uPZd5f71aDjd3I5tl0gh-W2Z4HHt0jRKV-L1d4o52jzYkhL1nTA&quot;,
    &quot;expires_at&quot;: 1713870095.208547,
    &quot;expires_in&quot;: 7199,
    &quot;token_type&quot;: &quot;Bearer&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The JSON data holds the access_token itself and also the expiration time, which is 2 hours (7200 seconds).&lt;/p&gt;
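&lt;p&gt;Because the token is only valid for two hours, longer-running scripts need to handle expiry. A minimal sketch, reusing the &lt;code&gt;oauth&lt;/code&gt;, &lt;code&gt;auth&lt;/code&gt;, and &lt;code&gt;token&lt;/code&gt; variables from the example above, might look like this (illustrative only, not part of any official SDK):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import time

# Re-use the cached token until shortly before it expires, then fetch a new one.
# oauth, auth, and token come from the example above.
def get_access_token():
    global token
    if token[&quot;expires_at&quot;] - time.time() &lt; 60:  # less than a minute of validity left
        token = oauth.fetch_token(token_url=&quot;https://sso.common.cloud.hpe.com/as/token.oauth2&quot;, auth=auth)
    return token[&quot;access_token&quot;]
&lt;/code&gt;&lt;/pre&gt;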
&lt;blockquote&gt;
&lt;p&gt;Note: For simplicity and demo purposes, we store the client_id and client_secret in the code; obviously, that should never be done in production! How to implement a more secure method will be explained in the next episode of this series.&lt;/p&gt;
&lt;/blockquote&gt;
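&lt;p&gt;As a preview of that approach, one common option is to keep the client_secret in the operating system keychain instead of in the script, for example with the Python keyring package. The service and account names below are illustrative assumptions:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import keyring

# Store the client secret once in the operating system keychain
# (the service name and the client_id used as the account name are illustrative):
keyring.set_password(&quot;hpe-greenlake&quot;, &quot;e21f3028-8097-4a4f-b491-a49b1d102d4a&quot;, &quot;&lt;client_secret&gt;&quot;)

# Later, scripts read it back instead of hard-coding it:
client_secret = keyring.get_password(&quot;hpe-greenlake&quot;, &quot;e21f3028-8097-4a4f-b491-a49b1d102d4a&quot;)
&lt;/code&gt;&lt;/pre&gt;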
&lt;h2&gt;Calling an API&lt;/h2&gt;
&lt;p&gt;A list of available APIs for HPE GreenLake for Private Cloud Business Edition can be found at &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/private-cloud-business/public/&quot;&gt;https://developer.greenlake.hpe.com/docs/greenlake/services/private-cloud-business/public/&lt;/a&gt;. At the time this blog post was published, the APIs were still in beta; however, the GA release is planned to follow shortly.&lt;/p&gt;
&lt;p&gt;In this example you will collect an overview of all System Software Catalogs.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import json

import requests
from oauthlib.oauth2 import BackendApplicationClient
from requests.auth import HTTPBasicAuth
from requests_oauthlib import OAuth2Session

# Retrieve an access token using the client_id/client_secret pair
client = BackendApplicationClient(&apos;e21f3028-8097-4a4f-b491-a49b1d102d4a&apos;)
oauth = OAuth2Session(client=client)
auth = HTTPBasicAuth(&apos;e21f3028-8097-4a4f-b491-a49b1d102d4a&apos;, &apos;05656940014f11ef9946a2408e898685&apos;)
token = oauth.fetch_token(token_url=&apos;https://sso.common.cloud.hpe.com/as/token.oauth2&apos;, auth=auth)
access_token = token[&quot;access_token&quot;]

# Call the Private Cloud Business Edition API with the bearer token
url = &quot;https://us1.data.cloud.hpe.com/private-cloud-business/v1beta1/system-software-catalogs&quot;
headers = {&quot;Authorization&quot;: f&quot;Bearer {access_token}&quot;}
response = requests.get(url, headers=headers)
print(json.dumps(response.json(), indent=4, sort_keys=True))
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will generate a JSON formatted data stream like:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
    &quot;count&quot;: 10,
    &quot;items&quot;: [
        {
            &quot;createdAt&quot;: &quot;2024-03-11T01:56:04Z&quot;,
            &quot;customerId&quot;: &quot;00000000000000000000000000000000&quot;,
            &quot;eula&quot;: &quot;https://update.nimblestorage.com/catalog/download/eula-rel-pebble-400.html&quot;,
            &quot;generation&quot;: 991063,
            &quot;hypervisor&quot;: null,
            &quot;hypervisorManager&quot;: null,
            &quot;id&quot;: &quot;f3b72767-4f59-4af5-8e34-cc2828baa954&quot;,
            &quot;name&quot;: &quot;2.98&quot;,
            &quot;releaseDate&quot;: &quot;2024-03-05&quot;,
            &quot;resourceUri&quot;: &quot;/private-cloud-business/v1beta1/system-software-catalogs/f3b72767-4f59-4af5-8e34-cc2828baa954&quot;,
            &quot;serverFirmware&quot;: null,
            &quot;storageConnectionManager&quot;: null,
            &quot;storageSoftware&quot;: null,
            &quot;systemsWithUpdatePath&quot;: null,
            &quot;type&quot;: &quot;private-cloud-business/system-software-catalog&quot;,
            &quot;updatedAt&quot;: &quot;2024-05-03T13:32:55Z&quot;,
            &quot;version&quot;: &quot;2.98&quot;
        },
        {
            &quot;createdAt&quot;: &quot;2024-03-11T01:58:06Z&quot;,
            &quot;customerId&quot;: &quot;00000000000000000000000000000000&quot;,
            &quot;eula&quot;: &quot;https://update.nimblestorage.com/catalog/download/eula-rel-pebble-400.html&quot;,
            &quot;generation&quot;: 570404,
            &quot;hypervisor&quot;: null,
            &quot;hypervisorManager&quot;: null,
            &quot;id&quot;: &quot;993fae5e-a928-458e-8547-87e29f956952&quot;,
            &quot;name&quot;: &quot;7.61.34.18.37&quot;,
            &quot;releaseDate&quot;: &quot;2024-03-05&quot;,
            &quot;resourceUri&quot;: &quot;/private-cloud-business/v1beta1/system-software-catalogs/993fae5e-a928-458e-8547-87e29f956952&quot;,
            &quot;serverFirmware&quot;: null,
            &quot;storageConnectionManager&quot;: null,
            &quot;storageSoftware&quot;: null,
            &quot;systemsWithUpdatePath&quot;: null,
            &quot;type&quot;: &quot;private-cloud-business/system-software-catalog&quot;,
            &quot;updatedAt&quot;: &quot;2024-05-03T13:31:52Z&quot;,
            &quot;version&quot;: &quot;7.61.34.18.37&quot;
        },
        {
            &quot;createdAt&quot;: &quot;2024-03-11T01:54:04Z&quot;,
            &quot;customerId&quot;: &quot;00000000000000000000000000000000&quot;,
            &quot;eula&quot;: &quot;https://update.nimblestorage.com/catalog/download/eula-rel-pebble-400.html&quot;,
            &quot;generation&quot;: 970638,
            &quot;hypervisor&quot;: null,
            &quot;hypervisorManager&quot;: null,
            &quot;id&quot;: &quot;6cff3aef-c12c-41e4-8934-1e02fa19492a&quot;,
            &quot;name&quot;: &quot;7.6.35.18.36&quot;,
            &quot;releaseDate&quot;: &quot;2024-03-05&quot;,
            &quot;resourceUri&quot;: &quot;/private-cloud-business/v1beta1/system-software-catalogs/6cff3aef-c12c-41e4-8934-1e02fa19492a&quot;,
            &quot;serverFirmware&quot;: null,
            &quot;storageConnectionManager&quot;: null,
            &quot;storageSoftware&quot;: null,
            &quot;systemsWithUpdatePath&quot;: null,
            &quot;type&quot;: &quot;private-cloud-business/system-software-catalog&quot;,
            &quot;updatedAt&quot;: &quot;2024-05-03T13:32:55Z&quot;,
            &quot;version&quot;: &quot;7.6.35.18.36&quot;
        },
        {
            &quot;createdAt&quot;: &quot;2023-11-02T18:39:13Z&quot;,
            &quot;customerId&quot;: &quot;00000000000000000000000000000000&quot;,
            &quot;eula&quot;: &quot;https://update.nimblestorage.com/catalog/download/eula-rel-juno-1100.html&quot;,
            &quot;generation&quot;: 3181010,
            &quot;hypervisor&quot;: {
                &quot;name&quot;: &quot;ESXi 7.0U2 May 2021&quot;,
                &quot;releaseDate&quot;: &quot;2021-05-13&quot;,
                &quot;releaseNotesURL&quot;: &quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00113634en_us&quot;,
                &quot;version&quot;: &quot;7.0.2-17867351&quot;
            },
            &quot;hypervisorManager&quot;: null,
            &quot;id&quot;: &quot;2df73503-6b84-41b7-9c49-eb23b6e850b2&quot;,
            &quot;name&quot;: &quot;1.70&quot;,
            &quot;releaseDate&quot;: &quot;2023-06-01&quot;,
            &quot;resourceUri&quot;: &quot;/private-cloud-business/v1beta1/system-software-catalogs/2df73503-6b84-41b7-9c49-eb23b6e850b2&quot;,
            &quot;serverFirmware&quot;: {
                &quot;name&quot;: &quot;Service Pack for ProLiant&quot;,
                &quot;releaseDate&quot;: &quot;2022-09-28&quot;,
                &quot;releaseNotesURL&quot;: &quot;https://internal.support.hpe.com/hpesc/public/docDisplay?docLocale=en_US&amp;#x26;docId=a00127259en_us&quot;,
                &quot;version&quot;: &quot;2022.09.1&quot;
            },
            &quot;storageConnectionManager&quot;: {
                &quot;name&quot;: &quot;Nimble Connection Manager&quot;,
                &quot;releaseDate&quot;: &quot;2020-10-21&quot;,
                &quot;releaseNotesURL&quot;: &quot;https://infosight.hpe.com/InfoSight/media/software/active/2/351/NCM701_RelNotes2.pdf&quot;,
                &quot;version&quot;: &quot;7.0.1-700003&quot;
            },
            &quot;storageSoftware&quot;: {
                &quot;name&quot;: &quot;Nimble OS&quot;,
                &quot;releaseDate&quot;: &quot;2023-06-13&quot;,
                &quot;releaseNotesURL&quot;: &quot;https://infosight.hpe.com/InfoSight/media/cms/active/NimbleOS_Release_Notes_5.2.1.1100.pdf&quot;,
                &quot;version&quot;: &quot;5.2.1.1100-1027043&quot;
            },
            &quot;systemsWithUpdatePath&quot;: null,
            &quot;type&quot;: &quot;private-cloud-business/system-software-catalog&quot;,
            &quot;updatedAt&quot;: &quot;2024-05-03T13:32:55Z&quot;,
            &quot;version&quot;: &quot;1.70&quot;
        },
        {
            &quot;createdAt&quot;: &quot;2024-03-11T01:54:04Z&quot;,
            &quot;customerId&quot;: &quot;00000000000000000000000000000000&quot;,
            &quot;eula&quot;: &quot;https://update.nimblestorage.com/catalog/download/eula-rel-pebble-400.html&quot;,
            &quot;generation&quot;: 952334,
            &quot;hypervisor&quot;: null,
            &quot;hypervisorManager&quot;: null,
            &quot;id&quot;: &quot;7332fe0c-9c98-4bf8-8071-d7b2cd8e8b37&quot;,
            &quot;name&quot;: &quot;7.6.34.18.36&quot;,
            &quot;releaseDate&quot;: &quot;2024-03-05&quot;,
            &quot;resourceUri&quot;: &quot;/private-cloud-business/v1beta1/system-software-catalogs/7332fe0c-9c98-4bf8-8071-d7b2cd8e8b37&quot;,
            &quot;serverFirmware&quot;: null,
            &quot;storageConnectionManager&quot;: null,
            &quot;storageSoftware&quot;: null,
            &quot;systemsWithUpdatePath&quot;: null,
            &quot;type&quot;: &quot;private-cloud-business/system-software-catalog&quot;,
            &quot;updatedAt&quot;: &quot;2024-05-03T13:32:55Z&quot;,
            &quot;version&quot;: &quot;7.6.34.18.36&quot;
        },
        {
            &quot;createdAt&quot;: &quot;2024-03-11T01:56:04Z&quot;,
            &quot;customerId&quot;: &quot;00000000000000000000000000000000&quot;,
            &quot;eula&quot;: &quot;https://update.nimblestorage.com/catalog/download/eula-rel-pebble-400.html&quot;,
            &quot;generation&quot;: 988428,
            &quot;hypervisor&quot;: null,
            &quot;hypervisorManager&quot;: null,
            &quot;id&quot;: &quot;50247e6c-13e1-426e-80d5-72aac49cbf3a&quot;,
            &quot;name&quot;: &quot;2.97&quot;,
            &quot;releaseDate&quot;: &quot;2024-03-05&quot;,
            &quot;resourceUri&quot;: &quot;/private-cloud-business/v1beta1/system-software-catalogs/50247e6c-13e1-426e-80d5-72aac49cbf3a&quot;,
            &quot;serverFirmware&quot;: null,
            &quot;storageConnectionManager&quot;: null,
            &quot;storageSoftware&quot;: null,
            &quot;systemsWithUpdatePath&quot;: null,
            &quot;type&quot;: &quot;private-cloud-business/system-software-catalog&quot;,
            &quot;updatedAt&quot;: &quot;2024-05-03T13:32:55Z&quot;,
            &quot;version&quot;: &quot;2.97&quot;
        },
        {
            &quot;createdAt&quot;: &quot;2023-11-02T18:39:13Z&quot;,
            &quot;customerId&quot;: &quot;00000000000000000000000000000000&quot;,
            &quot;eula&quot;: &quot;https://update.nimblestorage.com/catalog/download/eula-rel-juno-1100.html&quot;,
            &quot;generation&quot;: 3169145,
            &quot;hypervisor&quot;: {
                &quot;name&quot;: &quot;ESXi 7.0U1 May 2021&quot;,
                &quot;releaseDate&quot;: &quot;2021-05-13&quot;,
                &quot;releaseNotesURL&quot;: &quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00113633en_us&quot;,
                &quot;version&quot;: &quot;7.0.1-17551050&quot;
            },
            &quot;hypervisorManager&quot;: null,
            &quot;id&quot;: &quot;2b1b935a-6640-4f65-bbbc-02141582de20&quot;,
            &quot;name&quot;: &quot;1.69&quot;,
            &quot;releaseDate&quot;: &quot;2023-06-01&quot;,
            &quot;resourceUri&quot;: &quot;/private-cloud-business/v1beta1/system-software-catalogs/2b1b935a-6640-4f65-bbbc-02141582de20&quot;,
            &quot;serverFirmware&quot;: {
                &quot;name&quot;: &quot;Service Pack for ProLiant&quot;,
                &quot;releaseDate&quot;: &quot;2021-05-21&quot;,
                &quot;releaseNotesURL&quot;: &quot;https://downloads.hpe.com/pub/softlib2/software1/publishable-catalog/p291731480/v138520/SPP2021.05.0ReleaseNotes.pdf&quot;,
                &quot;version&quot;: &quot;2021.05.0&quot;
            },
            &quot;storageConnectionManager&quot;: {
                &quot;name&quot;: &quot;Nimble Connection Manager&quot;,
                &quot;releaseDate&quot;: &quot;2020-10-21&quot;,
                &quot;releaseNotesURL&quot;: &quot;https://infosight.hpe.com/InfoSight/media/software/active/2/351/NCM701_RelNotes2.pdf&quot;,
                &quot;version&quot;: &quot;7.0.1-700003&quot;
            },
            &quot;storageSoftware&quot;: {
                &quot;name&quot;: &quot;Nimble OS&quot;,
                &quot;releaseDate&quot;: &quot;2023-06-13&quot;,
                &quot;releaseNotesURL&quot;: &quot;https://infosight.hpe.com/InfoSight/media/cms/active/NimbleOS_Release_Notes_5.2.1.1100.pdf&quot;,
                &quot;version&quot;: &quot;5.2.1.1100-1027043&quot;
            },
            &quot;systemsWithUpdatePath&quot;: null,
            &quot;type&quot;: &quot;private-cloud-business/system-software-catalog&quot;,
            &quot;updatedAt&quot;: &quot;2024-05-03T13:32:55Z&quot;,
            &quot;version&quot;: &quot;1.69&quot;
        },
        {
            &quot;createdAt&quot;: &quot;2024-02-20T02:56:40Z&quot;,
            &quot;customerId&quot;: &quot;00000000000000000000000000000000&quot;,
            &quot;eula&quot;: &quot;https://update.nimblestorage.com/catalog/download/eula-rel-juno-1100.html&quot;,
            &quot;generation&quot;: 1447550,
            &quot;hypervisor&quot;: null,
            &quot;hypervisorManager&quot;: null,
            &quot;id&quot;: &quot;6b692b44-ce0d-44a2-8a19-4d9cbfd9330f&quot;,
            &quot;name&quot;: &quot;1.77&quot;,
            &quot;releaseDate&quot;: &quot;2024-02-11&quot;,
            &quot;resourceUri&quot;: &quot;/private-cloud-business/v1beta1/system-software-catalogs/6b692b44-ce0d-44a2-8a19-4d9cbfd9330f&quot;,
            &quot;serverFirmware&quot;: null,
            &quot;storageConnectionManager&quot;: null,
            &quot;storageSoftware&quot;: null,
            &quot;systemsWithUpdatePath&quot;: null,
            &quot;type&quot;: &quot;private-cloud-business/system-software-catalog&quot;,
            &quot;updatedAt&quot;: &quot;2024-05-03T13:32:55Z&quot;,
            &quot;version&quot;: &quot;1.77&quot;
        },
        {
            &quot;createdAt&quot;: &quot;2024-03-11T01:57:07Z&quot;,
            &quot;customerId&quot;: &quot;00000000000000000000000000000000&quot;,
            &quot;eula&quot;: &quot;https://update.nimblestorage.com/catalog/download/eula-rel-pebble-400.html&quot;,
            &quot;generation&quot;: 644305,
            &quot;hypervisor&quot;: null,
            &quot;hypervisorManager&quot;: null,
            &quot;id&quot;: &quot;f66777dd-e380-4319-9a32-a7ee24ff4cd4&quot;,
            &quot;name&quot;: &quot;7.61.35.18.37&quot;,
            &quot;releaseDate&quot;: &quot;2024-03-05&quot;,
            &quot;resourceUri&quot;: &quot;/private-cloud-business/v1beta1/system-software-catalogs/f66777dd-e380-4319-9a32-a7ee24ff4cd4&quot;,
            &quot;serverFirmware&quot;: null,
            &quot;storageConnectionManager&quot;: null,
            &quot;storageSoftware&quot;: null,
            &quot;systemsWithUpdatePath&quot;: null,
            &quot;type&quot;: &quot;private-cloud-business/system-software-catalog&quot;,
            &quot;updatedAt&quot;: &quot;2024-05-03T13:31:52Z&quot;,
            &quot;version&quot;: &quot;7.61.35.18.37&quot;
        },
        {
            &quot;createdAt&quot;: &quot;2024-03-11T01:57:05Z&quot;,
            &quot;customerId&quot;: &quot;00000000000000000000000000000000&quot;,
            &quot;eula&quot;: &quot;https://update.nimblestorage.com/catalog/download/eula-rel-pebble-400.html&quot;,
            &quot;generation&quot;: 990969,
            &quot;hypervisor&quot;: null,
            &quot;hypervisorManager&quot;: null,
            &quot;id&quot;: &quot;3f3a18be-00f8-4478-9a40-b0b316d47771&quot;,
            &quot;name&quot;: &quot;2.99&quot;,
            &quot;releaseDate&quot;: &quot;2024-03-05&quot;,
            &quot;resourceUri&quot;: &quot;/private-cloud-business/v1beta1/system-software-catalogs/3f3a18be-00f8-4478-9a40-b0b316d47771&quot;,
            &quot;serverFirmware&quot;: null,
            &quot;storageConnectionManager&quot;: null,
            &quot;storageSoftware&quot;: null,
            &quot;systemsWithUpdatePath&quot;: null,
            &quot;type&quot;: &quot;private-cloud-business/system-software-catalog&quot;,
            &quot;updatedAt&quot;: &quot;2024-05-03T13:32:55Z&quot;,
            &quot;version&quot;: &quot;2.99&quot;
        }
    ],
    &quot;offset&quot;: 0,
    &quot;total&quot;: 11
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In the next blog post in this series we will have a look at a more secure implementation of the client_secret, using a key chain manager to read the client_secret.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[New ways of handling GenAI, new HPE GreenLake APIs for data services, and more!]]></title><link>https://developer.hpe.com/2024-may-06/</link><guid isPermaLink="false">https://developer.hpe.com/2024-may-06/</guid><pubDate>Mon, 06 May 2024 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Automating IT Operations with Compute Ops Management Webhooks]]></title><description><![CDATA[NONE]]></description><link>https://developer.hpe.com/automating-it-operations-with-compute-ops-management-webhooks/</link><guid isPermaLink="false">https://developer.hpe.com/automating-it-operations-with-compute-ops-management-webhooks/</guid><pubDate>Fri, 03 May 2024 09:54:45 GMT</pubDate><content:encoded>&lt;p&gt;NONE&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Doing science in Python?  Wishing for more speed or scalability?]]></title><description><![CDATA[E﻿xternal blog]]></description><link>https://developer.hpe.com/doing-science-in-python-wishing-for-more-speed-or-scalability/</link><guid isPermaLink="false">https://developer.hpe.com/doing-science-in-python-wishing-for-more-speed-or-scalability/</guid><pubDate>Tue, 30 Apr 2024 21:27:01 GMT</pubDate><content:encoded>&lt;p&gt;E﻿xternal blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[AI News #21]]></title><description><![CDATA[E﻿xternal blog post]]></description><link>https://developer.hpe.com/ai-news-21/</link><guid isPermaLink="false">https://developer.hpe.com/ai-news-21/</guid><pubDate>Mon, 29 Apr 2024 14:58:26 GMT</pubDate><content:encoded>&lt;p&gt;E﻿xternal blog post&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Chapel's High-Level Support for CPU-GPU Data Transfers and Multi-GPU Programming]]></title><description><![CDATA[E﻿xternal blog]]></description><link>https://developer.hpe.com/chapels-high-level-support-for-cpu-gpu-data-transfers-and-multi-gpu-programming/</link><guid isPermaLink="false">https://developer.hpe.com/chapels-high-level-support-for-cpu-gpu-data-transfers-and-multi-gpu-programming/</guid><pubDate>Thu, 25 Apr 2024 22:17:28 GMT</pubDate><content:encoded>&lt;p&gt;E﻿xternal blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Bulk onboarding of users in HPE GreenLake edge-to-cloud platform]]></title><description><![CDATA[HPE GreenLake API to the rescue The use case covered in this document is part of what we call the Day 0 activities; tasks that must be done…]]></description><link>https://developer.hpe.com/bulk-onboarding-of-users-in-hpe-greenlake-edge-to-cloud-platform/</link><guid isPermaLink="false">https://developer.hpe.com/bulk-onboarding-of-users-in-hpe-greenlake-edge-to-cloud-platform/</guid><pubDate>Wed, 24 Apr 2024 13:44:40 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}

ol li{
 font-size:27px;
}
&lt;/style&gt;
&lt;h2&gt;HPE GreenLake API to the rescue&lt;/h2&gt;
&lt;p&gt;The use case covered in this document is part of what we call the Day 0 activities: tasks that must be done to onboard users to the HPE GreenLake platform. When a customer decides to use HPE GreenLake, it is critical that all customer collaborators who require access to the HPE GreenLake platform are invited to join. Using the HPE GreenLake console to invite hundreds of collaborators can be tedious and error-prone. This is where an API comes to the rescue. The API allows you to write a script that reads a list of users from an Excel spreadsheet and automatically invites these users to access the HPE GreenLake platform.&lt;/p&gt;
&lt;h2&gt;What are the HPE GreenLake edge-to-cloud platform APIs&lt;/h2&gt;
&lt;p&gt;The foundational APIs for common HPE GreenLake platform services allow IT administrators and IT operators to programmatically operate and manage users and resources in an HPE GreenLake platform workspace.  &lt;/p&gt;
&lt;p&gt;This set of APIs for common platform services includes APIs for workspace management, identity and access management, device and subscription, locations, audit logs, and wellness.  &lt;/p&gt;
&lt;p&gt;&lt;em&gt;Note: The &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/&quot;&gt;HPE GreenLake platform documentation&lt;/a&gt; for these APIs leverages OpenAPI specifications and associated reference material. The documentation provides a complete explanation of the operations supported by these APIs for common HPE GreenLake platform services, as well as sample requests and responses.&lt;/em&gt;  &lt;/p&gt;
&lt;p&gt;The following blog posts are an excellent way to learn more about the APIs using Postman:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/blog/get-started-with-the-foundational-apis-for-the-hpe-greenlake-edge-to-cloud-platform-%E2%80%93-part-1-introduction-to-the-apis/&quot;&gt;Get started with the foundational APIs for the HPE GreenLake edge-to-cloud platform – Part 1: Introduction to the APIs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/blog/get-started-with-the-foundational-apis-for-the-hpe-greenlake-edge-to-cloud-platform-%E2%80%93-part-2-configuring-and-managing-a-workspace/&quot;&gt;Get started with the foundational APIs for the HPE GreenLake edge-to-cloud platform – Part 2: Configuring and managing a workspace&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/blog/get-started-with-the-foundational-apis-for-the-hpe-greenlake-edge-to-cloud-platform-%E2%80%93-part-3-tracking-activities-and-monitoring-health/&quot;&gt;Get started with the foundational APIs for the HPE GreenLake edge-to-cloud platform – Part 3: Tracking activities and monitoring health&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In this blog post, I will focus on one specific API call, part of Identity Management. The call is &lt;code&gt;POST /identity/v1/users&lt;/code&gt;, which invites users to an HPE GreenLake workspace. Full documentation on this API call can be found in the &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/iam/workspaces/public/openapi/workspaces-v1/operation/invite_user_to_account_identity_v1_users_post/&quot;&gt;HPE GreenLake developer portal&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Providing the right data to the script&lt;/h2&gt;
&lt;p&gt;Before writing any code, it’s important to understand what data is required to invite a user. You only need the email address of the invited user – that’s easy! In the &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/iam/workspaces/public/openapi/workspaces-v1/operation/invite_user_to_account_identity_v1_users_post/&quot;&gt;API reference&lt;/a&gt;, you&apos;ll also see that there is no way to select a workspace to invite the user to. The reason for this is that the API credentials used to make the call are workspace-specific, so they implicitly determine the workspace to which the user will be invited. This means that you need to collect API access credentials for every workspace that you&apos;re adding users to. For the script I am writing here, the Workspaces tab of my spreadsheet stores the Client Id corresponding to the API access credentials of a given workspace. Because I don’t want to save Client Secrets, I will prompt for them and store them in memory.&lt;/p&gt;
&lt;p&gt;So, my Excel file contains the following 2 sheets:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/bulkimport-blog-picture-1.png&quot; alt=&quot;Users tab in Excel&quot; title=&quot;Users tab in Excel&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/bulkimport-blog-picture-2.png&quot; alt=&quot;Workspaces tab in Excel&quot; title=&quot;Workspaces tab in Excel&quot;&gt;&lt;/p&gt;
&lt;h2&gt;High-level algorithm&lt;/h2&gt;
&lt;p&gt;Let’s look at the steps necessary to invite users from my spreadsheets:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Read command parameters to get the Excel filename&lt;/li&gt;
&lt;li&gt;Open spreadsheet to retrieve data&lt;/li&gt;
&lt;li&gt;For each workspace in Workspaces sheet
&lt;ul&gt;
&lt;li&gt;Prompt for Client Secret that matches the Client Id&lt;/li&gt;
&lt;li&gt;Retrieve a session token using those credentials&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;For each user in Users sheet
&lt;ul&gt;
&lt;li&gt;Lookup Client Id using workspace name&lt;/li&gt;
&lt;li&gt;Call POST /identity/v1/users for user using email&lt;/li&gt;
&lt;li&gt;Increase counter of invited users&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Display list of users invited in each workspace&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Putting things together in PowerShell&lt;/h2&gt;
&lt;p&gt;I decided to use PowerShell to write this script because it provides easy native access to Excel spreadsheets.&lt;/p&gt;
&lt;h3&gt;Step 1 – Reading the parameter from the command line.&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;Param($XLFile)

if ($Null -eq $XLFile)
{
    if ($Null -eq $env:XLFile)
    {
        # No parameter and no environment variable: prompt for the file name
        $XLFile = read-host &quot;Enter name of the Excel file&quot;
    }
    else
    {
        # Fall back to the XLFile environment variable
        $XLFile = $env:XLFile
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Step 2 – Importing data from the 2 sheets of my spreadsheet.&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;$tokens =@{}
$invited=@{}

if ($XLFile)
{
    $users_excel  =   import-excel -path $XLFile -dataonly -worksheetname Users
    $workspaces_excel = Import-Excel -path $XLFile -dataonly -worksheetname Workspaces  
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note that I initialized 2 hash tables, one called $tokens that will store the token for a given Client Id (i.e Workspace) and another called $invited for storing the number of invited users for a given Client Id.&lt;/p&gt;
&lt;h3&gt;Step 3 – Iterating over the Workspaces sheet to collect client secrets, and retrieve access tokens.&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;# Ask for client_Secret of each workspace in Excel file
    foreach ($workspace in $workspaces_excel ) {   
        $client_id = $workspace.&apos;Client Id&apos;    
        if ($tokens[$client_id] -eq $null) {
            # We don&apos;t have a token for this client_id yet
            # We need to ask the Client secret for this workspace
            $workspace_name = $workspace.&apos;Workspace Name&apos;
            $client_id = $workspace.&apos;Client Id&apos;

            $secClientSecret = read-host  &quot;Enter HPE GreenLake Client Secret for Workspace $workspace_name&quot; -AsSecureString
            $bstr = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($secClientSecret)
            $secret = [System.Runtime.InteropServices.Marshal]::PtrToStringBSTR($bstr)
                        
            # use Client Id and Client Secret to retrieve a token
            $body = &quot;grant_type=client_credentials&amp;#x26;client_id=&quot; + $client_id + &quot;&amp;#x26;client_secret=&quot; + $secret
            $headers = @{} 
            $headers[&quot;Content-Type&quot;] = &quot;application/x-www-form-urlencoded&quot;
            
            try {
                $response = Invoke-webrequest &quot;https://sso.common.cloud.hpe.com/as/token.oauth2&quot; -Method POST -Headers $headers -Body $body
                # store the token for future use
                $AccessToken = ($response.Content  | Convertfrom-Json).access_token
                $tokens.Add($client_id,$AccessToken)
            }
            catch {
                Write-Host &quot;Error retrieving access token for workspace $workspace_name!&quot; -ForegroundColor Red 
                exit
            }
        }
    }
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note that, at the end of this loop, I have a hash table of tokens indexed by Client Id, which I will use to call the API in the next section.&lt;/p&gt;
&lt;h3&gt;Step 4 – Iterating over Users sheet to invite each of them.&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;# Now walk the list of users to add
    $invited.Add($client_id,0)
    foreach ($user in $users_excel ) {
        $workspace_name = $user.&apos;Workspace Name&apos;
        # Get client id from workspace name
        $result = $workspaces_excel | Where-Object { $_.&apos;Workspace Name&apos;  -eq $workspace_name }
        if ($result.Count -eq 0)
        {
            Write-Host &quot;Workspace not found for user &quot; $user.email -ForegroundColor Red
            exit
        }
        $client_id = $result[0].&apos;Client Id&apos;
        
        Write-Host &quot;Inviting user&quot; $user.email &quot;to workspace&quot; $workspace_name
        $AccessToken = $tokens[$client_id]

        # Create header for next API calls 
        $headers = @{} 
        $headers[&quot;Authorization&quot;] = &quot;Bearer $AccessToken&quot;
        $headers[&quot;Accept&quot;] = &quot;application/json&quot;
        $headers[&quot;Content-Type&quot;] = &quot;application/json&quot;
        
        # Build body for next API call         
        $_body = @{
            &quot;email&quot;             = $user.email
            &quot;sendWelcomeEmail&quot;  = $true
        }
        
        $Body = $_body | ConvertTo-Json
        
        # Call GLP API to invite user
        try {
            $response = Invoke-webrequest -Uri &quot;https://global.api.greenlake.hpe.com/identity/v1/users&quot; -Method POST -Headers $headers -Body $Body
            $invited[$client_id]++
        }
        catch {
            Write-Host &quot;Error sending invite for&quot; $user.Email&quot;! Already onboarded?&quot;  -ForegroundColor Red
            Write-Host $Error[0]  -ForegroundColor Red
            continue
        }  
        sleep 15
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note that before the loop, I initialized to zero the count of invited users for a given workspace. Also note the &lt;code&gt;sleep 15&lt;/code&gt; (seconds) at the end of the loop to avoid issues with rate-limiting constraints, which might otherwise return a status code 429.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Note: Rate Limiting is a mechanism employed to control and restrict the rate at which requests or interactions are permitted to occur between clients and a service.&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;Step 5: Displaying list of users invited in each workspace.&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;else 
{
    write-host &apos;Mailing list file not provided nor found....&apos;
    exit
}
Write-host &quot;Done processing Excel file $XLFile!&quot;

# ------------------------ Query GL to get list of users for each workspace ------------------------
foreach ($workspace in $workspaces_excel ) {
    $workspace_name = $workspace.&apos;Workspace Name&apos;
    $client_id = $workspace.&apos;Client Id&apos;
    # Create header for next API calls 
    $headers = @{} 
    $AccessToken = $tokens[$client_id]
    $headers[&quot;Authorization&quot;] = &quot;Bearer $AccessToken&quot;
    $headers[&quot;Accept&quot;] = &quot;application/json&quot;
    $headers[&quot;Content-Type&quot;] = &quot;application/json&quot;
    try {
        $response = Invoke-webrequest &quot;https://global.api.greenlake.hpe.com/identity/v1/users?filter=&amp;#x26;limit=300&amp;#x26;offset=0&quot; -Method GET -Headers $headers
    }
    catch {
        Write-Host &quot;Cannot get list of users!!&quot; 
        exit
    }
    $invited_users=$invited[$client_id]
    Write-Host $invited_users &quot;user(s) invited to workspace&quot; $workspace_name
    Write-Host &quot;List of users in workspace:&quot; $workspace_name
    
    $_list                     = $response.Content | ConvertFrom-Json
    if ($null -ne $_list)
    {
        $_users_list        =  [System.Collections.ArrayList]::new()
        
        foreach ($_u in $_list.Items)
        {
            
            $_users_list        += @{
                &apos;Username&apos;      = $_u.Username
                &apos;Status&apos;        = $_u.userStatus
                &apos;id&apos;            = $_u.Id
            }
        }
        
    }
    
    $_users_list | select Username, Status | ft -AutoSize
    
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Try it!&lt;/h2&gt;
&lt;p&gt;Let’s run this script, making sure to reference the correct Excel spreadsheet:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;PS /Volumes/Dev/GreenLake/GLP-API-Tooling/Scripts&gt; ./bulk_invite.ps1 -XLfile userlist.xlsx                                                 
Enter HPE GreenLake Client Secret for Workspace HPEDEV -GLCP- Hackshack: ********************************
Enter HPE GreenLake Client Secret for Workspace Super Awesome Company: ********************************                 
Inviting user xxx@gmail.com to workspace  HPEDEV -GLCP- Hackshack                                              
Inviting user yyy@lalli.fr to workspace  HPEDEV -GLCP- Hackshack                                                     
Error sending invite for yyy@lalli.fr ! Already onboarded?
Inviting user zzz@lalli.fr to workspace  Super Awesome Company
Inviting user www@gmail.com to workspace  Super Awesome Company                                                
Error sending invite for www@gmail.com ! Already onboarded?
Done processing Excel file userlist.xlsx!

1 user(s) invited to workspace HPEDEV -GLCP- Hackshack                                                                  
List of users in workspace: HPEDEV -GLCP- Hackshack

Username                     Status
--------                     ------
&amp;#x3C;email&gt;                     VERIFIED
…
yyy@lalli.fr	            VERIFIED
…
xxx@gmail.com               UNVERIFIED
…
&amp;#x3C;email&gt;                     VERIFIED

1 user(s) invited to workspace Super Awesome Company                                                                    
List of users in workspace: Super Awesome Company

Username                          Status
--------                          ------
&amp;#x3C;email&gt;                          VERIFIED
…
www@gmail.com 	                 VERIFIED
…
xxx@lalli.fr                     UNVERIFIED
…
&amp;#x3C;email&gt;                          VERIFIED
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As you can see, the script has invited 1 user in each workspace. The second email address is already a member of the workspace, so no action is necessary.&lt;/p&gt;
&lt;h2&gt;What’s next?&lt;/h2&gt;
&lt;p&gt;Through this post, I have shown you how to integrate with the HPE GreenLake platform using one of the most popular scripting languages, PowerShell. You can get the source code for these scripts from &lt;a href=&quot;https://github.com/hpe-dev-incubator/GLP-API-Tooling&quot;&gt;our community tooling repository&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If you’re interested in trying out what I just discussed, you might first want to check out one of our hands-on Workshops-on-Demand that lets you play with the HPE GreenLake APIs mentioned in this blog post. The workshops are free, available 24/7, and very easy to use. They give you a real-world experience without any risk. Check out our &lt;a href=&quot;https://developer.hpe.com/hackshack/workshops&quot;&gt;catalog of workshops&lt;/a&gt;, register for the one you’re interested in and go! It’s as simple as that.&lt;/p&gt;
&lt;p&gt;If you still have questions regarding the HPE GreenLake platform APIs, join the &lt;a href=&quot;https://developer.hpe.com/slack-signup/&quot;&gt;HPE Developer Community Slack Workspace&lt;/a&gt; and start a discussion on our &lt;a href=&quot;https://hpedev.slack.com/archives/C02EG5XFK8Q&quot;&gt;#hpe-greenlake-api&lt;/a&gt; channel. We are always here to help.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[From a pre-trained model to an AI assistant: Finetuning Gemma-2B using DPO]]></title><description><![CDATA[E﻿xternal blog post]]></description><link>https://developer.hpe.com/from-a-pre-trained-model-to-an-ai-assistant-finetuning-gemma-2b-using-dpo/</link><guid isPermaLink="false">https://developer.hpe.com/from-a-pre-trained-model-to-an-ai-assistant-finetuning-gemma-2b-using-dpo/</guid><pubDate>Wed, 24 Apr 2024 12:22:14 GMT</pubDate><content:encoded>&lt;p&gt;E﻿xternal blog post&lt;/p&gt;</content:encoded></item><item><title><![CDATA[AI News #19]]></title><description><![CDATA[E﻿xternal blog post]]></description><link>https://developer.hpe.com/ai-news-19/</link><guid isPermaLink="false">https://developer.hpe.com/ai-news-19/</guid><pubDate>Mon, 15 Apr 2024 14:37:30 GMT</pubDate><content:encoded>&lt;p&gt;E﻿xternal blog post&lt;/p&gt;</content:encoded></item><item><title><![CDATA[AI News #18]]></title><description><![CDATA[E﻿xternal blog post]]></description><link>https://developer.hpe.com/ai-news-18/</link><guid isPermaLink="false">https://developer.hpe.com/ai-news-18/</guid><pubDate>Thu, 11 Apr 2024 09:39:30 GMT</pubDate><content:encoded>&lt;p&gt;E﻿xternal blog post&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Navier-Stokes in Chapel — Introduction]]></title><description><![CDATA[E﻿xternal blog]]></description><link>https://developer.hpe.com/navier-stokes-in-chapel-—-introduction/</link><guid isPermaLink="false">https://developer.hpe.com/navier-stokes-in-chapel-—-introduction/</guid><pubDate>Wed, 10 Apr 2024 13:48:07 GMT</pubDate><content:encoded>&lt;p&gt;E﻿xternal blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE Athonet LLM Platform: Divide et impera, designing the LLM Agentic Tool Mesh]]></title><description><![CDATA[In a recent series of blog posts,  I've explored various facets of generative AI and its profound impact on telecommunications and corporate…]]></description><link>https://developer.hpe.com/hpe-athonet-llm-platform-divide-et-impera-designing-the-ll-mesh/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-athonet-llm-platform-divide-et-impera-designing-the-ll-mesh/</guid><pubDate>Tue, 09 Apr 2024 09:11:29 GMT</pubDate><content:encoded>&lt;p&gt;In a recent series of blog posts,  I&apos;ve explored various facets of generative AI and its profound impact on telecommunications and corporate tools. HPE Athonet&apos;s revolutionary approaches in the digital landscape were highlighted in these discussions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/blog/the-transformative-impact-of-generative-ai-on-telco-products/&quot;&gt;&quot;The transformative impact of generative AI on Telco products&quot;&lt;/a&gt; discussed the pivotal role of AI in enhancing product offerings.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/blog/hpe-athonet-llm-platform-first-pillar-from-personal-assistant-to-collaborative-corporate-tool/&quot;&gt;&quot;HPE Athonet LLM Platform: From personal assistant to collaborative corporate tool&quot;&lt;/a&gt; highlighted the evolution of the platform into a powerful collaborative suite.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/blog/hpe-athonet-llm-platform-driving-users-towards-peak-flow-efficiency/&quot;&gt;“HPE Athonet LLM Platform: Driving users towards peak &apos;Flow&apos; efficiency”&lt;/a&gt; focused on optimizing user efficiency and engagement.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These insights set the stage for a profound transformation in AI centered around:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Transitioning to a team-focused collaborative framework.&lt;/li&gt;
&lt;li&gt;Enhancing user experience to boost productivity and focus.&lt;/li&gt;
&lt;li&gt;Building a resilient infrastructure based on data mesh principles, ensuring robust security and adherence to ethical standards.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Today, the discussion delves deeper into the third pillar inspired by the Data Mesh concepts detailed in the article &lt;a href=&quot;https://martinfowler.com/articles/data-mesh-principles.html&quot;&gt;“Data Mesh Principles and Logical Architecture”&lt;/a&gt;. Here I introduce the &lt;strong&gt;&quot;LLM Agentic Tool Mesh&quot;&lt;/strong&gt; to create a clean, scalable architecture that can swiftly adapt to the rapid advancements in technology. This tailored approach adapts the Data Mesh framework to the unique needs of LLMs, merging data, context, and reasoning capabilities into a cohesive whole.&lt;/p&gt;
&lt;p&gt;The LLM Agentic Tool Mesh embodies the data itself and includes reasoning capabilities, orchestrating tools enhanced with LLM technology. Developers using this platform take charge of everything from the underlying data to the APIs and documentation, ensuring a seamless experience for end-users. This initiative is more than a technical upgrade—it’s a significant leap towards digital transformation that promises substantial competitive advantage through enhanced knowledge sharing within the company that adopts it, leading to the customization of products and services tailored specifically for customers.&lt;/p&gt;
&lt;p&gt;However, there are some challenges, including the need to master new generative AI technologies, the importance of managing data properly by adopting the &apos;data as code&apos; concept, and the cultural shift required to maintain agility and continuously improve all company processes.&lt;/p&gt;
&lt;p&gt;Expecting every team tasked with developing a tool to start from scratch is impractical. Instead, it&apos;s crucial to offer a supportive platform that delivers the essential services needed for creation, complemented by a federated governance structure that lays out clear rules and guidelines. In the sections that follow, these principles are explored more in detail, alongside additional insights to guide the development process.&lt;/p&gt;
&lt;h3&gt;&lt;strong&gt;Domain-specific LLM tool ownership and decentralization&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;In the LLM Agentic Tool Mesh framework, ownership of LLM tools is strategically distributed across various teams or departments, with each group taking full responsibility for their respective tool&apos;s development, maintenance, and quality assurance. This organizational structure aligns with modern enterprise setups, which are typically divided into specific business domains. By applying LLM Agentic Tool Mesh principles along these natural divisions, tools are finely tuned to meet the unique requirements of each domain, such as customer service or research and development.&lt;/p&gt;
&lt;p&gt;Central to LLM Agentic Tool Mesh are data and prompts, metadata, and models, all supported by the necessary computational power to process and deliver results efficiently. This architecture not only facilitates streamlined data management but also significantly enhances the model’s capability to learn and adapt. By aligning the decomposition of tools with the business domains, LLM Agentic Tool Mesh ensures that each tool operates within well-defined operational parameters, optimizing both performance and adaptability.&lt;/p&gt;
&lt;h3&gt;&lt;strong&gt;LLM tool as product&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Each LLM tool is crafted as a specialized product, meticulously designed with the end-user in mind to ensure user-centricity, efficiency, and effectiveness. This philosophy treats each tool as a comprehensive solution that spans from data generation to consumption, emphasizing a seamless and integrated experience. It is crucial that a dedicated team, including a product owner, is responsible for managing the tool end-to-end, ensuring that all stages—from using documents for information analysis with LLMs to presenting results—are harmoniously interconnected. This holistic approach not only enhances the functionality of each phase but also significantly boosts the overall quality of the tool.&lt;/p&gt;
&lt;p&gt;The principle of viewing an LLM tool as a product also tackles the persistent issue of data silos. Like data products, there should be a system in place that enables the use of multiple LLM tools in concert. It is essential for these tools to provide clear interfaces and capabilities, including discoverability, security, understandability, and trustworthiness. This ensures that tools not only function independently but also work cohesively within a larger ecosystem, enhancing interoperability and user trust.&lt;/p&gt;
&lt;h3&gt;&lt;strong&gt;LLM platform infrastructure&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;A self-service infrastructure platform is provided with the tool, enabling teams to autonomously develop, deploy, and manage their LLM tools. This platform supports the necessary frameworks to promote autonomy and innovation. Envisioned as LLM Agentic Tool Mesh, it extends capabilities beyond enhancing user experience for chatbot users to include interfaces for various stakeholders, including document owners, tool developers, and deployment teams. This requires the development of diverse user interfaces tailored to different interactions within the ecosystem.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/athon_ssp_1.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;At MWC 2024 (Mobile World Congress), a proof of concept demonstrated the robustness of this architecture by showcasing a chatbot capable of supporting multiple tools through the self-service capabilities of the LLM platform. This facilitates the development and integration of both individual tools and comprehensive chatbot solutions.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/athon_ssp_2.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The architecture leverages a reasoning engine where the LLM serves as a central hub, orchestrating various tools to ensure streamlined communication and functionality. Integrating a tool into this engine is straightforward: simply define the tool and its arguments in a prompt, and the LLM will intelligently determine when to engage the tool based on user requests.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/athon_ssp_3.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Designed to be both extendable and modular, each component of the platform employs design patterns such as the factory method and strategy pattern. These patterns ensure a decoupling of creation from usage, providing a flexible and scalable architecture that is easy to maintain. Each module operates independently yet conforms to a unified global interface, enhancing system integrity and adaptability. The self-serve platform contains the following features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Utility functions simplify the complexity of underlying libraries, such as LangChain, Hugging Face, and LLaMA Index.&lt;/li&gt;
&lt;li&gt;Domain-specific high-level functions, for example, parsing telecommunications standards using Retrieval-Augmented Generation (RAG).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/athon_ssp_4.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;&lt;strong&gt;Federated governance and standards&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;While the management of LLM tools is decentralized, a unified framework of governance policies and standards is crucial to ensure consistency, ethical integrity, and overall quality across the platform. Federated governance effectively balances the drive for innovation with rigorous standards for security, privacy, and compliance, which are vital to the operation of the LLM Agentic Tool Mesh. This approach encompasses several key elements:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Decentralized autonomy: Domain-specific teams are empowered to manage their LLM tools, fostering a sense of ownership and responsiveness to specific needs, all while aligning with overarching governance standards of the mesh.&lt;/li&gt;
&lt;li&gt;Interoperable standards: Common guidelines for security, data privacy, and model behavior are established to ensure all interactions are helpful and harmless, promoting seamless and safe integration across different tools and domains.&lt;/li&gt;
&lt;li&gt;Ethical compliance: It is critical that all LLMs operate within ethical guidelines and regulatory frameworks to foster trust and reliability. This commitment to ethical compliance supports honest interactions and adherence to broader social and legal standards.&lt;/li&gt;
&lt;li&gt;Central oversight: A centralized governance body is tasked with enforcing these policies across the mesh, ensuring that all components adhere to responsible AI principles and maintain a high standard of integrity and accountability.&lt;/li&gt;
&lt;li&gt;Automated governance tools: Advanced technologies are deployed to automate the enforcement of governance standards, ensuring consistent compliance with the principles of being helpful, harmless, and honest.&lt;/li&gt;
&lt;li&gt;Collaboration and feedback: Continuous collaboration and feedback mechanisms within the LLM Agentic Tool Mesh are crucial for adapting and refining governance practices. This open dialogue encourages ongoing improvement and alignment with emerging technologies and challenges.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;By embracing these principles, the LLM Agentic Tool Mesh seeks to cultivate a robust, ethical, and dynamic ecosystem for the development and deployment of LLM tools. This approach ensures that all tools not only add significant value but also meet stringent standards of responsibility and governance. These efforts underscore HPE Athonet&apos;s commitment to driving innovation and digital transformation in network management by developing a responsible framework and product.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Getting started with HPE GreenLake API for Backup and Recovery]]></title><description><![CDATA[W﻿hat's new? Recently, a new set of REST APIs for HPE GreenLake for Backup and Recovery Service was introduced in the HPE GreenLake…]]></description><link>https://developer.hpe.com/getting-started-with-hpe-greenlake-api-for-backup-and-recovery/</link><guid isPermaLink="false">https://developer.hpe.com/getting-started-with-hpe-greenlake-api-for-backup-and-recovery/</guid><pubDate>Mon, 08 Apr 2024 00:16:50 GMT</pubDate><content:encoded>&lt;style&gt; li { font-size: 27px; line-height: 33px; max-width: none; } &lt;/style&gt;
&lt;h2&gt;W﻿hat&apos;s new?&lt;/h2&gt;
&lt;p&gt;Recently, a new set of REST APIs for the HPE GreenLake for Backup and Recovery Service was introduced on the HPE GreenLake Developer &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/backup-recovery/public/&quot;&gt;website&lt;/a&gt;. This is the third post in a series of blog posts (&lt;a href=&quot;https://developer.hpe.com/blog/getting-started-with-hpe-greenlake-api-for-data-services&quot;&gt;Data-Services&lt;/a&gt;, &lt;a href=&quot;https://developer.hpe.com/blog/getting-started-with-hpe-greenlake-api-for-virtualization&quot;&gt;Virtualization&lt;/a&gt;, &lt;a href=&quot;https://developer.hpe.com/blog/getting-started-with-hpe-greenlake-api-for-backup-and-recovery&quot;&gt;Backup and Recovery&lt;/a&gt;) that introduce useful tips and best practices for using this new set of APIs in specific use cases.&lt;/p&gt;
&lt;p&gt;This set of APIs provides the capability to manipulate resources made available by HPE GreenLake for Backup and Recovery services. Consequently, any customer can use these APIs to protect their data in a hybrid cloud in the same manner as using the user interface for HPE GreenLake for Backup and Recovery in the HPE GreenLake console. It allows users to perform Create, Read, Update, and Delete (CRUD) operations against HPE GreenLake Backup and Recovery resources such as: &lt;em&gt;data-orchestrator, protection store gateway, StoreOnce, protection-stores, protection-policies, snapshots, backups, on-premises assets (VM, DataStore, MSSQL, Storage Volumes).&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;More resources will be added to this set of APIs in future releases, covering &lt;em&gt;cloud service providers and cloud assets&lt;/em&gt;. For more information on how to use the HPE GreenLake for Backup and Recovery service, please visit the &lt;a href=&quot;https://www.hpe.com/us/en/hpe-greenlake-backup-recovery.html&quot;&gt;website &lt;/a&gt;and the getting started &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docLocale=en_US&amp;#x26;docId=sd00003454en_us&quot;&gt;guide&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The API is published as an OpenAPI specification in JSON format, available for download from this &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/backup-recovery/public/guide/&quot;&gt;section&lt;/a&gt; of the documentation (shown below). The specification follows the OpenAPI Standard v3.1 and contains all the information required for the JSON file to be consumed by any OpenAPI tool to generate a client library, server mock, or documentation, as described by the &lt;a href=&quot;https://tools.openapis.org/&quot;&gt;OpenAPI &lt;/a&gt;Initiative.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt;
There are two different sets of OpenAPI specs downloadable from the Backup and Recovery documentation page in the March 2024 release. The two sets represent the two versions of the Backup and Recovery APIs that were made available for separate resources, namely &lt;strong&gt;hypervisor-managers&lt;/strong&gt; and &lt;strong&gt;all other resources&lt;/strong&gt;, as shown in the picture below. To reach the download page for each OpenAPI specification file, scroll through the left pane and select the appropriate API version page.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;img src=&quot;/img/backup-and-recovery-api-front-download-page.jpg&quot; alt=&quot;GLBR API documentation website&quot; title=&quot;Front page for Backup &amp;#x26; Recovery front page&quot;&gt;&lt;/p&gt;
&lt;p&gt;The Backup and Recovery API specification files contain information that describes the set of REST APIs for HPE GreenLake for Backup and Recovery, such as the endpoints, authentication, syntax of parameters, expected response, and many other objects.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/backup-and-recovery-json-information.png&quot; alt=&quot;GLBR openapi spec&quot; title=&quot;JSON open API spec example&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The above figure shows the example of the downloaded backup-and-recovery v1beta1.json file from the Backup and Recovery API documentation &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/backup-recovery/public/guide/&quot;&gt;guide&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;API versioning&lt;/h2&gt;
&lt;p&gt;This set of APIs is released with two different specifications, identified as revision V1 Alpha 1 and V1 Beta 1 at the time of its release in March 2024. The short-term plan is to publish version 1 of the HPE GreenLake APIs for Backup and Recovery Service to replace the V1 Alpha 1 and V1 Beta 1 versions. Once the API is at version 1, the intent is to keep changes to the minimum required and publicly announce changes as they are made. As each individual API is updated, more capabilities will also be added to the resources identified under this set of APIs. For information regarding updates and deprecation, please refer to the HPE GreenLake Developer Portal Versioning &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/guides/public/standards/versioning_basics/&quot;&gt;guide&lt;/a&gt;. You can expect that the API categorized as V1 Alpha 1 will be updated within a short time; hence, I recommend you monitor for any announcement of the next revision of APIs for Backup and Recovery in this documentation &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/backup-recovery/public/guide/&quot;&gt;guide&lt;/a&gt;.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt;
At the time of its release in March 2024, all of the resources for the HPE GreenLake API for Backup and Recovery are limited to data protection of on-premises assets. The manipulation of cloud assets will be made available in the next release.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;W﻿hat are Backup and Recovery API resources?&lt;/h2&gt;
&lt;p&gt;There are many resources in this set of APIs. Therefore, I like to break them down into two parts so that I can present a familiar view of the components that correspond to the resources.&lt;/p&gt;
&lt;p&gt;The diagram below displays components in the first part of the HPE GreenLake API for Backup and Recovery. These components consist of resources considered to be part of the infrastructure for HPE GreenLake for Backup and Recovery on-premises and in the cloud, and they must be deployed prior to operating HPE GreenLake for Backup and Recovery. The on-premises components consist of the Data Orchestrator VM and either the Protection Store Gateway VM or HPE StoreOnce (a purpose-built backup appliance). The cloud components consist of the data services instance deployed in the HPE GreenLake workspace and the cloud protection store that contains the cloud recovery points (backups). Information on how to get started with HPE GreenLake for Backup and Recovery is available on the support &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=sd00003102en_us&amp;#x26;page=bar_overview_dscc.html&quot;&gt;website&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/glbr-architecture-overview-1.png&quot; alt=&quot;GLBR architecture 1&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The above figure shows the resources that are part of the infrastructure u﻿sed to accommodate the data protection for HPE GreenLake for Backup and Recovery.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;The second part of the HPE GreenLake API for Backup and Recovery covers the resources that HPE GreenLake for Backup and Recovery protects from data loss. In the diagram below, you will see the assets that need to be protected and the components on which those assets reside. Assets such as virtual machines, datastores, SQL database applications, storage volumes, or physical hosts are the items that need to be protected. Other components in this category include the hypervisor, compute servers, storage array, networking, and the VMware vCenter that manages the hypervisor components. HPE GreenLake maintains the inventory of all the protected on-premises assets by querying the VMware vCenter.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/glbr-architecture-overview-2.png&quot; alt=&quot;GLBR architecture 2&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The above figure displays resources that are parts of the hypervisor and on-premises components that can be protected.&lt;/em&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt;
In the current HPE GreenLake Developer website, there is a single resource categorized as V1 Alpha 1 that will be used to register, unregister, and update the hypervisor-manager to an instance of HPE GreenLake Backup and Recovery. The current supported on-premises hypervisor-manager is VMware vCenter version 7.0 or later. The HPE GreenLake API to discover t﻿he already onboarded hypervisor-manager is available from the virtualization API &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/virtualization/public/openapi/virtualization-public-v1beta1/tag/hypervisor-managers/&quot;&gt;set&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;What about the components in HPE GreenLake Backup and Recovery that are not mentioned above?&lt;/h3&gt;
&lt;p&gt;There are other resources in HPE GreenLake for Backup and Recovery that are not mentioned above; however, they are visible in the user interface for HPE GreenLake for Backup and Recovery. Two of those resources are the protection policy and the protection group. The protection group is used to consolidate multiple assets under a particular protection policy. The protection policy, on the other hand, is a resource that consolidates the schedule of protection jobs and the flow of recovery points across the different tiers of protection store. Together, the protection policy and the protection group deliver management of the Recovery Point Objective for data protection using HPE GreenLake for Backup and Recovery.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/protection-policy-ui-for-vmware-scheduling.png&quot; alt=&quot;Protection policy for vmware schedule&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The above figure displays resources for schedule and the flow of the recovery-points part of the protection policy.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Additionally, there is a resource known as a protection job that is only indirectly shown in the user interface; however, it is required to perform data protection operations. You will need to manipulate protection jobs to run a protection or to recover from an existing recovery point. The protection job is also the resource that can be manipulated to suspend or resume a protection schedule.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/protection-jobs-ui.png&quot; alt=&quot;Protection jobs for scheduling and protection tiers&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The above figure displays the protection jobs for the protected assets. The protection jobs become available after a protection policy is applied to an asset or to a protection group of multiple assets.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Beyond protection jobs, there are recovery points (also known as copies). These resources represent the recovery points inside the protection store, and they are used to perform the recovery of assets.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt; that even though there are three different categories of recovery-points that are available in the protection policy, there are only two different resources.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The first resource is called snapshots to represent copies that exist inside the storage array. The second resource is called backups to represent copies that exist inside the on-premises protection store such as Protection Store Gateway, and copies that exist inside the cloud protection store.&lt;/p&gt;
&lt;p&gt;The cloud protection store is a repository that sits with a public cloud provider and is managed by HPE; it can exist in either AWS or Microsoft Azure. All of these recovery points exist in relation to the assets, and recovery can be initiated using the HPE GreenLake API.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/protection-copies.png&quot; alt=&quot;Protection copies for hierarchy&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Using the Backup and Recovery APIs&lt;/h2&gt;
&lt;p&gt;This set of Backup and Recovery APIs uses the same authorization and permissions as the rest of the family of HPE GreenLake APIs for data services. To ensure that all programmatic interaction with HPE GreenLake platform services and resources is secure and authenticated, these APIs require an access token. The token is generated using the client ID and client secret you acquired when creating the client API credentials.&lt;/p&gt;
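&lt;p&gt;As a quick illustration, here is a minimal PowerShell sketch of one way to request such a token. It assumes the standard OAuth2 client-credentials grant against the HPE GreenLake SSO token endpoint and uses placeholder credentials; treat the endpoint and parameter names as assumptions and refer to the documentation below for the authoritative flow.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;# Hedged sketch: request an access token with the OAuth2 client-credentials grant.
# The token endpoint and parameter names are assumptions based on the standard
# HPE GreenLake flow; replace the placeholders with your own credentials.
$ClientId     = &apos;&amp;#x3C;your-client-id&gt;&apos;
$ClientSecret = &apos;&amp;#x3C;your-client-secret&gt;&apos;

$body = @{
    grant_type    = &apos;client_credentials&apos;
    client_id     = $ClientId
    client_secret = $ClientSecret
}

$tokenResponse = Invoke-RestMethod -Method Post `
    -Uri &apos;https://sso.common.cloud.hpe.com/as/token.oauth2&apos; `
    -Body $body

# The bearer token is then passed in the Authorization header of every API call
$headers = @{
    &apos;Authorization&apos; = &quot;Bearer $($tokenResponse.access_token)&quot;
    &apos;Accept&apos;        = &apos;application/json&apos;
}
&lt;/code&gt;&lt;/pre&gt;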
&lt;p&gt;Documentation concerning getting started with the HPE GreenLake API is provided on the HPE Developer Community &lt;a href=&quot;https://developer.hpe.com/blog/oauth2-for-hpe-greenlake-data-services-cloud-console/&quot;&gt;website&lt;/a&gt; and on the HPE GreenLake Developer portal &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/&quot;&gt;website&lt;/a&gt;. There is also a blog &lt;a href=&quot;https://developer.hpe.com/blog/learn-what-you-can-do-with-hpe-data-services-cloud-console-api-in-just-3-minutes/&quot;&gt;post&lt;/a&gt; that describes how to use a publicly available tool, such as Postman, to exercise this API without writing code. An additional blog post that describes using Postman with this API is available at this &lt;a href=&quot;https://developer.hpe.com/blog/oauth2-for-hpe-greenlake-data-services-cloud-console/&quot;&gt;link&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Lastly, anyone can follow the examples provided with each API reference in the documentation, such as &lt;code&gt;GET /backup-recovery/v1beta1/protection-jobs&lt;/code&gt;, which is shown in the figure below. The documentation provides detail on the API syntax for a particular method, the arguments used for the API, the expected successful and failed responses, and several examples of creating an automation script using cURL, JavaScript, Python, and Go. The documentation page also provides the ability to execute the API directly on the page, as explained in the previous blog &lt;a href=&quot;https://developer.hpe.com/blog/getting-started-with-hpe-greenlake-api-for-data-services/&quot;&gt;post&lt;/a&gt; (Getting started with HPE GreenLake API for Data Services).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/reference-document-for-hpe-glbr.png&quot; alt=&quot;HPE GLBR API documentation &quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The above figure shows the three-panel of the interactive API reference documentation for one of HPE GreenLake APIs for Backup and Recovery.&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;Some tips and examples&lt;/h2&gt;
&lt;p&gt;The interactive API reference documentation provides information about the parameters and request payload (body) key-pair values required for every available HPE GreenLake API for Backup and Recovery. Additionally, I present some use cases below with detailed information on providing the correct parameters or building the correct request payload (body) JSON structure required to achieve each use case.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt;
The examples below assume that HPE GreenLake Backup and Recovery has been deployed, that it is connected to an HPE array onboarded to HPE GreenLake, that a VMware vCenter has been discovered, and that some virtual machines have been deployed and discovered by HPE GreenLake Backup and Recovery. For more information on getting started with HPE GreenLake Backup and Recovery, please visit the Getting Started guide on the HPE support &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docLocale=en_US&amp;#x26;docId=sd00003454en_us&amp;#x26;page=GUID-F25ABD00-C36B-42D8-A443-82584EE8E35A.html&quot;&gt;website&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Creating a cloud protection store&lt;/h2&gt;
&lt;p&gt;A protection store is the critical resource required to store the recovery points on-premises and in the cloud. Cloud protection stores are created on top of either the Protection Store Gateway or HPE StoreOnce, because one of them is required for the connection to the cloud protection store. To perform this use case in the example below, we need to discover the StoreOnce and the storage location of the cloud protection store.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt;
You will see in this blog post that I used a combination of b﻿oth HPE GreenLake APIs from &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/data-services/public/guide/&quot;&gt;data services&lt;/a&gt; and &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/virtualization/public/&quot;&gt;virtualization&lt;/a&gt; to accomplish the below examples.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The example below displays the creation of the HPE GreenLake cloud protection store on Microsoft Azure cloud storage.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/ui-to-create-cloud-protection.png&quot; alt=&quot;UI to create a cloud protection store on Microsoft Azure at the eastus2 storage location&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The figure above shows the user interface used to create a cloud protection store on Microsoft Azure in the eastus2 storage location.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Here is the list of steps required to perform this use case using the HPE GreenLake API:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;I used the &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/backup-recovery/public/openapi/backup-recovery-public-v1beta1/operation/StoreOncesList/&quot;&gt;API&lt;/a&gt; to discover the StoreOnce instance that can connect to the cloud protection store and copied its id, which is used as the value for storageSystemId in the JSON request body shown below. The API used for this is: &lt;code&gt;GET /backup-recovery/v1beta1/storeonces?limit=20&amp;#x26;offset=0&lt;/code&gt;. (A hedged PowerShell sketch of this call follows the figure below.)&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/api-discover-storeonce.png&quot; alt=&quot;Discover deployed StoreOnce to create cloud protection store&quot;&gt;&lt;/p&gt;
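&lt;p&gt;If you prefer scripting this step rather than using Postman, a minimal PowerShell sketch of the same call might look like the following. The &lt;code&gt;$headers&lt;/code&gt; variable is the one built from the access token earlier; &lt;code&gt;$ApiBase&lt;/code&gt; is a placeholder for the regional API endpoint listed on the HPE GreenLake Developer portal, and the &lt;code&gt;items&lt;/code&gt;/&lt;code&gt;id&lt;/code&gt; property names are assumptions based on the response shown in the figure.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;# Hedged sketch: list the StoreOnce systems and keep the id of the one that
# will connect to the cloud protection store.
$ApiBase = &apos;https://&amp;#x3C;your-regional-api-endpoint&gt;&apos;   # placeholder; see the developer portal

$storeOnces = Invoke-RestMethod -Method Get -Headers $headers `
    -Uri &quot;$ApiBase/backup-recovery/v1beta1/storeonces?limit=20&amp;#x26;offset=0&quot;

# Pick the StoreOnce to use; here simply the first item returned
$storageSystemId = $storeOnces.items[0].id
Write-Host &apos;Using StoreOnce id:&apos; $storageSystemId
&lt;/code&gt;&lt;/pre&gt;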
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Next, I discovered the cloud storage at the correct location from the list of available storage locations and copied the &lt;code&gt;storageLocationId&lt;/code&gt; as shown in the JSON response body below. The discovery was done using the HPE GreenLake &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/data-services/public/openapi/data-services-public-v1beta1/operation/ListLocations/&quot;&gt;API&lt;/a&gt; from the Data Services API set. Note from the figure below that I used the filter &lt;code&gt;“backup-and-recovery”&lt;/code&gt; in capabilities to capture only the storage locations with the correct capability. The figure below shows information about the location (region) where the data will be stored: in this case, it is located at &lt;code&gt;“Richmond”&lt;/code&gt;, the cloud service provider is &lt;code&gt;“AZURE”&lt;/code&gt;, and the cloud service provider location identifier is &lt;code&gt;“eastus2”&lt;/code&gt;. I copied the value &lt;code&gt;&quot;azure:eastus2&quot;&lt;/code&gt; from the key &lt;code&gt;id&lt;/code&gt; of this API&apos;s response body as the &lt;code&gt;storageLocationId&lt;/code&gt; value. The API used for this: &lt;code&gt;GET /data-services/v1beta1/storage-locations?filter=”backup-and-recovery” in capabilities&lt;/code&gt;. (A hedged PowerShell sketch of this call follows the figure below.)&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/figure-out-storage-location.png&quot; alt=&quot;API to figure out the Azure storage location&quot;&gt;&lt;/p&gt;
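&lt;p&gt;A comparable PowerShell sketch for this discovery step is shown below. It reuses &lt;code&gt;$ApiBase&lt;/code&gt; and &lt;code&gt;$headers&lt;/code&gt; from the previous sketch; the filter string mirrors the query above, and the response property names are assumptions, so adjust them to what the API actually returns.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;# Hedged sketch: list the storage locations with the backup-and-recovery
# capability and keep the id of the desired Azure region. The quoting style
# of the filter may need adjusting for your client.
$filter = [uri]::EscapeDataString(&quot;&apos;backup-and-recovery&apos; in capabilities&quot;)

$locations = Invoke-RestMethod -Method Get -Headers $headers `
    -Uri &quot;$ApiBase/data-services/v1beta1/storage-locations?filter=$filter&quot;

# Review the available regions, then copy the id of the one you want
$locations.items | Select-Object id, name
$storageLocationId = &apos;azure:eastus2&apos;   # value observed in the figure above
&lt;/code&gt;&lt;/pre&gt;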
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Moving forward, I composed the cloud protection store at that storage location using values from both of the previous API responses. For this API execution, I created a request JSON body structure for &lt;code&gt;POST /backup-recovery/v1beta1/protection-stores&lt;/code&gt; that contains some of the key-pair values from the previous API responses. The figure below shows the creation of the protection store using the JSON body structure that composes the cloud protection store connected to the HPE StoreOnce. (A hedged PowerShell sketch of this call appears after the request-body figure below.)&lt;/li&gt;
&lt;/ol&gt;
&lt;blockquote&gt;
&lt;p&gt;The REST API below executes asynchronously, and I recognized from the response status that this API call was properly accepted, as shown by &lt;code&gt;Status 202 Accepted&lt;/code&gt;. From the Location field in the response headers, I copied the task id and stored it in the variable &lt;code&gt;{{TaskId}}&lt;/code&gt; so that I could track the completion of this REST API call.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;img src=&quot;/img/api-compose-protection-store.png&quot; alt=&quot;API composing the protection store&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The figure below shows all the fields of &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/backup-recovery/public/openapi/backup-recovery-public-v1beta1/operation/ProtectionStoreCreate/&quot;&gt;this&lt;/a&gt; JSON body structure as provided in the interactive documentation of &lt;code&gt;POST /backup-recovery/v1beta1/protection-stores&lt;/code&gt; under the Payload tab.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;img src=&quot;/img/api-to-create-protection-stores-request-json-body.png&quot; alt=&quot;API Request body JSON to compose protection Store&quot;&gt;&lt;/p&gt;
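&lt;p&gt;Scripted, the creation call could look like the hedged sketch below. Only &lt;code&gt;storageSystemId&lt;/code&gt; and &lt;code&gt;storageLocationId&lt;/code&gt; come from the earlier steps; the &lt;code&gt;displayName&lt;/code&gt; and &lt;code&gt;protectionStoreType&lt;/code&gt; keys are illustrative assumptions, so check the Payload tab of the interactive documentation for the authoritative schema.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;# Hedged sketch: create the cloud protection store. Field names other than
# storageSystemId and storageLocationId are assumptions; verify them against
# the Payload tab of the interactive documentation.
$createBody = @{
    displayName         = &apos;Cloud_Protection_Store_eastus2&apos;   # illustrative name
    protectionStoreType = &apos;CLOUD&apos;                             # assumed field
    storageSystemId     = $storageSystemId                    # StoreOnce id from step 1
    storageLocationId   = $storageLocationId                  # e.g. azure:eastus2 from step 2
} | ConvertTo-Json

$response = Invoke-WebRequest -Method Post -Headers $headers `
    -ContentType &apos;application/json&apos; `
    -Uri &quot;$ApiBase/backup-recovery/v1beta1/protection-stores&quot; `
    -Body $createBody

# 202 Accepted: the asynchronous task id comes back in the Location response header
$taskId = ($response.Headers[&apos;Location&apos;] -split &apos;/&apos;)[-1]
&lt;/code&gt;&lt;/pre&gt;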
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;After roughly five minutes, the cloud protection store was completely created, based on the response of the following &lt;code&gt;GET /data-services/v1beta1/async-operations/{id}&lt;/code&gt; &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/data-services/public/openapi/data-services-public-v1beta1/operation/GetAsyncOperation/&quot;&gt;API&lt;/a&gt;, where &lt;code&gt;{{TaskId}}&lt;/code&gt; came from the previously executed API response. (A scripted polling sketch follows the figure below.)&lt;/li&gt;
&lt;/ol&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;/em&gt; I used a set of selection parameters in the figure below to provide a summary of this task information:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;GET /data-services/v1beta1/async-operations/{{taskId}}?select=associatedResources,createdAt,displayName,customerId,logMessages,progressPercent,state
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I copied the task’s Id from the response header’s location value of the prior API execution into a Postman’s variable called &lt;code&gt;{{taskId}}&lt;/code&gt;, and incorporated t﻿he variable to the &lt;code&gt;async-operations&lt;/code&gt; API execution as shown below.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;img src=&quot;/img/api-async-on-post-protection-stores.png&quot; alt=&quot;Task completion on POST protection-stores&quot;&gt;&lt;/p&gt;
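&lt;p&gt;For completeness, here is a hedged PowerShell sketch of the polling loop, reusing the &lt;code&gt;$taskId&lt;/code&gt; captured from the Location header; the terminal state names are assumptions and may differ from what the task API actually returns.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;# Hedged sketch: poll the async operation until it reaches a terminal state.
$select = &apos;associatedResources,createdAt,displayName,customerId,logMessages,progressPercent,state&apos;
do {
    Start-Sleep -Seconds 30
    $task = Invoke-RestMethod -Method Get -Headers $headers `
        -Uri &quot;$ApiBase/data-services/v1beta1/async-operations/${taskId}?select=$select&quot;
    Write-Host &apos;Progress:&apos; $task.progressPercent &apos;% State:&apos; $task.state
} while ($task.state -notin @(&apos;SUCCEEDED&apos;, &apos;FAILED&apos;))
&lt;/code&gt;&lt;/pre&gt;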
&lt;ol start=&quot;5&quot;&gt;
&lt;li&gt;For reference, I went into the Protection Stores menu as shown in the figure below to show the available HPE GreenLake Backup and Recovery p﻿rotection-stores, and confirmed that the desired protection store was available.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/ui-validated-cloud-protection-store-completed.png&quot; alt=&quot;Validating cloud protection store has been created&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Creating a Protection-Policy&lt;/h2&gt;
&lt;p&gt;A common use case that every user of HPE GreenLake Backup and Recovery will deploy is the creation of a protection policy. This resource is important because it sets up the schedule for creating recovery points. Additionally, it sets up the flow of a recovery point from primary storage to a storage snapshot, to the on-premises protection store, and eventually to the cloud protection store.&lt;/p&gt;
&lt;p&gt;A protection policy contains several JSON objects that are displayed in the figure b﻿elow.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;A schedule that contains the frequency for creation of the copy and time to retain the copies.&lt;/li&gt;
&lt;li&gt;Protection store where the copies are stored: snapshot (in the primary array), on-premises store, and cloud store.&lt;/li&gt;
&lt;li&gt;Length of time when the copy is being locked to satisfy immutability.&lt;/li&gt;
&lt;li&gt;The pre-script information that contains the link to the scripts to be executed prior to the creation of the copy.&lt;/li&gt;
&lt;li&gt;The post-script information that contains the link to the scripts to be executed after the creation of the copy.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/api-body-json-request-for-protection-policies.png&quot; alt=&quot;API request body JSON for protection policy creation&quot;&gt;&lt;/p&gt;
&lt;p&gt;﻿&lt;em&gt;The above figure shows the guide to create a protection-policy request body JSON structure.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;To simplify this example, I created a three-tier protection-policy for VMware as depicted by this snippet from the protection policy’s menu.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/backup-protection-policy-with-3-tiers.png&quot; alt=&quot;3 Tiers protection policy &quot;&gt;&lt;/p&gt;
&lt;p&gt;This is the list of steps used to create this protection policy:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;From the figure below, inside the protection store gateways menu, I discovered the serial number of the protection-store gateway and used it as the filtering parameter to obtain the protection store gateway instance.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/psg-ui-to-show-the-serial-no.png&quot; alt=&quot;PSG UI to obtain the serial no&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Afterward, I used the &lt;code&gt;GET /backup-recovery/v1beta1/protection-store-gateways&lt;/code&gt; &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/backup-recovery/public/openapi/backup-recovery-public-v1beta1/operation/ProtectionStoreGatewaysList&quot;&gt;API&lt;/a&gt; with &lt;code&gt;filter=serialNumber eq &amp;#x3C;serial number from the previous step&gt;&lt;/code&gt; to figure out the &lt;code&gt;&quot;&amp;#x3C;protection-store-gateway-id&gt;&quot;&lt;/code&gt; associated with the protection stores that would be incorporated into the protection policy. (A hedged PowerShell sketch of this lookup follows the figure below.)&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/api-display-registered-psg.png&quot; alt=&quot;API show registered PSG&quot;&gt;&lt;/p&gt;
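&lt;p&gt;The same lookup can be scripted as in the hedged sketch below, again reusing &lt;code&gt;$ApiBase&lt;/code&gt; and &lt;code&gt;$headers&lt;/code&gt; from the earlier sketches; the serial number placeholder is illustrative.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;# Hedged sketch: find the protection store gateway by its serial number and
# keep its id for the protection-store queries in the next steps.
$serial = &apos;&amp;#x3C;protection-store-gateway-serial-number&gt;&apos;
$filter = [uri]::EscapeDataString(&quot;serialNumber eq &apos;$serial&apos;&quot;)

$psg = Invoke-RestMethod -Method Get -Headers $headers `
    -Uri &quot;$ApiBase/backup-recovery/v1beta1/protection-store-gateways?filter=$filter&quot;

$psgId = $psg.items[0].id   # the &amp;#x3C;protection-store-gateway-id&gt;
&lt;/code&gt;&lt;/pre&gt;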
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;
&lt;p&gt;From the figure below, I used the &lt;code&gt;GET /backup-recovery/v1beta1/protection-stores&lt;/code&gt; &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/backup-recovery/public/openapi/backup-recovery-public-v1beta1/operation/ProtectionStoreList/&quot;&gt;API&lt;/a&gt; to obtain the protection-store ids for both the on-premises protection store and the cloud protection store. To display only the protection stores related to the protection store gateway, I used the filter parameter to select exactly the protection stores associated with the protection store gateway id of &lt;code&gt;“&amp;#x3C;protection-store-gateway-id&gt;”&lt;/code&gt;. The filter parameters I used are &lt;code&gt;protectionStoreType eq &apos;ON_PREMISES&apos;&lt;/code&gt; and &lt;code&gt;storageSystemInfo/id eq &apos;protection-store-gateway-id&apos;&lt;/code&gt;. Additionally, I used the &lt;code&gt;select&lt;/code&gt; parameter &lt;code&gt;name,displayName,id,status,state,protectionStoreType&lt;/code&gt; to produce a shorter response that simplifies the discovery of the on-premises protection store. The API used for this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;GET /backup-recovery/v1beta1/protection-stores?select=name,displayName,id,status,state,protectionStoreType&amp;#x26;filter=protectionStoreType eq ‘ON_PREMISES’ and storageSystemInfo/id eq “\&amp;#x3C;protection-store-gateway-id\&gt;”
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/api-to-get-onpremises-protection-store-id.png&quot; alt=&quot;API to obtain the onpremises protection store id&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;
&lt;p&gt;I repeated the same execution of &lt;code&gt;GET /backup-recovery/v1beta1/protection-stores&lt;/code&gt; to obtain the &lt;code&gt;&quot;&amp;#x3C;cloud-protection-store-id&gt;&quot;&lt;/code&gt;. To accomplish that, I used the following &lt;code&gt;filter: protectionStoreType eq &apos;CLOUD&apos; and storageSystemInfo/id eq “&amp;#x3C;protection-store-gateway-id&gt;”&lt;/code&gt;. Additionally, I used the parameter &lt;code&gt;select: name,displayName,id,status,state,protectionStoreType&lt;/code&gt; to provide a shorter response for simpler discovery of the cloud protection-store id. The API used for this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;GET /backup-recovery/v1beta1/protection-stores?select=name,displayName,id,status,state,protectionStoreType&amp;#x26;filter=protectionStoreType eq ‘CLOUD’ and storageSystemInfo/id eq “&amp;#x3C;protection-store-gateway-id&gt;”
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/api-discover-cloud-protection-store-id.png&quot; alt=&quot;API to obtain cloud protection store id&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;5&quot;&gt;
&lt;li&gt;For the next step, I created a request body JSON structure that represents the protection policy schedule and each of the protection stores. Inside this request body JSON structure, I defined the three objects that represent &lt;code&gt;SNAPSHOT, BACKUP (on-premises), CLOUD_BACKUP&lt;/code&gt;. Note that this structure can be expanded or contracted depending on the required backup strategy. The SNAPSHOT object does not require a &lt;code&gt;&quot;&amp;#x3C;protection-store-Id&gt;&quot;&lt;/code&gt; because those recovery points exist inside the primary storage array. This request JSON body structure is required to create the protection policy using the HPE GreenLake &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/backup-recovery/public/openapi/backup-recovery-public-v1beta1/operation/DataManagementTemplateCreate/&quot;&gt;API&lt;/a&gt; &lt;code&gt;POST /backup-recovery/v1beta1/protection-policies&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; I didn’t include objects for immutability, prescript, and postscript in the JSON structure. If a feature is not intended, you don’t need to include its unused key-pair values in the JSON structure. Additionally, the SNAPSHOT object does not require a &lt;code&gt;protectionStoreId&lt;/code&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &quot;name&quot;: &quot;VMware create three tiers&quot;,
  &quot;description&quot;: &quot;Snapshot-local-cloud&quot;,
  &quot;protections&quot;: [
    {
      &quot;type&quot;: &quot;SNAPSHOT&quot;,
      &quot;schedules&quot;: [
        {
          &quot;scheduleId&quot;: 1,
          &quot;name&quot;: &quot;Array_Snapshot_1&quot;,
          &quot;namePattern&quot;: {
            &quot;format&quot;: &quot;Array_Snapshot_{DateFormat}&quot;
          },
          &quot;expireAfter&quot;: {
            &quot;unit&quot;: &quot;DAYS&quot;,
            &quot;value&quot;: 1
          },
          &quot;schedule&quot;: {
            &quot;recurrence&quot;: &quot;HOURLY&quot;,
            &quot;repeatInterval&quot;: {
              &quot;every&quot;: 4
            },
            &quot;activeTime&quot;: {
              &quot;activeFromTime&quot;: &quot;00:00&quot;,
              &quot;activeUntilTime&quot;: &quot;23:59&quot;
            }
          }
        }
      ]
    },
    {
      &quot;type&quot;: &quot;BACKUP&quot;,
      &quot;protectionStoreId&quot;: &quot;&amp;#x3C;onprem-protection-store-id&gt;&quot;,
      &quot;schedules&quot;: [
        {
          &quot;scheduleId&quot;: 2,
          &quot;name&quot;: &quot;On-Premises_Protection_Store_2&quot;,
          &quot;sourceProtectionScheduleId&quot;: 1,
          &quot;namePattern&quot;: {
            &quot;format&quot;: &quot;On-Premises_Protection_Store_{DateFormat}&quot;
          },
          &quot;expireAfter&quot;: {
            &quot;unit&quot;: &quot;DAYS&quot;,
            &quot;value&quot;: 3
          },
          &quot;schedule&quot;: {
            &quot;recurrence&quot;: &quot;DAILY&quot;,
            &quot;repeatInterval&quot;: {
              &quot;every&quot;: 1
            },
            &quot;startTime&quot;: &quot;00:00&quot;
          }
        }
      ]
    },
    {
      &quot;type&quot;: &quot;CLOUD_BACKUP&quot;,
      &quot;protectionStoreId&quot;: &quot;&amp;#x3C;cloud-protection-stores-id&gt;&quot;,
      &quot;schedules&quot;: [
        {
          &quot;scheduleId&quot;: 3,
          &quot;name&quot;: &quot;HPE_Cloud_Protection_Store_3&quot;,
          &quot;sourceProtectionScheduleId&quot;: 2,
          &quot;namePattern&quot;: {
            &quot;format&quot;: &quot;HPE_Cloud_Protection_Store_{DateFormat}&quot;
          },
          &quot;expireAfter&quot;: {
            &quot;unit&quot;: &quot;WEEKS&quot;,
            &quot;value&quot;: 1
          },
          &quot;schedule&quot;: {
            &quot;recurrence&quot;: &quot;DAILY&quot;,
            &quot;repeatInterval&quot;: {
              &quot;every&quot;: 2
            },
            &quot;startTime&quot;: &quot;00:00&quot;
          }
        }
      ]
    }
  ],
  &quot;applicationType&quot;: &quot;VMWARE&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;The above listing shows the JSON structure for the request body of &lt;code&gt;POST /backup-recovery/v1beta1/protection-policies&lt;/code&gt; used to create the protection policy in this example.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;ol start=&quot;6&quot;&gt;
&lt;li&gt;Finally, I created the protection policy using the HPE GreenLake API for Backup and Recovery &lt;code&gt;POST /backup-recovery/v1beta1/protection-policies&lt;/code&gt;, using the above JSON structure in the request body. Unlike the protection-store creation, this POST returns its response immediately. The response of this API contains a JSON body with values that are useful for identifying the protection jobs later, such as the &lt;code&gt;“&amp;#x3C;protection-policies-id&gt;”&lt;/code&gt;. (A hedged PowerShell sketch of this call follows the figure below.)&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/api-to-create-protection-policy.png&quot; alt=&quot;API to create protection policy&quot;&gt;&lt;/p&gt;
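&lt;p&gt;If you are scripting this step, a hedged PowerShell sketch could post the JSON body shown above from a local file (the file name is illustrative) and keep the ids returned in the synchronous response:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;# Hedged sketch: create the protection policy from the request body shown above,
# saved locally as protection-policy.json (illustrative file name).
$policyBody = Get-Content -Raw ./protection-policy.json

$policy = Invoke-RestMethod -Method Post -Headers $headers `
    -ContentType &apos;application/json&apos; `
    -Uri &quot;$ApiBase/backup-recovery/v1beta1/protection-policies&quot; `
    -Body $policyBody

# The response is returned synchronously; keep the ids needed later
$policy.id                                     # &amp;#x3C;protection-policies-id&gt;
$policy.protections | Select-Object id, type   # SNAPSHOT / BACKUP / CLOUD_BACKUP ids
&lt;/code&gt;&lt;/pre&gt;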
&lt;ol start=&quot;7&quot;&gt;
&lt;li&gt;
&lt;p&gt;The listing below shows the complete response JSON body from the above API, illustrating the construction of the protection policy with its different protection tiers and the schedules associated with each tier. The important values are the ids for the different protection tiers, which will be used in the next example.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;ID: “&amp;#x3C;protection-policies-id&gt;&quot;
SNAPSHOT: “&amp;#x3C;snapshot-protection-id&gt;”
ON-PREMISES: “&amp;#x3C;onprem-protection-id&gt;”
CLOUD: “&amp;#x3C;cloud-protection-id&gt;”
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;blockquote&gt;
&lt;p&gt;T﻿he full listing of the response body from &lt;code&gt;POST /backup-recovery/v1beta1/protection-policies&lt;/code&gt; is shown in the below code snippet:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-jsonc&quot;&gt;{
    &quot;assigned&quot;: false,
    &quot;description&quot;: &quot;Snapshot-local-cloud&quot;,
    &quot;id&quot;: &quot;&amp;#x3C;protection-policies-id&gt;&quot;,
    &quot;name&quot;: &quot;VMware create three tiers&quot;,
    &quot;protections&quot;: [
        {
            &quot;id&quot;: &quot;&amp;#x3C;snapshot-protection-id&gt;&quot;,           
            &quot;schedules&quot;: [
                {
                    &quot;scheduleId&quot;: 1,
                    &quot;name&quot;: &quot;Array_Snapshot_1&quot;,
                    &quot;schedule&quot;: {
                        &quot;activeTime&quot;: {
                            &quot;activeFromTime&quot;: &quot;00:00&quot;,
                            &quot;activeUntilTime&quot;: &quot;23:59&quot;
                        },
                        &quot;recurrence&quot;: &quot;HOURLY&quot;,
                        &quot;repeatInterval&quot;: {
                            &quot;every&quot;: 4
                        }
                    },
                    &quot;expireAfter&quot;: {
                        &quot;unit&quot;: &quot;DAYS&quot;,
                        &quot;value&quot;: 1
                    },
                    &quot;namePattern&quot;: {
                        &quot;format&quot;: &quot;Array_Snapshot_{DateFormat}&quot;
                    }
                }
            ],
            &quot;type&quot;: &quot;SNAPSHOT&quot;
        },
        {
            &quot;id&quot;: &quot;&amp;#x3C;onprem-protection-id&gt;&quot;,
            &quot;schedules&quot;: [
                {
                    &quot;scheduleId&quot;: 2,
                    &quot;sourceProtectionScheduleId&quot;: 1,
                    &quot;name&quot;: &quot;On-Premises_Protection_Store_2&quot;,
                    &quot;schedule&quot;: {
                        &quot;recurrence&quot;: &quot;DAILY&quot;,
                        &quot;repeatInterval&quot;: {
                            &quot;every&quot;: 1
                        },
                        &quot;startTime&quot;: &quot;00:00&quot;
                    },
                    &quot;expireAfter&quot;: {
                        &quot;unit&quot;: &quot;DAYS&quot;,
                        &quot;value&quot;: 3
                    },
                    &quot;namePattern&quot;: {
                        &quot;format&quot;: &quot;On-Premises_Protection_Store_{DateFormat}&quot;
                    }
                }
            ],
            &quot;protectionStoreInfo&quot;: {
                &quot;id&quot;: &quot;onprem-protection-store-id&quot;,
                &quot;name&quot;: &quot;Local_CDS-TPM-PSG#1&quot;,
                &quot;type&quot;: &quot;backup-recovery/protection-store&quot;,
                &quot;resourceUri&quot;: &quot;/backup-recovery/v1beta1/protection-stores/501a99e7-fb79-4fd7-89d5-a5dfb3441859&quot;,
                &quot;protectionStoreType&quot;: &quot;ON_PREMISES&quot;
            },
            &quot;type&quot;: &quot;BACKUP&quot;
        },
        {
            &quot;id&quot;: &quot;&amp;#x3C;cloud-protection-id&gt;&quot;,
            &quot;schedules&quot;: [
                {
                    &quot;scheduleId&quot;: 3,
                    &quot;sourceProtectionScheduleId&quot;: 2,
                    &quot;name&quot;: &quot;HPE_Cloud_Protection_Store_3&quot;,
                    &quot;schedule&quot;: {
                        &quot;recurrence&quot;: &quot;DAILY&quot;,
                        &quot;repeatInterval&quot;: {
                            &quot;every&quot;: 2
                        },
                        &quot;startTime&quot;: &quot;00:00&quot;
                    },
                    &quot;expireAfter&quot;: {
                        &quot;unit&quot;: &quot;WEEKS&quot;,
                        &quot;value&quot;: 1
                    },
                    &quot;namePattern&quot;: {
                        &quot;format&quot;: &quot;HPE_Cloud_Protection_Store_{DateFormat}&quot;
                    }
                }
            ],
            &quot;protectionStoreInfo&quot;: {
                &quot;id&quot;: &quot;cloud-protection-store-id&quot;,
                &quot;name&quot;: &quot;Cloud_CDS-TPM-PSGno1_USA, North Virginia&quot;,
                &quot;type&quot;: &quot;backup-recovery/protection-store&quot;,
                &quot;region&quot;: &quot;USA, North Virginia&quot;,
                &quot;resourceUri&quot;: &quot;/backup-recovery/v1beta1/protection-stores/25611df9-xxxx-xxxx-xxxx-aa3a4887869f&quot;,
                &quot;protectionStoreType&quot;: &quot;CLOUD&quot;
            },
            &quot;type&quot;: &quot;CLOUD_BACKUP&quot;
        }
    ],
    &quot;createdAt&quot;: &quot;2024-04-05T03:28:03.000000Z&quot;,
    &quot;createdBy&quot;: {
        &quot;id&quot;: &quot;&amp;#x3C;user-id&gt;&quot;,
        &quot;name&quot;: &quot;ronald.dharma@hpe.com&quot;
    },
    &quot;generation&quot;: 1,
    &quot;resourceUri&quot;: &quot;/backup-recovery/v1beta1/protection-policies/f572ce6e-xxxx-xxxx-xxxx-530284bf2bc4&quot;,
    &quot;consoleUri&quot;: &quot;/backup-and-recovery/protection-policies/f572ce6e-xxxx-xxxx-xxxx-530284bf2bc4&quot;,
    &quot;applicationType&quot;: &quot;VMWARE&quot;,
    &quot;type&quot;: &quot;backup-recovery/protection-policy&quot;,
    &quot;updatedAt&quot;: &quot;2024-04-05T03:28:03.000000Z&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Applying protection policy to a Virtual Machine&lt;/h2&gt;
&lt;p&gt;Continuing through the Day One activities covered in this blog post, the next use case is to protect a virtual machine by applying the protection policy created above to it.&lt;/p&gt;
&lt;p&gt;These are the steps required to apply the protection policy against a virtual machine:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;I obtained the values of the virtual machine &lt;code&gt;id&lt;/code&gt;, &lt;code&gt;name&lt;/code&gt;, and &lt;code&gt;type&lt;/code&gt; keys that were going to be used as the key-value pairs required for the &lt;code&gt;assetInfo&lt;/code&gt; key, as shown in the figures below. To obtain them, I used one of the HPE GreenLake &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/virtualization/public/openapi/virtualization-public-v1beta1/operation/VirtualMachinesList/&quot;&gt;APIs&lt;/a&gt; from the Virtualization API set to discover the detailed information of the virtual machine &lt;code&gt;&quot;0-Linux-Demo-VM02&quot;&lt;/code&gt; (a scripted version of this lookup follows the figure below). The API used for this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;GET /virtualization/v1beta1/virtual-machines?sort=name desc&amp;#x26;select=appType,id,name,type,guestInfo,protectionJobInfo&amp;#x26;filter=name eq &apos;0-Linux-Demo-VM02&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/api-find-virtual-machines.png&quot; alt=&quot;API to find virtual-machines and its properties&quot;&gt;&lt;/p&gt;
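&lt;p&gt;If you prefer to script this lookup instead of using Postman, here is a minimal Python sketch of the same call. It assumes the &lt;code&gt;requests&lt;/code&gt; library, a placeholder base URL, and an access token you have already generated; the endpoint and query parameters are the ones shown above.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import requests

BASE_URL = &quot;https://your-greenlake-api-base-url&quot;   # placeholder: use the regional base URL from the docs
HEADERS = {&quot;Authorization&quot;: &quot;Bearer YOUR_ACCESS_TOKEN&quot;}

def find_virtual_machine(name):
    # Query the Virtualization API for a virtual machine by display name
    params = {
        &quot;filter&quot;: f&quot;name eq &apos;{name}&apos;&quot;,
        &quot;select&quot;: &quot;appType,id,name,type,guestInfo,protectionJobInfo&quot;,
    }
    resp = requests.get(f&quot;{BASE_URL}/virtualization/v1beta1/virtual-machines&quot;,
                        headers=HEADERS, params=params, timeout=60)
    resp.raise_for_status()
    items = resp.json().get(&quot;items&quot;, [])
    return items[0] if items else None

vm = find_virtual_machine(&quot;0-Linux-Demo-VM02&quot;)
if vm:
    # id, name and type feed the assetInfo object used later in this post
    print(vm[&quot;id&quot;], vm[&quot;name&quot;], vm[&quot;type&quot;])
&lt;/code&gt;&lt;/pre&gt;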
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;I created a JSON request body for applying the protection policy to a virtual machine without a protection group, to keep the example simple for this blog post. &lt;strong&gt;Note&lt;/strong&gt; that this JSON body creates the association between the asset, which is a virtual machine, and the protection policy. The two other parameters entered here were the consistency &lt;code&gt;(CRASH vs APPLICATION)&lt;/code&gt; and the protection technology &lt;code&gt;(VMWARE_CBT vs VOLUME)&lt;/code&gt;. The three values obtained from the response of the earlier API &lt;code&gt;POST /backup-recovery/v1beta1/protection-policies&lt;/code&gt;, associated with SNAPSHOT, BACKUP, and CLOUD, are used as part of this request body. Each of these protections is identified by a schedule id of 1, 2, or 3 as shown below. Other options available in this request body are documented in the Payload tab of the interactive guide for this &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/backup-recovery/public/openapi/backup-recovery-public-v1beta1/operation/DataManagementJobCreate/&quot;&gt;API&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; As of March 2024, the value of &lt;code&gt;type&lt;/code&gt; is translated from &lt;code&gt;virtualization/virtual-machine&lt;/code&gt; to &lt;code&gt;hybrid-cloud/virtual-machine&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-jsonc&quot;&gt;{
   &quot;assetInfo&quot;:{
      &quot;id&quot;: &quot;&amp;#x3C;virtual-machine-id&gt;&quot;,
      &quot;type&quot;:&quot;hybrid-cloud/virtual-machine&quot;,
      &quot;name&quot;:&quot;0-Linux-Demo-VM02&quot;
   },
   &quot;protectionPolicyId&quot;: &quot;&amp;#x3C;protection-policies-id&gt;&quot;,
   &quot;overrides&quot;:{
      &quot;protections&quot;:[
         {
            &quot;id&quot;: &quot;&amp;#x3C;snapshot-protection-id&gt;&quot;,
            &quot;schedules&quot;:[
               {
                  &quot;scheduleId&quot;:1,
                  &quot;consistency&quot;:&quot;CRASH&quot;,
                  &quot;backupGranularity&quot;:&quot;VMWARE_CBT&quot;
               }
            ]
         },
         {
            &quot;id&quot;: &quot;&amp;#x3C;backup-protection-id&gt;&quot;,
            &quot;schedules&quot;:[
               {
                  &quot;scheduleId&quot;:2,
                  &quot;consistency&quot;:&quot;CRASH&quot;,
                  &quot;backupGranularity&quot;:&quot;VMWARE_CBT&quot;
               }
            ]
         },
         {
            &quot;id&quot;: &quot;&amp;#x3C;cloud-protection-id&gt;&quot;,
            &quot;schedules&quot;:[
               {
                  &quot;scheduleId&quot;:3,
                  &quot;consistency&quot;:&quot;CRASH&quot;,
                  &quot;backupGranularity&quot;:&quot;VMWARE_CBT&quot;
               }
            ]
         }
      ]
   }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/blockquote&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;In the figure below, I used the HPE GreenLake &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/backup-recovery/public/openapi/backup-recovery-public-v1beta1/operation/DataManagementJobCreate/&quot;&gt;API&lt;/a&gt; for Backup and Recovery &lt;code&gt;POST /backup-recovery/v1beta1/protection-jobs&lt;/code&gt; with the JSON body above so that I could associate the protection policy with the virtual machine. This POST API executes asynchronously and returned status &lt;code&gt;202&lt;/code&gt; with a response. The response header contained the &lt;code&gt;{{taskId}}&lt;/code&gt; that I could use to ensure that this API completed properly.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/api-applying-the-protection-jobs-against-a-vm.png&quot; alt=&quot;API to apply protection policy against the VM&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;The API executed successfully. I validated its completion using the task id obtained from the response headers, as shown above; a small polling sketch follows the figure below. The API used for this: &lt;code&gt;GET /data-services/v1beta1/async-operations/{{taskId}}&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: The execution of this API triggers a protection run immediately after the API call completes.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;img src=&quot;/img/api-task-list-for-protection-policy-application.png&quot; alt=&quot;Task list to display completion of the application of protection policy against a VM producing protection jobs&quot;&gt;&lt;/p&gt;
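&lt;p&gt;Since every POST in this post completes asynchronously, the same polling pattern comes back again and again. The following Python sketch, assuming the &lt;code&gt;requests&lt;/code&gt; library and placeholder credentials, polls the &lt;code&gt;async-operations&lt;/code&gt; endpoint until the task reaches a terminal state; the state values shown are the ones observed in this post, so check the API reference for the full list.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import time
import requests

BASE_URL = &quot;https://your-greenlake-api-base-url&quot;   # placeholder
HEADERS = {&quot;Authorization&quot;: &quot;Bearer YOUR_ACCESS_TOKEN&quot;}

def wait_for_task(task_id, interval=30):
    # Poll GET /data-services/v1beta1/async-operations/{taskId} until it finishes.
    # SUCCEEDED and FAILED are the terminal states assumed here.
    url = f&quot;{BASE_URL}/data-services/v1beta1/async-operations/{task_id}&quot;
    while True:
        task = requests.get(url, headers=HEADERS, timeout=60).json()
        state = task.get(&quot;state&quot;)
        print(task.get(&quot;displayName&quot;), state, task.get(&quot;progressPercent&quot;))
        if state in (&quot;SUCCEEDED&quot;, &quot;FAILED&quot;):
            return task
        time.sleep(interval)

# The task id comes from the Location header of the asynchronous POST response, e.g.
# task_id = post_response.headers[&quot;Location&quot;].rsplit(&quot;/&quot;, 1)[-1]
&lt;/code&gt;&lt;/pre&gt;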
&lt;h2&gt;Triggering a protection&lt;/h2&gt;
&lt;p&gt;Once the protection policy named &quot;VMware create three tiers&quot; was bound to the virtual machine as shown above, I issued a trigger against the virtual machine &lt;code&gt;&quot;0-Linux-Demo-VM02&quot;&lt;/code&gt; to create a one-off cloud protection.&lt;/p&gt;
&lt;p&gt;The steps required to trigger the protection are listed below:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;I figured out the &lt;code&gt;protection-job-id&lt;/code&gt; and the cloud backup &lt;code&gt;schedule Id&lt;/code&gt; associated with &lt;code&gt;&quot;0-Linux-Demo-VM02&quot;&lt;/code&gt;. To achieve that, I used the HPE GreenLake &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/backup-recovery/public/openapi/backup-recovery-public-v1beta1/operation/DataManagementJobsList&quot;&gt;API&lt;/a&gt; for Backup and Recovery &lt;code&gt;GET /backup-recovery/v1beta1/protection-jobs&lt;/code&gt; with the filter &lt;code&gt;assetInfo/id eq {{vmId}}&lt;/code&gt; as shown below (a scripted version of this query follows the figure). Note that the variable &lt;code&gt;{{vmId}}&lt;/code&gt; contains the value of the virtual machine id discovered in the previous example, namely &lt;code&gt;&quot;&amp;#x3C;virtual-machine-id&gt;&quot;&lt;/code&gt;. The response body&apos;s JSON structure contains the id of the protection job associated with &lt;code&gt;&quot;0-Linux-Demo-VM02&quot;&lt;/code&gt;. From the same response body, I recognized that cloud protection uses &lt;code&gt;scheduleId&lt;/code&gt; number 3. The API used for this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;GET /backup-recovery/v1beta1/protection-jobs?filter=assetInfo/id eq {{vmId}}&amp;#x26;select=assetInfo,id,operational,protections
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/api-to-figure-out-protection-jobs.png&quot; alt=&quot;API to figure out protection-jobs&quot;&gt;&lt;/p&gt;
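&lt;p&gt;The same lookup can be scripted. This minimal Python sketch, assuming the &lt;code&gt;requests&lt;/code&gt; library and placeholder credentials, lists the protection jobs filtered by the asset id, exactly as in the query above.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import requests

BASE_URL = &quot;https://your-greenlake-api-base-url&quot;   # placeholder
HEADERS = {&quot;Authorization&quot;: &quot;Bearer YOUR_ACCESS_TOKEN&quot;}

def find_protection_job(vm_id):
    # List protection jobs filtered by the asset (virtual machine) id
    params = {
        &quot;filter&quot;: f&quot;assetInfo/id eq {vm_id}&quot;,
        &quot;select&quot;: &quot;assetInfo,id,operational,protections&quot;,
    }
    resp = requests.get(f&quot;{BASE_URL}/backup-recovery/v1beta1/protection-jobs&quot;,
                        headers=HEADERS, params=params, timeout=60)
    resp.raise_for_status()
    items = resp.json().get(&quot;items&quot;, [])
    return items[0] if items else None

job = find_protection_job(&quot;your-virtual-machine-id&quot;)
if job:
    # The protections array lists each schedule; the cloud backup is scheduleId 3 here
    print(&quot;protection job id:&quot;, job[&quot;id&quot;])
&lt;/code&gt;&lt;/pre&gt;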
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;This use case is equivalent to clicking the &lt;code&gt;&quot;Run Now&quot;&lt;/code&gt; button on the schedule of &lt;code&gt;&quot;0-Linux-Demo-VM02&quot;&lt;/code&gt; for the selected cloud protection, as shown in the figure below. After clicking the &lt;code&gt;&quot;Run Now&quot;&lt;/code&gt; button, a cloud protection run commences against the virtual machine and a recovery point is created in the cloud protection store. In the next step, I invoke &lt;code&gt;&quot;Run Now&quot;&lt;/code&gt; on the &lt;code&gt;HPE Cloud Protection Store&lt;/code&gt; using the HPE GreenLake API for Backup and Recovery.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/gui-run-now-cloud-protection.png&quot; alt=&quot;UI for run now protection jobs&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;
&lt;p&gt;The HPE GreenLake API used to accomplish the use case shown above was &lt;code&gt;POST /backup-recovery/v1beta1/protection-jobs/:id/run&lt;/code&gt;, and the documentation of this &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/backup-recovery/public/openapi/backup-recovery-public-v1beta1/operation/DataManagementJobRun&quot;&gt;API&lt;/a&gt; also lists the required JSON structure for the request body. I created a JSON request body for this example as follows:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-jsonc&quot;&gt;{
   &quot;fullBackup&quot;: false,
   &quot;ScheduleIds&quot;: [3]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; There is a key called &lt;code&gt;&quot;fullBackup&quot;&lt;/code&gt; inside the JSON request body to enable the creation of a full protection, where a backup is created independently from the existing copies in the protection store. I also entered the number 3 into the &lt;code&gt;ScheduleIds&lt;/code&gt; JSON array to represent the cloud backup schedule. The figure below shows an example of executing &lt;code&gt;&quot;run now&quot;&lt;/code&gt; without full backup for the third schedule, which is the cloud protection of this virtual machine; a scripted sketch of this call follows the figure. The value &lt;code&gt;&amp;#x3C;protection-jobs-id&gt;&lt;/code&gt; is passed as the path parameter of this API in this manner: &lt;code&gt;POST /backup-recovery/v1beta1/protection-jobs/&quot;&amp;#x3C;protection-jobs-id&gt;&quot;/run&lt;/code&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;img src=&quot;/img/api-to-execute-a-protection.png&quot; alt=&quot;API to execute a protection run&quot;&gt;&lt;/p&gt;
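&lt;p&gt;For completeness, here is a hedged Python sketch of the same run-now call. The request body mirrors the one shown above (check the API reference for the exact key casing), and the placeholder ids must be replaced with the values discovered earlier.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import requests

BASE_URL = &quot;https://your-greenlake-api-base-url&quot;   # placeholder
HEADERS = {&quot;Authorization&quot;: &quot;Bearer YOUR_ACCESS_TOKEN&quot;}

def run_protection_job(job_id, schedule_ids, full_backup=False):
    # Trigger an on-demand run of the selected schedule(s) for a protection job.
    # Key names mirror the request body shown above; check the API reference for exact casing.
    body = {&quot;fullBackup&quot;: full_backup, &quot;ScheduleIds&quot;: schedule_ids}
    resp = requests.post(f&quot;{BASE_URL}/backup-recovery/v1beta1/protection-jobs/{job_id}/run&quot;,
                         headers=HEADERS, json=body, timeout=60)
    resp.raise_for_status()
    # The Location response header points at the async-operations task to poll
    return resp.headers.get(&quot;Location&quot;)

task_uri = run_protection_job(&quot;your-protection-jobs-id&quot;, [3])
print(task_uri)
&lt;/code&gt;&lt;/pre&gt;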
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;The result of the API execution above can be validated with the API &lt;code&gt;/data-services/v1beta1/async-operations/:id&lt;/code&gt;, using the task id obtained from the above response header. The API used for this: &lt;code&gt;GET /data-services/v1beta1/async-operations/{{taskId}}&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/api-task-list-after-a-run-execution.png&quot; alt=&quot;API async-operations of execution of protection-jobs&quot;&gt;&lt;/p&gt;
&lt;p&gt;The activities above were validated from the HPE GreenLake Backup and Recovery list of the recovery points as shown in the figure below.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/gui-cloud-run-now-completed-succesfully.png&quot; alt=&quot;GUI display the completed cloud protection run&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Wow… those were so cool! Can I recover a new virtual machine from the recovery point that I just created?&lt;/h2&gt;
&lt;p&gt;Each of the recovery points, regardless of the location of store (array snapshot, On-Premises Protection-Store, or HPE Cloud Protection-Store), can be recovered using the HPE GreenLake APIs for Backup and Recovery.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;To restore a backup of a virtual machine, the restore API requires a JSON request body as documented on the HPE GreenLake Developer website. In this blog post, I demonstrate the steps to recover the virtual machine from a copy that exists in the HPE Cloud Protection Store into the VMware cluster where the Protection Storage Gateway is hosted (recover as a new virtual machine). Copy the &lt;code&gt;{{vmId}}&lt;/code&gt; from the response JSON body to be used in subsequent API executions. The API used for this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;GET /virtualization/v1beta1/virtual-machines?sort=name desc&amp;#x26;filter=name eq &apos;0-Linux-Demo-VM02&apos;&amp;#x26;select=appType,id,name,type,guestInfo,protectionJobInfo
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/api-to-discover-vm-for-recovery.png&quot; alt=&quot;API to discover backup for recovery of VM&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;
&lt;p&gt;Let&apos;s obtain the cloud recovery point id for the cloud protection recovery, given the virtual machine id from step #1, using the HPE GreenLake &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/backup-recovery/public/openapi/backup-recovery-public-v1beta1/operation/VirtualMachineBackupList/&quot;&gt;API&lt;/a&gt; &lt;code&gt;GET /backup-recovery/v1beta1/virtual-machines/:id/backups&lt;/code&gt; with the virtual machine id &lt;code&gt;{{vmId}}&lt;/code&gt;. Copy the &lt;code&gt;{{backupId}}&lt;/code&gt; from the response body shown in the figure below so that it can be used in the subsequent API execution (a scripted version follows the figure). The API used for this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;GET /backup-recovery/v1beta1/virtual-machines/{{vmId}}/backups?select=name,description,backupType,id
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/api-to-obtain-backup-id-for-recovery.png&quot; alt=&quot;API to obtain the backup Id of a cloud recovery point&quot;&gt;&lt;/p&gt;
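&lt;p&gt;A scripted version of this lookup might look like the following Python sketch, assuming the &lt;code&gt;requests&lt;/code&gt; library and placeholder credentials. It simply lists the recovery points of the virtual machine so you can pick out the cloud backup id.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import requests

BASE_URL = &quot;https://your-greenlake-api-base-url&quot;   # placeholder
HEADERS = {&quot;Authorization&quot;: &quot;Bearer YOUR_ACCESS_TOKEN&quot;}

def list_vm_backups(vm_id):
    # List the recovery points (backups) of a virtual machine
    url = f&quot;{BASE_URL}/backup-recovery/v1beta1/virtual-machines/{vm_id}/backups&quot;
    params = {&quot;select&quot;: &quot;name,description,backupType,id&quot;}
    resp = requests.get(url, headers=HEADERS, params=params, timeout=60)
    resp.raise_for_status()
    return resp.json().get(&quot;items&quot;, [])

for backup in list_vm_backups(&quot;your-virtual-machine-id&quot;):
    # Inspect backupType and name to spot the cloud recovery point, then keep its id
    print(backup[&quot;id&quot;], backup.get(&quot;backupType&quot;), backup.get(&quot;name&quot;))
&lt;/code&gt;&lt;/pre&gt;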
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;
&lt;p&gt;Let&apos;s obtain the &lt;code&gt;datastore id&lt;/code&gt; and the &lt;code&gt;cluster id&lt;/code&gt; of an existing VMFS datastore that can accommodate the restored virtual machine. In this hypervisor, I am using the datastore named &lt;code&gt;&quot;0-BRS-VMFS-Test3&quot;&lt;/code&gt; and entering it as part of the filter in the HPE GreenLake &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/virtualization/public/openapi/virtualization-public-v1beta1/operation/DatastoresList&quot;&gt;API&lt;/a&gt; &lt;code&gt;GET /virtualization/v1beta1/datastores&lt;/code&gt;. The API used for this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;GET /virtualization/v1beta1/datastores?filter=displayName eq &apos;0-BRS-VMFS-Test3&apos;&amp;#x26;select=clusterInfo,datastoreType,name,id
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/api-obtain-cluster-and-datastore.png&quot; alt=&quot;API to obtain the cluster and datastore Ids&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;
&lt;p&gt;To obtain the hypervisor network id that is required to recover the recovery point into a new virtual machine, I first had to discover the hypervisor id. The &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/virtualization/public/openapi/virtualization-public-v1beta1/operation/HypervisorManagerList/&quot;&gt;API&lt;/a&gt; used for this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;GET /virtualization/v1beta1/hypervisor-managers?select=name,id,state,status,dataOrchestratorInfo,services,hypervisorManagerType,releaseVersion&amp;#x26;filter=state eq &quot;OK&quot; and status eq &quot;OK&quot; and name eq &quot;vCenter Name&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/api-obtain-hypervisor-id.png&quot; alt=&quot;API to get hypervisorId&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;5&quot;&gt;
&lt;li&gt;
&lt;p&gt;To set the virtual machine network to the correct network port group, I needed to look into the existing virtual machine &lt;code&gt;&quot;0-Linux-Demo-VM02&quot;&lt;/code&gt;, where I discovered the network port group &lt;code&gt;&quot;Mgmt-DPortGroup&quot;&lt;/code&gt;. From the list of network port groups, I selected the one associated with the virtual machine by filtering on the port group name. Afterward, I copied the &lt;code&gt;&quot;&amp;#x3C;hypervisor-network-id&gt;&quot;&lt;/code&gt; from the response body of this &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/virtualization/public/openapi/virtualization-public-v1beta1/operation/HypervisorNetwork/&quot;&gt;API&lt;/a&gt; to be used in the subsequent execution (a combined lookup sketch for steps 3 through 5 follows the figure below). The API used for this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;GET /virtualization/v1beta1/hypervisor-managers/{{hyperVisorId}}/networks?select=id,displayName&amp;#x26;filter=displayName eq &apos;Mgmt-DPortGroup&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/api-to-get-the-network-id.png&quot; alt=&quot;API to get network ID for VM&quot;&gt;&lt;/p&gt;
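&lt;p&gt;Steps 3 through 5 are all simple GET calls, so they can be combined into one Python sketch. The endpoints and filters below are the ones shown in the preceding steps; the base URL, token, and the choice of single-quoted filter literals are assumptions on my part.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import requests

BASE_URL = &quot;https://your-greenlake-api-base-url&quot;   # placeholder
HEADERS = {&quot;Authorization&quot;: &quot;Bearer YOUR_ACCESS_TOKEN&quot;}

def get_items(path, params):
    resp = requests.get(f&quot;{BASE_URL}{path}&quot;, headers=HEADERS, params=params, timeout=60)
    resp.raise_for_status()
    return resp.json().get(&quot;items&quot;, [])

# Step 3: datastore and cluster ids
datastores = get_items(&quot;/virtualization/v1beta1/datastores&quot;,
                       {&quot;filter&quot;: &quot;displayName eq &apos;0-BRS-VMFS-Test3&apos;&quot;,
                        &quot;select&quot;: &quot;clusterInfo,datastoreType,name,id&quot;})

# Step 4: hypervisor manager id
hypervisors = get_items(&quot;/virtualization/v1beta1/hypervisor-managers&quot;,
                        {&quot;filter&quot;: &quot;state eq &apos;OK&apos; and status eq &apos;OK&apos; and name eq &apos;vCenter Name&apos;&quot;,
                         &quot;select&quot;: &quot;name,id,state,status&quot;})
hypervisor_id = hypervisors[0][&quot;id&quot;]

# Step 5: network (port group) id
networks = get_items(f&quot;/virtualization/v1beta1/hypervisor-managers/{hypervisor_id}/networks&quot;,
                     {&quot;filter&quot;: &quot;displayName eq &apos;Mgmt-DPortGroup&apos;&quot;,
                      &quot;select&quot;: &quot;id,displayName&quot;})

print(datastores[0][&quot;id&quot;], datastores[0][&quot;clusterInfo&quot;], hypervisor_id, networks[0][&quot;id&quot;])
&lt;/code&gt;&lt;/pre&gt;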
&lt;ol start=&quot;6&quot;&gt;
&lt;li&gt;
&lt;p&gt;After I obtained all the values required to build the request body to recover a cloud recovery point of &lt;code&gt;&quot;0-Linux-Demo-VM02&quot;&lt;/code&gt;, I constructed the JSON request body as shown in the figure below. To restore the recovery point into a new virtual machine, the &lt;code&gt;restoreType&lt;/code&gt; key of the request body was set to &lt;code&gt;&quot;ALTERNATE&quot;&lt;/code&gt;, as shown in the figure below. I also provided the name of the new virtual machine after recovery as &lt;code&gt;&quot;0-Linux-Demo-VN02-2-05-04-2024_05:48_PM&quot;&lt;/code&gt;. The &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/backup-recovery/public/openapi/backup-recovery-public-v1beta1/operation/VirtualMachineRestore/&quot;&gt;API&lt;/a&gt; used for this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;POST /backup-recovery/v1beta1/virtual-machines/{{vmId}}/restore
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/api-restoring-a-cloud-protection-recovery-point.png&quot; alt=&quot;API to recover a cloud protection copy from a VM&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;7&quot;&gt;
&lt;li&gt;
&lt;p&gt;To validate that the recovery completed, I tracked the progress using the &lt;code&gt;async-operations&lt;/code&gt; &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/data-services/public/openapi/data-services-public-v1beta1/operation/GetAsyncOperation/&quot;&gt;API&lt;/a&gt; with the &lt;code&gt;{{taskId}}&lt;/code&gt; from the previous execution, as shown below. The API used for this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;GET /data-services/v1beta1/async-operations/:id?select=associatedResources,createdAt,endedAt,error,displayName,healthStatus,id,customerId,progressPercent,name,type,state
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/task-display-recovery-is-completed.png&quot; alt=&quot;Task Id confirming the completion of the recovery&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;8&quot;&gt;
&lt;li&gt;At the end, I went to the VMware vCenter console used in this example to validate that the virtual machine 0-Linux-Demo-VN02-2-05-04-2024_05:48_PM was indeed part of the virtual machine inventory.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/the-vcenter-display-the-recovered-vm.png&quot; alt=&quot;vCenter display the VM recovery completed&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;This blog post introduces the new set of REST APIs from the family of APIs for data services on HPE GreenLake, namely the &lt;strong&gt;HPE GreenLake API for Backup and Recovery&lt;/strong&gt;. This set of APIs is documented on the HPE GreenLake Developer &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/backup-recovery/public/&quot;&gt;website&lt;/a&gt; using interactive documentation based on the OpenAPI standard version 3.1.&lt;/p&gt;
&lt;p&gt;Early in this blog post, I laid out the relationship between the resources in this set of HPE GreenLake APIs and the objects in the HPE GreenLake Backup and Recovery user interface. I also walked through examples of several use cases for utilizing HPE GreenLake Backup and Recovery to provide virtual machine protection from day one.&lt;/p&gt;
&lt;p&gt;The examples presented in this blog post provide some guidance on combining the REST APIs that were announced in March 2024 to achieve the goal of protecting a virtual machine.&lt;/p&gt;
&lt;p&gt;All of the examples were executed using the Postman API tool, without any scripting language, to encourage anyone to experiment with the family of REST APIs for data services on HPE GreenLake.&lt;/p&gt;
&lt;p&gt;Please don’t hesitate to explore this new set of APIs for Cloud Data Services on HPE GreenLake and see how you can improve your agility in managing your data. Any questions on HPE GreenLake Data Services Cloud Console API? Please join &lt;a href=&quot;https://developer.hpe.com/slack-signup&quot;&gt;the HPE Developer Community Slack Workspace&lt;/a&gt;, and start a discussion in our &lt;a href=&quot;https://hpedev.slack.com/archives/C02D6H623JP&quot;&gt;#hpe-greenlake-data-services&lt;/a&gt; Slack channel.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Explore the HPE GreenLake Developer portal to learn more about our foundational APIs]]></title><link>https://developer.hpe.com/2024-april-05/</link><guid isPermaLink="false">https://developer.hpe.com/2024-april-05/</guid><pubDate>Fri, 05 Apr 2024 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Supercharged Chapel Editor Support]]></title><description><![CDATA[E﻿xternal blog]]></description><link>https://developer.hpe.com/supercharged-chapel-editor-support/</link><guid isPermaLink="false">https://developer.hpe.com/supercharged-chapel-editor-support/</guid><pubDate>Thu, 04 Apr 2024 13:43:29 GMT</pubDate><content:encoded>&lt;p&gt;E﻿xternal blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Getting started with HPE GreenLake API for Virtualization]]></title><description><![CDATA[What’s New? Recently, a new set of REST APIs for HPE GreenLake edge-to-cloud platform was introduced on the HPE GreenLake Developer website…]]></description><link>https://developer.hpe.com/getting-started-with-hpe-greenlake-api-for-virtualization/</link><guid isPermaLink="false">https://developer.hpe.com/getting-started-with-hpe-greenlake-api-for-virtualization/</guid><pubDate>Wed, 03 Apr 2024 21:18:41 GMT</pubDate><content:encoded>&lt;style&gt; li { font-size: 27px; line-height: 33px; max-width: none; } &lt;/style&gt;
&lt;h2&gt;What’s New?&lt;/h2&gt;
&lt;p&gt;Recently, a new set of REST APIs for the HPE GreenLake edge-to-cloud platform was introduced on the HPE GreenLake Developer &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/virtualization/public/guide/&quot;&gt;website&lt;/a&gt;. These APIs are grouped under a set called the HPE GreenLake API for Virtualization. Several articles will be written and posted on the HPE Developer Community &lt;a href=&quot;https://developer.hpe.com/blog/&quot;&gt;blog&lt;/a&gt; to help you better understand and work with these APIs in conjunction with the family of APIs for data services on HPE GreenLake.&lt;/p&gt;
&lt;p&gt;This is the second in a series of blog posts (&lt;a href=&quot;https://developer.hpe.com/blog/getting-started-with-hpe-greenlake-api-for-data-services&quot;&gt;data services&lt;/a&gt;, &lt;a href=&quot;https://developer.hpe.com/blog/getting-started-with-hpe-greenlake-api-for-virtualization&quot;&gt;virtualization&lt;/a&gt;, &lt;a href=&quot;https://developer.hpe.com/blog/getting-started-with-hpe-greenlake-api-for-backup-and-recovery&quot;&gt;Backup and Recovery&lt;/a&gt;) that introduces useful tips and best practices for using this new set of APIs given a specific use case. The purpose of this virtualization API is described in the HPE GreenLake Developer &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/virtualization/public&quot;&gt;Guide&lt;/a&gt;: “The HPE GreenLake for Virtualization API provides management of virtual machines and other virtual resources in public clouds and on-premises systems.”&lt;/p&gt;
&lt;p&gt;At the time of release, this set of APIs supports on-premises hypervisors including VMware (7.X and 8.X) and public cloud providers such as AWS Elastic Compute Cloud and Microsoft Azure virtual machines. The resources supported on-premises are &lt;em&gt;virtual machines, virtual machine images, datastores, and VMware clusters (hosts, folders, networks, tags, resource pools)&lt;/em&gt;. Conversely, the resources supported on cloud providers include &lt;em&gt;virtual machine instances, virtual machine images, and virtual machine instance types&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/virtualization-api-guide-website-and-download.png&quot; alt=&quot;&quot; title=&quot;The HPE GreenLake API for Virtualization documentation.&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The above figure shows the HPE GreenLake APIs for Virtualization drop down list, and the link to download the OpenAPI specification in JSON format.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;The specification for this set of APIs is published as an OpenAPI specification in JSON format and is available for download from this &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/virtualization/public/openapi/virtualization-public-v1beta1/overview/&quot;&gt;section&lt;/a&gt; of the documentation (shown below). The specification follows the OpenAPI standard 3.1 and contains all the information required so that the JSON spec file can be consumed by any OpenAPI tools to generate client libraries, SDKs, server mocks, or documentation, as described by the OpenAPI &lt;a href=&quot;https://tools.openapis.org/&quot;&gt;Initiative&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/virtualization-openapi-3.1-spec-file.png&quot; alt=&quot;&quot; title=&quot;HPE GreenLake OAS 3.1 specification for Virtualization download.&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The above figure shows a sample of the downloaded OpenAPI specification of the HPE GreenLake API for Virtualization.&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;API versioning&lt;/h2&gt;
&lt;p&gt;This set of APIs is identified as revision V1 Beta 1 at the time of its introduction in March 2024. Moving forward, the APIs will be updated to their next revisions as they evolve toward the long-term release version. As each individual API is updated, more capabilities will be added to the resources identified under the APIs. Furthermore, resources that are not currently available in this API will be added in the future. For information about update stages and deprecation, please follow the HPE GreenLake Developer Portal Versioning &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/guides/public/standards/versioning_basics/&quot;&gt;Guide&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;What are these virtualization resources?&lt;/h2&gt;
&lt;p&gt;The following pictures depict some of the resources related to the virtualization APIs that can be discovered inside the two cloud data services that are part of HPE GreenLake. The two services that leverage these virtualization APIs are HPE GreenLake for Backup and Recovery (GLBR) and HPE GreenLake for Private Cloud Business Edition (PCBE). Both services leverage the virtualization APIs to discover assets that need to be onboarded, protected, orchestrated, nurtured, or retired following the CRUD (create, read, update, delete) principles of REST APIs. Each object presented in the pictures below is part of the user interface that can be manipulated using these virtualization APIs.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Not all virtualization API resources that can be seen in the HPE GreenLake for Backup and Recovery and HPE GreenLake for Private Cloud Business Edition user interfaces are going to be available in this set of APIs upon its first release. Because the virtualization services are shared between the two services, any virtualization resource that is added into one HPE GreenLake workspace and used by both services can be manipulated by the HPE GreenLake APIs for Virtualization using the same instance id.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;img src=&quot;/img/onprem-resources-virtualization-api-glbr.png&quot; alt=&quot;&quot; title=&quot;On-prem resources in GLBR&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The above figure shows virtualization API resources related to VMware in HPE GreenLake for Backup and Recovery.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/onprem-resources-virtualization-api-pcbe.png&quot; alt=&quot;&quot; title=&quot;on-prem resources in PCBE&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The above figure shows virtualization API resources related to VMware in HPE GreenLake for Private Cloud Business Edition.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/cloud-public-resources-inside-the-glbr.png&quot; alt=&quot;&quot; title=&quot;Cloud resources in GLBR&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The above figure shows virtualization API resources related to public cloud providers in HPE GreenLake for Backup and Recovery.&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;Using the HPE GreenLake APIs for Virtualization&lt;/h2&gt;
&lt;p&gt;This set of virtualization APIs uses the same authorization and permission as the rest of the family of HPE GreenLake APIs for data services.&lt;/p&gt;
&lt;p&gt;To ensure that all programmatic interaction with the HPE GreenLake platform services and resources is secure and authenticated, these APIs require an access token. The token is generated using the client ID and client Secret you obtained during the creation of the client API credentials. Documentation about getting started with the HPE GreenLake API is provided on the HPE Developer Community &lt;a href=&quot;https://developer.hpe.com/blog/api-console-for-data-services-cloud-console/&quot;&gt;website&lt;/a&gt;, and on the HPE GreenLake Developer portal &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/&quot;&gt;website&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Nevertheless, there is also a blog &lt;a href=&quot;https://developer.hpe.com/blog/learn-what-you-can-do-with-hpe-data-services-cloud-console-api-in-just-3-minutes/&quot;&gt;post&lt;/a&gt; that describes how to use publicly available tools, such as Postman, to work with this API without a programming language. An additional blog post that describes using Postman for this API is available at this &lt;a href=&quot;https://developer.hpe.com/blog/oauth2-for-hpe-greenlake-data-services-cloud-console&quot;&gt;link&lt;/a&gt;. Moreover, there will be blog posts that provide guidance on how to convert this OpenAPI 3.1 JSON file into a library for any scripting language.&lt;/p&gt;
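&lt;p&gt;As a rough illustration of what the getting-started guides above walk you through, here is a Python sketch of a standard OAuth2 client-credentials exchange. The token endpoint below is a placeholder; use the endpoint and credential values described in those guides.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import requests

# Placeholder values: the client id and secret come from the API credentials created
# in HPE GreenLake, and the token endpoint is listed in the getting started guides above.
TOKEN_URL = &quot;https://your-greenlake-token-endpoint&quot;
CLIENT_ID = &quot;your-client-id&quot;
CLIENT_SECRET = &quot;your-client-secret&quot;

def get_access_token():
    # Standard OAuth2 client-credentials grant
    resp = requests.post(TOKEN_URL, data={
        &quot;grant_type&quot;: &quot;client_credentials&quot;,
        &quot;client_id&quot;: CLIENT_ID,
        &quot;client_secret&quot;: CLIENT_SECRET,
    }, timeout=60)
    resp.raise_for_status()
    return resp.json()[&quot;access_token&quot;]

HEADERS = {&quot;Authorization&quot;: f&quot;Bearer {get_access_token()}&quot;}
&lt;/code&gt;&lt;/pre&gt;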
&lt;p&gt;Lastly, anyone can follow the examples provided for each virtualization API in the HPE GreenLake Developer documentation pages, such as the one shown in the figure below. The documentation provides details on the API syntax for a particular method, the arguments used by the API, successful and failed responses, and several examples using cURL, JavaScript, Python, and Go. The documentation page also provides the ability to execute the API directly on the page, as explained in the previous blog &lt;a href=&quot;https://developer.hpe.com/blog/getting-started-with-hpe-greenlake-api-for-data-services/&quot;&gt;post&lt;/a&gt; (Getting started with HPE GreenLake APIs for Data Services).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/get-all-registered-hypervisor-managers-download.png&quot; alt=&quot;&quot; title=&quot;Get all registered hypervisors managers&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The above figure shows the three-panel interactive API reference documentation for one of the HPE GreenLake APIs for Virtualization.&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;Some tips and examples&lt;/h2&gt;
&lt;p&gt;Even though there is documentation available in the HPE GreenLake Developer portal, here are some recommendations and best practices for using the HPE GreenLake API for virtualization.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; All of the below examples assume that you have access to an HPE GreenLake workspace. For more information on acquiring an HPE GreenLake workspace, please follow this &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docLocale=en_US&amp;#x26;docId=a00120892en_us&quot;&gt;guide&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;Discovery of services for a given hypervisor&lt;/h3&gt;
&lt;p&gt;The discovery of on-premises assets in an HPE GreenLake workspace starts by obtaining the access token and applying it to the virtualization API &lt;code&gt;GET {baseURL}/virtualization/v1beta1/hypervisor-managers&lt;/code&gt; to discover the hypervisors that are already onboarded into your workspace.&lt;/p&gt;
&lt;p&gt;Along with the information about the hypervisor, the response of &lt;code&gt;GET {baseURL}/virtualization/v1beta1/hypervisor-managers&lt;/code&gt; provides additional information such as which HPE GreenLake services are associated with the discovered hypervisor. To discover the information, you can use the &lt;code&gt;select&lt;/code&gt; parameter with the API to discover properties such as &lt;code&gt;dataOrchestratorInfo&lt;/code&gt; and &lt;code&gt;services&lt;/code&gt;. The response values that are returned from this API execution provide the information on what services are associated with the hypervisor and the instance of Data Orchestrator VM that is providing protection against that hypervisor.&lt;/p&gt;
&lt;p&gt;The recommended select parameters to discover the hypervisor and its related services are shown below.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;GET https://{baseUrl}/virtualization/v1beta1/hypervisor-managers?select=name,id,state,status,dataOrchestratorInfo,services,hypervisorManagerType,releaseVersion
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;An example of the response using the recommended parameters above is shown below. From this response, I could derive that the hypervisor named &lt;code&gt;cds-tme-vcenter.rtplab.nimblestorage.com&lt;/code&gt; was associated with the &lt;code&gt;backup-and-recovery&lt;/code&gt; service, while the second hypervisor, named &lt;code&gt;rtp-array392-dhci.rtplab.nimblestorage.com&lt;/code&gt;, was associated with both the &lt;code&gt;hci-manager&lt;/code&gt; and &lt;code&gt;backup-and-recovery&lt;/code&gt; services. These values told me that this workspace contained two VMware vCenters, both protected by HPE GreenLake for Backup and Recovery, with only the second one being part of HPE GreenLake for Private Cloud Business Edition.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
    &quot;items&quot;: [
        {
            &quot;dataOrchestratorInfo&quot;: {
                &quot;id&quot;: &quot;d36122b2-xxxx-xxxx-xxxx-15d2521926f0&quot;,
                &quot;resourceUri&quot;: &quot;/backup-recovery/v1beta1/data-orchestrators/d36122b2-xxxx-xxxx-xxxx-15d2521926f0&quot;,
                &quot;type&quot;: &quot;backup-recovery/data-orchestrator&quot;
            },
            &quot;hypervisorManagerType&quot;: &quot;VMWARE_VCENTER&quot;,
            &quot;id&quot;: &quot;0c9acaa0-xxxx-xxxx-xxxx-eaa239798f6d&quot;,
            &quot;name&quot;: &quot;cds-tme-vcenter.rtplab.nimblestorage.com&quot;,
            &quot;releaseVersion&quot;: &quot;7.0.3&quot;,
            &quot;services&quot;: [
                &quot;backup-and-recovery&quot;
            ],
            &quot;state&quot;: &quot;OK&quot;,
            &quot;status&quot;: &quot;OK&quot;
        },
        {
            &quot;dataOrchestratorInfo&quot;: {
                &quot;id&quot;: &quot;d36122b2-xxxx-xxxx-xxxx-15d2521926f0&quot;,
                &quot;resourceUri&quot;: &quot;/backup-recovery/v1beta1/data-orchestrators/d36122b2-xxxx-xxxx-xxxx-15d2521926f0&quot;,
                &quot;type&quot;: &quot;backup-recovery/data-orchestrator&quot;
            },
            &quot;hypervisorManagerType&quot;: &quot;VMWARE_VCENTER&quot;,
            &quot;id&quot;: &quot;213a7f0d-xxxx-xxxx-xxxx-44e597c32121&quot;,
            &quot;name&quot;: &quot;rtp-array392-dhci.rtplab.nimblestorage.com&quot;,
            &quot;releaseVersion&quot;: &quot;7.0.3&quot;,
            &quot;services&quot;: [
                &quot;hci-manager&quot;,
                &quot;backup-and-recovery&quot;
            ],
            &quot;state&quot;: &quot;OK&quot;,
            &quot;status&quot;: &quot;OK&quot;
        }
    ],
    &quot;count&quot;: 2,
    &quot;offset&quot;: 0,
    &quot;total&quot;: 2
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;em&gt;The above code snippet displays the response from the &lt;code&gt;GET hypervisor-managers&lt;/code&gt; API execution&lt;/em&gt;.&lt;/p&gt;
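&lt;p&gt;The same discovery can be scripted in a few lines. This Python sketch, assuming the &lt;code&gt;requests&lt;/code&gt; library, a placeholder base URL, and a valid access token, issues the recommended GET and prints the services attached to each hypervisor.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import requests

BASE_URL = &quot;https://your-greenlake-api-base-url&quot;   # placeholder
HEADERS = {&quot;Authorization&quot;: &quot;Bearer YOUR_ACCESS_TOKEN&quot;}

params = {
    &quot;select&quot;: &quot;name,id,state,status,dataOrchestratorInfo,services,&quot;
              &quot;hypervisorManagerType,releaseVersion&quot;
}
resp = requests.get(f&quot;{BASE_URL}/virtualization/v1beta1/hypervisor-managers&quot;,
                    headers=HEADERS, params=params, timeout=60)
resp.raise_for_status()

for hv in resp.json().get(&quot;items&quot;, []):
    # Print which HPE GreenLake services are attached to each hypervisor
    print(hv[&quot;name&quot;], hv[&quot;hypervisorManagerType&quot;], hv.get(&quot;services&quot;, []))
&lt;/code&gt;&lt;/pre&gt;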
&lt;h3&gt;Creation and management of a virtual machine in a cloud service provider (CSP)&lt;/h3&gt;
&lt;p&gt;The figure below shows the documentation of the HPE GreenLake API for virtualization used to deploy a virtual machine in your CSP account: &lt;code&gt;POST /virtualization/v1beta1/csp-machine-instances&lt;/code&gt;. Note that this API requires the user to provide a &lt;code&gt;request body&lt;/code&gt; as part of the API execution. The &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/virtualization/public/openapi/virtualization-public-v1beta1/operation/CreateCSPMachineInstance/&quot;&gt;documentation&lt;/a&gt; for this API on the HPE GreenLake Developer website refers to it as the &lt;code&gt;Payload&lt;/code&gt;, presented as one of the tabs in the &lt;code&gt;Request samples&lt;/code&gt; window, as shown in the figure below. This &lt;code&gt;Payload&lt;/code&gt; tab provides details of the JSON structure required for the execution of &lt;code&gt;POST /virtualization/v1beta1/csp-machine-instances&lt;/code&gt;. Inside the JSON body structure, I recognized that I need to provide multiple key-value pairs such as&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;accountId, imageId, instanceType, region, cspType, keyPairName, name
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/documentation-of-the-post-csp-machine-instance-with-required-payload.png&quot; alt=&quot;&quot; title=&quot;display csp-machine-instances with the required Payload to POST&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The above figure shows the documentation on &lt;code&gt;POST /virtualization/v1beta1/csp-machine-instances&lt;/code&gt; to deploy a VM inside the cloud service provider.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;At the time of this release (March 2024), the API resource to discover the HPE GreenLake cloud service provider (CSP) &lt;code&gt;account id&lt;/code&gt; was not yet available in this set of HPE GreenLake APIs for Virtualization. To display that id, I used an existing legacy HPE GreenLake API, &lt;code&gt;GET https://{baseUrl}/api/v1/csp-accounts&lt;/code&gt;. My AWS CSP account was already onboarded into HPE GreenLake Private Cloud Business Edition, so this API returned the &lt;code&gt;accountId&lt;/code&gt; that I needed to provide as one of the key-value pairs in the JSON request body for the virtualization API &lt;code&gt;POST /virtualization/v1beta1/csp-machine-instances&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/api-of-csp-account-to-obtain-accountid-csptype.png&quot; alt=&quot;&quot; title=&quot;List available CSP accounts (DSCC API v1.4)&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The above figure shows the accountId and the cspType required for the deployment of the virtual machine on the CSP account.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Another important piece of information needed to deploy a virtual machine into my CSP account is the &lt;code&gt;imageId&lt;/code&gt; value. That &lt;code&gt;imageId&lt;/code&gt; (shown below) corresponds to the &lt;code&gt;AWS Linux VM (free tier)&lt;/code&gt; machine image (the template of the virtual machine) that is going to be deployed in the AWS account. I used the virtualization API &lt;code&gt;GET /virtualization/v1beta1/csp-machine-images&lt;/code&gt; to validate the AWS machine image (AMI) from an existing virtual machine that had been deployed in my CSP account.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/api-of-csp-machine-image-based-on-known-aws-machine-id-in-us-east-1.png&quot; alt=&quot;&quot; title=&quot;return imageId from csp-machine-images&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The above figure shows the imageId value obtained from the existing machine instance.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;As you can see in the figure below, I performed the deployment of a virtual machine using the Postman tool, which provides a field for entering the JSON body according to the definition of the Payload in the developer API &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/virtualization/public/openapi/virtualization-public-v1beta1/operation/CreateCSPMachineInstance/&quot;&gt;guide&lt;/a&gt; (a scripted sketch of this call follows the figure and its caption below). Please note that I used the raw form of the body with the JSON structure (shown below). Combining the information gathered above, I entered the values required by the JSON structure into the request body. The &lt;code&gt;keyPairName&lt;/code&gt; key was the name of the certificate key pair created in my CSP account. I chose &lt;code&gt;RonD-deploy-CSP-1&lt;/code&gt; as the name of the deployed virtual machine. Additionally, I used the instance type &lt;code&gt;t2.micro&lt;/code&gt;, which was appropriate for this demo.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/post-creation-of-a-virtual-machine-inside-the-csp-account.png&quot; alt=&quot;&quot; title=&quot;Deploy a machine instance based on the payload POST&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The above figure provides the example of creating a Payload (Body) JSON structure to provision a VM inside the cloud service provider.&lt;/em&gt;&lt;/p&gt;
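&lt;p&gt;Here is a hedged Python sketch of the same deployment call. The key names are the ones called out above; every value is a placeholder that you would replace with the ids gathered in the previous steps.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import requests

BASE_URL = &quot;https://your-greenlake-api-base-url&quot;   # placeholder
HEADERS = {&quot;Authorization&quot;: &quot;Bearer YOUR_ACCESS_TOKEN&quot;}

# Key names are the ones called out in the Payload documentation above;
# every value below is a placeholder to replace with the ids discovered earlier.
body = {
    &quot;accountId&quot;: &quot;your-csp-account-id&quot;,        # from GET /api/v1/csp-accounts
    &quot;cspType&quot;: &quot;AWS&quot;,                          # value as returned by the csp-accounts response
    &quot;imageId&quot;: &quot;your-csp-machine-image-id&quot;,    # from GET /virtualization/v1beta1/csp-machine-images
    &quot;instanceType&quot;: &quot;t2.micro&quot;,
    &quot;region&quot;: &quot;us-east-1&quot;,
    &quot;keyPairName&quot;: &quot;your-aws-key-pair-name&quot;,
    &quot;name&quot;: &quot;RonD-deploy-CSP-1&quot;,
}

resp = requests.post(f&quot;{BASE_URL}/virtualization/v1beta1/csp-machine-instances&quot;,
                     headers=HEADERS, json=body, timeout=60)
resp.raise_for_status()
# The Location header points at the async-operations task tracking the deployment
print(resp.headers.get(&quot;Location&quot;))
&lt;/code&gt;&lt;/pre&gt;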
&lt;p&gt;Using the &lt;code&gt;GET async-operations&lt;/code&gt; API with the &lt;code&gt;task Id&lt;/code&gt; provided in the location value of the response header of the above API, I was able to track the completion of the virtual machine creation in the AWS account. For more information on using the &lt;code&gt;GET async-operations&lt;/code&gt; API, please take a look at my blog &lt;a href=&quot;https://developer.hpe.com/blog/getting-started-with-hpe-greenlake-api-for-data-services/&quot;&gt;post&lt;/a&gt; (Getting Started with HPE GreenLake API for Data Services).&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
    &quot;associatedResources&quot;: [],
    &quot;childTasks&quot;: [
        {
            &quot;name&quot;: &quot;Create Instance&quot;,
            &quot;resourceUri&quot;: &quot;/data-services/v1beta1/async-operations/286cc55c-8636-4a8a-8428-0a5694f42785&quot;,
            &quot;type&quot;: &quot;task&quot;
        }
    ],
    &quot;createdAt&quot;: &quot;2024-03-28T01:53:30.395518007Z&quot;,
    &quot;customerId&quot;: &quot;xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx&quot;,
    &quot;displayName&quot;: &quot;Create Instance: RonD-deploy-CSP-1&quot;,
    &quot;endedAt&quot;: &quot;2024-03-28T01:54:04.245722969Z&quot;,
    &quot;error&quot;: null,
    &quot;estimatedRunningDurationMinutes&quot;: 0,
    &quot;generation&quot;: 3,
    &quot;groups&quot;: [
        {
            &quot;id&quot;: &quot;eb988b5e2dcb11ec840712b3b5263ef4&quot;,
            &quot;name&quot;: &quot;Default Group&quot;
        }
    ],
    &quot;healthStatus&quot;: &quot;OK&quot;,
    &quot;id&quot;: &quot;5f54e591-xxxx-xxx-xxxx-bebadb977f93&quot;,
    &quot;logMessages&quot;: [
        {
            &quot;message&quot;: &quot;Create Instance: RonD-deploy-CSP-1 task is created&quot;,
            &quot;timestampAt&quot;: &quot;2024-03-28T01:53:30.395526907Z&quot;
        },
        {
            &quot;message&quot;: &quot;Create Instance: RonD-deploy-CSP-1 task is running&quot;,
            &quot;timestampAt&quot;: &quot;2024-03-28T01:53:30.395529707Z&quot;
        },
        {
            &quot;message&quot;: &quot;Create Instance: RonD-deploy-CSP-1 task is succeeded&quot;,
            &quot;timestampAt&quot;: &quot;2024-03-28T01:54:04.245712087Z&quot;
        }
    ],
    &quot;name&quot;: &quot;Create Instance: RonD-deploy-CSP-1&quot;,
    &quot;parentTask&quot;: null,
    &quot;progressPercent&quot;: 100,
    &quot;recommendations&quot;: [],
    &quot;resourceUri&quot;: &quot;/data-services/v1beta1/async-operations/5f54e591-a231-46ce-a2e2-bebadb977f93&quot;,
    &quot;rootTask&quot;: {
        &quot;id&quot;: &quot;5f54e591-a231-46ce-a2e2-bebadb977f93&quot;,
        &quot;name&quot;: &quot;&quot;,
        &quot;resourceUri&quot;: &quot;/data-services/v1beta1/async-operations/5f54e591-a231-46ce-a2e2-bebadb977f93&quot;,
        &quot;type&quot;: &quot;task&quot;
    },
    &quot;services&quot;: [
        &quot;private-cloud-business-edition&quot;
    ],
    &quot;startedAt&quot;: &quot;2024-03-28T01:53:30.395520016Z&quot;,
    &quot;state&quot;: &quot;SUCCEEDED&quot;,
    &quot;subtreeTaskCount&quot;: 1,
    &quot;suggestedPollingIntervalSeconds&quot;: 30,
    &quot;type&quot;: &quot;task&quot;,
    &quot;updatedAt&quot;: &quot;2024-03-28T01:54:04.324786754Z&quot;,
    &quot;userId&quot;: &quot;xxxx.yyyy@abc.com&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;em&gt;The above snippet of the response body shows that the execution of &lt;code&gt;POST /virtualization/v1beta1/csp-machine-instances&lt;/code&gt; completed successfully.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;To progress further, I needed to find the id that corresponds to the newly created virtual machine instance in the AWS CSP account (the &lt;code&gt;csp-machine-instance id&lt;/code&gt;). To find it, I used the legacy HPE GreenLake API &lt;code&gt;GET {baseUrl}/api/v1/csp-machine-instances&lt;/code&gt; to list the virtual machine instances that exist in that AWS account. From the response of that API, using the &lt;code&gt;filter&lt;/code&gt; parameter, I obtained the instance id of the virtual machine created in the prior example (shown below).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/get-csp-machine-instance-id-to-validate-the-body-structure.png&quot; alt=&quot;&quot; title=&quot;Get information about the VM deployed in the CSP to validate parameters&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The above figure displays the name and the csp-machine-instance id so that I could manipulate the machine&apos;s state.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;To control cost, I used the API &lt;code&gt;POST {baseUrl}/virtualization/v1beta1/csp-machine-instances/:id/power-off&lt;/code&gt; to power off the virtual machine instance in the AWS account. Lastly, I invoked the API &lt;code&gt;DELETE {baseUrl}/virtualization/v1beta1/csp-machine-instances/:id&lt;/code&gt; to terminate that virtual machine and retire (delete) it from the inventory of my AWS CSP account (a scripted sketch of both calls follows the response below). Just like any POST invocation, the DELETE method executes asynchronously, so the same strategy of using the &lt;code&gt;GET async-operations&lt;/code&gt; API applies. Finally, the response from &lt;code&gt;GET async-operations&lt;/code&gt; indicated that the VM I had created earlier was terminated, as shown below.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
    &quot;associatedResources&quot;: [],
    &quot;childTasks&quot;: [
        {
            &quot;name&quot;: &quot;Terminate: RonD-deploy-CSP-1&quot;,
            &quot;resourceUri&quot;: &quot;/data-services/v1beta1/async-operations/f267be2d-5a04-4f41-a760-c803212840c8&quot;,
            &quot;type&quot;: &quot;task&quot;
        }
    ],
    &quot;createdAt&quot;: &quot;2024-03-27T22:37:20.509665935Z&quot;,
    &quot;customerId&quot;: &quot;xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx&quot;,
    &quot;displayName&quot;: &quot;Terminate: RonD-deploy-CSP-1&quot;,
    &quot;endedAt&quot;: &quot;2024-03-27T22:37:37.652544571Z&quot;,
    &quot;error&quot;: null,
    &quot;estimatedRunningDurationMinutes&quot;: 0,
    &quot;generation&quot;: 4,
    &quot;groups&quot;: [
        {
            &quot;id&quot;: &quot;eb988b5e2dcb11ec840712b3b5263ef4&quot;,
            &quot;name&quot;: &quot;Default Group&quot;
        }
    ],
    &quot;healthStatus&quot;: &quot;OK&quot;,
    &quot;id&quot;: &quot;de6d9725-7803-4ef5-9cce-f40ad05f879f&quot;,
    &quot;logMessages&quot;: [
        {
            &quot;message&quot;: &quot;Terminate: d4c46c6e-51ff-59c0-a24a-ef52d77f0b16 task is created&quot;,
            &quot;timestampAt&quot;: &quot;2024-03-27T22:37:20.509674904Z&quot;
        },
        {
            &quot;message&quot;: &quot;Terminate: d4c46c6e-51ff-59c0-a24a-ef52d77f0b16 task is running&quot;,
            &quot;timestampAt&quot;: &quot;2024-03-27T22:37:20.509678268Z&quot;
        },
        {
            &quot;message&quot;: &quot;Terminate: RonD-deploy-CSP-1 task is running&quot;,
            &quot;timestampAt&quot;: &quot;2024-03-27T22:37:20.79512845Z&quot;
        },
        {
            &quot;message&quot;: &quot;Terminate: RonD-deploy-CSP-1 task is succeeded&quot;,
            &quot;timestampAt&quot;: &quot;2024-03-27T22:37:37.652529902Z&quot;
        }
    ],
    &quot;name&quot;: &quot;Terminate: RonD-deploy-CSP-1&quot;,
    &quot;parentTask&quot;: null,
    &quot;progressPercent&quot;: 100,
    &quot;recommendations&quot;: [],
    &quot;resourceUri&quot;: &quot;/data-services/v1beta1/async-operations/de6d9725-xxxx-xxxx-xxxx-f40ad05f879f&quot;,
    &quot;rootTask&quot;: {
        &quot;id&quot;: &quot;de6d9725-7803-4ef5-9cce-f40ad05f879f&quot;,
        &quot;name&quot;: &quot;&quot;,
        &quot;resourceUri&quot;: &quot;/data-services/v1beta1/async-operations/de6d9725-xxxx-xxxx-xxxx-f40ad05f879f&quot;,
        &quot;type&quot;: &quot;task&quot;
    },
    &quot;services&quot;: [
        &quot;private-cloud-business-edition&quot;
    ],
    &quot;sourceResource&quot;: {
        &quot;name&quot;: &quot;RonD-deploy-CSP-1&quot;,
        &quot;resourceUri&quot;: &quot;/api/v1/csp-machine-instances/d4c46c6e-xxxx-xxxx-xxxx-ef52d77f0b16&quot;,
        &quot;type&quot;: &quot;AWS Instance&quot;
    },
    &quot;startedAt&quot;: &quot;2024-03-27T22:37:20.509668598Z&quot;,
    &quot;state&quot;: &quot;SUCCEEDED&quot;,
    &quot;subtreeTaskCount&quot;: 1,
    &quot;suggestedPollingIntervalSeconds&quot;: 30,
    &quot;type&quot;: &quot;task&quot;,
    &quot;updatedAt&quot;: &quot;2024-03-27T22:37:37.688700579Z&quot;,
    &quot;userId&quot;: &quot;ronald.dharma@hpe.com&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;em&gt;The above snippet shows that the termination of the machine instance in the CSP reached 100% progress and completed successfully.&lt;/em&gt;&lt;/p&gt;
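&lt;p&gt;A scripted sketch of the power-off and terminate calls might look like this, assuming the &lt;code&gt;requests&lt;/code&gt; library and placeholder credentials. Both calls return a task to poll, so in practice you would wait for the power-off task to finish before issuing the delete.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import requests

BASE_URL = &quot;https://your-greenlake-api-base-url&quot;   # placeholder
HEADERS = {&quot;Authorization&quot;: &quot;Bearer YOUR_ACCESS_TOKEN&quot;}

def power_off_and_delete(instance_id):
    # Both calls are asynchronous; each Location header points at a task to poll.
    # In practice, wait for the power-off task to finish before issuing the delete.
    base = f&quot;{BASE_URL}/virtualization/v1beta1/csp-machine-instances/{instance_id}&quot;

    off = requests.post(f&quot;{base}/power-off&quot;, headers=HEADERS, timeout=60)
    off.raise_for_status()
    print(&quot;power-off task:&quot;, off.headers.get(&quot;Location&quot;))

    terminate = requests.delete(base, headers=HEADERS, timeout=60)
    terminate.raise_for_status()
    print(&quot;terminate task:&quot;, terminate.headers.get(&quot;Location&quot;))

power_off_and_delete(&quot;your-csp-machine-instance-id&quot;)
&lt;/code&gt;&lt;/pre&gt;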
&lt;h3&gt;Wow… that was so cool. What about provisioning a VMware datastore on-premises?&lt;/h3&gt;
&lt;p&gt;Here is another example of using the HPE GreenLake API for virtualization: provisioning a datastore from an HPE disaggregated Hyperconverged Infrastructure (dHCI) system for on-premises deployment. The HPE dHCI instance had already been onboarded into the HPE GreenLake for Private Cloud Business Edition (PCBE) service. The &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/virtualization/public/openapi/virtualization-public-v1beta1/operation/CreateDS/&quot;&gt;API&lt;/a&gt; used for the datastore provisioning is &lt;code&gt;POST {baseUrl}/virtualization/v1beta1/datastores&lt;/code&gt;, which also requires a JSON structure with values in the &lt;code&gt;body (Payload)&lt;/code&gt;. Information about this request JSON body is presented in the figure below.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/create-datastore-on-dhci.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The above figure displays the required body (Payload) used to deploy a datastore into a hyper-converged instance with VMware.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;This virtualization API incorporated the virtual machine provisioning policy that is part of the HPE GreenLake for Private Cloud Business Edition. I used the HPE GreenLake for Private Cloud Business Edition API &lt;code&gt;GET /private-cloud-business/v1beta1/vm-provisioning-policies&lt;/code&gt; to obtain the &lt;code&gt;provisioningPolicyId&lt;/code&gt; and entered it into the Body JSON structure as required.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/get-virtualization-api-to-obtain-vm-provisioning-policy.png&quot; alt=&quot;obtain the provisioning policy for deploying datastores using HPE PCBE&quot; title=&quot;get the provisioning policy for deploying datastores in a dHCI using HPE PCBE&quot;&gt;&lt;/p&gt;
&lt;p&gt;I used another virtualization API, &lt;code&gt;GET {baseUrl}/virtualization/v1beta1/hypervisors-clusters&lt;/code&gt;, to obtain the &lt;code&gt;targetHypervisorClusterId&lt;/code&gt; (shown below).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/obtain-the-cluster-id-for-deployment-at-dhci-using-pcbe.png&quot; alt=&quot;&quot; title=&quot;Obtain the cluster id which is required to deploy a datastore in a dHCI using PCBE&quot;&gt;&lt;/p&gt;
&lt;p&gt;Furthermore, I used the legacy API &lt;code&gt;GET /api/v1/storage-systems/device-type2&lt;/code&gt; to obtain the &lt;code&gt;Storage System Id&lt;/code&gt; (shown below); a combined lookup sketch follows the figure.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/obtain-the-storage-system-id-using-the-legacy-api.png&quot; alt=&quot;&quot; title=&quot;legacy API to display the storage system Id&quot;&gt;&lt;/p&gt;
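&lt;p&gt;Since the three lookups above are plain GET calls, they can be chained in a short Python sketch. The endpoints are the ones quoted above; the base URL, the token, and the assumption that the legacy API also returns an &lt;code&gt;items&lt;/code&gt; array are mine.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import requests

BASE_URL = &quot;https://your-greenlake-api-base-url&quot;   # placeholder
HEADERS = {&quot;Authorization&quot;: &quot;Bearer YOUR_ACCESS_TOKEN&quot;}

def get_items(path):
    resp = requests.get(f&quot;{BASE_URL}{path}&quot;, headers=HEADERS, timeout=60)
    resp.raise_for_status()
    # The legacy API response layout may differ slightly; adjust if it does not use items
    return resp.json().get(&quot;items&quot;, [])

# Provisioning policy id (PCBE API)
policies = get_items(&quot;/private-cloud-business/v1beta1/vm-provisioning-policies&quot;)

# Target hypervisor cluster id (virtualization API)
clusters = get_items(&quot;/virtualization/v1beta1/hypervisors-clusters&quot;)

# Storage system id (legacy DSCC API)
systems = get_items(&quot;/api/v1/storage-systems/device-type2&quot;)

print(policies[0][&quot;id&quot;], clusters[0][&quot;id&quot;], systems[0][&quot;id&quot;])
&lt;/code&gt;&lt;/pre&gt;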
&lt;p&gt;Finally, I entered all of the above values into the JSON body as part of the virtualization API call to create a datastore in the designated VMware cluster using the attached storage system. The integer value of the &lt;code&gt;sizeInBytes&lt;/code&gt; key was equivalent to 10 TB. The other string values were obtained from the previous HPE GreenLake API responses, as demonstrated above.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/post-virtualization-api-with-taskid-succesful.png&quot; alt=&quot;&quot; title=&quot;Deploy POST to create the datastore in the DHCI using PCBE&quot;&gt;&lt;/p&gt;
&lt;p&gt;From the response of the &lt;code&gt;GET async-operations&lt;/code&gt; API for the &lt;code&gt;{task Id}&lt;/code&gt; shown below, I confirmed that the creation of the datastore completed. Note that the creation of the datastore also applied the HPE GreenLake Backup and Recovery protection policy to the newly created datastore.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
    &quot;associatedResources&quot;: [],
    &quot;childTasks&quot;: [
        {
            &quot;name&quot;: &quot;Create datastore: VMFS-ds2&quot;,
            &quot;resourceUri&quot;: &quot;/data-services/v1beta1/async-operations/58955ba4-59ba-4f78-a4e8-5116aaba9e7d&quot;,
            &quot;type&quot;: &quot;task&quot;
        }
    ],
    &quot;createdAt&quot;: &quot;2024-04-21T22:08:25.439053461Z&quot;,
    &quot;customerId&quot;: &quot;xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx&quot;,
    &quot;displayName&quot;: &quot;Provisioning datastore VMFS-ds2&quot;,
    &quot;endedAt&quot;: &quot;2024-04-21T22:12:00.738144472Z&quot;,
    &quot;error&quot;: null,
    &quot;estimatedRunningDurationMinutes&quot;: 0,
    &quot;generation&quot;: 8,
    &quot;groups&quot;: [
        {
            &quot;id&quot;: &quot;eb988b5e2dcb11ec840712b3b5263ef4&quot;,
            &quot;name&quot;: &quot;Default Group&quot;
        }
    ],
    &quot;healthStatus&quot;: &quot;OK&quot;,
    &quot;id&quot;: &quot;8de5e0a2-af09-4de3-8334-75638c147735&quot;,
    &quot;logMessages&quot;: [
        {
            &quot;message&quot;: &quot;Task created&quot;,
            &quot;timestampAt&quot;: &quot;2024-04-21T22:08:25.439071294Z&quot;
        },
        {
            &quot;message&quot;: &quot;Task is running&quot;,
            &quot;timestampAt&quot;: &quot;2024-04-21T22:08:25.439074148Z&quot;
        },
        {
            &quot;message&quot;: &quot;Preparing parameters&quot;,
            &quot;timestampAt&quot;: &quot;2024-04-21T22:08:29.779163168Z&quot;
        },
        {
            &quot;message&quot;: &quot;Creating datastore&quot;,
            &quot;timestampAt&quot;: &quot;2024-04-21T22:08:31.200651622Z&quot;
        },
        {
            &quot;message&quot;: &quot;Validating datastore&quot;,
            &quot;timestampAt&quot;: &quot;2024-04-21T22:09:44.256999998Z&quot;
        },
        {
            &quot;message&quot;: &quot;Applying protection policy&quot;,
            &quot;timestampAt&quot;: &quot;2024-04-21T22:10:14.382076894Z&quot;
        },
        {
            &quot;message&quot;: &quot;Task succeeded&quot;,
            &quot;timestampAt&quot;: &quot;2024-04-21T22:12:00.738160833Z&quot;
        }
    ],
    &quot;name&quot;: &quot;Provisioning datastore VMFS-ds2&quot;,
    &quot;parentTask&quot;: null,
    &quot;progressPercent&quot;: 100,
    &quot;recommendations&quot;: [],
    &quot;resourceUri&quot;: &quot;/data-services/v1beta1/async-operations/8de5e0a2-af09-4de3-8334-75638c147735&quot;,
    &quot;rootTask&quot;: {
        &quot;id&quot;: &quot;8de5e0a2-af09-4de3-8334-75638c147735&quot;,
        &quot;name&quot;: &quot;&quot;,
        &quot;resourceUri&quot;: &quot;/data-services/v1beta1/async-operations/8de5e0a2-af09-4de3-8334-75638c147735&quot;,
        &quot;type&quot;: &quot;task&quot;
    },
    &quot;services&quot;: [
        &quot;private-cloud-business-edition&quot;
    ],
    &quot;sourceResource&quot;: {
        &quot;name&quot;: &quot;VMFS-ds2&quot;,
        &quot;resourceUri&quot;: &quot;/api/v1/datastores/ff5a8bf8-ddc9-5bca-9f9f-c4cf3f099364&quot;,
        &quot;type&quot;: &quot;Datastore&quot;
    },
    &quot;startedAt&quot;: &quot;2024-04-21T22:08:25.439056085Z&quot;,
    &quot;state&quot;: &quot;SUCCEEDED&quot;,
    &quot;subtreeTaskCount&quot;: 2,
    &quot;suggestedPollingIntervalSeconds&quot;: 30,
    &quot;type&quot;: &quot;task&quot;,
    &quot;updatedAt&quot;: &quot;2024-04-21T22:12:00.793422464Z&quot;,
    &quot;userId&quot;: &quot;ronald.dharma@hpe.com&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;em&gt;The above figure shows that the API &lt;code&gt;POST {baseUrl}/virtualization/v1beta1/datastores&lt;/code&gt; executed successfully.&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;This blog post introduces you to the new set of HPE GreenLake APIs for Virtualization, which is part of the data services family of APIs for HPE GreenLake. This set of APIs supports resources from an on-premises hypervisor, including VMware (7.X and 8.X), as well as from public cloud providers such as AWS Elastic Compute Cloud and Microsoft Azure Virtual Machines. The set will evolve through future releases; the March 2024 announcement introduces version V1 Beta 1. These APIs are documented on the HPE GreenLake Developer &lt;a href=&quot;https://developer.greenlake.hpe.com&quot;&gt;website&lt;/a&gt; and are available for download as an OpenAPI 3.1 specification in JSON format.&lt;/p&gt;
&lt;p&gt;In this post, I started by introducing the HPE GreenLake virtualization API on the API documentation page. Afterward, I provided a couple of tips on using this virtualization REST API, such as discovering the on-premises hypervisor resources and determining the services applied to the discovered hypervisor&apos;s cluster. Next, I provided a complete guide to deploying a virtual machine with a cloud service provider (AWS). I also introduced an example of how to invoke the DELETE method on cloud virtual machines to clean up the machines deployed at the cloud service provider. As a bonus, I provided an example of deploying datastores in the on-premises VMware infrastructure that is part of HPE GreenLake Private Cloud Business Edition. In future blogs, I will provide examples that combine the HPE GreenLake APIs for virtualization with the family of APIs from other data services on HPE GreenLake.&lt;/p&gt;
&lt;p&gt;Please don’t hesitate to explore this new set of HPE GreenLake APIs for virtualization and see how you can improve your agility in managing your data. Any questions on HPE GreenLake APIs for virtualization? Or do you have any suggestions or cool ideas that you want to share with the community? Please join &lt;a href=&quot;https://developer.hpe.com/slack-signup/&quot;&gt;the HPE Developer Community Slack Workspace&lt;/a&gt; and start a discussion in our &lt;a href=&quot;https://hpedev.slack.com/archives/C02D6H623JP&quot;&gt;#hpe-greenlake-data-services&lt;/a&gt; Slack channel.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Getting started with HPE GreenLake APIs for Data Services]]></title><description><![CDATA[What’s New? Recently, a new set of REST APIs for HPE GreenLake edge-to-cloud Platform was introduced on the HPE GreenLake Developer website…]]></description><link>https://developer.hpe.com/getting-started-with-hpe-greenlake-api-for-data-services/</link><guid isPermaLink="false">https://developer.hpe.com/getting-started-with-hpe-greenlake-api-for-data-services/</guid><pubDate>Tue, 02 Apr 2024 00:37:09 GMT</pubDate><content:encoded>&lt;style&gt; li { font-size: 27px; line-height: 33px; max-width: none; } &lt;/style&gt;
&lt;h2&gt;What’s New?&lt;/h2&gt;
&lt;p&gt;Recently, a new set of REST APIs for HPE GreenLake edge-to-cloud Platform was introduced on the HPE GreenLake Developer &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/data-services/public/&quot;&gt;website&lt;/a&gt;.  These APIs are grouped into a set called HPE GreenLake APIs for Data Services. Several articles will be written and posted on &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE Developer&apos;s forum blog site&lt;/a&gt; to help you better understand and use HPE GreenLake APIs to enhance your DataOps&apos; operations.&lt;/p&gt;
&lt;p&gt;This is the first in a series of blog postings (&lt;a href=&quot;https://developer.hpe.com/blog/getting-started-with-hpe-greenlake-api-for-data-services&quot;&gt;Data-Services&lt;/a&gt;, &lt;a href=&quot;https://developer.hpe.com/blog/getting-started-with-hpe-greenlake-api-for-virtualization&quot;&gt;Virtualization&lt;/a&gt;, &lt;a href=&quot;https://developer.hpe.com/blog/getting-started-with-hpe-greenlake-api-for-backup-and-recovery&quot;&gt;Backup and Recovery&lt;/a&gt;) that introduce some useful tips and best practices for using this new set of APIs in specific use cases. The introduction of these APIs arises from the need to manipulate the common resources that are shared by the existing family of data services on HPE GreenLake (DataOps Manager, Block Storage, virtualization, Backup and Recovery, Private Cloud Business Edition). This set of APIs provides users with the ability to perform Create, Read, Update and Delete (CRUD) operations against these resources: &lt;em&gt;async-operations, dual-auth-operations, issues, secrets, software-releases, storage locations, and tags&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;The specification for these APIs is published as an OpenAPI specification in JSON format, and it is available for download from this &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/data-services/public/openapi/data-services-public-v1beta1/overview/&quot;&gt;section&lt;/a&gt; of the documentation (shown below). Anyone can download the JSON file that contains the specification for this set by clicking on the &lt;strong&gt;Download&lt;/strong&gt; button. The specification follows the OpenAPI 3.1 standard and contains all of the information required for the JSON spec file to be consumed by OpenAPI tools that generate client libraries, server mocks, or documentation, as described by the OpenAPI &lt;a href=&quot;https://tools.openapis.org/&quot;&gt;Initiative&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/data-services-api-download-page.png&quot; alt=&quot;Figure 1. HPE GreenLake API for Data Services List&quot; title=&quot;Figure 1. HPE GreenLake API for Data Services List&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The above figure shows the list of HPE GreenLake APIs for Data Services&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/data-services-api-json-file.png&quot; alt=&quot;Figure 2. An example of the downloaded data-services.json contents.&quot; title=&quot;Figure 2. An example of the downloaded data-services.json contents.&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The above figure shows an example of the downloaded data-services.json contents.&lt;/em&gt;&lt;/p&gt;
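&lt;p&gt;As an illustration of what you can do with the downloaded specification, the open source openapi-generator CLI can turn it into a client library. This is a hedged example rather than an HPE-provided workflow, and the generator&apos;s support for OpenAPI 3.1 documents varies by version, so treat the generated code as a starting point only.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Assumes the spec was saved as data-services.json and that
# openapi-generator-cli is installed locally.
openapi-generator-cli generate \
  -i data-services.json \
  -g python \
  -o ./data-services-client
&lt;/code&gt;&lt;/pre&gt;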
&lt;p&gt;This set of APIs is identified as revision V1 Beta 1 at the time of its introduction in March 2024. Moving forward, the APIs will be updated through subsequent revisions toward a long-term release version. As each individual API is updated, more capabilities will be added to the resources covered by this set, and resources that are not currently available will be added in the future. For information about how versioning is managed, please follow the HPE GreenLake Developer Portal Versioning Guide at this &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/guides/public/standards/versioning_basics/&quot;&gt;link&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;What are these data services&apos; resources?&lt;/h2&gt;
&lt;p&gt;The following picture depicts some of the Data Services resources that can be discovered on the main page of the Data Services Cloud Console (UI). Other resources, such as software-releases, storage-locations and tags, are embedded inside data services storage objects. Examples of the aforementioned resources are the software deployed for Backup and Recovery’s Data Orchestrator, the location of the storage repository for Backup and Recovery’s cloud protection store, and the tags associated with a storage array.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The resources presented as &lt;strong&gt;Tasks&lt;/strong&gt; on this main page are identified as &lt;strong&gt;async-operations&lt;/strong&gt;, the universal API resource used to monitor the completion or status of all services under data services on HPE GreenLake. Additionally, the async-operations API is used to track any API operations that are running in the background, as covered later in this blog post. Future iterations of the API release will add more resources, e.g. email-notification, and more capabilities, e.g. POST/PATCH for tag (currently GET is the only available method for tag).&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;img src=&quot;/img/data-services-resources-in-dscc-ui.png&quot; alt=&quot;&quot; title=&quot;Figure 3 Some of the resources that are managed by Data Services API.&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The above figure shows some of the resources that are managed by HPE GreenLake APIs for data services&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;Using the HPE GreenLake APIs for data services&lt;/h2&gt;
&lt;p&gt;Documentation on using this set of APIs is provided on the HPE GreenLake Developer Portal at this &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/data-services/public/guide/&quot;&gt;link&lt;/a&gt;. There is also a blog post that describes how to use publicly available tools, such as Postman, to work with this set of APIs without any programming language at this &lt;a href=&quot;https://developer.hpe.com/blog/oauth2-for-hpe-greenlake-data-services-cloud-console/&quot;&gt;link&lt;/a&gt;. An additional blog post about using Postman with this set of APIs can be found at this &lt;a href=&quot;https://developer.hpe.com/blog/learn-what-you-can-do-with-hpe-data-services-cloud-console-api-in-just-3-minutes/&quot;&gt;link&lt;/a&gt;. Moreover, future blog posts will provide guidance on how to convert this data services OpenAPI 3.1 specification into a scripting client library. Anyone can follow the examples provided in each API reference in the documentation &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/data-services/public/&quot;&gt;page&lt;/a&gt;, such as the one shown below. The documentation provides details on the API syntax for a particular method, the arguments used for the API, successful and failed responses, and several examples using cURL, JavaScript, Python, and Go. The documentation page also provides the capability to execute the API directly from the API reference documentation, as explained in the following paragraph.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/api-documentation-try-it-now.png&quot; alt=&quot;&quot; title=&quot;Figure 4 Documentation for Data Services REST API.&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The above figure shows the three-panel interactive API reference guide for HPE GreenLake APIs for data services.&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;An example of executing an API Call from the API reference documentation&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;﻿&lt;strong&gt;Note:&lt;/strong&gt; All of the below examples assume that you have access to HPE GreenLake workspace. For more information on acquiring an HPE GreenLake workspace, please follow this &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docLocale=en_US&amp;#x26;docId=a00120892en_us&quot;&gt;guide&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;You can start by obtaining an access token for a workspace where you have permission. For information on getting the access token, please see the following &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services&quot;&gt;guide&lt;/a&gt;, or follow my other blog post at this &lt;a href=&quot;https://developer.hpe.com/blog/api-console-for-data-services-cloud-console/&quot;&gt;link&lt;/a&gt;. Once the access token is obtained, copy it (the Ctrl-c shortcut in Microsoft Windows) so that you can enter it into the &lt;strong&gt;Security&gt;Bearer Token:&lt;/strong&gt; field of the API reference documentation page displaying the list of &lt;code&gt;async-operations&lt;/code&gt; (as an example).&lt;/p&gt;
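&lt;p&gt;If you prefer to script the token request instead of using the console, a client-credentials call can be made with curl. The sketch below assumes the HPE GreenLake SSO token endpoint described in the guides linked above; please confirm the exact URL and parameter names against the current documentation before relying on it.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Hedged sketch: exchange client credentials for an access token.
# CLIENT_ID and CLIENT_SECRET come from the API client created in your
# HPE GreenLake workspace; the token endpoint is an assumption taken
# from the HPE GreenLake documentation.
curl -s -X POST &quot;https://sso.common.cloud.hpe.com/as/token.oauth2&quot; \
  -H &quot;Content-Type: application/x-www-form-urlencoded&quot; \
  -d &quot;grant_type=client_credentials&amp;client_id=$CLIENT_ID&amp;client_secret=$CLIENT_SECRET&quot; \
  | jq -r &apos;.access_token&apos;
&lt;/code&gt;&lt;/pre&gt;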
&lt;p&gt;To start with this example, go to the documentation page for the &lt;code&gt;async-operations&lt;/code&gt; on this &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/data-services/public/openapi/data-services-public-v1beta1/operation/ListAsyncOperations/&quot;&gt;link&lt;/a&gt;. Please click on the &lt;strong&gt;Try It&lt;/strong&gt; button located at the top right side of the page, as shown below.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/tryit-process.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Once you are on the next page, select the regional endpoint under the &lt;strong&gt;Target Server&lt;/strong&gt; menu that matches the workspace your access token was obtained for. For my example, it is &lt;code&gt;https://us1.data.cloud.hpe.com&lt;/code&gt; because my workspace was created in the US region. Afterward, expand the &lt;strong&gt;Security&gt; Bearer Token:&lt;/strong&gt; field to enter the access token (Bearer Token).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/enter-bearer-token-and-target-end-point.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Paste the valid access token into the &lt;strong&gt;Bearer Token&lt;/strong&gt; field using Ctrl-v (the shortcut in Microsoft Windows). Keep in mind that the access token expires 2 hours after it is created. If the access token has expired, please obtain a new one using your existing client credentials.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/send-the-api-after-token-entered.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Click the &lt;strong&gt;Send&lt;/strong&gt; button to execute the API and you will see the response page indicating a successful status (&lt;code&gt;Status: 200&lt;/code&gt;). Finish up by clicking on the &lt;strong&gt;Body: Expand All&lt;/strong&gt; button to display the complete response body in JSON.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/response-from-the-tryit-test.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Congratulations! You have executed your first HPE GreenLake API for data services using the interactive API reference documentation page in the HPE GreenLake Developer website.&lt;/p&gt;
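&lt;p&gt;The same call can, of course, be made outside of the documentation page. Here is a hedged curl equivalent, assuming the US-region base URL used in this example and an access token stored in the &lt;code&gt;TOKEN&lt;/code&gt; environment variable.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# List async-operations in the workspace; this mirrors the Try It example above.
curl -s -H &quot;Authorization: Bearer $TOKEN&quot; \
  &quot;https://us1.data.cloud.hpe.com/data-services/v1beta1/async-operations&quot; | jq .
&lt;/code&gt;&lt;/pre&gt;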
&lt;h2&gt;Some tips and examples&lt;/h2&gt;
&lt;p&gt;Beyond the documentation available on the HPE GreenLake Developer website, here are some recommendations and best practices on how to use these APIs.&lt;/p&gt;
&lt;h3&gt;async-operations&lt;/h3&gt;
&lt;p&gt;The responses from this resource are critical for debugging and monitoring the activities that result from any API execution. Those APIs can belong to Virtualization, Backup and Recovery, Block Storage, or Private Cloud Business Edition. Here is a tip on how to filter the tasks (&lt;code&gt;async-operations&lt;/code&gt;) that belong to a particular service: use the parameter &lt;code&gt;filter: &amp;#x3C;service&gt; in services&lt;/code&gt; as shown below.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;:  To simplify the response returned by this API, use the parameter: &lt;code&gt;select: associatedResources,displayName,services,createdAt&lt;/code&gt; as shown below.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;img src=&quot;/img/async-operations-invocation-parameters.png&quot; alt=&quot;&quot; title=&quot;Execution of async-services using filter, sort, and select parameters&quot;&gt;&lt;/p&gt;
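&lt;p&gt;For reference, the same filtered and trimmed query can also be expressed as a curl call. The filter value below uses the Private Cloud Business Edition service as an example, and the use of &lt;code&gt;--data-urlencode&lt;/code&gt; to handle the spaces and quotes in the filter expression is an assumption about your HTTP client rather than a requirement of the API.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Return only tasks that belong to the private-cloud-business-edition service,
# keeping a handful of properties in each returned item.
curl -s -G -H &quot;Authorization: Bearer $TOKEN&quot; \
  &quot;$BASE_URL/data-services/v1beta1/async-operations&quot; \
  --data-urlencode &quot;filter=&apos;private-cloud-business-edition&apos; in services&quot; \
  --data-urlencode &quot;select=associatedResources,displayName,services,createdAt&quot; | jq .
&lt;/code&gt;&lt;/pre&gt;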
&lt;p&gt;The response from the execution of &lt;code&gt;GET /data-services/v1beta1/async-operations&lt;/code&gt; is provided below. In this response, the property &lt;code&gt;associatedResources&lt;/code&gt; points to the particular asset affected by the execution. Additionally, the property &lt;code&gt;services&lt;/code&gt; indicates which service the execution on that &lt;code&gt;associatedResources&lt;/code&gt; entry applies to.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/api-response-from-filtered-and-selected-async-response.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The corresponding task message from the HPE GreenLake task’s UI is shown below.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/corresponding-task-for-the-async-operations.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;active-issues&lt;/h3&gt;
&lt;p&gt;This API provides the list of issues that require attention from your HPE GreenLake users. The HPE GreenLake UI provides a bell icon at the top right of every HPE GreenLake window (please see the previous UI image under the &lt;em&gt;What are these data services resources?&lt;/em&gt; section in this blog post) to access issues from every available service under HPE GreenLake.&lt;/p&gt;
&lt;p&gt;To limit the properties returned in responses from this API, you can include the &lt;code&gt;select&lt;/code&gt; parameter in the API execution. However, there is a minimal set of properties that the &lt;code&gt;active-issues&lt;/code&gt; API requires to be included in the &lt;code&gt;select&lt;/code&gt; parameter, as shown by the following response from the &lt;code&gt;active-issues&lt;/code&gt; API.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;https://&amp;#x3C;region-baseUrl&gt;/data-services/v1beta1/issues?select=body
{
    &quot;error&quot;: &quot;Missing required field(s) - \&quot;lastOccurredAt,customerId,createdAt,id,resourceUri,generation,type\&quot;&quot;,
    &quot;errorCode&quot;: &quot;422&quot;,
    &quot;traceId&quot;: &quot;18e7b6adf90315de57f2b177652e3649&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To alleviate this condition, the user can add the required properties to the &lt;code&gt;select&lt;/code&gt; parameter in addition to any other property that is desired, such as shown below.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;https://&amp;#x3C;region-baseUrl&gt;/data-services/v1beta1/issues?select=body,lastOccurredAt,customerId,createdAt,id,resourceUri,generation,type
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Following the above recommendation, the request below shows where I entered the &lt;code&gt;select&lt;/code&gt; parameters as required.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/api-request-issue-required-select-parameters.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The completed execution of this API is shown below.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/api-response-with-select-for-active-issues.png&quot; alt=&quot;Response from issues with the correct select&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The above figure shows the output from GET issues given the parameter: &lt;code&gt;select=body,createdAt,customerId,generation,id,lastOccurredAt,resourceUri,type,updatedAt&lt;/code&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;Completing POST method for REST API using &lt;code&gt;async-operations&lt;/code&gt; API with &lt;code&gt;task id&lt;/code&gt;&lt;/h3&gt;
&lt;p&gt;Almost any HPE GreenLake REST API for data services with a &lt;code&gt;POST, DELETE or PATCH&lt;/code&gt; method (e.g. &lt;code&gt;POST /virtualization/v1beta1/virtual-machines&lt;/code&gt;) is executed asynchronously. Asynchronous execution means that the API call returns immediately with &lt;code&gt;response status = 202&lt;/code&gt; while the requested operation continues to run in the background; nonetheless, the operation should be monitored until it completes.&lt;/p&gt;
&lt;p&gt;To accomplish that monitoring, the user receives a &lt;code&gt;task id&lt;/code&gt; value as part of the &lt;code&gt;location&lt;/code&gt; response header, as shown in the figure below. The user would then poll that task id using &lt;code&gt;GET /data-services/v1beta1/async-operations/{Task Id}&lt;/code&gt; to retrieve the progress and completion status. Below is an example of this use case, where I executed the creation of a virtual machine on the on-premises hypervisor (VMware vCenter).&lt;/p&gt;
&lt;p&gt;I executed the REST API &lt;code&gt;POST https://{baseURL}/virtualization/v1beta1/virtual-machines&lt;/code&gt; and the response completed with a status code of &lt;code&gt;202 (Accepted)&lt;/code&gt;. Moreover, you can discover the task Id value &lt;code&gt;cad794d1-27ec-4050-bed4-45d13a8de9d0&lt;/code&gt; from the location property.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/location-output-contains-the-task-id.png&quot; alt=&quot;The task Id from response location field&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;T﻿he above figure displays the response header from &lt;code&gt;POST https://{baseUrl}/virtualization/v1beta1/virtual-machines&lt;/code&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;From the task Id that was obtained from the response header, I used &lt;code&gt;GET async-operations&lt;/code&gt; with the &lt;code&gt;specific task ID&lt;/code&gt; (e.g. &lt;code&gt;https://{baseUrl}/data-services/v1beta1/async-operations/cad794d1-27ec-4050-bed4-45d13a8de9d0&lt;/code&gt;) to obtain the status and progress of the previously executed REST API.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/api-request-async-operations-request-of-particular-id.png&quot; alt=&quot;Request async-response for a particular id&quot;&gt;&lt;/p&gt;
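&lt;p&gt;In a script, this polling can be wrapped in a small loop. The sketch below is a hedged example that relies only on the &lt;code&gt;state&lt;/code&gt; and &lt;code&gt;suggestedPollingIntervalSeconds&lt;/code&gt; properties shown in the responses that follow; error handling is intentionally minimal.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Poll the async operation until it leaves the RUNNING state.
TASK_ID=&quot;cad794d1-27ec-4050-bed4-45d13a8de9d0&quot;
STATE=&quot;RUNNING&quot;
while [ &quot;$STATE&quot; = &quot;RUNNING&quot; ]; do
  sleep 30   # matches suggestedPollingIntervalSeconds in the response body
  STATE=$(curl -s -H &quot;Authorization: Bearer $TOKEN&quot; \
    &quot;$BASE_URL/data-services/v1beta1/async-operations/$TASK_ID&quot; | jq -r &apos;.state&apos;)
  echo &quot;state: $STATE&quot;
done
&lt;/code&gt;&lt;/pre&gt;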
&lt;p&gt;The following response snippets depict two different responses from polling with the &lt;code&gt;async-operations&lt;/code&gt; API. The first response indicates the progress of the associated API execution at &lt;code&gt;40% (RUNNING)&lt;/code&gt;, and the second indicates the progress at &lt;code&gt;100% (SUCCEEDED)&lt;/code&gt;. The whole operation took less than 3 minutes, as shown by the difference between the &lt;code&gt;startedAt&lt;/code&gt; and &lt;code&gt;endedAt&lt;/code&gt; properties.&lt;/p&gt;
&lt;p&gt;Below is the first poll of the VM provisioning REST API task id:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
    &quot;displayName&quot;: &quot;Provisioning virtual machine 0-RRD-API-Deploy-4&quot;,
    &quot;endedAt&quot;: &quot;2024-03-24T00:13:53.558031307Z&quot;,
    &quot;healthStatus&quot;: &quot;OK&quot;,
    &quot;id&quot;: &quot;cad794d1-27ec-4050-bed4-45d13a8de9d0&quot;,
    &quot;logMessages&quot;: [
        {
            &quot;message&quot;: &quot;Task created&quot;,
            &quot;timestampAt&quot;: &quot;2024-03-24T00:13:52.002673131Z&quot;
        },
        {
            &quot;message&quot;: &quot;Task is running&quot;,
            &quot;timestampAt&quot;: &quot;2024-03-24T00:13:52.002675372Z&quot;
        },
        {
            &quot;message&quot;: &quot;Preparing parameters&quot;,
            &quot;timestampAt&quot;: &quot;2024-03-24T00:13:53.368619324Z&quot;
        },
        {
            &quot;message&quot;: &quot;Starting virtual machine deployment&quot;,
            &quot;timestampAt&quot;: &quot;2024-03-24T00:13:53.558043002Z&quot;
        }
    ],
    &quot;name&quot;: &quot;Provisioning virtual machine 0-RRD-API-Deploy-4&quot;,
    &quot;progressPercent&quot;: 40,
    &quot;services&quot;: [
        &quot;private-cloud-business-edition&quot;
    ],
    &quot;startedAt&quot;: &quot;2024-03-24T00:13:52.002663421Z&quot;,
    &quot;state&quot;: &quot;RUNNING&quot;,
    &quot;suggestedPollingIntervalSeconds&quot;: 30,
    &quot;type&quot;: &quot;task&quot;,
    &quot;updatedAt&quot;: &quot;2024-03-24T00:13:55.846052959Z&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;em&gt;The above response displays the result from the first poll of the VM provisioning REST API task Id with &lt;code&gt;progressPercent:40&lt;/code&gt; and &lt;code&gt;state: RUNNING&lt;/code&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Below is the last poll of the VM provisioning REST API task id:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
    &quot;displayName&quot;: &quot;Provisioning virtual machine 0-RRD-API-Deploy-4&quot;,
    &quot;endedAt&quot;: &quot;2024-03-24T00:15:49.906710665Z&quot;,
    &quot;error&quot;: null,
    &quot;healthStatus&quot;: &quot;OK&quot;,
    &quot;id&quot;: &quot;cad794d1-27ec-4050-bed4-45d13a8de9d0&quot;,
    &quot;logMessages&quot;: [
        {
            &quot;message&quot;: &quot;Task created&quot;,
            &quot;timestampAt&quot;: &quot;2024-03-24T00:13:52.002673131Z&quot;
        },
        {
            &quot;message&quot;: &quot;Task is running&quot;,
            &quot;timestampAt&quot;: &quot;2024-03-24T00:13:52.002675372Z&quot;
        },
        {
            &quot;message&quot;: &quot;Preparing parameters&quot;,
            &quot;timestampAt&quot;: &quot;2024-03-24T00:13:53.368619324Z&quot;
        },
        {
            &quot;message&quot;: &quot;Starting virtual machine deployment&quot;,
            &quot;timestampAt&quot;: &quot;2024-03-24T00:13:53.558043002Z&quot;
        },
        {
            &quot;message&quot;: &quot;Applying protection policy&quot;,
            &quot;timestampAt&quot;: &quot;2024-03-24T00:15:49.598976645Z&quot;
        },
        {
            &quot;message&quot;: &quot;Virtual machine provisioning completed and initiated a task for applying backup policy.&quot;,
            &quot;timestampAt&quot;: &quot;2024-03-24T00:15:49.906721852Z&quot;
        },
        {
            &quot;message&quot;: &quot;Task succeeded&quot;,
            &quot;timestampAt&quot;: &quot;2024-03-24T00:15:49.906727488Z&quot;
        }
    ],
    &quot;name&quot;: &quot;Provisioning virtual machine 0-RRD-API-Deploy-4&quot;,
    &quot;progressPercent&quot;: 100,
    &quot;services&quot;: [
        &quot;private-cloud-business-edition&quot;
    ],
    &quot;startedAt&quot;: &quot;2024-03-24T00:13:52.002663421Z&quot;,
    &quot;state&quot;: &quot;SUCCEEDED&quot;,
    &quot;suggestedPollingIntervalSeconds&quot;: 30,
    &quot;type&quot;: &quot;task&quot;,
    &quot;updatedAt&quot;: &quot;2024-03-24T00:15:49.955699371Z&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;em&gt;The above response shows that by the second poll of the VM provisioning REST API &lt;code&gt;task Id&lt;/code&gt;, the creation of the on-premises virtual machine had completed successfully (&lt;code&gt;progressPercent: 100, state: SUCCEEDED&lt;/code&gt;).&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;At that moment, I also discovered a virtual machine with the name &lt;code&gt;0-RRD-API-Deploy-4&lt;/code&gt; available at the VMware cluster where this provisioning was executed.&lt;/p&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;This blog post introduces you to the new set of HPE GreenLake APIs for Data Services, which support resources such as: &lt;em&gt;async-operations, dual-auth-operations, issues, secrets, software-releases, storage locations, and tags.&lt;/em&gt; This set of APIs will evolve toward a long-term supported version, and the number of APIs will expand to cover more resources in the future.&lt;/p&gt;
&lt;p&gt;The March 2024 announcement introduces revision V1 Beta 1 of the API, which is documented on the HPE GreenLake Developer &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/data-services/public/&quot;&gt;website&lt;/a&gt; using interactive reference documentation based on the OpenAPI 3.1 standard. In this post, I also introduced methods to exercise the API directly from the API reference documentation page using an access token obtained from the HPE GreenLake API gateway. Lastly, I provided a list of tips on using these HPE GreenLake APIs for specific use cases.&lt;/p&gt;
&lt;p&gt;Please don’t hesitate to explore this new set of HPE GreenLake APIs for data services and see how you can improve your agility in managing your data. If you have any questions on any of these APIs, or if you are interested sharing your feedback and use cases on this set of APIs, please join the HPE Developer Slack &lt;a href=&quot;https://developer.hpe.com/slack-signup&quot;&gt;Workspace&lt;/a&gt; and start a discussion in our &lt;em&gt;&lt;a href=&quot;https://hpedev.slack.com/archives/C02D6H623JP&quot;&gt;#hpe-greenlake-data-services&lt;/a&gt;&lt;/em&gt; Slack channel.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Introducing ChapelCon '24: The Chapel Event of the Year]]></title><description><![CDATA[E﻿xternal blog]]></description><link>https://developer.hpe.com/introducing-chapelcon-24-the-chapel-event-of-the-year/</link><guid isPermaLink="false">https://developer.hpe.com/introducing-chapelcon-24-the-chapel-event-of-the-year/</guid><pubDate>Tue, 02 Apr 2024 00:05:29 GMT</pubDate><content:encoded>&lt;p&gt;E﻿xternal blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Deploying Super Mario game on Kubernetes in HPE GreenLake for Private Cloud Enterprise]]></title><description><![CDATA[In my recent blog post, I showed you how to expose applications using Ingress and TLS termination on Kubernetes (K8s) in HPE GreenLake for…]]></description><link>https://developer.hpe.com/deploying-super-mario-game-on-kubernetes-in-hpe-greenlake-for-private-cloud-enterprise/</link><guid isPermaLink="false">https://developer.hpe.com/deploying-super-mario-game-on-kubernetes-in-hpe-greenlake-for-private-cloud-enterprise/</guid><pubDate>Fri, 29 Mar 2024 17:53:09 GMT</pubDate><content:encoded>&lt;style&gt; li { font-size: 27px; line-height: 33px; max-width: none; } &lt;/style&gt;
&lt;p&gt;In my recent blog post, I showed you &lt;a href=&quot;https://developer.hpe.com/blog/exposing-an-application-using-ingress-and-tls-termination-on-kubernetes-in-hpe-greenlake-for-private-cloud-enterprise/&quot;&gt;how to expose applications using Ingress and TLS termination on Kubernetes (K8s) in HPE GreenLake for Private Cloud Enterprise&lt;/a&gt;. Let&apos;s have a little fun practicing this through a real-world use case where I walk you through the steps of deploying gami﻿ng applications, like &lt;em&gt;Super Mario&lt;/em&gt; and &lt;em&gt;Tetris&lt;/em&gt;, on K8s in the HPE GreenLake for Private Cloud Enterprise. By using K8s Ingress, TLS termination, and a range of suitable tools, &lt;em&gt;Super Mario&lt;/em&gt; and &lt;em&gt;Tetris&lt;/em&gt; can be made available and securely accessible via HTTPS. The setup I show here strictly adheres to the rigorous security and compliance standards of the K8s production environment in HPE GreenLake for Private Cloud Enterprise.&lt;/p&gt;
&lt;h3&gt;Overview&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/greenlake/containers.html&quot;&gt;HPE GreenLake for Private Cloud Enterprise: Containers&lt;/a&gt;, one of the HPE GreenLake cloud services available on the HPE GreenLake for Private Cloud Enterprise, allows customers to create a K8s cluster and deploy containerized applications to the cluster. It provides an enterprise-grade container management service using open source K8s.&lt;/p&gt;
&lt;p&gt;Utilizing &lt;em&gt;YAML&lt;/em&gt; manifest files or &lt;em&gt;Helm&lt;/em&gt; charts along with Docker images, the installation of gami﻿ng applications on the K8s cluster is a straightforward process. Tools like &lt;em&gt;kubectl&lt;/em&gt;, &lt;em&gt;helm&lt;/em&gt;, and &lt;a href=&quot;https://kustomize.io/&quot;&gt;Kustomize&lt;/a&gt; are available for this purpose. The complexity arises when it comes to securely exposing the deployed games for external access over HTTPS, a common requirement for on-premises K8s clusters. This involves the generation and management of SSL/TLS certificates for the games within the cluster. These certificates are vital for secure inter-service communication. Proper installation and management are key to preventing access issues and security threats. As game traffic increases, particularly during peak usage hours, it becomes crucial to set up gami﻿ng applications with load balancing access. This presents a significant challenge ensuring the availability of load balancing for gami﻿ng applications running on K8s.&lt;/p&gt;
&lt;p&gt;Here&apos;s some of the things you need to deploy &lt;em&gt;Super Mario&lt;/em&gt; and &lt;em&gt;Tetris&lt;/em&gt; in an HPE GreenLake for Private Cloud Enterprise cluster and expose them using K8s Ingress and TLS termination. Remember that &lt;a href=&quot;https://developer.hpe.com/blog/set-up-load-balancer-with-metallb-in-hpe-greenlake-for-private-cloud-enterprise/&quot;&gt;MetalLB&lt;/a&gt; is employed to establish the load balancer in the cluster and &lt;a href=&quot;https://developer.hpe.com/blog/generating-self-signed-certificates-using-cert-manager-for-kubernetes-in-hpe-greenlake-for-private-cloud-entreprise/&quot;&gt;cert-manager&lt;/a&gt; is deployed for the generation and management of SSL/TLS certificates, which are stored as K8s &lt;em&gt;Secret&lt;/em&gt; objects and made available to the entire cluster upon creation. The &lt;a href=&quot;https://www.nginx.com/products/nginx-ingress-controller/&quot;&gt;Nginx Ingress controller&lt;/a&gt; is deployed within the cluster. The Ingress TLS configuration is used to decrypt encrypted traffic over HTTPS at the load balancer setup and forward the decrypted traffic to the target gami﻿ng applications. This configuration offloads the resource-intensive cryptographic operations to the dedicated load balancer, allowing the backend gami﻿ng applications to concentrate on efficiently processing client requests and responses. The gami﻿ng applications are deployed with the &lt;em&gt;ClusterIP&lt;/em&gt; service type in the backend, providing internal connectivity and being solely accessible from within the cluster. They do not directly handle SSL/TLS encryption and decryption.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/game-deploy.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Prerequisites&lt;/h3&gt;
&lt;p&gt;Before starting, make sure you have the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A K8s cluster provisioned in HPE GreenLake for Private Cloud Enterprise&lt;/li&gt;
&lt;li&gt;The &lt;em&gt;kubectl&lt;/em&gt; CLI tool, together with the kubeconfig file for accessing the K8s cluster&lt;/li&gt;
&lt;li&gt;The &lt;em&gt;helm&lt;/em&gt; CLI tool, version 3.12.0 or later&lt;/li&gt;
&lt;li&gt;A domain and a list of subdomains to generate the SSL certificate and host the game applications in the cluster&lt;/li&gt;
&lt;li&gt;The optional &lt;em&gt;openssl&lt;/em&gt; CLI tool, for validating the generated certificates&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Deploy Super Mario&lt;/h3&gt;
&lt;p&gt;The game &lt;em&gt;Super Mario&lt;/em&gt;, together with &lt;em&gt;Tetris&lt;/em&gt;, can be deployed to the cluster using the &lt;em&gt;YAML&lt;/em&gt; manifest files from the GitHub repo &lt;a href=&quot;https://github.com/GuopingJia/k8s-games&quot;&gt;k8s-games&lt;/a&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ tree k8s-games/
k8s-games/
├── README.md
├── super-mario
│   ├── deployment.yaml
│   └── service.yaml
└── tetris
    ├── deployment.yaml
    └── service.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;T﻿ype the following commands to deploy &lt;em&gt;Super Mario&lt;/em&gt; and &lt;em&gt;Tetris&lt;/em&gt; to the namespace &lt;em&gt;cfe-games&lt;/em&gt; in the cluster:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl create ns cfe-games
namespace/cfe-games created

$ kubectl apply -f super-mario/ -n cfe-games
deployment.apps/mario-deployment created
service/mario-service created

$ kubectl apply -f tetris/ -n cfe-games
deployment.apps/tetris-deployment created
service/tetris-service created
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Type the command shown below to check the details of the game deployment:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get all -n cfe-games
NAME                                     READY   STATUS    RESTARTS   AGE
pod/mario-deployment-96f79d8f-dw9hh      1/1     Running   0          19s
pod/mario-deployment-96f79d8f-wsf7s      1/1     Running   0          13s
pod/tetris-deployment-86d744fb47-7kmwl   1/1     Running   0          7s
pod/tetris-deployment-86d744fb47-hqmgd   1/1     Running   0          10s

NAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/mario-service    ClusterIP   10.104.144.88   &amp;#x3C;none&gt;        80/TCP    22s
service/tetris-service   ClusterIP   10.111.218.14   &amp;#x3C;none&gt;        80/TCP    10s

NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mario-deployment    2/2     2            2           24s
deployment.apps/tetris-deployment   2/2     2            2           12s

NAME                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/mario-deployment-96f79d8f      2         2         2       24s
replicaset.apps/tetris-deployment-86d744fb47   2         2         2       12s
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Two games, &lt;em&gt;mario-deployment&lt;/em&gt; and &lt;em&gt;tetris-deployment&lt;/em&gt;, are deployed in the cluster, each running with 2 Pod replicas by default. They are exposed as &lt;em&gt;ClusterIP&lt;/em&gt; type services, providing internal connectivity and being accessible only from within the cluster.&lt;/p&gt;
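&lt;p&gt;Because the services are of the &lt;em&gt;ClusterIP&lt;/em&gt; type only, a quick way to sanity-check them from a workstation before any Ingress exists is a temporary port-forward. This is purely an optional verification step, not part of the final setup described in this post.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Forward a local port to the mario-service inside the cluster,
# then fetch the game page from a second terminal.
$ kubectl port-forward -n cfe-games svc/mario-service 8080:80

$ curl -I http://localhost:8080
&lt;/code&gt;&lt;/pre&gt;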
&lt;p&gt;You can configure &lt;a href=&quot;https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/&quot;&gt;&lt;em&gt;Horizontal Pod Autoscaling&lt;/em&gt; (HPA)&lt;/a&gt; in the cluster by using the K8s &lt;em&gt;HorizontalPodAutoscaler&lt;/em&gt; resource. It will automatically scale the workload by deploying more Pods in the cluster according to the gaming application&apos;s memory or CPU usage, as in the example below.&lt;/p&gt;
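&lt;p&gt;As a concrete example of the HPA mentioned above, the imperative command below creates an autoscaler for the Super Mario deployment. The CPU threshold and replica bounds are arbitrary values chosen for illustration, and HPA needs CPU resource requests on the Pods to compute utilization, so treat this as a sketch to adapt rather than a drop-in configuration.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Scale mario-deployment between 2 and 5 replicas, targeting 80% CPU utilization.
$ kubectl autoscale deployment mario-deployment -n cfe-games \
    --cpu-percent=80 --min=2 --max=5

# Inspect the resulting HorizontalPodAutoscaler
$ kubectl get hpa -n cfe-games
&lt;/code&gt;&lt;/pre&gt;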
&lt;p&gt;T﻿ype the following command to check that all the game service endpoints have been populated:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get endpoints -n cfe-games
NAME             ENDPOINTS                            AGE
mario-service    10.192.3.118:80,10.192.4.32:80       60s
tetris-service   10.192.3.119:3000,10.192.4.33:3000   50s
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Set up the load balancer with &lt;em&gt;MetalLB&lt;/em&gt;&lt;/h3&gt;
&lt;p&gt;You can install &lt;em&gt;MetalLB&lt;/em&gt; and set up the load balancer in the K8s cluster by following the instructions shown in the blog post &lt;a href=&quot;https://developer.hpe.com/blog/set-up-load-balancer-with-metallb-in-hpe-greenlake-for-private-cloud-enterprise/&quot;&gt;Setting up the load balancer with MetalLB&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Here is &lt;em&gt;MetalLB&lt;/em&gt; deployed to the namespace &lt;em&gt;metallb-system&lt;/em&gt; in the cluster:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get all -n metallb-system
NAME                              READY   STATUS    RESTARTS   AGE
pod/controller-57b4fdc957-dr4h4   1/1     Running   0          18d
pod/speaker-9kx9h                 1/1     Running   0          18d
pod/speaker-d6sdh                 1/1     Running   0          18d
pod/speaker-gxbbx                 1/1     Running   0          18d
pod/speaker-hflbj                 1/1     Running   0          18d
pod/speaker-wfw9n                 1/1     Running   0          18d

NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/webhook-service   ClusterIP   10.107.242.167   &amp;#x3C;none&gt;        443/TCP   18d

NAME                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/speaker   5         5         5       5            5           kubernetes.io/os=linux   18d

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/controller   1/1     1            1           18d

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/controller-57b4fdc957   1         1         1       18d
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;By running the following commands, you can see a range of virtual IP addresses, &lt;em&gt;&quot;10.6.115.251-10.6.115.254&quot;&lt;/em&gt;, defined in the CRD resource &lt;em&gt;IPAddressPool&lt;/em&gt;, and the layer 2 service IP address announcement in the CRD resource &lt;em&gt;L2Advertisement&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get ipaddresspools -n metallb-system
NAME       AUTO ASSIGN   AVOID BUGGY IPS   ADDRESSES
cfe-pool   true          false             [&quot;10.6.115.251-10.6.115.254&quot;]



$ kubectl get l2advertisements -n metallb-system
NAME           IPADDRESSPOOLS   IPADDRESSPOOL SELECTORS   INTERFACES
cfe-l2advert   [&quot;cfe-pool&quot;]
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Generate a self-signed certificate using cert-manager&lt;/h3&gt;
&lt;p&gt;You can d﻿eploy cert-manager to the K8s cluster and generate a self-signed certificate by following the instructions found in the blog post &lt;a href=&quot;https://developer.hpe.com/blog/generating-self-signed-certificates-using-cert-manager-for-kubernetes-in-hpe-greenlake-for-private-cloud-entreprise/&quot;&gt;Generating self-signed certificates using cert-manager&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Here is the cert-manager deployed to the namespace &lt;em&gt;cert-manager&lt;/em&gt; in the cluster:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get all -n cert-manager
NAME                                           READY   STATUS    RESTARTS   AGE
pod/cert-manager-6bcdd5f7c-f7lfw               1/1     Running   0          18d
pod/cert-manager-cainjector-5d4577b4d9-jmpsp   1/1     Running   0          18d
pod/cert-manager-webhook-bf957dc77-s9r2g       1/1     Running   0          18d

NAME                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/cert-manager           ClusterIP   10.109.28.203   &amp;#x3C;none&gt;        9402/TCP   18d
service/cert-manager-webhook   ClusterIP   10.100.82.119   &amp;#x3C;none&gt;        443/TCP    18d

NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/cert-manager              1/1     1            1           18d
deployment.apps/cert-manager-cainjector   1/1     1            1           18d
deployment.apps/cert-manager-webhook      1/1     1            1           18d

NAME                                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/cert-manager-6bcdd5f7c               1         1         1       18d
replicaset.apps/cert-manager-cainjector-5d4577b4d9   1         1         1       18d
replicaset.apps/cert-manager-webhook-bf957dc77       1         1         1       18d
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Below is the deployed self-signed custom resource definition (CRD) &lt;em&gt;Issuer&lt;/em&gt; in the namespace &lt;em&gt;cfe-games&lt;/em&gt; where the game applications are deployed. You want to generate the certificate in this namespace.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get issuer -n cfe-games
NAME                    READY   AGE
cfe-selfsigned-issuer   True    10s
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here is the generated self-signed certificate in the namespace &lt;em&gt;cfe-games&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get certificate -n cfe-games
NAME                 READY   SECRET             AGE
cfe-selfsigned-tls   True    cfe-tls-key-pair   8s
&lt;/code&gt;&lt;/pre&gt;
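&lt;p&gt;For reference, a &lt;em&gt;Certificate&lt;/em&gt; resource along these lines could have produced the object shown above. This is a hedged sketch assembled from the names used in this post (the issuer, the secret, and the dnsNames that appear later in the certificate&apos;s Subject Alternative Name); the authoritative manifest is in the cert-manager blog post linked above.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ cat cfe-selfsigned-certificate.yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: cfe-selfsigned-tls
  namespace: cfe-games
spec:
  secretName: cfe-tls-key-pair
  isCA: true
  commonName: example.com
  dnsNames:
    - example.com
    - super-mario.example.com
    - tetris.example.com
  issuerRef:
    name: cfe-selfsigned-issuer
    kind: Issuer
&lt;/code&gt;&lt;/pre&gt;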
&lt;p&gt;T﻿he K8s &lt;em&gt;Secret&lt;/em&gt; &lt;em&gt;&apos;cfe-tls-key-pair&apos;&lt;/em&gt; is created automatically in the same namespace as part of certificate deployment:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get secrets  -n cfe-games cfe-tls-key-pair
NAME               TYPE                DATA   AGE
cfe-tls-key-pair   kubernetes.io/tls   3      35s
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;T﻿ype the following &lt;em&gt;openssl&lt;/em&gt; command to check the content of the generated certificate:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ openssl x509 -in &amp;#x3C;(kubectl get secret -n cfe-games cfe-tls-key-pair -o jsonpath=&apos;{.data.tls\.crt}&apos; | base64 -d) -text -noout
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            2d:0e:ee:67:d2:e0:e2:e6:bc:f2:9a:da:2b:78:66:86
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN = example.com
        Validity
            Not Before: Feb 21 17:33:40 2024 GMT
            Not After : May 21 17:33:40 2024 GMT
        Subject: CN = example.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                RSA Public-Key: (2048 bit)
                Modulus:
                    00:d7:88:2a:e6:67:20:62:e4:25:f8:cd:63:b7:75:
                    bf:ac:d4:5a:8a:32:1c:06:29:17:96:cb:6b:36:97:
                    7f:9b:1d:f2:d6:f2:a4:f1:63:32:9b:7f:42:a1:31:
                    40:b6:02:ec:0b:37:a6:60:fb:11:72:28:96:91:90:
                    55:26:c5:58:3c:dd:a0:4b:a2:ab:33:19:29:88:24:
                    da:73:81:af:99:9b:df:7f:26:14:36:1b:56:93:24:
                    e9:91:d0:89:e1:62:d0:45:22:64:0b:c4:1d:96:71:
                    ab:ee:61:94:00:f6:60:71:10:10:fc:3e:d1:6b:b6:
                    5b:0b:bf:18:0c:86:90:b0:f9:eb:78:8c:dc:90:4e:
                    ef:87:1f:ac:22:56:2b:92:23:ae:fe:bb:48:1e:13:
                    40:03:b7:54:02:44:8f:ae:c6:61:bf:d4:e9:f7:17:
                    72:a8:98:72:b7:a6:e0:16:29:8d:ca:4a:1e:08:89:
                    78:f7:88:b7:ac:d2:b8:8d:89:88:c3:c7:04:f4:ff:
                    00:64:37:6f:3f:5a:43:2c:ce:e4:69:b2:a8:44:fe:
                    77:41:ec:97:b8:7b:82:49:b0:65:8e:fc:1f:1c:2b:
                    37:ea:46:9d:e4:5c:a0:56:9f:d8:3b:78:83:28:b5:
                    ac:a9:61:ce:25:c7:54:c8:a3:96:f6:a8:48:f4:57:
                    56:3b
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment, Certificate Sign
            X509v3 Basic Constraints: critical
                CA:TRUE
            X509v3 Subject Key Identifier:
                3F:DD:BB:BB:DB:23:47:E1:EC:39:1E:BE:03:AC:D4:7E:2A:E2:A6:FA
            X509v3 Subject Alternative Name:
                DNS:super-mario.example.com, DNS:tetris.example.com, DNS:example.com
    Signature Algorithm: sha256WithRSAEncryption
         78:46:61:2d:b8:27:fe:18:59:b2:57:ef:88:2b:2f:20:9f:a5:
         4a:28:33:64:46:78:e3:c4:7f:40:4a:38:ad:ca:0a:2e:7d:31:
         7f:70:81:e1:50:b6:4e:a5:02:31:bf:26:44:89:b2:1f:5c:3d:
         63:b8:62:bf:9c:b3:f0:96:76:bb:b0:3e:47:0e:bc:5e:fa:9c:
         9c:98:36:1d:2f:72:3d:b9:11:30:94:b0:2e:2f:a3:57:18:07:
         5d:bf:aa:0d:c6:36:20:2a:8f:a6:11:7c:e4:2f:03:07:2e:c4:
         cd:33:07:3f:c2:54:30:e0:bf:d1:8e:20:0a:bc:a3:90:39:46:
         d4:ed:03:c2:71:a1:43:b4:a6:c0:73:13:14:ea:a4:52:39:8f:
         72:59:00:1a:5f:1c:6e:1e:b7:4d:b5:9e:43:cd:e7:89:5a:07:
         ad:ce:41:f4:5a:cd:73:ee:bc:f4:01:73:92:9d:c4:a6:f1:8d:
         eb:43:af:65:78:8d:f0:e6:c3:df:bc:44:ca:19:c5:da:3f:a2:
         4d:89:fa:8e:63:33:3d:4d:8d:b3:98:3b:d9:12:c0:d9:3a:82:
         07:bc:81:fb:5d:c9:e5:38:3c:ec:d3:3e:e9:bc:e4:13:84:07:
         f3:c7:85:8a:46:ba:69:13:c7:a8:14:42:4b:ee:f9:2a:b4:3b:
         d9:8f:9c:50
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The line &lt;em&gt;X509v3 Subject Alternative Name&lt;/em&gt; contains the &lt;em&gt;dnsNames&lt;/em&gt;, &lt;em&gt;&apos;super-mario.example.com&apos;&lt;/em&gt; &amp;#x26; &lt;em&gt;&apos;tetris.example.com&apos;&lt;/em&gt;, which host two games, &lt;em&gt;Super Mario&lt;/em&gt; and &lt;em&gt;Tetris&lt;/em&gt;, respectively in the cluster.&lt;/p&gt;
&lt;h3&gt;Deploy Nginx Ingress controller&lt;/h3&gt;
&lt;p&gt;In order for an Ingress to work in the cluster, there must be an Ingress controller being deployed and running. It&apos;s the Ingress controller that accesses the certificate and the routing rules defined on the Ingress resource and makes them part of its configuration.&lt;/p&gt;
&lt;p&gt;A variety of Ingress controllers are available for deployment in the cluster, including &lt;a href=&quot;https://doc.traefik.io/traefik/providers/kubernetes-ingress/&quot;&gt;Traefik&lt;/a&gt;, &lt;a href=&quot;https://github.com/haproxytech/kubernetes-ingress#readme&quot;&gt;HAProxy&lt;/a&gt; and &lt;a href=&quot;https://www.nginx.com/products/nginx-ingress-controller/&quot;&gt;Nginx Ingress controller&lt;/a&gt;. Execute the command below to install the Nginx Ingress controller to the cluster using &lt;em&gt;Helm&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ helm upgrade --install ingress-nginx ingress-nginx \
&gt;   --repo https://kubernetes.github.io/ingress-nginx \
&gt;   --namespace ingress-nginx --create-namespace
Release &quot;ingress-nginx&quot; does not exist. Installing it now.
NAME: ingress-nginx
LAST DEPLOYED: Wed Mar  6 18:30:55 2024
NAMESPACE: ingress-nginx
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the load balancer IP to be available.
You can watch the status by running &apos;kubectl get service --namespace ingress-nginx ingress-nginx-controller --output wide --watch&apos;

An example Ingress that makes use of the controller:
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: example
    namespace: foo
  spec:
    ingressClassName: nginx
    rules:
      - host: www.example.com
        http:
          paths:
            - pathType: Prefix
              backend:
                service:
                  name: exampleService
                  port:
                    number: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
      - hosts:
        - www.example.com
        secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: &amp;#x3C;base64 encoded cert&gt;
    tls.key: &amp;#x3C;base64 encoded key&gt;
  type: kubernetes.io/tls
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;T﻿he Nginx Ingress controller is deployed to the namespace &lt;em&gt;ingress-nginx&lt;/em&gt; in the cluster. Type the following command to check the deployment details:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get all -n ingress-nginx
NAME                                            READY   STATUS    RESTARTS   AGE
pod/ingress-nginx-controller-5957546d75-zjwjh   1/1     Running   0          15d

NAME                                         TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                      AGE
service/ingress-nginx-controller             LoadBalancer   10.98.254.246    10.6.115.251   80:30209/TCP,443:30833/TCP   15d
service/ingress-nginx-controller-admission   ClusterIP      10.109.187.223   &amp;#x3C;none&gt;         443/TCP                      15d

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-controller   1/1     1            1           15d

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-controller-5957546d75   1         1         1       15d
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The service &lt;em&gt;ingress-nginx-controller&lt;/em&gt; is deployed with the service type &lt;em&gt;LoadBalancer&lt;/em&gt; and the &lt;em&gt;EXTERNAL-IP&lt;/em&gt; &lt;em&gt;10.6.115.251&lt;/em&gt;. This IP address will be used for setting up domain and subdomain name resolution.&lt;/p&gt;
&lt;h3&gt;Set up Ingress TLS&lt;/h3&gt;
&lt;p&gt;The Ingress resource with TLS has to be created. Here is a sample Ingress TLS resource &lt;em&gt;ingress-host-based-selfsigned-games.yaml&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ cat ingress-host-based-selfsigned-games.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-host-based-selfsigned
  annotations:
    ingress.kubernetes.io/ssl-redirect: &quot;true&quot;
    cert-manager.io/issuer: &quot;cfe-selfsigned-issuer&quot;
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - example.com
    secretName: cfe-tls-key-pair
  rules:
  - host: super-mario.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: mario-service
            port:
              number: 80
  - host: tetris.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tetris-service
            port:
              number: 80
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In the above sample YAML manifest file, the &lt;em&gt;&apos;tls&apos;&lt;/em&gt; block contains the hostname &lt;em&gt;&apos;example.com&apos;&lt;/em&gt; and the secret &lt;em&gt;cfe-tls-key-pair&lt;/em&gt; created in the certificate generation step. There is also the &lt;em&gt;&apos;rules&apos;&lt;/em&gt; block, in which a list of routing rules is defined per host, e.g., the host &lt;em&gt;&apos;super-mario.example.com&apos;&lt;/em&gt; is routed to the Super Mario game service &lt;em&gt;&apos;mario-service&apos;&lt;/em&gt; in the backend.&lt;/p&gt;
&lt;p&gt;T﻿ype the following command to deploy the Ingress resource to the namespace &lt;em&gt;cfe-games&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl apply -f ingress-host-based-selfsigned-games.yaml -n cfe-games
ingress.networking.k8s.io/ingress-host-based-selfsigned created
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Check the details of the &lt;em&gt;TLS&lt;/em&gt; and &lt;em&gt;Rules&lt;/em&gt; settings by t﻿yping the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl describe ingress ingress-host-based-selfsigned -n cfe-games
Name:             ingress-host-based-selfsigned
Labels:           &amp;#x3C;none&gt;
Namespace:        cfe-games
Address:
Ingress Class:    nginx
Default backend:  &amp;#x3C;default&gt;
TLS:
  cfe-tls-key-pair terminates example.com
Rules:
  Host                     Path  Backends
  ----                     ----  --------
  super-mario.example.com
                           /   mario-service:80 (10.192.4.21:80,10.192.4.22:80)
  tetris.example.com
                           /   tetris-service:80 (10.192.3.231:3000,10.192.4.27:3000)
Annotations:               cert-manager.io/issuer: cfe-selfsigned-issuer
                           ingress.kubernetes.io/ssl-redirect: true
Events:
  Type    Reason             Age   From                       Message
  ----    ------             ----  ----                       -------
  Normal  Sync               30s   nginx-ingress-controller   Scheduled for sync
  Normal  CreateCertificate  30s   cert-manager-ingress-shim  Successfully created Certificate &quot;cfe-tls-key-pair&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Access deployed games&lt;/h3&gt;
&lt;p&gt;Before accessing the deployed games, you need to set up the subdomain name resolution. For the subdomains &lt;em&gt;super-mario.example.com&lt;/em&gt; and &lt;em&gt;tetris.example.com&lt;/em&gt;, the workstation hosts file has been used for DNS resolution.&lt;/p&gt;
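&lt;p&gt;On a Linux or macOS workstation, that means adding entries such as the following to &lt;code&gt;/etc/hosts&lt;/code&gt; (on Windows, the equivalent file is &lt;code&gt;C:\Windows\System32\drivers\etc\hosts&lt;/code&gt;), pointing both subdomains at the external IP address of the Ingress controller:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Hosts file entries mapping the game subdomains to the load balancer IP
10.6.115.251   super-mario.example.com
10.6.115.251   tetris.example.com
&lt;/code&gt;&lt;/pre&gt;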
&lt;p&gt;Type the following commands to check that the domain/subdomain name resolution is set up correctly:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ host super-mario.example.com
super-mario.example.com has address 10.6.115.251

$ host tetris.example.com
tetris.example.com has address 10.6.115.251
&lt;/code&gt;&lt;/pre&gt;
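&lt;p&gt;If you prefer the command line, you can also run a quick check with &lt;em&gt;curl&lt;/em&gt;, using the &lt;em&gt;-k&lt;/em&gt; option to skip certificate verification since the certificate is self-signed. This is an optional sketch, not part of the original walkthrough:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# -k (insecure) is needed because the certificate is self-signed
$ curl -k -I https://super-mario.example.com
$ curl -k -I https://tetris.example.com
&lt;/code&gt;&lt;/pre&gt;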
&lt;p&gt;You can then access the deployed games using the browser. S﻿tart the browser and type the URL &lt;em&gt;super-mario.example.com&lt;/em&gt;. It will be redirected over HTTPS with the warning message &lt;em&gt;&apos;Your connection is not private&apos;&lt;/em&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/mario-private.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;This is because the certificate configured in the K8s Ingress resource is a self-signed certificate generated by cert-manager.&lt;/p&gt;
&lt;p&gt;C﻿lick &lt;em&gt;Not secure&lt;/em&gt; and start the Certificate Viewer to check the certificate:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/mario-certificate.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;C﻿lick &lt;em&gt;Proceed to super-mario.example.com (unsafe)&lt;/em&gt;. You will land on the &lt;em&gt;SUPER MARIO&lt;/em&gt; game page:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/super-mario.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;If you type the URL &lt;em&gt;tetris.example.com&lt;/em&gt; into the browser, it will be redirected over HTTPS with the same warning message &lt;em&gt;&apos;Your connection is not private&apos;&lt;/em&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/tetris-private.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;C﻿lick &lt;em&gt;Proceed to tetris.example.com (unsafe)&lt;/em&gt;. You will then go to the Tetris &lt;em&gt;Start&lt;/em&gt; page:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/tetris-start.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;When you click on the &lt;em&gt;Start&lt;/em&gt; button, you will land on the &lt;em&gt;Tetris&lt;/em&gt; game page:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/tetris.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;That&apos;s all there is to it! E﻿njoy playing your games!&lt;/p&gt;
&lt;h3&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;This blog post offers you a comprehensive guide on how to deploy &lt;em&gt;Super Mario&lt;/em&gt; and &lt;em&gt;Tetris&lt;/em&gt; in a K8s cluster and expose those games for secure access via HTTPS in HPE GreenLake for Private Cloud Enterprise. It details the process of configuring TLS termination on an Ingress controller backed by a load balancer, utilizing a K8s Ingress resource and a self-signed TLS certificate generated with cert-manager. This guide fully aligns with the stringent security and compliance requirements of the K8s production environment in HPE GreenLake for Private Cloud Enterprise.&lt;/p&gt;
&lt;p&gt;Please keep coming back to the &lt;a href=&quot;https://developer.hpe.com/blog/&quot;&gt;HPE Developer Community blog&lt;/a&gt; to learn more about HPE GreenLake for Private Cloud Enterprise and get more ideas on how you can use it in your everyday operations.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Configuring SSO for Aruba Central and HPE GreenLake using Okta]]></title><description><![CDATA[Aruba Central has gone GREEN…GreenLake that is! Aruba Central has recently been integrated into the HPE GreenLake Cloud Platform (GLCP…]]></description><link>https://developer.hpe.com/okta-sso-integration-for-green-lake-and-aruba-central/</link><guid isPermaLink="false">https://developer.hpe.com/okta-sso-integration-for-green-lake-and-aruba-central/</guid><pubDate>Tue, 26 Mar 2024 19:04:06 GMT</pubDate><content:encoded>&lt;p&gt;Aruba Central has gone GREEN…GreenLake that is! Aruba Central has recently been integrated into the HPE GreenLake Cloud Platform (GLCP). This provides IT administrators with the ability to view and orchestrate critical network services, such as Wired, Wireless and SD-Branch, through the same dashboard as their compute and storage infrastructure. GLCP also supports Single Sign On (SSO) which helps simplify account management.&lt;/p&gt;
&lt;p&gt;If you are new to Aruba Central and are looking to enable SSO, this guide is for you. It will walk you through the process of configuring SSO for HPE GreenLake and Aruba Central using Okta.&lt;/p&gt;
&lt;h3&gt;Before starting&lt;/h3&gt;
&lt;p&gt;Please review the &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00120892en_us&quot;&gt;HPE GreenLake&lt;/a&gt; User Guide to understand how the SAML framework works in the context of Common Cloud Services for the Aruba Central application.&lt;/p&gt;
&lt;h3&gt;Configure SSO/SAML applications in Okta&lt;/h3&gt;
&lt;p&gt;To configure application metadata in Okta, complete the following steps:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Step 1: Create an Okta SAML application&lt;/li&gt;
&lt;li&gt;Step 2: Configure Sign On settings&lt;/li&gt;
&lt;li&gt;Step 3: Export the SAML 2.0 IdP metadata&lt;/li&gt;
&lt;li&gt;Step 4: Configure the SAML connection in HPE GreenLake&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Step 1: Create an Okta SAML application&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Log in to the Okta administration console.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Applications &gt; Create new app integration.&lt;/strong&gt; The Create a new app integration window opens.&lt;/li&gt;
&lt;li&gt;Select SAML 2.0 and click &lt;strong&gt;Next&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/ws-image0.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Provide a name for the Aruba GreenLake SSO service (Okta application)&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/ws-image1.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 2: Configure Sign On settings&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Enter the SAML information.&lt;/p&gt;
&lt;p&gt;Under General:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Single Sign on URL:&lt;/strong&gt; &lt;a href=&quot;https://sso.common.cloud.hpe.com/sp/ACS.saml2&quot;&gt;https://sso.common.cloud.hpe.com/sp/ACS.saml2&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Audience URI (SP Entity ID):&lt;/strong&gt; &lt;a href=&quot;https://sso.common.cloud.hpe.com&quot;&gt;https://sso.common.cloud.hpe.com&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Name ID format EmailAddress&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Application username Email&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;NameID = user.email&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;gl_first_name = user.FirstName&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;gl_last_name = user.LastName&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;hpe_ccs_attribute = (See Below)&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;See here for IdP attribute details: &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00120892en_us&quot;&gt;https://support.hpe.com/hpesc/public/docDisplay?docId=a00120892en_us&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;As part of the HPE GreenLake cloud platform integration, one of the additional features that was added is the Role Based Access Controls for Aruba Central and all other apps on the platform. A new SAML attribute has been added “hpe_ccs_attribute” which tells HPE GreenLake and Central the exact role/permissions for each user. The following describes how to format the attribute.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/ws-image2.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/ws-image3.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/ws-image17.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/ws-image5.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;hpe_ccs_attribute&lt;/strong&gt; always starts with &lt;code&gt;version_1#&lt;/code&gt;. You must first configure the attributes for HPE GreenLake CCS, and then Central. To do so, enter the PCID for the account, followed by the HPE GreenLake application ID. This will always be &lt;strong&gt;00000000-0000-0000-0000-000000000000&lt;/strong&gt;. Following this, enter the role name and &lt;strong&gt;ALL_SCOPES&lt;/strong&gt;. Next, enter the Aruba Central information. Start with the &lt;strong&gt;app cid&lt;/strong&gt;, followed by the role name (e.g., Aruba Central Administrator), and then &lt;strong&gt;ALL_SCOPES&lt;/strong&gt;.&lt;/p&gt;
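&lt;p&gt;Put differently, the attribute follows the general pattern sketched below, where the placeholders in angle brackets are values you substitute for your own environment:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;version_1#&amp;#x3C;PCID&gt;:00000000-0000-0000-0000-000000000000:&amp;#x3C;HPE GreenLake role name&gt;:ALL_SCOPES:&amp;#x3C;Aruba Central app cid&gt;:&amp;#x3C;Aruba Central role name&gt;:ALL_SCOPES
&lt;/code&gt;&lt;/pre&gt;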
&lt;p&gt;Example:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;version_1#5b0ec0e8c4f422eca232ba72799953ac:00000000-0000-0000-0000-000000000000:Account Administrator:ALL_SCOPES:683da368-66cb-4ee7-90a9-ec1964768092:Aruba Central Administrator:ALL_SCOPES&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;If you want to add additional HPE GreenLake applications, or if you have multiple Aruba Central accounts, you can add them as well. Just follow the same syntax as before. Once you have the attribute defined, enter it into the SAML attribute statement in Okta as shown below.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/ws-image6.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;2﻿. Complete the setup.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/ws-image7.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Click Next and Select “Internal App”, then Finish.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; &lt;strong&gt;Export the SAML 2.0 IdP metadata&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Click Next – Configure the Sign On settings&lt;/p&gt;
&lt;p&gt;You will find two options available: &lt;strong&gt;View Setup Instructions&lt;/strong&gt;, which steps you through the SAML configuration, and &lt;strong&gt;Identity Provider metadata&lt;/strong&gt;, which will produce an XML file that can be loaded into Aruba Central.&lt;/p&gt;
&lt;p&gt;Suggestion: Click &lt;strong&gt;Identity Provider metadata&lt;/strong&gt; and save the XML data to a file.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/ws-image9.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;C﻿lick Next.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select Internal app, and Click Finish.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;Step 4: Create SAML Authorization Profile in HPE GreenLake Cloud Platform&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Log into HPE GreenLake, click Menu &gt; Manage &gt; Authentication, and then click Set Up SAML Connection.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Before you can add a new SAML configuration, you must have at least one user account with that domain already enabled in HPE GreenLake. Also, you must be logged into HPE GreenLake with an account from that domain in order to enable SSO for it.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/ws-image10.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Type in the domain you want to enable SSO on:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/ws-image11.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Input the metadata from the step above.&lt;/p&gt;
&lt;p&gt;While HPE GreenLake does support entering this information manually, it&apos;s recommended that you simply upload the XML metadata that was downloaded in the previous step. To do so, select Metadata File, choose the XML file, and then click Next.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/ws-image12.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Enter the SAML attributes to match what was entered in Okta. Set the idle timeout value as well.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/ws-image13.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Then click Next.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a recovery user so that, in the event SSO fails, an admin can still access the HPE GreenLake portal.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/ws-image14.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Congratulations! SSO will now be enabled for HPE GreenLake as well as the Aruba Central application. Log out and on the HPE GreenLake home page, click &lt;strong&gt;Sign in with SSO&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;Testing and troubleshooting:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;On the HPE GreenLake Cloud Platform home page, click &lt;strong&gt;Sign In with SSO&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/ws-image15.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/ws-image16.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Enter the SSO credentials. You will be redirected to Okta to authenticate. Once you successfully authenticate, you will be redirected back to HPE GreenLake. You can then click on the Aruba Central application and be given access based on the configured role/permissions.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Additional notes:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;There must be at least &lt;strong&gt;one&lt;/strong&gt; verified user belonging to the &lt;strong&gt;Domain&lt;/strong&gt; prior to configuration.&lt;/li&gt;
&lt;li&gt;In order to configure SSO, you must be logged into HPE GreenLake with a user from the domain.&lt;/li&gt;
&lt;li&gt;SSO user access is determined by the “role_name” attribute included in the SAML &quot;hpe_ccs_attribute&quot; provided by the IdP.&lt;/li&gt;
&lt;li&gt;SSO users can initiate a Single Sign On request by trying to log into Aruba Central (SP-initiated login).&lt;/li&gt;
&lt;li&gt;For more troubleshooting: &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00120892en_us&quot;&gt;https://support.hpe.com/hpesc/public/docDisplay?docId=a00120892en_us&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[AI News #16]]></title><description><![CDATA[E﻿xternal post]]></description><link>https://developer.hpe.com/ai-news-16/</link><guid isPermaLink="false">https://developer.hpe.com/ai-news-16/</guid><pubDate>Tue, 26 Mar 2024 17:20:55 GMT</pubDate><content:encoded>&lt;p&gt;E﻿xternal post&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Chapel 2.0: Scalable and Productive Computing for All]]></title><description><![CDATA[E﻿xternal blog]]></description><link>https://developer.hpe.com/chapel-2-0-scalable-and-productive-computing-for-all/</link><guid isPermaLink="false">https://developer.hpe.com/chapel-2-0-scalable-and-productive-computing-for-all/</guid><pubDate>Thu, 21 Mar 2024 22:42:35 GMT</pubDate><content:encoded>&lt;p&gt;E﻿xternal blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE Athonet LLM Platform: Driving Users towards peak 'Flow' efficiency]]></title><description><![CDATA[In the preceding blog posts,"The transformative impact of generative AI on Telco products"and "HPE Athonet LLM Platform: From personal…]]></description><link>https://developer.hpe.com/hpe-athonet-llm-platform-driving-users-towards-peak-flow-efficiency/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-athonet-llm-platform-driving-users-towards-peak-flow-efficiency/</guid><pubDate>Wed, 20 Mar 2024 17:36:19 GMT</pubDate><content:encoded>&lt;p&gt;In the preceding blog posts,&lt;a href=&quot;https://developer.hpe.com/blog/the-transformative-impact-of-generative-ai-on-telco-products/&quot;&gt;&quot;The transformative impact of generative AI on Telco products&quot;&lt;/a&gt;and &lt;a href=&quot;https://developer.hpe.com/blog/hpe-athonet-llm-platform-first-pillar-from-personal-assistant-to-collaborative-corporate-tool/&quot;&gt;&quot;HPE Athonet LLM Platform: From personal assistant to collaborative corporate tool&quot;&lt;/a&gt;, I detailed how HPE Athonet is constructing a solution anchored in three fundamental aspects:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Shifting towards a framework that centers on team collaboration.&lt;/li&gt;
&lt;li&gt;Elevating the user experience to foster productivity and concentration on tasks.&lt;/li&gt;
&lt;li&gt;Crafting a flexible infrastructure grounded in data mesh methodologies, encompassing the management of security and ethical standards.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These components collectively illustrate HPE Athonet&apos;s dedication not solely to the advancement of technology but also to the promotion of true digital evolution within network management through the application of generative AI.&lt;/p&gt;
&lt;p&gt;This article further explores the second core principle shaping the strategy: &lt;strong&gt;the enhancement of usability and user experience&lt;/strong&gt; to assist individuals in maintaining focus on their tasks.&lt;/p&gt;
&lt;p&gt;While current chatbot solutions offer limited functionalities, HPE Athonet&apos;s goal is to introduce a tool that significantly improves task concentration and multitasking capabilities. By creating a system that can be augmented with additional tools or plugins, the intention is to support users in sustaining focus and potentially reaching a state of &lt;strong&gt;&quot;Flow&quot;&lt;/strong&gt;, where they are at peak productivity.&lt;/p&gt;
&lt;p&gt;This strategy reflects design concepts present in some eLearning platforms, focusing on user involvement and efficiency, as outlined in the provided &lt;a href=&quot;https://www.ux-republic.com/en/flow-a-key-concept-for-ux-designers/&quot;&gt;“Flow, a key concept for UX designers”&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/athon_ux-copy.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Major companies in the tech industry are incorporating generative AI assistants across their office tool suites. HPE Athonet’s approach significantly diverges; rather than amplifying a variety of general-purpose products for individual usage, the aim is to forge a specialized instrument geared towards &lt;strong&gt;augmenting team collaboration&lt;/strong&gt; specifically within 5G project realms. This instrument consolidates all vital data and resources, facilitating each team member’s effective engagement and seamless contribution to unified project endeavors.&lt;/p&gt;
&lt;p&gt;HPE Athonet introduces a novel feature allowing the selection of a particular project, which unlocks a comprehensive archive of all related actions and a set of project-specific tools. For instance, when evolving a new attribute for the HPE Athonet 5G core, the project archive might encompass customer feature inquiries, viability studies, and strategic evaluations by product managers and architects, in addition to relevant 3rd Generation Partnership Project (3GPP) standards details. This amalgamation ensures seamless access to all essential data for the R&amp;#x26;D team via a straightforward chat interaction with the tool as they commence development.&lt;/p&gt;
&lt;p&gt;The platform accommodates &lt;strong&gt;individual and collective discussions&lt;/strong&gt;, maintaining the project archive up-to-date with fresh insights. To further refine the user experience, it is vital not only to address user queries but also to exhibit the source content, potentially showcasing the pertinent document segment with highlighted key areas. Furnishing an answer&apos;s relevance score could further enhance user engagement.&lt;/p&gt;
&lt;p&gt;For tools interfacing with devices or 5G networks to present real-time insights, the integration of sophisticated visualization attributes like Key Performance Indicator (KPI) charts, geographical mappings, and system layouts is vital. These improvements should transcend mere textual exchanges, providing a user interface that skillfully renders data through the most effective multimedia elements. This strategy guarantees &lt;strong&gt;that information is not just reachable but also intuitively understandable&lt;/strong&gt;, empowering users to immediately digest complex data and make prompt, well-informed choices.&lt;/p&gt;
&lt;p&gt;Technical instruments often demand specific input formats; thus, HPE Athonet is also introducing the ability to customize the user interface integration within the chatbot. All these initiatives aim in a unified direction: to assist the user in efficiently accessing or performing required tasks through a straightforward interface that can link to multiple tools.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/athon_ux_tools.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;This blog post highlights the second foundational pillar shaping HPE Athonet&apos;s forthcoming product: enhancing user experience to foster productivity and task concentration. It demonstrates how HPE Athonet intends to implement its strategy, which also encompasses transitioning to a framework centered around team collaboration and developing a secure, ethical, and adaptable infrastructure. These initiatives highlight HPE Athonet&apos;s dedication to innovation and digital transformation within network management.&lt;/p&gt;
&lt;p&gt;Stay tuned for an upcoming post on the final pillar concerning the development of an LLM Data Platform.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Exposing applications using Ingress and TLS termination on Kubernetes in HPE GreenLake for Private Cloud Enterprise]]></title><description><![CDATA[Overview HPE GreenLake for Private Cloud Enterprise: Containers, one of the HPE GreenLake cloud services available on the HPE GreenLake for…]]></description><link>https://developer.hpe.com/exposing-an-application-using-ingress-and-tls-termination-on-kubernetes-in-hpe-greenlake-for-private-cloud-enterprise/</link><guid isPermaLink="false">https://developer.hpe.com/exposing-an-application-using-ingress-and-tls-termination-on-kubernetes-in-hpe-greenlake-for-private-cloud-enterprise/</guid><pubDate>Wed, 20 Mar 2024 10:06:33 GMT</pubDate><content:encoded>&lt;style&gt; li { font-size: 27px; line-height: 33px; max-width: none; } &lt;/style&gt;
&lt;h3&gt;Overview&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/greenlake/containers.html&quot;&gt;HPE GreenLake for Private Cloud Enterprise: Containers&lt;/a&gt;, one of the HPE GreenLake cloud services available on the HPE GreenLake for Private Cloud Enterprise, allows customers to create a K8s cluster and deploy containerized applications to the cluster. It provides an enterprise-grade container management service using open source K8s.&lt;/p&gt;
&lt;p&gt;O﻿nce applications are deployed in a K8s cluster, the next step is to create services that expose these applications. By default, K8s services are created with the &lt;em&gt;ClusterIP&lt;/em&gt; type, which supports internal connectivity among different components of the applications. However, these services are not directly accessible from outside the cluster. It can be challenging to securely expose the deployed applications over HTTPS. This involves generating and managing SSL/TLS certificates for multiple applications deployed in the cluster. These certificates are crucial for secure communication between services. Installing them and managing them correctly is essential to avoid access issues and security risks.&lt;/p&gt;
&lt;p&gt;To address exposing applications over HTTPS, K8s provides the concept of &lt;em&gt;Ingress&lt;/em&gt;. An Ingress acts as an entry point for external traffic into the cluster. It can be configured with TLS termination. However, setting up K8s Ingress with TLS termination is intricate. It involves creating a K8s &lt;em&gt;Secret&lt;/em&gt; to host the certificate and referencing the secret in the Ingress resource. It also requires an additional load balancer configuration in the cluster.&lt;/p&gt;
&lt;p&gt;This blog post outlines the comprehensive steps used to expose applications u﻿sing Ingress and TLS termination on K8s in HPE GreenLake for Private Cloud Enterprise. &lt;a href=&quot;https://developer.hpe.com/blog/set-up-load-balancer-with-metallb-in-hpe-greenlake-for-private-cloud-enterprise/&quot;&gt;MetalLB&lt;/a&gt; is deployed to the cluster to set up the load balancer. It enables external access to services within the cluster. &lt;a href=&quot;https://developer.hpe.com/blog/generating-self-signed-certificates-using-cert-manager-for-kubernetes-in-hpe-greenlake-for-private-cloud-entreprise/&quot;&gt;Cert-manager&lt;/a&gt; is used for creating and managing SSL/TLS certificates. The generated certificate is stored as a K8s &lt;em&gt;Secret&lt;/em&gt; object. This secret can be mounted by application Pods or used by an Ingress controller. The &lt;a href=&quot;https://www.nginx.com/products/nginx-ingress-controller/&quot;&gt;Nginx Ingress controller&lt;/a&gt; is deployed and configured in the cluster. It handles SSL certificates and facilitates secure access to applications in the backend.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/tls-termination-s.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Despite the complexities, securely exposing applications in a K8s cluster over HTTPS is attainable. It can be achieved by leveraging Ingress and TLS termination, along with a suite of suitable tools and utilities deployed within the K8s cluster in HPE GreenLake for Private Cloud Enterprise.&lt;/p&gt;
&lt;h3&gt;Prerequisites&lt;/h3&gt;
&lt;p&gt;Before starting, make sure you have the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A K8s cluster provisioned in HPE GreenLake for Private Cloud Enterprise&lt;/li&gt;
&lt;li&gt;The &lt;em&gt;kubectl&lt;/em&gt; CLI tool, together with the kubeconfig file for accessing the K8s cluster&lt;/li&gt;
&lt;li&gt;The &lt;em&gt;helm&lt;/em&gt; CLI tool, version 3.12.0 or later&lt;/li&gt;
&lt;li&gt;A domain and a list of subdomains to generate the SSL certificate and host the applications in the cluster&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Set up the load balancer with MetalLB&lt;/h3&gt;
&lt;p&gt;You can install MetalLB and set up the load balancer in the K8s cluster by following the instructions found in the blog post &lt;a href=&quot;https://developer.hpe.com/blog/set-up-load-balancer-with-metallb-in-hpe-greenlake-for-private-cloud-enterprise/&quot;&gt;Setting up the load balancer with MetalLB&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Here is MetalLB deployed to the namespace &lt;em&gt;metallb-system&lt;/em&gt; in the cluster:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get all -n metallb-system
NAME                              READY   STATUS    RESTARTS   AGE
pod/controller-57b4fdc957-56wv8   1/1     Running   0          22m
pod/speaker-c7sgk                 1/1     Running   0          22m
pod/speaker-dtlpm                 1/1     Running   0          22m
pod/speaker-gxccz                 1/1     Running   0          22m
pod/speaker-pwl87                 1/1     Running   0          22m
pod/speaker-rvvkz                 1/1     Running   0          22m
pod/speaker-wxd5n                 1/1     Running   0          22m

NAME                      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/webhook-service   ClusterIP   10.102.54.20   &amp;#x3C;none&gt;        443/TCP   22m

NAME                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/speaker   6         6         6       6            6           kubernetes.io/os=linux   22m

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/controller   1/1     1            1           22m

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/controller-57b4fdc957   1         1         1       22m
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can see the range of virtual IP addresses, &lt;em&gt;&quot;10.6.115.251-10.6.115.254&quot;&lt;/em&gt;, defined in the CRD resource &lt;em&gt;IPAddressPool&lt;/em&gt;, and the layer 2 service IP address announcement in the CRD resource &lt;em&gt;L2Advertisement&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get ipaddresspools -n metallb-system
NAME       AUTO ASSIGN   AVOID BUGGY IPS   ADDRESSES
cfe-pool   true          false             [&quot;10.6.115.251-10.6.115.254&quot;]

$ kubectl get l2advertisements -n metallb-system
NAME           IPADDRESSPOOLS   IPADDRESSPOOL SELECTORS   INTERFACES
cfe-l2advert   [&quot;cfe-pool&quot;]
&lt;/code&gt;&lt;/pre&gt;
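&lt;p&gt;For reference, a minimal sketch of the &lt;em&gt;IPAddressPool&lt;/em&gt; and &lt;em&gt;L2Advertisement&lt;/em&gt; manifests that could produce the resources shown above is included below. The names and the address range match this environment; replace them with values appropriate for your own cluster:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: cfe-pool
  namespace: metallb-system
spec:
  addresses:
  - 10.6.115.251-10.6.115.254
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: cfe-l2advert
  namespace: metallb-system
spec:
  ipAddressPools:
  - cfe-pool
&lt;/code&gt;&lt;/pre&gt;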
&lt;h3&gt;Deploy Nginx Ingress controller&lt;/h3&gt;
&lt;p&gt;In order for an Ingress to work in the cluster, there must be an Ingress controller deployed and running. It&apos;s the Ingress controller that reads the certificate and the routing rules defined on the Ingress resource and makes them part of its configuration.&lt;/p&gt;
&lt;p&gt;A variety of Ingress controllers are available for deployment in the cluster, including &lt;a href=&quot;https://doc.traefik.io/traefik/providers/kubernetes-ingress/&quot;&gt;Traefik&lt;/a&gt;, &lt;a href=&quot;https://github.com/haproxytech/kubernetes-ingress#readme&quot;&gt;HAProxy&lt;/a&gt; and &lt;a href=&quot;https://www.nginx.com/products/nginx-ingress-controller/&quot;&gt;Nginx Ingress controller&lt;/a&gt;. Execute the command below to install the Nginx Ingress controller to the cluster using &lt;em&gt;Helm&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ helm upgrade --install ingress-nginx ingress-nginx \
&gt;   --repo https://kubernetes.github.io/ingress-nginx \
&gt;   --namespace ingress-nginx --create-namespace
Release &quot;ingress-nginx&quot; does not exist. Installing it now.
NAME: ingress-nginx
LAST DEPLOYED: Wed Mar  6 18:30:55 2024
NAMESPACE: ingress-nginx
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the load balancer IP to be available.
You can watch the status by running &apos;kubectl get service --namespace ingress-nginx ingress-nginx-controller --output wide --watch&apos;

An example Ingress that makes use of the controller:
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: example
    namespace: foo
  spec:
    ingressClassName: nginx
    rules:
      - host: www.example.com
        http:
          paths:
            - pathType: Prefix
              backend:
                service:
                  name: exampleService
                  port:
                    number: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
      - hosts:
        - www.example.com
        secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: &amp;#x3C;base64 encoded cert&gt;
    tls.key: &amp;#x3C;base64 encoded key&gt;
  type: kubernetes.io/tls
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;T﻿he Nginx Ingress controller is deployed to the namespace &lt;em&gt;ingress-nginx&lt;/em&gt; in the cluster. Type the following command to check the deployment details:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get all -n ingress-nginx
NAME                                            READY   STATUS    RESTARTS   AGE
pod/ingress-nginx-controller-548768956f-8bz2q   1/1     Running   0          15m

NAME                                         TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                      AGE
service/ingress-nginx-controller             LoadBalancer   10.108.173.7     10.6.115.251   80:32734/TCP,443:32265/TCP   15m
service/ingress-nginx-controller-admission   ClusterIP      10.108.100.150   &amp;#x3C;none&gt;         443/TCP                      15m

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-controller   1/1     1            1           15m

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-controller-548768956f   1         1         1       15m
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The service &lt;em&gt;ingress-nginx-controller&lt;/em&gt; is deployed as a service of type &lt;em&gt;LoadBalancer&lt;/em&gt; with the &lt;em&gt;EXTERNAL-IP&lt;/em&gt; &lt;em&gt;10.6.115.251&lt;/em&gt; assigned to it. This IP address will be used for setting up domain and subdomain name resolution.&lt;/p&gt;
&lt;h3&gt;Generate a self-signed certificate using cert-manager&lt;/h3&gt;
&lt;p&gt;You can d﻿eploy cert-manager to the K8s cluster and generate a self-signed certificate by following the instructions found in the blog post &lt;a href=&quot;https://developer.hpe.com/blog/generating-self-signed-certificates-using-cert-manager-for-kubernetes-in-hpe-greenlake-for-private-cloud-entreprise/&quot;&gt;Generating self-signed certificates using cert-manager&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Here is cert-manager deployed to the namespace &lt;em&gt;cert-manager&lt;/em&gt; in the cluster:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get all -n cert-manager
NAME                                           READY   STATUS    RESTARTS   AGE
pod/cert-manager-59fbb6655d-h7sqb              1/1     Running   0          18s
pod/cert-manager-cainjector-69548575fb-7fqd2   1/1     Running   0          18s
pod/cert-manager-webhook-57b78f476d-mp45s      1/1     Running   0          16s

NAME                           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/cert-manager           ClusterIP   10.107.221.97    &amp;#x3C;none&gt;        9402/TCP   20s
service/cert-manager-webhook   ClusterIP   10.104.243.185   &amp;#x3C;none&gt;        443/TCP    19s

NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/cert-manager              1/1     1            1           18s
deployment.apps/cert-manager-cainjector   1/1     1            1           18s
deployment.apps/cert-manager-webhook      1/1     1            1           17s

NAME                                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/cert-manager-59fbb6655d              1         1         1       19s
replicaset.apps/cert-manager-cainjector-69548575fb   1         1         1       19s
replicaset.apps/cert-manager-webhook-57b78f476d      1         1         1       18s
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Below is the self-signed &lt;em&gt;Issuer&lt;/em&gt; resource deployed in the namespace &lt;em&gt;nginx-apps&lt;/em&gt;, the namespace in which you want to generate the certificate:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get issuer -n nginx-apps
NAME                    READY   AGE
cfe-selfsigned-issuer   True    115s
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here is the generated self-signed certificate in the namespace &lt;em&gt;nginx-apps&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get certificate -n nginx-apps
NAME                 READY   SECRET             AGE
cfe-selfsigned-tls   True    cfe-tls-key-pair   2m23s
&lt;/code&gt;&lt;/pre&gt;
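&lt;p&gt;For reference, a minimal sketch of the &lt;em&gt;Issuer&lt;/em&gt; and &lt;em&gt;Certificate&lt;/em&gt; manifests behind these resources might look like the following. The &lt;em&gt;commonName&lt;/em&gt;, &lt;em&gt;dnsNames&lt;/em&gt; and &lt;em&gt;secretName&lt;/em&gt; values are taken from the certificate details shown further below; treat this as an illustration rather than the exact manifests used:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: cfe-selfsigned-issuer
  namespace: nginx-apps
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: cfe-selfsigned-tls
  namespace: nginx-apps
spec:
  isCA: true
  commonName: example.com
  dnsNames:
  - green.nginx.example.com
  - blue.nginx.example.com
  - nginx.example.com
  - example.com
  secretName: cfe-tls-key-pair
  issuerRef:
    kind: Issuer
    name: cfe-selfsigned-issuer
&lt;/code&gt;&lt;/pre&gt;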
&lt;p&gt;T﻿he K8s &lt;em&gt;Secret&lt;/em&gt; &lt;em&gt;&apos;cfe-tls-key-pair&apos;&lt;/em&gt; is created automatically in the same namespace as part of certificate deployment:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get secrets -n nginx-apps cfe-tls-key-pair
NAME               TYPE                DATA   AGE
cfe-tls-key-pair   kubernetes.io/tls   3      2m25s
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;T﻿ype the following command to check the &lt;em&gt;commonName&lt;/em&gt; and the &lt;em&gt;dnsNames&lt;/em&gt; in the generated certificate:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl describe certificate cfe-selfsigned-tls -n nginx-apps
Name:         cfe-selfsigned-tls
Namespace:    nginx-apps
Labels:       &amp;#x3C;none&gt;
Annotations:  &amp;#x3C;none&gt;
API Version:  cert-manager.io/v1
Kind:         Certificate
Metadata:
  Creation Timestamp:  2024-03-06T17:58:51Z
  Generation:          1
  Managed Fields:
    API Version:  cert-manager.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:commonName:
        f:dnsNames:
        f:isCA:
        f:issuerRef:
          .:
          f:kind:
          f:name:
        f:secretName:
    Manager:      kubectl-client-side-apply
    Operation:    Update
    Time:         2024-03-06T17:58:51Z
    API Version:  cert-manager.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        f:revision:
    Manager:      cert-manager-certificates-issuing
    Operation:    Update
    Subresource:  status
    Time:         2024-03-06T17:58:52Z
    API Version:  cert-manager.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:conditions:
          .:
          k:{&quot;type&quot;:&quot;Ready&quot;}:
            .:
            f:lastTransitionTime:
            f:message:
            f:observedGeneration:
            f:reason:
            f:status:
            f:type:
        f:notAfter:
        f:notBefore:
        f:renewalTime:
    Manager:         cert-manager-certificates-readiness
    Operation:       Update
    Subresource:     status
    Time:            2024-03-06T17:58:52Z
  Resource Version:  2128063
  UID:               977eaa8a-1612-489b-a34d-0e78ab113096
Spec:
  Common Name:  example.com
  Dns Names:
    green.nginx.example.com
    blue.nginx.example.com
    nginx.example.com
    example.com
  Is CA:  true
  Issuer Ref:
    Kind:       Issuer
    Name:       cfe-selfsigned-issuer
  Secret Name:  cfe-tls-key-pair
Status:
  Conditions:
    Last Transition Time:  2024-03-06T17:58:52Z
    Message:               Certificate is up to date and has not expired
    Observed Generation:   1
    Reason:                Ready
    Status:                True
    Type:                  Ready
  Not After:               2024-06-04T17:58:52Z
  Not Before:              2024-03-06T17:58:52Z
  Renewal Time:            2024-05-05T17:58:52Z
  Revision:                1
Events:                    &amp;#x3C;none&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Deploy sample Nginx applications&lt;/h3&gt;
&lt;p&gt;In order to configure and validate the Ingress TLS termination, three sample Nginx applications will be deployed to the cluster using the YAML manifest files from the GitHub repo &lt;a href=&quot;https://github.com/GuopingJia/ingress-demo&quot;&gt;ingress-demo&lt;/a&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ tree ingress-demo/
ingress-demo/
├── apps
│   ├── nginx-blue.yaml
│   ├── nginx-green.yaml
│   └── nginx-main.yaml
├── ingress-host-based-selfsigned.yaml
├── ingress-path-based-selfsigned.yaml
└── README.md
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Each YAML manifest file in the folder &lt;em&gt;&apos;apps&apos;&lt;/em&gt; defines the &lt;em&gt;Deployment&lt;/em&gt; and the &lt;em&gt;Service&lt;/em&gt; resource.&lt;/p&gt;
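&lt;p&gt;As an illustration of their structure, &lt;em&gt;nginx-main.yaml&lt;/em&gt; might look roughly like the sketch below: a &lt;em&gt;Deployment&lt;/em&gt; running an Nginx container paired with a &lt;em&gt;ClusterIP&lt;/em&gt; &lt;em&gt;Service&lt;/em&gt; named &lt;em&gt;nginx-main&lt;/em&gt; on port 80. The container image and labels are assumptions here; refer to the GitHub repo for the exact manifests:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-main
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-main
  template:
    metadata:
      labels:
        app: nginx-main
    spec:
      containers:
      - name: nginx
        image: nginx:latest   # assumed image; check the repo for the actual one
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-main
spec:
  type: ClusterIP
  selector:
    app: nginx-main
  ports:
  - port: 80
    targetPort: 80
&lt;/code&gt;&lt;/pre&gt;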
&lt;p&gt;T﻿ype the following commands to deploy those Nginx applications to the namespace &lt;em&gt;nginx-apps&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ cd ingress-demo/
$ kubectl apply -f apps/nginx-main.yaml -n nginx-apps
service/nginx-main created
deployment.apps/nginx-main created
$ kubectl apply -f apps/nginx-green.yaml -n nginx-apps
service/nginx-green created
deployment.apps/nginx-green created
$ kubectl apply -f apps/nginx-blue.yaml -n nginx-apps
service/nginx-blue created
deployment.apps/nginx-blue created
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Type the command shown below to check the details of each application deployment:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get all -n nginx-apps
NAME                              READY   STATUS    RESTARTS   AGE
pod/nginx-blue-78647f4c4b-z8wq9   1/1     Running   0          10s
pod/nginx-green-8956bbd9f-zz7hk   1/1     Running   0          22s
pod/nginx-main-64bfd77895-tf7xd   1/1     Running   0          31s

NAME                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/nginx-blue    ClusterIP   10.108.51.116   &amp;#x3C;none&gt;        80/TCP    15s
service/nginx-green   ClusterIP   10.106.115.65   &amp;#x3C;none&gt;        80/TCP    23s
service/nginx-main    ClusterIP   10.108.33.44    &amp;#x3C;none&gt;        80/TCP    32s

NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-blue    1/1     1            1           15s
deployment.apps/nginx-green   1/1     1            1           24s
deployment.apps/nginx-main    1/1     1            1           32s

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-blue-78647f4c4b   1         1         1       15s
replicaset.apps/nginx-green-8956bbd9f   1         1         1       24s
replicaset.apps/nginx-main-64bfd77895   1         1         1       32s
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Three Nginx services, &lt;em&gt;nginx-main&lt;/em&gt;, &lt;em&gt;nginx-blue&lt;/em&gt; and &lt;em&gt;nginx-green&lt;/em&gt;, are deployed as the &lt;em&gt;ClusterIP&lt;/em&gt; type. They provide internal connectivity and can only be accessed from within the cluster.&lt;/p&gt;
&lt;p&gt;T﻿ype the following command to check that all the application service endpoints have been populated:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get endpoints -n nginx-apps
NAME          ENDPOINTS        AGE
nginx-blue    10.192.3.78:80    1m
nginx-green   10.192.4.45:80    1m
nginx-main    10.192.4.44:80    1m
&lt;/code&gt;&lt;/pre&gt;
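&lt;p&gt;If you want to sanity-check one of the applications before the Ingress is in place, one quick option is &lt;em&gt;kubectl port-forward&lt;/em&gt;, which tunnels a local port to the &lt;em&gt;ClusterIP&lt;/em&gt; service. This is an optional sketch; any of the three services works the same way:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl port-forward svc/nginx-main 8080:80 -n nginx-apps
# In a second terminal, the main Nginx page should be reachable locally:
$ curl http://localhost:8080
&lt;/code&gt;&lt;/pre&gt;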
&lt;h3&gt;Set up Ingress TLS&lt;/h3&gt;
&lt;p&gt;The Ingress resource with TLS has to be created. Here is the sample Ingress TLS resource &lt;em&gt;ingress-host-based-selfsigned.yaml&lt;/em&gt;, available from the GitHub repo &lt;a href=&quot;https://github.com/GuopingJia/ingress-demo&quot;&gt;ingress-demo&lt;/a&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ cat ingress-host-based-selfsigned.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-host-based-selfsigned
  annotations:
    ingress.kubernetes.io/ssl-redirect: &quot;true&quot;
    cert-manager.io/issuer: &quot;nginx-selfsigned-issuer&quot;
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - nginx.example.com
    secretName: cfe-tls-key-pair
  rules:
  - host: nginx.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-main
            port:
              number: 80
  - host: blue.nginx.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-blue
            port:
              number: 80
  - host: green.nginx.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-green
            port:
              number: 80
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In the above sample YAML manifest file, the &lt;em&gt;&apos;tls&apos;&lt;/em&gt; block contains the hostname &lt;em&gt;&apos;nginx.example.com&apos;&lt;/em&gt; and the secret &lt;em&gt;cfe-tls-key-pair&lt;/em&gt; created in the certificate generation step. There is also the &lt;em&gt;&apos;rules&apos;&lt;/em&gt; block, in which a list of routing rules is defined per host, e.g., host &lt;em&gt;nginx.example.com&lt;/em&gt; will be routed to the application service &lt;em&gt;nginx-main&lt;/em&gt; in the backend.&lt;/p&gt;
&lt;p&gt;T﻿ype the following command to deploy the Ingress resource to the namespace &lt;em&gt;nginx-apps&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl apply -f ingress-host-based-selfsigned.yaml -n nginx-apps
ingress.networking.k8s.io/ingress-host-based-selfsigned created
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Check the details of the &lt;em&gt;TLS&lt;/em&gt; and &lt;em&gt;Rules&lt;/em&gt; settings by t﻿yping the command shown below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl describe ingress ingress-host-based-selfsigned -n nginx-apps
Name:             ingress-host-based-selfsigned
Labels:           &amp;#x3C;none&gt;
Namespace:        nginx-apps
Address:
Ingress Class:    nginx
Default backend:  &amp;#x3C;default&gt;
TLS:
  cfe-tls-key-pair terminates nginx.example.com
Rules:
  Host                     Path  Backends
  ----                     ----  --------
  nginx.example.com
                           /   nginx-main:80 (10.192.4.44:80)
  blue.nginx.example.com
                           /   nginx-blue:80 (10.192.3.78:80)
  green.nginx.example.com
                           /   nginx-green:80 (10.192.4.45:80)
Annotations:               cert-manager.io/issuer: nginx-selfsigned-issuer
                           ingress.kubernetes.io/ssl-redirect: true
Events:
  Type    Reason             Age   From                       Message
  ----    ------             ----  ----                       -------
  Normal  Sync               20s   nginx-ingress-controller   Scheduled for sync
  Normal  CreateCertificate  20s   cert-manager-ingress-shim  Successfully created Certificate &quot;cfe-tls-key-pair&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Access deployed Nginx applications&lt;/h3&gt;
&lt;p&gt;Before accessing the deployed Nginx applications, you need to set up the domain and subdomain name resolution. For the sample domain name &lt;em&gt;nginx.example.com&lt;/em&gt; and its subdomains, &lt;em&gt;blue.nginx.example.com&lt;/em&gt; and &lt;em&gt;green.nginx.example.com&lt;/em&gt;, the workstation host file has been used for DNS resolution.&lt;/p&gt;
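&lt;p&gt;As an illustration, the host file entries could look similar to the following, pointing the domain and both subdomains at the external IP address assigned to the Ingress controller (e.g., &lt;em&gt;/etc/hosts&lt;/em&gt; on a Linux workstation):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Sample host file entries (adjust the IP address to your environment)
10.6.115.251   nginx.example.com
10.6.115.251   green.nginx.example.com
10.6.115.251   blue.nginx.example.com
&lt;/code&gt;&lt;/pre&gt;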
&lt;p&gt;Type the following commands to check that this is done correctly:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ host nginx.example.com
nginx.example.com has address 10.6.115.251

$ host green.nginx.example.com
green.nginx.example.com has address 10.6.115.251

$ host blue.nginx.example.com
blue.nginx.example.com has address 10.6.115.251
&lt;/code&gt;&lt;/pre&gt;
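&lt;p&gt;Alternatively, you can check the TLS termination from the command line with &lt;em&gt;curl&lt;/em&gt;, passing &lt;em&gt;-k&lt;/em&gt; to accept the self-signed certificate. This is an optional sketch on top of the browser-based validation described below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# -k (insecure) is needed because the certificate is self-signed
$ curl -k -I https://nginx.example.com
$ curl -k -I https://green.nginx.example.com
$ curl -k -I https://blue.nginx.example.com
&lt;/code&gt;&lt;/pre&gt;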
&lt;p&gt;You can then validate the Ingress TLS configuration of the deployed Nginx applications using the browser.&lt;/p&gt;
&lt;p&gt;S﻿tart the browser and type the URL &lt;em&gt;nginx.example.com&lt;/em&gt;. It will be redirected over HTTPS with the warning message &lt;em&gt;&apos;Your connection is not private&apos;&lt;/em&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/nginx-main-warning.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;This is because the certificate configured in the K8s Ingress resource is a self-signed certificate generated by cert-manager.&lt;/p&gt;
&lt;p&gt;C﻿lick &lt;em&gt;Not secure&lt;/em&gt; and start the Certificate Viewer to check the certificate:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/nginx-main-cert.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;C﻿lick &lt;em&gt;Proceed to nginx.example.com (unsafe)&lt;/em&gt;. You will then go to the Nginx &lt;em&gt;MAIN&lt;/em&gt; page:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/nginx-main.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Type the URL &lt;em&gt;green.nginx.example.com&lt;/em&gt; into the browser. It will be redirected over HTTPS with the same warning message &lt;em&gt;&apos;Your connection is not private&apos;&lt;/em&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/nginx-green-warning.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;C﻿lick &lt;em&gt;Proceed to green.nginx.example.com (unsafe)&lt;/em&gt;. You will then go to the Nginx &lt;em&gt;GREEN&lt;/em&gt; page:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/nginx-green.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The same thing occurs when you type the URL &lt;em&gt;blue.nginx.example.com&lt;/em&gt; into the browser. The access will be redirected over HTTPS with the same warning message &lt;em&gt;&apos;Your connection is not private&apos;&lt;/em&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/nginx-blue-warning.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;C﻿lick &lt;em&gt;Proceed to blue.nginx.example.com (unsafe)&lt;/em&gt;. You will then go to the Nginx &lt;em&gt;BLUE&lt;/em&gt; page:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/nginx-blue.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;You have successfully configured the Ingress with the generated TLS c﻿ertificate and exposed the deployed applications with TLS termination.&lt;/p&gt;
&lt;h3&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;This blog post provided a comprehensive guide on how to expose applications and make them accessible securely via HTTPS in a K8s cluster in HPE GreenLake for Private Cloud Enterprise. It detailed the process of configuring TLS termination on an Ingress controller, utilizing a K8s Ingress resource and a self-signed TLS certificate generated with cert-manager. While the emphasis of this post was on self-signed certificates, the outlined procedure is equally applicable to any type of certificate. This flexibility allows customers to follow the steps using their own CA certificates or any commercially issued certificates for Ingress TLS termination, ensuring secure exposure of their applications in the K8s cluster over HTTPS.&lt;/p&gt;
&lt;p&gt;Please keep coming back to the &lt;a href=&quot;https://developer.hpe.com/blog/&quot;&gt;HPE Developer Community blog&lt;/a&gt; to learn more about HPE GreenLake for Private Cloud Enterprise.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Announcing GenAI studio: Your generative AI playground built on Determined]]></title><description><![CDATA[E﻿xternal Blog]]></description><link>https://developer.hpe.com/announcing-genai-studio-your-generative-ai-playground-built-on-determined/</link><guid isPermaLink="false">https://developer.hpe.com/announcing-genai-studio-your-generative-ai-playground-built-on-determined/</guid><pubDate>Wed, 13 Mar 2024 22:07:13 GMT</pubDate><content:encoded>&lt;p&gt;E﻿xternal Blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE Athonet LLM Platform: From personal assistant to collaborative corporate tool]]></title><description><![CDATA[As introduced in the previous blog post "The transformative impact of generative AI on Telco products, HPE Athonet is building a product…]]></description><link>https://developer.hpe.com/hpe-athonet-llm-platform-first-pillar-from-personal-assistant-to-collaborative-corporate-tool/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-athonet-llm-platform-first-pillar-from-personal-assistant-to-collaborative-corporate-tool/</guid><pubDate>Wed, 13 Mar 2024 08:17:11 GMT</pubDate><content:encoded>&lt;p&gt;As introduced in the previous blog post &quot;&lt;a href=&quot;https://developer.hpe.com/blog/the-transformative-impact-of-generative-ai-on-telco-products/&quot;&gt;The transformative impact of generative AI on Telco products&lt;/a&gt;, HPE Athonet is building a product based on three pillars:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Transitioning to a team-centric collaborative framework.&lt;/li&gt;
&lt;li&gt;Enhancing user experience to promote productivity and task focus.&lt;/li&gt;
&lt;li&gt;Developing a versatile architecture using data mesh principles that also include governance of security and ethical aspects.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Together, these elements represent our commitment not just to technological innovation, but also to fostering a genuine digital transformation in network management using generative AI. This installment delves deeper into the first foundational principles underpinning our strategy: specifically, the shift from utilizing isolated tools to embracing a holistic, team-centric collaborative framework.&lt;/p&gt;
&lt;p&gt;Sam Altman, CEO of OpenAI, is on a mission to develop an exceptional personal assistant aimed at enhancing individual learning and support. While this initiative is great, HPE Athonet has found that transitioning from a business-to-consumer (B2C) to a business-to-business (B2B) model necessitates a significant shift in focus from serving individuals to empowering teams. So, our team introduced the following &lt;strong&gt;innovation of meaning&lt;/strong&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/athon_col_tool.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The rationale behind this shift is the conviction that teamwork is fundamental to organizational success. The concern is that if this tool is used solely on an individual basis, it might inadvertently amplify personal biases, complicating communication within teams. By &lt;strong&gt;emphasizing teamwork&lt;/strong&gt;, one signal&apos;s a broader perspective on how this technology should be utilized.&lt;/p&gt;
&lt;p&gt;To clarify, there are several global initiatives to address and reduce biases in LLMs, particularly those related to data, like the demographic’s ones. These biases can often be minimized or managed using LLMs themselves, employing strategies such as guardrails or reinforcement learning from human feedback methods. Adhering to the &lt;strong&gt;HPE AI Ethics principles&lt;/strong&gt; commits to mitigating these biases.&lt;/p&gt;
&lt;p&gt;It is recognized, however, that there is a distinct category: the cognitive biases, which, unlike data biases, receive less focus in academic research and have fewer mitigation strategies available. Cognitive biases, such as confirmation bias, have profound effects on human interaction, potentially leading to polarization also within small social units like teams or departments. In response to this challenge, HPE Athonet is dedicating efforts towards understanding and addressing cognitive biases. The team is fostering environments that &lt;strong&gt;encourage diverse perspectives by creating &apos;shared memory&apos; spaces&lt;/strong&gt; within projects. In these spaces, employees can contribute with their individual chat memories, facilitating the collection of varied viewpoints and promoting openness and dialogue.&lt;/p&gt;
&lt;p&gt;In B2B environments, the transfer of employee knowledge into digital tools is crucial, underlining the significance of the design and presentation of these tools to the workforce. Historically, a common approach towards AI in business has been to automate processes and reduce costs, often at the expense of human resources. HPE Athonet&apos;s priority diverges from this path; we prioritize enhancing the intrinsic value of our products, focusing not just on reducing expenses but on adding qualitative benefits.&lt;/p&gt;
&lt;p&gt;Despite the advanced nature and associated costs of these technologies, they are viewed as vital investments in promoting collaboration and creativity within teams. The objective is to &lt;strong&gt;empower individuals&lt;/strong&gt; to work more efficiently and to &lt;strong&gt;foster innovation through improved teamwork&lt;/strong&gt;. By leveraging AI&apos;s unique capabilities, HPE Athonet aims to augment and elevate human efforts, rather than replace them.&lt;/p&gt;
&lt;p&gt;While it is acknowledged that eliminating biases from LLMs is unfeasible, we believe that evolving towards a more collaborative tool can foster an &lt;strong&gt;&apos;anti-fragile&apos; system&lt;/strong&gt; better equipped to manage and mitigate these biases.&lt;/p&gt;
&lt;p&gt;This blog highlights the initial foundational pillar shaping HPE Athonet&apos;s upcoming product: the shift towards a team-centric collaborative framework to leverage generative AI potential. It marks the initial step in a strategy that also aims to enhance user experience and develop a secure, ethical, and versatile architecture. These initiatives underscore HPE Athonet&apos;s commitment to innovation and the digital transformation of network management. Stay tuned for upcoming posts on the other pillars.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[AI News #14]]></title><description><![CDATA[E﻿xternal blog post]]></description><link>https://developer.hpe.com/ai-news-14/</link><guid isPermaLink="false">https://developer.hpe.com/ai-news-14/</guid><pubDate>Mon, 11 Mar 2024 17:04:26 GMT</pubDate><content:encoded>&lt;p&gt;E﻿xternal blog post&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Generating self-signed certificates using cert-manager for Kubernetes in HPE GreenLake for Private Cloud Enterprise]]></title><description><![CDATA[This blog post describes the details steps on how to generate a self-signed certificate using cert-manager for Kubernetes (K8s) in HPE…]]></description><link>https://developer.hpe.com/generating-self-signed-certificates-using-cert-manager-for-kubernetes-in-hpe-greenlake-for-private-cloud-entreprise/</link><guid isPermaLink="false">https://developer.hpe.com/generating-self-signed-certificates-using-cert-manager-for-kubernetes-in-hpe-greenlake-for-private-cloud-entreprise/</guid><pubDate>Mon, 11 Mar 2024 16:19:11 GMT</pubDate><content:encoded>&lt;style&gt; li { font-size: 27px; line-height: 33px; max-width: none; } &lt;/style&gt;
&lt;p&gt;This blog post describes the detailed steps on how to generate a self-signed certificate using cert-manager for Kubernetes (K8s) in HPE GreenLake for Private Cloud Enterprise. The generated self-signed certificates can be used by DevOps teams and developers to configure Transport Layer Security (TLS) termination and expose applications deployed in the K8s cluster securely via HTTPS.&lt;/p&gt;
&lt;h3&gt;Overview&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/greenlake/containers.html&quot;&gt;HPE GreenLake for Private Cloud Enterprise: Containers&lt;/a&gt;, one of the HPE GreenLake cloud services available on the HPE GreenLake for Private Cloud Enterprise, allows customers to create a K8s cluster, view details about existing clusters, and deploy containerized applications to the cluster. It provides an enterprise-grade container management service using open source K8s.&lt;/p&gt;
&lt;p&gt;Once applications are deployed in a cluster, a common requirement is to expose the applications so that they can be securely accessed over HTTPS. This requires getting a valid SSL/TLS certificate in K8s. Generating and managing SSL/TLS certificates in K8s is not always easy. There is a list of popular tools and utilities, e.g., &lt;a href=&quot;https://www.openssl.org/&quot;&gt;OpenSSL&lt;/a&gt;, &lt;a href=&quot;https://github.com/cloudflare/cfssl&quot;&gt;CloudFlare’s CFSSL&lt;/a&gt;, &lt;a href=&quot;https://github.com/OpenVPN/easy-rsa&quot;&gt;OpenVPN’s Easy-RSA&lt;/a&gt;, etc., that you can use for generating certificates.&lt;/p&gt;
&lt;p&gt;However, you still need to follow that up with creating the root certificate authorities, generating certificate signing requests (CSRs), and signing the certificates. The process to generate those items is not very intuitive. More often than not, it requires the help of a &lt;em&gt;DevOps&lt;/em&gt; engineer as well as assistance from different teams involved in installing and configuring the certificate chain.&lt;/p&gt;
&lt;p&gt;This blog post describes the detailed steps involved in the process of generating a &lt;strong&gt;self-signed&lt;/strong&gt; certificate using cert-manager for K8s in HPE GreenLake for Private Cloud Enterprise. Cert-manager integrates seamlessly with K8s for automated handling of certificates. It aligns well with the K8s resource model. This makes cert-manager a native and powerful solution for creating and managing certificates within K8s clusters.&lt;/p&gt;
&lt;h3&gt;Prerequisites&lt;/h3&gt;
&lt;p&gt;Before starting, make sure you have the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A K8s cluster provisioned in HPE GreenLake for Private Cloud Enterprise&lt;/li&gt;
&lt;li&gt;The &lt;em&gt;kubectl&lt;/em&gt; CLI tool, together with the kubeconfig file for accessing the K8s cluster&lt;/li&gt;
&lt;li&gt;The optional &lt;em&gt;openssl&lt;/em&gt; CLI tool, for validating the generated certificate&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Cert-manager&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://cert-manager.io/&quot;&gt;Cert-manager&lt;/a&gt;, a popular open source certificate management add-on designed to work with K8s, streamlines the process of acquiring, renewing, and utilizing SSL/TLS certificates within a K8s cluster. When deployed in a K8s cluster, cert-manager introduces two custom resource definitions (CRDs): &lt;em&gt;Issuer&lt;/em&gt; and &lt;em&gt;Certificate&lt;/em&gt;. These CRDs automate the generation and renewal of certificates for various scenarios in K8s. Cert-manager can obtain certificates from a variety of certificate authorities (CAs), including &lt;em&gt;Let’s Encrypt&lt;/em&gt;, &lt;em&gt;HashiCorp Vault&lt;/em&gt;, and &lt;em&gt;private PKIs&lt;/em&gt;. It can also be configured to generate self-signed certificates if needed. When cert-manager creates a certificate, it makes it available to the entire cluster by storing the certificate as a K8s &lt;em&gt;Secret&lt;/em&gt; object, which can be mounted by application Pods or used by an Ingress controller. This makes the certificate accessible across all namespaces within the K8s cluster.&lt;/p&gt;
&lt;h3&gt;Generate a self-signed certificate&lt;/h3&gt;
&lt;h4&gt;Install cert-manager&lt;/h4&gt;
&lt;p&gt;As shown on the &lt;a href=&quot;https://cert-manager.io/docs/installation/&quot;&gt;cert-manager installation page&lt;/a&gt;, cert-manager can be installed by typing the following &lt;em&gt;kubectl apply&lt;/em&gt; command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.3/cert-manager.yaml
namespace/cert-manager created
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
serviceaccount/cert-manager-cainjector created
serviceaccount/cert-manager created
serviceaccount/cert-manager-webhook created
configmap/cert-manager created
configmap/cert-manager-webhook created
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrole.rbac.authorization.k8s.io/cert-manager-cluster-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrole.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
role.rbac.authorization.k8s.io/cert-manager:leaderelection created
role.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
service/cert-manager created
service/cert-manager-webhook created
deployment.apps/cert-manager-cainjector created
deployment.apps/cert-manager created
deployment.apps/cert-manager-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The latest cert-manager release, &lt;em&gt;v1.14.3&lt;/em&gt;, will be installed into the namespace &lt;em&gt;cert-manager&lt;/em&gt;. Type the following command to check that all the Pods are showing a Running status:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get all -n cert-manager
NAME                                           READY   STATUS    RESTARTS   AGE
pod/cert-manager-6bcdd5f7c-f7lfw               1/1     Running   0          3m36s
pod/cert-manager-cainjector-5d4577b4d9-jmpsp   1/1     Running   0          3m36s
pod/cert-manager-webhook-bf957dc77-s9r2g       1/1     Running   0          3m36s

NAME                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/cert-manager           ClusterIP   10.109.28.203   &amp;#x3C;none&gt;        9402/TCP   3m39s
service/cert-manager-webhook   ClusterIP   10.100.82.119   &amp;#x3C;none&gt;        443/TCP    3m38s

NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/cert-manager              1/1     1            1           3m37s
deployment.apps/cert-manager-cainjector   1/1     1            1           3m38s
deployment.apps/cert-manager-webhook      1/1     1            1           3m37s

NAME                                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/cert-manager-6bcdd5f7c               1         1         1       3m38s
replicaset.apps/cert-manager-cainjector-5d4577b4d9   1         1         1       3m39s
replicaset.apps/cert-manager-webhook-bf957dc77       1         1         1       3m38s
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As part of the cert-manager installation, a list of cert-manager related CRDs has been added to the cluster:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get crds | grep cert-manager
certificaterequests.cert-manager.io                                 2024-02-02T15:42:53Z
certificates.cert-manager.io                                        2024-02-02T15:42:53Z
challenges.acme.cert-manager.io                                     2024-02-02T15:42:54Z
clusterissuers.cert-manager.io                                      2024-02-02T15:42:55Z
issuers.cert-manager.io                                             2024-02-02T15:42:55Z
orders.acme.cert-manager.io                                         2024-02-02T15:42:56Z
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Create an Issuer&lt;/h4&gt;
&lt;p&gt;An &lt;em&gt;Issuer&lt;/em&gt; in cert-manager is a K8s CRD resource that represents a certificate authority (CA) able to generate signed certificates by honoring certificate signing requests (CSRs). All cert-manager certificates require a referenced issuer that is in a ready condition to attempt to honor the request.&lt;/p&gt;
&lt;p&gt;Here is a self-signed issuer YAML manifest file &lt;em&gt;issuer-selfsigned.yaml&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ cat issuer-selfsigned.yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
 name: cfe-selfsigned-issuer
spec:
 selfSigned: {}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Type the following commands to create a namespace in which you want to generate certificates and deploy the CRD Issuer resource to this namespace. Replace the sample namespace &lt;em&gt;cfe-apps&lt;/em&gt; in the commands with your own namespace.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl create ns cfe-apps
namespace/cfe-apps created

$ kubectl apply -f issuer-selfsigned.yaml -n cfe-apps
issuer.cert-manager.io/cfe-selfsigned-issuer created
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Type the following command to check the deployed issuer in the namespace. The issuer should show &lt;em&gt;READY&lt;/em&gt; as &lt;em&gt;&lt;strong&gt;True&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get issuer -n cfe-apps
NAME                    READY   AGE
cfe-selfsigned-issuer   True    7s
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you want to be able to request certificates from any namespace in a cluster, use the CRD resource called &lt;em&gt;ClusterIssuer&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Here is a sample &lt;em&gt;ClusterIssuer&lt;/em&gt; YAML manifest file &lt;em&gt;clusterissuer.yaml&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ cat clusterissuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-cluster-issuer
spec:
  selfSigned: {}
&lt;/code&gt;&lt;/pre&gt;
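&lt;p&gt;Because a &lt;em&gt;ClusterIssuer&lt;/em&gt; is a cluster-scoped resource, it is applied without a namespace. A minimal sketch of deploying and checking it, assuming the manifest above, might look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# ClusterIssuer is cluster-scoped, so no -n namespace flag is needed
$ kubectl apply -f clusterissuer.yaml

# Check that the ClusterIssuer reports READY as True
$ kubectl get clusterissuer
&lt;/code&gt;&lt;/pre&gt;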
&lt;h4&gt;Generate a certificate&lt;/h4&gt;
&lt;p&gt;You can use the CRD resource &lt;em&gt;Certificate&lt;/em&gt; to generate a self-signed certificate.&lt;/p&gt;
&lt;p&gt;Here is a sample &lt;em&gt;Certificate&lt;/em&gt; YAML manifest file &lt;em&gt;certificate.yaml&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ cat certificate.yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
 name: cfe-selfsigned-tls
spec:
 secretName: cfe-tls-key-pair
 isCA: true
 issuerRef:
   name: cfe-selfsigned-issuer
   kind: Issuer
 commonName: &quot;example.com&quot;
 dnsNames:
 - nginx.example.com
 - example.com
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In this YAML file, the &lt;em&gt;commonName&lt;/em&gt; is set to a sample domain &lt;em&gt;&apos;example.com&apos;&lt;/em&gt;. The &lt;em&gt;dnsNames&lt;/em&gt; list includes &lt;em&gt;&apos;example.com&apos;&lt;/em&gt; and its subdomain &lt;em&gt;&apos;nginx.example.com&apos;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Cert-manager supports the generation of wildcard certificates, e.g., using &apos;*.example.com&apos;, which allows one to secure multiple subdomains under a single certificate. Wildcard certificates cover all subdomains under the specified domain. You need to be cautious when using them, as they grant access to any subdomain matching the pattern.&lt;/p&gt;
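&lt;p&gt;As an illustration, a wildcard variant of the &lt;em&gt;Certificate&lt;/em&gt; above might look like the following sketch. The resource and secret names here are hypothetical, and it reuses the &lt;em&gt;cfe-selfsigned-issuer&lt;/em&gt; created earlier:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ cat certificate-wildcard.yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
 name: cfe-selfsigned-wildcard-tls
spec:
 secretName: cfe-wildcard-tls-key-pair
 isCA: true
 issuerRef:
   name: cfe-selfsigned-issuer
   kind: Issuer
 commonName: &quot;example.com&quot;
 dnsNames:
 - &quot;*.example.com&quot;
 - example.com
&lt;/code&gt;&lt;/pre&gt;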
&lt;p&gt;Type the following command to generate the certificate in the namespace &lt;em&gt;cfe-apps&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl apply -f certificate.yaml -n cfe-apps
certificate.cert-manager.io/cfe-selfsigned-tls created
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Check the generated certificate in the namespace &lt;em&gt;cfe-apps&lt;/em&gt; by typing the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get certificate -n cfe-apps
NAME                 READY   SECRET             AGE
cfe-selfsigned-tls   True    cfe-tls-key-pair   23s
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The K8s secret &lt;em&gt;cfe-tls-key-pair&lt;/em&gt; will be created automatically in the same namespace as part of the certificate deployment. Type the command shown below to check it:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get secrets -n cfe-apps cfe-tls-key-pair
NAME               TYPE                DATA   AGE
cfe-tls-key-pair   kubernetes.io/tls   3      52s
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The secret &lt;em&gt;cfe-tls-key-pair&lt;/em&gt; contains 3 keys, &lt;em&gt;ca.crt&lt;/em&gt;, &lt;em&gt;tls.crt&lt;/em&gt;, and &lt;em&gt;tls.key&lt;/em&gt;, which can be checked using the option &lt;strong&gt;-o yaml&lt;/strong&gt; in the above &lt;em&gt;get secrets&lt;/em&gt; command.&lt;/p&gt;
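&lt;p&gt;If you only want to list the key names without printing the base64-encoded certificate data, a quick sketch, assuming the &lt;em&gt;jq&lt;/em&gt; utility is available, is:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Print only the key names stored in the TLS secret (requires the jq utility)
$ kubectl get secret -n cfe-apps cfe-tls-key-pair -o json | jq -r &apos;.data | keys[]&apos;
ca.crt
tls.crt
tls.key
&lt;/code&gt;&lt;/pre&gt;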
&lt;h3&gt;Test the certificate&lt;/h3&gt;
&lt;p&gt;Type the following &lt;em&gt;openssl&lt;/em&gt; command to check the generated certificate:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ openssl x509 -in &amp;#x3C;(kubectl get secret -n cfe-apps cfe-tls-key-pair -o jsonpath=&apos;{.data.tls\.crt}&apos; | base64 -d) -text -noout
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            2a:2a:5d:0f:d1:e2:6f:60:3e:8a:93:4f:f4:e8:52:1e
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN = example.com
        Validity
            Not Before: Feb 21 14:17:18 2024 GMT
            Not After : May 21 14:17:18 2024 GMT
        Subject: CN = example.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                RSA Public-Key: (2048 bit)
                Modulus:
                    00:b7:7d:95:7f:55:a7:32:fd:66:b2:78:c0:2b:1f:
                    1f:69:c6:de:1f:85:eb:fb:2b:69:f3:60:23:df:9d:
                    3e:3d:41:df:c9:6b:b0:92:80:fe:6a:6f:19:4d:61:
                    20:3e:fc:19:af:f1:1d:5e:f6:b6:4f:17:5d:76:99:
                    3f:f4:d3:4a:70:15:f8:d5:3e:02:5c:c4:29:32:75:
                    cd:e3:5a:07:7d:ea:47:71:37:3b:3d:36:89:36:e5:
                    8f:0e:03:57:ab:99:b3:6d:47:67:8a:6b:3b:2b:61:
                    b0:08:96:a6:a2:5d:46:ed:ee:f3:5a:e3:6b:1d:05:
                    08:f1:ab:1b:ea:49:a3:2f:0d:82:37:80:76:00:18:
                    77:99:39:08:2e:06:54:28:24:e2:c8:9f:48:9c:ec:
                    75:0e:5e:a6:7b:ce:0b:68:96:d1:1a:4e:56:e1:ca:
                    42:ab:8e:11:a8:37:e1:70:ae:25:e3:2f:26:f1:7c:
                    95:fa:da:48:57:1f:a3:d7:47:84:86:9d:76:b3:99:
                    a5:ef:10:98:96:31:ee:32:31:05:bc:5a:c0:94:bd:
                    25:ba:d6:86:32:d1:a6:3e:8c:21:99:a8:96:d6:5d:
                    69:35:01:8e:4f:d8:e9:90:78:17:ce:ac:4a:f8:13:
                    59:9b:e3:a8:9b:59:cc:c6:5f:5b:ca:6c:73:5e:e6:
                    88:f9
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment, Certificate Sign
            X509v3 Basic Constraints: critical
                CA:TRUE
            X509v3 Subject Key Identifier:
                53:55:6D:56:AA:75:E2:87:9E:BB:C2:C7:45:32:2F:E3:1C:FF:17:62
            X509v3 Subject Alternative Name:
                DNS:nginx.example.com, DNS:example.com
    Signature Algorithm: sha256WithRSAEncryption
         69:e4:ae:bb:15:c1:d7:1a:54:49:10:6b:04:f9:1b:ed:bf:64:
         0f:da:5e:b8:c2:e7:e2:d9:45:9e:66:92:0f:ce:f5:c9:5f:aa:
         b3:28:36:cd:16:da:6a:60:7f:eb:1d:85:fe:3a:38:65:71:0f:
         eb:da:e8:9e:1b:dc:f5:b7:14:4f:70:00:fd:bf:44:ed:37:35:
         bc:67:c7:4f:68:bc:5e:3b:bd:64:aa:5c:cd:1a:4f:11:90:c4:
         6f:6a:d2:4b:90:4c:25:e7:ab:83:12:d7:38:b1:bf:70:8c:d5:
         cc:cb:70:70:b6:de:dc:8f:66:21:42:88:d5:7e:59:5f:6e:83:
         73:81:e4:63:57:d1:c6:63:c0:9a:49:09:44:b5:d0:33:6b:3b:
         fd:3e:e4:c7:b7:d4:e4:72:0d:36:cf:a8:31:26:e3:ce:55:9f:
         46:b8:fd:ab:7c:cc:2a:4b:e2:a6:a5:cd:2f:0c:3a:b1:2d:84:
         1a:51:8b:e8:73:0f:cb:49:2e:a2:a6:ed:d5:e2:e8:cf:79:44:
         b9:2b:00:03:86:1a:a6:33:d4:20:33:9c:04:71:43:2d:9c:66:
         3b:13:9b:6f:9f:f6:5f:f2:e0:e4:4a:04:64:c3:e6:bd:78:18:
         19:22:d9:98:b5:47:85:0d:bd:b6:56:44:e6:89:34:30:90:20:
         36:63:4f:1e
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The line &lt;em&gt;X509v3 Subject Alternative Name&lt;/em&gt; contains the &lt;em&gt;dnsNames&lt;/em&gt; specified in the YAML file &lt;em&gt;certificate.yaml&lt;/em&gt; during the certificate generation.&lt;/p&gt;
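&lt;p&gt;As an additional, optional check, you can verify the certificate against the CA bundle stored in the same secret. This is a minimal sketch, assuming a bash-compatible shell for the process substitution syntax; for this self-signed, CA-enabled certificate the command should report &lt;em&gt;OK&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Verify tls.crt against the ca.crt stored in the same secret
$ openssl verify -CAfile &amp;#x3C;(kubectl get secret -n cfe-apps cfe-tls-key-pair -o jsonpath=&apos;{.data.ca\.crt}&apos; | base64 -d) &amp;#x3C;(kubectl get secret -n cfe-apps cfe-tls-key-pair -o jsonpath=&apos;{.data.tls\.crt}&apos; | base64 -d)
&lt;/code&gt;&lt;/pre&gt;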
&lt;h3&gt;Integrate certificate with applications&lt;/h3&gt;
&lt;p&gt;There are several ways to integrate the generated certificates into applications deployed in the K8s cluster and configure applications to be accessed securely over HTTPS.&lt;/p&gt;
&lt;p&gt;The simplest way is to create a K8s &lt;em&gt;Deployment&lt;/em&gt; resource that mounts the generated TLS secret into the container and exposes the HTTPS &lt;em&gt;containerPort&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Here is one sample Nginx Deployment YAML manifest file &lt;em&gt;nginx-deployment.yaml&lt;/em&gt; that integrates the generated certificate:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ cat nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 443
        volumeMounts:
        - name: tls-certs
          mountPath: /etc/nginx/ssl
          readOnly: true
      volumes:
      - name: tls-certs
        secret:
          secretName: cfe-tls-key-pair
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;By exposing &lt;em&gt;containerPort&lt;/em&gt; &lt;em&gt;443&lt;/em&gt; and mounting the generated K8s secret &lt;em&gt;cfe-tls-key-pair&lt;/em&gt; as a volume, the certificate and private key become available inside the container, so the Nginx application can be configured to terminate TLS itself.&lt;/p&gt;
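&lt;p&gt;Note that the Nginx image does not pick up these files automatically; the server must be told where to find them. A minimal, illustrative server block (for example, supplied through a ConfigMap mounted as &lt;em&gt;/etc/nginx/conf.d/ssl.conf&lt;/em&gt;, a name chosen here only for the sketch) could look like this, using the paths from the &lt;em&gt;volumeMount&lt;/em&gt; above:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Illustrative Nginx TLS configuration referencing the mounted secret files
server {
    listen 443 ssl;
    server_name nginx.example.com;

    ssl_certificate     /etc/nginx/ssl/tls.crt;
    ssl_certificate_key /etc/nginx/ssl/tls.key;

    location / {
        root /usr/share/nginx/html;
    }
}
&lt;/code&gt;&lt;/pre&gt;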
&lt;p&gt;Another way to integrate the certificate is to configure it using the K8s &lt;em&gt;Ingress&lt;/em&gt; resource with TLS parameters. This configuration requires a working Ingress controller setup in the cluster. There are several Ingress controllers you can deploy in the cluster, such as &lt;a href=&quot;https://doc.traefik.io/traefik/providers/kubernetes-ingress/&quot;&gt;Traefik&lt;/a&gt;, &lt;a href=&quot;https://github.com/haproxytech/kubernetes-ingress#readme&quot;&gt;HAProxy&lt;/a&gt;, and the &lt;a href=&quot;https://www.nginx.com/products/nginx-ingress-controller/&quot;&gt;Nginx Ingress controller&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Here is one such sample Ingress YAML manifest file &lt;em&gt;ingress-nginx-selfsigned.yaml&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ cat ingress-nginx-selfsigned.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress-selfsigned
  annotations:
    ingress.kubernetes.io/ssl-redirect: &quot;true&quot;
    cert-manager.io/issuer: &quot;cfe-selfsigned-issuer&quot;
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - nginx.example.com
    secretName: cfe-tls-key-pair
  rules:
  - host: nginx.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-app
            port:
              number: 80
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It assumes the Nginx Ingress controller is deployed in the cluster. It configures the TLS block with the hostname &lt;em&gt;&apos;nginx.example.com&apos;&lt;/em&gt; and the generated K8s secret.&lt;/p&gt;
&lt;p&gt;One benefit of this approach is that the sample Nginx application can be deployed in the cluster with the default service type &lt;em&gt;ClusterIP&lt;/em&gt;, which provides internal connectivity and can only be accessed from within the cluster. The Ingress controller provides external access and handles SSL by accessing the certificate in the cluster and routing the traffic to the Nginx application deployed in the backend.&lt;/p&gt;
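&lt;p&gt;The Ingress above references a backend &lt;em&gt;Service&lt;/em&gt; named &lt;em&gt;nginx-app&lt;/em&gt; on port &lt;em&gt;80&lt;/em&gt;, which is not shown in this post. A minimal sketch of such a &lt;em&gt;ClusterIP&lt;/em&gt; Service, assuming the Nginx Pods carry the &lt;em&gt;app: nginx-app&lt;/em&gt; label used in the Deployment example and serve plain HTTP on port 80 while the Ingress controller terminates TLS, might look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ cat nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-app
spec:
  type: ClusterIP
  selector:
    app: nginx-app
  ports:
  - name: http
    port: 80
    targetPort: 80
&lt;/code&gt;&lt;/pre&gt;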
&lt;p&gt;Type the following command to deploy the Ingress resource to the namespace &lt;em&gt;cfe-apps&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl apply -f ingress-nginx-selfsigned.yaml -n cfe-apps
ingress.networking.k8s.io/nginx-ingress-selfsigned created
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After deploying the Ingress using the above command, together with the Nginx application Deployment, to the namespace &lt;em&gt;cfe-apps&lt;/em&gt;, you can validate the Ingress TLS configuration using a browser.&lt;/p&gt;
&lt;p&gt;Start the browser and type the URL &lt;em&gt;nginx.example.com&lt;/em&gt;. The request will be redirected over HTTPS with the warning message &lt;em&gt;&apos;Your connection is not private&apos;&lt;/em&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/nginx-private.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can click &lt;em&gt;Not secure&lt;/em&gt; and start the Certificate Viewer to check the TLS certificate before clicking &lt;em&gt;Proceed to nginx.example.com (unsafe)&lt;/em&gt; to go to the Nginx page:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/nginx-cert.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
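&lt;p&gt;If you prefer the command line to a browser, a similar check can be done with &lt;em&gt;curl&lt;/em&gt;. The sketch below uses &lt;em&gt;-k&lt;/em&gt; to accept the self-signed certificate and &lt;em&gt;--resolve&lt;/em&gt; to point the hostname at the external IP of your Ingress controller (replace the placeholder with your own value):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Replace &amp;#x3C;INGRESS_IP&gt; with the external IP address of the Ingress controller
$ curl -k --resolve nginx.example.com:443:&amp;#x3C;INGRESS_IP&gt; https://nginx.example.com/
&lt;/code&gt;&lt;/pre&gt;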
&lt;h3&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;This blog post described the steps to generate a self-signed certificate using cert-manager for K8s in HPE GreenLake for Private Cloud Enterprise. Self-signed certificates provide an easy way to prove your own identity for the applications deployed in a K8s cluster. This is a good option for development and testing environments. However, because self-signed certificates are not trusted certificates, they should not be used for production applications. For production use cases, you can try out cert-manager with &lt;a href=&quot;https://letsencrypt.org/&quot;&gt;Let&apos;s Encrypt&lt;/a&gt;. You can refer to the &lt;a href=&quot;https://cert-manager.io/docs/&quot;&gt;cert-manager documentation&lt;/a&gt; to learn how to use it with the different types of &lt;em&gt;Let’s Encrypt&lt;/em&gt; challenges, as well as with certificate sources other than &lt;em&gt;Let’s Encrypt&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Please keep coming back to the &lt;a href=&quot;https://developer.hpe.com/blog/&quot;&gt;HPE Developer Community blog&lt;/a&gt; to learn more about HPE GreenLake for Private Cloud Enterprise.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Getting started with the HPE GreenLake Developer Portal]]></title><description><![CDATA[You know about the HPE Developer Community portal that provides an entry point to everything you need to know about HPE from a software…]]></description><link>https://developer.hpe.com/getting-started-with-the-hpe-greenlake-developer-portal/</link><guid isPermaLink="false">https://developer.hpe.com/getting-started-with-the-hpe-greenlake-developer-portal/</guid><pubDate>Fri, 08 Mar 2024 17:37:23 GMT</pubDate><content:encoded>&lt;p&gt;You know about the &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE Developer Community portal&lt;/a&gt; that provides an entry point to everything you need to know about HPE from a software developer and  IT Ops standpoint, including how to access APIs (application programming interfaces), SDKs (software development kits), and training. The HPE Developer Community portal covers a wide range of HPE products, from HPE OneView, iLO, and Cray to the HPE GreenLake platform.&lt;/p&gt;
&lt;p&gt;From the HPE Developer Community portal, you can access the HPE GreenLake-specific developer portal where you can find the API documentation along with some trial capabilities. &lt;/p&gt;
&lt;p&gt;When reaching the &lt;a href=&quot;https://developer.greenlake.hpe.com/&quot;&gt;HPE GreenLake Developer Portal&lt;/a&gt;, you are presented with three tiles that allow you to make the best use of the portal.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/blog-greenlake-dev-portal1.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Guides: Learn how to get started using the HPE GreenLake Developer Portal&lt;/h2&gt;
&lt;p&gt;The minimal requirement for starting to use the portal is to have an HPE user account or an HPE GreenLake account. To leverage the HPE GreenLake platform APIs, the user will need to obtain API client credentials for the HPE GreenLake platform. With the API client credentials, the user will be able to create an access token.&lt;/p&gt;
&lt;p&gt;One cannot generate an access token without having an HPE account associated with a workspace. You will be guided through all the different steps to create the account either through the HPE GreenLake Developer Portal or through the HPE GreenLake portal.&lt;/p&gt;
&lt;p&gt;It is important to note that some content on the portal is private or only visible to a restricted group (for example, partners or developers). Within each group, several roles are identified and have specific privileges:&lt;/p&gt;
&lt;p&gt; &lt;em&gt;Public&lt;/em&gt; &lt;/p&gt;
&lt;p&gt;Guest: Every visitor will have this role with a single permission to read all public content on the portal. &lt;/p&gt;
&lt;p&gt;Authenticated-User: Every logged-in user will have this role. &lt;/p&gt;
&lt;p&gt;&lt;em&gt;Partner&lt;/em&gt; &lt;/p&gt;
&lt;p&gt;Users with the Partner role can access restricted content that is specific to a Partner.  &lt;/p&gt;
&lt;h2&gt;Services:  Discover the HPE GreenLake platform APIs  &lt;/h2&gt;
&lt;p&gt;Clicking on the second tile (Services) provides you with access to the HPE GreenLake APIs. But, as mentioned previously, an API needs a token to be tested. Therefore, the very first step described in the API documentation &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/guides/#generate-or-reset-application-credentials&quot;&gt;here&lt;/a&gt; is to create API client credentials for the HPE GreenLake platform and generate an access token. &lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/blog-greenlake-dev-portal2.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
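&lt;p&gt;Under the hood, generating the access token is a standard OAuth 2.0 client credentials request. A minimal sketch with &lt;em&gt;curl&lt;/em&gt; is shown below; the token URL, client ID, and client secret are placeholders for the values you obtain when creating your API client credentials, as described in the guide linked above:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Illustrative OAuth 2.0 client credentials request; all three variables are placeholders
$ curl -s -X POST &quot;${TOKEN_URL}&quot; \
  -H &quot;Content-Type: application/x-www-form-urlencoded&quot; \
  -d &quot;grant_type=client_credentials&amp;#x26;client_id=${CLIENT_ID}&amp;#x26;client_secret=${CLIENT_SECRET}&quot;
&lt;/code&gt;&lt;/pre&gt;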
&lt;p&gt;Once you have your token, you can try the different APIs that are related to the different services listed on the left-hand side of the page.  &lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/audit-logs/public/&quot;&gt;The HPE GreenLake audit log service&lt;/a&gt;, for instance, will offer a collection of RESTful APIs for publishing audit logs, managing configurations, and retrieving application-specific and overall platform logs.&lt;/p&gt;
&lt;p&gt;Each service section offers, at a minimum, an overview, a guide, the Open API specification, and the associated API reference documentation. Some services provide additional parts to dive into more specific areas of the service. For instance, the HPE GreenLake for Compute Ops Management service provides additional details about event webhooks and jobs. A webhook is an HTTP-based callback function that allows lightweight, event-driven communication between two APIs. A job is a multi-step task that is managed by Compute Ops Management to perform an action on a resource, for example, performing power operations or firmware updates on a server. &lt;/p&gt;
&lt;p&gt;The Overview gives a definition of the service. The Guide explains the core details of the service from an API standpoint. Finally, the API reference documentation provides you with a download link to the API&apos;s Open API specification file in JSON format. This JSON file can be used in Postman, allowing you to build a collection of REST API calls. In fact, the HPE Developer Community team has already put together a nice HPE GreenLake platform API collection. You can get the Postman collection from the &lt;a href=&quot;https://github.com/hpe-dev-incubator/GLP-API-Tooling/tree/main/Postman-Collections&quot;&gt;HPE Developer Community tooling GitHub repository&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Finally, “la cerise sur le gateau” (the cherry on the cake), as we say in French: the Try part. There you can make real API calls related to the selected service, provided that you have an HPE account, have joined a workspace, and have generated the necessary API client credentials and access token. &lt;/p&gt;
&lt;p&gt;By simply clicking the Try it button, you can get access to the API. &lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/blog-greenlake-dev-portal3.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Now, you can pass your first API call. Simply paste the token in the security window and hit the Send button. &lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/blog-greenlake-dev-portal4.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;As a result, a 200 status will inform you that the call completed successfully, and the Headers window will display the result of the call. &lt;/p&gt;
&lt;p&gt;On the right-hand side of the window, you will see that code samples are presented. You will find cURL, JavaScript, Node.js, Java, and Python versions that you can copy and paste to test, allowing you to integrate them easily. &lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/blog-greenlake-dev-portal5.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
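&lt;p&gt;As an illustration of the kind of request these samples generate, a minimal &lt;em&gt;curl&lt;/em&gt; call might look like the sketch below. The audit log service mentioned earlier is used purely as an example; take the exact base URL and paths from the service&apos;s API reference on the portal, and replace the placeholders with your own values:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Illustrative call to the audit log API; BASE_URL and ACCESS_TOKEN are placeholders
$ curl -s &quot;${BASE_URL}/audit-log/v1beta1/logs?limit=5&quot; \
  -H &quot;Authorization: Bearer ${ACCESS_TOKEN}&quot; \
  -H &quot;Accept: application/json&quot;
&lt;/code&gt;&lt;/pre&gt;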
&lt;p&gt;In addition, a Change Log section will inform you about the versioning of the API and changes related to it.&lt;/p&gt;
&lt;h2&gt;HPE Developer Community&lt;/h2&gt;
&lt;p&gt;The third tile allows you to reach out to us, the &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE Developer Community&lt;/a&gt;. Our web portal, as mentioned in the introductory words of this blog post, will help you find all things software at HPE. &lt;a href=&quot;https://developer.hpe.com/community&quot;&gt;Join us&lt;/a&gt; to collaborate and build applications and integrations with HPE products using the latest software and open source technologies.&lt;/p&gt;
&lt;p&gt;The HPE GreenLake Developer Portal offers one last important feature:&lt;/p&gt;
&lt;h2&gt;The search window&lt;/h2&gt;
&lt;p&gt;Rather than browsing through the pages to look for a necessary piece of information, you can use the search box and get the result you need straight away. Let&apos;s say you would like to know about the HPE GreenLake for Compute Ops Management APIs. A quick search will present you with many relevant entries.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/blog-greenlake-dev-portal6.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Summary &lt;/h2&gt;
&lt;p&gt;This blog post is intended to help you get started with the HPE GreenLake Developer Portal. Additional posts on the HPE GreenLake platform APIs have been produced by the HPE Developer Community team. There is &lt;a href=&quot;https://developer.hpe.com/blog/get-started-with-the-foundational-apis-for-the-hpe-greenlake-edge-to-cloud-platform-%E2%80%93-part-1-introduction-to-the-apis/&quot;&gt;a three-part series&lt;/a&gt; that takes you through the preparation steps you need  to use the APIs for common platform services and walks you through the steps required to obtain an OAuth access token to make secure REST API calls to the HPE GreenLake platform APIs. Then, you can dive into the Postman collection to learn how you, as an IT (Information Technology) administrator of the HPE GreenLake platform, can configure and manage workspace resources (users’ identity, devices, and subscriptions), and how you can track activities within your workspace to monitor the overall health of services and devices in your workspace. There is also a blog post about &lt;a href=&quot;https://developer.hpe.com/blog/hpe-greenlake-edge-to-cloud-platform-scripting-fundamentals/&quot;&gt;HPE GreenLake edge-to-cloud platform scripting fundamentals&lt;/a&gt; where you can learn about the HPE GreenLake platform APIs through simple coding examples leveraging bash, PowerShell, and Python.&lt;/p&gt;
&lt;p&gt;Another way to experience these APIs is to check out one of our hands-on Workshops-on-Demand that lets you play with the HPE GreenLake APIs mentioned in this blog post. The workshops are free, available 24/7, and quite easy to use. They give you a real-world experience without any risk. Check out our &lt;a href=&quot;https://developer.hpe.com/hackshack/workshops&quot;&gt;catalog of workshops&lt;/a&gt;, register for the one you are interested in, and go! It is as simple as that.  &lt;/p&gt;
&lt;p&gt;If you still have any questions regarding the HPE GreenLake platform APIs, join the &lt;a href=&quot;https://developer.hpe.com/slack-signup/&quot;&gt;HPE Developer Community Slack Workspace&lt;/a&gt; and start a discussion in our &lt;a href=&quot;https://hpedev.slack.com/archives/C02EG5XFK8Q&quot;&gt;#hpe-greenlake-api&lt;/a&gt; channel. We are always here to help.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[The transformative impact of generative AI on Telco products]]></title><description><![CDATA[In February 2024, Harvard Business Review published an insightful article titled “Turn Generative AI from an Existential Threat into a…]]></description><link>https://developer.hpe.com/the-transformative-impact-of-generative-ai-on-telco-products/</link><guid isPermaLink="false">https://developer.hpe.com/the-transformative-impact-of-generative-ai-on-telco-products/</guid><pubDate>Thu, 07 Mar 2024 16:28:15 GMT</pubDate><content:encoded>&lt;p&gt;In February 2024, Harvard Business Review published an insightful article titled “&lt;a href=&quot;https://hbr.org/2024/01/turn-generative-ai-from-an-existential-threat-into-a-competitive-advantage&quot;&gt;Turn Generative AI from an Existential Threat into a Competitive Advantage&lt;/a&gt;”. The article outlines three strategic approaches for integrating generative AI into business operations to secure a competitive edge:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Adopting public AI tools:&lt;/strong&gt; Utilizing readily available AI technologies, like ChatGPT, enhances efficiency and innovation. Early adoption offers a competitive edge, though the advantage may be short-lived as such tools become universally essential.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Customizing AI tools:&lt;/strong&gt; Tailoring AI tools with company-specific data improves customer experiences through personalized services and interfaces. This strategy, fueled by user feedback, advances towards more sophisticated AI applications, enhancing both customization and usability.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Automating data feedback loops&lt;/strong&gt;: Implementing self-improving AI through automatic feedback loops from customer interactions boosts personalization and efficiency. Despite the challenge of integrating these mechanisms smoothly, it offers a sustainable competitive advantage.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;At HPE Athonet, our goal is to achieve this third level of implementation to enhance internal efficiency and maximize the value we deliver to our customers. This enables them to leverage our private networks for groundbreaking innovations. To achieve this, we are developing a generative AI platform founded on three core principles:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Transition from individual tools to a team-centric collaborative framework:&lt;/strong&gt; Shifting the focus towards collaborative efforts that enhance team synergy and productivity.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Designing a user experience to enhance focus and facilitate flow:&lt;/strong&gt; Crafting an interface that minimizes distractions and supports users in achieving a state of deep concentration, or &apos;Flow&apos;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Building a versatile architecture incorporating data mesh and microservices principles:&lt;/strong&gt; Developing a flexible system architecture that integrates advanced data management and service-oriented designs to support scalable and efficient operations.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;We showcased a proof of concept (PoC) of our innovative solution at Mobile World Congress in Barcelona in February 2024. Our chatbot demonstrated its capability to fetch information and execute commands over Athonet&apos;s 5G networks effectively.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/athon_mwc1.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The PoC highlighted the ability of our reasoning engine to orchestrate multiple tools, enabling seamless connection to Athonet 5G networks via APIs and efficient retrieval of document information, both features accessible through chat interactions.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/athon_mwc2.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;This pioneering approach not only showcases the potential and flexibility of this architecture in network systems management but also emphasizes the profound impact such technologies can have on the digital transformation journey. We designed this strategy with the aim of elevating HPE Athonet into a future where AI not only enhances but significantly boosts our operational capabilities and the value we deliver to our customers.&lt;/p&gt;
&lt;p&gt;Stay tuned for our upcoming posts where we will delve deeper into the core components and principles of this idea.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Optimizing data management in HPE GreenLake for Private Cloud Enterprise and more!]]></title><link>https://developer.hpe.com/2024-march-05/</link><guid isPermaLink="false">https://developer.hpe.com/2024-march-05/</guid><pubDate>Tue, 05 Mar 2024 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Finetuning Mistral-7B with LoRA and DeepSpeed]]></title><description><![CDATA[E﻿xternal blog post]]></description><link>https://developer.hpe.com/finetuning-mistral-7b-with-lora-and-deepspeed/</link><guid isPermaLink="false">https://developer.hpe.com/finetuning-mistral-7b-with-lora-and-deepspeed/</guid><pubDate>Wed, 28 Feb 2024 16:21:51 GMT</pubDate><content:encoded>&lt;p&gt;E﻿xternal blog post&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Changes to Chapel 2.0 Since its First Release Candidate]]></title><description><![CDATA[E﻿xternal blog]]></description><link>https://developer.hpe.com/changes-to-chapel-2-0-since-its-first-release-candidate/</link><guid isPermaLink="false">https://developer.hpe.com/changes-to-chapel-2-0-since-its-first-release-candidate/</guid><pubDate>Wed, 28 Feb 2024 01:53:46 GMT</pubDate><content:encoded>&lt;p&gt;E﻿xternal blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[The Latest AI Models, Datasets, and Research]]></title><description><![CDATA[E﻿xternal blog post]]></description><link>https://developer.hpe.com/the-latest-ai-models-datasets-and-research/</link><guid isPermaLink="false">https://developer.hpe.com/the-latest-ai-models-datasets-and-research/</guid><pubDate>Tue, 27 Feb 2024 15:40:41 GMT</pubDate><content:encoded>&lt;p&gt;E﻿xternal blog post&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Why is Redfish different from other REST APIs - Part 2]]></title><description><![CDATA[This blog post has been moved to the Server Management Portal.]]></description><link>https://developer.hpe.com/why-is-redfish®-different-from-other-rest-apis-part-2/</link><guid isPermaLink="false">https://developer.hpe.com/why-is-redfish®-different-from-other-rest-apis-part-2/</guid><pubDate>Thu, 22 Feb 2024 17:22:57 GMT</pubDate><content:encoded>&lt;br&gt;
&lt;p&gt;&lt;big&gt;This blog post has been moved to the &lt;a href=&quot;https://servermanagementportal.ext.hpe.com/docs/references_and_material/blogposts/why_is_redfish_different/why_is_redfish_different_part2&quot;&gt;Server Management Portal&lt;/a&gt;.&lt;/big&gt;&lt;/p&gt;
&lt;br&gt;</content:encoded></item><item><title><![CDATA[Streamline your server deployments: Bare metal provisioning with HPE GreenLake for Compute Ops Management and Ansible]]></title><description><![CDATA[In the rapidly-evolving world of IT infrastructure management, achieving speed, efficiency, and reliability in server provisioning can make…]]></description><link>https://developer.hpe.com/streamline-your-server-deployments-bare-metal-provisioning-with-hpe-compute-ops-management-and-ansible/</link><guid isPermaLink="false">https://developer.hpe.com/streamline-your-server-deployments-bare-metal-provisioning-with-hpe-compute-ops-management-and-ansible/</guid><pubDate>Wed, 21 Feb 2024 13:16:34 GMT</pubDate><content:encoded>&lt;style&gt;ul li{ font-size:27px;padding-bottom: 0.5em;}&lt;/style&gt;
&lt;style&gt;ol li{ font-size:27px;padding-bottom: 0.5em;}&lt;/style&gt;
&lt;style&gt;ul ul li {padding-bottom: 8px;}&lt;/style&gt;
&lt;style&gt;ol ol li {padding-bottom: 8px;}&lt;/style&gt;
&lt;style&gt;li &gt; ul {margin-top: 10px;}&lt;/style&gt;
&lt;style&gt;li &gt; ul &gt; li:last-child {margin-bottom: 0px;}&lt;/style&gt;
&lt;style&gt; i{ color:grey;font-family:&apos;Courier New&apos;;font-size:22px; } &lt;/style&gt;
&lt;style&gt;
  img {
    max-width: 100%;
    height: auto;
    border: 1px solid #ccc;
    margin: 20px;
    box-shadow: 2px 2px 5px #ccc;
  }
&lt;/style&gt;
&lt;p&gt;In the rapidly-evolving world of IT infrastructure management, achieving speed, efficiency, and reliability in server provisioning can make a significant difference. This is where cutting-edge tools like &lt;a href=&quot;https://www.hpe.com/us/en/hpe-greenlake-compute-ops-management.html&quot;&gt;HPE GreenLake for Compute Ops Management&lt;/a&gt; and &lt;a href=&quot;https://www.ansible.com/&quot;&gt;Ansible&lt;/a&gt; come into play. Together, they create a robust platform for managing your infrastructure seamlessly. In this blog post, I will introduce an exciting new GitHub project that exemplifies how to harness these tools for optimal bare metal provisioning.&lt;/p&gt;
&lt;h2&gt;Introducing a new GitHub project&lt;/h2&gt;
&lt;p&gt;I am excited to share a new project. It is an open-source initiative hosted on GitHub that aims to enhance the integration between HPE GreenLake for Compute Ops Management and Ansible. This endeavor is focused on making it easier to configure, manage and provision bare metal servers at scale.&lt;/p&gt;
&lt;p&gt;The initial aim of this project was to focus on server provisioning for the ESXi, RHEL and Windows Server platforms. However, it also aims to provide an overview of the various capabilities of the Compute Ops Management API. The project effectively demonstrates a wide range of API interactions, covering everything from initial installation (Day0 operations) through the early stages of active use (Day1) to ongoing maintenance (Day2) with automated firmware updates.&lt;/p&gt;
&lt;p&gt;Main operations include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Collecting server information&lt;/li&gt;
&lt;li&gt;Identifying storage destinations for the operating system install&lt;/li&gt;
&lt;li&gt;Configuring server settings:
&lt;ul&gt;
&lt;li&gt;BIOS settings&lt;/li&gt;
&lt;li&gt;Storage configurations&lt;/li&gt;
&lt;li&gt;OS provisioning&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Creating tailor-made kickstart scripts and assembling ISOs&lt;/li&gt;
&lt;li&gt;Starting and monitoring OS image installation&lt;/li&gt;
&lt;li&gt;Installing and monitoring HPE Agentless Management Service (AMS) and Smart Update Tool (SUT)&lt;/li&gt;
&lt;li&gt;Creating server groups with specific settings&lt;/li&gt;
&lt;li&gt;Adding servers to temporary and permanent server groups&lt;/li&gt;
&lt;li&gt;Executing firmware updates&lt;/li&gt;
&lt;li&gt;Monitoring task execution&lt;/li&gt;
&lt;li&gt;Managing errors related to tasks&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In this project, automating the provisioning of operating systems on bare metal servers is made simple and accessible to anyone with basic knowledge of Ansible, HPE Compute Ops Management, and kickstart techniques. While it is generally a complex process that requires a wide range of skills, this project simplifies it through auto-customized kickstarts and auto-generated ISO files, and by exploiting the very compelling features of HPE Compute Ops Management server groups.&lt;/p&gt;
&lt;h2&gt;Key highlights of this project&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Automated provisioning&lt;/strong&gt;: Kickstart your server setups without tedious manual configuration.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Centralized control&lt;/strong&gt;: Manage your entire fleet of servers from a single pane of glass.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Scalable architecture&lt;/strong&gt;: Effortlessly scale your infrastructure to meet growing business demands.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Pre-built playbooks&lt;/strong&gt;: Jumpstart your automation with curated collection, crafted for various deployment scenarios (Microsoft Windows Server, VMware ESXi and Red Hat Enterprise Linux).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Custom Ansible variables&lt;/strong&gt;: Enable you to define environment-specific parameters, ensuring that each server gets a configuration that fits its role in the infrastructure, ensuring that you have granular control over server provisioning.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Comprehensive documentation&lt;/strong&gt;: Detailed guides, videos and examples help you customize the workflow to your specific requirements.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Mastering server management with HPE GreenLake for Compute Ops Management&lt;/h2&gt;
&lt;p&gt;HPE GreenLake for Compute Ops Management is a comprehensive solution for hardware resource management, providing a seamless way to handle server deployments. With its ability to manage health monitoring, orchestrate server configuration and firmware update workflows, and automate bare metal provisioning, administrators can ensure their data centers operate optimally with less effort and greater oversight. To learn more, see &lt;a href=&quot;https://www.hpe.com/emea_europe/en/hpe-greenlake-compute-ops-management.html&quot;&gt;HPE GreenLake for Compute Ops Management&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Bridging HPE GreenLake for Compute Ops Management and Ansible&lt;/h2&gt;
&lt;p&gt;HPE GreenLake for Compute Ops Management provides the foundational management capabilities essential for maintaining data center health and efficiency. When combined with the automation capabilities of Ansible, IT administrators can achieve unprecedented levels of automation.&lt;/p&gt;
&lt;h2&gt;Bringing it all together&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Automated Workflows&lt;/strong&gt;: Convert time-consuming manual processes into automated workflows that can be tracked and managed easily.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Scalable Infrastructure&lt;/strong&gt;: Embrace growth without compromising on performance or manageability.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Reduced Human Error&lt;/strong&gt;: Minimize mistakes by standardizing server configurations across the board.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Mastering parallel execution with Ansible&lt;/h2&gt;
&lt;p&gt;A key attribute of Ansible that I relied on in this project is its impressive capability to execute tasks concurrently across multiple systems, thereby accelerating deployment processes. This feature is called &quot;forks&quot; in Ansible. Set to 5 by default, the forks value is adjustable based on available system resources (CPU and memory), meaning that Ansible can carry out playbook tasks in parallel across 5 hosts from the inventory list. This parallel execution is among Ansible&apos;s outstanding functionalities, enhancing the effectiveness of bare metal provisioning substantially. Moreover, this approach ensures consistent configurations across all provisioned hosts.&lt;/p&gt;
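&lt;p&gt;For reference, the forks value can be raised either in &lt;em&gt;ansible.cfg&lt;/em&gt; or on the command line with the &lt;em&gt;-f&lt;/em&gt; option. A minimal sketch is shown below; the value of 10 is only an example, so size it to your control node&apos;s resources:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# ansible.cfg: raise the default number of parallel forks
[defaults]
forks = 10

# Or override it for a single run with the -f / --forks option
$ ansible-playbook RHEL_provisioning.yml -i hosts_RHEL -f 10 --ask-vault-pass --ask-become-pass
&lt;/code&gt;&lt;/pre&gt;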
&lt;h2&gt;Where to start?&lt;/h2&gt;
&lt;p&gt;To gain an understanding of the project, please refer to the &lt;a href=&quot;https://github.com/jullienl/HPE-COM-baremetal/blob/main/readme.md&quot;&gt;readme.md&lt;/a&gt; file within the project&apos;s repository. It will provide you with detailed instructions on:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/jullienl/HPE-COM-baremetal#prerequisites&quot;&gt;The necessary prerequisites for utilizing this project&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/jullienl/HPE-COM-baremetal#ansible-control-node-information&quot;&gt;The process for setting up the Ansible control node&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/jullienl/HPE-COM-baremetal#preparation-to-run-the-playbooks&quot;&gt;The initial steps required prior to executing a playbook&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;How to run a playbook?&lt;/h2&gt;
&lt;p&gt;A single command is required to provision all hosts listed in an inventory file:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;ansible-playbook &amp;#x3C;provisioning_file&gt;.yml -i &amp;#x3C;inventory_file&gt; --ask-vault-pass --ask-become-pass
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Where &lt;code&gt;&amp;#x3C;provisioning_file&gt;&lt;/code&gt; should be replaced with &lt;code&gt;ESXi_provisioning&lt;/code&gt;, &lt;code&gt;RHEL_provisioning&lt;/code&gt;, or &lt;code&gt;WIN_provisioning&lt;/code&gt; depending on the target operating system. Similarly, replace &lt;code&gt;&amp;#x3C;inventory_file&gt;&lt;/code&gt; with the appropriate inventory filename such as &lt;code&gt;hosts_ESXi&lt;/code&gt;, &lt;code&gt;hosts_RHEL&lt;/code&gt;, or &lt;code&gt;hosts_WIN&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Upon running this command, Ansible will prompt you to enter the vault password and the sudo password to proceed with the provisioning process.&lt;/p&gt;
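&lt;p&gt;For example, using the file names listed above, a run that provisions the RHEL hosts declared in the &lt;em&gt;hosts_RHEL&lt;/em&gt; inventory file would look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Provision all RHEL hosts listed in the hosts_RHEL inventory
$ ansible-playbook RHEL_provisioning.yml -i hosts_RHEL --ask-vault-pass --ask-become-pass
&lt;/code&gt;&lt;/pre&gt;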
&lt;h2&gt;Explore my video series&lt;/h2&gt;
&lt;p&gt;Dive into this series of videos showcasing the seamless bare metal operation across three major operating systems. Each video provides a walk-through of the different variables involved and the files that are required to update HPE drivers and software, along with an explanation of the different steps of each playbook.&lt;/p&gt;
&lt;h3&gt;Windows Server Bare Metal Provisioning on 2 x HPE ProLiant DL360 Gen10 Plus&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=A6RD6nIAFmw&quot;&gt;&lt;img src=&quot;https://img.youtube.com/vi/A6RD6nIAFmw/hqdefault.jpg&quot; alt=&quot;Automatic Windows server&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;RHEL 9.3 Bare Metal Provisioning on 2 x HPE ProLiant DL360 Gen10 Plus&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=6_o8yB4cvag&quot;&gt;&lt;img src=&quot;https://img.youtube.com/vi/6_o8yB4cvag/hqdefault.jpg&quot; alt=&quot;Automatic RHEL Bare Metal&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;VMware ESXi  Bare Metal Provisioning on 2 x HPE ProLiant DL360 Gen10 Plus&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=_ySgROdd_Bw&quot;&gt;&lt;img src=&quot;https://img.youtube.com/vi/_ySgROdd_Bw/hqdefault.jpg&quot; alt=&quot;Automatic ESX Bare Metal&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;br /&gt;
&lt;p&gt;Join me on my &lt;a href=&quot;https://github.com/jullienl/HPE-COM-baremetal&quot;&gt;GitHub repository&lt;/a&gt;, where a wealth of information awaits you in the README file. Learn how to effectively utilize this project, from cloning it into your environment to commencing with its use, and witness the ways it can streamline your bare metal provisioning workflow.&lt;/p&gt;
&lt;p&gt;Stay tuned as I continue to update and maintain this project, incorporating user &lt;a href=&quot;mailto:lio@hpe.com&quot;&gt;feedback&lt;/a&gt; and the latest advancements that HPE GreenLake will offer.&lt;/p&gt;
&lt;p&gt;Get started now and begin transforming your server deployment strategy today!&lt;/p&gt;
&lt;p&gt; &lt;/p&gt;</content:encoded></item><item><title><![CDATA[Why is Redfish different from other REST APIs - Part 1]]></title><description><![CDATA[This blog post has been moved to the Server Management Portal]]></description><link>https://developer.hpe.com/why-is-redfish®-different-from-other-rest-apis-part-1/</link><guid isPermaLink="false">https://developer.hpe.com/why-is-redfish®-different-from-other-rest-apis-part-1/</guid><pubDate>Mon, 19 Feb 2024 16:20:51 GMT</pubDate><content:encoded>&lt;br&gt;
&lt;p&gt;&lt;big&gt;This blog post has been moved to the &lt;a href=&quot;https://servermanagementportal.ext.hpe.com/docs/references_and_material/blogposts/why_is_redfish_different/why_is_redfish_different_part1&quot;&gt;Server Management Portal&lt;/a&gt;&lt;/big&gt;&lt;/p&gt;
&lt;br&gt;</content:encoded></item><item><title><![CDATA[MESA and Co-Designing Model Architectures with Hardware]]></title><description><![CDATA[E﻿xternal blog post]]></description><link>https://developer.hpe.com/mesa-and-co-designing-model-architectures-with-hardware/</link><guid isPermaLink="false">https://developer.hpe.com/mesa-and-co-designing-model-architectures-with-hardware/</guid><pubDate>Mon, 05 Feb 2024 16:47:48 GMT</pubDate><content:encoded>&lt;p&gt;E﻿xternal blog post&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Foundational APIs for HPE GreenLake, infrastructure monitoring, GPU programming & more!]]></title><link>https://developer.hpe.com/2024-February-05/</link><guid isPermaLink="false">https://developer.hpe.com/2024-February-05/</guid><pubDate>Mon, 05 Feb 2024 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Get started with the foundational APIs for the HPE GreenLake platform – Part 3: Tracking activities and monitoring health]]></title><description><![CDATA[Editor's note: This blog post series may refer to older release of the HPE GreenLake platform APIs. For information about the current…]]></description><link>https://developer.hpe.com/get-started-with-the-foundational-apis-for-the-hpe-greenlake-edge-to-cloud-platform-–-part-3-tracking-activities-and-monitoring-health/</link><guid isPermaLink="false">https://developer.hpe.com/get-started-with-the-foundational-apis-for-the-hpe-greenlake-edge-to-cloud-platform-–-part-3-tracking-activities-and-monitoring-health/</guid><pubDate>Fri, 02 Feb 2024 10:38:42 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor&apos;s note:&lt;/strong&gt; This blog post series may refer to older release of the HPE GreenLake platform APIs. For information about the current release of the HPE GreenLake service APIs, please visit the &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/&quot;&gt;HPE GreenLake API catalog&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This is part three of a blog series that showcases the capabilities of the APIs for common HPE GreenLake platform services using a real customer scenario, presenting it from the perspective of a user of the platform, such as an IT administrator.&lt;/p&gt;
&lt;p&gt;Continuing on from the &lt;a href=&quot;https://developer.hpe.com/blog/get-started-with-the-foundational-apis-for-the-hpe-greenlake-edge-to-cloud-platform-%E2%80%93-part-2-configuring-and-managing-a-workspace/&quot;&gt;second part of this series&lt;/a&gt;, where I had put on my IT administrator’s hat for a &lt;em&gt;&lt;strong&gt;Standard Enterprise&lt;/strong&gt;&lt;/em&gt; workspace, I will now explore the set of REST API calls used for tracking activities in the workspace and monitoring the overall health of HPE services and HPE products in the workspace.&lt;/p&gt;
&lt;h2&gt;Tracking activities in the workspace&lt;/h2&gt;
&lt;p&gt;The &lt;strong&gt;Audit log&lt;/strong&gt; service records events emitted by users, devices, and services in the workspace. These logs can be used to track user activities, perform root cause analysis of an incident, investigate breaches, and support audits.&lt;/p&gt;
&lt;p&gt;Let’s assume that I have been notified of unusual activities in the workspace and I would like to begin an investigation and identify the root cause of the incident. This involves analyzing logs for the services and the platform in the workspace and tracking user activities. To conduct this analysis, I will use the set of audit log API calls from the Postman collection folder: &lt;em&gt;&lt;strong&gt;Tracking GLP Workspace/Step5-audit-log/audit-log/v1beta1/logs&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;h3&gt;Collecting service-specific logs and platform logs&lt;/h3&gt;
&lt;p&gt;The &lt;strong&gt;GET&lt;/strong&gt; REST API request &lt;em&gt;&lt;strong&gt;Get all audit logs of an application&lt;/strong&gt;&lt;/em&gt;, derived from the API call &lt;em&gt;&lt;strong&gt;Get all audit logs of an application or user&lt;/strong&gt;&lt;/em&gt;, is used to retrieve logs for a specific service in the workspace or for the platform itself. I just need to specify the service identifier or the HPE GreenLake platform identifier to obtain the corresponding list of logs. In the example below, I specify the identifier of the HPE GreenLake platform and limit the output to activities that occurred after a certain date and time in UTC, following the &lt;a href=&quot;https://en.wikipedia.org/wiki/ISO_8601&quot;&gt;ISO 8601 standard&lt;/a&gt;:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;GET {{baseUrl}}/audit-log/v1beta1/logs?filter=application/id eq &apos;{{GLP_Application_Id}}&apos;&amp;#x26;filter=createdAt ge &apos;2023-12-10T11:00:00.00000Z&apos;&amp;#x26;limit=50&amp;#x26;offset=0&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/blog-part3-auditlog-application-specific-logs-image2.png&quot; alt=&quot;Figure 1: Audit logs for the HPE GreenLake platform in the workspace&quot; title=&quot;Figure 1: Audit logs for the HPE GreenLake platform in the workspace&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;span style=&quot;color:grey; font-family:Arial; font-size:1em&quot;&gt; Figure 1: Audit logs for the HPE GreenLake platform in the workspace&lt;/span&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
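&lt;p&gt;The same request can also be issued outside Postman. Here is a minimal curl sketch, assuming the &lt;em&gt;TOKEN&lt;/em&gt; and &lt;em&gt;GLP_APPLICATION_ID&lt;/em&gt; environment variables hold the OAuth access token and the HPE GreenLake platform identifier (the equivalents of the Postman collection variables):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Sketch only: query the audit log service for platform logs created after a given date
curl -s -G &quot;https://global.api.greenlake.hpe.com/audit-log/v1beta1/logs&quot; \
  -H &quot;Authorization: Bearer $TOKEN&quot; \
  --data-urlencode &quot;filter=application/id eq &apos;$GLP_APPLICATION_ID&apos;&quot; \
  --data-urlencode &quot;filter=createdAt ge &apos;2023-12-10T11:00:00.00000Z&apos;&quot; \
  --data-urlencode &quot;limit=50&quot; \
  --data-urlencode &quot;offset=0&quot;
&lt;/code&gt;&lt;/pre&gt;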
&lt;h3&gt;Tracking user-specific activities&lt;/h3&gt;
&lt;p&gt;The &lt;strong&gt;GET&lt;/strong&gt; REST API call &lt;em&gt;&lt;strong&gt;Get all audit logs of a user&lt;/strong&gt;&lt;/em&gt;, derived from the REST API call &lt;em&gt;&lt;strong&gt;Get all audit logs of an application or user&lt;/strong&gt;&lt;/em&gt;, is used to track activities for a specific user in the workspace based on the criteria specified in the filter query parameters. In this example, I limit the output to a particular user’s activities that occurred after a certain date and time in UTC, following the &lt;a href=&quot;https://en.wikipedia.org/wiki/ISO_8601&quot;&gt;ISO 8601 standard&lt;/a&gt;:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;GET {{baseUrl}}/audit-log/v1beta1/logs?filter=user/username eq &apos;&amp;#x3C;UserEmail@example.com&gt;&apos;&amp;#x26;filter=createdAt ge &apos;2023-12-14T11:00:00.00000Z&apos;&amp;#x26;limit=300&lt;/code&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; You can specify additional query parameters to limit the scope of the output to a specific category of activities. For example, &lt;em&gt;User Management&lt;/em&gt;, &lt;em&gt;Device Management&lt;/em&gt;, or &lt;em&gt;Customer Management&lt;/em&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;img src=&quot;/img/blog-part3-auditlog-tracking-user-activities-image1.png&quot; alt=&quot;Figure 2: Tracking activities for a specific user&quot; title=&quot;Figure 2: Tracking activities for a specific user&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;span style=&quot;color:grey; font-family:Arial; font-size:1em&quot;&gt; Figure 2: Tracking activities for a specific user&lt;/span&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;h3&gt;Monitoring health events for the workspace&lt;/h3&gt;
&lt;p&gt;The HPE GreenLake platform provides a &lt;strong&gt;wellness&lt;/strong&gt; service to enable you to monitor the overall health of the managed services and devices in the workspace. The wellness service API provides programmatic access to view health events and insights about HPE services and HPE products in the workspace.&lt;/p&gt;
&lt;p&gt;I will use the set of Wellness API calls from the Postman collection folder: &lt;em&gt;&lt;strong&gt;Tracking GLP Workspace/Step6-Wellness&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;GET&lt;/strong&gt; REST API call &lt;em&gt;&lt;strong&gt;Get a list of wellness events&lt;/strong&gt;&lt;/em&gt; is used to retrieve a list of health events for services (for example, Networking, Storage, Compute or HPE GreenLake platform services) and HPE device models in the workspace. In the example below, I use a set of query parameters to view health information for a specific device model, service and severity:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;GET {{baseUrl}}/wellness/v2beta2/events?filter=serviceName in (&apos;Storage&apos;)&amp;#x26;filter=productName eq &apos;HPE Alletra 6000&apos;&amp;#x26;filter=condition/severity eq &apos;warning&apos;&amp;#x26;limit=100&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/blog-part3-list-wellness-v2beta2-events-for-service-product-severity-image3.png&quot; alt=&quot;Figure 3: Retrieve list of health events for a specific service, device model and severity&quot; title=&quot;Figure 3: Retrieve list of health events for a specific service, device model and severity&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;span style=&quot;color:grey; font-family:Arial; font-size:1em&quot;&gt; Figure 3: Retrieve list of health events for a specific service, device model and severity&lt;/span&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
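&lt;p&gt;As a point of comparison, the same wellness query can be expressed with curl; this is a sketch only, assuming &lt;em&gt;TOKEN&lt;/em&gt; again holds the OAuth access token:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Sketch only: list warning-severity health events for HPE Alletra 6000 in the Storage service
curl -s -G &quot;https://global.api.greenlake.hpe.com/wellness/v2beta2/events&quot; \
  -H &quot;Authorization: Bearer $TOKEN&quot; \
  --data-urlencode &quot;filter=serviceName in (&apos;Storage&apos;)&quot; \
  --data-urlencode &quot;filter=productName eq &apos;HPE Alletra 6000&apos;&quot; \
  --data-urlencode &quot;filter=condition/severity eq &apos;warning&apos;&quot; \
  --data-urlencode &quot;limit=100&quot;
&lt;/code&gt;&lt;/pre&gt;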
&lt;p&gt;To view information about a specific event ID, I can use the REST API request &lt;em&gt;&lt;strong&gt;Get wellness event with specific ID&lt;/strong&gt;&lt;/em&gt; by specifying the event ID as a Path variable in the parameters of the API call:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;GET {{baseUrl}}/wellness/v2beta2/events/:id&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/blog-part3-view-wellness-v2beta2-event-for-specific-event-id-image4.png&quot; alt=&quot;Figure 4: View information for a specific event ID&quot; title=&quot;Figure 4: View information for a specific event ID&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;span style=&quot;color:grey; font-family:Arial; font-size:1em&quot;&gt; Figure 4: View information for a specific event ID&lt;/span&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;This blog series walks you through the APIs for common HPE GreenLake platform services &lt;strong&gt;for a single-tenant workspace&lt;/strong&gt; environment from the perspective of an IT administrator. I took advantage of the Postman collection available on the &lt;a href=&quot;https://github.com/hpe-dev-incubator/GLP-API-Tooling/tree/main/Postman-Collections&quot;&gt;HPE Developer Community tooling GitHub repository&lt;/a&gt; to help you get started with these APIs: learning the REST API call syntax through examples, &lt;strong&gt;programmatically&lt;/strong&gt; configuring and managing workspace resources such as users and infrastructure devices, and tracking activities and monitoring health events for the workspace.&lt;/p&gt;
&lt;p&gt;To learn more about all the REST API calls for the platform, I invite you to refer to the &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/&quot;&gt;HPE GreenLake platform documentation&lt;/a&gt; for these APIs. The documentation leverages OpenAPI specifications and associated reference documentation for these API services. It provides a complete explanation of the operations supported by these APIs for common HPE GreenLake platform services, as well as sample requests and responses.&lt;/p&gt;
&lt;p&gt;If you’re interested in trying out what I just discussed, you might want to check out one of our hands-on Workshops-on-Demand that lets you play with the HPE GreenLake APIs mentioned in this blog series. The workshops are free, available 24/7, and very easy to use. They give you a real-world experience without any risk. Check out our &lt;a href=&quot;https://developer.hpe.com/hackshack/workshops&quot;&gt;catalog of workshops&lt;/a&gt;, register for the one you’re interested in and go! It’s as simple as that.&lt;/p&gt;
&lt;p&gt;If you still have any questions regarding the HPE GreenLake platform APIs, join the &lt;a href=&quot;https://developer.hpe.com/slack-signup/&quot;&gt;HPE Developer Community Slack Workspace&lt;/a&gt; and start a discussion in our &lt;a href=&quot;https://hpedev.slack.com/archives/C02EG5XFK8Q&quot;&gt;#hpe-greenlake-api&lt;/a&gt; channel. We’re always here to help.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Get started with the foundational APIs for the HPE GreenLake platform – Part 2: Configuring and managing a workspace]]></title><description><![CDATA[Editor's note: This blog post series may refer to older release of the HPE GreenLake platform APIs. For information about the current…]]></description><link>https://developer.hpe.com/get-started-with-the-foundational-apis-for-the-hpe-greenlake-edge-to-cloud-platform-–-part-2-configuring-and-managing-a-workspace/</link><guid isPermaLink="false">https://developer.hpe.com/get-started-with-the-foundational-apis-for-the-hpe-greenlake-edge-to-cloud-platform-–-part-2-configuring-and-managing-a-workspace/</guid><pubDate>Wed, 31 Jan 2024 17:29:10 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;p&gt;&lt;strong&gt;Editor&apos;s note:&lt;/strong&gt; This blog post series may refer to an older release of the HPE GreenLake platform APIs. For information about the current release of the HPE GreenLake service APIs, please visit the &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/&quot;&gt;HPE GreenLake API catalog&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This is Part 2 of a blog series that showcases the capabilities of APIs for common HPE GreenLake platform services through a real customer use case, presenting it from the perspective of a user of the platform, such as an IT administrator.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.hpe.com/blog/get-started-with-the-foundational-apis-for-the-hpe-greenlake-edge-to-cloud-platform-%E2%80%93-part-1-introduction-to-the-apis/&quot;&gt;In the previous blog post&lt;/a&gt;, I described the current set of APIs for the HPE GreenLake platform, and I covered the Postman collection I built to help you get started with these APIs.&lt;/p&gt;
&lt;p&gt;In this second part of the series, I will put on my IT administrator’s hat and assume the role of the HPE GreenLake platform Workspace Administrator for a &lt;em&gt;&lt;strong&gt;Standard Enterprise&lt;/strong&gt;&lt;/em&gt; workspace. This type of workspace is a &lt;em&gt;single-tenant&lt;/em&gt; environment for a single customer and organization. As workspace administrator, I have full privileges to provision, manage and monitor the users and IT resources in the workspace.&lt;/p&gt;
&lt;p&gt;As I do so, I will show you how to use these foundational, common APIs to &lt;strong&gt;programmatically&lt;/strong&gt; configure and manage workspace resources (users and infrastructure devices), just as an administrator would using the HPE GreenLake platform User Interface (UI). I will walk you through some of the most common REST API calls to the HPE GreenLake platform API services, based on a typical customer scenario, so you can learn what you can do on the platform with this set of APIs:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Configuring and managing an HPE GreenLake platform workspace:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Invite users to join a workspace.&lt;/li&gt;
&lt;li&gt;Add a device (for example an HPE Aruba Access Point) and subscription to the workspace.&lt;/li&gt;
&lt;li&gt;Attach the device to a regional instance of a service deployed in the workspace (for example, HPE Aruba Networking Central service in Central Europe) and a subscription key (a license) to operate the device.&lt;/li&gt;
&lt;li&gt;Remove service and subscription assignments for a device.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Obtaining the OAuth access token&lt;/h2&gt;
&lt;p&gt;As I prepare to access and use the HPE GreenLake platform workspace resources through REST API calls to the platform API services, I first need to generate an OAuth access token. Refer to &lt;a href=&quot;https://developer.hpe.com/blog/get-started-with-the-foundational-apis-for-the-hpe-greenlake-edge-to-cloud-platform-%E2%80%93-part-1-introduction-to-the-apis/&quot;&gt;my blog post Part 1&lt;/a&gt; to learn how to generate the access token using the Postman collection available &lt;a href=&quot;https://github.com/hpe-dev-incubator/GLP-API-Tooling/tree/main/Postman-Collections&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Once the access token is generated, it will be used as the &lt;strong&gt;authorization bearer token&lt;/strong&gt; for all subsequent REST API calls.&lt;/p&gt;
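&lt;p&gt;If you prefer to work outside Postman, the same pattern applies to any HTTP client: every call carries the access token in the &lt;em&gt;Authorization&lt;/em&gt; header. A minimal curl sketch, assuming the token value has been exported into the &lt;em&gt;TOKEN&lt;/em&gt; environment variable:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Sketch only: pass the OAuth access token as a bearer token on every REST API call
export TOKEN=&quot;your-access-token-value&quot;
curl -s &quot;https://global.api.greenlake.hpe.com/identity/v1/users&quot; \
  -H &quot;Authorization: Bearer $TOKEN&quot; \
  -H &quot;Accept: application/json&quot;
&lt;/code&gt;&lt;/pre&gt;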
&lt;h2&gt;Inviting a tenant user to collaborate in the workspace&lt;/h2&gt;
&lt;p&gt;As an administrator in my HPE GreenLake platform workspace, I can easily invite other members of my organization to join the workspace by sending them a sign-up link in email. Here I am using the &lt;strong&gt;POST&lt;/strong&gt; REST API call - &lt;em&gt;&lt;strong&gt;Invite a user&lt;/strong&gt;&lt;/em&gt;, taken from the Postman collection folder: &lt;em&gt;&lt;strong&gt;Configuring and Managing GLP Workspace/Step2-IAM/Identity/v1/users&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;POST {{baseUrl}}/identity/v1/users&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/blog-part2-invite-user-image1.png&quot; alt=&quot;Figure 1: Invite a user REST API call&quot; title=&quot;Figure 1: Invite a user REST API call&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;span style=&quot;color:grey; font-family:Arial; font-size:1em&quot;&gt; Figure 1: Invite a user REST API call&lt;/span&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; You must be assigned the HPE GreenLake platform &lt;em&gt;Workspace Administrator&lt;/em&gt; role to invite a user to join the workspace.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Let’s look at this REST API request &lt;strong&gt;syntax&lt;/strong&gt;. All REST API calls to HPE GreenLake platform services are made by providing:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The single unified domain endpoint (&lt;em&gt;&lt;a href=&quot;https://global.api.greenlake.hpe.com&quot;&gt;https://global.api.greenlake.hpe.com&lt;/a&gt;&lt;/em&gt;) defined in the baseUrl variable.&lt;/li&gt;
&lt;li&gt;The HTTP request method such as GET, POST, PUT/PATCH, or DELETE. In this example, the POST method is specified to create a new user instance in the workspace.&lt;/li&gt;
&lt;li&gt;The path (&lt;em&gt;API-group-name/API-Version/Workspace-Resources&lt;/em&gt;), which specifies the API group name (here identity), the version of the API (here v1), and the resource path in the workspace (here users).&lt;/li&gt;
&lt;li&gt;A data payload when using a method that involves changing (PUT/PATCH) or creating (POST) an object instance. In this example, as the method creates a user instance in the workspace, the data payload (the Body) specifies the user’s email address, and a welcome email is sent inviting the user to join the workspace (see the curl sketch after this list).&lt;/li&gt;
&lt;/ul&gt;
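&lt;p&gt;For reference, here is what the same &lt;em&gt;&lt;strong&gt;Invite a user&lt;/strong&gt;&lt;/em&gt; request could look like as a curl sketch. The field names in the body are illustrative only, based on the description above; check the identity API reference for the exact schema:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Sketch only: invite a user by email (body field names are illustrative, not authoritative)
curl -s -X POST &quot;https://global.api.greenlake.hpe.com/identity/v1/users&quot; \
  -H &quot;Authorization: Bearer $TOKEN&quot; \
  -H &quot;Content-Type: application/json&quot; \
  -d &apos;{&quot;email&quot;: &quot;new.user@example.com&quot;, &quot;sendWelcomeEmail&quot;: true}&apos;
&lt;/code&gt;&lt;/pre&gt;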
&lt;p&gt;To get started, I hit the &lt;strong&gt;Send&lt;/strong&gt; button and I get a &lt;em&gt;&lt;strong&gt;201 Created&lt;/strong&gt;&lt;/em&gt; response indicating that the user has been successfully invited. The user then receives an email to confirm and accept the invitation and is prompted to create and activate an HPE account to join the workspace. Invited users are not &lt;strong&gt;verified&lt;/strong&gt; users until they accept the email confirmation.&lt;/p&gt;
&lt;p&gt;I can then verify the user has joined the workspace by using the GET REST API call &lt;em&gt;&lt;strong&gt;Get invited users by Username&lt;/strong&gt;&lt;/em&gt; below:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;GET {{baseUrl}}/identity/v1/users?filter=username eq &apos;&amp;#x3C;user’s email address&gt;&apos;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/blog-part2-get-invited-user-image2.png&quot; alt=&quot;Figure 2: Checking the status of the invited user in the workspace&quot; title=&quot;Figure 2: Checking the status of the invited user in the workspace&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;span style=&quot;color:grey; font-family:Arial; font-size:1em&quot;&gt; Figure 2: Checking the status of the invited user in the workspace&lt;/span&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;p&gt;The REST API call syntax is the same as the previous API request, but here a &lt;strong&gt;GET&lt;/strong&gt; method is used to list the users in the workspace.&lt;/p&gt;
&lt;p&gt;One or more &lt;strong&gt;query parameters&lt;/strong&gt; indicated after the question mark (“&lt;strong&gt;?&lt;/strong&gt;”) in the URL can be used to filter the data that an API request returns. Typical query parameters are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;filter:&lt;/strong&gt; filter the set of resources returned based on the criteria specified in the filter query parameter&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;limit:&lt;/strong&gt; maximum number of records to return&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;offset:&lt;/strong&gt; resource offset to start the response from&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In this example, I use &lt;em&gt;&lt;strong&gt;filter&lt;/strong&gt;&lt;/em&gt; as the query parameter to limit the output of the call to just the invited user, whose email address is specified in the filter.&lt;/p&gt;
&lt;p&gt;I now hit the &lt;strong&gt;Send&lt;/strong&gt; button. The request indicates success (&lt;em&gt;&lt;strong&gt;Status: 200 OK&lt;/strong&gt;&lt;/em&gt;). In the response, a &lt;em&gt;userStatus&lt;/em&gt; of &lt;em&gt;VERIFIED&lt;/em&gt; means that the user has activated the HPE account and joined the workspace. A user who has already activated their HPE account will automatically be added to the workspace upon invitation. A &lt;em&gt;userStatus&lt;/em&gt; of &lt;em&gt;UNVERIFIED&lt;/em&gt; would mean that the user has not created and activated the HPE account yet.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Thanks to the Postman &lt;em&gt;Post-response Script&lt;/em&gt; associated with this request, the unique identifier of the invited user is automatically saved as a collection variable. The identifier of the user is needed should an administrator want to disassociate (delete) the user from the workspace using the REST API call &lt;em&gt;&lt;strong&gt;DELETE Disassociate a user&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Managing IT resources (devices and subscriptions) in the workspace&lt;/h2&gt;
&lt;p&gt;A typical scenario for managing infrastructure resources from the HPE GreenLake platform would be to:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Add an infrastructure device and a subscription key for this device to the workspace. In this scenario, I will add an HPE Aruba Access Point and associated subscription key to the inventory of the workspace.&lt;/li&gt;
&lt;li&gt;Attach the device to a service to manage and operate the device. In this scenario, I will assign the HPE Aruba Access Point to the HPE Aruba Networking Central service already deployed in the workspace. The HPE Aruba Networking Central service is a SaaS-based User Interface that lets customers manage their fleet of networking equipment from edge to cloud through a single web interface.&lt;/li&gt;
&lt;li&gt;Assign a subscription key to the device. A subscription key is a license key needed to activate the device and allows the IT administrator to use and operate it using the appropriate service management console such as HPE Aruba Networking Central for networking devices, HPE GreenLake for Compute Ops Management for compute servers, and Data Services Cloud Console for storage arrays.&lt;/li&gt;
&lt;li&gt;Remove assignment of a service or a subscription for a device.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Adding a device and subscription&lt;/h3&gt;
&lt;p&gt;Here I am going to use the REST API calls from the Postman collection folder: &lt;em&gt;&lt;strong&gt;Configuring and Managing GLP Workspace/Step4-Service Catalog, Devices and Subscriptions&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;POST&lt;/strong&gt; REST API call &lt;em&gt;&lt;strong&gt;Add devices - Aruba Access Point with Tag&lt;/strong&gt;&lt;/em&gt; from the &lt;em&gt;&lt;strong&gt;/devices/v1beta1/devices&lt;/strong&gt;&lt;/em&gt; subfolder allows me to add an HPE Aruba Networking device to the inventory in the workspace by providing device details in the data payload (Body) of the request. The device information for HPE Aruba Networking equipment includes the &lt;em&gt;Serial Number&lt;/em&gt; and the &lt;em&gt;MAC address&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Optionally I can assign a “&lt;strong&gt;tag&lt;/strong&gt;” to the device while adding it to the inventory. Tags are &lt;em&gt;name-value&lt;/em&gt; pairs that can be very useful for identifying and categorizing groups of resources. In the example below, the tag’s name is “&lt;em&gt;Location&lt;/em&gt;”, and the value is “&lt;em&gt;Lab Building 2&lt;/em&gt;”.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;POST {{baseUrl}}/devices/v1beta1/devices&lt;/code&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
    &quot;compute&quot;: [],
    &quot;storage&quot;: [],
    &quot;network&quot;: [
        {
            &quot;serialNumber&quot;: &quot;&amp;#x3C;SerialNumber of the device&gt;&quot;,
            &quot;macAddress&quot;: &quot;&amp;#x3C;MAC-address of the device&gt;&quot;,
            &quot;tags&quot;: {
                &quot;Location&quot;: &quot;Lab Building 2&quot;
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The “&lt;strong&gt;Add devices&lt;/strong&gt;” API call is an asynchronous operation, that is, an API operation that cannot be completed immediately. The response of the request indicates &lt;em&gt;&lt;strong&gt;Status: 202 Accepted&lt;/strong&gt;&lt;/em&gt; and contains the &lt;em&gt;transaction ID&lt;/em&gt; of the asynchronous operation, which I can use as a Path variable in the subsequent &lt;strong&gt;GET&lt;/strong&gt; API call &lt;em&gt;&lt;strong&gt;Get progress or status of async operations in devices&lt;/strong&gt;&lt;/em&gt; to verify whether the asynchronous operation succeeded:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;GET {{baseUrl}}/devices/v1beta1/async-operations/:id&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/blog-part2-adding-device-async-operation-image5.png&quot; alt=&quot;Figure 3: Checking status of asynchronous operation for adding devices&quot; title=&quot;Figure 3: Checking status of asynchronous operation for adding devices&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;span style=&quot;color:grey; font-family:Arial; font-size:1em&quot;&gt; Figure 3: Checking the status of the asynchronous operation for adding devices&lt;/span&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
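&lt;p&gt;Outside Postman, checking the status of the asynchronous operation is a simple GET with the transaction ID substituted into the path; a sketch, assuming &lt;em&gt;TRANSACTION_ID&lt;/em&gt; was captured from the &lt;em&gt;202 Accepted&lt;/em&gt; response:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Sketch only: check the async operation created by the &quot;Add devices&quot; call
curl -s &quot;https://global.api.greenlake.hpe.com/devices/v1beta1/async-operations/$TRANSACTION_ID&quot; \
  -H &quot;Authorization: Bearer $TOKEN&quot;
&lt;/code&gt;&lt;/pre&gt;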
&lt;p&gt;A similar sequence of REST API calls can be used to add a subscription key to the workspace inventory and verify the status of the asynchronous operation. The &lt;strong&gt;POST&lt;/strong&gt; REST API call &lt;em&gt;&lt;strong&gt;Add subscriptions key for AP device&lt;/strong&gt;&lt;/em&gt;, derived from the API call &lt;em&gt;&lt;strong&gt;POST Add subscriptions&lt;/strong&gt;&lt;/em&gt; from the &lt;em&gt;&lt;strong&gt;/subscriptions/v1beta1&lt;/strong&gt;&lt;/em&gt; subfolder, allows me to add a subscription for HPE Aruba Networking devices to the inventory in the workspace by providing the &lt;em&gt;subscription key&lt;/em&gt; in the data payload (Body) of the request. This API call is also an asynchronous operation.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;POST {{baseUrl}}/subscriptions/v1beta1/subscriptions&lt;/code&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &quot;subscriptions&quot;: [
    {
      &quot;key&quot;: &quot;&amp;#x3C;Subscription key&gt;&quot;
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The GET API call &lt;em&gt;&lt;strong&gt;Get progress or status of async operations in subscriptions&lt;/strong&gt;&lt;/em&gt; is used to verify the status of the asynchronous operation:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;GET {{baseUrl}}/subscriptions/v1beta1/async-operations/:id&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;I can now use the two subsequent REST API calls below to fetch detailed information about the device and the subscription key I have just added to the inventory:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The &lt;em&gt;&lt;strong&gt;Get devices managed in a workspace&lt;/strong&gt;&lt;/em&gt; API call allows me to obtain detailed information about the device by specifying the &lt;em&gt;SerialNumber&lt;/em&gt; of the device in the filter query parameter. This request also saves the unique identifier of the device as a collection variable.&lt;/li&gt;
&lt;li&gt;Similarly, the &lt;em&gt;&lt;strong&gt;Get subscriptions of a workspace&lt;/strong&gt;&lt;/em&gt; API call allows me to get detailed information about the &lt;em&gt;subscription key&lt;/em&gt; and fetch the unique identifier of the subscription key as a collection variable.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;code&gt;GET {{baseUrl}}/devices/v1beta1/devices?filter=serialNumber eq &apos;&amp;#x3C;SerialNumber&gt;&apos;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;GET {{baseUrl}}/subscriptions/v1alpha1/subscriptions?filter=key eq &apos;&amp;#x3C;SubscriptionKey&gt;&apos;&lt;/code&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; I will need the &lt;em&gt;identifier&lt;/em&gt; of the device to attach the device to a regional instance of a service management console. I will also need the &lt;em&gt;identifier&lt;/em&gt; of the subscription key to assign the subscription key to the device as explained in the next step.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;Assigning the device to a regional instance of a service&lt;/h3&gt;
&lt;p&gt;Next, using the &lt;strong&gt;PATCH&lt;/strong&gt; &lt;em&gt;&lt;strong&gt;Update devices - Assign Application to a device&lt;/strong&gt;&lt;/em&gt; REST API request (derived from the &lt;em&gt;&lt;strong&gt;PATCH Update devices API call&lt;/strong&gt;&lt;/em&gt;), I can attach the device to a regional instance of the HPE Aruba Networking Central management console service already deployed in the workspace. The identifier of the device is specified as a query parameter, the HPE Aruba Networking Central service &lt;em&gt;identifier&lt;/em&gt; and &lt;em&gt;region&lt;/em&gt; are specified in the data payload (Body) as shown below:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;PATCH {{baseUrl}}/devices/v1beta1/devices?id={{DeviceId}}&lt;/code&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &quot;application&quot;: {
    &quot;id&quot;: &quot;{{Aruba_Application_Id}}&quot;
  },
  &quot;region&quot;: &quot;&amp;#x3C;region&gt;&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/blog-part2-assign-app-to-device-image6.png&quot; alt=&quot;Figure 4: Assign device to a regional service instance&quot; title=&quot;Figure 4: Assign device to a regional service instance&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;span style=&quot;color:grey; font-family:Arial; font-size:1em&quot;&gt; Figure 4: Assign device to a regional instance of the HPE Aruba Networking Central service&lt;/span&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
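&lt;p&gt;The same assignment as a curl sketch, assuming &lt;em&gt;DEVICE_ID&lt;/em&gt;, &lt;em&gt;ARUBA_APPLICATION_ID&lt;/em&gt; and &lt;em&gt;REGION&lt;/em&gt; hold the device identifier, the service identifier and the target region (they stand in for the Postman collection variables and the region placeholder above):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Sketch only: attach the device to a regional instance of the HPE Aruba Networking Central service
curl -s -X PATCH &quot;https://global.api.greenlake.hpe.com/devices/v1beta1/devices?id=$DEVICE_ID&quot; \
  -H &quot;Authorization: Bearer $TOKEN&quot; \
  -H &quot;Content-Type: application/json&quot; \
  -d &quot;{ \&quot;application\&quot;: { \&quot;id\&quot;: \&quot;$ARUBA_APPLICATION_ID\&quot; }, \&quot;region\&quot;: \&quot;$REGION\&quot; }&quot;
&lt;/code&gt;&lt;/pre&gt;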
&lt;p&gt;This API call is an asynchronous operation, and I can use the &lt;strong&gt;GET&lt;/strong&gt; API call &lt;em&gt;&lt;strong&gt;Get progress or status of async operations in devices&lt;/strong&gt;&lt;/em&gt; to verify the status of the operation.&lt;/p&gt;
&lt;h3&gt;Applying a subscription key to the device&lt;/h3&gt;
&lt;p&gt;Similarly, I can use the same REST API call to assign a subscription key to the device specifying the identifier of the device as the query parameter and the identifier of the subscription key in the data payload (Body):&lt;/p&gt;
&lt;p&gt;&lt;code&gt;PATCH {{baseUrl}}/devices/v1beta1/devices?id={{DeviceId}}&lt;/code&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &quot;subscription&quot;: [
    {
      &quot;id&quot;: &quot;{{SubscriptionKeyId}}&quot;
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/blog-part2-assign-subscriptionkey-to-device-image7.png&quot; alt=&quot;Figure 5: Assign a subscription key to a device&quot; title=&quot;Figure 5: Assign a subscription key to a device&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;span style=&quot;color:grey; font-family:Arial; font-size:1em&quot;&gt; Figure 5: Assign a subscription key to a device&lt;/span&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;p&gt;This API call is an asynchronous operation, and I can use the &lt;strong&gt;GET&lt;/strong&gt; API call &lt;em&gt;&lt;strong&gt;Get progress or status of async operations in devices&lt;/strong&gt;&lt;/em&gt; to verify the status of the operation.&lt;/p&gt;
&lt;h3&gt;Removing assignment of service and subscription&lt;/h3&gt;
&lt;p&gt;During ongoing operations in the workspace, I may need to remove assignment of a service and a subscription for a particular device. I can use the &lt;strong&gt;PATCH&lt;/strong&gt; REST API calls &lt;em&gt;&lt;strong&gt;Update devices – Unassign Application for a Device&lt;/strong&gt;&lt;/em&gt; and &lt;em&gt;&lt;strong&gt;Update devices - Unassign Subscription Key for a device&lt;/strong&gt;&lt;/em&gt; respectively.&lt;/p&gt;
&lt;p&gt;To remove the assignment of a service for a device, leave the &lt;em&gt;application&lt;/em&gt; field as an empty object in the data payload, as shown here:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;PATCH {{baseUrl}}/devices/v1beta1/devices?id={{DeviceId}}&lt;/code&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &quot;application&quot;: {
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To remove the assignment of a subscription key for a device, leave the &lt;em&gt;subscription&lt;/em&gt; field as an empty list in the data payload, as shown here:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;PATCH {{baseUrl}}/devices/v1beta1/devices?id={{DeviceId}}&lt;/code&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &quot;subscription&quot;: [
  ]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;This blog post walks you through the APIs for common HPE GreenLake platform services &lt;strong&gt;for a single-tenant workspace&lt;/strong&gt; environment from the perspective of an IT administrator. I took advantage of the &lt;a href=&quot;https://github.com/hpe-dev-incubator/GLP-API-Tooling/tree/main/Postman-Collections&quot;&gt;Postman collection&lt;/a&gt; to help you get started with these APIs, learn the REST API call syntax through examples, and see how to &lt;strong&gt;programmatically&lt;/strong&gt; configure and manage workspace resources such as users, user roles, and infrastructure devices.&lt;/p&gt;
&lt;p&gt;To learn more about all the REST API calls for the platform, I invite you to refer to the &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/&quot;&gt;HPE GreenLake platform documentation&lt;/a&gt; for these APIs. You can get the Postman collection from the &lt;a href=&quot;https://github.com/hpe-dev-incubator/GLP-API-Tooling/tree/main/Postman-Collections&quot;&gt;HPE Developer Community tooling GitHub repository&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;In &lt;a href=&quot;https://developer.hpe.com/blog/get-started-with-the-foundational-apis-for-the-hpe-greenlake-edge-to-cloud-platform-%E2%80%93-part-3-tracking-activities-and-monitoring-health/&quot;&gt;the next part&lt;/a&gt; of this blog series, I will explore the set of APIs used for tracking activities and monitoring overall health of services and devices in the workspace.&lt;/p&gt;
&lt;p&gt;If you’re interested in trying out what I just discussed, you might want to check out one of our hands-on Workshops-on-Demand that lets you play with the HPE GreenLake APIs mentioned in this blog post. The workshops are free, available 24/7, and very easy to use. They give you a real-world experience without any risk. Check out our &lt;a href=&quot;https://developer.hpe.com/hackshack/workshops&quot;&gt;catalog of workshops&lt;/a&gt;, register for the one you’re interested in and go! It’s as simple as that.&lt;/p&gt;
&lt;p&gt;If you still have any questions regarding the HPE GreenLake platform APIs, join the &lt;a href=&quot;https://developer.hpe.com/slack-signup/&quot;&gt;HPE Developer Community Slack Workspace&lt;/a&gt; and start a discussion in our &lt;a href=&quot;https://hpedev.slack.com/archives/C02EG5XFK8Q&quot;&gt;#hpe-greenlake-api&lt;/a&gt; channel. We’re always here to help.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Finetuning an LLM using HuggingFace + Determined]]></title><description><![CDATA[E﻿xternal blog post]]></description><link>https://developer.hpe.com/finetuning-an-llm-using-huggingface-determined/</link><guid isPermaLink="false">https://developer.hpe.com/finetuning-an-llm-using-huggingface-determined/</guid><pubDate>Wed, 31 Jan 2024 15:29:49 GMT</pubDate><content:encoded>&lt;p&gt;E﻿xternal blog post&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Comparing Standard Library Sorts: The Impact of Parallelism]]></title><description><![CDATA[E﻿xternal blog]]></description><link>https://developer.hpe.com/comparing-standard-library-sorts-the-impact-of-parallelism/</link><guid isPermaLink="false">https://developer.hpe.com/comparing-standard-library-sorts-the-impact-of-parallelism/</guid><pubDate>Tue, 30 Jan 2024 19:16:00 GMT</pubDate><content:encoded>&lt;p&gt;E﻿xternal blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[MambaByte, Multimodal Pathway, and CrossMAE]]></title><description><![CDATA[E﻿xternal blog post]]></description><link>https://developer.hpe.com/mambabyte-multimodal-pathway-and-crossmae/</link><guid isPermaLink="false">https://developer.hpe.com/mambabyte-multimodal-pathway-and-crossmae/</guid><pubDate>Mon, 29 Jan 2024 15:59:48 GMT</pubDate><content:encoded>&lt;p&gt;E﻿xternal blog post&lt;/p&gt;</content:encoded></item><item><title><![CDATA[How to backup and restore stateful applications on Kubernetes using Kasten K10 in HPE GreenLake for Private Cloud Enterprise]]></title><description><![CDATA[Overview HPE GreenLake for Private Cloud Enterprise: Containers, one of the HPE GreenLake cloud services available on the HPE GreenLake for…]]></description><link>https://developer.hpe.com/how-to-backup-and-restore-stateful-applications-on-kubernetes-using-kasten-k10-in-hpe-greenlake-for-private-cloud-enterprise/</link><guid isPermaLink="false">https://developer.hpe.com/how-to-backup-and-restore-stateful-applications-on-kubernetes-using-kasten-k10-in-hpe-greenlake-for-private-cloud-enterprise/</guid><pubDate>Fri, 26 Jan 2024 10:07:05 GMT</pubDate><content:encoded>&lt;style&gt; li { font-size: 27px; line-height: 33px; max-width: none; } &lt;/style&gt;
&lt;h3&gt;Overview&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/greenlake/containers.html&quot;&gt;HPE GreenLake for Private Cloud Enterprise: Containers&lt;/a&gt;, one of the HPE GreenLake cloud services available on the HPE GreenLake for Private Cloud Enterprise, allows customers to create a Kubernetes (K8s) cluster, view details about existing clusters, and deploy containerized applications to the cluster. It provides an enterprise-grade container management service using open source K8s.&lt;/p&gt;
&lt;p&gt;In the blog post &lt;a href=&quot;https://developer.hpe.com/blog/getting-started-with-volume-snapshots-on-a-kubernetes-cluster-in-hpe-greenlake-for-private-cloud-enterprise/&quot;&gt;Getting started with volume snapshots on K8s cluster&lt;/a&gt;, I explained how to create a volume snapshot of a persistent volume in a MySQL database instance running on a K8s cluster deployed on HPE GreenLake for Private Cloud Enterprise. In this blog post, I will show you how to back up and restore the stateful applications deployed in a K8s cluster in HPE GreenLake for Private Cloud Enterprise using Kasten K10. Kasten K10 uses the volume snapshot capability in the HPE Container Storage Interface (CSI) driver for K8s to connect to different HPE storage systems and take volume snapshots of persistent volumes in K8s. It provides a user-friendly and intuitive interface and platform for easy and reliable backup and restore of the stateful applications running in the cluster.&lt;/p&gt;
&lt;h3&gt;Kasten K10&lt;/h3&gt;
&lt;p&gt;Kasten K10 is a data management platform purpose-built for K8s, developed by Kasten. Following Veeam&apos;s acquisition of Kasten in 2020, Kasten K10 is often referred to as Kasten by Veeam.&lt;/p&gt;
&lt;p&gt;Kasten K10 has been named &lt;a href=&quot;https://www.veeam.com/news/kasten.html&quot;&gt;a Leader and Outperformer in GigaOm’s K8s Data Protection report for the third consecutive year&lt;/a&gt;. It offers an easy-to-use, scalable, and secure system for K8s backup/restore, disaster recovery and mobility of K8s applications.&lt;/p&gt;
&lt;p&gt;Apart from direct integration with a number of storage providers, Kasten K10 supports invoking volume snapshot operations via the CSI driver for K8s. By using the volume snapshot capability in the CSI driver for K8s, Kasten K10 can access different types of storage systems, enabling you to back up and restore persistent volumes of the stateful applications running on K8s.&lt;/p&gt;
&lt;h3&gt;HPE CSI driver for K8s&lt;/h3&gt;
&lt;p&gt;The CSI defines a standard interface that allows container orchestration systems, such as K8s, to access storage systems. The CSI driver for K8s is a software component that implements the CSI specification and enables K8s to communicate with external storage systems. K8s supports many CSI drivers. The HPE CSI Driver for K8s is one such driver, developed by HPE, that uses the CSI to perform data management operations on different HPE storage systems, such as Nimble Storage, 3PAR and Primera.&lt;/p&gt;
&lt;p&gt;The K8s cluster provisioned in HPE GreenLake for Private Cloud Enterprise comes with the HPE CSI driver for K8s pre-installed and configured. The details of the HPE CSI driver for K8s deployed in the cluster under the namespace &lt;em&gt;&apos;hpe-storage&apos;&lt;/em&gt; are shown below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get all -n hpe-storage
NAME                                       READY   STATUS    RESTARTS      AGE
pod/hpe-csi-controller-54cf448d85-g4w4c    9/9     Running   0             56d
pod/hpe-csi-node-5xtdb                     2/2     Running   0             56d
pod/nimble-csp-74d57f9487-qxwln            1/1     Running   0             56d
pod/primera3par-csp-59f5dfc499-hfghx       1/1     Running   0             56d
pod/snapshot-controller-5fd799f6b5-f6k7n   1/1     Running   6 (22d ago)   56d
pod/snapshot-controller-5fd799f6b5-z62dc   1/1     Running   2 (27d ago)   56d

NAME                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/alletra6000-csp-svc   ClusterIP   10.101.79.85    &amp;#x3C;none&gt;        8080/TCP   56d
service/alletra9000-csp-svc   ClusterIP   10.97.147.230   &amp;#x3C;none&gt;        8080/TCP   56d
service/nimble-csp-svc        ClusterIP   10.110.238.43   &amp;#x3C;none&gt;        8080/TCP   56d
service/primera3par-csp-svc   ClusterIP   10.101.42.76    &amp;#x3C;none&gt;        8080/TCP   56d

NAME                          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/hpe-csi-node   1         1         1       1            1           &amp;#x3C;none&gt;          56d

NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/hpe-csi-controller    1/1     1            1           56d
deployment.apps/nimble-csp            1/1     1            1           56d
deployment.apps/primera3par-csp       1/1     1            1           56d
deployment.apps/snapshot-controller   2/2     2            2           56d

NAME                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/hpe-csi-controller-54cf448d85    1         1         1       56d
replicaset.apps/nimble-csp-74d57f9487            1         1         1       56d
replicaset.apps/primera3par-csp-59f5dfc499       1         1         1       56d
replicaset.apps/snapshot-controller-5fd799f6b5   2         2         2       56d
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;HPE CSI driver for K8s supports both dynamic persistent volumes and volume snapshots. The following are the &lt;em&gt;StorageClasses&lt;/em&gt; and the &lt;em&gt;VolumeSnapshotClass&lt;/em&gt; that are configured in the cluster:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get storageclasses
NAME                                 PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gl-sbc-hpe                           csi.hpe.com                    Delete          Immediate              true                   56d
gl-sbp-frank-gl1-sstor01 (default)   csi.hpe.com                    Delete          Immediate              true                   56d
hpe-hdd-storage                      kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  56d
hpe-nvme-storage                     kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  56d
hpe-ssd-storage                      kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  56d

$ kubectl  get volumesnapshotclasses
NAME                                 DRIVER        DELETIONPOLICY   AGE
gl-sbp-frank-gl1-sstor01             csi.hpe.com   Delete           56d
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;a href=&quot;https://www.kasten.io/kubernetes/resources/blog/kubernetes-backup-with-hpe-csi-and-kasten-k10&quot;&gt;The joint partnership between HPE and Veeam&lt;/a&gt; supports HPE CSI driver for K8s and Kasten K10 as a data management solution for K8s backup and recovery. The following sections will show you how to install Kasten K10 on the cluster and how to use it with the HPE CSI driver for K8s to back up and restore the persistent volumes of the stateful applications running in the cluster using volume snapshots.&lt;/p&gt;
&lt;h3&gt;Prerequisites&lt;/h3&gt;
&lt;p&gt;Before starting, make sure you have the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A K8s cluster provisioned in HPE GreenLake for Private Cloud Enterprise&lt;/li&gt;
&lt;li&gt;The kubectl CLI tool, together with the kubeconfig file for accessing the K8s cluster&lt;/li&gt;
&lt;li&gt;The &lt;a href=&quot;https://helm.sh/docs/intro/install/&quot;&gt;Helm&lt;/a&gt; CLI tool, version 3.12.1 or later&lt;/li&gt;
&lt;li&gt;The optional mysql CLI tool, for accessing the deployed sample MySQL database service&lt;/li&gt;
&lt;/ul&gt;
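&lt;p&gt;Before proceeding, it is worth a quick sanity check that the tools listed above can reach the cluster. A small sketch, assuming the kubeconfig file for the cluster has been downloaded to a hypothetical local path:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Sketch only: point kubectl at the downloaded kubeconfig (path is hypothetical) and confirm access
$ export KUBECONFIG=~/kubeconfigs/my-k8s-cluster.yaml
$ kubectl get nodes
$ helm version --short
&lt;/code&gt;&lt;/pre&gt;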
&lt;h3&gt;Install Kasten K10&lt;/h3&gt;
&lt;p&gt;Kasten K10 can be deployed on K8s like any other application and it runs in its own namespace.&lt;/p&gt;
&lt;p&gt;Following the &lt;a href=&quot;https://docs.kasten.io/latest/index.html&quot;&gt;Kasten K10 installation page&lt;/a&gt;, Kasten K10 can be installed on the cluster using the following Helm commands:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ helm repo add kasten https://charts.kasten.io/
$ helm repo update

$ helm install k10 kasten/k10 --namespace=kasten-io --create-namespace
NAME: k10
LAST DEPLOYED: Thu Jan 18 22:34:17 2024
NAMESPACE: kasten-io
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing Kasten&apos;s K10 Data Management Platform 6.5.2!

Documentation can be found at https://docs.kasten.io/.

How to access the K10 Dashboard:

To establish a connection to it use the following `kubectl` command:

`kubectl --namespace kasten-io port-forward service/gateway 8080:8000`

The Kasten dashboard will be available at: `http://127.0.0.1:8080/k10/#/`
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With the above commands, Kasten K10 is installed into the namespace &lt;em&gt;&apos;kasten-io&apos;&lt;/em&gt; in the cluster. Helm installs a number of Pods into this namespace, and it takes a while before all of them are running. To validate the installation, type the following command to watch the status of all Pods:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl  get pods -n kasten-io -w
NAME                                    READY   STATUS    RESTARTS   AGE
aggregatedapis-svc-6fc8fcf7bd-cdw8p     1/1     Running   0          15m
auth-svc-6fcb76d7df-pt748               1/1     Running   0          15m
catalog-svc-7c6f8b76fb-bsdqn            2/2     Running   0          15m
controllermanager-svc-5fffc97d7-b5whv   1/1     Running   0          15m
crypto-svc-8568584f9f-br8kn             4/4     Running   0          15m
dashboardbff-svc-b58b6d8cd-gnt5n        2/2     Running   0          15m
executor-svc-cb5fd4698-7zqjg            1/1     Running   0          15m
executor-svc-cb5fd4698-n27d5            1/1     Running   0          15m
executor-svc-cb5fd4698-rvj4v            1/1     Running   0          15m
frontend-svc-6c5677595b-9tsmj           1/1     Running   0          15m
gateway-54d778c955-n9wt5                1/1     Running   0          15m
jobs-svc-668b76cb86-q27nk               1/1     Running   0          15m
k10-grafana-889ff545b-g7px7             1/1     Running   0          15m
kanister-svc-76cdb967bd-hkhql           1/1     Running   0          15m
logging-svc-79599589f6-hdsp5            1/1     Running   0          15m
metering-svc-55f84f7766-rsm5f           1/1     Running   0          15m
prometheus-server-689ccf5f57-j9hpz      2/2     Running   0          15m
state-svc-b4b996d9b-jnbrl               3/3     Running   0          15m
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After all the Pods are in the Running state, edit the service &lt;em&gt;gateway&lt;/em&gt; to change its service type from &lt;em&gt;ClusterIP&lt;/em&gt; to &lt;em&gt;NodePort&lt;/em&gt;. This will generate a service port and expose the service via the configured gateway host name plus the generated port.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl edit svc gateway -n kasten-io
…
spec:
  selector:
    service: gateway
  sessionAffinity: None
  type: NodePort
…
service/gateway edited
&lt;/code&gt;&lt;/pre&gt;
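&lt;p&gt;If you prefer a non-interactive change over &lt;em&gt;kubectl edit&lt;/em&gt;, a one-line patch achieves the same result (a sketch; same service and namespace as above):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Sketch only: switch the gateway service type to NodePort without opening an editor
$ kubectl patch svc gateway -n kasten-io -p &apos;{&quot;spec&quot;: {&quot;type&quot;: &quot;NodePort&quot;}}&apos;
&lt;/code&gt;&lt;/pre&gt;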
&lt;p&gt;Type the following command to get the &lt;em&gt;gateway&lt;/em&gt; service endpoint:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get svc gateway -n kasten-io -o jsonpath={.metadata.annotations.hpecp-internal-gateway/8000}
gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10021
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The Kasten K10 service dashboard can now be accessed by pointing your browser to the URL &apos;&lt;em&gt;&lt;a href=&quot;http://gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10021/k10/#/&quot;&gt;http://gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10021/k10/#/&lt;/a&gt;&apos;&lt;/em&gt; :&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/k10-login.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Click &lt;em&gt;Accept Terms&lt;/em&gt; after specifying your email and company name. This will land you on the Kasten K10 dashboard:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/k10-dashboard.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Kasten K10 automatically discovers all the applications and their data across namespaces in the cluster. The K10 dashboard displays a list of applications that are mapped to namespaces. It also displays a summary of the cluster’s backup data footprint, showing &lt;em&gt;0.0 B&lt;/em&gt; when accessing the dashboard for the first time.&lt;/p&gt;
&lt;p&gt;To use Kasten K10 with the HPE CSI driver for K8s, you need to ensure the configured &lt;em&gt;VolumeSnapshotClass&lt;/em&gt; in the cluster contains the K10 annotation &lt;em&gt;&lt;strong&gt;k10.kasten.io/is-snapshot-class: &quot;true&quot;&lt;/strong&gt;&lt;/em&gt;. Type the following command to add this required K10 annotation to the &lt;em&gt;VolumeSnapshotClass&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get volumesnapshotclasses
NAME                                 DRIVER        DELETIONPOLICY   AGE
gl-sbp-frank-gl1-sstor01             csi.hpe.com   Delete           69d

$ kubectl annotate volumesnapshotclasses gl-sbp-frank-gl1-sstor01  k10.kasten.io/is-snapshot-class=true
volumesnapshotclasses.snapshot.storage.k8s.io/gl-sbp-frank-gl1-sstor01 annotated
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Whenever Kasten K10 detects volumes that were provisioned via the CSI driver deployed in the cluster, it will look for a &lt;em&gt;VolumeSnapshotClass&lt;/em&gt; with this K10 annotation for the identified CSI driver and use it to create snapshots.&lt;/p&gt;
&lt;p&gt;Type the following command to verify the &lt;em&gt;VolumeSnapshotClass&lt;/em&gt; has the required K10 annotation added:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get volumesnapshotclass gl-sbp-frank-gl1-sstor01 -o yaml -o jsonpath=&apos;{.metadata.annotations}&apos; | jq . | grep kasten
  &quot;k10.kasten.io/is-snapshot-class&quot;: &quot;true&quot;,
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Deploy MySQL database&lt;/h3&gt;
&lt;p&gt;In order to demonstrate the backup and restore process, a MySQL database instance from &lt;a href=&quot;https://github.com/GuopingJia/mysql-app&quot;&gt;my GitHub repo&lt;/a&gt; will be deployed to the cluster as a sample stateful application.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;1. Install MySQL database&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;MySQL database requires a persistent volume to store data. Here is the &lt;em&gt;PersistentVolumeClaim&lt;/em&gt; (PVC) YAML manifest file &lt;em&gt;mysql-pvc.yaml&lt;/em&gt; in the repo&apos;s &lt;em&gt;&apos;base&apos;&lt;/em&gt; folder:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ cat mysql-app/base/mysql-pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
  namespace: mysql
  labels:
    app: mysql
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This PVC file, together with other YAML manifest files in the folder &lt;em&gt;&apos;base&apos;&lt;/em&gt;, will be used to install the MySQL database instance using &lt;a href=&quot;https://kustomize.io/&quot;&gt;Kustomize&lt;/a&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ tree mysql-app/base
mysql-app/base
├── kustomization.yaml
├── mysql-deployment.yaml
└── mysql-pvc.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The file &lt;em&gt;kustomization.yaml&lt;/em&gt; lists all YAML files in its resources section, together with the secret generator for the MySQL password:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ cat mysql-app/base/kustomization.yaml 
secretGenerator:
- name: mysql-pass
  namespace: wordpress
  literals:
  - password=CfeDemo@123
resources:
  - mysql-deployment.yaml
  - mysql-pvc.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Type the following command to install the MySQL database into the namespace &lt;em&gt;&apos;mysql&apos;&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl apply -k mysql-app/base
namespace/mysql created
secret/mysql-pass-m62cbhd9kf created
service/mysql created
persistentvolumeclaim/mysql-pvc created
deployment.apps/mysql created
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Type the following command to check the MySQL database deployment state. The MySQL Pod should be in &lt;em&gt;Running&lt;/em&gt; status:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get all -n mysql
NAME                         READY   STATUS    RESTARTS   AGE
pod/mysql-6974b58d48-wb8g5   1/1     Running   0          14s

NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
service/mysql   ClusterIP   None         &amp;#x3C;none&gt;        3306/TCP   24s

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mysql   1/1     1            1           23s

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/mysql-6974b58d48   1         1         1       24s
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can check that the &lt;em&gt;PersistentVolume&lt;/em&gt; (PV) and the PVC are provisioned as part of the MySQL database deployment:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get persistentvolumes 
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                                                                                 STORAGECLASS               REASON   AGE

pvc-3e55e9b3-097f-4ddf-bdcb-60825a7905ec   1Gi        RWO            Delete           Bound    mysql/mysql-pvc                                                                                                                           

$ kubectl get persistentvolumeclaims -n mysql
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS               AGE
mysql-pvc   Bound    pvc-3e55e9b3-097f-4ddf-bdcb-60825a7905ec   1Gi        RWO            gl-sbp-frank-gl1-sstor01   9m50s
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;2. Access MySQL database&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In order to access the MySQL database service using the mysql CLI, first set up port forwarding for &lt;em&gt;service/mysql&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl port-forward service/mysql -n mysql 42281:3306
Forwarding from 127.0.0.1:42281 -&gt; 3306
Forwarding from [::1]:42281 -&gt; 3306
Handling connection for 42281
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The deployed MySQL database service can be accessed by typing the following mysql command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ mysql -h 127.0.0.1 -uroot -pCfeDemo@123 -P 42281
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.6.51 MySQL Community Server (GPL)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type &apos;help;&apos; or &apos;\h&apos; for help. Type &apos;\c&apos; to clear the current input statement.

MySQL [(none)]&gt; show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
+--------------------+
3 rows in set (0,282 sec)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;3. Populate MySQL database&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The MySQL application repo has a &lt;em&gt;&apos;test&apos;&lt;/em&gt; folder that contains a list of scripts for populating data records and testing the contents:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ tree mysql-app/test
mysql-app/test
├── employees.sql
├── load_departments.dump
├── load_dept_emp.dump
├── load_dept_manager.dump
├── load_employees.dump
├── load_salaries1.dump
├── load_salaries2.dump
├── load_salaries3.dump
├── load_titles.dump
├── show_elapsed.sql
├── test_employees_md5.sql
└── test_employees_sha.sql
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Type the following command to load the sample &lt;em&gt;employees&lt;/em&gt; data into the MySQL database:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ cd mysql-app/test
$ mysql -h 127.0.0.1 -uroot -pCfeDemo@123 -P 42281 &amp;#x3C; employees.sql
INFO
CREATING DATABASE STRUCTURE
INFO
storage engine: InnoDB
INFO
LOADING departments
INFO
LOADING employees
INFO
LOADING dept_emp
INFO
LOADING dept_manager
INFO
LOADING titles
INFO
LOADING salaries
data_load_time_diff
NULL
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The newly added &lt;em&gt;employees&lt;/em&gt; sample data can be checked and verified by running the commands shown below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ mysql -h 127.0.0.1 -uroot -pCfeDemo@123 -P 42281
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.6.51 MySQL Community Server (GPL)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type &apos;help;&apos; or &apos;\h&apos; for help. Type &apos;\c&apos; to clear the current input statement.

MySQL [(none)]&gt; show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| employees          |
| mysql              |
| performance_schema |
+--------------------+
4 rows in set (0,237 sec)

$ mysql -h 127.0.0.1 -uroot -pCfeDemo@123 -P 42281 -t &amp;#x3C; test_employees_sha.sql
+----------------------+
| INFO                 |
+----------------------+
| TESTING INSTALLATION |
+----------------------+
+--------------+------------------+------------------------------------------+
| table_name   | expected_records | expected_crc                             |
+--------------+------------------+------------------------------------------+
| departments  |                9 | 4b315afa0e35ca6649df897b958345bcb3d2b764 |
| dept_emp     |           331603 | d95ab9fe07df0865f592574b3b33b9c741d9fd1b |
| dept_manager |               24 | 9687a7d6f93ca8847388a42a6d8d93982a841c6c |
| employees    |           300024 | 4d4aa689914d8fd41db7e45c2168e7dcb9697359 |
| salaries     |          2844047 | b5a1785c27d75e33a4173aaa22ccf41ebd7d4a9f |
| titles       |           443308 | d12d5f746b88f07e69b9e36675b6067abb01b60e |
+--------------+------------------+------------------------------------------+
+--------------+------------------+------------------------------------------+
| table_name   | found_records    | found_crc                                |
+--------------+------------------+------------------------------------------+
| departments  |                9 | 4b315afa0e35ca6649df897b958345bcb3d2b764 |
| dept_emp     |           331603 | d95ab9fe07df0865f592574b3b33b9c741d9fd1b |
| dept_manager |               24 | 9687a7d6f93ca8847388a42a6d8d93982a841c6c |
| employees    |           300024 | 4d4aa689914d8fd41db7e45c2168e7dcb9697359 |
| salaries     |          2844047 | b5a1785c27d75e33a4173aaa22ccf41ebd7d4a9f |
| titles       |           443308 | d12d5f746b88f07e69b9e36675b6067abb01b60e |
+--------------+------------------+------------------------------------------+
+--------------+---------------+-----------+
| table_name   | records_match | crc_match |
+--------------+---------------+-----------+
| departments  | OK            | ok        |
| dept_emp     | OK            | ok        |
| dept_manager | OK            | ok        |
| employees    | OK            | ok        |
| salaries     | OK            | ok        |
| titles       | OK            | ok        |
+--------------+---------------+-----------+
+------------------+
| computation_time |
+------------------+
| 00:00:27         |
+------------------+
+---------+--------+
| summary | result |
+---------+--------+
| CRC     | OK     |
| count   | OK     |
+---------+--------+
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Back up MySQL database&lt;/h3&gt;
&lt;p&gt;In order to back up the MySQL database, go to the Kasten K10 dashboard and click &lt;em&gt;Applications&lt;/em&gt;. Find the deployed MySQL database &lt;em&gt;&apos;mysql&apos;&lt;/em&gt; in the application list and expand its menu. Then click the &lt;em&gt;Snapshot&lt;/em&gt; button.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/k10-backup-button.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Using all the default options from &lt;strong&gt;Snapshot &lt;em&gt;mysql&lt;/em&gt;&lt;/strong&gt;, click the &lt;em&gt;Snapshot Application&lt;/em&gt; button:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/k10-backup.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The snapshot of the MySQL database will be started. This takes a few seconds. When you go back to the K10 dashboard, you should see the completed &lt;em&gt;Backup&lt;/em&gt; entry under &lt;strong&gt;Actions&lt;/strong&gt; with the protected object listed as &lt;em&gt;mysql&lt;/em&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/k10-dashboard-backup.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can also check the &lt;strong&gt;Data Usage&lt;/strong&gt; page to see the data used by database backups:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/k10-data-backup.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In the cluster, after the snapshot of the MySQL database completes, you can check that there is a &lt;em&gt;VolumeSnapshot&lt;/em&gt; &lt;em&gt;&apos;k10-csi-snap-ltxzrwxgp6r5pwkp&apos;&lt;/em&gt; created from the source PVC &lt;em&gt;&apos;mysql-pvc&apos;&lt;/em&gt; in the namespace &lt;em&gt;mysql&lt;/em&gt;, together with a &lt;em&gt;VolumeSnapshotContent&lt;/em&gt; object created at the cluster level. The &lt;em&gt;READYTOUSE&lt;/em&gt; field of the &lt;em&gt;VolumeSnapshot&lt;/em&gt; should show as &lt;em&gt;true&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get volumesnapshot -n mysql
NAME                            READYTOUSE   SOURCEPVC   SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS              SNAPSHOTCONTENT                                    CREATIONTIME   AGE
k10-csi-snap-ltxzrwxgp6r5pwkp   true         mysql-pvc                           1Gi           gl-sbp-frank-gl1-sstor01   snapcontent-f3890356-d47f-4b36-a7e4-eb4c5792ec59   6d12h          6d12h

 $ kubectl get volumesnapshotcontents
NAME                                               READYTOUSE   RESTORESIZE   DELETIONPOLICY   DRIVER        VOLUMESNAPSHOTCLASS        VOLUMESNAPSHOT                  VOLUMESNAPSHOTNAMESPACE   AGE
snapcontent-f3890356-d47f-4b36-a7e4-eb4c5792ec59   true         1073741824    Delete           csi.hpe.com   gl-sbp-frank-gl1-sstor01   k10-csi-snap-ltxzrwxgp6r5pwkp   mysql                     6d12h
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This volume snapshot can be used to restore the MySQL database.&lt;/p&gt;
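&lt;p&gt;If you want a closer look at the snapshot from the command line, you can also describe both objects (a quick optional check; output omitted here for brevity):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Inspect the VolumeSnapshot created by Kasten K10 (name taken from the listing above)
$ kubectl describe volumesnapshot k10-csi-snap-ltxzrwxgp6r5pwkp -n mysql

# Inspect the cluster-scoped VolumeSnapshotContent bound to it
$ kubectl describe volumesnapshotcontent snapcontent-f3890356-d47f-4b36-a7e4-eb4c5792ec59
&lt;/code&gt;&lt;/pre&gt;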
&lt;h3&gt;Restore MySQL database&lt;/h3&gt;
&lt;p&gt;Before showing the database restore, I will first delete a table from the MySQL database to simulate a loss of data. Then, I will perform the database recovery using Kasten K10.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;1. Delete table&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Delete data from the table &apos;&lt;em&gt;departments&apos;&lt;/em&gt; by typing the following commands:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ mysql -h 127.0.0.1 -uroot -pCfeDemo@123 -P 42281 -Demployees
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 15
Server version: 5.6.51 MySQL Community Server (GPL)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type &apos;help;&apos; or &apos;\h&apos; for help. Type &apos;\c&apos; to clear the current input statement.

MySQL [employees]&gt; show tables;
+----------------------+
| Tables_in_employees  |
+----------------------+
| current_dept_emp     |
| departments          |
| dept_emp             |
| dept_emp_latest_date |
| dept_manager         |
| employees            |
| salaries             |
| titles               |
+----------------------+
8 rows in set (0,237 sec)

MySQL [employees]&gt; delete from departments;
Query OK, 9 rows affected (1,523 sec)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you rerun the testing script &lt;em&gt;test_employees_sha.sql&lt;/em&gt;, it will report failures for &lt;em&gt;CRC&lt;/em&gt; and &lt;em&gt;count&lt;/em&gt;, which indicate the loss of data in the MySQL database:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ mysql -h 127.0.0.1 -uroot -pCfeDemo@123 -P 42281 -t &amp;#x3C;test_employees_sha.sql
+----------------------+
| INFO                 |
+----------------------+
| TESTING INSTALLATION |
+----------------------+
+--------------+------------------+------------------------------------------+
| table_name   | expected_records | expected_crc                             |
+--------------+------------------+------------------------------------------+
| departments  |                9 | 4b315afa0e35ca6649df897b958345bcb3d2b764 |
| dept_emp     |           331603 | d95ab9fe07df0865f592574b3b33b9c741d9fd1b |
| dept_manager |               24 | 9687a7d6f93ca8847388a42a6d8d93982a841c6c |
| employees    |           300024 | 4d4aa689914d8fd41db7e45c2168e7dcb9697359 |
| salaries     |          2844047 | b5a1785c27d75e33a4173aaa22ccf41ebd7d4a9f |
| titles       |           443308 | d12d5f746b88f07e69b9e36675b6067abb01b60e |
+--------------+------------------+------------------------------------------+
+--------------+------------------+------------------------------------------+
| table_name   | found_records    | found_crc                                |
+--------------+------------------+------------------------------------------+
| departments  |                0 |                                          |
| dept_emp     |                0 |                                          |
| dept_manager |                0 |                                          |
| employees    |           300024 | 4d4aa689914d8fd41db7e45c2168e7dcb9697359 |
| salaries     |          2844047 | b5a1785c27d75e33a4173aaa22ccf41ebd7d4a9f |
| titles       |           443308 | d12d5f746b88f07e69b9e36675b6067abb01b60e |
+--------------+------------------+------------------------------------------+
+--------------+---------------+-----------+
| table_name   | records_match | crc_match |
+--------------+---------------+-----------+
| departments  | not ok        | not ok    |
| dept_emp     | not ok        | not ok    |
| dept_manager | not ok        | not ok    |
| employees    | OK            | ok        |
| salaries     | OK            | ok        |
| titles       | OK            | ok        |
+--------------+---------------+-----------+
+------------------+
| computation_time |
+------------------+
| 00:00:24         |
+------------------+
+---------+--------+
| summary | result |
+---------+--------+
| CRC     | FAIL   |
| count   | FAIL   |
+---------+--------+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;2. Perform MySQL database restore&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In order to restore the MySQL database, go to the Kasten K10 dashboard, locate the MySQL database &lt;em&gt;&apos;mysql&apos;&lt;/em&gt; in the application list, expand the menu of &lt;em&gt;mysql&lt;/em&gt;, and then click the &lt;em&gt;Restore&lt;/em&gt; button:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/k10-restore-button.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Select a restore point from the list and click it. The &lt;strong&gt;Restore Point&lt;/strong&gt; page will appear:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/k10-restore-point.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Use all the default options from &lt;strong&gt;Restore Point&lt;/strong&gt; and click the &lt;em&gt;Restore&lt;/em&gt; button:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/k10-restore.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The restore of the MySQL database will be started from the selected restore point. It will take a few seconds. Go back to the Kasten K10 dashboard. You should see the completed &lt;em&gt;Restore&lt;/em&gt; entry under &lt;strong&gt;Actions&lt;/strong&gt; with the target namespace shown as &lt;em&gt;mysql&lt;/em&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/k10-dashboard-restore.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;3. Verify MySQL database&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Connect to the MySQL database service and re-run the testing script &lt;em&gt;test_employees_sha.sql&lt;/em&gt;. The testing script should now report that everything is &lt;em&gt;OK&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ mysql -h 127.0.0.1 -uroot -pCfeDemo@123 -P 42281 -t &amp;#x3C; test_employees_sha.sql
+----------------------+
| INFO                 |
+----------------------+
| TESTING INSTALLATION |
+----------------------+
+--------------+------------------+------------------------------------------+
| table_name   | expected_records | expected_crc                             |
+--------------+------------------+------------------------------------------+
| departments  |                9 | 4b315afa0e35ca6649df897b958345bcb3d2b764 |
| dept_emp     |           331603 | d95ab9fe07df0865f592574b3b33b9c741d9fd1b |
| dept_manager |               24 | 9687a7d6f93ca8847388a42a6d8d93982a841c6c |
| employees    |           300024 | 4d4aa689914d8fd41db7e45c2168e7dcb9697359 |
| salaries     |          2844047 | b5a1785c27d75e33a4173aaa22ccf41ebd7d4a9f |
| titles       |           443308 | d12d5f746b88f07e69b9e36675b6067abb01b60e |
+--------------+------------------+------------------------------------------+
+--------------+------------------+------------------------------------------+
| table_name   | found_records    | found_crc                                |
+--------------+------------------+------------------------------------------+
| departments  |                9 | 4b315afa0e35ca6649df897b958345bcb3d2b764 |
| dept_emp     |           331603 | d95ab9fe07df0865f592574b3b33b9c741d9fd1b |
| dept_manager |               24 | 9687a7d6f93ca8847388a42a6d8d93982a841c6c |
| employees    |           300024 | 4d4aa689914d8fd41db7e45c2168e7dcb9697359 |
| salaries     |          2844047 | b5a1785c27d75e33a4173aaa22ccf41ebd7d4a9f |
| titles       |           443308 | d12d5f746b88f07e69b9e36675b6067abb01b60e |
+--------------+------------------+------------------------------------------+
+--------------+---------------+-----------+
| table_name   | records_match | crc_match |
+--------------+---------------+-----------+
| departments  | OK            | ok        |
| dept_emp     | OK            | ok        |
| dept_manager | OK            | ok        |
| employees    | OK            | ok        |
| salaries     | OK            | ok        |
| titles       | OK            | ok        |
+--------------+---------------+-----------+
+------------------+
| computation_time |
+------------------+
| 00:00:31         |
+------------------+
+---------+--------+
| summary | result |
+---------+--------+
| CRC     | OK     |
| count   | OK     |
+---------+--------+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This indicates that the MySQL database has been recovered from its backup and its data is back!&lt;/p&gt;
&lt;h3&gt;Summary&lt;/h3&gt;
&lt;p&gt;In this blog post, I explored the functionalities of Kasten K10 and the HPE CSI driver for K8s. Using the volume snapshot capability in the HPE CSI driver for K8s, I demonstrated how to use Kasten K10 to back up the persistent volume of a sample MySQL database deployed in a cluster in HPE GreenLake for Private Cloud Enterprise. I then illustrated how to restore the database from the backup. Kasten K10, with its user-friendly and intuitive interface, simplifies the backup and recovery of stateful applications running in the cluster. It enhances the efficiency and reliability of data management in a K8s cluster.&lt;/p&gt;
&lt;p&gt;Please keep coming back to the &lt;a href=&quot;https://developer.hpe.com/blog/&quot;&gt;HPE Developer Community blog&lt;/a&gt; to learn more about HPE GreenLake for Private Cloud Enterprise.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Adding a monitoring stack to a Kubernetes cluster using Prometheus and Grafana in HPE GreenLake for Private Cloud Enterprise]]></title><description><![CDATA[Introduction HPE GreenLake for Private Cloud Enterprise: Containers ("containers service"), one of the HPE GreenLake cloud services…]]></description><link>https://developer.hpe.com/kubernetes-monitoring-using-prometheus-and-grafana-in-hpe-greenlake-for-private-cloud-enterprise/</link><guid isPermaLink="false">https://developer.hpe.com/kubernetes-monitoring-using-prometheus-and-grafana-in-hpe-greenlake-for-private-cloud-enterprise/</guid><pubDate>Thu, 25 Jan 2024 14:13:04 GMT</pubDate><content:encoded>&lt;h3&gt;Introduction&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/greenlake/containers.html&quot;&gt;HPE GreenLake for Private Cloud Enterprise: Containers&lt;/a&gt; (&quot;containers service&quot;), one of the HPE GreenLake cloud services available on the HPE GreenLake for Private Cloud Enterprise, allows customers to create a Kubernetes (K8s) cluster, view details about existing clusters, and deploy containerized applications to the cluster. It provides an enterprise-grade container management service using open source K8s.&lt;/p&gt;
&lt;p&gt;In this blog post, I will discuss K8s monitoring and show you how to add a monitoring stack using Prometheus and Grafana to a K8s cluster in HPE GreenLake for Private Cloud Enterprise. By setting up Prometheus as the data source and importing different dashboard templates into Grafana, various aspects of K8s, including metrics, performance, and health, can be monitored in the K8s cluster.&lt;/p&gt;
&lt;h3&gt;Why monitor K8s&lt;/h3&gt;
&lt;p&gt;Though K8s dramatically simplifies application deployment in containers and across clouds, it adds a new set of complexities for managing, securing and troubleshooting applications. Container-based applications are dynamic and designed using microservices, where the number of components increases by an order of magnitude. K8s security is self-configured, typically being specified in code through K8s yaml manifests, Helm charts, or templating tools. Properly configuring for workloads, clusters, networks, and infrastructure is crucial for averting issues and limiting the impact if a breach occurs. Dynamic provisioning via Infrastructure as Code (IaC), automated configuration management and orchestration also add to monitoring and troubleshooting complexity. As such, K8s monitoring is critical to managing application performance, service uptime, and enabling troubleshooting. Having a good monitoring tool is essential.&lt;/p&gt;
&lt;h3&gt;Prerequisites&lt;/h3&gt;
&lt;p&gt;Before starting, make sure your setup has the following:&lt;/p&gt;
&lt;style&gt; li { font-size: 100%; line-height: 23px; max-width: none; } &lt;/style&gt;
&lt;ul&gt;
&lt;li&gt;A K8s cluster, being provisioned in HPE GreenLake for Private Cloud Enterprise&lt;/li&gt;
&lt;li&gt;Terraform, installed using &lt;a href=&quot;https://learn.hashicorp.com/tutorials/terraform/install-cli&quot;&gt;these steps&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;The kubectl CLI tool, together with the kubeconfig file for accessing the K8s cluster&lt;/li&gt;
&lt;li&gt;The Helm CLI tool, version 3.12.0 or later&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Prometheus and Grafana&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://prometheus.io/docs/introduction/overview/&quot;&gt;Prometheus&lt;/a&gt; is a robust open-source monitoring and alerting tool used to collect, store, query, and provide alerts on time-series data. It employs a pull-based model to gather metrics from instrumented targets and features a powerful query language (PromQL) for data analysis. It enables developers to monitor various aspects of their systems, including metrics, performance, and health.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://grafana.com/&quot;&gt;Grafana&lt;/a&gt; is a powerful data visualization and monitoring tool. It serves as the interface for developers to visualize and analyze the data collected by Prometheus. With its rich set of visualization options and customizable dashboards, Grafana empowers developers to gain real-time insights into their systems’ performance, identify trends, and detect anomalies. By leveraging Grafana’s capabilities, developers can create comprehensive visual representations of their systems’ metrics, facilitating informed decision-making and proactive system management.&lt;/p&gt;
&lt;p&gt;In the following sections, I will show you how to add a monitoring stack using Prometheus and Grafana to a K8s cluster deployed in HPE GreenLake for Private Cloud Enterprise.&lt;/p&gt;
&lt;h3&gt;Deploy Prometheus and Grafana using Terraform&lt;/h3&gt;
&lt;p&gt;Prometheus and Grafana will be deployed to the K8s cluster using the &lt;a href=&quot;https://registry.terraform.io/providers/HPE/hpegl/latest&quot;&gt;HPE GreenLake Terraform provider &lt;em&gt;hpegl&lt;/em&gt;&lt;/a&gt;, together with the &lt;a href=&quot;https://registry.terraform.io/providers/hashicorp/helm/latest&quot;&gt;Helm provider from Hashicorp&lt;/a&gt;.&lt;/p&gt;
&lt;h4&gt;Create a Terraform config&lt;/h4&gt;
&lt;p&gt;Here is the Terraform config file. You can refer to &lt;a href=&quot;https://developer.hpe.com/blog/infrastructure-as-code-on-hpe-greenlake-using-terraform/&quot;&gt;Infrastructure-as-code on HPE GreenLake using Terraform&lt;/a&gt; for details about the HPE GreenLake Terraform provider and its usage.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ cat main.tf 
terraform {
  required_providers {
    hpegl = {
      source  = &quot;hpe/hpegl&quot;
      version = &quot;&gt;= 0.2.2&quot;
    }
  }
}

provider &quot;hpegl&quot; {
  caas {
    api_url = var.api_url
  }
}

data &quot;hpegl_caas_cluster&quot; &quot;iaccluster&quot; {
  name     = var.cluster_name
  space_id = var.space_id
}

provider &quot;helm&quot; {
  kubernetes {
       host     = yamldecode(base64decode(data.hpegl_caas_cluster.iaccluster.kubeconfig)).clusters[0].cluster.server
       token    = yamldecode(base64decode(data.hpegl_caas_cluster.iaccluster.kubeconfig)).users[0].user.token
       insecure = true
  }
}

resource &quot;helm_release&quot; &quot;prometheus-stack&quot; {
   name = &quot;prometheus-stack&quot;
   repository = &quot;https://prometheus-community.github.io/helm-charts&quot;
   chart = &quot;prometheus&quot;
   version = &quot;23.0.0&quot;
   namespace = &quot;monitoring&quot;
   create_namespace = true

   set {
    name  = &quot;server.service.type&quot;
    value = &quot;NodePort&quot;
  }

   set {
    name  = &quot;prometheus-node-exporter.hostRootFsMount.enabled&quot;
    value = &quot;false&quot;
  }

   set {
    name  = &quot;prometheus-node-exporter.hostNetwork&quot;
    value = &quot;false&quot;
  }

   set {
    name  = &quot;prometheus-node-exporter.hostPID&quot;
    value = &quot;false&quot;
  }
}

resource &quot;helm_release&quot; &quot;grafana-dashboard&quot; {
   name = &quot;grafana-dashboard&quot;
   repository = &quot;https://grafana.github.io/helm-charts&quot;
   chart = &quot;grafana&quot;
   version = &quot;6.57.4&quot;
   namespace = &quot;monitoring&quot;
   create_namespace = true

   set {
    name  = &quot;service.type&quot;
    value = &quot;NodePort&quot;
  }

   set {
    name  = &quot;persistence.enabled&quot;
    value = &quot;true&quot;
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;There are a few things worth noting in the above Terraform configuration file:&lt;/p&gt;
&lt;style&gt; li { font-size: 100%; line-height: 23px; max-width: none; } &lt;/style&gt;
&lt;ul&gt;
&lt;li&gt;In Grafana, the persistence by default is disabled. In case Grafana Pod gets terminated for some reason, you will lose all your data. In production deployments, such as HPE GreenLake for Private Cloud Enterprise: Containers, this needs to be enabled, by setting &lt;em&gt;persistence.enabled&lt;/em&gt; as &lt;em&gt;true&lt;/em&gt;, to prevent any data loss.&lt;/li&gt;
&lt;li&gt;In Prometheus, the &lt;em&gt;DaemonSet&lt;/em&gt; deployment of the node exporter tries to mount the &lt;em&gt;hostPath&lt;/em&gt; volume to the container root “/”, which violates the deployed OPA (Open Policy Agent) policy to the K8s cluster for filesystem (FS) mount protections. As such, the DaemonSet deployment will never be ready and keeps showing the warning events as &lt;em&gt;Warning  FailedCreate daemonset-controller  Error creating: admission webhook &quot;soft-validate.hpecp.hpe.com&quot; denied the request: Hostpath (&quot;/&quot;) referenced in volume is not valid for this namespace because of FS Mount protections.&lt;/em&gt;. You need to disable the &lt;em&gt;hostRootFsMount&lt;/em&gt;, together with &lt;em&gt;hostNetwork&lt;/em&gt; and &lt;em&gt;hostPID&lt;/em&gt;, to comply with the security policy in the cluster.&lt;/li&gt;
&lt;li&gt;Both Prometheus and Grafana services are deployed as &lt;em&gt;NodePort&lt;/em&gt; service types. Those services will be mapped to the gateway host with automatically generated ports for easy access and service configuration.&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;Initialize working directory&lt;/h4&gt;
&lt;p&gt;With the above &lt;em&gt;main.tf&lt;/em&gt; config file, the working directory can be initialized by running the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;    $ terraform init
    
    Initializing the backend...
    
    Initializing provider plugins...
    - Finding hpe/hpegl versions matching &quot;&gt;= 0.2.2&quot;...
    - Finding latest version of hashicorp/helm...
    - Installing hpe/hpegl v0.3.17...
    - Installed hpe/hpegl v0.3.17 (signed by a HashiCorp partner, key ID D1F277A1AC66CE3D)
    - Installing hashicorp/helm v2.10.1...
    - Installed hashicorp/helm v2.10.1 (signed by HashiCorp)
    
    Partner and community providers are signed by their developers.
    If you&apos;d like to know more about provider signing, you can read about it here:
    https://www.terraform.io/docs/cli/plugins/signing.html
    
    Terraform has created a lock file .terraform.lock.hcl to record the provider
    selections it made above. Include this file in your version control repository
    so that Terraform can guarantee to make the same selections by default when
    you run &quot;terraform init&quot; in the future.
    
    Terraform has been successfully initialized!
    
    You may now begin working with Terraform. Try running &quot;terraform plan&quot; to see
    any changes that are required for your infrastructure. All Terraform commands
    should now work.
    
    If you ever set or change modules or backend configuration for Terraform,
    rerun this command to reinitialize your working directory. If you forget, other
    commands will detect it and remind you to do so if necessary.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;From the command output, you can see that the &lt;em&gt;helm&lt;/em&gt; provider is installed into the Terraform working directory in addition to the HPE GreenLake Terraform provider &lt;em&gt;hpegl&lt;/em&gt;.&lt;/p&gt;
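&lt;p&gt;You can confirm which providers the working directory now tracks with a couple of optional checks (a quick sketch; the versions recorded will match your lock file):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# List the providers required by the configuration and recorded in the lock file
$ terraform providers

# The provider plugin binaries themselves are stored under the working directory
$ ls .terraform/providers/registry.terraform.io
&lt;/code&gt;&lt;/pre&gt;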
&lt;h4&gt;Deploy Prometheus and Grafana&lt;/h4&gt;
&lt;p&gt;Type the following command to apply the Terraform configuration and deploy Prometheus and Grafana to the K8s cluster, responding &lt;em&gt;yes&lt;/em&gt; at the prompt to confirm the operation. You may want to first run &lt;em&gt;terraform plan&lt;/em&gt; as a dry run to preview the changes Terraform will make to your infrastructure based on the data you provide in your Terraform file.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;    $ terraform apply --var-file=variables.tfvars 
    
    Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
      + create
    
    Terraform will perform the following actions:
    
      # helm_release.grafana-dashboard will be created
      + resource &quot;helm_release&quot; &quot;grafana-dashboard&quot; {
          + atomic                     = false
          + chart                      = &quot;grafana&quot;
          + cleanup_on_fail            = false
          + create_namespace           = true
          + dependency_update          = false
          + disable_crd_hooks          = false
          + disable_openapi_validation = false
          + disable_webhooks           = false
          + force_update               = false
          + id                         = (known after apply)
          + lint                       = false
          + manifest                   = (known after apply)
          + max_history                = 0
          + metadata                   = (known after apply)
          + name                       = &quot;grafana-dashboard&quot;
          + namespace                  = &quot;monitoring&quot;
          + pass_credentials           = false
          + recreate_pods              = false
          + render_subchart_notes      = true
          + replace                    = false
          + repository                 = &quot;https://grafana.github.io/helm-charts&quot;
          + reset_values               = false
          + reuse_values               = false
          + skip_crds                  = false
          + status                     = &quot;deployed&quot;
          + timeout                    = 300
          + verify                     = false
          + version                    = &quot;6.57.4&quot;
          + wait                       = true
          + wait_for_jobs              = false
    
          + set {
              + name  = &quot;persistence.enabled&quot;
              + value = &quot;true&quot;
            }
          + set {
              + name  = &quot;service.type&quot;
              + value = &quot;NodePort&quot;
            }
        }
    
      # helm_release.prometheus-stack will be created
      + resource &quot;helm_release&quot; &quot;prometheus-stack&quot; {
          + atomic                     = false
          + chart                      = &quot;prometheus&quot;
          + cleanup_on_fail            = false
          + create_namespace           = true
          + dependency_update          = false
          + disable_crd_hooks          = false
          + disable_openapi_validation = false
          + disable_webhooks           = false
          + force_update               = false
          + id                         = (known after apply)
          + lint                       = false
          + manifest                   = (known after apply)
          + max_history                = 0
          + metadata                   = (known after apply)
          + name                       = &quot;prometheus-stack&quot;
          + namespace                  = &quot;monitoring&quot;
          + pass_credentials           = false
          + recreate_pods              = false
          + render_subchart_notes      = true
          + replace                    = false
          + repository                 = &quot;https://prometheus-community.github.io/helm-charts&quot;
          + reset_values               = false
          + reuse_values               = false
          + skip_crds                  = false
          + status                     = &quot;deployed&quot;
          + timeout                    = 300
          + verify                     = false
          + version                    = &quot;23.0.0&quot;
          + wait                       = true
          + wait_for_jobs              = false
    
          + set {
              + name  = &quot;prometheus-node-exporter.hostNetwork&quot;
              + value = &quot;false&quot;
            }
          + set {
              + name  = &quot;prometheus-node-exporter.hostPID&quot;
              + value = &quot;false&quot;
            }
          + set {
              + name  = &quot;prometheus-node-exporter.hostRootFsMount.enabled&quot;
              + value = &quot;false&quot;
            }
          + set {
              + name  = &quot;server.service.type&quot;
              + value = &quot;NodePort&quot;
            }
        }
    
    Plan: 2 to add, 0 to change, 0 to destroy.
    
    Do you want to perform these actions?
      Terraform will perform the actions described above.
      Only &apos;yes&apos; will be accepted to approve.
    
      Enter a value: yes
    
    helm_release.grafana-dashboard: Creating...
    helm_release.prometheus-stack: Creating...
    helm_release.grafana-dashboard: Still creating... [10s elapsed]
    helm_release.prometheus-stack: Still creating... [10s elapsed]
    helm_release.grafana-dashboard: Still creating... [20s elapsed]
    helm_release.prometheus-stack: Still creating... [20s elapsed]
    helm_release.grafana-dashboard: Still creating... [30s elapsed]
    helm_release.prometheus-stack: Still creating... [30s elapsed]
    helm_release.grafana-dashboard: Still creating... [40s elapsed]
    helm_release.grafana-dashboard: Creation complete after 43s [id=grafana-dashboard]
    helm_release.prometheus-stack: Still creating... [40s elapsed]
    helm_release.prometheus-stack: Still creating... [50s elapsed]
    helm_release.prometheus-stack: Still creating... [1m0s elapsed]
    helm_release.prometheus-stack: Still creating... [1m10s elapsed]
    helm_release.prometheus-stack: Creation complete after 1m18s [id=prometheus-stack]
    
    Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Check Prometheus and Grafana&lt;/h4&gt;
&lt;p&gt;After Terraform runs for a few minutes, both Prometheus and Grafana will be deployed in the K8s cluster to the namespace &lt;em&gt;monitoring&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Type the following command to check the deployed monitoring resources. All the Pods should be in the &lt;em&gt;Running&lt;/em&gt; state.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl get all -n monitoring 
NAME                                                           READY   STATUS    RESTARTS   AGE
pod/grafana-dashboard-5674bcd6d4-zh8zk                         1/1     Running   0          4d17h
pod/prometheus-stack-alertmanager-0                            1/1     Running   0          4d17h
pod/prometheus-stack-kube-state-metrics-6fb8684695-r7zp6       1/1     Running   0          4d17h
pod/prometheus-stack-prometheus-node-exporter-bcmt2            1/1     Running   0          4d17h
pod/prometheus-stack-prometheus-node-exporter-hgr2x            1/1     Running   0          4d17h
pod/prometheus-stack-prometheus-pushgateway-559cc996d5-jrjdn   1/1     Running   0          4d17h
pod/prometheus-stack-server-7646574d75-zswws                   2/2     Running   0          4d17h

NAME                                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/grafana-dashboard                           NodePort    10.97.7.23       &amp;#x3C;none&gt;        80:30488/TCP   4d17h
service/prometheus-stack-alertmanager               ClusterIP   10.101.171.170   &amp;#x3C;none&gt;        9093/TCP       4d17h
service/prometheus-stack-alertmanager-headless      ClusterIP   None             &amp;#x3C;none&gt;        9093/TCP       4d17h
service/prometheus-stack-kube-state-metrics         ClusterIP   10.98.134.240    &amp;#x3C;none&gt;        8080/TCP       4d17h
service/prometheus-stack-prometheus-node-exporter   ClusterIP   10.104.44.251    &amp;#x3C;none&gt;        9100/TCP       4d17h
service/prometheus-stack-prometheus-pushgateway     ClusterIP   10.104.146.18    &amp;#x3C;none&gt;        9091/TCP       4d17h
service/prometheus-stack-server                     NodePort    10.106.152.143   &amp;#x3C;none&gt;        80:30245/TCP   4d17h

NAME                                                       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/prometheus-stack-prometheus-node-exporter   2         2         2       2            2           kubernetes.io/os=linux   4d17h

NAME                                                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/grafana-dashboard                         1/1     1            1           4d17h
deployment.apps/prometheus-stack-kube-state-metrics       1/1     1            1           4d17h
deployment.apps/prometheus-stack-prometheus-pushgateway   1/1     1            1           4d17h
deployment.apps/prometheus-stack-server                   1/1     1            1           4d17h

NAME                                                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/grafana-dashboard-5674bcd6d4                         1         1         1       4d17h
replicaset.apps/prometheus-stack-kube-state-metrics-6fb8684695       1         1         1       4d17h
replicaset.apps/prometheus-stack-prometheus-pushgateway-559cc996d5   1         1         1       4d17h
replicaset.apps/prometheus-stack-server-7646574d75                   1         1         1       4d17h

NAME                                             READY   AGE
statefulset.apps/prometheus-stack-alertmanager   1/1     4d17h
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Type the &lt;em&gt;helm list&lt;/em&gt; command to show the Prometheus and Grafana Helm charts and the versions deployed to the &lt;em&gt;monitoring&lt;/em&gt; namespace in the cluster:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ helm list -n monitoring
NAME             	NAMESPACE 	REVISION	UPDATED                                	STATUS  	CHART            	APP VERSION
grafana-dashboard	monitoring	1       	2023-11-22 15:28:07.986364628 +0100 CET	deployed	grafana-6.57.4   	9.5.5      
prometheus-stack 	monitoring	1       	2023-11-22 15:28:13.290386574 +0100 CET	deployed	prometheus-23.0.0	v2.45.0
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Set up Prometheus and Grafana for K8s monitoring&lt;/h3&gt;
&lt;h4&gt;Access Prometheus&lt;/h4&gt;
&lt;p&gt;Prometheus can be accessed by pointing your browser to the URL &lt;em&gt;&lt;a href=&quot;http://gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10015&quot;&gt;http://gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10015&lt;/a&gt;&lt;/em&gt;, which can be extracted by using the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl get service/prometheus-stack-server -n monitoring -o jsonpath=&apos;{.metadata.annotations.hpecp-internal-gateway/80}&apos;
gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10015
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can execute queries in Prometheus using metrics such as &lt;em&gt;node_procs_running&lt;/em&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/prometheus.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
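&lt;p&gt;The same metric can also be queried outside the UI through Prometheus&apos; HTTP API. Here is a minimal sketch using curl and jq against the gateway URL extracted above (the hostname and port are specific to my environment):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Query the node_procs_running metric via the Prometheus HTTP API
$ curl -s &quot;http://gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10015/api/v1/query?query=node_procs_running&quot; \
  | jq &apos;.data.result[] | {instance: .metric.instance, value: .value[1]}&apos;
&lt;/code&gt;&lt;/pre&gt;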
&lt;h4&gt;Access Grafana&lt;/h4&gt;
&lt;p&gt;Grafana can be accessed by pointing your browser to the URL &lt;em&gt;&lt;a href=&quot;http://gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10016&quot;&gt;http://gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10016&lt;/a&gt;&lt;/em&gt;. The URL and the admin password can be extracted by using the following commands:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl get service/grafana-dashboard -n monitoring -o jsonpath=&apos;{.metadata.annotations.hpecp-internal-gateway/80}&apos;
gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10016

$ kubectl get secrets -n monitoring grafana-dashboard -o jsonpath=&apos;{.data.admin-password}&apos; | base64 -d
cs3O6LF2H9m0jLrgdR8UXplmZG22d9Co9WbnJNzx
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/grafana.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h4&gt;Configure Grafana data sources&lt;/h4&gt;
&lt;p&gt;From the Grafana Administration page, Prometheus can be configured as the data source by specifying the HTTP URL as &lt;em&gt;&lt;a href=&quot;http://gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10015/&quot;&gt;http://gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10015/&lt;/a&gt;&lt;/em&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/grafana-datasources.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
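&lt;p&gt;The same data source can also be added without the UI by calling Grafana&apos;s HTTP API. Below is a minimal sketch using curl with the admin password retrieved earlier; the data source name is arbitrary:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Create a Prometheus data source in Grafana through its HTTP API
$ curl -s -u &quot;admin:cs3O6LF2H9m0jLrgdR8UXplmZG22d9Co9WbnJNzx&quot; \
  -H &quot;Content-Type: application/json&quot; \
  -X POST &quot;http://gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10016/api/datasources&quot; \
  -d &apos;{
        &quot;name&quot;: &quot;Prometheus&quot;,
        &quot;type&quot;: &quot;prometheus&quot;,
        &quot;access&quot;: &quot;proxy&quot;,
        &quot;isDefault&quot;: true,
        &quot;url&quot;: &quot;http://gl-tor-upc-cp-gw-node1.customer.yyz.gl-hpe.local:10015/&quot;
      }&apos;
&lt;/code&gt;&lt;/pre&gt;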
&lt;h4&gt;Import Grafana dashboards&lt;/h4&gt;
&lt;p&gt;From &lt;a href=&quot;https://grafana.com/grafana/dashboards/&quot;&gt;Grafana Labs&lt;/a&gt;, there is a list of Grafana dashboard templates you can download and then import into Grafana as monitoring dashboards.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/grafana-dashboard-import.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
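&lt;p&gt;If you prefer to script this step, the dashboard JSON can be fetched directly from grafana.com by its dashboard ID and then imported. The sketch below uses dashboard ID 315 (&lt;em&gt;Kubernetes cluster monitoring (via Prometheus)&lt;/em&gt;); the revision number is an assumption, so check the dashboard page for the current one:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Download the dashboard JSON by its grafana.com ID; adjust the revision as needed
$ curl -sL &quot;https://grafana.com/api/dashboards/315/revisions/3/download&quot; -o k8s-cluster-monitoring.json
&lt;/code&gt;&lt;/pre&gt;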
&lt;p&gt;Here is the imported dashboard for &lt;em&gt;K8s cluster monitoring (via Prometheus)&lt;/em&gt;. It shows overall cluster CPU / Memory / Filesystem usage.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/grafana-cluster-monitoring.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Here is another imported dashboard for &lt;em&gt;K8s Pod metrics&lt;/em&gt;. It shows individual Pod CPU / memory usage.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/grafana-pod-metrics.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Summary&lt;/h3&gt;
&lt;p&gt;This blog post described the detailed process to deploy and set up Prometheus and Grafana as a monitoring stack in a K8s cluster in HPE GreenLake for Private Cloud Enterprise. Prometheus excels at collecting and storing time-series data, enabling developers to monitor various aspects of K8s, including metrics, performance, and health. Grafana complements Prometheus by providing developers with intuitive dashboards and visualizations, enabling them to gain meaningful insights into K8s performance and behavior. Integration of Prometheus and Grafana by deploying them in the K8s cluster adds a monitoring stack. It empowers users to gain a deep understanding of the cluster’s internal states and behaviors, enabling them to identify potential issues, optimize performance and enhance overall reliability.&lt;/p&gt;
&lt;p&gt;Please keep coming back to the &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE Developer Community blog&lt;/a&gt; to learn more about HPE GreenLake for Private Cloud Enterprise.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE GreenLake cloud scripting fundamentals]]></title><description><![CDATA[What are the HPE GreenLake cloud APIs   The foundational APIs for common HPE GreenLake cloud services allow IT administrators and IT…]]></description><link>https://developer.hpe.com/hpe-greenlake-edge-to-cloud-platform-scripting-fundamentals/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-greenlake-edge-to-cloud-platform-scripting-fundamentals/</guid><pubDate>Wed, 24 Jan 2024 09:51:57 GMT</pubDate><content:encoded>&lt;style&gt;
ul li{
 font-size:27px;
}
&lt;/style&gt;
&lt;style&gt;
ol li{
 font-size:27px;
}
&lt;/style&gt;
&lt;h2&gt;What are the HPE GreenLake cloud APIs  &lt;/h2&gt;
&lt;p&gt;The foundational APIs for common HPE GreenLake cloud services allow IT administrators and IT operators to programmatically operate and manage users and resources in an HPE GreenLake cloud workspace.   &lt;/p&gt;
&lt;p&gt;This set of APIs for common platform services includes APIs for workspace management, identity and access management, device and subscription, locations, audit logs, and wellness.   &lt;/p&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Note: The &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/&quot;&gt;HPE GreenLake cloud documentation&lt;/a&gt; for these APIs leverages OpenAPI specifications and associated reference material. The documentation provides a complete explanation of the operations supported by these APIs for common HPE GreenLake cloud services, as well as sample requests and responses.&lt;/em&gt;   &lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;p&gt; The following blog posts are an excellent way to learn more about the APIs using Postman. &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/blog/get-started-with-the-foundational-apis-for-the-hpe-greenlake-edge-to-cloud-platform-%E2%80%93-part-1-introduction-to-the-apis/&quot;&gt;Get started with the foundational APIs for the HPE GreenLake edge-to-cloud platform – Part 1: Introduction to the APIs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/blog/get-started-with-the-foundational-apis-for-the-hpe-greenlake-edge-to-cloud-platform-%E2%80%93-part-2-configuring-and-managing-a-workspace/&quot;&gt;Get started with the foundational APIs for the HPE GreenLake edge-to-cloud platform – Part 2: Configuring and managing a workspace&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/blog/get-started-with-the-foundational-apis-for-the-hpe-greenlake-edge-to-cloud-platform-%E2%80%93-part-3-tracking-activities-and-monitoring-health/&quot;&gt;Get started with the foundational APIs for the HPE GreenLake edge-to-cloud platform – Part 3: Tracking activities and monitoring health&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In this blog post, I will take the usage of the HPE GreenLake cloud APIs one step further by using the APIs to build automation scripts or custom integrations. To develop my script, I will use Bash, Python, and PowerShell. &lt;/p&gt;
&lt;h2&gt;Let’s pick a use case&lt;/h2&gt;
&lt;p&gt;Let’s say I’d like to check what is in my audit log at regular intervals in order to keep an eye on my HPE GreenLake workspace. The following graphics explain what I will be doing: &lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/don-picture.png&quot; alt=&quot;Figure 1: Illustrating the interactions made between workspace users and the HPE GreenLake cloud&quot; title=&quot;Figure 1: Illustrating the interactions made between workspace users and the HPE GreenLake cloud&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Figure 1: Illustrating the interactions made between workspace users and the HPE GreenLake cloud&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;For reference, I can also check the content of this audit log in the HPE GreenLake cloud console, under the Manage Workspace tab. &lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/auditlogui.jpg&quot; alt=&quot;Figure 2: Audit log in HPE GreenLake cloud console&quot; title=&quot;Figure 2: Audit log in HPE GreenLake cloud console&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Figure 2: Audit log in HPE GreenLake platform console&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;High-level algorithm &lt;/h2&gt;
&lt;p&gt;Let’s look at the steps necessary to accomplish this. &lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Gather details about the API access&lt;/li&gt;
&lt;li&gt;Get an API access/session token &lt;/li&gt;
&lt;li&gt;Compute date for filtering events &lt;/li&gt;
&lt;li&gt;Call audit log API &lt;/li&gt;
&lt;li&gt;Extract data and print results &lt;/li&gt;
&lt;li&gt;Wait a bit and go to Step 3 &lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Give me a token, my friend!&lt;/h2&gt;
&lt;p&gt;The HPE GreenLake cloud console provides a way to create API client credentials (up to 5 per workspace) in the form of a Client ID and a Client Secret pair, which, in turn, I am going to use to generate a session token.&lt;/p&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Note: To make REST API calls to HPE GreenLake cloud APIs, you will need to select “HPE GreenLake platform” as an option when configuring API client credentials. To learn how to create API client credentials for HPE GreenLake cloud APIs, check out the &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00120892en_us&amp;#x26;page=GUID-23E6EE78-AAB7-472C-8D16-7169938BE628.html&quot;&gt;Configuring API client credentials&lt;/a&gt; and &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00120892en_us&amp;#x26;page=GUID-771F9B3A-B029-43E5-A38F-6D8D04178FAB.html&quot;&gt;Requesting access to HPE GreenLake cloud APIs&lt;/a&gt; in the HPE GreenLake cloud user guide.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;p&gt;My script will prompt for these two values (&lt;strong&gt;client_id&lt;/strong&gt; and &lt;strong&gt;client_secret&lt;/strong&gt;) and will make sure that &lt;strong&gt;client_secret&lt;/strong&gt; is never printed anywhere. Because these values are quite long, I will also test the presence of the operating system’s environment variables CLIENTID and CLIENTSECRET. If present, I will use their values and not prompt the user. &lt;/p&gt;
&lt;p&gt;From the HPE GreenLake cloud console, I’ve learned that the cURL command to get a session token is: &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;curl -s --location &apos;https://sso.common.cloud.hpe.com/as/token.oauth2&apos; \ 
--header &apos;Content-Type: application/x-www-form-urlencoded&apos; \ 
--data-urlencode &apos;grant_type=client_credentials&apos; \ 
--data-urlencode &apos;client_id=&amp;#x3C;CLIENTID&gt;&apos; \
--data-urlencode &apos;client_secret=&amp;#x3C;CLIENTSECRET&gt;&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can see this in the API section of the Manage Workspace screen of the HPE GreenLake cloud console shown below: &lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/apiaccess.jpg&quot; alt=&quot;Figure 3: API access management page&quot; title=&quot;Figure 3: API access management page&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Figure 3: API access management page&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;img src=&quot;/img/codesample-for-token.jpg&quot; alt=&quot;Figure 4: cURL sample code for generating access token&quot; title=&quot;Figure 4: cURL sample code for generating access token&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Figure 4: cURL sample code for generating access token&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The JSON response is received from this call in the following format: &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
&quot;access_token&quot;:&quot;eyJhbGciOiJSUzI1NiIsImtpZCI6IjJGSmhvZ1lRMDZNazNBc2Q4UU8zU09ZVE9wayIsInBpLmF0bSI6ImRlejAifQ.eyJjbGllbnRfaWQiOiI1ZDVjMDVjMi00OGM5LTRmNzAtOWY4ZS1iMzIwZmQxNTA0NmYiLCJpc3MiOiJodHRwczovL3Nzby5jb21tb24uY2xvdWQuaHBZGUyODI1ZjIxMWVjOGE4NGZlZGViY2I0YTc1NCIsImF1dGhfc291cmNlIjoiY2NzX3Rva2VuX21hbmFnZW1lbnQiLCJwbGF0Zm9ybV9jdXN0b21lcl9pZCI6IjMwMQlkZTI4MjVmMjExZWM4YTg0ZmVkZWJjYjRhNzU0IiwiaWF0IjoxNzAyMDUxMDg1LCJhcHBsaWNhdGlvbl9pbnN0YW5jZV9pZCI6IjAwMDAwMDAwLTAwMDAtMDAwMC0wMDAwLTAwMDAwMDAwMDAwMCIsImV4cCI6MTcwMjA1ODI4NX0.ocpBLPKG5XdL1s_ndMmuySGt5S2-ngcaZDTrb3P0L_M4px-6_7JOavSOW-x_lCns1i1mrYKk6vfswgsRtVVq7HQA-NT8PCbxNGBVzeBjhf0SLYtPUsDLr8tfZgIH3-tE4KoW9frAWVOM1plJ5DL8i7xIpj33yyrQiLEb84IAq5TuLQ6KesSvgatQyKgB4dGRZ6lISqh9jeXU7ZuoO2rnFRC8wDcPlx-XNX3oGM0-ZO5U-NXckdmxhaWMETKmDxnvvqmLbr_jvDxUwZWCcbPfIYyqP_OYpCljhtAPkGbj8U4V0xF7HMBms1qazSy9ZVgfJEPwvbdRwo5iRKAxi7oFnQ&quot;, 
&quot;token_type&quot;:&quot;Bearer&quot;, 
&quot;expires_in&quot;:7199
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The response provides an access token of type &quot;Bearer&quot; with a time to live of 7200 seconds (2 hours). You should renew a token before it expires, but for the purposes of this blog, I will just check for expiration and terminate cleanly if it happens. &lt;/p&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Note: The token is returned as a standard JWT (JSON Web Token) described by &lt;a href=&quot;https://datatracker.ietf.org/doc/html/rfc7519&quot;&gt;RFC 7519&lt;/a&gt;. You can dissect the content of your token using &lt;a href=&quot;https://jwt.io/&quot;&gt;https://jwt.io/&lt;/a&gt;. Part of the data provided in the content is the date of expiration.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
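&lt;p&gt;Since the expiration date is carried in the token payload itself, it can also be read locally. Here is a rough shell sketch, assuming the raw access_token value (without the &quot;Bearer &quot; prefix) is stored in a variable named raw_token and that GNU coreutils base64 is available; the padding step is needed because JWTs use unpadded base64url encoding:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# raw_token is assumed to hold the access_token string returned by the token call
payload=$(printf &apos;%s&apos; &quot;$raw_token&quot; | cut -d. -f2 | tr &apos;_-&apos; &apos;/+&apos;)

# Re-add the base64 padding stripped by the base64url encoding
case $(( ${#payload} % 4 )) in
  2) payload=&quot;${payload}==&quot; ;;
  3) payload=&quot;${payload}=&quot; ;;
esac

# exp is a UNIX timestamp (seconds since the epoch)
printf &apos;%s&apos; &quot;$payload&quot; | base64 -d | jq .exp
&lt;/code&gt;&lt;/pre&gt;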
&lt;h2&gt;Querying the audit log &lt;/h2&gt;
&lt;p&gt;According to the &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/audit-logs/public/&quot;&gt;API Reference documentation&lt;/a&gt; for the Audit Log service, I can query the log using:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;GET /audit-log/v1/logs
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I can also see from the documentation that I can use a filter to keep only logs after a certain date using the following parameter: &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;GET /audit-log/v1/logs?filter=createdAt ge &apos;2023-07-24T04:21:22.00Z&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Note: the date format used by the API is &lt;a href=&quot;https://www.iso.org/standard/70908.html&quot;&gt;ISO 8601&lt;/a&gt;, of the form YYYY-MM-DDTHH:MM:SS.ss-/+FF:ff. For example: &apos;2023-07-24T04:21:22.00Z&apos; for 4:21AM on the 24th of July, 2023 in UTC (Z=Zero Meridian)&lt;/em&gt; &lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;p&gt;This call needs an &lt;strong&gt;Authorization&lt;/strong&gt; header which contains the access_token preceded with the string “Bearer ”. It is also a best practice to provide an &lt;strong&gt;Accept&lt;/strong&gt; header to specify that a response in JSON (application/json) is expected, although this has become the default nowadays. &lt;/p&gt;
&lt;p&gt;The JSON response received from this API call should be in the form of: &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{ 
&quot;items&quot;:[], 
&quot;count&quot;: 0, 
&quot;offset&quot;: 0, 
&quot;total&quot;: 0, 
&quot;remainingRecords&quot;: true 
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In this response, &lt;strong&gt;count&lt;/strong&gt; gives the size of &lt;strong&gt;items&lt;/strong&gt;, the returned array of items, and &lt;strong&gt;total&lt;/strong&gt; gives the total number of existing items. If &lt;strong&gt;total&lt;/strong&gt; is greater than &lt;strong&gt;count&lt;/strong&gt;, I would need to call the same API multiple times, specifying an &lt;strong&gt;offset&lt;/strong&gt; parameter to get the next batch, until &lt;strong&gt;total&lt;/strong&gt; is reached or &lt;strong&gt;remainingRecords&lt;/strong&gt; is false, as sketched below. &lt;/p&gt;
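&lt;p&gt;Here is a rough sketch of that pagination loop in bash. The &lt;strong&gt;offset&lt;/strong&gt; and &lt;strong&gt;limit&lt;/strong&gt; query parameters are assumptions based on the response fields above, so check the API reference for the exact names; $access_token and the jq field selection are the same as in the full script later in this post: &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;offset=0
limit=50

while : ; do
  # Fetch one page of audit log entries; the offset/limit parameter names are assumptions
  curl -s -o page.json \
    --header &apos;Accept: application/json&apos; \
    --header &quot;Authorization: $access_token&quot; \
    &quot;https://global.api.greenlake.hpe.com/audit-log/v1/logs?offset=$offset&amp;#x26;limit=$limit&quot;

  count=$(jq .count page.json)
  total=$(jq .total page.json)

  # Print the fields of interest for this page
  jq &apos;.items[] | {createdAt, username: .user.username, description}&apos; page.json

  # Stop when the page is empty or all records have been fetched
  [ &quot;$count&quot; -eq 0 ] &amp;#x26;&amp;#x26; break
  offset=$(( offset + count ))
  [ &quot;$offset&quot; -ge &quot;$total&quot; ] &amp;#x26;&amp;#x26; break
done
&lt;/code&gt;&lt;/pre&gt;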
&lt;p&gt;&lt;strong&gt;Items&lt;/strong&gt; is an array of audit items, with a single audit item being defined as shown below: &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{ 
&quot;id&quot;: &quot;string&quot;, 
&quot;type&quot;: &quot;/audit-log/logs&quot;, 
&quot;application&quot;:  
   { 
   &quot;id&quot;: &quot;string&quot; 
   }, 
&quot;region&quot;: &quot;string&quot;, 
&quot;user&quot;:  
   { 
   &quot;username&quot;: &quot;string&quot; 
   }, 
&quot;category&quot;: &quot;string&quot;, 
&quot;description&quot;: &quot;string&quot;, 
&quot;workspace&quot;:  
  { 
  &quot;id&quot;: &quot;string&quot;, 
  &quot;workspaceName&quot;: &quot;string&quot; 
  }, 
&quot;createdAt&quot;: &quot;2019-08-24T14:15:22Z&quot;, 
&quot;updatedAt&quot;: &quot;2019-08-24T14:15:22Z&quot;, 
&quot;generation&quot;: 0, 
&quot;additionalInfo&quot;: { }, 
&quot;hasDetails&quot;: true 
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To keep it simple and human readable, in my scripts, I will only display: &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;user.username &lt;/li&gt;
&lt;li&gt;description &lt;/li&gt;
&lt;li&gt;createdAt &lt;/li&gt;
&lt;li&gt;additionalInfo.ipAddress&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Putting it all together in bash &lt;/h2&gt;
&lt;p&gt;Let’s first try to assemble a bash script. In the next sections, I will show you how to achieve the same using PowerShell and Python. &lt;/p&gt;
&lt;h3&gt;Step 1: Gather details about the API Access &lt;/h3&gt;
&lt;p&gt;The first section of the script takes care of collecting the CLIENTID and CLIENTSECRET, checking first for environment variables and prompting the user if none are set. The CLIENTSECRET is read in a secure way so that it is never displayed. &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;if [[ -z &quot;${CLIENTID}&quot; ]]; then 
  read -p &quot;Enter your HPE GreenLake Client ID: &quot; client_id 
else 
  client_id=&quot;${CLIENTID}&quot; 
fi 

if [[ -z &quot;${CLIENTSECRET}&quot; ]]; then 
  client_secret=&quot;&quot; 
  pass_var=&quot;Enter your HPE GreenLake Client Secret: &quot;       

  while IFS= read -p &quot;$pass_var&quot; -r -s -n 1 letter 
  do 
    if [[ $letter == $&apos;\0&apos; ]]        
    then 
        break 
    fi 
    client_secret=&quot;${client_secret}$letter&quot;      
    pass_var=&quot;*&quot;             
  done 
else 
  client_secret=&quot;${CLIENTSECRET}&quot; 
fi
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Step 2: Get a session token &lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;access_token=&quot;Bearer &quot;`curl -s --location &apos;https://sso.common.cloud.hpe.com/as/token.oauth2&apos; \ 
--header &apos;Content-Type: application/x-www-form-urlencoded&apos; \ 
--data-urlencode &apos;grant_type=client_credentials&apos; \ 
--data-urlencode &apos;client_id=&apos;$client_id&apos;&apos; \ 
--data-urlencode &apos;client_secret=&apos;$client_secret&apos;&apos;  | jq .access_token | xargs`
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Step 3: Compute date for filtering events &lt;/h3&gt;
&lt;p&gt;I can start this infinite loop and each time, I will collect the audit logs that were generated during the last minute. To do so, I will need to compute the time now and subtract 1 minute. &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;for (( ; ; )) 
do 
  d=`date -v &quot;-1M&quot; -u +&quot;%Y-%m-%dT%H:%M:%S.00Z&quot;` 
  echo Last check at \(UTC\): $d 
  echo &apos;---------------------&apos;
&lt;/code&gt;&lt;/pre&gt;
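&lt;p&gt;&lt;em&gt;Note: &lt;code&gt;date -v&lt;/code&gt; is the BSD syntax shipped with macOS. On a system that uses GNU date (most Linux distributions), an equivalent call would look like this:&lt;/em&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# GNU date (Linux) equivalent of the BSD &quot;date -v -1M&quot; call above
d=$(date -u -d &quot;1 minute ago&quot; +&quot;%Y-%m-%dT%H:%M:%S.00Z&quot;)
&lt;/code&gt;&lt;/pre&gt;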
&lt;h3&gt;Step 4: Call audit log API &lt;/h3&gt;
&lt;p&gt;I can now call the API with the right authorization header and set the filter parameter so that &lt;strong&gt;startTime&lt;/strong&gt; is greater than or equal to (&lt;em&gt;ge&lt;/em&gt;) the computed date: &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;http_response=$(curl -s -o out.json -w &quot;%{http_code}&quot; --location &quot;https://global.api.greenlake.hpe.com/audit-log/v1/logs?filter=startTime%20ge%20&apos;$d&apos;&quot; \ 
--header &apos;Accept: application/json&apos; \ 
--header &quot;Authorization: $access_token&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Step 5: Extract data and print results &lt;/h3&gt;
&lt;p&gt;I need to check that the call returned an HTTP status code of 200 (Success) and then use &lt;a href=&quot;https://jqlang.github.io/jq/download/&quot;&gt;jq&lt;/a&gt; to display the selected fields: &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;  if [ &quot;$http_response&quot; != &quot;200&quot; ]; then 
      echo &quot;Error calling the API or token has expired!&quot; 
      exit $http_response 
  else 
      cat out.json | jq &apos;.items[] | { createdAt: .createdAt, username: .user.username, description: .description, ipAddress: .additionalInfo.ipAddress}&apos; 
  fi
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Step 6: Wait a bit and go to Step 3 &lt;/h3&gt;
&lt;p&gt;I decided that 1 minute was a good interval, so I used the sleep command to wait 60 seconds and go back to the beginning of the infinite loop (Step 3). &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt; sleep 60

done
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Running the bash code &lt;/h3&gt;
&lt;p&gt;When the code is invoked in bash (tested on MacOS and Ubuntu), I can see that, every minute, the script displays time and audit logs (if any have occurred): &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ ./spy_workspace.sh 

Last check at (UTC): 2023-12-18T09:05:53.00Z 
--------------------- 
{ 
  &quot;createdAt&quot;: &quot;2023-12-18T09:06:11.000000Z&quot;, 
  &quot;username&quot;: &quot;&amp;#x3C;username&gt;&quot;, 
  &quot;description&quot;: &quot;Loading workspace 3009de2825f211ec8a84fedebcb4a754 for user &amp;#x3C;username&gt;&quot;,
  &quot;ipAddress&quot;: &quot;&amp;#x3C;ip address&gt;&quot; 
} 
{ 
  &quot;createdAt&quot;: &quot;2023-12-18T09:05:55.000000Z&quot;, 
  &quot;username&quot;: &quot;&amp;#x3C;username&gt;&quot;, 
  &quot;description&quot;: &quot;User &amp;#x3C;username&gt; logged in via ping mode.&quot;, 
  &quot;ipAddress&quot;: &quot;&amp;#x3C;ip address&gt;&quot; 
} 
Last check at (UTC): 2023-12-18T09:06:54.00Z 
--------------------- 
Last check at (UTC): 2023-12-18T09:07:55.00Z 
---------------------
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Note: The audit log API returns logs in LIFO (Last In First Out) mode. This is great for a GUI interface; however, it makes things a little more complicated for CLI and scripts. Sorting the logs is outside the scope of the blog post.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;p&gt;When the token expires after 2 hours, I can catch the error, display a message, and exit. &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;Last check at (UTC): 2023-12-18T11:05:55.00Z 
--------------------- 
Error calling the API or token has expired!
&lt;/code&gt;&lt;/pre&gt;
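&lt;p&gt;If exiting is not desirable, a possible refinement (a sketch, assuming the API reports an expired token with a 401 status code) is to request a fresh token inside the loop and retry instead of exiting: &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;  if [ &quot;$http_response&quot; = &quot;401&quot; ]; then
      # Token likely expired: fetch a new one exactly as in Step 2, then retry the loop
      access_token=&quot;Bearer &quot;`curl -s --location &apos;https://sso.common.cloud.hpe.com/as/token.oauth2&apos; \
        --header &apos;Content-Type: application/x-www-form-urlencoded&apos; \
        --data-urlencode &apos;grant_type=client_credentials&apos; \
        --data-urlencode &apos;client_id=&apos;$client_id&apos;&apos; \
        --data-urlencode &apos;client_secret=&apos;$client_secret&apos;&apos; | jq .access_token | xargs`
      continue
  fi
&lt;/code&gt;&lt;/pre&gt;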
&lt;h2&gt;Putting it all together in PowerShell &lt;/h2&gt;
&lt;p&gt;Let’s now see how I could do the same (or better) using PowerShell: &lt;/p&gt;
&lt;h3&gt;Step 1: Gather details about the API Access &lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;if ($Env:CLIENTID -eq $null) { 
    $ClientID = read-host &quot;Enter your HPE GreenLake Client ID&quot;  
} 
else { 
    $ClientID = $Env:CLIENTID  
} 

if ($Env:CLIENTSECRET -eq $null) { 
    $secClientSecret = read-host  &quot;Enter your HPE GreenLake Client Secret&quot; -AsSecureString 
    $bstr = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($secClientSecret) 
    $ClientSecret = [System.Runtime.InteropServices.Marshal]::PtrToStringBSTR($bstr)  
} 
else { 
    $ClientSecret = $Env:CLIENTSECRET 
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Step 2: Get a session token &lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;$headers = @{}  
$body = &quot;grant_type=client_credentials&amp;#x26;client_id=&quot; + $ClientID + &quot;&amp;#x26;client_secret=&quot; + $ClientSecret 

# Get a Token 
$headers = @{}  
$headers[&quot;Content-Type&quot;] = &quot;application/x-www-form-urlencoded&quot; 

try { 
    $response = Invoke-webrequest &quot;https://sso.common.cloud.hpe.com/as/token.oauth2&quot; -Method POST -Headers $headers -Body $body 
} 
catch { 
    Write-Host &quot;Error retrieving access token!&quot;  
    exit 
} 

# Capturing API Access Token 
$AccessToken = ($response.Content  | Convertfrom-Json).access_token
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I can now prepare the headers for Step 4: &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;# Headers creation 
$headers = @{}  
$headers[&quot;Authorization&quot;] = &quot;Bearer $AccessToken&quot; 
$headers[&quot;Accept&quot;] = &quot;application/json&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Step 3: Compute date for filtering events &lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;While ($true) { 
    $d=((Get-Date).AddMinutes(-1)).ToUniversalTime() 
    $sd=$d.tostring(&apos;yyyy-MM-ddTHH:mm:ss.00Z&apos;) 
    write-host &quot;Last check at (UTC): &quot; $sd 
    write-host &quot;--------------------&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Step 4: Call audit log API &lt;/h3&gt;
&lt;p&gt;Here, you’ll see that I can leverage exceptions that PowerShell supports: &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;    try { 
        $response = Invoke-webrequest &quot;https://global.api.greenlake.hpe.com/audit-log/v1/logs?filter=startTime%20ge%20&apos;$sd&apos;&quot; -Method GET -Headers $headers  
    } 
    catch { 
        write-host &quot;Error calling the API or token has expired!&quot; 
        exit 
    }
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Step 5: Extract data and print results &lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;    $my_json = $response | ConvertFrom-Json 

    foreach ($i in $my_json.items){ 
        write-host &quot;createdAt:&quot; $i.createdAt 
        write-host &quot;username: &quot; $i.user.username  
        write-host &quot;description: &quot; $i.description  
        write-host &quot;ipAddress: &quot; $i.additionalInfo.ipAddress 
        write-host &quot;--------------&quot; 
    }
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Step 6: Wait a bit and go to Step 3 &lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;  start-sleep -Seconds 60 
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Running the PowerShell code &lt;/h3&gt;
&lt;p&gt;Running the code in PowerShell (tested on Windows and MacOS) provides similar results: &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;PS&gt; ./spy-workspace.ps1 
Last check at (UTC):  2023-12-18T15:16:59.00Z 
-------------------- 
createdAt: 12/18/2023 3:17:40 PM 
username:  &amp;#x3C;username&gt; 
description:  User &amp;#x3C;username&gt; logged out 
ipAddress:  &amp;#x3C;ip address&gt; 
-------------- 
createdAt: 12/18/2023 3:17:36 PM 
username:  &amp;#x3C;username&gt; 
description:  Platform customer 3009de2825f211ec8a84fedebcb4a754 profile Address updated 
ipAddress:  &amp;#x3C;ip address&gt; 
-------------- 
createdAt: 12/18/2023 3:17:17 PM 
username:  &amp;#x3C;username&gt; 
description:  Loading workspace 3009de2825f211ec8a84fedebcb4a754 for user &amp;#x3C;username&gt; 
ipAddress:  &amp;#x3C;ip address&gt; 
-------------- 
createdAt: 12/18/2023 3:17:11 PM 
username:  &amp;#x3C;username&gt; 
description:  User &amp;#x3C;username&gt; logged in via ping mode. 
ipAddress:  &amp;#x3C;ip address&gt; 
--------------
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Note: The audit log API returns logs in LIFO (Last In First Out) mode. This is great for a GUI interface; however, it makes things a little more complicated for CLI and scripts. Sorting the logs is outside the scope of the blog post.&lt;/em&gt;   &lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;p&gt;Here I can catch the exception when the token has expired, display a message, and stop: &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;Last check at (UTC):  2023-12-18T17:17:02.00Z                                                                            
-------------------- 
Error calling the API or token has expired!
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Putting it all together in Python &lt;/h2&gt;
&lt;h3&gt;Step 1: Gathering details about the API Access &lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from time import sleep 
from oauthlib.oauth2 import BackendApplicationClient        
from requests.auth import HTTPBasicAuth        
from requests_oauthlib import OAuth2Session        
from datetime import datetime, timedelta 
import requests 
import json 
import os 
import pwinput 
 
client_id = os.environ.get(&quot;CLIENTID&quot;, &quot;&quot;) 
client_secret = os.environ.get(&quot;CLIENTSECRET&quot;, &quot;&quot;) 

if client_id == &quot;&quot;: 
    client_id = input(&quot;Enter your HPE GreenLake Client ID: &quot;) 
 
if client_secret == &quot;&quot;: 
    client_secret = pwinput.pwinput(&quot;Enter your HPE GreenLake Client Secret: &quot;) 

client = BackendApplicationClient(client_id)        
oauth = OAuth2Session(client=client)        
auth = HTTPBasicAuth(client_id,client_secret)
&lt;/code&gt;&lt;/pre&gt;
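&lt;p&gt;These imports rely on a few third-party packages (&lt;em&gt;requests&lt;/em&gt;, &lt;em&gt;requests-oauthlib&lt;/em&gt;, &lt;em&gt;oauthlib&lt;/em&gt; and &lt;em&gt;pwinput&lt;/em&gt;). If they are not already available in your environment, they can typically be installed with pip: &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;pip3 install requests requests-oauthlib oauthlib pwinput
&lt;/code&gt;&lt;/pre&gt;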
&lt;h3&gt;Step 2: Get a session token &lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;try: 
    token = oauth.fetch_token(token_url=&apos;https://sso.common.cloud.hpe.com/as/token.oauth2&apos;, auth=auth)        
except: 
    print(&quot;Error retrieving access token.&quot;) 
    exit() 

my_token = &quot;Bearer &quot; + token[&quot;access_token&quot;]
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Step 3: Compute date for filtering events &lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;while True: 
# Get date in right format 

    now = datetime.utcnow() + timedelta(minutes = -1) 
    date = now.strftime(&quot;%Y-%m-%dT%H:%M:%S.00Z&quot;) 
    print(&quot;Last check at (UTC): &quot;, date) 
    print(&apos;-------------------&apos;)
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Step 4: Call audit log API &lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;    my_headers = { 
        &apos;accept&apos;: &apos;application/json&apos;, 
        &apos;Authorization&apos;: my_token, 
    } 
    my_url = &quot;https://global.api.greenlake.hpe.com/audit-log/v1/logs?filter=startTime%20ge%20&apos;&quot; + date + &quot;&apos;&quot; 

# Fetch audit logs since last minute 
    response = requests.get(url=my_url, headers=my_headers) 

    try: 
        response.raise_for_status() 
    except: 
        print(&quot;Error calling the API or token has expired!&quot;) 
        exit()
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Step 5: Extract data and print results &lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;    # Use a name other than &apos;json&apos; so the imported json module is not shadowed
    data = response.json()

    e = 0
    while (e &amp;#x3C; data[&apos;count&apos;]):
        print(&apos;createdAt: &apos; + data[&apos;items&apos;][e][&apos;createdAt&apos;])
        print(&apos;username: &apos; + data[&apos;items&apos;][e][&apos;user&apos;][&apos;username&apos;])
        print(&apos;description: &apos; + data[&apos;items&apos;][e][&apos;description&apos;])
        try:
            print(&apos;ipAddress: &apos; + data[&apos;items&apos;][e][&apos;additionalInfo&apos;][&apos;ipAddress&apos;])
        except KeyError:
            # Some audit entries do not carry an IP address
            print(&apos;ipAddress:&apos;)
        print(&apos;-----------&apos;)
        e = e + 1
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Step 6: Wait a bit and go to Step 3 &lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;    sleep(60)
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Running the Python code  &lt;/h3&gt;
&lt;p&gt;I can now run this script (tested on MacOS) and get the same behavior: &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ python3 ./spy_workspace.py 
Last check at (UTC):  2023-12-18T15:17:10.00Z 
------------------- 
createdAt: 2023-12-18T15:17:36.000000Z 
username: &amp;#x3C;username&gt; 
description: Platform customer 3009de2825f211ec8a84fedebcb4a754 profile Address updated 
ipAddress: &amp;#x3C;ip address&gt; 
-----------
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Similar to how it was done in the other versions, I can catch an exception calling the audit log API, display a message and terminate: &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;Last check at (UTC):  2023-12-18T17:17:23.00Z 
------------------- 
Error calling the API or token has expired!
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;What’s next? &lt;/h2&gt;
&lt;p&gt;The next step for the HPE GreenLake cloud APIs is to provide language-specific SDKs, which would offer better handling in PowerShell and Python, with stronger type checking and exception handling. In the meantime, I have shown you through this blog post that it is already possible to integrate with HPE GreenLake cloud using the most popular scripting languages. You can get the source code for these scripts from &lt;a href=&quot;https://github.com/hpe-dev-incubator/GLP-API-Tooling&quot;&gt;our community tooling repository&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If you’re interested in trying out what I just discussed, you might first want to check out one of our hands-on Workshops-on-Demand that let you play with the HPE GreenLake cloud APIs mentioned in this blog post. The workshops are free, available 24/7, and very easy to use. They give you a real-world experience without any risk. Check out our &lt;a href=&quot;https://developer.hpe.com/hackshack/workshops&quot;&gt;catalog of workshops&lt;/a&gt;, register for the one you’re interested in, and go! It’s as simple as that.&lt;/p&gt;
&lt;p&gt;If you still have any questions regarding the HPE GreenLake cloud APIs, join the &lt;a href=&quot;https://developer.hpe.com/slack-signup/&quot;&gt;HPE Developer Community Slack Workspace&lt;/a&gt; and start a discussion in our &lt;a href=&quot;https://hpedev.slack.com/archives/C02EG5XFK8Q&quot;&gt;#hpe-greenlake-api&lt;/a&gt; channel. We’re always here to help.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Getting started with volume snapshots on a Kubernetes cluster in HPE GreenLake for Private Cloud Enterprise]]></title><description><![CDATA[Overview HPE GreenLake for Private Cloud Enterprise: Containers ("containers service"), one of the HPE GreenLake cloud services available on…]]></description><link>https://developer.hpe.com/getting-started-with-volume-snapshots-on-a-kubernetes-cluster-in-hpe-greenlake-for-private-cloud-enterprise/</link><guid isPermaLink="false">https://developer.hpe.com/getting-started-with-volume-snapshots-on-a-kubernetes-cluster-in-hpe-greenlake-for-private-cloud-enterprise/</guid><pubDate>Tue, 23 Jan 2024 14:24:28 GMT</pubDate><content:encoded>&lt;style&gt; li { font-size: 27px; line-height: 33px; max-width: none; } &lt;/style&gt;
&lt;h3&gt;Overview&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/greenlake/containers.html&quot;&gt;HPE GreenLake for Private Cloud Enterprise: Containers&lt;/a&gt; (&quot;containers service&quot;), one of the HPE GreenLake cloud services available on the HPE GreenLake for Private Cloud Enterprise, allows customers to create a Kubernetes (K8s) cluster, view details about existing clusters, and deploy containerized applications to the cluster. It provides an enterprise-grade container management service using open source K8s.&lt;/p&gt;
&lt;p&gt;In this blog post, I discuss first the persistent volumes and volume snapshots in K8s. Then, I describe the Container Storage Interface (CSI) and HPE CSI driver for K8s in HPE GreenLake for Private Cloud Enterprise. With a MySQL database instance deployed as a sample stateful application using persistent volume in the cluster, I show the detailed steps used to create a volume snapshot of the database as a backup using HPE CSI driver for K8s. Finally, I demonstrate how to restore the MySQL database using the created volume snapshot.&lt;/p&gt;
&lt;h3&gt;Persistent volumes and volume snapshots&lt;/h3&gt;
&lt;p&gt;In K8s, a persistent volume (PV) is a piece of storage in the cluster that has been provisioned, either statically by an administrator or dynamically using &lt;em&gt;StorageClasses&lt;/em&gt;. It provides a way for data to persist beyond the lifecycle of individual Pods. PV provides the necessary data persistence for stateful applications, ensuring that they function correctly even in the event of Pod or node failures. It&apos;s a key component in managing storage in K8s. As such, backing up PVs has become a critical aspect of managing stateful applications in K8s.&lt;/p&gt;
&lt;p&gt;A volume snapshot is a copy of the data stored in a PV in K8s at a specific point in time. It provides the ability to create a snapshot of a PV from stateful applications. A volume snapshot can be used to back up data from a PV, restore a PV from a previous state, or create a new PV from a snapshot. A volume snapshot provides K8s users with a standardized way to copy the contents of a PV at a particular point in time without creating an entirely new volume. As an example of how this is used, this functionality can enable database administrators to backup databases before performing edit or delete modifications.&lt;/p&gt;
&lt;p&gt;Support for volume snapshots in K8s is only available with a CSI driver deployed in the cluster.&lt;/p&gt;
&lt;h3&gt;HPE CSI driver for K8s&lt;/h3&gt;
&lt;p&gt;The CSI defines a standard interface for container orchestration systems, like K8s, to expose arbitrary block and file storage systems to their containerized workloads. Support for CSI in K8s was introduced as &lt;em&gt;alpha&lt;/em&gt; in its v1.9 release, and promoted to &lt;em&gt;beta&lt;/em&gt; in its v1.10 release. Implementation of the CSI has been &lt;em&gt;GA&lt;/em&gt; in K8s since the v1.13 release. With the adoption of CSI, the K8s volume layer becomes truly extensible. Using CSI, third-party storage providers, such as HPE, can write and deploy plugins exposing new storage systems in K8s without ever having to touch the core K8s code. This gives K8s users more options for storage and makes the system more secure and reliable.&lt;/p&gt;
&lt;p&gt;A CSI driver for K8s is a plugin that allows K8s to access different types of storage systems, such as &lt;em&gt;Azure Disks&lt;/em&gt;, &lt;em&gt;AWS EBS&lt;/em&gt;, and &lt;em&gt;HPE Storage&lt;/em&gt;, etc. HPE CSI driver for K8s is one of those CSI driver plugins that follows the K8s CSI specification and enables K8s to use various HPE storage systems, such as &lt;em&gt;Nimble Storage&lt;/em&gt;, &lt;em&gt;3PAR&lt;/em&gt; and &lt;em&gt;Primera&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;As part of K8s cluster provisioning in HPE GreenLake for Private Cloud Enterprise, HPE CSI driver for K8s has been installed in the cluster. The installation consists of two components, a &lt;em&gt;controller&lt;/em&gt; component and a &lt;em&gt;per-node&lt;/em&gt; component.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;The controller component is deployed as a &lt;em&gt;Deployment&lt;/em&gt; on any node in the K8s cluster. It implements the CSI Controller service and a list of sidecar containers, such as &lt;em&gt;external-provisioner&lt;/em&gt;, &lt;em&gt;external-attacher&lt;/em&gt;, &lt;em&gt;external-snapshotter&lt;/em&gt;, and &lt;em&gt;external-resizer&lt;/em&gt;, etc. These controller sidecar containers typically interact with K8s objects, make calls to the driver’s CSI Controller service, manage K8s events and make the appropriate calls to the CSI driver.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The per-node component is deployed as a &lt;em&gt;DaemonSet&lt;/em&gt; on every node in the cluster. It implements the CSI Node service, together with the &lt;em&gt;node-driver-registrar&lt;/em&gt; sidecar container, which registers the CSI driver to kubelet that runs on every cluster node and is responsible for making the CSI Node service calls. These calls mount and unmount the storage volume from the HPE storage system, making it available to the Pod to consume.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Details about the HPE CSI driver for K8s deployed in the cluster, in its namespace &lt;em&gt;hpe-storage&lt;/em&gt;, are shown below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl get all -n hpe-storage
NAME                                       READY   STATUS    RESTARTS      AGE
pod/hpe-csi-controller-54cf448d85-g4w4c    9/9     Running   0             56d
pod/hpe-csi-node-5xtdb                     2/2     Running   0             56d
pod/nimble-csp-74d57f9487-qxwln            1/1     Running   0             56d
pod/primera3par-csp-59f5dfc499-hfghx       1/1     Running   0             56d
pod/snapshot-controller-5fd799f6b5-f6k7n   1/1     Running   6 (22d ago)   56d
pod/snapshot-controller-5fd799f6b5-z62dc   1/1     Running   2 (27d ago)   56d

NAME                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/alletra6000-csp-svc   ClusterIP   10.101.79.85    &amp;#x3C;none&gt;        8080/TCP   56d
service/alletra9000-csp-svc   ClusterIP   10.97.147.230   &amp;#x3C;none&gt;        8080/TCP   56d
service/nimble-csp-svc        ClusterIP   10.110.238.43   &amp;#x3C;none&gt;        8080/TCP   56d
service/primera3par-csp-svc   ClusterIP   10.101.42.76    &amp;#x3C;none&gt;        8080/TCP   56d

NAME                          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/hpe-csi-node   1         1         1       1            1           &amp;#x3C;none&gt;          56d

NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/hpe-csi-controller    1/1     1            1           56d
deployment.apps/nimble-csp            1/1     1            1           56d
deployment.apps/primera3par-csp       1/1     1            1           56d
deployment.apps/snapshot-controller   2/2     2            2           56d

NAME                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/hpe-csi-controller-54cf448d85    1         1         1       56d
replicaset.apps/nimble-csp-74d57f9487            1         1         1       56d
replicaset.apps/primera3par-csp-59f5dfc499       1         1         1       56d
replicaset.apps/snapshot-controller-5fd799f6b5   2         2         2       56d
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As part of HPE CSI driver configuration, a list of &lt;em&gt;StorageClasses&lt;/em&gt; is created that refers to the CSI driver name. The &lt;em&gt;PersistentVolumeClaim&lt;/em&gt; (PVC) can then be created, which uses the &lt;em&gt;StorageClass&lt;/em&gt; to dynamically provision a PV backed by the HPE storage systems. Apart from features such as dynamic provisioning, raw block volumes, inline ephemeral volumes, and volume encryption, HPE CSI driver implements and supports volume snapshot on a K8s cluster. As you can see in above deployment, the common snapshot controller &lt;em&gt;snapshot-controller&lt;/em&gt; and a &lt;em&gt;VolumeSnapshotClass&lt;/em&gt;, together with a list of snapshot &lt;em&gt;CustomResourceDefinitions&lt;/em&gt; (CRDs), all get deployed and added to the cluster.&lt;/p&gt;
&lt;p&gt;Here is the list of &lt;em&gt;StorageClasses&lt;/em&gt; and the &lt;em&gt;VolumeSnapshotClass&lt;/em&gt; created in this cluster:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl get storageclasses
NAME                                 PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gl-sbc-hpe                           csi.hpe.com                    Delete          Immediate              true                   56d
gl-sbp-frank-gl1-sstor01 (default)   csi.hpe.com                    Delete          Immediate              true                   56d
hpe-hdd-storage                      kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  56d
hpe-nvme-storage                     kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  56d
hpe-ssd-storage                      kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  56d

$ kubectl get volumesnapshotclasses
NAME                                 DRIVER        DELETIONPOLICY   AGE
gl-sbp-frank-gl1-sstor01             csi.hpe.com   Delete           56d
&lt;/code&gt;&lt;/pre&gt;
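&lt;p&gt;If you want to double-check that the snapshot CRDs mentioned above are present in the cluster, a quick way is to list them: &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# The snapshot CRDs (volumesnapshots, volumesnapshotcontents, volumesnapshotclasses)
# all belong to the snapshot.storage.k8s.io API group
kubectl get crds | grep snapshot.storage.k8s.io
&lt;/code&gt;&lt;/pre&gt;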
&lt;p&gt;Now that you understand the basics, in the following sections, I will describe how to create volume snapshots of persistent volumes in K8s using the HPE CSI driver for K8s.&lt;/p&gt;
&lt;h3&gt;Prerequisites&lt;/h3&gt;
&lt;p&gt;Before starting, make sure you have the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A K8s cluster, being provisioned in HPE GreenLake for Private Cloud Enterprise&lt;/li&gt;
&lt;li&gt;The kubectl CLI tool, together with the kubeconfig file for accessing the K8s cluster&lt;/li&gt;
&lt;li&gt;The optional mysql CLI tool, for accessing the deployed sample MySQL database service&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Deploy MySQL database&lt;/h3&gt;
&lt;p&gt;Before showing the volume snapshots, a MySQL database instance from &lt;a href=&quot;https://github.com/GuopingJia/mysql-app&quot;&gt;my GitHub repo&lt;/a&gt; will be deployed as a sample stateful application to the cluster.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;1. Install MySQL database&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;MySQL database requires a persistent volume to store data. Here you can see the PVC YAML manifest file &lt;em&gt;mysql-pvc.yaml&lt;/em&gt; in the repo&apos;s &lt;em&gt;base&lt;/em&gt; folder:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;

$ cat mysql-app/base/mysql-pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
  namespace: mysql
  labels:
    app: mysql
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This PVC file, together with other YAML manifest files in the folder &lt;em&gt;base&lt;/em&gt;, will be used to install a MySQL database instance using &lt;a href=&quot;https://kustomize.io/&quot;&gt;Kustomize&lt;/a&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt; $ tree mysql-app/base
mysql-app/base
├── kustomization.yaml
├── mysql-deployment.yaml
└── mysql-pvc.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The file &lt;em&gt;kustomization.yaml&lt;/em&gt; lists all YAML files in its resources section, together with the secret generator for the MySQL password:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ cat mysql-app/base/kustomization.yaml
secretGenerator:
- name: mysql-pass
  namespace: mysql
  literals:
  - password=CfeDemo@123
resources:
  - mysql-deployment.yaml
  - mysql-pvc.yaml

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Type the command shown below to install the MySQL database to the namespace &lt;em&gt;mysql&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl apply -k mysql-app/base
namespace/mysql created
secret/mysql-pass-m62cbhd9kf created
service/mysql created
persistentvolumeclaim/mysql-pvc created
deployment.apps/mysql created


&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Type the following command to check the MySQL database deployment state. The MySQL Pod should be in &lt;em&gt;Running&lt;/em&gt; status:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get all -n mysql
NAME                         READY   STATUS    RESTARTS   AGE
pod/mysql-6974b58d48-wb8g5   1/1     Running   0          14s

NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
service/mysql   ClusterIP   None         &amp;#x3C;none&gt;        3306/TCP   24s

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mysql   1/1     1            1           23s

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/mysql-6974b58d48   1         1         1       24s
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can check that the PVC and the PV were created as part of the MySQL database deployment:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl get persistentvolumes 
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS               REASON   AGE
pvc-3e55e9b3-097f-4ddf-bdcb-60825a7905ec   1Gi        RWO            Delete           Bound    mysql/mysql-pvc   gl-sbp-frank-gl1-sstor01            9m50s

$ kubectl get persistentvolumeclaims -n mysql
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS               AGE
mysql-pvc   Bound    pvc-3e55e9b3-097f-4ddf-bdcb-60825a7905ec   1Gi        RWO            gl-sbp-frank-gl1-sstor01   9m50s
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;2. Access MySQL database&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In order to access the MySQL database service using the mysql CLI, you must first set up port forwarding for &lt;em&gt;service/mysql&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl port-forward service/mysql -n mysql :3306
Forwarding from 127.0.0.1:41797 -&gt; 3306
Forwarding from [::1]:41797 -&gt; 3306
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The deployed MySQL database service can be accessed by typing the following mysql command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ mysql -h 127.0.0.1 -uroot -pCfeDemo@123 -P 41797 
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.6.51 MySQL Community Server (GPL)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type &apos;help;&apos; or &apos;\h&apos; for help. Type &apos;\c&apos; to clear the current input statement.

MySQL [(none)]&gt; show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
+--------------------+
3 rows in set (0,237 sec)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;3. Populate MySQL database&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The MySQL application repo has a &lt;em&gt;test&lt;/em&gt; folder that contains a list of scripts for populating data records and testing the contents:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ tree mysql-app/test
mysql-app/test
├── employees.sql
├── load_departments.dump
├── load_dept_emp.dump
├── load_dept_manager.dump
├── load_employees.dump
├── load_salaries1.dump
├── load_salaries2.dump
├── load_salaries3.dump
├── load_titles.dump
├── show_elapsed.sql
├── test_employees_md5.sql
└── test_employees_sha.sql
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Type the following command to populate sample &lt;em&gt;employees&lt;/em&gt; data into the MySQL database:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ cd mysql-app/test
$ mysql -h 127.0.0.1 -uroot -pCfeDemo@123 -P 41797 &amp;#x3C; employees.sql
INFO
CREATING DATABASE STRUCTURE
INFO
storage engine: InnoDB
INFO
LOADING departments
INFO
LOADING employees
INFO
LOADING dept_emp
INFO
LOADING dept_manager
INFO
LOADING titles
INFO
LOADING salaries
data_load_time_diff
NULL
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The added sample data records called &lt;em&gt;employees&lt;/em&gt; can be checked and verified by running the commands shown below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ mysql -h 127.0.0.1 -uroot -pCfeDemo@123 -P 41797 
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.6.51 MySQL Community Server (GPL)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type &apos;help;&apos; or &apos;\h&apos; for help. Type &apos;\c&apos; to clear the current input statement.

MySQL [(none)]&gt; show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| employees          |
| mysql              |
| performance_schema |
+--------------------+
4 rows in set (0,237 sec)





$ mysql -h 127.0.0.1 -uroot -pCfeDemo@123 -P 41797 -t &amp;#x3C; test_employees_sha.sql
+----------------------+
| INFO                 |
+----------------------+
| TESTING INSTALLATION |
+----------------------+
+--------------+------------------+------------------------------------------+
| table_name   | expected_records | expected_crc                             |
+--------------+------------------+------------------------------------------+
| departments  |                9 | 4b315afa0e35ca6649df897b958345bcb3d2b764 |
| dept_emp     |           331603 | d95ab9fe07df0865f592574b3b33b9c741d9fd1b |
| dept_manager |               24 | 9687a7d6f93ca8847388a42a6d8d93982a841c6c |
| employees    |           300024 | 4d4aa689914d8fd41db7e45c2168e7dcb9697359 |
| salaries     |          2844047 | b5a1785c27d75e33a4173aaa22ccf41ebd7d4a9f |
| titles       |           443308 | d12d5f746b88f07e69b9e36675b6067abb01b60e |
+--------------+------------------+------------------------------------------+
+--------------+------------------+------------------------------------------+
| table_name   | found_records    | found_crc                                |
+--------------+------------------+------------------------------------------+
| departments  |                9 | 4b315afa0e35ca6649df897b958345bcb3d2b764 |
| dept_emp     |           331603 | d95ab9fe07df0865f592574b3b33b9c741d9fd1b |
| dept_manager |               24 | 9687a7d6f93ca8847388a42a6d8d93982a841c6c |
| employees    |           300024 | 4d4aa689914d8fd41db7e45c2168e7dcb9697359 |
| salaries     |          2844047 | b5a1785c27d75e33a4173aaa22ccf41ebd7d4a9f |
| titles       |           443308 | d12d5f746b88f07e69b9e36675b6067abb01b60e |
+--------------+------------------+------------------------------------------+
+--------------+---------------+-----------+
| table_name   | records_match | crc_match |
+--------------+---------------+-----------+
| departments  | OK            | ok        |
| dept_emp     | OK            | ok        |
| dept_manager | OK            | ok        |
| employees    | OK            | ok        |
| salaries     | OK            | ok        |
| titles       | OK            | ok        |
+--------------+---------------+-----------+
+------------------+
| computation_time |
+------------------+
| 00:00:27         |
+------------------+
+---------+--------+
| summary | result |
+---------+--------+
| CRC     | OK     |
| count   | OK     |
+---------+--------+
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Create volume snapshot&lt;/h3&gt;
&lt;p&gt;Here is the &lt;em&gt;VolumeSnapshot&lt;/em&gt; YAML manifest file that creates a volume snapshot from the source PVC &lt;em&gt;&apos;mysql-pvc&apos;&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ cat volumesnapshot.yaml 
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mysql-snapshot
  namespace: mysql
spec:
  volumeSnapshotClassName: gl-sbp-frank-gl1-sstor01
  source:
    persistentVolumeClaimName: mysql-pvc
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Type the following command to create the volume snapshot:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl apply -f volumesnapshot.yaml 
volumesnapshot.snapshot.storage.k8s.io/mysql-snapshot created
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can check that a &lt;em&gt;VolumeSnapshot&lt;/em&gt; &lt;em&gt;&apos;mysql-snapshot&apos;&lt;/em&gt; is created in the namespace &lt;em&gt;mysql&lt;/em&gt;, together with a &lt;em&gt;VolumeSnapshotContent&lt;/em&gt; object created at cluster level. The &lt;em&gt;READYTOUSE&lt;/em&gt; field of the &lt;em&gt;VolumeSnapshot&lt;/em&gt; should show as &lt;em&gt;true&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl get volumesnapshot -n mysql
NAME             READYTOUSE   SOURCEPVC   SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS              SNAPSHOTCONTENT                                    CREATIONTIME   AGE
mysql-snapshot   true         mysql-pvc                           1Gi           gl-sbp-frank-gl1-sstor01   snapcontent-41de6346-1ba3-4ce7-9483-2ca074e476a2   2m21s          2m22s


$ kubectl get volumesnapshotcontents
NAME                                               READYTOUSE   RESTORESIZE   DELETIONPOLICY   DRIVER        VOLUMESNAPSHOTCLASS        VOLUMESNAPSHOT                  VOLUMESNAPSHOTNAMESPACE   AGE
snapcontent-41de6346-1ba3-4ce7-9483-2ca074e476a2   true         1073741824    Delete           csi.hpe.com   gl-sbp-frank-gl1-sstor01   mysql-snapshot                  mysql                     2m50s

&lt;/code&gt;&lt;/pre&gt;
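&lt;p&gt;If &lt;em&gt;READYTOUSE&lt;/em&gt; were to stay at &lt;em&gt;false&lt;/em&gt;, describing the &lt;em&gt;VolumeSnapshot&lt;/em&gt; is a quick way to see the status and events reported by the snapshot controller: &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Inspect the snapshot status and any related events
kubectl describe volumesnapshot mysql-snapshot -n mysql
&lt;/code&gt;&lt;/pre&gt;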
&lt;h3&gt;Restore MySQL database using volume snapshot&lt;/h3&gt;
&lt;p&gt;Before showing the database restore, I will first delete a table from the MySQL database to simulate a loss of data. Then, I will perform the database recovery from the created volume snapshot.&lt;/p&gt;
&lt;h4&gt;Delete table&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ mysql -h 127.0.0.1 -uroot -pCfeDemo@123 -P 41797 
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 5
Server version: 5.6.51 MySQL Community Server (GPL)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type &apos;help;&apos; or &apos;\h&apos; for help. Type &apos;\c&apos; to clear the current input statement.

MySQL [(none)]&gt; use employees;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MySQL [employees]&gt; show tables;
+----------------------+
| Tables_in_employees  |
+----------------------+
| current_dept_emp     |
| departments          |
| dept_emp             |
| dept_emp_latest_date |
| dept_manager         |
| employees            |
| salaries             |
| titles               |
+----------------------+
8 rows in set (0,237 sec)

MySQL [employees]&gt; delete from departments;
Query OK, 9 rows affected (1,523 sec)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you rerun the testing script &lt;em&gt;test_employees_sha.sql&lt;/em&gt;, it will show &lt;em&gt;CRC&lt;/em&gt; and &lt;em&gt;count&lt;/em&gt; failures, which indicate the loss of data in the MySQL database:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ mysql -h 127.0.0.1 -uroot -pCfeDemo@123 -P 41797 -t &amp;#x3C;test_employees_sha.sql
+----------------------+
| INFO                 |
+----------------------+
| TESTING INSTALLATION |
+----------------------+
+--------------+------------------+------------------------------------------+
| table_name   | expected_records | expected_crc                             |
+--------------+------------------+------------------------------------------+
| departments  |                9 | 4b315afa0e35ca6649df897b958345bcb3d2b764 |
| dept_emp     |           331603 | d95ab9fe07df0865f592574b3b33b9c741d9fd1b |
| dept_manager |               24 | 9687a7d6f93ca8847388a42a6d8d93982a841c6c |
| employees    |           300024 | 4d4aa689914d8fd41db7e45c2168e7dcb9697359 |
| salaries     |          2844047 | b5a1785c27d75e33a4173aaa22ccf41ebd7d4a9f |
| titles       |           443308 | d12d5f746b88f07e69b9e36675b6067abb01b60e |
+--------------+------------------+------------------------------------------+
+--------------+------------------+------------------------------------------+
| table_name   | found_records    | found_crc                                |
+--------------+------------------+------------------------------------------+
| departments  |                0 |                                          |
| dept_emp     |                0 |                                          |
| dept_manager |                0 |                                          |
| employees    |           300024 | 4d4aa689914d8fd41db7e45c2168e7dcb9697359 |
| salaries     |          2844047 | b5a1785c27d75e33a4173aaa22ccf41ebd7d4a9f |
| titles       |           443308 | d12d5f746b88f07e69b9e36675b6067abb01b60e |
+--------------+------------------+------------------------------------------+
+--------------+---------------+-----------+
| table_name   | records_match | crc_match |
+--------------+---------------+-----------+
| departments  | not ok        | not ok    |
| dept_emp     | not ok        | not ok    |
| dept_manager | not ok        | not ok    |
| employees    | OK            | ok        |
| salaries     | OK            | ok        |
| titles       | OK            | ok        |
+--------------+---------------+-----------+
+------------------+
| computation_time |
+------------------+
| 00:00:24         |
+------------------+
+---------+--------+
| summary | result |
+---------+--------+
| CRC     | FAIL   |
| count   | FAIL   |
+---------+--------+
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Perform MySQL database restore&lt;/h4&gt;
&lt;p&gt;&lt;strong&gt;1. Scale MySQL database deployment config to 0&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Before starting the MySQL database restore, you first need to stop the mysql Pod by scaling the replicas in the MySQL deployment to 0:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl scale deployment.apps/mysql -n mysql --replicas=0
deployment.apps/mysql scaled
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Type the following command to check that the MySQL database deployment has 0 replicas:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl get all -n mysql
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
service/mysql   ClusterIP   None         &amp;#x3C;none&gt;        3306/TCP   28m

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mysql   0/0     0            0           28m

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/mysql-6974b58d48   0         0         0       28m
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;2. Create a new PVC using volume snapshot&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Here is the PVC YAML manifest file that creates a new PVC &lt;em&gt;mysql-pvc-restore&lt;/em&gt; from the volume snapshot &lt;em&gt;mysql-snapshot&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ cat mysql-pvc-restore.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc-restore
  namespace: mysql
  labels:
    app: mysql
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  dataSource:
    name: mysql-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io



$ kubectl apply -f mysql-pvc-restore.yaml 
persistentvolumeclaim/mysql-pvc-restore created


&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You will see that the new PVC &lt;em&gt;mysql-pvc-restore&lt;/em&gt;, together with its PV, has been created:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl get persistentvolumeclaims -n mysql
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS               AGE
mysql-pvc           Bound    pvc-3e55e9b3-097f-4ddf-bdcb-60825a7905ec   1Gi        RWO            gl-sbp-frank-gl1-sstor01   33m
mysql-pvc-restore   Bound    pvc-92940c36-eb1d-4de5-9c1e-57261ccbecad   1Gi        RWO            gl-sbp-frank-gl1-sstor01   8s



$ kubectl get persistentvolumes
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                                                                                 STORAGECLASS               REASON   AGE

pvc-3e55e9b3-097f-4ddf-bdcb-60825a7905ec   1Gi        RWO            Delete           Bound    mysql/mysql-pvc                                                                                                       gl-sbp-frank-gl1-sstor01            42m
pvc-92940c36-eb1d-4de5-9c1e-57261ccbecad   1Gi        RWO            Delete           Bound    mysql/mysql-pvc-restore                                                                                               gl-sbp-frank-gl1-sstor01            8m48s
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;3. Edit MySQL deployment config&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Type the following command to edit the MySQL deployment config and change the PVC name from &lt;em&gt;mysql-pvc&lt;/em&gt; to &lt;em&gt;mysql-pvc-restore&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl edit deployment.apps/mysql -n mysql
…
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pvc-restore
…

deployment.apps/mysql edited
&lt;/code&gt;&lt;/pre&gt;
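&lt;p&gt;If you prefer a non-interactive change (for example, from a script), the same modification could be made with a &lt;code&gt;kubectl patch&lt;/code&gt; command along these lines (a sketch; the volume name &lt;em&gt;mysql-persistent-storage&lt;/em&gt; is the one shown in the deployment above): &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Point the existing volume at the restored PVC without opening an editor
kubectl patch deployment.apps/mysql -n mysql -p \
  &apos;{&quot;spec&quot;:{&quot;template&quot;:{&quot;spec&quot;:{&quot;volumes&quot;:[{&quot;name&quot;:&quot;mysql-persistent-storage&quot;,&quot;persistentVolumeClaim&quot;:{&quot;claimName&quot;:&quot;mysql-pvc-restore&quot;}}]}}}}&apos;
&lt;/code&gt;&lt;/pre&gt;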
&lt;p&gt;&lt;strong&gt;4. Scale MySQL database deployment config back to 1&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Start the mysql Pod by scaling the replicas in the MySQL deployment back to 1:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl scale deployment.apps/mysql -n mysql --replicas=1
deployment.apps/mysql scaled



$ kubectl get all -n mysql
NAME                         READY   STATUS    RESTARTS   AGE
pod/mysql-697499cd4c-k4phg   1/1     Running   0          13s

NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
service/mysql   ClusterIP   None         &amp;#x3C;none&gt;        3306/TCP   36m

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mysql   1/1     1            1           36m

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/mysql-697499cd4c   1         1         1       107s
replicaset.apps/mysql-6974b58d48   0         0         0       36m
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;5. Verify MySQL database data records&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;You can connect to the MySQL database service and rerun the testing script. You should see that the testing script now reports everything as &lt;em&gt;OK&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl port-forward service/mysql -n mysql :3306
Forwarding from 127.0.0.1:43959 -&gt; 3306
Forwarding from [::1]:43959 -&gt; 3306
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ mysql -h 127.0.0.1 -uroot -pCfeDemo@123 -P 43959
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.6.51 MySQL Community Server (GPL)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type &apos;help;&apos; or &apos;\h&apos; for help. Type &apos;\c&apos; to clear the current input statement.

MySQL [(none)]&gt; show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| employees          |
| mysql              |
| performance_schema |
+--------------------+
4 rows in set (0,238 sec)




$ mysql -h 127.0.0.1 -uroot -pCfeDemo@123 -P 43959 -t &amp;#x3C;test_employees_sha.sql 
+----------------------+
| INFO                 |
+----------------------+
| TESTING INSTALLATION |
+----------------------+
+--------------+------------------+------------------------------------------+
| table_name   | expected_records | expected_crc                             |
+--------------+------------------+------------------------------------------+
| departments  |                9 | 4b315afa0e35ca6649df897b958345bcb3d2b764 |
| dept_emp     |           331603 | d95ab9fe07df0865f592574b3b33b9c741d9fd1b |
| dept_manager |               24 | 9687a7d6f93ca8847388a42a6d8d93982a841c6c |
| employees    |           300024 | 4d4aa689914d8fd41db7e45c2168e7dcb9697359 |
| salaries     |          2844047 | b5a1785c27d75e33a4173aaa22ccf41ebd7d4a9f |
| titles       |           443308 | d12d5f746b88f07e69b9e36675b6067abb01b60e |
+--------------+------------------+------------------------------------------+
+--------------+------------------+------------------------------------------+
| table_name   | found_records    | found_crc                                |
+--------------+------------------+------------------------------------------+
| departments  |                9 | 4b315afa0e35ca6649df897b958345bcb3d2b764 |
| dept_emp     |           331603 | d95ab9fe07df0865f592574b3b33b9c741d9fd1b |
| dept_manager |               24 | 9687a7d6f93ca8847388a42a6d8d93982a841c6c |
| employees    |           300024 | 4d4aa689914d8fd41db7e45c2168e7dcb9697359 |
| salaries     |          2844047 | b5a1785c27d75e33a4173aaa22ccf41ebd7d4a9f |
| titles       |           443308 | d12d5f746b88f07e69b9e36675b6067abb01b60e |
+--------------+------------------+------------------------------------------+
+--------------+---------------+-----------+
| table_name   | records_match | crc_match |
+--------------+---------------+-----------+
| departments  | OK            | ok        |
| dept_emp     | OK            | ok        |
| dept_manager | OK            | ok        |
| employees    | OK            | ok        |
| salaries     | OK            | ok        |
| titles       | OK            | ok        |
+--------------+---------------+-----------+
+------------------+
| computation_time |
+------------------+
| 00:00:29         |
+------------------+
+---------+--------+
| summary | result |
+---------+--------+
| CRC     | OK     |
| count   | OK     |
+---------+--------+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This indicates that the database restore using the volume snapshot succeeded and the MySQL database data is back!&lt;/p&gt;
&lt;h3&gt;Summary&lt;/h3&gt;
&lt;p&gt;In this blog post, I described persistent volumes, volume snapshots, and the CSI driver for K8s. Using the HPE CSI driver for K8s, I demonstrated how to create a volume snapshot of a MySQL database and how to restore the database using the created volume snapshot in the cluster. The volume snapshot capability can be easily integrated with third-party tools like &lt;a href=&quot;https://www.veeam.com/products/cloud/kubernetes-data-protection.html&quot;&gt;Kasten K10 by Veeam&lt;/a&gt; as an automatic backup and recovery solution. It can significantly simplify the process and enhance the robustness of data management in a K8s cluster. Feel free to take a look at my blog post &lt;a href=&quot;https://developer.hpe.com/blog/how-to-backup-and-restore-stateful-applications-on-kubernetes-using-kasten-k10-in-hpe-greenlake-for-private-cloud-enterprise/&quot;&gt;How to backup and restore stateful app using Kasten 10&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Please keep coming back to the &lt;a href=&quot;https://developer.hpe.com/blog/&quot;&gt;HPE Developer Community blog&lt;/a&gt; to learn more about HPE GreenLake for Private Cloud Enterprise.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Deploying Cribl Stream Containers on HPE GreenLake for Private Cloud Enterprise]]></title><description><![CDATA[Hewlett Packard Enterprise and Cribl bring together breakthrough technology to optimize and modernize observability data management…]]></description><link>https://developer.hpe.com/deploying-cribl-stream-containers-on-hpe-greenlake/</link><guid isPermaLink="false">https://developer.hpe.com/deploying-cribl-stream-containers-on-hpe-greenlake/</guid><pubDate>Wed, 17 Jan 2024 21:25:38 GMT</pubDate><content:encoded>&lt;p&gt;Hewlett Packard Enterprise and &lt;a href=&quot;https://cribl.io/&quot;&gt;Cribl&lt;/a&gt; bring together breakthrough technology to optimize and modernize observability data management, offering new levels of performance and platform independence.&lt;/p&gt;
&lt;p&gt;The challenges of security and log management are only partly solved by existing software solutions. HPE and Cribl address the remaining problems of optimizing, routing, and replaying logs to provide independence from the industry’s software products in this space. HPE provides a robust way to run multiple log management software solutions and the Cribl Stream in a modern, easy-to-use, and robust platform. Together, HPE and Cribl reduce the total cost of ownership of log management systems by optimizing the software, accelerating the infrastructure, and reducing management costs.&lt;/p&gt;
&lt;p&gt;Cribl Stream is an observability and data streaming platform for real-time processing of logs, metrics, traces, and observability data that enables the ITops, SRE, SecOps and observability teams to collect the data they want, shape the data in the formats they need, route the data wherever they want it to go, and replay data on-demand; thereby enabling customers to observe more and spend less, to have choice and flexibility, and to provide control over their data. HPE GreenLake is a private and hybrid cloud service that delivers the benefits of public cloud to your on-premises environment.&lt;/p&gt;
&lt;p&gt;Cribl software can be deployed as standalone software or run on a fully managed HPE GreenLake platform to offer further ease of use for organizations that want the benefits of cloud in an on-premises private cloud offering.&lt;/p&gt;
&lt;p&gt;Deploying Cribl Stream containers on HPE GreenLake is a simple and effective way to implement a vendor-agnostic observability pipeline. Cribl Stream containers offer a number of advantages, including agility, cost savings, security, and management simplicity. &lt;a href=&quot;https://www.hpe.com/us/en/software/marketplace/cribl-stream.html&quot;&gt;Cribl software&lt;/a&gt; is available in the&lt;a href=&quot;https://www.hpe.com/us/en/software/marketplace.html&quot;&gt; HPE GreenLake Marketplace&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Deploying Cribl Stream containers on HPE GreenLake offers a number of advantages, including:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Agility:&lt;/strong&gt; Cribl Stream containers can be deployed quickly and easily on HPE GreenLake, giving you the agility to scale your observability pipeline up or down as needed.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Cost savings:&lt;/strong&gt; Cribl Stream containers can help you reduce the cost of your observability pipeline by optimizing your data storage and processing through data reduction, data normalization and log routing.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Security:&lt;/strong&gt; Cribl Stream containers can help you secure your data by encrypting it at rest and in transit.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Management simplicity:&lt;/strong&gt; HPE GreenLake provides a single management console for managing your Cribl Stream containers, making it easy to keep your observability pipeline running smoothly.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/cribl-on-hpe-architecture.png&quot; alt=&quot;Cribl architecture diagram&quot; title=&quot;Cribl architecture&quot;&gt;&lt;/p&gt;
&lt;h4&gt;Prerequisites&lt;/h4&gt;
&lt;p&gt;Before you deploy Cribl Stream containers on HPE GreenLake, you will need to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Have an active HPE GreenLake agreement, a deployed HPE GreenLake for Private Cloud Enterprise environment, and an account on &lt;a href=&quot;https://common.cloud.hpe.com/&quot;&gt;https://common.cloud.hpe.com/&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Install the HPE Ezmeral Runtime Enterprise &lt;a href=&quot;https://docs.ezmeral.hpe.com/runtime-enterprise/56/reference/kubernetes/tenant-project-administration/Dashboard__Kubernetes_TenantProject_Administrator.html&quot;&gt;Kubectl executable&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Create an HPE Ezmeral Runtime Enterprise &lt;a href=&quot;https://youtu.be/HSYWa2MalF4&quot;&gt;Kubernetes cluster&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Install the Cribl Stream &lt;a href=&quot;https://docs.cribl.io/stream/getting-started-guide/&quot;&gt;Kubernetes operator&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Steps to deploy Cribl Stream containers on HPE GreenLake:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Create a Cribl Stream deployment file. This file will specify the Cribl Stream containers that you want to deploy, as well as the resources that they need.&lt;/li&gt;
&lt;li&gt;Deploy the Cribl Stream containers to your HPE GreenLake cluster using the Cribl Stream Kubernetes operator.&lt;/li&gt;
&lt;li&gt;Verify that the Cribl Stream containers are running and healthy.&lt;/li&gt;
&lt;li&gt;Configure Cribl Stream to collect and process your data.&lt;/li&gt;
&lt;li&gt;Send your data to your analysis platform of choice.&lt;/li&gt;
&lt;/ol&gt;
&lt;h4&gt;Example deployment file&lt;/h4&gt;
&lt;p&gt;The following example deployment file deploys a Cribl Stream container that collects and processes logs from a Kubernetes cluster:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: cribl-stream
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cribl-stream
  template:
    metadata:
      labels:
        app: cribl-stream
    spec:
      containers:
      - name: cribl-stream
        image: cribl/cribl-stream:latest
        ports:
        - containerPort: 9000
        volumeMounts:
        - name: cribl-stream-config
          mountPath: /etc/cribl-stream
      volumes:
      - name: cribl-stream-config
        configMap:
          name: cribl-stream-config
&lt;/code&gt;&lt;/pre&gt;
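&lt;p&gt;The deployment above mounts a &lt;code&gt;ConfigMap&lt;/code&gt; named &lt;code&gt;cribl-stream-config&lt;/code&gt;, which must exist before the pod can start. Here is a minimal sketch of creating that &lt;code&gt;ConfigMap&lt;/code&gt; and applying the deployment; the local directory and file names are assumptions for illustration:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Create the ConfigMap referenced by the deployment from a local directory
# of Cribl Stream configuration files (directory name assumed for illustration).
kubectl create configmap cribl-stream-config --from-file=cribl-stream-config/ -n cribl

# Apply the deployment file (file name assumed) into the cribl namespace.
kubectl apply -f cribl-stream-deployment.yaml -n cribl

# Verify that the pod reaches the Running state.
kubectl get pods -n cribl -l app=cribl-stream
&lt;/code&gt;&lt;/pre&gt;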
&lt;h4&gt;Deploying Cribl Stream using Helm Charts&lt;/h4&gt;
&lt;p&gt;The Cribl Stream Helm charts can be found on GitHub (&lt;a href=&quot;https://github.com/criblio/helm-charts&quot;&gt;https://github.com/criblio/helm-charts&lt;/a&gt;). This walkthrough assumes that the namespace is set to &lt;code&gt;cribl&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Log into the cloud CLI or a jump box and issue the following commands:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;export KUBECONFIG=&amp;#x3C;path_to_kube_settings&gt;
kubectl get nodes -n cribl
kubectl get svc  -n cribl
&lt;/code&gt;&lt;/pre&gt;
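&lt;p&gt;The &lt;code&gt;helm install&lt;/code&gt; commands later in this walkthrough pull charts from a repository alias named &lt;code&gt;cribl&lt;/code&gt;. If that repository has not been added yet, a sketch along these lines should prepare the environment; the repository URL is an assumption based on the criblio/helm-charts project, so verify it against the project documentation:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Add the Cribl Helm chart repository (URL assumed; verify against criblio/helm-charts)
# and refresh the local chart index.
helm repo add cribl https://criblio.github.io/helm-charts
helm repo update

# Create the namespace used throughout this walkthrough, if it does not already exist.
kubectl create namespace cribl
&lt;/code&gt;&lt;/pre&gt;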
&lt;p&gt;Label the leader node and the worker nodes:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;kubectl label nodes &amp;#x3C;leader_node&gt; stream=leader
kubectl label nodes &amp;#x3C;worker_node&gt; stream=worker
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Validate by running:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;kubectl get nodes --show-labels
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Create and modify the &lt;code&gt;values.yaml&lt;/code&gt; files for the worker and leader nodes. For the leader node, create a file named &lt;code&gt;leader_values.yaml&lt;/code&gt; and modify line 97:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;nodeSelector:
     stream: leader
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;For the worker nodes, create a file named &lt;code&gt;worker_values.yaml&lt;/code&gt; and modify line 97:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;nodeSelector:
     stream: worker
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, make sure the labels for your leader and worker nodes are set correctly.&lt;/p&gt;
&lt;p&gt;To do this, first get a list of all the nodes and the labels associated with them:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;kubectl get nodes --show-labels
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, identify the nodes and make sure to label the nodes according to their role for this deployment.&lt;/p&gt;
&lt;p&gt;Here is an example of setting the host &lt;code&gt;k8s-cribl-master-t497j-92m66.gl-hpe.net&lt;/code&gt; as a leader:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;kubectl label nodes k8s-cribl-master-t497j-92m66.gl-hpe.net stream=leader
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here is an example of setting the host &lt;code&gt;k8s-cribl-wor8v32g-cdjdc-8tkhn.gl-hpe.net&lt;/code&gt; as a worker node:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;kubectl label nodes k8s-cribl-wor8v32g-cdjdc-8tkhn.gl-hpe.net stream=worker
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you accidentally mislabel a node and need to change the label, you can overwrite it with this command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;kubectl label nodes k8s-cribl-wor8v32g-cdjdc-876nq.gl-hpe.net stream=worker --overwrite=true
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once the labels have been set, you are ready to run the helm command and deploy Cribl Stream on your environment. The first command will deploy the Cribl Leader node:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;helm install --generate-name cribl/logstream-leader -f leader_values.yaml -n cribl
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When successful, you will see output similar to what&apos;s shown below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;NAME: logstream-leader-1696441333
LAST DEPLOYED: Wed Oct  4 17:42:16 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note that this will deploy the leader node with the parameters found in the &lt;code&gt;leader_values.yaml&lt;/code&gt; file and into the namespace &lt;code&gt;cribl&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Next, deploy the worker nodes using the &lt;code&gt;worker_values.yaml&lt;/code&gt; file into the namespace &lt;code&gt;cribl&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;helm install --generate-name cribl/logstream-workergroup -f worker_values.yaml -n cribl
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When successful, you will see output similar to what&apos;s shown below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;NAME: logstream-workergroup-1696441592
LAST DEPLOYED: Wed Oct  4 17:46:36 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now you can validate the deployment by running the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;kubectl get svc
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You should see the following results:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;NAME                                   TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)              AGE
kubernetes                             ClusterIP      10.96.0.1        &amp;#x3C;none&gt;        443/TCP              22d
logstream-leader-1696441333            LoadBalancer   10.111.152.178   &amp;#x3C;pending&gt;     9000:31200/TCP       9m56s
logstream-leader-1696441333-internal   ClusterIP      10.105.14.164    &amp;#x3C;none&gt;        9000/TCP,4200/TCP    9m56s
logstream-workergroup-1696441592       LoadBalancer   10.102.239.137   &amp;#x3C;pending&gt;     10001:30942/TCP,9997:32609/TCP,10080:32174/TCP,10081:31898/TCP,5140:30771/TCP,8125:31937/TCP,9200:32134/TCP,8088:32016/TCP,10200:32528/TCP,10300:30836/TCP   5m35s
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note: The names and IP addresses will differ from the above example. To test that the deployment was successful, you can run the following command and log into your deployment using localhost and port 9000:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;kubectl port-forward service/logstream-leader-1696441333 9000:9000 &amp;#x26;
&lt;/code&gt;&lt;/pre&gt;
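&lt;p&gt;With the port-forward running, a quick connectivity check from the same shell confirms that the leader UI answers on port 9000 before you open it in a browser:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Any HTTP response headers (even a redirect to the login page) indicate the
# leader UI is reachable through the forwarded port.
curl -sI http://localhost:9000
&lt;/code&gt;&lt;/pre&gt;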
&lt;h4&gt;Uninstalling Cribl using Helm&lt;/h4&gt;
&lt;p&gt;You can uninstall the Cribl deployment for both the leader and worker nodes by running the following commands respectively:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;helm uninstall logstream-leader-1696441333 -n default
helm uninstall logstream-workergroup-1696441592 -n default
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Make sure to use your leader and worker group name when uninstalling Cribl from your deployment.&lt;/p&gt;
&lt;h4&gt;Configuring Cribl Stream&lt;/h4&gt;
&lt;p&gt;Once you have &lt;a href=&quot;https://docs.cribl.io/stream/deploy-kubernetes-leader/&quot;&gt;deployed the Cribl Stream&lt;/a&gt; containers, you need to configure them to collect and process your data. You can do this by editing the Cribl Stream configuration file. The Cribl Stream documentation provides detailed instructions on how to configure Cribl Stream.&lt;/p&gt;
&lt;h4&gt;Sending your data to your analysis platform of choice&lt;/h4&gt;
&lt;p&gt;Once you have configured Cribl Stream to collect and process your data, you need to send it to your analysis platform of choice. Cribl Stream supports a wide range of analysis platforms, including Elasticsearch, Splunk, and Kafka.&lt;/p&gt;
&lt;h4&gt;Conclusion&lt;/h4&gt;
&lt;p&gt;For more information on Cribl Stream, check out &lt;a href=&quot;https://www.hpe.com/psnow/doc/a50006507enw&quot;&gt;Optimized Enterprise Logging Solution With HPE Ezmeral And Cribl Business white paper&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;For more blog posts related to HPE Ezmeral Software, keep coming back to the HPE Developer Community blog and search on HPE Ezmeral.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[How Multimodal LLMs Work]]></title><description><![CDATA[External blog]]></description><link>https://developer.hpe.com/how-multimodal-llms-work/</link><guid isPermaLink="false">https://developer.hpe.com/how-multimodal-llms-work/</guid><pubDate>Wed, 17 Jan 2024 18:30:00 GMT</pubDate><content:encoded>&lt;p&gt;External blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Working with Benk: A storage provisioning and IO performance benchmark suite for Kubernetes]]></title><description><![CDATA[Recently Hewlett Packard Enterprise (HPE) published an open source benchmark suite for storage drivers capable of dynamically provisioning…]]></description><link>https://developer.hpe.com/working-with-benk-a-storage-provisioning-and-io-performance-benchmark-suite-for-kubernetes/</link><guid isPermaLink="false">https://developer.hpe.com/working-with-benk-a-storage-provisioning-and-io-performance-benchmark-suite-for-kubernetes/</guid><pubDate>Fri, 12 Jan 2024 20:03:16 GMT</pubDate><content:encoded>&lt;p&gt;Recently Hewlett Packard Enterprise (HPE) published an open source benchmark suite for storage drivers capable of dynamically provisioning persistent volumes to Kubernetes. The suite is called &lt;a href=&quot;https://github.com/hpe-storage/benk&quot;&gt;Benk&lt;/a&gt;, a name that plays on the word bench, as in benchmark, and Kubernetes. Benk is used internally at HPE for mapping performance metrics around the provisioning process itself as well as for IO performance. It’s still a bit rough around the edges and not feature complete, but it’s a very useful tool to capture performance across large swaths of configurations with a high degree of automation and repeatability without much user attention.&lt;/p&gt;
&lt;p&gt;A few of the features include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Highly customizable rendering of Kubernetes resources through kustomize templates&lt;/li&gt;
&lt;li&gt;A simple Kubernetes batch job that manages all the provisioning, decommissioning and benchmarking&lt;/li&gt;
&lt;li&gt;A single configuration file per job that abstracts Kubernetes constructs and IO parameters&lt;/li&gt;
&lt;li&gt;Use of industry standard Flexible I/O tester (FIO) for filesystem benchmarking&lt;/li&gt;
&lt;li&gt;Easy-to-build-your-own output templates using Jinja2 with the rich metrics in JSON format&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Let’s walk through a practical example of configuring, running and reporting a benchmark with Benk.&lt;/p&gt;
&lt;h1&gt;Prerequisites&lt;/h1&gt;
&lt;p&gt;At the time of writing, parts of Benk can only be run on Mac and Linux due to a dependency on Bash. It will of course run in WSL on Windows. Besides cloning the &lt;a href=&quot;https://github.com/hpe-storage/benk&quot;&gt;GitHub repository&lt;/a&gt; with &lt;code&gt;git&lt;/code&gt;, a recent version of &lt;code&gt;kubectl&lt;/code&gt; and Python 3.x need to be installed.&lt;/p&gt;
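&lt;p&gt;A quick way to confirm the tooling is in place before going further:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-text&quot;&gt;# Confirm the required tools are available on your workstation.
git --version
kubectl version --client
python3 --version
&lt;/code&gt;&lt;/pre&gt;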
&lt;p&gt;Some proficiency with scripts, JSON and Kubernetes is helpful to better understand the workflows.&lt;/p&gt;
&lt;h1&gt;Hello Benk!&lt;/h1&gt;
&lt;p&gt;The synopsis section of Benk on GitHub highlights the main steps to run the default job. It&apos;s a good measuring stick one can use to understand if the lights come on and determine if the system under test is ready for a more complex job.&lt;/p&gt;
&lt;p&gt;I&apos;ll walk you through each step and explain more as I go along. Descriptions are below the commands unless noted.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-text&quot;&gt;git clone https://github.com/hpe-storage/benk &amp;#x26;&amp;#x26; cd benk
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After cloning and entering it, the &lt;code&gt;benk&lt;/code&gt; directory is considered your home. Your logs will go into &lt;code&gt;logs&lt;/code&gt;, your workload configurations are in &lt;code&gt;kustomize/overlays&lt;/code&gt; and the Jinja2 templates are in &lt;code&gt;jinja2&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-text&quot;&gt;pip3 install -r requirements.txt
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will install the required Python packages for the Benk &lt;code&gt;outputter.py&lt;/code&gt; script. Python is not required if you don’t intend to render any output and you just want to use Benk for load generation.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-text&quot;&gt;cp kustomize/base/config-dist.env kustomize/base/config.env
cp kustomize/base/storageclass-dist.yaml kustomize/base/storageclass.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The two “dist” files are the base of your configuration: a &lt;code&gt;config.env&lt;/code&gt; file contains information about your storage backend that will be inserted into the &lt;code&gt;StorageClass&lt;/code&gt; &lt;code&gt;Secret&lt;/code&gt; reference. While the base configuration is heavily biased towards the HPE CSI Driver for Kubernetes, any storage driver that uses a &lt;code&gt;StorageClass&lt;/code&gt; for dynamic provisioning of filesystem &lt;code&gt;Persistent Volume Claims&lt;/code&gt; (PVCs) can be used. How you use the &lt;code&gt;storageclass.yaml&lt;/code&gt; to configure your driver and pertinent details is at your discretion.&lt;/p&gt;
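&lt;p&gt;Before editing the copies, it can help to inspect the distributed defaults and see which storage classes already exist on the cluster; for example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-text&quot;&gt;# Review the distributed defaults before editing your copies.
cat kustomize/base/config.env
cat kustomize/base/storageclass.yaml

# List the StorageClasses already present on the cluster for reference.
kubectl get storageclass
&lt;/code&gt;&lt;/pre&gt;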
&lt;pre&gt;&lt;code class=&quot;language-text&quot;&gt;kubectl create ns benk
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;All namespaced resources will be provisioned into the “benk” &lt;code&gt;Namespace&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-text&quot;&gt;kubectl apply -k kustomize/overlays/default
kubectl wait -n benk  --for=condition=complete job/benk
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will create the default job. Wait for it to complete. It runs an 80/20 read/write random 8K workload for 30 seconds on a single replica &lt;code&gt;Deployment&lt;/code&gt; using a single &lt;code&gt;PVC&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-text&quot;&gt;kubectl logs -n benk job/benk | jq
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will render the log in colorized and indented JSON. The log is pretty substantial and intentionally left out from the blog. An example log entry from the above job is available &lt;a href=&quot;https://github.com/hpe-storage/benk/tree/main/jinja2&quot;&gt;on GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;There’s also a tiny output template provided in the repository &lt;code&gt;jinja2/example-default.yaml.j2&lt;/code&gt; that pulls some data out of the log file.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-text&quot;&gt;kubectl logs -n benk job/benk | ./src/benk/outputter.py -t jinja2/example-default.yaml.j2 -l-
---
report:
  name: Example YAML template
  logfile: &amp;#x3C;stdin&gt;
  jobs:
    - threads: 1
      runtime: 49s
      iops: 1258
      bandwidth: 10MB/s
      bs: 8k
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Congratulations. You&apos;ve just completed the three corners of running Benk: you&apos;ve configured the environment and a job, run the job successfully, and distilled the results into something more easily readable by humans for our demonstration.&lt;/p&gt;
&lt;h1&gt;The sequencer&lt;/h1&gt;
&lt;p&gt;The core of Benk, for now, is to run preconfigured jobs with kustomize. It may seem like a lot of work for very little output. That’s why the repository contains two important scripts, &lt;code&gt;sequencer.sh&lt;/code&gt; and &lt;code&gt;sequencer-cluster.sh&lt;/code&gt;. These scripts will help you run multiple jobs and structure the output to create meatier reports.&lt;/p&gt;
&lt;p&gt;First, copy the “default” kustomize directory into eight separate directories.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-text&quot;&gt;for i in {1..8}; do cp -a kustomize/overlays/default kustomize/overlays/mytest-${i}; done
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, edit each &lt;code&gt;config.env&lt;/code&gt; in each directory. I use &lt;code&gt;vi&lt;/code&gt; to “:wn” myself through the templates.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-text&quot;&gt;vi kustomize/overlays/mytest-*/config.env
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In each iteration, double &lt;code&gt;workloadThreads&lt;/code&gt; so you end up with a sequence like 1, 2, 4, 8, 16, 32, 64 and 128. Would it not be useful to run these in sequence and report the results in the same template used previously, to understand whether the system scales when adding more threads to the workload? (The entire demo environment runs in virtual machines on decade-old hardware, so please don’t judge.)&lt;/p&gt;
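&lt;p&gt;If you prefer not to hand-edit eight files, a small loop can set the thread counts for you. This sketch assumes &lt;code&gt;workloadThreads&lt;/code&gt; is stored as a key=value pair in each &lt;code&gt;config.env&lt;/code&gt;, so check the exact key name in your copies first:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-text&quot;&gt;# Set workloadThreads to 1, 2, 4, ... 128 across the eight mytest- overlays.
# Assumes the setting is stored as a key=value pair in config.env.
# (On macOS, use: sed -i &apos;&apos; -e ...)
threads=1
for i in {1..8}; do
  sed -i &quot;s/^workloadThreads=.*/workloadThreads=${threads}/&quot; kustomize/overlays/mytest-${i}/config.env
  threads=$((threads * 2))
done
&lt;/code&gt;&lt;/pre&gt;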
&lt;p&gt;Run the sequence:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-text&quot;&gt;./sequencer.sh mytest-
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;What happens here is that the sequencer script will use globbing to find all overlays that have the prefix “mytest-”. Be mindful of how you name things to avoid executing unexpected jobs.&lt;/p&gt;
&lt;p&gt;Note that the same report can be used for all the jobs.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-text&quot;&gt;./src/benk/outputter.py -l logs/run-mytest-&amp;#x3C;unique timestamp&gt;.log -t jinja2/example-default.yaml.j2
---
report:
  name: Example YAML template
  logfile: logs/run-mytest-20240111161937.log
  jobs:
    - threads: 1
      runtime: 55s
      iops: 1664
      bandwidth: 13MB/s
      bs: 8k
    - threads: 2
      runtime: 49s
      iops: 3291
      bandwidth: 26MB/s
      bs: 8k
    - threads: 4
      runtime: 48s
      iops: 8044
      bandwidth: 63MB/s
      bs: 8k
    - threads: 8
      runtime: 53s
      iops: 12075
      bandwidth: 94MB/s
      bs: 8k
    - threads: 16
      runtime: 46s
      iops: 16357
      bandwidth: 128MB/s
      bs: 8k
    - threads: 32
      runtime: 51s
      iops: 17284
      bandwidth: 135MB/s
      bs: 8k
    - threads: 64
      runtime: 48s
      iops: 17489
      bandwidth: 137MB/s
      bs: 8k
    - threads: 128
      runtime: 54s
      iops: 18761
      bandwidth: 147MB/s
      bs: 8k
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Observe that it shows diminishing returns when adding more than 16 threads to this workload.&lt;/p&gt;
&lt;h1&gt;A/B comparison&lt;/h1&gt;
&lt;p&gt;A/B testing is something that has gained popularity in human interaction testing for websites. It is also a crucial testing methodology used to analyze system performance: change a single parameter and rerun the exact same test to compare the outcomes. We’re in luck, as Benk allows reporting on two log files at once, with just a slightly different data structure being fed to the Jinja2 templates.&lt;/p&gt;
&lt;p&gt;Now, change the block size for the previous workload example. We’re going to test out if increasing the block size will increase the bandwidth for the workload.&lt;/p&gt;
&lt;p&gt;Change the “workloadBlockSize” to “64k” with the &lt;code&gt;vi&lt;/code&gt; ceremony.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-text&quot;&gt;vi kustomize/overlays/mytest-*/config.env
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Re-run the sequencer.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-text&quot;&gt;./sequencer.sh mytest-
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You&apos;ll have to use a different template, as Jinja2 now has to deal with managing two log files at a time. The syntax for the &lt;code&gt;outputter.py&lt;/code&gt; script is also slightly different.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-text&quot;&gt;./src/benk/outputter.py -a logs/run-mytest-&amp;#x3C;unique timestamp&gt;.log -b logs/run-mytest-&amp;#x3C;unique timestamp&gt;.log -t jinja2/example-default-ab.md.j2
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This report template will summarize the findings in a markdown table suitable to include in a GitHub pull request or similar to illustrate a certain discovery.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-text&quot;&gt;| Threads | A (MB/s) | B (MB/s) | Diff |
| ------- | -------- | -------- | ---- |
| 1       | 13       | 55       | 4.2x |
| 2       | 26       | 106      | 4.1x |
| 4       | 63       | 181      | 2.9x |
| 8       | 94       | 450      | 4.8x |
| 16      | 128      | 661      | 5.2x |
| 32      | 135      | 748      | 5.5x |
| 64      | 137      | 840      | 6.1x |
| 128     | 147      | 833      | 5.7x |
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Clearly, the performance increases nearly 5x across the board. There is also a discrepancy in the dataset. Can you spot which data point it is?&lt;/p&gt;
&lt;p&gt;A good exercise would be to factor latency into the report in order to understand its impact. There will be one, as performance plateaus when you add more workloads, usually as a result of higher latency. But how high?&lt;/p&gt;
&lt;h1&gt;More sequencers and examples!&lt;/h1&gt;
&lt;p&gt;There’s also a &lt;code&gt;sequencer-cluster.sh&lt;/code&gt; script that allows users to orchestrate load generation across multiple clusters attached to one or many storage systems to isolate problems with high concurrency. The possibilities are quite endless.&lt;/p&gt;
&lt;p&gt;You can learn more about multi-cluster scaling in &lt;a href=&quot;https://github.com/hpe-storage/benk#multi-cluster-testing&quot;&gt;the GitHub repository&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;You&apos;ll also find more practical examples in &lt;a href=&quot;https://github.com/hpe-storage/benk/tree/main/examples&quot;&gt;the GitHub repository&lt;/a&gt; stemming from real world performance testing conducted by the HPE Hybrid Cloud solutions team.&lt;/p&gt;
&lt;h1&gt;Summary&lt;/h1&gt;
&lt;p&gt;Hopefully, this blog post gets you started on ideas you’d like to build and use cases you want to explore. The possibilities are endless. Bear in mind that Benk is provided as-is. In addition, to be fully transparent, this is a very early implementation that HPE chose to open source in order to better collaborate with customers and partners to isolate performance bottlenecks. For example, not all advertised features in &lt;code&gt;config.env&lt;/code&gt; have been implemented yet and the CLI is not yet completed.&lt;/p&gt;
&lt;p&gt;HPE invites collaboration and accepts pull requests to Benk on GitHub. The &lt;a href=&quot;https://github.com/hpe-storage/benk/issues/2&quot;&gt;first issue&lt;/a&gt; discusses a solution on how to collapse initial configuration and reporting into a single intuitive Python CLI.&lt;/p&gt;
&lt;p&gt;Let us know what you’re building or have questions. The team behind the tool is available on HPE Developer Community Slack in the &lt;a href=&quot;https://hpedev.slack.com/archives/C81QZ4X62&quot;&gt;#Kubernetes&lt;/a&gt; channel. Sign up &lt;a href=&quot;https://developer.hpe.com/slack-signup&quot;&gt;here&lt;/a&gt; and sign in at &lt;a href=&quot;https://hpedev.slack.com&quot;&gt;hpedev.slack.com&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Get started with the foundational APIs for the HPE GreenLake platform – Part 1: Introduction to the APIs]]></title><description><![CDATA[Editor's note: This blog post series may refer to older release of the HPE GreenLake platform APIs. For information about the current…]]></description><link>https://developer.hpe.com/get-started-with-the-foundational-apis-for-the-hpe-greenlake-edge-to-cloud-platform-–-part-1-introduction-to-the-apis/</link><guid isPermaLink="false">https://developer.hpe.com/get-started-with-the-foundational-apis-for-the-hpe-greenlake-edge-to-cloud-platform-–-part-1-introduction-to-the-apis/</guid><pubDate>Fri, 12 Jan 2024 15:33:48 GMT</pubDate><content:encoded> &lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;p&gt;&lt;strong&gt;Editor&apos;s note:&lt;/strong&gt; This blog post series may refer to an older release of the HPE GreenLake platform APIs. For information about the current release of the HPE GreenLake service APIs, please visit the &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/&quot;&gt;HPE GreenLake API catalog&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;HPE’s unified management plane for hybrid cloud, the HPE GreenLake platform, provides a set of &lt;em&gt;common services&lt;/em&gt; that are used by cloud services that run on top of the platform. Cloud services rely on these common services for user authentication, authorization, device and subscription management, monitoring, audit trails and more.&lt;/p&gt;
&lt;p&gt;The HPE GreenLake platform now provides a collection of RESTful application programming interfaces (APIs) for these foundational, common services.&lt;/p&gt;
&lt;p&gt;If you are looking for a quick way to discover what you can do with the HPE GreenLake platform APIs using popular tools that don’t require programming (such as &lt;a href=&quot;https://www.postman.com/product/what-is-postman/&quot;&gt;Postman&lt;/a&gt;), this blog post series is for you. This series will offer you the opportunity to automate IT operations via these APIs to achieve velocity, be more agile, get consistent results, reduce costs, and scale.&lt;/p&gt;
&lt;p&gt;In Part 1 of this series, I will help you get started with the HPE GreenLake platform APIs by taking advantage of a Postman collection I built for you. I will describe the current set of APIs for HPE GreenLake platform. I will also show you how to obtain an OAuth access token to make subsequent secure REST API calls to the HPE GreenLake platform APIs.&lt;/p&gt;
&lt;p&gt;In &lt;a href=&quot;https://developer.hpe.com/blog/get-started-with-the-foundational-apis-for-the-hpe-greenlake-edge-to-cloud-platform-%E2%80%93-part-2-configuring-and-managing-a-workspace/&quot;&gt;Part 2&lt;/a&gt; and &lt;a href=&quot;https://developer.hpe.com/blog/get-started-with-the-foundational-apis-for-the-hpe-greenlake-edge-to-cloud-platform-%E2%80%93-part-3-tracking-activities-and-monitoring-health/&quot;&gt;Part 3&lt;/a&gt; of the blog series, I will take you on a deep dive into the foundational HPE GreenLake platform APIs through a typical customer scenario in which one automates IT operations via APIs such as managing users and resources, tracking activities and monitoring the overall health of services and devices in a &lt;em&gt;&lt;strong&gt;Standard Enterprise&lt;/strong&gt;&lt;/em&gt; workspace. This type of workspace is a single-tenant environment for a single customer and organization.&lt;/p&gt;
&lt;p&gt;Let’s embark on this exciting journey into the HPE GreenLake platform APIs.&lt;/p&gt;
&lt;h2&gt;Introducing the foundational APIs for the HPE GreenLake platform&lt;/h2&gt;
&lt;p&gt;The foundational APIs for the HPE GreenLake platform services are designed to enable IT administrators and IT operators to automate IT operations by &lt;strong&gt;programmatically&lt;/strong&gt; managing users and resources in an HPE GreenLake platform &lt;strong&gt;workspace&lt;/strong&gt;.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; A workspace is an identity and access management boundary. HPE GreenLake customers can organize their users, hardware and services into one or more workspaces. Users and resources must be in a workspace to be operated according to specific user permissions.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;For example, the current set of APIs for common platform services allows HPE GreenLake customers and partners to &lt;strong&gt;programmatically&lt;/strong&gt; add users, add devices and associated subscriptions (licenses), track users’ activities and monitor the overall health of the managed services and devices in the workspace.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Important note:&lt;/strong&gt; This set of APIs for common platform services differentiates from the specific APIs for the cloud services that HPE GreenLake administrators can deploy in their workspace to operate and manage workloads and their underlying infrastructure for networking, compute, storage and data services. You can find more information about these services’ specific APIs in the HPE Developer Community portal: &lt;a href=&quot;https://developer.hpe.com/greenlake/aruba-central/home/&quot;&gt;HPE Aruba Networking Central&lt;/a&gt;, &lt;a href=&quot;https://developer.hpe.com/greenlake/hpe-greenlake-for-compute-ops-management/home/&quot;&gt;HPE Compute Ops Management&lt;/a&gt;, and &lt;a href=&quot;https://developer.hpe.com/greenlake/data-services-on-the-hpe-greenlake-platform/home/&quot;&gt;HPE GreenLake for Data Services&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The set of APIs for common platform services includes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Identity and Access management (IAM) services:&lt;/strong&gt; Identity and Access management services control access to HPE GreenLake workspace. The services ensure that users are granted appropriate access rights based on their roles. IAM includes the following services:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Workspace management service:&lt;/strong&gt; Workspace management service allows you to manage workspace information and operate tenants for a Managed Service Provider (MSP) workspace.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Identity management service:&lt;/strong&gt; Identity management service allows you to manage the workspace users. The service allows you to invite users to join the workspace, retrieve a list of existing users in the workspace and delete users from the workspace.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;API client credentials service:&lt;/strong&gt; HPE GreenLake API Client Credentials service allows programmatic access to manage workspace API Client credentials.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Location management:&lt;/strong&gt; Location management service manages service delivery information (SDI), including device location and support contact information.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Device inventory management:&lt;/strong&gt; Device service maintains the inventory of all devices (networking, compute and storage devices) connected to the workspace.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Subscription management:&lt;/strong&gt; Subscription management service maintains the subscriptions and licenses for cloud management of devices for networking, compute and storage, and cloud software as-a-service.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Service catalog management:&lt;/strong&gt; Service Catalog service allows you to manage the workspace services and service managers that are used to operate and manage workloads and their underlying infrastructure. These services run on top of the HPE GreenLake platform.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Audit log management:&lt;/strong&gt; Audit log service records the occurrence of events emitted by any device or service. These logs can also be used for auditing purposes, to track user activity, investigate breaches and ensure compliance with regulatory requirements.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Wellness event service:&lt;/strong&gt; Wellness service presents wellness events for several HPE services and products in the workspace. In the near future, it will also enable you to open a support ticket corresponding to a wellness event when appropriate.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These APIs conform to &lt;a href=&quot;https://spec.openapis.org/oas/latest.html&quot;&gt;OpenAPI specifications&lt;/a&gt; and are &lt;a href=&quot;https://restfulapi.net/&quot;&gt;RESTful&lt;/a&gt;. This makes them easy to learn, discoverable by code, and accessible with any programming language. By using the OAuth protocol to authenticate and authorize API client applications, secure and time-limited access to the collection of HPE GreenLake platform service APIs is provided via an access token. The token ensures that client API requests access HPE GreenLake platform services and resources securely and according to the authorization granted to the user who created the access token.&lt;/p&gt;
&lt;p&gt;The REST APIs support standard HTTP request methods (GET, POST, PATCH, PUT and DELETE). An HTTP request is made by providing the single unified domain endpoint (&lt;em&gt;&lt;a href=&quot;https://global.api.greenlake.hpe.com&quot;&gt;https://global.api.greenlake.hpe.com&lt;/a&gt;&lt;/em&gt;) for the HPE GreenLake platform APIs, an HTTP request method, an access token and a data payload. The HTTP response for these requests is returned in JSON format.&lt;/p&gt;
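&lt;p&gt;As a concrete illustration of this request shape, here is a hedged &lt;code&gt;curl&lt;/code&gt; sketch. The resource path is a placeholder rather than a real route, and the access token is the one you will generate in the following sections:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# General shape of a call to the HPE GreenLake platform APIs.
# The &amp;#x3C;service&gt;/&amp;#x3C;version&gt;/&amp;#x3C;resource&gt; path is a placeholder; consult the
# API catalog for the exact route of the service you are calling.
curl -s -X GET \
  &quot;https://global.api.greenlake.hpe.com/&amp;#x3C;service&gt;/&amp;#x3C;version&gt;/&amp;#x3C;resource&gt;&quot; \
  -H &quot;Authorization: Bearer $ACCESS_TOKEN&quot; \
  -H &quot;Accept: application/json&quot;
&lt;/code&gt;&lt;/pre&gt;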
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/&quot;&gt;HPE GreenLake platform documentation&lt;/a&gt; for these APIs leverages OpenAPI specifications and associated reference documentations. The documentation provides a complete explanation of the operations supported by these APIs for common HPE GreenLake platform services, as well as sample requests and responses.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Postman&lt;/h2&gt;
&lt;p&gt;Postman is an API platform for building and using APIs. You can sign into your Postman account either from the &lt;a href=&quot;https://identity.getpostman.com/login&quot;&gt;web application&lt;/a&gt; or from the desktop application. If you don’t have a Postman account already, you can sign up for a Postman account &lt;a href=&quot;https://identity.getpostman.com/signup&quot;&gt;here&lt;/a&gt; or download the desktop application &lt;a href=&quot;https://www.postman.com/downloads/&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Preparing to use the APIs for common platform services&lt;/h2&gt;
&lt;p&gt;As an IT administrator, before you can work with the APIs for common HPE GreenLake platform services, you will need to:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Create an HPE account and a company workspace for your organization. Ensure you get assigned the &lt;em&gt;&lt;strong&gt;Workspace Administrator&lt;/strong&gt;&lt;/em&gt; role in HPE GreenLake platform for your organization workspace.&lt;/li&gt;
&lt;/ol&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; You can refer to the &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00120892en_us&amp;#x26;page=index.html&quot;&gt;HPE GreenLake platform user guide&lt;/a&gt; to learn how to create an HPE account, a workspace and assign roles.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Generate personal API client credentials for the &lt;em&gt;HPE GreenLake platform&lt;/em&gt;. The credentials consist of a &lt;em&gt;ClientID&lt;/em&gt; and &lt;em&gt;ClientSecret&lt;/em&gt; pair that represents the permissions granted to the user who creates the personal API client credentials. &lt;strong&gt;Save&lt;/strong&gt; the &lt;em&gt;ClientID&lt;/em&gt; and &lt;em&gt;ClientSecret&lt;/em&gt; to a safe location. You will need the credentials to generate and refresh an expired OAuth based access token when making REST API calls. Once the token is generated or refreshed, it can be used as an &lt;strong&gt;authorization bearer token&lt;/strong&gt; to make further secure REST API calls to the APIs for HPE GreenLake platform common services.&lt;/li&gt;
&lt;/ol&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; To make REST API calls to HPE GreenLake platform APIs, you will need to select “&lt;strong&gt;HPE GreenLake platform&lt;/strong&gt;” as an option when configuring personal API client credentials. To learn how to create personal API client credentials for HPE GreenLake platform APIs, check out the &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00120892en_us&amp;#x26;page=GUID-23E6EE78-AAB7-472C-8D16-7169938BE628.html&quot;&gt;Creating a personal API client&lt;/a&gt; and &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00120892en_us&amp;#x26;page=GUID-771F9B3A-B029-43E5-A38F-6D8D04178FAB.html&quot;&gt;Requesting access to HPE GreenLake platform APIs&lt;/a&gt; in the HPE GreenLake platform user guide.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;
&lt;p&gt;Gather the unique identifier of your organization workspace: Go to &lt;strong&gt;Manage Workspace&lt;/strong&gt; in the &lt;a href=&quot;https://common.cloud.hpe.com/&quot;&gt;HPE GreenLake platform Graphical User Interface&lt;/a&gt; to get the identifier of your workspace. &lt;strong&gt;Save&lt;/strong&gt; the &lt;em&gt;workspace identifier&lt;/em&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Obtain the unique &lt;em&gt;&lt;strong&gt;identifier&lt;/strong&gt;&lt;/em&gt; of your services deployed in your workspace. These services are typically HPE Aruba Networking Central, Data Services, and HPE Compute Ops Management used to manage and operate your networking, storage and compute infrastructure. One method is to use your Internet browser, log in to the HPE GreenLake platform UI and launch the &lt;strong&gt;inspect element&lt;/strong&gt; feature of your browser to inspect the &lt;strong&gt;Network&lt;/strong&gt; activity. In your workspace, select &lt;strong&gt;Services&lt;/strong&gt; and check the network activity in the inspect element. In the left-hand panel, select &lt;strong&gt;provisions&lt;/strong&gt;, and select &lt;strong&gt;Response&lt;/strong&gt; in the Network activity panel to display the list of services provisioned in your workspace. &lt;strong&gt;Save&lt;/strong&gt; the &lt;em&gt;&lt;strong&gt;identifier&lt;/strong&gt;&lt;/em&gt; (displayed as &lt;em&gt;application_id&lt;/em&gt; in the &lt;em&gt;Response&lt;/em&gt; tab) for each of your &lt;em&gt;PROVISIONED&lt;/em&gt; services. Another method is to use the &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/service-catalog/public/&quot;&gt;Service catalog API&lt;/a&gt; to list the services that are &lt;em&gt;&lt;strong&gt;provisioned&lt;/strong&gt;&lt;/em&gt; in your workspace. You will need the information about the provisioned services when making REST API calls to the foundational, common services for the HPE GreenLake platform.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Get information (email address) for a user to invite to your workspace.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Get information for a Networking device (Serial Number and MAC address), or for a Storage device (Serial Number), or for a Compute device (Serial Number and Product ID) to allow you to manage these devices from the HPE GreenLake platform workspace using the APIs. The product’s serial number and other identifying details are information you received in the product order confirmation email. You will also need to get the associated subscription keys for these devices.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Setting the Postman collection for the HPE GreenLake platform APIs&lt;/h2&gt;
&lt;p&gt;As you know, one of the benefits of working within a community is the ability to take advantage of open collaboration, sharing hints, tools, and resources. Although you can build your own Postman collection by downloading the OpenAPI specification files from the &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/&quot;&gt;HPE GreenLake documentation&lt;/a&gt; and importing them to Postman, you can take advantage of the Postman collection I built for you. The Postman collection for the &lt;em&gt;APIs for common HPE GreenLake platform services&lt;/em&gt; is available in the &lt;a href=&quot;https://github.com/hpe-dev-incubator/GLP-API-Tooling/tree/main/Postman-Collections&quot;&gt;HPE Developer Community tooling repository&lt;/a&gt;. Simply download the JSON file and import it to Postman. Then set the collection variables as explained in the next section.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; As HPE will enrich the APIs for common platform services over time, I will update the Postman collection as appropriate. So, check out the link above regularly to download the latest release of the Postman collection.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;Defining the HPE GreenLake platform APIs collection variables&lt;/h3&gt;
&lt;p&gt;The collection I built makes use of collection variables that are available throughout all the REST API requests in the collection. Select the collection and then select the &lt;strong&gt;Variables&lt;/strong&gt; tab as shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/blog-part1-collection-variables-image1.png&quot; alt=&quot;Figure 1: HPE GreenLake platform API collections variables&quot; title=&quot;Figure 1: HPE GreenLake platform API collections variables&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;span style=&quot;color:grey; font-family:Arial; font-size:1em&quot;&gt; Figure 1: HPE GreenLake platform API collections variables&lt;/span&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;p&gt;Define the &lt;strong&gt;current value&lt;/strong&gt; of the collection variables to match your HPE GreenLake platform workspace context:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;BaseUrl:&lt;/strong&gt; This variable defines the base URL of the REST API requests. It matches the single unified domain endpoint (&lt;em&gt;&lt;a href=&quot;https://global.api.greenlake.hpe.com&quot;&gt;https://global.api.greenlake.hpe.com&lt;/a&gt;&lt;/em&gt;) to APIs for common HPE GreenLake platform services.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;ClientId&lt;/strong&gt; and &lt;strong&gt;ClientSecret:&lt;/strong&gt; These variables should be set with the value of your personal Client Application API credentials you previously created using the HPE GreenLake platform GUI. These variables are used to request an OAuth access token by authenticating with the authorization server referenced in the &lt;strong&gt;sso_URI&lt;/strong&gt; variable or the &lt;strong&gt;TokenIssuerURL&lt;/strong&gt; variable.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;sso_URI:&lt;/strong&gt; This variable is the URI of the OAuth authorization server. If your organization has set up their own HPE GreenLake SAML Single Sign-On (SSO) authorization server to create an access token, replace the current default value with your SSO URI. Otherwise you can keep the value for this variable as currently set to &lt;em&gt;sso.common.cloud.hpe.com/as/token.oauth2&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;TokenIssuerURL:&lt;/strong&gt; This variable is the URL of the Token Issuer you obtain when you create your personal API client credentials from the HPE GreenLake GUI for your workspace.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;BearerToken:&lt;/strong&gt; Do not edit this variable. Keep the value field empty. The collection variable &lt;em&gt;BearerToken&lt;/em&gt; will be set automatically upon successful execution of the &lt;em&gt;&lt;strong&gt;Generate AccessToken&lt;/strong&gt;&lt;/em&gt; API call as explained in the next step.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Workspace ID:&lt;/strong&gt; This variable should be set with the value of the &lt;em&gt;identifier&lt;/em&gt; of your Workspace you previously saved.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Aruba_Application_Id&lt;/strong&gt;, &lt;strong&gt;COM_Application_Id&lt;/strong&gt;, and &lt;strong&gt;DSCC_Application_Id:&lt;/strong&gt; These variables should be set with the value of the &lt;em&gt;identifier&lt;/em&gt; of the services you deployed in your workspace to manage your infrastructure services for networking, compute and storage.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;GLP_Application_Id:&lt;/strong&gt; This variable is the &lt;em&gt;identifier&lt;/em&gt; of the HPE GreenLake platform. This is always set to value “00000000-0000-0000-0000-000000000000”.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;ServiceManagerName:&lt;/strong&gt; This variable should be set with the name of a cloud service manager used to manage your infrastructure equipment for compute, storage and networking. For example &quot;&lt;em&gt;Aruba Central&lt;/em&gt;&quot;, &quot;&lt;em&gt;Compute Ops Management&lt;/em&gt;&quot;, or &quot;&lt;em&gt;Data Services&lt;/em&gt;&quot;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;RegionId:&lt;/strong&gt; This variable should be set with the name of the region where your service managers are installed. For example &quot;&lt;em&gt;eu-central&lt;/em&gt;&quot;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Do not edit the other variables. Keep the value field empty. The collection variables will be set automatically upon successful execution of REST API calls using Postman &lt;em&gt;&lt;strong&gt;Scripts&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;h2&gt;Acquire an OAuth access token as your session bearer token&lt;/h2&gt;
&lt;p&gt;The APIs for common HPE GreenLake platform services use a bearer token as an authorization type to ensure that all REST API requests access authorized platform services securely. So, you first need to obtain a token from the OAuth authorization server before you can make any REST API calls to the HPE GreenLake platform services. To do so, proceed as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;From your collection, generate the token using the &lt;em&gt;&lt;strong&gt;Generate AccessToken&lt;/strong&gt;&lt;/em&gt; API call or the &lt;em&gt;&lt;strong&gt;Generate AccessToken with TokenIssuerURL&lt;/strong&gt;&lt;/em&gt; from the &lt;em&gt;&lt;strong&gt;Step1-Generate Token&lt;/strong&gt;&lt;/em&gt; folder. Click the &lt;strong&gt;Send&lt;/strong&gt; button.&lt;/li&gt;
&lt;li&gt;Verify you get a status code of 200 for a successful response with token value in the response body.&lt;/li&gt;
&lt;/ul&gt;
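&lt;p&gt;If you want to perform the same step outside Postman, the token request can be sketched with &lt;code&gt;curl&lt;/code&gt;. This assumes the standard OAuth 2.0 client-credentials grant and uses the authorization server referenced by the &lt;code&gt;sso_URI&lt;/code&gt; collection variable; adjust the URL if your workspace uses its own SSO or the &lt;code&gt;TokenIssuerURL&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Request an access token with the client-credentials grant (assumed flow).
# CLIENT_ID and CLIENT_SECRET are the personal API client credentials saved earlier.
curl -s -X POST &quot;https://sso.common.cloud.hpe.com/as/token.oauth2&quot; \
  -H &quot;Content-Type: application/x-www-form-urlencoded&quot; \
  -d &quot;grant_type=client_credentials&quot; \
  -d &quot;client_id=${CLIENT_ID}&quot; \
  -d &quot;client_secret=${CLIENT_SECRET}&quot;
&lt;/code&gt;&lt;/pre&gt;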
&lt;p&gt;The &lt;em&gt;Generate AccessToken&lt;/em&gt; API call has defined a Post-response script in the &lt;strong&gt;Scripts&lt;/strong&gt; tab (formerly known as Postman Tests script) to programmatically set the collection variable &lt;em&gt;BearerToken&lt;/em&gt; as shown in the picture below. The programmatically defined token is then used to authenticate any subsequent REST API calls.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/blog-part1-bearertoken-testscript-image2.png&quot; alt=&quot;Figure 2: Defining collection variables programmatically in script&quot; title=&quot;Figure 2: Defining collection variables programmatically in script&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;span style=&quot;color:grey; font-family:Arial; font-size:1em&quot;&gt; Figure 2: Defining collection variables programmatically in script&lt;/span&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Access bearer tokens expire after 15 minutes for HPE GreenLake platform tokens. Run the &lt;em&gt;Generate AccessToken&lt;/em&gt; API request or the &lt;em&gt;Generate AccessToken with TokenIssuerURL&lt;/em&gt; API request again to refresh the token before or after it expires.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Make subsequent secure REST API calls to HPE GreenLake platform services&lt;/h2&gt;
&lt;p&gt;All subsequent REST API requests are authenticated by presenting the access token as the authorization bearer token to the APIs for common HPE GreenLake platform services. The services validate the access token, and if valid, serve the requests.&lt;/p&gt;
&lt;p&gt;As shown in the two pictures below, all REST API requests in the collection will inherit the authorization bearer token that is specified at the collection level:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/blog-part1-collection-level-authorization-image3.png&quot; alt=&quot;Figure 3: Authorization type (bearer token) specified at the collection level&quot; title=&quot;Figure 3: Authorization type (bearer token) specified at the collection level&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;span style=&quot;color:grey; font-family:Arial; font-size:1em&quot;&gt; Figure 3: Authorization type (bearer token) specified at the collection level&lt;/span&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;img src=&quot;/img/blog-part1-collection-api-call-authorization-image4.png&quot; alt=&quot;Figure 4: REST API request with authorization type inherited from parent collection&quot; title=&quot;Figure 4: REST API request with authorization type inherited from parent collection&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;span style=&quot;color:grey; font-family:Arial; font-size:1em&quot;&gt; Figure 4: REST API request with authorization type inherited from parent collection&lt;/span&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;p&gt;To validate the access token, pick the next REST API call &lt;em&gt;&lt;strong&gt;Get workspace information&lt;/strong&gt;&lt;/em&gt; from the &lt;em&gt;&lt;strong&gt;Step1-Generate Token&lt;/strong&gt;&lt;/em&gt; folder. Click the &lt;strong&gt;Send&lt;/strong&gt; button and verify you get a &lt;em&gt;status code of 200&lt;/em&gt; for a successful response. You will get a JSON representation of your HPE GreenLake platform workspace. An example is shown below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
    &quot;id&quot;: &quot;&amp;#x3C;Your WorkspaceId&gt;&quot;,
    &quot;type&quot;: &quot;workspace&quot;,
    &quot;generation&quot;: 26,
    &quot;createdAt&quot;: &quot;2021-10-05T15:37:45.991228&quot;,
    &quot;updatedAt&quot;: &quot;2023-12-08T08:42:31.708206&quot;,
    &quot;workspaceName&quot;: &quot;&amp;#x3C;Your Workspace Name&gt;&quot;,
    &quot;createdBy&quot;: &quot;&amp;#x3C;name of person who created the workspace&gt;&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Congratulations!&lt;/strong&gt; You have placed your first API calls to the common HPE GreenLake platform services using Postman.&lt;/p&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;This blog post helps you get started with the HPE GreenLake platform APIs by taking advantage of a Postman collection. It explains the preparation steps you need to take to use the APIs for common platform services and walks you through the steps required to obtain an OAuth access token to make secure REST API calls to the HPE GreenLake platform APIs.&lt;/p&gt;
&lt;p&gt;You can get the Postman collection from the &lt;a href=&quot;https://github.com/hpe-dev-incubator/GLP-API-Tooling/tree/main/Postman-Collections&quot;&gt;HPE Developer Community tooling GitHub repository&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Don’t miss &lt;a href=&quot;https://developer.hpe.com/blog/get-started-with-the-foundational-apis-for-the-hpe-greenlake-edge-to-cloud-platform-%E2%80%93-part-2-configuring-and-managing-a-workspace/&quot;&gt;Part 2&lt;/a&gt; and &lt;a href=&quot;https://developer.hpe.com/blog/get-started-with-the-foundational-apis-for-the-hpe-greenlake-edge-to-cloud-platform-%E2%80%93-part-3-tracking-activities-and-monitoring-health/&quot;&gt;Part 3&lt;/a&gt; of this blog series, where you will further explore the rest of the collection to learn how you, as an IT administrator of the HPE GreenLake platform, can configure and manage workspace resources (users’ identity, devices and subscriptions) and how you can track activities within your workspace and monitor overall health of services and devices in your workspace.&lt;/p&gt;
&lt;p&gt;If you’re interested in trying out what I just discussed, you might want to check out one of our hands-on Workshops-on-Demand that lets you play with the HPE GreenLake APIs mentioned in this blog post. The workshops are free, available 24/7, and very easy to use. They give you a real-world experience without any risk. Check out our &lt;a href=&quot;https://developer.hpe.com/hackshack/workshops&quot;&gt;catalog of workshops&lt;/a&gt;, register for the one you’re interested in and go! It’s as simple as that.&lt;/p&gt;
&lt;p&gt;If you still have any questions regarding the HPE GreenLake platform APIs, join the &lt;a href=&quot;https://developer.hpe.com/slack-signup/&quot;&gt;HPE Developer Community Slack Workspace&lt;/a&gt; and start a discussion in our &lt;a href=&quot;https://hpedev.slack.com/archives/C02EG5XFK8Q&quot;&gt;#hpe-greenlake-api&lt;/a&gt; channel. We’re always here to help.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Start the year with new 5G network testing tools, AI modeling paradigms, and more!]]></title><link>https://developer.hpe.com/2024-January-11/</link><guid isPermaLink="false">https://developer.hpe.com/2024-January-11/</guid><pubDate>Thu, 11 Jan 2024 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Introduction to GPU Programming in Chapel]]></title><description><![CDATA[E﻿xternal blog]]></description><link>https://developer.hpe.com/introduction-to-gpu-programming-in-chapel/</link><guid isPermaLink="false">https://developer.hpe.com/introduction-to-gpu-programming-in-chapel/</guid><pubDate>Thu, 11 Jan 2024 06:25:58 GMT</pubDate><content:encoded>&lt;p&gt;E﻿xternal blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Mobile ALOHA, AppAgent, Virtual Token Counter, and Time Vectors]]></title><description><![CDATA[E﻿xternal blog]]></description><link>https://developer.hpe.com/determined-ai-weekly-update-6/</link><guid isPermaLink="false">https://developer.hpe.com/determined-ai-weekly-update-6/</guid><pubDate>Tue, 09 Jan 2024 09:55:36 GMT</pubDate><content:encoded>&lt;p&gt;E﻿xternal blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Be an HPE Developer blogger!]]></title><description><![CDATA[Want to participate more in the HPE Developer Community but don’t know how? Have you ever dreamt about being published? Consider writing an…]]></description><link>https://developer.hpe.com/be-an-hpe-dev-blogger/</link><guid isPermaLink="false">https://developer.hpe.com/be-an-hpe-dev-blogger/</guid><pubDate>Thu, 04 Jan 2024 13:03:49 GMT</pubDate><content:encoded>&lt;p&gt;Want to participate more in the HPE Developer Community but don’t know how? Have you ever dreamt about being published? Consider writing an article for the &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE Developer blog!&lt;/a&gt; Share your knowledge with like-minded developers, designers, data scientists, and IT admins all over the world. Maybe you’ve just discovered something new you’d like to share. Maybe you found an easier way to do something. Others in the community have probably encountered similar situations and would be eager to hear your tips.&lt;/p&gt;
&lt;p&gt;As daunting as it may feel, writing a blog post is not as hard as you think. The HPE Developer Community team has templates you can use to organize your thoughts and draft an article. You can submit your post in the way that best suits your needs. And we’re right here to help you along the way. Not confident about your writing or language skills? No worries… we even have an editor on board who can help make your article shine. Become a star blog contributor, as this post gives you all the tools you need.&lt;/p&gt;
&lt;h2&gt;What’s the blog process?&lt;/h2&gt;
&lt;p&gt;If you’d like to be published on the HPE Developer blog, it’s really very easy. Here are the basic steps:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Draft your article either directly in the Netlify CMS or in one of the templates provided&lt;/li&gt;
&lt;li&gt;Submit your draft for a quick technical and editorial review&lt;/li&gt;
&lt;li&gt;Make the recommended changes&lt;/li&gt;
&lt;li&gt;Once approved, the article will be posted and promoted&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;General guidelines&lt;/h3&gt;
&lt;p&gt;Whenever you’re writing, it’s important to remember who your audience is. Articles placed on the &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE Developer blog&lt;/a&gt; are seen by developers, data scientists, data architects, IT admins and managers worldwide. It is a highly technical audience who will discount anything that reeks of marketing. So, consider your topic and audience carefully. Simple advertisements for a product won’t be published. If you have any question as to the appropriateness of your topic, feel free to &lt;a href=&quot;mailto:hpedev@hpe.com&quot;&gt;reach out to us&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Is there a limit on word count? Not really. You can write a post as short as 250-500 words. You can also write a much longer post. Best industry practices indicate that an ideal blog post length is between 2,100-2,400 words. If your post is very long, the editor may suggest breaking it up into a multi-part series. This often works quite well, providing you with greater visibility in the community.&lt;/p&gt;
&lt;p&gt;If you need to have a post published by a certain date, please let us know in advance so we can prioritize our review appropriately.&lt;/p&gt;
&lt;h2&gt;There are two paths you can take&lt;/h2&gt;
&lt;p&gt;The HPE Developer portal is set up so you can submit your articles directly into the Netlify Content Management System (CMS) and have them reviewed by the HPE Developer Community team via GitHub. This is the preferred method for submitting your drafts. We also accept direct submissions of Microsoft Word or Google Docs files, in case you do not have a GitHub account or require this ability for other reasons. Templates and instructions on how to provide submissions via these documents are provided below.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/blogger-two-paths-2.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The major difference between the two methods is that there is a greater amount of responsibility placed on the writer with the Netlify CMS method. With an external document submission, the HPE Developer Community team takes care of placing the document into the Netlify CMS. The preferred method is to use the Netlify CMS, but for those who feel they might need a bit more assistance or who don’t have a GitHub account, we offer Microsoft Word and Google Docs templates you can use to submit via email.&lt;/p&gt;
&lt;h3&gt;Netlify CMS method:&lt;/h3&gt;
&lt;p&gt;You’ll find complete instructions for entering your content into the Netlify CMS and submitting your post for review in the &lt;a href=&quot;https://github.com/hpe-dev-incubator/hpe-dev-portal/blob/master/docs/ContributorGuide-v2.md&quot;&gt;HPE DEV External Contributor Guide&lt;/a&gt;. Once you have created a new blog entry, entered your content, added your tags and saved your work, your post becomes visible in the Editorial Workflow in the Drafts section. Make sure you move your post entry from the &lt;strong&gt;Drafts&lt;/strong&gt; column to the &lt;strong&gt;In Review&lt;/strong&gt; column in the &lt;strong&gt;Editorial Workflow&lt;/strong&gt;. This will automatically open a Pull Request (PR) in the GitHub repository associated with the Netlify CMS. This indicates to the HPE DEV team that the post is ready, initiating the review process.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: If you need to make edits after you’ve placed it in the &lt;strong&gt;In Review&lt;/strong&gt; column, there is no need to move it back into the &lt;strong&gt;Drafts&lt;/strong&gt; column. In fact, if you do so, it will delete the history, so please just keep it in the &lt;strong&gt;In Review&lt;/strong&gt; column while you make your edits.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;img src=&quot;/img/blogger-workflow.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Microsoft Word Templates:&lt;/h3&gt;
&lt;p&gt;To use the Microsoft Word templates, click on the link. It will automatically download the file to your local workstation. Save it to your local drive and create your article from there. When you are ready, submit your draft &lt;a href=&quot;mailto:hpedev@hpe.com&quot;&gt;to the editor via email&lt;/a&gt; for review.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/8/HPE-DEV-BASIC-BLOG-TEMPLATE-FINAL.docx&quot;&gt;Microsoft Word Basic Blog template&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/8/HPE-DEV-TECHNICAL-TUTORIAL-TEMPLATE-FINAL.docx&quot;&gt;Microsoft Word Technical Tutorial template&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Google Docs Templates:&lt;/h3&gt;
&lt;p&gt;To use the Google Docs templates, click on the link. It will bring you to the Google account where the document is stored. Go to &lt;strong&gt;File&lt;/strong&gt; and pull down to &lt;strong&gt;Make a Copy&lt;/strong&gt;. Save it to your drive and rename the file. Once you have filled in the sections, &lt;em&gt;share the document&lt;/em&gt; with the &lt;a href=&quot;mailto:denis.choukroun@hpe.com&quot;&gt;technical review team&lt;/a&gt; and the &lt;a href=&quot;mailto:dale.rensing@hpe.com&quot;&gt;content editor&lt;/a&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.google.com/document/d/1uAHcsJxavfmC0oRoccjBFI_WmuALDWhOINATiCEoDIw/edit?usp=sharing&quot;&gt;Google Docs Basic Blog template&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.google.com/document/d/1bY0QL0TYgQtzjCF4JpsLbDMvPUMarIwQoVZFjQYej1Y/edit?usp=sharing&quot;&gt;Google Docs Technical Tutorial template&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Remember to include high-resolution images as separate .jpg attachments to the email.&lt;/p&gt;
&lt;h2&gt;Ready to get writing?&lt;/h2&gt;
&lt;p&gt;Here are some tips to help you get your blog read. As you draft your article, keep in mind Aristotle’s advice on how to communicate for impact by breaking your post into three parts: the &lt;strong&gt;Introduction&lt;/strong&gt;, the &lt;strong&gt;Body&lt;/strong&gt;, and the &lt;strong&gt;Summary&lt;/strong&gt;. You want to first introduce what you are going to cover in the post. Then, go into details on your topic, telling the reader everything you want to convey in the body. At the end, reiterate and summarize your key points.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/blogger-aristotle.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;But there are three other things you’ll also want to include: a &lt;strong&gt;Problem Statement&lt;/strong&gt;, a &lt;strong&gt;Benefit Statement&lt;/strong&gt;, and a &lt;strong&gt;Call to Action&lt;/strong&gt;. The first two should be in your introduction, and the last, in your summary.&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;Problem Statement&lt;/strong&gt; piques the interest of the reader. When the reader sees this, they often recognize an issue that’s familiar to them, which encourages further reading. The &lt;strong&gt;Benefit Statement&lt;/strong&gt; almost serves as a summary of what the remainder of the article covers. It explains how the &lt;strong&gt;Problem Statement&lt;/strong&gt; is addressed; the &lt;em&gt;why you should read this&lt;/em&gt;, if you will.&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;Call to Action&lt;/strong&gt; is placed at the end in your summary. This gives you the opportunity to reach out to the reader and ask them to do something. It could be to go to another web page, to connect with you on Slack or Twitter, or to remember salient points from your article.&lt;/p&gt;
&lt;p&gt;A few other things to keep in mind as you draft your article:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Read through a few of the existing &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;blog posts&lt;/a&gt; first to get ideas.&lt;/li&gt;
&lt;li&gt;Avoid the “royal we”. Imagine you are talking to the reader directly and use “I” and “you”, instead.&lt;/li&gt;
&lt;li&gt;Don’t use quotation marks (“  “) for things that are not actual quotes or code requiring this sort of notation. For emphasis, use &lt;em&gt;italics&lt;/em&gt; or &lt;strong&gt;bold.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Include images. Images are often pulled from your blog post and used as part of the social promotion for your post.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;The review process&lt;/h2&gt;
&lt;p&gt;Please note that the length of the review cycle depends on the technical depth of the piece, the writer’s fluency in written English, the immediacy of the need, and current backlog.&lt;/p&gt;
&lt;h3&gt;Netlify CMS:&lt;/h3&gt;
&lt;p&gt;The review process is covered in detail in the &lt;a href=&quot;https://github.com/hpe-dev-incubator/hpe-dev-portal/blob/master/docs/ContributorGuide-v2.md&quot;&gt;HPE DEV External Contributor Guide&lt;/a&gt;. But to quickly summarize:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The HPE Developer Community team will review your markdown copy on the Netlify CMS.&lt;/li&gt;
&lt;li&gt;They’ll offer technical and editorial suggestions, commenting on each line directly on the GitHub Pull Request.&lt;/li&gt;
&lt;li&gt;The writer is notified of the edits by email.&lt;/li&gt;
&lt;li&gt;It is up to the writer to make the suggested edits. The writer may determine that an edit is not appropriate and can comment back.&lt;/li&gt;
&lt;li&gt;Once agreement is established, the post will be published.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Microsoft Word and Google Docs Templates:&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;The submitted draft is reviewed by the team and returned to the writer with Track Changes turned on.&lt;/li&gt;
&lt;li&gt;The writer may accept all changes at once or review them one by one, accepting or rejecting each.&lt;/li&gt;
&lt;li&gt;Once agreement is established, the post will be reformatted into markdown by the HPE Developer Community team and published.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Let’s get writing!&lt;/h2&gt;
&lt;p&gt;Writing articles for the HPE Developer blog can be a fun and rewarding experience. It gives you a chance to work more closely with the HPE Developer Community team and often opens up more opportunities to work together on other assets, like a Workshop-on-Demand or a Munch &amp;#x26; Learn session. We have a number of bloggers who consistently come back to add more to their portfolio.&lt;/p&gt;
&lt;p&gt;We’ve tried to make it as easy for you to contribute as possible, offering multiple ways to create and submit your posts. If you have any questions, please feel free to reach out to us either through &lt;a href=&quot;mailto:hpedev@hpe.com&quot;&gt;email&lt;/a&gt; or &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;Slack&lt;/a&gt;. We’re really looking forward to seeing what you have to offer, and we bet our readers are, too!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Open source monitoring at the HPE Customer Technology Center Böblingen]]></title><description><![CDATA[The challenge T﻿he HPE Customer Technology Center (CTC) Böblingen is a customer visit center in Germany where a wide range of HPE solutions…]]></description><link>https://developer.hpe.com/open-source-monitoring-at-the-hpe-customer-technology-center-böblingen/</link><guid isPermaLink="false">https://developer.hpe.com/open-source-monitoring-at-the-hpe-customer-technology-center-böblingen/</guid><pubDate>Tue, 02 Jan 2024 15:34:10 GMT</pubDate><content:encoded>&lt;h1&gt;The challenge&lt;/h1&gt;
&lt;p&gt;The HPE Customer Technology Center (CTC) Böblingen is a customer visit center in Germany where a wide range of HPE solutions are demonstrated. The core infrastructure of the CTC is based on HPE SimpliVity. Its performance, capacity and uptime have been monitored for some time using Prometheus and Grafana. You can find the implementation of the HPE SimpliVity-Prometheus exporter described in the &lt;a href=&quot;https://www.hpe.com/psnow/doc/a50000514enw&quot; target=&quot;_blank&quot;&gt;HPE SimpliVity Prometheus exporter whitepaper&lt;/a&gt;. Over time, this monitoring environment was extended with the &lt;a href=&quot;https://github.com/hpe-storage/array-exporter&quot; target=&quot;_blank&quot;&gt;HPE array exporter&lt;/a&gt; for HPE storage arrays (HPE Alletra 6000, HPE Alletra 9000 and HPE Alletra MP).&lt;/p&gt;
&lt;p&gt;Over the last few summers, the CTC lab team was challenged with how to cool the growing infrastructure. The cooling capacity initially planned was no longer sufficient given the new generation servers and storage that were deployed in the CTC. The team found that the lab temperature was rising to a level that caused several incidents where HPE ProLiant servers were automatically shutting down to avoid damage. To remedy this situation, the team looked for a CTC lab temperature warning system that would be easy to implement, would provide a &quot;heads up&quot; of any rise in temperature in the CTC, and would allow the team to shut down equipment that was not urgently needed for customer demos - ensuring that the booked customer demos would still be able to run.&lt;/p&gt;
&lt;h1&gt;Redfish® iLO RESTful API as a solution?&lt;/h1&gt;
&lt;p&gt;Each HPE ProLiant server has a lot of temperature sensors spread across the system board, CPU, memory and power supplies. This detailed temperature data is available and visualized on each ILO interface in the Power &amp;#x26; Thermal section:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/ilotemperature.png&quot; alt=&quot;&quot; title=&quot;ILO Temperature Information&quot;&gt;&lt;/p&gt;
&lt;p&gt;The first sensor, 01-Inlet Ambient temperature, measures the inlet temperature at the front of the server and can be used as an indicator of the temperature of the overall CTC environment. Now that I had identified a usable data point, an automated reading of the temperature values was needed. Since I had already used the HPE SimpliVity REST API to build the HPE SimpliVity-Prometheus exporter used to monitor the core infrastructure, it was only natural that I would look into the &lt;a href=&quot;https://developer.hpe.com/platform/ilo-restful-api/home/&quot; target=&quot;_blank&quot;&gt;Redfish® iLO RESTful API&lt;/a&gt;. The Redfish® API ecosystem is an open industry-standard specification (published by the Distributed Management Task Force, &lt;a href=&quot;http://www.dmtf.org/standards/redfish&quot; target=&quot;_blank&quot;&gt;DMTF&lt;/a&gt;) and provides remote server provisioning, inventory and monitoring.&lt;/p&gt;
&lt;p&gt;A directory of the available Redfish® resources at an ILO interface can be retrieved with the following RESTful API command: &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-http&quot;&gt;GET {{baseUrl}}/redfish/v1/resourcedirectory
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I used &lt;a href=&quot;https://www.postman.com/&quot; target=&quot;_blank&quot;&gt;Postman&lt;/a&gt; as a developer tool to retrieve and search the resource directory. The above RESTful API command was used in Postman with a basic username/password authorization at one of the ILOs in the CTC, where the baseUrl is defined as &lt;a href=&quot;https://iloIPaddress&quot;&gt;https://iloIPaddress&lt;/a&gt;. I searched in the obtained resource directory for &quot;thermal&quot; in order to find the correct command to retrieve thermal data:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/redfishresourcedirectory.png&quot; alt=&quot;&quot; title=&quot;Redfish resource directory (Postman output)&quot;&gt;&lt;/p&gt;
&lt;p&gt;I found that I could retrieve thermal server data using the following RESTful command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-http&quot;&gt;GET /redfish/v1/chassis/1/thermal
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Using this GET command with Postman confirmed that I could get the values of all temperature sensors of the server:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/redfishtemparture.png&quot; alt=&quot;&quot; title=&quot;ILO Redfish thermal information (Postman output)&quot;&gt;&lt;/p&gt;
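&lt;p&gt;If you prefer to script this check rather than click through Postman, the same call can be made with the DMTF Redfish Python library. The short sketch below is purely illustrative (the ILO address and credentials are placeholders); it simply mirrors the library calls that the exporter described later in this post relies on:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import redfish

# Placeholder values - replace with a real ILO address and a read-only account
ILO_URL = &quot;https://10.1.40.5&quot;
REDFISHOBJ = redfish.redfish_client(base_url=ILO_URL, username=&quot;Prometheus&quot;, password=&quot;password&quot;)
REDFISHOBJ.login()

# Same resource that the Postman call above returned
thermal = REDFISHOBJ.get(&quot;/redfish/v1/chassis/1/thermal&quot;).obj

# Print every enabled temperature sensor, e.g. the 01-Inlet Ambient reading
for sensor in thermal[&quot;Temperatures&quot;]:
    if sensor[&quot;Status&quot;][&quot;State&quot;] == &quot;Enabled&quot;:
        print(sensor[&quot;Name&quot;], sensor[&quot;ReadingCelsius&quot;])

REDFISHOBJ.logout()
&lt;/code&gt;&lt;/pre&gt;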
&lt;h1&gt;Building the HPE server exporter&lt;/h1&gt;
&lt;p&gt;Once I had the Prometheus-Grafana based monitoring established in the Customer Technology Center and knew the RESTful command required to retrieve the data I was looking for, the only missing piece was a Prometheus exporter for the HPE ILO interfaces and the corresponding Grafana dashboards. This required me to write another Prometheus exporter - one designed for the HPE ILO interface. I decided to use Python as the programming language for this connector, because I could build on the knowledge gained when I developed the HPE SimpliVity-Prometheus exporter.&lt;/p&gt;
&lt;p&gt;First, I had to decide which Redfish library I wanted to use - the &lt;a href=&quot;https://github.com/DMTF/python-redfish-library&quot; target=&quot;_blank&quot;&gt;DMTF Redfish Python Library&lt;/a&gt; or the &lt;a href=&quot;https://github.com/HewlettPackard/python-ilorest-library&quot; target=&quot;_blank&quot;&gt;HPE Python Redfish library&lt;/a&gt; - as they cannot both coexist in the same Python environment. I decided to use the DMTF Redfish Python library, since I didn&apos;t need the additional features of the HPE Python Redfish library for my use case. The retrieved data can easily be made digestible for Prometheus with the prometheus_client Python library. The additional libraries that I used are mainly for reading the input data in XML format (lxml etree), for the permanent monitoring loop (time, sys and os) and for logging (datetime). Hence, I needed the following libraries included in my Python script:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from lxml import etree 
import time
from datetime import datetime
from prometheus_client import Counter, Gauge, start_http_server, Info
import sys
import redfish
import os
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I decided to use one username for all ILOs that I wanted to add into the monitoring realm. A login with read-only access privileges was sufficient for the Prometheus exporter user defined locally on the ILOs.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/ilouser.png&quot; alt=&quot;&quot; title=&quot;ILO Prometheus user&quot;&gt;&lt;/p&gt;
&lt;p&gt;Testing the initial Prometheus exporter showed that I could consolidate multiple server ILOs into a single exporter while keeping the data collection time below 30 to 60 seconds. The ILO Prometheus exporter itself has a simple structure - I didn&apos;t want to complicate things unless absolutely necessary:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/prometheusconnectorstructure.png&quot; alt=&quot;&quot; title=&quot;ILO Prometheus Connector - Architecture&quot;&gt;&lt;/p&gt;
&lt;p&gt;First, the exporter reads the input data, i.e. the ILO IP addresses and the ILO username and password. Next, the exporter retrieves the correct URLs for each monitored server and stores them in a Python dictionary. Afterwards, the script starts the HTTP server and the corresponding Prometheus counters and gauges. Having completed these preparation steps, the script enters an endless loop that captures the start time of the current iteration, retrieves the data from the monitored servers, exposes the captured data, and then waits until the current monitoring interval is completed. Additionally, the input data and monitor URLs are refreshed at least once per day. This allows me to change the list of monitored systems without stopping and restarting the script; I just need to edit the input data file.&lt;/p&gt;
&lt;h2&gt;Main routine&lt;/h2&gt;
&lt;p&gt;As a result, the Python main routine of the ILO Prometheus exporter is as simple as shown below (I removed some of the log commands to improve the readability):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;if __name__ == &quot;__main__&quot;:
    input = getServerList()    
    # Get the monitoring URLs of the server
    monitor_urls = get_server_urls(input[&apos;user&apos;], input[&apos;password&apos;], input[&apos;server&apos;], input[&apos;lfile&apos;])
    # Start the http_server and the counters, gauges
    start_http_server(input[&apos;port&apos;])
    c = Counter(&apos;ilorest_sample&apos;,&apos;ILO REST sample number&apos;)
    node = Gauge(&apos;ilorest_node&apos;,&apos;ILO Node Data&apos;,[&apos;nodename&apos;,&apos;rack&apos;,&apos;nodemetric&apos;,&apos;metricdetail&apos;])
    delta = Gauge(&apos;ConnectorRuntime&apos;,&apos;Time required for last data collection in seconds&apos;)
    inode = Info(&apos;node&apos;,&apos;Additional Node Info&apos;,[&apos;node&apos;])
    # Start the endless loop
    start0 = time.time()   # start time used for the once-per-day refresh of the input data
    while True: 
        t0 = time.time()
        c.inc()      
        for server in monitor_urls:
            try:
                server_metrics = get_server_data(input[&apos;user&apos;], input[&apos;password&apos;], monitor_urls[server], input[&apos;lfile&apos;])
                display_results(node, inode, server_metrics, monitor_urls[server])
            except Exception as ex:
                log=logopen(input[&apos;lfile&apos;])
                logwriter(log,&apos;Exception&apos;)
                logwriter(log,str(ex.__context__))
                logclose(log)
                pass
        t1 = time.time()
        delta.set((t1-t0))
        while ((t1-t0) &amp;#x3C; input[&apos;mintervall&apos;]):
            time.sleep(1.0)
            t1 = time.time()   
        # update once per day the input
        if ( t1 - start0 &gt; 86400):
            start0 = t1
            input = getServerList() 
            monitor_urls = get_server_urls(input[&apos;user&apos;], input[&apos;password&apos;], input[&apos;server&apos;], input[&apos;lfile&apos;])
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As you can see, I used four Python functions within the main routine:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;getServerList&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;get_server_urls&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;get_server_data&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;display_results&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The first one - &lt;em&gt;getServerList&lt;/em&gt; - simply reads the input data from an XML file. Since this is a pretty standard task in Python, I do not want to discuss this function further here. It is sufficient to know that it reads the list of server IPs and additional runtime parameters (username, password, monitoring interval, …) into a Python dictionary that is used in the subsequent parts of the script; a minimal sketch of such a reader is shown below. I want to focus on the other three functions that are more specific to the ILO Prometheus exporter.&lt;/p&gt;
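&lt;p&gt;For reference, here is a minimal sketch of such an input reader. It is not the exact &lt;em&gt;getServerList&lt;/em&gt; from the repository - the real one also decrypts the stored credentials with the key described in the config map section further down - but it shows how the lxml etree library can turn the XML input file into the dictionary keys (&lt;code&gt;user&lt;/code&gt;, &lt;code&gt;password&lt;/code&gt;, &lt;code&gt;server&lt;/code&gt;, &lt;code&gt;port&lt;/code&gt;, &lt;code&gt;mintervall&lt;/code&gt;, &lt;code&gt;lfile&lt;/code&gt;) that the main routine above consumes:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from lxml import etree

def getServerListSketch(path=&quot;data/iloprometheus.xml&quot;):
    # Illustrative only: parse the XML input file into the dictionary layout
    # used by the main routine. The real getServerList additionally decrypts
    # the stored user and password with the Fernet key.
    root = etree.parse(path).getroot()
    servers = []
    for s in root.findall(&quot;server&quot;):
        servers.append({
            &quot;ilo&quot;: s.findtext(&quot;ILO_ip&quot;),
            &quot;Rack&quot;: s.findtext(&quot;Rack&quot;),
            &quot;Loc&quot;: s.findtext(&quot;Loc&quot;),
        })
    return {
        &quot;user&quot;: root.findtext(&quot;username&quot;),
        &quot;password&quot;: root.findtext(&quot;password&quot;),      # encrypted in the real setup
        &quot;server&quot;: servers,
        &quot;port&quot;: int(root.findtext(&quot;port&quot;)),
        &quot;mintervall&quot;: int(root.findtext(&quot;monitoringintervall&quot;)),
        &quot;lfile&quot;: root.findtext(&quot;logfile&quot;),
    }
&lt;/code&gt;&lt;/pre&gt;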
&lt;h2&gt;get_server_urls&lt;/h2&gt;
&lt;p&gt;The get_server_urls routine uses the list of ILO IP addresses to gather the correct URLs out of the Redfish resource directory. If the ILO is alive, a Redfish client object for this ILO is created and the Redfish resource directory is read. The client object is stored, together with the URLs for the thermal, power and computer system information, in the Python dictionary server_urls, which is the return value of the routine.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def get_server_urls( login_account, login_password, server, lfile):

    server_urls={}
    if LINUX:
        cmd=&apos;ping -c 2 &apos;
    log=logopen(lfile)
    logwriter(log,&apos;get_server_urls&apos;)
    for s in server:
        ilo_url=&quot;https://&quot;+s[&apos;ilo&apos;]
        s[&quot;url&quot;]=ilo_url
        if LINUX:
            response=os.system(cmd+s[&apos;ilo&apos;])   # works on Linux without root privileges
        else:
            response = ping(address=s[&apos;ilo&apos;],count=1)  # requires root privileges on Linux
            if(response.is_alive):
                response = 0
            else:
                response = 256
        if (response == 0):        
            try:
                # Create a Redfish client object
                REDFISHOBJ = redfish.redfish_client(base_url=ilo_url, username=login_account, password=login_password)  
                # Login with the Redfish client
                REDFISHOBJ.login()
                s[&quot;redfish&quot;]=REDFISHOBJ
                resource_instances = get_resource_directory(REDFISHOBJ, lfile)
                for instance in resource_instances:
                    if &apos;#ComputerSystem.&apos; in instance [&apos;@odata.type&apos;]:
                        s[&quot;ComputerSystem&quot;]=instance[&apos;@odata.id&apos;] 
                    if &apos;#Power.&apos; in instance [&apos;@odata.type&apos;]:
                        s[&quot;Power&quot;]=instance[&apos;@odata.id&apos;] 
                    if &apos;#Thermal.&apos; in instance [&apos;@odata.type&apos;]:
                        s[&quot;Thermal&quot;]=instance[&apos;@odata.id&apos;]
                if len(s) &gt; 4:
                    server_urls[s[&apos;ilo&apos;]]=s
                    logwriter(log,s[&apos;ilo&apos;]+&apos; completed&apos;)
            except Exception as ex:
                logwriter(log,&apos;Exception - get_server_urls: &apos;+s[&apos;ilo&apos;])
                logwriter(log,str(ex.__context__))
                if len(s) &gt; 4:
                    server_urls[s[&apos;ilo&apos;]]=s
                pass
        else:
            logwriter(log,&apos;Exception - ILO is not reachable: &apos;+s[&apos;ilo&apos;])
    logclose(log)
    return server_urls
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;get_server_data&lt;/h2&gt;
&lt;p&gt;The get_server_data routine takes the obtained server_urls of one of the monitored servers, together with the Prometheus user information, to retrieve the power and thermal data as well as some general system information. The captured data is packed into a server_data dictionary and returned to the main program.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def get_server_data( login_account, login_password, server, lfile):

    server_data={}
    try:
        REDFISHOBJ = server[&quot;redfish&quot;]
        # System Data
        server_data[&apos;System&apos;] = REDFISHOBJ.get(server[&quot;ComputerSystem&quot;]).obj
        # Power Data
        uri = REDFISHOBJ.get(server[&quot;Power&quot;]).obj.Oem.Hpe[&apos;Links&apos;][&apos;PowerMeter&apos;][&apos;@odata.id&apos;]
        server_data[&quot;PowerMeter&quot;] = REDFISHOBJ.get(uri).obj
        # Temperatures
        server_data[&apos;Temperatures&apos;] = REDFISHOBJ.get(server[&quot;Thermal&quot;]).obj[&quot;Temperatures&quot;]
        return server_data

    except Exception as ex:
        sys.stderr.write(&quot;ERROR during get_server_data: &quot;+server[&quot;url&quot;])
        if ex.status == 401:
            REDFISHOBJ = redfish.redfish_client(base_url=server[&quot;url&quot;], username=login_account, password=login_password)
            REDFISHOBJ.login()
            server[&quot;redfish&quot;] = REDFISHOBJ
        log=logopen(lfile)
        logwriter(log,&apos;Exception&apos;)
        logwriter(log,str(ex.__context__))
        logclose(log)        
        pass
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;display_results&lt;/h2&gt;
&lt;p&gt;The display_results routine takes the server_data retrieved by the get_server_data routine and puts it into the corresponding Prometheus node gauges.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def display_results( node, inode, server_metrics, server):
    hostname = (server_metrics[&apos;System&apos;][&apos;HostName&apos;]).split(&apos;.&apos;)[0].replace(&apos;-&apos;,&apos;_&apos;)
    cn = server[&apos;ilo&apos;]
    inode.labels(cn).info({&quot;Model&quot;:server_metrics[&apos;System&apos;][&quot;Model&quot;],&quot;Manufacturer&quot;:server_metrics[&apos;System&apos;][&quot;Manufacturer&quot;],&quot;SerialNumber&quot;:server_metrics[&apos;System&apos;][&quot;SerialNumber&quot;],&quot;Hostname&quot;:hostname})
    node.labels(cn,server[&apos;Rack&apos;],&apos;Power&apos;,&apos;State&apos;).set(power_state[server_metrics[&apos;System&apos;][&quot;PowerState&quot;]]) 
    node.labels(cn,server[&apos;Rack&apos;],&apos;Power&apos;,&apos;Average&apos;).set(server_metrics[&quot;PowerMeter&quot;][&apos;Average&apos;])           
    node.labels(cn,server[&apos;Rack&apos;],&apos;Power&apos;,&apos;Maximum&apos;).set(server_metrics[&quot;PowerMeter&quot;][&apos;Maximum&apos;])
    node.labels(cn,server[&apos;Rack&apos;],&apos;Power&apos;,&apos;Minimum&apos;).set(server_metrics[&quot;PowerMeter&quot;][&apos;Minimum&apos;])
    for temperature in server_metrics[&apos;Temperatures&apos;]:
        if temperature[&apos;Status&apos;][&apos;State&apos;] == &apos;Enabled&apos;: 
            node.labels(cn,server[&apos;Rack&apos;],&apos;Temperature&apos;,temperature[&quot;Name&quot;]).set(temperature[&apos;ReadingCelsius&apos;])
    return 0
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The most current and complete HPE ILO Prometheus exporter (iloPromConnector.v4.3.py at the time of writing this blog), together with a routine (createILOcreadentials.v1.0.py) to generate the XML input file with encrypted user credentials, is available on &lt;a href=&quot;https://github.com/tbeha/iloPrometheus&quot; target=&quot;_blank&quot;&gt;GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;h1&gt;Visualize the data&lt;/h1&gt;
&lt;p&gt;The CTC monitoring environment uses Prometheus as a time series database to store the gathered data and Grafana to visualize it. Prometheus, Grafana, the &lt;a href=&quot;https://developer.hpe.com/blog/get-started-with-prometheus-and-grafana-on-docker-with-hpe-storage-array-exporter/&quot; target=&quot;_blank&quot;&gt;HPE Storage Array Exporter for Prometheus&lt;/a&gt; and the &lt;a href=&quot;https://www.hpe.com/psnow/doc/a50000514enw&quot; target=&quot;_blank&quot;&gt;HPE SimpliVity Exporter&lt;/a&gt; are deployed as Kubernetes pods in order to easily scale and adjust the CTC monitoring environment.&lt;/p&gt;
&lt;p&gt;In order to insert the CTC power and temperature monitoring into this environment, I first had to containerize the HPE Server Exporter.  I based the container on Ubuntu Linux, added Python and the necessary Python libraries to it and stored the server exporter script in the directory /opt/prometheus:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-dockerfile&quot;&gt;# Use Ubuntu as the base image
FROM ubuntu
#
LABEL maintainer=&quot;Thomas Beha&quot;
LABEL version=&quot;4.3&quot;
LABEL copyright=&quot;Thomas Beha, 2023&quot;
LABEL license=&quot;GNU General Public License v3&quot;
LABEL DESCRIPTION=&quot;CTC ILO Redfish Prometheus Connector Python container based on Ubuntu&quot;
#
RUN apt-get update
RUN apt-get -y install python3.6 &amp;#x26;&amp;#x26; \
	apt-get -y install python3-pip  &amp;#x26;&amp;#x26; \
	apt-get -y install iputils-ping
RUN /usr/bin/pip3 install requests &amp;#x26;&amp;#x26; \
	/usr/bin/pip3 install fernet &amp;#x26;&amp;#x26; \
	/usr/bin/pip3 install cryptography &amp;#x26;&amp;#x26; \
	/usr/bin/pip3 install lxml &amp;#x26;&amp;#x26; \
	/usr/bin/pip3 install redfish &amp;#x26;&amp;#x26; \
	/usr/bin/pip3 install prometheus_client
# copy the necessary python files to the container
RUN mkdir /opt/prometheus
COPY ./iloPromConnector.v4.3.py /opt/prometheus
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I used this Dockerfile to generate a container image and uploaded it into my registry. Before I could define the Kubernetes pod based on this container image, I had to decide how to provide the input data to the pod. The current HPE server exporter expects to have the input data available at /opt/prometheus/data. The easiest and most flexible way to provide the input data is to use a Kubernetes config map. This allowed me to use the same container image multiple times, each time with a different config map. The large number of servers hosted in the CTC required multiple server exporters, each one with a different list of servers to monitor. An example of a config map that I used is shown below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;apiVersion: v1 
kind: ConfigMap 
metadata: 
  name: iloprom1-xml 
  namespace: svtprometheus 
data: 
  iloprometheus.key: |- 
    LVkjkV5j40Fl6U0O7ratU0RDf5W1NmAE0oqoQ2DaE74=
  iloprometheus.xml: |- 
    &amp;#x3C;data&gt; 
      &amp;#x3C;username&gt;Prometheus&amp;#x3C;/username&gt;
      &amp;#x3C;user&gt;gAAAAABig6ZJgST83R7v5AFjsOKiYt0bdledJpF3-TnUBR5871eNttG8P-O_eYA55-5NyyZgmmeWRnWy3hPAbUFk18_sWNw==&amp;#x3C;/user&gt;
      &amp;#x3C;password&gt;gAAAAABig6ZJAZL8wS41b0snZ76VWo17JysUq0-k5NUCU1y5AAPw5p1MphkFcB_u6z0TgcPbmR9yJcX39Cx0Egwv5Lou8A==&amp;#x3C;/password&gt;
      &amp;#x3C;monitoringintervall&gt;30&amp;#x3C;/monitoringintervall&gt;
      &amp;#x3C;logfile&gt;iloprometheus.log&amp;#x3C;/logfile&gt;
      &amp;#x3C;port&gt;9091&amp;#x3C;/port&gt;
      &amp;#x3C;server&gt; &amp;#x3C;ILO_ip&gt;10.1.40.5&amp;#x3C;/ILO_ip&gt;,&amp;#x3C;Rack&gt;i5&amp;#x3C;/Rack&gt;,&amp;#x3C;Loc&gt;16&amp;#x3C;/Loc&gt; &amp;#x3C;/server&gt;
      &amp;#x3C;server&gt; &amp;#x3C;ILO_ip&gt;10.1.40.6&amp;#x3C;/ILO_ip&gt;,&amp;#x3C;Rack&gt;i5&amp;#x3C;/Rack&gt;,&amp;#x3C;Loc&gt;14&amp;#x3C;/Loc&gt; &amp;#x3C;/server&gt;  
      &amp;#x3C;server&gt; &amp;#x3C;ILO_ip&gt;10.1.40.7&amp;#x3C;/ILO_ip&gt;,&amp;#x3C;Rack&gt;i5&amp;#x3C;/Rack&gt;,&amp;#x3C;Loc&gt;19&amp;#x3C;/Loc&gt; &amp;#x3C;/server&gt; 
      &amp;#x3C;server&gt; &amp;#x3C;ILO_ip&gt;10.1.40.8&amp;#x3C;/ILO_ip&gt;,&amp;#x3C;Rack&gt;i5&amp;#x3C;/Rack&gt;,&amp;#x3C;Loc&gt;18&amp;#x3C;/Loc&gt; &amp;#x3C;/server&gt; 
      &amp;#x3C;server&gt; &amp;#x3C;ILO_ip&gt;10.0.44.11&amp;#x3C;/ILO_ip&gt;,&amp;#x3C;Rack&gt;F12&amp;#x3C;/Rack&gt;,&amp;#x3C;Loc&gt;24&amp;#x3C;/Loc&gt; &amp;#x3C;/server&gt; 
      &amp;#x3C;server&gt; &amp;#x3C;ILO_ip&gt;10.0.44.12&amp;#x3C;/ILO_ip&gt;,&amp;#x3C;Rack&gt;F12&amp;#x3C;/Rack&gt;,&amp;#x3C;Loc&gt;22&amp;#x3C;/Loc&gt; &amp;#x3C;/server&gt; 
      &amp;#x3C;server&gt; &amp;#x3C;ILO_ip&gt;10.0.44.13&amp;#x3C;/ILO_ip&gt;,&amp;#x3C;Rack&gt;F12&amp;#x3C;/Rack&gt;,&amp;#x3C;Loc&gt;20&amp;#x3C;/Loc&gt; &amp;#x3C;/server&gt;     
      &amp;#x3C;server&gt; &amp;#x3C;ILO_ip&gt;10.1.24.66&amp;#x3C;/ILO_ip&gt;,&amp;#x3C;Rack&gt;C5&amp;#x3C;/Rack&gt;,&amp;#x3C;Loc&gt;30&amp;#x3C;/Loc&gt; &amp;#x3C;/server&gt;     
      &amp;#x3C;server&gt; &amp;#x3C;ILO_ip&gt;10.0.43.14&amp;#x3C;/ILO_ip&gt;,&amp;#x3C;Rack&gt;i12&amp;#x3C;/Rack&gt;,&amp;#x3C;Loc&gt;9&amp;#x3C;/Loc&gt; &amp;#x3C;/server&gt;
    &amp;#x3C;/data&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As you can see, the config map contains two data parts: the iloprometheus.key and the iloprometheus.xml section. The first one stores the encryption key, while the second one stores the username and password (encrypted with the key from iloprometheus.key), the monitoring interval, the logfile name and the port used by the exporter, together with the server data. I used the rack location, in addition to the ILO IP address, in order to be able to visualize a rack temperature diagram if needed.&lt;/p&gt;
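&lt;p&gt;The encrypted &lt;code&gt;user&lt;/code&gt; and &lt;code&gt;password&lt;/code&gt; values above are generated by the credentials helper mentioned earlier. As a minimal illustration of the idea (not the exact helper script), the Fernet class from the cryptography library - which the Dockerfile above installs - can produce both the key and the encrypted values:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from cryptography.fernet import Fernet

# Generate a key once; it goes into the iloprometheus.key section of the config map
key = Fernet.generate_key()
f = Fernet(key)

# Encrypt the ILO account used by the exporter (placeholder values)
user_token = f.encrypt(b&quot;Prometheus&quot;)
password_token = f.encrypt(b&quot;my-ilo-password&quot;)

print(key.decode())             # iloprometheus.key
print(user_token.decode())      # goes into the &amp;#x3C;user&gt; element
print(password_token.decode())  # goes into the &amp;#x3C;password&gt; element

# The exporter side simply reverses the operation:
assert Fernet(key).decrypt(user_token) == b&quot;Prometheus&quot;
&lt;/code&gt;&lt;/pre&gt;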
&lt;p&gt;The config map was stored in a YAML file (iloprom1.yml) and then applied in my Kubernetes environment:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;kubectl apply -f iloprom1.yml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With the config map in place, I was able to define the Kubernetes pod for the HPE Server exporter. The pod uses the iloprometheus:v4.3.1 container image, mounts the config map data to the /opt/prometheus/data directory in the container and starts the Python script /opt/prometheus/iloPromConnector.v4.3.py.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;cat &amp;#x3C;&amp;#x3C; &apos;EOF&apos; | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ilo1
  namespace: svtprometheus
  
  labels:
    app: ilo1
spec:
  selector:
    matchLabels:
      app: ilo1
  template:
    metadata:
      labels:
        app: ilo1
    spec:
      containers:
        - name: ilo1
          image: tb1378/iloprometheus:v4.3.1
          command: [&quot;/usr/bin/python3&quot;]
          args: [&quot;/opt/prometheus/iloPromConnector.v4.3.py&quot;]
          volumeMounts:
            - name: iloprometheusxml
              mountPath: /opt/prometheus/data
      volumes:
        - name: iloprometheusxml
          configMap:
            name: iloprom1-xml   # the correct name of the configmap needs to be added here. 
EOF
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After I checked that the pod was up and running, I found that I still needed to define a Kubernetes service for this pod/app in order to make the exporter data available to Prometheus.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;cat &amp;#x3C;&amp;#x3C; &apos;EOF&apos; | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: ilo1
  namespace: svtprometheus
#  labels: 
#     hpecp.hpe.com/hpecp-internal-gateway: &quot;true&quot;    
spec:
  selector:
    app: ilo1
  ports:
    - port: 9091               # The Port of that the Prometheus connector uses
      targetPort: 9091
      protocol: TCP
  type: NodePort               # expose the Prometheus connector if you want/need to debug it. 
EOF
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, I updated the Prometheus config map to make sure that the server exporter data was collected and stored in the Prometheus time series database. Hence, I added a job for the server exporter to the existing job list of the Prometheus config map:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;    - job_name: &apos;iloredfish&apos;
      static_configs:
      - targets: [&apos;ilo1:9091&apos;,&apos;ilo2:9091&apos;,&apos;ilo3:9091&apos;,&apos;ilo4:9091&apos;,&apos;ilo5:9091&apos;,&apos;ilo6:9091&apos;]
      honor_timestamps: true
      scrape_interval: 60s
      scrape_timeout: 20s
      metrics_path: /metrics
      scheme: http
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As you can see from the above, I actually used six server exporters (ilo1 up to ilo6) in the CTC monitoring environment to cover the complete list of servers hosted in the CTC. After having the data collected into the Prometheus time series database, I defined some Grafana dashboards to visualize the collected data. I don&apos;t want to get into the details of defining Grafana dashboards here - this is well documented in the Grafana documentation. I just want to give you an example of a dashboard I have used:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/ctcracktemppower.2.png&quot; alt=&quot;&quot; title=&quot;CTC Rack Temperature &amp;#x26; Power Overview&quot;&gt;&lt;/p&gt;
&lt;h1&gt;Conclusion&lt;/h1&gt;
&lt;p&gt;In this blog post, I documented my approach to solving a challenge we had in the Customer Technology Center Böblingen by using available API interfaces and building on an existing Kubernetes, Prometheus and Grafana environment. I decided to write this rather long blog in order to walk you through all the necessary steps in detail, with the goal of showing you what can be done with public APIs, a little bit of Python scripting and open source tools like Prometheus and Grafana. I do hope that this is helpful when you need to develop your own data collectors.&lt;/p&gt;
&lt;p&gt;The complete Python script, together with sample Grafana dashboards and a Jupyter notebook for the deployment as a Kubernetes service, is available at &lt;a href=&quot;https://github.com/tbeha/iloPrometheus&quot; target=&quot;_blank&quot;&gt;https://github.com/tbeha/iloPrometheus&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Check out what the Customer Technology Center in Boeblingen looks like &lt;a href=&quot;https://www.hpe.com/us/en/about/customer-centers/boeblingen.html&quot; target=&quot;_blank&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Mixtral 8x7B could pave the way to adopt the "Mixture of Experts" model]]></title><description><![CDATA[On December 8th, 2023, Mistral AI released Mixtral 8x7B, a high-quality sparse mixture of experts model (SMoE) with open weights (ref…]]></description><link>https://developer.hpe.com/mixtral-8x7b-that-may-pave-the-trend-to-adopt-the-mixture-of-experts-model/</link><guid isPermaLink="false">https://developer.hpe.com/mixtral-8x7b-that-may-pave-the-trend-to-adopt-the-mixture-of-experts-model/</guid><pubDate>Wed, 20 Dec 2023 01:25:25 GMT</pubDate><content:encoded>&lt;p&gt;On December 8th, 2023, Mistral AI released Mixtral 8x7B, a high-quality sparse mixture of experts model (SMoE) with open weights (ref: &lt;a href=&quot;https://mistral.ai/news/mixtral-of-experts/&quot;&gt;Mistral AI&lt;/a&gt;). What is this Mixtral 8x7B? Very simply, it takes 8 x Mistral 7B models and combines them to make one model. For those who are unfamiliar with the term Mixture of Experts (MOE), it is a machine learning technique that uses multiple expert networks to divide a problem space into homogeneous regions. As noted in this &lt;a href=&quot;https://arxiv.org/pdf/2305.14705.pdf&quot;&gt;paper&lt;/a&gt;, MOE models build upon the observation that language models can be decomposed into smaller, specialized sub-models, or &quot;experts&quot;. These models or experts focus on distinct aspects of the input data, thereby enabling more efficient computation and resource allocation. It has become quite popular because AI models with sparsely activated MOE significantly reduce the computational cost involved with large language models (LLMs).&lt;/p&gt;
&lt;p&gt;So, how does it work? If you are into LLM modeling, you probably know that the simplest representation of an LLM would be an input going into a single model and producing an output. In an MOE model, your input goes through a gateway that decides, based on the input context, which of the eight models are to be used to process the input. It could be just one model or possibly two or more that will process the input and coordinate to produce the output. Below is a simplified pictorial representation of the process:&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://soonhengblog.files.wordpress.com/2023/12/image-6.png&quot;&gt;&lt;img src=&quot;https://soonhengblog.files.wordpress.com/2023/12/image-6.png?w=900&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;To best understand this, you could compare it to your supervisor looking at the nature of a job and finding the best expert within his team (i.e. MOE model) to complete the task rather than giving all the tasks to a single person (i.e. the traditional LLM model). In this way, it is much more efficient, except that you now need to maintain eight models instead of one.&lt;/p&gt;
&lt;p&gt;Generally, models reuse the same parameters for all inputs. MOE models, however, use different parameters based on the input given. Routing across multiple models is not as easy as you might think: there is an overhead in communication between the models as well as in the routing between them. This &lt;a href=&quot;https://arxiv.org/abs/2101.03961&quot;&gt;paper&lt;/a&gt; explains it well. The gating network (typically a neural network) produces a sparse distribution over the available experts. It might choose only the top-k highest-scoring experts, or use softmax gating, which encourages the network to select only a few experts. Getting the right balance is not easy, meaning that you may end up choosing the same expert every single time instead of distributing the load evenly across the experts.&lt;/p&gt;
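&lt;p&gt;To make the idea of sparse gating a bit more concrete, here is a toy sketch (not Mixtral&apos;s actual router) of top-k softmax gating over eight experts. It scores the experts for a single token, keeps the k highest-scoring ones, and renormalizes their weights so they sum to one:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import numpy as np

def top_k_gating(router_logits, k=2):
    # Stable softmax over the per-expert scores for one token
    exp = np.exp(router_logits - np.max(router_logits))
    probs = exp / exp.sum()
    # Keep only the k highest-scoring experts and renormalize their weights
    top = np.argsort(probs)[-k:]
    weights = np.zeros_like(probs)
    weights[top] = probs[top] / probs[top].sum()
    return top, weights

# Example: router scores for 8 experts, 2 experts selected for this token
experts, weights = top_k_gating(np.random.randn(8), k=2)
# The token&apos;s output would then be the weighted sum of just those experts&apos; outputs.
&lt;/code&gt;&lt;/pre&gt;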
&lt;p&gt;How good is Mixtral 8x7B? According to &lt;a href=&quot;https://mistral.ai/news/mixtral-of-experts/&quot;&gt;Mistral AI&lt;/a&gt;, Mixtral 8x7B outperforms Llama 2 70B on most benchmarks with 6x faster inference. It is the strongest open-weight model with a permissive license and the best model overall regarding cost/performance trade-offs. In particular, it matches or outperforms GPT-3.5 on most standard benchmarks (see diagram below). As you may have guessed, there isn&apos;t a performance comparison against GPT-4. Hopefully, we will see one put together in the near future.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://soonhengblog.files.wordpress.com/2023/12/image-7.png&quot;&gt;&lt;img src=&quot;https://soonhengblog.files.wordpress.com/2023/12/image-7.png?w=890&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;From the benchmarks, you can see that even where the score for LLaMA 2 70B is higher, it outperforms Mixtral 8x7B by only a few percentage points. Mixtral also shows improvements in reducing hallucinations and biases on the &lt;a href=&quot;https://mistral.ai/news/mixtral-of-experts/&quot;&gt;TruthfulQA/BBQ/BOLD&lt;/a&gt; benchmarks.&lt;/p&gt;
&lt;p&gt;If you look at the post on &quot;X&quot; (below), the filename says it all.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://soonhengblog.files.wordpress.com/2023/12/image-8.png&quot;&gt;&lt;img src=&quot;https://soonhengblog.files.wordpress.com/2023/12/image-8.png?w=521&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Mixtral 8x7B stands out for its high-quality performance. The model boasts a range of capabilities, including the ability to handle a context of 32k tokens and support for multiple languages, including English, French, Italian, German, and Spanish. Mixtral is a decoder-only sparse mixture-of-experts network. Its architecture enables an increase in parameters while managing cost and latency. Take note that all the expert models are loaded into VRAM, which means it requires lots of super-fast VRAM. Also, the model only uses a fraction of the total set of parameters per token. According to &lt;a href=&quot;https://mistral.ai/news/mixtral-of-experts/&quot;&gt;Mistral AI&lt;/a&gt;, Mixtral has 46.7B total parameters, but it only uses 12.9B parameters per token. What this means is that it processes input and generates output at the same speed and cost as a 12.9B parameter model. Hence, it lowers LLM computational cost.&lt;/p&gt;
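&lt;p&gt;As a rough back-of-the-envelope check of those figures (the exact split between shared layers and per-expert layers isn&apos;t published in the post, so treat this purely as an illustration):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Figures quoted by Mistral AI (see link above)
total_params = 46.7e9        # all experts plus the shared layers
active_per_token = 12.9e9    # parameters actually touched for each token

# A naive 8 x 7B would be 56B; the total is lower because only part of each
# layer is replicated per expert while the rest is shared (rough interpretation).
print(f&quot;{active_per_token / total_params:.0%} of the weights are used per token&quot;)
# -&gt; roughly 28%, which is why inference cost is comparable to a ~13B dense model
&lt;/code&gt;&lt;/pre&gt;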
&lt;p&gt;You might be wondering whether you can deploy one and try it out. This would be unlikely for most general users, unless they happen to have some spare 2 x 80GB A100 or 4 x 40GB A100 GPUs lying around or have access to them through a cloud provider. However, there are some demo platforms you can go to and try it out. Here are a few you can look into: Perplexity, Poe, Vercel, and Replicate. Below is a screenshot of what I found on Perplexity:&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://soonhengblog.files.wordpress.com/2023/12/image-5.png&quot;&gt;&lt;img src=&quot;https://soonhengblog.files.wordpress.com/2023/12/image-5.png?w=1024&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Is this something new? Nope. This &lt;a href=&quot;https://ieeexplore.ieee.org/document/6797059&quot;&gt;paper&lt;/a&gt;, called &quot;Adaptive Mixtures of Local Experts&quot; and co-authored by the Godfather of AI, Geoffrey Hinton, back in 1991, already talks about this concept. As with much of AI, advances in computing technology, the affordability of computation and the availability of massive amounts of data now make technologies conceptualized in those early days possible.&lt;/p&gt;
&lt;p&gt;So, why the sudden interest in Mixture of Experts? Although OpenAI hasn&apos;t publicly commented on the details of the GPT-4 technical specifications, it has been shared within the OpenAI developer community that GPT-4 uses a Mixture of Experts model. With the release of a Mixtral 8x7B model to the open source world, it could very well set a trend for future model developers to embark on a similar path. Who really knows? Only time will tell.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Open-sourcing PacketRusher: A 5G Core performance tester]]></title><description><![CDATA[In the fast-evolving landscape of 5G technology, the demand for robust and efficient testing tools has never been higher. Enter PacketRusher…]]></description><link>https://developer.hpe.com/open-sourcing-packetrusher-a-5g-core-performance-tester/</link><guid isPermaLink="false">https://developer.hpe.com/open-sourcing-packetrusher-a-5g-core-performance-tester/</guid><pubDate>Mon, 18 Dec 2023 10:04:11 GMT</pubDate><content:encoded>&lt;p&gt;In the fast-evolving landscape of 5G technology, the demand for robust and efficient testing tools has never been higher. Enter PacketRusher, a cutting-edge 5G core Network performance testing tool. As the lead developer behind this groundbreaking project, I am thrilled to share the power and potential PacketRusher holds in revolutionizing the way we test and optimize 5G networks.&lt;/p&gt;
&lt;h2&gt;PacketRusher&lt;/h2&gt;
&lt;p&gt;PacketRusher is a tool dedicated to the performance testing and automatic validation of 5G Core Networks. It tests a 5G Core Network via its external interfaces as defined by the 3rd Generation Partnership Project (3GPP): N1, N2 and N3 interfaces.&lt;/p&gt;
&lt;p&gt;The tool can emulate various types of user equipment (e.g. 5G phones or 5G-enabled IoT devices) on the N1 interface using the Non-access Stratum (NAS-5GS) 5G protocol. It can stress test the 5G Core with up to 100k simulated user equipment simultaneously performing multiple 5G Control Plane procedures: registration with authentication, deregistration, and creation and deletion of Protocol Data Unit (PDU) sessions.&lt;/p&gt;
&lt;p&gt;These user equipments can be used together with PacketRusher’s simulated gNodeBs (e.g. the antennas in the Radio Access Network) implementing the N2 interface. These simulated gNodeBs implement a set of pre-defined procedures on top of the New Generation Application Protocol (NGAP), which allows the use of advanced gNodeB features such as user equipment handover between gNodeBs with PDU Session continuity.&lt;/p&gt;
&lt;p&gt;The other most important feature of PacketRusher is its highly performant N3 generic tunnel. Alongside its control plane testing features, PacketRusher is also able to stress test the UPF (User Plane Function) of a 5G Core, with each simulated user equipment able to generate up to 5 GB/s of user plane traffic.&lt;/p&gt;
&lt;p&gt;All of these features of PacketRusher can be offered from a simple Linux virtual machine that can be compiled in minutes, without the need for expensive commercial tools, or 100k phones on a table ;-)&lt;/p&gt;
&lt;h2&gt;Architecture&lt;/h2&gt;
&lt;p&gt;Before I show you how PacketRusher works, it&apos;s important to have an understanding of the components of a minimal 5G network:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/high-level-diagram-of-a-5g-deployment.png&quot; alt=&quot;High-level diagram of a 5G deployment&quot; title=&quot;High-level diagram of a 5G deployment&quot;&gt;&lt;/p&gt;
&lt;p&gt;To summarize, there are essentially three main components:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;User equipment, which is any 5G device used directly by a user to communicate. For example, a smartphone or an IoT device.&lt;/li&gt;
&lt;li&gt;The gNodeB, which is the radio used for wireless communication between the User Equipment and the 5G network core.&lt;/li&gt;
&lt;li&gt;And finally, the 5G network core itself, which manages all mobile network functionalities. This is the component that will manage phone authentication in the network, quality of service, routing to the data network, and so on...&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/packetrusher-architecture.png&quot; alt=&quot;﻿High-level diagram of the PacketRusher&amp;#x27;s architecture and its interaction with a 5G Core&amp;#x27;s AMF and UPF&quot; title=&quot;﻿High-level diagram of the PacketRusher&amp;#x27;s architecture and its interaction with a 5G Core&amp;#x27;s AMF and UPF&quot;&gt;&lt;/p&gt;
&lt;p&gt;PacketRusher&apos;s behavior is quite simple; it essentially simulates two of the three major components of a 5G network:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;the User Equipments, i.e. the phones&lt;/li&gt;
&lt;li&gt;and the gNodeBs, i.e. the antennas.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;At the same time, it connects to the 5G Core via its external interfaces, as if the core were a black box.&lt;/p&gt;
&lt;p&gt;PacketRusher simulates both components en masse, which means it can simulate several user devices and gNodeBs at the same time.&lt;/p&gt;
&lt;h2&gt;Community contributions&lt;/h2&gt;
&lt;p&gt;Excitingly, PacketRusher has caught the attention of a leading industry player. Orange, a key player in the telecommunications sector, has recognized the potential of PacketRusher and is actively integrating it into their open-source GitHub project, &lt;a href=&quot;https://github.com/Orange-OpenSource/towards5gs-helm&quot;&gt;toward-5gs&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I would also like to mention the outstanding work of Github User &lt;a href=&quot;https://github.com/s5uishida&quot;&gt;s5uishida&lt;/a&gt; who made a &lt;a href=&quot;https://github.com/s5uishida/simple_measurement_of_upf_performance&quot;&gt;high-quality performance comparison&lt;/a&gt; of open-source UPF using PacketRusher.&lt;/p&gt;
&lt;h2&gt;Join the 5G revolution&lt;/h2&gt;
&lt;p&gt;How 5G evolves will be based on community collaboration and you&apos;re invited to be a part of this exciting journey. Whether you are a seasoned developer, a researcher, or simply passionate about the possibilities of 5G, your contributions can make a significant impact.&lt;/p&gt;
&lt;p&gt;To get started, visit the &lt;a href=&quot;https://github.com/HewlettPackard/PacketRusher&quot;&gt;GitHub repository&lt;/a&gt; to access the source code, documentation, and engage with the community. Your feedback, suggestions, and contributions are invaluable in shaping the future of 5G technology.&lt;/p&gt;
&lt;p&gt;At HPE, we are committed to pushing the boundaries of innovation, and with the release of PacketRusher as open-source, we are laying the foundation for a new era of connectivity.&lt;/p&gt;
&lt;p&gt;PacketRusher is not just a testing tool; it&apos;s a community-driven effort to push the boundaries of 5G technology. Let&apos;s build the future of 5G together!&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.linkedin.com/in/valentin-d-emmanuele/&quot;&gt;V﻿alentin D&apos;Emmanuele&lt;/a&gt;,&lt;br&gt;
Lead Developer and Maintainer, PacketRusher&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Dark Mode Theming in Grommet - Part 3: Theme color customization]]></title><description><![CDATA[g1lightdark3_1 This is the final post in a three-part series. Parts 1 and 2 introduced how to apply a theme, change the theme’s mode between…]]></description><link>https://developer.hpe.com/dark-mode-theming-in-grommet-theme-color-customization/</link><guid isPermaLink="false">https://developer.hpe.com/dark-mode-theming-in-grommet-theme-color-customization/</guid><pubDate>Fri, 15 Dec 2023 16:54:03 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/g1lightdark3_1-1603892557138.png&quot; alt=&quot;g1lightdark3_1&quot;&gt;&lt;/p&gt;
&lt;p&gt;This is the final post in a three-part series. Parts &lt;a href=&quot;/blog/dark-mode-theming-in-grommet-how-to-set-up-and-apply-a-theme&quot;&gt;1&lt;/a&gt; and &lt;a href=&quot;/blog/dark-mode-theming-in-grommet-adding-dark-and-light-theme-modes&quot;&gt;2&lt;/a&gt; introduced how to apply a theme, change the theme’s mode between light and dark, and, finally, provide an app’s user the ability to change the theme’s mode to their preference.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;/blog/dark-mode-theming-in-grommet-how-to-set-up-and-apply-a-theme&quot;&gt;Part 1 - How to set up and apply a theme&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;/blog/dark-mode-theming-in-grommet-adding-dark-and-light-theme-modes&quot;&gt;Part 2 - Adding dark and light theme modes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Part 3 - Theme color customization&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In this final post, I’ll demonstrate how to add custom light and dark mode colors to the theme. I will cover the following content:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Creating a custom theme file&lt;/li&gt;
&lt;li&gt;Defining the color palette&lt;/li&gt;
&lt;li&gt;Mapping the color palette to component definitions in the theme&lt;/li&gt;
&lt;li&gt;Merging the customizations with an existing theme&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Customizing the Theme&lt;/h2&gt;
&lt;p&gt;Up to this point, we have been using Grommet’s theme. Let’s say, however, we’d like to tweak the Grommet theme by adding some custom colors for a fictional company, Acme, Inc.&lt;/p&gt;
&lt;p&gt;To do this, continue modifying your app from where &lt;a href=&quot;/blog/dark-mode-theming-in-grommet-adding-dark-and-light-theme-modes&quot;&gt;Part 2&lt;/a&gt; concluded, or reference this &lt;a href=&quot;https://codesandbox.io/s/grommet-theme-toggle-2addtogglebutton-txbux?file=/src/App.js&quot;&gt;Codesandbox&lt;/a&gt; if you are catching up and joining midstream.&lt;/p&gt;
&lt;p&gt;Acme’s brand colors are Ruby, Gold, and Amethyst, with some warm greys for backgrounds and text. The hex values for Acme’s color palette, plus values for the light and dark variants, are provided below.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/g3-table-1603909712399.JPG&quot; alt=&quot;g3 table&quot;&gt;&lt;/p&gt;
&lt;p&gt;To begin the custom Acme, Inc. theme,  &lt;strong&gt;create a theme file&lt;/strong&gt;, &lt;strong&gt;define the color palette&lt;/strong&gt;, and then &lt;strong&gt;map the color palette&lt;/strong&gt; to the Grommet components for which you want the colors to be applied.&lt;/p&gt;
&lt;h2&gt;Create Theme File&lt;/h2&gt;
&lt;p&gt;In the &lt;code&gt;src&lt;/code&gt; directory, create a theme file called &lt;code&gt;acme-theme.js&lt;/code&gt; with the theme object &lt;code&gt;acme&lt;/code&gt; as an export:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/g2part-3-image-1603892563485.png&quot; alt=&quot;g2part 3 image&quot;&gt;&lt;/p&gt;
&lt;h2&gt;&lt;a id=&quot;head2&quot;&gt;&lt;/a&gt;Define the Color Palette&lt;/h2&gt;
&lt;p&gt;Define the color palette by naming each color, plus their dark and light variants. Colors are defined in the theme object’s &lt;code&gt;global.colors&lt;/code&gt; property. For each color, the following structure is used:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;uniqueColorName: {
    dark: &quot;hexadecimal value or a reference to another color name&quot;,
    light: &quot;hexadecimal value or a reference to another color name&quot;,
},
&quot;uniqueColorName!&quot;: &quot;hexadecimal value or a reference to another color name&quot;,
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Notice that any Grommet component with a color attribute can accept either a string or an object containing &lt;code&gt;dark&lt;/code&gt; and &lt;code&gt;light&lt;/code&gt; properties. The &lt;code&gt;dark&lt;/code&gt; value specifies the color Grommet uses when the element is on a dark background; &lt;code&gt;light&lt;/code&gt; specifies the color used on a light background.&lt;/p&gt;
&lt;p&gt;Additionally, by convention, a color name followed by a ‘!’ (bang) represents the color’s “true” value rather than a dark/light variant.&lt;/p&gt;
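&lt;p&gt;To illustrate how such definitions get used, here is a minimal, hypothetical sketch (it is not part of the Acme theme built below; &lt;code&gt;dark-2&lt;/code&gt; is one of the built-in dark backgrounds in the &lt;code&gt;grommet&lt;/code&gt; theme and the hex values are arbitrary):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;import React from &quot;react&quot;;
import { Grommet, grommet, Box, Text } from &quot;grommet&quot;;

// Minimal sketch: a color prop accepts either a theme color name (string)
// or an object with dark and light variants.
const ColorExample = () =&gt; (
  &amp;#x3C;Grommet theme={grommet}&gt;
    {/* &quot;dark-2&quot; is a dark background built into the grommet theme */}
    &amp;#x3C;Box background=&quot;dark-2&quot; pad=&quot;small&quot;&gt;
      {/* On a dark background, Grommet uses the object&apos;s &quot;dark&quot; value */}
      &amp;#x3C;Text color={{ dark: &quot;#FFCC00&quot;, light: &quot;#333333&quot; }}&gt;
        This text renders with the dark variant here
      &amp;#x3C;/Text&gt;
    &amp;#x3C;/Box&gt;
  &amp;#x3C;/Grommet&gt;
);

export default ColorExample;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The same resolution rules will apply to the custom colors defined next.&lt;/p&gt;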
&lt;p&gt;Add Acme, Inc. colors to &lt;code&gt;acme-theme.js&lt;/code&gt; like so:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;export const acme = {
 global: {
   colors: {
     ruby: {
       dark: &quot;#d4111e&quot;,
       light: &quot;#f58990&quot;
     },
     &quot;ruby!&quot;: &quot;#EF3F4C&quot;,
     gold: {
       dark: &quot;#df9007&quot;,
       light: &quot;#e7b86b&quot;
     },
     &quot;gold!&quot;: &quot;#F9B644&quot;,
     amethyst: {
       dark: &quot;#9B59B6&quot;,
       light: &quot;#C39BD3&quot;
     },
     &quot;amethyst!&quot;: &quot;#AF7AC5&quot;,
     &quot;grey-1&quot;: &quot;#ECE9E3&quot;,
     &quot;grey-2&quot;: &quot;#CECCC6&quot;,
     &quot;grey-3&quot;: &quot;#737069&quot;,
     &quot;grey-4&quot;: &quot;#52504C&quot;, 
   }
 }
};
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;&lt;a id=&quot;head3&quot;&gt;&lt;/a&gt;Map Colors to Grommet Namespaces and Components&lt;/h2&gt;
&lt;p&gt;Now that the colors are defined, it’s time to apply them. For the purposes of this tutorial, I will only map the colors to a handful of theme properties. This will demonstrate the process of implementation, but it is certainly not exhaustive. For the curious, inspecting &lt;a href=&quot;https://github.com/grommet/grommet/blob/master/src/js/themes/base.js&quot;&gt;Grommet’s base theme&lt;/a&gt; is a great place to take inventory of the many possibilities to fully customize your own theme.&lt;/p&gt;
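&lt;p&gt;The same override mechanism reaches component-level namespaces as well. The fragment below is a hypothetical sketch (it is not part of the Acme theme built in this tutorial) showing how a button&apos;s border radius could be overridden; check the base theme linked above for the exact property names available:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;// Hypothetical example only: a component-level override that can be
// merged into a theme the same way as the color mappings below.
export const extraCustomizations = {
  button: {
    border: {
      radius: &quot;4px&quot; // squarer buttons than the default
    }
  }
};
&lt;/code&gt;&lt;/pre&gt;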
&lt;p&gt;Modify &lt;code&gt;acme-theme.js&lt;/code&gt; by mapping the colors to the background, brand, control, input, text, focus, and anchor properties:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;export const acme = {
 global: {
   colors: {
     /* BEGIN: Color Palette Definition */
     ruby: {
       dark: &quot;#d4111e&quot;,
       light: &quot;#f58990&quot;
     },
     &quot;ruby!&quot;: &quot;#EF3F4C&quot;,
     gold: {
       dark: &quot;#df9007&quot;,
       light: &quot;#e7b86b&quot;
     },
     &quot;gold!&quot;: &quot;#F9B644&quot;,
     amethyst: {
       dark: &quot;#9B59B6&quot;,
       light: &quot;#C39BD3&quot;
     },
     &quot;amethyst!&quot;: &quot;#AF7AC5&quot;,
     &quot;grey-1&quot;: &quot;#ECE9E3&quot;,
     &quot;grey-2&quot;: &quot;#CECCC6&quot;,
     &quot;grey-3&quot;: &quot;#737069&quot;,
     &quot;grey-4&quot;: &quot;#52504C&quot;,
     /* END: Color Palette Definition */
     /* BEGIN: Mapping Colors to Grommet Namespaces */
     background: {
       dark: &quot;grey-4&quot;,
       light: &quot;grey-1&quot;
     },
     &quot;background-back&quot;: {
       dark: &quot;grey-4&quot;,
       light: &quot;grey-1&quot;
     },
     &quot;background-front&quot;: {
       dark: &quot;grey-3&quot;,
       light: &quot;grey-2&quot;
     },
     brand: &quot;ruby!&quot;,
     control: {
       dark: &quot;brand&quot;,
       light: &quot;brand&quot;
     },
     input: {
       background: &quot;blue&quot;
     },
     text: {
       dark: &quot;grey-1&quot;,
       light: &quot;grey-3&quot;
     }
   },
   focus: {
     border: {
       color: &quot;gold&quot;
     }
   }
   /* END: Mapping Colors to Grommet Namespaces */
 },
 /* BEGIN: Mapping Colors to Components */
 anchor: {
   color: {
     dark: &quot;gold&quot;,
     light: &quot;amethyst!&quot;
   }
 }
 /* END: Mapping Colors to Components */
};
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Merging a Custom Theme with an Existing Theme&lt;/h2&gt;
&lt;p&gt;In &lt;code&gt;App.js&lt;/code&gt;, add the following imports:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;import { deepMerge } from &quot;grommet/utils&quot;;&lt;/code&gt; - A function allowing an existing theme to be customized or extended&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;import { acme } from &quot;./acme-theme&quot;;&lt;/code&gt; - Our custom theme file&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;import { DemoSection } from &quot;./DemoSection&quot;;&lt;/code&gt; - A section of sample components to see how theme customizations are applied&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Then, add the line &lt;code&gt;const theme = deepMerge(grommet, acme)&lt;/code&gt;. The imported &lt;code&gt;deepMerge&lt;/code&gt; function incorporates the &lt;code&gt;acme&lt;/code&gt; specifications into the &lt;code&gt;grommet&lt;/code&gt; theme, resulting in a new custom &lt;code&gt;theme&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;import React from &quot;react&quot;;
import { Grommet, grommet, Anchor, Box, Button, Heading, Paragraph } from &quot;grommet&quot;;
import { deepMerge } from &quot;grommet/utils&quot;;

import { acme } from &quot;./acme-theme&quot;;
import { DemoSection } from &quot;./DemoSection&quot;;

const theme = deepMerge(grommet, acme);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Finally, swap out the &lt;code&gt;grommet&lt;/code&gt; theme with the newly created &lt;code&gt;theme&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;  &amp;#x3C;Grommet full theme={theme} themeMode={darkMode ? &quot;dark&quot; : &quot;light&quot;}&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That concludes this tutorial. Your final code and resulting app should resemble this &lt;a href=&quot;https://codesandbox.io/s/grommet-theme-toggle-3customizeourtheme-9wqfb?file=/src/App.js&quot;&gt;Codesandbox&lt;/a&gt;. I hope you have enjoyed this three-part tutorial.&lt;/p&gt;
&lt;p&gt;As a review, here’s how the app was modified:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Created a custom theme file.&lt;/li&gt;
&lt;li&gt;Defined the color palette and its namespaces.&lt;/li&gt;
&lt;li&gt;Mapped the color namespaces to Grommet namespaces and component definitions.&lt;/li&gt;
&lt;li&gt;Finally, merged the theme customizations with Grommet’s theme.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Next Steps for Exploration&lt;/h2&gt;
&lt;p&gt;Now that you have seen how easy it is to apply a theme to Grommet, set and toggle the theme’s light/dark modes, and even start applying custom colors to your own theme, here are some great next steps you can take:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Check out &lt;a href=&quot;https://theme-designer.grommet.io/&quot;&gt;Grommet’s Theme Designer&lt;/a&gt; (Beta) and other &lt;a href=&quot;https://developer.hpe.com/platform/grommet/home/&quot;&gt;Grommet Resources&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Explore Grommet’s other theme properties which can be customized.&lt;/li&gt;
&lt;li&gt;Create and apply your own theme to your own project, then share it on Grommet’s &lt;a href=&quot;https://grommet.slack.com/archives/CG25TE0KZ&quot;&gt;#i-made-this&lt;/a&gt; Slack channel for community members to enjoy.&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Dark Mode Theming in Grommet - Part 2: Adding dark and light theme modes]]></title><description><![CDATA[f1lightdark2 In Part 1 of this 3-part series on Grommet’s support for light and dark modes, I covered setting up a simple Grommet app and…]]></description><link>https://developer.hpe.com/dark-mode-theming-in-grommet-adding-dark-and-light-theme-modes/</link><guid isPermaLink="false">https://developer.hpe.com/dark-mode-theming-in-grommet-adding-dark-and-light-theme-modes/</guid><pubDate>Fri, 15 Dec 2023 15:46:57 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/f1lightdark2-1603286799167.png&quot; alt=&quot;f1lightdark2&quot;&gt;&lt;/p&gt;
&lt;p&gt;In &lt;a href=&quot;/blog/dark-mode-theming-in-grommet-how-to-set-up-and-apply-a-theme&quot;&gt;Part 1&lt;/a&gt; of this 3-part series on Grommet’s support for light and dark modes, I covered setting up a simple Grommet app and applying a theme to that app. Here in Part 2, I’ll guide you through the steps required to implement dark/light theme modes. At the conclusion of this post, the app will have some basic UI components and a control to toggle the interface between light and dark modes.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;/blog/dark-mode-theming-in-grommet-how-to-set-up-and-apply-a-theme&quot;&gt;Part 1 - How to set up and apply a Theme&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Part 2 - Adding dark and light theme modes&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/blog/dark-mode-theming-in-grommet-theme-color-customization/&quot;&gt;Part 3 - Theme color customizations&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In this post, I’ll cover content regarding adding a theme toggle button, including:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Introducing the &lt;code&gt;themeMode&lt;/code&gt; prop, which allows specifying which version of the theme the app renders.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Adding a button to the interface to serve as a control to toggle the value that gets passed to &lt;code&gt;themeMode&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/f10thememodetoggle-1603286872853.gif&quot; alt=&quot;f10thememodetoggle&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Adding a Theme Toggle Button&lt;/h2&gt;
&lt;p&gt;For this exercise, you’ll continue modifying your app from the example you created in Part 1 of this series.  If you are catching up and joining midstream, you can reference this &lt;a href=&quot;https://codesandbox.io/s/grommet-theme-toggle-1adding-theme-rg91i?file=/src/App.js&quot;&gt;Codesandbox&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;First, I’m going to show you how to add a button that, when clicked, will toggle the theme between light and dark modes.&lt;/p&gt;
&lt;p&gt;In App.js:&lt;/p&gt;
&lt;p&gt;Add the &lt;em&gt;&lt;strong&gt;&lt;a href=&quot;https://v2.grommet.io/grommet#themeMode&quot;&gt;themeMode&lt;/a&gt;&lt;/strong&gt;&lt;/em&gt; prop to the &lt;code&gt;&amp;#x3C;Grommet&gt;&lt;/code&gt; component and set its value to &lt;code&gt;&quot;dark&quot;&lt;/code&gt;. The value referenced by &lt;code&gt;themeMode&lt;/code&gt; specifies whether Grommet should use the dark or light versions of the theme.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;  &amp;#x3C;Grommet full theme={grommet} themeMode=&quot;dark&quot;&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This should result in:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/f4part-2-toggle-1603286827841.png&quot; alt=&quot;f4part 2 toggle&quot;&gt;&lt;/p&gt;
&lt;p&gt;Next, add a button to serve as the toggle control.&lt;/p&gt;
&lt;p&gt;Import &lt;code&gt;Button&lt;/code&gt; as a component from Grommet&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;import { Grommet, Anchor, Box, Button, List, Heading, Paragraph, Text } from &quot;grommet&quot;;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Below the &lt;code&gt;&amp;#x3C;List&gt;&lt;/code&gt;, add a theme toggle button with some formatting props and an &lt;code&gt;onClick&lt;/code&gt; handler.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;  &amp;#x3C;Button
    label=&quot;Toggle Theme&quot;
    primary
    alignSelf=&quot;center&quot;
    margin=&quot;large&quot;
    onClick={() =&gt; {}} 
  /&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Enable Toggling of ThemeMode’s State&lt;/h2&gt;
&lt;p&gt;Next, make the theme mode dynamic by adding a variable &lt;code&gt;darkMode&lt;/code&gt; to hold the current theme mode, storing it in the component’s state, and adjusting the state each time the theme toggle button is clicked.&lt;/p&gt;
&lt;p&gt;Create variable &lt;code&gt;darkMode&lt;/code&gt; and its state using React’s &lt;a href=&quot;https://reactjs.org/docs/hooks-state.html&quot;&gt;&lt;code&gt;useState&lt;/code&gt; Hook&lt;/a&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;const App = () =&gt; {
 const [darkMode, setDarkMode] = React.useState(false);
  return (
   &amp;#x3C;Grommet full theme={grommet} themeMode=&quot;dark&quot;&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Modify the button’s &lt;code&gt;onClick&lt;/code&gt; handler to toggle &lt;code&gt;darkMode&lt;/code&gt; between &lt;code&gt;true&lt;/code&gt; and &lt;code&gt;false&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-html&quot;&gt;&amp;#x3C;Button
    label=&quot;Toggle Theme&quot;
    primary
    alignSelf=&quot;center&quot;
    margin=&quot;large&quot;
    onClick={() =&gt; setDarkMode(!darkMode)}
  /&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, set &lt;code&gt;themeMode&lt;/code&gt;’s value to “dark” when &lt;code&gt;darkMode&lt;/code&gt; is true, and “light” when &lt;code&gt;darkMode&lt;/code&gt; is false.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;&amp;#x3C;Grommet full theme={grommet} themeMode={darkMode ? &quot;dark&quot; : &quot;light&quot;} &gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The theme mode toggling should be good to go. Give the toggle button a few clicks!&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/f2thememodetogglemid-1603286807584.gif&quot; alt=&quot;f2thememodetogglemid&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Incorporating More Visual Interest&lt;/h2&gt;
&lt;p&gt;Finally, to better demonstrate a changing theme, let’s add some more interesting visuals to the application.&lt;/p&gt;
&lt;p&gt;Remove the following from App.js&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;   &amp;#x3C;Paragraph&gt;We will be modifying this project to:&amp;#x3C;/Paragraph&gt;
    &amp;#x3C;List data={projectTasks}&gt;
      {(task, index) =&gt; (
        &amp;#x3C;Text key={index}&gt;
          {index + 1}) {task}
        &amp;#x3C;/Text&gt;
      )}
    &amp;#x3C;/List&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then, import the DemoSection from &lt;code&gt;DemoSection.js&lt;/code&gt; and add it below the toggle button. DemoSection contains a sampling of Grommet components to better demonstrate the effect themeMode has across components.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;  import { DemoSection } from &quot;./DemoSection&quot;;	
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then add DemoSection directly beneath the toggle button.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;      &amp;#x3C;Button
         label=&quot;Toggle Theme&quot;
         primary
         alignSelf=&quot;center&quot;
         margin=&quot;large&quot;
         onClick={() =&gt; setDarkMode(!darkMode)}
       /&gt;
       &amp;#x3C;DemoSection /&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;At this point, your code and resulting app should resemble what is shown in this &lt;a href=&quot;https://codesandbox.io/s/grommet-theme-toggle-2addtogglebutton-txbux?file=/src/App.js&quot;&gt;Codesandbox&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/f14part-2-non-animated-white-theme-toggle-1603286900031.png&quot; alt=&quot;f14part 2 non animated white theme toggle&quot;&gt;&lt;/p&gt;
&lt;p&gt;As a quick review, here’s what we’ve done to modify the app:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Added &lt;code&gt;themeMode&lt;/code&gt; as a prop on the &lt;code&gt;&amp;#x3C;Grommet&gt;&lt;/code&gt; component. The value provided to &lt;code&gt;themeMode&lt;/code&gt; specifies which mode of the theme to use.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Created a state variable called &lt;code&gt;darkMode&lt;/code&gt; to store whether the theme should currently be in dark mode.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Enabled the value of the &lt;code&gt;themeMode&lt;/code&gt; prop to update when the value of &lt;code&gt;darkMode&lt;/code&gt; changes.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Added a &lt;code&gt;&amp;#x3C;Button&gt;&lt;/code&gt; to serve as control to toggle the theme mode by toggling the state of &lt;code&gt;darkMode&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Lastly, made the app interface a bit more interesting to demonstrate how various components are affected by toggling the &lt;code&gt;themeMode&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;That’s it for Part 2! In &lt;a href=&quot;https://developer.hpe.com/blog/dark-mode-theming-in-grommet-theme-color-customization/&quot;&gt;Part 3&lt;/a&gt;, I’ll demonstrate how to customize a theme with custom dark and light mode colors. Don’t forget to check back at the HPE DEV blog to catch &lt;a href=&quot;https://developer.hpe.com/blog/dark-mode-theming-in-grommet-theme-color-customization/&quot;&gt;Part 3&lt;/a&gt; of this series. Again, if you have any questions, please feel free to reach out to me and others in the Grommet group on our Slack channel.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Dark Mode Theming in Grommet - Part 1: How to set up and apply a theme]]></title><description><![CDATA[darkmode intro part1 This post is Part 1 of a three-part series. Part 1 - How to set up and apply a theme Part 2 - Adding dark and light…]]></description><link>https://developer.hpe.com/dark-mode-theming-in-grommet-how-to-set-up-and-apply-a-theme/</link><guid isPermaLink="false">https://developer.hpe.com/dark-mode-theming-in-grommet-how-to-set-up-and-apply-a-theme/</guid><pubDate>Fri, 15 Dec 2023 15:45:24 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/darkmode-intro-part1-1603293464825.png&quot; alt=&quot;darkmode intro part1&quot;&gt;&lt;/p&gt;
&lt;p&gt;This post is Part 1 of a three-part series.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Part 1 - How to set up and apply a theme&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/blog/dark-mode-theming-in-grommet-adding-dark-and-light-theme-modes/&quot;&gt;Part 2 - Adding dark and light theme modes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/blog/dark-mode-theming-in-grommet-theme-color-customization/&quot;&gt;Part 3 - Theme color customization&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;As a UI/UX developer at HPE, I use &lt;a href=&quot;https://grommet.io&quot;&gt;Grommet&lt;/a&gt; extensively. I am also a regular contributor on the &lt;a href=&quot;https://grommet.slack.com&quot;&gt;Grommet Slack&lt;/a&gt; channels where the Theming capabilities, especially Grommet’s support for light and dark modes, are a consistent topic of interest.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://grommet.io&quot;&gt;Grommet&lt;/a&gt; has robust support for light and dark themes. Due to the apparent interest in this topic, I thought I’d share my approach on how to get started with theme mode styling in &lt;a href=&quot;https://grommet.io&quot;&gt;Grommet&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;To illustrate the method I use, I’m going to create a simple application demonstrating the ability to toggle between light and dark modes. By the end of this 3-part blog series, I will have demonstrated how to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Apply a theme in a Grommet application&lt;/li&gt;
&lt;li&gt;Incorporate dark and light theme modes&lt;/li&gt;
&lt;li&gt;Modify a theme to apply custom colors in dark and light modes&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The final product will be a simple application with a custom theme applied and the ability for a user to toggle between the app’s light and dark modes.&lt;/p&gt;
&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/themetutorialapp-1602698870239.gif&quot; style=&quot;height:300px; width:300px&quot;&gt;
&lt;h1&gt;Setup for the tutorial&lt;/h1&gt;
&lt;h2&gt;Prerequisites&lt;/h2&gt;
&lt;p&gt;This tutorial assumes familiarity with HTML, JavaScript, and React. However, even if any of these are new to you, you should be able to follow along and complete the exercise.&lt;/p&gt;
&lt;h2&gt;Get the Starter Code&lt;/h2&gt;
&lt;p&gt;Open this &lt;a href=&quot;https://codesandbox.io/s/grommet-theme-toggle-0starter-1k1cv?file=/src/App.js&quot;&gt;starting code&lt;/a&gt; in your browser. Create your own copy by clicking &apos;Fork&apos; from the menu in the upper right corner. The starter app will look like this:&lt;/p&gt;
&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/picture-2-1602661773922.png&quot;&gt;
&lt;h2&gt;Starter Code Orientation&lt;/h2&gt;
&lt;h3&gt;Development Environment&lt;/h3&gt;
&lt;p&gt;For this tutorial, I’m using &lt;a href=&quot;https://codesandbox.io/&quot;&gt;Codesandbox&lt;/a&gt; as the development environment. Codesandbox  presents the user with:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;An interface for the project’s file and directory structure&lt;/li&gt;
&lt;li&gt;A text editor for editing files&lt;/li&gt;
&lt;li&gt;A browser window to view and interact with the application&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Project Structure and Dependencies&lt;/h3&gt;
&lt;p&gt;The project structure mimics a minimal Create React App (CRA) structure and is organized like so:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;/public&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;index.html (this gets loaded by the browser and is what renders src/index.js)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;/src&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;index.js (This is the project’s root file. I won’t be modifying it.)&lt;/li&gt;
&lt;li&gt;App.js&lt;/li&gt;
&lt;li&gt;DemoSection.js (This is a component I will use later in the tutorial)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;package.json (Create React App ships with &lt;code&gt;react&lt;/code&gt;, &lt;code&gt;react-dom&lt;/code&gt;, and &lt;code&gt;react-scripts&lt;/code&gt; as dependencies. Additionally, I’ve added &lt;code&gt;grommet&lt;/code&gt; and its peer dependency, &lt;code&gt;styled-components&lt;/code&gt;, to the starter project.)&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;App.js&lt;/h3&gt;
&lt;p&gt;For this tutorial, most of the development will happen in App.js. The App.js file consists of three parts: imports, the &lt;code&gt;projectTasks&lt;/code&gt; array, and the App component.&lt;/p&gt;
&lt;p&gt;Import of React and Grommet components:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;import React from &quot;react&quot;;
import { Grommet, Anchor, Box, List, Heading, Paragraph, Text } from &quot;grommet&quot;;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;code&gt;projectTasks&lt;/code&gt; array with the tasks to be implemented&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;const projectTasks = [
 &quot;Apply a Theme - Add an existing theme to provide some styling&quot;,
 `Add Theme Toggle Button - Add a button, which when clicked,
 will toggle the theme between light and dark modes`,
 &quot;Customize Theme - Add custom colors to theme&quot;
];
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;App&lt;/code&gt; is composed of the following Grommet components:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;&amp;#x3C;Grommet&gt;&lt;/code&gt; is the top level Grommet container&lt;/li&gt;
&lt;li&gt;&lt;code&gt;&amp;#x3C;Box&gt;&lt;/code&gt; for some basic page layout&lt;/li&gt;
&lt;li&gt;For the page’s content, I have &lt;code&gt;&amp;#x3C;Heading&gt;&lt;/code&gt;, &lt;code&gt;&amp;#x3C;Paragraph&gt;&lt;/code&gt;, and &lt;code&gt;&amp;#x3C;List&gt;&lt;/code&gt;, which iterates over the &lt;code&gt;projectTasks&lt;/code&gt; array, returning a &lt;code&gt;&amp;#x3C;Text&gt;&lt;/code&gt; component for each task.&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;const App = () =&gt; {
 return (
   &amp;#x3C;Grommet full&gt;
     &amp;#x3C;Box gap=&quot;small&quot; pad=&quot;large&quot; align=&quot;start&quot;&gt;
       &amp;#x3C;Heading level=&quot;1&quot;&gt;Hello Grommet Theme Toggle&amp;#x3C;/Heading&gt;
       &amp;#x3C;Paragraph&gt;
         This is the first step in a &amp;#x3C;Anchor href=&quot;#&quot;&gt;theming tutorial&amp;#x3C;/Anchor&gt;{&quot; &quot;}
         using Grommet.
       &amp;#x3C;/Paragraph&gt;
       &amp;#x3C;Paragraph&gt;We will be modifying this project to:&amp;#x3C;/Paragraph&gt;
       &amp;#x3C;List data={projectTasks}&gt;
         {(task, index) =&gt; (
           &amp;#x3C;Text key={index}&gt;
             {index + 1}) {task}
           &amp;#x3C;/Text&gt;
         )}
       &amp;#x3C;/List&gt;
     &amp;#x3C;/Box&gt;
   &amp;#x3C;/Grommet&gt;
 );
};

export default App;
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;Applying a Theme&lt;/h1&gt;
&lt;p&gt;To provide some initial styling, I’ll apply the &lt;code&gt;grommet&lt;/code&gt; theme. In &lt;a href=&quot;https://developer.hpe.com/blog/dark-mode-theming-in-grommet-theme-color-customization/&quot;&gt;Part 3&lt;/a&gt; of this series, I will show you how to customize and incorporate a custom theme.&lt;/p&gt;
&lt;p&gt;In App.js, import and apply the Grommet theme:&lt;/p&gt;
&lt;p&gt;Import grommet theme&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt; import { grommet } from &quot;grommet&quot;; 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Apply it by adding &lt;code&gt;theme={grommet}&lt;/code&gt; to the Grommet component.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt; &amp;#x3C;Grommet full theme={grommet}&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/picture-3-1602661789429.png&quot; alt=&quot;matt1-picture 3&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/picture-4-1602661802053.png&quot; alt=&quot;matt1-picture 4&quot;&gt;&lt;/p&gt;
&lt;p&gt;Your code and resulting app should resemble this &lt;a href=&quot;https://codesandbox.io/s/grommet-theme-toggle-1adding-theme-rg91i?file=/src/App.js&quot;&gt;Codesandbox&lt;/a&gt;, and the app’s UI should have updated with the applied theme.&lt;/p&gt;
&lt;p&gt;At this point, you have an understanding of how to apply a theme to a Grommet app. This is the foundation from which we will build custom theming. In my next post, I will show you how to add dark/light theming and give users control over toggling between theme modes. In my final post, I will wrap up by showing how to customize a theme with custom dark and light mode colors.&lt;/p&gt;
&lt;p&gt;Stay tuned to the &lt;a href=&quot;/blog&quot;&gt;HPE DEV blog&lt;/a&gt; to catch Parts &lt;a href=&quot;https://developer.hpe.com/blog/dark-mode-theming-in-grommet-adding-dark-and-light-theme-modes/&quot;&gt;2&lt;/a&gt; and &lt;a href=&quot;https://developer.hpe.com/blog/dark-mode-theming-in-grommet-theme-color-customization/&quot;&gt;3&lt;/a&gt; of this series. If you have any questions, please feel free to reach out to me and others in the Grommet group on our &lt;a href=&quot;https://app.slack.com/client/T04LMHMUT/C04LMHN59&quot;&gt;Slack channel&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Announcing Chapel 1.33!]]></title><description><![CDATA[E﻿xternal blog]]></description><link>https://developer.hpe.com/announcing-chapel-1-33/</link><guid isPermaLink="false">https://developer.hpe.com/announcing-chapel-1-33/</guid><pubDate>Thu, 14 Dec 2023 23:14:56 GMT</pubDate><content:encoded>&lt;p&gt;E﻿xternal blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[SC23 from the Chapel Language Perspective]]></title><description><![CDATA[E﻿xternal blog]]></description><link>https://developer.hpe.com/sc23-from-the-chapel-language-perspective/</link><guid isPermaLink="false">https://developer.hpe.com/sc23-from-the-chapel-language-perspective/</guid><pubDate>Thu, 07 Dec 2023 18:24:50 GMT</pubDate><content:encoded>&lt;p&gt;E﻿xternal blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Streamline modern app development with HPE GreenLake and open source solutions]]></title><link>https://developer.hpe.com/2023-December-05/</link><guid isPermaLink="false">https://developer.hpe.com/2023-December-05/</guid><pubDate>Mon, 04 Dec 2023 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Discover the power of data center monitoring using Redfish telemetry and cloud-native tooling: Part 2 - Streaming]]></title><description><![CDATA[Ensuring the seamless operation of your data center is critical to business continuity. In my first article in this series, I showed you how…]]></description><link>https://developer.hpe.com/discover-the-power-of-data-center-monitoring-using-redfish-telemetry-and-cloud-native-tooling-part-2-streaming/</link><guid isPermaLink="false">https://developer.hpe.com/discover-the-power-of-data-center-monitoring-using-redfish-telemetry-and-cloud-native-tooling-part-2-streaming/</guid><pubDate>Fri, 01 Dec 2023 21:50:51 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;p&gt;Ensuring the seamless operation of your data center is critical to business continuity. In my &lt;a href=&quot;https://developer.hpe.com/blog/discover-the-power-of-data-center-monitoring-using-redfish-telemetry-and-cloud-native-tooling/&quot;&gt;&lt;strong&gt;first article in this series&lt;/strong&gt;,&lt;/a&gt; I showed you how coupling Redfish telemetry, a standardized way to access telemetry data from servers and other devices, with cloud-native monitoring tools can improve the efficiency and insightfulness of your data center monitoring. In this second article, I&apos;ll show you how you can unlock the full potential of data center monitoring with Redfish telemetry streaming and cutting-edge cloud-native tools. Together, we&apos;ll delve deep into the world of streaming, as I showcase the efficiency and scalability of HPE iLO&apos;s Redfish telemetry streaming using event subscription.&lt;/p&gt;
&lt;h1&gt;A holistic approach&lt;/h1&gt;
&lt;p&gt;To fully tap the potential of data center monitoring, embark on a comprehensive journey by exploring my latest technical whitepaper, &lt;strong&gt;&quot;&lt;a href=&quot;https://www.hpe.com/psnow/doc/a50009739enw&quot;&gt;Data center monitoring using Redfish telemetry and cloud-native tooling: Part 2 — streaming&lt;/a&gt;&quot;&lt;/strong&gt;. This guide not only introduces you to the intricacies of Redfish telemetry streaming but also demonstrates the seamless integration with open-source cloud-native tools such as Telegraf, InfluxDB, and Grafana. The telemetry data source, HPE iLO, provides a wealth of performance metrics through the Redfish interface, and event subscription ensures a continuous flow of metric reports from the telemetry service.&lt;/p&gt;
&lt;h1&gt;Key highlights from the whitepaper&lt;/h1&gt;
&lt;p&gt;In my whitepaper, I provide you with a deep understanding of:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Redfish telemetry streaming:&lt;/strong&gt; Gain insights into the efficiency and scalability of Redfish telemetry streaming using event subscription for metric collection.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Cloud-native tools:&lt;/strong&gt; Explore the capabilities of Telegraf, InfluxDB, and Grafana, and witness how they elevate data center monitoring to new heights.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Architecture deep dive:&lt;/strong&gt; Delve into the intricacies of the integrated solution&apos;s architecture, unraveling how data seamlessly flows from the source to the visualization layer.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Setup process:&lt;/strong&gt; Follow a step-by-step guide on implementing this monitoring solution in your data center, ensuring a smooth and efficient setup.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Benefits:&lt;/strong&gt; Uncover the array of benefits this approach brings, including real-time monitoring, effortless scalability, and the power to craft customizable visualizations.&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;Take the leap&lt;/h1&gt;
&lt;p&gt;Discover how the fusion of Redfish telemetry streaming with cloud-native tools revolutionizes data center monitoring. Dive into the depths of insights and efficiency by reading my whitepaper. Transform your data center monitoring with HPE iLO today! For more enlightening articles on HPE iLO, stay tuned to the &lt;a href=&quot;https://developer.hpe.com/blog/&quot;&gt;HPE Developer blog&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Automate ITOps: announcing foundational APIs for the HPE GreenLake edge-to-cloud platform]]></title><description><![CDATA[External blog]]></description><link>https://developer.hpe.com/automate-itops-announcing-foundational-apis-for-the-hpe-greenlake-edge-to-cloud-platform/</link><guid isPermaLink="false">https://developer.hpe.com/automate-itops-announcing-foundational-apis-for-the-hpe-greenlake-edge-to-cloud-platform/</guid><pubDate>Fri, 01 Dec 2023 16:43:11 GMT</pubDate><content:encoded>&lt;p&gt;External blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[How to implement a single sign-on solution to authenticate users onto the HPE GreenLake edge-to-cloud platform]]></title><description><![CDATA[Enterprises looking to use HPE GreenLake for Private Cloud Enterprise can benefit from the use of SSO, as it has been integrated onto the…]]></description><link>https://developer.hpe.com/configuring-sso-for-hpe-greenlake-central-private-cloud-enterprise-and-hpe-greenlake-glcp-using-okta/</link><guid isPermaLink="false">https://developer.hpe.com/configuring-sso-for-hpe-greenlake-central-private-cloud-enterprise-and-hpe-greenlake-glcp-using-okta/</guid><pubDate>Wed, 29 Nov 2023 12:41:00 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;p&gt;Enterprises looking to use HPE GreenLake for Private Cloud Enterprise can benefit from single sign-on (SSO), as the service is integrated with the HPE GreenLake edge-to-cloud platform (also known as the HPE GreenLake platform), which supports SSO.&lt;/p&gt;
&lt;p&gt;In this blog post, I will walk you through the process of configuring Okta Active Directory (AD) to authenticate users into the HPE GreenLake for Private Cloud Enterprise application on the HPE GreenLake platform using SAML Identity Provider (IdP) for single sign-on.&lt;/p&gt;
&lt;h3&gt;Before starting&lt;/h3&gt;
&lt;p&gt;Please review the &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00120892en_us&amp;#x26;page=GUID-D7192971-EF71-4304-B51E-548E7954E644.html&quot;&gt;HPE GreenLake&lt;/a&gt; User Guide to understand how the SAML framework works in the context of HPE GreenLake for Private Cloud Enterprise Services for the HPE GreenLake edge-to-cloud platform.&lt;/p&gt;
&lt;h3&gt;Configure SSO/SAML applications in Okta&lt;/h3&gt;
&lt;p&gt;To configure application metadata in Okta, complete the following steps:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Step 1: Create an Okta SAML application&lt;/li&gt;
&lt;li&gt;Step 2: Configure Sign On settings&lt;/li&gt;
&lt;li&gt;Step 3: Export the SAML 2.0 IdP metadata&lt;/li&gt;
&lt;li&gt;Step 4: Configure the SAML connection in the HPE GreenLake platform&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Step 1: Create an Okta SAML application&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Log into the Okta administration console.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Applications &gt; Create new app integration.&lt;/strong&gt; The Create a new app integration window opens.&lt;/li&gt;
&lt;li&gt;Select SAML 2.0 and click &lt;strong&gt;Next&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/ws-image0.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Provide a name for the SAML application which gets connected to the HPE GreenLake platform:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/saml_app-okta.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 2: Configure single sign-on settings&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Enter the SAML information.&lt;/p&gt;
&lt;p&gt;Under General:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Single Sign on URL:&lt;/strong&gt; &lt;a href=&quot;https://sso.common.cloud.hpe.com/sp/ACS.saml2&quot;&gt;https://sso.common.cloud.hpe.com/sp/ACS.saml2&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Audience URI (SP Entity ID):&lt;/strong&gt; &lt;a href=&quot;https://sso.common.cloud.hpe.com&quot;&gt;https://sso.common.cloud.hpe.com&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Name ID format:&lt;/strong&gt; EmailAddress&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Application username:&lt;/strong&gt; Email&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;NameID = user.email&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;gl_first_name = user.FirstName&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;gl_last_name = user.LastName&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;hpe_ccs_attribute = (See Below)&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;See here for IdP attribute details: &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00120892en_us&amp;#x26;page=GUID-D7192971-EF71-4304-B51E-548E7954E644.html&quot;&gt;https://support.hpe.com/hpesc/public/docDisplay?docId=a00120892en_us&amp;#x26;page=GUID-D7192971-EF71-4304-B51E-548E7954E644.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;A new SAML attribute, &lt;code&gt;hpe_ccs_attribute&lt;/code&gt;, has been added; it tells the HPE GreenLake platform and the HPE GreenLake for Private Cloud Enterprise application the exact role/permissions for each user. The following describes how to format the attribute.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;Format: {version}#{pcid}:{app id}:{role_name}:{ALL_SCOPES}&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Note: At present, the HPE GreenLake for Private Cloud Enterprise application role should be excluded.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/hpe-greenlake-saml-attributes.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/workspace-pcid.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/glp_role_name.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;hpe_ccs_attribute&lt;/strong&gt; always starts with the version prefix (&lt;code&gt;version_1#&lt;/code&gt; in the example below). You must first configure the attributes for the HPE GreenLake platform. To do so, enter the Platform Customer ID (PCID) for the account (the identifier assigned to your HPE GreenLake platform workspace), followed by the HPE GreenLake platform application ID, which is always &lt;strong&gt;00000000-0000-0000-0000-000000000000&lt;/strong&gt;. Following this, enter the role name and &lt;strong&gt;ALL_SCOPES&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Example:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;version_1#7ede5c36b7b911edacf45a78eb8b07d1:00000000-0000-0000-0000-000000000000:Observer:ALL_SCOPES&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/saml_settings.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;2. Complete the setup.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/ws-image7.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Click &lt;strong&gt;Next&lt;/strong&gt; and select &lt;strong&gt;Internal App&lt;/strong&gt;, then &lt;strong&gt;Finish&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; &lt;strong&gt;Export the SAML 2.0 IdP metadata&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Click &lt;strong&gt;Next&lt;/strong&gt; to configure the single sign-on settings.&lt;/p&gt;
&lt;p&gt;You will find two options are available: &lt;strong&gt;View Setup Instructions&lt;/strong&gt; which steps you through the SAML configuration and &lt;strong&gt;Identity Provider metadata&lt;/strong&gt;, which will produce an XML file that can be loaded into HPE GreenLake platform application.&lt;/p&gt;
&lt;p&gt;Suggestion: click &lt;strong&gt;Identity Provider metadata&lt;/strong&gt; and save the XML data to a file.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/ws-image9.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click &lt;strong&gt;Next&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select &lt;strong&gt;Internal app&lt;/strong&gt;, and click &lt;strong&gt;Finish&lt;/strong&gt;.&lt;/p&gt;
&lt;h5&gt;&lt;strong&gt;Step 3.1: Access to the SAML application and the HPE GreenLake platform is determined by assigning only the appropriate members or groups to the SAML application.&lt;/strong&gt;&lt;/h5&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/customer-user-assignment-to-saml.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt; &lt;strong&gt;Configure the SAML connection in the HPE GreenLake platform&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Log into HPE GreenLake platform and click &lt;strong&gt;Menu&lt;/strong&gt; &gt; &lt;strong&gt;Manage&lt;/strong&gt; &gt; &lt;strong&gt;Authentication&lt;/strong&gt; and click &lt;strong&gt;Set Up SAML Connection&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Before you can add a new SAML configuration, you must have at least &lt;strong&gt;one&lt;/strong&gt; user account with that &lt;strong&gt;domain&lt;/strong&gt; already enabled in HPE GreenLake platform. Also, you must be logged into HPE GreenLake platform with an account from that domain in order to enable SSO for it.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/ws-image10.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Type in the domain you want to enable SSO on:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/glp_domain.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Input the metadata from the step above.&lt;/p&gt;
&lt;p&gt;While the HPE GreenLake platform does support entering this information manually, it&apos;s recommended that you simply upload the XML metadata that was downloaded in the previous step. To do so, select &lt;strong&gt;Metadata File&lt;/strong&gt; and choose the XML file. Then, click &lt;strong&gt;Next&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/ws-image12.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Enter the SAML attributes to match what was entered in Okta. Set the idle timeout value as well.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/config_setting_sso_appjpg.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Then click &lt;strong&gt;Next&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a recovery user so that, in the event SSO fails, an admin will still be able to access the HPE GreenLake platform.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/recovery_user.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Congratulations! SSO will now be enabled for HPE GreenLake platform as well as the HPE GreenLake for Private Cloud Enterprise application. Log out and on the HPE GreenLake platform home page, click &lt;strong&gt;Sign in with SSO&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;Testing and troubleshooting:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;On the HPE GreenLake edge-to-cloud platform home page, click &lt;strong&gt;Sign In with SSO&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/ws-image15.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/hpe-greenlake-sso-page.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Enter the SSO credentials. You will be redirected to Okta to authenticate. Once you successfully authenticate, you will be redirected back to HPE GreenLake platform. You can then click on the HPE GreenLake for Private Cloud Enterprise application and be given access based on the configured role/permissions.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Additional notes:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;There must be at least &lt;strong&gt;one&lt;/strong&gt; verified user belonging to the &lt;strong&gt;Domain&lt;/strong&gt; prior to configuration.&lt;/li&gt;
&lt;li&gt;In order to configure SSO, you must be logged into the HPE GreenLake edge-to-cloud platform with a user from the domain.&lt;/li&gt;
&lt;li&gt;SSO user access is determined by the “role_name” attribute included in the SAML hpe_ccs_attribute provided by the IdP.&lt;/li&gt;
&lt;li&gt;For more troubleshooting: &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00120892en_us&quot;&gt;https://support.hpe.com/hpesc/public/docDisplay?docId=a00120892en_us&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Customer users should be given access to the SAML application.&lt;/li&gt;
&lt;li&gt;After authentication, if clicking the HPE GreenLake for Private Cloud Enterprise application leads to an error, allow up to one hour for the configuration to sync. If the error persists beyond that time, the customer should contact their HPE administrator.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I hope this blog post answers any questions you may have had in regards to how to configure single sign-on for HPE GreenLake for Private Cloud Enterprise on the HPE GreenLake platform using Okta Active Directory. Please return back to the &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE Developer Community blog&lt;/a&gt; for more tips and tricks on working with the HPE GreenLake platform.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Help us prioritize features for future releases of Chapel]]></title><description><![CDATA[In a recent blog post, the Chapel team gave a preview of Chapel 1.32, the release candidate for the official 2.0 version which will declare…]]></description><link>https://developer.hpe.com/help-us-prioritize-features-for-future-releases-of-chapel/</link><guid isPermaLink="false">https://developer.hpe.com/help-us-prioritize-features-for-future-releases-of-chapel/</guid><pubDate>Wed, 15 Nov 2023 18:36:57 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;p&gt;In a &lt;a href=&quot;https://chapel-lang.org/blog/posts/announcing-chapel-1.32/&quot;&gt;recent blog post&lt;/a&gt;, the Chapel team gave a preview of Chapel 1.32, the release candidate for the official 2.0 version which will declare a core subset of the language and library features as &quot;stable&quot;. These features are ones that we intend to support in their current form going forward, such that code relying on them will not break across releases.&lt;/p&gt;
&lt;p&gt;With this candidate release available, we thought now would be a good time to get a sense of which currently unstable features are important to our user base, so that we can better prioritize what to stabilize next. To do so, we&apos;ve created a Chapel program (&lt;a href=&quot;https://github.com/chapel-lang/chapel/blob/main/tools/unstableWarningAnonymizer/unstableAnonScript.chpl&quot;&gt;available here&lt;/a&gt;) that will summarize the unstable warnings in your key programs. This summary will not include any identifying details like module or variable names, so even if your source code is not intended for public eyes, you should be able to send us the result of running this script without worry.&lt;/p&gt;
&lt;p&gt;This script is intended to be used with the most recent Chapel release (1.32), as that release has the most complete coverage of unstable features.  If you use it with earlier releases, please let us know when providing your result file.&lt;/p&gt;
&lt;p&gt;Here&apos;s how to use this script:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Compile the program linked above like you would any normal Chapel program. This should create an executable named &apos;unstableAnonScript&apos;. For example:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;chpl unstableAnonScript.chpl
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Compile and run your program as you would normally, with the addition of the compilation flag &lt;code&gt;--warn-unstable&lt;/code&gt; (which will cause any use of unstable features to trigger a warning), and save the full output into a file.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;chpl --warn-unstable myProgram.chpl &gt;myUnstableOutput.txt 2&gt;&amp;#x26;1  
./myProgram &gt;&gt;myUnstableOutput.txt 2&gt;&amp;#x26;1
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After these commands, &lt;code&gt;myUnstableOutput.txt&lt;/code&gt; (or whatever you&apos;ve named it) should contain any unstable warnings you may trigger in your code, as well as any other potential output that occurs when compiling and running your program.&lt;/p&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Run the built script over your output file.  For our convenience, it would be helpful to run with the &lt;code&gt;--csv&lt;/code&gt; and &lt;code&gt;--sorted&lt;/code&gt; flags, or &lt;code&gt;-c&lt;/code&gt; and &lt;code&gt;-d&lt;/code&gt; if you want to use the shorter version of those flags.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;./unstableAnonScript --csv --sorted --inputFiles myUnstableOutput.txt --outputFile mySummary.csv

or

./unstableAnonScript -c -d -i myUnstableOutput.txt -o mySummary.csv
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note that &lt;code&gt;--inputFiles&lt;/code&gt;/&lt;code&gt;-i&lt;/code&gt; can take multiple files, so if you have multiple chapel programs you&apos;d like to share the results for, you can combine the results together by specifying the unstable warnings from all of those programs at the same time:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;./unstableAnonScript -c -d -i myUnstableOutput1.txt myUnstableOutput2.txt myUnstableOutput3.txt -o mySummary.csv
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After any of these commands, &lt;code&gt;mySummary.csv&lt;/code&gt; (or whatever you&apos;ve named
it) should contain a comma-separated list of the unstable warnings generated by your program(s) and their counts, sorted from most common to least.  There should be no identifying information in this file, so at this point, it should be safe to send the file to us.&lt;/p&gt;
&lt;p&gt;You could also additionally run with the &lt;code&gt;--numFiles&lt;/code&gt; flag (&lt;code&gt;-n&lt;/code&gt; for short), which will include the number of different files where each unstable warning was generated:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;./unstableAnonScript --csv --sorted --numFiles --inputFiles myUnstableOutput.txt --outputFile mySummary.csv

or

./unstableAnonScript -c -d -n -i myUnstableOutput.txt -o mySummary.csv
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This information would be helpful for our metrics, but is not essential.&lt;/p&gt;
&lt;p&gt;Please send your resulting file to the &lt;a href=&quot;https://chapel.discourse.group/c/users/6&quot;&gt;Chapel users mailing list on Discourse&lt;/a&gt;. If you would like to send it to just the Chapel team at HPE, you can either send it to &lt;a href=&quot;mailto:chapel+info@discoursemail.com&quot;&gt;chapel+info@discoursemail.com&lt;/a&gt; or to &lt;a href=&quot;https://chapel.discourse.group/u/lydia&quot;&gt;Lydia Duncan on Discourse&lt;/a&gt; directly. Note that if you do not already have a Discourse account, you may be asked to create one to send it.&lt;/p&gt;
&lt;p&gt;If you have any questions or concerns about this, please don&apos;t hesitate to voice them.&lt;/p&gt;
&lt;p&gt;Thank you for your continued interest in Chapel, and for helping us prioritize on the features that matter most to you.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Programming with Chapel: Making the Power of Parallelism and Supercomputers More Accessible]]></title><description><![CDATA[E﻿xternal blog]]></description><link>https://developer.hpe.com/programing-with-chapel-making-the-power-of-parallelism-and-supercomputers-more-accessible/</link><guid isPermaLink="false">https://developer.hpe.com/programing-with-chapel-making-the-power-of-parallelism-and-supercomputers-more-accessible/</guid><pubDate>Tue, 14 Nov 2023 16:38:00 GMT</pubDate><content:encoded>&lt;p&gt;E﻿xternal blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Need ways to enhance your workloads? We’ve got answers!]]></title><link>https://developer.hpe.com/2023-November-02/</link><guid isPermaLink="false">https://developer.hpe.com/2023-November-02/</guid><pubDate>Thu, 02 Nov 2023 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Highly available NFS workload on HPE GreenLake for Private Cloud Enterprise using Serviceguard for Linux]]></title><description><![CDATA[Introduction HPE GreenLake for Private Cloud Enterprise offers the ability to manage and provision compute through machine-readable…]]></description><link>https://developer.hpe.com/highly-available-nfs-workload-on-hpe-greenlake-for-private-cloud-enterprise-using-serviceguard-for-linux/</link><guid isPermaLink="false">https://developer.hpe.com/highly-available-nfs-workload-on-hpe-greenlake-for-private-cloud-enterprise-using-serviceguard-for-linux/</guid><pubDate>Mon, 30 Oct 2023 17:51:45 GMT</pubDate><content:encoded>&lt;h1&gt;Introduction&lt;/h1&gt;
&lt;p&gt;HPE GreenLake for Private Cloud Enterprise offers the ability to manage and provision compute through machine-readable definition files, otherwise known as Infrastructure-as-Code (IaC). This offers many significant benefits; for example, it helps increase operational agility, simplifies management, reduces errors, and saves cost.&lt;/p&gt;
&lt;p&gt;In this blog post, you will discover how to provision a highly available and scalable NFS server solution on the HPE GreenLake for Private Cloud Enterprise platform in conjunction with HPE Serviceguard for Linux. The first part covers provisioning of VMs and other required components using Terraform, and the second part covers installing and configuring an NFS server and Serviceguard for Linux (SGLX) to provide a highly available NFS service.&lt;/p&gt;
&lt;h2&gt;HPE GreenLake for Private Cloud Enterprise&lt;/h2&gt;
&lt;p&gt;One of the options provided through HPE GreenLake is to make it easy for customers to order and operate a private cloud with a mix of virtual machines, containers, and physical servers. This service allows customers to create resources such as virtual machines using just a few button clicks. It provides access via a public API, allowing developers to use an Infrastructure-as-Code type of tool to automate the provisioning, i.e., Terraform.&lt;/p&gt;
&lt;h2&gt;Terraform&lt;/h2&gt;
&lt;p&gt;Terraform is an open-source Infrastructure-as-Code framework, originally created by HashiCorp and written in Go. It uses a declarative language (HashiCorp Configuration Language, HCL, or more recently JSON) to describe the desired state of the infrastructure in terms of clouds, virtual machines, networks, storage, and many other components. Terraform uses the concept of “providers” to integrate with all major public clouds. Terraform is an idempotent system in the sense that it does not generate any side effects if applied multiple times on infrastructure already in the desired state. Terraform has gained considerable momentum in the last few years.&lt;/p&gt;
&lt;h2&gt;HPE Serviceguard for Linux&lt;/h2&gt;
&lt;p&gt;HPE Serviceguard for Linux (SGLX) is a high availability (HA) and disaster recovery (DR) clustering solution that increases uptime for your critical applications by protecting them from a multitude of infrastructure and application faults across physical or virtual environments over any distance. The solution also reduces the impact of unplanned downtime with no compromise on data integrity or performance, and it helps achieve near zero planned downtime for maintenance.&lt;/p&gt;
&lt;h2&gt;Ansible&lt;/h2&gt;
&lt;p&gt;Ansible Automation Platform is an end-to-end automation platform to configure systems, deploy software, and orchestrate advanced workflows. It includes resources to create, manage, and scale across the entire enterprise.&lt;/p&gt;
&lt;h1&gt;Preparing for an Infrastructure-as-Code implementation&lt;/h1&gt;
&lt;h2&gt;Terraform installation&lt;/h2&gt;
&lt;p&gt;The first step is to create a virtual machine that can act as a Jump Host and where you will execute your code. Virtual machines created in later steps will be reachable from your Jump Host if you configure your networking infrastructure appropriately.
You can create a virtual machine in the Virtual Machines service of HPE GreenLake for Private Cloud Enterprise as follows:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Log into &lt;strong&gt;HPE GreenLake Central&lt;/strong&gt; by navigating to &lt;a href=&quot;https://client.greenlake.hpe.com&quot;&gt;https://client.greenlake.hpe.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Launch the &lt;strong&gt;Virtual Machines Service Console&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Provisioning --&gt; Instances&lt;/strong&gt; and create a Linux-based virtual machine using whichever VM (Virtual Machine) image, Plan, and network you find most suitable.&lt;/li&gt;
&lt;li&gt;(optional) Configure your virtual machine so you can access your GitHub or other code repository.&lt;/li&gt;
&lt;li&gt;Log into your virtual machine and install Terraform: HashiCorp provides a useful &lt;a href=&quot;https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli&quot;&gt;tutorial&lt;/a&gt; on how to do this for various Linux distributions (a sketch for RHEL-based systems is shown after this list)&lt;/li&gt;
&lt;li&gt;Verify the installation by executing this command: &lt;code&gt;terraform --help&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;
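&lt;p&gt;As a concrete illustration, here is a minimal sketch of what the installation can look like on a RHEL-based Jump Host (matching the Red Hat template used later in this post); it assumes the host has Internet access and follows the yum-based steps from the HashiCorp tutorial linked above:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Add the official HashiCorp yum repository and install Terraform (RHEL/CentOS sketch)
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo
sudo yum -y install terraform

# Confirm the binary is available
terraform --help
&lt;/code&gt;&lt;/pre&gt;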
&lt;p&gt;At this point, you are ready to start building your infrastructure description file.&lt;/p&gt;
&lt;h2&gt;Building a Terraform configuration file from scratch&lt;/h2&gt;
&lt;p&gt;Let’s start building this Terraform (TF) file using your favourite editor.&lt;/p&gt;
&lt;h3&gt;Selecting a Terraform provider&lt;/h3&gt;
&lt;p&gt;The first section of the file enumerates the “providers” you rely upon for building your infrastructure; there can be multiple providers in a single TF file. In this case, you only need the HPE GreenLake provider, referenced as hpe/hpegl in the official Terraform registry.
The first lines of your Terraform configuration file should look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;# Load HPE GreenLake terraform provider

terraform {
      required_providers {
         hpegl = {
            source  = &quot;hpe/hpegl&quot;
            version = &quot;0.3.17&quot;
         }
      }
   }
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can find out more about the HPE GreenLake Terraform provider from its &lt;a href=&quot;https://registry.terraform.io/providers/HPE/hpegl/latest&quot;&gt;Terraform Registry page&lt;/a&gt;.  This page also provides a link to the GitHub repository corresponding to this provider.
The docs folder is your best source of information for using the different data sources and resources provided by the provider. If you navigate to the resources section, you will see that one resource you can configure with this provider is a VM instance. This article will focus on this resource.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;/em&gt; Because this is open source, do not hesitate to open issues, or even a pull request, if you identify an issue.&lt;/p&gt;
&lt;h3&gt;Setting up the Terraform provider&lt;/h3&gt;
&lt;p&gt;Next, set up the required parameters for the hpegl provider specified earlier. You can either set those parameters explicitly in your TF file, supply them through environment variables, or use a mix of both. It is recommended to add the following two parameters to your TF file:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;# Setup provider environment (location and space)
provider &quot;hpegl&quot; {
      vmaas {
         location   = &quot;FTC06&quot;
         space_name = &quot;Default&quot;
      }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The rest (such as the tenant ID, user ID, and user secret key) can be placed in an RC file, which you can source before running your Terraform commands.
You can find your location and your space name in the HPE GreenLake for Private Cloud Enterprise Overview.&lt;/p&gt;
&lt;p&gt;In the example shown below, FTC06 is our location:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/picture1.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In the capture below, Default is the space you will use for your work with Terraform. You can check your available Spaces from the HPE GreenLake console under your profile icon, Change Space.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/picture2.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Setting up API Client access&lt;/h2&gt;
&lt;p&gt;Next, you need to create a new API Client access dedicated to Terraform. You can do this from the HPE GreenLake console under your settings icon, select &lt;strong&gt;User Management&lt;/strong&gt; and then the &lt;strong&gt;API Clients&lt;/strong&gt; tab.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/picture3.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Create a new API Client and be sure to note down the Issuer, Client ID, and Client Secret values that are shown.
The tenant ID can be found in the Tenant ID field under the API Access menu, as well as in the URL of your browser.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/picture4.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;With this information, you can now build a resource (RC) file that defines the following environment variables:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;export HPEGL_TENANT_ID=&amp;#x3C;Your Tenant ID&gt;
export HPEGL_USER_ID=&amp;#x3C;Client ID of the API Client&gt;
export HPEGL_USER_SECRET=&amp;#x3C;Secret Key displayed when you created the API Client&gt;
export HPEGL_IAM_SERVICE_URL=&amp;#x3C;Issuer URL&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Source this file on your machine to set these environment variables in your current shell.&lt;/p&gt;
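&lt;p&gt;For illustration, assuming you saved the exports in a file named &lt;code&gt;hpegl.rc&lt;/code&gt; (a hypothetical name; any readable shell script works), this is what that looks like:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# hpegl.rc is a hypothetical file name holding the exports shown above
chmod 600 hpegl.rc
source ./hpegl.rc

# Quick sanity check that the variables are set in the current shell
env | grep HPEGL_
&lt;/code&gt;&lt;/pre&gt;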
&lt;h2&gt;Assign Roles to API Client&lt;/h2&gt;
&lt;p&gt;Once your API Client has been created, you need to assign it a Role and a Space. You can do this by clicking on your new API Client and then clicking the Create Assignment button.
Since the intent is to use this API Client to create resources in the Virtual Machines Service, you need to assign an appropriate Virtual Machines Role. Choose a Role such as ‘Private Cloud Tenant Contributor’ and the same Space used earlier, i.e., ‘Default’.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;/em&gt; More details on HPE GreenLake user roles can be found in the &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00120892en_us&quot;&gt;HPE GreenLake documentation&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Set API Client Usernames and Passwords&lt;/h3&gt;
&lt;p&gt;When a user creates virtual machines using the HPE GreenLake for Private Cloud Enterprise: Virtual Machines user interface, they first set the Linux and Windows username and password. Once this is done, any virtual machines subsequently created by that user will inherit these credentials. The user can later use these credentials to log into these virtual machines.&lt;/p&gt;
&lt;p&gt;API Clients which are used to create virtual machines can also set Linux and Windows username and password values. Since the API Client does not use the HPE GreenLake for Private Cloud Enterprise: Virtual Machines user interface, this must be done via an API call.&lt;/p&gt;
&lt;p&gt;Here is a sample script which reads the VM_USERNAME and VM_PASSWORD environment variables and uses the values for Linux and Windows username and password for the API Client. The script assumes a Location value of ‘FTC06’ and Space value of ‘Default’.&lt;/p&gt;
&lt;p&gt;To execute this script, first set appropriate values for the VM_USERNAME and VM_PASSWORD environment variables. Next, source the resource file created earlier, which sets the HPEGL_* environment variables for your API Client.&lt;/p&gt;
&lt;p&gt;Finally, execute the script below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;#!/bin/bash
export LOCATION=&apos;FTC06&apos;
export SPACE=&apos;Default&apos;
export SPACE_ENCODED=$(echo -n -e &quot;$SPACE&quot; | od -An -tx1 | tr &apos; &apos; % | xargs printf &quot;%s&quot;)

ACCESS_TOKEN=$(curl -s -k -X POST \
  &quot;${HPEGL_IAM_SERVICE_URL}/v1/token&quot; \
  -H &quot;Content-Type: application/x-www-form-urlencoded&quot; \
  -d &quot;client_id=${HPEGL_USER_ID}&quot; \
  -d &quot;client_secret=${HPEGL_USER_SECRET}&quot; \
  -d grant_type=client_credentials \
  -d scope=hpe-tenant | jq -r &apos;.access_token&apos;)
echo &quot;Token: ${ACCESS_TOKEN}&quot;

curl -s -k -X GET \
   &quot;https://client.greenlake.hpe.com/api/iac-vmaas/v1/whoami?space=${SPACE_ENCODED}&amp;#x26;location=${LOCATION}&quot; \
   -H &quot;Authorization: ${ACCESS_TOKEN}&quot; | jq &apos;.&apos;
   
# Sets user settings
curl -s -k -X POST \
  &quot;https://client.greenlake.hpe.com/api/iac-vmaas/v1beta1/user-settings?space=${SPACE_ENCODED}&amp;#x26;location=${LOCATION}&quot; \
  -H &quot;Authorization: ${ACCESS_TOKEN}&quot; \
  -H &quot;Content-Type: application/json&quot; \
  -d &apos;{
    &quot;user&quot;: {
      &quot;linuxUsername&quot;: &quot;&apos;&quot;${VM_USERNAME}&quot;&apos;&quot;,
      &quot;linuxPassword&quot;: &quot;&apos;&quot;${VM_PASSWORD}&quot;&apos;&quot;,
      &quot;windowsUsername&quot;: &quot;&apos;&quot;${VM_USERNAME}&quot;&apos;&quot;,
      &quot;windowsPassword&quot;: &quot;&apos;&quot;${VM_PASSWORD}&quot;&apos;&quot;
    }
  }&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Querying for infrastructure components&lt;/h2&gt;
&lt;p&gt;Your next step with the TF file is to query the HPE GreenLake provider to collect information needed to create your first VM instance. From the documentation, you can see that you need to gather the following information:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Cloud ID&lt;/li&gt;
&lt;li&gt;Group ID&lt;/li&gt;
&lt;li&gt;Layout ID&lt;/li&gt;
&lt;li&gt;Plan ID&lt;/li&gt;
&lt;li&gt;Instance type code&lt;/li&gt;
&lt;li&gt;Network ID&lt;/li&gt;
&lt;li&gt;Resource Pool ID&lt;/li&gt;
&lt;li&gt;Template ID&lt;/li&gt;
&lt;li&gt;Folder Code&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For this, you will use Terraform data statements. For example, the following statement retrieves the Cloud ID and stores it in a variable called cloud, which you can later reference as &lt;code&gt;data.hpegl_vmaas_cloud.cloud.id&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;# Retrieve cloud id
data &quot;hpegl_vmaas_cloud&quot; &quot;cloud&quot; {
     name = &quot;HPE GreenLake VMaaS Cloud&quot;
   }
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Using a similar technique, you can retrieve the rest of the data you need:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;# And a network
data &quot;hpegl_vmaas_network&quot; &quot;blue_segment&quot; {
     name = &quot;Blue-Segment&quot;
   }
 
data &quot;hpegl_vmaas_cloud_folder&quot; &quot;compute_folder&quot; {
   cloud_id = data.hpegl_vmaas_cloud.cloud.id
   name     = &quot;ComputeFolder&quot;
   }
 
# Locate a resource pool
data &quot;hpegl_vmaas_resource_pool&quot; &quot;cl_resource_pool&quot; {
     cloud_id = data.hpegl_vmaas_cloud.cloud.id
     name = &quot;gl-ftc06-G2i-vm-02&quot;
   }
 
# And a group
data &quot;hpegl_vmaas_group&quot; &quot;default_group&quot; {
  name = &quot;Default&quot;
}
 
# Locate a plan
data &quot;hpegl_vmaas_plan&quot; &quot;g2i_medium&quot; {
     name = &quot;G2i-medium&quot;
   }
 
# A layout
data &quot;hpegl_vmaas_layout&quot; &quot;vmware&quot; {
  name               = &quot;Vmware VM&quot;
  instance_type_code = &quot;vmware&quot;
}
 
# And a template
data &quot;hpegl_vmaas_template&quot; &quot;vanilla&quot; {
     name = &quot;redhat8-20220331T1850&quot;
   }
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can get information about each of the data statements supported by the hpegl provider from &lt;a href=&quot;https://github.com/hpe/terraform-provider-hpegl/tree/main/docs/data-sources&quot;&gt;GitHub.&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Creating VM resources&lt;/h2&gt;
&lt;p&gt;The next step is to use a Terraform resource statement to create a random integer (used in VM names) and a second resource to request the creation of several VM instances:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;resource &quot;random_integer&quot; &quot;random&quot; {
  min = 1
  max = 50000
}

resource &quot;hpegl_vmaas_instance&quot; &quot;my_HA_NFS&quot; {
     count              = 2 
     name               = &quot;drbd-${count.index}-${random_integer.random.result}&quot;
     hostname           = &quot;drbd-${count.index}-${random_integer.random.result}&quot;
     cloud_id           = data.hpegl_vmaas_cloud.cloud.id
     group_id           = data.hpegl_vmaas_group.default_group.id
     layout_id          = data.hpegl_vmaas_layout.vmware.id
     plan_id            = data.hpegl_vmaas_plan.g2i_medium.id
     instance_type_code = data.hpegl_vmaas_layout.vmware.instance_type_code
     network {
         id = data.hpegl_vmaas_network.blue_segment.id
     }
 
     volume {
         name         = &quot;root_vol&quot;
         size         = 50
         datastore_id = &quot;auto&quot;
     }
      volume {
         name         = &quot;drbd_vol&quot;
         size         = 50
         datastore_id = &quot;auto&quot;
     }

     config {
         resource_pool_id = data.hpegl_vmaas_resource_pool.cl_resource_pool.id
         template_id      = data.hpegl_vmaas_template.vanilla.id
         no_agent         = false
         asset_tag        = &quot;vm_terraform_sglx&quot;
         folder_code      = data.hpegl_vmaas_cloud_folder.compute_folder.code
         create_user      = true
     }
 
   }
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Finally, create a third VM to act as a Serviceguard quorum node:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;resource &quot;hpegl_vmaas_instance&quot; &quot;my_quorum&quot; {
     count           = 1 
     name            = &quot;drbd-${count.index}-qs-${random_integer.random.result}&quot;
     hostname        = &quot;drbd-${count.index}-qs-${random_integer.random.result}&quot;
     cloud_id        = data.hpegl_vmaas_cloud.cloud.id
     group_id        = data.hpegl_vmaas_group.default_group.id
     layout_id       = data.hpegl_vmaas_layout.vmware.id
     plan_id         = data.hpegl_vmaas_plan.g2i_medium.id
     instance_type_code = data.hpegl_vmaas_layout.vmware.instance_type_code
     network {
         id = data.hpegl_vmaas_network.blue_segment.id
     }
 
     volume {
         name         = &quot;root_vol&quot;
         size         = 50
         datastore_id = &quot;auto&quot;
     }

     config {
         resource_pool_id = data.hpegl_vmaas_resource_pool.cl_resource_pool.id
         template_id      = data.hpegl_vmaas_template.vanilla.id
         no_agent         = false
         asset_tag        = &quot;vm_terraform_sglx_quorum&quot;
         folder_code      = data.hpegl_vmaas_cloud_folder.compute_folder.code
         create_user      = true
     }
   }
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Three VMs need to be created to set up SGLX. Two VMs will be used as Serviceguard for Linux nodes where the NFS service will run. The third VM will act as a quorum server for the Serviceguard cluster, ensuring that a split-brain condition in the cluster does not impact the availability of the monitored workload.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;/em&gt; You can get information about each of the resource statements supported by the hpegl provider from &lt;a href=&quot;https://github.com/hpe/terraform-provider-hpegl/tree/main/docs/resources&quot;&gt;GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;/em&gt; An existing Serviceguard Quorum Server in your environment can be used instead of provisioning a third VM, provided the Quorum Server is reachable from the two VMs that were created.&lt;/p&gt;
&lt;h3&gt;Terraform init&lt;/h3&gt;
&lt;p&gt;Before you can use Terraform, you need to initialize it from the configuration file you have created. This is done with the following command:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;terraform init&lt;/code&gt;&lt;/p&gt;
&lt;h3&gt;Terraform ready to plan&lt;/h3&gt;
&lt;p&gt;To validate your configuration file, it is recommended to run the &lt;code&gt;terraform validate&lt;/code&gt; command. Once it passes, the &lt;code&gt;terraform plan&lt;/code&gt; command provides a summary of the deployment that would be built when &lt;code&gt;terraform apply&lt;/code&gt; is used.
Once you agree with the plan and confirm, you can apply the configuration.&lt;/p&gt;
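&lt;p&gt;A typical sequence looks like this (saving the plan to a file is optional, but it guarantees that the subsequent apply uses exactly the plan you reviewed):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Check the configuration for syntax and internal consistency
terraform validate

# Preview the changes and optionally save the plan to a file
terraform plan -out=tfplan
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you saved a plan file, you can later run &lt;code&gt;terraform apply tfplan&lt;/code&gt;; otherwise, the plain &lt;code&gt;terraform apply&lt;/code&gt; shown in the next step recomputes the plan and prompts for confirmation.&lt;/p&gt;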
&lt;h3&gt;Terraform ready to apply&lt;/h3&gt;
&lt;p&gt;The command you need to use is now:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;terraform apply
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will rerun the plan command, then prompt you to confirm before it starts building what is in the plan. Here is some sample output from the &lt;code&gt;terraform apply&lt;/code&gt; command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only &apos;yes&apos; will be accepted to approve.

  Enter a value: yes

hpegl_vmaas_instance.my_drbd[1]: Creating...
hpegl_vmaas_instance.my_quorum[0]: Creating...
hpegl_vmaas_instance.my_drbd[0]: Creating...
hpegl_vmaas_instance.my_quorum[0]: Still creating... [10s elapsed]
hpegl_vmaas_instance.my_drbd[1]: Still creating... [10s elapsed]
hpegl_vmaas_instance.my_drbd[0]: Still creating... [10s elapsed]

hpegl_vmaas_instance.my_drbd[1]: Creation complete after 2m8s [id=3105]
hpegl_vmaas_instance.my_drbd[0]: Creation complete after 2m8s [id=3111]
hpegl_vmaas_instance.my_quorum[0]: Creation complete after 2m8s [id=3108]

Apply complete! Resources: 4 added, 0 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once the command completes, your virtual machines are ready.&lt;/p&gt;
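&lt;p&gt;If you want to double-check what was created, you can ask Terraform for the resources it is now tracking. The resource address below comes from the example configuration in this post; adjust it to your own names:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# List every resource recorded in the Terraform state
terraform state list

# Inspect the attributes of one of the new instances, e.g. the quorum VM
terraform state show &apos;hpegl_vmaas_instance.my_quorum[0]&apos;
&lt;/code&gt;&lt;/pre&gt;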
&lt;h1&gt;Configuring a Highly Available NFS solution&lt;/h1&gt;
&lt;p&gt;Now that the VMs are provisioned, you can deploy HPE Serviceguard for Linux on them to create a cluster that provides high availability for the applications running on the VMs, in this case an NFS server.&lt;/p&gt;
&lt;h2&gt;Installing Serviceguard for Linux&lt;/h2&gt;
&lt;p&gt;Serviceguard and all its components can be installed using Ansible playbooks. Clone the repository on the Ansible control node:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;git clone https://github.com/HewlettPackard/serviceguard.git
cd serviceguard/ansible-sglx
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Check out the stable branch. For example, to check out branch 1.0:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;git checkout Stable-v1.0&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;To upgrade to the latest version of the playbooks:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;git pull https://github.com/HewlettPackard/serviceguard.git&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;The master playbook &lt;code&gt;site.yml&lt;/code&gt; contains the roles that will be executed for the inventory defined in the hosts file.
When the master playbook is run, the version specified in the parameters file will be installed. The parameters for the master playbook and its roles are configured in &lt;code&gt;group_vars/all.yml&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Now let&apos;s look at some of the fields in this file that need to be configured.
First, configure the version of Serviceguard to be installed; in this case, SGLX 15.10.00 will be installed.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;sglx_version : 15.10.00&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Provide the Serviceguard for Linux ISO location on the controller node.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;sglx_inst_upg_mode: iso
sglx_inst_upg_additional_params:
    ..
    iso_params:
        iso_location: &amp;#x3C;absolute path of the iso on ansible controller node&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Provide the Serviceguard Flex storage add-on ISO location on the control node.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;   storage_flex_add_on:
        install_upgrade: yes
        install_upg_mode: iso 
        iso_params: 
            iso_location: &amp;#x3C;absolute path of the iso on ansible controller node&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, install the Serviceguard NFS add-on:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;sglx_add_on_inst_upg_params:
    sglx_addon: nfs
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Serviceguard installation mandates a replicated user configuration. As part of the installation, a replicated user for Serviceguard Manager (sgmgr) is created on the hosts, and its password can be configured under the parameter below.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;sglx_sgmgr_password: &quot;{{ vault_sglx_sgmgr_password }}&quot;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Ansible Vault is used to encrypt this password. Generate the encrypted value by running the following command:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;ansible-vault encrypt_string &apos;your_password&apos; --name &apos;vault_sglx_sgmgr_password&apos;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;The generated output must be substituted in, as shown below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;vault_sglx_sgmgr_password: !vault |
          $ANSIBLE_VAULT;1.1;AES256
          34363834323266326237363636613833396665333061653138623431626261343064373363656165
          6639383863383633643035656336336639373161323663380a303331306337396435366535313663
          31336636333862303462346234336138393135393363323739633661653534306162323565646561
          6662396366333534350a663033303862646331613765306433353632316435306630343761623237
          3863
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once these parameters are populated, you can modify the hosts file to add the two VMs provisioned earlier, where the cluster will be formed, as well as the quorum server. In this case, the file looks like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;[sglx-storage-flex-add-on-hosts]
drbd-0-808
drbd-1-808
[sglx-cluster-hosts:children]
sglx-storage-flex-add-on-hosts 
[quorum-server-hosts]
drbd-0-qs-808
[primary]
drbd-0-808
[secondary]
drbd-1-808
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When the parameters specified above are configured, the &lt;code&gt;site.yml&lt;/code&gt; playbook can be run from the directory where the repository was cloned on the Ansible control node.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;cd serviceguard/ansible-sglx
ansible-playbook -i hosts -v --vault-password-file &amp;#x3C;path_to_vault_password_file&gt; site.yml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This completes the Serviceguard software installation.&lt;/p&gt;
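&lt;p&gt;As an optional sanity check (purely illustrative; adjust the inventory group and query to your environment), an Ansible ad-hoc command can confirm that the Serviceguard packages are present on the cluster nodes:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Ad-hoc check from the control node: list Serviceguard RPMs on all cluster hosts
ansible -i hosts sglx-cluster-hosts -b -m shell -a &quot;rpm -qa | grep -i serviceguard&quot;
&lt;/code&gt;&lt;/pre&gt;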
&lt;h2&gt;Configuring data replication using Serviceguard flex Storage Add-on&lt;/h2&gt;
&lt;p&gt;Serviceguard for Linux Flex Storage Add-on is a software-based, shared-nothing, replicated storage solution that mirrors the content of block devices. NFS server export data will be replicated to all Serviceguard cluster nodes using this add-on. The Ansible snippet below can be used to configure the replication.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;- hosts: sglx-storage-flex-add-on-hosts
  tasks:
  - name: Populate /etc/drbd.d/global_common.conf file
    become: True
    shell: |
      echo &quot;global { usage-count yes; } common { handlers { disconnected /usr/local/cmcluster/conf/scripts/sgenss/replication_software/drbd/drbd_disconnect_handler.sh; } options { auto-promote no; } }&quot; &gt; /etc/drbd.d/global_common.conf

  - name: Create first part of drbd config file
    become: True
    shell: |
      echo &quot;resource drbd0 {&quot; &gt;/etc/drbd.d/drbd0.res
      echo &quot;    disk /dev/sdb;&quot; &gt;&gt; /etc/drbd.d/drbd0.res
      echo &quot;    device /dev/drbd0;&quot; &gt;&gt; /etc/drbd.d/drbd0.res
      echo &quot;    meta-disk internal;&quot; &gt;&gt; /etc/drbd.d/drbd0.res

  - name: Create second part of drbd config file
    become: True
    vars:
      this_nodename: &quot;{{ hostvars[item][&apos;ansible_hostname&apos;] }}&quot;
    shell: |
        echo &quot;    on {{ this_nodename }} {&quot; &gt;&gt; /etc/drbd.d/drbd0.res
        echo &quot;      address {{ item }}:7789;&quot; &gt;&gt; /etc/drbd.d/drbd0.res
        echo &quot;      node-id {{ my_index }};&quot; &gt;&gt; /etc/drbd.d/drbd0.res
        echo &quot;    }&quot; &gt;&gt; /etc/drbd.d/drbd0.res
    loop: &quot;{{ ansible_play_batch }}&quot;
    loop_control:
      index_var: my_index

  - name: Set initial empty mesh list
    set_fact:
      mesh: &quot;&quot;

  - name: Build list of nodes for connection-mesh entry
    loop: &quot;{{ ansible_play_batch }}&quot;
    set_fact:
      mesh: &quot;{{ mesh + hostvars[item][&apos;ansible_hostname&apos;] + &apos; &apos; }}&quot;

  - name: Check mesh nodes
    debug: var=mesh

  - name: Create connection-mesh portion of config file
    become: True
    shell: |
      echo &quot;    connection-mesh {&quot; &gt;&gt; /etc/drbd.d/drbd0.res
      echo &quot;      hosts {{ mesh|trim }};&quot; &gt;&gt; /etc/drbd.d/drbd0.res
      echo &quot;    }&quot; &gt;&gt; /etc/drbd.d/drbd0.res

  - name: Create last part of drbd config file
    become: True
    shell: |
      echo &quot;}&quot; &gt;&gt; /etc/drbd.d/drbd0.res

  - name: Create drbd0 device
    become: True
    shell: |
      drbdadm create-md drbd0
    when: res.rc != 0

  - name: Start DRBD service
    become: True
    systemd:
      name: drbd
      enabled: True
      state: started

- hosts: primary
  tasks:
  - name: Enable this node as Primary
    become: True
    shell: |
      drbdadm primary drbd0 --force
    when: res.rc != 0
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Configuring LVM&lt;/h2&gt;
&lt;p&gt;Once data replication is configured on the nodes, you can configure LVM on top of the DRBD disk &lt;em&gt;/dev/drbd0&lt;/em&gt;. The following Ansible snippet can be used to configure an LVM volume group named nfsvg and a logical volume named nfsvol of size 45 GB.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
- hosts: sglx-storage-flex-add-on-hosts
  tasks:
  - name: Modify lvm configuration
    become: True
    lineinfile:
      path: /etc/lvm/lvm.conf
      regexp: &quot;# volume_list&quot;
      line: volume_list=[&quot;nfsvg&quot;,&quot;nfsvg/nfsvol&quot;,&quot;@tag1&quot;,&quot;@*&quot;]
      state: present
      backup: True
  - name: reject disk in lvm configuration
    become: True
    lineinfile:
      path: /etc/lvm/lvm.conf
      regexp: &quot;.*/dev/cdrom.*&quot;
      line: &apos;      filter = [ &quot;r|/dev/sdb|&quot;, &quot;a|/dev/drbd0|&quot; ] &apos;
      state: present
      backup: True

- hosts: primary 
  tasks:
  - name: Create a volume group on /dev/drbd0
    become: True
    lvg:
      vg: nfsvg
      pvs: /dev/drbd0

  - name: create logical volume for nfs
    become: True
    lvol:
      vg: nfsvg
      lv: nfsvol
      size: 45g
      force: True

  - name: Format filesystem
    become: True
    filesystem:
      dev: /dev/nfsvg/nfsvol 
      fstype: xfs
&lt;/code&gt;&lt;/pre&gt;
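&lt;p&gt;If you would like to verify the result, the standard LVM commands (run on the primary node) should show the new volume group and logical volume; this is only an illustrative check:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# On the primary node: confirm the physical volume, volume group, and logical volume
sudo pvs /dev/drbd0
sudo vgs nfsvg
sudo lvs nfsvg
&lt;/code&gt;&lt;/pre&gt;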
&lt;h2&gt;Setting up the NFS server&lt;/h2&gt;
&lt;p&gt;Now start the NFS service and export the NFS share from the primary node using the Ansible snippet below.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
- hosts: sglx-storage-flex-add-on-hosts
  tasks:
  - name: Install NFS Server and related components
    become: True
    ansible.builtin.yum:
      name:
        - nfs-utils
      state: present
    ignore_errors: True
  
  - name: Enable NFS related services
    become: True
    systemd:
      name: &quot;{{ item }}&quot;
      enabled: True
    with_items:
      - rpcbind
      - nfs-server

  - name: Start NFS related services
    become: True
    systemd:
      name: &quot;{{ item }}&quot;
      state: started
    with_items:
      - rpcbind
      - nfs-server

  - name: Add /etc/exports entry and create NFS mount point
    become: True
    shell: |
       mkdir -p /nfs
       chmod go+rwx /nfs
       echo &apos;/nfs *(rw,sync,no_root_squash)&apos; &gt; /etc/exports
      
- hosts: primary 
  tasks:
    - name: mount nfs on primary
      become: True
      shell: |
         mount /dev/nfsvg/nfsvol /nfs
         exportfs -a
&lt;/code&gt;&lt;/pre&gt;
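&lt;p&gt;At this point you can optionally confirm the export before clustering it. The host name below (&lt;code&gt;drbd-0-808&lt;/code&gt;) is the example primary node from the hosts file; substitute your own:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# From a machine that can reach the primary node, list its NFS exports
showmount -e drbd-0-808

# Optional test mount from an NFS client
sudo mkdir -p /mnt/nfs-test
sudo mount -t nfs drbd-0-808:/nfs /mnt/nfs-test
&lt;/code&gt;&lt;/pre&gt;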
&lt;h2&gt;Creating an SGLX cluster and providing HA to the NFS workload&lt;/h2&gt;
&lt;p&gt;Once the NFS share is configured, you can create an SGLX cluster and deploy the NFS workload in it to make it highly available. The snippet below will help you achieve this.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
- hosts: primary
  tasks:
  - name: Build string of primary nodes
    set_fact:
      primary_nodes: &quot;{{ primary_nodes  | default (&apos;&apos;) + &apos; -n &apos; + hostvars[item].ansible_hostname }}&quot;
    with_items: 
      - &quot;{{ groups[&apos;primary&apos;] }}&quot; 

  - name: Build string of secondary nodes
    set_fact:
      secondary_nodes: &quot;{{ secondary_nodes  | default (&apos;&apos;) + &apos; -n &apos; + hostvars[item].ansible_hostname }}&quot;
    with_items:
      - &quot;{{ groups[&apos;secondary&apos;] }}&quot;

  - name: Build string of quorum nodes
    set_fact:
      quorum_nodes: &quot;{{ quorum_nodes  | default (&apos;&apos;) + &apos; -q &apos; + hostvars[item].ansible_hostname }}&quot;
    with_items:
      - &quot;{{ groups[&apos;quorum-server-hosts&apos;] }}&quot;

  - name: Run cmdeploycl command
    become: True
    ansible.builtin.expect:
      command: &quot;$SGSBIN/cmdeploycl {{ primary_nodes }} {{secondary_nodes }} {{ quorum_nodes }}&quot; 
      responses:
        password: &quot;{{ root_pass }}&quot;
      timeout: 300
  - name: Update cluster config
    become: True
    shell: |
      rm -rf /tmp/cluster.txt
      $SGSBIN/cmgetconf &gt; /tmp/cluster.txt
      echo &quot;GENERIC_RESOURCE_NAME CGR_SGeNSS_drbd&quot; &gt;&gt; /tmp/cluster.txt
      echo &quot;GENERIC_RESOURCE_TYPE simple&quot; &gt;&gt; /tmp/cluster.txt
      echo &quot;GENERIC_RESOURCE_CMD $SGSBIN/scripts/sgenss/replication_software/drbd/cluster_generic_resource.sh&quot; &gt;&gt; /tmp/cluster.txt
      echo &quot;GENERIC_RESOURCE_SCOPE node&quot; &gt;&gt; /tmp/cluster.txt
      echo &quot;GENERIC_RESOURCE_RESTART none&quot; &gt;&gt; /tmp/cluster.txt
      echo &quot;GENERIC_RESOURCE_HALT_TIMEOUT 10000000&quot; &gt;&gt; /tmp/cluster.txt
      
  - name: Run cmapplyconf command
    become: True
    shell: |
      $SGSBIN/cmapplyconf -v -C /tmp/cluster.txt -f
      
  - name: Create a DRBD and NFS package
    become: True
    shell: |
      rm -rf /tmp/nfs_drbd.conf
      $SGSBIN/cmmakepkg -m sgenss/rf_drbd -m tkit/nfs/nfs /tmp/nfs_drbd.conf

  - name: update the drbd resource name
    become: True
    replace:
      path: /tmp/nfs_drbd.conf
      regexp: &quot;{{ item.regexp }}&quot;
      replace: &quot;{{ item.rep }}&quot;
    with_items: 
      - { regexp: &apos;res0&apos;, rep: &apos;drbd0&apos;}

  - name: Make change to package configuration
    become: True
    lineinfile:
      path: /tmp/nfs_drbd.conf
      regexp: &quot;{{ item.regexp }}&quot;
      line: &quot;{{ item.line }}&quot;
      state: present
    with_items:
      - { regexp: &apos;^package_name&apos;, line: &apos;package_name          nfs_drbd&apos;}
      - { regexp: &apos;^#vg&apos;, line: &apos;vg     nfsvg&apos;}
      - { regexp: &apos;^tkit/nfs/nfs/XFS&apos;, line: &apos;tkit/nfs/nfs/XFS  &quot;-o rw,sync,no_root_squash *:/nfs&quot;&apos;}
      - { regexp: &apos;^tkit/nfs/nfs/QUOTA_MON&apos;, line: &apos;tkit/nfs/nfs/QUOTA_MON      no&apos;}

  - name: Add additional NFS configuration
    become: True
    lineinfile:
      path: /tmp/nfs_drbd.conf
      insertafter: EOF
      line: |
        fs_name /dev/nfsvg/nfsvol
        fs_directory /nfs
        fs_type &quot;xfs&quot;
        fs_mount_opt &quot;-o rw&quot;
        ip_subnet 10.10.180.0
        ip_address 10.10.180.99

  - name: check the package and apply it
    become: True
    shell: |
      $SGSBIN/cmcheckconf -P /tmp/nfs_drbd.conf
      $SGSBIN/cmapplyconf -P /tmp/nfs_drbd.conf -f

  - name: enable the package
    become: True
    shell: |
      $SGSBIN/cmmodpkg -e nfs_drbd
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The NFS server is now deployed in a Serviceguard cluster with high availability.&lt;/p&gt;
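&lt;p&gt;As a final, illustrative check, you can view the package status with Serviceguard&apos;s &lt;code&gt;cmviewcl&lt;/code&gt; command and mount the share from an NFS client through the package&apos;s floating IP address (10.10.180.99 in the configuration above), so the mount keeps working across a failover; the exact commands below are a sketch and should be adapted to your environment:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# On a cluster node: confirm the cluster is up and the nfs_drbd package is running
$SGSBIN/cmviewcl -v -p nfs_drbd

# On an NFS client: mount the share through the package&apos;s floating IP
sudo mount -t nfs 10.10.180.99:/nfs /mnt
&lt;/code&gt;&lt;/pre&gt;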
&lt;h1&gt;Conclusion&lt;/h1&gt;
&lt;p&gt;In this blog post, you learned how to use tools like Terraform and Ansible to easily provision and deploy a highly available NFS server solution with Serviceguard for Linux on an HPE GreenLake for Private Cloud Enterprise environment.&lt;/p&gt;
&lt;p&gt;Serviceguard for Linux offers high availability and disaster recovery solutions for various applications such as SAP, Oracle, SQL Server on Linux, and EDB Postgres. For more information regarding these solutions, refer to the HPE Serviceguard for Linux operational guide for workloads and solutions available at &lt;a href=&quot;https://www.hpe.com/info/linux-serviceguard-docs&quot;&gt;https://www.hpe.com/info/linux-serviceguard-docs&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Check back on the &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE Developer Community blog&lt;/a&gt; for more articles on HPE GreenLake for Private Cloud.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE GreenLake for Private Cloud Enterprise: Scaling and orchestrating modern applications for the enterprise]]></title><description><![CDATA[In my blog post on HPE GreenLake for Private Cloud Enterprise: Deploying and scaling traditional applications, I highlighted how HPE…]]></description><link>https://developer.hpe.com/hpe-greenlake-for-private-cloud-enterprise-scaling-and-orchestrating-modern-applications-for-the-enterprise/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-greenlake-for-private-cloud-enterprise-scaling-and-orchestrating-modern-applications-for-the-enterprise/</guid><pubDate>Thu, 19 Oct 2023 14:34:37 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;p&gt;In my blog post on &lt;a href=&quot;https://developer.hpe.com/blog/hpe-greenlake-for-private-cloud-enterprise-glpce-deploying-and-scaling-traditional-applications/&quot;&gt;HPE GreenLake for Private Cloud Enterprise: Deploying and scaling traditional applications&lt;/a&gt;, I highlighted how HPE GreenLake for Private Cloud Enterprise seamlessly integrates traditional applications with modern demands, transforming infrastructure into programmable code for optimal flexibility and security. Its strategic approach to scalability ensures businesses consistently operate at their best, making applications resilient to ever-changing requirements.
In this post, I&apos;ll delve into the features and capabilities of HPE GreenLake for Private Cloud Enterprise, with a specific focus on its support for scaling containers using Kubernetes (K8s). Let&apos;s explore the advancements and offerings of the HPE GreenLake for Private Cloud Enterprise platform.&lt;/p&gt;
&lt;h1&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/h1&gt;
&lt;p&gt;In today’s rapidly evolving digital landscape, it&apos;s imperative for enterprises to have agile and secure solutions that can effortlessly adapt to emerging trends. HPE GreenLake for Private Cloud Enterprise: Containers (&quot;containers service&quot;) offers a robust solution tailored to meet these specific demands. Containers service efficiently adjusts resources based on changing workloads, ensuring optimal use. This flexibility eliminates unexpected costs, allowing businesses to pay only for the resources they use. A unified dashboard makes this process even more transparent and manageable.&lt;/p&gt;
&lt;p&gt;HPE GreenLake for Private Cloud Enterprise enables seamless integration between on-premises infrastructure and cloud platforms. This capability ensures that, as workloads move or scale between these environments, the performance remains consistent, providing a dependable experience for users. It&apos;s not just about meeting current needs; it&apos;s about anticipating the future. By aligning with the latest standards and supporting innovative architectural trends, HPE GreenLake for Private Cloud Enterprise positions businesses at the cutting edge of technology. In essence, containers service is a harmonious blend of traditional and modern needs, paving a clear path for businesses to move confidently into the future with unmatched agility and security.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Kubernetes (K8s) on HPE GreenLake Private Cloud Enterprise&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;HPE GreenLake for Private Cloud Enterprise provides Kubernetes deployment via its container services. Notably, it&apos;s a CNCF-compliant K8s distribution, ensuring adherence to industry standards.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Key Points:&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;On-premises Kubernetes: It is designed for on-premises deployments with the ability to scale based on business needs.&lt;/li&gt;
&lt;li&gt;Ready to use: The platform is pre-configured, reducing setup time.&lt;/li&gt;
&lt;li&gt;Performance and security: It provides consistent performance and security for operations.&lt;/li&gt;
&lt;li&gt;Quick Kubernetes start: The platform features one-click provisioning that gets Kubernetes operations running fast.&lt;/li&gt;
&lt;li&gt;Central Kubernetes management: Its management console allows for easy control over Kubernetes clusters.&lt;/li&gt;
&lt;li&gt;Pay-as-you-go pricing: It uses a flexible pricing model where businesses pay for what they use.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Containers service stands out by delivering the security, efficiency, and cost-effectiveness modern businesses seek, maximizing resource use. In tandem with Kubernetes, containers service provides a smooth and managed infrastructure. Enhanced by HPE Ezmeral Runtime Enterprise, it gives businesses a refined platform for deploying applications, ensuring scalability, and streamlined operations.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Under the hood – A fungible infrastructure resource pool&lt;/strong&gt;&lt;br&gt;
As discussed in &lt;a href=&quot;https://developer.hpe.com/blog/hpe-greenlake-for-private-cloud-enterprise--Exploring-a-flexible-infrastructure-resource-pool/&quot;&gt;Part 1&lt;/a&gt; of this blog series, HPE GreenLake for Private Cloud Enterprise integrates an adaptable infrastructure that leverages the strengths of its infrastructure resource pool and detailed capacity planning. This approach not only addresses present business requirements but also anticipates future needs. Within HPE GreenLake for Private Cloud Enterprise, the resource pool offers flexibility, letting businesses choose container deployments on bare metal, virtual machines, or a combination of both, based on their specific demands.
Diving into the heart of HPE GreenLake for Private Cloud Enterprise: Containers, we encounter two primary deployment models for Kubernetes: on virtual machines and on bare metal.&lt;/p&gt;
&lt;p&gt;Kubernetes on bare metal:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Tailored for expansive, consistent deployments where performance is paramount.&lt;/li&gt;
&lt;li&gt;Efficiency takes center stage, devoid of VM-associated costs.&lt;/li&gt;
&lt;li&gt;Simplified management structure cuts down on overhead.&lt;/li&gt;
&lt;li&gt;Perfect for business-critical applications or those where speed and responsiveness are crucial, like financial or imaging apps.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Kubernetes on virtual machines:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Delivers optimal performance in diverse environments, making it an ideal choice for development.&lt;/li&gt;
&lt;li&gt;Flexibility in deployments with the convenience of having mixed workloads run simultaneously.&lt;/li&gt;
&lt;li&gt;Typical applications include general-purpose web apps and event-driven microservices.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In essence, whether your workload requires the adaptable environment of VMs or the robust power of bare metal, containers service ensures your Kubernetes deployments are optimized to your unique needs.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;K8s in Action on HPE GreenLake for Private Cloud Enterprise&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;In the evolving landscape of enterprise IT, scalability isn&apos;t just a luxury; it&apos;s a necessity. With HPE GreenLake for Private Cloud Enterprise and Kubernetes at its helm, businesses are equipped to meet dynamic demands head-on. When administrators first work with HPE GreenLake, they can manually allocate resources based on anticipated needs. But as traffic unpredictably rises, Kubernetes springs into action, enabling it to scale.
Kubernetes, with its sophisticated orchestration capabilities, monitors workloads in real-time on HPE GreenLake for Private Cloud Enterprise. Should traffic surge unexpectedly, Kubernetes autonomously scales containers, ensuring optimal performance without overburdening resources.
The combined prowess of manual fine-tuning with Kubernetes&apos; automated scalability represents the future-forward approach of HPE GreenLake for Private Cloud Enterprise, promising enterprises reliability, efficiency, and adaptability all in one package.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Kubernetes Cluster blueprints:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-4-creation-of-kubernetes-cluster-blueprint.png&quot; alt=&quot;Screenshot 1: Creation of Kubernetes cluster blueprint&quot; title=&quot;Screenshot 1: Creation of Kubernetes cluster blueprint&quot;&gt;&lt;/p&gt;
&lt;p&gt;In screenshot 1, Kubernetes cluster blueprints in containers service serve as templates to simplify cluster deployments, including K8s setups. They set configurations like the Kubernetes version, storage class, and node details. Blueprints ensure consistent deployments and allow users to use standard templates or create their own for specific needs. Let me show you a concise demo on how to effortlessly configure clusters and optimize auto-scaling parameters!&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;How to create K8s clusters and scale worker nodes (up or down) in a running cluster&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-2-k8s-cluster-creation-and-configuring-scaled-worker-nodes.png&quot; alt=&quot;Screenshot 2: K8s Cluster creation and configuring scaled worker nodes&quot; title=&quot;Screenshot 2: K8s Cluster creation and configuring scaled worker nodes&quot;&gt;&lt;/p&gt;
&lt;p&gt;In screenshot 2, I created a cluster. I selected configurations and resources, set parameters, and the containers service provisioned and initialized the cluster.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Scaling worker nodes in containers service&lt;/strong&gt;: With containers service, you can scale worker nodes based on workload. You just increase or decrease the number of worker nodes in a running cluster to align with resource requirements.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Autoscaler in Kubernetes clusters&lt;/strong&gt;: The Autoscaler adjusts the Kubernetes cluster&apos;s size based on specific conditions. It scales up when pods can&apos;t be scheduled due to resource limitations and scales down if nodes are underutilized for over 10 minutes.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Key points:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Scaling range: Defined by a minimum and maximum node count. The range is between 1 and 200 nodes. By default, autoscaling is off with equal min-max values.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Scaling criteria&lt;/strong&gt;:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Pending pods due to limited resources trigger a scale-up.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Underutilized nodes for 10 minutes prompt a scale-down, but certain conditions can prevent this:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Recent scale-up within 10 minutes&lt;/li&gt;
&lt;li&gt;Failed scale-down in the last 3 minutes&lt;/li&gt;
&lt;li&gt;Nodes with critical pods or those facing specific constraints&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Autoscaler operates from the control plane, checking conditions every minute.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-3-scale-cluster.png&quot; alt=&quot;Screenshot 3: Scale cluster&quot; title=&quot;Screenshot 3: Scale cluster&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;How to configure autoscaling:&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Ensure the cluster is ready and familiarize yourself with autoscaler guidelines.&lt;/li&gt;
&lt;li&gt;Navigate to HPE GreenLake for Private Cloud Enterprise &gt; Containers &gt; Selected Cluster.&lt;/li&gt;
&lt;li&gt;Choose Scale from Actions.&lt;/li&gt;
&lt;li&gt;Set the min-max node count values to either enable or disable autoscaling as shown in Screenshot 3.&lt;/li&gt;
&lt;li&gt;Optionally, add or remove node pools.&lt;/li&gt;
&lt;li&gt;To see the autoscaler logs, utilize the given kubectl command.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;Conclusion – The perfect synergy&lt;/strong&gt;&lt;br&gt;
HPE GreenLake for Private Cloud Enterprise, in conjunction with Kubernetes, addresses a broad spectrum of enterprise applications. Whether dealing with brownfield applications that have evolved over time or greenfield applications that are freshly developed, this combination ensures seamless integration and deployment.&lt;/p&gt;
&lt;p&gt;Containers service’s ability to scale resources up or down based on workload demands ensures that businesses can respond effectively to varying operational requirements.
Additionally, the integrated framework provides a secure stack, reinforcing infrastructure integrity, governance, compliance, and application security. In essence, the union of HPE GreenLake for Private Cloud Enterprise and Kubernetes provides a comprehensive solution that caters to both existing and new enterprise applications, fostering a flexible, responsive, and secure environment.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Announcing Chapel 1.32!]]></title><description><![CDATA[E﻿xternal blog]]></description><link>https://developer.hpe.com/announcing-chapel-1-32/</link><guid isPermaLink="false">https://developer.hpe.com/announcing-chapel-1-32/</guid><pubDate>Thu, 05 Oct 2023 17:16:19 GMT</pubDate><content:encoded>&lt;p&gt;E﻿xternal blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Discover the power of data center monitoring using Redfish telemetry and cloud-native tooling]]></title><description><![CDATA[Monitoring data center infrastructure is critical to ensuring optimal performance, resource utilization, and timely issue detection. Redfish…]]></description><link>https://developer.hpe.com/discover-the-power-of-data-center-monitoring-using-redfish-telemetry-and-cloud-native-tooling/</link><guid isPermaLink="false">https://developer.hpe.com/discover-the-power-of-data-center-monitoring-using-redfish-telemetry-and-cloud-native-tooling/</guid><pubDate>Thu, 05 Oct 2023 17:03:36 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;p&gt;Monitoring data center infrastructure is critical to ensuring optimal performance, resource utilization, and timely issue detection. Redfish, an open industry standard for hardware management, provides a standardized way to access telemetry data from servers and other devices. Coupling Redfish telemetry with cloud-native monitoring tools offers a robust solution for real-time monitoring, data analysis, visualization, and alerting.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Why does data center monitoring matter?&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Data centers are the backbone of modern businesses, housing critical applications, databases, and services. Ensuring the seamless operation of these data centers is essential for business continuity and meeting the demands of today&apos;s digital world. Data center monitoring helps with the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Performance optimization:&lt;/strong&gt; Monitoring helps identify bottlenecks, inefficiencies, and potential failures, allowing for proactive optimization.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Resource utilization:&lt;/strong&gt; Tracking resource usage ensures that capacity is allocated effectively, saving costs and energy.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Predictive maintenance:&lt;/strong&gt; Real-time insights enable predictive maintenance, reducing downtime and associated costs.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Compliance:&lt;/strong&gt; Monitoring helps meet regulatory requirements by maintaining accurate records and ensuring security.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;&lt;strong&gt;A comprehensive approach&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;My latest technical whitepaper, &lt;em&gt;“&lt;a href=&quot;https://www.hpe.com/psnow/doc/a00134351enw&quot;&gt;Data center monitoring using Redfish telemetry and cloud-native tooling&lt;/a&gt;”&lt;/em&gt;, presents a comprehensive approach to data center monitoring by integrating the Redfish telemetry with cloud-native open-source tools including Telegraf, Prometheus, Alertmanager, and Grafana. These tools work seamlessly together to provide a holistic view of your data center infrastructure. The telemetry data source for this stack is HPE Integrated Lights-Out (iLO) which exposes metrics via the Redfish interface.&lt;/p&gt;
&lt;p&gt;Here&apos;s a glimpse of what you&apos;ll discover in the whitepaper:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Understanding Redfish telemetry:&lt;/strong&gt; Learn about the Redfish standard and how it simplifies hardware telemetry data collection.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Cloud-native tools:&lt;/strong&gt; Explore the capabilities of Telegraf, Prometheus, Alertmanager, and Grafana and how they enhance data center monitoring.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Architecture:&lt;/strong&gt; Dive into the architecture of this integrated solution, detailing how data flows from the source to the visualization layer.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Setup process:&lt;/strong&gt; Follow a step-by-step guide on how to set up this monitoring solution in your data center.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Benefits:&lt;/strong&gt; Understand the benefits of this approach, including real-time monitoring, scaling, and customizable visualization.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;&lt;strong&gt;Take the next step&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Read my whitepaper to discover how the integration of the Redfish telemetry service with cloud-native tools can transform your data center monitoring, making it not only efficient but also highly insightful, and dive into the world of data center monitoring with HPE iLO today! Keep coming back to the &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE Developer blog&lt;/a&gt; for more articles on HPE iLO.&lt;/p&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;p&gt;In my first blog post on &lt;a href=&quot;https://developer.hpe.com/blog/hpe-greenlake-for-private-cloud-enterprise--Exploring-a-flexible-infrastructure-resource-pool/&quot;&gt;HPE GreenLake for Private Cloud Enterprise: Exploring a flexible infrastructure resource pool&lt;/a&gt; I underscored the flexible nature of HPE GreenLake for Private Cloud Enterprise infrastructure. Its transparent cost analytics stand out as a significant benefit, aiding organizations in making well-informed financial and infrastructural choices. In our present digital era, having the right infrastructure is imperative, a need that HPE GreenLake for Private Cloud Enterprise aptly fulfills through its robust support for automated tools like Terraform and Ansible, ensuring seamless and efficient infrastructure management and scaling.&lt;/p&gt;
&lt;p&gt;In this second segment, I will delve further into the enhanced features of HPE GreenLake for Private Cloud Enterprise: Virtual Machines. This exploration will highlight the substantial support these services extend to modern businesses. By demystifying the intricacies of deploying and efficiently scaling traditional applications, HPE GreenLake for Private Cloud Enterprise stands as a pivotal ally for contemporary enterprise operations.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In the current business landscape, organizations face the intricate task of integrating their applications, built on traditional time-tested and trusted technology and processes, like three-tier architectures, with the rapid pace and adaptability of modern applications, like containers and serverless computing. These older technologies have proven their reliability and effectiveness over time, forming the backbone of many enterprise systems. Meanwhile, applications have become modernized, emphasizing scalability, agility, and efficiency, and often relying on cloud-native architectures and services to deliver seamless, robust functionality.&lt;/p&gt;
&lt;p&gt;Navigating the integration of these disparate systems requires a thoughtful approach, balancing the stability and reliability of traditional architectures with the innovative features of modern application development and deployment strategies. This blending ensures organizations can harness the full potential of both technological paradigms to drive operational excellence and competitive advantage in the marketplace. Addressing this challenge, the HPE GreenLake for Private Cloud Enterprise: Virtual Machines serves as a comprehensive platform. It is designed to support the simultaneous operation and integration of both traditional and modern application types.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Traditional architecture&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Figure 1 illustrates a three-tier architecture and highlights the pivotal roles its components play in keeping the application responsive, engaging users, and processing requests efficiently. The architecture is delineated into the Load Balancer, the web-mobile tier, the application/business logic tier, and the database tier, each playing a significant role in ensuring the smooth and efficient functioning of the application.
Amidst this backdrop, HPE GreenLake for Private Cloud Enterprise emerges as a significant facilitator for deploying traditional applications. It streamlines the complexities intertwined with deployment, enhancing the synchronization between the different architectural tiers.&lt;/p&gt;
&lt;p&gt;The real advantage that HPE GreenLake for Private Cloud Enterprise delivers lies in its ability to deploy essential components like Tier 0 (top or external tier) and Tier 1 (middle or distribution tier) routers, load balancers, and firewalls. This ability enables customers to seamlessly establish a three-tier application in the private cloud environment, ensuring robustness and efficiency in their operations. Beyond this, HPE GreenLake for Private Cloud Enterprise empowers customers to construct a comprehensive blueprint of a three-tier application, leveraging it for consistent and repeatable deployments. This approach not only simplifies deployment challenges but also enhances the efficiency and reliability of deploying traditional three-tier applications in a cloud environment.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/a-traditional-n-tier-application-to-be-hosted-on-virtual-machine-service.png&quot; alt=&quot;Figure 1: A Traditional n-tier application to be hosted on Virtual Machine Service&quot; title=&quot;Figure 1: A Traditional n-tier application to be hosted on Virtual Machine Service&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;span style=&quot;color:grey; font-family:Arial; font-size:1em&quot;&gt;Figure 1: A Traditional n-tier application to be hosted on Virtual Machine Service &lt;/span&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;p&gt;By integrating these capabilities, HPE GreenLake for Private Cloud Enterprise stands out as a solution that not only supports the deployment of traditional applications but also underscores the advancements in modern applications, ensuring seamless cloud transitions and robust integration. It adeptly positions enterprises to navigate the intricate waters of technological evolution, ensuring they remain at the forefront of innovation and operational efficiency.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Deploying and scaling traditional applications&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The Virtual Machine service enables the utilization of virtual machine infrastructure through code, offering support for tools like Terraform and Ansible. This promotes rapid deployments, uniformity, and enhanced scalability. It also strengthens security and consistency by reducing manual steps, priming businesses for efficient digital asset management.
In this blog post, I’ll help you navigate the HPE GreenLake for Private Cloud Enterprise console to initialize assets for hosting and deploying code. I’ll begin by setting up web layer instances of the compute-optimized (c2i) type and scaling them out twice based on memory thresholds. Using an Ansible playbook, I’ll automate tasks and incorporate a load balancer to prevent any single point of failure. Although the focus is on port 80 for HTTP, options for port 443, SSL certificates, and other TCP ports are available for diverse cluster scaling. Screenshots will illustrate instances launching as scaling thresholds are reached.&lt;/p&gt;
&lt;p&gt;Next, I will explore the two key phases of application deployment and scaling:&lt;/p&gt;
&lt;h3&gt;1. Design and Development:&lt;/h3&gt;
&lt;p&gt;In this section, I guide you through managing groups and establishing scaling parameters for CPU and memory. During the Design and Development phase, initiate hosting via the HPE GreenLake for Private Cloud Enterprise console and configure web layer instances using the c2i type. With Ansible&apos;s help, introduce threshold-based scaling and incorporate a load balancer for uninterrupted service. While we&apos;ll emphasize port 80, several other ports are accessible as well.&lt;/p&gt;
&lt;h3&gt;2. Runtime&lt;/h3&gt;
&lt;p&gt;In the Runtime phase, as instances reach scaling thresholds, they are activated and tracked. For detailed insights, I provide a series of screenshots in this post.&lt;/p&gt;
&lt;p&gt;Now, let&apos;s begin by logging into the HPE GreenLake for Private Cloud Enterprise console and then heading to the Virtual Machines section displayed on the dashboard.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Design and development phase&lt;/strong&gt;:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Creating a Group: Start by accessing the Virtual Machines link on the dashboard. Click on &lt;strong&gt;Launch Service console&lt;/strong&gt; and then select &lt;strong&gt;Groups&lt;/strong&gt; from the &lt;strong&gt;Infrastructures&lt;/strong&gt; dropdown. Here, initiate the creation of a designated group, as illustrated in screenshot 1. This approach allows you to bundle associated resources, ensuring streamlined management and clear oversight.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/managing-groups-and-setting-scaled-thresholds-for-cpu-and-memory.png&quot; alt=&quot;Screenshot 1: Managing groups and setting scaled thresholds for CPU and memory.&quot; title=&quot;Screenshot 1: Managing groups and setting scaled thresholds for CPU and memory.&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;span style=&quot;color:grey; font-family:Arial; font-size:1em&quot;&gt;Screenshot 1: Managing groups and setting scaled thresholds for CPU and memory. &lt;/span&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;
&lt;p&gt;Setting scale threshold: Navigate to &lt;strong&gt;Library&lt;/strong&gt; and choose &lt;strong&gt;Scale threshold&lt;/strong&gt;. Here is where you will define specific criteria for automatic scaling as shown in screenshot 1. With this in place, the system self-adjusts, activating anywhere from 1 to 4 instances based on memory consumption. When memory usage is minimal, the system conserves resources by operating fewer instances. Conversely, as memory usage approaches its limit, the system scales upwards, guaranteeing consistent performance without the need for human input.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create NSX-T Load Balancer: Screenshot 2 shows the creation of the &lt;strong&gt;Scale-Post-LB&lt;/strong&gt; load balancer, which is set to &lt;strong&gt;Small&lt;/strong&gt; size, &lt;strong&gt;Enabled&lt;/strong&gt;, and in an Up administrative state. Connected to the main network router, the &lt;strong&gt;Tier-1 Gateway&lt;/strong&gt;, it logs activity at a Warning level.
&lt;img src=&quot;/img/nsx-t-load-balancer.png&quot; alt=&quot;Screenshot 2: NSX-T Load Balancer&quot; title=&quot;Screenshot 2: NSX-T Load Balancer&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;span style=&quot;color:grey; font-family:Arial; font-size:1em&quot;&gt;Screenshot 2: NSX-T Load Balancer &lt;/span&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;p&gt;This tool distributes traffic across servers, enhancing performance and reliability, while granting access to all groups.&lt;/p&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;Instance provisioning: Screenshot 3 takes you through the detailed steps of setting up an instance:&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;a. Initiating instance creation: Begin by navigating to &lt;strong&gt;Provisioning&lt;/strong&gt;, then select &lt;strong&gt;Instances&lt;/strong&gt; and click on &lt;strong&gt;Create Instance&lt;/strong&gt;. At this juncture, identify and assign the right group for your resources. This is also the time to detail key attributes, such as the instance name, its operating environment, and any relevant labels. This meticulous labeling aids in precise resource tracking and management.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot-3-creating-and-configuring-instance.png&quot; alt=&quot;Screenshot 3: Creating and configuring instance&quot; title=&quot;Screenshot 3: Creating and configuring instance&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;span style=&quot;color:grey; font-family:Arial; font-size:1em&quot;&gt;Screenshot 3: Creating and configuring instance &lt;/span&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;p&gt;b. Layout configuration: Subsequently, shift your focus to integrating the instance with a load balancer. While many prominent load-balancing options are available, including the likes of F5, the NSX-T Load Balancer (screenshot 2) native to HPE GreenLake for Private Cloud Enterprise  is a good fit. It&apos;s equipped with a comprehensive set of features tailored to meet our deployment requirements, ensuring even distribution of incoming traffic across multiple endpoints.&lt;/p&gt;
&lt;p&gt;c. Final settings: Concluding the process, determine the root volume size to match your data storage necessities. Concurrently, select and designate the network on which these instances will function. This guarantees flawless interaction and operation within the stipulated digital ecosystem.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/configuring-automation-scale-factor-and-load-balancer.png&quot; alt=&quot;Screenshot 4: Configuring automation, scale factor and load balancer&quot; title=&quot;Screenshot 4: Configuring automation, scale factor and load balancer&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;span style=&quot;color:grey; font-family:Arial; font-size:1em&quot;&gt;Screenshot 4: Configuring automation, scale factor and load balancer &lt;/span&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;p&gt;In Screenshot 4, as you navigate the automation phase of the instance wizard:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Initiate the workflow configuration, opting for Ansible based on your needs. It&apos;s worth noting the availability of a Node.js workflow alternative.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The Scale type selection follows, referencing the threshold previously established in Screenshot 1.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Next, attach the NSX-T Load Balancer created earlier (Screenshot 2). Port 80 is primarily designated for HTTP traffic. While port 80 is the focus, other ports like 443 for HTTPS or custom ports for database tasks (e.g., 1433, 1521) can be configured. It&apos;s crucial to note that, initially, configurations apply only to the servers behind the load balancer. When a new instance arises due to scaling thresholds, it inherits these settings but isn&apos;t automatically added to the load balancer. A separate script is required to add the new instance to the load balancer; a hedged sketch of such a script appears just before the Runtime section below.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;It&apos;s essential to highlight that activating port 443 for HTTPS provides the capability to configure and implement SSL certificates, bolstering security.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Additionally, there&apos;s flexibility to choose the installation of custom agents on the instances, such as third-party monitoring tools or security vulnerability scanners.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/tagging-the-resources..png&quot; alt=&quot;Screenshot 5: Tagging the resources.&quot; title=&quot;Screenshot 5: Tagging the resources.&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;span style=&quot;color:grey; font-family:Arial; font-size:1em&quot;&gt;Screenshot 5: Tagging the resources. &lt;/span&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;p&gt;In Screenshot 5, you’ll notice an invaluable feature: the ability to create tags during the instance setup. Tagging resources allows organizations to effortlessly track and evaluate their resource consumption, providing insights into patterns, trends, and potential areas of optimization. Furthermore, these tags become indispensable when delving into spend analytics, helping to allocate and manage costs effectively. This functionality is especially crucial within the HPE GreenLake suite, as it promotes both transparency and strategic decision-making for businesses.&lt;/p&gt;
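&lt;p&gt;As noted in the automation step above, instances launched by the scale threshold are not automatically added to the load balancer, so a small helper script is typically run as part of the scale-out workflow. The following is a minimal, purely illustrative sketch: the endpoint paths, payload fields, and environment variable are placeholders rather than the documented HPE GreenLake for Private Cloud Enterprise API, so adapt them to the API reference for your environment.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Hypothetical sketch only: endpoint paths, payload fields, and the token
# variable below are placeholders, not the documented VM service API.
import os
import requests

API_BASE = &quot;https://vm-service.example.com/api&quot;   # placeholder base URL
TOKEN = os.environ[&quot;VM_SERVICE_TOKEN&quot;]            # placeholder credential
HEADERS = {&quot;Authorization&quot;: &quot;Bearer &quot; + TOKEN}

def add_instance_to_pool(instance_id, pool_id, port=80):
    &quot;&quot;&quot;Register a newly scaled-out instance with the load balancer pool.&quot;&quot;&quot;
    payload = {&quot;instanceId&quot;: instance_id, &quot;port&quot;: port}
    resp = requests.post(
        API_BASE + &quot;/load-balancer-pools/&quot; + str(pool_id) + &quot;/members&quot;,
        json=payload,
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
&lt;/code&gt;&lt;/pre&gt;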
&lt;p&gt;&lt;strong&gt;Runtime&lt;/strong&gt;: Observe screenshot 6, where you’ll notice instances are activated and attached to the load balancer.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/provisioning-%E2%80%93-2-instances-up-and-attached-to-load-balancer.png&quot; alt=&quot;Screenshot 6: Provisioning – 2 instances up and attached to load balancer.&quot; title=&quot;Screenshot 6: Provisioning – 2 instances up and attached to load balancer.&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;span style=&quot;color:grey; font-family:Arial; font-size:1em&quot;&gt;Screenshot 6: Provisioning – 2 instances up and attached to load balancer. &lt;/span&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;p&gt;Provisioning: Screenshot 6 shows the final phase of the instance wizard: the Review screen. This overview permits users to review and affirm their established configurations. Clicking the Complete button activates the provisioning process. Although the instances themselves do not undergo optimization, the workflow and automation described earlier contribute to enhancing application performance. This enhancement is achieved by efficiently scaling out or scaling in the number of virtual machines based on the predefined scale factor, ensuring optimal application operation without manually adjusting the VM count.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/scaled-up-vms.png&quot; alt=&quot;Screenshot 7: Scaled up VMs&quot; title=&quot;Screenshot 7: Scaled up VMs&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;span style=&quot;color:grey; font-family:Arial; font-size:1em&quot;&gt;Screenshot 7: Scaled up VMs. &lt;/span&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;p&gt;Scaling observation: Now, turn your attention to the three-stage progression of instance scaling shown in Screenshot 7.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Initial state&lt;/strong&gt;: This stage represents the baseline, illustrating the system&apos;s configuration before any significant memory demand. It&apos;s the foundation, where the system awaits triggers or conditions to initiate any scaling.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Intermediate phase&lt;/strong&gt;: During the transition, memory demand escalates and meets the scaling threshold. This juncture is pivotal — the system responds by initializing new instances. It&apos;s an adaptive phase, wherein the system&apos;s agility becomes evident, reacting in real-time to the increasing demands.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Final state&lt;/strong&gt;: The culmination is a full display of responsiveness. With memory utilization hitting its designated peak, the system maximizes its resources. Four active instances, mirroring the set scaling threshold, now operate cohesively. This not only reflects an adaptive system but also showcases a setup geared for efficiency and robust performance, ensuring user demands are consistently met.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In this post, I showed you how HPE GreenLake for Private Cloud Enterprise:  Virtual Machines simplifies the deployment and scaling of traditional applications, making it an efficient choice for modern digital business needs. The Design and Development phase highlights the ease of managing groups and setting scaling thresholds for CPU and memory, emphasized by the detailed visual walkthrough.&lt;/p&gt;
&lt;p&gt;Through the HPE GreenLake for Private Cloud Enterprise console and Ansible automation, the establishment of a robust and efficient scaling environment is showcased, ensuring operational consistency and responsiveness.&lt;/p&gt;
&lt;p&gt;In the Runtime phase, the system’s agility comes to the forefront. The seamless integration with NSX-T Load Balancer and clear, step-by-step provisioning and setup underscore the system’s focus on optimal performance and reliability. The nuanced instance setup process, from initiation to layout configuration and tagging, exemplifies the comprehensive control and oversight organizations hold over their resources and deployment processes.&lt;/p&gt;
&lt;p&gt;In conclusion, the HPE GreenLake for Private Cloud Enterprise stands out as a robust, intuitive, and adaptive platform for efficiently deploying and scaling traditional applications. This detailed overview underscores the significant value it brings to enterprises, ensuring consistent meeting of user demands while promoting operational efficiency and innovation. Stay tuned for the final post in this series.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[The iLORest tool in conjunction with direct-attached and controller-connected drives]]></title><description><![CDATA[Introduction In the domain of data storage and server setups, the method of connecting hard drives holds considerable sway over performance…]]></description><link>https://developer.hpe.com/the-ilorest-tool-in-conjunction-with-directly-attached-and-controller-connected-drives/</link><guid isPermaLink="false">https://developer.hpe.com/the-ilorest-tool-in-conjunction-with-directly-attached-and-controller-connected-drives/</guid><pubDate>Sat, 30 Sep 2023 06:13:14 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In the domain of data storage and server setups, the method of connecting hard drives holds considerable sway over performance, scalability, and overall operational capabilities. Two prevalent techniques for linking hard drives to a server or storage system are direct-attached and controller-connected setups. This article will provide a comprehensive examination of these two methodologies, investigating their respective merits, practical applications, and their impact on contemporary computing environments. Additionally, it will examine how the iLORest tool facilitates drive firmware (FW) updates in both of these configurations.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Direct-attached drives&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Direct-attached drives, as the name suggests, are storage devices that are directly connected to a server or host system. These drives can be connected using various interfaces, such as SATA (Serial Advanced Technology Attachment), NVMe (Non-Volatile Memory Express), or SAS (Serial Attached SCSI). Let&apos;s examine the key aspects of direct-attached drives:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Simplicity&lt;/strong&gt;: Direct-attached drives are straightforward to set up and manage. They are typically installed inside the server&apos;s chassis or externally connected via cables.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Performance&lt;/strong&gt;: These drives often offer excellent performance, especially when used in high-speed interfaces like NVMe. They are well-suited for applications that demand low latency and high throughput.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Scalability&lt;/strong&gt;: While direct-attached drives are easy to install, their scalability is somewhat limited. Expanding storage capacity often requires physically adding more drives to the server, which may not be suitable for large-scale data storage needs.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Use cases&lt;/strong&gt;: Direct-attached drives are commonly used in small to medium-sized businesses, as well as for specific applications where high-speed local storage is essential, such as gaming servers or databases.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;When updating configurations where direct-attached drives are being used, iLORest automatically identifies drives that are directly attached, initiates the upload of the firmware component, and generates a UEFI task to perform the update. The UEFI task is scheduled to execute during the next server reboot, and the firmware update is carried out at that time as well.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Controller-connected drives&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Controller-connected drives, on the other hand, involve a more complex setup. These drives are connected to a storage controller or RAID (Redundant Array of Independent Disks) controller, which is then connected to the server. This controller manages the storage devices and can offer several advantages:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Scalability&lt;/strong&gt;: Controller-connected drives are highly scalable. Storage controllers can handle a large number of drives, allowing for significant storage expansion without the need to clutter the server with additional physical drives.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Data protection&lt;/strong&gt;: Many controller systems offer RAID configurations, which provide data redundancy and protection against drive failures. This is crucial for mission-critical applications and data centers.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Centralized management&lt;/strong&gt;: Storage controllers often come with management interfaces that allow for centralized monitoring, configuration, and maintenance of all connected drives. This simplifies storage administration in large-scale environments.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Use cases&lt;/strong&gt;: Controller-connected drives are commonly used in enterprise environments, data centers, and any scenario where data reliability, scalability, and centralized management are paramount.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;In this particular setup, iLORest identifies drives connected to the controller, initiates the upload of the firmware component, and then proceeds to flash it directly using iLO (BMC). In this configuration, there is no necessity to restart the server.&lt;/p&gt;
&lt;p&gt;However, if the server incorporates a mix of direct-attached and controller-connected drives, both the direct flashing and UEFI task become essential. Consequently, a reboot is needed to finalize the firmware update process.&lt;/p&gt;
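&lt;p&gt;Because the update path depends on how each drive is attached, it can be useful to script a quick inventory of the storage resources that iLO exposes before kicking off an update. The sketch below is illustrative only: it uses the DMTF python-redfish-library rather than the iLORest CLI itself, simply walking the standard Redfish storage resources, and the iLO address and credentials are placeholders. The exact resource layout can vary by server generation.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Minimal sketch using the DMTF python-redfish-library (not iLORest itself).
# The BMC address and credentials are placeholders.
import redfish

client = redfish.redfish_client(base_url=&quot;https://ilo.example.com&quot;,
                                username=&quot;admin&quot;, password=&quot;password&quot;)
client.login(auth=&quot;session&quot;)
try:
    systems = client.get(&quot;/redfish/v1/Systems/&quot;).dict[&quot;Members&quot;]
    for system in systems:
        storage_uri = system[&quot;@odata.id&quot;].rstrip(&quot;/&quot;) + &quot;/Storage/&quot;
        for storage in client.get(storage_uri).dict[&quot;Members&quot;]:
            detail = client.get(storage[&quot;@odata.id&quot;]).dict
            # Each Storage resource lists the drives behind it.
            for drive in detail.get(&quot;Drives&quot;, []):
                info = client.get(drive[&quot;@odata.id&quot;]).dict
                print(info.get(&quot;Model&quot;), info.get(&quot;MediaType&quot;))
finally:
    client.logout()
&lt;/code&gt;&lt;/pre&gt;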
&lt;p&gt;&lt;strong&gt;Choosing the right attachment method&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The choice between direct-attached drives and controller-connected drives depends on your specific requirements and use cases. Here are some considerations to guide your decision:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Performance vs. Scalability&lt;/strong&gt;: If you need high-performance local storage and have a limited number of drives, direct-attached drives may be sufficient. However, if scalability is crucial and data protection is a concern, controller-connected drives are a better choice.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Data redundancy&lt;/strong&gt;: For applications where data loss is unacceptable, such as financial systems or healthcare databases, controller-connected drives with RAID configurations offer a higher level of data protection.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Cost&lt;/strong&gt;: Direct-attached drives are often more cost-effective for small to medium-sized deployments. Controller-connected drives can be more expensive due to the additional hardware required.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Management complexity&lt;/strong&gt;: Consider the level of management and administration your storage solution requires. Controller-connected drives offer centralized management but may require more initial setup.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In summary, gaining a comprehensive understanding of how the iLORest tool updates firmware for both direct-attached and controller-connected drives is crucial. This knowledge forms the basis for making well-informed decisions when designing and overseeing your storage infrastructure. Each approach carries its own set of advantages and limitations, and your choice should align with the specific requirements and objectives of your organization. Whether you prioritize aspects such as performance, scalability, data protection, or cost-effectiveness, the selection of the appropriate hard drive attachment method can greatly influence the effectiveness of your IT infrastructure.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Streamline and optimize ML workflows with HPE Ezmeral Unified Analytics ]]></title><description><![CDATA[This is part two of a blog series that showcases the capabilities of HPE Ezmeral Unified Analytics through the real-world example of stock…]]></description><link>https://developer.hpe.com/streamline-and-optimize-ml-workflows-with-hpe-ezmeral-unified-analytics/</link><guid isPermaLink="false">https://developer.hpe.com/streamline-and-optimize-ml-workflows-with-hpe-ezmeral-unified-analytics/</guid><pubDate>Wed, 27 Sep 2023 13:00:00 GMT</pubDate><content:encoded>&lt;p&gt;This is part two of a blog series that showcases the capabilities of &lt;a href=&quot;https://www.hpe.com/us/en/hpe-ezmeral-unified-analytics&quot;&gt;HPE Ezmeral Unified Analytics&lt;/a&gt; through the real-world example of stock market predicting. In that &lt;a href=&quot;https://developer.hpe.com/blog/seamless-data-engineering-for-financial-services/&quot;&gt;previous blog post&lt;/a&gt;, we covered the data engineering aspects of Apache Spark and Superset to streamline pipelines and visualize data. In this post, the same use case is retained, stock market forecasting, to showcase how HPE Ezmeral simplifies building, deploying, and productizing machine learning models and pipelines. &lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Model building and training&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Business problem: Forecast stock prices of different companies listed in the National Stock Exchange (NSE) of India.&lt;/p&gt;
&lt;p&gt;Step 1 Data Gathering:&lt;br&gt;
The data is streamed from publicly hosted external servers and then saved to an HPE Ezmeral Data Fabric volume, as explained in the previous blog post.&lt;/p&gt;
&lt;p&gt;Step 2 Data Preprocessing:&lt;br&gt;
The stock market does not function on holidays and weekends, but the model expects continuous data. For this reason, the date feature, along with the open and closing price for each stock, is used to impute the missing days with the previous working day’s data.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/step2-data-preprocessing.png&quot; alt=&quot;&quot; title=&quot;Figure 1. Selecting data for preprocessing. &quot;&gt;&lt;/p&gt;
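&lt;p&gt;The imputation described above can be expressed in a few lines of pandas. The sketch below is illustrative: it assumes a CSV with a date column plus open and close prices for a single ticker, and the file and column names are placeholders.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import pandas as pd

# Illustrative file and column names: one NSE ticker with date, open, close.
df = pd.read_csv(&quot;stock_prices.csv&quot;, parse_dates=[&quot;date&quot;])
df = df.set_index(&quot;date&quot;).sort_index()

# Markets are closed on weekends and holidays, but the model expects one row
# per calendar day, so reindex to daily frequency and forward-fill each
# missing day with the values of the previous working day.
df = df.asfreq(&quot;D&quot;).ffill()
&lt;/code&gt;&lt;/pre&gt;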
&lt;p&gt;Step 3 Modeling:&lt;br&gt;
First, the data is divided into training and validation sets and then used to train a long short-term memory (LSTM) model. After training, the model is evaluated on the error metric.&lt;/p&gt;
&lt;p&gt;LSTM is a variety of &lt;a href=&quot;https://www.techtarget.com/searchenterpriseai/definition/recurrent-neural-networks#:~:text=A%20recurrent%20neural%20network%20is,predict%20the%20next%20likely%20scenario.&quot;&gt;recurrent neural networks (RNNs)&lt;/a&gt; capable of learning long-term dependencies, especially in sequence prediction problems. &lt;/p&gt;
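&lt;p&gt;A minimal version of this modeling step might look like the sketch below, which continues from the preprocessing sketch above. The window size, layer sizes, and epoch count are illustrative choices, not the exact configuration used in the screenshots.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import numpy as np
import tensorflow as tf

def make_windows(series, window=60):
    # Build sliding windows of the previous 60 closing prices to predict
    # the next day; shapes are (samples, window, 1) and (samples,).
    x, y = [], []
    for i in range(window, len(series)):
        x.append(series[i - window:i])
        y.append(series[i])
    return np.array(x)[..., np.newaxis], np.array(y)

x, y = make_windows(df[&quot;close&quot;].to_numpy())    # df from the sketch above
split = int(len(x) * 0.8)                      # 80/20 train/validation split
x_train, y_train = x[:split], y[:split]
x_val, y_val = x[split:], y[split:]

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(50, input_shape=(x.shape[1], 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer=&quot;adam&quot;, loss=&quot;mse&quot;, metrics=[&quot;mae&quot;])
model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=20)
&lt;/code&gt;&lt;/pre&gt;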
&lt;p&gt;&lt;strong&gt;Model tracking and monitoring using MLflow&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;MLflow is an open-source platform designed to manage the end-to-end machine learning lifecycle. It provides tools for tracking experiments, packaging code, sharing models, and managing deployment. MLflow was developed by Databricks, a company that specializes in big data and AI. It has gained popularity within the machine learning community for its ability to streamline and organize the various stages of machine learning workflows.&lt;/p&gt;
&lt;p&gt;MLflow consists of several key components:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Tracking&lt;/strong&gt;: This component allows you to record and compare different experiments, including the parameters, metrics, and artifacts (such as models, visualizations, and data) associated with each experiment. It helps keep track of the different iterations and configurations tried during the model development process.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Projects&lt;/strong&gt;: MLflow projects provide a way to package code into a reproducible format, allowing you to define and share your machine learning projects with others. This ensures that the code and dependencies used for a particular experiment can be easily reproduced by others.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Models&lt;/strong&gt;: MLflow model management capabilities enable you to easily log, version, and deploy machine learning models. It supports various machine learning frameworks, for example TensorFlow, PyTorch, and scikit-learn. MLflow also makes it possible to deploy models directly onto different platforms.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Registry&lt;/strong&gt;: The MLflow model registry provides a centralized repository for storing, versioning, and managing models. It allows teams to collaborate on model development and helps maintain a history of different model versions.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;UI and APIs&lt;/strong&gt;: MLflow makes it easy to use and integrate programmatically by offering a web-based user interface as well as APIs.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Experiments&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;An experiment is registered in the MLflow tracking service, and different runs are carried out with different parameters and features. The parameters can then be logged into MLflow and viewed on the UI. Different runs are then compared so that the best metric can be chosen and moved into production. &lt;/p&gt;
&lt;p&gt;To begin, connect to MLflow from the Jupyter notebook and create the experiment. Then, because a TensorFlow LSTM is used as the ML model, mlflow.tensorflow.autolog() is used to automatically log the training parameters, hyperparameters, and model metrics.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/figure-1a-experiment.png&quot; alt=&quot;&quot; title=&quot;Figure 2. Creating an experiment with auto logging. &quot;&gt;&lt;/p&gt;
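&lt;p&gt;The setup shown in the screenshot corresponds to just a few lines of code. In the hedged sketch below, the experiment and run names are illustrative, and the model and training data come from the modeling sketch earlier in this post.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import mlflow
import mlflow.tensorflow

# Group runs under one experiment and let MLflow capture the parameters,
# hyperparameters, and metrics emitted by the Keras training loop.
mlflow.set_experiment(&quot;nse-stock-forecasting&quot;)   # illustrative name
mlflow.tensorflow.autolog()

with mlflow.start_run(run_name=&quot;lstm-epochs-20&quot;):
    model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=20)
&lt;/code&gt;&lt;/pre&gt;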
&lt;p&gt;Once the model is ready, the run details are available via MLflow APIs like search_runs(). The best run can then be auto-selected according to your requirements.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/fig-1b-best-run-experiment.png&quot; alt=&quot;&quot; title=&quot;Figure 3. Selecting the best run of the experiment and loading the model.&quot;&gt;&lt;/p&gt;
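&lt;p&gt;Programmatically, the same selection can be done with mlflow.search_runs(), which returns the runs of an experiment as a pandas DataFrame. In the sketch below, the experiment name and the metric column are illustrative; the exact metric column names depend on what autologging recorded for your runs.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import mlflow

# Fetch every run in the experiment, pick the one with the lowest
# validation loss, and load the model that run logged.
runs = mlflow.search_runs(experiment_names=[&quot;nse-stock-forecasting&quot;])
best = runs.sort_values(&quot;metrics.val_loss&quot;).iloc[0]
best_model = mlflow.pyfunc.load_model(&quot;runs:/&quot; + best[&quot;run_id&quot;] + &quot;/model&quot;)
&lt;/code&gt;&lt;/pre&gt;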
&lt;p&gt;On the Web UI for MLflow, you can access multiple runs of the ML experiment for different parameter combinations. Details of the run can be accessed by clicking on the corresponding run name.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/fig-2-multiple-runs.png&quot; alt=&quot;&quot; title=&quot;Figure 4. Results from multiple runs made on the experiment with different parameters in each run. &quot;&gt;&lt;/p&gt;
&lt;p&gt;Model performance can be assessed by comparing the various runs. In the image below, all six runs have been selected, and you can see that the model with epochs=20 performs best, with the lowest loss and mean absolute error (MAE).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/fig-3-multiple-runs-compared.png&quot; alt=&quot;&quot; title=&quot;Figure 5. Comparison of all runs based on a relevant metric. &quot;&gt;&lt;/p&gt;
&lt;p&gt;Now that the best model has been identified, it can be registered programmatically or through the MLflow Web UI. Once registered, the model can be deployed to the appropriate staging-production region. &lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/fig-4-model-registry-different-models.png&quot; alt=&quot;&quot; title=&quot;Figure 6. Model registry with different models. &quot;&gt;&lt;/p&gt;
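&lt;p&gt;Registering the chosen run programmatically takes only a couple of lines; the registered model name below is illustrative, and the run comes from the selection sketch above.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import mlflow

# Register the best run as a named model; MLflow creates a new version
# each time the same name is registered.
result = mlflow.register_model(&quot;runs:/&quot; + best[&quot;run_id&quot;] + &quot;/model&quot;,
                               &quot;nse-stock-forecaster&quot;)
print(result.name, result.version)
&lt;/code&gt;&lt;/pre&gt;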
&lt;p&gt;&lt;strong&gt;Code repository&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;A code repository is a storage location for code and other software development assets, such as documentation, tests, and scripts. They are often used to manage and organize a software project&apos;s codebase and collaborate with other project developers.&lt;/p&gt;
&lt;p&gt;The source code for the model is stored in a private repository on GitHub and then seamlessly pulled into the IDE using git commands. The updated model can be pushed to different branches of the source repo.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/code-repository.png&quot; alt=&quot;&quot; title=&quot;Figure 7.  Source code for models is stored on Github in a private repository. &quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Model deployment&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The next step is to deploy the best model chosen from the MLflow experiment. Here, KServe, a standard model inference platform on Kubernetes built for highly scalable use cases, is used.&lt;/p&gt;
&lt;p&gt;Kubeflow Serving, also known as KServe, is a component of the Kubeflow ecosystem and is designed to serve machine learning models in Kubernetes environments. Kubeflow itself is an open-source platform for deploying, monitoring, and managing machine learning models on Kubernetes. KServe provides several features and benefits:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Scalability: It is designed to handle serving machine learning models at scale. You can easily scale up or down based on the traffic and load requirements.&lt;/li&gt;
&lt;li&gt;Multi-framework support: KServe supports serving models trained in various machine learning frameworks, such as TensorFlow, PyTorch, scikit-learn, and others.&lt;/li&gt;
&lt;li&gt;Advanced deployment strategies: KServe supports various deployment strategies, for example  Canary deployments or Blue-Green deployments,  allowing you to roll out new model versions gradually and monitor their performance.&lt;/li&gt;
&lt;li&gt;Monitoring and metrics: It provides metrics and monitoring capabilities, allowing you to keep track of model performance, errors, and other relevant statistics.&lt;/li&gt;
&lt;li&gt;Customization: KServe is highly customizable, allowing you to define how you want to serve your models, handle preprocessing and post-processing tasks, and even add custom logic to your serving containers.&lt;/li&gt;
&lt;li&gt;Integration with Kubeflow pipelines: If you&apos;re using Kubeflow pipelines for end-to-end machine learning workflows, KServe can be integrated seamlessly to deploy models as part of your pipeline.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Visit the &lt;a href=&quot;https://kserve.github.io/website/0.10/modelserving/control_plane/&quot;&gt;KServe documentation&lt;/a&gt; for more information.&lt;/p&gt;
&lt;p&gt;The first step of model deployment is to define the service account with MinIO object store secrets.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/fig-1a-define-service-acct.png&quot; alt=&quot;&quot; title=&quot;Figure 8. Defining service account with MinIO secret. &quot;&gt;&lt;/p&gt;
&lt;p&gt;Next, write the KServe YAML file that defines the inference service.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/fig-1b-inference-service.png&quot; alt=&quot;&quot; title=&quot;Figure 9. The inference service is defined. &quot;&gt;&lt;/p&gt;
&lt;p&gt;Next, a new model server must be created in Kubeflow and the YAML file uploaded. Once it is running, you can send your test data to the model through REST APIs and get the response.&lt;/p&gt;
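&lt;p&gt;Once the inference service reports ready, a request against the KServe v1 REST protocol is enough to test it. In the sketch below, the service host, model name, and payload shape are placeholders that depend on how your own inference service was defined; the test windows come from the modeling sketch earlier in this post.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import requests

# Placeholder host and model name; substitute the values from your own
# KServe inference service.
url = &quot;http://stock-lstm.kubeflow-user.example.com/v1/models/stock-lstm:predict&quot;

# The KServe v1 protocol wraps the test windows in an &quot;instances&quot; list.
payload = {&quot;instances&quot;: x_val[:2].tolist()}
response = requests.post(url, json=payload, timeout=30)
print(response.json())
&lt;/code&gt;&lt;/pre&gt;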
&lt;p&gt;&lt;strong&gt;Model retraining&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The models deployed into production are constantly monitored to identify inconsistencies and degradation in performance. Because models can skew toward the old data used for training, resulting in underperformance, they should be retrained whenever performance drops below a defined threshold. To retrain a model, a pipeline is developed with Kubeflow Pipelines that triggers all the steps associated with model retraining in the appropriate sequence.&lt;/p&gt;
&lt;p&gt;Python programs need to be modified to run as a Kubeflow pipeline because each step runs as a containerized module. The whole cycle of data collection, preprocessing, and modeling is divided into separate components, with each process isolated and running as an independent container inside an individual Kubernetes pod. All the dependencies of each task must be declared inside the task due to the isolated nature of the environment. The artifacts that are used in the process, such as input data files and model objects, are saved in specific locations and referenced by the tasks.&lt;/p&gt;
&lt;p&gt;To prepare the Kubeflow pipelines, define each task as a function. First “read_data()” is defined to read the input file from the data volume on the Kubeflow pipeline.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/fig-1-define-each-task.png&quot; alt=&quot;&quot; title=&quot;Figure 10. Each task is defined as a function that includes all necessary packages. &quot;&gt;&lt;/p&gt;
&lt;p&gt;Once defined, functions are converted into a Kubeflow component using “create_component_from_func()”.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/fig-2-converting-each-task.png&quot; alt=&quot;&quot; title=&quot;Figure 11. Each task is converted into a component that includes base line images and packages. &quot;&gt;&lt;/p&gt;
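&lt;p&gt;A trimmed-down version of this pattern is sketched below. The function body, file paths, base image, and package list are illustrative; the key point is that every dependency is imported inside the task and the function is wrapped with create_component_from_func().&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from kfp.components import create_component_from_func

def read_data(input_path: str) -&gt; str:
    &quot;&quot;&quot;Read the raw stock data from the pipeline data volume.&quot;&quot;&quot;
    import pandas as pd              # dependencies live inside the task
    df = pd.read_csv(input_path)
    output_path = &quot;/data/clean.csv&quot;  # illustrative artifact location
    df.to_csv(output_path, index=False)
    return output_path

# Wrap the function as a containerized pipeline component.
read_data_op = create_component_from_func(
    read_data,
    base_image=&quot;python:3.9&quot;,
    packages_to_install=[&quot;pandas&quot;],
)
&lt;/code&gt;&lt;/pre&gt;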
&lt;p&gt;Once each of the tasks is defined, the Kubeflow pipeline itself is defined, calling each of the steps using the Kubeflow Pipelines DSL (domain-specific language).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/fig3-define-the-pipeline.png&quot; alt=&quot;&quot; title=&quot;Figure 12. Pipelines are defined along with tasks in the appropriate sequence. &quot;&gt;&lt;/p&gt;
&lt;p&gt;When the Kubeflow pipeline is set up, it can be run using the Kubeflow client, passing the required arguments and parameters for the specific run.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/kubeflow-setup-run.png&quot; alt=&quot;&quot; title=&quot;Figure 13. Kubeflow client is created and executed with appropriate arguments. &quot;&gt;&lt;/p&gt;
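&lt;p&gt;Tying these steps together, the hedged sketch below shows the general shape of the pipeline definition and the client call. The pipeline name, arguments, and run name are illustrative, and the additional preprocessing and training components are elided.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import kfp
from kfp import dsl

@dsl.pipeline(name=&quot;stock-retraining-pipeline&quot;,
              description=&quot;Retrain the LSTM model when performance degrades&quot;)
def stock_pipeline(input_path: str = &quot;/data/raw.csv&quot;):
    # Each step runs in its own container; outputs are passed by reference.
    read_task = read_data_op(input_path)
    # ...preprocessing and training components would be chained here...

# Connect to the Kubeflow Pipelines API and launch a run.
client = kfp.Client()
client.create_run_from_pipeline_func(
    stock_pipeline,
    arguments={&quot;input_path&quot;: &quot;/data/raw.csv&quot;},
    run_name=&quot;stock_pipeline_run&quot;,
)
&lt;/code&gt;&lt;/pre&gt;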
&lt;p&gt;In the Kubeflow UI, “stock_pipeline_run” is triggered and shows a green or red status for each step as it executes.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/fig-5-simplified-pipeline.png&quot; alt=&quot;&quot; title=&quot;Figure 14. Status of each pipeline step as it executes. &quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;As you can see, HPE Ezmeral Unified Analytics has significantly enhanced the end-to-end pipeline of financial services use cases by providing a comprehensive platform tailored to the specific needs of ML practitioners and data scientists.&lt;/p&gt;
&lt;p&gt;In summary:&lt;/p&gt;
&lt;p&gt;Seamless integration: HPE Ezmeral Unified Analytics offers seamless integration of various data engineering and ML components like Spark, EzPresto, Superset, MLflow, and Kubeflow. This integration streamlines the entire ML workflow, reducing friction between the different stages and enabling more efficient development.&lt;/p&gt;
&lt;p&gt;Data exploration and visualization: By seamlessly integrating Apache Superset into the Ezmeral ecosystem, organizations can harness the power of data-driven insights. Teams can easily access and visualize data, track experiment results, and make data-informed decisions to improve model quality and performance.&lt;/p&gt;
&lt;p&gt;Reproducibility: Kubeflow ensures reproducibility by allowing users to define and version their entire ML pipelines using HPE Ezmeral Unified Analytics. This means that every step, from data preparation to model deployment, can be tracked, versioned, and replicated easily, leading to better model accountability and auditability.&lt;/p&gt;
&lt;p&gt;Scalability: HPE Ezmeral Unified Analytics excels at scaling applications, and Kubeflow leverages this capability to scale ML workloads as needed, handling large datasets and complex models with ease. This scalability is crucial for training and serving models in real-world, production environments.&lt;/p&gt;
&lt;p&gt;Model serving: HPE Unified Analytics through Kubeflow Serving (KServe) simplifies model deployment and serving. It provides features like advanced deployment strategies, model versioning, and monitoring, making it easier to roll out new models, manage production deployments, and ensure model performance and reliability.&lt;/p&gt;
&lt;p&gt;Automated workflow orchestration: Kubeflow pipelines simplify the orchestration of ML workflows as code. With HPE Unified Analytics, this automation reduces manual effort, enhances reproducibility, and accelerates experimentation, resulting in more efficient model operations.&lt;/p&gt;
&lt;p&gt;Contributors to this blog post include Suvralipi Mohanta (&lt;a href=&quot;mailto:suvralipi.mohanta@hpe.com&quot;&gt;suvralipi.mohanta@hpe.com&lt;/a&gt;), Harikrishnan Nair (&lt;a href=&quot;mailto:harikrishnan.nair@hpe.com&quot;&gt;harikrishnan.nair@hpe.com&lt;/a&gt;), Ashok Manda (&lt;a href=&quot;mailto:ashok.manda@hpe.com&quot;&gt;ashok.manda@hpe.com&lt;/a&gt;) and Joann Starke (&lt;a href=&quot;mailto:joann.starke@hpe.com&quot;&gt;joann.starke@hpe.com&lt;/a&gt;).&lt;/p&gt;</content:encoded></item><item><title><![CDATA[GitOps: The next step in cloud-native development ]]></title><description><![CDATA[Editor’s note: This article was originally posted on HPE Enterprise.nxt on June 2, 2021 What do you get when you combine DevOps management…]]></description><link>https://developer.hpe.com/gitops-the-next-step-in-cloud-native-development/</link><guid isPermaLink="false">https://developer.hpe.com/gitops-the-next-step-in-cloud-native-development/</guid><pubDate>Mon, 25 Sep 2023 15:37:56 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s note: This article was originally posted on HPE Enterprise.nxt on June 2, 2021&lt;/strong&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;What do you get when you combine DevOps management and the Git distributed version control system? Say hello to GitOps. &lt;/p&gt;
&lt;p&gt;Automation makes software better and more reliable. GitOps takes automation a step further and merges it with deployment. &lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/insights/articles/devops-lessons-in-software-infrastructure-and-business-success-1702.html&quot;&gt;DevOps&lt;/a&gt;, in which system administrators work hand-in-hand with developers, can speed up software development and operational deployments from months to days. At the same time, we use &lt;a href=&quot;https://www.hpe.com/us/en/insights/articles/devops-lessons-in-software-infrastructure-and-business-success-1702.html&quot;&gt;Kubernetes&lt;/a&gt; to orchestrate containers over clouds to speed up software development and operational deployments. &lt;/p&gt;
&lt;p&gt;Ultimately, both approaches lend themselves to &lt;a href=&quot;https://www.hpe.com/us/en/insights/articles/continuous-integration-and-delivery-tool-basics-1807.html&quot;&gt;continuous Integration/continuous delivery (CI/CD&lt;/a&gt;). Wouldn&apos;t it be great, thought &lt;a href=&quot;https://www.weave.works/%22%20/t%20%22_blank&quot;&gt;Weaveworks &lt;/a&gt;CEO Alexis Richardson, if we could combine these approaches and use the &lt;a href=&quot;https://www.weave.works/blog/gitops-git-push-all-the-things%22%20/t%20%22_blank&quot;&gt;Git distributed version control system as the ultimate source of truth&lt;/a&gt;? So, when there&apos;s a dispute over the correct state of the site, people know where to go for the correct version. &lt;/p&gt;
&lt;p&gt;It turns out Richardson was on to something. Cornelia Davis, Weaveworks&apos; CTO, recently said, &quot;&lt;a href=&quot;https://www.itprotoday.com/development-techniques-and-management/why-gitops-model-future-devops&quot;&gt;I believe that GitOps is the model that will dominate operations&lt;/a&gt;. … I think, five years from now, everybody will be doing some level of GitOps.&quot; &lt;/p&gt;
&lt;p&gt;Why? Because GitOps is a set of practices that enables you to manage and deploy highly distributed software that&apos;s constantly changing without breaking a sweat. &lt;/p&gt;
&lt;p&gt;Now if just a single vendor was promoting its approach, you might be wise to be skeptical. But it&apos;s not just Weaveworks. Priyanka Sharma, general manager at the &lt;a href=&quot;https://www.cncf.io/&quot;&gt;Cloud Native Computing Foundation (CNCF)&lt;/a&gt;, believes GitOps is becoming to Kubernetes what Git already is to Linux: the fundamental building tool for the next generation of &lt;a href=&quot;https://www.hpe.com/us/en/insights/articles/how-to-implement-cloud-native-computing-with-kubernetes-1710.html&quot;&gt;cloud-native computing&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;GitOps is designed for and, as it now stands, really applicable to just orchestrated cloud-native applications. &lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/gitops_-the-next-step-in-cloud-native-development.png&quot; alt=&quot;Block Text&quot; title=&quot;Block text&quot;&gt;&lt;/p&gt;
&lt;h2&gt;So what is GitOps today? &lt;/h2&gt;
&lt;p&gt;Officially, Weaveworks defines GitOps as &quot;a way to do Kubernetes cluster management and application delivery. &lt;a href=&quot;https://www.weave.works/technologies/gitops/%22%20/t%20%22_blank&quot;&gt;GitOps works by using Git as a single source of truth for declarative infrastructure and applications&lt;/a&gt;. With GitOps, the use of software agents can alert on any divergence between Git with what&apos;s running in a cluster, and if there&apos;s a difference, Kubernetes reconcilers automatically update or roll back the cluster depending on the case. With Git at the center of your delivery pipelines, developers use familiar tools to make pull requests, to accelerate and simplify both application deployments and operations tasks to Kubernetes.&quot; &lt;/p&gt;
&lt;p&gt;Superficially, GitOps is quite simple. GitOps uses a version control system, Git, to house all information, documentation, and code for a Kubernetes deployment. Kubernetes then automatically deploys changes to the cluster. &lt;/p&gt;
&lt;p&gt;Of course, simple concepts are always more complex in reality. Let&apos;s look at the fundamentals. &lt;/p&gt;
&lt;h3&gt;1) Everything that can be described must be stored in Git. &lt;/h3&gt;
&lt;p&gt;By using Git as the source of truth, it is possible to observe your cluster and compare it with the desired state. The goal is to describe everything: policies, code, configuration, and even monitored events and version control. Keeping everything under version control enforces convergence where changes can be reapplied if at first they didn&apos;t succeed. &lt;/p&gt;
&lt;p&gt;The entire system is described declaratively in &lt;a href=&quot;https://yaml.org/&quot;&gt;YAML&lt;/a&gt;, a human-readable data serialization language commonly used for configuration files and for data storage and transmission. &lt;/p&gt;
&lt;p&gt;If this sounds a lot like DevOps, you&apos;re right, it does. For example, &lt;a href=&quot;https://www.ansible.com/%22%20/t%20%22_blank&quot;&gt;Ansible&lt;/a&gt;, &lt;a href=&quot;https://azure.microsoft.com/en-us/services/devops/pipelines/&quot;&gt;Azure Pipelines&lt;/a&gt;, &lt;a href=&quot;https://saltproject.io/&quot;&gt;Salt&lt;/a&gt;, and &lt;a href=&quot;https://puppet.com/&quot;&gt;Puppet&lt;/a&gt; all use YAML. No matter the program, the idea is the same: Use declarations written in YAML to control operations. This approach is also known as &lt;a href=&quot;https://stackify.com/what-is-infrastructure-as-code-how-it-works-best-practices-tutorials/&quot;&gt;infrastructure as code&lt;/a&gt; (IAC). &lt;/p&gt;
&lt;p&gt;Within GitHub YAML files, you find not instructions like &quot;Start 10 MySQL servers&quot; but declarations—for instance, &quot;There are 10 MySQL servers. These are their names.&quot; &lt;/p&gt;
&lt;p&gt;The bottom line is to take fundamental DevOps and IAC concepts and move them to the cloud-native world. &lt;/p&gt;
&lt;h3&gt;2) Use a Kubernetes controller that follows an operator pattern &lt;/h3&gt;
&lt;p&gt;With a Kubernetes controller that follows the operator pattern, your cluster is always in sync with your Git repository, the source of truth. Since the desired state of your cluster is kept in Git YAML files, you can easily spot differences between your Git files and the running cluster. &lt;/p&gt;
&lt;p&gt;For GitOps, the most popular controller is the open source program &lt;a href=&quot;https://fluxcd.io/&quot;&gt;Flux&lt;/a&gt;. This is a collection of tools for keeping Kubernetes clusters in sync with YAML files in Git repositories and automating configuration updates when there&apos;s new code to deploy. Indeed, although Flux&apos;s code is changing rapidly, &lt;a href=&quot;https://radar.cncf.io/2020-06-continuous-delivery&quot;&gt;Flux has already been recommended by the CNCF for adoption&lt;/a&gt; by CD users. &lt;/p&gt;
&lt;p&gt;With Flux, you can describe everything about the entire desired state of your system in Git. This includes apps, configuration, dashboards, monitoring, and everything else. &lt;/p&gt;
&lt;p&gt;This means everything—and I mean everything—is controlled through pull requests. There&apos;s no learning curve for new programmers; they just use the Git commands they already know. If there&apos;s a production issue, it&apos;s fixed via a pull request instead of manually changing the running system. As a big side benefit, your Git history automatically provides a log of transactions, enabling you to recover your system state from any snapshot. &lt;/p&gt;
&lt;p&gt;Flux also uses &lt;a href=&quot;https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/&quot;&gt;Kubernetes Custom Resources&lt;/a&gt;, object state reports, and via Kubernetes Events, integration with &lt;a href=&quot;https://kubernetes.io/docs/reference/access-authn-authz/rbac/&quot;&gt;Kubernetes role-based access control (RBAC)&lt;/a&gt;, making it declaratively configurable. &lt;/p&gt;
&lt;p&gt;The next version, Flux v2, is being built from the ground up to use Kubernetes&apos; API extension system and to integrate with &lt;a href=&quot;https://prometheus.io/&quot;&gt;Prometheus&lt;/a&gt; and other core Kubernetes components. In version 2, Flux supports multi-tenancy and can sync an arbitrary number of Git repositories, among other long-requested features. The Flux people are building this with the &lt;a href=&quot;https://toolkit.fluxcd.io/components/%22%20/t%20%22_blank&quot;&gt;GitOps Toolkit&lt;/a&gt;, a set of composable APIs and specialized tools for building CD on top of Kubernetes. &lt;/p&gt;
&lt;p&gt;There are non-Flux Kubernetes platforms that support GitOps as well, including &lt;a href=&quot;https://www.hpe.com/content/hpe/country/us/en/software/ezmeral-runtime.html&quot;&gt;HPE Ezmeral Container Platform&lt;/a&gt;. Ezmeral &lt;a href=&quot;https://docs.containerplatform.hpe.com/53/reference/kubernetes/Policy_Management_Overview.html&quot;&gt;delivers GitOps through its Centralized Policy Management capabilities&lt;/a&gt; using &lt;a href=&quot;https://argoproj.github.io/argo-cd/&quot;&gt;Argo CD&lt;/a&gt;, a declarative, GitOps continuous delivery tool for Kubernetes. &lt;/p&gt;
&lt;h3&gt;3) Software agents are used to ensure correctness and act on divergence, a.k.a. &lt;a href=&quot;https://cluster-api.sigs.k8s.io/developer/providers/implementers-guide/controllers_and_reconciliation.html&quot;&gt;Kubernetes reconcilers&lt;/a&gt; &lt;/h3&gt;
&lt;p&gt;As Richardson puts it, &quot;One of the most important functions of GitOps is to enable a group of system changes to be applied correctly and then verified. After that, GitOps should enable the users and orchestrators to be notified [alerted] if any of the systems has drifted from the correct state so that it may then be converged back to the correct desired state, which is in Git.&quot; &lt;/p&gt;
&lt;p&gt;So, how would this work in practice? Something like this. &lt;/p&gt;
&lt;p&gt;You make your changes to your Git files. Then, &lt;a href=&quot;https://www.jenkins.io/&quot;&gt;Jenkins&lt;/a&gt;, the open source automation server, pushes these changes to the &lt;a href=&quot;https://www.projectquay.io/&quot;&gt;Quay&lt;/a&gt; container image registry. Jenkins then pushes the new config, and &lt;a href=&quot;https://helm.sh/&quot;&gt;Helm&lt;/a&gt;, the Kubernetes package manager, charts to the master Git storage bucket. Once the merge request is complete, the automated GitOps operator detects the change and calls Flux to make the changes operational by deploying the updated YAML files to the master Git repository and hence to the operational Kubernetes cluster. &lt;/p&gt;
&lt;p&gt;That may sound complicated, but once it&apos;s set up and debugged, it will greatly speed up your deployment of applications. What do you end up with? GitOps is a CI/CD system that&apos;s greater than the sum of its parts.&lt;/p&gt;
&lt;p&gt;As a corollary to this, you don&apos;t use &lt;a href=&quot;https://kubectl.docs.kubernetes.io/%22%20/t%20%22_blank&quot;&gt;kubectl&lt;/a&gt;, the Kubernetes command-line interface, because all changes are handled automatically by the GitOps pipeline. Indeed, Richardson says it&apos;s not a good idea to deploy directly to the cluster using kubectl. That&apos;s because by relying on kubectl, you&apos;re making production potentially open to shell-based hacking attacks. &lt;/p&gt;
&lt;h2&gt;Why GitOps? &lt;/h2&gt;
&lt;p&gt;Sharma says, just as &quot;Kubernetes unleashes the power of cloud computing for building software fast and resiliently, GitOps is basically utilizing the Git workflows that every developer is used to.&quot; So, how important is this approach? &quot;Not everyone who is touching Kubernetes is using GitOps, but I know everyone wants to because it would make their life easier.&quot; &lt;/p&gt;
&lt;p&gt;&quot;Everyone&quot; includes the &lt;a href=&quot;https://github.com/gitops-working-group/gitops-working-group&quot;&gt;CNCF GitOps Working Group&lt;/a&gt;, which is working on best practices for the still quite new GitOps. Its members include Hewlett Packard Enterprise, Amazon Web Services, GitHub, Codefresh, Microsoft, and, of course, Weaveworks. &lt;/p&gt;
&lt;p&gt;Besides best practices, the working group is also &lt;a href=&quot;https://github.com/gitops-working-group/gitops-working-group&quot;&gt;hammering out the GitOps Manifest&lt;/a&gt;. This is very much a work in progress. When done, it will define GitOps&apos; principles and technical aspects in a vendor- and implementation-neutral manner. It will also lay out a common understanding of GitOps systems based on shared principles rather than on individual opinion. Another aim is to encourage innovation by clarifying the technical outcomes rather than the code, tests, or organizational elements needed to achieve them. &lt;/p&gt;
&lt;p&gt;Sharma, &lt;a href=&quot;https://gitlab.com/users/sign_in%22%20/t%20%22_blank&quot;&gt;GitLab&lt;/a&gt;&apos;s director of technical evangelism, says it best: &quot;For those shops doing DevOps, this approach can be appealing because &lt;a href=&quot;https://thenewstack.io/what-is-gitops-and-why-it-might-be-the-next-big-thing-for-devops/%22%20/t%20%22_blank&quot;&gt;GitOps brings the workflow closer to the developer&lt;/a&gt;.&quot; &lt;/p&gt;
&lt;p&gt;Programmers just keep using the Git tools they already know to push code into production. And since the workflow goes directly through Git, it&apos;s recorded and logged. Sharma says, &quot;There is an audit trail, the ability to revert problematic changes, and ultimately a single source of truth of what is happening in the system from both the software development and infrastructure perspective.&quot; &lt;/p&gt;
&lt;p&gt;Richardson says, &quot;Imagine a world where every time you do a deployment it&apos;s correct. And if it&apos;s not correct, then the deployment fails completely, so you can try again or make other intelligent decisions. … That is just an incredible cost-saver in operational overhead—moving from an unsafe, semireliable system to one that is basically more robust.&quot; &lt;/p&gt;
&lt;p&gt;But it&apos;s not magic. Simply storing your old operational patterns into Git won&apos;t get you anywhere. As Cornelia Davis, Weaveworks CTO, comments, &quot;&lt;a href=&quot;https://sdtimes.com/softwaredev/gitops-its-the-cloud-native-way/&quot;&gt;Just because you put something in Git doesn&apos;t make it GitOps.&lt;/a&gt; It isn&apos;t actually the central part of GitOps. Ops is the central part of GitOps.&quot; &lt;/p&gt;
&lt;p&gt;The biggest mistake people make about GitOps, Davis says, is people don&apos;t get that things are always correcting themselves and you always have to respond to change with reconciliation loops. You must think about this and use it in your approach or you&apos;ll just be recycling old mistakes in a new concept. &lt;/p&gt;
&lt;p&gt;Do it right, however, and you can expect to reap the following: &lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Increased CI/CD productivity. &lt;/li&gt;
&lt;li&gt;A better developer experience, by letting developers push code instead of managing containers. &lt;/li&gt;
&lt;li&gt;Improved stability, thanks to Git&apos;s audit log of Kubernetes cluster changes. &lt;/li&gt;
&lt;li&gt;Better reliability as a result of Git&apos;s built-in revert/rollback and fork from a single source of truth. &lt;/li&gt;
&lt;li&gt;And last but never least, improved cost efficiency from less downtime and improved productivity. &lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Sound good to you? It does to me. I predicted long before most people did that Kubernetes would become the dominant container orchestration platform. I&apos;m now going to go out on a limb and predict that GitOps, in turn, is going to become the way most of us will end up deploying programs to the cloud in the next few years. It just makes too much sense for it not to work. &lt;/p&gt;
&lt;p&gt;Tom Phelan, Fellow, big data and storage organization at Hewlett Packard Enterprise, contributed to this article. &lt;/p&gt;
&lt;h2&gt;Lessons for leaders &lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Cloud-native development does an excellent job of enabling new capabilities, including GitOps. &lt;/li&gt;
&lt;li&gt;GitOps enables more efficient processes and, ultimately, better service for customers than traditional approaches. &lt;/li&gt;
&lt;li&gt;One of the best reasons to give GitOps a chance is that it uses tools that developers already know. &lt;/li&gt;
&lt;/ul&gt;
&lt;br /&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;span style=&quot;color:grey; font-family:Arial; font-size:1em&quot;&gt; This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.&lt;/span&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;br /&gt;
&lt;h2&gt;About the author: &lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/insights/contributors/steven-j-vaughan-nichols.html&quot;&gt;Steven Vaughan-Nichols&lt;/a&gt; &lt;/p&gt;
&lt;p&gt; CEO, Vaughan-Nichols &amp;#x26; Associates &lt;/p&gt;
&lt;p&gt;Steven J. Vaughan-Nichols, a.k.a. sjvn, has been writing about technology and the business of technology since CP/M-80 was the cutting-edge PC operating system, 300bps was a fast Internet connection, WordStar was the state-of-the-art word processor, and we liked it. His work has been published in everything from highly technical publications (IEEE Computer, ACM NetWorker, Byte) and business publications (eWeek, InformationWeek, ZDNet) to popular technology magazines (Computer Shopper, PC Magazine, PC World) and the mainstream press (Washington Post, San Francisco Chronicle, Businessweek). &lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE GreenLake for Private Cloud Enterprise: Exploring a flexible infrastructure resource pool ]]></title><description><![CDATA[Introduction  Welcome to the HPE GreenLake for Private Cloud Enterprise (GLPCE) blog series: Showcasing how GLPCE features address business…]]></description><link>https://developer.hpe.com/hpe-greenlake-for-private-cloud-enterprise--Exploring-a-flexible-infrastructure-resource-pool/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-greenlake-for-private-cloud-enterprise--Exploring-a-flexible-infrastructure-resource-pool/</guid><pubDate>Mon, 18 Sep 2023 10:08:35 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;h2&gt;Introduction &lt;/h2&gt;
&lt;p&gt;Welcome to the HPE GreenLake for Private Cloud Enterprise (GLPCE) blog series: Showcasing how GLPCE features address business challenges. In the series I delve deep into the advanced capabilities of modern cloud infrastructure, illuminating the path to heightened innovation and sustainable growth for organizations. By consistently offering up-to-date insights and expert perspectives, I aim to be your trusted guide in this rapidly shifting digital landscape. Join me in this exploration, and together, we&apos;ll chart the course for the next era of enterprise cloud technology. &lt;/p&gt;
&lt;p&gt;Enterprises today face a complex data landscape due to the rise of edge technologies. They deal with data that originates from and is processed in data centers, at the edge, or a combination of both. As more data is produced at the edge, the importance of local processing grows. This, combined with the challenges of private cloud scalability and costs, means organizations need to adopt more streamlined and efficient approaches. HPE GreenLake for Private Cloud Enterprise (GLPCE) offers a solution. GLPCE blends the strengths of public and private clouds. While it offers the flexibility and scalability demonstrated by public clouds, it also retains the advantages of private cloud implementations, such as data locality, dedicated infrastructure, and the capacity to integrate with and capitalize on pre-existing on-premises IT infrastructure investments. By using standardized instance types, GLPCE ensures that organizations maintain agility without compromising on cost-effectiveness.  &lt;/p&gt;
&lt;p&gt;HPE GreenLake for Private Cloud Enterprise (GLPCE) offers a robust solution with its ability to flexibly allocate resources, prevent bottlenecks, and ensure cost-aware scalability. A standard GLPCE setup may include a single- or multi-rack configuration with various nodes, such as 4 from the bare metal pool with a mix of general purpose and memory optimized instance types, 8 dedicated to a virtual machine pool, and 4 for container tasks, emphasizing its versatile deployment capabilities.&lt;/p&gt;
&lt;p&gt;HPE GreenLake for Private Cloud Enterprise (GLPCE) provides a variety of instance types, such as:  &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;General purpose: a good balance of compute, memory, and networking resources, and  suitable for a wide variety of workloads. &lt;/li&gt;
&lt;li&gt;Compute optimized: designed for workloads that require high CPU performance, such as web servers, CI/CD pipelines, and container and VM orchestration. &lt;/li&gt;
&lt;li&gt;Memory optimized: designed for workloads that require high memory performance, such as in-memory databases and analytics. &lt;/li&gt;
&lt;li&gt;Storage optimized: designed for workloads that require high storage performance, such as data lakes and Splunk. &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In this first blog post, I will delve into the challenges organizations face in the current digital landscape. While private clouds offer notable advantages like geo-locality and enhanced performance, they also come with the added complexity of managing scalability, costs, and agility. Addressing this, HPE GreenLake for Private Cloud Enterprise (GLPCE) emerges as a solution that blends the flexibility of public clouds with the specific controls of private ones, ensuring an infrastructure that&apos;s both agile and cost-efficient. A standout aspect of GLPCE is its flexible infrastructure resource pool, designed for immediate resource allocation, enabling organizations to swiftly adjust to varying digital traffic demands. Furthermore, as a managed solution, GLPCE alleviates the administrative burden, allowing businesses to focus on their core operations while enjoying the benefits of an efficient cloud infrastructure.&lt;/p&gt;
&lt;p&gt;Join me in this first post of a three-part series on HPE GreenLake for Private Cloud Enterprise (GLPCE). I will dive deep into GLPCE, spotlighting its unique strengths. With each post, you can gain valuable insights into the future of enterprise cloud technology. Step in, and let&apos;s explore together. &lt;/p&gt;
&lt;h2&gt;Features &lt;/h2&gt;
&lt;p&gt;Navigating cloud computing complexities calls for a solution that&apos;s efficient and user-friendly. HPE GreenLake for Private Cloud Enterprise (GLPCE) meets these needs, providing features designed to simplify and enhance cloud functionalities.  &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Flexible resource allocation: GLPCE enables targeted allocation and release of computing instance types from a dynamic infrastructure resource pool based on demand. &lt;/li&gt;
&lt;li&gt;Prevention of bottlenecks: GLPCE allows for adjusting resources for each service based on current needs, ensuring smooth operations, and preventing potential service interruptions or slowdowns. &lt;/li&gt;
&lt;li&gt;Cost-awareness: Through GLPCE&apos;s Consumption Analytics Service, administrators gain a clear insight into both private and public cloud costs, enabling a more streamlined approach to budgeting and cost management. &lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Allocation / Deallocation &lt;/h2&gt;
&lt;p&gt;HPE GreenLake for Private Cloud Enterprise (GLPCE) supports resource allocation in line with expected workloads. If a cluster has idle resources, they can be redirected to areas of higher demand. GLPCE offers consumption analytics and ongoing monitoring tools to ensure efficient resource utilization. I&apos;ll illustrate this concept with the screenshots that follow.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/blog1.png&quot; alt=&quot;Screenshot 1: Initial allocation &quot; title=&quot;Screenshot 1: Initial allocation &quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;span style=&quot;color:grey; font-family:Arial; font-size:1em&quot;&gt;Screenshot 1: Initial allocation &lt;/span&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Resource allocation strategy&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Initial allocation&lt;/strong&gt;: As seen in screenshot 1, the initial resource allocation process is primarily based on anticipated workload demands. When preparing for extensive data-processing tasks, customers can allocate a higher number of memory-optimized (M2i) instance types to Cluster 2 within the virtualization resource pool. On the other hand, for everyday, routine operations, Cluster 1 is typically outfitted with general purpose (G2i) instance types to ensure smooth and efficient operations. This allocation strategy aims to match resource types with the specific requirements of each cluster. &lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/blog2.png&quot; alt=&quot;Screenshot 2: Reallocation &quot; title=&quot;Screenshot 2: Reallocation &quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;span style=&quot;color:grey; font-family:Arial; font-size:1em&quot;&gt;Screenshot 2: Reallocation &lt;/span&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;strong&gt;Reallocation&lt;/strong&gt;: As illustrated in screenshot 2, there are times when instances within the bare metal resource pool remain unused. These idle resources are not merely dormant; they hold value. These instance types can be quickly reallocated to any virtualization cluster based on its current needs. Such dynamic reallocation ensures optimal resource utilization, adapting promptly to varying workload demands.&lt;/p&gt;
&lt;h2&gt;Tune resource allocations &lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Monitoring for efficiency/capacity&lt;/strong&gt;: Using HPE GreenLake for Private Cloud Enterprise, you can continuously monitor the utilization rates across all three services: bare metal, virtual machines, and containers. To understand how such resource utilization is tracked and assessed, check out the next section on Capacity planning and monitoring.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/picture3-20-sep-2023.png&quot; alt=&quot;Screenshot 3: Responsive adjustments &quot; title=&quot;Screenshot 3: Responsive adjustments &quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;span style=&quot;color:grey; font-family:Arial; font-size:1em&quot;&gt;&lt;em&gt;Screenshot 3: Responsive adjustments&lt;/em&gt;  &lt;/span&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;strong&gt;Responsive adjustments&lt;/strong&gt;: Referring to screenshot 3, observe that if Cluster 2 possesses idle memory optimized (M2i) instances, they can be shifted from Cluster 2 either to the bare metal pool or to Cluster 1, depending on demand. Conversely, if Cluster 1 has surplus general purpose (G2i) units, they can be relocated in a similar manner. Such adaptability ensures that resource utilization is consistently maximized, preventing them from sitting unused when they could be beneficial elsewhere.&lt;/p&gt;
&lt;h2&gt;Benefits &lt;/h2&gt;
&lt;p&gt;HPE GreenLake for Private Cloud Enterprise (GLPCE) stands out with its versatile offerings: &lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Flex capacity for varied workloads&lt;/strong&gt;: Adapts to any challenge, be it data-heavy tasks needing memory-optimized solutions or general-purpose assignments. The infrastructure&apos;s flexibility allows you to dynamically allocate resources from different instance types, ensuring you&apos;re always prepared for varying workloads. &lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Cost efficiency&lt;/strong&gt;: The intelligent utilization of varied instance types avoids redundant costs by preventing over-provisioning, leading to tangible savings. &lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Performance enhancement&lt;/strong&gt;: No matter the environment - bare metal, VM, or container - resources are meticulously tuned to guarantee top-tier operational efficiency. &lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;For a Cloud or an IT admin, this translates to: &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Resource agility&lt;/strong&gt;: Resources are allocated quickly and as per demand, ensuring seamless coding and testing phases. &lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Optimized app development&lt;/strong&gt;: Tailored resources mean your applications run at their prime, ensuring your work shines. &lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Simplified environment transitions&lt;/strong&gt;: While GLPCE doesn&apos;t make transitioning between bare metal, VMs, and containers effortless, it does simplify the infrastructure side, ensuring that whatever environment you&apos;re developing for or operating in is backed by robust, flexible, and right-sized resources. &lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Informed application optimization&lt;/strong&gt;: With GLPCE’s optimized infrastructure, developers gain insights into the utilization and cost implications of their applications. This transparency empowers them to refine and optimize their apps, ensuring efficient resource use and cost-effectiveness during the application&apos;s lifecycle. &lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Performance assurance&lt;/strong&gt;: You develop with the certainty that applications will consistently deliver peak performance, enhancing the end-user experience. &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In wrapping up, HPE GreenLake for Private Cloud Enterprise is more than just a sophisticated enterprise infrastructure; it caters to both the developer and the admin. For developers, it simplifies workflows and magnifies their contributions. For admins, it offers tools and insights that align with organizational goals. The value GLPCE delivers turns developers and admins into advocates for its adoption within their organizations. This dual appeal makes GLPCE a prime selection for entities seeking contemporary, streamlined, and role-friendly infrastructure solutions.&lt;/p&gt;
&lt;h2&gt;Capacity planning &amp;#x26; monitoring &lt;/h2&gt;
&lt;p&gt;Here’s how you can plan and monitor capacity usage in HPE GreenLake for Private Cloud Enterprise (GLPCE) across the three services: &lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/picture5-19-sep-23.png&quot; alt=&quot;Screenshot 4: Bare metal service monitoring&quot; title=&quot;Screenshot 4: Bare metal service monitoring&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;span style=&quot;color:grey; font-family:Arial; font-size:1em&quot;&gt;&lt;em&gt;Screenshot 4: Bare metal service monitoring&lt;/em&gt; &lt;/span&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;p&gt;In Screenshot 4, the Capacity tab displays bare metal resources categorized by &quot;Compute Group, Site, and Instance Type.&quot; Users can set thresholds for CPU and memory usage. Yellow bars in the CPU and memory columns signify that resource usage is nearing the configured maximum threshold.&lt;/p&gt;
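&lt;p&gt;As a rough illustration of the kind of check behind those indicators, here is a small Python sketch that flags instance groups approaching a warning threshold. The instance data and the 80% threshold are invented example values, not figures from the product. &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Illustrative capacity check; the utilization figures and threshold are made-up examples.
WARN_THRESHOLD = 0.80

instance_groups = [
    {&quot;group&quot;: &quot;bm-general&quot;, &quot;site&quot;: &quot;site-1&quot;, &quot;type&quot;: &quot;G2i&quot;, &quot;cpu&quot;: 0.55, &quot;mem&quot;: 0.83},
    {&quot;group&quot;: &quot;bm-memory&quot;, &quot;site&quot;: &quot;site-1&quot;, &quot;type&quot;: &quot;M2i&quot;, &quot;cpu&quot;: 0.41, &quot;mem&quot;: 0.48},
]

for grp in instance_groups:
    for metric in (&quot;cpu&quot;, &quot;mem&quot;):
        if grp[metric] &gt;= WARN_THRESHOLD:
            print(f&quot;{grp[&apos;group&apos;]} ({grp[&apos;type&apos;]}, {grp[&apos;site&apos;]}): &quot;
                  f&quot;{metric} at {grp[metric]:.0%} is nearing its threshold&quot;)
&lt;/code&gt;&lt;/pre&gt;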
&lt;p&gt;&lt;img src=&quot;/img/picture6-19-sep-23.png&quot; alt=&quot;Screenshot 5: Virtual Machine service monitoring&quot; title=&quot;Screenshot 5: Virtual Machine service monitoring&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;span style=&quot;color:grey; font-family:Arial; font-size:1em&quot;&gt;&lt;em&gt;Screenshot 5: Virtual Machine service monitoring&lt;/em&gt;&lt;/span&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;p&gt;In Screenshot 5, the Capacity tab categorizes Virtual Machine service resources by &quot;Cluster and Instance Type.&quot; The screen displays four key metrics: CPU usage, CPU allocated, memory usage, and memory allocated.  &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;CPU and memory usage thresholds reflect active resource consumption by hosted apps/processes. By defining a range, we can optimize resources, preventing underuse or potential performance issues. &lt;/li&gt;
&lt;li&gt;CPU and memory allocated thresholds show reserved resources per cluster/instance type, regardless of activity. This ensures we identify over-allocated or under-allocated resources.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/picture7-19-sep-23.png&quot; alt=&quot;Screenshot 6: Container service monitoring&quot; title=&quot;Screenshot 6: Container service monitoring&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;span style=&quot;color:grey; font-family:Arial; font-size:1em&quot;&gt;&lt;em&gt;Screenshot 6: Container service monitoring&lt;/em&gt;&lt;/span&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;p&gt;In Screenshot 6, the Capacity tab displays Container service resources grouped by &quot;Cluster, Instance Type, and Site.&quot; GLPCE supports two container deployment models: on Virtual Machines and on bare metal. While allocated thresholds are predetermined, this interface allows for the customization of usage thresholds. Stay tuned for our upcoming blog post on container service scaling.&lt;/p&gt;
&lt;h2&gt;Conclusion &lt;/h2&gt;
&lt;p&gt;In conclusion, HPE GreenLake for Private Cloud Enterprise offers organizations greater agility through its flexible infrastructure resource pool and capacity planning. This not only addresses current needs but also prepares for future demands. Moreover, GLPCE&apos;s consumption analytics provide clear cost insights, facilitating informed financial decisions. In the digital age, choosing the right infrastructure is essential. With GLPCE, organizations gain both scalability and transparency in costs. Consider HPE GreenLake for Private Cloud Enterprise for a comprehensive infrastructure solution.&lt;/p&gt;
&lt;p&gt;Stay tuned for upcoming blog posts as I delve deeper into GLPCE&apos;s dynamic features, underscoring how they come together to create an agile and resilient foundation for the modern digital enterprise.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Closing the gap between High-Performance Computing (HPC) and artificial intelligence (AI)]]></title><description><![CDATA[Scientists and engineers face a number of technical hurdles to utilizing artificial intelligence (AI) techniques alongside high-performance…]]></description><link>https://developer.hpe.com/closing-the-gap-between-hpc-and-ai/</link><guid isPermaLink="false">https://developer.hpe.com/closing-the-gap-between-hpc-and-ai/</guid><pubDate>Fri, 15 Sep 2023 06:34:38 GMT</pubDate><content:encoded>&lt;p&gt;Scientists and engineers face a number of technical hurdles to utilizing artificial intelligence (AI) techniques alongside high-performance scientific computing applications. Recently, a multidisciplinary team of researchers at Hewlett Packard Enterprise (HPE), the Institute of Aerodynamics and Gas Dynamics of the University of Stuttgart, and the High-Performance Computing Center Stuttgart (HLRS), created Relexi to overcome these technical challenges and provide the community with a large-scale simulation framework that can be tailored to their use cases. This solution was built on top of another existing open-source library developed at HPE called &lt;a href=&quot;https://developer.hpe.com/platform/smartsim/home/&quot;&gt;SmartSim&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Researchers were looking for machine learning (ML)-based techniques that could help with common engineering problems, such as using a simulation of fluid flow to improve the performance of an airplane wing. Relexi was able to realize this by combining FLEXI, a computational fluid dynamics solver, and Tensorflow, one of the most popular machine learning packages, to create a reinforcement learning framework by which a computer can ‘learn’ how to optimize these types of problems.&lt;/p&gt;
&lt;p&gt;FLEXI was developed at the Institute for Aerodynamics and Gas Dynamics (IAG) in the lab for Numerical Methods in Fluid Mechanics of Prof. Dr. Ing. Andrea Beck over the course of many years, and is used for computationally demanding, high-resolution computational fluid dynamics (CFD) simulations. Engineers use the code, for example, to simulate the flow around an airfoil to optimize the performance of new airplane wing designs. Modeling such phenomena based on physical principles is currently only practical using powerful, highly parallelized supercomputers. Recently, however, researchers at the IAG have been exploring how machine learning could enhance FLEXI’s capabilities and make such simulations more accessible. Tested on HLRS’s Hawk supercomputer, this approach holds the potential to make high-performance simulation more accessible for state-of-the-art engineering applications.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Reinforcement learning makes simulations easier&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Turbulence is one of the most challenging problems for engineers to deal with because it exists at a variety of scales, all the way down to the molecular level. Normally, the simulations that represent the smallest structures of a turbulent flow are very computationally intensive and expensive. Additionally, there is no single formula that describes the effects of turbulence well in the types of coarser simulations that most engineers are practically restricted to. Traditionally, fluid dynamicists have needed to derive approximations using complex mathematical techniques, most of which form the foundation of countless PhD theses, with no clear ‘winner’ emerging in the field. More recently, researchers have tried using a machine learning approach called supervised learning that analyzes a training dataset to develop a turbulence model. This approach has also proved to be insufficient, because of discrepancies between the training set and the actual simulation environment, which are not easy to resolve this way.&lt;/p&gt;
&lt;p&gt;Scientists in the Beck Lab have been exploring whether another ML approach, called reinforcement learning, could help to overcome these limitations. In reinforcement learning, a machine learning model is not trained on a static dataset, but rather on data generated by the actual system it will later model. The team calls the new framework &lt;a href=&quot;https://github.com/flexi-framework/relexi&quot;&gt;Relexi&lt;/a&gt;, and in recent publications they have demonstrated its effectiveness in finding suitable turbulence models and making the simulations more accurate. Figure 1 below shows the general architecture of Relexi.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/blog-hpc-ai.png&quot; alt=&quot;Figure 1: General architecture of Relexi.&quot; title=&quot;Figure 1: General architecture of Relexi. To train the agent, Relexi launches simulations on distributed worker nodes during each iteration of the training process. The entire training process is configured by Relexi’s YAML configuration file. The communication between TensorFlow and FLEXI is implemented via the Orchestrator database and the SmartRedis (SR) Clients provided by SmartSim.&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;span style=&quot;color:grey; font-family:Arial; font-size:1em&quot;&gt;&lt;em&gt;Fig. 1: General architecture of Relexi. To train the agent, Relexi launches simulations on distributed worker nodes during each iteration of the training process. The entire training process is configured by Relexi’s YAML configuration file. The communication between TensorFlow and FLEXI is implemented via the Orchestrator database and the SmartRedis (SR) Clients provided by SmartSim.&lt;/em&gt;&lt;/span&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;p&gt;To combine HPC and AI, Relexi uses an iterative workflow. Simulation data from FLEXI are fed into a reinforcement learning algorithm, becoming training data that the program uses to optimize the parameters of the turbulence model. The turbulence model then predicts the eddy viscosity as input data for the larger FLEXI simulation, which once again generates data for another round of training. By running many FLEXI instances over multiple iterations, the optimization of the turbulence model eventually converges to a point where the simulation remains stable and accurate. The scientists can then be confident that their model is suitable for other types of applications.&lt;/p&gt;
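&lt;p&gt;The loop below is a purely schematic Python sketch of that iterative workflow. The functions, the reward, and the single model coefficient are invented placeholders used only to show the shape of the cycle; they are not the Relexi, FLEXI, or TensorFlow APIs. &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Schematic of the simulate-train-simulate cycle; every function here is a stand-in.
import random

def run_flexi(model_params):
    # Stand-in for a FLEXI large-eddy simulation using the current turbulence model.
    return [random.gauss(0.0, 1.0) for _ in range(10)]

def reward(flow_stats):
    # Stand-in for scoring the coarse simulation against reference data.
    return -sum(abs(x) for x in flow_stats)

def update_policy(params, episodes):
    # Stand-in for the TensorFlow policy update on the eddy-viscosity model.
    best = max(episodes, key=lambda ep: ep[&quot;reward&quot;])
    return best[&quot;params&quot;]

params = {&quot;c_s&quot;: 0.17}  # e.g. a Smagorinsky-like model coefficient (illustrative value)
for iteration in range(50):
    episodes = []
    for _ in range(8):  # many FLEXI instances launched per training iteration
        candidate = {&quot;c_s&quot;: params[&quot;c_s&quot;] + random.gauss(0.0, 0.01)}
        episodes.append({&quot;params&quot;: candidate, &quot;reward&quot;: reward(run_flexi(candidate))})
    params = update_policy(params, episodes)

print(params)
&lt;/code&gt;&lt;/pre&gt;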
&lt;h2&gt;&lt;strong&gt;SmartSim enables the implementation of hybrid AI/HPC workflows&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Beck and her colleagues initially developed a proof of this concept on a desktop computer, but it required 36 hours to perform just 1,000 iterations, not enough to achieve a stable and reliable turbulence model. To scale it up to simulate more realistic systems, they turned to HLRS, whose Hawk supercomputer, a 26-petaflop system installed by HPE in 2020, offered an ideal platform. Given the state of the art in implementing hybrid workflows, however, this approach was far from straightforward to put into practice.&lt;/p&gt;
&lt;p&gt;This is because traditional simulation and newer artificial intelligence approaches typically operate on completely different computing architectures. Except in rare situations, CFD codes like FLEXI are optimized for CPUs, are written in languages like Fortran, C, or C++, and are managed using parallel programming models like MPI and OpenMP. In contrast, AI methods run most effectively on GPUs, are based on other programming languages like Python, rely on libraries like TensorFlow or PyTorch, and use containers. Coupling legacy CFD codes with AI libraries is therefore one of the great challenges that high-performance computing is currently facing, particularly in a case like Relexi, where communication between HPC and AI needs to happen continuously in a single workflow.&lt;/p&gt;
&lt;p&gt;Hawk was initially built exclusively using CPUs, but in 2021 HLRS installed an expansion that includes 24 HPE Apollo 6500 Gen10 Plus systems with 192 NVIDIA A100 GPUs, offering 120 petaflops of AI performance. This expanded architecture now offers the capability to run hybrid workflows that combine traditional simulation with data science methods in a more efficient manner.&lt;/p&gt;
&lt;p&gt;A critical piece that was needed to make Relexi work, however, was an open-source library developed at HPE called SmartSim. This library provides a framework for efficiently moving data between legacy scientific applications developed for CPU architectures and newer data analytics methods requiring GPUs. In this case, the team used SmartSim to mediate efficient communication between FLEXI instances and the reinforcement learning program during runtime. Although SmartSim has previously been used for performing inference inside of applications at-scale in weather and climate simulations, this collaboration with the Beck Lab, HLRS and HPE was an early experiment to see how SmartSim could be used to construct a reinforcement learning solution within the context of a large-scale simulation.&lt;/p&gt;
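&lt;p&gt;The snippet below is a minimal sketch of the kind of in-memory exchange SmartSim enables, using the SmartRedis Python client. It assumes a SmartSim Orchestrator (the Redis-backed database in Figure 1) is already running at the given address, and the tensor name is invented for the example; the client constructor arguments may differ slightly between SmartRedis releases. &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Minimal SmartRedis exchange sketch; assumes an Orchestrator is reachable at this address.
import numpy as np
from smartredis import Client

client = Client(address=&quot;127.0.0.1:6379&quot;, cluster=False)

# A simulation-side process publishes its latest flow statistics ...
flow_stats = np.random.rand(64, 64)
client.put_tensor(&quot;flexi_flow_stats&quot;, flow_stats)

# ... and the training-side process reads them back for the next update.
received = client.get_tensor(&quot;flexi_flow_stats&quot;)
print(received.shape)
&lt;/code&gt;&lt;/pre&gt;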
&lt;p&gt;The implementation of SmartSim within FLEXI was relatively straightforward, as the programmers needed to change only 16 lines of code in FLEXI to make it work. The approach also scaled extremely well to 16 compute nodes (2,048 cores) on Hawk, with minimal drop-offs in performance. The team’s study demonstrated that by running higher numbers of simulations per training iteration, the turbulence model converged more quickly. They also showed that the more data that is produced, the better the resulting turbulence model will be.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Relexi could support other CFD software&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Relexi was developed to enhance FLEXI, but is designed to be modular so that other CFD software can be used as well. In this way, it holds great potential to support other hybrid applications that combine traditional high-performance computing with AI methods. The Beck Lab, HLRS, and HPE will continue developing Relexi, and plan to test it in the near future on a real-world simulation of the flow around an airplane wing.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Additional resources&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;SmartSim on GitHub: &lt;a href=&quot;https://github.com/CrayLabs/SmartSim&quot;&gt;https://github.com/CrayLabs/SmartSim&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;FLEXI and Relexi on GitHub:&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/flexi-framework/flexi&quot;&gt;https://github.com/flexi-framework/flexi&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/flexi-framework/relexi&quot;&gt;https://github.com/flexi-framework/relexi&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Related publications:&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Kurz M, Offenhäuser P, Beck A. 2023. &lt;a href=&quot;https://www.sciencedirect.com/science/article/abs/pii/S0142727X2200162X?via%3Dihub&quot;&gt;Deep reinforcement learning for turbulence modeling in large eddy simulations&lt;/a&gt;. Int J Heat Fluid Flow. 99: 109094.&lt;/p&gt;
&lt;p&gt;Kurz M, Offenhäuser P, Viola D, Shcherbakov O, Resch M, Beck A. 2022. &lt;a href=&quot;https://www.sciencedirect.com/science/article/pii/S1877750322002435&quot;&gt;Deep reinforcement learning for computational fluid dynamics on HPC systems&lt;/a&gt;. J Comp Sci. 65: 101884.&lt;/p&gt;
&lt;p&gt;Kurz M, Offenhäuser P, Viola D, Shcherbakov O, Resch M, Beck A. 2022. &lt;a href=&quot;https://www.softwareimpacts.com/article/S2665-9638(22)00106-3/fulltext&quot;&gt;Relexi — A scalable open source reinforcement learning framework for high-performance computing&lt;/a&gt;. Software Impacts 14: 100422.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Seamless data engineering for financial services ]]></title><description><![CDATA[Welcome to the three-part blog series showcasing the remarkable capabilities of HPE Ezmeral Unified Analytics through a real-world use case…]]></description><link>https://developer.hpe.com/seamless-data-engineering-for-financial-services/</link><guid isPermaLink="false">https://developer.hpe.com/seamless-data-engineering-for-financial-services/</guid><pubDate>Mon, 11 Sep 2023 13:56:53 GMT</pubDate><content:encoded>&lt;p&gt;Welcome to the three-part blog series showcasing the remarkable capabilities of HPE Ezmeral Unified Analytics through a real-world use case: Stock Market Prediction. In Part 1 of this series, I will delve into the data engineering aspect of the platform, exploring how it facilitates seamless data management and analysis.&lt;/p&gt;
&lt;p&gt;In Part 2 of the blog series, we will take you on a deep dive into the platform&apos;s ML/AI capabilities. Together, we will explore how the transformed data can be utilized for model building, leveraging Jupyter notebooks to perform interactive data exploration, pre-processing, and model training. Additionally, you will see how HPE Ezmeral Unified Analytics integrates seamlessly with MLflow for efficient model management and KServe for inference, allowing you to track and reproduce experiments easily.&lt;/p&gt;
&lt;p&gt;Finally, in Part 3 of the series, I will focus on automation using MLOps.  Now, let&apos;s embark on this exciting journey into the design and implementation of this cutting-edge solution.&lt;/p&gt;
&lt;h2&gt;What is HPE Ezmeral Unified Analytics?&lt;/h2&gt;
&lt;p&gt;HPE Ezmeral Unified Analytics software is a usage-based Software-as-a-Service (SaaS) platform that fully manages, supports, and maintains hybrid and multi-cloud modern analytics workloads through open-source tools. It goes beyond traditional analytics by seamlessly integrating machine learning and artificial intelligence capabilities, empowering users to develop and deploy data, analytics, and AI applications. By providing access to secure, enterprise-grade versions of popular open-source frameworks, the platform enables efficient and flexible scalability while securely accessing data stored in distributed data platforms. With its consistent SaaS experience, organizations can unlock data and insights faster, make data-driven predictions, and gain valuable business insights for faster decision-making, regardless of whether they operate on private, public, or on-premises infrastructure.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://lh6.googleusercontent.com/I8m-F21ZhxExbOUCZsUjrAlQ6i1oapby2gnfJcelgsVDgSXEtQt_hvOeQSLWBwXAPVydqEEwMjiC_C2_mG3d7EKFL6uN6igCaetumxr-PFPvHtBNa6DiMKgp-qSvan4WzHf3UfRRwhxPOvgsD9sN7gA&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;This use case involves leveraging external pricing server/rest API calls, which are streamed into the data lake/data warehouse of a cloud provider (Microsoft Azure) using Spark from HPE Ezmeral Unified Analytics. Let me demonstrate how this platform enables data analysis using EzPresto (an enterprise-supported version of Presto) and empowers the creation of live dashboards using Superset.&lt;/p&gt;
&lt;h2&gt;Step 1: &lt;strong&gt;Data Gathering&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;The data consists of stock prices of different companies listed on the National Stock Exchange (NSE) of India. The files contain historical data from the year 2000 to 2021, which was transformed into a streaming data source. The data was pulled from publicly hosted external servers and then saved to an HPE Ezmeral Data Fabric volume.&lt;/p&gt;
&lt;p&gt; &lt;img src=&quot;https://lh4.googleusercontent.com/EqEcRejQD_tZemLF6h9J5I6KIZrzTJdqeAkvGzDUI0KN-x4XD5FrNTRzz2N8VvmRVwhk5YG_RtpClmbctddZyhz7t47ooK8ZipEEsHWyuEk3sTKavVtUtskbBGi1tKrBSBCP2aGWkqMxTdtQIKjQdlo&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Step 2: &lt;strong&gt;Data Ingestion&lt;/strong&gt;&lt;/h2&gt;
&lt;h2&gt;Apache Livy&lt;/h2&gt;
&lt;p&gt;HPE Ezmeral Unified Analytics gives access to Apache Livy, which enables easy interaction with the Spark cluster via a REST interface and simplifies the interaction between the Spark cluster and application servers. It supports long-running Spark contexts that can be shared by multiple Spark jobs and multiple clients, and several such contexts can be managed on the Spark cluster at once. Spark applications can be either batch jobs or real-time streaming applications, as the business needs dictate. Because financial services run both long-running batch applications and streaming applications, Apache Livy provides seamless management of Spark for data engineers and the application support team.&lt;/p&gt;
&lt;p&gt;Apache Livy on the HPE Ezmeral platform enables programmatic, fault-tolerant, multi-tenant submission of Spark jobs from web/mobile apps (no Spark client needed). So, multiple users can interact with the Spark cluster concurrently and reliably. Livy speaks either Scala or Python, so clients can communicate with the Spark cluster via either language remotely. Also, batch job submissions can be done in Scala, Java, or Python.&lt;/p&gt;
&lt;p&gt;Livy enables easy submission of Spark jobs or snippets of Spark code, synchronous or asynchronous result retrieval, and Spark context management, all via a simple REST interface or an RPC client library.&lt;/p&gt;
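&lt;p&gt;As a rough illustration of that REST interface, the sketch below starts an interactive PySpark session and submits a code snippet with the &lt;em&gt;requests&lt;/em&gt; library. The Livy endpoint URL is a placeholder; inside HPE Ezmeral Unified Analytics the notebook magics described next normally take care of this plumbing for you. &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Illustrative Livy REST calls; the endpoint URL is a placeholder.
import json
import requests

livy = &quot;http://livy.example.internal:8998&quot;
headers = {&quot;Content-Type&quot;: &quot;application/json&quot;}

# Start an interactive PySpark session ...
session = requests.post(livy + &quot;/sessions&quot;, headers=headers,
                        data=json.dumps({&quot;kind&quot;: &quot;pyspark&quot;})).json()

# ... then submit a snippet of Spark code to it (poll the statement until it finishes).
stmt = requests.post(f&quot;{livy}/sessions/{session[&apos;id&apos;]}/statements&quot;, headers=headers,
                     data=json.dumps({&quot;code&quot;: &quot;spark.range(10).count()&quot;})).json()
print(stmt[&quot;id&quot;], stmt[&quot;state&quot;])
&lt;/code&gt;&lt;/pre&gt;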
&lt;p&gt;HPE Ezmeral Unified Analytics provides functions like %reload_ext sparkmagics and %manage_spark for seamless connection to the Spark cluster. %reload_ext sparkmagics loads the Spark session and authenticates the user for secured access to the Spark session. %manage_spark will create the Spark session with predefined Spark cluster configuration in the background.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://lh3.googleusercontent.com/uLQ2f7fgl0NeTwPaf9L9WvumSN4prHhtxZl7jdM1fdCKfOjGJmGqphn5DPqiUkDhP4-sNUEbpLlW_EuyEzx8zGx9lzuDPjrM6j6SGo6Rbm8LO1zWenZBpKQGfuNNk4IjG1gfZX5O__F1HBA4jv_pH-w&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Once the Livy session is enabled, the code can be run on the notebook servers.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://lh3.googleusercontent.com/NLB3_xj7PZ0W5w9JDXVvMGL3gHOZpV1pdWlMf2_toidQjxWAv7kRuY9n8AQRatl55Ht1ULl-v-BeiWB0TPJi-Pv4f-nwyFMEsjluZPUIzY-gXThHbzLGLs_3Nv7bjHD3EX-pV-1l4BZv3Geanr0VN9k&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Spark Streaming&lt;/h2&gt;
&lt;p&gt;Financial applications like real-time transaction processing, fraud detection, and trade matching and settlement systems are widely distributed and deal with a large volume and variety of data. These systems require parallel processing of transactions in a distributed computing architecture. Hence, Spark Streaming is well suited to financial applications such as stock market prediction analysis.&lt;/p&gt;
&lt;p&gt;Spark Streaming is a real-time data processing module in Apache Spark, a popular distributed computing framework for big data processing. It enables processing and analysis of live data streams in a scalable and fault-tolerant manner. Spark Streaming brings the power and flexibility of Spark&apos;s batch processing capabilities to real-time data streams, making it a versatile choice for various real-time data processing use cases.&lt;/p&gt;
&lt;p&gt;Micro-Batch Processing: Spark Streaming follows the micro-batch processing model, where it divides the continuous stream of data into small, discrete batches. Each batch of data is processed as an RDD (Resilient Distributed Dataset), which is Spark&apos;s fundamental data abstraction for distributed computing. This approach allows Spark Streaming to process data in mini-batches, providing low-latency processing and better resource utilization.&lt;/p&gt;
&lt;p&gt;Data Sources and Sinks: Spark Streaming can ingest data from various sources, including Kafka, Flume, Kinesis, HDFS, TCP sockets, and more. It supports a wide range of input formats, making it compatible with different streaming data pipelines. Similarly, Spark Streaming can write the processed data to various sinks, such as HDFS, databases (e.g., MySQL, Cassandra), and external systems.&lt;/p&gt;
&lt;h2&gt;Notebook servers&lt;/h2&gt;
&lt;p&gt;HPE Ezmeral Unified Analytics is equipped with notebook servers that can execute Python commands seamlessly along with scalable resources like CPUs, GPUs, and memory. Notebook servers can be spun up on Kubeflow using pre-defined Jupyter notebook images or custom-built notebook images based on your requirement. It will take a few minutes to bring the notebook server up and running. &lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/05-kubeflow-new-notebook-screen-shot.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Once it is available, you can connect to the notebook server either on HPE Ezmeral Unified Analytics Notebooks Tab or directly from the Kubeflow Notebooks.&lt;/p&gt;
&lt;p&gt; &lt;img src=&quot;https://lh6.googleusercontent.com/bY2CLk9ru69g0lTgPTpHECnv1oS_LZrRpeP7h3zN3-LlwQQMIAVE5LPeuD7aaOu0xPmrQN7tNwv_gqSYWfWwrTIrXUNe4U179lxbHUwOdlio0SeDohJB4nfan2IGYhg00xL8LylNoDVlV6rAUSoABFI&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;MySQL Database&lt;/h2&gt;
&lt;p&gt;A MySQL database was created and hosted in Microsoft Azure to capture the structured streaming data in a single table. The database server is configured to permit access only from selected IP addresses.&lt;/p&gt;
&lt;h2&gt;Step 3: &lt;strong&gt;Streaming data to database&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;The data is read from the HPE Ezmeral Data Fabric volume by the Spark Streaming engine at constant time intervals. The Spark engine converts the files into batches and performs data engineering steps such as transformations and aggregations on the data. Finally, the data is saved to the MySQL database over JDBC connections. All incoming files must share the same schema.&lt;/p&gt;
&lt;p&gt;3.1 Load the required Spark libraries.&lt;/p&gt;
&lt;p&gt;Once connected to the Livy server, the Spark connection is configured and managed internally by HPE Ezmeral Unified Analytics platform. Now you can directly import the required libraries and you’ll be ready to use Spark. &lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://lh5.googleusercontent.com/3-oBLpDhQ6DY4QJSn4K9-nR3fXB2IMtpQuphVxIcR9rd6SpSVaolvNg_xTp9VwUX726oJZJE8Cb9ii4xhrrJc-QtTee5xn2jes1qLDPnRVj3GKQUoWoCvl43IeACRSzCMJmVtDIOTK-p6CZ5eZFmj6s&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;3.2 Define the Data Schema&lt;/p&gt;
&lt;p&gt;Define the data schema for the data to stream in the application.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://lh6.googleusercontent.com/vwbcqYFB0FdwVTsb_lFDHyBZKBWRrv18B0P3BWRaHWG2q7tXS-nNTrUEjA_GnJVP5e5r2RnZZ3ds3GLCVTY0XZOj6L9WJtL-f3ll_COXW1YIW8PjfqcVNQGPE31ODOhhEFeAKLvGDvBi-_0v9r2z3eE&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
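&lt;p&gt;For readers who prefer text to screenshots, here is a minimal PySpark sketch of this step. The column names and types are examples based on typical NSE price files, not the exact schema used in the project. &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Illustrative schema for the streamed stock-price files (example columns only).
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, DateType

spark = SparkSession.builder.appName(&quot;nse-stock-stream&quot;).getOrCreate()

stock_schema = StructType([
    StructField(&quot;symbol&quot;, StringType(), True),
    StructField(&quot;trade_date&quot;, DateType(), True),
    StructField(&quot;open&quot;, DoubleType(), True),
    StructField(&quot;high&quot;, DoubleType(), True),
    StructField(&quot;low&quot;, DoubleType(), True),
    StructField(&quot;close&quot;, DoubleType(), True),
    StructField(&quot;volume&quot;, DoubleType(), True),
])
&lt;/code&gt;&lt;/pre&gt;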
&lt;p&gt;3.3 Read the input stream of files from the external server&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://lh5.googleusercontent.com/LRu4GZMlkTQZ8crHQBXBXd5aFtFDzATuVJXSXOpAcQDb3DSQ8NUH4PQuXAsATtrTcUON1j9quD28gFuzc_Bgj3er771mdrR62JtSKezBrcTlxPCwI8tVXseRg1pxeSxjh-ktolizmQV08H8u8_QaJ20&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
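&lt;p&gt;A hedged sketch of this step is shown below; the HPE Ezmeral Data Fabric volume path is a placeholder, and &lt;em&gt;spark&lt;/em&gt; and &lt;em&gt;stock_schema&lt;/em&gt; are the objects from the sketch in step 3.2. &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Illustrative structured-streaming read; the volume path is a placeholder.
raw_stream = (spark.readStream
              .schema(stock_schema)
              .option(&quot;header&quot;, &quot;true&quot;)
              .csv(&quot;file:///mnt/data-fabric/stock-feed/&quot;))
&lt;/code&gt;&lt;/pre&gt;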
&lt;p&gt;3.4 Write the output stream to the destination path.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://lh5.googleusercontent.com/aUcWv_pT4sW9AUowSlfnBHKBRy_BCebTyWO99epi3koEaw8srIYP5uLFQc6WKFUt9ZaFKCfrSS1ewWeWScU2-Tv65fHmcM2CPEGGEN8l7YZZv8zis_LEBq4qcr3qS2_8g3eBPFrqXdvLuXwhShdOL3Y&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
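&lt;p&gt;The sketch below shows one common way to land each micro-batch in MySQL over JDBC using &lt;em&gt;foreachBatch&lt;/em&gt;. The JDBC URL, table, credentials, and checkpoint path are placeholders, the MySQL JDBC driver must be available on the Spark classpath, and &lt;em&gt;raw_stream&lt;/em&gt; is the stream from the sketch in step 3.3. &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Illustrative micro-batch write to MySQL; connection details are placeholders.
def write_to_mysql(batch_df, batch_id):
    (batch_df.write
        .format(&quot;jdbc&quot;)
        .option(&quot;url&quot;, &quot;jdbc:mysql://example.mysql.database.azure.com:3306/stocks&quot;)
        .option(&quot;dbtable&quot;, &quot;nse_prices&quot;)
        .option(&quot;user&quot;, &quot;dbuser&quot;)
        .option(&quot;password&quot;, &quot;dbpassword&quot;)
        .mode(&quot;append&quot;)
        .save())

query = (raw_stream.writeStream
         .foreachBatch(write_to_mysql)
         .option(&quot;checkpointLocation&quot;, &quot;file:///mnt/data-fabric/checkpoints/nse&quot;)
         .start())
&lt;/code&gt;&lt;/pre&gt;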
&lt;p&gt;3.5 Read the data using Spark SQL and perform Exploratory Data Analysis.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://lh4.googleusercontent.com/gMBQ7IbeVKFi9YDjjVn485VXx8Kt-jcJvNBFV7HsWYjGqnSZzpdEKrgfvypRiTCUIYMVLlOag3FDjwhNFPdVJyZrXqqf9cOiLJkJ6T89MajBJccQizawt0lqo7gPrVM4DPTEbx9dJj5zR7BHVwbnFUs&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
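&lt;p&gt;A final hedged sketch for this step reads the landed table back and runs a quick aggregation with Spark SQL. Again, the connection details and column names are examples, not the project&apos;s exact values. &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Illustrative exploratory query over the landed MySQL table; details are placeholders.
history = (spark.read
           .format(&quot;jdbc&quot;)
           .option(&quot;url&quot;, &quot;jdbc:mysql://example.mysql.database.azure.com:3306/stocks&quot;)
           .option(&quot;dbtable&quot;, &quot;nse_prices&quot;)
           .option(&quot;user&quot;, &quot;dbuser&quot;)
           .option(&quot;password&quot;, &quot;dbpassword&quot;)
           .load())

history.createOrReplaceTempView(&quot;nse_prices&quot;)
spark.sql(&quot;&quot;&quot;
    SELECT symbol, round(avg(close), 2) AS avg_close, max(high) AS max_high
    FROM nse_prices
    GROUP BY symbol
    ORDER BY avg_close DESC
&quot;&quot;&quot;).show(10)
&lt;/code&gt;&lt;/pre&gt;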
&lt;h2&gt;Step 4: &lt;strong&gt;Connecting the database to HPE Ezmeral Unified Analytics&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;HPE Ezmeral Unified Analytics provides users with a quick and simple process to connect to external data sources such as Hive, Snowflake, Teradata, and other databases. Here a new data source connection is added, and the source is selected as MySQL. The connection is established once the JDBC connection URL, username, and password are validated.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://lh6.googleusercontent.com/BmmyC843T9xG9NukA_XqQ7-qxTD8S9SP3y1IH6UGnMflFmEYDMAN-J6-YgURJAuBFPC5fNkrXZBGKdHDthHo-nRqOQO_5R3W04FtYGq67yIiJRmCxQrBq0qZC-4lhzeD7wXe5-H8Bem1_X9EXDTCK2M&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;This connects the database to EzPresto, a distributed analytic query engine for big data that is integrated into HPE Ezmeral Unified Analytics. It lets users work with the database seamlessly, using SQL commands to insert, delete, update, and query records in its tables. The data can be accessed from a remote server or on HPE Ezmeral Data Fabric software. &lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://lh3.googleusercontent.com/EJZsyxnH6tUhlduTbkddolYL3QHc3QUmAihD_QmE8L0_p1j_y1DfgA8HE0ug1dF3RecvRuomELlrp7LtSYIqWuk6U2vs15MR6SvmnLz-1zTeZGf_v9iXzkCu43kyrEpucyHnBVRTvAXb7nff5uWXEiM&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Step 5: &lt;strong&gt;Visualization using Superset&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Apache Superset, a data visualization tool, has been integrated into HPE Ezmeral Unified Analytics. It helps with the graphical representation of data, from simple line charts to highly detailed geospatial charts. The dashboards give users a clear picture of the KPIs and other relevant business metrics.&lt;/p&gt;
&lt;p&gt;Here, a new dashboard is created in HPE Ezmeral Unified Analytics, and the connection to the database is established. Different visuals on the stock data are integrated into the dashboard, and it is customized to auto-refresh at a user-defined time interval. Once the data starts streaming, the dashboard updates the visuals periodically, and the latest data is available for analysis.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://lh6.googleusercontent.com/MvJQkbWinxA42eJ6J0-mOK-9sBNdh6JOeU0gMAplO308IyLPIFB1H7oweUZ9cTMngpkr4qyi9saWsQ9PaNFGSBxF8O9DE8HctUaqHmvDPGFmEL586UAySdnAW6b9KFxrC2vbUyiCP-WUvkeDjd5BJxI&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://lh3.googleusercontent.com/QmnYcBblPyGIxZ-6lElbDCn0lCcG7PcPqSxHCwFslo3IVG9ExQxvSWdvmB1s5W6uOeiCJMCmSEuIZSUBtF8gJxrHP1nIw8qPyUTZH3QEn4kvPbW-GyXfNLvP4bAAbdIgbtOg6MsccLM-USWpFuDeKJ8&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://lh3.googleusercontent.com/nQul3Zy2RuOYA7RXOqokiTIhCLQYDktS_VzCU5YBVLSjItmICHnRCQlLp_pzyLR1tnHKbvCcXLBnG7E7JeYfKh7pBC_OGGpxS5ewCKfi2IWvzXA7LVIq05Lk9DNFVdlc9EuqsxDN_xiPGrMsAWW8B38&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;In concluding Part 1 of this blog series, you’ve journeyed through the data engineering and analytics aspects of using Spark, EzPresto, and Superset, powered by HPE Ezmeral Unified Analytics. With a spotlight on assimilating external pricing data to craft a dynamic dashboard, I hope I have illuminated how this platform brings together best-of-breed open-source tools to transform complex data into valuable insights.&lt;/p&gt;
&lt;p&gt;Don&apos;t miss Part 2, where you’ll get to explore the machine learning capabilities of our platform. To get familiar with HPE Unified Analytics Software, &lt;a href=&quot;https://www.hpe.com/us/en/hpe-ezmeral-unified-analytics.html&quot;&gt;try it&lt;/a&gt; for free or visit our &lt;a href=&quot;https://docs.ezmeral.hpe.com/unified-analytics/11/index.html&quot;&gt;website&lt;/a&gt; for details. Let&apos;s unlock the future of analytics together!&lt;/p&gt;
&lt;p&gt;Contributors to this blog post include Suvralipi Mohanta (&lt;a href=&quot;mailto:suvralipi.mohanta@hpe.com&quot;&gt;suvralipi.mohanta@hpe.com&lt;/a&gt;), Harikrishnan Nair (&lt;a href=&quot;mailto:harikrishnan.nair@hpe.com&quot;&gt;harikrishnan.nair@hpe.com&lt;/a&gt;), and Joann Starke (&lt;a href=&quot;mailto:joann.starke@hpe.com&quot;&gt;joann.starke@hpe.com&lt;/a&gt;).&lt;/p&gt;
&lt;h3&gt;Overview&lt;/h3&gt;
&lt;p&gt;Using &lt;a href=&quot;https://www.hpe.com/us/en/greenlake/containers.html&quot;&gt;HPE GreenLake for Private Cloud Enterprise: Containers&lt;/a&gt;, customers can create services of type &lt;em&gt;NodePort&lt;/em&gt; for their workloads deployed in K8s clusters using the label &lt;em&gt;hpecp.hpe.com/hpecp-internal-gateway=true&lt;/em&gt;. The services are automatically exposed on a container platform gateway host with assigned ports, and the deployed workloads become accessible externally using the gateway host name and the assigned ports as access URLs.&lt;/p&gt;
&lt;p&gt;Unlike various public cloud providers, such as &lt;em&gt;GCP&lt;/em&gt;, &lt;em&gt;AWS&lt;/em&gt; and &lt;em&gt;Microsoft Azure&lt;/em&gt;, HPE GreenLake for Private Cloud Enterprise doesn’t support network load balancers by default. As a cluster administrator, after you create a K8s cluster, you can integrate the cluster with any load balancer in place to support K8s services of type &lt;em&gt;LoadBalancer&lt;/em&gt;. This blog post will show you how to use MetalLB to provide load balancing services for K8s clusters in HPE GreenLake for Private Cloud Enterprise, giving customers the flexibility to configure custom load balancers for their deployed workloads.&lt;/p&gt;
&lt;h3&gt;Prerequisites&lt;/h3&gt;
&lt;p&gt;Before starting, make sure you meet the following requirements:&lt;/p&gt;
&lt;style&gt; li { font-size: 100%; line-height: 23px; max-width: none; } &lt;/style&gt;
&lt;ul&gt;
&lt;li&gt;A K8s cluster, being provisioned in HPE GreenLake for Private Cloud Enterprise&lt;/li&gt;
&lt;li&gt;The kubectl CLI tool, together with the kubeconfig file for accessing the K8s cluster&lt;/li&gt;
&lt;li&gt;A range of virtual IP addresses. Those IP addresses should not be used in any existing K8s clusters. They will be assigned to the load balancer services.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Deploy MetalLB for load balancing&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://metallb.universe.tf/&quot;&gt;MetalLB&lt;/a&gt; is a software solution that provides a network load balancer implementation for Kubernetes clusters using standard routing protocols. By installing MetalLB, it will support the LoadBalancer services within the Kubernetes clusters.&lt;/p&gt;
&lt;p&gt;This section describes the detailed steps to deploy MetalLB and configure it to support the &lt;em&gt;LoadBalancer&lt;/em&gt; services in the Kubernetes clusters.&lt;/p&gt;
&lt;h4&gt;1. Deploy MetalLB&lt;/h4&gt;
&lt;p&gt;MetalLB can be deployed by applying the following YAML manifest file:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ MetalLB_RTAG=$(curl -s https://api.github.com/repos/metallb/metallb/releases/latest|grep tag_name|cut -d &apos;&quot;&apos; -f 4|sed &apos;s/v//&apos;)
$ echo $MetalLB_RTAG
0.13.10
$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v$MetalLB_RTAG/config/manifests/metallb-native.yaml
namespace/metallb-system created
customresourcedefinition.apiextensions.k8s.io/addresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bfdprofiles.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgpadvertisements.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgppeers.metallb.io created
customresourcedefinition.apiextensions.k8s.io/communities.metallb.io created
customresourcedefinition.apiextensions.k8s.io/ipaddresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/l2advertisements.metallb.io created
serviceaccount/controller created
serviceaccount/speaker created
role.rbac.authorization.k8s.io/controller created
role.rbac.authorization.k8s.io/pod-lister created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/controller created
rolebinding.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
secret/webhook-server-cert created
service/webhook-service created
deployment.apps/controller created
daemonset.apps/speaker created
validatingwebhookconfiguration.admissionregistration.k8s.io/metallb-webhook-configuration created
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The above command installs the latest MetalLB &lt;em&gt;v0.13.10&lt;/em&gt; to the K8s cluster. It first creates the namespace &lt;em&gt;metallb-system&lt;/em&gt;, sets up the role-based access control (&lt;em&gt;RBAC&lt;/em&gt;), creates a set of custom resource definitions (CRDs), and finally deploys a set of pods and services.&lt;/p&gt;
&lt;p&gt;You can check and confirm all pods and services are deployed successfully:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl get all -n metallb-system 
NAME                             READY   STATUS    RESTARTS   AGE
pod/controller-7967ffcf8-8lgwc   0/1     Running   0          37s
pod/speaker-24l42                1/1     Running   0          36s
pod/speaker-g2q9h                1/1     Running   0          36s
pod/speaker-kkmsj                1/1     Running   0          36s
pod/speaker-ss4w7                1/1     Running   0          36s
pod/speaker-xl7bv                1/1     Running   0          36s
pod/speaker-zfl7s                1/1     Running   0          36s

NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/webhook-service   ClusterIP   10.105.154.106   &amp;#x3C;none&gt;        443/TCP   38s

NAME                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/speaker   6         6         4       6            4           kubernetes.io/os=linux   37s

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/controller   0/1     1            0           38s

NAME                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/controller-7967ffcf8   1         1         0       38s
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;2. Define a range of IP addresses&lt;/h4&gt;
&lt;p&gt;After all MetalLB components are deployed, you can start creating and allocating a range of IP addresses, which can be used by MetalLB to assign IP addresses to services.&lt;/p&gt;
&lt;p&gt;The custom resource definition (CRD) &lt;em&gt;IPAddressPool&lt;/em&gt; will be used for defining the range of IP addresses. After it’s deployed to the cluster, all the IP addresses will be allocated for MetalLB to use.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ cat IPAddressPool.yaml 
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: cfe-pool
  namespace: metallb-system
spec:
  addresses:
  - 172.16.17.250-172.16.17.254

$ kubectl apply -f IPAddressPool.yaml 
ipaddresspool.metallb.io/cfe-pool created

$ kubectl get IPAddressPool -n metallb-system
NAME       AUTO ASSIGN   AVOID BUGGY IPS   ADDRESSES
cfe-pool   true          false             [&quot;172.16.17.250-172.16.17.254&quot;]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The above command creates an IP pool with the range 172.16.17.250-172.16.17.254. The addresses in the &lt;em&gt;IPAddressPool&lt;/em&gt; can also be specified in &lt;em&gt;CIDR&lt;/em&gt; notation, and &lt;em&gt;IPv6&lt;/em&gt; addresses are supported as well.&lt;/p&gt;
&lt;h4&gt;3. Announce the service IP addresses&lt;/h4&gt;
&lt;p&gt;Once the IP addresses are allocated, you must announce the service IPs. The &lt;a href=&quot;https://metallb.universe.tf/configuration/#announce-the-service-ips&quot;&gt;MetalLB configuration site&lt;/a&gt; lists the configuration approaches you can use to announce service IPs. The example below shows how to use &lt;em&gt;Layer 2&lt;/em&gt; mode to configure service IP addresses. This approach does not need any protocol-specific configuration, only IP addresses from the &lt;em&gt;IPAddressPool&lt;/em&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ cat L2Advertisement.yaml 
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
spec:
  ipAddressPools:
  - cfe-pool

$ kubectl apply -f L2Advertisement.yaml 
l2advertisement.metallb.io/example created

$ kubectl get L2Advertisement -n metallb-system
NAME      IPADDRESSPOOLS   IPADDRESSPOOL SELECTORS   INTERFACES
example   [&quot;cfe-pool&quot;]               
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Deploy Nginx app as the service type &lt;em&gt;LoadBalancer&lt;/em&gt;&lt;/h3&gt;
&lt;p&gt;As a sample web application, the &lt;em&gt;Nginx&lt;/em&gt; with the service type of &lt;em&gt;LoadBalancer&lt;/em&gt; will be deployed to the K8s cluster using the following YAML manifest file:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ cat nginx-deployment.yaml 
apiVersion: v1
kind: Service
metadata:
  name: cfe-nginx-app
  labels:
    app: nginx-app
spec:
  type: LoadBalancer
  ports:
  - port: 80
    name: http
  selector:
    app: nginx-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-app
  name: cfe-nginx-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      volumes:
      - name: webdata
        emptyDir: {}
      initContainers:
      - name: web-content
        image: busybox
        volumeMounts:
        - name: webdata
          mountPath: &quot;/webdata&quot;
        command: [&quot;/bin/sh&quot;, &quot;-c&quot;, &apos;echo &quot;&amp;#x3C;h1&gt; Hi, this is the sample &amp;#x3C;font color=blue&gt;Nginx App&amp;#x3C;/font&gt; deployed as the Load Balancer service type !&amp;#x3C;/h1&gt;&quot; &gt; /webdata/index.html&apos;]
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: webdata
          mountPath: &quot;/usr/share/nginx/html&quot;

$ kubectl apply -f nginx-deployment.yaml 
service/cfe-nginx-app created
deployment.apps/cfe-nginx-app created
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can check the Nginx application deployment by typing the following command, using the label &lt;em&gt;app=nginx-app&lt;/em&gt;, and confirm all pods and services are in a running state. For the sample service &lt;em&gt;cfe-nginx-app&lt;/em&gt;, you should see that it’s been deployed as the &lt;em&gt;LoadBalancer&lt;/em&gt; type and that an IP address, &lt;em&gt;172.16.17.250&lt;/em&gt;, has been assigned as its &lt;em&gt;EXTERNAL-IP&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl get all -l app=nginx-app
NAME                                 READY   STATUS    RESTARTS   AGE
pod/cfe-nginx-app-66cb4f5bbf-4nfw5   1/1     Running   0          3m20s

NAME                    TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
service/cfe-nginx-app   LoadBalancer   10.98.244.64   172.16.17.250   80:31631/TCP   3m22s

NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/cfe-nginx-app   1/1     1            1           3m21s

NAME                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/cfe-nginx-app-66cb4f5bbf   1         1         1       3m22s
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To verify the deployed Nginx application is working, launch your web browser and open &lt;em&gt;&lt;a href=&quot;http://172.16.17.250&quot;&gt;http://172.16.17.250&lt;/a&gt;&lt;/em&gt;. The following should now show in your browser:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/web-nginx-app.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
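&lt;p&gt;Alternatively, you can check the service from any host that can reach the load balancer address, for example with &lt;em&gt;curl&lt;/em&gt;; you should get back the HTML page written by the init container:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ curl http://172.16.17.250
&amp;#x3C;h1&gt; Hi, this is the sample &amp;#x3C;font color=blue&gt;Nginx App&amp;#x3C;/font&gt; deployed as the Load Balancer service type !&amp;#x3C;/h1&gt;
&lt;/code&gt;&lt;/pre&gt;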
&lt;h3&gt;Summary&lt;/h3&gt;
&lt;p&gt;This blog post describes the detailed process used to deploy and set up MetalLB so that customers can configure load balancers for K8s clusters in HPE GreenLake for Private Cloud Enterprise. The load balancing configuration provides an externally accessible IP address that sends traffic to the deployed workload. It also allows customers to use &lt;a href=&quot;https://kubernetes.io/docs/concepts/services-networking/ingress/&quot;&gt;Kubernetes Ingress&lt;/a&gt;, in place of &lt;em&gt;Service&lt;/em&gt;, for more flexible traffic routing to their deployed workloads in K8s clusters, including deploying applications with certificates issued by the customer’s own certificate authority. This unblocks a number of potential use cases and enhances HPE GreenLake by providing additional flexibility.&lt;/p&gt;
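&lt;p&gt;As a minimal sketch of that Ingress option, assuming an ingress controller is already installed in the cluster, an Ingress resource could route HTTP traffic for a hypothetical hostname to the sample service deployed above:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cfe-nginx-ingress
spec:
  rules:
  - host: nginx.example.local   # hypothetical hostname for illustration
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: cfe-nginx-app
            port:
              number: 80
&lt;/code&gt;&lt;/pre&gt;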
&lt;p&gt;You can keep coming back to the &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE Developer blog&lt;/a&gt; to learn more about HPE GreenLake for Private Cloud Enterprise.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE NonStop DevOps]]></title><description><![CDATA[The DevOps approach has gained significant momentum over the past decade as today’s enterprises require shorter development and deployment…]]></description><link>https://developer.hpe.com/hpe-nonstop-devops/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-nonstop-devops/</guid><pubDate>Wed, 30 Aug 2023 13:50:59 GMT</pubDate><content:encoded>&lt;p&gt;The DevOps approach has gained significant momentum over the past decade as today’s enterprises require shorter development and deployment cycles to gain faster time to market and obtain fast feedback for improvement. DevOps helps in continual product rollouts by bridging silos and streamlining change management practices, improving not only  development efficiency but also ensuring consistency and reliability of deployments.&lt;/p&gt;
&lt;p&gt;Gartner forecasts a 5.1% increase in Worldwide IT Spending in 2023, with organizations focused on realizing operational efficiency, cost reductions and/or cost avoidance during the current economic uncertainty. One can infer from this that investments in DevOps and in improving DevOps practices are likely to increase. As HPE NonStop customers increasingly embrace DevOps, this is the right time to look at DevOps offerings for the HPE NonStop platform. And it matters now more than ever, as HPE NonStop is taking its steps towards becoming cloud ready!&lt;/p&gt;
&lt;p&gt;DevOps is a culture and a continuous process of improving efficiency. It can be applied to the various phases of software development, deployment, and post-deployment with the aid of tools and automation. While the culture aspect is organization specific, in this article, I will discuss the tool recommendations, automation, and artifacts that aid DevOps adoption for HPE NonStop.&lt;/p&gt;
&lt;h1&gt;HPE NonStop is DevOps ready!&lt;/h1&gt;
&lt;p&gt;In DevOps, you build and operate Continuous Integration/Continuous Delivery (CI/CD) pipelines. Continuous Integration includes phases such as plan, code, build, and test, while Continuous Delivery (CD) covers the continuous delivery and deployment aspects, including release, deployment, operation, and monitoring of the software. &lt;/p&gt;
&lt;p&gt;HPE NonStop is DevOps ready. This means that, for each of the phases, there are tool sets specific to the platform identified and workflows that can be automated to achieve efficient, repeatable, and reliable production ready software.&lt;/p&gt;
&lt;h1&gt;HPE NonStop DevOps starter kits&lt;/h1&gt;
&lt;p&gt;HPE NonStop supports multiple languages, and DevOps usage for HPE NonStop is classified by language (&lt;a href=&quot;https://github.com/HewlettPackard/NonStop/blob/main/nsdevops/images/CustomerUsageProfileClassification.jpg&quot;&gt;CustomerUsageProfileClassification&lt;/a&gt;). This classification is required not only for toolset recommendations, but also for demonstrating tool usage through starter kits. The starter kits are customer usage profile specific, ready to use, developer-friendly and production-ready. Each starter kit consists of a sample application (typically client/server), a set of pipeline scripts and a README file with usage instructions. These are hosted on GitHub at &lt;a href=&quot;https://github.com/HewlettPackard/NonStop/tree/main/nsdevops&quot;&gt;https://github.com/HewlettPackard/NonStop/tree/main/nsdevops&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The intent of the starter kit is to demonstrate pipelines and tools for a language usage profile. These reusable scripts and pipelines in the starter kit are an effortless way to get started with DevOps and try it out with customer applications.&lt;/p&gt;
&lt;p&gt;The starter kits currently available are:&lt;/p&gt;
&lt;style&gt;
table {
    display: block;
    width: 100%;
    width: max-content;
    max-width: 100%;
    overflow: auto;
     -webkit-box-shadow: none;
    -moz-box-shadow: none;
    box-shadow: none;
}
td {
   -webkit-box-shadow: none;
    -moz-box-shadow: none;
    box-shadow: none;
    border:1px solid grey;
    text-align: left !important;
    padding: 10px !important;
}
&lt;/style&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Name&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Description&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Location&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;C Starter Kit&lt;/td&gt;
&lt;td&gt;CI sample for C/C++-based applications built using cross-compilers&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/HewlettPackard/NonStop/tree/main/nsdevops/c&quot;&gt;C&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Java Starter Kit&lt;/td&gt;
&lt;td&gt;CI sample for Java-based applications built using off-platform compilers&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/HewlettPackard/NonStop/tree/main/nsdevops/java&quot;&gt;java&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Java JNI Starter Kit&lt;/td&gt;
&lt;td&gt;CI sample for polyglot Java and C applications built on-platform on HPE NonStop&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/HewlettPackard/NonStop/tree/main/nsdevops/javajni&quot;&gt;javajni&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Python Starter Kit&lt;/td&gt;
&lt;td&gt;CI sample for Python-based applications&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/HewlettPackard/NonStop/tree/main/nsdevops/python&quot;&gt;python&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CD Starter Kit&lt;/td&gt;
&lt;td&gt;Continuous Deployment using HPE NonStop Manageability Framework (NSMF) and Ansible&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/HewlettPackard/NonStop/tree/main/nsdevops/cd&quot;&gt;cd&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Most of the starter kits demonstrate continuous integration (CI). For continuous deployment, NSMF and Ansible-based deployments are used. While the CD starter kit is demonstrated with a Java application, the dev, test and production environment setup and concepts remain the same for applications of any language profile. The CD starter kit can also be extended to HPE NonStop system configuration management and HPE NonStop system administration activities.&lt;/p&gt;
&lt;h1&gt;Using the Starter Kits  &lt;/h1&gt;
&lt;p&gt;Developers new to DevOps should use the &lt;a href=&quot;https://github.com/HewlettPackard/NonStop/blob/main/nsdevops/HPE%20Nonstop%20Server-Modern%20DevOps-Instructions-for-CI-CD-Setup%20Documnet_v1.2.pdf&quot;&gt;HPE NonStop ModernDevOps - Instructions for CI-CD setup&lt;/a&gt; for preparing the environment to get started. If the enterprise already has a DevOps setup, go through the recommended setup (&lt;a href=&quot;https://github.com/HewlettPackard/NonStop/blob/main/nsdevops/images/RecommendedSetup.jpg&quot;&gt;Recommended Setup&lt;/a&gt;) and the instructions in the above guide to ensure HPE NonStop systems are configured to be accessible in the CI Tool.&lt;/p&gt;
&lt;p&gt;Getting started with the starter kits is easy.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;First download the starter kit from the HPE NonStop &lt;a href=&quot;https://github.com/HewlettPackard/NonStop/tree/main/nsdevops&quot;&gt;git repository&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Select the appropriate folder applicable to language profile.&lt;/li&gt;
&lt;li&gt;Upload the code from that folder to your organization’s Git repository. Go through the instructions in the README.md file of the starter kit and update the pipeline script to point to the right repository.&lt;/li&gt;
&lt;li&gt;Make appropriate changes based on the environment and commit them to the organization Git; the pipelines will then start executing (a command-line sketch of these steps follows this list).&lt;/li&gt;
&lt;/ul&gt;
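&lt;p&gt;For readers who prefer the command line, the steps above might look roughly like the following sketch. The organization repository URL and the chosen profile folder (java here) are placeholders to replace with your own values:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Fetch the starter kits from the HPE NonStop repository
git clone https://github.com/HewlettPackard/NonStop.git
cp -r NonStop/nsdevops/java my-nonstop-ci
cd my-nonstop-ci

# Push the selected starter kit to your own organization&apos;s Git server
# (the remote URL below is a placeholder)
git init
git add .
git commit -m &quot;Import HPE NonStop Java starter kit&quot;
git remote add origin https://git.example.com/myorg/my-nonstop-ci.git
git push -u origin main

# After updating the pipeline script as per the kit&apos;s README, further commits
# will trigger the pipeline in your CI tool
&lt;/code&gt;&lt;/pre&gt;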
&lt;p&gt;The scripts can be reused for applications in the same language profile with minor changes, such as modifying GIT repository and build steps, if any.&lt;/p&gt;
&lt;h1&gt;Starter Kits are cloud ready!&lt;/h1&gt;
&lt;p&gt;As more development environments are moving to public cloud, cloud vendors are offering DevOps services. HPE NonStop development environment (NSDEE) and cross-compilers and tools are also available on the public cloud now.&lt;/p&gt;
&lt;p&gt;The starter kits have ready-to-use pipelines for popular cloud vendors and their DevOps services, such as AzureDevOps and AWS CodeBuild, can be used to demonstrate how the HPE NonStop development and deployment can be integrated easily using those cloud services.&lt;/p&gt;
&lt;p&gt;For the platform-agnostic language profiles on HPE NonStop, such as Python and Java, follow the instructions given in the vendors’ DevOps documentation. Alternatively, specific scripts are provided in the starter kit. In AWS CodeBuild, a buildspec.yml file present in the root folder of the repository will be used to automatically build the project. Similarly, when using Azure, create a build script or use the one provided with the starter kit.&lt;/p&gt;
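&lt;p&gt;As a rough illustration of that mechanism, a buildspec.yml for a Java-profile project might look like the sketch below. The phases follow standard AWS CodeBuild syntax, but the runtime and build commands are placeholders; treat the starter kit&apos;s own buildspec as authoritative:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;version: 0.2

phases:
  install:
    runtime-versions:
      java: corretto11   # placeholder JDK choice
  build:
    commands:
      # placeholder build step; the starter kit defines the exact commands
      - mvn -B package
artifacts:
  files:
    - target/*.jar
&lt;/code&gt;&lt;/pre&gt;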
&lt;p&gt;The C, Python and Java starter kits are cloud ready. Currently, support is provided for AWS CodeBuild and AzureDevOps. In the future, the HPE NonStop development team will include other cloud vendor specific scripts and integrations in the starter kits.&lt;/p&gt;
&lt;p&gt;DevOps is supported on HPE NonStop, and starter kits are here to help you get started. Go, explore the HPE NonStop DevOps starter kits now!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[A guide to connecting Tableau with the HPE NonStop Database SQL/MX v2]]></title><description><![CDATA[HPE NonStop is a platform and an operating environment that turns your applications into fault-tolerant apps. Trusting in over 40 years of…]]></description><link>https://developer.hpe.com/a-guide-to-connecting-tableau-with-the-hpe-nonstop-database-sql-mx-v2/</link><guid isPermaLink="false">https://developer.hpe.com/a-guide-to-connecting-tableau-with-the-hpe-nonstop-database-sql-mx-v2/</guid><pubDate>Mon, 28 Aug 2023 11:15:01 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;p&gt;HPE NonStop is a platform and an operating environment that turns your applications into fault-tolerant apps. Trusting in over 40 years of continued evolution and developments on the platform, financial institutions, payment processors, manufacturers and retailers continue to put their most mission-critical workloads on HPE NonStop.&lt;/p&gt;
&lt;p&gt;HPE NonStop offers a modern RDBMS, with interfaces, tools, and experts available to lead the integration into customers’ IT environments.&lt;/p&gt;
&lt;p&gt;HPE NonStop SQL/MX is an HPE-patented database that inherits the fault-tolerant design and coherent integration between OS and the database, and adopts industry-standard features and tools.&lt;/p&gt;
&lt;h2&gt;What does it mean to adopt industry standard features and tools?&lt;/h2&gt;
&lt;p&gt;This means that software and tools programmed to use industry-standard &lt;a href=&quot;https://insightsoftware.com/blog/what-is-odbc/&quot;&gt;Open Database Connectivity drivers (ODBC)&lt;/a&gt; and &lt;a href=&quot;https://www.ibm.com/docs/en/informix-servers/12.10?topic=started-what-is-jdbc&quot;&gt;Java Database Connectivity (JDBC) drivers&lt;/a&gt; will be able to connect to the HPE NonStop database.&lt;/p&gt;
&lt;p&gt;There are many data visualization and analytics tools that provide such connectors, and, in this tutorial, we will be connecting the SQL/MX database to Tableau, a leading data visualization tool used for data analysis and business intelligence.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/connectingtableau-tohpenonstopaql.png&quot; alt=&quot;Connecting Tableau to HPE NonStop&quot; title=&quot;Connecting Tableau to HPE NonStop&quot;&gt;&lt;/p&gt;
&lt;h2&gt;What is Tableau?&lt;/h2&gt;
&lt;p&gt;Tableau is an excellent data visualization and business intelligence tool used for reporting and analysing vast volumes of data, helping users create charts, graphs, and dashboards that assist in business decision making. It helps users see and understand their data through its intuitive interface and feature-packed application. Tableau has spent a decade as a leader in the Gartner Magic Quadrant for Analytics and Business Intelligence space, and remains one of the top business intelligence platforms for graduates just out of college and &lt;a href=&quot;https://www.tableau.com/solutions/customers&quot;&gt;top companies, such as LinkedIn and RedHat&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/sampletableaudashboard-withsql.png&quot; alt=&quot;Sample Tableau Dashboard with SQL/MX Database&quot; title=&quot;Sample Tableau Dashboard with SQL/MX Database&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Connecting Tableau to the SQL/MX database&lt;/h2&gt;
&lt;p&gt;This article serves as a companion to the &lt;a href=&quot;https://help.tableau.com/current/guides/get-started-tutorial/en-us/get-started-tutorial-home.htm&quot;&gt;existing Tableau Tutorial to their desktop client&lt;/a&gt;. The tutorial is quite good, so make sure you check it out and continue exploring Tableau’s features while being connected to your SQL/MX database.&lt;/p&gt;
&lt;p&gt;This document will provide specific guidance for connecting the Tableau software to the SQL/MX database.&lt;/p&gt;
&lt;h3&gt;Connection and ODBC prerequisites:&lt;/h3&gt;
&lt;p&gt;This tutorial assumes that HPE NonStop ODBC 3.x Unicode driver has already been installed. Check out the &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00045523en_us&amp;#x26;docLocale=en_US&quot;&gt;HPE NonStop ODBC/MX Client Drivers User Guide&lt;/a&gt; for more information on the driver.&lt;/p&gt;
&lt;p&gt;You can refer to this tutorial to get help in configuring your ODBC &lt;a href=&quot;https://shanice-abigail.medium.com/python-how-to-use-odbc-to-connect-hpe-nonstop-sql-mx-44ca90047eb3&quot;&gt;— Getting Prerequisites &gt; Configuring your ODBC&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This tutorial also assumes that on your host, HPE NonStop SQL/MX has been installed, MXCS is running, and a MXCS data source has been added and started. Check with your administrator for the IP address, port number etc. (If you’re the administrator you can find out more in this &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docLocale=en_US&amp;#x26;docId=emr_na-a00090054en_us&quot;&gt;manual&lt;/a&gt;).&lt;/p&gt;
&lt;h3&gt;Data model and database structure:&lt;/h3&gt;
&lt;p&gt;The data for this tutorial was taken from Tableau’s tutorial and desktop client for “Superstore”. The raw data can be downloaded from Tableau’s desktop client, or through this link - &lt;a href=&quot;https://github.com/shaniceabigail/medium-materials/raw/main/2022%20tableau%20demo/Superstore.xls&quot;&gt;Superstore.xls from Tableau’s Tutorial (Click to Download)&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Each Excel sheet is visualized as one table in the database. To import the data into SQL/MX, you need to:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Save each Excel sheet as a .csv file.&lt;/li&gt;
&lt;li&gt;Create tables in the database through MXCI (a simplified example follows this list).&lt;/li&gt;
&lt;li&gt;Import from OSS using the command:
&lt;strong&gt;/usr/tandem/sqlmx/bin/import&lt;/strong&gt; [catalog].[schema].[table name] &lt;strong&gt;-I&lt;/strong&gt; [table name]&lt;strong&gt;.csv&lt;/strong&gt;. For example: &lt;em&gt;/usr/tandem/sqlmx/bin/import catalog.schema.orders -I orders.csv&lt;/em&gt;&lt;/li&gt;
&lt;/ol&gt;
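&lt;p&gt;For step 2, the table definitions are ordinary ANSI DDL executed in MXCI. The statement below is a deliberately simplified, hypothetical version of the Orders table with only a few columns and guessed data types; your actual column list should follow the spreadsheet:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-sql&quot;&gt;-- Simplified, hypothetical example for illustration only
CREATE TABLE catalog.schema.orders
( order_id    VARCHAR(20)
, order_date  DATE
, ship_mode   VARCHAR(20)
, customer_id VARCHAR(20)
, region      VARCHAR(20)
, sales       DECIMAL(12,2)
);
&lt;/code&gt;&lt;/pre&gt;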
&lt;h2&gt;Connecting to the HPE NonStop server&lt;/h2&gt;
&lt;p&gt;Create a new workbook in Tableau and select the “Other Databases (ODBC)” option when choosing the type of connection to a server.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/newtableauconnection.png&quot; alt=&quot;New Tableau Connection&quot; title=&quot;New Tableau Connection&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Connection requirements&lt;/h3&gt;
&lt;p&gt;Here are the details you will need:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Database username&lt;/li&gt;
&lt;li&gt;Database password&lt;/li&gt;
&lt;li&gt;Catalog name&lt;/li&gt;
&lt;li&gt;Schema name&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/hpenonatopodbcdatasource.png&quot; alt=&quot;HPE NonStop ODBC DataSource&quot; title=&quot;HPE NonStop ODBC DataSource&quot;&gt;&lt;/p&gt;
&lt;p&gt;You will see another window prompt — make sure that you select the DSN (data source name) that you have registered in your ODBC configuration.&lt;/p&gt;
&lt;h3&gt;Server details and format of server connection string&lt;/h3&gt;
&lt;p&gt;Server attributes needed:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;IP address&lt;/li&gt;
&lt;li&gt;Port number&lt;/li&gt;
&lt;li&gt;Catalog name&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Format:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;TCP:[IP address]/[Port number]&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Database name: [Catalog name]&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/databaseserverdetails.png&quot; alt=&quot;Database server details&quot; title=&quot;Database server details&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Setting up tables for Tableau’s Superstore tutorial&lt;/h2&gt;
&lt;p&gt;If you’ve made it to this step, it means that your SQL/MX database has successfully connected to the Tableau software. Congratulations!&lt;/p&gt;
&lt;p&gt;Now you can configure the relationship between the tables in this database.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/succesfulconnectiontodb.png&quot; alt=&quot;Shows a successful connection to database&quot; title=&quot;Shows a successful connection to database&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Selecting tables for Superstore&lt;/h3&gt;
&lt;p&gt;Select the database and schema where you have created and populated the database. Click and drag the tables into the orange space indicated.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/dragtablesintothearea.png&quot; alt=&quot;Click and drag tables into the area&quot; title=&quot;Click and drag tables into the area&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Creating relationships between tables&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Start with the “Orders” table, and then the “People” table&lt;/li&gt;
&lt;li&gt;A line between the tables, and a box showing the relation between the tables will appear&lt;/li&gt;
&lt;li&gt;Check the fields used to link the 2 tables together&lt;/li&gt;
&lt;li&gt;Repeat with the “Returned” table and “Orders” table&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/relationshiporderstable-peopletable.png&quot; alt=&quot;Relationship between orders table and People table&quot; title=&quot;Relationship between orders table and People table&quot;&gt;&lt;/p&gt;
&lt;p&gt;Once completed, you should be all set. Now you can continue onto the Tableau tutorial &lt;a href=&quot;https://help.tableau.com/current/guides/get-started-tutorial/en-us/get-started-tutorial-drag.htm&quot;&gt;Step 2: Drag and Drop to take a first look&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Other data visualization tools that can work with SQL/MX&lt;/h2&gt;
&lt;p&gt;Tableau is not the only data visualization tool that works with HPE NonStop SQL/MX’s ODBC driver. Other applications such as Power BI, and even Excel, can also connect in a similar way to the HPE NonStop database.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/populardatavisualizationtools.png&quot; alt=&quot;Popular data visualization tools&quot; title=&quot;Popular data visualization tools&quot;&gt;&lt;/p&gt;
&lt;p&gt;It’s important to note that not only is HPE NonStop SQL/MX ANSI compliant, but it also adopts ODBC and JDBC standards, allowing effortless database connection with a state-of-the-art fault tolerance.&lt;/p&gt;
&lt;p&gt;Don’t forget to check back frequently on the &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE Developer Community portal&lt;/a&gt; to find more information on HPE NonStop.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Simplifying OpenAPI specification with reusable common data models]]></title><description><![CDATA[The OpenAPI specification (OAS)  is one of the most widely followed API contracts. It is language-agonistic.
With the help of the…]]></description><link>https://developer.hpe.com/sivayanama-simplifying-open-api-specification-with-reusable-common-data-models/</link><guid isPermaLink="false">https://developer.hpe.com/sivayanama-simplifying-open-api-specification-with-reusable-common-data-models/</guid><pubDate>Sat, 26 Aug 2023 08:06:19 GMT</pubDate><content:encoded>&lt;p&gt;The &lt;a href=&quot;https://www.openapis.org&quot;&gt;OpenAPI specification&lt;/a&gt; (OAS) is one of the most widely followed API contracts. It is language-agnostic.
With the help of the information found in the OpenAPI specification, clients can better understand APIs and how to invoke them without having access to the code or worrying about the implementation details.&lt;/p&gt;
&lt;p&gt;At times, this OpenAPI specification file can become too complex to manage and understand. In this article, I will discuss techniques to simplify the specification with loosely coupled, reusable data models.&lt;/p&gt;
&lt;h2&gt;Items found in the OpenAPI Specification&lt;/h2&gt;
&lt;p&gt;OpenAPI specification files contain many definitions. However, the entities listed below are typically the largest in terms of the number of lines and tend to be reused throughout the specification file.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;schemas&lt;/li&gt;
&lt;li&gt;pathitems&lt;/li&gt;
&lt;li&gt;parameters&lt;/li&gt;
&lt;li&gt;requestBodies&lt;/li&gt;
&lt;li&gt;responses&lt;/li&gt;
&lt;li&gt;headers&lt;/li&gt;
&lt;li&gt;examples&lt;/li&gt;
&lt;li&gt;links&lt;/li&gt;
&lt;li&gt;callbacks&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I will discuss four approaches to deal with these definitions.&lt;/p&gt;
&lt;h2&gt;Inline definition&lt;/h2&gt;
&lt;p&gt;With inline definition, the definition is inline right at the reference point, as shown below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;schema:
  type: object
  properties:
    id:
      type: string
    name:
      type: string
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Inline inside components object&lt;/h2&gt;
&lt;p&gt;The components object in the OpenAPI specification is the home of reusable object definitions. However, these defined objects must be explicitly referred to outside of the components section wherever required.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;components:
  parameters:
    tenantId:
      in: path
      name: tenantId
      schema:
        type: string
      required: true
      description: Describes the clientId or mspId of tenant
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;em&gt;$ref&lt;/em&gt; is one of the fixed fields in the schema. It is a string value that refers to other components in the OpenAPI document, internally and externally. The above defined &lt;em&gt;tenantId&lt;/em&gt; parameter can be referenced as shown below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;$ref: &apos;#/components/parameters/tenantId&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Externalized definition&lt;/h2&gt;
&lt;p&gt;Data models can be defined outside of the OpenAPI specification file using a $ref reference to an external definition. For example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;example-multiple-threshold-type-example-request:
   $ref: ./models/opsramp-monitoring-management/multiple-threshold-type-example-request-v1.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Content of multiple-threshold-type-example-request-v1.yaml&lt;/h3&gt;
&lt;p&gt;Please note that the request object definition should be defined within the &lt;em&gt;value:&lt;/em&gt; as shown below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;value:
  id: 1ecf993f-9b54-4ce3-9581-c365188f7e58
  name: OpsRamp Gateway Performance Template
  description: Monitors basic UNIX parameters like UNIXCPU, UNIXSTORAGE, UNIXUPTIME,
    UNIXMEMORY, UNIXLOAD and UNIXStats
  resourceType: DEVICE
  collectorType: OpsRamp Gateway
  status:
    state: ACTIVE
  generation: 2
  tags: Performance Monitors
  createdDate: &apos;2022-10-09T15:03:44+0000&apos;
  updatedDate: &apos;2022-10-09T15:23:40+0000&apos;
  scope: Client
  templateFamily: Performance Monitors Family
  notes: Sample notes related to performance monitors
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Externalized definition with local aliases&lt;/h2&gt;
&lt;p&gt;The above externalized definition can be further improved by defining local aliases. The local aliases can be used instead of repeating the relative path of the definition in every reference. In the example shown below, the external definition referenced with the $ref can then be referred to as &lt;em&gt;#/components/parameters/tenantId&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;components:
  parameters:
    tenantId:
      $ref: ./models/opsramp-monitoring-management/tenantId-v1.yaml
      # this can be referred to as #/components/parameters/tenantId
&lt;/code&gt;&lt;/pre&gt;
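&lt;p&gt;For completeness, the externalized &lt;em&gt;tenantId-v1.yaml&lt;/em&gt; file would simply contain the parameter definition shown earlier in the inline example, for instance:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;in: path
name: tenantId
schema:
  type: string
required: true
description: Describes the clientId or mspId of tenant
&lt;/code&gt;&lt;/pre&gt;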
&lt;h2&gt;Advantages of externalized definitions&lt;/h2&gt;
&lt;p&gt;Externalized definitions have many advantages over traditional inline definitions, such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Data models and definitions loosely coupled from the OpenAPI specification&lt;/li&gt;
&lt;li&gt;Reusable data models with common definitions&lt;/li&gt;
&lt;li&gt;Significantly smaller OpenAPI specification files&lt;/li&gt;
&lt;li&gt;OpenAPI specification files that are easier to manage and govern&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;In this article, I explained four techniques used to simplify the OpenAPI Specification using loosely coupled, reusable data model definitions and pointed out the advantages of these approaches. Check back for more articles on this and other subjects on the HPE Developer &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;blog&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Getting started with the HPE Data Services Cloud Console Powershell SDK]]></title><description><![CDATA[HPE provides the Data Services Cloud Console (DSCC), a SaaS-based cloud console, to eliminate silos and to reduce complexity by delivering a…]]></description><link>https://developer.hpe.com/getting-started-with-the-hpe-data-services-cloud-console-powershell-sdk/</link><guid isPermaLink="false">https://developer.hpe.com/getting-started-with-the-hpe-data-services-cloud-console-powershell-sdk/</guid><pubDate>Mon, 07 Aug 2023 09:01:09 GMT</pubDate><content:encoded>&lt;p&gt;HPE provides the Data Services Cloud Console (DSCC), a SaaS-based cloud console, to eliminate silos and to reduce complexity by delivering a unified management of HPE storage arrays.  It&apos;s UI may be sufficient for many customers to manage their HPE storage assets, but lately I was approached from multiple customers that were looking for an efficient way to manage their HPE storage assets with PowerShell scripts. When I started looking into the available options to access the DSCC API with PowerShell, I first found the &lt;a href=&quot;https://developer.hpe.com/blog/new-powershell-toolkit-available-for-managing-hpe-data-services-cloud-console/&quot;&gt;PowerShell toolkit for managing the HPE Data Services Cloud Console&lt;/a&gt; on the &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE Developer Community Portal&lt;/a&gt;. Working with this PowerShell toolkit, I realized that it is, on one hand, an honorable first approach by my colleagues to write such a toolkit, but it is not complete and not yet fully tested.
Hence, a different solution needed to be found.&lt;/p&gt;
&lt;p&gt;The DSCC provides a public OpenAPI 3.x spec that can be used to generate an SDK in your preferred language, as described for Python in this &lt;a href=&quot;https://developer.hpe.com/blog/get-started-building-dscc-api-client-libraries-for-python-using-openapi-generator/&quot;&gt;blog&lt;/a&gt;. Using the OpenAPI 3.x spec of the DSCC API has the benefit of providing a complete list of possible API operations. And once the procedure to generate the SDK is in place, a new SDK can easily be created whenever new features are exposed via the DSCC API.&lt;/p&gt;
&lt;p&gt;In this blog post, I will describe the steps I used to generate a DSCC PowerShell SDK and show how this SDK is used for daily tasks.&lt;/p&gt;
&lt;h1&gt;Creating the DSCC Powershell SDK&lt;/h1&gt;
&lt;p&gt;My first step to generate the DSCC PowerShell SDK was to download the current &lt;a href=&quot;https://console-us1.data.cloud.hpe.com/doc/api/v1/storage-api.yaml&quot;&gt;YAML&lt;/a&gt; or &lt;a href=&quot;https://console-us1.data.cloud.hpe.com/doc/api/v1/storage-api.json&quot;&gt;JSON&lt;/a&gt; OpenAPI 3.x spec of the DSCC API, which can be found on the &lt;a href=&quot;https://console-us1.data.cloud.hpe.com/doc/api/v1/&quot;&gt;Data Services Cloud API documentation&lt;/a&gt; page.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/dscc-api-spec.png&quot; alt=&quot;DSCC API Spec&quot; title=&quot;DSCC API Documentation&quot;&gt;&lt;/p&gt;
&lt;p&gt;Y﻿ou can also download the DSCC API spec YAML file with the following PowerShell command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;invoke-webrequest -uri  https://console-us1.data.cloud.hpe.com/doc/api/v1/storage-api.yaml -outfile storage-api.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I﻿ ensured that I had the latest Microsoft PowerShell 7 installed on the Microsoft Windows system that I used to generate the DSCC PowerShell SDK. The easiest way to get the latest PowerShell 7 installed is to use the winget utility.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;winget search Microsoft.PowerShell
winget install --id Microsoft.PowerShell --source winget
winget install --id Microsoft.PowerShell.Preview --source winget
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;There are multiple generators available to create an SDK out of the OpenAPI 3.x spec. In the end, I had the best results with &lt;a href=&quot;https://repo1.maven.org/maven2/org/openapitools/openapi-generator-cli/6.6.0/&quot;&gt;openapi-generator-cli-6.6.0.jar&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The minimum requirement to execute this JAR file is to have at least Java Runtime Environment (JRE) version 8 installed. I used the &lt;a href=&quot;https://learn.microsoft.com/en-us/java/openjdk/download&quot;&gt;Microsoft OpenJDK 17.0.8 LTS&lt;/a&gt; build as the JRE on my system. With all requirements in place, I simply created the DSCC PowerShell SDK with the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;java -jar openapi-generator-cli-6.6.0.jar generate -i storage-api.yaml -g powershell -o dscc-powershell-sdk 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;T﻿he output directory that I specified with the -o flag of the generate command contains the following files:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/openapigeneratoroutput.png&quot; alt=&quot;&quot; title=&quot;Openapi Generator Output&quot;&gt;&lt;/p&gt;
&lt;p&gt;After generating the DSCC PowerShell SDK, I needed to install the PowerShell modules (stored in the src directory) locally before I could use them in PowerShell scripts. The local installation was done simply by running the Build.ps1 script located in the openapi-generator-cli output directory specified with the -o flag.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;powershell Build.ps1
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;H﻿aving completed all these steps, I was ready to use the DSCC PowerShell SDK to automate daily DSCC tasks.&lt;/p&gt;
&lt;h1&gt;How to use the DSCC PowerShell SDK&lt;/h1&gt;
&lt;p&gt;T﻿he DSCC PowerShell SDK output directory contains a README.md file that gives a complete overview of all API Endpoints.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/dscc_ps_api_endpoints.png&quot; alt=&quot;&quot; title=&quot;DSCC PowerShell SDK API Endpoints&quot;&gt;&lt;/p&gt;
&lt;p&gt;T﻿o do what I did, you can scroll through the README.md file and use the links in the file to get to the detailed information for every available API endpoint. The detailed information includes usage examples for the commands, like the following one that is used for getting all storage systems, &lt;em&gt;Invoke-SystemsList&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;# general setting of the PowerShell module, e.g. base URL, authentication, etc
$Configuration = Get-Configuration
# Configure HTTP basic authorization: JWTAuth
$Configuration.Username = &quot;YOUR_USERNAME&quot;
$Configuration.Password = &quot;YOUR_PASSWORD&quot;

$Limit = 10 # Int32 | Number of items to return at a time (optional)
$Offset = 5 # Int32 | The offset of the first item in the collection to return (optional)
$Filter = &quot;name eq VEGA_CB1507_8400_2N_150&quot; # String | oData query to filter systems by Key. (optional)
$Sort = &quot;id asc,name desc&quot; # String | Query to sort the response with specified key and order (optional)
$Select = &quot;id&quot; # String | Query to select only the required parameters, separated by . if nested (optional)

# Get all storage systems
try {
    $Result = Invoke-SystemsList -Limit $Limit -Offset $Offset -Filter $Filter -Sort $Sort -Select $Select
} catch {
    Write-Host (&quot;Exception occurred when calling Invoke-SystemsList: {0}&quot; -f ($_.ErrorDetails | ConvertFrom-Json))
    Write-Host (&quot;Response headers: {0}&quot; -f ($_.Exception.Response.Headers | ConvertTo-Json))
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You may be wondering how I got from the example given in the README.md file to usable code. First of all, I needed a Client Id and Client Secret in order to open a connection to the DSCC. If you do not yet have a Client Id / Client Secret pair, you need to create one as described in the &lt;a href=&quot;https://developer.hpe.com/blog/api-console-for-data-services-cloud-console/&quot;&gt;Using HPE GreenLake Console&apos;s API Gateway for Data Services Cloud Console&lt;/a&gt; blog post.&lt;/p&gt;
&lt;p&gt;U﻿sing the Client Id / Client Secret pair, I was able to retrieve an Access Token that was valid for two hours.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;Import-Module -Name &apos;.\dscc-powershell-sdk\src\PSOpenAPITools&apos;

$Configuration = Get-Configuration
$Configuration.BaseUrl  = &apos;https://eu1.data.cloud.hpe.com&apos;
$Configuration.Username = &apos;Your Client Id&apos;
$Configuration.Password = &apos;Your Client Secret&apos;

$AuthUri = &quot;https://sso.common.cloud.hpe.com/as/token.oauth2&quot;
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12

$AuthHeaders =  @{  &apos;Content-Type&apos; = &apos;application/x-www-form-urlencoded&apos;
								}
$AuthBody    = [ordered]@{   &apos;grant_type&apos;    = &apos;client_credentials&apos;
							 &apos;client_id&apos;     = $Configuration.Username
							 &apos;client_secret&apos; = $Configuration.Password
						  }
try {
    $Configuration.AccessToken = ( invoke-restmethod -uri $AuthURI -Method Post -headers $AuthHeaders -body $AuthBody ).access_token
} catch { 
    Write-Host (&quot;Exception occurred when retrieving access token: {0}&quot; -f ($_.ErrorDetails | ConvertFrom-Json))
    Write-Host (&quot;Response headers: {0}&quot; -f ($_.Exception.Response.Headers | ConvertTo-Json))
    exit(-1)
}	
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As you can see, the Access Token, Client Id, Client Secret and the Base URL are stored in a global variable: $Configuration. The retrieved Access Token is valid for two hours and needs to be refreshed afterwards.&lt;/p&gt;
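&lt;p&gt;A minimal sketch of such a refresh, reusing the $AuthUri, $AuthHeaders and $AuthBody values defined above, could look like this; the 110-minute margin is an arbitrary choice to stay safely below the two-hour lifetime:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;# Remember when the token was issued...
$TokenIssued = Get-Date

# ...and, before any later API call, request a fresh token if the old one is close to expiring
if ((Get-Date) -gt $TokenIssued.AddMinutes(110)) {
    $Configuration.AccessToken = ( Invoke-RestMethod -Uri $AuthUri -Method Post -Headers $AuthHeaders -Body $AuthBody ).access_token
    $TokenIssued = Get-Date
}
&lt;/code&gt;&lt;/pre&gt;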
&lt;p&gt;Please note: the $Configuration.BaseUrl depends on the DSCC location. You will need to use the base URL given for your DSCC location. At the time of writing this blog, the three possible DSCC base URLs are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;E﻿urope:  &lt;a href=&quot;https://eu1.data.cloud.hpe.com&quot;&gt;https://eu1.data.cloud.hpe.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;U﻿SA:  &lt;a href=&quot;https://us1.data.cloud.hpe.com&quot;&gt;https://us1.data.cloud.hpe.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;A﻿PJ:   &lt;a href=&quot;https://jp1.data.cloud.hpe.com&quot;&gt;https://jp1.data.cloud.hpe.com&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you want a list of the storage systems that you have access to on the DSCC, you can use the &lt;em&gt;Invoke-SystemsList&lt;/em&gt; API endpoint as shown in the following example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;try {
    $Systems = (Invoke-SystemsList).items
    $SystemIds = @{}
	foreach($s in $Systems){
		$SystemIds += @{$s.name = $s.Id}
	}
} catch {
    Write-Host (&quot;Exception occurred when calling Invoke-DeviceType1SystemsList: {0}&quot; -f ($_.ErrorDetails | ConvertFrom-Json))
    Write-Host (&quot;Response headers: {0}&quot; -f ($_.Exception.Response.Headers | ConvertTo-Json))    
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After getting the systems list with the &lt;em&gt;Invoke-SystemsList&lt;/em&gt; command, I stored the system names and system IDs in a hash table because I needed the system ID for many of the API commands. Generally, the systems, volumes, hosts, initiators, etc. that you want to manipulate in your scripts are identified by their IDs. In other words, when you work with the DSCC API, you work with the ID of the object you want to change or delete.&lt;/p&gt;
&lt;p&gt;F﻿or instance, if you want to create a new volume, then you need to define the volume parameters, like name, size, etc., and the system where you want to create the volume. A new volume can be created with the &lt;em&gt;Invoke-VolumeCreate&lt;/em&gt; command that has two input parameters: the system ID and the volume parameters that are stored in a &lt;em&gt;CreateVolumeInput&lt;/em&gt; object using the &lt;em&gt;Initialize-CreateVolumeInput&lt;/em&gt; method.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;$SystemId = $SystemIds.Primera650
$CreateVolumeInput = Initialize-CreateVolumeInput -Comments &quot;DSCC API -Thomas Beha&quot; `
  -Count 1 `
  -DataReduction $true `
  -Name &quot;DSCC-API-Vol&quot; `
  -SizeMib 16384 `
  -SnapCpg &quot;SSD_r6&quot; `
  -UserCpg &quot;SSD_r6&quot;  
try {
	$Result = Invoke-VolumeCreate -SystemId $SystemId -CreateVolumeInput $CreateVolumeInput
} catch {
    Write-Host (&quot;Exception occurred when calling Invoke-CreateVolume: {0}&quot; -f ($_.ErrorDetails | ConvertFrom-Json))
    Write-Host (&quot;Response headers: {0}&quot; -f ($_.Exception.Response.Headers | ConvertTo-Json))	
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;T﻿he &lt;em&gt;Invoke-VolumeCreate&lt;/em&gt; command will return the information of the generated task. You can use the &lt;em&gt;Get-Task&lt;/em&gt; command to monitor the progress of this task.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;$Status = Get-Task $Result.taskUri
while( ($Status.progressPercent -lt 100) -and ($Status.state -ne &quot;FAILED&quot;)){
    $Status.progressPercent
    Start-Sleep -Seconds 5
    # Poll the same task again until it completes or fails
    $Status = Get-Task $Result.taskUri
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;These are the steps I used to create a new Volume on my HPE Primera 650 array with a name of DSCC-API-Vol and a size of 16GB.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/dscc-api-vol.png&quot; alt=&quot;&quot; title=&quot;DSCC-API-Vol UI view&quot;&gt;&lt;/p&gt;
&lt;h1&gt;S﻿ummary&lt;/h1&gt;
&lt;p&gt;In conclusion, the DSCC PowerShell SDK, derived from the DSCC OpenAPI 3.x spec, already includes 349 commands at the time of writing this blog that can be used to manage your storage with the DSCC API. At the moment, it is the most complete PowerShell tool for managing DSCC-managed HPE storage arrays. Since it is also easily updated with new features once they are available - simply recreate it with the most current DSCC OpenAPI 3.x spec - it is the recommended way for customers to build PowerShell scripts accessing the DSCC API.&lt;/p&gt;
&lt;p&gt;I﻿ hope the above example helps get you started with the DSCC PowerShell SDK. While I have not tested every possible command, I have not found any issue working with the SDK on the extensive daily task list received from my customers.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[A little light reading]]></title><link>https://developer.hpe.com/2023-August-01/</link><guid isPermaLink="false">https://developer.hpe.com/2023-August-01/</guid><pubDate>Tue, 01 Aug 2023 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Testing OpsRamp APIs with PyTest Fixtures]]></title><description><![CDATA[The OpsRamp platform provides a rich set of APIs.By using these APIs, customers can build solutions to automate various IT Operations…]]></description><link>https://developer.hpe.com/sivayanama-testing-opsramp-apis-with-pytest-fixtures/</link><guid isPermaLink="false">https://developer.hpe.com/sivayanama-testing-opsramp-apis-with-pytest-fixtures/</guid><pubDate>Wed, 26 Jul 2023 06:58:31 GMT</pubDate><content:encoded>&lt;p&gt;The OpsRamp platform provides a rich set of APIs.By using these APIs, customers can build solutions to automate various IT Operations Management (ITOM) workflows. These could include discovery and monitoring, event and incident management, or remediation and automation.Testing these APIs and the resulting automated solution is critical to ensure the reliability and stability of the workflow.&lt;/p&gt;
&lt;p&gt;PyTest is a powerful Python testing framework and it is widely used to test APIs. Fixtures are one of the most important capabilities of the PyTest framework. In this article, I will discuss some advanced techniques for testing OpsRamp APIs using PyTest fixtures.&lt;/p&gt;
&lt;h2&gt;What fixtures are&lt;/h2&gt;
&lt;p&gt;PyTest fixtures are a special type of Python function that provisions a fixed baseline for testing. With the help of this baseline, you can ensure tests are run reliably and produce consistent results. In addition, you can ensure that the same tests are repeatable.&lt;/p&gt;
&lt;h2&gt;Install PyTest&lt;/h2&gt;
&lt;p&gt;Run the below command.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;pip install -U pytest
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Verify PyTest installation&lt;/h2&gt;
&lt;p&gt;Run the below command.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;pytest --version
pytest 7.4.0
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Define fixture&lt;/h2&gt;
&lt;p&gt;You can define fixtures by decorating a simple Python function with &lt;a href=&quot;https://docs.pytest.org/en/6.2.x/reference.html#pytest.fixture&quot;&gt;@pytest.fixture&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Python Decorators are a very powerful and convenient way to alter the behaviour of functions. For more details, refer to &lt;a href=&quot;https://www.geeksforgeeks.org/decorators-in-python&quot;&gt;Python Decorators&lt;/a&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import pytest


@pytest.fixture
def hello_world():
    return &apos;Hello, Happy testing&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Invoke fixtures&lt;/h2&gt;
&lt;p&gt;Test functions can invoke fixtures just by declaring the required fixtures as arguments.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def test_hello_world(hello_world):
    assert hello_world == &apos;Hello, Happy testing&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Invoke OpsRamp API&lt;/h2&gt;
&lt;p&gt;As shown below, execute the OpsRamp &lt;a href=&quot;https://develop.opsramp.com/v2/api/auth/tenancy-auth-oauth-token&quot;&gt;&lt;em&gt;Get Access Token&lt;/em&gt; API&lt;/a&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import http.client
import json
import traceback
import urllib
from asyncio.log import logger
import pytest


def get_auth_header(api_endpoint, client_id, client_secret, grant_type=&quot;client_credentials&quot;):
    &quot;&quot;&quot;
    Returns bearer token string in the below format
    bearer adfa-afdaf-1599-402c-a3ee-1ed24f597cc8
    &quot;&quot;&quot;
    api_connection = http.client.HTTPSConnection(api_endpoint)
    params = {&quot;client_id&quot;: client_id, &quot;client_secret&quot;: client_secret, &quot;grant_type&quot;: &quot;client_credentials&quot;}
    payload = urllib.parse.urlencode(params)


    headers = {
        &apos;accept&apos;: &quot;application/json&quot;,
        &apos;content-type&apos;: &quot;application/x-www-form-urlencoded&quot;,
    }
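
    # The rest of this function is a sketch added for completeness: POST the
    # credentials to the token endpoint (path taken from the OpsRamp API docs
    # linked above) and build the bearer string from the standard OAuth2
    # access_token field of the JSON response.
    api_connection.request(&quot;POST&quot;, &quot;/tenancy/auth/oauth/token&quot;, payload, headers)
    response = json.loads(api_connection.getresponse().read())
    return &quot;Bearer &quot; + response[&quot;access_token&quot;]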
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Define fixture to get access token&lt;/h2&gt;
&lt;p&gt;You can transform the above function into a PyTest fixture by simply decorating the code as shown below.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;@pytest.fixture
def get_auth_header(api_endpoint, client_id, client_secret, grant_type=&quot;client_credentials&quot;):
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Autouse fixtures&lt;/h2&gt;
&lt;p&gt;Sometimes, some of your fixtures are required by all other tests. The above get access token fixture is the perfect use case for this, since every API call depends on it. In this case, you can designate the fixture as an “autouse fixture” by passing &lt;em&gt;autouse=True&lt;/em&gt; to the fixture’s decorator. The autouse fixture capability eliminates a lot of redundant requests.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;@pytest.fixture(autouse=True)
def get_auth_header(api_endpoint, client_id, client_secret, grant_type=&quot;client_credentials&quot;):
&lt;/code&gt;&lt;/pre&gt;
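&lt;p&gt;Note that for pytest to resolve the &lt;em&gt;api_endpoint&lt;/em&gt;, &lt;em&gt;client_id&lt;/em&gt; and &lt;em&gt;client_secret&lt;/em&gt; arguments, they need to be resolvable as fixtures themselves (or supplied in some other way, for example through a conftest.py). The following is only a minimal sketch of one option, reading them from environment variables whose names are made up for illustration:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import os

import pytest


@pytest.fixture
def api_endpoint():
    # Hypothetical environment variable names, used purely for illustration
    return os.environ[&quot;OPSRAMP_API_ENDPOINT&quot;]


@pytest.fixture
def client_id():
    return os.environ[&quot;OPSRAMP_CLIENT_ID&quot;]


@pytest.fixture
def client_secret():
    return os.environ[&quot;OPSRAMP_CLIENT_SECRET&quot;]
&lt;/code&gt;&lt;/pre&gt;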
&lt;h2&gt;Provisioning of third-party plugin fixtures&lt;/h2&gt;
&lt;p&gt;You do not necessarily have to define all fixtures in your own code. Fixtures can also be provided by third-party plugins, and you are free to use them; you just need to install the required plugins. You can see an example with the pytest-datadir plugin below.&lt;/p&gt;
&lt;p&gt;Install this plugin as shown below.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;pip install pytest-datadir
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;@pytest.fixture(autouse=True)
def get_bearer_token(shared_datadir,request):
    json_path = (shared_datadir / &quot;auth_header_input.json&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://lh4.googleusercontent.com/dacTgDdw17BzeyCitShA73WSip9LVtenQoNN-uraaN5tKEU5cA_xP3cEmNPWmTzU3A1HegdoOVvwPbyYqQuoLeEk4W766nIvpBdoTzUdIiT2dXiOQG0_h7atQWS7-T9qvRrieuhlEV84VS15ir11Ocw&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The get_bearer_token fixture uses the &lt;a href=&quot;https://pypi.org/project/pytest-datadir/&quot;&gt;shared_datadir&lt;/a&gt; fixture from a third-party plugin. The &lt;em&gt;shared_datadir&lt;/em&gt; fixture provisions the data folder as a &lt;em&gt;pathlib.Path&lt;/em&gt; object.&lt;/p&gt;
&lt;h2&gt;Fixture finalization&lt;/h2&gt;
&lt;p&gt;Usually, you do teardown or cleanup activities for each test. There are many good reasons for this, such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The executed tests will not impact other tests&lt;/li&gt;
&lt;li&gt;You do not want the testing environment piled up with tons of test data&lt;/li&gt;
&lt;li&gt;Every test should execute in a clean state&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Fixtures offer a very powerful teardown/cleanup capability known as finalization.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;@pytest.fixture()
def create_client(shared_datadir, get_bearer_token, request):
    client = Client()
    client.logger.debug(&quot;IN***&quot;)


    payload_file = (shared_datadir / &quot;create_client1.json&quot;)
    input_payload = client.http_helper.to_json(payload_file)


    # client_response=client.http_helper.do_post(url, input_payload, get_bearer_token)
    client_response = client.create_client(input_payload, get_bearer_token)


    def terminate_client():
        client.logger.debug(&quot;IN***&quot;)
        util = Utils()


        clientId = Client.clientId


        if len(str(clientId)) &gt; 0:
            oauth_token = get_bearer_token
            client.terminate_client(clientId, oauth_token)
        client.logger.debug(&quot;OUT***&quot;)


    request.addfinalizer(terminate_client)


    client.print_log(client_response)
    client.logger.debug(&quot;OUT***&quot;)
    return client_response


&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You should add the finalizer function to the request’s context object of the test as shown below:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;request.addfinalizer(terminate_client)&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;In the above example, you do all the test set-up on the client and execute the test. Once the test is complete, the finalizer function runs and cleans up the whole thing.&lt;/p&gt;
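&lt;p&gt;As a side note, newer pytest code often expresses the same teardown with a &lt;em&gt;yield&lt;/em&gt; fixture instead of &lt;em&gt;request.addfinalizer&lt;/em&gt;: everything after the yield runs as cleanup once the test finishes. A simplified sketch of the same pattern, reusing the helper names from the example above, could look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;@pytest.fixture()
def create_client(shared_datadir, get_bearer_token):
    client = Client()
    payload = client.http_helper.to_json(shared_datadir / &quot;create_client1.json&quot;)
    client_response = client.create_client(payload, get_bearer_token)

    # Hand the created client back to the test
    yield client_response

    # Teardown: runs after the test finishes, equivalent to the finalizer above
    if len(str(Client.clientId)) &gt; 0:
        client.terminate_client(Client.clientId, get_bearer_token)
&lt;/code&gt;&lt;/pre&gt;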
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;In this article, I have explained what PyTest fixtures are and how to define, invoke and implement fixtures. I also showed you how to leverage the testing of OpsRamp APIs using these fixtures and how to do teardown activities using the finalizer functions. Check back often on the &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE Developer Community blog&lt;/a&gt; to find more blog posts on OpsRamp.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Office of the CTO announces hiring at new Center of Excellence in Galway]]></title><description><![CDATA[At Hewlett Packard Enterprise (HPE), our purpose is to advance the way people live and work. And we invite you to join us in our mission…]]></description><link>https://developer.hpe.com/office-of-the-cto-announces-hiring-at-new-center-of-excellence-in-galway/</link><guid isPermaLink="false">https://developer.hpe.com/office-of-the-cto-announces-hiring-at-new-center-of-excellence-in-galway/</guid><pubDate>Tue, 25 Jul 2023 10:59:41 GMT</pubDate><content:encoded>&lt;p&gt;At Hewlett Packard Enterprise (HPE), our purpose is to advance the way people live and work. And we invite you to join us in our mission! Hewlett Packard Enterprise and the Office of the Chief Technology Officer recently announced hiring for several roles at a new Center of Excellence in Galway, Ireland. The new cloud R&amp;#x26;D center will help expand the capabilities of the HPE GreenLake edge-to-cloud platform.&lt;/p&gt;
&lt;h2&gt;HPE expands presence in Galway&lt;/h2&gt;
&lt;p&gt;The Office of the CTO is focused on leading HPE engineering teams to build differentiated inventions and co-inventions that drive secure, sustainable and transformative outcomes, guiding the industry into a more sustainable, equitable and efficient future through engineering design and innovation.&lt;/p&gt;
&lt;p&gt;This new cloud R&amp;#x26;D center is integral to the development of the HPE GreenLake platform, which is the foundation of HPE’s as-a-service offerings. The HPE GreenLake platform is used by our customers to accelerate data-first modernization with cloud services that can run on premise, at the edge, in a co-location facility, and in the public cloud.&lt;/p&gt;
&lt;p&gt;HPE already maintains a large presence in Galway and serves as the hub for Digital Services R&amp;#x26;D, HPE GreenLake Cloud Services &amp;#x26; Solutions R&amp;#x26;D, and Cyber Security in Europe.&lt;/p&gt;
&lt;h2&gt;The Office of the CTO wants you!&lt;/h2&gt;
&lt;p&gt;The new roles announced by the Office of the CTO will round out a team of software R&amp;#x26;D professionals who will support customers and partners with a unified hybrid cloud experience and easy access to cloud services.&lt;/p&gt;
&lt;p&gt;Now is your opportunity to join the team as we advance the platform. The new Center of Excellence will recruit top talent, from graduates to experienced technology professionals, across a range of roles including: architects, software engineers, product, engineering and project managers, scrum masters, researchers, user experience and test engineers, security specialists, data analysts and AI professionals.&lt;/p&gt;
&lt;h2&gt;Join us today!&lt;/h2&gt;
&lt;p&gt;At Hewlett Packard Enterprise, we seek candidates who are relentlessly curious and embrace courage over comfort. We prioritize candidates who have a strong desire to be a force for good and are dedicated to using technology to make the world better. We value diverse backgrounds and those with a variety of experiences.&lt;/p&gt;
&lt;p&gt;Our team members help create history, pushing the industry forward by helping to advance our customers’ and partners’ businesses in ways they could not previously have imagined.  If you’re looking for an opportunity that’s as challenging as it is rewarding, come join us.&lt;/p&gt;
&lt;p&gt;Check out our &lt;a href=&quot;https://careers.hpe.com/us/en/hpe-chief-technology-office&quot;&gt;hottest Office of the CTO employment opportunities&lt;/a&gt; here. You can also view more information using the following links:&lt;/p&gt;
&lt;p&gt;Global: &lt;a href=&quot;https://careers.hpe.com/hpe-octo&quot;&gt;https://careers.hpe.com/hpe-octo&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Ireland: &lt;a href=&quot;https://careers.hpe.com/us/en/hpe-octo-ireland-jobs&quot;&gt;https://careers.hpe.com/us/en/hpe-octo-ireland-jobs&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;India: &lt;a href=&quot;https://careers.hpe.com/us/en/hpe-octo-india&quot;&gt;https://careers.hpe.com/us/en/hpe-octo-india&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;US: &lt;a href=&quot;https://careers.hpe.com/us/en/hpe-octo-us&quot;&gt;https://careers.hpe.com/us/en/hpe-octo-us&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Ft. Collins (campaigns only): &lt;a href=&quot;https://careers.hpe.com/hpe-octo-ft-collins&quot;&gt;https://careers.hpe.com/hpe-octo-ft-collins&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Open sourcing Workshops-on-Demand part 4: Manage the Backend]]></title><description><![CDATA[In previous articles of this series dedicated to the open sourcing of our Workshops-on-Demand project, I covered the reasons why we open…]]></description><link>https://developer.hpe.com/open-sourcing-workshops-on-demand-part4-managing-the-backend/</link><guid isPermaLink="false">https://developer.hpe.com/open-sourcing-workshops-on-demand-part4-managing-the-backend/</guid><pubDate>Tue, 18 Jul 2023 09:28:35 GMT</pubDate><content:encoded>&lt;p&gt;In previous articles of this series dedicated to the &lt;a href=&quot;https://developer.hpe.com/blog/willing-to-build-up-your-own-workshops-on-demand-infrastructure/&quot;&gt;open sourcing of our Workshops-on-Demand project&lt;/a&gt;, I covered the reasons why we open sourced the project and how we did it. I also explained in detail how you could install your own Workshops-on-Demand backend server and took the time to detail the automation hosted on it. Today, I plan to describe the management of this backend server, which is often referred to as Day 2 operations.&lt;/p&gt;
&lt;p&gt;Once up and running, the main purpose of the backend server is to deliver Workshops-on-Demand. But to do so, it may require updates, upgrades, and/or new kernels for the JupyterHub server. If new workshops are created, you&apos;ll need new Jinja templates for the related workshops&apos; scripts (i.e., &lt;code&gt;create&amp;#x3C;WKSHP&gt;.sh&lt;/code&gt;, &lt;code&gt;cleanup&amp;#x3C;WKSHP&gt;.sh&lt;/code&gt;, &lt;code&gt;reset&amp;#x3C;WKSHP&gt;.sh&lt;/code&gt;, among others). This also means new variable files. And obviously, these templates and variables will need to be taken into account by scripts and notebooks. A set of tasks handles all of this, and that&apos;s what I&apos;ll show you now.&lt;/p&gt;
&lt;h4&gt;Backend server management:&lt;/h4&gt;
&lt;p&gt;If you take a look at the file structure of the &lt;code&gt;wod-backend&lt;/code&gt; directory, you will discover that the team did its best to organize things according to whether they relate to the system or to workshops.&lt;/p&gt;
&lt;h5&gt;Content of the backend server:&lt;/h5&gt;
&lt;p&gt;Simple tree view of the wod-backend directory:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/wod-blogserie2-tree1.png&quot; alt=&quot;&quot; title=&quot;Tree view of wod-backend directory&quot;&gt;&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;ansible&lt;/code&gt; folder contains all the necessary playbooks and variables files to support the main functions of the backend server. It provides playbooks for a minimal installation of the servers or appliances. It also allows the setup of the different types of servers (i.e., backend, frontend, and/or api-db), appliances (virtual machines or containers), or workshops, as well as maintenance tasks.&lt;/p&gt;
&lt;p&gt;At the root of this directory can be found:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;Check*.yml playbooks&lt;/code&gt;: These playbooks are used to perform checks on the different systems. These checks ensure that this is a compliant WoD system by verifying firewall rules and many other things. You will see this a bit later in more detail.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;Copy_folder.yml&lt;/code&gt;: Historically, this is one of the very first playbooks we used and, therefore, it is very important to me. It performs the necessary actions to deploy and personalize (by substituting Ansible variables) the selected notebook to the appropriate student home folder.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;compile_scripts.yml&lt;/code&gt;: Should you need to hide from the student a simple API call made to a private endpoint with non-shareable data (credentials, for instance), this playbook compiles the call into an executable file so that it can still run without exposing those details.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;distrib.yml&lt;/code&gt;: This playbook retrieves the distribution name and version from the machine it is run on.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;install_*.yml&lt;/code&gt;: These playbooks take care of installing the packages needed by the defined type (frontend, backend, api-db, base-system or even appliance).&lt;/p&gt;
&lt;p&gt;&lt;code&gt;setup_*.yml&lt;/code&gt;: There are several types of setup playbooks in this directory.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;setup_WKSHP-*.yml&lt;/code&gt;: These playbooks are responsible for preparing a base appliance for a given workshop by adding and configuring the necessary packages or services related to the workshop.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;setup_appliance.yml&lt;/code&gt;: This playbook is used to perform the base setup for a JupyterHub environment server or appliance. It includes the setup_base_appliance.yml playbook.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;setup_base_appliance.yml&lt;/code&gt;: This takes care of setting up the minimal requirements for an appliance. It includes the &lt;code&gt;install_base_system.yml&lt;/code&gt; playbook and, on top of that, creates and configures the necessary users.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;setup_docker_based_appliance.yml&lt;/code&gt;: Quite self-explanatory: it performs the setup tasks needed to enable Docker on a given appliance.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It also hosts the &lt;code&gt;inventory&lt;/code&gt; file describing the role of JupyterHub servers. Place your JupyterHub machine (FQDN) in a group used as the PBKDIR name.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;#
# Place to your JupyterHub machine (FQDN) in a group used as PBKDIR name.
#
[production]
127.0.0.1  ansible_connection=localhost
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;conf&lt;/code&gt; folder hosts configuration files in a Jinja format. Once expanded, the resulting files will be used by relevant workshops. I will explain in a future article all the steps and requirements to create a workshop.&lt;/p&gt;
&lt;p&gt;As part of the refactoring work to open source the project, we rearranged the locations of the different scripts. We created an install folder to hold the installation scripts, both from a JupyterHub perspective and from an appliance standpoint.&lt;/p&gt;
&lt;p&gt;We separated the workshops&apos; related scripts from the system ones. When one creates a workshop, one needs to provide a series of notebooks and in some cases some scripts to manage the creation and setup of a related appliance along with additional scripts to manage its lifecycle in the overall Workshops-on-Demand architecture (Create, Cleanup, Reset scripts at deployment or Cleanup times). These scripts need to be located in the &lt;code&gt;scripts&lt;/code&gt; folder. On the other hand, the system scripts are located in the &lt;code&gt;sys&lt;/code&gt; folder.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/tree-wkshop2.png&quot; alt=&quot;&quot; title=&quot;Tree view of the sys directory&quot;&gt;&lt;/p&gt;
&lt;p&gt;This directory hosts important configuration files for both the system and JupyterHub. You can see, for instance, &lt;code&gt;fail2ban&lt;/code&gt; configuration files. Some Jinja templates are present here, too. These templates are expanded through the &lt;code&gt;deliver&lt;/code&gt; mechanism, producing files customized with Ansible variables. All the WoD-related tasks are prefixed with wod for better understanding and ease of use.&lt;/p&gt;
&lt;p&gt;These Jinja templates can address JupyterHub kernel needs, like &lt;code&gt;wod-build-evcxr.sh.j2&lt;/code&gt;, which creates a script that installs the Rust kernel. Other templates relate to the system and JupyterHub. &lt;code&gt;wod-kill-processes.pl.j2&lt;/code&gt; was created after discovering the harsh reality of online crypto mining. In an ideal world, I would not have to explain further, as the script would not be needed. Unfortunately, this is not the case. When one offers free access to hardware online, sooner or later one can expect to see the original idea hijacked.&lt;/p&gt;
&lt;p&gt;Let&apos;s say that you want to provide some AI/ML 101 type of workshops. As part of this, you may consider providing servers with GPUs. Any twisted-minded cryptominer discovering your resources will definitely think they have hit the jackpot! This little anecdote actually happened to us, and not only on GPU-based servers; some regular servers got hit as well. We found that performance on some servers became very poor and, when looking into it, we found scripts that were not supposed to run there. As a result, we implemented monitors to check the load on our servers and made sure to kill any suspicious processes before kicking out the misbehaving student.&lt;/p&gt;
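&lt;p&gt;To make the idea concrete, here is a rough sketch of what such a watchdog could look like. This is not the project&apos;s actual &lt;code&gt;wod-kill-processes.pl.j2&lt;/code&gt; template (which is a Perl script generated from a Jinja template and Ansible variables); the process names and load threshold below are purely illustrative assumptions.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import os
import signal

# Illustrative values only; the real template derives its settings from Ansible variables.
SUSPECT_NAMES = {&quot;xmrig&quot;, &quot;minerd&quot;, &quot;cpuminer&quot;}  # hypothetical miner process names
LOAD_THRESHOLD = 8.0  # 1-minute load average above which we start looking

def suspicious_pids():
    &quot;&quot;&quot;Yield PIDs whose process name matches the blocklist (Linux /proc scan).&quot;&quot;&quot;
    for pid in filter(str.isdigit, os.listdir(&quot;/proc&quot;)):
        try:
            with open(&quot;/proc/%s/comm&quot; % pid) as f:
                name = f.read().strip()
        except OSError:
            continue  # the process exited while we were scanning
        if name in SUSPECT_NAMES:
            yield int(pid)

if __name__ == &quot;__main__&quot;:
    if os.getloadavg()[0] &gt; LOAD_THRESHOLD:
        for pid in suspicious_pids():
            os.kill(pid, signal.SIGKILL)
&lt;/code&gt;&lt;/pre&gt;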
&lt;p&gt;&lt;code&gt;wod-test-action.sh.j2&lt;/code&gt; is another interesting template that will create a script that can be used for testing workshops. This script mimics the procmail API and actually enables you to test the complete lifecycle of a workshop from deployment to cleanup or reset.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;wodadmin@server:/usr/local/bin$ ./wod-test-action.sh
Syntax: wod-test-action.sh &amp;#x3C;CREATE|CLEANUP|RESET|PURGE|PDF|WORD&gt; WKSHOP [MIN[,MAX]
ACTION is mandatory
wodadmin@server:/usr/local/bin$
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It requires the action verb, the workshop&apos;s name, and the student id. When using this script, you do not need to provide a participant id. The script is run locally on the JupyterHub server.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;wodadmin@server:/usr/local/bin$ ./wod-test-action.sh
Syntax: wod-test-action.sh &amp;#x3C;CREATE|CLEANUP|RESET|PURGE|PDF|WORD&gt; WKSHOP [MIN[,MAX]
ACTION is mandatory
wodadmin@server:/usr/local/bin$ ./wod-test-action.sh CREATE WKSHP-API101 121
Action: CREATE
We are working on WKSHP-API101
Student range: 121
Sending a mail to CREATE student 121 for workshop WKSHP-API101
220 server.xyz.com ESMTP Postfix (Ubuntu)
250 2.1.0 Ok
250 2.1.5 Ok
354 End data with &amp;#x3C;CR&gt;&amp;#x3C;LF&gt;.&amp;#x3C;CR&gt;&amp;#x3C;LF&gt;
250 2.0.0 Ok: queued as 9749E15403AB
221 2.0.0 Bye
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To retrieve the result of the script, you simply need to run a &lt;code&gt;tail&lt;/code&gt; command.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;wodadmin@server:~$ tail -100f .mail/from
++ date
....
From xyz@hpe.com  Fri Mar  3 09:08:35 2023
 Subject: CREATE 121 0
  Folder: /home/wodadmin/wod-backend/scripts/procmail-action.sh CREATE       11
+ source /home/wodadmin/wod-backend/scripts/wod.sh
....
+ echo &apos;end of procmail-action for student 121 (passwd werty123) with workshop WKSHP-API101 with action CREATE at Fri Mar  3 09:11:39 UTC 2023&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The very last line of the trace will provide you with the credentials necessary to test your workshop.&lt;/p&gt;
&lt;p&gt;Two types of activities can occur on the backend server: one-off or regular. A one-off activity is performed occasionally, as the need arises. A regular one is usually set up on the backend server as a cron job, although such a cron task can also be forced manually if necessary. One of the most important scheduled tasks is the &lt;code&gt;deliver&lt;/code&gt; task, which I will explain later in this chapter. I will start by covering an important one-off task: the update of the backend server.&lt;/p&gt;
&lt;h4&gt;Update of the backend server:&lt;/h4&gt;
&lt;p&gt;The backend server hosts all the necessary content for delivering workshops: it supplies the notebooks, scripts, and playbooks to deploy and personalize them. It also hosts some services that are needed by the overall solution architecture (JupyterHub, Procmail, and Fail2ban, among others).&lt;/p&gt;
&lt;p&gt;Services are installed once and for all at installation time. These services may evolve over time. One may need to update the JupyterHub application to fix a bug or get new features. In the same fashion, you may consider bumping from one Python version to a new major one. If you want to update these services or add new ones, you will need to update the relevant installation playbooks in the &lt;code&gt;wod-backend/ansible&lt;/code&gt; directory.&lt;/p&gt;
&lt;p&gt;Here is a small extract of the &lt;code&gt;install_backend.yml&lt;/code&gt; playbook (full version &lt;a href=&quot;https://github.com/Workshops-on-Demand/wod-backend/blob/main/ansible/install_backend.yml&quot;&gt;here&lt;/a&gt;):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;vi install_backend
- hosts: all
  gather_facts: true
  vars:
    IJAVAVER: &quot;1.3.0&quot;
    KUBECTLVER: &quot;1.21.6&quot;

  tasks:
    - name: Include variables for the underlying distribution
      include_vars: &quot;{{ ANSIBLEDIR }}/group_vars/{{ ansible_distribution }}-{{ ansible_distribution_major_version }}.yml&quot;

    - name: Base setup for a JupyterHub environment server or appliance
      include_tasks: &quot;{{ ANSIBLEDIR }}/setup_base_appliance.yml&quot;

    - name: Add CentOS SC repository into repo list
      become: yes
      become_user: root
      yum:
        name: centos-release-scl-rh
        state: present
      when:
        - ansible_distribution == &quot;CentOS&quot;
        - ansible_distribution_major_version &gt;= &quot;7&quot;

    - name: Add conda GPG Key to APT
      become: yes
      become_user: root
      apt_key:
        url: https://repo.anaconda.com/pkgs/misc/gpgkeys/anaconda.asc
        state: present
      when:
       - ansible_distribution == &quot;Ubuntu&quot;
       - ansible_distribution_major_version &gt;= &quot;20&quot;

      # TODO: Do it for EPEL if really needed
    - name: Add conda APT repository
      become: yes
      become_user: root
      apt_repository:
        repo: deb [arch=amd64] https://repo.anaconda.com/pkgs/misc/debrepo/conda stable main
        state: present
      when:
       - ansible_distribution == &quot;Ubuntu&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Possible Use Cases:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Upgrade to a newer version of JupyterHub&lt;/li&gt;
&lt;li&gt;Add a new kernel to JupyterHub&lt;/li&gt;
&lt;li&gt;Add a new Ansible Galaxy collection&lt;/li&gt;
&lt;li&gt;Add a new PowerShell library&lt;/li&gt;
&lt;li&gt;Add a new package needed by a workshop.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For example:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Kubectl client&lt;/li&gt;
&lt;li&gt;Terraform client&lt;/li&gt;
&lt;li&gt;PowerShell module&lt;/li&gt;
&lt;li&gt;Python library&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You will start by moving to your forked public backend repository and applying the necessary changes before committing and pushing locally.&lt;/p&gt;
&lt;p&gt;Then you will open a merge request against the main repository. We plan to integrate a proper CI/CD (continuous integration / continuous delivery) pipeline here to allow a Vagrant-based test deployment. Whenever someone opens a merge request on the main repo, the test deployment task kicks in and deploys a virtual backend server on which the new version of the installation process is automatically tested. When it succeeds, the merge request is accepted. Once it is merged, you will need to move to your backend server and perform a git remote update and git rebase in the wod-backend directory. Once done, you will be able to run the installation process.&lt;/p&gt;
&lt;h4&gt;Regular maintenance of the backend server:&lt;/h4&gt;
&lt;p&gt;On a daily basis, some tasks are launched to check the integrity of the backend server. Some of these tasks relate to the security of the system. The following playbook is at the heart of this verification: &lt;strong&gt;wod-backend/ansible/check_backend.yml&lt;/strong&gt;. The full version of the file is available &lt;a href=&quot;https://github.com/Workshops-on-Demand/wod-backend/blob/main/ansible/check_backend.yml&quot;&gt;here&lt;/a&gt; for review.&lt;/p&gt;
&lt;p&gt;It checks quite a long list of items, such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;WoD system compliance: is this really a WoD system? This is verified by calling the &lt;a href=&quot;https://github.com/Workshops-on-Demand/wod-backend/blob/main/ansible/check_system.yml&quot;&gt;check_system.yml&lt;/a&gt; playbook.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This first check includes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;nproc hard and soft limits&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;nofile hard and soft limits&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Setup sysctl params&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;net.ipv4.tcp_keepalive_time, value: &quot;1800&quot;&lt;/li&gt;
&lt;li&gt;kernel.threads-max, value: &quot;4096000&quot;&lt;/li&gt;
&lt;li&gt;kernel.pid_max, value: &quot;200000&quot;&lt;/li&gt;
&lt;li&gt;vm.max_map_count, value: &quot;600000&quot;&lt;/li&gt;
&lt;li&gt;Setup UDP and TCP firewall rules&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Enable services:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Firewalld&lt;/li&gt;
&lt;li&gt;NTP&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Student Management:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Ensure limits are correct for student accounts&lt;/li&gt;
&lt;li&gt;Copy the skeleton content under /etc/skel&lt;/li&gt;
&lt;li&gt;Test &lt;code&gt;.profile&lt;/code&gt; file&lt;/li&gt;
&lt;li&gt;Ensure vim is the default EDITOR&lt;/li&gt;
&lt;li&gt;Setup &lt;code&gt;logind.conf&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Manage &lt;code&gt;/etc/hosts&lt;/code&gt; file&lt;/li&gt;
&lt;li&gt;Install the pkg update script&lt;/li&gt;
&lt;li&gt;Setup &lt;code&gt;crontab&lt;/code&gt; for daily pkg security update&lt;/li&gt;
&lt;li&gt;Deliver create/reset/setup scripts as Ansible templates for variable expansion&lt;/li&gt;
&lt;li&gt;Install utility scripts&lt;/li&gt;
&lt;li&gt;Deliver the system scripts (&lt;code&gt;cleanup-processes.sh.j2&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Install the cleanup-processes script&lt;/li&gt;
&lt;li&gt;Setup the weekly cleanup-processes task&lt;/li&gt;
&lt;li&gt;Enable the WoD service&lt;/li&gt;
&lt;li&gt;Test the private tasks YAML file&lt;/li&gt;
&lt;li&gt;Call private tasks if available. The private part is performed before user management so that the deliver script can be interrupted during normal operations; waiting until the end of user management can take hours for 2,000 users. Potential impact: private scripts are run before user creation, so they may miss some parts of the setup.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;User Management:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Remove existing JupyterHub users&lt;/li&gt;
&lt;li&gt;Remove Linux users and their home directories&lt;/li&gt;
&lt;li&gt;Ensure dedicated student groups exist&lt;/li&gt;
&lt;li&gt;Ensure Linux student users exist with their home directories&lt;/li&gt;
&lt;li&gt;Ensure JupyterHub student users exist&lt;/li&gt;
&lt;li&gt;Setup ACLs for students with a JupyterHub account&lt;/li&gt;
&lt;li&gt;Setup default ACLs for students with a JupyterHub account&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;A similar set of scripts exists for the different parts of the solution (&lt;a href=&quot;https://github.com/Workshops-on-Demand/wod-backend/blob/main/ansible/check_api-db.yml&quot;&gt;check_api-db.yml&lt;/a&gt; for the api-db server and &lt;a href=&quot;https://github.com/Workshops-on-Demand/wod-backend/blob/main/ansible/check_frontend.yml&quot;&gt;check_frontend.yml&lt;/a&gt; for the frontend server, for instance).&lt;/p&gt;
&lt;p&gt;You should now have a better understanding of the maintenance tasks associated with the backend server. Similar actions are available for the other components of the project; checking tasks have been created for the frontend and api-db servers. Having now covered most of the subjects related to the backend server from an infrastructure standpoint, it is high time to discuss the content part. In my next blog, I plan to describe the workshop creation process. Time to understand how to build up some content for the JupyterHub server!&lt;/p&gt;
&lt;p&gt;If we can be of any help in clarifying any of this, please reach out to us on &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;Slack&lt;/a&gt;. Please be sure to check back at &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE DEV&lt;/a&gt; for a follow up on this. Also, don&apos;t forget to check out also the Hack Shack for new &lt;a href=&quot;https://developer.hpe.com/hackshack/workshops&quot;&gt;workshops&lt;/a&gt;! Willing to collaborate with us? Contact us so we can build more workshops!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Hot topics: AI and machine learning]]></title><link>https://developer.hpe.com/2023-July-10/</link><guid isPermaLink="false">https://developer.hpe.com/2023-July-10/</guid><pubDate>Thu, 06 Jul 2023 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Announcing Chapel 1.31!]]></title><description><![CDATA[E﻿xternal blog]]></description><link>https://developer.hpe.com/announcing-chapel-1-31/</link><guid isPermaLink="false">https://developer.hpe.com/announcing-chapel-1-31/</guid><pubDate>Thu, 22 Jun 2023 23:45:57 GMT</pubDate><content:encoded>&lt;p&gt;E﻿xternal blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[End-to-end, easy-to-use pipeline for training a model on Medical Image Data using HPE Machine Learning Development Environment]]></title><description><![CDATA[In this blog post, we’ll be covering how HPE Machine Learning Development Environment can add value to your machine learning workflow, as…]]></description><link>https://developer.hpe.com/end-to-end-easy-to-use-pipeline-for-training-a-model-on-medmnist-v2-using-hpe-machine-learning-development-environment-flask/</link><guid isPermaLink="false">https://developer.hpe.com/end-to-end-easy-to-use-pipeline-for-training-a-model-on-medmnist-v2-using-hpe-machine-learning-development-environment-flask/</guid><pubDate>Fri, 16 Jun 2023 16:00:00 GMT</pubDate><content:encoded>&lt;p&gt;In this blog post, we’ll be covering how &lt;a href=&quot;https://www.hpe.com/us/en/solutions/artificial-intelligence/machine-learning-development-environment.html&quot;&gt;HPE Machine Learning Development Environment&lt;/a&gt; can add value to your machine learning workflow, as well as how to utilize HPE Machine Learning Development Environment and Flask together to train and serve a model on a medical domain-specific use case. An end-to-end workflow and step-by-step instructions are provided in the &quot;Practice&quot; section. If you want to jump right in, the &lt;a href=&quot;https://github.com/ighodgao/determined_medmnist_e2e&quot;&gt;repository&lt;/a&gt; contains all code referenced in this post as well as instructions to run.&lt;/p&gt;
&lt;h1&gt;Introduction &lt;/h1&gt;
&lt;h2&gt;MedMNIST &lt;/h2&gt;
&lt;p&gt;Cancer is a horrible disease, and hospitals and research labs are increasingly using AI technology, such as convolutional neural networks (CNNs), to assist in image-based diagnoses. &lt;a href=&quot;https://medmnist.com/&quot;&gt;MedMNIST&lt;/a&gt; is a meta-dataset which contains 12 2-D datasets and 6 3-D computer vision datasets for various biomedical image classification problems. One of these datasets is PathMNIST, a colon cancer classification dataset. PathMNIST is derived from a prior study, and contains images of colorectal cancer histology slides, some of which contain cancer-associated stroma. Researchers can train a model on this dataset (for example, a CNN model) to help them identify cancer-associated-stroma in future patients.  &lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/pathmedmnist.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The goal of training a model on this dataset is to accurately classify images into their respective categories, for example, this image of adipose tissue: &lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/adipose.jpeg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;&quot;prediction&quot;: &quot;adipose&quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2&gt;HPE Machine Learning Development Environment &lt;/h2&gt;
&lt;p&gt;On its own, &lt;a href=&quot;https://www.hpe.com/us/en/solutions/artificial-intelligence/machine-learning-development-environment.html&quot;&gt;HPE Machine Learning Development Environment&lt;/a&gt; helps developers and scientists focus on innovation by removing the complexity and cost associated with at-scale machine learning model training. In this post we’re going to explore the advantages of building an end-to-end ML workflow using HPE Machine Learning Development Environment – occasionally referring to HPE’s open source platform, &lt;a href=&quot;http://determined.ai/&quot;&gt;Determined&lt;/a&gt; - for model training. We also provide an example end-to-end solution using Flask for model deployment, as well as the steps taken to develop this example in the &quot;Practice&quot; section. &lt;/p&gt;
&lt;h3&gt;Why use HPE Machine Learning Development Environment? &lt;/h3&gt;
&lt;p&gt;At their core, Determined and HPE Machine Learning Development Environment are training platforms that reduce complexity for ML researchers and help research teams collaborate. &lt;/p&gt;
&lt;p&gt;Researchers currently write training scripts that include not only the core ML functionalities to train a model on a dataset, but also code to manage the underlying infrastructure such as training on multiple GPUs, running a hyperparameter search, visualizing the training progress, and saving model checkpoints. Researchers should not have to focus on infrastructure problems – taking away these software engineering and systems administration-related tasks can allow researchers to focus on what’s important: building great models.  &lt;/p&gt;
&lt;p&gt;Additionally, collaboration is an important part of ML development. Many research teams don’t have the appropriate resources to share experiment results or share GPU infrastructure, resulting in a lack of reproducibility and ad-hoc resource management. This frustration due to a lack of high-quality resources causes slow progress and is a common reason why ML projects fail. Even at the smallest scale – say, one data scientist working alone – using a tool like HPE Machine Learning Development Environment can drastically speed up iteration time by removing the need to write boilerplate code.&lt;/p&gt;
&lt;p&gt;In this blog post, you&apos;ll get to see firsthand how HPE Machine Learning Development Environment can remove infrastructure code in a real-world research script and, at the same time, provide out-of-the-box distributed training, checkpointing, hyperparameter search, and visualization functionality, drastically accelerating research teams’ capabilities. You&apos;ll also learn about features that allow teams to collaborate effectively. &lt;/p&gt;
&lt;p&gt;If you are interested in more details about how this example was developed, take a look at the &quot;Practice&quot; section. For a full, in-depth, model porting guide, check out this &lt;a href=&quot;https://docs.determined.ai/latest/tutorials/pytorch-porting-tutorial.html&quot;&gt;model porting guide.&lt;/a&gt; The code for this example and the instructions used to run it can be found in the &lt;a href=&quot;https://github.com/ighodgao/determined_medmnist_e2e&quot;&gt;repository&lt;/a&gt;.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Feature&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Without HPE Machine Learning Development Environment&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;With HPE Machine Learning Development Environment&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Distributed Training&lt;/td&gt;
&lt;td&gt;Configure using open-source tools of your choice (e.g. Ray, Horovod)&lt;/td&gt;
&lt;td&gt;Fault tolerant distributed training automatically enabled&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Experiment Visualization&lt;/td&gt;
&lt;td&gt;Write custom code or configure using open-source tools of your choice, (e.g. Weights &amp;#x26; Biases, Tensorboard)&lt;/td&gt;
&lt;td&gt;Training metrics (model accuracy, model loss) available natively in WebUI, including Tensorboard extension&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Checkpointing&lt;/td&gt;
&lt;td&gt;Write custom logic to save checkpoints during training, which may not be robust to code failures, or configure using open-source tools of your choice&lt;/td&gt;
&lt;td&gt;Automatic, robust checkpoint management (e.g. best checkpoint saved at end of training, automatic checkpoint deletion, save checkpoint on experiment pause)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Hyperparameter Search&lt;/td&gt;
&lt;td&gt;Write custom code or configure using tools of your choice (e.g. Ray Tune, Optuna)&lt;/td&gt;
&lt;td&gt;State-of-the-art hyperparameter search algorithm (Adaptive ASHA) automatically available out of the box&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;As you can see, without a centralized training platform to handle all necessary features in one place, users are left to write custom code or use a variety of open-source tools. This can get complicated very quickly, as it’s difficult to manage multiple dependencies, and compatibility issues start to arise between tools.&lt;/p&gt;
&lt;p&gt;In many cases, HPE Machine Learning Development Environment can reduce the length of a training script to nearly half its original size, due to the sheer amount of boilerplate code normally required to enable these features. Let’s take a closer look at each of these core features to see HPE Machine Learning Development Environment in action.&lt;/p&gt;
&lt;h3&gt;Distributed training &lt;/h3&gt;
&lt;p&gt;Distributed training refers to the process of distributing a model training workload across multiple devices, such as GPUs. It’s very common for machine learning workloads to run for weeks on end due to large model and dataset sizes, so distributing model training across GPUs can drastically speed up the time it takes to develop a machine learning model, from weeks to hours.  &lt;/p&gt;
&lt;p&gt;However, this is difficult to set up and difficult to manage: manual interaction with GPUs through code is often necessary when setting up distributed training, and, once set up, managing distributed training is cumbersome due to issues like fault tolerance. Fault tolerance refers to the ability of a system to gracefully handle and continue a training job even if something on the infrastructure level goes wrong, such as a device failing. Setting up a fault tolerant solution manually is an enormous lift on an ML team, and not normally within the scope of a researcher’s abilities.  &lt;/p&gt;
&lt;p&gt;Determined not only takes away the need to manually interface with individual GPUs, but is also fault-tolerant. Let’s take a look at how the &lt;a href=&quot;https://github.com/MedMNIST/experiments/blob/main/MedMNIST2D/train_and_eval_pytorch.py&quot;&gt;original training script&lt;/a&gt; handles running the model on GPUs.&lt;/p&gt;
&lt;p&gt;In line 291, the &lt;code&gt;gpu_ids&lt;/code&gt; are received from input arguments from the user: &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;    gpu_ids = args.gpu_ids
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt; The device is configured using only the first &lt;code&gt;gpu_id&lt;/code&gt; (distributed training is not yet enabled): &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;    device = torch.device(&apos;cuda:{}&apos;.format(gpu_ids[0])) if gpu_ids else torch.device(&apos;cpu&apos;) 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And the model is ported to the GPU in line 90: &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;    model = model.to(device)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As well as the inputs to the model, for example, in line 190: &lt;/p&gt;
&lt;pre&gt;&lt;code&gt;    outputs = model(inputs.to(device))
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The variable “device” is referenced a total of 27 times in the training script, for purposes like porting other inputs to the GPU, and passing the device variable around to different functions so they are aware of the GPU. This is a perfect example of how a researcher would normally need to manage training on a GPU – manually. And we haven&apos;t even started distributed training yet!&lt;/p&gt;
&lt;p&gt;With Determined or HPE Machine Learning Development Environment, none of this manual device management is necessary. Simply using one of our high-level APIs gives you access to not only running on GPUs but running distributed training on multiple GPUs out-of-the-box. The only configuration needed (after porting your model to one of our APIs) would be to set the number of desired resources (GPUs) to use in your experiment settings, e.g.:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;resources:
    slots_per_trial: 4
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After taking these steps, you’d be able to watch your experiment progress in the WebUI and be assured that you’re utilizing multiple GPUs. By viewing the “Cluster” tab, you will notice that 4/5 CUDA Slots are allocated and in use: &lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot2.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Experiment visualization and metric logging &lt;/h3&gt;
&lt;p&gt;Visualization tools are important when developing models due to the probabilistic nature of machine learning. Debugging a model often involves analyzing a model’s training journey by visualizing metrics at different timestamps during an experiment. Commonly used tools for visualization often require manual configuration. Let’s take a look at how the &lt;a href=&quot;https://github.com/MedMNIST/experiments/blob/main/MedMNIST2D/train_and_eval_pytorch.py&quot;&gt;original training script&lt;/a&gt; handles visualization:  &lt;/p&gt;
&lt;p&gt;The original script uses a library called &lt;a href=&quot;https://tensorboardx.readthedocs.io/en/latest/tensorboard.html#module-tensorboardX&quot;&gt;tensorboardX&lt;/a&gt;: &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from tensorboardX import SummaryWriter
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Using this library, a writer object is created for handling visualization data: &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;writer = SummaryWriter(log_dir=os.path.join(output_root, &apos;Tensorboard_Results&apos;))
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The writer object is referenced a total of 9 times throughout the script.  &lt;/p&gt;
&lt;p&gt;In addition, training and testing metrics are manually calculated and logged in various places throughout the script, e.g.:  &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;logs = [&apos;loss&apos;, &apos;auc&apos;, &apos;acc&apos;]
    train_logs = [&apos;train_&apos;+log for log in logs]
    val_logs = [&apos;val_&apos;+log for log in logs]
    test_logs = [&apos;test_&apos;+log for log in logs]
    log_dict = OrderedDict.fromkeys(train_logs+val_logs+test_logs, 0)
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;    train_log = &apos;train  auc: %.5f  acc: %.5f\n&apos; % (train_metrics[1], train_metrics[2])
    val_log = &apos;val  auc: %.5f  acc: %.5f\n&apos; % (val_metrics[1], val_metrics[2])
    test_log = &apos;test  auc: %.5f  acc: %.5f\n&apos; % (test_metrics[1], test_metrics[2])

    log = &apos;%s\n&apos; % (data_flag) + train_log + val_log + test_log
    print(log)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With Determined, no manual metric tracking or logging is necessary. When porting your model to one of our high-level APIs, the default training and testing metrics, such as model losses, are automatically configured and rendered natively in the WebUI: &lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot1.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
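&lt;p&gt;To give a feel for how little is involved, here is a minimal sketch using the PyTorchTrial API covered in the &quot;Practice&quot; section below: whatever dictionary you return from the batch methods is recorded by Determined and charted in the WebUI, with no writer object or log strings required. The method body is abbreviated from the full example later in this post.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;    def evaluate_batch(self, batch):
        inputs, targets = batch
        outputs = self.model(inputs)
        loss = self.criterion(outputs, targets)
        # No SummaryWriter and no manual log formatting: returning this dict is
        # enough for &quot;test_loss&quot; to appear as a metric in the WebUI.
        return {&quot;test_loss&quot;: loss}
&lt;/code&gt;&lt;/pre&gt;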
&lt;h3&gt;Automatic checkpointing &lt;/h3&gt;
&lt;p&gt;Checkpointing a model throughout an experiment is important to maintain training progress and for preserving the best model at the end of an experiment. Let’s take a look at how the original training script handles model checkpointing.&lt;/p&gt;
&lt;p&gt;The original training script saves the last model at the very end of the training process, in a location specified by the user, but does not checkpoint throughout the training job: &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;    path = os.path.join(output_root, &apos;best_model.pth&apos;)
    torch.save(state, path)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This approach is problematic for the following reasons: &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The last model, at the end of a training job, is not always the best model. &lt;/li&gt;
&lt;li&gt;The script could fail before the end of a training job, resulting in no model checkpoint, which wastes time and resources. &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Determined automatically checkpoints in the following situations: &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Periodically throughout the course of model training, to keep a record of the training progress. &lt;/li&gt;
&lt;li&gt;During training – If a trial fails on epoch 9 and the last checkpoint was saved during epoch 1, Determined will save yet another checkpoint at epoch 9, making this a very efficient system. &lt;/li&gt;
&lt;li&gt;Upon completion of the trial. &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Additional checkpoint configuration settings can be modified to make this even more customizable. Checkpoints can be easily examined through the WebUI even after an experiment completes, and downloaded easily through the Determined APIs: &lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/screenshot3.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
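&lt;p&gt;As a small illustration of that last point, the best checkpoint of an experiment can be pulled down and turned back into a usable model with a few lines of the Determined Python API; the same calls are used in the deployment step later in this post. Reading the experiment id from an environment variable is just an assumption for this sketch.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import os

from determined import pytorch
from determined.experimental import client

# Fetch the experiment&apos;s best checkpoint and rebuild the trial from it.
checkpoint = client.get_experiment(os.getenv(&quot;EXPERIMENT_ID&quot;)).top_checkpoint()
path = checkpoint.download()  # downloads the checkpoint files locally
trial = pytorch.load_trial_from_checkpoint_path(path)
model = trial.model  # the trained PyTorch model, ready for inference
&lt;/code&gt;&lt;/pre&gt;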
&lt;h3&gt;Hyperparameter search &lt;/h3&gt;
&lt;p&gt;Hyperparameter search refers to the process of searching for the optimal configuration settings for your machine learning model – for example, searching for the optimal learning rate or convolution sizes in a convolutional neural network. The original training script does not implement hyperparameter search. This is not surprising, as hyperparameter search is yet another heavy lift for a researcher who wants to focus on experimentation, not the grunt work of setting up a hyperparameter search.  &lt;/p&gt;
&lt;p&gt;With Determined or HPE Machine Learning Development Environment, configuring hyperparameter search is easy. Defining hyperparameters, either static or in ranges, is easy through experiment configuration: &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;hyperparameters:
    global_batch_size: 128
    data_flag: pathmnist
    dataset_name: &quot;pathmnist.npz&quot;
    model_flag: resnet18
    lr: 0.001
    gamma: 0.1
    resize: True
    task: &quot;multi-class&quot;
    num_epochs: 15
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In this case, the “hyperparameters” feature can also be used to switch between datasets and types of training tasks if configured appropriately in the training script – which is neat! &lt;/p&gt;
&lt;h2&gt;Collaboration &lt;/h2&gt;
&lt;p&gt;Data scientists and researchers rarely work alone, especially in today’s data-driven technological boom. Sharing experiment results across a large team and sharing a single GPU cluster is not traditionally straightforward, but HPE Machine Learning Development Environment makes for a much better experience: &lt;/p&gt;
&lt;h3&gt;Resource management &lt;/h3&gt;
&lt;p&gt;At the enterprise level, HPE Machine Learning Development Environment automatically scales up and down workloads depending on priority and resource availability. For example, if a researcher is running a hefty training job and is utilizing all 10 GPUs on a shared cluster, but their colleague needs two of them for a higher priority smaller job, HPE Machine Learning Development Environment can temporarily scale back the larger job utilizing all 10 GPUs down to using only 8, leaving room for the smaller training job.  &lt;/p&gt;
&lt;h3&gt;Model sharing and reproducibility &lt;/h3&gt;
&lt;p&gt;The HPE Machine Learning Development Environment WebUI makes it easy to track experiments and see which model configurations resulted in particular results, across a team. This makes reproducibility easy, something that is of utmost importance when developing models for a use case like predicting cancer in biomedical images.  &lt;/p&gt;
&lt;p&gt;Now that you have a good background on the HPE Machine Learning Development Environment, I&apos;ll show you how you can build an E2E solution for your model training.&lt;/p&gt;
&lt;h1&gt;Practice: Building an E2E solution &lt;/h1&gt;
&lt;h2&gt;Step 1: Model training &lt;/h2&gt;
&lt;p&gt;To get started, we’ll take a look at the original training script provided &lt;a href=&quot;https://github.com/MedMNIST/experiments/blob/main/MedMNIST2D/train_and_eval_pytorch.py&quot;&gt;here&lt;/a&gt;. &lt;/p&gt;
&lt;p&gt;This script contains all the functionality needed to download the dataset as well as train and test the model (a commonly used convolutional neural network architecture – ResNet), on the PathMNIST data. &lt;/p&gt;
&lt;p&gt;The original script has some boilerplate code that can be removed since HPE Machine Learning Development Environment handles functionality such as distributed training and experiment visualization. For a full guide to model porting, refer to the &lt;a href=&quot;https://hpe-mlde.determined.ai/latest/tutorials/pytorch-porting-tutorial.html#pytorch-porting-tutorial&quot;&gt;model porting guide&lt;/a&gt;. Below, I walk through the steps taken to port the code to HPE Machine Learning Development Environment using the PyTorch API. The PyTorch API is a high-level API which allows you to utilize the full functionality of HPE Machine Learning Development Environment out-of-the-box. &lt;/p&gt;
&lt;h2&gt;Step 1.1: Connect to HPE Machine Learning Development Environment  &lt;/h2&gt;
&lt;p&gt;Once your admin has provisioned a cluster with HPE Machine Learning Development Environment and configured you as a user, connect to HPE Machine Learning Development Environment by exporting the DET_MASTER variable: &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;export DET_MASTER=&amp;#x3C;your cluster address&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then log in as your user: &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;det user login &amp;#x3C;your username&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can check to make sure you are logged in via &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;det user whoami
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Refer to the Determined User Guide and the Reference Page for more information on how to interact with the cluster.  &lt;/p&gt;
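&lt;p&gt;If you prefer working from Python rather than the CLI, the same SDK that is used later in this post for checkpoints can also authenticate against the cluster. A minimal sketch, where the master address and user name are placeholders:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from determined.experimental import client

# Authenticate the Python SDK against the same master the CLI talks to.
# Replace the placeholders with your own cluster address and user name.
client.login(master=&quot;http://your-cluster-address:8080&quot;, user=&quot;your-username&quot;)
&lt;/code&gt;&lt;/pre&gt;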
&lt;p&gt;&lt;em&gt;In steps 1.2 - 1.4, I&apos;m going to describe how to port this model to HPE Machine Learning Development Environment step-by-step (model_def.py). If you are interested in running the final training job, skip to step 1.5.&lt;/em&gt; &lt;/p&gt;
&lt;h2&gt;Step 1.2: Port model definition  &lt;/h2&gt;
&lt;p&gt;To train your own custom model using one of Determined’s high level APIs, such as the PyTorch API, you need to port your code to the API first. A template of all the functions Determined needs to run your training loops is provided &lt;a href=&quot;https://docs.determined.ai/latest/training/apis-howto/api-pytorch-ug.html#pytorch-trial&quot;&gt;here&lt;/a&gt;. Fill them out, one-by-one, to port your code. Once these functions are populated, Determined can use these, along with the provided configuration file, to run your experiment. &lt;/p&gt;
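&lt;p&gt;For orientation, here is a bare skeleton of the class that the next steps fill in, with method names and signatures matching the snippets below; the bodies are intentionally left empty.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from typing import Any, Dict

from determined.pytorch import DataLoader, PyTorchTrial, PyTorchTrialContext


class MyMEDMnistTrial(PyTorchTrial):
    def __init__(self, context: PyTorchTrialContext) -&gt; None:
        # Step 1.2: wrap the model, optimizer, and loss here
        ...

    def build_training_data_loader(self) -&gt; DataLoader:
        # Step 1.3: return the training DataLoader
        ...

    def build_validation_data_loader(self) -&gt; DataLoader:
        # Step 1.3: return the validation DataLoader
        ...

    def train_batch(self, batch, epoch_idx: int, batch_idx: int) -&gt; Dict[str, Any]:
        # Step 1.4: forward and backward pass for one batch
        ...

    def evaluate_batch(self, batch) -&gt; Dict[str, Any]:
        # Step 1.4: evaluation pass for one batch
        ...
&lt;/code&gt;&lt;/pre&gt;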
&lt;p&gt;To start, create a class definition that inherits from PyTorchTrial (this is the template referred to above). Create a context (line 29). This is like an interface to the Determined master, allowing you to communicate back and forth with it. Also include the model, optimizer, and criterion in the initialization function so that Determined is aware of them. Make each object an attribute of the class as shown below and transfer relevant hyperparameters to &lt;code&gt;config.yaml&lt;/code&gt;. As shown here, we can obtain hyperparameters defined in this configuration file by calling &lt;code&gt;self.context.get_hparam()&lt;/code&gt;, making it easier to change these hyperparameters without modifying training code.  &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;class MyMEDMnistTrial(PyTorchTrial):
    def __init__(self, context: PyTorchTrialContext) -&gt; None:
        self.context = context

        self.info = INFO[self.context.get_hparam(&quot;data_flag&quot;)]
        task = self.info[&quot;task&quot;]
        n_classes = len(self.info[&quot;label&quot;])

        self.context = context
        if self.context.get_hparam(&quot;model_flag&quot;) == &quot;resnet18&quot;:
            model = resnet18(pretrained=False, num_classes=n_classes)
        elif self.context.get_hparam(&quot;model_flag&quot;) == &quot;resnet50&quot;:
            model = resnet50(pretrained=False, num_classes=n_classes)
        else:
            raise NotImplementedError

        self.model = self.context.wrap_model(model)

        optimizer = torch.optim.Adam(
            self.model.parameters(), lr=self.context.get_hparam(&quot;lr&quot;)
        )
        self.optimizer = self.context.wrap_optimizer(optimizer)

        if self.context.get_hparam(&quot;task&quot;) == &quot;multi-label, binary-class&quot;:
            self.criterion = nn.BCEWithLogitsLoss()
        else:
            self.criterion = nn.CrossEntropyLoss()
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Step 1.3: Port data loaders  &lt;/h2&gt;
&lt;p&gt;Next, port the training and evaluation data loaders to the following class functions. These functions build the datasets and return the corresponding data loaders. You no longer need the standard for loop to iterate over batches or epochs inside these functions, since Determined will handle the training loop for you. &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;    def build_training_data_loader(self) -&gt; DataLoader:
        DataClass = getattr(medmnist, self.info[&quot;python_class&quot;])

        if self.context.get_hparam(&quot;resize&quot;):
            data_transform = transforms.Compose(
                [
                    transforms.Resize((224, 224), interpolation=PIL.Image.NEAREST),
                    transforms.ToTensor(),
                    transforms.Normalize(mean=[0.5], std=[0.5]),
                ]
            )
        else:
            data_transform = transforms.Compose(
                [transforms.ToTensor(), transforms.Normalize(mean=[0.5], std=[0.5])]
            )

        train_dataset = DataClass(
            split=&quot;train&quot;,
            transform=data_transform,
            download=False,
            as_rgb=True,
            root=DATASET_ROOT,
        )
        train_loader = determined.pytorch.DataLoader(
            dataset=train_dataset,
            batch_size=self.context.get_per_slot_batch_size(),
            shuffle=True,
        )

        return train_loader
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In each function, initialize and return the relevant PyTorch DataLoader object (e.g. &lt;code&gt;val_loader from build_validation_data_loader&lt;/code&gt;, and &lt;code&gt;train_loader in build_training_data_loader&lt;/code&gt;).  &lt;/p&gt;
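&lt;p&gt;For completeness, here is a sketch of the matching &lt;code&gt;build_validation_data_loader&lt;/code&gt;, mirroring the training loader above. The &lt;code&gt;&quot;val&quot;&lt;/code&gt; split name comes from the MedMNIST API, and the optional resize branch is omitted for brevity; treat this as an illustration rather than the exact code from the repository.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;    def build_validation_data_loader(self) -&gt; DataLoader:
        DataClass = getattr(medmnist, self.info[&quot;python_class&quot;])

        # Same normalization as the training loader; the resize branch shown in
        # build_training_data_loader is left out here for brevity.
        data_transform = transforms.Compose(
            [transforms.ToTensor(), transforms.Normalize(mean=[0.5], std=[0.5])]
        )

        val_dataset = DataClass(
            split=&quot;val&quot;,
            transform=data_transform,
            download=False,
            as_rgb=True,
            root=DATASET_ROOT,
        )
        return determined.pytorch.DataLoader(
            dataset=val_dataset,
            batch_size=self.context.get_per_slot_batch_size(),
            shuffle=False,
        )
&lt;/code&gt;&lt;/pre&gt;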
&lt;h2&gt;Step 1.4: Port training and evaluation functions &lt;/h2&gt;
&lt;p&gt;Finally, port your training and evaluation functions to the following class functions by including the relevant steps to train and evaluate the model on one batch of data. Here, make sure to remove the for-loop iterating over epochs and include only the relevant code corresponding to one data batch. HPE Machine Learning Development Environment handles the training loop behind the scenes.  &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;    def train_batch(
        self, batch: TorchData, epoch_idx: int, batch_idx: int
    ) -&gt; Dict[str, Any]:
        inputs, targets = batch
        outputs = self.model(inputs)

        if self.context.get_hparam(&quot;task&quot;) == &quot;multi-label, binary-class&quot;:
            targets = targets.to(torch.float32)
            loss = self.criterion(outputs, targets)
        else:
            targets = torch.squeeze(targets, 1).long()
            loss = self.criterion(outputs, targets)

        self.context.backward(loss)
        self.context.step_optimizer(self.optimizer)

        return {&quot;loss&quot;: loss}

    def evaluate_batch(self, batch: TorchData) -&gt; Dict[str, Any]:
        inputs, targets = batch
        outputs = self.model(inputs)

        if self.context.get_hparam(&quot;task&quot;) == &quot;multi-label, binary-class&quot;:
            targets = targets.to(torch.float32)
            loss = self.criterion(outputs, targets)
            m = nn.Sigmoid()
            outputs = m(outputs)
        else:
            targets = torch.squeeze(targets, 1).long()
            loss = self.criterion(outputs, targets)
            m = nn.Softmax(dim=1)
            outputs = m(outputs)
            targets = targets.float().resize_(len(targets), 1)

        return {&quot;test_loss&quot;: loss}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Notice that in all functions defined as part of MyMEDMnistTrial, all instances of manual device management, metric logging, and accuracy calculations have been removed. HPE Machine Learning Development Environment’s Trial APIs handle these automatically, given only the Trial definition and configuration file.  &lt;/p&gt;
&lt;p&gt;After porting the model, compare the original training script and our newly defined model_def.py training file. At 302 vs. 177 lines of code, we have cut the training script nearly in half! &lt;/p&gt;
&lt;h3&gt;&lt;em&gt;Optional&lt;/em&gt;: Upload dataset to S3 bucket  &lt;/h3&gt;
&lt;p&gt;The PathMNIST dataset includes images that are preprocessed to 28x28 pixels with corresponding classification labels, complete with data augmentation. The original training script downloads this dataset and performs data normalization prior to model training. &lt;/p&gt;
&lt;p&gt;When submitting experiments to HPE Machine Learning Development Environment, the dataset will be downloaded to the master in the same way, which may cause a small delay. To avoid this, upload your data to an S3 bucket and access it via the following instead: &lt;/p&gt;
&lt;p&gt;Modify &lt;code&gt;config.yaml&lt;/code&gt; to include the data url as follows: &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;data:
  url: https://medmnist-pathmnist.s3.us-east-2.amazonaws.com/pathmnist.npz
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Access the dataset via the following in the class initialization function: &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;os.makedirs(DATASET_ROOT, exist_ok=True)
wget.download(
    context.get_data_config()[&quot;url&quot;],
    out=os.path.join(DATASET_ROOT, &quot;pathmnist.npz&quot;),
)
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Step 1.5: Train the model &lt;/h2&gt;
&lt;p&gt;In HPE Machine Learning Development Environment, an experiment is a training job that consists of one or more variations, or trials, on the same model.&lt;/p&gt;
&lt;p&gt;To begin training our PathMNIST model, you need to submit an experiment to HPE Machine Learning Development Environment. After setting up Determined, use the CLI to submit an experiment (refer to full setup instructions in the &lt;a href=&quot;https://github.com/ighodgao/determined_medmnist_e2e&quot;&gt;repository&lt;/a&gt;): &lt;/p&gt;
&lt;p&gt;&lt;code&gt;det e create config.yaml .&lt;/code&gt; &lt;/p&gt;
&lt;p&gt;View progress on the WebUI located at &amp;#x3C;DET_MASTER&gt;:8080, which you can reach by pasting this address into a browser window. &lt;/p&gt;
&lt;p&gt;You can configure additional training parameters by modifying config.yaml. Refer to the &lt;a href=&quot;https://docs.determined.ai/latest/reference/reference-training/experiment-config-reference.html#experiment-configuration-reference&quot;&gt;Experiment Configuration Reference&lt;/a&gt; for more information. &lt;/p&gt;
&lt;p&gt;And that’s it! Now that you have a trained PathMNIST model, progress to deployment, found in Step 2.  &lt;/p&gt;
&lt;h2&gt;Step 2: Model deployment &lt;/h2&gt;
&lt;p&gt;Once you have finished training with HPE Machine Learning Development Environment, you can deploy your model using any solution.  &lt;/p&gt;
&lt;p&gt;In Step 2, you&apos;ll see an example of using Flask to deploy your model as a RESTful API. &lt;/p&gt;
&lt;p&gt;&lt;em&gt;Steps 2.1-2.4 describe the steps needed to create a Flask server (deploy.py). If you are interested in deploying the model directly, skip to steps 2.5-2.6.&lt;/em&gt; &lt;/p&gt;
&lt;h2&gt;Step 2.1: Load the model from your saved checkpoints in your experiment &lt;/h2&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import os

from determined import pytorch
from determined.experimental import client

# Load the Determined model from the experiment&apos;s best checkpoint
checkpoint = client.get_experiment(os.getenv(&quot;EXPERIMENT_ID&quot;)).top_checkpoint()
path = checkpoint.download()
trial = pytorch.load_trial_from_checkpoint_path(path)
model = trial.model
model.eval()
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now that you have a trained model, you can load it by referencing the checkpoints saved during the experiment, as shown above.&lt;/p&gt;
&lt;h2&gt;Step 2.2: Define a function to preprocess your data &lt;/h2&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def preprocess_data(image):
    # Resize the image
    image = image.resize((28, 28))

    # Convert the image to a NumPy array
    image = np.array(image)

    # Add a channel dimension if it&apos;s a grayscale image
    if len(image.shape) == 2:
        image = np.expand_dims(image, axis=-1)

    # Normalize the image by dividing by 255 and subtracting 0.5
    image = (image / 255.0) - 0.5

    # Transpose the image dimensions
    image = np.transpose(image, (2, 0, 1))

    # Convert the image to a torch tensor
    processed_data = torch.tensor(image, dtype=torch.float32)

    # Add a batch dimension
    processed_data = processed_data.unsqueeze(0)

    return processed_data
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here, perform the same normalizations that were done on the training dataset when training the model.&lt;/p&gt;
&lt;h2&gt;Step 2.3: Define a Flask server&lt;/h2&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Define a Flask route for serving predictions
@app.route(&quot;/predict&quot;, methods=[&quot;POST&quot;])
def predict():
    # Get the input image file from the request
    file = request.files[&quot;file&quot;]

    # Read the image file into a PIL Image object
    image = Image.open(io.BytesIO(file.read()))

    # Preprocess the input data
    processed_data = preprocess_data(image)

    # Use the Determined model to make a prediction
    output_tensor = model(processed_data)

    # Convert the output tensor to a numpy array
    output_array = output_tensor.detach().numpy()

    # Apply softmax to get probabilities for each class
    probabilities = F.softmax(output_tensor, dim=1)

    # Convert the output tensor to a numpy array
    output_array = probabilities.detach().numpy()

    # Get the predicted class label
    class_label = np.argmax(output_array)
    class_labels = [
        &quot;adipose&quot;,
        &quot;background&quot;,
        &quot;debris&quot;,
        &quot;lymphocytes&quot;,
        &quot;mucus&quot;,
        &quot;smooth muscle&quot;,
        &quot;normal colon mucosa&quot;,
        &quot;cancer-associated stroma&quot;,
        &quot;colorectal adenocarcinoma epithelium&quot;,
    ]

    # Return the predicted class label as a JSON response
    return jsonify({&quot;prediction&quot;: class_labels[class_label]})
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here, you will define a server that accepts an image file, performs the preprocessing steps defined in the &lt;code&gt;preprocess_data&lt;/code&gt; function, and outputs the prediction as a json object.  &lt;/p&gt;
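&lt;p&gt;The route above assumes a Flask application object and a handful of imports that the snippets do not show. A minimal sketch of that scaffolding (the module layout is assumed, not copied from the repository) would look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import io

import numpy as np
import torch
import torch.nn.functional as F
from flask import Flask, jsonify, request
from PIL import Image

# The Flask app object that the @app.route(&quot;/predict&quot;) decorator refers to.
app = Flask(__name__)
&lt;/code&gt;&lt;/pre&gt;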
&lt;h2&gt;Step 2.4: Start the Flask server&lt;/h2&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Start the Flask server
if __name__ == &quot;__main__&quot;:
    app.run(debug=True, host=&quot;0.0.0.0&quot;, port=5000)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In this example, the server listens on port 5000 on all interfaces (0.0.0.0), so locally you can reach it at localhost:5000.&lt;/p&gt;
&lt;h2&gt;Step 2.5: Obtain test data&lt;/h2&gt;
&lt;p&gt;Your test data should be in the form of .jpeg, .png, or .jpg images. Deploy the Flask server and submit requests for inference.&lt;/p&gt;
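&lt;p&gt;For instance, assuming the Flask server above is running locally, a minimal Python client like the following could submit a test image (the file name here is illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Hypothetical client: send one test image to the local /predict endpoint
import requests

# Any .jpeg/.jpg/.png test image will do; the file name is just an example
with open(&quot;adipose.jpeg&quot;, &quot;rb&quot;) as f:
    response = requests.post(&quot;http://localhost:5000/predict&quot;, files={&quot;file&quot;: f})

print(response.json())  # e.g. {&quot;prediction&quot;: &quot;adipose&quot;}
&lt;/code&gt;&lt;/pre&gt;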
&lt;p&gt;Full instructions for running the example can be found in the &lt;a href=&quot;https://github.com/ighodgao/determined_medmnist_e2e&quot;&gt;repository&lt;/a&gt;. &lt;/p&gt;
&lt;p&gt;For example, running prediction on the following image of adipose tissue results in: &lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/adipose.jpeg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;&quot;prediction&quot;: &quot;adipose&quot;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Deploying your model makes it much easier to consume in production. This example uses Flask as a simple proof-of-concept; for production-grade serving, tools like KServe can help.&lt;/p&gt;
&lt;h1&gt;Conclusion  &lt;/h1&gt;
&lt;p&gt;That’s it! I&apos;ve shown you how to implement an end-to-end machine learning workflow using HPE Machine Learning Development Environment and Flask. For more information about HPE’s AI toolkit, please visit the &lt;a href=&quot;https://www.hpe.com/us/en/solutions/artificial-intelligence.html&quot;&gt;HPE AI Solutions homepage&lt;/a&gt;. &lt;/p&gt;
&lt;p&gt;If you enjoyed reviewing this model training and deployment example, &lt;a href=&quot;https://join.slack.com/t/determined-community/shared_invite/zt-1txc10qgy-yJ2puE6DxgrhdH9TIK93tw&quot;&gt;we invite you to get in touch with the Determined Community&lt;/a&gt; and try this out for yourself by following the Determined &lt;a href=&quot;https://docs.determined.ai/latest/&quot;&gt;Documentation&lt;/a&gt;.  &lt;/p&gt;
&lt;p&gt;If you and your team are ready for premium machine learning support from HPE, please contact the team via the &lt;a href=&quot;https://www.hpe.com/us/en/solutions/artificial-intelligence/machine-learning-development-environment.html&quot;&gt;HPE Machine Learning Development Environment homepage&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Throughout the different parts of this tutorial, I will review the end-to-end training of an object detection model using NVIDIA’s PyTorch Container from &lt;a href=&quot;https://www.nvidia.com/en-us/gpu-cloud/&quot;&gt;NVIDIA&apos;s NGC Catalog&lt;/a&gt;, a Jupyter Notebook, the open-source training platform from &lt;a href=&quot;http://www.determined.ai/&quot;&gt;Determined AI&lt;/a&gt;, and &lt;a href=&quot;https://www.kubeflow.org/docs/external-add-ons/kserve/kserve/&quot;&gt;Kserve&lt;/a&gt; to deploy the model into production.  &lt;/p&gt;
&lt;h1&gt;Part 1: End-to-end example training object detection model using NVIDIA PyTorch Container from NGC&lt;/h1&gt;
&lt;h2&gt;Installation&lt;/h2&gt;
&lt;hr&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;Note: this object detection demo is based on the &lt;a href=&quot;https://github.com/pytorch/vision/tree/v0.11.3&quot;&gt;TorchVision package GitHub repository&lt;/a&gt;.&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;This notebook walks you through each step required to train a model using containers from the NGC Catalog. We chose the GPU-optimized PyTorch container as an example. The basics of working with Docker containers apply to all NGC containers.&lt;/p&gt;
&lt;h2&gt;NGC&lt;/h2&gt;
&lt;p&gt;The &lt;a href=&quot;https://www.nvidia.com/en-us/gpu-cloud/&quot;&gt;NGC catalog from NVIDIA&lt;/a&gt; offers ready-to-use containers, pre-trained models, SDKs, and Helm charts for diverse use cases and industries to speed up model training, development, and deployment. For this example, I&apos;m pulling the popular PyTorch container from NGC and will show you how to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Install the Docker engine on your system&lt;/li&gt;
&lt;li&gt;Pull a PyTorch container from the NGC catalog using Docker&lt;/li&gt;
&lt;li&gt;Run the PyTorch container using Docker&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Let&apos;s get started!&lt;/p&gt;
&lt;hr&gt;
&lt;h3&gt;1. Install the Docker Engine&lt;/h3&gt;
&lt;p&gt;Go to the &lt;a href=&quot;https://docs.docker.com/engine/install/&quot;&gt;Docker Installation Engine documentation&lt;/a&gt; to install the Docker Engine on your system.&lt;/p&gt;
&lt;h3&gt;2. Download the PyTorch container from the NGC Catalog&lt;/h3&gt;
&lt;p&gt;Once the Docker Engine is installed on your machine, visit the &lt;a href=&quot;https://catalog.ngc.nvidia.com/containers&quot;&gt;NVIDIA NGC Container Catalog&lt;/a&gt; and search for the PyTorch container. Click on the PyTorch card and copy the pull command. &lt;a href=&quot;https://raw.githubusercontent.com/kbojo/images/master/NGC.png&quot;&gt;&lt;img src=&quot;https://raw.githubusercontent.com/kbojo/images/master/NGC.png&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Open the command line of your machine and paste the pull command into your command line. Execute the command to download the container.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;$ docker pull nvcr.io/nvidia/pytorch:21.11-py3&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;The container starts downloading to your computer. A container image consists of many layers; all of them need to be pulled.&lt;/p&gt;
&lt;h3&gt;3. Run the PyTorch container image&lt;/h3&gt;
&lt;p&gt;Once the container download is completed, run the following command to start the container:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;$ docker run -it --gpus all -p 8888:8888 -v $PWD:/projects --network=host nvcr.io/nvidia/pytorch:21.11-py3&lt;/code&gt; &lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://raw.githubusercontent.com/kbojo/images/master/commandline1.png&quot;&gt;&lt;img src=&quot;https://raw.githubusercontent.com/kbojo/images/master/commandline1.png&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;4. Install Jupyter lab and open a notebook&lt;/h3&gt;
&lt;p&gt;Within the container, run the following commands:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;pip install torchvision==0.11.3 jupyterlab&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;jupyter lab --ip=0.0.0.0 --port=8888 --allow-root&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Open up your favorite browser and enter: &lt;a href=&quot;http://localhost:8888/?token=*yourtoken&quot;&gt;http://localhost:8888/?token=*yourtoken&lt;/a&gt;*. &lt;a href=&quot;https://raw.githubusercontent.com/kbojo/images/master/commandline2.png&quot;&gt;&lt;img src=&quot;https://raw.githubusercontent.com/kbojo/images/master/commandline2.png&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;You should see the Jupyter Lab application. Click on the plus icon to launch a new Python 3 notebook, then follow the instructions provided in Part 2 below.&lt;/p&gt;
&lt;p&gt;Now that you have your Docker engine installed and the PyTorch Container running, we need to fetch and prepare our training dataset. You&apos;ll see that coming up in Part 2 below.&lt;/p&gt;
&lt;hr&gt;
&lt;h1&gt;Part 2: Data preparation&lt;/h1&gt;
&lt;p&gt;&lt;em&gt;Note: this demo is based on the NGC Docker image&lt;/em&gt; &lt;code&gt;nvcr.io/nvidia/pytorch:21.11-py3&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;This notebook walks you through each step required to train a model using containers from the NGC catalog. We chose the GPU optimized PyTorch container as an example. The basics of working with docker containers apply to all NGC containers.&lt;/p&gt;
&lt;p&gt;Here, I will show you how to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Download the xView dataset&lt;/li&gt;
&lt;li&gt;Convert the labels to COCO format&lt;/li&gt;
&lt;li&gt;Conduct the preprocessing step, &lt;strong&gt;Tiling&lt;/strong&gt;: slicing large satellite imagery into chunks&lt;/li&gt;
&lt;li&gt;Upload the data to an S3 bucket to support distributed training&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Let&apos;s get started!&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;Pre-reqs, set up Jupyter Notebook environment using NGC container&lt;/h2&gt;
&lt;h3&gt;Execute Docker run to create NGC environment for data preparation&lt;/h3&gt;
&lt;p&gt;Make sure to map a host directory into the Docker container (the &lt;code&gt;-v&lt;/code&gt; flag below); you will reuse this host directory in later steps:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;docker run   --gpus all --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 -v /home/ubuntu:/home/ubuntu  -p 8008:8888 -it nvcr.io/nvidia/pytorch:21.11-py3  /bin/bash&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Run Jupyter Notebook command within Docker container to access it on your local browser&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;cd /home/ubuntu&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;jupyter lab --ip=0.0.0.0 --port=8888 --NotebookApp.token=&apos;&apos; --NotebookApp.password=&apos;&apos;&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;git clone https://github.com/interactivetech/e2e_blogposts.git&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Download the xView dataset&lt;/h3&gt;
&lt;p&gt;The dataset you will be using is from the &lt;a href=&quot;https://challenge.xviewdataset.org&quot;&gt;DIUx xView 2018 Challenge&lt;/a&gt; by U.S. National Geospatial-Intelligence Agency (NGA). You will need to &lt;a href=&quot;https://challenge.xviewdataset.org/welcome&quot;&gt;create an account&lt;/a&gt;, agree to the terms and conditions, and download the dataset manually.&lt;/p&gt;
&lt;p&gt;You can also &lt;a href=&quot;https://challenge.xviewdataset.org/data-download&quot;&gt;download the dataset&lt;/a&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# run pip install to get the SAHI library
!pip install sahi scikit-image opencv-python-headless==4.5.5.64
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Example command to download the train images with wget; you will need to update the URL, as this token is expired
!wget -O train_images.tgz &quot;https://d307kc0mrhucc3.cloudfront.net/train_images.tgz?Expires=1680923794&amp;#x26;Signature=pn0R9k3BpSukGEdjcNx7Kvs363HWkngK8sQLHxkDOqqkDAHSOCDBmAMAsBhYZ820uMpyu4Ynp1UAV60OmUURyvGorfIRaVF~jJO8-oqRVLeO1f24OGCQg7HratHNUsaf6owCb8XXy~3zaW15FcuORuPV-2Hr6Jxekwcdw9D~g4M2dLufA~qBfTLh3uNjWK5UCAMvyPz2SRLtvc3JLzGYq1eXiKh1dI9W0DyWXov3mVDpBdwS84Q21S2lVi24KJsiZOSJqozuvahydW2AuR~tbXTRbYtmAyPF9ZqT8ZCd9MLeKw2qQJjb7tvzaSZ0F9zPjm2RS8961bo6QoBVeo6kzA__&amp;#x26;Key-Pair-Id=APKAIKGDJB5C3XUL2DXQ&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Example command to download the train labels with wget; you will need to update the URL, as this token is expired
!wget -O train_labels.tgz &quot;https://d307kc0mrhucc3.cloudfront.net/train_labels.tgz?Expires=1680923794&amp;#x26;Signature=YEX~4gioZ7J0pAjEPx7BjJfnOa2j412mx2HlStlqa0cHj-T0T21vo17S8Fs71DXgPlZ5qnIre2-icc7wQ~EuQV-HL1ViS8qH1Aubgj9i0pnHZL07ktiyulX7QStOLywxJ7bOOmQ37iFF~-OcJW3MZfQCTWrP~LdlZMmXz0yGs5WEIYeMyvfUfIhGvrpHcJ14Z3czasSMeOKfwdQsUJoRcFTbmlbZk98IVeEWjmnGTfxGbPBdMmQ96XdT4NohggtzGdqeZhGNfwm7dKGSUbXvGCoFe~fIjBz0~5BvB6rNIaMaFuBA6aGTbCLeG8FlvijcECouhZdMTHmQUlgtSlZjGw__&amp;#x26;Key-Pair-Id=APKAIKGDJB5C3XUL2DXQ&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# unzip the training images from /home/ubuntu/e2e_blogposts/ngc_blog
!tar -xf train_images.tgz -C xview_dataset/
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# unzip labels from /home/ubuntu/e2e_blogposts/ngc_blog directory 
!tar -xf train_labels.tgz -C xview_dataset/
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;1. Convert TIF to RGB&lt;/h2&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Loop through all the images and convert them to RGB; this is important for tiling the images and training with PyTorch.
# This step will take about an hour to complete.
!python data_utils/tif_2_rgb.py --input_dir xview_dataset/train_images \
  --out_dir xview_dataset/train_images_rgb/
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;2. How to convert labels to COCO format&lt;/h2&gt;
&lt;p&gt;Run a script to convert the dataset labels from .geojson format to COCO format. &lt;a href=&quot;https://www.immersivelimit.com/tutorials/create-coco-annotations-from-scratch&quot;&gt;Read more details about the COCO format at this link.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The result will be two generated files (in COCO format): &lt;code&gt;train.json&lt;/code&gt; and &lt;code&gt;val.json&lt;/code&gt;&lt;/p&gt;
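&lt;p&gt;For context, a COCO-format detection file is simply JSON with three top-level lists (images, annotations, and categories). A minimal illustration, with placeholder values, looks like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Minimal illustration of the COCO detection format (all values are placeholders)
coco_example = {
    &quot;images&quot;: [{&quot;id&quot;: 1, &quot;file_name&quot;: &quot;tile_0001.png&quot;, &quot;width&quot;: 300, &quot;height&quot;: 300}],
    &quot;annotations&quot;: [{
        &quot;id&quot;: 1,
        &quot;image_id&quot;: 1,
        &quot;category_id&quot;: 11,
        &quot;bbox&quot;: [48, 105, 30, 40],  # [x, y, width, height]
        &quot;area&quot;: 1200,
        &quot;iscrowd&quot;: 0,
    }],
    &quot;categories&quot;: [{&quot;id&quot;: 11, &quot;name&quot;: &quot;Fixed-wing Aircraft&quot;}],
}
&lt;/code&gt;&lt;/pre&gt;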
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# make sure train_images_dir is pointing to the .tif images
!python data_utils/convert_geojson_to_coco.py --train_images_dir xview_dataset/train_images/ \
  --train_images_dir_rgb xview_dataset/train_images_rgb/ \
  --train_geojson_path xview_dataset/xView_train.geojson \
  --output_dir xview_dataset/ \
  --train_split_rate 0.75 \
  --category_id_remapping data_utils/category_id_mapping.json \
  --xview_class_labels data_utils/xview_class_labels.txt
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;3. Slicing/Tiling the dataset&lt;/h2&gt;
&lt;p&gt;Here, you will be using the SAHI library to slice the large satellite images. Satellite images can be on the order of 50,000 x 50,000 pixels, which wouldn&apos;t fit in GPU memory. You can alleviate this problem by slicing each image into smaller overlapping tiles, as sketched below.&lt;/p&gt;
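&lt;p&gt;Conceptually, tiling just crops the large image into overlapping windows. A minimal sketch of the idea with PIL is shown below; the SAHI script that follows does this for real and also handles the annotation bookkeeping (tile size and overlap values mirror the script arguments):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from PIL import Image

def tile_image(path, tile_size=300, overlap=0.2):
    &quot;&quot;&quot;Conceptual sketch: yield overlapping square tiles from a large image.&quot;&quot;&quot;
    img = Image.open(path)
    step = int(tile_size * (1 - overlap))
    for top in range(0, max(img.height - tile_size, 1), step):
        for left in range(0, max(img.width - tile_size, 1), step):
            yield img.crop((left, top, left + tile_size, top + tile_size))
&lt;/code&gt;&lt;/pre&gt;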
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;!python data_utils/slice_coco.py --image_dir xview_dataset/train_images_rgb/ \
  --train_dataset_json_path xview_dataset/train.json \
  --val_dataset_json_path xview_dataset/val.json \
  --slice_size 300 \
  --overlap_ratio 0.2 \
  --ignore_negative_samples True \
  --min_area_ratio 0.1 \
  --output_train_dir xview_dataset/train_images_rgb_no_neg/ \
  --output_val_dir xview_dataset/val_images_rgb_no_neg/
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;4. Upload to S3 bucket to support distributed training&lt;/h2&gt;
&lt;p&gt;Now, you can upload your exported data to a publicly accessible AWS S3 bucket. For a large-scale distributed experiment, this lets every worker access the dataset without installing it on each device.
View &lt;a href=&quot;https://docs.determined.ai/latest/model-dev-guide/load-model-data.html&quot;&gt;Determined Documentation&lt;/a&gt; and &lt;a href=&quot;https://codingsight.com/upload-files-to-aws-s3-with-the-aws-cli/&quot;&gt;AWS instructions&lt;/a&gt; to learn how to upload your dataset to an S3 bucket. Review the &lt;code&gt;S3Backend&lt;/code&gt; class in &lt;code&gt;data.py&lt;/code&gt;.&lt;/p&gt;
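&lt;p&gt;For orientation, a minimal sketch of what such an S3 backend might look like is shown below. The class name follows the tutorial&apos;s description, but the exact implementation in &lt;code&gt;data.py&lt;/code&gt; may differ:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Hypothetical sketch of an S3-backed dataset helper (boto3-based)
import os
import boto3

class S3Backend:
    def __init__(self, bucket_name):
        self.bucket = bucket_name
        self.client = boto3.client(&quot;s3&quot;)

    def download(self, key, local_dir=&quot;/tmp/xview&quot;):
        # Fetch a single object from the bucket to local storage and return its path
        os.makedirs(local_dir, exist_ok=True)
        local_path = os.path.join(local_dir, os.path.basename(key))
        self.client.download_file(self.bucket, key, local_path)
        return local_path
&lt;/code&gt;&lt;/pre&gt;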
&lt;p&gt;Once you create an S3 bucket that is publicly accessible, here are example commands to upload the preprocessed dataset to S3:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;aws s3 cp --recursive xview_dataset/train_sliced_no_neg/   s3://determined-ai-xview-coco-dataset/train_sliced_no_neg&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;aws s3 cp --recursive xview_dataset/val_sliced_no_neg/   s3://determined-ai-xview-coco-dataset/val_sliced_no_neg&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Now that the satellite imagery data is in an S3 bucket and is prepped for distributed training, you can progress to model training and inferencing via the NGC container.&lt;/p&gt;
&lt;h1&gt;Part 3: End-to-End example training object detection model using NVIDIA PyTorch container from NGC&lt;/h1&gt;
&lt;h2&gt;Training and inference via NGC Container&lt;/h2&gt;
&lt;p&gt;This notebook walks you through each step to train a model using containers from the NGC Catalog. I chose the GPU-optimized PyTorch container for this example. The basics of working with Docker containers apply to all NGC containers.&lt;/p&gt;
&lt;p&gt;We will show you how to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Execute training of an object detection model on satellite imagery using PyTorch and a Jupyter Notebook&lt;/li&gt;
&lt;li&gt;Run inference on a trained object detection model using the SAHI library&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Note this object detection demo is based on &lt;a href=&quot;https://github.com/pytorch/vision/tree/v0.11.3&quot;&gt;this PyTorch repo&lt;/a&gt; and ngc docker image &lt;code&gt;nvcr.io/nvidia/pytorch:21.11-py3&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;It is assumed that, by now, you have completed the dataset preprocessing in Part 2 and have your tiled satellite imagery dataset in the local directory &lt;code&gt;train_images_rgb_no_neg/train_images_300_02&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Let&apos;s get started!&lt;/p&gt;
&lt;h2&gt;Execute Docker run to create NGC environment for data prep&lt;/h2&gt;
&lt;p&gt;Make sure to map a host directory into the Docker container (the &lt;code&gt;-v&lt;/code&gt; flag below); you will reuse this host directory in later steps:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;docker run   --gpus all --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 -v /home/ubuntu:/home/ubuntu  -p 8008:8888 -it nvcr.io/nvidia/pytorch:21.11-py3  /bin/bash&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Run Jupyter Notebook command within Docker container to access it on your local browser&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;cd /home/ubuntu&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;jupyter lab --ip=0.0.0.0 --port=8888 --NotebookApp.token=&apos;&apos; --NotebookApp.password=&apos;&apos;&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;%%capture
!pip install cython pycocotools matplotlib terminaltables
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;TLDR; Run training job on 4 GPUs&lt;/h2&gt;
&lt;p&gt;The below cell will run a multi-gpu training job. This job will train an object detection model (faster-rcnn) on a dataset of satellite imagery images that contain 61 classes of objects.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Change &lt;code&gt;nproc_per_node&lt;/code&gt; argument to specify the number of GPUs available on your server&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;!torchrun --nproc_per_node=4 detection/train.py\
    --dataset coco --data-path=xview_dataset/ --model fasterrcnn_resnet50_fpn --epochs 26\
    --lr-steps 16 22 --aspect-ratio-group-factor 3
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;1. Object detection on satellite imagery with PyTorch (single GPU)&lt;/h3&gt;
&lt;p&gt;Follow and run the code to train a Faster R-CNN FPN (ResNet-50 backbone) model that detects objects in the tiled satellite imagery.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import sys
sys.path.insert(0,&apos;detection&apos;)
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Import python dependencies
import datetime
import os
import time

import torch
import torch.utils.data
import torchvision
import torchvision.models.detection
import torchvision.models.detection.mask_rcnn

from coco_utils import get_coco, get_coco_kp

from group_by_aspect_ratio import GroupedBatchSampler, create_aspect_ratio_groups
from engine import train_one_epoch, evaluate

import presets
import utils
from train import get_dataset, get_transform
from models import build_frcnn_model
from PIL import Image
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from collections import OrderedDict
from tqdm import tqdm
from vis_utils import load_determined_state_dict, visualize_pred, visualize_gt, predict
import numpy as np

import matplotlib.pyplot as plt
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;output_dir=&apos;output&apos;
data_path=&apos;xview_dataset/&apos;
dataset_name=&apos;coco&apos;
model_name=&apos;fasterrcnn_resnet50_fpn&apos;
device=&apos;cpu&apos;
batch_size=8
epochs=26
workers=4
lr=0.02
momentum=0.9
weight_decay=1e-4
lr_scheduler=&apos;multisteplr&apos;
lr_step_size=8
lr_steps=[16, 22]
lr_gamma=0.1
print_freq=20
resume=False
start_epoch=0
aspect_ratio_group_factor=3
rpn_score_thresh=None
trainable_backbone_layers=None
data_augmentation=&apos;hflip&apos;
pretrained=True
test_only=False
sync_bn=False
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Import the dataset.
# Data loading code
print(&quot;Loading data&quot;)

dataset, num_classes = get_dataset(dataset_name, &quot;train&quot;, get_transform(True, data_augmentation),
                                   data_path)
dataset_test, _ = get_dataset(dataset_name, &quot;val&quot;, get_transform(False, data_augmentation), data_path)
print(dataset.num_classes)
print(&quot;Creating data loaders&quot;)
train_sampler = torch.utils.data.RandomSampler(dataset)
test_sampler = torch.utils.data.SequentialSampler(dataset_test)
group_ids = create_aspect_ratio_groups(dataset, k=aspect_ratio_group_factor)
train_batch_sampler = GroupedBatchSampler(train_sampler, group_ids, batch_size)
train_batch_sampler = torch.utils.data.BatchSampler(
            train_sampler, batch_size, drop_last=True)

data_loader = torch.utils.data.DataLoader(
    dataset, batch_sampler=train_batch_sampler, num_workers=workers,
    collate_fn=utils.collate_fn)

data_loader_test = torch.utils.data.DataLoader(
    dataset_test, batch_size=1,
    sampler=test_sampler, num_workers=0,
    collate_fn=utils.collate_fn)
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Getting three examples from the test dataset
inds_that_have_boxes = []
test_images = list(data_loader_test)
for ind,(im,targets) in tqdm(enumerate(test_images),total=len(list(data_loader_test))):
    # print(ind,targets)
    if targets[0][&apos;boxes&apos;].shape[0]&gt;0:
        # print(targets[0][&apos;boxes&apos;].shape[0])
        # print(ind,targets)
        inds_that_have_boxes.append(ind)

images_t_list=[]
targets_t_list=[]
for ind in tqdm(range(3)):
    im,targets = test_images[ind]
    images_t_list.append(im[0])
    targets_t_list.append(targets[0])
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Let&apos;s have a look at one of the images. The following code visualizes the images using the matplotlib library.
im = Image.fromarray((255.*images_t_list[0].cpu().permute((1,2,0)).numpy()).astype(np.uint8))
plt.imshow(im)
plt.show()
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Let&apos;s look again at the first three images, but this time with the class names.

for i,t in zip(images_t_list,targets_t_list):
    im = Image.fromarray((255.*i.cpu().permute((1,2,0)).numpy()).astype(np.uint8))
    plt.imshow(im)
    plt.show()
    im = visualize_gt(i,t)
    plt.imshow(im)
    plt.show()
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Let&apos;s build the model:
print(&quot;Creating model&quot;)
print(&quot;Number of classes: &quot;,dataset.num_classes)
model = build_frcnn_model(num_classes=dataset.num_classes)
_=model.to(&apos;cpu&apos;)
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Compile the model:
# Define loss function, optimizer, and metrics.
params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(
    params, lr=lr, momentum=momentum, weight_decay=weight_decay)

lr_scheduler = lr_scheduler.lower()
if lr_scheduler == &apos;multisteplr&apos;:
    lr_scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, 
                                                        milestones=lr_steps, 
                                                        gamma=lr_gamma)
elif lr_scheduler == &apos;cosineannealinglr&apos;:
    lr_scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)
else:
    raise RuntimeError(&quot;Invalid lr scheduler &apos;{}&apos;. Only MultiStepLR and CosineAnnealingLR &quot;
                       &quot;are supported.&quot;.format(lr_scheduler))
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Train the model:
# Train the model for the configured number of epochs. After every epoch, training time, loss, and accuracy will be displayed.
print(&quot;Start training&quot;)
start_time = time.time()
for epoch in range(start_epoch, epochs):
    train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq)
    lr_scheduler.step()
    if output_dir:
        checkpoint = {
            &apos;model&apos;: model.state_dict(),
            &apos;optimizer&apos;: optimizer.state_dict(),
            &apos;lr_scheduler&apos;: lr_scheduler.state_dict(),
            # the original script also stored its argparse &apos;args&apos; here; omitted because hyperparameters are plain notebook variables
            &apos;epoch&apos;: epoch
        }
        utils.save_on_master(
            checkpoint,
            os.path.join(output_dir, &apos;model_{}.pth&apos;.format(epoch)))
        utils.save_on_master(
            checkpoint,
            os.path.join(output_dir, &apos;checkpoint.pth&apos;))

    # evaluate after every epoch
    evaluate(model, data_loader_test, device=device)

total_time = time.time() - start_time
total_time_str = str(datetime.timedelta(seconds=int(total_time)))
print(&apos;Training time {}&apos;.format(total_time_str))
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def build_frcnn_model2(num_classes):
    print(&quot;Loading pretrained model...&quot;)
    # load a detection model pre-trained on COCO
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)

    # get the number of input features for the classifier
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    # replace the pre-trained head with a new one
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    model.min_size=800
    model.max_size=1333
    # RPN parameters
    model.rpn_pre_nms_top_n_train=2000
    model.rpn_pre_nms_top_n_test=1000
    model.rpn_post_nms_top_n_train=2000
    model.rpn_post_nms_top_n_test=1000
    model.rpn_nms_thresh=1.0
    model.rpn_fg_iou_thresh=0.7
    model.rpn_bg_iou_thresh=0.3
    model.rpn_batch_size_per_image=256
    model.rpn_positive_fraction=0.5
    model.rpn_score_thresh=0.05
    # Box parameters
    model.box_score_thresh=0.0
    model.box_nms_thresh=1.0
    model.box_detections_per_img=500
    model.box_fg_iou_thresh=1.0
    model.box_bg_iou_thresh=1.0
    model.box_batch_size_per_image=512
    model.box_positive_fraction=0.25
    return model
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Let&apos;s see how the model performs on the test data:
model = build_frcnn_model2(num_classes=61)
ckpt = torch.load(&apos;model_8.pth&apos;,map_location=&apos;cpu&apos;)
model.load_state_dict(ckpt[&apos;model&apos;])
_=model.eval()
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;_=predict(model,images_t_list,targets_t_list)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In the next part of this blog post, I will show you how to scale your model training using distributed training within HPE Machine Learning Development Environment &amp;#x26; System.&lt;/p&gt;
&lt;h1&gt;Part 4: Training on HPE Machine Learning Development Environment &amp;#x26; System&lt;/h1&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/solutions/artificial-intelligence/machine-learning-development-environment.html&quot;&gt;HPE Machine Learning Development Environment&lt;/a&gt; is a training platform software that reduces complexity for ML researchers and helps research teams collaborate. HPE combines this incredibly powerful training platform with best-of-breed hardware and interconnect in &lt;a href=&quot;https://www.hpe.com/us/en/hpe-machine-learning-development-system.html&quot;&gt;HPE Machine Learning Development System&lt;/a&gt;, an AI turnkey solution that will be used for the duration of the tutorial.&lt;/p&gt;
&lt;p&gt;This notebook walks you through the commands to run the same training you did in Part 3, but using HPE Machine Learning Development Environment together with the PyTorchTrial API.
All the code is configured to run out of the box. The main change is defining a &lt;code&gt;class ObjectDetectionTrial(PyTorchTrial)&lt;/code&gt; to incorporate the model, optimizer, dataset, and other training loop essentials.
You can view implementation details by looking at &lt;code&gt;determined_files/model_def.py&lt;/code&gt;&lt;/p&gt;
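&lt;p&gt;For orientation, here is a highly simplified skeleton of what such a trial class might look like. This is only a sketch: the full implementation lives in &lt;code&gt;determined_files/model_def.py&lt;/code&gt; in the repository, and details such as the helper functions and hyperparameter names will differ.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Simplified sketch of a PyTorchTrial for this tutorial (not the full model_def.py)
import torch
from determined.pytorch import DataLoader, PyTorchTrial, PyTorchTrialContext

from models import build_frcnn_model  # helper from the tutorial repository


class ObjectDetectionTrial(PyTorchTrial):
    def __init__(self, context: PyTorchTrialContext):
        self.context = context
        # Wrap the model and optimizer so Determined can manage distributed
        # training, checkpointing, and hyperparameter injection.
        model = build_frcnn_model(num_classes=61)
        self.model = self.context.wrap_model(model)
        optimizer = torch.optim.SGD(
            self.model.parameters(),
            lr=self.context.get_hparam(&quot;lr&quot;),
            momentum=self.context.get_hparam(&quot;momentum&quot;),
            weight_decay=self.context.get_hparam(&quot;weight_decay&quot;),
        )
        self.optimizer = self.context.wrap_optimizer(optimizer)

    def build_training_data_loader(self) -&gt; DataLoader:
        ...  # return a determined.pytorch.DataLoader over the sliced xView training set

    def build_validation_data_loader(self) -&gt; DataLoader:
        ...  # return a DataLoader over the validation tiles

    def train_batch(self, batch, epoch_idx, batch_idx):
        images, targets = batch
        # torchvision detection models return a dict of losses in training mode
        loss_dict = self.model(images, targets)
        loss = sum(loss_dict.values())
        self.context.backward(loss)
        self.context.step_optimizer(self.optimizer)
        return {&quot;loss&quot;: loss}

    def evaluate_batch(self, batch):
        ...  # compute validation metrics such as mAP
&lt;/code&gt;&lt;/pre&gt;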
&lt;p&gt;Here, I will show you how to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Run a distributed training experiment&lt;/li&gt;
&lt;li&gt;Run a distributed hyperparameter search&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;em&gt;Note: This notebook was tested on a deployed HPE Machine Learning Development System cluster, running HPE Machine Learning Development Environment (0.21.2-dev0).&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Let&apos;s get started!&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;Pre-req: Run startup-hook.sh&lt;/h2&gt;
&lt;p&gt;This script will install some python dependencies, and install dataset labels needed when loading the xView dataset:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Download the xView label files used when loading the dataset
wget &quot;https://determined-ai-xview-coco-dataset.s3.us-west-2.amazonaws.com/train_sliced_no_neg/train_300_02.json&quot;
mkdir /tmp/train_sliced_no_neg/
mv train_300_02.json /tmp/train_sliced_no_neg/train_300_02.json 
wget &quot;https://determined-ai-xview-coco-dataset.s3.us-west-2.amazonaws.com/val_sliced_no_neg/val_300_02.json&quot;
mkdir /tmp/val_sliced_no_neg
mv val_300_02.json /tmp/val_sliced_no_neg/val_300_02.json
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;em&gt;Note that completing this tutorial requires you to upload your dataset from Part 2 into a publicly accessible S3 bucket. This enables a large-scale distributed experiment to access the dataset without installing it on each device. View &lt;a href=&quot;https://docs.determined.ai/latest/model-dev-guide/load-model-data.html&quot;&gt;Determined Documentation&lt;/a&gt; and &lt;a href=&quot;https://codingsight.com/upload-files-to-aws-s3-with-the-aws-cli/&quot;&gt;AWS instructions&lt;/a&gt; to learn how to upload your dataset to an S3 bucket. Review the&lt;/em&gt; &lt;code&gt;S3Backend&lt;/code&gt; class in &lt;code&gt;data.py&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Once you have defined your S3 bucket and uploaded your dataset, make sure to change &lt;code&gt;TRAIN_DATA_DIR&lt;/code&gt; in &lt;code&gt;build_training_data_loader&lt;/code&gt; to the corresponding path in the S3 bucket.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def build_training_data_loader(self) -&gt; DataLoader:
    # CHANGE TRAIN_DATA_DIR with different path on S3 bucket
    TRAIN_DATA_DIR=&apos;determined-ai-xview-coco-dataset/train_sliced_no_neg/train_images_300_02/&apos;

    dataset, num_classes = build_xview_dataset(image_set=&apos;train&apos;,args=AttrDict({
                                            &apos;data_dir&apos;:TRAIN_DATA_DIR,
                                            &apos;backend&apos;:&apos;aws&apos;,
                                            &apos;masks&apos;: None,
                                            }))
    print(&quot;--num_classes: &quot;,num_classes)

    train_sampler = torch.utils.data.RandomSampler(dataset)

    data_loader = DataLoader(
                             dataset, 
                             batch_sampler=None,
                             shuffle=True,
                             num_workers=self.hparams.num_workers, 
                             collate_fn=unwrap_collate_fn)
    print(&quot;NUMBER OF BATCHES IN COCO: &quot;,len(data_loader))# 59143, 7392 for mini coco

    return data_loader
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;!bash demos/xview-torchvision-coco/startup-hook.sh 
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Define environment variable DET_MASTER and login in terminal&lt;/h2&gt;
&lt;p&gt;Run the below commands in a terminal and log in to the Determined cluster, changing &lt;code&gt;&amp;#x3C;username&gt;&lt;/code&gt; to your username.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;export DET_MASTER=10.182.1.43&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;det user login &amp;#x3C;username&gt;&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Define Determined experiment&lt;/h2&gt;
&lt;p&gt;In &lt;a href=&quot;https://www.determined.ai/&quot;&gt;Determined&lt;/a&gt;, a &lt;em&gt;trial&lt;/em&gt; is a training task that consists of a dataset, a deep learning model, and values for all of the model’s hyperparameters. An &lt;em&gt;experiment&lt;/em&gt; is a collection of one or more trials: an experiment can either train a single model (with a single trial), or can train multiple models via a hyperparameter sweep over a user-defined hyperparameter space.&lt;/p&gt;
&lt;p&gt;Here is what a configuration file looks like for a distributed training experiment.&lt;/p&gt;
&lt;p&gt;Below are the contents of &lt;code&gt;determined_files/const-distributed.yaml&lt;/code&gt;. &lt;code&gt;slots_per_trial: 8&lt;/code&gt; specifies that 8 GPUs will be used for this experiment.&lt;/p&gt;
&lt;p&gt;Change the workspace and project settings in the &lt;code&gt;determined_files/const-distributed.yaml&lt;/code&gt; file before submitting the experiment.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;name: resnet_fpn_frcnn_xview_dist_warmup
workspace: &amp;#x3C;WORKSPACE_NAME&gt;
project: &amp;#x3C;PROJECT_NAME&gt;
profiling:
 enabled: true
 begin_on_batch: 0
 end_after_batch: null
hyperparameters:
    lr: 0.01
    momentum: 0.9
    global_batch_size: 128
    # global_batch_size: 16
    weight_decay: 1.0e-4
    gamma: 0.1
    warmup: linear
    warmup_iters: 200
    warmup_ratio: 0.001

    step1: 18032 # 14 epochs: 14*1288 == 18,032
    step2: 19320 # 15 epochs: 15*1288 == 19,320
    model: fasterrcnn_resnet50_fpn
    # Dataset
    dataset_file: coco
    backend: aws # specify the backend you want to use.  one of: gcs, aws, fake, local
    data_dir: determined-ai-coco-dataset # bucket name if using gcs or aws, otherwise directory to dataset
    masks: false
    num_workers: 4
    device: cuda
environment:
    image: determinedai/environments:cuda-11.3-pytorch-1.10-tf-2.8-gpu-mpi-0.19.10
    environment_variables:                                                                          
        - NCCL_DEBUG=INFO                                                                           
        # You may need to modify this to match your network configuration.                          
        - NCCL_SOCKET_IFNAME=ens,eth,ib
bind_mounts:
    - host_path: /tmp
      container_path: /data
      read_only: false
scheduling_unit: 400
min_validation_period:
    batches: 1288 # For training

searcher:
  name: single
  metric: mAP
  smaller_is_better: true
  max_length:
    batches: 38640 # 30*1288 == 38,640 # Real Training
records_per_epoch: 1288
resources:
    slots_per_trial: 8
    shm_size: 2000000000
max_restarts: 0

entrypoint: python3 -m determined.launch.torch_distributed --trial model_def:ObjectDetectionTrial
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Run the below cell to kick off an experiment
!det e create determined_files/const-distributed.yaml determined_files/
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;Preparing files to send to master... 237.5KB and 36 files
Created experiment 77
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Launching a distributed hyperparameter search experiment&lt;/h2&gt;
&lt;p&gt;To implement an automatic hyperparameter tuning experiment, define the hyperparameter space, e.g. by listing the decisions that may impact model performance. You can specify a range of possible values in the experiment configuration for each hyperparameter in the search space.&lt;/p&gt;
&lt;p&gt;View the &lt;code&gt;determined_files/const-distributed-search.yaml&lt;/code&gt; file, which defines a hyperparameter search over model architectures to find the one that achieves the best performance on the dataset.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;name: xview_frxnn_search
workspace: Andrew
project: Xview FasterRCNN
profiling:
 enabled: true
 begin_on_batch: 0
 end_after_batch: null
hyperparameters:
    lr: 0.01
    momentum: 0.9
    global_batch_size: 128
    weight_decay: 1.0e-4
    gamma: 0.1
    warmup: linear
    warmup_iters: 200
    warmup_ratio: 0.001
    step1: 18032 # 14 epochs: 14*1288 == 18,032
    step2: 19320 # 15 epochs: 15*1288 == 19,320
    model:
      type: categorical
      vals: [&apos;fasterrcnn_resnet50_fpn&apos;,&apos;fcos_resnet50_fpn&apos;, &apos;ssd300_vgg16&apos;,&apos;ssdlite320_mobilenet_v3_large&apos;,&apos;resnet152_fasterrcnn_model&apos;,&apos;efficientnet_b4_fasterrcnn_model&apos;,&apos;convnext_large_fasterrcnn_model&apos;,&apos;convnext_small_fasterrcnn_model&apos;]

    # Dataset
    dataset_file: coco
    backend: aws # specify the backend you want to use.  one of: gcs, aws, fake, local
    data_dir: determined-ai-coco-dataset # bucket name if using gcs or aws, otherwise directory to dataset
    masks: false
    num_workers: 4

    device: cuda
environment:
    environment_variables:                                                                          
        - NCCL_DEBUG=INFO                                                                           
        # You may need to modify this to match your network configuration.                          
        - NCCL_SOCKET_IFNAME=ens,eth,ib
bind_mounts:
    - host_path: /tmp
      container_path: /data
      read_only: false
scheduling_unit: 400
# scheduling_unit: 40
min_validation_period:
    batches: 1288 # For Real training
searcher:
  name: grid
  metric: mAP
  smaller_is_better: false
  max_length:
    batches: 51520 # 50*1288 == 51520# Real Training

records_per_epoch: 1288
resources:
    # slots_per_trial: 16
    slots_per_trial: 8
    shm_size: 2000000000
max_restarts: 0

entrypoint: python3 -m determined.launch.torch_distributed --trial model_def:ObjectDetectionTrial
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Run the below cell to run a hyperparameter search experiment
!det e create determined_files/const-distributed-search.yaml determined_files/
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;Preparing files to send to master... 312.2KB and 40 files
Created experiment 79
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Load checkpoint of trained experiment&lt;/h2&gt;
&lt;p&gt;Once the experiment is completed, replace the &lt;code&gt;&amp;#x3C;EXP_ID&gt;&lt;/code&gt; placeholder (the &lt;code&gt;experiment_id&lt;/code&gt; value in the cell below) with your experiment ID and run the cells.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from determined_files.utils.model import build_frcnn_model
from utils import load_model_from_checkpoint
from determined.experimental import Determined,client
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;experiment_id = 76
MODEL_NAME = &quot;xview-fasterrcnn&quot;
checkpoint = client.get_experiment(experiment_id).top_checkpoint(sort_by=&quot;mAP&quot;, smaller_is_better=False)
print(checkpoint.uuid)
loaded_model = load_model_from_checkpoint(checkpoint)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now that you have a checkpoint from the trained object detection model, you can deploy it to Kserve to run inference and predictions.&lt;/p&gt;
&lt;h1&gt;Part 5: Deploying trained model on Kserve&lt;/h1&gt;
&lt;p&gt;This notebook walks you through each step to deploy a custom object detection model on KServe.&lt;/p&gt;
&lt;p&gt;Here, I will show you how to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Install Kserve natively using Kind and Knative&lt;/li&gt;
&lt;li&gt;Create a Persistent Volume Claim for local model deployment&lt;/li&gt;
&lt;li&gt;Prepare a custom model for Kserve inference&lt;/li&gt;
&lt;li&gt;Deploy the model using a KServe InferenceService&lt;/li&gt;
&lt;li&gt;Complete a sample request and plot predictions&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;em&gt;Note: This notebook was tested on a Linux-based machine with Nvidia T4 GPUs. We also assume Docker is installed in your Linux system/environment&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Let&apos;s get started!&lt;/p&gt;
&lt;h2&gt;Pre-reqs: Setting up Python and Jupyter Lab environment&lt;/h2&gt;
&lt;p&gt;Run the below commands to set up a Python virtual environment, and install all the Python packages needed for this tutorial&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-cwl&quot;&gt;sudo apt-get update &amp;#x26;&amp;#x26; sudo apt-get  install python3.8-venv
python3 -m venv kserve_env
source kserve_env/bin/activate
pip install kserve jupyterlab torch-model-archiver
pip install torch==1.11.0 torchvision==0.12.0 matplotlib
jupyter lab --ip=0.0.0.0 \
  --port=8008 \
  --NotebookApp.token=&apos;&apos; \
  --NotebookApp.password=&apos;&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Install Kserve natively using Kind and Knative&lt;/h2&gt;
&lt;h3&gt;Install Kind&lt;/h3&gt;
&lt;p&gt;Open a terminal and run the following bash commands to install a Kubernetes cluster using Kind:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.18.0/kind-linux-amd64&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;chmod +x ./kind&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sudo mv ./kind /usr/local/bin/kind&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;After running these commands, create a cluster by running the command: &lt;code&gt;kind create cluster&lt;/code&gt;&lt;/p&gt;
&lt;h3&gt;Install Kubectl&lt;/h3&gt;
&lt;p&gt;Run the following bash commands in a terminal to install the kubectl runtime:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;curl -LO &quot;https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl&quot;&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;curl -LO &quot;https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256&quot;&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Install Kserve&lt;/h3&gt;
&lt;p&gt;Run this bash script to install KServe onto your default Kubernetes cluster. Note that this will install the following artifacts:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;ISTIO_VERSION=1.15.2, KNATIVE_VERSION=knative-v1.9.0, KSERVE_VERSION=v0.9.0-rc0, CERT_MANAGER_VERSION=v1.3.0&lt;/li&gt;
&lt;li&gt;&lt;code&gt;bash e2e_blogposts/ngc_blog/kserve_utils/bash_scripts/kserve_install.sh&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Patch domain for local connection to KServe cluster/environment&lt;/h3&gt;
&lt;p&gt;Run this command to patch your cluster when you want to connect to your cluster on the same machine:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;kubectl patch cm config-domain --patch &apos;{&quot;data&quot;:{&quot;example.com&quot;:&quot;&quot;}}&apos; -n knative-serving&lt;/code&gt;&lt;/p&gt;
&lt;h3&gt;Run port forwarding to access KServe cluster&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;INGRESS_GATEWAY_SERVICE=$(kubectl get svc --namespace istio-system --selector=&quot;app=istio-ingressgateway&quot; --output jsonpath=&apos;{.items[0].metadata.name}&apos;)&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kubectl port-forward --namespace istio-system svc/${INGRESS_GATEWAY_SERVICE} 8080:80&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Make sure to open a new terminal to continue the configuration.&lt;/p&gt;
&lt;h3&gt;Create a persistent volume claim for local model deployment&lt;/h3&gt;
&lt;p&gt;You will be creating a persistent volume claim to host and access the PyTorch-based object detection model locally. A persistent volume claim requires three k8s artifacts:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A persistent volume&lt;/li&gt;
&lt;li&gt;A persistent volume claim&lt;/li&gt;
&lt;li&gt;A k8s pod that connects the PVC to be accessed by other k8s resources&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Creating a persistent volume and persistent volume claim&lt;/h3&gt;
&lt;p&gt;Below is the yaml definition that defines the Persistent Volume (PV) and a PersistentVolumeClaim (PVC). We already created a file that defines this PV in &lt;code&gt;k8s_files/pv-and-pvc.yaml&lt;/code&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: &quot;/home/ubuntu/mnt/data&quot;
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To create the PV and PVC, run the command: &lt;code&gt;kubectl apply -f k8s_files/pv-and-pvc.yaml&lt;/code&gt;&lt;/p&gt;
&lt;h3&gt;Create k8s pod to access PVC&lt;/h3&gt;
&lt;p&gt;Below is the yaml definition that defines the k8s Pod that mounts the PersistentVolumeClaim (PVC). We already created a file that defines this PV in &lt;code&gt;k8s_files/model-store-pod.yaml&lt;/code&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;apiVersion: v1
kind: Pod
metadata:
  name: model-store-pod
spec:
  volumes:
    - name: model-store
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: model-store
      image: ubuntu
      command: [ &quot;sleep&quot; ]
      args: [ &quot;infinity&quot; ]
      volumeMounts:
        - mountPath: &quot;/pv&quot;
          name: model-store
      resources:
        limits:
          memory: &quot;1Gi&quot;
          cpu: &quot;1&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To create the Pod, run the command: &lt;code&gt;kubectl apply -f k8s_files/model-store-pod.yaml&lt;/code&gt;&lt;/p&gt;
&lt;h2&gt;Preparing custom model for Kserve inference&lt;/h2&gt;
&lt;p&gt;Here we will complete some preparation steps to deploy a trained custom FasterRCNN object detection model using KServe. A pre-requisite is to download a checkpoint from a Determined experiment. You can read this &lt;a href=&quot;https://docs.determined.ai/latest/training/model-management/checkpoints.html#downloading-checkpoints-using-the-cli&quot;&gt;tutorial&lt;/a&gt; on how to download a checkpoint using the Determined CLI. For this tutorial, you can download an already prepared checkpoint using the following bash command:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;wget -O kserve_utils/torchserve_utils/trained_model.pth https://determined-ai-xview-coco-dataset.s3.us-west-2.amazonaws.com/trained_model.pth&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Stripping the checkpoint of the optimizer state dictionary&lt;/h3&gt;
&lt;p&gt;Checkpoints created from a Determined experiment save both the model parameters and the optimizer parameters. You will need to strip the checkpoint of all parameters except the model parameters for inference. The command below generates &lt;code&gt;trained_model_stripped.pth&lt;/code&gt;:&lt;/p&gt;
&lt;p&gt;Run the below command in a terminal:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-cwl&quot;&gt;python kserve_utils/torchserve_utils/strip_checkpoint.py --ckpt-path kserve_utils/torchserve_utils/trained_model.pth \
  --new-ckpt-name kserve_utils/torchserve_utils/trained_model_stripped.pth
&lt;/code&gt;&lt;/pre&gt;
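&lt;p&gt;The idea behind this step is simple: load the full checkpoint and save only the model weights. A rough sketch is shown below; the checkpoint key name is illustrative, and the actual &lt;code&gt;strip_checkpoint.py&lt;/code&gt; script handles Determined&apos;s real checkpoint layout:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Rough sketch only: keep just the model weights from a full training checkpoint
import torch

ckpt = torch.load(&quot;trained_model.pth&quot;, map_location=&quot;cpu&quot;)

# The key that holds the model weights is illustrative; Determined checkpoints may differ
model_state = ckpt.get(&quot;model&quot;, ckpt)

torch.save(model_state, &quot;trained_model_stripped.pth&quot;)
&lt;/code&gt;&lt;/pre&gt;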
&lt;h3&gt;Run TorchServe Export to create .mar file&lt;/h3&gt;
&lt;p&gt;Run the below command to export the PyTorch checkpoint into a .mar file, which is required for TorchServe inference. The KServe InferenceService will automatically deploy a Pod with a Docker image that supports TorchServe inferencing.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-cwl&quot;&gt;torch-model-archiver --model-name xview-fasterrcnn \
  --version 1.0 \
  --model-file kserve_utils/torchserve_utils/model-xview.py \
  --serialized-file kserve_utils/torchserve_utils/trained_model_stripped.pth \
  --handler kserve_utils/torchserve_utils/fasterrcnn_handler.py \
  --extra-files kserve_utils/torchserve_utils/index_to_name.json
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After the command finishes, run the following command to copy the file into the prepared &lt;code&gt;model-store/&lt;/code&gt; directory:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;cp xview-fasterrcnn.mar kserve_utils/torchserve_utils/model-store -v&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Copy &lt;code&gt;config/&lt;/code&gt; and &lt;code&gt;model-store/&lt;/code&gt; folders to the K8S PVC Pod&lt;/h3&gt;
&lt;p&gt;This is the directory structure needed to prepare your custom PyTorch model for KServe inferencing:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;├── config
│   └── config.properties
├── model-store
│   ├── properties.json
│   └── xview-fasterrcnn.mar
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;What the config.properties file looks like&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;inference_address=http://0.0.0.0:8085
management_address=http://0.0.0.0:8085
metrics_address=http://0.0.0.0:8082
grpc_inference_port=7070
grpc_management_port=7071
enable_metrics_api=true
metrics_format=prometheus
number_of_netty_threads=4
job_queue_size=10
enable_envvars_config=true
install_py_dep_per_model=true
model_store=/mnt/models/model-store
model_snapshot={&quot;name&quot;: &quot;startup.cfg&quot;,&quot;modelCount&quot;: 1,&quot;models&quot;: {&quot;xview-fasterrcnn&quot;: {&quot;1.0&quot;: {&quot;defaultVersion&quot;: true,&quot;marName&quot;: &quot;xview-fasterrcnn.mar&quot;,&quot;serialized-file&quot;:&quot;trained_model_stripped.pth&quot;,&quot;extra-files&quot;:&quot;index_to_name.json&quot;,&quot;handler&quot;:&quot;fasterrcnn_handler.py&quot;,&quot;minWorkers&quot;: 1,&quot;maxWorkers&quot;: 5,&quot;batchSize&quot;: 1,&quot;maxBatchDelay&quot;: 100,&quot;responseTimeout&quot;: 120}}}}
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;What the properties.json looks like&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;[
    {
    &quot;model-name&quot;: &quot;xview-fasterrcnn&quot;,
    &quot;version&quot;: &quot;1.0&quot;,
    &quot;model-file&quot;: &quot;&quot;,
    &quot;serialized-file&quot;: &quot;trained_model_stripped.pth&quot;,
    &quot;extra-files&quot;: &quot;index_to_name.json&quot;,
    &quot;handler&quot;: &quot;fasterrcnn_handler.py&quot;,
    &quot;min-workers&quot; : 1,
    &quot;max-workers&quot;: 3,
    &quot;batch-size&quot;: 1,
    &quot;max-batch-delay&quot;: 100,
    &quot;response-timeout&quot;: 120,
    &quot;requirements&quot;: &quot;&quot;
  }
]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note that there is a &lt;code&gt;config/&lt;/code&gt; folder that includes &lt;code&gt;config.properties&lt;/code&gt;, which defines the TorchServe server settings and model snapshot shown above. There is also a &lt;code&gt;model-store/&lt;/code&gt; directory that contains our exported model and a &lt;code&gt;properties.json&lt;/code&gt; file describing that model. You will need both folders for the InferenceService deployment.&lt;/p&gt;
&lt;p&gt;Now, run several kubectl commands to copy over these folders into your Pod and into the PVC defined directory.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;kubectl cp kserve_utils/torchserve_utils/config/ model-store-pod:/pv/config/&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kubectl cp kserve_utils/torchserve_utils/model-store/ model-store-pod:/pv/model-store/&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Run these commands to verify the contents have been copied over to the pod.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;kubectl exec --tty model-store-pod -- ls /pv/config&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kubectl exec --tty model-store-pod -- ls /pv/model-store&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Deploying a model using a KServe InferenceService&lt;/h2&gt;
&lt;h3&gt;Create Inference Service&lt;/h3&gt;
&lt;p&gt;Below is the yaml definition that defines the KServe InferenceService that deploys models stored in the PVC. A file that defines this PV has already been created in &lt;code&gt;k8s_files/torch-kserve-pvc.yaml&lt;/code&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;apiVersion: &quot;serving.kserve.io/v1beta1&quot;
kind: &quot;InferenceService&quot;
metadata:
  name: &quot;torchserve&quot;
spec:
  predictor:
    pytorch:
      storageUri: pvc://task-pv-claim/
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To create the Pod, run the command: &lt;code&gt;kubectl apply -f kserve_utils/k8s_files/torch-kserve-pvc.yaml&lt;/code&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;After running the previous command &lt;code&gt;kubectl get inferenceservice&lt;/code&gt;, you should see that the inferenceservice is not loaded yet&lt;/li&gt;
&lt;li&gt;Keep running the command every minute until you see the InferenceService loaded (view screenshot below of example)&lt;/li&gt;
&lt;li&gt;Next, run command &lt;code&gt;kubectl get pods&lt;/code&gt; to get underlying pod that is running inference service. Copy the pod name (example seen in screenshot)&lt;/li&gt;
&lt;li&gt;Finally, run the command &lt;code&gt;kubectl logs -f &amp;#x3C;POD_NAME&gt;&lt;/code&gt; to see the logs and check whether the model was successfully loaded.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Complete a sample request and plot predictions&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import json
import requests
import base64
from PIL import Image
from PIL import ImageDraw
import matplotlib.pyplot as plt 
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;filename=&apos;kserve_utils/torchserve_utils/example_img.jpg&apos;
im = Image.open(filename)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here is the test image that will be sent to the deployed model.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;im
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/output_26_0.png&quot; alt=&quot;png&quot;&gt;&lt;/p&gt;
&lt;p&gt;Now, encode the image into the base64 binary format.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;image = open(filename, &apos;rb&apos;)  # open binary file in read mode
image_read = image.read()
image_64_encode = base64.b64encode(image_read)
bytes_array = image_64_encode.decode(&apos;utf-8&apos;)
request = {
  &quot;instances&quot;: [
    {
      &quot;data&quot;: bytes_array
    }
  ]
}
result_file = &quot;{filename}.{ext}&quot;.format(filename=str(filename).split(&quot;.&quot;)[0], ext=&quot;json&quot;)
print(&quot;Result File: &quot;,result_file)
with open(result_file, &apos;w&apos;) as outfile:
    json.dump(request, outfile, indent=4, sort_keys=True)
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;Result File:  kserve_utils/torchserve_utils/example_img.json
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now you can submit the image to the deployed endpoint and visualize the predictions!&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;headers = {
    &quot;Host&quot;: &quot;torchserve.default.example.com&quot;
}

data = open(result_file)
response = requests.post(&apos;http://localhost:8080/v1/models/xview-fasterrcnn:predict&apos;, headers=headers, data=data)

resp = json.loads(response.content)

print(&quot;Number of Predictions: &quot;, len(resp[&apos;predictions&apos;][0]))
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;Number of Predictions:  95
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;draw = ImageDraw.Draw(im)
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;for pred in resp[&apos;predictions&apos;][0]:
    # Each prediction holds a class-name key (mapped to the bounding box) and a &apos;score&apos; key.
    assert len(list(pred.keys())) == 2
    cl_name = list(pred.keys())[0]
    bboxes = pred[cl_name]
    # Only draw boxes for reasonably confident predictions.
    if pred[&apos;score&apos;] &gt; 0.4:
        draw.rectangle([bboxes[0], bboxes[1], bboxes[2], bboxes[3]], outline=(255,0,0), fill=None, width=1)
        draw.text([bboxes[0], bboxes[1] - 10], &quot;{} :{:.2f}&quot;.format(cl_name, pred[&apos;score&apos;]), fill=(250,0,0))
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here, you can see the model predictions overlaid onto the input image.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;plt.figure(figsize=(12,12))
plt.imshow(im)
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;&amp;#x3C;matplotlib.image.AxesImage at 0x7ff6341ccf10&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/output_34_1.png&quot; alt=&quot;png&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;There are numerous ways to get started with end-to-end ML model training with HPE. You can find the &lt;a href=&quot;https://github.com/interactivetech/e2e_blogposts/tree/main/ngc_blog&quot;&gt;full example on GitHub here&lt;/a&gt;. To get started with Determined AI’s open source model training platform, visit the &lt;a href=&quot;https://docs.determined.ai/latest/&quot;&gt;Documentation&lt;/a&gt; page.&lt;/p&gt;
&lt;p&gt;If you’re ready to begin your ML journey with HPE, we’re excited to help you get started! &lt;a href=&quot;https://www.hpe.com/us/en/solutions/artificial-intelligence/machine-learning-development-environment.html&quot;&gt;HPE Machine Learning Development Environment&lt;/a&gt; comes with premium HPE support, which you can read more about &lt;a href=&quot;https://www.hpe.com/us/en/solutions/artificial-intelligence/machine-learning-development-environment.html&quot;&gt;here&lt;/a&gt;.  &lt;/p&gt;
&lt;p&gt;HPE also offers a purpose-built, turnkey AI infrastructure for model development and training
at scale. &lt;a href=&quot;https://www.hpe.com/us/en/hpe-machine-learning-development-system.html&quot;&gt;Get in touch with us about HPE Machine Learning Development System here&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Meet Open Source enthusiast and Fpart project developer, Ganael Laplanche]]></title><description><![CDATA[As part of our blog series on open source experts, the HPE Developer team recently met up with Ganael Laplanche, the project developer for…]]></description><link>https://developer.hpe.com/meet-open-source-contributor-and-fpart-fpsync-project-developer-ganael-laplanche/</link><guid isPermaLink="false">https://developer.hpe.com/meet-open-source-contributor-and-fpart-fpsync-project-developer-ganael-laplanche/</guid><pubDate>Mon, 12 Jun 2023 19:54:14 GMT</pubDate><content:encoded>&lt;p&gt;As part of our blog series on open source experts, the HPE Developer team recently met up with Ganael Laplanche, the project developer for &lt;a href=&quot;https://www.fpart.org/&quot;&gt;Fpart&lt;/a&gt;, a sysadmin-oriented tool that helps users sort files and pack them into bags or &apos;partitions&apos;. Here, we&apos;ll introduce you to his work, how it came about, and learn more about what got Ganael involved with working with open source software.&lt;/p&gt;
&lt;h1&gt;Ganael, can you tell us a little about the tools Fpart and Fpsync?&lt;/h1&gt;
&lt;p&gt;The project started when I was working for a renowned center for biomedical research after a discussion with a friend of mine. We wanted to implement a fast bin-packing tool to produce filesystem tree partitions with the same size and number of files. The tool quickly evolved and got support for hooks that can be triggered when a partition is generated.&lt;/p&gt;
&lt;p&gt;At that time, we needed to move petabyte-scale filesystems to freshly-acquired storage arrays. With its new hooking system, Fpart seemed to be a good foundation for launching small migration jobs in parallel through our SSH cluster. Initial tests (&lt;a href=&quot;https://connect.ed-diamond.com/GNU-Linux-Magazine/glmf-164/parallelisez-vos-transferts-de-fichiers&quot;&gt;see our article in French&lt;/a&gt;) were successful but we were still depending on our on-site scheduler to orchestrate submitted jobs and it was to be retired sooner or later. We needed a new scheduler.&lt;/p&gt;
&lt;p&gt;That&apos;s where &lt;a href=&quot;https://www.fpart.org/fpsync/&quot;&gt;Fpsync&lt;/a&gt; comes into play: the tool wraps Fpart and embeds its own scheduler to trigger small &lt;a href=&quot;https://rsync.samba.org/&quot;&gt;Rsync&lt;/a&gt; jobs to parallelize data migration &lt;em&gt;by itself&lt;/em&gt;. It can leverage your SSH cluster to get the best from your data servers, acting as a powerful, standalone, data migration tool.&lt;/p&gt;
&lt;p&gt;Of course, as an ardent open source supporter, I released those tools under an open source license (the BSD 2-Clause &quot;Simplified&quot; License). They were quickly adopted by large companies (Intel, AWS, Microsoft, Alibaba, Oracle, ...) as well as research centers to migrate petabyte-scale filesystems.&lt;/p&gt;
&lt;h1&gt;What attracted you to free software?&lt;/h1&gt;
&lt;p&gt;I first discovered free software by reading magazines that were surfing on the Linux hype during the mid-90s (and trying the GNU/Linux distros they offered on CD-ROM). But I really began to understand what free software meant later during my studies. I was immediately seduced by the thought that it exemplified humanity&apos;s best attribute: the willingness to share knowledge in order to move forward together.&lt;/p&gt;
&lt;p&gt;As a student, this was very important to me: it enabled me to learn more, as the code is freely available and the open source community very responsive. I quickly felt that I owed the community something in return; I didn&apos;t want to use all that free software (as in free beer) without giving something back. So I started looking at how I could make my own contribution. This is where FreeBSD played an important role, acting as a catalyst...&lt;/p&gt;
&lt;h1&gt;Why did you come to FreeBSD as a development platform?&lt;/h1&gt;
&lt;p&gt;There are several reasons for that choice. As a curious student, I tried &lt;a href=&quot;https://www.freebsd.org/&quot;&gt;FreeBSD&lt;/a&gt; in the early 2000&apos;s, testing version 4.5. What impressed me at that time was its documentation (&lt;a href=&quot;https://docs.freebsd.org/en/books/handbook/&quot;&gt;&quot;handbook&quot;&lt;/a&gt;) and man (&quot;manual&quot;) pages. While GNU/Linux appeared complex to me, FreeBSD suddenly became more clear. With a very nice and welcoming community, it was the perfect platform for a newcomer into the UNIX world. I became hooked on FreeBSD and haven&apos;t returned to any other system since.&lt;/p&gt;
&lt;p&gt;I later came to understand another reason why FreeBSD appeared more clear. It is a homogeneous system, not a patchwork of very different projects. This makes a world of difference, as a &lt;em&gt;specific&lt;/em&gt; version of FreeBSD represents a &lt;em&gt;specific&lt;/em&gt; version of base components (called &quot;world&quot;) and kernel, offering up a complete system. World and kernel are all maintained by the same entity (&lt;a href=&quot;https://docs.freebsd.org/en/articles/contributors/&quot;&gt;FreeBSD developers&lt;/a&gt;) and, because of this, everything is consistent - from any options to the documentation and man pages. This delivers great value for users and guarantees a level of robustness and stability for the system.&lt;/p&gt;
&lt;p&gt;FreeBSD is a good choice for developers because it is POSIX compliant. This is important if you want to produce portable code. Also, it is very easy to access source code for world, kernel and ports (third-party applications ported to FreeBSD). One can easily patch things and test the modifications, which is a bit harder on other systems where you would often have to install a dedicated source package to be able to patch it.&lt;/p&gt;
&lt;p&gt;Finally, the system is a pleasure to administrate and update. I think I have not needed to reinstall my machine since the late 2000&apos;s; I&apos;ve only performed updates since. Third-party applications can now be easily installed and upgraded using binary packages, which was not the case when I first started using FreeBSD.&lt;/p&gt;
&lt;p&gt;These are all the reasons why I use FreeBSD on my systems - not just for servers and development, but also as a daily desktop OS. Lots of people still think FreeBSD is not ready for everyday use on the desktop, but I am living proof that this is not true!&lt;/p&gt;
&lt;h1&gt;What other open source projects are you involved with?&lt;/h1&gt;
&lt;p&gt;I became a FreeBSD developer in 2011 and I now maintain more than 40 ports (a port is a set of patches and build options that makes a piece of software work on FreeBSD. It also acts as the basis for binary packages). Maintaining ports is a fantastic hobby because, on the one hand, you have the chance to work on your favorite OS and, on the other hand, you can contribute patches back upstream. This way, you are always connected with the different communities.&lt;/p&gt;
&lt;p&gt;Aside from my FreeBSD activities, I have several &lt;a href=&quot;https://contribs.martymac.org/&quot;&gt;personal projects&lt;/a&gt;. I mentioned Fpart and Fpsync, but I am also the author of ldapscripts, a set of tools used to simplify user and group management within an LDAP directory. They are quite old now, but they still do the job. I also worked on various smaller projects, such as sms1xxx kernel module (a port of Linux&apos; Siano DVB-T driver to FreeBSD, now deprecated in favor of webcamd), evtViewer (a viewer for Ms event log files) or Grpar (a Build engine group archive extract tool). I also wrote several courses (in French).&lt;/p&gt;
&lt;p&gt;I also try to contribute to software I use when I find a bug (either by fixing it or at least by reporting it).&lt;/p&gt;
&lt;h1&gt;Is there anything else you’d like to share with our readers?&lt;/h1&gt;
&lt;p&gt;I owe a lot to free software. That&apos;s mostly what allowed me to learn computing, making my career possible. That&apos;s why I contribute back as much as I can.&lt;/p&gt;
&lt;p&gt;But that takes time (that is, personal time) and money (we need machines to test on, as well as power to run them). I am glad to see more and more companies supporting open source. Recently, HPE provided me with a replacement for my old server; I&apos;ll never be able to thank them enough for that kindness! This HPE ProLiant ML350 allows me to perform faster Fpart and Fpsync tests as well as compile code far more quickly than with my old machine. This is a sign that things are changing. I think everybody now understands why it is so important to support open source development. Providing hardware is a simple yet very efficient way of supporting open source developers; sharing code is another one. Let&apos;s encourage companies to continue that way!&lt;/p&gt;
&lt;p&gt;As for individuals, do not hesitate to report bugs or share code. You will participate in making great things and get fantastic feedback from the community!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Announcing HPE Swarm Learning 2.0.0]]></title><description><![CDATA[We’re excited to announce the HPE Swarm Learning 2.0.0 community release!! In the previous version of HPE Swarm Learning, if the sentinel…]]></description><link>https://developer.hpe.com/announcing-hpe-swarm-learning-2-0-0/</link><guid isPermaLink="false">https://developer.hpe.com/announcing-hpe-swarm-learning-2-0-0/</guid><pubDate>Mon, 12 Jun 2023 16:48:53 GMT</pubDate><content:encoded>&lt;p&gt;We’re excited to announce the HPE Swarm Learning 2.0.0 community release!!&lt;/p&gt;
&lt;p&gt;In the previous version of HPE Swarm Learning, if the sentinel Swarm Network (SN) node went down during Swarm training, the training process would stop, and there was no way to resume it. However, with this release, we have addressed the issue by implementing a mesh topology (connectivity) between SNs, replacing the previous star topology where only the sentinel SN was connected to other SNs.&lt;/p&gt;
&lt;p&gt;We also now support multiple blockchain miners instead of just one miner in the sentinel SN. Even if the initial sentinel SN goes down, training can continue uninterrupted because other SNs also function as miners. Additionally, when the initial sentinel SN is down, a new SN that wants to join the network can seamlessly integrate and join the Swarm network with the help of any other SN node. This &lt;strong&gt;high availability configuration&lt;/strong&gt; ensures improved resilience and robustness of HPE Swarm Learning.&lt;/p&gt;
&lt;p&gt;In the HPE Swarm Learning sync stage (defined by sync frequency), when it is time to share the learning from the individual model, one of the Swarm Learning (SL) nodes is designated as the “leader” node. This leader node collects the individual models from each peer node and merges them into a single model by combining parameters of all the individuals. The &lt;strong&gt;Leader Failure Detection and Recovery (LFDR)&lt;/strong&gt; feature enables SL nodes to continue Swarm training during the merging process when an SL leader node fails. A new SL leader node is selected to continue the merging process. If the failed SL leader node comes back after the new SL leader node is in action, the failed SL leader node is treated as a normal SL node and contributes its learning to the swarm global model.&lt;/p&gt;
&lt;p&gt;With the HPE Swarm Learning v2.0.0 release, a user can now extend the Swarm client to support other machine learning platforms as well. Currently, the Swarm client supports machine learning platforms such as PyTorch and Keras (backed by TensorFlow 2). Please find the instructions to extend the Swarm client &lt;a href=&quot;https://github.com/HewlettPackard/swarm-learning/blob/master/lib/src/README.md&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h4&gt;&lt;strong&gt;The 2.0.0 release contains the following updates:&lt;/strong&gt;&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;High availability for SN&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Handles Sentinel node failure&lt;/li&gt;
&lt;li&gt;Ensures any SN node can act as sentinel while adding new node&lt;/li&gt;
&lt;li&gt;Supports mesh topology of SN network&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;High availability for SL leader&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Elects a new merge leader when a leader failure is detected&lt;/li&gt;
&lt;li&gt;Handles stale leader recovery&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Swarm Learning Management UI (SLM-UI)&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Supports Swarm product installation through SLM-UI&lt;/li&gt;
&lt;li&gt;Deploys and manages Swarm Learning through SLM-UI&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Swarm client library&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Extends Swarm Learning for new ML platforms&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Improved diagnostics and a utility script for log collection.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;For complete details on this new release, please refer to the following resources:&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/swarm-learning&quot;&gt;HPE Swarm Learning home page&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/swarm-learning/blob/master/lib/src/README.md&quot;&gt;HPE Swarm Learning client readme&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For any questions, start a discussion in our &lt;a href=&quot;https://hpedev.slack.com/archives/C04A5DK9TUK&quot;&gt;#hpe-swarm-learning&lt;/a&gt; slack channel on &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;HPE Developer Slack Workspace&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[There’s always something new to Discover]]></title><link>https://developer.hpe.com/2023-June-06/</link><guid isPermaLink="false">https://developer.hpe.com/2023-June-06/</guid><pubDate>Mon, 05 Jun 2023 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Automate boot from SAN target configurations using Redfish]]></title><description><![CDATA[As the scale and complexity of infrastructure deployed in a customer environment increases, the need for consistency, ease of management and…]]></description><link>https://developer.hpe.com/automate-boot-from-san-target-configurations-using-redfish/</link><guid isPermaLink="false">https://developer.hpe.com/automate-boot-from-san-target-configurations-using-redfish/</guid><pubDate>Thu, 01 Jun 2023 20:13:38 GMT</pubDate><content:encoded>&lt;p&gt;As the scale and complexity of infrastructure deployed in a customer environment increases, the need for consistency, ease of management and agility in responding to the changing needs of the infrastructure consumers becomes a key priority for IT staff.  Operational efficiency has become a key business imperative for customers hosting any large IT footprint, and the operational efficiency gains are often achieved through automation of repetitive tasks performed by IT staff to deploy, manage, and operate IT infrastructure.&lt;/p&gt;
&lt;p&gt;There are several use cases for infrastructure automation; in this blog post, we will address the configuration of OS boot for a server from a Storage Area Network (SAN) based storage array. Specifically, we will discuss the procedure for configuring HPE ProLiant DL Gen10, Gen10 Plus, and Gen11 servers to boot from a SAN target Logical Unit Number (LUN) using the DMTF Redfish® standard API.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The steps outlined in this document were tested on HPE DL 380 Gen10 and Gen10 Plus servers with Emulex-based SN1610E Fibre Channel (FC) cards, HPE iLO firmware 2.81 and the Service Pack for ProLiant (SPP) available as of March 2023.&lt;/li&gt;
&lt;li&gt;QLogic SN1610Q cards do not support the steps outlined in this document (as of April 2023). However, future firmware releases might enable this Redfish API-based configuration method.&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;Pre-requisite information&lt;/h1&gt;
&lt;p&gt;Before we dive into the steps for performing the automation, you will need to gather a few pieces of information, such as&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;HPE iLO Management port IP address and credentials.&lt;/li&gt;
&lt;li&gt;Storage target World Wide Name (WWN) and LUN ID.  A storage administrator can typically provide you with this information.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Download and install the &lt;a href=&quot;https://www.hpe.com/info/resttool&quot;&gt;HPE RESTful Interface Tool&lt;/a&gt;  (version 4.1.0.0 or later) on the system you will initiate the automation actions from.&lt;/p&gt;
&lt;p&gt;Once the RESTful Interface Tool (referred to as iLOREST in the remainder of the document) has been installed, you can review the current boot targets and their order of execution by following the steps below:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;ilorest login &amp;#x3C;ilo-ip&gt; -u &amp;#x3C;ilo-user&gt; -p password&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;ilorest bootorder&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Refer to the iLOREST &lt;a href=&quot;https://hewlettpackard.github.io/python-redfish-utility/#bootorder-command&quot;&gt;documentation&lt;/a&gt; for more information on this command.&lt;/p&gt;
&lt;h1&gt;Configuration Steps&lt;/h1&gt;
&lt;p&gt;Now that you have gathered the pre-requisite information and installed the iLOREST tool, let’s discuss the detailed steps for implementing the automation.&lt;/p&gt;
&lt;h2&gt;Step 1: Ensure that the &lt;code&gt;FCScanPolicy&lt;/code&gt; value is set to ‘AllTargets’&lt;/h2&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;ilorest  get FCScanPolicy --selector Bios
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Expected output:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;FCScanPolicy=AllTargets
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If the value is not set to AllTargets, you can change it using the following:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;ilorest set &quot;FCScanPolicy&quot;=&quot;AllTargets&quot; --selector Bios

ilorest commit

ilorest reboot
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;:&lt;/p&gt;
&lt;p&gt;After restarting the server, remember to retrieve the updated Bios settings from the iLO of the server with the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;ilorest select Bios. --refresh
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can monitor the server restart state and continue the configuration when it reaches the &lt;code&gt;FinishedPost&lt;/code&gt; state and the &lt;code&gt;vMainDeviceDiscoveryComplete&lt;/code&gt; device discovery state. More details about monitoring the server state are described in this &lt;a href=&quot;https://developer.hpe.com/blog/master-the-redfish-server-states-to-improve-your-monitoring-and-manageme/&quot;&gt;article&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The server state can be retrieved with the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;ilorest serverstate
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The device discovery state can be obtained with the following commands:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;ilorest select ComputerSystem. --refresh

ilorest get Oem/Hpe/DeviceDiscoveryComplete/DeviceDiscovery --json
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After restarting the server, you can verify that the settings have taken effect using the BIOS menus as shown in Figure 1 or using the commands:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ilorest select Bios. --refresh

ilorest get FCScanPolicy
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/picture1.png&quot; alt=&quot;Figure 1: FCScanPolicy Bios setting&quot; title=&quot;Figure 1: FCScanPolicy Bios setting&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Step 2: Identify the right FC Card&lt;/h2&gt;
&lt;p&gt;The server you are trying to configure might have one or more FC cards in it, and you need to identify the right card using the steps below.&lt;/p&gt;
&lt;p&gt;List the available network adapters (this includes FC cards):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;ilorest rawget /redfish/v1/Chassis/1/NetworkAdapters/
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The previous command returns the following output:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &quot;@odata.context&quot;: &quot;/redfish/v1/$metadata#NetworkAdapterCollection.NetworkAdapterCollection&quot;,
  &quot;@odata.etag&quot;: &quot;W/\&quot;F09DE05C\&quot;&quot;,
  &quot;@odata.id&quot;: &quot;/redfish/v1/Chassis/1/NetworkAdapters/&quot;,
  &quot;@odata.type&quot;: &quot;#NetworkAdapterCollection.NetworkAdapterCollection&quot;,
  &quot;Description&quot;: &quot;The collection of network adapter resource instances available in this chassis.&quot;,
  &quot;Members&quot;: [
    { &quot;@odata.id&quot;: &quot;/redfish/v1/Chassis/1/NetworkAdapters/DC080000/&quot; },
    { &quot;@odata.id&quot;: &quot;/redfish/v1/Chassis/1/NetworkAdapters/DE083000/&quot; },
    { &quot;@odata.id&quot;: &quot;/redfish/v1/Chassis/1/NetworkAdapters/DE081000/&quot; },
    { &quot;@odata.id&quot;: &quot;/redfish/v1/Chassis/1/NetworkAdapters/DE07B000/&quot; }
  ],
  &quot;Members@odata.count&quot;: 4,
…
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Make a note of the device IDs in the output above - DC080000, DE083000, DE081000, DE07B000.  The values for these items might be different on the server you are using.&lt;/p&gt;
&lt;p&gt;Query each of these items by its device ID to verify that it is the FC card you would like to configure. For example, using device DE083000 (your output and device IDs might differ):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;ilorest rawget /redfish/v1/Chassis/1/NetworkAdapters/DE083000
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Look for the following in the output.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;&quot;Location&quot;: {
        &quot;PartLocation&quot;: {
          &quot;LocationOrdinalValue&quot;: 4,
          &quot;LocationType&quot;: &quot;Slot&quot;,
          &quot;ServiceLabel&quot;: &quot;PCI-E Slot 4&quot;
        }
…

&quot;Manufacturer&quot;: &quot;Hewlett Packard Enterprise&quot;,

&quot;Model&quot;: &quot;HPE StoreFabric SN1610E 32Gb 2-port Fibre Channel Host Bus Adapt&quot;,

&quot;Name&quot;: &quot;HPE SN1610E 32Gb 2p FC HBA&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The above values can help you ascertain that you have the right card, and its device ID.&lt;/p&gt;
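&lt;p&gt;If you would rather script this discovery step, below is a minimal sketch that calls the same Redfish endpoints directly with Python. The iLO address and credentials are placeholders, it assumes HTTP basic authentication is enabled on the iLO, and certificate verification is disabled because many iLOs use self-signed certificates.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import requests
import urllib3

urllib3.disable_warnings()  # the iLO certificate is often self-signed

ILO = &quot;https://&amp;#x3C;ilo-ip&gt;&quot;                    # placeholder iLO address
AUTH = (&quot;&amp;#x3C;ilo-user&gt;&quot;, &quot;&amp;#x3C;ilo-password&gt;&quot;)  # placeholder credentials

# Walk the NetworkAdapters collection and print each adapter&apos;s model and slot.
collection = requests.get(f&quot;{ILO}/redfish/v1/Chassis/1/NetworkAdapters/&quot;,
                          auth=AUTH, verify=False).json()
for member in collection[&quot;Members&quot;]:
    adapter = requests.get(f&quot;{ILO}{member[&apos;@odata.id&apos;]}&quot;, auth=AUTH, verify=False).json()
    slot = adapter.get(&quot;Location&quot;, {}).get(&quot;PartLocation&quot;, {}).get(&quot;ServiceLabel&quot;)
    print(member[&quot;@odata.id&quot;], adapter.get(&quot;Model&quot;), slot)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The device ID is the last path segment of each member URI (for example, DE083000).&lt;/p&gt;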
&lt;h2&gt;Step 3: Verify that Redfish configuration is Enabled&lt;/h2&gt;
&lt;p&gt;Before you can change the boot from SAN configuration on an FC card, you will have to set the &lt;code&gt;RedfishConfiguration&lt;/code&gt; key to &quot;Enabled&quot; for the specific adapter, if it has not already been done.&lt;/p&gt;
&lt;p&gt;The current value of this key can be viewed by querying the adapter.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;ilorest rawget /redfish/v1/Chassis/1/NetworkAdapters/DE083000
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If the value is not set, please perform the following steps:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Create a file named enable-redfish.txt with the following contents:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &quot;/redfish/v1/Chassis/1/NetworkAdapters/DE083000/&quot;:
 {
  &quot;Oem&quot;:
  {
   &quot;Hpe&quot;:
   {
       &quot;RedfishConfiguration&quot;: &quot;Enabled&quot;
   }
  }
 }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Apply this patch via iLOREST&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;ilorest rawpatch &amp;#x3C;file path&gt;\enable-redfish.txt
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To make this patch permanent, flush the configuration to the iLO NVRAM by creating a file named flushtonvm.txt with the following contents:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
 &quot;/redfish/v1/Chassis/1/NetworkAdapters/DE083000/Actions/Oem/Hpe/HpeNetworkAdapter.FlushConfigurationToNVM/&quot;:
 {
 }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Post the flush action via iLOREST&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;ilorest rawpost &amp;#x3C;file path&gt;\flushtonvm.txt
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Reboot the server&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;ilorest reboot
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Step 4: Set boot from SAN value to &quot;Enable&quot; for the adapter&lt;/h2&gt;
&lt;p&gt;For each SN1610E adapter on your server that you would like to enable for SAN boot, there is a variable &quot;BootMode&quot; that needs to have a value of &quot;FibreChannel&quot;.&lt;/p&gt;
&lt;p&gt;This can be viewed from BIOS configuration UI at the location shown in Figure 2.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/picture2.png&quot; alt=&quot;&quot; title=&quot;Figure 2: Set boot from SAN&quot;&gt;&lt;/p&gt;
&lt;p&gt;This value is part of the FC adapter configuration, and is visible in the output of the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;ilorest rawget /redfish/v1/Chassis/1/NetworkAdapters/DE083000/NetworkDeviceFunctions/1/
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Look for &quot;BootMode&quot; in the output generated.&lt;/p&gt;
&lt;p&gt;To modify the value of this property,&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Create a file named enable-sanboot.txt with the following:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
 &quot;/redfish/v1/Chassis/1/NetworkAdapters/DE083000/NetworkDeviceFunctions/1/&quot;:
 {
    &quot;BootMode&quot;:&quot;FibreChannel&quot;
 }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Apply this patch via iLOREST&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;ilorest rawpatch &amp;#x3C;file path&gt;\enable-sanboot.txt
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Flush or commit the changes to the iLO NVRAM using the flushtonvm.txt file created earlier.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;ilorest rawpost &amp;#x3C;file path&gt;\flushtonvm.txt
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Step 5: Create the SAN boot target entry&lt;/h2&gt;
&lt;p&gt;Once boot from SAN is enabled on the FC card, you will need to create a mapping for the storage LUN that will be used as a boot entry for the server.&lt;/p&gt;
&lt;p&gt;This entry is created within the BootTargets collection for the Port on the Adapter.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;ilorest rawget /redfish/v1/Chassis/1/NetworkAdapters/DE083000/NetworkDeviceFunctions/1/
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In the output, look for entries such as:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;&quot;FibreChannel&quot;: {
       &quot;BootTargets&quot;: [
         {
           &quot;BootPriority&quot;: 0,
           &quot;LUNID&quot;: &quot;00:00:00:00:00:00:00:00&quot;,
           &quot;WWPN&quot;: &quot;00:00:00:00:00:00:00:00&quot;
         }
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To create this entry, you will need to find out the Storage WWN and the LUN number exported to this server.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Create a patch file (example bootentry.txt file contents shown below).&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &quot;/redfish/v1/Chassis/1/NetworkAdapters/DE083000/NetworkDeviceFunctions/1&quot;:
  {
  &quot;FibreChannel&quot;:
    {
    &quot;BootTargets&quot;:[
      {
      &quot;BootPriority&quot;: 0,
      &quot;LUNID&quot;: &quot;00:00:00:00:00:00:00:00&quot;,
      &quot;WWPN&quot;: &quot;00:00:00:00:00:00:00:C3&quot;
      }
    ]
   }
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Apply this patch file via iLOREST&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;ilorest rawpatch &amp;#x3C;file path&gt;\bootentry.txt
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Commit the changes by flushing the iLO NVRAM using the flushtonvm.txt file created earlier.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;ilorest rawpost &amp;#x3C;file path&gt;\flushtonvm.txt
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Restart the server.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;ilorest reboot
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Once the server has restarted, you can view the boot order to verify that the new entry for the SAN boot target has been created.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;ilorest bootorder
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h1&gt;Summary&lt;/h1&gt;
&lt;p&gt;In this blog post, we have learned how easy it is to automate the configuration of boot from SAN targets for HPE servers using the iLOREST tool. There are other aspects of server provisioning, monitoring, and management that you can automate with this tool and the Redfish API supported by HPE iLO.&lt;/p&gt;
&lt;p&gt;To learn more about these topics, check out the &lt;a href=&quot;https://developer.hpe.com/search/?term=redfish&quot;&gt;HPE Developer portal&lt;/a&gt; for additional blogs and technical articles.&lt;/p&gt;
&lt;/ol&gt;</content:encoded></item><item><title><![CDATA[New for HPE Discover 2023! Hack Shack serves as home base for HPE’s Office of the CTO]]></title><description><![CDATA[Hewlett Packard Enterprise (HPE) will be hosting Discover 2023 from June 20-22 at the Venetian Convention and Expo Center in Las Vegas…]]></description><link>https://developer.hpe.com/new-for-hpe-discover-2023-hack-shack-serves-as-home-base-for-hpe’s-office-of-the-cto/</link><guid isPermaLink="false">https://developer.hpe.com/new-for-hpe-discover-2023-hack-shack-serves-as-home-base-for-hpe’s-office-of-the-cto/</guid><pubDate>Mon, 22 May 2023 11:06:25 GMT</pubDate><content:encoded>&lt;style&gt;
li {
   font-size: 27px;
   line-height: 33px;
   max-width: none;
}
&lt;/style&gt;
&lt;p&gt;Hewlett Packard Enterprise (HPE) will be hosting Discover 2023 from June 20-22 at the Venetian Convention and Expo Center in Las Vegas, Nevada. The HPE Developer Community team is excited to be there once again to greet customers and partners in the Hack Shack. There will be some exciting changes this year. Read on to learn more about what will be happening in the Hack Shack!&lt;/p&gt;
&lt;h2&gt;What’s new for the Hack Shack?&lt;/h2&gt;
&lt;p&gt;Since 2017, the HPE Hack Shack has been enjoyed by developers as a place to meet with HPE software engineers and other software subject matter experts, to brainstorm about solutions to issues they may be experiencing back home, and to learn a bit more about what HPE is doing in the software space. It’s also provided an opportunity for attendees to enjoy a little down time, relaxing over a few games and software-focused Hack Shack challenges.&lt;/p&gt;
&lt;p&gt;This year, the Hack Shack will serve as the Technology Hub for technologists of all types; be they developers, data scientists, ITOps or DevOps personnel. It will be the home base for the HPE Office of the Chief Technology Officer (CTO), where you’ll be able to meet and chat with engineering representatives and subject matter experts (SMEs) who work on the HPE GreenLake edge-to-cloud platform. Here, you’ll get to have in-depth conversations about the platform that was created to eliminate complexity, delivering a cloud experience for customer workloads wherever their data may be.&lt;/p&gt;
&lt;p&gt;The Hack Shack will complement the CTO Theater, where over a dozen different HPE GreenLake platform sessions will take place covering everything from platform APIs to future innovations, by providing a place where you can continue the conversations and hone in on any specific questions you might have.&lt;/p&gt;
&lt;h2&gt;What is the HPE Office of the CTO?&lt;/h2&gt;
&lt;p&gt;Today’s highly interconnected world requires easier, more secure ways to access and use the data that has become our life force. The HPE Office of the Chief Technology Officer (CTO) leads HPE’s hotbed of innovation. Office of the CTO team members are heavily involved in R&amp;#x26;D to find and develop new ways to do this. They help create history, pushing the industry forward by redefining what cloud computing means and advancing customers’ and partners’ businesses in ways they could not previously have imagined.&lt;/p&gt;
&lt;p&gt;The organization is responsible for the HPE GreenLake edge-to-cloud platform, bringing the economy, ease of use, and scalability of a cloud experience to where a customer’s data lives. In addition, the group incubates projects around a variety of emerging technologies to bring the company’s most innovative products and services to market.&lt;/p&gt;
&lt;p&gt;As companies strive for a future where they are fully autonomous, HPE GreenLake will help them get there, delivering AI-driven smart services that are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Fully automated&lt;/li&gt;
&lt;li&gt;Continuous&lt;/li&gt;
&lt;li&gt;Self-healing&lt;/li&gt;
&lt;li&gt;Self-aware&lt;/li&gt;
&lt;li&gt;Fully distributed&lt;/li&gt;
&lt;li&gt;Auto-compliant&lt;/li&gt;
&lt;li&gt;Self-learning&lt;/li&gt;
&lt;li&gt;Self-securing&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The Office of the CTO is driving this transformation through its work on the HPE GreenLake platform and its innovation. The group is home to some of the brightest and best talent that HPE has to offer, and is working to attract even more.&lt;/p&gt;
&lt;h2&gt;What’s happening specifically in the Hack Shack?&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;/img/hack-shack-image.png&quot; alt=&quot;Hack Shack at HPE Discover&quot; title=&quot;Hack Shack at HPE Discover&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Prize winning activities and celebration party&lt;/h3&gt;
&lt;p&gt;As in previous years, the Hack Shack will feature a lot of fun activities. The HPE Developer Community Virtual Treasure Hunt is a fan favorite. This year, 12 winners will be awarded a developer-branded hoodie/sweatshirt! There will be games set up in the yard, like Jenga and corn hole, as well.&lt;/p&gt;
&lt;p&gt;The traditional Hack Shack Celebration party will be held on Wednesday, June 21st. Make sure you stop on by to join in on the fun, food, and games!&lt;/p&gt;
&lt;h3&gt;Hack Shack challenges&lt;/h3&gt;
&lt;p&gt;For those looking for more mind-stimulating activities, there will be 4 Hack Shack challenges this year focused on the HPE GreenLake edge-to-cloud platform.  They will center on the HPE GreenLake Aruba Central API, the HPE Data Services Cloud Console API, the HPE Compute Ops Management API and the HPE GreenLake Terraform provider. These are perfect for those who’d like some hands-on experience in working with these APIs.&lt;/p&gt;
&lt;h3&gt;Post-presentation “Meet the experts” sessions&lt;/h3&gt;
&lt;p&gt;Event attendees are invited to Meet-the-Expert sessions immediately following presentations being held in the CTO Theater. After attending the session in the theater, you’ll have the opportunity to follow the speaker back to the Hack Shack and continue the conversation in a fun and friendly environment.&lt;/p&gt;
&lt;h3&gt;Customer-engaging meetups&lt;/h3&gt;
&lt;p&gt;New for this year, the Hack Shack will host 4 different meetup sessions for customers to meet with the CTO, other HPE executives, and HPE Fellows. These meetups are interactive sessions designed to collect customer feedback. In many of them, you’ll be able to share your experiences, challenges, and expectations related to a particular topic. Take advantage of these opportunities to connect with HPE executives on:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;HPE GreenLake Private Cloud Enterprise&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In this meetup, share your thoughts on what services, in addition to the resource capacity, could be offered from a managed private cloud services offering. We’d love to hear what key features, functionalities, and capabilities you’d like to see offered as part of, or in addition to the existing bare-metal, VMs and container offerings.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Unlocking Data Insights: Maximizing Your Cloud&apos;s Potential&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In this meetup, we’d love to hear your thoughts on what the minimum data services a cloud platform should offer. Focus on key features, functionalities, and capabilities that you consider essential in any cloud data service.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Site-as-a-Service&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Starting with a brief presentation from the Office of the CTO on HPE current and near-term projects, this meetup will focus on Site-as-a-Service, where HPE performs edge management on behalf of an enterprise, as well as work in distributed data and workload management at the edge. After the presentation, we will lead an open discussion on the challenges and requirements of the use and management of distributed edge sites in the enterprise and how HPE can help address them.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Simplifying Hybrid Cloud Operations&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In this meetup, we invite you to share your perspectives and experiences in managing IT operations across a hybrid cloud environment. We’ll delve into various challenges and use cases in monitoring, managing events, and automating tasks using the OpsRamp solution. Don’t miss out on this opportunity to engage with industry peers and have a say in product directions by joining this session.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;What sessions are being held in the CTO Theater?&lt;/h2&gt;
&lt;p&gt;To get details on these 15 sessions, go to the &lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.sessions/?l=1050&amp;#x26;locale=en_US&quot;&gt;Content Catalog&lt;/a&gt; and search on CTO as the keyword.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;API for HPE GreenLake block storage and next-gen platforms [&lt;strong&gt;CTO5731&lt;/strong&gt;]&lt;/li&gt;
&lt;li&gt;Future of ITOps on the HPE GreenLake platform [&lt;strong&gt;CTO5734&lt;/strong&gt;]&lt;/li&gt;
&lt;li&gt;Gain visibility and insights for optimizing the sustainability of your IT estate [&lt;strong&gt;CTO5689&lt;/strong&gt;]&lt;/li&gt;
&lt;li&gt;Getting started with the HPE GreenLake platform [&lt;strong&gt;CTO5732&lt;/strong&gt;]&lt;/li&gt;
&lt;li&gt;HPE GreenLake for Private Cloud Enterprise: APIs and developer experience [&lt;strong&gt;CTO5736&lt;/strong&gt;]&lt;/li&gt;
&lt;li&gt;Intro to the HPE GreenLake for Compute Ops Management API [&lt;strong&gt;CTO5723&lt;/strong&gt;]&lt;/li&gt;
&lt;li&gt;Introducing HPE GreenLake for Disaster Recovery [&lt;strong&gt;CTO5726&lt;/strong&gt;]&lt;/li&gt;
&lt;li&gt;Introduction to Aruba Central automation [&lt;strong&gt;CTO5725&lt;/strong&gt;]&lt;/li&gt;
&lt;li&gt;Join the HPE Developer Community to collaborate and build [&lt;strong&gt;CTO5727&lt;/strong&gt;]&lt;/li&gt;
&lt;li&gt;Seamless onboarding to the HPE GreenLake platform: user management and SSO configuration [&lt;strong&gt;CTO5733&lt;/strong&gt;]&lt;/li&gt;
&lt;li&gt;Simplify management of multi-cloud hybrid IT with HPE [&lt;strong&gt;CTO5735&lt;/strong&gt;]&lt;/li&gt;
&lt;li&gt;Under the hood of the HPE GreenLake platform [&lt;strong&gt;CTO5729&lt;/strong&gt;]&lt;/li&gt;
&lt;li&gt;Unified API strategy enhances HPE GreenLake experience [&lt;strong&gt;CTO5728&lt;/strong&gt;]&lt;/li&gt;
&lt;li&gt;What’s next for HPE? OCTO platform innovations [&lt;strong&gt;CTO5737&lt;/strong&gt;]&lt;/li&gt;
&lt;li&gt;Zero trust through SPIFFE at scale [&lt;strong&gt;CTO5730&lt;/strong&gt;]&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Is there any swag or party this year?&lt;/h2&gt;
&lt;p&gt;Yes! The celebration party will be held at 5.30 pm on Wednesday evening. Attendees will be treated to food and refreshments and hear from some of the renowned members of our community. Those who attend the celebration party will receive a developer-branded cap.&lt;/p&gt;
&lt;p&gt;We’ll also be giving out t-shirts to those who participate in any of our Hack Shack activities, such as the virtual treasure hunt or a Workshop-on-Demand.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/swag-2.png&quot; alt=&quot;Giveaways at HPE Discover&quot; title=&quot;Giveaways at HPE Discover&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Join us in the Hack Shack!&lt;/h2&gt;
&lt;p&gt;If you haven’t stopped by the Hack Shack before – maybe because you thought it was just for developers – make sure you join us this year. Learn about the HPE GreenLake edge-to-cloud platform. Learn what resources HPE can provide to help you build on and use to work with the platform. It’s a great opportunity to get to meet with the experts and answer any questions you might have.&lt;/p&gt;
&lt;link rel=&quot;stylesheet&quot; href=&quot;https://www.w3schools.com/w3css/4/w3.css&quot;&gt;
&lt;div class=&quot;w3-container w3-center w3-margin-bottom&quot;&gt;
  &lt;a href=&quot;https://attend.hpe.com/discover2023/index.cfm?iLangID=1&quot;&gt;&lt;button type=&quot;button&quot; class=&quot;button&quot;&gt;Register now!&lt;/button&gt;&lt;/a&gt;
&lt;/div&gt;</content:encoded></item><item><title><![CDATA[Upgrade Kubernetes clusters using HPE GreenLake Terraform Provider]]></title><description><![CDATA[IaC, or Infrastructure as code, is a practice of automating the process of managing and provisioning infrastructure through the use of code…]]></description><link>https://developer.hpe.com/upgrade-kubernetes-clusters-using-hpe-greenlake-terraform-provider/</link><guid isPermaLink="false">https://developer.hpe.com/upgrade-kubernetes-clusters-using-hpe-greenlake-terraform-provider/</guid><pubDate>Tue, 16 May 2023 09:00:43 GMT</pubDate><content:encoded>&lt;p&gt;IaC, or Infrastructure as code, is a practice of automating the process of managing and provisioning infrastructure through the use of code instead of using manual processes. It gives organizations the tools required to create, manage, and destroy compute resources by statically defining and declaring these resources in codeblocks. It helps increase operational agility, simplify management, reduce errors, and save cost.&lt;/p&gt;
&lt;p&gt;In this post, I will explore options used to declare and upgrade Kubernetes clusters on HPE GreenLake using the HPE GreenLake Terraform Provider.&lt;/p&gt;
&lt;p&gt;In terms of upgrades, the following 2 scenarios are supported:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Scaling of a cluster&apos;s worker nodes. Please refer to the blog post &lt;a href=&quot;https://developer.hpe.com/blog/scale-kubernetes-cluster-using-hpe-greenlake-terraform-provider/&quot;&gt;Scale Kubernetes Clusters using HPE GreenLake Terraform Provider&lt;/a&gt; to check out scaling options available for worker nodes.&lt;/li&gt;
&lt;li&gt;Upgrade the Kubernetes version of the cluster. This step is covered in this blog.&lt;/li&gt;
&lt;/ol&gt;
&lt;h1&gt;Prerequisite&lt;/h1&gt;
&lt;p&gt;Before implementing the steps shown in this tutorial, please read the blog post &lt;a href=&quot;https://developer.hpe.com/blog/kubernetes-clusters-as-code-part1/&quot;&gt;Kubernetes Cluster as Code - Part 1&lt;/a&gt;, which includes the steps required to create a Kubernetes cluster.  This post expands upon that scenario by examining how to upgrade a cluster&apos;s Kubernetes version.&lt;/p&gt;
&lt;h1&gt;Verify existing Kubernetes cluster&lt;/h1&gt;
&lt;p&gt;After the cluster is created following the instructions found in the  &lt;a href=&quot;https://developer.hpe.com/blog/kubernetes-clusters-as-code-part1/&quot;&gt;Kubernetes Cluster as Code - Part 1 blog post&lt;/a&gt;, launch the HPE GreenLake Central console and verify that the cluster is present under the appropriate tenant.&lt;/p&gt;
&lt;p&gt;You should see the  &lt;strong&gt;tf-test&lt;/strong&gt;  cluster present under  &lt;strong&gt;Dashboard -&gt; Manage your Private Cloud -&gt; Containers&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://developer.hpe.com/img/cluster_list.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Shown below is the reference Terraform configuration file for the existing cluster.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-hcl&quot;&gt;terraform {
  required_providers {
    hpegl = {
      source  = &quot;hpe/hpegl&quot;
      version = &quot;&gt;= 0.2.2&quot;
    }
  }
}
 
provider hpegl {
  caas {
  }
}
 
variable &quot;HPEGL_SPACE&quot; {
  type = string
}
 
data &quot;hpegl_caas_site&quot; &quot;blr&quot; {
  name = &quot;BLR&quot;
  space_id = var.HPEGL_SPACE
 }
 
data &quot;hpegl_caas_cluster_blueprint&quot; &quot;bp&quot; {
  name = &quot;demo&quot;
  site_id = data.hpegl_caas_site.blr.id
}
 
resource hpegl_caas_cluster test {
  name         = &quot;tf-test&quot;
  blueprint_id = data.hpegl_caas_cluster_blueprint.bp.id
  site_id      = data.hpegl_caas_site.blr.id
  space_id     = var.HPEGL_SPACE
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;Upgrade the Kubernetes version&lt;/h1&gt;
&lt;p&gt;For the Kubernetes version upgrade, you need to specify the new version of Kubernetes that is available for upgrade in the resources block.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;kubernetes_version&lt;/strong&gt;: Use the Kubernetes version that is shown as available for upgrade on the cluster details page in the UI.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/blog1.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Below you can see the reference Terraform configuration for updating the cluster&apos;s Kubernetes version.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-hcl&quot;&gt;terraform {
 required_providers {
   hpegl = {
     source  = &quot;hpe/hpegl&quot;
     version = &quot;&gt;= 0.2.2&quot;
   }
 }
}

provider hpegl {
 caas {
 }
}

variable &quot;HPEGL_SPACE&quot; {
 type = string
}

data &quot;hpegl_caas_site&quot; &quot;blr&quot; {
 name = &quot;BLR&quot;
 space_id = var.HPEGL_SPACE
}

data &quot;hpegl_caas_cluster_blueprint&quot; &quot;bp&quot; {
 name = &quot;demo&quot;
 site_id = data.hpegl_caas_site.blr.id
}

resource hpegl_caas_cluster test {
 name         = &quot;tf-test&quot;
 blueprint_id = data.hpegl_caas_cluster_blueprint.bp.id
 site_id      = data.hpegl_caas_site.blr.id
 space_id     = var.HPEGL_SPACE
 kubernetes_version = &quot;1.23.13-hpe1&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Run Terraform plan&lt;/h2&gt;
&lt;p&gt;Terraform plan is a dry run that lets you preview the changes that Terraform plans to make to your infrastructure based on the data you provide in your Terraform file. To see this, run  &lt;strong&gt;terraform plan.&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;$ terraform plan

hpegl_caas_cluster.test: Refreshing state... [id=a32fabb9-7c19-42d1-9a38-ebf122810c0a]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # hpegl_caas_cluster.test will be updated in-place
  ~ resource &quot;hpegl_caas_cluster&quot; &quot;test&quot; {
        id                    = &quot;a32fabb9-7c19-42d1-9a38-ebf122810c0a&quot;
      ~ kubernetes_version    = &quot;1.22.9-hpe1&quot; -&gt; &quot;1.23.13-hpe1&quot;
        name                  = &quot;tf-test&quot;
        # (17 unchanged attributes hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.

──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Note: You didn&apos;t use the -out option to save this plan, so Terraform can&apos;t guarantee to take exactly these actions if you run &quot;terraform apply&quot; now.
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Run Terraform apply&lt;/h2&gt;
&lt;p&gt;Terraform apply executes the actions proposed in the Terraform plan and updates the resources. Run the command  &lt;strong&gt;terraform apply&lt;/strong&gt;  and type  &lt;strong&gt;yes&lt;/strong&gt;  when asked to  &lt;strong&gt;Enter a value&lt;/strong&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;$ terraform apply

hpegl_caas_cluster.test: Refreshing state... [id=a32fabb9-7c19-42d1-9a38-ebf122810c0a]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # hpegl_caas_cluster.test will be updated in-place
  ~ resource &quot;hpegl_caas_cluster&quot; &quot;test&quot; {
        id                    = &quot;a32fabb9-7c19-42d1-9a38-ebf122810c0a&quot;
      ~ kubernetes_version    = &quot;1.22.9-hpe1&quot; -&gt; &quot;1.23.13-hpe1&quot;
        name                  = &quot;tf-test&quot;
        # (17 unchanged attributes hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only &apos;yes&apos; will be accepted to approve.

  Enter a value: yes

hpegl_caas_cluster.test: Modifying... [id=a32fabb9-7c19-42d1-9a38-ebf122810c0a]
hpegl_caas_cluster.test: Still modifying... [id=a32fabb9-7c19-42d1-9a38-ebf122810c0a, 10s elapsed]
hpegl_caas_cluster.test: Still modifying... [id=a32fabb9-7c19-42d1-9a38-ebf122810c0a, 1m10s elapsed]
hpegl_caas_cluster.test: Still modifying... [id=a32fabb9-7c19-42d1-9a38-ebf122810c0a, 3m10s elapsed]
hpegl_caas_cluster.test: Still modifying... [id=a32fabb9-7c19-42d1-9a38-ebf122810c0a, 5m10s elapsed]
hpegl_caas_cluster.test: Still modifying... [id=a32fabb9-7c19-42d1-9a38-ebf122810c0a, 7m10s elapsed]
hpegl_caas_cluster.test: Still modifying... [id=a32fabb9-7c19-42d1-9a38-ebf122810c0a, 9m10s elapsed]
hpegl_caas_cluster.test: Still modifying... [id=a32fabb9-7c19-42d1-9a38-ebf122810c0a, 11m10s elapsed]
hpegl_caas_cluster.test: Still modifying... [id=a32fabb9-7c19-42d1-9a38-ebf122810c0a, 13m10s elapsed]
hpegl_caas_cluster.test: Still modifying... [id=a32fabb9-7c19-42d1-9a38-ebf122810c0a, 15m10s elapsed]
hpegl_caas_cluster.test: Still modifying... [id=a32fabb9-7c19-42d1-9a38-ebf122810c0a, 17m10s elapsed]
hpegl_caas_cluster.test: Still modifying... [id=a32fabb9-7c19-42d1-9a38-ebf122810c0a, 19m10s elapsed]
hpegl_caas_cluster.test: Still modifying... [id=a32fabb9-7c19-42d1-9a38-ebf122810c0a, 21m10s elapsed]
hpegl_caas_cluster.test: Still modifying... [id=a32fabb9-7c19-42d1-9a38-ebf122810c0a, 23m10s elapsed]
hpegl_caas_cluster.test: Still modifying... [id=a32fabb9-7c19-42d1-9a38-ebf122810c0a, 25m10s elapsed]
hpegl_caas_cluster.test: Still modifying... [id=a32fabb9-7c19-42d1-9a38-ebf122810c0a, 27m10s elapsed]
hpegl_caas_cluster.test: Still modifying... [id=a32fabb9-7c19-42d1-9a38-ebf122810c0a, 29m10s elapsed]
hpegl_caas_cluster.test: Still modifying... [id=a32fabb9-7c19-42d1-9a38-ebf122810c0a, 31m10s elapsed]
hpegl_caas_cluster.test: Still modifying... [id=a32fabb9-7c19-42d1-9a38-ebf122810c0a, 33m10s elapsed]
hpegl_caas_cluster.test: Still modifying... [id=a32fabb9-7c19-42d1-9a38-ebf122810c0a, 35m10s elapsed]
hpegl_caas_cluster.test: Modifications complete after 35m18s [id=a32fabb9-7c19-42d1-9a38-ebf122810c0a]

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The Kubernetes version update can be performed simultaneously with scaling of worker nodes by introducing the worker_nodes block in the Terraform configuration file.&lt;/p&gt;
&lt;h1&gt;Summary&lt;/h1&gt;
&lt;p&gt;In this blog, I covered how to update Kubernetes clusters with the HPE GreenLake Terraform provider. I showed you how to update the Kubernetes version of a cluster, and I also discussed how the update can be performed while scaling a cluster.&lt;/p&gt;
&lt;p&gt;I hope you found this information interesting and useful while considering the upgrade of Kubernetes clusters with the HPE GreenLake Terraform provider. Use the following links to learn more about Terraform and the HPE GreenLake Terraform Provider.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://www.terraform.io/&quot;&gt;Learn more about Terraform&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.hpe.com/us/en/greenlake.html&quot;&gt;Learn more about HPE GreenLake&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://registry.terraform.io/providers/HPE/hpegl/latest/docs&quot;&gt;Learn more about the HPE GreenLake Terraform provider&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Don’t forget, you can always find other tutorials and articles on HPE GreenLake on the  &lt;a href=&quot;https://developer.hpe.com/blog/tag/hpe-greenlake&quot;&gt;HPE Developer blog&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Federating SPIRE on HPE GreenLake for Private Cloud Enterprise]]></title><description><![CDATA[SPIRE is designed to enable widespread deployment of Mutual TLS (mTLS), a method for mutual authentication, between workloads in distributed…]]></description><link>https://developer.hpe.com/federating-spire-on-hpe-greenlake-for-private-cloud-enterprise/</link><guid isPermaLink="false">https://developer.hpe.com/federating-spire-on-hpe-greenlake-for-private-cloud-enterprise/</guid><pubDate>Mon, 15 May 2023 03:37:14 GMT</pubDate><content:encoded>&lt;p&gt;SPIRE is designed to enable widespread deployment of Mutual TLS (mTLS), a method for mutual authentication, between workloads in distributed systems. In our previous &lt;a href=&quot;https://developer.hpe.com/blog/integrating-istio-and-spire/&quot;&gt;blog &lt;/a&gt;post, we explained how you can deploy a Kubernetes cluster on HPE GreenLake for Private Cloud Enterprise and integrate Istio and SPIRE to enable advanced analysis and visualization of the service mesh.&lt;/p&gt;
&lt;p&gt;In this blog post, we will install and federate SPIRE across two Kubernetes clusters deployed on HPE GreenLake for Private Cloud Enterprise: cluster 1 and cluster 2. We will then show you how to deploy a sample application to verify the federation and visualize the communication across services through a graph.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/spire-federation.png&quot; alt=&quot;SPIRE Federation&quot;&gt;&lt;/p&gt;
&lt;h1&gt;Step 1. Installing SPIRE&lt;/h1&gt;
&lt;p&gt;Using the QuickStart files provided in this &lt;a href=&quot;https://github.com/cxteamtrials/caas-trials-content/tree/main/services/spire/federation&quot;&gt;link&lt;/a&gt;, get started installing SPIRE on both clusters. Since there are two clusters in our example, the trust domain configured for the first cluster is &lt;em&gt;&lt;strong&gt;cluster1.demo&lt;/strong&gt;&lt;/em&gt; and the other is &lt;em&gt;&lt;strong&gt;cluster2.demo&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Note: You may configure your own custom trust domains for the clusters by replacing these values across the configuration files.&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;1.1 Clone the repo using the command&lt;/h2&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;git clone https://github.com/cxteamtrials/caas-trials-content.git 
&lt;/code&gt;&lt;/pre&gt;
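&lt;p&gt;If you decide to use your own trust domains, as mentioned in the note above, one simple way to replace the default values across the QuickStart and federation files is a search-and-replace similar to the following (the two example domain names are placeholders):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;cd caas-trials-content 
grep -rl -E &apos;cluster1.demo|cluster2.demo&apos; services/spire/federation | xargs sed -i -e &apos;s/cluster1.demo/my-first-domain.example/g&apos; -e &apos;s/cluster2.demo/my-second-domain.example/g&apos; 
&lt;/code&gt;&lt;/pre&gt;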
&lt;h2&gt;1.2 Apply the QuickStart file on each cluster using the following commands&lt;/h2&gt;
&lt;p&gt;As the Kubectl command is required for installation and configuration, please refer to our first &lt;a href=&quot;https://developer.hpe.com/blog/integrating-istio-and-spire/&quot;&gt;blog &lt;/a&gt;post, which explains how to obtain the Kubeconfig file to manage the K8s clusters using Kubectl.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;#for K8s cluster 1:
kubectl apply -f services/spire/federation/spire-quickstart-cluster-1.yaml 
#for K8s cluster 2:
kubectl apply -f services/spire/federation/spire-quickstart-cluster-2.yaml 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This step will install SPIRE into your Kubernetes clusters, along with two additional components: the SPIFFE CSI Driver and the SPIRE Kubernetes Controller manager, which facilitates the registration of workloads and establishment of federated relationships. &lt;/p&gt;
&lt;p&gt;Verify the installation by checking to see if all the pods are running and that the containers within them are up.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;Cluster1:~ # kubectl get po -n spire 
NAME                            READY   STATUS    RESTARTS      AGE
spire-agent-92q5m               3/3     Running   0             37d
spire-agent-jhgwf               3/3     Running   0             37d
spire-agent-sm8gt               3/3     Running   0             37d
spire-server-574474c7dc-gbzl6   2/2     Running   1 (11d ago)   37d
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;Cluster2:~ # kubectl get po -n spire 
NAME                            READY   STATUS    RESTARTS      AGE
spire-agent-wttmd               3/3     Running   1 (24h ago)   24h
spire-server-574474c7dc-2bfcx   2/2     Running   0             24h
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;Step 2. Installing Istio&lt;/h1&gt;
&lt;p&gt;On each of your Kubernetes clusters, install Istio and patch Istio ingress gateway. Istio can detect the existence of a UNIX Domain Socket that implements the Envoy SDS API on a defined socket path, allowing Envoy to communicate and fetch identities directly from it. SPIRE can be configured for Istio workloads through an integration with Envoy’s SDS API. &lt;/p&gt;
&lt;h2&gt;2.1 Download the latest release&lt;/h2&gt;
&lt;p&gt;You can download the latest release from the official Istio repository, or simply run the following command, which does the same thing:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;curl -L https://istio.io/downloadIstio | sh - 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Change to the Istio directory (cd command), and set the path by using this command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;cd istio-1.17.1 
export PATH=$PWD/bin:$PATH 
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;2.2 Install Istio with custom patch&lt;/h2&gt;
&lt;p&gt;Install Istio with custom patches for the Ingress-gateway as well as for Istio-proxy.  &lt;/p&gt;
&lt;p&gt;Get the Istio-spire-config patch using this &lt;a href=&quot;https://github.com/cxteamtrials/caas-trials-content/blob/main/services/istio/release-1.17/spire&quot;&gt;link&lt;/a&gt;, and install that patch using the following commands:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;#For K8s cluster 1:
istioctl install -f services/istio/release-1.17/spire/spire-patch-cluster1.yaml 
#For K8s cluster 2:
istioctl install -f services/istio/release-1.17/spire/spire-patch-cluster2.yaml 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Installing Istio with the custom patch will share the spiffe-csi-driver with the Ingress Gateway and the sidecars that are going to be injected on workload pods, granting them access to the SPIRE Agent’s UNIX Domain Socket. &lt;/p&gt;
&lt;h2&gt;2.3 Patch Istio Ingress Gateway&lt;/h2&gt;
&lt;h3&gt;2.3.1 Apply SPIFFE ID&lt;/h3&gt;
&lt;p&gt;First, you must get and apply one of SPIRE controller manager’s &lt;a href=&quot;https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/&quot;&gt;CRD (Custom Resource Definition)&lt;/a&gt; ClusterSPIFFEID. The CRD - ClusterSPIFFEID is a cluster-wide resource used to register workloads with SPIRE. The ClusterSPIFFEID can target all workloads in the cluster or can be optionally scoped to specific pods or namespaces via label selectors.  &lt;/p&gt;
&lt;p&gt;Create a ClusterSPIFFEID CRD to generate registration entries in SPIRE server for all workloads labeled &lt;em&gt;&lt;strong&gt;spiffe.io/spire-managed-identity: true.&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Apply the ClusterSPIFFEID used for this demo to both clusters. &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;kubectl apply -f services/spire/clusterspiffeid-example.yaml 
&lt;/code&gt;&lt;/pre&gt;
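&lt;p&gt;For reference, a ClusterSPIFFEID of this kind typically looks like the sketch below; the file in the repository is the authoritative version for this demo, and the values here are only illustrative. It templates a SPIFFE ID from the pod namespace and service account and selects pods carrying the label mentioned above.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;kubectl apply -f - &amp;#x3C;&amp;#x3C;EOF
apiVersion: spire.spiffe.io/v1alpha1
kind: ClusterSPIFFEID
metadata:
  name: spire-managed-identities
spec:
  spiffeIDTemplate: &quot;spiffe://{{ .TrustDomain }}/ns/{{ .PodMeta.Namespace }}/sa/{{ .PodSpec.ServiceAccountName }}&quot;
  podSelector:
    matchLabels:
      spiffe.io/spire-managed-identity: &quot;true&quot;
EOF
&lt;/code&gt;&lt;/pre&gt;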
&lt;h3&gt;2.3.2 Patch Ingress Gateway&lt;/h3&gt;
&lt;p&gt;Now, simply patch the ingress-gateway with the &lt;em&gt;&lt;strong&gt;spiffe.io/spire-managed-identity: true&lt;/strong&gt;&lt;/em&gt; label.&lt;/p&gt;
&lt;p&gt;This patch will register your ingress-gateway pod into the server.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;kubectl patch deployment istio-ingressgateway -n istio-system -p &apos;{&quot;spec&quot;:{&quot;template&quot;:{&quot;metadata&quot;:{&quot;labels&quot;:{&quot;spiffe.io/spire-managed-identity&quot;: &quot;true&quot;}}}}}&apos; 
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;2.3.3 Check your work&lt;/h3&gt;
&lt;p&gt;After patching, confirm that your ingress-gateway pod, istiod, and all of their containers are up and running.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;Cluster1:~ # kubectl get po -n istio-system 
NAME                                   READY   STATUS    RESTARTS   AGE
istio-ingressgateway-5d77cdd9d-gh9w4   1/1     Running   0          37d
istiod-d5bc8669c-4bdvh                 1/1     Running   0          37d
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;Cluster2:~ #  kubectl get po -n istio-system 
NAME                                   READY   STATUS    RESTARTS   AGE
istio-ingressgateway-64bd5ccbbb-kqs2h  1/1     Running   0          37d
istiod-d5bc8669c-thbpj                 1/1     Running   0          37d
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;Step 3. Federating SPIRE&lt;/h1&gt;
&lt;h2&gt;3.1 Expose SPIRE server bundle endpoint&lt;/h2&gt;
&lt;p&gt;Assign an external IP to your spire-server-bundle-endpoint service on each cluster.  &lt;/p&gt;
&lt;p&gt;SPIFFE (&lt;em&gt;Secure Production Identity Framework For Everyone&lt;/em&gt;) is a specification for implementing identity for workloads, and SPIRE is the code that implements this specification in practice. A SPIFFE bundle is a resource that contains the public key material needed to authenticate credentials from a particular trust domain. A SPIFFE bundle endpoint is a resource (represented by a URL) that serves a copy of a SPIFFE bundle for a trust domain. SPIFFE control planes may both expose and consume these endpoints to transfer bundles between themselves, thereby achieving federation. The SPIRE server is used to host the “spire-server-bundle-endpoint” service that serves the SPIFFE bundle to an external SPIRE agent of a different trust domain.  &lt;/p&gt;
&lt;p&gt;MetalLB is used to assign the IP for this service. MetalLB hooks into your Kubernetes cluster and provides a network load-balancer implementation. In short, it allows you to create Kubernetes services of type LoadBalancer in clusters that don’t run on a cloud provider, and thus cannot simply hook into paid products to provide load balancers. &lt;/p&gt;
&lt;p&gt;Follow this &lt;a href=&quot;https://metallb.universe.tf/configuration/_advanced_ipaddresspool_configuration/#controlling-automatic-address-allocation&quot;&gt;link&lt;/a&gt; to set up an automatic address allocation for all services that have type external IP. &lt;/p&gt;
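&lt;p&gt;As an illustration, and assuming a small address range in the same subnet as the examples below, a minimal MetalLB layer-2 configuration could look like this (the pool name and address range are placeholders for your environment):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;kubectl apply -f - &amp;#x3C;&amp;#x3C;EOF
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: spire-pool
  namespace: metallb-system
spec:
  addresses:
  - 172.16.17.0/28
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: spire-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - spire-pool
EOF
&lt;/code&gt;&lt;/pre&gt;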
&lt;p&gt;The configuration should now look something like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;Cluster1:~ # kubectl get svc -n spire 
NAME                                       TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
spire-controller-manager-webhook-service   ClusterIP      10.111.48.177   &amp;#x3C;none&gt;        443/TCP          37d
spire-server                               NodePort       10.106.72.102   &amp;#x3C;none&gt;        8081:30256/TCP   37d
spire-server-bundle-endpoint               LoadBalancer   10.99.0.208     172.16.17.9   8443:30889/TCP   37d
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;Cluster2:~ # kubectl get svc -n spire 
NAME                                       TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
spire-controller-manager-webhook-service   ClusterIP      10.97.108.123   &amp;#x3C;none&gt;        443/TCP          37d
spire-server                               NodePort       10.104.109.247  &amp;#x3C;none&gt;        8081:30256/TCP   37d
spire-server-bundle-endpoint               LoadBalancer   10.104.151.184  172.16.17.3   8443:30889/TCP   37d
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;3.2 Create cluster federated trust domain&lt;/h2&gt;
&lt;p&gt;The Cluster Federated Trust Domain CRD is used to federate the clusters with each other.  &lt;/p&gt;
&lt;p&gt;It requires the following federation configurations: &lt;/p&gt;
&lt;h3&gt;Cluster Federated Trust Domain:&lt;/h3&gt;
&lt;p&gt;&lt;img src=&quot;/img/table1.png&quot; alt=&quot;Cluster Federated Trust Domain&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Bundle Endpoint:&lt;/h3&gt;
&lt;p&gt;&lt;img src=&quot;/img/table2.png&quot; alt=&quot;Bundle Endpoint&quot;&gt;&lt;/p&gt;
&lt;p&gt;The sample CRDs can be applied to each cluster.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;#For K8s cluster 1:
kubectl apply -f services/spire/federation/clusterfederatedtrustdomain-cluster1.yaml 
#For K8s cluster 2:
kubectl apply -f services/spire/federation/clusterfederatedtrustdomain-cluster2.yaml 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The bundle endpoint URL and the TrustDomainBundle fields must be edited to match your own clusters.&lt;/p&gt;
&lt;p&gt;The bundle endpoint URL can be obtained from the previous step. To obtain the trust domain bundle, run the following command on each cluster:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;Cluster1:~ # kubectl exec -n spire -c spire-server deployment/spire-server -- /opt/spire/bin/spire-server bundle show -format spiffe &gt; cluster1.bundle 

Cluster2:~ # kubectl exec -n spire -c spire-server deployment/spire-server -- /opt/spire/bin/spire-server bundle show -format spiffe &gt; cluster2.bundle 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Running the cat command on these bundle files reveals the keys to be copied into the TrustDomainBundle field.&lt;/p&gt;
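&lt;p&gt;Put together, the edited manifest applied on cluster 1 (which federates it with cluster 2) ends up looking roughly like the sketch below. The bundle endpoint IP comes from the spire-server-bundle-endpoint service shown earlier, and the TrustDomainBundle content is the output of the bundle show command; treat the exact field names and values as illustrative and refer to the sample CRDs in the repository.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;apiVersion: spire.spiffe.io/v1alpha1
kind: ClusterFederatedTrustDomain
metadata:
  name: cluster2
spec:
  trustDomain: cluster2.demo
  bundleEndpointURL: https://172.16.17.3:8443
  bundleEndpointProfile:
    type: https_spiffe
    endpointSPIFFEID: spiffe://cluster2.demo/spire/server
  trustDomainBundle: |
    (paste the full contents of cluster2.bundle here)
&lt;/code&gt;&lt;/pre&gt;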
&lt;p&gt;After the Bundle endpoint URL and the TrustDomainBundle fields are configured and applied, check the status of the  SPIRE pods to make sure they are running and check the logs of the spire-server to verify successful federation.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;Cluster1:~ # kubectl logs -n spire -c spire-server spire-server-574474c7dc-gbzl6 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/table3.png&quot; alt=&quot;Cluster-1 Logs&quot;&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;Cluster2:~ # kubectl logs -n spire -c spire-server spire-server-574474c7dc-2bfcx 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/table4.png&quot; alt=&quot;Cluster-2 Logs&quot;&gt;&lt;/p&gt;
&lt;h1&gt;Step 4. Deploying a sample application&lt;/h1&gt;
&lt;p&gt;Now that SPIRE is federated and communication across clusters can be facilitated, here&apos;s how you can deploy a sample application that verifies this functionality.   &lt;/p&gt;
&lt;h2&gt;4.1 Deploy a resource in Cluster-1&lt;/h2&gt;
&lt;p&gt;In Cluster 1, apply a new ClusterSpiffeID called &lt;em&gt;&lt;strong&gt;curl-greeter&lt;/strong&gt;&lt;/em&gt; that registers resources with the label &lt;strong&gt;spiffe.io/spire-managed-identity=curl-greeter&lt;/strong&gt; that can be federated with cluster2. Create a resource called &lt;em&gt;&lt;strong&gt;curl-greeter&lt;/strong&gt;&lt;/em&gt; that has the label: &lt;strong&gt;spiffe.io/spire-managed-identity=curl-greeter&lt;/strong&gt; and annotation: &lt;strong&gt;inject.istio.io/templates=sidecar, spire&lt;/strong&gt; &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;#Apply SPIFFEID
kubectl apply -f /services/spire/federation/clusterspiffeid-curl-greeter-cluster1.yaml
#Create Curl-Greeter Resource
kubectl run curl-greeter --image=radial/busyboxplus:curl --labels=&quot;spiffe.io/spire-managed-identity=curl-greeter&quot; --overrides=&apos;{ &quot;apiVersion&quot;: &quot;v1&quot;, &quot;spec&quot;: { &quot;template&quot;: {&quot;metadata&quot;: {&quot;annotations&quot;: { &quot;inject.istio.io/templates&quot;:&quot;sidecar,spire&quot; } } }}}&apos; -i --tty 
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;4.2 Deploy Bookinfo Sample Application in Cluster-2&lt;/h2&gt;
&lt;p&gt;In Cluster 2, apply a new ClusterSpiffeID called &lt;em&gt;&lt;strong&gt;federated&lt;/strong&gt;&lt;/em&gt; that registers resources with the label &lt;strong&gt;spiffe.io/spire-managed-identity=spire&lt;/strong&gt; that can be federated with cluster1. Then apply the bookinfo sample application manifest.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;#Apply SPIFFEID
kubectl apply -f /services/spire/federation/clusterspiffeid-federated-cluster2.yaml
#Apply Bookinfo Manifest
kubectl apply -f services/istio/release-1.17/bookinfo.yaml 
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;4.3 Check if all the resources created are up and running&lt;/h2&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;Cluster1:~ # kubectl get po 
NAME                              READY   STATUS    RESTARTS   AGE 
curl-greeter                      2/2     Running   0          15h 
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;Cluster2:~ # kubectl get po 
NAME                                READY   STATUS    RESTARTS   AGE 
details-v1-bff8759df-vkvb4          2/2     Running   0          16h 
greeter-client-76686757cd-6j2ft     2/2     Running   0          21h 
productpage-v1-98887b9b-x5k24       2/2     Running   0          16h 
ratings-v1-7ddbb859fc-htmfq         2/2     Running   0          16h 
reviews-v1-67b576c8bf-jr6tj         2/2     Running   0          16h 
reviews-v2-7ffbdcc5f7-m2c29         2/2     Running   0          16h 
reviews-v3-6dbfcc6d89-zn9tw         2/2     Running   0          16h 
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;4.4 Check that the workload identity was issued by SPIRE&lt;/h2&gt;
&lt;p&gt;Check the SPIRE certificate at the curl-greeter pod:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;Cluster1:~ # istioctl proxy-config secret curl-greeter -o json | jq -r \ 
&gt; &apos;.dynamicActiveSecrets[0].secret.tlsCertificate.certificateChain.inlineBytes&apos; | base64 --decode &gt; chain.pem 

Cluster1:~ # openssl x509 -in chain.pem -text | grep SPIRE 
        Subject: C = US, O = SPIRE, x500UniqueIdentifier = c3dd29b5f4a326f8a567854407456ea9 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;em&gt;Note: Similarly, certificates for other pods can be checked.&lt;/em&gt; &lt;/p&gt;
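&lt;p&gt;For example, to inspect the identity issued to the product page pod on cluster 2 (using the pod name from the listing above), the same approach can be applied:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;Cluster2:~ # istioctl proxy-config secret productpage-v1-98887b9b-x5k24 -o json | jq -r \ 
&gt; &apos;.dynamicActiveSecrets[0].secret.tlsCertificate.certificateChain.inlineBytes&apos; | base64 --decode | openssl x509 -noout -subject 
&lt;/code&gt;&lt;/pre&gt;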
&lt;h2&gt;4.5 Create a secret&lt;/h2&gt;
&lt;p&gt;Create a secret to hold bookinfo product page credentials like this: &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;kubectl create -n istio-system secret generic bookinfo-credential \ 
--from-file=tls.key=bookinfo.com.key \ 
--from-file=tls.crt=bookinfo.com.crt \ 
--from-file=ca.crt=bookinfo.ca.crt
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Obtain the tls cert and key using this command: &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;kubectl exec -n spire -c spire-server deployment/spire-server -- /opt/spire/bin/spire-server x509 mint -spiffeID spiffe://cluster2.demo/ns/default/sa/bookinfo-productpage
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;em&gt;Note: Copy the SVID section into a new file bookinfo.com.crt and Private key section into bookinfo.com.key.&lt;/em&gt; &lt;/p&gt;
&lt;p&gt;Obtain the ca cert using this command: &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;kubectl exec -n spire -c spire-server deployment/spire-server -- /opt/spire/bin/spire-server bundle list
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;em&gt;Note: Copy the CA cert under the cluster1.demo section into a new file bookinfo.ca.crt.&lt;/em&gt; &lt;/p&gt;
&lt;p&gt;Apply the gateway configuration for the bookinfo application found &lt;a href=&quot;https://github.com/cxteamtrials/caas-trials-content/blob/main/services/istio/release-1.17/samples/bookinfo-gateway.yaml&quot;&gt;here&lt;/a&gt; (it uses the Istio ingress gateway):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;kubectl apply -f services/istio/release-1.17/samples/bookinfo-gateway.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;4.6 Generate traffic to the sample application on cluster 2 from the curl-greeter on cluster 1&lt;/h2&gt;
&lt;p&gt;Run the curl command from cluster 1 (the IP address is the external IP of the istio-ingressgateway):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;Cluster1:~ # curl -v -HHost:bookinfo.com --resolve &quot;bookinfo.com:443:172.16.17.2&quot; --cacert bookinfo.ca.crt --cert svid.pem --key key.pem &quot;https://bookinfo.com:443/productpage&quot; -k 
* Added bookinfo.com:443:172.16.17.2 to DNS cache
* Hostname bookinfo.com was found in DNS cache
*   Trying 172.16.17.2:443...
* TCP_NODELAY set
* Connected to bookinfo.com (172.16.17.2) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: bookinfo.ca.crt
  CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Request CERT (13):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Certificate (11):
* TLSv1.3 (OUT), TLS handshake, CERT verify (15):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use h2
* Server certificate:
*  subject: C=US; O=SPIRE; x500UniqueIdentifier=a09f8093609833482827d697f2719205
*  start date: May  9 08:45:18 2023 GMT
*  expire date: May  9 09:45:28 2023 GMT
*  issuer: C=US; O=SPIFFE
*  SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x5569570e8050)
&gt; GET /productpage HTTP/2
&gt; Host:bookinfo.com
&gt; User-Agent: curl/7.66.0
&gt; Accept: */*
&gt;
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* old SSL session ID is stale, removing
* Connection state changed (MAX_CONCURRENT_STREAMS == 2147483647)!
&amp;#x3C; HTTP/2 200
&amp;#x3C; content-type: text/html; charset=utf-8
&amp;#x3C; content-length: 4294
&amp;#x3C; server: istio-envoy
&amp;#x3C; date: Tue, 09 May 2023 08:49:40 GMT
&amp;#x3C; x-envoy-upstream-service-time: 24
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Obtain the cert and key files (used in the curl command above) using:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;kubectl exec -n spire -c spire-server deployment/spire-server -- /opt/spire/bin/spire-server x509 mint -spiffeID spiffe://cluster1.demo/curl-greeter
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;em&gt;Note: Copy the SVID section into a new file svid.pem and Private key section into key.pem.&lt;/em&gt; &lt;/p&gt;
&lt;p&gt;Obtain ca cert using: &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;kubectl exec -n spire -c spire-server deployment/spire-server -- /opt/spire/bin/spire-server bundle list
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;em&gt;Note: Copy the CA cert under the cluster2.demo section into a new file bookinfo.ca.crt.&lt;/em&gt; &lt;/p&gt;
&lt;h2&gt;4.7 Visualize using Service Mesh&lt;/h2&gt;
&lt;p&gt;Using the Kiali dashboard, observe the graphs of generated traffic. &lt;/p&gt;
&lt;p&gt;The graph below shows service-to-service communication, and the lock icons indicate the mTLS protocol.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/mtls.png&quot; alt=&quot;mTLS Graph&quot;&gt;&lt;/p&gt;
&lt;h1&gt;S﻿ummary&lt;/h1&gt;
&lt;p&gt;The goal of this blog post was to guide you through federating SPIRE across two Kubernetes clusters deployed on HPE GreenLake for Private Cloud Enterprise. It shows you how to do this by creating a cluster federated trust domain and federated ClusterSpiffeIDs for your sample application workloads and then helps you visualize your service mesh through Kiali Dashboard.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[There’s always something new to learn]]></title><link>https://developer.hpe.com/2023-May-03/</link><guid isPermaLink="false">https://developer.hpe.com/2023-May-03/</guid><pubDate>Fri, 05 May 2023 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[NetCDF in Chapel, Part 2: Reading a Dataset in Parallel]]></title><description><![CDATA[P﻿laceholder]]></description><link>https://developer.hpe.com/netcdf-in-chapel-part-2-reading-a-dataset-in-parallel/</link><guid isPermaLink="false">https://developer.hpe.com/netcdf-in-chapel-part-2-reading-a-dataset-in-parallel/</guid><pubDate>Wed, 03 May 2023 16:50:00 GMT</pubDate><content:encoded>&lt;p&gt;P﻿laceholder&lt;/p&gt;</content:encoded></item><item><title><![CDATA[NetCDF in Chapel, Part 1: Interfacing with the C Library]]></title><description><![CDATA[E﻿xternal blog]]></description><link>https://developer.hpe.com/netcdf-in-chapel-part-1-interfacing-with-the-c-library/</link><guid isPermaLink="false">https://developer.hpe.com/netcdf-in-chapel-part-1-interfacing-with-the-c-library/</guid><pubDate>Thu, 27 Apr 2023 05:31:00 GMT</pubDate><content:encoded>&lt;p&gt;E﻿xternal blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[New PowerShell library for the HPE GreenLake platform]]></title><description><![CDATA[The purpose of this blog is to familiarize readers with the recently released PowerShell library for the HPE GreenLake edge-to-cloud…]]></description><link>https://developer.hpe.com/new-powershell-library-for-the-hpe-greenlake-cloud-platform/</link><guid isPermaLink="false">https://developer.hpe.com/new-powershell-library-for-the-hpe-greenlake-cloud-platform/</guid><pubDate>Wed, 26 Apr 2023 11:33:07 GMT</pubDate><content:encoded>&lt;style&gt;ul li{ font-size:28px;padding-bottom: 10px; line-height: 1.2}&lt;/style&gt;
&lt;style&gt;ol li{ font-size:28px;padding-bottom: 0.5em; line-height: 1.2}&lt;/style&gt;
&lt;style&gt;
  img {
    max-width: 100%;
    height: auto;
    border: 1px solid #ccc;
    margin: 20px;
    box-shadow: 2px 2px 5px #ccc;
  }
&lt;/style&gt;
&lt;p&gt;The purpose of this blog is to familiarize readers with the recently released PowerShell library for the HPE GreenLake edge-to-cloud platform. This library allows PowerShell developers, IT automation experts, and DevOps professionals to use the HPE GreenLake platform&apos;s API without having to rely on the Graphical User Interface (GUI).&lt;/p&gt;
&lt;p&gt;With the introduction of this new library, anyone with basic PowerShell skills can now automate their interaction with the HPE GreenLake API, leverage the many resources offered by the HPE GreenLake platform, and enjoy increased productivity and efficiency.&lt;/p&gt;
&lt;p&gt;Are you seeking agility and the ability to automate routine tasks through HPE GreenLake? If so, this library is the ideal solution for you!&lt;/p&gt;
&lt;p&gt;Among many other operations, this library allows you to create users, assign roles, send invitations to users, add and archive devices, assign applications and attach subscriptions, add tags to any device, and get all kinds of information about any HPE GreenLake resource.&lt;/p&gt;
&lt;p&gt;In addition, you can implement resource restriction policies, fully automate device onboarding, generate API credentials for specific application instances (such as HPE Compute Ops Management or HPE Data Services Cloud Console), and extend your API calls to any HPE GreenLake application instance on the fly.&lt;/p&gt;
&lt;p&gt;The HPE GreenLake platform provides a shared set of common cloud services for different application instances and services. This common service experience brings all HPE service offerings together into a single, comprehensive customer experience that this library leverages.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/ccs.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;HPE GreenLake CCS, short for HPE GreenLake Common Cloud Services, offers a collection of API-enabled services that serve various functions, with the primary ones being:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Registration&lt;/strong&gt;: in charge of the user registration. Includes email verification, account creation with HPE GreenLake platform&apos;s Identity Provider (IdP) including integration with Ping Identity as the OpenID Connect (OIDC) Relying Party (RP) provider for user authenticity verification and token issuance&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Authentication (AuthN)&lt;/strong&gt;: takes care of user authentication and HPE GreenLake CCS-to-application authentication. Includes unified login, single or multi-factor authentication (MFA), single sign-on (SSO) with a third party, federated authentication with the customer’s identity provider (IdP), single logout, and user management, and supports increased security with PKCE (Proof Key for Code Exchange, pronounced “pixy”) for OAuth 2.0&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Authorization (AuthZ)&lt;/strong&gt;: provides authorization service for HPE GreenLake CCS. Includes unified Role-Based Access Control (RBAC) for users, custom roles and Resource Restriction Policy (RRP) including role creation, resource assignment to a role, role assignment to user, etc.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Device activation and inventory&lt;/strong&gt;: provides Zero Touch Provisioning (ZTP) and Asset inventory (contract and customer order processing). Includes device firmware management (firmware repository for resources, latest FW check, FW upgrade, baseline upgrade)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Subscription management&lt;/strong&gt;: offers subscription inventory, support for different consumption models&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/glcp2.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The majority of these essential functions can be performed effortlessly and with great efficiency using the HPE GreenLake PowerShell library.&lt;/p&gt;
&lt;p&gt;PowerShell offers numerous benefits, such as flexibility and ease of learning, and with this new library, you can significantly boost your productivity and be agile while reducing the risk of manual errors. Most, if not all, of the HPE GreenLake tasks can be automated in a smarter way through its use, making it a must-have tool for any IT automation engineer or DevOps personnel.&lt;/p&gt;
&lt;h1&gt;Introduction to the HPE GreenLake PowerShell library&lt;/h1&gt;
&lt;p&gt;The new HPE GreenLake PowerShell library can be found on the Hewlett Packard Enterprise GitHub repository at the following &lt;a href=&quot;https://github.com/HewlettPackard/POSH-HPEGreenLake&quot;&gt;location&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/github_glcp.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The supported PowerShell editions for this new library are Desktop (with 5.1 and above) and Core (with 7.x supported on Windows, Linux and Mac).&lt;/p&gt;
&lt;p&gt;This module is also published in the &lt;a href=&quot;https://www.powershellgallery.com/packages/HPEGreenLake&quot;&gt;PowerShell Gallery&lt;/a&gt;. The PowerShell Gallery is a repository for sharing and distributing PowerShell modules and scripts. It&apos;s a community-driven platform that provides access to various PowerShell resources, enabling you to easily discover, install, and publish your own PowerShell content. The PowerShell Gallery can be accessed through the PowerShellGet module, which comes pre-installed with Windows PowerShell 5.0 and above.&lt;/p&gt;
&lt;p&gt;In most GitHub repositories, the &lt;strong&gt;README.md&lt;/strong&gt; file is a crucial file that provides essential information about the project. It typically contains instructions on how to install and use the module, as well as any other important details that potential users or contributors may need to know. Other files and folders in the repository include source code, documentation, configuration files, samples, and more. These files are organized into different directories to help keep things organized and easy to find. As mentioned, the &lt;strong&gt;README.md&lt;/strong&gt; file is an excellent starting point for getting familiar with this new module.&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;Samples&lt;/strong&gt; folder provides several scripts and csv file examples to demonstrate how this library can be best used. These examples illustrate the variety of functionality in the library, including connecting to the platform, onboarding devices, generating API credentials, interacting with iLO, etc.&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;&lt;a href=&quot;https://github.com/HewlettPackard/POSH-HPEGreenLake/issues&quot;&gt;Issues&lt;/a&gt;&lt;/strong&gt; tab is a feature commonly found on websites that use version control systems such as Git, GitHub, or Bitbucket. It allows users to report issues, request new features, or provide feedback related to a project. When users click on one of the &lt;strong&gt;Get Started&lt;/strong&gt; buttons, they are redirected to a form where they can enter details about their issue or suggestion. This information is then stored as a new issue in the project&apos;s issue tracker, where developers can see it and take action accordingly. The &lt;strong&gt;Issues&lt;/strong&gt; tab is a valuable tool for effective communication between developers and users, allowing for a more transparent and collaborative development process.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/github_-glcp_issues.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h1&gt;Getting started with the HPE GreenLake PowerShell library&lt;/h1&gt;
&lt;p&gt;&lt;strong&gt;Install-Module&lt;/strong&gt; cmdlet is a common way to install PowerShell modules from online repositories. The cmdlet downloads and installs the module and any associated dependencies that it may have. To use the &lt;strong&gt;Install-Module&lt;/strong&gt; cmdlet, you can simply open a PowerShell console (or a PowerShell ISE console or VS Code), and run the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;&gt; Install-Module HPEGreenLake
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will download and install the module from the official PowerShell Gallery repository. If this is your first time installing a module from the PowerShell Gallery, it will ask you to confirm whether you trust the repository or not. You can type &lt;strong&gt;Y&lt;/strong&gt; and press &lt;strong&gt;Enter&lt;/strong&gt; to continue with the installation.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: You must have an internet connection to install the module from the PowerShell Gallery.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: At the time of writing this blog, the HPE GreenLake PowerShell library has no dependencies. In other words, the library does not require the installation of any other software or modules to function properly.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;There could be several issues you may encounter while using the &lt;strong&gt;Install-Module&lt;/strong&gt; cmdlet in PowerShell, some of which are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Insufficient permissions&lt;/strong&gt;: You may need administrative privileges to install modules. If you do not have sufficient privileges, you can run your PowerShell client as an administrator or use: &lt;strong&gt;Install-Module HPEGreenLake -Scope CurrentUser&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Blocked security protocols&lt;/strong&gt;: Sometimes, the security protocols built into PowerShell can prevent the installation process. This usually happens when the PowerShell execution policy is set to &quot;Restricted&quot;. If &lt;strong&gt;Get-ExecutionPolicy&lt;/strong&gt; shows Restricted, you may need to run &lt;strong&gt;Set-ExecutionPolicy RemoteSigned&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To find all cmdlets in a module that can be used with a specific resource, you can use the &lt;strong&gt;Get-Command&lt;/strong&gt; cmdlet along with the &lt;strong&gt;-Module&lt;/strong&gt; parameter to specify the name of the module.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;&gt; Get-Command -Module HPEGreenLake
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In this first release, about 50 cmdlets are available with the HPE GreenLake module. The table below describes the main resources and features that are supported:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/lj-key-supported-resources.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In PowerShell, cmdlet names are constructed using a verb-noun format. The verb describes the action that the cmdlet performs (Get, Set, Remove, Invoke, etc.), and the noun specifies the object that the cmdlet acts upon. For example, &lt;strong&gt;Get-HPEGLUserRole&lt;/strong&gt; retrieves information about user roles in the HPE GreenLake platform. Note that resource nouns in the HPE GreenLake library always start with &lt;strong&gt;HPEGL&amp;#x3C;resource_name&gt;&lt;/strong&gt; (GL for GreenLake).&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Get-Help&lt;/strong&gt; is an important cmdlet in PowerShell as it provides detailed information about a specific cmdlet or function. To get the full help details for the &lt;strong&gt;Get-HPEGLUserRole&lt;/strong&gt; cmdlet, you can use the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;&gt; Get-Help Get-HPEGLUserRole -Full
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To view the detailed examples of how to use a particular cmdlet, you can use the &lt;strong&gt;Get-Help&lt;/strong&gt; cmdlet along with the &lt;strong&gt;-Examples&lt;/strong&gt; parameter followed by the name of the cmdlet. Here&apos;s an example command that you can use to get the examples of &lt;strong&gt;Get-HPEGLUserRole&lt;/strong&gt; cmdlet:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;&gt; Get-Help Get-HPEGLUserRole -Examples
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will display all the available examples for the &lt;strong&gt;Get-HPEGLUserRole&lt;/strong&gt; cmdlet in a list format. You can review the examples and use them according to your requirements.&lt;/p&gt;
&lt;h1&gt;Connection to the HPE GreenLake platform&lt;/h1&gt;
&lt;p&gt;The connection to the HPE GreenLake platform is done using the &lt;strong&gt;Connect-HPEGL&lt;/strong&gt; cmdlet.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: The library currently only supports single-factor authentication. Multi-factor authentication (MFA) and SAML Single Sign-On are not supported.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: This limitation means that users who use SAML single sign-on with the HPE GreenLake platform (this applies to all HPE employees) cannot use their corporate email credentials when logging in via the &lt;strong&gt;Connect-HPEGL&lt;/strong&gt; cmdlet.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: While waiting for SAML Single Sign-On support, the temporary solution is to add a secondary email into your HPE GreenLake account. Just go to the HPE GreenLake GUI and use the &lt;strong&gt;Invite Users&lt;/strong&gt; card in &lt;strong&gt;Manage&lt;/strong&gt; / &lt;strong&gt;Identity &amp;#x26; Access&lt;/strong&gt; to send an invitation to a non-corporate email address. Once you receive the email, accept the invitation and you will be directed to the HPE GreenLake interface where you can set a password. Once this is done, you can use this email address and password to log in with &lt;strong&gt;Connect-HPEGL&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: To interact with the HPE GreenLake platform through this library, you must possess at least the &lt;em&gt;&lt;strong&gt;Observer&lt;/strong&gt;&lt;/em&gt; built-in role in the &lt;em&gt;&lt;strong&gt;HPE GreenLake platform&lt;/strong&gt;&lt;/em&gt; application. This role only grants view privileges. However, if you need to make modifications, then either the &lt;em&gt;&lt;strong&gt;Operator&lt;/strong&gt;&lt;/em&gt; (with view and edit privileges) or the &lt;em&gt;&lt;strong&gt;Administrator&lt;/strong&gt;&lt;/em&gt; (with view, edit, and delete privileges) built-in roles are necessary. If none of these built-in roles are suitable, you can also create your own custom role that meets your role-based access control requirements.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;To begin with, it is recommended that you create a credentials object that includes your HPE GreenLake user&apos;s email and password:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;&gt; $GLCP_userName = &quot;Username@domain.com&quot;  
&gt; $GLCP_password = &quot;xxxxxxxxxxxxxxxx&quot;  
&gt; $secpasswd = ConvertTo-SecureString -String $GLCP_password -AsPlainText -Force  
&gt; $credentials = New-Object System.Management.Automation.PSCredential ($GLCP_userName, $secpasswd)  
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then using the &lt;strong&gt;Connect-HPEGL&lt;/strong&gt; cmdlet, you can connect to the platform:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;&gt; Connect-HPEGL -Credential $credentials
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you have multiple company accounts, you can add the &lt;strong&gt;-CompanyName&lt;/strong&gt; parameter to connect to the appropriate company account as follows:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;&gt; Connect-HPEGL -Credential $credentials -CompanyName &quot;HPE Mougins&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After successfully authenticating to the HPE GreenLake platform, the &lt;em&gt;[HPEGreenLake.Connection]&lt;/em&gt; object is returned to the caller and (at the same time) is added to the global session tracker &lt;strong&gt;$HPEGreenLakeSession&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/lj-picture10.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;To display the full content of this global variable in the console, use:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;&gt; $HPEGreenLakeSession | format-List
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/lj-picture6.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;This object contains the following properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;session&lt;/strong&gt;: Web session object containing information about the HPE GreenLake session, including cookies and credentials&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;oauth2AccessToken&lt;/strong&gt;: OAuth2 access token string returned by the API after successful authentication&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;oauth2IdToken&lt;/strong&gt;: OAuth2 ID token string returned by the API after successful authentication&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;oauth2RefreshToken&lt;/strong&gt;: OAuth2 refresh token string returned by the API after successful authentication&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;userName&lt;/strong&gt;: Email address that was authenticated with HPE GreenLake&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;customerId&lt;/strong&gt;: HPE GreenLake customer ID&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;companyName&lt;/strong&gt;: Name of the company account&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;oauth2TokenCreation&lt;/strong&gt;: OAuth2 token creation datetime value&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;oauth2TokenCreationEpoch&lt;/strong&gt;: Unix time since creation of the OAuth2 token&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;userSessionIdleTimeout&lt;/strong&gt;: HPE GreenLake user session timeout in minutes&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;apiCredentials&lt;/strong&gt;: Collection of application API credentials created during the session&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Each API credential object contains the following properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;credential_name&lt;/strong&gt;: Name of the API credential&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;application_name&lt;/strong&gt;: Name of the application using this credential&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;ccs_region&lt;/strong&gt;: Region of the application using this credential&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;application_instance_id&lt;/strong&gt;: Instance ID of the application using this credential&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;client_id&lt;/strong&gt;: Client ID of the API credential&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;client_secret&lt;/strong&gt;: Client Secret of the API credential&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;connectivity_endpoint&lt;/strong&gt;: Connectivity endpoint of the application instance&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The &lt;strong&gt;apiCredentials&lt;/strong&gt; property is only filled in when using &lt;strong&gt;New-HPEGLAPIcredential&lt;/strong&gt; during a session.&lt;/p&gt;
&lt;p&gt;All properties in this object are important. &lt;strong&gt;Session&lt;/strong&gt; stores what the library uses to make all the API calls in the cmdlets. You can examine it using:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;&gt; $HPEGreenLakeSession.session
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The output is as follows:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/lj-picture7.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Note that in the headers, an &lt;strong&gt;Authorization, Bearer &amp;#x3C;token&gt;&lt;/strong&gt; header is defined.&lt;/p&gt;
&lt;p&gt;This token is a JSON Web Token (JWT). Pronounced &quot;Jot&quot;, this type of token is used in the OAuth 2.0 standard for authentication and authorization purposes with the HPE GreenLake platform. OAuth 2.0 is a secure and standardized authorization framework designed to enable third-party applications to access resources and data without the need for the user to reveal their login credentials to the third-party application or API.&lt;/p&gt;
&lt;p&gt;The JWT typically contains data about the token (such as its type and signature algorithm), statements about the user (such as their ID, name, email address, or role) and an expiration time. Finally, it includes a signature that is used to verify that the token has not been altered during transmission and that it was issued by a trusted source.&lt;/p&gt;
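&lt;p&gt;If you are curious about the claims the token carries, here is a quick, purely illustrative way to decode its payload from the session object. It only reads the claims locally; it does not validate the signature.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;&gt; # Illustrative sketch: decode the JWT payload stored in the session (no signature validation)
&gt; $jwt = $HPEGreenLakeSession.oauth2AccessToken
&gt; $payload = $jwt.Split(&apos;.&apos;)[1].Replace(&apos;-&apos;, &apos;+&apos;).Replace(&apos;_&apos;, &apos;/&apos;)
&gt; # Base64 data must be padded to a multiple of 4 characters
&gt; switch ($payload.Length % 4) { 2 { $payload += &apos;==&apos; } 3 { $payload += &apos;=&apos; } }
&gt; [System.Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($payload)) | ConvertFrom-Json
&lt;/code&gt;&lt;/pre&gt;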
&lt;p&gt;It&apos;s important to keep in mind that the access token will only be valid for a period of 2 hours, after which it will expire and no longer be usable. Note that the library has a built-in function that runs in the background to refresh the access token before it expires. The function is triggered whenever a cmdlet is executed. However, if your session has already expired due to prolonged inactivity (defined by the session timeout in the HPE GreenLake user preferences) you will need to reconnect to the platform with &lt;strong&gt;Connect-HPEGL&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;Connect-HPEGL&lt;/strong&gt; cmdlet handles the entire authorization process, which includes authorizing with the third-party identity provider and collecting various tokens. Once all the necessary tokens have been obtained, a session is created by utilizing bearer authentication with the HPE GreenLake platform Authentication (AuthN) service API using the &lt;em&gt;id_token&lt;/em&gt; present in the payload to confirm your identity.&lt;/p&gt;
&lt;p&gt;When the session is created, CCS sets a unique session ID in a cookie that the library stores in &lt;strong&gt;$HPEGreenLakeSession.session.cookies&lt;/strong&gt; so that all subsequent requests can use it. Other cookies are also added to maintain the session state between the PowerShell session and the AuthN service API. For each request from the HPE GreenLake library, these cookies are sent back to the CCS API, which allows the cloud platform to identify the user and grant various permissions.&lt;/p&gt;
&lt;h1&gt;Interaction with the HPE GreenLake API&lt;/h1&gt;
&lt;h2&gt;Main Cmdlets for roles, permissions, and restrictions&lt;/h2&gt;
&lt;p&gt;In HPE GreenLake, you have the ability to manage users and their permissions to services and resources within your account. The management of these permissions is done through Roles which are composed of a set of permissions that provide access to users in your HPE GreenLake account.&lt;/p&gt;
&lt;p&gt;Additionally, you can restrict access to specific resources by creating custom resource groupings using Resource Restriction Policies.&lt;/p&gt;
&lt;p&gt;The main cmdlets for managing users and their access to HPE GreenLake resources are as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;To manage users:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Get-HPEGLUser&lt;/strong&gt;: to get users and their activity status and roles&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Send-HPEGLUserInvitation&lt;/strong&gt;: to invite users to your account&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Remove-HPEGLUser&lt;/strong&gt;: to delete users from your account&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To view and manage user roles and permissions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Get-HPEGLRole&lt;/strong&gt;: to view the role (= group of permissions) that you can specify and assign to users in your HPE GreenLake account&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Get-HPEGLUserRole&lt;/strong&gt;: to view user roles and permissions&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Set-HPEGLUserRole&lt;/strong&gt;: to set user roles and permissions&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Remove-HPEGLUserRole&lt;/strong&gt;: to remove user roles and permissions&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To define Resource Restriction Policies and manage access to resources:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Get-HPEGLResourceRestrictionPolicy&lt;/strong&gt;: to view resource restriction policies in your HPE GreenLake account&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;New-HPEGLResourceRestrictionPolicy&lt;/strong&gt;: to set resource restriction policies in your HPE GreenLake account&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Remove-HPEGLResourceRestrictionPolicy&lt;/strong&gt;: to remove resource restriction policies in your HPE GreenLake account&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Managing devices with the HPE GreenLake PowerShell library&lt;/h2&gt;
&lt;p&gt;In HPE GreenLake, devices are the compute, network, and storage resources that you can onboard onto the platform in your HPE GreenLake account. Once onboarded, devices can be viewed, assigned to an application instance, attached to a subscription, and tagged.&lt;/p&gt;
&lt;p&gt;In the library, the main cmdlet to get information on devices is &lt;strong&gt;Get-HPEGLDevice&lt;/strong&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;&gt; Get-HPEGLdevice
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As in the GUI, the bare cmdlet (without parameters) returns a subset of resources in a page. A maximum of 100 devices are displayed by default. The &lt;strong&gt;-Nolimit&lt;/strong&gt; parameter can be used to display all available devices, but this may result in a longer response time.&lt;/p&gt;
&lt;p&gt;The cmdlets in this library usually generate formatted objects when they are displayed on the console to enhance readability and ease of understanding. As an example, if you enter:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;&gt; Get-HPEGLdevice -Limit 10
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/lj-picture8.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The generated output is &quot;formatted&quot;. To get the full view of a formatted object in PowerShell, you can use the &lt;strong&gt;Format-List&lt;/strong&gt; cmdlet (or &lt;strong&gt;fl&lt;/strong&gt; its alias). This cmdlet allows you to display all the properties and values of an object in a list form:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;&gt; Get-HPEGLdevice -Limit 10 | fl
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/lj-picture9.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Several other parameters are available such as Tags, Archived, Stats, Devicetype, Serialnumber, etc. You can use the help to get the complete list, and try them out at your convenience.&lt;/p&gt;
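&lt;p&gt;For example, assuming the serial number used later in this post, a lookup for a single device might look like this (check the help for the exact parameter syntax):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;&gt; Get-HPEGLDevice -Serialnumber WGX2380BLC | fl
&lt;/code&gt;&lt;/pre&gt;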
&lt;h2&gt;Adding devices to the HPE GreenLake platform&lt;/h2&gt;
&lt;p&gt;To manage devices using HPE GreenLake, one of the initial steps is to onboard the device onto the platform. When using the GUI, this step is typically done manually. One of the major advantages of using a library that supports automation is the ability to streamline and fully automate processes such as device onboarding. With automation, you can significantly increase efficiency, reduce errors, and save time and resources that would otherwise be spent on manual tasks.&lt;/p&gt;
&lt;p&gt;The process of onboarding devices on the platform actually requires several steps that must first be performed on the HPE GreenLake platform side. Here are the different steps and commands that need to be executed:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/lj-device-onboarding.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Add-HPEGLDeviceCompute&lt;/strong&gt; is the first cmdlet that you can use to simply add a Compute device to the HPE GreenLake console.  The sole purpose of this cmdlet is to add compute(s) with the specified tags to the platform and nothing beyond that. The corresponding cmdlets for adding storage and network devices are &lt;strong&gt;Add-HPEGLDeviceStorage&lt;/strong&gt; and &lt;strong&gt;Add-HPEGLDeviceNetwork&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;It is worth noting that a CSV file can be utilized to add multiple devices to the platform by containing device information such as serial number, part number, subscription key, MAC address and tags depending on the type of device you wish to add. You can use this CSV file as an input for the pipeline of these cmdlets, as in this example with &lt;strong&gt;Add-HPEGLDeviceCompute&lt;/strong&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;&gt; Import-Csv Compute_Devices.csv | Add-HPEGLDeviceCompute
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In this case, the content of the csv file to define the Compute must use the following format:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;SerialNumber, PartNumber, Tags&lt;/em&gt;&lt;br&gt;
&lt;em&gt;WGX2380BLC, P55181-B21, Country=US State=PACA App=RH&lt;/em&gt;&lt;br&gt;
&lt;em&gt;AZX2380BLD, P55182-B21, State=Texas Role=production&lt;/em&gt;&lt;br&gt;
&lt;em&gt;7LKY23D9LM, P54277-B21&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Note that for 7LKY23D9LM, no tag is assigned in this example.&lt;/p&gt;
&lt;p&gt;Tags are optional with Compute but highly recommended. They are particularly useful when creating resource restriction policies. They must meet the string format: &quot;&lt;strong&gt;&amp;#x3C;Name&gt;=&amp;#x3C;Value&gt; &amp;#x3C;Name&gt;=&amp;#x3C;Value&gt;&lt;/strong&gt;&quot; such as &quot;&lt;strong&gt;Country=US State=TX App=Grafana&lt;/strong&gt;&quot; or &quot;&lt;strong&gt;Country=US&lt;/strong&gt;&quot;.&lt;/p&gt;
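&lt;p&gt;If you only have one or two servers to add, a CSV file is not strictly necessary; an object with the same property names can be piped in directly, as in this hypothetical example reusing the values above:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;&gt; [PSCustomObject]@{ SerialNumber = &quot;WGX2380BLC&quot;; PartNumber = &quot;P55181-B21&quot;; Tags = &quot;Country=US State=PACA App=RH&quot; } | Add-HPEGLDeviceCompute
&lt;/code&gt;&lt;/pre&gt;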
&lt;p&gt;&lt;strong&gt;Add-HPEGLDeviceComputeFullService&lt;/strong&gt; is a much more advanced cmdlet than the previous one. This specific command has the ability to perform all mandatory steps of Compute onboarding, including both the HPE GreenLake platform side and server side procedures, as described in the following diagram:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/lj-device-onboarding-full-service.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;A single command is sufficient to implement the entire onboarding process. The steps include:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Connect to HPE GreenLake using the default account or the one provided using the &lt;strong&gt;-GLCompanyName&lt;/strong&gt; parameter.&lt;/li&gt;
&lt;li&gt;Set the automatic assignment of subscriptions with the first Compute Enhanced subscription found that has not expired and has available subscription seats.&lt;/li&gt;
&lt;li&gt;Onboard each device to the HPE GreenLake platform account.&lt;/li&gt;
&lt;li&gt;Set optional server tags if defined with the &lt;strong&gt;-Tags&lt;/strong&gt; parameter.&lt;/li&gt;
&lt;li&gt;Associate each device with the defined application instance (subscription is automatically assigned to each device as per auto assignment configuration).&lt;/li&gt;
&lt;li&gt;Set each iLO to use a web proxy if defined via the different web proxy parameters.&lt;/li&gt;
&lt;li&gt;Connect each iLO to the HPE Compute Ops Management instance.&lt;/li&gt;
&lt;/ol&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: This cmdlet is involved in the configuration of iLOs so the prerequisite for running this cmdlet is to have network access to both the Internet (to access the HPE GreenLake platform) and the iLO network (which is usually a private network).&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This time, the content of the csv file should contain information regarding iLOs which includes their IP addresses, login credentials (an administrator account is a requisite), and tags (if applicable). This is the only data needed to feed the pipeline. The format should be as follows:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;IP, Username, Password, Tags&lt;/em&gt;&lt;br&gt;
&lt;em&gt;192.168.3.193, Administrator, password, Country=FR City=Mougins App=InfluxDB Owner=LJ&lt;/em&gt;&lt;br&gt;
&lt;em&gt;192.168.3.191, Administrator, password, Country=US State=Texas Role=Production Owner=LJ&lt;/em&gt;&lt;br&gt;
&lt;em&gt;192.168.3.205, demo, password&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;All Compute data that is required by the HPE GreenLake platform to complete the Compute onboarding is collected and managed by the cmdlet in the background using Redfish iLO calls.&lt;/p&gt;
&lt;p&gt;You can use &lt;strong&gt;Get-Help&lt;/strong&gt; to learn more about this cmdlet and particularly look at the different examples:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;&gt; Help Add-HPEGLDeviceComputeFullService -full
&lt;/code&gt;&lt;/pre&gt;
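&lt;p&gt;As a hypothetical end-to-end example, onboarding the iLOs listed in a CSV file like the one above could then be as simple as the following one-liner (the file name is a placeholder, and additional parameters such as the target application instance or web proxy settings are described in the help):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;&gt; Import-Csv iLO_Compute_Devices.csv | Add-HPEGLDeviceComputeFullService -GLCompanyName &quot;HPE Mougins&quot;
&lt;/code&gt;&lt;/pre&gt;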
&lt;p&gt;&lt;strong&gt;Add-HPEGLDeviceStorageFullService&lt;/strong&gt; and &lt;strong&gt;Add-HPEGLDeviceNetworkFullService&lt;/strong&gt; offer the same full onboarding service for storage and networking devices.&lt;/p&gt;
&lt;p&gt;Overall, the ability to fully automate device onboarding is a significant advantage that can lead to many benefits for organizations of all sizes.&lt;/p&gt;
&lt;p&gt;Note that there are &lt;a href=&quot;https://github.com/HewlettPackard/POSH-HPEGreenLake/tree/master/Samples&quot;&gt;samples&lt;/a&gt; available in the GitHub repository that can help you learn how to use the full device onboarding cmdlets and how to build a csv file for each device type.&lt;/p&gt;
&lt;h2&gt;Managing applications with the HPE GreenLake PowerShell library&lt;/h2&gt;
&lt;p&gt;The main cmdlets for managing applications in the HPE GreenLake platform are the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;To manage applications:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Get-HPEGLApplication&lt;/strong&gt;: to get all applications available to you on your HPE GreenLake platform, including the ones that are provisioned into your account and the ones that are not&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To add and remove applications:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Add-HPEGLApplication&lt;/strong&gt;: to provision an application in a new region&lt;/li&gt;
&lt;/ul&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: An HPE GreenLake region refers to the geographical location where the HPE GreenLake services are hosted and provided from. This can vary depending on the customer&apos;s location and which HPE GreenLake application you are using. Customers can choose the region that best suits their needs in terms of location, availability, and compliance requirements.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Remove-HPEGLApplication&lt;/strong&gt;: to delete an application instance&lt;/li&gt;
&lt;/ul&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: This cmdlet performs a high-impact, irreversible action. It permanently deletes all data of the application instance and cannot be undone once the process has started. For example, removing the Compute Ops Management US-West instance would remove all user data, all devices, all server settings, all server groups, all API credentials, etc. To guard against accidental deletions, the cmdlet displays a confirmation prompt before deleting an application instance, providing an extra layer of protection for critical data.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To assign and unassign devices to applications:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Set-HPEGLDeviceApplication&lt;/strong&gt;: to attach devices to an application instance&lt;/li&gt;
&lt;/ul&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Assigning devices to an application instance is the process of attaching devices to an application in a region, so that these devices become visible and managed in the application instance by users.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To create and delete API application credentials:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;New-HPEGLAPIcredential&lt;/strong&gt;: to create an API credential for an application instance&lt;/li&gt;
&lt;/ul&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: With the HPE GreenLake platform, developers can make API calls on any application instance as long as they have the API credentials, which consist of a client ID, client secret and connectivity endpoint.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: When API credentials are created, they are automatically stored in the global variable &lt;strong&gt;$HPEGreenLakeSession.apiCredentials&lt;/strong&gt;. This global variable is accessible as long as the PowerShell console is active and &lt;strong&gt;Disconnect-HPEGL&lt;/strong&gt; has not been used, in other words, as long as your session is active. This data is sensitive because it contains everything you need to make API calls with an application API such as Compute Ops Management, Data Services Cloud Console, etc. To get a more complete example of how to deeply interact with Compute Ops Management, see the &lt;a href=&quot;https://github.com/HewlettPackard/POSH-HPEGreenLake/blob/master/Samples/Interaction-with-COM_Sample.ps1&quot;&gt;Interaction-with-COM_Sample&lt;/a&gt; script sample. A short usage sketch also follows this list.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: To store the API credentials beyond the duration of the session, the cmdlet provides a &lt;strong&gt;-Location&lt;/strong&gt; parameter. This parameter can be used with &lt;strong&gt;New-HPEGLAPIcredential&lt;/strong&gt; to save the API credentials in a directory and the &lt;strong&gt;-Encrypt&lt;/strong&gt; parameter can be utilized to encrypt the API credentials before exporting the JSON file into the designated &lt;em&gt;Location&lt;/em&gt; directory.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Remove-HPEGLAPIcredential&lt;/strong&gt;: to delete an API credential of an application instance&lt;/li&gt;
&lt;/ul&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Once API credentials are deleted, access to the application instance API is lost permanently. This is because API credentials are used to authenticate and authorize access to the API. When the credentials are deleted, the corresponding API client ID is also invalidated, which means that any requests made using those credentials will be rejected by the API server.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;/ul&gt;
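&lt;p&gt;To tie these pieces together, here is a minimal, hedged sketch of creating an API credential and reading it back from the session variable. The parameter names used below for the application instance and the credential name are assumptions for illustration only; check &lt;strong&gt;Get-Help New-HPEGLAPIcredential&lt;/strong&gt; for the real signature.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;# Illustrative sketch only - the -ApplicationName, -Region and -Name parameters
# are assumptions; verify with: Get-Help New-HPEGLAPIcredential -Full
New-HPEGLAPIcredential -ApplicationName &quot;Compute Ops Management&quot; -Region &quot;US-West&quot; -Name &quot;MyAPICredential&quot; -Location &quot;C:\Secure&quot; -Encrypt

# Credentials created during the session are kept in the global session variable
$HPEGreenLakeSession.apiCredentials | Format-List
&lt;/code&gt;&lt;/pre&gt;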
&lt;h2&gt;Managing subscriptions with the HPE GreenLake PowerShell library&lt;/h2&gt;
&lt;p&gt;The main cmdlets for managing subscriptions in the HPE GreenLake platform are the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;To get information about subscriptions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Get-HPEGLDeviceSubscription&lt;/strong&gt;: to get information about your device subscriptions available in your HPE GreenLake account. Several parameters can be used to display the subscriptions with available quantity, the subscriptions that are expired or not expired, etc. For example, you can combine parameters to obtain only the subscription keys that have not expired and for which there are still licenses available: &lt;strong&gt;Get-HPEGLDeviceSubscription -Notexpired -Available&lt;/strong&gt; (see the sketch after this list)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To manage device subscriptions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Add-HPEGLDeviceSubscription&lt;/strong&gt;: to add device subscriptions to the HPE GreenLake account&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Set-HPEGLDeviceAutoSubscription&lt;/strong&gt;: to set automatic subscription assignment. This feature automatically assigns a subscription to any supported device that is added to the HPE GreenLake platform&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Remove-HPEGLDeviceAutoSubscription&lt;/strong&gt;: to remove an automatic assignment of subscriptions&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To apply a subscription key to devices:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Set-HPEGLDeviceSubscription&lt;/strong&gt;: to apply a subscription key to one or more devices&lt;/li&gt;
&lt;/ul&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: When the auto device subscription is not supported or not enabled for the type of device you use, you need to manually apply a subscription key.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Remove-HPEGLDeviceSubscription&lt;/strong&gt;: to detach devices from a subscription key&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
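&lt;p&gt;Putting the subscription cmdlets together, a short sketch of a typical check looks like this. The first command is taken directly from the description above; the assignment cmdlets take additional parameters that are not reproduced here, so consult their help before use.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;# List subscription keys that have not expired and still have seats available
Get-HPEGLDeviceSubscription -Notexpired -Available

# Keep the first valid subscription for a later manual assignment
$subscription = Get-HPEGLDeviceSubscription -Notexpired -Available | Select-Object -First 1

# Check the exact parameters of the assignment cmdlets before using them
Get-Help Set-HPEGLDeviceSubscription -Full
Get-Help Set-HPEGLDeviceAutoSubscription -Full
&lt;/code&gt;&lt;/pre&gt;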
&lt;h1&gt;Summary&lt;/h1&gt;
&lt;p&gt;In this blog, I have introduced you to the primary cmdlets that are provided by this new PowerShell library. However, there are more cmdlets available and much more to explore. Nevertheless, you now possess the necessary knowledge to begin testing and to develop your own PowerShell scripts to start automating tasks so you can experience greater productivity and efficiency.&lt;/p&gt;
&lt;p&gt;If you encounter any issues during your testing, it&apos;s highly recommended that you open a &lt;a href=&quot;https://github.com/HewlettPackard/POSH-HPEGreenLake/issues&quot;&gt;New issue&lt;/a&gt; on the library’s GitHub page. This can help improve the library and potentially benefit other users who may be facing similar issues.&lt;/p&gt;
&lt;p&gt;For general questions, or to discuss a topic that doesn&apos;t need to be tracked in the issue tracker, please join the GitHub Discussions for the project: &lt;a href=&quot;https://github.com/HewlettPackard/POSH-HPEGreenLake/discussions/&quot;&gt;Join the discussion&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;Want more?&lt;/h1&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.greenlake.hpe.com/&quot;&gt;HPE GreenLake Developer Portal&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;To learn more about the HPE GreenLake platform, refer to the &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00120892en_us&quot;&gt;HPE GreenLake Edge to Cloud Platform User Guide&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Integrating Istio and SPIRE on HPE GreenLake for Private Cloud Enterprise]]></title><description><![CDATA[This blog demonstrates how to integrate Istio and SPIRE to enable advanced analysis and visualization of the service mesh. Istio Istio is an…]]></description><link>https://developer.hpe.com/integrating-istio-and-spire/</link><guid isPermaLink="false">https://developer.hpe.com/integrating-istio-and-spire/</guid><pubDate>Tue, 25 Apr 2023 09:54:00 GMT</pubDate><content:encoded>&lt;p&gt;This blog demonstrates how to integrate Istio and SPIRE to enable advanced analysis and visualization of the service mesh.&lt;/p&gt;
&lt;h2&gt;Istio&lt;/h2&gt;
&lt;p&gt;Istio is an &lt;strong&gt;open-source service mesh&lt;/strong&gt; that provides a uniform and efficient way to secure, connect, and monitor services. Istio automatically manages load balancing for HTTP, gRPC, WebSocket, and TCP traffic. For details, see &lt;a href=&quot;https://istio.io/latest/about/service-mesh/&quot;&gt;The Istio service mesh&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://istio.io/latest/docs/ops/deployment/architecture/arch.svg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;SPIRE&lt;/h2&gt;
&lt;p&gt;SPIRE (SPIFFE Runtime Environment) is a production-ready implementation of the SPIFFE (Secure Production Identity Framework for Everyone) specification that performs node and workload attestation to securely issue cryptographic identities to workloads running in heterogeneous environments.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/spire.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;SPIRE can be configured as a source of cryptographic identities for Istio workloads through an integration with Envoy’s SDS (Secret Discovery Service) API.
This integration with SPIRE provides flexible attestation options not available with the default Istio identity management while harnessing Istio’s powerful service management.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/picture1.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Once integrated, SPIRE-issued workload certificates can also be used for communication between different trust domains or between two different clusters.&lt;/p&gt;
&lt;p&gt;In this blog post, we will show you the steps you can use to install Istio and SPIRE on the same cluster and how to deploy a sample application using SPIRE-issued identities.&lt;/p&gt;
&lt;h3&gt;Step 1: Creating your own cluster&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;1.1&lt;/strong&gt;  Go to the &lt;a href=&quot;https://client.greenlake.hpe.com/session/hub/choose&quot;&gt;HPE GreenLake&lt;/a&gt; client page.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;1.2&lt;/strong&gt;  Log in to your tenant in HPE GreenLake Central and navigate to the HPE GreenLake for Private Cloud Enterprise dashboard. Click on &lt;strong&gt;Containers&lt;/strong&gt; to launch the containers dashboard.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/picture2.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;1﻿.3&lt;/strong&gt;  You will notice a page similar to the one shown below. Click &lt;strong&gt;Create cluster&lt;/strong&gt; to create a new cluster, or you can also choose from the already created clusters. Ensure that you choose a cluster that does not have Istio pre-deployed, since this exercise will deploy SPIRE and Istio together.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/picture4.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;1.4&lt;/strong&gt;	 After clicking Create cluster, give a name and description to your cluster and identify the type of cluster. In our case, we have chosen a large type.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/picture5.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;1.5&lt;/strong&gt;	 Obtain Kubeconfig for your cluster and launch a Web terminal to access your cluster to handle steps found later in this post.&lt;/p&gt;
&lt;p&gt;From the &lt;strong&gt;Containers&lt;/strong&gt; main page, click &lt;strong&gt;Launch Service Console&lt;/strong&gt; to launch the HPE Ezmeral Runtime Enterprise. Open Kubectl, which allows you to enter commands to communicate with your cluster.&lt;/p&gt;
&lt;p&gt;If Kubectl is not installed, download Kubectl and the HPE Kubectl Plugin from the &lt;strong&gt;Dashboard&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;For more information, see &lt;strong&gt;&lt;a href=&quot;https://docs.containerplatform.hpe.com/55/reference/kubernetes/tenant-project-administration/Dashboard__Kubernetes_TenantProject_Administrator.html&quot;&gt;Dashboard - Kubernetes Tenant/Project Administrator&lt;/a&gt;&lt;/strong&gt; in the HPE Ezmeral Runtime Enterprise documentation.&lt;/p&gt;
&lt;h3&gt;Step 2: Install SPIRE&lt;/h3&gt;
&lt;p&gt;Get the quickstart yaml file using &lt;strong&gt;&lt;a href=&quot;https://raw.githubusercontent.com/cxteamtrials/caas-trials-content/main/services/spire/spire-quickstart.yaml&quot;&gt;this link&lt;/a&gt;&lt;/strong&gt;, copy it into your cluster, and apply it using kubectl.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;kubectl apply -f spire-quickstart.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will install SPIRE into your cluster, along with two additional components: the SPIFFE CSI Driver and the SPIRE Kubernetes &lt;strong&gt;Controller manager&lt;/strong&gt; which facilitate the registration of workloads and establishment of federation relationships.&lt;/p&gt;
&lt;p&gt;Verify installation of SPIRE by checking if all pods are running and containers within them are up. Specifically, you should look for the agent and SPIRE server.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The number of agents depends on the number of nodes you are working with. Here, we are working with three worker nodes, so there are three agents, one per node.&lt;/p&gt;
&lt;p&gt;Use the command given below, and you will get the output as shown.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;k8s-spiffe-integ-master-7j7fh-m67q9:~ kubectl get pods -n spire
NAME                            READY   STATUS    RESTARTS       AGE
spire-agent-5tlck               3/3     Running   2 (31d ago)    31d
spire-agent-gnwbj               3/3     Running   1 (31d ago)    31d
spire-agent-mghnw               3/3     Running   2 (31d ago)    31d
spire-server-574474c7dc-42kln   2/2     Running   4 (4d1h ago)   31d
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Step 3: Install Istio&lt;/h3&gt;
&lt;h4&gt;Download the latest release:&lt;/h4&gt;
&lt;p&gt;You can download the latest release using the official Istio repository or just copy the following command (which would do the same thing for you).&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;curl -L https://istio.io/downloadIstio | sh -
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;For details, see the &lt;strong&gt;&lt;a href=&quot;https://istio.io/latest/docs/setup/getting-started/#download&quot;&gt;Istio download page&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Change into the Istio directory and add the istioctl client to your path with the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;export PATH=$PWD/bin:$PATH
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After exporting, move back out of the directory.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;cd ..
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If, at some later point, your cluster shell no longer recognizes istioctl, export the path again after changing into the Istio directory.&lt;/p&gt;
&lt;h4&gt;&lt;strong&gt;Install Istio with patches:&lt;/strong&gt;&lt;/h4&gt;
&lt;p&gt;After deploying SPIRE into your environment and verifying that all deployments are in Ready state, install Istio with custom patches for the Ingress-gateway as well as for Istio-proxy.&lt;/p&gt;
&lt;p&gt;Get the istio-spire-config patch using &lt;strong&gt;&lt;a href=&quot;https://raw.githubusercontent.com/cxteamtrials/caas-trials-content/main/services/istio/release-1.17/spire/spire-patch.yaml&quot;&gt;this link&lt;/a&gt;&lt;/strong&gt; and copy the patch into your cluster. Apply it using the following command.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;istioctl install -f istio-spire-config.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will share the spiffe-csi-driver with the Ingress Gateway and the sidecars that are going to be injected on workload pods, granting them access to the SPIRE Agent’s UNIX Domain Socket.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;h4&gt;Patching Istio-Ingress gateways&lt;/h4&gt;
&lt;p&gt;If you receive the error shown below, your ingress-gateway is not patched yet and is not being registered onto the server.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/patch-error-ingress.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;For patching, the first step is to get and apply one of SPIRE controller manager’s &lt;a href=&quot;https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/&quot;&gt;CRD (Custom Resource Definition)&lt;/a&gt; ClusterSPIFFEID. It is a cluster-wide resource used to register workloads with SPIRE. The ClusterSPIFFEID can target all workloads in the cluster or can be optionally scoped to specific pods or namespaces via label selectors.&lt;/p&gt;
&lt;p&gt;Create a ClusterSPIFFEID CRD to generate registration entries in SPIRE server for all workloads with the label &lt;strong&gt;&lt;code&gt;spiffe.io/spire-managed-identity: true&lt;/code&gt;&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Get the ClusterSPIFFEID used by us for this demo using &lt;strong&gt;&lt;a href=&quot;https://raw.githubusercontent.com/cxteamtrials/caas-trials-content/main/services/spire/clusterspiffeid-example.yaml&quot;&gt;this link&lt;/a&gt;&lt;/strong&gt;, copy that into your cluster, and apply it.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;kubectl apply -f cluster-spiffeID-crd.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; You can create your own custom clusterSPIFFEID CRD with your own match label and own selector. For now, we have created a simple CRD with one pod selector and one match label.&lt;/p&gt;
&lt;p&gt;Now, simply patch the ingress-gateway with the &lt;code&gt;spiffe.io/spire-managed-identity: true&lt;/code&gt; label.&lt;/p&gt;
&lt;p&gt;This will register your ingress-gateway pod into the server.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;kubectl patch deployment istio-ingressgateway -n istio-system -p &apos;{&quot;spec&quot;:{&quot;template&quot;:{&quot;metadata&quot;:{&quot;labels&quot;:{&quot;spiffe.io/spire-managed-identity&quot;: &quot;true&quot;}}}}}&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After patching, confirm that your ingress-gateway pod, istiod, and all their containers work.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Step 4: Deploying Sample Application&lt;/h3&gt;
&lt;p&gt;Now that SPIRE and Istio are integrated, workload identities must be issued by SPIRE.&lt;/p&gt;
&lt;p&gt;For our case, we will create a namespace “bookinfo” and will add a label &lt;strong&gt;“spiffe.io/spire-managed-identity: true”&lt;/strong&gt; to it. Then, we will create a new ClusterSPIFFEID CRD with &lt;strong&gt;namespace selector&lt;/strong&gt; with match label as “spiffe.io/spire-managed-identity: true.”&lt;/p&gt;
&lt;p&gt;When a new workload is added to this namespace, or to any other namespace that carries the label mentioned above, it will automatically be registered with the server.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;4.1&lt;/strong&gt; Create a new namespace.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;kubectl create namespace &amp;#x3C;insert-namespace-name-here&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;4.2&lt;/strong&gt; Add a label to it, using the same one that you used for the clusterSPIFFEID.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;kubectl label namespaces &amp;#x3C;namespace_name&gt; spiffe.io/spire-managed-identity=true
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;4.3&lt;/strong&gt; Enable istio-injection for this namespace so that any new pods that are created in that namespace will automatically have a sidecar added to them. You can achieve this by just adding another label in similar fashion.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;kubectl label namespace &amp;#x3C;namespace_name&gt; istio-injection=enabled --overwrite
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After all edits to your namespace, its definition should look similar to the one shown below.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If you want to, you can edit further using the following command. But take care that your resulting yaml is not invalid. You can validate your yaml using any online validator available.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;kubectl edit ns &amp;#x3C;namespace_name&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;apiVersion: v1
kind: Namespace
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {&quot;apiVersion&quot;:&quot;v1&quot;,&quot;kind&quot;:&quot;Namespace&quot;,&quot;metadata&quot;:{&quot;annotations&quot;:{},&quot;labels&quot;:{&quot;name&quot;:&quot;backend&quot;},&quot;name&quot;:&quot;bookinfo&quot;}}
  creationTimestamp: &quot;2023-03-21T06:23:38Z&quot;
  labels:
    istio-injection: enabled
    kubernetes.io/metadata.name: bookinfo
    spiffe.io/spire-managed-identity: &quot;true&quot;
  name: bookinfo
  resourceVersion: &quot;6116523&quot;
  uid: 6193a0b7-8455-46bd-a456-797ef69c045a
spec:
  finalizers:
  - kubernetes
status:
  phase: Active
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;4.4&lt;/strong&gt; Create and apply a ClusterSPIFFEID CRD with namespace selector.&lt;/p&gt;
&lt;p&gt;Copy the clusterSPIFFEID from &lt;strong&gt;&lt;a href=&quot;https://raw.githubusercontent.com/cxteamtrials/caas-trials-content/main/services/spire/clusterspiffeid-example.yaml&quot;&gt;this link&lt;/a&gt;&lt;/strong&gt; and just change the selector to a &lt;strong&gt;namespace selector&lt;/strong&gt;. Make sure the correct match label is present, as shown below.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;apiVersion: spire.spiffe.io/v1alpha1
kind: ClusterSPIFFEID
metadata:
  name: bookinfo
spec:
  spiffeIDTemplate: &quot;spiffe://{{ .TrustDomain }}/ns/{{ .PodMeta.Namespace }}/sa/{{ .PodSpec.ServiceAccountName }}&quot;
  namespaceSelector:
    matchLabels:
      spiffe.io/spire-managed-identity: &quot;true&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After editing your clusterSPIFFEID, apply it using kubectl.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;kubectl apply -f &amp;#x3C;your_clusterSPIFFEID_name&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;4.5&lt;/strong&gt;  After successfully creating the namespace and applying the CRD, deploy your application in the namespace you created. Before you deploy it, however, the workloads need the SPIFFE CSI Driver volume so that they can access the SPIRE Agent socket. To accomplish this, we can leverage the SPIRE pod annotation template:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;annotations:
            inject.istio.io/templates: &quot;sidecar,spire&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can patch it onto the workload or just add it to your deployment manifest at &lt;strong&gt;{spec:{template:{metadata:{annotations:}}}}&lt;/strong&gt; as shown below.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/sidecar-patch.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
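&lt;p&gt;If you prefer the patch route mentioned above, a command along the following lines should work; replace the deployment and namespace names with your own. This mirrors the label patch used earlier for the ingress-gateway.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;kubectl patch deployment &amp;#x3C;deployment_name&gt; -n &amp;#x3C;namespace_name&gt; -p &apos;{&quot;spec&quot;:{&quot;template&quot;:{&quot;metadata&quot;:{&quot;annotations&quot;:{&quot;inject.istio.io/templates&quot;:&quot;sidecar,spire&quot;}}}}}&apos;
&lt;/code&gt;&lt;/pre&gt;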
&lt;p&gt;You can get the sample bookinfo application manifest from &lt;strong&gt;&lt;a href=&quot;https://raw.githubusercontent.com/cxteamtrials/caas-trials-content/main/services/istio/release-1.16/samples/bookinfo/bookinfo.yaml&quot;&gt;this link&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This manifest is annotation free, so you need to add annotation to its deployments by following the steps shown above.&lt;/p&gt;
&lt;p&gt;After editing the manifest, apply it in a newly created namespace.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;kubectl apply -f bookinfo.yaml -n &amp;#x3C;namespace_name&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Verify all workloads and services you just deployed are running and up.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;kubectl get all -n &amp;#x3C;namespace_name&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You will get output as shown below if everything is working fine.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;k8s-spiffe-integ-master-7j7fh-m67q9:~ kubectl get all -n bookinfo
NAME                                 READY   STATUS    RESTARTS      AGE
pod/details-v1-f8957ccb4-7vdgw       2/2     Running   0             37d
pod/productpage-v1-cfb4bc854-5km2l   2/2     Running   0             37d
pod/ratings-v1-65cd6fbcd8-s9jnc      2/2     Running   0             37d
pod/reviews-v1-55f769fb78-czh7j      2/2     Running   0             37d
pod/reviews-v2-6b7c798cc8-wkpxg      2/2     Running   0             37d
pod/reviews-v3-695c7f59db-nzwwk      2/2     Running   2 (34d ago)   37d

NAME                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/details       ClusterIP   10.111.38.161    &amp;#x3C;none&gt;        9080/TCP   37d
service/productpage   ClusterIP   10.102.189.161   &amp;#x3C;none&gt;        9080/TCP   37d
service/ratings       ClusterIP   10.105.7.153     &amp;#x3C;none&gt;        9080/TCP   37d
service/reviews       ClusterIP   10.106.49.246    &amp;#x3C;none&gt;        9080/TCP   37d

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/details-v1       1/1     1            1           37d
deployment.apps/productpage-v1   1/1     1            1           37d
deployment.apps/ratings-v1       1/1     1            1           37d
deployment.apps/reviews-v1       1/1     1            1           37d
deployment.apps/reviews-v2       1/1     1            1           37d
deployment.apps/reviews-v3       1/1     1            1           37d

NAME                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/details-v1-f8957ccb4       1         1         1       37d
replicaset.apps/productpage-v1-cfb4bc854   1         1         1       37d
replicaset.apps/ratings-v1-65cd6fbcd8      1         1         1       37d
replicaset.apps/reviews-v1-55f769fb78      1         1         1       37d
replicaset.apps/reviews-v2-6b7c798cc8      1         1         1       37d
replicaset.apps/reviews-v3-695c7f59db      1         1         1       37d
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once everything is up, all workloads will be registered with the SPIRE server.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;4.6&lt;/strong&gt; You can verify the registration of workloads using the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;kubectl exec &amp;#x3C;spire-server_pod_name&gt; -n spire -c spire-server -- ./bin/spire-server entry show
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Verify that every workload carrying the same label as the clusterSPIFFEID CRD&apos;s match label is registered in the server.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/server-entries.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;4.7&lt;/strong&gt; Verify that the certificate issuer of workloads is SPIRE using the following commands for each workload.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;istioctl proxy-config secret &amp;#x3C;pod_name&gt; -n &amp;#x3C;namespace_name&gt; -o json | jq -r &apos;.dynamicActiveSecrets[0].secret.tlsCertificate.certificateChain.inlineBytes&apos; | base64 --decode &gt; chain.pem
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;k8s-spiffe-integ-master-7j7fh-m67q9:~ openssl x509 -in chain.pem -text | grep SPIRE
 Subject: C = US, O = SPIRE, x500UniqueIdentifier = e2f9c35b9198e1824373e874b13287d0
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You should also perform the same checks for the ingress-gateway pod in the istio-system namespace and verify that your deployed workloads and the ingress-gateway have the same issuer.&lt;/p&gt;
&lt;h3&gt;Step 5: Open the application to outside traffic&lt;/h3&gt;
&lt;p&gt;The Bookinfo application is deployed but not accessible from the outside. To make it accessible, you need to create an Istio Ingress Gateway, which maps a path to a route at the edge of your mesh.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;5.1&lt;/strong&gt; Associate this application with the Istio gateway:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml -n bookinfo
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;5.2&lt;/strong&gt; Ensure that there are no issues with the configuration:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;k8s-spiffe-integ-master-7j7fh-m67q9:~ # istioctl analyze -n bookinfo

✔ No validation issues found when analyzing namespace: bookinfo.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;5.3&lt;/strong&gt; Execute the following command to determine if your Kubernetes cluster is running in an environment that supports external load balancers:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;k8s-spiffe-integ-master-7j7fh-m67q9:~ kubectl get svc istio-ingressgateway -n istio-system
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                      AGE
istio-ingressgateway   LoadBalancer   10.105.191.32   172.16.17.5   15021:30189/TCP,80:30392/TCP,443:30566/TCP   32d
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If the EXTERNAL-IP value is set, your environment has an external load balancer. If not, set up an external load balancer first, then continue with the next steps.&lt;/p&gt;
&lt;p&gt;For this cluster, we are using MetalLB.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;5.4&lt;/strong&gt; Download and install Kiali dashboard and Prometheus.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Install Kiali:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://kiali.io/&quot;&gt;Kiali&lt;/a&gt;&lt;/strong&gt; is an observability console for Istio with service mesh configuration and validation capabilities. It helps you understand the structure and health of your service mesh by monitoring traffic.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.17/samples/addons/kiali.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Install Prometheus:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://prometheus.io/&quot;&gt;Prometheus&lt;/a&gt;&lt;/strong&gt; is an open-source monitoring system and time series database. You can use Prometheus with Istio to record metrics that track the health of Istio and of applications within the service mesh.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.17/samples/addons/prometheus.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;5.5&lt;/strong&gt; After setting up the ingress gateway and bookinfo gateway, you will view the dashboard later in this post. To make sure you can reach it, you need to adjust your system proxy settings.&lt;/p&gt;
&lt;p&gt;Go to &lt;strong&gt;Settings &gt; Network &gt; Proxy status&lt;/strong&gt; and turn &lt;strong&gt;Use a proxy server&lt;/strong&gt; on. In the exceptions field, add the external IP addresses of the kiali and istio-ingressgateway services.&lt;/p&gt;
&lt;p&gt;You can get the IPs of these services with the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;k8s-spiffe-integ-master-7j7fh-m67q9:~ kubectl get svc -n istio-system
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                      AGE
istio-ingressgateway   LoadBalancer   10.105.191.32   172.16.17.5   15021:30189/TCP,80:30392/TCP,443:30566/TCP   32d
istiod                 ClusterIP      10.101.27.65    &amp;#x3C;none&gt;        15010/TCP,15012/TCP,443/TCP,15014/TCP        32d
kiali                  LoadBalancer   10.103.14.197   172.16.17.6   20001:32116/TCP,9090:31950/TCP               32d
prometheus             ClusterIP      10.98.101.102   &amp;#x3C;none&gt;        9090/TCP                                     32d
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/manual_proxy.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Format:&lt;/strong&gt; &lt;code&gt;http://{external ip};&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Your kiali service might be of type ClusterIP, so to get an external IP for this service, you first need to change the service type to LoadBalancer. You can do this interactively as shown below, or with the one-line patch sketched after these steps.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Use the following command to edit the service, then edit the service type.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;kubectl edit svc kiali -n istio-system
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Edit the service type &lt;strong&gt;{spec: {type:LoadBalancer}}&lt;/strong&gt; as shown below&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/service_edit.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
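&lt;p&gt;Alternatively, instead of editing the service interactively, you can apply the same change with a one-line patch that sets the service type to LoadBalancer:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;kubectl patch svc kiali -n istio-system -p &apos;{&quot;spec&quot;:{&quot;type&quot;:&quot;LoadBalancer&quot;}}&apos;
&lt;/code&gt;&lt;/pre&gt;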
&lt;p&gt;&lt;strong&gt;5﻿.6&lt;/strong&gt;  Set the ingress IP and ports:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;export INGRESS_NAME=istio-ingressgateway

export INGRESS_NS=istio-system

export INGRESS_HOST=$(kubectl -n &quot;$INGRESS_NS&quot; get service &quot;$INGRESS_NAME&quot; -o jsonpath=&apos;{.status.loadBalancer.ingress[0].ip}&apos;)

export INGRESS_PORT=$(kubectl -n &quot;$INGRESS_NS&quot; get service &quot;$INGRESS_NAME&quot; -o jsonpath=&apos;{.spec.ports[?(@.name==&quot;http2&quot;)].port}&apos;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;5.7&lt;/strong&gt;  Export and Set GATEWAY_URL:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;k8s-spiffe-integ-master-7j7fh-m67q9:~ echo &quot;$GATEWAY_URL&quot;
172.16.17.5:80
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Curl the product page through the gateway URL with the following command.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;k8s-spiffe-integ-master-7j7fh-m67q9:~ curl -v  http://$GATEWAY_URL/productpage
* Uses proxy env variable no_proxy == &apos;localhost,127.0.0.1,10.96.0.1,172.16.5.41,172.16.5.42,172.16.5.43,172.16.5.44,172.16.5.45,172.16.5.46,172.16.5.40,glhc-caas.glhc-hpe.local,.glhc-hpe.local,glhc-caas.customer.hpe.net,172.16.17.20,172.16.17.21,172.16.17.22,172.16.5.47,gl-pulpnode.glhc-hpe.local,gl-pulpnode,10.96.0.1,10.192.0.0/12,10.96.0.0/12,.svc,.cluster.local,.default.svc,.customer.hpe.net,172.16.17.23,172.16.17.30,gl-cp-gw-node2.glhc-hpe.local,gl-cp-gw-node1.glhc-hpe.local,172.16.17.50&apos;
* Uses proxy env variable http_proxy == &apos;http://172.16.0.250:8080&apos;
*   Trying 172.16.0.250:8080...
* TCP_NODELAY set
* Connected to 172.16.0.250 (172.16.0.250) port 8080 (#0)
&gt; GET http://172.16.17.5:80/productpage HTTP/1.1
&gt; Host: 172.16.17.5
&gt; User-Agent: curl/7.66.0
&gt; Accept: */*
&gt; Proxy-Connection: Keep-Alive
&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can generate traffic on the product page simply by browsing to the HTTP URL shown above.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Before opening this page and Kiali in the next step, ensure that you have followed step 5.5 properly.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;5.9&lt;/strong&gt; &lt;strong&gt;Kiali Dashboard&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Generate traffic on the product page and observe the graphs on the Kiali dashboard. Open the Kiali dashboard in your browser by pointing it at the external IP and port shown above.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;http://&amp;#x3C;kiali_external_ip&gt;:&amp;#x3C;port&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once in the Kiali dashboard, generate traffic on the product page and, at the same time, view and analyze it in Kiali using the various graphs and visualization options.&lt;/p&gt;
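&lt;p&gt;One simple way to keep a steady stream of traffic flowing while you watch the dashboard is a small curl loop like the one below (illustrative; any repeated requests to the product page will do):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;while true; do curl -s -o /dev/null &quot;http://$GATEWAY_URL/productpage&quot;; sleep 1; done
&lt;/code&gt;&lt;/pre&gt;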
&lt;p&gt;&lt;strong&gt;App Graph:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/app_graph.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Service Graph:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/service_graph-1.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/service_graph-22.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The service graphs above show the communication between services; the lock icon indicates that the traffic is secured with the &lt;strong&gt;mTLS protocol&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;We hope that this blog has helped you in integrating Istio and SPIRE from scratch, getting SPIRE issued identities for your sample application workloads, and setting up Kiali on your cluster for better visualisation of your service mesh.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Integrating Python 3 with FIPS enabled OpenSSL 3.1 on Microsoft Windows]]></title><description><![CDATA[Introduction Federal Information Processing Standard (FIPS) are a set of encryption algorithms and are mandatory in all the computer systems…]]></description><link>https://developer.hpe.com/integrating-python-3-with-fips-enabled-openssl-3-1/</link><guid isPermaLink="false">https://developer.hpe.com/integrating-python-3-with-fips-enabled-openssl-3-1/</guid><pubDate>Sun, 23 Apr 2023 13:41:41 GMT</pubDate><content:encoded>&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;The Federal Information Processing Standards (FIPS) define a set of approved cryptographic algorithms and are mandatory in all computer systems and software used by non-military American government agencies, government contractors, and the vendors who work with those agencies. Whenever new software is developed, it needs to be FIPS-compliant. Thus, there is a need to enable Python 3 with FIPS, but the default Python 3 binary package comes without it, as shown in the screenshot below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/openssl-before.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;This blog explains, step by step, how to integrate Python 3 with FIPS-enabled OpenSSL 3.1 on Microsoft Windows so that any new software built on top of it is FIPS-compliant.&lt;/p&gt;
&lt;h2&gt;Steps for Microsoft Windows&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Download the OpenSSL source from &lt;code&gt;http://www.openssl.org&lt;/code&gt; and the Python 3 source from &lt;code&gt;http://www.python.org&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Install the NASM and Perl software and add them to PATH.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; Install these 2 Perl modules using the commands below.&lt;/p&gt;
&lt;blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;cpan -i Text::Template
cpan -i Test::More
&lt;/code&gt;&lt;/pre&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt; Create a directory structure like what&apos;s shown below.&lt;/p&gt;
&lt;blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;mkdir &quot;C:\SSLout\SSL&quot;
mkdir &quot;C:\SSLout\DLL\x64\Release&quot;
mkdir &quot;C:\SSLout\Lib&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;strong&gt;Step 5:&lt;/strong&gt; Unzip the &lt;code&gt;openssl-3.1.0.tar.gz&lt;/code&gt; file into &lt;code&gt;C:\work\openssl-3.1.0&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 6:&lt;/strong&gt; Build the OpenSSL module using the &lt;code&gt;VC++ 2015 x64 Native Tools&lt;/code&gt; Command Prompt.&lt;/p&gt;
&lt;blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;cd openssl-3.1.0
Perl Configure VC-WIN64A --prefix=C:\SSLout\DLL\x64\Release -openssldir=c:\SSLout\SSL enable-fips
nmake
nmake install_sw
nmake install_ssldirs
nmake install_docs
nmake install_fips
&lt;/code&gt;&lt;/pre&gt;
&lt;/blockquote&gt;
&lt;p&gt;The OpenSSL 3.1 binaries are generated in &lt;code&gt;C:\SSLout\DLL\x64\Release\bin&lt;/code&gt;, as shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/openssl_directory_structure.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 7:&lt;/strong&gt; Unzip the &lt;code&gt;Python-3.11.2.tgz&lt;/code&gt; to &lt;code&gt;C:\work\Python3.11.2&lt;/code&gt; and change into the PCBuild directory:&lt;/p&gt;
&lt;blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;cd C:\work\Python3.11.2\PCBuild
&lt;/code&gt;&lt;/pre&gt;
&lt;/blockquote&gt;
&lt;p&gt;Run the &lt;code&gt;get_externals.bat&lt;/code&gt; file.  This will fetch the dependencies from the internet into the &lt;code&gt;externals&lt;/code&gt; directory.&lt;/p&gt;
&lt;blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;get_externals.bat
&lt;/code&gt;&lt;/pre&gt;
&lt;/blockquote&gt;
&lt;p&gt;Now create the &lt;code&gt;openssl-bin-3.1.0&lt;/code&gt; directory under the &lt;code&gt;externals&lt;/code&gt; directory as shown below:&lt;/p&gt;
&lt;blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;cd externals
mkdir openssl-bin-3.1.0
&lt;/code&gt;&lt;/pre&gt;
&lt;/blockquote&gt;
&lt;p&gt;The directory structure will look as shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/openssl_directory_structure2.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 8:&lt;/strong&gt; Under the &lt;code&gt;openssl-bin-3.1.0&lt;/code&gt; directory, create an &lt;code&gt;amd64&lt;/code&gt; directory (matching the CPU architecture) and copy the files into it as shown below:&lt;/p&gt;
&lt;blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;copy C:\SSLout\DLL\x64\Release\lib\*.lib  C:\work\Python3.11.2\PCBuild\externals\openssl-bin-3.1.0\amd64
copy C:\SSLout\DLL\x64\Release\bin\*.* C:\work\Python3.11.2\PCBuild\externals\openssl-bin-3.1.0\amd64
copy C:\SSLout\DLL\x64\Release\include C:\work\Python3.11.2\PCBuild\externals\openssl-bin-3.1.0\amd64
copy C:\SSLout\DLL\x64\Release\include\openssl\applink.c  C:\work\Python3.11.2\PCBuild\externals\openssl-bin-3.1.0\amd64\include
copy C:\SSLout\DLL\x64\Release\lib\ossl-modules\*.* C:\work\Python3.11.2\PCBuild\externals\openssl-bin-3.1.0\amd64
&lt;/code&gt;&lt;/pre&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;strong&gt;Step 9:&lt;/strong&gt; Modify &lt;code&gt;PCbuild/openssl.props&lt;/code&gt; as shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/openssl_settings.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 10:&lt;/strong&gt; Open &lt;code&gt;PCbuild/python.props&lt;/code&gt; and change the entries as shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/openssl_settings2.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 11:&lt;/strong&gt; Open the Python Solution under &lt;code&gt;PCbuild/pcbuild.sln&lt;/code&gt; in VS 2015.
Change the Linker settings of &lt;code&gt;_hashlib&lt;/code&gt; and &lt;code&gt;_ssl&lt;/code&gt; projects as shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/openssl_vs_settings.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 12:&lt;/strong&gt; Now, build &lt;code&gt;_hashlib.pyd&lt;/code&gt; and &lt;code&gt;_ssl.pyd&lt;/code&gt; in VS 2015.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 13:&lt;/strong&gt; Copy these built .pyd files from &lt;code&gt;\PCbuild\amd64\&lt;/code&gt; to the Python binary installation directory &lt;code&gt;C:\python311\DLLs&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 14:&lt;/strong&gt; Start the installed Python binary and use the commands shown below to check the OpenSSL 3.1 version:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/openssl-after.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
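&lt;p&gt;The commands captured in the screenshot above boil down to starting the freshly built interpreter and querying the ssl module. A typical session looks roughly like the following; the exact version string depends on your build:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;C:\Python311&gt; python
&gt;&gt;&gt; import ssl
&gt;&gt;&gt; ssl.OPENSSL_VERSION
&apos;OpenSSL 3.1.0 14 Mar 2023&apos;
&lt;/code&gt;&lt;/pre&gt;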
&lt;p&gt;&lt;strong&gt;Step 15:&lt;/strong&gt; Now, to run Python 3 with FIPS-enabled OpenSSL 3.1, create the &lt;code&gt;openssl.cnf&lt;/code&gt; and &lt;code&gt;fipsmodule.cnf&lt;/code&gt; files with the content below in the &lt;code&gt;C:\Python311&lt;/code&gt; directory. With this config file in place, FIPS will be enabled by default.&lt;/p&gt;
&lt;blockquote&gt;
&lt;h4&gt;openssl.cnf:&lt;/h4&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;config_diagnostics = 1
openssl_conf = openssl_init
.include .\fipsmodule.cnf

[openssl_init]
providers = provider_sect
alg_section = algorithm_sect

[provider_sect]
fips = fips_sect
legacy = legacy_sect
base = base_sect
default = default_sect

[base_sect]
activate = 1

[legacy_sect]
activate = 1

[default_sect]
activate = 1

[algorithm_sect]
default_properties = fips=yes
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;fipsmodule.cnf:&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;[fips_sect]
activate = 1
conditional-errors = 1
security-checks = 1
module-mac = D4:64:00:E3:CE:34:EE:CE:58:32:12:08:21:6D:64:FD:E3:A6:D4:F0:E6:38:3D:2C:0C:40:1B:50:C8:8F:39:A3
&lt;/code&gt;&lt;/pre&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;strong&gt;Step 16:&lt;/strong&gt; Now, open the &lt;code&gt;command&lt;/code&gt; window as an &lt;code&gt;Administrator&lt;/code&gt; and execute the following command as shown below:&lt;/p&gt;
&lt;blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;set OPENSSL_CONF=C:\Python311\openssl.cnf
&lt;/code&gt;&lt;/pre&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;strong&gt;Step 17:&lt;/strong&gt; To verify that Python is enabled with FIPS, both the client and the server need to be in FIPS mode. Enable FIPS on the client Windows OS using these two steps (a command-line equivalent for the registry change is sketched after the list):&lt;/p&gt;
&lt;blockquote&gt;
&lt;ol&gt;
&lt;li&gt;Open &lt;code&gt;gpedit.msc&lt;/code&gt; on run menu and navigate to &lt;code&gt;Computer Configuration\Windows Settings\Security Settings\Local Policies\Security Options&lt;/code&gt; and enable the &lt;code&gt;System cryptography: Use FIPS compliant algorithms for encryption, hashing, and signing&lt;/code&gt; setting.&lt;/li&gt;
&lt;li&gt;Open &lt;code&gt;regedit&lt;/code&gt; on run menu and go to &lt;code&gt;HKLM\System\CurrentControlSet\Control\Lsa\FipsAlgorithmPolicy\Enabled&lt;/code&gt; and set Enabled to 1.&lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;
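&lt;p&gt;For the registry change in the second step, a command-line equivalent (run from an elevated command prompt) is the following; it produces the same result as the GUI steps above:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;reg add &quot;HKLM\System\CurrentControlSet\Control\Lsa\FipsAlgorithmPolicy&quot; /v Enabled /t REG_DWORD /d 1 /f
&lt;/code&gt;&lt;/pre&gt;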
&lt;p&gt;&lt;strong&gt;Step 18:&lt;/strong&gt; To verify Python 3 is enabled with FIPS, run the following commands as shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/openssl_fips_algo.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
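&lt;p&gt;As a quick, illustrative spot-check (the exact exception text can vary between builds), a non-FIPS algorithm such as MD5 should now be rejected, while an approved one such as SHA-256 keeps working:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;&gt;&gt;&gt; import hashlib
&gt;&gt;&gt; hashlib.sha256(b&quot;hello&quot;).hexdigest()
&apos;2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824&apos;
&gt;&gt;&gt; hashlib.md5(b&quot;hello&quot;)
Traceback (most recent call last):
  ...
ValueError: unsupported hash type md5
&lt;/code&gt;&lt;/pre&gt;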
&lt;p&gt;Note that the list of available crypto algorithms is larger than the set of algorithms guaranteed to be FIPS-approved. All of the available algorithms can still be used if the server that the client SSL connects to is also configured in FIPS mode.&lt;/p&gt;
&lt;p&gt;A living example of such an OpenSSL server is HPE iLO.&lt;/p&gt;
&lt;p&gt;Voila! Python 3.11 is now integrated with FIPS-enabled OpenSSL 3.1 on the Windows platform. This Python 3 installation can be used to develop applications that are OpenSSL 3 and FIPS enabled!&lt;/p&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;In this blog, I have covered the following steps regarding integrating Python 3 with FIPS enabled OpenSSL 3.1:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Download the required packages.&lt;/li&gt;
&lt;li&gt;Compile OpenSSL 3 along with FIPS enabled.&lt;/li&gt;
&lt;li&gt;Make the required changes and compile the Python along with the OpenSSL 3.1 binaries copied as an external dependency.&lt;/li&gt;
&lt;li&gt;Copy the newly generated binaries to the Python 3 installation directory.&lt;/li&gt;
&lt;li&gt;Test the OpenSSL 3 version in Python 3 and also verify if it is FIPS enabled.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I hope this blog is useful to the entire developer community!! Make sure you check out our other blog posts on &lt;a href=&quot;https://developer.hpe.com/blog/&quot;&gt;HPE DEV&lt;/a&gt; for more useful tutorials.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE community hackathon builds STEM excitement for middle school students]]></title><description><![CDATA[A Hewlett Packard Enterprise (HPE) employee resource group called S.W.A.G. (Supporting Women’s Aspirations Group) is hosting a hackathon for…]]></description><link>https://developer.hpe.com/hpe-community-hackathon-builds-stem-excitement-for-middle-school-students/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-community-hackathon-builds-stem-excitement-for-middle-school-students/</guid><pubDate>Fri, 07 Apr 2023 16:39:31 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;/img/hpe20160525019_800_0_72_rgb.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;A Hewlett Packard Enterprise (HPE) employee resource group called S.W.A.G. (Supporting Women’s Aspirations Group) is hosting a hackathon for middle school students in the San Jose area on May 4th, 2023. Working in partnership with &lt;a href=&quot;https://www.curatedpathways.org/&quot;&gt;Curated Pathways to Innovation™&lt;/a&gt; (CPI), they are expected to deliver an exceptional experience to around 100 students. The day will be filled with fun activities meant to raise students’ awareness of the latest advancements in technology and their potential in playing a part in its development. Many volunteers will be helping with the planning and execution of this hackathon, and a fun time is expected to be had by all.&lt;/p&gt;
&lt;p&gt;As a part of the hackathon, students will be required to address a specific community issue and present their solution to the problem. Children will collaborate with one another, looking at community issues such as homelessness, domestic violence and teen depression/suicide and find ways to leverage technology to address them. They will do so using a hackathon platform developed by Curated Pathways to Innovation in conjunction with HPE. This platform forms the foundation of a STEM (Science, Technology, Engineering and Mathematics) curriculum followed by the schools participating in this event. Their students already use this platform in their school, so they’re very familiar with how to work with it.&lt;/p&gt;
&lt;h4&gt;A day filled with activities&lt;/h4&gt;
&lt;p&gt;Upon arrival at the facility, students will be greeted by S.W.A.G. team members along with HPE executives, Manju Abraham, Praveena Patchipulusu, and Lloyd Santy. They’ll then be given a brief overview of the process, including the schedule, logistics, and judging criteria. Once complete, the students will be divided into teams to begin work on their chosen topic. Between 10 and 12pm students will rotate between taking a tour of the HPE Customer Innovation Center (CIC) and working on their community problem, breaking it down into solvable pieces, coming up with solutions, and preparing their presentations.&lt;/p&gt;
&lt;p&gt;As a part of their tour of the HPE CIC, students will be introduced to many of the amazing products HPE produces and how they probably already experience these products as part of their everyday activities – from bringing real-time location context to mobile apps in stores to immersive and connected experiences at theme parks.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/innovation-center-800-px.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;A special focal point of the CIC tour will be at a kiosk housing a model of a Galactic Starcruiser where the students will learn about HPE&apos;s partnership with Disney to bring about one of the most immersive experiences ever at Star Wars: Galaxy&apos;s Edge at the Walt Disney World Resort. Considering the date of this event, it certainly seems to be an appropriate tour stop. We can already imagine students calling out greetings of &quot;May the 4th be with you&quot;. In the CIC, students will also learn about our work with auto manufacturers to bring greater safety to vehicles and how supercomputers are used to help unlock unlimited supplies of clean energy, potentially sparking some ideas for their projects. (Learn more about the &lt;a href=&quot;https://www.hpe.com/us/en/about/virtual-customer-innovation-center.html&quot;&gt;HPE Customer Innovation Center&lt;/a&gt;.)&lt;/p&gt;
&lt;p&gt;From 12-1:30pm the teams will present their solutions, have lunch, and participate in some fun activities that include games and a movie. Afterwards, the finalists will be selected and students will move to the outdoor lawn for the Award Ceremony.&lt;/p&gt;
&lt;p&gt;Hackathon organizers hope that the event will help students recognize the importance of giving back to your community as well as the fun involved in technology development. And perhaps, given a memorable experience, they may even consider working for HPE someday.&lt;/p&gt;
&lt;h4&gt;Meet the organizers and the technology behind it all&lt;/h4&gt;
&lt;p&gt;S.W.A.G. (Supporting Women’s Aspirations Group) is a group of HPE employees who volunteer to make a difference in the lives of women in technology. HPE employee resource groups, like S.W.A.G., contribute to making HPE an employer of choice by supporting the company’s inclusion, equity, and diversity goals of hiring and supporting the best talent, providing a workplace that values the unique contributions of its people, and fostering an inclusive mindset that drives innovation, building a strong reputation in the marketplace. For HPE employees interested in participating as a volunteer for this year’s event, please reach out to the S.W.A.G. core team at &lt;a href=&quot;mailto:swag-core@hpe.com&quot;&gt;swag-core@hpe.com&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Curated Pathways to Innovation is a game-changing model that drives an impactful approach to engaging students in STEM education. It was launched as a partnership between YWCA Silicon Valley, Hewlett Packard Enterprise, Santa Clara University and Purdue University as an IT solution developed to help young women and underrepresented minorities navigate their educational journey, with an emphasis on computing careers. A web-based app, Curated Pathways acts as a virtual guidance counselor and uses gamification to engage with students.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Open Sourcing Workshops-on-Demand part 3: Understanding the Backend]]></title><description><![CDATA[I﻿n the previous article of this series, I explained how to deploy the backend server of our Workshops-on-Demand infrastructure. In this…]]></description><link>https://developer.hpe.com/open-sourcing-workshops-on-demand-part3-understanding-the-backend/</link><guid isPermaLink="false">https://developer.hpe.com/open-sourcing-workshops-on-demand-part3-understanding-the-backend/</guid><pubDate>Wed, 05 Apr 2023 09:29:49 GMT</pubDate><content:encoded>&lt;p&gt;I﻿n the previous &lt;a href=&quot;https://developer.hpe.com/blog/open-sourcing-workshops-on-demand-part2-deploying-the-backend/&quot;&gt;article&lt;/a&gt; of this series, I explained how to deploy the backend server of our Workshops-on-Demand infrastructure.&lt;/p&gt;
&lt;p&gt;In this article, I will dig into details on the backend server. I will cover the inner workings of the registration process, explaining how a workshop is deployed on the backend server. Even though it takes only a few minutes for the backend server to deploy a workshop, there are many processes taking place in the background, which I will cover here.&lt;/p&gt;
&lt;p&gt;As a reminder, here is a diagram showing the different parts of the Workshops-on-Demand infrastructure. In this article, I will once again focus on the backend server side and, more precisely, on the JupyterHub server, where all the automation takes place.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/wod-blogserie3-archi3.png&quot; alt=&quot;&quot; title=&quot;Workshops-on-Demand Architecture&quot;&gt;&lt;/p&gt;
&lt;h4&gt;Backend server / workshop deployment lifecycle&lt;/h4&gt;
&lt;p&gt;The following picture depicts what happens on the backend server when a participant registers for a workshop. If you remember from the first article, upon registration the frontend sends instructions to the backend server through a procmail API call so the latter can proceed with the workshop preparation and deployment. Once these tasks are completed, it provides the API-DB server with the relevant information.&lt;/p&gt;
&lt;p&gt;Let&apos;s now look in detail at what is really happening on the backend server&apos;s side:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/wod-blogserie3-create.png&quot; alt=&quot;&quot; title=&quot;backend server CREATE workflow&quot;&gt;&lt;/p&gt;
&lt;p&gt;0- The procmail API: This is a mail parsing process allowing the backend server to retrieve the relevant information in order to perform appropriate actions. As with any API, it uses verbs to perform actions. In our case, we leverage &lt;strong&gt;CREATE&lt;/strong&gt;, &lt;strong&gt;CLEANUP&lt;/strong&gt;, &lt;strong&gt;RESET&lt;/strong&gt; and &lt;strong&gt;PURGE&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;If you need more info on procmail usage, check this &lt;a href=&quot;https://wiki.archlinux.org/title/Procmail&quot;&gt;page&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Take a look at the following template of the &lt;code&gt;.procmailrc&lt;/code&gt; file that will be expanded at setup time.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;MAILDIR=$HOME/.mail      # You&apos;d better make sure it exists
DEFAULT=$MAILDIR/mbox
LOGFILE=$MAILDIR/from

:0b
#* ^From.*{{ WODSENDER }}.*
# \/ defines what will be matched in $MATCH
* ^Subject: *CREATE \/[1-9]+.*
| {{ SCRIPTDIR }}/procmail-action.sh CREATE $MATCH

:0b
#* ^From.*{{ WODSENDER }}.*
# \/ defines what will be matched in $MATCH
* ^Subject: *CLEANUP \/[1-9]+.*
| {{ SCRIPTDIR }}/procmail-action.sh CLEANUP $MATCH

:0b
#* ^From.*{{ WODSENDER }}.*
# \/ defines what will be matched in $MATCH
* ^Subject: *RESET \/[1-9]+.*
| {{ SCRIPTDIR }}/procmail-action.sh RESET $MATCH

:0b
#* ^From.*{{ WODSENDER }}.*
# \/ defines what will be matched in $MATCH
* ^Subject: *PURGE student\/[1-9]+.*
| {{ SCRIPTDIR }}/procmail-action.sh PURGE $MATCH
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;From:&lt;/code&gt; header is important, as &lt;code&gt;.procmailrc&lt;/code&gt; checks that the sender is the configured one from the frontend server. During the install process, the &lt;code&gt;WODSENDER&lt;/code&gt; parameter is set to refer to this sender. Mail from any sender other than the configured one is not processed.&lt;/p&gt;
&lt;p&gt;This API is actually based on a script, &lt;code&gt;procmail-action.sh&lt;/code&gt;. This script defines the different actions linked to the verbs passed through the API calls via &lt;code&gt;.procmailrc&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Let&apos;s start with a &lt;strong&gt;CREATE&lt;/strong&gt; scenario by looking at the very first lines of the &lt;code&gt;procmail&lt;/code&gt; log file.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;From xyz@hpe.com  Wed Mar  1 15:10:41 2023
Subject: CREATE 401 825 frederic.passeron@hpe.com
Folder: /home/wodadmin/wod-backend/scripts/procmail-action.sh CREATE       14
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In &lt;code&gt;Subject:&lt;/code&gt;, look for the API verb &lt;strong&gt;CREATE&lt;/strong&gt; followed by the &lt;strong&gt;student id&lt;/strong&gt;, the &lt;strong&gt;participant id&lt;/strong&gt;, and finally the registered &lt;strong&gt;participant email&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Here are the values respectively:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;student id: 401&lt;/li&gt;
&lt;li&gt;participant id: 825&lt;/li&gt;
&lt;li&gt;participant email: &lt;a href=&quot;mailto:frederic.passeron@hpe.com&quot;&gt;frederic.passeron@hpe.com&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
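&lt;p&gt;To illustrate how these three values reach the backend automation, here is a minimal sketch of the argument handling; the variable names (&lt;code&gt;ACTION&lt;/code&gt;, &lt;code&gt;wodid&lt;/code&gt;, &lt;code&gt;email&lt;/code&gt;) are assumptions made for the example, not necessarily those of the real &lt;code&gt;procmail-action.sh&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;# Purely illustrative: how procmail-action.sh could read the arguments passed by .procmailrc
# (variable names are hypothetical; the real script may differ)
ACTION=$1        # API verb: CREATE, CLEANUP, RESET or PURGE
stdid=$2         # student id, e.g. 401
wodid=$3         # participant id, e.g. 825
email=$4         # registered participant email
echo &quot;$ACTION requested for student$stdid (participant $wodid, $email)&quot;
&lt;/code&gt;&lt;/pre&gt;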
&lt;p&gt;In order to work properly, &lt;code&gt;procmail-action.sh&lt;/code&gt; needs to source 3 files:&lt;/p&gt;
&lt;p&gt;1- &lt;code&gt;wod.sh&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;2- &lt;code&gt;random.sh&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;3- &lt;code&gt;functions.sh&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;wod.sh&lt;/code&gt; sets a large number of variables. This script is generated at install time, as it leverages variables defined at setup time.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;# This is the wod.sh script, generated at install
#
# Name of the admin user
export WODUSER=wodadmin

# Name of the WoD machine type (backend, api-db, frontend, appliance)
export WODTYPE=backend

# This main dir is computed and is the backend main dir
export WODBEDIR=/home/wodadmin/wod-backend

# BACKEND PART
# The backend dir has some fixed subdirs
# wod-backend (WODBEDIR)
#    |---------- ansible (ANSIBLEDIR)
#    |---------- scripts (SCRIPTDIR defined in all.yml not here to allow overloading)
#    |---------- sys (SYSDIR)
#    |---------- install
#    |---------- conf
#    |---------- skel
#
export ANSIBLEDIR=$WODBEDIR/ansible
export SYSDIR=$WODBEDIR/sys

# PRIVATE PART
# These 3 dirs have fixed names by default that you can change in this file
# they are placed as sister dirs wrt WODBEDIR
# This is the predefined structure for a private repo
# wod-private (WODPRIVDIR)
#    |---------- ansible (ANSIBLEPRIVDIR)
#    |---------- notebooks (WODPRIVNOBO)
#    |---------- scripts (SCRIPTPRIVDIR)
#
PWODBEDIR=`dirname $WODBEDIR`
export WODPRIVDIR=$PWODBEDIR/wod-private
export ANSIBLEPRIVDIR=$WODPRIVDIR/ansible
export SCRIPTPRIVDIR=$WODPRIVDIR/scripts
export SYSPRIVDIR=$WODPRIVDIR/sys
export WODPRIVNOBO=$WODPRIVDIR/notebooks
WODPRIVINV=&quot;&quot;
# Manages private inventory if any
if [ -f $WODPRIVDIR/ansible/inventory ]; then
        WODPRIVINV=&quot;-i $WODPRIVDIR/ansible/inventory&quot;
        export WODPRIVINV
fi

# API-DB PART
export WODAPIDBDIR=$PWODBEDIR/wod-api-db

# FRONTEND PART
export WODFEDIR=$PWODBEDIR/wod-frontend

# These dirs are also fixed by default and can be changed as needed
export WODNOBO=$PWODBEDIR/wod-notebooks
export STUDDIR=/student
#
export ANSPLAYOPT=&quot;-e PBKDIR=staging -e WODUSER=wodadmin -e WODBEDIR=/home/wodadmin/wod-backend -e WODNOBO=/home/wodadmin/wod-notebooks -e WODPRIVNOBO=/home/wodadmin/wod-private/notebooks -e WODPRIVDIR=/home/wodadmin/wod-private -e WODAPIDBDIR=/home/wodadmin/wod-api-db -e WODFEDIR=/home/wodadmin/wod-frontend -e STUDDIR=/student -e ANSIBLEDIR=/home/wodadmin/wod-backend/ansible -e ANSIBLEPRIVDIR=/home/wodadmin/wod-private/ansible -e SCRIPTPRIVDIR=/home/wodadmin/wod-private/scripts -e SYSDIR=/home/wodadmin/wod-backend/sys -e SYSPRIVDIR=/home/wodadmin/wod-private/sys&quot;
export ANSPRIVOPT=&quot; -e @/home/wodadmin/wod-private/ansible/group_vars/all.yml -e @/home/wodadmin/wod-private/ansible/group_vars/staging&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;code&gt;random.sh&lt;/code&gt; exports the randomly generated password.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;functions.sh&lt;/code&gt; is a library of shell functions used by many scripts, among them &lt;code&gt;procmail-action.sh&lt;/code&gt;. Details are shown below.&lt;/p&gt;
&lt;p&gt;4- &lt;code&gt;procmail-action.sh&lt;/code&gt; calls the necessary functions and scripts to perform the &lt;strong&gt;CREATE&lt;/strong&gt; operation.&lt;/p&gt;
&lt;p&gt;5- &lt;code&gt;get_session_token()&lt;/code&gt; This function retrieves the necessary token to make an API call to the api-db server.&lt;/p&gt;
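&lt;p&gt;As a rough illustration only, such a token retrieval could look like the following curl call. The endpoint path, the &lt;code&gt;WODAPIDBURL&lt;/code&gt; and &lt;code&gt;WODAPIDBUSERPWD&lt;/code&gt; variables, and the JSON field name are assumptions made for the example, not the actual api-db API.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;# Illustrative sketch of a session token retrieval (endpoint, variables and field name are assumed)
TOKEN=$(curl -s -u &quot;$WODUSER:$WODAPIDBUSERPWD&quot; -X POST &quot;$WODAPIDBURL/api/v1/login&quot; | jq -r &apos;.token&apos;)
&lt;/code&gt;&lt;/pre&gt;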
&lt;p&gt;6- &lt;code&gt;get_workshop_name()&lt;/code&gt; This function extracts the workshop name from the mail body, for example &lt;strong&gt;WKSHP-API101&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;7- &lt;code&gt;get_workshop_id()&lt;/code&gt; From the workshop name, this function gets the workshop&apos;s ID from the api-db server.&lt;/p&gt;
&lt;p&gt;This ID will be used later to retrieve some of the workshop&apos;s specifics through additional API calls to the api-db server (a sketch follows the list below):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Does the workshop require the use of the student password as a variable?&lt;/li&gt;
&lt;li&gt;Does the workshop require LDAP authentication?&lt;/li&gt;
&lt;li&gt;D﻿oes the workshop require a compiled script?&lt;/li&gt;
&lt;/ul&gt;
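&lt;p&gt;Here is that sketch; the URL paths and JSON field names are assumptions made for the example, not the actual api-db API, and &lt;code&gt;$TOKEN&lt;/code&gt; is the token retrieved earlier.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;# Illustrative sketch: look up the workshop id by name, then read its flags
# ($TOKEN comes from get_session_token; paths and field names are assumed)
wid=$(curl -s -H &quot;Authorization: Bearer $TOKEN&quot; &quot;$WODAPIDBURL/api/v1/workshops?name=$ws&quot; | jq -r &apos;.[0].id&apos;)
ldap=$(curl -s -H &quot;Authorization: Bearer $TOKEN&quot; &quot;$WODAPIDBURL/api/v1/workshops/$wid&quot; | jq -r &apos;.ldap&apos;)
compile=$(curl -s -H &quot;Authorization: Bearer $TOKEN&quot; &quot;$WODAPIDBURL/api/v1/workshops/$wid&quot; | jq -r &apos;.compile&apos;)
&lt;/code&gt;&lt;/pre&gt;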
&lt;p&gt;8- &lt;code&gt;teststdid()&lt;/code&gt; This function checks that the student ID provided by the procmail API is valid: for each workshop, a dedicated student range is allocated, and this function exits when the student ID is not in the correct range.&lt;/p&gt;
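&lt;p&gt;A minimal sketch of such a range check could look like this; &lt;code&gt;get_range_min&lt;/code&gt; appears later in the &lt;code&gt;create-WKSHP-ML101.sh&lt;/code&gt; script, while &lt;code&gt;get_range_max&lt;/code&gt; is assumed here as its counterpart.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;# Sketch of a student id range check (get_range_max is an assumed counterpart of get_range_min)
teststdid() {
    min=$(get_range_min $wid)
    max=$(get_range_max $wid)
    if [ &quot;$stdid&quot; -lt &quot;$min&quot; ] || [ &quot;$stdid&quot; -gt &quot;$max&quot; ]; then
        echo &quot;student id $stdid is outside the range $min-$max allocated to workshop $wid&quot;
        exit 1
    fi
}
&lt;/code&gt;&lt;/pre&gt;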
&lt;p&gt;9- &lt;code&gt;generate_randompwd()&lt;/code&gt; This function creates a random password for a user. It is used for both local and LDAP users&apos; passwords. If the workshop requires LDAP authentication (the &lt;code&gt;get_ldap_status()&lt;/code&gt; function returns this information), then another function, &lt;code&gt;update_ldap_passwd()&lt;/code&gt;, is used to update the LDAP server with the password for the given student.&lt;/p&gt;
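&lt;p&gt;Purely as an illustration of the idea, a random password could be generated as follows (the real function and the &lt;code&gt;RANDOMPWD&lt;/code&gt; variable name are assumptions):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;# Illustration: generate a 12-character random password with openssl
generate_randompwd() {
    openssl rand -base64 16 | tr -d &apos;/+=&apos; | cut -c1-12
}
RANDOMPWD=$(generate_randompwd)   # RANDOMPWD is a hypothetical variable name
&lt;/code&gt;&lt;/pre&gt;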
&lt;p&gt;The generated password will be sent back to the api-db server so that the frontend server can then send an email allowing the participant to connect to the workshop.&lt;/p&gt;
&lt;p&gt;10- &lt;code&gt;erase_student()&lt;/code&gt; This function erases all the content from the allocated student&apos;s home directory, making sure the home directory is not compromised and that the participant starts clean.&lt;/p&gt;
&lt;p&gt;11- &lt;code&gt;get_compile_status()&lt;/code&gt; This function will check if the workshop needs some scripts to be compiled. For instance, if you need to authenticate against a private cloud portal and you don&apos;t want your participants to see the credentials, make sure to check the relevant box in the workshop table of the database. This compile feature will compile the authentication scripts into an executable that cannot be edited.&lt;/p&gt;
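&lt;p&gt;One common way to turn a credential-bearing shell script into a binary that participants cannot read is the &lt;code&gt;shc&lt;/code&gt; shell compiler; the snippet below is only an illustration of the idea (with a hypothetical script name), not necessarily what the backend actually uses.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;# Illustration with shc: compile a hypothetical auth-portal.sh into a non-readable executable
shc -f auth-portal.sh             # produces auth-portal.sh.x (binary) and auth-portal.sh.x.c
mv auth-portal.sh.x auth-portal
chmod 0755 auth-portal
rm -f auth-portal.sh auth-portal.sh.x.c
&lt;/code&gt;&lt;/pre&gt;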
&lt;p&gt;12- If an appliance is needed for the workshop, then the following script is called: &lt;code&gt;create-&amp;#x3C;WKSHP&gt;.sh&lt;/code&gt;. This will prepare the appliance (deploying a docker image on it, for instance) and set up the user environment on the appliance accordingly (ssh keys, skeletons).&lt;/p&gt;
&lt;p&gt;For instance, &lt;code&gt;create-WKSHP-ML101.sh&lt;/code&gt; will perform the following tasks in order to prepare the appliance for the workshop: It starts by resetting the appliance with the &lt;code&gt;reset-&amp;#x3C;WKSHP&gt;.sh&lt;/code&gt; script. Then, it calls a second script, &lt;code&gt;create-appliance.sh&lt;/code&gt;, that prepares a generic appliance. Once done with these two, it moves on to the proper customization of the appliance for the given student.&lt;/p&gt;
&lt;p&gt;See details below.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;#!/bin/bash

set -x

source {{ SCRIPTDIR }}/functions.sh

export RTARGET={{ hostvars[inventory_hostname][&apos;IP-WKSHP-ML101&apos;] }}
# Start by cleaning up stuff - do it early as after we setup .ssh content
{{ SCRIPTDIR }}/reset-$ws.sh
{{ SCRIPTDIR }}/create-appliance.sh

NAME=mllab
TMPDIR=/tmp/$NAME.$stdid


mkdir -p $TMPDIR

# Define local variables
echo wid=$wid
APPMIN=`get_range_min $wid`
echo stdid=$stdid
echo APPMIN=$APPMIN
mlport=$(($stdid-$APPMIN+{{ hostvars[inventory_hostname][&apos;MLPORT-WKSHP-ML101&apos;] }}))
mlport2=$(($stdid-$APPMIN+{{ hostvars[inventory_hostname][&apos;MLPORT2-WKSHP-ML101&apos;] }}))
httpport=$(($stdid-$APPMIN+{{ hostvars[inventory_hostname][&apos;HTTPPORT-WKSHP-ML101&apos;] }}))

cat &gt; $TMPDIR/dockerd-entrypoint.sh &amp;#x3C;&amp;#x3C; EOF
export HTTPPORT
tini -g -- start-notebook.sh &amp;#x26;
sleep 3
jupyter lab list | tail -1 | cut -d&apos;=&apos; -f2 | cut -d&apos; &apos; -f1 &gt; {{ STUDDIR }}/student$stdid/mltoken
sleep infinity
EOF

cat &gt; $TMPDIR/Dockerfile &amp;#x3C;&amp;#x3C; EOF
FROM ${NAME}:latest
USER root
COPY dockerd-entrypoint.sh /usr/local/bin/
ENTRYPOINT /usr/local/bin/dockerd-entrypoint.sh
RUN mkdir -p {{ STUDDIR }}/student$stdid
RUN useradd student$stdid -u $stdid -g 100 -d {{ STUDDIR }}/student$stdid
RUN chown student$stdid:users {{ STUDDIR }}/student$stdid
# Unlock the account
RUN perl -pi -e &quot;s|^student$stdid:!:|student$stdid:\$6\$rl1WNGdr\$qHyKDW/prwoj5qQckWh13UH3uE9Sp7w43jPzUI9mEV6Y1gZ3MbDDMUX/1sP7ZRnItnGgBEklmsD8vAKgMszkY.:|&quot; /etc/shadow
# In case we need sudo
#RUN echo &quot;student$stdid   ALL=(ALL)       NOPASSWD: ALL&quot; &gt;&gt; /etc/sudoers
WORKDIR {{ STUDDIR }}/student$stdid
USER student$stdid
ENV NB_USER student$stdid
ENV NB_UID $stdid
ENV HTTPPORT $httpport
RUN git clone https://github.com/snowch/ml-101 {{ STUDDIR }}/student$stdid/
RUN /opt/conda/bin/jupyter-nbconvert --clear-output --inplace {{ STUDDIR }}/student$stdid/*.ipynb
EOF


# Look at https://stackoverflow.com/questions/34264348/docker-inside-docker-container
# and http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
# For security consider using https://github.com/nestybox/sysbox
cat &gt; $TMPDIR/docker-compose.yml &amp;#x3C;&amp;#x3C; EOF
version: &apos;3.5&apos;
services:
  $NAME$stdid:
    image: $NAME$stdid
    build: .
    #privileged: true
    ports:
      - &quot;$httpport:8888&quot;
      - &quot;$mlport:4040&quot;
      - &quot;$mlport2:4041&quot;
#    volumes:
#      - /var/run/docker.sock:/var/run/docker.sock
EOF
cat &gt; $TMPDIR/launch-$NAME &amp;#x3C;&amp;#x3C; EOF
#!/bin/bash
cd $TMPDIR
docker-compose up --build -d
EOF
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;13- The &lt;code&gt;copy_folder.yml&lt;/code&gt; playbook is now executed to deploy the notebooks and scripts necessary for the participant to run the workshop. Remember that the participant was allocated a student (with a dedicated student id, for instance student41) at the time of registration. This student id is picked from a range that is allocated to the workshop; the admin decides on the maximum capacity to allocate to a given workshop. &lt;code&gt;copy_folder.yml&lt;/code&gt; is historically one of the very first playbooks we used and therefore a very important one. It performs the necessary actions to deploy and personalize (by substituting Ansible variables) the selected notebook into the appropriate student home folder.&lt;/p&gt;
&lt;p&gt;14- In certain cases, some post-deployment actions are needed. For instance, you may want to git clone some repository to leverage data stored there. This can only occur once the deployment is done. Therefore, a &lt;code&gt;post-copy-&amp;#x3C;WKSHP&gt;.sh&lt;/code&gt; script is called.&lt;/p&gt;
&lt;p&gt;15- Finally, the workshop is now ready to be used by the participant. The backend therefore needs to inform the frontend. To do so, it performs two API calls:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The first API call updates the password data for the participant&apos;s allocated student.&lt;/li&gt;
&lt;li&gt;The second API call updates the participant&apos;s allocated student&apos;s status to active.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These changes will trigger the frontend web portal application to send a second email to the participant. This email contains the necessary information for the participant to connect to his or her notebooks environment. The participant will then run the workshop. For each workshop, a dedicated time window is allocated. Some workshops take longer to run than others; the time window varies from 2 to 4 hours maximum. The system knows how to set it up so that it will time out. This means that once the participant hits the register button on the frontend web portal, the clock starts ticking.&lt;/p&gt;
&lt;p&gt;Some background checks take place on the web portal to verify the time spent since registration for a given workshop. As a consequence, a reminder email is sent an hour before the workshop times out. When the bell rings at the end of the class, a new procmail API call is made to the backend server ordering a &lt;strong&gt;CLEANUP&lt;/strong&gt; action. The participant can also trigger this action by registering for a new workshop before the end of the current one. He/She will have to provide the necessary information to the frontend web portal in order to end the current workshop.&lt;/p&gt;
&lt;p&gt;Let&apos;s see what is happening on the backend server to perform this &lt;strong&gt;CLEANUP&lt;/strong&gt; scenario.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/wod-blogserie3-cleanup.png&quot; alt=&quot;&quot; title=&quot;backend server CLEANUP workflow&quot;&gt;&lt;/p&gt;
&lt;p&gt;As you can see, it does not differ much from the &lt;strong&gt;CREATE&lt;/strong&gt; scenario. We still need to gather data to interact with the proper workshop for the right student. The &lt;code&gt;.procmailrc&lt;/code&gt; provides us with this information. Then, the automation kicks in through the &lt;code&gt;procmail-action.sh&lt;/code&gt; script.&lt;/p&gt;
&lt;p&gt;The verb is now &lt;strong&gt;CLEANUP&lt;/strong&gt;. As a consequence, step 4 is now &lt;strong&gt;CLEANUP&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Nothing changes in steps 5 to 9.&lt;/p&gt;
&lt;p&gt;10- &lt;code&gt;get_wod_completion_ratio()&lt;/code&gt; This function computes a simple ratio of the number of notebook cells executed throughout the different exercises of the workshop. This enables us to see how much of the workshop was actually run. Participants are asked to fill out a form in a conclusion notebook which is present in every student&apos;s workshop folder.&lt;/p&gt;
&lt;p&gt;This completion ratio script provides us with this data, and we store it in our database.&lt;/p&gt;
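&lt;p&gt;As an illustration of the idea, such a ratio can be computed from the notebook JSON with &lt;code&gt;jq&lt;/code&gt;; the real &lt;code&gt;get_wod_completion_ratio()&lt;/code&gt; may be implemented differently.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;# Illustration: percentage of executed code cells across a student&apos;s notebooks
total=0; run=0
for nb in $STUDDIR/student$stdid/*.ipynb; do
    t=$(jq &apos;[.cells[] | select(.cell_type==&quot;code&quot;)] | length&apos; &quot;$nb&quot;)
    r=$(jq &apos;[.cells[] | select(.cell_type==&quot;code&quot; and .execution_count != null)] | length&apos; &quot;$nb&quot;)
    total=$((total+t)); run=$((run+r))
done
[ &quot;$total&quot; -gt 0 ] &amp;#x26;&amp;#x26; echo &quot;completion ratio: $((100*run/total))%&quot;
&lt;/code&gt;&lt;/pre&gt;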
&lt;p&gt;11- An API call sends the completion ratio figure to the database. This can later be queried to build up a nice reporting dashboard, as explained by my colleague Didier Lalli in the following &lt;a href=&quot;https://developer.hpe.com/blog/open-source-elasticsearch-helped-us-globally-support-virtual-labs/&quot;&gt;article&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;12- &lt;code&gt;erase-student()&lt;/code&gt;: Now that we have extracted the necessary data from the student&apos;s notebooks, we can perform a cleanup of the student folder.&lt;/p&gt;
&lt;p&gt;13- &lt;code&gt;cleanup_processes_student()&lt;/code&gt;: On top of cleaning up the student folder, we also kill all the allocated student&apos;s processes.&lt;/p&gt;
&lt;p&gt;14- &lt;code&gt;cleanup-&amp;#x3C;workshop&gt;.sh&lt;/code&gt;: If any appliance is involved, this task will perform the necessary cleanup processes on the appliance.&lt;/p&gt;
&lt;p&gt;15- Finally, just like for the workshop creation process, we need to tell the frontend that the cleanup is now done. Therefore, several API calls are made to update tables in the database. A new student password is generated at the cleanup phase to prevent unregistered logins and is recorded. The student status is set to inactive. The capacity figure is incremented by one to make the seat available again.&lt;/p&gt;
&lt;p&gt;As for the &lt;strong&gt;CREATE&lt;/strong&gt; phase, the regular checks occurring on the frontend web portal will pick up these data and trigger the final email to the participant, thanking him for his participation.&lt;/p&gt;
&lt;p&gt;Now let&apos;s look at the &lt;strong&gt;RESET&lt;/strong&gt; scenario.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/wod-blogserie3-reset.png&quot; alt=&quot;&quot; title=&quot;backend server RESET workflow&quot;&gt;&lt;/p&gt;
&lt;p&gt;You may wonder about the differences between the two: &lt;strong&gt;CLEANUP&lt;/strong&gt; only takes care of the student, whereas &lt;strong&gt;RESET&lt;/strong&gt; takes care of a larger scope. Let me explain...&lt;/p&gt;
&lt;p&gt;When a &lt;strong&gt;CLEANUP&lt;/strong&gt; occurs, it deals with the participant&apos;s student workshop and home directory (the workshop directory belonging to the home directory). It cleans up workshop content, ssh keys, and skeletons. The &lt;strong&gt;RESET&lt;/strong&gt; deletes leftovers from the workshop&apos;s exercises. For instance, when one runs the &lt;a href=&quot;https://developer.hpe.com/hackshack/workshop/24&quot;&gt;Kubernetes 101&lt;/a&gt; workshop, he is creating microservices, scaling them, and should at the end of the workshop run some &lt;code&gt;kubectl delete&lt;/code&gt; commands to clean up everything. However, some participants may forget to run these clean-up steps, and the admin needs to make sure that the next participant who is assigned the same student environment gets a fresh one. Therefore, some measures have to be taken. These measures take place when a reset flag is associated with the workshop in the database.&lt;/p&gt;
&lt;p&gt;During the &lt;strong&gt;CLEANUP&lt;/strong&gt; phase, a check is actually performed to test the presence of this flag through a simple API call on the frontend API-DB server. If the workshop has a reset flag, then a dedicated &lt;code&gt;reset-WKSHP.sh&lt;/code&gt; script is called and performs the necessary tasks. In the case of Kubernetes 101, it wipes out any leftovers from the student. In some other cases, it launches a revert-to-snapshot script on a virtual machine.&lt;/p&gt;
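&lt;p&gt;Purely as an illustration, a reset script for such a workshop could boil down to deleting whatever the participant left running on the appliance; the command below is an assumption, not the actual &lt;code&gt;reset-WKSHP.sh&lt;/code&gt; content.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;# Illustration only: wipe Kubernetes leftovers for the allocated student on the appliance
ssh -q student$stdid@$RTARGET &quot;kubectl delete deployment,service,pod --all&quot;
&lt;/code&gt;&lt;/pre&gt;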
&lt;p&gt;Finally, let&apos;s consider the &lt;strong&gt;PURGE&lt;/strong&gt; scenario.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/wod-blogserie3-purge.png&quot; alt=&quot;&quot; title=&quot;backend server PURGE workflow&quot;&gt;&lt;/p&gt;
&lt;p&gt;In a perfect world, we would have covered here what one would expect from any kind of API (GET, PUT, DELETE = CREATE, CLEANUP and RESET). But this is unfortunately not the case. Even though we did our best to harden the deployment automation, failures might occur, and at many different levels. From a backhoe loader cutting an internet line on the very morning of a starting event (preventing you from accessing your remote labs) to unplanned power cuts or misconfigured power redundancy in PDU assignments, there are many possible examples of human-factor-related issues. As a result, the JupyterHub server or an appliance might become unreachable, and the automation of the workshop&apos;s deployment might fail.&lt;/p&gt;
&lt;p&gt;In these very cases, you need to be able to clean up the mess quickly. Failures typically fall into one of these categories:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/wod-blogserie3-failure-purge.png&quot; alt=&quot;&quot; title=&quot;backend server PURGE / Failure workflow&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;frontend - backend communication issues&lt;/li&gt;
&lt;li&gt;JupyterHub server failure&lt;/li&gt;
&lt;li&gt;JupyterHub server - appliance server communication issues&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The &lt;strong&gt;PURGE&lt;/strong&gt; scenario is therefore triggered on Workshops-on-Demand deployment failures.&lt;/p&gt;
&lt;p&gt;At registration time, when the participant hits the register button on the frontend web portal, an entry is automatically created in the database for him. It associates the participant with a student and a workshop. It also registers the date and start time of the workshop and sets the participant status to &apos;welcome&apos; in the database. A first email is then sent to the participant from the frontend web portal, welcoming him to the Workshops-on-Demand and stating that, within a few minutes, a second email will be sent with the necessary information (credentials and URL) to connect to the workshop&apos;s environment.&lt;/p&gt;
&lt;p&gt;If, for any reason, the deployment of the workshop fails and, as a consequence, no API call is made back to the frontend from the backend, the frontend could remain stuck forever and so would the participant. To overcome this, we implemented a check on the frontend web portal to test this welcome status. In a normal scenario, this welcome status gets updated within less than 3 minutes. If the status is not updated within 10 minutes, we consider that something went wrong during the deployment and, as a result, a &lt;strong&gt;PURGE&lt;/strong&gt; scenario is initiated to clean up both the backend and the frontend sides of the related registration. Of course, depending on the backend&apos;s sanity, some actions could also fail. But from our experience, both the frontend and backend are really reliable.&lt;/p&gt;
&lt;p&gt;Consider now the most common case: the backend and frontend servers can communicate, but the JupyterHub server has issues communicating with appliances. In terms of tasks associated with the &lt;strong&gt;PURGE&lt;/strong&gt; scenario, you can see that we kept them minimal, as there should not be much to clean up on the backend server. Simply consider it a &lt;strong&gt;CLEANUP&lt;/strong&gt; scenario without any workshop deployment.&lt;/p&gt;
&lt;p&gt;We call the same tasks to begin with, as we still need the student ID and workshop ID.&lt;/p&gt;
&lt;p&gt;We then initiate:&lt;/p&gt;
&lt;p&gt;9- &lt;code&gt;generate_randompwd()&lt;/code&gt;:  We always update the student&apos;s password for security reasons.&lt;/p&gt;
&lt;p&gt;10- &lt;code&gt;erase-student()&lt;/code&gt;: We perform a cleanup of the student folder.&lt;/p&gt;
&lt;p&gt;11- API calls to update tables in the database. A new student password is generated at the &lt;strong&gt;PURGE&lt;/strong&gt; phase to prevent unregistered logins and is recorded. The student status is set to inactive. The capacity figure is incremented by one to make the seat available again.&lt;/p&gt;
&lt;p&gt;An email is then sent to the participant explaining that we encountered an issue with the deployment and that we apologize for it. The same email is sent to the admin so he can work on the issue.&lt;/p&gt;
&lt;p&gt;Now, you should have a clearer view of what is really happening in the background when one registers for a workshop. You can see that I have uncovered many scripts to explain, step by step, all the stages of a workshop&apos;s deployment process. But there is more to be explained. The main function of the backend server is obviously to deploy and run workshops. Nevertheless, like any other server, it cannot live without maintenance.&lt;/p&gt;
&lt;p&gt;This subject will be at the core of my next article, where I will detail how one needs to manage and work with this server on a daily basis: what we usually call Day 2 operations.&lt;/p&gt;
&lt;p&gt;If we can be of any help in clarifying any of this, please reach out to us on &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;Slack&lt;/a&gt;. Please be sure to drop back at &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE DEV&lt;/a&gt; for a follow up on this. Check out also the Hack Shack for new &lt;a href=&quot;https://developer.hpe.com/hackshack/workshops&quot;&gt;workshops&lt;/a&gt;! Willing to collaborate with us? Contact us and let&apos;s build together some more workshops! Stay tuned!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[It’s HPE GreenLake Day for storage!]]></title><link>https://developer.hpe.com/2023-April-03/</link><guid isPermaLink="false">https://developer.hpe.com/2023-April-03/</guid><pubDate>Mon, 03 Apr 2023 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Creating HPE GreenLake Data Services Cloud Console block storage resources using Ansible playbooks]]></title><description><![CDATA[In my previous blog post , I provided an introduction to Ansible playbooks for HPE GreenLake Data Services Cloud Console and how to use them…]]></description><link>https://developer.hpe.com/creating-dscc-block-storage-resources-using-dscc-ansible-playbooks/</link><guid isPermaLink="false">https://developer.hpe.com/creating-dscc-block-storage-resources-using-dscc-ansible-playbooks/</guid><pubDate>Thu, 30 Mar 2023 14:02:46 GMT</pubDate><content:encoded>&lt;style&gt;
li {
    font-size: 27px;
    line-height: 33px;
    max-width: none;
}
&lt;/style&gt;
&lt;p&gt;In my previous &lt;a href=&quot;https://developer.hpe.com/blog/automating-operations-on-dscc-using-ansible-playbooks/&quot;&gt;blog post&lt;/a&gt;, I provided an introduction to Ansible playbooks for HPE GreenLake Data Services Cloud Console and how to use them. In this post, I will show you how to create an Ansible playbook to create block storage resources.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Use case:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The objective of this use case is to provision a Primera volume from scratch using Data Services Cloud Console APIs. This use case covers the creation of the following resources in the following order:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Create a host&lt;/li&gt;
&lt;li&gt;Create a host group&lt;/li&gt;
&lt;li&gt;Create a volume set&lt;/li&gt;
&lt;li&gt;Create volumes&lt;/li&gt;
&lt;li&gt;Update the volume set with the volumes created&lt;/li&gt;
&lt;li&gt;Export the volume set to the host group created&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;Let’s look at the Ansible playbook resource-wise&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Variables in Ansible are similar to variables in any programming language that can be declared and used anywhere in the script. In this use case, certain variables will be used in the playbook and these are declared under the ‘vars’ section. One important variable to watch for here is the “config” variable that provides credentials for the APIs to authenticate the request and the host URL to which the request will be sent. The greenlake_config.json file that is mentioned in the value of the config variable will have the clientID, client secret, and the host URL of the Data Services Cloud Console instance.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;vars:
    - config: &quot;{{ playbook_dir }}/greenlake_config.json&quot;
    - name: Greenlake DSCC volumeset
    - system_id: 2M29510B8L
    - initiators: []
    - host_name: host_from_ansible
    - host_group_name: hostGroupFromAnsible
    - operating_system: RHE Linux
    - workload: ORACLE_LOG
    - volume_set_name: ansibleVolumeSet
    - volume_name: ansibleVolume
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Contents of greenlake_config.json:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &quot;host&quot;: &quot;https://us1.data.cloud.hpe.com&quot;,
  &quot;client_id&quot;: &quot;009e17ef-c356-4066-8b66-4ee10e1128b1&quot;,
  &quot;client_secret&quot;: &quot;53a9dd94d61311ec9b7e3ad2f9067b26&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Creation of host&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;A host represents a physical server and a host group represents a collection of physical servers. In the context of the Data Services Cloud Console API, a host is a definition for a group of initiators that belong to a single server and a host group is a group of initiators across servers.&lt;/p&gt;
&lt;p&gt;To create a host, initiator information is needed, and this can be obtained by a GET call to the initiator resources. One of the initiators can then be used in the creation of the host.
This means that the creation of a host requires two calls: one to get the initiator and another to create the host. Along with this, you must provide the name of the host and the operating system, and the &quot;user_created&quot; flag must be set to true. Variable names are placed between two curly braces and double quotes, as shown in the code snippet below.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;    - name: Get GreenLake Host Initiators
      greenlake_host_initiator_facts:
        config: &quot;{{ config }}&quot;

    - debug: var=host_initiators

    - set_fact:
          initiators=&apos;{{ initiators + [item.id] }}&apos;
      loop: &quot;{{ host_initiators }}&quot;
      when: item.protocol == &apos;FC&apos; and item.hosts|length &amp;#x3C;= 0

    - debug: var=initiators

    - name: Create GreenLake DSCC Host
      greenlake_host:
        config: &quot;{{ config }}&quot;
        state: present
        data:
          initiator_ids:
            - &quot;{{initiators.0}}&quot;
          name: &quot;{{host_name}}&quot;
          operating_system: &quot;{{operating_system}}&quot;
          user_created: True

    - debug: var=hosts
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Creation of host group&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;A host group can be created using the host created in the previous step. The tasks of this Ansible playbook will be executed sequentially, so the creation of the host group task executes only after the host creation task is successfully completed.&lt;/p&gt;
&lt;p&gt;Provide the name of the host group, host Id, and make sure that the “user_created” flag is set to true.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;    - name: Create GreenLake DSCC Host Group
      greenlake_host_group:
        config: &quot;{{ config }}&quot;
        state: present
        data:
          name: &quot;{{host_group_name}}&quot;
          hostIds:
            - &quot;{{hosts.0.id}}&quot;
          user_created: True

    - debug: var=host_groups
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Creation of volume set&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The device type represents whether it’s a Primera or Nimble volume. If it is 1, then it is a Primera/Alletra 9k volume, and if it is 2, then it is a Nimble/Alletra 6k volume. Mandatory parameters are app_set_name, app_set_importance, and app_set_type (which indicates what kind of workload is required, like Oracle Database). Optional parameters are commented out in the snippet below.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: The request body for Nimble volume creation may vary, so please refer to the documentation for specifics.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;    - name: Create GreenLake DSCC Volume Set
      greenlake_volumeset:
        config: &quot;{{ config }}&quot;
        device_type: 1
        system_id: &quot;{{system_id}}&quot;
        state: present
        data:
          # app_set_business_unit: &quot;HPE&quot;
          # app_set_comments: &quot;Edit&quot;
          app_set_importance: &quot;MEDIUM&quot;
          app_set_name: &quot;{{volume_set_name}}&quot;
          # name: &quot;ansible_volume_set_1&quot;
          app_set_type: &quot;{{workload}}&quot;
          # members: [&quot;ansible-vol1&quot;, &quot;ansible-vol2&quot;]

    - debug: var=volume_sets
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Creation of volume (Primera)&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The mandatory fields are the name, size, and user CPG. Other parameters are optional.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;    - name: Create GreenLake DSCC volume
      greenlake_volume:
        config: &quot;{{ config }}&quot;
        system_id: &quot;{{system_id}}&quot;
        state: present
        data:
          comments: &quot;Ansible library test&quot;
          count: 2 #Optional
          data_reduction: True #Optional
          name: &quot;{{volume_name}}&quot;  
          size_mib: 16384
          snap_cpg: &quot;SSD_r6&quot; #Optional
          snapshot_alloc_warning: 10 #Optional
          user_alloc_warning: 10 #Optional
          user_cpg: &quot;SSD_r6&quot;

    - debug: var=volumes
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Updating the volume set&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The volume set will be updated with the volumes created earlier. For this, provide the id of the volume set and the names of the volumes created. Since these are created dynamically during the playbook execution, these are referred to as variables.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;    - name: Update GreenLake DSCC Volume Set with volumes
      greenlake_volumeset:
        config: &quot;{{ config }}&quot;
        device_type: 1
        system_id: &quot;{{system_id}}&quot;
        state: present
        data:
          id: &quot;{{volume_sets.0.id}}&quot;
          add_members:
            - &quot;{{volume_name}}.0&quot;
            - &quot;{{volume_name}}.1&quot;

    - debug: var=volume_sets
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Exporting the volume set to the host group&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Now, the volume set created needs to be exported to the host group created earlier. Provide the system id, host group id, and the volume set id in the form of variables.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;    - name: Export GreenLake DSCC Volume Set from system
      greenlake_volumeset:
        config: &quot;{{ config }}&quot;
        device_type: 1
        system_id: &quot;{{system_id}}&quot;
        state: export
        data:
          id: &quot;{{volume_sets.0.id}}&quot;
          host_group_ids:
            - &quot;{{host_groups.0.id}}&quot;

    - debug: var=volume_sets
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I hope you found this blog post helpful. This is just a sample use case one can implement to create resources on HPE GreenLake Data Services Cloud Console. Admins/users can come up with multiple use cases that can be used to manage the Data Services Cloud Console resources, like cleaning the resources, monitoring the resources, etc. Keep an eye out for new blogs and videos that are on track to be released related to the automation of Data Services Cloud Console operations.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Meet the HPE Developer Community team at HPE Technology and Solution Summit 2023!]]></title><description><![CDATA[HPE Technology and Solution Summit 2023 (HPE TSS 2023) goes physical again this year in sunny Barcelona. After being relegated to virtual…]]></description><link>https://developer.hpe.com/meet-our-new-hpe-dev-team-representative-at-hpe-technology-and-solution-summit-2023/</link><guid isPermaLink="false">https://developer.hpe.com/meet-our-new-hpe-dev-team-representative-at-hpe-technology-and-solution-summit-2023/</guid><pubDate>Thu, 30 Mar 2023 13:29:10 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;/img/tss2023.png&quot; alt=&quot;&quot; title=&quot;HPE TECHNOLOGY AND SOLUTIONS SUMMIT 2023&quot;&gt;&lt;/p&gt;
&lt;p&gt;HPE Technology and Solution Summit 2023 (HPE TSS 2023) goes physical again this year in sunny Barcelona. After being relegated to virtual sessions these last couple of times, you will once again enjoy the pleasure of being able to engage with an HPE Developer Community representative in person. This time, all our sessions will be held by our new team member, Mathieu Losmede.&lt;/p&gt;
&lt;p&gt;HPE TSS is the largest HPE technical event held in the Europe, Middle East, and Africa (EMEA) geography. Whether you are an HPE Presales or Partner representative, you will have the opportunity to attend technical breakouts and Hands-on Labs, as well as strategic keynotes, and finally get to meet your peers in person!&lt;/p&gt;
&lt;p&gt;T﻿he decision to hold this event at a physical location came late this year. As a result, the HPE Developer Community team will be unable to bring the complete, well-loved Hack Shack to the event. And we will only be able to spare one member of our team to attend and run our sessions. However, we will be offering a number of sessions, hands-on labs, and a few surprises.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Meet our new team member: Mathieu Losmede&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Mathieu joined the HPE Developer Community team last summer. He was present with us at Discover 2022 in Las Vegas in the Hack Shack. You can see him between Patrick and myself in the following picture. He will be the face of the HPE Developer Community team in Barcelona.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/fred-6-b-3-512-.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Mathieu came in with a strong HPE GreenLake background. He was part of the HPE GreenLake Cloud Service (aka Greenlake Central) SRE - Devops team. As our team&apos;s focus has recently shifted more heavily towards the HPE GreenLake Cloud Platform, we were very fortunate to have him join our group.&lt;/p&gt;
&lt;p&gt;Mathieu will be all over the place in Barcelona. One might wonder whether he managed to discover the secret of ubiquity. He will be delivering several technical breakouts either on his own or in partnership with other business units (BU) representatives (HPE GreenLake, Storage, Compute, and Aruba). He will also be co-presenting in other BUs&apos; sessions. On top of these, he will deliver three hands-on labs based on our well-known &lt;a href=&quot;https://developer.hpe.com/hackshack/workshops&quot;&gt;Workshops-on-Demand&lt;/a&gt; project.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/tss2023-wod.png&quot; alt=&quot;&quot; title=&quot;Workshops-on-Demand&quot;&gt;&lt;/p&gt;
&lt;p&gt;Look for the HPE Developer Community track in the HPE TSS 2023 agenda to get details about all our sessions.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Techies just wanna have fun!&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;As always, the HPE Developer Community team brings in new ways to learn while having fun. This year at TSS 2023 in Barcelona, we invite you to participate in a treasure hunt. Do not worry, no long walks are needed; we won&apos;t be sending you looking for clues in the city of Barcelona. All you will need is an internet browser and a few minutes. The treasure hunt will be available from the HPE TSS mobile app. It will start on Monday the 22nd of May at 12:00 pm CEST and end on Thursday the 25th of May at 10:00 CEST.&lt;/p&gt;
&lt;p&gt;Three winners will be selected. The winners will receive their prizes on Thursday, May 25th on the show floor at the HPE GreenLake booth at 12:15pm CEST. Please read the Terms and Conditions for participation in the HPE Developer Community Treasure Hunt found &lt;a href=&quot;https://developer.hpe.com/hackshack/hpetss2023-treasurehunt-terms-conditions/&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/echodotgen3.png&quot; alt=&quot;&quot; title=&quot;Amazon Echo Dot3&quot;&gt;&lt;/p&gt;
&lt;p&gt;If you are going to TSS 2023, make sure to sign up for the HPE Developer Community sessions. As you know, we value the importance of this technical event. We know you do as well, and we appreciate that you have regularly rated our sessions highly. We look forward to reconnecting with you there. As does Mathieu :-).&lt;/p&gt;
li {
    font-size: 27px;
    line-height: 33px;
    max-width: none;
}
&lt;/style&gt;
&lt;p&gt;Automation is one of the top trends in technology and the pace of automation is accelerating with more companies opting for developing fully automated systems. Automation reduces time, effort, cost, and manual errors while increasing efficiency and productivity. Gone are those days when many complex coding skills were required to implement automation. Now, there are many low-code tools available in the market, like Ansible, that make automation easier.&lt;/p&gt;
&lt;p&gt;In this blog post, I am excited to be able to introduce the Ansible playbooks for HPE GreenLake Data Services Cloud Console and show you how to use them. Along with the &lt;a href=&quot;https://github.com/HewlettPackard/greenlake-data-services-python&quot;&gt;Python SDK&lt;/a&gt; for HPE GreenLake Data Services Cloud Console, these playbooks should help you in your efforts to automate HPE GreenLake Data Services through an infrastructure-as-code approach.&lt;/p&gt;
&lt;p&gt;Ansible is an open-source IT automation tool that automates provisioning, configuration management, application deployment, and many other IT processes.
Two main features that make Ansible the best choice for automation are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Ansible does not require any programming.&lt;/li&gt;
&lt;li&gt;Idempotence is offered as a built-in feature of many of the Ansible modules. This means the result of performing a task once is the same as performing it multiple times without any intervening actions.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Why do we need Ansible playbooks for HPE GreenLake Data Services Cloud Console?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Ansible helps the users/admins automate the deployment of resources and applications without the manual overhead of creating everything from scratch. These playbooks can be configured with conditions, variables, and tasks. Currently, simple playbooks, like performing CRUD operations on the resources, are available. These playbooks can be considered basic building blocks and can be reused to build simple-to-complex use cases.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://github.com/HewlettPackard/greenlake-data-services-ansible&quot;&gt;Ansible Modules for HPE GreenLake Data Services Cloud Console&lt;/a&gt;:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The following Ansible modules are currently available for Data Services Cloud Console. You can use the samples given or customize it.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;greenlake_audit_events_facts&lt;/em&gt; - Get details of audit events&lt;/p&gt;
&lt;p&gt;&lt;em&gt;greenlake_host&lt;/em&gt; - Manage host&lt;/p&gt;
&lt;p&gt;&lt;em&gt;greenlake_host_facts -&lt;/em&gt; Get details of a host&lt;/p&gt;
&lt;p&gt;&lt;em&gt;greenlake_host_group -&lt;/em&gt; Manage host group&lt;/p&gt;
&lt;p&gt;&lt;em&gt;greenlake_host_group_facts -&lt;/em&gt; Get details of a host group&lt;/p&gt;
&lt;p&gt;&lt;em&gt;greenlake_host_initiator_facts -&lt;/em&gt; Get details of a host initiator&lt;/p&gt;
&lt;p&gt;&lt;em&gt;greenlake_storage_system_facts -&lt;/em&gt; Get details of a storage system&lt;/p&gt;
&lt;p&gt;&lt;em&gt;greenlake_volume -&lt;/em&gt; Manage volume&lt;/p&gt;
&lt;p&gt;&lt;em&gt;greenlake_volume_facts -&lt;/em&gt; Get details of a volume&lt;/p&gt;
&lt;p&gt;&lt;em&gt;greenlake_volumeset -&lt;/em&gt; Manage volume set&lt;/p&gt;
&lt;p&gt;&lt;em&gt;greenlake_volumeset_facts -&lt;/em&gt; Get details of a volume set&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Prerequisites to use these Ansible playbooks:&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Any machine with Python 3.8 or newer installed. This includes Red Hat, Debian, CentOS, macOS, any of the BSDs, and so on. (Microsoft Windows is not supported - see Note below)&lt;/li&gt;
&lt;li&gt;The latest and most stable version of Ansible must be installed. (current version is 2.9)&lt;/li&gt;
&lt;li&gt;HPE GreenLake Data Services Python SDK. (Installation procedure is mentioned below)&lt;/li&gt;
&lt;li&gt;Cloning the GitHub repo that has these playbooks&lt;/li&gt;
&lt;li&gt;Set up the ANSIBLE_LIBRARY and ANSIBLE_MODULE_UTILS environment variables&lt;/li&gt;
&lt;/ol&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: If you are using Windows 10, then an Ubuntu terminal called Windows Subsystem for Linux(WSL) can be used.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;strong&gt;Installing Ansible on Ubuntu&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Make sure your system’s package index is up to date. Refresh the package index with the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$sudo apt update
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Next, install Ansible on Ubuntu with the command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$sudo apt install ansible
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The installation will prompt you to press Y to confirm, with the rest of the installation process being automated.&lt;/p&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;
&lt;p&gt;Check the version of Ansible by using the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ansible --version
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;For other operating systems, please refer to the official &lt;a href=&quot;https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html&quot;&gt;Ansible documentation&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Installing the HPE GreenLake Data Services Python SDK&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;These Ansible playbooks use the Python libraries of the HPE GreenLake Python SDK. Install the Python SDK using the steps mentioned &lt;a href=&quot;https://github.com/HewlettPackard/greenlake-data-services-python#installation--usage&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Cloning the Data Services Cloud Console GitHub repo&lt;/strong&gt;
To clone the repo, execute the following command on the machine where you installed Ansible:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;git clone https://github.com/HewlettPackard/greenlake-data-services-ansible.git
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This repo mainly consists of two folders:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Libraries&lt;/strong&gt;: These are Python libraries that will be used by the Ansible playbooks to perform CRUD operations on Data Services Cloud Console resources.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Examples&lt;/strong&gt;: This folder consists of sample Ansible playbooks that perform CRUD operations on Data Services Cloud Console resources. With these examples, one can start building use cases.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;Setting up environment variables&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Set the ANSIBLE_LIBRARY variable to the greenlake-data-services-ansible/library directory of the repo -&lt;/p&gt;
&lt;p&gt;&lt;code&gt;ANSIBLE_LIBRARY=/home/admin/greenlake-data-services-ansible/library&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Set the ANSIBLE_MODULE_UTILS to the module_utils directory under greenlake-data-services-ansible/library directory -&lt;/p&gt;
&lt;p&gt;&lt;code&gt;ANSIBLE_MODULE_UTILS=/home/admin/greenlake-data-services-ansible/library/module_utils&lt;/code&gt;&lt;/p&gt;
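&lt;p&gt;For example, you can export both variables in your current shell session (and add the same lines to your shell profile to make them persistent):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;export ANSIBLE_LIBRARY=/home/admin/greenlake-data-services-ansible/library
export ANSIBLE_MODULE_UTILS=/home/admin/greenlake-data-services-ansible/library/module_utils
&lt;/code&gt;&lt;/pre&gt;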
&lt;p&gt;&lt;strong&gt;Usage&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Let’s have a look at an example used to perform Data Services Cloud Console operations on a host using an Ansible playbook. For any Ansible playbook to be used, an inventory file is required. The Ansible inventory file defines the hosts and groups of hosts on which commands, modules, and tasks in a playbook operate. In this example, we are calling REST APIs from our local machine. This file can be placed anywhere and the path of this file can be given during the Ansible playbook execution. Create an inventory file, name it “hosts” (the name of the file can be anything), and update it with the following details:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;[localhost] 127.0.0.1 ansible_connection=local ansible_python_interpreter=/home/admin/ansible-env/bin/python&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;The IP address given is localhost, which means operations are performed on the local host. Provide the Python interpreter location on your system. Name this file “hosts”.&lt;/p&gt;
&lt;p&gt;Let&apos;s now look at a sample playbook for the host resource. Ansible playbooks contain a list/array of tasks that will be performed sequentially. The code snippet below has the following tasks.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Creation&lt;/strong&gt; of host&lt;/p&gt;
&lt;p&gt;In this block, the input request parameters are provided under the section ‘data’. In this sample, only required fields for the REST API call are provided such as name, initiator_ids, user_created flag, and operating system.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Update&lt;/strong&gt; a host&lt;/p&gt;
&lt;p&gt;In an update request, one can update the name, and change the initiators. These input parameters can be provided under the ‘data’ section.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Delete&lt;/strong&gt; a host&lt;/p&gt;
&lt;p&gt;To delete a host, all you need to provide is the name of the host.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
- hosts: all
  vars:
    - config: &quot;{{ playbook_dir }}/greenlake_config.json&quot;
    - name: &quot;Greenlake DSCC Host&quot;
  tasks:
    - name: Create GreenLake DSCC Host
      greenlake_host:
        config: &quot;{{ config }}&quot;
        state: present
        data:
          initiator_ids:
            - &quot;b015d393e2274592a37cc7a579c8b0ca&quot;
          name: &quot;hostAnsibleTest&quot;
          operating_system: &quot;RHE Linux&quot;
          user_created: True
          # new_name: &quot;hostAnsibleNameUpdated1&quot;

    - debug: var=hosts

    - name: Update GreenLake DSCC Host Name
      greenlake_host:
        config: &quot;{{ config }}&quot;
        state: present
        data:
          initiator_ids:
            - &quot;b015d393e2274592a37cc7a579c8b0ca&quot;
          name: &quot;hostAnsibleTest&quot;
          operating_system: &quot;RHE Linux&quot;
          user_created: True
          new_name: &quot;hostAnsibleTestUpdated&quot;

    - name: Delete GreenLake DSCC Host
      greenlake_host:
        config: &quot;{{ config }}&quot;
        state: absent
        data:
          name: &quot;hostAnsibleTestUpdated&quot;
    - debug: var=hosts
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;To execute the Ansible playbook, execute the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ansible-playbook examples/greenlake_host.yaml -i ../hosts
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This is what the result of a playbook execution looks like.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;(ansible_env) anusha@MUCILUR2I5:~/greenlake-data-services-ansible$ ansible-playbook examples/greenlake_host.yaml -i ../hosts
[WARNING]: Found variable using reserved name: name

PLAY [all] ********************************************************************************************************************************************************************************************************

TASK [Gathering Facts] ********************************************************************************************************************************************************************************************
ok: [127.0.0.1]

TASK [Create GreenLake DSCC Host] *********************************************************************************************************************************************************************************
changed: [127.0.0.1]

TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [127.0.0.1] =&gt; {
    &quot;hosts&quot;: [
        {
            &quot;associated_links&quot;: [
                {
                    &quot;resourceUri&quot;: &quot;/api/v1/initiators?filter=hostId in (63e338328779477481d44d906866b60b)&quot;,
                    &quot;type&quot;: &quot;initiators&quot;
                }
            ],
            &quot;associated_systems&quot;: null,
            &quot;comment&quot;: null,
            &quot;console_uri&quot;: &quot;/data-ops-manager/host-initiators/63e338328779477481d44d906866b60b&quot;,
            &quot;contact&quot;: null,
            &quot;customer_id&quot;: &quot;eb00678a466b11ec94d66ec0ab988305&quot;,
            &quot;edit_status&quot;: &quot;Not_Applicable&quot;,
            &quot;fqdn&quot;: null,
            &quot;generation&quot;: 1655290314,
            &quot;host_groups&quot;: [],
            &quot;id&quot;: &quot;63e338328779477481d44d906866b60b&quot;,
            &quot;initiators&quot;: [
                {
                    &quot;address&quot;: &quot;c3:33:ff:58:5f:19:00:1e&quot;,
                    &quot;id&quot;: &quot;b015d393e2274592a37cc7a579c8b0ca&quot;,
                    &quot;ip_address&quot;: null,
                    &quot;name&quot;: &quot;Host Path C333FF585F19001E (1:3:2)&quot;,
                    &quot;protocol&quot;: &quot;FC&quot;,
                    &quot;systems&quot;: [
                        &quot;2M29510B8N&quot;,
                        &quot;2M29510B8L&quot;
                    ]
                }
            ],
            &quot;ip_address&quot;: null,
            &quot;location&quot;: null,
            &quot;marked_for_delete&quot;: false,
            &quot;model&quot;: null,
            &quot;name&quot;: &quot;hostAnsibleTest&quot;,
            &quot;operating_system&quot;: &quot;RHE Linux&quot;,
            &quot;persona&quot;: null,
            &quot;protocol&quot;: null,
            &quot;subnet&quot;: null,
            &quot;systems&quot;: [
                &quot;2M29510B8N&quot;,
                &quot;2M29510B8L&quot;
            ],
            &quot;type&quot;: &quot;host-initiator&quot;,
            &quot;user_created&quot;: true
        }
    ]
}

TASK [Update GreenLake DSCC Host Name] ****************************************************************************************************************************************************************************
changed: [127.0.0.1]

TASK [Delete GreenLake DSCC Host] *********************************************************************************************************************************************************************************
changed: [127.0.0.1]

TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [127.0.0.1] =&gt; {
    &quot;hosts&quot;: [
        {
            &quot;associated_links&quot;: [
                {
                    &quot;resourceUri&quot;: &quot;/api/v1/initiators?filter=hostId in (63e338328779477481d44d906866b60b)&quot;,
                    &quot;type&quot;: &quot;initiators&quot;
                }
            ],
            &quot;associated_systems&quot;: null,
            &quot;comment&quot;: null,
            &quot;console_uri&quot;: &quot;/data-ops-manager/host-initiators/63e338328779477481d44d906866b60b&quot;,
            &quot;contact&quot;: null,
            &quot;customer_id&quot;: &quot;eb00678a466b11ec94d66ec0ab988305&quot;,
            &quot;edit_status&quot;: &quot;Update_In_Progress&quot;,
            &quot;fqdn&quot;: null,
            &quot;generation&quot;: 1655290320,
            &quot;host_groups&quot;: [],
            &quot;id&quot;: &quot;63e338328779477481d44d906866b60b&quot;,
            &quot;initiators&quot;: [
                {
                    &quot;address&quot;: &quot;c3:33:ff:58:5f:19:00:1e&quot;,
                    &quot;id&quot;: &quot;b015d393e2274592a37cc7a579c8b0ca&quot;,
                    &quot;ip_address&quot;: null,
                    &quot;name&quot;: &quot;Host Path C333FF585F19001E (1:3:2)&quot;,
                    &quot;protocol&quot;: &quot;FC&quot;,
                    &quot;systems&quot;: [
                        &quot;2M29510B8N&quot;,
                        &quot;2M29510B8L&quot;
                    ]
                }
            ],
            &quot;ip_address&quot;: null,
            &quot;location&quot;: null,
            &quot;marked_for_delete&quot;: false,
            &quot;model&quot;: null,
            &quot;name&quot;: &quot;hostAnsibleTestUpdated&quot;,
            &quot;operating_system&quot;: &quot;RHE Linux&quot;,
            &quot;persona&quot;: null,
            &quot;protocol&quot;: null,
            &quot;subnet&quot;: null,
            &quot;systems&quot;: [
                &quot;2M29510B8N&quot;,
                &quot;2M29510B8L&quot;
            ],
            &quot;type&quot;: &quot;host-initiator&quot;,
            &quot;user_created&quot;: true
        }
    ]
}

PLAY RECAP ********************************************************************************************************************************************************************************************************
127.0.0.1                  : ok=6    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;These Ansible playbooks can be used to fetch resource details as well. Under the examples folder, the files with the suffix ‘_facts’ are used to get the details of a resource.
For example, take a look at ‘greenlake_host_facts.yaml’. This playbook is used to get the details of hosts.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
- hosts: all
  vars:
    - config: &quot;{{ playbook_dir }}/greenlake_config.json&quot;
    - name: &quot;Get Hosts&quot;
  tasks:
    - name: Get GreenLake Hosts
      greenlake_host_facts:
        config: &quot;{{ config }}&quot;
        # id: &quot;fbc6f4e700154bff8cdfebaf10c3b965&quot;
        # params:
        #   limit: 10 # int | Number of items to return at a time (optional)
        #   offset: 5 # int | The offset of the first item in the collection to return (optional)

    - debug: var=hosts
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In this playbook, you have the option to add filters such as ID, limit, and offset, or you can get the details of all available hosts.
Use the following command to execute the Ansible playbook:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ ansible-playbook examples/greenlake_host_facts.yaml -i ../hosts
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The output looks like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;(ansible_env) anusha@MUCILUR2I5:~/greenlake-data-services-ansible$ ansible-playbook examples/greenlake_host_facts.yaml -i ../hosts
[WARNING]: Found variable using reserved name: name

PLAY [all] ********************************************************************************************************************************************************************************************************

TASK [Gathering Facts] ********************************************************************************************************************************************************************************************
ok: [127.0.0.1]

TASK [Get GreenLake Hosts] ****************************************************************************************************************************************************************************************
ok: [127.0.0.1]

TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [127.0.0.1] =&gt; {
    &quot;hosts&quot;: [
        {
            &quot;associated_links&quot;: [
                {
                    &quot;resourceUri&quot;: &quot;/api/v1/initiators?filter=hostId in (e1b09ecf837b46cca365044bf7237bc3)&quot;,
                    &quot;type&quot;: &quot;initiators&quot;
                },
                {
                    &quot;resourceUri&quot;: &quot;/api/v1/host-initiator-groups?filter=hostId in (e1b09ecf837b46cca365044bf7237bc3)&quot;,
                    &quot;type&quot;: &quot;host-groups&quot;
                }
            ],
            &quot;associated_systems&quot;: [
                &quot;2M29510B8N&quot;,
                &quot;2M29510B8L&quot;
            ],
            &quot;comment&quot;: null,
            &quot;console_uri&quot;: &quot;/data-ops-manager/host-initiators/e1b09ecf837b46cca365044bf7237bc3&quot;,
            &quot;contact&quot;: null,
            &quot;customer_id&quot;: &quot;eb00678a466b11ec94d66ec0ab988305&quot;,
            &quot;edit_status&quot;: &quot;Not_Applicable&quot;,
            &quot;fqdn&quot;: null,
            &quot;generation&quot;: 1653561376,
            &quot;host_groups&quot;: [
                {
                    &quot;id&quot;: &quot;0fb8a5f6616c4465a77a43cb7841e105&quot;,
                    &quot;marked_for_delete&quot;: false,
                    &quot;name&quot;: &quot;HostGroup-01&quot;,
                    &quot;systems&quot;: [
                        &quot;2M29510B8N&quot;,
                        &quot;2M29510B8L&quot;
                    ],
                    &quot;user_created&quot;: true
                }
            ],
            &quot;id&quot;: &quot;e1b09ecf837b46cca365044bf7237bc3&quot;,
            &quot;initiators&quot;: [
                {
                    &quot;address&quot;: &quot;c3:33:ff:58:5f:19:00:04&quot;,
                    &quot;id&quot;: &quot;1b6b97741b21480bb916ca3b8c0a2c6e&quot;,
                    &quot;ip_address&quot;: null,
                    &quot;name&quot;: &quot;Host Path C333FF585F190004 (0:3:1)&quot;,
                    &quot;protocol&quot;: &quot;FC&quot;,
                    &quot;systems&quot;: [
                        &quot;2M29510B8N&quot;,
                        &quot;2M29510B8L&quot;
                    ]
                },
                {
                    &quot;address&quot;: &quot;c3:33:ff:58:5f:19:00:24&quot;,
                    &quot;id&quot;: &quot;1bce3ffd6a0d4b74a8520ddc55e2a1eb&quot;,
                    &quot;ip_address&quot;: null,
                    &quot;name&quot;: &quot;Host Path C333FF585F190024 (1:3:1)&quot;,
                    &quot;protocol&quot;: &quot;FC&quot;,
                    &quot;systems&quot;: [
                        &quot;2M29510B8N&quot;,
                        &quot;2M29510B8L&quot;
                    ]
                }
            ],
            &quot;ip_address&quot;: null,
            &quot;location&quot;: null,
            &quot;marked_for_delete&quot;: false,
            &quot;model&quot;: null,
            &quot;name&quot;: &quot;Host-01&quot;,
            &quot;operating_system&quot;: &quot;Ubuntu&quot;,
            &quot;persona&quot;: null,
            &quot;protocol&quot;: null,
            &quot;subnet&quot;: null,
            &quot;systems&quot;: [
                &quot;2M29510B8N&quot;,
                &quot;2M29510B8L&quot;
            ],
            &quot;type&quot;: &quot;host-initiator&quot;,
            &quot;user_created&quot;: true
        }
    ]
}

PLAY RECAP ********************************************************************************************************************************************************************************************************
127.0.0.1                  : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In the current release, only the most critical resources are supported; support for additional resources will be added moving forward. This is just the beginning, and there are many more use cases to cover in the future.&lt;/p&gt;
&lt;p&gt;In this blog, I gave you a preview of the Ansible playbooks for HPE GreenLake Data Services Cloud Console. This SDK is a Beta version with plenty of room for improvement. I would urge you to try out the features and keep us engaged with your feedback so that we can come out with a better version at GA. Also, look out for other posts in the HPE Developer blog that are related to Data Services Cloud Console.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Introducing Python SDK for HPE GreenLake Data Services Cloud Console]]></title><description><![CDATA[In my previous blog, I discussed how to generate a Software Development Kit (SDK) from the HPE GreenLake Data Services Cloud Console Open…]]></description><link>https://developer.hpe.com/introducing-python-sdk-for-dscc/</link><guid isPermaLink="false">https://developer.hpe.com/introducing-python-sdk-for-dscc/</guid><pubDate>Wed, 29 Mar 2023 13:45:50 GMT</pubDate><content:encoded>&lt;style&gt;
li {
    font-size: 27px;
    line-height: 33px;
    max-width: none;
}
&lt;/style&gt;
&lt;p&gt;In my previous &lt;a href=&quot;https://developer.hpe.com/blog/get-started-building-dscc-api-client-libraries-for-python-using-openapi-generator/&quot;&gt;blog&lt;/a&gt;, I discussed how to generate a Software Development Kit (SDK) from the HPE GreenLake Data Services Cloud Console Open API spec using a third-party tool called OpenAPI generator. Today, I am going to talk in more detail about the Python SDK generated using this tool. As we progress, I will discuss the implementation aspects and then take up a use case to better understand how to use the SDK.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;APIs and SDKs&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;APIs are sets of functions used to communicate with a web service (via secure HTTP), whereas an SDK is a development kit that facilitates the usage of these APIs. APIs enable any software developer to create business opportunities by leveraging the capabilities they expose to extend an application, whereas an SDK simplifies that process for developers. An SDK is a collection of software tools and programs for a specific application platform that allows developers to manipulate the functions supported by the service. It can be considered a wrapper on top of the APIs, making the code consumable by the application.&lt;/p&gt;
&lt;p&gt;One thing to note in the context of HPE GreenLake Data Services Cloud Console is that the user always gets access to the latest Data Services Cloud Console API version through the SDKs. How? The SDK is designed and deployed (using CI/CD pipelines such as Jenkins) in such a way that with every new release of the Data Services Cloud Console Open API spec, the SDKs get updated automatically, keeping them up to date without any manual intervention. This also reduces the time spent waiting for updates with newer features.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Introducing Python SDK for HPE GreenLake Data Services Cloud Console&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Given the wide adoption of Python, and for the Python lovers out there who previously had no option to achieve their automation goals, the Python SDK is now available. You can access the SDK on this &lt;a href=&quot;https://github.com/HewlettPackard/greenlake-data-services-python&quot;&gt;GitHub&lt;/a&gt; page.&lt;/p&gt;
&lt;p&gt;This SDK contains the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Documentation (under docs folder)&lt;/li&gt;
&lt;li&gt;Code libraries (under greenlake_data_services folder)&lt;/li&gt;
&lt;li&gt;Test file (under test folder)&lt;/li&gt;
&lt;li&gt;README file&lt;/li&gt;
&lt;li&gt;The Python libraries that are required to run this SDK (requirements.txt &amp;#x26; test-requirements.txt)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Requirements&lt;/strong&gt;:&lt;/p&gt;
&lt;p&gt;Python (&gt;=3.5) is required to run the scripts. Run the following command to install all the required packages that the SDK is dependent on:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;pip install -r requirements.txt
&lt;/code&gt;&lt;/pre&gt;
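&lt;p&gt;If you prefer to keep the SDK&apos;s dependencies isolated from your system Python, you could first create a virtual environment and install the requirements inside it. A minimal sketch (the environment name is arbitrary):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Create and activate a virtual environment, then install the SDK requirements
python3 -m venv dscc-sdk-env
source dscc-sdk-env/bin/activate
pip install -r requirements.txt
&lt;/code&gt;&lt;/pre&gt;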
&lt;p&gt;Example usage:&lt;/p&gt;
&lt;p&gt;Let us consider Audits as an example. Audit events are a collection of tasks performed by users. The below code snippet uses a GET method to fetch the details of audit events, like task ID, user email, state, etc.&lt;/p&gt;
&lt;p&gt;The sample code is provided in the &lt;a href=&quot;https://github.com/HewlettPackard/greenlake-data-services-python/blob/main/docs/AuditEventApi.md&quot;&gt;documentation&lt;/a&gt; of this resource. Take the sample code and replace the BEARER_TOKEN with the access token. Generate the access token as mentioned in this &lt;a href=&quot;https://developer.hpe.com/blog/oauth2-for-hpe-greenlake-data-services-cloud-console/&quot;&gt;blog&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Save the file as GetAudits.py&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import time
import greenlake_data_services
from pprint import pprint
from greenlake_data_services.api import audit_api
from greenlake_data_services.model.audit_bad_request import AuditBadRequest
from greenlake_data_services.model.audit_internal_server_error import AuditInternalServerError
from greenlake_data_services.model.audit_results import AuditResults
from greenlake_data_services.model.audit_service_unavailable import AuditServiceUnavailable
from greenlake_data_services.model.audit_user_forbidden import AuditUserForbidden
from requests_oauthlib import OAuth2Session
from requests.auth import HTTPBasicAuth
from oauthlib.oauth2 import BackendApplicationClient

CLIENT_SECRET = &quot;36f6d0fa92ea11ecae650a4cf4dda9cf&quot;
CLIENT_ID = &quot;33a7fd43-4d63-41d5-8a10-1494eb5430c9&quot;
client = BackendApplicationClient(CLIENT_ID)
oauth = OAuth2Session(client=client)
auth = HTTPBasicAuth(CLIENT_ID, CLIENT_SECRET)
token = oauth.fetch_token(token_url=&apos;https://sso.common.cloud.hpe.com/as/token.oauth2&apos;, auth=auth)
access_token = token[&quot;access_token&quot;]
# The client must configure the authentication and authorization parameters
# in accordance with the API server security policy.
# Examples for each auth method are provided below, use the example that
# satisfies your auth use case.

# Configure Bearer authorization (JWT): JWTAuth
configuration = greenlake_data_services.Configuration(
    access_token = access_token,
    host = &quot;https://us1.data.cloud.hpe.com&quot;
)

# Enter a context with an instance of the API client
with greenlake_data_services.ApiClient(configuration) as api_client:
    # Create an instance of the API class
    api_instance = audit_api.AuditApi(api_client)
    filter = &quot;filter_example&quot; # str | Filter criteria - e.g. state eq Failure and occurredAt gt 2020-09-08T16:51:33Z (optional)
    limit = 1 # int | The number of results to return (optional)
    offset = 1 # int | The number of results to skip (optional)
    sort = &quot;sort_example&quot; # str | A comma separated list of properties to sort by, followed by a direction  indicator (\&quot;asc\&quot; or \&quot;desc\&quot;). If no direction indicator is specified the  default order is ascending. - e.g. state,version desc. Currently only support sorting by 1 property per request (optional)
    select = &quot;select_example&quot; # str | A list of properties to include in the response. Currently only support returning of all fields. (optional)

    try:
        # GET audit-events
        api_response = api_instance.audit_events_get(limit=limit, offset=offset)  # filter, sort and select can be passed as well
        pprint(api_response)
    except greenlake_data_services.ApiException as e:
        print(&quot;Exception when calling AuditApi-&gt;audit_events_get: %s\n&quot; % e)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then run the script:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ python GetAudits.py
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The output of the script is in JSON format. For reference, there are example scripts available for all the resources listed in the Data Services Cloud Console API spec. Check them out in the SDK and customize them for your use case.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{&apos;items&apos;: [{&apos;associated_resource&apos;: {&apos;id&apos;: &apos;/api/v1/tasks/3b0139a1-478b-4a24-9811-9a1e072b5744&apos;,
                                    &apos;name&apos;: &apos;Delete [Vol-01.1]&apos;,
                                    &apos;type&apos;: &apos;tasks&apos;},
            &apos;code&apos;: &apos;&apos;,
            &apos;context_id&apos;: &apos;18337cba-c12f-4c16-a1c2-755471de8ed1&apos;,
            &apos;customer_id&apos;: &apos;eb00678a466b11ec94d66ec0ab988305&apos;,
            &apos;id&apos;: &apos;9794158b-fd2e-4cb6-bbf6-c9df3867d035&apos;,
            &apos;loggedAt&apos;: datetime.datetime(2022, 6, 8, 10, 12, 31, tzinfo=tzutc()),
            &apos;message&apos;: &apos;Parent Task : Delete [Vol-01.1] - Completed&apos;,
            &apos;occurred_at&apos;: &apos;2022-06-08T10:12:31Z&apos;,
            &apos;permission&apos;: &apos;&apos;,
            &apos;scope&apos;: &apos;&apos;,
            &apos;source&apos;: &apos;/api/v1/storage-systems/device-type1/2M29510B8L/volumes/0c24e55f12e5609ca0e2de527dfa8426&apos;,
            &apos;source_ip_address&apos;: &apos;fleet-gql-data-graph:4000&apos;,
            &apos;state&apos;: &apos;Success&apos;,
            &apos;task_id&apos;: &apos;&apos;,
            &apos;unique_id&apos;: &apos;audit.events+2+24511&apos;,
            &apos;user_email&apos;: &apos;anusha.y@hpe.com&apos;,
            &apos;version&apos;: 1}],
 &apos;page_limit&apos;: 1,
 &apos;page_offset&apos;: 1,
 &apos;total&apos;: 6978}
&lt;/code&gt;&lt;/pre&gt;
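&lt;p&gt;Because the response exposes &lt;code&gt;items&lt;/code&gt;, &lt;code&gt;page_limit&lt;/code&gt;, &lt;code&gt;page_offset&lt;/code&gt;, and &lt;code&gt;total&lt;/code&gt;, you can page through the complete audit trail by repeating the call and advancing the offset. The following is a minimal sketch based on the same script; the page size of 50 is arbitrary, and it assumes the generated model supports the dictionary-style access suggested by the output above:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import greenlake_data_services
from greenlake_data_services.api import audit_api
from requests_oauthlib import OAuth2Session
from requests.auth import HTTPBasicAuth
from oauthlib.oauth2 import BackendApplicationClient

CLIENT_ID = &quot;your-client-id&quot;          # replace with your client ID
CLIENT_SECRET = &quot;your-client-secret&quot;  # replace with your client secret

# Same OAuth2 client-credentials flow as in GetAudits.py above
client = BackendApplicationClient(CLIENT_ID)
oauth = OAuth2Session(client=client)
auth = HTTPBasicAuth(CLIENT_ID, CLIENT_SECRET)
token = oauth.fetch_token(token_url=&quot;https://sso.common.cloud.hpe.com/as/token.oauth2&quot;, auth=auth)

configuration = greenlake_data_services.Configuration(
    access_token=token[&quot;access_token&quot;],
    host=&quot;https://us1.data.cloud.hpe.com&quot;
)

page_size = 50  # arbitrary page size
offset = 0

with greenlake_data_services.ApiClient(configuration) as api_client:
    api_instance = audit_api.AuditApi(api_client)
    while True:
        # Fetch one page of audit events
        page = api_instance.audit_events_get(limit=page_size, offset=offset)
        for event in page[&quot;items&quot;]:
            print(event[&quot;occurred_at&quot;], event[&quot;state&quot;], event[&quot;user_email&quot;])
        # Stop after the last page has been processed
        if offset + page_size &gt;= page[&quot;total&quot;]:
            break
        offset += page_size
&lt;/code&gt;&lt;/pre&gt;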
&lt;p&gt;&lt;strong&gt;Next steps&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Now that you have access to the Python SDK for HPE GreenLake Data Services Cloud Console, use it to create automation for any use-case that requires the use of Data Services Cloud Console APIs, right from your console. In &lt;a href=&quot;https://developer.hpe.com/blog/automating-operations-on-dscc-using-ansible-playbooks/&quot;&gt;my next blog&lt;/a&gt;, I talk about how to use Ansible playbooks to achieve your automation goals for Data Services Cloud Console. Stay tuned!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Announcing Chapel 1.30.0!]]></title><description><![CDATA[E﻿xternal Blog]]></description><link>https://developer.hpe.com/announcing-chapel-1-30-0/</link><guid isPermaLink="false">https://developer.hpe.com/announcing-chapel-1-30-0/</guid><pubDate>Fri, 24 Mar 2023 00:29:50 GMT</pubDate><content:encoded>&lt;p&gt;E﻿xternal Blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Bare metal provisioning on HPE GreenLake using Terraform]]></title><description><![CDATA[Introduction The HPE GreenLake for Private Cloud Enterprise: Bare Metal service offering enables you to create dedicated compute instances…]]></description><link>https://developer.hpe.com/bare-metal-provisioning-on-hpe-greenlake-using-terraform/</link><guid isPermaLink="false">https://developer.hpe.com/bare-metal-provisioning-on-hpe-greenlake-using-terraform/</guid><pubDate>Mon, 20 Mar 2023 18:30:57 GMT</pubDate><content:encoded>&lt;style&gt;
li {
    font-size: 27px;
    line-height: 33px;
    max-width: none;
}
&lt;/style&gt;
&lt;h1&gt;Introduction&lt;/h1&gt;
&lt;p&gt;The HPE GreenLake for Private Cloud Enterprise: Bare Metal service offering enables you to create dedicated compute instances deployed on a physical IT infrastructure, facilitating on-demand scalability, convenience, and agility as a cloud service.&lt;/p&gt;
&lt;p&gt;Using the bare metal service, you can create compute instances provisioned with specific operating systems, network connections, one or more public SSH keys, and optional network-attached storage volumes.
The service can be accessed via a GUI as well as via public APIs, enabling developers to use an Infrastructure-as-Code tool to build, change, and manage infrastructure in a consistent and repeatable way.&lt;/p&gt;
&lt;p&gt;The HPE GreenLake Terraform provider &lt;strong&gt;hpegl&lt;/strong&gt; provides Infrastructure-as-Code support for HPE GreenLake Cloud Services.
Using the &lt;strong&gt;hpegl&lt;/strong&gt; Terraform provider, you can automate the management of your infrastructure. You can provision an OS on bare metal, spin up virtual machines, and bring up a Kubernetes cluster, starting right from bare metal all the way up the stack to your desired configurations and applications.&lt;/p&gt;
&lt;p&gt;In this blog post, I will walk you through the steps required to use the HPE GreenLake Terraform Provider to deploy and further manage bare metal compute instances.&lt;/p&gt;
&lt;h1&gt;Preparing an Infrastructure-as-Code implementation&lt;/h1&gt;
&lt;h2&gt;Setting up an API client for access&lt;/h2&gt;
&lt;p&gt;You need an API client to authenticate against HPE GreenLake.&lt;/p&gt;
&lt;p&gt;Follow the steps below to create an API client:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;From the HPE GreenLake platform, launch the HPE GreenLake Central console for the appropriate tenant. Under the settings icon on the tenant Dashboard page, select the &lt;strong&gt;User Management&lt;/strong&gt; option.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlake_console_usermanagement.png&quot; alt=&quot;User Management&quot; title=&quot;User Management&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Under the API Clients tab, click on &lt;strong&gt;Create API Client&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlake_console_createapiclient.png&quot; alt=&quot;Create API Client&quot; title=&quot;Create API Client&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Enter a &lt;strong&gt;Name&lt;/strong&gt; (mandatory field) and &lt;strong&gt;Description&lt;/strong&gt; (optional) for the API client, and click on &lt;strong&gt;Create&lt;/strong&gt; button.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlake_conolse_apiclient_create.png&quot; alt=&quot;Create API Client&quot; title=&quot;Create API Client&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;Ensure you make a note of the &lt;strong&gt;Issuer&lt;/strong&gt;, &lt;strong&gt;Client ID&lt;/strong&gt;, and &lt;strong&gt;Client Secret&lt;/strong&gt; before clicking on the &lt;strong&gt;Close&lt;/strong&gt; button. These details will be exported as environment variables in the next section.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlake_conolse_apiclient_created.png&quot; alt=&quot;API Client Created&quot; title=&quot;API Client Created&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;5&quot;&gt;
&lt;li&gt;
&lt;p&gt;In the API Clients page, select the newly created client, and click on &lt;strong&gt;Create Assignment&lt;/strong&gt; button.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Assign the roles &lt;strong&gt;BMAAS Access Viewer&lt;/strong&gt; and &lt;strong&gt;BMAAS Access Project Contributor&lt;/strong&gt; on the &lt;strong&gt;Space: Default.&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlake_console_createbmaasassignment.png&quot; alt=&quot;BMaaS Roles&quot; title=&quot;BMaaS Roles&quot;&gt;&lt;/p&gt;
&lt;p&gt;The API client is now ready to be used to run the Terraform resources.&lt;/p&gt;
&lt;h2&gt;Select the Compute group ID&lt;/h2&gt;
&lt;p&gt;A compute group is a logical grouping of bare metal resources (compute instances, networks, SSH keys, etc.) that a team of cloud consumers can consume. You must specify the compute group ID to interact with the bare metal service resources. You can create a new compute group or select the existing one from the HPE GreenLake console. Make a note of the compute group ID because you need it to set a variable.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note: A compute group is also known as a project.&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Navigate to HPE GreenLake for Private Cloud Services card -&gt; Bare Metal -&gt; Compute Groups.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/compute_group_list.png&quot; alt=&quot;BMaaS Compute Groups&quot; title=&quot;BMaaS Compute Groups&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Click on the desired compute group and then extract the ID from the browser URL seen at that time.&lt;br&gt;
This will be exported in the environment variable &lt;strong&gt;HPEGL_METAL_PROJECT_ID&lt;/strong&gt; in a later section.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/compute_group_id.png&quot; alt=&quot;Compute group ID&quot; title=&quot;Compute group ID&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Install Terraform&lt;/h2&gt;
&lt;p&gt;Next, get your system ready to run Terraform. In case this has not been done yet:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Download and install Terraform, version v0.13 or later.&lt;br&gt;
For more information, see &lt;a href=&quot;https://learn.hashicorp.com/tutorials/terraform/install-cli&quot;&gt;https://learn.hashicorp.com/tutorials/terraform/install-cli&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Verify the installation with &lt;strong&gt;terraform -help.&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;At this point, you are ready to start building your infrastructure description file.&lt;/p&gt;
&lt;h1&gt;Deploy Compute Instance&lt;/h1&gt;
&lt;h2&gt;Select Terraform provider with bare metal service configurations&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Export the following environment variables on your setup.&lt;/p&gt;
&lt;p&gt;Export the Tenant ID:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;export HPEGL_TENANT_ID=&quot;&amp;#x3C;Tenant ID&gt;&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Export the API Client credentials that you obtained when you created an API Client within HPE GreenLake Central:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;export HPEGL_USER_ID=&quot;&amp;#x3C;API Client ID&gt;&quot;
export HPEGL_USER_SECRET=&quot;&amp;#x3C;API Client Secret&gt;&quot;
export HPEGL_IAM_SERVICE_URL=&quot;&amp;#x3C;Issuer URL&gt;&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Export the Compute group ID:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# compute group/project ID
export HPEGL_METAL_PROJECT_ID=&quot;&amp;#x3C;Compute group ID&gt;&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Export the bare metal service REST URL:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Production Environment:  https://client.greenlake.hpe.com/api/metal 
# Integration Environment: https://client.greenlake.hpe-gl-intg.com/api/metal
# local development: http://localhost:3002

export HPEGL_METAL_REST_URL=&quot;&amp;#x3C;Metal Service REST Base URL&gt;&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Configure the Terraform provider.&lt;/p&gt;
&lt;p&gt;Create an empty folder and put a file in it called &lt;strong&gt;main.tf&lt;/strong&gt; with the following contents:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;main.tf:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-hcl&quot;&gt;terraform {
  required_providers {
    hpegl = {
      source  = &quot;HPE/hpegl&quot;
      version = &quot;&gt;= 0.3.12&quot;
    }
  }
}

provider &quot;hpegl&quot; {    
  # metal block for configuring bare metal resources.
  metal {   
  } 
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This tells Terraform that you are going to be using HPE/hpegl as your provider and that you are using the bare metal service.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Write resource configuration for compute instance creation&lt;/h2&gt;
&lt;p&gt;To deploy a compute instance, you need to use the &lt;strong&gt;hpegl_metal_host&lt;/strong&gt; Terraform resource.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A compute instance is also known as a host.&lt;/li&gt;
&lt;li&gt;A compute instance type is also known as a machine size.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The &lt;strong&gt;hpegl_metal_host&lt;/strong&gt; resource supports many different arguments, but these are the required ones:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;networks – List of networks. The list must always include any networks marked as &apos;&lt;strong&gt;required&lt;/strong&gt;&apos; for that location.&lt;/li&gt;
&lt;li&gt;machine_size – Compute instance type.&lt;/li&gt;
&lt;li&gt;ssh – List of SSH keys to push to the host.&lt;/li&gt;
&lt;li&gt;location – The data center location where the compute instance will be provisioned.&lt;/li&gt;
&lt;li&gt;image – OS image in the form flavor@version.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You can also check the documentation &lt;a href=&quot;https://registry.terraform.io/providers/HPE/hpegl/latest/docs/resources/metal_host&quot;&gt;here&lt;/a&gt; to see all the required and optional fields.&lt;/p&gt;
&lt;p&gt;Your next step with the TF file is to query the HPE GreenLake provider to collect the required information listed above for creating a host. For this, you will use Terraform data statements.&lt;/p&gt;
&lt;h3&gt;Querying for available OS images&lt;/h3&gt;
&lt;p&gt;In order to list the available OS images, add the below data statement to your Terraform file &lt;strong&gt;main.tf&lt;/strong&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-hcl&quot;&gt;data &quot;hpegl_metal_available_images&quot; &quot;ubuntu&quot; { 
  # select anything that looks like ubuntu:20.04
  filter {
    name   = &quot;flavor&quot;
    values = [&quot;(?i)ubuntu&quot;] 
  }

  filter {
    name   = &quot;version&quot;
    values = [&quot;20.04*&quot;] 
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The image string can then be composed from the first matching image using the following statements in &lt;strong&gt;main.tf&lt;/strong&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-hcl&quot;&gt;locals {
  ubuntu_image = format(&quot;%s@%s&quot;, data.hpegl_metal_available_images.ubuntu.images[0].flavor,
                          data.hpegl_metal_available_images.ubuntu.images[0].version)
}
&lt;/code&gt;&lt;/pre&gt;
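&lt;p&gt;If you want to verify which image the filters resolved to, one option is to expose the value as a Terraform output; a small sketch (the output name is arbitrary):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-hcl&quot;&gt;# Prints the selected image (e.g. ubuntu@20.04-20210713) after terraform apply or terraform output
output &quot;selected_ubuntu_image&quot; {
  value = local.ubuntu_image
}
&lt;/code&gt;&lt;/pre&gt;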
&lt;h3&gt;Querying for other available resources&lt;/h3&gt;
&lt;p&gt;For this, you should use &lt;strong&gt;hpegl_metal_available_resources&lt;/strong&gt; data resource. For example, the following statements show how to retrieve and store the available SSH keys in a local variable.&lt;/p&gt;
&lt;p&gt;Append to main.tf:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-hcl&quot;&gt;# query available resources.
data &quot;hpegl_metal_available_resources&quot; &quot;available&quot; {
}

# using one of the available SSH keys.
locals  {
  ssh_keys = data.hpegl_metal_available_resources.available.ssh_keys[0].name
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Using a similar technique, you can retrieve the rest of the data you need - networks, machine size, etc.&lt;/p&gt;
&lt;p&gt;Append to main.tf:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-hcl&quot;&gt;# choosing a location that has at least one machine available.
locals {
  location = ([for msize in data.hpegl_metal_available_resources.available.machine_sizes : msize.location 
                    if msize.quantity &gt; 0])[0]
}

# Listing required networks for this location.
locals {
  networks = ([for net in data.hpegl_metal_available_resources.available.networks : net.name 
                  if net.host_use == &quot;Required&quot;  &amp;#x26;&amp;#x26; net.location == local.location])
}

# choosing machine size/Compute Instance Type to deploy OS.
locals {
  machine_size = ([for msize in data.hpegl_metal_available_resources.available.machine_sizes : msize.name 
                    if msize.location == local.location])
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;a href=&quot;https://registry.terraform.io/providers/HPE/hpegl/latest/docs/data-sources/metal_available_resources&quot;&gt;Here&lt;/a&gt; you can get information about each of the bare metal data statements supported by the &lt;strong&gt;hpegl&lt;/strong&gt; provider.&lt;/p&gt;
&lt;h3&gt;Create Compute instance&lt;/h3&gt;
&lt;p&gt;The last step is for you to define a &lt;strong&gt;hpegl_metal_host&lt;/strong&gt; terraform resource in the file to request a new compute instance (host):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-hcl&quot;&gt;resource &quot;hpegl_metal_host&quot; &quot;demo_host&quot; {
  name          = &quot;demo-host-1&quot;
  image         = local.ubuntu_image
  machine_size  = local.machine_size[0]
  ssh           = [ local.ssh_keys ]
  networks      = local.networks
  location      = local.location
  description   = &quot;Simple Host Deployment Demo&quot;
}
&lt;/code&gt;&lt;/pre&gt;
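&lt;p&gt;Optionally, you can echo back a few attributes of the new host after the apply by declaring outputs; a minimal sketch (the output name is arbitrary):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-hcl&quot;&gt;# Summarizes the compute instance requested above
output &quot;demo_host_summary&quot; {
  value = {
    name         = hpegl_metal_host.demo_host.name
    image        = hpegl_metal_host.demo_host.image
    machine_size = hpegl_metal_host.demo_host.machine_size
    location     = hpegl_metal_host.demo_host.location
  }
}
&lt;/code&gt;&lt;/pre&gt;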
&lt;h4&gt;Initialize Terraform&lt;/h4&gt;
&lt;p&gt;Before you can use Terraform, you will have to initialize it from the configuration file you have created. In the same directory as the &lt;strong&gt;main.tf&lt;/strong&gt; file, run the command: &lt;strong&gt;terraform init&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/bmaas_terraform_init.png&quot; alt=&quot;terraform init&quot; title=&quot;terraform init&quot;&gt;&lt;/p&gt;
&lt;h4&gt;Validate and View the Terraform Execution Plan&lt;/h4&gt;
&lt;p&gt;Terraform plan is a dry run that lets you preview the changes that Terraform plans to make to your infrastructure based on the data you provide in your Terraform file. To see this, run the command: &lt;strong&gt;terraform plan&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/bmaas_terraform_plan.png&quot; alt=&quot;terraform plan&quot; title=&quot;terraform plan&quot;&gt;&lt;/p&gt;
&lt;h4&gt;Apply the Terraform Execution Plan&lt;/h4&gt;
&lt;p&gt;The command you need to use is now: &lt;strong&gt;terraform apply&lt;/strong&gt;. This will rerun the plan command, then prompt you to confirm before it starts applying what’s in the plan:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/bmaas_terraform_apply_1.png&quot; alt=&quot;terraform apply&quot; title=&quot;terraform apply&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/bmaas_terraform_apply_2.png&quot; alt=&quot;terraform apply&quot; title=&quot;terraform apply&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Advanced Example&lt;/h2&gt;
&lt;p&gt;The above example shows how to deploy a compute instance from pre-existing resources. Below is another code sample demonstrating compute instance deployment using dynamic resources and additional optional configurations with &lt;strong&gt;hpegl_metal_host.&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-hcl&quot;&gt;terraform {
  required_providers {
    hpegl = {
      source  = &quot;HPE/hpegl&quot;
      version = &quot;&gt;= 0.3.12&quot;
    }
  }
}

provider &quot;hpegl&quot; {    
  # metal block for configuring bare metal resources.
  metal {   
  } 
}

locals {
  location = &quot;USA:CO:FTC&quot;
}

resource &quot;hpegl_metal_ssh_key&quot; &quot;newssh_1&quot; {
  name       = &quot;newssh_1&quot;
  public_key = &quot;ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCv03o//GEQ9/6eI1qZleyBbSndg0n5AkcKVnf5D4fEjwkWrtSIJEnROqJddEAn2XYALAk9x1AcB4Nue3q4tDG17VeK3ODo0+9Dx0LYqUTawnFWmo4X80QKr658Jmt7Enmnk5x2IrUDcNwAzALVellkBbwq7QbYUu1swSycNlNhSfGizqo/lQCNIHXyeRQ8oJxOuZkbiturXHZL389blIrTeUo53xmwE1TolVS8QzZRN8ve1GjFvpC5dl6orzi6LXDcrDcbZaxlrW+YQqyaipFRAw1DyTalrfpqxtq/Y9+Elz5xgCnUaepHN6ha/k81wtI2rySHga6pMOcJKlxaRS5OfzdrWh7oi2tEAaiq2y3pTr9hROQ2OGcMNU5gxbVU2ymeXdHVsAHMCmyKvQe0g0/fJzmNA/excogFCWDN7Spy9s2V39IbEKttyXjD/dpave7re9eFzYHA1CBEnNjMuvJj0H4tnpAETdQ6UbnjbE4JYn5eKGvnJ2w1JTfSdMK8nMcxqo4HfHWuLFuntCV9GAlWIVIvJn1pYisY8kEOtN5w6QrLTfsei96/TfssAsfhrDrVtgcgNU3EvZlC6Uaaly7D0ISFeufsxkPswu+jGNUJvGEqDiqvt05lSEZWS5viR/TOROTlicaGN9dhez/fqHcj5cnuoK1pmibK5GT7/Yf1Gw== user1@quattronetworks.com&quot;
}

resource &quot;hpegl_metal_network&quot; &quot;newpnet_1&quot; {
  name        = &quot;newpnet_1&quot;
  description = &quot;New private network 1 description&quot;
  location    = local.location
  ip_pool {
    name          = &quot;npool&quot;
    description   = &quot;New IP pool description&quot;
    ip_ver        = &quot;IPv4&quot;
    base_ip       = &quot;10.0.0.0&quot;
    netmask       = &quot;/24&quot;
    default_route = &quot;10.0.0.1&quot;
    sources {
      base_ip = &quot;10.0.0.3&quot;
      count   = 10
    }
    dns      = [&quot;10.0.0.50&quot;]
    proxy    = &quot;https://10.0.0.60&quot;
    no_proxy = &quot;10.0.0.5&quot;
    ntp      = [&quot;10.0.0.80&quot;]
  }
}


resource &quot;hpegl_metal_host&quot; &quot;demo_advance&quot; {
  count = 0
  name             = &quot;demo-advance-1&quot;
  image            = &quot;ubuntu@20.04-20210713&quot;
  machine_size     = &quot;G2i&quot;
  ssh              = [hpegl_metal_ssh_key.newssh_1.id]
  networks         = [&quot;Public&quot;, hpegl_metal_network.newpnet_1.name]
  network_route    = &quot;Public&quot;
  network_untagged = hpegl_metal_network.newpnet_1.name
  location         = local.location
  description      = &quot;Hello from Terraform&quot;
  # Attaching tags 
  labels           = { &quot;purpose&quot; = &quot;devops&quot; }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Cleaning up resources&lt;/h2&gt;
&lt;p&gt;When you no longer need the resources created via Terraform, destroy the resources using the &lt;strong&gt;terraform destroy&lt;/strong&gt; command.  This will automatically use the HPE GreenLake provider to clean the infrastructure in HPE GreenLake.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/bmaas_terraform_cleanup.png&quot; alt=&quot;terraform destroy&quot; title=&quot;terraform destroy&quot;&gt;&lt;/p&gt;
&lt;h1&gt;Summary&lt;/h1&gt;
&lt;p&gt;In this blog, I covered how to provision a compute instance with the Terraform provider for HPE GreenLake using bare metal resources. I also showed you advanced usage of hpegl resource statements to deploy a compute instance with dynamic resources.&lt;br&gt;
I hope you found this information interesting and helpful in getting started with the HPE GreenLake Terraform provider. You can also go through the links below to learn more about it.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/blog/kubernetes-clusters-as-code-part1/&quot;&gt;Kubernetes Cluster as Code – Part 1&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/blog/kubernetes-cluster-as-code-part-2/&quot;&gt;Kubernetes Cluster as Code – Part 2&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://registry.terraform.io/providers/hpe/hpegl/latest&quot;&gt;Learn more about the HPE GreenLake Terraform provider&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Building Custom Docker Images  for EMR Serverless]]></title><description><![CDATA[EMR serverless is the latest addition from the AWS to offer out of box support for Elastic MapReduce paradigm with auto scaling and pay as…]]></description><link>https://developer.hpe.com/custom-docker-images-for-emr-serverless/</link><guid isPermaLink="false">https://developer.hpe.com/custom-docker-images-for-emr-serverless/</guid><pubDate>Thu, 09 Mar 2023 11:10:16 GMT</pubDate><content:encoded>&lt;p&gt;EMR Serverless is the latest addition from AWS to offer out-of-the-box support for the Elastic MapReduce paradigm with auto scaling and a pay-as-you-go model.&lt;/p&gt;
&lt;p&gt;One often needs to build custom images for EMR Serverless since the application uses specialized libraries that don&apos;t come with the EMR Serverless base image. Typical examples are the requirements found with Delta Lake over S3 or specific Python modules, such as boto modules, database access libraries (e.g. pgcopy), etc. While there is the option to have them installed at run time (i.e. when the image is running in a container and a custom script is used to install them), in production settings it is encouraged to prebundle all the required libraries, modules, and jars. Dynamic installations are typically discouraged due to access permissions and internet connectivity issues.&lt;/p&gt;
&lt;p&gt;Further, getting prebundled assembly jars with all the dependencies is feasible in certain programming languages such as Scala/Java, but with the widely used Python the concept of creating assembly modules is missing. To address this, one option is to prebundle all Python-based modules/libraries that are required by the application code into custom images. We are going to show you a fairly quick and easy way to do this for AWS.&lt;/p&gt;
&lt;p&gt;In this article, we take two scenarios: one where the application expects the dependent jars to be in a specific location for its execution, as in the case of Delta Lake, and another where we install specific Python modules using pip install, both of which are bundled into a custom Docker image that EMR Serverless uses.&lt;/p&gt;
&lt;p&gt;Delta Lake (&lt;a href=&quot;https://delta.io&quot;&gt;https://delta.io&lt;/a&gt;) is an open-source storage framework that enables building a lake house architecture. One of the important features supported by Delta Lake is the checkpoint management for batch/streaming jobs that uses AWS S3 as a data lake. The most common approach in using Delta Lake is via the command line running the spark-submit command - something like &lt;code&gt;&quot;sparkSubmitParameters&quot;: &quot;--packages io.delta:delta-core_2.12:1.2.1&quot;&lt;/code&gt;. This dynamically pulls the Delta Lake (via maven project) when the spark jobs run (ref. &lt;a href=&quot;https://docs.delta.io/latest/quick-start.html#pyspark-shell&quot;&gt;https://docs.delta.io/latest/quick-start.html#pyspark-shell&lt;/a&gt; ). The other approach is to use &lt;strong&gt;pip install delta-spark==2.2.0&lt;/strong&gt; when Python is used.  However,  both these approaches in production might not be possible, as they require an active internet connection and necessary permissions on the production machine to download/install a library at run time in the EMR Serverless application.&lt;/p&gt;
&lt;p&gt;Another use case is the installation of all custom libraries (in the form of jars etc.) used by the code. In this article, we will provide you with step-by-step instructions on how to build custom images, addressing both of the above scenarios using the ECR to register our Docker image. We will also show you how to use the Docker image in the EMR Serverless.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Step 1:&lt;/em&gt;  Pre-requisites to build a custom Docker image&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Step 2:&lt;/em&gt;  Identify the base EMR Serverless Docker image, packages/jars that need to be installed/downloaded, and prepare the custom Docker image.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Step 3:&lt;/em&gt;  Push the Docker image to the ECR&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Step 4:&lt;/em&gt; Create the EMR Serverless applications by using one of the following approaches:&lt;/p&gt;
&lt;p&gt;(a) Via the AWS command line (AWS CLI)&lt;/p&gt;
&lt;p&gt;(b) Via EMR Serverless settings in the EMR management console&lt;/p&gt;
&lt;h2&gt;Step 1: Prerequisites to build a custom Docker image:&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The latest version of the Docker client&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The latest version of AWS CLI&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Verify the access to any dependent repository from where resources (modules, code etc) might be downloaded.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
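&lt;p&gt;For example, a quick way to confirm that both clients are installed and reasonably current (a minimal check):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Verify the Docker client and AWS CLI installations
docker --version
aws --version
&lt;/code&gt;&lt;/pre&gt;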
&lt;h2&gt;&lt;strong&gt;Step 2: Sample Docker file used to create custom EMR Serverless image:&lt;/strong&gt;&lt;/h2&gt;
&lt;pre&gt;&lt;code class=&quot;language-dockerfile&quot;&gt;# Refer AWS EMR documentation for the release version tag
FROM public.ecr.aws/emr-serverless/spark/emr-6.9.0:20221108

USER root

# Dependent JAR files
RUN curl -O https://repo1.maven.org/maven2/io/delta/delta-core_2.12/2.2.0/delta-core_2.12-2.2.0.jar
RUN curl -O https://repo1.maven.org/maven2/io/delta/delta-storage/2.2.0/delta-storage-2.2.0.jar

# The base emr-image sets WORKDIR to /home/hadoop, hence the JAR files will be downloaded under /home/hadoop.
# Then these jars will be copied to /usr/lib/spark/jars which was set as SPARK_HOME by the EMR base image.
RUN cp /home/hadoop/delta-core_2.12-2.2.0.jar /usr/lib/spark/jars/
RUN cp /home/hadoop/delta-storage-2.2.0.jar /usr/lib/spark/jars/

# EMRS will run the image as hadoop
USER hadoop:hadoop
&lt;/code&gt;&lt;/pre&gt;
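&lt;p&gt;The sample above covers the Delta Lake jar scenario. For the second scenario, prebundling Python modules, a similar Dockerfile could install the modules with pip while running as root; the following is only an illustrative sketch (the module list is hypothetical, and it assumes pip3 is available in the base image):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-dockerfile&quot;&gt;# Refer AWS EMR documentation for the release version tag
FROM public.ecr.aws/emr-serverless/spark/emr-6.9.0:20221108

USER root

# Prebundle the Python modules required by the application code (illustrative list)
RUN pip3 install boto3 pgcopy delta-spark==2.2.0

# EMRS will run the image as hadoop
USER hadoop:hadoop
&lt;/code&gt;&lt;/pre&gt;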
&lt;h2&gt;&lt;strong&gt;Step 3: Pushing the Docker image to the&lt;/strong&gt; Amazon Elastic Container Registry &lt;strong&gt;(AWS ECR):&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Command to authenticate Docker to an AWS ECR public registry assuming region as us-east-1. As the EMR Serverless base Docker image is present in the ECR public registry, run the command shown below before building your customized Docker file.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;aws ecr-public get-login-password --region us-east-1 --profile &amp;#x3C;profile_name_in_your_aws_credentials_file&gt; | docker login --username AWS --password-stdin public.ecr.aws
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Command to build Docker image:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;docker build -t local_docker_image_name:tag -f &amp;#x3C;docker_file_name&gt; .
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Command to tag locally built Docker image in order to push to the AWS ECR private registry:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;docker tag local_docker_image_name:tag aws_account_id.dkr.ecr.region.amazonaws.com/docker_image_name:tag
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Command to authenticate Docker to an AWS ECR private registry assuming region as us-east-1. Run the command shown below before pushing the Docker image to AWS ECR private registry.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;aws ecr get-login-password --region us-east-1 --profile &amp;#x3C;profile_name_in_your_aws_credentials_file&gt; | docker login --username AWS --password-stdin aws_account_id.dkr.ecr.region.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt; &lt;strong&gt;Note:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;   1. When the --profile option is not provided, credentials will be picked from the profile name  &lt;default&gt; in the ~/.aws/credentials file. If you are using access credentials for a different user, then include the profile section in ~/.aws/credentials files.&lt;/p&gt;
&lt;p&gt;2. Example content of ~/.aws/credentials file:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-ini&quot;&gt;[default]
aws_access_key_id =
aws_secret_access_key =

[testprofile]
aws_access_key_id =
aws_secret_access_key =
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Command to push Docker image to the  AWS ECR private registry:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;docker push aws_account_id.dkr.ecr.region.amazonaws.com/docker_image_name:tag
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;&lt;strong&gt;Step 4:  EMR Serverless using the custom Docker image created:&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;There are two approaches for EMR Serverless to use the custom image created:&lt;/p&gt;
&lt;p&gt;(a) via the  AWS CLI&lt;/p&gt;
&lt;p&gt;(b) Via EMR Serverless settings in the EMR management console&lt;/p&gt;
&lt;h3&gt;&lt;strong&gt;4a) Via the AWS CLI command to create simple EMR Serverless application using custom Docker image:&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;AWS CLI reference for EMR Serverless application management: &lt;a href=&quot;https://docs.aws.amazon.com/cli/latest/reference/emr-serverless/index.html&quot;&gt;https://docs.aws.amazon.com/cli/latest/reference/emr-serverless/index.html&lt;/a&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;aws --region &amp;#x3C;region&gt; emr-serverless create-application \
    --release-label emr-6.9.0 \
    --type &quot;SPARK&quot; \
    --name emr-application-1 \
    --image-configuration &apos;{ &quot;imageUri&quot;: &quot;&amp;#x3C;your AWS account ID&gt;.dkr.ecr.&amp;#x3C;region&gt;.amazonaws.com/&amp;#x3C;ecr_registry_name&gt;:&amp;#x3C;image_name&gt;&quot; }&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Image URI can be copied from the AWS ECR registry.&lt;/p&gt;
&lt;h3&gt;&lt;strong&gt;4b)&lt;/strong&gt; Via EMR Serverless settings in the EMR management console&lt;/h3&gt;
&lt;h5&gt;EMR Serverless custom image settings section in the AWS EMR Serverless management console:&lt;/h5&gt;
&lt;p&gt;&lt;img src=&quot;/img/customimagesettings-in-emrserverless.png&quot; alt=&quot;EMR Serverless Management Console Docker section&quot;&gt;&lt;/p&gt;
&lt;h5&gt;Browse and select custom EMR Serverless image from the AWS ECR private registry in the same region:&lt;/h5&gt;
&lt;p&gt;&lt;img src=&quot;/img/selectionofimagefromecr_registry.png&quot; alt=&quot;EMR Serverless Management Console to choose the Docker image&quot;&gt;&lt;/p&gt;
&lt;h3&gt;&lt;strong&gt;Summary:&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;This article provides the reader with mechanisms to build a custom Docker image where custom libraries, especially Python modules, need to be bundled and used in EMR Serverless. We used two use cases, Python modules and Delta Lake libraries, to show how one can build this Docker image. This should help all developers using Python as their language for EMR Serverless to pre-bundle the Python libraries for the production environment. If you found this blog post helpful, we recommend you read more on the topic by referencing the documentation below:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://aws.amazon.com/blogs/big-data/add-your-own-libraries-and-application-dependencies-to-spark-and-hive-on-amazon-emr-serverless-with-custom-images/&quot;&gt;Steps to build custom Docker image for EMR Serverless&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-pull-ecr-image.html&quot;&gt;How to pull Docker image from AWS ECR registry&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://docs.aws.amazon.com/AmazonECR/latest/userguide/registry_auth.html&quot;&gt;How to authorize AWS ECR registry with Docker client&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-ecr-image.html&quot;&gt;How to push Docker image to AWS ECS registry&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://docs.aws.amazon.com/cli/latest/reference/emr-serverless/index.html&quot;&gt;AWS EMR Serverless CLI reference&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://docs.aws.amazon.com/emr/latest/EMR-on-EKS-DevelopmentGuide/docker-custom-images-steps.html&quot;&gt;How to customize image for multi CPU architecture&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;&lt;strong&gt;About the Authors:&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;/img/niranjan_2.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;D R Niranjan is a Senior software/Cloud application developer at HPE, experienced in developing software and cloud applications for HPE Servers and Storage systems. He is technically skilled in Python, Java, Scala, Spark programming, AWS, Docker and Kubernetes, PgSQL/RDBMS. You can have a look at his LinkedIn profile here: &lt;a href=&quot;https://www.linkedin.com/in/niranjan-d-r/&quot;&gt;https://www.linkedin.com/in/niranjan-d-r/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/sagar-pic_2.jpg&quot; alt=&quot;&quot; title=&quot;sagar-nyamagouda@hpe.com&quot;&gt;&lt;/p&gt;
&lt;p&gt;Sagar Nyamagouda holds a  B.E(Information Science and Engineering) from BMS College of Engineering (BMSCE), Bengaluru and M.Tech in Software Systems from BITS Pilani. He is an experienced R&amp;#x26;D Engineer working on Big Data Technologies and building AI/ML pipelines to give real time insights to customers. An AI/ML enthusiast, he is currently working with HPE enabling advanced insights for Data Services in HPE GreenLake Cloud Platform. LinkedIn Profile: &lt;a href=&quot;http://www.linkedin.com/in/sagarny&quot;&gt;www.linkedin.com/in/sagarny&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/chirag_2.jpg&quot; alt=&quot;&quot; title=&quot;chirag.talreja@hpe.com&quot;&gt;&lt;/p&gt;
&lt;p&gt;A graduate from BITS Pilani, Pilani, Chirag is currently working as a cloud developer in HPE. He has over 6 years of experience in designing, developing, and implementing complex data processing pipelines. He is experienced in microservice architecture,  big data tech stack like Apache Spark , Spark Streaming , SQL/NoSQL databases, Kafka, Core Java, Scala, and Python and has a good understanding of the AWS platform. His LinkedIn profile is &lt;a href=&quot;https://www.linkedin.com/in/chiragtalreja29/&quot;&gt;https://www.linkedin.com/in/chiragtalreja29/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/chinmay_2jpg.jpg&quot; alt=&quot;&quot; title=&quot;chinmay.chaturvedi@hpe.com&quot;&gt;&lt;/p&gt;
&lt;p&gt;Chinmay currently works at HPE as a cloud engineer. He has expertise in various Big Data Processing technologies, including the building of ML/AI platforms, data engineering and processing (most recently related to Spark, AWS, SQL, etc.). Please view his LinkedIn profile here: &lt;a href=&quot;https://www.linkedin.com/in/chinmay-chaturvedi-707886138&quot;&gt;https://www.linkedin.com/in/chinmay-chaturvedi-707886138&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/whatsapp-image-2021-11-02-at-10.01.24-am-1-.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Kalapriya Kannan currently works with HPE on cloud enablement of storage analytics. She holds a Ph.D. from IISc. She has authored around 60 peer-reviewed international conference papers and over 100 disclosure submissions for patent filing, for which she has been granted 65 patents. Her interests are in the area of distributed and parallel systems, and she is currently working on processing big data for analytical insights. Her LinkedIn profile: &lt;a href=&quot;https://www.linkedin.com/in/kalapriya-kannan-0862b55b&quot;&gt;https://www.linkedin.com/in/kalapriya-kannan-0862b55b&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/download.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Roopali has 19 years of work experience in systems software development in the areas of operating systems, file systems, and cloud technologies. She has played a variety of roles, from developer to lead expert and product owner. In her current role, she is responsible for the functional delivery and people management of the Data Observability Analytics project.&lt;/p&gt;
&lt;h3&gt;What is data tiering and why do it?&lt;/h3&gt;
&lt;p&gt;Much of your data needs to be retained, either to meet regulatory requirements or because it still has value.
Data tiering lets data that is not accessed frequently, but still needs to be retained, be stored in a more resource-efficient and cost-effective manner.&lt;/p&gt;
&lt;p&gt;The most frequently accessed file data can be thought of as a &quot;hot&quot; data tier, which uses normal file storage.
Data used less often can be moved to low-cost storage alternatives in different ways, depending on the relative frequency of access.
Some data is rarely accessed or modified but needs to be archived for future projects, for verification purposes in audits, or to meet regulatory requirements. This &quot;cold&quot; data could be tiered to low-cost object storage in the same data storage system or in a remote storage system, such as remote object storage.&lt;/p&gt;
&lt;p&gt;In HPE Ezmeral Data Fabric, you can create a cold-tiered volume, set corresponding storage policies, and periodically offload the data in the volume to the remote object storage.
The remote object storage can be the object storage of AWS, GCP, Azure and other public clouds, or an object storage service compatible with Minio.
Of course, you can also use the Object Store of HPE Ezmeral Data Fabric as a remote target.
This article will demonstrate how to create a cold-tiered volume and configure another HPE Ezmeral Data Fabric Object Store as the remote target for offloading.
At the same time, I will also demonstrate how to create an account, IAM user, and bucket in the HPE Ezmeral Data Fabric Object Store, and use the AWS CLI to perform a &lt;strong&gt;put object operation&lt;/strong&gt; on this bucket through the above configuration.&lt;/p&gt;
&lt;h3&gt;&lt;a href=&quot;#advantages-of-using-data-tiering&quot;&gt;&lt;/a&gt;Advantages of using data tiering&lt;/h3&gt;
&lt;p&gt;HPE Ezmeral Data Fabric provides a rule-based automated tiering functionality that allows you to seamlessly integrate with:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Low-cost storage as an additional storage tier in the data fabric cluster for storing file data that is less frequently accessed (&quot;warm&quot; data) in an erasure-coded volume.&lt;/li&gt;
&lt;li&gt;3rd party cloud object storage as an additional storage tier in the data fabric cluster to store file data that is rarely accessed or archived (&quot;cold&quot; data).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In this way, valuable on-premise storage resources can be used for more active or &quot;hot&quot; file data and applications, while &quot;warm&quot; and/or &quot;cold&quot; file data can be retained at minimal cost for compliance, historical, or other business reasons. HPE Ezmeral Data Fabric provides consistent and simplified access to and management of the data.&lt;/p&gt;
&lt;h3&gt;&lt;a href=&quot;#advantages-of-using-object-store&quot;&gt;&lt;/a&gt;Advantages of using Object Store&lt;/h3&gt;
&lt;p&gt;HPE Ezmeral Data Fabric Object Store is a native object storage solution that efficiently stores objects and metadata for optimized access.&lt;/p&gt;
&lt;p&gt;Underlying each Object Store bucket is a volume. Every bucket created in an Object Store account is automatically associated with a volume. You can snapshot or mirror a bucket volume for disaster recovery.&lt;/p&gt;
&lt;p&gt;When you create an account in Object Store, you specify the erasure coding scheme (ecscheme) in the storage_class. All buckets created in the account inherit the ecscheme. Underlying volumes are automatically tiered in such a way that data in a bucket volume can be offloaded to a back-end volume to reclaim storage space.&lt;/p&gt;
&lt;p&gt;Some potential Object Store use cases include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Archive data and build on-premises applications, or migrate to cloud-native applications.&lt;/li&gt;
&lt;li&gt;Store media for operational use; reduce costs of storing globally distributed media, such as music, video, and images.&lt;/li&gt;
&lt;li&gt;Run analytics on data with tools like Apache Spark, Apache Drill, Presto, and S3 Select to gain valuable insights into customers, operations, or markets.&lt;/li&gt;
&lt;li&gt;Maintain Spark Delta Lake time travel information. You can time travel to see different versions of the data when Object Store is configured as a data lake for Spark Delta Lake.&lt;/li&gt;
&lt;li&gt;Store ML model data and share the ML models in real-time with downstream applications.&lt;/li&gt;
&lt;li&gt;Publish S3 events to HPE Ezmeral Data Fabric Streams.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;&lt;a href=&quot;#install-and-configure-object-store&quot;&gt;&lt;/a&gt;Install and configure Object Store&lt;/h2&gt;
&lt;p&gt;Regarding the installation of Object Store, I recommend you use the &lt;a href=&quot;https://docs.datafabric.hpe.com/72/MapRInstaller.html&quot;&gt;Installer&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;When using the Installer to install HPE Ezmeral Data Fabric 7.0.0 or higher, you must enable security.
Even for POC environments, I recommend you enable and configure basic security.&lt;/p&gt;
&lt;p&gt;If you have ever used the ecosystem components of Apache Hadoop, or other commercial big data suites, then you already have the basic concepts of authentication, authorization, auditing, and encryption in the Hadoop ecosystem.&lt;/p&gt;
&lt;p&gt;The security rationale for HPE Ezmeral Data Fabric is the same as in an open-source Hadoop ecosystem.
For example, SASL (MapR-SASL, Kerberos) is used for authentication, Ranger is used for authorization, and TLS is used for encryption.&lt;/p&gt;
&lt;p&gt;Using the Installer to install HPE Ezmeral Data Fabric automatically creates a series of TLS-related certificates and configures the core components and the various HPE Ezmeral Ecosystem Pack components with security enabled.&lt;/p&gt;
&lt;h3&gt;&lt;a href=&quot;#post-installation-configuration-for-object-store&quot;&gt;&lt;/a&gt;Post-installation configuration for Object Store&lt;/h3&gt;
&lt;p&gt;Some post-installation steps must be performed before you can use the HPE Ezmeral Data Fabric Object Store.
You should refer to this document - &lt;a href=&quot;https://docs.datafabric.hpe.com/72/AdvancedInstallation/Enabling_object_store.html&quot;&gt;Enabling the HPE Ezmeral Data Fabric Object Store&lt;/a&gt; for post-installation configuration.&lt;/p&gt;
&lt;p&gt;For the above document, I have a few supplementary notes, which should make your configuration smoother.&lt;/p&gt;
&lt;h4&gt;1. About &quot;keytool -noprompt -importcert&quot; command&lt;/h4&gt;
&lt;p&gt;Please do not use this 👇 command in the original document:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;${JAVA_HOME}/bin/keytool -noprompt -importcert -file /opt/mapr/conf/ca/chain-ca.pem -alias maprca -keystore ${JAVA_HOME}/lib/security/cacerts -storepass &amp;#x3C;store_password&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Instead, use the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;${JAVA_HOME}/bin/keytool -noprompt -importcert -file /opt/mapr/conf/ca/chain-ca.pem -alias maprca -cacerts -storepass &amp;#x3C;store_password&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you use the Installer to install HPE Ezmeral Data Fabric on a fresh OS, the Installer will automatically install JDK 11, and the &quot;-storepass&quot; password is &quot;changeit&quot;.&lt;/p&gt;
&lt;p&gt;There is another place in the documentation where keytool is used. The command is as follows:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;${JAVA_HOME}/bin/keytool -noprompt -importcert -file /opt/mapr/conf/ca/chain-ca.pem -alias mosscert -keystore ${JAVA_HOME}/lib/security/cacerts -storepass changeit
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You don&apos;t need to execute &lt;code&gt;keytool&lt;/code&gt; twice to import the same &lt;strong&gt;CA certificate&lt;/strong&gt; file.&lt;/p&gt;
&lt;p&gt;I suggest you change &quot;maprca&quot; to something more recognizable.&lt;/p&gt;
&lt;p&gt;For example, if you named the cluster &quot;edf-cluster-a.mycompany.com&quot; when you installed the HPE Ezmeral Data Fabric cluster, then you can use the following keytool command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;${JAVA_HOME}/bin/keytool -noprompt -importcert -file /opt/mapr/conf/ca/chain-ca.pem -alias edf-clustera-ca -keystore ${JAVA_HOME}/lib/security/cacerts -storepass &amp;#x3C;store_password&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The file /opt/mapr/conf/ca/chain-ca.pem is a self-signed TLS certificate file created by the Installer when configuring wire-level encryption for the cluster.&lt;/p&gt;
&lt;p&gt;Since it&apos;s a self-signed TLS certificate, the client (application, or your browser) will not be able to trust the TLS certificate of the server when accessing the HPE Ezmeral Data Fabric server.
This is because the CA certificate used by the self-signed TLS certificate is not publicly trusted.&lt;/p&gt;
&lt;p&gt;In any scenario where a self-signed TLS certificate is used, you need to import the self-signed CA certificate into the client&apos;s trust store (for example, the operating system or JVM trust store).&lt;/p&gt;
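&lt;p&gt;As an illustration, on a RHEL/CentOS-family node you could add the CA to the operating system trust store roughly like this (a minimal sketch; the exact commands depend on your distribution):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Copy the Data Fabric CA chain into the system trust anchors and refresh the trust store
sudo cp /opt/mapr/conf/ca/chain-ca.pem /etc/pki/ca-trust/source/anchors/edf-clustera-chain-ca.pem
sudo update-ca-trust extract
&lt;/code&gt;&lt;/pre&gt;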
&lt;p&gt;You can use the following command to list the CA certificates in your JVM&apos;s truststore:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;keytool -list -v -cacerts
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You should now see something like the following:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-dart&quot;&gt;Alias name: digicertassuredidrootca
Creation date: Feb 2, 2023
Entry type: trustedCertEntry

Owner: CN=DigiCert Assured ID Root CA, OU=www.digicert.com, O=DigiCert Inc, C=US
Issuer: CN=DigiCert Assured ID Root CA, OU=www.digicert.com, O=DigiCert Inc, C=US
...
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This is a recognized CA certificate issued by Digicert.&lt;/p&gt;
&lt;p&gt;After you use &lt;code&gt;keytool&lt;/code&gt; to import /opt/mapr/conf/ca/chain-ca.pem into the JVM, use keytool -list -v -cacerts to see something similar to the following:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-dart&quot;&gt;Alias name: edf-clustera-ca
Creation date: Feb 16, 2023
Entry type: trustedCertEntry

Owner: CN=MapR Engineering Signing CA, OU=MapR Engineering Signing CA, O=MapR, DC=hpecorp, DC=net
Issuer: CN=MapR Engineering Root CA, OU=MapR Engineering Root CA, O=MapR, DC=hpecorp, DC=net
...
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;2. About client software options&lt;/h4&gt;
&lt;p&gt;The documentation describes several client software that you can use to interact with the HPE Ezmeral Object Store. I would like to add the following clarification:&lt;/p&gt;
&lt;p&gt;When using the &lt;code&gt;mc&lt;/code&gt; command-line tool or the &lt;code&gt;s3cmd&lt;/code&gt; command-line tool to interact with HPE Ezmeral Data Fabric Object Store without completing the configuration of &quot;&lt;a href=&quot;https://docs.datafabric.hpe.com/72/AdvancedInstallation/Enabling_object_store.html#concept_isb_53h_5bb__section_nnp_kr2_bvb&quot;&gt;Enabling S3 Virtual-Host-Style Requests&lt;/a&gt;&quot;, some commands will not work properly.&lt;/p&gt;
&lt;p&gt;In this case, for management operations, such as creating accounts, IAM accounts, buckets, etc., I recommend you use the &lt;code&gt;mc&lt;/code&gt; command line tool.&lt;/p&gt;
&lt;p&gt;For object listing, getting, putting, and deleting operations, I recommend you use &lt;code&gt;AWS CLI&lt;/code&gt;.&lt;/p&gt;
&lt;h4&gt;3. Cannot revert to HTTP mode after enabling HTTPS&lt;/h4&gt;
&lt;p&gt;If the Object Store was installed using the Installer, the Object Store would also have security enabled, including HTTPS.&lt;/p&gt;
&lt;p&gt;For example, although you can see the description below in &quot;HTTP Access to Object Store&quot;, this configuration change alone does not switch Object Store to HTTP mode.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;To revert to http access, comment out the moss.certs.dir=/opt/mapr/conf line in the /opt/mapr/conf/moss.conf file.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;☝from here - &lt;a href=&quot;https://docs.datafabric.hpe.com/72/AdvancedInstallation/Enabling_object_store.html#concept_isb_53h_5bb__section_bg1_zd1_vsb&quot;&gt;HTTP Access to Object Store&lt;/a&gt;
Additionally, there is nothing else in HPE Ezmeral Data Fabric&apos;s documentation on how to modify Object Store&apos;s TLS mode.
You may be able to find out how to turn off HTTPS from this document. 👉 &lt;a href=&quot;https://docs.datafabric.hpe.com/72/MapROverview/Object-Store-signed-certs.html&quot;&gt;Using Custom Signed Certificates with Object Store&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;&lt;a href=&quot;#create-a-bucket-in-hpe-ezmeral-data-fabric-object-store-and-upload-some-objects&quot;&gt;&lt;/a&gt;Create a bucket in HPE Ezmeral Data Fabric Object Store and upload some objects&lt;/h3&gt;
&lt;p&gt;In order to create a bucket, you need to create an account, an IAM user, and finally a bucket in sequence.
While creating the IAM user and bucket, you also need to prepare access policies that control which IAM user can perform which operations (list, get, put, etc.) on which bucket.&lt;/p&gt;
&lt;p&gt;You can read the following document - &lt;a href=&quot;https://docs.datafabric.hpe.com/72/MapROverview/object_store_account_management.html&quot;&gt;Entities and Resources&lt;/a&gt; first to gain a deeper understanding of the entity model of HPE Ezmeral Data Fabric Object Store.&lt;/p&gt;
&lt;p&gt;Below, I will demonstrate how to create the above-required entities.&lt;/p&gt;
&lt;h4&gt;Create an account&lt;/h4&gt;
&lt;p&gt;I choose to use the &lt;a href=&quot;https://docs.datafabric.hpe.com/72/ReferenceGuide/mc-commands-overview.html&quot;&gt;mc&lt;/a&gt; command line tool to create an account.&lt;/p&gt;
&lt;p&gt;Before using the mc command line for the first time, you need to create an alias for your administrator.
An alias contains an access endpoint, such as &quot;&lt;a href=&quot;https://s3-us-west-1.amazonaws.com&quot;&gt;https://s3-us-west-1.amazonaws.com&lt;/a&gt;&quot;, which is an Amazon AWS S3 endpoint; another example is &quot;&lt;a href=&quot;http://10.10.88.198:9000&quot;&gt;http://10.10.88.198:9000&lt;/a&gt;&quot;, which is a Minio endpoint.
An alias also contains the access key and secret key used by your administrator or IAM User.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;1. First, use the &quot;&lt;a href=&quot;https://docs.datafabric.hpe.com/72/ReferenceGuide/mc-alias-list.html&quot;&gt;mc alias list&lt;/a&gt;&quot; command to view the default aliases on the system.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;❗Note: If you are using self-signed TLS certificates or installed the cluster via Installer, you have to copy &lt;ins&gt;/opt/mapr/conf/ca/chain-ca.pem&lt;/ins&gt; to &lt;ins&gt;~/.mc/certs/CAs/&lt;/ins&gt; on the node running &lt;code&gt;mc&lt;/code&gt;.
The reason for this step is the same as why you imported the self-signed CA into the JVM keystore earlier: &lt;code&gt;mc&lt;/code&gt; also needs the self-signed CA in order to communicate with the S3 server of the Object Store.&lt;/p&gt;
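&lt;p&gt;A minimal sketch of that copy step, assuming you run &lt;code&gt;mc&lt;/code&gt; as the mapr user, could look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Make sure the mc CA directory exists, then copy the cluster CA chain into it
sudo -u mapr mkdir -p ~mapr/.mc/certs/CAs/
sudo cp /opt/mapr/conf/ca/chain-ca.pem ~mapr/.mc/certs/CAs/
&lt;/code&gt;&lt;/pre&gt;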
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;sudo -E -u mapr /opt/mapr/bin/mc alias list
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Sample output:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;gcs
  URL       : https://storage.googleapis.com
  AccessKey : YOUR-ACCESS-KEY-HERE
  SecretKey : YOUR-SECRET-KEY-HERE
  API       : S3v2
  Path      : dns

local
  URL       : http://localhost:9000
  AccessKey :
  SecretKey :
  API       :
  Path      : auto

play
  URL       : https://play.min.io
  AccessKey : XXXXXXXXXXXXXXXXXXXX
  SecretKey : XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
  API       : S3v4
  Path      : auto

s3
  URL       : https://s3.amazonaws.com
  AccessKey : YOUR-ACCESS-KEY-HERE
  SecretKey : YOUR-SECRET-KEY-HERE
  API       : S3v4
  Path      : dns
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;2. Generate S3 keys to authenticate your administrator&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The cluster administrator (typically the mapr user) must authenticate to the Object Store cluster and generate S3 keys (accessKey and secretKey) on the default Object Store account.
Perform this operation before performing any CLI operations in Object Store.&lt;/p&gt;
&lt;p&gt;If the cluster is secure, use &lt;a href=&quot;https://docs.datafabric.hpe.com/72/SecurityGuide/ThemaprloginUtility.html&quot;&gt;maprlogin&lt;/a&gt; to authenticate the cluster administrator, and then generate the keys:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;maprcli s3keys generate -domainname primary -accountname default -username mapr -json
&lt;/code&gt;&lt;/pre&gt;
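&lt;p&gt;If you have not yet authenticated as the cluster administrator, a typical &lt;code&gt;maprlogin&lt;/code&gt; invocation (run it before the &lt;code&gt;s3keys generate&lt;/code&gt; command above; password-based authentication is assumed here) looks like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Obtain a ticket for the cluster administrator; you will be prompted for the password
sudo -u mapr maprlogin password
&lt;/code&gt;&lt;/pre&gt;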
&lt;p&gt;🗒Note: An Object Store cluster has a domain, accounts, buckets, users, and access policies associated with it.
Installing Object Store in a cluster provides a primary domain and a default account.&lt;/p&gt;
&lt;p&gt;Sample output:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &quot;timestamp&quot;:1676472096994,
  &quot;timeofday&quot;:&quot;2023-02-15 10:41:36.994 GMT+0800 PM&quot;,
  &quot;status&quot;:&quot;OK&quot;,
  &quot;total&quot;:1,
  &quot;data&quot;:[
          {
            &quot;accesskey&quot;:&quot;XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX&quot;,
            &quot;secretkey&quot;:&quot;XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX&quot;
          }
  ]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;🗒Note: If you encounter any problem when generating the S3 keys, refer to this page: &lt;a href=&quot;https://docs.datafabric.hpe.com/72/MapROverview/object-store-get-started.html#object-store-get-started__section_x4h_w13_4tb&quot;&gt;Generate S3 Keys to Authenticate Users and Applications&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;3. Use the &lt;code&gt;mc alias set&lt;/code&gt; command to create an alias for admin user&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;mc alias set s3-admin-alias https://`hostname -f`:9000 {ACCESS_KEY} {SECRET_KEY} --api &quot;s3v4&quot; --path &quot;off&quot; --json
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;🗒Note: &quot;s3-admin-alias&quot; is the value of the alias parameter; you define it yourself.
&quot;https://&lt;code&gt;hostname -f&lt;/code&gt;:9000&quot; is the endpoint of the Object Store service.
Here, I&apos;m running the command on the node that is running the S3server.
After creating an alias, you will find the information appended to &lt;ins&gt;$HOME/.mc/config.json&lt;/ins&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;4. Create an account&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I chose to use the Object Store Web GUI to create an account.&lt;/p&gt;
&lt;p&gt;Refer to this document - &lt;a href=&quot;https://docs.datafabric.hpe.com/72/MapROverview/create-account.html#create_account__section_fsm_1hn_nrb&quot;&gt;Using the Object Store Interface&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Create an account using Object Store Web GUI - 1
&lt;a href=&quot;https://ibb.co/SyW9MKK&quot;&gt;&lt;img src=&quot;https://i.ibb.co/XLNvKzz/Object-Store-Create-Account-1.png&quot; alt=&quot;Object-Store-Create-Account-1&quot; border=&quot;0&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Enter the following in &quot;Default Bucket Policy&quot;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &quot;Version&quot;: &quot;2012-10-17&quot;,
  &quot;Statement&quot;: [
      {
        &quot;Sid&quot;: &quot;GrantAdminPutPermissions&quot;,
        &quot;Effect&quot;: &quot;Allow&quot;,
        &quot;Principal&quot;: &quot;arn:primary:default:user:mapr&quot;,
        &quot;Action&quot;:&quot;s3:PutObject&quot;,
        &quot;Resource&quot;:&quot;arn:aws:s3:::${bucket}/*&quot;
      },
      {
        &quot;Sid&quot;:&quot;GrantAnonymousReadPermissions&quot;,
        &quot;Effect&quot;:&quot;Allow&quot;,
        &quot;Principal&quot;: &quot;*&quot;,
        &quot;Action&quot;:[&quot;s3:GetObject&quot;],
        &quot;Resource&quot;:[&quot;arn:aws:s3:::${bucket}/*&quot;]
      }
  ]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;HPE Ezmeral Data Fabric Object Store is an on-premises object storage service compatible with Minio.
Some concepts such as Domain and Default Account do not exist in public cloud object storage services such as AWS S3.
But the policy for bucket and IAM user is compatible with the policy in public cloud object storage.
For the Bucket Policy here, you can refer to AWS S3 &lt;a href=&quot;https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies.html&quot;&gt;Bucket policy examples&lt;/a&gt; and HPE Ezmeral Data Fabric Object Store document - &lt;a href=&quot;https://docs.datafabric.hpe.com/72/MapROverview/object-store-policies.html&quot;&gt;Access Policies&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;After creating the user account, you can use the below command to view it:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;sudo -u mapr /opt/mapr/bin/mc admin account list {ADMIN_ALIAS} domain=primary --json
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Sample output:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
 &quot;name&quot;: &quot;default&quot;,
 &quot;id&quot;: 0,
 &quot;admin&quot;: &quot;mapr&quot;,
 &quot;labelname&quot;: &quot;default&quot;,
 &quot;minrepl&quot;: 2,
 &quot;desiredrepl&quot;: 3,
 &quot;usercount&quot;: 2
}
{
  // # 👇 The account you just created.
 &quot;name&quot;: &quot;s3test&quot;,
 &quot;id&quot;: 1,
 &quot;admin&quot;: &quot;mapr&quot;,
 &quot;def_bucket_policy&quot;: {
  &quot;Version&quot;: &quot;2012-10-17&quot;,
  &quot;Statement&quot;: [
   {
    &quot;Sid&quot;: &quot;GrantAdminPutPermissions&quot;,
    &quot;Effect&quot;: &quot;Allow&quot;,
    &quot;Principal&quot;: {
     &quot;AWS&quot;: [
      &quot;arn:primary:default:user:mapr&quot;
     ]
    },
    &quot;Action&quot;: [
     &quot;s3:PutObject&quot;
    ],
    &quot;Resource&quot;: [
     &quot;arn:aws:s3:::${bucket}/*&quot;
    ]
   },
   {
    &quot;Sid&quot;: &quot;GrantAnonymousReadPermissions&quot;,
    &quot;Effect&quot;: &quot;Allow&quot;,
    &quot;Principal&quot;: {
     &quot;AWS&quot;: [
      &quot;*&quot;
     ]
    },
    &quot;Action&quot;: [
     &quot;s3:GetObject&quot;
    ],
    &quot;Resource&quot;: [
     &quot;arn:aws:s3:::${bucket}/*&quot;
    ]
   }
  ]
 },
 &quot;size&quot;: 22871,
 &quot;labelname&quot;: &quot;default&quot;,
 &quot;topology&quot;: &quot;/data/default-rack&quot;,
 &quot;minrepl&quot;: 1,
 &quot;desiredrepl&quot;: 1,
 &quot;usercount&quot;: 1,
 &quot;bucketcount&quot;: 1
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Create an IAM User in the non-default account just created&lt;/h4&gt;
&lt;p&gt;In step 4, you created a non-default account named &quot;s3test&quot;.
In HPE Ezmeral Data Fabric Object Store, you must create a non-default account in order to create an IAM user, and you should use the IAM user to operate on buckets.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;sudo -u mapr /opt/mapr/bin/mc admin user add s3-admin-alias s3-test-iam_user account=s3test domain=primary
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;🗒Note: &quot;s3-admin-alias&quot; is the admin alias you created in step-3, and &quot;s3-test-iam_user&quot; is the IAM User name.
For more information, refer to: &lt;a href=&quot;https://docs.datafabric.hpe.com/72/MapROverview/create-IAM-user.html&quot;&gt;Create IAM Users&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Next, you create an IAM policy for the IAM User - s3-test-iam_user.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;cat &amp;#x3C;&amp;#x3C;&apos;EOF&apos; &gt; ./PolicyPublicRead.json
{
    &quot;Version&quot;:&quot;2012-10-17&quot;,
    &quot;Statement&quot;: [
        {
            &quot;Sid&quot;:&quot;GrantAnonymousReadPermissions&quot;,
            &quot;Effect&quot;:&quot;Allow&quot;,
            &quot;Principal&quot;: &quot;*&quot;,
            &quot;Action&quot;:[&quot;s3:GetObject&quot;],
            &quot;Resource&quot;:[&quot;arn:aws:s3:::${bucket}/*&quot;]
        }
    ]
}
EOF
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;mc admin policy add s3-admin-alias PolicyPublicRead ./PolicyPublicRead.json account=s3test domain=primary
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;🗒Note: &quot;PolicyPublicRead&quot; is the IAM Policy&apos;s name.&lt;/p&gt;
&lt;p&gt;You can also use the Object Store Web GUI to create the IAM Policy, like shown in the following screenshot👇.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://ibb.co/xsH1nYH&quot;&gt;&lt;img src=&quot;https://i.ibb.co/n0C7XBC/Object-Store-Create-IAMPolicy-1.png&quot; alt=&quot;Object-Store-Create-IAMPolicy-1&quot; border=&quot;0&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Let&apos;s create another IAM policy named &quot;GrantBucketOperations&quot;👇.
You will associate these 2 IAM Policies to the IAM User - &quot;s3-test-iam_user&quot; later.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
    &quot;Version&quot;: &quot;2012-10-17&quot;,
    &quot;Statement&quot;:
    [
        {
            &quot;Effect&quot;: &quot;Allow&quot;,
            &quot;Action&quot;:
            [
                &quot;s3:ListAllMyBuckets&quot;,
                &quot;s3:CreateBucket&quot;,
                &quot;s3:GetBucketLocation&quot;
            ],
            &quot;Resource&quot;:
            [
                &quot;arn:aws:s3:::*&quot;
            ]
        },
        {
            &quot;Effect&quot;: &quot;Allow&quot;,
            &quot;Action&quot;:
            [
                &quot;s3:ListBucket&quot;,
                &quot;s3:GetBucketLocation&quot;
            ],
            &quot;Resource&quot;: &quot;arn:aws:s3:::s3-test-iam-user-bucket/*&quot;
        },
        {
            &quot;Effect&quot;: &quot;Allow&quot;,
            &quot;Action&quot;:
            [
                &quot;s3:PutObject&quot;,
                &quot;s3:PutObjectAcl&quot;,
                &quot;s3:GetObject&quot;,
                &quot;s3:GetObjectAcl&quot;,
                &quot;s3:DeleteObject&quot;
            ],
            &quot;Resource&quot;: &quot;arn:aws:s3:::s3-test-iam-user-bucket/*&quot;
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;
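&lt;p&gt;Assuming you save the JSON above as &lt;ins&gt;./GrantBucketOperations.json&lt;/ins&gt;, you can register this second policy with the same &lt;code&gt;mc admin policy add&lt;/code&gt; syntax used earlier:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;sudo -u mapr /opt/mapr/bin/mc admin policy add s3-admin-alias GrantBucketOperations ./GrantBucketOperations.json account=s3test domain=primary
&lt;/code&gt;&lt;/pre&gt;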
&lt;p&gt;To associate an IAM Policy to an IAM User:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;sudo -u mapr /opt/mapr/bin/mc admin policy set s3-admin-alias PolicyPublicRead users=&apos;s3-test-iam_user&apos; account=&apos;s3test&apos; domain=&apos;primary&apos;
sudo -u mapr /opt/mapr/bin/mc admin policy set s3-admin-alias GrantBucketOperations users=&apos;s3-test-iam_user&apos; account=&apos;s3test&apos; domain=&apos;primary&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Create a bucket for the IAM user&lt;/h4&gt;
&lt;p&gt;First, you need to generate the access key and secret key for the IAM User - &quot;s3-test-iam_user&quot;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;sudo -u mapr maprcli s3keys generate -domainname primary \
  -accountname s3test \
  -username &apos;s3-test-iam_user&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;☝Then you would get the access key and secret key for the IAM User - &quot;s3-test-iam_user&quot;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;sudo -u mapr mc alias set s3-test-iam_user-alias https://`hostname -f`:9000 \
{ACCESS_KEY} \
{SECRET_KEY} \
--api &quot;s3v4&quot; --path &quot;off&quot; --json

sudo -u mapr mc mb --account s3test --ignore-existing --disable-versioning --json s3-test-iam_user-alias/s3-test-iam-user-bucket
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;☝Now you have created a bucket named &quot;s3-test-iam-user-bucket&quot; using the IAM user - &quot;s3-test-iam_user&quot;.
Because &quot;s3-test-iam_user&quot; is inside the account &quot;s3test&quot;, the bucket will also be placed under the account &quot;s3test&quot;.&lt;/p&gt;
&lt;p&gt;To list buckets using the &lt;code&gt;mc&lt;/code&gt; command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;/opt/mapr/bin/mc ls --account s3test --versions --recursive --summarize --json s3-test-iam_user-alias
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Sample output:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
 &quot;status&quot;: &quot;success&quot;,
 &quot;type&quot;: &quot;folder&quot;,
 &quot;lastModified&quot;: &quot;2023-02-16T15:22:16+08:00&quot;,
 &quot;size&quot;: 23897893980,
 &quot;key&quot;: &quot;s3-test-iam-user-bucket/&quot;,
 &quot;etag&quot;: &quot;&quot;,
 &quot;url&quot;: &quot;https://m2-maprts-vm197-172.mip.storage.hpecorp.net:9000/&quot;,
 &quot;versionOrdinal&quot;: 1
}
{
&quot;totalObjects&quot;: 1,
&quot;totalSize&quot;: 23897893980
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Install the AWS CLI and put a file into the bucket&lt;/h4&gt;
&lt;p&gt;To install the AWS CLI, refer to this Amazon AWS document 👉 &lt;a href=&quot;https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html&quot;&gt;Installing or updating the latest version of the AWS CLI&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Then, create a profile for the IAM user:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;export AWS_CA_BUNDLE=/opt/mapr/conf/ca/chain-ca.pem
aws configure --profile s3-test-iam_user-ray-2-objstor
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;❗Note: Before using the AWS CLI, remember to export the environment variable AWS_CA_BUNDLE=/opt/mapr/conf/ca/chain-ca.pem.
Otherwise, the AWS CLI cannot communicate with the S3 server, because the S3 server is using a self-signed TLS certificate.&lt;/p&gt;
&lt;p&gt;After inputting the above command, the AWS CLI will ask you to input the access key and secret key.
After the profile is created, the information will be stored at &lt;ins&gt;$HOME/.aws/config&lt;/ins&gt; and &lt;ins&gt;$HOME/.aws/credentials&lt;/ins&gt;.&lt;/p&gt;
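&lt;p&gt;For reference, the profile written by &lt;code&gt;aws configure&lt;/code&gt; ends up looking roughly like this in &lt;ins&gt;$HOME/.aws/credentials&lt;/ins&gt; (keys shortened here):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;[s3-test-iam_user-ray-2-objstor]
aws_access_key_id = XXXXXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
&lt;/code&gt;&lt;/pre&gt;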
&lt;p&gt;Use the below command to list buckets:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;aws s3api list-buckets --endpoint-url https://`hostname -f`:9000 --profile s3-test-iam_user-ray-2-objstor
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Use the below command to put a file into the bucket:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;aws s3api put-object --bucket s3-test-iam-user-bucket --key &apos;testdir/s3-test-iam-user-dir/hpe-cp-rhel-release-5.5.1-3083.bin&apos; --body &apos;downloads/hpe-cp-rhel-release-5.5.1-3083.bin&apos; --endpoint-url https://m2-maprts-vm197-172.mip.storage.hpecorp.net:9000 --profile s3-test-iam_user-ray-2-objstor
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;🗒Note: &quot;s3-test-iam-user-bucket&quot; is the name of the bucket you created before.
&quot;testdir/s3-test-iam-user-dir/hpe-cp-rhel-release-5.5.1-3083.bin&quot; is the object key under which the file will be stored in the bucket.
The &quot;testdir/s3-test-iam-user-dir/&quot; prefix indicates that the object is placed under this directory; if the directory doesn&apos;t exist, it will be created.
&quot;downloads/hpe-cp-rhel-release-5.5.1-3083.bin&quot; is the path of the local file that you want to put into the bucket.&lt;/p&gt;
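&lt;p&gt;To verify the upload, you can fetch the object back with &lt;code&gt;get-object&lt;/code&gt; (the local output file name used here is just an example):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;aws s3api get-object --bucket s3-test-iam-user-bucket --key &apos;testdir/s3-test-iam-user-dir/hpe-cp-rhel-release-5.5.1-3083.bin&apos; ./verify-download.bin --endpoint-url https://`hostname -f`:9000 --profile s3-test-iam_user-ray-2-objstor
&lt;/code&gt;&lt;/pre&gt;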
&lt;h2&gt;Create a cold-tiered volume and offload to the remote Object Store&lt;/h2&gt;
&lt;p&gt;Now we are going to create a volume on another cluster and configure the cold-tier remote target for this volume.
Then we will manually offload the data in this volume to the remote HPE Ezmeral Data Fabric Object Store.&lt;/p&gt;
&lt;h3&gt;Create a cold-tiered volume via a web GUI&lt;/h3&gt;
&lt;p&gt;First, you will need to log into MCS and navigate to Data --&gt; Volumes at the top of the screen👇&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://ibb.co/1Zg5RXN&quot;&gt;&lt;img src=&quot;https://i.ibb.co/2q9QcMr/Create-Cold-Tier-Volume-1.png&quot; alt=&quot;Create-Cold-Tier-Volume-1&quot; border=&quot;0&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Then click &quot;Create Volume&quot; at the top of the screen👇.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://ibb.co/T8SV1Mt&quot;&gt;&lt;img src=&quot;https://i.ibb.co/cFHGwhQ/Create-Cold-Tier-Volume-2.png&quot; alt=&quot;Create-Cold-Tier-Volume-2&quot; border=&quot;0&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Fill in the necessary information, you can refer to this document 👉 &lt;a href=&quot;https://docs.datafabric.hpe.com/72/ClusterAdministration/data/volumes/CreateVols.html&quot;&gt;Creating a Volume&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Turn on the &quot;Data Tiering&quot; switch and select &quot;Remote Archiving (Cold)&quot;. Refer to the figure below to fill in the remote target information:&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://ibb.co/cyy3cWt&quot;&gt;&lt;img src=&quot;https://i.ibb.co/3TTdyxW/Create-Cold-Tier-Volume-3.png&quot; alt=&quot;Create-Cold-Tier-Volume-3&quot; border=&quot;0&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;URL: The host where the S3server of the remote HPE Ezmeral Data Fabric cluster is located, and the port number is the default port 9000 of the S3server.&lt;/li&gt;
&lt;li&gt;Bucket: The bucket created for IAM User in previous steps.&lt;/li&gt;
&lt;li&gt;Access key and secret key: The keys of the IAM User created in the previous step.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Configure the CA certificate of the remote Object Store for the MAST Gateway of the local cluster&lt;/h3&gt;
&lt;p&gt;You should remember that in the earlier steps, we configured the CA certificate of the Object Store&apos;s self-signed TLS certificate for the JDK keystore as well as the mc command line tool and the AWS CLI.&lt;/p&gt;
&lt;p&gt;Now, you will also need to configure this self-signed CA root certificate for MAST Gateway so that it can communicate with the remote Object Store.&lt;/p&gt;
&lt;p&gt;Refer to this document - &lt;a href=&quot;https://docs.datafabric.hpe.com/72/StorageTiers/ConfigMASTGateway.html&quot;&gt;Configuring the MAST Gateway Service&lt;/a&gt;, and set the value of &quot;mastgateway.curl.cainfo&quot; in the configuration file.&lt;/p&gt;
&lt;p&gt;You need to find &lt;ins&gt;/opt/mapr/conf/ca/chain-ca.pem&lt;/ins&gt; from a host of the Object Store cluster first, and copy it to the MAST Gateway node.
For the convenience of management, you can rename it appropriately and configure it as the value of &quot;mastgateway.curl.cainfo&quot;.&lt;/p&gt;
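&lt;p&gt;A minimal sketch of these two steps might look like the following; the destination file name is arbitrary, and the exact MAST Gateway configuration file is the one referenced in the documentation linked above (verify it on your release):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# On the local cluster&apos;s MAST Gateway node: fetch the remote Object Store CA chain
scp &amp;#x3C;remote-objstore-node&gt;:/opt/mapr/conf/ca/chain-ca.pem /opt/mapr/conf/ca/remote-objstore-chain-ca.pem

# Then point mastgateway.curl.cainfo at that file in the MAST Gateway configuration, for example:
# mastgateway.curl.cainfo=/opt/mapr/conf/ca/remote-objstore-chain-ca.pem
&lt;/code&gt;&lt;/pre&gt;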
&lt;h3&gt;Use the maprcli volume offload command to manually offload data&lt;/h3&gt;
&lt;p&gt;Now you can place some data in the cold-tiered volume you just created.
I put a 5.6GB file in it.&lt;/p&gt;
&lt;p&gt;Then, you can use the following command to manually trigger the offload of the entire volume.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;maprcli volume offload -ignorerule true -name {VOLUME_NAME}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then, you can use the following command to monitor the offload status.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;watch &apos;maprcli volume tierjobstatus -name {VOLUME_NAME} -json&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;🗒Note: &lt;code&gt;watch&lt;/code&gt; re-runs the quoted command every 2 seconds.&lt;/p&gt;
&lt;p&gt;When the offload is complete, you will see the following output.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
    &quot;timestamp&quot;: 1676564889008,
    &quot;timeofday&quot;: &quot;2023-02-17 12:28:09.008 GMT+0800 AM&quot;,
    &quot;status&quot;: &quot;OK&quot;,
    &quot;total&quot;: 1,
    &quot;data&quot;:
    [
        {
            &quot;offload&quot;:
            {
                &quot;state&quot;: &quot;Success&quot;,
                &quot;progress&quot;: &quot;100%&quot;,
                &quot;startTime&quot;: &quot;2023-02-17 00:22:47.352 GMT+0800&quot;,
                &quot;endTime&quot;: &quot;2023-02-17 00:27:00.014 GMT+0800&quot;,
                &quot;offloadedDataSize&quot;: &quot;5697.702 MB&quot;,
                &quot;gateway&quot;: &quot;10.163.173.99:8660&quot;
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;In this article, I have demonstrated how to create a bucket in HPE Ezmeral Data Fabric Object Store and upload data using the AWS CLI command line tool.
Then, I showed you how to create a cold-tiered volume and configure it to use the remote Object Store as a remote target.
Finally, I showed you how to manually trigger a volume data offload to verify that the whole setup works end to end.
I hope this article was helpful to you. Catch you next time!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Addressing hybrid cloud application challenges]]></title><link>https://developer.hpe.com/2023-March-03/</link><guid isPermaLink="false">https://developer.hpe.com/2023-March-03/</guid><pubDate>Fri, 03 Mar 2023 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Open Sourcing Workshops-on-Demand part 2: How to Deploy the backend]]></title><description><![CDATA[In the first article of this series, I described the reasons behind the decision to open source our Workshops-on-Demand (WoD) project and…]]></description><link>https://developer.hpe.com/open-sourcing-workshops-on-demand-part2-deploying-the-backend/</link><guid isPermaLink="false">https://developer.hpe.com/open-sourcing-workshops-on-demand-part2-deploying-the-backend/</guid><pubDate>Wed, 01 Mar 2023 17:23:03 GMT</pubDate><content:encoded>&lt;p&gt;In the first &lt;a href=&quot;https://developer.hpe.com/blog/willing-to-build-up-your-own-workshops-on-demand-infrastructure/&quot;&gt;article&lt;/a&gt; of this series, I described the reasons behind the decision to open source our Workshops-on-Demand (WoD) project and gave you a comprehensive picture of the project&apos;s overall infrastructure. In this second article, I will cover the backend part of the project and explain how to deploy it.&lt;/p&gt;
&lt;p&gt;The overall infrastructure can run on physical servers or VMs. We usually designate one server for the frontend and a second server for the backend. You could also decide to separate every single component of each side.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/howto-wod-5.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;H﻿ow to deploy your own backend...&lt;/h2&gt;
&lt;p&gt;A﻿s explained in the previous &lt;a href=&quot;https://developer.hpe.com/blog/willing-to-build-up-your-own-workshops-on-demand-infrastructure/&quot;&gt;article&lt;/a&gt;, the project is split into multiple repositories from the architectural and public / private aspects. The architecture is divided between the frontend and backend. The project admins will need to decide whether they are willing to develop and propose public-only content to the participants or add any proprietary and private content.&lt;/p&gt;
&lt;p&gt;I will start with the simplest scenario: a public-only approach. Then, we will dive into the specifics related to the private approach.&lt;/p&gt;
&lt;h3&gt;P﻿ublic-only Deployment: No private backend nor private workshops&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Important Note:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;T﻿his part is compulsory for any type of deployment. Public only or public + private.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;F﻿irst, you need a repository to clone. The Workshops-on-Demand GitHub projects can be found &lt;a href=&quot;https://github.com/Workshops-on-Demand/&quot;&gt;here&lt;/a&gt;. W﻿e have packaged the solution in several Github repos. Each repository handles a specific role in the overall architecture.&lt;/p&gt;
&lt;p&gt;Here&apos;s a quick look at what can be found in each:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/wod-blogserie2-repos.png&quot; alt=&quot;&quot; title=&quot;WOD Repositories&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://github.com/Workshops-on-Demand/wod-notebooks&quot;&gt;w﻿od-notebooks&lt;/a&gt;:&lt;/strong&gt; Public Workshops-on-Demand based on Jupyter Notebooks.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;You can test them live at &lt;a href=&quot;https://hackshack.hpedev.io/workshops&quot;&gt;https://hackshack.hpedev.io/workshops&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://github.com/Workshops-on-Demand/wod-install&quot;&gt;w﻿od-install&lt;/a&gt;:&lt;/strong&gt; Installer part of the Workshops-on-Demand project.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://github.com/Workshops-on-Demand/wod-backend&quot;&gt;w﻿od-backend&lt;/a&gt;:&lt;/strong&gt; Back-end part of our Workshops-on-Demand setup.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://github.com/Workshops-on-Demand/wod-frontend&quot;&gt;w﻿od-frontend&lt;/a&gt;:&lt;/strong&gt; Frontend part of the Workshops-on-Demand project.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Based on NGINX and NodeJS technologies, it provides the participants&apos; Registration Portal used to enable booking of the workshops.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://github.com/Workshops-on-Demand/wod-api-db&quot;&gt;w﻿od-api-db&lt;/a&gt;:&lt;/strong&gt; Workshops-on-Demand registration portal application&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;OpenAPI 3.0-based API used to manage the Workshops-on-Demand project. It also provides a database hosting the different statuses of participants, workshops, and students.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://github.com/Workshops-on-Demand/wod-private&quot;&gt;w﻿od-private&lt;/a&gt;:&lt;/strong&gt; Example Private configuration for Workshops-on-Demand (WoD).&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://github.com/Workshops-on-Demand/wod-frontend-private&quot;&gt;w﻿od-frontend-private&lt;/a&gt;:&lt;/strong&gt; Private Frontend part of the Workshops-on-Demand project.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://github.com/Workshops-on-Demand/wod-api-db-private&quot;&gt;w﻿od-api-db-private&lt;/a&gt;:&lt;/strong&gt; Workshops-on-Demand registration portal application&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;This provides examples for creating your own customization layer on top of the standard public WoD backend / WoD notebooks content. Do not put any confidential data here, as this is a public repository!&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: There are currently 7 repositories available.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/wod-blogserie2-2repos.png&quot; alt=&quot;&quot; title=&quot;Workshops-on-Demand repositories&quot;&gt;&lt;/p&gt;
&lt;p&gt;Together, these repositories provide:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;An Installer allowing you to install either Backend, Api-DB server, or Frontend using a single line of command.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A complete JupyterHub server with some addons (additional Jupyterhub kernels, Ansible galaxies, and PowerShell libraries) on your system, ready to host Workshops-on-Demand that you can find here.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A postfix server used for the procmail API&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;An Ansible engine to allow automation&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A fail2ban service&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;An Admin user to manage everything&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A set of scripts to handle different tasks such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Notebooks deployment&lt;/li&gt;
&lt;li&gt;Jupyterhub compliancy&lt;/li&gt;
&lt;li&gt;Users compliancy&lt;/li&gt;
&lt;li&gt;Security Management&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;Backend server preparation:&lt;/h4&gt;
&lt;p&gt;B﻿efore cloning the backend repository, you will need to prepare the server that will host the backend features. When ready, you will proceed with the cloning and then the installation process.&lt;/p&gt;
&lt;h5&gt;Prerequisites:&lt;/h5&gt;
&lt;p&gt;In order to setup the backend server, you will need:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A fresh OS install on a physical or virtualized server running Ubuntu 24.04 or CentOS 7.9, using any deployment mechanism of your choice (e.g. iLO, Vagrant, etc.). You may even use this Vagrant file to automatically generate a complete setup leveraging Vagrant, libvirt, and QEMU/KVM.&lt;/li&gt;
&lt;li&gt;A Linux account with sudo privileges on your Linux distro. Name it &lt;code&gt;install&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: In order to support 100 concurrent users, you need:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A machine with 2 or more CPUs&lt;/li&gt;
&lt;li&gt;128 GB of RAM&lt;/li&gt;
&lt;li&gt;500 GB of storage&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;We are currently using an HPE ProLiant DL360 Gen10 server on our different production sites.&lt;/p&gt;
&lt;p&gt;When done with OS installation and preparation:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;From the WoD-backend server (aka JupyterHub server), as the install user, you will need to clone the wod-install repo first.&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;install$ git clone https://github.com/Workshops-on-Demand/wod-install.git
install$ cd wod-install/
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;Examine the default installation parameters and adapt them as necessary. The files are self-documented.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Look at the following files within ansible/group_vars directory.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;all.yml&lt;/code&gt; file&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;vi all.yml
---
# We create fixed user accounts to provide an isolated execution environment to run the jupyter notebooks
# They are called studentXXX where XXX is set between USERMIN and USERMAX defined below potentially with the addition of an offset (UIDBASE) for their uid/gid
# Their home directory is located under /student and is thus named /student/studentXXX
# Corresponding JupyterHub accounts are also created
#
# USERMIN indicates the starting ID of the Linux and Jupyter user account range
#
USERMIN: 1
#
# USERMAX indicates the ending ID of the Linux and Jupyter user account range
#
USERMAX: 20
#
# UIDBASE is the offset used to create the Linux user account IDs
# Example when creating user 35 with UIDBASE of 2000, the uid created is 2035
#
UIDBASE: 2000
#
# GIDBASE is the offset used to create the Linux group IDs
# Example: When creating user 35 with GIDBASE of 2000, the gid created is 2035
#
GIDBASE: 2000
#
# Set CLEAN to true if you want all Linux &amp;#x26; Jupyter user accounts to be removed before ansible check
#
CLEAN: false
#
# VAULTPWD is the password used to manage the Ansible vault
#
VAULTPWD: VeryComplexPasswd1234!
#
# NOCHECKSSH are ssh options used to dialog with appliances
# By default, avoid checking Host keys and Host file as they may change on a regular base
#
NOCHECKSSH: -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null
#
# Branding management - Use if you want to customize Logo and Notebooks branding
#
BRANDING: &quot;WoD Developer&quot;
BRANDINGWOD: &quot;WoD Developer&quot;
BRANDINGLOGO: &quot;![HPEDEVlogo](Pictures/hpe-dev-logo.png)&quot;
BRANDINGURL: &quot;https://wod.io&quot;
#
# Survey management - Use if you want to ask for feedback on your workshops - Look at existing conclusion notebooks
SURVEYURL: TBD
SURVEYCHALURL: TBD
#
# JPHUB  is the directory used to install the JupyterHub stack (a Python venv)
#
JPHUB: /opt/jupyterhub
#
#
# These variables are defined in Ansible playbooks. Do not change without knowing what you are doing.
#
STUDDIR: &quot;{{ ansible_env.STUDDIR }}&quot;
WODBEDIR: &quot;{{ ansible_env.WODBEDIR }}&quot;
WODPRIVDIR: &quot;{{ ansible_env.WODPRIVDIR }}&quot;
WODNOBO: &quot;{{ ansible_env.WODNOBO }}&quot;
WODAPIDBDIR: &quot;{{ ansible_env.WODAPIDBDIR }}&quot;
WODFEDIR: &quot;{{ ansible_env.WODFEDIR }}&quot;
SCRIPTDIR: &quot;{{ WODBEDIR }}/scripts&quot;
ANSIBLEDIR: &quot;{{ WODBEDIR }}/ansible&quot;
# This is the predefined structure for a private repo
WODPRIVNOBO: &quot;{{ WODPRIVDIR }}/notebooks&quot;
SCRIPTPRIVDIR: &quot;{{ WODPRIVDIR }}/scripts&quot;
ANSIBLEPRIVDIR: &quot;{{ WODPRIVDIR }}/ansible&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;wod-backend&lt;/code&gt; file&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;vi wod-backend
#
# These variables are located lower in the ansible tree to allow different values required for different backends while keeping a single frontend
#
# BASESTDID is the offset used to create users in the DB. It is required that each backend has a different non overlapping value.
# Overlap is defined by BASESTDID + USERMAX (from all.yml)
#
# Example:
# for student 35 in location A having BASESTDID set to 0, the user is created as id 35
# for student 35 in location B having BASESTDID set to 2000, the user is created as id 2035
# There is no overlap as long as you do not create more than 2000 users which should be the value of USERMAX in that case.
#
# This is different from the offset UIDBASE used for Linux uid
#
BASESTDID: 0
#
# POSTPORT is the Postfix Port on which the smtp service is listening to receive API mail requests from the frontend
#
POSTPORT: &quot;10025&quot;
#
# In case you are using an LDAP server to use, flag as such the corresponding workshops in the DB and use the following values:
#
LDAPSRVNAME: ldap.example.org
LDAPDMN: example.org
LDAPPWD: MotDePasseLDAPCompliquéAussi123!!!##
LDAPPORT: &quot;389&quot;
#
# For various existing public WoDs - These are needed. Adapt but do not remove!
#
SSHPORT-WKSHP-Docker101: 14101
SSHPORT-WKSHP-Ansible101: 16001
HTTPPORT-WKSHP-Docker101: 14151
HTTPPORT-WKSHP-Ansible101: 16051
HTTPPORT-WKSHP-Spark101: 17161
HTTPPORT-WKSHP-Concourse101: 19061
HTTPPORT-WKSHP-ML101: 18061
HTTPPORT-WKSHP-DataVisu101: 22161
CONCOURSEPORT-WKSHP-Concourse101: 19001
CONCOURSEPORT2-WKSHP-Concourse101: 19031
IP-WKSHP-DataVisu101: x.y.z.t
IP-WKSHP-Concourse101: x.y.z.t
IP-WKSHP-Docker101: x.y.z.t
IP-WKSHP-Ansible101: x.y.z.t
IP-WKSHP-Spark101: x.y.z.t
IP-WKSHP-ML101: x.y.z.t
IP-WKSHP-StackStorm101: x.y.z.t
SPARKPORT-WKSHP-Spark101: 17101
SPARKPORT2-WKSHP-Spark101: 17131
MLPORT-WKSHP-ML101: 18101
MLPORT2-WKSHP-ML101: 18031
DATAVISUPORT1-WKSHP-DataVisu101: 22101
DATAVISUPORT2-WKSHP-DataVisu101: 22131
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;wod-system&lt;/code&gt; file&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;vi wod-system
#
# Backend API management
#
# Do not change, as the port is fixed in the JupyterHub install
#
WODBEAPIURL: http://{{ WODBEFQDN }}:8000
#
# Replace with a random one - TODO Do that automatically at install time
#
WODBETOKEN: 2c0246e2c8564dc6ac7b12c544b25d77
#
# You may want to use these variables if you have an OPNSense server as a security FW and are allowing http comm internally
#
#OPNSENSEKEY:
#OPNSENSESEC:
#OPNSENSEIP:
#OPNSENSEPORT:
#
# Front-end API management
#
# Do not change, as the port is fixed in the JupyterHub install
#
WODFEAPIURL: https://{{ WODAPIDBFQDN }}/api
#
# Adapt to your setup - Used by installer to setup the frontend
#
WODFEAPIUSER: moderator
WODFEAPIPWD: MotDePasseCompliquéAussi125!!!##
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;S﻿ee the example below for a backend server.&lt;/p&gt;
&lt;h3&gt;B﻿ackend installation process:&lt;/h3&gt;
&lt;p&gt;The installation process is handled by a dedicated repo: wod-install. This repo needs to be cloned on every single machine constituting the WoD architecture.&lt;/p&gt;
&lt;p&gt;The WoD installer lets you install either a backend, an API-DB server, or a frontend with a single command.&lt;/p&gt;
&lt;p&gt;Once you are done with the files, you can proceed with the installation itself. The installation is based on a common install script, &lt;a href=&quot;https://github.com/Workshops-on-Demand/wod-backend/blob/main/install/install.sh&quot;&gt;install.sh&lt;/a&gt;, that allows the deployment of the different parts of the solution. It can be called as follows:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;install.sh [-h][-t type][-g groupname][-b backend][-f frontend][-a api-db][-e external][-u user][-s sender]&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;-h&lt;/code&gt; provides the help.&lt;/p&gt;
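&lt;p&gt;As an illustration only (the type and group name values shown are assumptions; adapt them to your setup), installing a backend for a group named &lt;code&gt;production&lt;/code&gt; could look like this, run from the directory containing &lt;code&gt;install.sh&lt;/code&gt; in the cloned wod-install repo:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;install$ ./install.sh -t backend -g production
&lt;/code&gt;&lt;/pre&gt;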
&lt;p&gt;&lt;code&gt;install.sh&lt;/code&gt; performs the following tasks:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Calls the &lt;code&gt;install-system-&amp;#x3C;&amp;#x3C; distribution name &gt;&gt;.sh&lt;/code&gt; script&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Installs the minimal required packages (&lt;code&gt;ansible, git, jq, openssh server, npm&lt;/code&gt;)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Creates an admin user as defined above (default is &lt;code&gt;wodadmin&lt;/code&gt;) with sudo rights&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Calls the &lt;code&gt;install-system-common.sh&lt;/code&gt; script that performs the following tasks:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Cleanup&lt;/li&gt;
&lt;li&gt;Clones the GitHub repos (leveraging the install.repo file): the public backend and public private repos&lt;/li&gt;
&lt;li&gt;Creates SSH keys for wodadmin&lt;/li&gt;
&lt;li&gt;Creates GROUPNAME variables&lt;/li&gt;
&lt;li&gt;Creates Ansible inventory files&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Calls the &lt;code&gt;install_system.sh&lt;/code&gt; script with the type (backend, frontend, etc.), which performs the following tasks:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Install the necessary stack based on selected type&lt;/li&gt;
&lt;li&gt;Create a &lt;code&gt;wod.sh&lt;/code&gt; script in &lt;code&gt;wod-backend&lt;/code&gt; directory to be used by all other scripts&lt;/li&gt;
&lt;li&gt;Source the &lt;code&gt;wod.sh&lt;/code&gt; file&lt;/li&gt;
&lt;li&gt;Setup Ansible galaxies (&lt;code&gt;community.general&lt;/code&gt; and &lt;code&gt;posix&lt;/code&gt;; see the sketch after this list)&lt;/li&gt;
&lt;li&gt;Setup Ansible and call the playbook &lt;code&gt;install_&amp;#x3C;type&gt;.yml&lt;/code&gt; followed by the &lt;code&gt;ansible_check_&amp;#x3C;type&gt;.yml&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
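&lt;p&gt;For reference, the galaxy setup step is roughly equivalent to running the following commands manually (the installer does this for you; &lt;code&gt;posix&lt;/code&gt; is assumed here to be the &lt;code&gt;ansible.posix&lt;/code&gt; collection):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;ansible-galaxy collection install community.general
ansible-galaxy collection install ansible.posix
&lt;/code&gt;&lt;/pre&gt;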
&lt;p&gt;At the end of the installation process:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;You will have a JupyterHub server running on port 8000&lt;/li&gt;
&lt;li&gt;You will get a new &lt;code&gt;wodadmin&lt;/code&gt; user (default admin)&lt;/li&gt;
&lt;li&gt;You will get a set of 20 student accounts (default value)&lt;/li&gt;
&lt;/ul&gt;
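&lt;p&gt;A quick way to sanity-check the result is to query the JupyterHub REST API on the backend itself; the &lt;code&gt;/hub/api&lt;/code&gt; endpoint returns the running JupyterHub version as a small JSON document:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;curl -s http://localhost:8000/hub/api
# Expected output looks like: {&quot;version&quot;: &quot;x.y.z&quot;}
&lt;/code&gt;&lt;/pre&gt;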
&lt;p&gt;All playbooks are self-documented. Please check them for details.&lt;/p&gt;
&lt;p&gt;We leave it to you to handle the necessary port redirection and SSL certificate management when needed. In our case, we went for a simple yet efficient solution based on an OPNsense firewall along with an HAProxy setup to manage port redirection, HTTP-to-HTTPS redirection, and SSL certificates. The backend also includes a Fail2ban service for login security management.&lt;/p&gt;
&lt;p&gt;At this point, you should be able to access your JupyterHub environment with a set of pre-installed kernels like &lt;code&gt;Bash, Python, ansible, ssh, PowerShell&lt;/code&gt;.&lt;/p&gt;
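&lt;p&gt;To double-check which kernels were installed, you can, for example, list them on the backend (assuming the &lt;code&gt;jupyter&lt;/code&gt; CLI is available in the environment running JupyterHub):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;$ jupyter kernelspec list   # lists the kernel specs visible to this Jupyter installation
&lt;/code&gt;&lt;/pre&gt;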
&lt;p&gt;You can then start developing new notebooks for your public-based environment. And if you don&apos;t know how to do this, I will explain how in my next article.&lt;/p&gt;
&lt;p&gt;If you need to develop private content that cannot be shared with the wider Open Source Community because of dedicated intellectual property (IP), the next section in this article will explain how to handle this.&lt;/p&gt;
&lt;h3&gt;&lt;strong&gt;How to handle private-content based Workshops-on-Demand&lt;/strong&gt;&lt;/h3&gt;
&lt;h4&gt;&lt;em&gt;(private backend + private workshops on top of default public backend and notebooks)&lt;/em&gt;&lt;/h4&gt;
&lt;p&gt;The principle remains similar, with a few differences explained below.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;You will start by forking the following public private &lt;a href=&quot;https://github.com/Workshops-on-Demand/wod-private.git&quot;&gt;repo&lt;/a&gt; on GitHub under your own GitHub account (we will refer to it as &lt;code&gt;Account&lt;/code&gt;).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Next, clone the forked repo.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Edit the &lt;code&gt;all.yml&lt;/code&gt; and &lt;code&gt;&amp;#x3C;groupname&gt;&lt;/code&gt; files to customize your setup. The &lt;code&gt;&amp;#x3C;groupname&gt;&lt;/code&gt; variable defines a possible backend server in your environment. By default, the project comes with a sample working file named &lt;code&gt;production&lt;/code&gt; in &lt;code&gt;ansible/group-vars&lt;/code&gt;, but you could have multiple ones. In our case, we have defined &lt;code&gt;sandbox&lt;/code&gt;, &lt;code&gt;test&lt;/code&gt;, &lt;code&gt;staging&lt;/code&gt; and several &lt;code&gt;production&lt;/code&gt; files, each defining a different backend environment. These files will be used to override the default values specified by the public version delivered as part of the default public installation.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Commit and push changes to your repo.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create an &lt;code&gt;install.priv&lt;/code&gt; file in the &lt;code&gt;install&lt;/code&gt; directory when using a private repo (consider looking at the &lt;a href=&quot;https://github.com/Workshops-on-Demand/wod-backend/blob/main/install/install.repo&quot;&gt;install.repo&lt;/a&gt; file for a better understanding of the variables).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Define the WODPRIVREPO and WODPRIVBRANCH variables as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;WODPRIVBRANCH=&quot;main&quot;&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;WODPRIVREPO=&quot;git@github.com:Account/Private-Repo.git wod-private&quot;&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; When using a token&lt;/p&gt;
&lt;p&gt;Please refer to the following &lt;a href=&quot;https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token&quot;&gt;URL&lt;/a&gt; to generate a token, then store it in a &lt;code&gt;token&lt;/code&gt; file in the &lt;code&gt;install&lt;/code&gt; directory of WoD-backend:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Edit the &lt;code&gt;install.priv&lt;/code&gt; file located in &lt;code&gt;install&lt;/code&gt; directory of WoD-backend:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Create a line before the variable declaration: &lt;code&gt;token=`cat $EXEPATH/token`&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Use the token in the URL: &lt;code&gt;WODPRIVREPO=&quot;git clone https://user:$token@github.com/Account/wod-private.git wod-private&quot;&lt;/code&gt; (see the consolidated sketch below)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
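&lt;p&gt;As a minimal sketch, an &lt;code&gt;install.priv&lt;/code&gt; combining the lines above could look like the following (the account name, repo name and token location are placeholders; cross-check the &lt;a href=&quot;https://github.com/Workshops-on-Demand/wod-backend/blob/main/install/install.repo&quot;&gt;install.repo&lt;/a&gt; file for the exact format expected):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;# install/install.priv - hypothetical example, values are placeholders
token=`cat $EXEPATH/token`
WODPRIVBRANCH=&quot;main&quot;
WODPRIVREPO=&quot;git clone https://user:$token@github.com/Account/wod-private.git wod-private&quot;
&lt;/code&gt;&lt;/pre&gt;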
&lt;p&gt;You are now ready to perform the installation again to support a private repository.&lt;/p&gt;
&lt;p&gt;Please note that this setup phase can be performed at the same time as the public setup phase. Indeed, the install script should detect the presence of the private repository owing to the presence of the install.priv file. It will automatically adjust the different scripts and variables to add the relevant content, overriding some of the public variables with private ones.&lt;/p&gt;
&lt;p&gt;You now have a working Workshops-on-Demand backend server in place. Congratulations! The next article in the series will help you better understand the lifecycle of the backend server. How does a workshop registration work from the backend server&apos;s side? How do you manage this server on a daily basis? How and when do you need to update it? All these questions will be answered in the next article. And from there, we will move to the frontend side of things and finally to a workshop&apos;s creation process.&lt;/p&gt;
&lt;p&gt;If you need support for this installation process, use our dedicated &lt;a href=&quot;https://hpedev.slack.com/archives/C01B60X8SSD&quot;&gt;Slack channel&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Please be sure to check back on the &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE Developer blog site&lt;/a&gt; to read all the articles in this series. Also, check out the Hack Shack for new &lt;a href=&quot;https://developer.hpe.com/hackshack/workshops&quot;&gt;workshops&lt;/a&gt;: &lt;a href=&quot;https://developer.hpe.com/hackshack/replays/42&quot;&gt;Data Visualization 101&lt;/a&gt; is now available! Stay tuned for additional Workshops-on-Demand in our catalog.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[A guide to enabling a managed Istio service mesh in a Kubernetes cluster on HPE GreenLake for Private Cloud Enterprise]]></title><description><![CDATA[Introduction In this blog post, we demonstrate how an end user can deploy a containerized application or a managed service on a Kubernetes…]]></description><link>https://developer.hpe.com/a-guide-to-enable-managed-istio-service-mesh-in-a-kubernetes-cluster-on-hpe-greenlake-for-private-cloud-enterprise/</link><guid isPermaLink="false">https://developer.hpe.com/a-guide-to-enable-managed-istio-service-mesh-in-a-kubernetes-cluster-on-hpe-greenlake-for-private-cloud-enterprise/</guid><pubDate>Thu, 16 Feb 2023 13:36:32 GMT</pubDate><content:encoded>&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;In this blog post, we demonstrate how an end user can deploy a containerized application or a managed service on a Kubernetes-based container stack using the cluster add-on feature provided by &lt;strong&gt;HPE GreenLake for Private Cloud Enterprise: Containers&lt;/strong&gt; and then access it over an external network or internet. The containers service evaluates the user’s environment and makes add-ons available to the user so that they can add the containerized application or managed service to the cluster as required.&lt;/p&gt;
&lt;p&gt;For those of you who may be unfamiliar with the term, a &lt;strong&gt;Service mesh&lt;/strong&gt; is a network of microservices that consist of distributed applications and communications between those applications. It is a dedicated infrastructure layer that facilitates service-to-service communications routed through the proxy, ensuring secure communication.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Istio&lt;/strong&gt; is an open-source service mesh that provides a platform for distributed applications that includes API integrations with logging, telemetry, or policy systems. It provides a uniform and more efficient way to secure, connect, and monitor services. Istio automatically manages load balancing for HTTP, gRPC, WebSocket, and TCP traffic. For details, see &lt;strong&gt;&lt;a href=&quot;https://istio.io/latest/about/service-mesh/&quot;&gt;The Istio service mesh&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;
&lt;h2&gt;Enabling Istio service mesh add-on from a cluster&lt;/h2&gt;
&lt;h3&gt;Step-1: Create a Kubernetes cluster from the containers page&lt;/h3&gt;
&lt;p&gt;To create a cluster, you must have been assigned the roles of &lt;strong&gt;Private Cloud Cluster Owner&lt;/strong&gt; and &lt;strong&gt;Private Cloud Widget Viewer&lt;/strong&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;From the &lt;strong&gt;Containers&lt;/strong&gt; main page, under the &lt;strong&gt;Clusters&lt;/strong&gt; tab, click &lt;strong&gt;Create Cluster&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;In the &lt;strong&gt;Create Cluster&lt;/strong&gt; form, provide the cluster name &apos;&lt;strong&gt;hpe&lt;/strong&gt;&apos;, and select the standard cluster blueprint. The new cluster appears in the list of clusters.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/clustermainpage-2.png&quot; alt=&quot;&quot; title=&quot;Clusters view&quot;&gt;&lt;/p&gt;
&lt;p&gt;As indicated above, there are multiple clusters deployed in parallel for multiple purposes. For the &lt;strong&gt;Istio&lt;/strong&gt; service mesh add-on enablement/deployment in our example, we are using a cluster created with the name &quot;&lt;strong&gt;hpe&lt;/strong&gt;&quot;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/clusterhpeview.png&quot; alt=&quot;&quot; title=&quot;Cluster &amp;#x27;hpe&amp;#x27; view&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Step-2: Enabling an add-on from a cluster&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;On the &lt;strong&gt;Containers&lt;/strong&gt; main page, click a cluster row to open the cluster details screen.&lt;/li&gt;
&lt;li&gt;On the cluster details screen, click the &lt;strong&gt;Add-ons&lt;/strong&gt; tab.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Enable add-on&lt;/strong&gt; if no add-ons are enabled or click &lt;strong&gt;Enable another add-on&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/blankaddonpage.png&quot; alt=&quot;&quot; title=&quot;Add-ons view&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;In the &lt;strong&gt;Enable Addon&lt;/strong&gt; wizard, select the &lt;strong&gt;Istio-service-mesh&lt;/strong&gt; add-on and click &lt;strong&gt;Next&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/istioaddonpage-11.png&quot; alt=&quot;&quot; title=&quot;Select Add-on view&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Provide the values for the fields that appear for the selected add-on, read and accept the license agreement, and click &lt;strong&gt;Enable&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/istioaddonpage-22.png&quot; alt=&quot;&quot; title=&quot;Selected Add-on Istio-service-mesh view&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;After successful add-on enablement, the add-on status will be updated to &apos;&lt;strong&gt;succeeded&lt;/strong&gt;&apos;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/istioaddongreenstatus.png&quot; alt=&quot;&quot; title=&quot;Add-ons view&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;View the details of the add-on that you just enabled.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/istioaddondetailspage.png&quot; alt=&quot;&quot; title=&quot;Add-on Istio-service-mesh overview&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Step-3: Launching the Kiali dashboard - the console for Istio service mesh&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Kiali&lt;/strong&gt; is an open-source project that provides observability for the Istio service mesh.&lt;/p&gt;
&lt;p&gt;From the &lt;strong&gt;Overview&lt;/strong&gt; tab, click the &lt;strong&gt;KialiURL&lt;/strong&gt; link and use the &lt;strong&gt;Kiali token&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;Kiali dashboard&lt;/strong&gt; launches in a new web page.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: The URL for the Kiali console might be different in your environment.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/kiali-console.png&quot; alt=&quot;&quot; title=&quot;Kiali console view&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: To take advantage of the Istio features, pods in the mesh must be running an Istio sidecar proxy. Injection of the proxy can be done either on a per-pod basis or at the namespace level. To enable sidecar injection, refer to the &lt;a href=&quot;https://istio.io/latest/docs/setup/additional-setup/sidecar-injection/&quot;&gt;setup instructions&lt;/a&gt;. For information about using Kiali, see the &lt;strong&gt;&lt;a href=&quot;https://kiali.io/docs/&quot;&gt;Kiali documentation&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;
&lt;h3&gt;Step-4: Download scoped kubeconfig from the container platform page&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;From the &lt;strong&gt;Clusters&lt;/strong&gt; tab, select the &apos;&lt;strong&gt;hpe&lt;/strong&gt;&apos; Kubernetes cluster and click &lt;strong&gt;Launch Service Console&lt;/strong&gt;. This will direct you to the container platform page.&lt;/li&gt;
&lt;li&gt;Click on Download &lt;strong&gt;kubeconfig&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Launching the service console from HPE GreenLake Central is configured through SAML SSO and adds a session token to the kubeconfig file. You will need to download the kubeconfig file again if you want to continue to access the cluster when the session token expires after an hour.&lt;/p&gt;
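&lt;p&gt;As a quick sketch (assuming you saved the downloaded file as &lt;code&gt;./kubeconfig&lt;/code&gt; in your working directory), you can point kubectl at it and confirm access to the cluster:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;$ export KUBECONFIG=./kubeconfig   # path to the file downloaded from the container platform page
$ kubectl get nodes                # should list the cluster nodes while the session token is valid
&lt;/code&gt;&lt;/pre&gt;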
&lt;h3&gt;Step-5: Deploying a sample Istio application: Bookinfo&lt;/h3&gt;
&lt;p&gt;This procedure follows the standard Istio documentation to deploy a sample application. To know more about Bookinfo Application, see the &lt;strong&gt;&lt;a href=&quot;https://istio.io/latest/docs/examples/bookinfo/&quot;&gt;Istio documentation&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Use the following commands to create the namespace and label it for Istio sidecar proxy injection, in order to deploy the application in the bookinfo namespace&lt;/strong&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;$ kubectl create namespace bookinfo		
namespace/bookinfo created

$ kubectl label namespace bookinfo istio-injection=enabled
namespace/bookinfo labeled

$ kubectl get namespace bookinfo --show-labels
NAME       STATUS   AGE    LABELS
bookinfo   Active   105s   gl.hpe.com/namespaceid=10d70074-0c2b-4221-804e-1437ed1842ca,hpe.com/cluster=stub,hpe.com/namespacetype=Tenant,hpe.com/tenant=bookinfo,hpe.com/version=6.2.0,hpecp.hpe.com/hpecptenant=hpecp-tenant-106,istio-injection=enabled,kubernetes.io/metadata.name=bookinfo,serving.kubeflow.org/inferenceservice=enabled
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Deploy the Bookinfo application using the YAML manifest file, i.e. services/istio/release-1.16/samples/bookinfo/bookinfo.yaml, from the&lt;/strong&gt; &lt;a href=&quot;https://github.com/cxteamtrials/caas-trials-content&quot;&gt;GitHub repository&lt;/a&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;$ kubectl apply -f bookinfo.yaml -n bookinfo
service/details created
serviceaccount/bookinfo-details created
deployment.apps/details-v1 created
service/ratings created
serviceaccount/bookinfo-ratings created
deployment.apps/ratings-v1 created
service/reviews created
serviceaccount/bookinfo-reviews created
deployment.apps/reviews-v1 created
deployment.apps/reviews-v2 created
deployment.apps/reviews-v3 created
service/productpage created
serviceaccount/bookinfo-productpage created
deployment.apps/productpage-v1 created
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Confirm all pods and services are deployed successfully&lt;/strong&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;$ kubectl get pods,services -n bookinfo
NAME                             READY   STATUS    RESTARTS   AGE
details-v1-698b5d8c98-qglhw      2/2     Running   0          6m17s
productpage-v1-bf4b489d8-bkpdm   2/2     Running   0          6m17s
ratings-v1-5967f59c58-28kc5      2/2     Running   0          6m17s
reviews-v1-9c6bb6658-mw2df       2/2     Running   0          6m17s
reviews-v2-8454bb78d8-p4h9d      2/2     Running   0          6m17s
reviews-v3-6dc9897554-g7xqh      2/2     Running   0          6m17s

NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
details       ClusterIP   10.98.141.15     &amp;#x3C;none&gt;        9080/TCP   14m
productpage   ClusterIP   10.104.123.90    &amp;#x3C;none&gt;        9080/TCP   6m45s
ratings       ClusterIP   10.108.60.57     &amp;#x3C;none&gt;        9080/TCP   6m46s
reviews       ClusterIP   10.106.208.181   &amp;#x3C;none&gt;        9080/TCP   14m
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Configure the service to access the application outside of the cluster&lt;/strong&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Edit the deployed service &lt;strong&gt;productpage&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Change service type from ClusterIP to &lt;strong&gt;NodePort&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Add the label &lt;strong&gt;hpecp.hpe.com/hpecp-internal-gateway=true&lt;/strong&gt;. The service will be automatically mapped/exposed to a &lt;strong&gt;Container platform gateway host&lt;/strong&gt; with an assigned port, as sketched in the example below.&lt;/li&gt;
&lt;/ul&gt;
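&lt;p&gt;Instead of editing the service interactively, a rough kubectl equivalent of the two changes above might look like this (a sketch only; the label key is the one shown in the service description below):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;# Switch the service to NodePort and add the gateway mapping label
$ kubectl -n bookinfo patch svc productpage -p &apos;{&quot;spec&quot;: {&quot;type&quot;: &quot;NodePort&quot;}}&apos;
$ kubectl -n bookinfo label svc productpage hpecp.hpe.com/hpecp-internal-gateway=true
&lt;/code&gt;&lt;/pre&gt;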
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;$ kubectl describe svc productpage -n bookinfo
Name:                     productpage
Namespace:                bookinfo
Labels:                   app=productpage
                          hpecp.hpe.com/hpecp-internal-gateway=true
                          service=productpage
Annotations:              hpecp-internal-gateway/9080: epicgw.customer.hpe.net:10072
Selector:                 app=productpage
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.104.123.90
IPs:                      10.104.123.90
Port:                     http  9080/TCP
TargetPort:               9080/TCP
NodePort:                 http  31766/TCP
Endpoints:                10.192.3.181:9080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason  Age   From         Message
  ----    ------  ----  ----         -------
  Normal  HpeCp   21s   hpecp-agent  Created HPECP K8S service
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Confirm the application is accessible from outside the cluster&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;The Bookinfo application &lt;strong&gt;productpage&lt;/strong&gt; can be accessed in the browser by typing the URL &lt;strong&gt;&lt;a href=&quot;http://epicgw.customer.hpe.net:10072&quot;&gt;http://epicgw.customer.hpe.net:10072&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: The above URL might be different in your environment. You can form the URL by referencing the annotations of the &lt;strong&gt;productpage&lt;/strong&gt; service.&lt;/p&gt;
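&lt;p&gt;As a quick reachability check from a terminal (using the gateway host and port taken from your own service annotation), something like the following should return an HTTP response:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;# Replace the host and port with the values from the hpecp-internal-gateway annotation
$ curl -sI http://epicgw.customer.hpe.net:10072 | head -n 1
&lt;/code&gt;&lt;/pre&gt;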
&lt;p&gt;&lt;img src=&quot;/img/bookinfo-productpage.png&quot; alt=&quot;&quot; title=&quot;Bookinfo application default view&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/bookinfo-productpage-normal-user.png&quot; alt=&quot;&quot; title=&quot;Bookinfo application productpage view&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Monitor the sample application using the Kiali dashboard&lt;/h2&gt;
&lt;p&gt;Enter &lt;strong&gt;bookinfo&lt;/strong&gt; into the field Filter by Namespace. The Kiali overview screen displays the details about the namespace bookinfo. It shows that 4 applications are running in the &lt;strong&gt;namespace bookinfo&lt;/strong&gt; with no inbound traffic.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/kiali-bookinfo.png&quot; alt=&quot;&quot; title=&quot;Kiali overview&quot;&gt;&lt;/p&gt;
&lt;p&gt;In the &lt;strong&gt;Graph&lt;/strong&gt; tab from the left navigation menu, after selecting the &lt;strong&gt;namespace bookinfo&lt;/strong&gt;, the screen shows an overview topology of the Bookinfo application.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/kiali-console-graph.png&quot; alt=&quot;&quot; title=&quot;Kiali graph view&quot;&gt;&lt;/p&gt;
&lt;p&gt;In the &lt;strong&gt;Applications&lt;/strong&gt; tab from the left navigation menu, after selecting the &lt;strong&gt;namespace bookinfo&lt;/strong&gt;, the screen shows application details of the Bookinfo application.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/kiali-console-applications.png&quot; alt=&quot;&quot; title=&quot;Kiali applications view&quot;&gt;&lt;/p&gt;
&lt;p&gt;In the &lt;strong&gt;Workloads&lt;/strong&gt; tab from the left navigation menu, after selecting the &lt;strong&gt;namespace bookinfo&lt;/strong&gt;, the screen shows deployment details of the Bookinfo   application.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/kiali-console-workloads.png&quot; alt=&quot;&quot; title=&quot;Kiali workloads view&quot;&gt;&lt;/p&gt;
&lt;p&gt;In the &lt;strong&gt;Services&lt;/strong&gt; tab from the left navigation menu, after selecting the &lt;strong&gt;namespace bookinfo&lt;/strong&gt;, you can check all the services details of the Bookinfo application.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/kiali-console-services.png&quot; alt=&quot;&quot; title=&quot;Kiali services view&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;You can find the GitHub repository that hosts demo code &lt;strong&gt;&lt;a href=&quot;https://github.com/cxteamtrials/caas-trials-content&quot;&gt;here&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;We hope that this blog post has provided you with enough information to get started deploying a containerized application or a managed service, &lt;strong&gt;i.e. the Istio service mesh&lt;/strong&gt;, on a Kubernetes-based container stack using the cluster add-on feature provided by &lt;strong&gt;HPE GreenLake for Private Cloud Enterprise: Containers&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;To view more articles and tutorials on the use of the HPE GreenLake for Private Cloud Enterprise, refer to the &lt;strong&gt;&lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE Developer Community blog&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Smart use cases for SmartSim]]></title><description><![CDATA[SmartSim is an open source library that aims to bridge the divide between traditional numerical simulations and data science tools. SmartSim…]]></description><link>https://developer.hpe.com/smart-use-cases-for-smartsim/</link><guid isPermaLink="false">https://developer.hpe.com/smart-use-cases-for-smartsim/</guid><pubDate>Mon, 13 Feb 2023 08:45:53 GMT</pubDate><content:encoded>&lt;p&gt;&lt;a href=&quot;https://developer.hpe.com/platform/smartsim/home&quot;&gt;SmartSim&lt;/a&gt; is an open source library that aims to bridge the divide between traditional numerical simulations and data science tools. SmartSim started as a project at Cray before Cray was acquired by Hewlett Packard Enterprise (HPE). SmartSim was later open sourced under the BSD-2 license.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/compute/hpc/hpc-software.html&quot;&gt;High performance computing&lt;/a&gt; (HPC) software takes advantage of parallel computing techniques. A large portion of traditional HPC applications are written in C, C++, or Fortran, whereas data science toolkits are often written in Python and NumPy (which is used for scientific computing in Python). This can make it difficult to bridge the gap between the data science world and HPC. SmartSim is a software library aimed at providing the glue between these two worlds.&lt;/p&gt;
&lt;p&gt;SmartSim enables users to use machine learning models inside their existing Fortran/C/C++ simulations. It does so by allowing them to push and pull data to/from these applications and an in-memory, highly performant database. Because this database also supports the storage of ML models, SmartSim also allows coders to enable online training of machine learning models and use those models to make predictions inside of that same code. In essence, SmartSim tries to take the world of traditional HPC applications and bring it together with modern data science tools.&lt;/p&gt;
&lt;p&gt;With SmartSim, you get an infrastructure library component and a client library component. The infrastructure library helps users set up, configure, run and monitor simulations from a Python script. The SmartSim infrastructure library basically provides a lightweight API for users to create what we call experiments. These experiments may be a combination of traditional HPC applications and other data science tools. SmartSim’s infrastructure library automatically deploys, on the compute resources, the infrastructure needed to accomplish whatever that experiment describes (e.g., having a TensorFlow backend to do online inference).&lt;/p&gt;
&lt;p&gt;The client library that&apos;s a part of SmartSim enables users to embed this machine learning capability inside of traditional applications.  The library is available in four languages, Python, C, C++ and Fortran. It provides a very simple API that allows the transfer of data between the traditional applications and the database. The simplicity of the API allows users to rapidly integrate their application with only a minimal amount of modification to the original source code (usually less than 10 lines).&lt;/p&gt;
&lt;p&gt;One of the early use cases that the designers focused on was using SmartSim to enable better predictions inside of a global ocean model. The &lt;a href=&quot;https://www.gfdl.noaa.gov/mom-ocean-model/&quot;&gt;MOM6 (Modular Ocean Model)&lt;/a&gt; global ocean model has an algorithm that calculates a term called Eddy Kinetic Energy (EKE) which governs the strength of turbulence. It turns out that the parameterization is not very accurate. So, using SmartSim, we trained a machine learning model on a high fidelity run to learn the mapping from a coarsened version of the quantities (similar to those in the low fidelity run) to the true EKE. Not only did that machine learning model improve the accuracy by at least 20%, but there was no visible degradation in performance, even though this simulation required about 1.6 million inferences per second.&lt;/p&gt;
&lt;p&gt;SmartSim is great for users who have very specialized domain expertise in things like computational fluid dynamics or molecular dynamics; but don’t necessarily have the knowledge set to deploy the machine learning infrastructure for high performance. SmartSim removes that technical barrier for users so that they can spend more time experimenting with combining scientific simulations and ML.&lt;/p&gt;
&lt;p&gt;For more information on SmartSim, please reference the &lt;a href=&quot;https://developer.hpe.com/platform/smartsim/home&quot;&gt;SmartSim page&lt;/a&gt; found on the &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE Developer Community portal&lt;/a&gt;. You can also access &lt;a href=&quot;https://github.com/CrayLabs/SmartSim&quot;&gt;SmartSim’s GitHub page&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Celebrating Open Source Month at HPE]]></title><link>https://developer.hpe.com/2023-February-01/</link><guid isPermaLink="false">https://developer.hpe.com/2023-February-01/</guid><pubDate>Wed, 01 Feb 2023 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Learn what you can do with HPE Data Services Cloud Console API in just 3 minutes]]></title><description><![CDATA[HPE Data Services Cloud Console, available through the HPE GreenLake edge-to-cloud platform, is a Software-as-a-service (SaaS) based cloud…]]></description><link>https://developer.hpe.com/learn-what-you-can-do-with-hpe-data-services-cloud-console-api-in-just-3-minutes/</link><guid isPermaLink="false">https://developer.hpe.com/learn-what-you-can-do-with-hpe-data-services-cloud-console-api-in-just-3-minutes/</guid><pubDate>Fri, 27 Jan 2023 14:17:03 GMT</pubDate><content:encoded>&lt;p&gt;&lt;a href=&quot;https://developer.hpe.com/greenlake/data-services-cloud-console/home/&quot;&gt;HPE Data Services Cloud Console&lt;/a&gt;, available through the &lt;a href=&quot;https://developer.hpe.com/greenlake/hpe-greenlake-cloud-platform/home/&quot;&gt;HPE GreenLake edge-to-cloud platform&lt;/a&gt;, is a Software-as-a-service (SaaS) based cloud console application that delivers a suite of cloud data services that enable a unified data operations as a service for storage infrastructure, simplify storage and data management, and bring the cloud experience to wherever data lives. Data Services Cloud Console also offers a unified and fully programmable API that enables developers to automate data infrastructure management.&lt;/p&gt;
&lt;p&gt;If you’re looking for a quick way to discover everything you can do with the HPE GreenLake Data Services Cloud Console API using popular tools that don’t require programming, such as Postman, this blog post is definitely for you.&lt;/p&gt;
&lt;p&gt;As you know, one of the benefits of working within a community is the ability to take advantage of open collaboration, sharing hints, tools, and resources. This is exactly what I am doing here. This post helps you get started with the Data Services Cloud Console API for HPE GreenLake for Block Storage cloud data service by taking advantage of the Postman collection contributed by one of our HPE Developer Community members.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This blog post assumes you have created an &lt;a href=&quot;https://console.greenlake.hpe.com/&quot;&gt;HPE GreenLake account&lt;/a&gt; and joined your account to your company account (also called an &lt;em&gt;&lt;strong&gt;organization&lt;/strong&gt;&lt;/em&gt;). You also got assigned appropriate roles and permissions by the administrator for your organization in order to access HPE data services resources (for example storage arrays and volumes) through the Data Services Cloud Console application instances. A Data Service Cloud Console application instance is a service cluster running in one of the HPE regions.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;Data Services Cloud Console and REST API&lt;/h3&gt;
&lt;p&gt;Data Services Cloud Console supports a set of REST APIs that allows users to integrate HPE Data Services Cloud Console with their custom applications. By using the &lt;a href=&quot;https://oauth.net/2/&quot;&gt;OAuth 2.0 protocol&lt;/a&gt; to authenticate and authorize applications, secure and time-limited (&lt;em&gt;&lt;strong&gt;120 minutes&lt;/strong&gt;&lt;/em&gt;) access to HPE data services is provided via an &lt;strong&gt;access token&lt;/strong&gt;. The token ensures that client API requests access HPE data services for the requested operation securely and according to the authorization granted to the user who created it.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; You can find the Data Services Cloud Console API documentation &lt;a href=&quot;https://console-us1.data.cloud.hpe.com/doc/api/v1/&quot;&gt;here&lt;/a&gt; and in the help section of the &lt;a href=&quot;https://console.greenlake.hpe.com/&quot;&gt;HPE GreenLake Cloud Platform&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The REST APIs support standard HTTP request methods (GET, POST, PATCH, PUT and DELETE). An HTTP request is made by providing a specific HPE regional connectivity endpoint for the Data Services Cloud Console application instance, the HTTP request method, an access token and a data payload. The HTTP responses for these requests are returned in JSON format.&lt;/p&gt;
&lt;p&gt;Currently, there are three HPE regional Data Services Cloud Console application instance endpoints:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;EU Central&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://eu1.data.cloud.hpe.com&quot;&gt;https://eu1.data.cloud.hpe.com&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;AP Northeast&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://jp1.data.cloud.hpe.com&quot;&gt;https://jp1.data.cloud.hpe.com&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;US West&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://us1.data.cloud.hpe.com&quot;&gt;https://us1.data.cloud.hpe.com&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;HPE GreenLake Cloud Platform allows developers to make API calls on a particular regional Data Services Cloud Console customer instance. Using the API functionality in the HPE GreenLake Cloud Platform graphical user interface (GUI), developers can create their &lt;strong&gt;API client application&lt;/strong&gt; credentials. The credentials consist of a &lt;em&gt;ClientID-ClientSecret&lt;/em&gt; pair that represents the permissions granted to the user who creates the API client application credentials. The credentials are then used to generate an OAuth-based access token and to refresh it once expired. Once the token is generated or refreshed, it can be used as an &lt;strong&gt;authorization bearer token&lt;/strong&gt; to make further secure REST API calls to protected HPE data services resources via the regional Data Services Cloud Console application instance.&lt;/p&gt;
&lt;p&gt;You can refer to &lt;a href=&quot;https://developer.hpe.com/blog/api-console-for-data-services-cloud-console/&quot;&gt;this blog post&lt;/a&gt; to learn how to create API client application credentials for your specific regional Data Services Cloud Console application instance. Make sure to copy the &lt;em&gt;ClientID&lt;/em&gt; and &lt;em&gt;ClientSecret&lt;/em&gt; values to a safe location as you will need them to generate the access token via a REST API call, which is necessary to complete the following steps.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Ready? Let’s get started!&lt;/strong&gt;&lt;/p&gt;
&lt;h3&gt;Step 1 – Sign in to your Postman account&lt;/h3&gt;
&lt;p&gt;You can sign in to your Postman account either from the &lt;a href=&quot;https://identity.getpostman.com/login&quot;&gt;web app&lt;/a&gt; or from the desktop app. If you don’t have a Postman account already, you can sign up for a Postman account &lt;a href=&quot;https://identity.getpostman.com/signup&quot;&gt;here&lt;/a&gt; or download the desktop app &lt;a href=&quot;https://www.postman.com/downloads/&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Step 2 – Copy the existing HPE GreenLake Data Services Cloud Console API public collection&lt;/h3&gt;
&lt;p&gt;Upon logging in to your Postman account, from the &lt;em&gt;&lt;strong&gt;Search&lt;/strong&gt;&lt;/em&gt; bar, look for the public collection &quot;&lt;strong&gt;Data Services Cloud Console API&lt;/strong&gt;&quot; and select the public collection from our community contributor, &lt;a href=&quot;mailto:mark.van.silfhout@hpe.com&quot;&gt;Mark van Silfhout&lt;/a&gt;, as shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/search-dscc-collection-postman-figure1.png&quot; alt=&quot;The Data Services Cloud Console API public collection&quot; title=&quot;The Data Services Cloud Console API public collection&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style=&quot;color:grey; font-family:Arial; font-size:1em&quot;&gt;Figure 1: The Data Services Cloud Console API public collection.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;Fork the collection to make a copy of it in your Postman workspace. This allows you to work with your own copy of the collection and perform changes without affecting the parent collection. Select the &lt;em&gt;&lt;strong&gt;more actions&lt;/strong&gt;&lt;/em&gt; icon (the ellipsis) next to the collection, then select &lt;em&gt;&lt;strong&gt;Create a fork&lt;/strong&gt;&lt;/em&gt;. When you fork the public collection, you can choose to watch the original collection to be notified about changes made to the parent collection. This allows you to pull updates from the parent collection into your forked copy, should the parent collection be updated.&lt;/p&gt;
&lt;p&gt;Alternatively, you can export the public collection to your local storage as a JSON file and import it as a new collection in your Postman workspace.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; You must sign in to your Postman account to create a fork or export the public collection. To fork a collection within a public workspace, you must also enable your public profile in your Postman profile settings.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;Step 3 – Set your Data Services Cloud Console collection variables&lt;/h3&gt;
&lt;p&gt;The Data Services Cloud Console API collection built by Mark makes use of collection variables that are available throughout the REST API requests in the collection. Select the collection in your workspace and then select the &lt;em&gt;&lt;strong&gt;Variables&lt;/strong&gt;&lt;/em&gt; tab as shown below.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/variables-dscc-collection-postman-figure2.png&quot; alt=&quot;HPE Data Services Cloud Console collection variables&quot; title=&quot;HPE Data Services Cloud Console collection variables&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style=&quot;color:grey; font-family:Arial; font-size:1em&quot;&gt;Figure 2: HPE Data Services Cloud Console collection variables.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;Define the &lt;em&gt;current value&lt;/em&gt; of the collection variables to match your Data Services Cloud Console context:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;baseUrl&lt;/strong&gt;: This variable defines the base URL of the REST API requests. It should match the regional endpoint of your Data Services Cloud Console application instance where your storage devices are registered.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;ClientId&lt;/strong&gt; and &lt;strong&gt;ClientSecret&lt;/strong&gt;: These should be set with the values of the API client application credentials you previously created using the HPE GreenLake Cloud Platform GUI. These variables are used to request an OAuth access token by authenticating with the authorization server referenced in the &lt;strong&gt;sso_URI&lt;/strong&gt; variable.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;sso_URI&lt;/strong&gt;: This variable is the URI of the OAuth authorization server. If your organization has set up their own HPE GreenLake SAML Single Sign-On (SSO) authorization server to create an access token, replace the current default value with your SSO URI. Otherwise keep the value for this variable as currently set to &lt;em&gt;sso.common.cloud.hpe.com/as/token.oauth2&lt;/em&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;BearerToken:&lt;/strong&gt; Do not edit this variable. Keep the value field empty. The collection variable BearerToken will be set automatically upon successful execution of the &lt;em&gt;&lt;strong&gt;GetToken&lt;/strong&gt;&lt;/em&gt; API call as explained in the next step.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Step 4 – Acquire an OAuth access token as your session bearer token&lt;/h3&gt;
&lt;p&gt;Data Services Cloud Console API uses a bearer token as an authorization type to ensure that all REST API requests access authorized data services securely. So you first need to obtain a token from the OAuth authorization server before you can make any REST API calls to your regional Data Services Cloud Console application instance. To do so, proceed as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;From your collection, generate the token using the &lt;em&gt;&lt;strong&gt;GetToken&lt;/strong&gt;&lt;/em&gt; API call from the &lt;em&gt;&lt;strong&gt;GetToken-Using-Variables&lt;/strong&gt;&lt;/em&gt; folder.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Verify you get a status code of 200 for a successful response with the token value in the response body.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Check that the token value has been automatically defined for the collection variable &lt;em&gt;&lt;strong&gt;BearerToken&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The &lt;em&gt;GetToken&lt;/em&gt; API call has defined a script in the &lt;em&gt;&lt;strong&gt;Tests&lt;/strong&gt;&lt;/em&gt; tab to programmatically set the collection variable BearerToken as shown in the picture below. The programmatically defined token is then used to authenticate any subsequent REST API calls.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/tests-capturebearertoken-dscc-collection-postman-figure3.png&quot; alt=&quot;Defining collection variables programmatically in script&quot; title=&quot;Defining collection variables programmatically in script&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style=&quot;color:grey; font-family:Arial; font-size:1em&quot;&gt;Figure 3: Defining collection variables programmatically in script.&lt;/span&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The access bearer token expires after 120 minutes. Run the &lt;em&gt;GetToken&lt;/em&gt; API request again to refresh the token before or after it expires.&lt;/p&gt;
&lt;/blockquote&gt;
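&lt;p&gt;Outside of Postman, the equivalent token request can be sketched with curl (the endpoint is the default &lt;em&gt;sso_URI&lt;/em&gt; from Step 3 and the parameters follow the standard OAuth 2.0 client-credentials grant; double-check the API documentation for the exact values expected in your environment):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;# CLIENT_ID and CLIENT_SECRET are the API client application credentials created earlier
$ curl -s -X POST https://sso.common.cloud.hpe.com/as/token.oauth2 \
    -H &quot;Content-Type: application/x-www-form-urlencoded&quot; \
    -d &quot;grant_type=client_credentials&quot; \
    -d &quot;client_id=${CLIENT_ID}&quot; \
    -d &quot;client_secret=${CLIENT_SECRET}&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The JSON response should contain the access token, which is the value the collection stores in the &lt;em&gt;BearerToken&lt;/em&gt; variable.&lt;/p&gt;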
&lt;h3&gt;Step 5 – Make subsequent secure REST API calls&lt;/h3&gt;
&lt;p&gt;The client REST API requests are authenticated by presenting the access token as the authorization bearer token to the regional Data Services Cloud Console application instance. The instance validates the access token, and if valid, serves the request.&lt;/p&gt;
&lt;p&gt;Pick one REST API call from the &lt;em&gt;&lt;strong&gt;storage-systems&lt;/strong&gt;&lt;/em&gt; folder to &lt;em&gt;&lt;strong&gt;Get all storage systems&lt;/strong&gt;&lt;/em&gt; registered with your regional Data Services Cloud Console application instance.&lt;/p&gt;
&lt;p&gt;As shown in the two pictures below, all REST API requests in the collection will inherit the authorization bearer token that is specified at the collection level.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/authorization-dscc-collection-postman-figure4.png&quot; alt=&quot;Authorization type (bearer token) specified at the collection level&quot; title=&quot;Authorization type (bearer token) specified at the collection level&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style=&quot;color:grey; font-family:Arial; font-size:1em&quot;&gt;Figure 4: Authorization type (bearer token) specified at the collection level.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/authorization-inherited-dscc-collection-postman-figure5.png&quot; alt=&quot;REST API request with authorization type inherited from parent collection&quot; title=&quot;REST API request with authorization type inherited from parent collection&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style=&quot;color:grey; font-family:Arial; font-size:1em&quot;&gt;Figure 5: REST API request with authorization type inherited from parent collection.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;As depicted in the figure below, the API supports several query parameters (click on Params tab in the request) depending on the resource type, such as filter (filter the set of resources returned), limit (maximum number of records to return), offset (resource offset to start the response from) and sort (order in which to return the resources in the collection).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/getallstoragesystems-dscc-collection-postman-figure6.png&quot; alt=&quot;REST API request with query parameter to search for storage array of type HPE Alletra 9060&quot; title=&quot;REST API request with query parameter to search for storage array of type HPE Alletra 9060&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style=&quot;color:grey; font-family:Arial; font-size:1em&quot;&gt;Figure 6: REST API request with query parameter to search for storage array of type HPE Alletra 9060.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;The query parameters are indicated after the question mark (“?”) in the REST API URL. Select and adjust the query parameters according to your environment. Make sure to refer to &lt;a href=&quot;https://console-us1.data.cloud.hpe.com/doc/api/v1/&quot;&gt;the API documentation&lt;/a&gt; to understand the query parameters that can be used for each HPE Data Services Cloud Console API request.&lt;/p&gt;
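&lt;p&gt;If you prefer to double-check the same call outside of Postman, a rough curl equivalent might look like the following (the endpoint and filter mirror the example response shown below; adjust the region and the token to your environment):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;# TOKEN holds the access token obtained in Step 4
$ curl -s -H &quot;Authorization: Bearer ${TOKEN}&quot; \
    &quot;https://us1.data.cloud.hpe.com/api/v1/storage-systems?filter=model%20eq%20%22HPE%20Alletra%209060%22&quot;
&lt;/code&gt;&lt;/pre&gt;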
&lt;p&gt;Finally click the &lt;strong&gt;Send&lt;/strong&gt; button. You will get a JSON representation of the storage system resources registered based on the query parameters specified. Here, I am getting the list of storage systems of type “&lt;em&gt;HPE Alletra 9060&lt;/em&gt;”.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
    &quot;items&quot;: [
        {
            &quot;systemWWN&quot;: &quot;2FF70002AC07EB29&quot;,
            &quot;name&quot;: &quot;s3294&quot;,
            &quot;id&quot;: &quot;4UW0003294&quot;,
            &quot;description&quot;: &quot;System s3294&quot;,
            &quot;mgmtIp&quot;: &quot;16.182.41.5&quot;,
            &quot;softwareVersions&quot;: &quot;9.5.0&quot;,
            &quot;model&quot;: &quot;HPE Alletra 9060&quot;,
            &quot;productFamily&quot;: &quot;deviceType1&quot;,
            &quot;state&quot;: &quot;NORMAL&quot;,
            &quot;callhomeStatus&quot;: &quot;ENABLED_NORMAL&quot;,
            &quot;resourceUri&quot;: &quot;/api/v1/storage-systems/device-type1/4UW0003294&quot;,
            &quot;upSince&quot;: 1670726169000,
            &quot;fqdn&quot;: &quot;s3294.mip.storage.hpecorp.net&quot;,
            &quot;capacityDetail&quot;: {
                &quot;volumeSpace&quot;: 21092875,
                &quot;snapSpace&quot;: 1676167,
                &quot;totalUsedSpace&quot;: 22769042
            },
            &quot;associatedLinks&quot;: [
                {
                    &quot;type&quot;: &quot;storage-pools&quot;,
                    &quot;resourceUri&quot;: &quot;/api/v1/storage-systems/4UW0003294/storage-pools&quot;
                },
                {
                    &quot;type&quot;: &quot;volumes&quot;,
                    &quot;resourceUri&quot;: &quot;/api/v1/storage-systems/4UW0003294/volumes&quot;
                },
                {
                    &quot;type&quot;: &quot;swupdatestatus&quot;,
                    &quot;resourceUri&quot;: &quot;/api/v1/storage-systems/4UW0003294/swupdate/status&quot;
                }
            ],
            &quot;arrayList&quot;: null,
            &quot;collectionStatus&quot;: {
                &quot;metricStatus&quot;: {
                    &quot;status&quot;: &quot;NORMAL&quot;
                },
                &quot;configStatus&quot;: {
                    &quot;status&quot;: &quot;NORMAL&quot;
                },
                &quot;overAllStatus&quot;: &quot;NORMAL&quot;
            },
            &quot;lastConnectedTime&quot;: 1674674119,
            &quot;connectionStatus&quot;: &quot;CONNECTED&quot;,
            &quot;customerId&quot;: &quot;c93de31c382811ecb478320c1ce21c93&quot;,
            &quot;generation&quot;: 1674674119,
            &quot;type&quot;: &quot;storage-system&quot;,
            &quot;tierType&quot;: &quot;STORAGE_TIER_9000_NVME&quot;,
            &quot;tierName&quot;: &quot;HPE Alletra 9000&quot;
        }
    ],
    &quot;total&quot;: 1,
    &quot;pageLimit&quot;: 50,
    &quot;pageOffset&quot;: 0,
    &quot;requestUri&quot;: &quot;https://us1.data.cloud.hpe.com/api/v1/storage-systems?filter=model%20eq%20%22HPE%20Alletra%209060%22&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;That’s it. No more than 3 minutes!&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;This blog post gives you a great example of how to obtain the access token using an API call and should help you get started with the Data Services Cloud Console REST API for the &lt;em&gt;HPE GreenLake for Block Storage&lt;/em&gt; data service using Postman. Additional HPE data services APIs will be published as the cloud data services delivered through the HPE Data Services Cloud Console expand. Make sure you stay tuned for any updates to Mark’s public Postman collection for the Data Services Cloud Console API.&lt;/p&gt;
&lt;p&gt;If you have more time, I invite you to further explore the rest of the collection on your own, while adjusting the query parameters to match your Data Services Cloud Console context. Also, don’t hesitate to provide Mark with feedback on his very convenient collection.&lt;/p&gt;
&lt;p&gt;Any questions on HPE GreenLake Data Services Cloud Console API? Please join the &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;HPE Developer Slack Workspace&lt;/a&gt; and start a discussion in our &lt;a href=&quot;https://hpedev.slack.com/archives/C02D6H623JP&quot;&gt;#hpe-greenlake-data-services-cloud-console&lt;/a&gt; channel.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Open sourcing Workshops-on-Demand - Part 1: Why and How]]></title><description><![CDATA[A﻿ bit of background T﻿he Workshops-on-Demand program  has been an important asset for the HPE Developer Community for the last 2 years. If…]]></description><link>https://developer.hpe.com/willing-to-build-up-your-own-workshops-on-demand-infrastructure/</link><guid isPermaLink="false">https://developer.hpe.com/willing-to-build-up-your-own-workshops-on-demand-infrastructure/</guid><pubDate>Tue, 24 Jan 2023 12:35:46 GMT</pubDate><content:encoded>&lt;h1&gt;A﻿ bit of background&lt;/h1&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.hpe.com/hackshack/workshops/&quot;&gt;The Workshops-on-Demand program&lt;/a&gt; has been an important asset for the HPE Developer Community for the last 2 years. If you are interested in learning more about the genesis of the project, check out the following &lt;a href=&quot;https://developer.hpe.com/blog/from-jupyter-notebooks-as-a-service-to-hpe-dev-workshops-on-demand/&quot;&gt;blog&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This program allows us to deliver free hands-on workshops. We use these during HPE-sponsored events, like HPE Discover and HPE Technical Symposium Summit, as well as open source events, like Open Source Summit and KubeCon, to promote API/automation-driven solutions along with some 101-level coding courses. By the end of 2022, more than 4,000 people had registered for our workshops.&lt;/p&gt;
&lt;p&gt;If you are interested in creating your own training architecture to deliver workshops, this project is definitely for you. It would allow you to create and manage content and deliver it in an automated and very efficient way. Once ready, the infrastructure can deliver workshops in a matter of minutes!&lt;/p&gt;
&lt;p&gt;Trying to build a similar architecture on your own is obviously possible, but we have integrated a great deal of automation around the different components, like the JupyterHub server deployment along with multiple pre-installed kernels, user management, and much more. When leveraging our project, one can actually get a proper public-based environment in 2 hours.&lt;/p&gt;
&lt;p&gt;This very first article will set the stage for the following blog articles, where I will explain how to set up your own Workshops-on-Demand infrastructure.&lt;/p&gt;
&lt;h2&gt;Why would we consider open sourcing our Workshops-on-Demand?&lt;/h2&gt;
&lt;p&gt;Firstly, if you carefully read the messaging on our &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;homepage&lt;/a&gt;, you will find words like sharing and collaborating. This is part of the HPE Developer team&apos;s DNA.&lt;/p&gt;
&lt;p&gt;Secondly, the project is based on open source technologies like Jupyter and Ansible. It felt natural that the work we did leveraging these should also benefit the open source community.&lt;/p&gt;
&lt;p&gt;We have, actually, shared the fundamentals of the project throughout the HPE Developer Community, and to a wider extent, the Open Source Community, through different internal and external events. And the feedback has always been positive. Some people found the project very appealing. Originally, and long before even thinking of open sourcing the project, when we really started the project development, people were mainly interested in the content and not necessarily in the infrastructure. The students wanted to be able to reuse some of the notebooks. And in a few cases, they also asked for details about the infrastructure itself, asking about the notebook delivery mechanism and other subjects like the &lt;a href=&quot;https://www.youtube.com/watch?v=zZm6ObQATDI&quot;&gt;procmail API&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Early last year, we were contacted by an HPE colleague who was willing to replicate our setup in order to deliver notebooks to his AI/ML engineers. His aim was to provide a simple, central point from which he could deliver Jupyter Notebooks that would later be published on the Workshops-on-Demand infrastructure frontend portal, allowing content to be reused and shared amongst engineers. While, over time, we had worked a lot on automating content delivery and some parts of the infrastructure setup, we now needed to rework and package the overall solution to make it completely open source and reusable by others.&lt;/p&gt;
&lt;p&gt;As a result of our work on that project, over the course of 2022 we started to open source the Workshops-on-Demand program. As a project developed within the confines of Hewlett Packard Enterprise (HPE), we had a number of technical, branding, and legal hurdles we needed to overcome in order to achieve this.&lt;/p&gt;
&lt;h4&gt;Legal side of things&lt;/h4&gt;
&lt;p&gt;From a legal standpoint, we needed to go through the HPE OSRB (Open Source Review Board) to present the project that we wanted to open source. We were asked to follow a process that consisted of four steps:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/wod-osrb1.png&quot; alt=&quot;HPE OSRB Process&quot; title=&quot;HPE OSRB process&quot;&gt;&lt;/p&gt;
&lt;p&gt;As the project did not contain any HPE proprietary software (it is based on open source technologies like Ansible and Jupyter), the process was quite straightforward. Besides, HPE did not want to commercially exploit the generated intellectual property. We explained to the OSRB that the new architecture of the solution would allow the administrator of the project to separate public content from private content, with private content covering proprietary technology.&lt;/p&gt;
&lt;p&gt;This had a huge influence on the future architecture of the project, which originally did not allow such a separation. In our case, for instance, any workshop related to an HPE technology, like HPE Ezmeral, would fall into the private part of the project and, therefore, would not appear on the public GitHub repository that we had to create for the overall project distribution.&lt;/p&gt;
&lt;h4&gt;Technical side of things&lt;/h4&gt;
&lt;p&gt;From a technical standpoint, as mentioned above, we had to make sure to separate the public-only content from any possible private content. We started by sorting the different workshops (public vs. private based). We also had to sort the related scripts that come along with the workshops. Going through this process, we found out that some of the global scripts had to be reworked as well to support any future split of public and private content. Similarly, we had to address any brand specifics, parameterizing variables instead of hardcoding them as was the case in the first version.&lt;/p&gt;
&lt;p&gt;This took us a few months and we are now ready to share with you the result of this work. In this first blog, I will focus on the architecture side of the Workshops-on-Demand project.&lt;/p&gt;
&lt;p&gt;Further blog articles will help you set up your own architecture.&lt;/p&gt;
&lt;h2&gt;The How&lt;/h2&gt;
&lt;h2&gt;Understand the architecture first&lt;/h2&gt;
&lt;p&gt;The Workshops-on-Demand concept is fairly simple. The following picture gives you a general idea of how the process works.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/howto-wod-6.png&quot; alt=&quot;Workshops-on-Demand Concepts 1&quot; title=&quot;Workshops-on-Demand Concepts 10000 feet view&quot;&gt;&lt;/p&gt;
&lt;p&gt;Now that you understand the basic principle, let&apos;s look at the details. The image below shows what happens at each stage from a protocol standpoint.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/howto-wod-10.png&quot; alt=&quot;&quot; title=&quot;Workshops-on-Demand Architecture Diagram&quot;&gt;&lt;/p&gt;
&lt;h3&gt;The Register Phase&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;1.&lt;/strong&gt; Participants start by browsing a frontend web server that presents the catalog of available workshops. They then select one and register for it by entering their email address, first and last names, as well as their company name. Finally, they accept the terms and conditions and hit the register button.&lt;/p&gt;
&lt;p&gt;As the register button is clicked, the frontend server performs a series of actions.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;2.&lt;/strong&gt; Assigns a student (e.g. student401) from the dedicated workshop range to the participant. Every workshop has a dedicated range of students assigned to it.&lt;/p&gt;
&lt;p&gt;Here is a screenshot of the workshop table present in the frontend database server showing the API 101 workshop&apos;s details.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/howto-wod-2.png&quot; alt=&quot;Workshops Table from Frontend DB server&quot; title=&quot;Workshops Table from Frontend DB server&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Frederic Passeron gets assigned a studentid &quot;student397&quot; for workshop &quot;API101&quot;.&lt;/p&gt;
&lt;p&gt;Here are the details of the participant&apos;s info once registered for a given workshop.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/howto-wod-3.png&quot; alt=&quot;Customers Table from Frontend DB server&quot; title=&quot;Customers Table from Frontend DB server&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;3.&lt;/strong&gt; An initial email is sent to participants from the frontend server, welcoming them to the workshop and informing them that the deployment is ongoing and that a second email will arrive shortly with the information required to log into the workshop environment.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;4.&lt;/strong&gt; At the same time, the frontend server sends the necessary orders to the backend server through a procmail API call. The mail sent to the backend server contains the following details (an illustrative sketch follows the list):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Action Type (CREATE, CLEANUP, RESET)&lt;/li&gt;
&lt;li&gt;Workshop ID&lt;/li&gt;
&lt;li&gt;Student ID&lt;/li&gt;
&lt;/ul&gt;
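&lt;p&gt;The exact mail format is internal to the project, but the following minimal sketch illustrates the kind of order the frontend could send; the recipient address, subject and field names are purely hypothetical.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Illustrative sketch only: recipient, subject and field names are hypothetical,
# not the project&apos;s actual format.
mail -s &quot;CREATE WKSHP-API101 student397&quot; wod-backend@example.internal &amp;#x3C;&amp;#x3C;EOF
ACTION=CREATE
WORKSHOP_ID=WKSHP-API101
STUDENT_ID=student397
EOF
&lt;/code&gt;&lt;/pre&gt;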
&lt;p&gt;&lt;strong&gt;5.&lt;/strong&gt; The backend server receives the order and processes it by parsing the received email using the procmail API. The procmail API automates the management of the workshops.&lt;/p&gt;
&lt;p&gt;Like any API, it uses verbs to perform tasks.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;CREATE to deploy a workshop&lt;/li&gt;
&lt;li&gt;CLEANUP to delete a workshop&lt;/li&gt;
&lt;li&gt;RESET to reset the associated workshop&apos;s resources&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;CREATE subtasks:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;6.&lt;/strong&gt; It prepares any infrastructure that might be required for the workshop (Virtual Appliance, Virtual Machine, Docker Container, LDAP config, etc.).&lt;/li&gt;
&lt;li&gt;It generates a random password for the allocated student.&lt;/li&gt;
&lt;li&gt;It deploys the workshop content on the JupyterHub server in the dedicated student home directory (the notebook files necessary for the workshop).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;7.&lt;/strong&gt; It sends the confirmation of the deployment of the workshop, along with the student&apos;s required details (i.e., the password), back to the frontend server through API calls (a minimal sketch of these subtasks follows this list).&lt;/li&gt;
&lt;/ul&gt;
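&lt;p&gt;To make these subtasks more concrete, here is a minimal, purely illustrative shell sketch of what the backend could run for a CREATE order. The workshop directory layout, the frontend API endpoint and the token are assumptions made for the sake of the example, not the project&apos;s actual implementation.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;#!/bin/bash
# Illustrative sketch only: paths, endpoint and token are hypothetical.
STUDENT=student397
WORKSHOP=WKSHP-API101

# Generate a random password for the allocated student and apply it
PASSWORD=$(openssl rand -base64 12)
echo &quot;${STUDENT}:${PASSWORD}&quot; | chpasswd

# Deploy the workshop notebooks into the student&apos;s home directory
cp -r &quot;/opt/workshops/${WORKSHOP}/notebooks/.&quot; &quot;/home/${STUDENT}/&quot;
chown -R &quot;${STUDENT}:${STUDENT}&quot; &quot;/home/${STUDENT}&quot;

# Confirm the deployment back to the frontend through its API
curl -X PUT &quot;https://frontend.example.internal/api/customers/${STUDENT}&quot; \
     -H &quot;Authorization: Bearer ${API_TOKEN}&quot; \
     -H &quot;Content-Type: application/json&quot; \
     -d &quot;{\&quot;active\&quot;: true, \&quot;password\&quot;: \&quot;${PASSWORD}\&quot;}&quot;
&lt;/code&gt;&lt;/pre&gt;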
&lt;p&gt;&lt;strong&gt;8.&lt;/strong&gt; The frontend server tables will be updated in the following manner:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The customers table shows an active status for the participant row. The password field has been updated.&lt;/li&gt;
&lt;li&gt;The workshop table also gets updated. The capacity field decrements the number of available seats.&lt;/li&gt;
&lt;li&gt;The students table gets updated as well, setting the allocated student to active.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;9.&lt;/strong&gt; The frontend server sends the second email to each participant, providing them with the details to connect to the workshop environment.&lt;/p&gt;
&lt;h3&gt;The Run Phase&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;10.&lt;/strong&gt; From the email, the participant clicks on the start button. This opens a browser to the JupyterHub server and directly opens the &quot;read me first&quot; notebook, which presents the workshop&apos;s flow.&lt;/p&gt;
&lt;p&gt;Participants will go through the different steps and labs of the workshop connecting to the necessary endpoints and leveraging the different kernels available on the JupyterHub server.&lt;/p&gt;
&lt;p&gt;Meanwhile, the frontend server will perform regular checks on how much time has passed. Depending on the time allocation (from 2 to 4 hours) associated with the workshop, the frontend server will send a reminder email, usually an hour before the end of the allocated time. The time count actually starts when participants hit the register button, as mentioned in the terms and conditions.&lt;/p&gt;
&lt;p&gt;Finally, when the time is up, the frontend server sends a new order to the backend to perform either a CLEANUP or a RESET action for the dedicated student ID.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;RESET subtasks:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;It resets any infrastructure that was required for the workshop (Virtual Appliance, Virtual Machine, Docker Container, LDAP config, etc.).&lt;/li&gt;
&lt;li&gt;It generates a random password for the allocated student.&lt;/li&gt;
&lt;li&gt;It deletes the workshop content from the dedicated student home directory on the JupyterHub server (the notebook files necessary for the workshop).&lt;/li&gt;
&lt;li&gt;It sends the confirmation of the CLEANUP or RESET of the workshop, along with the student details (i.e., the password), back to the frontend server through API calls.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The frontend server tables will be updated in the following manner:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The customers table will show an inactive status for the participant row. The password field has been updated.&lt;/li&gt;
&lt;li&gt;The workshop table also gets updated. The capacity field increments the number of available seats.&lt;/li&gt;
&lt;li&gt;The students table gets updated as well, setting the allocated student to inactive.&lt;/li&gt;
&lt;li&gt;The frontend sends the final email to the participant.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;The React and Reward Phase&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;A final email thanks the students for their participation. It provides a link to an external survey and encourages the participants to share their achievement &lt;a href=&quot;https://developer.hpe.com/blog/become-a-legend/&quot;&gt;badge&lt;/a&gt; on social media like LinkedIn or Twitter.&lt;/p&gt;
&lt;p&gt;Et voilà!&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;With this very first article, I wanted to set the stage for the following articles, where I will explain how to set up your own Workshops-on-Demand infrastructure. We will start by looking at the &quot;JupyterHub&quot; side of things. I will detail how to set it up depending on your use case (public only vs. public and private). Then, I will move on to the workshop development part, from the notebook development to the automation that needs to come along with it in order to be properly integrated into the overall solution. Finally, the last article will cover the frontend side. It will show you how to deploy it and more. I may also cover the appliance management aspect in another dedicated article.&lt;/p&gt;
&lt;p&gt;Please be sure to drop back at &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE DEV&lt;/a&gt; for a follow up on this. Check out also the Hack Shack for new &lt;a href=&quot;https://developer.hpe.com/hackshack/workshops&quot;&gt;workshops&lt;/a&gt;! We are currently working on different subjects like Data Visualization 101 or HPE GreenLake for Compute Operations Management API 101. Stay tuned!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Living the open source culture at HPE]]></title><description><![CDATA[Here at Hewlett Packard Enterprise (HPE), we like to say that “open collaboration is in HPE’s DNA”. We believe open source technologies and…]]></description><link>https://developer.hpe.com/living-the-open-source-culture-at-hpe/</link><guid isPermaLink="false">https://developer.hpe.com/living-the-open-source-culture-at-hpe/</guid><pubDate>Fri, 20 Jan 2023 20:38:44 GMT</pubDate><content:encoded>&lt;p&gt;Here at Hewlett Packard Enterprise (HPE), we like to say that “open collaboration is in HPE’s DNA”. We believe &lt;a href=&quot;https://www.hpe.com/us/en/open-source.html&quot;&gt;open source technologies and communities&lt;/a&gt; can help deliver innovative solutions securely and at scale.  Certainly, open source has a lot to do with the technologies we deliver. But, importantly, it also has a lot to do with how we conduct business on a daily basis, like connecting across teams and business units to collaborate, play, and learn together. &lt;/p&gt;
&lt;p&gt;HPE has an Open Source Program Office where we share projects and ideas during our weekly open source office hours – discussions in these sessions have even inspired some inner-sourcing projects.  We have an “Open for Challenge” team that competes with and against other company teams in our wellness challenges (right now, we’re tracking our steps to put open source fans at HPE on our company-wide leader board). But, my favorite collaboration activity is our Open Source &amp;#x26; Tech Bookclub. Any HPE employee can join our internal Slack channel dedicated to discussing and exploring everything from legal tomes to cyberpunk fiction. &lt;/p&gt;
&lt;p&gt;2022 was our inaugural year where we covered the following selections:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;The Cathedral &amp;#x26; the Bazaar:&lt;/em&gt; Musings on Linux and Open Source by an Accidental Revolutionary by Eric S. Raymond (1999).  I re-read this seminal treatise for the bookclub and was struck by just how many of ESR’s insights still hold true 20+ years later.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;The Phoenix Project&lt;/em&gt; by Gene Kim (2018) is so engaging and fast-paced that it is practically a DevOps beach read.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;The Business and Economics of Linux and Open Source&lt;/em&gt; by Martin Fink (2003).  Martin, originally part of HPE’s parent company HP, actually dropped by for our discussion which added a unique perspective to the topics covered in this book.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;The Code Book: The Science of Secrecy from Ancient Egypt to Quantum Cryptography&lt;/em&gt; by Simon Singh (2000) is one of my favorite books on cryptography – the author uses fantastically accessible analogies, includes a treasure hunt and makes quantum cryptography as understandable as possible without an advanced degree.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;The Making and Maintenance of Open Source&lt;/em&gt; by Nadia Eghbal (2020) tackles one of the more recent issues in the sustainability of open source projects.  This topic was meaty enough that we held a live happy hour for this discussion!&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Once the bookclub had a baseline established, we held an active channel-wide vote on which books to expand our club into for the second year.  Here are our winners for 2023:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Life 3.0: Being Human in the Age of Artificial Intelligence&lt;/em&gt; by Max Tegmark (2017) – I’ve only read the first chapter so far but it poses lots of provocative questions about the future.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Open Source Law, Policy and Practice - Second Edition&lt;/em&gt;, Edited by Amanda Brock (2022) (this one is available for free download from &lt;a href=&quot;https://global.oup.com/academic/product/open-source-law-policy-and-practice-9780198862345?cc=us&amp;#x26;lang=en&amp;#x26;&quot;&gt;Oxford University Press&lt;/a&gt;).  This one is jam-packed with articles by some of the leading minds in the open source world.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;The Code Breaker: Jennifer Doudna, Gene Editing, and the Future of the Human Race&lt;/em&gt; by Walter Isaacson – I haven’t started this one yet but I think it will be fascinating.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Snow Crash&lt;/em&gt; by Neal Stephenson – the most amazing thing about this cyberpunk novel is that it was envisioned in 1992.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I’m looking forward to the discussions on all these books both in our virtual channel as well as our scheduled live discussions.  I’m also looking forward to our virtual globe-trotting in the wellness step challenge.  If you would like to work and play with us, consider checking out our &lt;a href=&quot;https://careers.hpe.com/us/en&quot;&gt;job board&lt;/a&gt; or contributing to one of our &lt;a href=&quot;https://www.hpe.com/us/en/open-source.html&quot;&gt;featured open source projects&lt;/a&gt;!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Intro to Core API: Part 1 (Integer Incrementing)]]></title><description><![CDATA[E﻿xternal Blog]]></description><link>https://developer.hpe.com/intro-to-core-api-part-1-integer-incrementing/</link><guid isPermaLink="false">https://developer.hpe.com/intro-to-core-api-part-1-integer-incrementing/</guid><pubDate>Thu, 19 Jan 2023 14:37:21 GMT</pubDate><content:encoded>&lt;p&gt;E﻿xternal Blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Developing an open source metadata framework to enhance the ML pipeline experience]]></title><description><![CDATA[In this blog series, you’ll meet some of the open source experts working with the HPE Developer Community team. In this post, I’ll be…]]></description><link>https://developer.hpe.com/developing-an-open-source-metadata-framework-to-enhance-the-ml-pipeline-experience/</link><guid isPermaLink="false">https://developer.hpe.com/developing-an-open-source-metadata-framework-to-enhance-the-ml-pipeline-experience/</guid><pubDate>Sun, 15 Jan 2023 18:21:10 GMT</pubDate><content:encoded>&lt;p&gt;In this blog series, you’ll meet some of the open source experts working with the HPE Developer Community team. In this post, I’ll be interviewing Annmary Justine K who works in the HPE AI Research Lab on foundations and techniques for data centric trustworthy AI. As an AI Expert, she has been looking into areas like data provenance, data versioning, metadata tracking and lineages. Prior to her work in HPE’s AI research lab, she worked in HPE Storage, focused on various products ranging from file, block, and object level storage. She has published multiple research papers and patent disclosures in her 15 years of industry experience.&lt;/p&gt;
&lt;p&gt;Annmary began her open source journey by contributing to the Hierarchical Data Format (HDF) standard, a set of file formats designed to store and organize large amounts of data, especially developed for supercomputing applications.  Originally developed at the U.S. National Center for Supercomputing Applications, it is now supported by the non-profit HDF Group, whose mission is to ensure continued development in this area. Annmary’s work was originally focused on HDF5 and then she went on to create Common Metadata Framework.&lt;/p&gt;
&lt;h3&gt;How did you first get involved with contributing to open source?&lt;/h3&gt;
&lt;p&gt;My journey with open source started with the Hierarchical Data Format project. Hierarchical Data Format (HDF) is a set of file formats (HDF4, HDF5) designed to store and organize large amounts of data and is used extensively in scientific data applications. I started by writing a versioning connector for HDF5 that creates separate sub files for each dataset created in HDF5 and mounts these sub-files as external links in the main file. It enables the versioning of HDF5 files at a dataset boundary.  This work exposed me to the need for integrated versioning and metadata management for data and AI pipelines and the discovery of this gap led to the creation of the Common Metadata Framework (CMF).&lt;/p&gt;
&lt;h3&gt;Tell me a little more about this project.&lt;/h3&gt;
&lt;p&gt;Common Metadata Framework is a metadata tracking library for machine learning (ML) pipelines. Its power lies in its ability to track distributed lineages for pipelines and to track lineages from different experiment variants. It provides abstractions to support datacentric AI /ML development and allows you to gain better insights into the versions of data and artifacts. The information can be exported, i.e. via Openlineage. As such, it is not locked into the platform and can be shared from this platform to others for metadata sharing and collaborative development. It provides a framework to exchange metadata among different team members and enables a Git-like model for the sharing of metadata, making it unique from the other experiment management frameworks in the market. It enables users to push their local metadata to the remote repository, where it is merged to create the global metadata and pulls metadata from the global metadata to the local, to create a local view, which would contain only the metadata of interest.&lt;/p&gt;
&lt;p&gt;The information collected by CMF can provide interesting insights about your AI pipelines. For example, it can help identify what could be the sphere of influence of a corrupted dataset. What other downstream artifacts or models were affected by a tainted dataset? How did your model perform for a particular subset of data versus another subset? What other experiments were executed with the dataset and which of these provided the best results? What different intermediate artifacts were created while developing a particular model? These are only some of the questions the information gathered can answer.&lt;/p&gt;
&lt;p&gt;These insights not only help in terms of traceability and reproducibility of pipelines. They can also be used to optimize the performance of the pipelines. Pipelines can be optimized for their execution time, execution metrics, carbon footprint etc. The insights can also be used to provide other valuable additions, like providing high-quality representative data from huge volumes of unlabeled datasets.&lt;/p&gt;
&lt;h3&gt;How did you get involved with this project?&lt;/h3&gt;
&lt;p&gt;The HPE AI research lab started CMF from the ground up when the group realized that there was a gap in the ecosystem for a platform that enables the exchange of AI/ML metadata. Reproducibility in AI/ML pipelines is possible only when one knows the source code version, the data version and the hyperparameters being used. Tracking this metadata enables the creation of trustworthy models. Although there are many metadata tracking tools in the field, we realized that there is no single framework that enables the integrated tracking of all of these while also enabling the ability to freely share this metadata. That is when we came up with CMF.  We have built CMF on top of existing open source tools (e.g. DVC and ML Metadata) so that we didn’t end up reinventing the wheel and instead provided added value.&lt;/p&gt;
&lt;h3&gt;What excites you about your role here?&lt;/h3&gt;
&lt;p&gt;Being the creator of a new open source project was challenging. Working in open source enables you to don multiple hats at the same time. The role also allows you to stretch beyond your technical skills. It provides opportunities to present your work, collaborate with teams outside of the company, and work towards creating value not just for one company but society at large.  &lt;/p&gt;
&lt;h3&gt;What things are you working on right now in regards to this project?&lt;/h3&gt;
&lt;p&gt;We have just scratched the surface with Common Metadata Framework. The metadata collected from AI pipelines enables further optimization and we are building intelligence for that.  We are working on multiple use cases found in the science community. One specific use case is the High Energy Physics Particle Tracking Pipeline. Deep learning-based pattern recognition methods are now regularly used for High energy Physics (HEP) particle tracking. The leading techniques involve successive filtering of detector impact cloud points from sensors in collider experiments to isolate possible trajectories of exotic particles. This involves metric learning using fully-connected multi-layer perceptron’s, binary classifiers for edge filtering and a graph neural network (GNN) to improve purity of selected points. This work faces numerous challenges, including large datasets and a large parameter space with cascaded inputs/outputs making optimization difficult. Distributed algorithm development and multiple compute environments (HPC, on-prem &amp;#x26; cloud) add to the complexity. CMF has been applied in scenarios like this to capture metadata for the entire experiment population by capturing data lineage, metrics, network architecture and hyperparameters. We are working towards optimizing this pipeline to provide a faster turnaround time for experiments and improve model performance.&lt;/p&gt;
&lt;h3&gt;Do you have any advice you’d like to pass on?&lt;/h3&gt;
&lt;p&gt;When working with open source it is important to realize that the road ahead is long and may not provide instant results. Adoption may come only after consistent effort and building a lot of tools around it. You need to continue to persevere and work towards your goals incrementally. &lt;/p&gt;
&lt;p&gt;Open source also offers the opportunity to collaborate and work together with a broader community. It is important to work in tandem with other tools in the ecosystem so that, together, you create a more enhanced ecosystem.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE GreenLake for Private Cloud Enterprise monitoring with Apache SkyWalking and OpenTelemetry]]></title><description><![CDATA[Overview Modern applications are predominantly distributed microservices-based applications that are flexible for scalability and developed…]]></description><link>https://developer.hpe.com/greenlake-platform-infrastructure-monitoring-with-apache-skywalking/</link><guid isPermaLink="false">https://developer.hpe.com/greenlake-platform-infrastructure-monitoring-with-apache-skywalking/</guid><pubDate>Fri, 13 Jan 2023 08:52:15 GMT</pubDate><content:encoded>&lt;h1&gt;&lt;strong&gt;Overview&lt;/strong&gt;&lt;/h1&gt;
&lt;p&gt;Modern applications are predominantly distributed microservices-based applications that are flexible for scalability and developed using varying technology stacks. These applications are deployed on a highly available infrastructure composed of a multitude of compute instances (bare metal/virtual machine/container). It is imperative to monitor, in near real time, how the compute resources are utilized, so that necessary proactive actions are initiated to keep the applications in a consistently acceptable state. Application Performance Management (APM) tools help to centrally monitor distributed applications and/or compute resources.&lt;/p&gt;
&lt;p&gt;Apache SkyWalking is a popular, open-source application performance monitoring (APM) tool used to collect, analyze, aggregate, and visualize data from services and cloud-native infrastructures. It provides an easy way to maintain a clear view of your distributed systems, even across clouds. You can use it with HPE GreenLake for Private Cloud Enterprise, the managed, pay-per-use Private Cloud as a Service (PCaaS) solution to enhance monitoring and observability of your workloads.&lt;/p&gt;
&lt;p&gt;Because HPE GreenLake for Private Cloud Enterprise provides you with a flexible, self-service environment where you can implement a hybrid approach and run different kinds of bare metal, virtual machine, and containerized workloads, you can take advantage of additional apps and tools like SkyWalking to customize your situation and meet your specific needs.&lt;/p&gt;
&lt;p&gt;In this post, I will focus on how you can use SkyWalking and OpenTelemetry (OTel for short) to do infrastructure host monitoring while taking advantage of how HPE GreenLake provides you with a consistent cloud experience across all your applications and data, whether on- or off-premises.&lt;/p&gt;
&lt;h1&gt;&lt;strong&gt;Let&apos;s get started&lt;/strong&gt;&lt;/h1&gt;
&lt;h2&gt;Definitions&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; is tooling or a technical solution that allows teams to watch and understand the state of their systems. Monitoring is based on gathering predefined sets of metrics or logs.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Observability&lt;/strong&gt; is tooling or a technical solution that allows teams to actively debug their system. Observability is based on exploring properties and patterns not defined in advance.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;An active service subscription to HPE GreenLake for Private Cloud Enterprise.&lt;/li&gt;
&lt;li&gt;VM and/or bare metal instances provisioned for the workloads, with the ability to log in to their consoles.&lt;/li&gt;
&lt;li&gt;A SkyWalking instance set up either on premises or in a hyperscaler. Ensure that inbound traffic is allowed on port 11800 for the SkyWalking instance.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;&lt;strong&gt;Steps to configure monitoring&lt;/strong&gt;&lt;/h2&gt;
&lt;h3&gt;Step 1 - Monitoring host metrics with the node exporter&lt;/h3&gt;
&lt;p&gt;Log in to the console of the machine to be monitored.&lt;/p&gt;
&lt;p&gt;Next, download and run the node exporter using the code snippet below.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;wget https://github.com/prometheus/node_exporter/releases/download/v1.4.0-rc.0/node_exporter-1.4.0-rc.0.linux-amd64.tar.gz

tar xvfz node_exporter-1.4.0-rc.0.linux-amd64.tar.gz

rm -rf node_exporter-1.4.0-rc.0.linux-amd64.tar.gz

cd node_exporter-1.4.0-rc.0.linux-amd64

cp node_exporter /usr/sbin

cat&amp;#x3C;&amp;#x3C;EOF &gt;nodeexporter.service

[Unit]
Description=Service to start the node_exporter on the node
[Service]
Type=Simple
ExecStart=/usr/sbin/node_exporter
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=%n

[Install]
WantedBy=multi-user.target

EOF


mv nodeexporter.service /etc/systemd/system
systemctl daemon-reload
systemctl enable nodeexporter.service
systemctl start nodeexporter.service
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Check that the metrics endpoint shows metrics exported from the infrastructure node.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;curl http://localhost:9100/metrics
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/node_metrics.png&quot; alt=&quot;&quot; title=&quot;Node Metrics&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Step 2 - Set up the OpenTelemetry (OTel) collector&lt;/h3&gt;
&lt;p&gt;OTel provides a set of standardized vendor-agnostic SDKs, APIs, and tools for ingesting, transforming, and sending data to an observability back-end. You can install the OTel collector on the same host that exposes the metrics, or you can create a central VM/bare metal instance to receive telemetry information from several infrastructure instances.&lt;/p&gt;
&lt;p&gt;Install the OTel collector.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;sudo wget https://github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v0.54.0/otelcol_0.54.0_linux_amd64.deb

sudo dpkg -i otelcol_0.54.0_linux_amd64.deb

systemctl status otelcol

# By default, the otelcol systemd service will be started with the --config=/etc/otelcol/config.yaml option after installation.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Create a new OTel configuration file (say /etc/otelcol/otel-collector-config.yaml) to link the metrics from the host nodes to the SkyWalking Observability Analysis Platform (OAP). The orange boxes shown in the image below must be modified to point to the correct infrastructure hosts and to the SkyWalking DNS name or IP address.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/otel_collector_configuration.png&quot; alt=&quot;&quot; title=&quot;OTeL Configuration&quot;&gt;&lt;/p&gt;
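&lt;p&gt;If the values in the screenshot are hard to read, here is a minimal sketch of what such a configuration file could look like, assuming a Prometheus receiver that scrapes the node exporter endpoints and an OpenCensus exporter that points at the SkyWalking OAP gRPC port (11800). The host names, job name, and SkyWalking address are placeholders you would replace with your own values.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Minimal sketch only: replace the scrape targets and the SkyWalking OAP
# address with the values for your environment.
sudo tee /etc/otelcol/otel-collector-config.yaml &gt;/dev/null &amp;#x3C;&amp;#x3C;&apos;EOF&apos;
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: &quot;vm-monitoring&quot;
          scrape_interval: 10s
          static_configs:
            # node_exporter endpoints of the hosts to monitor
            - targets: [&quot;node1.example.local:9100&quot;, &quot;node2.example.local:9100&quot;]

exporters:
  opencensus:
    # SkyWalking OAP gRPC receiver
    endpoint: &quot;skywalking.example.local:11800&quot;
    tls:
      insecure: true
  logging:
    loglevel: debug

service:
  pipelines:
    metrics:
      receivers: [ prometheus ]
      exporters: [ opencensus, logging ]
EOF
&lt;/code&gt;&lt;/pre&gt;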
&lt;p&gt;Point the OTel collector service to the newly created configuration file.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;cd /etc/otelcol

sudo nano otelcol.conf

# Change the path to the new configuration file you just created in the 
# above step and save.

sudo systemctl restart otelcol

sudo journalctl -u otelcol
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Step 3 - Validate that the infrastructure host metrics are visible in the SkyWalking UI&lt;/h3&gt;
&lt;p&gt;&lt;img src=&quot;/img/skywalking_vm_monitoring.png&quot; alt=&quot;&quot; title=&quot;SkyWalking UI with Infrastructure details&quot;&gt;&lt;/p&gt;
&lt;h1&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/h1&gt;
&lt;p&gt;Infrastructure and application level monitoring gives a holistic picture of service reliability. In this post, I have shown you how easily you can set up compute infrastructure resource monitoring, whether running on- or off-premises, using OpenTelemetry and SkyWalking. CPU, memory, and disk usage trends shown in the APM tool give the monitoring team early insights into potential system issues that should be addressed before they impact customer applications. Additionally, APM tools often provide trace, log, and metric monitoring along with alerting, which can be used in conjunction with infrastructure monitoring tools to provide highly reliable services for customers.&lt;/p&gt;
&lt;h1&gt;&lt;strong&gt;Reference&lt;/strong&gt;&lt;/h1&gt;
&lt;p&gt;&lt;a href=&quot;https://skywalking.apache.org/docs/main/v9.3.0/en/concepts-and-designs/overview/&quot;&gt;Apache SkyWalking&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://cloud.google.com/architecture/devops/devops-measurement-monitoring-and-observability&quot;&gt;Monitoring and observability&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://prometheus.io/docs/guides/node-exporter/&quot;&gt;Prometheus Node exporter&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://opentelemetry.io/&quot;&gt;OpenTelemetry&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Handling application performance monitoring on HPE GreenLake for Private Cloud Enterprise – Part 3: K8s monitoring using Apache SkyWalking]]></title><description><![CDATA[Why is Kubernetes monitoring so important? HPE GreenLake for Private Cloud Enterprise delivers a modern private cloud to support your app…]]></description><link>https://developer.hpe.com/set-up-apache-skywalking-for-k8s-monitoring-in-hpe-greenlake-for-private-cloud-enterprise/</link><guid isPermaLink="false">https://developer.hpe.com/set-up-apache-skywalking-for-k8s-monitoring-in-hpe-greenlake-for-private-cloud-enterprise/</guid><pubDate>Wed, 11 Jan 2023 20:19:50 GMT</pubDate><content:encoded>&lt;h2&gt;Why is Kubernetes monitoring so important?&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/greenlake/private-cloud-enterprise.html&quot;&gt;HPE GreenLake for Private Cloud Enterprise&lt;/a&gt; delivers a modern private cloud to support your app workloads running in any combination across your edges, colocations, and data centers. It contains one HPE service, called &lt;a href=&quot;https://www.hpe.com/us/en/greenlake/containers.html&quot;&gt;HPE GreenLake for Private Cloud Enterprise: Containers&lt;/a&gt;, which provides an enterprise-grade container management service using open source Kubernetes. It allows customers to create a Kubernetes cluster, view details about existing clusters, and launch the service console.&lt;/p&gt;
&lt;p&gt;Though Kubernetes dramatically simplifies application deployment in containers and across clouds, it adds a new set of complexities for managing, securing and troubleshooting applications. Container-based applications are dynamic and they are being designed using microservices, where the number of components is increased by an order of magnitude.&lt;/p&gt;
&lt;p&gt;To ensure Kubernetes security, it requires self-configuration that is typically specified in code, whether Kubernetes YAML manifests, Helm charts, or templating tools. Properly configuring for workloads, clusters, networks, and infrastructure is crucial for averting issues and limiting the impact if a breach occurs. Dynamic provisioning via Infrastructure as code, automated configuration management and orchestration also add to monitoring and troubleshooting complexity.&lt;/p&gt;
&lt;p&gt;Since Kubernetes is widely used for processing customer workloads, the non-availability of both workloads and the cluster itself, from misconfiguration of core components to network connectivity issues in Kubernetes, can adversely impact productivity, business continuity and user experience. To avoid this, enterprises must closely monitor the status of the objects managed and operations performed by Kubernetes, proactively capture abnormalities, and resolve them well before end-users notice.&lt;/p&gt;
&lt;p&gt;Kubernetes monitoring is critical to managing application performance, service uptime and troubleshooting. However, it presents a challenge for a traditional, static monitoring approach, emphasizing the need for real time monitoring. Having a good application performance monitoring (APM) tool is becoming essential for Kubernetes monitoring.&lt;/p&gt;
&lt;p&gt;In &lt;a href=&quot;https://developer.hpe.com/blog/get-started-with-application-performance-monitoring-tools-overview/&quot;&gt;my first blog post&lt;/a&gt;, I walked through some of the best APM tools, described their key features and discussed their strengths and weaknesses in detail. In this blog post, I choose one APM tool,  &lt;em&gt;Apache SkyWalking&lt;/em&gt;, and describe in detail how to set it up in HPE GreenLake for Private Cloud Enterprise for monitoring a Kubernetes cluster.&lt;/p&gt;
&lt;h2&gt;Apache SkyWalking&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://skywalking.apache.org/&quot;&gt;Apache SkyWalking&lt;/a&gt; is an open source application performance monitoring (APM) tool, especially designed for microservices, cloud native, and container-based architectures.&lt;/p&gt;
&lt;p&gt;Apache SkyWalking is lightweight and scalable. It can be easily set up as a &lt;em&gt;self-managed&lt;/em&gt; APM tool within an on-premises data center. This avoids leaking customer data to third party services and matches well with the strict security parameters of the HPE GreenLake for Private Cloud environment.&lt;/p&gt;
&lt;h2&gt;Prerequisites&lt;/h2&gt;
&lt;p&gt;Before starting, make sure you have the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A Kubernetes cluster, being provisioned in HPE GreenLake for Private Cloud Enterprise&lt;/li&gt;
&lt;li&gt;The &lt;em&gt;kubectl&lt;/em&gt;, together with the &lt;em&gt;HPE kubectl plugin&lt;/em&gt; and the &lt;em&gt;kubeconfig&lt;/em&gt; file of the Kubernetes cluster. You can download them from the launched service console in HPE GreenLake for Private Cloud Enterprise. To simplify the setup process, you can export the environment variable &lt;code&gt;KUBECONFIG&lt;/code&gt; and point it to the downloaded &lt;em&gt;kubeconfig&lt;/em&gt; file.&lt;/li&gt;
&lt;li&gt;The &lt;em&gt;&lt;a href=&quot;https://helm.sh/docs/intro/install/&quot;&gt;Helm&lt;/a&gt;&lt;/em&gt; CLI tool, version 3.8.1 or later&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;With your user access set up, you should have permissions to create and update the following resources in the Kubernetes cluster:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Kubernetes Service Account(s)&lt;/li&gt;
&lt;li&gt;Kubernetes Roles &amp;#x26; RoleBindings&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Set up Apache SkyWalking for Kubernetes monitoring&lt;/h2&gt;
&lt;p&gt;Apache SkyWalking leverages the Kubernetes &lt;a href=&quot;https://github.com/kubernetes/kube-state-metrics&quot;&gt;kube-state-metrics&lt;/a&gt; service for collecting metrics data from Kubernetes cluster. It then leverages the &lt;em&gt;OpenTelemetry&lt;/em&gt; collector to transfer the Kubernetes metrics to the &lt;em&gt;OpenTelemetry&lt;/em&gt; receiver in the Apache SkyWalking Observability Analysis Platform (OAP) for Kubernetes monitoring.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/otel-collector.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Deploy Apache SkyWalking&lt;/h3&gt;
&lt;p&gt;In this blog post, I will take the approach of setting up Apache SkyWalking as a &lt;em&gt;self-managed&lt;/em&gt; APM tool within the Kubernetes cluster created in HPE GreenLake for Private Cloud Enterprise. This approach is chosen mainly to match the strict security parameters of the HPE GreenLake for Private Cloud environment.&lt;/p&gt;
&lt;p&gt;To start, install Apache SkyWalking using Helm charts with &lt;em&gt;elasticsearch&lt;/em&gt; as storage:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ git clone https://github.com/apache/skywalking-kubernetes 
$ cd skywalking-kubernetes/chart
$ helm repo add elastic https://helm.elastic.co
$ helm dep up skywalking
$ kubectl create ns skywalking
$ helm install skywalking skywalking -n skywalking \
--set oap.image.tag=9.2.0 \
--set oap.storageType=elasticsearch \
--set ui.image.tag=9.2.0 \
--set elasticsearch.imageTag=7.17.1 \
--set elasticsearch.persistence.enabled=true \
--set elasticsearch.sysctlInitContainer.enabled=false \
--set oap.env.SW_OTEL_RECEIVER=default \
--set oap.env.SW_OTEL_RECEIVER_ENABLED_OC_RULES=&quot;k8s-cluster\,k8s-service\,k8s-node&quot; 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After running the above commands, Apache SkyWalking is installed in the Kubernetes cluster&apos;s namespace &lt;em&gt;skywalking&lt;/em&gt;. It creates &lt;em&gt;elasticsearch&lt;/em&gt; as a &lt;code&gt;StatefulSet&lt;/code&gt; resource, running a pod on each worker node, and runs the Apache SkyWalking OAP with 2 replicas to provide high availability.&lt;/p&gt;
&lt;p&gt;The last two options, &lt;em&gt;oap.env.SW_OTEL_RECEIVER=default&lt;/em&gt; &amp;#x26; &lt;em&gt;oap.env.SW_OTEL_RECEIVER_ENABLED_OC_RULES=&quot;k8s-cluster,k8s-service,k8s-node&quot;&lt;/em&gt;, enable the &lt;em&gt;OpenTelemetry&lt;/em&gt; receiver and define the metrics for the Kubernetes service, service instance and endpoint. This requires the Apache SkyWalking OAP to have access to the Kubernetes API server to query the metadata.&lt;/p&gt;
&lt;p&gt;You can check the detailed Apache SkyWalking installation by typing the following &lt;em&gt;kubectl&lt;/em&gt; command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl get all -n skywalking
NAME                                  READY   STATUS      RESTARTS   AGE
pod/elasticsearch-master-0            1/1     Running     0          8m7s
pod/elasticsearch-master-1            1/1     Running     0          8m7s
pod/elasticsearch-master-2            1/1     Running     0          8m7s
pod/skywalking-es-init-m9t5c          0/1     Completed   0          8m7s
pod/skywalking-oap-7f757c7668-nq2cz   1/1     Running     0          8m8s
pod/skywalking-oap-7f757c7668-q8z7m   1/1     Running     0          8m8s
pod/skywalking-ui-549dc5989f-jq9b9    1/1     Running     0          8m8s

NAME                                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)               AGE
service/elasticsearch-master            ClusterIP   10.110.35.173    &amp;#x3C;none&gt;        9200/TCP,9300/TCP     8m5s
service/elasticsearch-master-headless   ClusterIP   None             &amp;#x3C;none&gt;        9200/TCP,9300/TCP     8m5s
service/skywalking-oap                  ClusterIP   10.108.29.84     &amp;#x3C;none&gt;        11800/TCP,12800/TCP   8m5s
service/skywalking-ui                   ClusterIP   10.102.186.131   &amp;#x3C;none&gt;        80/TCP                8m5s

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/skywalking-oap   2/2     2            2           8m6s
deployment.apps/skywalking-ui    1/1     1            1           8m6s

NAME                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/skywalking-oap-7f757c7668   2         2         2       8m9s
replicaset.apps/skywalking-ui-549dc5989f    1         1         1       8m9s

NAME                                    READY   AGE
statefulset.apps/elasticsearch-master   3/3     8m5s

NAME                           COMPLETIONS   DURATION   AGE
job.batch/skywalking-es-init   1/1           7m27s      8m6s
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can edit the deployed SkyWalking UI service &lt;em&gt;skywalking-ui&lt;/em&gt; and change its type from &lt;em&gt;ClusterIP&lt;/em&gt; to &lt;em&gt;NodePort&lt;/em&gt;. The service will be automatically mapped to the gateway host with an assigned port.&lt;/p&gt;
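&lt;p&gt;If you prefer a non-interactive change, a single &lt;em&gt;kubectl patch&lt;/em&gt; command can switch the service type. This is a minimal sketch that assumes the service and namespace names used in the Helm install above:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ kubectl -n skywalking patch service skywalking-ui -p &apos;{&quot;spec&quot;: {&quot;type&quot;: &quot;NodePort&quot;}}&apos;
&lt;/code&gt;&lt;/pre&gt;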
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ k edit service/skywalking-ui -n skywalking

$ k describe service/skywalking-ui -n skywalking 
Name:                     skywalking-ui
Namespace:                skywalking
Labels:                   app=skywalking
                          app.kubernetes.io/managed-by=Helm
                          chart=skywalking-4.2.0
                          component=ui
                          heritage=Helm
                          hpecp.hpe.com/hpecp-internal-gateway=true
                          release=skywalking
Annotations:              hpecp-internal-gateway/80: gl2-caas.gl-hpe.local:10037
                          meta.helm.sh/release-name: skywalking
                          meta.helm.sh/release-namespace: skywalking
Selector:                 app=skywalking,component=ui,release=skywalking
Type:                     NodePort
IP:                       10.102.186.131
Port:                     &amp;#x3C;unset&gt;  80/TCP
TargetPort:               8080/TCP
NodePort:                 &amp;#x3C;unset&gt;  32748/TCP
Endpoints:                10.192.7.25:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   &amp;#x3C;none&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As shown in the &lt;em&gt;&lt;strong&gt;Annotations&lt;/strong&gt;&lt;/em&gt; section of the service description above, the SkyWalking UI can then be accessed in your browser by typing the address &lt;em&gt;gl2-caas.gl-hpe.local:10037&lt;/em&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/sw-ui-k8s.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Deploy kube-state-metrics service&lt;/h3&gt;
&lt;p&gt;The Kubernetes &lt;em&gt;kube-state-metrics&lt;/em&gt; service will be deployed to listen to the Kubernetes API server and generate metrics about the state of the Kubernetes objects.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ helm install  kube-state-metrics -n skywalking prometheus-community/kube-state-metrics
&lt;/code&gt;&lt;/pre&gt;
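&lt;p&gt;Note that this assumes the &lt;em&gt;prometheus-community&lt;/em&gt; Helm repository is already configured on your workstation. If it is not, you can add and refresh it first:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo update
&lt;/code&gt;&lt;/pre&gt;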
&lt;h3&gt;Set up &lt;em&gt;OpenTelemetry&lt;/em&gt; collector&lt;/h3&gt;
&lt;p&gt;The &lt;em&gt;OpenTelemetry&lt;/em&gt; collector needs to be installed and set up to transfer the Kubernetes metrics to &lt;em&gt;OpenTelemetry&lt;/em&gt; receiver from the SkyWalking OAP server. I use the standard Docker image &lt;em&gt;otel/opentelemetry-collector:0.50.0&lt;/em&gt; to deploy the &lt;em&gt;OpenTelemetry&lt;/em&gt; collector to the Kubernetes cluster.&lt;/p&gt;
&lt;h4&gt;Set up role-based access control (RBAC)&lt;/h4&gt;
&lt;p&gt;Kubernetes RBAC is a key security control to ensure that cluster users and workloads have access only to resources required to execute their roles. It is important to ensure that, when designing permissions for cluster users, the cluster administrator understands the areas where privilege escalation could occur, to reduce the risk of excessive access leading to security incidents.&lt;/p&gt;
&lt;p&gt;To set up RBAC, you create a &lt;em&gt;Service Account&lt;/em&gt;, a &lt;em&gt;ClusterRole&lt;/em&gt;, and connect the two with a &lt;em&gt;ClusterRoleBinding&lt;/em&gt;.&lt;/p&gt;
&lt;h5&gt;1. Create a YAML file &lt;em&gt;otel-sa-kubernetes-monitor.yaml&lt;/em&gt; for the service account:&lt;/h5&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;apiVersion: v1
kind: ServiceAccount
metadata:
  name: otel-sa-kubernetes-monitor
&lt;/code&gt;&lt;/pre&gt;
&lt;h5&gt;2. Create a YAML file &lt;em&gt;otel-role-kubernetes-monitor.yaml&lt;/em&gt; for the cluster roles:&lt;/h5&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole

metadata:
  name: otel-role-kubernetes-monitor
rules:
  - apiGroups: [ &quot;&quot; ]
    resources:
      # @feature: kubernetes-monitor; permissions to read resources
      - &quot;endpoints&quot;
      - &quot;pods&quot;
      - &quot;services&quot;
      - &quot;nodes&quot;
      - &quot;nodes/metrics&quot;
      - &quot;nodes/proxy&quot;
    verbs: [ &quot;get&quot;, &quot;watch&quot;, &quot;list&quot; ]
&lt;/code&gt;&lt;/pre&gt;
&lt;h5&gt;3. Create a YAML file &lt;em&gt;otel-role-binding-kubernetes-monitor.yaml&lt;/em&gt; to bind the service account with the cluster roles:&lt;/h5&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-role-binding-kubernetes-monitor
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: otel-role-kubernetes-monitor
subjects:
  - kind: ServiceAccount
    name: otel-sa-kubernetes-monitor
    namespace: skywalking
&lt;/code&gt;&lt;/pre&gt;
&lt;h5&gt;4. Deploy the service account, the cluster role and the cluster rolebinding:&lt;/h5&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl apply -f otel-sa-kubernetes-monitor.yaml -n skywalking
$ kubectl apply -f otel-role-kubernetes-monitor.yaml -n skywalking
$ kubectl apply -f otel-role-binding-kubernetes-monitor.yaml -n skywalking
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Deploy &lt;em&gt;OpenTelemetry&lt;/em&gt; collector&lt;/h4&gt;
&lt;h5&gt;1. Create a YAML file &lt;em&gt;otel-collector-config.yaml&lt;/em&gt; to set the &lt;em&gt;OpenTelemetry&lt;/em&gt; config to scrape the Kubernetes metrics:&lt;/h5&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-collector-conf-kubernetes-monitor
  labels:
    app: otel-kubernetes-monitor
data:
  otel-collector-config: |
    service:
      pipelines:
        metrics:
          receivers: [ prometheus ]
          exporters: [ opencensus,logging ]
    exporters:
      opencensus:
        endpoint: &quot;skywalking-oap.skywalking.svc.cluster.local:11800&quot;
        tls:
          insecure: true
      logging:
        loglevel: debug
    receivers:
      prometheus:
        config:
          scrape_configs:
          # @feature: kubernetes-monitor; configuration to scrape Kubernetes Nodes metrics
          - job_name: &apos;kubernetes-cadvisor&apos;
            scheme: https
            tls_config:
              ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
            bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            kubernetes_sd_configs:
              - role: node
            relabel_configs:
              - action: labelmap
                regex: __meta_kubernetes_node_label_(.+)
              - source_labels: []
                target_label: cluster
                replacement: cfe-iac-clu
              - target_label: __address__
                replacement: kubernetes.default.svc:443
              - source_labels: [__meta_kubernetes_node_name]
                regex: (.+)
                target_label: __metrics_path__
                replacement: /api/v1/nodes/$${1}/proxy/metrics/cadvisor
              - source_labels: [instance]
                separator: ;
                regex: (.+)
                target_label: node
                replacement: $$1
                action: replace
          # @feature: kubernetes-monitor; configuration to scrape Kubernetes Endpoints metrics
          - job_name: kube-state-metrics
            metrics_path: /metrics
            kubernetes_sd_configs:
            - role: endpoints
            relabel_configs:
            - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name]
              regex: kube-state-metrics
              replacement: $$1
              action: keep
            - action: labelmap
              regex: __meta_kubernetes_service_label_(.+)
            - source_labels: []
              target_label: cluster
              replacement: cfe-iac-clu
&lt;/code&gt;&lt;/pre&gt;
&lt;h5&gt;2. Create a YAML file &lt;em&gt;otel-collector-deploy.yaml&lt;/em&gt; for the &lt;em&gt;OpenTelemetry&lt;/em&gt; collector deployment:&lt;/h5&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: otel-deployment-kubernetes-monitor
  labels:
    app: otel-kubernetes-monitor
spec:
  replicas: 1
  selector:
    matchLabels:
      app: otel-kubernetes-monitor
  template:
    metadata:
      labels:
        app: otel-kubernetes-monitor
      annotations:
        sidecar.istio.io/inject: &quot;false&quot;
    spec:
      serviceAccountName: otel-sa-kubernetes-monitor
      containers:
        - name: otel-kubernetes-monitor
          image: otel/opentelemetry-collector:0.50.0
          command:
            - &quot;/otelcol&quot;
            - &quot;--config=/conf/otel-collector-config.yaml&quot;
          volumeMounts:
            - name: otel-collector-config-vol-kubernetes-monitor
              mountPath: /conf
      volumes:
        - name: otel-collector-config-vol-kubernetes-monitor
          configMap:
            name: otel-collector-conf-kubernetes-monitor
            items:
              - key: otel-collector-config
                path: otel-collector-config.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;h5&gt;3. Deploy the OpenTelemetry collector:&lt;/h5&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl apply -f otel-collector-config.yaml -n skywalking
$ kubectl apply -f otel-collector-deploy.yaml -n skywalking
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl  get all -n skywalking -l app=otel-kubernetes-monitor
NAME                                                      READY   STATUS    RESTARTS   AGE
pod/otel-deployment-kubernetes-monitor-798cdd8486-gz885   1/1     Running   0          93d

NAME                                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/otel-deployment-kubernetes-monitor   1/1     1            1           96d

NAME                                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/otel-deployment-kubernetes-monitor-798cdd8486   1         1         1       96d
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After all setup steps are finished, the Kubernetes metrics will show up in the SkyWalking UI, under the &lt;em&gt;Kubernetes&lt;/em&gt; tab:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/sw-k8s-clu.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can check the Kubernetes overview from the SkyWalking UI:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/sw-k8s-overview.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;And the Kubernetes nodes:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/sw-k8s-node.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The Kubernetes worker node:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/sw-k8s-node-instance.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;And the  Kubernetes services:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/sw-k8s-svc.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;In this blog post, I discussed the challenges in Kubernetes monitoring and why it’s important for Kubernetes monitoring in HPE GreenLake for Private Cloud Enterprise. I then took the Apache SkyWalking as the application performance monitoring (APM) tool and showed the detailed process of setting it up, as a &lt;em&gt;self-managed&lt;/em&gt; environment in HPE GreenLake for Private Cloud Enterprise for monitoring a Kubernetes cluster. It provides a way to gain the visibility of the objects and operations performed by Kubernetes, and to resolve issues in the cluster.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Handling application performance monitoring on HPE GreenLake for Private Cloud Enterprise – Part 2: App monitoring using Apache SkyWalking]]></title><description><![CDATA[Introduction HPE GreenLake for Private Cloud Enterprise delivers a modern private cloud to support your app workloads with bare metal…]]></description><link>https://developer.hpe.com/set-up-apache-skywalking-for-k8s-and-vm-monitoring-in-hpe-greenlake-private-cloud/</link><guid isPermaLink="false">https://developer.hpe.com/set-up-apache-skywalking-for-k8s-and-vm-monitoring-in-hpe-greenlake-private-cloud/</guid><pubDate>Tue, 10 Jan 2023 07:31:55 GMT</pubDate><content:encoded>&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/greenlake/private-cloud-enterprise.html&quot;&gt;HPE GreenLake for Private Cloud Enterprise&lt;/a&gt; delivers a modern private cloud to support your app workloads with bare metal, containers, and virtual machines (VMs) running in any combination across your edges, colocations, and data centers. It combines self-service resource access for developers with consumption and performance transparency for IT. Within this modern application environment, having a robust application performance monitoring (APM) tool is becoming essential. It can help IT professionals to ensure that deployed applications meet the performance, reliability and valuable user experience required by developers, partners and customers.&lt;/p&gt;
&lt;p&gt;In &lt;a href=&quot;https://developer.hpe.com/blog/get-started-with-application-performance-monitoring-tools-overview/&quot;&gt;my first blog post&lt;/a&gt;, I walked through some of the best APM tools, describing their key features, strengths, and weaknesses. In this blog post, using one of the APM tools I covered in my previous post, I will describe, in detail, the process of how to set it up in HPE GreenLake for Private Cloud Enterprise for monitoring and alerting customer applications.&lt;/p&gt;
&lt;h2&gt;Apache SkyWalking&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://skywalking.apache.org/&quot;&gt;Apache SkyWalking&lt;/a&gt; is an open source APM tool with capabilities for monitoring, tracing and diagnosing distributed systems. It’s especially designed for microservices, cloud native and container-based architectures.&lt;/p&gt;
&lt;p&gt;Apache SkyWalking provides a list of agents to be used for building &lt;em&gt;Java&lt;/em&gt;, &lt;em&gt;.NET Core&lt;/em&gt;, &lt;em&gt;PHP&lt;/em&gt;, &lt;em&gt;NodeJS&lt;/em&gt;, &lt;em&gt;Golang&lt;/em&gt;, &lt;em&gt;LUA&lt;/em&gt;, &lt;em&gt;Rust&lt;/em&gt; and &lt;em&gt;C++&lt;/em&gt; apps. It provides tracing, metrics analysis, alerting, service mesh observability and visualization.&lt;/p&gt;
&lt;p&gt;Apache SkyWalking is lightweight and scalable. It can be easily set up as a &lt;em&gt;self-managed&lt;/em&gt; APM tool within an on-premises data center. This avoids leaking customer data to third party services and matches well with the strict security parameters of the HPE GreenLake for Private Cloud environment.&lt;/p&gt;
&lt;h2&gt;Set up Apache SkyWalking for application monitoring&lt;/h2&gt;
&lt;p&gt;I will take you through setting up Apache SkyWalking as a &lt;em&gt;self-managed&lt;/em&gt; APM tool within the Kubernetes cluster created in HPE GreenLake for Private Cloud Enterprise. By setting up the APM tool within this environment, it can benefit from the security features of HPE GreenLake.&lt;/p&gt;
&lt;h3&gt;Prerequisites&lt;/h3&gt;
&lt;p&gt;Before starting, make sure you have the following requirements:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A Kubernetes cluster, being provisioned in HPE GreenLake for Private Cloud Enterprise&lt;/li&gt;
&lt;li&gt;The &lt;em&gt;kubectl&lt;/em&gt; CLI tool, version 1.23 or later, together with the &lt;em&gt;kubeconfig&lt;/em&gt; files for accessing the Kubernetes cluster&lt;/li&gt;
&lt;li&gt;The &lt;em&gt;&lt;a href=&quot;https://helm.sh/docs/intro/install/&quot;&gt;Helm&lt;/a&gt;&lt;/em&gt; CLI tool, version 3.8.1 or later&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Deploy Apache SkyWalking&lt;/h3&gt;
&lt;p&gt;Install Apache SkyWalking using Helm charts with &lt;em&gt;elasticsearch&lt;/em&gt; as storage:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ git clone https://github.com/apache/skywalking-kubernetes 
$ cd skywalking-kubernetes/chart
$ helm repo add elastic https://helm.elastic.co
$ helm dep up skywalking
$ kubectl create ns skywalking
$ helm install skywalking skywalking -n skywalking \
--set oap.image.tag=9.2.0 \
--set oap.storageType=elasticsearch \
--set ui.image.tag=9.2.0 \
--set elasticsearch.imageTag=7.17.1 \
--set elasticsearch.persistence.enabled=true \
--set elasticsearch.sysctlInitContainer.enabled=false
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After running the above commands, Apache SkyWalking is installed in the Kubernetes cluster&apos;s namespace &lt;em&gt;skywalking&lt;/em&gt;. The option &lt;em&gt;elasticsearch.persistence.enabled=true&lt;/em&gt; in the above Helm install command creates &lt;em&gt;elasticsearch&lt;/em&gt; as a &lt;code&gt;StatefulSet&lt;/code&gt; object, running a pod on each worker node. The command runs the Apache SkyWalking Observability Analysis Platform (OAP) with 2 replicas to provide high availability.&lt;/p&gt;
&lt;p&gt;Note that the last option, &lt;em&gt;elasticsearch.sysctlInitContainer.enabled=false&lt;/em&gt;, is necessary. Without it, the chart tries to set &lt;em&gt;vm.max_map_count&lt;/em&gt; using a privileged init container during the &lt;em&gt;elasticsearch&lt;/em&gt; installation. Running privileged containers leaves a large chance that an attacker will be able to run code as root. In the HPE GreenLake for Private Cloud Enterprise environment, a &lt;em&gt;PodSecurityPolicy&lt;/em&gt; named &lt;strong&gt;&lt;code&gt;psp-privileged-container&lt;/code&gt;&lt;/strong&gt; is pre-deployed in the Kubernetes clusters to deny privileged containers, which would cause the Helm install to fail.&lt;/p&gt;
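&lt;p&gt;Because the privileged sysctl init container is disabled, the &lt;em&gt;vm.max_map_count&lt;/em&gt; kernel setting is not adjusted by the chart. If the default value on the worker nodes is too low for &lt;em&gt;elasticsearch&lt;/em&gt;, it would typically be raised on each node out-of-band, for example as sketched below (assuming you have root access to the nodes, which may not be the case in a managed environment):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;# run on each worker node, as root
$ sysctl -w vm.max_map_count=262144
# persist the setting across reboots
$ echo &quot;vm.max_map_count=262144&quot; &gt;&gt; /etc/sysctl.conf
&lt;/code&gt;&lt;/pre&gt;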
&lt;p&gt;You can check the detailed Apache SkyWalking installation by typing the following &lt;em&gt;kubectl&lt;/em&gt; command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl get all -n skywalking
NAME                                  READY   STATUS      RESTARTS   AGE
pod/elasticsearch-master-0            1/1     Running     0          8m7s
pod/elasticsearch-master-1            1/1     Running     0          8m7s
pod/elasticsearch-master-2            1/1     Running     0          8m7s
pod/skywalking-es-init-m9t5c          0/1     Completed   0          8m7s
pod/skywalking-oap-7f757c7668-nq2cz   1/1     Running     0          8m8s
pod/skywalking-oap-7f757c7668-q8z7m   1/1     Running     0          8m8s
pod/skywalking-ui-549dc5989f-jq9b9    1/1     Running     0          8m8s

NAME                                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)               AGE
service/elasticsearch-master            ClusterIP   10.110.35.173    &amp;#x3C;none&gt;        9200/TCP,9300/TCP     8m5s
service/elasticsearch-master-headless   ClusterIP   None             &amp;#x3C;none&gt;        9200/TCP,9300/TCP     8m5s
service/skywalking-oap                  ClusterIP   10.108.29.84     &amp;#x3C;none&gt;        11800/TCP,12800/TCP   8m5s
service/skywalking-ui                   ClusterIP   10.102.186.131   &amp;#x3C;none&gt;        80/TCP                8m5s

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/skywalking-oap   2/2     2            2           8m6s
deployment.apps/skywalking-ui    1/1     1            1           8m6s

NAME                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/skywalking-oap-7f757c7668   2         2         2       8m9s
replicaset.apps/skywalking-ui-549dc5989f    1         1         1       8m9s

NAME                                    READY   AGE
statefulset.apps/elasticsearch-master   3/3     8m5s

NAME                           COMPLETIONS   DURATION   AGE
job.batch/skywalking-es-init   1/1           7m27s      8m6s
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can edit the deployed SkyWalking UI service &lt;em&gt;skywalking-ui&lt;/em&gt; and change its type from &lt;em&gt;ClusterIP&lt;/em&gt; to &lt;em&gt;NodePort&lt;/em&gt;. The service will then be automatically mapped to a gateway host with an assigned port.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl edit service/skywalking-ui -n skywalking

$ kubectl describe service/skywalking-ui -n skywalking
Name:                     skywalking-ui
Namespace:                skywalking
Labels:                   app=skywalking
                          app.kubernetes.io/managed-by=Helm
                          chart=skywalking-4.2.0
                          component=ui
                          heritage=Helm
                          hpecp.hpe.com/hpecp-internal-gateway=true
                          release=skywalking
Annotations:              hpecp-internal-gateway/80: gl2-caas.gl-hpe.local:10037
                          meta.helm.sh/release-name: skywalking
                          meta.helm.sh/release-namespace: skywalking
Selector:                 app=skywalking,component=ui,release=skywalking
Type:                     NodePort
IP:                       10.102.186.131
Port:                     &amp;#x3C;unset&gt;  80/TCP
TargetPort:               8080/TCP
NodePort:                 &amp;#x3C;unset&gt;  32748/TCP
Endpoints:                10.192.7.25:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   &amp;#x3C;none&gt;
&lt;/code&gt;&lt;/pre&gt;
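&lt;p&gt;Instead of editing the service interactively, the same type change can be applied non-interactively, which is handy for scripting. A minimal sketch:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;# switch the SkyWalking UI service from ClusterIP to NodePort in one step
$ kubectl patch service skywalking-ui -n skywalking -p &apos;{&quot;spec&quot;: {&quot;type&quot;: &quot;NodePort&quot;}}&apos;
&lt;/code&gt;&lt;/pre&gt;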
&lt;p&gt;As shown in the &lt;em&gt;&lt;strong&gt;Annotations&lt;/strong&gt;&lt;/em&gt; section of the service description above, the SkyWalking UI can then be accessed in the browser by typing the address &lt;em&gt;gl2-caas.gl-hpe.local:10037&lt;/em&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/sw-ui.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Deploy a sample application: &lt;em&gt;SpringBoot&lt;/em&gt;&lt;/h3&gt;
&lt;p&gt;As my first demo application, I will create a &lt;em&gt;SpringBoot&lt;/em&gt; web app that provides a REST endpoint &lt;strong&gt;/message&lt;/strong&gt; returning a simple message. Then, I will generate a &lt;em&gt;jar&lt;/em&gt; package using the &lt;a href=&quot;https://maven.apache.org/what-is-maven.html&quot;&gt;Apache Maven&lt;/a&gt; command &lt;em&gt;mvn&lt;/em&gt; with the &lt;em&gt;pom.xml&lt;/em&gt; file:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;├── Dockerfile
├── Dockerfile.agentless
├── pom.xml
├── README.md
├── src
│   ├── main
│   └── test
└── target
    ├── springboot-k8s-demo.jar
    └── test-classes

$ cat pom.xml
&amp;#x3C;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;
&amp;#x3C;project xmlns=&quot;http://maven.apache.org/POM/4.0.0&quot; xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot;
	xsi:schemaLocation=&quot;http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd&quot;&gt;
	&amp;#x3C;modelVersion&gt;4.0.0&amp;#x3C;/modelVersion&gt;
	&amp;#x3C;parent&gt;
		&amp;#x3C;groupId&gt;org.springframework.boot&amp;#x3C;/groupId&gt;
		&amp;#x3C;artifactId&gt;spring-boot-starter-parent&amp;#x3C;/artifactId&gt;
		&amp;#x3C;version&gt;2.6.1&amp;#x3C;/version&gt;
		&amp;#x3C;relativePath/&gt; &amp;#x3C;!-- lookup parent from repository --&gt;
	&amp;#x3C;/parent&gt;
	&amp;#x3C;groupId&gt;com.javatechie&amp;#x3C;/groupId&gt;
	&amp;#x3C;artifactId&gt;springboot-k8s-demo&amp;#x3C;/artifactId&gt;
	&amp;#x3C;version&gt;0.0.1-SNAPSHOT&amp;#x3C;/version&gt;
	&amp;#x3C;name&gt;springboot-k8s-demo&amp;#x3C;/name&gt;
	&amp;#x3C;description&gt;Demo project for Spring Boot&amp;#x3C;/description&gt;
	&amp;#x3C;properties&gt;
		&amp;#x3C;java.version&gt;1.8&amp;#x3C;/java.version&gt;
	&amp;#x3C;/properties&gt;
	&amp;#x3C;dependencies&gt;
		&amp;#x3C;dependency&gt;
			&amp;#x3C;groupId&gt;org.springframework.boot&amp;#x3C;/groupId&gt;
			&amp;#x3C;artifactId&gt;spring-boot-starter-web&amp;#x3C;/artifactId&gt;
		&amp;#x3C;/dependency&gt;

		&amp;#x3C;dependency&gt;
			&amp;#x3C;groupId&gt;org.springframework.boot&amp;#x3C;/groupId&gt;
			&amp;#x3C;artifactId&gt;spring-boot-starter-test&amp;#x3C;/artifactId&gt;
			&amp;#x3C;scope&gt;test&amp;#x3C;/scope&gt;
		&amp;#x3C;/dependency&gt;
	&amp;#x3C;/dependencies&gt;

	&amp;#x3C;build&gt;
		&amp;#x3C;plugins&gt;
			&amp;#x3C;plugin&gt;
				&amp;#x3C;groupId&gt;org.springframework.boot&amp;#x3C;/groupId&gt;
				&amp;#x3C;artifactId&gt;spring-boot-maven-plugin&amp;#x3C;/artifactId&gt;
			&amp;#x3C;/plugin&gt;
		&amp;#x3C;/plugins&gt;
		&amp;#x3C;finalName&gt;springboot-k8s-demo&amp;#x3C;/finalName&gt;
	&amp;#x3C;/build&gt;

&amp;#x3C;/project&gt;

$ mvn compile
$ mvn package
$ ls target/springboot-k8s-demo.jar
target/springboot-k8s-demo.jar
&lt;/code&gt;&lt;/pre&gt;
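&lt;p&gt;The &lt;strong&gt;/message&lt;/strong&gt; endpoint itself only needs a small REST controller. The class below is a hypothetical sketch of what such a controller could look like; the class name, package and message text are illustrative and are not taken from the demo repository:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ cat src/main/java/com/javatechie/MessageController.java
package com.javatechie;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class MessageController {

    // simple REST endpoint returning a message
    @GetMapping(&quot;/message&quot;)
    public String message() {
        return &quot;Hello from springboot-k8s-demo!&quot;;
    }
}
&lt;/code&gt;&lt;/pre&gt;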
&lt;p&gt;As shown in the Maven output above, the &lt;em&gt;jar&lt;/em&gt; package &lt;em&gt;springboot-k8s-demo.jar&lt;/em&gt; is created in the &lt;em&gt;target&lt;/em&gt; folder. By building a &lt;em&gt;Docker&lt;/em&gt; image using this generated &lt;em&gt;jar&lt;/em&gt; file, the &lt;em&gt;SpringBoot&lt;/em&gt; web app can be easily deployed as a containerized application in the Kubernetes cluster.&lt;/p&gt;
&lt;p&gt;Apache SkyWalking provides a list of agents for instrumenting applications. The agent for a given programming language is built into the corresponding service so that it collects application data and exports it to the SkyWalking OAP server.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/sw-agents.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In order to monitor the sample &lt;em&gt;SpringBoot&lt;/em&gt; web app with Apache SkyWalking, download the Java agent and rebuild the image.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/java-agent.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
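&lt;p&gt;As a rough sketch (the version number and download URL are examples only; take the current ones from the Apache SkyWalking downloads page), you can fetch the Java agent archive, extract it, and place the agent folder next to the application sources so that the Dockerfile below can copy it into the image:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;# download and extract the SkyWalking Java agent (version is an example)
$ wget https://archive.apache.org/dist/skywalking/java-agent/8.12.0/apache-skywalking-java-agent-8.12.0.tgz
$ tar -xzf apache-skywalking-java-agent-8.12.0.tgz
# the Dockerfile below expects the agent folder to be named &apos;agent&apos;
$ mv skywalking-agent agent
&lt;/code&gt;&lt;/pre&gt;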
&lt;p&gt;Here is the &lt;em&gt;Dockerfile&lt;/em&gt; for building the image with the Java agent:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ cat Dockerfile
FROM adoptopenjdk:11-jre-hotspot
# copy extracted java agent folder from the downloaded apache skywalking archive
ADD agent /opt/agent
# copy the app jar file
ADD target/springboot-k8s-demo.jar /app/springboot-k8s-demo.jar
EXPOSE 8080
WORKDIR /app
ENTRYPOINT [&quot;java&quot;,&quot;-javaagent:/opt/agent/skywalking-agent.jar=agent.namespace=default,agent.service_name=springboot-k8s-app,collector.backend_service=skywalking-oap.skywalking.svc.cluster.local:11800,plugin.jdbc.trace_sql_parameters=true,profile.active=true&quot;,&quot;-jar&quot;,&quot;/app/springboot-k8s-demo.jar&quot;]
&lt;/code&gt;&lt;/pre&gt;
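&lt;p&gt;With the agent folder and the jar file in place, the image can be built and pushed to a registry. A minimal sketch, using the image name referenced in the next section (your own registry account will differ):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ docker build -t docker.io/guopingjia/springboot-k8s-demo:pce .
$ docker push docker.io/guopingjia/springboot-k8s-demo:pce
&lt;/code&gt;&lt;/pre&gt;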
&lt;h3&gt;Monitor SpringBoot application from SkyWalking UI&lt;/h3&gt;
&lt;p&gt;After building the Docker image &lt;em&gt;guopingjia/springboot-k8s-demo:pce&lt;/em&gt; and pushing it to the &lt;em&gt;DockerHub&lt;/em&gt; registry, deploy the &lt;em&gt;SpringBoot&lt;/em&gt; web app in the Kubernetes cluster with the &lt;em&gt;deployment.yaml&lt;/em&gt; manifest file:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment 
metadata:
  name: springboot-k8s-demo
spec:
  selector:
    matchLabels:
      app: springboot-k8s-demo
  replicas: 1  
  template:
    metadata:
      labels:
        app: springboot-k8s-demo
    spec:
      containers:
        - name: springboot-k8s-demo
          image: docker.io/guopingjia/springboot-k8s-demo:pce
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080

$ kubectl apply -f deployment.yaml
&lt;/code&gt;&lt;/pre&gt;
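&lt;p&gt;To produce some data for the agent to report, you can send a few requests to the &lt;strong&gt;/message&lt;/strong&gt; endpoint, for example by port-forwarding to the deployment (a sketch; in practice you might expose the app through a service instead):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl port-forward deployment/springboot-k8s-demo 8080:8080 &amp;
$ for i in $(seq 1 20); do curl -s http://localhost:8080/message; done
&lt;/code&gt;&lt;/pre&gt;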
&lt;p&gt;Upon web app deployment, the built-in Java agent will start collecting application data and posting it to the SkyWalking OAP. All the application metrics will be available in the SkyWalking UI, under the &lt;em&gt;General Service&lt;/em&gt; tab:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/java-app.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Below is the &lt;em&gt;SpringBoot&lt;/em&gt; web app topology map:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/java-app-map.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Deploy a multi-tier application&lt;/h3&gt;
&lt;p&gt;As my second demo application, I will deploy a multi-tier &lt;em&gt;music&lt;/em&gt; application, available as part of the &lt;a href=&quot;https://github.com/apache/skywalking-showcase&quot;&gt;Apache SkyWalking showcase application&lt;/a&gt;. This multi-tier music application consists of a frontend app server and its UI, a backend gateway service, a recommendation service and a songs service, together with an &lt;em&gt;H2&lt;/em&gt; database. Each microservice is implemented in a different programming language, e.g., &lt;em&gt;NodeJS&lt;/em&gt;, &lt;em&gt;React&lt;/em&gt;, &lt;em&gt;Java Spring&lt;/em&gt; and &lt;em&gt;Python&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/multl-tier-app-music.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In order to monitor this multi-tier &lt;em&gt;music&lt;/em&gt; application with Apache SkyWalking, you need to pick the SkyWalking agent for each programming language and rebuild the corresponding service so that it collects and sends application metrics to the SkyWalking OAP server.&lt;/p&gt;
&lt;p&gt;You can rebuild the &lt;em&gt;Docker&lt;/em&gt; image per service using the agent-enabled &lt;em&gt;&lt;strong&gt;Dockerfile&lt;/strong&gt;&lt;/em&gt; in each service&apos;s folder from the multi-tier &lt;em&gt;music&lt;/em&gt; application repo (a simple build loop is sketched after the following directory listing):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;├── app
│   ├── Dockerfile
│   ├── Dockerfile.agentless
│   ├── Makefile
│   ├── package.json
│   ├── package-lock.json
│   ├── server
│   └── ui
├── gateway-service
│   ├── build.gradle
│   ├── Dockerfile
│   ├── Dockerfile.agentless
│   ├── gradle
│   ├── gradle.properties
│   ├── gradlew
│   ├── gradlew.bat
│   ├── Makefile
│   ├── settings.gradle
│   └── src
├── recommendation-service
│   ├── Dockerfile
│   ├── Dockerfile.agentless
│   ├── Makefile
│   ├── requirements.txt
│   └── src
└── songs-service
    ├── build.gradle
    ├── Dockerfile
    ├── Dockerfile.agentless
    ├── gradle
    ├── gradle.properties
    ├── gradlew
    ├── gradlew.bat
    ├── Makefile
    ├── settings.gradle
    └── src
&lt;/code&gt;&lt;/pre&gt;
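&lt;p&gt;One way to rebuild all four service images with the agent-enabled Dockerfiles is a simple loop over the service folders. This is only a sketch; the registry prefix is a placeholder and the showcase repository&apos;s own Makefiles can be used instead:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;# REGISTRY is a placeholder for your own image registry and account
$ export REGISTRY=docker.io/your-account
$ for svc in app gateway-service recommendation-service songs-service; do
    docker build -t ${REGISTRY}/${svc}:pce -f ${svc}/Dockerfile ${svc}
    docker push ${REGISTRY}/${svc}:pce
  done
&lt;/code&gt;&lt;/pre&gt;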
&lt;p&gt;After the image files are rebuilt with the agents, the multi-tier &lt;em&gt;music&lt;/em&gt; application can be deployed in the Kubernetes cluster:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ envsubst &amp;#x3C; resources.yaml | kubectl create -f -
service/gateway created
deployment.apps/gateway-deployment created
service/songs created
deployment.apps/songs-deployment created
service/rcmd created
deployment.apps/recommendation-deployment created
service/app created
deployment.apps/app-deployment created
deployment.apps/loadgen-deployment created
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Monitor multi-tier application from SkyWalking UI&lt;/h3&gt;
&lt;p&gt;When the multi-tier &lt;em&gt;music&lt;/em&gt; app gets deployed, the agents built into each microservice will start collecting application data and posting it to the SkyWalking OAP. The multi-tier &lt;em&gt;music&lt;/em&gt; application metrics will be available in the SkyWalking UI, under the &lt;em&gt;General Service&lt;/em&gt; tab:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/multl-tier-app.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Below is the multi-tier &lt;em&gt;music&lt;/em&gt; application topology map:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/multl-tier-app-map.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can also check the following multi-tier &lt;em&gt;music&lt;/em&gt; application trace page. It&apos;s very helpful when debugging performance issues in the application.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/sw-app-trace.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Application alerting&lt;/h3&gt;
&lt;p&gt;Apache SkyWalking provides an alerting mechanism to measure application performance according to a list of pre-defined metrics, e.g., &lt;em&gt;service_resp_time&lt;/em&gt;, &lt;em&gt;database_access_resp_time&lt;/em&gt;, and &lt;em&gt;service_sla&lt;/em&gt;. It triggers alerts when these metrics cross their pre-defined thresholds. You can define new metrics using Observability Analysis Language (OAL) or customize the existing metrics with new thresholds.&lt;/p&gt;
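&lt;p&gt;For illustration, a rule customizing the service response-time threshold could look like the following sketch. It follows the same structure as the default rules shipped with SkyWalking; the threshold and period values here are examples to be tuned for your applications:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;  service_resp_time_rule:
    metrics-name: service_resp_time
    op: &quot;&gt;&quot;
    # threshold in milliseconds (example value)
    threshold: 1000
    # evaluation window (minutes) and number of matches needed to trigger
    period: 10
    count: 3
    silence-period: 5
    message: Response time of service {name} is more than 1000ms in 3 minutes of last 10 minutes
&lt;/code&gt;&lt;/pre&gt;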
&lt;p&gt;Here you can see the alarms page from the SkyWalking UI showing all the triggered alerts for the deployed multi-tier &lt;em&gt;music&lt;/em&gt; application:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/sw-app-alarms.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The alarms page shows &lt;em&gt;Successful rate of service agent::app is lower than 80% in 2 minutes of last 10 minutes&lt;/em&gt;. It indicates an issue with the frontend app server in the multi-tier &lt;em&gt;music&lt;/em&gt; application.&lt;/p&gt;
&lt;p&gt;Apache SkyWalking configures alerting using a collection of alerting rules located in &lt;em&gt;/skywalking/config/alarm-settings.yml&lt;/em&gt; in the SkyWalking OAP pod. You can check the content by running the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl exec pod/skywalking-skywalking-helm-oap-bfb57fbf8-5g7k7 -n skywalking -it -- cat /skywalking/config/alarm-settings.yml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Comparing the alarm message with the rules in &lt;em&gt;alarm-settings.yml&lt;/em&gt;, you can see that the alerts on the alarms page are triggered by the following &lt;em&gt;service_sla&lt;/em&gt; metric alerting rule:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;  service_sla_rule:
    # Metrics value need to be long, double or int
    metrics-name: service_sla
    op: &quot;&amp;#x3C;&quot;
    threshold: 8000
    # The length of time to evaluate the metrics
    period: 10
    # How many times after the metrics match the condition, will trigger alarm
    count: 2
    # How many times of checks, the alarm keeps silence after alarm triggered, default as same as period.
    silence-period: 3
    message: Successful rate of service {name} is lower than 80% in 2 minutes of last 10 minutes
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can drill into the frontend app by clicking the service &lt;code&gt;agent::app&lt;/code&gt; on the SkyWalking UI Service page. The service &lt;code&gt;agent::app&lt;/code&gt; overview page below shows a &lt;strong&gt;Success Rate of 66.66%&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/sw-svc-app-overview.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can then check the service&apos;s trace page to figure out the root cause of this issue.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/sw-svc-app-trace.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;In this blog post, I used the Apache SkyWalking application performance monitoring (APM) tool and explained in detail how to set it up as a &lt;em&gt;self-managed&lt;/em&gt; environment in HPE GreenLake for Private Cloud Enterprise for monitoring customer applications and alerting on performance issues. Using the agents that Apache SkyWalking provides for instrumentation, application workloads can be easily monitored through the integrated Apache SkyWalking UI, with an application topology map, tracing details and real-time alarms for any application performance issues.&lt;/p&gt;
&lt;p&gt;I﻿n &lt;a href=&quot;https://developer.hpe.com/blog/set-up-apache-skywalking-for-k8s-monitoring-in-hpe-greenlake-for-private-cloud-enterprise/&quot;&gt;the next blog post of the series&lt;/a&gt; , I will show you how to use the Apache SkyWalking APM tool for monitoring of Kubernetes clusters provisioned on HPE GreenLake for Private Cloud Enterprise.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[New year, new programs and tutorials]]></title><link>https://developer.hpe.com/2023-January-09/</link><guid isPermaLink="false">https://developer.hpe.com/2023-January-09/</guid><pubDate>Mon, 09 Jan 2023 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Handling application performance monitoring on HPE GreenLake for Private Cloud Enterprise – Part 1: A tools overview]]></title><description><![CDATA[Introduction HPE GreenLake for Private Cloud Enterprise delivers a modern private cloud to support your app workloads with bare metal…]]></description><link>https://developer.hpe.com/get-started-with-application-performance-monitoring-tools-overview/</link><guid isPermaLink="false">https://developer.hpe.com/get-started-with-application-performance-monitoring-tools-overview/</guid><pubDate>Mon, 09 Jan 2023 08:53:23 GMT</pubDate><content:encoded>&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/greenlake/private-cloud-enterprise.html&quot;&gt;HPE GreenLake for Private Cloud Enterprise&lt;/a&gt; delivers a modern private cloud to support your app workloads with bare metal, containers, and virtual machines (VMs) running in any combination across your edges, colocations, and data centers. It combines self-service resource access for developers with consumption and performance transparency for IT. Within this modern application environment, having a robust application performance monitoring (APM) tool is becoming essential. It can help IT professionals ensure that deployed applications meet the performance, reliability and valuable user experience required by developers, partners and customers.&lt;/p&gt;
&lt;p&gt;This blog post will give an overview of the existing application performance monitoring (APM) tools, their key features, and their deployment and pricing models. It will provide some guidance for your APM feature evaluation and product selection efforts. I have written it to help you analyze which parts of your stack require the most monitoring and select APM tools that align with the monitoring needs of your applications and operational environment.&lt;/p&gt;
&lt;h2&gt;Application Performance Monitoring (APM)&lt;/h2&gt;
&lt;p&gt;The continued availability and appropriate performance of an application are essential to a company’s ability to maintain uninterrupted business processes. This prevents unnecessary business disruptions and enhances customer satisfaction. To ensure this, many enterprises use application performance monitoring (APM). APM relies on a collection of tools and processes used to track the performance of applications and analyze the reports to spot anomalies and performance-related issues.&lt;/p&gt;
&lt;p&gt;Modern application architectures can be complex, involving large numbers of services and distributed systems located across multiple networks and physical locations, including the cloud. These environments can be challenging to monitor.&lt;/p&gt;
&lt;p&gt;APM tools collect data through metrics, traces and logs to measure performance and identify potential problems. They help collect application data across a broad range of environments and perform sophisticated analytics on data patterns to provide insights into large and complex environments. As the business impact of outages rises day by day, more and more businesses are likely to spend money on the best APM tool that matches their monitoring needs.&lt;/p&gt;
&lt;p&gt;There are a broad range of APM tools to choose from, some specifically dedicated to APM tasks and others with APM functionality built into a broader array of features. Some of the most popular APM tools can be deployed as a Software-as-a-Service (SaaS) solution within a public cloud, or on-premises within a private cloud, or even across a hybrid environment. While you can find a number of APM tools that have a ton of features, covering most use cases, these can also come with premium pricing attached. Choosing a good APM tool that best fits both your monitoring needs and your budget is a challenge.&lt;/p&gt;
&lt;h2&gt;APM Tools: open-source vs commercial vendor&lt;/h2&gt;
&lt;p&gt;Open source-based APM tools offer a lot of freedom for users, since users can access and customize the tool&apos;s source code for their project-specific needs. They also allow for self-hosting, which can help in the context of tightening data protection laws and addresses the privacy and security concerns customers may have about their data going to third-party services. Open source APM tools also often offer a vibrant community of active developers who might provide helpful plugins and tips.&lt;/p&gt;
&lt;p&gt;However, many SaaS-based commercial APM tools offer free or reasonably priced bundles. Open source is rarely &lt;strong&gt;really&lt;/strong&gt; free, and many commercial SaaS solutions offer better, more reliable, and reasonably priced APM tools. You should spend time and effort in APM feature evaluation and make the best decision that saves cost and works well with your stack.&lt;/p&gt;
&lt;h2&gt;APM Tools Overview&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://www.gartner.com/reviews/market/application-performance-monitoring-and-observability&quot;&gt;Gartner Research&lt;/a&gt; provides reviews and ratings of the existing commercial vendor-based application performance monitoring and observability tools in the market. In this blog post, I will take a look at some of those APM tools, along with some open source-based APM tools.&lt;/p&gt;
&lt;h3&gt;&lt;strong&gt;Splunk&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;&lt;img src=&quot;/img/splunk.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.splunk.com/&quot;&gt;Splunk&lt;/a&gt; is an extensible data platform that offers a range of solutions for analytics, monitoring and security to identify data patterns, provide metrics and diagnose problems. It delivers real-time monitoring and alerting for all environments, on-premises, hybrid or multicloud.&lt;/p&gt;
&lt;p&gt;Key features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Collects data from virtually any source and location&lt;/li&gt;
&lt;li&gt;Converts logs into metrics and analyzes and correlates data to create real-time visualizations and dashboards&lt;/li&gt;
&lt;li&gt;Provides a policy-based mechanism to reserve system resources for workload collection&lt;/li&gt;
&lt;li&gt;Provides a search processing language for both simple searches and advanced data exploration&lt;/li&gt;
&lt;li&gt;Provides thresholds for monitoring events and proactively warns of potential problems when data passes the threshold&lt;/li&gt;
&lt;li&gt;Pushes alerts to notify regarding critical events and impending conditions in real-time&lt;/li&gt;
&lt;li&gt;Analyzes metrics and events data with visualizations like bar charts, reference lines, scatter plots and column charts&lt;/li&gt;
&lt;li&gt;Offers outlier and anomaly detection and predictive analytics using machine learning toolkit&lt;/li&gt;
&lt;li&gt;Supports open source algorithms and creates custom machine learning models to help operationalize data&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Splunk is more focused on monitoring and analyzing data generated by various machines, converting it so that it can be analyzed by developers. It takes more of a log management approach that makes it ideal for managing and monitoring the large amount of data generated by the devices running on the network. It’s great for analyzing the huge number of log files generated by enterprise systems. It eliminates the need for IT to spend hours trawling through all the logs looking for performance issues. Splunk integrates data streams from a huge number of sources and supports a wide range of data formats, including &lt;em&gt;.xml&lt;/em&gt;, &lt;em&gt;.csv&lt;/em&gt; and &lt;em&gt;.json&lt;/em&gt; files. This is important if a company needs data stream integration across multiple data formats.&lt;/p&gt;
&lt;p&gt;Splunk is a much broader platform and toolset geared toward heavy-duty, large enterprises. It offers a breadth of management by providing a wide range of products. Splunk bundles similar tools together and offers them as two different types of platforms, &lt;em&gt;Splunk Cloud&lt;/em&gt; and &lt;em&gt;Splunk Enterprise&lt;/em&gt;. Splunk Cloud is hosted on a cloud server, with the entire set of configurations, as well as the maintenance, done completely by Splunk. Splunk Enterprise is maintained in the customer&apos;s data center, and users just need to set up the entire hardware infrastructure. The integration of Splunk Cloud and Splunk Enterprise provides end-to-end full-stack coverage across hybrid cloud environments.&lt;/p&gt;
&lt;p&gt;Apart from being regarded as a &lt;em&gt;Visionary&lt;/em&gt; in the latest Gartner Magic Quadrant for APM and Observability, Splunk also has been named as a &lt;em&gt;Leader&lt;/em&gt; in the latest &lt;a href=&quot;https://www.splunk.com/en_us/blog/security/2022-gartner-magic-quadrant-for-siem-splunk-named-a-leader-for-the-9th-consecutive-year.html&quot;&gt;Gartner Magic Quadrant for Security Information and Event Management (SIEM)&lt;/a&gt;. Splunk includes more than &lt;em&gt;2300&lt;/em&gt; out-of-the-box integrations for comprehensive tech stack visibility. In the &lt;a href=&quot;https://stackshare.io/splunk&quot;&gt;StackShare community&lt;/a&gt;, Splunk has been mentioned in &lt;em&gt;79&lt;/em&gt; company stacks and &lt;em&gt;437&lt;/em&gt; developer stacks. Splunk is among the founding members of &lt;em&gt;OpenTelemetry&lt;/em&gt; and its number one contributor. Splunk APM supports open, vendor-neutral instrumentation, allowing for even more flexibility.&lt;/p&gt;
&lt;p&gt;Splunk has a reputation for being expensive. It’s not a low-cost option. Upselling beyond APM, e.g., adding &lt;em&gt;SIEM&lt;/em&gt; module and real-time monitoring, can send the budget even higher. It’s very important to determine what you really need and what you can dispense with if you decide to go with Splunk.&lt;/p&gt;
&lt;h3&gt;&lt;strong&gt;New Relic&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;&lt;img src=&quot;/img/new-relic.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://newrelic.com/&quot;&gt;New Relic&lt;/a&gt; is a SaaS-based observability platform that includes APM as one of its key services. Organizations can trace dependencies across their distributed applications to detect anomalies, address errors, optimize performance and improve the customer experience. The product offers visibility into the application stack, from back-end APIs to the user devices.&lt;/p&gt;
&lt;p&gt;Key features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Provides flexible instrumentation and dashboarding to collect data to meet the unique needs of specific applications and industries&lt;/li&gt;
&lt;li&gt;Guides appropriate engineer responses and directs them to the most important performance abnormalities using multiple techniques, including AI and ML algorithms&lt;/li&gt;
&lt;li&gt;Correlates application performance to end-user experience through real-user monitoring and synthetic monitoring&lt;/li&gt;
&lt;li&gt;Connects application and infrastructure performance to explore the problem&lt;/li&gt;
&lt;li&gt;Uses multiple data types to count and measure every single request to have performance visibility down to the method level&lt;/li&gt;
&lt;li&gt;Supports real-time error analysis with on-demand diagnostic tools&lt;/li&gt;
&lt;li&gt;Integrates with various DevOps tools for incident response, logging and configuration management&lt;/li&gt;
&lt;li&gt;Supports important cloud service instrumentation&lt;/li&gt;
&lt;li&gt;Handles spikes in traffic with SaaS based architecture&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;New Relic is an &lt;em&gt;all-in-one&lt;/em&gt; application performance tool that lets you see performance from the end user&apos;s experience, through servers and down to the line of application code. It addresses not only APM, but also infrastructure, user monitoring and performance analytics for desktop, web and mobile applications. It offers premium features such as real-time monitoring for mobile, web and cloud application performance. It has a personalized dashboard that keeps track of all monitoring, as well as other activity and application performance. It customizes dashboards and enables alerts with real-time tracking.&lt;/p&gt;
&lt;p&gt;New Relic has been graded as a &lt;em&gt;Leader&lt;/em&gt; in the latest &lt;a href=&quot;https://newrelic.com/blog/nerd-life/gartner-magic-quadrant-22&quot;&gt;Gartner Magic Quadrant for APM and Observability&lt;/a&gt;. In the &lt;a href=&quot;https://stackshare.io/new-relic#stacks&quot;&gt;StackShare community&lt;/a&gt;, New Relic has a broader approval rating, being mentioned in &lt;em&gt;11589&lt;/em&gt; company stacks and &lt;em&gt;7841&lt;/em&gt; developer stacks. New Relic is very strong in using community resources for learning the application, training users, and troubleshooting issues via self-service. It provides support through blogs, meetups, and social media channels. New functions, such as anomaly detection in logs, greater support for Microsoft Azure and AWS integration, data exploration, correlation, browser monitoring, instrumentation and &lt;em&gt;AIOps&lt;/em&gt;, keep being added and supported. New Relic has outstanding capabilities in reporting and dashboards, user interaction performance, and multicloud resource view. Its &lt;em&gt;OpenTelemetry&lt;/em&gt; capabilities and contributions place it ahead of many other APM tools.&lt;/p&gt;
&lt;p&gt;New Relic uses the freemium pricing strategy. It’s free to use with its most generous free tier that includes &lt;em&gt;100GB&lt;/em&gt; data ingest per month for unlimited basic users and 1 free full platform user, with a default data retention of 8 days and up. It then starts its &lt;em&gt;Standard&lt;/em&gt; plan at &lt;em&gt;$0.30/GB&lt;/em&gt; beyond that, based on the amount of data you want to send to New Relic. Based on the number of users and their permissions, the &lt;em&gt;Standard&lt;/em&gt; plan offers &lt;em&gt;$49/month&lt;/em&gt; for core users, and &lt;em&gt;$99/month&lt;/em&gt; for up to 5 full platform users. New Relic offers the &lt;em&gt;Pro&lt;/em&gt; and &lt;em&gt;Enterprise&lt;/em&gt; plans for teams with more than 5 users and those who have advanced security and support needs. For some premium features, such as real-time application performance monitoring, New Relic is more expensive than other SaaS solutions. You should be careful when deciding whether the additional price for a particular feature is worth it for your company.&lt;/p&gt;
&lt;h3&gt;&lt;strong&gt;Datadog&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;&lt;img src=&quot;/img/datadog.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.datadoghq.com/&quot;&gt;Datadog&lt;/a&gt; is a monitoring, security and analytics platform for cloud applications. It brings together end-to-end traces, metrics, and logs to make applications, infrastructure and third-party services entirely observable.&lt;/p&gt;
&lt;p&gt;Key features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Aggregates metrics and events across the full DevOps stack with more than 600 built-in integrations&lt;/li&gt;
&lt;li&gt;Provides full visibility into modern applications for monitoring, troubleshooting and optimizing application performance&lt;/li&gt;
&lt;li&gt;Analyzes and explores log data in context for troubleshooting and alerting&lt;/li&gt;
&lt;li&gt;Proactively monitors the user experience in a single platform&lt;/li&gt;
&lt;li&gt;Correlates frontend performance with business impact&lt;/li&gt;
&lt;li&gt;Visualizes traffic flow in cloud-native environments&lt;/li&gt;
&lt;li&gt;Builds real-time interactive dashboards&lt;/li&gt;
&lt;li&gt;Gets alerted and notified on critical issues&lt;/li&gt;
&lt;li&gt;Instruments applications with new integrations&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Datadog is a SaaS-based application that focuses on cloud monitoring and security with public, private and hybrid options. It takes an infrastructure monitoring approach geared toward analytics and application performance, and it is praised especially for its infrastructure and security monitoring features. Datadog handles the entire DevOps and SRE workflow, including the complete incident management and &lt;em&gt;SIEM&lt;/em&gt;. With its built-in security monitoring capabilities, Datadog is able to send observational data to its Cloud SIEM product. It makes the incident management fairly easy by declaring and managing incidents from events and monitors. Users can create incidents, rank them by severity, manage incident resolution by assigning responsible users and teams, and send basic email and notifications. Some of the features provided by Datadog, such as the real-time alerts and automated reports, bring amazing advantages to organizations. New features, such as network monitoring, security analysis, &lt;em&gt;AIOps&lt;/em&gt;, business analytics, and incident management, keep being added and supported in Datadog. It offers a much broader applicability both in terms of APM capabilities and monitoring other areas such as infrastructure, device, server, database, and log management.&lt;/p&gt;
&lt;p&gt;Datadog has been graded as a &lt;em&gt;Leader&lt;/em&gt; in the latest &lt;a href=&quot;https://www.datadoghq.com/resources/gartner-magic-quadrant-apm-observability-2022/&quot;&gt;Gartner Magic Quadrant for APM and Observability&lt;/a&gt;. In the &lt;a href=&quot;https://stackshare.io/datadog&quot;&gt;StackShare community&lt;/a&gt;, Datadog has been mentioned in &lt;em&gt;1271&lt;/em&gt; company stacks and &lt;em&gt;6360&lt;/em&gt; developer stacks. It supports community APIs and extensions to integrate with existing IT infrastructure. Datadog is a major contributor to &lt;em&gt;OpenTelemetry&lt;/em&gt;. Its learning platform offers web-based coding labs. It enables new users to get hands-on experience in a simulated environment and plunges users into the workflow from the start.&lt;/p&gt;
&lt;p&gt;The Datadog interface offers extensive functionality and supports further customization to dashboards and interfaces. With many supported features, it could be difficult for new users who may be overwhelmed by the number of options. They definitely need to take their time to fully understand its features and how to maximize the benefits of those services. In the beginning, it can be hard to track the log data, and create and manage custom dashboards. Datadog can work with a wide array of data formats and sources. However, it’s not a platform that can deal with a large number of information sources. Data formats, such as &lt;em&gt;.xml&lt;/em&gt;, &lt;em&gt;.csv&lt;/em&gt; and &lt;em&gt;.json&lt;/em&gt;, are not supported.&lt;/p&gt;
&lt;p&gt;Datadog prices out at around &lt;em&gt;$15 per user&lt;/em&gt;. It has an open pricing policy with published prices. Its pricing per-month options include per host, per million events, and per GB of analyzed log files. As a SaaS-based tool, Datadog offers generally low prices.&lt;/p&gt;
&lt;h3&gt;&lt;strong&gt;Dynatrace&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;&lt;img src=&quot;/img/dynatrace.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.dynatrace.com/&quot;&gt;Dynatrace&lt;/a&gt; is a software-intelligence monitoring platform offering various tools focused on monitoring modern infrastructures and distributed applications, user experience, and business intelligence.&lt;/p&gt;
&lt;p&gt;Key features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Provides a single agent to automatically discover, instrument and collect monitoring metrics for all types of entities in application environment&lt;/li&gt;
&lt;li&gt;Ingests metric data and events into its AI engine and provides code-level visibility and root-cause answers for applications&lt;/li&gt;
&lt;li&gt;Uses an interactive topology map to visualize the dynamic relationships among all application components across every tier&lt;/li&gt;
&lt;li&gt;Supports automated remediation through integration with any CI/CD tools&lt;/li&gt;
&lt;li&gt;Monitors cloud environments, virtual machines, network, process, host, server-side service, mobile app and real user&lt;/li&gt;
&lt;li&gt;Discovers and monitors dynamic microservice workloads running in containers&lt;/li&gt;
&lt;li&gt;Monitors message queues to gain visibility into microservice communications&lt;/li&gt;
&lt;li&gt;Provides full front-to-back observability ensuring every application is available, functional, and efficient across every channel for the best customer experiences&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Dynatrace is an &lt;em&gt;all-in-one&lt;/em&gt; platform that monitors the application performance, the underlying infrastructure and the experience of the end users, thanks to its integrated AI engine. Dynatrace deployment is fairly straightforward. The initial setup process offers sufficient onboarding support for deploying the agent based on the environment. It supports configuring the agent from its Web UI. This makes the setup of log monitoring and APM relatively seamless. Dynatrace’s documentation offers sufficient support to deploy, set up and tweak the agent. &lt;a href=&quot;https://university.dynatrace.com/&quot;&gt;Dynatrace University&lt;/a&gt; is available directly from the UI via a link in the user settings drop-down menu.&lt;/p&gt;
&lt;p&gt;Dynatrace can be deployed either as a SaaS solution with its data being retained in the cloud, or as a &lt;em&gt;self-managed&lt;/em&gt; solution that allows customers to maintain control of where their data resides, whether in the cloud or on-premises. This deployment model can really help in the context of tightening data protection laws in the customer&apos;s environment.&lt;/p&gt;
&lt;p&gt;Dynatrace has been named as a &lt;em&gt;Leader&lt;/em&gt; in the latest &lt;a href=&quot;https://www.dynatrace.com/monitoring/gartner-magic-quadrant-for-application-performance-monitoring-observability&quot;&gt;Gartner Magic Quadrant for APM and Observability&lt;/a&gt;. In the latest Gartner Critical Capabilities report, Dynatrace has obtained the highest scores in &lt;em&gt;4&lt;/em&gt; of &lt;em&gt;6&lt;/em&gt; use cases, ranked #1 in IT Operations, Digital Experience Monitoring (DEM), DevOps/AppDev and SRE/Platform Operations. Dynatrace is a major contributor to &lt;em&gt;OpenTelemetry&lt;/em&gt;. Its roadmap for &lt;em&gt;OpenTelemetry&lt;/em&gt; also puts it ahead of many other APM tools.&lt;/p&gt;
&lt;p&gt;Dynatrace offers minimal alerting, but almost no problem/incident management features out-of-the-box. Third-party incident management and status page solutions must be integrated. Dynatrace has no capabilities in the area of federated, hierarchical, or edge &lt;em&gt;AI/ML&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Dynatrace offers a full-stack pricing model, starting at &lt;em&gt;$74/month&lt;/em&gt; per 8 GB host. It also offers individual product pricing models, such as infrastructure monitoring, digital experience monitoring, application security and open ingestion. Each of those pricing models works as an add-on that is not included in the full-stack plan and is charged at additional cost.&lt;/p&gt;
&lt;h3&gt;&lt;strong&gt;Elastic&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;&lt;img src=&quot;/img/elastic.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.elastic.co/&quot;&gt;Elastic&lt;/a&gt; is a distributed search and analytics solution.&lt;/p&gt;
&lt;p&gt;Key features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Operates in a distributed environment with scalability and resiliency&lt;/li&gt;
&lt;li&gt;Allows full control over data, users and cluster operations with a variety of management tools, such as snapshots, index lifecycle, data tiers, data streams&lt;/li&gt;
&lt;li&gt;Protects data with a list of security features, such as &lt;em&gt;keystore&lt;/em&gt;, encrypted communications, RBAC, IP filtering, security realms, SSO and audit logging&lt;/li&gt;
&lt;li&gt;Supports customized and reliable alerting and notification integration with any other third-party systems&lt;/li&gt;
&lt;li&gt;Allows working with data using various language clients, Elasticsearch &lt;em&gt;DSL&lt;/em&gt; and &lt;em&gt;SQL&lt;/em&gt;, and &lt;em&gt;REST APIs&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;Extends Elasticsearch functionality with various plugins and integrations&lt;/li&gt;
&lt;li&gt;Runs and manages Elasticsearch across public cloud, private cloud and Kubernetes using &lt;em&gt;Elastic Cloud&lt;/em&gt;, &lt;em&gt;Elastic Cloud Enterprise&lt;/em&gt; and &lt;em&gt;Elastic Cloud on Kubernetes&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;Ingests any data type using language clients, ingest nodes, lightweight shippers or &lt;em&gt;Logstash&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;Enriches raw data using a variety of analyzers, tokenizer, filters, and enrichment options&lt;/li&gt;
&lt;li&gt;Supports document storage, time series analysis and metrics, and geospatial analytics&lt;/li&gt;
&lt;li&gt;Provides full-text search capabilities with its inverted index, tunable relevance scoring and advanced query &lt;em&gt;DSL&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;Finds data relationships through aggregations and graph exploration and creates alerts&lt;/li&gt;
&lt;li&gt;Models and automates the analysis of time series data, combines alerting and inference using machine learning&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Elastic builds and maintains the &lt;em&gt;Elastic Stack&lt;/em&gt;, an &lt;em&gt;all-in-one&lt;/em&gt; platform built upon the proven &lt;em&gt;Elasticsearch, Logstash, and Kibana (ELK) Stack&lt;/em&gt; for the logs, metrics, and application trace data with a multitude of out-of-the-box integrations. Elastic Stack is the foundation for its primary solutions, &lt;em&gt;Elastic Enterprise Search&lt;/em&gt;, the fleet of search solutions, &lt;em&gt;Elastic Observability&lt;/em&gt;, the solution for unified visibility across logs, metrics and APM data, and &lt;em&gt;Elastic Security&lt;/em&gt;, the solution that unifies endpoint protection and &lt;em&gt;SIEM&lt;/em&gt;. You can easily deploy any of these solutions as a managed service with Elastic Cloud, with one stack powering three solutions.&lt;/p&gt;
&lt;p&gt;Elastic has been named as a &lt;em&gt;Visionary&lt;/em&gt; in the latest &lt;a href=&quot;https://www.elastic.co/explore/devops-observability/2022-gartner-magic-quadrant-apm/&quot;&gt;Gartner Magic Quadrant for APM and Observability&lt;/a&gt;. It has a modern initial interface that users can take advantage of out of the box. It provides a lot of very powerful tools for data ingestion, data enrichment, data analysis and various plugins and open source integrations, from years of development and community input. Elastic has good capabilities across reporting and dashboards, user interaction performance, multicloud resource view, predictive analysis, and intelligent data push. It’s easy to use, but a bit of a hassle to configure and maintain. Since Elastic is based on open source code, it requires technical skills in open source and it has quite a high threshold to get over to understand how the system works and how to configure it properly.&lt;/p&gt;
&lt;p&gt;Elastic offers a 14-day free trial of the &lt;em&gt;Standard&lt;/em&gt; plan without requiring credit card details. After this, users can choose from 4 paid subscription plans. The &lt;em&gt;Standard&lt;/em&gt; plan starts at &lt;em&gt;$95/month&lt;/em&gt;, and it provides access to core security features and solutions including APM. The &lt;em&gt;Gold&lt;/em&gt; plan adds custom plugins, while the &lt;em&gt;Platinum&lt;/em&gt; plan offers advanced security features and machine learning support. It also includes endpoint detection and response, protection, and event collection capabilities. The &lt;em&gt;Enterprise&lt;/em&gt; plan adds additional enterprise features, such as searchable snapshots, &lt;em&gt;Elastic Maps&lt;/em&gt; server and data retention for security related data, and raises its cost to &lt;em&gt;$175/month&lt;/em&gt;.&lt;/p&gt;
&lt;h3&gt;&lt;strong&gt;Prometheus&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;&lt;img src=&quot;/img/prometheus.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://prometheus.io/&quot;&gt;Prometheus&lt;/a&gt; is an open source system monitoring and alerting toolkit and time series database originally developed by &lt;a href=&quot;https://soundcloud.com/&quot;&gt;SoundCloud&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Key features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Implements a multi-dimensional data model with time series being identified by metric name and a set of key-value pairs&lt;/li&gt;
&lt;li&gt;Provides a flexible query language &lt;em&gt;PromQL&lt;/em&gt; to leverage the dimensionality&lt;/li&gt;
&lt;li&gt;Stores time series in memory and on local disk in an efficient custom format with no dependency on distributed storage&lt;/li&gt;
&lt;li&gt;Records metrics in real time via a pull model over HTTP&lt;/li&gt;
&lt;li&gt;Allows slicing and dicing of collected time series data to generate ad-hoc graphs, tables, and alerts&lt;/li&gt;
&lt;li&gt;Supports pushing time series via an intermediary gateway&lt;/li&gt;
&lt;li&gt;Discovers targets via service discovery or static configuration&lt;/li&gt;
&lt;li&gt;Supports multiple modes for visualizing data using a built-in expression browser, &lt;em&gt;Grafana&lt;/em&gt; integration and a console template language&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Prometheus joined the &lt;a href=&quot;https://www.cncf.io&quot;&gt;Cloud Native Computing Foundation (CNCF)&lt;/a&gt; and became its second hosted project after Kubernetes. It has managed to obtain a large and vibrant community of contributors and users ever since. Prometheus focuses mainly on application metric monitoring. In order to have a seamless experience with both metrics and traces that are required by APM, you can integrate Prometheus with other open source tracing tools, such as &lt;a href=&quot;https://www.jaegertracing.io/&quot;&gt;Jaeger&lt;/a&gt;. However, since Jaeger lacks sophisticated capabilities for analyzing and segmenting all of a user&apos;s trace data, it offers only limited support for data filtering, and the experience with such an integration may not be great.&lt;/p&gt;
&lt;p&gt;Prometheus is an open source tool with &lt;em&gt;46K&lt;/em&gt; GitHub stars and &lt;em&gt;7.7K&lt;/em&gt; GitHub forks. In the &lt;a href=&quot;https://stackshare.io/prometheus&quot;&gt;StackShare community&lt;/a&gt;, Prometheus has been mentioned in &lt;em&gt;852&lt;/em&gt; company stacks and &lt;em&gt;1962&lt;/em&gt; developer stacks. Since it is free, Prometheus certainly wins on pricing. However, full functionality of Prometheus demands skills in open source and competence in &lt;em&gt;Apache&lt;/em&gt; based applications. Without those required skills and experience, the Prometheus interface can be difficult to master, and some users even find it difficult to set up and scale.&lt;/p&gt;
&lt;p&gt;Prometheus is maintained by volunteers, not by a company. It relies on other open source tools for security. Fixing security issues in Prometheus is done on a &lt;em&gt;best-effort&lt;/em&gt; basis. Prometheus strives to release security fixes within 7 days for its key components &lt;em&gt;alertmanager&lt;/em&gt;, &lt;em&gt;node exporter&lt;/em&gt;, &lt;em&gt;blackbox exporter&lt;/em&gt; and &lt;em&gt;pushgateway&lt;/em&gt;, etc.&lt;/p&gt;
&lt;h3&gt;&lt;strong&gt;Apache SkyWalking&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;&lt;img src=&quot;/img/skywalking.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://skywalking.apache.org/&quot;&gt;Apache SkyWalking&lt;/a&gt; is an open source APM tool with capabilities for monitoring, tracing and diagnosing distributed systems. It’s especially designed for microservices, cloud native and container-based architectures.&lt;/p&gt;
&lt;p&gt;Key features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Provides metrics analysis of services, service instances and endpoints with distributed tracing, log collecting and metrics collecting and customization&lt;/li&gt;
&lt;li&gt;Supports root cause analysis by profiling code at runtime with the in-process agent, &lt;em&gt;eBPF&lt;/em&gt; profiler and network profiler&lt;/li&gt;
&lt;li&gt;Provides dependency analysis of service instances and endpoints&lt;/li&gt;
&lt;li&gt;Supports service topology map analysis&lt;/li&gt;
&lt;li&gt;Detects slow services and endpoints and provides performance optimization&lt;/li&gt;
&lt;li&gt;Detects slow &lt;em&gt;SQL&lt;/em&gt; statements for database performance monitoring&lt;/li&gt;
&lt;li&gt;Provides message queue performance and consuming latency monitoring&lt;/li&gt;
&lt;li&gt;Starts tracing from browser for browser performance monitoring&lt;/li&gt;
&lt;li&gt;Supports infrastructure monitoring for Kubernetes and Linux&lt;/li&gt;
&lt;li&gt;Supports alerting using rules in both observability analysis language and metric analysis language&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Apache SkyWalking provides a list of agents to be used for building &lt;em&gt;Java&lt;/em&gt;, &lt;em&gt;.NET Core&lt;/em&gt;, &lt;em&gt;PHP&lt;/em&gt;, &lt;em&gt;Node.js&lt;/em&gt;, &lt;em&gt;Golang&lt;/em&gt;, &lt;em&gt;LUA&lt;/em&gt;, &lt;em&gt;Rust&lt;/em&gt; and &lt;em&gt;C++&lt;/em&gt; apps. It supports integrating and collecting data from multiple sources, including &lt;em&gt;Prometheus&lt;/em&gt;, &lt;em&gt;OpenTelemetry&lt;/em&gt; and &lt;em&gt;Zabbix&lt;/em&gt; for metrics and logs, and &lt;em&gt;Zipkin&lt;/em&gt; for traces. It provides tracing, metrics analysis, alerting, service mesh observability and visualization.&lt;/p&gt;
&lt;p&gt;Apache SkyWalking is an open source tool with &lt;em&gt;21K&lt;/em&gt; GitHub stars and &lt;em&gt;6K&lt;/em&gt; GitHub forks. In the &lt;a href=&quot;https://stackshare.io/apache-skywalking#stacks.&quot;&gt;StackShare community&lt;/a&gt;, Apache SkyWalking does not yet have as great a share of the market, only being mentioned in 12 developer stacks. However, Apache SkyWalking has more than &lt;em&gt;600&lt;/em&gt; contributors on GitHub and thousands of contributions every year. All the agents for application instrumentation have been actively maintained.&lt;/p&gt;
&lt;p&gt;Apache SkyWalking is the first open source project that initialized and implemented an &lt;a href=&quot;https://www.envoyproxy.io/docs/envoy/v1.18.2/api-v2/service/accesslog/v2/als.proto&quot;&gt;Envoy Access Log Service (ALS)&lt;/a&gt; based solution to provide observability on the service mesh, no matter the architecture or language. Since the service mesh provides full control of the routed &lt;em&gt;RPC&lt;/em&gt;, including &lt;em&gt;HTTP&lt;/em&gt; and &lt;em&gt;TCP&lt;/em&gt;, this observability solution is much easier to add without language-specific technology. With this solution, users can get the application service topology map, metrics graph, request details and error messages with a very nice visualization. This integration solution can be extremely important for monitoring and visualizing applications that consist of many microservices running across on-premises, cloud-based or hybrid environments.&lt;/p&gt;
&lt;p&gt;Apache SkyWalking is lightweight and scalable, and it supports alerting and visualization. It can be easily set up as a &lt;em&gt;self-managed&lt;/em&gt; APM tool within an on-premises data center. This avoids sending customer data to third-party services and fits well within the strict security restrictions of the user&apos;s environment.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;In this blog post, I hope I gave you a closer look at some of the best APM tools that are out there, both open-source and commercial vendor based. In it, I have listed the key features of each APM tool and discussed each tool&apos;s strengths and weaknesses. The importance of a good APM solution is now indisputable. All it takes is to pick the right one based on your monitoring needs for your applications.&lt;/p&gt;
&lt;p&gt;This blog post is the first of three in a series. In &lt;a href=&quot;https://developer.hpe.com/blog/set-up-apache-skywalking-for-k8s-and-vm-monitoring-in-hpe-greenlake-private-cloud/&quot;&gt;the second post of the series&lt;/a&gt;, I will show you the detailed process of setting up the Apache SkyWalking APM tool for monitoring and alerting of customer applications deployed on a Kubernetes cluster provisioned on HPE GreenLake for Private Cloud Enterprise. In &lt;a href=&quot;https://developer.hpe.com/blog/set-up-apache-skywalking-for-k8s-monitoring-in-hpe-greenlake-for-private-cloud-enterprise/&quot;&gt;the third post of the series&lt;/a&gt;, I will expand on setting up the Apache SkyWalking APM tool to monitor the infrastructure of Kubernetes clusters deployed on HPE GreenLake for Private Cloud Enterprise.&lt;/p&gt;
&lt;h2&gt;Reference&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://www.hpe.com/us/en/greenlake/private-cloud-enterprise.html&quot;&gt;HPE GreenLake for Private Cloud Enterprise&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.splunk.com/&quot;&gt;Splunk&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://newrelic.com/&quot;&gt;New Relic&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.datadoghq.com/&quot;&gt;Datadog&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.dynatrace.com/&quot;&gt;Dynatrace&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.elastic.co/&quot;&gt;Elastic&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://prometheus.io/&quot;&gt;Prometheus&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://skywalking.apache.org/&quot;&gt;Apache SkyWalking&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Announcing Chapel 1.29.0!]]></title><description><![CDATA[External Blog]]></description><link>https://developer.hpe.com/announcing-chapel-1-29-0/</link><guid isPermaLink="false">https://developer.hpe.com/announcing-chapel-1-29-0/</guid><pubDate>Sat, 07 Jan 2023 18:07:06 GMT</pubDate><content:encoded>&lt;p&gt;External Blog&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Addressing hybrid cloud application challenges using HPE GreenLake for Private Cloud Enterprise – Part 2: Application monitoring]]></title><description><![CDATA[Introduction In my previous blog post, I covered the detailed process of deploying the complex Online Boutique application in a hybrid cloud…]]></description><link>https://developer.hpe.com/monitor-application-performance-across-hybrid-cloud-environment-using-apache-skywalking-and-service-mesh/</link><guid isPermaLink="false">https://developer.hpe.com/monitor-application-performance-across-hybrid-cloud-environment-using-apache-skywalking-and-service-mesh/</guid><pubDate>Fri, 06 Jan 2023 06:42:26 GMT</pubDate><content:encoded>&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;In &lt;a href=&quot;https://developer.hpe.com/blog/how-to-deploy-application-across-hybrid-clouds-beyond-hpe-greenlake-for-private-cloud-enterprise/&quot;&gt;my previous blog post&lt;/a&gt;, I covered the detailed process of deploying the complex Online Boutique application in a hybrid cloud environment, across the public EKS cluster from AWS to the private Kubernetes cluster in &lt;a href=&quot;https://www.hpe.com/us/en/greenlake/private-cloud-enterprise.html&quot;&gt;HPE GreenLake for Private Cloud Enterprise&lt;/a&gt;. This hybrid cloud model amplifies the benefits of both private and public clouds and allows for more seamless integration across technical barriers. It enables the enterprise to rely on the security of on-premises data centers while taking advantage of the agility of managing the front-end of an application in the public cloud. This model is becoming increasingly popular as more businesses and enterprises shift toward cloud-based computing.&lt;/p&gt;
&lt;p&gt;However, this evolution presents challenges for monitoring, as applications and services are inherently more distributed in this environment. Having a good application performance monitoring (APM) tool is becoming essential within the hybrid cloud environment. It can consolidate performance metrics and troubleshoot data for assets across the hybrid cloud environment into a single application. It makes it easier to track metrics and allows the enterprise to resolve problems quickly.&lt;/p&gt;
&lt;p&gt;For this blog post, I chose Apache SkyWalking as the APM tool and describe how to set it up, as a &lt;em&gt;self-hosted&lt;/em&gt; APM tool, for monitoring and alerting on application performance across the hybrid cloud environment. I used the service mesh as an auto instrumentation mechanism for application monitoring, without adding any manual process to instrument existing applications.&lt;/p&gt;
&lt;h2&gt;Apache SkyWalking&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://skywalking.apache.org/&quot;&gt;Apache SkyWalking&lt;/a&gt; is an open source APM tool that can monitor, trace, and diagnose distributed systems. It’s especially designed for microservices, cloud native and container-based architectures.&lt;/p&gt;
&lt;p&gt;Apart from a list of supported agents that can be used for instrumenting applications, Apache SkyWalking implements an &lt;a href=&quot;https://www.envoyproxy.io/docs/envoy/v1.18.2/api-v2/service/accesslog/v2/als.proto&quot;&gt;Envoy Access Log Service (ALS)&lt;/a&gt; based solution to provide observability on the service mesh in a Kubernetes environment, regardless of architecture or language. A service mesh provides a mesh of &lt;em&gt;Layer 7&lt;/em&gt; proxies that manage network traffic between services. It supports application observability at the platform layer, instead of the application layer, by abstracting away how inter-process and service-to-service communications are handled in Kubernetes. Using a list of implemented analyzers, e.g., &lt;em&gt;k8s-mesh&lt;/em&gt; and &lt;em&gt;mx-mesh&lt;/em&gt;, Apache SkyWalking can receive and analyze the detailed access logs of all requests, both &lt;em&gt;&lt;strong&gt;HTTP&lt;/strong&gt;&lt;/em&gt; and &lt;em&gt;&lt;strong&gt;TCP&lt;/strong&gt;&lt;/em&gt;, emitted from Envoy ALS. With this solution, users get the application service topology map, metrics graphs, request details and error messages in a clear visualization. This observability solution is much easier to add because it requires no language-specific technology. It can be extremely important for monitoring and visualizing applications that consist of many microservices running across on-premises, cloud-based or hybrid environments.&lt;/p&gt;
&lt;p&gt;Apache SkyWalking is lightweight and scalable. It can be easily set up as a &lt;em&gt;self-hosted&lt;/em&gt; APM tool within a hybrid cloud environment, without any additional external resources for hosting the tool. This helps in resource-constrained environments and removes the privacy and security concerns customers may have about sending their data to third-party services.&lt;/p&gt;
&lt;h2&gt;Set up Apache SkyWalking&lt;/h2&gt;
&lt;h3&gt;Prerequisites&lt;/h3&gt;
&lt;p&gt;Before you start, make sure you have the following required elements:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A public Kubernetes cluster from one of the public cloud providers such as &lt;em&gt;AWS&lt;/em&gt;, &lt;em&gt;Microsoft Azure&lt;/em&gt; or &lt;em&gt;Google&lt;/em&gt;. For the purposes of this case study blog post, one EKS cluster from AWS, named &lt;em&gt;eks-cfe-public&lt;/em&gt;, is used. However, the process also works with a cluster from another provider.&lt;/li&gt;
&lt;li&gt;A private Kubernetes cluster, named &lt;em&gt;eks-pce-clu-1&lt;/em&gt;, provisioned in HPE GreenLake for Private Cloud Enterprise.&lt;/li&gt;
&lt;li&gt;The &lt;em&gt;kubectl&lt;/em&gt; CLI tool, version 1.23 or later, together with the &lt;em&gt;kubeconfig&lt;/em&gt; files for accessing both public and private clusters.&lt;/li&gt;
&lt;li&gt;The &lt;em&gt;&lt;a href=&quot;https://helm.sh/docs/intro/install/&quot;&gt;Helm&lt;/a&gt;&lt;/em&gt; CLI tool, version 3.8.1 or later.&lt;/li&gt;
&lt;li&gt;The &lt;a href=&quot;https://istio.io/latest/docs/reference/commands/istioctl/&quot;&gt;istioctl&lt;/a&gt; CLI tool, version 1.16.0 or later. Use the &lt;a href=&quot;https://istio.io/latest/docs/setup/install/istioctl/&quot;&gt;istioctl Installation&lt;/a&gt; guide to install this CLI tool in your local development environment. The &lt;em&gt;istioctl&lt;/em&gt; client will be used for installing and setting up the &lt;em&gt;Istio&lt;/em&gt; service mesh.&lt;/li&gt;
&lt;li&gt;The &lt;a href=&quot;https://skupper.io/&quot;&gt;Skupper&lt;/a&gt; CLI tool, the latest version 1.2.0. Use the &lt;a href=&quot;https://skupper.io/start/#step-1-install-the-skupper-command-line-tool-in-your-environment&quot;&gt;Skupper Installation&lt;/a&gt; to install this CLI tool to your local development environment.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Deploy Apache SkyWalking to AWS EKS cluster&lt;/h3&gt;
&lt;p&gt;Install Apache SkyWalking using Helm charts with &lt;em&gt;elasticsearch&lt;/em&gt; as storage to the public AWS EKS cluster:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ git clone https://github.com/apache/skywalking-kubernetes
$ cd skywalking-kubernetes/chart
$ helm repo add elastic https://helm.elastic.co
$ helm dep up skywalking 
$ kubectl create ns skywalking
$ helm install skywalking skywalking -n skywalking \
--set oap.image.tag=9.2.0 \
--set ui.image.tag=9.2.0 \
--set oap.storageType=elasticsearch \
--set elasticsearch.imageTag=7.17.1 \
--set elasticsearch.persistence.enabled=true \
--set values.telemetry.v2.enabled=true \
--set oap.envoy.als.enabled=true \
--set oap.env.SW_ENVOY_METRIC_ALS_HTTP_ANALYSIS=mx-mesh \
--set oap.env.SW_ENVOY_METRIC_ALS_TCP_ANALYSIS=mx-mesh
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The Helm commands above install Apache SkyWalking into the namespace &lt;em&gt;skywalking&lt;/em&gt; of the AWS EKS cluster. They use &lt;em&gt;elasticsearch&lt;/em&gt; as the storage type and create it as a &lt;em&gt;StatefulSet&lt;/em&gt; resource, running a pod on each worker node. They install the Apache SkyWalking Observability Analysis Platform (OAP) with replicas set to 2 to ensure high availability. The installation enables the Envoy Access Log Service (ALS) and specifies the ALS analyzer as &lt;em&gt;mx-mesh&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;It should be noted that Apache SkyWalking also supports the ALS analyzer &lt;em&gt;k8s-mesh&lt;/em&gt;, which uses the metadata from the Kubernetes cluster to analyze the logs. It requires the SkyWalking OAP server to access the Kubernetes API server to get information about pods, services and service endpoints. This works only for monitoring a single cloud environment. In the case of a hybrid cloud, you need to use the ALS analyzer &lt;em&gt;mx-mesh&lt;/em&gt;, which uses the Envoy metadata exchange mechanism to get the service names and is therefore required for monitoring applications deployed across a hybrid cloud environment.&lt;/p&gt;
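&lt;p&gt;If you need to change the analyzer after installation, the same OAP environment variables used in the install command above can be updated on the existing release. The following is a minimal sketch, run from the same chart directory and assuming the same release name and namespace; the switch to &lt;em&gt;k8s-mesh&lt;/em&gt; is shown purely to illustrate a single-cluster setup:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;# Sketch: change the ALS analyzers on the existing release while keeping all other values
$ helm upgrade skywalking skywalking -n skywalking --reuse-values \
--set oap.env.SW_ENVOY_METRIC_ALS_HTTP_ANALYSIS=k8s-mesh \
--set oap.env.SW_ENVOY_METRIC_ALS_TCP_ANALYSIS=k8s-mesh
&lt;/code&gt;&lt;/pre&gt;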
&lt;p&gt;You can check the detailed Apache SkyWalking installation by typing the following &lt;em&gt;kubectl&lt;/em&gt; command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl get all -n skywalking
NAME                                                 READY   STATUS      RESTARTS   AGE
pod/elasticsearch-master-0                           1/1     Running     0          6h34m
pod/elasticsearch-master-1                           1/1     Running     0          6h34m
pod/skywalking-oap-init-n92hp                        1/1     Completed   0          88s
pod/skywalking-skywalking-helm-oap-bfb57fbf8-27frm   1/1     Running     0          92s
pod/skywalking-skywalking-helm-oap-bfb57fbf8-djzw5   1/1     Running     0          52s
pod/skywalking-skywalking-helm-ui-7776f4854d-nvkds   1/1     Running     0          6h34m

NAME                                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)               AGE
service/elasticsearch-master             ClusterIP   172.20.171.140   &amp;#x3C;none&gt;        9200/TCP,9300/TCP     6h34m
service/elasticsearch-master-headless    ClusterIP   None             &amp;#x3C;none&gt;        9200/TCP,9300/TCP     6h34m
service/skywalking-skywalking-helm-oap   ClusterIP   172.20.155.177   &amp;#x3C;none&gt;        11800/TCP,12800/TCP   6h34m
service/skywalking-skywalking-helm-ui    ClusterIP   172.20.205.170   &amp;#x3C;none&gt;        80/TCP                6h34m

NAME                                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/skywalking-skywalking-helm-oap   2/2     2            2           6h34m
deployment.apps/skywalking-skywalking-helm-ui    1/1     1            1           6h34m

NAME                                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/skywalking-skywalking-helm-oap-bfb57fbf8    2         2         2       96s
replicaset.apps/skywalking-skywalking-helm-ui-7776f4854d    1         1         1       6h34m

NAME                                    READY   AGE
statefulset.apps/elasticsearch-master   2/2     6h34m

NAME                            COMPLETIONS   DURATION   AGE
job.batch/skywalking-oap-init   1/1           31s        94s
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Change the service type for both the &lt;em&gt;skywalking-skywalking-helm-oap&lt;/em&gt; and &lt;em&gt;skywalking-skywalking-helm-ui&lt;/em&gt; services from &lt;em&gt;ClusterIP&lt;/em&gt; to &lt;em&gt;LoadBalancer&lt;/em&gt;. With the built-in support for Elastic Load Balancing (ELB) in the AWS EKS cluster, externally accessible load balancer host names will be created for those two services in the Apache SkyWalking installation.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl edit service/skywalking-skywalking-helm-ui -n skywalking 
$ kubectl edit service/skywalking-skywalking-helm-oap -n skywalking

$ kubectl get svc -n skywalking -l app=skywalking
NAME                             TYPE           CLUSTER-IP       EXTERNAL-IP                                                              PORT(S)                           AGE
skywalking-skywalking-helm-oap   LoadBalancer   172.20.155.177   afd4a163a65c74af5ad732fcf86b7dff-261027448.us-east-2.elb.amazonaws.com   11800:30642/TCP,12800:32269/TCP   38d
skywalking-skywalking-helm-ui    LoadBalancer   172.20.205.170   a2dea6e89216444e28ed29ef48c0b0fa-951983485.us-east-2.elb.amazonaws.com   80:31740/TCP                      38d
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The Apache SkyWalking UI can be made (publicly) accessible using the assigned URL &lt;a href=&quot;http://a2dea6e89216444e28ed29ef48c0b0fa-951983485.us-east-2.elb.amazonaws.com/&quot;&gt;http://a2dea6e89216444e28ed29ef48c0b0fa-951983485.us-east-2.elb.amazonaws.com/&lt;/a&gt;. The Apache SkyWalking OAP server ELB host name with its port &lt;em&gt;11800&lt;/em&gt;,  &lt;strong&gt;afd4a163a65c74af5ad732fcf86b7dff-261027448.us-east-2.elb.amazonaws.com:11800&lt;/strong&gt;, will be used in the following configuration to send the application metrics.&lt;/p&gt;
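&lt;p&gt;If you prefer a non-interactive change instead of &lt;em&gt;kubectl edit&lt;/em&gt;, the same result can be achieved with &lt;em&gt;kubectl patch&lt;/em&gt;. The following is a minimal sketch, assuming the same service names and namespace as above:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;# Sketch: switch both SkyWalking services to type LoadBalancer without an interactive edit
$ kubectl patch service skywalking-skywalking-helm-ui -n skywalking -p &apos;{&quot;spec&quot;:{&quot;type&quot;:&quot;LoadBalancer&quot;}}&apos;
$ kubectl patch service skywalking-skywalking-helm-oap -n skywalking -p &apos;{&quot;spec&quot;:{&quot;type&quot;:&quot;LoadBalancer&quot;}}&apos;
&lt;/code&gt;&lt;/pre&gt;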
&lt;h3&gt;Install &lt;em&gt;Istio&lt;/em&gt; service mesh&lt;/h3&gt;
&lt;p&gt;Install &lt;em&gt;istio&lt;/em&gt; service mesh using &lt;em&gt;istioctl&lt;/em&gt; with Envoy Access Log Service enabled:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl create ns istio-system
$ istioctl install -y --set profile=demo \
--set meshConfig.enableEnvoyAccessLogService=true \
--set meshConfig.defaultConfig.envoyAccessLogService.address=afd4a163a65c74af5ad732fcf86b7dff-261027448.us-east-2.elb.amazonaws.com:11800 \
--set meshConfig.defaultConfig.envoyMetricsService.address=afd4a163a65c74af5ad732fcf86b7dff-261027448.us-east-2.elb.amazonaws.com:11800 \
--set &apos;meshConfig.defaultConfig.proxyStatsMatcher.inclusionRegexps[0]=.*&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Run the above commands on both the AWS EKS cluster and the private Kubernetes cluster. They install &lt;em&gt;istio&lt;/em&gt; into the namespace &lt;em&gt;istio-system&lt;/em&gt; of each cluster and explicitly enable the Envoy Access Log Service (ALS), with both the &lt;em&gt;envoyAccessLogService.address&lt;/em&gt; and &lt;em&gt;envoyMetricsService.address&lt;/em&gt; settings pointing to the Apache SkyWalking OAP server ELB host name and its port &lt;em&gt;11800&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;You can check the detailed &lt;em&gt;istio&lt;/em&gt; installation by typing the following &lt;em&gt;kubectl&lt;/em&gt; command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl get all -n istio-system
NAME                                        READY   STATUS    RESTARTS   AGE
pod/istio-egressgateway-78fb5cf46-djxv2     1/1     Running   0          38d
pod/istio-ingressgateway-77b9d69b74-499vf   1/1     Running   0          38d
pod/istiod-67fcb675b5-2dhjw                 1/1     Running   0          38d

NAME                           TYPE           CLUSTER-IP      EXTERNAL-IP                                                               PORT(S)                                                                      AGE
service/istio-egressgateway    ClusterIP      172.20.224.42   &amp;#x3C;none&gt;                                                                    80/TCP,443/TCP                                                               38d
service/istio-ingressgateway   LoadBalancer   172.20.249.41   a17641fd9b6564b02ab3cc5faeb51e7a-1241343104.us-east-2.elb.amazonaws.com   15021:30506/TCP,80:31341/TCP,443:31933/TCP,31400:31131/TCP,15443:32118/TCP   38d
service/istiod                 ClusterIP      172.20.95.232   &amp;#x3C;none&gt;                                                                    15010/TCP,15012/TCP,443/TCP,15014/TCP                                        38d

NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/istio-egressgateway    1/1     1            1           38d
deployment.apps/istio-ingressgateway   1/1     1            1           38d
deployment.apps/istiod                 1/1     1            1           38d

NAME                                              DESIRED   CURRENT   READY   AGE
replicaset.apps/istio-egressgateway-78fb5cf46     1         1         1       38d
replicaset.apps/istio-ingressgateway-77b9d69b74   1         1         1       38d
replicaset.apps/istiod-67fcb675b5                 1         1         1       38d
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Deploy Online Boutique application&lt;/h3&gt;
&lt;p&gt;Please refer to my blog post &lt;a href=&quot;https://developer.hpe.com/blog/how-to-deploy-application-across-hybrid-clouds-beyond-hpe-greenlake-for-private-cloud-enterprise/&quot;&gt;here&lt;/a&gt; on how to deploy the Online Boutique application across the public AWS EKS cluster and the private Kubernetes cluster in HPE GreenLake for Private Cloud Enterprise. The &lt;em&gt;Skupper&lt;/em&gt; CLI tool is used for deploying the Virtual Application Network (VAN) and creating a connection between the private Kubernetes cluster and the public AWS EKS cluster.&lt;/p&gt;
&lt;p&gt;After deploying the Online Boutique application, you can verify the deployment by checking the &lt;em&gt;Skupper&lt;/em&gt; status in the deployment namespace &lt;em&gt;boutique&lt;/em&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;from private Kubernetes cluster:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ skupper status --namespace boutique
Skupper is enabled for namespace &quot;boutique&quot; with site name &quot;pce-private&quot; in interior mode. 
It is connected to 1 other site. It has 10 exposed services.
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;from AWS EKS cluster:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ skupper status --namespace boutique
Skupper is enabled for namespace &quot;boutique&quot; with site name &quot;aws-public&quot; in interior mode. 
It is connected to 1 other site. It has 10 exposed services.
The site console url is: https://aea867abf6fb6413d8f577652da564c1-130946084.us-east-2.elb.amazonaws.com:8080
The credentials for internal console-auth mode are held in secret: &apos;skupper-console-users&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The Skupper console shows the connections from the public AWS EKS cluster to the private Kubernetes cluster and the hybrid deployment of the Online Boutique application:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/skupper-apps-monitor.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The Online Boutique UI can be accessed through the deployed frontend service URL from the public AWS EKS cluster:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/online-boutique-frontend-monitor.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Online Boutique application monitoring using ALS&lt;/h3&gt;
&lt;h4&gt;1 Add the label &lt;em&gt;istio-injection=enabled&lt;/em&gt; to namespace &lt;em&gt;boutique&lt;/em&gt;&lt;/h4&gt;
&lt;p&gt;Run the following command, from both the AWS EKS cluster and the private Kubernetes cluster, to add the label &lt;em&gt;istio-injection=enabled&lt;/em&gt; to the namespace &lt;em&gt;boutique&lt;/em&gt;, in which the Online Boutique application is deployed. This will enable &lt;em&gt;istio&lt;/em&gt; injection for this namespace:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl label namespace boutique istio-injection=enabled
$ kubectl get ns boutique --show-labels
NAME       STATUS   AGE   LABELS
boutique   Active   3d   istio-injection=enabled,kubernetes.io/metadata.name=boutique
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;2 Restart Online Boutique application deployment&lt;/h4&gt;
&lt;p&gt;Run the following command from both the AWS EKS cluster and the private Kubernetes cluster:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl rollout restart deployment -n boutique
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This command will restart all the deployments in the namespace &lt;em&gt;boutique&lt;/em&gt; by terminating the related pods and re-creating them. Since the namespace has the label &lt;em&gt;istio-injection=enabled&lt;/em&gt; added, the newly created pods will automatically be injected with the &lt;em&gt;istio-proxy&lt;/em&gt; container. The &lt;em&gt;PROXY_CONFIG&lt;/em&gt; in the &lt;em&gt;istio-proxy&lt;/em&gt; container contains the configuration of &lt;em&gt;envoyAccessLogService.address&lt;/em&gt; and &lt;em&gt;envoyMetricsService.address&lt;/em&gt;, both pointing to the Apache SkyWalking OAP server URL, as shown in the following excerpt from the pod description:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;  istio-proxy:
    Container ID:  docker://2e0fa0a4e290a0138e59fe11e4bd4cdaffa329e5b780e9ed227089bb10660c73
    Image:         docker.io/istio/proxyv2:1.16.0
    Image ID:      docker-pullable://istio/proxyv2@sha256:f6f97fa4fb77a3cbe1e3eca0fa46bd462ad6b284c129cf57bf91575c4fb50cf9
    Port:          15090/TCP
    Host Port:     0/TCP
    Args:
      proxy
      sidecar
      --domain
      $(POD_NAMESPACE).svc.cluster.local
      --proxyLogLevel=warning
      --proxyComponentLogLevel=misc:error
      --log_output_level=default:info
      --concurrency
      2
    State:          Running
      Started:      Fri, 02 Dec 2022 16:34:23 +0100
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  1Gi
    Requests:
      cpu:      10m
      memory:   40Mi
    Readiness:  http-get http://:15021/healthz/ready delay=1s timeout=3s period=2s #success=1 #failure=30
    Environment:
      JWT_POLICY:                    third-party-jwt
      PILOT_CERT_PROVIDER:           istiod
      CA_ADDR:                       istiod.istio-system.svc:15012
      POD_NAME:                      productcatalogservice-77589df479-p2p2c (v1:metadata.name)
      POD_NAMESPACE:                 boutique (v1:metadata.namespace)
      INSTANCE_IP:                    (v1:status.podIP)
      SERVICE_ACCOUNT:                (v1:spec.serviceAccountName)
      HOST_IP:                        (v1:status.hostIP)
      PROXY_CONFIG:                  {&quot;envoyAccessLogService&quot;:{&quot;address&quot;:&quot;afd4a163a65c74af5ad732fcf86b7dff-261027448.us-east-2.elb.amazonaws.com:11800&quot;},&quot;envoyMetricsService&quot;:{&quot;address&quot;:&quot;afd4a163a65c74af5ad732fcf86b7dff-261027448.us-east-2.elb.amazonaws.com:11800&quot;},&quot;proxyStatsMatcher&quot;:{&quot;inclusionRegexps&quot;:[&quot;.*&quot;]}}
                                     
      ISTIO_META_POD_PORTS:          [
                                         {&quot;containerPort&quot;:3550,&quot;protocol&quot;:&quot;TCP&quot;}
                                     ]
      ISTIO_META_APP_CONTAINERS:     server
      ISTIO_META_CLUSTER_ID:         Kubernetes
      ISTIO_META_INTERCEPTION_MODE:  REDIRECT
      ISTIO_META_WORKLOAD_NAME:      productcatalogservice
      ISTIO_META_OWNER:              kubernetes://apis/apps/v1/namespaces/boutique/deployments/productcatalogservice
      ISTIO_META_MESH_ID:            cluster.local
      TRUST_DOMAIN:                  cluster.local
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It should be noted that, in order to avoid collecting ALS logs for the Skupper deployments (e.g., &lt;em&gt;skupper-router&lt;/em&gt; &amp;#x26; &lt;em&gt;skupper-service-controller&lt;/em&gt; deployed to the same namespace, which are used for multicloud communication and hybrid application deployment), we can add the annotation &lt;em&gt;sidecar.istio.io/inject: false&lt;/em&gt; to the pod template of those &lt;em&gt;Skupper&lt;/em&gt; deployments. After restarting the deployments, the &lt;em&gt;Skupper&lt;/em&gt; pods will no longer have the &lt;em&gt;istio-proxy&lt;/em&gt; container injected.&lt;/p&gt;
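&lt;p&gt;A minimal sketch of adding that annotation to the pod templates is shown below, assuming the default Skupper deployment names and the &lt;em&gt;boutique&lt;/em&gt; namespace used in this post; since the patch modifies the pod template, it also triggers the rollout automatically:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;# Sketch: disable istio sidecar injection for the Skupper pods (the patch triggers a rollout)
$ kubectl patch deployment skupper-router -n boutique -p &apos;{&quot;spec&quot;:{&quot;template&quot;:{&quot;metadata&quot;:{&quot;annotations&quot;:{&quot;sidecar.istio.io/inject&quot;:&quot;false&quot;}}}}}&apos;
$ kubectl patch deployment skupper-service-controller -n boutique -p &apos;{&quot;spec&quot;:{&quot;template&quot;:{&quot;metadata&quot;:{&quot;annotations&quot;:{&quot;sidecar.istio.io/inject&quot;:&quot;false&quot;}}}}}&apos;
&lt;/code&gt;&lt;/pre&gt;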
&lt;h4&gt;3 Monitor Online Boutique application&lt;/h4&gt;
&lt;p&gt;After following these steps, the Online Boutique application metrics will be visible from the Apache SkyWalking UI, under the &lt;em&gt;Service Mesh&lt;/em&gt; tab:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/sw-app-svc.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The SkyWalking UI &lt;em&gt;Topology&lt;/em&gt; page will show the Online Boutique application topology map:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/sw-app-map.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can check the service &lt;em&gt;Overview&lt;/em&gt; and &lt;em&gt;Endpoint&lt;/em&gt; pages per service, e.g., &lt;em&gt;frontend&lt;/em&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/sw-app-overview.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/sw-app-endpoint.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can also check all the services observed through the &lt;em&gt;Envoy Metrics Service&lt;/em&gt; setup from the &lt;em&gt;Data Plane&lt;/em&gt; tab under &lt;em&gt;Service Mesh&lt;/em&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/sw-app-svc-dataplane.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Online Boutique application alerting&lt;/h3&gt;
&lt;p&gt;Apache SkyWalking provides an alerting mechanism to measure application performance according to a list of pre-defined metrics, e.g., &lt;em&gt;service_resp_time&lt;/em&gt;, &lt;em&gt;service_instance_resp_time&lt;/em&gt;, and &lt;em&gt;service_sla&lt;/em&gt;. It triggers an alert when a metric reaches its pre-defined threshold.&lt;/p&gt;
&lt;p&gt;Apache SkyWalking configures alerting using a collection of alerting rules located in &lt;em&gt;/skywalking/config/alarm-settings.yml&lt;/em&gt; in the SkyWalking OAP pod. You can check the content by running the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl exec pod/skywalking-skywalking-helm-oap-bfb57fbf8-5g7k7 -n skywalking -it -- cat /skywalking/config/alarm-settings.yml 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can define new alerting rules by adding new entries to the file for metrics defined with the SkyWalking observability analysis language (OAL), or customize the existing rules with new thresholds.&lt;/p&gt;
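&lt;p&gt;For illustration only, a customized rule for the &lt;em&gt;service_sla&lt;/em&gt; metric could look like the following sketch, which follows the same rule structure as the file&apos;s existing entries; the threshold and message used here are placeholders rather than the shipped defaults:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;rules:
  # Sketch only: alert when the success rate of a service (service_sla, expressed
  # in units of 0.01%) drops below 80% for 2 of the last 10 minutes
  service_sla_rule:
    metrics-name: service_sla
    op: &quot;&amp;#x3C;&quot;
    threshold: 8000
    period: 10
    count: 2
    silence-period: 5
    message: Successful rate of service {name} is lower than 80% in 2 minutes of last 10 minutes
&lt;/code&gt;&lt;/pre&gt;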
&lt;p&gt;Below is the alarms page from the SkyWalking UI showing all the triggered alerts for a deployed Online Boutique application:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/sw-alert.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The alarms page shows an alert &lt;em&gt;Response time of service instance frontend-549fd9954f-lvnsv of frontend is more than 1000ms in 2 minutes of last 10 minutes&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;This alert is triggered by the following metric alerting rule for the metric &lt;em&gt;service_instance_resp_time&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;  service_instance_resp_time_rule:
    metrics-name: service_instance_resp_time
    op: &quot;&gt;&quot;
    threshold: 1000
    period: 10
    count: 2
    silence-period: 5
    message: Response time of service instance {name} is more than 1000ms in 2 minutes of last 10 minutes
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It indicates an issue from the &lt;em&gt;frontend&lt;/em&gt; service instance in Online Boutique application. You can check the service trace page further to figure out the root cause of this issue.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;This blog post discussed the challenges in hybrid cloud monitoring and described the process of setting up Apache SkyWalking as a &lt;em&gt;self-hosted&lt;/em&gt; APM tool for application performance monitoring across a hybrid cloud environment. Instead of taking the manual instrumentation mechanism to rebuild the application with various agents to collect and send the application metrics, we used an auto instrumentation approach with service mesh in the setup. This Envoy Access Log Service (ALS) based approach did not require any change to the deployed applications and the setup process showed that it was very easy to configure it for hybrid cloud monitoring. This integration solution can be extremely important for monitoring and visualizing an application that consists of many microservices running across hybrid cloud environments.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE joins the Linux Foundation’s SPDX workgroup as an SPDX supporter]]></title><description><![CDATA[Hewlett Packard Enterprise (HPE) is pleased to have joined The Linux Foundation® Software Package Data Exchange (SPDX)® workgroup as an SPDX…]]></description><link>https://developer.hpe.com/hpe-joins-the-linux-foundation’s-spdx-workgroup-as-an-spdx-supporter/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-joins-the-linux-foundation’s-spdx-workgroup-as-an-spdx-supporter/</guid><pubDate>Thu, 05 Jan 2023 17:52:22 GMT</pubDate><content:encoded>&lt;p&gt;Hewlett Packard Enterprise (HPE) is pleased to have joined &lt;a href=&quot;https://www.linuxfoundation.org/projects&quot;&gt;The Linux Foundation®&lt;/a&gt; &lt;a href=&quot;https://spdx.dev/&quot;&gt;Software Package Data Exchange (SPDX)®&lt;/a&gt; workgroup as an SPDX Supporter.&lt;/p&gt;
&lt;p&gt;HPE has chosen &lt;a href=&quot;https://spdx.dev/about/&quot;&gt;SPDX&lt;/a&gt; for its software bills of materials (SBOMs) to communicate an ingredient list for its software products. SBOMs written in SPDX format are easy to understand … even by a human. The real value in using a standardized format like SPDX, however, will be in the creation of automation tooling to consume and report back on the information contained in SBOMs, especially when matching them up to vulnerability databases. Using SPDX, HPE will be able to adapt its reporting for a variety of initiatives, including &lt;a href=&quot;https://www.federalregister.gov/documents/2021/05/17/2021-10460/improving-the-nations-cybersecurity&quot;&gt;US Executive Order #14028&lt;/a&gt; (May 2021), as well as global initiatives aligned with software security and supply chain management. Furthermore, since SPDX has been &lt;a href=&quot;https://www.iso.org/standard/81870.html&quot;&gt;approved&lt;/a&gt; by the International Organization for Standardization (ISO), it will be easier to cross-collaborate with external suppliers who also choose this standard format.&lt;/p&gt;
&lt;p&gt;SPDX also helps with a broader issue at the intersection of &lt;em&gt;Open Source Street&lt;/em&gt; and &lt;em&gt;Security Avenue&lt;/em&gt;. Various public security feeds/databases use different names to describe the same packages/components. By supporting the inclusion of alternative identifiers (e.g., PURL, SWID, etc.), SPDX provides an entry point for teams to correlate the SBOM information they receive from a supplier with a larger number of public security feeds or even cross-correlate with SBOMs created using the other popular SBOM specifications. Finally, HPE supports SPDX’s efforts in &lt;a href=&quot;https://github.com/spdx/license-list-XML/blob/main/DOCS/license-inclusion-principles.md&quot;&gt;reviewing licenses for inclusion&lt;/a&gt;. SPDX’s legal group serves an important function in identifying and cataloguing commonly used open source licenses with short identifiers to aid in license identification for open source projects.&lt;/p&gt;
&lt;p&gt;HPE is committed to software security, and we want to provide our customers with confidence in our products by using a format that serves both security and license identification initiatives.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Addressing hybrid cloud application challenges using HPE GreenLake for Private Cloud Enterprise – Part 1: Deploying complex apps]]></title><description><![CDATA[Introduction HPE GreenLake for Private Cloud Enterprise delivers a modern private cloud to support your app workloads with bare metal…]]></description><link>https://developer.hpe.com/how-to-deploy-application-across-hybrid-clouds-beyond-hpe-greenlake-for-private-cloud-enterprise/</link><guid isPermaLink="false">https://developer.hpe.com/how-to-deploy-application-across-hybrid-clouds-beyond-hpe-greenlake-for-private-cloud-enterprise/</guid><pubDate>Thu, 05 Jan 2023 07:52:05 GMT</pubDate><content:encoded>&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/greenlake/private-cloud-enterprise.html&quot;&gt;HPE GreenLake for Private Cloud Enterprise&lt;/a&gt; delivers a modern private cloud to support your app workloads with bare metal, containers, and virtual machines (VMs) running in any combination across your edges, colocations, and data centers. It combines self-service resource access for developers with consumption and performance transparency for IT.&lt;/p&gt;
&lt;p&gt;This blog post shows you how to deploy a complex application that consists of multiple microservices as a hybrid app that spans both a public AWS EKS cluster and a private Kubernetes cluster in HPE GreenLake for Private Cloud Enterprise. By using a hybrid cloud solution, you can combine the compliance benefits of a private cloud in HPE GreenLake for Private Cloud Enterprise environment with the scalability and connectivity of the public cloud. You can rely on the security of finely tuned, on-premises data centers while turning to the agility of cloud computing to manage the front end of an application in the public cloud. Using HPE GreenLake for Private Cloud Enterprise, you can optimize resource allocation, save costs, and improve overall productivity and performance in the process.&lt;/p&gt;
&lt;h2&gt;Prerequisites&lt;/h2&gt;
&lt;p&gt;Before you start, make sure you have the following required components:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A public Kubernetes cluster from one of the public cloud providers such as &lt;em&gt;AWS&lt;/em&gt;, &lt;em&gt;Microsoft Azure&lt;/em&gt; or &lt;em&gt;Google&lt;/em&gt;. For the purposes of the use case highlighted in this blog post, a single EKS cluster from AWS, named &lt;em&gt;eks-cfe-public&lt;/em&gt;, is used. However, the process also works with a cluster from another provider.&lt;/li&gt;
&lt;li&gt;A private Kubernetes cluster, named &lt;em&gt;eks-pce-clu-1&lt;/em&gt;, provisioned in HPE GreenLake for Private Cloud Enterprise.&lt;/li&gt;
&lt;li&gt;The &lt;em&gt;kubectl&lt;/em&gt; CLI tool, version 1.23 or later, together with the &lt;em&gt;kubeconfig&lt;/em&gt; files for accessing both the public AWS EKS cluster and private Kubernetes cluster in HPE GreenLake for Private Cloud Enterprise. To simplify the setup process, you can start two terminal sessions in your environment, export the environment variable &lt;code&gt;KUBECONFIG&lt;/code&gt; in each session and point it to the &lt;em&gt;kubeconfig&lt;/em&gt; file for accessing the public AWS EKS cluster and the private Kubernetes cluster, respectively, as sketched in the example after this list.&lt;/li&gt;
&lt;li&gt;The &lt;a href=&quot;https://skupper.io/&quot;&gt;Skupper&lt;/a&gt; CLI tool, the latest version 1.2.0. Use the &lt;a href=&quot;https://skupper.io/start/#step-1-install-the-skupper-command-line-tool-in-your-environment&quot;&gt;Skupper Installation&lt;/a&gt; to install this CLI tool to your environment. The &lt;em&gt;Skupper&lt;/em&gt; CLI tool works w﻿ith the same environment setup for &lt;em&gt;kubectl&lt;/em&gt; for accessing the public AWS EKS cluster and private Kubernetes cluster in HPE GreenLake for Private Cloud Enterprise. Some options, e.g., &lt;em&gt;--kubeconfig&lt;/em&gt;, &lt;em&gt;--context&lt;/em&gt;, and &lt;em&gt;--namespace&lt;/em&gt;, can be used explicitly in &lt;em&gt;Skupper&lt;/em&gt; for using a specific &lt;em&gt;kubeconfig&lt;/em&gt; file and context or accessing a Kubernetes namespace.&lt;/li&gt;
&lt;/ul&gt;
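&lt;p&gt;As an example of the two-session setup mentioned in the list above, you could export the variable as follows; the file paths are placeholders for wherever you saved the downloaded &lt;em&gt;kubeconfig&lt;/em&gt; files:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;# Terminal session 1: point kubectl at the public AWS EKS cluster (placeholder path)
$ export KUBECONFIG=~/kubeconfigs/eks-cfe-public.yaml
$ kubectl get nodes

# Terminal session 2: point kubectl at the private Kubernetes cluster (placeholder path)
$ export KUBECONFIG=~/kubeconfigs/eks-pce-clu-1.yaml
$ kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;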
&lt;h2&gt;Online Boutique&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/GoogleCloudPlatform/microservices-demo&quot;&gt;Online Boutique&lt;/a&gt; is a cloud-first microservices demo application consisting of 11 microservices. The application is a web-based e-commerce app where users can browse items, add them to the cart, and purchase them. This demo app has been used widely for demonstrating various technologies. It’s easy to deploy and it works on any Kubernetes cluster.&lt;/p&gt;
&lt;p&gt;This blog post will use the &lt;em&gt;Online Boutique&lt;/em&gt; as the demo application, deploying it across the public AWS EKS cluster and the private Kubernetes cluster in HPE GreenLake for Private Cloud Enterprise using &lt;em&gt;Skupper&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/apps.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Skupper&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://skupper.io/&quot;&gt;Skupper&lt;/a&gt; is a &lt;em&gt;Layer 7&lt;/em&gt; service interconnect. It enables secure communication across multiple Kubernetes clusters through a Virtual Application Network (VAN). The VAN connects the applications and services in multiple clusters into a virtual network so that they can communicate with each other as if they were all running in the same site. VANs are able to provide connectivity across the hybrid cloud because they operate at Layer 7 (the application layer). They use Layer 7 application routers to route communication between Layer 7 application addresses.&lt;/p&gt;
&lt;p&gt;With &lt;em&gt;Skupper&lt;/em&gt;, your application can span multiple cloud providers, data centers, and regions with no VPNs or special firewall rules.&lt;/p&gt;
&lt;h2&gt;Deploy Online Boutique application&lt;/h2&gt;
&lt;p&gt;Clone the &lt;a href=&quot;https://github.com/GoogleCloudPlatform/microservices-demo&quot;&gt;Online Boutique&lt;/a&gt; GitHub repo to your local environment:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ git clone https://github.com/GoogleCloudPlatform/microservices-demo.git
$ cd microservices-demo/release/
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;From the manifest file &lt;em&gt;kubernetes-manifests.yaml&lt;/em&gt; in the folder, create the following three manifest files (one way to split the combined file is sketched after this list):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;k8s-manifests-deploy-private.yaml&lt;/strong&gt;, including the following 3 &lt;em&gt;Deployment&lt;/em&gt; manifests:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;emailservice&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;paymentservice&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;shippingservice&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;k8s-manifests-deploy-public.yaml&lt;/strong&gt;, including the following 8 &lt;em&gt;Deployment&lt;/em&gt; manifests:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;frontend&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;recommendationservice&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;productcatalogservice&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;checkoutservice&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;cartservice&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;currencyservice&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;redis-cart&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;adservice&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;k8s-manifests-service-public.yaml&lt;/strong&gt;, including the following 2 &lt;em&gt;Service&lt;/em&gt; manifests:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;frontend&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;frontend-external&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
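&lt;p&gt;There is no single required way to produce these files; you can simply copy the relevant YAML documents into each file with a text editor. As one possible shortcut, the sketch below uses the &lt;em&gt;yq&lt;/em&gt; tool (v4), which is not part of the prerequisites above, to extract the three &lt;em&gt;Deployment&lt;/em&gt; manifests destined for the private cluster; the other two files can be produced the same way:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;# Sketch (requires yq v4): extract the emailservice, paymentservice and shippingservice
# Deployments from the combined manifest into k8s-manifests-deploy-private.yaml
$ yq eval &apos;select(.kind == &quot;Deployment&quot; and (.metadata.name == &quot;emailservice&quot; or .metadata.name == &quot;paymentservice&quot; or .metadata.name == &quot;shippingservice&quot;))&apos; kubernetes-manifests.yaml &gt; k8s-manifests-deploy-private.yaml
&lt;/code&gt;&lt;/pre&gt;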
&lt;h3&gt;Deploy application microservices to AWS EKS cluster&lt;/h3&gt;
&lt;p&gt;Create the namespace &lt;em&gt;boutique&lt;/em&gt; in the AWS EKS cluster and then deploy 8 &lt;em&gt;Deployment&lt;/em&gt; and 2 &lt;em&gt;Service&lt;/em&gt; resources to the namespace:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl create ns boutique
$ kubectl config set-context --current --namespace boutique
$ kubectl apply -f k8s-manifests-deploy-public.yaml                  
deployment.apps/recommendationservice created                                                  
deployment.apps/frontend created                                                             
deployment.apps/productcatalogservice created                                               
deployment.apps/checkoutservice created                                                      
deployment.apps/cartservice created                                               
deployment.apps/currencyservice created                                                
deployment.apps/redis-cart created                                                      
deployment.apps/adservice created                                                            

$ kubectl apply -f k8s-manifests-service-public.yaml           
service/frontend created                                                              
service/frontend-external created    

$ kubectl get svc
frontend                ClusterIP      172.20.103.129   &amp;#x3C;none&gt;                                                                    80/TCP                            40s   &amp;#x3C;none&gt;
frontend-external       LoadBalancer   172.20.16.223    a52d7c861c01c4466803a44373bc11dc-1387384363.us-east-2.elb.amazonaws.com   80:31482/TCP                      40s   &amp;#x3C;none&gt;                                    
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Deploy application microservices to private Kubernetes cluster&lt;/h3&gt;
&lt;p&gt;Similarly, create the namespace &lt;em&gt;boutique&lt;/em&gt; in the private Kubernetes cluster running on HPE GreenLake for Private Cloud Enterprise and then deploy 3 &lt;em&gt;Deployment&lt;/em&gt; resources to the namespace:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl create ns boutique
$ kubectl config set-context --current --namespace boutique
$ kubectl apply -f k8s-manifests-deploy-private.yaml    
deployment.apps/emailservice created
deployment.apps/paymentservice created
deployment.apps/shippingservice created
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Deploy Virtual Application Network&lt;/h3&gt;
&lt;p&gt;Define the Virtual Application Network using &lt;em&gt;Skupper&lt;/em&gt; on both the public AWS EKS cluster and private Kubernetes cluster:&lt;/p&gt;
&lt;h4&gt;1. In the public AWS EKS cluster, deploy the &lt;em&gt;aws-public&lt;/em&gt; application router.&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl config set-context --current --namespace boutique
$ skupper init --site-name aws-public                                                                                         
Waiting 115 seconds for LoadBalancer IP or hostname...                                         
Waiting 111 seconds for LoadBalancer IP or hostname...                                         
Waiting 108 seconds for LoadBalancer IP or hostname...                                        
Skupper is now installed in namespace &apos;boutique&apos;.  Use &apos;skupper status&apos; to get more information.            
                                                                        
$ skupper status             
Skupper is enabled for namespace &quot;boutique&quot; with site name &quot;aws-public&quot; in interior mode. It is connected to 1 other site. It has 10 exposed services.
The site console url is:  https://aea867abf6fb6413d8f577652da564c1-130946084.us-east-2.elb.amazonaws.com:8080
The credentials for internal console-auth mode are held in secret: &apos;skupper-console-users&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;2. In the private Kubernetes cluster, deploy the &lt;em&gt;pce-private&lt;/em&gt; application router.&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl config set-context --current --namespace boutique
$ skupper init --ingress none --site-name pce-private
Skupper is now installed in namespace &apos;boutique&apos;.  Use &apos;skupper status&apos; to get more information.

$ skupper status
Skupper is enabled for namespace &quot;boutique&quot; with site name &quot;pce-private&quot; in interior mode. It is not connected to any other sites. It has no exposed services
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;3. In the public AWS EKS cluster, create a connection token for connection.&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ skupper token create ~/aws-public-token.yaml                                                                              
Token written to /home/guoping/aws-public-token.yaml                                           
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;4. In the private Kubernetes cluster, define the connections to the public AWS EKS cluster.&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ skupper link create ~/aws-public-token.yaml 
Site configured to link to https://aea867abf6fb6413d8f577652da564c1-130946084.us-east-2.elb.amazonaws.com:8081/d2e35a8c-6654-11ed-bf10-000c295724b5 (name=link1)
Check the status of the link using &apos;skupper link status&apos;.

$ skupper link status

Links created from this site:
-------------------------------
Link link1 is active

Currently active links from other sites:
----------------------------------------
There are no active links

$ skupper status
Skupper is enabled for namespace &quot;boutique&quot; with site name &quot;pce-private&quot; in interior mode. It is connected to 1 other site. It has no exposed services.
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;5. In the public AWS EKS cluster, verify connectivity has been established.&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ skupper status               
Skupper is enabled for namespace &quot;aws-boutique&quot; with site name &quot;aws-public&quot; in interior mode. It is connected to 1 other site. It has no exposed services.                                    
The site console url is:  https://aea867abf6fb6413d8f577652da564c1-130946084.us-east-2.elb.amazonaws.com:8080
The credentials for internal console-auth mode are held in secret: &apos;skupper-console-users&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;From the &lt;em&gt;Skupper&lt;/em&gt; console URL at &lt;strong&gt;&lt;a href=&quot;https://aea867abf6fb6413d8f577652da564c1-130946084.us-east-2.elb.amazonaws.com:8080&quot;&gt;https://aea867abf6fb6413d8f577652da564c1-130946084.us-east-2.elb.amazonaws.com:8080&lt;/a&gt;&lt;/strong&gt;, you can see the connections from
the public AWS EKS cluster and the private Kubernetes cluster in HPE GreenLake for Private Cloud Enterprise:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/skupper-status.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Expose application microservices to Virtual Application Network&lt;/h3&gt;
&lt;h4&gt;1. In the private Kubernetes cluster, expose 3 services:&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ skupper expose deployment emailservice --address emailservice --port 5000 --protocol http2 --target-port 8080
deployment emailservice exposed as emailservice

$ skupper expose deployment paymentservice --address paymentservice --port 50051 --protocol http2 --target-port 50051
deployment paymentservice exposed as paymentservice

$ skupper expose deployment shippingservice --address shippingservice --port 50051 --protocol http2 --target-port 50051
deployment shippingservice exposed as shippingservice
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;2. In the public AWS EKS cluster, expose 7 services:&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ skupper expose deployment productcatalogservice --address productcatalogservice --port 3550 --protocol http2 --target-port 3550
deployment productcatalogservice exposed as productcatalogservice

$ skupper expose deployment recommendationservice --address recommendationservice --port 8080 --protocol http2 --target-port 8080
deployment recommendationservice exposed as recommendationservice

$ skupper expose deployment checkoutservice --address checkoutservice --port 5050 --protocol http2 --target-port 5050
deployment checkoutservice exposed as checkoutservice

$ skupper expose deployment cartservice --address cartservice --port 7070 --protocol http2 --target-port 7070
deployment cartservice exposed as cartservice

$ skupper expose deployment currencyservice --address currencyservice --port 7000 --protocol http2 --target-port 7000
deployment currencyservice exposed as currencyservice

$ skupper expose deployment adservice --address adservice --port 9555 --protocol http2 --target-port 9555
deployment adservice exposed as adservice                                                      

$ skupper expose deployment redis-cart --address redis-cart --port 6379 --protocol tcp --target-port 6379
deployment redis-cart exposed as redis-cart                                                    
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Access Online Boutique application&lt;/h3&gt;
&lt;p&gt;From the &lt;em&gt;Skupper&lt;/em&gt; console, you can see all the services deployed to the public AWS EKS cluster and the private Kubernetes cluster:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/skupper-apps.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;From the public AWS EKS cluster, check all the deployed services.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl get svc -n boutique
NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)                           AGE
adservice               ClusterIP      172.20.183.120   &amp;#x3C;none&gt;                                                                    9555/TCP                          40d
cartservice             ClusterIP      172.20.255.202   &amp;#x3C;none&gt;                                                                    7070/TCP                          40d
checkoutservice         ClusterIP      172.20.146.32    &amp;#x3C;none&gt;                                                                    5050/TCP                          40d
currencyservice         ClusterIP      172.20.244.103   &amp;#x3C;none&gt;                                                                    7000/TCP                          40d
emailservice            ClusterIP      172.20.136.4     &amp;#x3C;none&gt;                                                                    5000/TCP                          28h
frontend                ClusterIP      172.20.103.129   &amp;#x3C;none&gt;                                                                    80/TCP                            40d
frontend-external       LoadBalancer   172.20.16.223    a52d7c861c01c4466803a44373bc11dc-1387384363.us-east-2.elb.amazonaws.com   80:31482/TCP                      40d
paymentservice          ClusterIP      172.20.244.25    &amp;#x3C;none&gt;                                                                    50051/TCP                         28h
productcatalogservice   ClusterIP      172.20.147.163   &amp;#x3C;none&gt;                                                                    3550/TCP                          40d
recommendationservice   ClusterIP      172.20.83.157    &amp;#x3C;none&gt;                                                                    8080/TCP                          40d
redis-cart              ClusterIP      172.20.179.232   &amp;#x3C;none&gt;                                                                    6379/TCP                          40d
shippingservice         ClusterIP      172.20.16.129    &amp;#x3C;none&gt;                                                                    50051/TCP                         28h
skupper                 LoadBalancer   172.20.111.44    aea867abf6fb6413d8f577652da564c1-130946084.us-east-2.elb.amazonaws.com    8080:31907/TCP,8081:30027/TCP     40d
skupper-router          LoadBalancer   172.20.182.70    acaedc6978d3b453b8555d6dead90943-1598691456.us-east-2.elb.amazonaws.com   55671:30272/TCP,45671:32499/TCP   40d
skupper-router-local    ClusterIP      172.20.175.145   &amp;#x3C;none&gt;                                                                    5671/TCP                          40d
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;em&gt;Online Boutique&lt;/em&gt; application can be accessed from the assigned load balancer host name &lt;strong&gt;a52d7c861c01c4466803a44373bc11dc-1387384363.us-east-2.elb.amazonaws.com&lt;/strong&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/online-boutique-frontend.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can start shopping by adding items to the shopping cart, creating your shipping address and choosing the payment method. Please note that both the payment and the shipping services are running from the private Kubernetes cluster in HPE GreenLake for Private Cloud Enterprise.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/online-boutique-payment.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can then place an order to complete your shopping.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/online-boutique-order.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Next Steps&lt;/h2&gt;
&lt;p&gt;This blog post described the process of deploying the &lt;em&gt;Online Boutique&lt;/em&gt; application as a hybrid app across both a public EKS cluster in AWS and a private Kubernetes cluster in HPE GreenLake for Private Cloud Enterprise environment.&lt;/p&gt;
&lt;p&gt;Running applications and services in this hybrid cloud environment is becoming increasingly popular as more businesses and enterprises shift toward cloud-based computing. This model can amplify the benefits of both private and public clouds and allows for more seamless integration across technical barriers.&lt;/p&gt;
&lt;p&gt;In &lt;a href=&quot;https://developer.hpe.com/blog/monitor-application-performance-across-hybrid-cloud-environment-using-apache-skywalking-and-service-mesh/&quot;&gt;my next blog post of the series&lt;/a&gt;, I will show you how to install and set up the Apache SkyWalking application performance monitoring tool to monitor the deployed application in such a hybrid cloud environment as this. It helps to reduce management complexity and deliver operational insights for more informed business practices, and protect your most valuable user data.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Getting started with HPE GreenLake Compute Ops Management API in just 3mn]]></title><description><![CDATA[Apply the power of a Developer Community HPE GreenLake provides APIs for the different applications hosted on the HPE GreenLake Cloud…]]></description><link>https://developer.hpe.com/getting-started-with-the-hpe-greenlake-compute-ops-management-api-in-just-3mn/</link><guid isPermaLink="false">https://developer.hpe.com/getting-started-with-the-hpe-greenlake-compute-ops-management-api-in-just-3mn/</guid><pubDate>Wed, 04 Jan 2023 09:57:13 GMT</pubDate><content:encoded>&lt;style&gt;
ul li{
 font-size:25px;
}
&lt;/style&gt;
&lt;style&gt;
table {
    display: block;
    width: max-content !important;
    max-width: 100%;
    overflow: auto;
     -webkit-box-shadow: none;
    -moz-box-shadow: none;
    box-shadow: none;
    border:1px solid grey;
}
td {
   -webkit-box-shadow: none;
    -moz-box-shadow: none;
    box-shadow: none;
    border:1px solid grey;
    text-align: left !important;
     font-weight: normal !important;
    padding: 10px !important;
}
thead tr:first-child td {
  -webkit-box-shadow: none;
  -moz-box-shadow: none;
  box-shadow: none;
  border:1px solid grey;
  text-align: center !important;
  padding: 20px !important;
  font-weight: bold !important;
}
&lt;/style&gt;
&lt;h2&gt;Apply the power of a Developer Community&lt;/h2&gt;
&lt;p&gt;HPE GreenLake provides APIs for the different applications hosted on the HPE GreenLake Cloud Platform to do what can be done in the GUI through programmatic techniques. One of these applications is HPE GreenLake for Compute Ops Management (COM). HPE GreenLake for Compute Ops Management automates and transforms complex and time-consuming compute management operations into a simplified experience across edge-to-cloud. You can find its &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/compute-ops/public/&quot;&gt;API documentation&lt;/a&gt; on the &lt;a href=&quot;https://developer.greenlake.hpe.com/&quot;&gt;HPE GreenLake API portal&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;However, if you are time constrained and prefer to just poke around instead of browsing through pages of documentation, this blog post is for you. One of the benefits of working within a community is the ability to take advantage of open collaboration, sharing hints, tools, and resources. In order to discover the API capabilities more rapidly, you can use a (free) tool, such as Postman, and leverage some of the Postman collections already created by members of the HPE Developer Community. If you are not yet familiar with Postman, no problem! In this post, I will provide you with step-by-step instructions.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: You can follow these instructions with the Postman native application as well as the SaaS version of Postman. &lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Ready? Let’s start the stopwatch, now!&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/stopwatch1.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Step 1 [optional] - Get yourself a Postman account&lt;/h2&gt;
&lt;p&gt;If you don’t have a Postman account already, you can request a free one at: &lt;a href=&quot;https://identity.getpostman.com/signup&quot;&gt;https://identity.getpostman.com/signup&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Step 2 - Log into Postman&lt;/h2&gt;
&lt;p&gt;Log into your Postman account. From the search bar, look for the collection “Compute Ops Management” and, from the list, select the one from our Community contributor Lionel Jullien’s (&lt;a href=&quot;mailto:lio@hpe.com&quot;&gt;lio@hpe.com&lt;/a&gt;) public workspace.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: You can access it directly from &lt;a href=&quot;https://www.postman.com/jullienl/workspace/lionel-jullien-s-public-workspace/collection/991177-a2b4838f-3e9d-4047-b02f-f813b73a6724?ctx=documentation&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Step 3 - Fork the existing HPE GreenLake Compute Ops management collection&lt;/h2&gt;
&lt;p&gt;You can fork Lionel’s COM collection into your Postman workspace, much like a GitHub fork operation. This copies the content of the collection to your workspace while maintaining a link to the source, so you can pull in updates if Lionel updates his collection, or contribute back to it by opening a Pull Request (PR) if you have made changes. Once you have forked the collection, you can work on your own copy in your Postman workspace.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: You can also export the collection and import it into your own workspace.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Step 4 - Prepare your COM environment&lt;/h2&gt;
&lt;p&gt;The COM collection built by Lionel makes use of environment variables which can be grouped in a Postman environment. You will have to define and initialize these environment variables before using calls from the Postman collection. You can view Lionel’s Postman environment in his workspace, and either fork his environment or recreate it from scratch by creating a new environment in your workspace and setting up the following variables:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;ClientID&lt;/li&gt;
&lt;li&gt;ClientSecret&lt;/li&gt;
&lt;li&gt;url&lt;/li&gt;
&lt;/ul&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: Be careful, as Postman variables are case-sensitive. Also, you can set the type of variable ClientSecret to secret, so the value of the variable is only shown upon request.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Step 5 - Set your COM environment&lt;/h2&gt;
&lt;p&gt;You now need to initialize the 3 environment variables that you created in your Postman workspace. To do this, use the HPE GreenLake Cloud Platform Graphical User Interface (GUI). Please refer to &lt;a href=&quot;https://developer.hpe.com/blog/how-to-use-an-api-access-token-for-hpe-greenlake-for-compute-ops-management/&quot;&gt;this blog post&lt;/a&gt; to create API access credentials in your own HPE GreenLake for Compute Ops Management instance. You do not need to generate an access token from the GUI, as you will do that via the API in the next step.&lt;/p&gt;
&lt;p&gt;From the GUI, make note of your Client ID, your Client Secret, and the Connectivity Endpoint that corresponds to your region. Then, in your Postman environment, set values for the 3 variables:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Variable name&lt;/th&gt;
&lt;th&gt;Initial Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;ClientID&lt;/td&gt;
&lt;td&gt;Your client ID&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ClientSecret&lt;/td&gt;
&lt;td&gt;Your client Secret&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;url&lt;/td&gt;
&lt;td&gt;Your Connectivity Endpoint&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: Make sure that there isn’t any space or carriage return at the end of the variable values, and remember to save your environment.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Step 6 - Obtaining a session token&lt;/h2&gt;
&lt;p&gt;Open your Postman workspace and locate the COM collection you have forked (or imported). Now, generate a session token using the &lt;strong&gt;Create session&lt;/strong&gt; call from the &lt;strong&gt;0_session&lt;/strong&gt; folder.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note:  Before pressing Send, make sure that you have selected your environment in the upper right corner drop down. By default, it is set to &lt;strong&gt;No Environment&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Once the call returns, make sure you have a Status of 200 OK and check the JSON response body:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{

&quot;access_token&quot;: &quot;eyJhbGciOiJSUzI1NiIsImtpZCI6IlRFak8tZEJPbThxUDlqRUlxdVE5aXVKX09HTSIsInBpLmF0bSI6ImRlejAifQ.eyJjbGllbnRfaWQiOiJmYWE5ZDZjMi04MjdjLTQzMWYtYTI2My1kNGE1YzY5YjYwZjIiLCJpc3MiOiJodHRwczovL3Nzby5jb21tb24uY2xvdWQuaHBlLmNvbSIsImF1ZCI6ImV4dGVybmFsX2FwaSIsInN1YiI6ImNvbS5kZW1vdXNlckBnbWFpbC5jb20iLCJ1c2VyX2N0eCI6ImIwMDQxNjk0NDM5MjExZWM5MWQ2YzZhZGQyMDZhMjQ3IiwiYXV0aF9zb3VyY2UiOiJjY3NfdG9rZW5fbWFuYWdlbWVudCIsInBsYXRmb3JtX2N1c3RvbWVyX2lkIjoiMzQ2NTJmZjAzMTc3MTFlYzliYzA5Njg3MjU4MGZkNmQiLCJpYXQiOjE2NzEwOTM2ODUsImFwcGxpY2F0aW9uX2luc3RhbmNlX2lkIjoiNjM3ZjA5MzgtMTg4Mi00NzNiLTkyNDQtNGNkMDFkMWExNjY4IiwiZXhwIjoxNjcxMTAwODg1fQ.ajV-eV98TrdZPtQAIynOW_9zrF0HaZo_g8eBdoxjHldEvXlxyomcpT3ElI_Ke2AZAGKDDQB9zihy0bfplSgnMg1yBoH7r2Ih4PJXm-lprFQZPF9dApDvv39sPu-VJZU2RijCGPp5fDqzbcF-37zbCzdhihdYsAnQE3VcGd8xKRTkH8JZVe3Rg22_ndzZOqTR3SAeVcFGuI3PN3r2mJ5GyxlT8ckt_QsUHrxtYPEVwZnOwRFtrT7JB-Ht3vJB8wJJwXdyIadr64gunV3UusMjxMzYk4RpvdMjRuANhhBUAHxdA3Mmnq3MXtnSLu_hPGz2MUzoXO9IPMI9Csq3q5G8Rg&quot;,
&quot;token_type&quot;: &quot;Bearer&quot;,
&quot;expires_in&quot;: 7199
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This token is known as a Bearer token and is valid for 7200 seconds (120 minutes, or 2 hours). Once that period has elapsed, the token expires and needs to be regenerated by running the Create session call again.&lt;/p&gt;
&lt;p&gt;JavaScript code is executed at the end of the call to automatically capture the value of the session token and store it in a new environment variable (called &lt;strong&gt;COM_Token&lt;/strong&gt;) that is visible in your Postman environment. You can check this code in the Tests tab of the API call:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-js&quot;&gt;const responseJson = pm.response.json();

// Set environment variables

pm.environment.set(&quot;COM_Token&quot;, responseJson.access_token);

// Display in console. To see the console, go to View / Show Postman Console

console.log(&quot;COM Token:&quot;, pm.environment.get(&quot;COM_Token&quot;)); 

// Display in Test Results

pm.test(&quot;COM session token = &quot; + pm.environment.get(&quot;COM_Token&quot;));
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This code allows you to run any subsequent call from the collection using that saved token (until it expires).&lt;/p&gt;
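&lt;p&gt;If you ever want to reproduce the same flow outside Postman, here is a minimal curl sketch. It assumes the HPE GreenLake SSO token endpoint (https://sso.common.cloud.hpe.com/as/token.oauth2) and the OAuth2 client-credentials grant used behind the &lt;strong&gt;Create session&lt;/strong&gt; call; double-check the endpoint in the collection itself and substitute your own ClientID and ClientSecret values from Step 5.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;# Sketch only: request a session token with the client-credentials grant
# (token endpoint assumed; verify it against the Create session call in the collection)
$ curl -s -X POST &quot;https://sso.common.cloud.hpe.com/as/token.oauth2&quot; \
    -H &quot;Content-Type: application/x-www-form-urlencoded&quot; \
    -d &quot;grant_type=client_credentials&amp;#x26;client_id=YOUR_CLIENT_ID&amp;#x26;client_secret=YOUR_CLIENT_SECRET&quot; \
    | jq -r .access_token
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The access_token value returned here is exactly what the collection stores in the COM_Token variable.&lt;/p&gt;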
&lt;h2&gt;Step 7 - Collecting API versions&lt;/h2&gt;
&lt;p&gt;The COM API is an aggregation of multiple independent APIs. To help you find the latest version of each component, the collection provides a little “trick”: &lt;strong&gt;2- Get the resource API versions from the API reference&lt;/strong&gt; in the &lt;strong&gt;0_session&lt;/strong&gt; folder. Run this call to populate the API version of each component before using any other calls from the collection:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/apicalls.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: This is not using an API, but parsing the content of a documentation web page, so it might be subject to changes in the future.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Step 8 - Exploring the API&lt;/h2&gt;
&lt;p&gt;Now that you have a valid token and have identified the API endpoints, you are ready to explore the rest of the COM API using this handy collection. Pick the &lt;strong&gt;List of all ProLiant DL325 Gen10 Plus&lt;/strong&gt; call from the &lt;strong&gt;Servers&lt;/strong&gt; folder and hit Send. You will get a JSON response like the following (provided you have some of these servers in your environment).&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
    &quot;offset&quot;: 0,
    &quot;count&quot;: 1,
    &quot;total&quot;: 1,
    &quot;items&quot;: [
        {
            &quot;id&quot;: &quot;P18606-B21+CZJ11105MN&quot;,
            &quot;type&quot;: &quot;compute-ops/server&quot;,
            &quot;platformFamily&quot;: &quot;PROLIANT&quot;,
            &quot;resourceUri&quot;: &quot;/compute-ops/v1beta2/servers/P18606-B21+CZJ11105MN&quot;,
            &quot;name&quot;: &quot;Geneva-CIC-amd16&quot;,
            &quot;createdAt&quot;: &quot;2022-11-07T09:42:20.767907+00:00&quot;,
            &quot;updatedAt&quot;: &quot;2022-12-14T17:20:43.292718+00:00&quot;,
            &quot;generation&quot;: 27,
            &quot;hardware&quot;: {
                &quot;serialNumber&quot;: &quot;CZJ11105MN&quot;,
                &quot;model&quot;: &quot;ProLiant DL325 Gen10 Plus&quot;,
                &quot;uuid&quot;: &quot;36383150-3630-5A43-4A31-313130354D4E&quot;,
                &quot;productId&quot;: &quot;P18606-B21&quot;,
                &quot;powerState&quot;: &quot;ON&quot;,
                &quot;indicatorLed&quot;: &quot;LIT&quot;,
                &quot;health&quot;: {
                    &quot;summary&quot;: &quot;OK&quot;,
                    &quot;healthLED&quot;: &quot;OK&quot;,
                    &quot;fans&quot;: &quot;OK&quot;,
                    &quot;fanRedundancy&quot;: &quot;REDUNDANT&quot;,
                    &quot;liquidCooling&quot;: &quot;NOT_PRESENT&quot;,
                    &quot;liquidCoolingRedundancy&quot;: &quot;NOT_PRESENT&quot;,
                    &quot;memory&quot;: &quot;OK&quot;,
                    &quot;network&quot;: &quot;OK&quot;,
                    &quot;powerSupplies&quot;: &quot;OK&quot;,
                    &quot;powerSupplyRedundancy&quot;: &quot;REDUNDANT&quot;,
                    &quot;processor&quot;: &quot;OK&quot;,
                    &quot;storage&quot;: &quot;OK&quot;,
                    &quot;temperature&quot;: &quot;OK&quot;,
                    &quot;bios&quot;: &quot;OK&quot;,
                    &quot;smartStorage&quot;: &quot;OK&quot;
                },
                &quot;bmc&quot;: {
                    &quot;mac&quot;: &quot;B4:7A:F1:B0:4C:44&quot;,
                    &quot;ip&quot;: &quot;10.4.25.209&quot;,
                    &quot;hostname&quot;: &quot;hdp-sto-amd16-ilo.hpintelco.org&quot;
                }
            },
            &quot;state&quot;: {
                &quot;managed&quot;: true,
                &quot;connected&quot;: true,
                &quot;connectedModifiedAt&quot;: &quot;2022-12-14T17:20:43.333272+00:00&quot;,
                &quot;subscriptionState&quot;: &quot;SUBSCRIBED&quot;,
                &quot;subscriptionTier&quot;: &quot;Enhanced&quot;,
                &quot;subscriptionExpiresAt&quot;: &quot;2027-04-21T17:55:00+00:00&quot;
            },
            &quot;firmwareInventory&quot;: [
                {
                    &quot;name&quot;: &quot;iLO 5&quot;,
                    &quot;version&quot;: &quot;2.70 May 16 2022&quot;,
                    &quot;deviceContext&quot;: &quot;System Board&quot;
                },
                {
                    &quot;name&quot;: &quot;System ROM&quot;,
                    &quot;version&quot;: &quot;A43 v2.56 (02/10/2022)&quot;,
                    &quot;deviceContext&quot;: &quot;System Board&quot;
                },
                {
                    &quot;name&quot;: &quot;Intelligent Platform Abstraction Data&quot;,
                    &quot;version&quot;: &quot;10.1.0 Build 37&quot;,
                    &quot;deviceContext&quot;: &quot;System Board&quot;
                },
                {
                    &quot;name&quot;: &quot;System Programmable Logic Device&quot;,
                    &quot;version&quot;: &quot;0x11&quot;,
                    &quot;deviceContext&quot;: &quot;System Board&quot;
                },
                {
                    &quot;name&quot;: &quot;Power Management Controller Firmware&quot;,
                    &quot;version&quot;: &quot;1.0.8&quot;,
                    &quot;deviceContext&quot;: &quot;System Board&quot;
                },
                {
                    &quot;name&quot;: &quot;Power Supply Firmware&quot;,
                    &quot;version&quot;: &quot;1.00&quot;,
                    &quot;deviceContext&quot;: &quot;Bay 1&quot;
                },
                {
                    &quot;name&quot;: &quot;Power Supply Firmware&quot;,
                    &quot;version&quot;: &quot;1.00&quot;,
                    &quot;deviceContext&quot;: &quot;Bay 2&quot;
                },
                {
                    &quot;name&quot;: &quot;Redundant System ROM&quot;,
                    &quot;version&quot;: &quot;A43 v2.54 (12/03/2021)&quot;,
                    &quot;deviceContext&quot;: &quot;System Board&quot;
                },
                {
                    &quot;name&quot;: &quot;Intelligent Provisioning&quot;,
                    &quot;version&quot;: &quot;3.52.37&quot;,
                    &quot;deviceContext&quot;: &quot;System Board&quot;
                },
                {
                    &quot;name&quot;: &quot;Power Management Controller FW Bootloader&quot;,
                    &quot;version&quot;: &quot;1.1&quot;,
                    &quot;deviceContext&quot;: &quot;System Board&quot;
                },
                {
                    &quot;name&quot;: &quot;HPE Smart Storage Energy Pack 1 Firmware&quot;,
                    &quot;version&quot;: &quot;0.70&quot;,
                    &quot;deviceContext&quot;: &quot;Embedded Device&quot;
                },
                {
                    &quot;name&quot;: &quot;Mellanox Network Adapter - 88:E9:A4:02:8C:30&quot;,
                    &quot;version&quot;: &quot;16.32.10.10&quot;,
                    &quot;deviceContext&quot;: &quot;PCI-E Slot 1&quot;
                },
                {
                    &quot;name&quot;: &quot;HPE SN1600Q 32Gb 2p FC HBA&quot;,
                    &quot;version&quot;: &quot;1.75.07&quot;,
                    &quot;deviceContext&quot;: &quot;PCI-E Slot 2&quot;
                },
                {
                    &quot;name&quot;: &quot;HPE Smart Array P408i-a SR Gen10&quot;,
                    &quot;version&quot;: &quot;5.00&quot;,
                    &quot;deviceContext&quot;: &quot;Storage Slot 12&quot;
                },
                {
                    &quot;name&quot;: &quot;10/25Gb 2-port SFP28 BCM57414 OCP3 Adapter&quot;,
                    &quot;version&quot;: &quot;219.0.144.0&quot;,
                    &quot;deviceContext&quot;: &quot;OCP 3.0 Slot 10&quot;
                },
                {
                    &quot;name&quot;: &quot;Embedded Video Controller&quot;,
                    &quot;version&quot;: &quot;2.5&quot;,
                    &quot;deviceContext&quot;: &quot;Embedded Device&quot;
                },
                {
                    &quot;name&quot;: &quot;480GB 6G SATA SSD&quot;,
                    &quot;version&quot;: &quot;HPG1&quot;,
                    &quot;deviceContext&quot;: &quot;Slot=12:Port=1I:Box=1:Bay=1&quot;
                },
                {
                    &quot;name&quot;: &quot;480GB 6G SATA SSD&quot;,
                    &quot;version&quot;: &quot;HPG1&quot;,
                    &quot;deviceContext&quot;: &quot;Slot=12:Port=1I:Box=1:Bay=2&quot;
                }
            ],
            &quot;softwareInventory&quot;: [],
            &quot;lastFirmwareUpdate&quot;: null,
            &quot;host&quot;: {
                &quot;osName&quot;: &quot;None&quot;,
                &quot;osVersion&quot;: &quot;None&quot;,
                &quot;hostname&quot;: &quot;Geneva-CIC-amd16&quot;,
                &quot;osType&quot;: null,
                &quot;osDescription&quot;: &quot;None&quot;
            },
            &quot;firmwareBundleUri&quot;: null,
            &quot;tags&quot;: {},
            &quot;biosFamily&quot;: &quot;A43&quot;,
            &quot;processorVendor&quot;: &quot;AMD EPYC 7352 24-Core Processor                &quot;,
            &quot;autoIloFwUpdate&quot;: true,
            &quot;serverGeneration&quot;: &quot;GEN_10&quot;
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That’s it. No more than 3 minutes!&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/stopwatch2.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;If you have more time, feel free to explore the rest of the collection on your own. Also, don’t hesitate to provide Lionel with feedback on his very convenient collection.&lt;/p&gt;
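&lt;p&gt;And if you would rather script the Step 8 query from a terminal, here is a minimal curl sketch. The /compute-ops/v1beta2/servers path comes from the resourceUri values in the response above; the filter expression is an assumption based on the OData-style filtering described in the COM API documentation, so adjust it to your needs.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;# Sketch only: list servers using the Bearer token from Step 6
# $url stands for your Connectivity Endpoint, $COM_Token for the saved session token
$ curl -s -G &quot;$url/compute-ops/v1beta2/servers&quot; \
    --data-urlencode &quot;filter=hardware/model eq &apos;ProLiant DL325 Gen10 Plus&apos;&quot; \
    -H &quot;Authorization: Bearer $COM_Token&quot; | jq &apos;.items[].name&apos;
&lt;/code&gt;&lt;/pre&gt;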
&lt;p&gt;Any questions on HPE GreenLake for Compute Ops Management? Please join the HPE Developer Slack Workspace (&lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;https://slack.hpedev.io/&lt;/a&gt;) and start a discussion in our &lt;a href=&quot;https://hpedev.slack.com/archives/C03QTQWC213&quot;&gt;#hpe-greenlake-compute-ops-management &lt;/a&gt;channel.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Learning through doing]]></title><link>https://developer.hpe.com/2022-December-05/</link><guid isPermaLink="false">https://developer.hpe.com/2022-December-05/</guid><pubDate>Mon, 05 Dec 2022 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[A guide to deploying MongoDB applications using HPE GreenLake for Private Cloud Enterprise]]></title><description><![CDATA[Editor’s Note – NAME CHANGE: HPE GreenLake for Containers is now part of HPE GreenLake for Private Cloud Enterprise. Introduction In this…]]></description><link>https://developer.hpe.com/deploying-mongodb-application-on-hpe-greenlake-for-containers/</link><guid isPermaLink="false">https://developer.hpe.com/deploying-mongodb-application-on-hpe-greenlake-for-containers/</guid><pubDate>Thu, 01 Dec 2022 15:59:42 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note – NAME CHANGE: HPE GreenLake for Containers is now part of HPE GreenLake for Private Cloud Enterprise.&lt;/strong&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;In this blog post, we demonstrate how an end user can deploy a containerized, stateful MongoDB application on the Kubernetes-based container stack provided by HPE GreenLake for Containers and then access it over an external network or the internet. In this scenario, the Kubernetes cluster is configured with the HPE CSI driver, along with the default storage class.&lt;/p&gt;
&lt;h2&gt;An overview of HPE GreenLake for Containers&lt;/h2&gt;
&lt;p&gt;HPE GreenLake for Containers provides a pay-as-you-go, Kubernetes-based container-optimized stack that is delivered as a service to help you operationalize your containers at scale.&lt;/p&gt;
&lt;p&gt;HPE GreenLake for Containers provides a standardized approach to cluster creation using cluster blueprints, with user roles and role-based access controls, defined in the HPE GreenLake for Containers interface, governing access to the created clusters and applications. It also provides you with a dashboard view that displays the status of all Kubernetes services and resource utilization across all clusters.&lt;/p&gt;
&lt;p&gt;HPE manages your environment and will contact you before any upgrades are made to let you know the planned date and time, along with any pre-upgrade requirements.&lt;/p&gt;
&lt;p&gt;The HPE GreenLake for Containers service:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Is an HPE-designed, implemented, owned, and operated private cloud that is deployed at a customer site&lt;/li&gt;
&lt;li&gt;Is offered as a consumption-based service that allows customers to better align costs to outcomes&lt;/li&gt;
&lt;li&gt;Supports Kubernetes on VMware vSphere&lt;/li&gt;
&lt;li&gt;Supports HPE Nimble and Alletra Storage arrays, which provide persistent storage for containerized workloads&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You can launch HPE GreenLake for Containers using the HPE GreenLake for Private Cloud Enterprise card on the HPE GreenLake Central Dashboard. From the Private Cloud Enterprise main page, click Containers to create clusters and blueprints, view details about existing clusters, and launch HPE Ezmeral Runtime Enterprise (Containers page).&lt;/p&gt;
&lt;h2&gt;HPE GreenLake for Containers: Machine blueprint layout for Kubernetes cluster node(s)&lt;/h2&gt;
&lt;p&gt;HPE GreenLake for Containers uses machine blueprints to define the infrastructure details for the worker nodes used in a cluster.&lt;/p&gt;
&lt;p&gt;Predefined blueprints are provided when the service is provisioned, and you can create your own custom machine blueprints.&lt;/p&gt;
&lt;p&gt;A machine blueprint includes the:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Machine provider&lt;/li&gt;
&lt;li&gt;Operating system image and version&lt;/li&gt;
&lt;li&gt;Number of vCPU cores and amount of memory in the node&lt;/li&gt;
&lt;li&gt;Compute instance types&lt;/li&gt;
&lt;li&gt;Storage instance types&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/image-1.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image-2.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image-3.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;HPE GreenLake for Containers: Cluster blueprint layout for Kubernetes cluster&lt;/h2&gt;
&lt;p&gt;HPE GreenLake for Containers uses cluster blueprints to define the cluster layout and other infrastructure details used to create a cluster.&lt;/p&gt;
&lt;p&gt;Predefined blueprints are provided when the service is provisioned. You can copy and modify the predefined blueprints or create your own custom cluster blueprints.&lt;/p&gt;
&lt;p&gt;A cluster blueprint includes the:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Cluster provider&lt;/li&gt;
&lt;li&gt;Version of Kubernetes to deploy on the cluster&lt;/li&gt;
&lt;li&gt;Default storage class&lt;/li&gt;
&lt;li&gt;Control plane nodes and worker nodes&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/image-4.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image-5.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;HPE GreenLake for Containers: MongoDB application deployment on Kubernetes cluster&lt;/h2&gt;
&lt;p&gt;MongoDB is an open source, NoSQL database that provides support for JSON-styled, document-oriented storage systems. It supports a flexible data model that enables you to store data of any structure, and provides a rich set of features, including full index support, sharding, and replication.&lt;/p&gt;
&lt;p&gt;Below is the preferred cluster configuration for a MongoDB in-memory database workload.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image-6.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image-7.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Step-1: Create a Kubernetes cluster from containers page&lt;/h3&gt;
&lt;p&gt;To create a cluster, you must have been assigned the roles of &lt;strong&gt;Private Cloud Cluster Owner&lt;/strong&gt; and &lt;strong&gt;Private Cloud Widget Viewer&lt;/strong&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;From the Containers main page, under the Clusters tab, click Create Cluster.&lt;/li&gt;
&lt;li&gt;In the Create Cluster form, provide the cluster name &apos;hpe&apos;, and select the standard cluster blueprint. The new cluster appears in the list of clusters.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/image-8.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image-9.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;As indicated above, there are multiple clusters deployed in parallel for multiple purposes. For the MongoDB application deployment in our example, the cluster will be created with the name &quot;hpe&quot;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image-10.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Step-2: Download scoped kubeconfig from Container platform page&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;From the Clusters tab, select the &apos;hpe&apos; Kubernetes cluster and click &lt;strong&gt;Launch Service Console&lt;/strong&gt;. This will direct you to the container platform page.&lt;/li&gt;
&lt;li&gt;Click on Download kubeconfig.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/image-11.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image-12.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Note: Launching HPE Ezmeral Runtime Enterprise from HPE GreenLake Central is configured through SAML SSO and adds a session token to the kubeconfig file. You will need to download the kubeconfig file again if you want to continue to access the cluster when the session token expires after an hour.&lt;/p&gt;
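&lt;p&gt;Once the kubeconfig file is downloaded, point kubectl at it before running the commands in the following steps. A minimal sketch (the file name and path below are only examples; use wherever your browser saved the download):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;# Use the downloaded, scoped kubeconfig for this session (path is an example)
$ export KUBECONFIG=$HOME/Downloads/kubeconfig-hpe.yaml

# Quick sanity check that the embedded session token is still valid
$ kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;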
&lt;h3&gt;Step-3: View the &apos;hpe&apos; Kubernetes cluster environment details&lt;/h3&gt;
&lt;p&gt;Get Kubernetes &lt;strong&gt;cluster version.&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;$ kubectl version --short
Client Version: v1.20.0
Server Version: v1.20.11-hpe-2
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Get Kubernetes &lt;strong&gt;cluster nodes.&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;$ kubectl get nodes -o wide
NAME                                        STATUS   ROLES                  AGE   VERSION          INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                              KERNEL-VERSION                 CONTAINER-RUNTIME
k8s-hpe-master-d6xv8-254v7.glhc-hpe.local   Ready    control-plane,master   71d   v1.20.11-hpe-2   172.16.17.115   &amp;#x3C;none&gt;        SUSE Linux Enterprise Server 15 SP2   5.3.18-150200.24.115-default   containerd://1.5.1-hpe-1
k8s-hpe-master-d6xv8-8fxxz.glhc-hpe.local   Ready    control-plane,master   71d   v1.20.11-hpe-2   172.16.17.110   &amp;#x3C;none&gt;        SUSE Linux Enterprise Server 15 SP2   5.3.18-150200.24.115-default   containerd://1.5.1-hpe-1
k8s-hpe-master-d6xv8-jjrpc.glhc-hpe.local   Ready    control-plane,master   71d   v1.20.11-hpe-2   172.16.17.114   &amp;#x3C;none&gt;        SUSE Linux Enterprise Server 15 SP2   5.3.18-150200.24.115-default   containerd://1.5.1-hpe-1
k8s-hpe-worker-qscr4-89n67.glhc-hpe.local   Ready    worker                 71d   v1.20.11-hpe-2   172.16.17.109   &amp;#x3C;none&gt;        SUSE Linux Enterprise Server 15 SP2   5.3.18-150200.24.115-default   containerd://1.5.1-hpe-1
k8s-hpe-worker-qscr4-fp8px.glhc-hpe.local   Ready    worker                 71d   v1.20.11-hpe-2   172.16.17.116   &amp;#x3C;none&gt;        SUSE Linux Enterprise Server 15 SP2   5.3.18-150200.24.115-default   containerd://1.5.1-hpe-1
k8s-hpe-worker-qscr4-l95j4.glhc-hpe.local   Ready    worker                 71d   v1.20.11-hpe-2   172.16.17.113   &amp;#x3C;none&gt;        SUSE Linux Enterprise Server 15 SP2   5.3.18-150200.24.115-default   containerd://1.5.1-hpe-1
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Get Kubernetes cluster &lt;strong&gt;default storage class.&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;$ kubectl get sc
NAME                              PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gl-sbc-glhcnimblestor (default)   csi.hpe.com                    Delete          Immediate              true                   69d
gl-sbc-hpe                        csi.hpe.com                    Delete          Immediate              true                   69d
gl-sbp-glhcnimblestor             csi.hpe.com                    Delete          Immediate              true                   69d
hpe-hdd-storage                   kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  69d
hpe-nvme-storage                  kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  69d
hpe-ssd-storage                   kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  69d
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Get all resources available under the &lt;strong&gt;hpe-storage&lt;/strong&gt; namespace, i.e. the HPE CSI driver and snapshot controller.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;$ kubectl get all -n hpe-storage
NAME                                       READY   STATUS    RESTARTS   AGE
pod/hpe-csi-controller-7c6f876494-vrd49    9/9     Running   0          69d
pod/hpe-csi-node-cpmkg                     2/2     Running   0          69d
pod/hpe-csi-node-m2f75                     2/2     Running   0          69d
pod/hpe-csi-node-m9mj9                     2/2     Running   0          69d
pod/nimble-csp-db7c7bb65-c5wrk             1/1     Running   0          69d
pod/primera3par-csp-6f999b8d76-wtd4n       1/1     Running   0          69d
pod/snapshot-controller-0                  1/1     Running   0          69d
pod/snapshot-controller-64b98b668f-rdfwr   1/1     Running   0          32d
pod/snapshot-controller-64b98b668f-tdzz9   1/1     Running   0          32d

NAME                          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/alletra6000-csp-svc   ClusterIP   10.109.228.208   &amp;#x3C;none&gt;        8080/TCP   69d
service/alletra9000-csp-svc   ClusterIP   10.107.227.232   &amp;#x3C;none&gt;        8080/TCP   69d
service/nimble-csp-svc        ClusterIP   10.102.45.17     &amp;#x3C;none&gt;        8080/TCP   69d
service/primera3par-csp-svc   ClusterIP   10.104.57.79     &amp;#x3C;none&gt;        8080/TCP   69d

NAME                          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/hpe-csi-node   3         3         3       3            3           &amp;#x3C;none&gt;          69d

NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/hpe-csi-controller    1/1     1            1           69d
deployment.apps/nimble-csp            1/1     1            1           69d
deployment.apps/primera3par-csp       1/1     1            1           69d
deployment.apps/snapshot-controller   2/2     2            2           32d

NAME                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/hpe-csi-controller-7c6f876494    1         1         1       69d
replicaset.apps/nimble-csp-db7c7bb65             1         1         1       69d
replicaset.apps/primera3par-csp-6f999b8d76       1         1         1       69d
replicaset.apps/snapshot-controller-64b98b668f   2         2         2       32d

NAME                                   READY   AGE
statefulset.apps/snapshot-controller   1/1     69d
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Step-4: Create a namespace on &apos;hpe&apos; Kubernetes cluster for MongoDB deployment&lt;/h3&gt;
&lt;p&gt;Create a namespace named &lt;strong&gt;mongo&lt;/strong&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;$ kubectl create ns mongo
namespace/mongo created

$ kubectl get ns
NAME                      STATUS   AGE
argocd                    Active   71d
default                   Active   71d
ezctl                     Active   71d
gatekeeper-system         Active   71d
hpe                       Active   71d
hpe-externalclusterinfo   Active   71d
hpe-ldap                  Active   71d
hpe-logzio                Active   71d
hpe-metering              Active   71d
hpe-nodesvc               Active   71d
hpe-operations            Active   50d
hpe-secure                Active   71d
hpe-security              Active   71d
hpe-snow                  Active   71d
hpe-storage               Active   71d
hpe-system                Active   71d
hpe-templates-compute     Active   71d
hpecp                     Active   71d
hpecp-bootstrap           Active   71d
hpecp-falco               Active   71d
kd-apps                   Active   71d
kd-mlops                  Active   71d
kube-node-lease           Active   71d
kube-public               Active   71d
kube-system               Active   71d
kubernetes-dashboard      Active   71d
mongo                     Active   6s
opsramp-agent             Active   71d
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Switch the current context to the new &lt;strong&gt;mongo&lt;/strong&gt; namespace.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;$ kubectl config set-context --current --namespace=mongo
Context &quot;caas-dev-3-hpe-hpe-ashish-kumar@hpe.com&quot; modified.

$ kubectl config view --minify --output &apos;jsonpath={..namespace}&apos;; echo
mongo

$ kubectl get pods
No resources found in mongo namespace.
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Step-5: Deploy MongoDB&lt;/h3&gt;
&lt;p&gt;Deploy the MongoDB application using the YAML file &lt;strong&gt;services/mongodb/install-mongo.yaml&lt;/strong&gt; from &lt;strong&gt;&lt;a href=&quot;https://github.com/cxteamtrials/caas-trials-content&quot;&gt;https://github.com/cxteamtrials/caas-trials-content&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;$ kubectl create -f install-mongo.yaml
service/mongo created
statefulset.apps/mongo created

$ cat install-mongo.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
    hpecp.hpe.com/hpecp-internal-gateway: &quot;true&quot; # Expose the service on ERE Gateway
spec:
  ports:
  - protocol: TCP
    port: 27017
    targetPort: 27017
  selector:
    role: mongo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      app: mongo
  serviceName: &quot;mongo&quot;
  # Number of initial MongoDB pods to deploy - default is 1 if not set
  replicas: 3
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: mongo
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo
          env:
          - name: POD_IP_ADDRESS
            valueFrom:
              fieldRef:
                fieldPath: status.podIP
          command:
          - &quot;mongod&quot;
          - &quot;--bind_ip_all&quot;
          - &quot;--replSet&quot;
          - rs0
          - &quot;--oplogSize&quot;
          - &quot;128&quot;
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
        - name: mongo-sidecar
          image: cvallance/mongo-k8s-sidecar
          env:
            - name: MONGO_SIDECAR_POD_LABELS
              value: &quot;app=mongo&quot;
  volumeClaimTemplates:
    - metadata:
        name: mongo-persistent-storage
      spec:
        accessModes: [ &quot;ReadWriteOnce&quot; ]
        storageClassName: gl-sbc-glhcnimblestor
        resources:
          requests:
            storage: 50Gi
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Step-6: Validate MongoDB deployment&lt;/h3&gt;
&lt;p&gt;Get the MongoDB-related Kubernetes resources (pod, service, PVC) and validate the deployment.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;$ kubectl get pods -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP             NODE                                        NOMINATED NODE   READINESS GATES
mongo-0   2/2     Running   0          18h   10.192.3.58    k8s-hpe-worker-qscr4-89n67.glhc-hpe.local   &amp;#x3C;none&gt;           &amp;#x3C;none&gt;
mongo-1   2/2     Running   0          18h   10.192.3.59    k8s-hpe-worker-qscr4-89n67.glhc-hpe.local   &amp;#x3C;none&gt;           &amp;#x3C;none&gt;
mongo-2   2/2     Running   0          18h   10.192.4.208   k8s-hpe-worker-qscr4-l95j4.glhc-hpe.local   &amp;#x3C;none&gt;           &amp;#x3C;none&gt;

$ kubectl get all
NAME          READY   STATUS    RESTARTS   AGE
pod/mongo-0   2/2     Running   0          18h
pod/mongo-1   2/2     Running   0          18h
pod/mongo-2   2/2     Running   0          18h

NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE
service/mongo     ClusterIP   10.96.180.195   &amp;#x3C;none&gt;        27017/TCP         18h

NAME                     READY   AGE
statefulset.apps/mongo   3/3     18h

$ kubectl get svc
NAME      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE
mongo     ClusterIP   10.96.180.195   &amp;#x3C;none&gt;        27017/TCP         18h

$ kubectl describe svc mongo
Name:              mongo
Namespace:         mongo
Labels:            hpecp.hpe.com/hpecp-internal-gateway=true
                   name=mongo
Annotations:       &amp;#x3C;none&gt;
Selector:          role=mongo
Type:              ClusterIP
IP Families:       &amp;#x3C;none&gt;
IP:                10.96.180.195
IPs:               10.96.180.195
Port:              &amp;#x3C;unset&gt;  27017/TCP
TargetPort:        27017/TCP
Endpoints:         &amp;#x3C;none&gt;
Session Affinity:  None
Events:            &amp;#x3C;none&gt;

$ kubectl get pvc
NAME                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS            AGE
mongo-persistent-storage-mongo-0   Bound    pvc-bc7faca6-6cd6-4796-acc1-864716c98f86   50Gi       RWO            gl-sbc-glhcnimblestor   18h
mongo-persistent-storage-mongo-1   Bound    pvc-1ef8ef09-c858-4862-989c-d246308518b4   50Gi       RWO            gl-sbc-glhcnimblestor   18h
mongo-persistent-storage-mongo-2   Bound    pvc-12044e23-8535-4b3e-bc02-a99819b9753b   50Gi       RWO            gl-sbc-glhcnimblestor   18h

$ kubectl describe pvc mongo-persistent-storage-mongo-0
Name:          mongo-persistent-storage-mongo-0
Namespace:     mongo
StorageClass:  gl-sbc-glhcnimblestor
Status:        Bound
Volume:        pvc-bc7faca6-6cd6-4796-acc1-864716c98f86
Labels:        app=mongo
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: csi.hpe.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      50Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       mongo-0
Events:        &amp;#x3C;none&gt;

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                                                                                 STORAGECLASS            REASON   AGE
pvc-12044e23-8535-4b3e-bc02-a99819b9753b   50Gi       RWO            Delete           Bound    mongo/mongo-persistent-storage-mongo-2                                                                                gl-sbc-glhcnimblestor            19h
pvc-193a0dbf-79ed-43da-864d-6dfc8673b502   1Gi        RWO            Delete           Bound    hpe-metering/prometheus-managed-kube-prometheus-st-prometheus-db-prometheus-managed-kube-prometheus-st-prometheus-0   gl-sbc-hpe                       71d
pvc-1ef8ef09-c858-4862-989c-d246308518b4   50Gi       RWO            Delete           Bound    mongo/mongo-persistent-storage-mongo-1                                                                                gl-sbc-glhcnimblestor            19h
pvc-4dabe263-101d-4fcd-be49-6c6a7a97d51a   500Gi      RWO            Delete           Bound    ashish-mongo-deploy/nimblepod-500                                                                                     gl-sbc-hpe                       66d
pvc-68265149-eaf7-4239-982d-917f34f704c4   100Gi      RWO            Delete           Bound    ashish-mongo-deploy/nimblepod-100                                                                                     gl-sbc-hpe                       66d
pvc-76062c79-819d-4b6e-af6b-207df5e008ee   100Gi      RWO            Delete           Bound    ashish-mongo-deploy/ashish-mongo-persistent-storage-mongo-1                                                           gl-sbc-hpe                       66d
pvc-86360549-ac45-4ab7-b01e-426c8d3b0d33   5Gi        RWO            Delete           Bound    ashish-hpe-mongo-deploy/ashish-test-pvc                                                                               gl-sbc-hpe                       2d20h
pvc-8a565d78-a0f3-4db1-b577-c88699c3aa41   100Gi      RWO            Delete           Bound    ashish-mongo-deploy/ashish-mongo-persistent-storage-mongo-2                                                           gl-sbc-hpe                       66d
pvc-96b24e79-9c05-4c6b-8fcf-754d70dad152   30Gi       RWO            Delete           Bound    ashish-hpe-mongo-deploy/ashish-test-pvc2                                                                              gl-sbc-hpe                       47h
pvc-a8e578e8-a213-4607-912e-f89da2203cfd   100Gi      RWO            Delete           Bound    ashish-mongo-deploy/ashish-mongo-persistent-storage-mongo-0                                                           gl-sbc-hpe                       66d
pvc-aced3b34-1d46-427a-a315-a90bdb687561   2Gi        RWO            Delete           Bound    hpe-metering/metering-agent                                                                                           gl-sbc-hpe                       51d
pvc-b4f5173e-ad34-4811-8722-c5f1ced1cafd   200Gi      RWO            Delete           Bound    ashish-mongo-deploy/nimblepod-200                                                                                     gl-sbc-hpe                       66d
pvc-bc7faca6-6cd6-4796-acc1-864716c98f86   50Gi       RWO            Delete           Bound    mongo/mongo-persistent-storage-mongo-0                                                                                gl-sbc-glhcnimblestor            19h
pvc-e27f5ae0-0f02-4690-b24a-2c65f9c51b1f   1000Gi     RWO            Delete           Bound    ashish-mongo-deploy/nimblepod-1000                                                                                    gl-sbc-hpe                       66d
pvc-e800beac-56c4-46df-8bf7-4374e311133e   30Gi       RWO            Delete           Bound    ashish-hpe-mongo-deploy/ashish-mongo-persistent-storage-mongo-2                                                       gl-sbc-hpe                       47h
pvc-e8aed33b-dc99-41bc-b301-2e2d3ca81b8d   30Gi       RWO            Delete           Bound    ashish-hpe-mongo-deploy/ashish-mongo-persistent-storage-mongo-0                                                       gl-sbc-hpe                       47h
pvc-ee3146e2-ad77-4418-93b8-a94cd30e63ad   10Gi       RWO            Delete           Bound    ashish-mongo-deploy/nimblepod-10                                                                                      gl-sbc-hpe                       66d
pvc-eee0e074-3284-40f5-914e-88842899efc2   5Gi        RWO            Delete           Bound    ashish-mongo-deploy/ashish-test-pvc                                                                                   gl-sbc-hpe                       69d
pvc-f42472d1-f8f2-4ac4-8955-c37664d7cf8b   30Gi       RWO            Delete           Bound    ashish-hpe-mongo-deploy/ashish-mongo-persistent-storage-mongo-1                                                       gl-sbc-hpe                       47h


$ kubectl describe pv pvc-bc7faca6-6cd6-4796-acc1-864716c98f86
Name:            pvc-bc7faca6-6cd6-4796-acc1-864716c98f86
Labels:          &amp;#x3C;none&gt;
Annotations:     pv.kubernetes.io/provisioned-by: csi.hpe.com
Finalizers:      [kubernetes.io/pv-protection external-attacher/csi-hpe-com]
StorageClass:    gl-sbc-glhcnimblestor
Status:          Bound
Claim:           mongo/mongo-persistent-storage-mongo-0
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        50Gi
Node Affinity:   &amp;#x3C;none&gt;
Message:
Source:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            csi.hpe.com
    FSType:            xfs
    VolumeHandle:      067ac428a6431c485b00000000000000000000044d
    ReadOnly:          false
    VolumeAttributes:      accessProtocol=iscsi
                           allowOverrides=nfsResources,nfsNamespace
                           cloneOf=
                           createSnapshot=false
                           dedupeEnabled=true
                           description=hpe
                           destroyOnDelete=true
                           encrypted=false
                           folder=caas-pvs
                           fsType=xfs
                           hostEncryption=false
                           limitIops=-1
                           limitMbps=-1
                           performancePolicy=default
                           pool=default
                           storage.kubernetes.io/csiProvisionerIdentity=1663147353961-8081-csi.hpe.com
                           syncOnDetach=false
                           targetScope=volume
                           thick=false
                           volumeAccessMode=mount
Events:                &amp;#x3C;none&gt; 
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Step-7: Configure MongoDB primary and secondary replica&lt;/h3&gt;
&lt;p&gt;Get the IP address of each mongo pod.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;$ kubectl get pods -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP             NODE                                        NOMINATED NODE   READINESS GATES
mongo-0   2/2     Running   0          44m   10.192.3.58    k8s-hpe-worker-qscr4-89n67.glhc-hpe.local   &amp;#x3C;none&gt;           &amp;#x3C;none&gt;
mongo-1   2/2     Running   0          44m   10.192.3.59    k8s-hpe-worker-qscr4-89n67.glhc-hpe.local   &amp;#x3C;none&gt;           &amp;#x3C;none&gt;
mongo-2   2/2     Running   0          44m   10.192.4.208   k8s-hpe-worker-qscr4-l95j4.glhc-hpe.local   &amp;#x3C;none&gt;           &amp;#x3C;none&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Expose each mongo pod as a NodePort service.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;kubectl expose pod/mongo-0 --type=&quot;NodePort&quot; --port 27017
kubectl expose pod/mongo-1 --type=&quot;NodePort&quot; --port 27017
kubectl expose pod/mongo-2 --type=&quot;NodePort&quot; --port 27017
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Describe each mongo service to get the gateway port on which it has been exposed over the ERE Gateway.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;$ kubectl get svc
NAME      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE
mongo     ClusterIP   10.96.180.195   &amp;#x3C;none&gt;        27017/TCP         54m
mongo-0   NodePort    10.97.222.217   &amp;#x3C;none&gt;        27017:30946/TCP   23m
mongo-1   NodePort    10.98.166.198   &amp;#x3C;none&gt;        27017:32648/TCP   19m
mongo-2   NodePort    10.101.219.63   &amp;#x3C;none&gt;        27017:31216/TCP   20m

$ kubectl describe svc mongo-0
Name:                     mongo-0
Namespace:                mongo
Labels:                   app=mongo
                          controller-revision-hash=mongo-7648bd99c8
                          hpecp.hpe.com/hpecp-internal-gateway=true
                          statefulset.kubernetes.io/pod-name=mongo-0
Annotations:              hpecp-internal-gateway/27017: epicgw.customer.hpe.net:10030
Selector:                 app=mongo,controller-revision-hash=mongo-7648bd99c8,statefulset.kubernetes.io/pod-name=mongo-0
Type:                     NodePort
IP Families:              &amp;#x3C;none&gt;
IP:                       10.97.222.217
IPs:                      10.97.222.217
Port:                     &amp;#x3C;unset&gt;  27017/TCP
TargetPort:               27017/TCP
NodePort:                 &amp;#x3C;unset&gt;  30946/TCP
Endpoints:                10.192.3.58:27017
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason  Age   From         Message
  ----    ------  ----  ----         -------
  Normal  HpeCp   25m   hpecp-agent  Created HPECP K8S service

$ kubectl describe svc mongo-1
Name:                     mongo-1
Namespace:                mongo
Labels:                   app=mongo
                          controller-revision-hash=mongo-7648bd99c8
                          hpecp.hpe.com/hpecp-internal-gateway=true
                          statefulset.kubernetes.io/pod-name=mongo-1
Annotations:              hpecp-internal-gateway/27017: epicgw.customer.hpe.net:10031
Selector:                 app=mongo,controller-revision-hash=mongo-7648bd99c8,statefulset.kubernetes.io/pod-name=mongo-1
Type:                     NodePort
IP Families:              &amp;#x3C;none&gt;
IP:                       10.98.166.198
IPs:                      10.98.166.198
Port:                     &amp;#x3C;unset&gt;  27017/TCP
TargetPort:               27017/TCP
NodePort:                 &amp;#x3C;unset&gt;  32648/TCP
Endpoints:                10.192.3.59:27017
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason  Age   From         Message
  ----    ------  ----  ----         -------
  Normal  HpeCp   22m   hpecp-agent  Created HPECP K8S service

$ kubectl describe svc mongo-2
Name:                     mongo-2
Namespace:                mongo
Labels:                   app=mongo
                          controller-revision-hash=mongo-7648bd99c8
                          hpecp.hpe.com/hpecp-internal-gateway=true
                          statefulset.kubernetes.io/pod-name=mongo-2
Annotations:              hpecp-internal-gateway/27017: epicgw.customer.hpe.net:10035
Selector:                 app=mongo,controller-revision-hash=mongo-7648bd99c8,statefulset.kubernetes.io/pod-name=mongo-2
Type:                     NodePort
IP Families:              &amp;#x3C;none&gt;
IP:                       10.101.219.63
IPs:                      10.101.219.63
Port:                     &amp;#x3C;unset&gt;  27017/TCP
TargetPort:               27017/TCP
NodePort:                 &amp;#x3C;unset&gt;  31216/TCP
Endpoints:                10.192.4.208:27017
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason  Age   From         Message
  ----    ------  ----  ----         -------
  Normal  HpeCp   23m   hpecp-agent  Created HPECP K8S service
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Install the &apos;&lt;strong&gt;mongosh&lt;/strong&gt;&apos; client locally for shell interaction with MongoDB; &lt;strong&gt;mongosh&lt;/strong&gt; will access the mongo cluster from outside the Kubernetes cluster.
You can download the package &lt;strong&gt;services/mongodb/mongosh-1.6.0-win32-x64.zip&lt;/strong&gt; from &lt;strong&gt;&lt;a href=&quot;https://github.com/cxteamtrials/caas-trials-content&quot;&gt;https://github.com/cxteamtrials/caas-trials-content&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Extract the package and set the &lt;strong&gt;mongosh&lt;/strong&gt; client bin path.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;$ export PATH=$PATH:/c/Ashish/mongosh-1.6.0-win32-x64/bin/
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Connect to the MongoDB service over the ERE Gateway using the &lt;strong&gt;mongosh&lt;/strong&gt; client.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;$ mongosh --host epicgw.customer.hpe.net --port 10030
Current Mongosh Log ID: 637f7622aaa80bc199cfcb06
Connecting to:          mongodb://epicgw.customer.hpe.net:10030/?directConnection=true&amp;#x26;appName=mongosh+1.6.0
Using MongoDB:          6.0.3
Using Mongosh:          1.6.0

For mongosh info see: https://docs.mongodb.com/mongodb-shell/

------
   The server generated these startup warnings when booting
   2022-11-24T12:48:39.726+00:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted
   2022-11-24T12:48:39.727+00:00: You are running this process as the root user, which is not recommended
   2022-11-24T12:48:39.728+00:00: You are running on a NUMA machine. We suggest launching mongod like this to avoid performance problems: numactl --interleave=all mongod [other options]
   2022-11-24T12:48:39.728+00:00: vm.max_map_count is too low
------

------
   Enable MongoDB&apos;s free cloud-based monitoring service, which will then receive and display
   metrics about your deployment (disk utilization, CPU, operation statistics, etc).

   The monitoring data will be available on a MongoDB website with a unique URL accessible to you
   and anyone you share the URL with. MongoDB may use this information to make product
   improvements and to suggest MongoDB products and deployment options to you.

   To enable free monitoring, run the following command: db.enableFreeMonitoring()
   To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
------

test&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Initialize the MongoDB replica set.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;test&gt; rs.initiate()
{
  info2: &apos;no configuration specified. Using a default configuration for the set&apos;,
  me: &apos;mongo-0:27017&apos;,
  ok: 1
}
rs0 [direct: other] test&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Register the mongo-0 pod as the primary replica.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;rs0 [direct: other] test&gt; var cfg = rs.conf();cfg.members[0].host=&quot;10.192.3.58:27017&quot;;rs.reconfig(cfg)
{
  ok: 1,
  &apos;$clusterTime&apos;: {
    clusterTime: Timestamp({ t: 1669297890, i: 1 }),
    signature: {
      hash: Binary(Buffer.from(&quot;0000000000000000000000000000000000000000&quot;, &quot;hex&quot;), 0),
      keyId: Long(&quot;0&quot;)
    }
  },
  operationTime: Timestamp({ t: 1669297890, i: 1 })
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Validate that the mongo-0 pod is registered as the primary replica.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;rs0 [direct: other] test&gt; rs.status()
{
  set: &apos;rs0&apos;,
  date: ISODate(&quot;2022-11-24T13:52:34.600Z&quot;),
  myState: 1,
  term: Long(&quot;1&quot;),
  syncSourceHost: &apos;&apos;,
  syncSourceId: -1,
  heartbeatIntervalMillis: Long(&quot;2000&quot;),
  majorityVoteCount: 1,
  writeMajorityCount: 1,
  votingMembersCount: 1,
  writableVotingMembersCount: 1,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1669297950, i: 1 }), t: Long(&quot;1&quot;) },
    lastCommittedWallTime: ISODate(&quot;2022-11-24T13:52:30.843Z&quot;),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1669297950, i: 1 }), t: Long(&quot;1&quot;) },
    appliedOpTime: { ts: Timestamp({ t: 1669297950, i: 1 }), t: Long(&quot;1&quot;) },
    durableOpTime: { ts: Timestamp({ t: 1669297950, i: 1 }), t: Long(&quot;1&quot;) },
    lastAppliedWallTime: ISODate(&quot;2022-11-24T13:52:30.843Z&quot;),
    lastDurableWallTime: ISODate(&quot;2022-11-24T13:52:30.843Z&quot;)
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1669297900, i: 1 }),
  electionCandidateMetrics: {
    lastElectionReason: &apos;electionTimeout&apos;,
    lastElectionDate: ISODate(&quot;2022-11-24T13:49:50.807Z&quot;),
    electionTerm: Long(&quot;1&quot;),
    lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1669297790, i: 1 }), t: Long(&quot;-1&quot;) },
    lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1669297790, i: 1 }), t: Long(&quot;-1&quot;) },
    numVotesNeeded: 1,
    priorityAtElection: 1,
    electionTimeoutMillis: Long(&quot;10000&quot;),
    newTermStartDate: ISODate(&quot;2022-11-24T13:49:50.829Z&quot;),
    wMajorityWriteAvailabilityDate: ISODate(&quot;2022-11-24T13:49:50.840Z&quot;)
  },
  members: [
    {
      _id: 0,
      name: &apos;10.192.3.58:27017&apos;,
      health: 1,
      state: 1,
      stateStr: &apos;PRIMARY&apos;,
      uptime: 3836,
      optime: { ts: Timestamp({ t: 1669297950, i: 1 }), t: Long(&quot;1&quot;) },
      optimeDate: ISODate(&quot;2022-11-24T13:52:30.000Z&quot;),
      lastAppliedWallTime: ISODate(&quot;2022-11-24T13:52:30.843Z&quot;),
      lastDurableWallTime: ISODate(&quot;2022-11-24T13:52:30.843Z&quot;),
      syncSourceHost: &apos;&apos;,
      syncSourceId: -1,
      infoMessage: &apos;&apos;,
      electionTime: Timestamp({ t: 1669297790, i: 2 }),
      electionDate: ISODate(&quot;2022-11-24T13:49:50.000Z&quot;),
      configVersion: 2,
      configTerm: 1,
      self: true,
      lastHeartbeatMessage: &apos;&apos;
    }
  ],
  ok: 1,
  &apos;$clusterTime&apos;: {
    clusterTime: Timestamp({ t: 1669297950, i: 1 }),
    signature: {
      hash: Binary(Buffer.from(&quot;0000000000000000000000000000000000000000&quot;, &quot;hex&quot;), 0),
      keyId: Long(&quot;0&quot;)
    }
  },
  operationTime: Timestamp({ t: 1669297950, i: 1 })
}
rs0 [direct: primary] test&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Add the mongo-1 pod as a secondary replica.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;rs0 [direct: primary] test&gt; rs.add(&quot;10.192.3.59:27017&quot;)
{
  ok: 1,
  &apos;$clusterTime&apos;: {
    clusterTime: Timestamp({ t: 1669298574, i: 1 }),
    signature: {
      hash: Binary(Buffer.from(&quot;0000000000000000000000000000000000000000&quot;, &quot;hex&quot;), 0),
      keyId: Long(&quot;0&quot;)
    }
  },
  operationTime: Timestamp({ t: 1669298574, i: 1 })
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Validate that the mongo-1 pod is registered as a secondary replica.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;rs0 [direct: primary] test&gt; rs.status()
{
  set: &apos;rs0&apos;,
  date: ISODate(&quot;2022-11-24T14:03:52.376Z&quot;),
  myState: 1,
  term: Long(&quot;1&quot;),
  syncSourceHost: &apos;&apos;,
  syncSourceId: -1,
  heartbeatIntervalMillis: Long(&quot;2000&quot;),
  majorityVoteCount: 2,
  writeMajorityCount: 2,
  votingMembersCount: 2,
  writableVotingMembersCount: 2,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1669298630, i: 1 }), t: Long(&quot;1&quot;) },
    lastCommittedWallTime: ISODate(&quot;2022-11-24T14:03:50.861Z&quot;),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1669298630, i: 1 }), t: Long(&quot;1&quot;) },
    appliedOpTime: { ts: Timestamp({ t: 1669298630, i: 1 }), t: Long(&quot;1&quot;) },
    durableOpTime: { ts: Timestamp({ t: 1669298630, i: 1 }), t: Long(&quot;1&quot;) },
    lastAppliedWallTime: ISODate(&quot;2022-11-24T14:03:50.861Z&quot;),
    lastDurableWallTime: ISODate(&quot;2022-11-24T14:03:50.861Z&quot;)
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1669298619, i: 4 }),
  electionCandidateMetrics: {
    lastElectionReason: &apos;electionTimeout&apos;,
    lastElectionDate: ISODate(&quot;2022-11-24T13:49:50.807Z&quot;),
    electionTerm: Long(&quot;1&quot;),
    lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1669297790, i: 1 }), t: Long(&quot;-1&quot;) },
    lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1669297790, i: 1 }), t: Long(&quot;-1&quot;) },
    numVotesNeeded: 1,
    priorityAtElection: 1,
    electionTimeoutMillis: Long(&quot;10000&quot;),
    newTermStartDate: ISODate(&quot;2022-11-24T13:49:50.829Z&quot;),
    wMajorityWriteAvailabilityDate: ISODate(&quot;2022-11-24T13:49:50.840Z&quot;)
  },
  members: [
    {
      _id: 0,
      name: &apos;10.192.3.58:27017&apos;,
      health: 1,
      state: 1,
      stateStr: &apos;PRIMARY&apos;,
      uptime: 4514,
      optime: { ts: Timestamp({ t: 1669298630, i: 1 }), t: Long(&quot;1&quot;) },
      optimeDate: ISODate(&quot;2022-11-24T14:03:50.000Z&quot;),
      lastAppliedWallTime: ISODate(&quot;2022-11-24T14:03:50.861Z&quot;),
      lastDurableWallTime: ISODate(&quot;2022-11-24T14:03:50.861Z&quot;),
      syncSourceHost: &apos;&apos;,
      syncSourceId: -1,
      infoMessage: &apos;&apos;,
      electionTime: Timestamp({ t: 1669297790, i: 2 }),
      electionDate: ISODate(&quot;2022-11-24T13:49:50.000Z&quot;),
      configVersion: 4,
      configTerm: 1,
      self: true,
      lastHeartbeatMessage: &apos;&apos;
    },
    {
      _id: 1,
      name: &apos;10.192.3.59:27017&apos;,
      health: 1,
      state: 2,
      stateStr: &apos;SECONDARY&apos;,
      uptime: 58,
      optime: { ts: Timestamp({ t: 1669298630, i: 1 }), t: Long(&quot;1&quot;) },
      optimeDurable: { ts: Timestamp({ t: 1669298630, i: 1 }), t: Long(&quot;1&quot;) },
      optimeDate: ISODate(&quot;2022-11-24T14:03:50.000Z&quot;),
      optimeDurableDate: ISODate(&quot;2022-11-24T14:03:50.000Z&quot;),
      lastAppliedWallTime: ISODate(&quot;2022-11-24T14:03:50.861Z&quot;),
      lastDurableWallTime: ISODate(&quot;2022-11-24T14:03:50.861Z&quot;),
      lastHeartbeat: ISODate(&quot;2022-11-24T14:03:52.308Z&quot;),
      lastHeartbeatRecv: ISODate(&quot;2022-11-24T14:03:50.818Z&quot;),
      pingMs: Long(&quot;0&quot;),
      lastHeartbeatMessage: &apos;&apos;,
      syncSourceHost: &apos;10.192.3.58:27017&apos;,
      syncSourceId: 0,
      infoMessage: &apos;&apos;,
      configVersion: 4,
      configTerm: 1
    }
  ],
  ok: 1,
  &apos;$clusterTime&apos;: {
    clusterTime: Timestamp({ t: 1669298630, i: 1 }),
    signature: {
      hash: Binary(Buffer.from(&quot;0000000000000000000000000000000000000000&quot;, &quot;hex&quot;), 0),
      keyId: Long(&quot;0&quot;)
    }
  },
  operationTime: Timestamp({ t: 1669298630, i: 1 })
}
rs0 [direct: primary] test&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Add the mongo-2 pod as a secondary replica.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;rs0 [direct: primary] test&gt; rs.add(&quot;10.192.4.208:27017&quot;)
{
  ok: 1,
  &apos;$clusterTime&apos;: {
    clusterTime: Timestamp({ t: 1669298829, i: 1 }),
    signature: {
      hash: Binary(Buffer.from(&quot;0000000000000000000000000000000000000000&quot;, &quot;hex&quot;), 0),
      keyId: Long(&quot;0&quot;)
    }
  },
  operationTime: Timestamp({ t: 1669298829, i: 1 })
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Validate that the mongo-2 pod is registered as a secondary replica.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;rs0 [direct: primary] test&gt; rs.status()
{
  set: &apos;rs0&apos;,
  date: ISODate(&quot;2022-11-24T14:08:05.339Z&quot;),
  myState: 1,
  term: Long(&quot;1&quot;),
  syncSourceHost: &apos;&apos;,
  syncSourceId: -1,
  heartbeatIntervalMillis: Long(&quot;2000&quot;),
  majorityVoteCount: 2,
  writeMajorityCount: 2,
  votingMembersCount: 3,
  writableVotingMembersCount: 3,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1669298880, i: 1 }), t: Long(&quot;1&quot;) },
    lastCommittedWallTime: ISODate(&quot;2022-11-24T14:08:00.869Z&quot;),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1669298880, i: 1 }), t: Long(&quot;1&quot;) },
    appliedOpTime: { ts: Timestamp({ t: 1669298880, i: 1 }), t: Long(&quot;1&quot;) },
    durableOpTime: { ts: Timestamp({ t: 1669298880, i: 1 }), t: Long(&quot;1&quot;) },
    lastAppliedWallTime: ISODate(&quot;2022-11-24T14:08:00.869Z&quot;),
    lastDurableWallTime: ISODate(&quot;2022-11-24T14:08:00.869Z&quot;)
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1669298870, i: 1 }),
  electionCandidateMetrics: {
    lastElectionReason: &apos;electionTimeout&apos;,
    lastElectionDate: ISODate(&quot;2022-11-24T13:49:50.807Z&quot;),
    electionTerm: Long(&quot;1&quot;),
    lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1669297790, i: 1 }), t: Long(&quot;-1&quot;) },
    lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1669297790, i: 1 }), t: Long(&quot;-1&quot;) },
    numVotesNeeded: 1,
    priorityAtElection: 1,
    electionTimeoutMillis: Long(&quot;10000&quot;),
    newTermStartDate: ISODate(&quot;2022-11-24T13:49:50.829Z&quot;),
    wMajorityWriteAvailabilityDate: ISODate(&quot;2022-11-24T13:49:50.840Z&quot;)
  },
  members: [
    {
      _id: 0,
      name: &apos;10.192.3.58:27017&apos;,
      health: 1,
      state: 1,
      stateStr: &apos;PRIMARY&apos;,
      uptime: 4767,
      optime: { ts: Timestamp({ t: 1669298880, i: 1 }), t: Long(&quot;1&quot;) },
      optimeDate: ISODate(&quot;2022-11-24T14:08:00.000Z&quot;),
      lastAppliedWallTime: ISODate(&quot;2022-11-24T14:08:00.869Z&quot;),
      lastDurableWallTime: ISODate(&quot;2022-11-24T14:08:00.869Z&quot;),
      syncSourceHost: &apos;&apos;,
      syncSourceId: -1,
      infoMessage: &apos;&apos;,
      electionTime: Timestamp({ t: 1669297790, i: 2 }),
      electionDate: ISODate(&quot;2022-11-24T13:49:50.000Z&quot;),
      configVersion: 6,
      configTerm: 1,
      self: true,
      lastHeartbeatMessage: &apos;&apos;
    },
    {
      _id: 1,
      name: &apos;10.192.3.59:27017&apos;,
      health: 1,
      state: 2,
      stateStr: &apos;SECONDARY&apos;,
      uptime: 311,
      optime: { ts: Timestamp({ t: 1669298880, i: 1 }), t: Long(&quot;1&quot;) },
      optimeDurable: { ts: Timestamp({ t: 1669298880, i: 1 }), t: Long(&quot;1&quot;) },
      optimeDate: ISODate(&quot;2022-11-24T14:08:00.000Z&quot;),
      optimeDurableDate: ISODate(&quot;2022-11-24T14:08:00.000Z&quot;),
      lastAppliedWallTime: ISODate(&quot;2022-11-24T14:08:00.869Z&quot;),
      lastDurableWallTime: ISODate(&quot;2022-11-24T14:08:00.869Z&quot;),
      lastHeartbeat: ISODate(&quot;2022-11-24T14:08:05.090Z&quot;),
      lastHeartbeatRecv: ISODate(&quot;2022-11-24T14:08:05.094Z&quot;),
      pingMs: Long(&quot;0&quot;),
      lastHeartbeatMessage: &apos;&apos;,
      syncSourceHost: &apos;10.192.3.58:27017&apos;,
      syncSourceId: 0,
      infoMessage: &apos;&apos;,
      configVersion: 6,
      configTerm: 1
    },
    {
      _id: 2,
      name: &apos;10.192.4.208:27017&apos;,
      health: 1,
      state: 2,
      stateStr: &apos;SECONDARY&apos;,
      uptime: 56,
      optime: { ts: Timestamp({ t: 1669298880, i: 1 }), t: Long(&quot;1&quot;) },
      optimeDurable: { ts: Timestamp({ t: 1669298880, i: 1 }), t: Long(&quot;1&quot;) },
      optimeDate: ISODate(&quot;2022-11-24T14:08:00.000Z&quot;),
      optimeDurableDate: ISODate(&quot;2022-11-24T14:08:00.000Z&quot;),
      lastAppliedWallTime: ISODate(&quot;2022-11-24T14:08:00.869Z&quot;),
      lastDurableWallTime: ISODate(&quot;2022-11-24T14:08:00.869Z&quot;),
      lastHeartbeat: ISODate(&quot;2022-11-24T14:08:05.104Z&quot;),
      lastHeartbeatRecv: ISODate(&quot;2022-11-24T14:08:03.615Z&quot;),
      pingMs: Long(&quot;0&quot;),
      lastHeartbeatMessage: &apos;&apos;,
      syncSourceHost: &apos;10.192.3.59:27017&apos;,
      syncSourceId: 1,
      infoMessage: &apos;&apos;,
      configVersion: 6,
      configTerm: 1
    }
  ],
  ok: 1,
  &apos;$clusterTime&apos;: {
    clusterTime: Timestamp({ t: 1669298880, i: 1 }),
    signature: {
      hash: Binary(Buffer.from(&quot;0000000000000000000000000000000000000000&quot;, &quot;hex&quot;), 0),
      keyId: Long(&quot;0&quot;)
    }
  },
  operationTime: Timestamp({ t: 1669298880, i: 1 })
}
rs0 [direct: primary] test&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;HPE GreenLake for Containers: Data Science with MongoDB deployed on a Kubernetes cluster&lt;/h2&gt;
&lt;p&gt;The steps below illustrate an interactive Jupyter notebook adding records to, and querying, MongoDB deployed on the &quot;hpe&quot; Kubernetes cluster.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image-13.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Create a database instance &apos;glcaasmongodemo&apos; and a sample collection &apos;glsamplecollection&apos;, then add document records to the database.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image-14.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Add multiple documents to the sample collection.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image-15.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Perform a query operation on the sample collection.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image-16.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
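&lt;p&gt;If you would like to reproduce the notebook cells above outside of Jupyter, the short pymongo script below sketches the same flow. This is a minimal sketch, not the exact notebook code: the connection string, user, password, and sample documents are placeholders you would replace with the values for your own MongoDB service on the cluster.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from pymongo import MongoClient

# Placeholder connection string; replace host, user, and password with the
# values exposed by your MongoDB service on the &quot;hpe&quot; cluster.
client = MongoClient(&quot;mongodb://&amp;#x3C;user&gt;:&amp;#x3C;password&gt;@&amp;#x3C;mongo-service-host&gt;:27017/&quot;)

# Create (or reuse) the demo database and sample collection.
db = client[&quot;glcaasmongodemo&quot;]
collection = db[&quot;glsamplecollection&quot;]

# Add a single document record to the database (sample data).
collection.insert_one({&quot;item&quot;: &quot;journal&quot;, &quot;qty&quot;: 25, &quot;status&quot;: &quot;A&quot;})

# Add multiple documents to the sample collection in one call (sample data).
collection.insert_many([
    {&quot;item&quot;: &quot;notebook&quot;, &quot;qty&quot;: 50, &quot;status&quot;: &quot;A&quot;},
    {&quot;item&quot;: &quot;paper&quot;, &quot;qty&quot;: 100, &quot;status&quot;: &quot;D&quot;},
])

# Perform a query operation on the sample collection.
for doc in collection.find({&quot;status&quot;: &quot;A&quot;}):
    print(doc)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Because the database and collection names match the ones used in the notebook, records inserted this way show up in the same queries as the notebook cells above.&lt;/p&gt;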
&lt;h2&gt;HPE GreenLake for Containers: Demo Summary&lt;/h2&gt;
&lt;p&gt;You can find the GitHub repository that hosts the demo code &lt;a href=&quot;https://github.com/cxteamtrials/caas-trials-content&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;We hope that this blog post has provided you with enough information for you to get started deploying a containerized, stateful MongoDB application using HPE GreenLake for Containers. To view more articles and tutorials on the use of HPE GreenLake for Containers, refer to the &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE Developer Community blog&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Explore the HPE GreenLake for Compute Ops Management REST API using Python and PowerShell]]></title><description><![CDATA[HPE GreenLake for Compute Ops Management automates and transforms complex and time-consuming compute management operations into a simplified…]]></description><link>https://developer.hpe.com/explore-the-hpe-greenlake-for-compute-ops-management-rest-api-using-python-and-powershell/</link><guid isPermaLink="false">https://developer.hpe.com/explore-the-hpe-greenlake-for-compute-ops-management-rest-api-using-python-and-powershell/</guid><pubDate>Thu, 01 Dec 2022 12:28:58 GMT</pubDate><content:encoded>&lt;p&gt;HPE GreenLake for Compute Ops Management automates and transforms complex and time-consuming compute management operations into a simplified experience across edge-to-cloud. Whether you are an IT OPS or a DEV OPS engineer, you know that automation is the key to success. And today’s automation relies heavily on APIs and how one can interact easily with them. So, let us show you how to leverage the API provided so you, too, can take advantage of what HPE GreenLake for Compute Ops Management can provide. In this blog post, we will cover how to do this using Python and PowerShell.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlake-com-ops-api-curl1.png&quot; alt=&quot;figure1&quot;&gt;&lt;/p&gt;
&lt;h1&gt;&lt;strong&gt;HPE GreenLake for Compute Ops Management REST API&lt;/strong&gt;&lt;/h1&gt;
&lt;p&gt;HPE GreenLake for Compute Ops Management provides a RESTful API to customers who want to manage their devices programmatically or through a command line interface. The API enables customers to invoke operations or tasks such as listing devices, viewing device details and health, managing a device&apos;s firmware, and much more.&lt;/p&gt;
&lt;p&gt;To learn more about HPE GreenLake for Compute Ops Management API methods and resources, you can refer to &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/compute-ops/public/openapi/compute-ops-latest/overview/&quot;&gt;the API reference documentation&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Here are some of the operations you can do with the HPE GreenLake for Compute Ops Management API:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Obtain the list of servers in your account&lt;/li&gt;
&lt;li&gt;Obtain the list of firmware bundles&lt;/li&gt;
&lt;li&gt;Obtain the list of job templates&lt;/li&gt;
&lt;li&gt;Perform a firmware update&lt;/li&gt;
&lt;li&gt;Run a carbon emissions report&lt;/li&gt;
&lt;li&gt;Power on/off/restart a server&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlake-com-ops-api-pyt-pow9.png&quot; alt=&quot;figure2&quot;&gt;&lt;/p&gt;
&lt;p&gt;In a previous &lt;a href=&quot;https://developer.hpe.com/blog/explore-the-hpe-greenlake-for-compute-ops-management-rest-api-using-curl-and-postman&quot;&gt;blog&lt;/a&gt;, we covered some of the basics of this API using cURL or Postman. In this post, we would like to share with you a few examples of REST API calls that can be made through simple Python and PowerShell code examples.&lt;/p&gt;
&lt;p&gt;The HPE GreenLake for Compute Ops Management REST API uses the OAuth 2.0 HPE GreenLake authentication flow, where a limited lifetime access token is provided in the header of each REST API request as the authorization bearer. The process to generate this necessary access token is well described in the blog post written by &lt;strong&gt;Nisha Thomas&lt;/strong&gt;, entitled &lt;a href=&quot;https://developer.hpe.com/blog/how-to-use-an-api-access-token-for-hpe-greenlake-for-compute-ops-management/&quot;&gt;How to use an API access token for HPE GreenLake for Compute Ops Management&lt;/a&gt;. If you are not familiar with this token generation and usage, we strongly advise you to read it, as it covers the first and most important steps, listed below, that must be performed before you can take advantage of the capabilities described above:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Sign up for an HPE Account&lt;/li&gt;
&lt;li&gt;Connect to HPE GreenLake&lt;/li&gt;
&lt;li&gt;Create API Client Credentials&lt;/li&gt;
&lt;li&gt;Generate an Access Token based on your credentials and relevant connectivity endpoint (HPE GreenLake for Compute Ops Management in our case)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Once the access token is generated, it can be used to make API calls to perform any HTTP method against the desired HPE GreenLake for Compute Ops Management resource. To do this, you simply add the access token to the HTTP header with the keyword &quot;Authorization: Bearer {token}&quot;. The name “Bearer authentication” can be understood as “give access to the bearer of this token”.&lt;/p&gt;
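&lt;p&gt;To illustrate that flow end to end, here is a minimal Python sketch of requesting an access token with the OAuth 2.0 client credentials grant and attaching it as the bearer header. Treat it as a sketch only: the token endpoint URL and the client ID and secret placeholders are assumptions on our part, so confirm the exact values against the token-generation blog post linked above.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import requests

# Assumed HPE GreenLake SSO token endpoint; confirm it against the
# token-generation blog post referenced above before relying on it.
TOKEN_URL = &quot;https://sso.common.cloud.hpe.com/as/token.oauth2&quot;

# Placeholder API client credentials created in HPE GreenLake.
CLIENT_ID = &quot;paste_client_id_here&quot;
CLIENT_SECRET = &quot;paste_client_secret_here&quot;

# Request a limited-lifetime access token (OAuth 2.0 client credentials grant).
resp = requests.post(
    TOKEN_URL,
    data={
        &quot;grant_type&quot;: &quot;client_credentials&quot;,
        &quot;client_id&quot;: CLIENT_ID,
        &quot;client_secret&quot;: CLIENT_SECRET,
    },
)
resp.raise_for_status()
access_token = resp.json()[&quot;access_token&quot;]

# The token is then passed as the authorization bearer on every API call.
headers = {&quot;Authorization&quot;: &quot;Bearer &quot; + access_token}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The resulting headers value is the same thing the PowerShell and Python walkthroughs below build by hand from a pasted token.&lt;/p&gt;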
&lt;p&gt;There are several ways to invoke the API:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Using REST directly or from JavaScript&lt;/li&gt;
&lt;li&gt;Using cURL&lt;/li&gt;
&lt;li&gt;Using Python&lt;/li&gt;
&lt;li&gt;Using Go&lt;/li&gt;
&lt;li&gt;Using PowerShell&lt;/li&gt;
&lt;li&gt;Using Ansible, Terraform, etc.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/compute-ops/public/openapi/compute-ops-latest/overview/&quot;&gt;HPE GreenLake for Compute Ops Management API Reference site&lt;/a&gt; leverages OpenAPI conformant documentation that provides a complete explanation of the operations supported by the Unique Resource Identifiers (URIs) as well as sample requests and responses.&lt;/p&gt;
&lt;h1&gt;Using PowerShell&lt;/h1&gt;
&lt;p&gt;From a terminal application, start a PowerShell environment and set the variables that will be used later.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;# Store our Compute Ops Manager API endpoint
$endpoint = &quot;https://us-west2-api.compute.cloud.hpe.com&quot;
# Initialize an empty headers array
$headers = @{} 	
# Add a header for authorization, using the access token generated earlier
$headers[&quot;Authorization&quot;] = &quot;Bearer &quot; + &quot;paste_access_token_here&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlake-com-ops-api-pyt-pow1.png&quot; alt=&quot;figure3&quot;&gt;&lt;/p&gt;
&lt;p&gt;Our first request:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;$response = Invoke-webrequest &quot;$endpoint/compute-ops/v1beta2/servers?limit=2&quot; -Method GET -Headers $headers
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When the request is successful, the response should display 200:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlake-com-ops-api-pyt-pow2.png&quot; alt=&quot;figure4&quot;&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;$response.StatusCode
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The JSON data returned by the request is in $response.Content, but it is rather hard to read as is. I can prettify it a bit with:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;$response.Content | ConvertFrom-Json | Format-List
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, the request returned 2 objects (count : 2) and the actual server details are in the items array property.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlake-com-ops-api-pyt-pow3.png&quot; alt=&quot;figure5&quot;&gt;&lt;/p&gt;
&lt;p&gt;The API supports several query parameters depending on the resource type, such as filter, limit (maximum number of records to return), offset (resource offset to start the response from) and sort (order in which to return the resources in the collection).&lt;/p&gt;
&lt;p&gt;Filters provide the ability to limit the resources that take part in the action of a REST call. When a REST call includes a filter, the GET or DELETE action is restricted to a response that meets the filter requirements. Filters are specified by using the query parameter &apos;filter&apos;.&lt;/p&gt;
&lt;p&gt;A simple example of filtering follows:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;GET &lt;URI&gt;?filter=powerState eq &apos;Off&apos;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;This example shows a simple filter. The resources returned by the query are limited to results with the attribute powerState and the value Off.&lt;/p&gt;
&lt;p&gt;To use a filter on a nested property name, the &apos;/&apos; separator can be specified as follows:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;GET /compute-ops/v1beta2/servers?filter=hardware/model eq &apos;ProLiant DL365 Gen10 Plus&apos;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;To test this filter, I run the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;$response = Invoke-webrequest &quot;$endpoint/compute-ops/v1beta2/servers?filter=contains(hardware/model,&apos;DL365&apos;)&quot; -Method GET -Headers $headers
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I start by checking the request was successful as before:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;$response.StatusCode
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It should display 200.&lt;/p&gt;
&lt;p&gt;Then I parse the result with the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;$response.Content | ConvertFrom-Json | select count
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It should display 4, the number of servers with &apos;DL365&apos; in the hardware model in my environment.&lt;/p&gt;
&lt;p&gt;Now I can look at the actual content:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;($response.Content | ConvertFrom-Json).items
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlake-com-ops-api-pyt-pow4.png&quot; alt=&quot;figure6&quot;&gt;&lt;/p&gt;
&lt;p&gt;Refer to the following &lt;a href=&quot;https://developer.hpe.com/blog/&quot;&gt;blog&lt;/a&gt; for more information on filter query syntax and API references.&lt;/p&gt;
&lt;h1&gt;Using Python&lt;/h1&gt;
&lt;p&gt;From a Python 3 environment, at the interpreter prompt &gt;&gt;&gt;, I import two modules to make HTTP requests and to work with JSON data:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import requests
import json
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I need to set some variables that I will use later:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Store our Compute Ops Manager API endpoint
endpoint = &quot;https://us-west2-api.compute.cloud.hpe.com&quot;
# Add a header for authorization, using the access token generated earlier
headers = {&quot;Authorization&quot;: &quot;Bearer &quot; + &quot;paste_access_token_here&quot;}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlake-com-ops-api-pyt-pow5.png&quot; alt=&quot;figure8&quot;&gt;&lt;/p&gt;
&lt;p&gt;I can now send my first request:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;response = requests.get(url=endpoint + &apos;/compute-ops/v1beta2/servers?limit=2&apos;, headers=headers)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once again, I check whether the request was successful; the response status code should display 200:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;print(response.status_code)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In order to display the JSON data returned by the request in a human-readable form:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;print(json.dumps(response.json(), indent=2))
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlake-com-ops-api-pyt-pow6.png&quot; alt=&quot;figure 9&quot;&gt;&lt;/p&gt;
&lt;p&gt;I see the request returned 2 objects &lt;strong&gt;(count : 2)&lt;/strong&gt; and the actual server details are in the &lt;strong&gt;items&lt;/strong&gt; array property.&lt;/p&gt;
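&lt;p&gt;For example, to print a couple of attributes for each server in the &lt;strong&gt;items&lt;/strong&gt; array, a short loop is enough. The &lt;strong&gt;name&lt;/strong&gt; and &lt;strong&gt;hardware/powerState&lt;/strong&gt; attributes are used here purely as an illustration; check the API reference for the exact attributes you need.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Iterate over the servers returned in the items array and print a few
# attributes for each one (attribute names shown here are illustrative).
for server in response.json()[&quot;items&quot;]:
    print(server[&quot;name&quot;], server[&quot;hardware&quot;][&quot;powerState&quot;])
&lt;/code&gt;&lt;/pre&gt;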
&lt;p&gt;The API supports several query parameters depending on the resource type, such as filter, limit (maximum number of records to return), offset (resource offset to start the response from) and sort (order in which to return the resources in the collection).&lt;/p&gt;
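&lt;p&gt;As a quick illustration of those query parameters, the request below pages through the servers collection: the &lt;strong&gt;limit&lt;/strong&gt; and &lt;strong&gt;offset&lt;/strong&gt; parameter names come straight from the list above, while the values are arbitrary.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Skip the first 5 resources and return at most the next 5.
response = requests.get(url=endpoint + &apos;/compute-ops/v1beta2/servers?limit=5&amp;offset=5&apos;, headers=headers)
print(response.status_code)
&lt;/code&gt;&lt;/pre&gt;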
&lt;p&gt;Filters provide the ability to limit the resources that take part in the action of a REST call. When a REST call includes a filter, the GET or DELETE action is restricted to a response that meets the filter requirements. Filters are specified by using the query parameter &apos;filter&apos;.&lt;/p&gt;
&lt;p&gt;A simple example of filtering follows:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;GET &lt;URI&gt;?filter=powerState eq &apos;Off&apos;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;This example shows a simple filter. The resources returned by the query are limited to results with the attribute powerState and the value Off.&lt;/p&gt;
&lt;p&gt;To use a filter on a nested property name, the &apos;/&apos; separator can be specified as follows:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;GET /compute-ops/v1beta2/servers?filter=hardware/model eq &apos;ProLiant DL365 Gen10 Plus&apos;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I can test with the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;response = requests.get(url= endpoint + &quot;/compute-ops/v1beta2/servers?filter=contains(hardware/model,&apos;DL365&apos;)&quot;, headers=headers)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;First, check that the request was successful, as before:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;print(response.status_code)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It should display 200.&lt;/p&gt;
&lt;p&gt;Now, display the number of objects in the response.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;print(response.json()[&apos;count&apos;])
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It should display 4.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlake-com-ops-api-pyt-pow7.png&quot; alt=&quot;figure 10&quot;&gt;&lt;/p&gt;
&lt;p&gt;To display the JSON data returned by the request in a human-readable form, type the following:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;print(json.dumps(response.json(), indent=2))
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlake-com-ops-api-pyt-pow8.png&quot; alt=&quot;figure 11&quot;&gt;&lt;/p&gt;
&lt;p&gt;Refer to previous sections for more information on filter query syntax and API references.&lt;/p&gt;
&lt;h2&gt;What’s next?&lt;/h2&gt;
&lt;p&gt;In this blog post, we covered how to get started with HPE GreenLake for Compute Ops Management REST API and explained how to post simple API calls through Python and PowerShell. In some of our future &lt;a href=&quot;https://developer.hpe.com/hackshack/workshops&quot;&gt;Workshops-on-Demand&lt;/a&gt;, you will have the opportunity to run live experiments on these examples. Stay tuned for more details!&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/hpe/greenlake&quot;&gt;Learn more about HPE GreenLake&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Learn more about the &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/compute-ops/public/guide&quot;&gt;HPE GreenLake for Compute Ops Management REST API&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This &lt;a href=&quot;https://github.com/jullienl/HPE-Compute-Ops-Management&quot;&gt;GitHub repository&lt;/a&gt; hosts many script samples for the Compute Ops Management API including PowerShell, Python, Ansible playbooks and others.&lt;/p&gt;
&lt;p&gt;Find other tutorials and articles on HPE GreenLake on the &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE Developer blog&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Explore HPE GreenLake for Compute Ops Management REST API using cURL and Postman]]></title><description><![CDATA[HPE GreenLake for Compute Ops Management automates and transforms complex and time-consuming compute management operations into a simplified…]]></description><link>https://developer.hpe.com/explore-the-hpe-greenlake-for-compute-ops-management-rest-api-using-curl-and-postman/</link><guid isPermaLink="false">https://developer.hpe.com/explore-the-hpe-greenlake-for-compute-ops-management-rest-api-using-curl-and-postman/</guid><pubDate>Wed, 30 Nov 2022 12:36:05 GMT</pubDate><content:encoded>&lt;p&gt;HPE GreenLake for Compute Ops Management automates and transforms complex and time-consuming compute management operations into a simplified experience across edge-to-cloud. Whether you are an IT OPS or a DEV OPS engineer, you know that automation is the key to success. And today’s automation relies heavily on APIs and how one can interact easily with them. So, let us show you how to leverage the API provided so you, too, can take advantage of what HPE GreenLake for Compute Ops Management can provide. In this blog post, we will cover how to do this using cURL and Postman.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlake-com-ops-api-curl1.png&quot; alt=&quot;blog figure1&quot;&gt;&lt;/p&gt;
&lt;h1&gt;HPE GreenLake for Compute Ops Management REST API&lt;/h1&gt;
&lt;p&gt;HPE GreenLake for Compute Ops Management provides a RESTful API to customers who want to manage their devices programmatically or through a command line interface. The API enables customers to invoke operations or tasks such as listing devices, viewing device details and health, managing a device&apos;s firmware, and much more.&lt;/p&gt;
&lt;p&gt;To learn more about HPE GreenLake for Compute Ops Management API methods and resources, you can refer to the &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/compute-ops/public/openapi/compute-ops-latest/overview/&quot;&gt;API reference documentation&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Here are some of the operations you can do with the HPE GreenLake for Compute Ops Management API:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Obtain the list of servers in your account&lt;/li&gt;
&lt;li&gt;Obtain the list of firmware bundles&lt;/li&gt;
&lt;li&gt;Obtain the list of job templates&lt;/li&gt;
&lt;li&gt;Perform a firmware update&lt;/li&gt;
&lt;li&gt;Run a carbon emissions report&lt;/li&gt;
&lt;li&gt;Power on/off/restart a server&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlake-com-ops-api-curl24.png&quot; alt=&quot;blog figure1&quot;&gt;&lt;/p&gt;
&lt;p&gt;We would like to share with you today a few examples of REST API calls that can be made through simple &lt;a href=&quot;https://curl.se/&quot;&gt;cURL&lt;/a&gt; commands or &lt;a href=&quot;https://www.postman.com/&quot;&gt;Postman&lt;/a&gt; examples.&lt;/p&gt;
&lt;p&gt;The HPE GreenLake for Compute Ops Management REST API uses the OAuth 2.0 HPE GreenLake authentication flow, where a limited lifetime access token is provided in the header of each REST API request as the authorization bearer. The process to generate this necessary access token is well described in the blog post written by &lt;strong&gt;Nisha Thomas&lt;/strong&gt;, entitled &lt;a href=&quot;https://developer.hpe.com/blog/how-to-use-an-api-access-token-for-hpe-greenlake-for-compute-ops-management/&quot;&gt;How to use an API access token for HPE GreenLake for Compute Ops Management&lt;/a&gt;. If you are not familiar with this token generation and usage, we strongly advise you to read it, as it covers the first and most important steps, listed below, that must be performed before you can take advantage of the capabilities described above:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Sign up for an HPE Account&lt;/li&gt;
&lt;li&gt;Connect to HPE GreenLake&lt;/li&gt;
&lt;li&gt;Create API Client Credentials&lt;/li&gt;
&lt;li&gt;Generate an Access Token based on your credentials and relevant connectivity endpoint (HPE GreenLake for Compute Ops Management in our case)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Once the access token is generated, it can be used to make API calls to perform any HTTP method against the desired HPE GreenLake for Compute Ops Management resource. To do this, you simply add the access token to the HTTP header with the keyword &quot;Authorization: Bearer {token}&quot;. The name “Bearer authentication” can be understood as “give access to the bearer of this token”.&lt;/p&gt;
&lt;p&gt;There are several ways to invoke the API:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Using REST directly or from JavaScript&lt;/li&gt;
&lt;li&gt;Using cURL&lt;/li&gt;
&lt;li&gt;Using Python&lt;/li&gt;
&lt;li&gt;Using Go&lt;/li&gt;
&lt;li&gt;Using PowerShell&lt;/li&gt;
&lt;li&gt;Using Ansible, Terraform, etc.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/compute-ops/public/openapi/compute-ops-latest/overview/&quot;&gt;HPE GreenLake for Compute Ops Management API Reference site&lt;/a&gt; leverages OpenAPI conformant documentation that provides a complete explanation of the operations supported by the Unique Resource Identifiers (URIs) as well as sample requests and responses.&lt;/p&gt;
&lt;p&gt;If you are wondering which to use, cURL is probably a good choice if you like command line interfaces, Postman if you prefer graphical user interfaces, and PowerShell or Python if you are really into programming.&lt;/p&gt;
&lt;h1&gt;Using cURL&lt;/h1&gt;
&lt;p&gt;From the command prompt, use a simple cURL command like:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;curl -X GET &quot;https://us-west2-api.compute.cloud.hpe.com/compute-ops/v1beta2/servers?limit=2&quot; -H &quot;Authorization: Bearer &amp;#x3C;access_token_here&gt;&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note that you must use the correct connectivity endpoint according to the region where HPE GreenLake for Compute Ops Management is deployed. Currently, these are the connectivity endpoints for the possible HPE GreenLake for Compute Ops Management regions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://eu-central1-api.compute.cloud.hpe.com&quot;&gt;EU Central&lt;/a&gt;  &lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://eu-central1-api.compute.cloud.hpe.com&quot;&gt;AP Northeast&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://us-west2-api.compute.cloud.hpe.com&quot;&gt;US West&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The request uses the &lt;strong&gt;GET&lt;/strong&gt; method for the &lt;strong&gt;/compute-ops/v1beta2/servers&lt;/strong&gt; resource to obtain the list of the available servers as described in the API reference.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlake-com-ops-api-curl25.png&quot; alt=&quot;blog figure3&quot;&gt;&lt;/p&gt;
&lt;p&gt;The command uses a header parameter with the keyword &lt;strong&gt;&quot;Authorization: Bearer {token}&quot;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;?limit=2&lt;/strong&gt; at the end of the URL is a query parameter to limit the response to a maximum of two servers as documented in the request pane of the &lt;em&gt;servers&lt;/em&gt; resource:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlake-com-ops-api-curl19.png&quot; alt=&quot;blog figure4&quot;&gt;&lt;/p&gt;
&lt;p&gt;As a result of the command, the API response lists two compute servers that are on-boarded and assigned to the corresponding application for the user&apos;s account.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlake-com-ops-api-curl15.png&quot; alt=&quot;blog figure4&quot;&gt;&lt;/p&gt;
&lt;p&gt;To see the API response code, add -I at the end of the command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;curl -X GET &quot;https://us-west2-api.compute.cloud.hpe.com/compute-ops/v1beta2/servers?limit=2&quot; -H &quot;Authorization: Bearer &amp;#x3C;your_access_token_here&gt;&quot; -I
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlake-com-ops-api-curl2.png&quot; alt=&quot;blog figure5&quot;&gt;&lt;/p&gt;
&lt;p&gt;The response code displays &lt;strong&gt;200 OK.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;This status response code indicates that the request has succeeded. The complete list of response codes for each resource in the API reference is shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlake-com-ops-api-curl26.png&quot; alt=&quot;blog figure6&quot;&gt;&lt;/p&gt;
&lt;p&gt;The JSON response provided by the API is not formatted, which makes it difficult to read and understand the JSON content, but a tool like &lt;a href=&quot;https://stedolan.github.io/jq/&quot;&gt;jq&lt;/a&gt;  can prettify the content. Just add “| jq” at the end of the command to get a better visual display.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlake-com-ops-api-curl4.png&quot; alt=&quot;blog figure7&quot;&gt;&lt;/p&gt;
&lt;p&gt;The API supports several query parameters depending on the resource type, such as filter, limit (maximum number of records to return), offset (resource offset to start the response from) and sort (order in which to return the resources in the collection).&lt;/p&gt;
&lt;p&gt;Filters provide the ability to limit the resources that take part in the action of a REST call. When a REST call includes a filter, the GET or DELETE action is restricted to a response that meets the filter requirements. Filters are specified by using the query parameter &apos;filter&apos;.&lt;/p&gt;
&lt;p&gt;Here is a simple example of filtering:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;GET &lt;URI&gt;?filter=powerState eq &apos;Off&apos;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;This example shows a simple filter. The resources returned by the query are limited to results with the attribute &lt;strong&gt;powerState&lt;/strong&gt; and the value &lt;strong&gt;Off&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;To use a filter on a nested property name, the &apos;&lt;strong&gt;/&lt;/strong&gt;&apos; separator can be specified as follows:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;GET /compute-ops/v1beta2/servers?filter=hardware/model eq &apos;ProLiant DL365 Gen10 Plus&apos;&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &quot;offset&quot;: 0,
  &quot;count&quot;: 20,
  &quot;total&quot;: 20,
  &quot;items&quot;: [
    {
      &quot;id&quot;: &quot;P39368-B21+CN70421C51&quot;,
      &quot;type&quot;: &quot;compute-ops/server&quot;,
      &quot;platformFamily&quot;: &quot;PROLIANT&quot;,
      &quot;resourceUri&quot;: &quot;/compute-ops/v1beta2/servers/P39368-B21+CN70421C51&quot;,
      &quot;name&quot;: &quot;HPE-HOL56&quot;,
      &quot;createdAt&quot;: &quot;2022-04-29T12:35:35.265978+00:00&quot;,
      &quot;updatedAt&quot;: &quot;2022-10-25T19:54:36.572565+00:00&quot;,
      &quot;generation&quot;: 292,
      &quot;hardware&quot;: {
        &quot;serialNumber&quot;: &quot;CN70421C51&quot;,
        &quot;model&quot;: &quot;ProLiant DL365 Gen10 Plus&quot;,
        &quot;uuid&quot;: &quot;33393350-3836-4E43-3730-343231433531&quot;,
        &quot;productId&quot;: &quot;P39368-B21&quot;,
        &quot;powerState&quot;: &quot;ON&quot;,
        &quot;indicatorLed&quot;: &quot;OFF&quot;,
        &quot;health&quot;: {
          &quot;summary&quot;: &quot;OK&quot;,
          &quot;healthLED&quot;: &quot;OK&quot;,
          &quot;fans&quot;: &quot;OK&quot;,
          &quot;fanRedundancy&quot;: &quot;REDUNDANT&quot;,
          &quot;liquidCooling&quot;: &quot;NOT_PRESENT&quot;,
          &quot;liquidCoolingRedundancy&quot;: &quot;NOT_PRESENT&quot;,
          &quot;memory&quot;: &quot;OK&quot;,
          &quot;network&quot;: &quot;UNKNOWN&quot;,
          &quot;powerSupplies&quot;: &quot;OK&quot;,
          &quot;powerSupplyRedundancy&quot;: &quot;NOT_PRESENT&quot;,
          &quot;processor&quot;: &quot;OK&quot;,
          &quot;storage&quot;: &quot;OK&quot;,
          &quot;temperature&quot;: &quot;OK&quot;,
          &quot;bios&quot;: &quot;OK&quot;,
          &quot;smartStorage&quot;: &quot;OK&quot;
        },
        &quot;bmc&quot;: {
          &quot;mac&quot;: &quot;B4:7A:F1:4E:9E:92&quot;,
          &quot;ip&quot;: &quot;172.30.231.116&quot;,
          &quot;hostname&quot;: &quot;None&quot;
          ...
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The following cURL command includes the filter:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;curl -X GET &quot;https://us-west2-api.compute.cloud.hpe.com/compute-ops/v1beta2/servers?filter=contains(hardware/model,&apos;DL365&apos;)&quot; -H &quot;Authorization: Bearer &amp;#x3C;your_access_token_here&gt;&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlake-com-ops-api-curl5.png&quot; alt=&quot;blog figure8&quot;&gt;&lt;/p&gt;
&lt;p&gt;Contains is another filter option to return resources where associated &lt;strong&gt;hardware/model&lt;/strong&gt; contains &lt;strong&gt;DL365.&lt;/strong&gt; The result of the command shows 4 server resources whose model name contains DL365.&lt;/p&gt;
&lt;p&gt;The filter query syntax supports a richer set of filters than the single operation in the previous example. Filtering syntax is broken down by Operations, Logic, and Types. In the previous example, the operation was &apos;eq&apos; for equality. Most comparison operations require the evaluated property name to the left of the operator and a literal to the right.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlake-com-ops-api-curl16.png&quot; alt=&quot;blog figure9&quot;&gt;&lt;/p&gt;
&lt;p&gt;To learn more about Filtering, see &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/compute-ops/public/guide/#filtering&quot;&gt;the HPE GreenLake for Compute Ops Management guide.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The HPE GreenLake for Compute Ops Management API supports many resources that are described in the &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/compute-ops/public/openapi/compute-ops-latest/overview/&quot;&gt;API reference&lt;/a&gt; document, as illustrated below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlake-com-ops-api-curl17.png&quot; alt=&quot;blog figure10&quot;&gt;&lt;/p&gt;
&lt;p&gt;Unique Resource Identifiers (URIs) are used to identify a resource. A URI is a full API path ending in an identification number.&lt;/p&gt;
&lt;p&gt;For example:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;compute-ops/v1beta2/servers/{serverId}&lt;/li&gt;
&lt;li&gt;compute-ops/v1beta1/reports/{id}/data&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;v1beta1&lt;/strong&gt;, &lt;strong&gt;v1beta2&lt;/strong&gt; in the URI is the version of the resource that is being accessed.&lt;/p&gt;
&lt;p&gt;One can invoke the common HTTP methods, like GET, POST, PUT, PATCH, and DELETE, on resources in the HPE GreenLake for Compute Ops Management API as shown below for the &lt;strong&gt;filters&lt;/strong&gt; resource:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlake-com-ops-api-curl23.png&quot; alt=&quot;blog figure11&quot;&gt;&lt;/p&gt;
&lt;p&gt;Refer to the &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/compute-ops/public/openapi/compute-ops-latest/overview/&quot;&gt;API reference&lt;/a&gt; site for a complete list of methods supported by each API resource.&lt;/p&gt;
&lt;h1&gt;Using Postman&lt;/h1&gt;
&lt;p&gt;Postman is a tool designed to build and use APIs.&lt;/p&gt;
&lt;p&gt;To get started, “create a request”, as shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlake-com-ops-api-curl7.png&quot; alt=&quot;blog figure12&quot;&gt;&lt;/p&gt;
&lt;p&gt;In the request URL field, enter the endpoint URL:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://us-west2-api.compute.cloud.hpe.com/compute-ops/v1beta2/servers?limit=2&quot;&gt;https://us-west2-api.compute.cloud.hpe.com/compute-ops/v1beta2/servers?limit=2&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;This is the base connectivity endpoint you have seen earlier, followed by the resource URI you want to query, &lt;strong&gt;/compute-ops/v1beta2/servers&lt;/strong&gt;, as described in the API reference table.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlake-com-ops-api-curl25.png&quot; alt=&quot;blog figure13&quot;&gt;&lt;/p&gt;
&lt;p&gt;In order to limit the output, as documented in the request pane of the &lt;em&gt;servers&lt;/em&gt; resource, we use the query parameter &lt;strong&gt;?limit=2&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlake-com-ops-api-curl19.png&quot; alt=&quot;blog figure14&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlake-com-ops-api-curl20.png&quot; alt=&quot;blog figure15&quot;&gt;&lt;/p&gt;
&lt;p&gt;In the &lt;em&gt;Authorization&lt;/em&gt; tab, choose  &lt;strong&gt;Bearer Token&lt;/strong&gt; in the &lt;em&gt;Type&lt;/em&gt; drop-down menu and paste the access token that was generated earlier in the Token field. Postman will generate the appropriate Authorization header when you send the request.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlake-com-ops-api-curl8.png&quot; alt=&quot;blog figure16&quot;&gt;&lt;/p&gt;
&lt;p&gt;Hit the &lt;strong&gt;Send&lt;/strong&gt; button to get a &lt;strong&gt;200 OK&lt;/strong&gt; status response indicating success and a JSON body with the details of two compute servers on-boarded and assigned to the corresponding application for the user&apos;s account.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlake-com-ops-api-curl9.png&quot; alt=&quot;blog figure17&quot;&gt;&lt;/p&gt;
&lt;p&gt;The complete list of response codes is documented for each resource in the API reference table shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlake-com-ops-api-curl21.png&quot; alt=&quot;blog figure18&quot;&gt;&lt;/p&gt;
&lt;p&gt;The API supports several query parameters depending on the resource type, such as filter, limit (maximum number of records to return), offset (resource offset to start the response from) and sort (order in which to return the resources in the collection).&lt;/p&gt;
&lt;p&gt;Filters provide the ability to limit the resources that take part in the action of a REST call. When a REST call includes a filter, the GET or DELETE action is restricted to a response that meets the filter requirements. Filters are specified by using the query parameter &apos;filter&apos;.&lt;/p&gt;
&lt;p&gt;A simple example of filtering follows:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;GET &lt;URI&gt;?filter=powerState eq &apos;Off&apos;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;This example shows a simple filter. The resources returned by the query are limited to results with the attribute &lt;strong&gt;powerState&lt;/strong&gt; and the value &lt;strong&gt;Off&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;To use a filter on a nested property name, the &apos;&lt;strong&gt;/&lt;/strong&gt;&apos; separator can be specified as follows:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;GET /compute-ops/v1beta2/servers?filter=hardware/model eq &apos;ProLiant DL365 Gen10 Plus&apos;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;To test this filter, we enter the following URL in the Request field:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://us-west2-api.compute.cloud.hpe.com/compute-ops/v1beta2/servers?filter=contains(hardware/model,&amp;#x27;DL365&amp;#x27;)&quot;&gt;https://us-west2-api.compute.cloud.hpe.com/compute-ops/v1beta2/servers?filter=contains(hardware/model,&apos;DL365&apos;)&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlake-com-ops-api-curl22.png&quot; alt=&quot;blog figure19&quot;&gt;&lt;/p&gt;
&lt;p&gt;The Authorization tab should still have your Access Token (an access token is valid for 2 hours).&lt;/p&gt;
&lt;p&gt;We hit the &lt;strong&gt;Send&lt;/strong&gt; button.&lt;/p&gt;
&lt;p&gt;The request should indicate success (Status is 200 OK) and, in this example, the response shows 4 server resources whose model name contains DL365.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlake-com-ops-api-curl10.png&quot; alt=&quot;blog figure20&quot;&gt;&lt;/p&gt;
&lt;p&gt;The filter query syntax supports a richer set of filters than the single operation in the previous example. Filtering syntax is broken down by Operations, Logic, and Types.&lt;/p&gt;
&lt;p&gt;In the previous example, the operation was &apos;eq&apos; for equality. Most comparison operations require the evaluated property name to the left of the operator and a literal to the right.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlake-com-ops-api-curl16.png&quot; alt=&quot;blog figure21&quot;&gt;&lt;/p&gt;
&lt;p&gt;To learn more about Filtering, see the &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/compute-ops/public/guide/#filtering&quot;&gt;HPE GreenLake for Compute Ops Management guide&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Unique Resource Identifiers (URIs) are used to identify a resource. A URI is a full API path ending in an identification number.&lt;/p&gt;
&lt;p&gt;For example:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;compute-ops/v1beta2/servers/{serverId}&lt;/li&gt;
&lt;li&gt;compute-ops/v1beta1/reports/{id}/data&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;v1beta1&lt;/strong&gt;, &lt;strong&gt;v1beta2&lt;/strong&gt; in the URI is the version of the resource that is being accessed.&lt;/p&gt;
&lt;p&gt;You can invoke the common HTTP methods, like GET, POST, PUT, PATCH, and DELETE, on resources in the HPE GreenLake for Compute Ops Management API as shown below for the &lt;strong&gt;filters&lt;/strong&gt; resource:&lt;/p&gt;
&lt;p&gt;Refer to the &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/compute-ops/public/openapi/compute-ops-latest/overview/&quot;&gt;API reference&lt;/a&gt; site for a complete list of methods supported by each API resource.&lt;/p&gt;
&lt;p&gt;Finally, a Postman collection of executable API requests for the HPE GreenLake for Compute Ops Management API can be found &lt;a href=&quot;https://www.postman.com/jullienl/workspace/lionel-jullien-s-public-workspace/collection/991177-10c5377d-892b-4612-9e81-23d75d6c2f0d?ctx=documentation&quot;&gt;HERE&lt;/a&gt;.&lt;/p&gt;
&lt;h1&gt;What’s next?&lt;/h1&gt;
&lt;p&gt;In this blog post, we covered how to get started with the HPE GreenLake for Compute Ops Management REST API, explained how to post simple API calls through cURL commands, and showed you how to leverage Postman to achieve the same results.  In our next article, we will show how similar calls can be performed using Python or PowerShell.&lt;/p&gt;
&lt;p&gt;Learn more about &lt;a href=&quot;https://www.hpe.com/hpe/greenlake&quot;&gt;HPE GreenLake&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Learn more about the &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/compute-ops/public/guide&quot;&gt;HPE GreenLake for Compute Ops Management REST API&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Find other tutorials and articles on HPE GreenLake on the &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE Developer blog&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Using MapReduce with an HPE Ezmeral Data Fabric database binary table]]></title><description><![CDATA[Introduction No matter what kind of application you want to develop, the underlying layer will need a database.
Nowadays, applications based…]]></description><link>https://developer.hpe.com/example-of-mapreduce-with-ezmeral-data-fabric-mapr-database-binary-table/</link><guid isPermaLink="false">https://developer.hpe.com/example-of-mapreduce-with-ezmeral-data-fabric-mapr-database-binary-table/</guid><pubDate>Mon, 14 Nov 2022 15:56:03 GMT</pubDate><content:encoded>&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;No matter what kind of application you want to develop, the underlying layer will need a database.
Nowadays, applications based on big data are generally divided into two types: BI (Business Intelligence) and AI (Artificial Intelligence).
Naturally, both types of applications require a big data system as a base.
In traditional applications, we use RDBMS as the database, but in big data systems, we need to use NoSQL databases.&lt;/p&gt;
&lt;h3&gt;Advantages of using HPE Ezmeral Data Fabric database&lt;/h3&gt;
&lt;p&gt;Let&apos;s first look at the position of the HPE Ezmeral Data Fabric database in the HPE Ezmeral Data Fabric software stack.
Since the bottom layer of HPE Ezmeral Data Fabric database is the file system, you get the benefits of better performance, simpler management, and ease of use.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/system_architecture_position-hpe-edf_database.png&quot; alt=&quot;HPE EDF Database is based on file system&quot; title=&quot;Position of the database in HPE EDF stack&quot;&gt;&lt;/p&gt;
&lt;p&gt;In my personal experience, I have found this to be true. For example, when using open source Apache Hadoop, its design principles require you to merge small files before loading them in order to fully realize Hadoop&apos;s performance. With the HPE Ezmeral Data Fabric file system, you don&apos;t need to worry about merging small files, thanks to volumes, the logical units within the file system. Another advantage of the HPE Ezmeral Data Fabric file system is that it provides the widely used NFS protocol interface, allowing you to mount the HPE Ezmeral Data Fabric file system as an NFS file system on your PC. This is something that Hadoop and other comparable commercial software cannot do.&lt;/p&gt;
&lt;p&gt;I have some additional thoughts on the important, unique advantages of the HPE Ezmeral Data Fabric database.
The first thing that comes to mind is the simplicity of the product.&lt;/p&gt;
&lt;p&gt;For example, if you are using products in the Apache Hadoop ecosystem, or a commercial version of a big data system like Cloudera, you need to install and maintain the NoSQL services included in it separately. One often comes into contact with HBase, MongoDB, etc.
But in HPE Ezmeral Data Fabric, you don&apos;t need to deploy HBase and MongoDB separately, because these two different types of NoSQL systems have been integrated in HPE Ezmeral Data Fabric core as HPE Ezmeral Data Fabric database. From the process level, you only see one MFS process.&lt;/p&gt;
&lt;p&gt;When you are using HBase, you need to consider the HBase Master and Region Server processes, as well as the underlying Hadoop Namenode and Datanode processes.
HPE Ezmeral Data Fabric database includes two different types of NoSQL database systems, namely: Binary Table and JSON Table, which correspond to open source HBase and MongoDB respectively.
With HPE Ezmeral Data Fabric, only one software process is visible: MFS. When you use a completely open source big data technology stack or other commercial big data platforms, you will still see a bunch of processes. So you can see one of the biggest differences: simplicity.&lt;/p&gt;
&lt;p&gt;Though a column-oriented NoSQL database, such as HBase, is a bit outdated in design when compared with a document-oriented NoSQL database, such as MongoDB, I did not have enough information on a better replacement in order to write this article. I did find a demo of Spark based on HBase in the HPE Developer portal entitled &lt;a href=&quot;https://developer.hpe.com/blog/spark-streaming-with-hbase/&quot;&gt;Spark Streaming with HBase&lt;/a&gt; that I was able to use.&lt;/p&gt;
&lt;p&gt;As stated, the advantages of using the HPE Ezmeral Data Fabric database center on simplicity. The advantages of the HPE Ezmeral Data Fabric file system include improved performance, ease of use, and ease of maintenance. Now let me show you how to use a Binary Table and MapReduce to apply these advantages when dealing with huge data volumes in a distributed storage and computing system.
As for why we use this kind of NoSQL product, I don&apos;t need to go into detail here; the reason is the same one that gave rise to Hadoop, the big data file system. Simply put, we need to build a distributed storage and computing system in order to analyze and compute over huge data volumes on inexpensive commodity computers.
In addition, the main reason I am writing this article is that I did not find a demo article in the HPE Developer portal that introduces the use of a Binary Table and MapReduce together, so I would like to add such an example.&lt;/p&gt;
&lt;h2&gt;Getting started using MapReduce with HPE Ezmeral Data Fabric&lt;/h2&gt;
&lt;p&gt;In this article, I will show you how to create the Development Environment for HPE Ezmeral Data Fabric on Linux, Microsoft Windows and Apple Mac in a single-node cluster based on Docker containers. It will provide you with a choice of different HPE Ezmeral Data Fabric versions, based on how it integrates into HPE Ezmeral Ecosystem Packs. This way you can quickly create an HPE Ezmeral Data Fabric environment on your work computer. Then, using this as an example, I&apos;ll demonstrate a MapReduce application that uses HPE Ezmeral Data Fabric&apos;s database binary table as the backend service. I will create the table using the HBase shell command line tool customized for the HPE Ezmeral Data Fabric database and do aggregation operations using a MapReduce application. Let&apos;s get started!&lt;/p&gt;
&lt;h3&gt;Prerequisite: Create a Development Environment for HPE Ezmeral Data Fabric&lt;/h3&gt;
&lt;p&gt;Before you get started, you&apos;ll need to set up a Development Environment for HPE Ezmeral Data Fabric. To get some hints on how to do this, refer to the 👉 &lt;a href=&quot;https://developer.hpe.com/blog/getting-started-with-spark-on-mapr-sandbox/&quot;&gt;Getting Started with Spark on MapR Sandbox&lt;/a&gt; blog post. I recommend that you read the latest official documentation, 👉 &lt;a href=&quot;https://docs.datafabric.hpe.com/70/MapRContainerDevelopers/MapRContainerDevelopersOverview.html&quot;&gt;Development Environment for HPE Ezmeral Data Fabric&lt;/a&gt;, before setting up your environment.&lt;/p&gt;
&lt;p&gt;All you really need to do is follow the instructions in the documentation. Be aware that the documentation instructs you to install Docker Desktop on a Mac, but Docker Desktop is not strictly required: installing Docker Engine on a standard Linux distribution works just as well.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;It&apos;s worth noting that installing Docker Desktop on Windows won&apos;t work.&lt;/strong&gt; I tried to get it working as follows: I first installed WSL 2 (Windows Subsystem for Linux 2), then installed Docker Desktop for Windows and integrated it with WSL 2, and then ran the HPE Ezmeral Data Fabric Development Environment install script, but it still failed.&lt;/p&gt;
&lt;p&gt;So I ended up installing VMware on my Windows PC, creating a CentOS 8 VM, and running the HPE Ezmeral Data Fabric Development Environment setup script inside the VM. This approach proved feasible. You can also choose which version of the development environment you want to deploy; you just need to change the tag of the Docker image.&lt;/p&gt;
&lt;p&gt;After running the setup script, you&apos;ll have an HPE Ezmeral Data Fabric cluster up and running. But in order to run a MapReduce application, you first need to install the YARN framework. Starting with HPE Ezmeral Data Fabric 6.2.0, the YARN framework was decoupled from the core platform and ships as an ecosystem component of the HPE Ezmeral Ecosystem Packs.
So if you are running a Development Environment at version 6.2.0 or later, you need to complete the steps found in &lt;a href=&quot;https://docs.datafabric.hpe.com/70/AdvancedInstallation/InstallingHadoop.html&quot;&gt;Installing Hadoop and YARN&lt;/a&gt;.&lt;/p&gt;
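&lt;p&gt;As a rough illustration of that procedure, on an RPM-based node the YARN services might be added along the following lines. The package names are assumptions taken from the linked documentation, so verify them against the Ecosystem Pack release you are actually using:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Illustrative only: install the YARN ResourceManager, NodeManager and history server packages
# (package names assumed from the EEP documentation; use your distribution&apos;s package manager).
yum install -y mapr-resourcemanager mapr-nodemanager mapr-historyserver

# Re-run configure.sh so the newly installed services are registered with the cluster.
/opt/mapr/server/configure.sh -R
&lt;/code&gt;&lt;/pre&gt;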
&lt;h2&gt;Using MapReduce on an HPE Ezmeral Data Fabric database binary table&lt;/h2&gt;
&lt;p&gt;HPE Ezmeral Data Fabric database binary table is equivalent to the HPE Ezmeral Data Fabric version of Apache HBase, but its technical implementation is different from HBase. This is, of course, because the bottom layer of HPE Ezmeral Data Fabric database is the HPE Ezmeral Data Fabric File Store. For users, there is almost no difference between using HPE Ezmeral Data Fabric database binary table and using HBase.&lt;/p&gt;
&lt;p&gt;Now, let&apos;s imagine that we want to build a user notifications service. Since HBase does not support operations that span rows or tables, we will use MapReduce to perform the kind of data analysis that &lt;strong&gt;joins&lt;/strong&gt; and &lt;strong&gt;group by&lt;/strong&gt; would provide in an RDBMS.&lt;/p&gt;
&lt;h3&gt;Create a binary table&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: &lt;em&gt;The commands in this article are all executed as the user &quot;mapr&quot;, which is by default the admin user of the Development Environment for HPE Ezmeral Data Fabric. You can also use the &quot;root&quot; user to create the table and run the application, but if you don&apos;t modify the ACEs of the table, the &quot;mapr&quot; user would not be able to see the data in the table.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;We are going to use the &lt;a href=&quot;https://docs.datafabric.hpe.com/70/ReferenceGuide/HBaseShellforMapR-DB.html&quot;&gt;hbase shell&lt;/a&gt; to create a &lt;a href=&quot;https://docs.datafabric.hpe.com/70/MapR-DB/intro-binary-tables.html&quot;&gt;binary table&lt;/a&gt; inside the HPE Ezmeral Data Fabric database.
To be able to use the &lt;code&gt;hbase shell&lt;/code&gt;, you will first need to install the &lt;strong&gt;&lt;a href=&quot;https://docs.datafabric.hpe.com/70/AdvancedInstallation/InstallingHBase-client-node.html?hl=mapr-hbase&quot;&gt;mapr-hbase&lt;/a&gt;&lt;/strong&gt; package.&lt;/p&gt;
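&lt;p&gt;As a hint, installing the client package and opening the shell can look roughly like this on an RPM-based node; the package name comes from the link above, and the exact steps may differ on your system:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Illustrative only: install the HBase client package on the node where you will run the shell.
yum install -y mapr-hbase

# Re-run configure.sh so the client picks up the cluster configuration.
/opt/mapr/server/configure.sh -R

# Open the HBase shell customized for the HPE Ezmeral Data Fabric database.
sudo -u mapr hbase shell
&lt;/code&gt;&lt;/pre&gt;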
&lt;p&gt;For data management convenience, I&apos;ll show you how to create a volume for the binary table. The commands are as follows:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# List existing volumes and their key properties (run as the cluster admin user &quot;mapr&quot;).
sudo -u mapr maprcli volume list -output terse -columns volumename,volumetype,actualreplication,localpath,mounted,mountdir,logicalUsed,used,nameContainerDataThresholdMB,nameContainerSizeMB,needsGfsck

# Create a volume for the binary table; passing -path also mounts it at that location.
sudo -u mapr maprcli volume create -name test.binarytable1 -path /testbinarytable1volume

# If the volume had been created without -path, it could be mounted explicitly instead:
# sudo -u mapr maprcli volume mount -name test.binarytable1 -path /testbinarytable1volume

# Confirm that the volume is visible in the HPE Ezmeral Data Fabric file system.
sudo -u mapr hadoop fs -ls -d -h /testbinarytable1volume
sudo -u mapr hadoop mfs -ls /testbinarytable1volume
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;☝ The volume&apos;s name is &lt;em&gt;test.binarytable1&lt;/em&gt; and it will be mounted as &lt;ins&gt;/testbinarytable1volume/&lt;/ins&gt; in the HPE Ezmeral Data Fabric file system.&lt;/p&gt;
&lt;p&gt;On any node where the mapr-hbase package is installed, run the &lt;code&gt;hbase shell&lt;/code&gt; command to open the HBase shell interface.&lt;/p&gt;
&lt;p&gt;👇 Execute the following command on the HBase shell interface:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;create &apos;/testbinarytable1volume/notifications&apos;,&apos;attributes&apos;,&apos;metrics&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In this example, the table name is &lt;ins&gt;/testbinarytable1volume/notifications&lt;/ins&gt; and the &quot;attributes&quot; and &quot;metrics&quot; are column families.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: In the HPE Ezmeral Data Fabric database, a binary table name is by default a path in the file system.
You can instead map table names to an Apache HBase-style namespace. Refer to 👉 &lt;a href=&quot;https://docs.datafabric.hpe.com/70/UpgradeGuide/.MappingTableNamespace-HBase-DBbinary_2.html&quot;&gt;Mapping to HBase Table Namespaces&lt;/a&gt; for details.&lt;/p&gt;
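&lt;p&gt;As an aside, the same table can also be created without the HBase shell by using &lt;code&gt;maprcli&lt;/code&gt;, the command line tool specific to HPE Ezmeral Data Fabric (mentioned again in the summary at the end of this post). The sketch below shows what that might look like; check the &lt;code&gt;maprcli table&lt;/code&gt; reference for your release before relying on the exact options:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Illustrative only: create the binary table and its two column families with maprcli.
sudo -u mapr maprcli table create -path /testbinarytable1volume/notifications
sudo -u mapr maprcli table cf create -path /testbinarytable1volume/notifications -cfname attributes
sudo -u mapr maprcli table cf create -path /testbinarytable1volume/notifications -cfname metrics
&lt;/code&gt;&lt;/pre&gt;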
&lt;h3&gt;Build and run the MapReduce application&lt;/h3&gt;
&lt;h4&gt;MapReduce application source code&lt;/h4&gt;
&lt;p&gt;For your convenience, I have put the code on Github: &lt;a href=&quot;https://github.com/aruruka/Example-MapReduce-With-EzmeralDataFabricMapR-DatabaseBinaryTable&quot;&gt;Example-MapReduce-With-EzmeralDataFabricMapR-DatabaseBinaryTable&lt;/a&gt;.
You can download it and compile it using Visual Studio Code. This MapReduce application is simple. It follows this logic:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Get data from the source table: &lt;ins&gt;/testbinarytable1volume/notifications&lt;/ins&gt;.&lt;/li&gt;
&lt;li&gt;Output aggregated data to the target table: &lt;ins&gt;/testbinarytable1volume/summary&lt;/ins&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;This is basically a variation of &quot;Word Count&quot;. The MapReduce application simply counts the rows that contain a column called &quot;type&quot; in the &quot;attributes&quot; column family, grouped by the value of that column.
For example, the table may contain rows of comment type, promotion type, friend-request type, and so on.
The application then counts how many rows of each type there are.&lt;/p&gt;
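&lt;p&gt;Before running the job, the source table needs some rows to aggregate. The console snippet below is only an illustration of how a few sample rows could be inserted; the row keys and values are made up for this example, and you can load your own data instead.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Illustrative sample data: four rows with a &quot;type&quot; column in the &quot;attributes&quot; column family.
# Run as the &quot;mapr&quot; user, like the other commands in this article.
sudo -u mapr hbase shell &amp;#x3C;&amp;#x3C;&apos;EOF&apos;
put &apos;/testbinarytable1volume/notifications&apos;, &apos;user1-n0001&apos;, &apos;attributes:type&apos;, &apos;comment&apos;
put &apos;/testbinarytable1volume/notifications&apos;, &apos;user1-n0002&apos;, &apos;attributes:type&apos;, &apos;comment&apos;
put &apos;/testbinarytable1volume/notifications&apos;, &apos;user2-n0001&apos;, &apos;attributes:type&apos;, &apos;comment&apos;
put &apos;/testbinarytable1volume/notifications&apos;, &apos;user3-n0001&apos;, &apos;attributes:type&apos;, &apos;promotion&apos;
EOF
&lt;/code&gt;&lt;/pre&gt;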
&lt;h4&gt;How to run it on HPE Ezmeral Data Fabric&lt;/h4&gt;
&lt;p&gt;When ready, you can run the MapReduce application on your one-node cluster of the Development Environment for HPE Ezmeral Data Fabric.
Refer to the following commands:&lt;/p&gt;
&lt;p&gt;First, use this command to create a new table to store the counts:&lt;/p&gt;
&lt;p&gt;👇 Execute the following command on the HBase shell interface:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;create &apos;/testbinarytable1volume/summary&apos;,&apos;metrics&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then, run the MapReduce application via &lt;code&gt;yarn jar&lt;/code&gt; command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo -u mapr \
yarn jar ./target/original-hbase-example-1.0-SNAPSHOT.jar com.shouneng.learn.mapReduce.Main \
  -libjar ./hbase-server-1.4.13.200-eep-810.jar,hbase-client-1.4.13.200-eep-810.jar,hadoop-common-2.7.6.200-eep-810.jar,hbase-common-1.4.13.203-eep-810.jar
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You should see the following output:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;yarn jar ./target/original-hbase-example-1.0-SNAPSHOT.jar com.shouneng.learn.mapReduce.Main
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.hadoop.hbase.util.UnsafeAvailChecker (file:/opt/mapr/hadoop/hadoop-2.7.6/share/hadoop/common/hbase-common-1.4.13.200-eep-810.jar) to method java.nio.Bits.unaligned()
WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.hbase.util.UnsafeAvailChecker
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
22/09/18 13:09:58 WARN mapreduce.TableMapReduceUtil: The addDependencyJars(Configuration, Class&amp;#x3C;?&gt;...) method has been deprecated since it is easy to use incorrectly. Most users should rely on addDependencyJars(Job) instead. See HBASE-8386 for more details.
22/09/18 13:09:58 WARN mapreduce.TableMapReduceUtil: The hbase-prefix-tree module jar containing PrefixTreeCodec is not present.  Continuing without it.
22/09/18 13:09:58 INFO mapreduce.TableMapReduceUtil: Configured mapr.hbase.default.db maprdb
22/09/18 13:09:59 WARN mapreduce.TableMapReduceUtil: The hbase-prefix-tree module jar containing PrefixTreeCodec is not present.  Continuing without it.
22/09/18 13:09:59 WARN mapreduce.TableMapReduceUtil: The hbase-prefix-tree module jar containing PrefixTreeCodec is not present.  Continuing without it.
22/09/18 13:09:59 INFO client.MapRZKBasedRMFailoverProxyProvider: Updated RM address to m2-maprts-vm99-173.mip.storage.hpecorp.net/10.163.173.99:8032
22/09/18 13:09:59 INFO client.ConnectionFactory: mapr.hbase.default.db unsetDB is neither MapRDB or HBase, set HBASE_MAPR mode since mapr client is installed.
22/09/18 13:09:59 INFO client.ConnectionFactory: ConnectionFactory receives mapr.hbase.default.db(unsetDB), set clusterType(HBASE_MAPR), user(mapr), hbase_admin_connect_at_construction(false)
22/09/18 13:09:59 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
22/09/18 13:10:00 INFO client.ConnectionFactory: mapr.hbase.default.db unsetDB is neither MapRDB or HBase, set HBASE_MAPR mode since mapr client is installed.
22/09/18 13:10:00 INFO client.ConnectionFactory: ConnectionFactory receives mapr.hbase.default.db(unsetDB), set clusterType(HBASE_MAPR), user(mapr), hbase_admin_connect_at_construction(false)
22/09/18 13:10:00 INFO util.RegionSizeCalculator: Region size calculation disabled for MapR tables /testbinarytable1volume/notifications
22/09/18 13:10:00 INFO mapreduce.JobSubmitter: number of splits:1
22/09/18 13:10:00 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1657816483829_0017
22/09/18 13:10:00 INFO mapreduce.JobSubmitter: Executing with tokens: []
22/09/18 13:10:01 INFO security.ExternalTokenManagerFactory: Initialized external token manager class - org.apache.hadoop.yarn.security.MapRTicketManager
22/09/18 13:10:01 INFO impl.YarnClientImpl: Submitted application application_1657816483829_0017
22/09/18 13:10:01 INFO mapreduce.Job: The url to track the job: http://m2-maprts-vm99-173.mip.storage.hpecorp.net:8088/proxy/application_1657816483829_0017/
22/09/18 13:10:01 INFO mapreduce.Job: Running job: job_1657816483829_0017
22/09/18 13:10:09 INFO mapreduce.Job: Job job_1657816483829_0017 running in uber mode : false
22/09/18 13:10:09 INFO mapreduce.Job:  map 0% reduce 0%
22/09/18 13:10:17 INFO mapreduce.Job:  map 100% reduce 0%
22/09/18 13:10:22 INFO mapreduce.Job:  map 100% reduce 100%
22/09/18 13:10:22 INFO mapreduce.Job: Job job_1657816483829_0017 completed successfully
22/09/18 13:10:22 INFO mapreduce.Job: Counters: 59
        File System Counters
                FILE: Number of bytes read=0
                FILE: Number of bytes written=282992
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                MAPRFS: Number of bytes read=347
                MAPRFS: Number of bytes written=170
                MAPRFS: Number of read operations=38
                MAPRFS: Number of large read operations=0
                MAPRFS: Number of write operations=42
        Job Counters
                Launched map tasks=1
                Launched reduce tasks=1
                Rack-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=5632
                Total time spent by all reduces in occupied slots (ms)=8865
                Total time spent by all map tasks (ms)=5632
                Total time spent by all reduce tasks (ms)=2955
                Total vcore-seconds taken by all map tasks=5632
                Total vcore-seconds taken by all reduce tasks=2955
                Total megabyte-seconds taken by all map tasks=5767168
                Total megabyte-seconds taken by all reduce tasks=9077760
                DISK_MILLIS_MAPS=2816
                DISK_MILLIS_REDUCES=3930
        Map-Reduce Framework
                Map input records=4
                Map output records=4
                Map output bytes=55
                Map output materialized bytes=0
                Input split bytes=181
                Combine input records=0
                Combine output records=0
                Reduce input groups=2
                Reduce shuffle bytes=65
                Reduce input records=4
                Reduce output records=2
                Spilled Records=8
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=2
                GC time elapsed (ms)=209
                CPU time spent (ms)=4600
                Physical memory (bytes) snapshot=1240059904
                Virtual memory (bytes) snapshot=9576202240
                Total committed heap usage (bytes)=1421869056
        HBase Counters
                BYTES_IN_REMOTE_RESULTS=0
                BYTES_IN_RESULTS=0
                MILLIS_BETWEEN_NEXTS=0
                NOT_SERVING_REGION_EXCEPTION=0
                NUM_SCANNER_RESTARTS=0
                NUM_SCAN_RESULTS_STALE=0
                REGIONS_SCANNED=0
                REMOTE_RPC_CALLS=0
                REMOTE_RPC_RETRIES=0
                ROWS_FILTERED=0
                ROWS_SCANNED=0
                RPC_CALLS=0
                RPC_RETRIES=0
        Shuffle Errors
                IO_ERROR=0
        File Input Format Counters
                Bytes Read=0
        File Output Format Counters
                Bytes Written=0
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: If you encounter an error indicating that some of the HBase-related Java packages are missing, you can simply copy the following JAR files from 📁&lt;ins&gt;/opt/mapr/hbase/hbase-{VERSION}/lib/&lt;/ins&gt; to 📁&lt;ins&gt;/opt/mapr/hadoop/hadoop-{VERSION}/share/hadoop/common/&lt;/ins&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;hbase-client-*.jar&lt;/li&gt;
&lt;li&gt;hbase-server-*.jar&lt;/li&gt;
&lt;li&gt;hbase-protocol-*.jar&lt;/li&gt;
&lt;li&gt;hbase-hadoop2-compat-*.jar&lt;/li&gt;
&lt;li&gt;hbase-hadoop-compat-*.jar&lt;/li&gt;
&lt;li&gt;hbase-metrics-*.jar&lt;/li&gt;
&lt;li&gt;hbase-metrics-api-*.jar&lt;/li&gt;
&lt;li&gt;hbase-shaded-gson-*.jar&lt;/li&gt;
&lt;li&gt;hbase-shaded-htrace-*.jar&lt;/li&gt;
&lt;li&gt;metrics-core-*.jar&lt;/li&gt;
&lt;li&gt;hbase-common-*.jar&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You may run into this issue because the HBase-related packages are treated as third-party libraries by the Hadoop system.
For more information, refer to 👉 &lt;a href=&quot;https://docs.datafabric.hpe.com/70/DevelopmentGuide/Manage3rdPartyLibsForMapReduce.html&quot;&gt;Install the third-party libraries on each node that runs the program&lt;/a&gt;.&lt;/p&gt;
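&lt;p&gt;Once the job finishes successfully, you can check the aggregated counts that were written to the target table. A minimal way to do this from the console, again as the &quot;mapr&quot; user, is sketched below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Scan the target table to inspect the per-type counts produced by the job.
sudo -u mapr hbase shell &amp;#x3C;&amp;#x3C;&apos;EOF&apos;
scan &apos;/testbinarytable1volume/summary&apos;
EOF
&lt;/code&gt;&lt;/pre&gt;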
&lt;h2&gt;Glossary&lt;/h2&gt;
&lt;details&gt;
&lt;summary&gt;HPE Ezmeral Data Fabric (aka. MapR)&lt;/summary&gt;
&lt;p&gt;HPE Ezmeral Data Fabric is a platform for data-driven analytics, ML, and AI workloads.
The platform serves as a secure data store and provides file storage, NoSQL databases, object storage, and event streams.
The patented filesystem architecture was designed and built for performance, reliability, and scalability.
📖&lt;a href=&quot;https://docs.datafabric.hpe.com/70/index.html&quot;&gt;Documentation website&lt;/a&gt;&lt;/p&gt;
&lt;/details&gt;
&lt;details&gt;
&lt;summary&gt;HPE Ezmeral Ecosystem Packs&lt;/summary&gt;
&lt;p&gt;A software collection that bundles compute scheduling frameworks and engines from the Hadoop ecosystem, such as YARN, Spark, Drill, and Hive, together with a service suite for monitoring HPE Ezmeral Data Fabric performance metrics and logs.
Users of HPE Ezmeral Data Fabric use these customized versions of Spark, Drill, Hive, and other software to run their computation, analysis, and machine learning tasks.&lt;/p&gt;
&lt;/details&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;In this blog post, I introduced the advantages of HPE Ezmeral Data Fabric database compared to similar products. I also demonstrated a MapReduce application based on HPE Ezmeral Data Fabric database binary table.
The HPE Ezmeral Data Fabric database binary table can be treated as a drop-in, and in many respects better, alternative to Apache HBase.&lt;/p&gt;
&lt;p&gt;I hope this blog post can help you quickly develop applications based on HPE Ezmeral Data Fabric database binary table.
Please note that developing an application based on HPE Ezmeral Data Fabric database binary table is essentially the same as developing an application based on Apache HBase, but some of the specific steps are different.
For example, when creating a binary table, in addition to the &lt;code&gt;hbase shell&lt;/code&gt; method demonstrated in this article, you can also use &lt;code&gt;maprcli&lt;/code&gt;, a command line tool specific to HPE Ezmeral Data Fabric.
Moreover, the experience of developing a MapReduce application based on HPE Ezmeral Data Fabric database binary table is also applicable to other computing engines, such as Spark.&lt;/p&gt;
&lt;p&gt;Stay tuned to the &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE Developer blog&lt;/a&gt; for more interesting posts and tutorials on HPE Ezmeral Data Fabric.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[How citizen developers can build award-winning enterprise apps with low-code development platforms]]></title><description><![CDATA[I was not hired to be a developer. My background is in mechanical engineering and patent law and I joined Hewlett Packard Enterprise to help…]]></description><link>https://developer.hpe.com/how-citizen-developers-can-build-award-winning-enterprise-apps-with-low-code-development-platforms/</link><guid isPermaLink="false">https://developer.hpe.com/how-citizen-developers-can-build-award-winning-enterprise-apps-with-low-code-development-platforms/</guid><pubDate>Fri, 11 Nov 2022 13:58:12 GMT</pubDate><content:encoded>&lt;p&gt;I was not hired to be a developer. My background is in mechanical engineering and patent law and I joined Hewlett Packard Enterprise to help manage its large patent portfolio.  Sure, I took a couple of classes on coding in college, but who doesn’t these days? With all the focus around STEM in schools, a new generation of talent is learning how to build web and mobile apps, just as previous generations learned how to construct imaginative structures with Lego.&lt;/p&gt;
&lt;p&gt;It has never been easier for individuals or small teams to build apps through the use of low-code platforms like Microsoft Power Apps and Power Automate. I was first introduced to these platforms a few years ago, and with the power they provided me, I am now proud to call myself a “Citizen Developer.” These tools enabled me to lead a small, self-organized cross-department team to create Idea Matchmaker, a mobile app that allows team members to easily connect and self-organize around interesting projects—a sort of “dating app” for ideas.&lt;/p&gt;
&lt;h3&gt;A match-making app for ideas&lt;/h3&gt;
&lt;p&gt;As a world-wide technology company, Hewlett Packard Enterprise (HPE) houses a lot of talent. Ideas can crop up at any time from anywhere. But ideas need more than simple recognition to come to fruition – they need resources with the appropriate skill sets to make them a reality. As you can imagine, in such a large company, it’s not always easy to build the right team to carry out your bold ideas.&lt;/p&gt;
&lt;p&gt;A few years ago, I was part of a team within HPE’s patent department tasked with figuring out an easy way to gather feedback from the company’s vast technical community. This would allow the department to better protect its key inventions with patents and give team members a stronger voice in the company’s innovation strategy. We built a custom message board for each invention and were happy to discover that, in addition to commenting on the merit of the invention itself, team members would often suggest improvements and offer to collaborate. It was then that we realized how powerful a crowdsourcing platform designed to connect people could be.&lt;/p&gt;
&lt;p&gt;This inspired us to create Idea Matchmaker, an app that allows HPE team members to easily post and find small projects from other team members that they would like to work on. It’s based on the simple design and user experience for dating and friend-finding apps like Bumble and Tinder, but adapted for HPE project matchmaking. The app works on both mobile and desktop platforms and includes a recommendation engine that leverages a bit of artificial intelligence to assist with matchmaking. With Idea Matchmaker, HPE team members can easily pass or connect on projects based on their interest level. The app began as a submission to an HPE company-wide competition, called Innovation Quest, and won first place in the User Experience category. Winning the Innovation Quest garnered the app a lot of attention and it’s now being deployed company wide.&lt;/p&gt;
&lt;h3&gt;How we did it: the power of Power Apps&lt;/h3&gt;
&lt;p&gt;Building an application outside of the traditional software development streams within a company can be somewhat intimidating. There are a lot of challenges, like setting up cloud-based automation, dealing with security issues, implementing access controls, and navigating data compliance issues. Any “enterprise” app worth building will likely include sensitive or confidential data, which will limit how you can store and access your data outside of your company’s network. It’s not like you can just create a GoDaddy website and put your company’s sensitive information on it. Fortunately, with Microsoft Power Apps, everything can live within your existing Office 365 environment, which will allow you to easily build a secure and compliant end-to-end solution.&lt;/p&gt;
&lt;p&gt;Microsoft Power Apps authenticates using your existing Office 365 credentials, so you don’t need to worry about setting up your own Single Sign-On (SSO) workflow. You can also easily connect to data stored in your existing SharePoint sites, and can create surprisingly robust data tables using SharePoint lists. This allows you to rapidly iterate on your ideas without worrying about the time and effort required to secure and spend a budget on database infrastructure. If your data happens to be stored in other databases, like a SQL server or MySQL database, you can connect to that, too (with a premium connector license). This allows you to easily build proof-of-concept apps to validate solutions before investing the big bucks in scaling things up.&lt;/p&gt;
&lt;p&gt;The Microsoft Power Apps development environment is accessible via your browser, so you don’t need to install any desktop applications. And you won’t need to set up a Virtual Machine to virtualize your development environment. You can just open the site and start building your app, which I liken to a combination of creating PowerPoint slides, using formulas in Excel, with the bonus of being able to initiate events outside of the app from the app itself (e.g., click a button in the app to send an e-mail, create a document, or pull data from a database).&lt;/p&gt;
&lt;p&gt;Using low-code development platforms means you can focus on things like screen layouts and components of your app rather than dwelling on code syntax. For example, you can easily change the color of a button in your app by selecting from a list of colors on a side panel. You don’t have to navigate to the relevant section of the code and try to remember whether the color parameter is called “bgcolor” or “background-color”.&lt;/p&gt;
&lt;p&gt;Logic in a low-code app like Microsoft Power Apps feels similar to adding a formula to a cell in Excel, with simple if/then logic to define basic app logic (e.g., “If X, the button should be green, if not the button should be grey”). For adventurous low-code developers, you can also choose to make your entire app layout “responsive”, which will allow your app to automatically resize to fit the dimensions of a user’s browser window. Responsive design can add a great deal of complexity for citizen developers. If it’s not necessary for your specific use case, you can also use a fixed aspect ratio layout, which will allow your app to scale up or down similar to a PowerPoint slide deck. I often use a fixed aspect ratio to quickly build the first few versions of an app, and then switch to a responsive design once all the stakeholders approve of the overall design.&lt;/p&gt;
&lt;h3&gt;Enhance your skills and get things done faster&lt;/h3&gt;
&lt;p&gt;It’s important to remember that platforms like this aren’t just for the non-coder. They can help seasoned developers as well. Remember—skills are the new currency. Most developers do not have expertise in all aspects of bringing an app to life (e.g., authentication, front-end, back-end, databases, APIs, etc.). Low-code tools can allow a single developer to quickly put together a serviceable app rather than incurring the overhead of building a full team of developers.&lt;/p&gt;
&lt;p&gt;Another thing to keep in mind is how low-code/no-code platforms can save you time and money. If you’re not a developer, you might need to create a statement of work for a third-party vendor to create an app for you. This can be incredibly expensive—on the order of hundreds of thousands of dollars for medium-complexity apps. If you can do it yourself, you don’t have to find the right vendor, set up all those meetings, follow up, etc. You can get things done much faster, even if it’s just automating some existing process you have.&lt;/p&gt;
&lt;p&gt;I really enjoyed creating Idea Matchmaker and I’m always looking for ways now to take advantage of low-code platforms. If you’re interested in being a Citizen Developer, you might want to check out this &lt;a href=&quot;https://hpe.zoom.us/webinar/register/4716663493942/WN_8jlRM9SaRKmbT3r1CDNtDw&quot;&gt;HPE Developer Munch &amp;#x26; Learn session&lt;/a&gt; where I and a couple of colleagues get more into what HPE is doing with platforms like this today.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Contributing as a gift]]></title><link>https://developer.hpe.com/2022-November-01/</link><guid isPermaLink="false">https://developer.hpe.com/2022-November-01/</guid><pubDate>Tue, 01 Nov 2022 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Galadriel - A SPIRE Federation Alternative]]></title><description><![CDATA[SPIRE (SPIFFE Runtime Environment) provides a workload identity management solution that allows organizations to establish attestable and…]]></description><link>https://developer.hpe.com/galadriel-a-spire-federation-alternative/</link><guid isPermaLink="false">https://developer.hpe.com/galadriel-a-spire-federation-alternative/</guid><pubDate>Mon, 31 Oct 2022 19:57:46 GMT</pubDate><content:encoded>&lt;p&gt;SPIRE (SPIFFE Runtime Environment) provides a workload identity management solution that allows organizations to establish attestable and verifiable mTLS (Mutual Transport Layer Security) connections among services. These capabilities support the concept of explicit trust beyond the perimeters of cloud providers or data centers. Its platform-agnostic and pluggable architecture gives organizations the flexibility to orchestrate services based on zero trust principles in highly heterogeneous environments. &lt;/p&gt;
&lt;p&gt;SPIRE provides functionality to federate with other entities following the SPIFFE(Secure Production Identity Framework for Everyone) federation specification. It allows workloads to authenticate to peer SPIRE systems on a separate node or environment that may be governed by the same organization or a separate organization. The SPIFFE community and SPIRE users have identified the need to have a scalable alternative to the current federation method. It is difficult to set up and maintain SPIRE federation in large, heterogeneous SPIRE deployments due to the numerous steps involved and issues with keeping the configuration up to date. Galadriel is an alternative approach to SPIRE federation that allows large SPIRE federation deployments to be sustainable, scalable, and secure. Galadriel offers SPIRE users a flexible, multi-tenant, and API driven solution that gets them a step closer to explicit trust.&lt;/p&gt;
&lt;h2&gt;SPIRE concepts&lt;/h2&gt;
&lt;p&gt;In SPIRE, a trust domain is a discrete zone in which workloads can mutually authenticate each other without the need to set up SPIRE federation. A trust domain typically corresponds to a distinct computing environment, administrative department, or similar division.&lt;/p&gt;
&lt;p&gt;SPIRE attests identities by using properties of the cloud provider or on-premises equipment.  In both cases, it maintains a set of key-pairs that represent the trust domain and its boundaries. While the private keys are kept secure, the public keys are published in files called “trust bundles,” which are necessary for validating the X.509 certificates or JWTs (JSON Web Token) that are used to establish secure communications. &lt;/p&gt;
&lt;h3&gt;How does the current SPIRE federation work?&lt;/h3&gt;
&lt;p&gt;Let’s assume that two SPIRE Servers (A and B) of two different trust domains will enter in a federated relationship. Thus, they will access each other&apos;s trust bundles to validate certificates for mTLS connections among workloads that cross trust domains. To configure the trust bundle exchange, the following must occur.&lt;/p&gt;
&lt;p&gt;First, SPIRE Server A exposes its trust bundle via an end-point API. This end-point is accessed by SPIFFE federation counterparts (SPIRE B) and configured in the SPIRE Server A’s configuration file. SPIRE Server B exposes its trust bundle in the same manner.&lt;/p&gt;
&lt;p&gt;Second, each SPIRE server must be configured to retrieve trust bundles from each other. SPIRE Server A “maps” SPIRE Server B’s federation end-point and defines a federated relationship with that server (this can be done via the federation API or via the server’s configuration file). SPIRE Server B sets the same configuration for A.&lt;/p&gt;
&lt;p&gt;Third, federation is bootstrapped in Servers A and B and both must be enabled to fetch trust bundles. They must authenticate the SPIFFE identity of each other to initiate the exchange.&lt;/p&gt;
&lt;p&gt;Fourth, registration entries are created in both servers defining which workloads are federated, so they can establish mTLS across trust domains. &lt;/p&gt;
&lt;h3&gt;Challenges with current SPIRE federation&lt;/h3&gt;
&lt;p&gt;Even though SPIRE allows the management of federation configurations dynamically via an API, there are several limitations to the current SPIRE approach that do not allow for large federation use cases.&lt;/p&gt;
&lt;p&gt;First, existing SPIRE federation options require a secure, public endpoint to serve the federation data, either through Web PKI (leveraging publicly trusted certificate authorities) or SPIFFE authentication. This entails a substantial administrative hassle and, in some cases, may be impossible, such as for on-premises use cases due to the security issue of exposing a public endpoint.&lt;/p&gt;
&lt;p&gt;Second, federating many-to-many relationships requires manual creation of federation entries. For example, federating five trust domains to each other requires 20 administrator actions (each domain needs four new configuration changes to federate with the four others). More administrator actions are required when trust points change URLs or relationships change.&lt;/p&gt;
&lt;p&gt;A third limitation relates to the lifecycle and auditing of federated relationships. SPIRE does not provide a mechanism to manage and track the lifecycle of federated relationships. Current techniques rely solely on manual configuration changes, and do not allow for observability of trust bundle exchange.  &lt;/p&gt;
&lt;h2&gt;Enter Galadriel&lt;/h2&gt;
&lt;p&gt;Galadriel is a new open source effort initiated by HPE that extends the existing federation authorization techniques from SPIRE by centralizing the management and exchange of trust bundles. It collects trust bundles from SPIRE Servers, routes them, and presents them to other SPIRE Servers and cloud resources. Galadriel aspires to provide an all-in-one solution for managing and auditing external relationships for SPIRE implementations.&lt;/p&gt;
&lt;p&gt;Galadriel introduces two main components: a server and harvester. These components facilitate the exchange of SPIRE-generated trust bundles among multiple SPIRE Servers. The exchange is enforced via relationship rules established at the Galadriel server level, and the harvesters transfer bundles to and from SPIRE Servers via a central hub. All relationships are mutually consented to and trust bundles are validated before being consumed. The harvester takes a “proxy” role and serves as a middle layer for the configuration of the federation in SPIRE and the trust bundle exchange.&lt;/p&gt;
&lt;p&gt;This new federation approach removes the need to have a public end-point configured in the SPIRE Server, streamlines the steps to establish federation, and creates the visibility and maintainability needed to manage federation at scale. &lt;/p&gt;
&lt;p&gt;An initial release of Galadriel is planned for the first half of 2023. For more detailed design information about Galadriel, check out the design proposal &lt;a href=&quot;https://docs.google.com/document/d/1nkiJV4PAV8Wx1oNvx4CT3IDtDRvUFSL8/edit?usp=sharing&amp;#x26;ouid=104807789400318304424&amp;#x26;rtpof=true&amp;#x26;sd=true&quot;&gt;RFC - SPIFFE/SPIRE Federation&lt;/a&gt;  presented to the SPIRE community.&lt;/p&gt;
&lt;p&gt;To know more about HPE&apos;s contributions to SPIFFE and SPIRE, visit the &lt;a href=&quot;https://developer.hpe.com/platform/spiffe-and-spire-projects/home&quot;&gt;HPE - SPIFFE/SPIRE&lt;/a&gt; website.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[How to use an API access token for HPE GreenLake for Compute Ops Management]]></title><description><![CDATA[Common identity frameworks and protocols use token-based strategies to secure access to applications and resources. OAuth 2.0 is one of the…]]></description><link>https://developer.hpe.com/how-to-use-an-api-access-token-for-hpe-greenlake-for-compute-ops-management/</link><guid isPermaLink="false">https://developer.hpe.com/how-to-use-an-api-access-token-for-hpe-greenlake-for-compute-ops-management/</guid><pubDate>Thu, 27 Oct 2022 17:16:38 GMT</pubDate><content:encoded>&lt;p&gt;Common identity frameworks and protocols use token-based strategies to secure access to applications and resources. OAuth 2.0 is one of the most popular, using access tokens and refresh tokens to allow an application to access resources hosted by other servers on behalf of a user. The Compute Ops Management REST API uses the OAuth 2.0 HPE GreenLake authentication flow, where a limited lifetime access token is provided in the header of each REST API request as the authorization bearer. The access token is associated with a subject (person or service) and retains all the same permissions and privileges as the subject. In this blog post, I will discuss the essential steps required to generate this access token.&lt;/p&gt;
&lt;h2&gt;HPE GreenLake steps to obtain the access token for Compute Ops Management&lt;/h2&gt;
&lt;p&gt;Start the process by logging in to &lt;a href=&quot;https://console.greenlake.hpe.com/&quot;&gt;HPE GreenLake&lt;/a&gt;, where you are authenticated by the Identity Provider (via username and password, Single Sign-On, or Multi-Factor Authentication).
The prerequisites are that Compute Ops Management is provisioned and added to your account, and that you are assigned a role that allows you to perform the intended operation.&lt;/p&gt;
&lt;p&gt;To get started, you need to create the API client credentials for the specific Compute Ops Management application instance, which is used to generate the access token. Once the token is generated, you can make further API calls.&lt;/p&gt;
&lt;h3&gt;Configuring API Client Credentials&lt;/h3&gt;
&lt;p&gt;To configure your API Client Credentials, perform the following steps:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Click on the &lt;strong&gt;Manage&lt;/strong&gt; link in the HPE GreenLake header&lt;/li&gt;
&lt;li&gt;Select the &lt;strong&gt;API&lt;/strong&gt; tile&lt;/li&gt;
&lt;li&gt;Click the &lt;strong&gt;Create Credentials&lt;/strong&gt; link. The &lt;strong&gt;Create Credentials&lt;/strong&gt; screen displays:&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/glcp_manage.png&quot; alt=&quot;GreenLake manage link&quot; title=&quot;GreenLake manage link&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/glcp_api.png&quot; alt=&quot;GreenLake API link&quot; title=&quot;GreenLake API link&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/glcp_create_cred.png&quot; alt=&quot;GreenLake Create Credential Button&quot; title=&quot;GreenLake Create Credential Button&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;Select the &lt;strong&gt;Application&lt;/strong&gt; you want to access.&lt;/li&gt;
&lt;li&gt;Provide a Credential Name.&lt;/li&gt;
&lt;li&gt;Click the &lt;strong&gt;Create Credentials&lt;/strong&gt; button to continue.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/glcp_create_cred_dialog.png&quot; alt=&quot;GreenLake Create Credential Dialog&quot; title=&quot;GreenLake Create Credential Dialog&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;7&quot;&gt;
&lt;li&gt;The &lt;strong&gt;Credentials Created&lt;/strong&gt; screen displays your credentials.&lt;/li&gt;
&lt;li&gt;Next, you must copy the &lt;strong&gt;Client Secret&lt;/strong&gt; to a safe and secure location. HPE GreenLake does not store your &lt;strong&gt;Client Secret&lt;/strong&gt;. Select the &lt;strong&gt;copy icon&lt;/strong&gt; to save your information.&lt;/li&gt;
&lt;li&gt;Click the &lt;strong&gt;Close&lt;/strong&gt; button to continue.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/glcp_create_cred_copy.png&quot; alt=&quot;GreenLake Copy Credential&quot; title=&quot;GreenLake Copy Credential&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Generating an access token&lt;/h2&gt;
&lt;p&gt;Once you have created credentials, you can view their details on the API page. Access tokens generated this way have a limited lifespan and expire after 120 minutes, so you will need to return to this page to generate a new token after it has expired.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Click the arrow next to the credential name to display the credential details. It allows you to &lt;strong&gt;Generate Access Token&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Generate Access Token&lt;/strong&gt; to continue. The Generated Access Token screen displays.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/glcp_generate_token.png&quot; alt=&quot;Generate GreenLake  Access Token&quot; title=&quot;Generate GreenLake Access Token&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Enter your &lt;strong&gt;Client Secret&lt;/strong&gt; and click the &lt;strong&gt;Create Access Token&lt;/strong&gt; button.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Access Token Created&lt;/strong&gt; screen displays your &lt;strong&gt;Access Token&lt;/strong&gt;.
&lt;strong&gt;Note:&lt;/strong&gt; Since access tokens are not stored, HPE GreenLake recommends you make a copy of your access token and keep it in a safe location.&lt;/li&gt;
&lt;li&gt;Click the &lt;strong&gt;Close&lt;/strong&gt; button when you are finished.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This access token is referenced later as “&amp;#x3C;copy_access_token_here&gt;” when demonstrating an API call.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/glcp_create_access_token.png&quot; alt=&quot;Create GreenLake Access Token&quot; title=&quot;Create GreenLake Access Token&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/glcp_copy_token.png&quot; alt=&quot;Copy GreenLake Access Token&quot; title=&quot;Copy GreenLake Access Token&quot;&gt;&lt;/p&gt;
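&lt;p&gt;If you prefer to script token generation rather than use the UI, the same client ID and client secret can be exchanged for an access token with an OAuth 2.0 client credentials request. The token endpoint URL below is an assumption based on other HPE GreenLake API material, so confirm it in the HPE GreenLake documentation for your workspace before relying on it:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Illustrative only: exchange the client ID and secret for an access token
# (endpoint URL assumed; verify it in the HPE GreenLake API documentation).
curl -s -X POST https://sso.common.cloud.hpe.com/as/token.oauth2 \
  -H &quot;Content-Type: application/x-www-form-urlencoded&quot; \
  -d &quot;grant_type=client_credentials&amp;#x26;client_id=&amp;#x3C;your_client_id&gt;&amp;#x26;client_secret=&amp;#x3C;your_client_secret&gt;&quot;
&lt;/code&gt;&lt;/pre&gt;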
&lt;h3&gt;Resetting your client secret&lt;/h3&gt;
&lt;p&gt;There may be a time when you want to reset your client secret for security purposes or if you did not copy the client secret. You can recreate it by using the Reset option. Resetting the client secret will invalidate all tokens associated with the Client ID and secret.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Click the ellipsis next to the &lt;strong&gt;Generate Access Token&lt;/strong&gt; button to reset your client secret.&lt;/li&gt;
&lt;li&gt;Click the &lt;strong&gt;Reset Client Secret&lt;/strong&gt; link. The client secret is recreated with a new value.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/glcp_reset_token.png&quot; alt=&quot;Reset GreenLake Access Token&quot; title=&quot;Reset GreenLake Access Token&quot;&gt;&lt;/p&gt;
&lt;h1&gt;How to use the access token&lt;/h1&gt;
&lt;p&gt;You can embed the access token in the REST API request to perform the HTTP method against the desired Compute Ops Management resource to obtain the response. Note that you must use the correct connectivity endpoint according to the region where Compute Ops Management is deployed. Currently, these are the connectivity endpoints for the possible regions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;EU Central&lt;/strong&gt; - &lt;a href=&quot;https://eu-central1-api.compute.cloud.hpe.com&quot;&gt;https://eu-central1-api.compute.cloud.hpe.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;AP Northeast&lt;/strong&gt; - &lt;a href=&quot;https://ap-northeast1-api.compute.cloud.hpe.com&quot;&gt;https://ap-northeast1-api.compute.cloud.hpe.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;US West&lt;/strong&gt; - &lt;a href=&quot;https://us-west2-api.compute.cloud.hpe.com&quot;&gt;https://us-west2-api.compute.cloud.hpe.com&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/glcp_endpoint.png&quot; alt=&quot;GreenLake API Endpoint&quot; title=&quot;GreenLake API Endpoint&quot;&gt;&lt;/p&gt;
&lt;p&gt;The access token must be added to the &quot;Authorization: Bearer &amp;#x3C;access_token&gt;&quot; header of every REST API request. The name “Bearer authentication” can be understood as “giving access to the bearer of this token.” The following example uses the GET method on the servers resource to obtain a list of available servers.&lt;/p&gt;
&lt;h2&gt;How to use the access token - cURL method&lt;/h2&gt;
&lt;p&gt;Next is a curl command (run as a console command) that uses the generated access token to make a call against the API endpoint and list the servers in my account:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;curl -X GET https://us-west2-api.compute.cloud.hpe.com/compute-ops/v1beta2/servers \
  -H &quot;Accept: application/json&quot; \
  -H &quot;Authorization: Bearer &amp;#x3C;copy_access_token_here&gt;&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Response - List of compute servers onboarded and assigned to the corresponding application for your customer account&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
   &quot;offset&quot;:0,
   &quot;count&quot;:1,
   &quot;total&quot;:1,
   &quot;items&quot;:[
      {
         &quot;id&quot;:&quot;P07595-B21+MXQ1140XVX&quot;,
         &quot;type&quot;:&quot;compute-ops/server&quot;,
         &quot;platformFamily&quot;:null,
         &quot;resourceUri&quot;:&quot;/compute-ops/v1beta2/servers/P07595-B21+MXQ1140XVX&quot;,
         &quot;name&quot;:&quot;MXQ1140XVX&quot;,
         &quot;createdAt&quot;:&quot;2022-09-15T18:46:21.488619+00:00&quot;,
         &quot;updatedAt&quot;:&quot;2022-09-15T18:46:21.488619+00:00&quot;,
         &quot;generation&quot;:1,
         &quot;state&quot;:{
            &quot;managed&quot;:true,
            &quot;connected&quot;:false,
            &quot;connectedModifiedAt&quot;:null,
            &quot;subscriptionState&quot;:&quot;REQUIRED&quot;,
            &quot;subscriptionTier&quot;:null,
            &quot;subscriptionExpiresAt&quot;:null
         }
      }
   ]
}
&lt;/code&gt;&lt;/pre&gt;
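&lt;p&gt;If you want to post-process the response on the command line, a small filter can be handy. For example, assuming the &lt;code&gt;jq&lt;/code&gt; utility is installed, the server names from the response above could be extracted like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Illustrative only: print just the server names from the servers response.
curl -s -X GET https://us-west2-api.compute.cloud.hpe.com/compute-ops/v1beta2/servers \
  -H &quot;Accept: application/json&quot; \
  -H &quot;Authorization: Bearer &amp;#x3C;copy_access_token_here&gt;&quot; | jq -r &apos;.items[].name&apos;
&lt;/code&gt;&lt;/pre&gt;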
&lt;h2&gt;How to use the access token - POSTMAN&lt;/h2&gt;
&lt;p&gt;To execute the REST API using the Postman tool, the access token needs to be copied to the &lt;strong&gt;Bearer Token&lt;/strong&gt; section of the &lt;strong&gt;Authorization&lt;/strong&gt; tab.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/glcp_postman.png&quot; alt=&quot;GreenLake API Call with POSTMAN&quot; title=&quot;GreenLake API Call with POSTMAN&quot;&gt;&lt;/p&gt;
&lt;h1&gt;Deleting client credentials&lt;/h1&gt;
&lt;p&gt;Deleting the client ID and secret will invalidate all tokens associated with the ID and secret.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Click the ellipsis next to the &lt;strong&gt;Generate Access Token&lt;/strong&gt; button to delete your client credentials.&lt;/li&gt;
&lt;li&gt;Select the &lt;strong&gt;Delete Credentials&lt;/strong&gt; link.
&lt;strong&gt;Note:&lt;/strong&gt; If a user is deleted from HPE GreenLake, any tokens or client IDs generated and associated with any applications owned by this user will no longer be valid.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/glcp_delete_cred.png&quot; alt=&quot;Delete GreenLake Client Credential&quot; title=&quot;Delete GreenLake Client Credential&quot;&gt;&lt;/p&gt;
&lt;p&gt;I hope this blog post, in giving you an example of how to obtain the access token from HPE GreenLake and use it with the Compute Ops Management REST API, helps you make the most of your as-a-Service infrastructure. You may also wish to read the blog post on the Compute Ops Management REST API for the essential steps required to further explore the API to get even more out of it.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Reflections on helping Linux become enterprise-ready]]></title><description><![CDATA[In this blog series, you’ll meet some of the open source experts working with the HPE Developer Community team. In this post, I’ll be…]]></description><link>https://developer.hpe.com/reflections-on-helping-linux-become-enterprise-ready/</link><guid isPermaLink="false">https://developer.hpe.com/reflections-on-helping-linux-become-enterprise-ready/</guid><pubDate>Wed, 26 Oct 2022 15:17:41 GMT</pubDate><content:encoded>&lt;p&gt;In this blog series, you’ll meet some of the open source experts working with the HPE Developer Community team. In this post, I’ll be interviewing Suparna Bhattacharya who works in the HPE AI Research Lab on foundations and techniques for data centric trustworthy AI. She is an HPE Fellow and also the architect for Project Data Map, an innovation project in the CTO. Suparna was a Linux kernel contributor between 2000 and 2007 during the period when Linux was evolving into an enterprise OS.&lt;/p&gt;
&lt;p&gt;Some of the contributions Suparna made to Linux were in the areas of filesystems, IO and RAS (reliability, availability, and serviceability), helping to establish Linux as an enterprise-grade OS. During her tenure as a contributor, Suparna learned many things she was able to apply throughout her engineering career and shared them with me here.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Suparna, can you set the stage for me as to your involvement with contributing to the Linux kernel?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Back in 2000, there were a number of companies interested in helping Linux evolve into an operating system that could be used in enterprise-scale environments. I was working at IBM at the time and looking for ways I could contribute to help in that effort. My specific project was intended to look at serviceability capabilities, i.e. RAS features. But I wound up contributing in several areas that made their way into the kernel over the course of several years (2000-2007), including:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Aspects of block I/O&lt;/li&gt;
&lt;li&gt;Asynchronous file system I/O&lt;/li&gt;
&lt;li&gt;ext4 file system&lt;/li&gt;
&lt;li&gt;pre-emptible RCU&lt;/li&gt;
&lt;li&gt;RAS features, including kprobes and kdump&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;You’ve likened your experience to bridging Lilliput and Brobdingnag, places found in Gulliver’s Travels by Jonathan Swift. Can you explain what you mean by that?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Up until that point, Linux had predominantly been used on relatively smaller systems. The challenge now was to make it work well on much larger systems in order to support the enterprise. It felt like trying to build a city where both the Lilliputians could live alongside the Brobdingnagians. You had to be careful that, when you introduced code to support the larger systems, you didn’t trample on the Lilliputians.  In other words, our challenge was to advance the scalability, responsiveness and dependability of Linux for enterprise workloads without introducing additional overhead for non-enterprise users.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Can you give an example?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Because I had experience working on file systems, my interest was piqued in relation to the Linux IO stack and the file system stack. I noticed some developments coming out of the community to expand the block IO subsystem to support larger IO requests, which is what is needed for databases. A group of developers came up with this new abstraction called kiobuf that intrigued me. I thought to myself, well, okay, if you have this, then you may need to do a few additional things when the IO request completes. So I tried building a little something on to that, adding new techniques so you could work when the IO request completed and trigger callbacks. I came up with a mechanism for doing that, and as I was discussing it on some of the IRC channels, an experienced kernel developer encouraged me to &lt;a href=&quot;https://lkml.indiana.edu/hypermail/linux/kernel/0101.3/1273.html&quot;&gt;post it&lt;/a&gt; as an RFC so I could get feedback from the community.&lt;/p&gt;
&lt;p&gt;In doing so, I inadvertently triggered a whole &lt;a href=&quot;https://lore.kernel.org/lkml/CA2569E6.0051970D.00@d73mta03.au.ibm.com/#r&quot;&gt;firestorm&lt;/a&gt; of emails around kiobuf itself between several stalwarts in the kernel development community. I wasn’t even trying to do anything to change the kiobuf layout. I was merely suggesting the addition of some new features on top of it. But apparently I had drawn the attention of Linus Torvalds to the kiobufs, descriptors that were being proposed for large IO systems and databases. He expressed his concern that, in the process of trying to support these bigger systems, we were making it more difficult for anything that required low-latency handling, like floppy disks and smaller devices.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What was the fallout from the email firestorm?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Then, &lt;a href=&quot;https://www.landley.net/kdocs/ols/2004/ols2004v1-pages-51-62.pdf&quot;&gt;Jens Axboe&lt;/a&gt; started looking at how to rewrite the block IO sub-system. The challenge was to be able to support these larger IO requests while at the same time supporting diverse workloads and devices at the low end.&lt;/p&gt;
&lt;p&gt;Leaning on some of my past experience, I studied his re-write as he posted his patches and came up with notes on the block IO evolution that captured the design issues and tradeoffs. Some of these “bionotes” eventually became part of the block IO design documentation. While I was writing this up, a few others pointed out that there were still places where the new system was inefficient for certain kinds of IO devices, and I had to come up with some patches to fix that. It’s kind of funny how, although my original objective was to get Linux to the point where it worked better for the enterprise, I found myself contributing patches to make sure the changes we were making still worked efficiently for smaller systems.&lt;/p&gt;
&lt;p&gt;Still, I found that writing up these design issues and design documentation and then contributing patches was a very nice way to get deeply involved with the community.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What did you learn from this?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;This issue of potentially fixing one aspect for one set of users while introducing challenges for others led me to the understanding that, wherever you make a change, it’s better to make minimal changes and not make huge changes in one shot. Each contribution lays the groundwork for someone else to add more to it, so you can evolve the code carefully.&lt;/p&gt;
&lt;p&gt;Traditionally, we’ve always been used to building systems in a modular fashion. And Linux is no exception. That’s why you can work on one part of the subsystem without breaking things in other areas. But modularity can also lead to very bloated subsystems. So that’s why you want to proceed carefully, making small changes, sharing it with the community, get feedback, etc.  Don’t make one huge change all at once.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Were you able to get those RAS features in that you wanted?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;It took a few years to get RAS capabilities accepted into Linux. Part of it, as I pointed out earlier, had to do with how difficult it was to support large enterprises while ensuring that we didn’t make the kernel unusable for desktop or low-end systems. But another part of it had to do with a long-held belief amongst the contributor community that you should be able to look at the code and understand why it failed, helping you identify the root cause, and not have to rely on injecting things into the kernel that might be considered intrusive.&lt;/p&gt;
&lt;p&gt;For example, a key piece of RAS is serviceability. If something goes wrong in an enterprise environment, you need the service engineers to be able to figure out what went wrong, and the only way they can tell is by inserting probes, like running a trace utility, doing a crash dump, etc. There was quite a lot of apprehension about incorporating these features into the mainline kernel because they were perceived as intrusive and polluting the elegance of kernel code. Many also worried that one might look at the debugger and apply “band-aid” fixes but wouldn’t really get at the root cause. It wasn’t that the community was against including these capabilities… they just didn’t want them done in a way that made the kernel messy or complicated.&lt;/p&gt;
&lt;p&gt;Eventually, we were able to add in kernel dynamic probes, aka kprobes, a dynamic tracing capability. The idea here was that, on a running kernel, you could put a probe in at any point and collect information instead of using a static trace. It took a long time to be incorporated; we posted our first implementation in August of 2000 and it took four years to get into the main line. I also worked on a crash dump feature, where again I learned that a minimalist approach is the best approach.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Tell me how the kprobes work emphasized the need for minimalism to you.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I call this my journey from Modularity to Minimalism, because the first time we tried to implement it, we tried to make it do everything. We wanted it to have this probing capability, but we also wanted to enforce some safety restrictions, so we wrote an in-kernel interpreter that allowed users to write these probe functions in a new language. All of it eventually amounted to 6,000 lines of kernel code. It was all very well thought out and we expected it would be well-received by the community. But it wasn’t. It wasn’t that folks didn’t like the capability or the techniques. It was just that they felt that it really didn’t belong in the kernel. It was too much.&lt;/p&gt;
&lt;p&gt;A kernel developer in Australia, &lt;a href=&quot;https://kernel.org/doc/ols/2006/ols2006v2-pages-109-124.pdf&quot;&gt;Rusty Russell&lt;/a&gt;, helped us out by suggesting we create just a simple patch that let you put a probe on a kernel function and register the output from that. We would isolate that tiny little patch and post it on the kernel mailing list. This brought our 6,000 lines of code down to 550. It took care of the main issues to ensure correctness but didn’t attempt to address a whole lot of different situations. We also came up with a simple trick to enable developers to easily probe function arguments.&lt;/p&gt;
&lt;p&gt;After that, it received a much better reception. There were other things that still needed to be added to it, and they eventually were. What I learned through doing this was that sometimes you need to go backwards in order to move forward. We had to undo a lot of the work and just tease out the very minimal elements. But that’s what led to success, because then folks were willing to adopt it and build upon it. After it was adopted, others added on perf probes, SystemTap, eBPF, and a lot of other things.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What about the crash dump feature – what was the difficulty there?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Crash dump had been integrated into every other OS I had worked with, but it wasn’t in Linux. I started working with this project called Linux Kernel Crash Dump, which started to add that capability and I made a lot of contributions there, but again, it was pretty hard to get into the mainline kernel.&lt;/p&gt;
&lt;p&gt;The critical need here was a reliable and flexible crash dump mechanism that would support a wide range of target devices and environments, while at the same time being easy to use and maintain. We first considered modular solutions. These provided a pluggable framework that was flexible, but the problem was that it was just too complex. It took 5,000 lines of code just to capture a crash dump, and that code was being executed when the kernel was already in deep trouble.&lt;/p&gt;
&lt;p&gt;So there was this intrinsic challenge – we needed something simple and minimalist but most attempts to address the needs of diverse environments would make everything complex. What I learned here was that, when you have conflicting design goals like this, it’s a great opportunity for innovation.&lt;/p&gt;
&lt;p&gt;In the end, we asked ourselves, “What is the minimal base for a crash dump?” You would need to save registers and invoke a safe soft reboot to a kernel in reserved memory while preserving a core file “view” of the crashed kernel’s memory. This would make it possible to reuse existing utilities, such as cp to save a copy of the file (to create a disk dump) or scp and ftp (to create a network dump). When we looked at it that way, we achieved more with less; we got better flexibility and wider applicability, with barely 1,000 lines of code (on top of kexec).&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Working on Linux really made an impact on you, didn’t it?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;It was a very exciting time, probably the most enjoyable phase of my entire career. We had no idea back then of how significant and ubiquitous Linux would be in the years to come – some refer to it as “the software that runs the world”.  I learned that participating in open source communities can bring a deep satisfaction from making contributions that can have a lasting impact, while also instilling lasting lessons for how one approaches their work.&lt;/p&gt;
&lt;p&gt;For example, one of the most important lessons I learned was that, when addressing design goals for new features, conventional approaches are not always good enough. You have to rethink and challenge long-held beliefs to gracefully cope with evolving needs. New challenges are always emerging.&lt;/p&gt;
&lt;p&gt;And finally, I learned the power of simplicity, learning to do more with less. You want to reduce things down to the most essential features and avoid unnecessary complexity. Understanding these things has helped me to bring about change more easily in each and every engineering organization I’ve been a part of throughout my career.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[SPIFFE/SPIRE graduates, enabling greater security solutions]]></title><description><![CDATA[Late September 2022, the Cloud Native Computing Foundation® (CNCF) announced the graduation of the SPIFFE and SPIRE projects, joining an…]]></description><link>https://developer.hpe.com/spiffe-spire-graduates-enabling-greater-security-solutions/</link><guid isPermaLink="false">https://developer.hpe.com/spiffe-spire-graduates-enabling-greater-security-solutions/</guid><pubDate>Mon, 24 Oct 2022 18:38:25 GMT</pubDate><content:encoded>&lt;blockquote&gt;
&lt;p&gt;In late September 2022, the Cloud Native Computing Foundation® (CNCF)&lt;a href=&quot;https://www.cncf.io/announcements/2022/09/20/spiffe-and-spire-projects-graduate-from-cloud-native-computing-foundation-incubator/&quot;&gt; announced the graduation of the SPIFFE and SPIRE projects&lt;/a&gt;, joining an elite group of only 18 graduated CNCF projects, like Helm and Kubernetes. Being designated a graduated project means that the project is considered stable, with enough governance processes around it, and ready to be widely deployed in production.&lt;/p&gt;
&lt;h3&gt;A Secure Production Identity Framework for Everyone&lt;/h3&gt;
&lt;p&gt;SPIFFE (Secure Production Identity Framework for Everyone) is designed to work within dynamic and heterogeneous environments to provide a means to securely authenticate workloads using a Zero Trust system. These workloads could be anything from a physical or virtual node to an individual process. By providing a secure identity to every workload, it removes the need for shared secrets and provides a foundation for higher level platform-agnostic security controls.&lt;/p&gt;
&lt;p&gt;Rising from the need to better secure today’s cloud-native ecosystem, SPIFFE and SPIRE resolved a fundamental issue – the need for a standardized, cryptographic, platform-agnostic identity foundation to help secure services across heterogeneous cloud production environments. SPIFFE and SPIRE moved from the CNCF Sandbox to the Incubator in 2020, and have grown significantly since, with contributions from many leading technology companies, including VMware, Uber, ByteDance, Anthem, TransferWise, IBM, and Hewlett Packard Enterprise (HPE).&lt;/p&gt;
&lt;h3&gt;How HPE puts SPIFFE/SPIRE into action&lt;/h3&gt;
&lt;p&gt;As a technology company, we consider it important to lead and influence meaningful innovation in the open source community, as it provides key capabilities that our customers desire and helps us design solutions that meet their needs. SPIFFE and SPIRE are important components of enabling seamless zero trust infrastructures and have become integral parts of a number of HPE projects, including the following:&lt;/p&gt;
&lt;h4&gt;HPE Cray Exascale Super Computer Management&lt;/h4&gt;
&lt;p&gt;SPIFFE/SPIRE has shown that it can scale, and the way it is used by the Cray System Management (CSM) tooling underscores that strength. HPE Cray Exascale Super Computers are big – data center-sized big – comprising tens of thousands of compute nodes and high-performance storage. In late 2018, development work began on Cray CSM, a solution enabling system administrators to manage these large-scale supercomputers by leveraging the architecture and advances of hyper-scalers and cloud providers, based on open source code.&lt;/p&gt;
&lt;p&gt;When HPE acquired Cray in 2019 and then Scytale in 2020, the synergies found in their technologies brought the engineering groups together. They were quickly able to identify how important and helpful SPIFFE/SPIRE could be to the CSM development effort. Given how CSM helps customers to go beyond the traditional and enable new services, deploy a broad range of workloads, and drive towards an as-a-service experience, the integration of SPIRE makes these implementations even more secure.&lt;/p&gt;
&lt;p&gt;Today, SPIRE is an integral part of CSM and running in production at facilities including &lt;a href=&quot;https://www.hpe.com/us/en/newsroom/press-release/2021/04/us-department-of-energys-los-alamos-national-laboratory-expands-collaboration-with-hewlett-packard-enterprise-on-new-supercomputer-design-to-advance-scientific-research.html&quot;&gt;Los Alamos National Lab&lt;/a&gt;, Lawrence Berkeley National Lab’s &lt;a href=&quot;https://www.nersc.gov/&quot;&gt;National Energy Research Scientific Computing Center&lt;/a&gt;, &lt;a href=&quot;https://www.hpe.com/us/en/newsroom/press-release/2020/10/hewlett-packard-enterprise-wins-160m-contract-to-power-one-of-the-worlds-fastest-supercomputers-based-in-finland-to-bolster-europes-research-in-science-and-unlock-economic-growth.html&quot;&gt;EuroHPC JU’s LUMI&lt;/a&gt;, the &lt;a href=&quot;https://www.hpe.com/us/en/newsroom/press-release/2021/04/swiss-national-supercomputing-centre-hewlett-packard-enterprise-and-nvidia-announce-worlds-most-powerful-ai-capable-supercomputer.html&quot;&gt;Swiss National Supercomputing Centre (CSCS)&lt;/a&gt;, and the &lt;a href=&quot;https://www.hpcwire.com/2021/04/22/microsoft-to-provide-worlds-most-powerful-weather-climate-supercomputer-for-uks-met-office/&quot;&gt;UK Met Office&lt;/a&gt;, with more coming. It is also expected to play a role in enabling federated systems interaction in various scenarios, both in HPC and AI/ML operations, in many HPE Cray Exascale facilities and beyond.&lt;/p&gt;
&lt;h4&gt;HPE GreenLake&lt;/h4&gt;
&lt;p&gt;As our customers transition to an everything-as-a-service consumption model, they must deal with situations where applications can straddle multiple data centers, multiple clouds, 3rd party managed service providers and edge locations. A fundamental but complex problem with this digital transformation is how these physically distributed software systems can reliably authenticate to each other and securely communicate at scale, especially over untrusted networks.  &lt;/p&gt;
&lt;p&gt;As part of the shared security responsibility with our customers, and so that customers can trust HPE GreenLake Edge-to-Cloud solutions to host their data, HPE must ensure the HPE GreenLake Edge-to-Cloud Platform adheres to zero trust security best practices. The HPE GreenLake Edge-to-Cloud Platform leverages SPIFFE and SPIRE to ensure all micro-services running inside the platform, and any external micro-services that the platform needs to talk to, are continuously attested and issued short-lived cryptographic identities (SPIFFE IDs) to establish secure communication with mutual TLS (mTLS) encryption. This protects the platform by eliminating long-lived secrets and minimizing any impact from credential exfiltration attacks.&lt;/p&gt;
&lt;p&gt;The SPIFFE/SPIRE cryptographic identities enable other opportunities for use with HPE GreenLake Edge-to-Cloud Platform as well. They offer a global identity foundation that could be leveraged for a broad spectrum of platform eco-system tools and services, such as a service mesh, policy framework, secrets managers, identity and access management (IAM) frameworks, and software supply chain integrity tools. By bootstrapping authentication to authorization tools, one could address the “credential zero” or “secure introduction” problem by eliminating the need for workloads to securely store the credentials needed to authenticate to the secret store.&lt;/p&gt;
&lt;h3&gt;Driving towards the best secured edge-to-cloud platform&lt;/h3&gt;
&lt;p&gt;The examples noted above are just some of the ways HPE is using this innovative technology to provide for a better and more seamlessly secured environment.  There’s a lot more going on. Stay tuned to the &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE Developer blog&lt;/a&gt; to learn about different SPIFFE/SPIRE implementations and other news in this area. You might also want to check out the book, &lt;a href=&quot;https://spiffe.io/book/&quot;&gt;Solving the Bottom Turtle&lt;/a&gt;, for interesting and detailed information about SPIFFE and SPIRE.&lt;/p&gt;
&lt;/blockquote&gt;</content:encoded></item><item><title><![CDATA[Exploring the role of the Citizen Developer]]></title><description><![CDATA[A topic I’ve been reading about a lot lately is the rise of the Citizen Developer. Citizen Developers leverage low-code/no-code (LCNC…]]></description><link>https://developer.hpe.com/exploring-the-role-of-the-citizen-developer/</link><guid isPermaLink="false">https://developer.hpe.com/exploring-the-role-of-the-citizen-developer/</guid><pubDate>Mon, 24 Oct 2022 13:33:55 GMT</pubDate><content:encoded>&lt;style&gt;
ul li{
  font-size:25px;
  list-style-position: outside;
  margin-bottom:8px;
}
&lt;/style&gt;
&lt;p&gt;&lt;img src=&quot;/img/hpe20160720045_1600_0_72_rgb-1200-x-675.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;A topic I’ve been reading about a lot lately is the rise of the Citizen Developer.&lt;/p&gt;
&lt;p&gt;Citizen Developers leverage low-code/no-code (LCNC) platforms to build applications through visual representations without having to have spent years developing deep coding skills. They’ve become more plentiful as more and more businesses rely on software applications to help automate their processes, helping organizations do more with less.&lt;/p&gt;
&lt;p&gt;In essence, Citizen Developers are just non-IT employees who use technology to make work easier and more efficient, relying on platforms designed to create programs through intuitive, point-and-click interfaces rather than lines and lines of code. Enabling Citizen Developers can save companies a lot of money by removing the constraint on IT for low-complexity innovation and helping everyone become more self-sufficient and efficient.&lt;/p&gt;
&lt;p&gt;At last week’s &lt;a href=&quot;https://www.devrelxsummit.com/&quot;&gt;SlashData DevRelX Summit&lt;/a&gt;, I learned that the usage of low/no code has doubled in the last 2 years, with 57% of developers reporting the use of LCNC tools.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/usage-of-low-no-code-copy.jpg&quot; alt=&quot;Usage of low/no code&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can check out the full session from Christina Voskoglou, from which this data comes, &lt;a href=&quot;https://youtu.be/Up-N5x6NCps&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;That is an interesting statistic, as it seems to indicate that these tools are being used not only by Citizen Developers, but are also making their way into “real” developers’ toolboxes (for building prototypes, front end development, etc.).&lt;/p&gt;
&lt;p&gt;The reason for this rise in popularity is that there are many pros to the Citizen Developer movement:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Low-code/no-code platforms allow non-developers to build simple apps with little IT investment, while freeing up corporate developers to focus on key applications.&lt;/li&gt;
&lt;li&gt;They provide pre-existing libraries of components for quick software builds.&lt;/li&gt;
&lt;li&gt;It expands the ability to innovate within the enterprise.&lt;/li&gt;
&lt;li&gt;The platforms allow one to start small, then customize and improve as you go, again without corporate developer interaction.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;As you can see, there are a lot of benefits to be had for an organization. However, if you decide you want to provide your employees with low-code/no-code platforms and tools, there are a few things you’ll want to make sure corporate IT is involved with:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;You might need to provide training on simple development concepts and on the LCNC tools you selected and deployed for employees.&lt;/li&gt;
&lt;li&gt;You will need to watch over the data being used by Citizen Devs, to make sure no compliance or privacy rules are broken. Most likely you will need to implement some policy restrictions, which might represent a governance challenge.&lt;/li&gt;
&lt;li&gt;Enabling Citizen Devs can represent a risk of IT sprawl and shadow IT, if not carefully managed.&lt;/li&gt;
&lt;li&gt;It may represent a security risk, as in most cases these Citizen Dev-built apps are developed and used on personal devices, mobile devices, and non-corporate Wi-Fi connections.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Read more about it&lt;/h3&gt;
&lt;p&gt;One article I recently found was very interesting: &lt;a href=&quot;https://thenewstack.io/how-low-code-platforms-can-help-cloud-native-developers/&quot;&gt; How Low-Code Platforms Can Help Cloud Native Developers&lt;/a&gt;. Vinothini Raju, who wrote a &lt;a href=&quot;https://developer.hpe.com/blog/autopilot-kubernetes-deployments-on-hpe-ezmeral-runtime-enterprise/&quot;&gt;blog post&lt;/a&gt; for the &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE Developer Community&lt;/a&gt; last year, reminds us that the rise of no-code platforms isn’t surprising, given that humans have a tendency to process visuals much faster than text. In her article, she explores low-code/no-code platforms for cloud-native deployments. It’s a fascinating article that points out why these platforms are such a boon to cloud-native development. It’s a short read and I highly recommend it.&lt;/p&gt;
&lt;h3&gt;Hear more about it&lt;/h3&gt;
&lt;p&gt;HPE is exploring low-code/no-code options as well. In the &lt;a href=&quot;https://hpe.zoom.us/webinar/register/8716666192015/WN_8jlRM9SaRKmbT3r1CDNtDw&quot;&gt;HPE Developer November 16th Munch &amp;#x26; Learn&lt;/a&gt;, Jeffrey Fougere, an Innovation Strategist at HPE, Colin Lue King, Talent Management Manager at HPE, and Richard Kerridge, Learning Technology Architecture &amp;#x26; Strategy Manager at HPE, will come to talk on this subject. They’ll introduce session attendees to some of the popular platforms, like Microsoft Power Apps and Power Automate, and explain how HPE employees themselves have been able to build enterprise-level apps and automations with little-to-no budget. This is a session you’ll not want to miss. Even full-time developers can benefit from learning about these technologies to create full-stack solutions outside of their areas of expertise. There will be a replay posted on the &lt;a href=&quot;https://developer.hpe.com/campaign/munch-and-learn&quot;&gt;Munch &amp;#x26; Learn Technology Talks&lt;/a&gt; calendar page once it’s ready.&lt;/p&gt;
&lt;p&gt;What’s your experience with low-code/no-code solutions? Other community members could learn a lot from your experience. If you’re a citizen developer, consider &lt;a href=&quot;https://developer.hpe.com/contribute&quot;&gt;contributing a blog post&lt;/a&gt; on the subject. The HPE Developer Community team would love to hear more about it.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[How to monitor HPE GreenLake for Compute Ops Management infrastructure with Grafana Metrics Dashboards]]></title><description><![CDATA[The purpose of this blog post is to describe how to generate Grafana dashboards to monitor any HPE Compute infrastructure managed by HPE…]]></description><link>https://developer.hpe.com/how-to-monitor-hpe-compute-ops-management/</link><guid isPermaLink="false">https://developer.hpe.com/how-to-monitor-hpe-compute-ops-management/</guid><pubDate>Wed, 19 Oct 2022 18:21:24 GMT</pubDate><content:encoded>&lt;style&gt;ul li{ font-size:26px;padding-bottom: 0.5em;}&lt;/style&gt;
&lt;style&gt; i{ color:grey;font-family:&apos;Courier New&apos;;font-size:22px; } &lt;/style&gt;
&lt;p&gt;The purpose of this blog post is to describe how to generate Grafana dashboards to monitor any HPE Compute infrastructure managed by HPE GreenLake for Compute Ops Management.&lt;/p&gt;
&lt;h1&gt;Grafana Dashboards&lt;/h1&gt;
&lt;p&gt;IT infrastructure metrics visualization is critical for health monitoring, prediction, and capacity planning. It provides a powerful way of viewing infrastructure utilization, revealing issues and helping maintain uninterrupted services.&lt;/p&gt;
&lt;p&gt;Grafana’s time-series graphs are the perfect enabler for IT infrastructure optimization. They can assist administrators in monitoring temperature changes, network traffic performance, power consumption, and much more. They can be used to compare data over time to note trends and detect issues, allowing administrators to make any necessary adjustments and prevent downtime.&lt;/p&gt;
&lt;p&gt;The following picture shows a typical HPE infrastructure dashboard with different panels generated from HPE Compute Ops Management:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/2022-10-21-11_14_38-hpe-com-using-infinity-uql-native-api-calls-grafana-%E2%80%94-mozilla-firefox.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h1&gt;HPE GreenLake for Compute Ops Management REST API&lt;/h1&gt;
&lt;p&gt;HPE GreenLake for Compute Ops Management provides a northbound RESTful &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/compute-ops/public/openapi/compute-ops-latest/overview/&quot;&gt;API &lt;/a&gt;that supports many operations. All the data you can get from the Compute Ops Management API can be leveraged to create beautiful and instructive Grafana dashboards.&lt;/p&gt;
&lt;p&gt;To take advantage of this, simply use a generic Grafana plugin that can handle REST requests, parse JSON responses, and generate tables. This approach greatly reduces complexity compared with a solution that would otherwise require a database like Prometheus or InfluxDB. In this post, I will show you how to do it without a database.&lt;/p&gt;
&lt;p&gt;HPE GreenLake for Compute Ops Management REST API uses the OAuth 2.0 authentication based on the client credential, which generates a limited lifetime access token.&lt;/p&gt;
&lt;p&gt;The access token is a long string in the form of a JSON Web Token that is signed using the RS256 algorithm. The access token must be added to the HTTP headers of every REST API request as &quot;Authorization: Bearer {token}&quot;.&lt;/p&gt;
&lt;p&gt;For information about how to create API client credentials and how to generate an access token for HPE GreenLake for Compute Ops Management, refer to &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/guides/public/authentication/authentication/&quot;&gt;this web page&lt;/a&gt;.&lt;/p&gt;
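&lt;p&gt;To illustrate this flow outside of Grafana, here is a minimal Python sketch (using the popular &lt;em&gt;requests&lt;/em&gt; library; the client ID and client secret placeholders are the API client credentials mentioned above) that exchanges the client credentials for an access token and builds the Authorization header expected by the API:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import requests

SSO_URL = &quot;https://sso.common.cloud.hpe.com/as/token.oauth2&quot;

# OAuth 2.0 client credentials flow: exchange the client ID/secret for a token
response = requests.post(
    SSO_URL,
    headers={&quot;Content-Type&quot;: &quot;application/x-www-form-urlencoded&quot;},
    data={
        &quot;grant_type&quot;: &quot;client_credentials&quot;,
        &quot;client_id&quot;: &quot;&lt;your-client-ID&gt;&quot;,
        &quot;client_secret&quot;: &quot;&lt;your-client-secret&gt;&quot;,
    },
)
response.raise_for_status()
token = response.json()[&quot;access_token&quot;]

# Add this header to every Compute Ops Management REST API request
headers = {&quot;Authorization&quot;: f&quot;Bearer {token}&quot;}
print(headers)
&lt;/code&gt;&lt;/pre&gt;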
&lt;p&gt;Only a few resource metrics are currently supported by Compute Ops Management via the RESTful API, but things will change quickly in the coming months. Today, the only metric available is the carbon footprint report, but many other resources can be used to build useful Grafana dashboards, such as data about the number of servers, server health, service packs, groups, etc.&lt;/p&gt;
&lt;h1&gt;Grafana Infinity plugin&lt;/h1&gt;
&lt;p&gt;There are several Grafana plugins that support data collection via the REST API (e.g. Infinity, &lt;a href=&quot;https://grafana.com/grafana/plugins/simpod-json-datasource/&quot;&gt;JSON&lt;/a&gt;, &lt;a href=&quot;https://grafana.com/grafana/plugins/marcusolsson-json-datasource/&quot;&gt;JSON API&lt;/a&gt;) but &lt;a href=&quot;https://grafana.com/grafana/plugins/yesoreyeram-infinity-datasource/&quot;&gt;Infinity &lt;/a&gt;has the great advantage of offering an advanced query language that is essential for manipulating JSON data into a suitable format that Grafana can understand. This language is called &lt;a href=&quot;https://sriramajeyam.com/grafana-infinity-datasource/wiki/uql/&quot;&gt;UQL&lt;/a&gt;, Infinity&apos;s unstructured query language.&lt;/p&gt;
&lt;p&gt;UQL is not simple at first glance, but I will provide examples in this blog. With UQL, you can customize the results you need regardless of the JSON format returned by the API.&lt;/p&gt;
&lt;p&gt;A UQL query can be formed with a list of commands joined by &lt;code&gt;|&lt;/code&gt;. Most of the time, fields are referenced in double quotes and string values are referenced in single quotes as shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/2022-10-21-11_47_58-hpe-software-%E2%80%8E-onenote-for-windows-10.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The following diagram describes the different components of the solution:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/2022-10-19-16_15_11-lj-synergy-composable-fabric.pptx-powerpoint.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Pros and Cons of this solution&lt;/h2&gt;
&lt;p&gt;As with any solution, there are both pros and cons to using it.&lt;/p&gt;
&lt;p&gt;Pros:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A lightweight solution that only requires Grafana and an easily installable plugin&lt;/li&gt;
&lt;li&gt;Supports collecting metrics from any API&lt;/li&gt;
&lt;li&gt;Cross-platform support, all components can be installed on Microsoft Windows or Linux&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Cons:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Cannot create a time series Grafana visualization with non-time series data you may retrieve from an API (This would require the use of a database, like Prometheus or InfluxDB)&lt;/li&gt;
&lt;li&gt;Requires in-depth knowledge of the API, authentication, and methods&lt;/li&gt;
&lt;li&gt;Requires knowledge of the UQL language to manipulate JSON data&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;Configuration&lt;/h1&gt;
&lt;h2&gt;Prerequisites&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Grafana must be installed, started, and enabled&lt;/li&gt;
&lt;li&gt;HPE GreenLake for Compute Ops Management API client credentials are required (this consists of a &lt;em&gt;client ID&lt;/em&gt; and a &lt;em&gt;client secret&lt;/em&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Infinity plugin installation&lt;/h2&gt;
&lt;p&gt;From an SSH session on the Grafana server, enter:&lt;br&gt;
&gt; &lt;i&gt;grafana-cli plugins install yesoreyeram-infinity-datasource&lt;/i&gt;&lt;/p&gt;
&lt;p&gt;Then restart the Grafana service:&lt;br&gt;
&gt; &lt;i&gt;service grafana-server restart&lt;/i&gt;&lt;/p&gt;
&lt;p&gt;For more details on how to install the Infinity plugin, you can check out the &lt;a href=&quot;https://github.com/yesoreyeram/grafana-infinity-datasource&quot;&gt;Infinity GitHub repository&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Grafana configuration&lt;/h2&gt;
&lt;p&gt;To launch the Grafana User Interface, open a web browser and navigate to &lt;strong&gt;http://&amp;#x3C;grafana_IP or DNS name&gt;:3000/&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Note: The default HTTP port that Grafana listens to is 3000 unless you have configured a different port.&lt;/p&gt;
&lt;p&gt;Click on the gear icon on the side menu and click &lt;strong&gt;Data Sources&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/lj-grafana-com-picture1.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Search for Infinity from the data source list.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/lj-grafana-com-picture2.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Add &lt;strong&gt;Infinity&lt;/strong&gt; as a new data source and name it &lt;strong&gt;Infinity-COM&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/2022-10-19-17_05_15-infinity-com_-settings-grafana-%E2%80%94-mozilla-firefox.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Leave all other settings as default and click &lt;strong&gt;Save &amp;#x26; test&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Then to create a new Dashboard, click on the Dashboards icon on the side menu and select &lt;strong&gt;New dashboard&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/lj-grafana-com-picture3.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The next steps consist of creating variables and panels for this dashboard. I will describe each step in detail to give you an overview of the methods I use.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Important note&lt;/strong&gt;: You have the option to import the dashboard directly from my &lt;a href=&quot;https://github.com/jullienl/HPE-Compute-Ops-Management/tree/main/Grafana&quot;&gt;GitHub repository&lt;/a&gt; using the JSON file provided, which contains everything you need (layout, variables, styles, data sources, queries, etc.). Once imported into Grafana, the same dashboard that is described in detail below, with its different panels, will be immediately available; it includes carbon emissions reports, server information and health, firmware bundles, group information, etc. Refer to the README.md for more details.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Going back to the manual configuration, the next step is to create Grafana variables for this new dashboard. These will be used to simplify the API authentication process and the REST requests you will define later.&lt;/p&gt;
&lt;p&gt;Click on &lt;strong&gt;Dashboard settings&lt;/strong&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/lj-grafana-com-picture4.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Then select &lt;strong&gt;Variables&lt;/strong&gt; then &lt;strong&gt;Add variables&lt;/strong&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/lj-grafana-com-picture5.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Three variables are required:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;A variable for the HPE GreenLake for Compute Ops Management API endpoint URL:&lt;/p&gt;
&lt;p&gt;Endpoints are the host URLs that you will submit your API requests to. Compute Ops Management has unique endpoints in specific regions. Which region is used depends on which region the devices were onboarded into via the HPE GreenLake Cloud Platform.&lt;/p&gt;
&lt;p&gt;Use the following list to identify your application endpoint:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;US West: &lt;a href=&quot;https://us-west2-api.compute.cloud.hpe.com/&quot;&gt;https://us-west2-api.compute.cloud.hpe.com/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;EU Central: &lt;a href=&quot;https://eu-central1-api.compute.cloud.hpe.com/&quot;&gt;https://eu-central1-api.compute.cloud.hpe.com/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;AP NorthEast: &lt;a href=&quot;https://ap-northeast1-api.compute.cloud.hpe.com/&quot;&gt;https://ap-northeast1-api.compute.cloud.hpe.com/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Create a new variable using the following parameters:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Name: &lt;strong&gt;url&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Type: &lt;strong&gt;Custom&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Value: &lt;em&gt;&amp;#x3C;Your endpoint URL&gt;&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img
src=&quot;/img/2022-10-19-17_41_55-hpe-com-using-infinity-uql-native-api-calls-grafana-—-mozilla-firefox.png&quot;
 /&gt;&lt;/p&gt;
&lt;p&gt;Note: If you have devices in different regions, you can define your variable with the values of the three endpoints separated by a comma.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/2022-10-24-11_22_41-hpe-com-using-infinity-uql-native-api-calls-grafana-%E2%80%94-mozilla-firefox.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Note: When multiple endpoint values are defined, the &lt;em&gt;url&lt;/em&gt; variable is listed in a drop-down list at the top of the dashboard. This is very useful if you want to quickly switch the visualization between the different regions.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/2022-10-24-11_24_26-hpe-com-using-infinity-uql-native-api-calls-grafana-%E2%80%94-mozilla-firefox.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A variable to generate the access token for the API authentication:&lt;/p&gt;
&lt;p&gt;Compute Ops Management REST API uses the OAuth 2.0 authentication based on the client credential, which generates a limited lifetime access token. So, the variable must be created using:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Name: &lt;strong&gt;session&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Type: &lt;strong&gt;Query&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Hide: &lt;strong&gt;Variable&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Data source: &lt;strong&gt;Infinity-COM&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Query Type: &lt;strong&gt;Infinity&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;URL: &lt;strong&gt;&lt;a href=&quot;https://sso.common.cloud.hpe.com/as/token.oauth2&quot;&gt;https://sso.common.cloud.hpe.com/as/token.oauth2&lt;/a&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/2022-10-24-11_51_57-hpe-com-using-infinity-uql-native-api-calls-grafana-%E2%80%94-mozilla-firefox.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Then click on &lt;strong&gt;HTTP method, Query param, Headers&lt;/strong&gt; and use the following parameters:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Method: &lt;strong&gt;POST&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Body: &lt;strong&gt;grant_type=client_credentials&amp;#x26;client_id=&lt;/strong&gt;&amp;#x3C;your-client-ID&gt;&lt;strong&gt;&amp;#x26;client_secret=&lt;/strong&gt;&amp;#x3C;your-client-secret&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Note: &lt;em&gt;&amp;#x3C;your-client-ID&gt;&lt;/em&gt; and &lt;em&gt;&amp;#x3C;your-client-secret&gt;&lt;/em&gt; are the Compute Ops Management API client credentials generated from the HPE GreenLake Cloud Platform (GLCP).&lt;/p&gt;
&lt;img src=&quot;/img/2022-10-19-18_07_06-hpe-com-using-infinity-uql-native-api-calls-grafana-—-mozilla-firefox.png&quot;   /&gt;
&lt;p&gt;And add in the Headers tab:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Header Name: &lt;strong&gt;Content-type&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Header Value: &lt;strong&gt;application/x-www-form-urlencoded&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/lj-grafana-com-picture6.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;When everything is properly configured, the access token generated by the Compute Ops Management API should be displayed in the &lt;em&gt;Preview of values&lt;/em&gt; section at the bottom of the page:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/lj-grafana-com-picture7.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A variable for the carbon footprint report ID:&lt;/p&gt;
&lt;p&gt;I use a variable for the carbon footprint report ID, because each time a new report is generated, a new &lt;code&gt;id&lt;/code&gt; is created. So, by using a variable, I can fetch the last report &lt;code&gt;id&lt;/code&gt; and be sure that all my CO2 report API requests will be successful.&lt;/p&gt;
&lt;p&gt;For this variable, use the following parameters:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Name: &lt;strong&gt;reportID&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Type: &lt;strong&gt;Query&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Hide: &lt;strong&gt;Variable&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Data source: &lt;strong&gt;Infinity-COM&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Query Type: &lt;strong&gt;Infinity&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;URL: &lt;strong&gt;${url}/compute-ops/v1beta1/reports&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Column 1: &lt;strong&gt;reportDataUri&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/2022-10-24-11_53_38-hpe-com-using-infinity-uql-native-api-calls-grafana-%E2%80%94-mozilla-firefox.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Note: &lt;code&gt;${variablename}&lt;/code&gt; is the general syntax for calling a variable in Grafana. So &lt;code&gt;${url}&lt;/code&gt; used in the URL field calls the &lt;em&gt;url&lt;/em&gt; variable you defined earlier. Same for &lt;code&gt;${session}&lt;/code&gt; below in the header value, it calls the access token generated by the API.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Method: &lt;strong&gt;GET&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img
src=&quot;/img/2022-10-19-18_35_09-hpe-com-using-infinity-uql-native-api-calls-grafana-—-mozilla-firefox.png&quot;
/&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Header Name: &lt;strong&gt;Authorization&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Header Value: &lt;strong&gt;Bearer ${session}&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img
src=&quot;/img/2022-10-19-18_34_48-hpe-com-using-infinity-uql-native-api-calls-grafana-—-mozilla-firefox.png&quot;
/&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Once completed, the URI of the carbon footprint report should be displayed in the &lt;em&gt;Preview of values&lt;/em&gt; section:&lt;/p&gt;
&lt;p&gt;&lt;img
src=&quot;/img/2022-10-19-19_02_23-hpe-com-using-infinity-uql-native-api-calls-grafana-—-mozilla-firefox.png&quot;
/&gt;&lt;/p&gt;
&lt;p&gt;The variables and parameters of the dashboard are now complete. The warning on the &lt;code&gt;reportID&lt;/code&gt; variable below is expected because this variable is not yet used by any panel. This will be corrected once &lt;code&gt;reportID&lt;/code&gt; is used in a query.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/2022-10-19-19_17_15-hpe-com-using-infinity-uql-native-api-calls-grafana-%E2%80%94-mozilla-firefox.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
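&lt;p&gt;If you want to sanity-check these variables outside of Grafana before saving, the following Python sketch is illustrative only (it assumes the &lt;em&gt;requests&lt;/em&gt; library and an access token obtained with the client-credentials flow shown earlier); it reproduces the &lt;em&gt;reportID&lt;/em&gt; variable query and prints the response in which the &lt;code&gt;reportDataUri&lt;/code&gt; field can be found:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import requests

API_URL = &quot;https://us-west2-api.compute.cloud.hpe.com&quot;  # replace with the endpoint for your region
token = &quot;&lt;access-token&gt;&quot;  # obtained with the client credentials flow shown earlier

# Equivalent of the reportID variable query: list the reports
reports = requests.get(
    f&quot;{API_URL}/compute-ops/v1beta1/reports&quot;,
    headers={&quot;Authorization&quot;: f&quot;Bearer {token}&quot;},
)
reports.raise_for_status()

# The reportDataUri field of the carbon footprint report is what the
# reportID variable extracts (Column 1 in the variable definition)
print(reports.json())
&lt;/code&gt;&lt;/pre&gt;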
&lt;p&gt;Click on &lt;strong&gt;Save dashboard&lt;/strong&gt; and give it an appropriate name.&lt;/p&gt;
&lt;h2&gt;Creating Grafana panels&lt;/h2&gt;
&lt;p&gt;The last step of the configuration is to create panels for this new dashboard.&lt;/p&gt;
&lt;p&gt;To create a first panel, click on the &lt;strong&gt;Add panel&lt;/strong&gt; icon in the upper menu bar and click &lt;strong&gt;Add a new panel&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/2022-10-19-19_41_45-hpe-com-using-infinity-uql-native-api-calls-grafana-%E2%80%94-mozilla-firefox.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The following sections describe different panels you can generate in Grafana with HPE Compute Ops Management. They are provided as examples and the parameters can of course be modified to suit your needs. Each section lists the required parameters, provides API information and methods, and includes the UQL queries (if needed) used to transform the JSON data.&lt;/p&gt;
&lt;h3&gt;Carbon footprint report (all servers)&lt;/h3&gt;
&lt;p&gt;Analyzing carbon emissions can help you understand the impact of your servers on the environment. Use the carbon footprint report to view the estimated carbon emissions generated by powering all the servers in a Compute Ops Management application region. The report displays the following information for the past seven days:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Estimated total carbon emissions for all servers&lt;/li&gt;
&lt;li&gt;Estimated daily carbon emissions for all servers&lt;/li&gt;
&lt;li&gt;Estimated total carbon emissions for each server&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The report does not include estimates of the embedded carbon footprint from manufacturing and distribution of the servers.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Important note&lt;/strong&gt;: Carbon footprint information is only available after running a report from the HPE GreenLake Cloud Platform GUI and, be aware, there is currently no API method to generate a report, so this process can only be manual and via the GUI. The carbon footprint report, when run, collects data from the last seven days and Compute Ops Management only saves the most recent report. As a result, the visualization in Grafana will be limited to seven days prior to the report run date and to get an up-to-date graph, it is necessary to run a new report from the HPE GreenLake GUI.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h4&gt;Panel overview&lt;/h4&gt;
&lt;p&gt;&lt;img src=&quot;/img/2022-10-19-20_06_34-hpe-com-using-infinity-uql-native-api-calls-grafana-%E2%80%94-mozilla-firefox.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h4&gt;API request&lt;/h4&gt;
&lt;p&gt;&lt;code&gt;get /compute-ops/v1beta1/reports/{id}/data&lt;/code&gt;&lt;/p&gt;
&lt;h4&gt;API response&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &quot;id&quot;: &quot;843023bd-9412-46c2-8ac2-a3691f657fdb&quot;,
  &quot;type&quot;: &quot;compute-ops/report-data&quot;,
  &quot;name&quot;: &quot;Carbon Footprint Report (All Servers)&quot;,
  &quot;request&quot;: {
    &quot;reportDataStartAt&quot;: &quot;2022-02-04T01:04:20+00:00&quot;,
    &quot;reportDataEndAt&quot;: &quot;2022-02-11T01:04:20+00:00&quot;,
    &quot;reportType&quot;: &quot;CARBON_FOOTPRINT&quot;
  },
  &quot;series&quot;: [
    {
      &quot;name&quot;: &quot;Carbon Emissions&quot;,
      &quot;type&quot;: &quot;CO2_EMISSIONS&quot;,
      &quot;units&quot;: &quot;kgCO2e&quot;,
      &quot;unitsDisplayName&quot;: &quot;kilograms of carbon dioxide equivalent&quot;,
      &quot;subject&quot;: {
        &quot;displayName&quot;: &quot;1M512501AB&quot;,
        &quot;id&quot;: &quot;875765-S01+1M512501AB&quot;,
        &quot;type&quot;: &quot;SERVER&quot;
      },
      &quot;summary&quot;: {
        &quot;avg&quot;: 6.4,
        &quot;sum&quot;: 6.4
      },
      &quot;bucketDurationInSec&quot;: 86961.3,
      &quot;expectedSamplesPerBucket&quot;: 289.9,
      &quot;buckets&quot;: [
        {
          &quot;timestamp&quot;: &quot;2019-08-24T14:15:22Z&quot;,
          &quot;value&quot;: 6.4,
          &quot;numSamples&quot;: 233,
          &quot;noValueReason&quot;: null,
          &quot;extrapolated&quot;: 8
        }
      ]
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Panel configuration&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Data source: &lt;strong&gt;Infinity-COM&lt;/strong&gt;&lt;br /&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Type: &lt;strong&gt;UQL&lt;/strong&gt;   &lt;br /&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Format: &lt;strong&gt;Time Series&lt;/strong&gt; &lt;br /&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;URL: &lt;strong&gt;${url}${reportID}&lt;/strong&gt; &lt;br /&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Method: &lt;strong&gt;GET&lt;/strong&gt;   &lt;br /&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Header name: &lt;strong&gt;Authorization&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Header value = &lt;strong&gt;Bearer ${session}&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;UQL:&lt;br&gt;
&lt;strong&gt;parse-json&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;| jsonata  &quot;series[subject.type = &apos;TOTAL&apos;]&quot;&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;| scope &quot;buckets&quot;&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;| project &quot;Timestamp&quot;=todatetime(&quot;timestamp&quot;), &quot;Carbon Emissions (kgCO2e)&quot;=&quot;value&quot;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Description of the UQL commands:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;parse-json&lt;/strong&gt;:  command to instruct UQL to parse the response as JSON&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;jsonata&lt;/strong&gt;: command to select the object representing the carbon emission report for all servers available in the &lt;code&gt;series&lt;/code&gt; array&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;scope&lt;/strong&gt;: command to set &lt;code&gt;buckets&lt;/code&gt; as the output data&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;project&lt;/strong&gt;: command to create a table with two columns (&lt;em&gt;Timestamp&lt;/em&gt; and &lt;em&gt;Carbon Emissions (kgCO2e)&lt;/em&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Note: JSONata is an open-source expression language that is used for querying and transforming JSON data. You can refer to the following &lt;a href=&quot;https://www.stedi.com/docs/mappings/jsonata-cheatsheet&quot;&gt;JSONata Cheatsheet&lt;/a&gt; for tons of examples on how to manipulate JSON data.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Visualization: &lt;strong&gt;Time Series&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Unit: &lt;strong&gt;Number&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/2022-10-19-20_07_50-hpe-com-using-infinity-uql-native-api-calls-grafana-%E2%80%94-mozilla-firefox.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
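&lt;p&gt;If you would rather verify the transformation outside of Grafana, here is an illustrative Python sketch (assuming the &lt;em&gt;requests&lt;/em&gt; library, the endpoint for your region, and the &lt;code&gt;reportDataUri&lt;/code&gt; returned by the reports endpoint) that mimics the UQL query above: it selects the series whose &lt;code&gt;subject.type&lt;/code&gt; is TOTAL and projects its buckets into timestamp/value pairs:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import requests

API_URL = &quot;https://us-west2-api.compute.cloud.hpe.com&quot;  # replace with the endpoint for your region
REPORT_DATA_URI = &quot;&lt;reportDataUri&gt;&quot;  # e.g. /compute-ops/v1beta1/reports/{id}/data
token = &quot;&lt;access-token&gt;&quot;  # obtained with the client credentials flow shown earlier

report = requests.get(
    f&quot;{API_URL}{REPORT_DATA_URI}&quot;,
    headers={&quot;Authorization&quot;: f&quot;Bearer {token}&quot;},
).json()

# Same idea as: parse-json | jsonata &quot;series[subject.type = &apos;TOTAL&apos;]&quot; | scope &quot;buckets&quot;
#               | project &quot;Timestamp&quot;, &quot;Carbon Emissions (kgCO2e)&quot;
for series in report[&quot;series&quot;]:
    if series[&quot;subject&quot;][&quot;type&quot;] == &quot;TOTAL&quot;:
        for bucket in series[&quot;buckets&quot;]:
            print(bucket[&quot;timestamp&quot;], bucket[&quot;value&quot;])
&lt;/code&gt;&lt;/pre&gt;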
&lt;h3&gt;Carbon footprint report (each server)&lt;/h3&gt;
&lt;p&gt;This report displays the estimated total carbon emissions for each server.&lt;/p&gt;
&lt;h4&gt;Panel overview&lt;/h4&gt;
&lt;p&gt;&lt;img src=&quot;/img/2022-10-21-11_23_31-hpe-com-using-infinity-uql-native-api-calls-grafana-%E2%80%94-mozilla-firefox.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h4&gt;API request&lt;/h4&gt;
&lt;p&gt;&lt;code&gt;get /compute-ops/v1beta1/reports/{id}/data&lt;/code&gt;&lt;/p&gt;
&lt;h4&gt;API response&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &quot;id&quot;: &quot;843023bd-9412-46c2-8ac2-a3691f657fdb&quot;,
  &quot;type&quot;: &quot;compute-ops/report-data&quot;,
  &quot;name&quot;: &quot;Carbon Footprint Report (All Servers)&quot;,
  &quot;request&quot;: {
    &quot;reportDataStartAt&quot;: &quot;2022-02-04T01:04:20+00:00&quot;,
    &quot;reportDataEndAt&quot;: &quot;2022-02-11T01:04:20+00:00&quot;,
    &quot;reportType&quot;: &quot;CARBON_FOOTPRINT&quot;
  },
  &quot;series&quot;: [
    {
      &quot;name&quot;: &quot;Carbon Emissions&quot;,
      &quot;type&quot;: &quot;CO2_EMISSIONS&quot;,
      &quot;units&quot;: &quot;kgCO2e&quot;,
      &quot;unitsDisplayName&quot;: &quot;kilograms of carbon dioxide equivalent&quot;,
      &quot;subject&quot;: {
        &quot;displayName&quot;: &quot;1M512501AB&quot;,
        &quot;id&quot;: &quot;875765-S01+1M512501AB&quot;,
        &quot;type&quot;: &quot;SERVER&quot;
      },
      &quot;summary&quot;: {
        &quot;avg&quot;: 6.4,
        &quot;sum&quot;: 6.4
      },
      &quot;bucketDurationInSec&quot;: 86961.3,
      &quot;expectedSamplesPerBucket&quot;: 289.9,
      &quot;buckets&quot;: [
        {
          &quot;timestamp&quot;: &quot;2019-08-24T14:15:22Z&quot;,
          &quot;value&quot;: 6.4,
          &quot;numSamples&quot;: 233,
          &quot;noValueReason&quot;: null,
          &quot;extrapolated&quot;: 8
        }
      ]
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Panel configuration&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Data source: &lt;strong&gt;Infinity-COM&lt;/strong&gt;&lt;br /&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Type: &lt;strong&gt;UQL&lt;/strong&gt;   &lt;br /&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Format: &lt;strong&gt;Table&lt;/strong&gt;   &lt;br /&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;URL: &lt;strong&gt;${url}${reportID}&lt;/strong&gt; &lt;br /&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Method: &lt;strong&gt;GET&lt;/strong&gt;   &lt;br /&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Header name: &lt;strong&gt;Authorization&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Header value = &lt;strong&gt;Bearer ${session}&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;UQL:&lt;br&gt;
&lt;strong&gt;parse-json&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;| scope &quot;series&quot;&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;| project &quot;Servers&quot;=&quot;subject.displayName&quot;, &quot;Carbon Emissions&quot;=&quot;summary.sum&quot;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Description of the UQL commands:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;parse-json&lt;/strong&gt;:  command to instruct UQL to parse the response as JSON&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;scope&lt;/strong&gt;: command to set &lt;code&gt;series&lt;/code&gt; as the output data&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;project&lt;/strong&gt;: command to create a table with two columns (&lt;em&gt;Servers&lt;/em&gt; and &lt;em&gt;Carbon Emissions&lt;/em&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Override1: Fields with name = &lt;strong&gt;Carbon Emissions&lt;/strong&gt; / Cell display Mode = &lt;strong&gt;LCD Gauge&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Visualization: &lt;strong&gt;Table&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Unit: &lt;strong&gt;kgCO2e&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Color scheme: &lt;strong&gt;Green-Yellow-Red (by value)&lt;/strong&gt;&lt;br /&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img
 src=&quot;/img/2022-10-19-20_26_28-hpe-com-using-infinity-uql-native-api-calls-grafana-—-mozilla-firefox.png&quot;
/&gt;&lt;/p&gt;
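&lt;p&gt;The per-server table can be reproduced the same way. Continuing from the previous sketch (same report payload and assumptions), this short snippet performs the equivalent of the UQL projection used by this panel:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# report is the JSON payload fetched in the previous sketch
# Equivalent of: parse-json | scope &quot;series&quot;
#                | project &quot;Servers&quot;=&quot;subject.displayName&quot;, &quot;Carbon Emissions&quot;=&quot;summary.sum&quot;
for series in report[&quot;series&quot;]:
    print(series[&quot;subject&quot;][&quot;displayName&quot;], series[&quot;summary&quot;][&quot;sum&quot;], series[&quot;units&quot;])
&lt;/code&gt;&lt;/pre&gt;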
&lt;h3&gt;Server health and information panel&lt;/h3&gt;
&lt;p&gt;This panel displays the health and information for each server. The information you want to display in the panel is very flexible; many properties are available in the servers resource, such as server model, serial number, power state, etc.&lt;/p&gt;
&lt;h4&gt;Panel overview&lt;/h4&gt;
&lt;p&gt;&lt;img src=&quot;/img/2022-10-21-11_24_26-hpe-com-using-infinity-uql-native-api-calls-grafana-%E2%80%94-mozilla-firefox.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h4&gt;API request&lt;/h4&gt;
&lt;p&gt;&lt;code&gt;get /compute-ops/v1beta2/servers/&lt;/code&gt;&lt;/p&gt;
&lt;h4&gt;API response&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
    &quot;id&quot;: &quot;P39886-B21+CN70490RXS&quot;,
    &quot;type&quot;: &quot;compute-ops/server&quot;,
    &quot;platformFamily&quot;: &quot;PROLIANT&quot;,
    &quot;resourceUri&quot;: &quot;/compute-ops/v1beta2/servers/P39886-B21+CN70490RXS&quot;,
    &quot;name&quot;: &quot;HPE-HOL17&quot;,
    &quot;createdAt&quot;: &quot;2022-03-25T16:22:22.373640+00:00&quot;,
    &quot;updatedAt&quot;: &quot;2022-10-19T20:18:08.466399+00:00&quot;,
    &quot;generation&quot;: 515,
    &quot;hardware&quot;: {
        &quot;serialNumber&quot;: &quot;CN70490RXS&quot;,
        &quot;model&quot;: &quot;ProLiant DL360 Gen10 Plus&quot;,
        &quot;uuid&quot;: &quot;38393350-3638-4E43-3730-343930525853&quot;,
        &quot;productId&quot;: &quot;P39886-B21&quot;,
        &quot;powerState&quot;: &quot;OFF&quot;,
        &quot;indicatorLed&quot;: &quot;OFF&quot;,
        &quot;health&quot;: {
            &quot;summary&quot;: &quot;WARNING&quot;,
            &quot;healthLED&quot;: &quot;WARNING&quot;,
            &quot;fans&quot;: &quot;OK&quot;,
            &quot;fanRedundancy&quot;: &quot;REDUNDANT&quot;,
            &quot;liquidCooling&quot;: &quot;NOT_PRESENT&quot;,
            &quot;liquidCoolingRedundancy&quot;: &quot;NOT_PRESENT&quot;,
            &quot;memory&quot;: &quot;OK&quot;,
            &quot;network&quot;: &quot;UNKNOWN&quot;,
            &quot;powerSupplies&quot;: &quot;OK&quot;,
            &quot;powerSupplyRedundancy&quot;: &quot;NOT_PRESENT&quot;,
            &quot;processor&quot;: &quot;OK&quot;,
            &quot;storage&quot;: &quot;OK&quot;,
            &quot;temperature&quot;: &quot;OK&quot;,
            &quot;bios&quot;: &quot;WARNING&quot;,
            &quot;smartStorage&quot;: &quot;OK&quot;
        },
        &quot;bmc&quot;: {
            &quot;mac&quot;: &quot;B4:7A:F1:54:71:68&quot;,
            &quot;ip&quot;: &quot;172.30.231.79&quot;,
            &quot;hostname&quot;: &quot;None&quot;
        }
    },
    &quot;state&quot;: {
        &quot;managed&quot;: true,
        &quot;connected&quot;: true,
        &quot;connectedModifiedAt&quot;: &quot;2022-10-18T17:38:10.624330+00:00&quot;,
        &quot;subscriptionState&quot;: &quot;SUBSCRIBED&quot;,
        &quot;subscriptionTier&quot;: &quot;Enhanced&quot;,
        &quot;subscriptionExpiresAt&quot;: &quot;2027-01-31T19:11:00+00:00&quot;
    },
    &quot;firmwareInventory&quot;: [
        {
            &quot;name&quot;: &quot;iLO 5&quot;,
            &quot;version&quot;: &quot;2.72 Sep 04 2022&quot;,
            &quot;deviceContext&quot;: &quot;System Board&quot;
        },
        &amp;#x3C;...&gt;
        {
            &quot;name&quot;: &quot;8 SFF 12G x1SAS UBM2 BC BP&quot;,
            &quot;version&quot;: &quot;1.20&quot;,
            &quot;deviceContext&quot;: &quot;Slot=12:Port=2I:Box=1&quot;
        }
    ],
    &quot;softwareInventory&quot;: [],
    &quot;lastFirmwareUpdate&quot;: {
        &quot;status&quot;: &quot;OK&quot;,
        &quot;attemptedAt&quot;: &quot;2022-10-19T20:12:03.401750Z&quot;,
        &quot;firmwareInventoryUpdates&quot;: [
            {
                &quot;name&quot;: &quot;U46_1.64_08_11_2022.fwpkg&quot;,
                &quot;version&quot;: &quot;1.64_08-11-2022&quot;,
                &quot;status&quot;: &quot;OK&quot;
            },
            &amp;#x3C;...&gt;
            {
                &quot;name&quot;: &quot;cp053854.exe&quot;,
                &quot;version&quot;: &quot;5.32&quot;,
                &quot;status&quot;: &quot;OK&quot;
            }
        ]
    },
    &quot;host&quot;: {
        &quot;osName&quot;: &quot;None&quot;,
        &quot;osVersion&quot;: &quot;None&quot;,
        &quot;hostname&quot;: &quot;HPE-HOL17&quot;,
        &quot;osType&quot;: null,
        &quot;osDescription&quot;: &quot;None&quot;
    },
    &quot;firmwareBundleUri&quot;: &quot;/v1/firmware-bundles/ea469b39ed5106434169397999e816bf&quot;,
    &quot;tags&quot;: {},
    &quot;biosFamily&quot;: &quot;U46&quot;,
    &quot;processorVendor&quot;: &quot;Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz&quot;,
    &quot;autoIloFwUpdate&quot;: true
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Panel configuration&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;Data source: &lt;strong&gt;Infinity-COM&lt;/strong&gt;&lt;br /&gt;&lt;/li&gt;
&lt;li&gt;Type: &lt;strong&gt;JSON&lt;/strong&gt;   &lt;br /&gt;&lt;/li&gt;
&lt;li&gt;Format: &lt;strong&gt;Table&lt;/strong&gt;   &lt;br /&gt;&lt;/li&gt;
&lt;li&gt;URL: &lt;strong&gt;${url}/compute-ops/v1beta2/servers?limit=100&lt;/strong&gt; &lt;br /&gt;&lt;/li&gt;
&lt;li&gt;Method: &lt;strong&gt;GET&lt;/strong&gt;   &lt;br /&gt;&lt;/li&gt;
&lt;li&gt;Header name: &lt;strong&gt;Authorization&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Header value = &lt;strong&gt;Bearer ${session}&lt;/strong&gt;&lt;br /&gt;&lt;/li&gt;
&lt;li&gt;Column 1: &lt;strong&gt;name&lt;/strong&gt; as &lt;strong&gt;Name&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Column 2: &lt;strong&gt;hardware.serialNumber&lt;/strong&gt; as &lt;strong&gt;Serial Number&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Column 3: &lt;strong&gt;hardware.model&lt;/strong&gt; as &lt;strong&gt;Model&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Column 4: &lt;strong&gt;hardware.health.summary&lt;/strong&gt; as &lt;strong&gt;Health&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Column 5: &lt;strong&gt;hardware.powerState&lt;/strong&gt; as &lt;strong&gt;Power State&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Column 6: &lt;strong&gt;hardware.bmc.ip&lt;/strong&gt; as &lt;strong&gt;iLO IP&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Column 7: &lt;strong&gt;lastFirmwareUpdate.status&lt;/strong&gt; as &lt;strong&gt;Last FW Update Status&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img
 src=&quot;/img/2022-10-20-17_20_28-hpe-com-using-infinity-uql-native-api-calls-grafana-—-mozilla-firefox.png&quot;
/&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Override1: Fields with name = &lt;strong&gt;Health&lt;/strong&gt; / Cell display Mode = &lt;strong&gt;Color text&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Override2: Fields with name = &lt;strong&gt;Power State&lt;/strong&gt; / Cell display Mode = &lt;strong&gt;Color text&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Override3: Fields with name = &lt;strong&gt;Last FW Update Status&lt;/strong&gt; / Cell display Mode = &lt;strong&gt;Color text&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img
 src=&quot;/img/2022-10-20-17_23_49-.png&quot;
/&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Visualization: &lt;strong&gt;Table&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Value mappings:
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;OK&lt;/strong&gt;: Green&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;WARNING&lt;/strong&gt;: Orange&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;ERROR&lt;/strong&gt;: Red&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;ON&lt;/strong&gt;: Green&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;OFF&lt;/strong&gt;: Red&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img
 src=&quot;/img/2022-10-20-17_22_27-hpe-com-using-infinity-uql-native-api-calls-grafana-—-mozilla-firefox.png&quot;
/&gt;&lt;/p&gt;
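&lt;p&gt;As a quick check of the same data outside of Grafana, here is an illustrative Python sketch (assuming the &lt;em&gt;requests&lt;/em&gt; library, an access token from the client-credentials flow, and that the servers collection is returned under an &lt;code&gt;items&lt;/code&gt; key, which you should confirm against the API reference) that prints the columns used by this panel:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import requests

API_URL = &quot;https://us-west2-api.compute.cloud.hpe.com&quot;  # replace with the endpoint for your region
token = &quot;&lt;access-token&gt;&quot;  # obtained with the client credentials flow shown earlier

resp = requests.get(
    f&quot;{API_URL}/compute-ops/v1beta2/servers?limit=100&quot;,
    headers={&quot;Authorization&quot;: f&quot;Bearer {token}&quot;},
)
resp.raise_for_status()

# Assumption: the servers collection is wrapped in an &quot;items&quot; list; each entry
# exposes the fields mapped to the panel columns above
for server in resp.json().get(&quot;items&quot;, []):
    hw = server[&quot;hardware&quot;]
    print(
        server[&quot;name&quot;],
        hw[&quot;serialNumber&quot;],
        hw[&quot;model&quot;],
        hw[&quot;health&quot;][&quot;summary&quot;],
        hw[&quot;powerState&quot;],
        hw[&quot;bmc&quot;][&quot;ip&quot;],
        server[&quot;lastFirmwareUpdate&quot;][&quot;status&quot;],
    )
&lt;/code&gt;&lt;/pre&gt;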
&lt;p&gt;Many other panels using other &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake/services/compute-ops/public/openapi/compute-ops-latest/overview/&quot;&gt;API resources&lt;/a&gt; can be generated, for example for firmware bundles, groups, activities, etc. And now you have all the basics to get started and create the panels you need in your environment.&lt;/p&gt;
&lt;p&gt;This concludes this blog post. I hope you find it useful and should you have any feedback, please send me a &lt;a href=&quot;mailto:lio@hpe.com&quot;&gt;message&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Open source contributor helps Istio integrate with SPIRE]]></title><description><![CDATA[In this blog series, you’ll get to meet some of the open source experts working with the HPE Developer Community team. In this post, I’ll be…]]></description><link>https://developer.hpe.com/open-source-contributor-helps-istio-integrate-with-spire/</link><guid isPermaLink="false">https://developer.hpe.com/open-source-contributor-helps-istio-integrate-with-spire/</guid><pubDate>Wed, 12 Oct 2022 15:24:29 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;/img/max-lambrecht-istio-with-spire-1200-x-675.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In this blog series, you’ll get to meet some of the open source experts working with the HPE Developer Community team. In this post, I’ll be interviewing Max Lambrecht, a software engineer working with the HPE Security Engineering team on Zero Trust Security. Max has been an open source contributor to both the SPIFFE/SPIRE and Istio projects; most importantly, enabling their integration.&lt;/p&gt;
&lt;h2&gt;How did you get involved with open source?&lt;/h2&gt;
&lt;p&gt;I first became involved with open source in 2018 when I joined Scytale and began working on SPIFFE and SPIRE. Scytale was subsequently acquired by Hewlett Packard Enterprise (HPE) in 2020. In 2021, there were some conversations within HPE engineering about integrating Istio with SPIRE, and a new team was created to research and implement that integration. With my background in SPIFFE and SPIRE, and the experience I had with Istio, the project looked like a good fit for me and a great opportunity to contribute to open source.&lt;/p&gt;
&lt;h2&gt;What exactly is Istio?&lt;/h2&gt;
&lt;p&gt;Istio is by far the most popular service mesh in the world. The Istio service mesh helps developers cope with the challenges of distributed or microservices architectures. It provides a dedicated infrastructure layer through which you can add capabilities, like observability, traffic management, and security, to modern microservice-based applications without adding them to your code.&lt;/p&gt;
&lt;p&gt;The Istio project was started by teams from Google and IBM in partnership with the Envoy teams from Lyft. It’s a huge project with many contributors, including several large companies, from around the world. In April of 2022, Istio was donated to the Cloud Native Computing Foundation, the open-source organization which SPIFFE and SPIRE are also part of.&lt;/p&gt;
&lt;h2&gt;Why was it so important for you to help integrate Istio with SPIRE?&lt;/h2&gt;
&lt;p&gt;Over the last four or five years, probably just after SPIRE started getting attention from the open-source community and companies started using it, there was interest in using SPIRE as the certificate provider for services running on an Istio deployment, enabling all the powerful capabilities of SPIRE for software identities within that environment.&lt;/p&gt;
&lt;p&gt;Service meshes are becoming more and more relevant as the Zero Trust security model gains traction and the perimeter-based model proves to be untenable in a world of hybrid clouds and increasingly complex microservices architectures.&lt;/p&gt;
&lt;p&gt;To this scenario, SPIRE brings strongly attested identities that are automatically delivered in heterogeneous environments. SPIRE enables multiple mechanisms to assert the identity of a machine and a service running on it. This adds a strong level of security to service meshes, as identities are key in securing communication across services. SPIFFE/SPIRE is a foundational building block of a zero-trust infrastructure.&lt;/p&gt;
&lt;p&gt;By bringing together these two technologies, Istio, a powerful service mesh, and SPIRE, a toolchain of APIs for delivering strong identities, I was able to help provide a way to greatly increase the security posture of services deployments.&lt;/p&gt;
&lt;h2&gt;How did the project progress?&lt;/h2&gt;
&lt;p&gt;The project involved several different activities that ranged from understanding Istio’s complex code base to testing several approaches to achieve integration with SPIRE, and then implementing a proof of concept of the selected approach, documenting it, and presenting it to people from the Istio community (Istio users, Istio maintainers). We discussed the potential solutions during Istio maintainers&apos; calls and received useful feedback that helped us improve the solution. Once we came to an agreement on how to do it, we proceeded by putting together a pull request to submit the implementation into Istio. It was received with very good feedback, and finally accepted and merged. It was a big achievement for the team, for HPE, and for the SPIFFE/SPIRE and Istio communities. It’s something that people in both communities had been looking forward to for years.&lt;/p&gt;
&lt;h2&gt;What do you think the future will look like?&lt;/h2&gt;
&lt;p&gt;I think this will bring about a significant increase in the adoption of SPIFFE and SPIRE, as it will attract contributors to these projects, and they will continue to work on the development of tools to automate the deployment and configuration of SPIRE integrated with Istio.&lt;/p&gt;
&lt;p&gt;And given the &lt;a href=&quot;https://www.cncf.io/announcements/2022/09/20/spiffe-and-spire-projects-graduate-from-cloud-native-computing-foundation-incubator/&quot;&gt;recent graduation of SPIFFE and SPIRE as CNCF projects&lt;/a&gt;, I foresee many will be attracted even more to adopting these projects.&lt;/p&gt;
&lt;h2&gt;Is there anything else you’d like to share with our readers?&lt;/h2&gt;
&lt;p&gt;The humble words of wisdom I’d like to leave with readers is this: Contribute back to the community, contribute to open source, find a project that you find interesting and/or one that can be useful to others. Look for tasks that you can take on and engage with the project’s community in Slack. You’ll grow as a professional, you’ll grow your network, you’ll grow your skills, and ultimately feel a huge sense of accomplishment. Sometimes you’ll have to overcome impostor syndrome. We all experience it from time to time. But that’s a good sign... it means that you are getting involved in things that can make you grow and learn.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[How to monitor HPE OneView infrastructure with Grafana Metrics Dashboards and InfluxDB]]></title><description><![CDATA[The purpose of this blog post is to describe how to generate Grafana dashboards using InfluxDB and PowerShell scripts to monitor any HPE…]]></description><link>https://developer.hpe.com/how-to-monitor-hpe-oneview-infrastructure-with-grafana-metrics-dashboards-and-influxdb/</link><guid isPermaLink="false">https://developer.hpe.com/how-to-monitor-hpe-oneview-infrastructure-with-grafana-metrics-dashboards-and-influxdb/</guid><pubDate>Wed, 12 Oct 2022 10:00:35 GMT</pubDate><content:encoded>&lt;style&gt;ul li{ font-size:26px;padding-bottom: 0.5em;}&lt;/style&gt;
&lt;style&gt; i{ color:grey;font-family:&apos;Courier New&apos;;font-size:22px; } &lt;/style&gt;
&lt;p&gt;The purpose of this blog post is to describe how to generate Grafana dashboards using InfluxDB and PowerShell scripts to monitor any HPE Compute infrastructure managed by HPE OneView.&lt;/p&gt;
&lt;h1&gt;Grafana Dashboards&lt;/h1&gt;
&lt;p&gt;IT infrastructure metrics visualization is critical for health monitoring, prediction, and capacity planning. It provides a powerful way of viewing infrastructure utilization, revealing issues and helping maintain uninterrupted services.&lt;/p&gt;
&lt;p&gt;Grafana’s time-series graphs are the perfect enabler for IT infrastructure optimization. They can assist administrators in monitoring temperature changes, network traffic performance, power consumption, and much more. They can be used to compare data over time to note trends and detect issues, allowing administrators to make any necessary adjustments and prevent downtime.&lt;/p&gt;
&lt;p&gt;The following picture shows a typical HPE infrastructure dashboard with Synergy frame, compute, and interconnect metrics:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image001.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h1&gt;HPE OneView metric resources&lt;/h1&gt;
&lt;p&gt;Multiple metrics resources are supported by HPE OneView through the API, including CPU, memory, power consumption, temperature, health, and capacity data for resources such as enclosures, interconnects, and server hardware. Network statistics and network throughput are also available for each uplink and downlink port on &quot;interconnects&quot; such as HPE Virtual Connect modules.&lt;/p&gt;
&lt;p&gt;The following table provides the resource metrics that are accessible through the HPE OneView RESTful API:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th align=&quot;left&quot;&gt;&lt;strong&gt;Server hardware Metrics&lt;/strong&gt;&lt;/th&gt;
&lt;th align=&quot;left&quot;&gt;&lt;strong&gt;URI&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;Ambient Temperature&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;&lt;em&gt;/rest/server-hardware/{id}/utilization?fields=AmbientTemperature&lt;/em&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;Cpu Average Frequency&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;&lt;em&gt;/rest/server-hardware/{id}/utilization?fields=CpuAverageFreq&lt;/em&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;Cpu Utilization&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;&lt;em&gt;/rest/server-hardware/{id}/utilization?fields=CpuUtilization&lt;/em&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;Average Power&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;&lt;em&gt;/rest/server-hardware/{id}/utilization?fields=AveragePower&lt;/em&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;Peak Power&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;&lt;em&gt;/rest/server-hardware/{id}/utilization?fields=PeakPower&lt;/em&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;Power Cap&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;&lt;em&gt;/rest/server-hardware/{id}/utilization?fields=PowerCap&lt;/em&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th align=&quot;left&quot;&gt;&lt;strong&gt;Enclosures Metrics&lt;/strong&gt;&lt;/th&gt;
&lt;th align=&quot;left&quot;&gt;&lt;strong&gt;URI&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;Ambient Temperature&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;&lt;em&gt;/rest/enclosures/{id}/utilization?fields=AmbientTemperature&lt;/em&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;Average Power&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;&lt;em&gt;/rest/enclosures/{id}/utilization?fields=AveragePower&lt;/em&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;Peak Power&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;&lt;em&gt;/rest/enclosures/{id}/utilization?fields=PeakPower&lt;/em&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th align=&quot;left&quot;&gt;&lt;strong&gt;Interconnect Metrics&lt;/strong&gt;&lt;/th&gt;
&lt;th align=&quot;left&quot;&gt;&lt;strong&gt;URI&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;Statistics for the specified port name on an interconnect&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;&lt;em&gt;/rest/interconnects/{id}/statistics/portname&lt;/em&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;Interconnect cpu and memory utilization data&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;&lt;em&gt;/rest/interconnects/{id}/utilization&lt;/em&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
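&lt;p&gt;If you want to see what one of these utilization endpoints returns before building anything, a minimal sketch with the HPE OneView PowerShell library could look like the following. The appliance hostname and server name are placeholders, and the &lt;em&gt;Connect-OVMgmt&lt;/em&gt;, &lt;em&gt;Get-OVServer&lt;/em&gt;, and &lt;em&gt;Send-OVRequest&lt;/em&gt; cmdlets are those provided by the library:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Minimal sketch: inspect the CPU utilization samples of one server
# through the HPE OneView PowerShell library (placeholder names below).
Import-Module HPEOneView.660

# Connect to the HPE OneView appliance (you are prompted for credentials).
Connect-OVMgmt -Hostname &quot;oneview.example.com&quot; -Credential (Get-Credential)

# Retrieve the server-hardware resource, then call its utilization URI.
$server = Get-OVServer -Name &quot;Frame3, bay 5&quot;
$cpu = Send-OVRequest -Uri &quot;$($server.uri)/utilization?fields=CpuUtilization&quot;

# Dump the raw response to see the timestamped samples returned by the API.
$cpu | ConvertTo-Json -Depth 5
&lt;/code&gt;&lt;/pre&gt;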
&lt;p&gt;HPE OneView metrics are enabled by default. For HPE Virtual Connect network statistics, the Utilization Sampling settings defined in the logical interconnect group controls the data collection rate and sample interval value. By default, the HPE Virtual Connect module sampling rate is 12 samples per hour, as shown in the following figure:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image002.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h1&gt;InfluxDB Time-series database&lt;/h1&gt;
&lt;p&gt;My decision to use InfluxDB with Grafana metrics dashboards to monitor HPE OneView infrastructure was made for several reasons: InfluxDB can be installed on both Microsoft Windows and Linux, it is an open-source tool, and many useful modules are available for writing entries into the database. Other options include Prometheus, which can also be used to collect and record measurements in real time.&lt;/p&gt;
&lt;p&gt;To collect HPE OneView metrics, I use a PowerShell script. This script collects utilization statistics of defined resources from the HPE OneView API periodically and continually transmits the metrics to InfluxDB via its REST API. The timestamped metrics data is saved into the InfluxDB time series database that Grafana uses to generate the graphs.&lt;/p&gt;
&lt;p&gt;The script is an independent process that must run continuously.&lt;/p&gt;
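&lt;p&gt;As a rough illustration of what &quot;transmitting the metrics to InfluxDB via its REST API&quot; boils down to, here is a minimal, hedged sketch (not the actual script from the repository): it writes a single sample to the InfluxDB 1.x &lt;em&gt;/write&lt;/em&gt; endpoint using the line protocol. The measurement, field, value, and credentials are illustrative:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Minimal sketch: push one sample to InfluxDB 1.x using the line protocol.
# Database name, credentials, measurement, and value are illustrative.
$influxUrl = &quot;http://localhost:8086&quot;
$db        = &quot;ov_icm_db&quot;
$body      = &quot;Frame3-Interconnect3-Q1 receiveKilobytesPerSec=123.4&quot;

Invoke-RestMethod -Method Post `
    -Uri &quot;$influxUrl/write?db=$db&amp;u=admin&amp;p=P@ssw0rd&quot; `
    -Body $body
&lt;/code&gt;&lt;/pre&gt;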
&lt;p&gt;The following diagram describes the different components of the solution:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image003.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Pros and Cons about this solution&lt;/h2&gt;
&lt;p&gt;As with any solution, there are both Pros and Cons to using it.&lt;/p&gt;
&lt;p&gt;Pros:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Enables time-series graphs for the HPE OneView Server hardware utilization statistics and HPE Virtual Connect modules utilization statistics&lt;/li&gt;
&lt;li&gt;Supports collecting metrics from any API (simply requires the appropriate PowerShell script for the collection)&lt;/li&gt;
&lt;li&gt;Provides a flexible solution using a widely used, cross-platform scripting language&lt;/li&gt;
&lt;li&gt;Cross-platform support: all components can be installed on Microsoft Windows or Linux&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Cons:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Requires development of PowerShell scripts if the examples provided do not meet your needs&lt;/li&gt;
&lt;li&gt;Requires in-depth knowledge of the language, API, authentication, and methods&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;Configuration&lt;/h1&gt;
&lt;h2&gt;Prerequisites&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Grafana and InfluxDB must be installed, started, and enabled&lt;/li&gt;
&lt;li&gt;A firewall rule must be created to allow TCP port 8086 (used by InfluxDB API, by default)&lt;/li&gt;
&lt;li&gt;PowerShell Core for Linux must be installed if a Linux server is used to run the PowerShell scripts&lt;/li&gt;
&lt;li&gt;HPE OneView PowerShell library 6.60 or later must be used&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Configure InfluxDB http authentication&lt;/h2&gt;
&lt;p&gt;By default, all security features are disabled in InfluxDB, so it is recommended to set up authentication by creating an &lt;em&gt;admin&lt;/em&gt; user.&lt;/p&gt;
&lt;p&gt;To launch the influx command line interface (CLI), type:&lt;br&gt;
&gt; &lt;i&gt;influx&lt;/i&gt;&lt;/p&gt;
&lt;p&gt;Then create a user with an authentication password:&lt;br&gt;
&gt; &lt;i&gt;&lt;em&gt;CREATE USER admin WITH PASSWORD &apos;P@ssw0rd&apos; WITH ALL PRIVILEGES&lt;/em&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p&gt;Once created, authenticate using:&lt;br&gt;
&gt; &lt;i&gt;&lt;em&gt;auth&lt;/em&gt;&lt;/i&gt;&lt;br&gt;
username: &lt;i&gt;&lt;em&gt;admin&lt;/em&gt;&lt;/i&gt;&lt;br&gt;
password: &lt;i&gt;********&lt;/i&gt;&lt;/p&gt;
&lt;p&gt;To enable the http authentication, you need to modify the InfluxDB configuration file. Go to the &lt;strong&gt;[http]&lt;/strong&gt; section of &lt;strong&gt;/etc/influxdb/influxdb.conf&lt;/strong&gt; and change the &lt;strong&gt;auth-enabled&lt;/strong&gt; value to &lt;strong&gt;true.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;[http]&lt;br&gt;
auth-enabled = &lt;i&gt;&lt;em&gt;true&lt;/em&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p&gt;Once modified, restart the InfluxDB service:&lt;br&gt;
&gt; &lt;i&gt;&lt;em&gt;sudo systemctl restart influxdb&lt;/em&gt;&lt;/i&gt;&lt;/p&gt;
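&lt;p&gt;To confirm that authentication is now enforced, you can run an authenticated query against the InfluxDB HTTP API (the credentials below are the ones created above):&lt;br&gt;
&gt; &lt;i&gt;&lt;em&gt;curl -G http://localhost:8086/query -u admin:P@ssw0rd --data-urlencode &quot;q=SHOW DATABASES&quot;&lt;/em&gt;&lt;/i&gt;&lt;/p&gt;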
&lt;h2&gt;PowerShell scripts for HPE OneView metrics collection&lt;/h2&gt;
&lt;p&gt;PowerShell scripts to collect metrics from the HPE OneView API can be found in my GitHub repository &lt;a href=&quot;https://github.com/jullienl/HPE-Synergy-OneView-demos/tree/master/Powershell/Grafana%20Metrics&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Two distinct scripts are available, one for the interconnect metrics and one for compute, enclosure, and server profile metrics.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image004.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;For each script, it is important to provide all the required variables for HPE OneView and InfluxDB.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image005.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;For &lt;em&gt;Grafana-Interconnect-monitoring.ps1&lt;/em&gt;, at the beginning of the script you need to provide the interconnect module names and port IDs that you would like to monitor using a hash table format:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/picture1.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
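&lt;p&gt;The exact layout is shown in the screenshot above. Purely as an illustration (the authoritative structure is the one in the script on GitHub), such a hash table could look like this, with the interconnect names written exactly as they appear in HPE OneView:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Illustrative sketch only: interconnect module names mapped to the port IDs to collect.
$Ports = @{
    &quot;Frame3, interconnect 3&quot; = @(&quot;Q1&quot;, &quot;Q2&quot;, &quot;Q3&quot;)
    &quot;Frame3, interconnect 6&quot; = @(&quot;Q1&quot;, &quot;Q2&quot;, &quot;Q3&quot;)
}
&lt;/code&gt;&lt;/pre&gt;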
&lt;p&gt;Note that the interconnect modules and port names can be found in the HPE OneView UI (in the Interconnects menu):&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image006.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;For &lt;em&gt;Grafana-Server_Enclosure-monitoring.ps1&lt;/em&gt;, you need to provide, at the beginning of the script, the resource names (server hardware, server profile, or enclosure) and the utilization metrics (CPU, power, or temperature) that you want to monitor, using a hash table format:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image007.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The names of the resources that need to be provided can be easily identified in the corresponding menus of the HPE OneView user interface.&lt;/p&gt;
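&lt;p&gt;Here again, the screenshot above shows the expected layout. As a purely illustrative sketch (the variable name and structure below are assumptions; the authoritative version is in the GitHub script), it could resemble:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Illustrative sketch only: resources to monitor and the utilization metrics to collect.
$Resources = @{
    &quot;Frame3, bay 5&quot; = @(&quot;CpuUtilization&quot;, &quot;AveragePower&quot;, &quot;AmbientTemperature&quot;)
    &quot;Frame3&quot;        = @(&quot;AveragePower&quot;, &quot;AmbientTemperature&quot;)
}
&lt;/code&gt;&lt;/pre&gt;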
&lt;p&gt;These scripts are written to collect metrics continually. They can be run in the background on a Linux system using a crontab configuration, or on a Microsoft Windows system using Task Scheduler.&lt;/p&gt;
&lt;h3&gt;How to run the scripts on a Microsoft Windows machine?&lt;/h3&gt;
&lt;p&gt;The following commands can be used to schedule both jobs on a Microsoft Windows machine:&lt;br&gt;
&gt; &lt;i&gt;&lt;em&gt;$trigger = New-JobTrigger -AtStartup -RandomDelay 00:00:30&lt;/em&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p&gt;&gt; &lt;i&gt;&lt;em&gt;Register-ScheduledJob -Trigger $trigger -FilePath &quot;...\Grafana-Server_Enclosure-monitoring.ps1&quot; -Name GrafanaServerEnclosureMonitoring&lt;/em&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p&gt;&gt; &lt;i&gt;&lt;em&gt;Register-ScheduledJob -Trigger $trigger -FilePath &quot;...\Grafana-Interconnect-monitoring.ps1&quot; -Name GrafanaInterconnectMonitoring&lt;/em&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p&gt;You can check the job schedule by typing:&lt;br&gt;
&gt; &lt;i&gt;&lt;em&gt;Get-ScheduledJob&lt;/em&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image008.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Alternatively, launch Windows Task Scheduler by pressing the Windows + R keys to open the Run dialog and entering:&lt;br&gt;
&gt; &lt;i&gt;&lt;em&gt;taskschd.msc&lt;/em&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image009.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;As I am using an &quot;at startup&quot; trigger, the server must be restarted for the scripts to run.&lt;/p&gt;
&lt;p&gt;Restart the server and confirm that the scripts are running. Once the server has restarted, you can run the following on a Microsoft Windows machine:&lt;br&gt;
&gt; &lt;i&gt;&lt;em&gt;Get-Job&lt;/em&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image010.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;How to run the scripts on a Linux machine?&lt;/h3&gt;
&lt;p&gt;PowerShell can be installed on many Linux distributions today and works perfectly well; a crontab configuration can then run the scripts in the background. This allows a single Linux machine to host all of the components (i.e., Grafana, InfluxDB, and the execution of the PowerShell scripts).&lt;/p&gt;
&lt;p&gt;To learn more, you can refer to &lt;a href=&quot;https://docs.microsoft.com/en-us/powershell/scripting/install/installing-powershell-on-linux?view=powershell-7.2&quot;&gt;this article&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The Linux repositories proposed by Microsoft can be found &lt;a href=&quot;https://packages.microsoft.com/config/&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image011.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;On a RHEL/CentOS virtual machine, you can use the following steps:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Add the Microsoft package repository:&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&gt; &lt;i&gt;&lt;em&gt;curl &lt;a href=&quot;https://packages.microsoft.com/config/centos/8/prod.repo&quot;&gt;https://packages.microsoft.com/config/centos/8/prod.repo&lt;/a&gt; | sudo tee /etc/yum.repos.d/microsoft.repo&lt;/em&gt;&lt;/i&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Run the PowerShell installation:&lt;br&gt;
&gt; &lt;i&gt;&lt;em&gt;yum install powershell&lt;/em&gt;&lt;/i&gt;&lt;/li&gt;
&lt;li&gt;Copy the script files to the Linux system and set the execution permission on both files:&lt;br&gt;
&gt; &lt;i&gt;&lt;em&gt;chmod +x Grafana-Interconnect-monitoring.ps1&lt;/em&gt;&lt;/i&gt;&lt;br&gt;
&gt; &lt;i&gt;&lt;em&gt;chmod +x Grafana-Server_Enclosure-monitoring.ps1&lt;/em&gt;&lt;/i&gt;&lt;/li&gt;
&lt;li&gt;Open the crontab configuration:&lt;br&gt;
&gt; &lt;i&gt;&lt;em&gt;crontab -e&lt;/em&gt;&lt;/i&gt;&lt;/li&gt;
&lt;li&gt;Add two entries, one for each script, executed at startup after a short sleep:
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;@reboot sleep 30 &amp;#x26;&amp;#x26; pwsh -File &quot;.../Grafana-Interconnect-monitoring.ps1&quot;&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;@reboot sleep 30 &amp;#x26;&amp;#x26; pwsh -File &quot;.../Grafana-Server_Enclosure-monitoring.ps1&quot;&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Restart the Linux machine to trigger the execution:&lt;br&gt;
&gt; &lt;i&gt;&lt;em&gt;shutdown -r now&lt;/em&gt;&lt;/i&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;How to ensure that the scripts have started successfully?&lt;/h3&gt;
&lt;p&gt;First, to make sure that the scripts have started, you can check that the databases have been created using the InfluxDB tool.&lt;/p&gt;
&lt;p&gt;Connect to the server running InfluxDB and &lt;em&gt;launch the InfluxDB CLI&lt;/em&gt;:&lt;br&gt;
&gt; &lt;i&gt;&lt;em&gt;influx&lt;/em&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p&gt;Authenticate using your InfluxDB credentials:&lt;br&gt;
&gt; &lt;i&gt;&lt;em&gt;auth&lt;/em&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p&gt;Display existing databases:&lt;br&gt;
&gt; &lt;i&gt;&lt;em&gt;show databases&lt;/em&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p&gt;If both databases defined in the script are listed, then both scripts have started successfully:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image012.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;To verify that the metrics are successfully collected, open one of the databases and check the data content as shown below:&lt;br&gt;
&gt; &lt;i&gt;&lt;em&gt;use ov_icm_db&lt;/em&gt;&lt;/i&gt;&lt;br&gt;
&gt; &lt;i&gt;&lt;em&gt;show measurements&lt;/em&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image013.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The measurements listed here correspond to the metrics (ports or resources) defined in the PowerShell scripts.&lt;/p&gt;
&lt;p&gt;Open one of the measurements to verify that the metric data is coming in:&lt;br&gt;
&gt; &lt;i&gt;&lt;em&gt;SELECT * FROM &quot;Frame3-Interconnect3-Q1&quot;&lt;/em&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image014.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The data shows that the collection of metrics has started and that everything is running fine.&lt;/p&gt;
&lt;h2&gt;Adding InfluxDB data sources in Grafana&lt;/h2&gt;
&lt;p&gt;Now that InfluxDB is configured and the data is collected, you can add the databases (created by the two scripts) to Grafana as new InfluxDB data sources.&lt;/p&gt;
&lt;p&gt;Once that is completed, any dashboard you create in Grafana will have access to the metric data collected.&lt;/p&gt;
&lt;p&gt;To launch the Grafana UI, open your web browser and navigate to &lt;strong&gt;http://&amp;#x3C;grafana_IP or DNS name&gt;:3000/&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Note: The default HTTP port that Grafana listens to is 3000 unless you have configured a different port.&lt;/p&gt;
&lt;p&gt;Click on the gear icon on the side menu and click &lt;strong&gt;Add data source&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image015.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Select &lt;strong&gt;InfluxDB&lt;/strong&gt; from the data source list.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image016.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;For server and enclosure metrics, enter a data source name, e.g., &lt;strong&gt;InfluxDB-OV-Server-Metrics&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Add the InfluxDB URL; by default it is &lt;strong&gt;&lt;a href=&quot;http://localhost:8086/&quot;&gt;http://localhost:8086&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Add the database name that you defined in &lt;em&gt;Grafana-Server_Enclosure-monitoring.ps1&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image017.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Provide the InfluxDB admin username and password.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image018.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Once completed, click on the &lt;strong&gt;Save &amp;#x26; Test&lt;/strong&gt; button.&lt;/p&gt;
&lt;p&gt;If no error is returned, a &quot;Data source is working&quot; message is displayed.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image019.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Now repeat the same Add data source procedure for the Interconnect metrics, this time using the data source name &lt;strong&gt;InfluxDB-OV-Interconnect-Metrics&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image020.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Again, use the database name you defined in &lt;em&gt;Grafana-Interconnect-monitoring.ps1&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image021.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Once this is done, click on the &lt;strong&gt;Save &amp;#x26; Test&lt;/strong&gt; button and make sure the data source is working.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image022.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can then click on the &lt;strong&gt;Back&lt;/strong&gt; button to return to the Data sources configuration window.&lt;/p&gt;
&lt;p&gt;You should now have two new Grafana data sources corresponding to the two InfluxDB databases generated by the two PowerShell scripts.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image023.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;This completes the configuration of the Grafana data source.&lt;/p&gt;
&lt;p&gt;You are now ready to access your InfluxDB time series databases in Grafana.&lt;/p&gt;
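&lt;p&gt;If you prefer scripting to clicking through the UI, the same data sources can also be created through the Grafana HTTP API. The sketch below is illustrative: it assumes the default Grafana admin credentials, the local InfluxDB URL, and a placeholder database name (use the one defined in your script):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Illustrative sketch: create the server/enclosure data source via the Grafana HTTP API.
$grafana = &quot;http://localhost:3000&quot;
$cred    = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(&quot;admin:admin&quot;))

$datasource = @{
    name           = &quot;InfluxDB-OV-Server-Metrics&quot;
    type           = &quot;influxdb&quot;
    access         = &quot;proxy&quot;
    url            = &quot;http://localhost:8086&quot;
    database       = &quot;ov_server_db&quot;     # placeholder: the database created by your script
    user           = &quot;admin&quot;
    secureJsonData = @{ password = &quot;P@ssw0rd&quot; }
} | ConvertTo-Json

Invoke-RestMethod -Method Post -Uri &quot;$grafana/api/datasources&quot; `
    -Headers @{ Authorization = &quot;Basic $cred&quot; } `
    -ContentType &quot;application/json&quot; -Body $datasource
&lt;/code&gt;&lt;/pre&gt;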
&lt;h2&gt;Creating the Grafana dashboard&lt;/h2&gt;
&lt;p&gt;A Grafana dashboard can aggregate one or more panels using multiple sources. Thus, you can create a single dashboard with server/enclosure and interconnect metrics panels.&lt;/p&gt;
&lt;p&gt;Click on the Dashboards icon on the side menu and click &lt;strong&gt;New dashboard.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image024.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Click on &lt;strong&gt;Add a new panel&lt;/strong&gt; to create a panel to visualize the first HPE Virtual Connect module metrics.&lt;/p&gt;
&lt;p&gt;In Data source, select &lt;strong&gt;Influxdb-OV-Interconnect-Metrics&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image025.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;For &lt;strong&gt;Query options&lt;/strong&gt;, it is recommended to set &lt;strong&gt;5m&lt;/strong&gt; as the minimum query interval to match the HPE OneView API metrics sampling value of the interconnect interfaces (see below).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image026.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Then, you need to define a query for each port you specified in the PowerShell script (in the &lt;strong&gt;$Ports&lt;/strong&gt; variable) for this interconnect module name, as shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image027.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;For example, to set the Q1 port, you need to click on &lt;strong&gt;select measurement&lt;/strong&gt; next to &lt;strong&gt;FROM&lt;/strong&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image028.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;A list of all the measurements available in the database is displayed in the drop-down menu, as seen below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image029.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Select &lt;strong&gt;Q1,&lt;/strong&gt; then click on &lt;strong&gt;field (value)&lt;/strong&gt; in the &lt;strong&gt;SELECT&lt;/strong&gt; row to select the value you want to display:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image030.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;A list of all measurement values recorded in the database displays in the drop-down menu:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image031.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Select &lt;strong&gt;receiveKilobytesPerSec&lt;/strong&gt; for example.&lt;/p&gt;
&lt;p&gt;You can then set the alias as &lt;strong&gt;Q1&lt;/strong&gt; to replace the default metric name and get a clear legend label on the chart.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image032.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image033.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The metric points should already appear on the graph.&lt;/p&gt;
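&lt;p&gt;Behind the query builder, Grafana assembles a raw InfluxQL statement. With the selections above, it would look roughly like this (a sketch; the measurement name depends on how the script stored it):&lt;br&gt;
&gt; &lt;i&gt;&lt;em&gt;SELECT mean(&quot;receiveKilobytesPerSec&quot;) FROM &quot;Q1&quot; WHERE $timeFilter GROUP BY time($__interval) fill(null)&lt;/em&gt;&lt;/i&gt;&lt;/p&gt;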
&lt;p&gt;Further, you can specify a panel title in the right-side menu using the interconnect name you selected, like &lt;strong&gt;VC Frame3-Interconnect3&lt;/strong&gt; (in our example).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image034.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;And for a better visualization, you can select &lt;strong&gt;Always&lt;/strong&gt; for &lt;strong&gt;Connect null values&lt;/strong&gt; and &lt;strong&gt;Never&lt;/strong&gt; for &lt;strong&gt;Show points&lt;/strong&gt; in the &lt;strong&gt;Graph styles&lt;/strong&gt; section.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image035.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;And finally, set the unit to match the data rate value you selected in the &lt;strong&gt;SELECT&lt;/strong&gt; row. Scroll down to the &lt;strong&gt;Standard options&lt;/strong&gt; section and, in &lt;strong&gt;Unit&lt;/strong&gt;, select &lt;strong&gt;Data rate&lt;/strong&gt; and click on &lt;strong&gt;kilobytes/sec&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image036.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Rendering should display as follows:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image037.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;This completes the configuration of the first port query.&lt;/p&gt;
&lt;p&gt;You will need to click on the &lt;strong&gt;+ Query&lt;/strong&gt; button for the other ports and repeat the same query configuration (as previously described) for all the ports defined in the PowerShell script.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image038.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image039.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Once all queries have been defined, you can save the panel using the &lt;strong&gt;Save&lt;/strong&gt; button in the upper right corner. Type a name for the newly created dashboard like &lt;strong&gt;HPE OneView Metrics&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image040.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can now duplicate this panel to create another one for the second HPE Virtual Connect module. Click on the panel&apos;s context menu, select &lt;strong&gt;More&lt;/strong&gt;, then &lt;strong&gt;Duplicate&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Click on the duplicated panel&apos;s context menu then select &lt;strong&gt;Edit&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Change the panel title to the name of your second Virtual Connect module, such as &lt;strong&gt;VC Frame3-Interconnect6 Metrics&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Then modify each query by selecting the ports on the second interconnect module that you want to monitor.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image041.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Click &lt;strong&gt;Save&lt;/strong&gt; then &lt;strong&gt;Apply&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;The dashboard now displays two panels, one for each HPE Virtual Connect module that was defined in &lt;em&gt;Grafana-Interconnect-monitoring.ps1&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image042.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The next step is to create panels to display Compute and Frame metrics.&lt;/p&gt;
&lt;p&gt;Click on the &lt;strong&gt;Add panel&lt;/strong&gt; button on the upper bar and select &lt;strong&gt;Add a new panel&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Select the &lt;strong&gt;InfluxDB-OV-Server-Metrics&lt;/strong&gt; data source, then select the resource you want to monitor.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image043.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Select the measurement you need:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image044.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Then:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Add a panel title&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;5m&lt;/strong&gt; for the Min interval query options&lt;/li&gt;
&lt;li&gt;Select the graph styles options
&lt;ul&gt;
&lt;li&gt;Connect null values: &lt;strong&gt;Always&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Show points: &lt;strong&gt;Never&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Select the correct unit corresponding to the measurement type
&lt;ul&gt;
&lt;li&gt;Power: Energy / &lt;strong&gt;Watt&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Temperature: Temperature / &lt;strong&gt;Celsius&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;CPU: simply type &lt;strong&gt;GHz&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Add meaningful alias names to reflect reported measurement&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Add any additional measurement you need, using another query.&lt;/p&gt;
&lt;p&gt;Here is an example of a frame panel with power and temperature metrics defined:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image045.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;For panels with two different types of measurements (Watt and Celsius) as seen above, you need to define two Y-axes: one for the temperature and one for the power consumption.&lt;/p&gt;
&lt;p&gt;Select &lt;strong&gt;Overrides&lt;/strong&gt; at the top of the right-side menu, then click on &lt;strong&gt;Add field override&lt;/strong&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image046.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;After that, select the following override properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Fields with name: &lt;strong&gt;Temperature&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Axis placement: &lt;strong&gt;Right&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Standard options &gt; Unit: Energy / &lt;strong&gt;Watt (W)&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/image047.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;When completed, the panel displays the two Y-axes:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image048.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can then click on the &lt;strong&gt;Save&lt;/strong&gt; and &lt;strong&gt;Apply&lt;/strong&gt; buttons to return to the Grafana dashboard. An additional panel to monitor the temperature and power consumption of a frame is displayed.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image049.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Next, you can add as many panels as you have resources defined in your PowerShell scripts.&lt;/p&gt;
&lt;p&gt;This concludes this blog post. I hope you find it useful and should you have any feedback, please send me a &lt;a href=&quot;mailto:lio@hpe.com&quot;&gt;message&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Mellanox switch automation, my gift to the network challenged engineers]]></title><description><![CDATA[T﻿he Mellanox 2010 ethernet switch is a curious thing. It is only half the width of a normal 19-inch rack mount-switch. It has 18 10/25G…]]></description><link>https://developer.hpe.com/mellanox-switch-automation-welcome-to-2012/</link><guid isPermaLink="false">https://developer.hpe.com/mellanox-switch-automation-welcome-to-2012/</guid><pubDate>Fri, 07 Oct 2022 22:54:50 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;/img/rick-kaufmann-network-automation-1200-x-675.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The Mellanox 2010 Ethernet switch is a curious thing. It is only half the width of a normal 19-inch rack-mount switch. It has 18 10/25G ports. This is perfect when you need two switches for high-availability deployments but only a handful of ports is required. Because they are only half-width, a pair of them takes up just 1U of a data center rack. I believe that, because of its features and price, it makes an excellent candidate for HPE dHCI deployments.&lt;/p&gt;
&lt;p&gt;I was recently involved with a proof-of-concept for a very large customer who would be using Mellanox switches in over 200 locations. My job was to configure the network to support all the different connections the deployment required. I quickly calculated that 400 different switches would need to be configured. That would be a lot of typing on a terminal using the switch command line. Fortunately, I love to automate things and maintain consistency in all my switch configuration files.&lt;/p&gt;
&lt;p&gt;You never know when you&apos;ll run into the same issue, so let me help you out here by showing you how I handled it and gifting you the application I developed to take care of instances like this. First, let&apos;s take a look at the architecture so you can better understand how I did it.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/network_760x435.png&quot; alt=&quot;&quot; title=&quot;network&quot;&gt;&lt;/p&gt;
&lt;p&gt;Within the architecture, there are only five networks: the iLO (or out-of-band management) network, the VMware management network (the home of the vCenter), a VM production network, and two iSCSI data networks.&lt;/p&gt;
&lt;p&gt;My ultimate goal was to develop an application to speak directly to the switches via the REST API. Without having access to the hardware, it was a bit of a challenge to develop the Python bindings. To solve this challenge, I would need to create a text file which contains the configuration statements for the switch. Lucky for me, this is super easy using Flask and Jinja2 templates.  &lt;/p&gt;
&lt;p&gt;Looking at the switch configuration file, most of the information is static. Very little of it is dynamic. There are only 17 variables that need to be collected before they can be applied to a Jinja2 template. So, the solution for me was to use a Flask form and pass the variables over to Jinja2 to create the necessary configuration file. &lt;/p&gt;
&lt;p&gt;Writing the Python code can be quickly accomplished if you can see the results of the API requests. They contain valuable information that gives guidance as to the specific structure of the information the switch sends back. Once this information is understood, Flask can process it and apply the Jinja2 template. Here&apos;s a handy link to the Flask documentation: &lt;a href=&quot;https://flask.palletsprojects.com/en/2.2.x/&quot;&gt;https://flask.palletsprojects.com/en/2.2.x/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Flask is a web server written in Python that makes it super easy to build elegant user interfaces. Jinja2 is a templating solution that allows placing variable information in a text file. Jinja2 will look for the &quot;double mustache&quot;, or &quot;{{ x }}&quot;. This tells Jinja2 where the variable information gets placed in the file. If I had a variable like &quot;ip_address&quot; whose value was 137.162.0.98 and a configuration line like this: ip address {{ ip_address }}, Jinja2 would resolve the text to: ip address 137.162.0.98. You can learn more about Jinja2 by following this link: &lt;a href=&quot;https://svn.python.org/projects/external/Jinja-2.1.1/docs/_build/html/index.html&quot;&gt;https://svn.python.org/projects/external/Jinja-2.1.1/docs/_build/html/index.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/form_597x384.png&quot; alt=&quot;&quot; title=&quot;Mellon&quot;&gt;&lt;/p&gt;
&lt;p&gt;I timed it, and it only takes about three minutes to fill out the form and hit the generate button. The application I designed generates two configuration files, which can be securely copied to the Mellanox switch.&lt;/p&gt;
&lt;p&gt;With this application there are two options; one will generate a pair of configuration files, and the other will generate configuration files for many different sites or locations.&lt;/p&gt;
&lt;p&gt;In the main directory of the application, there is a comma-separated values (CSV) file named mellanox_config.csv. Using Microsoft Excel, open the CSV file and you will see three columns. By changing the values in column &quot;C&quot;, you can generate a pair of configuration files. Below is an example of the contents of the file. At first it won&apos;t look as nice as the example; you will have to stretch out the columns and shade them to your liking (totally optional).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/pair.png&quot; alt=&quot;&quot; title=&quot;Pair of switches&quot;&gt;&lt;/p&gt;
&lt;p&gt;If there are many sites to generate, then the bulk file can be used.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/bulk.png&quot; alt=&quot;&quot; title=&quot;Bulk sites&quot;&gt;&lt;/p&gt;
&lt;p&gt;A video tutorial can be found on my personal blog here: &lt;a href=&quot;https://www.techworldwookie.com/automation-for-the-sake-of-automating&quot;&gt;https://www.techworldwookie.com/automation-for-the-sake-of-automating&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The application runs in Docker, and docker-compose needs to be installed on the host system. I just use Docker Desktop and everything becomes quite portable. The application can be found over on my GitHub account, here: &lt;a href=&quot;https://github.com/xod442/Mellon&quot;&gt;https://github.com/xod442/Mellon&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;If I can ever get access to a physical switch, I will finish the API version of this application. Until then, I give you Mellon. You can still do it the hard way if you like, but I find using Mellon gives me more time to code.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Fast forward]]></title><link>https://developer.hpe.com/2022-October-07/</link><guid isPermaLink="false">https://developer.hpe.com/2022-October-07/</guid><pubDate>Fri, 07 Oct 2022 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Catch HPE at KubeCon NA 2022]]></title><description><![CDATA[This October 24-28, the Cloud Native Computing Foundation will once again be hosting its flagship conference – KubeCon+CloudNativeCon – as a…]]></description><link>https://developer.hpe.com/catch-hpe-at-kubecon-na-2022/</link><guid isPermaLink="false">https://developer.hpe.com/catch-hpe-at-kubecon-na-2022/</guid><pubDate>Tue, 04 Oct 2022 15:47:12 GMT</pubDate><content:encoded>&lt;p&gt;This October 24-28, the Cloud Native Computing Foundation will once again be hosting its flagship conference – &lt;a href=&quot;https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/&quot;&gt;KubeCon+CloudNativeCon&lt;/a&gt; – as a hybrid event. Take this opportunity to meet with thousands of cloud-native leaders, including gold-sponsor Hewlett Packard Enterprise (HPE), as the community gathers to further the education and advancement of cloud native computing. This is an important event for HPE as it offers us the opportunity to further educate the community on our expertise and support of open source and cloud native technologies.&lt;/p&gt;
&lt;h3&gt;Have some Kubist fun and learn something in the process&lt;/h3&gt;
&lt;p&gt;HPE subject matter experts representing SPIFFE/SPIRE, HPE Storage, Determined AI, HPE Ezmeral, Zerto and HPE PointNext will be available in person to answer questions and offer demos at Booth #G13. Come to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Learn about the cloud-native data infrastructure portfolio from HPE and how to accelerate delivery of the most demanding stateful workloads on Kubernetes.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Get advice on advanced use cases involving data management, business transformations, and why persistent storage matters for app developers.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Discuss fully open-source machine learning (ML) model training. Learn how to get started, discuss current trends in the ML ecosystem, and join the community.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Understand how HPE Pointnext can help you in your multi-cloud journey and transform monolithic to containerized, cloud-native workloads.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You’re going to want to stay and pay attention to these talks, because our magical master of ceremonies will be quizzing you at the end. Be the first one to correctly answer the questions and you’ll come away with either a Rubik’s Cube or one of our cool wooden puzzles.&lt;/p&gt;
&lt;p&gt;Each day, we’ll also be giving away a Raspberry Pi Starter Kit. Booth visitors who get scanned in the booth will be given a raffle ticket and, at the end of the last presentation of each day, we will draw the winner of the prize.&lt;/p&gt;
&lt;h3&gt;Join us in celebrating the graduation of SPIFFE/SPIRE!&lt;/h3&gt;
&lt;p&gt;Contributing alongside the many leading technology companies who were a part of this project, like VMWare, Uber, ByteDance, Anthem, Transferwise, IBM, etc., HPE is really excited that SPIFFE/SPIRE has just been designated a Graduated project of CNCF. Leading and influencing meaningful innovation in the Open Source community is important to us, as it provides key capabilities that our customers desire and helps us design solutions that meet their needs.&lt;/p&gt;
&lt;p&gt;Make sure you stop by the booth to talk with some of our SPIFFE/SPIRE experts, like Andrew Jessup, to learn more about what it can do for you. We’ll be giving out several copies of &lt;em&gt;&lt;strong&gt;&lt;a href=&quot;https://thebottomturtle.io/Solving-the-bottom-turtle-SPIFFE-SPIRE-Book.pdf&quot;&gt;Solving the Bottom Turtle&lt;/a&gt;&lt;/strong&gt;&lt;/em&gt; as part of our celebration.&lt;/p&gt;
&lt;h3&gt;Ahoy, developers… join virtually and seek out your cache!&lt;/h3&gt;
&lt;p&gt;As part of our virtual presence, we’ll be hosting several Office Hours designed for developers, data scientists, and data/ML engineers to hear more about what HPE offers for them. &lt;a href=&quot;https://kubecon-cloudnativecon-na.com/virtual-exhibitor/?v4b4342b0f72f3260e37d74de68eab433fee0c641d933e76d52be7eb34b211371656f07b5a54b2e522db3ac7b27c7d555=068E4EDFD2D581C2C838CEEFD5F655754F523A6864305A1184BF01DD527B60FB0740D3F17B594A1732C891BBBA3213A8&quot;&gt;Tune in&lt;/a&gt; for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The HPE Developer Community team’s session on &lt;em&gt;&lt;strong&gt;What? Not yet an HPE Developer Community member? There’s treasure to be found!&lt;/strong&gt;&lt;/em&gt;, guiding you through the rich set of resources HPE offers developers through our web portal.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The HPE Ezmeral engineering group’s talk on &lt;em&gt;&lt;strong&gt;HPE Ezmeral Early Access&lt;/strong&gt;&lt;/em&gt;, a beta program that allows developers to try new products before they’re released. With exemplary training and support provided by HPE, developers will get hands-on experience with new features and capabilities.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The HPE Developer Community team will also be hosting a Virtual Treasure Hunt. This is a popular game that we’ve held at several past events that gets updated with new questions every time. Twelve players who complete the Virtual Treasure Hunt will be selected as winners of a $50 e-gift card to be redeemed at the HPE-branded merchandise e-store. Completing the Virtual Treasure Hunt requires participants to explore the HPE Developer Community’s rich ecosystem and submit their entry via the &lt;a href=&quot;https://bit.ly/kubecon-na-2022-hpedev-treasure-hunt&quot;&gt;Virtual Treasure Hunt form&lt;/a&gt; during the event. Refer to the giveaway &lt;a href=&quot;https://developer.hpe.com/hackshack/treasurehunt-terms-conditions/&quot;&gt;Terms and Conditions&lt;/a&gt; for complete program details.&lt;/p&gt;
&lt;h3&gt;We’ve always got something going on in this space&lt;/h3&gt;
&lt;p&gt;HPE has been a proud sponsor of KubeCon+CloudNativeCon for years. We’re excited to be able to share with you the key activities HPE is involved with in the areas of Kubernetes and other open source technologies. Make sure you stop by our booth, either virtually or physically. We’re always happy to engage with you.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE GreenLake delivers NVIDIA AI as a service]]></title><description><![CDATA[Building the infrastructure required for AI solutions can be an expensive and daunting undertaking. That’s why an as-a-service option that…]]></description><link>https://developer.hpe.com/hpe-greenlake-delivers-nvidia-ai-as-a-service/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-greenlake-delivers-nvidia-ai-as-a-service/</guid><pubDate>Thu, 29 Sep 2022 17:14:38 GMT</pubDate><content:encoded>&lt;p&gt;Building the infrastructure required for AI solutions can be an expensive and daunting undertaking. That’s why an as-a-service option that supports customer flexibility makes so much sense. With NVIDIA AI Enterprise now available through HPE GreenLake in select countries, developers can quickly get started building high-performing AI solutions. They can do so easily by using the NVIDIA AI Enterprise software suite – an end-to-end, cloud-native suite of AI and data analytics software that can be deployed anywhere, from the data center to the cloud.&lt;/p&gt;
&lt;p&gt;Announced this past June, the service offers NVIDIA AI Enterprise on NVIDIA-Certified HPE ProLiant DL380 and DL385 servers running VMware vSphere with Tanzu on a pay-per-use basis that can be tailored to the needs of the organization. Customers can select from predefined packages that include management of infrastructure and software and add on optional AI advisory and solution workshops.&lt;/p&gt;
&lt;p&gt;Check out this blog post by Nicola Sessions, senior manager of product marketing for AI software at NVIDIA, entitled &lt;a href=&quot;https://blogs.nvidia.com/blog/2022/06/28/hpe-greenlake-edge-to-cloud/&quot;&gt;NVIDIA Teams with HPE to Take AI From Edge to Cloud&lt;/a&gt;, to learn more about the NVIDIA AI Enterprise on HPE GreenLake solution.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Beta test our new HPE GreenLake for Data Fabric!]]></title><description><![CDATA[The HPE Ezmeral Early Access program is a new program designed to ensure new and exciting products are constantly developed. Through this…]]></description><link>https://developer.hpe.com/beta-test-our-new-hpe-greenlake-for-data-fabric/</link><guid isPermaLink="false">https://developer.hpe.com/beta-test-our-new-hpe-greenlake-for-data-fabric/</guid><pubDate>Fri, 23 Sep 2022 20:32:16 GMT</pubDate><content:encoded>&lt;p&gt;The HPE Ezmeral Early Access program is a new program designed to ensure new and exciting products are constantly developed. Through this beta program, we are giving you the opportunity to test out products and provide feedback to the engineering team. With exemplary training and support provided by HPE, you’ll get hands-on experience with new features and capabilities. One of our newest, exciting projects revolves around HPE Ezmeral Data Fabric, as it is about to start a new journey in its delivery of an enterprise-grade data fabric for hybrid and multi-cloud enterprises, bringing enterprise-grade data fabric to the as-a-Service world.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Chasing that single-source-of-truth&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;As we look back on software technology evolution, particularly around the area of analytics, we see a major challenge and theme recurring around the establishment of a single-source-of-truth. Firstly, there’s EIS (Executive Information Systems), which were usually OLAP (online analytical processing) cubes (multi-dimensional arrays of data), professing a single point to capture, aggregate and share analytic data. Then, when OLAP cube scalability became too limiting, ROLAP (Relational Online Analytical Processing) became the preferred solution. By definition, ROLAP brought the relational database into analytic solutions and added technologies and terms such as query and reporting, the data mart, and data warehouse (with their star and snowflake schemas), all including the single-source-of-truth value proposition in their range of benefits to the enterprise.&lt;/p&gt;
&lt;p&gt;Next came the data lake, predominantly driven by HDFS (Hadoop Distributed File System) taking mass, unstructured data storage to significantly higher scales. Structured use cases started to fold into HDFS with new iterations of columnar stores providing scale, with some structure, typically over HDFS / data lakes.&lt;/p&gt;
&lt;p&gt;While this was going on, clouds were forming – however clouds posed a greater threat to the single-source-of-truth. Clouds wanted your data to reside outside your premises and made it hard (and/or expensive) to share. For enterprises that deploy across multiple clouds, and likely on-prem and at the edge, this meant new islands of data and multiple versions of the truth.&lt;/p&gt;
&lt;p&gt;Today we have returned to the single data lake as one solution to the many versions of the truth issue. The single data lake is reminiscent of past approaches: bring all data into one location to standardize, normalize, aggregate and then distribute the results. A second approach, a federated approach, is to leave the data in place and retrieve only what is requested when it is requested, possibly with some query optimization and intelligent caching thrown in to minimize the impact on performance.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Solving the issue with data fabric&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;HPE Ezmeral Data Fabric started and grew up in the big data world. It has long been trusted as an enterprise-grade data fabric managing data at scale for mission-critical applications across industries and use cases. Users today enjoy a single global name space that provides one point to review all attached data assets across data fabric deployments. Multi-modal data access delivers on the need to build an application once and deploy anywhere there is a data fabric. In addition, the multi-modal nature of the data fabric allows the development of stream-based, object-based apps and file-based apps against the same data without duplicating that data – very much a single version of the truth. Additionally, and if needed, the data fabric is good at moving data securely and optimally from deployment to deployment. &lt;/p&gt;
&lt;p&gt;The upcoming release of HPE GreenLake for Data Fabric brings the same enterprise-grade data fabric to the as-a-Service world. The installation, configuration and management of deployments is no longer a customer responsibility. The data fabric comes installed, running and is maintained by HPE. Customer-facing management tasks are elevated to the level of creation of storage buckets and volumes, managing user quotas, sharing of entry points with fellow application developers and overall monitoring of usage, all driven through a central UI that can manage all your HPE GreenLake for Data Fabric deployments.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Beta test it yourself&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;With the upcoming HPE GreenLake for Data Fabric beta, we are targeting use cases in the hybrid and multi-cloud world, for example those who want to write an application once and run it on any deployment target using a consistent set of multi-modal APIs. It is for those who want to build a data plane that gives a consistent view of data assets across clouds via a single global name space and those that want to synchronize data across clouds securely and efficiently. The as-a-Service tooling allows the management of all clusters from a single interface, further simplifying management of your data plane. We will post additional resources on the &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE Developer Community portal&lt;/a&gt; that show example use cases. We will also provide you with &lt;a href=&quot;https://hpedev.slack.com/archives/C044E295003&quot;&gt;a Slack channel&lt;/a&gt; where you can connect with us to answer any questions you have.&lt;/p&gt;
&lt;p&gt;HPE will begin the beta of this new service in November this year (2022). Register today to learn more about how you can participate in this program. Beta testers can deploy initially into AWS, then into other public clouds as well as onto on-prem hardware. Join us to learn more in our upcoming &lt;a href=&quot;https://hpe.zoom.us/webinar/register/3716641878854/WN_xLR2ynonSi6SojUswkVmRw&quot;&gt;HPE GreenLake for Data Fabric webinar&lt;/a&gt;. We look forward to learning about your use cases.&lt;/p&gt;
&lt;p&gt;In this open source contributor series, you’ll get to meet members of the HPE Developer Community who aim to advance the way people live and work by contributing to open source projects.&lt;/p&gt;
&lt;p&gt;A software engineering graduate from the Middle East, Shimrit Yacobi has a long history with &lt;a href=&quot;https://v2.grommet.io/&quot;&gt;Grommet&lt;/a&gt;, an open source project led by HPE. Grommet is a React-based component library and framework that helps developers and designers create web applications. Shimi, as she prefers to be called, describes herself as a passionate React.js developer focused on making the vision of a design-driven user experience a reality.&lt;/p&gt;
&lt;p&gt;In addition to her numerous code contributions, including building and designing new components and features for the library, Shimi was the lead developer and architect and spent many years acting as the Grommet Community Manager. As a part of this role, she connected with HPE product teams, developers from the community, designers, and UX researchers to understand requirements and use cases. She continues to work with them today to augment the Grommet framework to meet customer needs while maintaining a tidy and clean API, architecture and implementation.&lt;/p&gt;
&lt;h2&gt;In your own words, can you define Grommet for me?&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://v2.grommet.io/&quot;&gt;Grommet&lt;/a&gt; is an open source UI development and design framework that simplifies the way web applications are built by providing a package of commonly used interface elements from which developers and designers can choose to use. It’s both a framework and a component library.&lt;/p&gt;
&lt;h2&gt;Can you tell me a little more about your role in regards to this project?&lt;/h2&gt;
&lt;p&gt;I tend to live and breathe this framework. I work on ideating and developing new components, solving complex problems with simple solutions, and collaborating with teams on development and architecture requirements. An important aspect of my job is acting as a mentor; encouraging HPE and outside developers and designers to join the Grommet community, embrace the collaborative culture, and be part of a family while making contributions and providing constant feedback as we grow. I also provide in-house and community consulting to Grommet users on best practices, solution optimization, and how to build responsive applications using Grommet features like accessibility and modularity. This close collaboration with the Grommet Community ensures that Grommet is always on the cutting edge of technology.&lt;/p&gt;
&lt;h2&gt;How did you get involved with Grommet?&lt;/h2&gt;
&lt;p&gt;I first encountered Grommet working as a developer in the HPE Hyperconverged systems group. I was on one of the development teams that had been encouraged to use Grommet. I was able to leverage the framework quite well to win an HPE Hackathon project, while also contributing to the open source code. The team was a lot of fun to work with and encouraged me to work with them.&lt;/p&gt;
&lt;h2&gt;I hear that Grommet is pretty popular. Just how popular is it and why do you think that is?&lt;/h2&gt;
&lt;p&gt;Grommet has over 120,000 downloads a month, over 300 contributors, and continues to be adopted as one of the premier ReactJS frameworks both within HPE and throughout the open source community. Another indication of its popularity is that we have more than 4,500 members on our &lt;a href=&quot;https://slack-invite.grommet.io/&quot;&gt;Grommet Slack Channel&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I think one of the things that makes Grommet so popular is that it simplifies the way web applications are built. Grommet does this by providing a package of commonly used interface elements from which developers and designers can choose to use. It provides powerful themes and tools that allow web developers to customize the color, type, component elements and layout according to their needs. It also provides support for Web Content Accessibility Guidelines (WCAG), making it great for accessibility.&lt;/p&gt;
&lt;h2&gt;What other open source projects have you been involved with?&lt;/h2&gt;
&lt;p&gt;I was the lead developer and architect for the HPE Design System, an open source library of elements consisting of working code, best practices, and design resources. It features human-centered guidelines, and engages with a vibrant community of contributors. One of the things I like most about it is that it enables experiences to be crafted with uncompromising integrity.&lt;/p&gt;
&lt;p&gt;One of the things that makes the HPE Design System so powerful is that it has what we refer to as a human-centered design, crafted upon user research and the real needs of customers. In addition, it’s a system designed so that all those working on it can collaborate really well. It provides a common language for designers, developers, and stakeholders, allowing everyone to really understand each other, allowing for more precise specification of UX requirements.&lt;/p&gt;
&lt;h2&gt;Is there anything else you’d like to share with our readers?&lt;/h2&gt;
&lt;p&gt;Advocating for women in tech and overall diversity are passions of mine. I was the HPE Employee Resources Group (ERG) Co-Chair of HPE Northern Colorado Women’s Network, a place where I could contribute to improve cultural agendas for the company and make HPE a better place to work. I am also an avid facilitator of the Culture Catalyst group for Diversity and inclusion.&lt;/p&gt;
&lt;p&gt;To learn more about the open source projects that are of particular interest to HPE, please visit our &lt;a href=&quot;https://www.hpe.com/us/en/open-source.html&quot;&gt;website&lt;/a&gt;. Interested in exploring what HPE offers for developers and data scientists? Check out our &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE Developer Community site&lt;/a&gt; for a ton of articles, workshops, tutorials, and other resources.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Chapel scales new heights with 1.28.0]]></title><description><![CDATA[Exciting news for the Chapel community this month – Chapel 1.28.0 has just been released! Chapel, a productive parallel programming language…]]></description><link>https://developer.hpe.com/chapel-scales-new-heights-with-1-28-0/</link><guid isPermaLink="false">https://developer.hpe.com/chapel-scales-new-heights-with-1-28-0/</guid><pubDate>Tue, 20 Sep 2022 12:29:26 GMT</pubDate><content:encoded>&lt;p&gt;Exciting news for the Chapel community this month – Chapel 1.28.0 has just been released! Chapel, a productive parallel programming language that scales from laptops to supercomputers, now features improved GPU support, a new Communication module that supports low-level puts and gets, and improvements to the interface and other aspects of the code. This open source project is also being highlighted this month at the AnitaB.Org 2022 Grace Hopper Celebration where Michelle Strout and Lydia Duncan of the HPE High Performance Computing and AI group ran a Choose Your Own Adventure project during the Open Source Day, where developers could come to learn more about Chapel and how to contribute to the project.&lt;/p&gt;
&lt;p&gt;As Chapel technical lead Brad Chamberlain pointed out in the announcement, many of the changes in Chapel 1.28.0 were made to improve the language and libraries, stabilizing core features in preparation for its forthcoming 2.0 release. As an example, working with non-default numerical types, or combinations of numerical types, has been made simpler, more intuitive, and more consistent with other scientific languages. Implicit conversion and resolution rules have been updated to better preserve bit-widths and to ensure that operations combining disparate types generate natural results, rather than surprising results or type errors.&lt;/p&gt;
&lt;p&gt;Adding to the GPU support that’s been added in the past few releases, Chapel 1.28.0 now permits a wider variety of parallel loop computations to be generated as GPU kernels. A new GPU utility module has also been added, providing the ability to assert that a computation will execute on the GPU as intended (or generate an explanation as to why it would not). The module also adds support for printing simple messages from within GPU loops.&lt;/p&gt;
&lt;p&gt;A new Communication module has also been introduced in Chapel 1.28.0. This module supports put/get routines to perform explicit data transfers between locales (e.g. nodes in a cluster or on a supercomputer) for users who want to exert more control over low-level communication in their code. You’ll also find many other notable improvements to the ‘chpldoc’ and ‘mason’ tools, which respectively create documentation from commented Chapel code and provide packaging support for Chapel.&lt;/p&gt;
&lt;p&gt;Here are some of the main highlights of the release:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Significantly improved behavior for numerical ops on small/mixed types&lt;/li&gt;
&lt;li&gt;A new Communication module for performing low-level puts/gets&lt;/li&gt;
&lt;li&gt;Expanded idioms that can run on GPUs and a new GPU utility module&lt;/li&gt;
&lt;li&gt;Improvements to ‘chpldoc’ and ‘mason’&lt;/li&gt;
&lt;li&gt;Simplified and improved selection between overloaded routines&lt;/li&gt;
&lt;li&gt;Stabilized language and library improvements&lt;/li&gt;
&lt;li&gt;Reduced average compilation times using the LLVM support library&lt;/li&gt;
&lt;li&gt;Improved portability to ARM-based Macs&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For complete details on this new release, please refer to the following resources:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://chapel.discourse.group/t/announcing-chapel-1-28-0/16129&quot;&gt;Release announcement&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/chapel-lang/chapel/blob/release/1.28/CHANGES.md&quot;&gt;Release changes list version 1.28.0&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://twitter.com/ChapelLanguage&quot;&gt;Chapel Language Twitter page&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.facebook.com/ChapelLanguage/&quot;&gt;Chapel Programming Language Facebook page&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Infrastructure as code]]></title><link>https://developer.hpe.com/2022-September-08/</link><guid isPermaLink="false">https://developer.hpe.com/2022-September-08/</guid><pubDate>Thu, 08 Sep 2022 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Attend NVIDIA GTC – the developer conference for the era of AI]]></title><description><![CDATA[Interested in artificial intelligence and high performance application development? Hewlett Packard Enterprise (HPE) is as well and is…]]></description><link>https://developer.hpe.com/attend-nvidia-gtc-–-the-developer-conference-for-the-era-of-ai/</link><guid isPermaLink="false">https://developer.hpe.com/attend-nvidia-gtc-–-the-developer-conference-for-the-era-of-ai/</guid><pubDate>Thu, 25 Aug 2022 17:01:25 GMT</pubDate><content:encoded>&lt;p&gt;Interested in artificial intelligence and high performance application development? Hewlett Packard Enterprise (HPE) is as well and is excited to announce its participation as a Diamond sponsor of NVIDIA GTC, September 19-22. GTC is the developer conference for the era of AI and the metaverse, exploring development, design and simulation, robotics, and applied real-time technologies. It’s an online event where students, developers, researchers, creators, IT decision-makers, and business leaders gather to learn how to shape our world with the power of AI, computer graphics, data science, and more.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.nvidia.com/gtc/?ncid=ref-spo-810249&quot;&gt;Register for NVIDIA GTC&lt;/a&gt; now.&lt;/p&gt;
&lt;p&gt;Hear from industry leaders from around the world through more than 200 curated conference talks—including technical, business strategy, and getting-started sessions designed to accelerate tech careers. GTC will provide networking opportunities to connect with HPE and NVIDIA experts.&lt;/p&gt;
&lt;h3&gt;Accelerate your AI with these HPE-sponsored GTC sessions&lt;/h3&gt;
&lt;p&gt;Catch our two HPE-sponsored GTC sessions, which focus on our partnership to bring you advanced technologies to accelerate your AI efforts. Find and register for them in the &lt;a href=&quot;https://www.nvidia.com/gtc/session-catalog/?tab.catalogallsessionstab=16566177511100015Kus#/&quot;&gt;session catalog&lt;/a&gt;. NVIDIA CEO and founder Jensen Huang’s inspiring keynote is also something not to miss, so make sure you attend this event on September 20 at 8:00 a.m. PT. Click &lt;a href=&quot;https://www.nvidia.com/en-us/gtc/keynote/&quot;&gt;here&lt;/a&gt; to watch once it’s available. (Registration isn’t required to view the keynote.)&lt;/p&gt;
&lt;h4&gt;Accelerate AI by teaming up with the right crew and the best infrastructure (&lt;a href=&quot;https://register.nvidia.com/flow/nvidia/gtcfall2022/attendeeportal/page/sessioncatalog?tab.catalogallsessionstab=16566177511100015Kus&amp;#x26;search=A41399&quot;&gt;A41399&lt;/a&gt;)&lt;/h4&gt;
&lt;p&gt;Hear from Evan Sparks, Vice President of AI and HPC at HPE, as he describes the complete canonical stack required to accelerate AI–starting with the best-of-breed servers and GPU accelerators and optimized development stacks, such as HPE Machine Learning Development Environment. In this on-demand session, you’ll learn about tips and pitfalls to avoid that will help you speed up your AI projects. You’ll also hear about Champollion, an AI supercomputer built by HPE in collaboration with NVIDIA, which was recently added to the TOP500 list of the most powerful non-distributed computer systems.&lt;/p&gt;
&lt;h4&gt;Accelerating AI Edge to Cloud with NVIDIA and HPE GreenLake (&lt;a href=&quot;https://register.nvidia.com/flow/nvidia/gtcfall2022/attendeeportal/page/sessioncatalog?tab.catalogallsessionstab=16566177511100015Kus&amp;#x26;search=A41396&quot;&gt;A41396&lt;/a&gt;)&lt;/h4&gt;
&lt;p&gt;In this Simulive session, HPE subject-matter experts will cover how to combine NVIDIA with HPE GreenLake to accelerate AI edge-to-cloud services. Join in and hear Jeroen Bronkhorst, HPE GreenLake Cloud Services Portfolio Manager for AI/ML, and Terry Chiang, Data Scientist / MLOPs Product Manager for HPE Ezmeral, discuss today’s increased focus on ML model life-cycle management and the new hardware accelerators and optimized open-source tooling that’s popping up at every step of the AI model life cycle. You’ll learn about the latest innovations in data acquisition and preparation, model development and training, and model deployment and inference enabled by HPE, NVIDIA, and partner ecosystems that can be delivered as an on-premises cloud service through HPE GreenLake.&lt;/p&gt;
&lt;h3&gt;Attend our Watch Party!&lt;/h3&gt;
&lt;p&gt;NVIDIA and HPE will also be teaming up for a &lt;a href=&quot;https://www.nvidia.com/gtc/session-catalog/?search=WP41326b&amp;#x26;search=WP41326b%2C+WP41326b&amp;#x26;tab.catalogallsessionstab=16566177511100015Kus#/session/1660839912631001SdVF&quot;&gt;Watch Party&lt;/a&gt;, an interactive session hosted by NVIDIA where you can chat with other attendees while you watch a GTC session together. Join Ilene Carpenter, Earth Sciences Segment Manager at HPE, and Karthik Kashinath, AI-HPC and engineering lead of Earth-2 at NVIDIA, as they host &lt;a href=&quot;https://www.nvidia.com/gtc/session-catalog/#/session/1657569242361001HePe&quot;&gt;NVIDIA&apos;s Earth-2: Progress and Challenges Towards Building a Digital Twin of the Earth for Weather and Climate (A41326)&lt;/a&gt;. The &lt;a href=&quot;https://www.nvidia.com/gtc/session-catalog/?search=WP41326b&amp;#x26;search=WP41326b%2C+WP41326b&amp;#x26;tab.catalogallsessionstab=16566177511100015Kus#/session/1660839912631001SdVF&quot;&gt;Watch Party&lt;/a&gt; will be held on Thursday, September 22, at 4:30-6:00 p.m. CET. Hear about NVIDIA’s recently launched Earth-2 initiative, which aims to build digital twins of the Earth to address one of the most pressing challenges of our time – climate change. Join us and learn how NVIDIA’s Earth-2 digital twin continues to advance what is possible in the application of AI to meteorology.&lt;/p&gt;
&lt;p&gt;Don&apos;t miss the chance to connect with AI developers and innovators at GTC, September 19-22, 2022. &lt;a href=&quot;https://www.nvidia.com/gtc/?ncid=ref-spo-810249&quot;&gt;Register for NVIDIA GTC&lt;/a&gt; today! And join the &lt;a href=&quot;https://developer.nvidia.com/developer-program&quot;&gt;NVIDIA Developer Program&lt;/a&gt; for free, exclusive access to SDKs, technical documentation, and peer and domain-expert help. NVIDIA offers tools and training to accelerate AI, HPC, and graphics applications.&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/greenlake/containers.html&quot;&gt;HPE GreenLake for Containers&lt;/a&gt;, one of the HPE GreenLake cloud services available on the HPE GreenLake Central platform, allows customers to open the Clusters screen to create a cluster, view details about existing clusters, and launch the HPE GreenLake for Containers service console. It provides an enterprise-grade container management service using open source Kubernetes.&lt;/p&gt;
&lt;p&gt;This blog post describes the process of deploying a stateful MongoDB application on the created Kubernetes clusters in HPE GreenLake for Containers. Using the Kubernetes StatefulSet and Headless Service, together with the pre-provisioned persistent volumes, the MongoDB application can be deployed as a Replica Set that provides redundancy, fault tolerance and high availability. This MongoDB application deployment can be used for working with a large amount of data and a high number of workloads in customer production environments.&lt;/p&gt;
&lt;h2&gt;MongoDB Application&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://www.mongodb.com/&quot;&gt;MongoDB&lt;/a&gt; is a general-purpose, document-based NoSQL database program that is a popular alternative to traditional databases. The MongoDB data model can represent any kind of object structure, with properties that can even be nested multiple levels deep. Unlike relational databases, MongoDB can store large amounts of data without requiring a predefined schema, which lets it focus on scalability and query speed. MongoDB is free, open-source software. You can download the MongoDB packages, set them up and configure them at no expense.&lt;/p&gt;
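&lt;p&gt;As a small, hedged illustration of this schema flexibility (the connection string, database, and field names below are placeholders, and a reachable MongoDB instance plus the pymongo driver are assumed), a nested document can be stored and queried without any predefined schema:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Hypothetical sketch: store and query a nested document with no predefined schema.
# Assumes a reachable mongod and the pymongo driver (pip install pymongo).
from pymongo import MongoClient

client = MongoClient(&quot;mongodb://localhost:27017/&quot;)   # placeholder connection string
orders = client[&quot;shop&quot;][&quot;orders&quot;]

orders.insert_one({
    &quot;customer&quot;: {&quot;name&quot;: &quot;Jane&quot;, &quot;address&quot;: {&quot;city&quot;: &quot;Fort Collins&quot;, &quot;zip&quot;: &quot;80521&quot;}},
    &quot;items&quot;: [
        {&quot;sku&quot;: &quot;A-100&quot;, &quot;qty&quot;: 2, &quot;price&quot;: 9.99},
        {&quot;sku&quot;: &quot;B-240&quot;, &quot;qty&quot;: 1, &quot;price&quot;: 24.50},
    ],
})

# Query directly on a nested field; no schema migration is needed.
print(orders.count_documents({&quot;customer.address.city&quot;: &quot;Fort Collins&quot;}))
&lt;/code&gt;&lt;/pre&gt;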
&lt;p&gt;MongoDB can be deployed as a standalone instance that is running as a single &lt;em&gt;mongod&lt;/em&gt; server. It can also be deployed as a &lt;a href=&quot;https://www.mongodb.com/docs/manual/core/replica-set-architectures&quot;&gt;Replica Set&lt;/a&gt; with multiple running &lt;em&gt;mongod&lt;/em&gt; instances that maintain the same data set. The Replica Set contains several data-bearing nodes. Of those data-bearing nodes, one and only one member is deemed the &lt;em&gt;Primary&lt;/em&gt; node, while the other nodes are deemed &lt;em&gt;Secondary&lt;/em&gt; nodes.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/mongodb-replica-set.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The standalone MongoDB instance deployment is suitable for testing and some aspects of development. However, it’s not adequate for production use. The MongoDB Replica Set deployment ensures multiple replicas running at any given time, which provides redundancy, fault tolerance and high availability. It is recommended for production environments, such as HPE GreenLake for Containers. This blog post will focus on deploying the MongoDB application as a Replica Set.&lt;/p&gt;
&lt;h2&gt;Deploy MongoDB Application&lt;/h2&gt;
&lt;h3&gt;Requirements&lt;/h3&gt;
&lt;p&gt;A Kubernetes cluster can be created using either the HPE GreenLake for Containers GUI, or through Infrastructure as Code (IaC) with the &lt;a href=&quot;https://registry.terraform.io/providers/HPE/hpegl/0.2.2&quot;&gt;HPE GreenLake Terraform Provider&lt;/a&gt; as explained in the blog post &lt;a href=&quot;https://developer.hpe.com/blog/kubernetes-clusters-as-code-part1/&quot;&gt;Kubernetes Cluster as Code&lt;/a&gt;. By launching the HPE GreenLake for Containers service console, you can download the &lt;strong&gt;kubectl&lt;/strong&gt; binary, together with the &lt;em&gt;kubeconfig&lt;/em&gt; file, and set them up to access the Kubernetes cluster using the kubectl CLI. Administrative access is configured on the created Kubernetes cluster in HPE GreenLake for Containers, which allows you to set up Kubernetes RBAC for the MongoDB application deployment.&lt;/p&gt;
&lt;p&gt;Sample view of HPE GreenLake for Containers Clusters screen:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/containers-clusters.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Sample view of HPE GreenLake for Containers service console:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/containers-service-console.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Set up Role-Based Access Control (RBAC)&lt;/h3&gt;
&lt;p&gt;Kubernetes RBAC is a key security control to ensure that cluster users and workloads have access only to resources required to execute their roles. It is important to ensure that, when designing permissions for cluster users, the cluster administrator understands the areas where privilege escalation could occur, to reduce the risk of excessive access leading to security incidents.&lt;/p&gt;
&lt;p&gt;To set up RBAC, you create a ServiceAccount and a ClusterRole, and connect the two with a ClusterRoleBinding.&lt;/p&gt;
&lt;h4&gt;1. Create a YAML file &lt;em&gt;sa.yaml&lt;/em&gt; for the service account:&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;apiVersion: v1
kind: ServiceAccount
metadata:
  name: mongo-account
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;2. Create a YAML file &lt;em&gt;clusterrole.yaml&lt;/em&gt; for the cluster roles:&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: mongo-role
rules:
- apiGroups: [&quot;&quot;]
  resources: [&quot;deployments&quot;]
  verbs: [&quot;list&quot;, &quot;watch&quot;]
- apiGroups: [&quot;&quot;]
  resources: [&quot;services&quot;]
  verbs: [&quot;*&quot;]
- apiGroups: [&quot;&quot;]
  resources: [&quot;pods&quot;]
  verbs: [&quot;get&quot;,&quot;list&quot;, &quot;watch&quot;]
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;3. Create a YAML file &lt;em&gt;clusterrolebinding.yaml&lt;/em&gt; to bind the service account with the cluster access roles:&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: mongo_role_binding
subjects:
- kind: ServiceAccount
  name: mongo-account
roleRef:
  kind: ClusterRole
  name: mongo-role
  apiGroup: rbac.authorization.k8s.io
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Create the StatefulSet Deployment&lt;/h3&gt;
&lt;p&gt;To deploy MongoDB as a Replica Set with multiple pods, a Kubernetes StatefulSet deployment will be required. The data persistence setup can be done with a &lt;em&gt;VolumeClaimTemplate&lt;/em&gt; in the StatefulSet deployment.&lt;/p&gt;
&lt;p&gt;It should be noted that a Kubernetes Deployment works fine when only a single MongoDB replica is deployed. When multiple replicas are running, as required in a production environment, a Deployment becomes problematic because developers have to handle concurrent reads and writes of the same data; a StatefulSet with per-replica volumes avoids this.&lt;/p&gt;
&lt;h4&gt;1. Create a YAML file &lt;em&gt;namespace.yaml&lt;/em&gt; for the namespace:&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;apiVersion: v1
kind: Namespace
metadata:
  name: mongodb
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;2. Create a YAML file &lt;em&gt;statefulset.yaml&lt;/em&gt; for MongoDB StatefulSet deployment:&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb-replica
spec:
  serviceName: pce-mongodb
  replicas: 1 
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
        selector: mongo
    spec:
      terminationGracePeriodSeconds: 30
      serviceAccount: pce-mongo-account
      containers:
      - name: mongodb
        image: docker.io/mongo:4.2
        command: [&quot;/bin/sh&quot;]
        args: [&quot;-c&quot;, &quot;mongod --replSet=rs0 --bind_ip_all&quot;]
        resources:
          limits:
            cpu: 100m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - name: mongo-port
          containerPort: 27017
        volumeMounts:
        - name: mongo-data
          mountPath: /data/db
  volumeClaimTemplates:
  - metadata:
      name: mongo-data
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Set Up the Headless Service&lt;/h3&gt;
&lt;p&gt;A Kubernetes Headless Service does not allocate a cluster IP or load-balance traffic; it is used to create a service grouping. Clients connect to the pods of a Headless Service through the service’s DNS name: the DNS lookup returns the pods’ IPs, and the client connects directly to the pods instead of going through the service proxy. A Headless Service is well suited to deploying stateful applications such as MongoDB, while still providing the benefits of a service definition, such as keeping DNS up to date across pod restarts.&lt;/p&gt;
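&lt;p&gt;As a hedged, in-cluster illustration of this behavior (the host names are assumptions based on how the resources end up deployed later in this post, i.e. the headless service &lt;em&gt;pce-mongodb&lt;/em&gt; in the namespace &lt;em&gt;pce-mongodb&lt;/em&gt;), resolving the headless service name returns one address per backing pod rather than a single cluster IP:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Illustrative sketch: run from a pod inside the cluster.
# Host names are assumptions based on the deployment later in this post
# (headless service pce-mongodb in namespace pce-mongodb).
import socket

service_dns = &quot;pce-mongodb.pce-mongodb.svc.cluster.local&quot;
for family, _, _, _, sockaddr in socket.getaddrinfo(service_dns, 27017, proto=socket.IPPROTO_TCP):
    print(sockaddr[0])   # one IP per MongoDB pod backing the headless service

# Each pod is also directly addressable, e.g.:
#   pce-mongodb-replica-0.pce-mongodb.pce-mongodb.svc.cluster.local
&lt;/code&gt;&lt;/pre&gt;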
&lt;h4&gt;Create a YAML file &lt;em&gt;service.yaml&lt;/em&gt; for the headless service:&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;apiVersion: v1
kind: Service
metadata:
  name: mongodb
  labels:
    app: mongo
spec:
  ports:
  - port: 27017
    protocol: TCP
    targetPort: 27017
  selector:
    app: mongo
  clusterIP: None
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Deploy MongoDB Application with Kustomize&lt;/h3&gt;
&lt;p&gt;With all of the YAML files created, the MongoDB application can be deployed using &lt;a href=&quot;https://kustomize.io/&quot;&gt;Kustomize&lt;/a&gt;.&lt;/p&gt;
&lt;h4&gt;1. Move all YAML files to the folder &apos;&lt;em&gt;base&lt;/em&gt;&apos;:&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;├── base
│   ├── clusterrolebinding.yaml
│   ├── clusterrole.yaml
│   ├── kustomization.yaml
│   ├── namespace.yaml
│   ├── pvc.yaml
│   ├── sa.yaml
│   ├── service.yaml
│   └── statefulset.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The file &lt;em&gt;kustomization.yaml&lt;/em&gt; lists all YAML files in its &lt;em&gt;resources&lt;/em&gt; section:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespace.yaml
  - sa.yaml
  - clusterrole.yaml
  - clusterrolebinding.yaml
  - statefulset.yaml
  - service.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;2. Create another folder &apos;&lt;em&gt;overlays&lt;/em&gt;&apos; and a sub-folder &apos;&lt;em&gt;production&lt;/em&gt;&apos;:&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;└── overlays
    └── production
        ├── kustomization.yaml
        └── patch-statefulset.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The contents of the file &lt;em&gt;kustomization.yaml&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../base
commonLabels:
  env: production
namePrefix: pce-
namespace: pce-mongodb
patches:
- ./patch-statefulset.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The contents of the file &lt;em&gt;patch-statefulset.yaml&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb-replica
spec:
  replicas: 3
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This YAML file will patch the original StatefulSet and deploy the MongoDB application as a three-member replica set.&lt;/p&gt;
&lt;h4&gt;3. Deploy MongoDB with the following command:&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl apply -k overlays/production
namespace/pce-mongodb created
serviceaccount/pce-mongo-account created
clusterrole.rbac.authorization.k8s.io/pce-mongo-role created
clusterrolebinding.rbac.authorization.k8s.io/pce-mongo_role_binding created
service/pce-mongodb created
statefulset.apps/pce-mongodb-replica created
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The MongoDB application is deployed as a Replica Set in the namespace &apos;&lt;em&gt;pce-mongodb&lt;/em&gt;&apos; using the service account &apos;&lt;em&gt;pce-mongo-account&lt;/em&gt;&apos;. Running the following command, you should see that the three specified replica pods are all in &lt;strong&gt;Running&lt;/strong&gt; status:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl get all -n pce-mongodb
NAME                        READY   STATUS    RESTARTS   AGE
pod/pce-mongodb-replica-0   1/1     Running   0          30s
pod/pce-mongodb-replica-1   1/1     Running   0          19s
pod/pce-mongodb-replica-2   1/1     Running   0          11s

NAME                  TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
service/pce-mongodb   ClusterIP   None         &amp;#x3C;none&gt;        27017/TCP   30s

NAME                                   READY   AGE
statefulset.apps/pce-mongodb-replica   3/3     30s
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Set up MongoDB Replication Host&lt;/h2&gt;
&lt;p&gt;After the MongoDB application is deployed with all the replica pods in running states, you need to set up MongoDB replication.&lt;/p&gt;
&lt;h4&gt;1. Connect to the MongoDB pod &lt;em&gt;pce-mongodb-replica-0&lt;/em&gt;:&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl exec -it pce-mongodb-replica-0 -n pce-mongodb -- mongo
MongoDB shell version v4.2.21
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&amp;#x26;gssapiServiceName=mongodb
Implicit session: session { &quot;id&quot; : UUID(&quot;5d7b5a72-ff37-4ead-b864-965a551dc966&quot;) }
MongoDB server version: 4.2.21
Welcome to the MongoDB shell.
For interactive help, type &quot;help&quot;.
For more comprehensive documentation, see
  https://docs.mongodb.com/
Questions? Try the MongoDB Developer Community Forums
  https://community.mongodb.com
---
Enable MongoDB&apos;s free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).

The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.

To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---

&gt; 
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;2. Initiate the replication:&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;&gt; rs.initiate()
{
  &quot;info2&quot; : &quot;no configuration specified. Using a default configuration for the set&quot;,
  &quot;me&quot; : &quot;pce-mongodb-replica-0:27017&quot;,
  &quot;ok&quot; : 1
}
rs0:SECONDARY&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;3. Add the pod as the &lt;em&gt;Primary&lt;/em&gt; server to the replication configuration:&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;rs0:SECONDARY&gt; var cfg = rs.conf()
rs0:PRIMARY&gt; cfg.members[0].host=&quot;pce-mongodb-replica-0.pce-mongodb:27017&quot;
pce-mongodb-replica-0.pce-mongodb:27017
rs0:PRIMARY&gt;

rs0:PRIMARY&gt; rs.reconfig(cfg)
{
  &quot;ok&quot; : 1,
  &quot;$clusterTime&quot; : {
    &quot;clusterTime&quot; : Timestamp(1658309061, 1),
    &quot;signature&quot; : {
      &quot;hash&quot; : BinData(0,&quot;AAAAAAAAAAAAAAAAAAAAAAAAAAA=&quot;),
      &quot;keyId&quot; : NumberLong(0)
    }
  },
  &quot;operationTime&quot; : Timestamp(1658309061, 1)
}
rs0:PRIMARY&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;4. Add the second MongoDB pod &lt;em&gt;pce-mongodb-replica-1&lt;/em&gt; to the replication configuration:&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;rs0:PRIMARY&gt; rs.add(&quot;pce-mongodb-replica-1.pce-mongodb:27017&quot;)
{
  &quot;ok&quot; : 1,
  &quot;$clusterTime&quot; : {
    &quot;clusterTime&quot; : Timestamp(1658309117, 1),
    &quot;signature&quot; : {
      &quot;hash&quot; : BinData(0,&quot;AAAAAAAAAAAAAAAAAAAAAAAAAAA=&quot;),
      &quot;keyId&quot; : NumberLong(0)
    }
  },
  &quot;operationTime&quot; : Timestamp(1658309117, 1)
}
rs0:PRIMARY&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;5. Continue to add the third MongoDB pod &lt;em&gt;pce-mongodb-replica-2&lt;/em&gt; to the replication configuration:&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;rs0:PRIMARY&gt; rs.add(&quot;pce-mongodb-replica-2.pce-mongodb:27017&quot;)
{
  &quot;ok&quot; : 1,
  &quot;$clusterTime&quot; : {
    &quot;clusterTime&quot; : Timestamp(1658309148, 1),
    &quot;signature&quot; : {
      &quot;hash&quot; : BinData(0,&quot;AAAAAAAAAAAAAAAAAAAAAAAAAAA=&quot;),
      &quot;keyId&quot; : NumberLong(0)
    }
  },
  &quot;operationTime&quot; : Timestamp(1658309148, 1)
}
rs0:PRIMARY&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;6. Verify MongoDB replication status:&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;rs0:PRIMARY&gt; rs.status()
{
  &quot;set&quot; : &quot;rs0&quot;,
  &quot;date&quot; : ISODate(&quot;2022-07-20T09:26:19.014Z&quot;),
  &quot;myState&quot; : 1,
  &quot;term&quot; : NumberLong(1),
  &quot;syncingTo&quot; : &quot;&quot;,
  &quot;syncSourceHost&quot; : &quot;&quot;,
  &quot;syncSourceId&quot; : -1,
  &quot;heartbeatIntervalMillis&quot; : NumberLong(2000),
  &quot;majorityVoteCount&quot; : 2,
  &quot;writeMajorityCount&quot; : 2,
  &quot;optimes&quot; : {
    &quot;lastCommittedOpTime&quot; : {
      &quot;ts&quot; : Timestamp(1658309173, 1),
      &quot;t&quot; : NumberLong(1)
    },
    &quot;lastCommittedWallTime&quot; : ISODate(&quot;2022-07-20T09:26:13.146Z&quot;),
    &quot;readConcernMajorityOpTime&quot; : {
      &quot;ts&quot; : Timestamp(1658309173, 1),
      &quot;t&quot; : NumberLong(1)
    },
    &quot;readConcernMajorityWallTime&quot; : ISODate(&quot;2022-07-20T09:26:13.146Z&quot;),
    &quot;appliedOpTime&quot; : {
      &quot;ts&quot; : Timestamp(1658309173, 1),
      &quot;t&quot; : NumberLong(1)
    },
    &quot;durableOpTime&quot; : {
      &quot;ts&quot; : Timestamp(1658309173, 1),
      &quot;t&quot; : NumberLong(1)
    },
    &quot;lastAppliedWallTime&quot; : ISODate(&quot;2022-07-20T09:26:13.146Z&quot;),
    &quot;lastDurableWallTime&quot; : ISODate(&quot;2022-07-20T09:26:13.146Z&quot;)
  },
  &quot;lastStableRecoveryTimestamp&quot; : Timestamp(1658309133, 1),
  &quot;lastStableCheckpointTimestamp&quot; : Timestamp(1658309133, 1),
  &quot;electionCandidateMetrics&quot; : {
    &quot;lastElectionReason&quot; : &quot;electionTimeout&quot;,
    &quot;lastElectionDate&quot; : ISODate(&quot;2022-07-20T09:22:33.048Z&quot;),
    &quot;electionTerm&quot; : NumberLong(1),
    &quot;lastCommittedOpTimeAtElection&quot; : {
      &quot;ts&quot; : Timestamp(0, 0),
      &quot;t&quot; : NumberLong(-1)
    },
    &quot;lastSeenOpTimeAtElection&quot; : {
      &quot;ts&quot; : Timestamp(1658308953, 1),
      &quot;t&quot; : NumberLong(-1)
    },
    &quot;numVotesNeeded&quot; : 1,
    &quot;priorityAtElection&quot; : 1,
    &quot;electionTimeoutMillis&quot; : NumberLong(10000),
    &quot;newTermStartDate&quot; : ISODate(&quot;2022-07-20T09:22:33.139Z&quot;),
    &quot;wMajorityWriteAvailabilityDate&quot; : ISODate(&quot;2022-07-20T09:22:33.224Z&quot;)
  },
  &quot;members&quot; : [
    {
      &quot;_id&quot; : 0,
      &quot;name&quot; : &quot; pce-mongodb-replica-0.pce-mongodb:27017&quot;,
      &quot;health&quot; : 1,
      &quot;state&quot; : 1,
      &quot;stateStr&quot; : &quot;PRIMARY&quot;,
      &quot;uptime&quot; : 459,
      &quot;optime&quot; : {
        &quot;ts&quot; : Timestamp(1658309173, 1),
        &quot;t&quot; : NumberLong(1)
      },
      &quot;optimeDate&quot; : ISODate(&quot;2022-07-20T09:26:13Z&quot;),
      &quot;syncingTo&quot; : &quot;&quot;,
      &quot;syncSourceHost&quot; : &quot;&quot;,
      &quot;syncSourceId&quot; : -1,
      &quot;infoMessage&quot; : &quot;&quot;,
      &quot;electionTime&quot; : Timestamp(1658308953, 2),
      &quot;electionDate&quot; : ISODate(&quot;2022-07-20T09:22:33Z&quot;),
      &quot;configVersion&quot; : 4,
      &quot;self&quot; : true,
      &quot;lastHeartbeatMessage&quot; : &quot;&quot;
    },
    {
      &quot;_id&quot; : 1,
      &quot;name&quot; : &quot;pce-mongodb-replica-1.pce-mongodb:27017&quot;,
      &quot;health&quot; : 1,
      &quot;state&quot; : 2,
      &quot;stateStr&quot; : &quot;SECONDARY&quot;,
      &quot;uptime&quot; : 61,
      &quot;optime&quot; : {
        &quot;ts&quot; : Timestamp(1658309173, 1),
        &quot;t&quot; : NumberLong(1)
      },
      &quot;optimeDurable&quot; : {
        &quot;ts&quot; : Timestamp(1658309173, 1),
        &quot;t&quot; : NumberLong(1)
      },
      &quot;optimeDate&quot; : ISODate(&quot;2022-07-20T09:26:13Z&quot;),
      &quot;optimeDurableDate&quot; : ISODate(&quot;2022-07-20T09:26:13Z&quot;),
      &quot;lastHeartbeat&quot; : ISODate(&quot;2022-07-20T09:26:18.007Z&quot;),
      &quot;lastHeartbeatRecv&quot; : ISODate(&quot;2022-07-20T09:26:18.010Z&quot;),
      &quot;pingMs&quot; : NumberLong(0),
      &quot;lastHeartbeatMessage&quot; : &quot;&quot;,
      &quot;syncingTo&quot; : &quot;pce-mongodb-replica-0.pce-mongodb:27017&quot;,
      &quot;syncSourceHost&quot; : &quot;pce-mongodb-replica-0.pce-mongodb:27017&quot;,
      &quot;syncSourceId&quot; : 0,
      &quot;infoMessage&quot; : &quot;&quot;,
      &quot;configVersion&quot; : 4
    },
    {
      &quot;_id&quot; : 2,
      &quot;name&quot; : &quot;pce-mongodb-replica-2.pce-mongodb:27017&quot;,
      &quot;health&quot; : 1,
      &quot;state&quot; : 2,
      &quot;stateStr&quot; : &quot;SECONDARY&quot;,
      &quot;uptime&quot; : 31,
      &quot;optime&quot; : {
        &quot;ts&quot; : Timestamp(1658309173, 1),
        &quot;t&quot; : NumberLong(1)
      },
      &quot;optimeDurable&quot; : {
        &quot;ts&quot; : Timestamp(1658309173, 1),
        &quot;t&quot; : NumberLong(1)
      },
      &quot;optimeDate&quot; : ISODate(&quot;2022-07-20T09:26:13Z&quot;),
      &quot;optimeDurableDate&quot; : ISODate(&quot;2022-07-20T09:26:13Z&quot;),
      &quot;lastHeartbeat&quot; : ISODate(&quot;2022-07-20T09:26:18.007Z&quot;),
      &quot;lastHeartbeatRecv&quot; : ISODate(&quot;2022-07-20T09:26:18.536Z&quot;),
      &quot;pingMs&quot; : NumberLong(0),
      &quot;lastHeartbeatMessage&quot; : &quot;&quot;,
      &quot;syncingTo&quot; : &quot;pce-mongodb-replica-1.pce-mongodb:27017&quot;,
      &quot;syncSourceHost&quot; : &quot;pce-mongodb-replica-1.pce-mongodb:27017&quot;,
      &quot;syncSourceId&quot; : 1,
      &quot;infoMessage&quot; : &quot;&quot;,
      &quot;configVersion&quot; : 4
    }
  ],
  &quot;ok&quot; : 1,
  &quot;$clusterTime&quot; : {
    &quot;clusterTime&quot; : Timestamp(1658309173, 1),
    &quot;signature&quot; : {
      &quot;hash&quot; : BinData(0,&quot;AAAAAAAAAAAAAAAAAAAAAAAAAAA=&quot;),
      &quot;keyId&quot; : NumberLong(0)
    }
  },
  &quot;operationTime&quot; : Timestamp(1658309173, 1)
}
rs0:PRIMARY&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;strong&gt;members&lt;/strong&gt; section of the status output shows three replicas. The pod &lt;em&gt;pce-mongodb-replica-0&lt;/em&gt; is listed as the &lt;strong&gt;Primary&lt;/strong&gt; replica, while the other two pods, &lt;em&gt;pce-mongodb-replica-1&lt;/em&gt; &amp;#x26; &lt;em&gt;pce-mongodb-replica-2&lt;/em&gt;, are listed as the &lt;strong&gt;Secondary&lt;/strong&gt; replicas.&lt;/p&gt;
&lt;p&gt;The MongoDB replica set deployment is set up and ready to operate now. You can download and install the &lt;a href=&quot;https://www.mongodb.com/docs/compass&quot;&gt;MongoDB Compass&lt;/a&gt; and set up an external connection to connect to the MongoDB application deployed in the Kubernetes cluster. Using the powerful MongoDB Compass GUI, you can query, aggregate, and analyze the MongoDB data from the MongoDB deployment.&lt;/p&gt;
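&lt;p&gt;If you prefer to connect programmatically rather than through a GUI, here is a minimal, hedged pymongo sketch for connecting to this replica set from inside the cluster (the host names are taken from the replication configuration above; for an external connection you would substitute the endpoint you expose, for example via a gateway or port-forward):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Hedged sketch: connect to the three-member replica set configured above.
# Assumes in-cluster network access (same namespace) and the pymongo driver;
# adjust the hosts if you connect through a port-forward or external endpoint.
from pymongo import MongoClient

uri = (
    &quot;mongodb://&quot;
    &quot;pce-mongodb-replica-0.pce-mongodb:27017,&quot;
    &quot;pce-mongodb-replica-1.pce-mongodb:27017,&quot;
    &quot;pce-mongodb-replica-2.pce-mongodb:27017/&quot;
    &quot;?replicaSet=rs0&quot;
)
client = MongoClient(uri)

# Writes are routed to the primary; reads follow the configured read preference.
db = client[&quot;demo&quot;]
db.ping_check.insert_one({&quot;source&quot;: &quot;pce&quot;, &quot;ok&quot;: True})
print(client.primary)        # (host, port) of the current primary
print(client.secondaries)    # set of secondary members
&lt;/code&gt;&lt;/pre&gt;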
&lt;h2&gt;Scale MongoDB Application&lt;/h2&gt;
&lt;p&gt;If you want to add another replica to the MongoDB deployment to scale up the MongoDB application, you can run the following kubectl command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl scale statefulset &amp;#x3C;name&gt; -n &amp;#x3C;namespace&gt; --replicas=&amp;#x3C;number of replicas&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then, follow up with &lt;em&gt;Step 1&lt;/em&gt; from the previous section to connect to the first MongoDB pod and repeat &lt;em&gt;Step 5&lt;/em&gt; to add the new replica pod.&lt;/p&gt;
&lt;p&gt;To scale down the MongoDB application, you can simply run the command &lt;strong&gt;rs.remove()&lt;/strong&gt; to remove the replica pod from the replication configuration.&lt;/p&gt;
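&lt;p&gt;If you would rather verify the resulting membership programmatically than from the mongo shell, a small, hedged pymongo sketch (reusing the same in-cluster connection details as the earlier example) could look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Hedged sketch: list replica set members and their states after scaling.
# Assumes the same in-cluster connection details used in the earlier example.
from pymongo import MongoClient

client = MongoClient(&quot;mongodb://pce-mongodb-replica-0.pce-mongodb:27017/?replicaSet=rs0&quot;)
status = client.admin.command(&quot;replSetGetStatus&quot;)
for member in status[&quot;members&quot;]:
    print(member[&quot;name&quot;], member[&quot;stateStr&quot;])   # e.g. PRIMARY / SECONDARY
&lt;/code&gt;&lt;/pre&gt;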
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;This blog post describes the detailed process used to deploy and set up a stateful MongoDB application as a Replica Set deployment on Kubernetes clusters in HPE GreenLake for Containers. Kubernetes provides the ability to persist the state, deploy the stateful application, and manage and scale those applications with state. The production deployment of the MongoDB application provides redundancy, fault tolerance, high availability, and scalability. It can be used for working with a large amount of data and a high number of workloads in customer production environments.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[ML Ops – Deploying an ML model in HPE GreenLake Platform ML Ops service]]></title><description><![CDATA[Overview HPE GreenLake Central is an advanced software-as-a-service platform that provides you with a consistent cloud experience for all…]]></description><link>https://developer.hpe.com/mlops-–-deploying-an-ml-model-in-greenlake-platform-mlops-service/</link><guid isPermaLink="false">https://developer.hpe.com/mlops-–-deploying-an-ml-model-in-greenlake-platform-mlops-service/</guid><pubDate>Mon, 08 Aug 2022 07:48:49 GMT</pubDate><content:encoded>&lt;h2&gt;Overview&lt;/h2&gt;
&lt;p&gt;HPE GreenLake Central is an advanced software-as-a-service platform that provides you with a consistent cloud experience for all your applications and data on-premises or off-premises. It provides you with insights and controls to manage your hybrid IT estate, complementing your use of public clouds and data centers. HPE GreenLake Central gives you the ability to choose where and how to place your workloads and data, and through the services you purchase enables you to monitor security, capacity, resource utilization, and costs.&lt;/p&gt;
&lt;p&gt;HPE GreenLake for ML Ops is an on-premises, enterprise-grade ML service, enabling developers and data scientists to rapidly build, train, and deploy ML models from pilot to production, at any scale.&lt;/p&gt;
&lt;p&gt;This preconfigured solution comprises an optimized hardware stack and is powered by HPE Ezmeral Runtime Enterprise. It provides data scientists with self-service access to a sandbox environment for prototyping and testing, to eliminate IT provisioning delays, ensure repeatability, and accelerate time-to-value. As a fully managed solution, the HPE GreenLake for ML Ops offering frees IT from routine infrastructure management tasks.&lt;/p&gt;
&lt;h2&gt;Machine learning Lifecycle&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;/img/blog_3.png&quot; alt=&quot;ML Life cycle&quot; title=&quot;ML Life cycle&quot;&gt;&lt;/p&gt;
&lt;p&gt;The ML project life cycle can generally be divided into three main stages: data preparation, model creation, and deployment. All three of these components are essential for creating quality models that will bring added value to your business. This process is &lt;em&gt;cyclical&lt;/em&gt; because the insights gained from the existing model will help define the next model to be deployed.&lt;/p&gt;
&lt;p&gt;In this article, I will focus on deploying the optimal model identified after data preparation and model building. More specifically, as an example, I will show you how to deploy a sample ONNX model stored in Kubernetes-native MinIO object storage, using Triton Inference Server.&lt;/p&gt;
&lt;p&gt;Triton Inference Server supports the deployment of AI models from multiple deep learning and machine learning frameworks, including TensorRT, TensorFlow, PyTorch, ONNX, OpenVINO, Python, RAPIDS FIL, and more. Triton delivers optimized performance for many query types, including real-time, batched, ensemble, and audio/video streaming queries. To learn more about Triton Inference Server, refer to the References section at the end of the post.&lt;/p&gt;
&lt;p&gt;The HPE GreenLake for ML Ops platform also allows customers to host their favorite cloud-native applications, such as MLflow and MinIO.&lt;/p&gt;
&lt;h2&gt;Pre-requisites&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;An active service subscription to HPE GreenLake for ML Ops&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;An ML Ops project that has been created by an ML Ops admin for which the user is able to launch to HPE Ezmeral Runtime Enterprise through an ML Ops Project Member role&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Available access credentials to any S3 based object storage&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Triton Inference Server container image accessible either through an on-prem registry (e.g. Harbor) or a public registry&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Steps to deploy a model&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Validate the connection to the Kubernetes cluster&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Click HPE GreenLake for ML Ops card in the HPE GreenLake Central &lt;strong&gt;Dashboard,&lt;/strong&gt; which shows the number of previously created projects.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/blog_2.png&quot; alt=&quot;Select ML Ops Project&quot; title=&quot;Select ML Ops Project&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click link “Launch ML Operations Console”&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click “Dashboard” on left navigation of “Ezmeral Container Platform”&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Download “kubectl”, “HPE kubectl plugin”, and “kubeconfig”&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Set environment variable KUBECONFIG to point to the kubeconfig file&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Validate connectivity to the cluster using command “kubectl get no”&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Place the model in S3 object storage&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Place the model and configuration file for Triton Inference server in object storage&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/blog_1.png&quot; alt=&quot;object storage for model&quot; title=&quot;object storage for model&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Sample configuration file for the model as shown below:&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;name: &quot;braintumor_onnx&quot;
platform: &quot;onnxruntime_onnx&quot;
max_batch_size: 0
input [
  {
    name: &quot;input_1&quot;
    data_type: TYPE_FP32
    dims: [ 100, 128, 128, 2 ]
  }
]
output [
  {
    name: &quot;conv2d_22&quot;
    data_type: TYPE_FP32
    dims: [ -1, 128, 128, 4 ]
  }
]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; Create a namespace and secret for object storage credentials using the commands shown below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;kubectl create namespace triton

kubectl create secret generic minio-cred --from-literal=AWS_ACCESS_KEY_ID=&amp;#x3C;specify_access_key&gt; --from-literal=AWS_SECRET_ACCESS_KEY=&amp;#x3C;specify_secret_access_key&gt; -n triton
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt; Create a deployment to host the model and check the pods and services are running&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: triton
  name: triton
  namespace: triton
spec:
  replicas: 1
  selector:
    matchLabels:
      app: triton
  template:
    metadata:
      labels:
        app: triton
    spec:
      containers:
      - image: nvcr.io/nvidia/tritonserver:21.10-py3
        name: tritonservercont
        command: [&quot;/bin/bash&quot;]
        args: [&quot;-c&quot;, &quot;/opt/tritonserver/bin/tritonserver --model-repository=s3://https://&amp;#x3C;objectstoreurl.com:port&gt;/sample/models --strict-model-config=false&quot;]
        env:
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: minio-cred
              key: AWS_ACCESS_KEY_ID
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: minio-cred
              key: AWS_SECRET_ACCESS_KEY
        ports:
        - containerPort: 8000
          name: http
        - containerPort: 8001
          name: grpc
        - containerPort: 8002
          name: metrics
        volumeMounts:
        - mountPath: /dev/shm
          name: dshm
        resources:
          requests:
            cpu: 2
            memory: 8Gi
            nvidia.com/gpu: 1
          limits:
            cpu: 2
            memory: 16Gi
            nvidia.com/gpu: 1
      volumes:
      - name: dshm
        emptyDir:
          medium: Memory
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
      nodeSelector:
        gl.hpe.com/instance-type: GL-GP-MLi-Metal
---
apiVersion: v1
kind: Service
metadata:
  name: triton
  namespace: triton
  labels:
    hpecp.hpe.com/hpecp-internal-gateway: &quot;true&quot;
spec:
  type: NodePort
  selector:
    app: triton
  ports:
  - protocol: TCP
    name: http
    port: 8000
    nodePort: 30850
    targetPort: 8000
  - protocol: TCP
    name: grpc
    port: 8001
    nodePort: 30851
    targetPort: 8001
  - protocol: TCP
    name: metrics
    nodePort: 30852
    port: 8002
    targetPort: 8002
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Notes:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;In the above YAML, replace &lt;em&gt;objectstoreurl.com:port&lt;/em&gt; and the model path with your actual object store URL and model sub-path, i.e. the location where the model and its associated configuration file are placed.&lt;/li&gt;
&lt;li&gt;The node selector “gl.hpe.com/instance-type: GL-GP-MLi-Metal” is used to place the workload in the inference cluster&lt;/li&gt;
&lt;li&gt;Check that the pods and service are running using the commands shown below:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;kubectl get po -n triton

kubectl get svc -n triton
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;The service label &lt;strong&gt;hpecp.hpe.com/hpecp-internal-gateway: &quot;true&quot;&lt;/strong&gt; is added to get a gateway endpoint for accessing the Triton Inference Server from outside the cluster. To find the endpoint, run this command:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;kubectl describe svc &amp;#x3C;service_name&gt; -n triton
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;Check that the metrics endpoints for the models hosted in the Triton Inference Server are accessible:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;kubectl describe svc &amp;#x3C;service_name&gt; -n triton
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Next steps&lt;/h2&gt;
&lt;p&gt;Once the production model is hosted, any application can perform inference against it through the Triton Inference Server endpoints. For more on the Triton client libraries, refer to the References section below.&lt;/p&gt;
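&lt;p&gt;For illustration, a minimal, hedged Python client might look like the sketch below. The model, input, and output names are taken from the sample configuration file shown earlier; the endpoint URL is a placeholder for the gateway host and mapped HTTP port found via &lt;em&gt;kubectl describe svc&lt;/em&gt;, and the random input simply matches the declared shape.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Hypothetical inference sketch against the model deployed above.
# Assumes: pip install tritonclient[http] numpy, and that &amp;#x3C;gateway-host&gt;:&amp;#x3C;http-port&gt;
# is the externally reachable HTTP endpoint of the Triton service.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url=&quot;&amp;#x3C;gateway-host&gt;:&amp;#x3C;http-port&gt;&quot;)  # placeholder

# Shape and dtype follow the config shown earlier (input_1: [100, 128, 128, 2], FP32).
data = np.random.rand(100, 128, 128, 2).astype(np.float32)

infer_input = httpclient.InferInput(&quot;input_1&quot;, list(data.shape), &quot;FP32&quot;)
infer_input.set_data_from_numpy(data)
requested_output = httpclient.InferRequestedOutput(&quot;conv2d_22&quot;)

result = client.infer(
    model_name=&quot;braintumor_onnx&quot;,
    inputs=[infer_input],
    outputs=[requested_output],
)
print(result.as_numpy(&quot;conv2d_22&quot;).shape)  # expected to be (100, 128, 128, 4)
&lt;/code&gt;&lt;/pre&gt;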
&lt;h2&gt;References&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/triton-inference-server/server&quot;&gt;Triton Inference server&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/triton-inference-server/client&quot;&gt;Triton Client&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://min.io/&quot;&gt;MinIO&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00092451en_us&amp;#x26;page=HPE-GreenLake-for-ML-Ops.html&quot;&gt;HPE GreenLake for ML Ops documentation&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://docs.containerplatform.hpe.com/54/reference/HPE_Ezmeral_Container_Platform.html&quot;&gt;HPE Ezmeral Runtime Enterprise Documentation&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Summer reading list]]></title><link>https://developer.hpe.com/2022-August-04/</link><guid isPermaLink="false">https://developer.hpe.com/2022-August-04/</guid><pubDate>Fri, 05 Aug 2022 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Insights from a Hack Shack staffer's diary]]></title><description><![CDATA[After two years of only being able to meet virtually, HPE Developer Community members were excited to be able to connect with one another in…]]></description><link>https://developer.hpe.com/insights-from-a-hack-shack-staffers-diary/</link><guid isPermaLink="false">https://developer.hpe.com/insights-from-a-hack-shack-staffers-diary/</guid><pubDate>Thu, 21 Jul 2022 16:48:50 GMT</pubDate><content:encoded>&lt;p&gt;After two years of only being able to meet virtually, HPE Developer Community members were excited to be able to connect with one another in-person at HPE Discover 2022 in the Hack Shack. It was an amazing event with more than 8,000 visitors. The event focused mostly on HPE GreenLake and all the 70+ new services that have been made available, from private cloud for the enterprise to data analytics (try some of them here in the&lt;a href=&quot;https://testdrive.greenlake.hpe.com/&quot;&gt; HPE GreenLake Test Drive&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;It was great to once again meet up with colleagues and customers face-to-face. I arrived early this year in order to help with the Hack Shack setup over the weekend. Our CEO, Antonio Neri, stopped by the booth while we were doing setup. He gladly agreed to take a picture with some of us members of the HPE Developer Community team. (He’s the one in blue and I’m to the right.)&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/fred-1b-3-501-pix-.jpg&quot; alt=&quot;HPE Developer Community staff with HPE CEO, Antonio Neri&quot;&gt;&lt;/center&gt;
&lt;h3&gt;Highlights of the week&lt;/h3&gt;
&lt;p&gt;The Hack Shack schedule was pretty intense, as usual. For those of you who may not have had the opportunity to stop by, you did miss out on quite a few cool activities. We had 13 interactive meetups focused on Open Source, how to take advantage of data using a data map, security, data protection, API integration, and infrastructure-as-code. We also hosted 5 software challenges based on our Workshop-on-Demand structure. The most popular software challenge this year was the one entitled &lt;em&gt;&lt;strong&gt;VM Desired State Management in HPE GreenLake&lt;/strong&gt;&lt;/em&gt; where one learned to manage an HPE GreenLake Private Cloud for Enterprise workload using Terraform.&lt;/p&gt;
&lt;p&gt;In the front yard, I met soon-to-become new members of the HPE Developer Community after sharing with them a few simple, yet efficient, words and a quick demo of our developer portal. I also helped attendees participate in our software challenges. These software challenges were enjoyed by many and will be transformed over the summer into workshops that people can take through our &lt;a href=&quot;https://developer.hpe.com/hackshack/workshops&quot;&gt;Workshop-on-Demand catalog&lt;/a&gt;.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/fred-2-b-3-475-pix.jpg&quot; alt=&quot;HPE Developer Community portal prominently displayed&quot;&gt;&lt;/center&gt;
&lt;p&gt;Another highlight of the week were the foosball challenges we engaged in with Hack Shack attendees. I remained unbeaten at the foosball table the entire week. And I even managed to deliver our HPE Developer Community message while playing. A peculiar, but effective, tactic I used to win, I must admit.  &lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/fred-3-b-3-405-pix-.jpg&quot; alt=&quot;Frederic remains undefeated in foosball&quot;&gt;&lt;/center&gt;
&lt;p&gt;Many attendees decided to try our virtual treasure hunt. This simple scavenger-hunt style game allowed them to learn more about both the team and the different assets available in the HPE Developer Community portal. The winners were awarded Canakits and Raspberry Pi 4 computers during our Wednesday night party.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/fred-4-b-3-450-pix-.jpg&quot; alt=&quot;Listening to Dr. Goh at the Hack Shack Celebration party&quot;&gt;&lt;/center&gt;
&lt;p&gt;Next to the Hack Shack was a space called the Living Labs where HPE introduced the new trials experience for HPE GreenLake. The HPE Developer Community team expects to work closely in the near future with HPE GreenLake teams to provide more developer-related content for their catalog. A software challenge we developed for this Discover event – the Terraform provider for HPE GreenLake – will be one of the very first workshops on that list.&lt;/p&gt;
&lt;h3&gt;The introduction of the HPE GreenLake Developer portal&lt;/h3&gt;
&lt;p&gt;One of the most interesting announcements of the week was the introduction of the &lt;a href=&quot;https://developer.greenlake.hpe.com/&quot;&gt;HPE GreenLake Developer portal&lt;/a&gt;. As HPE transitions to an as-a-service strategy, APIs and developers become more and more important. It would be illogical for HPE to offer a platform where customers could only consume resources by clicking through well-designed user interfaces. APIs must be made available and described properly so they can be leveraged to allow complete automation. Equally, providers should be developed to allow better interaction with industry-standard automation frameworks like Red Hat’s Ansible, HashiCorp Terraform, Chef, or Puppet. The HPE GreenLake Developer portal aims to offer API access to customers, allowing them to integrate seamlessly with HPE GreenLake programmatically.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/fred-5-b-3-512-pix-.jpg&quot; alt=&quot;One of the technical meetup sessions&quot;&gt; &lt;/center&gt;
&lt;p&gt;The evidence that Discover 2022 was an HPE GreenLake-centric event abounded. The design of the showcase floor reflected that, with beams of data feeding a central control center. As HPE shifts to this model, the personas the company addresses are also evolving. Not only do our solutions address the IT Ops administrators moving to a DevOps culture, but also the new data-driven engineers working on Data and ML Ops. All these personas develop code and need to interact seamlessly with HPE GreenLake offerings. The HPE Developer Community team is doing its best to facilitate this interaction. That’s why we will be here again next year with new opportunities to learn, new content, and fun activities!&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/fred-6-b-3-512-.jpg&quot; alt=&quot;The onsite HPE Developer Community team&quot;&gt; &lt;/center&gt;</content:encoded></item><item><title><![CDATA[Introducing the HPE GreenLake Developer portal]]></title><description><![CDATA[During the executive keynote session at the HPE Discover 2022 Edge-to-Cloud Conference, Antonio Neri, President and CEO of Hewlett Packard…]]></description><link>https://developer.hpe.com/introducing-the-hpe-greenlake-developer-portal/</link><guid isPermaLink="false">https://developer.hpe.com/introducing-the-hpe-greenlake-developer-portal/</guid><pubDate>Wed, 20 Jul 2022 15:42:12 GMT</pubDate><content:encoded>&lt;p&gt;During the executive keynote session at the HPE Discover 2022 Edge-to-Cloud Conference, Antonio Neri, President and CEO of Hewlett Packard Enterprise, announced the availability of the HPE GreenLake Developer Portal. He was joined by Fidelma Russo, HPE CTO, and Bryan Thompson, HPE VP of GreenLake Product Management, who gave a quick look at how it provides APIs and documentation to assist developers of all types integrate their apps and services with HPE GreenLake. A &lt;a href=&quot;https://developer.greenlake.hpe.com/docs/greenlake&quot;&gt;beta version of the portal&lt;/a&gt; is available today, easily accessed through the &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE Developer Community portal&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Recently, I had the opportunity to talk with HPE GreenLake Platform Product Manager Navaneethan Venugopalan and HPE Distinguished Technologist Travis Tripp about this eagerly anticipated resource and review it myself. Through the portal, HPE GreenLake customers and partners can take advantage of a well-documented, secure, and scalable framework of APIs for the HPE GreenLake edge-to-cloud platform. The portal starts off by providing material for the HPE GreenLake for Compute Ops Management service, and its content will grow rapidly, so make sure you check it often.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;First off, congratulations on the launch of your new portal. I&apos;m sure our readers are very excited to hear about it. Could you give them a little more detail about the portal?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Thank you! We’re excited as well. The HPE GreenLake Developer Portal is 100% dedicated to automating integrations with HPE GreenLake. Developers can find all of the official HPE GreenLake API and integration guides there, delivered directly from the HPE GreenLake engineering teams. It will always contain the most up-to-date and accurate guidance, because it is delivered as part of our end-to-end delivery pipeline. In fact, you will find the very same API and integration guides there that all of the HPE GreenLake teams use themselves!&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Why did you develop the portal?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;We received numerous requests from our customers and partners to make this information easily available so that developers, data scientists, IT operators, MSP partners, resellers, and 3rd-party systems integrators could automate their interactions with HPE GreenLake.  The announcement of its availability at Discover 2022 was eagerly received. Our customers are excited to be able to automate common and repetitive tasks using these APIs.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Can anyone, then, access the portal?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The portal itself is easily accessed by anyone – no login is required. Simply visit &lt;a href=&quot;https://developer.greenlake.hpe.com/&quot;&gt;developer.greenlake.hpe.com&lt;/a&gt; or go through the &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE Developer Community portal&lt;/a&gt; to access the site. We do offer varying levels of access, though. All the HPE GreenLake APIs support role-based access and you must be authenticated and authorized to use them. Every API in HPE GreenLake is designed with zero trust in mind, meaning that in order to access the API, you must have the correct authorization. This means that the role you’ve been granted in your HPE GreenLake environment determines which APIs you can successfully invoke. You can always browse the APIs to learn more, but if you happen to be an authorized partner, you’ll get access to even more partner API docs, as we publish API documentation in two categories – public and partner.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;I see that the HPE GreenLake for Compute Ops Management service is available today. What operational tasks can a developer automate using the HPE GreenLake management console APIs?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Developers using HPE GreenLake for Compute Ops Management (COM) can automate bulk server lifecycle management tasks (such as updating the firmware of servers), perform power on/off operations, set up policies for establishing firmware baselines, and more. They can also perform monitoring operations to get inventory and health status, to name a few. COM has been purpose-built to be API first; pretty much any operation that can be performed via the UI can be performed via the API.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What additional APIs will you be adding in the not-too-distant future and when might we expect them?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;HPE GreenLake for Containers, Storage, and Private Cloud Enterprise are some of the services that will become available in the next 4-6 months.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;At the Discover event, were there any specific things customers requested?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Customers at the event asked that we ensure it is kept up-to-date with the most current and accurate information, which we are able to do since it is part of our end-to-end delivery pipeline. They also requested support for tools like Terraform and Ansible. They were even interested in contributing to it with open source tools and libraries. Finally, they asked for a way to provide feedback and suggestions, which we’re currently looking into.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;I wish you much success with the portal and look forward to working with you and your team more closely to complement your site with HPE GreenLake-related blog posts, workshops, and technical talks on the HPE Developer Community portal.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;We do as well. As you know, the &lt;a href=&quot;https://developer.hpe.com/platform/hpe-greenlake/home/&quot;&gt;HPE GreenLake edge-to-cloud platform page&lt;/a&gt; on the HPE Developer Community site already links to over a dozen blog posts and tutorials. There’s already one HPE GreenLake related &lt;a href=&quot;https://developer.hpe.com/hackshack/workshops&quot;&gt;Workshop-on-Demand&lt;/a&gt;, and we expect that more will be added soon. We’ve been invited to participate in the Munch &amp;#x26; Learn talks and Meetup sessions, and we look forward to doing that as well.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Discover the importance of HPE GreenLake to our community]]></title><description><![CDATA[It was standing-room only when HPE’s CEO, Antonio Neri, gave his keynote to kick off the HPE Discover 2022 Edge-to-Cloud Conference. The…]]></description><link>https://developer.hpe.com/discover-the-importance-of-hpe-greenlake-to-our-community/</link><guid isPermaLink="false">https://developer.hpe.com/discover-the-importance-of-hpe-greenlake-to-our-community/</guid><pubDate>Mon, 18 Jul 2022 18:29:05 GMT</pubDate><content:encoded>&lt;!--StartFragment--&gt;
&lt;p&gt;It was standing-room only when HPE’s CEO, Antonio Neri, gave his &lt;a href=&quot;https://www.youtube.com/watch?v=5YocxnhKAnM&quot;&gt;keynote&lt;/a&gt; to kick off the HPE Discover 2022 Edge-to-Cloud Conference. The importance of the HPE GreenLake edge-to-cloud platform resounded everywhere throughout the event. But most exciting for us was when the newly refreshed HPE Developer Community portal was shown on the onstage screen. You could hear a large cheer as Antonio noted the importance of developers to HPE, specifically in relation to the HPE GreenLake platform, and mentioned the HPE Developer Community by name. &lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/hpe-dev-on-screen.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In his &lt;a href=&quot;https://www.youtube.com/watch?v=5YocxnhKAnM&quot;&gt;keynote&lt;/a&gt;, Antonio Neri announced the availability of the new HPE GreenLake Developer portal – along with new tools – created with the aim of making it easier for developers to access Application Programming Interfaces (APIs) for HPE GreenLake. By previewing this portal, Antonio showed how HPE is working to engage with developers, IT DevOps teams, and data engineers to help them harness data and facilitate its use enterprise-wide to inform business decision making.&lt;/p&gt;
&lt;p&gt;It was a proud moment for the team – and an important one as we expand the HPE Developer Community’s ability to help developers of all types engage on the HPE GreenLake platform.&lt;/p&gt;
&lt;h2&gt;Develop with the HPE GreenLake edge-to-cloud platform&lt;/h2&gt;
&lt;p&gt;Antonio emphasized that bringing a consistent cloud operating model across all workloads and data was the real answer to the conundrum businesses face when they ask which workloads should be put in a public cloud and which should stay on-premises. And this is where HPE GreenLake provides the solution. It’s a platform built specifically for a highly distributed enterprise designed for application modernization.&lt;/p&gt;
&lt;p&gt;But it’s also important to remember that HPE GreenLake is a platform for application development. He pointed out that the company is continuing to build out its growing HPE GreenLake partner ecosystem with distributors, value-added resellers, systems integrators, ISVs, and service providers. “We are placing partner capabilities at the center of our platform through our strategy to open our unique APIs to partners,” Neri told the audience. “They can choose to build, resell or manage on behalf of the customers.”&lt;/p&gt;
&lt;p&gt;Antonio highlighted that HPE has already shown its HPE GreenLake API partner commitment by announcing tight API integration with the cloud marketplaces of the top four global distributors – TD Synnex, Ingram Micro, Arrow Electronics and ALSO Group – making it easier for them to sell cloud services through and with the channel. Ultimately, the aim is to open the full power of the platform to partners.&lt;/p&gt;
&lt;h2&gt;Looking forward to next Discover&lt;/h2&gt;
&lt;p&gt;Traditionally, the HPE Discover event has been an event where customers and partners get key insights into HPE’s strategies and products. In recent years, it has attracted more business line managers than techies. As our strategy continues to evolve to a cloud software and services model, the important issues that will be discussed will be how to integrate with the HPE GreenLake edge-to-cloud platform. Developers take heed! This is your moment. Start talking with your managers now to make sure they budget for your ability to travel and hear what’s new at the next HPE Discover event!&lt;/p&gt;
&lt;!--EndFragment--&gt;</content:encoded></item><item><title><![CDATA[Summary from the 9th Annual Chapel Implementers and Users Workshop (CHIUW 2022)]]></title><description><![CDATA[Introduction Programming today becomes complicated by the many kinds of parallelism that exist in everything from phones to laptops to…]]></description><link>https://developer.hpe.com/summary-from-the-9th-annual-chapel-implementers-and-users-workshop-chiuw-2022/</link><guid isPermaLink="false">https://developer.hpe.com/summary-from-the-9th-annual-chapel-implementers-and-users-workshop-chiuw-2022/</guid><pubDate>Fri, 15 Jul 2022 19:31:06 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Programming today becomes complicated by the many kinds of parallelism that exist in everything from phones to laptops to supercomputers.  The open-source Chapel parallel programming language makes parallel programming easier and more productive, while still enabling high performance that takes advantage of the wide variety of parallelism available today.&lt;/p&gt;
&lt;p&gt;In this post, Michelle Strout, general workshop chair, and Engin Kayraklioglu, program committee chair, summarize the highlights of the recent 9th Annual Chapel Implementers and Users Workshop (CHIUW 2022). Read on to hear about some exciting applications that are using Chapel productively, the coding day that happened the day before the workshop, updates on the project, and feedback the team received during and after the workshop.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Applications Written in Chapel&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Programmers generally enjoy trying out new programming languages, but like to see example use cases before they commit to one. CHIUW provides them with an opportunity to hear about and ask questions about many different use cases. The Chapel programming language is being used productively in a range of application domains: data science, aeronautical simulations, cosmology simulations, and quantum diagonalization to name a few. CHIUW featured the following talks on different Chapel applications:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://youtu.be/vBxPTzIRRr0&quot;&gt;Large-Scale and User-Friendly Exact Diagonalization in Chapel&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://youtu.be/uTE_RZkODOk&quot;&gt;Recent Developments in the CHApel Multi-Physics Simulation Software&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://youtu.be/YrXYpgnt4rQ&quot;&gt;UltraLight Dark Matter in Simulations: A Chapel-Powered Eigenstate Perspective&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://youtu.be/pstRsgMhCDA&quot;&gt;Implementing and Optimizing Parquet I/O in Chapel&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://youtu.be/xI9EByv7A5M&quot;&gt;Truss Analytics Algorithms and Integration in Arkouda&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://youtu.be/400jmMzdzHQ&quot;&gt;From C and Python to Chapel as My Main Programming Language&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Coding Day&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;For Coding Day, anyone interested in working one-on-one or in small groups with developers from the Chapel team at HPE could indicate their interest in an online form.  We had 7 different sessions.  Programmers interested in Chapel were able to ask questions specific to their Chapel code, interactively make changes to the code, and work through issues with Chapel developers, who were present to provide immediate assistance.  In these sessions, Chapel users and developers worked on:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Adding diagnostic support for Chapel runtime’s remote data cache&lt;/li&gt;
&lt;li&gt;Writing a cell-list module/library in Chapel&lt;/li&gt;
&lt;li&gt;Investigating porting a Dask application to Arkouda&lt;/li&gt;
&lt;li&gt;Implementing a Lisp interpreter in Chapel&lt;/li&gt;
&lt;li&gt;Optimizing distributed memory performance of a very large-scale matrix-vector multiplication&lt;/li&gt;
&lt;li&gt;Discussing Chapel’s nascent GPU support and going over the internals of the current implementation&lt;/li&gt;
&lt;li&gt;Learning Chapel in a peer-programming setting&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Next year, we plan to keep Coding Day virtual, grow it to include more small groups at non-intersecting times, and publish a schedule ahead of time.  Email Engin at &lt;a href=&quot;mailto:engin@hpe.com&quot;&gt;engin@hpe.com&lt;/a&gt; if you have any thoughts about what you would like to work on or see during next year’s Chapel coding day.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Chapel Project Updates&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;One update we heard from Chapel developers was that the new parser was being used in the production Chapel compiler.  This is important because an often-heard complaint from Chapel users is that the compiler is too slow.  The current production compiler does whole program compilation and thus is not able to take advantage of separate and incremental compilation approaches.  The current &lt;em&gt;dyno&lt;/em&gt; effort within the Chapel team is redesigning the Chapel compiler to enable separate, incremental, and in general more dynamic compilation.&lt;/p&gt;
&lt;p&gt;Another important update we heard was regarding the ever-growing GPU support for Chapel. Currently, some &lt;code&gt;forall&lt;/code&gt; loops in Chapel are compiled both for CPUs and as GPU kernels. Which version to run is selected at runtime. Below is an example of Chapel code that currently runs as a GPU kernel on machines where one or more GPUs are available.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/chiuw22-gpu-code-example.jpg&quot; alt=&quot;coforall gpu in here.gpus do on gpu { var A, B, C: [1..n] real; const alpha = 2.0; B = 1.0; C = 2.0; A = B + alpha + C; }&quot; title=&quot;Chapel Code that is offloaded to all the gpus on a locale/node.&quot;&gt;&lt;/p&gt;
&lt;p&gt;Keep an eye on the &lt;a href=&quot;https://chapel-lang.org/docs/technotes/gpu.html&quot;&gt;GPU Programming Technical Note&lt;/a&gt; for new features as GPU support in Chapel continues to expand.  More GPU support means handling an especially difficult kind of parallelism that programmers struggle with these days.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Feedback from Attendees&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;One user, Nelson Dias, gave a talk about moving to Chapel from the C and Python programming languages.  Nelson’s abstract states, “Chapel is a very elegant language, providing the power and speed of C and Fortran, while allowing a high degree of abstraction and expressiveness that rivals Python&apos;s. I have used it in the last two years for: calculating statistics over massive turbulence datasets, implementing models for lake evaporation in hydrology, and testing some relatively simple numerical solutions of partial differential equations.”  &lt;a href=&quot;https://youtu.be/400jmMzdzHQ&quot;&gt;His talk&lt;/a&gt; details the advantages and disadvantages he found while programming in Chapel.&lt;/p&gt;
&lt;p&gt;In a post-workshop survey, attendees provided the following feedback to help improve CHIUW:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Add short tutorials on how to use language features in a real language, as well as a debugging and performance analysis tutorial. (The Chapel team plans on doing these for next year&apos;s Chapel Coding Day.)&lt;/li&gt;
&lt;li&gt;Expand presentation topics (The CHIUW organizers can encourage new submissions from the community next year)&lt;/li&gt;
&lt;li&gt;Explore how people are doing performance optimizations in Chapel applications (For next year, we will encourage such submissions from the community)&lt;/li&gt;
&lt;li&gt;Talk more about libraries, specifically have one about parallel/distributed libraries in Chapel and another about wrapping C libraries in Chapel and lightweight Python wrappers for Chapel&lt;/li&gt;
&lt;li&gt;Offer more about the internals of the Chapel compiler, runtime, and libraries&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Attendees also pointed out that their favorite Chapel features included parallel iterators, domains, global view memory, separation of concerns, and multi-resolution parallel programming.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Summary&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Thank you for taking the time to read this post summarizing the recent Chapel workshop that highlights applications using Chapel, Coding Day, updates for the project, and feedback the team received during and after the workshop.&lt;/p&gt;
&lt;p&gt;Check out all of the talk videos, slides, and submissions at the &lt;a href=&quot;https://chapel-lang.org/CHIUW2022.html&quot;&gt;9th Annual Chapel Implementers and Users Workshop (CHIUW 2022)&lt;/a&gt; website.  Come interact with the open-source Chapel project at the &lt;a href=&quot;https://chapel-lang.org/&quot;&gt;Chapel website&lt;/a&gt;, on &lt;a href=&quot;https://github.com/chapel-lang/&quot;&gt;GitHub&lt;/a&gt;, &lt;a href=&quot;https://stackoverflow.com/questions/tagged/chapel&quot;&gt;StackOverflow&lt;/a&gt;, &lt;a href=&quot;https://www.facebook.com/ChapelLanguage&quot;&gt;Facebook&lt;/a&gt;, &lt;a href=&quot;https://twitter.com/ChapelLanguage&quot;&gt;Twitter&lt;/a&gt;, &lt;a href=&quot;https://chapel.discourse.group/&quot;&gt;Discourse&lt;/a&gt;, or &lt;a href=&quot;https://www.youtube.com/c/ChapelParallelProgrammingLanguage&quot;&gt;YouTube&lt;/a&gt;.  Consider how Chapel could help you solve some of your parallel programming challenges.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Scale Kubernetes Clusters using HPE GreenLake Terraform Provider]]></title><description><![CDATA[The process of managing and provisioning computer data centers through machine-readable definition files, also known as Infrastructure-as…]]></description><link>https://developer.hpe.com/scale-kubernetes-cluster-using-hpe-greenlake-terraform-provider/</link><guid isPermaLink="false">https://developer.hpe.com/scale-kubernetes-cluster-using-hpe-greenlake-terraform-provider/</guid><pubDate>Tue, 12 Jul 2022 14:06:02 GMT</pubDate><content:encoded>&lt;p&gt;The process of managing and provisioning computer data centers through machine-readable definition files, also known as Infrastructure-as-Code (IaC), offers many significant benefits. It helps to increase operational agility, simplify management, reduce errors, and save cost.&lt;/p&gt;
&lt;p&gt;In this post, I will explore options to declare and scale Kubernetes clusters on HPE GreenLake using the HPE GreenLake Terraform Provider.&lt;/p&gt;
&lt;h1&gt;Prerequisite&lt;/h1&gt;
&lt;p&gt;Before starting this tutorial, it is recommended that you read the blog post &lt;a href=&quot;https://developer.hpe.com/blog/kubernetes-clusters-as-code-part1/&quot;&gt;Kubernetes Cluster as Code - Part 1&lt;/a&gt;, which includes steps for creating a Kubernetes cluster. This post expands upon that scenario by examining how to scale a cluster.&lt;/p&gt;
&lt;h1&gt;Verify existing Kubernetes cluster&lt;/h1&gt;
&lt;p&gt;After the cluster is created following the instructions found in the &lt;a href=&quot;https://developer.hpe.com/blog/kubernetes-clusters-as-code-part1/&quot;&gt;Kubernetes Cluster as Code - Part 1 blog post&lt;/a&gt;, launch HPE GreenLake Central console and verify that the cluster is present under the appropriate tenant.&lt;/p&gt;
&lt;p&gt;You should see the &lt;strong&gt;tf-test&lt;/strong&gt; cluster present under &lt;strong&gt;Dashboard -&gt; Manage your Private Cloud -&gt; Containers&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/cluster_list.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Below is the reference Terraform configuration file for the existing cluster.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-hcl&quot;&gt;terraform {
  required_providers {
    hpegl = {
      source  = &quot;hpe/hpegl&quot;
      version = &quot;&gt;= 0.2.2&quot;
    }
  }
}
 
provider &quot;hpegl&quot; {
  caas {
  }
}
 
variable &quot;HPEGL_SPACE&quot; {
  type = string
}
 
data &quot;hpegl_caas_site&quot; &quot;blr&quot; {
  name = &quot;BLR&quot;
  space_id = var.HPEGL_SPACE
 }
 
data &quot;hpegl_caas_cluster_blueprint&quot; &quot;bp&quot; {
  name = &quot;demo&quot;
  site_id = data.hpegl_caas_site.blr.id
}
 
resource &quot;hpegl_caas_cluster&quot; &quot;test&quot; {
  name         = &quot;tf-test&quot;
  blueprint_id = data.hpegl_caas_cluster_blueprint.bp.id
  site_id      = data.hpegl_caas_site.blr.id
  space_id     = var.HPEGL_SPACE
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;Add worker node to Kubernetes cluster resource&lt;/h1&gt;
&lt;p&gt;You can scale the cluster by adding a worker node. The following worker-node attributes are specified to add or modify node pools in the declared Kubernetes cluster resource.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;name&lt;/strong&gt;: Fill in the name that would ideally represent each node pool.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;machine_blueprint_id&lt;/strong&gt;: Fill in the ID for the machine blueprint that is already present in HPE GreenLake Central for your tenant. Use the machine blueprint data source to retrieve the machine blueprint ID.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;min_size&lt;/strong&gt;: Add the number of minimum nodes to be present as part of this node pool. The autoscaler will not scale the nodepool below this number.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;max_size&lt;/strong&gt;: Add the number of maximum nodes to be present as part of this node pool. The autoscaler will not scale the nodepool above this number.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Below is the reference Terraform configuration for creating the cluster with additional nodes.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-hcl&quot;&gt;terraform {
  required_providers {
    hpegl = {
      source  = &quot;hpe/hpegl&quot;
      version = &quot;&gt;= 0.2.2&quot;
    }
  }
}
 
provider &quot;hpegl&quot; {
  caas {
  }
}
 
variable &quot;HPEGL_SPACE&quot; {
  type = string
}
 
data &quot;hpegl_caas_site&quot; &quot;blr&quot; {
  name = &quot;BLR&quot;
  space_id = var.HPEGL_SPACE
 }
 
data &quot;hpegl_caas_cluster_blueprint&quot; &quot;bp&quot; {
  name = &quot;demo&quot;
  site_id = data.hpegl_caas_site.blr.id
}
 
data &quot;hpegl_caas_machine_blueprint&quot; &quot;standard_worker&quot; {
  name = &quot;standard-worker&quot;
  site_id = data.hpegl_caas_site.blr.id
}

resource &quot;hpegl_caas_cluster&quot; &quot;test&quot; {
  name         = &quot;tf-test&quot;
  blueprint_id = data.hpegl_caas_cluster_blueprint.bp.id
  site_id      = data.hpegl_caas_site.blr.id
  space_id     = var.HPEGL_SPACE

  worker_nodes {
    name = &quot;test-node-pool&quot;
    machine_blueprint_id = data.hpegl_caas_machine_blueprint.standard_worker.id
    min_size = &quot;1&quot;
    max_size = &quot;2&quot;
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;em&gt;Note: Machine blueprints are used to define the infrastructure details for the worker nodes used in a cluster. A machine blueprint includes the following:&lt;/em&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Machine provider&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Operating system image and version&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Number of vCPU cores and amount of memory in the node&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;Ready to Terraform plan&lt;/h1&gt;
&lt;p&gt;Terraform plan is a dry run that lets you preview the changes that Terraform plans to make to your infrastructure based on the data you provide in your Terraform file. To see this, run &lt;strong&gt;terraform plan&lt;/strong&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;$ terraform plan

hpegl_caas_cluster.test: Refreshing state... [id=a32fabb9-7c19-42d1-9a38-ebf122810c0a]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # hpegl_caas_cluster.test will be updated in-place
  ~ resource &quot;hpegl_caas_cluster&quot; &quot;test&quot; {
        id                    = &quot;a32fabb9-7c19-42d1-9a38-ebf122810c0a&quot;
        name                  = &quot;tf-test&quot;
        # (17 unchanged attributes hidden)

      + worker_nodes {
          + min_size             = &quot;1&quot;
          + max_size             = &quot;2&quot;
          + machine_blueprint_id = &quot;0ac21c99-2fdb-491d-a590-a5016690b80b&quot;
          + name                 = &quot;test-node-pool&quot;
        }
    }

Plan: 0 to add, 1 to change, 0 to destroy.

──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Note: You didn&apos;t use the -out option to save this plan, so Terraform can&apos;t guarantee to take exactly these actions if you run &quot;terraform apply&quot; now.
&lt;/code&gt;&lt;/pre&gt;
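&lt;p&gt;As the note at the end of the output suggests, you can optionally save the plan to a file and then apply exactly that plan. This small variation uses standard Terraform options:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Save the execution plan to a file, then apply exactly that saved plan
terraform plan -out=tfplan
terraform apply tfplan
&lt;/code&gt;&lt;/pre&gt;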
&lt;h1&gt;Ready to Terraform apply&lt;/h1&gt;
&lt;p&gt;Terraform apply executes the actions proposed in the Terraform plan and updates the resources. Run the command &lt;strong&gt;terraform apply&lt;/strong&gt; and type &lt;strong&gt;yes&lt;/strong&gt; when asked to &lt;strong&gt;Enter a value&lt;/strong&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shellsession&quot;&gt;$ terraform apply

hpegl_caas_cluster.test: Refreshing state... [id=a32fabb9-7c19-42d1-9a38-ebf122810c0a]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # hpegl_caas_cluster.test will be updated in-place
  ~ resource &quot;hpegl_caas_cluster&quot; &quot;test&quot; {
        id                    = &quot;a32fabb9-7c19-42d1-9a38-ebf122810c0a&quot;
        name                  = &quot;tf-test&quot;
        # (17 unchanged attributes hidden)

      + worker_nodes {
          + min_size             = &quot;1&quot;
          + max_size             = &quot;2&quot;
          + machine_blueprint_id = &quot;0ac21c99-2fdb-491d-a590-a5016690b80b&quot;
          + name                 = &quot;test-node-pool&quot;
        }
    }

Plan: 0 to add, 1 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only &apos;yes&apos; will be accepted to approve.

  Enter a value: yes

hpegl_caas_cluster.test: Modifying... [id=a32fabb9-7c19-42d1-9a38-ebf122810c0a]
hpegl_caas_cluster.test: Still modifying... [id=a32fabb9-7c19-42d1-9a38-ebf122810c0a, 10s elapsed]
hpegl_caas_cluster.test: Still modifying... [id=a32fabb9-7c19-42d1-9a38-ebf122810c0a, 1m10s elapsed]
hpegl_caas_cluster.test: Still modifying... [id=a32fabb9-7c19-42d1-9a38-ebf122810c0a, 3m10s elapsed]
hpegl_caas_cluster.test: Still modifying... [id=a32fabb9-7c19-42d1-9a38-ebf122810c0a, 5m10s elapsed]
hpegl_caas_cluster.test: Still modifying... [id=a32fabb9-7c19-42d1-9a38-ebf122810c0a, 7m10s elapsed]
hpegl_caas_cluster.test: Still modifying... [id=a32fabb9-7c19-42d1-9a38-ebf122810c0a, 9m10s elapsed]
hpegl_caas_cluster.test: Still modifying... [id=a32fabb9-7c19-42d1-9a38-ebf122810c0a, 11m10s elapsed]
hpegl_caas_cluster.test: Still modifying... [id=a32fabb9-7c19-42d1-9a38-ebf122810c0a, 13m10s elapsed]
hpegl_caas_cluster.test: Still modifying... [id=a32fabb9-7c19-42d1-9a38-ebf122810c0a, 15m10s elapsed]
hpegl_caas_cluster.test: Still modifying... [id=a32fabb9-7c19-42d1-9a38-ebf122810c0a, 17m10s elapsed]
hpegl_caas_cluster.test: Still modifying... [id=a32fabb9-7c19-42d1-9a38-ebf122810c0a, 19m10s elapsed]
hpegl_caas_cluster.test: Modifications complete after 19m18s [id=a32fabb9-7c19-42d1-9a38-ebf122810c0a]

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;From the HPE GreenLake edge-to-cloud platform, launch the HPE GreenLake Central console. Navigate to &lt;strong&gt;Dashboard -&gt; Manage your Private Cloud -&gt; Containers&lt;/strong&gt; and select the &lt;strong&gt;tf-test&lt;/strong&gt; cluster created. You will see additional nodes with &lt;strong&gt;Node Pool&lt;/strong&gt; name as &quot;&lt;strong&gt;test-node-pool&lt;/strong&gt;&quot; being created successfully.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/cluster_detail_page.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h1&gt;Scale Options with Auto Scaler&lt;/h1&gt;
&lt;p&gt;Cluster Autoscaler is a tool that automatically adjusts the size of the Kubernetes cluster node pool within the min_size and max_size values. It increases the size of the cluster when pods fail to schedule due to insufficient resources and decreases it when some nodes are consistently unneeded for a significant amount of time.&lt;/p&gt;
&lt;p&gt;The above example is specifically for adding a single worker node pool to an existing cluster. Below are all the possible options available for auto scaling.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;&lt;em&gt;Add worker node pools&lt;/em&gt;:&lt;/strong&gt; You can add multiple node pools by simply declaring corresponding &lt;strong&gt;worker_nodes&lt;/strong&gt; in the same cluster resource.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;em&gt;Reduce worker node pools&lt;/em&gt;:&lt;/strong&gt; Remove &lt;strong&gt;worker_nodes&lt;/strong&gt; associated with a specific node pool from the cluster resource.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;em&gt;Scaling worker nodes up/down&lt;/em&gt;:&lt;/strong&gt; Updating the &lt;strong&gt;min_size&lt;/strong&gt; and &lt;strong&gt;max_size&lt;/strong&gt; fields increases (scales up) or decreases (scales down) the number of nodes under each node pool. If both values are the same, auto scaling is disabled. In any case, &lt;strong&gt;min_size&lt;/strong&gt; cannot be greater than &lt;strong&gt;max_size&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;em&gt;Increase/decrease default worker node count&lt;/em&gt;:&lt;/strong&gt; Every cluster by default has a worker node even if &lt;strong&gt;worker_nodes&lt;/strong&gt; are not declared in the Terraform configuration. This originally comes from what&apos;s declared in the cluster blueprint. You can override and update the min_size, max_size and machine blueprint for this default worker by declaring &lt;strong&gt;worker_nodes&lt;/strong&gt; with the default worker node name. &lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Note: If you remove the default node pool (&lt;strong&gt;worker_nodes&lt;/strong&gt; with the default worker node name) from your configuration file, the default configuration coming from the cluster blueprint will be retained.&lt;/p&gt;
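&lt;p&gt;Whichever of these options you use, the workflow is the same: edit the &lt;strong&gt;worker_nodes&lt;/strong&gt; blocks in your configuration, preview the change, apply it, and confirm the resulting node count. The commands below are a minimal sketch that assumes you have downloaded the cluster&apos;s kubeconfig from HPE GreenLake Central:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Preview and apply the node pool changes
terraform plan
terraform apply

# Optionally, watch worker nodes join or leave the cluster
kubectl get nodes --watch
&lt;/code&gt;&lt;/pre&gt;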
&lt;h1&gt;Summary&lt;/h1&gt;
&lt;p&gt;In this blog, I covered how to scale Kubernetes clusters with Terraform provider for HPE GreenLake. I showed you how to update an existing cluster with additional worker nodes. I also discussed several options available to increase or reduce worker nodes across different node pools for Kubernetes clusters.&lt;/p&gt;
&lt;p&gt;I hope you found this information interesting and useful while considering how to scale Kubernetes clusters with the HPE GreenLake Terraform provider. Use the following links to learn more about Terraform and the HPE GreenLake Terraform provider.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://www.terraform.io/&quot;&gt;Learn more about Terraform&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.hpe.com/us/en/greenlake.html&quot;&gt;Learn more about HPE GreenLake&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://registry.terraform.io/providers/HPE/hpegl&quot;&gt;Learn more about the HPE GreenLake Terraform provider&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Don’t forget, you can always find other tutorials and articles on HPE GreenLake on the &lt;a href=&quot;https://developer.hpe.com/blog/tag/hpe-greenlake&quot;&gt;HPE Developer blog&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Configuring Azure AD as the SAML IDP with HPE Greenlake Cloud Platform and Aruba Central]]></title><description><![CDATA[Single sign-on (SSO) enables users to securely authenticate with multiple applications and websites by logging in only once using just one…]]></description><link>https://developer.hpe.com/configuring-azure-ad-with-greenlake-cloud-platform-and-aruba-central/</link><guid isPermaLink="false">https://developer.hpe.com/configuring-azure-ad-with-greenlake-cloud-platform-and-aruba-central/</guid><pubDate>Mon, 11 Jul 2022 12:04:30 GMT</pubDate><content:encoded>&lt;p&gt;Single sign-on (SSO) enables users to securely authenticate with multiple applications and websites by logging in only once using just one set of credentials (username and password). With SSO, the application or website that the user is trying to access relies on a trusted third party (identity provider) to verify that users are who they say they are.&lt;/p&gt;
&lt;p&gt;Azure Active Directory (Azure AD) is a cloud-based identity and access management service that helps you access external resources, such as Microsoft 365, the Azure portal, and thousands of other SaaS applications. Aruba Central uses a Security Assertion Markup Language (SAML) identity provider (idP) to issue authentication assertions in conjunction with a single sign-on profile. In this blog post, I&apos;ll explain the process for configuring Azure AD to authenticate users into the HPE GreenLake Cloud Platform (HPE GLCP) and Aruba Central using a SAML idP.&lt;/p&gt;
&lt;p&gt;If you&apos;re looking for the Okta version of this information, it can be found on &lt;a href=&quot;https://www.wifi-guys.com/?p=512&quot;&gt;WIFI-GUYS&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Contents&lt;/h2&gt;
&lt;!-- prettier-ignore-start --&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;#before-you-begin&quot;&gt;Before you Begin&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#terms-used-in-this-document&quot;&gt;Terms used in this blog post&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#steps-to-configure-ssosaml-application-in-azure-ad&quot;&gt;Steps to Configure SSO/SAML Application in Azure AD&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#step-1-create-an-azure-ad-enterprise-application&quot;&gt;Step 1: Create an Azure AD Enterprise Application&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#step-2-configure-gclp-for-saml-federation&quot;&gt;Step 2: Configure GLCP for SAML Federation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#login-to-glcp-and-aruba-central-using-azure-ad&quot;&gt;Login to HPE GLCP and Aruba Central using Azure AD&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#using-azure-ad-mfa&quot;&gt;Using Azure AD MFA&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#troubleshooting&quot;&gt;Troubleshooting&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#appendix-generating-the-hpe_ccs_attribute&quot;&gt;Appendix: Generating the hpe_ccs_attribute&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;!-- prettier-ignore-end --&gt;
&lt;h2&gt;Before you Begin&lt;/h2&gt;
&lt;p&gt;This blog post references the following documentation:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docLocale=en_US&amp;#x26;docId=ccs-help_en_us&quot;&gt;HPE Greenlake User Guide&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docLocale=en_US&amp;#x26;docId=a00092451en_us&amp;#x26;page=GUID-CD81FAF8-9601-4773-899F-049A506FEE2E.html&quot;&gt;Single sign-on (SSO) authentication&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docLocale=en_US&amp;#x26;docId=a00120892en_us&quot;&gt;HPE Greenlake Platform Guide&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you&apos;re looking for the Central 2.5.4 SAML integration guide, &lt;a href=&quot;https://github.com/michaelrosejr/arubasso/tree/main/Central254&quot;&gt;it has been moved&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Terms used in this blog post&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;CCS&lt;/strong&gt;: Common Cloud Service&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;GLCP&lt;/strong&gt;: HPE GreenLake Cloud Platform&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;SSO&lt;/strong&gt;: Single Sign On&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;SAML&lt;/strong&gt;: Security Assertion Markup Language&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;idP&lt;/strong&gt;: Identity Providers&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;AD&lt;/strong&gt;: Active Directory&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;MFA&lt;/strong&gt;: Multi-Factor Authentication&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;MSP&lt;/strong&gt;: Managed Service Provider&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;XML&lt;/strong&gt;: eXtensible Markup Language&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Steps to Configure an SSO/SAML Application in Azure AD&lt;/h2&gt;
&lt;p&gt;To configure SSO in Aruba Central, first download the metadata file from Azure AD.&lt;/p&gt;
&lt;ol&gt;
	&lt;li&gt;Create an Enterprise Application in the &lt;a href=&quot;https://portal.azure.com&quot;&gt;Azure Portal&lt;/a&gt;&lt;/li&gt;
	&lt;li&gt;Configure the Enterprise Application for HPE GLCP&lt;/li&gt;
	&lt;li&gt;Download the federated metadata XML file from Enterprise Application&lt;/li&gt;
	&lt;li&gt;Claim and configure your domain within HPE GLCP&lt;/li&gt;
	&lt;li&gt;Upload the federated metadata XML file to HPE GLCP &lt;/li&gt;
	&lt;li&gt;Create a recovery account&lt;/li&gt;&lt;/ol&gt;
&lt;h2&gt;Step 1: Create an Azure AD Enterprise Application&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Log into to the Azure portal.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click &lt;strong&gt;Enterprise Applications&lt;/strong&gt; (you may need to search for it, if it&apos;s not on your menu)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click &lt;strong&gt;New Application&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/0b085a5aef05404e9ecdf52cb9088feb/new_app.png&quot; alt=&quot;Image&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Click &lt;strong&gt;Create your own Application&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Enter the name of your app. (Ex: Aruba Central USWEST 4)&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/0b085a5aef05404e9ecdf52cb9088feb/create_app.png&quot; alt=&quot;Image&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Select &lt;strong&gt;Integrate any other application you don&apos;t find in the gallery (Non-gallery)&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Under Step 1: Assign users and groups, select the AD Group you created at the beginning of this document.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/0b085a5aef05404e9ecdf52cb9088feb/assign-users-groups.png&quot; alt=&quot;Image&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Under Step 2: Set Up Single Sign-On&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The default setting is Disabled. Select &lt;strong&gt;SAML&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/0b085a5aef05404e9ecdf52cb9088feb/select-saml.png&quot; alt=&quot;Image&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Under Basic SAML Configuration, click &lt;strong&gt;Edit&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th align=&quot;center&quot;&gt;Attribute&lt;/th&gt;
&lt;th align=&quot;center&quot;&gt;Values&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td align=&quot;center&quot;&gt;&lt;strong&gt;Identifier (Entity ID):&lt;/strong&gt;&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;&lt;a href=&quot;https://sso.common.cloud.hpe.com&quot;&gt;https://sso.common.cloud.hpe.com&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;center&quot;&gt;&lt;strong&gt;Reply URL (Assertion Consumer Service URL):&lt;/strong&gt;&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;&lt;a href=&quot;https://sso.common.cloud.hpe.com/sp/ACS.saml2&quot;&gt;https://sso.common.cloud.hpe.com/sp/ACS.saml2&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;center&quot;&gt;&lt;img src=&quot;/img/0b085a5aef05404e9ecdf52cb9088feb/azure-saml-ccs-urls.png&quot; alt=&quot;azure-saml-ccs-urls&quot; height=&quot;50%&quot; width=&quot;50%&quot;&gt;&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Under Attributes &amp;#x26; Claims&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Attribute&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;emailaddress&lt;/td&gt;
&lt;td&gt;user.givenname&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;name&lt;/td&gt;
&lt;td&gt;user.userprincipalname&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;gl_first_name&lt;/td&gt;
&lt;td&gt;user.givenname&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;gl_last_name&lt;/td&gt;
&lt;td&gt;user.surname&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;hpe_ccs_attribute&lt;/td&gt;
&lt;td&gt;See Below&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;version_1#2fd5f97acbc211ecadc006baf610dd36:00000000-0000-0000-0000-000000000000:Account Administrator:ALL_SCOPES:683da368-66cb-4ee7-90a9-ec1964768092:Aruba Central Administrator:ALL_SCOPES

&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;Where the PCID (2fd5f97acbc211ecadc006baf610dd36) is your ID for HPE GLCP
and App ID (683da368-66cb-4ee7-90a9-ec1964768092) for your Central cluster
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;For more details on the &lt;code&gt;hpe_ccs_attribute&lt;/code&gt;, see the Appendix: &lt;a href=&quot;#appendix-generating-the-hpe_ccs_attribute&quot;&gt;Generating the &lt;code&gt;hpe_ccs_attribute&lt;/code&gt;&lt;/a&gt;&lt;/strong&gt;
&lt;img src=&quot;/img/0b085a5aef05404e9ecdf52cb9088feb/azure-saml-custom-attributes-img1.png&quot; alt=&quot;Image&quot;&gt;
&lt;img src=&quot;/img/0b085a5aef05404e9ecdf52cb9088feb/azure-saml-hpe_ccs_attribute.png&quot; alt=&quot;Image&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Click &lt;strong&gt;Download&lt;/strong&gt; under Step 3 : Federation Metadata XML.&lt;/li&gt;
&lt;/ul&gt;
&lt;img src=&quot;/img/0b085a5aef05404e9ecdf52cb9088feb/azure-saml-federation-metadata-download.png&quot; alt=&quot;azure-saml-federation-metadata-download&quot; height=&quot;50%&quot; width=&quot;50%&quot;&gt;
&lt;h2&gt;Step 2: Configure GLCP for SAML Federation&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Login to HPE GLCP and select Manage.&lt;/li&gt;
&lt;/ul&gt;
&lt;img src=&quot;/img/0b085a5aef05404e9ecdf52cb9088feb/manage.png&quot; alt=&quot;manage&quot; height=&quot;50%&quot; width=&quot;50%&quot;&gt;
&lt;ul&gt;
&lt;li&gt;Select the Authentication tile.&lt;/li&gt;
&lt;/ul&gt;
&lt;img src=&quot;/img/0b085a5aef05404e9ecdf52cb9088feb/ccs-authentication.png&quot; alt=&quot;ccs_authentication&quot; height=&quot;50%&quot; width=&quot;50%&quot;&gt;
&lt;ul&gt;
&lt;li&gt;Claim your domain for SAML.&lt;/li&gt;
&lt;/ul&gt;
&lt;img src=&quot;/img/0b085a5aef05404e9ecdf52cb9088feb/ccs-claim-domain.png&quot; alt=&quot;claim_domain&quot; height=&quot;50%&quot; width=&quot;50%&quot;&gt;
&lt;ul&gt;
&lt;li&gt;Upload the &lt;em&gt;Federation Metadata XML&lt;/em&gt; file from the previous section.&lt;/li&gt;
&lt;/ul&gt;
&lt;img src=&quot;/img/0b085a5aef05404e9ecdf52cb9088feb/ccs-samle-azure-metadata-summry.png&quot; alt=&quot;metadatasummary&quot; height=&quot;70%&quot; width=&quot;70%&quot;&gt;
&lt;ul&gt;
&lt;li&gt;Apply the following configuration settings. These should match the First and Last Name settings you set above for Azure.&lt;/li&gt;
&lt;/ul&gt;
&lt;img src=&quot;/img/0b085a5aef05404e9ecdf52cb9088feb/ccs-saml-config-settings-summary.png&quot; alt=&quot;saml-settings&quot; height=&quot;70%&quot; width=&quot;70%&quot;&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Create the recovery user per the instructions.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Validate the settings are correct.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Save and finish the configuration.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If you get an error that the SAML configuration wasn&apos;t completed using the account with the @domain.com, you&apos;ll have to log out and login again with the SAML domain and go through the above configuration again.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Log in to HPE GLCP and Aruba Central using Azure AD&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Once you&apos;ve completed the above steps, log in to HPE Greenlake Central using your Azure AD email.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/0b085a5aef05404e9ecdf52cb9088feb/ccs_login.png&quot; alt=&quot;ccs_login&quot; height=&quot;40%&quot; width=&quot;40%&quot;&gt;&lt;img src=&quot;/img/0b085a5aef05404e9ecdf52cb9088feb/ccs_login_saml.png&quot; alt=&quot;ccs_login_saml&quot; height=&quot;40%&quot; width=&quot;40%&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;If everything is working correctly, you should be logged into HPE GLCP and see the Aruba Central application tile with a button to &quot;Launch&quot; the Aruba Central application.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Using Azure AD MFA&lt;/h2&gt;
&lt;p&gt;By default, Azure AD enables Multi-Factor Authentication (MFA). However, for testing and demos, it&apos;s much easier to disable MFA on your accounts. To disable MFA, please see the following documentation: &lt;a href=&quot;https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/concept-fundamentals-security-defaults&quot;&gt;What are security defaults&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Troubleshooting&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;There&apos;s a useful 3rd-party browser tool called: SAML Tracer&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;This tool will allow you to verify the attributes you&apos;re sending to Central.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;It can be useful when configuring SAML with multiple HPE GreenLake Central accounts or domains.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;SAML Tracer&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a href=&quot;https://chrome.google.com/webstore/detail/saml-tracer/mpdajninpobndbfcldcmbpnnbhibjmch?hl=en&quot;&gt;Chrome&lt;/a&gt;
&lt;a href=&quot;https://addons.mozilla.org/en-US/firefox/addon/saml-tracer/&quot;&gt;FireFox&lt;/a&gt;
&lt;img src=&quot;/img/0b085a5aef05404e9ecdf52cb9088feb/firefox-saml-tracer.png&quot; alt=&quot;Image&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Appendix: Generating the &lt;code&gt;hpe_ccs_attribute&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;The &lt;code&gt;hpe_ccs_attribute&lt;/code&gt; is used to determine your HPE GLCP account.  The format for the &lt;code&gt;hpe_ccs_attribute&lt;/code&gt; is as follows:&lt;/p&gt;
&lt;img src=&quot;/img/0b085a5aef05404e9ecdf52cb9088feb/hpe_ccs_attribute-img1.png&quot; alt=&quot;hpe_ccs_attribute-img1&quot; height=&quot;75%&quot; width=&quot;75%&quot;&gt;
&lt;p&gt;An example &lt;code&gt;hpe_ccs_attribute&lt;/code&gt; for a single HPE GLCP and Aruba Central account would be:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;version_1#2fd5f97acbc211ecadc006baf610dd36:00000000-0000-0000-0000-000000000000:Account Administrator:ALL_SCOPES:683da368-66cb-4ee7-90a9-ec1964768092:Aruba Central Administrator:ALL_SCOPES
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;or&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;version_1#5b0ec0e8b4f411eca432ba72799953ac:00000000-0000-0000-0000-000000000000:Account Administrator:ALL_SCOPES:683da368-66cb-4ee7-90a9-ec1964768092:Aruba Central Administrator:ALL_SCOPES#5b0ec0e8b4f411eca432ba72799953ac:00000000-0000-0000-0000-000000000000:Account Administrator:ALL_SCOPES
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you&apos;re a Managed Service Provider (MSP), then the &lt;code&gt;hpe_ccs_attribute&lt;/code&gt; for Administrator rights to HPE GLCP and Aruba Central for all customer tenant accounts is as follows:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;version_1#d951f8c8c67711eca2cf9efb55836a4d:00000000-0000-0000-0000-000000000000:Account Administrator|TENANT|:ALL_SCOPES:00000000-0000-0000-0000-000000000000:Account Administrator|MSP|:ALL_SCOPES:683da368-66cb-4ee7-90a9-ec1964768092:Aruba Central Administrator|TENANT| : ALL_SCOPES:683da368-66cb-4ee7-90a9-ec1964768092:Aruba Central Administrator|MSP| : ALL_SCOPES
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;hpe_ccs_attribute&lt;/code&gt; string for a tenant under an MSP account is shown below. Please note, you &lt;strong&gt;must&lt;/strong&gt; have the SAML domain configured for that tenant account using the &lt;strong&gt;same&lt;/strong&gt; settings as the MSP account. In other words, you &lt;strong&gt;must&lt;/strong&gt; go through this configuration for each tenant account under the MSP.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;version_1#f9ee1cdecc1611ecb00e9e24ed17d2a7:00000000-0000-0000-0000-000000000000:Observer|TENANT| :ALL_SCOPES:683da368-66cb-4ee7-90a9-ec1964768092:Aruba Central Administrator|TENANT| :ALL_SCOPES
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;In this blog post, you learned how to configure Azure AD with HPE GreenLake by passing the necessary configuration and customizations using the &lt;code&gt;hpe_ccs_attribute&lt;/code&gt;. From this point, you can create custom attributes to grant different levels of access based on roles, such as Read/Write or Read-Only access.&lt;/p&gt;
&lt;p&gt;If you have feedback on this blog post, please send me a message.&lt;/p&gt;
&lt;p&gt;Be sure to come back to the HPE Developer Community blog for more articles on this and other interesting subjects.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE GreenLake]]></title><link>https://developer.hpe.com/2022-July-06/</link><guid isPermaLink="false">https://developer.hpe.com/2022-July-06/</guid><pubDate>Wed, 06 Jul 2022 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Service Mesh Security Hardening – using SPIRE with Istio]]></title><description><![CDATA[Building applications using microservices offers developers the ability to better scale their applications and take better advantage of…]]></description><link>https://developer.hpe.com/service-mesh-security-hardening-–-using-spire-with-istio/</link><guid isPermaLink="false">https://developer.hpe.com/service-mesh-security-hardening-–-using-spire-with-istio/</guid><pubDate>Mon, 27 Jun 2022 16:22:05 GMT</pubDate><content:encoded>&lt;p&gt;Building applications using microservices offers developers the ability to better scale their applications and take better advantage of public and hybrid cloud architectures. The ability to split each part of the application into independent codebases that perform one specific task means that each self-contained service can increase in size independently as its needs change, providing for this scalability. And it allows cross-functional teams to develop, test, and update services independently, leading to faster deployments and updates.&lt;/p&gt;
&lt;p&gt;Though there are significant advantages to a microservices architecture, it can also be much more complex to manage and secure. With the potential for hundreds of services, it’s challenging for developers to keep track of component interactions, health, performance, and security.&lt;/p&gt;
&lt;p&gt;In a recent SPIFFE blog post, Nathalia Satie Gomazako points out how a service mesh solves the problem of inter-service communications. By controlling service-to-service communication over a network, it allows separate parts of an application to communicate with one another. She explains how Istio is a very popular service mesh that does just this.&lt;/p&gt;
&lt;p&gt;Nathalia goes on to explain how SPIRE, the reference implementation of SPIFFE, the Secure Production Identity Framework for Everyone, can integrate with Istio and assist with security concerns, especially when dealing with a multi-cloud infrastructure. This integration extends Istio&apos;s capabilities by allowing workloads to be identified and issued their identities based on a pre-defined set of assigned attributes. With this attestation process, Istio can securely issue cryptographic identities to workloads.&lt;/p&gt;
&lt;p&gt;Her article, &lt;a href=&quot;https://blog.spiffe.io/hardening-istio-security-with-spire-d2f4f98f7a63&quot;&gt;Hardening Istio security with SPIRE&lt;/a&gt;, is a quick 3-minute read and quite informative. I highly recommend it.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Kubernetes Cluster as Code - Part 2]]></title><description><![CDATA[Getting started The process of managing and provisioning computer data centers through machine-readable definition files, also known as…]]></description><link>https://developer.hpe.com/kubernetes-cluster-as-code-part-2/</link><guid isPermaLink="false">https://developer.hpe.com/kubernetes-cluster-as-code-part-2/</guid><pubDate>Thu, 23 Jun 2022 14:01:05 GMT</pubDate><content:encoded>&lt;h2&gt;Getting started&lt;/h2&gt;
&lt;p&gt;The process of managing and provisioning computer data centers through machine-readable definition files, also known as Infrastructure-as-Code (IaC), offers many significant benefits, like:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Increased operational agility&lt;/li&gt;
&lt;li&gt;Simplified management&lt;/li&gt;
&lt;li&gt;Reduced errors&lt;/li&gt;
&lt;li&gt;Saved cost&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Using the HPE GreenLake Terraform provider, you can bring up a Kubernetes cluster starting from the infrastructure layer and work all the way up the stack to set up the desired configurations and applications. In the diagram below, items 2 and 3 are available community providers that can be used in combination with the HPE GreenLake TF provider to deploy applications on the Kubernetes cluster. In this blog post, I will illustrate how this can be implemented using Terraform.&lt;/p&gt;
&lt;p&gt;Using Terraform files similar to those shown in this blog post, customers can deploy any application of their choice on the Kubernetes cluster by consuming the appropriate community providers. With this capability, customers can customize their Kubernetes clusters based on their needs.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image2022-6-20_12-36-56.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Let&apos;s recap&lt;/h2&gt;
&lt;p&gt;In &lt;a href=&quot;https://developer.hpe.com/blog/kubernetes-clusters-as-code-part1/&quot;&gt;my first blog post&lt;/a&gt;, I covered the usage of the HPE GreenLake Terraform provider to create and destroy a Kubernetes cluster and discussed how to use community providers, in combination with HPE GreenLake TF provider, to create a namespace in the Kubernetes cluster.&lt;/p&gt;
&lt;p&gt;In this blog post, I will focus on managing application deployments using IaC. Here, I will be deploying a Prometheus application on an existing/pre-created Kubernetes cluster. Hence, the pre-requisite to proceed would be to have a Kubernetes cluster and a namespace created. You could follow the steps mentioned in &lt;a href=&quot;https://developer.hpe.com/blog/kubernetes-clusters-as-code-part1/&quot;&gt;my first blog post&lt;/a&gt; to achieve this.&lt;/p&gt;
&lt;h2&gt;Application deployment on a Kubernetes cluster&lt;/h2&gt;
&lt;h3&gt;Helm provider&lt;/h3&gt;
&lt;p&gt;Below is the code block for adding the &lt;strong&gt;helm&lt;/strong&gt; community provider. Please refer to &lt;a href=&quot;https://developer.hpe.com/blog/kubernetes-clusters-as-code-part1/&quot;&gt;my first blog post&lt;/a&gt; for details regarding the &lt;strong&gt;hpegl_caas_cluster&lt;/strong&gt; data source.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;provider &quot;helm&quot; {
  kubernetes {
       host     = yamldecode(base64decode(data.hpegl_caas_cluster.tf-test-7.kubeconfig)).clusters[0].cluster.server
       token    = yamldecode(base64decode(data.hpegl_caas_cluster.tf-test-7.kubeconfig)).users[0].user.token
       insecure = true
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Helm-release Terraform resource for Prometheus stack deployment&lt;/h3&gt;
&lt;p&gt;In order to deploy the Prometheus stack using the helm-release resource, the following values must be provided in the &lt;strong&gt;prometheus-deploy.tf&lt;/strong&gt; file:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Cluster Name: Fill in the &lt;strong&gt;name&lt;/strong&gt; of the pre-created cluster in &lt;strong&gt;hpegl_caas_cluster&lt;/strong&gt; block. In the below example, name= &quot;tf-test-7&quot;&lt;/li&gt;
&lt;li&gt;Namespace: Fill in the appropriate &lt;strong&gt;namespace&lt;/strong&gt; in the &lt;strong&gt;helm_release&lt;/strong&gt; block. In the below example, namespace= &quot;test-namespace&quot; &lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;prometheus-deploy.tf&lt;/strong&gt; &lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: You can name this file according to your preference. We are using prometheus-deploy.tf here for easy reference.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;terraform {
  required_providers {
    hpegl = {
      source  = &quot;hpe/hpegl&quot;
      version = &quot;&gt;= 0.2.2&quot;
    }
  }
}
 
provider &quot;hpegl&quot; {
  caas {
  }
}
 
variable &quot;HPEGL_SPACE&quot; {
  type = string
}
 
data &quot;hpegl_caas_cluster&quot; &quot;tf-test-7&quot; {
  name     = &quot;tf-test-7&quot;
  space_id = var.HPEGL_SPACE
}
 
provider &quot;helm&quot; {
  kubernetes {
       host     = yamldecode(base64decode(data.hpegl_caas_cluster.tf-test-7.kubeconfig)).clusters[0].cluster.server
       token    = yamldecode(base64decode(data.hpegl_caas_cluster.tf-test-7.kubeconfig)).users[0].user.token
       insecure = true
  }
}
 
resource &quot;helm_release&quot; &quot;prometheus-stack&quot; {
   name = &quot;prometheus-stack&quot;
   repository = &quot;https://prometheus-community.github.io/helm-charts&quot;
   chart = &quot;kube-prometheus-stack&quot;
   version = &quot;36.0.2&quot;
   namespace = &quot;test-namespace&quot;
 
   set {
    name  = &quot;grafana.service.type&quot;
    value = &quot;NodePort&quot;
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Initializing workspace &amp;#x26; synchronizing infrastructure components&lt;/h3&gt;
&lt;p&gt;Place the &lt;strong&gt;prometheus-deploy.tf&lt;/strong&gt; file in your working directory and initialize the working directory using the command: &lt;strong&gt;terraform init&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ terraform init
 
Initializing the backend...
 
Initializing provider plugins...
- Reusing previous version of hashicorp/helm from the dependency lock file
- Reusing previous version of terraform.example.com/caas/hpegl from the dependency lock file
- Using previously-installed hashicorp/helm v2.5.1
- Using previously-installed terraform.example.com/caas/hpegl v0.0.1
 
Terraform has made some changes to the provider dependency selections recorded
in the .terraform.lock.hcl file. Review those changes and commit them to your
version control system if they represent changes you intended to make.
 
Terraform has been successfully initialized!
 
You may now begin working with Terraform. Try running &quot;terraform plan&quot; to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
 
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Terraform ready to plan&lt;/h3&gt;
&lt;p&gt;Terraform plan is a dry run that lets you preview the changes that Terraform plans to make to your infrastructure based on the data you provide in your Terraform file. To see this, run: &lt;strong&gt;terraform plan&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ terraform plan
 
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create
 
Terraform will perform the following actions:
 
  # helm_release.prometheus-stack will be created
  + resource &quot;helm_release&quot; &quot;prometheus-stack&quot; {
      + atomic                     = false
      + chart                      = &quot;kube-prometheus-stack&quot;
      + cleanup_on_fail            = false
      + create_namespace           = false
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = false
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 0
      + metadata                   = (known after apply)
      + name                       = &quot;prometheus-stack&quot;
      + namespace                  = &quot;test-namespace&quot;
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = &quot;https://prometheus-community.github.io/helm-charts&quot;
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = &quot;deployed&quot;
      + timeout                    = 300
      + verify                     = false
      + version                    = &quot;36.0.2&quot;
      + wait                       = true
      + wait_for_jobs              = false
 
      + set {
          + name  = &quot;grafana.service.type&quot;
          + value = &quot;NodePort&quot;
        }
    }
 
Plan: 1 to add, 0 to change, 0 to destroy.
 
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
 
Note: You didn&apos;t use the -out option to save this plan, so Terraform can&apos;t guarantee to take exactly these actions if you run &quot;terraform apply&quot; now.
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Terraform ready to apply&lt;/h3&gt;
&lt;p&gt;Terraform apply executes the actions proposed in the Terraform plan and deploys the resources. Run &lt;strong&gt;terraform apply&lt;/strong&gt; and then type yes when asked to &lt;strong&gt;Enter a value&lt;/strong&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ terraform apply
 
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create
 
Terraform will perform the following actions:
 
  # helm_release.prometheus-stack will be created
  + resource &quot;helm_release&quot; &quot;prometheus-stack&quot; {
      + atomic                     = false
      + chart                      = &quot;kube-prometheus-stack&quot;
      + cleanup_on_fail            = false
      + create_namespace           = false
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = false
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 0
      + metadata                   = (known after apply)
      + name                       = &quot;prometheus-stack&quot;
      + namespace                  = &quot;test-namespace&quot;
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = &quot;https://prometheus-community.github.io/helm-charts&quot;
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = &quot;deployed&quot;
      + timeout                    = 300
      + verify                     = false
      + version                    = &quot;36.0.2&quot;
      + wait                       = true
      + wait_for_jobs              = false
 
      + set {
          + name  = &quot;grafana.service.type&quot;
          + value = &quot;NodePort&quot;
        }
    }
 
Plan: 1 to add, 0 to change, 0 to destroy.
 
Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only &apos;yes&apos; will be accepted to approve.
 
  Enter a value: yes
 
helm_release.prometheus-stack: Creating...
helm_release.prometheus-stack: Still creating... [10s elapsed]
helm_release.prometheus-stack: Still creating... [20s elapsed]
helm_release.prometheus-stack: Still creating... [30s elapsed]
helm_release.prometheus-stack: Still creating... [40s elapsed]
helm_release.prometheus-stack: Still creating... [50s elapsed]
helm_release.prometheus-stack: Still creating... [1m0s elapsed]
helm_release.prometheus-stack: Still creating... [1m10s elapsed]
helm_release.prometheus-stack: Still creating... [1m20s elapsed]
helm_release.prometheus-stack: Still creating... [1m30s elapsed]
helm_release.prometheus-stack: Still creating... [1m40s elapsed]
helm_release.prometheus-stack: Still creating... [1m50s elapsed]
helm_release.prometheus-stack: Still creating... [2m0s elapsed]
helm_release.prometheus-stack: Creation complete after 2m2s [id=prometheus-stack]
 
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;
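&lt;p&gt;You can optionally confirm the Helm release itself before looking at the individual pods. This is a minimal sketch that assumes you have the helm CLI installed locally and a kubeconfig pointing at the cluster, and it reuses the release and namespace names from the example above.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Check that the Helm release created by Terraform reports a deployed status
helm status prometheus-stack -n test-namespace

# List all releases in the namespace for a quick overview
helm list -n test-namespace
&lt;/code&gt;&lt;/pre&gt;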
&lt;p&gt;After Terraform apply is complete, you can see the Prometheus pods deployed by running the command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ kubectl get po -n &amp;#x3C;namespace&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/12.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can see the Prometheus services by running the command: &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ kubectl get svc -n &amp;#x3C;namespace&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/13.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can fetch the node IP by running the command: &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ kubectl get nodes -o wide
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/14.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Since Grafana is exposed as a NodePort, the &lt;strong&gt;PORT&lt;/strong&gt; (32424) and the node&apos;s internal IP, &lt;strong&gt;INTERNAL-IP&lt;/strong&gt; (172.16.17.168), can be used to access the Grafana portal as follows:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;http://&amp;#x3C;INTERNAL-IP&gt;:&amp;#x3C;PORT&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;For example, &lt;a href=&quot;http://172.16.17.168:32424&quot;&gt;http://172.16.17.168:32424&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/14-2-.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
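&lt;p&gt;If you prefer to retrieve the NodePort and node IP from the command line rather than reading them off the kubectl output above, the sketch below shows one way to do it. It assumes the Grafana service is named &lt;strong&gt;prometheus-stack-grafana&lt;/strong&gt;, which is the chart&apos;s usual naming for a release called prometheus-stack; verify the actual service name in your environment first.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# NodePort assigned to the Grafana service (the service name is an assumption based on
# the chart&apos;s default naming for a release called &quot;prometheus-stack&quot;)
GRAFANA_PORT=$(kubectl get svc prometheus-stack-grafana -n test-namespace \
  -o jsonpath=&apos;{.spec.ports[0].nodePort}&apos;)

# Internal IP of the first node
NODE_IP=$(kubectl get nodes \
  -o jsonpath=&apos;{.items[0].status.addresses[?(@.type==&quot;InternalIP&quot;)].address}&apos;)

echo &quot;Grafana is reachable at http://${NODE_IP}:${GRAFANA_PORT}&quot;
&lt;/code&gt;&lt;/pre&gt;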
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;In &lt;a href=&quot;https://developer.hpe.com/blog/kubernetes-clusters-as-code-part1/&quot;&gt;my first blog post&lt;/a&gt;, I covered how to get started with the Terraform provider for HPE GreenLake and explained how to create a Kubernetes cluster and bring up a namespace on the Kubernetes cluster using Kubernetes community provider. In this article, I showed you how to manage application deployments on a Kubernetes cluster using Terraform.&lt;/p&gt;
&lt;p&gt;I hope you found this information interesting and useful in helping you get started with the HPE GreenLake Terraform provider. You can also go through the links below to learn more about Terraform and the HPE GreenLake Terraform provider:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://www.terraform.io/&quot;&gt;Learn more about Terraform&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.hpe.com/us/en/greenlake.html&quot;&gt;Learn more about HPE GreenLake&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://registry.terraform.io/providers/HPE/hpegl&quot;&gt;Learn more about the HPE GreenLake Terraform provider&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Don’t forget, you can always find other tutorials and articles on HPE GreenLake on the &lt;a href=&quot;https://developer.hpe.com/blog/tag/hpe-greenlake&quot;&gt;HPE Developer blog&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Kubernetes Cluster as Code - Part 1]]></title><description><![CDATA[Getting started The process of managing and provisioning computer data centers through machine-readable definition files, also known as…]]></description><link>https://developer.hpe.com/kubernetes-clusters-as-code-part1/</link><guid isPermaLink="false">https://developer.hpe.com/kubernetes-clusters-as-code-part1/</guid><pubDate>Thu, 23 Jun 2022 10:15:09 GMT</pubDate><content:encoded>&lt;h2&gt;Getting started&lt;/h2&gt;
&lt;p&gt;The process of managing and provisioning computer data centers through machine-readable definition files, also known as Infrastructure-as-Code (IaC), offers many significant benefits. It helps to increase operational agility, simplify management, reduce errors, and save cost. In this post, we will explore some of the benefits of using IaC to build a Kubernetes cluster from scratch, with all the necessary configuration and core services, on HPE GreenLake using Terraform (TF). Storing the Kubernetes cluster and its desired configurations as code helps with repeatability and change management.&lt;/p&gt;
&lt;p&gt;IaC with Kubernetes is not new. There are providers in the developer community that are quite good and well supported. Using the HPE GreenLake Terraform provider, you can bring up a Kubernetes cluster starting from the infrastructure layer and work all the way up the stack to set up the desired configurations and applications. For reference, see the picture below.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/image2022-6-20_12-36-56.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The HPE GreenLake TF provider brings the Kubernetes stack up on the HPE GreenLake infrastructure and exposes credentials for other TF providers to integrate further and build out the complete stack, as desired. In the diagram above, items 2 and 3 are available community providers that can be used in combination with the HPE GreenLake TF provider.&lt;/p&gt;
&lt;p&gt;One of the options provided by HPE GreenLake is to make it easy for customers to order and operate a private cloud with a mix of virtual machines, containers, and physical servers. This is exactly what the private cloud service is all about. It provides access via a public API, allowing developers to use an infrastructure-as-code type of tool, such as Terraform, to automate provisioning. To try out everything mentioned in this blog series, customers should be subscribed to &lt;strong&gt;HPE GreenLake for private cloud enterprise&lt;/strong&gt;, which unifies virtual machines, containers, and bare metal in one cloud service.&lt;/p&gt;
&lt;p&gt;In this two-part blog series, I’ll share my experience as a first-time user of the HPE GreenLake TF provider. This blog series aims to provide a step-by-step walkthrough of how to bring up a Kubernetes cluster using Terraform and how to deploy applications on a pre-created Kubernetes cluster.&lt;/p&gt;
&lt;p&gt;In this first part, I will focus on the prerequisites needed prior to using the HPE GreenLake TF provider and the steps to follow to bring up a Kubernetes cluster. I will also discuss how third-party community providers can be used in tandem with the HPE GreenLake TF provider.&lt;/p&gt;
&lt;p&gt;In &lt;a href=&quot;https://developer.hpe.com/blog/kubernetes-cluster-as-code-part-2/&quot;&gt;the second part of this series&lt;/a&gt;, I will illustrate how to deploy applications on a namespace in the pre-created Kubernetes cluster, using Terraform.&lt;/p&gt;
&lt;h2&gt;Preparing for infrastructure-as-code implementation &lt;/h2&gt;
&lt;h3&gt;Setting up API Client access&lt;/h3&gt;
&lt;p&gt;You need an API client to authenticate against HPE GreenLake.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: You should have &lt;strong&gt;IAM Owner&lt;/strong&gt; role for the appropriate tenant to proceed with API Client creation.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Follow the below steps for API Client creation:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;From the HPE GreenLake platform, launch the &lt;strong&gt;HPE GreenLake Central console&lt;/strong&gt; for the appropriate tenant. Under the settings icon on the tenant &lt;strong&gt;Dashboard&lt;/strong&gt; page, select &lt;strong&gt;User Management&lt;/strong&gt; option.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/dashboard.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Under the &lt;strong&gt;API Clients&lt;/strong&gt; tab, click on &lt;strong&gt;Create API Client&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/2.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt; Enter a &lt;strong&gt;Name&lt;/strong&gt; (mandatory field) and &lt;strong&gt;Description&lt;/strong&gt; (optional) for the API client, and click on &lt;strong&gt;Create&lt;/strong&gt; button.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/3.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt; Ensure you make a note of the &lt;strong&gt;Issuer&lt;/strong&gt;, &lt;strong&gt;Client ID&lt;/strong&gt; and &lt;strong&gt;Client Secret&lt;/strong&gt; before clicking on the &lt;strong&gt;Close&lt;/strong&gt; button. These details will be exported as environment variables in the next section.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/4.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;5&quot;&gt;
&lt;li&gt;In the &lt;strong&gt;API Clients&lt;/strong&gt; page, select the newly created client, and click on &lt;strong&gt;Create Assignment&lt;/strong&gt; button.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/5.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;6&quot;&gt;
&lt;li&gt;Create an assignment with &lt;strong&gt;Role Assignment:&lt;/strong&gt; &lt;strong&gt;Private Cloud Cluster Owner&lt;/strong&gt; and &lt;strong&gt;Space:&lt;/strong&gt; &lt;strong&gt;Default.&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/6.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The API client is now ready to be used to run the Terraform resources.&lt;/p&gt;
&lt;h3&gt;Selecting a Terraform provider with container service configurations&lt;/h3&gt;
&lt;h4&gt;1. Ensure you have Terraform installed.&lt;/h4&gt;
&lt;p&gt;Terraform can be installed by following: &lt;a href=&quot;https://learn.hashicorp.com/tutorials/terraform/install-cli&quot;&gt;Terraform Installation&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Installation can be verified using the below command.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ terraform version
Terraform v1.1.9
on linux_amd64
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;2. Export the following environment variables on your machine.&lt;/h4&gt;
&lt;p&gt;Export the Tenant ID and Space ID:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;export HPEGL_TENANT_ID=&amp;#x3C;Tenant ID&gt;
export TF_VAR_HPEGL_SPACE=&amp;#x3C;Space ID&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Export the API client details based on what was noted down.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;export HPEGL_USER_ID=&amp;#x3C;Client ID&gt;
export HPEGL_USER_SECRET=&amp;#x3C;Client Secret&gt;
export HPEGL_IAM_SERVICE_URL=&amp;#x3C;Issuer&gt;
&lt;/code&gt;&lt;/pre&gt;
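&lt;p&gt;Before moving on, you can quickly confirm that all of the variables above are exported in your current shell. Note that this will print the client secret to your terminal, so only do it in a trusted session.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# List the HPE GreenLake related variables exported above
env | grep HPEGL
&lt;/code&gt;&lt;/pre&gt;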
&lt;h4&gt;3. Choosing the appropriate Terraform provider.&lt;/h4&gt;
&lt;p&gt;The first section of the Terraform file enumerates the “providers” you rely upon for building your infrastructure, and there can be multiple providers in a single TF file. In this case, you will have the HPE GreenLake provider referenced as hpe/hpegl (&lt;strong&gt;source&lt;/strong&gt;) and the available versions to choose from (&lt;strong&gt;version&lt;/strong&gt;), as listed in the official &lt;a href=&quot;https://registry.terraform.io/providers/HPE/hpegl/0.2.2&quot;&gt;Terraform registry&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The first lines of your Terraform configuration file should look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;terraform {
  required_providers {
    hpegl = {
      source  = &quot;hpe/hpegl&quot;
      version = &quot;&gt;= 0.2.2&quot;
    }
  }
}
 
provider &quot;hpegl&quot; {
  caas {
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Create a cluster resource&lt;/h2&gt;
&lt;h3&gt;Terraform data source for cluster blueprint&lt;/h3&gt;
&lt;p&gt;In order to use the data source available for the cluster blueprint, you should add the block below to your Terraform file and specify the cluster blueprint name. Using this data source, Terraform will fetch the cluster blueprint ID associated with it.&lt;/p&gt;
&lt;p&gt;In the below block, &quot;demo&quot; is the cluster blueprint &lt;strong&gt;name&lt;/strong&gt; provided:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;data &quot;hpegl_caas_cluster_blueprint&quot; &quot;bp&quot; {
  name = &quot;demo&quot;
  site_id = data.hpegl_caas_site.blr.id
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The cluster blueprint ID can now be fetched by using:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;blueprint_id = data.hpegl_caas_cluster_blueprint.bp.id
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Terraform data source for site&lt;/h3&gt;
&lt;p&gt;In order to use the data source available for the site, you should add the block below to your Terraform file and specify the site name. Using this data source, Terraform will fetch the site ID associated with it.&lt;/p&gt;
&lt;p&gt;In the below block, &quot;BLR&quot; is the site &lt;strong&gt;name&lt;/strong&gt; provided:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;data &quot;hpegl_caas_site&quot; &quot;blr&quot; {
  name = &quot;BLR&quot;
  space_id = &quot;&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The site ID can now be fetched by using:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;site_id = data.hpegl_caas_site.blr.id
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt; Terraform data source for cluster&lt;/h3&gt;
&lt;p&gt;In order to use the data source available for the cluster, you should add the block below and provide the cluster name and space ID. Using this data source, Terraform will fetch the cluster server and user token associated with it.&lt;/p&gt;
&lt;p&gt;In the below block, &quot;tf-test&quot; is the &lt;strong&gt;name&lt;/strong&gt; of the pre-created Kubernetes cluster.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;data &quot;hpegl_caas_cluster&quot; &quot;test&quot; {
  name     = &quot;tf-test&quot;
  space_id = &quot;&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;strong&gt;host&lt;/strong&gt; (cluster server) and &lt;strong&gt;token&lt;/strong&gt; (user token) can now be fetched from the cluster kubeconfig using:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;host     = yamldecode(base64decode(data.hpegl_caas_cluster.test.kubeconfig)).clusters[0].cluster.server
token    = yamldecode(base64decode(data.hpegl_caas_cluster.test.kubeconfig)).users[0].user.token
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Terraform resource for cluster&lt;/h3&gt;
&lt;p&gt;In order to create a Kubernetes cluster using the cluster resource, the following values should be specified in the &lt;strong&gt;cluster-create.tf&lt;/strong&gt; file shown below:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Site Name: Fill in the appropriate site &lt;strong&gt;name&lt;/strong&gt; in the &lt;strong&gt;hpegl_caas_site&lt;/strong&gt; block. In the below example, name= &quot;BLR&quot; &lt;/li&gt;
&lt;li&gt;Cluster Blueprint Name: Fill in the appropriate cluster blueprint &lt;strong&gt;name&lt;/strong&gt; in the &lt;strong&gt;hpegl_caas_cluster_blueprint&lt;/strong&gt; block. In the below example, name= &quot;demo&quot; &lt;/li&gt;
&lt;li&gt;Cluster Name: Fill in the cluster &lt;strong&gt;name&lt;/strong&gt; of your choice in the &lt;strong&gt;hpegl_caas_cluster&lt;/strong&gt; block. In the below example, name= &quot;tf-test&quot;&lt;/li&gt;
&lt;/ol&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: Here, the space_id is automatically set to the value specified while exporting TF_VAR_HPEGL_SPACE.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;strong&gt;cluster-create.tf&lt;/strong&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: You can name this file according to your preference. We are using cluster-create.tf here for easy reference.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;terraform {
  required_providers {
    hpegl = {
      source  = &quot;hpe/hpegl&quot;
      version = &quot;&gt;= 0.2.2&quot;
    }
  }
}
 
provider &quot;hpegl&quot; {
  caas {
  }
}
 
variable &quot;HPEGL_SPACE&quot; {
  type = string
}
 
data &quot;hpegl_caas_site&quot; &quot;blr&quot; {
  name = &quot;BLR&quot;
  space_id = var.HPEGL_SPACE
 }
 
data &quot;hpegl_caas_cluster_blueprint&quot; &quot;bp&quot; {
  name = &quot;demo&quot;
  site_id = data.hpegl_caas_site.blr.id
}
 
resource &quot;hpegl_caas_cluster&quot; &quot;test&quot; {
  name         = &quot;tf-test&quot;
  blueprint_id = data.hpegl_caas_cluster_blueprint.bp.id
  site_id      = data.hpegl_caas_site.blr.id
  space_id     = var.HPEGL_SPACE
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can get information about each of the data sources and resources mentioned above on &lt;a href=&quot;https://github.com/HPE/terraform-provider-hpegl/tree/main/docs&quot;&gt;GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Initializing workspace &amp;#x26; synchronizing infrastructure components&lt;/h3&gt;
&lt;p&gt;Place the cluster-create.tf file in your working directory and initialize the working directory using the command: &lt;strong&gt;terraform init&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ terraform init
 
Initializing the backend...
 
Initializing provider plugins...
- Finding hpe/hpegl versions matching &quot;&gt;= 0.2.0&quot;...
- Installing hpe/hpegl v0.2.2...
- Installed hpe/hpegl v0.2.2 (signed by a HashiCorp partner, key ID D1F277A1AC66CE3D)
 
Partner and community providers are signed by their developers.
If you&apos;d like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
 
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run &quot;terraform init&quot; in the future.
 
Terraform has been successfully initialized!
 
You may now begin working with Terraform. Try running &quot;terraform plan&quot; to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
 
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt; Terraform ready to plan&lt;/h3&gt;
&lt;p&gt;Terraform plan is a dry run that lets you preview the changes that Terraform plans to make to your infrastructure based on the data you provide in your Terraform file. To see this, run: &lt;strong&gt;terraform plan&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ terraform plan
 
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create
 
Terraform will perform the following actions:
 
  # hpegl_caas_cluster.test will be created
  + resource &quot;hpegl_caas_cluster&quot; &quot;test&quot; {
      + api_endpoint                      = (known after apply)
      + appliance_name                    = (known after apply)
      + blueprint_id                      = &quot;982be053-ae7d-4623-9af7-69a299e68bc9&quot;
      + cluster_provider                  = (known after apply)
      + created_date                      = (known after apply)
      + default_storage_class             = (known after apply)
      + default_storage_class_description = (known after apply)
      + health                            = (known after apply)
      + id                                = (known after apply)
      + k8s_version                       = (known after apply)
      + kubeconfig                        = (known after apply)
      + last_update_date                  = (known after apply)
      + machine_sets                      = (known after apply)
      + machine_sets_detail               = (known after apply)
      + name                              = &quot;tf-test&quot;
      + service_endpoints                 = (known after apply)
      + site_id                           = &quot;ecb6b8a4-3303-4528-96d1-42230336a9ec&quot;
      + space_id                          = &quot;8d5dfbc0-f996-4e45-ab34-e719588a96ca&quot;
      + state                             = (known after apply)
    }
 
Plan: 1 to add, 0 to change, 0 to destroy.
 
──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
 
Note: You didn&apos;t use the -out option to save this plan, so Terraform can&apos;t guarantee to take exactly these actions if you run &quot;terraform apply&quot; now.
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Terraform ready to apply&lt;/h3&gt;
&lt;p&gt;Terraform apply executes the actions proposed in the Terraform plan and deploys the resources. Run the command: &lt;strong&gt;terraform apply&lt;/strong&gt; and type &lt;strong&gt;yes&lt;/strong&gt; when asked to &lt;strong&gt;Enter a value&lt;/strong&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ terraform apply
 
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create
 
Terraform will perform the following actions:
 
  # hpegl_caas_cluster.test will be created
  + resource &quot;hpegl_caas_cluster&quot; &quot;test&quot; {
      + api_endpoint                      = (known after apply)
      + appliance_name                    = (known after apply)
      + blueprint_id                      = &quot;982be053-ae7d-4623-9af7-69a299e68bc9&quot;
      + cluster_provider                  = (known after apply)
      + created_date                      = (known after apply)
      + default_storage_class             = (known after apply)
      + default_storage_class_description = (known after apply)
      + health                            = (known after apply)
      + id                                = (known after apply)
      + k8s_version                       = (known after apply)
      + kubeconfig                        = (known after apply)
      + last_update_date                  = (known after apply)
      + machine_sets                      = (known after apply)
      + machine_sets_detail               = (known after apply)
      + name                              = &quot;tf-test&quot;
      + service_endpoints                 = (known after apply)
      + site_id                           = &quot;ecb6b8a4-3303-4528-96d1-42230336a9ec&quot;
      + space_id                          = &quot;8d5dfbc0-f996-4e45-ab34-e719588a96ca&quot;
      + state                             = (known after apply)
    }
 
Plan: 1 to add, 0 to change, 0 to destroy.
 
Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only &apos;yes&apos; will be accepted to approve.
 
  Enter a value: yes
 
hpegl_caas_cluster.test: Creating...
hpegl_caas_cluster.test: Still creating... [10s elapsed]
hpegl_caas_cluster.test: Still creating... [1m0s elapsed]
hpegl_caas_cluster.test: Still creating... [5m0s elapsed]
hpegl_caas_cluster.test: Still creating... [10m0s elapsed]
hpegl_caas_cluster.test: Still creating... [20m0s elapsed]
hpegl_caas_cluster.test: Still creating... [25m0s elapsed]
hpegl_caas_cluster.test: Still creating... [25m10s elapsed]
hpegl_caas_cluster.test: Still creating... [25m20s elapsed]
hpegl_caas_cluster.test: Still creating... [25m30s elapsed]
hpegl_caas_cluster.test: Still creating... [25m40s elapsed]
hpegl_caas_cluster.test: Still creating... [25m50s elapsed]
hpegl_caas_cluster.test: Still creating... [26m0s elapsed]
hpegl_caas_cluster.test: Still creating... [26m11s elapsed]
hpegl_caas_cluster.test: Still creating... [26m21s elapsed]
hpegl_caas_cluster.test: Still creating... [26m31s elapsed]
hpegl_caas_cluster.test: Still creating... [26m41s elapsed]
hpegl_caas_cluster.test: Still creating... [26m51s elapsed]
hpegl_caas_cluster.test: Still creating... [27m1s elapsed]
hpegl_caas_cluster.test: Still creating... [27m11s elapsed]
hpegl_caas_cluster.test: Still creating... [27m21s elapsed]
hpegl_caas_cluster.test: Still creating... [27m31s elapsed]
hpegl_caas_cluster.test: Still creating... [27m41s elapsed]
hpegl_caas_cluster.test: Still creating... [27m51s elapsed]
hpegl_caas_cluster.test: Still creating... [28m1s elapsed]
hpegl_caas_cluster.test: Still creating... [28m11s elapsed]
hpegl_caas_cluster.test: Still creating... [28m21s elapsed]
hpegl_caas_cluster.test: Still creating... [28m31s elapsed]
hpegl_caas_cluster.test: Still creating... [28m41s elapsed]
hpegl_caas_cluster.test: Still creating... [28m51s elapsed]
hpegl_caas_cluster.test: Still creating... [29m1s elapsed]
hpegl_caas_cluster.test: Creation complete after 29m1s [id=8a3396db-ae26-44fd-a128-264c357f71fb]
 
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;From the HPE GreenLake platform, launch the &lt;strong&gt;HPE GreenLake Central console&lt;/strong&gt; for the appropriate tenant, and from the &lt;strong&gt;Dashboard&lt;/strong&gt;, select &lt;strong&gt;Clusters&lt;/strong&gt; to view the list of clusters. You will see &lt;strong&gt;tf-test&lt;/strong&gt; cluster has been created successfully.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/create.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
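&lt;p&gt;You can also verify the new cluster from your workstation without opening the console by inspecting the Terraform state. This is a minimal sketch that assumes the resource address hpegl_caas_cluster.test from the cluster-create.tf file above.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Show the attributes Terraform recorded for the new cluster,
# including api_endpoint, k8s_version and state
terraform state show hpegl_caas_cluster.test
&lt;/code&gt;&lt;/pre&gt;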
&lt;h2&gt;Delete a cluster resource&lt;/h2&gt;
&lt;p&gt;In Terraform, clean-up can be done using the destroy command. This will automatically use the HPE GreenLake provider to clean the infrastructure in HPE GreenLake. Run the following command: &lt;strong&gt;terraform destroy&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ terraform destroy
hpegl_caas_cluster.test: Refreshing state... [id=8a3396db-ae26-44fd-a128-264c357f71fb]
 
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  - destroy
 
Terraform will perform the following actions:
 
  # hpegl_caas_cluster.test will be destroyed
  - resource &quot;hpegl_caas_cluster&quot; &quot;test&quot; {
      - api_endpoint          = &quot;https://gl-caas.gl-hpe.local:10003&quot; -&gt; null
      - appliance_name        = &quot;Austin&quot; -&gt; null
      - blueprint_id          = &quot;982be053-ae7d-4623-9af7-69a299e68bc9&quot; -&gt; null
      - cluster_provider      = &quot;ecp&quot; -&gt; null
      - created_date          = &quot;2022-06-12T11:11:55Z&quot; -&gt; null
      - default_storage_class = &quot;gl-sbc-glhc-nimblestor&quot; -&gt; null
      - health                = &quot;ok&quot; -&gt; null
      - id                    = &quot;8a3396db-ae26-44fd-a128-264c357f71fb&quot; -&gt; null
      - k8s_version           = &quot;v1.20.11.hpe-2&quot; -&gt; null
      - kubeconfig            = &quot;YXBpVmVyc2lvbjogdjEKY2x1c3RlcnM6Ci0gY2x1c3RlcjoKICAgIGluc2VjdXJlLXNraXAtdGxzLXZlcm
lmeTogdHJ1ZQogICAgc2VydmVyOiBodHRwczovL2dsLWNhYXMuZ2wtaHBlLmxvY2FsOjEwMDAzCiAgbmFtZTogZGVmYXVsdApjb250ZXh0czoKLSBjb2
50ZXh0OgogICAgY2x1c3RlcjogZGVmYXVsdAogICAgbmFtZXNwYWNlOiBkZWZhdWx0CiAgICB1c2VyOiBkZWZhdWx0CiAgbmFtZTogZGVmYXVsdApjdX
JyZW50LWNvbnRleHQ6IGRlZmF1bHQKa2luZDogQ29uZmlnCnByZWZlcmVuY2VzOiB7fQp1c2VyczoKLSBuYW1lOiBkZWZhdWx0CiAgdXNlcjoKICAgIH
Rva2VuOiBleUpoYkdjaU9pSlNVekkxTmlJc0ltdHBaQ0k2SWpKM2NVNTRhbFpvT0RJM04xVTRRalZ0TjJ3elNUQnpVVmhxUkhCVWVrOURVRk5PT1VWRVc
ExOW5ha0VpZlEuZXlKcGMzTWlPaUpyZFdKbGNtNWxkR1Z6TDNObGNuWnBZMlZoWTJOdmRXNTBJaXdpYTNWaVpYSnVaWFJsY3k1cGJ5OXpaWEoyYVdObFl
XTmpiM1Z1ZEM5dVlXMWxjM0JoWTJVaU9pSnJkV0psTFhONWMzUmxiU0lzSW10MVltVnlibVYwWlhNdWFXOHZjMlZ5ZG1salpXRmpZMjkxYm5RdmMyVmpj
bVYwTG01aGJXVWlPaUowWlhKeVazlkUkU2cXA1RG5od3pDZnRvX2FQMXBPZwo=&quot; -&gt; null
      - last_update_date      = &quot;2022-06-12T11:41:53Z&quot; -&gt; null
      - machine_sets          = [
          - {
              - count                = 1
              - machine_blueprint_id = &quot;25d70b74-1048-4aae-8994-aa72f08cfacb&quot;
              - name                 = &quot;master&quot;
              - os_image             = &quot;&quot;
              - os_version           = &quot;&quot;
            },
          - {
              - count                = 1
              - machine_blueprint_id = &quot;ebcb6ad2-b201-4c92-9dad-1bbb0b90cde3&quot;
              - name                 = &quot;worker&quot;
              - os_image             = &quot;&quot;
              - os_version           = &quot;&quot;
            },
        ] -&gt; null
      - machine_sets_detail   = [
          - {
              - compute_type         = &quot;General Purpose&quot;
              - count                = 1
              - machine_blueprint_id = &quot;25d70b74-1048-4aae-8994-aa72f08cfacb&quot;
              - machine_provider     = &quot;vmaas&quot;
              - machine_roles        = [
                  - &quot;controlplane&quot;,
                  - &quot;etcd&quot;,
                ]
              - machines             = [
                  - {
                      - created_date     = &quot;2022-06-12T11:11:55Z&quot;
                      - health           = &quot;ok&quot;
                      - hostname         = &quot;172.20.20.102&quot;
                      - id               = &quot;4247&quot;
                      - last_update_date = &quot;2022-06-12T11:41:35Z&quot;
                      - name             = &quot;k8s-tf-test-master-mshls-hwlgf&quot;
                      - state            = &quot;ready&quot;
                    },
                ]
              - name                 = &quot;master&quot;
              - networks             = [
                  - &quot;VM_Production&quot;,
                ]
              - os_image             = &quot;sles-custom&quot;
              - os_version           = &quot;15&quot;
              - proxy                = &quot;&quot;
              - size                 = &quot;Large&quot;
              - size_detail          = [
                  - {
                      - cpu             = 4
                      - ephemeral_disk  = 500
                      - memory          = 16384
                      - name            = &quot;Large&quot;
                      - persistent_disk = 0
                      - root_disk       = 120
                    },
                ]
              - storage_type         = &quot;General Purpose&quot;
            },
          - {
              - compute_type         = &quot;General Purpose&quot;
              - count                = 1
              - machine_blueprint_id = &quot;ebcb6ad2-b201-4c92-9dad-1bbb0b90cde3&quot;
              - machine_provider     = &quot;vmaas&quot;
              - machine_roles        = [
                  - &quot;worker&quot;,
                ]
              - machines             = [
                  - {
                      - created_date     = &quot;2022-06-12T11:11:55Z&quot;
                      - health           = &quot;ok&quot;
                      - hostname         = &quot;172.20.20.121&quot;
                      - id               = &quot;4248&quot;
                      - last_update_date = &quot;2022-06-12T11:41:35Z&quot;
                      - name             = &quot;k8s-tf-test-worker-58mvq-qvdch&quot;
                      - state            = &quot;ready&quot;
                    },
                ]
              - name                 = &quot;worker&quot;
              - networks             = [
                  - &quot;VM_Production&quot;,
                  - &quot;iSCSI_A&quot;,
                  - &quot;iSCSI_B&quot;,
                ]
              - os_image             = &quot;sles-custom&quot;
              - os_version           = &quot;15&quot;
              - proxy                = &quot;&quot;
              - size                 = &quot;xLarge&quot;
              - size_detail          = [
                  - {
                      - cpu             = 8
                      - ephemeral_disk  = 500
                      - memory          = 32768
                      - name            = &quot;xLarge&quot;
                      - persistent_disk = 0
                      - root_disk       = 120
                    },
                ]
              - storage_type         = &quot;General Purpose&quot;
            },
        ] -&gt; null
      - name                  = &quot;tf-test&quot; -&gt; null
      - service_endpoints     = [
          - {
              - endpoint  = &quot;https://gl-caas.gl-hpe.local:10004&quot;
              - name      = &quot;Kubernetes Dashboard&quot;
              - namespace = &quot;kubernetes-dashboard&quot;
              - type      = &quot;system&quot;
            },
        ] -&gt; null
      - site_id               = &quot;ecb6b8a4-3303-4528-96d1-42230336a9ec&quot; -&gt; null
      - space_id              = &quot;8d5dfbc0-f996-4e45-ab34-e719588a96ca&quot; -&gt; null
      - state                 = &quot;ready&quot; -&gt; null
    }
 
Plan: 0 to add, 0 to change, 1 to destroy.
 
Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only &apos;yes&apos; will be accepted to confirm.
 
  Enter a value: yes
 
hpegl_caas_cluster.test: Destroying... [id=8a3396db-ae26-44fd-a128-264c357f71fb]
hpegl_caas_cluster.test: Still destroying... [id=8a3396db-ae26-44fd-a128-264c357f71fb, 10s elapsed]
hpegl_caas_cluster.test: Still destroying... [id=8a3396db-ae26-44fd-a128-264c357f71fb, 1m0s elapsed]
hpegl_caas_cluster.test: Still destroying... [id=8a3396db-ae26-44fd-a128-264c357f71fb, 2m0s elapsed]
hpegl_caas_cluster.test: Still destroying... [id=8a3396db-ae26-44fd-a128-264c357f71fb, 3m0s elapsed]
hpegl_caas_cluster.test: Still destroying... [id=8a3396db-ae26-44fd-a128-264c357f71fb, 4m0s elapsed]
hpegl_caas_cluster.test: Still destroying... [id=8a3396db-ae26-44fd-a128-264c357f71fb, 5m0s elapsed]
hpegl_caas_cluster.test: Still destroying... [id=8a3396db-ae26-44fd-a128-264c357f71fb, 6m0s elapsed]
hpegl_caas_cluster.test: Still destroying... [id=8a3396db-ae26-44fd-a128-264c357f71fb, 7m40s elapsed]
hpegl_caas_cluster.test: Destruction complete after 7m48s
 
Destroy complete! Resources: 1 destroyed.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;From the HPE GreenLake platform, launch the &lt;strong&gt;HPE GreenLake Central console&lt;/strong&gt; for the appropriate tenant, and from the &lt;strong&gt;Dashboard&lt;/strong&gt;, select &lt;strong&gt;Clusters&lt;/strong&gt; to view the list of clusters. You will see &lt;strong&gt;tf-test&lt;/strong&gt; cluster has been deleted.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/delete.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
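&lt;p&gt;You can also confirm locally that Terraform no longer tracks any resources; an empty result means the state has been cleaned up.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Lists all resources in the state; after the destroy this should print nothing
terraform state list
&lt;/code&gt;&lt;/pre&gt;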
&lt;h2&gt;Additional 3rd party provider of your choice from community&lt;/h2&gt;
&lt;p&gt;You can also use a third-party provider of your choice from the community. In this section, we will discuss how to create a namespace in an HPE GreenLake CaaS cluster using the Kubernetes community provider.&lt;/p&gt;
&lt;h3&gt;Kubernetes provider&lt;/h3&gt;
&lt;p&gt;Below is the code block for adding the &lt;strong&gt;Kubernetes&lt;/strong&gt; provider:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;provider &quot;kubernetes&quot; {
  host     = yamldecode(base64decode(data.hpegl_caas_cluster.test.kubeconfig)).clusters[0].cluster.server
  token    = yamldecode(base64decode(data.hpegl_caas_cluster.test.kubeconfig)).users[0].user.token
  insecure = true
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Terraform resource for namespace&lt;/h3&gt;
&lt;p&gt;You can create a Kubernetes namespace using the &lt;strong&gt;kubernetes_namespace&lt;/strong&gt; resource and providing a namespace &lt;strong&gt;name&lt;/strong&gt; of your choice. In the below example, name = &quot;test-namespace&quot;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;resource &quot;kubernetes_namespace&quot; &quot;test-namespace&quot; {
  metadata {
    name = &quot;test-namespace&quot;
  }
  lifecycle {
    ignore_changes = [
      metadata[0].labels,
    ]
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Namespace creation using Terraform&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;namespace-create.tf&lt;/strong&gt; : Below is a complete example of using the Kubernetes provider and creating a namespace on a pre-created Kubernetes cluster.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: You can name this file according to your preference. We are using namespace-create.tf here for easy reference.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;# Copyright 2020 Hewlett Packard Enterprise Development LP
 
# Set-up for terraform &gt;= v0.13
terraform {
  required_providers {
    hpegl = {
      source  = &quot;hpe/hpegl&quot;
      version = &quot;&gt;= 0.2.2&quot;
    }
  }
}
 
provider &quot;hpegl&quot; {
  caas {
  }
}
 
variable &quot;HPEGL_SPACE&quot; {
  type = string
}
 
data &quot;hpegl_caas_cluster&quot; &quot;test&quot; {
  name     = &quot;tf-test-7&quot;
  space_id = var.HPEGL_SPACE
}  
 
provider &quot;kubernetes&quot; {
  host     = yamldecode(base64decode(data.hpegl_caas_cluster.test.kubeconfig)).clusters[0].cluster.server
  token    = yamldecode(base64decode(data.hpegl_caas_cluster.test.kubeconfig)).users[0].user.token
  insecure = true
}
 
resource &quot;kubernetes_namespace&quot; &quot;test-namespace&quot; {
  metadata {
    name = &quot;test-namespace&quot;
  }
  lifecycle {
    ignore_changes = [
      metadata[0].labels,
    ]
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Initializing workspace &amp;#x26; synchronizing infrastructure components&lt;/h3&gt;
&lt;p&gt;Place the namespace-create.tf file in your working directory and initialize the working directory using the command: &lt;strong&gt;terraform init&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ terraform init
 
Initializing the backend...
 
Initializing provider plugins...
- Finding hpe/hpegl versions matching &quot;&gt;= 0.2.0&quot;...
- Installing hpe/hpegl v0.2.2...
- Installed hpe/hpegl v0.2.2 (signed by a HashiCorp partner, key ID D1F277A1AC66CE3D)
 
Partner and community providers are signed by their developers.
If you&apos;d like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
 
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run &quot;terraform init&quot; in the future.
 
Terraform has been successfully initialized!
 
You may now begin working with Terraform. Try running &quot;terraform plan&quot; to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
 
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Terraform ready to plan&lt;/h3&gt;
&lt;p&gt;Terraform plan is a dry run that lets you preview the changes that Terraform plans to make to your infrastructure based on the data you provide in your Terraform file. To see this, run: &lt;strong&gt;terraform plan&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ terraform plan
 
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create
 
Terraform will perform the following actions:
 
  # kubernetes_namespace.test-namespace will be created
  + resource &quot;kubernetes_namespace&quot; &quot;test-namespace&quot; {
      + id = (known after apply)
 
      + metadata {
          + generation       = (known after apply)
          + name             = &quot;test-namespace&quot;
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }
 
Plan: 1 to add, 0 to change, 0 to destroy.
 
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
 
Note: You didn&apos;t use the -out option to save this plan, so Terraform can&apos;t guarantee to take exactly these actions if you run &quot;terraform apply&quot; now.
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Terraform ready to apply&lt;/h3&gt;
&lt;p&gt;Terraform apply executes the actions proposed in the Terraform plan and deploys the resources. Run &lt;strong&gt;terraform apply&lt;/strong&gt; and then type &lt;strong&gt;yes&lt;/strong&gt; when asked to &lt;strong&gt;Enter a value&lt;/strong&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ terraform apply
 
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create
 
Terraform will perform the following actions:
 
  # kubernetes_namespace.test-namespace will be created
  + resource &quot;kubernetes_namespace&quot; &quot;test-namespace&quot; {
      + id = (known after apply)
 
      + metadata {
          + generation       = (known after apply)
          + name             = &quot;test-namespace&quot;
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }
 
Plan: 1 to add, 0 to change, 0 to destroy.
 
Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only &apos;yes&apos; will be accepted to approve.
 
  Enter a value: yes
 
kubernetes_namespace.test-namespace: Creating...
kubernetes_namespace.test-namespace: Creation complete after 0s [id=test-namespace]
 
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can verify the created namespace &lt;strong&gt;test-namespace&lt;/strong&gt; by running the command: &lt;strong&gt;kubectl get namespace&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/11.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
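&lt;p&gt;Your output should look something like the listing below. The exact set of system namespaces and their ages will vary with the cluster, but &lt;strong&gt;test-namespace&lt;/strong&gt; should be present.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl get namespace
NAME              STATUS   AGE
default           Active   12d
kube-node-lease   Active   12d
kube-public       Active   12d
kube-system       Active   12d
test-namespace    Active   30s
&lt;/code&gt;&lt;/pre&gt;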
&lt;h2&gt;Next up&lt;/h2&gt;
&lt;p&gt;In &lt;a href=&quot;https://developer.hpe.com/blog/kubernetes-cluster-as-code-part-2/&quot;&gt;my next blog post&lt;/a&gt;, we will continue our discussion on deploying applications on a pre-created Kubernetes cluster.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Lift and Transform Apps with HPE CSI Driver for Kubernetes]]></title><description><![CDATA[Do you run your applications in virtual machines on VMware today using the same patterns once employed by bare-metal paradigms? It's just a…]]></description><link>https://developer.hpe.com/lift-and-transform-apps-with-hpe-csi-driver-for-kubernetes/</link><guid isPermaLink="false">https://developer.hpe.com/lift-and-transform-apps-with-hpe-csi-driver-for-kubernetes/</guid><pubDate>Fri, 17 Jun 2022 10:26:26 GMT</pubDate><content:encoded>&lt;p&gt;Do you run your applications in virtual machines on VMware today using the same patterns once employed by bare-metal paradigms? It&apos;s just a bit more efficient from a resource consumption perspective, but what about the lifecycle of the application, its dependencies — including libraries, host operating systems and various mnemonics — used to keep the application up and running?&lt;/p&gt;
&lt;p&gt;Containerization, and Kubernetes in particular, helps businesses move away from imperative operational models that require multiple teams to perform tedious tasks on a regular basis. It allows them to adopt an agile declarative model where there&apos;s a clean separation of concerns, along with a high degree of automation and abstractions that make sense for running a high-performance technology company.&lt;/p&gt;
&lt;p&gt;In this blog post, we’ll discuss a methodology that can be employed for lifting and transforming legacy stateful applications running on VMware vSphere to Kubernetes using HPE Alletra, Nimble Storage or Primera and the HPE CSI Driver for Kubernetes. The destination Kubernetes cluster could potentially be running on VMware vSphere or any other hypervisor, but it’s not a requirement. As long as the Kubernetes cluster and the origin VMware vSphere environment have connectivity to the underlying array, we’re in good shape.&lt;/p&gt;
&lt;p&gt;The TL;DR version of this blog post is available as an instructional lightboard video &lt;a href=&quot;https://www.youtube.com/watch?v=M7t_qPe3i5E&quot;&gt;published on YouTube&lt;/a&gt;. The blog post concretizes what’s being outlined in the lightboard video.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=M7t_qPe3i5E&quot;&gt;&lt;img src=&quot;/img/untitled.jpg&quot; alt=&quot;YouTube&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Still want more details? Let’s get started!&lt;/p&gt;
&lt;h1&gt;Origin state&lt;/h1&gt;
&lt;p&gt;In the following example, we’re going to use a MariaDB database with the commonly used &lt;a href=&quot;https://github.com/datacharmer/test_db&quot;&gt;Employees database&lt;/a&gt; as an example workload. It provides a simple means to validate its contents and ensure that the contents are intact throughout its journey.&lt;/p&gt;
&lt;p&gt;The database is running in a standalone Virtual Machine (VM) hosted on a legacy NFS datastore.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/slide1.png&quot; alt=&quot;MariaDB running on VMware vSphere&quot; title=&quot;MariaDB running on VMware vSphere&quot;&gt;&lt;/p&gt;
&lt;p&gt;The content is simply validated by loading a SQL statement file provided in the GitHub repo.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ mysql -u mmattsson -ppassword &amp;#x3C; test_employees_sha.sql
INFO
TESTING INSTALLATION
table_name	expected_records	expected_crc
departments	9	4b315afa0e35ca6649df897b958345bcb3d2b764
dept_emp	331603	d95ab9fe07df0865f592574b3b33b9c741d9fd1b
dept_manager	24	9687a7d6f93ca8847388a42a6d8d93982a841c6c
employees	300024	4d4aa689914d8fd41db7e45c2168e7dcb9697359
salaries	2844047	b5a1785c27d75e33a4173aaa22ccf41ebd7d4a9f
titles	443308	d12d5f746b88f07e69b9e36675b6067abb01b60e
table_name	found_records   	found_crc
departments	9	4b315afa0e35ca6649df897b958345bcb3d2b764
dept_emp	331603	d95ab9fe07df0865f592574b3b33b9c741d9fd1b
dept_manager	24	9687a7d6f93ca8847388a42a6d8d93982a841c6c
employees	300024	4d4aa689914d8fd41db7e45c2168e7dcb9697359
salaries	2844047	b5a1785c27d75e33a4173aaa22ccf41ebd7d4a9f
titles	443308	d12d5f746b88f07e69b9e36675b6067abb01b60e
table_name	records_match	crc_match
departments	OK	ok
dept_emp	OK	ok
dept_manager	OK	ok
employees	OK	ok
salaries	OK	ok
titles	OK	ok
computation_time
00:00:49
summary	result
CRC	OK
count	OK
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In the remaining tests throughout this post, only the summary results will be displayed.&lt;/p&gt;
&lt;h1&gt;Destination state&lt;/h1&gt;
&lt;p&gt;When the transition is complete, the Employees database should be running on a Kubernetes cluster using persistent storage provided by the HPE CSI Driver for Kubernetes backed by an HPE Nimble Storage array. The procedures are similar regardless of the backend (HPE Alletra and Primera included).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/slide2.png&quot; alt=&quot;MariaDB running on Kubernetes&quot; title=&quot;MariaDB running on Kubernetes&quot;&gt;&lt;/p&gt;
&lt;p&gt;While the destination state in our example is a virtualized Kubernetes cluster within the same vSphere environment, that is not a requirement. It can be any hypervisor, bare-metal or otherwise as long as the destination cluster is compatible with the HPE CSI Driver for Kubernetes and any of the supported backends.&lt;/p&gt;
&lt;h1&gt;Initial transition&lt;/h1&gt;
&lt;p&gt;The first step in becoming agile with the dataset of interest (the MariaDB database in this case) is to transition the virtual machine&apos;s storage to a VMware vSphere Virtual Volume (vVol) from the legacy NFS datastore. This is done with VMware vSphere Storage vMotion that allows the live migration of a running virtual machine&apos;s file system from one storage system to another, with no downtime for the VM or service disruption.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/vmotion-screen-shot-2022-06-14-at-12.08.27-pm.png&quot; alt=&quot;VMware vSphere Storage vMotion example.&quot; title=&quot;VMware vSphere Storage vMotion example.&quot;&gt;&lt;/p&gt;
&lt;p&gt;Another important detail is that the data that matters to the application must reside in a filesystem compatible with the destination Kubernetes cluster, without partitioning schemes or volume managers. We know for a fact that MariaDB stores all its data in &lt;code&gt;/var/lib/mysql&lt;/code&gt;, so let’s investigate that path.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ df -h /var/lib/mysql
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda         64G  1.2G   63G   2% /var
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In the output we can clearly see that the filesystem of interest resides on a disk device without a volume manager or a partitioning scheme. If this is not the case, the data needs to migrate within the VM to a new blank disk.&lt;/p&gt;
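&lt;p&gt;As a rough sketch of that in-VM migration, the steps below show one way it could be done. This is not part of the original procedure: the device name &lt;code&gt;/dev/sdb&lt;/code&gt;, the mount point, the service unit name and the rsync options are assumptions you will need to adapt to your own VM.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ sudo systemctl stop mariadb      # quiesce the database before copying (unit name is an assumption)
$ sudo mkfs.xfs /dev/sdb           # whole-disk filesystem, no partition table
$ sudo mount /dev/sdb /mnt
$ sudo mkdir -p /mnt/lib
$ sudo rsync -a /var/lib/mysql/ /mnt/lib/mysql/
$ sudo umount /mnt                 # then update /etc/fstab, remount and restart the database
&lt;/code&gt;&lt;/pre&gt;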
&lt;p&gt;Once the disk(s) have been migrated over to the HPE Nimble Storage array, each individual virtual disk is represented as a standalone volume along with the VM&apos;s metadata volumes.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/vvol-screen-shot-2022-06-14-at-1.12.59-pm.png&quot; alt=&quot;vVol Datastore Volumes&quot; title=&quot;vVol Datastore Volumes&quot;&gt;&lt;/p&gt;
&lt;p&gt;The 64GB volume represents the &lt;code&gt;/var&lt;/code&gt; filesystem that is the subject of the transformation into a Kubernetes persistent volume.&lt;/p&gt;
&lt;h1&gt;Iterate and learn&lt;/h1&gt;
&lt;p&gt;In this next phase of the workflow, all configuration and testing are performed directly from the Kubernetes cluster. An admin needs to install the HPE CSI Driver for Kubernetes, create a &lt;code&gt;Secret&lt;/code&gt; that represents the backend user or tenant, and set up a &lt;code&gt;StorageClass&lt;/code&gt; designed to allow the importing of volumes.&lt;/p&gt;
&lt;p&gt;There are several ways to &lt;a href=&quot;https://scod.hpedev.io/csi_driver/deployment.html&quot;&gt;deploy the HPE CSI Driver&lt;/a&gt; but the most common method is to use Helm. Here&apos;s the gist of it.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ helm repo add hpe-storage https://hpe-storage.github.io/co-deployments
$ kubectl create ns hpe-storage
$ helm install my-hpe-csi-driver hpe-storage/hpe-csi-driver --version 2.1.1-0 -n hpe-storage
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, create a &lt;code&gt;Secret&lt;/code&gt; for the backend.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;apiVersion: v1
kind: Secret
metadata:
  name: hpe-backend
  namespace: hpe-storage
stringData:
  serviceName: nimble-csp-svc
  servicePort: &quot;8080&quot;
  backend: &amp;#x3C;Your Nimble array&gt;
  username: &amp;#x3C;Your username with privileges to the vVol folder&gt;
  password: &amp;#x3C;Your password&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;A &lt;code&gt;StorageClass&lt;/code&gt; needs to be specifically crafted to allow users to import volumes and snapshots from the storage array.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-transform
provisioner: csi.hpe.com
parameters:
  csi.storage.k8s.io/fstype: xfs
  csi.storage.k8s.io/controller-expand-secret-name: hpe-backend
  csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
  csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-publish-secret-name: hpe-backend
  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-stage-secret-name: hpe-backend
  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
  csi.storage.k8s.io/provisioner-secret-name: hpe-backend
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
  description: Volume transformed from vVol by HPE CSI Driver
  allowOverrides: importVolAsClone,importVolumeName,forceImport
reclaimPolicy: Delete
allowVolumeExpansion: true
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The important details in the &lt;code&gt;StorageClass&lt;/code&gt; are the &lt;code&gt;allowOverrides&lt;/code&gt;. Here, we put the parameters we want the application admin to override once testing has started (&lt;code&gt;importVolAsClone&lt;/code&gt;) and once the database is ready for the final import (&lt;code&gt;importVolumeName&lt;/code&gt; and &lt;code&gt;forceImport&lt;/code&gt;).&lt;/p&gt;
&lt;p&gt;For the final production instance, we’re going to use the &lt;a href=&quot;https://artifacthub.io/packages/helm/bitnami/mariadb&quot;&gt;Bitnami MariaDB Helm chart&lt;/a&gt;. For this chart to install cleanly and be able to bring up the database, the following considerations need to be taken into account:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;There needs to be a known &lt;code&gt;root@localhost&lt;/code&gt; account with a known password in the source database.&lt;/li&gt;
&lt;li&gt;The Bitnami chart does not support altering the &lt;code&gt;mountPath&lt;/code&gt;. The source filesystem needs a symlink from &lt;code&gt;/var/lib/data&lt;/code&gt; to &lt;code&gt;/var/lib/mysql&lt;/code&gt; (see the example command after this list).&lt;/li&gt;
&lt;/ul&gt;
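&lt;p&gt;One way to create that symlink on the source VM could look like the sketch below. Using a relative link target is an assumption on my part: it keeps the link resolvable both in the VM, where the filesystem is mounted at &lt;code&gt;/var&lt;/code&gt;, and later in the container, where the &lt;code&gt;lib&lt;/code&gt; directory is mounted at &lt;code&gt;/bitnami/mariadb&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ sudo ln -s mysql /var/lib/data
$ ls -l /var/lib/data
lrwxrwxrwx 1 root root 5 Jun 14 12:00 /var/lib/data -&gt; mysql
&lt;/code&gt;&lt;/pre&gt;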
&lt;p&gt;For the initial cloning of the source database, the following &lt;code&gt;values.yaml&lt;/code&gt; file is being used.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;image:
  tag: 10.3
auth:
  rootPassword: my-password
global:
  imagePullSecrets: # Only needed if you&apos;re rate limited to Docker Hub
  - regcred
primary:
  persistence:
    enabled: true
    storageClass: hpe-transform
    annotations:
      csi.hpe.com/importVolAsClone: virt-mm-db-1.vmdk
    size: 64Gi
    subPath: lib
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The above parameters are quite self-explanatory. The &lt;code&gt;subPath&lt;/code&gt; directive instructs the kubelet to mount the &lt;code&gt;lib&lt;/code&gt; directory found at the root of the volume. As a result, &lt;code&gt;/lib&lt;/code&gt; in the volume will be mounted at &lt;code&gt;/bitnami/mariadb&lt;/code&gt;, and the &lt;code&gt;data&lt;/code&gt; subdirectory, which holds the MariaDB databases, will be picked up correctly by the container binaries.&lt;/p&gt;
&lt;p&gt;Deploying the chart is straight forward.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install my-test-clone bitnami/mariadb -f values.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once the chart is deployed and the database comes up, the obvious &quot;Hello World&quot; test is to verify the database contents again.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl exec -it my-test-clone-mariadb-0 -- bash
I have no name!@my-test-clone-mariadb-0:/$ mysql -ummattsson -ppassword &amp;#x3C; /bitnami/mariadb/scripts/test_employees_sha.sql
INFO
TESTING INSTALLATION
computation_time
00:00:49
summary	result
CRC	OK
count	OK
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It&apos;s likely that things may not come up properly on the first couple of iterations. You will need to read logs and error messages, and course correct until the database comes up cleanly. Once it does, in a production scenario, test clients can be attached to further validate and solidify that moving the application to Kubernetes will actually work.&lt;/p&gt;
&lt;h1&gt;The endgame&lt;/h1&gt;
&lt;p&gt;Up until now, there’s been zero downtime or disruption to the source application. The final step of this project is a one-way street. Ensure clients can connect to the destination database instance before proceeding.&lt;/p&gt;
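&lt;p&gt;A quick way to check that connectivity while the test clone is still around is to run a throwaway MariaDB client pod against the clone&apos;s service. The service name below follows the Bitnami chart&apos;s default naming for the &lt;code&gt;my-test-clone&lt;/code&gt; release and is an assumption; substitute your own release name, credentials and query.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl run mariadb-client --rm -it --image=bitnami/mariadb:10.3 --restart=Never -- \
    mysql -h my-test-clone-mariadb -u mmattsson -ppassword -e &quot;SELECT COUNT(*) FROM employees.employees&quot;
&lt;/code&gt;&lt;/pre&gt;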
&lt;p&gt;While it’s harmless to leave the test clone chart and &lt;code&gt;PersistentVolumeClaim&lt;/code&gt; on the cluster, we’ll remove it to avoid confusion.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ helm uninstall my-test-clone
$ kubectl delete pvc/data-my-test-clone-mariadb-0
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, we need to prepare the &lt;code&gt;values-prod.yaml&lt;/code&gt; file.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;image:
  tag: 10.3
auth:
  rootPassword: my-password
global:
  imagePullSecrets: # Only needed if you&apos;re rate limited to Docker Hub
  - regcred
primary:
  persistence:
    enabled: true
    storageClass: hpe-transform
    annotations:
      csi.hpe.com/importVolumeName: virt-mm-db-1.vmdk
      csi.hpe.com/forceImport: &quot;true&quot;
    size: 64Gi
    subPath: lib
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The only change we need to make is in the &lt;code&gt;annotations&lt;/code&gt; stanza, changing the directive from &lt;code&gt;importVolAsClone&lt;/code&gt; to &lt;code&gt;importVolumeName&lt;/code&gt;. We also need to “force” the import, as the volume we’re importing already has residual vSphere application metadata associated with it. That metadata will be overwritten by the HPE CSI Driver when forcing the import.&lt;/p&gt;
&lt;p&gt;Before deploying the new chart, the source VM needs to be shut down and removed from vSphere control. With HPE Nimble Storage and HPE Alletra 6000, it’s perfectly safe to shut down the VM in an orderly fashion and use “Delete from Disk” in the vCenter context menus. If you’re attempting these workflows and procedures with HPE Alletra 9000, Primera or 3PAR, only use “Remove from Inventory” to prevent disruption.&lt;/p&gt;
&lt;p&gt;Also, remember to have backups or volume replicas and a contingency plan for any unplanned event.&lt;/p&gt;
&lt;p&gt;Once the VM has been deleted from vSphere, the remnants on the storage array can be seen as offline volumes. The metadata volumes have been permanently removed.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/offline-screen-shot-2022-06-14-at-5.45.27-pm.png&quot; alt=&quot;Offline HPE Storage Nimble volume&quot; title=&quot;Offline HPE Storage Nimble volume&quot;&gt;&lt;/p&gt;
&lt;p&gt;An important detail to understand is that &lt;code&gt;importVolumeName&lt;/code&gt; only works for offline volumes on HPE Alletra 6000 and Nimble Storage.&lt;/p&gt;
&lt;p&gt;Ok, let’s import the database into its final state.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ helm install my-prod bitnami/mariadb -f values-prod.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once the database is up, we can connect to the database and verify the contents yet again.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl exec -it my-prod-mariadb-0 -- bash
I have no name!@my-prod-mariadb-0:/$ mysql -u mmattsson -ppassword &amp;#x3C; /bitnami/mariadb/scripts/test_employees_sha.sql
INFO
TESTING INSTALLATION
computation_time
00:00:48
summary	result
CRC	OK
count	OK
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Back on the storage array, the volume has automatically been renamed and now corresponds to the &lt;code&gt;PersistentVolume&lt;/code&gt; name known to Kubernetes.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/pvc-screen-shot-2022-06-14-at-6.39.40-pm.png&quot; alt=&quot;PersistentVolume hosted on HPE Nimble Storage&quot; title=&quot;PersistentVolume hosted on HPE Nimble Storage&quot;&gt;&lt;/p&gt;
&lt;p&gt;It’s also safe to move the volume out of the vVol datastore folder, as it’s a non-disruptive operation and may be carried out by a storage administrator.&lt;/p&gt;
&lt;h1&gt;Summary&lt;/h1&gt;
&lt;p&gt;While this blog post only skims the surface of what is possible when transforming IT away from thirty-year-old paradigms, it gives an idea of the tools DevOps teams have at their disposal while taking a crack at one of the biggest challenges in application modernization. The ability to iterate safely without production disruption is key. Quite frankly, while writing this blog post I performed a dozen clone imports before I got every parameter right for the final import. There is no easy button; you must put in the work!&lt;/p&gt;
&lt;p&gt;I’m gearing up for &lt;a href=&quot;http://hpe.com/discover&quot;&gt;HPE Discover 2022&lt;/a&gt;. If you’d like to connect and talk shop, I’ll be in the HPE Developer Community Hack Shack on the main expo floor. &lt;a href=&quot;https://developer.hpe.com/blog/don%E2%80%99t-miss-all-things-software-in-the-hack-shack-at-hpe-discover/&quot;&gt;We have treasure hunts, hacking challenges and meetups&lt;/a&gt;. Find my session “Get Started with Persistent Storage for Kubernetes with the HPE CSI Driver” (HSM4991) in the HPE Discover &lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.sessions/?l=1049&amp;#x26;sid=24991_10161&amp;#x26;locale=en_US&quot;&gt;content catalog&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.sessions/?l=1049&amp;#x26;sid=24991_10161&amp;#x26;locale=en_US&quot;&gt;&lt;img src=&quot;/img/hpe-dlv-2022-csi-hackshack-revc.png&quot; alt=&quot;Get Started with Persistent Storage for Kubernetes with the HPE CSI Driver&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The team is also on the HPE Developer Community Slack. &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;Sign up&lt;/a&gt; and &lt;a href=&quot;https://hpedev.slack.com&quot;&gt;login&lt;/a&gt; to join the conversation on everything HPE Developer related. See you there!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[How to Set Up an Automation Pipeline to View Historical Trend Data of Clusters with HPE GreenLake for Private Cloud Enterprise]]></title><description><![CDATA[Editor’s Note – NAME CHANGE: HPE GreenLake for Containers is now part of HPE GreenLake for Private Cloud Enterprise. Introduction An HPE…]]></description><link>https://developer.hpe.com/how-to-set-up-an-automation-pipeline-to-view-historical-trend-data-of-clusters-with-hpe-greenlake-for-containers/</link><guid isPermaLink="false">https://developer.hpe.com/how-to-set-up-an-automation-pipeline-to-view-historical-trend-data-of-clusters-with-hpe-greenlake-for-containers/</guid><pubDate>Thu, 09 Jun 2022 11:05:39 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note – NAME CHANGE: HPE GreenLake for Containers is now part of HPE GreenLake for Private Cloud Enterprise.&lt;/strong&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;An HPE GreenLake Cloud Service, &lt;a href=&quot;https://www.hpe.com/us/en/greenlake/containers.html&quot;&gt;HPE GreenLake for Containers&lt;/a&gt;, is built upon &lt;a href=&quot;https://www.hpe.com/us/en/software/ezmeral-runtime.html&quot;&gt;HPE Ezmeral Runtime Enterprise&lt;/a&gt; and runs on a container-based, Kubernetes-orchestrated infrastructure. Using the Clusters module of the HPE GreenLake Central dashboard, one can perform several cluster-related operations. A cluster can be created using default or custom machine blueprints, deleted, scaled up, or scaled down using this module. Additionally, the cluster detail page provides some interesting insights like CPU usage, memory allocation, and storage availability. It is essential to monitor these types of data so that the resource requirement can be determined without any business impact. The image below illustrates the cluster detail page.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/cluster-detail.jpg&quot; alt=&quot;Cluster detail screen&quot; title=&quot;Cluster detail screen&quot;&gt;&lt;/p&gt;
&lt;p&gt;It can be useful to get historical trend data for such metrics when analyzing application-specific behavior. Using past consumption data, a customer can decide if the cluster should be scaled up on some days of the year and scaled down on others. For a system administrator working on an abstract layer of application deployment, choosing the right blueprint is crucial. A graph indicating high memory usage over CPU usage might lead them to switch to a blueprint that offers a different balance of compute resources. The purpose of this blog post is to illustrate how a person can implement the Automation Pipeline and provide observability via the use of automation tools such as Katalon (UI automation tool), CircleCI (continuous integration and continuous delivery platform), MySQL (a database provider) and Grafana (an interactive visualization web application tool that uses time-series data to create meaningful graphs).&lt;/p&gt;
&lt;h2&gt;How to Implement the Automation Pipeline?&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://katalon.com/katalon-studio/&quot;&gt;Katalon Studio&lt;/a&gt; software uses an open-source Selenium framework to interact with web browsers through HTTP commands. CI systems can trigger remote execution of Katalon Studio scripts through Docker containers or command-line interfaces. At a scheduled time and frequency, cron jobs can be configured to check various dynamic measurements of clusters such as CPU usage, memory usage, storage capacity availability, the number of worker nodes in the cluster, the number of control planes in the cluster, the time taken to scale up a cluster node, the time taken to scale down a cluster node, the time taken to create a cluster, etc.&lt;/p&gt;
&lt;p&gt;Katalon Studio provides HTML-based reports or console logs after execution is complete. It is possible to extract the required data in the form of a plain text-based file, such as .csv, using any script. CircleCI offers the capability to export this .csv file as an artifact inside the job. This artifact data can then be imported into the database. In order to visualize the collected data, the Grafana dashboard is helpful. This can be seen in the architecture below.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/architectural-diagram.jpg&quot; alt=&quot;Architectural Diagram of technology stack&quot; title=&quot;Architectural Diagram of technology stack&quot;&gt;&lt;/p&gt;
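&lt;p&gt;As an illustration of the &quot;extract the required data using any script&quot; step mentioned above, a small shell script along the following lines could run at the end of the test job and append one row to the artifact file. The log location, the grep patterns and the metric names are illustrative assumptions, not actual Katalon Studio output, and only the first few CSV columns are shown for brevity.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;#!/bin/bash
# Illustrative sketch: pull two metrics out of a console log and append a CSV row.
LOG=./tests/katalon/Reports/console.log   # assumed log location
OUT=/tmp/project/csvDataSource.csv

cpu=$(grep -oP &apos;CPU Usage: \K[0-9]+%&apos; &quot;$LOG&quot; | tail -1)
mem=$(grep -oP &apos;Memory Allocation: \K[0-9]+%&apos; &quot;$LOG&quot; | tail -1)
# remaining columns omitted for brevity
echo &quot;$(uuidgen),$(date &apos;+%F %T&apos;),standard,$cpu,$mem&quot; &gt;&gt; &quot;$OUT&quot;
&lt;/code&gt;&lt;/pre&gt;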
&lt;h2&gt;How to Create a CircleCI Pipeline?&lt;/h2&gt;
&lt;p&gt;A CircleCI pipeline includes various stages such as spinning up the environment by downloading required images and configuring test setup, preparing CircleCI environment, copyright check, cloning repository code, security checking, execution of test scripts, generating artifacts, generating results, and sending out results over email. A cron job schedules the execution of the automation pipeline.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Sample CircleCI config.yaml:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;executors:
  katalon8_test_executor:
    docker:
      - image: default/katalon8:latest
jobs:
  checkout-workspace:
    docker:
      - image: circleci/golang:latest
  copyright-check:
    docker:
      - image: copyright-tool
    steps:
      - run:
          name: Check copyright
  cluster-observability:
    executor:
      name: katalon8_test_executor
    steps:
      - run:
          name: &quot;Creating directory for artifacts&quot;
          command: mkdir /tmp/project/
      - run:
          name: &quot;Execute cluster metrics gathering test suite&quot;
          command: xvfb-run katalonc -consoleLog -browserType=&quot;Chrome&quot; -retry=0 -statusDelay=15 -testSuitePath=&quot;Test Suites/ClusterMetricsSuite&quot; -executionProfile=&apos;default&apos; -projectPath=&apos;/project/sample.prj&apos; --config -webui.autoUpdateDrivers=true
      - store_test_results:
          path: ./tests/katalon/Reports
      - store_artifacts:
          path: /tmp/project/
workflows:
  cluster-observability:
    triggers:
      - schedule:
          cron: &quot;0 20 * * *&quot;
          filters:
            branches:
              only:
                - master
    jobs:
      - checkout-workspace
      - copyright-check
      - gather-cluster-metrics:
          requires:
            - checkout-workspace
      - cluster-scale-operations:
          requires:
            - checkout-workspace
      - update-results:
          requires:
            - checkout-workspace
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Sample CircleCI workflow can be demonstrated as below:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/circleci-workflow.jpg&quot; alt=&quot;Sample CircleCI Automation pipeline&quot; title=&quot;Sample CircleCI Automation pipeline&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;A csvDataSource.csv artifact file may look like what&apos;s shown below:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;IDCluster,DateTime,BlueprintType,CPUUsage,MemoryAllocation,StorageCapacity,WorkerNodeCount,ScalingRequired,ClusterScaleUpDuration,ClusterScaleDownDuration,ClusterCreationDuration,ClusterDeletionDuration
2a0810e6-0c32-451b-933b-74fbdf86358a,2022-06-15 20:00:00,standard,50%,70%,5GB,3,N,,,,
ce18d4e0-9af3-40da-8d43-266fe05d17ba,2022-06-16 20:00:00,standard,80%,70%,5GB,3,Y,01,,,
767801bd-7f1e-4a9a-804e-b560f168d968,2022-06-17 20:00:00,standard,10%,10%,5GB,4,Y,,01,,
r3b185d5-c96a-49a5-b6de-13ae93c93fd4,2022-06-18 20:00:00,standard,07%,10%,5GB,2,N,,,,
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As illustrated in the above data file, the UI automation run gathers data for the cluster where the application is deployed. A threshold value can be predefined for scaling requirements in the Katalon script. For example, usage above 80% should trigger scaling up, while usage at 10% or below should trigger scaling down. In the first record after app deployment, the CPU and memory metrics are 50% and 70%, respectively, so scaling is not necessary and the record is marked &apos;N&apos; in the data file. On the second day, CPU usage reached 80%, which means a scale-up operation is required, so the record is marked &apos;Y&apos;. The script for the scale-up operation can also record the time it takes for the cluster to become ready.&lt;/p&gt;
&lt;h2&gt;How to Download Artifacts from CircleCI Workflow Dynamically?&lt;/h2&gt;
&lt;p&gt;CircleCI offers an &lt;a href=&quot;https://circleci.com/docs/2.0/managing-api-tokens/#creating-a-personal-api-token&quot;&gt;API token&lt;/a&gt; to view pipelines via an API interface. Based upon requirements, various &lt;a href=&quot;https://circleci.com/docs/api/v2/&quot;&gt;APIs&lt;/a&gt; can be selected. In the &lt;a href=&quot;https://www.postman.com/downloads/&quot;&gt;Postman&lt;/a&gt; tool, a user can try a combination of APIs, passing the output of one API call as the input to another. A curl command for downloading artifacts from the CircleCI API may look like the one mentioned below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;curl -H &quot;Circle-Token: $TOKEN&quot; &quot;https://circleci.com/api/v2/project/gh/$repo/$project/$JOB_ID/artifacts&quot; | grep -o &apos;https://[^&quot;]*csvDataSource[^&quot;]*&apos; \
   | wget --timeout=10  --verbose --header &quot;Circle-Token: $TOKEN&quot; --input-file -
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Grafana Dashboard Configurations&lt;/h2&gt;
&lt;p&gt;CircleCI build runs will collect the artifacts, and those can be loaded into a data store like MySQL or Prometheus. In Grafana, various data source configurations are available, from which the user configures the required data source. There are various chart options available for visual interpretation. By providing the appropriate MySQL queries, the required graphs can be generated.&lt;/p&gt;
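&lt;p&gt;For the MySQL route, one straightforward option is the client&apos;s LOAD DATA LOCAL INFILE statement, run after the artifact has been downloaded. The database name, the user, and the assumption that the table columns match the CSV header are illustrative; adjust them to your own schema.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# database name and user are assumptions; LOCAL INFILE must be enabled on the server as well
mysql --local-infile=1 -u metrics_writer -p metricsdb -e &quot;
  LOAD DATA LOCAL INFILE &apos;csvDataSource.csv&apos;
  INTO TABLE ClusterMetricsTable
  FIELDS TERMINATED BY &apos;,&apos;
  IGNORE 1 LINES;&quot;
&lt;/code&gt;&lt;/pre&gt;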
&lt;p&gt;A sample MySQL query to display the maximum CPU, Memory, and Storage specific to the cluster name can be written for Grafana, as mentioned below.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-sql&quot;&gt;select ClusterName AS &quot;Cluster Name&quot;,
max(cast(CPUUsage as UNSIGNED)) as &quot;Maximum CPU Usage&quot;,
max(cast(MemoryAllocation as UNSIGNED)) as &quot;Maximum Memory Usage&quot;,
max(cast(StorageCapacity as UNSIGNED)) as &quot;Maximum Storage Usage&quot;
from ClusterMetricsTable GROUP BY ClusterName;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Finally, the Grafana dashboard appears as:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/cluster-metrics-grafana.jpg&quot; alt=&quot;Sample Grafana Dashboard&quot; title=&quot;Sample Grafana Dashboard (Data is for illustrative purposes only. Axes are hidden)&quot;&gt;&lt;/p&gt;
&lt;p&gt;Note that all data illustrated is for understanding purposes only. No relevance to actual HPE GreenLake performance is being shown or claimed in this blog post.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Through this blog, users can integrate various tools and generate metrics for observability purpose. And by triggering actions based on these metrics, several objectives can be achieved, such as, selecting the right blueprint for the application or automating cluster scale-up or scale-down operations.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Benefits of the Platform Level Data Model for Firmware Update Standard]]></title><description><![CDATA[This blog post has been moved to the Server Management Portal]]></description><link>https://developer.hpe.com/benefits-of-the-platform-level-data-model-for-firmware-update-standard/</link><guid isPermaLink="false">https://developer.hpe.com/benefits-of-the-platform-level-data-model-for-firmware-update-standard/</guid><pubDate>Tue, 07 Jun 2022 16:35:17 GMT</pubDate><content:encoded>&lt;br&gt;
&lt;p&gt;&lt;big&gt;This blog post has been moved to the &lt;a href=&quot;https://servermanagementportal.ext.hpe.com/docs/references_and_material/blogposts/pldm/pldm_fwupd/pldm_fwupd&quot;&gt;Server Management Portal&lt;/a&gt;&lt;/big&gt;&lt;/p&gt;
&lt;br&gt;</content:encoded></item><item><title><![CDATA[How to create a virtual network in HPE GreenLake for Private Cloud Enterprise]]></title><description><![CDATA[Editor’s Note – NAME CHANGE: HPE GreenLake for Private Cloud is now part of HPE GreenLake for Private Cloud Enterprise. Introduction HPE…]]></description><link>https://developer.hpe.com/how-to-create-a-virtual-network-in-hpe-greenlake-for-private-cloud/</link><guid isPermaLink="false">https://developer.hpe.com/how-to-create-a-virtual-network-in-hpe-greenlake-for-private-cloud/</guid><pubDate>Tue, 07 Jun 2022 05:34:46 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note – NAME CHANGE: HPE GreenLake for Private Cloud is now part of HPE GreenLake for Private Cloud Enterprise.&lt;/strong&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;HPE GreenLake for private cloud is designed to deliver and help manage a private cloud. Available on the HPE GreenLake Central platform, HPE GreenLake for private cloud is:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;An HPE designed, implemented, owned, and operated private cloud that is deployed at a customer site&lt;/li&gt;
&lt;li&gt;Offered as a consumption-based service that enables customers to better align costs to outcomes&lt;/li&gt;
&lt;li&gt;Managed through an intuitive self-service portal UI that is used to create and manage private cloud services such as compute, storage, and network (the example described in this blog post)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This blog post explains how a Customer Network Administrator can create a virtual network with a static IP pool and with DHCP using NSX-T, a network virtualization and security platform that enables the virtual cloud network in HPE GreenLake for private cloud.&lt;/p&gt;
&lt;h2&gt;Prerequisites&lt;/h2&gt;
&lt;p&gt;Access to Network Management is controlled by a user’s role.&lt;/p&gt;
&lt;p&gt;With the Tenant Admin user, connect to HPE GreenLake Central, locate the HPE GreenLake for private cloud dashboard widget and click the Launch icon to open the HPE GreenLake for private cloud dashboard.&lt;/p&gt;
&lt;p&gt;Navigate to Administration &gt; Roles and select the role to update the permission.&lt;/p&gt;
&lt;p&gt;From the ACCESS column of the selected role, select FULL for the below-mentioned NSX network objects:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Infrastructure: Networks&lt;/li&gt;
&lt;li&gt;Infrastructure: Network IP Pools&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Understanding private cloud networking&lt;/h2&gt;
&lt;p&gt;The following illustration shows how you can use NSX objects to achieve NSX logical networking in HPE GreenLake for private cloud.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/fig-2.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The network components are as follows:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;VM&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Tenant virtual machines (VMs) are connected to Blue and Green networks.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;NSX-T Segments&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;NSX-T segments are layer 2 virtual domains and there are two types of segments in an NSX-T Data Center:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Overlay-backed segments (&lt;strong&gt;Default&lt;/strong&gt;): This enables traffic flow between two virtual machines on different hosts. The hosts are attached to the same overlay segment and have their Layer 2 traffic carried by a tunnel between them.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;VLAN-backed segments: This is used for uplink traffic external to the NSX-T Data Center.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Note: Raise an HPE Support case to enable the backend infrastructure to support this type.&lt;/em&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Blue-Network, Green-Network&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;NSX-T segments that are attached to the tenant virtual machines and Tier1 gateway.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Tier-1 Gateway&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Gateway with downlink connections to NSX-T segments and uplink connections to Tier-0 gateways using an internal transit network. Typically, a Tier-1 gateway is connected to a Tier-0 gateway in the northbound direction and to segments in the southbound direction.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Internal Transit Network&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;This network enables communication between the Tier-0 gateway and all Tier-1 gateways that are linked to it. This connectivity is established when the Tier-1 gateway is attached to the Tier-0 gateway.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Tier-0 Gateway&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Gateway that processes the traffic between the logical and physical networks. A Tier-0 gateway has downlink connections to Tier1 gateways and uplink connections to the physical networks.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Ext-Net&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Interface connected to Virtual Distributed Switch configured in a customer environment for enabling external connectivity from the tenant virtual machines.&lt;/p&gt;
&lt;h2&gt;How to create a virtual network with a static IP pool&lt;/h2&gt;
&lt;h3&gt;Step 1: Create IP Pool&lt;/h3&gt;
&lt;p&gt;Locate the HPE GreenLake for private cloud card in the HPE GreenLake Central dashboard and click the Launcher icon to open the HPE GreenLake for private cloud dashboard.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Navigate to Infrastructure &gt; Networks&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click the IP Pools tab&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click Add to open CREATE NETWORK POOL dialog box&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Configure the NSX-T IP pool parameters as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Name&lt;/strong&gt;: IP Pool Name&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Pool Type&lt;/strong&gt;: Select &quot;Morpheus&quot;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;IP Ranges&lt;/strong&gt;: Specify the IP pool address range by entering the STARTING ADDRESS and ENDING ADDRESS&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/fig-1.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Step 2: Create NSX-T Segment with Static IP Pool&lt;/h3&gt;
&lt;p&gt;Locate the HPE GreenLake for private cloud card in the HPE GreenLake Central dashboard and click the Launcher icon to open the HPE GreenLake for private cloud dashboard.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Navigate to Infrastructure &gt; Networks&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;From the Networks tab, click the ADD drop-down list, select NSX-T Segment&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;From the dialog box, configure the NSX-T segment parameters as follows. For information about additional fields that are not described here, refer to the &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00092451en_us&amp;#x26;page=GUID-3DCFD624-DFE7-45A8-AFAC-BE004227C7EC.html&quot;&gt;User Guide&lt;/a&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Group&lt;/strong&gt;: From the drop-down list, select an infrastructure user group to isolate the network at the group level. The default is Shared (all infrastructure groups)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Network Service&lt;/strong&gt;: Select &quot;NSX-T&quot;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Name&lt;/strong&gt;: Network Name&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;ACTIVE&lt;/strong&gt;: Select to activate the network. Clear to deactivate the network&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Gateway&lt;/strong&gt;: (Optional) Enter the gateway address&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Primary DNS&lt;/strong&gt;: (Optional) Enter the primary DNS details&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Secondary DNS&lt;/strong&gt;: (Optional) Enter the secondary DNS details&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Connected Gateway&lt;/strong&gt;: (Optional) From the drop-down list, select a Tier1 gateway router&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Gateway CIDR&lt;/strong&gt;: Enter the Classless Inter-Domain Routing (CIDR) for the logical switch (example: 192.168.0.1/24)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Transport Zone&lt;/strong&gt;: Select Overlay&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Network Pool&lt;/strong&gt;: Specify the IP pool that was created in Step 1&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/fig-3.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Click &quot;Save Changes&quot;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;On successful creation, the network will be listed under the &quot;Networks&quot; tab. Use this segment for instance deployment.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/fig-10.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;How to create a virtual network with DHCP&lt;/h2&gt;
&lt;p&gt;Locate the HPE GreenLake for private cloud card in the HPE GreenLake Central dashboard and click the Launcher icon to open the HPE GreenLake for private cloud dashboard.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Navigate to Infrastructure &gt; Networks&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;From the Networks tab, click the ADD drop-down list, select NSX-T Segment&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;From the dialog box, configure the NSX-T segment parameters as follows. For information about additional fields that are not described here, refer to the &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00092451en_us&amp;#x26;page=GUID-3DCFD624-DFE7-45A8-AFAC-BE004227C7EC.html&quot;&gt;User Guide&lt;/a&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Group&lt;/strong&gt;: From the drop-down list, select an infrastructure user group to isolate the network at the group level. The default is Shared (all infrastructure groups)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Network Service&lt;/strong&gt;: Select &quot;NSX-T&quot;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Name&lt;/strong&gt;: Network Name&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;ACTIVE&lt;/strong&gt;: Select to activate the network. Clear to deactivate the network&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Gateway&lt;/strong&gt;: (Optional) Enter the gateway address&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Primary DNS&lt;/strong&gt;: (Optional) Enter the primary DNS details&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Secondary DNS&lt;/strong&gt;: (Optional) Enter the secondary DNS details&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Connected Gateway&lt;/strong&gt;: (Optional) From the drop-down list, select a Tier1 gateway router&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Gateway CIDR&lt;/strong&gt;: Enter the Classless Inter-Domain Routing (CIDR) for the logical switch (example: 192.168.0.1/24)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Transport Zone&lt;/strong&gt;: Select Overlay&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Expand &apos;Subnet DHCP&apos; Section and update the below fields:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;DHCP Type: Local DHCP Server (default)&lt;/li&gt;
&lt;li&gt;DHCP ENABLED: Select to Enable&lt;/li&gt;
&lt;li&gt;DHCP Server Address: This address must not overlap with the IP ranges of the subnet, the gateway address of the subnet, or the DHCP static-binding addresses of this segment&lt;/li&gt;
&lt;li&gt;DHCP Ranges: Enter the DHCP ranges as comma-separated values. Entries can be in either range format (192.168.1.10-192.168.1.100) or CIDR format (192.168.1.0/24).&lt;/li&gt;
&lt;li&gt;DHCP LEASE TIME: (Optional) Enter the lease time. The default is one day.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/fig-4.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Click &quot;Save Changes&quot;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;On successful creation, the network will be listed under the &quot;Networks&quot; tab. Notice the tick mark in the DHCP column. Use this segment for instance deployment.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/fig-5.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Manage the virtual networks&lt;/h2&gt;
&lt;p&gt;You can manage the virtual networks from the Infrastructure &gt; Networks page. Below is the network details page of the sample network (Green-Segment) created in the previous step.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/fig-6.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Select the &quot;Instances&quot; tab to view the list of instances deployed using this network:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/fig-7.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Select the &quot;Host Records&quot; tab to view the records created for every deployment on the network.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Grid View:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/fig-8.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;List View:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/fig-9.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;In this blog post, we covered how to get started with software-defined networking in HPE GreenLake for private cloud and explained the steps to create a sample virtual network with both a static IP pool and DHCP. In the next article, we will cover the NSX distributed firewall feature of HPE GreenLake for private cloud and explain how to create and enforce firewall rules to restrict the network traffic to virtual machines.&lt;/p&gt;
&lt;p&gt;Learn more about &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00092451en_us&amp;#x26;page=HPE-GreenLake-private-cloud-networking.html&quot;&gt;HPE GreenLake for private cloud&lt;/a&gt; networking&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Open for Business]]></title><link>https://developer.hpe.com/2022-June-04/</link><guid isPermaLink="false">https://developer.hpe.com/2022-June-04/</guid><pubDate>Fri, 03 Jun 2022 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Why Open Source is more than Software: The Example of the Linux Foundation's AgStack project]]></title><description><![CDATA[The Linux Foundation’s goal to create the greatest shared technology investment in history by enabling global open collaboration across…]]></description><link>https://developer.hpe.com/why-open-source-is-more-than-software-the-example-of-the-linux-foundations-agstack-project/</link><guid isPermaLink="false">https://developer.hpe.com/why-open-source-is-more-than-software-the-example-of-the-linux-foundations-agstack-project/</guid><pubDate>Wed, 01 Jun 2022 17:18:20 GMT</pubDate><content:encoded>&lt;p&gt;The Linux Foundation’s goal to create the greatest shared technology investment in history by enabling global open collaboration across companies, developers and users has profoundly impacted our digital heritage. Its tactics proved that using a shared development model fosters human collaboration, accelerates innovation, and notably expands adoption, with text book examples including Linux, Kubernetes, Cloud Foundry, and Jenkins. Propelled by this success, the set of open source projects under the Linux Foundation has expanded to over 400 projects.&lt;/p&gt;
&lt;p&gt;Now, a new Linux Foundation project, AgStack, aims to expand the definition of open source beyond discrete software code to a comprehensive set of open services aimed at solving one of the world’s greatest challenges – hunger – by getting data to those who need it from those who have it at the time it’s needed. AgStack ties open software and openly available data to build an open highway of information that’s collaborative, and at the same time, secure. Although AgStack is specifically aimed at agriculture, this new concept of open services can be applied to multiple industries, such as healthcare, housing, and education. It merely requires one to be open to working collaboratively.&lt;/p&gt;
&lt;h2&gt;Leveraging open source to improve agriculture efficiency&lt;/h2&gt;
&lt;p&gt;Global agriculture is a highly complex, interrelated trillion-dollar industry. Its ecosystem is massively inefficient and confronts grave challenges in the face of climate change. It employs half the world’s population and uses 70% of the world’s fresh water. Of the thousands of stakeholders who have to work together, its smallholder farmers produce at least a third &lt;a href=&quot;https://ourworldindata.org/smallholder-food-production#:~:text=Family%20farms%20do%20produce%20around,poorest%20people%20in%20the%20world.&quot;&gt;(from Our World Data)&lt;/a&gt; of the entire world’s food supply and represent some of the poorest people in the world. With 30% of our food wasted and 30% of the world’s people hungry, there’s quite a bit of room for improvement.&lt;/p&gt;
&lt;p&gt;The AgStack project seeks to increase global agriculture efficiency through the creation, maintenance and enhancement of a free, re-usable, open and specialized digital infrastructure for data and applications. This software infrastructure is community-maintained and provides a free and secure digital “operating system” for all agriculture stakeholders to run their agriculture applications – from farmers to marketers to extension agents. AgStack essentially sits between digital agriculture applications and third-party cloud systems, and uses an openly available API to deliver the required data, models, frameworks, extensions and toolboxes to help farmers be more successful in growing crops in a sustainable manner.&lt;/p&gt;
&lt;h2&gt;Connecting the right data to the right people at the right time&lt;/h2&gt;
&lt;p&gt;Farmers are constantly grappling with weather, procurement, supply-chain, and disease-related issues.  With better data, they can make better decisions on what to plant, when to fertilize, how to irrigate, when to harvest, etc. Knowing what their particular soil requires at specific times and how to avoid pests and diseases can make or break a harvest. Even knowing market demands can help them better plan and avoid waste.&lt;/p&gt;
&lt;p&gt;AgStack addresses these issues by focusing on being able to deliver four pieces that interconnect:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Data.&lt;/strong&gt; There is already a tremendous amount of data openly available today: weather data, satellite data, and soil data. Research about new varieties of seeds and fertilizer can significantly boost productivity. Near real-time information about soil types, weather conditions, market predictions, and nearby insect infestation can help farmers make better decisions on the ground. But hunting for that data takes time, expertise, and often language translation before it becomes useful.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Models.&lt;/strong&gt; Many universities and research groups have developed models to predict how pests evolve, how different soils evolve, when to fertilize, etc. Again, much of this material is publicly available and features permissive licensing, but you have to go off looking for it.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Frameworks.&lt;/strong&gt; User registries and asset registries are other shareable resources hosted by AgStack. Some of these help identify field boundaries in a way that protects the owners’ personal information. Others can look at them to determine whether the field they’re working on has similar attributes and requires similar considerations.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Extensions and toolboxes.&lt;/strong&gt; Extensions come from folks who share new research that can help farmers improve their practices. For example, tagged image datasets or tagged audio datasets can be extremely useful for those developing applications using machine learning algorithms. These datasets can help one determine if something is a piece of lettuce, an almond, or something else.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;It’s all about connecting the right data to the right people at the right time so the best decisions can be made.&lt;/p&gt;
&lt;h2&gt;Where does HPE fit in all this?&lt;/h2&gt;
&lt;p&gt;Being able to deliver all this from edge-to-cloud requires intensive data computation. Hewlett Packard Enterprise is supporting this effort in a couple of ways. As a founding company, HPE has:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Donated server hardware, memory, and cloud space&lt;/li&gt;
&lt;li&gt;Sponsored the project through HPE’s Tech for Good program, demonstrating that technology has the potential to drive real and positive change when harnessed effectively&lt;/li&gt;
&lt;li&gt;Openly collaborated, offering technical guidance on how to more easily and scalably provide for data motion and monitoring&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Janice Zdankus, HPE Vice President of CTO Technology Strategy and Innovation for Social Impact, whose team is involved with the AgStack project, as well as supporting projects like CGIAR and &lt;a href=&quot;https://bigdata.cgiar.org/digital-intervention/video-enabled-extension/&quot;&gt;DigitalGreen&lt;/a&gt;, points out that “bringing together industry, technology, academia and government partners to solve key societal challenges, global impact can be delivered and scaled.” Her colleague, Ted Dunning, HPE CTO for Data Fabric, a member of her team and an elected member of the Board for AgStack, agrees, adding that “a modern world should make it possible for us to be generous.”&lt;/p&gt;
&lt;h2&gt;A road to a better tomorrow – open to all&lt;/h2&gt;
&lt;p&gt;AgStack Founder, Sumer Johal, joined the HPE Developer team just recently to offer his take on what makes AgStack so special. “It’s not just open source software; it’s also open services. It’s like this global road that anyone can ride on. Something that’s neutral and trusted. Something everyone can use. Something everyone will want to adopt.”&lt;/p&gt;
&lt;p&gt;How successful will it be? Sumer had a thought on that as well. “If you measure success by adoption and not by money, then you start looking at the world and seeing that it has large gaps in adoption that happen because everyone has a different view on how they want to make money. But it turns out, just like Linux demonstrated, that once you get away from money and look more at adoption as the currency, ironically you wind up making more money for stakeholders because more people adopt it.” AgStack has a great opportunity here. The world needs it to be successful. And it needs developers and data scientists to help it be successful by contributing to this open source initiative.&lt;/p&gt;
&lt;p&gt;For more information on the AgStack project, refer to the &lt;a href=&quot;https://agstack.org/&quot;&gt;AgStack project website&lt;/a&gt; or read this &lt;a href=&quot;https://www.linuxfoundation.org/press-release/linux-foundation-launches-open-source-digital-infrastructure-project-for-agriculture-enables-global-collaboration-among-industry-government-and-academia/&quot;&gt;announcement&lt;/a&gt; from the Linux Foundation. You can also check out this HPE Developer May 2022 Munch &amp;#x26; Learn &lt;a href=&quot;https://www.youtube.com/watch?v=dnhjRF5dr6M&quot;&gt;video replay&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Don’t Miss All Things Software in the Hack Shack at HPE Discover]]></title><description><![CDATA[Calling all developers, data scientists, data architects, and machine learning engineers! The HPE Developer Community is excited to welcome…]]></description><link>https://developer.hpe.com/don’t-miss-all-things-software-in-the-hack-shack-at-hpe-discover/</link><guid isPermaLink="false">https://developer.hpe.com/don’t-miss-all-things-software-in-the-hack-shack-at-hpe-discover/</guid><pubDate>Thu, 26 May 2022 19:22:00 GMT</pubDate><content:encoded>&lt;p&gt;Calling all developers, data scientists, data architects, and machine learning engineers! The HPE Developer Community is excited to welcome you to Las Vegas, June 28-30, 2022 at the Hack Shack at HPE Discover 2022, The Edge-to-Cloud-Conference. The Hack Shack is a place specifically focused on software – from development, to design, to use. There, you can connect with our experts in meetup sessions, try our unique software challenges, and go on a treasure hunt to discover a wealth of resources for open source and HPE technologies. It’s a fun place to collaborate, learn, expand your technology skills, and win prizes!&lt;/p&gt;
&lt;link rel=&quot;stylesheet&quot; href=&quot;https://www.w3schools.com/w3css/4/w3.css&quot;&gt;
&lt;div class=&quot;w3-container w3-center w3-margin-bottom&quot;&gt;
  &lt;p&gt;&lt;strong&gt;Register &lt;a href=&quot;https://attend.hpe.com/discover2022/index.cfm?iLangID=1&quot;&gt;here&lt;/a&gt; for HPE Discover 2022!&lt;/strong&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;p&gt;We invite you to explore all our sessions on the &lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.sessions/?l=1049&amp;#x26;locale=en_US&quot;&gt;content catalog&lt;/a&gt; to build your own agenda. You can create your personalized agenda &lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.myagenda/?l=1049&amp;#x26;locale=en_US&quot;&gt;here&lt;/a&gt;.  To find our Hack Shack Meetup sessions in the catalogue, simply go to the search bar and type in Hack Shack. You’ll also find them listed below for easy reference. All sessions are listed in PDT.&lt;/p&gt;
&lt;h2&gt;Spark Your Curiosity at the Hack Shack Meetups!&lt;/h2&gt;
&lt;p&gt;Our Hack Shack Meetup sessions are designed to help you get your questions answered. These 30-minute discussions are led by technology experts who’ll introduce the topic, point out important features, and then engage in conversation with you as a group. You’ll also have an opportunity to connect with them after each session to go into further detail about your specific situation.&lt;/p&gt;
&lt;h2&gt;Discover the Developer’s Journey&lt;/h2&gt;
&lt;p&gt;In the Hack Shack (DEMO701), you’ll be introduced to HPE Developer, a program designed to help people like you build applications and develop integrations with HPE products and solutions. You’ll have the opportunity to connect with experts in secure connectivity, hybrid cloud, AI, and unified data analytics -- and learn more about the latest trends. You’ll also hear about unique software products that help you extract the most value from your data wherever it lives, turning insights into outcomes at the speed required to thrive in today’s complex world.&lt;/p&gt;
&lt;p&gt;To help you navigate the event, we’ve put together a list of recommended Hack Shack Meetup sessions to include in your agenda builder. Note: Live sessions, demos and hands-on labs in this article are listed in Pacific Daylight Time (PDT).&lt;/p&gt;
&lt;h4&gt;Our 13 Hack Shack Sessions&lt;/h4&gt;
&lt;p&gt;&lt;strong&gt;Session ID:&lt;/strong&gt; &lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.sessions/?l=1049&amp;#x26;sid=25292_0&amp;#x26;locale=en_US&quot;&gt;HSM5292&lt;/a&gt; | Hack Shack Meetup
&lt;strong&gt;Tuesday, June 28, 12:30 p.m.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Speaker:&lt;/strong&gt; Matthew Morris (HPE)
&lt;strong&gt;Does Your Data Lake Need a Little Chlorine? Clean Up and Modernize Your Apps&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Have things become a little murky with your Hadoop solution? Join this meetup to hear how HPE Ezmeral Runtime Enterprise can modernize your applications and enable cloud-scale elasticity while tightly coupling the data at the edge with your applications running on-premises and in the cloud. You can even take advantage of cloud economics as you have it fully supported from beginning to end with HPE Professional Services and deployed as a service. Much cleaner, right?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Session ID:&lt;/strong&gt; &lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.sessions/?l=1049&amp;#x26;sid=25101_0&amp;#x26;locale=en_US&quot;&gt;HSM5101 &lt;/a&gt;| Hack Shack Meetup
&lt;strong&gt;Tuesday, June 28, 1:30 p.m.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Speaker:&lt;/strong&gt; Milind Bhandarkar (HPE)
&lt;strong&gt;Interactive Analytics Workloads with PrestoDB&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;PrestoDB is an open source distributed SQL query engine for running interactive analytics queries against data sources of all sizes, ranging from gigabytes to petabytes. Join this interactive meetup to discuss how PrestoDB modernizes data at scale and how HPE enhancements increase scale, performance, security, and governance of PrestoDB. Learn how we integrate with a shared metadata catalog and a role-based access control system to provide a consistent view of disparate data sources.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Session ID:&lt;/strong&gt; &lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.sessions/?l=1049&amp;#x26;sid=24988_0&amp;#x26;locale=en_US&quot;&gt;HSM4988 &lt;/a&gt;| Hack Shack Meetup
&lt;strong&gt;Tuesday, June 28, 2:30 p.m.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Speakers:&lt;/strong&gt; Ted Dunning (HPE), Paul Holland (HPE)
&lt;strong&gt;Why Open Source? Discover the Value of Open Source Project Initiatives&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Wondering why open source? Let&apos;s face it, without open source, many of the products we take for granted today would never have been developed. That&apos;s because open source development encourages collaboration. Come by and let&apos;s talk about the hows and whys of open source and why HPE thinks open source is so important.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Session ID:&lt;/strong&gt; &lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.sessions/?l=1049&amp;#x26;sid=24989_0&amp;#x26;locale=en_US&quot;&gt;HSM4989&lt;/a&gt;| Hack Shack Meetup
&lt;strong&gt;Tuesday, June 28, 3:30 p.m., Wednesday, June 29, 11:00 a.m.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Speaker:&lt;/strong&gt; Max Lambrecht (HPE)
&lt;strong&gt;SPIRE Integration with Istio for Edge-to-Cloud Security Beyond Kubernetes Microservices&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Current service mesh solutions do not support leveraging service identity beyond east-west traffic to support advanced attestation capabilities for end-to-end service encryption, granular policy management, and traffic flows across distributed environments. In this talk, we&apos;ll discuss how SPIFFE and SPIRE integration with Istio complements Istio&apos;s authentication and attestation capabilities to provide more robust and flexible attestation beyond Kubernetes namespaces and service accounts.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Session ID&lt;/strong&gt;: &lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.sessions/?l=1049&amp;#x26;sid=24990_0&amp;#x26;locale=en_US&quot;&gt;HSM4990&lt;/a&gt; | Hack Shack Meetup
&lt;strong&gt;Tuesday, June 28, 4:30 p.m.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Speaker:&lt;/strong&gt; Evan Sparks (HPE)
&lt;strong&gt;Scaling Language Training to Trillion-Parameter Models on a GPU Cluster&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Today, natural language processing (NLP) powers the latest conversational AI and translation apps. Join us to explore the collaborative experimentation process that machine learning teams leverage and the challenges they face while training these large-scale NLP models. See how Determined&apos;s open source deep learning training platform helps model developers train models faster and easier using tools such as resource management, fault tolerance, and model optimization.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Session ID:&lt;/strong&gt; &lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.sessions/?l=1049&amp;#x26;sid=24991_0&amp;#x26;locale=en_US&quot;&gt;HSM4991&lt;/a&gt; | Hack Shack Meetup
&lt;strong&gt;Tuesday, June 28, 5:30 p.m.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Speaker:&lt;/strong&gt; Michael Mattsson (HPE)
&lt;strong&gt;Get Started with Persistent Storage for Kubernetes with the HPE CSI Driver&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Come learn about an open source Container Storage Interface (CSI) for Kubernetes that enables the use of multiple storage backends through powerful abstractions. In this talk, we&apos;ll discuss features and capabilities available via the HPE CSI Driver and where developers can find resources to support any block storage platform with a management API.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Session ID:&lt;/strong&gt; &lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.sessions/?l=1049&amp;#x26;sid=24993_0&amp;#x26;locale=en_US&quot;&gt;HSM4993&lt;/a&gt; | Hack Shack Meetup
&lt;strong&gt;Wednesday, June 29, 9:00 a.m.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Speaker:&lt;/strong&gt; Anthony Dutra (HPE)
&lt;strong&gt;Data Protection As Code: Introducing Zerto for Kubernetes&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Zerto is a storage software solution designed for enterprise-class business continuity and disaster recovery in virtual and cloud environments. Using data protection as code, DevOps and I&amp;#x26;O teams come together to protect apps from day one of their lifecycle. Join this meetup to discuss how Zerto&apos;s cloud-native API-first solution enables storage-agnostic backup and disaster recovery, protecting all of the application resources.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Session ID:&lt;/strong&gt; &lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.sessions/?l=1049&amp;#x26;sid=24994_0&amp;#x26;locale=en_US&quot;&gt;HSM4994&lt;/a&gt; | Hack Shack Meetup
&lt;strong&gt;Wednesday, June 29, 10:00 a.m.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Speaker:&lt;/strong&gt; Eric Soderberg (HPE)
&lt;strong&gt;Discuss How Grommet, an HPE Open Source Project, Helps Streamline the Way You Develop Apps&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Have you heard about Grommet, a powerful React component library that provides accessibility, modularity, responsiveness, and theming in a tidy package? This open source project empowers designers and developers to quickly build a responsive and accessible user experience for the web. Come and join this discussion to hear how HPE is using Grommet in its own products and services, and how the open source nature of Grommet facilitates this.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Session ID:&lt;/strong&gt; &lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.sessions/?l=1049&amp;#x26;sid=24995_0&amp;#x26;locale=en_US&quot;&gt;HSM4995&lt;/a&gt; | Hack Shack Meetup
&lt;strong&gt;Wednesday, June 29, 12:00 p.m., Thursday, June 30, 11:00 a.m.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Speaker:&lt;/strong&gt; Glyn Bowden (HPE)
&lt;strong&gt;Establish a Trustworthy Data Pipeline with Project Data Map&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Data consumers need trusted data to make their best decisions. Data producers require tools to share data safely with those who need it. Learn how HPE is working with the open source community to leverage existing tools and resources that extract metadata with the Common Metadata Framework and Project Data Map. Help us bring disparate tools and resources together, creating a system the whole community can contribute to for the best insights from quality data, no matter where the data lives.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Session ID:&lt;/strong&gt; &lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.sessions/?l=1049&amp;#x26;sid=24996_0&amp;#x26;locale=en_US&quot;&gt;HSM4996&lt;/a&gt; | Hack Shack Meetup
&lt;strong&gt;Wednesday, June 29, 1:00 p.m.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Speaker:&lt;/strong&gt; Ted Dunning (HPE)
&lt;strong&gt;10 Cool Open Source Projects and How You Can Contribute&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Want to make your mark in the Open Source community? HPE has a long history of open collaboration and giving back. Join us in a Hack Shack discussion to learn how you can contribute to open source projects that are important to HPE, like Apache Kafka and Spark, Presto, Redfish, Grommet, AgStack, KubeDirector, and SPIFFE.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Session ID:&lt;/strong&gt; &lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.sessions/?l=1049&amp;#x26;sid=24997_0&amp;#x26;locale=en_US&quot;&gt;HSM4997&lt;/a&gt;| Hack Shack Meetup
&lt;strong&gt;Wednesday, June 29, 2:00 p.m.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Speaker:&lt;/strong&gt; Didier Lalli (HPE)
&lt;strong&gt;Calling all Developers, Data Scientists, Data Architects, and Machine Learning Engineers&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Are you looking for the resources you need to design and build the best possible software and architecture experiences that harness the most value from your data? Join this interactive meetup and learn how to connect with the HPE Developer Community to build, communicate, and collaborate. We&apos;re all developing something. Let&apos;s do it together.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Session ID:&lt;/strong&gt; &lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.sessions/?l=1049&amp;#x26;sid=25100_0&amp;#x26;locale=en_US&quot;&gt;HSM5100&lt;/a&gt; | Hack Shack Meetup
&lt;strong&gt;Wednesday, June 29, 3:00 p.m., Thursday, June 30, 12:00 p.m.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Speaker:&lt;/strong&gt; Navaneethan Venugopalan (HPE)
&lt;strong&gt;Preview the New HPE GreenLake Developer Portal&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Many ITOps teams are eager to see HPE expose application programming interfaces (APIs) and SDKs to provide programmatic access to HPE GreenLake to power their as-a-service transformation. In this session, explore the new HPE GreenLake Developer Portal, a self-service portal for developers to access published APIs of various HPE GreenLake edge-to-cloud services like Compute, Networking, and Storage.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Session ID:&lt;/strong&gt; &lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.sessions/?l=1049&amp;#x26;sid=25099_0&amp;#x26;locale=en_US&quot;&gt;HSM5099 &lt;/a&gt;| Hack Shack Meetup
&lt;strong&gt;Wednesday, June 29, 4:00 p.m.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Speaker:&lt;/strong&gt; Don Wake (HPE)
&lt;strong&gt;Accelerate Analytics with HPE Ezmeral and Apache Spark with Delta Lake&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Interested in accessing data lakes directly using a lakehouse? Do you need reliable and consistent data for real-time and batch analytics with ACID transaction support? Join this interactive meetup to discuss and learn how to accelerate your analytics with HPE Ezmeral and Apache Spark with Delta Lake.&lt;/p&gt;
&lt;h2&gt;Work Hard, Play Hard&lt;/h2&gt;
&lt;p&gt;In addition to our classroom-style Hack Shack Meetup sessions, we invite you to come, relax, and participate in our fun and rewarding community activities. These include our Treasure Hunt, physical games, software challenges, and our Hack Shack Celebration Party (complete with beer and pizza!). You’ll even have the opportunity to win prizes – from HPE Developer hats and t-shirts to a hoodie or a CanaKit Raspberry Pi 4 Extreme Kit. Details below! (Please look here for &lt;a href=&quot;https://developer.hpe.com/hackshack/hpediscover2022-participations-terms-conditions/&quot;&gt;participation details&lt;/a&gt;.)&lt;/p&gt;
&lt;h4&gt;HPE Developer Treasure Hunt&lt;/h4&gt;
&lt;p&gt;Want to win a new HPE Developer t-shirt or even possibly a hoodie? Participate in our scavenger hunt style Treasure Hunt! To answer the questions correctly, you’ll need to explore the physical Hack Shack in Las Vegas and scrutinize the HPE Developer portal. Those who complete the Treasure Hunt will have the opportunity to receive a t-shirt. Twelve lucky winners who answer all the questions correctly and attend our Wednesday evening Hack Shack Celebration Party will also win an HPE Developer hoodie. Look for the QR code in the booth to easily participate on your phone or use one of the provided laptops in our booth. (For complete details, please refer to the &lt;a href=&quot;https://developer.hpe.com/hackshack/hpediscover2022-treasurehunt-terms-conditions/&quot;&gt;Terms &amp;#x26; Conditions&lt;/a&gt;.)&lt;/p&gt;
&lt;h4&gt;Software Challenges&lt;/h4&gt;
&lt;p&gt;In an earlier blog post, &lt;em&gt;&lt;strong&gt;&lt;a href=&quot;https://developer.hpe.com/blog/it%E2%80%99s-all-fun-and-games-at-the-hack-shack/&quot;&gt;It’s all Fun and Games at the Hack Shack&lt;/a&gt;&lt;/strong&gt;&lt;/em&gt;, we went into detail about our Hack Shack challenges. These role-based software challenges, which you’ll find listed in the HPE Discover 2022 catalog as session HSC5227, are available daily at the Hack Shack. Those who participate in these challenges and enter the provided token key from June 28-30th can receive an HPE-branded t-shirt. Take any of our software challenges on Tuesday, June 28th or Wednesday, June 29th and be entered to win one of 6 grand prizes. One grand prize winner will be selected for each challenge. Participants must be present at the Wednesday evening 5:00-6:00pm Hack Shack Celebration to win a grand prize. (For complete details, please refer to the &lt;a href=&quot;https://developer.hpe.com/hackshack/hpediscover2022-swchallenges-terms-conditions/&quot;&gt;Terms &amp;#x26; Conditions&lt;/a&gt;.)&lt;/p&gt;
&lt;h2&gt;Join the Party!&lt;/h2&gt;
&lt;p&gt;Wednesday evening from 5:00-6:00pm, you’re invited to join us in the Hack Shack for our Celebration Party. Attendees will be treated to food (pizza, yay!) and refreshments (including beer) and get the opportunity to go home with some additional swag. Sit down, relax, and listen to our master of ceremonies, Robert Christiansen, VP of Strategy in the CTO Office at Hewlett Packard Enterprise.&lt;/p&gt;
&lt;p&gt;You’ll also meet our hardworking HPE Developer Community members who deliver the program, as well as our special guests, Kirk Bresniker and Dr. Eng Lim Goh. Kirk Bresniker, HPE VP and HPE Labs Chief Architect, will hand out the grand prizes for our software challenges. Dr. Eng Lim Goh, Senior VP and Chief Technologist for AI, will announce the prizes for our Treasure Hunt winners.&lt;/p&gt;
&lt;h2&gt;Don’t Miss Out&lt;/h2&gt;
&lt;p&gt;Build your agenda today and take advantage of the many things the Hack Shack offers to developers, designers, data scientists, data/ML engineers, open source advocates, and IT technologists.&lt;/p&gt;
&lt;link rel=&quot;stylesheet&quot; href=&quot;https://www.w3schools.com/w3css/4/w3.css&quot;&gt;
&lt;div class=&quot;w3-container w3-center w3-margin-bottom&quot;&gt;
  &lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://attend.hpe.com/discover2022/index.cfm?iLangID=1&quot;&gt;Register now&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;/div&gt;</content:encoded></item><item><title><![CDATA[Create a General-Purpose Kubeconfig File in HPE GreenLake for Private Cloud Enterprise]]></title><description><![CDATA[Editor’s Note – NAME CHANGE: HPE GreenLake for Containers is now part of HPE GreenLake for Private Cloud Enterprise. Introduction HPE…]]></description><link>https://developer.hpe.com/create-a-general-purpose-kubeconfig-file-in-hpe-greenlake-for-containers/</link><guid isPermaLink="false">https://developer.hpe.com/create-a-general-purpose-kubeconfig-file-in-hpe-greenlake-for-containers/</guid><pubDate>Fri, 20 May 2022 07:02:51 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note – NAME CHANGE: HPE GreenLake for Containers is now part of HPE GreenLake for Private Cloud Enterprise.&lt;/strong&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/greenlake/containers.html&quot;&gt;HPE GreenLake for Containers&lt;/a&gt;, one of the HPE GreenLake Cloud services, is built on the CNCF certified &lt;a href=&quot;https://www.hpe.com/us/en/software/ezmeral-runtime.html&quot;&gt;HPE Ezmeral Runtime Enterprise&lt;/a&gt; and deployed as an enterprise-grade container management service using open source Kubernetes. The HPE GreenLake for Containers service is accessed and managed through one portal, called HPE GreenLake Central. The HPE GreenLake Central dashboard allows you to open the Clusters screen to create Kubernetes clusters using cluster blueprints, view details about existing clusters, and launch the HPE Ezmeral Runtime Enterprise dashboard, where you can view the status of all Kubernetes services and resource utilization across all clusters. The HPE Ezmeral Runtime Enterprise dashboard also allows you to download the kubectl binary, together with the kubeconfig file, to access and deploy applications using the client script. In this blog post, I will describe issues that are associated with this specific kubeconfig file downloaded from the dashboard and show you how to get around them.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/hpe-ecp-dashboard.png&quot; alt=&quot;&quot; title=&quot;HPE ECP Dashboard&quot;&gt;&lt;/p&gt;
&lt;p&gt;The kubeconfig file that can be downloaded from the HPE Ezmeral Runtime Enterprise dashboard can introduce a number of issues:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;The kubeconfig file is tied to the user who logs in to HPE GreenLake for Containers. Many use cases rely on simple scripts, such as Bash scripts, that call kubectl or a client library from outside the cluster and use the kubeconfig file to access the Kubernetes cluster. These scripts are not tied to any particular user. Providing a kubeconfig file that’s tied to your user is not considered a clean design because each user may have different privileges, and sharing a kubeconfig file that carries that user’s access to the cluster may violate the &lt;a href=&quot;https://en.wikipedia.org/wiki/Principle_of_least_privilege&quot;&gt;Principle of Least Privilege&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Since launching from HPE GreenLake Central to the HPE Ezmeral Runtime Enterprise is configured through SAML SSO, a session token is fetched and added to the kubeconfig file each time you launch the HPE Ezmeral Runtime Enterprise dashboard. With HPE GreenLake for Containers, the session token is configured to expire after an hour. You will be unable to use the downloaded kubeconfig file to access the cluster after the token expires. You will have to relaunch the dashboard and once again download the kubeconfig file.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The kubeconfig file is generated to include some specific commands to show version, authenticate, and refresh the HPE Ezmeral Runtime Enterprise environment. The standard kubectl tool installed from the &lt;a href=&quot;https://kubernetes.io/docs/tasks/tools/&quot;&gt;official Kubernetes site&lt;/a&gt; does not work with this kubeconfig file. You have to download the &lt;code&gt;HPE kubectl plugin&lt;/code&gt;, available from the same dashboard, and use it together with kubectl and the kubeconfig file. This causes issues because some services and tools, such as &lt;code&gt;Azure DevOps&lt;/code&gt; and &lt;code&gt;Jenkins&lt;/code&gt;, use the standard kubectl to access and deploy applications to the Kubernetes clusters.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;You need a way to have a kubeconfig file that is not tied to a specific user and has the right set of permissions to access the Kubernetes cluster. It should work permanently with the standard kubectl tool. This blog post walks you through the process of creating such a general-purpose kubeconfig file that allows you to access and deploy applications to the Kubernetes cluster in HPE GreenLake for Containers. By using these instructions, you will be able to create a kubeconfig file that can be used by any external scripts specific to your CI/CD pipeline setup and have them work with the Kubernetes cluster.&lt;/p&gt;
&lt;h2&gt;Prerequisites&lt;/h2&gt;
&lt;p&gt;You need to download the kubectl binary, together with the HPE kubectl plugin and the kubeconfig file, from the launched HPE Ezmeral Runtime Enterprise dashboard. The downloaded kubectl binary and its plugin need to be set up in your environment. To simplify the setup process, you should export the environment variable &lt;code&gt;KUBECONFIG&lt;/code&gt; and point it to the downloaded kubeconfig file. With this setup in place, you can access the Kubernetes cluster in HPE GreenLake for Containers.&lt;/p&gt;
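&lt;p&gt;As a minimal sketch of that setup, assuming the downloaded files were saved to your current working directory (the file names and paths below are only illustrative), the environment preparation might look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Illustrative paths; adjust to wherever you saved the downloads
export PATH=&quot;$PWD:$PATH&quot;             # directory containing the downloaded kubectl and HPE kubectl plugin
export KUBECONFIG=&quot;$PWD/kubeconfig&quot;  # kubeconfig file downloaded from the dashboard

# Quick sanity check that the cluster is reachable with your user credentials
kubectl get pods
&lt;/code&gt;&lt;/pre&gt;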
&lt;p&gt;Your user account should also have permissions to create and update the following resources in the Kubernetes cluster:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Kubernetes Service Account(s)&lt;/li&gt;
&lt;li&gt;Kubernetes Roles &amp;#x26; RoleBindings&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Setup Details&lt;/h2&gt;
&lt;h3&gt;Create a Kubernetes Service Account&lt;/h3&gt;
&lt;p&gt;You can use the following yaml manifest file to create a service account in the Kubernetes cluster. Replace the name &lt;code&gt;cfe-demo-sa&lt;/code&gt; with your service account name.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;# serviceaccount.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cfe-demo-sa
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Run the following commands to create the service account and verify that the service account has been created:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ kubectl apply -f serviceaccount.yaml 
serviceaccount/cfe-demo-sa created

$ kubectl get serviceaccounts cfe-demo-sa 
NAME          SECRETS   AGE
cfe-demo-sa   1         24s
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Create a Role&lt;/h3&gt;
&lt;p&gt;After you have the service account, you can create a role with a set of permissions that represent the access rights that you want for your scripts.&lt;/p&gt;
&lt;p&gt;Here is the yaml manifest file to create a role. Replace the &lt;code&gt;cfe-demo-role&lt;/code&gt; with your role name.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;# role.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cfe-demo-role
rules:
- apiGroups:
  - &quot;&quot;
  resources:
  - bindings
  - podtemplates
  - replicationcontrollers
  - pods
  - services
  - serviceaccounts
  - endpoints
  - persistentvolumeclaims
  - events
  - configmaps
  - secrets
  - pods/exec
  - pods/log
  - pods/portforward
  verbs:
  - &apos;*&apos;
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - roles
  - rolebindings
  verbs:
  - &apos;*&apos;
- apiGroups:
  - apps
  resources:
  - controllerrevisions
  - statefulsets
  - deployments
  - replicasets
  - daemonsets
  verbs:
  - &apos;*&apos;
- apiGroups:
  - autoscaling
  resources:
  - horizontalpodautoscalers
  verbs:
  - &apos;*&apos;
- apiGroups:
  - batch
  resources:
  - cronjobs
  - jobs
  verbs:
  - &apos;*&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;a href=&quot;https://kubernetes.io/docs/reference/access-authn-authz/rbac/&quot;&gt;Using RBAC Authorization&lt;/a&gt; section of the Kubernetes documentation provides details on how to configure the Role resource. Please carefully review the permissions so that they grant only the access rights your scripts actually need, in order to comply with the Principle of Least Privilege.&lt;/p&gt;
&lt;p&gt;Run the following commands to create the role and verify that the role has been created:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ kubectl apply -f role.yaml 
role.rbac.authorization.k8s.io/cfe-demo-role created

$ kubectl get role cfe-demo-role 
NAME            CREATED AT
cfe-demo-role   2022-05-19T20:51:57Z
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Grant Permissions to Service Account&lt;/h3&gt;
&lt;p&gt;You now create the RoleBinding to bind the role to the service account.&lt;/p&gt;
&lt;p&gt;Here is the manifest file to create the RoleBinding that binds the role &lt;code&gt;cfe-demo-role&lt;/code&gt; to the service account &lt;code&gt;cfe-demo-sa&lt;/code&gt;. Replace those names with the names in your environment.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;# rolebinding.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cfe-demo-rb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cfe-demo-role # Should match name of Role
subjects:
- kind: ServiceAccount
  name: cfe-demo-sa # Should match service account name
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Run the following commands to create the rolebinding and verify that the rolebinding has been created:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ kubectl apply -f rolebinding.yaml 
rolebinding.rbac.authorization.k8s.io/cfe-demo-rb created

$ kubectl get rolebindings cfe-demo-rb 
NAME          ROLE                 AGE
cfe-demo-rb   Role/cfe-demo-role   19s
&lt;/code&gt;&lt;/pre&gt;
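&lt;p&gt;Before generating a kubeconfig file, you may want to confirm that the binding grants what you expect. One way to check, sketched below with the sample names from this post (it assumes your own user is allowed to impersonate service accounts, and uses the namespace &lt;code&gt;cfe-demo-cluster&lt;/code&gt; shown later in the service account description), is &lt;code&gt;kubectl auth can-i&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Probe a permission granted by the Role while impersonating the service account (expect &quot;yes&quot;)
kubectl auth can-i create deployments --as=system:serviceaccount:cfe-demo-cluster:cfe-demo-sa

# Cluster-scoped resources are not covered by the namespaced Role (expect &quot;no&quot;)
kubectl auth can-i create namespaces --as=system:serviceaccount:cfe-demo-cluster:cfe-demo-sa
&lt;/code&gt;&lt;/pre&gt;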
&lt;h3&gt;Extract Service Account Token&lt;/h3&gt;
&lt;p&gt;You can now check the secret token in the created service account and extract the token field by running the following commands. The token is a randomized string; the output below shows only a snippet of it.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ kubectl describe serviceaccounts cfe-demo-sa 
Name:                cfe-demo-sa
Namespace:           cfe-demo-cluster
Labels:              &amp;#x3C;none&gt;
Annotations:         &amp;#x3C;none&gt;
Image pull secrets:  &amp;#x3C;none&gt;
Mountable secrets:   cfe-demo-sa-token-2zlzf
Tokens:              cfe-demo-sa-token-2zlzf
Events:              &amp;#x3C;none&gt;

$ kubectl describe secrets cfe-demo-sa-token-2zlzf 
Name:         cfe-demo-sa-token-2zlzf
Namespace:    cfe-demo-cluster
Labels:       &amp;#x3C;none&gt;
Annotations:  kubernetes.io/service-account.name: cfe-demo-sa
              kubernetes.io/service-account.uid: a467f9bd-655d-413f-ad77-5156d03d2322

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1066 bytes
namespace:  16 bytes
token:      iIsImtpZCI6IjA2YnhmSVZrVDRGWnBab0VOYXhnWFBTTE1WWmptUm40eER

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note that if you use &lt;code&gt;-o yaml&lt;/code&gt; instead of &lt;code&gt;describe&lt;/code&gt; in the commands, you get a base64-encoded version of the token. You must decode it before you use it.&lt;/p&gt;
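&lt;p&gt;For example, a one-line way to extract and decode the token (using the secret name from the output above) looks like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Read the base64-encoded token field and decode it in one step
TOKEN=$(kubectl get secret cfe-demo-sa-token-2zlzf -o jsonpath=&apos;{.data.token}&apos; | base64 -d)
&lt;/code&gt;&lt;/pre&gt;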
&lt;p&gt;If you access the Kubernetes API directly, e.g., from &lt;code&gt;curl&lt;/code&gt;, you can use the token as the bearer token for the authorization header.&lt;/p&gt;
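&lt;p&gt;As a small illustration of such a direct API call (the namespace matches the sample above, and &lt;code&gt;--insecure&lt;/code&gt; is used only for brevity; in practice you would pass the cluster CA certificate instead):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# API server address taken from the current kubeconfig context
APISERVER=$(kubectl config view --minify -o jsonpath=&apos;{.clusters[0].cluster.server}&apos;)

# Use the decoded service account token as the bearer token to list pods
curl --insecure -H &quot;Authorization: Bearer ${TOKEN}&quot; &quot;${APISERVER}/api/v1/namespaces/cfe-demo-cluster/pods&quot;
&lt;/code&gt;&lt;/pre&gt;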
&lt;p&gt;However, if you have your scripts running outside the cluster that uses kubectl or a client library to access the Kubernetes cluster, you need the kubeconfig file to load the configs from. Follow the instructions in the next section to create such a kubeconfig file.&lt;/p&gt;
&lt;h3&gt;Create a Kubeconfig File&lt;/h3&gt;
&lt;p&gt;Here is a shell script to create a kubeconfig file using the secret token of the service account. Update the variables in the script header to match your environment.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# create-kubeconfig.sh

# Update those variables to match your environment
SERVICE_ACCOUNT_NAME=&quot;cfe-demo-sa&quot;
CONTEXT=$(kubectl config current-context)
NEW_CONTEXT=&quot;cfe-demo-context&quot;
TOKEN_USER=&quot;cfe-token-user&quot;
KUBECONFIG_FILE=&quot;kubeconfig-sa&quot;


# Extract service account token
SECRET_NAME=$(kubectl get serviceaccount ${SERVICE_ACCOUNT_NAME} --context ${CONTEXT}  -o jsonpath=&apos;{.secrets[0].name}&apos;)
TOKEN_DATA=$(kubectl get secret ${SECRET_NAME}  --context ${CONTEXT}  -o jsonpath=&apos;{.data.token}&apos;)
TOKEN=$(echo ${TOKEN_DATA} | base64 -d)

#Create a general-purpose kubeconfig file
kubectl config view --raw &gt; tmp.raw
kubectl --kubeconfig tmp.raw config use-context ${CONTEXT}
kubectl --kubeconfig tmp.raw config view --flatten --minify &gt; tmp.min
kubectl --kubeconfig tmp.min config rename-context ${CONTEXT} ${NEW_CONTEXT}
kubectl --kubeconfig tmp.min config set-credentials ${TOKEN_USER} --token ${TOKEN}
kubectl --kubeconfig tmp.min config set-context ${NEW_CONTEXT} --user ${TOKEN_USER}
kubectl --kubeconfig tmp.min config view --flatten --minify &gt; ${KUBECONFIG_FILE}

# Cleanup tmp
rm tmp.raw
rm tmp.min

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After running the script, a general-purpose kubeconfig file &lt;code&gt;kubeconfig-sa&lt;/code&gt; is created. By exporting the kubeconfig file as the environment variable &lt;code&gt;KUBECONFIG&lt;/code&gt;, you can access the Kubernetes cluster and check all the resources deployed in the cluster.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ bash create-kubeconfig.sh
Switched to context &quot;fab-zero-cfe-demo-cluster-cfe-demo-cluster-guoping.jia@hpe.com&quot;.
Context &quot;fab-zero-cfe-demo-cluster-cfe-demo-cluster-guoping.jia@hpe.com&quot; renamed to &quot;cfe-demo-context&quot;.
User &quot;cfe-token-user&quot; set.
Context &quot;cfe-demo-context&quot; modified.

$ export KUBECONFIG=kubeconfig-sa

$ kubectl get all
No resources found in cfe-demo-cluster namespace.
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;This blog post shows you how to create a general-purpose kubeconfig file using a service account. This kubeconfig file is not tied to any specific user. It is bound to a set of permissions carefully chosen for the access your scripts need in the Kubernetes cluster. The created kubeconfig file works permanently with both the kubectl binary downloaded from the HPE GreenLake for Containers dashboard and the standard one installed from the official Kubernetes site. This allows you to use it in any Kubernetes client scripts, especially in your Kubernetes &lt;em&gt;CI/CD&lt;/em&gt; pipeline.&lt;/p&gt;
&lt;p&gt;Note that a service account token does not expire and remains valid as long as the service account exists. As a security best practice, you should rotate it regularly: generate a new kubeconfig file from a newly created service account and then revoke access for the old service account. This is especially important when you use the kubeconfig file to access the Kubernetes cluster from outside the cluster. You can refer to the &lt;a href=&quot;https://kubernetes.io/docs/concepts/security/rbac-good-practices/&quot;&gt;Kubernetes RBAC Good Practices&lt;/a&gt; to understand the risks of using RBAC and the general good practices that reduce those risks.&lt;/p&gt;
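&lt;p&gt;A minimal rotation sketch, reusing the role and script from this post (the new service account name is only illustrative), could look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# 1. Create a replacement service account and bind it to the existing role
kubectl create serviceaccount cfe-demo-sa-new
kubectl create rolebinding cfe-demo-rb-new --role=cfe-demo-role --serviceaccount=cfe-demo-cluster:cfe-demo-sa-new

# 2. Regenerate the kubeconfig after setting SERVICE_ACCOUNT_NAME=&quot;cfe-demo-sa-new&quot; in the script
bash create-kubeconfig.sh

# 3. Revoke the old service account; its token stops working immediately
kubectl delete serviceaccount cfe-demo-sa
&lt;/code&gt;&lt;/pre&gt;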
&lt;p&gt;Please check the &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00092451en_us&amp;#x26;page=HPE-GreenLake-for-containers.html&quot;&gt;HPE GreenLake Central User Guide&lt;/a&gt; for more details about the HPE GreenLake for Containers.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Overview of the Platform Level Data Model for Redfish® Device Enablement Standard]]></title><description><![CDATA[This blog post has been moved to the Server Management Portal.]]></description><link>https://developer.hpe.com/overview-of-the-platform-level-data-model-for-redfish®-device-enablement-standard/</link><guid isPermaLink="false">https://developer.hpe.com/overview-of-the-platform-level-data-model-for-redfish®-device-enablement-standard/</guid><pubDate>Thu, 12 May 2022 16:04:30 GMT</pubDate><content:encoded>&lt;br&gt;
&lt;p&gt;&lt;big&gt;This blog post has been moved to the &lt;a href=&quot;https://servermanagementportal.ext.hpe.com/docs/references_and_material/blogposts/pldm/pldm_rde/pldm_rde&quot;&gt;Server Management Portal&lt;/a&gt;.&lt;/big&gt;&lt;/p&gt;
&lt;br&gt;</content:encoded></item><item><title><![CDATA[How to Set Up Credentials to Authenticate a Docker Registry in HPE GreenLake for Private Cloud Enterprise]]></title><description><![CDATA[Editor’s Note – NAME CHANGE: HPE GreenLake for Containers is now part of HPE GreenLake for Private Cloud Enterprise. Introduction HPE…]]></description><link>https://developer.hpe.com/how-to-set-up-credentials-to-authenticate-container-registry-in-hpe-greenlake-for-containers/</link><guid isPermaLink="false">https://developer.hpe.com/how-to-set-up-credentials-to-authenticate-container-registry-in-hpe-greenlake-for-containers/</guid><pubDate>Tue, 10 May 2022 08:20:45 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note – NAME CHANGE: HPE GreenLake for Containers is now part of HPE GreenLake for Private Cloud Enterprise.&lt;/strong&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/greenlake/containers.html&quot;&gt;&lt;em&gt;HPE GreenLake for Containers&lt;/em&gt;&lt;/a&gt;, one of HPE GreenLake Cloud Services, is an HPE-designed, implemented, owned, and operated private cloud deployed at a customer site. It is built on &lt;a href=&quot;https://www.hpe.com/us/en/software/ezmeral-runtime.html&quot;&gt;HPE Ezmeral Runtime Enterprise&lt;/a&gt;, featuring  open source Kubernetes container management. Customers can benefit from the use of HPE GreenLake for Containers by having a powerful underlying container-based infrastructure for both cloud-native apps and monolithic apps, and enjoy as-a-Service ease-of-use and economics. When working with Docker images, one can run into some issues, as Docker&apos;s image download rate limit can cause unexpected errors. In this blog post, I will walk you through the process of setting up a secret using the credentials for your Docker subscription in the Kubernetes cluster to avoid this issue.&lt;/p&gt;
&lt;p&gt;HPE GreenLake for Containers is configured to use a gateway host that acts as a proxy server, carrying client requests to the application service endpoints deployed in the Kubernetes clusters. The gateway host maps the private IP endpoints of services running inside the Kubernetes clusters to externally accessible IP addresses and ports. It provides better security by exposing only the gateway host IP address to the public while keeping all the others behind the firewall. However, when you create your application from a Docker image in the Kubernetes cluster, your application pods may get stuck in the error state &lt;em&gt;&lt;code&gt;ErrImagePull&lt;/code&gt;&lt;/em&gt;. Below is a sample &lt;code&gt;nginx&lt;/code&gt; application deployment to the cluster and the received error message in the pod events:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ kubectl run cfe-nginx --image=nginx
pod/cfe-nginx created

$ kubectl get pods
NAME        READY   STATUS         RESTARTS   AGE
cfe-nginx   0/1     ErrImagePull   0          5s


$ kubectl describe pods cfe-nginx 
Name:         cfe-nginx
...
Events:
  Type     Reason     Age                            From                                                             Message
  ----     ------     ----                           ----                                                             -------
  Normal   Scheduled  &amp;#x3C;invalid&gt;                      default-scheduler                                                Successfully assigned cfe-demo-cluster/cfe-nginx to k8s-cfe-demo-cluster-worker-67f75-24jmj.glhc-hpe.local
  Warning  Failed     &amp;#x3C;invalid&gt;                      kubelet, k8s-cfe-demo-cluster-worker-67f75-24jmj.glhc-hpe.local  Failed to pull image &quot;nginx&quot;: rpc error: code = Unknown desc = failed to pull and unpack image &quot;docker.io/library/nginx:latest&quot;: failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:19da26bd6ef0468ac8ef5c03f01ce1569a4dbfb82d4d7b7ffbd7aed16ad3eb46: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     &amp;#x3C;invalid&gt;                      kubelet, k8s-cfe-demo-cluster-worker-67f75-24jmj.glhc-hpe.local  Error: ErrImagePull
  Normal   BackOff    &amp;#x3C;invalid&gt;                      kubelet, k8s-cfe-demo-cluster-worker-67f75-24jmj.glhc-hpe.local  Back-off pulling image &quot;nginx&quot;
  Warning  Failed     &amp;#x3C;invalid&gt;                      kubelet, k8s-cfe-demo-cluster-worker-67f75-24jmj.glhc-hpe.local  Error: ImagePullBackOff
  Normal   Pulling    &amp;#x3C;invalid&gt; (x2 over &amp;#x3C;invalid&gt;)  kubelet, k8s-cfe-demo-cluster-worker-67f75-24jmj.glhc-hpe.local  Pulling image &quot;nginx&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The above issue is caused by &lt;a href=&quot;https://docs.docker.com/docker-hub/download-rate-limit/&quot;&gt;the Docker policy changes for downloading images&lt;/a&gt;. In particular, for anonymous users, the image download rate limit is set to 100 pulls per 6 hours per IP address. No matter who creates a new application from a Docker image, the Kubernetes cluster downloads the image as an anonymous user, which counts toward this rate limit on the same gateway host IP. Given that other developers are using the same Kubernetes cluster, together with many &lt;em&gt;ArgoCD&lt;/em&gt; jobs configured to run in the backend, the limit can eventually be reached and the &lt;em&gt;&lt;code&gt;ErrImagePull&lt;/code&gt;&lt;/em&gt; message pops up.&lt;/p&gt;
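&lt;p&gt;If you want to see how many anonymous pulls remain for the gateway IP, the Docker rate-limit documentation linked above describes querying the registry for the special &lt;code&gt;ratelimitpreview/test&lt;/code&gt; image and reading the rate-limit headers. A rough sketch of that check (it assumes &lt;code&gt;jq&lt;/code&gt; is installed, and the exact endpoints may change over time):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Request an anonymous token scoped to the rate-limit preview image
TOKEN=$(curl -s &quot;https://auth.docker.io/token?service=registry.docker.io&amp;#x26;scope=repository:ratelimitpreview/test:pull&quot; | jq -r .token)

# Inspect the ratelimit-limit and ratelimit-remaining response headers
curl -s --head -H &quot;Authorization: Bearer ${TOKEN}&quot; https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit
&lt;/code&gt;&lt;/pre&gt;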
&lt;p&gt;To deal with this situation, the following sections show you in detail how to set up a secret in the Kubernetes cluster using the credentials for your Docker subscription. The cluster then uses the secret to authenticate to your Docker account and pull the image as an authenticated user. The image download will then count against the individual limit of your Docker subscription instead of the 100 pulls shared by all anonymous users of the cluster behind the same gateway IP.&lt;/p&gt;
&lt;h2&gt;Prerequisites&lt;/h2&gt;
&lt;p&gt;You need to have the following credentials for your Docker subscription, either a personal or a paid one:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Docker Username&lt;/li&gt;
&lt;li&gt;Docker Password or Access Token&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Personal Docker subscription credentials allow you to log in to Docker as an authenticated user, for which the image pull rate limit is 200 pulls per 6-hour period. Users with a paid Docker subscription have no image download limits, so a paid subscription could make sense at the team or company level. However, you do not really need to upgrade your Docker account to a paid one: 200 pulls per 6-hour period as an authenticated user should be enough for an individual developer working on application deployments to the Kubernetes cluster.&lt;/p&gt;
&lt;p&gt;Note that a Docker access token, rather than a Docker password, is the recommended credential. You can refer to &lt;a href=&quot;https://docs.docker.com/docker-hub/access-tokens/&quot;&gt;Docker’s Manage Access Tokens&lt;/a&gt; to create such an access token for your Docker subscription. A Docker access token provides several advantages over the password: it can&apos;t be used to perform any admin activity on your Docker account, it lets you check when it was last used, and it can easily be disabled or deleted. If you have two-factor authentication set up on your account, using the access token is the only way to authenticate to Docker.&lt;/p&gt;
&lt;h2&gt;Setup Details&lt;/h2&gt;
&lt;h3&gt;Create a Registry Secret&lt;/h3&gt;
&lt;p&gt;You can use the following command to create a registry secret using your Docker credentials:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ kubectl create secret docker-registry cfe-registry-key --docker-server=https://index.docker.io/v1/ --docker-username=&amp;#x3C;username&gt; --docker-password=&amp;#x3C;password&gt; --docker-email=&amp;#x3C;email&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Using &lt;code&gt;--docker-email&lt;/code&gt; is optional. The &lt;code&gt;cfe-registry-key&lt;/code&gt; is the sample secret name used in the setup process.&lt;/p&gt;
&lt;p&gt;You can verify that the registry secret has been created:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ kubectl get secrets cfe-registry-key 
NAME               TYPE                             DATA   AGE
cfe-registry-key   kubernetes.io/dockerconfigjson   1      2m11s
&lt;/code&gt;&lt;/pre&gt;
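&lt;p&gt;If you want to double-check what was stored (for example, that the right username made it into the secret), you can decode the generated &lt;code&gt;.dockerconfigjson&lt;/code&gt; entry. Note that the decoded output contains your credentials, so do not paste it anywhere public:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Decode the Docker config stored in the registry secret
kubectl get secret cfe-registry-key -o jsonpath=&apos;{.data.\.dockerconfigjson}&apos; | base64 -d
&lt;/code&gt;&lt;/pre&gt;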
&lt;h3&gt;Specify &lt;code&gt;imagePullSecrets&lt;/code&gt; on Pods&lt;/h3&gt;
&lt;p&gt;You can create pods or update existing ones to refer to the created registry secret by adding the following &lt;code&gt;imagePullSecrets&lt;/code&gt; section to the Pod definition:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;imagePullSecrets:
- name: cfe-registry-key
&lt;/code&gt;&lt;/pre&gt;
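&lt;p&gt;For example, a minimal pod manifest referencing the secret might look like the sketch below (the pod and image names are only illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;# pod-with-pull-secret.yaml (illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: cfe-nginx
spec:
  containers:
  - name: nginx
    image: nginx:latest
  imagePullSecrets:
  - name: cfe-registry-key   # must match the registry secret created above
&lt;/code&gt;&lt;/pre&gt;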
&lt;p&gt;With those pod manifest files, the Kubernetes cluster downloads the image as an authenticated user using the credentials from the registry secret. The image download will not count against the 100 download limit shared across all anonymous cluster users in the Kubernetes cluster.&lt;/p&gt;
&lt;p&gt;Although this setup works for most of your application deployments, it requires you to modify each manifest file to add the &lt;code&gt;imagePullSecrets&lt;/code&gt; section.&lt;/p&gt;
&lt;p&gt;Instead, you can use the following step to add the image pull secret to &lt;code&gt;service accounts&lt;/code&gt;. When you create new applications, the &lt;code&gt;imagePullSecrets&lt;/code&gt; section will be automatically injected and used when downloading the images. This setup makes sense especially when you have a paid Docker subscription, because other developers who use the same Kubernetes cluster can then benefit from unlimited image downloads for their application deployments.&lt;/p&gt;
&lt;h3&gt;Add &lt;code&gt;imagePullSecrets&lt;/code&gt; to Service Accounts&lt;/h3&gt;
&lt;p&gt;You can run the following command to modify the default service account for the namespace to use &lt;code&gt;imagePullSecrets&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ kubectl patch serviceaccount default -p &apos;{&quot;imagePullSecrets&quot;: [{&quot;name&quot;: &quot;cfe-registry-key&quot;}]}&apos;
serviceaccount/default patched
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can verify that the &lt;code&gt;imagePullSecrets&lt;/code&gt; section has been added to the service account:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ kubectl get serviceaccount default -o yaml
kind: ServiceAccount
metadata:

  name: default
...
...
imagePullSecrets:
- name: cfe-registry-key
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When you create a pod in the current namespace, you can verify that the pod has its &lt;code&gt;spec.imagePullSecrets&lt;/code&gt; field set automatically:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ kubectl run cfe-nginx --image=nginx 
pod/cfe-nginx created

$ kubectl get pods
NAME        READY   STATUS    RESTARTS   AGE
cfe-nginx   1/1     Running   0          4m28s


$ kubectl get pod cfe-nginx -o=jsonpath=&apos;{.spec.imagePullSecrets[0].name}{&quot;\n&quot;}&apos;
cfe-registry-key
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you deploy an application to a different namespace than the current one, you need to run the command &lt;code&gt;kubectl patch serviceaccount &lt;/code&gt; with the additional option &lt;code&gt;-n &amp;#x3C;namespace&gt;&lt;/code&gt; to add &lt;code&gt;imagePullSecrets&lt;/code&gt; to the default service account in the new namespace.&lt;/p&gt;
&lt;p&gt;Similarly, if you deploy an application to a namespace using a different service account than the default one, you need to replace &lt;code&gt;default&lt;/code&gt; in the command &lt;code&gt;kubectl patch serviceaccount &lt;/code&gt;  with the service account name to add &lt;code&gt;imagePullSecrets&lt;/code&gt; to this service account. By default, deploying an application to a namespace uses the default service account. You can define the &lt;code&gt;serviceAccountName&lt;/code&gt; in your manifest files to specify the service account you want to use.&lt;/p&gt;
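&lt;p&gt;As a short illustration, a deployment that should run under a non-default service account (here a hypothetical &lt;code&gt;cfe-deployer&lt;/code&gt; account that has already been patched with the image pull secret) would set &lt;code&gt;serviceAccountName&lt;/code&gt; in its pod template:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: cfe-nginx-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cfe-nginx
  template:
    metadata:
      labels:
        app: cfe-nginx
    spec:
      serviceAccountName: cfe-deployer   # hypothetical service account with imagePullSecrets attached
      containers:
      - name: nginx
        image: nginx:latest
&lt;/code&gt;&lt;/pre&gt;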
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Docker&apos;s image download rate limit has caused quite a bit of confusion for those using HPE GreenLake for Containers. This blog post describes how to set up a secret using the credentials for your Docker subscription to pull images for your application deployments. Once you follow this procedure, your application deployments will be able to download images without hitting the rate limit error anymore.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Understanding HPE GreenLake Central Identity & Access Management]]></title><description><![CDATA[Editor’s Note – NAME CHANGE: HPE GreenLake for Private Cloud is now part of HPE GreenLake for Private Cloud Enterprise. Introduction When…]]></description><link>https://developer.hpe.com/understanding-hpe-greenlake-identity-access-management/</link><guid isPermaLink="false">https://developer.hpe.com/understanding-hpe-greenlake-identity-access-management/</guid><pubDate>Tue, 10 May 2022 07:35:34 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note – NAME CHANGE: HPE GreenLake for Private Cloud is now part of HPE GreenLake for Private Cloud Enterprise.&lt;/strong&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h1&gt;Introduction&lt;/h1&gt;
&lt;p&gt;When configuring an HPE GreenLake Central tenant for diverse customer Identity and Access Management (IAM) needs, it is important to understand how the various features work, so you can take appropriate actions. Once you understand the features and how they work, you&apos;ll be able to arrange resources in an optimal way. It will also prevent the need to reconfigure anything in the future.&lt;/p&gt;
&lt;p&gt;With this in mind, I&apos;d like to begin by describing the various features of Identity and Access Management available in HPE GreenLake Central and how to perform common tasks associated with IAM configuration. I&apos;ll also include some simple steps you can follow when designing IAM configurations based on customer requirements. Finally, I&apos;ll present several fictitious customer setups and show you how each is configured.&lt;/p&gt;
&lt;h1&gt;IAM Definitions&lt;/h1&gt;
&lt;h2&gt;Users&lt;/h2&gt;
&lt;p&gt;All HPE GreenLake Central users start by getting an HPE GreenLake Central account and profile. Users are invited by a tenant administrator, or another user with appropriate permissions, to join the HPE GreenLake Central tenant. HPE can assist with user management and can configure your HPE GreenLake environment to use the Customer&apos;s Single Sign-On (SSO) mechanism through a custom Services engagement. &lt;strong&gt;NOTE: All users in a tenant have the same email domain name.&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;API Clients&lt;/h2&gt;
&lt;p&gt;API clients are nonhuman HPE GreenLake Central users for programmatic access to HPE GreenLake Central. They are assigned spaces and roles, similar to a regular user. &lt;/p&gt;
&lt;h2&gt;Tenants&lt;/h2&gt;
&lt;p&gt;A tenant is an isolated environment with unique users and workloads.  Within a tenant, users are invited to join the tenant by a tenant administrator or another user with the appropriate permissions.&lt;/p&gt;
&lt;h2&gt;User Groups&lt;/h2&gt;
&lt;p&gt;User groups are a named set of users that share a common job function or access requirements. &lt;/p&gt;
&lt;h2&gt;Spaces&lt;/h2&gt;
&lt;p&gt;Spaces enable you to grant access to a defined subset of resources within a tenant. Resources can be a part of multiple spaces. Access to resources in a space is managed through the assignment of users, user groups or API clients to roles in a space. There is a default space where users land automatically upon login if they are assigned appropriate roles for that space. The all resources space is a dynamic list of all resources in a tenant. &lt;/p&gt;
&lt;p&gt;Users can be granted access to multiple spaces.&lt;/p&gt;
&lt;h2&gt;Roles&lt;/h2&gt;
&lt;p&gt;Roles are a named set of permissions used to access resources. They are assigned to users, user groups or API clients. Roles grant permissions for a specific set of resources in a space.&lt;/p&gt;
&lt;p&gt;Roles are available for the services that are available within the tenant. The following table is incomplete but lists the most common roles and definitions.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Role&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Responsibility&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;HPE Consumption Analytics&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Consumption Analytics Viewer&lt;/td&gt;
&lt;td&gt;View cost information in HPE GreenLake Central (Customer only)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Consumption Analytics Contributor&lt;/td&gt;
&lt;td&gt;View cost information in HPE GreenLake Central and access HPE Consumption Analytics Platform (Customer only)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;HPE GreenLake Capacity Planning&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Capacity Planning Viewer&lt;/td&gt;
&lt;td&gt;Read capacity planning information&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;HPE GreenLake Billing&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Billing Contributor&lt;/td&gt;
&lt;td&gt;View billing information and edit downstream rates&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Billing Viewer&lt;/td&gt;
&lt;td&gt;View billing information&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Billing Usage Viewer&lt;/td&gt;
&lt;td&gt;View monthly charges card and report (usage information only - no cost information)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;HPE GreenLake for Private Cloud Enterprise: Virtual Machines&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Private Cloud Tenant Owner&lt;/td&gt;
&lt;td&gt;Administer HPE GreenLake for Private Cloud Enterprise: Virtual Machines dashboard&lt;br&gt;Manage scheduling and activity&lt;br&gt;Manage infrastructure&lt;br&gt;Manage provisioning&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Private Cloud Tenant Contributor&lt;/td&gt;
&lt;td&gt;Manage self-service VMs and app provisions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;HPE GreenLake for Private Cloud Enterprise: Containers on Virtual Machines&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Container Platform Cluster Owner&lt;/td&gt;
&lt;td&gt;Manage predefined and custom cluster blueprints, along with machine blueprints&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Container Platform Cluster Resource Contributor&lt;/td&gt;
&lt;td&gt;Manage resources within a cluster namespace&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;HPE GreenLake for ML Ops&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MLOps Admin and IAM Owner&lt;/td&gt;
&lt;td&gt;Install services and manage projects, resources, and users, and view sites information&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MLOps Project Admin&lt;/td&gt;
&lt;td&gt;Build, train, and deploy models, launch the AI/ML services&lt;br&gt;Corresponds to Project Administrator role in HPE Ezmeral Runtime Enterprise (previously known as HPE Ezmeral Container Platform)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MLOps Project Member&lt;/td&gt;
&lt;td&gt;To build, train, and deploy models, launch the AI/ML projects and consume the AI/ML services&lt;br&gt;Corresponds to Project Member role in HPE Ezmeral Runtime Enterprise (previously known as HPE Ezmeral Container Platform)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;HPE GreenLake for HPC&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;HPCaaS Admin&lt;/td&gt;
&lt;td&gt;Perform all actions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;HPCaaS Job Contributor&lt;/td&gt;
&lt;td&gt;Manage job and job-related information&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;HPCaaS Viewer&lt;/td&gt;
&lt;td&gt;View only&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;For more detail on roles, please refer to the HPE GreenLake Central User Guide which can be found here: &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00092451en_us&amp;#x26;page=index.html&quot;&gt;HPE GreenLake Central User Guide&lt;/a&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Owner roles apply to different aspects of Services. Some administrator Roles are only available to HPE Information Technology Operations Center (ITOC) and DevOps teams.&lt;/li&gt;
&lt;li&gt;Contributor roles also apply to services. These roles allow non-admin operations within a service.&lt;/li&gt;
&lt;li&gt;Viewer roles are &apos;read-only&apos; roles which can be used to allow users to access services but not to modify them in any way.&lt;/li&gt;
&lt;li&gt;Custom roles can also be defined. These allow arbitrary collections of permissions to be combined into one or more roles and can be assigned as normal.&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;Tenant On-Boarding&lt;/h1&gt;
&lt;p&gt;New tenants are requested via an HPE representative. The request includes an initial contact email address, and a new user account based on that address is associated with the new tenant. Once the tenant is created and the billing account is promoted into production, the contact receives an email invitation to activate their account. The new user can then log into HPE GreenLake Central and start setting up their spaces, user groups, role assignments, and so on. They can also invite other users to join the tenant and perform any other activities allowed by their specific user permissions.&lt;/p&gt;
&lt;h2&gt;Inviting Users&lt;/h2&gt;
&lt;p&gt;Once a tenant administrator is on-boarded, they can invite their users to join the tenant.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Access the User Management Service within the &lt;strong&gt;wrench&lt;/strong&gt; icon.&lt;/li&gt;
&lt;li&gt;Select the &lt;strong&gt;Users&lt;/strong&gt; tab&lt;/li&gt;
&lt;li&gt;Under the &lt;strong&gt;Actions&lt;/strong&gt; pull-down, choose &lt;em&gt;Invite User&lt;/em&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;img src=&quot;/img/invite-user.png&quot; width=&quot;480&quot; height=&quot;538&quot; alt=&quot;Invite User&quot;&gt;
&lt;p&gt;The invited user receives an email inviting them to join the tenant. Once the user activates their account, they can log into HPE GreenLake Central and switch to the tenant. The tenant administrator can then add the new user to the appropriate user groups. Note that a user only receives an invitation email if they have not already been invited to join a tenant.&lt;/p&gt;
&lt;h2&gt;Creating and modifying User Groups&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Access the User Management Service within the &lt;strong&gt;wrench&lt;/strong&gt; icon.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select the &lt;strong&gt;User Groups&lt;/strong&gt; tab&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To modify an existing user group, click on it from the list of user groups&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Members can be added by selecting the &lt;strong&gt;Members&lt;/strong&gt; tab and selecting &lt;em&gt;Add Members&lt;/em&gt; under the &lt;strong&gt;Actions&lt;/strong&gt; pull-down&lt;/li&gt;
&lt;li&gt;Members can be removed from the user group by clicking on the &lt;strong&gt;Trash&lt;/strong&gt; icon beside the member name&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To create a new user group, from the &lt;strong&gt;User Groups&lt;/strong&gt; tab, click on the &lt;em&gt;Create User Group&lt;/em&gt; button&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Enter a user group name and description&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Once the user group has been created it may be modified using the instructions above&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Creating and Modifying Spaces&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Access the Management Service within the &lt;strong&gt;wrench&lt;/strong&gt; icon.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select the &lt;strong&gt;Spaces&lt;/strong&gt; tab&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To modify an existing space, click on it from the list of spaces&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Users and user groups can be added by selecting the &lt;strong&gt;Assignments&lt;/strong&gt; tab and selecting &lt;em&gt;Create Assignment&lt;/em&gt; under the &lt;strong&gt;Actions&lt;/strong&gt; pull-down&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select appropriate role(s) for the space&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Users and user groups can be removed from the space by clicking on the &lt;strong&gt;Trash&lt;/strong&gt; icon beside the subject name&lt;/li&gt;
&lt;li&gt;Resources can be added and removed from the space by selecting the &lt;strong&gt;Resources&lt;/strong&gt; tab and selecting &lt;em&gt;Update Resources&lt;/em&gt; under the &lt;strong&gt;Actions&lt;/strong&gt; pull-down&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To create a new space, click on the &lt;strong&gt;Create Space&lt;/strong&gt; button&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Enter a space name and parent space&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Resources&lt;/strong&gt; by expanding the &lt;strong&gt;All Resources&lt;/strong&gt; list&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h1&gt;Let&apos;s Explore Some Fictitious Customer Examples&lt;/h1&gt;
&lt;h2&gt;Example 1: ACME Corp.&lt;/h2&gt;
&lt;h3&gt;Design&lt;/h3&gt;
&lt;p&gt;&lt;img src=&quot;/img/iam-document-example-1-tenant-1-1a.jpg&quot; alt=&quot;&quot; title=&quot;Example 1: ACME Corp.&quot;&gt;&lt;/p&gt;
&lt;p&gt;Customer ACME Corp. has users spread around the globe. The users&apos; email addresses all take the form of &lt;a href=&quot;mailto:xyz@acmecorp.com&quot;&gt;xyz@acmecorp.com&lt;/a&gt;. ACME Corp. has several departments that are also distributed around the globe. ACME Corp. wishes to use HPE GreenLake for Private Cloud Enterprise to create and manage virtual machines and containers on behalf of the various departments. They also wish to use HPE GreenLake for ML Ops to examine their internal data and perform AI operations on it, and to use HPE GreenLake for Containers to run a sales application.&lt;/p&gt;
&lt;p&gt;The ACME Corp. departments include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Office of CEO&lt;/li&gt;
&lt;li&gt;IT department&lt;/li&gt;
&lt;li&gt;Sales department&lt;/li&gt;
&lt;li&gt;R&amp;#x26;D department&lt;/li&gt;
&lt;li&gt;Data Science department&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Since all users in ACME Corp. have the same email address format, and since all departments wish to be able to use all resources, only one tenant is required. ACME Corp. wishes to be billed centrally for all services, so a single billing account is sufficient.&lt;/p&gt;
&lt;p&gt;Once the tenant is created, the tenant administrator can create user groups for each department, e.g.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Office of CEO users&lt;/li&gt;
&lt;li&gt;IT department users&lt;/li&gt;
&lt;li&gt;Sales department users&lt;/li&gt;
&lt;li&gt;R&amp;#x26;D department users&lt;/li&gt;
&lt;li&gt;Data Science department users&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Each user group consists of the users from the corresponding department:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/sales-department-users-members.png&quot; alt=&quot;&quot; title=&quot;Sales Department Users&quot;&gt;&lt;/p&gt;
&lt;p&gt;Next, the tenant administrator can create spaces for each department, e.g.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Office of CEO space&lt;/li&gt;
&lt;li&gt;IT department space&lt;/li&gt;
&lt;li&gt;Sales department space&lt;/li&gt;
&lt;li&gt;R&amp;#x26;D department space&lt;/li&gt;
&lt;li&gt;Data Science department space&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;When each space is created, appropriate resources are chosen and mapped to the space:&lt;/p&gt;
&lt;img src=&quot;/img/resource-selection.png&quot; width=&quot;600&quot; height=&quot;359&quot; alt=&quot;Resource Selection&quot;&gt;
&lt;h4&gt;Office of CEO&lt;/h4&gt;
&lt;p&gt;The Office of CEO space needs access to billing across the entire company. They may also use HPE GreenLake for Private Cloud Enterprise to run some virtual machines. Finally, they occasionally run some AI on internal data.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/office-of-ceo-space-resources.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The user group is assigned appropriate roles for the selected services:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/office-of-ceo-space-assignments.png&quot; alt=&quot;&quot; title=&quot;Office of CEO Space&quot;&gt;&lt;/p&gt;
&lt;p&gt;The space looks like this to the users:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/office-of-ceo-space-view.png&quot; alt=&quot;&quot; title=&quot;Office of CEO Space View&quot;&gt;&lt;/p&gt;
&lt;h4&gt;IT Department&lt;/h4&gt;
&lt;p&gt;The IT department space would need access to HPE GreenLake for Private Cloud Enterprise: Virtual Machines, HPE GreenLake for ML Ops and HPE GreenLake for Private Cloud Enterprise: Containers resources. This would allow users in this department to manage the resources running in both services. Users in this department require admin roles for the selected resources.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/it-department-space-resources.png&quot; alt=&quot;&quot; title=&quot;IT Department Space Resources&quot;&gt;&lt;/p&gt;
&lt;p&gt;The user group is assigned appropriate roles for the selected services:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/it-department-space-assignments.png&quot; alt=&quot;&quot; title=&quot;IT Department Space Assignments&quot;&gt;&lt;/p&gt;
&lt;p&gt;The space looks like this to the users:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/it-department-space-view.png&quot; alt=&quot;&quot; title=&quot;IT Department Space View&quot;&gt;&lt;/p&gt;
&lt;h4&gt;Sales Department&lt;/h4&gt;
&lt;p&gt;The sales department space runs a series of containers that provide a service to their field sales users. Since they maintain their own code, they have two clusters: Integration and Production. They do not use other services.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/sales-department-space-resources.png&quot; alt=&quot;&quot; title=&quot;Sales Department Space Resources&quot;&gt;&lt;/p&gt;
&lt;p&gt;The user group is assigned appropriate roles for the selected services:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/sales-department-space-assignments.png&quot; alt=&quot;&quot; title=&quot;Sales Department Space Assignments&quot;&gt;&lt;/p&gt;
&lt;p&gt;The space looks like this to the users:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/sales-department-space-view.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h4&gt;R&amp;#x26;D Department&lt;/h4&gt;
&lt;p&gt;The R&amp;#x26;D department accesses a series of virtual machines in the HPE GreenLake for Private Cloud Enterprise: Virtual Machines service. Since the service is managed by the IT department, these users do not need administrative rights.&lt;/p&gt;
&lt;h4&gt;Data Science Users&lt;/h4&gt;
&lt;p&gt;These users access several HPE GreenLake for ML Ops projects. The service is managed by the IT department.&lt;/p&gt;
&lt;h2&gt;Example 2: ABC Corp.&lt;/h2&gt;
&lt;h3&gt;Design&lt;/h3&gt;
&lt;p&gt;&lt;img src=&quot;/img/iam-document-example-2-tenant-1-1-.jpg&quot; alt=&quot;&quot; title=&quot;Example 2: ABC Corp&quot;&gt;&lt;/p&gt;
&lt;p&gt;ABC Corp. is a small company with three departments, A, B and C. Since the company is small, some employees work across departments. ABC Corp. uses multiple HPE GreenLake services, each with separate billing accounts. A single tenant is sufficient for the small number of employees of ABC Corp.&lt;/p&gt;
&lt;p&gt;The tenant administrator for ABC Corp can create user groups for each department, e.g.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Department A users&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Users: John, Mary&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Department B users&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Users: Mary&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Department C users&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Users: Robert&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;NOTE: User Mary is a member of both Department A and Department B user groups.&lt;/p&gt;
&lt;p&gt;Next, the tenant administrator can create spaces for each department and select billing account resources and user groups as appropriate, e.g.&lt;/p&gt;
&lt;p&gt;Department A space&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Billing resource HP-AMS-DMO-USA-99918&lt;/li&gt;
&lt;li&gt;Billing resource HP-AMS-DMO-USA-99919&lt;/li&gt;
&lt;li&gt;Department A users&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Department B space&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Billing resource HP-AMS-DMO-USA-99920&lt;/li&gt;
&lt;li&gt;Department B users&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Department C space&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Billing resource HP-AMS-DMO-USA-99918&lt;/li&gt;
&lt;li&gt;Billing resource HP-AMS-DMO-USA-99920&lt;/li&gt;
&lt;li&gt;Department C users&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In this way, user John, who is a member of Department A users, would only have access to the Department A space and would be able to access billing resources HP-AMS-DMO-USA-99918 and HP-AMS-DMO-USA-99919.&lt;/p&gt;
&lt;p&gt;User Mary, who is a member of both Department A users and Department B users, would have access to both the Department A space and the Department B space. User Mary would be able to see any of the billing resources by selecting the appropriate space.&lt;/p&gt;
&lt;p&gt;User Robert, who is a member of Department C users, would be able to access the Department C space with access to billing resources HP-AMS-DMO-USA-99918 and HP-AMS-DMO-USA-99920.&lt;/p&gt;
&lt;h2&gt;Example 3: Big and Small Corp&lt;/h2&gt;
&lt;h3&gt;Design&lt;/h3&gt;
&lt;p&gt;&lt;img src=&quot;/img/iam-document-example-3-tenant-1-1a.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/iam-document-example-3-tenant-2-1-.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/iam-document-example-3-tenant-3-1-.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Big and Small Corp. is a large multinational company with several divisions. Each division is located in a different region and manages its services separately. Big and Small (USA) is headquartered in the US and is the holding company for all US-based business. This division runs a large marketing service which has a web presence, hosted on a public cloud. Big and Small (USA) would like to replace the public cloud with HPE GreenLake for Private Cloud Enterprise and manage the charges for this service within the division.&lt;/p&gt;
&lt;p&gt;Big and Small (Europe) is headquartered in Berlin, Germany. This division designs and manufactures a wide range of products for the European market. To support the division, they currently own a large HPC cluster, which they would like to replace with HPE GreenLake for HPC. This division also would like to manage their HPE GreenLake expenses in a separate account.&lt;/p&gt;
&lt;p&gt;Finally, Big and Small (Japan) is an acquisition, based in Tokyo, Japan. This division had a previous relationship with HPE and already has a billing account, which they would like to retain. This division is responsible for future product development. They would like to use HPE GreenLake for ML Ops to develop innovative new products.&lt;/p&gt;
&lt;p&gt;Since some divisions of Big and Small Corp. came via acquisition, they have different email domains than the other divisions. Therefore, it is decided that each division should have a separate tenant. This will allow each division to manage its own HPE GreenLake services and billing accounts.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;NOTE: Unless the customer has multiple entities and/or multiple email domains, HPE recommends consolidating all resources under a single tenant. This example shows that multiple tenants/billing accounts are supported, but this approach is not generally recommended unless specifically required.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Each division has their own unique billing account:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Division&lt;/th&gt;
&lt;th&gt;Billing Account&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Big and Small (USA)&lt;/td&gt;
&lt;td&gt;HP-AMS-DMO-USA-99918&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Big and Small (Europe)&lt;/td&gt;
&lt;td&gt;HP-EMEA-DMO-DEU-99919&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Big and Small (Japan)&lt;/td&gt;
&lt;td&gt;HP-APJ-DMO-JPN-99920&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;A tenant for Big and Small (USA) is created. The main service in this tenant is HPE GreenLake for Private Cloud Enterprise. This is configured such that the metrics from this service are sent to the billing account &apos;HP-AMS-DMO-USA-99918&apos;.&lt;/p&gt;
&lt;p&gt;Each department has its own separate user group, e.g. Department A users. Users from the various departments are added to the appropriate user groups.&lt;/p&gt;
&lt;p&gt;A space, &apos;&lt;strong&gt;Main Space&lt;/strong&gt;&apos; is created within the IAM Service of this tenant and resources are added. In this case, the resource for the billing account HP-AMS-DMO-USA-99918 and also the resource for the HPE GreenLake for Private Cloud Enterprise are added. Finally, the user groups are assigned to the space with the appropriate roles.&lt;/p&gt;
&lt;p&gt;A tenant for Big and Small (Europe) is created. The main service in this tenant is HPE GreenLake for HPC. This is configured such that the metrics from this service are sent to the billing account &apos;HP-EMEA-DMO-DEU-99919&apos;.&lt;/p&gt;
&lt;p&gt;Each department has its own separate user group, e.g. Department D users. Users from the various departments are added to the appropriate user groups.&lt;/p&gt;
&lt;p&gt;A space &apos;&lt;strong&gt;Main Space&lt;/strong&gt;&apos; is created within the IAM Service of this tenant and resources are added. In this case, the resource for the billing account HP-EMEA-DMO-DEU-99919 and also the resource for the HPE GreenLake for HPC are added. Finally, the user groups are assigned to the space with the appropriate roles.&lt;/p&gt;
&lt;p&gt;A tenant for Big and Small (Japan) is created. The main service in this tenant is HPE GreenLake for ML Ops. This is configured such that the metrics from this service are sent to the billing account &apos;HP-APJ-DMO-JPN-99920&apos;.&lt;/p&gt;
&lt;p&gt;Each department has its own separate user group, e.g. Department F users. Users from the various departments are added to the appropriate user groups.&lt;/p&gt;
&lt;p&gt;A space &apos;&lt;strong&gt;Main Space&lt;/strong&gt;&apos; is created within the IAM service of this tenant and resources are added. In this case, the resource for the billing account HP-APJ-DMO-JPN-99920 and also the resource for the HPE GreenLake for ML Ops are added. Finally, the user groups are assigned to the space with the appropriate roles.&lt;/p&gt;
&lt;h1&gt;Conclusion&lt;/h1&gt;
&lt;p&gt;As you can see, the key to configuring Identity and Access Management in a tenant is to understand the building blocks and make informed decisions about how to design IAM resources to match customer requirements. I hope these examples help you see how HPE GreenLake Central Identity and Access Management supports an almost unlimited number of possible configurations, giving you the flexibility to match diverse customer requirements.&lt;/p&gt;
&lt;p&gt;Please feel free to reach out directly to me on the &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;HPE Developer Slack Workspace&lt;/a&gt; in the &lt;a href=&quot;https://hpedev.slack.com/archives/C02EG5XFK8Q&quot;&gt;#hpe-greenlake&lt;/a&gt; channel.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Infrastructure-as-Code on HPE GreenLake using Terraform – Part 2 ]]></title><description><![CDATA[Editor’s Note – NAME CHANGE: HPE GreenLake for Private Cloud is now part of HPE GreenLake for Private Cloud Enterprise. The process of…]]></description><link>https://developer.hpe.com/infrastructure-as-code-on-hpe-greenlake-using-terraform-–-part-2/</link><guid isPermaLink="false">https://developer.hpe.com/infrastructure-as-code-on-hpe-greenlake-using-terraform-–-part-2/</guid><pubDate>Wed, 04 May 2022 13:13:17 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note – NAME CHANGE: HPE GreenLake for Private Cloud is now part of HPE GreenLake for Private Cloud Enterprise.&lt;/strong&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;The process of managing and provisioning computer data centers through machine-readable definition files, otherwise known as Infrastructure-as-Code (IaC), offers many significant benefits. It helps to increase operational agility, simplify management, reduce errors, and save cost. In this second post, I’ll explore how to extract more of the benefits of using IaC on HPE GreenLake through the use of Terraform.&lt;/p&gt;
&lt;h2&gt;Let’s recap&lt;/h2&gt;
&lt;p&gt;In &lt;a href=&quot;https://developer.hpe.com/blog/infrastructure-as-code-on-hpe-greenlake-using-terraform/&quot;&gt;my first blog&lt;/a&gt;, I covered HPE GreenLake with its Private Cloud Service and showed how to get started with Terraform and the Terraform provider for HPE GreenLake. In this post, I will start with the same VM configuration created in Part 1 and show you how to tap into more advanced functionality that’s provided by Terraform and the HPE GreenLake provider. If you’re coming in to this just now, you might want to follow the steps shown in my Part 1 blog post, up to the point (Terraform ready to apply) where the VM DidierTest1 is created in HPE GreenLake, as illustrated below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/terraform-greenlake-part2-blog-picture1-1.png&quot; alt=&quot;DidierTest1 VM as created in Part 1&quot; title=&quot;DidierTest1 VM as created in Part 1&quot;&gt;&lt;/p&gt;
&lt;p&gt;The corresponding Terraform configuration file showed the following:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;# Load HPE GreenLake terraform provider
terraform {
      required_providers {
         hpegl = {
            source  = &quot;hpe/hpegl&quot;
            version = &quot;0.3.17&quot;
         }
      }
   }

# Setup provider environment (location and space)
provider &quot;hpegl&quot; {
      vmaas {
         location   = &quot;HPE&quot;
         space_name = &quot;TerraForm Space&quot;
      }
   }

# Start retrieving resources
# Retrieve a cloud
data &quot;hpegl_vmaas_cloud&quot; &quot;cloud&quot; {
     name = &quot;HPE GreenLake VMaaS Cloud-Trial4 &quot;
   }

# And a few networks
data &quot;hpegl_vmaas_network&quot; &quot;blue_net&quot; {
     name = &quot;Blue-Network&quot;
   }

data &quot;hpegl_vmaas_network&quot; &quot;green_net&quot; {
     name = &quot;Green-network&quot;
   }

data &quot;hpegl_vmaas_cloud_folder&quot; &quot;compute_folder&quot; {
   cloud_id = data.hpegl_vmaas_cloud.cloud.id
   name     = &quot;ComputeFolder&quot;
   }

# Locate a resource pool
data &quot;hpegl_vmaas_resource_pool&quot; &quot;cl_resource_pool&quot; {
     cloud_id = data.hpegl_vmaas_cloud.cloud.id
     name = &quot;ComputeResourcePool&quot;
   }

# And a group
data &quot;hpegl_vmaas_group&quot; &quot;default_group&quot; {
  name = &quot;HPEDEV-HackShackTenant-Group&quot;
}

# Locate a plan
data &quot;hpegl_vmaas_plan&quot; &quot;g1_small&quot; {
     name = &quot;G1-Small&quot;
   }

# A layout
data &quot;hpegl_vmaas_layout&quot; &quot;vmware&quot; {
  name               = &quot;VMware VM with vanilla CentOS&quot;
  instance_type_code = &quot;glhc-vanilla-centos&quot;
}

# And a template
data &quot;hpegl_vmaas_template&quot; &quot;vanilla&quot; {
     name = &quot;vanilla-centos7-x86_64-09072020&quot;
   }

resource &quot;hpegl_vmaas_instance&quot; &quot;DidierTest1&quot; {
     name               = &quot;DidierTest1&quot;
     cloud_id           = data.hpegl_vmaas_cloud.cloud.id
     group_id           = data.hpegl_vmaas_group.default_group.id
     layout_id          = data.hpegl_vmaas_layout.vmware.id
     plan_id            = data.hpegl_vmaas_plan.g1_small.id
     instance_type_code = data.hpegl_vmaas_layout.vmware.instance_type_code
     network {
         id = data.hpegl_vmaas_network.green_net.id
     } 
     volume {
         name         = &quot;root_vol&quot;
         size         = 15
         datastore_id = &quot;auto&quot;
     }
     config {
         resource_pool_id = data.hpegl_vmaas_resource_pool.cl_resource_pool.id
         template_id      = data.hpegl_vmaas_template.vanilla.id
         no_agent         = true
         asset_tag        = &quot;vm_terraform&quot;
         folder_code      = data.hpegl_vmaas_cloud_folder.compute_folder.code
     }
     power = &quot;poweron&quot;
   }
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Keeping things in-sync&lt;/h2&gt;
&lt;p&gt;To understand what the current state of our configuration file is, let’s run another &lt;strong&gt;terraform plan&lt;/strong&gt; after the VM has materialized on HPE GreenLake.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;terraform plan
hpegl_vmaas_instance.DidierTest1: Refreshing state... [id=149]
 
Note: Objects have changed outside of Terraform
 
Terraform detected the following changes made outside of Terraform since the last &quot;terraform apply&quot;:
 
  # hpegl_vmaas_instance.DidierTest1 has changed
  ~ resource &quot;hpegl_vmaas_instance&quot; &quot;DidierTest1&quot; {
      ~ containers         = [
          ~ {
              ~ server         = [
                  - {
                      - compute_server_type = [
                          - {
                              - external_delete = true
                              - managed         = true
                              - name            = &quot;VMware Linux VM&quot;
                            },
                        ]
                      - date_created        = &quot;2022-03-16T16:28:08Z&quot;
                      - id                  = 155
                      - last_updated        = &quot;2022-03-16T16:29:11Z&quot;
                      - owner               = [
                          - {
                              - username = &quot;hpedev-hackshack-terraform&quot;
                            },
                        ]
                      - platform            = &quot;&quot;
                      - platform_version    = &quot;&quot;
                      - server_os           = []
                      - ssh_host            = &quot;172.17.70.27&quot;
                      - ssh_port            = 22
                      - visibility          = &quot;private&quot;
                    },
                  + {
                      + compute_server_type = [
                          + {
                              + external_delete = true
                              + managed         = true
                              + name            = &quot;VMware Linux VM&quot;
                            },
                        ]
                      + date_created        = &quot;2022-03-16T16:28:08Z&quot;
                      + id                  = 155
                      + last_updated        = &quot;2022-03-30T11:37:40Z&quot;
                      + owner               = [
                          + {
                              + username = &quot;hpedev-hackshack-terraform&quot;
                            },
                        ]
                      + platform            = &quot;&quot;
                      + platform_version    = &quot;&quot;
                      + server_os           = [
                          + {
                              + name = &quot;centOS 7 64-bit&quot;
                            },
                        ]
                      + ssh_host            = &quot;172.17.70.27&quot;
                      + ssh_port            = 22
                      + visibility          = &quot;private&quot;
                    },
                ]
                # (9 unchanged elements hidden)
            },
        ]
        id                 = &quot;149&quot;
        name               = &quot;DidierTest1&quot;
        # (9 unchanged attributes hidden)
 
      + config {
          + asset_tag        = &quot;vm_terraform&quot;
          + create_user      = false
          + folder_code      = &quot;group-v41&quot;
          + no_agent         = true
          + resource_pool_id = 2
          + template_id      = 573
        }
      - config {
          - asset_tag        = &quot;vm_terraform&quot; -&gt; null
          - folder_code      = &quot;group-v41&quot; -&gt; null
          - no_agent         = true -&gt; null
          - resource_pool_id = 2 -&gt; null
          - template_id      = 573 -&gt; null
        }
 
 
        # (2 unchanged blocks hidden)
    }
 
 
Unless you have made equivalent changes to your configuration, or ignored the relevant attributes using ignore_changes, the following plan may include actions to undo or respond to these
changes.
 
──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
 
No changes. Your infrastructure matches the configuration.
 
Your configuration already matches the changes detected above. If you&apos;d like to update the Terraform state to match, create and apply a refresh-only plan:
  terraform apply -refresh-only
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Terraform has now detected that there are additional configuration details (such as the IP address of the VM) that can be expressed about the infrastructure. It even proposes to synchronize the state using a &lt;strong&gt;terraform apply -refresh-only&lt;/strong&gt; command. We can follow this advice and bring our configuration state in sync with the backend environment.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;This is a refresh-only plan, so Terraform will not take any actions to undo these.

If you were expecting these changes then you can apply this plan to record the

updated values in the Terraform state without changing any remote objects.

Would you like to update the Terraform state to reflect these detected changes?

  Terraform will write these changes to the state without modifying any real infrastructure.

  There is no undo. Only &apos;yes&apos; will be accepted to confirm.

  Enter a value: yes

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Infrastructure lifecycle management&lt;/h2&gt;
&lt;p&gt;Once your infrastructure is created, it will need to evolve over time to cope with changes in workload or to allow for maintenance. Let’s look at a few scenarios.&lt;/p&gt;
&lt;h3&gt;Use case 1: Stop this VM&lt;/h3&gt;
&lt;p&gt;To start, let’s keep things simple. You might want to just turn off the VMs that are part of an infrastructure when you don’t need them to save cost or limit your carbon impact on the planet. If you paid attention to the current configuration file, you’ll see that we inserted a power statement when originally creating the VM. While &lt;em&gt;poweron&lt;/em&gt; is the only valid option when creating a new resource of type &lt;strong&gt;hpegl_vmaas_instance&lt;/strong&gt;, other values are available for lifecycle management, such as &lt;em&gt;poweroff&lt;/em&gt; and &lt;em&gt;suspend&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Locate the following section in your configuration file:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;     }
     power = &quot;poweron&quot;
   }
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And change it so that the power desired state is set to &lt;em&gt;poweroff&lt;/em&gt; as shown below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;     }
     power = &quot;poweroff&quot;
   }
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Save and run a &lt;strong&gt;terraform apply&lt;/strong&gt; command, which will prompt you to accept the following change:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;Terraform will perform the following actions:

# hpegl_vmaas_instance.DidierTest1 will be updated in-place
  ~ resource &quot;hpegl_vmaas_instance&quot; &quot;DidierTest1&quot; {
        id                 = &quot;149&quot;
        name               = &quot;DidierTest1&quot;
      ~ power              = &quot;poweron&quot; -&gt; &quot;poweroff&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Pretty soon afterwards, you can check out the HPE GreenLake console and see that the VM status was changed to stopped.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/terraform-greenlake-part2-blog-picture2-2.png&quot; alt=&quot;VM is now stopped&quot; title=&quot;VM is now stopped&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Use case 2: Setting up tags and labels&lt;/h3&gt;
&lt;p&gt;That was straightforward, right? Now, restart the VM and try something else. You might want to test out tags and labels. As organizations scale their cloud environments, they often need to define methodologies for organizing resources. For this, they can leverage tags and labels. Tags consist of key/value pairs that make it easier to search for, or filter, your cloud resources based on categories relevant to the organization. Another option is to attach labels, which are simple values, to your VMs in order to keep track of what each VM is used for or who it belongs to.&lt;/p&gt;
&lt;p&gt;Why don’t you try adding metadata to the VM using tags and labels? According to the &lt;a href=&quot;https://github.com/HPE/terraform-provider-hpegl/blob/main/docs/resources/vmaas_instance.md&quot;&gt;documentation&lt;/a&gt;, you can add labels using the following syntax in your configuration file:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;# Using labels
labels = [&quot;hackshack&quot;, &quot;hpedev&quot;]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And you can add tags by inserting the following code snippet in your configuration file:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;# Using tags
tags = {
        team  = &quot;HPE Developer&quot;
        support = &quot;gold&quot;
  }
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Save and apply those changes with &lt;strong&gt;terraform apply&lt;/strong&gt;, wait a little and look at the VM details. You can see the labels and the tags in the capture below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/terraform-greenlake-part2-blog-picture3.png&quot; alt=&quot;tags and labels applied to VM&quot; title=&quot;tags and labels applied to VM&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Use case 3: Get me more disk space&lt;/h3&gt;
&lt;p&gt;Another typical use case would be to add another disk to a VM, say a data volume for application usage. The syntax for this is the same as you used to create the VM with its &lt;em&gt;root_vol&lt;/em&gt;, already visible in the Storage details of the VM:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/terraform-greenlake-part2-blog-picture4.png&quot; alt=&quot;VM was created with one disk: root_vol&quot; title=&quot;VM was created with one disk: root_vol&quot;&gt;&lt;/p&gt;
&lt;p&gt;Go ahead and add the following code snippet to your configuration file:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;  # Add another volume
  
  volume {
         name         = &quot;data_vol&quot;
         size         = 25
         datastore_id = &quot;auto&quot;
     }
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Save the file, apply those changes, wait a little and check the VM storage configuration again:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/terraform-greenlake-part2-blog-picture5.png&quot; alt=&quot;VM with two disks&quot; title=&quot;VM with two disks&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Use case 4: Please snap this VM&lt;/h3&gt;
&lt;p&gt;Here’s one last use case you can try, which consists of snapshotting the VM. You can do this by adding the following Terraform code snippet to your configuration file:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;   snapshot {
    name        = &quot;Snap1&quot;
    description = &quot;Snap this VM so we can restart from this state&quot;
  }
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Save the file, apply those changes, wait a little and check the details of the VM once again, in the Backups section:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/terraform-greenlake-part2-blog-picture6.png&quot; alt=&quot;Snap of VM ready&quot; title=&quot;Snap of VM ready&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Debugging when things go wrong&lt;/h2&gt;
&lt;p&gt;In this post, I’ve shown you how to make sure the Terraform configuration file is valid before applying changes using the &lt;strong&gt;terraform validate&lt;/strong&gt; command. To see more details during an apply command, you can also enable Terraform debugging by simply setting the TF_LOG environment variable. I suggest setting it to DEBUG, but other supported values are TRACE, INFO, WARN, and ERROR.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;export TF_LOG=DEBUG
&lt;/code&gt;&lt;/pre&gt;
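&lt;p&gt;As a minimal sketch of how this fits together (assuming the standard Terraform environment variables TF_LOG and TF_LOG_PATH and a shell environment), a debugging session might look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;# Send verbose logs to a file instead of the console (TF_LOG_PATH is optional)
export TF_LOG=DEBUG
export TF_LOG_PATH=./terraform-debug.log

# Check that the configuration file is syntactically valid before applying it
terraform validate

# Apply the changes; the detailed provider and API traces land in terraform-debug.log
terraform apply
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Remember to unset TF_LOG, or set it back to a quieter level, once you are done, as DEBUG output can be very verbose.&lt;/p&gt;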
&lt;h2&gt;What’s next?&lt;/h2&gt;
&lt;p&gt;In &lt;a href=&quot;https://developer.hpe.com/blog/infrastructure-as-code-on-hpe-greenlake-using-terraform/&quot;&gt;my first blog post&lt;/a&gt;, I covered how to get started with the Terraform provider for HPE GreenLake, explaining how to collect data from the platform and request the creation of a Virtual Machine instance. In this article, I showed you how to manage the lifecycle of a Virtual Machine using Terraform. I applied several changes to the Terraform infrastructure configuration file and observed how the desired state is automatically tracked by Terraform and applied to HPE GreenLake.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://www.terraform.io/&quot;&gt;Learn more about Terraform&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.hpe.com/us/en/greenlake.html&quot;&gt;Learn more about HPE GreenLake&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://registry.terraform.io/providers/HPE/hpegl&quot;&gt;Learn more about the HPE GreenLake Terraform provider&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Don’t forget, you can always find other tutorials and articles on HPE GreenLake on the &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE Developer blog&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Let’s Hack Shack!]]></title><link>https://developer.hpe.com/2022-May-04/</link><guid isPermaLink="false">https://developer.hpe.com/2022-May-04/</guid><pubDate>Wed, 04 May 2022 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Deep Learning Model Training – A First-Time User’s Experience with Determined – Part 2]]></title><description><![CDATA[Determined is an open-source training platform that aims to simplify deep learning (DL) model development and experimentation for data…]]></description><link>https://developer.hpe.com/deep-learning-model-training-–-a-first-time-user’s-experience-with-determined-–-part-2/</link><guid isPermaLink="false">https://developer.hpe.com/deep-learning-model-training-–-a-first-time-user’s-experience-with-determined-–-part-2/</guid><pubDate>Tue, 03 May 2022 13:51:27 GMT</pubDate><content:encoded>&lt;p&gt;Determined is an open-source training platform that aims to simplify deep learning (DL) model development and experimentation for data science teams by providing tools like distributing training, automatic model tuning, GPU resource management, and automatic experiment tracking.&lt;/p&gt;
&lt;p&gt;In &lt;a href=&quot;https://developer.hpe.com/blog/deep-learning-model-training-%E2%80%93-a-first-time-user%E2%80%99s-experience-with-determined-part-1/&quot;&gt;my previous blog post&lt;/a&gt;, I put on my IT operations manager’s hat and discussed how I deployed Determined on a Kubernetes cluster in an on-premises HPE Ezmeral Runtime Enterprise environment. I also showed how easy it is to get started with the Determined CLI, REST API, and Web User Interface to interact with Determined.&lt;/p&gt;
&lt;p&gt;In this second part of the series, using the Determined setup from Part 1, I’ll assume the role of a data scientist/ML engineer who wants to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Explore fundamental Determined concepts and features to train a TensorFlow model&lt;/li&gt;
&lt;li&gt;Track and visualize the progress and results of the training process using a single GPU&lt;/li&gt;
&lt;li&gt;Use distributed training across multiple GPUs and fine-tune the model with state-of-the-art hyperparameter search &lt;br/&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I will also use the Determined Python API in a Jupyter Notebook to load and test the trained model and see how well it performs. I’ll evaluate this by making inferences, that is, by using the trained model and new, unlabelled data to make predictions.&lt;/p&gt;
&lt;h2&gt;Overview of the Determined training model process&lt;/h2&gt;
&lt;p&gt;In short, Determined permits data science teams to launch deep learning model training tasks, called &lt;em&gt;&lt;strong&gt;trials&lt;/strong&gt;&lt;/em&gt;, for their ported model. These tasks are distributed across one or more GPUs and grouped as a Determined &lt;em&gt;&lt;strong&gt;experiment&lt;/strong&gt;&lt;/em&gt; using a particular set of configuration parameters specified in an &lt;em&gt;&lt;strong&gt;experiment configuration file&lt;/strong&gt;&lt;/em&gt;. This configuration file tells Determined how to run the model training process on Determined in terms of many different parameters, including, but not limited to, the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The hyperparameters&lt;/li&gt;
&lt;li&gt;The number of GPUs to use for the training task&lt;/li&gt;
&lt;li&gt;The amount of data on which to train a model&lt;/li&gt;
&lt;li&gt;How often the trial task must report the training metrics and the validation metrics to the Determined Master&lt;/li&gt;
&lt;li&gt;How often the trial task must save the model file&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The experiment configuration file and the Python code for the model used to load the datasets, build, optimize, and compile the model are collected in a &lt;em&gt;&lt;strong&gt;model definition directory&lt;/strong&gt;&lt;/em&gt;. The directory can optionally contain a startup-hook.sh script to install additional Python dependencies and libraries before the training process starts.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: The Experiment configuration file has required and optional fields.  I’ll explore the most common fields in this post. To learn more about Experiment configuration settings, check out the online documentation &lt;a href=&quot;https://docs.determined.ai/latest/training-apis/experiment-config.html&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;The use case and the model&lt;/h2&gt;
&lt;p&gt;To get started with Determined, I need a use case and a model to train in Determined. As a Data Scientist running an experiment on Iris species, I pick the simple and well-known &lt;em&gt;“Iris classification”&lt;/em&gt; model for predicting the likelihood that the flowers are an Iris species (Iris setosa, Iris versicolor or Iris virginica) based on their sepal and petal length and width measurements.&lt;/p&gt;
&lt;p&gt;To take advantage of Determined&apos;s functionalities, I need to port the model to Determined framework APIs such as PyTorch, Tensorflow, and Keras, the most commonly used deep learning frameworks. You can check out the &lt;em&gt;Iris deep learning model&lt;/em&gt; code  — a TensorFlow Keras based model  — in the &lt;a href=&quot;https://github.com/determined-ai/determined/tree/master/examples/computer_vision/iris_tf_keras&quot;&gt;Determined GitHub repository&lt;/a&gt; and download the complete code for this use case &lt;a href=&quot;https://docs.determined.ai/latest/_downloads/b8b05d77875d7d5a43ea2bd4b35fb0f4/iris_tf_keras.tgz&quot;&gt;here&lt;/a&gt;. I’ll train the model on the publicly available Iris &lt;a href=&quot;https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv&quot;&gt;training dataset&lt;/a&gt; and &lt;a href=&quot;https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv&quot;&gt;validation dataset&lt;/a&gt;, which consist of 120 samples and 30 samples, respectively. Each sample consists of the following Iris flower properties:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Features: sepal length, sepal width, petal length, petal width&lt;/li&gt;
&lt;li&gt;Label: the species of Iris to predict&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I’ve also stored a copy of the datasets in the shared storage volume for my Determined deployment described in the &lt;a href=&quot;https://developer.hpe.com/blog/deep-learning-model-training-%E2%80%93-a-first-time-user%E2%80%99s-experience-with-determined-part-1/&quot;&gt;first post of this series&lt;/a&gt;. I simply changed the model code to ensure the data loader function loads and reads training and validation datasets from the shared storage volume.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: Porting deep learning model code to Determined is beyond the scope of this blog series. The easiest way to learn how to port an existing deep learning model code to Determined is to start with the &lt;a href=&quot;https://docs.determined.ai/latest/tutorials/pytorch-porting-tutorial.html&quot;&gt;PyTorch Porting tutorial&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;strong&gt;Time to see Determined in action!&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;Launching my first experiment to train our model on a single GPU&lt;/h2&gt;
&lt;p&gt;Let’s start by simply launching an experiment with a single training task for the Iris deep learning model on a single GPU by defining the hyperparameters as fixed values in the experiment configuration file.&lt;/p&gt;
&lt;h3&gt;The Experiment configuration file&lt;/h3&gt;
&lt;p&gt;Here’s a quick look at the experiment configuration file (&lt;em&gt;const.yaml&lt;/em&gt;):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;name: iris_const_testuser1
hyperparameters:    # the hyperparameters to use for the training task
  learning_rate: 1.0e-4
  learning_rate_decay: 1.0e-6
  layer1_dense_size: 16
  global_batch_size: 30  # Number of data records within a batch
resources:
  slots_per_trial: 1 # Default. Use 1 GPU to train the model.
searcher:
  name: single       # Single searcher method disables hyperparameter search (HPO)
  metric: val_categorical_accuracy  # The validation metric to evaluate the performance
  smaller_is_better: false  # The higher the metric the better the performance
  max_length:        # Amount of data on which to train the model
    batches: 5000    # Set in the unit of batches (can be expressed as epochs too)	
entrypoint: model_def:IrisTrial # Starting point of the model code
min_validation_period:
  batches: 1000      # Report validation metrics to Master every 1000 batches
scheduling_unit: 100 # Report training metrics to Master every 100 batches (default) 
bind_mounts:         # Training and validation datasets location in shared volume. Ensure the datasets on the shared volume are accessible to the training tasks.
  - host_path: /opt/bluedata/mapr/mnt/&amp;#x3C;DataFabric-clusterName&gt;/exthcp/tenant-&amp;#x3C;ID&gt;/fsmount/repo/data
    container_path: /opt/bluedata/mapr/mnt/&amp;#x3C;DataFabric-clusterName&gt;/exthcp/tenant-&amp;#x3C;ID&gt;/fsmount/repo/data
    read_only: true
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;At a glance, this configuration YAML file is full of concepts that data scientists and ML engineers are familiar with, such as hyperparameters, batches, batch size, metrics, etc. In other words, using the experiment configuration file allows me, as a data scientist, to describe experiments with the terms I’m familiar with. From this file, Determined can take care of the model training for me.&lt;/p&gt;
&lt;h3&gt;Creating the experiment&lt;/h3&gt;
&lt;p&gt;With the &lt;em&gt;const.yaml&lt;/em&gt; experiment configuration file and the model code in the model definition directory, after being authenticated as a Determined user (&lt;code&gt;det user login &amp;#x3C;username&gt;&lt;/code&gt;), I’m going to start an experiment using the Determined CLI command line:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;det experiment create const.yaml &amp;#x3C;model-definition-directory&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Determined returns the experiment_Id and schedules the training task as a Kubernetes POD in my Kubernetes cluster. The POD container has all the libraries and dependencies required for training typical deep learning models with common deep learning frameworks like PyTorch, TensorFlow, and Keras.&lt;/p&gt;
&lt;p&gt;I can then use a couple more Det CLI commands to track the execution progress of my experiment:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;det experiment list | tail -1
det experiment describe &amp;#x3C;my-experiment-Id&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once the experiment completes, I can use the CLI command below to discover the performance metric for the best version of the validated model for my experiment:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;det experiment list-checkpoints --best 1 &amp;#x3C;my-experiment-Id&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Visualizing and inspecting the learning curve of the trained model&lt;/h3&gt;
&lt;p&gt;I can also access information on both training and validation performance for my experiment using the Determined WebUI. From the dashboard I see my experiment status, as shown in the following screenshot:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/webui-myexp-const-status.png&quot; alt=&quot;Determined WebUi Dashboard&quot; title=&quot;Determined WebUi Dashboard&quot;&gt;&lt;/p&gt;
&lt;p&gt;Selecting the experiment, I visualize the learning curve, which shows the model validation and training accuracy metric over the number of completed batches:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/webui-myexp-const-graph.png&quot; alt=&quot;Experiment performance visualization&quot; title=&quot;Experiment performance visualization&quot;&gt;&lt;/p&gt;
&lt;p&gt;Here, I see the graph changing in real-time as the experiment runs. Determined plots training metrics every &lt;em&gt;&lt;strong&gt;100 batches&lt;/strong&gt;&lt;/em&gt; of training data (the purple line) by default. Validation metrics (the blue line) are plotted every &lt;em&gt;&lt;strong&gt;1000 batches&lt;/strong&gt;&lt;/em&gt; over the amount of data (5000 batches) based on the parameters I specified in the experiment configuration file.&lt;/p&gt;
&lt;p&gt;I can also see the training and the validation metrics along with the checkpoints, which are the saved versions of the best-validated model. With the default checkpoint collection policy, Determined will checkpoint and save to a file the most recently validated model and the best model per training task (trial). If the most recent checkpoint is also the best checkpoint for a given trial, only one checkpoint will be saved for that trial, as in my example above.&lt;/p&gt;
&lt;p&gt;For data scientists familiar with TensorBoard, Determined also provides learning curve visualization through TensorBoard. I can launch a TensorBoard task from the WebUI or by using the Det CLI command below to start an instance of TensorBoard server in Determined.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;det tensorboard start &amp;#x3C;Experiment-Id&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The TensorBoard server instance will be launched as a container POD in the Kubernetes cluster. To stop the instance, I just use the command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;det tensorboard kill &amp;#x3C;tensorboard-Id&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Evaluating the model by making inferences using a local Jupyter Notebook&lt;/h3&gt;
&lt;p&gt;With the model trained and the best model files saved on the shared storage volume for my Determined deployment, I can now download the checkpoint files for the best version of the model, test it and see how well it performs using the &lt;a href=&quot;https://docs.determined.ai/latest/interact/api-experimental-client.html&quot;&gt;Determined Python API&lt;/a&gt;. Using the command below, I simply start a &lt;strong&gt;CPU-only&lt;/strong&gt; JupyterLab server instance with a bind-mounting configured to ensure that the validated model file and checkpoint files are accessible by the JupyterLab instance. Like any other task launched in Determined, the JupyterLab instance is launched as a container POD in the Kubernetes cluster.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;det notebook start --config-file Notebook-config.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;em&gt;Notebook-config&lt;/em&gt; YAML configuration file below is used to control a JupyterLab instance deployment. When downloading checkpoints from a shared file system, Determined assumes the checkpoints location is mounted to the mount point &lt;em&gt;&lt;strong&gt;determined_shared_fs&lt;/strong&gt;&lt;/em&gt; inside the JupyterLab POD container. To learn more about Jupyter Notebook in Determined, check out the &lt;a href=&quot;https://docs.determined.ai/latest/features/notebooks.html&quot;&gt;Notebook documentation&lt;/a&gt; and &lt;a href=&quot;https://www.determined.ai/blog/maximize-juptyter-notebook-experience-determined&quot;&gt;a recent Determined’s blog post on this topic&lt;/a&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;description: My-notebook
resources:
  slots: 0  # Launch a Notebook that does not use any GPU
bind_mounts: # Validated model checkpoints location in the shared volume
  - host_path: /opt/bluedata/mapr/mnt/&amp;#x3C;DataFabric-clusterName&gt;/exthcp/tenant-&amp;#x3C;ID&gt;/fsmount/checkpoints
    container_path: /determined_shared_fs # Mount point the host_path is mounted to inside the JupyterLab POD container 
idle_timeout: 30m
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With the JupyterLab instance deployed, I use the &lt;code&gt;det notebook list&lt;/code&gt; command to get the Notebook-Id, then connect to a Jupyterlab instance using my browser at the following URL: http://&amp;#x3C;DET_MASTER_URL&gt;/proxy/&amp;#x3C;Notebook-Id&gt;.&lt;/p&gt;
&lt;p&gt;Finally, I use the Determined &lt;a href=&quot;https://docs.determined.ai/latest/interact/api-experimental-client.html&quot;&gt;Python Client API&lt;/a&gt; and its TensorFlow Keras &lt;a href=&quot;https://docs.determined.ai/latest/post-training/use-trained-models.html&quot;&gt;Checkpoint API&lt;/a&gt; in the following Python code example to download the best model file, load it into memory as a Python process for TensorFlow Keras based model, and make inferences.&lt;/p&gt;
&lt;p&gt;Based on the flower&apos;s measurements, the model will predict, for each unlabelled example, the likelihood that the flower belongs to each of the given Iris species (Iris setosa, Iris versicolor or Iris virginica), and then print out, for each unlabelled example, the numeric species value with the highest confidence.&lt;/p&gt;
&lt;p&gt;The Python code below:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Hides the GPU device to run the code on CPU-only Jupyter Notebook&lt;/li&gt;
&lt;li&gt;Disables all logging output from TensorFlow Keras&lt;/li&gt;
&lt;li&gt;Imports the Determined Python libraries&lt;/li&gt;
&lt;li&gt;Authenticates me as Determined user to interact with my Determined experiment&lt;/li&gt;
&lt;li&gt;Downloads and loads the best model checkpoint for my experiment from the shared checkpoint storage volume&lt;/li&gt;
&lt;li&gt;Tests the model by making predictions using the loaded model and unlabelled examples data&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import os
os.environ[&quot;CUDA_DEVICE_ORDER&quot;]=&quot;PCI_BUS_ID&quot;
os.environ[&quot;CUDA_VISIBLE_DEVICES&quot;]=&quot;-1&quot;
os.environ[&apos;TF_CPP_MIN_LOG_LEVEL&apos;] = &apos;3&apos;
#
import numpy as np
from determined.experimental import client
from determined import keras
#
myexpId = &quot;&amp;#x3C;experiment-Id&gt;&quot;
client.login(master=&quot;&amp;#x3C;DET_MASTER_URL&gt;&quot;, user=&quot;MyUsername&quot;, password=&quot;MyPassword&quot;)
best_checkpoint = client.get_experiment(myexpId).top_checkpoint()
checkpoint_path = best_checkpoint.download()
model = keras.load_model_from_checkpoint_path(checkpoint_path)
#
# Inferences
X_new = np.array([[5, 3.9, 2, 0.5], [5, 2.5, 3, 1], [6.9, 3.1, 5.4, 2.1]])
prediction = model(X_new)
print(&quot;Let&apos;s predict the likelihood that the flower is the given Iris species 0: Iris setosa, 1: Iris versicolor, 2: Iris virginica {}&quot;.format(prediction))
print(&quot;&quot;)
print(&quot;the prediction of species for the 3 unlabelled examples is:&quot;)
print(np.argmax(prediction,axis=1))
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Cleaning up computational resources&lt;/h3&gt;
&lt;p&gt;Once I’ve finished with the test of my model, I can stop the JupyterLab instance and release computational resources on my Kubernetes cluster with the command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;det notebook kill &amp;#x3C;notebook-Id&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Distributed training with multiple GPUs&lt;/h2&gt;
&lt;p&gt;I’ll now launch another experiment that trains a single instance of my deep learning model using multiple GPUs, known as &lt;a href=&quot;https://docs.determined.ai/latest/training-distributed/index.html&quot;&gt;distributed training&lt;/a&gt;. Similar to my first experiment, this experiment features a single trial with a set of constant hyperparameters.&lt;/p&gt;
&lt;p&gt;Determined can coordinate multiple GPUs to train a deep learning model more quickly by leveraging multiple GPUs on a single machine or over multiple machines. Typically, data science teams use distributed training to train models on larger datasets to improve model performance and accuracy, leveraging additional compute resources.&lt;/p&gt;
&lt;p&gt;Determined automatically executes &lt;a href=&quot;https://www.oreilly.com/content/distributed-tensorflow/&quot;&gt;data parallelization&lt;/a&gt; training, where a data set is divided into multiple pieces and distributed across the GPUs, &lt;strong&gt;requiring minimal changes to model code&lt;/strong&gt;. Each GPU has the full model code but trains the model on its portion of the data. Determined ensures training coordination across multiple GPUs on a single machine or multiple machines to keep the overall training task in sync.&lt;/p&gt;
&lt;p&gt;To launch a multi-GPU experiment, all I need to do is specify the number of GPUs I want to use in the experiment configuration file, without any model code changes, and Determined takes care of the rest. For example, in the &lt;em&gt;distributed.yaml&lt;/em&gt; experiment configuration file, I specify two GPUs per trial in the &lt;strong&gt;resources&lt;/strong&gt; section:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-Yaml&quot;&gt;resources:
  slots_per_trial: 2
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I launch the experiment using the command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;det experiment create distributed.yaml &amp;#x3C;model-definition-directory&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With this configuration, Determined runs a single trial for my experiment. The trial uses two GPUs to train my model, whether leveraging two GPUs on a single machine or two GPUs across multiple machines in the Kubernetes cluster.&lt;/p&gt;
&lt;p&gt;As with the other experiments, I navigate to the WebUI to monitor the progress of the training task for my experiment and visualize information on both training and validation performance over the number of completed batches. I can use the same &lt;em&gt;&lt;strong&gt;Det&lt;/strong&gt;&lt;/em&gt; CLI commands that I used for my first experiment to discover the performance metric for my model and launch auxiliary tasks, such as a TensorBoard server or a JupyterLab Notebook server. And, of course, I can use the same Determined Python API code I used for my first experiment to load and test the trained model to see how well it performs when making predictions.&lt;/p&gt;
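&lt;p&gt;As a quick reference, here is a hedged sketch of the CLI calls I reuse at this stage; the experiment ID shown is a placeholder for my distributed experiment:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Show the best (top) checkpoint and its validation metric for the experiment (placeholder ID)
det experiment list-checkpoints --best 1 &amp;#x3C;MyExperiment-Id&gt;

# Launch auxiliary tasks against the same Determined deployment
det tensorboard start &amp;#x3C;MyExperiment-Id&gt;   # TensorBoard server for the experiment&apos;s metrics
det notebook start                        # JupyterLab Notebook server for ad hoc analysis
&lt;/code&gt;&lt;/pre&gt;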
&lt;h2&gt;Automatic model tuning with Determined&lt;/h2&gt;
&lt;p&gt;Previously, I showed you how to easily distribute a training task across multiple GPUs without changing your model code. Here, I’ll look at another way that an experiment can benefit from multiple GPUs. Determined makes it easy for data scientists and ML engineers to apply advanced functionality such as automatic model tuning with hyperparameter search, known as &lt;strong&gt;Hyperparameter Optimization&lt;/strong&gt; (HPO), to accelerate the hyperparameter search for their model with minimal effort. &lt;a href=&quot;https://docs.determined.ai/latest/training-hyperparameter/index.html#hyperparameter-tuning&quot;&gt;Determined HPO&lt;/a&gt; uses a Searcher algorithm like Random, Grid, Adaptive, or PBT, and ranges of hyperparameters, which are specified in the experiment configuration file.&lt;/p&gt;
&lt;p&gt;In general, data scientists experiment with several learning algorithms using a variety of hyperparameters on the same dataset by launching several training processes. They do so to find the model that works best for the business problem they are trying to solve. Determined HPO automates this process to find the best-performing model by running many training tasks, or trials, on the same dataset and code. Determined launches the trials simultaneously on different GPUs. Each trial uses a different configuration of hyperparameters &lt;strong&gt;randomly&lt;/strong&gt; chosen by the Searcher from the range of values specified in the experiment configuration file. Determined then chooses the set of hyperparameter values that result in a model that performs the best, as measured by the validation metric defined in the experiment configuration file.&lt;/p&gt;
&lt;p&gt;I’ll use the following experiment configuration file (&lt;em&gt;adaptive.yaml&lt;/em&gt;) to launch the HPO experiment in Determined. As you can see, the user-defined hyperparameter ranges and the state-of-the-art searcher method &lt;em&gt;Adaptive_ASHA&lt;/em&gt; are now defined. In the &lt;em&gt;&lt;strong&gt;searcher&lt;/strong&gt;&lt;/em&gt; section, the &lt;em&gt;&lt;strong&gt;max_trials&lt;/strong&gt;&lt;/em&gt; parameter indicates the number of trials that the experiment will create. Determined will explore &lt;em&gt;max_trials&lt;/em&gt; model configurations for you. The validation metric and the amount of data on which to train the model are the same as in my previous experiments. Each trial runs on one GPU because the resource parameter &lt;em&gt;slots_per_trial&lt;/em&gt; is not specified; therefore, the default setting of a single GPU per trial is used.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: The Adaptive_ASHA searcher method works best with many hundreds of trials. For the purpose of my experimental use case, I set the maximum number of trials to six. Determined will explore six model configurations.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;name: iris_adaptivesearch
hyperparameters:   # Hyperparameters specified as ranges
  learning_rate:
    type: log
    minval: -5.0
    maxval: 1.0
    base: 10.0
  learning_rate_decay: 1.0e-6
  layer1_dense_size:
    type: int
    minval: 4
    maxval: 32
  global_batch_size:
    type: int
    minval: 5
    maxval: 30
searcher:
  name: adaptive_asha    # The HPO Searcher algorithm
  metric: val_categorical_accuracy
  smaller_is_better: false
  max_length:
    batches: 5000
  max_trials: 6   # Number of trials to launch for the HPO experiment
entrypoint: model_def:IrisTrial
bind_mounts:
  - host_path: /opt/bluedata/mapr/mnt/&amp;#x3C;DataFabric-clusterName&gt;/exthcp/tenant-&amp;#x3C;ID&gt;/fsmount/repo/data
    container_path: /opt/bluedata/mapr/mnt/&amp;#x3C;DataFabric-clusterName&gt;/exthcp/tenant-&amp;#x3C;ID&gt;/fsmount/repo/data
    read_only: true
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I then launch the experiment using the command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;det experiment create adaptive.yaml &amp;#x3C;model-definition-directory&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As in the previous experiments, I navigate to the WebUI to monitor the training task progress and to access information on both training and validation performance for the experiment trials. As shown in the following figure, Determined hyperparameter search functionality gives me several &lt;a href=&quot;https://www.determined.ai/blog/hyperparameter-visualizations-determined&quot;&gt;visualization options&lt;/a&gt; for analyzing results: Learning Curve, Parallel Plot, Scatter Plot, Heat Map.&lt;/p&gt;
&lt;p&gt;As the experiment runs, I select the training metric &lt;em&gt;categorical_accuracy&lt;/em&gt; in the &lt;em&gt;&lt;strong&gt;Learning Curve&lt;/strong&gt;&lt;/em&gt; tab to visualize the model accuracy on training data for each trial over the number of completed batches. I can see that the Searcher Adaptive ASHA&apos;s &lt;em&gt;&lt;strong&gt;early stopping&lt;/strong&gt;&lt;/em&gt; capability has stopped poor-performing trials that do not warrant further training. Determined thereby releases valuable GPU resources from trials that are unlikely to ever produce the best model.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/webui-myexp-adaptive-graphs-v2.png&quot; alt=&quot;HPO Adaptive experiment trials visualization&quot; title=&quot;HPO Adaptive experiment trials visualization&quot;&gt;&lt;/p&gt;
&lt;p&gt;When the Determined experiment is complete, I navigate to the WebUI &lt;strong&gt;Trials&lt;/strong&gt; tab to compare the results of the different trials and discover the hyperparameters that yield the best model, which helps me design better future experiments by further tuning those hyperparameters. I could also use the CLI commands below to get the best trial and discover the hyperparameter values for the best model:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;det experiment list-checkpoints --best 1 &amp;#x3C;MyExperiment-Id&gt;
det trial describe &amp;#x3C;Trial-Id&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And, of course, I can use the same Python API code I used earlier to load and test the best model and make inferences.&lt;/p&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;During both parts of this blog series, I wore a couple of hats: an IT operations manager’s hat and a data scientist/ML engineer’s hat.&lt;/p&gt;
&lt;p&gt;With my IT operations manager’s hat, I deployed Determined on a Kubernetes cluster running on HPE Ezmeral Runtime Enterprise that provides all the components needed to run Determined: a workload scheduler such as Kubernetes, a namespace, multi-tenancy, an ingress gateway, persistent storage for experiment tracking, and a shared file system for storing model artifacts and datasets.&lt;/p&gt;
&lt;p&gt;With my data scientist/ML engineer’s hat, I used Determined and its interfaces (the CLI and the Web User Interface) to explore Determined’s fundamental concepts, training a simple Iris classification neural network model on multiple GPUs with distributed training and applying advanced functionality such as state-of-the-art hyperparameter search. I also used the Determined Python API to load and test the trained model and to make inferences.&lt;/p&gt;
&lt;p&gt;The Iris classification example used in this post is relatively simple. In reality, you would use Determined to build and train more complex deep learning models with much larger datasets, probably using a larger compute infrastructure with plenty of GPUs available to parallelize training models across data science teams.&lt;/p&gt;
&lt;p&gt;I hope you found this information interesting and useful in helping you get started with Determined. I was able to write this two-part blog series by joining and receiving help from the Determined Community Slack, which you can do by &lt;a href=&quot;https://join.slack.com/t/determined-community/shared_invite/zt-cnj7802v-KcVbaUrIzQOwmkmY7gP0Ew&quot;&gt;following this link&lt;/a&gt;. You can begin training models with Determined today by visiting the &lt;a href=&quot;https://github.com/determined-ai/determined&quot;&gt;Determined project on GitHub&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Becoming a Linux Kernel Contributor: Following the Journey of Souptick Joarder]]></title><description><![CDATA[The Linux Kernel Archive As a leading global, edge-to-cloud company, Hewlett Packard Enterprise (HPE) prides itself in employing team…]]></description><link>https://developer.hpe.com/becoming-a-linux-kernel-contributor-following-the-journey-of-souptick-joarder/</link><guid isPermaLink="false">https://developer.hpe.com/becoming-a-linux-kernel-contributor-following-the-journey-of-souptick-joarder/</guid><pubDate>Tue, 26 Apr 2022 19:38:25 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;/img/linux_source_code_primary_site.png&quot; alt=&quot;The Linux Kernel Archive&quot; title=&quot;The Linux Kernel Archive&quot;&gt;&lt;/p&gt;
&lt;p&gt;As a leading global, edge-to-cloud company, Hewlett Packard Enterprise (HPE) prides itself in employing team members who share one common purpose: to advance the way people live and work. Because of this, HPE boasts some of the finest &lt;a href=&quot;https://www.hpe.com/us/en/open-source.html&quot;&gt;Open Source&lt;/a&gt; engineering talent. In this blog series, you’ll get to meet a number of them as I interview some of the Open Source experts who make up our team.&lt;/p&gt;
&lt;p&gt;In this blog interview, Souptick Joarder, who has been contributing to the Linux Kernel for the last four years, describes his journey on becoming a trusted patch reviewer and contributor to Linux. Souptick first encountered Linux while studying embedded systems and became interested in Linux Kernel programming due to how Linux lent itself to exploration and modification according to one’s needs. He appreciated being able to give back to the community. Souptick received his Master of Technology degree in software systems at the Birla Institute of Technology and Science and works for HPE as a storage systems engineer.&lt;/p&gt;
&lt;h3&gt;Contributing to such a large Open Source project must be a daunting task. What advice would you give to would-be contributors?&lt;/h3&gt;
&lt;p&gt;To be sure, in the last four years that I’ve been involved, I’ve had to learn the ins-and-outs of making contributions and how to overcome the challenges involved. First and foremost, it’s important to understand that contributions to Linux are done through a trust-based development model. And it takes contributors a while to build that trust. Another thing that’s important to understand is the inherent hierarchy of the contribution process, since you’ll need to know who to connect to in order to build that trust. So, if you’re interested in being a kernel contributor, understand that you’ll need a lot of patience.&lt;/p&gt;
&lt;h3&gt;What would you recommend as the best way to get started?&lt;/h3&gt;
&lt;p&gt;When getting started, I’d recommend that you choose a particular area that interests you to focus on. I spend most of my time acting as a reviewer for patches posted in the Memory Management mailing list for the Virtual Memory Management subsystem. It contains the implementation of demand paging and virtual memory. Also, it contains memory allocation for user space programs and kernel internal structures.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/mm_mailing_list.png&quot; alt=&quot;Memory Management Mailing List&quot; title=&quot;Memory Management Mailing List&quot;&gt;&lt;/p&gt;
&lt;p&gt;In parallel, I fix warnings and errors reported by different kernel test bots on the memory management mailing list. Here is &lt;a href=&quot;https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/log/?qt=grep&amp;#x26;q=jrdr.linux@gmail.com&quot;&gt;the link for a list of patches&lt;/a&gt; for which I am an author, a reviewer, or part of the discussion.&lt;/p&gt;
&lt;p&gt;There are many different subsystems in Linux and each has a named maintainer called the Lieutenant. For example, Andrew Morton is the Memory Management (MM) maintainer. He’ll screen patches posted primarily for memory management and check to see that the patches have followed the process and whether they actually do what they set out to do. After review, he merges them into his MM Git branch. He usually accepts small patches from new developers as well, but for large-scale changes he only takes submissions from trusted developers. Over time, you’ll learn how to submit to each subsystem according to how the lieutenants like to receive contributions.&lt;/p&gt;
&lt;h3&gt;How does one make an actual contribution?&lt;/h3&gt;
&lt;p&gt;After the source code change is made, fixes/patches/enhancements are all submitted through mailing lists rather than through pull requests opened in a repository. Linux development is set up using a number of different mailing lists that align with different subsystems of the kernel. When you make your contribution, you’ll send it off to a specific mailing list and then you also want to CC the more general associated mailing list.&lt;/p&gt;
&lt;p&gt;For instance, when I make my contributions for the virtual memory management subsystem, I send it to the memory management mailing list. Then I also CC the higher-level, more general Linux Kernel Mailing List (LKML). There is a &lt;a href=&quot;https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/MAINTAINERS&quot;&gt;MAINTAINERS file&lt;/a&gt; (text file) inside the kernel source code that will tell you exactly who should receive a review request for your patches. Once your patch is ready, you need to refer to this MAINTAINERS file and, based on the files you have modified, send patches to only those maintainers and mailing lists.&lt;/p&gt;
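&lt;p&gt;As a rough illustration, a typical submission from the kernel source tree looks something like the sketch below; the patch file name is just an example, and the addresses shown are the memory management list and LKML mentioned above:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Generate a patch file from the most recent commit on my working branch
git format-patch -1

# Ask the kernel&apos;s helper script which maintainers and lists should receive it (example patch name)
./scripts/get_maintainer.pl 0001-example-change.patch

# Send the patch to the subsystem list, CC&apos;ing LKML (example patch name)
git send-email --to=linux-mm@kvack.org --cc=linux-kernel@vger.kernel.org 0001-example-change.patch
&lt;/code&gt;&lt;/pre&gt;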
&lt;h3&gt;How do you determine what to add or fix?&lt;/h3&gt;
&lt;p&gt;Reviewing patches is considered one of the most important things to do in terms of the Linux Kernel development process. You can really make substantial contributions here. An additional benefit of reviewing patches from other developers is that you get the opportunity to be involved in their contribution work and learn from it. Kernel test bots work from the Git repositories and report out on warnings and errors. Any developer can analyze those issues and provide a fix by following the correct protocols. It’s a continuous process that developers can spend a good amount of time on.&lt;/p&gt;
&lt;h3&gt;Could you give an example of one of your contributions?&lt;/h3&gt;
&lt;p&gt;There had been multiple VM_FAULT_ERR error codes noted, usually reported by drivers upon failures. These VM_FAULT_ERR error codes had been defined as macros. When drivers returned a VM_FAULT_ERR error code, the data type used was not consistent across the kernel; each driver chose its own datatype to report the error, which turned out to be a problem.&lt;/p&gt;
&lt;p&gt;In many cases, the drivers failed to return the VM_FAULT_ERR error code despite seeing a failure and wound up returning SUCCESS instead. It also turned out that there was this silly inefficiency in the drivers (due to the lack of an appropriate API), resulting in ERRNO error codes sometimes being converted to a VM_FAULT_ERR error code before returning to memory management.&lt;/p&gt;
&lt;p&gt;The plan was to introduce a new datatype (initially named vm_fault_t type) and then to enable all the drivers/filesystems in the entire kernel to use this new type. This would ensure that when any new driver used a datatype other than vm_fault_t to report the VM_FAULT_ERR error code, the compiler would catch it. This way, we would restrict all future callers to use only the vm_fault_t type to report a VM_FAULT_ERR error code. It allows errors to be caught much earlier in the build process (at compile time) instead of surfacing later at execution.&lt;/p&gt;
&lt;p&gt;As part of this change, we also identified all the buggy drivers that were not returning the correct VM_FAULT_ERR error code upon failure and fixed them. We also identified all the other drivers that were converting ERRNO to VM_FAULT_ERR before reporting a failure to memory management. A few new wrapper APIs were added to make the conversions easy.&lt;/p&gt;
&lt;p&gt;Just to give you an idea of the patience that’s required to do this, this work started with Linux V4.17 and will finish with V5.1 – a whole year’s worth of time.&lt;/p&gt;
&lt;h3&gt;Any final advice to developers who might like to start contributing to Linux?&lt;/h3&gt;
&lt;p&gt;It’s important to remember that Linux Kernel development doesn’t work with a deadline-based approach. Patches are reviewed and tested by developers and maintainers all around the world and this takes time. So you need to have a lot of patience. It could take months, or even years, to get your changes merged into the mainline code. Because the maintainers, in the end, are responsible for maintaining any new pieces of code, they believe in the approach of &lt;strong&gt;breaking the code first before accepting it&lt;/strong&gt;. This can take time.&lt;/p&gt;
&lt;p&gt;Something else to keep in mind is that others will often criticize your changes. Sometimes it turns ugly. Learn to take criticism in a positive way. This is probably one of the most important pieces of advice I can give to fellow developers interested in working with the kernel community.&lt;/p&gt;
&lt;p&gt;Finally, sometimes you’ll find more senior developers who will help you bring your patches forward. You’ll find this a great opportunity to learn new things and it’s also a great way to build your reputation within the community. Remember, since the entire development process is done through email communications, the only way the community will judge you will be by your work – so make your work shine!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[KubeCon + CloudNativeCon, Europe: 16-20 May 2022]]></title><description><![CDATA[KubeCon EU 2022 Come join the HPE Developer Community team at Cloud Native Computing Foundation’s flagship conference, KubeCon…]]></description><link>https://developer.hpe.com/kubecon-cloudnativecon-europe-16-20-may-2022/</link><guid isPermaLink="false">https://developer.hpe.com/kubecon-cloudnativecon-europe-16-20-may-2022/</guid><pubDate>Fri, 22 Apr 2022 19:03:59 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;/img/kubeconeu-2022-event-card-image.png&quot; alt=&quot;KubeCon EU 2022&quot; title=&quot;KubeCon EU 2022&quot;&gt;&lt;/p&gt;
&lt;p&gt;Come join the &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE Developer Community&lt;/a&gt; team at Cloud Native Computing Foundation’s flagship conference, KubeCon + CloudNativeCon Europe 2022. Thousands of cloud-native leaders will come together from May 16th through 20th to this conference to learn about emerging trends in container architecture, advancing the exciting world of cloud-native computing. Hewlett Packard Enterprise (HPE) will have a booth onsite (booth number G11) at the Feria Valencia where the show floor will open on Wednesday, May 18th. We will also provide a virtual presence, including the &lt;a href=&quot;https://developer.hpe.com/hackshack/&quot;&gt;HPE Developer Hack Shack&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;In the HPE booth (G11), you’ll get to talk with HPE experts and HPE Developer Community team members who can give you the rundown on key Open Source projects we’re involved in, like SPIFFE/SPIRE, CSI driver, Apache Spark™, etc.&lt;/p&gt;
&lt;h2&gt;Hear Agustín Martínez Fayó and Marcos Yacob Speak on SPIRE&lt;/h2&gt;
&lt;p&gt;On Friday, May 20th, contributors and maintainers of the SPIRE project, Agustín Martínez Fayó and Marcos Yacob, will lead a session where you’ll be introduced to the SPIRE project and dive deep into its new Microsoft Windows support. SPIRE, the SPIFFE Runtime Environment, is an extensible system that implements the principles embodied in the SPIFFE (Secure Production Identity Framework for Everyone) standards. It manages platform and workload attestation, provides an API for controlling attestation policies, and coordinates certificate issuance and rotation.&lt;/p&gt;
&lt;p&gt;The session will provide a high-level overview of the basic concepts behind SPIRE and explain why you should consider using it if you find issuing workload identities at scale challenging. This talk will also offer a deep dive into the Windows support that is being introduced in SPIRE, providing detailed information about the implementation, explaining the differences between running SPIRE on Windows and Linux platforms, and comparing the experience from both a user and developer perspective. &lt;a href=&quot;https://kccnceu2022.sched.com/event/yttL&quot;&gt;Don’t miss this session&lt;/a&gt;, which runs from 14:55 to 15:30. Make sure you catch it before you head out.&lt;/p&gt;
&lt;h2&gt;In-Booth Presentations&lt;/h2&gt;
&lt;p&gt;There will be many other opportunities to learn about HPE’s software development work and open source efforts at the HPE booth. Our magician will dazzle you with his magic and quiz you on how much you know about HPE’s involvement in open source. If you’ve paid attention, you may even win a prize! And don’t miss the raffle: Three lucky winners will get a chance to walk away with one Raspberry Pi 4 starter kit each day.&lt;/p&gt;
&lt;p&gt;In-booth presentations will be given by many of the HPE Developer Community experts every couple of hours.&lt;/p&gt;
&lt;p&gt;Each day, you’ll have opportunities to learn about the following topics:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;What? Not yet an HPE DEV Member? There&apos;s Treasure to be Found!&lt;/li&gt;
&lt;li&gt;Open Source @ HPE – a 25+ Years Love Story&lt;/li&gt;
&lt;li&gt;Overview of HPE Storage for Kubernetes&lt;/li&gt;
&lt;li&gt;Introduction to HPE Point Next&lt;/li&gt;
&lt;li&gt;Introduction to SPIFFE &amp;#x26; SPIRE&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;You Can Still Attend Virtually&lt;/h2&gt;
&lt;p&gt;As we’ve done the last few years, we’ll be offering live office hours where you can connect with HPE subject matter experts and collaborate, learning more about Open Source and HPE technologies. The featured topics will mirror those in the booth.&lt;/p&gt;
&lt;p&gt;As part of its virtual presence, HPE will once again feature &lt;a href=&quot;https://developer.hpe.com/hackshack/&quot;&gt;the HPE Developer Hack Shack&lt;/a&gt;, a place to learn and have fun. It’s a unique place designed to give virtual events a more personal touch and extend the experience beyond the event. Here’s what the HPE Developer Community will provide to you through the Hack Shack and web portal:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/hackshack/workshops&quot;&gt;WORKSHOPS-ON-DEMAND&lt;/a&gt;: Access over two dozen free, on-demand hands-on training courses. Using a Jupyter Notebook environment, they provide you with hands-on experience across the latest HPE and Open Source technologies. Check out our newest workshops, including:
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/hackshack/workshop/33&quot;&gt;Docker 101 - Introduction to Docker Concepts&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/hackshack/workshop/34&quot;&gt;Spark 101 - Introduction to Apache Spark Concepts&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/hackshack/workshop/32&quot;&gt;Creating a Zero Trust Model for Microservices Architectures with SPIRE and Envoy&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/hackshack/workshop/24&quot;&gt;Kubernetes 101 - Introduction to the Kubernetes Concepts&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/hackshack/workshop/18&quot;&gt;Building a dynamic Machine Learning pipeline with KubeDirector&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/hackshack/workshop/2&quot;&gt;Using Kubernetes CSI with HPE Ezmeral Container Platform&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://bit.ly/kubecon-eu-2022-hpedev-treasure-hunt&quot;&gt;TREASURE HUNT&lt;/a&gt;: Our scavenger-hunt style challenge encourages you to check out all the resources that are available on the &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE Developer website&lt;/a&gt; and the Hack Shack. Be one of the first 15 people to answer all the questions correctly and win HPE Developer branded swag!&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/hackshack/replays/&quot;&gt;REPLAYS&lt;/a&gt;: You can find replays of many of the technical workshops we’ve offered live in the past in the Hack Shack as well. View them to learn more about the &lt;a href=&quot;https://developer.hpe.com/hackshack/replays/0&quot;&gt;HPE Ezmeral Container Platform&lt;/a&gt; (now known as HPE Ezmeral Runtime Enterprise), &lt;a href=&quot;https://developer.hpe.com/hackshack/replays/27&quot;&gt;SPIFFE and SPIRE authentication&lt;/a&gt;, and the &lt;a href=&quot;https://developer.hpe.com/hackshack/replays/2&quot;&gt;HPE Container Storage Interface&lt;/a&gt; for Kubernetes.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/hackshack/arcade/&quot;&gt;ARCADE&lt;/a&gt;: In our arcade, you’ll find Hack Shack Attack! Give this popular retro-style video game a try and compete with your friends for the highest score. There’s also a place where you can download stickers, Zoom backgrounds, and cool artwork to use on your social channels.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/community/&quot;&gt;COMMUNITY&lt;/a&gt;: We invite you to join and contribute your expertise on our blog or deliver an on-demand workshop. Connect with others in the community via the &lt;a href=&quot;https://www.hpe.com/forum/ezmeral&quot;&gt;HPE Ezmeral Software Forum&lt;/a&gt; or our &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;Slack&lt;/a&gt; and &lt;a href=&quot;https://twitter.com/HPE_Developer&quot;&gt;Twitter&lt;/a&gt; channels to start conversations and get answers to questions. Sign-up for our &lt;a href=&quot;https://developer.hpe.com/newsletter-signup&quot;&gt;HPE Developer Newsletter&lt;/a&gt; to stay up-to-date on the newest blog posts and tutorials.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;As always, HPE is excited to have an opportunity to connect with other technologists at an event such as this. &lt;a href=&quot;https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/register/&quot;&gt;Register now&lt;/a&gt; and meet us in Valencia to learn how HPE envisions Open Source running across edge to cloud, working to improve management and analytics, cloud-native security, and data insights. It’s been a long time since we’ve had the opportunity to see each other in person!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[It’s All Fun and Games at the Hack Shack!]]></title><description><![CDATA[Let's Hack Shack at HPE Discover 2022 Can you believe it? Hewlett Packard Enterprise (HPE) will be holding its first major in-person…]]></description><link>https://developer.hpe.com/it’s-all-fun-and-games-at-the-hack-shack/</link><guid isPermaLink="false">https://developer.hpe.com/it’s-all-fun-and-games-at-the-hack-shack/</guid><pubDate>Fri, 22 Apr 2022 12:36:51 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;/img/lets-hack-shack-512.png&quot; alt=&quot;Let&amp;#x27;s Hack Shack at HPE Discover 2022&quot; title=&quot;Let&amp;#x27;s Hack Shack at HPE Discover 2022&quot;&gt;&lt;/p&gt;
&lt;p&gt;Can you believe it? Hewlett Packard Enterprise (HPE) will be holding its first major in-person conference since 2019 in just over a month! HPE Discover 2022, the Edge-To-Cloud conference, will be held in Las Vegas at the Venetian/Palazzo Resort from June 28-30th. The &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE Developer Community&lt;/a&gt; team is really excited to be able to engage with attendees in person once again. Make sure you &lt;a href=&quot;https://attend.hpe.com/discover2022/index.cfm?iLangID=1&quot;&gt;register&lt;/a&gt; soon so you don’t miss out!&lt;/p&gt;
&lt;h2&gt;A place designed especially for you&lt;/h2&gt;
&lt;p&gt;HPE Discover 2022 attendees will have plenty of opportunity to sit and hear about all the newest HPE solutions and technologies throughout the event. But once you enter the Hack Shack, things get personal! This is where developers, data scientists, and IT technologists have the opportunity to sit down with experts and discuss topics that are important to them. Experts on hot topics like zero-trust security, data lakes, machine learning and analytics workloads, and open source projects will be hanging around just so you can get your questions answered.&lt;/p&gt;
&lt;p&gt;Within the Hack Shack, you’ll engage with members of the HPE Developer Community in topical meetups. These 30-minute sessions are informal gatherings where you can listen to various subject matter experts (SMEs) and then engage in further discussion on the topic as a group. SMEs will also hang out for a bit after each session in case you want to delve into further detail regarding your own specific needs. (Stay tuned for our next blog post where we’ll get into details on these meetup sessions.)&lt;/p&gt;
&lt;p&gt;There will also be an opportunity for you to sit down with the HPE UX design team to help them improve the user experience across HPE products and services. Share your opinions and make your voice heard by participating in a variety of user research activities that will inform the design and direction of HPE products and services.&lt;/p&gt;
&lt;h2&gt;Compete in games and win prizes!&lt;/h2&gt;
&lt;p&gt;While many come to the Hack Shack to connect with subject matter experts and get their questions answered, many also come to enjoy a little competitive fun with their colleagues. We’ll have a foosball table out on the front lawn, Jenga and cornhole games in the backyard, chairs to relax in, and a Shield console to play some cool video games.&lt;/p&gt;
&lt;p&gt;In addition to these popular games, the HPE Developer Community team has developed a couple of other diversions for you. The first one is our Treasure Hunt. Designed to familiarize players with the HPE Developer web portal, this scavenger-hunt style challenge encourages you to check out all the resources that are available on the site, including the &lt;a href=&quot;https://developer.hpe.com/hackshack/&quot;&gt;virtual Hack Shack&lt;/a&gt;. There is treasure to be had, so be sure to check out the &lt;a href=&quot;https://developer.hpe.com/hackshack/hpediscover2022-treasurehunt-terms-conditions/&quot;&gt;Terms &amp;#x26; Conditions&lt;/a&gt; for details.&lt;/p&gt;
&lt;p&gt;We also designed five role-based hands-on challenges for you to take. Each onsite participant has the opportunity to go home with some nifty swag. Six of those participants will walk away with a CanaKit Raspberry Pi 4 Extreme kit after having successfully completed one of the challenges and correctly answered the associated quiz. Grand prize winners will be announced at our Hack Shack celebration on Wednesday night. &lt;strong&gt;Note: You must be present to win. Please see the &lt;a href=&quot;https://developer.hpe.com/hackshack/hpediscover2022-swchallenges-terms-conditions/&quot;&gt;Terms &amp;#x26; Conditions&lt;/a&gt; for details.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Are you ready to take these challenges?&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Cloud Architect Challenge - VM Desired State Management in HPE GreenLake&lt;/strong&gt; – Your mission is to deploy a VM in HPE GreenLake using the open source configuration management tool, Terraform. For this challenge, you’ll describe the desired state of your environment and use Terraform to analyze and build the necessary infrastructure artifacts. You’ll be provided with everything you need in a nice and friendly Jupyter Notebook environment with little to no code to write. Take an hour to experience Infrastructure-as-Code on HPE GreenLake and earn a chance to win a prize.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Open Source Advocate Challenge - Play with Python or Discover Ansible - Your Choice!&lt;/strong&gt; – Feeling competitive? Expand your Open Source skills with a chance to win cool prizes. In this challenge, you will be tasked with completing one of two popular HPE DEV Workshops-on-Demand. Choose from &lt;em&gt;Python 101 - A simple introduction to Python programming language&lt;/em&gt; or &lt;em&gt;Ansible 101 - Introduction to Ansible concepts&lt;/em&gt; and respond correctly to the quiz for a chance to win a prize. An hour should be enough for you to complete this challenge!&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Developer Challenge - Building Modern Software with Zero Trust Security&lt;/strong&gt; – In today’s highly distributed modern software environments, security is a major concern. Who do you trust? In this challenge, choose one of two workshops, &lt;em&gt;SPIFFE – SPIRE 101 – An introduction to SPIFFE server and SPIRE agent security concepts&lt;/em&gt; or &lt;em&gt;Creating a Zero Trust Model for Microservices Architectures with SPIRE and Envoy&lt;/em&gt;, to understand, in less than an hour, how open source projects SPIFFE and SPIRE enable zero trust security at the heart of your solution and compete for a prize.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;ML Engineer Challenge - Deep Learning Model Training at Scale with Determined&lt;/strong&gt; – Deep learning at scale is difficult, right? Explore the fundamentals of Determined, the open-source deep learning training platform, to learn how it can help. Take this challenge and respond correctly to a quiz to try and win a cool prize. In this challenge, you will train a TensorFlow model in Determined using one GPU, and scale up your training across multiple GPUs using distributed training, while finding accurate models faster using state-of-the-art hyperparameter search methods.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Data Scientist Challenge - Finding the Data You Didn&apos;t Know You Needed&lt;/strong&gt; – In this challenge, you’ll get to see how Project Data Map can help you discover new and meaningful datasets that enhance your model building experience, all whilst keeping track of the datasets you know and love, so next time you don’t have to go digging through old notebooks to find them! You’ll even learn how you can share them with your classmates or trade them for valuable tokens! This challenge should take about an hour and gives you a chance to win an awesome prize.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Come party with us&lt;/h2&gt;
&lt;p&gt;One of the most anticipated events at HPE Discover is the Hack Shack celebration. Planned for Wednesday evening, we’ll be serving refreshments and hosting a very special speaker who will be presenting the CanaKit Raspberry Pi sets to our lucky winners of the role-based challenges.&lt;/p&gt;
&lt;p&gt;Stay tuned! We’ll be publishing more details on the meetup sessions and the schedule in an upcoming &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE Developer Blog post&lt;/a&gt;. You can also refer to the &lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.sessions/?l=1049&amp;#x26;locale=en_US&quot;&gt;HPE Discover 2022 Edge-To-Cloud conference catalog&lt;/a&gt; for details on HPE Developer sessions.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Deep Learning Model Training – A First-Time User’s Experience with Determined - Part 1]]></title><description><![CDATA[Determined is an open-source deep learning training platform that helps data science teams train models more quickly, easily share GPU…]]></description><link>https://developer.hpe.com/deep-learning-model-training-–-a-first-time-user’s-experience-with-determined-part-1/</link><guid isPermaLink="false">https://developer.hpe.com/deep-learning-model-training-–-a-first-time-user’s-experience-with-determined-part-1/</guid><pubDate>Thu, 14 Apr 2022 17:21:31 GMT</pubDate><content:encoded>&lt;p&gt;&lt;a href=&quot;https://github.com/determined-ai/determined&quot;&gt;Determined&lt;/a&gt; is an open-source deep learning training platform that helps data science teams train models more quickly, easily share GPU resources, and collaborate more effectively. The open-source version of Determined can be deployed on-premises in your data center, on any hardware, on Kubernetes, or in public clouds – wherever GPU resources are available to obtain the full benefit of Determined.&lt;/p&gt;
&lt;p&gt;In this two-part blog series, I’ll share my experience as a first-time user of Determined. This blog series aims to provide a high-level overview of the basic concepts behind Determined and why you should consider it if you find doing deep learning at scale a bit challenging.&lt;/p&gt;
&lt;p&gt;In this first part, I’ll put on my IT Operations manager’s hat and explain how I deploy Determined on a Kubernetes cluster in an on-premises HPE Ezmeral Runtime Enterprise deployment. In this instance, it will enable my organization’s data science team to quickly try out Determined and assess its capabilities for their data science work.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/detai-high-levl-architecture-thumbnail-v2.png&quot; width=&quot;543&quot; height=&quot;708&quot; alt=&quot;High Level architecture diagram&quot; title=&quot;High Level architecture diagram&quot;&gt;&lt;/center&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.hpe.com/blog/deep-learning-model-training-%E2%80%93-a-first-time-user%E2%80%99s-experience-with-determined-%E2%80%93-part-2/&quot;&gt;In the second part of this series&lt;/a&gt;, I&apos;ll wear my data scientist/ML engineer hat as a member of a larger data science team that wants to get started with Determined and explore some of its fundamental concepts and features. I’ll review how to train neural network models using one or more GPUs with distributed training, and advanced functionality such as state-of-the-art hyperparameter search to improve model accuracy and find the best version of a model.&lt;/p&gt;
&lt;h2&gt;Determined AI&lt;/h2&gt;
&lt;p&gt;Determined is an open-source platform built to accelerate deep learning (DL) model development and experimentation for data science teams at scale, handling the load easily as teams, clusters and data sets all increase in size. Teams can use Determined to build, train, and optimize their deep learning models while easily sharing GPU compute resources. Determined AI was acquired by HPE in June 2021 and now operates as a part of the High Performance Computing (HPC) and Artificial Intelligence (AI) business unit.&lt;/p&gt;
&lt;p&gt;Determined provides the APIs, a command line interface (CLI), a web user interface, and tools for accelerating model experiments with integrated capabilities such as distributing training and automatic model tuning with hyperparameter search, also known as hyperparameter optimization (HPO).&lt;/p&gt;
&lt;h2&gt;HPE Ezmeral Runtime Enterprise&lt;/h2&gt;
&lt;p&gt;Built from the ground up to be open and run in hybrid environments, &lt;a href=&quot;https://developer.hpe.com/platform/hpe-ezmeral-runtime/home/&quot;&gt;HPE Ezmeral Runtime Enterprise&lt;/a&gt; provides a secure, enterprise-grade platform designed to run both cloud-native and non-cloud-native applications at scale. It provides an integrated data fabric, multi-cluster Kubernetes management, enterprise-grade security and multi-tenancy capabilities.&lt;/p&gt;
&lt;p&gt;HPE Ezmeral Runtime Enterprise with the pre-integrated HPE Ezmeral Data Fabric provides all the networking, compute, and storage resources needed to run the Determined open-source platform on premises on Kubernetes.&lt;/p&gt;
&lt;h2&gt;Components of my Determined deployment&lt;/h2&gt;
&lt;center&gt;&lt;img src=&quot;/img/detai-lab-environment-architecture-v2.png&quot; width=&quot;1332&quot; height=&quot;725&quot; alt=&quot;Figure1 Determined High Level Architecture on Kubernetes&quot; title=&quot;Figure1 Determined High Level Architecture on Kubernetes&quot;&gt;&lt;/center&gt;
&lt;p&gt;As the figure above indicates, my experimental deployment of Determined consists of:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;A Kubernetes cluster, managed by HPE Ezmeral Runtime Enterprise, with a set of worker nodes with &lt;a href=&quot;https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/&quot;&gt;NVIDIA GPUs support enabled&lt;/a&gt; (1 GPU device per worker node in my Kubernetes cluster).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A Determined &lt;strong&gt;Master&lt;/strong&gt;, which is attached to a &lt;strong&gt;PostgreSQL&lt;/strong&gt; database. The Determined Master and Database run as containers, each within a Kubernetes POD, in the worker nodes of the Kubernetes cluster.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The Master hosts the interface service endpoint that clients use to communicate with Determined through a CLI, WebUI, and APIs.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The Master schedules tasks and brings up PODs on Kubernetes worker nodes to run tasks on demand. For example, the model training tasks and auxiliary tasks (TensorBoard, Notebook).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;As training tasks execute, the Master maintains communication with training task PODs and saves training model metadata, like the training and validation metrics received from the training tasks, as well as the state of the tasks, in the PostgreSQL database, for model experiment tracking and analysis.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;An ingress gateway makes the Master&apos;s interface service endpoint reachable from outside the Kubernetes cluster.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A persistent storage volume for experiment tracking, where the model’s metadata, such as hyperparameters, training and validation metrics, logs, and date/time, is logged in the PostgreSQL database.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A volume shared across the Kubernetes worker nodes. The shared file system is needed to store the &lt;strong&gt;model artifacts&lt;/strong&gt;, such as model code and model &lt;strong&gt;checkpoint&lt;/strong&gt; files. The model checkpoint files are saved versions of the validated models that data science teams can access later for testing and analysis. This makes them available to a deployment or serving solution such as Seldon core. The shared file system can also be used by Determined to store the model datasets on which the model is trained by the training tasks.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Installing Determined on a Kubernetes cluster managed by HPE Ezmeral Runtime Enterprise&lt;/h2&gt;
&lt;p&gt;The open-source version of Determined is available as a Helm chart and can be installed on a Kubernetes cluster running on HPE Ezmeral Runtime Enterprise. As such, I download the chart and modify the chart &lt;em&gt;values.yaml&lt;/em&gt; file (which I’ll explain in this section) before installation of the Helm chart in my Kubernetes cluster.&lt;/p&gt;
&lt;p&gt;Before deploying the Helm chart, an important aspect to understand is how to connect Determined to a shared storage volume. For this, I need to create a new tenant named &lt;strong&gt;determinedai&lt;/strong&gt; on HPE Ezmeral Runtime Enterprise for my Kubernetes cluster. This tenant serves as a Kubernetes cluster &quot;namespace&quot;. Each tenant created in HPE Ezmeral Runtime Enterprise is automatically provisioned with a tenant’s shared storage volume on the pre-integrated HPE Ezmeral Data Fabric cluster located at &lt;strong&gt;/&amp;#x3C;DataFabric-clusterName&gt;/exthcp/tenant-&amp;#x3C;ID&gt;/fsmount&lt;/strong&gt;. The tenant’s shared storage volume is then automatically mounted on each Kubernetes cluster’s host on the path &lt;strong&gt;/opt/bluedata/mapr/mnt&lt;/strong&gt;. This enables Determined to connect to the shared storage &lt;em&gt;/opt/bluedata/mapr/mnt/&amp;#x3C;DataFabric-clusterName&gt;/exthcp/tenant-&amp;#x3C;ID&gt;/fsmount/&lt;/em&gt; for accessing the training and validation datasets, and for storing model artifacts.&lt;/p&gt;
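&lt;p&gt;As a quick sanity check before deploying the chart, I can verify from a Kubernetes worker host that the tenant’s shared volume is mounted; the path below uses the same placeholders as the rest of this post:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# On a Kubernetes worker host: confirm the tenant share is mounted (placeholder path)
ls /opt/bluedata/mapr/mnt/&amp;#x3C;DataFabric-clusterName&gt;/exthcp/tenant-&amp;#x3C;ID&gt;/fsmount/
&lt;/code&gt;&lt;/pre&gt;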
&lt;p&gt;Furthermore, some aspects of Helm chart deployment must be configured before installing Determined on Kubernetes. Although most of the default Helm chart configuration settings are suitable for getting started with Determined on Kubernetes, some parameters must be configured in the chart &lt;em&gt;values.yaml&lt;/em&gt; file to match the designated Kubernetes cluster deployment and available compute, storage and network resources such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The use of the Kubernetes &lt;em&gt;NodePort&lt;/em&gt; service type to expose the Determined Master service endpoint outside the Kubernetes cluster,&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The shared storage volume path to use to save validated model files and checkpoints for fault tolerance,&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The amount of GPU resources (known as &lt;em&gt;&lt;strong&gt;slot&lt;/strong&gt;&lt;/em&gt;) available on the Kubernetes worker hosts. In my Determined deployment I have 1 GPU per worker host,&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The &lt;a href=&quot;https://docs.determined.ai/latest/concepts/scheduling.html&quot;&gt;advanced scheduler&lt;/a&gt; to use for large Kubernetes clusters with multiple GPUs per worker host. For my experimental Determined deployment, as I only have 1 GPU per worker host, it is recommended to let Determined use the &lt;strong&gt;default Kubernetes scheduler&lt;/strong&gt; to schedule training tasks,&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The Determined &lt;em&gt;Admin&lt;/em&gt; and &lt;em&gt;Determined&lt;/em&gt; default user account passwords,&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The friendly name for Determined deployment.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For more information about the configuration options for the Helm Chart deployment, see the &lt;a href=&quot;https://docs.determined.ai/latest/sysadmin-deploy-on-k8s/install-on-kubernetes.html&quot;&gt;installation guide documentation&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;In my Determined deployment on Kubernetes, the following aspects of the Determined Helm chart deployment configuration are set in the chart &lt;em&gt;values.yaml&lt;/em&gt; file as shown below. Other configuration settings are set to their default values.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;useNodePortForMaster: true
checkpointStorage:
   type: shared_fs
   hostPath: /opt/bluedata/mapr/mnt/&amp;#x3C;DF-clusterName&gt;/exthcp/tenant-&amp;#x3C;ID&gt;/fsmount/checkpoints
maxSlotsPerPod: 1
clusterName: stagingdetai
defaultPassword: &amp;#x3C;myPassword&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With the namespace created, the kubeconfig file for the Kubernetes cluster sourced in my Linux workstation, and the Helm chart deployment configuration files in hand, I can deploy Determined software on the Kubernetes namespace &lt;em&gt;determinedai&lt;/em&gt; using the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;helm install stagingdetai &amp;#x3C;relative path to determined-helm-chart repository&gt; -n determinedai [--dry-run]
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: I recommend first using the &lt;code&gt;--dry-run&lt;/code&gt; flag to validate and verify the chart manifest before actual Helm chart deployment.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Upon completion, I can use the following commands to check the status of the Helm chart deployment for my Determined instance:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;helm list -n determinedai
helm status stagingdetai -n determinedai
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/determined-helm-install-status.png&quot; alt=&quot;Helm Chart deployment status&quot; title=&quot;Helm Chart deployment status&quot;&gt;&lt;/p&gt;
&lt;p&gt;At the time of the Determined installation on the Kubernetes cluster, an instance of the &lt;strong&gt;Determined Master&lt;/strong&gt; and a &lt;strong&gt;PostgreSQL database&lt;/strong&gt; are deployed in the Kubernetes cluster. Using the kubectl command below allows me to check the resources that are deployed on the Kubernetes cluster:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;kubectl get pod,services -n determinedai
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/determined-pod-svc.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;As shown in the above image, these components run as a container within a Kubernetes POD. Service endpoints for the Determined’s Master and the Database services are also deployed. The Determined Master service endpoint is a NodePort service that enables HPE Ezmeral Runtime Enterprise to expose that service outside the Kubernetes cluster through its &lt;strong&gt;ingress gateway.&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;Installing the Determined Command Line Interface&lt;/h2&gt;
&lt;p&gt;As mentioned earlier, Determined provides a web user interface (WebUI), APIs (REST API and Python API), and a command line interface (CLI) tool to interact with the system. The CLI is the most common tool used by data scientists and ML engineers to interact with Determined, especially for launching deep learning model training tasks on Determined. The WebUI is mainly used to monitor the progress of model experiments and training tasks, and visualize the model training performance in graphs.&lt;/p&gt;
&lt;p&gt;The Determined CLI is distributed as a Python package. I need Python 3.6 or later installed on my Linux workstation along with the latest version of &lt;code&gt;pip&lt;/code&gt;. I can use the following command to install the CLI tool on my workstation:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;#install latest version of pip if needed
python3 -m pip install --upgrade pip 

#install the Determined CLI
pip install determined 
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Using the Determined Command Line Interface&lt;/h2&gt;
&lt;p&gt;I am now ready to enter Determined CLI commands. All commands begin with &lt;strong&gt;det&lt;/strong&gt; and any CLI command has the form:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;det [-m &amp;#x3C;det_master_URL_or_IP:port&gt;] &amp;#x3C;command_argument&gt; &amp;#x3C;action_verb&gt; [-h]&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;The Master service endpoint is referenced using the -m flag to specify the URL of the Determined Master that the CLI connects to. Instead of specifying the &lt;em&gt;&lt;strong&gt;-m&lt;/strong&gt;&lt;/em&gt; flag in every command, I can define an environment variable, &lt;em&gt;&lt;strong&gt;DET_MASTER&lt;/strong&gt;&lt;/em&gt;, that points to the Determined Master service endpoint URL.&lt;/p&gt;
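&lt;p&gt;For example, the two invocations below are equivalent ways to list my experiments; the master URL and port are placeholders for my deployment:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Point the CLI at the Determined Master explicitly with the -m flag (placeholder URL and port)
det -m http://&amp;#x3C;DET_MASTER_URL&gt;:&amp;#x3C;port&gt; experiment list

# With the DET_MASTER environment variable exported, the same call shortens to:
det experiment list
&lt;/code&gt;&lt;/pre&gt;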
&lt;blockquote&gt;
&lt;p&gt;Note:  The help flag [-h] can be used to learn more about CLI options.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;To use and interact with Determined using the CLI, I need to tell the CLI where the Determined Master service is running. To do so, I first use the &lt;code&gt;kubectl describe service&lt;/code&gt; command below and look at the &lt;strong&gt;Annotations&lt;/strong&gt; section to get the &lt;strong&gt;ingress gateway URL&lt;/strong&gt; and &lt;strong&gt;network port&lt;/strong&gt; provided by HPE Ezmeral Runtime Enterprise for the Master service of the Determined deployment:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;kubectl describe service determined-master-service-stagingdetai -n determinedai
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/determined-master-endpoint-ofuscated.png&quot; alt=&quot;Ingress Gateway URL for the Determined Master endpoint service&quot; title=&quot;Ingress Gateway URL for the Determined Master endpoint service&quot;&gt;&lt;/p&gt;
&lt;p&gt;In this example, the network port is 13047.&lt;/p&gt;
&lt;p&gt;I now need to export the &lt;strong&gt;DET_MASTER&lt;/strong&gt; environment variable on my workstation, pointing it to that URL and port:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;export DET_MASTER=http://gateway2.&amp;#x3C;mydomain.name&gt;:13047
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Finally, I need to authenticate as a Determined user. By default, at the time of the Determined installation, two user accounts are created: &lt;em&gt;&lt;strong&gt;Admin&lt;/strong&gt;&lt;/em&gt;, an administrator account, and &lt;em&gt;&lt;strong&gt;Determined&lt;/strong&gt;&lt;/em&gt;, a non-privileged user account, with the password specified in the Helm chart &lt;em&gt;values.yaml&lt;/em&gt; configuration file. The following command allows me to authenticate as the admin user. The CLI will prompt me for the password.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;#format: det user login &amp;#x3C;username&gt;
det user login admin
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Creating User accounts for the data science team&lt;/h2&gt;
&lt;p&gt;Determined is designed for data science teams. As such, I’d recommend creating a user account for each member of the team who wants to use Determined. This provides the organizational benefits of associating each Determined entity, such as model experiments and associated training tasks, with the user who created it.&lt;/p&gt;
&lt;p&gt;I have tried user account creation using both the CLI and the REST API. The &lt;em&gt;&lt;strong&gt;Admin&lt;/strong&gt;&lt;/em&gt; privileged user account must be used to create a user account and set the newly created user account’s password.&lt;/p&gt;
&lt;h3&gt;Using the Det CLI&lt;/h3&gt;
&lt;p&gt;Once logged in as Admin user on Determined, I can use the following command to create a test user account. First, I create the user account. The newly created user account has a blank password by default. Then, I set the password for the user account using the second command, which prompts me for the password and password confirmation.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Create the user account
det user create &amp;#x3C;username&gt;
# Set the password for the user account
det user change-password &amp;#x3C;target-username&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Using the REST API for a programmatic approach&lt;/h3&gt;
&lt;p&gt;Unlike the det CLI, which requires keyboard input for the password, a programmatic approach to creating user accounts might be more appropriate, depending on the organization’s use case. Determined also exposes a REST API. The Determined REST API documentation is available &lt;a href=&quot;https://docs.determined.ai/latest/rest-api/index.html&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Below is the sequence of REST API calls I can use to create a new user account (testuser1) in Determined and to set the password, all using code. You can see how I use &lt;em&gt;&lt;strong&gt;cURL&lt;/strong&gt;&lt;/em&gt; as an HTTP client to interact with Determined through its REST API.&lt;/p&gt;
&lt;p&gt;I first need to authenticate to Determined as the Admin user and save the authentication token (bearer token) for subsequent REST API calls:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;detMaster=http://gateway2.&amp;#x3C;mydomain.name&gt;:13047
# Authenticate as admin user and get the authentication token for subsequent calls:
token=$(curl -i -s -X &apos;POST&apos; \
  &quot;${detMaster}/api/v1/auth/login&quot; \
  -H &apos;accept: application/json&apos; \
  -H &apos;Content-Type: application/json&apos; \
  -d &apos;{
  &quot;username&quot;: &quot;admin&quot;,
  &quot;password&quot;: &quot;&amp;#x3C;MyPassword&gt;&quot;
}&apos; | grep token | awk &apos;{print $1}&apos; | tr -d &apos;\r&apos;)

# Extract token value and remove trailing quotes 
MyToken=$(echo $token | cut -d&apos;:&apos; -f 2 | cut -d&apos;,&apos; -f 1 | tr -d &apos;&quot;&apos;) 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I then create a non-admin user account, using the Admin access token as the bearer token for authentication:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Create a new user account “testuser1”
curl -X &apos;POST&apos; \
  &quot;${detMaster}/api/v1/users&quot; \
  -H &apos;accept: application/json&apos; \
  -H &quot;Authorization: Bearer $MyToken&quot; \
  -d &apos;{
  &quot;user&quot;: {
    &quot;username&quot;: &quot;testuser1&quot;,
    &quot;admin&quot;: false,
    &quot;active&quot;: true
   },
   &quot;password&quot;: &quot;&amp;#x3C;UserPassword&gt;&quot;
}&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Changing the password for an existing user account is a two-step process. You must first obtain the userID of the user before changing the password:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Fetch the user ID for user &quot;testuser1&quot;
detUserId=$(curl -s -X &apos;GET&apos; \
  &quot;${detMaster}/api/v1/users&quot; \
  -H &apos;accept: application/json&apos; \
  -H &quot;Authorization: Bearer $MyToken&quot;| jq &apos;.users[] | select(.username==&quot;testuser1&quot;) | .id&apos; | tr -d &apos;&quot;&apos;)
echo &quot; the determined AI user ID for user testuser1 is : $detUserId&quot;

# Set password for the user account “testuser1”
curl -X &apos;POST&apos; \
&quot;${detMaster}/api/v1/users/${detUserId}/password&quot; \
  -H &apos;accept: application/json&apos; \
  -H &apos;Content-Type: application/json&apos; \
  -H &quot;Authorization: Bearer $MyToken&quot; \
  -d &apos;&quot;&amp;#x3C;userPassword&gt;&quot;&apos;
&lt;/code&gt;&lt;/pre&gt;
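&lt;p&gt;If you prefer Python over shell scripting, the same sequence of calls can be expressed with the &lt;em&gt;&lt;strong&gt;requests&lt;/strong&gt;&lt;/em&gt; library. The snippet below is a minimal, illustrative translation of the cURL commands above; the endpoint URL and the passwords are placeholders that you need to replace.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import requests

# Same Determined Master endpoint as the DET_MASTER variable (placeholder values)
det_master = &quot;http://gateway2.&amp;#x3C;mydomain.name&gt;:13047&quot;

# Authenticate as the admin user and keep the bearer token for subsequent calls
login = requests.post(f&quot;{det_master}/api/v1/auth/login&quot;,
                      json={&quot;username&quot;: &quot;admin&quot;, &quot;password&quot;: &quot;&amp;#x3C;MyPassword&gt;&quot;})
login.raise_for_status()
headers = {&quot;Authorization&quot;: &quot;Bearer &quot; + login.json()[&quot;token&quot;]}

# Create the new user account &quot;testuser1&quot;
requests.post(f&quot;{det_master}/api/v1/users&quot;, headers=headers,
              json={&quot;user&quot;: {&quot;username&quot;: &quot;testuser1&quot;, &quot;admin&quot;: False, &quot;active&quot;: True},
                    &quot;password&quot;: &quot;&amp;#x3C;UserPassword&gt;&quot;}).raise_for_status()

# Fetch the user ID, then set the password for that account
users = requests.get(f&quot;{det_master}/api/v1/users&quot;, headers=headers).json()[&quot;users&quot;]
user_id = next(u[&quot;id&quot;] for u in users if u[&quot;username&quot;] == &quot;testuser1&quot;)
requests.post(f&quot;{det_master}/api/v1/users/{user_id}/password&quot;,
              headers=headers, json=&quot;&amp;#x3C;UserPassword&gt;&quot;).raise_for_status()
&lt;/code&gt;&lt;/pre&gt;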
&lt;blockquote&gt;
&lt;p&gt;Note: The open-source version of Determined does not provide user access control features for cases where you have multiple data science teams (i.e., multiple tenants). Open-source Determined uses a local user directory as a convenient way to show the entities created by logged-in users. However, the open-source version makes every entity (experiments, tasks) visible to all users, regardless of who created it. This can be a challenge for enterprises that need to maintain strong model governance for audit purposes. The enterprise version of the open-source Determined product, &lt;a href=&quot;https://www.hpe.com/us/en/solutions/artificial-intelligence/machine-learning-development-environment.html&quot;&gt;HPE Machine Learning Development Environment&lt;/a&gt;, addresses this limitation.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Checking connectivity to the WebUI using the newly created user account&lt;/h2&gt;
&lt;p&gt;A good way to verify that a member of the data science team can interact with Determined is to test connectivity to the WebUI. The WebUI is available on the same service endpoint URL as the CLI. Using my browser, I connect to the Master service URL and verify that I am prompted to log in to the WebUI, as shown in the following figure:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/determined-webui-login.png&quot; alt=&quot;Determined WebUI login page&quot; title=&quot;Determined WebUI login page&quot;&gt;&lt;/p&gt;
&lt;p&gt;Upon successful login, I land on the &lt;em&gt;&lt;strong&gt;dashboard&lt;/strong&gt;&lt;/em&gt; below. You’ll learn more about the WebUI in my second blog post in this series.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: The link at the bottom left of the menu bar shows that, with access to a running Determined instance, I can also explore a Swagger UI version of the REST API interactively.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;img src=&quot;/img/determined-webui-dashboard.png&quot; alt=&quot;Determined WebUI Dashboard&quot; title=&quot;Determined WebUI Dashboard&quot;&gt;&lt;/p&gt;
&lt;p&gt;That’s it! Everything is set. I am now ready to put on my data scientist hat and use Determined to train and tune a deep learning model with the CLI, visualize training results in the WebUI, and load and test models by making inferences.&lt;/p&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;As you can see, wearing my IT operations manager’s hat, I deployed Determined on a Kubernetes cluster running on HPE Ezmeral Runtime Enterprise, which provides all the components needed to run Determined: a task scheduler such as Kubernetes, a namespace, multi-tenancy, an ingress gateway, persistent storage for experiment tracking, and a shared file system for storing model artifacts and datasets.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.hpe.com/blog/deep-learning-model-training-%E2%80%93-a-first-time-user%E2%80%99s-experience-with-determined-%E2%80%93-part-2/&quot;&gt;In the second post in this series&lt;/a&gt;, I will walk through training a TensorFlow Keras model in Determined using features such as distributed training and automatic model tuning with hyperparameter search.&lt;/p&gt;
&lt;p&gt;You can get updates from the HPE Dev Community by subscribing to our &lt;a href=&quot;https://developer.hpe.com/community&quot;&gt;newsletter&lt;/a&gt;. I was able to write this post by joining and receiving help from the &lt;a href=&quot;https://join.slack.com/t/determined-community/shared_invite/zt-cnj7802v-KcVbaUrIzQOwmkmY7gP0Ew&quot;&gt;Determined Community Slack&lt;/a&gt;, which you can also do. You can begin training models with Determined today by visiting the &lt;a href=&quot;https://github.com/determined-ai/determined&quot;&gt;Determined project on GitHub&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[New workshops, new code!]]></title><link>https://developer.hpe.com/2022-April-05/</link><guid isPermaLink="false">https://developer.hpe.com/2022-April-05/</guid><pubDate>Tue, 05 Apr 2022 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Highlighting key features of HPE Ezmeral Runtime Enterprise Release 5.4]]></title><description><![CDATA[It’s a cliché to say that data is an important asset to organizations. Organizations of all sizes are looking at data as a strategic asset…]]></description><link>https://developer.hpe.com/highlighting-key-features-of-hpe-ezmeral-runtime-enterprise-release-5-4/</link><guid isPermaLink="false">https://developer.hpe.com/highlighting-key-features-of-hpe-ezmeral-runtime-enterprise-release-5-4/</guid><pubDate>Thu, 31 Mar 2022 18:18:53 GMT</pubDate><content:encoded>&lt;p&gt;It’s a cliché to say that data is an important asset to organizations. Organizations of all sizes are looking at data as a strategic asset to help them create a digital advantage. From delivering frictionless customer experiences and fraud detection, to accelerating breakthrough innovations in healthcare and personal medicine, enterprises are moving into the digital economy very rapidly. Data-first artificial intelligence (AI) and machine learning (ML) initiatives are seen as a strategic enabler and a top investment priority for these organizations. Yet, 80-85% of Enterprises* find it difficult to move their AI/ML initiatives beyond the experimentation stage and thus fail to deliver any meaningful business outcomes.&lt;/p&gt;
&lt;h6&gt;* Gartner: Don&apos;t Stumble at the Last Mile: Leveraging MLOps and DataOps to Operationalize ML and AI&lt;/h6&gt;
&lt;h2&gt;Common operational challenges&lt;/h2&gt;
&lt;p&gt;I speak to a lot of customers across different industries and verticals. It is fascinating to see that they all have many challenges in common. Some that I hear repeatedly are:&lt;/p&gt;
&lt;h4&gt;Fragmented data and governance controls&lt;/h4&gt;
&lt;p&gt;It is true that enterprises have large amounts of data at their disposal. However, the data is often siloed, with inconsistent governance controls, which makes it difficult for the consumers of data, namely, Data Engineers, Data Analysts and Data Scientists, to access the data in a timely, safe and self-service fashion.&lt;/p&gt;
&lt;h4&gt;Enabling modern analytics at scale&lt;/h4&gt;
&lt;p&gt;Enterprises are looking beyond Hadoop to modernize their analytics stacks, considering running applications such as Apache Spark™ on Kubernetes. While the new approach yields several benefits, including self-service access to Spark clusters, elastic and independent scaling of compute and storage, rolling upgrades to newer Spark versions, etc., the transition from the legacy world is not that simple. There is a learning curve associated with operationalizing Kubernetes, putting the right security and access controls in place, establishing data connectivity to many different data sources, etc., just to name a few considerations.&lt;/p&gt;
&lt;p&gt;In addition, every persona has a certain preference for using their tool of choice to interact with Spark. Then, there is the data consistency problem. While data lakes are cost efficient, ensuring the data consistency and data reliability that is required for AI/ML applications is a huge challenge.&lt;/p&gt;
&lt;h4&gt;Runtime environment consistency&lt;/h4&gt;
&lt;p&gt;Deploying Kubernetes has become a lot simpler today, which leads to different development teams easily spinning up new clusters. However, these clusters may not be in compliance or meet enterprise security standards. The question is, how do you make it easier for application teams to self-provision Kubernetes clusters and, at the same time, enforce immutable security controls that are centrally governed?&lt;/p&gt;
&lt;h2&gt;HPE Ezmeral Runtime Enterprise – Modern approach to delivering end-to-end analytics solutions at scale&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/software/ezmeral-runtime.html&quot;&gt;HPE Ezmeral Runtime Enterprise&lt;/a&gt; is a turnkey platform designed to support clients’ AI, analytics and data needs, as well as help them innovate faster. Built on the foundation of Kubernetes, the Ezmeral Runtime Enterprise platform brings simplicity to operationalizing the ML lifecycle through &lt;a href=&quot;https://www.hpe.com/us/en/solutions/ezmeral-machine-learning-operations.html&quot;&gt;HPE Ezmeral ML Ops &lt;/a&gt;and &lt;a href=&quot;https://docs.containerplatform.hpe.com/54/reference/HPE_Ezmeral_Runtime_Analytics_for_Spark.html&quot;&gt;HPE Ezmeral Runtime Enterprise Analytics for Apache Spark&lt;/a&gt; at the edge, on-premises or on public clouds with an infrastructure agnostic approach to deploying and managing the analytics stack.&lt;/p&gt;
&lt;p&gt;HPE Ezmeral Runtime Enterprise natively includes &lt;em&gt;&lt;strong&gt;&lt;a href=&quot;https://www.hpe.com/us/en/software/ezmeral-data-fabric.html&quot;&gt;Ezmeral Data Fabric&lt;/a&gt;&lt;/strong&gt;&lt;/em&gt; – a modern data platform with global namespace and built-in capabilities to store files, objects, event stream store, databases – binary and JSON db – and provide access to data via various protocols including NFS, HDFS, S3 and POSIX. HPE Ezmeral Runtime Enterprise also provides the flexibility to connect to existing data lakes and data sources such as HDFS, S3 Object stores and third-party storage systems, to enable analytical applications running on K8s clusters remote access to data, thus enabling compute-storage separation.&lt;/p&gt;
&lt;p&gt;The latest 5.4 release of HPE Ezmeral Runtime Enterprise includes quite a few new capabilities, so without further delay, let me highlight these new features.&lt;/p&gt;
&lt;h4&gt;HPE Ezmeral Unified Analytics&lt;/h4&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/software/ezmeral-unified-analytics.html&quot;&gt;HPE Ezmeral Unified Analytics&lt;/a&gt; is a modern approach to delivering elastic open-source based Spark clusters combined with the Delta Lake for a data lakehouse architecture solution. More on this later. But before that, let me walk you through some common challenges that I hear repeatedly from both Spark administrators and users (think Data Engineers, Data Analysts, and Data Scientists):&lt;/p&gt;
&lt;p&gt;Challenges of a Spark administrator:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Provision multiple versions of Spark to cater to different application teams’ needs&lt;/li&gt;
&lt;li&gt;Consistent user authentication and authorization from application to downstream data layer&lt;/li&gt;
&lt;li&gt;Efficient use of resources, including expensive GPU accelerators&lt;/li&gt;
&lt;li&gt;Ability to connect to diverse sets of data sources&lt;/li&gt;
&lt;li&gt;Enterprise support with SLAs&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Spark users, on the other hand, have their own concerns:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Ability to use REST, CLI, or scripted workflows for Spark job submissions&lt;/li&gt;
&lt;li&gt;Desire to use CPU and/or GPU for Spark jobs, including for dynamic scaling&lt;/li&gt;
&lt;li&gt;Multiple storage options – access to S3, MapR FS, and Hadoop data lakes&lt;/li&gt;
&lt;li&gt;Support for real-time and batch workloads at scale&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;How does HPE Ezmeral Unified Analytics solve these challenges?&lt;/h2&gt;
&lt;p&gt;Let me summarize the features and benefits here.&lt;/p&gt;
&lt;h4&gt;Multi-version Spark support&lt;/h4&gt;
&lt;p&gt;One-click deployment of a &lt;a href=&quot;https://docs.containerplatform.hpe.com/54/reference/kubernetes-applications/spark/Spark_Overview.html&quot;&gt;Spark Operator&lt;/a&gt; on a Kubernetes cluster. This single Spark Operator manages Spark version 2.4.7 and Spark version 3.1.2 and enables users to run multi-version Spark applications. This also includes Spark images with support for multiple languages, including Python, R, Java, and Scala.&lt;/p&gt;
&lt;h4&gt;Interactive Spark experience through Livy&lt;/h4&gt;
&lt;p&gt;This is for personas that are adept at using REST APIs to interact with Spark. HPE has included &lt;a href=&quot;https://docs.containerplatform.hpe.com/54/reference/kubernetes-applications/spark/submit_spark_application_using_livy.html&quot;&gt;Livy support&lt;/a&gt; for both Spark 2 and 3 versions. This allows personas, like Data Scientists, to interact with Spark from their Jupyter notebooks. You simply launch the Livy server with one click, connect to the server endpoint, and start submitting Spark jobs.&lt;/p&gt;
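&lt;p&gt;To give you an idea of what this interaction looks like, here is a minimal sketch in Python that uses the standard Livy REST endpoints (/sessions and /sessions/{id}/statements) to start a PySpark session and run a single statement. The Livy server URL below is a placeholder for the endpoint exposed by your deployment.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import time
import requests

livy = &quot;http://&amp;#x3C;livy-endpoint&gt;:8998&quot;   # placeholder for the Livy server endpoint

# Start an interactive PySpark session and wait for it to become idle
session_id = requests.post(f&quot;{livy}/sessions&quot;, json={&quot;kind&quot;: &quot;pyspark&quot;}).json()[&quot;id&quot;]
while requests.get(f&quot;{livy}/sessions/{session_id}&quot;).json()[&quot;state&quot;] != &quot;idle&quot;:
    time.sleep(5)

# Submit a statement, then poll until its result is available
stmt_id = requests.post(f&quot;{livy}/sessions/{session_id}/statements&quot;,
                        json={&quot;code&quot;: &quot;spark.range(1000).count()&quot;}).json()[&quot;id&quot;]
result = requests.get(f&quot;{livy}/sessions/{session_id}/statements/{stmt_id}&quot;).json()
while result[&quot;state&quot;] != &quot;available&quot;:
    time.sleep(2)
    result = requests.get(f&quot;{livy}/sessions/{session_id}/statements/{stmt_id}&quot;).json()

print(result[&quot;output&quot;][&quot;data&quot;][&quot;text/plain&quot;])   # prints 1000
&lt;/code&gt;&lt;/pre&gt;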
&lt;h4&gt;Interaction with BI tools&lt;/h4&gt;
&lt;p&gt;The &lt;a href=&quot;https://docs.containerplatform.hpe.com/54/reference/kubernetes-applications/spark/installing-and-configuring-spark-thrift-server.html&quot;&gt;Thrift Server&lt;/a&gt; component provides a JDBC/ODBC interface for enabling Data Analysts to interact with Spark from BI tools such as Tableau, PowerBI, Qlik, etc.&lt;/p&gt;
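&lt;p&gt;The same Thrift endpoint can also be queried programmatically. As a rough, generic illustration (not specific to HPE Ezmeral), the sketch below uses the PyHive client to run a SQL query against a Spark Thrift Server on the default HiveServer2-compatible port; the host, user, and table are placeholders.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from pyhive import hive   # pip install &apos;pyhive[hive]&apos;

# Connect to the Spark Thrift Server over the HiveServer2 protocol (placeholders)
conn = hive.connect(host=&quot;&amp;#x3C;thrift-server-host&gt;&quot;, port=10000, username=&quot;&amp;#x3C;user&gt;&quot;)
cursor = conn.cursor()

# Run a SQL query exactly as a JDBC/ODBC BI tool would
cursor.execute(&quot;SELECT country, count(*) AS orders FROM sales GROUP BY country LIMIT 10&quot;)
for row in cursor.fetchall():
    print(row)

cursor.close()
conn.close()
&lt;/code&gt;&lt;/pre&gt;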
&lt;h4&gt;Shared Hive metastore service&lt;/h4&gt;
&lt;p&gt;The &lt;a href=&quot;https://docs.containerplatform.hpe.com/54/reference/kubernetes-applications/spark/install_configure_hive_metastore.html&quot;&gt;Hive metastore service&lt;/a&gt; enables multiple Spark applications running across different tenants or K8s clusters to share common metadata/schema.&lt;/p&gt;
&lt;h4&gt;Intuitive Job submission interface&lt;/h4&gt;
&lt;p&gt;This provides a &lt;a href=&quot;https://docs.containerplatform.hpe.com/54/reference/kubernetes-applications/spark/managing_spark_applications_using_gui.html&quot;&gt;wizard-driven UI&lt;/a&gt; for tenant users to submit their Spark applications. HPE Ezmeral Runtime Enterprise also allows users to bring their own YAML files and upload them to the UI to run a Spark job immediately, or schedule it to run at a specified time interval. Users can monitor the status of running jobs, as well as look at the events and log files, all from an intuitive user interface.&lt;/p&gt;
&lt;h4&gt;Airflow DAGs&lt;/h4&gt;
&lt;p&gt;Data engineers responsible for building data pipelines often rely on the automation provided by workflow orchestration tools, like &lt;a href=&quot;https://docs.containerplatform.hpe.com/54/reference/kubernetes/kubernetes-administrator/airflow/airflow_installation.html&quot;&gt;Airflow&lt;/a&gt;, to sequence a set of tasks as a Directed Acyclic Graph (DAG). HPE Ezmeral Runtime Enterprise includes an Airflow operator, enabled with one click during Kubernetes cluster creation. Once it is enabled, users create and submit Spark jobs using DAGs. Users can store their DAGs in Git, version-control them, and configure HPE Ezmeral Runtime Enterprise to run them from the Git repository.&lt;/p&gt;
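&lt;p&gt;If you have not worked with Airflow before, a DAG is simply a small Python file that declares tasks and their ordering. The sketch below is a generic, minimal example (not tied to HPE Ezmeral specifics) that runs a Spark job submission after a data preparation step; the commands and schedule are placeholders.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id=&quot;daily_spark_pipeline&quot;,     # name shown in the Airflow UI
    start_date=datetime(2022, 1, 1),
    schedule_interval=&quot;@daily&quot;,        # run once per day
    catchup=False,
) as dag:

    prepare_data = BashOperator(
        task_id=&quot;prepare_data&quot;,
        bash_command=&quot;echo &apos;staging input data&apos;&quot;,   # placeholder preparation step
    )

    run_spark_job = BashOperator(
        task_id=&quot;run_spark_job&quot;,
        bash_command=&quot;spark-submit --deploy-mode cluster /opt/jobs/etl_job.py&quot;,  # placeholder
    )

    prepare_data &gt;&gt; run_spark_job      # run the Spark job only after data prep succeeds
&lt;/code&gt;&lt;/pre&gt;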
&lt;h4&gt;Delta Lake support&lt;/h4&gt;
&lt;p&gt;&lt;a href=&quot;https://docs.containerplatform.hpe.com/54/reference/Whats_New_in_Version.html?hl=delta%2Clake&quot;&gt;Delta Lake&lt;/a&gt; is a storage layer that enables a lakehouse architecture on Spark. It brings the best data warehouse characteristics, such as ACID properties, schema enforcement, and versioning, to unstructured and semi-structured data stored in a massively scalable and cost-optimized data lake. With this, you get the data consistency and data reliability that are key requirements in environments where multiple concurrent data pipelines act on the same data. The HPE Ezmeral Unified Analytics stack includes the open-source Delta libraries by default, enabling applications to read and write data using the delta format in Parquet files on HPE Ezmeral Data Fabric or on any S3-compatible object store.&lt;/p&gt;
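&lt;p&gt;As a quick illustration of what the delta format looks like in code, here is a minimal PySpark sketch that writes and reads a Delta table. The two configuration lines enable the open-source Delta Lake extensions (since the HPE Ezmeral Unified Analytics stack bundles the delta libraries, they may already be configured for you); the s3a path is a placeholder.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName(&quot;delta-example&quot;)
    # Enable the open-source Delta Lake extensions
    .config(&quot;spark.sql.extensions&quot;, &quot;io.delta.sql.DeltaSparkSessionExtension&quot;)
    .config(&quot;spark.sql.catalog.spark_catalog&quot;,
            &quot;org.apache.spark.sql.delta.catalog.DeltaCatalog&quot;)
    .getOrCreate()
)

# Write a DataFrame as a Delta table: Parquet data files plus a transaction log
df = spark.createDataFrame([(1, &quot;alice&quot;), (2, &quot;bob&quot;)], [&quot;id&quot;, &quot;name&quot;])
df.write.format(&quot;delta&quot;).mode(&quot;overwrite&quot;).save(&quot;s3a://my-datalake/tables/users&quot;)

# ACID-safe append from another writer, then read the table back
spark.createDataFrame([(3, &quot;carol&quot;)], [&quot;id&quot;, &quot;name&quot;]) \
    .write.format(&quot;delta&quot;).mode(&quot;append&quot;).save(&quot;s3a://my-datalake/tables/users&quot;)

spark.read.format(&quot;delta&quot;).load(&quot;s3a://my-datalake/tables/users&quot;).show()
&lt;/code&gt;&lt;/pre&gt;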
&lt;h4&gt;Read and write data from different data sources&lt;/h4&gt;
&lt;p&gt;HPE Ezmeral Runtime Enterprise provides &lt;a href=&quot;https://docs.containerplatform.hpe.com/54/reference/universal-concepts/About_DataTaps.html&quot;&gt;DataTap&lt;/a&gt; to connect Spark applications to remote HDFS data sources, thereby bringing compute closer to data. You can also connect Spark apps to any S3-compatible object store to access and store data.&lt;/p&gt;
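&lt;p&gt;In application code, the difference between the two options is mostly the path and a few Hadoop settings. The sketch below is illustrative only: it assumes the dtap:// URI scheme used by DataTap and a generic S3-compatible endpoint, with placeholder paths and credentials.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName(&quot;datatap-and-s3-example&quot;)
    # Hadoop s3a settings for an S3-compatible object store (placeholders)
    .config(&quot;spark.hadoop.fs.s3a.endpoint&quot;, &quot;https://objectstore.example.com:9000&quot;)
    .config(&quot;spark.hadoop.fs.s3a.access.key&quot;, &quot;ACCESS_KEY&quot;)
    .config(&quot;spark.hadoop.fs.s3a.secret.key&quot;, &quot;SECRET_KEY&quot;)
    .config(&quot;spark.hadoop.fs.s3a.path.style.access&quot;, &quot;true&quot;)
    .getOrCreate()
)

# Read from a remote HDFS data source exposed through a DataTap (placeholder path)
events_from_hdfs = spark.read.parquet(&quot;dtap://TenantStorage/warehouse/events/&quot;)

# Read the same kind of data directly from the S3-compatible object store
events_from_s3 = spark.read.parquet(&quot;s3a://datalake/events/&quot;)

print(events_from_hdfs.count(), events_from_s3.count())
&lt;/code&gt;&lt;/pre&gt;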
&lt;h4&gt;Enterprise-grade Security&lt;/h4&gt;
&lt;p&gt;Enterprise-grade security secures access to the Spark clusters. Users are authenticated against AD/LDAP, and authenticated users are allowed to perform operations on Spark clusters based on role-based access controls (RBAC). The identity of the authenticated and authorized user is preserved during data access.&lt;/p&gt;
&lt;p&gt;Hopefully, this gives you enough information to whet your appetite for the HPE Ezmeral Unified Analytics solution.&lt;/p&gt;
&lt;h2&gt;Let’s move on to HPE Ezmeral ML Ops&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://docs.containerplatform.hpe.com/54/reference/universal-concepts/About_HPE_Ezmeral_ML_Ops.html&quot;&gt;HPE Ezmeral ML Ops&lt;/a&gt; is an end-to-end machine learning lifecycle management platform for building, training and operationalizing ML models. Here are some of the key features of HPE Ezmeral ML Ops in release 5.4.&lt;/p&gt;
&lt;h4&gt;Introducing support for KubeFlow (KF) 1.3&lt;/h4&gt;
&lt;p&gt;Single-click deployment of &lt;a href=&quot;https://docs.containerplatform.hpe.com/54/reference/kubernetes/kubernetes-administrator/kubeflow/kubeflow.html&quot;&gt;KF 1.3&lt;/a&gt;. This includes integration with MLflow for model management and experiment tracking, and improved collaboration among data scientists working on projects to share ML models and model metadata.&lt;/p&gt;
&lt;h4&gt;Enhancing data scientist productivity&lt;/h4&gt;
&lt;p&gt;&lt;strong&gt;Kale extensions:&lt;/strong&gt; Kale stands for KubeFlow Automated Pipeline Engine. This is an add-on to Jupyter notebooks that provides a simple way for data scientists to annotate notebook cells, as opposed to writing complex code to craft KF pipelines.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Prebuilt library of functions:&lt;/strong&gt; The release adds “ezmllib”, which includes several functions that make it easy for code-first data scientists to interact with Spark, KubeFlow, the ML model registry, etc. from their notebooks, simplifying the coding experience.&lt;/p&gt;
&lt;h4&gt;Accelerating Model training and inferencing&lt;/h4&gt;
&lt;p&gt;HPE Ezmeral ML Ops adds support for &lt;a href=&quot;https://docs.containerplatform.hpe.com/54/reference/nvidia-gpu-support/nvidia-gpus.html&quot;&gt;NVIDIA MIG-enabled GPUs&lt;/a&gt;, which data scientists can leverage in their notebooks for model building and training. ML/DL frameworks such as TensorFlow and PyTorch have been updated with the right CUDA (Compute Unified Device Architecture) libraries to take advantage of fractionalized GPUs, providing cost efficiency and secure isolation of workloads.&lt;/p&gt;
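&lt;p&gt;A quick way for a data scientist to confirm that the GPU (or MIG slice) allocated to a notebook is actually visible to the framework is a two-line check; here is a minimal sketch with PyTorch:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import torch

# True when a GPU device or MIG slice has been allocated to this container
print(torch.cuda.is_available())

if torch.cuda.is_available():
    # Reports the device (or MIG profile) name exposed through CUDA
    print(torch.cuda.get_device_name(0))
&lt;/code&gt;&lt;/pre&gt;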
&lt;h2&gt;HPE Ezmeral Runtime Control plane – The wizard behind the curtain&lt;/h2&gt;
&lt;p&gt;The HPE Ezmeral Runtime Enterprise control plane is responsible for lifecycle management of on-premises K8s clusters; orchestrating cloud provider-managed K8s clusters – EKS, AKS and GKE; managing user identity and access controls; and securing ingress and access to service endpoints, to name a few of its capabilities.&lt;/p&gt;
&lt;h4&gt;New features in this release include:&lt;/h4&gt;
&lt;p&gt;&lt;strong&gt;Fractional GPU support for NVIDIA MIG-enabled devices:&lt;/strong&gt; Multi-instance GPU (MIG) provides a way to partition GPU resources into slices to improve GPU utilization, as well as offer workload security through fault isolation. HPE Ezmeral Runtime Enterprise has been enhanced to set up MIG profiles (single and mixed strategy) and expose those to workloads (Tensorflow, Pytorch, etc.) via CUDA libraries. Different classes of GPU devices (V, P, T, A classes of NVIDIA GPUs) can coexist on the Kubernetes hosts and HPE Ezmeral Runtime Enterprise can orchestrate workload placement on the right GPU device. In addition, HPE Ezmeral Runtime Enterprise automatically persists these MIG profiles to survive host reboots.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Day 2 enhancements for centralized policy management:&lt;/strong&gt; The previous 5.3 release included a GitOps-based centralized policy management feature to declaratively construct K8s clusters that conform to an organization’s policies. The policy management feature also brought in drift management and reconciliation capabilities to create immutable clusters. The 5.4 release offers a simpler way for admins to visualize the effects of cluster policies on a dashboard. Admins can quickly see which of the existing Kubernetes objects and workloads are out of compliance with the policy set, or which new objects are being blocked from creation by the enforced policies. This way, admins can make better decisions about the effects of policies and fine-tune them as needed.&lt;/p&gt;
&lt;h2&gt;Where to go from here&lt;/h2&gt;
&lt;p&gt;As you can tell, there are quite a few new features and capabilities included in the HPE Ezmeral Runtime Enterprise 5.4 release and we are excited to share them with you.&lt;/p&gt;
&lt;p&gt;To learn more about the HPE Ezmeral products, please contact the HPE Sales team, or visit &lt;a href=&quot;https://www.hpe.com/us/en/software.html&quot;&gt;www.hpe.com/ezmeral&lt;/a&gt; to explore how HPE Ezmeral Runtime Enterprise can accelerate your AI and Analytics journey. To learn more about this release, please refer to the &lt;a href=&quot;https://docs.containerplatform.hpe.com/54/&quot;&gt;documentation and release notes&lt;/a&gt;. More information on &lt;a href=&quot;https://developer.hpe.com/platform/hpe-ezmeral/home/&quot;&gt;HPE Ezmeral&lt;/a&gt; can be found on the HPE Developer portal. You can view other articles on HPE Ezmeral here on the &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE DEV blog&lt;/a&gt;.&lt;/p&gt;
&lt;h6&gt;Apache® and Apache Spark™ are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. No endorsement by the Apache Software Foundation is implied by the use of these marks.&lt;/h6&gt;</content:encoded></item><item><title><![CDATA[File, objects, databases and streams – Oh my!]]></title><description><![CDATA[Analytics and machine learning (ML) have become key elements of every data-driven enterprise. The good news is that organizations have a lot…]]></description><link>https://developer.hpe.com/file-objects-databases-and-streams-–-oh-my/</link><guid isPermaLink="false">https://developer.hpe.com/file-objects-databases-and-streams-–-oh-my/</guid><pubDate>Thu, 31 Mar 2022 15:49:36 GMT</pubDate><content:encoded>&lt;p&gt;Analytics and machine learning (ML) have become key elements of every data-driven enterprise. The good news is that organizations have a lot of data at their disposal. The not-so-good news is that this data is distributed across data lakes, warehouses, edge, core, and data centers, making it more difficult for data engineers and scientists to do their jobs.  HPE Ezmeral Data Fabric solves this problem by ingesting and cleansing hybrid data distributed across edge to cloud into a single logical infrastructure providing data teams with a unified data source that increases data integrity and trust in analytic insights.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/architecture-edf-7-1140x542.png&quot; alt=&quot;Figure 1 Solution stack for HPE Ezmeral Data Fabric&quot; title=&quot;Figure 1 Solution stack for HPE Ezmeral Data Fabric&quot;&gt;&lt;/p&gt;
&lt;h5&gt;&lt;em&gt;Figure 1 Solution stack for HPE Ezmeral Data Fabric&lt;/em&gt;&lt;/h5&gt;
&lt;h2&gt;Introducing HPE Ezmeral Data Fabric 7.0 with support for S3 compatible object store and FIPS compliance&lt;/h2&gt;
&lt;p&gt;The&lt;a href=&quot;https://community.hpe.com/t5/HPE-Ezmeral-Uncut/HPE-Ezmeral-News-GA-for-the-Enterprise-Data-and-Unified/ba-p/7163244&quot;&gt; latest release of HPE Ezmeral Data Fabric&lt;/a&gt; adds support for the fastest-growing data type today: objects. This release makes HPE Ezmeral Data Fabric the industry’s first data fabric to centralize files, objects, NoSQL databases and streams into a single data store, simplifying data lifecycle and operational management. Key features of this release include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A high-performance object store&lt;/li&gt;
&lt;li&gt;Federal Information Processing Standard (FIPS) compliance&lt;/li&gt;
&lt;li&gt;Security enhancements&lt;/li&gt;
&lt;li&gt;Dynamic data masking for JSON document databases&lt;/li&gt;
&lt;li&gt;Performance improvements using Remote Direct Memory Access (RDMA)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Let’s examine these features in detail.&lt;/p&gt;
&lt;h2&gt;High-Performance Object Store&lt;/h2&gt;
&lt;p&gt;The high-performance object store optimizes objects of all sizes for both performance and storage efficiency. With HPE Ezmeral Data Fabric, multi-protocol access, using the S3 API or standard interfaces, makes objects available to a wider set of traditional and cloud-native applications. As part of this, HPE Ezmeral Data Fabric now allows you to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Be Write Once Read Many (WORM) compliant, helping organizations that must satisfy record keeping rules that ensure data is not tampered with in any possible way. This governance and compliance for data retention periods is very useful in financial and healthcare vertical markets.&lt;/li&gt;
&lt;li&gt;Integrate with MCS: With the new object store UI, administrators can manage accounts, buckets, objects, access policies, and users through a simple, intuitive interface.&lt;/li&gt;
&lt;li&gt;Archive data and build on-premises applications or migrate to cloud-native applications.&lt;/li&gt;
&lt;li&gt;Store media for operational use with fast retrieval, reduce costs of storing globally distributed media, such as music, video, and images.&lt;/li&gt;
&lt;li&gt;Run analytics on object data with tools like Apache Spark, Apache Drill, Presto, and &lt;a href=&quot;https://docs-datafabric.mip.storage.hpecorp.net/70/MapROverview/query-s3-select.html&quot;&gt;S3 Select&lt;/a&gt; to gain valuable insights into customers, operations, or markets.&lt;/li&gt;
&lt;li&gt;Store ML model data and share the ML models in real-time with downstream applications.&lt;/li&gt;
&lt;li&gt;Publish S3 events to HPE Ezmeral Data Fabric streams for monitoring activity.&lt;/li&gt;
&lt;li&gt;Be compatible with the open-source AWS S3 SDK and MinIO SDK (see the example after this list).&lt;/li&gt;
&lt;/ul&gt;
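&lt;p&gt;Because the object store speaks the S3 API, existing S3 tooling works largely unchanged. The sketch below uses the AWS boto3 SDK pointed at the object store; the endpoint, credentials, bucket, and file names are placeholders.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import boto3

# Point the standard AWS SDK at the Data Fabric object store endpoint (placeholders)
s3 = boto3.client(
    &quot;s3&quot;,
    endpoint_url=&quot;https://datafabric.example.com:9000&quot;,
    aws_access_key_id=&quot;ACCESS_KEY&quot;,
    aws_secret_access_key=&quot;SECRET_KEY&quot;,
)

# Create a bucket and upload a trained model artifact
s3.create_bucket(Bucket=&quot;ml-models&quot;)
s3.upload_file(&quot;model.pkl&quot;, &quot;ml-models&quot;, &quot;churn/model-v1.pkl&quot;)

# List what is stored in the bucket
for obj in s3.list_objects_v2(Bucket=&quot;ml-models&quot;).get(&quot;Contents&quot;, []):
    print(obj[&quot;Key&quot;], obj[&quot;Size&quot;])
&lt;/code&gt;&lt;/pre&gt;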
&lt;h2&gt;Federal Information Processing Standard Support&lt;/h2&gt;
&lt;p&gt;The Federal Information Processing Standard (FIPS) is a US government standard used to approve cryptographic modules. FIPS-validated products give users the assurance that data within the product is protected using cryptographic algorithms meeting the stringent guidelines and testing procedures established by the FIPS standard. FIPS was established by the National Institute of Standards and Technology (NIST) and defines critical security parameters that vendors must use for encryption. Products sold to the US government must meet FIPS validation criteria. In addition, there is a growing need for organizations processing sensitive data, such as banks, financial institutions, legal and medical institutions, to have products that are FIPS 140-2/3 validated.&lt;/p&gt;
&lt;p&gt;This release of HPE Ezmeral Data Fabric provides &lt;a href=&quot;https://csrc.nist.gov/projects/cryptographic-module-validation-program/certificate/819&quot;&gt;FIPS 140-2&lt;/a&gt; Level 1 compliance by leveraging operating systems that include FIPS 140-2 Level 1 certified cryptographic libraries provided by the user. It also includes support for the Bouncy Castle Java FIPS API bundled with HPE Ezmeral Data Fabric, which runs on a compatible user-supplied JDK.&lt;/p&gt;
&lt;p&gt;In this release, HPE Ezmeral Data Fabric uses the OpenSSL cryptographic module distributed with the operating systems supported by the core platform, which has obtained FIPS 140-2 Level 1 certification. It includes enhancements to the data fabric core platform that invoke FIPS 140-2 Level 1 validated cryptography when FIPS mode is enabled, and it ensures that no sensitive data is stored in plain text. The following operating systems are supported: Red Hat 8.x, Ubuntu 18.04 and 20.04, and SLES 15 SP2.&lt;/p&gt;
&lt;p&gt;Note: For all supported operating systems listed above, HPE Ezmeral Data Fabric uses the Java FIPS API from &lt;a href=&quot;https://www.bouncycastle.org/&quot;&gt;Bouncy Castle&lt;/a&gt;, which has FIPS 140-2 Level 1 approval.&lt;/p&gt;
&lt;h2&gt;Security enhancements&lt;/h2&gt;
&lt;p&gt;This release includes a number of features that harden the HPE Ezmeral Data Fabric platform. The HPE Ezmeral Data Fabric platform now:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Eliminates all clear-text passwords.&lt;/li&gt;
&lt;li&gt;Supports the separation of passwords for key stores and trust stores.&lt;/li&gt;
&lt;li&gt;Enhances data-fabric SASL to enable applications that are not cluster aware (such as data-fabric ecosystem components) to gain access to services in another cluster for which they have a ticket.&lt;/li&gt;
&lt;li&gt;Enhances the &lt;em&gt;&lt;strong&gt;mrhsm&lt;/strong&gt;&lt;/em&gt; utility. &lt;em&gt;&lt;strong&gt;mrhsm&lt;/strong&gt;&lt;/em&gt; is used to configure KMIP support and includes support for file-based key stores.&lt;/li&gt;
&lt;li&gt;Includes a new property &lt;em&gt;&lt;strong&gt;(isFips)&lt;/strong&gt;&lt;/em&gt; in the output of the &lt;em&gt;&lt;strong&gt;maprcli node list&lt;/strong&gt;&lt;/em&gt; command to indicate whether a particular node is FIPS-enabled.&lt;/li&gt;
&lt;li&gt;Offers a new ticket type: A new &lt;em&gt;&lt;strong&gt;servicewithimpersonationandticket&lt;/strong&gt;&lt;/em&gt; ticket type is introduced that allows some ticket holders to generate tickets subject to their impersonation authority.&lt;/li&gt;
&lt;li&gt;Has cross-cluster security enhancements: the &lt;em&gt;&lt;strong&gt;configure-crosscluster.sh&lt;/strong&gt;&lt;/em&gt; script automates establishing security between two clusters. New options are provided to specify trust store passwords. The file &lt;em&gt;&lt;strong&gt;/etc/hadoop/ssl-server.xml&lt;/strong&gt;&lt;/em&gt; no longer includes the trust store passwords.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Other enhancements&lt;/h2&gt;
&lt;h4&gt;Dynamic Data Masking for JSON document database&lt;/h4&gt;
&lt;p&gt;Dynamic data masking is the ability to apply a variety of data masks in real time, on customer-designated fields in database queries, to hide sensitive data. This feature is well suited for Personally Identifiable Information (PII) and General Data Protection Regulation (GDPR) use cases.&lt;/p&gt;
&lt;h4&gt;Performance Improvements using Remote Direct Memory Access&lt;/h4&gt;
&lt;p&gt;Remote Direct Memory Access (RDMA) transfers data directly between user space process buffers on separate servers to bypass the Linux kernel and server CPU for increased performance and lower CPU utilization.&lt;/p&gt;
&lt;h4&gt;Ezmeral Ecosystem Pack (EEP)&lt;/h4&gt;
&lt;p&gt;The Ezmeral Ecosystem Pack (EEP) provides certified open-source tools and engines that can be layered directly onto the data fabric, reducing the time spent integrating and configuring open-source tools for analytics. New to this release is Apache Airflow 2.2.1, along with significant updates to Apache Spark™ 3.2.0. Enhancements have also been made to many other Hadoop components, as well as Apache Drill 1.16.1.&lt;/p&gt;
&lt;h2&gt;Get to know HPE Ezmeral Data Fabric better&lt;/h2&gt;
&lt;p&gt;Developers, data engineers and data scientists need high-quality data to deliver the trusted insights businesses run on. Trusted insights only happen when distributed data is centralized into a single data layer that is optimized for analytics use cases. HPE Ezmeral Data Fabric delivers that high-performance, unified data layer without compromising security and governance, allowing organizations to go to the next level by layering open-source engines and tools on top to deliver insights faster.&lt;/p&gt;
&lt;p&gt;For more information, please refer to the resources listed below. You can find other articles on HPE Ezmeral Data Fabric on the&lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt; HPE DEV blog&lt;/a&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;http://package.mapr.hpe.com/releases/v7.0.0&quot;&gt;Core&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;http://package.mapr.hpe.com/releases/MEP/MEP-8.1.0/&quot;&gt;EEP 8.1.0&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.datafabric.hpe.com/70/home.html&quot;&gt;Documentation and release notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Get Started Building Data Services Cloud Console API Client Libraries for Python using OpenAPI Generator]]></title><description><![CDATA[In this tutorial, you will be introduced to the process that is required to convert the HPE Data Services Cloud Console public API…]]></description><link>https://developer.hpe.com/get-started-building-dscc-api-client-libraries-for-python-using-openapi-generator/</link><guid isPermaLink="false">https://developer.hpe.com/get-started-building-dscc-api-client-libraries-for-python-using-openapi-generator/</guid><pubDate>Mon, 28 Mar 2022 16:57:29 GMT</pubDate><content:encoded>&lt;p&gt;In this tutorial, you will be introduced to the process required to convert the HPE Data Services Cloud Console public API specification, provided as an OpenAPI 3.x definition, into client libraries for several popular programming languages. The goal of this conversion process is to achieve the agility of cloud-like operations, where updates to console API client libraries are automatic and painless.&lt;/p&gt;
&lt;p&gt;Data Services Cloud Console public REST API is for customers looking to enhance their data-ops using the programmatic extensions from Data Services Cloud Console. Please see the &lt;a href=&quot;https://developer.hpe.com/platform/data-services-cloud-console/home/&quot;&gt;Data Services Cloud Console Platform page&lt;/a&gt; for detailed information about the console benefits to customer&apos;s Data-Ops operation. Please see the &lt;a href=&quot;https://developer.hpe.com/blog/getting-started-with-the-hpe-data-services-cloud-console-public-rest-api/&quot;&gt;Getting Started with the HPE Data Services Cloud Console Public REST API&lt;/a&gt; for the authentication mechanism used to access the console API.&lt;/p&gt;
&lt;p&gt;The Data Services Cloud Console API definition is available for download in either YAML or JSON format from the console API website (&lt;a href=&quot;https://console-us1.data.cloud.hpe.com/doc/api/v1/&quot;&gt;US region&lt;/a&gt;) as shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/dscc-api-spec.png&quot; alt=&quot;Data Services Cloud Console API download&quot; title=&quot;Data Services Cloud Console API specification download&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can also download the console API YAML file using the following Unix command line:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;$ wget https://console-us1.data.cloud.hpe.com/doc/api/v1/storage-api.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The definition file contains the following information:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A brief description of the API definition along with the version of the API in this file.&lt;/li&gt;
&lt;li&gt;The available regions with the base-URL that must be concatenated to every API path. For more information about each region, please see the &lt;a href=&quot;https://developer.hpe.com/blog/getting-started-with-the-hpe-data-services-cloud-console-public-rest-api/&quot;&gt;Getting Started with Data Services Cloud Console API&lt;/a&gt; blog post.&lt;/li&gt;
&lt;li&gt;Summary tags for the content of this API definition.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/the-introduction-to-the-api-definition.png&quot; alt=&quot;&quot; title=&quot;Data Services Cloud Console Open API specification (YAML)&quot;&gt;&lt;/p&gt;
&lt;p&gt;In addition, the definition file also contains:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;All available endpoints of the console resources, along with their HTTP headers, parameters, and the responses for each endpoint.&lt;/li&gt;
&lt;li&gt;The syntax of the HTTP methods (GET, POST, UPDATE, DELETE) and path (relative path).&lt;/li&gt;
&lt;li&gt;A more detailed description of the content of each response.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/examples-of-the-api-definition-end-points.png&quot; alt=&quot;&quot; title=&quot;Detail of a resource - host-initiator group&quot;&gt;&lt;/p&gt;
&lt;p&gt;With this definition file (YAML or JSON), anyone can generate client libraries for a selected programming language or scripting environment. With the client libraries, a user can programmatically consume the capabilities of Data Services Cloud Console. Currently, there are many tools on the market capable of performing this conversion. Some of the well-known OpenAPI converter tools are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://openapi-generator.tech/&quot;&gt;OpenAPI generator&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://swagger.io/tools/swagger-codegen/&quot;&gt;Swagger Codegen&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/Azure/autorest&quot;&gt;Azure AutoRest&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;REST API Client Code generator (Found within &lt;a href=&quot;https://marketplace.visualstudio.com/items?itemName=ChristianResmaHelle.ApiClientCodeGenerator&quot;&gt;Visual Studio MarketPlace&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In this tutorial, I am going to cover the popular and simple &lt;a href=&quot;https://openapi-generator.tech/&quot;&gt;OpenAPI Generator&lt;/a&gt; from soup to nuts.&lt;/p&gt;
&lt;h3&gt;&lt;em&gt;Let&apos;s get on with it!&lt;/em&gt;&lt;/h3&gt;
&lt;h3&gt;Generating Client Libraries using OpenAPI Generator painlessly&lt;/h3&gt;
&lt;p&gt;The OpenAPI Generator tool allows the automatic generation of API client libraries (SDKs), server stubs, documentation, and configuration from a given &lt;a href=&quot;https://github.com/OAI/OpenAPI-Specification&quot;&gt;OpenAPI spec&lt;/a&gt; (supporting both 2.0 and 3.0 OpenAPI formats). This tool can generate client code in more than 50 programming languages that can be consumed by various DevOps tools.&lt;/p&gt;
&lt;p&gt;The OpenAPI Generator tool is distributed in a variety of forms, so you can pick the one you are most familiar with. The OpenAPI Generator website provides four different options:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A downloadable and executable &lt;strong&gt;JAR&lt;/strong&gt; file that can be executed using &lt;strong&gt;Java Run Time tool&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;Docker&lt;/strong&gt; image that can be executed using the docker engine.&lt;/li&gt;
&lt;li&gt;Some dependencies in &lt;strong&gt;Maven&lt;/strong&gt; and &lt;strong&gt;Gradle&lt;/strong&gt; projects that can be used for building the automation tool.&lt;/li&gt;
&lt;li&gt;A node package manager (&lt;strong&gt;npm&lt;/strong&gt;) package wrapper.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These distributions, along with information about using the OpenAPI Generator, are available in the Readme section found on this &lt;a href=&quot;https://github.com/OpenAPITools/openapi-generator&quot;&gt;GitHub page&lt;/a&gt;, as shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/openapi-generator.png&quot; alt=&quot;&quot; title=&quot;OpenAPI Generator GitHub Page&quot;&gt;&lt;/p&gt;
&lt;p&gt;The key piece of information on this GitHub page is the latest stable version number to use for the conversion. It is shown in the right-hand column of the page:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/openapi-generator-version-locatoin.png&quot; alt=&quot;&quot; title=&quot;Stable version for the openAPI generator project&quot;&gt;&lt;/p&gt;
&lt;p&gt;In this tutorial, let&apos;s take a look at the simplest and most painless way to use the OpenAPI Generator, which is the JAR file. Using the JAR file doesn&apos;t require installing any application at all: it can be downloaded and executed directly from a command-line window on your workstation. The minimum requirement for executing the JAR file is that your workstation has Java Runtime Environment (JRE) version 8 deployed.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; For more information about deploying the Java Runtime executables for your workstation&apos;s operating system, please take a look at the installation page on &lt;a href=&quot;https://www.java.com/en/download/help/download_options.html&quot;&gt;the Java website&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The JAR file for this OpenAPI generator is available at Maven.org. You can download it from the following &lt;a href=&quot;https://repo1.maven.org/maven2/org/openapitools/openapi-generator-cli/5.4.0/openapi-generator-cli-5.4.0.jar&quot;&gt;location&lt;/a&gt;. Below, you will find the syntax required to download the OpenAPI generator JAR files from each corresponding operating systems:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;For Mac/Linux users:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;~$ wget https://repo1.maven.org/maven2/org/openapitools/openapi-generator-cli/5.4.0/openapi-generator-cli-5.4.0.jar -O openapi-generator-cli.jar
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;For Microsoft Windows users:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;&gt; Invoke-WebRequest -OutFile openapi-generator-cli.jar https://repo1.maven.org/maven2/org/openapitools/openapi-generator-cli/5.4.0/openapi-generator-cli-5.4.0.jar
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The OpenAPI Generator version shown above is 5.4.0 (the current version as of March 2022). Please keep in mind that the location of the JAR file will change when a new version is released. Please take a look at the figure above to obtain the latest version number, and modify the path to download the latest &lt;em&gt;openapi-generator-cli&lt;/em&gt; JAR file.&lt;/p&gt;
&lt;p&gt;Once the JAR file is downloaded, you can run the following command in the folder where the JAR file was downloaded to display brief information on how to use it.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;~$ java -jar openapi-generator-cli.jar help
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The output will be something like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-typescript&quot;&gt;usage: openapi-generator-cli &amp;#x3C;command&gt; [&amp;#x3C;args&gt;]

The most commonly used openapi-generator-cli commands are:
    author        Utilities for authoring generators or customizing templates.
    batch         Generate code in batch via external configs.
    config-help   Config help for chosen lang
    generate      Generate code with the specified generator.
    help          Display help information about openapi-generator
    list          Lists the available generators
    meta          MetaGenerator. Generator for creating a new template set and configuration for Codegen.  The output will be based on the language you specify, and includes default templates to include.
    validate      Validate specification
    version       Show version information used in tooling

See &apos;openapi-generator-cli help &amp;#x3C;command&gt;&apos; for more information on a specific
command.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now that the JAR file is downloaded and ready to be used, let&apos;s create a Python SDK using the OpenAPI generator JAR file. The following command line is used for generating a Python client library using the &lt;em&gt;openapi-generator-cli.jar&lt;/em&gt; file.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;~$ java -jar openapi-generator-cli.jar generate -i storage-api.yaml -g python -o sdks/dscc-python-sdk
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;&apos;generate&apos; is the command that generates the code using the specified generator.&lt;/li&gt;
&lt;li&gt;&apos;-i&apos; specifies the input file, that is, the downloaded Data Services Cloud Console OpenAPI spec, which can be in either JSON or YAML form.&lt;/li&gt;
&lt;li&gt;&apos;-g&apos; specifies the generator/language name, such as python, java, or go.&lt;/li&gt;
&lt;li&gt;&apos;-o&apos; specifies the output directory where the client library will be generated. In the above example, the generated files are placed in ~/sdks/dscc-python-sdk.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This Python DSCC client library can be generated in a few minutes. Below, you can see the screen output from generating the Python client library with the &lt;em&gt;openapi-generator-cli.jar&lt;/em&gt; file.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/client-generation.jpg&quot; alt=&quot;&quot; title=&quot;Generating Python SDK using OpenAPI generator&quot;&gt;&lt;/p&gt;
&lt;p&gt;The results from the conversion are available under the following folder:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;~/sdks/dscc-python-sdk$ ls -al
total 232
drwxrwxr-x 6 ronald ronald  4096 Mar 25 16:44 .
drwxrwxr-x 3 ronald ronald  4096 Mar 25 16:44 ..
drwxrwxr-x 2 ronald ronald 32768 Mar 25 16:44 docs
-rw-rw-r-- 1 ronald ronald   807 Mar 25 16:44 .gitignore
-rw-rw-r-- 1 ronald ronald   433 Mar 25 16:44 .gitlab-ci.yml
-rw-rw-r-- 1 ronald ronald  1830 Mar 25 16:44 git_push.sh
drwxrwxr-x 6 ronald ronald  4096 Mar 25 16:44 openapi_client
drwxrwxr-x 2 ronald ronald  4096 Mar 25 16:44 .openapi-generator
-rw-rw-r-- 1 ronald ronald  1040 Mar 25 16:44 .openapi-generator-ignore
-rw-rw-r-- 1 ronald ronald 98525 Mar 25 16:44 README.md
-rw-rw-r-- 1 ronald ronald    64 Mar 25 16:44 requirements.txt
-rw-rw-r-- 1 ronald ronald    28 Mar 25 16:44 setup.cfg
-rw-rw-r-- 1 ronald ronald  1002 Mar 25 16:44 setup.py
drwxrwxr-x 2 ronald ronald 36864 Mar 25 16:44 test
-rw-rw-r-- 1 ronald ronald    18 Mar 25 16:44 test-requirements.txt
-rw-rw-r-- 1 ronald ronald   150 Mar 25 16:44 tox.ini
-rw-rw-r-- 1 ronald ronald   304 Mar 25 16:44 .travis.yml
~/sdks/dscc-python-sdk$
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The generated client library can be used locally on your workstation, or it can be uploaded to a GitHub repo so that it is available for others to use. An example of the GitHub repository for a sample Python client library looks like this:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/python-open-api-sdk-repo.jpg&quot; alt=&quot;&quot; title=&quot;GitHub repository of Python Client library generated using OpenAPI generator&quot;&gt;&lt;/p&gt;
&lt;p&gt;For information on uploading a project and the associated files into GitHub, please see the following &lt;a href=&quot;https://docs.github.com/en/get-started/importing-your-projects-to-github/importing-source-code-to-github/adding-locally-hosted-code-to-github&quot;&gt;website&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Now, the generated client library comes with the following files:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The code for making the console API calls.&lt;/li&gt;
&lt;li&gt;The documentation for this console client library derived from the API spec.&lt;/li&gt;
&lt;li&gt;Test code that can be used to validate the operation of this client library.&lt;/li&gt;
&lt;li&gt;Examples for every endpoint, available in the README.md.&lt;/li&gt;
&lt;li&gt;The Python dependencies (requirements.txt and test-requirements.txt) required to use this SDK.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Now, you have all the components that are required for invoking the console API using Python scripts. To use this Data Services Cloud Console API client library, let&apos;s go through the steps that are described in the README.md:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Installation instructions&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/readme.png&quot; alt=&quot;&quot; title=&quot;Python SDK installation instructions&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;A sample code to get started with&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/sample.png&quot; alt=&quot;&quot; title=&quot;Sample code&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Documentation list for all endpoints&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/api-endpoints.png&quot; alt=&quot;&quot; title=&quot;List of endpoints&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;Documentation list for all models&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/models.png&quot; alt=&quot;&quot; title=&quot;List of models&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;5&quot;&gt;
&lt;li&gt;Documentation about authorization&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/auth.png&quot; alt=&quot;&quot; title=&quot;Authorization of API calls&quot;&gt;&lt;/p&gt;
&lt;p&gt;Let&apos;s run a code sample that displays the access types in the console. The required parameters and the returned results for each endpoint are described under each of the endpoints in the sample code. To execute an API call against the console, you will need:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The authorized access token which is generated from the HPE GreenLake API Gateway as mentioned in the blog &lt;a href=&quot;https://developer.hpe.com/blog/api-console-for-data-services-cloud-console/&quot;&gt;post&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;The required Python dependencies, installed using the following command:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;~$ pip install -r requirements.txt
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Below is example code for obtaining information about the associated user&apos;s RBAC permissions. This code sample returns the list of capabilities (for example, port.read and volume.create) that the user is authorized to exercise. To execute this code, please substitute &lt;strong&gt;YOUR_BEARER_TOKEN&lt;/strong&gt; with the access token you generated.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import time
import openapi_client
from openapi_client.api import authz_api
from openapi_client.model.error_response import ErrorResponse
from openapi_client.model.access_controls_response import AccessControlsResponse
from pprint import pprint
# Defining the host is optional and defaults to https://eu1.data.cloud.hpe.com
# See configuration.py for a list of all supported configuration parameters.
configuration = openapi_client.Configuration(
    host = &quot;https://eu1.data.cloud.hpe.com&quot;
)
# The client must configure the authentication and authorization parameters
# in accordance with the API server security policy.
# Examples for each auth method are provided below, use the example that
# satisfies your auth use case.

# Configure Bearer authorization (JWT): JWTAuth
configuration = openapi_client.Configuration(
    access_token = &apos;YOUR_BEARER_TOKEN&apos;
)

# Enter a context with an instance of the API client
with openapi_client.ApiClient(configuration) as api_client:
    # Create an instance of the API class
    api_instance = authz_api.AuthzApi(api_client)
    permission = [&quot;volume.create&quot;,&quot;port.read&quot;,&quot;audit.read&quot;] # [str] | List of permissions, each of which, has the form \&quot;resource type.permission\&quot; (ex. volume.read,volume.write). The word \&quot;ANY\&quot; can be used as a wild card for the resource type (ex. ANY.read). Omitting the permission parameters is equivalent to asking for all user permissions. (optional)

    # example passing only required values which don&apos;t have defaults set
    # and optional values
    try:
        # Get User Accessible Resources
        api_response = api_instance.get_access_controls(permission=permission)
        pprint(api_response)
    except openapi_client.ApiException as e:
        print(&quot;Exception when calling AuthzApi-&gt;get_access_controls: %s\n&quot; % e)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The output from the execution of the above code is shown below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-Shell&quot;&gt;$ python .\GetAudits.py
{&apos;items&apos;: [&apos;port.read&apos;, &apos;volume.create&apos;]}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Using a client generator tool as described above, client libraries for DSCC can be generated in an agile manner. As a result, the Data Services Cloud Console API client library in Python can be pushed to a GitHub repository automatically so that it is ready to be used by any project. A further advantage of this method is automation through a Continuous Integration/Continuous Deployment (CI/CD) pipeline, which requires no manual intervention to update projects to the latest released version of the DSCC API.&lt;/p&gt;
&lt;p&gt;I hope this blog post on generating Python client libraries for Data Services Cloud Console is helpful.&lt;/p&gt;
&lt;p&gt;More blog posts will be coming to help you take further advantage of Data Services Cloud Console REST API capabilities. Stay tuned to the &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE DEV&lt;/a&gt; blog for more posts about the Data Services Cloud Console REST API.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[3 ways a data fabric enables a data-first approach]]></title><description><![CDATA[Editor’s note: This article was originally posted on HPE Enterprise.nxt on March 15, 2022 A well-engineered modern data fabric allows DevOps…]]></description><link>https://developer.hpe.com/3-ways-a-data-fabric-enables-a-data-first-approach/</link><guid isPermaLink="false">https://developer.hpe.com/3-ways-a-data-fabric-enables-a-data-first-approach/</guid><pubDate>Tue, 15 Mar 2022 10:07:10 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s note: This article was originally posted on HPE Enterprise.nxt on March 15, 2022&lt;/strong&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;A well-engineered modern data fabric allows DevOps and other teams to access data in the way they prefer.&lt;/p&gt;
&lt;p&gt;A data-first enterprise is a big advantage, but this strategy also puts a lot of demands on your data technology. That&apos;s all right unless your data technology puts a lot of demands on you.&lt;/p&gt;
&lt;p&gt;Take modern cars. Vehicles can now easily contain over 100 computers to manage functions like adaptive cruise control, stability control, and anti-lock braking. These systems make cars much more complicated internally than decades ago, but they are much easier and safer to drive because of them.&lt;/p&gt;
&lt;p&gt;Similarly, modern data technology needs to make it easier for users and system administrators of large-scale systems to work with data in more sophisticated and varied ways and at more locations, as is the case in a data-first enterprise. What does your data infrastructure need to do to help rather than be a hindrance?&lt;/p&gt;
&lt;h3&gt;How does data fabric meet the demands of a data-first approach?&lt;/h3&gt;
&lt;p&gt;A data fabric is a highly scalable data infrastructure designed to store, manage, and move data as a unifying layer across an enterprise from edge to data center, on-premises or in the cloud.&lt;/p&gt;
&lt;p&gt;In a data-first environment, data is treated as a foundational resource, one that is not used up when it is accessed. The capabilities of your data infrastructure should support the reuse of data and use by multiple applications. For example, the HPE Ezmeral Data Fabric File and Object Store software supports data reuse and data sharing by not requiring specialized data access methods. Off-the-shelf and custom applications can directly access data stored in the data fabric.&lt;/p&gt;
&lt;p&gt;Here&apos;s a sample of some of the many ways a modern data fabric has a positive impact in a data-first approach.&lt;/p&gt;
&lt;h3&gt;To move or not to move: Data where it needs to be&lt;/h3&gt;
&lt;p&gt;Data motion is a key issue in large-scale data systems. Data motion can include motion within a cluster and between clusters. Making wrong assumptions about data motion is one of the most common ways businesses inadvertently give up their ability to extract value from data.&lt;/p&gt;
&lt;p&gt;At one extreme, people may have an ingrained assumption that data motion is not a viable option, based on legacy systems that lack any provision for moving data. Without motion, data that could have value if put into a more global context may be discarded instead.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/3waysadatafabric-enablesadatafirstapproach-quote.png&quot; alt=&quot;Block text&quot; title=&quot;Block text&quot;&gt;&lt;/p&gt;
&lt;p&gt;At other companies, the pendulum has swung radically to the opposite extreme, with a policy that all data to be analyzed must be moved to a central data center, either on premises or in the cloud. Unfortunately, the costs of data motion mount up, and where large amounts of data are at issue, only a tiny fraction of all possible data will be moved. Once again, data you could analyze is simply discarded.&lt;/p&gt;
&lt;p&gt;Fortunately, a new middle ground is emerging. In telecommunications, finance, media, manufacturing, and other business sectors, far more data is collected than could possibly be moved back to headquarters. Data is partially processed in situ before extracts and summaries are moved to a central location or a regional sub-center for further analysis on aggregates from many edge sources. This edge processing strategy commonly uses pattern recognition to pull out the interesting bits or anomalies for transfer.&lt;/p&gt;
&lt;p&gt;There are many reasons to move data, including communication between data centers or to a secondary cluster as part of a disaster recovery plan. The key to success is choice: You should be able to efficiently move data as needed or store and analyze it in place, all within the same system. Data motion should not have to be coded into each application.&lt;/p&gt;
&lt;p&gt;Taking the example of HPE Ezmeral Data Fabric, selective data motion can be configured rather than coded, and data fabric moves the data invisibly. The data fabric even builds in remote access via a global namespace in case you need it.&lt;/p&gt;
&lt;h3&gt;Decouple the query engine from data storage&lt;/h3&gt;
&lt;p&gt;The term database conjures up images of big iron running a system like Postgres or Oracle or a data warehouse like Teradata. All of the classical databases had this in common: The software that handles the storage of data is tightly integrated with the software that optimizes and executes queries.&lt;/p&gt;
&lt;p&gt;Another common element of such database systems is that when it comes to processing the data in the database, it is their way or the highway. You could submit queries from practically any computer language around, but you can&apos;t do what SQL won&apos;t do. For applications such as machine learning, a SQL database just isn&apos;t a good fit except for data extraction. Even then, severe scale limitations are common with high-end databases.&lt;/p&gt;
&lt;p&gt;The situation is changing. The trend now is to separate the query and storage engines into independent parts. The functional independence of query and storage isn&apos;t entirely new, but the idea that a SQL query engine should mostly query data stored in ordinary files is a big change.&lt;/p&gt;
&lt;p&gt;The practical impact of this separation is that you can reuse data for a completely different purpose than originally intended. If your original purpose was to clear payments and produce statements and bills for tens of millions of credit card accounts, then a SQL query engine like Presto might be just the ticket.&lt;/p&gt;
&lt;p&gt;However, in a data-driven enterprise, the real value from data doesn&apos;t usually come from collecting entirely new data. Instead, it comes from reusing or combining existing data in new ways, often with new tools. Mixing and matching query engines on the same data strikes gold. Locking data up in a monolithic database is just the opposite.&lt;/p&gt;
&lt;p&gt;For example, while recently working on an open source project, one of the authors (Dunning) built some geospatial processing that quickly outgrew the relational database in use. Python and Parquet files worked great for initial extraction and cleaning, but indexing and sorting the historical data involved billions of geohash (a public domain geocode system) operations.&lt;/p&gt;
&lt;p&gt;Storing that data in a data fabric allowed a seamless transition to distributed processing steps in &lt;a href=&quot;https://julialang.org/&quot;&gt;the Julia programming language&lt;/a&gt; that ran 100 times faster and could scale more easily. Keeping simple tasks simple is a big win for data fabric in these kinds of systems.&lt;/p&gt;
&lt;h3&gt;Object storage vs. files&lt;/h3&gt;
&lt;p&gt;One change that characterizes a data-first enterprise is that architectural control moves much closer to the developer and data scientists in the line of business. Previously, much of that control was in the technology group in IT. This change is, in fact, the core driving force behind the DevOps movement. A major consequence of this shift has been the commoditization of IT services, an approach taken to extremes by the public cloud vendors.&lt;/p&gt;
&lt;p&gt;An unforeseen (but obvious in hindsight) side effect of this shift is a divergence between how DevOps teams view data infrastructure and how IT teams view it. The DevOps point of view is all about simplicity and flexibility, while the IT view has always been about optimization of provisioning combined with centralized control.&lt;/p&gt;
&lt;p&gt;Pushing for simplicity and flexibility drives a preference for data access with as little participation by the operating system as possible and certainly no special OS privileges. These constraints may put something as simple as mounting a file system out of bounds. These limits make object storage systems very attractive, since objects are accessed directly using simple protocols like HTTP instead of asking the OS to get file data from some file store. All you need is network access. On the other hand, performance isn&apos;t usually very high, and objects don&apos;t work like files, so compatibility suffers.&lt;/p&gt;
&lt;p&gt;Prioritizing optimization, in contrast, leads to highly manageable systems like storage appliances to provide block storage to blade servers. In this model, just enough storage is allocated to applications that primarily use non-distributed file systems for data. The operating system kernel mounts these file systems and controls all access. That&apos;s fine for some things, but it hurts scalability and makes DevOps harder.&lt;/p&gt;
&lt;p&gt;The fact is, recent technology makes both goals achievable. If a modern data fabric is engineered to allow access to data as either files or as objects, this flexibility frees DevOps teams to access data in the way they prefer. In addition, a data fabric can have built-in capabilities for data management and data motion. These capabilities make it much easier for IT teams to manage the overall system at scale.&lt;/p&gt;
&lt;h3&gt;Data fabric for data-first&lt;/h3&gt;
&lt;p&gt;Having the right data infrastructure lets you focus on the decisions that will make your organization a data-first enterprise. Whether it is choosing the right level of data motion, using multiple tools to analyze the same data, or storing data as objects or files, your data infrastructure needs to provide a host of advanced capabilities but still be easy enough to drive. A modern data fabric does just that.&lt;/p&gt;
&lt;h3&gt;Lessons for leaders&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;A data fabric is a highly scalable data infrastructure designed to store, manage, and move data as a unifying layer across an enterprise from edge to data center, on premises or in the cloud.&lt;/li&gt;
&lt;li&gt;A well-engineered modern data fabric allows DevOps and other teams to access data in the way they prefer.&lt;/li&gt;
&lt;li&gt;Making wrong assumptions about data motion is one of the most common ways businesses inadvertently give up their ability to extract value from data.&lt;/li&gt;
&lt;/ul&gt;
&lt;br /&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;span style=&quot;color:grey; font-family:Arial; font-size:1em&quot;&gt; This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.&lt;/span&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;
&lt;br /&gt;
&lt;p&gt;&lt;u&gt;&lt;strong&gt;About the authors:&lt;/strong&gt;&lt;/u&gt;&lt;/p&gt;
&lt;p&gt;Ted Dunning is chief technologist for data fabric at HPE. He has a PhD in computer science and authored more than 10 books focused on data sciences. He has more than 25 patents in advanced computing and plays the mandolin and guitar, both poorly.&lt;/p&gt;
&lt;p&gt;Ellen Friedman is a principal technologist at Hewlett Packard Enterprise focused on large-scale data analytics and machine learning. Ellen worked at MapR Technologies prior to her current role at HPE. She was a committer for the Apache Drill and Apache Mahout open source projects and is a co-author of multiple books published by O’Reilly Media, including AI &amp;#x26; Analytics in Production, Machine Learning Logistics, and the Practical Machine Learning series.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Infrastructure-as-code on HPE GreenLake using Terraform]]></title><description><![CDATA[Editor’s Note – NAME CHANGE: HPE GreenLake for Private Cloud is now part of HPE GreenLake for Private Cloud Enterprise. The process of…]]></description><link>https://developer.hpe.com/infrastructure-as-code-on-hpe-greenlake-using-terraform/</link><guid isPermaLink="false">https://developer.hpe.com/infrastructure-as-code-on-hpe-greenlake-using-terraform/</guid><pubDate>Tue, 08 Mar 2022 15:17:41 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note – NAME CHANGE: HPE GreenLake for Private Cloud is now part of HPE GreenLake for Private Cloud Enterprise.&lt;/strong&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;The process of managing and provisioning computer data centers through machine-readable definition files, otherwise known as Infrastructure-as-Code (IaC), offers many significant benefits. It helps to increase operational agility, simplify management, reduce errors, and save costs. In this post, I’ll explore some of the benefits of using IaC on HPE GreenLake through the use of Terraform.&lt;/p&gt;
&lt;h2&gt;Let’s harness some of the benefits of Infrastructure as Code&lt;/h2&gt;
&lt;p&gt;One of the superpowers of IaC is its repeatability, the fact that you can set something up once and then use the same information in multiple ways. Implementing IaC allows organizations to store configuration files describing the desired infrastructure as a single source of truth. It also allows you to apply the DevOps methodology that’s already in place for application code directly to the infrastructure. For example, configuration files can be stored and managed through GitHub in the same way your DevOps team manages the application code. This concept is often called “Shifting Left”, as you are describing the infrastructure to host an application earlier (further left) in the application’s delivery pipeline. This allows for easier and more consistent deployments of infrastructure across the complete infrastructure landscape of an organization.&lt;/p&gt;
&lt;h2&gt;HPE GreenLake&lt;/h2&gt;
&lt;p&gt;HPE GreenLake is HPE’s edge-to-cloud platform. The HPE GreenLake platform provides a unified experience wherever your applications and their data are located: at the edge, in colocations, or in your own datacenter. This cloud experience everywhere includes the following capabilities:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Self-service&lt;/li&gt;
&lt;li&gt;Infinite scalability&lt;/li&gt;
&lt;li&gt;Pay-as-you-go&lt;/li&gt;
&lt;li&gt;Managed for you&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;HPE GreenLake Cloud Services&lt;/h2&gt;
&lt;p&gt;The HPE GreenLake ecosystem provides solutions for several top workloads such as containers, Machine Learning, private cloud, virtual machines, SAP HANA, HPC, VDI and many more. This page on &lt;a href=&quot;https://www.hpe.com/us/en/greenlake/services.html&quot;&gt;HPE GreenLake cloud services and ecosystem&lt;/a&gt; provides a complete list. The ecosystem also leverages many technologies from HPE partners such as Microsoft, VMware, SAP, Nutanix, Veeam and others. &lt;/p&gt;
&lt;h2&gt;HPE GreenLake for private cloud&lt;/h2&gt;
&lt;p&gt;One of the options provided by HPE GreenLake is to make it easy for customers to order and operate a private cloud with a mix of virtual machines, containers, and physical servers. This is exactly what the private cloud service is all about. This service allows customers to point and click to create resources such as virtual machines. It also provides access via a public API, allowing developers to use an Infrastructure-as-Code tool, such as Terraform, to automate provisioning.&lt;/p&gt;
&lt;h2&gt;Terraform&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://terraform.io&quot;&gt;Terraform&lt;/a&gt; is an open source Infrastructure-as-Code framework, originally created by HashiCorp and written in Go. It uses a declarative language (the HashiCorp Configuration Language, HCL, or more recently JSON) to describe the desired state of the infrastructure in terms of clouds, virtual machines, networks, storage, and many other components. Terraform uses the concept of “providers” to integrate with all major public clouds. Terraform is idempotent, in the sense that it doesn’t generate any side effects if applied multiple times to an infrastructure already in its desired state. Terraform has gained a lot of momentum in the last few years; its main competitors are Ansible, AWS CloudFormation, Puppet, and Chef.&lt;/p&gt;
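&lt;p&gt;To illustrate this idempotency, here is a minimal sketch (output abbreviated, and the exact wording varies between Terraform versions) of running &lt;strong&gt;terraform apply&lt;/strong&gt; twice in a row: the second run finds nothing to change.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ terraform apply
...
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

$ terraform apply
...
No changes. Your infrastructure matches the configuration.

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;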
&lt;h2&gt;Readying for your Infrastructure-as-Code implementation&lt;/h2&gt;
&lt;h3&gt;Terraform installation&lt;/h3&gt;
&lt;p&gt;Your first step is to get your system ready to run Terraform. If this has not been done yet, it will include:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Installing Terraform: follow &lt;a href=&quot;https://learn.hashicorp.com/tutorials/terraform/install-cli&quot;&gt;these steps&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Verifying installation: &lt;strong&gt;terraform --help&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;At this point, you are ready to start building your infrastructure description file.  &lt;/p&gt;
&lt;h3&gt;Building a Terraform configuration file from scratch&lt;/h3&gt;
&lt;p&gt;Let’s start building this TF file using your favorite editor.&lt;/p&gt;
&lt;h4&gt;Selecting a Terraform provider&lt;/h4&gt;
&lt;p&gt;The first section of the file enumerates the “providers” you rely upon for building your infrastructure, and there can be multiple providers in a single TF file. In this case, you will only have the HPE GreenLake provider, referenced as hpe/hpegl in the official &lt;a href=&quot;https://registry.terraform.io/&quot;&gt;Terraform registry&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The first lines of your Terraform configuration file should look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;# Load HPE GreenLake terraform provider
terraform {
      required_providers {
         hpegl = {
            source  = &quot;hpe/hpegl&quot;
            version = &quot;0.3.17&quot;
         }
      }
   }
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can find out more about the HPE GreenLake Terraform provider from its &lt;a href=&quot;https://registry.terraform.io/providers/HPE/hpegl/latest&quot;&gt;Terraform Registry page&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This page also provides a link to the GitHub repository corresponding to this provider. The &lt;a href=&quot;https://github.com/hpe/terraform-provider-hpegl/tree/main/docs&quot;&gt;docs&lt;/a&gt; folder is your best source of information for using the different data sources and resources provided by the provider. If you navigate to the resources section, you will see that one resource you can manipulate with this provider is a &lt;a href=&quot;https://github.com/hpe/terraform-provider-hpegl/blob/main/docs/resources/vmaas_instance.md&quot;&gt;VM instance&lt;/a&gt;. Let’s focus on this resource in this article.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: Because this is open source, don’t hesitate to open issues, or even a pull request, if you identify an issue.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h4&gt;Setting up the Terraform provider&lt;/h4&gt;
&lt;p&gt;Now that you have expressed the fact that the hpegl provider will be used, you need to set up some parameters for it. As explained on this &lt;a href=&quot;https://github.com/hpe/terraform-provider-hpegl/blob/main/docs/index.md&quot;&gt;page&lt;/a&gt;, you can either explicitly set those parameters in your TF file, or have them set in a series of environment variables, or a mix of both. I suggest the following two parameters be added to your TF file:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;# Setup provider environment (location and space)
provider &quot;hpegl&quot; {
      vmaas {
     	location   = &quot;HPE&quot;
     	space_name = &quot;TerraForm Space&quot;
  	}
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The rest (such as the tenant ID, user ID, and user secret key) can be placed in an RC file, which you can source before running your Terraform commands.&lt;/p&gt;
&lt;p&gt;You can find your location and your space name from the HPE GreenLake user interface. In our example shown below, HPE is our location:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlakeforprivatecloud.png&quot; alt=&quot;GreenLake for private cloud&quot; title=&quot;GreenLake for private cloud&quot;&gt;&lt;/p&gt;
&lt;p&gt;And in the capture below, &lt;strong&gt;Terraform Space&lt;/strong&gt; is the space we have created for our work with Terraform. You can check your available Spaces from the HPE GreenLake console under your profile icon, &lt;strong&gt;Change space&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlakeselectingspace.png&quot; alt=&quot;GreenLake select new space&quot; title=&quot;GreenLake select new space&quot;&gt;&lt;/p&gt;
&lt;h4&gt;Setting up API Client access&lt;/h4&gt;
&lt;p&gt;Next, you need to create a new API Client access dedicated to Terraform. You can do this from the HPE GreenLake console under your settings icon, &lt;strong&gt;Identity &amp;#x26; Access&lt;/strong&gt; and then the &lt;strong&gt;API Clients&lt;/strong&gt; tab.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlakeapiclients.png&quot; alt=&quot;GreenLake API Clients&quot; title=&quot;GreenLake API Clients&quot;&gt;&lt;/p&gt;
&lt;p&gt;Create a new API Client (hpedev-hackshack-terraform in the screen capture above) and make sure the Tenant Contributor role is assigned on your space; better yet, create a new role derived from the default Tenant Contributor role for this API client access. Also take note of the client ID and the Issuer URL, as shown in the capture below.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Notes: More details on GreenLake user roles can be found in the &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00092451en_us&amp;#x26;page=GUID-328019F3-4305-4153-BD1A-B5E43D66FB1B.html&quot;&gt;GreenLake documentation&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlaketerraformapiclient.png&quot; alt=&quot;GreenLake Terraform API Client&quot; title=&quot;GreenLake Terraform API Client&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: Make sure to save the API Client secret key, as it is not displayed again after creation.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Finally, you’ll need your Tenant ID as shown from the HPE GreenLake console under your profile icon, &lt;strong&gt;API Access&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlaketenantid.png&quot; alt=&quot;GreenLake Tenant ID&quot; title=&quot;GreenLake Tenant ID&quot;&gt;&lt;/p&gt;
&lt;p&gt;With this, you can now build an RC file that defines the following environment variables:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;export HPEGL_TENANT_ID=&amp;#x3C;Your Tenant ID&gt;
export HPEGL_USER_ID=&amp;#x3C;Client ID of the API Client&gt;
export HPEGL_USER_SECRET=&amp;#x3C;Secret Key displayed when you created the API Client&gt;
export HPEGL_IAM_SERVICE_URL=&amp;#x3C;Issuer URL&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then source it on your machine to set these environment variables.&lt;/p&gt;
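&lt;p&gt;For example, assuming the export statements above were saved in a file named hpegl.rc (the file name here is arbitrary), you could source it and quickly verify that the variables are set:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ source ./hpegl.rc
$ env | grep HPEGL_
HPEGL_TENANT_ID=...
HPEGL_USER_ID=...
HPEGL_USER_SECRET=...
HPEGL_IAM_SERVICE_URL=...
&lt;/code&gt;&lt;/pre&gt;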
&lt;h4&gt;Querying for infrastructure components&lt;/h4&gt;
&lt;p&gt;Your next step with the TF file is to query the HPE GreenLake provider to collect information needed to create your first VM instance. From the &lt;a href=&quot;https://github.com/hpe/terraform-provider-hpegl/blob/main/docs/resources/vmaas_instance.md&quot;&gt;documentation&lt;/a&gt;, you can see that you need to gather the following information:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Cloud ID&lt;/li&gt;
&lt;li&gt;Group ID&lt;/li&gt;
&lt;li&gt;Layout ID&lt;/li&gt;
&lt;li&gt;Plan ID&lt;/li&gt;
&lt;li&gt;Instance type code&lt;/li&gt;
&lt;li&gt;Network ID&lt;/li&gt;
&lt;li&gt;Resource Pool ID&lt;/li&gt;
&lt;li&gt;Template ID&lt;/li&gt;
&lt;li&gt;Folder Code&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For this, you will use Terraform &lt;strong&gt;data&lt;/strong&gt; statements. For example, the following statement retrieves the Cloud ID and stores it in a data source called &lt;strong&gt;cloud&lt;/strong&gt;, which you can later reference as &lt;strong&gt;data.hpegl_vmaas_cloud.cloud.id&lt;/strong&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;# Retrieve cloud id
data &quot;hpegl_vmaas_cloud&quot; &quot;cloud&quot; {
 	name = &quot;HPE GreenLake VMaaS Cloud-Trial4 &quot;
   }
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Using a similar technique, you can retrieve the rest of the data you need:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;# And a few networks
data &quot;hpegl_vmaas_network&quot; &quot;blue_net&quot; {
 	name = &quot;Blue-Network&quot;
   }
data &quot;hpegl_vmaas_network&quot; &quot;green_net&quot; {
 	name = &quot;Green-network&quot;
   }
 
data &quot;hpegl_vmaas_cloud_folder&quot; &quot;compute_folder&quot; {
   cloud_id = data.hpegl_vmaas_cloud.cloud.id
   name 	= &quot;ComputeFolder&quot;
   }
 
# Locate a resource pool
data &quot;hpegl_vmaas_resource_pool&quot; &quot;cl_resource_pool&quot; {
 	cloud_id = data.hpegl_vmaas_cloud.cloud.id
 	name = &quot;ComputeResourcePool&quot;
   }
 
# And a group
data &quot;hpegl_vmaas_group&quot; &quot;default_group&quot; {
  name = &quot;HPEDEV-HackShackTenant-Group&quot;
}
 
# Locate a plan
data &quot;hpegl_vmaas_plan&quot; &quot;g1_small&quot; {
 	name = &quot;G1-Small&quot;
   }
 
# A layout
data &quot;hpegl_vmaas_layout&quot; &quot;vmware&quot; {
  name           	= &quot;VMware VM with vanilla CentOS&quot;
  instance_type_code = &quot;glhc-vanilla-centos&quot;
}
 
# And a template
data &quot;hpegl_vmaas_template&quot; &quot;vanilla&quot; {
 	name = &quot;vanilla-centos7-x86_64-09072020&quot;
   }
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;p&gt;You can get information about each of the data statements supported by the hpegl provider from &lt;a href=&quot;https://github.com/hpe/terraform-provider-hpegl/tree/main/docs/data-sources&quot;&gt;GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h4&gt;Creating a VM resource&lt;/h4&gt;
&lt;p&gt;The last step is to use a Terraform &lt;strong&gt;resource&lt;/strong&gt; statement to request the creation of a new VM instance:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;resource &quot;hpegl_vmaas_instance&quot; &quot;DidierTest1&quot; {
 	name           	= &quot;DidierTest1&quot;
 	cloud_id       	= data.hpegl_vmaas_cloud.cloud.id
 	group_id       	= data.hpegl_vmaas_group.default_group.id
 	layout_id      	= data.hpegl_vmaas_layout.vmware.id
 	plan_id        	= data.hpegl_vmaas_plan.g1_small.id
 	instance_type_code = data.hpegl_vmaas_layout.vmware.instance_type_code
 
 	network {
     	id = data.hpegl_vmaas_network.green_net.id
 	}
 
 	volume {
     	name     	= &quot;root_vol&quot;
     	size     	= 15
     	datastore_id = &quot;auto&quot;
 	}
 
 	config {
     	resource_pool_id = data.hpegl_vmaas_resource_pool.cl_resource_pool.id
     	template_id  	= data.hpegl_vmaas_template.vanilla.id
     	no_agent     	= true
     	asset_tag    	= &quot;vm_terraform&quot;
     	folder_code  	= data.hpegl_vmaas_cloud_folder.compute_folder.code
 	}
 
 	power = &quot;poweron&quot;
   }
 
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: You can get information about each of the resource statements supported by the hpegl provider from &lt;a href=&quot;https://github.com/hpe/terraform-provider-hpegl/tree/main/docs/resources&quot;&gt;GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h4&gt;Terraform init&lt;/h4&gt;
&lt;p&gt;Before you can use Terraform, you will have to initialize it from the configuration file we have created. This is done with the following step: &lt;strong&gt;terraform init&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ terraform init

Initializing the backend...

Initializing provider plugins...
- Finding hpe/hpegl versions matching &quot;0.3.17&quot;...
- Installing hpe/hpegl v0.3.17...
- Installed hpe/hpegl v0.3.17 (signed by a HashiCorp partner, key ID D1F277A1AC66CE3D)

Partner and community providers are signed by their developers.
If you&apos;d like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run &quot;terraform init&quot; in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running &quot;terraform plan&quot; to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Terraform ready to plan&lt;/h4&gt;
&lt;p&gt;To validate your configuration file, I recommend running the &lt;strong&gt;terraform validate&lt;/strong&gt; command as you add sections to your file, so you catch syntax errors early (see the sketch below). Once the file is ready, the &lt;strong&gt;terraform plan&lt;/strong&gt; command will show what will be created when the &lt;strong&gt;terraform apply&lt;/strong&gt; command is finally used.&lt;/p&gt;
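&lt;p&gt;A successful validation run is short; here is a sketch of what it looks like (when the syntax is wrong, Terraform instead points at the offending block):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ terraform validate
Success! The configuration is valid.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Running &lt;strong&gt;terraform plan&lt;/strong&gt; against the configuration built in this article then produces the following execution plan:&lt;/p&gt;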
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ terraform plan
 
Terraform used the selected providers to generate the following execution plan. Resource actions are
indicated with the following symbols:
  + create
 
Terraform will perform the following actions:
 
  # hpegl_vmaas_instance.DidierTest1 will be created
  + resource &quot;hpegl_vmaas_instance&quot; &quot;DidierTest1&quot; {
  	+ cloud_id       	= 1
  	+ containers     	= (known after apply)
  	+ group_id       	= 3
  	+ history   	     = (known after apply)
  	+ hostname       	= (known after apply)
  	+ id             	= (known after apply)
  	+ instance_type_code = &quot;glhc-vanilla-centos&quot;
  	+ layout_id      	= 1159
  	+ name           	= &quot;DidierTest1&quot;
  	+ plan_id        	= 402
  	+ power          	= &quot;poweron&quot;
  	+ server_id      	= (known after apply)
  	+ status         	= (known after apply)
 
  	+ config {
      	+ asset_tag	    = &quot;vm_terraform&quot;
      	+ folder_code  	= &quot;1&quot;
      	+ no_agent     	= true
      	+ resource_pool_id = 1
      	+ template_id  	= 573
    	}
 
  	+ network {
      	+ id      	= 6
      	+ internal_id = (known after apply)
      	+ is_primary  = (known after apply)
      	+ name    	= (known after apply)
    	}
 
  	+ volume {
      	+ datastore_id = &quot;auto&quot;
      	+ id       	= (known after apply)
      	+ name     	= &quot;root_vol&quot;
      	+ root     	= true
      	+ size     	= 10
    	}
	}
 
Plan: 1 to add, 0 to change, 0 to destroy.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you agree with the plan, and what is going to be created, you can move to the last step, i.e. applying the configuration.&lt;/p&gt;
&lt;h4&gt;Terraform ready to apply&lt;/h4&gt;
&lt;p&gt;The command you need to use is now: &lt;strong&gt;terraform apply&lt;/strong&gt;. This will rerun the plan command, then prompt you to confirm before it starts building what’s in the plan:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only &apos;yes&apos; will be accepted to approve.
 
  Enter a value: yes
 
hpegl_vmaas_instance.DidierTest1: Creating...
hpegl_vmaas_instance.DidierTest1: Still creating... [10s elapsed]
hpegl_vmaas_instance.DidierTest1: Still creating... [20s elapsed]
hpegl_vmaas_instance.DidierTest1: Still creating... [30s elapsed]
hpegl_vmaas_instance.DidierTest1: Still creating... [40s elapsed]
hpegl_vmaas_instance.DidierTest1: Still creating... [50s elapsed]
hpegl_vmaas_instance.DidierTest1: Still creating... [1m0s elapsed]
hpegl_vmaas_instance.DidierTest1: Still creating... [1m10s elapsed]
hpegl_vmaas_instance.DidierTest1: Still creating... [1m20s elapsed]
hpegl_vmaas_instance.DidierTest1: Still creating... [1m30s elapsed]
hpegl_vmaas_instance.DidierTest1: Still creating... [1m40s elapsed]
hpegl_vmaas_instance.DidierTest1: Still creating... [1m50s elapsed]
hpegl_vmaas_instance.DidierTest1: Still creating... [2m0s elapsed]
hpegl_vmaas_instance.DidierTest1: Creation complete after 2m9s [id=145]
 
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you open your HPE GreenLake console to monitor the VM resources, you will see the effect of the &lt;strong&gt;terraform apply&lt;/strong&gt; command:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/terraform-greenlake-part2-blog-picture1-1.png&quot; alt=&quot;GreenLake instance created&quot; title=&quot;GreenLake instance created&quot;&gt;&lt;/p&gt;
&lt;h4&gt;Cleaning it all up&lt;/h4&gt;
&lt;p&gt;In Terraform, clean-up is done using the &lt;strong&gt;destroy&lt;/strong&gt; command. This automatically uses the HPE GreenLake provider to clean up the corresponding infrastructure in HPE GreenLake.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ terraform destroy
hpegl_vmaas_instance.DidierTest1: Refreshing state... [id=145]
 
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  - destroy
 
Terraform will perform the following actions:
 
  # hpegl_vmaas_instance.DidierTest1 will be destroyed
  - resource &quot;hpegl_vmaas_instance&quot; &quot;DidierTest1&quot; {
  	- cloud_id       	= 1 -&gt; null
  	- containers     	= [
      	- {
          	- container_type = [
              	- {
                  	- name = &quot;vanilla-centos7-node&quot;
       	         },
            	]
          	- external_fqdn  = &quot;didiertest1.localdomain&quot;
          	- hostname   	= &quot;didiertest1&quot;
          	- id         	= 145
          	- ip         	= &quot;172.17.70.29&quot;
          	- max_cores  	= 1
          	- max_memory 	= 4294967296
          	- max_storage	= 16106127360
          	- name       	= &quot;DidierTest1_145&quot;
          	- server     	= [
              	- {
                  	- compute_server_type = [
                      	- {
                          	- external_delete = true
                          	- managed     	= true
                          	- name        	= &quot;VMware Linux VM&quot;
                        	},
             	       ]
                  	- date_created    	= &quot;2022-02-23T14:42:10Z&quot;
                  	- id              	= 151
                  	- last_updated    	= &quot;2022-02-23T22:05:33Z&quot;
                  	- owner           	= [
       	               - {
                          	- username = &quot;hpedev-hackshack-terraform&quot;
                        	},
                    	]
                  	- platform        	= &quot;&quot;
                  	- platform_version	= &quot;&quot;
      	            - server_os       	= [
                      	- {
                          	- name = &quot;centOS 7 64-bit&quot;
                        	},
                    	]
                  	- ssh_host        	= &quot;172.17.70.29&quot;
         	         - ssh_port        	= 22
                  	- visibility      	= &quot;private&quot;
                	},
            	]
        	},
    	] -&gt; null
  	- group_id       	= 3 -&gt; null
  	- history        	= [
      	- {
          	- account_id   = 2
          	- created_by   = [
              	- {
                  	- display_name = &quot;hpedev-hackshack-terraform&quot;
                  	- username 	= &quot;hpedev-hackshack-terraform&quot;
                	},
            	]
          	- date_created = &quot;2022-02-23T14:42:11Z&quot;
          	- display_name = &quot;DidierTest1&quot;
          	- duration 	= 54873
          	- end_date 	= &quot;2022-02-23T14:43:06Z&quot;
        	  - id       	= 1191
          	- instance_id  = 145
          	- last_updated = &quot;2022-02-23T14:43:06Z&quot;
          	- percent  	= 100
          	- process_type = [
              	- {
                  	- code = &quot;provision&quot;
      	            - name = &quot;provision&quot;
                	},
            	]
          	- reason   	= &quot;&quot;
          	- start_date   = &quot;2022-02-23T14:42:11Z&quot;
          	- status   	= &quot;complete&quot;
          	- status_eta   = 0
          	- unique_id	= &quot;dc48d7f7-f564-46b7-b60f-67eaf5193f38&quot;
          	- updated_by   = [
              	- {
                  	- display_name = &quot;hpedev-hackshack-terraform&quot;
                  	- username 	= &quot;hpedev-hackshack-terraform&quot;
                	},
            	]
        	},
    	] -&gt; null
  	- id             	= &quot;145&quot; -&gt; null
  	- instance_type_code = &quot;glhc-vanilla-centos&quot; -&gt; null
  	- layout_id      	= 1159 -&gt; null
  	- name           	= &quot;DidierTest1&quot; -&gt; null
  	- plan_id        	= 402 -&gt; null
  	- power          	= &quot;poweron&quot; -&gt; null
  	- server_id      	= 151 -&gt; null
  	- status         	= &quot;running&quot; -&gt; null
 
  	- config {
      	- asset_tag    	= &quot;vm_terraform&quot; -&gt; null
      	- create_user  	= false -&gt; null
      	- folder_code  	= &quot;group-v41&quot; -&gt; null
      	- no_agent     	= true -&gt; null
      	- resource_pool_id = 2 -&gt; null
          - template_id  	= 573 -&gt; null
    	}
 
  	- network {
      	- id       	= 7 -&gt; null
      	- interface_id = 0 -&gt; null
      	- internal_id  = 197 -&gt; null
      	- is_primary   = true -&gt; null
      	- name     	= &quot;eth0&quot; -&gt; null
    	}
 
  	- volume {
      	- datastore_id = &quot;auto&quot; -&gt; null
      	- id       	= 282 -&gt; null
      	- name     	= &quot;root_vol&quot; -&gt; null
      	- root         = false -&gt; null
      	- size     	= 15 -&gt; null
    	}
	}
 
Plan: 0 to add, 0 to change, 1 to destroy.
 
Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only &apos;yes&apos; will be accepted to confirm.
 
  Enter a value: yes
 
hpegl_vmaas_instance.DidierTest1: Destroying... [id=145]
hpegl_vmaas_instance.DidierTest1: Still destroying... [id=145, 10s elapsed]
hpegl_vmaas_instance.DidierTest1: Still destroying... [id=145, 20s elapsed]
hpegl_vmaas_instance.DidierTest1: Still destroying... [id=145, 30s elapsed]
hpegl_vmaas_instance.DidierTest1: Still destroying... [id=145, 40s elapsed]
hpegl_vmaas_instance.DidierTest1: Still destroying... [id=145, 50s elapsed]
hpegl_vmaas_instance.DidierTest1: Still destroying... [id=145, 1m0s elapsed]
hpegl_vmaas_instance.DidierTest1: Destruction complete after 1m7s
 
Destroy complete! Resources: 1 destroyed.
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;What’s next?&lt;/h3&gt;
&lt;p&gt;In this blog post, I covered how to get started with the Terraform provider for HPE GreenLake, explained how to collect data from the platform, and showed how to request the creation of a VM instance. &lt;a href=&quot;https://developer.hpe.com/blog/infrastructure-as-code-on-hpe-greenlake-using-terraform-%E2%80%93-part-2/&quot;&gt;In my next article&lt;/a&gt;, I will apply changes to the infrastructure configuration file and demonstrate how the desired state is automatically tracked by Terraform and applied to HPE GreenLake.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://www.terraform.io/&quot;&gt;Learn more about Terraform&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.hpe.com/us/en/greenlake.html&quot;&gt;Learn more about HPE GreenLake&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://registry.terraform.io/providers/hpe/hpegl/latest&quot;&gt;Learn more about the HPE GreenLake Terraform provider&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Find other tutorials and articles on HPE GreenLake on the &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE DEV blog&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[How to provision HPE GreenLake for Private Cloud Enterprise resources from the ServiceNow Service Catalog]]></title><description><![CDATA[Editor’s Note – NAME CHANGE: HPE GreenLake for Private Cloud is now part of HPE GreenLake for Private Cloud Enterprise. Introduction A key…]]></description><link>https://developer.hpe.com/how-to-provision-hpe-greenlake-private-cloud-resources-from-servicenow-service-catalog/</link><guid isPermaLink="false">https://developer.hpe.com/how-to-provision-hpe-greenlake-private-cloud-resources-from-servicenow-service-catalog/</guid><pubDate>Tue, 08 Mar 2022 06:13:00 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note – NAME CHANGE: HPE GreenLake for Private Cloud is now part of HPE GreenLake for Private Cloud Enterprise.&lt;/strong&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;A key strength of HPE GreenLake for private cloud is its self-service
orchestration and automation, which is done from the HPE GreenLake for
private cloud dashboard and can be consumed using APIs and Infrastructure-as-Code (IaC). However,
some ServiceNow users prefer to provision resources from the ServiceNow
Service Catalog. For these users, HPE GreenLake for private cloud offers
a free Morpheus plugin for ServiceNow, which can be installed from the
ServiceNow Store. Once the plugin is installed and configured, HPE
GreenLake for private cloud &lt;a href=&quot;https://developer.hpe.com/blog/curate-and-expose-service-catalog-items-using-hpe-greenlake-for-private-cloud&quot;&gt;catalog items&lt;/a&gt;
can be presented in the ServiceNow
Service Catalog for ordering.&lt;/p&gt;
&lt;p&gt;This article walks through the
process of integrating ServiceNow with HPE GreenLake for private cloud
and exposing Service Catalog items to ServiceNow.&lt;/p&gt;
&lt;h2&gt;Prerequisites&lt;/h2&gt;
&lt;!--StartFragment--&gt;
&lt;p&gt;The process of integrating HPE GreenLake for private cloud with ServiceNow requires different things depending on your environment. In any case, you will need the Morpheus plugin in ServiceNow, an HPE GreenLake for private cloud service user account, and a private cloud user in ServiceNow.&lt;/p&gt;
&lt;h4&gt;Install the Morpheus plugin&lt;/h4&gt;
&lt;p&gt;To obtain and install the Morpheus plugin, you must have your HI credentials. Using your HI account, search the ServiceNow Store for Morpheus and install the plugin.&lt;/p&gt;
&lt;h4&gt;Private cloud service user account&lt;/h4&gt;
&lt;p&gt;Before you begin the integration process, raise a ticket on HPE GreenLake support to create a service user in your HPE GreenLake for private cloud environment. For details, see &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00092451en_us&amp;#x26;page=request-for-subtenant-service-user-for-servicenow-integration.html&quot;&gt;Request for sub-tenant service user for ServiceNow integration&lt;/a&gt;.&lt;/p&gt;
&lt;h4&gt;Private cloud user in ServiceNow&lt;/h4&gt;
&lt;p&gt;Using a ServiceNow admin user, create an HPE GreenLake for private cloud user in ServiceNow with the following roles:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;catalog_admin&lt;/li&gt;
&lt;li&gt;import_transformer&lt;/li&gt;
&lt;li&gt;itil&lt;/li&gt;
&lt;li&gt;rest_service&lt;/li&gt;
&lt;li&gt;xmodamorpheus_ca.integration &lt;em&gt;(NOTE: You can only assign this role after installing the Morpheus plugin. If you are creating this user before that, you must go back and add this role after installing the plugin.)&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;!--EndFragment--&gt;
&lt;h2&gt;Integrating private cloud on ServiceNow&lt;/h2&gt;
&lt;p&gt;Configure Morpheus properties in ServiceNow with the following parameters:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The Morpheus Appliance endpoint: Enter the full URL to your HPE
GreenLake for private cloud appliance&lt;/li&gt;
&lt;li&gt;Username: Enter the name of the HPE GreenLake for private cloud service user (received from HPE Support) that the Morpheus plugin uses to connect to the HPE GreenLake for private cloud API&lt;/li&gt;
&lt;li&gt;Password: Enter the HPE GreenLake for private cloud service user password (received from HPE Support)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/figure1.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Integrating ServiceNow with HPE GreenLake for private cloud&lt;/h2&gt;
&lt;!--StartFragment--&gt;
&lt;p&gt;Navigate to &lt;em&gt;Administration &gt; Integrations &gt; +New Integration &gt; ITSM &gt; ServiceNow&lt;/em&gt; in the HPE GreenLake for private cloud portal.&lt;/p&gt;
&lt;p&gt;Configure the ServiceNow integration with the following parameters:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Name - Enter the integration name&lt;/li&gt;
&lt;li&gt;ENABLED - Select to enable consumption of this ServiceNow integration in HPE GreenLake for private cloud. The integration is enabled by default&lt;/li&gt;
&lt;li&gt;SERVICE NOW HOST - Enter the ServiceNow instance host URL (example: &lt;a href=&quot;https://your.instance.service-now.com/&quot;&gt;https://your.instance.service-now.com&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;USER - Enter the ServiceNow user created as part of the prerequisites&lt;/li&gt;
&lt;li&gt;PASSWORD - Password of the above-mentioned user&lt;/li&gt;
&lt;li&gt;Optional variables (CMDB CUSTOM MAPPING, CMDB CLASS MAPPING, DEFAULT CMDB BUSINESS CLASS) are not required for this use case&lt;/li&gt;
&lt;li&gt;Click SAVE CHANGES&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/figure10.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The ServiceNow integration is now displayed in the list of integrations.&lt;/p&gt;
&lt;p&gt;Sample of the ServiceNow integration summary:&lt;/p&gt;
&lt;!--EndFragment--&gt;
&lt;p&gt;&lt;img src=&quot;/img/figure2.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Expose private cloud resources to the ServiceNow Service Catalog&lt;/h2&gt;
&lt;p&gt;After creating catalog items in the HPE GreenLake for private cloud as
discussed in the previous
&lt;a href=&quot;https://developer.hpe.com/blog/curate-and-expose-service-catalog-items-using-hpe-greenlake-for-private-cloud/&quot;&gt;blog&lt;/a&gt;,
the catalog items can be made available to the ServiceNow Service Catalog by
following this procedure:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Navigate to Administration &gt; Integrations&lt;/li&gt;
&lt;li&gt;Select the ServiceNow integration&lt;/li&gt;
&lt;li&gt;From the Catalog Items tab, click + ADD CATALOG ITEM&lt;/li&gt;
&lt;li&gt;Select the catalog item using the drop-down and click SAVE CHANGES&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/figure3.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Below is the sample list of catalog items exposed to the ServiceNow
integration:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/figure4.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Now, from ServiceNow, verify the availability of the resources as
follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Log in to ServiceNow to access the Service Catalog&lt;/li&gt;
&lt;li&gt;From the Service Catalog, access the Morpheus plugin&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/figure5.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Exposed HPE GreenLake private cloud resources are now available in the Self-Service Service Catalog.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/figure6.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Provision private cloud resources from ServiceNow&lt;/h2&gt;
&lt;p&gt;Click on the service catalog and select the catalog item to order.
For this example, RDS_MariaDB is chosen.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/figure7.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;After filling in the necessary details, click Order Now. Below is a
sample order status.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/figure8.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The order is then deployed in HPE GreenLake for private cloud.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/figure9.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;With Morpheus plugin integration, Self-Service catalog items from HPE
GreenLake for private cloud can be presented as provisioning options in
the ServiceNow Service Catalog and the user can order and manage private
cloud resources directly from ServiceNow. Hopefully, you found this
tutorial helpful. Stay tuned to the &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE DEV
blog&lt;/a&gt; for more posts on topics like
this.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Secure and simplify]]></title><link>https://developer.hpe.com/2022-March-02/</link><guid isPermaLink="false">https://developer.hpe.com/2022-March-02/</guid><pubDate>Wed, 02 Mar 2022 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Come and Engage with the HPE DEV Team at HPE Technology and Solution Summit 2022!]]></title><description><![CDATA[Today, more and more purchasing decisions related to software, including cloud-based initiatives, are being influenced by developers and…]]></description><link>https://developer.hpe.com/come-and-engage-with-the-hpe-dev-team-at-hpe-technology-and-solution-summit-2022/</link><guid isPermaLink="false">https://developer.hpe.com/come-and-engage-with-the-hpe-dev-team-at-hpe-technology-and-solution-summit-2022/</guid><pubDate>Wed, 23 Feb 2022 15:16:18 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;/img/tss-fred-blog-1.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Today, more and more purchasing decisions related to software, including cloud-based initiatives, are being influenced by developers and data scientists. The hands-on engagement opportunities HPE DEV offers at events like the Hewlett Packard Enterprise (HPE) &lt;a href=&quot;https://h41382.www4.hpe.com/tss/&quot;&gt;Technology Solutions Summit 2022&lt;/a&gt; (TSS 2022) help build a collaborative, open discussion where we can all learn from each other to address today&apos;s challenges with innovative (and often rather fun) methods.&lt;/p&gt;
&lt;p&gt;If you&apos;re one of the lucky ones in the HPE and HPE partner presales community who can take advantage of HPE&apos;s largest and most comprehensive technical and solutions knowledge transfer event, we encourage you to seek out the many different sessions that will be offered by the HPE DEV team. From March 28-31, you&apos;ll be able to participate virtually and attend numerous sessions on new open source and HPE technologies – from breakouts to Speaker-led (pre-registration required) and Walk-in Hands-on sessions.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Engage directly with the HPE DEV team&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;At HPE TSS 2022, you will have a couple of unique opportunities to speak directly with the HPE DEV team to hear about exciting work that&apos;s going on and to make your own voice heard. There will be two breakout sessions that cover HPE DEV activities and open source technologies (&lt;a href=&quot;https://h41382.www4.hpe.com/tss/application/assets/pdf/TSS22-More_About_Our_Sessions.pdf&quot;&gt;refer to the agenda for specifics&lt;/a&gt;). These tech talks are about 75 minutes long and include 20 minutes of Q&amp;#x26;A&apos;s at the end.&lt;/p&gt;
&lt;p&gt;In addition to the breakouts, the HPE DEV team is running a focus group.&lt;/p&gt;
&lt;p&gt;This highly-interactive session will occur on the last day of the event, acting as a wrap up where you will get opportunities to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Be recognized for your achievements&lt;/li&gt;
&lt;li&gt;Discover our new HPE DEV Evangelists program&lt;/li&gt;
&lt;li&gt;View the HPE DEV Workshops-on-Demand roadmaps&lt;/li&gt;
&lt;li&gt;Provide feedback on sessions and labs as well as on the overall HPE DEV program&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;HPE DEV Hands-on Sessions&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/tss-fred-blog-2.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;HPE DEV will be offering two types of Hands-on Sessions at HPE TSS 2022. The Speaker-led sessions are labs where we leverage our HPE DEV Workshops-on-Demand infrastructure to introduce you to some of our newest workshops. These labs require you to pre-register in order to attend. Make sure you reserve your seat today in order to learn more about:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Securing Microservice Communication with Envoy using X.509 SPIFFE IDs&lt;/li&gt;
&lt;li&gt;Ansible 101&lt;/li&gt;
&lt;li&gt;Machine Learning 101&lt;/li&gt;
&lt;li&gt;Spark 101&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The Walk-in sessions are also labs but they don&apos;t require pre-registration. They are handled on a first-come/first-served basis. There, you will have access to all of our existing workshops:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/tss-fred-blog-3.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Several workshops will be offered during a given time slot, with each track designed to address the needs of a specific role or &quot;persona&quot;. The tracks are divided out into:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Open Source Enthusiasts&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;API 101&lt;/li&gt;
&lt;li&gt;Python 101&lt;/li&gt;
&lt;li&gt;GIT 101&lt;/li&gt;
&lt;li&gt;Jupyter Notebook 101&lt;/li&gt;
&lt;li&gt;Kubernetes 101&lt;/li&gt;
&lt;li&gt;Docker 101&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Becoming Data Scientists&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Intro to the HPE Ezmeral Container Platform REST API&lt;/li&gt;
&lt;li&gt;HPE Ezmeral Data Fabric 101&lt;/li&gt;
&lt;li&gt;Using Kubernetes CSI with HPE Ezmeral Container Platform&lt;/li&gt;
&lt;li&gt;Deploying end-to-end machine learning workflows​ with HPE Ezmeral MLOPS​&lt;/li&gt;
&lt;li&gt;AI 101 - Convolutional neural network (CNN) for MNIST​ with HPE Ezmeral Container Platform&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Cloud and Data Centers Operators&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Redfish API 101&lt;/li&gt;
&lt;li&gt;HPE iLOrest - Overview of the HPE RESTful Interface Tool&lt;/li&gt;
&lt;li&gt;Using the iLO Redfish API with Ansible and HPE OneView&lt;/li&gt;
&lt;li&gt;Introduction to the HPE OneView REST API&lt;/li&gt;
&lt;li&gt;Stackstorm 101&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you are going to TSS 2022, make sure to sign up for the HPE DEV sessions. As you know, we value the importance of this technical event. We expect you do as well and appreciate that you have regularly rated our sessions highly. We look forward to reconnecting with you there.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Curate and Expose Service Catalog Items using HPE GreenLake for Private Cloud Enterprise]]></title><description><![CDATA[Editor’s Note – NAME CHANGE: HPE GreenLake for Private Cloud is now part of HPE GreenLake for Private Cloud Enterprise. The HPE GreenLake…]]></description><link>https://developer.hpe.com/curate-and-expose-service-catalog-items-using-hpe-greenlake-for-private-cloud/</link><guid isPermaLink="false">https://developer.hpe.com/curate-and-expose-service-catalog-items-using-hpe-greenlake-for-private-cloud/</guid><pubDate>Wed, 16 Feb 2022 08:49:59 GMT</pubDate><content:encoded>&lt;center&gt;&lt;img src=&quot;/img/intro-greenlake-512-pixels-rgb-image.jpg&quot; width=&quot;512&quot; height=&quot;384&quot; alt=&quot;Curate and expose catalog items using the HPE GreenLake Service Catalog persona view&quot;&gt;&lt;/center&gt;
&lt;br /&gt;
&lt;hr&gt;
&lt;p&gt;&lt;strong&gt;Editor’s Note – NAME CHANGE: HPE GreenLake for Private Cloud is now part of HPE GreenLake for Private Cloud Enterprise.&lt;/strong&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;The HPE GreenLake Edge-to-Cloud platform has a very useful interface, called HPE GreenLake Central, for managing the HPE GreenLake for private cloud service. It provides a cloud experience to manage VMs in your on-premises, pay-per-use datacenter. Personas are alternate views in the HPE GreenLake for private cloud user interface (UI). A user’s access to the various personas is controlled by role permissions.&lt;/p&gt;
&lt;p&gt;At present, there are two persona types: Standard and Service Catalog. The Standard persona is the typical default view. The Service Catalog persona is a simplified view where users are presented with different pre-configured instance types, blueprints, and workflows to choose from based upon their role. This improves the deployment experience with just a few clicks and without presenting an overwhelming list of options.&lt;/p&gt;
&lt;p&gt;The goal of this article is to discuss the Service Catalog persona in greater detail, including how administrators can curate the catalog and how users can use the Service Catalog to deploy their services. For readers needing a good introduction to HPE GreenLake for private cloud concepts, check out &lt;a href=&quot;https://www.hpe.com/psnow/doc/a50003040enw?jumpid=in_lit-psnow-red&quot;&gt;this technical paper&lt;/a&gt; that will help you better understand the different concepts leveraged in this blog.&lt;/p&gt;
&lt;h2&gt;Using the Service Catalog persona view&lt;/h2&gt;
&lt;p&gt;Access to a persona view is controlled by a user’s role. By default, new roles and roles that existed prior to the creation of the personas will only have access to the Standard persona.&lt;/p&gt;
&lt;p&gt;As the Tenant Admin user, connect to HPE GreenLake Central, locate the HPE GreenLake for private cloud dashboard widget, and click the Launch icon to open the HPE GreenLake for private cloud dashboard.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Administration &gt; Roles&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Click the name of the role to modify&lt;/li&gt;
&lt;li&gt;Click the &lt;strong&gt;Personas&lt;/strong&gt; tab&lt;/li&gt;
&lt;li&gt;From the default persona drop-down list, select the persona to use as the default for this role. The &lt;strong&gt;Standard&lt;/strong&gt; type is typically used as the default.&lt;/li&gt;
&lt;li&gt;From the list of personas, locate the Service Catalog persona&lt;/li&gt;
&lt;li&gt;From the &lt;strong&gt;ACCESS&lt;/strong&gt; column of the role, select &lt;strong&gt;FULL&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/catalog-image1.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Configuring Service Catalog item access&lt;/h2&gt;
&lt;p&gt;By default, user roles have no access to any catalog items. When enabling the Service Catalog persona access for user roles, you will also need to provide access to some or all catalog items.&lt;/p&gt;
&lt;p&gt;Configuring global access:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Full:&lt;/strong&gt; Gives access to all catalog items&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Custom:&lt;/strong&gt; Gives access to individually-selected items from the list below&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;None:&lt;/strong&gt; No access is given to any catalog items&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/catalog-image2.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Build Service Catalog items&lt;/h2&gt;
&lt;p&gt;A Tenant Admin user can add catalog items (instance types, blueprints, and workflows) and allow some configurable options using the &lt;strong&gt;Option Types&lt;/strong&gt;.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: A Tenant Admin user should have full permission for &lt;strong&gt;Tools: Self Service&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Adding an instance catalog item&lt;/h2&gt;
&lt;p&gt;This example shows how to create an Apache instance catalog item with the assumption that the Apache instance type exists in the platform.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;From your user name drop-down list, ensure that the Standard persona is selected (For more details, refer to the &lt;em&gt;&lt;strong&gt;Accessing the Service Catalog persona&lt;/strong&gt;&lt;/em&gt; section of this post)&lt;/li&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Tools &gt; Self Service&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;From the ADD drop-down list, select &lt;strong&gt;Instance&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The ADD CATALOG ITEM dialog box opens&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/catalog-image3.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Configure the catalog items as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;NAME - Enter the catalog item name&lt;/li&gt;
&lt;li&gt;DESCRIPTION - (Optional) Enter the catalog item description&lt;/li&gt;
&lt;li&gt;ENABLED - Select to enable the catalog item, making it available for provisioning (default). Clear to disable.&lt;/li&gt;
&lt;li&gt;FEATURED - Select to enable special visibility of this catalog item in the Service Catalog persona view. Clear to disable. Special visibility means that an item can be featured (a tag is added in the item) and given priority in the Service Catalog and Dashboard views.&lt;/li&gt;
&lt;li&gt;LOGO - From the drop-down list, do one of the following:
&lt;ul&gt;
&lt;li&gt;Select an existing logo&lt;/li&gt;
&lt;li&gt;Select custom and click Browse to locate and upload a logo&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;CONFIG - Enter, view, or edit the instance configuration
&lt;ul&gt;
&lt;li&gt;To build this catalog item using the CREATE INSTANCE wizard, click CONFIGURATION WIZARD. For more details, see &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00092451en_us&amp;#x26;page=GUID-3C344C62-EA07-4263-A540-29B5B92E3CE2.html&quot;&gt;Instance creation configuration parameters&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;CONTENT - Enter a more detailed description about the instance, which will display in the order screen&lt;/li&gt;
&lt;li&gt;Option Types - (Optional) Enter the Option Types to present users with mandatory or optional selections during provisioning. Option Types can then be used in the CONFIG section. Below is a sample usage of Option Types in the CONFIG section.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/catalog-image5.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Click SAVE CHANGES&lt;/li&gt;
&lt;li&gt;Optionally, a Tenant Admin can provide access to the catalog items to a specific user or role (see &lt;em&gt;&lt;strong&gt;Configuring Service Catalog item access&lt;/strong&gt;&lt;/em&gt; earlier in this article)&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Adding a blueprint catalog item&lt;/h2&gt;
&lt;p&gt;Blueprints enable full multi-tier application deployment. In the self-service catalog, you can create catalog items based on existing app blueprints. You can preconfigure blueprints and expose them to the Service Catalog persona for a click-to-deploy use case.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Click on your name in the upper-right corner and ensure that the STANDARD persona is selected (For more details, refer to the &lt;em&gt;&lt;strong&gt;Accessing the Service Catalog persona&lt;/strong&gt;&lt;/em&gt; section later in this tutorial)&lt;/li&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Tools &gt; Self Service&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;From the ADD drop-down list, select &lt;strong&gt;Blueprint&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The ADD CATALOG ITEM dialog box opens&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/catalog-image6.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;This example shows the steps to create a blueprint catalog item, which can be used to deploy Node.js with MariaDB, assuming that the blueprint already exists in the platform.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Configure the catalog items as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;NAME - Enter the catalog item name&lt;/li&gt;
&lt;li&gt;DESCRIPTION - (Optional) Enter the catalog item description&lt;/li&gt;
&lt;li&gt;ENABLED - Select to enable the catalog item, making it available for provisioning (default). Clear to disable.&lt;/li&gt;
&lt;li&gt;FEATURED - Select to enable special visibility of this catalog item in the Service Catalog persona view. Clear to disable. Special visibility means that an item can be featured (a tag is added in the item) and given priority in the Service Catalog and Dashboard views.&lt;/li&gt;
&lt;li&gt;LOGO - From the drop-down list, do one of the following:
&lt;ul&gt;
&lt;li&gt;Select an existing logo&lt;/li&gt;
&lt;li&gt;Select custom and click &lt;strong&gt;Browse&lt;/strong&gt; to locate and upload a logo&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;CONFIGURE - Click CONFIGURE to build this catalog item. The NEW APP wizard opens. For information about using the wizard, refer to Creating an app from a blueprint.&lt;/li&gt;
&lt;li&gt;APP SPEC - (Optional) Inject an override blueprint spec in yaml format&lt;/li&gt;
&lt;li&gt;CONTENT - Enter a more detailed description about the app, which will display in the order screen&lt;/li&gt;
&lt;li&gt;Option Types - (Optional) Enter the option types to present to users with mandatory or optional selections prior to provisioning&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/catalog-image7.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Click SAVE CHANGES.&lt;/li&gt;
&lt;li&gt;Optionally, a Tenant Admin can provide access to the catalog items to a specific user or role (see Configuring Service Catalog item access earlier in this article)&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Adding a workflow catalog item&lt;/h2&gt;
&lt;p&gt;Workflows are groups of Tasks and can be run on-demand against an existing instance. You can preconfigure operational workflows and expose them to the Service Catalog persona for a click-to-deploy use case.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Click on your name in the upper-right corner and ensure that the Standard persona is selected (For more details, refer to the Accessing the Service Catalog persona section of this post)&lt;/li&gt;
&lt;li&gt;Navigate to Tools &gt; Self Service&lt;/li&gt;
&lt;li&gt;From the ADD drop-down list, select WORKFLOW&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The ADD CATALOG ITEM dialog box opens&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/catalog-image8.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Configure the catalog items as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;NAME - Enter the catalog item name&lt;/li&gt;
&lt;li&gt;DESCRIPTION - (Optional) Enter the catalog item description&lt;/li&gt;
&lt;li&gt;ENABLED - Select to enable the catalog item, making it available for provisioning (default). Clear to disable.&lt;/li&gt;
&lt;li&gt;FEATURED - Select to enable special visibility of this catalog item in the Service Catalog persona view. Clear to disable. Special visibility means that an item can be featured (a tag is added in the item) and given priority in the Service Catalog and Dashboard views.&lt;/li&gt;
&lt;li&gt;LOGO - From the drop-down list, do one of the following:
&lt;ul&gt;
&lt;li&gt;Select an existing logo&lt;/li&gt;
&lt;li&gt;Select custom and click Browse to locate and upload a logo&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;WORKFLOW - From the drop-down list, select the desired workflow&lt;/li&gt;
&lt;li&gt;CONTEXT TYPE - (Optional) From the drop-down list, select the context type: none, server, or instance.&lt;/li&gt;
&lt;li&gt;CONTENT - Enter a more detailed description about the instance, which will display in the order screen&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/catalog-image9.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Accessing the Service Catalog persona&lt;/h2&gt;
&lt;p&gt;If your role’s default persona is set to “Service Catalog”, the Launch link on the HPE GreenLake for private cloud dashboard card will open the Service Catalog persona dashboard.&lt;/p&gt;
&lt;p&gt;Otherwise, switch personas by clicking on your name in the upper-right corner of the application window. If your role gives you access to any additional personas, they will be listed here.&lt;/p&gt;
&lt;p&gt;The catalog shows the complete list of pre-defined catalog items available to the user for provisioning.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/catalog-image10.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Ordering a Service catalog item&lt;/h2&gt;
&lt;p&gt;From the Service Catalog page, select the tile for your chosen item to see any custom options that need to be set prior to provisioning.&lt;/p&gt;
&lt;p&gt;This example shows the ordering for the “Apache” catalog item created in the previous steps.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/catalog-image11.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Based on the option types specified in the catalog item definition, custom options are displayed on the ordering page.&lt;/p&gt;
&lt;p&gt;Click &lt;strong&gt;Order Now&lt;/strong&gt; to place the order immediately or click &lt;strong&gt;Add to Order&lt;/strong&gt; and proceed to the cart.&lt;/p&gt;
&lt;p&gt;Click the cart to review and place the order.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/catalog-image12.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/catalog-image13.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Click &lt;strong&gt;PLACE ORDER&lt;/strong&gt; after reviewing the order.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/catalog-image14.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The state of the order can be seen from the &lt;strong&gt;INVENTORY&lt;/strong&gt; page.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/catalog-image15.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Clicking the inventory item &lt;strong&gt;Demo_VM&lt;/strong&gt; opens the instance details page.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/catalog-image16.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The order is now complete. In this case, the instance has been deployed in just a few steps, compared to the longer process required with the Standard persona.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/catalog-image17.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Ordering a Service catalog item for multiple resources&lt;/h2&gt;
&lt;p&gt;The example below shows the ordering process for a catalog item created for a workflow that will deploy Docker on multiple resources. In the &lt;strong&gt;Resource&lt;/strong&gt; box, specify the list of instances on which to run the sample Docker install workflow, and click &lt;strong&gt;ORDER NOW&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/catalog-image18.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The order is now complete.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/catalog-image19.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Clicking on the order in the inventory list shows the details of the execution.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/catalog-image20.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;Use of the HPE GreenLake for private cloud Service Catalog persona improves the user experience with a simplified catalog where users can select and deploy instances or blueprints with a pre-defined configuration using just a few clicks and without presenting an overwhelming list of options. Hopefully, you found this tutorial helpful. Stay tuned to the &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE DEV blog&lt;/a&gt; for more posts on topics like this.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE Ezmeral Data Fabric for Predictive Maintenance]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may…]]></description><link>https://developer.hpe.com/mapr-for-predictive-maintenance/</link><guid isPermaLink="false">https://developer.hpe.com/mapr-for-predictive-maintenance/</guid><pubDate>Tue, 15 Feb 2022 05:31:46 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/ezmeral-data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/ezmeral-data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This project is intended to show how to build predictive maintenance applications on MapR. Predictive maintenance applications place high demands on data streaming, time-series data storage, and machine learning. Therefore, this project focuses on data ingest with MapR Streams, time-series data storage with MapR-DB and OpenTSDB, and feature engineering with MapR-DB and Apache Spark.&lt;/p&gt;
&lt;h2&gt;Overview:&lt;/h2&gt;
&lt;p&gt;Predictive maintenance requires a cutting-edge data platform that handles fast streams of Internet of Things (IoT) data with the processing required for on-the-fly feature engineering and the flexibility required for data science and machine learning.&lt;/p&gt;
&lt;h2&gt;Ingesting Factory IoT Data&lt;/h2&gt;
&lt;p&gt;Predictive maintenance applications rely heavily on ingesting multiple data sources, each with its own format and throughput. MapR Streams can ingest data, regardless of format or speed, with standard Kafka and RESTful APIs.&lt;/p&gt;
&lt;h2&gt;Machine Learning on Factory IoT Data&lt;/h2&gt;
&lt;p&gt;The &quot;predictive&quot; aspects of predictive maintenance applications are usually realized through machine learning. Feature engineering is often considered the most important aspect of machine learning (as opposed to neural network design, for example). Feature engineering places high demands on the data layer because of the amount of data that IoT data streams generate. The tendency for failures to occur infrequently and without warning means vast amounts of raw time-series data must be stored. Not only must it be stored, but it must also be possible to retroactively update the lagging features necessary in order to label failures for the purposes of supervised machine learning. MapR-DB and Spark can work together to provide the capabilities required to put machine learning into practice for predictive maintenance.&lt;/p&gt;
&lt;p&gt;In summary:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;MapR Streams provide a convenient way to ingest IoT data because it is scalable and provides convenient interfaces.&lt;/li&gt;
&lt;li&gt;The integration of MapR DB with Spark provides a convenient way to label lagging features needed for predicting failures via supervised machine learning.&lt;/li&gt;
&lt;li&gt;Drill provides a convenient way to load ML data sets into Tensorflow for unsupervised and supervised machine learning&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Implementation Summary&lt;/h2&gt;
&lt;p&gt;There are two objectives relating to predictive maintenance implemented in this project. The first objective is to visualize time-series data in an interactive real-time dashboard in Grafana. The second objective is to make raw data streams and derived features available to machine learning frameworks, such as Tensorflow, in order to develop algorithms for anomaly detection and predictive maintenance. These two objectives are realized using two separate data flows:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The first flow, located on the top half of the image below, is intended to persist IoT data and label training data for sequence prediction and anomaly detection of time-series data in Tensorflow.&lt;/li&gt;
&lt;/ol&gt;
&lt;img src=&quot;https://raw.githubusercontent.com/mapr-demos/predictive-maintenance/master/images/bi_pipeline.png&quot; width=&quot;100%&quot;&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;The second flow, located on the bottom half, is intended to persist time-series IoT data in OpenTSDB for visualization in a Grafana dashboard.&lt;/li&gt;
&lt;/ol&gt;
&lt;img src=&quot;https://raw.githubusercontent.com/mapr-demos/predictive-maintenance/master/images/ml_pipeline.png&quot; width=&quot;100%&quot;&gt;
&lt;p&gt;Put together, these data pipelines look like this. The APIs used for reading and writing data are shown in red.&lt;/p&gt;
&lt;img src=&quot;https://raw.githubusercontent.com/mapr-demos/predictive-maintenance/master/images/dataflow.png&quot; width=&quot;100%&quot;&gt;
&lt;h2&gt;Preliminary Steps&lt;/h2&gt;
&lt;p&gt;These steps explain how to set up this tutorial using the &lt;a href=&quot;https://docs.datafabric.hpe.com/62/MapRContainerDevelopers/MapRContainerDevelopersOverview.html&quot;&gt;MapR Container for Developers&lt;/a&gt; on macOS.&lt;/p&gt;
&lt;h2&gt;Allocate 12GB to Docker&lt;/h2&gt;
&lt;p&gt;This project requires a lot of memory. We recommend allocating 12GB RAM, 4GB swap, and 2 CPUs to the Docker Community Edition for macOS.&lt;/p&gt;
&lt;img src=&quot;https://raw.githubusercontent.com/mapr-demos/predictive-maintenance/master/images/docker_config.png&quot; width=&quot;100%&quot;&gt;
&lt;h2&gt;Start the MapR sandbox (Now known as the &lt;a href=&quot;https://docs.datafabric.hpe.com/62/MapRContainerDevelopers/RunMapRContainerDevelopers.html&quot;&gt;Development Environment for HPE Ezmeral Data Fabric&lt;/a&gt;)&lt;/h2&gt;
&lt;p&gt;Download and run the &lt;code&gt;./mapr_devsandbox_container_setup.sh&lt;/code&gt; script.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-console&quot;&gt;git clone https://github.com/mapr-demos/mapr-db-60-getting-started
cd mapr-db-60-getting-started
./mapr_devsandbox_container_setup.sh
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Run the &lt;code&gt;init.sh&lt;/code&gt; script&lt;/h2&gt;
&lt;p&gt;Run the &lt;code&gt;init.sh&lt;/code&gt; script to install Spark, OpenTSDB, Grafana, and some other things necessary to use the sample applications in this tutorial. SSH to the sandbox container with the password &quot;mapr&quot;, and run the following commands. This should take about 20 minutes.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-console&quot;&gt;ssh -p 2222 root@localhost
wget https://raw.githubusercontent.com/mapr-demos/predictive-maintenance/master/init.sh
chmod 700 ./init.sh
sudo ./init.sh
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Import the Grafana dashboard&lt;/h2&gt;
&lt;pre&gt;&lt;code class=&quot;language-console&quot;&gt;sudo /opt/mapr/server/configure.sh -R -OT `hostname -f`
sudo /opt/mapr/opentsdb/opentsdb-2.4.0/etc/init.d/opentsdb start
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Open Grafana data sources, with a URL like &lt;a href=&quot;http://maprdemo:3000/datasources/edit/1&quot;&gt;http://maprdemo:3000/datasources/edit/1&lt;/a&gt;, and add OpenTSDB as a new data source.&lt;/p&gt;
&lt;p&gt;Load the &lt;code&gt;Grafana/IoT_dashboard.json&lt;/code&gt; file using Grafana&apos;s dashboard import functionality, and specify &quot;MaprMonitoringOpenTSDB&quot; as the data source, as shown below:&lt;/p&gt;
&lt;img src=&quot;https://raw.githubusercontent.com/mapr-demos/predictive-maintenance/master/images/grafana_import.png&quot; width=&quot;100%&quot;&gt;
&lt;hr&gt;
&lt;h2&gt;Predictive Maintenance Demo Procedure&lt;/h2&gt;
&lt;p&gt;For learning or debugging purposes, you should run each of the following steps manually; if you just want to see data show up in Grafana, simply run &lt;code&gt;./run.sh&lt;/code&gt;.&lt;/p&gt;
&lt;h2&gt;Step 1 - Simulate HVAC data stream:&lt;/h2&gt;
&lt;p&gt;We have provided a dataset captured from a real-world heating, ventilation, and air conditioning (HVAC) system in the &lt;code&gt;sample_dataset&lt;/code&gt; folder. Run the following command to replay that HVAC stream to &lt;code&gt;/apps/factory:mqtt&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-console&quot;&gt;cat ~/predictive-maintenance/sample_dataset/mqtt.json | while read line; do echo $line | sed &apos;s/{/{&quot;timestamp&quot;:&quot;&apos;$(date +%s)&apos;&quot;,/g&apos; | /opt/mapr/kafka/kafka-*/bin/kafka-console-producer.sh --topic /apps/factory:mqtt --broker-list this.will.be.ignored:9092; sleep 1; done
&lt;/code&gt;&lt;/pre&gt;
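&lt;p&gt;If the shell pipeline above is hard to follow, the sketch below shows the same replay loop in Python: read each JSON record, inject the current timestamp, and publish it once per second. It is only an illustration; the broker address is a placeholder, and because MapR Streams is normally accessed through the MapR client libraries rather than a plain Kafka broker, a stock kafka-python producer may need adapting for your environment.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Hedged sketch: replay the sample HVAC dataset, injecting a current timestamp into
# each JSON record before publishing it, mirroring the shell loop above.
# Assumes a Kafka-compatible endpoint; with MapR Streams you would normally rely on
# the MapR client libraries, so treat the producer setup as a placeholder.
import json
import time

from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers=&quot;localhost:9092&quot;,  # placeholder broker address
    value_serializer=lambda d: json.dumps(d).encode(&quot;utf-8&quot;),
)

with open(&quot;sample_dataset/mqtt.json&quot;) as f:
    for line in f:
        record = json.loads(line)
        record[&quot;timestamp&quot;] = str(int(time.time()))  # same injection as the sed command
        producer.send(&quot;/apps/factory:mqtt&quot;, record)  # topic name from the tutorial
        time.sleep(1)                                 # one message per second, as above

producer.flush()
&lt;/code&gt;&lt;/pre&gt;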
&lt;h2&gt;Step 2 - Save IoT data stream to MapR-DB:&lt;/h2&gt;
&lt;p&gt;In the next step, we&apos;ll save the IoT data stream to OpenTSDB so we can visualize it in Grafana, but in this step, we save that stream to MapR-DB so we can apply labels necessary for supervised machine learning, as discussed in Step 4.&lt;/p&gt;
&lt;p&gt;Run the following command to persist messages from stream &lt;code&gt;/apps/factory:mqtt&lt;/code&gt; to MapR-DB table &lt;code&gt;/apps/mqtt_records&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-console&quot;&gt;/opt/mapr/spark/spark-*/bin/spark-submit --class com.mapr.examples.MqttConsumer ~/predictive-maintenance/target/predictive-maintenance-1.0-jar-with-dependencies.jar /apps/factory:mqtt /apps/mqtt_records
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Run this command to see how the row count increases:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-console&quot;&gt;/opt/mapr/drill/drill-*/bin/sqlline -u jdbc:drill: -n mapr
    select count(*) from dfs.`/apps/mqtt_records`;
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Step 3 - Save IoT data stream to OpenTSDB:&lt;/h2&gt;
&lt;p&gt;In this step, we save the IoT data stream to OpenTSDB so we can visualize it in Grafana.&lt;/p&gt;
&lt;p&gt;Update &lt;code&gt;localhost:4242&lt;/code&gt; with the hostname and port of your OpenTSDB server before running the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-console&quot;&gt;/opt/mapr/kafka/kafka-*/bin/kafka-console-consumer.sh --new-consumer --topic /apps/factory:mqtt --bootstrap-server not.applicable:0000 | while read line; do echo $line | jq -r &quot;to_entries | map(\&quot;\(.key) \(.value | tostring)\&quot;) | {t: .[0], x: .[]} | .[]&quot; | paste -d &apos; &apos; - - | awk &apos;{system(&quot;curl -X POST --data \x27{\&quot;metric\&quot;: \&quot;&quot;$3&quot;\&quot;, \&quot;timestamp\&quot;: &quot;$2&quot;, \&quot;value\&quot;: &quot;$4&quot;, \&quot;tags\&quot;: {\&quot;host\&quot;: \&quot;localhost\&quot;}}\x27 http://localhost:4242/api/put&quot;)}&apos;; echo -n &quot;.&quot;; done
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After you have run that command you should be able to visualize the streaming IoT data in Grafana.  Depending on where you have installed Grafana, this can be opened with a URL like &lt;a href=&quot;http://maprdemo:3000&quot;&gt;http://maprdemo:3000&lt;/a&gt;:&lt;/p&gt;
&lt;img src=&quot;https://raw.githubusercontent.com/mapr-demos/predictive-maintenance/master/images/grafana_screenshot_2.png&quot; width=&quot;100%&quot; align=&quot;center&quot;&gt;
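&lt;p&gt;The one-liner above packs the whole transformation into jq, awk, and curl. As a readability aid, here is a hedged Python sketch of the same idea: for each record, post its numeric fields to OpenTSDB&apos;s &lt;code&gt;/api/put&lt;/code&gt; HTTP endpoint. The field names in the example record are made up for illustration; only the payload shape (metric, timestamp, value, tags) follows the standard OpenTSDB API.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Hedged sketch of what the shell pipeline above does: take a JSON record from the
# stream and post each numeric field as a data point to OpenTSDB&apos;s /api/put endpoint.
import requests  # pip install requests

OPENTSDB_URL = &quot;http://localhost:4242/api/put&quot;  # update host:port for your OpenTSDB server

def put_record(record):
    &quot;&quot;&quot;Send one OpenTSDB data point per numeric field in the record.&quot;&quot;&quot;
    ts = int(record[&quot;timestamp&quot;])
    points = [
        {&quot;metric&quot;: key, &quot;timestamp&quot;: ts, &quot;value&quot;: value, &quot;tags&quot;: {&quot;host&quot;: &quot;localhost&quot;}}
        for key, value in record.items()
        if key != &quot;timestamp&quot; and isinstance(value, (int, float))
    ]
    if points:
        requests.post(OPENTSDB_URL, json=points).raise_for_status()

# Example usage with a record shaped like the sample HVAC data (field names are made up):
put_record({&quot;timestamp&quot;: &quot;1523079964&quot;, &quot;OutsideAirTemp&quot;: 71.3, &quot;ReturnAirTemp&quot;: 68.9})
&lt;/code&gt;&lt;/pre&gt;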
&lt;h2&gt;Step 4 - Update lagging features in MapR-DB for each failure event:&lt;/h2&gt;
&lt;p&gt;This process will listen for failure events on a MapR Streams topic and retroactively label lagging features in MapR-DB when failures occur, as well as render the failure event in Grafana. Update &quot;&lt;a href=&quot;http://localhost:3000&quot;&gt;http://localhost:3000&lt;/a&gt;&quot; with the hostname and port for your Grafana instance.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-console&quot;&gt;/opt/mapr/spark/spark-*/bin/spark-submit --class com.mapr.examples.UpdateLaggingFeatures ~/predictive-maintenance/target/predictive-maintenance-1.0-jar-with-dependencies.jar /apps/factory:failures /apps/mqtt_records http://localhost:3000
&lt;/code&gt;&lt;/pre&gt;
&lt;img src=&quot;https://raw.githubusercontent.com/mapr-demos/predictive-maintenance/master/images/lagging_features_explanation.png&quot; width=&quot;100%&quot; align=&quot;center&quot;&gt;
&lt;p&gt;This particular step probably says the most about the value of MapR. Consider this scenario: You have a factory instrumented by IoT devices reporting hundreds of metrics per machine per second, and you&apos;re tasked with the challenge of saving all that data until one day, often months into the future, you finally experience a machine failure. At that point, you have to retroactively go back and update all those records as being &quot;about to fail&quot; or &quot;x days to failure&quot; so that you can use that data for training models to predict those lagging features. That&apos;s one heck of a DB update, right? The only way to store all that data is with a distributed database. This is what makes Spark and MapR-DB such a great fit: Spark - the distributed processing engine for big data - and MapR-DB - the distributed data store for big data - work together to process and store lots of data with speed and scalability.&lt;/p&gt;
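&lt;p&gt;The labeling logic itself is easy to express. The demo&apos;s Spark job (com.mapr.examples.UpdateLaggingFeatures) reads and writes the MapR-DB table directly, but the minimal PySpark sketch below shows just the idea on a plain DataFrame loaded from a hypothetical JSON export: given a failure timestamp, every record in the preceding window gets marked as &quot;about to fail&quot;. The path, window length, and column values are illustrative assumptions.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Minimal PySpark sketch of the retroactive labeling described above. This is not the
# demo&apos;s actual job; the input path and window size are placeholders for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName(&quot;label-lagging-features&quot;).getOrCreate()

records = spark.read.json(&quot;/tmp/mqtt_records.json&quot;)  # hypothetical export of the table
failure_ts = 1523079964                               # timestamp of a simulated failure
window_secs = 600                                     # label the 10 minutes before failure

labeled = records.withColumn(
    &quot;_Chiller1AboutToFail&quot;,
    F.when(
        F.col(&quot;timestamp&quot;).cast(&quot;long&quot;).between(failure_ts - window_secs, failure_ts),
        F.lit(&quot;true&quot;),
    ).otherwise(F.lit(&quot;false&quot;)),
)

print(labeled.filter(F.col(&quot;_Chiller1AboutToFail&quot;) == &quot;true&quot;).count())
&lt;/code&gt;&lt;/pre&gt;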
&lt;h2&gt;Step 5 - Simulate a failure event:&lt;/h2&gt;
&lt;p&gt;To simulate a device failure, run this command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;echo &quot;{\&quot;timestamp\&quot;:&quot;$(date +%s -d &apos;60 sec ago&apos;)&quot;,\&quot;deviceName\&quot;:\&quot;Chiller1\&quot;}&quot; | /opt/mapr/kafka/kafka-*/bin/kafka-console-producer.sh --topic /apps/factory:failures --broker-list this.will.be.ignored:9092
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will trigger the Spark process you ran in the previous step to update lagging features in the MapR-DB table &quot;/apps/mqtt_records&quot;. Once it sees the event you simulated, you should see it output information about the lagging features it labeled, like this:&lt;/p&gt;
&lt;img src=&quot;https://raw.githubusercontent.com/mapr-demos/predictive-maintenance/master/images/UpdateLagging_screenshot.png?raw=true&quot; width=&quot;100%&quot; align=&quot;center&quot; alt=&quot;Update Lagging Features screenshot&quot;&gt;
&lt;h2&gt;Step 6 - Validate that lagging features have been updated:&lt;/h2&gt;
&lt;pre&gt;&lt;code class=&quot;language-console&quot;&gt;$ mapr dbshell
find /apps/mqtt_records --where &apos;{ &quot;$eq&quot; : {&quot;_Chiller1AboutToFail&quot;:&quot;true&quot;} }&apos; --f _id,_Chiller1AboutToFail,timestamp
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here are a few example commands to look at that table with &lt;code&gt;mapr dbshell&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-console&quot;&gt;$ mapr dbshell
find /apps/mqtt_records --f timestamp
find --table /apps/mqtt_records --orderby _id --fields _Chiller2RemainingUsefulLife,_id
find --table /apps/mqtt_records --where &apos;{&quot;$gt&quot; : {&quot;_id&quot; : &quot;1523079964&quot;}}&apos; --orderby _id --fields _Chiller2RemainingUsefulLife,_id
find --table /apps/mqtt_records --where &apos;{&quot;$gt&quot; : {&quot;timestamp&quot; : &quot;1523079964&quot;}}&apos; --fields _Chiller2RemainingUsefulLife,timestamp
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here&apos;s an example of querying the IoT data records table with Drill:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-console&quot;&gt;/opt/mapr/drill/drill-*/bin/sqlline -u jdbc:drill: -n mapr
    select * from dfs.`/apps/mqtt_records` limit 2;
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Step 7 - Synthesize a high-speed data stream:&lt;/h2&gt;
&lt;p&gt;This command simulates a high-speed data stream from a vibration sensor sampling once every 10 ms.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;java -cp ~/predictive-maintenance/target/predictive-maintenance-1.0-jar-with-dependencies.jar com.mapr.examples.HighSpeedProducer /apps/fastdata:vibrations 10
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Why are we simulating a vibration sensor?&lt;/h2&gt;
&lt;p&gt;Degradation in machines often manifests itself as a low rumble or a small shake. These unusual vibrations give you the first clue that a machine is nearing the end of its useful life, so it&apos;s very important to detect those anomalies. Vibration sensors measure the displacement or velocity of motion thousands of times per second. Analyzing those signals is typically done in the frequency domain. An algorithm called &quot;fast Fourier transform&quot; (FFT) can sample time-series vibration data and identify its component frequencies. In the next step, you will run a command that converts the simulated vibration data to the frequency domain with an FFT and raises alarms when vibration frequencies vary by more than a predefined threshold.&lt;/p&gt;
&lt;img src=&quot;https://raw.githubusercontent.com/mapr-demos/predictive-maintenance/master/images/vibration_analysis.png?raw=true&quot; width=&quot;100%&quot; align=&quot;center&quot; alt=&quot;vibration analysis&quot;&gt;
&lt;p&gt;This demonstrates the capacity of MapR to ingest and process high-speed streaming data. Depending on the hardware, you will probably see MapR Streams processing more than 40,000 messages per second in this step.&lt;/p&gt;
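&lt;p&gt;Step 8 runs this detection as a Spark job. To make the FFT-and-threshold idea concrete first, here is a tiny NumPy sketch (an illustration only, not the job itself): take the FFT of a window of vibration samples, compare the component magnitudes against a reference window, and flag the window when any component drifts by more than a threshold. The signal shapes are invented for the example.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Toy illustration of FFT-based anomaly detection on a vibration signal.
# Not the Spark job used in Step 8; signal shapes and threshold are illustrative.
import numpy as np

SAMPLE_PERIOD_S = 0.01  # the simulator emits one sample every 10 ms
THRESHOLD_PCT = 25.0    # same 25% threshold passed to the Spark job in Step 8

def fft_magnitudes(samples):
    &quot;&quot;&quot;Magnitudes of the positive-frequency FFT components of one window.&quot;&quot;&quot;
    return np.abs(np.fft.rfft(samples))

def drifted(reference_window, current_window, threshold_pct=THRESHOLD_PCT):
    &quot;&quot;&quot;True if any frequency component changed by more than threshold_pct percent.&quot;&quot;&quot;
    ref = fft_magnitudes(reference_window)
    cur = fft_magnitudes(current_window)
    pct_change = 100.0 * np.abs(cur - ref) / (ref + 1e-9)
    return bool(np.any(pct_change &gt; threshold_pct))

# Example: a healthy 5 Hz vibration versus one with an extra 12 Hz component mixed in.
t = np.arange(0, 2, SAMPLE_PERIOD_S)
healthy = np.sin(2 * np.pi * 5 * t)
faulty = healthy + 0.5 * np.sin(2 * np.pi * 12 * t)
print(drifted(healthy, healthy))  # False
print(drifted(healthy, faulty))   # True
&lt;/code&gt;&lt;/pre&gt;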
&lt;h2&gt;Step 8 - Process high-speed data stream:&lt;/h2&gt;
&lt;p&gt;This will calculate FFTs on the fly for the high-speed streaming data, and render an event in Grafana when FFTs change by more than 25% over a rolling window. This simulates anomaly detection for a vibration signal. Update &quot;&lt;a href=&quot;http://localhost:3000&quot;&gt;http://localhost:3000&lt;/a&gt;&quot; with the hostname and port for your Grafana instance.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;/opt/mapr/spark/spark-*/bin/spark-submit --class com.mapr.examples.StreamingFourierTransform ~/predictive-maintenance/target/predictive-maintenance-1.0-jar-with-dependencies.jar /apps/fastdata:vibrations 25.0 http://localhost:3000
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Step 9 - Visualize data in Grafana&lt;/h2&gt;
&lt;p&gt;By now, you should be able to see streaming IoT data, vibration faults, and device failures in the Grafana dashboard.&lt;/p&gt;
&lt;img src=&quot;https://raw.githubusercontent.com/mapr-demos/predictive-maintenance/master/images/grafana_screenshot.png&quot; width=&quot;100%&quot; align=&quot;center&quot;&gt;
&lt;h2&gt;Step 10 - Explore IoT data with Apache Drill&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://drill.apache.org/&quot;&gt;Apache Drill&lt;/a&gt; is a distributed SQL query engine that unifies access to data across a variety of data formats and sources. MapR is the primary contributor to Apache Drill, so naturally, it is included in the MapR platform. Here&apos;s how you can use it to explore some of the IoT data we&apos;ve gathered so far in this tutorial.&lt;/p&gt;
&lt;p&gt;Open the Drill web interface. If you&apos;re running MapR on your laptop then that&apos;s probably at &lt;a href=&quot;http://localhost:8047&quot;&gt;http://localhost:8047&lt;/a&gt;. Here are a couple of useful queries:&lt;/p&gt;
&lt;h2&gt;Show how many messages have arrived cumulatively over days of the week:&lt;/h2&gt;
&lt;pre&gt;&lt;code class=&quot;language-sql&quot;&gt;SELECT _day_of_week_long, count(_day_of_week_long) FROM dfs.`/apps/mqtt_records` group by _day_of_week_long;
&lt;/code&gt;&lt;/pre&gt;
&lt;img src=&quot;https://raw.githubusercontent.com/mapr-demos/predictive-maintenance/master/images/drill_query_1.png&quot; width=&quot;100%&quot;&gt;
&lt;img src=&quot;https://raw.githubusercontent.com/mapr-demos/predictive-maintenance/master/images/drill_result_1.png&quot; width=&quot;100%&quot;&gt;
&lt;h2&gt;Count how many faults have been detected:&lt;/h2&gt;
&lt;pre&gt;&lt;code class=&quot;language-sql&quot;&gt;WITH x AS
(
SELECT _id, _day_of_week_long, _Chiller1AboutToFail, ROW_NUMBER() OVER (PARTITION BY _Chiller1AboutToFail ORDER BY _id) as fault FROM dfs.`/apps/mqtt_records`
)
SELECT * from x WHERE _Chiller1AboutToFail = &apos;true&apos; and fault = 1;
&lt;/code&gt;&lt;/pre&gt;
&lt;img src=&quot;https://raw.githubusercontent.com/mapr-demos/predictive-maintenance/master/images/drill_query_2.png&quot; width=&quot;100%&quot;&gt;
&lt;img src=&quot;https://raw.githubusercontent.com/mapr-demos/predictive-maintenance/master/images/drill_result_2.png&quot; width=&quot;100%&quot;&gt;
&lt;p&gt;Drill can also be used to load data from MapR-DB into data science notebooks. Examples of this are shown in the following section.&lt;/p&gt;
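&lt;p&gt;As a quick taste of that, the hedged sketch below pulls rows from the mqtt_records table into a pandas DataFrame through Drill&apos;s REST endpoint (&lt;code&gt;/query.json&lt;/code&gt;). The Drill host and port are assumptions; adjust them for your environment, or use JDBC or pydrill if you prefer.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Hedged sketch: query Drill over its REST API and load the result into pandas,
# which is one simple way to feed the data into a notebook for ML experiments.
import pandas as pd
import requests

DRILL_URL = &quot;http://localhost:8047/query.json&quot;  # adjust for your Drill host/port

def drill_query(sql):
    &quot;&quot;&quot;Run a SQL statement through Drill&apos;s REST API and return the rows as a DataFrame.&quot;&quot;&quot;
    resp = requests.post(DRILL_URL, json={&quot;queryType&quot;: &quot;SQL&quot;, &quot;query&quot;: sql})
    resp.raise_for_status()
    body = resp.json()
    return pd.DataFrame(body[&quot;rows&quot;], columns=body.get(&quot;columns&quot;))

df = drill_query(&quot;SELECT * FROM dfs.`/apps/mqtt_records` LIMIT 100&quot;)
print(df.head())
&lt;/code&gt;&lt;/pre&gt;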
&lt;h2&gt;References for Machine Learning techniques for Predictive Maintenance&lt;/h2&gt;
&lt;p&gt;This tutorial focuses on data engineering - i.e. getting data in the right format and in the right place in order to take advantage of machine learning (ML) for predictive maintenance applications. The details of ML are beyond the scope of this tutorial but to better understand ML techniques commonly used for predictive maintenance, check out the provided Jupyter notebook for &lt;a href=&quot;https://github.com/mapr-demos/predictive-maintenance/blob/master/notebooks/jupyter/LSTM%20For%20Predictive%20Maintenance-ian01.ipynb&quot;&gt;LSTM predictions for About To Fail&lt;/a&gt;. This notebook not only talks about how to use Long Short Term Memory (LSTM) but also how to generate a sample dataset with &lt;a href=&quot;https://github.com/tdunning/log-synth&quot;&gt;logsynth&lt;/a&gt; that resembles what you might see in a real factory. This notebook is great because it explains how to experiment with LSTMs entirely on your laptop.&lt;/p&gt;
&lt;img src=&quot;https://raw.githubusercontent.com/mapr-demos/predictive-maintenance/master/images/lstm-about_to_fail-50.png?raw=true&quot; width=&quot;100%&quot; alt=&quot;LSTM About To Fail prediction&quot;&gt;
&lt;p&gt;Here are some of the other notebooks included in this repo that demonstrate ML concepts for predictive maintenance:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/mapr-demos/predictive-maintenance/blob/master/notebooks/jupyter/LSTM%20predictions%20for%20Remaining%20Useful%20Life.ipynb&quot;&gt;LSTM predictions for Remaining Useful Life&lt;/a&gt; shows how to train a point regression model using an LSTM neural network in Keras to predict the point in time when an airplane engine will fail.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/mapr-demos/predictive-maintenance/blob/master/notebooks/jupyter/LSTM%20time%20series%20prediction%20from%20OpenTSDB.ipynb&quot;&gt;LSTM time-series predictions from OpenTSDB&lt;/a&gt; shows how to train a model to predict the next value in a sequence of numbers, which in this case, is a time-series sequence of numbers stored in OpenTSDB. This notebook also shows how to load training data into Tensorflow using the REST API for OpenTSDB.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/mapr-demos/predictive-maintenance/blob/master/notebooks/jupyter/RNN%20time%20series%20prediction%20from%20OpenTSDB.ipynb&quot;&gt;RNN time-series predictions from OpenTSDB&lt;/a&gt; also shows how to train a model to predict the next value in a sequence of numbers, except it uses an RNN model instead of an LSTM.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/mapr-demos/predictive-maintenance/blob/master/notebooks/jupyter/RNN%20predictions%20on%20MapR-DB%20data%20via%20Drill.ipynb&quot;&gt;RNN predictions for MapR-DB data via Drill&lt;/a&gt; also shows how to train a model to predict the next value in a sequence of numbers, except it reads time-series data from MapR-DB using Drill as a SQL engine.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;StreamSets Demonstration&lt;/h2&gt;
&lt;p&gt;To give you an idea of what dataflow management tools do, I’ve prepared a simple StreamSets project that you can run on a laptop with Docker. This project demonstrates a pipeline that streams time-series data recorded from an industrial HVAC system into OpenTSDB for visualization in Grafana.&lt;/p&gt;
&lt;p&gt;Create a docker network to bridge containers:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-console&quot;&gt;docker network create mynetwork
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Start StreamSets, OpenTSDB, and Grafana:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-console&quot;&gt;docker run -it -p 18630:18630 -d --name sdc --network mynetwork \
streamsets/datacollector
docker run -dp 4242:4242 --name hbase --network mynetwork \
petergrace/opentsdb-docker
docker run -d -p 3000:3000 --name grafana --network mynetwork \
grafana/grafana
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Open Grafana at &lt;a href=&quot;http://localhost:3000&quot;&gt;http://localhost:3000&lt;/a&gt; and log in with admin / admin.&lt;/p&gt;
&lt;p&gt;Add &lt;a href=&quot;http://hbase:4242&quot;&gt;http://hbase:4242&lt;/a&gt; as an OpenTSDB datasource to Grafana. If you don’t know how to add a data source, refer to Grafana docs. Your datasource definition should look like this:&lt;/p&gt;
&lt;img src=&quot;https://raw.githubusercontent.com/mapr-demos/predictive-maintenance/master/images/grafana_opentsdb_config.png&quot; width=&quot;100%&quot; align=&quot;center&quot;&gt;
&lt;p&gt;Download the following Grafana dashboard file:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-console&quot;&gt;wget https://raw.githubusercontent.com/mapr-demos/predictive-maintenance/master/Grafana/IoT_dashboard.json
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Import that file into Grafana. If you don’t know how to import a dashboard, see Grafana docs. The Grafana import dialog should look like this:&lt;/p&gt;
&lt;img src=&quot;https://raw.githubusercontent.com/mapr-demos/predictive-maintenance/master/images/grafana_dashboard_config.png&quot; width=&quot;100%&quot; align=&quot;center&quot;&gt;
&lt;p&gt;Download, unzip, and copy the MQTT dataset to the StreamSets container:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-console&quot;&gt;wget https://github.com/mapr-demos/predictive-maintenance/raw/master/sample_dataset/mqtt.json.gz
gunzip mqtt.json.gz
docker cp mqtt.json sdc:/tmp/mqtt.json
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Open StreamSets at &lt;a href=&quot;http://localhost:18630&quot;&gt;http://localhost:18630&lt;/a&gt; and log in with admin / admin.&lt;/p&gt;
&lt;p&gt;Download and import the following pipeline into StreamSets. If you don’t know how to import a pipeline, refer to StreamSets docs.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-console&quot;&gt;wget https://raw.githubusercontent.com/mapr-demos/predictive-maintenance/master/StreamSets/MQTT%20File%20Tail.json
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You will see a warning about a missing library in the “Parse MQTT JSON” stage. Click that stage and follow the instructions to install the Jython library.&lt;/p&gt;
&lt;img src=&quot;https://raw.githubusercontent.com/mapr-demos/predictive-maintenance/master/images/streamSets_warning.png&quot; width=&quot;100%&quot; align=&quot;center&quot;&gt;
&lt;p&gt;Finally, run the StreamSets pipeline.&lt;/p&gt;
&lt;img src=&quot;https://raw.githubusercontent.com/mapr-demos/predictive-maintenance/master/images/grafana_streamset_animation.gif&quot; width=&quot;100%&quot; align=&quot;center&quot;&gt;
&lt;p&gt;Hopefully, by setting up this pipeline and exploring StreamSets you’ll get the gist of what dataflow management tools can do.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Writing Deep Learning Tools for all Data Scientists, Not Just Unicorns]]></title><description><![CDATA[Machine learning (ML) is exploding in popularity, and, as it does, ML tooling is frantically trying to keep up. Tools for everything you can…]]></description><link>https://developer.hpe.com/writing-deep-learning-tools-for-all-data-scientists-not-just-unicorns/</link><guid isPermaLink="false">https://developer.hpe.com/writing-deep-learning-tools-for-all-data-scientists-not-just-unicorns/</guid><pubDate>Fri, 11 Feb 2022 15:45:26 GMT</pubDate><content:encoded>&lt;p&gt;Machine learning (ML) is exploding in popularity, and, as it does, ML tooling is frantically trying to keep up. Tools for everything you can imagine are popping up: data versioning, experiment tracking, model serving, and, yes, even tools to help ML run on Kubernetes. Although some projects are more popular than others, given the newness of this space, there are no clear winners yet and no one has yet established the perfect set of software to enable and accelerate ML.&lt;/p&gt;
&lt;h2&gt;Here’s the problem – Kubernetes&lt;/h2&gt;
&lt;p&gt;Kubernetes is one of the most important pieces of software produced in the last decade and one of the most influential open source projects ever. It was &lt;em&gt;&lt;strong&gt;built by software engineers specifically for software engineers&lt;/strong&gt;&lt;/em&gt; to orchestrate containers in the new cloud-native/DevOps paradigm and has completely revolutionized how applications are developed and how infrastructure is deployed and managed. Given the ease of being able to spin up and down a service, developers now have easier access to more GPU time, resulting in an increased popularity of computationally-demanding applications like deep learning (DL).&lt;/p&gt;
&lt;p&gt;As a result, however, the tools that have been developed to train DL models require data scientists to use and understand Kubernetes. If you look at the knowledge base of a data scientist versus a software engineer, you’ll understand why there are two different titles. The few whose skills overlap are considered unicorns, and they are few and far between.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/updated-unicorns-picture-smaller.png&quot;&gt;&lt;/center&gt;
&lt;p&gt;If you happen to be a Unicorn, congratulations! You can use your magic to wield Kubernetes and Deep Learning together and build something beautiful. The rest of us, though, get pretty annoyed when we need to dive deep into computer systems engineering before we can make progress developing new ML models. The crazy thing is, this hassle is completely avoidable! We just need to start developing &lt;strong&gt;ML tools built for data scientists, not software engineers&lt;/strong&gt;.&lt;/p&gt;
&lt;h2&gt;Diving Deeper&lt;/h2&gt;
&lt;p&gt;Let’s take a quick look at &lt;a href=&quot;https://www.kubeflow.org/&quot;&gt;Kubeflow&lt;/a&gt; to understand where ML tooling has gone wrong.  Kubeflow started as an adaptation of how Google was running TensorFlow internally, as a tool that allowed &lt;em&gt;TensorFlow to run on Kubernetes&lt;/em&gt;. This technology was &lt;em&gt;very&lt;/em&gt; impactful, creating a much simpler way to use hardware managed by Kubernetes to do deep learning.&lt;/p&gt;
&lt;p&gt;That initial version of Kubeflow is now the Kubeflow component called &lt;a href=&quot;https://www.kubeflow.org/docs/components/training/tftraining/&quot;&gt;TFJob&lt;/a&gt;. Without TFJob, running TensorFlow on Kubernetes would be miserable — you would need to specify a complex topology of containers, networking, and storage before you could even start writing your ML code. With TFJob, this is simplified, but, crucially, &lt;em&gt;it is not nearly simple enough&lt;/em&gt;. To use TFJob, you need to:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;Wrap your ML code up neatly in a container.&lt;/strong&gt;&lt;/em&gt; This will be a clunky experience that will require you to package your code and upload it if you want to make changes. Docker is great, but this will slow down your development cycle significantly.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;Write a Kubernetes TFJob manifest.&lt;/strong&gt;&lt;/em&gt; This might not sound that intimidating, but for a data scientist not fluent in Kubernetes it can be a daunting task. To do this well, you’ll need to learn a lot about Kubernetes — a far cry from the Python that these scientists are used to. Let’s look at the &lt;em&gt;most simple version of this, from the Kubeflow docs&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;apiVersion: kubeflow.org/v1
kind: TFJob
metadata:
  generateName: tfjob
  namespace: your-user-namespace
spec:
  tfReplicaSpecs:
    PS:
      replicas: 1
      restartPolicy: OnFailure
      template:
        metadata:
          annotations:
            sidecar.istio.io/inject: &quot;false&quot;
        spec:
          containers:
          - name: tensorflow
            image: gcr.io/your-project/your-image
            command:
              - python
              - -m
              - trainer.task
              - --batch_size=32
              - --training_steps=1000
    Worker:
      replicas: 3
      restartPolicy: OnFailure
      template:
        metadata:
          annotations:
            sidecar.istio.io/inject: &quot;false&quot;
        spec:
          containers:
          - name: tensorflow
            image: gcr.io/your-project/your-image
            command:
              - python
              - -m
              - trainer.task
              - --batch_size=32
              - --training_steps=1000
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This configuration file is &lt;strong&gt;full&lt;/strong&gt; of concepts that are foreign to most data scientists. Pods, replicas, sidecars, restart policies, Kubernetes APIs — all of this is confusing, complex, and detracts from our ability to focus on data science.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;Learn the Kubernetes CLI.&lt;/strong&gt;&lt;/em&gt; This is minor, but again navigating Kubernetes is not a trivial thing to figure out. Submitting jobs may be relatively straightforward, but seeing results, artifacts, and logs of experiments is unintuitive and clunky.&lt;/p&gt;
&lt;p&gt;If you think about where this all came from, it makes sense that it evolved as it did. As this is an adaptation of the technology that Google uses to train DL models, and Google (unlike the rest of the world) tends to have a lot of unicorns, running Deep Learning on Kubernetes for them wasn’t a big deal.&lt;/p&gt;
&lt;p&gt;But not everyone is a Google unicorn – many other highly-skilled data scientists simply bounce off of Kubeflow because Kubernetes is too far outside of their comfort zone.&lt;/p&gt;
&lt;p&gt;If you look at some of the other components of Kubeflow, you’ll find similar philosophies: &lt;strong&gt;MPIJob&lt;/strong&gt;, &lt;strong&gt;PyTorchJob&lt;/strong&gt;, and &lt;strong&gt;Katib&lt;/strong&gt; all expect data scientists to work with Kubernetes concepts and APIs. All of them suffer from the exact same usability issues — most data scientists don’t want to dive into the weeds of how Kubernetes is orchestrating the hardware; they just want an easier way to train their models. They want tools that abstract away foreign concepts and let them communicate ML concepts succinctly.&lt;/p&gt;
&lt;p&gt;One fascinating thing about Kubeflow is that some of the components of Kubeflow have clearly figured this out and realized that we can do better! The best example is &lt;strong&gt;Kubeflow Pipelines&lt;/strong&gt; (KFP). The core underlying technology of Kubeflow Pipelines is &lt;a href=&quot;https://github.com/argoproj/argo&quot;&gt;Argo Workflows&lt;/a&gt;, which are very similar to TFJob, providing a way to declare workflows in Kubernetes. &lt;strong&gt;Kubeflow Pipelines&lt;/strong&gt; goes the crucial extra step of providing a Domain Specific Language that allows data scientists to write pipelines &lt;em&gt;in Python&lt;/em&gt;! The builders of KFP realized that building containers and writing Kubernetes manifests wasn’t how data scientists wanted to interact with their work, so they &lt;em&gt;&lt;strong&gt;abstracted away the k8s&lt;/strong&gt;&lt;/em&gt; and made a tool that data scientists love.&lt;/p&gt;
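&lt;p&gt;To make that difference concrete, here is roughly what the KFP v1-style Python DSL looks like. Everything below is illustrative: the container image, command, and hyperparameter are placeholders, not a real training job.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Illustrative KFP v1-style pipeline: the workflow is declared in Python, and the
# compiler emits the underlying Argo/Kubernetes resources for you.
# The container image and command are placeholders.
import kfp
from kfp import dsl

def train_op(learning_rate: float):
    return dsl.ContainerOp(
        name=&quot;train&quot;,
        image=&quot;gcr.io/your-project/your-image&quot;,  # placeholder image
        command=[
            &quot;python&quot;, &quot;-m&quot;, &quot;trainer.task&quot;,
            &quot;--batch_size=32&quot;, &quot;--training_steps=1000&quot;,
            f&quot;--learning_rate={learning_rate}&quot;,
        ],
    )

@dsl.pipeline(
    name=&quot;toy-training-pipeline&quot;,
    description=&quot;Declared in Python; KFP handles the Kubernetes plumbing.&quot;,
)
def toy_pipeline(learning_rate: float = 1e-4):
    train_op(learning_rate)

if __name__ == &quot;__main__&quot;:
    kfp.compiler.Compiler().compile(toy_pipeline, &quot;toy_pipeline.yaml&quot;)
&lt;/code&gt;&lt;/pre&gt;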
&lt;h2&gt;Where do we go from here?&lt;/h2&gt;
&lt;p&gt;The key to providing ML practitioners with tools they’ll actually use is to truly understand them – what they like and dislike about their workflows – and then to enhance what they like and slice out what they don’t. ML tools should allow data scientists to accomplish more with less work. When developing tools for data scientists, it’s important to avoid thinking about them as software engineers, and instead build tools that allow ML people to build, train, and deploy ML models without needing to become DevOps experts.&lt;/p&gt;
&lt;p&gt;Abstractions here can help. Abstractions make it possible to perform high-performance data science on complex, modern infrastructure without needing to be a systems expert. Designing the right abstractions is not easy but it is crucially important to making modern ML more accessible, more convenient, and more cost-effective.&lt;/p&gt;
&lt;h2&gt;Consider Determined AI – a tool that speaks your language&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.hpe.com/platform/determined-ai/home/&quot;&gt;Determined AI&lt;/a&gt; accomplishes many of the same goals as a tool like Kubeflow — allowing scientists to build and train deep learning models on Kubernetes (or any hardware, really), but without expecting data scientists to master countless new technologies along the way.&lt;/p&gt;
&lt;p&gt;Compare a Determined configuration file to the TFJob configuration above:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;description: cifar10_pytorch
hyperparameters:
  learning_rate: 1e-4
  learning_rate_decay: 1e-6
  layer1_dropout: 0.25
  layer2_dropout: 0.25
  layer3_dropout: 0.5
  global_batch_size: 32
records_per_epoch: 50000
searcher:
  name: single
  metric: validation_error
  max_length:
    epochs: 32
entrypoint: model_def:CIFARTrial
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This configuration accomplishes essentially the same goal — describing an ML training workflow. The big difference is that this configuration is &lt;em&gt;written in the language of data scientists&lt;/em&gt;, with complicated infrastructure concepts abstracted away using terms they are comfortable with, like hyperparameters, epochs, metrics, etc.&lt;/p&gt;
&lt;p&gt;This means that users can do more with less. Instead of having to learn Kubernetes or configure a cluster of machines to work with Horovod, they simply need to install Determined and describe experiments with their own terms. Determined unlocks incredibly powerful tools like &lt;a href=&quot;https://www.determined.ai/blog/faster-nlp-with-deep-learning-distributed-training/&quot;&gt;distributed training&lt;/a&gt;, &lt;a href=&quot;https://www.determined.ai/blog/why-does-no-one-use-advanced-hp-tuning/&quot;&gt;hyperparameter search&lt;/a&gt;, and experiment tracking, without placing extra burden on the user to understand what is happening behind the scenes. Determined has carefully built &lt;a href=&quot;https://www.determined.ai/blog/standardized-models-with-determined/&quot;&gt;powerful abstractions that allow data scientists to focus on science&lt;/a&gt;, and not engineering, systems, and infrastructure.&lt;/p&gt;
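&lt;p&gt;As a rough illustration of the workflow (the quick start linked below is the authoritative reference), an experiment described by a configuration like the one above is submitted with the &lt;code&gt;det&lt;/code&gt; CLI or the Python SDK. The master address, config path, and model directory in this sketch are placeholders, and the SDK calls shown are an assumption about the current API rather than a definitive recipe.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Rough, hedged sketch: submit an experiment described by a config file like the one
# above. Paths and the master address are placeholders; consult the Determined docs
# for the exact SDK surface in your version.
from determined.experimental import client

client.login(master=&quot;http://localhost:8080&quot;, user=&quot;determined&quot;)  # placeholder master
exp = client.create_experiment(config=&quot;const.yaml&quot;, model_dir=&quot;.&quot;)
print(f&quot;started experiment {exp.id}&quot;)
&lt;/code&gt;&lt;/pre&gt;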
&lt;p&gt;To see it in action, &lt;a href=&quot;https://docs.determined.ai/latest/tutorials/quick-start.html&quot;&gt;start with our quick start guide&lt;/a&gt;! Determined is open source, so you can see how we do it in our &lt;a href=&quot;https://github.com/determined-ai/determined&quot;&gt;GitHub repository&lt;/a&gt;. To learn more about the issues surrounding today’s DL infrastructure tools and how Determined AI helps resolve them, view this replay of the HPE DEV Munch &amp;#x26; Learn seminar on &lt;a href=&quot;https://www.youtube.com/watch?v=ktZFLD-9qgw&amp;#x26;list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;Golden Age of AI, Dark Ages of AI Infrastructure&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If you have any questions along the way, &lt;a href=&quot;https://join.slack.com/t/determined-community/shared_invite/zt-cnj7802v-KcVbaUrIzQOwmkmY7gP0Ew&quot;&gt;hop on our community Slack&lt;/a&gt;; we’re happy to help!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[The right tools for the job]]></title><link>https://developer.hpe.com/2022-February-01/</link><guid isPermaLink="false">https://developer.hpe.com/2022-February-01/</guid><pubDate>Tue, 01 Feb 2022 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[New PowerShell toolkit available for managing HPE Data Services Cloud Console]]></title><description><![CDATA[Hewlett Packard Enterprise (HPE) recently introduced a new way to manage IT data storage resources: Data Services Cloud Console (DSCC) which…]]></description><link>https://developer.hpe.com/new-powershell-toolkit-available-for-managing-hpe-data-services-cloud-console/</link><guid isPermaLink="false">https://developer.hpe.com/new-powershell-toolkit-available-for-managing-hpe-data-services-cloud-console/</guid><pubDate>Fri, 28 Jan 2022 10:27:53 GMT</pubDate><content:encoded>&lt;p&gt;Hewlett Packard Enterprise (HPE) recently introduced a new way to manage IT data storage resources: &lt;a href=&quot;https://www.hpe.com/us/en/storage/data-services-cloud-console.html&quot;&gt;Data Services Cloud Console&lt;/a&gt; (DSCC) which takes a cloud-based approach to running on-prem data storage infrastructure and offers &lt;a href=&quot;https://www.storagereview.com/news/hpe-alletra-cloud-native-high-performance-storage&quot;&gt;an innovative way &lt;/a&gt;to meet the management challenges of modern IT workloads. While DSCC initially added a Storage-as-a-Service layer on top of the new &lt;a href=&quot;https://www.hpe.com/us/en/storage/alletra.html&quot;&gt;HPE Alletra storage arrays,&lt;/a&gt; the vision of DSCC goes well beyond this, encompassing a full range of data storage services, monitoring, planning and data protection.&lt;/p&gt;
&lt;p&gt;The DSCC architecture is built on an API-driven abstraction layer. The REST API was a key aspect of the service, enabling the linking of distributed components across HPE and also giving future customer and partner developers a means to create, consume, maintain, and monitor various HPE data infrastructure services. It also enabled the development of this new toolkit, which you can download and use today!&lt;/p&gt;
&lt;h2&gt;The cloud advantage&lt;/h2&gt;
&lt;p&gt;HPE GreenLake and DSCC allow you to manage across multiple storage platforms and multiple sites, providing the advantage of a single UI that spans servers and storage. They also offer optimizations not possible when managing devices as single management endpoints. A fully functional REST API helps accomplish all standard management tasks, like deploying volumes to servers or configuring snapshots - all designed to enable management at scale.&lt;/p&gt;
&lt;p&gt;But what about the administrator who simply wants to either use a CLI or write very simple scripts to deploy and configure resources? This is where the DSCC PowerShell Toolkit comes in, helping you manage your cloud storage like a pro with PowerShell.&lt;/p&gt;
&lt;h2&gt;How to set up access to the REST API&lt;/h2&gt;
&lt;p&gt;In this post, I&apos;ve outlined the steps required. To learn more about the DSCC REST API, check out the post &lt;a href=&quot;https://developer.hpe.com/blog/api-console-for-data-services-cloud-console/&quot;&gt;here&lt;/a&gt;. First, you&apos;ll need to log into your &lt;a href=&quot;https://common.cloud.hpe.com/&quot;&gt;HPE GreenLake Portal&lt;/a&gt;. Under the &lt;em&gt;Manage Account&lt;/em&gt; option, you will see an option to configure API access.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/powershell-manage-account-img.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can then create your new API account which allows you to granularly control your scripting environment. You are allowed up to 5 API Client credentials per GreenLake user account, and each API Client credential can be separately managed. A Client credential is made up of the combination of a Client_ID and a Client_Secret, which can be used to communicate with the Authentication server to obtain a valid session token. To generate these credentials, click on the appropriate button:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/powershell-api-client-credentials-img.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;You will need to identify which cloud you want to generate these credentials for, and provide a common name for the credential so that you can identify it later.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/powershell-create-credentials-img.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: In the above example image, the application selected is FLEETSCALE (US West). Please take a look at the article &lt;a href=&quot;https://developer.hpe.com/blog/api-console-for-data-services-cloud-console&quot;&gt;here&lt;/a&gt; for the supported DSCC application instances and the endpoints for each DSCC region.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Once the credential has been created, a new screen will display the Client_Id and Client_Secret. Record these values; once they have been created and displayed to the user, there is no other way to retrieve them from the Cloud Console.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/powershell-credentials-created-img.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Next, you need to download and install the DSCC PowerShell Toolkit. From any Microsoft Windows machine, open a PowerShell window, and type in the following commands:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;PS:&gt; $PSTK = &apos;https://codeload.github.com/HewlettPackard/HPEDSCC-PowerShell-Toolkit/zip/refs/heads/main&apos;
PS:&gt; $FOLDER = &apos;C:\Windows\System32\WindowsPowerShell\v1.0\Modules&apos;
PS:&gt; Invoke-WebRequest -Uri $PSTK -OutFile &quot;MyFile.zip&quot;
PS:&gt; Expand-Archive -Path &quot;MyFile.zip&quot; -DestinationPath $FOLDER
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As you can see, this little set of lines downloads the PowerShell Toolkit, which can also be found &lt;a href=&quot;https://github.com/HewlettPackard/HPEDSCC-PowerShell-Toolkit&quot;&gt;here&lt;/a&gt;, and unzips the archive into the Windows Modules folder so that PowerShell can find it later.&lt;/p&gt;
&lt;p&gt;Once this download and install is complete, you can Import the Module into your current session, and then connect to your HPE GreenLake DSCC service using the following commands:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;PS:&gt; Import-Module HPEDSCC
PS:&gt; Connect-DSCC -Client_id &apos;IdHere&apos; -Client_secret &apos;SecretHere&apos; -GreenLakeType Dev -AutoRenew
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/powershell-codeblock-dscc-img1.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The connection command above is fairly straightforward; however, there are a few items you may not recognize, such as the GreenLake type. The GreenLake type can be &lt;em&gt;Dev&lt;/em&gt;, &lt;em&gt;US&lt;/em&gt;, &lt;em&gt;JP&lt;/em&gt;, or &lt;em&gt;EU&lt;/em&gt;, depending on your cloud location, and the &lt;em&gt;TAB&lt;/em&gt; key will cycle through the valid values. The &lt;em&gt;AutoRenew&lt;/em&gt; switch tells the PowerShell Toolkit to refresh the token every 1 hour and 45 minutes, since the token naturally expires every 2 hours. Without the -&lt;em&gt;AutoRenew&lt;/em&gt; option, your PowerShell Toolkit session will simply time out at the 2 hour mark and you will need to run the connection command again. Once you close the PowerShell session, the client credentials are lost, as they are not persistently stored.&lt;/p&gt;
&lt;p&gt;Once you have connected, you will notice that the returned data from each PowerShell Call is a true PowerShell Object, which can be explored. To get a list of all of the available commands, and then to get detailed help from any command, use the following commands.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;PS:&gt; Get-Command -Module HPEDSCC
PS:&gt; Get-Help New-DSCCInitiator -Detailed
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/powershell-codeblock-dscc-img2.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;As you can see, each command has complete &lt;em&gt;Help&lt;/em&gt; defined for it, including the parameters, the valid values for the parameters that require them, and multiple usage examples with sample output.&lt;/p&gt;
&lt;h2&gt;Objects are deeper than they may appear&lt;/h2&gt;
&lt;p&gt;Let&apos;s talk about objects. When a PowerShell command returns data, you are presented with a table of the values most likely to be useful at a glance. However, PowerShell objects are far deeper than they appear on the surface. To explore this, first assign any returned object to a variable; you can also return a single item instead of all items. You can then dig into any part of that object using dot notation, and in many cases you will find that the deeper you go into the object, the more details emerge. Using the PowerShell Count property, you can determine, as in the following example, that 8 records exist in the object, and you can walk through each record using square brackets as shown below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;PS:&gt; $DATA = Get-DSCCStorageSystem -DeviceType device-type1
PS:&gt; $DATA.Count
PS:&gt; $DATA
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/powershell-codeblock-dscc-img3.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The value here is that, to extract any of these fields, you need not write a parser or process any data; you simply need to know the object layout. If you want to see the entire object in long form, you can use the following command to list all of the fields instead of just the most common ones. Additionally, if you would rather read the details of an object in the same format as it comes from the REST API call, you can use the built-in PowerShell cmdlet to convert it to raw JSON (shown later in this post).&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;PS:&gt; $DATA | format-list
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/powershell-codeblock-dscc-img4.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Here, the objects are rather deep and you can get significantly more information than a GUI would present to you. You can dig into each object using standard dot notation, as shown below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;PS:&gt; $DATA[3].SystemWWN
PS:&gt; ( $DATA[3].SoftwareVersions ).fullVersion
PS:&gt; ( $DATA[3].SoftwareVersions ).Components
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/powershell-codeblock-dscc-img5.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In addition to this object manipulation, you can also filter your results directly from the command line using the pipe operator, so you see only the details you are interested in. PowerShell provides a special variable, &lt;code&gt;$_&lt;/code&gt;, that refers to the current object coming through the pipeline.&lt;/p&gt;
&lt;p&gt;You can also work directly from the original command instead of assigning the data to a variable. You can limit the returned data to a single record using a search term such as an initiator ID or an IP address, or you can limit the results to a sub-collection that matches specific criteria, such as iSCSI versus FC initiators. In the example below, the &lt;code&gt;.count&lt;/code&gt; property shows that the Get-DSCCInitiator command returns 202 results; filtering the results to only those with protocol type &apos;ISCSI&apos; decreases this count to 22. Running the same command without &lt;code&gt;.count&lt;/code&gt; returns the actual object collection.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;PS:&gt; ( Get-DSCCInitiator ).count
PS:&gt; ( Get-DSCCInitiator | where { $_.protocol -like &apos;ISCSI&apos; } ).count
PS:&gt; Get-DSCCInitiator | where { $_.protocol -like &apos;ISCSI&apos; }
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/powershell-codeblock-dscc-img6.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can also limit the results by value, for example to all volumes with free space greater than a specific amount. You can chain these pipelines to add additional filters as well: filter a volume list by size, then filter that list again by other criteria. In the example below, the results are filtered again to limit them to a single storage system ID.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;PS:&gt; ( Get-DSCCStorageSystem -DeviceType device-type1 | Get-DSCCVolume ).count
PS:&gt; ( Get-DSCCStorageSystem -DeviceType device-type1 | Get-DSCCVolume | where { $_.sizeMiB -gt 524000 } ).count
PS:&gt; Get-DSCCStorageSystem -DeviceType device-type1 | Get-DSCCVolume | where { $_.sizeMiB -gt 524000 }
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/powershell-codeblock-dscc-img7.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;There may also be cases where you wish to see the raw JSON that each call returns. The good news is that PowerShell objects map directly to JSON objects, so you can use the following pipeline operation to convert any PowerShell object to native JSON formatting.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;PS:&gt; ( Get-DSCCStorageSystem -DeviceType device-type1 )[1]
PS:&gt; ( Get-DSCCStorageSystem -DeviceType device-type1 )[1] | ConvertTo-Json
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/powershell-codeblock-dscc-img8.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;You may have noticed in the examples that, when making calls to storage systems, I had to specify a DeviceType. This is because the REST API treats Primera/Alletra 9K systems (device-type1) differently than Alletra 6K/Nimble Storage systems (device-type2).&lt;/p&gt;
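&lt;p&gt;As a quick illustration, and assuming the toolkit accepts &lt;em&gt;device-type2&lt;/em&gt; in the same way it accepts &lt;em&gt;device-type1&lt;/em&gt;, you would query each family like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;# Primera / Alletra 9K systems
PS:&gt; Get-DSCCStorageSystem -DeviceType device-type1
# Nimble Storage / Alletra 6K systems (device-type2, as described above)
PS:&gt; Get-DSCCStorageSystem -DeviceType device-type2
&lt;/code&gt;&lt;/pre&gt;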
&lt;h2&gt;More tips and tricks&lt;/h2&gt;
&lt;p&gt;Another interesting trick of the toolkit is that many of the commands support discovery mode. For example, if you want to get the details of a specific storage system, you would run the command shown below.  But how do you find the required Storage System ID? That’s easy – you run the command without the System ID specified, in which case it will discover all of the Storage Systems and return that data.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;PS:&gt; Get-DSCCStorageSystem -DeviceType device-type1
PS:&gt; Get-DSCCStorageSystem -DeviceType device-type1 -SystemId &apos;2M2042059T&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/powershell-codeblock-dscc-img9.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Another very useful feature is that many commands accept pipeline input. For example, the command to obtain a storage system&apos;s controller nodes requires a system ID to query against; but if I want ALL of the controller nodes, I can use the first command to feed the system IDs to the second command. Optionally, you could request the controllers directly if you already know the storage system ID.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;PS:&gt; Get-DSCCStorageSystem -DeviceType device-type1 -SystemId &apos;2M2042059T&apos;
PS:&gt; Get-DSCCStorageSystem -DeviceType device-type1 -SystemId &apos;2M2042059T&apos; | Get-DSCCController
PS:&gt; Get-DSCCStorageSystem -DeviceType device-type1 | Get-DSCCController
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/powershell-codeblock-dscc-img10.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Understanding the REST API more deeply&lt;/h2&gt;
&lt;p&gt;Now that you are an expert at using the toolkit, let&apos;s see how it can also help you better understand the REST API model itself, beyond the embedded help, examples, and JSON conversion.&lt;/p&gt;
&lt;p&gt;Every command implements a switch called &lt;code&gt;-WhatIf&lt;/code&gt;. By adding &lt;code&gt;-WhatIf&lt;/code&gt; to any command, you can see exactly what would be sent to the REST API endpoint, including the URI, the header, the method, the connection type, and the body if one exists. Below is an example of this type of output.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;PS:&gt; New-DSCCInitiator -address &apos;deadbeefdeadbeeddeadbeef&apos; -hbaModel &apos;LPe12002&apos; -hostSpeed 10000 -ipAddress &apos;10.10.10.1&apos; -name &apos;MyInit&apos; -protocol iSCSI -vendor &apos;Emulex&apos; -WhatIf
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/powershell-codeblock-dscc-img11.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;With this information, you can replicate any call in Python, with cURL, or with another language or tool of your choice. The primary reference should always be the REST API documentation, as well as any SDK produced for DSCC, but the PowerShell Toolkit can be used to prove out a functional solution, which you could then turn into a true programming project.&lt;/p&gt;
&lt;h2&gt;Get the free toolkit today to manage your storage like a pro&lt;/h2&gt;
&lt;p&gt;The new HPE Data Services Cloud Console PowerShell Toolkit is a valuable innovation for managing the HPE Data Services Cloud Console. It leverages your familiarity with PowerShell to speed the management of HPE storage resources programmatically and at scale. You can download the toolkit for free &lt;a href=&quot;https://github.com/HewlettPackard/HPEDSCC-PowerShell-Toolkit&quot;&gt;here&lt;/a&gt; on GitHub.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE Developer launches its Munch & Learn technical talks]]></title><description><![CDATA[Here’s your opportunity to learn more about today’s most popular technologies and connect with some of the industry’s leading technical…]]></description><link>https://developer.hpe.com/hpe-dev-launches-its-munch-learn-technical-talks/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-dev-launches-its-munch-learn-technical-talks/</guid><pubDate>Thu, 27 Jan 2022 16:44:13 GMT</pubDate><content:encoded>&lt;p&gt;Here’s your opportunity to learn more about today’s most popular technologies and connect with some of the industry’s leading technical specialists. Held monthly by the HPE DEV community, the Munch &amp;#x26; Learn technology talks are free, 60-minute sessions where subject matter experts (SMEs)  offer valuable insights and take your questions. Each month we will present a different topic, ranging from using containers to develop code to how to deploy data fabrics in AI and machine learning applications. We’ll cover a variety of technologies, including open source, Kubernetes, and &lt;a href=&quot;https://www.hpe.com/us/en/ezmeral.html&quot; target=&quot;_blank&quot;&gt;HPE Ezmeral software&lt;/a&gt;. To view each month’s topic and catch replays of previous sessions, check out our &lt;a href=&quot;/blog/munch-and-learn&quot; target=&quot;_blank&quot;&gt;Munch &amp;#x26; Learn calendar&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;At each monthly gathering, a technology expert is introduced by an industry luminary to the online audience. These are live sessions where you can bring questions and participate in the discussion. In keeping with the HPE DEV community’s fun and engaging atmosphere, during each session we share recipes and pictures of our favorite munchies on the &lt;a href=&quot;https://hpedev.slack.com/archives/C01GVQUPM3P&quot; target=&quot;_blank&quot;&gt;#munch-and-learn channel&lt;/a&gt; of our &lt;a href=&quot;https://slack.hpedev.io&quot; target=&quot;_blank&quot;&gt;Slack workspace&lt;/a&gt;. Attendees are able to not only get a better taste of new technologies, but also benefit by being able to try out some new recipes.&lt;/p&gt;
&lt;h2&gt;Harkening back to our very first sessions&lt;/h2&gt;
&lt;p&gt;For our first session, held on January 27, 2021, Ellen Friedman, principal technologist at HPE, hosted Ted Dunning, HPE Ezmeral Data Fabric CTO, to discuss the value of data fabric and how it can run, manage, control, and secure the apps, data, and IT that run your business – from edge to cloud. Starting at a very high-level, but quickly getting into details, they explored three examples of how data fabric has been used in production at scale. Details included how these and similar systems are constructed using data fabric primitives. Ted also touched on some of the details of how the &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;HPE Ezmeral Data Fabric&lt;/a&gt; works. And we had the Slack channel open for folks to pose their questions. For details on this session, read &lt;a href=&quot;/blog/exploring-data-fabric-and-containers-in-hpe-devs-new-munch-learn-monthly&quot; target=&quot;_blank&quot;&gt;Exploring Data Fabric and Containers in HPE DEVs new Munch &amp;#x26; Learn monthly gatherings&lt;/a&gt;. You can also catch a replay of the session &lt;a href=&quot;https://youtu.be/qi6sTvu8osk?list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot; target=&quot;_blank&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;During our February meeting, Nigel Poulton, Kubernetes technology author, introduced Tom Phelan, HPE Fellow and Ezmeral Software Platform CTO at HPE. They shared their thoughts on container architectures and how to leverage the Kubernetes Container Orchestrator to deploy and manage stateful, as well as microservice-based, applications. Nigel was kind enough to donate several eBooks to reward five attendees for posing excellent questions and sharing pictures of their favorite munchy. You can view the replay of this session &lt;a href=&quot;https://youtu.be/9PvKpe7yMpI?list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot; target=&quot;_blank&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;On March 24, 2021, Ellen Friedman hosted Doug Cackett, who led us all in a whiteboard discussion about how data is transformed to create meaningful business value. He covered why data science was important, automating decisions, how it works in the real world using some simple examples, and some of the reasons why a data scientist might want to finesse some of the data. You can catch the &lt;a href=&quot;https://youtu.be/Inh6eXM0EbA?list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;replay of this session&lt;/a&gt; and also review the &lt;a href=&quot;https://developer.hpe.com/uploads/media/2021/3/munchandlearn-3-chat-1617017930299.pdf&quot;&gt;side chat conversations&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Things start heating up&lt;/h2&gt;
&lt;p&gt;Daniel Feldman of HPE and Frederick Kautz of Sharecare joined us on April 21st for a discussion on how to build a foundation for zero trust using &lt;a href=&quot;https://spiffe.io/&quot;&gt;SPIFFE&lt;/a&gt;, the secure production identity framework for everyone. As was pointed out, enterprise applications and other software services are increasingly running across multiple platforms spanning diverse data centers and public clouds, all in various domains. To truly secure them, many are turning to a Zero Trust model where nothing is taken for granted and every request is verified. Because Service identity and authentication is perhaps the most fundamental piece of building a Zero Trust environment, it’s important to get it right so as to not undermine later efforts to enable Zero Trust in one’s organization. Catch the replay &lt;a href=&quot;https://youtu.be/G1ceKr16nn8&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;On May 19th, Doug Cackett, an expert on the HPE Ezmeral Container Platform, returned to talk with us more about Data Science 101. Ellen Friedman joined as well as the session moderator, adding in interesting insights on different operational models used in the industry. If you missed the session, be sure to listen to the replay &lt;a href=&quot;https://youtu.be/Va4tSr__Yok?list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;We were really excited to have Owen Garrett of Deepfence.io join us on June 30th to discuss the fundamentals of production microservices. This session was one of our most highly attended sessions so far, and understandably so. Owen ran us through a very interesting presentation on why microservices are important for architecting and deploying modern applications, pointing out their unique characteristics, what opportunities they offer, and the challenges they introduce. He provided a high-level introduction to some of the technologies encountered when deploying a microservices application (i.e. containers, Kubernetes, CI/CD), and then proceeded to get into a bit more detail on Ingress Controllers and Service Meshes as they pertain to microservices apps. He finished off by discussing some of the technologies you might want to keep in mind to secure your microservices application. Listen to the replay of this fascinating session &lt;a href=&quot;https://youtu.be/qyyxQU37ZyQ?list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;For the July 28th Munch &amp;#x26; Learn session, Ellen Friedman and Ted Dunning once again joined us to explore how to make data consumable for real-world data science. With so much data being gathered today, businesses are eager to make the most of it, but that can be challenging when you work with diverse and mismatched data sets and teams. Ted shared some interesting issues he&apos;s come across, as well as how those situations were handled. They pointed out how data initially collected for one purpose is often later needed by a different team, and how a little advance planning can make this reuse possible. If you missed the talk, you can still take advantage of the replay &lt;a href=&quot;https://youtu.be/4WKjRqflF7M&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Schooling on Kubernetes and Data Analytics&lt;/h2&gt;
&lt;p&gt;Heading into the fall, the HPE DEV team invited Nigel Poulton, renowned author and K8s expert, to come back on August 25th to reprise his Kubernetes 101 session that he gave at the HPE Discover 2021 Edge-to-Cloud Conference. Nigel walked us through the basics of Kubernetes in one of our most highly attended sessions yet, which you can catch in this &lt;a href=&quot;https://www.youtube.com/watch?v=PWVJKK1obKQ&quot;&gt;replay&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;For our September 22nd session, we focused on how organizations that have progressed along their digital transformation journey are now looking at the next set of workloads and processes to modernize. For many, this includes Data and Analytics.  HPE Ezmeral data science experts, Matt Maccaux and Randy Thomasson, joined us to give their insights, discussing strategic best practices and offering helpful tips for real-world applications.&lt;/p&gt;
&lt;h2&gt;And they just keep coming&lt;/h2&gt;
&lt;p&gt;The year 2021 closed out with two additional Munch &amp;#x26; Learn sessions: one on using HPE Ezmeral Unified Analytics and another on Redfish. Replays of those sessions are available through our &lt;a href=&quot;https://developer.hpe.com/campaign/munch-and-learn&quot;&gt;calendar page&lt;/a&gt;. The Munch &amp;#x26; Learn sessions proved quite popular, bringing in over 1,200 attendees over the course of the year. This success sparked the launch of a new set of technology talks, called Meetups, which the HPE Developer team also offers on a monthly basis. Whereas the Munch &amp;#x26; Learn sessions bring in industry luminaries to cover emerging industry trends, the Meetups are opportunities to dig deep into new technologies, inviting product experts to come in and talk in detail about a given subject. You can learn more about these meetups in this &lt;a href=&quot;https://developer.hpe.com/blog/new-for-2022-hpe-dev-meetups/&quot;&gt;blog post&lt;/a&gt; and check the &lt;a href=&quot;https://developer.hpe.com/campaign/meetups&quot;&gt;schedule&lt;/a&gt; for upcoming talks and video replays.&lt;/p&gt;
&lt;p&gt;Keep checking back to see what new topics will be covered for HPE DEV Munch &amp;#x26; Learns on &lt;a href=&quot;https://developer.hpe.com/campaign/munch-and-learn&quot;&gt;our schedule&lt;/a&gt; so you don’t miss any of these fun and engaging discussions. Better yet, subscribe to our &lt;a href=&quot;https://developer.hpe.com/newsletter-signup&quot;&gt;HPE DEV Newsletter&lt;/a&gt;. In it, you’ll find information on our upcoming Munch &amp;#x26; Learn sessions as well as learn about everything else that’s new.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Real-time Smart City Traffic Monitoring Using Microservices-based Streaming Architecture (Part 2)]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may…]]></description><link>https://developer.hpe.com/real-time-smart-city-traffic-monitoring-using-microservices-based-streaming-architecture-part-2/</link><guid isPermaLink="false">https://developer.hpe.com/real-time-smart-city-traffic-monitoring-using-microservices-based-streaming-architecture-part-2/</guid><pubDate>Thu, 27 Jan 2022 09:56:33 GMT</pubDate><content:encoded>
&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/ezmeral-data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/ezmeral-data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;Modern Open Source Complex Event Processing For IoT&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;This series of blog posts details my findings as I bring to production a fully modern take on Complex Event Processing, or CEP for short. In many applications, ranging from financial services to retail and IoT, there is tremendous value in automating tasks that require action in real time. Even before considering the IT systems and frameworks needed to support it, the capability itself is clearly useful.&lt;/p&gt;
&lt;p&gt;In the &lt;a href=&quot;https://developer.hpe.com/blog/better-complex-event-processing-at-scale-using-a-microservices-based-str/&quot;&gt;first post of the series&lt;/a&gt;, I explain how CEP has evolved to meet this requirement and how the requirements for CEP can be met in a modern big data context. In short, I present an approach based on the best practices of modern architecture, microservices, and Kafka-style stream messaging, with an up-to-date open source business rule engine.&lt;/p&gt;
&lt;p&gt;In this second part, I’ll get more concrete and work through a working example using the system I propose. Let’s get started.&lt;/p&gt;
&lt;h2&gt;Smart City Traffic Monitoring&lt;/h2&gt;
&lt;p&gt;We made a working demo for which the code will be released on the MapR GitHub. It works on either the &lt;a href=&quot;https://developer.hpe.com/blog/getting-started-with-spark-on-mapr-sandbox/&quot;&gt;MapR Sandbox&lt;/a&gt; or a real MapR cluster.&lt;/p&gt;
&lt;p&gt;For this example, we’ll use a very simple “smart city” use case for traffic monitoring. In this case, we’ll model a single sensor that can measure the speed of cars that pass over it. Using such sensor data, it would be fairly easy to detect traffic jams in real time, and thus notify the police so they can take action much more quickly than they otherwise could.&lt;/p&gt;
&lt;img src=&quot;/img/picture1.png&quot; width=&quot;600&quot;&gt;
&lt;p&gt;Some other types of use cases are easy to envision. We can add data from a public, real-time weather feed and connect to information panels along the roads and show an advisory to drivers without any human intervention. By combining road condition sensors with the weather data, we can provide advisory feedback to drivers about road conditions and warn the public very quickly. Furthermore, by adding historical accident data and using predictive analytics, we can imagine road safety measures that can be deployed in real time to areas with a higher probability of accidents.&lt;/p&gt;
&lt;p&gt;To be honest, I’m only scratching the surface of what this kind of smart road could do to make a commuter’s life both easier and safer while saving money for the city. But how would we build a prototype of such a system without it being hugely complicated and expensive?&lt;/p&gt;
&lt;h2&gt;How to Build a Rule-Engine Based CEP System&lt;/h2&gt;
&lt;p&gt;So we now have a proper, concrete target in mind. It turns out that if we decide to base our system around reading our data from a Kafka-style stream (i.e., persistent, scalable, and high performance), then we will naturally end up with a pretty cool, modern CEP microservice.&lt;/p&gt;
&lt;p&gt;The important point here is not to show how to build a super complicated enterprise architecture, but rather that, by making some simple technology choices and building our demo in a reasonable way, we &lt;strong&gt;naturally end up with&lt;/strong&gt; this elegant, modern, and &lt;strong&gt;simple architecture&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;For simplicity’s sake, I have decided to implement my prototype on a MapR Sandbox&lt;sup&gt;*&lt;/sup&gt;. This is because it will include the stream messaging system, MapR Event Store, which I can use through the Kafka 0.9 API with very little configuration and know it will work the same on a production MapR 5.1+ cluster.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;sup&gt;*&lt;/sup&gt;Use &lt;a href=&quot;https://docs.datafabric.hpe.com/62/MapRContainerDevelopers/MapRContainerDevelopersOverview.html&quot;&gt;The Development Environment for HPE Ezmeral Data Fabric&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Finally, it should be noted that an Apache Kafka cluster could be used as well with the same design and code, just with some additional work to get it up and running.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;High-Level Architecture View&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/picture2.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;As shown in the diagram above, the flow is to have sensor data get aggregated to a producer gateway, which will forward the data to the stream with a topic named “data.” The data will be in JSON format so it is easy to manipulate, human readable, and easily sent to Elasticsearch as-is for monitoring with a Kibana dashboard.&lt;/p&gt;
&lt;p&gt;The consumer side will have two tasks: to read the data from the stream and host an instance of a KieSession where the rule engine can apply rules on the facts as they are added to it.&lt;/p&gt;
&lt;p&gt;Rules are edited in the Workbench GUI, a Java webapp that can be run on a Java application server such as &lt;a target=&apos;_blank&apos; href=&apos;http://wildfly.org/&apos;&gt;WildFly&lt;/a&gt; or &lt;a target=&apos;_blank&apos; href=&apos;http://tomcat.apache.org/&apos;&gt;Tomcat&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Rules are fetched from the Workbench by the consumer application using the appropriate methods provided by the Drools framework, which are based entirely on Maven repositories.&lt;/p&gt;
&lt;p&gt;We can now look at the proposed system in more detail in the following sections.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;List of Technologies Used&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The technology we’re going to use is as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;MapR Sandbox 5.2&lt;/li&gt;
&lt;li&gt;The Java programming language (any JVM language is fine)&lt;/li&gt;
&lt;li&gt;The Jackson 2 library to convert to and from JSON&lt;/li&gt;
&lt;li&gt;MapR Event Store or Apache Kafka stream messaging system&lt;/li&gt;
&lt;li&gt;Wildfly 10 application server to host the Workbench&lt;/li&gt;
&lt;li&gt;&lt;a target=&apos;_blank&apos; href=&apos;https://www.drools.org/&apos;&gt;JBoss Drools&lt;/a&gt; as our choice of OSS business rule engine&lt;/li&gt;
&lt;li&gt;&lt;a target=&apos;_blank&apos; href=&apos;https://github.com/tdunning/log-synth&apos;&gt;Log Synth&lt;/a&gt; to generate some synthetic data for our prototype&lt;/li&gt;
&lt;li&gt;&lt;a target=&apos;_blank&apos; href=&apos;https://streamsets.com/&apos;&gt;Streamsets&lt;/a&gt; 1.6 to connect MapR Event Store and Elasticsearch&lt;/li&gt;
&lt;li&gt;Elasticsearch 2.4 and Kibana 4 for monitoring&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;em&gt;Traffic Monitoring Prototype Architecture&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/picture3.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;As I built this demo to run on the MapR Sandbox, I’m using instructions for CentOS 6.X, an open-source version of RHEL 6.X. Instructions for CentOS 7 are almost identical, and finding similar instructions for Ubuntu would be pretty straightforward and left up to the reader.&lt;/p&gt;
&lt;p&gt;To build the core of the traffic monitoring system, we’re going to need two basic parts:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A program to feed sensor data into MapR Event Store/Kafka. This part will be using fake data modeled by a vehicle simulation coded with Log Synth. We’ll use the MapR &lt;a target=&apos;_blank&apos; href=&apos;http://docs.confluent.io/2.0.0/kafka-rest/docs/index.html&apos;&gt;Kafka-rest&lt;/a&gt; proxy implementation (just introduced with MEP 2.0) to add the data with Python.&lt;/li&gt;
&lt;li&gt;A JVM-language application that will read data from the stream and pass it to a KieSession. The minimal code to get this working is surprisingly small.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To edit the rules, we deploy the Workbench on Wildfly 10, which is a fairly straightforward process. &lt;a target=&apos;_blank&apos; href=&apos;http://betzelblog.blogspot.jp/2015/02/setting-up-drools-workbench-and.html&apos;&gt;Check this blog post&lt;/a&gt; for instructions, or read the Drools documentation. Installing Wildfly is pretty simple; &lt;a target=&apos;_blank&apos; href=&apos;https://docs.wildfly.org/22/Installation_Guide.html&apos;&gt;see this article&lt;/a&gt; for great instructions on how to install it as a service on CentOS/RHEL (the same instructions work for Wildfly 9 and 10).&lt;/p&gt;
&lt;p&gt;We made a single configuration change to Wildfly. We changed the port to 28080 instead of 8080, as it is already used by the sandbox. Wildfly runs in standalone mode, so the configuration file is in WILDFLY_HOME/standalone/configuration/standalone.xml.&lt;/p&gt;
&lt;p&gt;For monitoring, we let the streaming architecture work for us. We use the &lt;a target=&apos;_blank&apos; href=&apos;https://streamsets.com/products/dataops-platform/data-collector-engine/&apos;&gt;StreamSets Data Collector&lt;/a&gt; to easily redirect the sensor data to Elasticsearch so that we can monitor the system with a nice Kibana dashboard. Setting up StreamSets with MapR Event Store requires some extra work with version 1.6; follow the &lt;a target=&apos;_blank&apos; href=&apos;https://streamsets.com/documentation/datacollector/latest/help/#Install_Config/MapR-Prerequisites.html&apos;&gt;official StreamSets documentation&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Finally, installation and setup of Elasticsearch and Kibana is &lt;a target=&apos;_blank&apos; href=&apos;https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elk-stack-on-centos-7&apos;&gt;well documented on Centos/RHEL&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;For production, all those parts can be readily separated to run on separate servers. They can be run on either cluster nodes or edge nodes. If it’s a MapR cluster, installing the MapR Client and pointing it to the cluster CLDB nodes will be all the configuration needed for full access to the streams. For an Apache Kafka cluster, refer to the &lt;a target=&apos;_blank&apos; href=&apos;http://kafka.apache.org/documentation&apos;&gt;official Kafka documentation&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Traffic Monitoring Prototype - How To&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Creating the Streams with maprcli&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The first task is to create streams and topics for our application. To do this, it’s a best practice to create a volume for streams first. As the &lt;code&gt;mapr&lt;/code&gt; user, run the following from the command line:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;maprcli volume create -name streams -path /streams
maprcli stream create -path /streams/traffic -produceperm p -consumerperm p
maprcli stream topic create -path /streams/traffic -topic data
maprcli stream topic create -path /streams/traffic -topic agenda
maprcli stream topic create -path /streams/traffic -topic rule-runtime
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note: MapR Event Store is more of a superset of Kafka than simply a clone. In addition to being faster, MapR Event Store can use all the advantages of the MapR Distributed File and Object Store, such as volumes (with permissions and quotas and so on) and replication. A cluster is not limited to just defining topics, but can define several streams that each can have several topics. Therefore, instead of a topic name, a MapR Stream has a &lt;code&gt;path:topic&lt;/code&gt; notation. Here, our data stream’s full name is &lt;code&gt;“/streams/traffic:data”&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Generating Fake Data&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I used the &lt;a target=&apos;_blank&apos; href=&apos;https://github.com/tdunning/log-synth&apos;&gt;Log-Synth&lt;/a&gt; tool to generate data for this prototype. Log-Synth uses a schema combined with a Sampler class to generate data in a very flexible and simple manner.&lt;/p&gt;
&lt;p&gt;My Schema:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;[  
 {&quot;name&quot;:&quot;traffic&quot;, &quot;class&quot;:&quot;cars&quot;, &quot;speed&quot;: &quot;70 kph&quot;, &quot;variance&quot;: &quot;10 kph&quot;, &quot;arrival&quot;: &quot;25/min&quot;, &quot;sensors&quot;: {&quot;locations&quot;:[1, 2, 3, 4, 5, 6, 7,8,9,10], &quot;unit&quot;:&quot;km&quot;},  
   &quot;slowdown&quot;:[{&quot;speed&quot;:&quot;11 kph&quot;, &quot;location&quot;:&quot;2.9 km - 5.1 km&quot;, &quot;time&quot;: &quot;5min - 60min&quot;}]}  
]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The command to generate the data is:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;synth -count 10K -schema my-schema.json &gt;&gt; output.json
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Data is generated one car at a time, and each data point is a reading at a sensor. The data will model a flow of cars driving at 70 km/h, arriving at a rate of 25 cars per minute. A slowdown will happen between km 2.9 and km 5.1, where speed will be reduced to 11 km/h from 5 minutes to 60 minutes after the start of the simulation. This is the traffic jam we wish to detect using our CEP system.&lt;/p&gt;
&lt;p&gt;The generated data is a file where each line is the resulting list of sensor measurements for a single car:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;[{&quot;id&quot;:&quot;s01-b648b87c-848d131&quot;,&quot;time&quot;:52.565782936267404,&quot;speed&quot;:19.62484385513174},{&quot;id&quot;:&quot;s02-4ec04b36-2dc4a6c0&quot;,&quot;time&quot;:103.5216023752337,&quot;speed&quot;:19.62484385513174},{&quot;id&quot;:&quot;s03-e06eb821-cda86389&quot;,&quot;time&quot;:154.4774218142,&quot;speed&quot;:19.62484385513174},{&quot;id&quot;:&quot;s04-c44b23f0-3f3e0b9e&quot;,&quot;time&quot;:205.43324125316627,&quot;speed&quot;:19.62484385513174},{&quot;id&quot;:&quot;s05-f57b9004-9f884721&quot;,&quot;time&quot;:256.38906069213255,&quot;speed&quot;:19.62484385513174},{&quot;id&quot;:&quot;s06-567ebda7-f3d1013b&quot;,&quot;time&quot;:307.3448801310988,&quot;speed&quot;:19.62484385513174},{&quot;id&quot;:&quot;s07-3dd6ca94-81ca8132&quot;,&quot;time&quot;:358.3006995700651,&quot;speed&quot;:19.62484385513174},{&quot;id&quot;:&quot;s08-2d1ca66f-65696817&quot;,&quot;time&quot;:409.25651900903136,&quot;speed&quot;:19.62484385513174},{&quot;id&quot;:&quot;s09-d3eded13-cf6294d6&quot;,&quot;time&quot;:460.21233844799764,&quot;speed&quot;:19.62484385513174},{&quot;id&quot;:&quot;s0a-1cbe97e8-3fc279c0&quot;,&quot;time&quot;:511.1681578869639,&quot;speed&quot;:19.62484385513174}]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;A reading has a sensor ID, a speed in meters per second, and a time delta from time 0 (the moment the simulation starts) in seconds.&lt;/p&gt;
&lt;p&gt;My producer code simply translates the readings into a list of sensor readings ordered by time, transforming the speed into km/h and the time into a timestamp in milliseconds since the epoch.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/picture4.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Sending the data onto the stream can be done one line at a time using standard producer code. The &lt;a href=&quot;https://docs.datafabric.hpe.com/61/MapR_Streams/code_for_the_sample_java_producer.html&quot;&gt;code in the sample Java producer&lt;/a&gt; works just fine.&lt;/p&gt;
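&lt;p&gt;As a rough illustration (not the actual demo code), here is a minimal producer sketch using the Kafka 0.9 Java API and Jackson; the payload field names and values are only placeholders:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import com.fasterxml.jackson.databind.ObjectMapper;

public class TrafficProducer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // For Apache Kafka (rather than MapR Event Store), also set bootstrap.servers here.
        props.put(&quot;key.serializer&quot;, &quot;org.apache.kafka.common.serialization.StringSerializer&quot;);
        props.put(&quot;value.serializer&quot;, &quot;org.apache.kafka.common.serialization.StringSerializer&quot;);
        KafkaProducer&amp;#x3C;String, String&gt; producer = new KafkaProducer&amp;#x3C;&gt;(props);

        long simulationStart = System.currentTimeMillis();
        ObjectMapper mapper = new ObjectMapper();

        // One raw Log-Synth reading: speed in m/s, time in seconds since the simulation start.
        String sensorId = &quot;s01-b648b87c-848d131&quot;;
        double speedMps = 19.62;
        double timeOffsetSec = 52.56;

        // Convert to the units the rules use: km/h and epoch milliseconds.
        Map&amp;#x3C;String, Object&gt; measurement = new HashMap&amp;#x3C;&gt;();
        measurement.put(&quot;sensorId&quot;, sensorId);
        measurement.put(&quot;timestamp&quot;, simulationStart + (long) (timeOffsetSec * 1000));
        measurement.put(&quot;speed&quot;, speedMps * 3.6);

        // MapR Event Store topics use the path:topic notation.
        producer.send(new ProducerRecord&amp;#x3C;&gt;(&quot;/streams/traffic:data&quot;,
                mapper.writeValueAsString(measurement)));
        producer.close();
    }
}
&lt;/code&gt;&lt;/pre&gt;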
&lt;p&gt;Another exciting new possibility is to use the brand new &lt;a target=&apos;_blank&apos; href=&apos;http://docs.confluent.io/2.0.0/kafka-rest/docs/index.html&apos;&gt;Kafka Rest Proxy&lt;/a&gt;, which is also available on MapR from MEP 2.0 (MapR Ecosystem Pack). This means sensors can connect directly to Kafka from any language since HTTP-based REST API is a global standard.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Using the Workbench&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;We can log in to the Workbench with an admin user (a user with the role “admin”) created with the &lt;code&gt;add-user.sh&lt;/code&gt; script from Wildfly.&lt;/p&gt;
&lt;p&gt;Using the Workbench is beyond the scope of the article, but the general idea is to create an Organizational Unit and a Repository, and then create a project.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/picture5.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Data Objects&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;We will need to create facts for the rule engine to work with. The best practice for Drools is to use a separate Maven project for your data model. For the sake of simplicity, I created them right from the workbench.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/picture6.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The Measurement is a super generic bean that models the sensor speed measurements. For the prototype, I kept it as simple as the raw data with a timestamp, a sensor ID, and a speed.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/picture7.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The Sensor bean models the sensor itself and will have an ID and the average speed of all cars that it measures over a time window defined by our rules. This average speed will be used to trigger an alert for heavy traffic. The traffic String is to mark the current traffic level which can be “NONE”, “LIGHT” or “HEAVY.”&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/picture8.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
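&lt;p&gt;For reference, a rough sketch of what these two facts could look like as plain Java beans is shown below; the actual classes in the demo are created with the Workbench data modeler, so the field names here are only illustrative:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;// Illustrative sketch of the two facts (getters and setters omitted for brevity).
class Measurement {
    private String sensorId;   // which sensor produced the reading
    private long timestamp;    // epoch milliseconds
    private double speed;      // km/h
}

class Sensor {
    private String id;
    private double averageSpeed; // average speed over the time window defined by the rules, km/h
    private String traffic;      // &quot;NONE&quot;, &quot;LIGHT&quot; or &quot;HEAVY&quot;
}
&lt;/code&gt;&lt;/pre&gt;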
&lt;p&gt;&lt;strong&gt;The Traffic Monitoring Rules&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;create sensors for new id&lt;/li&gt;
&lt;li&gt;detect heavy traffic&lt;/li&gt;
&lt;li&gt;detect light traffic&lt;/li&gt;
&lt;li&gt;detect normal traffic&lt;/li&gt;
&lt;li&gt;get average speed at the sensor&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The create sensors rule ensures that there are sensor objects available in memory. This is the fact we use to know the average speed at a certain sensor.&lt;/p&gt;
&lt;p&gt;The detect heavy traffic rule is the key one: it sends an alarm to the police if heavy traffic is detected at a certain sensor.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/picture9.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;So when the average speed reaches 20 km/h or less, and the sensor isn’t already in the HEAVY traffic level, set the level to HEAVY and send an alert.&lt;/p&gt;
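&lt;p&gt;In plain DRL text form, that rule could look roughly like the sketch below; the fact and field names and the “alerts” channel name are assumptions for illustration, since the actual rules live in the Workbench:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-drools&quot;&gt;rule &quot;Detect heavy traffic&quot;
when
    // assumed fields: averageSpeed in km/h and a traffic level string on the Sensor fact
    $sensor : Sensor( averageSpeed &amp;#x3C;= 20, traffic != &quot;HEAVY&quot; )
then
    modify( $sensor ) { setTraffic( &quot;HEAVY&quot; ) }
    // &quot;alerts&quot; is an assumed channel name registered on the KieSession by the consumer
    channels[&quot;alerts&quot;].send( &quot;Heavy traffic at sensor &quot; + $sensor.getId() );
end
&lt;/code&gt;&lt;/pre&gt;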
&lt;p&gt;This means we need to know the average speed. Here is the rule to compute it using the Drools rule DSL (domain-specific language):&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/picture10.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;This is not rocket science! These rules illustrate rather clearly how making up simple but useful rules can be realistically left up to business analysts and developed separately from the whole stream and big data platform.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The Consumer Side&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The consumer reads the data from the stream. The tutorial code in Java from the documentation is perfectly adequate. Jackson is used to convert the JSON to Measurement objects.&lt;/p&gt;
&lt;p&gt;The consumer has an instance of the KieSession, and each measurement is added to the session using &lt;code&gt;kieSession.insert(fact)&lt;/code&gt; and followed by a call to &lt;code&gt;kieSession.fireAllRules()&lt;/code&gt;, which triggers the algorithm to check if any rules match with the new state of the session given the new data.&lt;/p&gt;
&lt;p&gt;A channel, which is just a callback, is used to allow a rule to call a function “outside” of the &lt;code&gt;KieSession&lt;/code&gt;. My Prototype uses this method to log the alert. In a production system, we could easily change the code to send an email, an SMS, or take some other action.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/picture11.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
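&lt;p&gt;Putting those pieces together, a minimal consumer loop could look something like the sketch below, assuming the Kafka 0.9 consumer API, Jackson, and the illustrative &lt;code&gt;Measurement&lt;/code&gt; fact (with its getters and setters) and “alerts” channel name from above:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.kie.api.runtime.KieSession;

public class TrafficConsumer {
    public static void run(KieSession kieSession) throws Exception {
        Properties props = new Properties();
        // For Apache Kafka (rather than MapR Event Store), also set bootstrap.servers here.
        props.put(&quot;group.id&quot;, &quot;traffic-cep&quot;);
        props.put(&quot;key.deserializer&quot;, &quot;org.apache.kafka.common.serialization.StringDeserializer&quot;);
        props.put(&quot;value.deserializer&quot;, &quot;org.apache.kafka.common.serialization.StringDeserializer&quot;);
        KafkaConsumer&amp;#x3C;String, String&gt; consumer = new KafkaConsumer&amp;#x3C;&gt;(props);
        consumer.subscribe(Collections.singletonList(&quot;/streams/traffic:data&quot;));

        // The channel is just a callback that the rules can invoke from their consequence.
        kieSession.registerChannel(&quot;alerts&quot;, alert -&gt; System.out.println(&quot;ALERT: &quot; + alert));

        ObjectMapper mapper = new ObjectMapper();
        while (true) {
            ConsumerRecords&amp;#x3C;String, String&gt; records = consumer.poll(200);
            for (ConsumerRecord&amp;#x3C;String, String&gt; record : records) {
                Measurement m = mapper.readValue(record.value(), Measurement.class);
                kieSession.insert(m);   // add the new fact to working memory
            }
            kieSession.fireAllRules();  // evaluate the rules against the updated facts
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;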
&lt;p&gt;&lt;strong&gt;Importing the Rules into the Consumer Application&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The way we get the rules into the running application is by fetching them from the Maven repository integrated into the Workbench.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;KieServices kieServices = KieServices.Factory.get();  
ReleaseId releaseId = kieServices.newReleaseId( &quot;org.mapr.demo&quot;, &quot;smart-traffic-kjar&quot;, &quot;1.0.0&quot; );  
KieContainer kContainer = kieServices.newKieContainer( releaseId );


KieSession kieSession = kContainer.newKieSession();
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;So the question becomes, how does the call to &lt;code&gt;newReleaseId&lt;/code&gt; know to fetch the artifact with our rules from the Maven repository in the Workbench?&lt;/p&gt;
&lt;p&gt;The answer is in the &lt;code&gt;~/.m2/settings.xml&lt;/code&gt; file, where you add this information. We recommend using the &lt;code&gt;mapr&lt;/code&gt; user for everything in the sandbox, so the full path is: &lt;code&gt;/home/mapr/.m2/settings.xml&lt;/code&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-xml&quot;&gt;[mapr@maprdemo .m2]$ cat settings.xml  
&amp;#x3C;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;  
&amp;#x3C;settings&gt;  
   &amp;#x3C;servers&gt;  
       &amp;#x3C;server&gt;  
           &amp;#x3C;id&gt;guvnor-m2-repo&amp;#x3C;/id&gt;  
           &amp;#x3C;username&gt;admin&amp;#x3C;/username&gt;  
           &amp;#x3C;password&gt;admin&amp;#x3C;/password&gt;  
       &amp;#x3C;/server&gt;  
   &amp;#x3C;/servers&gt;  
   &amp;#x3C;profiles&gt;  
       &amp;#x3C;profile&gt;  
           &amp;#x3C;id&gt;cep&amp;#x3C;/id&gt;  
           &amp;#x3C;repositories&gt;  
               &amp;#x3C;repository&gt;  
                   &amp;#x3C;id&gt;guvnor-m2-repo&amp;#x3C;/id&gt;  
                   &amp;#x3C;name&gt;Guvnor M2 Repo&amp;#x3C;/name&gt;  
                    &amp;#x3C;url&gt;http://127.0.0.1:28080/drools-wb-webapp/maven2/&amp;#x3C;/url&gt;  
                   &amp;#x3C;releases&gt;  
                       &amp;#x3C;enabled&gt;true&amp;#x3C;/enabled&gt;  
            &amp;#x3C;updatePolicy&gt;interval:1&amp;#x3C;/updatePolicy&gt;  
                   &amp;#x3C;/releases&gt;  
                   &amp;#x3C;snapshots&gt;  
                       &amp;#x3C;enabled&gt;true&amp;#x3C;/enabled&gt;  
            &amp;#x3C;updatePolicy&gt;interval:1&amp;#x3C;/updatePolicy&gt;  
                   &amp;#x3C;/snapshots&gt;  
               &amp;#x3C;/repository&gt;  
           &amp;#x3C;/repositories&gt;  
       &amp;#x3C;/profile&gt;  
   &amp;#x3C;/profiles&gt;  
   &amp;#x3C;activeProfiles&gt;  
       &amp;#x3C;activeProfile&gt;cep&amp;#x3C;/activeProfile&gt;  
   &amp;#x3C;/activeProfiles&gt;  
&amp;#x3C;/settings&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The key information is the &lt;code&gt;url&lt;/code&gt; element, which corresponds to the URL of the maven2 repository of the Drools Workbench. This information can be copied and pasted from the pom.xml, which can be seen by using the repository view:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/picture12.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/picture13.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;So I just copy-pasted that and now everything works like magic.&lt;/p&gt;
&lt;h2&gt;Monitoring the Smart Traffic Prototype&lt;/h2&gt;
&lt;p&gt;We have one stream with data, and two other streams to monitor the rule engine internals. This is very easy with Drools because it uses Java Listeners to report on its internal state. We simply provide a custom implementation of the listeners to produce the data to a stream, and then use Streamsets to redirect everybody to Elasticsearch.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Elasticsearch Mappings&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The mappings are defined in a small script I created:&lt;/p&gt;
&lt;p&gt;&lt;a target=&apos;_blank&apos; href=&apos;http://pastebin.com/kRCbvAkU&apos;&gt;http://pastebin.com/kRCbvAkU&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Streamsets for No-Code Stream To Elasticsearch Pipeline&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/picture14.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Each stream has its own pipeline, where each looks like this:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/picture15.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The Jython Evaluator adds timestamp information if necessary.&lt;/p&gt;
&lt;h2&gt;Running the Prototype&lt;/h2&gt;
&lt;p&gt;Start the consumer:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/picture16.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Then start the producer:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/picture17.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In my prototype, I added code to control the rate at which the data is sent to the stream, to make the rule firings easier to see. 10,000 events is a small workload for Drools and for MapR Event Store/Kafka, so the whole demo would otherwise be over in less than a second. This is the meaning of the “-r 25” option: a rate of 25 events per second.&lt;/p&gt;
&lt;p&gt;The dashboard looks like this:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/picture18.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Once data starts to stream in:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/picture19.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The traffic jam is quite visible now:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/picture20.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;As soon as a sensor average speed drops below 20 km/h, an alarm is fired:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/picture21.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;And the dashboard will display a count of “1”&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/picture22.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The simulation will continue and the two other sensors will go below 20 in turn for a total of 3 alarms fired.&lt;/p&gt;
&lt;h2&gt;In Conclusion&lt;/h2&gt;
&lt;p&gt;This project turned into a great illustration of the power and ease of streaming architectures using microservices. A common misconception of microservices is that they are limited to synchronous REST services. This is not the case at all. Our system is a backend system and so the microservices (the producer, the consumer, streamsets, and ES/Kibana) are all run asynchronously, dealing with the data as it comes off the stream.&lt;/p&gt;
&lt;p&gt;The project was fairly easy to build technically because every part is totally independent of the others, and can therefore be tested in isolation. Once the producer can send data to the stream properly, it doesn’t need to be tested ever again for issues that concern other parts of the system. Also, each part was added one at a time, making it easy to identify issues and fix them.&lt;/p&gt;
&lt;p&gt;In total, with very little original code, we could implement a working, useful system that would need very little change to be put into production. Only the producer needs to be customized for the particular sensor or data source.&lt;/p&gt;
&lt;p&gt;Rules are the most time-consuming and error-prone part. This is no different than if the project had been done using custom-coded Spark code. The win for an approach based on a rule engine such as Drools and the Drools Workbench is that the rules can be edited, tested, and improved independently of how the code runs on the cluster. The work in Workbench has no dependency on the system at all, as it is pulled in by the consumer application automatically.&lt;/p&gt;
&lt;p&gt;From a business value point of view, all the value is in the rules, assuming a stable production system. There is no reason for organizations not to take advantage of this quick edition capability to become ever more agile and responsive to evolving conditions for the benefit of their customers…and the bottom line.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Get started with Prometheus and Grafana on Docker with HPE Storage Array Exporter]]></title><description><![CDATA[With the recently released HPE Storage Array Exporter for Prometheus and HPE CSI Info Metrics Provider for Prometheus, it's a good time to…]]></description><link>https://developer.hpe.com/get-started-with-prometheus-and-grafana-on-docker-with-hpe-storage-array-exporter/</link><guid isPermaLink="false">https://developer.hpe.com/get-started-with-prometheus-and-grafana-on-docker-with-hpe-storage-array-exporter/</guid><pubDate>Wed, 26 Jan 2022 16:00:00 GMT</pubDate><content:encoded>&lt;p&gt;With the recently released &lt;a href=&quot;https://community.hpe.com/t5/Around-the-Storage-Block/HPE-CSI-Driver-for-Kubernetes-enhancements-with-monitoring-and/ba-p/7158137&quot;&gt;HPE Storage Array Exporter for Prometheus and HPE CSI Info Metrics Provider for Prometheus&lt;/a&gt;, it&apos;s a good time to familiarize ourselves with the cloud native technologies involved and get some first-hand experience.&lt;/p&gt;
&lt;p&gt;Prometheus is a time-series database that also provides monitoring and alerting. It&apos;s a &lt;a href=&quot;https://www.cncf.io/projects/&quot;&gt;CNCF graduated project&lt;/a&gt;. Grafana is a web-based visualization tool that uses time-series data to create beautiful graphs and elements to present data in views referred to as dashboards. Prometheus scrapes an HTTP endpoint of a target periodically to consume an assortment of metric types and metadata. The target is usually referred to as an exporter and the data being scraped is the current state of one or many exporter data points.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/prometheus-1.0.0-reva.png&quot; alt=&quot;Prometheus Overview&quot;&gt;&lt;/p&gt;
&lt;p&gt;In this tutorial we&apos;ll learn how to deploy Prometheus, Grafana and the HPE Storage Array Exporter for Prometheus using nothing but Docker. A supported storage backend, such as an HPE Alletra, Nimble Storage, Primera or 3PAR, is needed to gather metrics for visualization. These prerequisites are assumed along with basic Docker and container knowledge. While this is a highly opinionated tutorial, the different components may be run standalone on Windows or Linux and may also be deployed directly on Kubernetes with Helm.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This tutorial is purely for educational purposes and not intended for production deployments.&lt;/p&gt;
&lt;h1&gt;Get started&lt;/h1&gt;
&lt;p&gt;This tutorial assumes Docker is installed and functional. Credentials are also needed for a backend system. HPE Nimble Storage is being used in the examples and metrics may be named differently when using something else. Still, the principles remain the same.&lt;/p&gt;
&lt;p&gt;To allow the containers to communicate with each other by name, a separate bridge network is created.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;docker network create prometheus
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note that none of the containers created below use any persistent storage. Any dashboards that have been created or metrics that have been gathered will be lost after a container restart.&lt;/p&gt;
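&lt;p&gt;If you would like the test data and dashboards to survive container restarts, Docker named volumes can be added to the &lt;code&gt;docker run&lt;/code&gt; commands used later in this tutorial. The sketch below assumes the default data paths of the official images (&lt;code&gt;/prometheus&lt;/code&gt; for Prometheus and &lt;code&gt;/var/lib/grafana&lt;/code&gt; for Grafana); double-check the image documentation for the versions you run.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;# Create named volumes once
docker volume create prometheus-data
docker volume create grafana-data

# Then append these mounts to the respective docker run commands below:
#   Prometheus: -v prometheus-data:/prometheus
#   Grafana:    -v grafana-data:/var/lib/grafana
&lt;/code&gt;&lt;/pre&gt;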
&lt;h1&gt;Run HPE Storage Array Exporter for Prometheus&lt;/h1&gt;
&lt;p&gt;Let&apos;s begin by starting the HPE Storage Array Exporter for Prometheus container. It&apos;s a passive background process that responds to HTTP requests made by Prometheus. It also requires a backend configuration file. In the case of having multiple backends, one container and configuration file is needed for each one.&lt;/p&gt;
&lt;p&gt;Create a file named &lt;code&gt;my-array-1.yaml&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;address: my-array-1
username: admin
password: admin
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;address&lt;/code&gt; value can be an IP address or hostname resolvable in DNS. Then, run &lt;code&gt;docker&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;docker run -d --rm --name my-array-1 \
     -v $(pwd)/my-array-1.yaml:/etc/config/storage-system.yaml \
     --network prometheus \
     quay.io/hpestorage/array-exporter:v1.0.0 \
     --log.path /var/log/hpe-array-exporter.log \
     --accept-eula \
     /etc/config/storage-system.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;By running the above command with the &lt;code&gt;--accept-eula&lt;/code&gt; parameter, the end-user accepts the HPE End User License Agreement at &lt;a href=&quot;https://www.hpe.com/us/en/software/licensing.html&quot;&gt;https://www.hpe.com/us/en/software/licensing.html&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The exporter is now up and running listening on port 8080. To ensure we can reach the metrics endpoint, run the following &lt;code&gt;docker&lt;/code&gt; command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;docker run --rm --network prometheus \
    curlimages/curl -s http://my-array-1:8080/metrics
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It should produce output that enumerates each metric key along with its value, type, and a brief help text. An example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;# HELP hpenimble_pool_snapshot_used_bytes Total logical size of all snapshots in a storage pool
# TYPE hpenimble_pool_snapshot_used_bytes gauge
hpenimble_pool_snapshot_used_bytes{pool=&quot;default&quot;} 2.659803136e+09
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, let&apos;s start Prometheus and scrape some metrics.&lt;/p&gt;
&lt;h1&gt;Run and Inspect Prometheus&lt;/h1&gt;
&lt;p&gt;When Prometheus is deployed on Kubernetes, discovery of scrape targets may be configured to be performed dynamically. In this example we need to instruct Prometheus where to find our target. Create a configuration file named &lt;code&gt;prometheus.yml&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: &quot;my-array-1&quot;
    static_configs:
      - targets: [&quot;my-array-1:8080&quot;]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Create the Prometheus container:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;docker run -d --rm --name prometheus-server \
    --network prometheus \
    -p 80:9090 \
    -v $(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml \
    prom/prometheus:v2.30.3
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In a few moments, the Prometheus server should be up and listening on the Docker host and accessible from &lt;a href=&quot;http://your.docker.host.example.com/&quot;&gt;http://your.docker.host.example.com/&lt;/a&gt; or &lt;a href=&quot;http://localhost&quot;&gt;http://localhost&lt;/a&gt;.&lt;/p&gt;
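&lt;p&gt;If you prefer to check from the command line before opening the browser, Prometheus exposes simple health and readiness endpoints. A quick sanity check, assuming the port mapping above:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;# Returns a short readiness message with HTTP 200 once the server has started
curl -s http://localhost/-/ready
&lt;/code&gt;&lt;/pre&gt;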
&lt;p&gt;To inspect if metrics are being collected properly, access the web interface and start typing &lt;em&gt;hpe&lt;/em&gt; in the query bar.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/prometheus-auto.png&quot; alt=&quot;Prometheus Query&quot;&gt;&lt;/p&gt;
&lt;p&gt;Select a &lt;code&gt;volume_reads_per_second&lt;/code&gt; metric, hit &lt;em&gt;Execute&lt;/em&gt; and click &lt;em&gt;Graph&lt;/em&gt;. This should produce a filled line graph with all the volumes on the array and their read IOPS. Wait a few minutes if there&apos;s no data being presented.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/prometheus-graph.png&quot; alt=&quot;Prometheus Graph&quot;&gt;&lt;/p&gt;
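&lt;p&gt;The same data can also be retrieved programmatically through the Prometheus HTTP API, which is handy for scripting. A minimal sketch, reusing the &lt;code&gt;curlimages/curl&lt;/code&gt; container from earlier and assuming the backend exposes the &lt;code&gt;hpenimble_volume_reads_per_second_avg5m&lt;/code&gt; metric used later in this tutorial:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;# Query the current value of a metric through the Prometheus API (JSON response)
docker run --rm --network prometheus curlimages/curl -s \
    &apos;http://prometheus-server:9090/api/v1/query?query=hpenimble_volume_reads_per_second_avg5m&apos;
&lt;/code&gt;&lt;/pre&gt;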
&lt;p&gt;Next, we&apos;ll bolt on Grafana and connect to the Prometheus data source.&lt;/p&gt;
&lt;h1&gt;Run Grafana and Build Your First Dashboard&lt;/h1&gt;
&lt;p&gt;In this tutorial, we&apos;ll just use the built-in dashboards and plugins that come with Grafana. Accessing Grafana is mainly done through the intuitive web UI found on port 3000. Let&apos;s start it up:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;docker run --rm -d --name grafana \
    --network prometheus \
    -p 3000:3000 \
    grafana/grafana:8.2.2
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, navigate to the web UI (&lt;a href=&quot;http://your.docker.host.example.com:3000/&quot;&gt;http://your.docker.host.example.com:3000/&lt;/a&gt; or &lt;a href=&quot;http://localhost:3000&quot;&gt;http://localhost:3000&lt;/a&gt;) and log in with user &quot;admin&quot; and password &quot;admin&quot;. You&apos;ll be prompted to create a new password.&lt;/p&gt;
&lt;p&gt;Now, a data source needs to be added. A panel to do so is available on the main screen.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/grafana-welcome.png&quot; alt=&quot;Add new data source&quot;&gt;&lt;/p&gt;
&lt;p&gt;Next, select &quot;Prometheus&quot;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/grafana-select.png&quot; alt=&quot;Select Prometheus&quot;&gt;&lt;/p&gt;
&lt;p&gt;In the HTTP section of the configuration, enter &lt;code&gt;http://prometheus-server:9090&lt;/code&gt;. Click &lt;em&gt;Save &amp;#x26; test&lt;/em&gt; at the bottom of the page. A green checkmark should be displayed and indicate that the data source is working.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/grafana-server.png&quot; alt=&quot;Configure Prometheus data source&quot;&gt;&lt;/p&gt;
&lt;p&gt;At this point, Grafana is able to retrieve data series from Prometheus and dashboards may now be created.&lt;/p&gt;
&lt;p&gt;Hit the big &lt;strong&gt;+&lt;/strong&gt; sign on the right, click &lt;em&gt;Dashboards&lt;/em&gt; and &lt;em&gt;Add an empty panel&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Creating a graph involves adding a query from the data source. The field uses auto-complete, so start typing &lt;em&gt;hpe&lt;/em&gt; and the field will populate. In the example below, a simple &lt;em&gt;sum&lt;/em&gt; operation is performed on the &lt;code&gt;hpenimble_volume_reads_per_second_avg5m&lt;/code&gt; metric which produces a combined read IOPS performance graph. The axis itself has been customized to show &lt;em&gt;io/s&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/grafana-panel.png&quot; alt=&quot;A simple Grafana dashboard&quot;&gt;&lt;/p&gt;
&lt;p&gt;This concludes this introductory tutorial. From here, the possibilities are endless. Use cases include full-stack monitoring of applications and the underlying infrastructure, setting up alarm thresholds with different notification outputs such as email or Slack, and projecting growth trends for both capacity and performance across your storage estate. Please find additional resources below on how to learn more about Prometheus, Grafana and the HPE Storage Array Exporter for Prometheus.&lt;/p&gt;
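&lt;p&gt;As a teaser for the alerting use case, a Prometheus alerting rule is just a small piece of YAML evaluated against the same metrics. The sketch below is illustrative only: the threshold is arbitrary, the rule file still needs to be referenced from &lt;code&gt;prometheus.yml&lt;/code&gt; under &lt;code&gt;rule_files&lt;/code&gt;, and an Alertmanager is required to actually send email or Slack notifications.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;groups:
  - name: hpe-storage-capacity
    rules:
      - alert: PoolSnapshotUsageHigh
        # Arbitrary example threshold: 5 GB of snapshot usage in a pool
        expr: hpenimble_pool_snapshot_used_bytes &gt; 5e9
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: &quot;Snapshot usage in pool {{ $labels.pool }} is above 5 GB&quot;
&lt;/code&gt;&lt;/pre&gt;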
&lt;h1&gt;Where to go next?&lt;/h1&gt;
&lt;p&gt;I&apos;ve only touched the tip of the iceberg in this tutorial to inspire deeper learning. Both Prometheus and Grafana are rich ecosystems that require their own exploration to better understand the fundamentals.&lt;/p&gt;
&lt;p&gt;It&apos;s also important to highlight again that this does not reflect a production deployment. It&apos;s more common to deploy and manage Prometheus on Kubernetes, and the same patterns also apply to Grafana. The HPE Storage Array Exporter for Prometheus may also be deployed on Kubernetes along with the HPE CSI (Container Storage Interface) Info Metrics Provider for Prometheus to give a holistic view of stateful workloads utilizing persistent storage from any of the supported HPE storage backends.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Introduction to &lt;a href=&quot;https://prometheus.io/docs/introduction/overview/&quot;&gt;Prometheus&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Get started with &lt;a href=&quot;https://grafana.com/docs/grafana/latest/getting-started/getting-started/&quot;&gt;Grafana&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Learn more about &lt;a href=&quot;https://hpe-storage.github.io/array-exporter/&quot;&gt;HPE Storage Array Exporter for Prometheus&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Deploy &lt;a href=&quot;https://scod.hpedev.io/csi_driver/metrics.html&quot;&gt;HPE CSI Info Metrics Provider for Prometheus&lt;/a&gt; with the HPE CSI Driver for Kubernetes&lt;/li&gt;
&lt;li&gt;Explore the preconfigured HPE Storage dashboards on &lt;a href=&quot;https://grafana.com/orgs/hpestorage/dashboards&quot;&gt;grafana.com&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Watch this space for future updates on monitoring and alerting for HPE Storage using the aforementioned technologies. Don&apos;t forget that you can reach the team on the HPE DEV Slack. Sign up &lt;a href=&quot;https://slack.hpedev.io&quot;&gt;here&lt;/a&gt; and log in at &lt;a href=&quot;https://hpedev.slack.com&quot;&gt;https://hpedev.slack.com&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Data is everywhere]]></title><link>https://developer.hpe.com/2022-January-01/</link><guid isPermaLink="false">https://developer.hpe.com/2022-January-01/</guid><pubDate>Fri, 07 Jan 2022 13:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[New for 2022: HPE DEV Meetups]]></title><description><![CDATA[The HPE Developer team is happy to announce another opportunity to come and learn with us in 2022 called Meetups. These sessions will…]]></description><link>https://developer.hpe.com/new-for-2022-hpe-dev-meetups/</link><guid isPermaLink="false">https://developer.hpe.com/new-for-2022-hpe-dev-meetups/</guid><pubDate>Mon, 03 Jan 2022 17:24:41 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;/img/gettyimages-1213470242_v2_1600_0_72_rgb.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The HPE Developer team is happy to announce another opportunity to come and learn with us in 2022 called Meetups. These sessions will complement the continuing HPE DEV &lt;a href=&quot;https://developer.hpe.com/campaign/munch-and-learn&quot;&gt;Munch &amp;#x26; Learn Technology Talks&lt;/a&gt; that started in 2021.&lt;/p&gt;
&lt;p&gt;The Munch &amp;#x26; Learn Technology Talks were designed to provide industry thought-leadership insights on today&apos;s newest technologies. They proved quite successful with a grand total of 1,415 participants attending the 11 sessions that were held this past year. Participants of these sessions indicated that, while they enjoyed and wished to continue attending Munch &amp;#x26; Learn sessions, they also needed an opportunity to dive in deep and more fully explore specific projects or products. The new Meetups are designed to fill that gap, covering topics that sit at the intersection of development and open source. The cadence for these meetups will be monthly, occurring the last week of every month.&lt;/p&gt;
&lt;p&gt;The first meetup was held on January 26th, 5PM CET (8AM PST). Entitled &lt;strong&gt;Quarkus - Supersonic Subatomic Java&lt;/strong&gt;, it was delivered by &lt;strong&gt;Dimitris Andreadis&lt;/strong&gt;, Engineering Director at Red Hat. The next session was held on February 23rd and covered &lt;strong&gt;Streamlit - the fastest way to build and share data science apps.&lt;/strong&gt; The session was delivered by &lt;strong&gt;Arnaud Miribel&lt;/strong&gt;, a machine learning engineer at Streamlit.io. You can catch replays of these sessions and see what&apos;s coming next in our &lt;a href=&quot;https://developer.hpe.com/campaign/meetups&quot;&gt;Meetups calendar&lt;/a&gt;. As you can see, we are already off to a good start and will continue to look for the best possible speakers inside and outside of HPE to cover the most relevant topics for developers and open source enthusiasts.&lt;/p&gt;
&lt;p&gt;Please check our &lt;a href=&quot;https://developer.hpe.com/campaign/meetups/&quot;&gt;Meetup calendar page&lt;/a&gt; for the complete list of registration links. We will continue to post links to replays once they are available. We hope you will join us and continue to expand your skills and expertise.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Massive parallel management with iLOrest on Linux]]></title><description><![CDATA[Thanks to HPE having provided the iLOrest Python API, an entirely new avenue has been opened to developers for managing their servers. It…]]></description><link>https://developer.hpe.com/massive-parallel-management-with-ilorest-on-linux/</link><guid isPermaLink="false">https://developer.hpe.com/massive-parallel-management-with-ilorest-on-linux/</guid><pubDate>Mon, 03 Jan 2022 14:57:18 GMT</pubDate><content:encoded>&lt;p&gt;Thanks to HPE having provided the &lt;a href=&quot;http://hpe.com/info/restfulapi&quot;&gt;iLOrest Python API&lt;/a&gt;, an entirely new avenue has been opened to developers for managing their servers. It allows them to add in their favorite deployment tools (i.e. &lt;a href=&quot;https://hackshack.hpedev.io/workshop/23&quot;&gt;Ansible&lt;/a&gt;) or other management tools, like &lt;a href=&quot;http://hpe.com/info/oneview&quot;&gt;HPE OneView&lt;/a&gt; (as described in several articles found here, in the &lt;a href=&quot;https://developer.hpe.com/blog/&quot;&gt;HPE DEV Blog&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;But what if you are not a Python geek? What if you prefer managing your servers the old way with &lt;code&gt;bash&lt;/code&gt; / &lt;code&gt;grep&lt;/code&gt; / &lt;code&gt;awk&lt;/code&gt; / &lt;code&gt;jq&lt;/code&gt; / &lt;code&gt;curl&lt;/code&gt; / &lt;code&gt;wget&lt;/code&gt;? In that case, you can loop over the list of your iLO IP addresses and use &lt;code&gt;curl&lt;/code&gt; / &lt;code&gt;wget&lt;/code&gt; for getting and setting Redfish parameters.&lt;/p&gt;
&lt;p&gt;However, this loop approach raises two fundamental problems: 1) using &lt;code&gt;curl&lt;/code&gt; and &lt;code&gt;wget&lt;/code&gt; in management scripts implies creating smart crawling functions (as explained in this &lt;a href=&quot;https://developer.hpe.com/blog/getting-started-with-ilo-restful-api-redfish-api-conformance&quot;&gt;article&lt;/a&gt;); 2) sequential management with loops is fine for taking care of five or ten servers, but not many more. A parallel approach should be considered for a large number of managed servers.&lt;/p&gt;
&lt;p&gt;This blog post explains how to manage many servers in parallel using bash scripts without the need to implement schema crawlers, thus giving the scripts long-term stability.&lt;/p&gt;
&lt;p&gt;To achieve this goal, you will combine a parallel task launcher (namely the &lt;a href=&quot;https://clustershell.readthedocs.io/en/latest/index.html&quot;&gt;ClusterShell&lt;/a&gt; and/or the Parallel Distributed Shell) with &lt;a href=&quot;https://www.hpe.com/info/resttool&quot;&gt;iLOrest&lt;/a&gt; (formerly &lt;code&gt;hprest&lt;/code&gt;), the HPE RESTful Interface Tool, in a more detailed and slightly different manner than what you can already read in the &lt;a href=&quot;https://hewlettpackard.github.io/python-redfish-utility/#overview&quot;&gt;user guide&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Clush and pdsh: Quick (and dirty) overview&lt;/h2&gt;
&lt;p&gt;The Parallel Distributed Shell (&lt;code&gt;pdsh&lt;/code&gt;) and the ClusterShell (&lt;code&gt;clush&lt;/code&gt;) are both part of the &lt;a href=&quot;https://fedoraproject.org/wiki/EPEL&quot;&gt;Extra Packages for Enterprise Linux&lt;/a&gt; (EPEL) repository, and they give you the ability to run local or remote commands in parallel. &lt;code&gt;Pdsh&lt;/code&gt; came first, as a replacement for an IBM utility (DSH), with Google being the main contributor. However, Google pulled out of this project in 2013.&lt;/p&gt;
&lt;p&gt;The ClusterShell is an interesting alternative to &lt;code&gt;pdsh&lt;/code&gt; for many reasons. First the basic syntax and most of the switches, options and arguments are identical to &lt;code&gt;pdsh&lt;/code&gt;.  Hence, moving from pdsh to clush is easy. Additionally, &lt;code&gt;clush&lt;/code&gt; provides more flexibility for selecting the remote nodes you want to execute commands against or on. It is associated with the &lt;a href=&quot;https://clustershell.readthedocs.io/en/latest/tools/nodeset.html&quot;&gt;&lt;code&gt;nodeset(1)&lt;/code&gt;&lt;/a&gt; command for computing advanced nodeset operations like &lt;code&gt;expand&lt;/code&gt;, &lt;code&gt;fold&lt;/code&gt;, &lt;code&gt;group&lt;/code&gt;, etc. Another advantage of &lt;code&gt;clush&lt;/code&gt; over &lt;code&gt;pdsh&lt;/code&gt; is that it is written in Python and can be used in Python scripts whenever you are ready to use this language.&lt;/p&gt;
&lt;p&gt;For the purposes of this article, I will use ClusterShell only.&lt;/p&gt;
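&lt;p&gt;As a quick taste of the &lt;code&gt;nodeset&lt;/code&gt; operations mentioned above, once ClusterShell is installed you can fold and expand node lists like this (the node names are just examples):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Fold an explicit list into a compact node set
nodeset -f node1 node2 node3
# -&gt; node[1-3]

# Expand a node set back into individual node names
nodeset -e node[1-3]
# -&gt; node1 node2 node3
&lt;/code&gt;&lt;/pre&gt;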
&lt;p&gt;On systems running Red Hat or a similar distribution, you need to specify the two packages containing the code and its ecosystem:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo dnf install python3-clustershell clustershell
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;On Ubuntu systems, just specify &lt;code&gt;clustershell&lt;/code&gt; on the command line to install both packages:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo apt install clustershell
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here are basic examples to illustrate the launch of several commands in parallel on the local node and then one command on several remote nodes.&lt;/p&gt;
&lt;p&gt;If you want to perform a parallel &lt;code&gt;ping&lt;/code&gt; toward &lt;code&gt;node1&lt;/code&gt;, &lt;code&gt;node2&lt;/code&gt; and &lt;code&gt;node3&lt;/code&gt; type:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;clush -R exec -w node[1-3] ping %h
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;code&gt;-R exec&lt;/code&gt; specifies a local execution of the &lt;code&gt;ping&lt;/code&gt; command toward the list of hosts: &lt;code&gt;node[1-3]&lt;/code&gt;. &lt;code&gt;%h&lt;/code&gt; is substituted with &lt;code&gt;node1&lt;/code&gt;, &lt;code&gt;node2&lt;/code&gt; and &lt;code&gt;node3&lt;/code&gt; for each &lt;code&gt;ping&lt;/code&gt; command. When you hit the return key, your system forks in parallel the following 3 tasks:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;ping node1 
ping node2 
ping node3
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Imagine now that you  want the root user of &lt;code&gt;node1&lt;/code&gt;, &lt;code&gt;node2&lt;/code&gt; and &lt;code&gt;node4&lt;/code&gt; to &lt;code&gt;ping&lt;/code&gt; a remote gateway called &lt;code&gt;aremotegateway&lt;/code&gt;. With the above syntax you can issue:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;clush -R exec -w node[1-2,4] ssh -l root %h &quot;ping aremotegateway&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;However, EPEL provides &lt;code&gt;clush&lt;/code&gt; compiled with an &lt;code&gt;ssh&lt;/code&gt; (default) module, allowing the following simplified syntax:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;clush -w node[1-2,4] -l root &quot;ping aremotegateway&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Since the &lt;code&gt;-R&lt;/code&gt; switch is not present, the default &lt;code&gt;ssh&lt;/code&gt; module is used toward &lt;code&gt;node[1-2,4]&lt;/code&gt; and the &lt;code&gt;ping&lt;/code&gt; command is executed on this list of remote nodes.&lt;/p&gt;
&lt;h2&gt;Nodeset management&lt;/h2&gt;
&lt;p&gt;The list of remote nodes following the &lt;code&gt;-w&lt;/code&gt; switch can be the output of a &lt;code&gt;nodeset&lt;/code&gt; command taking its input from a node set configuration file. The possible locations of the configuration file depend on the &lt;code&gt;groups.conf(5)&lt;/code&gt; file. For this article, I populated the default file located in my home directory (&lt;code&gt;~/.local/etc/clustershell/groups.d/local.cfg&lt;/code&gt;).&lt;/p&gt;
&lt;p&gt;Here is one example showing the power of this &lt;code&gt;clush&lt;/code&gt; companion command.&lt;/p&gt;
&lt;p&gt;Imagine you define two groups of iLOs in the nodeset configuration file:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;ilo-sales: ilo-sales[1-10]
ilo-marketing: ilo-foo,ilo-bar,ilo-test
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;ilo-sales&lt;/code&gt; group contains ten iLO names and the &lt;code&gt;ilo-marketing&lt;/code&gt; group contains three. If you want to select both groups and exclude node &lt;code&gt;ilo-test&lt;/code&gt;, you can do it with:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;nodeset --fold @ilo-sales,@ilo-marketing  --exclude ilo-test
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/1-nodeset-exclude.png&quot; alt=&quot;nodeset exclude example&quot; title=&quot;nodeset exclude example&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can now combine the &lt;code&gt;nodeset&lt;/code&gt; command with &lt;code&gt;clush&lt;/code&gt; to issue a parallel &lt;code&gt;ping&lt;/code&gt; toward this list with:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;clush -R exec -w $(nodeset -f @ilo-sales,@ilo-marketing -x ilo-test) ping -c 2 %h 
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Out-of-band or in-band: that is the question&lt;/h2&gt;
&lt;p&gt;As discussed in this &lt;a href=&quot;https://developer.hpe.com/blog/chif-driver-not-found/&quot;&gt;blog&lt;/a&gt; post, iLOrest offers both out-of-band and in-band management. In the first form, you need to log into a remote iLO with the credentials of an iLO user. The following sequence of commands retrieves the Bios version of a remote server using an out-of-band management:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;ilorest login ilo-IP -u ilo-user -p ilo-password
ilorest select computersystem. 
ilorest ls oem/hpe/bios/current # On Gen9, replace with bios/current
ilorest logout
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: You can always use out-of-band management even when the server is OFF or when there is no operating system installed. The iLO constantly listens on the network as long as a power cable is plugged into the server.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Performing in-band management with iLOrest means that you need to first access the operating system before launching the local &lt;code&gt;ilorest&lt;/code&gt;. If you login as a privileged user (root or Administrator), you don&apos;t need to supply any username/password credentials to access the underlying iLO:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;ssh root@server ilorest login
ssh root@server  ilorest select computersystem.
ssh root@server ilorest ls oem/hpe/bios/current # On Gen9 replace with bios/current
ssh root@server ilorest logout
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Who kicked me out?&lt;/h2&gt;
&lt;p&gt;All of this is awesome except that in both examples (out-of-band and in-band), a cache directory location is not explicitly specified. Hence, you will be kicked out if a concurrent iLOrest session is opened using the default cache directory as shown in the following picture.&lt;/p&gt;
&lt;p&gt;The picture below shows the following sequence of events: (1) you first log into ilo-lab1. (2) Then you select the &lt;code&gt;Bios.&lt;/code&gt; type (select &lt;code&gt;HpBios.&lt;/code&gt; on Gen9 servers) and (3) verify that it has been selected.&lt;/p&gt;
&lt;p&gt;(4) In the other terminal session and from the same server, log into ilo-lab2 without selecting any type. This second session clears off the iLOrest cache including the selection we performed in the first terminal. As a result, (5) asking for the resource type selection returns an error.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/2-kicked-out.png&quot; alt=&quot;Demonstrating iLOrest local cache&quot; title=&quot;Demonstrating iLOrest local cache&quot;&gt;&lt;/p&gt;
&lt;p&gt;As a consequence (and to be safe), you should always specify a specific cache directory location for your iLOrest sessions which will avoid such contention. One possible solution is to generate a random string of ASCII characters and use it as a location for the iLOrest cache.&lt;/p&gt;
&lt;p&gt;With a random string of 16 characters (lowercase, uppercase, digits) for a cache directory name, the out-of-band example becomes:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;UUID=$(cat /dev/urandom | tr -dc &apos;a-zA-Z0-9&apos; | head -c 16)
iLOrest=&quot;ilorest --cache-dir=/tmp/$UUID&quot;
$iLOrest   login ilo-IP -u ilo-user -p ilo-password
$iLOrest select computersystem. 
$iLOrest ls oem/hpe/bios/current # On Gen9, replace with bios/current
$iLOrest logout
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And the in-band example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;UUID=$(cat /dev/urandom | tr -dc &apos;a-zA-Z0-9&apos; | head -c 16) 
iLOrest=&quot;ilorest --cache-dir=/tmp/$UUID&quot;
ssh root@server $iLOrest login
ssh root@server  $iLOrest select computersystem.
ssh root@server $iLOrest ls oem/hpe/bios/current  # On Gen9, replace with bios/current
ssh root@server $iLOrest logout
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Parallel out-of-band management&lt;/h2&gt;
&lt;p&gt;You are now ready to send iLOrest commands in parallel toward a set of iLOs sharing common administrator credentials.&lt;/p&gt;
&lt;p&gt;This example applies a Bios configuration file to a list of servers. The Bios configuration has been created and customized in an ASCII/json file called &lt;code&gt;BiosConfig.json&lt;/code&gt; using the &lt;code&gt;ilorest save&lt;/code&gt; command. The list of target servers to configure is represented by a group of iLOs defined in the nodeset configuration file as explained earlier. Note that, for a performance reason, I grouped the type selection (&lt;code&gt;Bios.&lt;/code&gt;) with the login process:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;CLUSH=&quot;clush -R exec&quot; 
iLOrest=&quot;ilorest --nologo --cache-dir=/tmp/cache-%h&quot;
ILO_LIST=$(nodeset -f @myilos)
$CLUSH -w $ILO_LIST $iLOrest login %h -u ilo-admin -p password --selector=Bios. # On gen9, selector is HpBios
$CLUSH -w $ILO_LIST $iLOrest load -f BiosConfig.json
$CLUSH  -w $ILO_LIST $iLOrest logout
&lt;/code&gt;&lt;/pre&gt;
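&lt;p&gt;If you are wondering where &lt;code&gt;BiosConfig.json&lt;/code&gt; comes from, it can be captured once from a reference server with the &lt;code&gt;ilorest save&lt;/code&gt; command mentioned above and then edited to taste. A possible sketch (the iLO name and credentials are placeholders; on Gen9 servers use the &lt;code&gt;HpBios.&lt;/code&gt; selector):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;iLOrest=&quot;ilorest --nologo --cache-dir=/tmp/cache-reference&quot;
$iLOrest login ilo-reference -u ilo-admin -p password --selector=Bios.
# Save the selected Bios. settings to a local JSON file, then edit it as needed
$iLOrest save -f BiosConfig.json
$iLOrest logout
&lt;/code&gt;&lt;/pre&gt;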
&lt;p&gt;I successfully tested this example against 24 servers. I am pretty optimistic that it works just as well for a larger number of servers, although there is a risk of bumping into iLOrest timeouts.&lt;/p&gt;
&lt;h2&gt;Parallel in-band management&lt;/h2&gt;
&lt;p&gt;Offloading iLOrest processing onto remote servers may be required in secure environments where iLOs are physically disconnected from the network.&lt;/p&gt;
&lt;p&gt;In this case, in-band management requires that an operating environment is up and running with &lt;code&gt;sshd&lt;/code&gt; and &lt;code&gt;ilorest&lt;/code&gt; installed and configured on all the managed nodes. This operating environment could be a customized Live-CD booted via the iLO virtual Drive facility as explained in a &lt;a href=&quot;https://developer.hpe.com/blog/in-band-management-with-ilorest-and-a-livecd/&quot;&gt;previous article&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The in-band version of our Bios configuration requires one more step compared to the out-of-band method: the copy of the JSON configuration file onto all the managed nodes. This operation is simple to perform with the &lt;code&gt;--copy&lt;/code&gt; and &lt;code&gt;--dest&lt;/code&gt; clush options dedicated to that purpose. Moreover, this copy will not impact the overall process time because the JSON configuration file to transfer is a small ASCII file:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;UUID=$(cat /dev/urandom | tr -dc &apos;a-zA-Z0-9&apos; | head -c 16)
CLUSH=&quot;clush -l root &quot;
iLOrest=&quot;ilorest --nologo --cache-dir=/tmp/cache-${UUID}&quot;
SRV_LIST=$(nodeset -f &quot;@lab&quot;)

$CLUSH -w $SRV_LIST --copy BiosConfig.json --dest /tmp/.

$CLUSH -w $SRV_LIST &quot;$iLOrest login --selector=Bios.&quot; # selector is HpBios. on Gen9 servers
$CLUSH -w $SRV_LIST &quot;$iLOrest load -f /tmp/BiosConfig.json &quot;
$CLUSH -w $SRV_LIST $iLOrest logout
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;In Linux infrastructures, iLOrest combined with the ClusterShell offers easy out-of-band and in-band management scripting possibilities for system administrators who prefer Bash to pure Python. Of course, there are pros and cons to both methods. Out-of-band iLOrest management does not require a customized operating environment running on the managed servers. In-band management offers the possibility to physically disconnect iLOs to decrease the network footprint.&lt;/p&gt;
&lt;p&gt;In Windows environments, system managers should be able to perform similar efficient parallel configuration of their servers using &lt;a href=&quot;https://devblogs.microsoft.com/scripting/parallel-processing-with-jobs-in-powershell/&quot;&gt;PowerShell&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Here’s to 2021 and the HPE DEV Community: Take a bow – You deserve it!]]></title><description><![CDATA[The rolling in of a new year often brings with it a sense of nostalgia, offering a time for reflection to take stock of what we’ve…]]></description><link>https://developer.hpe.com/here’s-to-2021-and-the-hpe-dev-community-take-a-bow-–-you-deserve-it/</link><guid isPermaLink="false">https://developer.hpe.com/here’s-to-2021-and-the-hpe-dev-community-take-a-bow-–-you-deserve-it/</guid><pubDate>Thu, 16 Dec 2021 15:55:54 GMT</pubDate><content:encoded>&lt;p&gt;The rolling in of a new year often brings with it a sense of nostalgia, offering a time for reflection to take stock of what we’ve accomplished. It’s also a time when we resolve to do even better and try new things in the time ahead.&lt;/p&gt;
&lt;p&gt;When I look back on 2021, I can’t help but to feel a certain amount of pride in being a part of HPE DEV and all that our Community has accomplished. I specifically use the term Community here, because, without you, these things would not have happened. Without you lending your expertise and sharing your skill sets and talents, we could not have delivered all of this. So, let’s take a look back on our achievements and take a moment to appreciate all the hard work that it took to get here.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The HPE DEV Portal&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;One of the biggest undertakings was the refurbishment of the HPE DEV portal early in the year. With more and more platform pages being added, including SmartSim, Determined AI, HPE Alletra, and Data Services Cloud Console, as well as new learning opportunities, it was evident that the site needed to be revamped to facilitate navigation for portal users.&lt;/p&gt;
&lt;p&gt;The new portal now offers great ways to &lt;a href=&quot;https://developer.hpe.com/community&quot;&gt;engage&lt;/a&gt;, new &lt;a href=&quot;https://developer.hpe.com/skillup&quot;&gt;Skill Up&lt;/a&gt; opportunities, and drop down menus to quickly find what you need.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/hpedev-website.png&quot; width=&quot;800&quot; height=&quot;455&quot; alt=&quot;portal benefits&quot;&gt;&lt;/center&gt;
&lt;p&gt;One of the newest additions to the site is the link to our new &lt;a href=&quot;https://www.youtube.com/playlist?list=PLtS6YX0YOX4fWMwKbp9blyI1GLdXlbWjY&quot;&gt;YouTube video channel&lt;/a&gt; where you can now find over 50 videos, including many in-depth interviews with industry experts explaining new technologies and offering tips and tricks on how to work with them.&lt;/p&gt;
&lt;p&gt;Another reason for the site revamp was to implement a better back-end content management system. From the outside, you might not notice this much, but if you’ve written blog posts for us this year, you’ve undoubtedly noticed the improved method for submitting your posts and the ability to manage them through GitHub.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The HPE Blog Site&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;To all of you who submitted articles and tutorials this past year, thank you! In 2021, we had about 100 blog posts published. The articles ranged in content from high-level interviews with developers and open source experts to exceptional, deep-dive tutorials helping readers learn how to take advantage of newer technologies to make their lives easier. Data analytics, data/ML/AI pipelines, data fabric, and containers appeared to be the hottest topics, although systems management with Redfish proved to be highly popular as well. We’re hoping to add even more open source and edge-to-cloud articles as we move into 2022.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;HPE DEV Workshops-on-Demand&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The &lt;a href=&quot;https://hackshack.hpedev.io/workshops&quot;&gt;HPE DEV Workshops-on-Demand &lt;/a&gt;were introduced in November of 2020, starting off with just three Jupyter Notebooks. In 2021, Community members contributed and added to that so we now have 23 workshops with more coming soon. While adding 20 workshops over the course of the year is notable, it’s not nearly as impressive as the fact that we were able to deliver these workshops to over 2,000 registrants this year!&lt;/p&gt;
&lt;p&gt;The hottest workshops for 2021 were the two focused on Kubernetes, quickly followed by the sessions on Redfish. HPE Ezmeral workshops also made a good showing, as did workshops on Python, SPIFFE/SPIRE, and HPE OneView. The workshops garnered fantastic ratings in the post-workshop surveys registrants are requested to fill out, earning an average of 4.6 stars out of 5. A great addition to the program is our new set of recognition badges awarded to those who take the workshops. If you’re not familiar with this program, it’s outlined in this &lt;a href=&quot;https://developer.hpe.com/blog/become-a-legend/&quot;&gt;blog post&lt;/a&gt;. Basically, you receive a badge for each workshop you complete and additional badges based on how many workshops you complete. These badges can be shared via social media to brag to your friends on how technically advanced you are!&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;HPE DEV Munch &amp;#x26; Learn Technology Talks&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Another place the Community shined this year is in the area of collaborating with one another in the &lt;a href=&quot;https://developer.hpe.com/campaign/munch-and-learn&quot;&gt;HPE DEV Munch &amp;#x26; Learn&lt;/a&gt; sessions. These free, one-hour monthly webinars provide an opportunity for industry experts to share their knowledge with and answer questions from the Community. Many of these talks introduce topics that eventually make their way into blog posts or Jupyter Notebooks for new HPE DEV Workshops-on-Demand.&lt;/p&gt;
&lt;p&gt;This year we delivered 11 Munch &amp;#x26; Learn sessions on some really great topics. The most recent session had a record-breaking number of registrants – over 350 – who all came to hear about Redfish: Past, Present and Future.&lt;/p&gt;
&lt;p&gt;For a quick look at the topics delivered and to access their replays, check out the list below:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Date&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Title&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Link&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;January&lt;/td&gt;
&lt;td&gt;What’s a data fabric and how does it work?&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/qi6sTvu8osk&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;February&lt;/td&gt;
&lt;td&gt;Explore Containerization and MLOps&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/9PvKpe7yMpI&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;March&lt;/td&gt;
&lt;td&gt;Data Science Unplugged: Part 1&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/Inh6eXM0EbA&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;April&lt;/td&gt;
&lt;td&gt;Building a foundation for zero trust with SPIFFE&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/G1ceKr16nn8&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;May&lt;/td&gt;
&lt;td&gt;Data Science Unplugged: Part 2&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=Va4tSr__Yok&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;June&lt;/td&gt;
&lt;td&gt;Microservices Architecture 101&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/qyyxQU37ZyQ&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;July&lt;/td&gt;
&lt;td&gt;How to make data consumable for real-world data science&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/4WKjRqflF7M&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;August&lt;/td&gt;
&lt;td&gt;Kubernetes 101&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/PWVJKK1obKQ&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;September&lt;/td&gt;
&lt;td&gt;Digital Transformation.Next: Data &amp;#x26; Analytics Workloads&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/Q4kJKCS7rbo&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;November&lt;/td&gt;
&lt;td&gt;The Great Unification: Building Analytic pipelines with Apache Spark Workloads&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/TxZP_T9CC5Y&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;December&lt;/td&gt;
&lt;td&gt;Redfish: Past, Present and Future&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://youtu.be/Q1Qeb24lpKg&quot;&gt;Replay&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;In addition to our monthly Munch &amp;#x26; Learn Technology Talks, HPE DEV facilitated 19 different online gatherings to share technology information and collaborate between HPE engineering groups. The effects of these meetings are expected to result in some interesting new meetup opportunities for the Community at large in 2022, so keep an eye out for announcements soon in this area.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Events&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;As we continued to struggle worldwide with the pandemic, in-person event opportunities were limited, however we did have HPE DEV Community representation at KubeCon 2021 NA in Los Angeles where Sir Hackington once again stood watch over HPE DEV swag.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/sir-hackington-2021-small.png&quot;&gt;&lt;/center&gt;
&lt;p&gt;We were still able to come together for a number of virtual events, including KubeCon Europe, 3D Data World 2021 (a really cool event where we participated as avatars!), and Discover 2021 where we achieved the highest technology session participation with 672 attendees!&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Looking Forward to a Bright New Year!&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;2021 gave us all a great opportunity to explore and learn more about HPE Ezmeral, data fabric, Kubernetes, and open source projects like SPIFFE/SPIRE. We’ve added more content around data analytics, and ML and AI technologies. We also on-boarded six ISV (independent software vendor) partners to the HPE DEV program who delivered content related to HPE Ezmeral via blog posts and technology talks helping to grow developer awareness around this solution. Because of this, we’ve welcomed more and more data scientists and related personas into our Community.&lt;/p&gt;
&lt;p&gt;We look forward to our continued collaboration, diving deeper into exciting new ways of dealing with all this data around us, making it useful, and expanding work in open source projects.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Getting Started with Data Services Cloud Console Public REST API]]></title><description><![CDATA[Customers across industries are struggling with the complexity of managing data and infrastructure, because it creates a roadblock to…]]></description><link>https://developer.hpe.com/getting-started-with-the-hpe-data-services-cloud-console-public-rest-api/</link><guid isPermaLink="false">https://developer.hpe.com/getting-started-with-the-hpe-data-services-cloud-console-public-rest-api/</guid><pubDate>Wed, 15 Dec 2021 16:01:03 GMT</pubDate><content:encoded>&lt;center&gt;&lt;img src=&quot;/img/dscc-idp-core-architect.png&quot; width=&quot;500&quot; height=&quot;501&quot; alt=&quot;Unified DataOps&quot;&gt;&lt;/center&gt;
&lt;p&gt;Customers across industries are struggling with the complexity of managing data and infrastructure, because it creates a roadblock to innovation and agility. Today, every organization is required to unleash the power of data to drive digital transformation, but fragmented data management tools, manual processes, and infrastructure silos - spanning edge to cloud - are getting in the way. This complexity also amplifies business risk, and it&apos;s only getting harder as data continues to grow, apps evolve, and infrastructure continues its spread from edge to cloud.&lt;/p&gt;
&lt;p&gt;Data Services Cloud Console public REST API provides a resource for customers who are looking to enhance their infrastructure management and data-ops using the programmatic extensions from Data Services Cloud Console.&lt;/p&gt;
&lt;h3&gt;A Public REST API which is based on the OpenAPI 3.X Specification&lt;/h3&gt;
&lt;p&gt;&lt;img src=&quot;/img/universal-public-api.png&quot; alt=&quot;API diagram&quot; title=&quot;API &quot;&gt;&lt;/p&gt;
&lt;p&gt;Hewlett Packard Enterprise (HPE) offers the Data Services Cloud Console unified REST API to provide the agility previously mentioned. It is specified based on the OpenAPI format version 3 (OpenAPI 3.0 information). The specification defines a standard, language-agnostic interface to the RESTful API allowing clients (both human and computer) to consume capabilities of the console&apos;s services efficiently. The API definition is available for download in either OpenAPI 3 YAML or JSON format at the link mentioned in the next chapter.&lt;/p&gt;
&lt;p&gt;Some of the advantages of distributing the API in OpenAPI 3.0 format:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Updates to the API can be generated in a more agile manner because documentation is embedded, describing the endpoints, parameters, and more, such as contact information, license, and terms of use.&lt;/li&gt;
&lt;li&gt;Consumers of this API also gain agility by converting the OpenAPI YAML or JSON definition into client code for whatever programming language is used in their automation or CI/CD workflow, as sketched just after this list. (Please check &lt;a href=&quot;https://openapi.tools&quot;&gt;https://openapi.tools&lt;/a&gt; for more information on API tools that generate client code.)&lt;/li&gt;
&lt;/ol&gt;
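&lt;p&gt;To make the second point concrete, here is one possible way to generate a client library from the downloaded definition, using the community openapi-generator tool (just one of the many options listed on openapi.tools). The definition file name and output directory are placeholders:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Generate a Python client from a downloaded OpenAPI definition (file name is a placeholder)
docker run --rm -v $(pwd):/local openapitools/openapi-generator-cli generate \
    -i /local/dscc-api.yaml \
    -g python \
    -o /local/dscc-python-client
&lt;/code&gt;&lt;/pre&gt;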
&lt;h3&gt;Data Services Cloud Console REST API Details&lt;/h3&gt;
&lt;p&gt;Anyone can download this OpenAPI Specification (OAS) v3 definition of the Data Services Cloud Console from the following: &lt;a href=&quot;https://console-us1.data.cloud.hpe.com/doc/api/v1/&quot;&gt;Link to the API repository&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/api-documentation-display.png&quot; alt=&quot;HPE GreenLake API documentation&quot; title=&quot;API Doc&quot;&gt;&lt;/p&gt;
&lt;p&gt;The website also provides additional information:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;The list of the REST API resources that are supported as of the release.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The information about the HTTP method, parameters and the responses that are expected from each resource.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The syntax for the HTTP method and path to this resource. Note that this path is a relative path. For the complete path, please add the base-URL documented below.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The body of response is returned in JSON format according to the response status of the REST API.&lt;/p&gt;
&lt;p&gt;The website also provides the links to download the cloud console OpenAPI definitions in either json or yaml format. Below is an example of the downloaded yaml definition file from the Data Services Cloud Console REST API documentation website.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/open-api-yaml.png&quot; alt=&quot;API definition yaml&quot; title=&quot;yaml&quot;&gt;&lt;/p&gt;
&lt;p&gt;The API definition is available in both YAML and JSON versions and can be downloaded by clicking the download button at the top left of the documentation website.&lt;/p&gt;
&lt;h3&gt;Documented Attributes&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;API Name &amp;#x26; Description&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Provides a short description of the objective for this API with the supported HTTP request method (POST, GET, DELETE, PATCH, PUT etc).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;API Path&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Provides the detailed URL path as the end-point to issue the API call. Note that the user must add the base path URL to extend this path to the correct resource.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;API Parameter&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Allows the client to provide input, such as the object to manipulate or a filter to limit the returned objects.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;API Data/Body/Payload&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;This is the data passed along in a different part of the REST API request, usually associated with HTTP method such as POST/PATCH/PUT.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;API Response&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Provides detailed response information on the result of the particular API and may include more data in JSON format.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;API Error Codes&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Provides the result of the API execution, returning either success or an error, along with an error message in the case of an incorrect or unauthorized API call.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Supported API Categories (Services)&lt;/h3&gt;
&lt;p&gt;The API categories for Data Services Cloud Console will grow in accordance with the expansion of future services. As of today, the currently available services include:&lt;/p&gt;
&lt;h4&gt;&lt;strong&gt;Common (Alletra-6K, Alletra-9K, Primera, Nimble)&lt;/strong&gt;&lt;/h4&gt;
&lt;ol&gt;
&lt;li&gt;authentication&lt;/li&gt;
&lt;li&gt;tasks&lt;/li&gt;
&lt;li&gt;event Audit&lt;/li&gt;
&lt;li&gt;authZ (User RBAC permissions)&lt;/li&gt;
&lt;li&gt;issues&lt;/li&gt;
&lt;li&gt;controllers&lt;/li&gt;
&lt;li&gt;host-initiator-groups&lt;/li&gt;
&lt;li&gt;host-initiators&lt;/li&gt;
&lt;li&gt;ports&lt;/li&gt;
&lt;li&gt;shelves&lt;/li&gt;
&lt;li&gt;storage-pools&lt;/li&gt;
&lt;li&gt;storage-systems&lt;/li&gt;
&lt;li&gt;system-settings&lt;/li&gt;
&lt;li&gt;volume-sets&lt;/li&gt;
&lt;li&gt;volumes&lt;/li&gt;
&lt;/ol&gt;
&lt;h4&gt;&lt;strong&gt;Alletra-6K or Nimble&lt;/strong&gt;&lt;/h4&gt;
&lt;ol&gt;
&lt;li&gt;protection-templates&lt;/li&gt;
&lt;li&gt;disks&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Versioning&lt;/h3&gt;
&lt;p&gt;The major version number will be provided in the resource path as &quot;v1&quot; in this example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-jsoniq&quot;&gt;/api/v1/&amp;#x3C;resource group&gt;/...
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here are some examples of these resource paths that contain several resource groups under the same root:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-jsoniq&quot;&gt;/api/v1/storage-systems/...

/api/v1/controllers/...

/api/v1/volumes/...
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Existing clients will be able to maintain backward compatibility across major version increments and adopt any newly introduced API at their own pace. Both the new and the old version of the API will be supported until a deprecation of the old version is announced. Nonetheless, the older major version will always be frozen, with the exception of bug fixes. Deprecation will also be announced through the deprecation and sunset HTTP headers.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-jsoniq&quot;&gt;/api/v1/&amp;#x3C;resource group&gt;/...

/api/v2/&amp;#x3C;resource group&gt;/...
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;HTTP Request Methods&lt;/h3&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;HTTP Verbs&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;GET&lt;/td&gt;
&lt;td&gt;Retrieves target resource&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST&lt;/td&gt;
&lt;td&gt;Creates an entity or changes state&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PUT&lt;/td&gt;
&lt;td&gt;Replaces target resource with data part of the HTTP request payload&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DELETE&lt;/td&gt;
&lt;td&gt;Removes the resource&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3&gt;Authorization through OAuth2 Client Credential Workflow&lt;/h3&gt;
&lt;p&gt;Glossary of the terms:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://developer.hpe.com/blog/api-console-for-data-services-cloud-console/&quot;&gt;Resources&lt;/a&gt;&lt;/strong&gt;: Components inside the cloud console, such as storage arrays, volumes, and many other objects that are consumable, related to each other, and provide methods to operate on them. A resource is usually represented by a path that is appended to the endpoint, e.g. /api/v1/storage-array.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://developer.hpe.com/blog/api-console-for-data-services-cloud-console/&quot;&gt;Resource Owner&lt;/a&gt;:&lt;/strong&gt; The user registered inside the HPE GreenLake console who has the capability to authorize client application access to the cloud console resources.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://developer.hpe.com/blog/api-console-for-data-services-cloud-console/&quot;&gt;Client Application&lt;/a&gt;:&lt;/strong&gt; The stand-alone application that runs on the client machine, and usually represents the customer&apos;s business application for automation, ticketing, monitoring, and many other business processes.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://developer.hpe.com/blog/api-console-for-data-services-cloud-console/&quot;&gt;Access Token:&lt;/a&gt;&lt;/strong&gt; This is the object that describes the permission and the security context of which the client application was delegated. This token contains the identity and privileges of the cloud console user which create this token.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://developer.hpe.com/blog/api-console-for-data-services-cloud-console/&quot;&gt;API Gateway&lt;/a&gt;&lt;/strong&gt;: The API Gateway is the menu in the HPE GreenLake console that is used to register a client application and obtain the API client credentials (client-id and client-secret) for that client application. These credentials are required to generate a short-lived access token that is used to make secure REST API calls to the Data Services Cloud Console application instance.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://developer.hpe.com/blog/oauth2-for-hpe-greenlake-data-services-cloud-console/&quot;&gt;Endpoint&lt;/a&gt;&lt;/strong&gt;: The location where a service can be accessed, usually represented by a URL (Uniform Resource Locator), e.g. &lt;a href=&quot;https://eu1.data.cloud.hpe.com&quot;&gt;https://eu1.data.cloud.hpe.com&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/dscc-public-api-introduction-updated_111122.jpg&quot; alt=&quot;OAuth 2.0 flow&quot; title=&quot;authentication and authorization flow&quot;&gt;&lt;/p&gt;
&lt;p&gt;The client application can issue a REST API request using the access token as a bearer token. The client obtains this access token from the authorization API endpoint after it successfully authenticates with the associated customer application credential (client-id and client-secret). This application credential is created by a console user who has permission to access resources (such as controllers, volumes, etc.) under the console instances. The access token expiration time is set to 7200 seconds (2 hours) by default. When the resource server sees an expired access token, it returns a 401 (Unauthorized) response. The client must then authenticate again using the associated client-id and client-secret to obtain a new access token to use for the next REST API request.&lt;/p&gt;
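&lt;p&gt;The flow above can be exercised with nothing more than &lt;code&gt;curl&lt;/code&gt;. In the sketch below, the token endpoint URL, client-id, and client-secret are placeholders (use the values provided through the API Gateway for your account), &lt;code&gt;jq&lt;/code&gt; is assumed to be available to parse the JSON response, and the storage-systems path and US West base-URL are taken from the tables in this post:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# 1. Exchange the client credentials for a short-lived access token
#    (TOKEN_ENDPOINT is a placeholder; use the endpoint documented for your account)
TOKEN_ENDPOINT=&quot;https://sso.example.com/token&quot;
ACCESS_TOKEN=$(curl -s -X POST &quot;$TOKEN_ENDPOINT&quot; \
    -d grant_type=client_credentials \
    -d client_id=YOUR_CLIENT_ID \
    -d client_secret=YOUR_CLIENT_SECRET | jq -r .access_token)

# 2. Call a Data Services Cloud Console resource (US West region) with the bearer token
curl -s -H &quot;Authorization: Bearer $ACCESS_TOKEN&quot; \
    https://us1.data.cloud.hpe.com/api/v1/storage-systems
&lt;/code&gt;&lt;/pre&gt;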
&lt;h3&gt;Authorization Policies&lt;/h3&gt;
&lt;p&gt;The client can only receive properties from the authorized API resources based on the Role Based Access Control (RBAC) settings of the user who created the client-credential pair (client-id and client-secret). This authorization derives from the organization, capability, and scope (roles) assigned to the associated user. As a result, the client application inherits the permissions of the user who created the client application registration under the API Gateway. Note that subsequent changes to that user&apos;s permissions after the client application is registered will impact the responses returned, which are always based on the current authority.&lt;/p&gt;
&lt;h3&gt;The API Endpoints (base-URL) for each Data Services Cloud Console&apos;s Region&lt;/h3&gt;
&lt;p&gt;The REST API for Data Services Cloud Console requires the client application to issue the REST API request to the URL that is associated with the console&apos;s instance deployed at the associated region of the storage array. As of November 2021, here are the Domain URLs which the client application must use as the base-URL to the resource path of REST API.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Data Services Cloud Console Region&lt;/th&gt;
&lt;th&gt;Base-URL&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;EU Central (Europe)&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://eu1.data.cloud.hpe.com&quot;&gt;https://eu1.data.cloud.hpe.com&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AP Northeast (Asia Pacific)&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://jp1.data.cloud.hpe.com&quot;&gt;https://jp1.data.cloud.hpe.com&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;US West (United States)&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://us1.data.cloud.hpe.com&quot;&gt;https://us1.data.cloud.hpe.com&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3&gt;Asynchronous Response&lt;/h3&gt;
&lt;p&gt;All of the REST API operations are stateless in nature, and operations that change state, such as POST, are handled asynchronously. In that scenario, the API returns a response with HTTP code 202 (Accepted) and a reference to a task, as follows:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-jsoniq&quot;&gt;Response: 202 (Accepted)

{
  &quot;taskURI&quot;:&quot;/api/v1/tasks/{task id}
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In order to confirm the completion of this remote procedure call issued through POST, the user queries the task resource for the status of the asynchronous task.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-jsoniq&quot;&gt;/api/v1/tasks/{task id}

GET responses
{
  state: {state ENUM}
}

state ENUM:
- UNSPECIFIED
- INITIALIZED
- RUNNING
- FAILED
- SUCCEEDED
- TIMEDOUT
- PAUSED
&lt;/code&gt;&lt;/pre&gt;
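&lt;p&gt;In practice, a client can simply poll the task URI returned in the 202 response until the task reaches a terminal state. A minimal shell sketch, reusing the access token from the previous section and a placeholder task id:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Poll an asynchronous task until it reaches a terminal state
# (ACCESS_TOKEN and the task id are placeholders)
TASK_URI=&quot;/api/v1/tasks/your-task-id&quot;
while true; do
  STATE=$(curl -s -H &quot;Authorization: Bearer $ACCESS_TOKEN&quot; \
      &quot;https://us1.data.cloud.hpe.com${TASK_URI}&quot; | jq -r .state)
  echo &quot;task state: $STATE&quot;
  case &quot;$STATE&quot; in
    SUCCEEDED|FAILED|TIMEDOUT) break ;;
  esac
  sleep 10
done
&lt;/code&gt;&lt;/pre&gt;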
&lt;p&gt;For a more in-depth discussion of the API Gateway and OAuth 2.0 (Open Authorization), please take a look at the blog posts on HPE DEV, the TekTalk on Point session on the ON24 website, and the demo on YouTube listed below.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=g3UO0S-4r6I&quot;&gt;&lt;img src=&quot;https://img.youtube.com/vi/g3UO0S-4r6I/hqdefault.jpg&quot; alt=&quot;Introduction to DSCC&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Blog: &lt;a href=&quot;https://developer.hpe.com/blog/api-console-for-data-services-cloud-console/&quot;&gt;Using HPE GreenLake&apos;s API Gateway for Data Services Cloud Console&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Blog: &lt;a href=&quot;https://developer.hpe.com/blog/oauth2-for-hpe-greenlake-data-services-cloud-console/&quot;&gt;Implementing OAuth 2 Flow for Data Services Cloud Console&apos;s Client Application&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;TEKTALK ON POINT: &lt;a href=&quot;https://vshow.on24.com/vshow/HPETekTalks/content/3571890/&quot;&gt;Introduction to Data Services Cloud Console public API&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;More blog posts will be coming to help you take further advantage of its capabilities. Stay tuned to the &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE DEV blog&lt;/a&gt; for more blog posts about HPE Data Services Cloud Console REST API.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Getting Started with Spark on MapR Sandbox]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may…]]></description><link>https://developer.hpe.com/getting-started-with-spark-on-mapr-sandbox/</link><guid isPermaLink="false">https://developer.hpe.com/getting-started-with-spark-on-mapr-sandbox/</guid><pubDate>Tue, 14 Dec 2021 09:35:31 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/ezmeral-data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/ezmeral-data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;At MapR, we distribute and support Apache Spark as part of the MapR Converged Data Platform. This tutorial will help you get started with running a Spark application on the MapR Sandbox (now known as the &lt;a href=&quot;https://docs.datafabric.hpe.com/62/MapRContainerDevelopers/RunMapRContainerDevelopers.html&quot;&gt;Development Environment for HPE Ezmeral Data Fabric&lt;/a&gt;).&lt;/p&gt;
&lt;h2&gt;Prerequisites&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;HARDWARE REQUIREMENTS&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;8GB RAM, multi-core CPU&lt;/li&gt;
&lt;li&gt;20GB minimum HDD space&lt;/li&gt;
&lt;li&gt;Internet access&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;SOFTWARE REQUIREMENTS&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A hypervisor. This example uses VMware Fusion 6.0.2 on OS X; however, other VMware products could be used instead. Additionally, &lt;a href=&quot;https://www.virtualbox.org/&quot;&gt;VirtualBox&lt;/a&gt; can be used.&lt;/li&gt;
&lt;li&gt;A virtual machine image for the MapR Sandbox. Spark comes pre-loaded with version 5.0 and later of the MapR Sandbox.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Starting up and Logging into the Sandbox&lt;/strong&gt;&lt;br /&gt;
Install and fire up the Sandbox using the instructions &lt;a href=&quot;https://docs.datafabric.hpe.com/62/MapRContainerDevelopers/RunMapRContainerDevelopers.html&quot;&gt;here&lt;/a&gt;. Once you are able to log in to the web interface for the Sandbox, you are ready to start setting up Spark.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Logging in to the Command Line&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Before you get started, you&apos;ll want to have the IP address handy for your Sandbox VM. See the screenshot below for an example of where to find that.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/tutorial_spark1.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Next, use an SSH client such as PuTTY (Windows) or Terminal (Mac) to log in. See below for an example:&lt;br /&gt;
Use userid: user01 and password: mapr.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/tutorial_spark2.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;For VMWare use: &lt;code&gt;$ ssh user01@ipaddress&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;For Virtualbox use: &lt;code&gt;$ ssh user01@127.0.0.1 -p 2222&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;“How to” for a Spark Application&lt;/h2&gt;
&lt;p&gt;Next, we will look at how to write, compile, and run a Spark word count application on the MapR Sandbox. First, we will walk step by step through the following word count application in Java.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Example Word Count App in Java&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/tutorial_spark3.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can download the complete Maven project Java and Scala code &lt;a href=&quot;https://github.com/caroljmcdonald/sparkwordcountapp&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Get a text-based dataset&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;First, let&apos;s grab a text-based dataset that will be used for counting the words. Today, we&apos;ll be using the freely available Alice In Wonderland text. Create a new folder:&lt;br /&gt;
&lt;code&gt;mkdir -p /mapr/demo.mapr.com/input&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Pull down the text file:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;wget -O /mapr/demo.mapr.com/input/alice.txt http://www.gutenberg.org/cache/epub/11/pg11.txt
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Initializing a SparkContext&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The first thing a Spark program has to do is create a SparkContext object, which represents a connection to a Spark cluster, and can be used to create RDDs, accumulators and broadcast variables on that cluster.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/tutorial_spark4.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;To create a SparkContext, you first need to create a SparkConf object to configure your application, as shown below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-Java&quot;&gt;        // Create a Java Spark Context.
        SparkConf conf = new SparkConf().setAppName(&quot;JavaWordCount&quot;);
        JavaSparkContext sc = new JavaSparkContext(conf);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Create an RDD from a file&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Spark’s primary abstraction is a distributed collection of items called a Resilient Distributed Dataset (RDD). RDDs can be created from Hadoop InputFormats (such as HDFS files) or by transforming other RDDs. The following code uses the SparkContext to define a base RDD from the file inputFile:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-Java&quot;&gt;        String inputFile = args[0];
        JavaRDD&amp;#x3C;String&gt; input = sc.textFile(inputFile);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Transform input RDD with flatMap&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;To split the input text into separate words, we use the &lt;a href=&quot;https://spark.apache.org/docs/1.3.0/programming-guide.html&quot;&gt;flatMap(func)&lt;/a&gt; RDD transformation, which returns a new RDD formed by passing each element of the source through a function. The String split function is applied to each line of text, returning an RDD of the words in the input RDD:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/tutorial_spark5.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-Java&quot;&gt;         // map/split each line to multiple words
        JavaRDD&amp;#x3C;String&gt; words = input.flatMap(
                new FlatMapFunction&amp;#x3C;String, String&gt;() {
                    public Iterable&amp;#x3C;String&gt; call(String x) {
                        return Arrays.asList(x.split(&quot; &quot;));
                    }
                }
        );
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Transform words RDD with map&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;We use the &lt;a href=&quot;https://spark.apache.org/docs/1.3.0/programming-guide.html&quot;&gt;map(func)&lt;/a&gt; to transform the words RDD into an RDD of (word, 1) key-value pairs:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/tutorial_spark6.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-Java&quot;&gt;        JavaPairRDD&amp;#x3C;String, Integer&gt; wordOnePairs = words.mapToPair(
                new PairFunction&amp;#x3C;String, String, Integer&gt;() {
                    public Tuple2&amp;#x3C;String, Integer&gt; call(String x) {
                        return new Tuple2(x, 1);
                    }
                }
        );
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Transform wordOnePairs RDD with reduceByKey&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;To count the number of times each word occurs, we combine the values (1) in wordOnePairs that have the same key (word) using &lt;a href=&quot;https://spark.apache.org/docs/1.3.0/programming-guide.html&quot;&gt;reduceByKey(func)&lt;/a&gt;. This transformation returns an RDD of (word, count) pairs where the values for each word are aggregated using the given reduce function, x+y:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/tutorial_spark7.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-Java&quot;&gt;        // reduce add the pairs by key to produce counts
        JavaPairRDD&amp;#x3C;String, Integer&gt; counts = wordOnePairs.reduceByKey(
                new Function2&amp;#x3C;Integer, Integer, Integer&gt;() {
                    public Integer call(Integer x, Integer y) {
                        return x + y;
                    }
                }
        );
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Output with RDD action saveAsTextFile&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Finally, the RDD action saveAsTextFile(path) writes the elements of the dataset as a text file (or set of text files) in the outputFile directory.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-Java&quot;&gt;        String outputFile = args[1];
        // Save the word count back out to a text file, causing evaluation.
        counts.saveAsTextFile(outputFile);                
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Example Word Count App in Scala&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Here is the same example in Scala:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/tutorial_spark8.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Building a Simple Application&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Spark can be linked into applications in either Java, Scala, or Python. Maven is a popular package management tool for Java-based languages that lets you link to libraries in public repositories. In Java and Scala, you give your application a Maven dependency on the spark-core artifact. The current Spark version is 1.6.1 and the Maven coordinates are:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-xml&quot;&gt;        &amp;#x3C;dependency&gt;
            &amp;#x3C;groupId&gt;org.apache.spark&amp;#x3C;/groupId&gt;
            &amp;#x3C;artifactId&gt;spark-core_2.10&amp;#x3C;/artifactId&gt;
            &amp;#x3C;version&gt;1.6.1&amp;#x3C;/version&gt;
            &amp;#x3C;scope&gt;provided&amp;#x3C;/scope&gt;
        &amp;#x3C;/dependency&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can build with Maven using IDEs like Eclipse or NetBeans, and then copy the JAR file to your MapR Sandbox, or you can install Maven on your sandbox and build from the Linux command line.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/tutorial_spark9.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can use scp to copy your JAR file to the MapR Sandbox:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/tutorial_spark10.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;See below for an example of using scp from the command line:&lt;br /&gt;
&lt;code&gt;use userid: user01 and password: mapr.&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;For VMWare use: &lt;code&gt;$ scp nameoffile.jar user01@ipaddress:/user/user01/.&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;For Virtualbox use: &lt;code&gt;$ scp -P 2222 nameoffile.jar user01@127.0.0.1:/user/user01/.&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Running Your Application&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;First, find the version of Spark on the sandbox with &lt;code&gt;ls /opt/mapr/spark/&lt;/code&gt;. Then you can use the spark commands in the /opt/mapr/spark/spark-version/bin directory.&lt;/p&gt;
&lt;p&gt;You use the &lt;code&gt;bin/spark-submit&lt;/code&gt; script to launch your application. This script takes care of setting up the classpath with Spark and its dependencies. Here is the spark-submit format:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;./bin/spark-submit \
  --class &amp;#x3C;main-class&gt; \
  --master &amp;#x3C;master-url&gt; \
  &amp;#x3C;application-jar&gt; \
  [application-arguments]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here is the spark-submit command for our example, passing the input file and output directory as arguments to the main method of the &lt;code&gt;JavaWordCount&lt;/code&gt; class:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;/opt/mapr/spark/spark-1.6.1/bin/spark-submit --class example.wordcount.JavaWordCount --master yarn \
  sparkwordcount-1.0.jar /user/user01/input/alice.txt /user/user01/output
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here is the spark-submit command to run the &lt;code&gt;scala SparkWordCount&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;/opt/mapr/spark/spark-1.6.1/bin/spark-submit --class SparkWordCount --master yarn \
  sparkwordcount-1.0.jar /user/user01/input/alice.txt /user/user01/output
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This concludes the Getting Started with Spark on the MapR Sandbox tutorial. You can download the example Maven project code &lt;a href=&quot;https://github.com/caroljmcdonald/sparkwordcountapp&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;For more information:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;http://spark.apache.org/docs/latest/&quot;&gt;http://spark.apache.org/docs/latest/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://spark.apache.org/examples.html&quot;&gt;https://spark.apache.org/examples.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://spark.apache.org/docs/latest/quick-start.html&quot;&gt;https://spark.apache.org/docs/latest/quick-start.html&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Using Apache Spark DataFrames for Processing of Tabular Data]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may…]]></description><link>https://developer.hpe.com/using-apache-spark-dataframes-for-processing-of-tabular-data/</link><guid isPermaLink="false">https://developer.hpe.com/using-apache-spark-dataframes-for-processing-of-tabular-data/</guid><pubDate>Mon, 13 Dec 2021 10:27:46 GMT</pubDate><content:encoded>
&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/ezmeral-data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/ezmeral-data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This post will help you get started using Apache Spark DataFrames with Scala on the MapR Sandbox (now known as the &lt;a href=&quot;https://docs.datafabric.hpe.com/62/MapRContainerDevelopers/MapRContainerDevelopersOverview.html&quot;&gt;Development Environment for HPE Ezmeral Data Fabric&lt;/a&gt;). The &lt;a target=&apos;_blank&apos;  href=&apos;http://spark.apache.org/docs/latest/sql-programming-guide.html#datasets-and-dataframes&apos;&gt;new Spark DataFrames API&lt;/a&gt; is designed to make big data processing on tabular data easier.&lt;/p&gt;
&lt;h2&gt;What is a Spark DataFrame?&lt;/h2&gt;
&lt;p&gt;A Spark DataFrame is a distributed collection of data organized into named columns that provides operations to filter, group, or compute aggregates, and can be used with Spark SQL. DataFrames can be constructed from structured data files, existing RDDs, tables in Hive, or external databases.&lt;/p&gt;
&lt;p&gt;In this post, you’ll learn how to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Load data into Spark DataFrames&lt;/li&gt;
&lt;li&gt;Explore data with Spark SQL&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This post assumes a basic understanding of Spark concepts. If you have not already read the tutorial on &lt;a href=&quot;https://developer.hpe.com/blog/getting-started-with-spark-on-mapr-sandbox/&quot;&gt;Getting Started with Spark on MapR Sandbox&lt;/a&gt;, it would be good to read that first.&lt;/p&gt;
&lt;h2&gt;Software&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;The sample data sets&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;We will use two example datasets - one from &lt;a target=&apos;_blank&apos;  href=&apos;http://www.modelingonlineauctions.com/datasets&apos;&gt;eBay online auctions&lt;/a&gt; and one from the &lt;a target=&apos;_blank&apos;  href=&apos;https://data.sfgov.org/Public-Safety/SFPD-Incidents-from-1-January-2003/tmnf-yvry&apos;&gt;SFPD Crime Incident Reporting system&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The eBay online auction dataset has the following data fields:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;auctionid&lt;/strong&gt;&lt;/em&gt; - unique identifier of an auction &lt;br /&gt;
&lt;em&gt;&lt;strong&gt;bid&lt;/strong&gt;&lt;/em&gt; - the proxy bid placed by a bidder &lt;br /&gt;
&lt;em&gt;&lt;strong&gt;bidtime&lt;/strong&gt;&lt;/em&gt; - the time (in days) that the bid was placed, from the start of the auction &lt;br /&gt;
&lt;em&gt;&lt;strong&gt;bidder&lt;/strong&gt;&lt;/em&gt; - eBay username of the bidder &lt;br /&gt;
&lt;em&gt;&lt;strong&gt;bidderrate&lt;/strong&gt;&lt;/em&gt; - eBay feedback rating of the bidder &lt;br /&gt;
&lt;em&gt;&lt;strong&gt;openbid&lt;/strong&gt;&lt;/em&gt; - the opening bid set by the seller &lt;br /&gt;
&lt;em&gt;&lt;strong&gt;price&lt;/strong&gt;&lt;/em&gt; - the closing price that the item sold for (equivalent to the second highest bid + an increment) &lt;br /&gt;&lt;/p&gt;
&lt;p&gt;The table below shows the data fields with some sample data:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/blog_sparkdataframes_table1.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Using Spark DataFrames, we will explore the data with questions like:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;How many auctions were held?&lt;/li&gt;
&lt;li&gt;How many bids were made per item?&lt;/li&gt;
&lt;li&gt;What&apos;s the minimum, maximum, and average number of bids per item?&lt;/li&gt;
&lt;li&gt;Show the bids with price &gt; 100&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The table below shows the SFPD data fields with some sample data:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/blog_sparkdataframes_table2.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Using Spark DataFrames, we will explore the SFPD data with questions like:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;What are the top 10 Resolutions?&lt;/li&gt;
&lt;li&gt;How many Categories are there?&lt;/li&gt;
&lt;li&gt;What are the top 10 incident Categories?&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Loading data into Spark DataFrames&lt;/h2&gt;
&lt;p&gt;Log into the MapR Sandbox, as explained in &lt;a href=&quot;https://developer.hpe.com/blog/getting-started-with-spark-on-mapr-sandbox/&quot;&gt;Getting Started with Spark on MapR Sandbox&lt;/a&gt;, using userid user01, password mapr. Copy the sample data files to your sandbox home directory /user/user01 using scp. Start the spark shell with: &lt;br /&gt;
&lt;code&gt;$ spark-shell&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;First, we will import some packages and instantiate a sqlContext, which is the entry point for working with structured data (rows and columns) in Spark and allows the creation of DataFrame objects. &lt;br /&gt;
(In the code boxes, &lt;font color=&quot;green&quot;&gt;comments are in Green&lt;/font&gt; and &lt;font color=&quot;#005CB9&quot;&gt;output is in Blue&lt;/font&gt;)&lt;/p&gt;
&lt;pre&gt;
&lt;font color=&quot;green&quot;&gt;//  SQLContext entry point for working with structured data&lt;/font&gt;
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
&lt;font color=&quot;green&quot;&gt;// this is used to implicitly convert an RDD to a DataFrame.&lt;/font&gt;
import sqlContext.implicits._
&lt;font color=&quot;green&quot;&gt;// Import Spark SQL data types and Row.&lt;/font&gt;
import org.apache.spark.sql._
&lt;/pre&gt;
&lt;p&gt;Below, we load the data from the ebay.csv file into a Resilient Distributed Dataset (RDD). RDDs can have transformations and actions; the first() action returns the first element in the RDD, which is the String &lt;code&gt;“8213034705,95,2.927373,jake7870,0,95,117.5,xbox,3”&lt;/code&gt;&lt;/p&gt;
&lt;pre&gt;
&lt;font color=&quot;green&quot;&gt;// load the data into a  new RDD&lt;/font&gt;
val ebayText = sc.textFile(&quot;ebay.csv&quot;)


&lt;font color=&quot;green&quot;&gt;// Return the first element in this RDD&lt;/font&gt;
ebayText.first()
&lt;font color=&quot;#005CB9&quot;&gt;// res6: String = 8213034705,95,2.927373,jake7870,0,95,117.5,xbox,3&lt;/font&gt;
&lt;/pre&gt;
&lt;p&gt;Below, we use a Scala case class to define the Auction schema corresponding to the ebay.csv file. Then, map() transformations are applied to each element of ebayText to create the ebay RDD of Auction objects.&lt;/p&gt;
&lt;pre&gt;
&lt;font color=&quot;green&quot;&gt;//define the schema using a case class&lt;/font&gt;
case class Auction(auctionid: String, bid: Float, bidtime: Float, bidder: String, bidderrate: Integer, openbid: Float, price: Float, item: String, daystolive: Integer)


&lt;font color=&quot;green&quot;&gt;// create an RDD of Auction objects&lt;/font&gt;
val ebay = ebayText.map(_.split(&quot;,&quot;)).map(p =&gt; Auction(p(0),p(1).toFloat,p(2).toFloat,p(3),p(4).toInt,p(5).toFloat,p(6).toFloat,p(7),p(8).toInt ))
&lt;/pre&gt;
&lt;p&gt;The ebay RDD first() action returns the first element in the RDD, Auction = Auction( 8213034705, 95.0, 2.927373, jake7870, 0, 95.0, 117.5, xbox,3).&lt;/p&gt;
&lt;pre&gt;
&lt;font color=&quot;green&quot;&gt;// Return the first element in this RDD&lt;/font&gt;
ebay.first()
&lt;font color=&quot;#005CB9&quot;&gt;//res7: Auction = Auction(8213034705,95.0,2.927373,jake7870,0,95.0,117.5,xbox,3)&lt;/font&gt;
&lt;font color=&quot;green&quot;&gt;// Return the number of elements in the RDD&lt;/font&gt;
ebay.count()
res8: Long = 10654
&lt;/pre&gt;
&lt;p&gt;A DataFrame is &lt;a target=&apos;_blank&apos;  href=&apos;https://databricks.com/blog/2015/02/17/introducing-dataframes-in-spark-for-large-scale-data-science.html&apos;&gt;a distributed collection of data organized into named columns.&lt;/a&gt; Spark SQL supports automatically converting an RDD containing case classes to a DataFrame with the method toDF():&lt;/p&gt;
&lt;pre&gt;
&lt;font color=&quot;green&quot;&gt;// change ebay RDD of Auction objects to a DataFrame&lt;/font&gt;
val auction = ebay.toDF()
&lt;/pre&gt;
&lt;p&gt;The previous RDD transformations can also be written on one line like this:&lt;/p&gt;
&lt;pre&gt;
val auction = sc.textFile(&quot;ebay.csv&quot;).map(_.split(&quot;,&quot;)).map(p =&gt;
Auction(p(0),p(1).toFloat,p(2).toFloat,p(3),p(4).toInt,p(5).toFloat,p(6).toFloat,p(7),p(8).toInt )).toDF()
&lt;/pre&gt;
&lt;h2&gt;Explore and query the eBay auction data with Spark DataFrames&lt;/h2&gt;
&lt;p&gt;DataFrames provide a domain-specific language for structured data manipulation in Scala, Java, and Python; below are some examples with the auction DataFrame. The DataFrame show() action displays the top 20 rows in a tabular form.&lt;/p&gt;
&lt;pre&gt;
&lt;font color=&quot;green&quot;&gt;// Display the top 20 rows of DataFrame&lt;/font&gt;
auction.show()
&lt;font color=&quot;#005CB9&quot;&gt;// auctionid  bid   bidtime  bidder         bidderrate openbid price item daystolive
// 8213034705 95.0  2.927373 jake7870       0          95.0    117.5 xbox 3
// 8213034705 115.0 2.943484 davidbresler2  1          95.0    117.5 xbox 3 …&lt;/font&gt;
&lt;/pre&gt;
&lt;p&gt;The DataFrame printSchema() operation prints the schema to the console in a tree format:&lt;/p&gt;
&lt;pre&gt;
&lt;font color=&quot;green&quot;&gt;// Return the schema of this DataFrame&lt;/font&gt;
auction.printSchema()
&lt;font color=&quot;#005CB9&quot;&gt;root
 |-- auctionid: string (nullable = true)
 |-- bid: float (nullable = false)
 |-- bidtime: float (nullable = false)
 |-- bidder: string (nullable = true)
 |-- bidderrate: integer (nullable = true)
 |-- openbid: float (nullable = false)
 |-- price: float (nullable = false)
 |-- item: string (nullable = true)
 |-- daystolive: integer (nullable = true)&lt;/font&gt;
&lt;/pre&gt;
&lt;p&gt;Once a DataFrame is instantiated, you can query it. Here are some example queries using the Scala DataFrame API:&lt;/p&gt;
&lt;pre&gt;
&lt;font color=&quot;green&quot;&gt;// How many auctions were held?&lt;/font&gt;
auction.select(&quot;auctionid&quot;).distinct.count
&lt;font color=&quot;#005CB9&quot;&gt;// Long = 627&lt;/font&gt;


&lt;font color=&quot;green&quot;&gt;// How many bids per item?&lt;/font&gt;
auction.groupBy(&quot;auctionid&quot;, &quot;item&quot;).count.show
auctionid  item    count
3016429446 palm    10
8211851222 xbox    28
3014480955 palm    12
8214279576 xbox    4
3014844871 palm    18
3014012355 palm    35
1641457876 cartier 2
. . .
&lt;font color=&quot;green&quot;&gt;// What&apos;s the min number of bids per item? what&apos;s the average? what&apos;s the max?&lt;/font&gt;
auction.groupBy(&quot;item&quot;, &quot;auctionid&quot;).count.agg(min(&quot;count&quot;), avg(&quot;count&quot;),max(&quot;count&quot;)).show


&lt;font color=&quot;#005CB9&quot;&gt;// MIN(count) AVG(count)        MAX(count)
// 1  16.992025518341308 75&lt;/font&gt;


&lt;font color=&quot;green&quot;&gt;// Get the auctions with closing price &gt; 100&lt;/font&gt;
val highprice= auction.filter(&quot;price &gt; 100&quot;)
&lt;font color=&quot;#005CB9&quot;&gt;// highprice: org.apache.spark.sql.DataFrame = [auctionid: string, bid: float, bidtime: float, bidder: // string, bidderrate: int, openbid: float, price: float, item: string, daystolive: int]&lt;/font&gt;


&lt;font color=&quot;green&quot;&gt;// display dataframe in a tabular format&lt;/font&gt;
highprice.show()
&lt;font color=&quot;#005CB9&quot;&gt;// auctionid  bid   bidtime  bidder         bidderrate openbid price item daystolive
// 8213034705 95.0  2.927373 jake7870       0          95.0    117.5 xbox 3        
// 8213034705 115.0 2.943484 davidbresler2  1          95.0    117.5 xbox 3&lt;/font&gt;
&lt;/pre&gt;
&lt;p&gt;You can register a DataFrame as a temporary table using a given name, and then run SQL statements using the sql methods provided by sqlContext. Here are some example queries using &lt;a target=&apos;_blank&apos;  href=&apos;https://spark.apache.org/docs/1.3.0/sql-programming-guide.html&apos;&gt;sqlContext&lt;/a&gt;:&lt;/p&gt;
&lt;pre&gt;
&lt;font color=&quot;green&quot;&gt;// register the DataFrame as a temp table&lt;/font&gt;
auction.registerTempTable(&quot;auction&quot;)
&lt;font color=&quot;green&quot;&gt;// SQL statements can be run
// How many  bids per auction?&lt;/font&gt;
val results =sqlContext.sql(&quot;SELECT auctionid, item,  count(bid) FROM auction GROUP BY auctionid, item&quot;)
&lt;font color=&quot;green&quot;&gt;// display dataframe in a tabular format&lt;/font&gt;
results.show()
&lt;font color=&quot;#005CB9&quot;&gt;// auctionid  item    count
// 3016429446 palm    10
// 8211851222 xbox    28 . . .&lt;/font&gt;


val results =sqlContext.sql(&quot;SELECT auctionid, MAX(price) FROM auction  GROUP BY item,auctionid&quot;)
results.show()
&lt;font color=&quot;#005CB9&quot;&gt;// auctionid  c1
// 3019326300 207.5
// 8213060420 120.0 . . .&lt;/font&gt;
&lt;/pre&gt;
&lt;h2&gt;Loading the SFPD data into Spark dataframes using a csv parsing library&lt;/h2&gt;
&lt;p&gt;Now we will load the SFPD dataset into a Spark dataframe using the &lt;a target=&apos;_blank&apos;  href=&apos;https://github.com/databricks/spark-csv&apos;&gt;spark-csv parsing library&lt;/a&gt; from Databricks. You can use this library at the Spark shell by specifying &lt;em&gt;--packages com.databricks:spark-csv_2.10:1.0.3&lt;/em&gt; when starting the shell as shown below:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;$ spark-shell --packages com.databricks:spark-csv_2.10:1.0.3&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;The load operation will parse the sfpd.csv file and return a dataframe using the first header line of the file for column names.&lt;/p&gt;
&lt;pre&gt;
import sqlContext.implicits._
import org.apache.spark.sql._


&lt;font color=&quot;green&quot;&gt;//  Return the dataset specified by data source as a DataFrame, use the header for column names&lt;/font&gt;
val df = sqlContext.load(&quot;com.databricks.spark.csv&quot;, Map(&quot;path&quot; -&gt; &quot;sfpd.csv&quot;, &quot;header&quot; -&gt; &quot;true&quot;))
&lt;/pre&gt;
&lt;p&gt;The take operation returns the specified number of rows in the DataFrame.&lt;/p&gt;
&lt;pre&gt;
&lt;font color=&quot;green&quot;&gt;// Return the first n rows in the DataFrame&lt;/font&gt;
df.take(1)


&lt;font color=&quot;#005CB9&quot;&gt;// res4: Array[org.apache.spark.sql.Row] = Array([150467944,LARCENY/THEFT,GRAND THEFT FROM LOCKED AUTO,Thursday,05/28/2015,23:59,TENDERLOIN,NONE,TAYLOR ST / OFARRELL ST,-122.411328369311,37.7859963050476,(37.7859963050476, -122.411328369311),15046794406244])&lt;/font&gt;


&lt;font color=&quot;green&quot;&gt;// Print the schema to the console in a tree format&lt;/font&gt;
df.printSchema()
&lt;font color=&quot;#005CB9&quot;&gt;root
 |-- IncidntNum: string (nullable = true)
 |-- Category: string (nullable = true)
 |-- Descript: string (nullable = true)
 |-- DayOfWeek: string (nullable = true)
 |-- Date: string (nullable = true)
 |-- Time: string (nullable = true)
 |-- PdDistrict: string (nullable = true)
 |-- Resolution: string (nullable = true)
 |-- Address: string (nullable = true)
 |-- X: string (nullable = true)
 |-- Y: string (nullable = true)
 |-- Location: string (nullable = true)
 |-- PdId: string (nullable = true)&lt;/font&gt;


&lt;font color=&quot;green&quot;&gt;// display dataframe in a tabular format&lt;/font&gt;
df.show()
&lt;font color=&quot;#005CB9&quot;&gt;IncidntNum Category Descript DayOfWeek Date Time PdDistrict Resolution Address X Y Location PdId
150467944  LARCENY/THEFT GRAND THEFT FROM ... Thursday  05/28/2015 23:59 TENDERLOIN NONE           TAYLOR ST / OFARR... -122.411328369311 37.7859963050476 (37.7859963050476... 15046794406244&lt;/font&gt;
&lt;/pre&gt;
&lt;h2&gt;Here are some example queries using sqlContext:&lt;/h2&gt;
&lt;pre&gt;
&lt;font color=&quot;green&quot;&gt;// how many categories are there?&lt;/font&gt;
df.select(&quot;Category&quot;).distinct.count
&lt;font color=&quot;#005CB9&quot;&gt;// res5: Long = 39&lt;/font&gt;
&lt;font color=&quot;green&quot;&gt;// register as a temp table in order to use SQL&lt;/font&gt;
df.registerTempTable(&quot;sfpd&quot;)


&lt;font color=&quot;green&quot;&gt;// How many categories are there&lt;/font&gt;
sqlContext.sql(&quot;SELECT distinct Category FROM sfpd&quot;).collect().foreach(println)


&lt;font color=&quot;#005CB9&quot;&gt;// [ASSAULT]
// [MISSING PERSON]
// [TREA] . . .&lt;/font&gt;
&lt;/pre&gt;
&lt;pre&gt;
&lt;font color=&quot;green&quot;&gt;// What are the top 10 Resolutions ?&lt;/font&gt;
sqlContext.sql(&quot;SELECT Resolution , count(Resolution) as rescount FROM sfpd group by Resolution order by rescount desc limit 10&quot;).collect().foreach(println)
&lt;font color=&quot;#005CB9&quot;&gt;// [NONE,1063775]
// [ARREST, BOOKED,414219]
// [ARREST, CITED,154033] . . .&lt;/font&gt;
&lt;/pre&gt;
&lt;pre&gt;
&lt;font color=&quot;green&quot;&gt;// What are the top 10 most incident Categories?&lt;/font&gt;
val t =  sqlContext.sql(&quot;SELECT Category , count(Category) as catcount FROM sfpd group by Category order by catcount desc limit 10&quot;)


t.show()
&lt;font color=&quot;#005CB9&quot;&gt;// Category       catcount
// LARCENY/THEFT  353793
// OTHER OFFENSES 253627
// NON-CRIMINAL   186272 . . .&lt;/font&gt;


&lt;font color=&quot;green&quot;&gt;// The results of SQL queries are DataFrames and support RDD operations.
// The columns of a row in the result can be accessed by ordinal&lt;/font&gt;
t.map(t =&gt; &quot;column 0: &quot; + t(0)).collect().foreach(println)
&lt;font color=&quot;#005CB9&quot;&gt;// column 0: LARCENY/THEFT
// column 0: OTHER OFFENSES
// column 0: NON-CRIMINAL
// column 0: ASSAULT …&lt;/font&gt;
&lt;/pre&gt;
&lt;h2&gt;The physical plan for DataFrames&lt;/h2&gt;
&lt;p&gt;The &lt;a target=&apos;_blank&apos;  href=&apos;https://databricks.com/blog/2015/04/13/deep-dive-into-spark-sqls-catalyst-optimizer.html&apos;&gt;Catalyst query optimizer creates the physical Execution Plan&lt;/a&gt; for DataFrames as shown in the diagram below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/blog_sparkdataframes_image3.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Print the physical plan to the console&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;DataFrames are designed to take the SQL queries constructed against them and optimize the execution as sequences of Spark Jobs as required. You can print the physical plan for a DataFrame using the explain operation as shown below:&lt;/p&gt;
&lt;pre&gt;
&lt;font color=&quot;green&quot;&gt;//  Prints the physical plan to the console for debugging purpose&lt;/font&gt;
auction.select(&quot;auctionid&quot;).distinct.explain()


&lt;font color=&quot;#005CB9&quot;&gt;// == Physical Plan ==
// Distinct false
// Exchange (HashPartitioning [auctionid#0], 200)
//  Distinct true
//   Project [auctionid#0]
//   PhysicalRDD   //[auctionid#0,bid#1,bidtime#2,bidder#3,bidderrate#4,openbid#5,price#6,item#7,daystolive#8], MapPartitionsRDD[11] at mapPartitions at ExistingRDD.scala:37&lt;/font&gt;
&lt;/pre&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;In this blog post, you’ve learned how to load data into Spark DataFrames, and explore data with Spark SQL.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Want to learn more?&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://spark.apache.org/docs/1.3.0/sql-programming-guide.html#dataframes&quot;&gt;Spark SQL and DataFrame Guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.oreilly.com/library/view/learning-spark-2nd/9781492050032/&quot;&gt;Learning Spark - O&apos;Reilly Book&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Mapping Kubernetes Services to HPE Ezmeral Runtime Enterprise Gateway]]></title><description><![CDATA[Editor’s Note – NAME CHANGE: HPE Ezmeral Container Platform is now HPE Ezmeral Runtime Enterprise. For more information on why the name was…]]></description><link>https://developer.hpe.com/mapping-kubernetes-services-to-hpe-ezmeral-container-platform-gateway/</link><guid isPermaLink="false">https://developer.hpe.com/mapping-kubernetes-services-to-hpe-ezmeral-container-platform-gateway/</guid><pubDate>Mon, 06 Dec 2021 10:44:42 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note – NAME CHANGE: HPE Ezmeral Container Platform is now HPE Ezmeral Runtime Enterprise.&lt;/strong&gt; For more information on why the name was changed, please click &lt;a href=&quot;https://community.hpe.com/t5/HPE-Ezmeral-Uncut/HPE-Ezmeral-Container-Platform-is-now-HPE-Ezmeral-Runtime/ba-p/7151720#.YW7nOxrMKM8&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Imagine you have several Kubernetes services, each exposed on a different IP address. Is there a tool that unifies these services under a single domain name? A gateway can answer that question. There are several benefits to using a gateway. First, the gateway can act as a load balancer for the different services. Second, only the gateway host IP address is exposed to the public, while everything else remains behind the firewall. Follow this blog post to learn how to map Kubernetes services to the HPE Ezmeral Runtime Enterprise Gateway.&lt;/p&gt;
&lt;p&gt;HPE Ezmeral Runtime Enterprise (formerly known as HPE Ezmeral Container Platform) comes with one or more gateway hosts. A gateway host acts as a proxy server that carries client requests like HPE Ezmeral Runtime Enterprise UI, REST API calls, Kubernetes API, and containerized application service endpoints. The Gateway host maps both the IP address of the Controller host and the private IP endpoints of services running on the Kubernetes nodes inside the Kubernetes clusters to publicly-accessible IP addresses/ports.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: To learn more about the Gateway Host, check out the online documentation &lt;a href=&quot;https://docs.containerplatform.hpe.com/53/reference/universal-concepts/Gateway_Hosts.html#v52_gateway-hosts__logical&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;To set up this architecture properly, you&apos;ll want to first set up a new Kubernetes (K8s) tenant, as shown in the image below. Just check the box next to &quot;Map Services To Gateway&quot;:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://user-images.githubusercontent.com/72959956/138654527-77f3bf2c-f001-4fc7-88f3-d17436368dc3.png&quot; alt=&quot;image&quot;&gt;&lt;/p&gt;
&lt;h1&gt;Create a Hello World Kubernetes Application&lt;/h1&gt;
&lt;p&gt;Let&apos;s create a hello-world application for Kubernetes. This is a simple webpage that shows which pod you are using. To begin, create a deployment called &lt;code&gt;k8s-helloworld&lt;/code&gt; with the hello-world image. After that, run &lt;code&gt;get deployment&lt;/code&gt; and &lt;code&gt;describe deployment&lt;/code&gt; to view the details of the deployment. If you see &lt;code&gt;1/1&lt;/code&gt; under the &lt;code&gt;READY&lt;/code&gt; column, you are good to go.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Create deployment of the application k8s-helloworld using the specific image
kubectl create deployment k8s-helloworld --image=gcr.io/google-samples/kubernetes-bootcamp:v1

# Get the information of the deployment with label k8s-helloworld
kubectl get deployment -l app=k8s-helloworld

# Describe the detail information of the deployment named as k8s-helloworld
kubectl describe deployment k8s-helloworld
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://user-images.githubusercontent.com/72959956/138656214-73c9418f-e291-4678-b3a2-c318a318d325.png&quot; alt=&quot;image&quot;&gt;&lt;/p&gt;
&lt;p&gt;The deployment will spin up some pods. To view which pods are running, you can run the &lt;code&gt;get pods&lt;/code&gt; command. It will return a list of pods. Copy the pod name starting with k8s-helloworld and run an &lt;code&gt;exec&lt;/code&gt; command to check if the website is up or not. If you see the terminal reply with &lt;code&gt;Hello Kubernetes bootcamp!&lt;/code&gt;, you have successfully deployed a website on Kubernetes.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Get the information of pods labeled with k8s-helloworld
kubectl get pods -l app=k8s-helloworld # copy your pod id
kubectl describe pods k8s-helloworld-5f84bb5d68-l9vch 

# exec curl command inside the pod

kubectl exec k8s-helloworld-5f84bb5d68-l9vch -- curl -s http://localhost:8080
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://user-images.githubusercontent.com/72959956/138670950-75f96e40-3bc6-4ef6-aff6-578f45b90c04.png&quot; alt=&quot;image&quot;&gt;&lt;/p&gt;
&lt;h1&gt;Using a Service to Expose an Application in a Cluster&lt;/h1&gt;
&lt;p&gt;So, now you have a website running on Kubernetes, and you want to share it with your friends. It turns out that your friends cannot open the website. The reason is that, in order to expose a deployment to the public, a Service object is needed to tell Kubernetes which service port is mapped to the deployment.&lt;/p&gt;
&lt;p&gt;To expose your deployment, run the &lt;code&gt;expose deployment&lt;/code&gt; command with the name of the deployment and the port number the pod uses, which creates a Service object. In this case, it will be port 8080. Run &lt;code&gt;get services&lt;/code&gt; to view the service and its mapped service port. Now you can access the website externally using a Kubernetes node&apos;s IP address together with the service port.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Expose the deployment with port number 8080
kubectl expose deployment/k8s-helloworld --type=&quot;NodePort&quot; --port 8080
# Get the information on Services labeled with k8s-helloworld
kubectl get svc -l app=k8s-helloworld
# Check if the application can be accessed from Kubernetes Cluster

curl ez-vm01.hpeilab.com:31856
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://user-images.githubusercontent.com/72959956/138665803-dea57cb9-1209-4b55-810a-5d564ea2b7e5.png&quot; alt=&quot;image&quot;&gt;&lt;/p&gt;
&lt;p&gt;At this point, you are halfway done. Now, move to the Kubernetes tenant management GUI (shown in the image below). In the &lt;code&gt;Service endpoints&lt;/code&gt; tab, you can see that the access point is not yet mapped to the HPE Ezmeral Runtime Enterprise Gateway.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://user-images.githubusercontent.com/72959956/138668470-ce8b6846-5fb4-4494-9a90-24aa2be73456.png&quot; alt=&quot;image&quot;&gt;&lt;/p&gt;
&lt;h1&gt;Making the application available to users&lt;/h1&gt;
&lt;p&gt;One more step is needed to expose your containerized application to users outside your HPE Ezmeral Runtime Enterprise infrastructure. You can expose the Kubernetes NodePort service of your application via the HPE Ezmeral Runtime Enterprise Gateway by setting up a port mapping. You have to apply the label &lt;code&gt;hpecp.hpe.com/hpecp-internal-gateway: &quot;true&quot;&lt;/code&gt; to your NodePort Service object. You can do that by adding one line to your YAML files or by running the &lt;code&gt;label&lt;/code&gt; command. The label generates a service port on the gateway host.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note:&lt;/p&gt;
&lt;p&gt;This happens automatically within any namespace associated with an HPE Ezmeral Runtime Enterprise tenant, provided that tenant has &quot;Map Services To Gateway&quot; enabled. However, you can control this behavior by labelling the NodePort service, either to force or to prevent the port mapping on the gateway host.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Label the service named k8s-helloworld with hpecp.hpe.com/hpecp-internal-gateway=true
kubectl label service k8s-helloworld hpecp.hpe.com/hpecp-internal-gateway=true
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://user-images.githubusercontent.com/72959956/138669273-fa2969b3-61f3-4bae-a2f6-66425daf0a7b.png&quot; alt=&quot;image&quot;&gt;&lt;/p&gt;
&lt;p&gt;Go back to the Kubernetes tenant management GUI. Now, in the &lt;code&gt;Service endpoints&lt;/code&gt; tab, you can see the access point is mapped to HPE Ezmeral Runtime Enterprise Gateway.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://user-images.githubusercontent.com/72959956/138668836-0313c1c5-e720-4575-a759-842c85d5502c.png&quot; alt=&quot;image&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can also find the port by running the command &lt;code&gt;kubectl describe services&lt;/code&gt;. The access point will be shown under the key annotations.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://user-images.githubusercontent.com/72959956/138810536-f1255048-2d91-44eb-ba33-ccc4bc52ca1e.png&quot; alt=&quot;image&quot;&gt;&lt;/p&gt;
&lt;p&gt;Now the service is mapped to the gateway, and you can access it through the gateway. A hello world message will appear when you &lt;code&gt;curl&lt;/code&gt; the URL.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# check your port number on the GUI
curl http://ez-gateway.hpeilab.com:10022/
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;Deleting the services and deployment&lt;/h1&gt;
&lt;p&gt;After playing around with Kubernetes, if you would like to clean up the application, remember to delete both the service and the deployment.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# delete everything
kubectl delete services/k8s-helloworld
kubectl delete deployment/k8s-helloworld
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;Take away&lt;/h1&gt;
&lt;p&gt;Mapping Kubernetes services to HPE Ezmeral Runtime Enterprise Gateway provides a single point of secure access to the platform, which also allows for load-balancing. As you can see from what we just went through, it really isn&apos;t that hard. Feel free to navigate to HPE DEV Hack Shack to register for a &lt;a href=&quot;/hackshack/workshop/24&quot;&gt;Workshop-on-Demand&lt;/a&gt; for Kubernetes 101. Here, you can try it for yourself. After playing around with this, if you would like to clean up the application, remember to delete both the services and the deployment. Stay tuned to the &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE DEV blog&lt;/a&gt; to learn more on other &lt;a href=&quot;https://developer.hpe.com/platform/hpe-ezmeral/home/&quot;&gt;HPE Ezmeral Runtime Enterprise&lt;/a&gt; related topics.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Clouds, Automation & Data]]></title><link>https://developer.hpe.com/2021-December-2/</link><guid isPermaLink="false">https://developer.hpe.com/2021-December-2/</guid><pubDate>Fri, 03 Dec 2021 05:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Implementing OAuth 2 Flow for Data Services Cloud Console's Client Application ]]></title><description><![CDATA[HPE GreenLake API Security In my blog post, Using HPE GreenLake Console's API Gateway for Data Services Cloud Console, I explained that the…]]></description><link>https://developer.hpe.com/oauth2-for-hpe-greenlake-data-services-cloud-console/</link><guid isPermaLink="false">https://developer.hpe.com/oauth2-for-hpe-greenlake-data-services-cloud-console/</guid><pubDate>Tue, 30 Nov 2021 16:07:28 GMT</pubDate><content:encoded>&lt;h2&gt;HPE GreenLake API Security&lt;/h2&gt;
&lt;p&gt;In my &lt;a href=&quot;https://developer.hpe.com/blog/api-console-for-data-services-cloud-console/&quot;&gt;blog&lt;/a&gt; post, &lt;strong&gt;Using HPE GreenLake Console&apos;s API Gateway for Data Services Cloud Console&lt;/strong&gt;, I explained that the HPE GreenLake console supports the Client Credential authentication grant type (This concept is known as OAuth 2 client credential authorization workflow). This particular grant type allows the client application (scripts, applications, programs that leverage the console features via the API) to authenticate using separate credentials (client id and client secret) that are authorized inside the API Gateway menu using the HPE GreenLake user account (resource owner).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/dscc-public-api-introduction-updated_111122.jpg&quot; alt=&quot;Rehash the flow of the GreenLake access token acquisition&quot; title=&quot;Client Credentials process&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Some of the benefits of the Data Services Cloud Console Client Credential OAuth authentication grant:&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The authorization for the client does not involve the transmission of the HPE GreenLake user credentials.&lt;/li&gt;
&lt;li&gt;Changing the client secret or deleting the client credentials will not impact HPE GreenLake user credentials.&lt;/li&gt;
&lt;li&gt;According to OAuth 2.0 Authorization Framework, &lt;a href=&quot;https://datatracker.ietf.org/doc/html/rfc6749#section-4.4&quot;&gt;the Client Credentials Grant type&lt;/a&gt; allows the client application to authenticate itself independent of the user (no user intervention), which makes this grant type appropriate for machine-to-machine (M2M) applications that can safely protect the registered client credentials (confidential clients), such as scripts, daemon, or services contained in a host. Please refer to the &lt;a href=&quot;https://tools.ietf.org/html/rfc6749#section-2.1&quot;&gt;OAuth 2.0 Authorization Framework&lt;/a&gt; for more information.&lt;/li&gt;
&lt;li&gt;Each client application uses a different set of client ids and client secrets to ensure the secrecy and the independence of each client application.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;How do I implement Data Services Cloud Console API in my client application or my script?&lt;/h3&gt;
&lt;p&gt;This blog post will go through an example of setting up the client application using the client id, client secret, console API definition in yaml, and the Postman tool. The flow to get the client id and client secret from this menu is detailed in my &lt;a href=&quot;https://developer.hpe.com/blog/api-console-for-data-services-cloud-console/&quot;&gt;blog&lt;/a&gt; entitled &lt;strong&gt;Using HPE GreenLake&apos;s API Gateway to Data Services Cloud Console.&lt;/strong&gt; Note that the client id and client secret are shown only once during the API credential creation; hence they need to be securely recorded.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/credentials-created-client.png&quot; alt=&quot;image of the client id and client secret&quot; title=&quot;Client Credentials&quot;&gt;&lt;/p&gt;
&lt;p&gt;The user who generates this client id and client secret pair must store them and transfer them securely to the designated client application. Using the client id and the client secret, the client application can generate the access token in order to issue the REST API request to resources in the console. The client application access to the permitted console&apos;s resources depends on the role-based access control (RBAC) of the user. If the user does not have the correct authority for a resource, such as the volumes of an array, then the REST API request will receive an &quot;unauthorized request&quot; response.&lt;/p&gt;
&lt;p&gt;For the client application to perform a REST API request, the application must obtain the access token from HPE GreenLake, as described in the diagram below. The special endpoint &lt;code&gt;https://sso.common.cloud.hpe.com/as/token.oauth2&lt;/code&gt; provides the access token in the response to the authorization request from any client application.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/client-credential-access-token.png&quot; alt=&quot;Diagram for client credential &quot; title=&quot;Client Credential&quot;&gt;&lt;/p&gt;
&lt;p&gt;The method required to obtain the access token is described in the following HTTPS request snippet.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;HTTP method: 
POST

URL: 
https://sso.common.cloud.hpe.com/as/token.oauth2 

Headers:
Content-Type: application/x-www-form-urlencoded

Body:
grant_type=client_credentials
&amp;#x26;client_id=xxxxxxxxxx
&amp;#x26;client_secret=xxxxxxxxxx
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The following code snippet shows an example using cURL to obtain the access token. The $YOUR_CLIENT_ID and $YOUR_CLIENT_SECRET variables must be substituted with the client id and client secret from the menu above.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;curl -X POST https://sso.common.cloud.hpe.com/as/token.oauth2 \
  -H &quot;Content-Type: application/x-www-form-urlencoded&quot; \
  -d &quot;grant_type=client_credentials&amp;#x26;client_id=$YOUR_CLIENT_ID&amp;#x26;client_secret=$YOUR_CLIENT_SECRET&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The next code snippet shows an example using Python to obtain the access token. As in the previous code snippet, YOUR_CLIENT_ID and YOUR_CLIENT_SECRET variables must be substituted accordingly.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from oauthlib.oauth2 import BackendApplicationClient       
from requests.auth import HTTPBasicAuth       
from requests_oauthlib import OAuth2Session       

client = BackendApplicationClient(YOUR_CLIENT_ID)       
     
oauth = OAuth2Session(client=client)       
auth = HTTPBasicAuth(YOUR_CLIENT_ID, YOUR_CLIENT_SECRET)       
      
token = oauth.fetch_token(token_url=&apos;https://sso.common.cloud.hpe.com/as/token.oauth2&apos;, auth=auth)       
print(token[&quot;access_token&quot;])
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Another code snippet below shows an example using the PowerShell to obtain the access token. As in the previous code snippet, the client id and the client secret variables must be substituted accordingly.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;$AuthUri = &quot;https://sso.common.cloud.hpe.com/as/token.oauth2&quot;
 [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
 $AuthHeaders = @{ &apos;Content-Type&apos; = &apos;application/x-www-form-urlencoded&apos; }
 $AuthBody    = [ordered]@{ &apos;grant_type&apos;    = &apos;client_credentials&apos;;
                            &apos;client_id&apos;     = {Insert Client ID Here};
                            &apos;client_secret&apos; = {Insert Client Secret Here} }
 (Invoke-RestMethod -Uri $AuthUri -Method Post -Headers $AuthHeaders -Body $AuthBody).access_token
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The access token is in JWT format: self-contained, securely encoded, and signed using RS256. It is designed to enable secure transmission between the client application and the REST API server, and it has a limited lifetime (2 hours).&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-http&quot;&gt;eyJhbGciOiJSUzI1NiIsImtpZCI6IjFvVEFmay1UOTZ1ZDd5cDBZTGlYM1ROSWdDWSIsInBpLmF0bSI6ImRlejAifQ.eyJjbGllbnRfaWQiOiIwMGNmZmY3MC04NmFiLTRmNjYtODI0NS0xZWIwNTQ2MzljMzgiLCJpc3MiOiJodHRwczovL3Nzby5jb21tb24uY2xvdWQuaHBlLmNvbSIsImF1ZCI6ImV4dGVybmFsX2FwaSIsInN1YiI6InJvbmFsZC5kaGFybWFAaHBlLmNvbSIsInVzZXJfY3R4IjoiZThhNGRhMmVlZmMzMTFlYmEwMmNiNjAzNDIyYmMwYTAiLCJhdXRoX3NvdXJjZSI6ImNjc190b2tlbl9tYW5hZ2VtZW50IiwicGxhdGZvcm1fY3VzdG9tZXJfaWQiOiIyMzRkNzZjNmU5ZDAxMWViYjczMDgyYjIxMmFkNmZlYSIsImlhdCI6MTYzNDc3OTIwNiwiYXBwbGljYXRpb25faW5zdGFuY2VfaWQiOiIzYzE4YmQwMy04MzA2LTRjN2MtOTQyZS1jNzA0YTRiODc0NGMiLCJleHAiOjE2MzQ3ODY0MDZ9.sz7GHvCdO_NjPgVt5rz7JHRSegZWD0pimNqiw7s_SC9vB2XsQnSEP71Kh1y3SqQxkKF8AgbJ02iEZYsk-GO-JmufGfeIUbl2idrFlfXPiKsKftn35dHO-uHW8s4KwL7mUF_HiPxUPIsixQ1zS_88-qdUGzAWDjcR0JO2gKnkaWeQ_AUGzdDw09ZSYZG3sxIoqU_HNjLF1c8hJmVV9Q6IN1ItKAspECc_UYTnjUBrZz5mpupDxuLIMJytTFUFwCriphi9cXQCTyQ3TXW_EALtRq_KdLEe311WFMX9mAL87zXP2JNc8bf8CTiiAty5eCjM2wxrPK9-ep0i5J5v6kJW_Q
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The information inside the JWT includes the client id, the source of authentication, and other details such as the time of expiration (in Unix epoch time).&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &quot;client_id&quot;: &quot;00cfff70-86ab-4f66-8245-1eb054639c38&quot;,
  &quot;iss&quot;: &quot;https://sso.common.cloud.hpe.com&quot;,
  &quot;aud&quot;: &quot;external_api&quot;,
  &quot;sub&quot;: &quot;xxxxxxxxxxxxxx@hpe.com&quot;,
  &quot;user_ctx&quot;: &quot;e8a4da2eefc311eba02cb603422bc0a0&quot;,
  &quot;auth_source&quot;: &quot;ccs_token_management&quot;,
  &quot;platform_customer_id&quot;: &quot;234d76c6e9d011ebb73082b212ad6fea&quot;,
  &quot;iat&quot;: 1634779206,
  &quot;application_instance_id&quot;: &quot;3c18bd03-8306-4c7c-942e-c704a4b8744c&quot;,
  &quot;exp&quot;: 1634786406
}
&lt;/code&gt;&lt;/pre&gt;
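&lt;p&gt;If you want to inspect these claims from a script, the JWT payload can be decoded without verifying the signature. The minimal Python sketch below does exactly that; it is for inspection only and does not validate the token, and the access_token variable is assumed to hold the string returned by the token endpoint.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Sketch: decode the JWT payload to inspect its claims (no signature verification).
import base64
import json

def jwt_claims(access_token):
    # A JWT is three base64url-encoded parts separated by dots: header.payload.signature
    payload = access_token.split(&apos;.&apos;)[1]
    padded = payload + &apos;=&apos; * (-len(payload) % 4)   # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(padded))

# Example usage:
# claims = jwt_claims(access_token)
# print(claims[&apos;client_id&apos;], claims[&apos;exp&apos;])
&lt;/code&gt;&lt;/pre&gt;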
&lt;h4&gt;I don&apos;t know any programming language. How can I explore this console&apos;s REST API?&lt;/h4&gt;
&lt;p&gt;Postman is a well-known tool for exploring a REST API. It provides enough flexibility to import the API definition, automate the access token retrieval, and experiment with the parameters, all with far less typing and without requiring knowledge of any programming language. To experiment with Postman, I recommend you download the application-based (rather than web-based) version. This is the &lt;strong&gt;&lt;a href=&quot;https://www.postman.com/downloads/&quot;&gt;download link&lt;/a&gt;&lt;/strong&gt; for the Postman app, which is available in either a Microsoft Windows or Apple macOS version. Install the application on a workstation that has internet access via a web browser (HTTPS) and is capable of connecting to &lt;em&gt;&lt;strong&gt;cloud.hpe.com&lt;/strong&gt;&lt;/em&gt;, such as &lt;a href=&quot;https://common.cloud.hpe.com&quot;&gt;https://common.cloud.hpe.com&lt;/a&gt; or &lt;a href=&quot;https://us-west.data.cloud.hpe.com&quot;&gt;https://us-west.data.cloud.hpe.com&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Start by downloading the storage-api.yaml OpenAPI 3.0 definition to the workstation. Anyone can go to the &lt;a href=&quot;https://console-us1.data.cloud.hpe.com/doc/api/v1/&quot;&gt;DSCC API documentation website&lt;/a&gt; and click on the download button for either the YAML or JSON definition file.&lt;/p&gt;
&lt;p&gt;Postman lets you create a workspace in which you can experiment with the DSCC API by importing its OpenAPI definition (storage-api.yaml) into the API library. Start the process by selecting the Workspaces menu, then click on the &lt;strong&gt;Create Workspace&lt;/strong&gt; button and type in the desired workspace name. In this example, it&apos;s called HPE DSCC API.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/postman-create-workspace.png&quot; alt=&quot;Create Workspace&quot; title=&quot;Create the workspace in the Postman&quot;&gt;&lt;/p&gt;
&lt;p&gt;Inside this new workspace, you will need to create an environment variable called {baseUrl} that represents the base URL of the console&apos;s API path for the specified region. This is the current list of URLs for the regions where the console can be deployed, as of November 2021:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Data Services Cloud Console Region&lt;/th&gt;
&lt;th&gt;base-URL&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;EU Central (Europe)&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://eu1.data.cloud.hpe.com&quot;&gt;https://eu1.data.cloud.hpe.com&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AP Northeast (Asia Pacific)&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://jp1.data.cloud.hpe.com&quot;&gt;https://jp1.data.cloud.hpe.com&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;US West (United States)&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://us1.data.cloud.hpe.com&quot;&gt;https://us1.data.cloud.hpe.com&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;In this example, the baseUrl points to a testing instance of the console (&lt;a href=&quot;https://scint-app.qa.cds.hpe.com&quot;&gt;https://scint-app.qa.cds.hpe.com&lt;/a&gt;). For your exercise, you must replace this value with the baseUrl that matches the region where your console is deployed, based on the table above. For any activity that issues the console&apos;s API requests, set the environment context to &quot;DSCC testing&quot; under the &quot;HPE DSCC API&quot; workspace, and ensure that this environment contains the current value of the {baseUrl} variable.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/postman-create-environment-variable.png&quot; alt=&quot;Set Environment&quot; title=&quot;Set baseUrl under this environment under the workspace&quot;&gt;&lt;/p&gt;
&lt;p&gt;Next, import the Data Services Cloud Console API definition into this workspace. Note that you will be importing storage-api.yaml rather than the JSON file; Postman recognizes the console API in either format. To save import time, you can uncheck the &lt;strong&gt;Create Documentation&lt;/strong&gt; option. If need be, the documentation can be generated after the import.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/postman-import-api.png&quot; alt=&quot;Select upload files to import API&quot; title=&quot;Upload API definition&quot;&gt;&lt;/p&gt;
&lt;p&gt;Select the Data Services Cloud Console API definition in YAML format that was downloaded from the &lt;a href=&quot;https://console-us1.data.cloud.hpe.com/doc/api/v1/&quot;&gt;HPE Data Services Cloud Console API documentation&lt;/a&gt; by clicking on the &lt;strong&gt;Download YAML&lt;/strong&gt; button as shown below.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/download-dscc-api-definition.png&quot; alt=&quot;Download the API definition&quot; title=&quot;Save the API definition&quot;&gt;&lt;/p&gt;
&lt;p&gt;Once the import completes, Postman recognizes that it&apos;s an OpenAPI 3.0 definition and builds the API library automatically.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/postman-import-load.png&quot; alt=&quot;Confirmation of the API Definition Import&quot; title=&quot;OpenAPI 3.0 Confirmation&quot;&gt;&lt;/p&gt;
&lt;p&gt;After the console API definition file is uploaded, Postman shows the API definition tree under the Postman API menu, as shown in the picture below. Note that Postman also validates that the console API definition doesn&apos;t contain any issues, as indicated by the message at the very bottom of the picture.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/postman-api-loaded.png&quot; alt=&quot;DSCC API loaded&quot; title=&quot;DSCC API definition loaded&quot;&gt;&lt;/p&gt;
&lt;p&gt;After the Data Services Cloud Console API definition is loaded, you can use Postman&apos;s built-in automation to obtain the access token. To start the OAuth 2.0 automation, select the &lt;strong&gt;Collections&lt;/strong&gt; menu and expand the API listing under the tree. At the top of the tree, initialize the authorization with the correct parameters, such as the client ID, client secret, Data Services Cloud Console OAuth 2.0 endpoint, and other required parameters. With this setup, any API request that inherits the authorization from the top of the tree can populate its REST API request header with the access token as the token bearer. Below is the configuration, populated with the required parameters under the Authorization menu (a scripted equivalent of this token exchange is sketched after the list).&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Type&lt;/strong&gt; = OAuth 2.0&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Add auth data to&lt;/strong&gt; = Request Headers&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Token Name&lt;/strong&gt; = &lt;Strings&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Grant Type&lt;/strong&gt; = Client Credentials&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Access Token URL&lt;/strong&gt; = &lt;a href=&quot;https://sso.common.cloud.hpe.com/as/token.oauth2&quot;&gt;https://sso.common.cloud.hpe.com/as/token.oauth2&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Client ID&lt;/strong&gt; = obtained from the client credential creation in API Gateway&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Client Secret&lt;/strong&gt; = obtained from the client credential creation in API Gateway (The pair to the Client ID)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Client Authentication&lt;/strong&gt; = Send client credentials in body&lt;/li&gt;
&lt;/ol&gt;
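&lt;p&gt;If you later want to script the same exchange instead of clicking through Postman, the client-credentials request looks roughly like this minimal Python sketch. It assumes the &lt;code&gt;requests&lt;/code&gt; library is installed, and the client ID and client secret placeholders must be replaced with the values from the API Gateway.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import requests

TOKEN_URL = &quot;https://sso.common.cloud.hpe.com/as/token.oauth2&quot;

# Placeholders: use the values generated in the API Gateway menu.
client_id = &quot;your-client-id&quot;
client_secret = &quot;your-client-secret&quot;

# Client-credentials grant with the credentials sent in the request body,
# mirroring the Postman settings listed above.
response = requests.post(
    TOKEN_URL,
    data={
        &quot;grant_type&quot;: &quot;client_credentials&quot;,
        &quot;client_id&quot;: client_id,
        &quot;client_secret&quot;: client_secret,
    },
)
response.raise_for_status()
access_token = response.json()[&quot;access_token&quot;]
print(access_token[:40], &quot;...&quot;)
&lt;/code&gt;&lt;/pre&gt;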
&lt;p&gt;Click on the &lt;strong&gt;Get New Access Token&lt;/strong&gt; button to obtain a valid access token.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/postman-setup-access-token-at-top-folder.png&quot; alt=&quot;Setting Up Authorization&quot; title=&quot;Automation for OAuth 2.0&quot;&gt;&lt;/p&gt;
&lt;p&gt;After the new access token is obtained, a menu appears that displays the content of the access token. Click the &lt;strong&gt;Use Token&lt;/strong&gt; button to select this new access token. If there is an existing access token, it will be invalidated and its name will appear with a strike-through, as shown in the image below.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/postman-use-access-token.png&quot; alt=&quot;Obtain New Access Token&quot; title=&quot;Valid Access Token&quot;&gt;&lt;/p&gt;
&lt;p&gt;Once the valid access token is selected, it can be made available to all of the API requests under the same tree. To do this, click on the &lt;strong&gt;Sync Access Token&lt;/strong&gt; button until it&apos;s synced (the icon color changes to green). To enable this sync, select &lt;strong&gt;Delete Expired Tokens&lt;/strong&gt; and then &lt;strong&gt;Available Tokens&lt;/strong&gt;. Lastly, ensure that the header prefix is set to &lt;strong&gt;Bearer&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/postman-setup-sync-token-at-top-folder.png&quot; alt=&quot;Access Token is sync-ed&quot; title=&quot;Sync access token&quot;&gt;&lt;/p&gt;
&lt;p&gt;After the access token is synced, you can issue any REST API request by selecting it in the tree. For this example, you are going to issue the request that shows all of the storage systems connected to the Data Services Cloud Console (&lt;strong&gt;Get all Storage systems&lt;/strong&gt;). Select &lt;strong&gt;Headers (7)&lt;/strong&gt; to display the parameters of the REST API header and note that the Authorization parameter contains the valid access token.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Note that the environment selection (the menu at top right) must be set to the above mentioned environment (console testing for the current exercise) using the &lt;strong&gt;V&lt;/strong&gt; button.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/postman-set-correct-environment.png&quot; alt=&quot;Use the correct environment&quot; title=&quot;Correct Environment Variable&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Select the &lt;strong&gt;Authorization&lt;/strong&gt; tab and set the &lt;strong&gt;Type&lt;/strong&gt; to &lt;strong&gt;inherit auth from the parent&apos;s&lt;/strong&gt; authorization to allow this REST API request to use the access token acquired at the top of the console API tree. Note that this inheritance requires the sync of the valid access token obtained at the top of the tree.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/postman-set-inherit-auth.png&quot; alt=&quot;Inherit Authorization&quot; title=&quot;Parent&amp;#x27;s Authorization&quot;&gt;&lt;/p&gt;
&lt;p&gt;For the first REST API request in this example, you will issue the &lt;strong&gt;Get all storage systems&lt;/strong&gt; API request. This request can return a huge amount of data, depending on the arrays registered in this instance of the console. To keep the response concise, check the following parameters and uncheck all others:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;sort&lt;/strong&gt; = id asc, name desc&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;select&lt;/strong&gt; = id&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Click on the &lt;strong&gt;Send&lt;/strong&gt; button to issue the Get all storage systems API request. Within seconds, the body section at the bottom of this menu is filled with the list of the available arrays. It&apos;s very easy to use and simple, with no programming required and minimal typing.&lt;/p&gt;
&lt;p&gt;Isn&apos;t it awesome?&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/postman-get-storage-system-sort-select-id-only.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
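&lt;p&gt;For comparison, here is a minimal Python sketch of the same call issued outside of Postman. The &lt;code&gt;/api/v1/storage-systems&lt;/code&gt; path, the base URL, and the exact query-parameter names are assumptions taken from the examples above; verify them against the downloaded storage-api.yaml definition.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import requests

# Assumptions: replace with the base URL for your region and a valid access token.
base_url = &quot;https://us1.data.cloud.hpe.com&quot;
access_token = &quot;paste-your-access-token-here&quot;

headers = {
    &quot;Accept&quot;: &quot;application/json&quot;,
    &quot;Authorization&quot;: f&quot;Bearer {access_token}&quot;,
}
# select/sort keep the response concise, mirroring the Postman parameters above.
params = {&quot;select&quot;: &quot;id&quot;, &quot;sort&quot;: &quot;id asc, name desc&quot;}

resp = requests.get(f&quot;{base_url}/api/v1/storage-systems&quot;,
                    headers=headers, params=params)
resp.raise_for_status()
for system in resp.json().get(&quot;items&quot;, []):
    print(system)
&lt;/code&gt;&lt;/pre&gt;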
&lt;p&gt;I hope you find this blog post helpful in using the Data Services Cloud Console public REST API. More blog posts will be coming to help you take further advantage of its capabilities. Stay tuned to the &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE DEV blog&lt;/a&gt; for more blog posts about the HPE Data Services Cloud Console REST API.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Using HPE GreenLake Console's API Gateway for Data Services Cloud Console]]></title><description><![CDATA[Secured, Yet Agile A major guiding principle in the creation of the Application Programming Interface (API) for Data Services Cloud Console…]]></description><link>https://developer.hpe.com/api-console-for-data-services-cloud-console/</link><guid isPermaLink="false">https://developer.hpe.com/api-console-for-data-services-cloud-console/</guid><pubDate>Tue, 30 Nov 2021 15:28:17 GMT</pubDate><content:encoded>&lt;h2&gt;Secured, Yet Agile&lt;/h2&gt;
&lt;p&gt;A major guiding principle in the creation of the Application Programming Interface (API) for Data Services Cloud Console from Hewlett Packard Enterprise (HPE) is security. However, to be usable by applications or tools that rely on the API to extend their features using the console, the API must also be flexible. To provide both security and flexibility, the console&apos;s REST API uses the OAuth 2.0 authentication flow based on client credentials, which generates a limited-lifetime access token. This access token is then embedded in the header of each REST API request as the authorization bearer.&lt;/p&gt;
&lt;p&gt;This blog will walk through the essential steps required to exercise or experiment with the Data Services Cloud Console REST API.&lt;/p&gt;
&lt;h3&gt;Authentication Process to Obtain the Access Token&lt;/h3&gt;
&lt;p&gt;The Data Services Cloud Console public API relies on an OAuth 2.0 third-party authorization framework on behalf of the resource owner (the HPE GreenLake user) for security. The user starts by logging in to and authenticating with the HPE GreenLake console, where identity is validated by the Identity Provider (through username, password, or Multi-Factor Authentication). Using the API gateway menu in HPE GreenLake, a customer registers their client application (REST API client) to obtain the OAuth 2.0 API client credentials (client ID and client secret). This association allows the user to obtain the access token from the menu and then use it inside the token bearer field (header) of any REST API request. This allows any client application or script to perform any API request against the correct instance of Data Services Cloud Console.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/greenlake-api-access-flow.png&quot; alt=&quot;client-credential application flow&quot; title=&quot;obtain client-id and client-secret&quot;&gt;&lt;/p&gt;
&lt;p&gt;The access token has a limited lifetime (about 7200 seconds, or 2 hours). Once it expires, the client application must use the obtained client ID and client secret to generate a new access token. One indication that the access token has expired is that a request to the console&apos;s API returns an HTTP &apos;401 Unauthorized&apos; error. If the client application generates a new access token before the current one has expired, the current access token will be invalidated and treated as not authorized.&lt;/p&gt;
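&lt;p&gt;In practice, this means a long-running script should be prepared to refresh its token. The following is a minimal Python sketch of that pattern, not an official client: it assumes the &lt;code&gt;requests&lt;/code&gt; library and reuses the token endpoint and client credentials described in this blog, with placeholder values.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import requests

TOKEN_URL = &quot;https://sso.common.cloud.hpe.com/as/token.oauth2&quot;
BASE_URL = &quot;https://us1.data.cloud.hpe.com&quot;  # use the base URL for your region
CLIENT_ID = &quot;your-client-id&quot;          # placeholder
CLIENT_SECRET = &quot;your-client-secret&quot;  # placeholder


def get_token():
    # Exchange the client credentials for a fresh, limited-lifetime access token.
    resp = requests.post(TOKEN_URL, data={
        &quot;grant_type&quot;: &quot;client_credentials&quot;,
        &quot;client_id&quot;: CLIENT_ID,
        &quot;client_secret&quot;: CLIENT_SECRET,
    })
    resp.raise_for_status()
    return resp.json()[&quot;access_token&quot;]


def api_get(path, token):
    # Issue a GET request; if the token has expired (401), refresh it once and retry.
    resp = requests.get(BASE_URL + path,
                        headers={&quot;Authorization&quot;: f&quot;Bearer {token}&quot;})
    if resp.status_code == 401:
        token = get_token()
        resp = requests.get(BASE_URL + path,
                            headers={&quot;Authorization&quot;: f&quot;Bearer {token}&quot;})
    resp.raise_for_status()
    return resp.json(), token
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;A caller keeps the returned token for its next request, so the extra token exchange only happens when a 401 is actually seen.&lt;/p&gt;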
&lt;p&gt;Additionally, a user can also change the client secret to update the authorization when the authorized client application has lost its client secret, or when the client secret has been compromised.&lt;/p&gt;
&lt;p&gt;And lastly, when access to the console&apos;s REST API must be disabled, a user can delete the API client credential associated with client id and client secret in the API Gateway menu.&lt;/p&gt;
&lt;p&gt;The following flow chart describes the steps required to perform a console REST API request. The flow starts with the authorized HPE GreenLake user creating the client ID and client secret, which are used to obtain the access token. The access token is then used as the authorization bearer to secure each REST API request.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/user-guide-for-authorization.png&quot; alt=&quot;Access API process&quot; title=&quot;Process to authenticate and to obtain secure access &quot;&gt;&lt;/p&gt;
&lt;h2&gt;Accessing the API Gateway Menu&lt;/h2&gt;
&lt;p&gt;To access the API gateway menu, the user must log in to &lt;a href=&quot;https://common.cloud.hpe.com&quot;&gt;HPE GreenLake&lt;/a&gt;, have Data Services Cloud Console deployed in the intended region, and have onboarded a storage array (HPE Alletra, HPE Nimble, or HPE Primera) into the organization associated with the user account. The user must also have the role required to perform the intended operation on the console instance where the storage has been deployed. For instance, the user must have volume management capability in Data Ops Manager to create a storage volume in the US region. For more information about role-based access control, please take a look at the &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=ccs-help_en_us&quot;&gt;HPE GreenLake User Guide&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The API Gateway menu is available inside HPE GreenLake&apos;s Manage menu. From the HPE GreenLake Console, click on Menu to access this Manage menu.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/accesing-manage-menu-from-console.png&quot; alt=&quot;Access Menu &quot; title=&quot;Menu in Cloud Console&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/accesing-api-gateway-from-manage-menu.png&quot; alt=&quot;CCS Menu&quot; title=&quot;GreenLake Common Cloud Menu&quot;&gt;&lt;/p&gt;
&lt;p&gt;The API Gateway menu provides the following operations:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Creates and manages API client credential association in order to obtain:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Instance ID of the Data Services Cloud Console at the particular region (Hexadecimals).&lt;/li&gt;
&lt;li&gt;Client ID (Hexadecimals).&lt;/li&gt;
&lt;li&gt;Client Secret (Hexadecimals).&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Generates access token, changes client secret, and deletes client credential.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/api-gateway-block.png&quot; alt=&quot;API Gateway Menu&quot; title=&quot;DSCC API Gateway Menu&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Manages API client application&lt;/h3&gt;
&lt;p&gt;Each API client credential represents the authorization relationship between the client application and the Data Services Cloud Console REST API resources. Please click on the Create Credentials button to generate a client credential. Afterwards, the user can obtain the client ID and client secret, and use them to generate the access token.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/create-credentials-button.png&quot; alt=&quot;API client credentials&quot; title=&quot;Create API Client Credentials&quot;&gt;&lt;/p&gt;
&lt;p&gt;Inside the Create Credentials menu, click on the V button to show the pull down list and use the mouse to click on the desired application. For the purpose of using the Data Services Cloud Console REST API, please select the console instance in the region where the array has been deployed. The list shows all instances with the regions where the applications are deployed.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/select-the-desired-application.png&quot; alt=&quot;select the application&quot; title=&quot;Choose the application from Data Services Cloud Console&quot;&gt;&lt;/p&gt;
&lt;p&gt;After selecting the correct application, enter the Credential Name (Please see &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=ccs-help_en_us&quot;&gt;HPE GreenLake Cloud Console User Guide&lt;/a&gt; for supported characters).  Click the Create Credentials button to proceed with Client Credentials creation.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/create-credentials-menu.png&quot; alt=&quot;Create Credential no input yet&quot; title=&quot;Generate Client Credentials 1st time&quot;&gt;&lt;/p&gt;
&lt;p&gt;Once the Create Credentials button is clicked, the following OAuth (Open Authorization) information will be shown: the Client ID and Client Secret, which are displayed only once. Please copy both the client ID and the client secret to a safe location. If the user misses copying this information, the client secret can only be regenerated at a later time using the Reset Client Secret menu.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/api-client-credential-created.png&quot; alt=&quot;&quot; title=&quot;Credentials Created Close&quot;&gt;&lt;/p&gt;
&lt;p&gt;After closing the credential creation menu, the user can see the previously created API client credential by identifying the credential name on the menu.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/application-credential-created-prior-shown.png&quot; alt=&quot;&quot; title=&quot;API Client Credentials are created&quot;&gt;&lt;/p&gt;
&lt;p&gt;After clicking on the down-arrow button, the user can see the Generate Access Token button. Please click on it to generate the access token required for Data Services Cloud Console REST API requests.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/api-client-credential-get-access-token.png&quot; alt=&quot;&quot; title=&quot;Time to obtain the Access Token&quot;&gt;&lt;/p&gt;
&lt;p&gt;After clicking on the Generate Access Token button, the Generate Access Token menu will appear. This menu requires the user to enter the client secret obtained from the associated API client credential. Copy and paste the client secret recorded from the Credentials Created menu (shown in the previous image in this blog) so that the access token can be obtained. Click on the Create Access Token button to generate the access token.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/generate-access-token-with-secret.png&quot; alt=&quot;obtained from client credential.&quot; title=&quot;Use the client secret to generate Access Token&quot;&gt;&lt;/p&gt;
&lt;p&gt;The Access Token Created menu will appear and shows the generated access token. Copy the access token using the &quot;sheets&quot; icon and store it in a safe location. Click the Close button to continue.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/access-token-created-and-close.png&quot; alt=&quot;&quot; title=&quot;Access Token Generated and Consumed&quot;&gt;&lt;/p&gt;
&lt;p&gt;Afterward, the user can embed the access token in the REST API request header to perform the HTTP method against the desired resource and obtain the response. Note that the user must use the correct base URL for the region where the DSCC is deployed. These are the base URLs and the corresponding regions where the DSCC can be deployed (as of November 2021).&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;DSCC Region&lt;/th&gt;
&lt;th&gt;base-URL&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;EU Central&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://eu1.data.cloud.hpe.com&quot;&gt;https://eu1.data.cloud.hpe.com&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AP Northeast&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://jp1.data.cloud.hpe.com&quot;&gt;https://jp1.data.cloud.hpe.com&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;US West&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://us1.data.cloud.hpe.com&quot;&gt;https://us1.data.cloud.hpe.com&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h4&gt;Oops! What if I forgot to copy the client secret for this client credential?&lt;/h4&gt;
&lt;p&gt;The client secret can be recreated inside the Create Credentials menu by clicking on the three dots at the bottom of the menu. Once the three-dot menu opens, the user must click on the Reset Client Secret button to display the newly created client secret. This opens the Credentials Created menu, which shows the new client secret one time only. Note that the user must copy this newly created secret to a secure location so that it can be used to generate a new access token.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/reset-the-client-secret-in-client-credential-menu.png&quot; alt=&quot;Reset the client secret&quot; title=&quot;resetting the client secret&quot;&gt;&lt;/p&gt;
&lt;h4&gt;Nice! Can you give me an example of using the access token?&lt;/h4&gt;
&lt;p&gt;The access token is a long JSON Web Token string signed using the RS256 algorithm. Note that the access token must be added to the header of any Data Services Cloud Console REST API request with the keyword &quot;Authorization: Bearer &lt;access token in JWT&gt;&quot;. The following example is based on the ubiquitous cURL tool, and it uses &quot;&lt;a href=&quot;https://scalpha-app.qa.cds.hpe.com&quot;&gt;https://scalpha-app.qa.cds.hpe.com&lt;/a&gt;&quot; as the base URL. Note that this base URL is the DSCC testing site only. For your exercise, please use one of the base URLs noted in the table above. The example uses the GET method against the /api/v1/audit-events resource to obtain a list of the available audit events. Note the additional &quot;Authorization: Bearer&quot; keyword added to the header of this REST API request.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;curl -X GET https://scalpha-app.qa.cds.hpe.com/api/v1/audit-events \
  -H &quot;Accept: application/json&quot; \
-H &quot;Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IllUMU9MZWRYeDFCbHZ2and6OU1FNm8ya1BQayIsInBpLmF0bSI6ImRlejAifQ.eyJjbGllbnRfaWQiOiIwMGNmZmY3MC04NmFiLTRmNjYtODI0NS0xZWIwNTQ2MzljMzgiLCJpc3MiOiJodHRwczovL3Nzby5jb21tb24uY2xvdWQuaHBlLmNvbSIsImF1ZCI6ImV4dGVybmFsX2FwaSIsInN1YiI6InJvbmFsZC5kaGFybWFAaHBlLmNvbSIsInVzZXJfY3R4IjoiZThhNGRhMmVlZmMzMTFlYmEwMmNiNjAzNDIyYmMwYTAiLCJhdXRoX3NvdXJjZSI6ImNjc190b2tlbl9tYW5hZ2VtZW50IiwicGxhdGZvcm1fY3VzdG9tZXJfaWQiOiIyMzRkNzZjNmU5ZDAxMWViYjczMDgyYjIxMmFkNmZlYSIsImlhdCI6MTYzNzAwNjk0NSwiYXBwbGljYXRpb25faW5zdGFuY2VfaWQiOiIzYzE4YmQwMy04MzA2LTRjN2MtOTQyZS1jNzA0YTRiODc0NGMiLCJleHAiOjE2MzcwMTQxNDV9.gHcBzl0n2wwrMRR2tSbT6jHN68d1TSNT743GED3LuF2B08ABYh9ePKQjhqYW6mjY-oSfEW2BTfG7TfTzZj9MtQ2kJGmq3DvLBl6fAaN6MEkSIz54hu0PdmDW8His6oET2txq_0kp5XJ7T6n_QJzZY0xvSoquE-48gCxwGFPWIRwefIpdw_1URFXYgfdKCxCIDTdPfYKs8kD8hzwyF9uvgLgVPWZJD6b1UHJK5OpNnBOpAxrs1xfFBz688b0vheZdARCJsl5E3Qxjyg68hw2cjavZZOX-_RWpd6JWPrQnqxyxQeYQ5yYy7giVCViM5SUZkv6j0Ts3TVguapE2kvahkQ&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The response is returned as a JSON string, as shown in the example below. Note that the user can pass additional parameters to the GET audit-events request to filter for particular events; a scripted example of filtering and paging through this resource follows the sample response. Please take a look at the &lt;a href=&quot;https://console-us1.data.cloud.hpe.com/doc/api/v1/&quot;&gt;Data Services Cloud Console API documentation&lt;/a&gt; for more information on the additional parameters available for the /api/v1/audit-events resource.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;{
  &quot;items&quot;: [
    {
      &quot;associatedResource&quot;: {
        &quot;id&quot;: &quot;/api/v1/host-initiators?filter=editStatus%20in%20(Update_In_Progress,Update_Success,Update_Failed,Not_Applicable,Delete_Failed)&quot;,
        &quot;name&quot;: &quot;&quot;,
        &quot;type&quot;: &quot;&quot;
      },
      &quot;code&quot;: &quot;Unauthorized privilege&quot;,
      &quot;contextId&quot;: &quot;&quot;,
      &quot;customerId&quot;: &quot;e8a4da2eefc311eba02cb603422bc0a0&quot;,
      &quot;id&quot;: &quot;9bafe7ae-84bf-42a2-9b82-2592ce62715e&quot;,
      &quot;loggedAt&quot;: &quot;2021-11-10T04:09:07Z&quot;,
      &quot;message&quot;: &quot;Unauthorized user access&quot;,
      &quot;occurredAt&quot;: &quot;2021-11-10T04:09:07Z&quot;,
      &quot;permission&quot;: &quot;data-services.host-initiator.read&quot;,
      &quot;scope&quot;: &quot;&quot;,
      &quot;source&quot;: &quot;/api/v1/host-initiators?filter=editStatus%20in%20(Update_In_Progress,Update_Success,Update_Failed,Not_Applicable,Delete_Failed)&quot;,
      &quot;sourceIpAddress&quot;: &quot;fleet-gql-data-graph:4000&quot;,
      &quot;state&quot;: &quot;PermissionDenied&quot;,
      &quot;taskId&quot;: &quot;&quot;,
      &quot;uniqueId&quot;: &quot;audit.events+0+3936&quot;,
      &quot;userEmail&quot;: &quot;mandy.shen@hpe.com&quot;,
      &quot;version&quot;: 1
    },
    {
      &quot;associatedResource&quot;: {
        &quot;id&quot;: &quot;/api/v1/host-initiators?filter=editStatus%20in%20(Update_In_Progress,Update_Success,Update_Failed,Not_Applicable,Delete_Failed)&quot;,
        &quot;name&quot;: &quot;&quot;,
        &quot;type&quot;: &quot;&quot;
      },
      &quot;code&quot;: &quot;Unauthorized privilege&quot;,
      &quot;contextId&quot;: &quot;&quot;,
      &quot;customerId&quot;: &quot;e8a4da2eefc311eba02cb603422bc0a0&quot;,
      &quot;id&quot;: &quot;f0d2c4c6-d859-42f3-ae4b-60f8d3d2d89d&quot;,
      &quot;loggedAt&quot;: &quot;2021-11-10T04:09:03Z&quot;,
      &quot;message&quot;: &quot;Unauthorized user access&quot;,
      &quot;occurredAt&quot;: &quot;2021-11-10T04:09:03Z&quot;,
      &quot;permission&quot;: &quot;data-services.host-initiator.read&quot;,
      &quot;scope&quot;: &quot;&quot;,
      &quot;source&quot;: &quot;/api/v1/host-initiators?filter=editStatus%20in%20(Update_In_Progress,Update_Success,Update_Failed,Not_Applicable,Delete_Failed)&quot;,
      &quot;sourceIpAddress&quot;: &quot;fleet-gql-data-graph:4000&quot;,
      &quot;state&quot;: &quot;PermissionDenied&quot;,
      &quot;taskId&quot;: &quot;&quot;,
      &quot;uniqueId&quot;: &quot;audit.events+2+3975&quot;,
      &quot;userEmail&quot;: &quot;mandy.shen@hpe.com&quot;,
      &quot;version&quot;: 1
    },
    
    .... snippet ....
    
      {
      &quot;associatedResource&quot;: {
        &quot;id&quot;: &quot;/api/v1/storage-systems/volumes?filter=isSystemVolume%20eq%20false\u0026limit=1\u0026offset=0&quot;,
        &quot;name&quot;: &quot;&quot;,
        &quot;type&quot;: &quot;&quot;
      },
      &quot;code&quot;: &quot;Unauthorized privilege&quot;,
      &quot;contextId&quot;: &quot;&quot;,
      &quot;customerId&quot;: &quot;e8a4da2eefc311eba02cb603422bc0a0&quot;,
      &quot;id&quot;: &quot;86988487-f7f4-403e-b33e-e15abfcd568a&quot;,
      &quot;loggedAt&quot;: &quot;2021-08-19T15:38:49Z&quot;,
      &quot;message&quot;: &quot;Unauthorized user access&quot;,
      &quot;occurredAt&quot;: &quot;2021-08-19T15:38:49Z&quot;,
      &quot;permission&quot;: &quot;data-services.volume.read&quot;,
      &quot;scope&quot;: &quot;&quot;,
      &quot;source&quot;: &quot;/api/v1/storage-systems/volumes?filter=isSystemVolume%20eq%20false\u0026limit=1\u0026offset=0&quot;,
      &quot;sourceIpAddress&quot;: &quot;scalpha-app.qa.cds.hpe.com&quot;,
      &quot;state&quot;: &quot;PermissionDenied&quot;,
      &quot;taskId&quot;: &quot;&quot;,
      &quot;uniqueId&quot;: &quot;audit.events+6+3057&quot;,
      &quot;userEmail&quot;: &quot;matt.haron@hpe.com&quot;,
      &quot;version&quot;: &quot;&quot;
    }
  ],
  &quot;total&quot;: 123,
  &quot;numItems&quot;: 123,
  &quot;pageLimit&quot;: 500,
  &quot;pageOffset&quot;: 0
}
&lt;/code&gt;&lt;/pre&gt;
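&lt;p&gt;The &lt;code&gt;pageLimit&lt;/code&gt;, &lt;code&gt;pageOffset&lt;/code&gt;, and &lt;code&gt;total&lt;/code&gt; fields in the response above suggest how a client can page through larger result sets. The following minimal Python sketch combines a filter expression (modeled on the ones visible in the sample response) with simple offset-based paging; the &lt;code&gt;filter&lt;/code&gt;, &lt;code&gt;limit&lt;/code&gt;, and &lt;code&gt;offset&lt;/code&gt; parameter names and the exact filter grammar should be verified against the API documentation.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import requests

BASE_URL = &quot;https://us1.data.cloud.hpe.com&quot;    # base URL for your region
ACCESS_TOKEN = &quot;paste-your-access-token-here&quot;  # placeholder

headers = {
    &quot;Accept&quot;: &quot;application/json&quot;,
    &quot;Authorization&quot;: f&quot;Bearer {ACCESS_TOKEN}&quot;,
}

events, offset, limit = [], 0, 100
while True:
    resp = requests.get(
        f&quot;{BASE_URL}/api/v1/audit-events&quot;,
        headers=headers,
        params={
            # Filter expression modeled on the ones in the sample response.
            &quot;filter&quot;: &quot;state eq PermissionDenied&quot;,
            &quot;limit&quot;: limit,
            &quot;offset&quot;: offset,
        },
    )
    resp.raise_for_status()
    page = resp.json()
    events.extend(page.get(&quot;items&quot;, []))
    offset += limit
    if offset &gt;= page.get(&quot;total&quot;, 0):
        break

print(f&quot;Fetched {len(events)} audit events&quot;)
&lt;/code&gt;&lt;/pre&gt;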
&lt;p&gt;The recommended tool at this time for experimenting with the console&apos;s REST API is Postman, which is downloadable from the &lt;a href=&quot;https://www.postman.com/downloads/&quot;&gt;Postman website&lt;/a&gt;. Postman is a versatile tool: anyone can copy the access token (or, better, the client ID and client secret) from the API Gateway menu and issue a REST API request without using a programming language. Furthermore, the user can also test the parameters and format the responses of each REST API request using Postman.&lt;/p&gt;
&lt;p&gt;In conclusion, this blog gives you a great example on how to obtain the access token and experiment with the Data Services Cloud Console REST API. Please take a look at the next blog on getting the access token programmatically to enable any client application using any familiar tool like Postman, use a programming, or a scripting language.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Autopilot Kubernetes Deployments on HPE Ezmeral Runtime Enterprise]]></title><description><![CDATA[‘Autopilot’ is a concept that is familiar to many in the aviation and automobile industries. As it pertains to airplanes, autopilot systems…]]></description><link>https://developer.hpe.com/autopilot-kubernetes-deployments-on-hpe-ezmeral-runtime-enterprise/</link><guid isPermaLink="false">https://developer.hpe.com/autopilot-kubernetes-deployments-on-hpe-ezmeral-runtime-enterprise/</guid><pubDate>Tue, 23 Nov 2021 18:20:34 GMT</pubDate><content:encoded>&lt;p&gt;‘Autopilot’ is a concept that is familiar to many in the aviation and automobile industries. As it pertains to airplanes, autopilot systems aid pilots in focusing on crucial tasks by automating and controlling every aspect of the flight, from take-off to landing. Modern-day autopilot systems can be customized and pilots can decide on which features are to be manually controlled. Drawing parallels, in the context of Kubernetes, an autopilot system can manage the end-to-end automation of deployments as well as the maintenance of applications, relieving developers from the pain of manual intervention.  A system like this incorporates the necessary checks and balances to ensure the health of the applications and the CI/CD (continuous integration/continuous delivery) pipelines.&lt;/p&gt;
&lt;p&gt;It’s important to note that the success of such an autopilot system relies on the extent to which developers can fine-tune and intervene in the autopilot capabilities. Just like a pilot warming up the aircraft during take-off, developers need to prepare their applications for an autopilot system to kick in. This largely depends on how the applications are structured/architected.&lt;/p&gt;
&lt;p&gt;In this post, I will cover autopilot systems for Kubernetes and how the combination of gopaddle and HPE Ezmeral Runtime Enterprise enables enterprises to speed up their modernization journey. Before I do that, let me first explain the need for such systems. It all starts with a demand for efficiency of the business operations.&lt;/p&gt;
&lt;h2&gt;Addressing the need to optimize operations&lt;/h2&gt;
&lt;p&gt;Application development has changed significantly in recent years, with the introduction of microservices, DevOps processes, and Kubernetes. Maintaining a traditional, monolithic application is arduous. It requires coordinated releases, as the entire team contributes to a single binary, which is then configured and re-configured to suit the hybrid environments it may encounter. Depending upon where the application is deployed, the application has to be configured to fit the underlying infrastructure. How you scale the application or how you react to an application’s downtime would greatly depend on where the application is being deployed and used. These sequential and tedious software releases result in sub-optimal business operations.&lt;/p&gt;
&lt;p&gt;When businesses start adopting a microservices architecture and Kubernetes, they gain multiple benefits. A microservices architecture is nimble and gives developers the independence required to build and deploy their services as long as they conform to their API contracts. Using a microservices architecture, developers can more easily incorporate new features and ship them quickly. Kubernetes aids these agile practices by standardizing the infrastructure. Using these cloud-native technologies is a great win for  businesses, as they help them respond to the market quickly and deliver more in a short period of time.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/fig1-k8s-complex-gopaddle.png&quot; alt=&quot;Kubernetes is complex to manage&quot; title=&quot;Kubernetes management requirements&quot;&gt;&lt;/p&gt;
&lt;p&gt;But, even though the value of Kubernetes is apparent, implementing Kubernetes requires a huge cultural shift amongst developers. Developers are now responsible for the operational demands of how to configure and deploy their microservices and maintain those services at scale. This introduces a huge learning curve. In addition, developers end up in a cycle of maintenance and automation that, in turn, pushes them away from innovation. In other words, implementing Kubernetes can become stressful and counter-productive.&lt;/p&gt;
&lt;h2&gt;Autopilot System for Kubernetes&lt;/h2&gt;
&lt;p&gt;The answer to this double-edged requirement of balancing business efficiency without compromising developer productivity and morale is to leverage an autopilot system for Kubernetes. Such a system should abstract the complexity across the three stages of a Kubernetes implementation.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Infrastructure&lt;/strong&gt; — Provisioning and maintaining Kubernetes environments&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Application&lt;/strong&gt; — Deploying and maintaining applications on Kubernetes&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Automation&lt;/strong&gt; — DevOps automation, i.e., CI/CD, monitoring, logging, and alerting&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/fig2-hpe-gopaddle-solution.png&quot; alt=&quot;Autopilot system for Kubernetes&quot; title=&quot;Autopilot system for Kubernetes&quot;&gt;&lt;/p&gt;
&lt;p&gt;HPE Ezmeral Runtime Enterprise and gopaddle offer a combined solution that addresses these end-to-end requirements for developers.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/software/ezmeral-runtime.html&quot;&gt;HPE Ezmeral Runtime Enterprise&lt;/a&gt; provides you with an enterprise-grade container orchestration platform that is designed to run modern applications (both cloud-native and non-cloud-native monolithic applications) with persistent data.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://gopaddle.io/&quot;&gt;gopaddle&lt;/a&gt; is a no-code platform used to build, deploy and maintain Kubernetes workloads across hybrid environments. It offers three main capabilities:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Accelerate modernization&lt;/strong&gt; — With its intelligent scaffolding, gopaddle can transform code into Kubernetes deployments in minutes.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Increase productivity&lt;/strong&gt; — With its out-of-the-box tools integration and ready-to-use templates, gopaddle reduces the overhead of tedious maintenance and increases productivity.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Increase operational efficiency&lt;/strong&gt; — gopaddle’s centralized control plane helps provision and manage Kubernetes clusters across clouds and on-premise/edge environments.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;How does the solution work?&lt;/h2&gt;
&lt;h3&gt;1. Installing the gopaddle Enterprise edition on HPE Ezmeral Runtime Enterprise platform&lt;/h3&gt;
&lt;p&gt;gopaddle is available in two variations. gopaddle SaaS is a subscription, pay-as-you-go model in which the Kubernetes cluster managed by HPE Ezmeral Runtime Enterprise can be registered with gopaddle securely via a bastion host. An external cluster in the context of gopaddle is a cluster that is not provisioned by gopaddle. The second variation, described in detail in this post, is to install the gopaddle Enterprise edition on the Kubernetes cluster. gopaddle Enterprise edition is available as a Helm chart and can be installed on a Kubernetes cluster running on the HPE Ezmeral Runtime Enterprise platform within the corporate firewall. To achieve a self-contained CI/CD environment within the corporate firewall, the GitLab repository and container registry can be installed on the Kubernetes cluster and registered with gopaddle.&lt;/p&gt;
&lt;h3&gt;2. Registering and configuring HPE Ezmeral Runtime Environment with gopaddle&lt;/h3&gt;
&lt;p&gt;Once gopaddle Enterprise Edition is installed on the Kubernetes cluster managed by HPE Ezmeral Runtime Enterprise, you can register that Kubernetes cluster securely with gopaddle for further application build and/or deployment processes.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/fig3-hpe-cluster-dashboard-gopaddle.png&quot; alt=&quot;gopaddle dashboard&quot; title=&quot;gopaddle dashboard&quot;&gt;&lt;/p&gt;
&lt;p&gt;At the time of Kubernetes cluster registration, gopaddle also installs a set of add-ons, like Prometheus, Event exporters and Grafana, on the Kubernetes cluster for monitoring and alerting.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/fig4-hpe-addons-builds-apps-gopaddle.png&quot; alt=&quot;Add-ons, container builds and applications&quot; title=&quot;Add-ons, container builds and applications&quot;&gt;&lt;/p&gt;
&lt;p&gt;As soon as the Kubernetes cluster is registered, gopaddle discovers the nodes and gives a visual representation of their configurations. Configurations like node labels and taints can be updated from the gopaddle UI.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/fig5-hpe-node-detail-gopaddle.png&quot; alt=&quot;Kubernetes nodes details&quot; title=&quot;Kubernetes nodes details&quot;&gt;&lt;/p&gt;
&lt;p&gt;The node logs section can be used for troubleshooting HPE Ezmeral Runtime Enterprise managed Kubernetes node-related issues.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/fig6-hpe-node-logs-gopaddle.png&quot; alt=&quot;Kubernetes nodes logs&quot; title=&quot;Kubernetes nodes logs&quot;&gt;&lt;/p&gt;
&lt;p&gt;The Kubernetes cluster can now be used for building and deploying applications.&lt;/p&gt;
&lt;h3&gt;3. Registering source control repositories and Docker registries&lt;/h3&gt;
&lt;p&gt;Before modernizing the applications, the source control repository and a container registry need to be registered with gopaddle. Below is a sample configuration for registering the on-premise GitLab repository installed on the HPE Ezmeral Runtime Enterprise managed Kubernetes cluster registered with gopaddle. First, a custom GitLab authenticator is registered and then the GitLab repository is registered using the custom authenticator. Once registered, the GitLab repository can now be used to create containers within gopaddle. Similarly, an on-premise GitLab container registry can be registered with gopaddle to push Docker images.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/fig7-hpe-code-rep-reg-updated-gopaddle.png&quot; alt=&quot;GitLab source control repository and Container registry&quot; title=&quot;GitLab source control repository and Container registry&quot;&gt;&lt;/p&gt;
&lt;h3&gt;4. Building and deploying applications&lt;/h3&gt;
&lt;p&gt;Source code projects in the GitLab repository can be on-boarded either through the gopaddle’s intuitive wizard-based UI or through its command-line utility — &lt;em&gt;&lt;strong&gt;gpctl&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;For instance, &lt;em&gt;&lt;strong&gt;gpctl&lt;/strong&gt;&lt;/em&gt; can be installed locally on the developer’s desktop and can be used to perform an intelligent scaffolding — a process that automatically converts a source code project to a Kubernetes deployment or a Helm chart. Here is an example of how &lt;em&gt;gpctl init&lt;/em&gt; can be used to import the ‘Contentful’ application from the GitLab repository to the registered Kubernetes cluster running on HPE Ezmeral Runtime Enterprise. The outcome of this utility is the access URL of the application deployed on the HPE Ezmeral Runtime Enterprise. This transformation process takes less than 10 minutes to onboard the ‘Contentful’ application on the HPE Ezmeral Runtime Enterprise.&lt;/p&gt;
&lt;p&gt;As you can see, the HPE Ezmeral Runtime Enterprise is used seamlessly for both building the container image and for deploying the application. gpctl supports any type of Linux workloads like NodeJS, Java, Python, GoLang, Ruby, .NET Core. More information on &lt;em&gt;&lt;strong&gt;gpctl&lt;/strong&gt;&lt;/em&gt; can be found &lt;a href=&quot;https://help.gopaddle.io/en/articles/5056807-initializing-a-microservice-from-scratch&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/fig8-hpe-gpctl-init-1-gopaddle.png&quot; alt=&quot;gopaddle gpctl command line interface&quot; title=&quot;gopaddle gpctl command line interface&quot;&gt;&lt;/p&gt;
&lt;h3&gt;5. CI/CD automation&lt;/h3&gt;
&lt;p&gt;To enable continuous integration for the containers, the CI toggle must be enabled for the container from the gopaddle UI. When enabled, any changes to the source code will be automatically detected and a new container build will be triggered.
Using gopaddle’s alerts and notification channels, the Jenkins pipeline can be triggered to run a custom workflow — say, run a regression suite or perform a rolling update on an application.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/fig9-enable-ci-gopaddle.png&quot; alt=&quot;gopaddle enables Continuous Integration&quot; title=&quot;gopaddle enables Continuous Integration&quot;&gt;&lt;/p&gt;
&lt;h3&gt;6. HPE Ezmeral Runtime Enterprise for Stateful Applications&lt;/h3&gt;
&lt;p&gt;gopaddle integrates with HPE Ezmeral Runtime Enterprise seamlessly to manage stateful workloads. HPE Ezmeral Runtime Enterprise provides a pre-integrated Data Fabric (formerly known as MapR Data Platform) for persistent volumes for stateful applications that require persistence of data. The screenshot below gives an overview of the Data Fabric volumes and the applications/services using those volumes.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/fig10-hpe-volumes-gopaddle.png&quot; alt=&quot;gopaddle dashboard for volumes&quot; title=&quot;gopaddle dashboard for volumes&quot;&gt;&lt;/p&gt;
&lt;p&gt;The HPE Ezmeral Runtime Enterprise and gopaddle joint solution is a great autopilot solution for developers. While HPE Ezmeral Runtime Enterprise offers a robust enterprise-grade Kubernetes environment, gopaddle automates the deployment and application lifecycle on the HPE Ezmeral platform. Developers can now focus on creativity, innovation, and collaboration rather than spending their time on resolving configuration issues. At the same time, businesses can achieve operational efficiency without compromising on developer productivity.&lt;/p&gt;
&lt;p&gt;More information on the HPE Ezmeral Runtime Enterprise and gopaddle joint solution can be found &lt;a href=&quot;https://www.hpe.com/psnow/doc/a50005229enw&quot;&gt;here&lt;/a&gt;. Check back on the &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE DEV blog&lt;/a&gt; for more articles on how to automate and improve your DevOps environment. You can follow Vinothini (Founder &amp;#x26; CEO @gopaddle.io) and her team on Twitter &lt;a href=&quot;https://twitter.com/gopaddleio?lang=en&quot;&gt;@gopaddle.io&lt;/a&gt;. You can also reach Vinothini directly on &lt;a href=&quot;https://www.linkedin.com/in/vinothini-raju-9817ab5/&quot;&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[StackStorm: simple, elegant automation for everyone!]]></title><description><![CDATA[There is so much information speeding through a data center from one resource to another - vCenter, storage arrays, network controllers…]]></description><link>https://developer.hpe.com/stackstorm-simple-elegant-automation-for-everyone/</link><guid isPermaLink="false">https://developer.hpe.com/stackstorm-simple-elegant-automation-for-everyone/</guid><pubDate>Thu, 18 Nov 2021 15:16:44 GMT</pubDate><content:encoded>&lt;p&gt;There is so much information speeding through a data center from one resource to another - vCenter, storage arrays, network controllers, switches, routers, power systems, cooling systems, etc. Yet, oddly enough, these resources don&apos;t all speak the same language, hindering communications and their ability to react to one another. But, what if routers could gain some insightful knowledge from a storage array? Or, maybe HPE OneView could automatically exchange VLAN information with an Aruba CX switch?&lt;/p&gt;
&lt;p&gt;Since just about every piece of hardware today has a RESTful API (application programming interface), there is actually a way to make this happen. Using a high-level programming language, like Python, one could develop some code to leverage those APIs and get traffic flowing between two disparate types of systems.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/actors.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Sounds like a lot of code to write. For every system I want to talk to, I will have to write a separate module. For example, in vCenter, I want to build a new port group. Wouldn&apos;t it be awesome if the act of creating the port group would automatically trigger something to add that port group information to a new VLAN (virtual LAN) in Aruba Central? I could write a Python script to get that information from vCenter, refactor it, and use a different API to send the information to Aruba Central. I still wouldn&apos;t have the automation I am looking for unless I scheduled a &lt;strong&gt;cron job&lt;/strong&gt; to run my code. The task becomes exponentially harder when I want to add another device like HPE OneView into the conversation.&lt;/p&gt;
&lt;p&gt;I like writing code, but prefer to work smarter, not harder. After all, there is only so much time in the day. That is why I love using Python. Python has a lot of pre-written modules that make it extremely extensible. By using the &lt;strong&gt;import&lt;/strong&gt; command in my script I can take advantage of modules I have no idea how to even start writing. I like the &lt;strong&gt;plug and play&lt;/strong&gt; of how easy it is to consume those extra modules. I like using &lt;strong&gt;requests&lt;/strong&gt;, because I can just code &lt;strong&gt;import requests&lt;/strong&gt; and I have access to a well-developed chunk of code that I did not have to write.&lt;/p&gt;
&lt;p&gt;What if we applied the same concept to our problem at the start of this blog? What if each of the resources in the data center had a &lt;strong&gt;module&lt;/strong&gt; of some sorts, that I could plug into some sort of framework? Something like StackStorm!&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/stackstorm.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Before I go too far, let me give you a quick recap of what &lt;a href=&quot;https://developer.hpe.com/blog/master-the-automation-universe-the-easy-way-part-1-introduction-to-stack/&quot;&gt;StackStorm&lt;/a&gt; is. StackStorm uses &lt;strong&gt;sensors&lt;/strong&gt;, &lt;strong&gt;rules&lt;/strong&gt;, &lt;strong&gt;triggers&lt;/strong&gt;, &lt;strong&gt;actions&lt;/strong&gt;, and &lt;strong&gt;workflows&lt;/strong&gt;. You can think of it as &lt;strong&gt;if-this-then-that&lt;/strong&gt; automation. Based on an event on some device, a StackStorm sensor can detect it and fire a trigger that matches a rule. The rule contains some logic and, when it matches, runs an action or a series of actions called a workflow. It might sound a little involved, but all of the rules, actions, sensors and workflows are packaged together in a StackStorm pack to make it easy to deploy these solutions (a minimal sketch of a pack&apos;s Python action follows the diagram below).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/process.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
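&lt;p&gt;To make the action concept concrete, here is a minimal sketch of a Python-runner action such as a pack might ship. It follows the standard StackStorm Python action layout (a class deriving from the base &lt;code&gt;Action&lt;/code&gt; class and implementing &lt;code&gt;run()&lt;/code&gt;); the volume-creation logic is only a placeholder and is not the actual HPE Primera or HPE Nimble pack code.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from st2common.runners.base_action import Action


class CreateVolume(Action):
    &quot;&quot;&quot;Placeholder action: pretend to create a volume on a storage array.&quot;&quot;&quot;

    def run(self, name, size_gib):
        # A real pack would call the array&apos;s REST API here, using connection
        # details and credentials loaded from the pack&apos;s configuration.
        result = {&quot;volume&quot;: name, &quot;size_gib&quot;: size_gib, &quot;status&quot;: &quot;created&quot;}
        # Returning (True, result) marks the action as succeeded.
        return (True, result)
&lt;/code&gt;&lt;/pre&gt;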
&lt;p&gt;What if I told you there was a place called the StackStorm Exchange that already had 170 of these modules you can plug into your projects? &lt;a href=&quot;https://exchange.stackstorm.org/&quot;&gt;Look at this link&lt;/a&gt;. It is quite an impressive stockpile of automation just waiting for you to take advantage of it. I recently finished developing a StackStorm integration pack for HPE Primera Storage. Before that, I developed a pack to integrate HPE Nimble Storage. What communication can we develop?&lt;/p&gt;
&lt;p&gt;Using these packs, I went about trying to determine what sort of communication could be developed between the two storage systems, and I settled on volumes. I wanted to be able to create a volume on the HPE Primera and have it &lt;strong&gt;automatically&lt;/strong&gt; appear on an HPE Nimble storage array. Maybe this is something cool, maybe not, but it demonstrates the power of HPE devices talking to one another while also being able to talk to the StackStorm integration packs out on the exchange. The process would look something like this:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/flow.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The most important thing to remember is that when I run a StackStorm workflow, one step in the process can be a StackStorm action from one integration pack, and the next task can be a StackStorm action from any other pack! Now, I have StackStorm integration packs for HPE OneView, HPE Nimble, and HPE Primera. Using these StackStorm integration packs allows HPE OneView, HPE Nimble, and HPE Primera to communicate with one another. Because they are part of StackStorm, they now have access to all the pre-written automation packs available on the StackStorm Exchange!&lt;/p&gt;
&lt;p&gt;I have written StackStorm integration packs for &lt;strong&gt;&lt;a href=&quot;https://github.com/xod442/Stackstorm-qumulo&quot;&gt;Qumulo&lt;/a&gt;&lt;/strong&gt;, HPE &lt;strong&gt;&lt;a href=&quot;https://github.com/HewlettPackard/stackstorm-hpe-oneview&quot;&gt;OneView&lt;/a&gt;&lt;/strong&gt;, &lt;strong&gt;&lt;a href=&quot;https://github.com/HewlettPackard/stackstorm-aruba-fc&quot;&gt;Aruba Fabric Composer&lt;/a&gt;&lt;/strong&gt;, &lt;strong&gt;&lt;a href=&quot;https://github.com/xod442/stackstorm-hpe-iloamplifier&quot;&gt;iLo-amplifier&lt;/a&gt;&lt;/strong&gt;, &lt;a href=&quot;https://github.com/HewlettPackard/stackstorm-hpe-nimble&quot;&gt;HPE Nimble&lt;/a&gt;, and even one for &lt;strong&gt;&lt;a href=&quot;https://github.com/xod442/stackstorm-arista-dev&quot;&gt;Arista Cloud Vision Portal&lt;/a&gt;&lt;/strong&gt;. The list has grown quite a bit since I started this endeavour. What is the net result of this activity? The communication between servers, storage, and networking in the data center just got a little better. I challenge myself, as well as you, to think about the solutions that could be developed by applying this automation. It is said that any process you can document, you should automate. If you can automate a process, you no longer have to do it yourself, and that can give you back precious minutes of your busy schedule.&lt;/p&gt;
&lt;p&gt;If you have something you would like to have a StackStorm integration pack developed for, you can take action. You can start by reading my other blog posts &lt;a href=&quot;https://developer.hpe.com/search/?term=stackstorm&quot;&gt;here:&lt;/a&gt; You could check out the &lt;a href=&quot;/hackshack/workshop/21&quot;&gt;Workshop-on-Demand&lt;/a&gt; or take my free &lt;a href=&quot;https://github.com/xod442/stackstorm-tutorial&quot;&gt;StackStorm introductory training &lt;/a&gt;and start your journey on your way to simple, elegant automation.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[On the cutting edge]]></title><link>https://developer.hpe.com/2021-November-4/</link><guid isPermaLink="false">https://developer.hpe.com/2021-November-4/</guid><pubDate>Fri, 05 Nov 2021 05:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Build Transformative AI Applications at Scale with HPE Machine Learning Development Environment]]></title><description><![CDATA[Building and training optimized machine learning (ML) models at scale is considered the most demanding and critical stage of ML development…]]></description><link>https://developer.hpe.com/build-transformative-ai-applications-at-scale-with-hpe-cray-ai-development-environment/</link><guid isPermaLink="false">https://developer.hpe.com/build-transformative-ai-applications-at-scale-with-hpe-cray-ai-development-environment/</guid><pubDate>Thu, 04 Nov 2021 09:34:21 GMT</pubDate><content:encoded>&lt;p&gt;Building and training optimized machine learning (ML) models at scale is considered the most demanding and critical stage of ML development. Doing it well requires researchers and data scientists to overcome many challenges typically encountered in High Performance Computing (HPC) environments.&lt;/p&gt;
&lt;p&gt;These challenges often include properly setting up and managing a fast-moving ML software ecosystem and infrastructure spanning specialized compute, storage, network fabric, and tensor processors (e.g., GPUs). Additionally, users need to program, schedule, and train their models to maximize the use of the highly specialized infrastructure they have set up, which can create complexity and impede productivity.&lt;/p&gt;
&lt;p&gt;To meet these challenges, ML Engineers and data scientists are on a never-ending search for novel and innovative solutions that help them focus on building better models and accelerate their time-to-production. &lt;a href=&quot;https://www.hpe.com/us/en/solutions/artificial-intelligence/machine-learning-development-environment.html&quot;&gt;HPE Machine Learning Development Environment&lt;/a&gt; is designed to help them specifically achieve this.&lt;/p&gt;
&lt;p&gt;Built upon the widely popular open-source &lt;a href=&quot;https://www.determined.ai/&quot;&gt;Determined Training Platform&lt;/a&gt;, HPE Machine Learning Development Environment reduces the complexity and cost associated with machine learning model development by removing the need to write infrastructure code and makes it easy for IT administrators to set up, manage, secure, and share AI compute clusters.&lt;/p&gt;
&lt;p&gt;By adopting HPE Machine Learning Development Environment, model developers can:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Train models faster&lt;/strong&gt; using state-of-the-art distributed training without changing the model code.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Automatically find high-quality models&lt;/strong&gt; with advanced hyperparameter tuning from the creators of Hyperband.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Maximize GPU performance&lt;/strong&gt; with smart scheduling and cut cloud GPU costs by seamlessly using spot/preemptible instances.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Track and reproduce work&lt;/strong&gt; with experiment tracking that runs out-of-the-box, covering code versions, metrics, checkpoints, and hyperparameters.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;HPE Machine Learning Development Environment integrates these features into an easy-to-use, high-performance machine learning environment — which means you can spend your time building models instead of managing infrastructure.&lt;/p&gt;
&lt;p&gt;To learn more about HPE Machine Learning Development Environment &lt;a href=&quot;https://www.hpe.com/us/en/solutions/artificial-intelligence/machine-learning-development-environment.html&quot;&gt;visit our landing page&lt;/a&gt; and get in touch with our team of ML and distributed system experts.&lt;/p&gt;
&lt;p&gt;To learn more about the open-source project that powers HPE Machine Learning Development Environment, we invite you to check out the Determined Training Platform on &lt;a href=&quot;https://github.com/determined-ai/determined&quot;&gt;GitHub&lt;/a&gt;, read the &lt;a href=&quot;https://docs.determined.ai/latest/&quot;&gt;Documentation&lt;/a&gt;, and &lt;a href=&quot;https://join.slack.com/t/determined-community/shared_invite/zt-cnj7802v-KcVbaUrIzQOwmkmY7gP0Ew&quot;&gt;join the Determined Community Slack&lt;/a&gt; to get started. Stay tuned to the &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE DEV blog&lt;/a&gt; for more informative articles on this subject.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[How fine-grained data placement helps optimize application performance]]></title><description><![CDATA[Does data locality matter? In an ideal world, after all the work you put into developing an analytics or AI application, you would have…]]></description><link>https://developer.hpe.com/how-fine-grained-data-placement-helps-optimize-application-performance/</link><guid isPermaLink="false">https://developer.hpe.com/how-fine-grained-data-placement-helps-optimize-application-performance/</guid><pubDate>Fri, 22 Oct 2021 17:49:57 GMT</pubDate><content:encoded>&lt;p&gt;Does data locality matter? In an ideal world, after all the work you put into developing an analytics or AI application, you would have unlimited access to resources to run the application to get top performance when it’s deployed in production. But the world is not perfect.&lt;/p&gt;
&lt;p&gt;Access may be hampered by latency caused by distance, limits on compute power, transmission mediums, or poorly optimized databases. What can you do about these issues, and how does fine-grained control of data locality help?&lt;/p&gt;
&lt;h2&gt;Getting the resources your applications need&lt;/h2&gt;
&lt;p&gt;Even though there are limitations on the availability of resources in any shared system, you don’t need resources at the same level at all points in the lifecycle of your data. For instance, with &lt;a href=&quot;https://community.hpe.com/t5/HPE-Ezmeral-Uncut/Budgeting-time-for-AI-ML-projects/ba-p/7090807#.YTmA_-lKhE4&quot;&gt;AI and machine learning projects&lt;/a&gt;, data latency and computational requirements change at various stages in the lifetime of models. The learning process, when models are trained, tends to be compute-intensive as compared to requirements when models run in production. Model training also requires high throughput, low-latency data access. It might seem ideal (if there were no limitations on resources) to run your entire machine learning project on high performance computing (HPC) machines with specialized numerical accelerators such as graphical processing units (&lt;a href=&quot;https://www.hpe.com/us/en/solutions/hpc-high-performance-computing/nvidia-collaboration.html&quot;&gt;GPUs&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;But while these are ideal for the model training phase, they may be less useful for the high-bulk, low-compute steps of raw data ingestion, data exploration and initial processing for feature extraction. Instead, you may get the best net performance by carrying out these operations with data on traditional spinning media, especially if you live on a fixed budget (like everyone I know).&lt;/p&gt;
&lt;p&gt;The point is that it’s not just real-world limitations on resources that drive the need to place data on different types of storage media. For top performance, you’d want to consider how the application processes the data. In any event, you’d certainly want flexibility.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/text-block-data-locality.jpg&quot; width=&quot;867&quot; height=&quot;133&quot;&gt;&lt;/center&gt;
&lt;p&gt;To make this work, the system on which your applications are deployed must allocate resources efficiently. The good news is that with a data infrastructure engineered to support &lt;a href=&quot;https://www.hpe.com/us/en/resources/software/ai-and-analytics-systems.html&quot;&gt;scale-efficiency&lt;/a&gt; through granular data placement, it’s easy to optimize resource use and, in turn, to maximize application performance. Here’s how.&lt;/p&gt;
&lt;h2&gt;Match storage type to data requirements to maximize performance&lt;/h2&gt;
&lt;p&gt;The key to optimizing application performance and resource usage is to be able to match data throughput, latency and total size requirements with the appropriate type of storage media. Keep in mind that to get the full benefit of high performance storage, it’s important to &lt;a href=&quot;https://www.youtube.com/watch?v=4E2beYyhux8&quot;&gt;support GPUs&lt;/a&gt; and other accelerators from a data point of view.&lt;/p&gt;
&lt;p&gt;In large systems, this optimization is accomplished by giving dedicated HPC machines high performance storage, such as solid-state disks (SSDs) or NVMe drives, and provisioning regular machines with slower spinning media (HDDs) capable of handling large amounts of data storage at lower cost. This type of large-scale cluster is depicted in Figure 1.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/datastorage-fig1.png&quot; width=&quot;1200&quot; height=&quot;459&quot;&gt;&lt;/center&gt;
_Figure 1. Large cluster containing a combination of dedicated, fast-compute/fast storage nodes (orange) and regular nodes/slower storage devices (green)_
&lt;p&gt;In the figure above, the orange squares represent SSDs and orange lines represent machines with computational accelerators (such as GPUs). Green cylinders stand for slower spinning storage media (HDDs) and servers with green lines indicate traditional CPUs. In a typical machine learning/AI scenario, raw data is ingested on the non-HPC machines, where data exploration and feature extraction would take place on very large amounts of raw data. In a scale-efficient system, bulk analytic workloads, such as monthly billing, would also take place on the non-HPC (green) machines.&lt;/p&gt;
&lt;p&gt;Once feature extraction is complete, training data is written to fast storage machines (orange) with SSDs and GPUs, ready to support the model training process. Other compute-intensive applications, such as simulations, would also run on the fast machines.&lt;/p&gt;
&lt;p&gt;Smaller systems (clusters with fewer than 20 machines) often cannot afford dedicated HPC machines with high performance storage. Instead, the need for high performance computing is met by employing a heterogeneous mix: nodes with fast-compute capabilities but with a variety of data storage devices rather than just SSDs. This arrangement is shown in Figure 2.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/datstorage-fig2.png&quot; width=&quot;600&quot; height=&quot;224&quot;&gt;&lt;/center&gt;
_Figure 2. Small cluster containing fast-compute nodes (orange) having a mixture of SSDs (orange squares) plus slower HDDs (green cylinders) and regular nodes with HDDs only._
&lt;p&gt;Similar to the earlier example, you need a way to assign what data will be placed on which machines. Fortunately, &lt;a href=&quot;https://www.hpe.com/us/en/software/ezmeral-data-fabric.html&quot;&gt;HPE Ezmeral Data Fabric&lt;/a&gt; lets you use &lt;a href=&quot;https://docs.datafabric.hpe.com/62/AdministratorGuide/LBS.html&quot;&gt;storage labels&lt;/a&gt; to do just that.&lt;/p&gt;
&lt;h2&gt;Fine-grained data locality with HPE Ezmeral Data Fabric&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;HPE Ezmeral Data Fabric&lt;/a&gt; is a highly scalable, unifying data infrastructure engineered for data storage, management, and motion. Data fabric is software-defined and hardware agnostic. It lets you &lt;a href=&quot;https://docs.datafabric.hpe.com/62/AdministratorGuide/SettingUpTopology-Volume-MCS.html?hl=data%2Cplacement&quot;&gt;conveniently position data at the level of different racks, machines,&lt;/a&gt; or even &lt;a href=&quot;https://docs.datafabric.hpe.com/62/AdministratorGuide/LBS.html&quot;&gt;different storage types&lt;/a&gt; within machines.&lt;/p&gt;
&lt;p&gt;Figure 3 below shows how easy it is to create a data fabric volume, assign topology, and apply data placement policies via storage labels. (A data fabric volume is a data management unit that holds files, directories, NoSQL tables, and event streams together and acts like a directory with &lt;a href=&quot;https://community.hpe.com/t5/HPE-Ezmeral-Uncut/What-s-your-superpower-for-data-management/ba-p/7100920#.YThGGOlKhE4&quot;&gt;superpowers for data management&lt;/a&gt;. Many policies, including data placement, are assigned to volumes.)&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/datafabric-volume-fig3.png&quot; width=&quot;1382&quot; height=&quot;749&quot;&gt;&lt;/center&gt;
_Figure 3. Screenshot of the control plane for HPE Ezmeral Data Fabric._
&lt;p&gt;What happens if you need cross-cutting requirements for data placement? Data fabric lets you define data locality to address multiple goals, such as placement across multiple racks within topologies designated for different failure domains plus additional requirements for particular storage media imposed by assigning storage labels. Locality of the data fabric volume would have to meet both requirements.&lt;/p&gt;
&lt;p&gt;Figure 4 illustrates an example of fine-grained data placement accomplished using storage labels in a cluster with heterogeneous machines.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/storagelabels-fig4.jpg&quot; width=&quot;515&quot; height=&quot;372&quot;&gt;&lt;/center&gt;
_Figure 4. Using the storage labels feature of HPE Ezmeral Data Fabric for differential data placement on particular types of storage devices at the sub-machine level._
&lt;h2&gt;Benefits of high performance metadata with HPE Ezmeral Data Fabric&lt;/h2&gt;
&lt;p&gt;Performance in distributed systems running many different applications is further enhanced by fine-grained data placement using HPE Ezmeral Data Fabric storage labels. This capability lets you easily assign data locality down to the level of &lt;a href=&quot;https://docs.datafabric.hpe.com/62/glossary/gloss_storage_pool.html?hl=storage%2Cpool&quot;&gt;storage pools&lt;/a&gt;, units of storage within a machine, each made up of multiple disks. To understand how this additional performance boost works, you’ll need a little background on the data fabric and on how metadata is handled.&lt;/p&gt;
&lt;p&gt;HPE Ezmeral Data Fabric uses a large unit of data storage, known as a &lt;a href=&quot;https://docs.datafabric.hpe.com/62/MapR-DB/Architecture-MapRDBandMAPRFS.html?hl=data%2Ccontainer&quot;&gt;data fabric container&lt;/a&gt; (not to be confused with a Kubernetes container, despite the similarity in the name) as the unit of replication. Data replication is an automatic feature of the data fabric – the basis for data fabric’s self-healing capabilities – with data replicas spread across multiple machines by default. But you can also specify particular data placement policies, and data fabric containers and their replicas will automatically be placed according to the policies you apply.&lt;/p&gt;
&lt;p&gt;Data fabric also has a special container, known as a name container, which holds metadata for the files, directories, tables, and event streams associated with a data fabric volume. The name container is a strength of the HPE Ezmeral Data Fabric design because it provides a way for metadata to be distributed across a cluster, resulting in extreme reliability and high performance.&lt;/p&gt;
&lt;p&gt;With the fine granularity for data placement afforded by the storage labels feature, &lt;em&gt;data fabric containers and their replicas can have one placement policy while the name container can have a different policy&lt;/em&gt;. As Figure 4 shows, you can apply a label “Warm” to position data for bulk workloads on storage pools with slower devices while maintaining the metadata for that volume on fast solid-state devices by applying the label “Hot” to the name container.&lt;/p&gt;
&lt;p&gt;This situation can result in significant throughput improvements in processes such as massive disk-based sorts where a very large number of spill files must be created quickly (requiring super-fast meta-data updates on SSDs) and then these spill files must be written very quickly (requiring fast sequential I/O that hordes of hard drives can provide). The combination can work better than either option in isolation by providing the right resources for the right micro-workloads.&lt;/p&gt;
&lt;h2&gt;Making the most of fine-grained data placement&lt;/h2&gt;
&lt;p&gt;Turns out you don’t need unlimited resources to get excellent performance for your applications when you take advantage of the fine granularity of data placement afforded by HPE Ezmeral Data Fabric. You can easily assign data topologies when you create a data volume, and you can use convenient storage labels for differential data placement on particular types of storage devices even down to different storage pools within machines. And with the added capability of placing metadata independently of data containers, you can further optimize performance for both bulk applications and in situations using many small files.&lt;/p&gt;
&lt;p&gt;To find out more about the capabilities provided by HPE Ezmeral Data Fabric visit the &lt;a href=&quot;https://developer.hpe.com/platform/hpe-ezmeral-data-fabric/home/&quot;&gt;data fabric platform page&lt;/a&gt; in the HPE Developer Community.&lt;/p&gt;
&lt;p&gt;For a hands-on workshop highlighting data fabric volumes, go to &lt;a href=&quot;/hackshack/workshop/26&quot;&gt;HPE Ezmeral Data Fabric 101 – Get to know the basics around the data fabric&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;To learn about data access management using HPE Ezmeral Data Fabric, read the New Stack article &lt;a href=&quot;https://thenewstack.io/data-access-management-with-aces-vs-acls-the-power-of-and-and-not/&quot;&gt;Data Access Control via ACEs vs ACLs: The power of “AND” and “NOT”&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Coding styles: A personal preference or bad practice?]]></title><description><![CDATA[We all have different styles and preferences in everything in life, including how we write code. Imprinting your personality in the code…]]></description><link>https://developer.hpe.com/java-coding-style-or-bad-practice/</link><guid isPermaLink="false">https://developer.hpe.com/java-coding-style-or-bad-practice/</guid><pubDate>Fri, 22 Oct 2021 11:32:06 GMT</pubDate><content:encoded>
&lt;p&gt;We all have different styles and preferences in everything in life, including how we write code.&lt;/p&gt;
&lt;p&gt;Imprinting your personality in the code brings originality and a sense of &lt;a href=&quot;https://books.google.co.uk/books/about/Patterns_of_Software.html?id=0-i3QgAACAAJ&amp;#x26;redir_esc=y&quot;&gt;ownership and responsibility&lt;/a&gt;. It’s essential to keep us motivated, and it makes us feel good (at least it does for me). However, is one&apos;s coding style always just a harmless preference? Or does it impact readability and, hence, maintenance?&lt;/p&gt;
&lt;p&gt;This has been on my mind a lot lately. For instance, during a code review, I often question whether I should bring specific ways of &lt;a href=&quot;https://medium.com/javarevisited/5-coding-interview-books-to-prepare-for-programming-job-interviews-d8f63348afaf&quot;&gt;coding&lt;/a&gt; into the discussion or not. How does it affect the application? Is it readable? Is it easy to maintain?&lt;/p&gt;
&lt;p&gt;Or perhaps I should leave it alone, thinking to myself — &lt;em&gt;Don’t be picky, it’s just their preference, it’s not a matter of right or wrong.&lt;/em&gt;&lt;/p&gt;
&lt;h1&gt;Identifying a programmer&apos;s fingerprint&lt;/h1&gt;
&lt;p&gt;We could say a developer has a coding identity or ‘fingerprint’, similar to what happens with regular writing. When writing, there is often a pattern with which someone writes — the terms, vocabulary, structure. A linguistic expert, for instance, can identify the author of some anonymous material simply by analyzing these patterns.&lt;/p&gt;
&lt;p&gt;Analyzing these patterns can even reveal things such as the author’s age and place of birth. This technique is called &lt;em&gt;Stylometry.&lt;/em&gt; It’s even used in criminal investigations. Machine learning algorithms are used for &lt;a href=&quot;http://www.scielo.org.mx/scielo.php?script=sci_arttext&amp;#x26;pid=S1405-55462018000100047&quot;&gt;Stylometry&lt;/a&gt; as well, since they can process many texts and books and identify patterns.&lt;/p&gt;
&lt;p&gt;We probably can’t tell who committed a crime based on coding style (&lt;em&gt;can we?&lt;/em&gt;). But in, say, a team of ten developers with no strict standards to follow, I believe it’s possible to identify who wrote a code block without looking at the author information.&lt;/p&gt;
&lt;p&gt;In this post, I’ll list a number of different ways of writing code I’ve encountered throughout my career as a Software Engineer. I’ll focus mostly on Java, but some things are applicable in general.&lt;/p&gt;
&lt;p&gt;I’ll also offer my perspective on whether it is just a coding preference that we shouldn’t care about, or if perhaps there is a right (and wrong) way of doing it.&lt;/p&gt;
&lt;h1&gt;Multiple or single “returns”&lt;/h1&gt;
&lt;p&gt;&lt;img src=&quot;https://miro.medium.com/max/700/1*NNP98veaLxsykgqKP876Ig.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;One coding practice that tends to reflect a developer&apos;s preference is the use of a single or multiple &apos;returns&apos;.&lt;/p&gt;
&lt;p&gt;I used to prefer a single ‘return’ at the end of the method, and I still do this sometimes. But more recently, I find that I tend to return where the condition is satisfied — I think it’s easier to maintain (it looks uglier, though). You’re more sure of when the method returns a particular value, and you can be certain that any code after the return won’t be executed.&lt;/p&gt;
&lt;p&gt;Otherwise, you need to read every if-else or break inside a loop. Often the logic is not as simple as the one presented above.&lt;/p&gt;
&lt;p&gt;If I see some complex logic with multiple if-else conditions chained together, mixed with ‘break’ inside loops, etc., and one single return at the end, when a particular value could’ve been returned before — I’d explain my perspective and see if the person agrees with doing the change. However, I wouldn’t push it too much and be picky about it. It’s a subtle benefit that may be hard to convey.&lt;/p&gt;
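&lt;p&gt;Since the image above may not render everywhere, here is a minimal sketch of the two styles in Java (the method names and values are made up for illustration, not taken from the screenshot):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;// Single return: one exit point, but you track a mutable result variable
int discountSingleReturn(boolean isMember, int total) {
    int discount = 0;
    if (isMember) {
        if (total &gt; 100) {
            discount = 20;
        } else {
            discount = 10;
        }
    }
    return discount;
}

// Multiple returns: each branch exits as soon as the answer is known
int discountMultipleReturns(boolean isMember, int total) {
    if (!isMember) {
        return 0;
    }
    if (total &gt; 100) {
        return 20;
    }
    return 10;
}
&lt;/code&gt;&lt;/pre&gt;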
&lt;h1&gt;To Else or not?&lt;/h1&gt;
&lt;p&gt;&lt;img src=&quot;https://miro.medium.com/max/700/1*-_0Gs6GdptRNMA1Efh5KAg.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Another variation I tend to see is whether the coder uses the Else statement. Is it really necessary? I commonly do the version on the left — “Default value with no else” when it’s a simple variable assignment case. It just feels cleaner to me.&lt;/p&gt;
&lt;p&gt;A counter-argument could be that the first example uses fewer resources, because you start with null and at most one value (A or B) is assigned. With the other version, up to two variable assignments could happen (if booleanFlag is true). I’d agree with that, but not for all cases. Setting a default first would usually be fine. It depends on what is being executed as the ‘default’.&lt;/p&gt;
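&lt;p&gt;To make the comparison concrete without the image, here is a made-up illustration of the two shapes (not the exact code from the screenshot):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;// With else: exactly one assignment happens
String label;
if (booleanFlag) {
    label = &quot;A&quot;;
} else {
    label = &quot;B&quot;;
}

// Default value with no else: up to two assignments, but one branch shorter
String label = &quot;B&quot;;
if (booleanFlag) {
    label = &quot;A&quot;;
}
&lt;/code&gt;&lt;/pre&gt;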
&lt;p&gt;This example was one ‘challenge’ that my Bachelor course coordinator threw at us newbies in the first semester during a programming class — “How could you rewrite the first version in fewer lines?!”&lt;/p&gt;
&lt;p&gt;No one in the class could answer it. Everyone was still coming to terms with the fact that the course wasn’t really about learning Microsoft Office.&lt;/p&gt;
&lt;p&gt;Although I prefer the second version (for a simple variable assignment), I’d probably not bring it up to discuss or ask to change in a code review.&lt;/p&gt;
&lt;h1&gt;Curly braces or not&lt;/h1&gt;
&lt;p&gt;&lt;img src=&quot;https://miro.medium.com/max/700/1*rHOOuQTYZtI66rephlNgZg.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Curly braces are used to delimit the start and end of a block of code. Curly braces become ‘optional’ when only one statement is inside an if condition or a while or for loop.&lt;/p&gt;
&lt;p&gt;Both code snippets do the same thing; there is no difference functionality wise. Which one do you prefer?&lt;/p&gt;
&lt;p&gt;For me, I’m totally in favour of using curly braces, always. It shouldn’t be optional. I think that mainly because, in languages like Java, the indentation doesn’t drive what will be executed as part of the if condition or loop (for Python, it does, for example). Indentation only, without curly braces, cannot be relied on — it may trick you into thinking that something will be executed (or won’t be executed) when it won’t (or when it will).&lt;/p&gt;
&lt;p&gt;So NOT using curly braces may lead to hidden bugs and bad readability in general. In contrast, using them leaves no room for doubt about which lines will run. It becomes easier to maintain, in my view.&lt;/p&gt;
&lt;p&gt;Here are some examples to help illustrate what I mean:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;if (count &gt; 10)
    System.out.println(1);
    System.out.println(2);
    System.out.println(3);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When you read the code above, you might think all three lines will be executed if the condition is satisfied. But that’s not true. There are no curly braces, so only the first line is governed by the if: 1 is printed only when, let’s say, the count is equal to eleven. Two and three will be printed in any case, even if, for example, the count is five.&lt;/p&gt;
&lt;p&gt;Another example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;int count=15;
if (count &gt; 10)
    if (count &gt; 20)
        return 1;
else    
    return 2;
return 3;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The else is aligned with the first if condition, but it actually belongs to the inner if just before it. The program returns two, because the count is greater than ten but not greater than twenty.&lt;/p&gt;
&lt;p&gt;In a code review, I would probably ask to change it (very politely and diplomatically, of course). The other team member may prefer without curly braces and depending on my position, that’s fine — I wouldn’t push it too much.&lt;/p&gt;
&lt;h1&gt;Checked or unchecked exception&lt;/h1&gt;
&lt;p&gt;Exceptions are events that happen outside of the normal flow. They allow programmers to separate the code that deals with the success path from the code that deals with errors.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://medium.com/javarevisited/10-free-courses-to-learn-java-in-2019-22d1f33a3915?source=collection_home---4------8-----------------------&quot;&gt;Java&lt;/a&gt; has its own Exception classes, or the developer can create their own by extending Exception or RuntimeException.&lt;/p&gt;
&lt;p&gt;Let’s say there is some particular error validation related to your business. You could create a class, for example, ProductNotFoundException, that extends the Exception class.&lt;/p&gt;
&lt;p&gt;Another characteristic of how exceptions in Java work is that there are two types of exceptions: Checked and Unchecked.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://javarevisited.blogspot.com/2011/12/checked-vs-unchecked-exception-in-java.html&quot;&gt;Checked exceptions&lt;/a&gt; are exceptions that extend the Exception class. Their behaviour is as follows: if code inside method A throws a checked exception, any method that calls method A must handle the checked exception by either catching it or declaring that it throws it (or perhaps both). The code will not compile otherwise. Extending a &lt;a href=&quot;https://docs.oracle.com/javase/tutorial/essential/exceptions/definition.html&quot;&gt;Checked exception&lt;/a&gt; is a way to force programmers to handle a specific error.&lt;/p&gt;
&lt;p&gt;Unchecked exceptions are used for unrecoverable errors. Such errors are not to be handled. Instead, programmers should tackle the root cause that triggers them. Example: &lt;a href=&quot;https://javarevisited.blogspot.com/2012/06/common-cause-of-javalangnullpointerexce.html&quot;&gt;NullpointerException&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;These exceptions extend RuntimeException and are different from Checked ones. The caller method is not forced to handle them by catching or throwing them.&lt;/p&gt;
&lt;p&gt;Despite being used for unrecoverable errors, one could create an Unchecked exception. It’s just a matter of extending RuntimeException.&lt;/p&gt;
&lt;p&gt;Any method can handle such exceptions, but the compiler doesn’t complain if none does. That means that you can have the exception handling code only where it is needed.&lt;/p&gt;
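&lt;p&gt;A minimal sketch of the two options, reusing the ProductNotFoundException idea mentioned earlier (the class names and the findProduct call are illustrative, not from a real API):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;// Checked: callers must catch it or declare it with &quot;throws&quot;, or the code won&apos;t compile
class ProductNotFoundException extends Exception {
    ProductNotFoundException(String message) {
        super(message);
    }
}

// Unchecked: callers are free to ignore it; the compiler stays quiet
class ProductNotFoundRuntimeException extends RuntimeException {
    ProductNotFoundRuntimeException(String message) {
        super(message);
    }
}

// A caller of a method declared as &quot;throws ProductNotFoundException&quot; must do something like:
try {
    findProduct(&quot;42&quot;);
} catch (ProductNotFoundException e) {
    // log the error, show a message to the user, etc.
}
&lt;/code&gt;&lt;/pre&gt;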
&lt;p&gt;I learned and used to code by always using &lt;a href=&quot;http://www.java67.com/2012/12/difference-between-runtimeexception-and-checked-exception.html&quot;&gt;Checked exceptions&lt;/a&gt;. You probably learned that way too. If you implement a method that calls another method A that generates a checked error, the compiler will tell you that you need to do something about it. And the intent of whoever created method A was exactly that: to alert and force others to handle the error.&lt;/p&gt;
&lt;p&gt;Oracle does recommend always using a Checked exception if you expect to recover from the error. &lt;a href=&quot;https://docs.oracle.com/javase/tutorial/essential/exceptions/runtime.html&quot;&gt;https://docs.oracle.com/javase/tutorial/essential/exceptions/runtime.html&lt;/a&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Here’s the bottom line guideline: If a client can reasonably be expected to recover from an exception, make it a checked exception. If a client cannot do anything to recover from the exception, make it an unchecked exception.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I admit that despite Oracle’s recommendation and being a good practice to use Checked exceptions, I have extended RuntimeException before. I understand that throwing exceptions should be considered just as essential as the method’s parameters and return value; &lt;a href=&quot;https://docs.oracle.com/javase/tutorial/essential/exceptions/runtime.html&quot;&gt;it’s part of the method programming interface&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;But no one creates a method that receives a parameter and does nothing with it. Likewise, I find that catching an exception and re-throwing it upstream without doing any meaningful handling (logging it, returning a message to the user) creates a bit of clutter. It’s unnecessary.&lt;/p&gt;
&lt;p&gt;With unchecked exceptions, only the method that generates the error and the one that handles it need to deal with it. It’s a calculated risk I choose to take sometimes, calculated in the sense that an error that is supposed to be handled may not be: another developer that calls your method won’t be alerted by the compiler to handle the exception that your method raises. That’s a drawback of making it Unchecked, which is why it’s generally considered a bad practice.&lt;/p&gt;
&lt;p&gt;If I see that one of the team members chose to create and use an Unchecked exception, I would probably want to know the thought process and make sure they know the pros and cons.&lt;/p&gt;
&lt;h1&gt;Using an If then versus an Else exception&lt;/h1&gt;
&lt;p&gt;&lt;img src=&quot;https://miro.medium.com/max/700/1*i9wP8lt_G0auF6VFncfKYg.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Another thing programmers tend to differ on is the use of an If then exception when an Else exception would also work.&lt;/p&gt;
&lt;p&gt;I’ve experimented with both ways throughout my career as a developer. Today I prefer the version on the right — “If then exception”. I see it as clearer — easier to read where errors are generated. And I usually have it aside from the main logic in a private ‘validate’ method.&lt;/p&gt;
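&lt;p&gt;Roughly, the two shapes being compared look like this (a made-up validation example, not the code from the screenshot):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;// &quot;Else exception&quot;: the error case hangs off the end of the happy path
if (order.isValid()) {
    process(order);
} else {
    throw new IllegalArgumentException(&quot;Invalid order&quot;);
}

// &quot;If then exception&quot;: validate first and fail fast; the main logic then reads straight through
if (!order.isValid()) {
    throw new IllegalArgumentException(&quot;Invalid order&quot;);
}
process(order);
&lt;/code&gt;&lt;/pre&gt;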
&lt;p&gt;I would probably try to change it in a code review if one of my peers used the second version. Unless the code is as simple as the example in this section — then I’d leave it (maybe).&lt;/p&gt;
&lt;h1&gt;Positioning the curly braces&lt;/h1&gt;
&lt;p&gt;&lt;img src=&quot;https://miro.medium.com/max/700/1*w0PHfP8XQZejoWbIJRKRuw.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;This one is just cosmetics. It’s silly. It’s like preferring toast cut diagonally versus horizontally or vertically (You’re probably wondering — How can someone &lt;em&gt;not&lt;/em&gt; choose diagonally?! Anyway…).&lt;/p&gt;
&lt;p&gt;I find it funny that, even in things like this, people have preferences.&lt;/p&gt;
&lt;p&gt;I prefer the first one, with curly braces in the same line as if condition or loop. I don’t see any considerable benefit of one or the other. I would not ask another programmer to change it.&lt;/p&gt;
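&lt;p&gt;For readers without the image, the two placements look roughly like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;// Opening brace on the same line as the condition
if (count &gt; 10) {
    System.out.println(count);
}

// Opening brace on its own line
if (count &gt; 10)
{
    System.out.println(count);
}
&lt;/code&gt;&lt;/pre&gt;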
&lt;h1&gt;Final thoughts&lt;/h1&gt;
&lt;p&gt;Certain coding style choices are personal, with no benefits or drawbacks relative to others. It’s like preferring blue to red, or oranges to apples.&lt;/p&gt;
&lt;p&gt;However, some other preferences are more arguable — does the choice make the code less readable or more error-prone? Out of the stylistic differences I covered, the omission of curly braces and the checked-versus-unchecked exception examples stand out. These are the ones with the most impact.&lt;/p&gt;
&lt;p&gt;Even if there are coding standards set in place, it&apos;s probably best if they aren&apos;t too rigid. One still needs to allow the developer a certain amount of leeway to make their own personal mark. If it were up to me, I would set a rule to always use curly braces and, possibly, to use checked exceptions (because it tends to be safer), but that&apos;s about it. In the end, it should be discussed and agreed upon as a team.&lt;/p&gt;
</content:encoded></item><item><title><![CDATA[Determined AI is Joining Hewlett Packard Enterprise]]></title><description><![CDATA[Editor’s Note: Determined AI products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in…]]></description><link>https://developer.hpe.com/determined-ai-is-joining-hewlett-packard-enterprise/</link><guid isPermaLink="false">https://developer.hpe.com/determined-ai-is-joining-hewlett-packard-enterprise/</guid><pubDate>Tue, 12 Oct 2021 17:24:39 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; Determined AI products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2021 may have older product names that differ from current solutions.&lt;/p&gt;
&lt;p&gt;Today is a big day for Determined AI, our customers, and our open source community. We are thrilled to announce that &lt;a href=&quot;https://www.hpe.com/us/en/newsroom/press-release/2021/06/hewlett-packard-enterprise-acquires-determined-ai-to-accelerate-artificial-intelligence-innovation-with-fast-and-simple-machine-learning-modeling.html&quot;&gt;Determined AI is joining one of Silicon Valley’s most iconic companies: Hewlett Packard Enterprise (HPE)&lt;/a&gt;, working under the umbrella of the High Performance Computing (HPC) and Mission Critical Solutions (MCS) business unit.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/hpe-logo.png&quot; width=&quot;500&quot; height=&quot;333&quot;&gt;&lt;/center&gt;
&lt;p&gt;This is a &lt;em&gt;&lt;strong&gt;massive&lt;/strong&gt;&lt;/em&gt; accelerant for our mission to empower users to efficiently build cutting-edge AI applications. Over the last several years, building AI applications has become &lt;em&gt;&lt;strong&gt;extremely&lt;/strong&gt;&lt;/em&gt; compute, data, and communication intensive. Ten years ago you could do cutting-edge computer vision research on a laptop. Today, you need a massive farm of GPUs or other specialized chips to remain competitive. These problems aren’t unique to vision or academic research – they’re affecting organizations large and small, as we are hearing from our thriving open source community. To put it a different way: AI is rapidly becoming a High Performance Computing problem. HPE is already a global leader in designing and delivering High Performance Computing systems and, via their acquisitions of Cray and SGI, they have decades of experience in the space working with some of the most sophisticated users on the planet. We are thrilled about the opportunity to partner with HPE to deliver co-designed software and hardware and tackle some of society’s most pressing challenges.&lt;/p&gt;
&lt;p&gt;HPE shares our vision that driving an open standard for AI software infrastructure is the fastest way for the industry to realize the potential of AI. Consequently, &lt;strong&gt;HPE is committed to investing in and rapidly growing the Determined Training Platform as an open source project.&lt;/strong&gt; Our customers and open source community members will continue to receive the same high level of service and support that they always have, from a team of experts who are intimately familiar with the challenges they’re facing. Moving forward, we’ll be in a position to serve a much more global network of users, both large and small – we’re excited to continue delivering new features to both our customers and our open-source community.&lt;/p&gt;
&lt;p&gt;Relatedly, &lt;a href=&quot;https://www.determined.ai/contact&quot;&gt;we are actively hiring!&lt;/a&gt; We are truly humbled by the amazing team we have had the good fortune to work with over the past 4 years. Building infrastructure for AI is fundamentally an interdisciplinary endeavor, and from the day we started the company, we have been committed to fostering an environment that encourages diversity of thought. At HPE we’ll be rapidly expanding the team’s headcount and continuing to build on our culture of collaboration.&lt;/p&gt;
&lt;p&gt;Today is a big day for Determined AI, and we’re immensely grateful to the customers and community members we have had the privilege of working with for the past 4 years. The next phase in our journey promises to be even more exciting, and we’re thrilled to continue to work with the ML community to build the future of AI infrastructure.&lt;/p&gt;
&lt;p&gt;For more information on using Determined AI to build and train deep learning (DL) models at scale, please check out the &lt;a href=&quot;https://developer.hpe.com/platform/determined-ai/home/&quot;&gt;Determined AI platform page&lt;/a&gt; on HPE DEV.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Fast data processing pipeline for predicting flight delays using Apache APIs: Kafka, Spark Streaming and Machine Learning (part 3)]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may…]]></description><link>https://developer.hpe.com/fast-data-processing-pipeline-for-predicting-flight-delays-using-apache-apis-kafka-spark-streaming-and-machine-learning-part-3/</link><guid isPermaLink="false">https://developer.hpe.com/fast-data-processing-pipeline-for-predicting-flight-delays-using-apache-apis-kafka-spark-streaming-and-machine-learning-part-3/</guid><pubDate>Mon, 11 Oct 2021 21:12:59 GMT</pubDate><content:encoded>&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;&quot;authorDisplayName&quot;: &quot;Carol McDonald&quot;,
&quot;tags&quot;: &quot;use-cases&quot;,
&quot;publish&quot;: &quot;2018-02-08T12:00:00.000Z&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Editor&apos;s Note:&lt;/strong&gt; This is a 3-part Series, see the previously published posts below:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/blog/fast-data-processing-pipeline-for-predicting-flight-delays-using-apache-/&quot;&gt;Part 1 - Spark Machine Learning&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/blog/fast-data-processing-pipeline-for-predicting-flight-delays-using-apache-apis-kafka-spark-streaming-and-machine-learning-part-2/&quot;&gt;Part 2 - Kafka and Spark Streaming&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;According to Bob Renner, former CEO of Liaison Technologies, the ability to blend machine learning with real-time transactional data flowing through a single platform opens a world of new possibilities, such as enabling organizations to take advantage of opportunities as they arise. According to &lt;a href=&quot;https://www.gartner.com/newsroom/id/3812063&quot;&gt;Gartner&lt;/a&gt;, over the next few years virtually every app and service will incorporate some level of machine learning. Leveraging these opportunities requires fast and scalable data processing pipelines.&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;MapR Platform&quot; src=&quot;/img/mapr-platform.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;p&gt;This is the third in a series of blog posts that discuss the architecture of a data pipeline that combines streaming data with machine learning and fast storage. &lt;a href=&quot;https://developer.hpe.com/blog/fast-data-processing-pipeline-for-predicting-flight-delays-using-apache-/&quot;&gt;The first post&lt;/a&gt; discussed creating a machine learning model to predict flight delays. The &lt;a href=&quot;https://developer.hpe.com/blog/fast-data-processing-pipeline-for-predicting-flight-delays-using-apache-apis-kafka-spark-streaming-and-machine-learning-part-2/&quot;&gt;second post&lt;/a&gt; discussed using the saved model with streaming data to do a real-time analysis of flight delays. This third post will discuss fast storage and analysis with MapR Database, Apache Spark, Apache Drill and OJAI.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Machine Learning Logistics and Data Pipelines&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Machine Learning usually refers to the model training piece of an ML workflow. But, as data fabric expert Ted Dunning says, 90% of the effort around Machine Learning is data logistics, which includes all of the aspects that occur before and after this training. When you combine event streams with microservices, you can greatly enhance the agility with which you build, deploy, and maintain complex data pipelines. Pipelines are constructed by chaining together microservices, each of which listens for the arrival of some data, performs its designated task, and optionally publishes its own messages to another topic. Combining event-driven data pipelines with machine learning can handle the logistics of machine learning in a flexible way by:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Making input and output data available to independent consumers&lt;/li&gt;
&lt;li&gt;Managing and evaluating multiple models and easily deploying new models&lt;/li&gt;
&lt;li&gt;Monitoring and analyzing models, with historical and real-time data.&lt;/li&gt;
&lt;/ul&gt;
&lt;center&gt;&lt;img alt=&quot;MapR Architectures&quot; src=&quot;/img/mapr-architectures.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;p&gt;Architectures for these types of applications are discussed in more detail in the eBooks: &lt;a href=&quot;https://www.oreilly.com/library/view/machine-learning-logistics/9781491997628/&quot;&gt;Machine Learning Logistics&lt;/a&gt;, &lt;a href=&quot;https://www.oreilly.com/library/view/streaming-architecture/9781491953914/&quot;&gt;Streaming Architecture&lt;/a&gt;, and &lt;a href=&quot;https://www.academia.edu/41522528/A_Practical_Guide_to_Microservices_and_Containers_Mastering_the_Cloud_Data_and_Digital_Transformation&quot;&gt;Microservices and Containers&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The following figure depicts the (simplified) data pipeline for this tutorial:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Flight trip data is published to a MapR Event Streams (ES) topic using the Kafka API. (Note this data contains the actual delay label. In the real-world architecture, the actual delay label would come later in a different topic, but to keep the tutorial code simple it is combined with the input data).&lt;/li&gt;
&lt;li&gt;A Spark Streaming application subscribed to the first topic enriches the event with the flight predictions and publishes the results in JSON format to another topic. (In the real-world architecture, there would be multiple consumers publishing model predictions, but to keep the tutorial code simple there is only one here.)&lt;/li&gt;
&lt;li&gt;A Spark Streaming application subscribed to the second topic stores the flight trip data and predictions in MapR Database using the Spark MapR Database Connector.&lt;/li&gt;
&lt;li&gt;Apache Spark SQL, Apache Drill SQL, and Open JSON applications query MapR Database to analyze flight data and prediction performance.&lt;/li&gt;
&lt;/ul&gt;
&lt;center&gt;&lt;img alt=&quot;MapR Database&quot; src=&quot;/img/mapr-db.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;p&gt;&lt;strong&gt;How to Store the Data&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;One of the challenges when you are processing lots of streaming data is where to store it. With a relational database and a normalized schema, related data is stored in different tables. Queries joining this data together can cause bottlenecks with lots of data. For this application, &lt;a href=&quot;https://docs.datafabric.hpe.com/62/MapR-DB/developing_client_applications_for_mapr_db.html&quot;&gt;MapR Database JSON&lt;/a&gt;, a high-performance NoSQL database, was chosen for its scalability and flexible ease of use with JSON. MapR Database and a denormalized schema scale because data that is read together is stored together.&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;Storage Model&quot; src=&quot;/img/storage-model.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;p&gt;With MapR Database (HBase API or JSON API), a table is automatically partitioned into tablets across a cluster by key range, providing for scalable and fast reads and writes by row key.&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;MapR Database Connector&quot; src=&quot;/img/mapr-db-connector.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;p&gt;The Spark MapR Database Connector leverages the Spark &lt;a href=&quot;https://databricks.com/blog/2015/01/09/spark-sql-data-sources-api-unified-data-access-for-the-spark-platform.html&quot;&gt;DataSource API&lt;/a&gt;. The connector architecture has a connection object in every Spark Executor, allowing for distributed parallel writes, reads, or scans with MapR Database tablets.&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;Spark Executor&quot; src=&quot;/img/spark-executor.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;p&gt;&lt;strong&gt;JSON Schema Flexibility&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;MapR Database supports JSON documents as a native data store. MapR Database makes it easy to store, query and build applications with JSON documents.  The Spark connector makes it easy to build real-time or batch pipelines between your JSON data and MapR Database and leverage Spark within the pipeline.&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;JSON Schema Flexibility&quot; src=&quot;/img/json-schema-flexibility.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;p&gt;JSON facilitates the natural evolution of your data schema during the life of your application. For example, suppose at first we have the following schema, where each JSON message has the predicted flight delay using a decision tree:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
    &quot;_id&quot;: &quot;UA_2017-03-28_DEN_SFO_721&quot;,
    &quot;dofW&quot;: 2,
    &quot;carrier&quot;: &quot;UA&quot;,
    &quot;origin&quot;: &quot;DEN&quot;,
    &quot;dest&quot;: &quot;SFO&quot;,
    &quot;crsdephour&quot;: 11,
    &quot;crsdeptime&quot;: 1120.0,
    &quot;crsarrtime&quot;: 1308.0,
    &quot;crselapsedtime&quot;: 168.0,
    &quot;pred_dtree&quot;: 1.0
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Later, you can capture more prediction values without changing the architecture of your application and without updating a database schema, simply by adding attributes. In the example below, we have added predictions for other machine learning models. These can be added dynamically to the same document instance in MapR Database without any database schema changes:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
    &quot;_id&quot;: &quot;UA_2017-03-28_DEN_SFO_721&quot;,
    &quot;dofW&quot;: 2,
    &quot;carrier&quot;: &quot;UA&quot;,
    &quot;origin&quot;: &quot;DEN&quot;,
    &quot;dest&quot;: &quot;SFO&quot;,
    &quot;crsdephour&quot;: 11,
    &quot;crsdeptime&quot;: 1120.0,
    &quot;crsarrtime&quot;: 1308.0,
    &quot;crselapsedtime&quot;: 168.0,
    &quot;pred_dtree&quot;: 1.0,
    &quot;pred_randforest&quot;: 1.0,
    &quot;pred_svm&quot;: 1.0,
    &quot;actual_delay&quot;: 1.0
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;MapR Event Store allows processing of the same messages by different consumers. This makes it easy to add different consumers for the same message. With this type of architecture and flexible schema, you can easily add and deploy new microservices with new machine learning models.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Spark Streaming writing to MapR Database&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The MapR Database OJAI Connector for Apache Spark enables you to use &lt;a href=&quot;https://docs.datafabric.hpe.com/62/Spark/SavingDStreamMapRDB.html&quot;&gt;MapR Database as a sink for Apache Spark Data Streams&lt;/a&gt;.&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;Spark Streaming writing to MapR Database&quot; src=&quot;/img/spark-streaming-writing-to-mapr-db.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;p&gt;You can read about the MapR Event Streams Spark Streaming code in &lt;a href=&quot;https://developer.hpe.com/blog/fast-data-processing-pipeline-for-predicting-flight-delays-using-apache-apis-kafka-spark-streaming-and-machine-learning-part-2/&quot;&gt;part 2 of this series&lt;/a&gt;; here, we will focus on Spark streaming writing to MapR Database. The messages from the MapR Database topic are in JSON format and contain the following for each flight: the flight id, day of the week, carrier, origin, destination, scheduled departure hour, scheduled departure time, scheduled arrival time, scheduled travel time, delay prediction, and actual delay label (Note in the real-world architecture, the actual delay label would come later in a different topic, but to keep the tutorial code simple, it is combined here). An example is shown below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
    &quot;_id&quot;: &quot;UA_2017-03-28_DEN_SFO_721&quot;,
    &quot;dofW&quot;: 2,
    &quot;carrier&quot;: &quot;UA&quot;,
    &quot;origin&quot;: &quot;DEN&quot;,
    &quot;dest&quot;: &quot;SFO&quot;,
    &quot;crsdephour&quot;: 11,
    &quot;crsdeptime&quot;: 1120.0,
    &quot;crsarrtime&quot;: 1308.0,
    &quot;crselapsedtime&quot;: 168.0,
    &quot;label&quot;: 0.0,
    &quot;pred_dtree&quot;: 1.0
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Below, we use a Scala case class and &lt;a href=&quot;https://spark.apache.org/docs/latest/sql-programming-guide.html#programmatically-specifying-the-schema&quot;&gt;StructType&lt;/a&gt; to define the schema corresponding to the input data.&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;Scala case class 1&quot; src=&quot;/img/scala-case-class.png&quot; width=&quot;500&quot;&gt;&lt;/center&gt;
&lt;center&gt;&lt;img alt=&quot;Scala case class 2&quot; src=&quot;/img/scala-case-class-2.png&quot; width=&quot;500&quot;&gt;&lt;/center&gt;
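&lt;p&gt;In case the screenshots above do not render in your feed reader, a rough Java equivalent of defining such a schema with StructType would look like the following (field names follow the JSON sample earlier; this is a sketch, not the tutorial&apos;s exact Scala code):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;

// Schema matching the JSON flight messages shown above
StructType schema = new StructType()
    .add(&quot;_id&quot;, DataTypes.StringType)
    .add(&quot;dofW&quot;, DataTypes.IntegerType)
    .add(&quot;carrier&quot;, DataTypes.StringType)
    .add(&quot;origin&quot;, DataTypes.StringType)
    .add(&quot;dest&quot;, DataTypes.StringType)
    .add(&quot;crsdephour&quot;, DataTypes.IntegerType)
    .add(&quot;crsdeptime&quot;, DataTypes.DoubleType)
    .add(&quot;crsarrtime&quot;, DataTypes.DoubleType)
    .add(&quot;crselapsedtime&quot;, DataTypes.DoubleType)
    .add(&quot;label&quot;, DataTypes.DoubleType)
    .add(&quot;pred_dtree&quot;, DataTypes.DoubleType);
&lt;/code&gt;&lt;/pre&gt;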
&lt;p&gt;We use the KafkaUtils createDirectStream method with Kafka configuration parameters to create an input stream from a MapR Event Store topic. This creates a DStream that represents the stream of incoming data, where each message is a key value pair. We use the DStream map transformation to create a DStream with the message values.&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;KafkaUtils createDirectStream method&quot; src=&quot;/img/kafkautils-createdirectstream-method.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;center&gt;&lt;img alt=&quot;DStream&quot; src=&quot;/img/dstream.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;p&gt;In the code below, each RDD in the valuesDStream is transformed into a Spark Dataset. Then, the MapR Database Spark Connector DStream saveToMapRDB method performs a parallel partitioned bulk insert of JSON FlightwPred objects into MapR Database.&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;RDD in the valuesDStream&quot; src=&quot;/img/rdd-in-valuesdstream.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;center&gt;&lt;img alt=&quot;Save DStream to MapR Database JSON&quot; src=&quot;/img/save-dsstream-mapr-db-json.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;p&gt;&lt;strong&gt;Querying MapR Database JSON with Spark SQL&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The Spark MapR Database Connector enables users to perform complex SQL queries and updates on top of MapR Database using a Spark Dataset, while applying critical techniques such as projection and filter pushdown, custom partitioning, and data locality.&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;Application&quot; src=&quot;/img/application.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;p&gt;A Spark Dataset is a distributed collection of data. Dataset is a newer interface that provides the benefits of strong typing, the ability to use powerful lambda functions, and efficient object serialization/deserialization, combined with the benefits of Spark SQL&apos;s optimized execution engine.&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;Spark Dataset&quot; src=&quot;/img/spark-dataset.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;p&gt;A DataFrame is a Dataset organized into named columns (a Dataset[Row]). (In Spark 2.0, the DataFrame APIs merged with the Dataset APIs.)&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;Unified Apache Spark 2.0 API&quot; src=&quot;/img/unified-apache-spark.png&quot; width=&quot;500&quot;&gt;&lt;/center&gt;
&lt;p&gt;&lt;strong&gt;Loading data from MapR Database into a Spark Dataset&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;To &lt;a href=&quot;https://docs.datafabric.hpe.com/62/Spark/LoadDataFromMapRDBasDataset.html&quot;&gt;load data from a MapR Database JSON&lt;/a&gt; table into an Apache Spark Dataset, we invoke the loadFromMapRDB method on a SparkSession object, providing the tableName, schema and case class. This will return a Dataset of FlightwPred objects:&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;load data from a MapR Database JSON&quot; src=&quot;/img/load-data-from-mapr-db-json.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;p&gt;&lt;strong&gt;Explore and query the Flight data with Spark SQL&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Datasets provide a domain-specific language for structured data manipulation in Scala, Java, and Python. Below are some examples in Scala. The Dataset show() action displays the top 20 rows in tabular form.&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;top 20 rows in a tabular form&quot; src=&quot;/img/top-20-rows-tabular-form.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;p&gt;In the code below, a filter is used to count the predicted delays, the actual delays, and the total. These counts are then used to calculate the ratios of wrong, correct, and false-positive predictions. These types of calculations are useful for the continued analysis of models in production.&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;delays, actual delays and total&quot; src=&quot;/img/filter-count-predicted-delays.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;p&gt;The output is shown below.&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;Output of predicted delays, actual delays and total&quot; src=&quot;/img/delays-actual-delays-total-output.png&quot; width=&quot;500&quot;&gt;&lt;/center&gt;
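&lt;p&gt;If the screenshots are hard to read in a feed reader, a rough Java sketch of the same idea looks like this (assuming the Dataset loaded above is named flights; the variable names are made up for illustration):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;import static org.apache.spark.sql.functions.col;

// &quot;flights&quot; is assumed to be the Dataset loaded from MapR Database in the previous step
long total = flights.count();
long predictedDelay = flights.filter(col(&quot;pred_dtree&quot;).equalTo(1.0)).count();
long actualDelay = flights.filter(col(&quot;label&quot;).equalTo(1.0)).count();
long falsePositives = flights.filter(
    col(&quot;pred_dtree&quot;).equalTo(1.0).and(col(&quot;label&quot;).equalTo(0.0))).count();
long wrong = flights.filter(col(&quot;pred_dtree&quot;).notEqual(col(&quot;label&quot;))).count();

// Ratios useful for tracking model performance in production
double ratioWrong = (double) wrong / total;
double ratioCorrect = 1.0 - ratioWrong;
double ratioFalsePositive = (double) falsePositives / total;
System.out.println(&quot;total: &quot; + total + &quot;, predicted delays: &quot; + predictedDelay
    + &quot;, actual delays: &quot; + actualDelay);
System.out.println(&quot;wrong: &quot; + ratioWrong + &quot;, correct: &quot; + ratioCorrect
    + &quot;, false positive: &quot; + ratioFalsePositive);
&lt;/code&gt;&lt;/pre&gt;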
&lt;p&gt;&lt;strong&gt;What is the count of predicted delay/notdelay for this dstream dataset?&lt;/strong&gt;&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;What is the count of predicted delay/notdelay for this dstream dataset?&quot; src=&quot;/img/count-predicted-dstream-dataset.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;p&gt;You can register a Dataset as a temporary table using a given name and then run Spark SQL against it; a minimal sketch of that registration step is shown below, followed by some example Spark SQL queries on the Dataset of FlightwPred objects:&lt;/p&gt;
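&lt;p&gt;A minimal sketch of the registration step, assuming the Dataset from the previous section is named flights and the SparkSession is named spark:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;// Register the Dataset under the table name used in the SQL queries below
flights.createOrReplaceTempView(&quot;flight&quot;);

// Any Spark SQL statement can now refer to it as a table
spark.sql(&quot;select carrier, count(*) from flight group by carrier&quot;).show();
&lt;/code&gt;&lt;/pre&gt;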
&lt;p&gt;&lt;strong&gt;What is the count of predicted delay/notdelay by day of the week?&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;scala&gt; spark.sql(&quot;select dofW, pred_dtree, count(pred_dtree) from flight group by dofW, pred_dtree order by dofW&quot;).show
&lt;/code&gt;&lt;/pre&gt;
&lt;center&gt;&lt;img alt=&quot;Spark SQL queries on the Dataset of FlightwPred objects&quot; src=&quot;/img/queries-dataset-flightwpred-objects.png&quot; width=&quot;500&quot;&gt;&lt;/center&gt;
&lt;p&gt;&lt;strong&gt;What is the count of predicted delay/notdelay by destination?&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;scala&gt; spark.sql(&quot;select dest, pred_dtree, count(pred_dtree) from flight group by dest, pred_dtree order by dest&quot;).show
&lt;/code&gt;&lt;/pre&gt;
&lt;center&gt;&lt;img alt=&quot;Delay/NotDelay destination&quot; src=&quot;/img/delay-notdelay-destination.png&quot; width=&quot;500&quot;&gt;&lt;/center&gt;
&lt;p&gt;(The complete code, instructions and more example queries are in the github code link at the end.)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Querying the Data with Apache Drill&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Apache Drill is an open source, low-latency query engine for big data that delivers interactive SQL analytics at petabyte scale. Drill provides a massively parallel processing execution engine, built to perform distributed query processing across the various nodes in a cluster.&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;Querying the Data with Apache Drill&quot; src=&quot;/img/querying-data-with-apache-drill.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;p&gt;With Drill, you can use SQL to interactively query and join data from files in JSON, Parquet, or CSV format, Hive, and NoSQL stores, including HBase, MapR Database, and Mongo, without defining schemas. MapR provides a &lt;a href=&quot;https://package.mapr.com/tools/MapR-JDBC/MapR_Drill/&quot;&gt;Drill JDBC&lt;/a&gt; driver that you can use to connect Java applications and BI tools, such as SQuirreL and Spotfire, to Drill. Below is a snippet of Java code for querying MapR Database using Drill and JDBC:&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;querying MapR Database using Drill and JDBC&quot; src=&quot;/img/querying-mapr-db-using-drill-jdbc.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
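&lt;p&gt;In case the screenshot above is hard to read, here is a rough sketch of that kind of query over Drill JDBC (the ZooKeeper connection string and the table path are illustrative assumptions; adjust them for your cluster):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Load the Drill JDBC driver and connect through ZooKeeper (connection string is an assumption)
Class.forName(&quot;org.apache.drill.jdbc.Driver&quot;);
Connection connection = DriverManager.getConnection(&quot;jdbc:drill:zk=localhost:5181&quot;);
Statement statement = connection.createStatement();

// Count of predicted delay/notdelay by origin, querying the MapR Database table via the dfs plugin
ResultSet results = statement.executeQuery(
    &quot;select origin, pred_dtree, count(pred_dtree) as cnt &quot; +
    &quot;from dfs.`/apps/flights` group by origin, pred_dtree order by origin&quot;);
while (results.next()) {
    System.out.println(results.getString(1) + &quot; &quot; + results.getString(2)
        + &quot; &quot; + results.getLong(&quot;cnt&quot;));
}
connection.close();
&lt;/code&gt;&lt;/pre&gt;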
&lt;p&gt;The output for this query &lt;strong&gt;&quot;What is the count of predicted delay/notdelay by origin?&quot;&lt;/strong&gt; is shown below:&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;output of predicted delay/notdelay by origin&quot; src=&quot;/img/output-delay-notdelay-origin.png&quot; width=&quot;500&quot;&gt;&lt;/center&gt;
&lt;p&gt;Below are some example SQL queries using the Drill shell.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What is the count of predicted delay/notdelay by origin&lt;/strong&gt;&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;count of predicted delay/notdelay by origin&quot; src=&quot;/img/count-predicted-delay-notdelay-origin.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;p&gt;&lt;strong&gt;What is the count of predicted delay/notdelay by origin and dest?&lt;/strong&gt;&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;What is the count of predicted delay/notdelay by origin and dest?&quot; src=&quot;/img/count-predicted-delay-notdelay-origin-dest.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;p&gt;Follow the instructions in the github code README to &lt;a href=&quot;https://docs.datafabric.hpe.com/62/Drill/optimizing_queries_with_indexes.html&quot;&gt;add a secondary index to MapR Database&lt;/a&gt; and try more queries using the index.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Querying with the Open JSON API (OJAI)&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Below is a Java example of using the OJAI Query interface to query documents in a MapR Database JSON table:&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;OJAI Query interface to query documents in a MapR Database JSON&quot; src=&quot;/img/ojai-query-interface-mapr-db.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;p&gt;Partial output for this query to &quot;find predicted late flights for AA&quot; is shown below:&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;find predicted late flights for AA&quot; src=&quot;/img/find-predicted-late-flights-aa.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;p&gt;Below are some example OJAI queries using the MapR Database shell.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What are the SFO to DEN flights that were predicted late?&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;maprdb&gt; find /apps/flights -where &apos;{&quot;$and&quot;:[{&quot;$eq&quot;:{&quot;pred_dtree&quot;:1.0}},{ &quot;$like&quot; : {&quot;_id&quot;:&quot;%SFO_DEN%&quot;} }]}&apos; --f _id,pred_dtree
&lt;/code&gt;&lt;/pre&gt;
&lt;center&gt;&lt;img alt=&quot;MapR Database find /apps/flights&quot; src=&quot;/img/mapr-db-find.png&quot; width=&quot;500&quot;&gt;&lt;/center&gt;
&lt;p&gt;&lt;strong&gt;Summary&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In this blog post, you&apos;ve learned how to consume streaming JSON events, store them in a document database, and explore them with SQL using Apache Spark, the Apache Kafka API, Apache Drill, MapR Event Store, MapR Database, and OJAI.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Code&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;You can download the code and data to run these examples from here (refer to the README for complete instructions to run): &lt;a href=&quot;https://github.com/mapr-demos/mapr-es-db-60-spark-flight&quot;&gt;https://github.com/mapr-demos/mapr-es-db-60-spark-flight&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Running the Code&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;All of the components of the use case architecture we just discussed can run on the same cluster with the MapR Data Platform. The MapR Data Platform integrates global event streaming, real-time database capabilities, and scalable enterprise storage with a collection of data processing and analytical engines to power data processing pipelines and intelligent applications.&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;MapR Data Platformm&quot; src=&quot;/img/mapr-cdp.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;p&gt;This example was developed using the &lt;a href=&quot;https://docs.datafabric.hpe.com/62/MapRContainerDevelopers/MapRContainerDevelopersOverview.html&quot;&gt;MapR 6.0 container for developers&lt;/a&gt;, a Docker container that enables you to create a single-node MapR cluster. The container is lightweight and designed to run on your laptop. (Refer to the code README for instructions on running the code.)&lt;/p&gt;
&lt;p&gt;You can also look at the following examples:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/mapr-demos/mapr-db-60-getting-started&quot;&gt;MapR Database 6.0 getting started&lt;/a&gt; to discover how to use DB Shell, Drill, and OJAI to query and update documents, as well as how to use indexes.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/mapr-demos/mapr-streams-60-getting-started&quot;&gt;MapR Event Store getting started on MapR 6.0 developer container&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/mapr-demos/ojai-2-examples&quot;&gt;Ojai 2.0 Examples&lt;/a&gt; to learn more about OJAI 2.0 features&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/mapr-demos/mapr-db-cdc-sample&quot;&gt;MapR Database Change Data Capture&lt;/a&gt; to capture database events such as insert, update, and delete, and react to these events.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;WANT TO LEARN MORE?&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://learn.ezmeral.software.hpe.com/&quot;&gt;Free On-Demand Training&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.datafabric.hpe.com/62/MapR-DB/designing_row_keys.html?hl=designing%2Crow%2Ckeys%2Cmapr-db&quot;&gt;MapR Database Rowkey Design&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.datafabric.hpe.com/62/Spark/NativeSparkConnectorJSON.html&quot;&gt;MapR Database OJAI Connector for Apache Spark&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.datafabric.hpe.com/62/ReferenceGuide/mapr_dbshell.html&quot;&gt;MapR Database shell&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.datafabric.hpe.com/62/Spark/Spark_IntegrateMapRStreams.html&quot;&gt;Integrate Spark with MapR Event Store Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.datafabric.hpe.com/62/home/r_home_intro.html?origin=/display/MapR/Drill+Tutorial&quot;&gt;Apache Drill Tutorial&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.datafabric.hpe.com/62/MapR-DB/Indexes/Indexes.html&quot;&gt;MapR Database secondary indexes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.datafabric.hpe.com/62/MapRContainerDevelopers/MapRContainerDevelopersOverview.html&quot;&gt;MapR Container for Developers&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;http://spark.apache.org/docs/latest/streaming-programming-guide.html&quot;&gt;Apache Spark Streaming programming guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;http://spark.apache.org/docs/latest/sql-programming-guide.html&quot;&gt;Spark SQL programming Guide&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Fast data processing pipeline for predicting flight delays using Apache APIs: Kafka, Spark Streaming and Machine Learning (part 2)]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may…]]></description><link>https://developer.hpe.com/fast-data-processing-pipeline-for-predicting-flight-delays-using-apache-apis-kafka-spark-streaming-and-machine-learning-part-2/</link><guid isPermaLink="false">https://developer.hpe.com/fast-data-processing-pipeline-for-predicting-flight-delays-using-apache-apis-kafka-spark-streaming-and-machine-learning-part-2/</guid><pubDate>Mon, 11 Oct 2021 19:54:43 GMT</pubDate><content:encoded>&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;&quot;authorDisplayName&quot;: &quot;Carol McDonald&quot;,
&quot;tags&quot;: &quot;use-cases&quot;,
&quot;publish&quot;: &quot;2018-01-10T10:00:00.000Z&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;Editor&apos;s Note: You can find Part 1 of this series &lt;a href=&quot;https://developer.hpe.com/blog/fast-data-processing-pipeline-for-predicting-flight-delays-using-apache-/&quot;&gt;here&lt;/a&gt;&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;According to Bob Renner, former CEO of Liaison Technologies, the ability to blend machine learning with real-time transactional data flowing through a single platform opens up a world of new possibilities, such as enabling organizations to take advantage of opportunities as they arise. Leveraging these opportunities requires fast, scalable data processing pipelines that process, analyze, and store events as they arrive.&lt;/p&gt;
&lt;p&gt;This is the second in a series of blog posts that discusses the architecture of a data pipeline that combines streaming data with machine learning and fast storage. &lt;a href=&quot;https://developer.hpe.com/blog/fast-data-processing-pipeline-for-predicting-flight-delays-using-apache-/&quot;&gt;The first post&lt;/a&gt; discussed creating a machine learning model to predict flight delays. This second post will discuss using the saved model with streaming data to do a real-time analysis of flight delays. The third post will discuss fast storage with MapR Database.&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;Saved model with streaming data&quot; src=&quot;/img/saved-model-with-streaming-data.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;h2&gt;Microservices, Data Pipelines, and Machine Learning Logistics&lt;/h2&gt;
&lt;p&gt;The &lt;a href=&quot;https://martinfowler.com/articles/microservices.html&quot;&gt;microservice architectural style&lt;/a&gt; is an approach to developing an application as a suite of small independently deployable services. A common architecture pattern combined with microservices is event sourcing using an append-only publish-subscribe event stream such as MapR Event Streams (which provides a Kafka API).&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;Immediate access to operational and analytical data in MapR&quot; src=&quot;/img/immediate-access-to-operational-analytical-data.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;h2&gt;Publish Subscribe Event Streams with MapR Event Store&lt;/h2&gt;
&lt;p&gt;A central principle of publish/subscribe systems is decoupled communications, wherein producers don’t know who subscribes, and consumers don’t know who publishes; this system makes it easy to add new listeners or new publishers without disrupting existing processes. MapR Event Store allows any number of information producers (potentially millions of them) to publish information to a specified topic. MapR Event Store will reliably persist those messages and make them accessible to any number of subscribers (again, potentially millions). Topics are partitioned for throughput and scalability, producers are load balanced, and consumers can be grouped to read in parallel. MapR Event Store can scale to very high throughput levels, easily delivering millions of messages per second using very modest hardware.&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;Kafka API&quot; src=&quot;/img/kafka-api.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;p&gt;You can think of a partition like a queue; new messages are appended to the end, and messages are delivered in the order they are received.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/mapr-cluster.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;center&gt;&lt;img alt=&quot;Messages&quot; src=&quot;/img/messages.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;p&gt;Unlike a queue, messages are not deleted when read; they remain on the partition, available to other consumers. Messages, once published, are immutable, and can be retained forever.&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;Not deleting messages&quot; src=&quot;/img/not-deleting-messages.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;p&gt;Not deleting messages when they are read allows for high performance at scale and also for processing of the same messages by different consumers for different purposes such as multiple views with polyglot persistence.&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;New subscribers of information&quot; src=&quot;/img/new-subscribers-information.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;p&gt;New subscribers of information can replay the data stream, specifying a starting point as far back as the data retention policy enables.&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;Create new view, index, cache&quot; src=&quot;/img/create-new.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;center&gt;&lt;img alt=&quot;Read From new View&quot; src=&quot;/img/read-from-new-view.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;h2&gt;Data Pipelines&lt;/h2&gt;
&lt;p&gt;When you combine these messaging capabilities with the simple concept of microservices, you can greatly enhance the agility with which you build, deploy, and maintain complex data pipelines. Pipelines are constructed by simply chaining together multiple microservices, each of which listens for the arrival of some data, performs its designated task, and optionally publishes its own messages to a topic. Development teams can deploy new services or service upgrades more frequently and with less risk, because the production version does not need to be taken offline. Both versions of the service simply run in parallel, consuming new data as it arrives and producing multiple versions of output. Both output streams can be monitored over time; the older version can be decommissioned when it ceases to be useful.&lt;/p&gt;
&lt;h2&gt;Machine Learning Logistics and Data Pipelines&lt;/h2&gt;
&lt;p&gt;Combining data pipelines with machine learning can handle the logistics of machine learning in a flexible way by:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Making input and output data available to independent consumers&lt;/li&gt;
&lt;li&gt;Managing and evaluating multiple models and easily deploying new models&lt;/li&gt;
&lt;/ul&gt;
&lt;center&gt;&lt;img alt=&quot;Machine Learning Logistics and Data Pipelines&quot; src=&quot;/img/machine-learning-logistics.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;p&gt;Architectures for these types of applications are discussed in more detail in the ebooks &lt;a href=&quot;https://www.oreilly.com/library/view/machine-learning-logistics/9781491997628/&quot;&gt;Machine Learning logistics&lt;/a&gt;, &lt;a href=&quot;https://www.oreilly.com/library/view/streaming-architecture/9781491953914/&quot;&gt;Streaming Architecture&lt;/a&gt;, and &lt;a href=&quot;https://www.academia.edu/41522528/A_Practical_Guide_to_Microservices_and_Containers_Mastering_the_Cloud_Data_and_Digital_Transformation&quot;&gt;Microservices and Containers&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Below is the data processing pipeline for this use case of predicting flight delays. This pipeline could be augmented to be part of the rendezvous architecture discussed in the Oreilly &lt;a href=&quot;https://www.oreilly.com/library/view/machine-learning-logistics/9781491997628/&quot;&gt;Machine Learning Logistics ebook&lt;/a&gt;.&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;Data Processing Pipeline&quot; src=&quot;/img/data-processing-pipeline.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;h2&gt;Spark Streaming Use Case Example Code&lt;/h2&gt;
&lt;p&gt;The following figure depicts the architecture for the part of the use case data pipeline discussed in this post:&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;architecture data pipeline&quot; src=&quot;/img/architechture-data-pipeline.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;ol&gt;
&lt;li&gt;Flight data is published to a MapR Event Store topic using the Kafka API.&lt;/li&gt;
&lt;li&gt;A Spark Streaming application subscribed to the first topic ingests the stream of flight data, uses a deployed machine learning model to enrich the flight data with a delayed/not delayed prediction, and publishes the results in JSON format to another topic.&lt;/li&gt;
&lt;li&gt;(Covered in the third post of this series) A Spark Streaming application subscribed to the second topic stores the input data and predictions in MapR Database.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Example Use Case Data&lt;/h2&gt;
&lt;p&gt;You can read more about the data set &lt;a href=&quot;https://developer.hpe.com/blog/fast-data-processing-pipeline-for-predicting-flight-delays-using-apache-/&quot;&gt;in part 1 of this series&lt;/a&gt;. The incoming and outgoing data is in JSON format; an example is shown below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{&quot;_id&quot;:&quot;AA_2017-02-16_EWR_ORD_1124&quot;,
&quot;dofW&quot;:4, &quot;carrier&quot;:&quot;AA&quot;, &quot;origin&quot;:&quot;EWR&quot;,
&quot;dest&quot;:&quot;ORD&quot;, &quot;crsdeptime&quot;:705,
&quot;crsarrtime&quot;:851, &quot;crselapsedtime&quot;:166.0,&quot;dist&quot;:719.0}
&lt;/code&gt;&lt;/pre&gt;
&lt;center&gt;&lt;img alt=&quot;Example Use Case Data&quot; src=&quot;/img/example-use-case-data.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;h2&gt;Spark Kafka Consumer Producer Code&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Parsing the Data Set Records&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;We use a Scala case class and &lt;a href=&quot;https://spark.apache.org/docs/latest/sql-programming-guide.html#programmatically-specifying-the-schema&quot;&gt;StructType&lt;/a&gt; to define the schema corresponding to the input data.&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;Structype&quot; src=&quot;/img/structype.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
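&lt;p&gt;In outline, the code in the screenshot looks like this minimal Scala sketch; the field names come from the example JSON record above, and the exact types are assumptions:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.spark.sql.types._

// Case class matching the incoming flight JSON records.
case class Flight(_id: String, dofW: Int, carrier: String, origin: String,
                  dest: String, crsdeptime: Double, crsarrtime: Double,
                  crselapsedtime: Double, dist: Double)

// Equivalent explicit schema used when parsing the JSON strings.
val schema = StructType(Array(
  StructField(&quot;_id&quot;, StringType, true),
  StructField(&quot;dofW&quot;, IntegerType, true),
  StructField(&quot;carrier&quot;, StringType, true),
  StructField(&quot;origin&quot;, StringType, true),
  StructField(&quot;dest&quot;, StringType, true),
  StructField(&quot;crsdeptime&quot;, DoubleType, true),
  StructField(&quot;crsarrtime&quot;, DoubleType, true),
  StructField(&quot;crselapsedtime&quot;, DoubleType, true),
  StructField(&quot;dist&quot;, DoubleType, true)
))
&lt;/code&gt;&lt;/pre&gt;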
&lt;p&gt;&lt;strong&gt;Loading the Model&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The Spark &lt;a href=&quot;https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.ml.tuning.CrossValidatorModel&quot;&gt;CrossValidatorModel&lt;/a&gt; class is used to load the saved model fitted on the historical flight data.&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;Loading the Model&quot; src=&quot;/img/loading-model.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
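&lt;p&gt;That step is essentially a one-liner; a minimal sketch, assuming the model from part 1 was saved to a path such as the one below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.spark.ml.tuning.CrossValidatorModel

// Load the cross-validated pipeline model that was fitted and saved in part 1.
val model = CrossValidatorModel.load(&quot;/user/mapr/flightmodel&quot;)
&lt;/code&gt;&lt;/pre&gt;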
&lt;p&gt;&lt;strong&gt;Spark Streaming Code&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;These are the basic steps for the Spark Streaming Consumer Producer code:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Configure Kafka Consumer and Producer properties.&lt;/li&gt;
&lt;li&gt;Initialize a Spark StreamingContext object. Using this context, create a DStream that reads a message from a Topic.&lt;/li&gt;
&lt;li&gt;Apply transformations (which create new DStreams).&lt;/li&gt;
&lt;li&gt;Write messages from the transformed DStream to a Topic.&lt;/li&gt;
&lt;li&gt;Start receiving data and processing. Wait for the processing to be stopped.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;We will go through each of these steps with the example application code.&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;1) Configure Kafka Consumer Producer properties&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;The first step is to set the KafkaConsumer and KafkaProducer configuration properties, which will be used later to create a DStream for receiving/sending messages to topics. You need to set the following parameters:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Key and value deserializers: for deserializing the message.&lt;/li&gt;
&lt;li&gt;Auto offset reset: to start reading from the earliest or latest message.&lt;/li&gt;
&lt;li&gt;Bootstrap servers: this can be set to a dummy host:port since the broker address is not actually used by MapR Event Store.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For more information on the configuration parameters, &lt;a href=&quot;https://docs.datafabric.hpe.com/62/MapR_Streams/differences_in_configuration_parameters_for_producers_and_consumers.html&quot;&gt;see the MapR Event Store documentation.&lt;/a&gt;&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;Configure Kafka&quot; src=&quot;/img/configure-kafka-cosumer.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
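&lt;p&gt;As a rough Scala sketch of those settings (the group id and broker address are placeholder values, and MapR Event Store ignores the bootstrap servers entry):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.kafka.clients.consumer.ConsumerConfig
import org.apache.kafka.common.serialization.{StringDeserializer, StringSerializer}

// Consumer properties, used later by KafkaUtils.createDirectStream.
val kafkaParams = Map[String, Object](
  ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG -&gt; &quot;dummyhost:9092&quot;,
  ConsumerConfig.GROUP_ID_CONFIG -&gt; &quot;flight-prediction-group&quot;,
  ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG -&gt; classOf[StringDeserializer],
  ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG -&gt; classOf[StringDeserializer],
  ConsumerConfig.AUTO_OFFSET_RESET_CONFIG -&gt; &quot;earliest&quot;
)

// Producer properties for writing the enriched JSON records to the output topic.
val producerProps = new java.util.Properties()
producerProps.put(&quot;bootstrap.servers&quot;, &quot;dummyhost:9092&quot;)
producerProps.put(&quot;key.serializer&quot;, classOf[StringSerializer].getName)
producerProps.put(&quot;value.serializer&quot;, classOf[StringSerializer].getName)
&lt;/code&gt;&lt;/pre&gt;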
&lt;h2&gt;&lt;strong&gt;2) Initialize a Spark StreamingContext object.&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;ConsumerStrategies.Subscribe, as shown below, is used to set the topics and Kafka configuration parameters. We use the &lt;a href=&quot;https://spark.apache.org/docs/latest/streaming-kafka-0-10-integration.html&quot;&gt;KafkaUtils createDirectStream&lt;/a&gt; method with a StreamingContext, the consumer and location strategies, to create an input stream from a MapR Event Store topic. This creates a DStream that represents the stream of incoming data, where each message is a key value pair. We use the DStream map transformation to create a DStream with the message values.&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;Initialize a Spark StreamingContext Object 1&quot; src=&quot;/img/initialize-spark-streamingcontext.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;center&gt;&lt;img alt=&quot;Initialize a Spark StreamingContext Object 2&quot; src=&quot;/img/initialize-spark-streamingcontext-2.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
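&lt;p&gt;Putting those pieces together, the stream setup looks roughly like the following sketch; the topic path and batch interval are placeholders, and &lt;code&gt;spark&lt;/code&gt; is assumed to be the active SparkSession (as in the spark-shell):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010._

val ssc = new StreamingContext(spark.sparkContext, Seconds(5))

// Subscribe to the input topic with the Kafka parameters defined above.
val consumerStrategy = ConsumerStrategies.Subscribe[String, String](
  Array(&quot;/apps/flightstream:flights&quot;), kafkaParams)

val messagesDStream = KafkaUtils.createDirectStream[String, String](
  ssc, LocationStrategies.PreferConsistent, consumerStrategy)

// Keep just the message values: the JSON strings.
val valuesDStream = messagesDStream.map(_.value())
&lt;/code&gt;&lt;/pre&gt;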
&lt;h2&gt;&lt;strong&gt;3) Apply transformations (which create new DStreams)&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;We use the DStream foreachRDD method to apply processing to each RDD in this DStream. We read the RDD of JSON strings into a Flight Dataset. Then we display 20 rows with the Dataset show method. We also create a temporary view of the Dataset in order to execute SQL queries.&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;Apply transformations&quot; src=&quot;/img/apply-transformation.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
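&lt;p&gt;In outline, that processing looks like the following sketch, continuing from the schema and stream defined above; the view name &lt;code&gt;flights&lt;/code&gt; matches the queries later in this post:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import spark.implicits._

valuesDStream.foreachRDD { rdd =&gt;
  if (!rdd.isEmpty) {
    // Parse the JSON strings into a Dataset of Flight objects.
    val df = spark.read.schema(schema).json(rdd.toDS).as[Flight]
    df.show
    // Register a temporary view so SQL queries can be run on the streaming data.
    df.createOrReplaceTempView(&quot;flights&quot;)
  }
}
&lt;/code&gt;&lt;/pre&gt;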
&lt;p&gt;&lt;strong&gt;Here is example output from df.show:&lt;/strong&gt;&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;Example output from the df.show&quot; src=&quot;/img/example-df-show.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;p&gt;We transform the Dataset with the model pipeline, which will transform the features according to the pipeline, estimate, and then return the predictions in a column of a new Dataset. We also create a temporary view of the new Dataset in order to execute SQL queries.&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;Temporary view&quot; src=&quot;/img/temporary-view.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
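&lt;p&gt;Continuing the sketch above, inside the same foreachRDD block; the view name &lt;code&gt;flightsp&lt;/code&gt; is the one queried in the Streaming Data Exploration section below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// Apply the fitted pipeline: feature transformations plus the decision tree prediction.
val predictions = model.transform(df)

// Register the enriched Dataset for SQL exploration.
predictions.createOrReplaceTempView(&quot;flightsp&quot;)
&lt;/code&gt;&lt;/pre&gt;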
&lt;h2&gt;&lt;strong&gt;4) Write messages from the transformed DStream to a Topic&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;The Dataset result of the query is converted to JSON RDD Strings, then the RDD sendToKafka method is used to send the JSON key-value messages to a topic (the key is null in this case).&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;Write messages from the transformed DStream&quot; src=&quot;/img/write-messages-from-dstream.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;center&gt;&lt;img alt=&quot;Write messages from the transformed DStream 2&quot; src=&quot;/img/write-messages-from-dstream-2.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;p&gt;Example message values (the output of &lt;code&gt;temp.take(2)&lt;/code&gt;) are shown below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{&quot;_id&quot;:&quot;DL_2017-01-01_MIA_LGA_1489&quot;,&quot;dofW&quot;:7,&quot;carrier&quot;:&quot;DL&quot;,&quot;origin&quot;:&quot;MIA&quot;,&quot;dest&quot;:&quot;LGA&quot;,&quot;crsdephour&quot;:13,&quot;crsdeptime&quot;:1315.0,&quot;crsarrtime&quot;:1618.0,&quot;crselapsedtime&quot;:183.0,&quot;label&quot;:0.0,&quot;pred_dtree&quot;:0.0}
{&quot;_id&quot;:&quot;DL_2017-01-01_LGA_MIA_1622&quot;,&quot;dofW&quot;:7,&quot;carrier&quot;:&quot;DL&quot;,&quot;origin&quot;:&quot;LGA&quot;,&quot;dest&quot;:&quot;MIA&quot;,&quot;crsdephour&quot;:8,&quot;crsdeptime&quot;:800.0,&quot;crsarrtime&quot;:1115.0,&quot;crselapsedtime&quot;:195.0,&quot;label&quot;:0.0,&quot;pred_dtree&quot;:0.0}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;&lt;strong&gt;5) Start receiving data and processing it. Wait for the processing to be stopped.&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;To start receiving data, we must explicitly call start() on the StreamingContext, then call &lt;code&gt;awaitTermination&lt;/code&gt; to wait for the streaming computation to finish. We use &lt;code&gt;ssc.remember&lt;/code&gt; to cache data for queries.&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;Receive and process data&quot; src=&quot;/img/receive-process-data.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
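&lt;p&gt;The wiring at the end of the application is essentially the following; the remember window is a placeholder value:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.spark.streaming.Minutes

// Keep recent batch data around so the registered temporary views stay queryable.
ssc.remember(Minutes(10))

ssc.start()            // start the streaming computation
ssc.awaitTermination() // block until the stream is stopped
&lt;/code&gt;&lt;/pre&gt;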
&lt;h2&gt;Streaming Data Exploration&lt;/h2&gt;
&lt;p&gt;Now we can query the cached streaming data in the input temporary view flights, and the predictions temporary view &lt;code&gt;flightsp&lt;/code&gt;. Below we display a few rows from the flights view:&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;Flights view&quot; src=&quot;/img/flights-view.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;p&gt;Below we display the count of predicted delayed/not delayed departures by Origin:&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;Predicted Departures&quot; src=&quot;/img/predicted-departures.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;p&gt;Below we display the count of predicted delayed/not delayed departures by Destination:&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;Predicted Delayed&quot; src=&quot;/img/predicted-delayed.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
&lt;p&gt;Below we display the count of predicted delayed/not delayed departures by Origin,Destination:&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;Count predicted delayed&quot; src=&quot;/img/count-predicted-delayed.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;
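&lt;p&gt;For example, the delayed/not delayed counts by origin and destination shown above come from queries of roughly this shape against the &lt;code&gt;flightsp&lt;/code&gt; view (column names as in the sample message values earlier):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;scala&gt; spark.sql(&quot;select origin, dest, pred_dtree, count(pred_dtree) from flightsp group by origin, dest, pred_dtree order by origin, dest&quot;).show
&lt;/code&gt;&lt;/pre&gt;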
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;In this blog post, you learned how to use a Spark machine learning model in a Spark Streaming application, and how to integrate Spark Streaming with MapR Event Streams to consume and produce messages using the Kafka API.&lt;/p&gt;
&lt;h2&gt;Code&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;You can download the code and data to run these examples from &lt;a href=&quot;https://github.com/caroljmcdonald/spark-ml-flightdelay&quot;&gt;here&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/caroljmcdonald/spark-ml-flightdelay/blob/master/notebooks/sparkmlpipelineflightdelays.json&quot;&gt;Zeppelin Notebook for the code&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Running the Code&lt;/h2&gt;
&lt;p&gt;All of the components of the use case architecture we just discussed can run on the same cluster with the MapR Data Platform.&lt;/p&gt;
&lt;center&gt;&lt;img alt=&quot;MapR Data Platform&quot; src=&quot;/img/mapr-cdp.png&quot; width=&quot;700&quot;&gt;&lt;/center&gt;</content:encoded></item><item><title><![CDATA[Celebrate Hacktoberfest 2021 with Grommet]]></title><description><![CDATA[It’s that time of the year again… the leaves are falling, the pumpkins are ripe, and developers once again get to participate in…]]></description><link>https://developer.hpe.com/celebrate-hacktoberfest-2021-with-grommet/</link><guid isPermaLink="false">https://developer.hpe.com/celebrate-hacktoberfest-2021-with-grommet/</guid><pubDate>Thu, 07 Oct 2021 18:02:43 GMT</pubDate><content:encoded>&lt;p&gt;It’s that time of the year again… the leaves are falling, the pumpkins are ripe, and developers once again get to participate in Hacktoberfest – a month-long celebration run by Digital Ocean that encourages contributions to open source projects. Coders are rewarded for their participation. This year, if you make 4 contributions during the month of October to open source projects you can earn a limited edition T-shirt or opt to help save the planet by having a tree planted in your name.&lt;/p&gt;
&lt;p&gt;Grommet contributors have traditionally participated heavily in this event and it looks like it’s going to be another great event. In just the first week of Hacktoberfest, the Grommet team saw 22 Pull Requests come in, including a number of pull requests coming from first time contributors. Congratulations, and welcome to the Grommet community!&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What are Grommet contributors focusing on this year?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;This year, coders have been heavily focused on TypeScript, working on an issue that has recently gotten a lot of attention. A lot of progress has already been made in converting our unit tests to TypeScript, which improves our type definitions. There’s a really good chance we’re going to come out of this Hacktoberfest with a much more TypeScript-friendly experience for Grommet users.&lt;/p&gt;
&lt;p&gt;We’ve also seen some great collaboration going on in the repository. Community contributors are helping review each other’s pull requests, submitting issues, making suggestions for improvement, and helping answer questions. While these types of contributions don’t necessarily count towards the 4 pull requests needed to win a prize, this sort of assistance is greatly appreciated by everyone in the community and helps make it a more friendly, collaborative space.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;There’s still time to participate!&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Haven’t had a chance to check it out yet? There’s still a few weeks left before the end of the month. All you need to do is register on the &lt;a href=&quot;https://hacktoberfest.digitalocean.com/&quot;&gt;Hacktoberfest&lt;/a&gt; site to link your Github account and track your contributions. To contribute to Grommet, simply take a look at issues labeled &lt;a href=&quot;https://github.com/grommet/grommet/issues?q=is%3Aopen+is%3Aissue+label%3Ahacktoberfest&quot;&gt;‘hacktoberfest’&lt;/a&gt; on Grommet’s repository.&lt;/p&gt;
&lt;p&gt;We welcome contributions from any skill level. There are a variety of issues to pick from. You can even contribute to issues that don’t have the &lt;a href=&quot;https://github.com/grommet/grommet/issues?q=is%3Aopen+is%3Aissue+label%3Ahacktoberfest&quot;&gt;‘hacktoberfest’&lt;/a&gt; label, as they will still count. If you are a first time contributor check out our &lt;a href=&quot;https://github.com/grommet/grommet/blob/master/CONTRIBUTING.md&quot;&gt;contribution guide&lt;/a&gt; and keep an eye out for issues labeled &lt;a href=&quot;https://github.com/grommet/grommet/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22&quot;&gt;‘good first issue’&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Happy Hacking!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Sharing expertise]]></title><link>https://developer.hpe.com/2021-October-1/</link><guid isPermaLink="false">https://developer.hpe.com/2021-October-1/</guid><pubDate>Sat, 02 Oct 2021 05:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Secure containerized and traditional apps concurrently]]></title><description><![CDATA[Editor’s Note – HPE Ezmeral Container Platform is now HPE Ezmeral Runtime Enterprise. For more information on why the name was changed…]]></description><link>https://developer.hpe.com/secure-containerized-and-traditional-apps-concurrently/</link><guid isPermaLink="false">https://developer.hpe.com/secure-containerized-and-traditional-apps-concurrently/</guid><pubDate>Thu, 30 Sep 2021 08:48:31 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note – HPE Ezmeral Container Platform is now HPE Ezmeral Runtime Enterprise&lt;/strong&gt;. For more information on why the name was changed, please &lt;a href=&quot;https://community.hpe.com/t5/HPE-Ezmeral-Uncut/HPE-Ezmeral-Container-Platform-is-now-HPE-Ezmeral-Runtime/ba-p/7151720#.YW7nOxrMKM8&quot;&gt;click here&lt;/a&gt;.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;IT Ops teams have run and managed Windows and Linux deployments for years. They are greatly experienced at dealing with the storage, networking, virtual machines and firewalls required for these environments. But container use, and the modern apps with which they are built, continues to gain ground as businesses move from small, proof-of-concept implementations to broader deployments. These modern applications require a whole new set of skills, which many administrative teams are just learning as new technologies, like Kubernetes, come into play.&lt;/p&gt;
&lt;p&gt;The best practices used to monitor and guide these implementations continue to evolve, taking these newer technologies into account. One area that continues to crop up as an area of concern is security. Just how does one ensure the security of containerized workloads, especially during a time when they most probably need to co-exist with traditional applications?&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Centralize security and compliance management with Runecast&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Hewlett Packard Enterprise (HPE) offers the &lt;a href=&quot;https://www.hpe.com/us/en/solutions/container-platform.html&quot;&gt;HPE Ezmeral Container Platform&lt;/a&gt;, which comes integrated with Falco, an open source runtime threat detection engine.  Falco uses community-sourced detection of malicious activity and Common Vulnerabilities and Exposures (CVE) exploits to generate alerts.  In addition, HPE Ezmeral Container Platform also provides a core set of monitoring and alerting capabilities using a combination of Metricbeat data collector, Elasticsearch for search and analytics, and Kibana for dashboard displays. These tools provide IT Ops teams with the metrics they need to monitor traditional applications.&lt;/p&gt;
&lt;p&gt;To further enhance the security capabilities on the HPE Ezmeral Container Platform, HPE partnered with &lt;a href=&quot;https://www.runecast.com/&quot;&gt;Runecast&lt;/a&gt; to offer an analyzer that provides insights to container security compliance and improves the stability of mission-critical IT applications as they migrate to a modern cloud architecture. Organizations can leverage Runecast Analyzer as a central security and compliance management console for both container and non-container environments.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Automate audits&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://www.runecast.com/how-does-runecast-analyzer-work&quot;&gt;Runecast Analyzer&lt;/a&gt; complements the base HPE Ezmeral Container Platform monitoring features with an analysis of best practices and security compliance checks. It does this for container-based workloads as well as workloads running on more traditional platforms, like VMware’s vSphere, vSAN, NSX, Horizon, and VMware Cloud Director. These capabilities are also available for AWS and Microsoft Azure – all from a single interface.&lt;/p&gt;
&lt;p&gt;Runecast Analyzer provides continuous analysis of the workload infrastructure. It includes best practices (as detailed by the Cloud Native Computing Foundation, the maintainer of the Kubernetes open-source project) and security compliance checks against the latest benchmark from the Center for Internet Security (CIS). Runecast Analyzer provides full coverage for the entire CIS Benchmark for Kubernetes with 71 individual cross-referenced checks against entire Kubernetes environments, highlighting areas that may need improvement.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/runecast.png&quot; alt=&quot;How Runecast works with the HPE Ezmeral Container Platform&quot; title=&quot;How Runecast works with the HPE Ezmeral Container Platform&quot;&gt;&lt;/p&gt;
&lt;p&gt;With Runecast Analyzer, IT organizations gain a common tool for monitoring VMs, cloud, and HPE Ezmeral Container Platform applications. This shortens the learning curve for IT Ops teams in adopting Kubernetes. It also helps IT administrators deploy and manage VMs, containers, and cloud environments at scale with confidence.&lt;/p&gt;
&lt;p&gt;To learn how Runecast helps you monitor and improve the security of your containerized and traditional apps, visit &lt;a href=&quot;https://urldefense.com/v3/__http:/www.runecast.com__;!!NpxR!20PeQhlxWuRFNki74flD2O5Cb4wduoVPQd30Aso29B0LbmGLbcLmPg9JZ2O3D_ao$&quot;&gt;www.runecast.com&lt;/a&gt;. To learn more about Runecast and how it works on HPE Ezmeral, check out the &lt;a href=&quot;https://www.hpe.com/us/en/solutions/container-platform.html&quot;&gt;HPE Ezmeral Container Platform website&lt;/a&gt; and &lt;a href=&quot;https://psnow.ext.hpe.com/doc/a50003809enw&quot;&gt;technical paper&lt;/a&gt;. To get Runecast Analyzer and run it natively on your HPE Ezmeral Container Platform (now HPE Ezmeral Runtime), go to the &lt;a href=&quot;https://www.hpe.com/us/en/software/marketplace.html&quot;&gt;HPE Ezmeral Marketplace&lt;/a&gt;. For more articles on HPE Ezmeral Container Platform, check out the &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE DEV blog&lt;/a&gt; and &lt;a href=&quot;https://community.hpe.com/t5/HPE-Ezmeral-Uncut/bg-p/software/label-name/containers%20and%20devops#.YVNc4LhKg2w&quot;&gt;HPE Ezmeral UnCut blog&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE OneView and ServiceNow integration with StackStorm]]></title><description><![CDATA[HPE OneView is a powerful infrastructure automation/management platform from Hewlett Packard Enterprise (HPE) used to manage and monitor HPE…]]></description><link>https://developer.hpe.com/hpe-oneview-and-servicenow-integration-with-stackstorm/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-oneview-and-servicenow-integration-with-stackstorm/</guid><pubDate>Wed, 29 Sep 2021 17:53:58 GMT</pubDate><content:encoded>&lt;p&gt;HPE OneView is a powerful infrastructure automation/management platform from Hewlett Packard Enterprise (HPE) used to manage and monitor HPE DL servers and HPE Synergy products. Recently, I wanted to get all the alarms from HPE OneView and automatically save them as records in a ServiceNow table.&lt;/p&gt;
&lt;p&gt;ServiceNow is a software-as-a-service (SaaS) platform used by many large corporations for automating critical business workflows and information. I wanted to build an event-based automation that would leverage the HPE OneView and ServiceNow RESTful APIs. There is another way to do this, but I found it a bit too &apos;involved&apos; and I wanted something easy. Who doesn&apos;t like easy?&lt;/p&gt;
&lt;p&gt;Having developed solutions to complete an identical task in StackStorm (HPE Nimble Storage to ServiceNow), it was super easy to dust off the StackStorm integration pack I created for HPE Nimble and refactor it for HPE OneView. Creating such an integration pack would give users a way to transfer these alarms into a ServiceNow table, with very little human intervention. Naturally, my second thought was how can I use the Twitter platform to &apos;tweet&apos; some VLAN (or any other) information into HPE OneView! Can it be done?&lt;/p&gt;
&lt;p&gt;HPE OneView has a powerful RESTful API that can be used to get information in and out of HPE OneView. ServiceNow has a powerful RESTful API as well. All I need to do is write some middleware and leverage a couple of Python bindings (Python code that abstracts the API). It turns out the Python bindings are already written for both systems and available on GitHub! To solve this problem, I can write a handful of Python scripts and I should be good to go (or GTG if you&apos;re hip and cool).&lt;/p&gt;
&lt;p&gt;I quickly realized that doing what I wanted would involve writing code for both systems. But what if I were to leverage &lt;a href=&quot;https://stackstorm.com/&quot;&gt;StackStorm&lt;/a&gt;? StackStorm is an event-based automation platform with over one hundred and seventy 3rd-party integrations just waiting to be consumed! A quick check of the &lt;a href=&quot;https://exchange.stackstorm.org/&quot;&gt;StackStorm Exchange&lt;/a&gt; indicates that there&apos;s a StackStorm integration pack available for ServiceNow. Using StackStorm, I&apos;d only have to write half the code: the HPE OneView StackStorm integration pack. The other benefit of using StackStorm is that I can take advantage of the programmable rules and triggers, something I like to call &apos;real automation&apos;.&lt;/p&gt;
&lt;p&gt;Note: I have written a couple of other blog posts on StackStorm. If you are interested in trying this approach, I suggest you go to the HPE DEV blog and &lt;a href=&quot;https://developer.hpe.com/search/?term=stackstorm&quot;&gt;read my other posts&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/flowchart.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Developing the stackstorm-hpe-oneview integration pack (which is available &lt;a href=&quot;https://github.com/HewlettPackard/stackstorm-hpe-oneview&quot;&gt;here&lt;/a&gt;) is fairly straightforward. For this interaction to function, I wrote five very short actions and a couple of simple rules. You can see in the chart at the top of this blog that two of the actions will be used with the first workflow and three will be needed for the second workflow. Actions are the workhorse of StackStorm. Actions have a &apos;runner-type&apos;, and there are &lt;a href=&quot;https://docs.stackstorm.com/actions.html&quot;&gt;12 different ones&lt;/a&gt; to choose from. They can be shell scripts, Python scripts, or Orquesta for creating workflows. I could use a single action to connect to HPE OneView and request all of the current alarms, and another to format and store the alarms in a MongoDB database for further processing.&lt;/p&gt;
&lt;p&gt;In the code example below, I use the alerts.get_all() function to retrieve the alarms from HPE OneView. A quick check confirms that the returned object is a list before it is returned.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from lib.actions import HpeOVBaseAction

class networks(HpeOVBaseAction):
    def run(self):
        # Ask HPE OneView for all current alerts via the Python binding.
        ov_alerts = self.client.alerts.get_all()
        # Only report success if a list of alerts was actually returned.
        if isinstance(ov_alerts, list):
            return (True, ov_alerts)
        return (False)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The second action in workflow &quot;A&quot; will format the information into a MongoDB record, add a process field, and save the MongoDB BSON document. Again, this is very simple to code and test. The class is passed the alarms and iterates through each one, running a query to check whether the document already exists; if not, it formats a Python dictionary and writes the MongoDB BSON document via pymongo. This is all it takes to collect the alarms and save them in the database.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import pymongo
from lib.actions import MongoBaseAction


class loadDb(MongoBaseAction):
    def run(self, alarms):

        mydb = self.dbclient[&quot;app_db&quot;]
        known = mydb[&quot;dwralarms&quot;]

        records = 0

        for alarm in alarms:
            # The alarm creation timestamp is used as the document _id,
            # so the same alarm is never stored twice.
            myquery = { &quot;_id&quot; : alarm[&apos;created&apos;] }
            records = known.find(myquery).count()
            if records == 0:
                # Build a fresh document for each new alarm and mark it unprocessed.
                new_alarm = {}
                new_alarm[&apos;u_vendor&apos;] = &apos;hpe-oneview&apos;
                new_alarm[&apos;u_sev&apos;] = alarm[&apos;severity&apos;]
                new_alarm[&apos;u_desc&apos;] = alarm[&apos;description&apos;]
                new_alarm[&apos;u_uuid&apos;] = alarm[&apos;resourceUri&apos;]
                new_alarm[&apos;_id&apos;] = alarm[&apos;created&apos;]
                new_alarm[&apos;u_created&apos;] = alarm[&apos;created&apos;]
                new_alarm[&apos;u_process&apos;] = &apos;no&apos;
                write_record = known.insert_one(new_alarm)

        return (records)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here is an example of the StackStorm workflow that calls two actions every five minutes.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;version: 1.0

description: A workflow to copy HPE OneView alarms into a mongo database.

tasks:
  getalarms:
    action: hpeoneview.get_alerts
    next:
      - when: &amp;#x3C;% succeeded() %&gt;
        publish:
          - alarms: &amp;#x3C;% result().result %&gt;
        do: sendmongo

  sendmongo:
    action: hpeoneview.load-hpeov-alarms alarms=&amp;#x3C;% ctx().alarms %&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The second workflow, workflow &quot;B&quot;, will call another action every five minutes that reads the documents from the MongoDB database, looks for documents whose processed flag is set to &quot;no&quot;, collects the results into a Python list, and returns it.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import pymongo
from lib.actions import MongoBaseAction


class loadDb(MongoBaseAction):
    def run(self):

        mydb = self.dbclient[&quot;app_db&quot;]
        known = mydb[&quot;dwralarms&quot;]

        list_to_process = []

        myquery = { &quot;u_process&quot; : &apos;no&apos; }
        records = known.find(myquery)

        for r in records:
            list_to_process.append(r)

        return (list_to_process)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The next task is to send the list of alarms that have not been processed to ServiceNow. Here is where the power of integration packs comes into view. All I need to do is issue a command on my StackStorm server: &lt;strong&gt;&quot;st2 pack install servicenow&quot;&lt;/strong&gt;. By issuing this command, I gain access to the automation scripts (actions) that are pre-written for ServiceNow. Now that I am using StackStorm and have access to all the automation on the StackStorm Exchange, I can communicate with many other systems without writing any vendor-specific code. The following example is the ServiceNow action that creates records in a ServiceNow table.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from lib.actions import BaseAction


class CreateRecordAction(BaseAction):
    def run(self, table, payload):
        s = self.client

        path = &apos;/table/{0}&apos;.format(table)
        response = s.resource(api_path=path).create(payload=payload)
        return response
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here is what the workflow looks like for the collection of alarms and sending them to ServiceNow. The first action pulls the unprocessed alarms from the MongoDB database and publishes the array to the &apos;context&apos;, a place to stash variables that can be accessed by other actions. Next it iterates through the array by using the &apos;with&apos; statement and sends the contents to ServiceNow.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;version: 1.0

description: A workflow to copy HPE OneView alarms from mongo and into snow.

tasks:
  getalerts:
    action: hpeoneview.get_mongo_alarms
    next:
      - when: &amp;#x3C;% succeeded() %&gt;
        publish:
          - alarms: &amp;#x3C;% result().result %&gt;
        do: snowalerts

  snowalerts:
    with: &amp;#x3C;% ctx().alarms %&gt;
    action: servicenow.create_record table=&quot;u_hpeov_alarms&quot; payload=&apos;&amp;#x3C;% item() %&gt;&apos;
    next:
      - when: &amp;#x3C;% succeeded() %&gt;
        do: processalarms

  processalarms:
    action: hpeoneview.process_alarms alarms=&amp;#x3C;% ctx().alarms %&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To finish this up, set the process flag to &quot;Yes&quot; so you do not duplicate records into ServiceNow. It looks like the example below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import pymongo
from lib.actions import MongoBaseAction


class loadDb(MongoBaseAction):
    def run(self, alarms):

        mydb = self.dbclient[&quot;app_db&quot;]
        known = mydb[&quot;dwralarms&quot;]

        for a in alarms:
            known.update_one({&quot;_id&quot;:a[&apos;_id&apos;]},{&quot;$set&quot;:{&quot;u_process&quot;:&quot;yes&quot;}})

        return ()
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That&apos;s it! Once both integration packs are installed on a StackStorm server and authorized, the rules will &apos;fire&apos; every five minutes and the workflows will do the heavy lifting so you don&apos;t have to.&lt;/p&gt;
&lt;p&gt;Finally, a diagram that shows all the moving parts of workflow &quot;A&quot;. The rule that runs on the interval timer calls an action that, in turn, calls a workflow that calls a couple other actions. Notice that actions can be Python scripts or YAML files. It just depends on their function.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/full-workflow.png&quot; alt=&quot;&quot; title=&quot;Workflow &quot;&gt;&lt;/p&gt;
&lt;p&gt;To make this a truly automated process, the ServiceNow account needs to exist and the tables need to be created in advance.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Let&apos;s break down the steps to get this going.&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Create the tables in ServiceNow. You will need your instance ID from ServiceNow to authorize the StackStorm server.&lt;/li&gt;
&lt;li&gt;Install the HPE OneView StackStorm integration pack and authorize it.&lt;/li&gt;
&lt;li&gt;Install the ServiceNow StackStorm integration pack and authorize it.&lt;/li&gt;
&lt;li&gt;Wait five minutes.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;In the beginning of this blog I mentioned tweeting information into HPE OneView. I don&apos;t know why you would want to do this but yes, with StackStorm it is possible. StackStorm exchange has a Twitter integration pack and it can be installed on the StackStorm server by issuing the command &apos;st2 pack install twitter&apos;. I can use the twitter StackStorm sensor to &apos;watch&apos; the twittersphere for any tweets containing a certain word or phrase contained in the tweet-body. If the sensor reacts, it will cause a StackStorm trigger to fire and I can call an action to pull the tweet and collect the information from the tweet-body and send that information to the HPE OneView Stackstorm actions. This can be done with ANYTHING on the StackStorm exchange.&lt;/p&gt;
&lt;p&gt;In conclusion, this may seem complicated at first. In reality, its a group of small simple scripts that are linked together inside the StackStorm framework. It also provides for the adoption of many different integration packs and allows for the event-based automation of many different systems. To learn more about StackStorm, you can take my &lt;a href=&quot;https://github.com/xod442/stackstorm-tutorial&quot;&gt;tutorial&lt;/a&gt;, attend the StackStorm Workshop-on-Demand available &lt;a href=&quot;/hackshack/workshop/21&quot;&gt;here&lt;/a&gt; &lt;a href=&quot;/hackshack/workshop/21&quot;&gt;&lt;/a&gt;and join the automation revolution!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Sir Hackington Rides Again!]]></title><description><![CDATA[If you’re like me, you’re probably itching to get out and about after months and months of virtual meetings and events. When I heard that…]]></description><link>https://developer.hpe.com/sir-hackington-rides-again/</link><guid isPermaLink="false">https://developer.hpe.com/sir-hackington-rides-again/</guid><pubDate>Wed, 22 Sep 2021 17:20:11 GMT</pubDate><content:encoded>&lt;p&gt;If you’re like me, you’re probably itching to get out and about after months and months of virtual meetings and events. When I heard that &lt;a href=&quot;https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/&quot;&gt;KubeCon|CloudNativeCon NA&lt;/a&gt; was going to be a hybrid live/virtual event, I could hardly contain myself! From October 13th to 15th, this Cloud Native Computing Foundation’s flagship conference will take place both in Los Angeles, California and online where vendors will be showcasing a full range of technologies that support the cloud native ecosystem. And I’m going to proudly be there myself in the Platinum Sponsor Hewlett Packard Enterprise (HPE) booth!&lt;/p&gt;
&lt;p&gt;Who am I, you ask? Could it be? Has it been so long that you’ve been to an event that you’ve forgotten your dear, little friend, Sir Hackington? Or perhaps you have only recently joined our HPE Developer Community and we haven’t yet been properly introduced. My name is Sir Hackington Appbuilder III. I’m the cute little canine that travels with the HPE DEV team to different events as Guardian of the Swag. Here’s a picture of me in my glory days:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/vmworld_collage-1538516807872-2-.jpg&quot; alt=&quot;Sir Hackington at Discover&quot; title=&quot;Sir Hackington at Discover&quot;&gt;&lt;/p&gt;
&lt;p&gt;A lot has happened since then. HPE acquired several companies, along with important technologies, that boosted its position in the market by offering &lt;a href=&quot;https://siliconangle.com/2021/03/17/hpe-ezmeral-proving-enterprise-ready-container-platform-hybrid-world-hpeezmeral/&quot;&gt;the enterprise-ready container platform for a hybrid world&lt;/a&gt;. Recognizing the critical need of companies today to manage data pipelines across hybrid computing environments from edge to cloud, HPE invested heavily in a 100% open source Kubernetes-based container environment from BlueData. HPE blended this with groundbreaking data fabric technology from MapR and added hooks for important open source projects: SPIFFE and SPIRE for security, and Apache Spark for data analytics. The final result is the HPE Ezmeral software platform, which ensures those who require data have easy and secure access to what they need regardless of where they are. And customers can even consume this as a service through HPE GreenLake.&lt;/p&gt;
&lt;h2&gt;KubeCon|CloudNativeCon NA 2021&lt;/h2&gt;
&lt;h3&gt;What’s happening onsite?&lt;/h3&gt;
&lt;p&gt;Starting at 10:30am PT, you’ll be able to find the HPE team at booth P4 in the Los Angeles Convention Center. The theme of our booth for this event is &lt;strong&gt;BUILD FAST. SECURE. TOGETHER. ANYWHERE&lt;/strong&gt;. And that’s really what it’s all about – HPE working &lt;em&gt;&lt;strong&gt;with&lt;/strong&gt;&lt;/em&gt; developers and providing the tools to innovate and deliver solutions quickly and securely, from edge to cloud. Folks still working from home? Great! Need to share apps and information from one site to another and collaborate seamlessly? Great! The cloud &lt;em&gt;&lt;strong&gt;is&lt;/strong&gt;&lt;/em&gt; hybrid – just like this conference is – and HPE really knows how to enable this environment.&lt;/p&gt;
&lt;p&gt;In the booth, there’s a presentation theater and three dedicated demo stations where you can physically talk with HPE subject matter experts. Technologies being presented include:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Apache Spark on HPE Ezmeral&lt;/strong&gt; – where you’ll discover how to accelerate analytics with HPE Ezmeral by leveraging Apache Spark’s latest features&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Discover how HPE Ezmeral helps data scientists and engineers leverage Apache Spark 3 with Delta Lake integration&lt;/li&gt;
&lt;li&gt;Learn how Apache Spark applications easily access S3 storage&lt;/li&gt;
&lt;li&gt;Dig into how Delta Lake enables ACID transactions for enterprise-class storage access in Spark applications&lt;/li&gt;
&lt;li&gt;Watch a demonstration of the tools HPE Ezmeral provides for DevOps and Sys Admins to assist data scientists and engineers with elastic compute and storage resources in CNCF Kubernetes-based clusters&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;HPE CSI Driver for Kubernetes&lt;/strong&gt; – where you’ll learn to speed up DevOps and boost operations with advanced data services&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Hear about multitenancy involving HPE storage for Kubernetes clusters with private cloud&lt;/li&gt;
&lt;li&gt;See how the HPE CSI Driver’s advanced security features encrypt volumes regardless of the storage backend, both in-flight and at-rest&lt;/li&gt;
&lt;li&gt;Learn about opportunities with Prometheus monitoring and alerting for cloud-native apps and infrastructure&lt;/li&gt;
&lt;li&gt;Become familiar with HPE partnerships in data protection and data management, including Veeam, Commvault, Cohesity, and Zerto, HPE’s latest acquisition&lt;/li&gt;
&lt;li&gt;Explore enhanced developer efficiency with file and block for K8apps and data management for stateful apps&lt;/li&gt;
&lt;li&gt;Get to know more about operational agility and data protection&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Zero Trust&lt;/strong&gt; – where you’ll observe how SPIFFE/SPIRE delivers strongly attested identities from edge to cloud to establish a zero trust security model&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Learn what HPE, the leading contributor to CNCF’s SPIFFE and SPIRE projects, is developing around zero-trust security for application infrastructures in Kubernetes and beyond&lt;/li&gt;
&lt;li&gt;Celebrate the milestones reached: The release of SPIRE V1.0 and the successful completion of a detailed CNCF security audit&lt;/li&gt;
&lt;li&gt;Familiarize yourself with new feature development, like support for serverless functions, hardware attestation using TPM devices, and integration with Istio, that are in progress&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Check out these links to find out more information about what will be shown regarding &lt;a href=&quot;https://community.hpe.com/t5/HPE-Ezmeral-Uncut/Don-t-Miss-HPE-at-KubeCon-CloudNativeCon-North-America-2021/ba-p/7149736&quot;&gt;HPE Ezmeral &lt;/a&gt;and the &lt;a href=&quot;https://community.hpe.com/t5/Around-the-Storage-Block/Visit-HPE-Storage-at-KubeCon-North-America-Virtual-and-in-person/ba-p/7149757&quot;&gt;HPE CSI Driver for Kubernetes&lt;/a&gt;. You might also want to check out this blog post on how to &lt;a href=&quot;https://community.hpe.com/t5/HPE-Ezmeral-Uncut/Ready-to-become-a-superhero-Build-an-ML-model-with-Spark-on-HPE/ba-p/7149454#.YVH9BLhKg2y&quot;&gt;Build an ML model with Spark on HPE&lt;/a&gt; to get some cool insight on one of the demos being shown there.&lt;/p&gt;
&lt;p&gt;In the booth’s HPE DEV Community Center, take a tour of the &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE DEV portal&lt;/a&gt; – a door into the online dimension where developers and data scientists alike can find the resources they need to build innovative solutions on HPE products and solutions. This is where you’ll find me and all the HPE DEV swag.&lt;/p&gt;
&lt;p&gt;I also heard that there was going to be a magician over in my area! As long as he doesn’t make me disappear, I’m good with that.&lt;/p&gt;
&lt;p&gt;You’re going to want to pace yourself, because there are a lot of breakout sessions in addition to the presentations on the event floor. On Monday, October 11th, HPE will be hosting and sponsoring the &lt;a href=&quot;https://events.linuxfoundation.org/production-identity-day-spiffe-spire-north-america/&quot;&gt;CNCF’s Production Identity Day&lt;/a&gt; event. This provides a forum for SPIFFE and SPIRE users to work together. Speakers will include engineers from HPE and other vendors, as well as SPIRE users. If you can’t make it to this event, stop by the HPE booth to see a demo of new SPIRE features.&lt;/p&gt;
&lt;p&gt;On Friday, October 15th, from 3:25-4:00pm, catch HPE Chief Technologist Tom Golway and HPE Fellow Thomas Phelan in their talk on &lt;a href=&quot;https://kccncna2021.sched.com/event/lV5v?iframe=no&quot;&gt;Using Kubernetes with Data Processing Units to Offload Infrastructure&lt;/a&gt;. They plan on sharing some novel work related to offloading core Kubernetes infrastructure components from the main CPU onto the processing units of DPUs, which looks pretty interesting.&lt;/p&gt;
&lt;h3&gt;What’s happening online?&lt;/h3&gt;
&lt;p&gt;As I mentioned earlier, this is a hybrid event, with people being able to attend the conference virtually as well as in person. While I admit to missing the physical aspect of conferences like this, some of the best activities are found &lt;a href=&quot;https://kubecon-cloudnativecon-na.com/virtual-exhibitor/?v0326b739525aaf6a5900c153ea6485e67109462e8db159b156161fc07c7e3d8016769932b4c0398e64b5ea52edb3d1c5=98D89AED6140001531DE1D5095DD75E0A2A33735543DD0787B548CBAEAA423B2F93B8EABF8C08225934CAB9C3C342DEF&amp;#x26;fromHall&quot;&gt;online&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/virtual-booth-update-3.jpg&quot; alt=&quot;HPE Virtual Booth&quot; title=&quot;HPE Virtual Booth&quot;&gt;&lt;/p&gt;
&lt;p&gt;As part of our online presence, the HPE DEV team will be delivering several live office hour sessions. Speak with HPE DEV team members to learn more about the &lt;a href=&quot;https://developer.hpe.com/community&quot;&gt;HPE Developer Community&lt;/a&gt; and how you can benefit from joining it. Whether it’s information found on the web portal, technical blogs and tutorials, GitHub repos, free hands-on technical workshops, interactive technology talks, or online collaboration, &lt;em&gt;there’s treasure to be found&lt;/em&gt; when you’re an HPE DEV member! And it’s all free!&lt;/p&gt;
&lt;p&gt;Just like what was done for KubeCon|CloudNativeCon Europe, the HPE DEV team is running a &lt;a href=&quot;https://bit.ly/kubecon-na-2021-hpedev-treasure-hunt&quot;&gt;Treasure Hunt&lt;/a&gt;. This giveaway contest will run from October 11-17, 2021 – the week of the KubeCon event. Be one of the first to answer all the online questions correctly during this time period and you could win an HPE DEV hat!&lt;/p&gt;
&lt;p&gt;One of my fondest memories of pre-pandemic events was when folks used to stop by the HPE DEV Hack Shack. There, they would engage in coding challenges, workshops, and play a fun, little retro arcade game called &lt;a href=&quot;/hackshack/hackshackattack&quot;&gt;Hack Shack Attack!&lt;/a&gt; When the pandemic hit, the Hack Shack moved to an online presence. It’s now open 24/7. This means you can take workshops any time in the form of our &lt;a href=&quot;/hackshack/workshops&quot;&gt;HPE DEV Workshops-on-Demand&lt;/a&gt;. You can even play Hack Shack Attack! by visiting the Hack Shack &lt;a href=&quot;/hackshack/arcade&quot;&gt;Arcade&lt;/a&gt;. We even offer the ability to &lt;a href=&quot;/hackshack/stickerwall&quot;&gt;download stickers&lt;/a&gt; (virtual swag) from the site. Make sure you stop by and visit!&lt;/p&gt;
&lt;h3&gt;So, are you coming?&lt;/h3&gt;
&lt;p&gt;I’ve only scratched the surface of everything that will be going on at the event. I hear there’s going to be a couple of really cool surprises in the booth as well (did someone mention hoodies?), so come on by and check things out.&lt;/p&gt;
&lt;p&gt;It feels like such a long time since we’ve had the chance to meet in person. I’m really excited about this opportunity. For those of you attending in person, remember to wear your mask. If you need an extra, we’ll have some over in the HPE booth. And for those of you attending online, don’t forget to visit the &lt;a href=&quot;/hackshack/&quot;&gt;HPE DEV Hack Shack&lt;/a&gt; and check out what’s being offered there as well.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Analyzing the success of the HPE DEV Workshops-on-Demand]]></title><description><![CDATA[HPE DEV Hack Shack In October of 2020, the HPE DEV team introduced a very easy way for users to learn about a variety of different software…]]></description><link>https://developer.hpe.com/analyzing-the-success-of-the-hpe-dev-workshops-on-demand/</link><guid isPermaLink="false">https://developer.hpe.com/analyzing-the-success-of-the-hpe-dev-workshops-on-demand/</guid><pubDate>Thu, 16 Sep 2021 07:14:18 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;/img/wodanalysisblog1.png&quot; alt=&quot;HPE DEV Hack Shack&quot; title=&quot;HPE DEV Hackshack&quot;&gt;&lt;/p&gt;
&lt;p&gt;In October of 2020, the HPE DEV team introduced a &lt;strong&gt;very easy&lt;/strong&gt; way for users to learn about a variety of different software topics by interacting with code virtually – in the form of &lt;strong&gt;free&lt;/strong&gt; HPE DEV Workshops-on-Demand. Starting with just three workshops, the program has continued to grow and now offers over 20 courses, ranging from basic coding 101 to sophisticated infrastructure automation.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/wodanalysisblog2.png&quot; alt=&quot;HPE DEV Hack Shack Workshops-on-demand catalog&quot; title=&quot;HPE DEV Hack Shack Workshops-on-demand catalog&quot;&gt;&lt;/p&gt;
&lt;p&gt;These hands-on workshops cover a variety of &lt;a href=&quot;/hackshack/workshops&quot;&gt;topics&lt;/a&gt;, including open source technologies (e.g. Kubernetes, SPIFFE/SPIRE, Grommet, StackStorm), popular programming languages (e.g. Python, Rust), coding tools (e.g. APIs, Git), and market-leading HPE technologies (e.g. HPE Ezmeral Container Platform and Data Fabric, HPE OneView, HPE iLO). Whether it’s due to the popularity of these topics, the workshops’ unique, easy-to-follow methodology, the associated badge recognition program, or the fact that they’re free – these workshops have certainly gained quite a lot of interest!&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Measuring success&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The goal of the HPE DEV program is to foster a community where members can innovate through shared experiences, collaboration, and learning. As just one of the services HPE DEV offers to the developer community, the Workshops-on-Demand program is an important one that fits these goals very nicely. Every so often we take stock of our programs to determine how much value each one offers so we can make any necessary adjustments. And, as the program manager for the HPE DEV Workshops-on-Demand, it really falls on me to make that determination.&lt;/p&gt;
&lt;p&gt;From my standpoint, growing from 3 to over 20 workshops in the course of a year could easily be seen as success. But from a participant’s side, it might look different. So what data should be considered a good measure of success for this program?&lt;/p&gt;
&lt;p&gt;I decided to analyze a few data points from different categories that appeared to provide some important insights:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Participants:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Number of attendees&lt;/li&gt;
&lt;li&gt;Type of attendees (including HPE employees, partners, customers)&lt;/li&gt;
&lt;li&gt;Completed workshop surveys&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Content:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Number of workshops produced&lt;/li&gt;
&lt;li&gt;Topics covered&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Infrastructure:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Resiliency of the platform&lt;/li&gt;
&lt;li&gt;Accessibility&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/wodanalysisblog3.png&quot; alt=&quot;&quot; title=&quot;Total Number of participants per month&quot;&gt;&lt;/p&gt;
&lt;p&gt;As you can see on the chart above, the number of people using the workshops has been growing steadily month over month. The noticeable jumps in attendance correspond to events where the team used the HPE DEV Workshops-on-Demand platform to deliver technical sessions, such as the HPE Technology and Solutions Summit in March or HPE Discover in June. These events help promote the HPE DEV Workshops-on-Demand, as many event participants come back later to register for additional workshops. In my mind, having a participant show up to a workshop is good, but having one come back for more sessions, knowing that each could take up to 4 hours, is impressive. I consider this a good mark of success.&lt;/p&gt;
&lt;p&gt;Analyzing the types of attendees helps us gauge whether the content being provided is appropriate. It helps us answer questions like “Is the technical level too deep?”, “Is more time required?”, and “Are we hitting the appropriate target audience?”&lt;/p&gt;
&lt;p&gt;Another important measurement is the number of surveys filled out at the end of each workshop. These surveys are very important to us for several reasons:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reason #1&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;They allow us to see if our content is relevant to the participant. In the completed surveys, participants gave the workshops an average rating of&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;4.64 stars out of 5 for technical level&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;4.63 stars out of 5 for the overall value of the workshop&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Reason #2&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;They allow us to see if the delivery method is correct. The surveys indicated that the Jupyter Notebooks format has been very well accepted, with an average rating of&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;4.60 stars out of 5&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Reason #3&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;They help us define new topics and give us insight into the user’s experience. Comments such as “&lt;em&gt;Great initiative, the more technical format is the way to go for us engineers and developers, just try to go deeper (using more time if required)&lt;/em&gt;.” and “&lt;em&gt;Great idea and a very suitable form of delivery.&lt;/em&gt;” provide feedback that we can use to modify content and add additional, appropriate courses.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Content&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Aligning with the HPE DEV charter, the workshops span topics that interest our audience, which tend to be related to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;IT Ops and DevOps, in terms of API interaction and automation&lt;/li&gt;
&lt;li&gt;Open source&lt;/li&gt;
&lt;li&gt;Programming languages&lt;/li&gt;
&lt;li&gt;Data analytics&lt;/li&gt;
&lt;li&gt;AI / MLOps&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The workshops range from simple 101-level sessions to more advanced labs. We try to keep a balance between courses for participants who are just starting out and courses for community members looking for more technical depth.&lt;/p&gt;
&lt;p&gt;As mentioned earlier, we started small and have continued to build out a wide selection of workshops over this past year. We plan to continue to expand this program, always taking your feedback on which new topics should be offered.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/wodanalysisblog4.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Some topics tend to be more popular than others. You can see that the HPE Ezmeral workshops are very well attended. AI / MLOps and HPE iLO/Redfish workshops are also quite popular. Open Source and Programming workshops follow close behind.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Infrastructure&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;From a backend perspective, we started the program with a single JupyterHub server to support the very first workshops. Each of the workshops uses a different type of API endpoint. When a workshop reaches out to a public weather forecasting API, nothing much is required on the backend side apart from the JupyterHub server itself.&lt;/p&gt;
&lt;p&gt;When targeting an HPE Ezmeral Container Platform API, however, the requirements are completely different. To handle an ever-growing number of workshops and participants, we decided to implement additional JupyterHub servers. In addition to the first JupyterHub server sitting in an HPE site in France, we created a second JupyterHub server in another HPE location. We also leverage HPE GreenLake cloud services to offload some workshops. We now have three production sites, which allow us to improve our workshops’ resiliency. We can now easily redirect an existing workshop from one site to another.&lt;/p&gt;
&lt;p&gt;Although this is all transparent to the participant, this is key to the success of the program, as it allows our workshops to run 24/7, all year long, and still for free!&lt;/p&gt;
&lt;p&gt;Our Ansible-based automation allows us to replicate content easily when combined with the right set of Git commands. We can build or rebuild a complete working and production-ready JupyterHub server in just a few hours and monitor the stability of the environment to ensure our workshops are always ready to go.&lt;/p&gt;
&lt;p&gt;From a frontend perspective, we also improved several aspects. Our developer did a great job redesigning the tile layout, making it easier to navigate through the Workshops-on-Demand catalog. We also added a dedicated registration page per workshop. Finally, we performed a REST API integration of our frontend with the &lt;a href=&quot;https://hpedemoportal.ext.hpe.com/&quot;&gt;HPE Demonstration Portal&lt;/a&gt;, which allows HPE Demonstration Portal users to book workshops while remaining in the context of that portal.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Looking forward&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I plan to come back next year and summarize the achievements of the team regarding Phase 2 of the program. Planned upgrades include an even nicer portal for our workshops and a broader range of 101 workshops covering artificial intelligence (AI), MLOps, Spark, Concourse, and more.&lt;/p&gt;
&lt;p&gt;The HPE DEV Workshops-on-Demand continue to gain momentum. The quality and relevance of the content has definitely played a part in that. Internal and external promotion also helps to build awareness. But another, more fun, reason why more and more users are taking them may have to do with our badge recognition program. For each workshop that you take, you earn a badge. Check out this &lt;a href=&quot;https://developer.hpe.com/blog/become-a-legend/&quot;&gt;blog post&lt;/a&gt; for more information on this program.&lt;/p&gt;
&lt;p&gt;If you haven’t taken one of our workshops yet, please &lt;a href=&quot;/hackshack/workshops&quot;&gt;register for one of our Workshops-on-Demand&lt;/a&gt;, and see why they have attracted so many users. It’s really important to fill out the survey at the end of the workshop to provide us with feedback on your experience, as well as inform us about new subjects you would like to see offered. As always, it’s your feedback that really helps to improve this program within the &lt;a href=&quot;https://hpedev.io&quot;&gt;HPE DEV Community&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Accessing HPE Ezmeral Data Fabric Object Storage from Spring Boot S3 Micro Service deployed in K3s cluster]]></title><description><![CDATA[Containers and microservices are transforming Edge and IoT platform use cases that can be deployed in small footprint Kubernetes clusters on…]]></description><link>https://developer.hpe.com/accessing-hpe-data-fabric-s3-storage-from-spring-boot-s3-micro-service-deployed-in-k3s-cluster/</link><guid isPermaLink="false">https://developer.hpe.com/accessing-hpe-data-fabric-s3-storage-from-spring-boot-s3-micro-service-deployed-in-k3s-cluster/</guid><pubDate>Mon, 13 Sep 2021 08:13:40 GMT</pubDate><content:encoded>&lt;p&gt;Containers and microservices are transforming Edge and IoT platform use cases that can be deployed in small footprint Kubernetes clusters on edge nodes and persisting data at a central location. This data pipeline can be easily accessed by downstream complex analytics applications for further processing.&lt;/p&gt;
&lt;p&gt;In this article, I will discuss how to access the HPE Ezmeral Data Fabric Object Store (S3) using a Spring Boot S3 microservice application deployed in a &lt;a href=&quot;https://k3s.io/&quot;&gt;K3s cluster&lt;/a&gt; and perform basic S3 operations like upload, list, and delete. The diagram below gives an overview of the architecture.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/hpe-ezmeral-data-fabric-s3-springboot-k3s.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h5&gt;Figure 1:  Architecture overview of Spring Boot S3 Micro Service on K3s with HPE Data Fabric Object Storage as the back end.&lt;/h5&gt;
&lt;p&gt;The technology stack used is briefly described in the sections below:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Data Fabric Object Storage&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The Object Store with an S3-compatible API stores data generated through multiple data protocols, such as NFS, POSIX, S3, and HDFS. Data in the Object Store is accessible through S3 API requests. The Object Store manages all inbound S3 API requests to store data in or retrieve data from an HPE Ezmeral Data Fabric cluster. More details can be found &lt;a href=&quot;https://docs.datafabric.hpe.com/62/MapRObjectStore/MapRObjectStorewithS3-compatibleAPI.html&quot;&gt;here.&lt;/a&gt;&lt;/p&gt;
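&lt;p&gt;Before diving into the Spring Boot application itself, the following minimal Python sketch (using the boto3 library, separate from the demo project) illustrates the same kind of S3 operations against the Object Store&apos;s S3-compatible endpoint. The endpoint URL, bucket name, file name, credentials, and certificate path are placeholders you would replace with the values noted later in the prerequisites.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-py&quot;&gt;import boto3

# Placeholder values: replace with your Object Store endpoint and the
# access key / secret key noted from the Object Store service UI (port 9000).
s3 = boto3.client(
    &quot;s3&quot;,
    endpoint_url=&quot;https://FQDN:9000&quot;,
    aws_access_key_id=&quot;YOUR_ACCESS_KEY&quot;,
    aws_secret_access_key=&quot;YOUR_SECRET_KEY&quot;,
    verify=&quot;/path/to/chain-ca.pem&quot;,  # assumed path to the cluster CA bundle
)

# Upload a local file, then list the bucket contents (bucket name is a placeholder).
s3.upload_file(&quot;sample-file.txt&quot;, &quot;demo-bucket&quot;, &quot;sample-file.txt&quot;)
for obj in s3.list_objects_v2(Bucket=&quot;demo-bucket&quot;).get(&quot;Contents&quot;, []):
    print(obj[&quot;Key&quot;], obj[&quot;Size&quot;])
&lt;/code&gt;&lt;/pre&gt;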
&lt;p&gt;&lt;strong&gt;K3s&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Lightweight Kubernetes, aka K3s, is easy to install and consumes half the memory of a standard Kubernetes distribution, all in a binary of less than 100MB. It&apos;s great for edge and IoT use cases. More information on K3s can be found at the &lt;a href=&quot;https://rancher.com/docs/k3s/latest/en/&quot;&gt;Rancher&lt;/a&gt; site. Follow the steps in the &lt;a href=&quot;https://rancher.com/docs/k3s/latest/en/quick-start/&quot;&gt;Quick-Start Guide&lt;/a&gt; to install the K3s cluster.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Spring Boot&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Spring Boot is an open source, Java-based framework used to create microservices. Many real-world applications are written in Spring Boot for faster development and better maintainability. More information can be found at &lt;a href=&quot;https://spring.io/projects/spring-boot&quot;&gt;spring.io&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Note: You can move the Spring Boot application to Quarkus with no change in code, which will reduce the application&apos;s footprint. More information can be found on the &lt;a href=&quot;https://quarkus.io/blog/quarkus-for-spring-developers/&quot;&gt;Quarkus site&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;In this blog post, you&apos;ll check out the existing Spring Boot application from &lt;a href=&quot;https://github.hpe.com/kiran-mavatoor/df-s3-springboot-k3s-demo&quot;&gt;GitHub&lt;/a&gt;, customize it, and run it.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Application Prerequisites&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;1.      The K3s cluster must be accessible. Note down the control plane node details. This information is required to deploy the Spring Boot application.&lt;/p&gt;
&lt;p&gt;2.      Access the HPE Data Fabric Object Store service UI running on port 9000, for example: &lt;code&gt;https://FQDN:9000/&lt;/code&gt;. Note down the access key and secret key. It is advised to change the default values.&lt;/p&gt;
&lt;p&gt;3.      Java 11, Apache Maven 3.8+, Docker Client.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Build and Install Steps&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;1.      Check out an existing Spring Boot application from &lt;a href=&quot;https://github.hpe.com/kiran-mavatoor/df-s3-springboot-k3s-demo&quot;&gt;GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;2.      Copy ssl_usertruststore.p12 from the HPE Data Fabric cluster into the certs folder under the project directory. The ssl_usertruststore.p12 file is located in the /opt/mapr/conf directory on a cluster node. The password for the .p12 file can be copied from the “ssl.client.truststore.password” property value in /opt/mapr/conf/ssl-client.xml.&lt;/p&gt;
&lt;p&gt;3.      From the project directory, open resources/application.properties. Change the key values as per your environment. &lt;/p&gt;
&lt;p&gt;4.      Execute “mvn clean install” .&lt;/p&gt;
&lt;p&gt;5.      The distributable is available in target/df-s3-springboot-k3s-demo-1.0-SNAPSHOT.jar .&lt;/p&gt;
&lt;p&gt;6.      Edit the Dockerfile located in the project directory. The value of “-Djavax.net.ssl.trustStorePassword” must be the same as the value of “ssl.client.truststore.password” obtained in Step 2.&lt;/p&gt;
&lt;p&gt;Note: This value can be configured using config-map.yaml in K3s cluster.&lt;/p&gt;
&lt;p&gt;7.      Execute the Docker commands below to build the Docker image and push it to Docker Hub.&lt;/p&gt;
&lt;p&gt;Note: Alternatively, you can use Podman instead of Docker to create images. More information on Podman can be found &lt;a href=&quot;https://developers.redhat.com/blog/2020/11/19/transitioning-from-docker-to-podman#transition_to_the_podman_cli&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;docker build -f Dockerfile -t &amp;#x3C;Dockerhub user id&gt;/df-s3-springboot-k3s-demo:latest .
docker image ls
docker login -u &amp;#x3C;Dockerhub user id&gt;
&gt; enter password:  &amp;#x3C;Dockerhub password&gt;
docker push &amp;#x3C;Dockerhub user id&gt;/df-s3-springboot-k3s-demo:latest
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;8.      Next, create the df-s3-springboot-k3s-demo.yaml file below to deploy the executable in the K3s cluster. A sample YAML file is given in the project directory. Please replace &lt;dockerhub userid&gt; with a valid ID.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-Yaml&quot;&gt;apiVersion: v1
kind: Service
metadata:
  name: df-s3-springboot-k3s-demo-service
spec:
  selector:
    app: df-s3-springboot-k3s-demo
  ports:
  - protocol: TCP
    name: df-s3-springboot-k3s-demo
    port: 8000
    targetPort: 8000
  type: LoadBalancer

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: df-s3-springboot-k3s-demo
spec:
  selector:
    matchLabels:
      app: df-s3-springboot-k3s-demo
  replicas: 1
  template:
    metadata:
      labels:
        app: df-s3-springboot-k3s-demo
    spec:
      containers:
      - name: df-s3-springboot-k3s-demo
        image: &amp;#x3C;Dockerhub userid&gt;/df-s3-springboot-k3s-demo:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8000
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;9.      Before deploying it in the Kubernetes cluster, validate the Docker or Podman image by running it locally and opening the Swagger UI in a browser.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;docker run -p 8000:8000 &amp;#x3C;Dockerhub userid&gt;/df-s3-springboot-k3s-demo
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-UI&quot;&gt;http://localhost:8000/swagger-ui.html
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Deploying in K3s cluster&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;1.      Log into the control plane node of the K3s cluster. Create/Copy the df-s3-springboot-k3s-demo.yaml to the node.&lt;/p&gt;
&lt;p&gt;2.      Execute “kubectl apply -f df-s3-springboot-k3s-demo.yaml”. If required, you can specify the namespace option.&lt;/p&gt;
&lt;p&gt;3.      Check the pod creation status by using the command “kubectl get pods -l app=df-s3-springboot-k3s-demo -o wide”.&lt;/p&gt;
&lt;p&gt;4.      Verify if the services are properly deployed by using the command “kubectl get service df-s3-springboot-k3s-demo-service”.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Accessing the Swagger UI from the pod&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;1.      Connect to the pod at &lt;a href=&quot;http://pod-ip:8000/swagger-ui.html&quot;&gt;http://pod-ip:8000/swagger-ui.html&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;2.      Verify the services exposed in the Swagger-UI.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/swagger-ui.png&quot; alt=&quot;Swagger UI&quot;&gt;&lt;/p&gt;
&lt;p&gt;In this blog post, you learned how data can be processed from the edge to a persistent store by creating a data pipeline with a diverse technology stack. This data pipeline is not limited to the current use case; it can be applied to many different microservice use cases.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;PySpark is an interface for Apache Spark in Python. &lt;a href=&quot;https://spark.apache.org/&quot;&gt;Apache Spark&lt;/a&gt; is a unified analytics engine for big data processing. It allows developers to perform data processing on files in a distributed filesystem, like the Hadoop distributed filesystem or HPE Ezmeral Data Fabric (formerly known as MapR-XD). Due to its complexity, setting up a Spark environment is always a pain for data scientists. Fortunately, HPE Ezmeral Container Platform can make this much easier.&lt;/p&gt;
&lt;p&gt;You can run Apache Spark jobs on Kubernetes-managed clusters on the HPE Ezmeral Container Platform. The HPE Ezmeral Container Platform provides you with access to a wealth of MLOps tools such as Apache Spark Operator, and things like a &lt;a href=&quot;https://kubedirector.io/&quot;&gt;Kubedirector&lt;/a&gt; Jupyter Notebook where you will do your Data Science work and interact with the Apache Spark Operator to run your Apache Spark jobs.&lt;/p&gt;
&lt;p&gt;Once logged in as an MLOps tenant member, you can deploy an instance of Jupyter Notebook. From the Jupyter Notebook, you can either run Spark jobs with Apache Livy to make REST API calls to Spark Operator, or you can directly run a Spark job against the Spark Operator with the PySpark module.&lt;/p&gt;
&lt;p&gt;In this post, I will focus on running simple Spark jobs using the PySpark module on a Jupyter Notebook cluster instance deployed on HPE Ezmeral Container Platform. For those who want to squeeze the best performance out of Spark and run Spark Jobs with Apache Livy, visit this &lt;a href=&quot;https://developer.hpe.com/blog/on-premise-adventures-how-to-build-an-apache-spark-lab-on-kubernetes/&quot;&gt;post&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Preparing the Jupyter Notebook Cluster&lt;/h2&gt;
&lt;p&gt;First, we have to prepare our favorite Jupyter Notebook environment. Inside an MLOps tenant, navigate to the &lt;strong&gt;Notebooks&lt;/strong&gt; tab. You will see a Jupyter Notebook KubeDirector app prepared for you. After clicking the &lt;strong&gt;Launch&lt;/strong&gt; button, you will need to configure the compute resources needed.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://user-images.githubusercontent.com/72959956/120459929-39c63300-c3cb-11eb-9e7a-65189f4367d3.png&quot; alt=&quot;image&quot;&gt;&lt;/p&gt;
&lt;p&gt;As you can see below, you must specify the name of the Jupyter Notebook cluster. Click &lt;strong&gt;Enable DataTap&lt;/strong&gt; to expand access to shared data by specifying a named path to a specified storage resource.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://user-images.githubusercontent.com/72959956/132812471-d1ce5ce8-0d47-41ae-bd96-879262018f84.png&quot; alt=&quot;image&quot;&gt;&lt;/p&gt;
&lt;p&gt;Switching to the &lt;strong&gt;Notebook Endpoints&lt;/strong&gt; tab, you can see that the access points are prepared for you. Just click the link and, after logging in with your LDAP/AD account, your favorite Jupyter environment is ready for you.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://user-images.githubusercontent.com/72959956/120460678-ea343700-c3cb-11eb-9aef-8afc9252d471.png&quot; alt=&quot;image&quot;&gt;&lt;/p&gt;
&lt;p&gt;Different kernels are already installed for you. No matter which language you are using, there will be one that suits your ML project. Two of the kernels are Python related: the Python3 kernel and the PySpark kernel. The Python3 kernel is for running single-node workloads, while the PySpark kernel, connected to the Spark Operator through Livy, is for running distributed workloads. In this post, I am running a simple Spark job, so I will pick the Python3 kernel and import the PySpark module at runtime.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://user-images.githubusercontent.com/72959956/120460537-cc66d200-c3cb-11eb-8410-3b7ec95051d5.png&quot; alt=&quot;image&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Preparing the datasets&lt;/h2&gt;
&lt;p&gt;Imagine that you have a very large CSV file ready for analysis. You need to put the file to the distributed filesystem. Of course, you can do that with the graphic user interface provided through HPE Ezmeral Container Platform.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://user-images.githubusercontent.com/72959956/120461217-67f84280-c3cc-11eb-9126-e69cacef4432.png&quot; alt=&quot;image&quot;&gt;&lt;/p&gt;
&lt;p&gt;The other way to do this would be to drag the file to the left panel of the local Jupyter Notebook cluster and run the following HDFS commands to put the file to the &quot;TenantStorage&quot; through DataTap.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Put a file from local filesystem to distributed filesystem using HDFS commands
hdfs dfs -put enhanced_sur_covid_19_eng.csv dtap://TenantStorage/enhanced_sur_covid_19_eng.csv
# List the files or directories
hdfs dfs -ls dtap://TenantStorage/
# Show the last part of the file
hdfs dfs -tail dtap://TenantStorage/enhanced_sur_covid_19_eng.csv
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://user-images.githubusercontent.com/72959956/129331881-dbe602e7-b3d9-4541-a9d0-4ea274aa7e51.png&quot; alt=&quot;image&quot;&gt;&lt;/p&gt;
&lt;h1&gt;Getting Started with PySpark&lt;/h1&gt;
&lt;p&gt;The PySpark module is already installed for you. No extra installation is needed. So convenient, isn&apos;t it? Some configurations for the PySpark runtime are needed in order to read files from DataTap.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-py&quot;&gt;# python3 kernel
from pyspark import SparkConf, SparkContext

# Specify the path of the jars files
conf = SparkConf().set(&quot;spark.jars&quot;, &quot;/opt/bdfs/bluedata-dtap.jar&quot;)
sc = SparkContext(conf=conf)
# Specify the Hadoop configurations.
sc._jsc.hadoopConfiguration().set(&apos;fs.dtap.impl&apos;, &apos;com.bluedata.hadoop.bdfs.Bdfs&apos;)
sc._jsc.hadoopConfiguration().set(&apos;fs.AbstractFileSystem.dtap.impl&apos;, &apos;com.bluedata.hadoop.bdfs.BdAbstractFS&apos;)
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Reading datasets from HPE Ezmeral Data Fabric&lt;/h2&gt;
&lt;p&gt;After some configuration, your Spark engine is connected to the platform and you can now read files from HPE Ezmeral Data Fabric through Data Tap.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-py&quot;&gt;# Commands for reading DataTap file.
text = sc.textFile(&quot;dtap://TenantStorage/hello.txt&quot;)
text.take(5)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;For reading CSV files as a Spark dataframe, run the following commands:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-py&quot;&gt;# Commands for importing PySpark SQL module.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Commands for reading DataTap csv file.
df = spark.read.csv(&apos;dtap://TenantStorage/enhanced_sur_covid_19_eng.csv&apos;, header=True, inferSchema=True)
df.take(3)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://user-images.githubusercontent.com/72959956/122021373-333ab100-cdf8-11eb-9e58-edbccf43f0b2.png&quot; alt=&quot;image&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://user-images.githubusercontent.com/72959956/122021431-3e8ddc80-cdf8-11eb-9c61-d9bd400a4c9b.png&quot; alt=&quot;image&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Data analytics with PySpark&lt;/h2&gt;
&lt;p&gt;With PySpark, you can easily read data from files, cleanse it, and do analytics within a Jupyter notebook. To view the entire notebook, click &lt;a href=&quot;https://github.com/helloezmeral/HPE-Ezmeral-HelloWorld/blob/main/pyspark/pyspark_covidhk.ipynb&quot;&gt;here&lt;/a&gt;. A short code sketch of the same operations follows the table below.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Screenshot&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;img src=&quot;https://user-images.githubusercontent.com/72959956/122021467-45b4ea80-cdf8-11eb-8ca4-ffc11c03f1ad.png&quot; alt=&quot;image&quot;&gt;&lt;/td&gt;
&lt;td&gt;You can run &lt;code&gt;df.printSchema()&lt;/code&gt; to view the schema of your dataframe.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;img src=&quot;https://user-images.githubusercontent.com/72959956/122021502-4baacb80-cdf8-11eb-87d3-b29ef643b373.png&quot; alt=&quot;image&quot;&gt;&lt;/td&gt;
&lt;td&gt;These are the commands for selecting columns of data and filtering according to the criteria.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;img src=&quot;https://user-images.githubusercontent.com/72959956/122021550-56fdf700-cdf8-11eb-9c31-e0d171c7406e.png&quot; alt=&quot;image&quot;&gt;&lt;/td&gt;
&lt;td&gt;Some common commands to interact with your datasets.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;img src=&quot;https://user-images.githubusercontent.com/72959956/122021576-5ebd9b80-cdf8-11eb-9810-36d744560327.png&quot; alt=&quot;image&quot;&gt;&lt;/td&gt;
&lt;td&gt;Commands for data aggregation.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;img src=&quot;https://user-images.githubusercontent.com/72959956/122021616-667d4000-cdf8-11eb-8400-2dc03f4290f3.png&quot; alt=&quot;image&quot;&gt;&lt;/td&gt;
&lt;td&gt;Example for visualizing your datasets.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
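&lt;p&gt;For readers who prefer text to screenshots, here is a short, illustrative sketch of the kinds of operations shown above, assuming the &lt;code&gt;df&lt;/code&gt; dataframe loaded earlier. The column names used here are hypothetical and should be replaced with the actual columns of your CSV file.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-py&quot;&gt;# Inspect the schema of the dataframe loaded earlier.
df.printSchema()

# Select a few columns and filter rows (column names are illustrative).
df.select(&quot;Gender&quot;, &quot;Age&quot;).filter(df[&quot;Age&quot;] &gt; 60).show(5)

# Aggregate: count the number of records per gender.
df.groupBy(&quot;Gender&quot;).count().show()
&lt;/code&gt;&lt;/pre&gt;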
&lt;h2&gt;Possible error&lt;/h2&gt;
&lt;p&gt;You may encounter the error, &quot;permission denied&quot;, when running HDFS commands. To solve this error, you have to &quot;exec&quot; into the pod and change the access mode for the core-site.xml.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://user-images.githubusercontent.com/72959956/124234611-d6086480-db46-11eb-849e-7d4f7a8c35e4.png&quot; alt=&quot;image&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can &quot;exec&quot; into it through the Jupyter notebook or using the WebTerminal that comes along with HPE Ezmeral Container Platform. To grab the kubectl credential from HPE Ezmeral Container Platform, run the following commands:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Bash Kernel
# Grab the kubectl credential
kubectl hpecp refresh ez-gateway.hpeilab.com --insecure --hpecp-user=hpecli --hpecp-pass=hpecli
kubectl get pods --all-namespaces
kubectl get pods --namespace=poc-tenant
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Run the following command for accessing the bash of pod:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# 1: exec into the pod
kubectl exec -it &amp;#x3C;pod name&gt; -- /bin/bash
# example
kubectl exec -it testnotebook-controller-6kq7r-0 --namespace=poc-tenant -- /bin/bash
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Run the following command for changing the access mode:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# 2: changed the access mode for the core-site.xml
chmod 666 /opt/bluedata/hadoop-2.8.5/etc/hadoop/core-site.xml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And now you can run the HDFS command without error.&lt;/p&gt;
&lt;h1&gt;Key takeaway&lt;/h1&gt;
&lt;p&gt;I hope this post offered you some tips on how to do big data analytics using the Apache Spark Python API with less time spent on setting up the environment and more time on digging out business insight from your data. Keep this notebook handy so you can refer back to it often. Also, keep an eye out on the &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE DEV blog&lt;/a&gt; to make sure you catch future articles on this subject.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[People make a community]]></title><link>https://developer.hpe.com/2021-September-03/</link><guid isPermaLink="false">https://developer.hpe.com/2021-September-03/</guid><pubDate>Fri, 03 Sep 2021 05:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Discovering HPE DEV: One Developer’s Journey]]></title><description><![CDATA[As a full-stack developer, I’m always looking for ways to more easily build out applications. In my exploration, I came across the HPE…]]></description><link>https://developer.hpe.com/discovering-hpe-dev-one-developer’s-journey/</link><guid isPermaLink="false">https://developer.hpe.com/discovering-hpe-dev-one-developer’s-journey/</guid><pubDate>Thu, 19 Aug 2021 18:13:30 GMT</pubDate><content:encoded>&lt;p&gt;As a full-stack developer, I’m always looking for ways to more easily build out applications. In my exploration, I came across the &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE Developer web portal&lt;/a&gt; (aka, the HPE DEV site). One of the first things that caught my eye was the variety of different platforms that HPE offers and supports, many of which are open source. One platform in particular, &lt;a href=&quot;https://developer.hpe.com/platform/grommet/home/&quot;&gt;Grommet&lt;/a&gt;, really piqued my interest.&lt;/p&gt;
&lt;p&gt;As described on the site, the open source project &lt;a href=&quot;https://v2.grommet.io/&quot;&gt;Grommet&lt;/a&gt; helps you create responsive and accessible mobile-first projects for the web with an &lt;a href=&quot;https://v2.grommet.io/components&quot;&gt;easy-to-use&lt;/a&gt;, &lt;a href=&quot;https://reactjs.org/&quot;&gt;React&lt;/a&gt;-based component library that is part design system and part development framework. While reading the Grommet page on the HPE DEV site, I discovered the HPE Design System, which helps web developers offer their website visitors an improved user experience. The HPE Design System shows how Grommet can be themed and used to build user interfaces for your own brand. From what I understand, the HPE Design System is actually being used within HPE to guide the design of the user interfaces that HPE creates.&lt;/p&gt;
&lt;h2&gt;Moseying on over to the Hack Shack&lt;/h2&gt;
&lt;p&gt;Also, on the HPE DEV &lt;a href=&quot;https://developer.hpe.com/platform/grommet/home/&quot;&gt;Grommet platform page&lt;/a&gt;, I discovered that there are a number of in-depth, hands-on technical workshops that I can take, which are found over in the &lt;a href=&quot;/hackshack/&quot;&gt;HPE DEV Hack Shack&lt;/a&gt;. In the &lt;a href=&quot;/hackshack/workshops&quot;&gt;Workshops-on-Demand library&lt;/a&gt;, I found an awesome course entitled &lt;em&gt;&lt;a href=&quot;/hackshack/workshop/14&quot;&gt;Streamline app development with open source Grommet&lt;/a&gt;&lt;/em&gt;. When I clicked on the Learn More button, I found a video replay of the course, which looked really interesting, so I decided to register for it. It didn’t cost me anything, and I actually found that there were real, live people at the other end of the Slack channel supporting it who were happy to help me with any questions I had. That, in and of itself, was really cool because, being developers like me, we had a lot of things to talk about! &lt;a href=&quot;https://grommet.slack.com/&quot;&gt;Grommet has a Slack channel&lt;/a&gt; as well, which is highly active. Folks are making contributions there all the time.&lt;/p&gt;
&lt;p&gt;But I digress… back to the Hack Shack. In my meanderings about the Hack Shack, I found that there was a ton of other information there, too. The Workshops-on-Demand cover a broad range of topics of interest to all developers: API 101, Python, Kubernetes, Redfish, Rust, Git, etc. While taking my Grommet workshop, I noticed a couple of other workshops that I’m interested in taking. The ones that caught my eye are the courses on technology basics – the 101 level – especially the workshops on &lt;a href=&quot;/hackshack/workshop/24&quot;&gt;Kubernetes 101&lt;/a&gt; and &lt;a href=&quot;/hackshack/workshop/26&quot;&gt;HPE Ezmeral Data Fabric 101&lt;/a&gt;. I’ve heard a lot about HPE Ezmeral and I am curious to learn more.&lt;/p&gt;
&lt;p&gt;What really got me, though, was the &lt;a href=&quot;/hackshack/arcade&quot;&gt;Arcade&lt;/a&gt; where they house the &lt;a href=&quot;/hackshack/hackshackattack&quot;&gt;Hack Shack Attack&lt;/a&gt;! retro video game and offer stickers and wallpaper you can use on your own computer. It’s a really cool place, and I have to admit I spent some time trying to tackle the IT Monster.&lt;/p&gt;
&lt;h2&gt;Back to the HPE DEV portal&lt;/h2&gt;
&lt;p&gt;While my initial path took me over to all things Grommet, I wondered what else I might find on the &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE DEV portal&lt;/a&gt;, so I headed back there. On the Grommet platform page, I was impressed to find so many blog articles on Grommet. I found &lt;a href=&quot;https://developer.hpe.com/blog/using-grommet-with-gatsby/&quot;&gt;Using Grommet with Gatsby&lt;/a&gt; to be very helpful with setting up a situation where I wanted to convert a website from a create-react-app to Gatsby. After reading that one, I decided to go over to the &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE DEV blog&lt;/a&gt; and see what else they had. Like the Workshops-on-Demand, the blog site covers a variety of different topics. It even had a &lt;a href=&quot;https://developer.hpe.com/blog/hpe-dev-launches-its-munch-learn-technical-talks/&quot;&gt;blog post&lt;/a&gt; on another HPE DEV offering, the &lt;a href=&quot;https://developer.hpe.com/campaign/munch-and-learn&quot;&gt;Munch &amp;#x26; Learn technical talks&lt;/a&gt;. From what I understand, the Munch &amp;#x26; Learn sessions bring in industry experts to talk on different subjects. You can view some of the replays of these sessions on that page and register for upcoming sessions. I’m hoping they’ll have a session on Grommet soon.&lt;/p&gt;
&lt;p&gt;After spending some time exploring the HPE DEV portal, I decided to sign up for the &lt;a href=&quot;https://developer.hpe.com/newsletter-signup&quot;&gt;HPE DEV Monthly Newsletter&lt;/a&gt;. Taking a look at their archive, I found that these newsletters highlighted some of the more interesting blogs and tutorials that were published in a given month. I figured this was the easiest way for me to stay up to date. Still, I do find that I come back to the portal often to see if I’m missing out on anything new. I also found that I could follow what’s happening on the site by following &lt;a href=&quot;https://twitter.com/HPE_Developer&quot;&gt;HPE DEV on Twitter&lt;/a&gt;. I hear that sometimes HPE DEV will host coding challenges in the Hack Shack. I figured this was probably the best way to stay in touch and get any alerts on upcoming challenges. Those could be fun!&lt;/p&gt;
&lt;p&gt;Something else I found rather fun was actually getting this article published on their blog! I found this one post on the blog called &lt;a href=&quot;https://developer.hpe.com/blog/be-an-hpe-dev-blogger/&quot;&gt;Be an HPE DEV blogger!&lt;/a&gt; I wasn’t sure if writing a post as simple as this would be something they’d find valuable, but when I checked in with the editor, I found that they were very welcoming and eager to help me get published. Who knows? I may write more posts in the future.&lt;/p&gt;
&lt;p&gt;I hope my journey has inspired you to see what you can find on the &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE Developer Portal&lt;/a&gt;. There’s a ton more there that I haven’t even touched on because I was so focused on learning more about Grommet. Maybe your interests lie elsewhere. Maybe you need access to a platform-specific API or SDK. In any case, you’re sure to find something interesting. Check it out!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Meet Linux Distinguished Technologist and Open Source evangelist, Bruno Cornec]]></title><description><![CDATA[As a leading global, edge-to-cloud platform-as-a-service company, Hewlett Packard Enterprise (HPE) prides itself in employing team members…]]></description><link>https://developer.hpe.com/meet-linux-distinguished-technologist-and-open-source-evangelist-bruno-cornec/</link><guid isPermaLink="false">https://developer.hpe.com/meet-linux-distinguished-technologist-and-open-source-evangelist-bruno-cornec/</guid><pubDate>Wed, 18 Aug 2021 16:39:18 GMT</pubDate><content:encoded>&lt;p&gt;As a leading global, edge-to-cloud platform-as-a-service company, Hewlett Packard Enterprise (HPE) prides itself in employing team members who share one common purpose: to advance the way people live and work. In this blog series, you’ll get to meet a number of them as I interview some of the open source experts working with the HPE DEV team.&lt;/p&gt;
&lt;p&gt;Bruno Cornec has been administrating Unix systems since 1987, and Linux systems since 1993. He began his career as an engineer in Software Engineering focused on Software Configuration Management. He was responsible for nearly everything related to the product, including design, development, presales, consulting, and training.&lt;/p&gt;
&lt;p&gt;Bruno discovered the Free Libre Open Source Software (FLOSS) movement in 1993 thanks to Linux and has worked with it ever since, engaging in various aspects such as development, presales, technical writing, translation, consulting, training, and small technical team management. He is also involved in a number of open source projects, like &lt;a href=&quot;http://www.mondorescue.org/&quot;&gt;MondoRescue&lt;/a&gt;, &lt;a href=&quot;http://www.mageia.org/&quot;&gt;Mageia&lt;/a&gt;, &lt;a href=&quot;http://www.project-builder.org/&quot;&gt;project-builder.org&lt;/a&gt;, &lt;a href=&quot;https://opendev.org/x/python-redfish/&quot;&gt;python-redfish&lt;/a&gt;, &lt;a href=&quot;http://www.fossology.org/&quot;&gt;FOSSology&lt;/a&gt;, and LinuxCOE. He is now a strong Linux and open source advocate and continues to evangelize its use in many conferences around the globe.&lt;/p&gt;
&lt;h2&gt;Bruno, can you tell me a little about these different open source projects that you are working on?&lt;/h2&gt;
&lt;p&gt;My work on open source projects all started with MondoRescue back in 2000 when I began working in what was then HP in Grenoble, France as a Linux Technologist. I was chartered with allowing our plants to preinstall Linux distributions on our servers, and I found this tool that was nearly doing everything I wanted for that purpose.&lt;/p&gt;
&lt;p&gt;Two or three patches later, it was ready for me to use to build the installation media for a remote team to use to preinstall Red Hat and SUSE distributions on our servers that even launched automatic tests at the end of the setup so the machines could be validated before being sent to customers.&lt;/p&gt;
&lt;p&gt;I worked with the upstream MondoRescue project lead to have my patches adopted in the project, and voilà, that’s how you become an upstream contributor! I then took over the maintenance of the project and created a package build environment for more than 150 different distributions. This became the Project-Builder.org project. I used my packaging knowledge to help the Mageia distribution, which I’m using daily on HPE desktops and servers, both at work and at home.&lt;/p&gt;
&lt;p&gt;Each time I looked at another technology, there were FLOSS projects related to it that could benefit from some contributions linked to an HPE usage. This has allowed me to work with the Fossology developer team while looking at FLOSS to tool open source governance, as well as create a Python-Redfish module to have a Python-based implementation of a Redfish (DMTF standard) library and CLI tool.&lt;/p&gt;
&lt;h2&gt;I understand that you’ve been collaborating with the HPE DEV team to automate the backend infrastructure that hosts our Workshops-on-Demand. Can you tell me a little more about your role in regards to this project?&lt;/h2&gt;
&lt;p&gt;My official role is to act as the liaison between the HPE DEV team and the World Wide Customer Innovation Center (CIC) team for which I work. We are in charge of the WW &lt;a href=&quot;https://hpedemoportal.ext.hpe.com/&quot;&gt;HPE demonstration portal&lt;/a&gt;, and are always looking for content to share with our presales and partners communities. Integrating the technical and educational HPE DEV developed content into our portal was an obvious thing to do.&lt;/p&gt;
&lt;p&gt;My role quickly expanded, however. Before integrating the content, I helped the HPE DEV team with the automation steps I thought were required to ensure the best customer service. We worked on the setup and automation of the JupyterHub portals, using some scripts and lots of Ansible playbooks. We also automated the setup of training appliances from end to end, to serve both as the documentation of these installations and as a nice way to recover in case of any issues. As the back-end infrastructure is not directly accessible from the Internet, I introduced the team to some old-fashioned, but extremely useful, tools such as procmail to create a very functional non-REST API. Going through the firewall without security concerns using mail was, I think, one of the more interesting ideas I introduced to the HPE DEV team in parallel to their more “classical” HTTPS-based REST API development.&lt;/p&gt;
&lt;p&gt;Once all that was ready and working well, we could finally make the two portals interact so that, with single sign-on authentication, our presales and partner colleagues could easily consume the HPE DEV Workshops-on-Demand from our CIC portal.&lt;/p&gt;
&lt;h2&gt;I understand you collaborate with the HPE DEV team on other projects as well. Can you expand on that?&lt;/h2&gt;
&lt;p&gt;I’m also helping with the HPE GreenLake integration. We’re designing it so the full JupyterHub and appliances stack can be instantiated at will, as a service, in HPE GreenLake with one touch of a button. At least, that’s the goal, and we now have many pieces in place to make that successful.&lt;/p&gt;
&lt;p&gt;As I’m also interested in sharing knowledge, and have always been active around training either in HPE or Open Source events in the past, I’m converting &lt;a href=&quot;https://github.com/bcornec/Labs/&quot;&gt;my former Labs&lt;/a&gt; into Jupyter Notebooks starting with the most popular, the Docker 101. I’m hoping this will attract more system admins and help them discover all the other Workshops-on-Demand the HPE DEV team offers. I’m a firm believer in continuing to educate oneself over the course of their career. The HPE DEV Workshops-on-Demand offer a great way to do this.&lt;/p&gt;
&lt;h2&gt;Speaking of training, you recently held a training class for the Campus Numérique in the Alps. I’d love to hear more about that.&lt;/h2&gt;
&lt;p&gt;The &lt;a href=&quot;https://le-campus-numerique.fr/&quot;&gt;Campus Numérique in the Alps’&lt;/a&gt; role is to bring people back to IT jobs. They train a wide variety of students, from people without a diploma who become webmasters all the way up to doctors of science who become future (big) data specialists (the Data-8 track). They manage 170 students across 4 sites, with the help of volunteer experts who share their knowledge, in addition to their own teachers - 100 people in total. They deliver 5 different diplomas that are recognized at the European level.&lt;/p&gt;
&lt;p&gt;Their teaching practice is based on experimentation first and introducing theory as they see fit. Thus, the possibility of using the HPE DEV Workshops-on-Demand in this context was particularly well-suited. I was chartered with explaining to Data-8 students the FLOSS ecosystem and the notion of contribution, licenses, etc.&lt;/p&gt;
&lt;p&gt;Before working on a concrete example of code contribution, I had the students refresh their Git knowledge of using Git as a base tool for project contribution using the &lt;a href=&quot;/hackshack/workshop/17&quot;&gt;Git 101 Workshop-on-Demand&lt;/a&gt;. I also used the &lt;a href=&quot;/hackshack/workshop/9&quot;&gt;API 101 Workshop-on-Demand&lt;/a&gt; to teach them what an API is and how they could interact with a tool providing one. As the &lt;a href=&quot;/hackshack/&quot;&gt;Hack Shack portal&lt;/a&gt; is easily accessible to everybody, I used it to support my training sessions.&lt;/p&gt;
&lt;h2&gt;Is there anything else you’d like to share with our readers?&lt;/h2&gt;
&lt;p&gt;What is great about FLOSS is that, even 28 years after having started looking at it, you continue to learn. The ability to have such easy access to everything from code, to docs, to training materials, empowers you incredibly. My next step with the HPE DEV team is to make the contents we develop, and later on, the platform itself, available under a FLOSS license in order to give back for what I got out of it for all these years. Many thanks to HPE for supporting &lt;a href=&quot;https://www.hpe.com/us/en/open-source.html&quot;&gt;our FLOSS contributions&lt;/a&gt; with our Open Source Program Office!&lt;/p&gt;
&lt;p&gt;And all that can only continue if you, dear readers, go on sharing your knowledge and code so we can all capitalize on it to always build even greater pieces of software. As said on the MondoRescue website: “All rights reversed”.&lt;/p&gt;
&lt;p&gt;To learn more about the open source projects that HPE is involved with, please visit our website. Interested in exploring what HPE offers for developers and data scientists? Check out our &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE DEV site&lt;/a&gt; for a ton of articles, workshops, tutorials, and other resources.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Handing out helpful hints]]></title><link>https://developer.hpe.com/2021-August-02/</link><guid isPermaLink="false">https://developer.hpe.com/2021-August-02/</guid><pubDate>Tue, 03 Aug 2021 05:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Getting started with iLO RESTful API- Redfish® API Conformance ]]></title><description><![CDATA[Updated July 25, 2023 A primer for coders Introduction With the introduction of iLO 4 2.00 on ProLiant Gen9 servers, we introduced our next…]]></description><link>https://developer.hpe.com/getting-started-with-ilo-restful-api-redfish-api-conformance/</link><guid isPermaLink="false">https://developer.hpe.com/getting-started-with-ilo-restful-api-redfish-api-conformance/</guid><pubDate>Tue, 20 Jul 2021 18:09:08 GMT</pubDate><content:encoded>&lt;h3&gt;Updated July 25, 2023&lt;/h3&gt;
&lt;h1&gt;&lt;strong&gt;A primer for coders&lt;/strong&gt;&lt;/h1&gt;
&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;With the introduction of iLO 4 2.00 on ProLiant Gen9 servers, we introduced our next-generation programmatic interface for server management. HPE iLO has a rich history of remote management capabilities including IPMI, SNMP, and the XML scripting language used with HPONCFG. The need for a new API was so obvious to us that we also began an effort with the DMTF to create a standard around it, which eventually emerged in August 2015 as the “Redfish API”. The fundamental features of the API were quickly agreed upon by the participants, but as always happens in standards bodies, what emerged had some details changed. The iLO 4 2.30 release in September 2015 was the beginning of convergence with the Redfish standard. The release included Redfish properties (including the newly introduced &lt;code&gt;@odata&lt;/code&gt; meta-properties) as well as the compatible pre-Redfish data model. As we move forward with our HPE Gen10 Servers, iLO 4 and iLO 5 follow the Redfish standard, and we encourage our customers to code to that standard and move away from any pre-Redfish implementation. If you are just now beginning to look at leveraging the Redfish API, you should make sure your client code is interacting with iLO using Redfish API standards:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Read the &lt;a href=&quot;http://www.dmtf.org/standards/redfish&quot;&gt;Redfish specification&lt;/a&gt;. Make sure your assumptions about service URIs, including the starting URI, do not exceed the specification’s guarantees.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Include the HTTP header &lt;code&gt;OData-Version: 4.0&lt;/code&gt; in all HTTP requests (a minimal sketch follows this list). This causes iLO 4 2.30 to hide pre-Redfish properties, decreasing your chance of inadvertently creating a dependency on something that will be removed in the future.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;For iLO 5, we updated the HPE branding on the OEM extension properties, so some updates might be needed from your iLO 4 scripts if you have already integrated with the API. For detailed information of the differences between iLO 4 and iLO 5 visit our &lt;a href=&quot;https://servermanagementportal.ext.hpe.com/docs/redfishservices/ilos/ilo5/ilo5_adaptation/&quot;&gt;documentation&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
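&lt;p&gt;To make the header recommendation above concrete, here is a minimal, hedged sketch in Python. It assumes the widely used &lt;code&gt;requests&lt;/code&gt; library and uses a placeholder iLO address and placeholder credentials; it illustrates the idea rather than being an official HPE sample.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-py&quot;&gt;# A minimal sketch, assuming Python 3 and the &quot;requests&quot; library.
# The iLO address and credentials are placeholders, not real values.
import requests

ILO_HOST = &quot;https://ilo.example.com&quot;       # hypothetical iLO address
HEADERS = {&quot;OData-Version&quot;: &quot;4.0&quot;}         # sent on every request, as recommended above

# Start from the only URI the specification guarantees: the service root.
response = requests.get(ILO_HOST + &quot;/redfish/v1/&quot;,
                        headers=HEADERS,
                        auth=(&quot;admin&quot;, &quot;password&quot;),   # placeholder credentials
                        verify=False)                  # lab shortcut; verify certificates in production
response.raise_for_status()
print(sorted(response.json().keys()))                  # top-level properties and links advertised by this service
&lt;/code&gt;&lt;/pre&gt;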
&lt;p&gt;Differences between HPE iLO 5 and iLO 6 are also listed in the &lt;a href=&quot;https://servermanagementportal.ext.hpe.com/docs/redfishservices/ilos/ilo6/ilo6_adaptation/&quot;&gt;documentation&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Redfish Versioning&lt;/h2&gt;
&lt;p&gt;Redfish and its release process are designed to allow the data model to develop quickly as needed by the industry while limiting protocol and specification changes to very infrequent updates. Data model and protocol are both versioned independently and move at their own pace. The goal of the design is to ensure the data model can be adapted easily to future implementations, as new technologies need to be represented, without having to revise the specification repeatedly. This is one reason you do not find many specified URIs in the specification.&lt;/p&gt;
&lt;h2&gt;Redfish URIs – Writing Durable Client Code&lt;/h2&gt;
&lt;p&gt;Keeping with the vision of infrequent specification and protocol updates, the API is designed with a self-describing data model with each resource containing its own type and version information and links to other resources. Redfish is a &quot;hypermedia API&quot; by design which means that the navigation of the data model is built into the data itself rather than defined by specification. Instead of a published list of resource URIs, a client discovers the data model dynamically by following links between resources. This is to avoid building in restrictive assumptions to the data model that will make it difficult to adapt to future hardware implementations.&lt;/p&gt;
&lt;p&gt;A URI should be treated by the client as opaque. A client should not attempt to deconstruct URIs into a template pattern. It is entirely legitimate that a resource at &lt;code&gt;/redfish/v1/systems/1&lt;/code&gt; could contain a link to &lt;code&gt;/arbitrary/stuff&lt;/code&gt;. Only specific top level URIs documented in the specification may be assumed, and even these may be absent based upon the implementation. The other URIs must be discovered dynamically by following &lt;code&gt;@odata.id&lt;/code&gt; links contained in the data model, which point to other resources.&lt;/p&gt;
&lt;p&gt;For example, iLO on an HPE ProLiant DL360 server has one compute node, and we happen to give the resource that describes that compute node the URI &lt;code&gt;/redfish/v1/systems/1/&lt;/code&gt;, but there are implementations in the industry, and even the demonstration mockups, that use alternate paths, including things like the system serial number as the “leaf” of the URI (e.g. &lt;code&gt;/redfish/v1/systems/{serial-number}/&lt;/code&gt;, where the last segment is a placeholder). If you assume too much about the URIs, you will discover that your client code is not portable across various implementations of Redfish.&lt;/p&gt;
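&lt;p&gt;As a hedged illustration of this discovery pattern, the Python sketch below starts at the service root and follows &lt;code&gt;@odata.id&lt;/code&gt; links to reach each system, without ever hard-coding a member URI. The host, credentials, and the &lt;code&gt;requests&lt;/code&gt; library are assumptions made for the example.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-py&quot;&gt;# A minimal sketch of dynamic discovery; host and credentials are placeholders.
import requests

def redfish_get(path):
    # GET a resource by the URI the service itself advertised and return its JSON.
    r = requests.get(&quot;https://ilo.example.com&quot; + path,
                     headers={&quot;OData-Version&quot;: &quot;4.0&quot;},
                     auth=(&quot;admin&quot;, &quot;password&quot;), verify=False)
    r.raise_for_status()
    return r.json()

# 1. Read the service root and take the advertised link to the Systems collection.
root = redfish_get(&quot;/redfish/v1/&quot;)
systems_uri = root[&quot;Systems&quot;][&quot;@odata.id&quot;]

# 2. Follow each member link, whatever its URI happens to look like.
for member in redfish_get(systems_uri)[&quot;Members&quot;]:
    system = redfish_get(member[&quot;@odata.id&quot;])
    print(system.get(&quot;Id&quot;), system.get(&quot;Model&quot;), system.get(&quot;PowerState&quot;))
&lt;/code&gt;&lt;/pre&gt;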
&lt;h2&gt;Traversing the Resource Model&lt;/h2&gt;
&lt;p&gt;Because objects link together, there are some best practices you should be aware of as you create new client code. If you create a generic “crawler” app that simply GETs every resource and follows its links, your crawl will never terminate because the various object interlinks mean that the data model is not strictly a tree, but a graph.  A generic crawler must keep track of visited URIs and not re-crawl them.  Additionally, as best practice you should treat the visited resource URIs set as case insensitive (iLO does). Most use cases are not generic crawls so this won’t be an issue. Typically you know what you want to find in the data model and you should make sure you correctly find and iterate the collections needed to get to the specific resource you are interested in. To explore a data model demo visit our &lt;a href=&quot;https://ilorestfulapiexplorer.ext.hpe.com/&quot;&gt;iLO RESTful API Demo&lt;/a&gt;.&lt;/p&gt;
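&lt;p&gt;For the rare cases where a generic crawl is what you want, the hedged sketch below shows one way to keep it from looping forever: it records visited URIs in a case-insensitive set and only queues links it has not seen. It assumes a &lt;code&gt;get&lt;/code&gt; callable (for example, the &lt;code&gt;redfish_get&lt;/code&gt; helper sketched earlier) that fetches a URI and returns the parsed JSON.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-py&quot;&gt;# A hedged sketch of a terminating crawl; &quot;get&quot; is any callable that
# fetches a URI and returns the parsed JSON resource.
def crawl(start_uri, get):
    visited = set()                        # lower-cased URIs, so the comparison is case-insensitive
    to_visit = [start_uri]
    while to_visit:
        uri = to_visit.pop()
        if uri.lower() in visited:
            continue                       # already crawled; this is what makes the walk terminate
        visited.add(uri.lower())
        resource = get(uri)
        # Walk the resource and queue every @odata.id link found anywhere inside it.
        stack = [resource]
        while stack:
            node = stack.pop()
            if isinstance(node, dict):
                link = node.get(&quot;@odata.id&quot;)
                if isinstance(link, str) and link.lower() not in visited:
                    to_visit.append(link)
                stack.extend(node.values())
            elif isinstance(node, list):
                stack.extend(node)
    return visited

# Example: crawl(&quot;/redfish/v1/&quot;, redfish_get) returns the set of URIs that were visited.
&lt;/code&gt;&lt;/pre&gt;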
&lt;h2&gt;Interoperability&lt;/h2&gt;
&lt;p&gt;The good news is that our internal tools at HPE required very little tweaking to work correctly on another industry implementation of Redfish. If you read the specification and understand the principles behind the design choices you have a very good chance of writing durable client code that is widely interoperable.&lt;/p&gt;
&lt;p&gt;The HPE Developer Community has additional resources that can help you understand and integrate more effectively with the iLO RESTful API. To get the latest libraries and sample code, visit the &lt;a href=&quot;https://developer.hpe.com/platform/ilo-restful-api/home/&quot;&gt;iLO RESTful API&lt;/a&gt; platform page.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Getting started with the Redfish® API - Part 2]]></title><description><![CDATA[Updated July 25, 2023 A primer for coders In my last blog post I began a discussion about best practices for writing Redfish API client code…]]></description><link>https://developer.hpe.com/getting-started-with-the-redfish-api-part-2/</link><guid isPermaLink="false">https://developer.hpe.com/getting-started-with-the-redfish-api-part-2/</guid><pubDate>Tue, 20 Jul 2021 17:55:56 GMT</pubDate><content:encoded>&lt;h3&gt;Updated July 25, 2023&lt;/h3&gt;
&lt;h1&gt;&lt;strong&gt;A primer for coders&lt;/strong&gt;&lt;/h1&gt;
&lt;p&gt;In my last blog &lt;a href=&quot;/blog/getting-started-with-ilo-restful-api-redfish-api-conformance&quot;&gt;post&lt;/a&gt; I began a discussion about best practices for writing Redfish API client code. Last time we talked about resource versioning and resource inter-linking. I explained that client code should discover the data model and avoid making incorrect assumptions. In this post I will continue discussing some issues you should be aware of in order to create durable clients that interoperate across different implementations of the Redfish API.&lt;/p&gt;
&lt;h2&gt;HTTP Status Codes and Redirect&lt;/h2&gt;
&lt;p&gt;HTTP requests to any REST API return an HTTP status code as part of the response. When writing Redfish client code, you should expect and handle any of the status codes defined in the Redfish specification. In addition, iLO uses &lt;code&gt;308 Redirect&lt;/code&gt; on some URIs to redirect the client from an older version of the API to the newer version. For instance, iLO 4 responded to GET &lt;code&gt;/rest/v1&lt;/code&gt;. If you try this with iLO 5, you will receive an HTTP &lt;code&gt;308 Redirect&lt;/code&gt; to &lt;code&gt;/redfish/v1/&lt;/code&gt;.&lt;/p&gt;
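&lt;p&gt;The hedged Python sketch below makes the redirect visible instead of letting the HTTP library follow it silently; the host, credentials, and the &lt;code&gt;requests&lt;/code&gt; library are assumptions for the example, and it simplifies by treating the &lt;code&gt;Location&lt;/code&gt; header as a relative path.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-py&quot;&gt;# A minimal sketch of handling the 308 redirect explicitly; host and credentials are placeholders.
import requests

ILO_HOST = &quot;https://ilo.example.com&quot;
HEADERS = {&quot;OData-Version&quot;: &quot;4.0&quot;}
AUTH = (&quot;admin&quot;, &quot;password&quot;)

resp = requests.get(ILO_HOST + &quot;/rest/v1&quot;, headers=HEADERS, auth=AUTH,
                    verify=False, allow_redirects=False)   # surface the redirect instead of following it

if resp.status_code == 308:
    new_path = resp.headers[&quot;Location&quot;]                     # e.g. /redfish/v1/ on iLO 5
    print(&quot;Redirected to&quot;, new_path)
    resp = requests.get(ILO_HOST + new_path, headers=HEADERS, auth=AUTH, verify=False)

resp.raise_for_status()
&lt;/code&gt;&lt;/pre&gt;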
&lt;h2&gt;Unimplemented Properties&lt;/h2&gt;
&lt;p&gt;Redfish is designed to be extremely flexible in implementation which is why so much emphasis is placed upon a self-describing data model (&lt;code&gt;@odata&lt;/code&gt; meta-data and links to related resources). Part of this flexibility is that properties within a resource may be omitted if not supported by an implementation. For instance, a server that does not implement an indicator LED may completely omit the &lt;code&gt;IndicatorLED&lt;/code&gt; property in the data model.  Likewise, entire resources and the links to them may vary between implementations. Good client code should handle missing properties and links in whatever way is appropriate for your application.&lt;/p&gt;
&lt;h2&gt;Null values for Properties&lt;/h2&gt;
&lt;p&gt;If you examine the &lt;a href=&quot;http://redfish.dmtf.org/schemas/v1/&quot;&gt;Redfish schema&lt;/a&gt; documents, you may notice that many properties are allowed to return a JSON null value. From the perspective of an implementer of the API, returning null is not ideal, but because of various relationships between open-architecture server components, the value of a property may not be available at all times. The iLO RESTful API uses null as a return value when, due to system state, the value is either not yet available or too stale to be of use. Again, good client code should test the value.&lt;/p&gt;
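&lt;p&gt;In practice, both cases boil down to a couple of defensive checks. The hedged sketch below assumes &lt;code&gt;system&lt;/code&gt; is the parsed JSON of a ComputerSystem resource retrieved as in the earlier examples.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-py&quot;&gt;# A hedged sketch of defensive property access; &quot;system&quot; is a parsed ComputerSystem resource.
def describe_indicator(system):
    # The property may be absent entirely (not implemented on this platform) ...
    if &quot;IndicatorLED&quot; not in system:
        return &quot;IndicatorLED is not implemented on this system&quot;
    led = system[&quot;IndicatorLED&quot;]
    # ... or present but null (value not currently available).
    if led is None:
        return &quot;IndicatorLED state is not available right now&quot;
    return &quot;IndicatorLED is &quot; + led

# Example: describe_indicator({&quot;IndicatorLED&quot;: None}) returns the &quot;not available&quot; message.
&lt;/code&gt;&lt;/pre&gt;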
&lt;h2&gt;What Version of Redfish does iLO support?&lt;/h2&gt;
&lt;p&gt;We are often asked the question:  “What version of Redfish does iLO support?” Due to the flexibility of the API, the question is not straightforward to answer. Recall from &lt;a href=&quot;/blog/getting-started-with-ilo-restful-api-redfish-api-conformance&quot;&gt;part 1&lt;/a&gt; that the “Protocol” is versioned separately from the “Data Model”. iLO 5 implements the latest protocol version, 1.3, at the time of writing. However, the individual resources in the data model each implement a specific schema and version. Some resources may report the latest version while some may report older versions. Since the versions are cumulative, if we need to report a newly defined property we will update the resource to the schema version that defines the new property. However, if we do not need to update a resource, we may leave its version information unchanged. Since schema updates are likely to occur on each iLO firmware update, the best answer is to inspect the API dynamically.&lt;/p&gt;
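&lt;p&gt;Inspecting the versions dynamically only takes a few requests. The hedged sketch below reuses a &lt;code&gt;redfish_get&lt;/code&gt;-style helper (placeholder host and credentials, as sketched in part 1) to read the protocol version from the service root and the schema version a particular resource reports about itself.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-py&quot;&gt;# A minimal sketch; redfish_get() is the placeholder helper sketched in part 1.
root = redfish_get(&quot;/redfish/v1/&quot;)
print(&quot;Redfish protocol/service version:&quot;, root.get(&quot;RedfishVersion&quot;))

# Each resource carries its own schema version in @odata.type,
# for example &quot;#ComputerSystem.v1_4_0.ComputerSystem&quot;.
systems_uri = root[&quot;Systems&quot;][&quot;@odata.id&quot;]
first_member_uri = redfish_get(systems_uri)[&quot;Members&quot;][0][&quot;@odata.id&quot;]
system = redfish_get(first_member_uri)
print(&quot;ComputerSystem schema version:&quot;, system.get(&quot;@odata.type&quot;))
&lt;/code&gt;&lt;/pre&gt;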
&lt;h2&gt;What is next?&lt;/h2&gt;
&lt;p&gt;We hope we have given you enough information to get started with the iLO RESTful API and see the benefits of server automation from a single API. As we continue to improve and update our Redfish interface, please refer to our &lt;a href=&quot;https://servermanagementportal.ext.hpe.com/&quot;&gt;documentation&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Using Docker Wrong: My Journey to a Better Container]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may…]]></description><link>https://developer.hpe.com/using-docker-wrong-my-journey-to-a-better-container/</link><guid isPermaLink="false">https://developer.hpe.com/using-docker-wrong-my-journey-to-a-better-container/</guid><pubDate>Tue, 13 Jul 2021 06:03:11 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;&quot;authorDisplayName&quot;: &quot;John Omernik&quot;,
&quot;publish&quot;: &quot;2018-06-07T13:00:00.000&quot;,
&quot;category&quot;: &quot;use-case&quot;,
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;Since starting at MapR, I’ve been focusing on a number of things, including learning more about MapR, meeting coworkers and customers, and a topic near and dear to my heart: immersing myself in containers and their integration with the MapR Data Platform. In past jobs, I’ve used MapR with containers but have not dug into the Persistent Application Client Container (PACC) nearly as much as I should have. This cautionary tale is a reminder of how amazing containers are AND how to keep the benefits of containers at the top of your mind when working to create them, so you don’t design your containers in such a way that you give up those benefits and end up writing a mea culpa blog post like this one.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/container-wide.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;I was tasked to come up with a way to have a syslog server receive log messages and produce those messages to MapR Event Store – a powerful setup for Information Security and IT Operations professions.  I had a few requirements:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Must run in Docker.  This is now a self-imposed philosophy for all things.  I want everything to be portable, sharable, and usable by everyone on the MapR Data Platform.&lt;/li&gt;
&lt;li&gt;Simple in and out.  It has to listen on a syslog port (TCP and UDP) and produce to MapR Event Store.  I wanted to show how it works in the Dockerfile, but I didn’t want to chain multiple containers together.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;After some research I decided to utilize a great MapR partner, &lt;a href=&quot;https://streamsets.com/&quot;&gt;StreamSets&lt;/a&gt;, and their &lt;a href=&quot;https://streamsets.com/products/sdc&quot;&gt;StreamSets Data Collector&lt;/a&gt; (SDC) as the core.  The SDC’s interface, capabilities, and integrations with MapR would serve me well here.  I had worked with StreamSets before in Docker with MapR; however, I had NOT used the MapR PACC, so it would be a learning experience.  I pulled out my old Dockerfiles, modified some components, ensured I was handling my newly secured cluster with tickets, and it worked great!  I even produced a script to create the MapR Event Store and generate test messages, all contained in one handy &lt;a href=&quot;https://github.com/johnomernik/maprssss_wrong&quot;&gt;GitHub repository&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;As you can see by the title of this post, I did it wrong. Now, most folks would just update their code once they realized their mistake; however, I created a second repository because I wanted to share what I did wrong as an example to others.  What did I do wrong?  I did not create a deterministic container (&lt;a href=&quot;https://en.wikipedia.org/wiki/Deterministic_system&quot;&gt;https://en.wikipedia.org/wiki/Deterministic_system&lt;/a&gt;).  &lt;/p&gt;
&lt;p&gt;One of the primary benefits of Docker should be that, if I create a Docker container, I will be able to run it on all my systems, and if I hand that container to someone else, it should run the same way, thereby being a deterministic system. Now, let me say there are always caveats, but this should be a primary goal for Docker users.&lt;/p&gt;
&lt;p&gt;Where did I go wrong? Well, my main issue was using my old code from before the MapR PACC. In this code, I took a number of directories for StreamSets, including logs, conf (config), and others, and placed them on the local system. When I ran this on a MapR node in my cluster, it worked great, because all the users that I would use inside the container were also on my MapR nodes running Docker. Thus, I could set up permissions on those local directories, and when the container ran, the user inside the container had proper filesystem permissions to the directories required by StreamSets.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/docker-host.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;However, when I shared my repository with a colleague at MapR, they reported it didn’t work on their system. I was baffled: it should work; it’s Docker! That’s the whole point of using containers. Well, my colleague was not running the container on MapR nodes; instead, it was running on a client machine. This client machine did NOT have the same users as the MapR cluster. Thus, there were a number of issues that my scripts ran into with creating the configuration directories, setting permissions, and even running StreamSets. I made assumptions in my original Docker setup, and by doing so, made a container that ran differently depending on the execution system.&lt;/p&gt;
&lt;p&gt;This was an eye-opener for me.  From one perspective, I had tried to get StreamSets running in the PACC by using old knowledge and approaches in an effort to expedite the process. By doing so, I hampered the usability, portability, and shareability of my container. Because of this, I’ve created another &lt;a href=&quot;https://github.com/johnomernik/maprssss&quot;&gt;GitHub repository&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Here is my first attempt to “get it right.”  Instead of volume mounting the configuration directories for StreamSets to the local filesystem, I use the MapR POSIX Client built into the PACC to put the configuration directly on MapR XD.  This ensures that the user in the container always has access and that the config files are available to StreamSets anywhere they run.  &lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/docker-host-2.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;This allowed me to simplify my startup scripts and ensure a much cleaner runtime environment, independent of the system I was running the container on. This is a much better approach and really shows the power of using MapR with Docker containers. I am still learning more about the PACC, but as I learn, I wanted to take the time to share my follies, so others can learn as well. If you take the time to review my repository &lt;a href=&quot;https://github.com/johnomernik/maprssss&quot;&gt;here&lt;/a&gt; and find ways to improve it even more, I am all ears! Leave a comment, or post a GitHub issue.&lt;/p&gt;
&lt;h2&gt;Related Links:&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.datafabric.hpe.com/62/AdvancedInstallation/RunningtheMapRPACC.html&quot;&gt;Running the MapR PACC Using Docker&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Comparing "To Kill a Mockingbird" to its Sequel with Apache Spark]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may…]]></description><link>https://developer.hpe.com/comparing-to-kill-a-mockingbird-to-its-sequel-with-apache-spark/</link><guid isPermaLink="false">https://developer.hpe.com/comparing-to-kill-a-mockingbird-to-its-sequel-with-apache-spark/</guid><pubDate>Tue, 13 Jul 2021 05:38:30 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;&quot;authorDisplayName&quot;: &quot;Joseph Blue&quot;,
&quot;publish&quot;: &quot;2015-08-05T07:00:00.000Z&quot;,
&quot;category&quot;: &quot;apache-spark&quot;,
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;h2&gt;Courting Controversy&lt;/h2&gt;
&lt;p&gt;Did Harper Lee write &lt;em&gt;To Kill a Mockingbird&lt;/em&gt;? For many years, conspiracy buffs supported the urban legend that Truman Capote, Lee’s close friend with considerably more literary creds, might have ghost-authored the novel. The author’s reticence on that subject (as well as every other subject) fueled the rumors and it became another urban legend.&lt;/p&gt;
&lt;p&gt;However, the recent ‘discovery’ and subsequent publication of her earlier novel &lt;em&gt;Go Set a Watchman&lt;/em&gt; has generated renewed scrutiny of the chain of events. Is the newly published book a discarded rough draft that was to become the universally beloved classic, or was it a truly forgotten separate work that deserves to be cast in the literary limelight for analysis? A concise summary of the publishing controversy can be found in this NYT &lt;a target=&apos;_blank&apos;  href=&apos;http://www.nytimes.com/2015/07/25/opinion/joe-nocera-the-watchman-fraud.html&apos;&gt;op-ed column&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;But the new book offers curious readers an opportunity to analyze the two works together with machine learning tools that are ideal for classifying text among a corpus of documents. Apache Spark has a mature set of libraries for text-based analysis that can be leveraged with very few lines of code.&lt;/p&gt;
&lt;p&gt;The publisher of &lt;em&gt;Go Set a Watchman&lt;/em&gt; is unlikely to make available their best seller even for lofty academic purposes. Luckily, the Wall Street Journal printed the &lt;a target=&apos;_blank&apos;  href=&apos;http://www.wsj.com/articles/harper-lees-go-set-a-watchman-read-the-first-chapter-1436500861&apos;&gt;first chapter&lt;/a&gt; on July 10th for anyone to analyze. In this blog, we extract features from the first chapter of each book, and then build a model to tell the difference between them. Comparing passages from each may provide clues as to the authorship.&lt;/p&gt;
&lt;p&gt;All of the data and code to train the models and make your own conclusions using Apache Spark is located in this &lt;a target=&apos;_blank&apos;  href=&apos;https://github.com/joebluems/Mockingbird&apos;&gt;GitHub repository&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Dissecting a Classic by the Numbers&lt;/h2&gt;
&lt;p&gt;The theory behind document classification is that text from the same source will contain similar combinations of words with comparable frequency. Any conclusions drawn from this type of analysis are only as strong as that assumption.&lt;/p&gt;
&lt;p&gt;To build a model to classify documents, text must be translated into numbers. This involves standardizing the text, converting it to numbers (via hashing), and then adjusting each word’s importance based on its relative frequency.&lt;/p&gt;
&lt;p&gt;Text standardization was done with Apache Lucene. An example below shows how to perform this with the Spark shell:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;&gt; ./bin/spark-shell --packages &quot;org.apache.lucene:lucene-analyzers-common:5.1.0&quot;
val line=&quot;Flick. A tiny, almost invisible movement, and the house was still.&quot;
val tokens=Stemmer.tokenize(line)
tokens: Seq[String] = ArrayBuffer(flick, tini, almost, invis, movement, hous, still)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The Stemmer object that invokes the Lucene analyzer comes from &lt;a target=&apos;_blank&apos;  href=&apos;https://chimpler.wordpress.com/2014/06/11/classifiying-documents-using-naive-bayes-on-apache-spark-mllib/&apos;&gt;this Chimpler example&lt;/a&gt;. Notice how the line describing the tranquility of the Radley house is affected. The punctuation and capitalization are removed, and words like “house” are stemmed, so tokens with the same root (“housing”, “housed”, etc.) will be considered equal. Next, we translate those tokens into numbers and count how often they appear in each line. Spark’s HashingTF library performs both operations simultaneously.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.spark.mllib.feature.HashingTF
val tf = new HashingTF(10)
tf: org.apache.spark.mllib.feature.HashingTF = org.apache.spark.mllib.feature.HashingTF@d32d034

val hashed = tf.transform(tokens)
hashed: org.apache.spark.mllib.linalg.Vector = (10,[0,1,2,3,6,7],[1.0,1.0,1.0,2.0,1.0,1.0])
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;A “hash” is a one-way translation from text to an integer (i.e. once it’s translated, there’s no way to go back). Initializing the hash with HashingTF(10) notifies Spark we want every string mapped to the integers 0-9. The transform method performs the hash on each word, and then provides the frequency count for each. This is an impractical illustration and would result in a huge number of “collisions” (different strings assigned the same number).&lt;/p&gt;
&lt;p&gt;The default size of the resulting Vector of token frequencies is 1,000,000. The size and number of collisions are inversely related. But a large hash also requires more memory. If your corpus contains millions of documents, this is an important factor to consider. For this analysis, a hash size of 10,000 was used.&lt;/p&gt;
&lt;p&gt;The last step in the text-preparation process is to account for the rareness of words: we want to reward uncommon words such as “chifferobe” with more importance than frequent words such as “house” or “brother”. This is referred to as TF-IDF transformation and is available as an (almost) one-liner in Spark.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.spark.mllib.feature.IDF
val idfModel = new IDF(minDocFreq = 3).fit(trainDocs)
val idfs = idfModel.transform(hashed)
idfs: org.apache.spark.mllib.linalg.Vector = (10,[0,1,2,3,6,7],[0.413734499590671,0.4244680552337798,0.4761400657781007, 1.4004620708967006,0.37876590175292424,0.48374466516332])
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The “fit” method of the IDF library examines the entire corpus to tabulate the document count for each word. On the second pass, Spark creates the TF-IDF for each non-zero element (token&lt;sub&gt;i&lt;/sub&gt;) as the following:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/mockingbird-blog-figa.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;A corpus of many documents is needed to create an IDF dictionary, so in the example above, excerpts from both novels were fed into the fit method. The transform method was then used to convert individual passages to TF-IDF vectors.&lt;/p&gt;
&lt;p&gt;Having been transformed into TF-IDF vectors, passages from both books are now ready to be classified.&lt;/p&gt;
&lt;h2&gt;Building the Classifier&lt;/h2&gt;
&lt;p&gt;The secret to getting value from business problems is not the classification; it is primarily about ranking objects based on the confidence of our decision and then leveraging the value of a good decision minus the cost of a misidentification. Spark has several machine learning algorithms that are appropriate for this task.&lt;/p&gt;
&lt;p&gt;During examination of the text it was noted that a few modifications should be made to the novels to make the comparison more “fair”. &lt;em&gt;To Kill a Mockingbird&lt;/em&gt; was written in the first person and includes many pronouns that would be giveaways (e.g. “I”,”our”,”my”,”we”, etc.). These were removed from both books. Due to the inevitability of variable sentence length in novels, passages were created as a series of ten consecutive words.&lt;/p&gt;
&lt;p&gt;The parsed passages were combined, split into training and testing sets, and then transformed with the idfModel built on the training data using the code below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val data = mockData.union(watchData)
val splits = data.randomSplit(Array(0.7, 0.3))
val trainDocs = splits(0).map{ x=&gt;x.features}
val idfModel = new IDF(minDocFreq = 3).fit(trainDocs)
val train = splits(0).map{ point=&gt;
  LabeledPoint(point.label,idfModel.transform(point.features))
}
val test = splits(1).map{ point=&gt; LabeledPoint(point.label,idfModel.transform(point.features))
}
train.cache()
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Using randomly split data files for training and testing a model is standard procedure for ensuring that performance is not a result of over-training (i.e. memorizing the specific examples instead of abstracting the true patterns). It is critical that the idfModel is built only on the training data. Failure to do so may result in over-stating your performance on the test data.&lt;/p&gt;
&lt;p&gt;The data is now ready for Spark’s machine learning algorithms. Naïve Bayes is a reasonable first choice for document classification. The code below shows the training and evaluation of a Naïve Bayes model on the passages.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.spark.mllib.classification.{NaiveBayes, NaiveBayesModel}
val nbmodel = NaiveBayes.train(train, lambda = 1.0)
val bayesTrain = train.map(p =&gt; (nbmodel.predict(p.features), p.label))
val bayesTest = test.map(p =&gt; (nbmodel.predict(p.features), p.label))
println(&quot;Mean Naive Bayes performance&quot;)
(bayesTrain.filter(x =&gt; x._1 == x._2).count() / bayesTrain.count().toDouble,
bayesTest.filter(x =&gt; x._1 == x._2).count() / bayesTest.count().toDouble)

Mean Naive Bayes performance
res25: (Double, Double) = (0.9053398058252428,0.7068965517241379)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Applying the Naïve Bayes algorithm in Spark gives a classification from which accuracy and a confusion matrix can be derived. The method makes the correct classification on 90.5% of the train records and 70.7% of the test records (performance on the training set is almost always better than on the test set). The confusion matrix on the test data appears below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/mockingbird-blog-fig1.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The diagonal elements of the confusion matrix represent correct classifications and the off-diagonal counts are classification errors. It is informative to look at a confusion matrix (especially when there are more than two classes) – the better the classification rate on the test set, the more separable the populations. However, when data scientists are looking to apply classification to a business problem, they prefer to examine how well the algorithm rank-orders the results.&lt;/p&gt;
&lt;p&gt;Currently, Spark does not support a user-supplied threshold for Naïve Bayes. Only the best classification rate in the training data is reported. But in real business problems, there is an overhead associated with a misclassification so that the &lt;em&gt;“best” rate may not be the optimal rate&lt;/em&gt;. It is of keen interest to the business to find the point at which maximum value of correct classifications is realized when accounting for incorrect answers. To do this via Spark, we need to use methods that allow for analysis of the threshold.&lt;/p&gt;
&lt;p&gt;Given the number of features (a TF-IDF vector of size 10,000) and the nature of the data, Spark’s tree-based ensemble methods are appropriate. Both Random Forest and Gradient Boosted Trees are available.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.spark.mllib.tree.RandomForest
import org.apache.spark.mllib.tree.model.RandomForestModel
import org.apache.spark.mllib.tree.GradientBoostedTrees
import org.apache.spark.mllib.tree.configuration.BoostingStrategy
import org.apache.spark.mllib.tree.model.GradientBoostedTreesModel

// RANDOM FOREST REGRESSION
val categoricalFeaturesInfo = Map[Int, Int]()
val numClasses = 2
val featureSubsetStrategy = &quot;auto&quot;
val impurity = &quot;variance&quot;
val maxDepth = 10
val maxBins = 32
val numTrees = 50
val modelRF = RandomForest.trainRegressor(train, categoricalFeaturesInfo, numTrees, featureSubsetStrategy, impurity, maxDepth, maxBins)

// GRADIENT BOOSTED TREES REGRESSION
val boostingStrategy = BoostingStrategy.defaultParams(&quot;Regression&quot;)
boostingStrategy.numIterations = 50
boostingStrategy.treeStrategy.maxDepth = 5
boostingStrategy.treeStrategy.categoricalFeaturesInfo = Map[Int, Int]()
val modelGB = GradientBoostedTrees.train(train, boostingStrategy)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The regression model options (estimating vs. classifying) will produce continuous outputs that can be used to find the right threshold. Both of these methods can be configured with tree depth and number of trees. Read the Spark &lt;a target=&apos;_blank&apos;  href=&apos;http://spark.apache.org/docs/1.3.0/mllib-ensembles.html&apos;&gt;documentation&lt;/a&gt; for details, but general rules of thumb are the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Random Forest – trees are built in parallel and overtraining decreases with more trees, so setting this number to be large is a great way to leverage a Hadoop environment. The max depth should be larger than Gradient Boosted Trees (GBT).&lt;/li&gt;
&lt;li&gt;Gradient Boosted Trees – the number of trees is directly related to overtraining and the trees are not built in parallel. This method can produce some extremely high classification rates on the training data, but set the max depth of trees to be smaller than random forest.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The table below shows the commands to calculate the Receiver Operating Characteristic (ROC) for the Random Forest model – the ROC will tell the real story on the model performance.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// BinaryClassificationMetrics is used below and needs to be imported.
import org.apache.spark.mllib.evaluation.BinaryClassificationMetrics

//// Random forest model metrics on training data
val trainScores = train.map { point =&gt;
  val prediction = modelRF.predict(point.features)
  (prediction, point.label)
}
val metricsTrain = new BinaryClassificationMetrics(trainScores,100)
val trainroc= metricsTrain.roc()
trainroc.saveAsTextFile(&quot;/ROC/rftrain&quot;)
metricsTrain.areaUnderROC()
res11: Double = 0.9811325444963708

//// Random forest model metrics on test data
val testScores = test.map { point =&gt;
  val prediction = modelRF.predict(point.features)
  (prediction, point.label)
}
val metricsTest = new BinaryClassificationMetrics(testScores,100)
val testroc= metricsTest.roc()
testroc.saveAsTextFile(&quot;/ROC/rftest&quot;)
metricsTest.areaUnderROC()
res12: Double = 0.8844304733727815
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To calculate an ROC, the following steps are performed:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Results are binned according to score (highest to lowest).&lt;/li&gt;
&lt;li&gt;In each bin, the number of each class is tabulated (Mockingbird vs Watchman passages).&lt;/li&gt;
&lt;li&gt;Starting with the highest bin, generate a data point containing the cumulative percent of the total Mockingbird and Watchman passages that have occurred.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Graphing those points for the Random Forest and Gradient Boosted Trees yields the following curves:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/mockingbird-blog2.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The diagonal “baseline” is the performance one could expect from random guessing (i.e. selecting 50% of the passages, you would expect to find half of each book’s examples). Any performance better than that is considered the “lift” delivered by the model. It should be intuitive from examining the graph that steeper, higher curves provide greater lift. The table below quantifies the area under the ROC, which is a standard metric used by data scientists to evaluate the performance of many models simultaneously.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/mockingbird-blog-fig3.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The Gradient Boosted Tree model achieved an essentially perfect 1.0 area under the curve. This implies that the model scored all Mockingbird passages higher than all Watchman passages. However, the Random Forest model has higher performance on the test set (0.884 vs 0.867) so it is assumed to generalize better.&lt;/p&gt;
&lt;p&gt;In the setting of a business problem, the underlying data of the ROC is used to estimate how many items of interest can be identified when the real cost of an error is considered. Focusing on the highest scoring items from the model and working down the list is where real value comes from.&lt;/p&gt;
&lt;p&gt;The results cannot be interpreted as conclusive, but there is significant lift displayed on these curves, and that doesn’t look good for Harper Lee.&lt;/p&gt;
&lt;h2&gt;The Verdict&lt;/h2&gt;
&lt;p&gt;There are plenty of great tools to build classification models. Apache Spark provides an excellent framework for building solutions to business problems that can extract value from massive, distributed files.&lt;/p&gt;
&lt;p&gt;Machine learning algorithms cannot answer the great mysteries of life. But they do provide evidence for humans to consider when interpreting results, assuming we ask the right question in the first place.&lt;/p&gt;
&lt;p&gt;Readers are encouraged to check out the books themselves and reach their own conclusions. If the controversy surrounding the publication of Harper Lee’s books causes more people to read them, that’s probably a good thing.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[We’ve been Discovered!]]></title><link>https://developer.hpe.com/2021-July-6/</link><guid isPermaLink="false">https://developer.hpe.com/2021-July-6/</guid><pubDate>Wed, 07 Jul 2021 06:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Getting Started with DataTaps in Kubernetes Pods]]></title><description><![CDATA[Editor’s Note – HPE Ezmeral Container Platform is now HPE Ezmeral Runtime Enterprise. For more information on why the name was changed…]]></description><link>https://developer.hpe.com/accessing-dtap-in-pods/</link><guid isPermaLink="false">https://developer.hpe.com/accessing-dtap-in-pods/</guid><pubDate>Tue, 06 Jul 2021 06:44:24 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note – HPE Ezmeral Container Platform is now HPE Ezmeral Runtime Enterprise&lt;/strong&gt;. For more information on why the name was changed, please &lt;a href=&quot;https://community.hpe.com/t5/HPE-Ezmeral-Uncut/HPE-Ezmeral-Container-Platform-is-now-HPE-Ezmeral-Runtime/ba-p/7151720#.YW7nOxrMKM8&quot;&gt;click here&lt;/a&gt;.&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;What is DataTap?&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;https://user-images.githubusercontent.com/72959956/126251305-e100faf1-aac5-410b-8c67-cb7cdd01a50b.png&quot; alt=&quot;image&quot;&gt;&lt;/p&gt;
&lt;p&gt;Handling file systems with different protocols is always a pain for a data analyst. DataTap is a file system connector that aims to alleviate this pain. DataTap provides an HDFS protocol abstraction that allows big data applications like Spark to run unmodified with fast access to data sources other than HDFS, e.g. HPE Ezmeral Data Fabric XD (formerly named MapR-FS/XD) and GCS (Google Cloud Storage). Using DataTap, you can keep your code unchanged while the underlying data source is swapped, for example from HDFS to MapR-FS. This flexibility allows developers like you to focus more on coding rather than the underlying infrastructure. More information on DataTap can be found &lt;a href=&quot;https://docs.containerplatform.hpe.com/53/reference/universal-concepts/About_DataTaps.html&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;In this blog, I will introduce two ways to access DataTaps in Kubernetes clusters managed by HPE Ezmeral Container Platform deployed with a pre-integrated HPE Ezmeral Data Fabric. The first method covers how to access DataTaps using HDFS commands, and the second focuses on reading data directly from Apache Spark (using pyspark). Here we go!&lt;/p&gt;
&lt;h2&gt;Enable DataTap when creating KubeDirector App&lt;/h2&gt;
&lt;p&gt;First and foremost, you have to enable DataTaps while creating a KubeDirector app. This can be done by ticking the &quot;Enable DataTap&quot; box.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://user-images.githubusercontent.com/72959956/119443704-9cc92180-bd5c-11eb-8fce-b6b53823336c.png&quot; alt=&quot;image&quot;&gt;&lt;/p&gt;
&lt;p&gt;This results in a number of files being mounted to &lt;code&gt;/opt/bdfs/&lt;/code&gt; in your pod. If you can see the files in your pod (as shown in the image below), it means that your pod is DataTap-enabled and you are now ready to access the files in DataTap.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://user-images.githubusercontent.com/72959956/120776952-58593500-c557-11eb-9dcd-4146d581a761.png&quot; alt=&quot;image&quot;&gt;&lt;/p&gt;
&lt;p&gt;The generic approach can be summarized into these two steps:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Add &lt;code&gt;/opt/bdfs/bluedata-dtap.jar&lt;/code&gt; to the classpath.&lt;/li&gt;
&lt;li&gt;Configure Hadoop with the following values.&lt;/li&gt;
&lt;/ol&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;name&lt;/th&gt;
&lt;th&gt;value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;fs.dtap.impl&lt;/td&gt;
&lt;td&gt;com.bluedata.hadoop.bdfs.Bdfs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;fs.AbstractFileSystem.dtap.impl&lt;/td&gt;
&lt;td&gt;com.bluedata.hadoop.bdfs.BdAbstractFS&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;fs.dtap.impl.disable.cache&lt;/td&gt;
&lt;td&gt;false&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Note: the fs.dtap.impl.disable.cache property is optional.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Reference:
&lt;a href=&quot;https://docs.containerplatform.hpe.com/53/reference/kubernetes/tenant-project-administration/datataps/Accessing_DataTaps_in_Kubernetes_Pods.html&quot;&gt;Accessing DataTaps in Kubernetes Pods&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Uniform Resource Identifier&lt;/h2&gt;
&lt;p&gt;In HPE Ezmeral Container Platform, you can see different types of file systems used by the shared storage resources. You can manage different data sources through a GUI while representing files with the same URI. The URI will be in the format of&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;dtap://datatap_name/some_subdirectory/another_subdirectory/some_file
&lt;/code&gt;&lt;/pre&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Screenshot&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;img src=&quot;https://user-images.githubusercontent.com/72959956/121467168-35150680-c9eb-11eb-901c-77e83097cdf9.png&quot; alt=&quot;image&quot;&gt;&lt;/td&gt;
&lt;td&gt;You can manage different data sources whether they are in the MapR filesystem or HDFS.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;img src=&quot;https://user-images.githubusercontent.com/72959956/126249359-0a192c2e-6dbf-4c22-b923-94b230cc1215.png&quot; alt=&quot;image&quot;&gt;&lt;/td&gt;
&lt;td&gt;You can add a new DataTap from this screen.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;img src=&quot;https://user-images.githubusercontent.com/72959956/121467262-5f66c400-c9eb-11eb-958d-911f18281a27.png&quot; alt=&quot;image&quot;&gt;&lt;/td&gt;
&lt;td&gt;You can upload, delete, or rename files using the GUI.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h1&gt;Access DataTaps using HDFS commands&lt;/h1&gt;
&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;The Hadoop distributed file system (HDFS) is the key component of the Hadoop ecosystem. HDFS commands are the shell commands responsible for manipulating files in HDFS.&lt;/p&gt;
&lt;p&gt;To use the HDFS commands, you first need to set up Hadoop using the following steps:&lt;/p&gt;
&lt;h2&gt;Prepare Hadoop&lt;/h2&gt;
&lt;p&gt;Some of the KubeDirector apps provided by HPE come with a well-configured Hadoop pre-installed for you. In that case, the following installation steps can be skipped.&lt;/p&gt;
&lt;h3&gt;Install OpenJDK and the dependency&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;apt update &amp;#x26;&amp;#x26; apt upgrade -y
apt install wget -y

# install openjdk
DEBIAN_FRONTEND=noninteractive apt-get install openjdk-11-jdk-headless -y
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Download Hadoop and untar Hadoop&lt;/h3&gt;
&lt;p&gt;You can always find the latest version of Hadoop on &lt;a href=&quot;https://hadoop.apache.org/releases.html&quot;&gt;Apache Hadoop Releases&lt;/a&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;wget https://apache.website-solution.net/hadoop/common/hadoop-3.3.0/hadoop-3.3.0.tar.gz   # Download Hadoop binary
tar zxf hadoop-*.tar.gz                                                                   # Untar Hadoop binary
mv hadoop-3.3.0 $HOME/hadoop                                                              # Rename and move Hadoop folder to $HOME
cd $HOME/hadoop                                                                           # Move directory to hadoop
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Configure the required environment&lt;/h3&gt;
&lt;p&gt;In &lt;code&gt;$HADOOP_HOME/etc/hadoop/hadoop-env.sh&lt;/code&gt; file, assign the following environment variables (&lt;code&gt;$JAVA_HOME&lt;/code&gt;, &lt;code&gt;$HADOOP_HOME&lt;/code&gt;, &lt;code&gt;$HADOOP_CLASSPATH&lt;/code&gt;):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# These two variables are needed for the HDFS commands. Located at lines 54 and 58.
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64/
export HADOOP_HOME=$HOME/hadoop

# This variable is DataTap specific. Located at line 126.
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HADOOP_HOME/lib/:/opt/bdfs/bluedata-dtap.jar
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In &lt;code&gt;$HADOOP_HOME/etc/hadoop/core-site.xml&lt;/code&gt; file, configure Hadoop with the following values:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-xml&quot;&gt;&amp;#x3C;configuration&gt;
  &amp;#x3C;property&gt;
    &amp;#x3C;name&gt;fs.dtap.impl&amp;#x3C;/name&gt;
    &amp;#x3C;value&gt;com.bluedata.hadoop.bdfs.Bdfs&amp;#x3C;/value&gt;
  &amp;#x3C;/property&gt;

  &amp;#x3C;property&gt;
    &amp;#x3C;name&gt;fs.AbstractFileSystem.dtap.impl&amp;#x3C;/name&gt;
    &amp;#x3C;value&gt;com.bluedata.hadoop.bdfs.BdAbstractFS&amp;#x3C;/value&gt;
  &amp;#x3C;/property&gt;

  &amp;#x3C;property&gt;
    &amp;#x3C;name&gt;fs.dtap.impl.disable.cache&amp;#x3C;/name&gt;
    &amp;#x3C;value&gt;false&amp;#x3C;/value&gt;
  &amp;#x3C;/property&gt;
&amp;#x3C;/configuration&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Alternative&lt;/h3&gt;
&lt;p&gt;I have prepared an example configuration file on &lt;a href=&quot;https://github.com/helloezmeral/hpe-binary/tree/main/hadoop-dtap-config&quot;&gt;Github&lt;/a&gt;. If your Hadoop does not have a special configuration, you can simply download and replace your existing configuration file.&lt;/p&gt;
&lt;h2&gt;Test your HDFS command&lt;/h2&gt;
&lt;p&gt;Here are some common commands used to interact with DataTap.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# bash: Current working directory -&gt; $HADOOP_HOME

# Check the version of the Hadoop.
bin/hadoop version

# List the files from the default TenantStorage Data Source.
bin/hdfs dfs -ls dtap://TenantStorage/

# Make new directory user in dtap://TenantStorage/.
bin/hdfs dfs -mkdir dtap://TenantStorage/user

# Copy the local text file helloworld.txt to the &quot;cenz&quot; folder.
bin/hdfs dfs -put helloworld.txt dtap://TenantStorage/cenz
bin/hdfs dfs -put -f helloworld.txt dtap://TenantStorage/cenz # force replacement

# Concatenate a file in dtap.
bin/hdfs dfs -cat dtap://TenantStorage/cenz/helloworld.txt

# Remove a file in dtap.
bin/hdfs dfs -rm dtap://TenantStorage/cenz/helloworld.txt
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;p&gt;Tip:&lt;/p&gt;
&lt;p&gt;To get rid of the &lt;code&gt;bin/&lt;/code&gt; prefix, we can add Hadoop&apos;s &lt;code&gt;bin&lt;/code&gt; and &lt;code&gt;sbin&lt;/code&gt; directories to &lt;code&gt;$PATH&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;export HADOOP_HOME=$HOME/hadoop
export PATH=$PATH:$HADOOP_HOME:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Reference:
&lt;a href=&quot;https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/FileSystemShell.html&quot;&gt;Hadoop File System Shell Document&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h1&gt;Access DataTaps using pyspark&lt;/h1&gt;
&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;PySpark is an interface for Apache Spark in Python. Apache Spark is a unified analytics engine for big data processing, with built-in modules for streaming, SQL, machine learning and graph processing. Apache Spark can access data from HDFS and, with the DataTap extension (the bluedata-dtap.jar connector), file systems managed by DataTap.&lt;/p&gt;
&lt;h2&gt;Install pyspark&lt;/h2&gt;
&lt;p&gt;There are lots of ways to install Spark. The simplest way is to install the pyspark package directly using &lt;code&gt;pip install pyspark&lt;/code&gt;. Run the following to install the prerequisite packages and pyspark.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# install pyspark &amp;#x26; Java
apt-get install python3 -y
apt-get install python3-pip -y
DEBIAN_FRONTEND=noninteractive apt-get install openjdk-11-jdk-headless -y
pip install pyspark
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;There are two ways to interact with pyspark. The first is to execute the &lt;code&gt;pyspark&lt;/code&gt; command in bash to initiate the pyspark session. The second is to treat pyspark as a module that the Python interpreter can import (&lt;code&gt;import pyspark&lt;/code&gt;).&lt;/p&gt;
&lt;h3&gt;Method one&lt;/h3&gt;
&lt;h4&gt;Initiate &lt;em&gt;pyspark&lt;/em&gt; session with jars&lt;/h4&gt;
&lt;p&gt;In order to use DataTap with pyspark, you have to add an external jar file as an argument to pyspark. Initiate Spark&apos;s interactive shell in Python using the following command.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# bash

# Specify the path of the jars files
pyspark --jars /opt/bdfs/bluedata-dtap.jar
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After starting the interactive shell, &lt;code&gt;Spark Context&lt;/code&gt; and &lt;code&gt;Spark Session&lt;/code&gt; are automatically initiated for you.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://user-images.githubusercontent.com/72959956/120170783-e8d00680-c233-11eb-9fe8-136da9996fdc.png&quot; alt=&quot;image&quot;&gt;&lt;/p&gt;
&lt;p&gt;After specifying the Hadoop configurations, you can read files from DataTap just as you normally would with HDFS.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-py&quot;&gt;# pyspark

# Specify the Hadoop configurations.
sc._jsc.hadoopConfiguration().set(&apos;fs.dtap.impl&apos;, &apos;com.bluedata.hadoop.bdfs.Bdfs&apos;)
sc._jsc.hadoopConfiguration().set(&apos;fs.AbstractFileSystem.dtap.impl&apos;, &apos;com.bluedata.hadoop.bdfs.BdAbstractFS&apos;)

# Commands for reading DataTap file
text = sc.textFile(&quot;dtap://TenantStorage/HPE.txt&quot;)
text.take(5)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://user-images.githubusercontent.com/72959956/120171213-61cf5e00-c234-11eb-8928-2514e8b867a8.png&quot; alt=&quot;image&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Method two&lt;/h3&gt;
&lt;h4&gt;Initiate &lt;em&gt;python&lt;/em&gt; and initiate &lt;em&gt;pyspark&lt;/em&gt; with &lt;em&gt;jars&lt;/em&gt; at runtime&lt;/h4&gt;
&lt;p&gt;Run the Python Shell first:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# bash
python3
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;At the Python runtime, add the path of the jar file using the Spark configuration command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-py&quot;&gt;# python
from pyspark import SparkConf, SparkContext

# Specify the path of the jars files.
conf = SparkConf().set(&quot;spark.jars&quot;, &quot;/opt/bdfs/bluedata-dtap.jar&quot;)
sc = SparkContext(conf=conf)

# Specify the Hadoop configurations.
sc._jsc.hadoopConfiguration().set(&apos;fs.dtap.impl&apos;, &apos;com.bluedata.hadoop.bdfs.Bdfs&apos;)
sc._jsc.hadoopConfiguration().set(&apos;fs.AbstractFileSystem.dtap.impl&apos;, &apos;com.bluedata.hadoop.bdfs.BdAbstractFS&apos;)

# Commands for reading DataTap file
text = sc.textFile(&quot;dtap://TenantStorage/HPE.txt&quot;)
text.take(5)
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;p&gt;References:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://spark.apache.org/docs/latest/configuration.html#runtime-environment&quot;&gt;Spark Document: Runtime Environment&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/delta-io/delta/issues/346&quot;&gt;Related GitHub Issues&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;h1&gt;Conclusion&lt;/h1&gt;
&lt;p&gt;A distributed file system is fundamental for handling large amounts of data. Managing those file systems tends to always be a pain for developers. DataTaps unify different storage resources into a path that different clusters can use. This helps you to get rid of time-consuming copies or transfers of data. More time spent on extracting business insight from your data and less time handling tedious stuff - that&apos;s what DataTap can give you. And that&apos;s what you get with HPE Ezmeral.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Become a Legend!]]></title><description><![CDATA[The HPE DEV Workshops-on-Demand are a great way to get hands-on experience with the newest and most popular technologies. These free…]]></description><link>https://developer.hpe.com/become-a-legend/</link><guid isPermaLink="false">https://developer.hpe.com/become-a-legend/</guid><pubDate>Tue, 22 Jun 2021 19:32:31 GMT</pubDate><content:encoded>&lt;center&gt;&lt;img src=&quot;/img/15_workshops_legend_no_bg-img1.png&quot; width=&quot;400&quot; height=&quot;439&quot;&gt;&lt;/center&gt;
&lt;p&gt;The &lt;a href=&quot;/hackshack/workshops&quot;&gt;HPE DEV Workshops-on-Demand&lt;/a&gt; are a great way to get hands-on experience with the newest and most popular technologies. These free workshops are easy to take and really give you a good feel for how to interact with things like containers, data fabric, Kubernetes, etc. We understand that you put in time and effort whenever you take one of our HPE DEV Workshops-on-Demand and we now offer badges in recognition of your achievement.&lt;/p&gt;
&lt;p&gt;Now, every time you finish one of our Workshops-on-Demand, you will receive a badge commemorating having completed that specific workshop. You’ll receive the badge in your final, congratulatory email. Share your badge with friends and colleagues on Twitter or LinkedIn directly from the email to show them what new skills you’ve recently acquired.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/data_fabric_no_bg-img2.png&quot; width=&quot;400&quot; height=&quot;407&quot;&gt;&lt;/center&gt;
&lt;h3&gt;More opportunities to collect badges!&lt;/h3&gt;
&lt;p&gt;But that’s not all! Once you have finished 3 different workshops, you’ll receive an additional badge celebrating your achievement. More badges will be provided once you’ve completed 5, 7, 10, and 15 workshops. Climb the ranks, from &lt;strong&gt;Apprentice&lt;/strong&gt; to &lt;strong&gt;Expert&lt;/strong&gt; on to &lt;strong&gt;Hero&lt;/strong&gt; and &lt;strong&gt;Super Hero&lt;/strong&gt; and, ultimately, to &lt;strong&gt;Legend!&lt;/strong&gt; Share these on your social media channels as well, competing with friends and colleagues to see who can make it to the highest level!&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/complete_set_of_option_2-img3.png&quot; width=&quot;1000&quot; height=&quot;222&quot;&gt;&lt;/center&gt;
&lt;p&gt;In addition to any other recognition we grant them, special badges will be awarded to those who create a Jupyter Notebook-based workshop that we can offer to the rest of the HPE Developer Community. Put your thinking caps on to try and collect one of these special badges.&lt;/p&gt;
&lt;p&gt;The HPE DEV Community is a great place where we can all enjoy some fun now and again. Try out some of our newest workshops found here on the &lt;a href=&quot;/hackshack/workshops&quot;&gt;HPE DEV Workshops-on-Demand&lt;/a&gt; page. You can also navigate there from our HPE DEV portal &lt;a href=&quot;https://developer.hpe.com/skillup&quot;&gt;Skill Up&lt;/a&gt; page. The workshops are great for getting acquainted with these newer technologies and boosting your skill set. And now they’re even more fun! Don’t forget to check out the &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE DEV portal&lt;/a&gt; on a regular basis. We’re always coming up with something new.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[CHIF driver not found]]></title><description><![CDATA[This blog post has been moved to the Server Management Portal.]]></description><link>https://developer.hpe.com/chif-driver-not-found/</link><guid isPermaLink="false">https://developer.hpe.com/chif-driver-not-found/</guid><pubDate>Tue, 22 Jun 2021 12:06:33 GMT</pubDate><content:encoded>&lt;br&gt;
&lt;p&gt;&lt;big&gt;This blog post has been moved to the &lt;a href=&quot;https://servermanagementportal.ext.hpe.com/docs/references_and_material/blogposts/etc/chif/chif-driver-not-found&quot;&gt;Server Management Portal&lt;/a&gt;.&lt;/big&gt;&lt;/p&gt;
&lt;br&gt;</content:encoded></item><item><title><![CDATA[On-Premise Adventures: How to build an Apache Spark lab on Kubernetes]]></title><description><![CDATA[Editor’s Note – HPE Ezmeral Container Platform is now HPE Ezmeral Runtime Enterprise. For more information on why the name was changed…]]></description><link>https://developer.hpe.com/on-premise-adventures-how-to-build-an-apache-spark-lab-on-kubernetes/</link><guid isPermaLink="false">https://developer.hpe.com/on-premise-adventures-how-to-build-an-apache-spark-lab-on-kubernetes/</guid><pubDate>Tue, 15 Jun 2021 16:51:59 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note – HPE Ezmeral Container Platform is now HPE Ezmeral Runtime Enterprise&lt;/strong&gt;. For more information on why the name was changed, please &lt;a href=&quot;https://community.hpe.com/t5/HPE-Ezmeral-Uncut/HPE-Ezmeral-Container-Platform-is-now-HPE-Ezmeral-Runtime/ba-p/7151720#.YW7nOxrMKM8&quot;&gt;click here&lt;/a&gt;.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Apache Spark™ is an awesomely powerful developer tool for finding the value in your data.  I highly recommend you check out this Hewlett Packard Enterprise (HPE) white paper for some background – &lt;a href=&quot;https://www.hpe.com/psnow/doc/a50004177enw&quot;&gt;Apache Spark 3 on HPE Ezmeral&lt;/a&gt;. In this post, I’m going to explain how I deployed Apache Spark in my own on-premises HPE Ezmeral Container Platform-managed lab so that I could try Apache Spark out for myself.&lt;/p&gt;
&lt;h3&gt;We need a story first, right?&lt;/h3&gt;
&lt;p&gt;Suppose your story is similar to mine: I am a data scientist and I work for ACME Windmills Incorporated – or just ACME, for short.  ACME owns and operates windmills that generate power. They have multiple sites, such as one in Australia and another one in Southern California. I’ve been asked to predict how much power will be generated by these various sites.&lt;/p&gt;
&lt;h3&gt;What tools do I have at my disposal?&lt;/h3&gt;
&lt;p&gt;Our infrastructure team runs all of the hardware and software for the Data Science team. They provide me access to things like a Jupyter Notebook where I will do my Data Science work. The infrastructure team of DevOps engineers and system administrators has the ultimate toolbox to work with, a toolbox called the &lt;a href=&quot;https://www.hpe.com/us/en/ezmeral.html&quot;&gt;HPE Ezmeral Software Platform&lt;/a&gt;. HPE Ezmeral software enables an “all of the above” approach to deployment and management of ACME&apos;s data, apps, and the compute &amp;#x26; storage resources that run it all - from anywhere. The infrastructure team can use this toolbox to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;modernize legacy apps, and manage those apps alongside cloud-native apps&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;use existing data lakes alongside HPE Ezmeral Data Fabric&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;and all of the hardware and software can be consumed as-a-service from HPE!&lt;/p&gt;
&lt;p&gt;For this particular job, the infrastructure team pulled out the &lt;a href=&quot;http://www.hpe.com/containerplatform&quot;&gt;HPE Ezmeral Container Platform&lt;/a&gt; from the toolbox for application and cluster management. This will provide the data science teams with a wealth of MLOps tools such as an Apache Spark cluster configured to run a customized Jupyter Notebook with connections to ACME’s &lt;a href=&quot;https://assets.ext.hpe.com/is/content/hpedam/a00110846enw&quot;&gt;HPE Ezmeral Data Fabric&lt;/a&gt;.  I can’t wait to get access to my environment and get to coding!&lt;/p&gt;
&lt;h3&gt;What is the infrastructure team building for me?&lt;/h3&gt;
&lt;p&gt;As mentioned above, ACME needs to predict power output from their windmills, so they can make intelligent decisions about where to add more windmills and how to optimize the windmills they already have. This is a job for Apache Spark on Kubernetes on HPE Ezmeral! The infrastructure team amazingly built the following workspace for me in the blink of an eye! This used to take them weeks before they got their hands on the HPE Ezmeral software.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/spark-on-ezmeral-architecture.jpg&quot; alt=&quot;&quot; title=&quot;Spark on HPE Ezmeral Architecture&quot;&gt;&lt;/p&gt;
&lt;p&gt;As the figure above indicates, the infrastructure team built a Kubernetes Cluster and deployed the Apache Spark Operator onto it. The Apache Spark Operator is the glue that allows you to run Apache Spark within a containerized environment, such as a Kubernetes Cluster. The infrastructure team then carved out a Kubernetes Namespace and deployed the MLOps applications I will need like that custom Jupyter Notebook I mentioned previously. I will use that Jupyter Notebook to run my Apache Spark jobs.&lt;/p&gt;
&lt;p&gt;The code running inside the Jupyter Notebook will make an API call to the Apache Spark Operator using an API server called “Livy”. Livy will ask Apache Spark to create an Apache Spark session so I can perform analytics on the data. Apache Spark will be able to ingest data from the Australia core deployment of HPE Ezmeral Data Fabric using an HPE Ezmeral Container Platform feature called a DataTap (more on that below).&lt;/p&gt;
&lt;h2&gt;What the infrastructure team built for me&lt;/h2&gt;
&lt;p&gt;If you are a data scientist or data engineer and you really don’t want to know how all these back-end system administrator tasks are done, then I highly recommend you skip to the &lt;strong&gt;&lt;a href=&quot;#runourapachesparkjob&quot;&gt;Now we get to run our Apache Spark jobs!&lt;/a&gt;&lt;/strong&gt; section. I’d definitely put this section firmly in the infrastructure person category.&lt;/p&gt;
&lt;p&gt;First, I log into my HPE Ezmeral Container Platform WebUI. My organization has already set up Active Directory integration to use with the HPE Ezmeral Container Platform WebUI.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/spark-on-ezmeral-image2.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Below, on the Kubernetes Dashboard Web UI page, you can see a list of options in the left-hand pane. One of those options is &lt;strong&gt;Clusters&lt;/strong&gt;, found under the &lt;strong&gt;Kubernetes&lt;/strong&gt; section near the top of that pane. From my dashboard, I click the &lt;strong&gt;Clusters&lt;/strong&gt; option on the left under &lt;strong&gt;Kubernetes&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/spark-on-ezmeral-image3.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Next, in the &lt;strong&gt;Kubernetes Clusters&lt;/strong&gt; window, I see I have a single cluster called DataFabricOnK8s.  This cluster is running a version of HPE Ezmeral Data Fabric that runs within a Kubernetes Cluster. This is handy since now all of my Kubernetes namespaces and other clusters will get the same enterprise-grade storage features available in a Bare Metal deployment of HPE Ezmeral Data Fabric. I will create a new Kubernetes namespace from this cluster. To do that, I click on &lt;strong&gt;Tenants&lt;/strong&gt; on the left. Tenants, as the name implies, are how we organize multiple Kubernetes namespaces in the HPE Ezmeral Container Platform. Or, in other words, how we help you manage &lt;em&gt;&lt;strong&gt;multi-tenancy&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/spark-on-ezmeral-image4.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In the &lt;strong&gt;Tenants&lt;/strong&gt; window below, you can see that I created a new tenant on the DataFabricOnK8s cluster. Now I have a place to work and it is called “SparkLivyDemo”.  I can use the WebUI to create a Jupyter Notebook in that namespace/tenant by clicking on the hyperlinked name.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/spark-on-ezmeral-image5.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Here, I am in my own private “lab”, as I like to call it.  This is my tenant work area. When I created my tenant, I chose to include my MLOps licensed content. This provides me with a Kubernetes namespace pre-loaded with all the goodies I need for machine learning, including a Jupyter Notebook tailored to work with my HPE Ezmeral Container Platform DataTaps. Speaking of which, I should add DataTaps to my HPE Ezmeral Data Fabric deployments in Australia and Southern California. I do that by clicking &lt;strong&gt;DataTaps&lt;/strong&gt; on the left.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/spark-on-ezmeral-image6.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;On the screen below, you can see that I’ve tapped into two HPE Ezmeral Data Fabric deployments.  I simply clicked on &lt;strong&gt;DataTaps&lt;/strong&gt; in the left hand column of the WebUI and provided the details you see summarized in the &lt;strong&gt;Details&lt;/strong&gt; column to the right of each DataTap &lt;strong&gt;Name&lt;/strong&gt;. You can see that I needed to provide FQDNs to my secure-by-default HPE Ezmeral Data Fabric Core &amp;#x26; Edge deployments. I also provided a user authentication &lt;strong&gt;Ticket&lt;/strong&gt; that was generated by the Data Fabric administrator, so access is tightly controlled and easily connected using this WebUI (or REST API if you prefer).&lt;/p&gt;
&lt;p&gt;In this &lt;strong&gt;DataTaps&lt;/strong&gt; window, you will also see a third DataTap named &lt;strong&gt;TenantStorage&lt;/strong&gt;. That DataTap was automatically created when I created this Tenant, and its storage is persistent, protected, and managed by my Data Fabric Kubernetes Cluster.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/spark-on-ezmeral-image7.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Finally, I launched a custom Jupyter Notebook using the Notebooks section of this MLOps tenant.  This Notebook application comes with a full toolkit pre-integrated, so all I need to do is send the access endpoint to my Data Science team.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/spark-on-ezmeral-image8.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;To put my Data Scientists to work, I can just click on the &lt;strong&gt;Notebook Endpoints&lt;/strong&gt; text, copy the Access Point URL, and then send that securely to my Data Science team in whatever manner I wish. Active Directory is passed on to the Jupyter Hub itself, so only an authorized user can log into that Access Point.  From the Data Scientist’s perspective, they just pasted a hyperlink into their browser.&lt;/p&gt;
&lt;p&gt;You may now remove your &lt;em&gt;infrastructure person&lt;/em&gt; hat and proceed to doing some really cool Apache Spark Analytics!&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/spark-on-ezmeral-image9.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;a id=&quot;runourapachesparkjob&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Now we get to run our Apache Spark jobs!&lt;/h2&gt;
&lt;p&gt;For those of you who put on your infrastructure hat to read the previous section, thank you for sticking with me. Now you get to put on your Data Scientist hat! Or, if you skipped to here, that’s cool, too.&lt;/p&gt;
&lt;h3&gt;What happened behind the scenes&lt;/h3&gt;
&lt;p&gt;In the previous section, an infrastructure nerd built a lab out of some servers running a Kubernetes Cluster. Then, that person created a Kubernetes namespace from that cluster and applied the HPE Ezmeral MLOps template to that namespace. The infrastructure nerd also tapped into ACME’s global HPE Ezmeral Data Fabric and connected those &lt;em&gt;DataTaps&lt;/em&gt; to a Jupyter Notebook.  Just now you, the Data Scientist, received a link or &lt;em&gt;Access Point&lt;/em&gt; to your Jupyter Notebook, put that link into your web browser, and logged into Jupyter Hub with your Active Directory credentials.&lt;/p&gt;
&lt;h3&gt;Jupyter Notebook + Spark (PySpark) + Livy&lt;/h3&gt;
&lt;p&gt;Here, inside the Jupyter Notebook, is where all the Apache Spark analytics is done. At the top of the file, I set up my Livy URL. I was able to copy this URL directly from my HPE Ezmeral Container Platform’s WebUI, within the tenant view. A helpful HPE Ezmeral feature is the addition of MLOps &lt;strong&gt;magic&lt;/strong&gt; commands that get added to this Jupyter Notebook automatically. For more information regarding this, see &lt;a href=&quot;https://docs.containerplatform.hpe.com/53/reference/kubernetes/using-kubernetes/ai-ml-functionality/notebooks/Kubernetes_Notebook_Magic_Functions.html?hl=kubernetes%2Cnotebook%2Cmagic%2Cfunctions&quot;&gt;Kubernetes Notebook Magic Functions&lt;/a&gt;. In the screen below, you can see that the first Jupyter Notebook cell includes the %setLivy magic command. This command allows a Jupyter Notebook developer to specify the URL of the Livy server.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/spark-on-ezmeral-image10.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
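&lt;p&gt;As a rough, hypothetical sketch of that first cell (the gateway host and port below are placeholders; use the livy-http Access Point from your own tenant), it might look something like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# First notebook cell: point the notebook at the Livy server for this tenant.
# The URL is a placeholder; copy the real livy-http Access Point from the
# HPE Ezmeral Container Platform WebUI, as described below.
%setLivy http://gateway1.example.com:10054
&lt;/code&gt;&lt;/pre&gt;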
&lt;p&gt;The URL that is provided to the &lt;em&gt;%setLivy&lt;/em&gt; magic command can be obtained from the HPE Ezmeral Container Platform’s WebUI: under the “Applications” menu option, follow the &lt;em&gt;Service Endpoints&lt;/em&gt; link. On the &lt;em&gt;Service Endpoints&lt;/em&gt; page, you will find the &lt;em&gt;livy-http&lt;/em&gt; URL, or &lt;em&gt;Access Point&lt;/em&gt;, and the corresponding port.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/spark-on-ezmeral-image11.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Next, within this Jupyter Notebook I use the &lt;em&gt;%%configure&lt;/em&gt; command to override the default Apache Spark configuration and customize this Spark environment. This allows me to specify exactly what I need for this particular Apache Spark job.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/spark-on-ezmeral-image12.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
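&lt;p&gt;For illustration only, a &lt;em&gt;%%configure&lt;/em&gt; cell typically takes a JSON body of Spark session settings. The values below are assumptions made up for this sketch, not the settings used in the screenshot; adjust them to your own job:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;%%configure
{
  &quot;driverMemory&quot;: &quot;2g&quot;,
  &quot;executorMemory&quot;: &quot;4g&quot;,
  &quot;executorCores&quot;: 2,
  &quot;numExecutors&quot;: 2
}
&lt;/code&gt;&lt;/pre&gt;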
&lt;p&gt;Moving down in my Jupyter Notebook, I next import some libraries needed to analyze and transform my data.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/spark-on-ezmeral-image13.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Here, I define a few simple variables and connect the Apache Spark code to the global HPE Ezmeral Data Fabric – all thanks to the infrastructure person who set up those DataTaps for me. I could also have taken advantage of some of the other pre-configured local sandbox storage options automatically generated for me when my MLOps tenant was created, such as an NFS shared repository called an FS Mount and the &lt;strong&gt;Tenant Storage&lt;/strong&gt; sandbox managed by our HPE Ezmeral Data Fabric on Kubernetes Cluster.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/spark-on-ezmeral-image14.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
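&lt;p&gt;To make this concrete, here is a minimal, hypothetical sketch of reading turbine telemetry through a DataTap. The dtap:// path and column layout are invented for illustration and will differ in your environment:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Hypothetical DataTap path; substitute the DataTap name and file layout
# that your infrastructure team configured.
telemetry_path = &quot;dtap://AustraliaDataFabric/windmills/telemetry.csv&quot;

# Read the data into a Spark DataFrame (spark is pre-defined in the
# PySpark session established through Livy).
turbine_df = spark.read.csv(telemetry_path, header=True, inferSchema=True)

# Quick sanity check of the ingested data.
turbine_df.printSchema()
print(&quot;rows:&quot;, turbine_df.count())
&lt;/code&gt;&lt;/pre&gt;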
&lt;p&gt;From this point on, it is pure Apache Spark running in this standard Jupyter Notebook framework, using the PySpark kernel of the Jupyter Notebook.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/spark-on-ezmeral-image15.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
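&lt;p&gt;To give a flavor of what such a cell could contain, here is a purely illustrative PySpark training step; the feature and label columns are hypothetical, and the real notebook uses ACME&apos;s own data and model:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

# Hypothetical feature and label columns for the windmill telemetry.
assembler = VectorAssembler(
    inputCols=[&quot;wind_speed&quot;, &quot;wind_direction&quot;, &quot;temperature&quot;],
    outputCol=&quot;features&quot;,
)
train_df = assembler.transform(turbine_df)

# Fit a simple regression predicting power output from the features.
lr = LinearRegression(featuresCol=&quot;features&quot;, labelCol=&quot;power_kw&quot;)
model = lr.fit(train_df)
print(&quot;RMSE:&quot;, model.summary.rootMeanSquaredError)
&lt;/code&gt;&lt;/pre&gt;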
&lt;p&gt;The final result from all this hard work is a Wind Turbine Power Production Prediction graphic plot – created within this same Jupyter Notebook. You can see from the graphic that my predictive model is lining up with actual, real power data. I can now ship this model off to be used by other members of the team.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/spark-on-ezmeral-image16.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In the final lines of my Jupyter Notebook, I again use the DataTaps and standard PySpark commands (and also the NFS FS Mount feature) to save this model back out to the Australia and Southern California Ezmeral Data Fabric deployments and to the local sandbox storage.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/spark-on-ezmeral-image17.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
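&lt;p&gt;A sketch of that final step, again with invented destination paths, could look like the following; the model is written through the DataTaps, with an optional copy to the tenant&apos;s FS Mount:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Hypothetical destinations; the DataTap names and the FS Mount path
# are placeholders for whatever your tenant exposes.
model.write().overwrite().save(&quot;dtap://AustraliaDataFabric/models/wind_power_lr&quot;)
model.write().overwrite().save(&quot;dtap://SoCalDataFabric/models/wind_power_lr&quot;)

# Optional copy to the NFS FS Mount visible inside the notebook.
model.write().overwrite().save(&quot;file:///mnt/tenant-share/models/wind_power_lr&quot;)
&lt;/code&gt;&lt;/pre&gt;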
&lt;h2&gt;Well, that was fun!&lt;/h2&gt;
&lt;p&gt;As you can see, going through this whole process I wore a couple of hats: &lt;em&gt;Infrastructure&lt;/em&gt; or &lt;em&gt;DevOps&lt;/em&gt; and &lt;em&gt;Data Scientist&lt;/em&gt; or &lt;em&gt;Data Engineer&lt;/em&gt;. With my &lt;em&gt;Infrastructure&lt;/em&gt; hat on, I used the HPE Ezmeral Container Platform to create a workspace for the &lt;em&gt;Data Science&lt;/em&gt; teams. That workspace is, in fact, a Kubernetes Cluster and one or more namespaces managed in a multi-tenant, secure environment.&lt;/p&gt;
&lt;p&gt;Everything discussed here was illustrated using the HPE Ezmeral Container Platform WebUI, but that is for illustration purposes only. HPE Ezmeral software is totally REST API enabled. This one example of building a single MLOps model is relatively simple. In reality, you would run a full MLOps pipeline and use other tools such as KubeFlow or AirFlow to build a pipeline of much more complex and iterative models – and that is built into the HPE Ezmeral Container Platform, as well. In addition, you can integrate directly with ML Flow in the same tenant workspace that I demonstrated here for much more powerful model management. Further, you can use the MLOps training, model management, and deployment features to fully control your machine learning pipelines.&lt;/p&gt;
&lt;p&gt;I hope you found this information interesting and look forward to helping you to extract the value from your data more efficiently and securely than ever before in subsequent &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE DEV blog&lt;/a&gt; posts.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Host-based Volume Encryption with HPE CSI Driver for Kubernetes]]></title><description><![CDATA[Security is on everyone's mind today, and storage should be considered of the utmost importance. In highly dynamic and agile environments…]]></description><link>https://developer.hpe.com/host-based-volume-encryption-with-hpe-csi-driver-for-kubernetes/</link><guid isPermaLink="false">https://developer.hpe.com/host-based-volume-encryption-with-hpe-csi-driver-for-kubernetes/</guid><pubDate>Tue, 15 Jun 2021 15:00:55 GMT</pubDate><content:encoded>&lt;p&gt;Security is on everyone&apos;s mind today, and storage should be considered of the utmost importance. In highly dynamic and agile environments where many hosts and applications share the same wire for consolidation purposes, it has become important to not only secure communication between endpoints, but also store data fully encrypted where only the designated reader and writer is capable of unlocking the data through the use of a private key.&lt;/p&gt;
&lt;p&gt;In this blog post, we&apos;ll discuss how to use the host encryption feature of the HPE Container Storage Interface (&quot;CSI&quot;) Driver for Kubernetes that was introduced in version 2.0. This functionality is available to use with all supported backend Container Storage Providers (&quot;CSP&quot;).&lt;/p&gt;
&lt;h1&gt;In-flight encryption vs. data-at-rest encryption&lt;/h1&gt;
&lt;p&gt;Many of the supported HPE CSI Driver backends support encryption in one way or another. Let&apos;s examine the different modes to understand a little bit better where host encryption comes in.&lt;/p&gt;
&lt;h2&gt;Full disk encryption&lt;/h2&gt;
&lt;p&gt;FDE (Full Disk Encryption) and SED (Self-Encrypting Drives) are two technologies available on certain storage devices that encrypt the entire drive, whether it&apos;s an SSD or HDD. Software may be used to manage the keys needed to read and write data. Access is established when the drive is powered on, and once the drive is powered off, the decryption key, or passphrase, is needed to read and write content to the drive. This method protects data if a drive is stolen or lost. The downside is that FDE drives are usually more expensive, and key/passphrase management can be impractical because the key must be kept close to the data it protects, unless an external key manager is used.&lt;/p&gt;
&lt;h2&gt;Storage appliance software encryption&lt;/h2&gt;
&lt;p&gt;A more sensible approach for a storage appliance is to have a proprietary software component that allows administrators to selectively choose which logical volumes to encrypt. Keys can be stored on the appliance, either protected by a password or put in place automatically at boot. Drives taken out of the appliance will have encrypted data on them, and if there is a concern that the whole appliance might be stolen or tampered with, an optional passphrase can be used.&lt;/p&gt;
&lt;p&gt;Neither FDE nor appliance-based encryption secures data traveling over the data fabric serving client workloads, such as iSCSI, FC or NVMe-oF. If a host is compromised or spoofed on the fabric, full access to the volume content is granted. These approaches are better known as data-at-rest encryption.&lt;/p&gt;
&lt;h2&gt;Host encryption&lt;/h2&gt;
&lt;p&gt;Data that travels across a wire or data fabric in encrypted form is known as in-flight encryption. In most cases, data is encrypted and decrypted at each end with a shared secret that has been established from either a trusted entity or a simple password. Data may be stored either encrypted or in the clear depending on the use case, and usually separate technologies handle in-flight encryption and data-at-rest encryption.&lt;/p&gt;
&lt;p&gt;Host-based encryption works much like storage appliance software encryption, but control of the encryption is at the disposal of the host administrator, using a platform-independent, standard on-disk format. If the host administrator loses the key, the data is lost. All encryption, in-flight and at-rest, is performed outside of any controls the storage administrator has. If the storage appliance is compromised, or a rogue node spoofs the fabric identity of a legitimate host, the data will not decrypt without the key.&lt;/p&gt;
&lt;p&gt;Storing the key securely is where Kubernetes comes in, and it is what makes it possible to move volumes from node to node within a cluster. Let&apos;s walk through a simple example of how to use host encryption for persistent volumes and how end users remain in full control of how their data is secured.&lt;/p&gt;
&lt;h1&gt;Cluster-wide or namespace local keys&lt;/h1&gt;
&lt;p&gt;Host encryption for persistent volumes is controlled with &lt;code&gt;StorageClass&lt;/code&gt; parameters. A &lt;code&gt;StorageClass&lt;/code&gt; is a cluster object that restricted users who deploy apps normally don&apos;t have access to. The HPE CSI Driver for Kubernetes provides a construct that allows restricted users direct access to &lt;code&gt;StorageClass&lt;/code&gt; parameters defined by an administrator. This gives users and administrators immense flexibility in terms of what to encrypt and what to encrypt it with. Let&apos;s examine these options in detail.&lt;/p&gt;
&lt;h2&gt;Cluster administrator controlled host encryption&lt;/h2&gt;
&lt;p&gt;If an organization policy is to encrypt everything by default and only allow Kubernetes cluster administrators access to the key, a default &lt;code&gt;StorageClass&lt;/code&gt; could look like the one below. Pay attention to &lt;code&gt;allowOverrides&lt;/code&gt;, which allows users who create persistent volume claims to opt out of encryption for certain persistent volumes.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: &quot;true&quot;
  name: hpe-standard
provisioner: csi.hpe.com
parameters:
  csi.storage.k8s.io/controller-expand-secret-name: hpe-backend
  csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
  csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-publish-secret-name: hpe-backend
  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-stage-secret-name: hpe-backend
  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
  csi.storage.k8s.io/provisioner-secret-name: hpe-backend
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
  csi.storage.k8s.io/fstype: xfs
  allowOverrides: hostEncryption
  hostEncryption: &quot;true&quot;
  hostEncryptionSecretName: hpe-encrypt
  hostEncryptionSecretNamespace: hpe-storage
reclaimPolicy: Delete
allowVolumeExpansion: true
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Any &lt;code&gt;PersistentVolumeClaim&lt;/code&gt; created from this &lt;code&gt;StorageClass&lt;/code&gt; would be encrypted with the below &lt;code&gt;Secret&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
apiVersion: v1
kind: Secret
metadata:
  name: hpe-encrypt
  namespace: hpe-storage
stringData:
  hostEncryptionPassphrase: &quot;This is a very secret passphrase&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If a user wants to opt out of encryption for a certain &lt;code&gt;PersistentVolumeClaim&lt;/code&gt;, the PVC needs to be annotated.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-unencrypted-pvc
  annotations:
    csi.hpe.com/hostEncryption: &quot;false&quot;
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 64Gi
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;There are good reasons for allowing certain volumes to be unencrypted as not all data is confidential and encryption does come with both compute overhead and storage inefficiencies (more on this later).&lt;/p&gt;
&lt;h2&gt;Namespace controlled host encryption&lt;/h2&gt;
&lt;p&gt;In scenarios where the namespaced user needs to be in control of the key, it&apos;s possible for the cluster administrator to delegate the key management. This is incredibly useful when data is being replicated or backed up to an off-site location and data is being reused for dev/test use cases, disaster recovery or running compute-heavy analytics to offload production. To allow this behavior, the &lt;code&gt;StorageClass&lt;/code&gt; needs to be tweaked. Pay attention to &lt;code&gt;allowOverrides&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: &quot;false&quot;
  name: hpe-standard-delegated
provisioner: csi.hpe.com
parameters:
  csi.storage.k8s.io/controller-expand-secret-name: hpe-backend
  csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
  csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-publish-secret-name: hpe-backend
  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-stage-secret-name: hpe-backend
  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
  csi.storage.k8s.io/provisioner-secret-name: hpe-backend
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
  csi.storage.k8s.io/fstype: xfs
  allowOverrides: hostEncryption,hostEncryptionSecretName,hostEncryptionSecretNamespace
reclaimPolicy: Delete
allowVolumeExpansion: true
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, the user first needs to create their own &lt;code&gt;Secret&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
apiVersion: v1
kind: Secret
metadata:
  name: my-encryption-key
  namespace: my-namespace
stringData:
  hostEncryptionPassphrase: &quot;This is another secret, it&apos;s all mine!&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This &lt;code&gt;Secret&lt;/code&gt; now needs to be referenced in a set of annotations of the &lt;code&gt;PersistentVolumeClaim&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-encrypted-pvc
  annotations:
    csi.hpe.com/hostEncryption: &quot;true&quot;
    csi.hpe.com/hostEncryptionSecretName: my-encryption-key
    csi.hpe.com/hostEncryptionSecretNamespace: my-namespace
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 64Gi
  storageClassName: hpe-standard-delegated
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Let&apos;s use the above &lt;code&gt;StorageClass&lt;/code&gt;, &lt;code&gt;Secret&lt;/code&gt; and &lt;code&gt;PersistentVolumeClaim&lt;/code&gt; and attach a workload.&lt;/p&gt;
&lt;h1&gt;Provisioning and writing data&lt;/h1&gt;
&lt;p&gt;In this example I&apos;m just bringing up a sleeping &lt;code&gt;Pod&lt;/code&gt; and attaching the &lt;code&gt;PersistentVolumeClaim&lt;/code&gt; named &quot;my-encrypted-pvc&quot; created from the previous section.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
kind: Pod
apiVersion: v1
metadata:
  name: my-pod
spec:
  containers:
    - name: my-test-pod
      image: alpine
      command: [&quot;/bin/sh&quot;]
      args: [&quot;-c&quot;, &quot;while true; do echo Snoozing...; sleep 10; done&quot;]
      volumeMounts:
        - name: my-encrypted-mount
          mountPath: /data
  volumes:
    - name: my-encrypted-mount
      persistentVolumeClaim:
        claimName: my-encrypted-pvc
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once the &lt;code&gt;Pod&lt;/code&gt; has come up, we can inspect the mount point.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl exec pod/my-pod -- df -h /data
Filesystem              Size   Used   Available  Use%  Mounted on
/dev/mapper/enc-mpatha  64.0G  32.2M  63.9G      0%    /data
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The important difference we can observe here versus an unencrypted &lt;code&gt;PersistentVolumeClaim&lt;/code&gt; is the &quot;enc&quot; keyword in the device name. This means that the kernel module &quot;dm-crypt&quot; has been used to broker access to the actual volume via Linux Unified Key Setup (LUKS). The HPE CSI Driver uses the default cipher with &quot;dm-crypt&quot; which currently is &quot;aes-xts-plain64&quot; with a key size of 256 bits.&lt;/p&gt;
&lt;h1&gt;Inspecting the wire&lt;/h1&gt;
&lt;p&gt;Assume we would write &quot;Hello World&quot; into an unencrypted volume versus an encrypted. In the following experiment I captured &lt;code&gt;echo &quot;Hello World&quot; &gt; /data/file.txt &amp;#x26;&amp;#x26; sync&lt;/code&gt; with a network packet sniffer and loaded up the result in Wireshark.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/unecrypted.png&quot; alt=&quot;Hello World unencrypted&quot; title=&quot;Hello World unencrypted&quot;&gt;&lt;/p&gt;
&lt;p&gt;Imagine if this were Personally Identifiable Information (PII) traveling across a compromised network. A security breach could potentially result in an expensive lawsuit if the information ends up in the wrong hands. Let&apos;s repeat the experiment with an encrypted device.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/encrypted.png&quot; alt=&quot;Hello World encrypted&quot; title=&quot;Hello World encrypted&quot;&gt;&lt;/p&gt;
&lt;p&gt;Inspecting the same packet in the sequence, it&apos;s quite obvious the entire payload has been encrypted. This, of course, has an impact on the actual storage. Data that traditionally compresses and deduplicates well suddenly presents a near 1:1 representation.&lt;/p&gt;
&lt;p&gt;I conducted an experiment where I wrote an 8GiB file, filled with zeroes. Under normal circumstances, such a file is nearly invisible to any enterprise array. Not when encrypted.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;HPE Alletra $ vol --info  pvc-44159f94-0f2b-4109-ae90-e48d0df082b1 | grep ^Volume
Volume mapped usage (MiB): 7859
Volume compression: 0.97X
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It&apos;s advisable to consult with the storage administrator to ensure that data-at-rest encryption on the array is turned off for these volumes, along with deduplication and compression, to spare CPU cycles.&lt;/p&gt;
&lt;p&gt;Naturally, there will be a performance impact on the host, and it&apos;s advised to study empirical data from benchmarks conducted with a production-like workload to understand the amount of CPU headroom that&apos;s needed to ensure the application meets its performance criteria.&lt;/p&gt;
&lt;h1&gt;Maliciously trying to retrieve data&lt;/h1&gt;
&lt;p&gt;In the event of a rogue host gaining access to a volume without having the key, assume the LUN gets connected and discovered.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# mount /dev/mapper/mpatha /mnt
mount: unknown filesystem type &apos;crypto_LUKS&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The encrypted volume may, in this state, be brute forced or maliciously deleted. Therefore, it&apos;s advisable to use a strong, non-dictionary passphrase for encryption and decryption. The passphrase can be up to 512 characters long. The passphrase length does not affect performance or the cipher strength. It&apos;s only used to open the device to the host.&lt;/p&gt;
&lt;p&gt;As an extra layer of security when using the iSCSI protocol, you can use the Challenge-Handshake Authentication Protocol (CHAP) facility available to the HPE CSI Driver. This ensures a mutual (between initiator and target) shared secret is needed to perform a discovery in the first place.&lt;/p&gt;
&lt;h1&gt;Summary&lt;/h1&gt;
&lt;p&gt;Being able to confidently store sensitive data on storage systems out of your control over insecure networks is becoming more important in the era of data being the digital oil. With this solution, you can take your keys and walk away without any concern of your volumes (or replicas of them) being stolen, manipulated or sold to third parties.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Check out the release announcement of &lt;a href=&quot;https://community.hpe.com/t5/Around-the-Storage-Block/HPE-CSI-Driver-for-Kubernetes-now-available-for-HPE-Alletra/ba-p/7136280&quot;&gt;HPE CSI Driver for Kubernetes 2.0.0&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Read the documentation on HPE Storage Container Orchestrator Documentation (SCOD) around the &lt;a href=&quot;https://scod.hpedev.io/csi_driver/using.html#volume_encryption&quot;&gt;host-based Volume Encryption&lt;/a&gt; feature&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Learn about the &lt;a href=&quot;https://scod.hpedev.io/container_storage_provider/hpe_alletra_6000/index.html#multitenant_deployment&quot;&gt;new multitenancy feature on HPE Alletra 6000 and Nimble Storage&lt;/a&gt; to further improve security of your storage infrastructure&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The team hangs out on Slack and is eager to learn about your security challenges. Sign up at &lt;a href=&quot;http://slack.hpedev.io&quot;&gt;slack.hpedev.io&lt;/a&gt; and login to the community at &lt;a href=&quot;https://hpedev.slack.com&quot;&gt;hpedev.slack.com&lt;/a&gt;, to check out #kubernetes #nimblestorage and #3par-primera.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Multitenancy for Kubernetes clusters using HPE Alletra 6000 and Nimble Storage]]></title><description><![CDATA[We live in a storage infrastructure economy, where IT is under constant pressure to deliver more with less, yet still provide a high…]]></description><link>https://developer.hpe.com/multitenancy-for-kubernetes-clusters-using-hpe-alletra-6000-and-nimble-storage/</link><guid isPermaLink="false">https://developer.hpe.com/multitenancy-for-kubernetes-clusters-using-hpe-alletra-6000-and-nimble-storage/</guid><pubDate>Tue, 15 Jun 2021 15:00:00 GMT</pubDate><content:encoded>&lt;p&gt;We live in a storage infrastructure economy, where IT is under constant pressure to deliver more with less, yet still provide a high standard in data services directly to end users without compromising system or data security.&lt;/p&gt;
&lt;p&gt;With the introduction of &lt;a href=&quot;https://community.hpe.com/t5/Around-the-Storage-Block/HPE-CSI-Driver-for-Kubernetes-now-available-for-HPE-Alletra/ba-p/7136280&quot;&gt;HPE CSI Driver for Kubernetes 2.0&lt;/a&gt; and the software powering HPE Alletra 6000 and Nimble Storage, Hewlett Packard Enterprise (&quot;HPE&quot;) introduces multitenancy for Kubernetes clusters accessing persistent volumes on the aforementioned storage arrays.&lt;/p&gt;
&lt;p&gt;The term &lt;em&gt;tenant&lt;/em&gt; is ambiguous within the industry. For HPE Alletra 6000 and Nimble Storage, a tenant is a storage appliance user account with confined privileges to volumes existing within one or many &lt;em&gt;folders&lt;/em&gt; on the array defined by a storage administrator. In turn, a folder is a logical construct to loosely group volumes together, which allows capacity and performance accounting to be limited on per folder level. The tenant may manage any aspect of the volume within the folder and may not exceed the boundaries set on the folder.&lt;/p&gt;
&lt;p&gt;In this blog post, I&apos;ll step through some of the basic elements to enable storage administrators to safely hand over credentials to Kubernetes administrators.&lt;/p&gt;
&lt;h1&gt;The enabling primitives&lt;/h1&gt;
&lt;p&gt;HPE Alletra 6000 and NimbleOS 6.0 include a new command-line interface (CLI) called &lt;code&gt;tenantadmin&lt;/code&gt;. This new CLI enables storage administrators to confine a user account into specific folders. Folders need to exist on the array prior to creating a new tenant.&lt;/p&gt;
&lt;p&gt;The synopsis from the &lt;code&gt;--help&lt;/code&gt; flag gives an overview of the supported workflows.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;HPE Alletra $ tenantadmin --help
Usage: tenantadmin [options]
Manage Tenants.

Available options are:
  --help                           Program help.

  --list                           List Tenants.

  --info name                      Tenant info.

  --add tenant_name                Add a tenant.
    --folders folders              List of folder paths (comma separated
                                   pool_name:fqn) the tenant will be able to
                                   access (mandatory).

  --remove name                    Remove a tenant.

  --add_folder tenant_name         Add a folder path for tenant access.
    --name folder_name             Name of the folder path (pool_name:fqn) to
                                   be added (mandatory).

  --remove_folder tenant_name      Remove a folder path from tenant access.
    --name folder_name             Name of the folder path (pool_name:fqn) to
                                   be removed (mandatory).

  --passwd                         Change tenant&apos;s login password.
    --tenant name                  Change a specific tenant&apos;s login password
                                   (mandatory).
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With these basic &lt;em&gt;create&lt;/em&gt;, &lt;em&gt;read&lt;/em&gt;, &lt;em&gt;update&lt;/em&gt; and &lt;em&gt;delete&lt;/em&gt; (&quot;CRUD&quot;) elements, the storage administrator is now empowered to delegate and confine all the storage resource management to a folder for a Kubernetes administrator to use.&lt;/p&gt;
&lt;h1&gt;The storage administrator&apos;s workflow&lt;/h1&gt;
&lt;p&gt;Assume a new Kubernetes environment is being deployed within an Enterprise. Each cluster needs to be compartmentalized to not consume all the performance and capacity of the array.&lt;/p&gt;
&lt;p&gt;First step, create a new restricted folder in the pool of your choice.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;folder --create k8s-prod \
  --iops_limit 75000 --usage_limit 2500000 \
  --description=&quot;Kubernetes Production Cluster&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, create a new tenant.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;tenantadmin --add K8sAdminProd --folders default:/k8s-prod
Enter new password: ********
Retype new password: ********
Created User K8sAdminProd
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Hint:&lt;/strong&gt; Username must be alphanumeric, cannot start with a number and cannot exceed 32 characters in length. The password policy enforced is derived from the system policy.&lt;/p&gt;
&lt;p&gt;At this point, the storage administrator hands over the credentials to the Kubernetes administrator.&lt;/p&gt;
&lt;p&gt;It&apos;s important to understand that giving the folder name to the Kubernetes administrator is completely optional. This could be useful in situations where it has been determined that a tenant needs different performance characteristics for its folders, such as a &lt;em&gt;gold&lt;/em&gt;, &lt;em&gt;silver&lt;/em&gt; and &lt;em&gt;bronze&lt;/em&gt; scheme. The Container Storage Provider (&quot;CSP&quot;) will, by default, pick the folder with the most available capacity for the tenant. The upside of omitting the folder information from the Kubernetes administrator is that the storage administrator keeps all the power and flexibility to grow storage into new pools for a tenant without the tenant knowing about it, which is a very popular cloud operational model.&lt;/p&gt;
&lt;h1&gt;Apply Kubernetes configuration&lt;/h1&gt;
&lt;p&gt;Applying the tenant configuration to the Kubernetes cluster is no more difficult than using a standard &quot;administrator&quot; or &quot;poweruser&quot; account on the array.&lt;/p&gt;
&lt;p&gt;YAML declarations below are created with:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;kubectl create -f-
&amp;#x3C; Paste the YAML content &gt;
Hit CTRL-D on a new line.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Let&apos;s create a &lt;code&gt;Secret&lt;/code&gt; referencing the HPE Alletra 6000 with the tenant credentials.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
apiVersion: v1
kind: Secret
metadata:
  name: hpe-backend
  namespace: hpe-storage
stringData:
  serviceName: alletra6000-csp-svc
  servicePort: &quot;8080&quot;
  backend: 192.168.1.30
  username: K8sAdminProd
  password: qweqwe123
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, create a new default &lt;code&gt;StorageClass&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: &quot;true&quot;
  name: hpe-standard
provisioner: csi.hpe.com
parameters:
  csi.storage.k8s.io/fstype: xfs
  csi.storage.k8s.io/controller-expand-secret-name: hpe-backend
  csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
  csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-publish-secret-name: hpe-backend
  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-stage-secret-name: hpe-backend
  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
  csi.storage.k8s.io/provisioner-secret-name: hpe-backend
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
  description: &quot;Volume created by the HPE CSI Driver for Kubernetes&quot;
reclaimPolicy: Delete
allowVolumeExpansion: true
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;At this point the Kubernetes cluster is ready to accept &lt;code&gt;PersistentVolumeClaims&lt;/code&gt;. Let&apos;s create one.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-first-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 32Gi
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once the &lt;code&gt;PersistentVolume&lt;/code&gt; has been bound, we can inspect the array and determine that the volume has been placed in the tenant&apos;s folder.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/multitenancy-hpedev-screenshot.png&quot; alt=&quot;HPE Alletra 6000 Multitenancy for Kubernetes Clusters&quot; title=&quot;HPE Alletra 6000 Multitenancy for Kubernetes Clusters&quot;&gt;&lt;/p&gt;
&lt;p&gt;That&apos;s it. Multitenant storage is now properly configured!&lt;/p&gt;
&lt;h1&gt;Container Storage Provider (CSP) access only&lt;/h1&gt;
&lt;p&gt;Multitenant storage is only available today through the HPE Alletra 6000 and Nimble Storage Container Storage Provider for Kubernetes. The CSP uses an undisclosed REST API resource of the array to perform CRUD operations on objects that are in turn tagged and grouped to create the notion of a full-blown cloud experience, both from the Kubernetes administrator&apos;s perspective and, most importantly, from the storage administrator&apos;s. The storage administrator does not need to worry about storage system credentials being compromised and wreaking havoc beyond the compartment they were assigned to.&lt;/p&gt;
&lt;p&gt;At the time of writing, no CSP functionality is restricted by using a tenant instead of a system account with the &quot;administrator&quot; or &quot;poweruser&quot; role. HPE recommends switching over to the tenant model for Kubernetes clusters accessing HPE Alletra 6000 or Nimble Storage arrays running NimbleOS 6.0.0 or later.&lt;/p&gt;
&lt;p&gt;Visit &lt;a href=&quot;https://scod.hpedev.io&quot;&gt;HPE Storage Container Orchestrator Documentation&lt;/a&gt; (SCOD) to learn more about what storage resources are being exposed to tenants.&lt;/p&gt;
&lt;h1&gt;Example use cases&lt;/h1&gt;
&lt;p&gt;There are plenty of different use cases that multitenancy enables for IT Ops looking to manage and secure storage resources for a diverse set of applications running on Kubernetes.&lt;/p&gt;
&lt;h2&gt;Ephemeral Inline Volumes&lt;/h2&gt;
&lt;p&gt;End users who deploy applications on Kubernetes that require ephemeral storage at a capacity beyond what a worker node is capable of providing may use Ephemeral Inline Volumes. Before multitenancy, Kubernetes administrators had to share the &lt;code&gt;Secret&lt;/code&gt; with the end user to allow provisioning of the Ephemeral Inline Volume. That is not very practical from a security standpoint, as application administrators would have privileges on the storage array. Now, the end user may request a separate tenant to securely manage Ephemeral Inline Volumes for their application.&lt;/p&gt;
&lt;p&gt;Example.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;kind: Pod
apiVersion: v1
metadata:
  name: my-csi-app-inline-volume
spec:
  containers:
    - name: my-frontend
      image: busybox
      command: [ &quot;sleep&quot;, &quot;100000&quot; ]
      volumeMounts:
      - mountPath: &quot;/data&quot;
        name: my-csi-volume
  volumes:
  - name: my-csi-volume
    csi:
      driver: csi.hpe.com
      nodePublishSecretRef:
        name: my-tenant-secret
      fsType: ext3
      volumeAttributes:
        csi.storage.k8s.io/ephemeral: &quot;true&quot;
        accessProtocol: &quot;iscsi&quot;
        size: &quot;5Gi&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;Secret&lt;/code&gt; &quot;my-tenant-secret&quot; would have to exist in the same &lt;code&gt;Namespace&lt;/code&gt; as the &lt;code&gt;Pod&lt;/code&gt; and contain the necessary connectivity and credentials to the tenant.&lt;/p&gt;
&lt;h2&gt;Virtualization and Containers&lt;/h2&gt;
&lt;p&gt;Deploying Kubernetes on a virtualization platform such as VMware vSphere, OpenStack or Hyper-V is by far the most popular pattern for deploying on-premises Kubernetes. Many times customers want to leverage the same array to provide persistent storage both for the virtualization platform and the container platform.&lt;/p&gt;
&lt;p&gt;Allowing Kubernetes clusters &quot;administrator&quot; or &quot;poweruser&quot; access to the array served by the virtualization platform the cluster is running on might be feasible in a single-tenant, single-application-type scenario. Once the Ephemeral Inline Volumes use case is woven into the mix, we&apos;ve basically given application administrators way too many privileges on the array.&lt;/p&gt;
&lt;p&gt;In many cases the virtualization and storage administrator is combined into the same role. Moving forward, this administrative function would be able to securely hand over credentials to Kubernetes administrators who need a first-class persistent storage solution.&lt;/p&gt;
&lt;h2&gt;Kubernetes-as-a-Service&lt;/h2&gt;
&lt;p&gt;Cloud and Managed Service Providers (CSPs and MSPs) monetizing their infrastructure are in a constant battle to safely and securely share infrastructure resources between their tenants and at the same time provide a differentiating portfolio. In the case of dispensing Kubernetes clusters to their tenants, they would have to resort to either using the virtualization platform CSI driver (such as the vSphere CSI driver), which is incredibly limited in functionality, or running a Container Attached Storage (&quot;CAS&quot;) solution on the Kubernetes cluster itself, which in turn would result in storage and performance inefficiencies.&lt;/p&gt;
&lt;p&gt;With multitenancy, CSPs and MSPs are now enabled to create new tenants on the array as part of their catalog workflows and provide an entirely new set of rich data services enabled by HPE Alletra 6000 and HPE Nimble Storage.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/kaas.png&quot; alt=&quot;Kubernetes as a Service&quot; title=&quot;Kubernetes as a Service&quot;&gt;&lt;/p&gt;
&lt;h1&gt;Summary&lt;/h1&gt;
&lt;p&gt;Expect more content that elaborates on how multitenancy can be used with Kubernetes using HPE Alletra 6000 and Nimble Storage. Consider this blog post a teaser regarding the cornerstone capability of multitenancy.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Visit SCOD to learn more about the &lt;a href=&quot;https://scod.hpedev.io/container_storage_provider/hpe_nimble_storage/index.html&quot;&gt;HPE Alletra 6000 CSP&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Explore the all-new &lt;a href=&quot;https://hpe.com/storage/alletra&quot;&gt;HPE Alletra&lt;/a&gt; 6000&lt;/li&gt;
&lt;li&gt;Check out the release blog of &lt;a href=&quot;https://community.hpe.com/t5/Around-the-Storage-Block/HPE-CSI-Driver-for-Kubernetes-now-available-for-HPE-Alletra/ba-p/7136280&quot;&gt;HPE CSI Driver for Kubernetes 2.0&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The team hangs out in #kubernetes and #nimblestorage (Alletra channels pending) on Slack. Join at &lt;a href=&quot;https://slack.hpedev.io&quot;&gt;slack.hpedev.io&lt;/a&gt; and sign in at &lt;a href=&quot;https://hpedev.slack.com&quot;&gt;hpedev.slack.com&lt;/a&gt;. We&apos;re eager to learn about how you&apos;ll put multitenancy to use!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Similar Document Search using Apache Spark with TF-IDF]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may…]]></description><link>https://developer.hpe.com/similar-document-search-using-apache-spark-with-tf-idf/</link><guid isPermaLink="false">https://developer.hpe.com/similar-document-search-using-apache-spark-with-tf-idf/</guid><pubDate>Tue, 15 Jun 2021 06:09:32 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;&quot;authorDisplayName&quot;: [&quot;Prasad Singathi&quot;,&quot;Maikel Pereira&quot;],
&quot;publish&quot;: &quot;2019-06-18T07:00:00.000Z&quot;,
&quot;category&quot;: [&quot;machine-learning&quot;],
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;h2&gt;Background&lt;/h2&gt;
&lt;p&gt;As a professional services group, we were tasked with providing a solution to automatically find messages in the archives that are similar to new messages and send them to the person asking the question.&lt;/p&gt;
&lt;p&gt;To accomplish that goal, we decided to apply machine learning to the process, so that there is an automated program able to find similarities between the current message and the historical data. The algorithm used was &lt;em&gt;term frequency—inverse document frequency&lt;/em&gt; (TF-IDF). TF-IDF is used in a variety of applications. Typical use cases include document search, document tagging, and finding similar documents.  &lt;/p&gt;
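&lt;p&gt;As a quick refresher (the generic formulation, not anything specific to this implementation): for a term &lt;em&gt;t&lt;/em&gt; in a document &lt;em&gt;d&lt;/em&gt; drawn from a corpus of &lt;em&gt;N&lt;/em&gt; documents, tf-idf(t, d) = tf(t, d) * log(N / df(t)), where tf(t, d) is how often the term appears in the document and df(t) is the number of documents containing the term. Terms that are frequent within a document but rare across the corpus get the highest weights, which is what makes the measure useful for matching similar messages. Spark&apos;s IDF implementation applies a smoothed variant of the log term.&lt;/p&gt;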
&lt;h2&gt;Problem Description&lt;/h2&gt;
&lt;p&gt;The desired solution was built using two Apache Spark applications running in a MapR cluster: one of them uses the historical data to update data features and train the model on a regular basis, and the second one analyzes every new message and finds five similar ones.&lt;/p&gt;
&lt;h2&gt;Application 1 - Creates Features and Trains Model&lt;/h2&gt;
&lt;p&gt;This application was developed using Spark and Scala, and it can run on a schedule, depending on the needs. &lt;strong&gt;Here is what it does, step by step:&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Loads all messages from MapR Database. For the sake of brevity, we omit preprocessing steps like tokenization, stop words removal, punctuation removal, and other types of cleanup.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val rawEmaiData=spark.loadFromMapRDB(&quot;/googlegroups/messages&quot;)
val rawEmaiDataDF=rawEmaiData.select(&quot;_id&quot;,&quot;bodyWithHistory&quot;,&quot;threadId&quot;,&quot;emailDate&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Creates hashingTF, using the HashingTF class available in Spark, and sets fixed-length feature vectors of size 1000. It applies the hashing transformation to the documents, resulting in the featurizedData.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val hashingTF = new HashingTF().setInputCol(&quot;words&quot;).setOutputCol(&quot;rawFeatures&quot;).setNumFeatures(1000)
val featurizedData = hashingTF.transform(wordsData)
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Creates the IDF, and from the TF and the IDF, it creates the TF-IDF.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val idf = new IDF().setInputCol(&quot;rawFeatures&quot;).setOutputCol(&quot;features&quot;)
val idfModel = idf.fit(featurizedData)
val rescaledData = idfModel.transform(featurizedData)
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;A UDF is necessary for pre-calculating sparse vector norm.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;def calcNorm(vectorA: SparseVector): Double = {
      var norm = 0.0
      for (i &amp;#x3C;-  vectorA.indices){ norm += vectorA(i)*vectorA(i) }
      (math.sqrt(norm))
    }
val calcNormDF = udf[Double,SparseVector](calcNorm)
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;5&quot;&gt;
&lt;li&gt;Creates a TF-IDF corpus.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val normalized = rescaledData.withColumn(&quot;norm&quot;,calcNormDF(col(&quot;features&quot;)))
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;6&quot;&gt;
&lt;li&gt;Saves IDF model to MapR XD Distributed File and Object Store.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;idfModel.write.overwrite().save(&quot;/googlegroups/save_model_idf&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;7&quot;&gt;
&lt;li&gt;To save the features vector to the MapR Database table, we have to convert it to JSON format. For this, we create and register a UDF.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;def toJson(v: Vector): String = {
   v match {
     case SparseVector(size, indices, values) =&gt;
       val jValue = (&quot;type&quot; -&gt; 0) ~
         (&quot;size&quot; -&gt; size) ~
         (&quot;indices&quot; -&gt; indices.toSeq) ~
         (&quot;values&quot; -&gt; values.toSeq)
       compact(render(jValue))
     case DenseVector(values) =&gt;
       val jValue = (&quot;type&quot; -&gt; 1) ~ (&quot;values&quot; -&gt; values.toSeq)
       compact(render(jValue))
  }
}
val asJsonUDF = udf[String,Vector](toJson)
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;8&quot;&gt;
&lt;li&gt;Finally, saves the features vector to the MapR Database table.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val dfToSave = normalized.withColumn(&quot;rawFeaturesJson&quot;, asJsonUDF(col(&quot;rawFeatures&quot;))).withColumn(&quot;featuresJson&quot;, asJsonUDF(col(&quot;features&quot;))).drop(&quot;rawFeatures&quot;).drop(&quot;features&quot;)
dfToSave.saveToMapRDB(&quot;/googlegroups/trained_model&quot;, createTable = false)
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Application 2 - New Messages&lt;/h2&gt;
&lt;p&gt;The second application is a &lt;a href=&quot;https://developer.hpe.com/blog/streaming-machine-learning-pipeline-for-sentiment-analysis-using-apache-/&quot;&gt;Spark Stream Consumer application&lt;/a&gt; that &lt;strong&gt;will execute the following steps:&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Loads the previously saved &lt;code&gt;idfModel&lt;/code&gt; and initializes a new &lt;code&gt;HashingTF&lt;/code&gt; model.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val idfModel = IDFModel.load(&quot;path/to/serialized/model&quot;)
val hashingTF = new HashingTF().setInputCol(&quot;words&quot;).setOutputCol(&quot;rawFeatures&quot;).setNumFeatures(1000)
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Loads in memory and caches the data with the features saved previously.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val all = contextFuntions.loadFromMapRDB(argsConfiguration.trained).toDF
all.cache()
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Creates a DataFrame with the current message.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val one = Seq((x._id,x.body)).toDF(&quot;_id&quot;, &quot;contents&quot;)
val newWords = prepareWords(one, &quot;words&quot;)
val newFeature = hashingTF.transform(newWords)
val newRescale = idfModel.transform(newFeature)
val normalized = newRescale.withColumn(&quot;norm2&quot;, UDF.calcNormUDF(col(&quot;features2&quot;)))
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;Then, it builds a cross-join DataFrame between this single message and all existing messages in the database and calculates the similarity.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val cross = normalized.crossJoin(all).drop(normalized.col(&quot;_id&quot;))
val cosine = cross.withColumn(&quot;similarity&quot;, UDF.calcCosineUDF(col(&quot;features&quot;), col(&quot;features2&quot;), col(&quot;norm&quot;), col(&quot;norm2&quot;)))
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;5&quot;&gt;
&lt;li&gt;For this, it uses the cosine similarity function implemented as follows and registered as a UDF.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;def cosineSimilarity(vectorA: SparseVector, vectorB:SparseVector,normASqrt:Double,normBSqrt:Double) :(Double) = {
 var dotProduct = 0.0
 for (i &amp;#x3C;-  vectorA.indices){ dotProduct += vectorA(i) * vectorB(i) }
 val div = (normASqrt * normBSqrt)
 if( div == 0 ) (0)
 else (dotProduct / div)
}
val calcCosineUDF = udf[Double,SparseVector,SparseVector,Double,Double](cosineSimilarity)
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;6&quot;&gt;
&lt;li&gt;The result can then be ordered by similarity, in descending order, taking the top five elements.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val similarsDF = cosine.sort(desc(&quot;similarity&quot;)).select(&quot;similarity&quot;,&quot;_id&quot;).limit(5)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/img/image2_.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Conclusions&lt;/h2&gt;
&lt;p&gt;MapR provides the ecosystem needed for Apache Spark applications to run and scale as needed. It integrates all database and streaming platforms and enables the ability to do distributed processing. It efficiently integrates Spark with the database and the file system by extending it. Both capabilities, which are particularly useful for this solution, will be implemented in production as a feature of a bigger product in an effort to organize the Google Groups forum and with the intention of extending it to other data sources and realms. Since it is tested in a MapR cluster, all that would be needed is to install it and dedicate more resources when the moment comes.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Build your own iLO Redfish simulator]]></title><description><![CDATA[Introduction When I first began developing Redfish Workshops-on-Demand, I quickly realized that I would not be able to provision more than…]]></description><link>https://developer.hpe.com/build-your-own-ilo-redfish-simulator/</link><guid isPermaLink="false">https://developer.hpe.com/build-your-own-ilo-redfish-simulator/</guid><pubDate>Fri, 11 Jun 2021 16:36:54 GMT</pubDate><content:encoded>&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;When I first began developing Redfish &lt;a href=&quot;/hackshack/workshops&quot;&gt;Workshops-on-Demand&lt;/a&gt;, I quickly realized that I would not be able to provision more than one or two physical servers with an embedded Redfish service allowing students to perform concurrent write operations. This was a problem since the infrastructure had been designed to host many more students.&lt;/p&gt;
&lt;p&gt;So, I started to look for Redfish simulators and found the &lt;a href=&quot;https://www.qemu.org/&quot;&gt;qemu&lt;/a&gt; based &lt;a href=&quot;https://github.com/openbmc/openbmc&quot;&gt;OpenBmc&lt;/a&gt; simulator that I used for the &lt;a href=&quot;/hackshack/workshops&quot;&gt;Redfish API 101&lt;/a&gt; workshop. This simulator is perfect for this introductory lab as its Redfish implementation is simple and without Original Equipment Manufacturer (OEM) &lt;a href=&quot;https://redfish.dmtf.org/redfish/mockups/v1/1060&quot;&gt;extensions&lt;/a&gt;. Moreover, its light memory and CPU consumption allows the start of almost a hundred instances in a single virtual machine, leading to as many attendees able to perform concurrent modifications.&lt;/p&gt;
&lt;p&gt;For the other two &lt;a href=&quot;/hackshack/workshops&quot;&gt;workshops&lt;/a&gt; (&lt;a href=&quot;http://hpe.com/info/resttool&quot;&gt;iLOrest&lt;/a&gt; and Ansible/OneView), I had to look for a more fully featured Redfish implementation in order to propose a wider range of exercises.&lt;/p&gt;
&lt;p&gt;This article presents the &lt;a href=&quot;https://redfish.dmtf.org/&quot;&gt;Distributed Management Task Force (DMTF)&lt;/a&gt; &lt;a href=&quot;https://github.com/DMTF/Redfish-Mockup-Creator&quot;&gt;Redfish Mockup Creator&lt;/a&gt; and &lt;a href=&quot;https://github.com/DMTF/Redfish-Mockup-Server&quot;&gt;Redfish Mockup Server&lt;/a&gt; and how they can be used to learn and test the Redfish API by several tens of students concurrently.&lt;/p&gt;
&lt;h2&gt;The Redfish Mockup Creator&lt;/h2&gt;
&lt;h3&gt;Basic presentation, installation and invocation&lt;/h3&gt;
&lt;p&gt;To create your own Redfish simulator, you need to have read mode access to a live Redfish service (i.e. iLO 5). Then, using the &lt;a href=&quot;https://github.com/DMTF/Redfish-Mockup-Creator&quot;&gt;DMTF Redfish Mockup Creator&lt;/a&gt; deployed in a place with network connectivity to the live Redfish service, you will be able to retrieve the entire Redfish resources in &lt;code&gt;index.json&lt;/code&gt; files under a specified directory.&lt;/p&gt;
&lt;p&gt;The Redfish Mockup Creator is a single, simple and easy to deploy &lt;a href=&quot;https://www.python.org/downloads/&quot;&gt;python 3&lt;/a&gt; script with a very small number of parameters and options that makes it easy to use. The associated documentation is up to date and provides several deployment methods and invocation examples in its GitHub &lt;a href=&quot;https://github.com/DMTF/Redfish-Mockup-Creator#readme&quot;&gt;&lt;code&gt;README.md&lt;/code&gt;&lt;/a&gt; file.&lt;/p&gt;
&lt;p&gt;You can download the latest sources from this &lt;a href=&quot;https://github.com/DMTF/Redfish-Mockup-Creator/releases/tag/1.1.1&quot;&gt;release location&lt;/a&gt; in &lt;code&gt;.zip&lt;/code&gt; or &lt;code&gt;.tar.gz&lt;/code&gt; format. Once downloaded, extract the sources into a location reachable by Python 3 and companion modules.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;IMPORTANT NOTE:&lt;/strong&gt; As mentioned in the &lt;a href=&quot;https://github.com/DMTF/Redfish-Mockup-Creator/blob/1.1.1/requirements.txt&quot;&gt;&lt;code&gt;requirements.txt&lt;/code&gt;&lt;/a&gt; file, the DMTF &lt;code&gt;redfish&lt;/code&gt; Python module is required to run the Mockup Creator. However, this module is not compatible with the HPE &lt;code&gt;python-redfish-library&lt;/code&gt; because both of them contain a class called &lt;code&gt;redfish&lt;/code&gt; but with different content. Use &lt;code&gt;pip uninstall python-redfish-library&lt;/code&gt; before installing the DMTF &lt;code&gt;redfish&lt;/code&gt; Python module with &lt;code&gt;pip install redfish&lt;/code&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The &lt;code&gt;python3&lt;/code&gt; command below launches the &lt;code&gt;redfishMockupCreate.py&lt;/code&gt; script against a remote Redfish service (&lt;code&gt;-r ilo5&lt;/code&gt;) accessible with the &lt;code&gt;-u&lt;/code&gt; and &lt;code&gt;-p&lt;/code&gt; credentials.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;python3 redfishMockupCreate.py -r ilo5 -u ilouser -p ilopassword \
     --Secure --Auth Session  --Headers \
     --Dir ./ilo5
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;--Secure&lt;/code&gt; argument specifies the use of the &lt;code&gt;HTTPS&lt;/code&gt; secure protocol. The &lt;code&gt;--Auth&lt;/code&gt; parameter allows two modes of authentication in the remote Redfish service: &lt;code&gt;Basic&lt;/code&gt; and &lt;code&gt;Session&lt;/code&gt;. With the &lt;code&gt;Basic&lt;/code&gt; authentication, the username/password credentials will be used for each GET request. You can use the &lt;code&gt;Session&lt;/code&gt; authentication mechanism, if it is supported by the remote Redfish service. In this case, the Mockup Creator will create a Redfish session using the supplied credentials and retrieve a session token from the response headers. This token will be used for all the GET requests needed to create the mockup.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;None&lt;/code&gt; authentication mode displayed in the help message of the Mockup Creator is a synonym for &lt;code&gt;Basic&lt;/code&gt;. See the &lt;a href=&quot;https://github.com/DMTF/Redfish-Mockup-Creator/blob/1.1.1/redfishMockupCreate.py&quot;&gt;Python code&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;--Headers&lt;/code&gt; option stores the response headers of each &lt;code&gt;GET&lt;/code&gt; request in a &lt;code&gt;headers.json&lt;/code&gt; file. More details are provided in the &quot;Mockup structure&quot; section below.&lt;/p&gt;
&lt;p&gt;Lastly, the &lt;code&gt;--Dir&lt;/code&gt; option provides the folder entry point for the mockup.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;NOTE: The mockup target directory will be created if necessary. If it exists, it must be empty before the launch of the Mockup Creator.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;Preparing the use of iLOrest against your Mockup Server&lt;/h3&gt;
&lt;p&gt;If you intend to use &lt;a href=&quot;http://hpe.com/info/resttool&quot;&gt;iLOrest&lt;/a&gt; against the mockup you created with the above command, you should open an iLOrest session right after the mockup creation and capture its cache. This cache directory is created during the authentication process in a default location, unless a specific location is specified on the command line.&lt;/p&gt;
&lt;p&gt;The easiest way to perform this action is to install &lt;a href=&quot;https://github.com/HewlettPackard/python-redfish-utility/releases/latest&quot;&gt;iLOrest&lt;/a&gt; on your favorite operating system and identify the default cache location with the &lt;code&gt;help&lt;/code&gt; command.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/ilorestcachelocation.png&quot; alt=&quot;iLOrest default cache directory location&quot; title=&quot;iLOrest default cache directory location&quot;&gt;&lt;/p&gt;
&lt;p&gt;Then, open an iLOrest session (&lt;code&gt;ilorest login &amp;#x3C;ilo-ip&gt; -u &amp;#x3C;user&gt; -p &amp;#x3C;password&gt;&lt;/code&gt;) and save the content of the cache directory in a &lt;code&gt;.zip&lt;/code&gt; or &lt;code&gt;.tgz&lt;/code&gt; file. Once the cache is saved, you can log out safely (&lt;code&gt;ilorest logout&lt;/code&gt;). You will use it later, when the mockup server is up and running.&lt;/p&gt;
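&lt;p&gt;As a rough sketch (the archive name and the cache location are placeholders; the actual default location depends on your operating system, as shown by the &lt;code&gt;help&lt;/code&gt; command above), this capture could look like the following on Linux:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# Log in once so that iLOrest builds its cache, then archive the cache directory for later reuse
ilorest login &amp;#x3C;ilo-ip&gt; -u &amp;#x3C;user&gt; -p &amp;#x3C;password&gt;
tar czf ilorest-cache.tgz -C &amp;#x3C;default-cache-location&gt; cache
# The saved copy is what will be edited later, so it is now safe to log out
ilorest logout
&lt;/code&gt;&lt;/pre&gt;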
&lt;h3&gt;Mockup structure&lt;/h3&gt;
&lt;p&gt;Once authenticated in the remote Redfish service, the Mockup Creator crawls the service recursively. For each endpoint, starting at &lt;code&gt;/redfish&lt;/code&gt;, it sends an HTTP(S) GET request and creates an &lt;code&gt;index.json&lt;/code&gt; file containing the response body, as well as a folder for each sub-endpoint present in the &lt;code&gt;index.json&lt;/code&gt; file. The first three endpoint levels are shown in the next picture.&lt;/p&gt;
&lt;p&gt;If the &lt;code&gt;--Headers&lt;/code&gt; option is present on the command line, a &lt;code&gt;headers.json&lt;/code&gt; file is created with the content of the GET response headers. This file holds potentially interesting information, like the HTTP requests allowed against the current endpoint.&lt;/p&gt;
&lt;p&gt;The following screenshot lists the content of the output directory of the above invocation of the Mockup Creator. The &quot;root&quot; folder (&lt;code&gt;ilo5&lt;/code&gt;) contains a &lt;code&gt;README&lt;/code&gt; file and a &lt;code&gt;redfish&lt;/code&gt; directory. The &lt;code&gt;README&lt;/code&gt; file contains the command line invocation. The &lt;code&gt;redfish&lt;/code&gt; sub-folder contains two files (&lt;code&gt;headers.json&lt;/code&gt;, &lt;code&gt;index.json&lt;/code&gt;) and a directory (&lt;code&gt;v1&lt;/code&gt;). Lastly, the &lt;code&gt;v1&lt;/code&gt; directory contains the same two files as well as a sub-folder for each endpoint contained in the &lt;code&gt;index.json&lt;/code&gt; file (AccountService, Managers, ...).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/mockupdirstructure.png&quot; alt=&quot;Mockup structure&quot; title=&quot;Redfish mockup structure&quot;&gt;&lt;/p&gt;
&lt;p&gt;The following picture displays the first lines of the &lt;code&gt;redfish/v1/index.json&lt;/code&gt; file. Note that the endpoints location reflects the mockup directory structure.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/redfishv1indexjson.png&quot; alt=&quot;index.json partial content&quot; title=&quot;index.json partial content&quot;&gt;&lt;/p&gt;
&lt;p&gt;The following image shows the content of the &lt;code&gt;/redfish/v1/Systems/1/headers.json&lt;/code&gt; file with the list of possible requests: GET, HEAD, POST, PATCH.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/headersjson.png&quot; alt=&quot;headers.json example&quot; title=&quot;Headers.json example&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Redfish Mockup Server&lt;/h2&gt;
&lt;h3&gt;Basic presentation and invocation&lt;/h3&gt;
&lt;p&gt;Once the Redfish mockup is created, you can make it available to Redfish clients with the &lt;a href=&quot;https://github.com/DMTF/Redfish-Mockup-Server/releases/latest&quot;&gt;DMTF Mockup Server&lt;/a&gt;. This Python 3 application is a web server taking the location of a Redfish mockup as input.&lt;/p&gt;
&lt;p&gt;Deployment and usage of this program are easy and well documented in its &lt;a href=&quot;https://github.com/DMTF/Redfish-Mockup-Server&quot;&gt;GitHub&lt;/a&gt; repository. The following code block shows how I launch it in the &lt;a href=&quot;/hackshack/workshops&quot;&gt;Workshops-on-Demand&lt;/a&gt; infrastructure.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-Shell&quot;&gt;python3 redfishMockupServer.py                 \
   --ssl                                       \
   --key  /SecureLocation/FdzSelfSigned.pem    \
   --cert /SecureLocation/FdzSelfSigned.pem    \
   --host 10.31.86.81                          \
   --port 45675                                \
   -D /usr/kits/VMs/RedfishMockups/ilo5
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In the above command, the &lt;code&gt;--ssl&lt;/code&gt; parameter specifies that the Redfish service simulator will be accessible via HTTPS/SSL. The required private and public keys are located in a single file (&lt;code&gt;FdzSelfSigned.pem&lt;/code&gt;).&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;--host&lt;/code&gt; and &lt;code&gt;--port&lt;/code&gt; parameters specify the IP address and TCP port to listen to. Of course, the IP/Port tuple must be accessible to Redfish clients willing to send requests to the simulator.&lt;/p&gt;
&lt;p&gt;The last option (&lt;code&gt;-D&lt;/code&gt;) provides the location of the Redfish mockup created previously.&lt;/p&gt;
&lt;h3&gt;How does the simulator work?&lt;/h3&gt;
&lt;p&gt;Upon startup, the simulator copies the Redfish mockup to a private location and uses this copy for both read and write requests. The original mockup files are never modified. Hence, restarting the simulator places it back in a fresh, known state.&lt;/p&gt;
&lt;p&gt;As of the writing of this article, the DMTF Redfish Mockup Server does not implement any authentication mechanism. Hence, you don&apos;t have to authenticate before sending your client requests.&lt;/p&gt;
&lt;p&gt;When a Redfish client sends a GET request to the Redfish Mockup Server, it responds with the &lt;code&gt;index.json&lt;/code&gt; file located in the folder of the requested endpoint, as well as a &lt;code&gt;200 OK&lt;/code&gt; response code. If the target endpoint is not valid, the usual &lt;code&gt;40X&lt;/code&gt; response codes will be sent back.&lt;/p&gt;
&lt;p&gt;For POST, PUT and PATCH requests, the simulator performs limited verification of the query, modifies the requested endpoint and sends back a &lt;code&gt;204 No Content&lt;/code&gt; status code with no associated body. For the same request, a real Redfish service performs additional verification and sends back a non-empty response body with a &lt;code&gt;20X&lt;/code&gt; status code.&lt;/p&gt;
&lt;p&gt;The different behavior of the simulator, compared to a real iLO 5 Redfish service, can be illustrated with a PATCH request for modifying the &lt;code&gt;IndicatorLED&lt;/code&gt; resource of a computer chassis. The authorized values for this parameter are defined by the DMTF in the Chassis Redfish schema. For an iLO 5 with firmware version 2.30, the &lt;a href=&quot;http://redfish.dmtf.org/schemas/v1/Chassis.v1_10_2.json#/definitions/IndicatorLED&quot;&gt;implemented schema&lt;/a&gt; specifies the following possible values: &lt;code&gt;unknown&lt;/code&gt;, &lt;code&gt;Lit&lt;/code&gt;, &lt;code&gt;Blinking&lt;/code&gt;, &lt;code&gt;Off&lt;/code&gt;. A physical iLO 5 complains if you supply a value different from what the schema proposes. However, the DMTF Redfish Mockup Server accepts any string, as shown in the following screenshot.&lt;/p&gt;
&lt;p&gt;The screenshot below shows the sending of a &lt;code&gt;PATCH&lt;/code&gt; request with an invalid value (&lt;code&gt;Foo&lt;/code&gt;) toward an iLO 5 simulator. The simulator performs the patch action and responds with status code &lt;code&gt;204&lt;/code&gt;. The last command shows that the action has been successfully performed.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/fooindicatorled.png&quot; alt=&quot;PATCH an invalid property in the DMTF Redfish Mockup Server&quot; title=&quot;PATCH an invalid property in the DMTF Redfish Mockup Server&quot;&gt;&lt;/p&gt;
&lt;p&gt;The same query against a physical iLO 5 returns a &lt;code&gt;400 Bad Request&lt;/code&gt; status and a &lt;code&gt;@Message.ExtendedInfo&lt;/code&gt; mentioning the faulty argument.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/fooindicatorledagainstilo5.png&quot; alt=&quot;PATCH of an invalid property in a physical iLO 5&quot; title=&quot;PATCH of an invalid property in a physical iLO 5&quot;&gt;&lt;/p&gt;
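&lt;p&gt;If you want to reproduce this behavior by hand, the sketch below uses &lt;code&gt;curl&lt;/code&gt;; the chassis path and the listening address and port simply reuse the Mockup Server invocation shown earlier and may differ in your own setup.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# PATCH an invalid IndicatorLED value into the simulator (no authentication required)
curl -k -X PATCH https://10.31.86.81:45675/redfish/v1/Chassis/1 \
     -H &quot;Content-Type: application/json&quot; \
     -d &apos;{&quot;IndicatorLED&quot;: &quot;Foo&quot;}&apos;
# The simulator answers 204 No Content; reading the resource back confirms the bogus value was stored
curl -k -s https://10.31.86.81:45675/redfish/v1/Chassis/1 | grep IndicatorLED
&lt;/code&gt;&lt;/pre&gt;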
&lt;h3&gt;Querying the Mockup Server with iLOrest&lt;/h3&gt;
&lt;p&gt;If you want to query your iLO simulator with &lt;a href=&quot;http://hpe.com/info/resttool&quot;&gt;iLOrest&lt;/a&gt;, you have to extract the cache directory you saved during the mockup creation (see the &quot;Preparing the use of iLOrest against your Mockup Server&quot; paragraph above) and edit the &lt;code&gt;url&lt;/code&gt; property of its two files, &lt;code&gt;index&lt;/code&gt; and &lt;code&gt;&amp;#x3C;longUniqIdenfier&gt;&lt;/code&gt;, to make them point to the simulator.&lt;/p&gt;
&lt;p&gt;On a Linux system, this operation can be done with the following commands:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-Shell&quot;&gt;cd $iloCacheDir/cache/
sed -i &apos;s?\(&quot;url&quot;: &quot;https://\)ilo-IP-physical&quot;?\1ilo-IP-simulator&quot;?&apos; *
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once the iLOrest cache points to your mockup server, you can use this Redfish client tool to query the mockup.&lt;/p&gt;
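&lt;p&gt;From there, ordinary iLOrest commands run against the simulator exactly as they would against a live iLO. A short, hypothetical session could look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;# With the patched cache in place, these queries are served by the simulator instead of a physical iLO
ilorest select Chassis.
ilorest get IndicatorLED
ilorest rawget &quot;/redfish/v1/Systems/1&quot;
&lt;/code&gt;&lt;/pre&gt;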
&lt;blockquote&gt;
&lt;p&gt;NOTE: If you use the iLOrest &lt;code&gt;login&lt;/code&gt; command, the cache will be overwritten. If you use the &lt;code&gt;logout&lt;/code&gt; command, the cache will be erased.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;To learn and test the Redfish API, the DMTF provides two very useful tools to create Redfish mockups and simulate a live service. They are easy to install and use, and their quality is good. In addition, from my personal experience, the maintainers of these active GitHub projects are very responsive in addressing quality issues and proposed enhancements.&lt;/p&gt;
&lt;p&gt;I presented only iLOrest to query this Redfish simulator, but many other Redfish clients can be used, like the ones mentioned in this &lt;a href=&quot;https://youtu.be/ur9UKRV_0S8&quot;&gt;video&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Join us for Discover 2021!]]></title><link>https://developer.hpe.com/2021-May-03/</link><guid isPermaLink="false">https://developer.hpe.com/2021-May-03/</guid><pubDate>Tue, 01 Jun 2021 06:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Ways to interact with Kubernetes Clusters managed by HPE Ezmeral Container Platform]]></title><description><![CDATA[Editor’s Note – HPE Ezmeral Container Platform is now HPE Ezmeral Runtime Enterprise. For more information on why the name was changed…]]></description><link>https://developer.hpe.com/ways-to-interact-with-kubernetes-clusters-managed-by-hpe-ezmeral-container-platform/</link><guid isPermaLink="false">https://developer.hpe.com/ways-to-interact-with-kubernetes-clusters-managed-by-hpe-ezmeral-container-platform/</guid><pubDate>Fri, 28 May 2021 08:40:17 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note – HPE Ezmeral Container Platform is now HPE Ezmeral Runtime Enterprise&lt;/strong&gt;. For more information on why the name was changed, please &lt;a href=&quot;https://community.hpe.com/t5/HPE-Ezmeral-Uncut/HPE-Ezmeral-Container-Platform-is-now-HPE-Ezmeral-Runtime/ba-p/7151720#.YW7nOxrMKM8&quot;&gt;click here&lt;/a&gt;.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;HPE Ezmeral Software Platform consists of a bundle of software that helps you run, manage, control, and secure the apps, data, and IT that run your business. A major component is the HPE Ezmeral Container Platform (HPE ECP), a unified cloud container software platform built on Kubernetes (K8s). You can use it to deploy a new Kubernetes cluster with a few clicks. But how exactly do you connect through HPE ECP to interact with K8s? Don&apos;t panic! This blog post will introduce you to most, if not all, of the ways to connect to a Kubernetes cluster using HPE ECP.&lt;/p&gt;
&lt;h2&gt;WebUI&lt;/h2&gt;
&lt;p&gt;The first method by which you can interact with Kubernetes clusters managed by HPE ECP is, of course, through the web user interface (WebUI).&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Screenshot&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;img src=&quot;https://github.com/helloezmeral/cdn/raw/main/K8s-Cluster.png&quot; alt=&quot;&quot;&gt;&lt;/td&gt;
&lt;td&gt;In the Main Menu, if you are the administrator of the HPE Ezmeral CP, you can manage and monitor the status of the K8s clusters managed by HPE ECP.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;img src=&quot;https://github.com/helloezmeral/cdn/raw/main/K8s-Dashboard.png&quot; alt=&quot;&quot;&gt;&lt;/td&gt;
&lt;td&gt;You can access the Kubernetes Dashboard just as you would do with open source Kubernetes.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;WebTerminal&lt;/h2&gt;
&lt;p&gt;Inside the Kubernetes Tenant, at the bottom, there is a web terminal for you to interact with Kubernetes using the &lt;code&gt;kubectl&lt;/code&gt; command.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Screenshot&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;img src=&quot;https://github.com/helloezmeral/cdn/raw/main/K8s-Tenant.png&quot; alt=&quot;&quot;&gt;&lt;/td&gt;
&lt;td&gt;At the bottom, you can click &quot;Initialize&quot; to initiate the web terminal instance.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;img src=&quot;https://github.com/helloezmeral/cdn/raw/main/K8s-Tenant-02.png&quot; alt=&quot;&quot;&gt;&lt;/td&gt;
&lt;td&gt;Type your &lt;code&gt;kubectl&lt;/code&gt; command to interact with Kubernetes.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;HPE Kubectl Plugin&lt;/h2&gt;
&lt;p&gt;The HPE kubectl plugin (kubectl-hpecp) is required to authenticate the tenant’s kubectl requests. Therefore, if you want to use kubectl remotely as a tenant user, you must first install the plugin to get an HPE ECP authentication token (a session ID), fetch the kubeconfig manifest file, and then interact with the managed K8s clusters.&lt;/p&gt;
&lt;p&gt;To install the kubectl-hpecp plugin, first fetch it from the WebUI. Log in as a tenant user and click Download HPE Kubectl Plugin (as shown in the picture below) to download the plugin and installation instructions for your target operating system. Note: the following code is all executed in a Linux environment.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Screenshot&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;img src=&quot;https://github.com/helloezmeral/cdn/raw/main/K8s-Tenant-03.png&quot; alt=&quot;&quot;&gt;&lt;/td&gt;
&lt;td&gt;You can download the required binary here as well.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3&gt;Installation of &lt;code&gt;kubectl&lt;/code&gt; and &lt;code&gt;kubectl-hpecp&lt;/code&gt;&lt;/h3&gt;
&lt;h4&gt;Step 1: Make sure you have &lt;code&gt;kubectl&lt;/code&gt; installed.&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Download the latest version of kubectl
curl -LO &quot;https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl&quot;
# And place it anywhere in your PATH:
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;p&gt;References:&lt;/p&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/&quot;&gt;How to install and set up kubectl on Linux&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;h4&gt;Step 2: Install hpe kubectl plugin&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# This link is subject to change; get the current link from HPE Ezmeral
# Download kubectl-hpecp binary and untar the file
wget https://bluedata-releases.s3.amazonaws.com/kubectl-epic/3.4/14/linux/kubectl-hpecp.star
tar xf kubectl-hpecp.star
# And place it anywhere in your PATH:
sudo mv ./kubectl-hpecp /usr/local/bin
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Check that &lt;code&gt;kubectl-hpecp&lt;/code&gt; is installed correctly.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Command&lt;/th&gt;
&lt;th&gt;Screenshot&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;kubectl plugin list&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;img src=&quot;https://github.com/helloezmeral/cdn/raw/main/kubectl-plugin-list.png&quot; alt=&quot;&quot;&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;kubectl hpecp -h&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;img src=&quot;https://github.com/helloezmeral/cdn/raw/main/kubectl-hpecp-h.png&quot; alt=&quot;&quot;&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;blockquote&gt;
&lt;p&gt;References:&lt;/p&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.containerplatform.hpe.com/53/reference/kubernetes/using-kubernetes/Using_the_HPE_Kubectl_Plugin.html&quot;&gt;Using the HPE Kubectl Plugin&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/&quot;&gt;Extend kubectl with plugins&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;h3&gt;Getting the &lt;code&gt;kubeconfig&lt;/code&gt; file&lt;/h3&gt;
&lt;h4&gt;Using &lt;code&gt;kubectl hpecp refresh&lt;/code&gt; command&lt;/h4&gt;
&lt;p&gt;The &lt;code&gt;kubectl hpecp refresh&lt;/code&gt; command gets the user a new Kubeconfig. Using the Kubeconfig, you can interact with Kubernetes through the HPE Ezmeral Container Platform.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;kubectl hpecp refresh &amp;#x3C;ip_address, host alias, or hostname&gt; --insecure --hpecp-user=&amp;#x3C;new_username&gt; --hpecp-pass=&amp;#x3C;new_password&gt;
# Example
kubectl hpecp refresh 172.16.10.41 --insecure --hpecp-user=your-username --hpecp-pass=your-pass
kubectl hpecp refresh ez53-gateway.hpeilab.com --insecure --hpecp-user=your-username --hpecp-pass=your-pass
kubectl hpecp refresh ez53-gateway.hpeilab.com --insecure
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Download your &lt;code&gt;Kubeconfig&lt;/code&gt; file, and define the path to the &lt;code&gt;Kubeconfig&lt;/code&gt; file as a shell environment variable.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://user-images.githubusercontent.com/72959956/117413580-bab71980-af48-11eb-808e-1f46f074451c.png&quot; alt=&quot;image&quot;&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Example
export KUBECONFIG=&quot;/home/hpeadmin/.kube/.hpecp/ez53-gateway.hpeilab.com/config&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Download the &lt;code&gt;Kubeconfig&lt;/code&gt; file manually&lt;/h4&gt;
&lt;p&gt;&lt;img src=&quot;https://user-images.githubusercontent.com/72959956/119962105-4b799600-bfd9-11eb-985d-2c867162902e.png&quot; alt=&quot;image&quot;&gt;&lt;/p&gt;
&lt;p&gt;Download your &lt;code&gt;kubeconfig&lt;/code&gt; file, and define the &lt;code&gt;Kubeconfig&lt;/code&gt; file as a shell environment variable:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Example
export KUBECONFIG=&quot;/the/path/of/your/kubeconfig&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Using REST API&lt;/h4&gt;
&lt;p&gt;HPE Ezmeral Container Platform provides a REST API for you to interact with. Here is the set of REST API calls that allows you to download the &lt;code&gt;Kubeconfig&lt;/code&gt; file.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Authenticate as a tenant user in the specified tenant, getting the session ID:
curl -k -i -s --request POST &quot;http://ez53-gateway.hpeilab.com:8080/api/v2/session&quot; \
--header &apos;Accept: application/json&apos; \
--header &apos;Content-Type: application/json&apos; \
--data-raw &apos;{
&quot;name&quot;: &quot;username&quot;,
&quot;password&quot;: &quot;password&quot;,
&quot;tenant_name&quot;: &quot;test-tenant&quot;
}&apos;

# output
HTTP/1.1 201 Created
Access-Control-Allow-Origin: *
Content-Length: 13
Content-Type: text/plain
Date: Fri, 30 Apr 2021 13:18:38 GMT
Location: /api/v2/session/__thisisthesessionid__
Server: HPE Ezmeral Container Platform 5.3

201 Created
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Get the Kubeconfig file for your tenant working context:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;curl -k -s --request GET &quot;http://ez53-gateway.hpeilab.com:8080/api/v2/k8skubeconfig&quot; \
--header &quot;X-BDS-SESSION: /api/v2/session/__thisisthesessionid__&quot; \
--header &apos;Accept: application/json&apos; \
--header &apos;Content-Type: application/json&apos; &gt; ./kubeconfig

# Define the Kubeconfig file as a shell environment variable
export KUBECONFIG=kubeconfig
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The example below shows how you can combine the two commands into a single command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;curl -k -s --request GET &quot;http://&amp;#x3C;you-ez-gateway&gt;:8080/api/v2/k8skubeconfig&quot; \
--header &quot;X-BDS-SESSION: $(curl -k -i -s --request POST &quot;http://&amp;#x3C;you-ez-gateway&gt;:8080/api/v2/session&quot; \
--header &apos;Accept: application/json&apos; \
--header &apos;Content-Type: application/json&apos; \
--data-raw &apos;{
&quot;name&quot;: &quot;&amp;#x3C;change-your-user-name&gt;&quot;,
&quot;password&quot;: &quot;&amp;#x3C;change-your-user-password&gt;&quot;,
&quot;tenant_name&quot;: &quot;&amp;#x3C;change-the-tenant-you-want&gt;&quot;
}&apos; | grep Location | awk &apos;{print $2}&apos; | tr -d &apos;\r&apos;)&quot; \
--header &apos;Accept: application/json&apos; \
--header &apos;Content-Type: application/json&apos; &gt; ./kubeconfig

export KUBECONFIG=&quot;./kubeconfig&quot;

kubectl get pods
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;p&gt;Resources:&lt;/p&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.containerplatform.hpe.com/53/reference/accessing-the-applications/API_Access.html&quot;&gt;API_Access&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/hpe-notebooks/tree/master/HPECPAPI&quot;&gt;Jupyter Notebook: Introduction to the HPE Ezmeral Container Platform REST API&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/bluedatainc/solutions/tree/master/APIs&quot;&gt;API documents&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/blog/hpe-container-platform-rest-api-part-1-authenticating/&quot;&gt;HPE Container Platform REST API&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;h2&gt;hpecp python library (pre-alpha)&lt;/h2&gt;
&lt;p&gt;If you are looking for a way to interact with HPE Ezmeral programmatically, you can keep an eye on the hpecp python library from the HPE Container Platform Community. Note that it is still a prototype; it may be unstable and subject to change until the library reaches beta.&lt;/p&gt;
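&lt;p&gt;If you want to experiment with it, the package can be installed straight from PyPI; keep in mind that the interface may still change before the library reaches beta.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Install the pre-alpha hpecp library from PyPI
pip3 install hpecp
&lt;/code&gt;&lt;/pre&gt;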
&lt;blockquote&gt;
&lt;p&gt;References:&lt;/p&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/hpe-container-platform-community/hpecp-python-library&quot;&gt;Github: HPECP Python Library&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://hpe-container-platform-community.github.io/hpecp-python-library/index.html&quot;&gt;HPE Container Platform Python Library Documentation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://pypi.org/project/hpecp/&quot;&gt;HPECP Python Pypi&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;As you can see, HPE Ezmeral Container Platform provides a number of different ways for you to interact with Kubernetes clusters so you can take advantage of the benefits it provides. Just pick your favorite way for your favorite environment. Happy Kubectl!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Calling all developers… Make your voices heard!]]></title><description><![CDATA[survey HPE DEV partners with SlashData for its State of the Developer Nation 2021 survey As technologists, your world is always changing…]]></description><link>https://developer.hpe.com/calling-all-developers-make-your-voices-heard/</link><guid isPermaLink="false">https://developer.hpe.com/calling-all-developers-make-your-voices-heard/</guid><pubDate>Fri, 28 May 2021 06:23:25 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;/img/02-developer-nation-1024.jpg&quot; alt=&quot;survey&quot;&gt;&lt;/p&gt;
&lt;h1&gt;HPE DEV partners with SlashData for its State of the Developer Nation 2021 survey&lt;/h1&gt;
&lt;p&gt;As technologists, your world is always changing. And you want to stay on top of what’s next. To understand these trends and what is important to engineers today, &lt;a href=&quot;http://slashdata.co/&quot;&gt;SlashData&lt;/a&gt; runs its developer survey twice yearly. The survey is expected to reach more than 36,000 developers across 159 countries to better understand who they are, what tools they use, and what they need.&lt;/p&gt;
&lt;p&gt;HPE Dev will once again be joining forces with SlashData as a media partner for the Summer 2021 developer survey. Hewlett Packard Enterprise (HPE), a technology and everything-as-a-service company, understands the importance of developers and how working with them is key to improving the way people live and work. We know that data drives business decisions, and so we encourage all our HPE DEV community members to take this survey and make their voices heard.&lt;/p&gt;
&lt;p&gt;SlashData&apos;s global, independent research gathers data through a detailed survey which covers almost every aspect of a programmer&apos;s life. The survey reaches out to millions of developers to get diversified opinions. It is available in multiple languages and dives deeply into 12 important areas such as web, cloud, IoT, games, ML/AI, and data science. While this in-depth survey may take 20 minutes of your time, your effort will be rewarded by helping to drive change that meets your future needs. You might also be rewarded with a prize, since they run prize drawings worth more than $15,000 USD during every wave.&lt;/p&gt;
&lt;p&gt;The survey covers programming languages and tools, skill sets, and the resources developers use. It also focuses on key technology areas, including cloud, AI, machine learning, and game development. Using the data acquired through this research, SlashData produces “State of the Developer Nation” reports twice a year. You can find these free reports &lt;a href=&quot;https://www.slashdata.co/free-resources?section=subscribe&quot;&gt;here&lt;/a&gt;. The data derived from these reports makes its way back to technology companies to address the current needs of developers around the world. Use these reports yourself to understand how the industry continues to evolve and make the right choices for your business.&lt;/p&gt;
&lt;p&gt;The survey opens on June 9th and ends August 4, 2021. Although it’s promoted by several other media partners, we would grateful if you could use &lt;a href=&quot;https://www.developereconomics.net/?member_id=hpe&quot;&gt;our link&lt;/a&gt; to participate in the survey because it will provide us with specific data on how we can best address your needs. Feel free to forward &lt;a href=&quot;https://www.developereconomics.net/?member_id=hpe&quot;&gt;this link&lt;/a&gt; to customers, partners, colleagues and friends who might also be interested in participating.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE Discover 2021: Don’t Miss these HPE DEV Industry Expert Talks and Hands-on Workshops]]></title><description><![CDATA[HPE Discover 2021 is where the next wave of digital transformation begins, powered by the rise of the Intelligent Edge and the vital data it…]]></description><link>https://developer.hpe.com/hpe-discover-2021-don’t-miss-these-hpe-dev-industry-expert-talks-and-hands-on-workshops/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-discover-2021-don’t-miss-these-hpe-dev-industry-expert-talks-and-hands-on-workshops/</guid><pubDate>Tue, 25 May 2021 17:06:00 GMT</pubDate><content:encoded>&lt;p&gt;HPE Discover 2021 is where the next wave of digital transformation begins, powered by the rise of the Intelligent Edge and the vital data it creates. Spanning three days packed with actionable live and on-demand sessions, HPE Discover is where you’ll learn what Hewlett Packard Enterprise (HPE) offers in a world where software reigns supreme and helps define so many of our experiences today.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://attend.hpe.com/discover2021/email?l=15AC66757307&amp;#x26;EID=78EF62707200&quot;&gt;Register here for HPE Discover 2021.&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Calling all Coders!&lt;/h2&gt;
&lt;p&gt;This year, HPE DEV, the team that supports HPE’s developer community, will be offering a virtual Hack Shack technology session and four hands-on coding workshops. &lt;a href=&quot;https://content.attend.hpe.com/go/virtualplatform.details/?l=1045&amp;#x26;SID=24350&amp;#x26;schid=9807&amp;#x26;locale=en_US&amp;#x26;from=virtualplatform.catalogue_session&amp;#x26;sf=2967&quot;&gt;In the technology session&lt;/a&gt;, HPE customers Sysdig and ORock will discuss how they take advantage of the HPE Ezmeral software platform to provide secure utility-based As-a-Service offerings. This 30-minute roundtable is moderated by key HPE CTO Office executive Robert Christiansen, who works with HPE global clients and partners to deepen relationships and align joint technology efforts that improve the way people live and work.&lt;/p&gt;
&lt;p&gt;The four hands-on workshops will be similar in style to those offered last year, but this time with a twist. In the first 30 minutes, you’ll be introduced to industry experts who’ll give an overview of a specific technology. For the remaining 60 minutes, you’ll be invited to stay to experience a unique, hands-on workshop where you’ll actually get to play with the technology by employing Jupyter Notebooks to facilitate your coding experience.&lt;/p&gt;
&lt;p&gt;Most of the HPE DEV Hack Shack sessions will take place on Day 3. You can find them in the session catalog in one of two ways:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Access the &lt;a href=&quot;https://content.attend.hpe.com/go/virtualplatform.catalogue_session/?l=1045&amp;#x26;sf=2879&amp;#x26;locale=en_US&quot;&gt;HPE DEV courses here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Go to the &lt;a href=&quot;https://content.attend.hpe.com/go/virtualplatform.catalogue_session/?l=1045&amp;#x26;locale=en_US&quot;&gt;Discover 2021 session catalog here&lt;/a&gt; and enter HPE DEV in the Keyword Search. The Hack Shack sessions and workshops will appear below.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Make sure you put these recommended sessions on your calendar:&lt;/p&gt;
&lt;h2&gt;HPE DEV Sessions and Workshops:&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;How HPE Ezmeral Provides Secure Utility-Based As-a-Service Platforms &lt;a href=&quot;https://content.attend.hpe.com/go/virtualplatform.details/?l=1045&amp;#x26;SID=24350&amp;#x26;schid=0&amp;#x26;locale=en_US&amp;#x26;sf=546&quot;&gt;HSW4350&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Dates/Times: June 23, 11:30am-12:00pm PST / June 24, 11:00am-12:30pm CEST&lt;/p&gt;
&lt;p&gt;Industry luminary Robert Christiansen (&lt;a href=&quot;https://twitter.com/rbchristiansen&quot;&gt;@rbchristiansen&lt;/a&gt;) talks with two industry insiders from Sysdig and ORock on how they provide secure container platforms. Alexander Lawrence (&lt;a href=&quot;https://twitter.com/alaw_sd&quot;&gt;@alaw_sd&lt;/a&gt;), Principal Solutions Engineer with Sysdig, describes how they extend security and monitoring on HPE Ezmeral through their SaaS platform, while Matt Plummer, Chief Cloud Architect at ORock Technology (&lt;a href=&quot;https://twitter.com/ORock_Tech&quot;&gt;@ORock_Tech&lt;/a&gt;), discusses their open-source IaaS and PaaS with FedRAMP security built on the HPE Ezmeral platform.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;HPE Ezmeral Data Fabric 101 – Get to Know the Basics Around the Data Fabric &lt;a href=&quot;https://content.attend.hpe.com/go/virtualplatform.details/?l=1045&amp;#x26;SID=24349&amp;#x26;schid=0&amp;#x26;locale=en_US&amp;#x26;sf=547&quot;&gt;HSW4349&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Dates/Times: June 24, 9:00am-10:30am PST / June 25, 11:00am-12:30pm CEST&lt;/p&gt;
&lt;p&gt;Join industry experts—Ted Dunning (&lt;a href=&quot;https://twitter.com/ted_dunning&quot;&gt;@ted_dunning&lt;/a&gt;) and Ellen Friedman (&lt;a href=&quot;https://twitter.com/Ellen_Friedman&quot;&gt;@ellen_friedman&lt;/a&gt;)—for a 30-minute talk on how workflows and data management change when systems are built on a unified data layer with data fabric. A hands-on workshop will follow, where you’ll learn how to create/delete/update volumes, set up disaster recovery mechanisms, and apply security policies.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Kubernetes 101 – Intro to the Kubernetes Concepts Managed by HPE Ezmeral Container Platform &lt;a href=&quot;https://content.attend.hpe.com/go/virtualplatform.details/?l=1045&amp;#x26;SID=24347&amp;#x26;schid=0&amp;#x26;locale=en_US&amp;#x26;sf=548&quot;&gt;HSW4347&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Dates/Times: June 24, 11:00am-12:30pm PST / June 25, 1:00pm-2:30pm CEST&lt;/p&gt;
&lt;p&gt;Walk through the basics of the Kubernetes cluster orchestration system with industry expert, Nigel Poulton (&lt;a href=&quot;https://twitter.com/nigelpoulton&quot;&gt;@nigelpoulton&lt;/a&gt;), Chief Technologist at Kubetrainer.com, and Thomas Phelan, HPE Ezmeral Container Platform CTO. During a following 60-min hands-on workshop, you’ll deploy a containerized application on a cluster, scale its deployment, update the application, and debug it, using info you learned on Kubernetes features and an interactive online tutorial.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;SPIFFE and SPIRE Fundamentals &lt;a href=&quot;https://content.attend.hpe.com/go/virtualplatform.details/?l=1045&amp;#x26;SID=24392&amp;#x26;schid=0&amp;#x26;locale=en_US&amp;#x26;sf=549&quot;&gt;HSW4392&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Dates/Times: June 24, 12:00pm-1:30pm PST / June 25, 2:00pm-3:30pm CEST&lt;/p&gt;
&lt;p&gt;Join this workshop to better understand SPIFFE as a set of open-source standards for securely authenticating software services in dynamic and heterogeneous environments through the use of platform-agnostic, cryptographic identities. Explore SPIRE as an open-source system that implements the SPIFFE specification in a wide variety of environments. Featuring Sunil James of HPE (&lt;a href=&quot;https://twitter.com/sunubunu&quot;&gt;@sunubunu&lt;/a&gt;) and Phil Vachon, Security leader at Bloomberg (&lt;a href=&quot;https://twitter.com/pvachonnyc&quot;&gt;@pvachonnyc&lt;/a&gt;).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Redfish Programming Made Easy and Secure with Ansible and HPE OneView &lt;a href=&quot;https://content.attend.hpe.com/go/virtualplatform.details/?l=1045&amp;#x26;SID=24345&amp;#x26;schid=0&amp;#x26;locale=en_US&amp;#x26;sf=553&quot;&gt;HSW4345&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Dates/Times: June 24, 10:00am-11:30am PST / June 25, 12:00pm-1:30pm CEST&lt;/p&gt;
&lt;p&gt;Learn to securely automate management, monitoring, and configuration of compute nodes. After a 30-min intro by Jeff Hilland, DMTF president, stay for a hands-on workshop using Redfish and Ansible, leveraging HPE OneView Single Sign On. You will learn multiple ways for writing Redfish Ansible playbooks using built-in modules, DMTF, and HPE examples.&lt;/p&gt;
&lt;p&gt;In addition to these HPE DEV Hack Shack sessions, there’s so much more to explore at Discover 2021 for those interested in software; from the HPE Ezmeral Container Platform and Data Fabric to open source projects sponsored by HPE.  For details on AI, ML, and Data Analytics sessions you don’t want to miss, check out this &lt;a href=&quot;https://community.hpe.com/t5/Advancing-Life-Work/HPE-Discover-2021-AI-ML-and-Data-Analytics-sessions-you-don-t/ba-p/7138437#.YMdnm5NKiMK&quot;&gt;blog post&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Live Demos&lt;/h2&gt;
&lt;p&gt;We will also be featuring live, on-location, demos at Chase Center and the Mercedes Formula 1 Factory, scheduled to take place on June 22 or 23 (depending on which region you’re joining from). During the times specified below, HPE will have experts available to help answer your questions.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;AMS: 11:00 AM – 11:45 PM PDT and 11:45AM – 12:30PM PDT on Tuesday (Day 1)&lt;/li&gt;
&lt;li&gt;APJ: 2:30 PM – 3:15PM JST and 3:15PM – 4:30 PM JST on Wednesday (Day 1)&lt;/li&gt;
&lt;li&gt;EMEA: 12:30 PM – 1:15 PM CET and 1:15PM – 2:00PM on Wednesday (Day 1)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Interested in other sessions? We invite you to explore the full line-up of sessions on &lt;a href=&quot;https://content.attend.hpe.com/go/virtualplatform.catalogue/?l=1045&amp;#x26;locale=en_US&quot;&gt;our content catalog&lt;/a&gt; and build your own agenda. You can also view the &lt;a href=&quot;https://content.attend.hpe.com/go/virtualplatform.agenda/?l=1045&amp;#x26;locale=en_US&quot;&gt;agenda for each region here&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Build Your Playlist&lt;/h2&gt;
&lt;p&gt;Want to build your own Playlist? Here’s how. Once registered for HPE Discover, &lt;a href=&quot;https://content.attend.hpe.com/go/virtualplatform.landing/?l=1045&amp;#x26;locale=en_US&quot;&gt;log in&lt;/a&gt; to the virtual platform to view each of the keynotes, sessions, demos, etc. You can filter based on content type, areas of interest, or keyword search, etc. Then simply click on the “+” icon to add the item to your My Playlist.  You can also download your Playlist into your preferred personal calendar.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;a href=&quot;https://attend.hpe.com/discover2021/email?l=15AC66757307&amp;#x26;EID=78EF62707200&quot;&gt;Register now.&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;We look forward to seeing you virtually at HPE Discover 2021!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Help Build the Hospital of the Future and Be Eligible for Prizes!]]></title><description><![CDATA[Hewlett Packard Enterprise (HPE), the multinational enterprise information technology company based in Houston, Texas,  is proud to partner…]]></description><link>https://developer.hpe.com/help-build-the-hospital-of-the-future-and-be-eligible-for-prizes/</link><guid isPermaLink="false">https://developer.hpe.com/help-build-the-hospital-of-the-future-and-be-eligible-for-prizes/</guid><pubDate>Tue, 11 May 2021 14:58:32 GMT</pubDate><content:encoded>&lt;p&gt;Hewlett Packard Enterprise (HPE), the multinational enterprise information technology company based in Houston, Texas,  is proud to partner with &lt;a href=&quot;https://www.texaschildrens.org/departments/us-news-world-report&quot;&gt;Texas Children’s Hospital&lt;/a&gt;, one of the top-ranked children’s hospitals in the US, in its &lt;a href=&quot;https://www.hackerearth.com/challenges/hackathon/texas-childrens-hospital-healthcare-hackathon/&quot;&gt;inaugural innovation-focused hackathon&lt;/a&gt;. To celebrate the groundbreaking of its new hospital, Texas Children’s is hosting a two-week long &lt;a href=&quot;https://www.hackerearth.com/challenges/hackathon/texas-childrens-hospital-healthcare-hackathon/&quot;&gt;virtual Healthcare Hackathon&lt;/a&gt; from May 14th through May 24th, 2021. This virtual event is designed to inspire and draw attention to the need for new ways of thinking and the use of technology in healthcare in treatments for women and children.&lt;/p&gt;
&lt;p&gt;Through this hackathon, the hospital is seeking innovative ideas that incorporate technology to address its biggest day-to-day challenges. Showcase your innovative, tech-based solutions to win awesome prizes! Coders are encouraged to work individually or in teams of 2-5 people to build solutions that align to one or more of the following &lt;a href=&quot;https://www.hackerearth.com/challenges/hackathon/texas-childrens-hospital-healthcare-hackathon/&quot;&gt;themes&lt;/a&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Food Service and Delivery&lt;/strong&gt; – How do we reimagine the food service and delivery experience to ensure patients, families, employees and guests get the nutritious meals they need?&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Meeting People Where They Are&lt;/strong&gt; – How do we transform traditional hospital experiences/processes so they are more convenient, personalized, and equitable?&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Touchless Experience&lt;/strong&gt; – How can we convert physical interactions into touchless, yet still personal and trustworthy, digital healthcare experiences?&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;AR/VR in Hospitals&lt;/strong&gt; – How can we implement extended reality experiences (XR) to reimagine the doctor/patient experience, training, and patient/family/care team communication and coordination?&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Evolution of the Child Life Experience&lt;/strong&gt; – What innovative experiences can Child Life specialists use to help patients and their families cope, connect, socialize, and heal? &lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;&lt;strong&gt;For Kids By Kids&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;As part of the hackathon, Texas Children’s is offering a special category exclusively for coders under 18 years of age called &lt;a href=&quot;https://www.hackerearth.com/challenges/hackathon/texas-childrens-hospital-healthcare-hackathon/custom-tab/for-kids-by-kids/#For%20Kids%20By%20Kids&quot;&gt;For Kids By Kids&lt;/a&gt;.  Using Scratch, a free visual programming language that’s designed to teach kids how to program through drag and drop blocks of code, young coders are invited to animate patient journey’s at Texas Children’s Hospital. Examples of a patient’s journey include &lt;a href=&quot;https://s3-ap-southeast-1.amazonaws.com/he-public-data/discharge-pathw_081420187d3b3ef.pdf&quot;&gt;what to expect when being discharged from the hospital&lt;/a&gt; or what to expect when going to see your pediatrician. Please note that coding classes and camps are welcome! Once done with the animation, a recording of the animation and/or any Scratch code the young developer would like to share should be submitted for the judges’ evaluation.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Judging&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Submissions are due by May 24, 2021 11:55 PM CDT. Every submission will be evaluated by a panel of Texas Children’s Hospital experts and at least one of their sponsors, who include HPE, NVIDIA, Mark III Systems, the Butler Bros., Google Cloud, T-Mobile for Business, Unity, and HEB Digital. Winners will be announced on June 9th, 2021. There are up to $10,000 USD in prizes to be given away, including an NVIDIA Shield and leather duffle bags. Time is short! &lt;a href=&quot;https://www.hackerearth.com/challenges/hackathon/texas-childrens-hospital-healthcare-hackathon/&quot;&gt;Register on the bottom right of this page&lt;/a&gt; for the hackathon.&lt;/p&gt;
&lt;p&gt;There is no better mission than the health of our children. That is why HPE has teamed up with Texas Children’s Hospital to identify innovative solutions to be a part of their new state-of-the-art hospital in Austin, TX. Sign up for the &lt;a href=&quot;https://developer.hpe.com/newsletter-signup/&quot;&gt;HPE DEV Newsletter&lt;/a&gt; to stay abreast of news on hackathons like this in the future, as well as learn more on different topics through our &lt;a href=&quot;https://developer.hpe.com/blog/&quot;&gt;blog&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Application Modernization with the Application Workbench]]></title><description><![CDATA[Editor’s Note – HPE Ezmeral Container Platform is now HPE Ezmeral Runtime Enterprise. For more information on why the name was changed…]]></description><link>https://developer.hpe.com/application-modernization-with-the-application-workbench/</link><guid isPermaLink="false">https://developer.hpe.com/application-modernization-with-the-application-workbench/</guid><pubDate>Mon, 10 May 2021 08:22:34 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note – HPE Ezmeral Container Platform is now HPE Ezmeral Runtime Enterprise&lt;/strong&gt;. For more information on why the name was changed, please &lt;a href=&quot;https://community.hpe.com/t5/HPE-Ezmeral-Uncut/HPE-Ezmeral-Container-Platform-is-now-HPE-Ezmeral-Runtime/ba-p/7151720#.YW7nOxrMKM8&quot;&gt;click here&lt;/a&gt;.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;One of the most significant issues facing enterprises in their journey towards digital transformation is the challenge of application modernization. Did you know that 70% of the Global IT budget is spent on legacy application maintenance?  In fact, 7 in 10 companies today struggle with legacy application maintenance while they tackle their digital transformation. Consider how counterproductive it is to spend that much time, energy, and money on something that doesn’t really help them move forward.&lt;/p&gt;
&lt;p&gt;In this post, I’ll discuss the different approaches one can take to modernize an application and how the HPE Ezmeral Application Workbench can help.&lt;/p&gt;
&lt;h2&gt;What is Application Modernization?&lt;/h2&gt;
&lt;p&gt;Application modernization is the process of taking legacy applications and the platforms they run on and making them new again by replacing or updating each with modern features and capabilities. It includes changing the underlying application architecture from a monolithic to a distributed model by taking advantage of the integration and automation implicit in DevOps practices. It also often means that the architecture and source code are updated through the use of modern programming languages in order to support containers and microservices and can take advantage of built-in security and storage features.&lt;/p&gt;
&lt;p&gt;One could equate application modernization to updating the design of a car. For instance, if you were to compare a vintage 1967 Ford Mustang to a modern 2021 version, the 1967 would not have disk brakes, ABS, fuel injection, air bags, shoulder belts, crush zones and all the other things that make it a modern, safe, reliable, and economical car.&lt;/p&gt;
&lt;p&gt;Truly, there is very little you can do to a 1967 Mustang to bring it up to the standards of a 2021 car. To transform it, you would need to change the 1967 car’s base architecture. Another option would be to completely replace the older car with a different brand that boasts a modern architecture, like Tesla. But that would probably change a lot about how you drive. If you happen to like Ford and want to stick with it because you’re familiar with how their cars work, that approach may not work well for you.&lt;/p&gt;
&lt;p&gt;This same idea applies to modernizing software applications. You cannot simply update the underlying compute and storage infrastructure and expect an application to automatically behave as if it had modern features. An application that was built for a mainframe environment is very different from applications that are built today, applications that are optimized for distributed environments and developed through the use of DevOps practices. To have an application work efficiently and securely across hybrid clouds, it needs to be modernized.&lt;/p&gt;
&lt;h2&gt;The Challenges of Modernizing Legacy Apps&lt;/h2&gt;
&lt;p&gt;Enterprises face a variety of challenges as they address application modernization, such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Isolating applications from their infrastructure is often a very time consuming and manual process.&lt;/li&gt;
&lt;li&gt;Legacy, monolithic systems are not designed with distributed hardware in mind and are difficult to break apart.&lt;/li&gt;
&lt;li&gt;Legacy apps lack security integrations.&lt;/li&gt;
&lt;li&gt;They also lack tight DevOps integration.&lt;/li&gt;
&lt;li&gt;Lastly, budgets and costs are difficult to estimate, test, and consolidate across legacy applications.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;There are different approaches one can use to migrate legacy applications into modern architectures. Most of these solutions are offered in the &lt;a href=&quot;https://www.hpe.com/us/en/solutions/container-platform.html&quot;&gt;HPE Ezmeral Container Platform&lt;/a&gt; along with the &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;HPE Ezmeral Data Fabric&lt;/a&gt;. Depending on the customer’s use case, application, cost, and time requirements, HPE can help smooth the way on this journey.&lt;/p&gt;
&lt;h2&gt;What are the Options for App Modernization?&lt;/h2&gt;
&lt;p&gt;There are several different paths organizations take in their attempt to modernize their applications:&lt;/p&gt;
&lt;img src=&quot;/img/1-gunna.png&quot; width=&quot;600&quot; height=&quot;359&quot; class=&quot;center&quot;&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Rehost&lt;/strong&gt; – This simply means that a legacy application is moved to a modern compute and storage platform, either on premise or in the cloud, without altering the original code. In this case, the application does not magically acquire modern features. Going back to our car analogy, it would be like giving a 1967 Mustang new tires. You can’t expect that it would now also have all the features found in a 2021 Mustang. Rehosting an application does not modernize it.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Refactor&lt;/strong&gt;, &lt;strong&gt;Redesign&lt;/strong&gt; and &lt;strong&gt;Rebuild&lt;/strong&gt; – These options involve rewriting portions of the applications from scratch, while preserving the original scope and specifications. Using these methods will modernize an application by updating the source code to a newer programming language, the underlying data to modern formats, and the compute and storage infrastructure to support the newly modernized application.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Replace&lt;/strong&gt; – Another option is to replace the application altogether. This option eliminates the original application completely and replaces it with a new application better suited to an organization&apos;s current business needs.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;HPE Ezmeral has a part to play in each of these scenarios to assist in your application modernization goals. For instance, if you decide to &lt;strong&gt;Rehost&lt;/strong&gt;, the &lt;a href=&quot;https://www.hpe.com/us/en/solutions/container-platform.html&quot;&gt;HPE Ezmeral Container Platform&lt;/a&gt; offers a central control plane for virtual or physical compute resources located on premise, in the cloud, or at the edge. It includes full support for standard Kubernetes (K8s) orchestration and the &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;HPE Ezmeral Data Fabric&lt;/a&gt;, which powers the data plane with a global namespace for data from core to edge – all wrapped in a secure, enterprise-class, highly available platform.&lt;/p&gt;
&lt;p&gt;For those who wish to &lt;strong&gt;Refactor&lt;/strong&gt; their code, HPE Ezmeral smooths the path to do so. There are many different resource management types for deploying an application on Kubernetes, and all of these are supported by the HPE Ezmeral Container Platform. You can use different options for refactoring or rebuilding the app, depending on what best matches your organization’s needs. A key feature of the HPE Ezmeral Container Platform is its use of KubeDirector, an open source Kubernetes custom controller that addresses stateful, scale-out application deployment in standard K8s clusters. This approach enables transparent integration with K8s user/resource management and existing K8s clients and tools.&lt;/p&gt;
&lt;p&gt;HPE Ezmeral also offers the &lt;a href=&quot;https://docs.containerplatform.hpe.com/53/app-workbench-5-1/getting-started/AWB51_App_Workbench.html&quot;&gt;HPE Ezmeral Application Workbench&lt;/a&gt; for those who wish to &lt;strong&gt;refactor&lt;/strong&gt; their code. The Workbench helps you rebuild or redesign an application’s architecture while preserving its scope and specifications. Once you have built your new application, it can be easily deployed on the HPE Ezmeral Container Platform and leverage the HPE Ezmeral Data Fabric.&lt;/p&gt;
&lt;p&gt;Finally, for those who prefer to &lt;strong&gt;Replace&lt;/strong&gt; their application, HPE Ezmeral software is tested with a large partner ecosystem. Our partners offer modern applications through the &lt;a href=&quot;https://www.hpe.com/us/en/software/marketplace.html&quot;&gt;HPE Ezmeral Marketplace&lt;/a&gt; that can be used to replace legacy applications.&lt;/p&gt;
&lt;h2&gt;What is HPE Ezmeral Application Workbench?&lt;/h2&gt;
&lt;p&gt;HPE Ezmeral Application Workbench is a free, stand-alone, Python-based software development kit (SDK) that allows for the quick development of applications for Kubernetes, AI/ML, and other use cases, either from scratch or from existing container images. The HPE Ezmeral Application Workbench takes advantage of KubeDirector, an open-source project from HPE, which acts as a custom controller that allows you to easily bring your app into a standard K8s cluster. KubeDirector makes it easy to run complex, stateful, scale-out application clusters on Kubernetes.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/2-gunna.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;HPE Ezmeral Application Workbench features a simple-to-use, workflow-based, web graphical user interface (WebUI) to help you build modern applications visually. Use it to update legacy source code, build a new, custom Docker image, and convert your legacy application into a microservice-based, modern app. After installing the App Workbench on your workstation, you can &lt;a href=&quot;https://docs.containerplatform.hpe.com/53/app-workbench-5-1/getting-started/AWB51_Overview.html&quot;&gt;build your application&lt;/a&gt; using just a few simple steps.&lt;/p&gt;
&lt;p&gt;Key features of the HPE Ezmeral Application Workbench include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;KubeDirector Application support&lt;/li&gt;
&lt;li&gt;Public and private Docker registry support&lt;/li&gt;
&lt;li&gt;Source-to-Image capabilities, allowing users to transform monolithic application source code to executable Docker images&lt;/li&gt;
&lt;li&gt;A feature-rich application development workspace, including the ability to:
&lt;ul&gt;
&lt;li&gt;Edit Dockerfiles, HTML, JSON, Markdown, Python, SH, XML, and YAML files in one place.&lt;/li&gt;
&lt;li&gt;Organize configuration scripts and application-specific startup scripts in one place.&lt;/li&gt;
&lt;li&gt;Output a fully formatted JSON or YAML file to apply to your K8s cluster with the &lt;a href=&quot;https://kubedirector.io/&quot;&gt;KubeDirector operator&lt;/a&gt; installed.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The HPE Ezmeral Application Workbench allows you to point to a repository, bring in a Docker image, modify config scripts, and create your own startup script. With the help of the Workspace editor in the App Workbench tool set, users can create Dockerfiles by splitting monolithic application components into a microservice-based or distributed model. This feature also helps with containerization refactoring, where monolithic apps are moved into containers with minimal modifications, enabling users to incorporate cloud-native features and improve portability. Once you’re done, all you need to do is apply the outputted JSON or YAML file with a standard kubectl command: &lt;strong&gt;kubectl apply -f &amp;#x3C;yourapp.json/yaml&gt;&lt;/strong&gt;. The HPE Ezmeral Application Workbench delivers your app right into your HPE Ezmeral Container Platform App catalog.&lt;/p&gt;
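&lt;p&gt;To make that last step concrete, here is a minimal sketch of the kind of KubeDirector custom resource you would apply with kubectl. The application name, role name, and resource sizes below are hypothetical placeholders, and the API group/version reflects recent KubeDirector releases, so treat this as an illustration rather than a file generated by the Workbench itself.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;# Illustrative KubeDirectorCluster resource; names and sizes are placeholders
apiVersion: kubedirector.hpe.com/v1beta1
kind: KubeDirectorCluster
metadata:
  name: my-modernized-app
spec:
  app: my-modernized-app        # the KubeDirectorApp entry registered in the catalog
  roles:
    - id: controller            # a role defined by the application metadata
      members: 1
      resources:
        requests:
          cpu: &quot;1&quot;
          memory: 2Gi
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Applied with &lt;strong&gt;kubectl apply -f my-modernized-app.yaml&lt;/strong&gt;, a resource like this asks the KubeDirector operator to create and track the pods, services, and storage for each role on your behalf.&lt;/p&gt;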
&lt;h2&gt;Advantages of using the HPE Ezmeral Application Workbench&lt;/h2&gt;
&lt;p&gt;The HPE Ezmeral Application Workbench offers support to build KubeDirector applications that can be deployed on a CNCF-certified K8s cluster with the KubeDirector operator add-on. KubeDirector is an open-source project that uses the Kubernetes Custom Resource Definition (CRD) framework to enable transparent integration with Kubernetes user/resource management, allowing you to deploy stateful applications on Kubernetes. It leverages native Kubernetes API extensions and acts as a scheduler and launcher of applications on top of the Kubernetes platform.&lt;/p&gt;
&lt;p&gt;KubeDirector empowers the application, doing a lot of the work in the background for you. You don’t have to write Go code or an operator for it; it will use what you have. And if you don’t have an operator, and you have a legacy application that needs statefulness and persistent directories, KubeDirector can assist. You can think of it as a custom operator. Using KubeDirector is a lot easier than writing raw YAML, requiring fewer files and less manual definition, and more powerful than using Helm charts, which don’t track the state of a node.&lt;/p&gt;
&lt;p&gt;Because of KubeDirector, applications built using the HPE Ezmeral Application Workbench can be easily deployed on the HPE Ezmeral Container Platform, which allows them to leverage the HPE Ezmeral Data Fabric for a full end-to-end Kubernetes application experience. Using the HPE Ezmeral Application Workbench accelerates time to value for all your application modernization projects, including big data, artificial intelligence / machine learning (AI/ML), development and operations (DevOps), and continuous integration / continuous deployment (CI/CD).&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Legacy applications can hold you back in your digital transformation because they are built on outdated hardware and design principles. The success of your digital transformation depends on your ability to innovate and accelerate developer productivity at scale through newer, cloud-native technologies.&lt;/p&gt;
&lt;p&gt;Don’t let legacy applications hold you back in your digital transformation. Modernize your legacy applications in a cost-effective way, improve developer productivity, and accelerate time to value for your business with &lt;a href=&quot;https://www.hpe.com/us/en/ezmeral.html&quot;&gt;HPE Ezmeral&lt;/a&gt;. For more information, access &lt;a href=&quot;https://docs.containerplatform.hpe.com/53/app-workbench-5-1/getting-started/AWB51_App_Workbench.html&quot;&gt;HPE Ezmeral Container Platform Documentation on App Workbench&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;For more articles designed to help you in your digital transformation, stay tuned to the &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE DEV blog&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Introducing the Verified HPE OneView Terraform Provider]]></title><description><![CDATA[Hewlett Packard Enterprise (HPE) and HashiCorp have worked together to verify the new HPE OneView Terraform Provider. The new provider is…]]></description><link>https://developer.hpe.com/introducing-the-verified-hpe-oneview-terraform-provider/</link><guid isPermaLink="false">https://developer.hpe.com/introducing-the-verified-hpe-oneview-terraform-provider/</guid><pubDate>Fri, 07 May 2021 05:00:00 GMT</pubDate><content:encoded>&lt;!--StartFragment--&gt;
&lt;p&gt;Hewlett Packard Enterprise (HPE) and HashiCorp have worked together to verify the new HPE OneView Terraform Provider. The new provider is based on Terraform v0.13. This enables users to take full advantage of the improved infrastructure automation capabilities available in the latest versions of Terraform. The provider gives users the ability to automate infrastructure through &lt;a href=&quot;https://www.hpe.com/us/en/integrated-systems/software.html&quot;&gt;HPE OneView&lt;/a&gt;, which uses software-defined intelligence via a template-driven approach to automate the deployment, provisioning, updating, and integration of resources, such as compute, storage, and networking infrastructure.&lt;/p&gt;
&lt;p&gt;HashiCorp verification of the HPE OneView Terraform Provider based on Terraform v0.13 permits HPE code to be made available in the &lt;a href=&quot;https://registry.terraform.io/providers/HewlettPackard/oneview/latest&quot;&gt;Terraform Registry&lt;/a&gt;. The registry allows the Terraform Provider for HPE OneView to be installed directly from the registry maintained by HashiCorp by declaring the provider source attribute in Terraform. The verification process also ensures that provider code is from a reliable source, making automated installation a secure process. In the case of the HPE OneView Provider, GPG (GNU Privacy Guard) is used to digitally sign the HPE code.&lt;/p&gt;
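&lt;p&gt;As an illustration of that source attribute, the hypothetical configuration fragment below declares the provider using its registry address. The address matches the Terraform Registry listing linked above; everything else, including any version constraint you add, is a placeholder to adapt to your environment.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;terraform {
  required_providers {
    oneview = {
      # Provider source address as listed in the public Terraform Registry
      source = &quot;HewlettPackard/oneview&quot;
      # Optionally pin a version constraint to a release you have tested
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With a block like this in place, running &lt;strong&gt;terraform init&lt;/strong&gt; downloads the provider from the registry and verifies its signature before it is used.&lt;/p&gt;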
&lt;p&gt;Terraform v0.13 is a major update that includes dozens of improvements and features spanning the breadth and depth of Terraform’s functionality. One of the major changes in Terraform 0.13 is HCL2, the second generation of HashiCorp Configuration Language. HCL2 introduces Rich Data Types as a means to describe more complex structures with your Terraform Modules. &lt;/p&gt;
&lt;p&gt;The Terraform Provider for HPE OneView now uses Go Modules for dependency management and vendoring. The Terraform Provider for HPE OneView has also been upgraded to the Terraform Plugin SDK. More details about the Terraform Plugin SDK can be found &lt;a href=&quot;https://www.terraform.io/docs/extend/guides/v1-upgrade-guide.html&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Terraform Provider for HPE OneView supports several installation paths. It can be installed from Source, a Docker container, or the Terraform Registry. HPE has produced an &lt;a href=&quot;https://github.com/HewlettPackard/terraform-provider-oneview/tree/master/Migration%20Support&quot;&gt;Installation and User Guide&lt;/a&gt; to simplify migration from HPE OneView Providers based on previous versions of Terraform. The guide provides step-by-step instructions for each installation path.&lt;/p&gt;
&lt;p&gt;For more information:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/terraform-provider-oneview&quot;&gt;Code Repository and Examples&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-terraform&quot;&gt;HPE OneView SDK Docker Image for Terraform&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/terraform-provider-oneview/tree/master/Migration%20Support&quot;&gt;Installation and User Guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.hpe.com/us/en/integrated-systems/software.html&quot;&gt;HPE OneView&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://registry.terraform.io/providers/HewlettPackard/oneview/latest&quot;&gt;Terraform Registry&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;!--EndFragment--&gt;</content:encoded></item><item><title><![CDATA[Using Python and ODBC to connect HPE NonStop SQL/MX]]></title><description><![CDATA[Hello World! In this tutorial, I will show you how Python can execute queries on the HPE NonStop SQL/MX Database using pyodbc and the…]]></description><link>https://developer.hpe.com/python-how-to-use-odbc-to-connect-hpe-nonstop-sql-mx/</link><guid isPermaLink="false">https://developer.hpe.com/python-how-to-use-odbc-to-connect-hpe-nonstop-sql-mx/</guid><pubDate>Tue, 04 May 2021 06:27:49 GMT</pubDate><content:encoded>&lt;p&gt;Hello World! In this tutorial, I will show you how Python can execute queries on the HPE NonStop SQL/MX Database using pyodbc and the NonStop SQL/MX ODBC driver.&lt;/p&gt;
&lt;p&gt;This tutorial assumes that NonStop ODBC 3.x Unicode driver has already been installed. Check out the &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docId=a00045523en_us&amp;#x26;docLocale=en_US&quot;&gt;NonStop ODBC/MX Client Drivers User Guide&lt;/a&gt; for more information on the driver.&lt;/p&gt;
&lt;p&gt;This tutorial also assumes that on your host, NonStop SQL/MX has been installed, MXCS is running, and a MXCS data source has been added and started. Check with your administrator for the IP address, port number etc. (If you’re the administrator check out this &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docLocale=en_US&amp;#x26;docId=emr_na-a00090054en_us&quot;&gt;manual&lt;/a&gt;.)&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/shaniceabigail/python-odbc-nonstop-sqlmx&quot;&gt;Link to source code.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Let’s get started!&lt;/p&gt;
&lt;h1&gt;Getting Started&lt;/h1&gt;
&lt;h2&gt;Python and pip&lt;/h2&gt;
&lt;p&gt;First, &lt;a href=&quot;https://www.python.org/downloads/&quot;&gt;download Python,&lt;/a&gt; if you have not already done so. You can check to see if your machine already has Python installed by running the command below in the Windows command prompt. It should return Python with its version number, if it has been installed properly.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;C:\&gt; python --version
Python 3.9.0
C:\&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, install pip (pip is the Python package manager that we will be using). Download &lt;a href=&quot;https://bootstrap.pypa.io/get-pip.py&quot;&gt;get-pip.py&lt;/a&gt; on your laptop. Navigate to the folder containing the file and run the following command.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;C:\Downloads&gt; python get-pip.py
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once it’s complete, double check to see if pip has been installed by running the following command in Windows.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;C:\&gt; py -m pip --version
pip 20.3.1 from C:\Miniconda3\lib\site-packages\pip (python 3.9)
C:\&gt; 
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Installing pyodbc&lt;/h2&gt;
&lt;p&gt;pyodbc is an open source Python module that makes it simple to access ODBC databases. It implements the &lt;a href=&quot;https://www.python.org/dev/peps/pep-0249&quot;&gt;DB API 2.0&lt;/a&gt; specification (the standard interface for Python database access), but is packed with even more Pythonic convenience. TL;DR: it helps you access databases that use ODBC drivers.&lt;/p&gt;
&lt;p&gt;You can read more about pyodbc on its &lt;a href=&quot;https://github.com/mkleehammer/pyodbc/wiki&quot;&gt;Github wiki page&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Use pip to install pyodbc.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;C:\&gt; pip install pyodbc
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Configuring your ODBC&lt;/h2&gt;
&lt;p&gt;In order for pyodbc to recognize the ODBC driver and data source to use, it will check with the ODBC Data Source Administrator on your Windows machine. Here’s how you can configure a new data source that will use the HPE NonStop SQL/MX ODBC driver.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Search and open the ODBC Data Source Administrator on your machine.&lt;/li&gt;
&lt;li&gt;Select the “Add” button to create a new data source.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;https://miro.medium.com/max/594/1*PWpQ3yfwfB08ITElY9IHRQ.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;ODBC Data Source Administrator&lt;/h3&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Select the HPE NonStop™  ODBCMX 3.x Unicode driver, and click “Finish”.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;ODBC Data Source Administrator — Create New Data Source&lt;/h3&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;A new window should pop up. Write a Data Source Name for this data source that you want to connect to. Note: The data source names &lt;strong&gt;must match&lt;/strong&gt; between those defined to MXCS on the database server and the client PCs; otherwise the connection will &lt;strong&gt;FAIL.&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;https://miro.medium.com/max/563/1*n48eArrYZ1moeC432v2gZg.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Data Source Name and Description&lt;/h3&gt;
&lt;ol start=&quot;5&quot;&gt;
&lt;li&gt;Insert the IP address of the NonStop SQL/MX database, as well as the port number that has been opened up for connections.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;https://miro.medium.com/max/564/1*4FWFtcvDezDej8zjf90jhg.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;IP address and Port number&lt;/h3&gt;
&lt;ol start=&quot;6&quot;&gt;
&lt;li&gt;Insert the catalog and schema that you want to connect to.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;https://miro.medium.com/max/564/1*EPl5NDJsUHZJd6PI-U4eRA.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Catalog and Schema&lt;/h3&gt;
&lt;ol start=&quot;7&quot;&gt;
&lt;li&gt;Leave the “Translate DLL” portion (DLL Name and Option), and the Localization (Replacement Character) blank for now.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;https://miro.medium.com/max/564/1*7BZPU6fI38qaTcXR6IrIag.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Leave blank&lt;/h3&gt;
&lt;ol start=&quot;8&quot;&gt;
&lt;li&gt;We will not be doing any tracing in this data source, so leave the settings as the default, and click finish.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;https://miro.medium.com/max/564/1*DtYFoVsOh4fTpAHwG1n01w.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Test the connection, and click “OK”. You should see that the data source has been added to the list.&lt;/p&gt;
&lt;p&gt;Alright, now it’s time to code!&lt;/p&gt;
&lt;h1&gt;The code:&lt;/h1&gt;
&lt;h2&gt;The setup&lt;/h2&gt;
&lt;p&gt;Create a new .py file in any of your favourite text editors — mine is VSCode.&lt;/p&gt;
&lt;p&gt;Import the Python package into the script.&lt;/p&gt;
&lt;p&gt;Add your Data Source Name, UID (User ID) and Password (PWD) in the fields.&lt;/p&gt;
&lt;p&gt;Finally, set the decoding and encoding parameters for the connection. These are database specific. NonStop SQL/MX supports “iso-8859-1”, but this varies with the database you’re using. (We set this up for good measure — so copy/paste the parameters from the code below for &lt;strong&gt;NonStop SQL/MX.&lt;/strong&gt;)&lt;/p&gt;
&lt;p&gt;This is what you should have so far.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import pyodbc
# Connect using the data source name configured in the ODBC Data Source Administrator
conn = pyodbc.connect(&apos;DSN=[DATA SOURCE NAME];UID=[USER];PWD=[PASSWORD]&apos;)
# Match the character encoding that NonStop SQL/MX expects
conn.setdecoding(pyodbc.SQL_CHAR, encoding=&apos;iso-8859-1&apos;)
conn.setdecoding(pyodbc.SQL_WCHAR, encoding=&apos;iso-8859-1&apos;)
conn.setencoding(encoding=&apos;iso-8859-1&apos;)
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Now onto executing an SQL query in the database.&lt;/h2&gt;
&lt;p&gt;Create a cursor variable and execute the SQL statement that you would like to run against your database.&lt;/p&gt;
&lt;p&gt;Note that &lt;strong&gt;YOU HAVE TO COMMIT THE TRANSACTION&lt;/strong&gt; if you run an insert or update statement in the Python script. You can place the commit at the end of the set of updates or inserts.&lt;/p&gt;
&lt;p&gt;Committing the transaction is how we make sure that the set of statements is properly executed and data integrity is maintained.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;cursor = conn.cursor()
cursor.execute(&apos;INSERT INTO CATALOG.SCHEMA.TABLE VALUES (VALUE1, VALUE2)&apos;)
# makes sure that insert statement is a committed transaction in NonStop SQL/MX database
conn.commit() 
&lt;/code&gt;&lt;/pre&gt;
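&lt;p&gt;If the values you insert come from Python variables rather than literals, a common pyodbc pattern is to pass them as parameters instead of building the SQL string by hand. The short sketch below is an illustrative addition to the script above; the table name and the sample rows are placeholders for whatever your own schema uses.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Illustrative parameterized inserts; the ? placeholders keep values out of the SQL text
new_rows = [(1, &apos;first value&apos;), (2, &apos;second value&apos;)]   # placeholder data
for row_id, label in new_rows:
    cursor.execute(&apos;INSERT INTO CATALOG.SCHEMA.TABLE VALUES (?, ?)&apos;, row_id, label)

# Commit once at the end so the whole batch of inserts is committed together
conn.commit()
&lt;/code&gt;&lt;/pre&gt;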
&lt;h2&gt;Seeing the result / select statement&lt;/h2&gt;
&lt;p&gt;You can execute the select statement using the cursor and print the values in the cursor.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# prints table to make sure that data was updated
cursor.execute(&apos;SELECT * FROM CATALOG.SCHEMA.TABLE&apos;)
for row in cursor:    
    print(row)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And there you have it!&lt;/p&gt;
&lt;p&gt;The Python script should be able to insert, update, create, and select. Alright, that’s it for now. I hope you’ll have fun coding with the NonStop SQL/MX database using information you learned in this tutorial!&lt;/p&gt;
&lt;p&gt;For more interesting posts and tutorials, keep checking back on the &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE DEV blog&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[New Look. New Tools]]></title><link>https://developer.hpe.com/2021-June-01/</link><guid isPermaLink="false">https://developer.hpe.com/2021-June-01/</guid><pubDate>Mon, 03 May 2021 06:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[New look. New brains. All the tools!]]></title><description><![CDATA[The HPE DEV team is excited to present its revamped HPE Developer Community web portal. Featuring a fresh look and feel, with easier…]]></description><link>https://developer.hpe.com/new-look-new-brains-all-the-tools/</link><guid isPermaLink="false">https://developer.hpe.com/new-look-new-brains-all-the-tools/</guid><pubDate>Fri, 16 Apr 2021 11:24:13 GMT</pubDate><content:encoded>&lt;center&gt;&lt;img src=&quot;/img/full-reveal-top-of-page-image1.jpg&quot; width=&quot;800&quot; height=&quot;453&quot;&gt;&lt;/center&gt; 
&lt;p&gt;The HPE DEV team is excited to present its revamped &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE Developer Community web portal&lt;/a&gt;. Featuring a fresh look and feel, with easier navigation, it provides the resources you need to design and build software experiences that harness the most value from your data. With a new backend system and new design, it’s easier to use and contribute to.&lt;/p&gt;
&lt;h2&gt;A rich set of tools&lt;/h2&gt;
&lt;p&gt;A key area of the portal is our &lt;a href=&quot;https://developer.hpe.com/platforms&quot;&gt;Platforms section&lt;/a&gt;. Here, you can find APIs, GitHub repositories, and many of the other resources we make available for developers, designers, data scientists, and architects. We host numerous platforms here, including the HPE Ezmeral Container Platform, HPE Ezmeral Data Fabric, HPE GreenLake, SPIFFE and SPIRE projects, Chapel, Grommet, Aruba Developer Hub, HPE Nimble Storage, HPE OneView, iLO RESTful API, and many others.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/platforms-3-image2.jpg&quot; width=&quot;800&quot; height=&quot;225&quot;&gt;&lt;/center&gt; 
&lt;p&gt;Make sure you check out the new &lt;a href=&quot;https://developer.hpe.com/skillup&quot;&gt;Skill Up&lt;/a&gt; section. It provides you with easy access to our popular &lt;a href=&quot;https://developer.hpe.com/blog/munch-and-learn&quot;&gt;Munch &amp;#x26; Learn Technology Talks&lt;/a&gt;, &lt;a href=&quot;/hackshack/workshops&quot;&gt;Workshops-on-Demand&lt;/a&gt;, and other learning opportunities.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/skill-up-v2-image3.jpg&quot; width=&quot;800&quot; height=&quot;427&quot;&gt;&lt;/center&gt; 
&lt;p&gt;Blog posts continue to be a staple of the portal, with many of the more recent posts covering topics like AI, machine learning, how to stream data and capture it, and how to efficiently manage your hybrid cloud environment. The new &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;blog site&lt;/a&gt; makes it easier to find the articles you’re looking for. We believe that sharing expertise is a great way to move technology forward and invite you to browse through our extensive library of tutorials and articles to learn new ways to do things.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/blogs-v2-image4.jpg&quot; width=&quot;800&quot; height=&quot;518&quot;&gt;&lt;/center&gt; 
&lt;h2&gt;Easy access&lt;/h2&gt;
&lt;p&gt;Our new portal provides easy access to the &lt;a href=&quot;/hackshack/&quot;&gt;HPE DEV Hack Shack&lt;/a&gt;, a place for fun and learning.  During events, the Hack Shack is a place where you can connect with experts in HPE and open source technologies and learn more from the teams behind these products.  It’s where attendees can come to participate in coding challenges, hands-on workshops, and entertaining games. Afterwards, it extends the event experience by allowing you to continue to revisit replays of workshops and explore the hands-on workshops at your own pace.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/home-page-2-bis.png&quot; width=&quot;627&quot; height=&quot;720&quot;&gt;&lt;/center&gt; 
&lt;p&gt;From the home page, you can explore the HPE Design System, a set of resources for designing great apps and websites that integrate seamlessly with HPE platforms. You can also jump from there to our &lt;a href=&quot;/hackshack/workshops&quot;&gt;HPE DEV Workshops-on-Demand&lt;/a&gt; and our monthly community meetups, the &lt;a href=&quot;https://developer.hpe.com/campaign/munch-and-learn/&quot;&gt;Munch &amp;#x26; Learn Technology Talks&lt;/a&gt;, where you can interact with experts regarding popular new technologies and get your questions answered. You can also directly access the &lt;a href=&quot;https://www.hpe.com/us/en/open-source.html&quot;&gt;HPE Open Source portal&lt;/a&gt; from our top page directory.&lt;/p&gt;
&lt;p&gt;Easily access past issues of our HPE DEV Community Newsletter in our &lt;a href=&quot;https://developer.hpe.com/newsletter-signup&quot;&gt;Newsletter Archive&lt;/a&gt;. Make sure you’re signed up to get any subsequent issues we send out. From the&lt;a href=&quot;https://developer.hpe.com/community&quot;&gt; Community&lt;/a&gt; page you’ll be able to connect with us on our &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;HPE DEV&lt;/a&gt; Slack channel and follow us on Twitter. You can start and participate in discussions on the &lt;a href=&quot;https://community.hpe.com/t5/HPE-Ezmeral-Software-platform/bd-p/ezmeral-software-platform&quot;&gt;HPE Ezmeral Software Forum&lt;/a&gt; as well.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/community-v2-image5.jpg&quot; width=&quot;800&quot; height=&quot;464&quot;&gt;&lt;/center&gt; 
&lt;h2&gt;We’re all developing something. Come join us in making the future.&lt;/h2&gt;
&lt;p&gt;We hope you’re as excited as we are about the new portal . Find us at hpedev.io.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[KubeCon + CloudNativeCon, Europe: Be Involved via this Virtual Opportunity]]></title><description><![CDATA[As stated in last year’s KubeCon keynote; “our lives have gone remote – our challenges have gone global.” The Cloud Native Computing…]]></description><link>https://developer.hpe.com/kubecon-cloudnativecon-europe-be-involved-via-this-virtual-opportunity/</link><guid isPermaLink="false">https://developer.hpe.com/kubecon-cloudnativecon-europe-be-involved-via-this-virtual-opportunity/</guid><pubDate>Thu, 15 Apr 2021 14:50:34 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;/img/kubecon-eu-2021-1.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;As stated in last year’s KubeCon keynote; “our lives have gone remote – our challenges have gone global.” The &lt;a href=&quot;https://www.cncf.io/&quot;&gt;Cloud Native Computing Foundation&lt;/a&gt; (CNCF) will address this challenge through its flagship conference, &lt;a href=&quot;https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/&quot;&gt;KubeCon | CloudNativeCon EU&lt;/a&gt; delivered via a virtual platform May 4-7, 2021. And Hewlett Packard Enterprise (HPE) will once again be there, excited for the opportunity to show what it offers in the areas of containers and Kubernetes (k8s)!&lt;/p&gt;
&lt;p&gt;New this year, HPE Principal Software Engineer Daniel Feldman will speak on how to securely bridge cloud-native and traditional workloads with SPIRE. Daniel is a major contributor to the SPIRE project, an open source zero trust security layer for cloud services. With a strong background in security, Daniel will cover the challenges organizations face when going cloud native, and how the CNCF’s SPIRE project provides the solution to help establish secure service identities across an organization. Make sure you check out his lightning talk on Tuesday, May 4th at 15:00 CEST. Find it on &lt;a href=&quot;https://kccnceu2021.sched.com/event/igUc&quot;&gt;the schedule here&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Discover what’s new in our virtual Hack Shack!&lt;/h2&gt;
&lt;p&gt;HPE’s presence at the event will again include the &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE Developer Community&lt;/a&gt; Hack Shack. In the &lt;a href=&quot;/hackshack/&quot;&gt;HPE DEV Hack Shack,&lt;/a&gt; developers, designers, and data scientists have the opportunity to connect with HPE subject matter experts and collaborate with them to accelerate innovation using open source and HPE technologies. It’s a unique place designed to give virtual events a more personal touch and extend the experience beyond the event. Here, you’ll find lots of opportunities to learn and have fun! Make sure you check out the HPE DEV Treasure hunt for an opportunity to win HPE DEV swag!&lt;/p&gt;
&lt;p&gt;To get a complete picture of what’s happening in the virtual Hack Shack, take a &lt;a href=&quot;https://vimeo.com/444872340&quot;&gt;tour now&lt;/a&gt;. Here’s a snapshot of what you will find:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;/hackshack/workshops&quot;&gt;WORKSHOPS-ON-DEMAND:&lt;/a&gt;  Access over a dozen free, on-demand hands-on training courses that include &lt;em&gt;&lt;strong&gt;Building a dynamic Machine Learning pipeline with KubeDirector&lt;/strong&gt;&lt;/em&gt; and &lt;em&gt;&lt;strong&gt;Using Kubernetes CSI with HPE Ezmeral Container Platform&lt;/strong&gt;&lt;/em&gt;. Using a Jupyter Notebook environment, you’ll have the opportunity to gain hands-on experience across different technologies, like KubeDirector and the HPE Ezmeral Container Platform REST API. You’ll also find numerous replays of talks we’ve given at different events and a complete selection of material on the HPE Ezmeral Platform, an enterprise-grade platform used to deploy Kubernetes at scale for a wide range of use cases on bare metal or VMs.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;/hackshack/arcade&quot;&gt;TREASURE HUNT:&lt;/a&gt; New this year, our scavenger-hunt style challenge offers an opportunity for you to check out all the resources that are available on the HPE Developer Community website and the Hack Shack. Be one of the first people to answer all the questions correctly and win an HPE DEV hat!&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;/hackshack/replays&quot;&gt;REPLAYS:&lt;/a&gt; Augmenting our Workshops-on-Demand, we’ve posted replays of many of the technical workshops we’ve offered live in the past. View them to learn more about the &lt;a href=&quot;/hackshack/replays/1&quot;&gt;HPE Ezmeral Container Platform&lt;/a&gt;, &lt;a href=&quot;/hackshack/replays/5&quot;&gt;SPIFFE and SPIRE authentication&lt;/a&gt;, and the &lt;a href=&quot;/hackshack/replays/2&quot;&gt;HPE Container Storage Interface for Kubernetes&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;/hackshack/ezmeral&quot;&gt;HPE EZMERAL:&lt;/a&gt; We’ve made it easy for you to find detailed information on the &lt;a href=&quot;https://developer.hpe.com/platform/hpe-ezmeral-container-platform/home&quot;&gt;HPE Ezmeral Container Platform&lt;/a&gt;, &lt;a href=&quot;https://developer.hpe.com/platform/hpe-ezmeral-data-fabric/home&quot;&gt;HPE Ezmeral Data Fabric&lt;/a&gt;, &lt;a href=&quot;https://www.hpe.com/us/en/solutions/machine-learning-operations.html&quot;&gt;HPE Ezmeral ML Ops&lt;/a&gt;, along with other products in this innovative set of software that can be deployed on any cloud, on any hardware, and is 100% open source Kubernetes.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;/hackshack/arcade&quot;&gt;ARCADE:&lt;/a&gt;  In our arcade, you’ll find Hack Shack Attack! Give our popular retro-style video game a try and compete with your friends for the highest score. Here, you can also download stickers, Zoom backgrounds, and cool artwork to use on your social channels.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;/hackshack/community&quot;&gt;COMMUNITY:&lt;/a&gt;  We invite you to join and contribute your expertise on our blog or deliver an on-demand workshop. Connect with others in the community via the &lt;a href=&quot;https://community.hpe.com/t5/HPE-Ezmeral-Software-platform/bd-p/ezmeral-software-platform#.YGzDjuhKg2w&quot;&gt;HPE Ezmeral Software Forum&lt;/a&gt;, or our &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;Slack &lt;/a&gt;and &lt;a href=&quot;https://twitter.com/HPE_Developer&quot;&gt;Twitter&lt;/a&gt; channels to start conversations and get answers to questions. Sign-up for our &lt;a href=&quot;https://developer.hpe.com/newsletter-signup&quot;&gt;HPE DEV Newsletter&lt;/a&gt; to stay up-to-date on the newest blog posts and tutorials.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.hpe.com/skillup&quot;&gt;SKILL UP:&lt;/a&gt; There are even more opportunities to get together and learn when you’re part of the HPE DEV Community. From the Hack Shack, hop on over to the HPE DEV portal where you’ll find our monthly Munch &amp;#x26; Learn gatherings. These free, one-hour meetups are designed to provide developers and data scientists the opportunity to meet and engage with experts in popular HPE and open source technologies.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;As always, the&lt;a href=&quot;https://developer.hpe.com/community&quot;&gt; HPE DEV community&lt;/a&gt; is excited to have an opportunity to connect with other technologists at an event such as this. If you’re planning on attending, make sure you stop on by and say “Hi” in the Hack Shack by dropping us a note in our &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;Slack Channel&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE Developer Community Meets a Changing World Head On]]></title><description><![CDATA[The world is changing When have those words never been true? The only real constant in life is change. How you deal with change is what…]]></description><link>https://developer.hpe.com/hpe-developer-community-meets-a-changing-world-head-on/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-developer-community-meets-a-changing-world-head-on/</guid><pubDate>Wed, 14 Apr 2021 10:17:45 GMT</pubDate><content:encoded>&lt;h2&gt;The world is changing&lt;/h2&gt;
&lt;p&gt;When have those words never been true? The only real constant in life &lt;strong&gt;is&lt;/strong&gt; change. &lt;strong&gt;How&lt;/strong&gt; you deal with change is what defines how you will survive – and thrive. Consider the challenges posed by social distancing to any community of technologists. Take the HPE Developer Community, for example. We thrive on connecting with each other, discussing problems, and collaborating to build solutions. As much as we wish we could “get back to normal” and live in a post-COVID world where we can greet each other in the Hack Shack, there are things that will never go back… nor, perhaps, should they.&lt;/p&gt;
&lt;p&gt;As I said, &lt;strong&gt;how&lt;/strong&gt; you deal with change defines your future success. By learning to adapt to our current circumstances, we have found ways to reach out to others as never before; more efficiently, economically, and engagingly. By confronting the challenges posed by social distancing head on, the HPE DEV team that leads the Developer Community has come up with some very innovative ways to connect and continue our mission – to Build, Communicate, and Collaborate. This includes developing a virtual &lt;a href=&quot;/hackshack/&quot;&gt;Hack Shack&lt;/a&gt;, creating &lt;a href=&quot;/hackshack/workshops&quot;&gt;on-demand technical workshops&lt;/a&gt;, offering &lt;a href=&quot;https://developer.hpe.com/campaign/munch-and-learn&quot;&gt;online meetups&lt;/a&gt;, and redesigning our &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE Developer web portal&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Morphing the Hack Shack&lt;/h2&gt;
&lt;p&gt;When COVID shut down the first tradeshows, it became obvious that going virtual was the only real option. Digital platforms were quickly made available. These helped companies show what they wanted to present, but from an event attendee’s perspective, they really weren’t all that engaging. Virtual platforms did make events easier to attend, as they could be streamed worldwide throughout the day. And they saved people a lot of money, since many events were free and no one had to travel. But the ability to connect and learn from each other was lacking.&lt;/p&gt;
&lt;p&gt;One of the HPE DEV team’s first challenges was to figure out a way to deliver the hands-on learning sessions HPE DEV had become known for through its physical presence in the Hack Shack. The Hack Shack was a friendly place where event attendees could relax, attend hands-on coding sessions, and mingle with like-minded technologists to pick each other’s brain. It was a place where you came to learn.&lt;/p&gt;
&lt;p&gt;As developers, designers, and owners of our own HPE DEV Portal, the most obvious answer to making the Hack Shack virtual was to actually create an on-line Hack Shack. Our resident ace designer, Chris Carlozzi, created one of the coolest sites you could imagine. One of the most popular attractions at the Hack Shack was our Hack Shack Attack game, which was already available online, so that was easily integrated. We made stickers available that people could download and offered prizes for high scores in the game.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/hpedev-world-is-changing-image1-small.png&quot; height=&quot;542&quot; width=&quot;500&quot;&gt;&lt;/center&gt;
&lt;p&gt;But what about the coding sessions? These had been tremendously popular during our live events. Didier Lalli, HPE DEV Technology lead, and Frederic Passeron, our resident education evangelist, came up with a way to deliver the workshops in a way where students could interact with the code online through the use of Jupyter Notebooks. This gave birth to a whole set of &lt;a href=&quot;/hackshack/workshops&quot;&gt;Workshops-on-Demand&lt;/a&gt;. The workshops became so popular that people began asking for them even when there wasn’t an event. Students returned surveys giving the workshops glowing reviews, explaining how happy they were to be able to work with the technologies through such a hands-on approach. (For more information on how we did this, read Frederic Passeron’s post &lt;a href=&quot;https://developer.hpe.com/blog/from-jupyter-notebooks-as-a-service-to-hpe-dev-workshops-on-demand/&quot;&gt;From Jupyter Notebooks-as-a-Service to HPE DEV Workshops-on-Demand&lt;/a&gt;). Now, the workshops are not only available in the Hack Shack, but you can also find them through the &lt;a href=&quot;https://hpedemoportal.ext.hpe.com/login&quot;&gt;HPE Demo Portal&lt;/a&gt; and through the &lt;a href=&quot;https://developer.hpe.com/skillup/&quot;&gt;Skill Up&lt;/a&gt; section of the HPE DEV Portal.&lt;/p&gt;
&lt;h2&gt;Zooming in on Technology Talks&lt;/h2&gt;
&lt;p&gt;In addition to everything else the pandemic has done, it has certainly made Zoom a household name. And through its use, we’ve been able to find other ways to gather and collaborate. There’s a huge knowledge base in HPE. It’s the job of the HPE DEV team to connect subject matter experts with others in the community who can benefit from that knowledge, and Zoom has helped us to do that.&lt;/p&gt;
&lt;p&gt;In the past year, we’ve connected with more and more experts in the fields of data fabric, Kubernetes, containers, and open source technologies. Industry luminaries (e.g. Ted Dunning, Ellen Friedman, and Nigel Poulton), open source innovators (e.g. Umair Khan, Brad Chamberlain, Kartik Mathur, and Agustin Fayo), and brilliant technologists (i.e. Tom Phelan and Doug Cackett) have helped us inform others in the community about the nuances of these technologies by presenting in our forums and adding to our workshop library.&lt;/p&gt;
&lt;p&gt;Our newest offering, the &lt;a href=&quot;https://developer.hpe.com/blog/munch-and-learn&quot;&gt;Munch &amp;#x26; Learn Technology Talks&lt;/a&gt;, are monthly meetups where we invite everyone to gather and hear from these technologists. These are great opportunities for you to come and get your questions answered straight from the experts. And, because we’re a fun sort of group, we also encourage you to bring a munchie/snack, take a picture of it and share with the group via our Slack channel. You can even share a recipe, if you’d like.&lt;/p&gt;
&lt;h2&gt;Change Hits the Portal&lt;/h2&gt;
&lt;p&gt;As we added more and more to our online presence, we found that the existing HPE DEV Portal no longer served as a true representation of who the HPE DEV team is and what we do. We needed a better way for our community to find the things they were looking for, including the technology talks, the workshops, and other training resources we offer. We wanted to offer people easier ways to find us and connect through &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;Slack&lt;/a&gt;, &lt;a href=&quot;https://twitter.com/HPE_Developer&quot;&gt;Twitter&lt;/a&gt;, and the &lt;a href=&quot;https://community.hpe.com/t5/HPE-Ezmeral-Software-platform/bd-p/ezmeral-software-platform&quot;&gt;HPE Ezmeral Software Forum&lt;/a&gt;. Even finding older newsletters tended to be a chore, and that needed to be fixed.&lt;/p&gt;
&lt;center&gt;&lt;img src=&quot;/img/hpedev-world-is-changing-v2-portal-reveal-575-by-375.jpg&quot; width=&quot;575&quot; height=&quot;375&quot; &gt;&lt;/center&gt;
&lt;p&gt;So, we’ve updated the &lt;a href=&quot;https://developer.hpe.com&quot;&gt;HPE DEV Portal&lt;/a&gt; and given it a new, fresh and clean look. It’s a lot easier to navigate. Here, you’ll find all the resources you need to design and build the best possible software experiences that harness the most value from your data. Blog posts continue to be a staple of the portal, with many of the more recent posts covering topics like AI, machine learning, how to stream data and capture it, and how to efficiently manage your hybrid cloud environment. You’ll also be able to access the brand new &lt;a href=&quot;https://www.hpe.com/us/en/open-source.html&quot;&gt;HPE Open Source project website&lt;/a&gt; through our top navigation.&lt;/p&gt;
&lt;h2&gt;Join Us in Moving Forward&lt;/h2&gt;
&lt;p&gt;One of my favorite Comcast Business commercials extols the virtue of moving forward. It acknowledges how businesses that were previously thriving are now challenged with figuring out how to deal with a disastrous situation. The commercial asks “How do you bounce back? You don’t. You bounce forward.” And, at the end, it points out that we need to do this together. The HPE DEV team is excited by the changes we’ve made as a team and as a community – and how these changes are helping us to move forward.&lt;/p&gt;
&lt;p&gt;If you haven’t taken advantage of the HPE Developer Community yet, consider taking a closer look now and in the coming months. There’s a broad set of resources there that you can take advantage of. It costs you nothing to join. You just need to participate, which can be as simple as signing up for our &lt;a href=&quot;https://developer.hpe.com/newsletter-signup&quot;&gt;monthly newsletter&lt;/a&gt;. Or, post a question on our &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;Slack Channel&lt;/a&gt; and find out how quickly you can get an answer. You might want to start a discussion on the &lt;a href=&quot;https://community.hpe.com/t5/HPE-Ezmeral-Software-platform/bd-p/ezmeral-software-platform&quot;&gt;HPE Ezmeral Software Forum&lt;/a&gt;. Or, follow us on Twitter at our new &lt;a href=&quot;https://twitter.com/HPE_Developer&quot;&gt;Twitter&lt;/a&gt; handle @HPE_DEV. I think you’ll find a lot of value in connecting and collaborating with us.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[How to integrate Chef automation with HPE GreenLake for Private Cloud Enterprise]]></title><description><![CDATA[Editor’s Note – NAME CHANGE: HPE GreenLake for Private Cloud is now part of HPE GreenLake for Private Cloud Enterprise. Introduction HPE…]]></description><link>https://developer.hpe.com/how-to-integrate-chef-automation-with-hpe-greenlake-for-private-cloud/</link><guid isPermaLink="false">https://developer.hpe.com/how-to-integrate-chef-automation-with-hpe-greenlake-for-private-cloud/</guid><pubDate>Tue, 13 Apr 2021 17:25:38 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;/img/gettyimages-521980823_16x9_1600_0_72_rgb.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;br /&gt;
&lt;hr&gt;
&lt;p&gt;&lt;strong&gt;Editor’s Note – NAME CHANGE: HPE GreenLake for Private Cloud is now part of HPE GreenLake for Private Cloud Enterprise.&lt;/strong&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;HPE GreenLake for private cloud is one of the cloud services powered by the HPE GreenLake Central platform. This service provides a cloud experience to manage virtual machines (VMs) in your on-premises, pay-per-use datacenter. It is an integrated solution composed of HPE-optimized hardware and software, fully managed by HPE. The solution provides rich application management capabilities for cloud-native and traditional applications along with a self-service portal, and it integrates with many popular automation tools, such as Ansible, Chef, and Puppet. This article explains how to integrate your existing Chef automation platform, Chef Infra, with HPE GreenLake for private cloud to help improve efficiency and minimize errors caused by manual configuration. The tutorial then walks you through the step-by-step process for two use case scenarios:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Scenario 1: Add Chef automation integration, and provision a new application instance (i.e. Nginx) bootstrapped to an integrated Chef Infra server using HPE GreenLake for private cloud self-service user interface (UI).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Scenario 2: Bootstrap an existing VM instance to the integrated Chef infra server using the HPE GreenLake for private cloud automation feature Tasks.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Before we dive into the use cases, let me give you an overview of Chef and its prerequisites before any integration with HPE GreenLake for private cloud.&lt;/p&gt;
&lt;h2&gt;Chef overview&lt;/h2&gt;
&lt;p&gt;Chef is one of the most widely adopted open source automation solutions. Chef Infra is a powerful automation platform that transforms infrastructure into code. It provides both infrastructure and application automation for physical or virtual machines and helps reduce manual, repetitive tasks. Chef Infra works on a three-tier client-server model: Chef workstation, Chef Infra server, and Chef Infra client nodes.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Chef workstation:&lt;/strong&gt; This is where a user develops configuration files, such as Chef recipes and cookbooks, and uploads them to the Chef Infra server.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Chef Infra server:&lt;/strong&gt; This acts as a hub for the configuration data. It stores cookbooks, policies that are applied to Chef Infra client nodes, and metadata that describes each registered Chef node.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Chef Infra client nodes:&lt;/strong&gt; Chef nodes are registered and managed by Chef Infra server. Chef client is installed on each node, which helps in setting up the communication between the Chef Infra server and Chef node. Nodes use Chef client to ask the Chef server for configuration details, such as recipes, templates, and file distributions. Chef client then does the configuration work on the nodes.&lt;/p&gt;
&lt;p&gt;See the &lt;a href=&quot;https://www.tutorialspoint.com/chef/index.htm&quot;&gt;Chef Tutorial&lt;/a&gt; for more information on Chef architecture.&lt;/p&gt;
&lt;h2&gt;Chef integration prerequisites with HPE GreenLake for private cloud&lt;/h2&gt;
&lt;p&gt;To integrate Chef with HPE GreenLake for private cloud, the following prerequisites are assumed:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;You have a Chef Infra server:  It can be hosted on either a public or private network. For a Chef Infra server hosted in a private network, make sure it is reachable from the HPE GreenLake Central platform.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You have configured the organization, organization validator key, users, and user key on a Chef Infra server.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You have uploaded a sample cookbook &lt;strong&gt;mydocker&lt;/strong&gt; to the Chef Infra server. Refer to this &lt;a href=&quot;https://medium.com/@pierangelo1982/cooking-docker-nginx-with-chef-server-95f179aa17ca&quot;&gt;blog post&lt;/a&gt; to create the sample cookbook. The cookbook has a single recipe &lt;strong&gt;default.rb&lt;/strong&gt; that will be referred to as &lt;strong&gt;recipe[mydocker]&lt;/strong&gt; in the Chef runlist. The cookbook installs a Docker environment on the VM and then runs an Nginx Docker container. A Chef runlist defines all the information necessary for Chef to configure a node into the desired state.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;#
# Cookbook:: mydocker
# Recipe:: default
#
# Copyright:: 2021, The Authors, All Rights Reserved.
docker_service &apos;default&apos; do
  action [:create, :start]
end

# Create Docker Service directory
directory &apos;/etc/systemd/system/docker.service.d&apos; do
  owner &apos;root&apos;
  group &apos;root&apos;
  mode &apos;0755&apos;
  action :create
end

# Creating Docker Proxy Configuration
cookbook_file &apos;/etc/systemd/system/docker.service.d/proxy.conf&apos; do
  source &apos;proxy.conf&apos;
  mode &quot;0644&quot;
  action :create
end

docker_service &apos;default&apos; do
  action :restart
end


# Pull latest image
docker_image &apos;nginx&apos; do
  tag &apos;latest&apos;
  action :pull
end


# Run container exposing ports
docker_container &apos;my_nginx&apos; do
  repo &apos;nginx&apos;
  tag &apos;latest&apos;
  port &apos;85:80&apos;
  volumes &quot;/home/docker/default.conf:/etc/nginx/conf.d/default.conf:ro&quot;
  volumes &quot;/home/docker/html:/usr/share/nginx/html&quot;
end

# create file default.conf for volumes docker
template &quot;/home/docker/default.conf&quot; do
  source &quot;default.conf.erb&quot;
  #notifies :reload, &quot;service[default]&quot;
end

# create file index.html for volumes docker
template &apos;/home/docker/html/index.html&apos; do
  source &apos;index.html.erb&apos;
  variables(
    :ambiente =&gt; node.chef_environment
  )
  action :create
  #notifies :restart, &apos;service[httpd]&apos;, :immediately
end
&lt;/code&gt;&lt;/pre&gt;
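&lt;p&gt;If you build the cookbook yourself, it can typically be uploaded to the Chef Infra server with the knife command-line tool from a workstation already configured for your organization. This is only a sketch assuming a standard chef-repo layout:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Run from your chef-repo, with knife configured for the target organization
knife cookbook upload mydocker

# Confirm the cookbook is now available on the Chef Infra server
knife cookbook list
&lt;/code&gt;&lt;/pre&gt;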
&lt;h2&gt;Scenario 1: Add Chef automation integration and provision a new application instance (Nginx) bootstrapped to the integrated Chef Infra server using HPE GreenLake for private cloud self-service UI&lt;/h2&gt;
&lt;p&gt;For this scenario, we need to integrate Chef automation with the HPE GreenLake for private cloud first, then we provision a new application instance (i.e. Nginx) bootstrapped to the integrated Chef Infra server.&lt;/p&gt;
&lt;h3&gt;Follow the steps below to add the Chef automation integration in HPE GreenLake for private cloud.&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;In the HPE GreenLake for private cloud main menu dashboard, navigate to &lt;strong&gt;Administration &gt; Integrations&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/glpc-chef-integration-image1.png&quot; alt=&quot;&quot; title=&quot;Image1&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Click the green &lt;strong&gt;+NEW INTEGRATION&lt;/strong&gt; drop-down menu, and select integration type &lt;strong&gt;Chef&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/glpc-chef-integration-image2.png&quot; alt=&quot;&quot; title=&quot;Image2&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;
&lt;p&gt;Populate the following fields and save changes&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Name: Name of the Chef integration; for example, &lt;strong&gt;Local-Chef-Server&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Chef Endpoint: URL of Chef Infra server API endpoint in &lt;a href=&quot;https://api.example.com&quot;&gt;https://api.example.com&lt;/a&gt; format. Do not add /organization/xxxx here, which is populated in the Chef Organization field. In this example, &lt;strong&gt;&lt;a href=&quot;https://chefserver.localdomain&quot;&gt;https://chefserver.localdomain&lt;/a&gt;&lt;/strong&gt; is used.&lt;/li&gt;
&lt;li&gt;Chef Version: The Chef client version to be installed on the nodes. Use 16.1.X or greater; the value can be changed later to use a different or more recent version of Chef.&lt;/li&gt;
&lt;li&gt;Chef Organization: Chef server organization&lt;/li&gt;
&lt;li&gt;Chef User: Chef Infra server user&lt;/li&gt;
&lt;li&gt;User Private Key: The private key of the user with access to this Chef Infra server&lt;/li&gt;
&lt;li&gt;Organization Validator: Validator key for the organization&lt;/li&gt;
&lt;li&gt;DataBags: Optional. Add it if it is configured in the Chef Infra server&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/glpc-chef-integration-image3-bis.png&quot; alt=&quot;&quot; title=&quot;image3&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Create a new infrastructure group for Chef integration.&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;From HPE GreenLake for private cloud main menu dashboard, navigate to &lt;strong&gt;Infrastructure &gt; Groups&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/glpc-chef-integration-image4.png&quot; alt=&quot;&quot; title=&quot;Image4&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Click the green &lt;strong&gt;+CREATE&lt;/strong&gt; icon to add a new group with the name &lt;strong&gt;ManagedChef&lt;/strong&gt;. Expand the &lt;strong&gt;Advanced Options&lt;/strong&gt; section and, in the &lt;strong&gt;Config Management&lt;/strong&gt; field, select the previously configured Chef integration &lt;strong&gt;Local-Chef-Server&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/glpc-chef-integration-image5.png&quot; alt=&quot;&quot; title=&quot;Image5&quot;&gt;&lt;/p&gt;
&lt;p&gt;Save changes. The added Chef integration is now available for use in HPE GreenLake for private cloud during instance provisioning. Please note that an infrastructure group can only be associated with one Chef server.&lt;/p&gt;
&lt;h3&gt;Provision the new instance.&lt;/h3&gt;
&lt;p&gt;With Chef integration added to an infrastructure group, when users provision a new instance into that infrastructure group, a &lt;strong&gt;Chef&lt;/strong&gt; section will appear in the &lt;strong&gt;Configure&lt;/strong&gt; section of the provisioning wizard. By default, Chef is enabled, but it can be disabled by expanding the &lt;strong&gt;Chef&lt;/strong&gt; section and unchecking &lt;strong&gt;Enable Chef&lt;/strong&gt;. Follow the steps below to provision the new instance using the &lt;strong&gt;CREATE INSTANCE&lt;/strong&gt; wizard.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;In the wizard &lt;strong&gt;GROUP&lt;/strong&gt; page, select the previously configured &lt;strong&gt;ManagedChef&lt;/strong&gt; infrastructure group, enter the instance name &lt;strong&gt;Demo2&lt;/strong&gt;, select an environment, and click next.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/glpc-chef-integration-image6.png&quot; alt=&quot;&quot; title=&quot;Image 6&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;In the &lt;strong&gt;CONFIGURE&lt;/strong&gt; menu, expand the &lt;strong&gt;Chef&lt;/strong&gt; section and update the required fields as shown in the screen capture below. In the &lt;strong&gt;CHEF RUNLIST&lt;/strong&gt; field, enter the Chef recipe &lt;strong&gt;recipe[mydocker]&lt;/strong&gt; from the cookbook you previously uploaded to your Chef Infra server in the prerequisites section. If the &lt;strong&gt;CHEF RUNLIST&lt;/strong&gt; field is left empty, the instance will just be bootstrapped to the Chef Infra server.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/glpc-chef-integration-image7.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Review the instance configuration and complete the installation.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/glpc-chef-integration-image8.png&quot; alt=&quot;&quot; title=&quot;Image 8&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;On successful completion of instance provisioning, the instance &lt;strong&gt;Demo2&lt;/strong&gt; is bootstrapped to the Chef server integrated with the selected group. Expanding the instance &lt;strong&gt;History&lt;/strong&gt; tab shows the bootstrap task status.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/glpc-chef-integration-image9.png&quot; alt=&quot;&quot; title=&quot;Image 9&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;5&quot;&gt;
&lt;li&gt;You can also confirm the instance &lt;strong&gt;Demo2&lt;/strong&gt; from the Chef Manage dashboard of the integrated Chef Infra server. Chef Manage needs to be installed explicitly on the Chef Infra server. The screen capture below shows that instance &lt;strong&gt;Demo2&lt;/strong&gt; is bootstrapped to the Chef server; a command-line alternative using knife follows it.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/glpc-chef-integration-image10.png&quot; alt=&quot;&quot; title=&quot;Image 10&quot;&gt;&lt;/p&gt;
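&lt;p&gt;If Chef Manage is not installed, a similar check can be done from a knife workstation configured for the same organization; the node name below matches the instance name used in this scenario:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# List the nodes registered with the Chef Infra server
knife node list

# Show details of the newly bootstrapped node, including its run list
knife node show Demo2
&lt;/code&gt;&lt;/pre&gt;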
&lt;p&gt;Based on the recipe specified in the &lt;strong&gt;CHEF RUNLIST&lt;/strong&gt; defined in the provisioning wizard, the Chef client on the newly created instance &lt;strong&gt;Demo2&lt;/strong&gt; pulled the recipe from the Chef server and configured an Nginx Docker container on the instance.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/glpc-chef-integration-image11.png&quot; alt=&quot;&quot; title=&quot;Image 11&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Scenario 2: Bootstrap an existing VM instance to the integrated Chef Infra server using the HPE GreenLake for private cloud automation feature &lt;strong&gt;Tasks&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;HPE GreenLake for private cloud provides an option to bootstrap existing VM instances to Chef Infra server. This section describes a step-by-step process you can use to achieve that.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The first step is to create a Chef bootstrap task. On the HPE GreenLake for private cloud main menu screen, navigate to &lt;strong&gt;Provisioning &gt; Automation &gt; Tasks&lt;/strong&gt;, and create a Chef bootstrap task. A task is an individual automation element; for example, a script or a Chef cookbook. Select &lt;strong&gt;TYPE&lt;/strong&gt; as &lt;strong&gt;Chef bootstrap&lt;/strong&gt; and choose the previously integrated Chef Infra server. In the &lt;strong&gt;RUN LIST&lt;/strong&gt; field, enter &lt;strong&gt;recipe[mydocker]&lt;/strong&gt; to refer to the recipe from the cookbook you previously uploaded to your Chef Infra server.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/glpc-chef-integration-image12.png&quot; alt=&quot;&quot; title=&quot;Image 12&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;To bootstrap any VM instance with this task, on the HPE GreenLake for private cloud main menu, navigate to &lt;strong&gt;Provisioning &gt; Instances&lt;/strong&gt;, and select the VM instance that needs to be bootstrapped. Select &lt;strong&gt;Actions &gt; Run Task&lt;/strong&gt;. The screen below shows the VM instance &lt;strong&gt;Demo&lt;/strong&gt; detail page.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/glpc-chef-integration-image13.png&quot; alt=&quot;&quot; title=&quot;Image 13&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Select the &lt;strong&gt;chef-bootstrap&lt;/strong&gt; task created previously and execute it by clicking &lt;strong&gt;EXECUTE&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/glpc-chef-integration-image14.png&quot; alt=&quot;&quot; title=&quot;Image 14&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The task will start executing on the instance.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/img/glpc-chef-integration-image15.png&quot; alt=&quot;&quot; title=&quot;Image 15&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;On completion, the instance status is green and the task status can be seen in the &lt;strong&gt;History&lt;/strong&gt; tab.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;/img/glpc-chef-integration-image16.png&quot; alt=&quot;&quot; title=&quot;Image 16&quot;&gt;&lt;/p&gt;
&lt;p&gt;The instance is now bootstrapped as a Chef client registered with the Chef Infra server, and the recipe specified in the &lt;strong&gt;Run List&lt;/strong&gt; has been run. This can be confirmed from the Chef Manage dashboard of the integrated Chef Infra server, as shown in the screenshot below.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/glpc-chef-integration-image17.png&quot; alt=&quot;&quot; title=&quot;Image 17&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;Chef Infra is a powerful automation platform that transforms infrastructure into code. By integrating it with HPE GreenLake for private cloud, you can utilize your existing Chef Infra code to automate the VMs and applications provisioned in HPE GreenLake for private cloud. This will greatly reduce errors caused by manual configuration and improve efficiency and agility.&lt;/p&gt;
&lt;p&gt;To learn more about HPE GreenLake cloud services – the cloud that comes to wherever your apps and data live – visit the &lt;a href=&quot;https://www.hpe.com/greenlake&quot;&gt;HPE GreenLake homepage&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;To learn more about Chef architecture, visit the &lt;a href=&quot;https://www.tutorialspoint.com/chef/index.htm&quot;&gt;Chef Tutorial&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;More HPE GreenLake for private cloud white papers can be found on the &lt;a href=&quot;https://developer.hpe.com/platform/hpe-greenlake/home&quot;&gt;HPE Developer platform page&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[From Jupyter Notebooks-as-a-Service to HPE DEV Workshops-on-Demand]]></title><description><![CDATA[HackShack At the end of a blog post I wrote a year ago, I left off with an idea: Jupyter Notebooks-as-a-Service. Apparently, our HPE DEV…]]></description><link>https://developer.hpe.com/from-jupyter-notebooks-as-a-service-to-hpe-dev-workshops-on-demand/</link><guid isPermaLink="false">https://developer.hpe.com/from-jupyter-notebooks-as-a-service-to-hpe-dev-workshops-on-demand/</guid><pubDate>Fri, 09 Apr 2021 08:38:05 GMT</pubDate><content:encoded>&lt;p&gt;&lt;a href=&quot;/hackshack/workshops&quot;&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/4/hackshack-1617960622993.png&quot; alt=&quot;HackShack&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;At the end of a &lt;a href=&quot;https://developer.hpe.com/blog/jupyter-saved-my-day&quot;&gt;blog post&lt;/a&gt; I wrote a year ago, I left off with an idea: Jupyter Notebooks-as-a-Service. Apparently, our HPE DEV team thought it was a great idea, because we went on to create the HPE DEV Workshops-on-Demand. While the name is different, the ultimate goal remained the same. Leveraging the JupyterHub technology, the HPE DEV team created a set of hands-on technology training workshops where anyone could connect to our service and benefit from notebook-based workshops at any time, and from anywhere.&lt;/p&gt;
&lt;p&gt;In our Workshops-on-Demand, users can actually interact and play with code to learn about a variety of different subjects. From infrastructure automation and coding 101 to API-related content, the Workshops-on-Demand catalog covers a broad range of subjects where you can learn about new technologies for free.&lt;/p&gt;
&lt;p&gt;Check out all the different topics we offer &lt;a href=&quot;/hackshack/workshops&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Now, let me explain how we made this possible…&lt;/p&gt;
&lt;h2&gt;Early prototype&lt;/h2&gt;
&lt;p&gt;We built up our first JupyterHub server for the virtual HPE Technology &amp;#x26; Solutions Summit (TSS) that took place in March 2020 and used it to deliver, in a virtual way, the workshops we would normally give in person. We hosted just a few Jupyter notebook-based workshops and found that they scored really well in the feedback we received from the students. All notebooks were delivered through a single instance of a JupyterHub server.&lt;/p&gt;
&lt;p&gt;During the event, we ran only a single workshop at a time. We first set up a range of users (students) to match a given workshop. It required a lot of manual setup, a few scripts and a single Ansible playbook to handle the copy of the workshop content assigned to the desired range of students, with a few variable substitutions like student IDs, passwords, and some API endpoints when relevant. The student assignment was done manually, at the beginning of every workshop. For instance, Fred was student1, Didier was student2, and so on… which was quite cumbersome. When you only have a range of 25 students, one or two people handling the back end is sufficient. But if 40 people show up, then it becomes tricky.&lt;/p&gt;
&lt;h2&gt;Standing up&lt;/h2&gt;
&lt;p&gt;We offered this virtual training first at TSS and then at Aspire (two internal HPE events). Then HPE Discover, the largest HPE technology event, appeared on the radar. We were asked to renew the content of the workshops and add in new subjects like artificial intelligence (AI), machine learning operations (MLOps), containers, etc., and that&apos;s what we did. We also needed to provide a registration portal to allow automated customer registration.&lt;/p&gt;
&lt;p&gt;Both the workshop and challenge activities we planned on providing at HPE Discover were notebook-based. The workshop was a follow-along, hands-on lab (aka an instructor-led workshop). The code challenges were different in that those who participated were asked to answer questions and then produce code snippets for a dedicated problem, with the goal of winning a prize for the best submission. To assist in streamlining our processes, our developer, Pramod, stood up a nice app that allowed customers to register and that automated the deployment of the notebooks. We implemented it only for the code challenges; the deployment for the workshops remained manual. These challenges allowed us to validate the beta version of the registration app.&lt;/p&gt;
&lt;h2&gt;Walking (or First steps)&lt;/h2&gt;
&lt;p&gt;By the end of the summer, we had many of the pieces in place that we needed to make the dream of Notebooks-as-a-Service a reality. We had the content and the automation to deliver it through an automation layer like Ansible. We also now had a registration app that simplified customer registration as well as workshop management (workshop capacity, workshop reset specificities, etc.). From this point, we really had something we could build upon.&lt;/p&gt;
&lt;p&gt;Where everything was manual or lightly scripted before, we really needed it to be fully automated in order to make this a viable solution. We developed each new workshop on a staging environment. When ready, we moved it to the production environment. We also had a third environment located on a different site that could be used to recover a workshop. This proved very advantageous when, on the first day of HPE Discover 2020, we had to recover everything from the secondary site a few hours before the event started, since a backhoe had cut the internet line on the main site. Today, we are also leveraging our HPE GreenLake offering to build up one more environment. This continues to be a work in progress.&lt;/p&gt;
&lt;p&gt;As you can see, we have a few environments to deploy and manage. Without automation, it would be too much work and prone to error.&lt;/p&gt;
&lt;p&gt;Each server is associated with a logical location and role (Production, Staging, Sandbox, and GreenLake): Jupyter1 is the production server. Jupyter2 is the sandbox server. It is used to perform some early testing and is located in a different datacenter from the production and staging servers. Jupyter3 is the staging server. Workshops are developed and validated on this server before moving to production. Finally, Jupyter4 is deployed in a dedicated HPE GreenLake tenant. It will serve over time as a second production site.&lt;/p&gt;
&lt;p&gt;Each location is defined through a set of YAML files that specify the different parameters linked to the location (IP addresses, hostnames, etc.).&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;PBKDIR: staging
JPHOST: jupyter3.example.com
JPIP: xx.xx.xx.xx
JPHOSTEXT: nb3.example.com
JPHUBAPISRV: http://{{ JPHOST }}:8000
JPHUBTOKEN: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
BASESTDID: 0
APIENDPOINT: https://example.com/api
APIUSER: user
APIPWD: password
LDAPDMN: dc=example,dc=com
#
KIBANAPWD: &quot;zzzzzzzzzzzzzzzzzzzzzzzzzzzzz&quot;
KIBANAPORT: &quot;xxxxxx&quot;
#
STACKSTORMSRVNAME: &quot;sta-stackstorm&quot;
STACKSTORMSRIP: &quot;xx.xx.xx.xx&quot;
STACKSTORMWEBUIPORT: &quot;yyyy&quot;
#
VCENTERAPP: &quot;vcenter.example.com&quot;
VCENTERADMIN: &quot;user&quot;
VCENTERPWD: &quot;xxxxxxx&quot;
#
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We standardized on Ubuntu 20.04 and CentOS 7 for the operating systems and created a few Ansible playbooks to prepare the servers.&lt;/p&gt;
&lt;p&gt;The first playbook, based on a location parameter, would perform:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;the JupyterHub installation
&lt;ul&gt;
&lt;li&gt;System Update&lt;/li&gt;
&lt;li&gt;Repository Update&lt;/li&gt;
&lt;li&gt;Apps Installation&lt;/li&gt;
&lt;li&gt;System Performance Tuning&lt;/li&gt;
&lt;li&gt;Security Setup&lt;/li&gt;
&lt;li&gt;JupyterHub application installation and configuration&lt;/li&gt;
&lt;li&gt;kernels setup &amp;#x26; configuration&lt;/li&gt;
&lt;li&gt;Linux users creation&lt;/li&gt;
&lt;li&gt;JupyterHub users creation&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The second playbook would take care of deploying the reference notebooks on the newly created JupyterHub server.&lt;/p&gt;
&lt;p&gt;A third playbook is run on demand and nightly to ensure that the configuration is consistent and up to date.&lt;/p&gt;
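&lt;p&gt;As a minimal sketch of how such a nightly run can be scheduled (the inventory and playbook names below are placeholders, not our actual file names), a cron entry on the management host could look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Run the consistency playbook against the staging location every night at 02:00
0 2 * * * ansible-playbook -i inventories/staging/hosts check-config.yml
&lt;/code&gt;&lt;/pre&gt;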
&lt;p&gt;We created two Git repositories, one for the infrastructure management (and all the development we did to automate our deployments) and a second one for the reference notebooks’ content.&lt;/p&gt;
&lt;h2&gt;Running&lt;/h2&gt;
&lt;p&gt;While we worked on automating the deployment/redeployment of a JupyterHub at will, we also focused on improving the notebooks’ deployment automation that we implemented earlier during HPE Discover.&lt;/p&gt;
&lt;p&gt;Let me first show you how the overall process works for our Workshops-on-Demand:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/4/hackshack2-1617960613898.png&quot; alt=&quot;Workshops-on-Demand&quot;&gt;&lt;/p&gt;
&lt;p&gt;If you’re looking for a live explanation of this automation, you can review &lt;a href=&quot;https://www.youtube.com/watch?v=D6Ss3T2p008&amp;#x26;t=515s&quot;&gt;the following session&lt;/a&gt; Bruno Cornec and I delivered at Linuxconf in Australia in January 2021.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; The customer registers for a workshop online at our virtual &lt;a href=&quot;/hackshack/workshops&quot;&gt;Hack Shack&lt;/a&gt;. When clicking the register button for the selected workshop and after agreeing to the terms and conditions, the front-end triggers the first REST API calls to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Register the student in the Customers Database.&lt;/li&gt;
&lt;li&gt;Send a welcome email to the student&lt;/li&gt;
&lt;li&gt;Assign a student ID to him/her according to the student range allocated to the given workshop&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; The registration App then orders (through a mail API call managed by procmail) the backend to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Generate a random password for the selected student&lt;/li&gt;
&lt;li&gt;Deploy the selected workshop&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; The backend Infrastructure calls back the registration application using its REST API to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Provide back the new student password&lt;/li&gt;
&lt;li&gt;Make the student’s database record active&lt;/li&gt;
&lt;li&gt;Decrement Workshop capacity&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt; The registration App sends:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The credentials email to allow the student to connect to the workshop&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This infrastructure automation relies mainly on a few bash scripts and Ansible playbooks to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Generate random passwords&lt;/li&gt;
&lt;li&gt;Update LDAP passwords accordingly when required&lt;/li&gt;
&lt;li&gt;Deploy the given workshop to the proper student home directory through an Ansible playbook&lt;/li&gt;
&lt;li&gt;Update the notebook content through Ansible variables substitutions
&lt;ul&gt;
&lt;li&gt;Student ID&lt;/li&gt;
&lt;li&gt;Student password&lt;/li&gt;
&lt;li&gt;API Endpoints definition&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Update student permissions&lt;/li&gt;
&lt;li&gt;Perform all necessary create or reset actions linked to a given workshop outside of the notebook customization&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The registration app actually sends an email to the backend with a dedicated format (an API!) that is then parsed by the Procmail process to retrieve the necessary information to perform the different tasks. We use three verbs: CREATE (to set up the student environment as described above), DELETE (to remove the user from the tables), and RESET (to clean up the student content and reset the back-end infrastructure when needed).&lt;/p&gt;
&lt;p&gt;At the end of this automated process, the backend makes a series of API calls to the registration app to send back the required information, like a new password, workshop status, etc.&lt;/p&gt;
&lt;p&gt;More info on Procmail can be found &lt;a href=&quot;https://en.wikipedia.org/wiki/Procmail&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
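&lt;p&gt;As a purely illustrative sketch (the real rules and the handler script name are internal and not shown here), a procmail recipe that hands such an email off to a processing script could look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# ~/.procmailrc - route registration emails to the handler that parses the verb and its arguments
:0
* ^Subject:.*WORKSHOP
| /usr/local/bin/process-workshop-request.sh
&lt;/code&gt;&lt;/pre&gt;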
&lt;h2&gt;Flying&lt;/h2&gt;
&lt;p&gt;In the fall, we started a pilot phase for a month. We opened the platform internally and gathered some feedback. We managed to discover a few bugs then that are now corrected. The platform went live in November 2020 and is used on a daily basis.&lt;/p&gt;
&lt;p&gt;Since then, we have added new workshops on a monthly basis. We recently returned to TSS, using the Workshops-on-Demand to deliver our content; this time without any manual process involved.&lt;/p&gt;
&lt;p&gt;Our next steps include the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Implement complete deployment scenario
&lt;ul&gt;
&lt;li&gt;OS Bare metal deployment&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Provide a new backend on HPE GreenLake
&lt;ul&gt;
&lt;li&gt;We now have a JupyterHub server running in HPE GreenLake&lt;/li&gt;
&lt;li&gt;We validated the relationship between this new backend and our registration application&lt;/li&gt;
&lt;li&gt;More testing to come&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Automate the student ID range booking using a YAML configuration file&lt;/li&gt;
&lt;li&gt;Open source our Workshops-on-Demand Notebooks content:
&lt;ul&gt;
&lt;li&gt;This is a work in progress. The repository is ready and the licensing is OK.&lt;/li&gt;
&lt;li&gt;Only some automation is still missing to sync the different repositories&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Before leaving you with some final thoughts, let me show you a screenshot of our dashboard presenting:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The number of Active Workshops&lt;/li&gt;
&lt;li&gt;The active workshops by type&lt;/li&gt;
&lt;li&gt;The total number of registrations from November 1st 2020 till today&lt;/li&gt;
&lt;li&gt;The total number of Customer (student) registrations&lt;/li&gt;
&lt;li&gt;The total workshops split&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/4/hackshack3-1617960606167.png&quot; alt=&quot;Dashboard&quot;&gt;&lt;/p&gt;
&lt;p&gt;Didier Lalli from our HPE DEV team created this informative dashboard and wrote a blog about it. If you are interested in learning more about Elasticsearch, check it out &lt;a href=&quot;https://developer.hpe.com/blog/open-source-elasticsearch-helped-us-globally-support-virtual-labs&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;As you can see, we have made significant progress over the past year, moving from a heavily manually oriented approach to now a fully automated one. If you are interested in seeing the result for yourself, &lt;a href=&quot;/hackshack/workshops&quot;&gt;please register for one of our Workshops-on-Demand&lt;/a&gt;. Don’t forget to fill out the survey at the end of the workshop to provide us with feedback on your experience, as well as subjects you would like to see covered in the near future, as this really helps to improve the program.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Managing Multiple Instances of Python in Microsoft Windows]]></title><description><![CDATA[I use many Microsoft Windows systems for development and testing; both real systems and VMs. Most of them have one or more versions of…]]></description><link>https://developer.hpe.com/managing-multiple-instances-of-python-in-microsoft-windows/</link><guid isPermaLink="false">https://developer.hpe.com/managing-multiple-instances-of-python-in-microsoft-windows/</guid><pubDate>Tue, 06 Apr 2021 09:08:53 GMT</pubDate><content:encoded>&lt;p&gt;I use many Microsoft Windows systems for development and testing; both real systems and VMs. Most of them have one or more versions of Python installed. Several have many versions. And, over the years, I&apos;ve experienced a lot of problems:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Old programs, not compatible with Python 3, breaking on some systems.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;New programs, not compatible with Python 2, breaking on others. (Yes, I know that Python 2 is obsolete, but the reality is that I still have mission-critical tools that depend on it!)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The py.exe tool not detecting some installed versions of Python.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The python.exe command not always starting the most recent version of Python installed on a system.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Etc.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Finally, I decided to get to the bottom of it. In this blog post, I&apos;ll share with you what I learned.&lt;/p&gt;
&lt;h2&gt;Where to get Python for Microsoft Windows&lt;/h2&gt;
&lt;p&gt;Recent versions of Microsoft Windows 10 have a python.exe stub pre-installed that gets you directly to a Microsoft Store page. From there you can download a free version of Python officially supported by Microsoft. Just run &lt;code&gt;python&lt;/code&gt; at the command prompt and follow the instructions on the Microsoft Store page.&lt;/p&gt;
&lt;p&gt;An alternative is to go to &lt;code&gt;https://www.python.org/downloads/&lt;/code&gt;, and get the official Python releases for Microsoft Windows. Advantage: Their version is slightly more recent than the one supported by Microsoft.&lt;/p&gt;
&lt;p&gt;Another possibility is to use a third party build for Microsoft Windows, like the &lt;a href=&quot;https://www.activestate.com/products/python/downloads/&quot;&gt;ActivePython&lt;/a&gt; distribution maintained by &lt;a href=&quot;https://www.activestate.com&quot;&gt;ActiveState&lt;/a&gt;. If you still need Python 2.7, their ActivePython 2.7 is an all-batteries-included distribution, with all the modules that you can think of built in.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Caution: These ActivePython distributions are free for development use, but not for production use.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;There are many other possible alternatives, the ones above being just some of the most common.&lt;/p&gt;
&lt;p&gt;This diversity is proof of the vitality of the Python ecosystem...
But this is also the source of many problems, as different versions make different, sometimes conflicting, installation choices!&lt;/p&gt;
&lt;h2&gt;The py.exe launcher tool&lt;/h2&gt;
&lt;p&gt;The &lt;em&gt;py.exe&lt;/em&gt; launcher tool is a critically important tool that comes with Python 3 distributions for Microsoft Windows. Unfortunately, its installation is optional and usually &lt;em&gt;not&lt;/em&gt; selected by default. If you ever install more than one Python version on your system (Python 2.7 and Python 3.x, maybe), then it&apos;s a no-brainer: You &lt;strong&gt;must&lt;/strong&gt; select it and install it.&lt;/p&gt;
&lt;p&gt;First and foremost, py.exe is a front end to python.exe. By default, it runs the latest installed version of python.exe. But it has options to select another version at will. Usage:&lt;br&gt;
&lt;code&gt;py [py.exe options] [python.exe options and arguments]&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;C:\Temp&gt;py --version
Python 3.9.2

C:\Temp&gt;py -2 --version
Python 2.7.12

C:\Temp&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here, in both cases, the &lt;code&gt;--version&lt;/code&gt; option was interpreted by python.exe, &lt;em&gt;not&lt;/em&gt; by py.exe.&lt;br&gt;
In the first case, it was python.exe version 3.9.2 that ran. In the second, it was python.exe version 2.7.12 that did.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;-2&lt;/code&gt; option is a py.exe option that tells it to run the latest Python 2 version.&lt;br&gt;
You can also specify the minor version, like &lt;code&gt;-3.5&lt;/code&gt;, or even the target processor size, i.e. &lt;code&gt;-3-32&lt;/code&gt; or &lt;code&gt;-3.8-64&lt;/code&gt;.&lt;/p&gt;
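&lt;p&gt;For example, assuming a 32-bit Python 3.8 is installed, you could run a script (the script name here is just a placeholder) with that specific interpreter rather than with the default one:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;C:\Temp&gt;py -3.8-32 myscript.py
&lt;/code&gt;&lt;/pre&gt;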
&lt;p&gt;The &lt;code&gt;-0&lt;/code&gt; option tells py.exe to just list the available instances:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;C:\Temp&gt;py -0
Installed Pythons found by py Launcher for Windows
 -3.9-64 *
 -3.8-32
 -3.7-64
 -3.6-64
 -3.5-64
 -2.7-64


C:\Temp&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;There&apos;s also a &lt;code&gt;-0p&lt;/code&gt; option to list both the version and pathname of the python.exe instances it can use:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;C:\Temp&gt;py -0p
Installed Pythons found by py Launcher for Windows
 -3.9-64        C:\Program Files\Python39\python.exe *
 -3.8-32        C:\Users\Larvoire\AppData\Local\Programs\Python\Python38-32\python.exe
 -3.7-64        C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python37_64\python.exe
 -3.6-64        C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\python.exe
 -3.5-64        C:\Program Files\Python35\python.exe
 -2.7-64        C:\Program Files\Python27\python.exe


C:\Temp&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Python instances not seen by py.exe&lt;/h3&gt;
&lt;p&gt;Sometimes, py.exe does not see some installed versions of Python. This is often the case with Python 2.7 instances. The root cause is that the setup program for these instances did not create the necessary registry keys used by py.exe to enumerate the installed instances.&lt;/p&gt;
&lt;p&gt;To fix that, you need to manually create one of these registry keys and values:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;For user-specific instances (Installed in C:\Users\YOURNAME\AppData\Local\Programs\Python), the base key is:&lt;br&gt;
&lt;code&gt;HKEY_CURRENT_USER\Software\Python\PythonCore\MAJOR.MINOR\InstallPath\...&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;For system-wide instances (Installed in C:\Program Files\python*, or even in C:\Python*), the base key is:&lt;br&gt;
&lt;code&gt;HKEY_LOCAL_MACHINE\Software\Python\PythonCore\MAJOR.MINOR\InstallPath\...&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In each of these keys, there are three values needed:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Value name&lt;/th&gt;
&lt;th&gt;Content&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;@ (In the key itself)&lt;/td&gt;
&lt;td&gt;The Python installation directory name&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ExecutablePath&lt;/td&gt;
&lt;td&gt;The full pathname of the python.exe instance&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;WindowedExecutablePath&lt;/td&gt;
&lt;td&gt;The full pathname of the pythonw.exe instance&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;For example, for my Python 2.7 installation, I had to add keys and values from this .reg file:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\Software\Python\PythonCore\2.7]

[HKEY_LOCAL_MACHINE\Software\Python\PythonCore\2.7\InstallPath]
@=&quot;C:\\Program Files\\Python27\\&quot;
&quot;ExecutablePath&quot;=&quot;C:\\Program Files\\Python27\\python.exe&quot;
&quot;WindowedExecutablePath&quot;=&quot;C:\\Program Files\\Python27\\pythonw.exe&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Automatically running Python 2 or Python 3 for scripts that need it&lt;/h3&gt;
&lt;p&gt;Another great feature of py.exe is that it interprets the Python scripts shebang. The shebang is the special #! comment on the first line of a script. This comment is intended to be used by Unix shells to select which interpreter to run.&lt;/p&gt;
&lt;p&gt;Problem: There is no such mechanism in Microsoft Windows shells (cmd or PowerShell). That&apos;s where py.exe steps in: Based on that shebang, it selects the right Python interpreter to use in Microsoft Windows.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Shebang&lt;/th&gt;
&lt;th&gt;py.exe action&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;#!/usr/bin/env python&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Select the most recent Python version available&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;#!/usr/bin/env python2&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Select the most recent Python 2 version available&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;#!/usr/bin/env python3&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Select the most recent Python 3 version available&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Use the first generic shebang &lt;em&gt;only&lt;/em&gt; if your script is compatible with both Python 2 and Python 3. In the most common case where your script is compatible with just one of the two, use the right one for that version.&lt;/p&gt;
&lt;p&gt;Example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;C:\Temp&gt;type testpy2.py
#!/usr/bin/env python2
import platform;
print(platform.sys.version);

C:\Temp&gt;py testpy2.py
2.7.12 (default, Dec 19 2016, 15:56:45) [MSC v.1500 64 bit (AMD64)]

C:\Temp&gt;type testpy3.py
#!/usr/bin/env python3
import platform;
print(platform.sys.version);

C:\Temp&gt;py testpy3.py
3.9.2 (tags/v3.9.2:1a79785, Feb 19 2021, 13:44:55) [MSC v.1928 64 bit (AMD64)]

C:\Temp&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note that this tip is good for Unix also. With the right shebang, the script will use the right Python version both in Unix and Microsoft Windows.&lt;/p&gt;
&lt;p&gt;One last important note about this: Unix shells choke on Python scripts created in Microsoft Windows with CRLF at the end of lines. If you want your Python script to be portable to Mac and Linux systems, use Unix LF line endings, even in Microsoft Windows.&lt;/p&gt;
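&lt;p&gt;One quick way to convert an existing script, using nothing but py.exe itself (shown here on the test script from the earlier example), is a small Python one-liner; this is just a sketch, and any editor or dos2unix-style tool works too:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;C:\Temp&gt;py -c &quot;d=open(&apos;testpy3.py&apos;,&apos;rb&apos;).read(); open(&apos;testpy3.py&apos;,&apos;wb&apos;).write(d.replace(b&apos;\r\n&apos;,b&apos;\n&apos;))&quot;
&lt;/code&gt;&lt;/pre&gt;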
&lt;h2&gt;Running Python scripts based on the .py extension&lt;/h2&gt;
&lt;h3&gt;The local machine configuration&lt;/h3&gt;
&lt;p&gt;Microsoft Windows can select interpreters based on the file extension. This should be set up correctly by recent versions of Python 3, but may not be so with old versions of Python 2! If you install an old Python 2.7 &lt;em&gt;after&lt;/em&gt; a recent Python 3, this may bite you! To check if the configuration is correct, run &lt;code&gt;assoc.exe&lt;/code&gt;, then &lt;code&gt;ftype.exe&lt;/code&gt;, as shown here:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;C:\Temp&gt;assoc .py
.py=Python.File

C:\Temp&gt;ftype Python.File
Python.File=C:\Windows\py.exe &quot;%L&quot; %*

C:\Temp&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;assoc&lt;/code&gt; command must show that extension .py is associated with class &lt;code&gt;Python.File&lt;/code&gt;. If it&apos;s not, run &lt;code&gt;assoc .py=Python.File&lt;/code&gt; to correct it. And the &lt;code&gt;ftype&lt;/code&gt; command must show that the &lt;code&gt;Python.File&lt;/code&gt; class is associated with the py.exe command, or, if py.exe is not available on your (very old) system, with the latest python.exe command available.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;&quot;%L&quot; %*&lt;/code&gt; arguments tell the shell to append the script full pathname, and all its arguments if any.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note that I&apos;ve seen cases where this command was corrupt, with a long string of garbage characters instead. If it&apos;s incorrect or corrupt, correct it by running:&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;code&gt;ftype Python.File=C:\Windows\py.exe &quot;%L&quot; %*&lt;/code&gt;&lt;/p&gt;
&lt;h3&gt;The current user configuration&lt;/h3&gt;
&lt;p&gt;However, this is not always sufficient. If you&apos;ve installed a Python instance for yourself only, not for all users, the system-wide extension-to-class and class-to-command associations are overridden by user-specific HKCU registry keys:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;C:\Temp&gt;reg query &quot;HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts\.py\UserChoice&quot; /v Progid

HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts\.py\UserChoice
    Progid    REG_SZ    Python.File


C:\Temp&gt;reg query HKCU\Software\Classes\Python.File\shell\open\command /ve
ERROR: The system was unable to find the specified registry key or value.

C:\Temp&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In my case, the first key is defined with the same value as the global system-wide association, which is fine. If they&apos;re not defined, as with my second key, this is also fine, as the system-wide version will be used. But again, these two keys may be incorrect or corrupt. If they are, correct them with regedit.exe or delete them altogether.&lt;/p&gt;
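&lt;p&gt;For example, assuming you have confirmed that the user-specific open command for the Python.File class is the corrupt one, deleting it lets the system-wide association take over again:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;C:\Temp&gt;reg delete &quot;HKCU\Software\Classes\Python.File\shell\open\command&quot; /f
&lt;/code&gt;&lt;/pre&gt;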
&lt;h3&gt;Other failure causes&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;ftype.exe stores its data in the registry in the key &lt;code&gt;HKEY_LOCAL_MACHINE\Software\Classes\Python.File\shell\open\command&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If that key contains sub-keys (I&apos;ve seen this), then Microsoft Windows shells get confused. If that happens, remove the sub-keys.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;There&apos;s also a python.exe association in &lt;code&gt;HKEY_LOCAL_MACHINE\Software\Classes\Applications\%EXE%\shell\open\command&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If defined, it should contain: &lt;code&gt;C:\Windows\py.exe &quot;%1&quot;&lt;/code&gt; (The argument &lt;code&gt;&quot;%L&quot; %*&lt;/code&gt; also works, not sure which is best.)&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The above two have counterparts in &lt;code&gt;HKEY_CURRENT_USER\Software\Classes\...&lt;/code&gt;. Same remarks.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;File type associations can also be overridden by global policies, including some set by your Domain Controller. I&apos;ve never seen this, but be aware it&apos;s possible.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Finding python.exe in the PATH&lt;/h2&gt;
&lt;p&gt;The Python setup optionally adds the Python and Python Scripts installation directories to the PATH. This allows starting interactive Python sessions by typing &lt;code&gt;python&lt;/code&gt; at the command prompt. Likewise, this allows starting &lt;code&gt;pip&lt;/code&gt;, or tools installed by pip into the Scripts directory just by typing their name.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note that this is optional because, as explained above, if you have the py.exe command available, you&apos;ll get the same result by typing &lt;code&gt;py&lt;/code&gt;, or &lt;code&gt;py -m pip&lt;/code&gt;, etc.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Having the Python and Python Scripts directories in the PATH makes things a bit more intuitive and similar to how they work in Unix. The drawback is that this makes the ridiculously long Microsoft Windows PATH even longer, and so potentially slows down your system a little bit.&lt;/p&gt;
&lt;p&gt;But a worse problem may occur when you have several versions of Python installed. You may find situations where an old version was installed with the PATH updated and a newer version without. The result is that the &lt;code&gt;py&lt;/code&gt; and &lt;code&gt;python&lt;/code&gt; commands do not start the same version of Python, which can lead to problems. If this happens, you must correct your local, user, and system PATH to make sure the latest version of Python runs in all cases.&lt;/p&gt;
&lt;h3&gt;Tools for managing the Microsoft Windows PATH&lt;/h3&gt;
&lt;h4&gt;paths.bat&lt;/h4&gt;
&lt;p&gt;As the Microsoft Windows PATH is extremely long, it&apos;s often difficult to tell if a given directory is in the list and where. The open source &lt;a href=&quot;https://github.com/JFLarvoire/SysToolsLib/blob/master/Batch/paths.bat&quot;&gt;paths.bat&lt;/a&gt; tool makes it easy to review, and optionally correct, your local or system PATH. By default, it displays all entries in your local PATH, one per line. This makes it much easier to review what&apos;s in there. This also allows filtering the output using command-line filtering tools. For example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;C:\Temp&gt;paths | findstr /i python
C:\Program Files\Python39
C:\Program Files\Python39\scripts

C:\Temp&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Run &lt;code&gt;paths -?&lt;/code&gt; to display a help screen describing all available options. The following options will be particularly useful for fixing problems with your Python configuration:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;-s&lt;/code&gt; tells it to manage the system PATH, instead of the local shell PATH by default.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;-u&lt;/code&gt; tells it to manage the user PATH, instead of the local shell PATH by default.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;-r DIRECTORY&lt;/code&gt; tells it to remove that DIRECTORY from the managed PATH.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;-m DIR1 -b DIR2&lt;/code&gt; tells it to move DIR1 just before DIR2.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;Other System Tools Library tools&lt;/h4&gt;
&lt;p&gt;The &lt;a href=&quot;https://github.com/JFLarvoire/SysToolsLib/releases/latest/download/SysTools.zip&quot;&gt;System Tools Library&lt;/a&gt; contains paths.bat and other tools for managing the PATH:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;which.exe&lt;/code&gt; includes the best of Unix which and Microsoft Windows where.exe and some more. For a detailed description of which.exe features, see &lt;a href=&quot;https://www.dostips.com/forum/viewtopic.php?f=3&amp;#x26;t=9058&quot;&gt;this post&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/JFLarvoire/SysToolsLib/blob/master/Bash/paths&quot;&gt;paths&lt;/a&gt; is a simple Posix Shell script for doing the same things in Unix shells as paths.bat does in Microsoft Windows shells.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Interference with the Microsoft Store link&lt;/h3&gt;
&lt;p&gt;In recent Microsoft Windows 10 versions, Microsoft installs a python.exe stub that redirects you to the Microsoft Store, where you can download their distribution of Python. This stub is stored in &lt;code&gt;%LOCALAPPDATA%\Microsoft\WindowsApps&lt;/code&gt; (cmd) or &lt;code&gt;$env:LOCALAPPDATA\Microsoft\WindowsApps&lt;/code&gt; (PowerShell).&lt;/p&gt;
&lt;p&gt;Problem: If you install a non-Microsoft distribution and add its location in the Microsoft Windows PATH, it sometimes ends up in the PATH &lt;em&gt;behind&lt;/em&gt; the Microsoft stub!&lt;/p&gt;
&lt;p&gt;Running &lt;code&gt;python&lt;/code&gt; will open the Microsoft Store instead of the Python instance you just installed.&lt;/p&gt;
&lt;p&gt;Quick workaround: Use py.exe instead of python.exe. Py.exe does not use the PATH to locate instances, and will find your latest python.exe anyway.&lt;/p&gt;
&lt;p&gt;Long term solution: Move the Python directories in the system PATH &lt;em&gt;before&lt;/em&gt; &quot;%LOCALAPPDATA%\Microsoft\WindowsApps&quot;. Then restart the open shells to get the modified PATH.&lt;/p&gt;
&lt;p&gt;Below is an example of a system that has this problem: (Using the &lt;code&gt;paths&lt;/code&gt; tool described in the previous section.)&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;C:\Temp&gt;where python
C:\Users\Larvoire\AppData\Local\Microsoft\WindowsApps\python.exe
C:\Program Files\Python39\python.exe

C:\Temp&gt;paths | findstr /i /c:python /c:WindowsApps
C:\Users\Larvoire\AppData\Local\Microsoft\WindowsApps
C:\Program Files\Python39
C:\Program Files\Python39\scripts

C:\Temp&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In this case, to move the above two directories before &quot;%LOCALAPPDATA%\Microsoft\WindowsApps&quot;, run:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;C:\Temp&gt;paths -m &quot;C:\Program Files\Python39&quot; -b &quot;%LOCALAPPDATA%\Microsoft\WindowsApps&quot;
[Outputs the updated PATH contents]
C:\Temp&gt;paths -m &quot;C:\Program Files\Python39\scripts&quot; -b &quot;%LOCALAPPDATA%\Microsoft\WindowsApps&quot;
[Outputs the updated PATH contents]
C:\Temp&gt;paths | findstr /i /c:python /c:WindowsApps
C:\Program Files\Python39
C:\Program Files\Python39\scripts
C:\Users\Larvoire\AppData\Local\Microsoft\WindowsApps

C:\Temp&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Finding Python scripts in the PATH without specifying their extension&lt;/h2&gt;
&lt;p&gt;Microsoft Windows shells search for commands using a list of implicit extensions defined in the PATHEXT environment variable. This allows running a script by entering just its base name &lt;em&gt;without&lt;/em&gt; the extension. For example, on my system, reusing the testpy2.py and testpy3.py scripts described above:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;C:\Temp&gt;set PATHEXT
PATHEXT=.PY;.PY3;.PYC;.PYO;.PYW;.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC

C:\Temp&gt;testpy3
3.9.2 (tags/v3.9.2:1a79785, Feb 19 2021, 13:44:55) [MSC v.1928 64 bit (AMD64)]

C:\Temp&gt;testpy2
2.7.12 (default, Dec 19 2016, 15:56:45) [MSC v.1500 64 bit (AMD64)]

C:\Temp&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If your PATHEXT does not contain .PY or .py (this is case-independent, so the two are equivalent), then add it into both the system PATHEXT and your local shell PATHEXT. Example for the cmd.exe shell:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;setx PATHEXT &quot;.PY;%PATHEXT%&quot; -m
set PATHEXT=.PY;%PATHEXT%
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;and for PowerShell:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$SystemPathExt = [Environment]::GetEnvironmentVariable(&apos;PATHEXT&apos;, &apos;Machine&apos;)
[Environment]::SetEnvironmentVariable(&apos;PATHEXT&apos;, &quot;.PY;$SystemPathExt&quot;, &apos;Machine&apos;)
$env:PATHEXT = &quot;.PY;$env:PATHEXT&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Automating all of this&lt;/h2&gt;
&lt;p&gt;If all the above gives you a headache (it definitely does for me!), there is a way to do everything automatically. The Open Source &lt;a href=&quot;https://github.com/JFLarvoire/SysToolsLib/releases/latest/download/SysTools.zip&quot;&gt;System Tools Library&lt;/a&gt;
contains a tool called &lt;a href=&quot;https://github.com/JFLarvoire/SysToolsLib/blob/master/Python/PySetup.bat&quot;&gt;PySetup.bat&lt;/a&gt;. This tool checks all the information documented above and, optionally, fixes it.&lt;/p&gt;
&lt;p&gt;First run &lt;code&gt;pysetup&lt;/code&gt; without any option to check the current configuration. Then, if anything is wrong, and you agree with the proposed changes, run &lt;code&gt;pysetup -s&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;When everything is correct, pysetup.bat outputs all green &lt;span style=&quot;color:green&quot;&gt;[OK]&lt;/span&gt; statuses:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;C:\Temp&gt;pysetup

Testing the &quot;C:\Program Files\Python39\python.exe&quot; configuration
Using C:\Windows\py.exe

The .py extension is globally associated with class: Python.File
[OK]
The .py extension is associated for user Larvoire with class: Python.File
[OK]
The open command for class Python.File is: C:\Windows\py.exe &quot;%L&quot; %*
[OK]
There&apos;s no additional command for class Python.File
[OK]
The open command for application python.exe is: C:\Windows\py.exe &quot;%L&quot; %*
[OK]
The Python InstallPath registration is: &quot;C:\Program Files\Python39\&quot;
[OK]
The Python PythonPath registration is: &quot;C:\Program Files\Python39\Lib\;C:\Program Files\Python39\DLLs\&quot;
[OK]
The PATH contains C:\Program Files\Python39
[OK]
The PATH contains C:\Program Files\Python39\scripts
[OK]
Other Python directories in the PATH:
[OK]
The global system PATH contains C:\Program Files\Python39
[OK]
The global system PATH contains C:\Program Files\Python39\scripts
[OK]
Other Python directories in the global system PATH:
[OK]
The global environment variable PATHEXT is: .PY;.PY3;.PYC;.PYO;.PYW;.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC
[OK]
The local environment variable PATHEXT is: .PY;.PY3;.PYC;.PYO;.PYW;.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC
[OK]
Verifying that &apos;python&apos; starts &quot;C:\Program Files\Python39\python.exe&quot;
[OK]
Verifying that &apos;python.exe&apos; starts &quot;C:\Program Files\Python39\python.exe&quot;
[OK]

The setup is good.

C:\Temp&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Fixing missing instances that py.exe does not see&lt;/h3&gt;
&lt;p&gt;Another feature of pysetup.bat is that it can scan known places on the disk for Python instances, and optionally register missing entries so that py.exe knows about them.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;First run &lt;code&gt;pysetup -l&lt;/code&gt; to list instances. (This may take some time if you have a slow disk.)&lt;/p&gt;
&lt;pre&gt;&lt;code&gt; C:\Temp&gt;pysetup -l
 #0   3.9.2    AMD64   C:\Windows\py.exe
 #1   2.7.12   AMD64   C:\Program Files\Python27\python.exe
 #2   3.5.2    AMD64   C:\Program Files\Python35\python.exe
 #3   3.7.4    AMD64   C:\Program Files\Python37\python.exe
 #4   3.9.2    AMD64   C:\Program Files\Python39\python.exe
 #5   3.6.6    AMD64   C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\python.exe
 #6   3.7.8    AMD64   C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python37_64\python.exe
 #7   3.8.7    x86     C:\Users\Larvoire\AppData\Local\Programs\Python\Python38-32\python.exe
 
 C:\Temp&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Compare that to the output of the &lt;code&gt;py -0p&lt;/code&gt; command.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt; C:\Temp&gt;py -0p
 Installed Pythons found by py Launcher for Windows
  -3.9-64        C:\Program Files\Python39\python.exe *
  -3.8-32        C:\Users\Larvoire\AppData\Local\Programs\Python\Python38-32\python.exe
  -3.7-64        C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python37_64\python.exe
  -3.6-64        C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\python.exe
  -3.5-64        C:\Program Files\Python35\python.exe
 
 C:\Temp&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If any one is missing (as is the case here for Python 2.7.12), then run &lt;code&gt;pysetup -r VERSION&lt;/code&gt; to fix the issue. Ex:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt; C:\Temp&gt;pysetup -r 2.7.12
 reg add &quot;HKLM\Software\Python\PythonCore\2.7\InstallPath&quot; /ve /d &quot;C:\Program Files\Python27\\&quot; /f
 The operation completed successfully.
 reg add &quot;HKLM\Software\Python\PythonCore\2.7\InstallPath&quot; /v ExecutablePath /d &quot;C:\Program Files\Python27\python.exe&quot; /f
 The operation completed successfully.
 reg add &quot;HKLM\Software\Python\PythonCore\2.7\InstallPath&quot; /v WindowedExecutablePath  /d &quot;C:\Program Files\Python27\pythonw.exe&quot; /f
 The operation completed successfully.
 
 C:\Temp&gt;py -0p
 Installed Pythons found by py Launcher for Windows
  -3.9-64        C:\Program Files\Python39\python.exe *
  -3.8-32        C:\Users\Larvoire\AppData\Local\Programs\Python\Python38-32\python.exe
  -3.7-64        C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python37_64\python.exe
  -3.6-64        C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\python.exe
  -3.5-64        C:\Program Files\Python35\python.exe
  -2.7-64        C:\Program Files\Python27\python.exe
 
 C:\Temp&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;I hope you found this post helpful when dealing with some of the issues that can occur when you are configuring multiple instances of Python in Microsoft Windows environments. For more informative tutorials, make sure you keep checking back on the &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE DEV blog&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: [&quot;Carol McDonald&quot;],
&quot;publish&quot;: &quot;2019-05-20T07:00:00.000Z&quot;,
&quot;tags&quot;: &quot;mapr-platform&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;&lt;em&gt;Editor&apos;s Note:  This tutorial is the second part in a series related to this topic. The first part in this series is found here: &lt;a href=&quot;/blog/streaming-machine-learning-pipeline-for-sentiment-analysis-using-apache-&quot;&gt;Part 1.&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;This is the second in a series of blogs that discuss the architecture of a data pipeline that combines streaming data with machine learning and fast storage. In the first part, we explored sentiment analysis using Spark machine learning data pipelines and saved a sentiment analysis model. This second post discusses using the saved sentiment analysis model with streaming data to do real-time analysis of product sentiment, storing the results in MapR Database, and making them rapidly available for Spark and Drill SQL.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image1-1607498963348.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In this post we will go over the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Overview of Streaming concepts&lt;/li&gt;
&lt;li&gt;Ingesting Kafka Events with Spark Structured Streaming&lt;/li&gt;
&lt;li&gt;Enriching events with a machine learning model.&lt;/li&gt;
&lt;li&gt;Storing the events in MapR Database&lt;/li&gt;
&lt;li&gt;Querying the rapidly available enriched events in MapR Database with Apache Spark SQL and Apache Drill.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Streaming Concepts&lt;/h2&gt;
&lt;h2&gt;Publish-Subscribe Event Streams with MapR Event Store for Apache Kafka&lt;/h2&gt;
&lt;p&gt;MapR Event Store for Apache Kafka is a distributed publish-subscribe event streaming system that enables producers and consumers to exchange events in real time in a parallel and fault-tolerant manner via the Apache Kafka API.&lt;/p&gt;
&lt;p&gt;A stream represents a continuous sequence of events that goes from producers to consumers, where an event is defined as a key-value pair.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image2-1607498972008.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;A topic is a logical stream of events. Topics organize events into categories and decouple producers from consumers. Topics are partitioned for throughput and scalability. MapR Event Store can scale to very high throughput levels, easily delivering millions of messages per second using very modest hardware.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image13-1607498981360.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can think of a partition like an event log: new events are appended to the end and are assigned a sequential ID number called the &lt;em&gt;offset&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image22-1607498989167.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Like a queue, events are delivered in the order they are received.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image20-1607498997119.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Unlike a queue, however, messages are not deleted when read. They remain on the partition available to other consumers. Messages, once published, are immutable and can be retained forever.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image5-1607499005836.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Not deleting messages when they are read allows for high performance at scale and also for processing of the same messages by different consumers for different purposes such as multiple views with polyglot persistence.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image15-1607499014273.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
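&lt;p&gt;To make the producer side of this publish-subscribe model concrete, here is a minimal sketch of publishing a single review event with the standard Kafka producer API. The broker address and the /user/mapr/stream:reviews topic path are taken from the consumer configuration shown later in this post; the sample key and value are illustrative only.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

// Producer configuration; the broker address matches the Spark consumer
// configuration used later in this post.
val props = new Properties()
props.put(&quot;bootstrap.servers&quot;, &quot;maprdemo:9092&quot;)
props.put(&quot;key.serializer&quot;, &quot;org.apache.kafka.common.serialization.StringSerializer&quot;)
props.put(&quot;value.serializer&quot;, &quot;org.apache.kafka.common.serialization.StringSerializer&quot;)

val topic = &quot;/user/mapr/stream:reviews&quot;
val producer = new KafkaProducer[String, String](props)

// Each event is a key-value pair; here the product id is the key and the
// review JSON document is the value (sample values for illustration only).
val review = &quot;&quot;&quot;{&quot;asin&quot;: &quot;2094869245&quot;, &quot;summary&quot;: &quot;Be seen&quot;, &quot;overall&quot;: 5.0}&quot;&quot;&quot;
producer.send(new ProducerRecord[String, String](topic, &quot;2094869245&quot;, review))

producer.flush()
producer.close()
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Any number of consumers subscribed to the same topic can then read these events independently, each keeping track of its own offset.&lt;/p&gt;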
&lt;h2&gt;Spark Dataset, DataFrame, SQL&lt;/h2&gt;
&lt;p&gt;A Spark Dataset is a distributed collection of typed objects partitioned across multiple nodes in a cluster. A Dataset can be manipulated using functional transformations (map, flatMap, filter, etc.) and/or Spark SQL. A DataFrame is a Dataset of Row objects and represents a table of data with rows and columns.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image14-1607499022152.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
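&lt;p&gt;As a minimal sketch of this duality, the snippet below builds a small Dataset from made-up review values, applies a typed functional transformation, and then queries the same data as a table with Spark SQL. The case class, sample values, and view name are illustrative only, and an active SparkSession named spark is assumed, as in the rest of this post.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// Illustrative sample data only
case class Review(asin: String, overall: Double, summary: String)

import spark.implicits._

val ds = Seq(
  Review(&quot;2094869245&quot;, 5.0, &quot;Be seen&quot;),
  Review(&quot;B004TNWD40&quot;, 2.0, &quot;Not impressed&quot;)
).toDS()

// Typed, functional transformation on the Dataset
val positive = ds.filter(_.overall &gt;= 4.0)
positive.show()

// The same data as a DataFrame of Row objects, queried with SQL
ds.toDF().createOrReplaceTempView(&quot;reviews_sample&quot;)
spark.sql(&quot;SELECT asin, overall FROM reviews_sample WHERE overall &gt;= 4.0&quot;).show()
&lt;/code&gt;&lt;/pre&gt;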
&lt;h2&gt;Spark Structured Streaming&lt;/h2&gt;
&lt;p&gt;Structured Streaming is a scalable and fault-tolerant stream processing engine built on the Spark SQL engine. Structured Streaming enables you to view data published to Kafka as an unbounded DataFrame and process this data with the same DataFrame, Dataset, and SQL APIs used for batch processing.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image6-1607499031160.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;As streaming data continues to arrive, the Spark SQL engine incrementally and continuously processes it and updates the final result.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image21-1607499040966.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
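&lt;p&gt;A minimal sketch of this continuous-query model is shown below. It uses Spark&apos;s built-in rate test source instead of Kafka, purely for illustration: a windowed count is incrementally updated and re-emitted to the console as new rows arrive. The actual Kafka-based pipeline for this use case follows in the next sections.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.spark.sql.functions._
import spark.implicits._

// Unbounded DataFrame from the built-in &quot;rate&quot; test source (a few rows per second)
val streamDF = spark.readStream
  .format(&quot;rate&quot;)
  .option(&quot;rowsPerSecond&quot;, 5)
  .load()

// Incremental aggregation: count rows per 10-second window
val counts = streamDF.groupBy(window($&quot;timestamp&quot;, &quot;10 seconds&quot;)).count()

// Continuously write the updated result table to the console sink
val query = counts.writeStream
  .outputMode(&quot;complete&quot;)
  .format(&quot;console&quot;)
  .start()
&lt;/code&gt;&lt;/pre&gt;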
&lt;p&gt;Stream processing of events is useful for real-time ETL, filtering, transforming, creating counters and aggregations, correlating values, enriching with other data sources or machine learning, persisting to files or Database, and publishing to a different topic for pipelines.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image4-1607499049861.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Spark Structured Streaming Use Case Example Code&lt;/h2&gt;
&lt;p&gt;Below is the data processing pipeline for this use case of sentiment analysis of Amazon product review data to detect positive and negative reviews.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image1-1607499057405.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Amazon product review JSON formatted events are published to a MapR Event Store topic using the Kafka API.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A Spark Streaming application subscribed to the topic:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Ingests a stream of product review events&lt;/li&gt;
&lt;li&gt;Uses a deployed machine learning model to enrich the review event with a positive or negative sentiment prediction&lt;/li&gt;
&lt;li&gt;Stores the transformed and enriched data in MapR Database in JSON format.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image3-1607499065187.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Example Use Case Data&lt;/h2&gt;
&lt;p&gt;The example data set is Amazon product reviews data from &lt;a href=&quot;https://developer.hpe.com/blog/wzvGV1qzj3c2YA8QQnMD/streaming-machine-learning-pipeline-for-sentiment-analysis-using-apache-&quot;&gt;the previous blog in this series&lt;/a&gt;. The incoming data is in JSON format; an example is shown below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{&quot;reviewerID&quot;: &quot;A3V52OTJHKIJZX&quot;, &quot;asin&quot;: &quot;2094869245&quot;,&quot;reviewText&quot;: &quot;Light just installed on bike, seems to be well built.&quot;, &quot;overall&quot;: 5.0, &quot;summary&quot;: &quot;Be seen&quot;, &quot;unixReviewTime&quot;: 1369612800}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We enrich this data with the sentiment prediction, drop some columns, then transform it into the following JSON object:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;
{&quot;reviewerID&quot;: &quot;A3V52OTJHKIJZX&quot;, &quot;_id&quot;:&quot;2094869245_1369612800&quot;, &quot;reviewText&quot;: &quot;Light just installed on bike, seems to be well built.&quot;, &quot;overall&quot;: 5.0, &quot;summary&quot;: &quot;Be seen&quot;, &quot;label&quot;:&quot;1&quot;, &quot;prediction&quot;:&quot;1&quot;}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Loading the Spark &lt;em&gt;pipeline&lt;/em&gt; Model&lt;/h2&gt;
&lt;p&gt;The Spark PipelineModel class is used to load the pipeline model, which was fitted on &lt;a href=&quot;https://developer.hpe.com/blog/wzvGV1qzj3c2YA8QQnMD/streaming-machine-learning-pipeline-for-sentiment-analysis-using-apache-&quot;&gt;the historical product review data&lt;/a&gt; and then saved to the MapR XD file system.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;// Directory to read the saved ML model from
var modeldirectory =&quot;/user/mapr/sentmodel/&quot;

// load the saved model from the distributed file system
val model = PipelineModel.load(modeldirectory)
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Reading Data from Kafka Topics&lt;/h2&gt;
&lt;p&gt;In order to read from Kafka, we must first specify the stream format, topic, and offset options. For more information on the configuration parameters, &lt;a href=&quot;https://docs.datafabric.hpe.com/62/MapR_Streams/differences_in_configuration_parameters_for_producers_and_consumers.html&quot;&gt;see the MapR Streams documentation.&lt;/a&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;var topic: String = &quot;/user/mapr/stream:reviews&quot;

val df1 = spark.readStream.format(&quot;kafka&quot;)
      .option(&quot;kafka.bootstrap.servers&quot;, &quot;maprdemo:9092&quot;)
      .option(&quot;subscribe&quot;, topic)
      .option(&quot;group.id&quot;, &quot;testgroup&quot;)
      .option(&quot;startingOffsets&quot;, &quot;earliest&quot;)
      .option(&quot;failOnDataLoss&quot;, false)
      .option(&quot;maxOffsetsPerTrigger&quot;, 1000)
      .load()
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This returns a DataFrame with the following schema:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;df1.printSchema()

result:
root
 |-- key: binary (nullable = true)
 |-- value: binary (nullable = true)
 |-- topic: string (nullable = true)
 |-- partition: integer (nullable = true)
 |-- offset: long (nullable = true)
 |-- timestamp: timestamp (nullable = true)
 |-- timestampType: integer (nullable = true)
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Parsing the Message Values into a DataFrame&lt;/h2&gt;
&lt;p&gt;The next step is to parse and transform the binary values column into a DataFrame with the product review schema.  We will use Spark from_json to extract the JSON data from the Kafka DataFrame value field seen above.  The Spark SQL from_json() function turns an input JSON string column into a Spark struct, with the specified input schema.&lt;/p&gt;
&lt;p&gt;First we use a Spark StructType to define the schema corresponding to the incoming JSON message value.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;val schema = StructType(Array(
    StructField(&quot;asin&quot;, StringType, true),
    StructField(&quot;helpful&quot;, ArrayType(StringType), true),
    StructField(&quot;overall&quot;, DoubleType, true),
    StructField(&quot;reviewText&quot;, StringType, true),
    StructField(&quot;reviewTime&quot;, StringType, true),
    StructField(&quot;reviewerID&quot;, StringType, true),
    StructField(&quot;reviewerName&quot;, StringType, true),
    StructField(&quot;summary&quot;, StringType, true),
    StructField(&quot;unixReviewTime&quot;, LongType, true)
  ))
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In the code below, we use the &lt;em&gt;from_json()&lt;/em&gt; Spark SQL function, in a &lt;em&gt;select expression&lt;/em&gt; with a &lt;em&gt;string cast&lt;/em&gt; of the df1 column &lt;em&gt;value&lt;/em&gt;, which returns a DataFrame of the specified schema.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import spark.implicits._

val df2 = df1.select($&quot;value&quot; cast &quot;string&quot; as &quot;json&quot;)
.select(from_json($&quot;json&quot;, schema) as &quot;data&quot;)
.select(&quot;data.*&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This returns a DataFrame with the following schema:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;df2.printSchema()


result:
root
 |-- asin: string (nullable = true)
 |-- helpful: array (nullable = true)
 |    |-- element: string (containsNull = true)
 |-- overall: double (nullable = true)
 |-- reviewText: string (nullable = true)
 |-- reviewTime: string (nullable = true)
 |-- reviewerID: string (nullable = true)
 |-- reviewerName: string (nullable = true)
 |-- summary: string (nullable = true)
 |-- unixReviewTime: long (nullable = true)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In the code below:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;we use the withColumn method to add a column combining the review summary with the review text&lt;/li&gt;
&lt;li&gt;we filter to remove neutral ratings (overall rating = 3)&lt;/li&gt;
&lt;li&gt;a Spark &lt;a href=&quot;https://spark.apache.org/docs/2.2.0/ml-features.html#bucketizer&quot;&gt;Bucketizer&lt;/a&gt; is used to add a 0/1 label column to the dataset for positive (overall rating &gt;=4) and not positive (overall rating &amp;#x3C;4) reviews. (Note: the label is used for testing the predictions.)&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;// combine summary reviewText into one column
val df3 = df2.withColumn(&quot;reviewTS&quot;,
concat($&quot;summary&quot;,lit(&quot; &quot;),$&quot;reviewText&quot; ))

//  remove neutral ratings
val df4 = df3.filter(&quot;overall !=3&quot;)

// add label column
val bucketizer = new Bucketizer()
.setInputCol(&quot;overall&quot;)
.setOutputCol(&quot;label&quot;)
.setSplits(Array(Double.NegativeInfinity,3.0,Double.PositiveInfinity))

val df5= bucketizer.transform(df4)
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Enriching the DataFrame of Reviews with Sentiment Predictions&lt;/h2&gt;
&lt;p&gt;Next we transform the DataFrame with the model pipeline, which will transform the features according to the pipeline stages, estimate, and then return the predictions in a column of a new DataFrame.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
// transform the DataFrame with the model pipeline
val predictions = model.transform(df5)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image16-1607499077714.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;This returns a DataFrame with the following schema:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;predictions.printSchema()

result:
root
 |-- asin: string (nullable = true)
 |-- helpful: array (nullable = true)
 |    |-- element: string (containsNull = true)
 |-- overall: double (nullable = true)
 |-- reviewText: string (nullable = true)
 |-- reviewTime: string (nullable = true)
 |-- reviewerID: string (nullable = true)
 |-- reviewerName: string (nullable = true)
 |-- summary: string (nullable = true)
 |-- unixReviewTime: long (nullable = true)
 |-- reviewTS: string (nullable = true)
 |-- label: double (nullable = true)
 |-- reviewTokensUf: array (nullable = true)
 |    |-- element: string (containsNull = true)
 |-- reviewTokens: array (nullable = true)
 |    |-- element: string (containsNull = true)
 |-- cv: vector (nullable = true)
 |-- features: vector (nullable = true)
 |-- rawPrediction: vector (nullable = true)
 |-- probability: vector (nullable = true)
 |-- prediction: double (nullable = false)
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Adding a Unique ID for MapR Database&lt;/h2&gt;
&lt;p&gt;In the code below we:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;drop the columns that we do not want to store&lt;/li&gt;
&lt;li&gt;create a unique id “_id”, composed of the product id and review timestamp, to use as the row key for storing in MapR Database.&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;// drop the columns that we do not want to store
val df6 = predictions.drop(&quot;cv&quot;,&quot;probability&quot;, &quot;features&quot;, &quot;reviewTokens&quot;, &quot;helpful&quot;, &quot;reviewTokensUf&quot;, &quot;rawPrediction&quot;)

// create column with unique id for MapR Database
val df7 = df6.withColumn(&quot;_id&quot;, concat($&quot;asin&quot;,lit(&quot;_&quot;), $&quot;unixReviewTime&quot;))
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This returns a DataFrame with the following schema:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;df7.printSchema()


Result:

root
 |-- asin: string (nullable = true)
 |-- overall: double (nullable = true)
 |-- reviewText: string (nullable = true)
 |-- reviewTime: string (nullable = true)
 |-- reviewerID: string (nullable = true)
 |-- reviewerName: string (nullable = true)
 |-- summary: string (nullable = true)
 |-- unixReviewTime: long (nullable = true)
 |-- label: double (nullable = true)
 |-- reviewTokens: array (nullable = true)
 |    |-- element: string (containsNull = true)
 |-- prediction: double (nullable = false)
 |-- _id: string (nullable = true)
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Spark Streaming Writing to MapR Database&lt;/h2&gt;
&lt;p&gt;The MapR Database Connector for Apache Spark enables you to use MapR Database as a sink for Spark Structured Streaming or Spark Streaming.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image9-1607499085506.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;One of the challenges when you are processing lots of streaming data is: where do you want to store it? For this application, MapR Database, a high-performance NoSQL database, was chosen for its scalability and flexible ease of use with JSON.&lt;/p&gt;
&lt;h2&gt;JSON Schema Flexibility&lt;/h2&gt;
&lt;p&gt;MapR Database supports JSON documents as a native data store. MapR Database makes it easy to store, query, and build applications with JSON documents. The Spark connector makes it easy to build real-time or batch pipelines between your JSON data and MapR Database and leverage Spark within the pipeline.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image7-1607499093783.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;With MapR Database, a table is automatically partitioned into tablets across a cluster by key range, providing for scalable and fast reads and writes by row key. In this use case, the row key, the _id, consists of the product ID and the review timestamp, so the table is automatically partitioned and sorted by product ID and time.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image11-1607499101776.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The Spark MapR Database Connector architecture has a connection object in every Spark Executor, allowing for distributed parallel writes, reads, or scans with MapR Database tablets (partitions).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image25-1607499112026.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Writing to a MapR Database Sink&lt;/h2&gt;
&lt;p&gt;To write a Spark Stream to MapR Database, specify the &lt;a href=&quot;https://docs.datafabric.hpe.com/62/Spark/StructuredSparkStreaming.html&quot;&gt;format with the tablePath, idFieldPath, createTable, bulkMode, and sampleSize parameters&lt;/a&gt;. The following example writes out the df7 DataFrame to MapR Database and starts the stream.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import com.mapr.db.spark.impl._
import com.mapr.db.spark.streaming._
import com.mapr.db.spark.sql._
import com.mapr.db.spark.streaming.MapRDBSourceConfig

var tableName: String = &quot;/user/mapr/reviewtable&quot;
val writedb = df7.writeStream
   .format(MapRDBSourceConfig.Format)
   .option(MapRDBSourceConfig.TablePathOption, tableName)
   .option(MapRDBSourceConfig.IdFieldPathOption, &quot;_id&quot;)
   .option(MapRDBSourceConfig.CreateTableOption, false)
   .option(&quot;checkpointLocation&quot;, &quot;/tmp/reviewdb&quot;)
   .option(MapRDBSourceConfig.BulkModeOption, true)
   .option(MapRDBSourceConfig.SampleSizeOption, 1000)

writedb.start()
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image8-1607499120962.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Querying MapR Database JSON with Spark SQL&lt;/h2&gt;
&lt;p&gt;The Spark MapR Database Connector enables users to perform complex SQL queries and updates on top of MapR Database using a Spark Dataset, while applying critical techniques such as projection and filter pushdown, custom partitioning, and data locality.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image19-1607499129679.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Loading Data from MapR Database into a Spark Dataset&lt;/h2&gt;
&lt;p&gt;To &lt;a href=&quot;https://docs.datafabric.hpe.com/62/Spark/LoadDataFromMapRDBasDataset.html&quot;&gt;load data from a MapR Database JSON&lt;/a&gt; table into an Apache Spark Dataset, we invoke the &lt;code&gt;loadFromMapRDB&lt;/code&gt; method on a SparkSession object, providing the table path and schema. This returns a DataFrame with the product review schema:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;val schema = StructType(Array(
    StructField(&quot;_id&quot;, StringType, true),
    StructField(&quot;asin&quot;, StringType, true),
    StructField(&quot;overall&quot;, DoubleType, true),
    StructField(&quot;reviewText&quot;, StringType, true),
    StructField(&quot;reviewTime&quot;, StringType, true),
    StructField(&quot;reviewerID&quot;, StringType, true),
    StructField(&quot;reviewerName&quot;, StringType, true),
    StructField(&quot;summary&quot;, StringType, true),
    StructField(&quot;label&quot;, StringType, true),
    StructField(&quot;prediction&quot;, StringType, true),
    StructField(&quot;unixReviewTime&quot;, LongType, true)
  ))

var tableName: String = &quot;/user/mapr/reviewtable&quot;
val df = spark
    .loadFromMapRDB(tableName, schema)

df.createOrReplaceTempView(&quot;reviews&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Explore and Query the Product Review Data with Spark SQL&lt;/h2&gt;
&lt;p&gt;Now we can query the data that is continuously streaming into MapR Database to ask questions with the Spark DataFrames domain-specific language or with Spark SQL.&lt;/p&gt;
&lt;p&gt;Below, we use the DataFrame select and show methods to display the review summary, overall rating, label, and prediction for the first 5 rows in tabular format:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;df.select(&quot;summary&quot;,&quot;overall&quot;,&quot;label&quot;,&quot;prediction&quot;).show(5)

result:
+--------------------+-------+-----+----------+
|             summary|overall|label|prediction|
+--------------------+-------+-----+----------+
|  Excellent Ammo Can|    5.0|  1.0|       1.0|
|    Glad I bought it|    5.0|  1.0|       1.0|
|WILL BUY FROM AGA...|    5.0|  1.0|       1.0|
|looked brand new ...|    5.0|  1.0|       1.0|
|   I LOVE THIS THING|    5.0|  1.0|       1.0|
+--------------------+-------+-----+----------+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;What are the products with the most high ratings?&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;df.filter($&quot;overall&quot; === 5.0)
.groupBy(&quot;overall&quot;,&quot;asin&quot;)
.count
.orderBy(desc(&quot;count&quot;)).show(2)

result:
+-------+----------+-----+
|overall|      asin|count|
+-------+----------+-----+
|    5.0|B004TNWD40|  242|
|    5.0|B004U8CP88|  201|
+-------+----------+-----+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Or, in SQL: what are the products with the most high ratings?&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-sql&quot;&gt;%sql

SELECT asin,overall, count(overall)  
FROM  reviews where overall=5.0
GROUP BY asin, overall
order by count(overall) desc limit 2
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Display the review text for the best-rated product&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;df.select(&quot;summary&quot;,&quot;reviewText&quot;,&quot;overall&quot;,&quot;label&quot;,&quot;prediction&quot;).filter(&quot;asin=&apos;B004TNWD40&apos;&quot;).show(5)

result:
+--------------------+--------------------+-------+-----+----------+
|             summary|          reviewText|overall|label|prediction|
+--------------------+--------------------+-------+-----+----------+
|             Awesome|This is the perfe...|    5.0|  1.0|       1.0|
|for the price you...|Great first knife...|    5.0|  1.0|       1.0|
|Great Mora qualit...|I have extensive ...|    4.0|  1.0|       1.0|
|       Amazing knife|All I can say is ...|    5.0|  1.0|       1.0|
|Swedish Mil. Mora...|Overall a nice kn...|    4.0|  1.0|       1.0|
+--------------------+--------------------+-------+-----+----------+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Or in SQL:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-sql&quot;&gt;%sql
select summary, label, prediction, overall
from reviews
where asin=&apos;B004TNWD40&apos;
order by overall desc
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image17-1607499137170.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What are the products with the highest count of low ratings?&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;df.filter($&quot;overall&quot; === 1.0)
.groupBy(&quot;overall&quot;,&quot;asin&quot;)
.count.orderBy(desc(&quot;count&quot;)).show(2)

result:
+-------+----------+-----+
|overall|      asin|count|
+-------+----------+-----+
|    1.0|B00A17I99Q|   18|
|    1.0|B00BGO0Q9O|   17|
+-------+----------+-----+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Display the review text for the product with the highest count of low ratings&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;df.select(&quot;summary&quot;,&quot;reviewText&quot;,&quot;overall&quot;,&quot;label&quot;,&quot;prediction&quot;)
.filter(&quot;asin=&apos;B00A17I99Q&apos;&quot;)
.orderBy(&quot;overall&quot;).show(8)

result:
+--------------------+--------------------+-------+-----+----------+
|             summary|          reviewText|overall|label|prediction|
+--------------------+--------------------+-------+-----+----------+
|         DO NOT BUY!|Do your research ...|    1.0|  0.0|       0.0|
|         Returned it|I could not get t...|    1.0|  0.0|       0.0|
| didn&apos;t do it for me|didn&apos;t like it.  ...|    1.0|  0.0|       0.0|
|Fragile, just lik...|Update My second....|    1.0|  0.0|       0.0|
|Almost perfect de...|I waited a while ...|    1.0|  0.0|       0.0|
|Not all its crack...|I started with th...|    1.0|  0.0|       0.0|
|         Returned...|I gave it as a gi...|    1.0|  0.0|       0.0|
|Defective product...|1st jawbone up 2n...|    1.0|  0.0|       0.0|
+--------------------+--------------------+-------+-----+----------+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Below, we calculate some prediction evaluation metrics for the streaming data continuously stored in MapR Database, starting with the true/false positive and negative counts:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;True positives are how often the model correctly predicted positive sentiment.&lt;/li&gt;
&lt;li&gt;False positives are how often the model incorrectly predicted positive sentiment.&lt;/li&gt;
&lt;li&gt;True negatives indicate how often the model correctly predicted negative sentiment.&lt;/li&gt;
&lt;li&gt;False negatives indicate how often the model incorrectly predicted negative sentiment.&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;val lp = predictions.select(&quot;label&quot;, &quot;prediction&quot;)
val counttotal = predictions.count().toDouble
val correct = lp.filter($&quot;label&quot; === $&quot;prediction&quot;).count().toDouble
val wrong = lp.filter(not($&quot;label&quot; === $&quot;prediction&quot;)).count().toDouble
val ratioWrong = wrong / counttotal
val ratioCorrect = correct / counttotal

val truen =( lp.filter($&quot;label&quot; === 0.0)
 .filter($&quot;label&quot; === $&quot;prediction&quot;)
 .count()) /counttotal

val truep = (lp.filter($&quot;label&quot; === 1.0)
 .filter($&quot;label&quot; === $&quot;prediction&quot;)
 .count())/counttotal

val falsep = (lp.filter($&quot;label&quot; === 0.0)
 .filter(not($&quot;label&quot; === $&quot;prediction&quot;))
 .count())/counttotal

val falsen = (lp.filter($&quot;label&quot; === 1.0)
 .filter(not($&quot;label&quot; === $&quot;prediction&quot;))
 .count())/counttotal

val precision= truep / (truep + falsep)
val recall= truep / (truep + falsen)
val fmeasure= 2 * precision * recall / (precision + recall)
val accuracy=(truep + truen) / (truep + truen + falsep + falsen)


results:
counttotal: Double = 84160.0
correct: Double = 76925.0
wrong: Double = 7235.0
truep: Double = 0.8582461977186312
truen: Double = 0.05578659695817491
falsep: Double = 0.014543726235741445
falsen: Double = 0.07142347908745247
ratioWrong: Double = 0.08596720532319392
ratioCorrect: Double = 0.9140327946768061
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Projection and Filter Push Down into MapR Database&lt;/h2&gt;
&lt;p&gt;You can see the physical plan for a DataFrame query by calling the explain method shown below. In the plan below (note the projected fields and the PushedFilters entry), we see projection and filter push down: the scanning of the &lt;em&gt;overall&lt;/em&gt; and &lt;em&gt;summary&lt;/em&gt; columns and the filter on the &lt;em&gt;overall&lt;/em&gt; column are pushed down into MapR Database, so the scanning and filtering take place in MapR Database before the data is returned to Spark. Projection pushdown minimizes data transfer between MapR Database and the Spark engine by omitting unnecessary fields from table scans. It is especially beneficial when a table contains many columns. Filter pushdown improves performance by reducing the amount of data passed between MapR Database and the Spark engine when filtering data.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;// notice projection of selected fields [summary]
// notice PushedFilters: overall
df.filter(&quot;overall &gt; 3&quot;).select(&quot;summary&quot;).explain

result:
== Physical Plan ==
*(1) Project [summary#7]
+- *(1) Filter (isnotnull(overall#2) &amp;#x26;&amp;#x26; (overall#2 &gt; 3.0))
+- *(1) Scan MapRDBRelation MapRDBTableScanRDD
[summary#7,overall#2]
PushedFilters: [IsNotNull(overall),
GreaterThan(overall,3.0)],
ReadSchema: struct&amp;#x3C;summary:string,overall:double&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Querying the Data with Apache Drill&lt;/h2&gt;
&lt;p&gt;Apache Drill is an open source, low-latency query engine for big data that delivers interactive SQL analytics at petabyte scale. Drill provides a massively parallel processing execution engine, built to perform distributed query processing across the various nodes in a cluster.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image12-1607499144193.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;With Drill, you can use SQL to interactively query and join data from files in JSON, Parquet, or CSV format, Hive, and NoSQL stores, including HBase, MapR Database, and Mongo, without defining schemas. MapR provides a &lt;a href=&quot;https://package.mapr.com/tools/MapR-JDBC/MapR_Drill/&quot;&gt;Drill JDBC&lt;/a&gt; driver that you can use to connect Java applications and BI tools, such as SQuirreL and Spotfire, to Drill.&lt;/p&gt;
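&lt;p&gt;For example, a minimal sketch of querying the review table from a JVM application over JDBC might look like the following. It assumes the Drill JDBC driver jar is on the classpath and reuses the connection string and credentials from the Drill shell command shown below.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import java.sql.DriverManager

// Connection string and credentials match the sqlline example below
val conn = DriverManager.getConnection(&quot;jdbc:drill:zk=localhost:5181&quot;, &quot;mapr&quot;, &quot;mapr&quot;)
val stmt = conn.createStatement()
val rs = stmt.executeQuery(
  &quot;SELECT summary, prediction FROM dfs.`/user/mapr/reviewtable` LIMIT 5&quot;)

while (rs.next()) {
  val summary = rs.getString(&quot;summary&quot;)
  val prediction = rs.getString(&quot;prediction&quot;)
  println(summary + &quot; : &quot; + prediction)
}

rs.close()
stmt.close()
conn.close()
&lt;/code&gt;&lt;/pre&gt;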
&lt;h2&gt;Below are some example SQL queries using the Drill shell.&lt;/h2&gt;
&lt;p&gt;Start the Drill shell with:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;sqlline -u jdbc:drill:zk=localhost:5181 -n mapr -p mapr&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;How many streaming product reviews were stored in MapR Database?&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;select count(_id) as totalreviews from dfs.`/user/mapr/reviewtable`;

result:
+---------------+
| totalreviews  |
+---------------+
| 84160         |
+---------------+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;How many reviews are there for each rating?&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;select overall, count(overall) as countoverall from dfs.`/user/mapr/reviewtable` group by overall order by overall desc;

result:
+----------+---------------+
| overall  | countoverall  |
+----------+---------------+
| 5.0      | 57827         |
| 4.0      | 20414         |
| 2.0      | 3166          |
| 1.0      | 2753          |
+----------+---------------+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;What are the products with the most high review ratings?&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;select overall, asin, count(*) as ratingcount, sum(overall) as ratingsum
from dfs.`/user/mapr/reviewtable`
group by overall, asin
order by  sum(overall) desc limit 5;

result:
+----------+-------------+--------------+------------+
| overall  |    asin     | ratingcount  | ratingsum  |
+----------+-------------+--------------+------------+
| 5.0      | B004TNWD40  | 242          | 1210.0     |
| 5.0      | B004U8CP88  | 201          | 1005.0     |
| 5.0      | B006QF3TW4  | 186          | 930.0      |
| 5.0      | B006X9DLQM  | 183          | 915.0      |
| 5.0      | B004RR0N8Q  | 165          | 825.0      |
+----------+-------------+--------------+------------+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;What are the products with the most positive review predictions?&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;select prediction, asin, count(*) as predictioncount, sum(prediction) as predictionsum
from dfs.`/user/mapr/reviewtable`
group by prediction, asin
order by sum(prediction) desc limit 5;

result:
+-------------+-------------+------------------+----------------+
| prediction  |    asin     | predictioncount  | predictionsum  |
+-------------+-------------+------------------+----------------+
| 1.0         | B004TNWD40  | 263              | 263.0          |
| 1.0         | B004U8CP88  | 252              | 252.0          |
| 1.0         | B006X9DLQM  | 218              | 218.0          |
| 1.0         | B006QF3TW4  | 217              | 217.0          |
| 1.0         | B004RR0N8Q  | 193              | 193.0          |
+-------------+-------------+------------------+----------------+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Show the review summaries for the  product with the most high review ratings&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;select summary, prediction
from dfs.`/user/mapr/reviewtable`
where asin=&apos;B004TNWD40&apos; limit 5;

result:
+---------------------------------------------------+-------------+
|                      summary                      | prediction  |
+---------------------------------------------------+-------------+
| Awesome                                           | 1.0         |
| for the price you  cant go wrong with this knife  | 1.0         |
| Great Mora quality and economy                    | 1.0         |
| Amazing knife                                     | 1.0         |
| Swedish Mil. Mora Knife                           | 1.0         |
+---------------------------------------------------+-------------+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Show the review tokens for the product with the most positive reviews&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;select reviewTokens from dfs.`/user/mapr/reviewtable` where asin=&apos;B004TNWD40&apos; limit 1;

 [ &quot;awesome&quot;, &quot;perfect&quot;, &quot;belt/pocket/neck&quot;, &quot;knife&quot;, &quot;carbon&quot;, &quot;steel&quot;, &quot;blade&quot;, &quot;last&quot;, &quot;life&quot;, &quot;time!&quot;, &quot;handle&quot;, &quot;sheath&quot;, &quot;plastic&quot;, &quot;cheap&quot;, &quot;kind&quot;, &quot;plastic&quot;, &quot;durable&quot;, &quot;also&quot;, &quot;last&quot;, &quot;life&quot;, &quot;time&quot;, &quot;everyone&quot;, &quot;loves&quot;, &quot;doors&quot;, &quot;this!&quot;, &quot;yes&quot;, &quot;ones&quot;, &quot;bone&quot;, &quot;handles&quot;, &quot;leather&quot;, &quot;sheaths&quot;, &quot;$100+&quot; ]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;What are the products with the most low review ratings?&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;SELECT asin,overall, count(overall) as rcount
FROM dfs.`/user/mapr/reviewtable`
where overall=1.0
GROUP BY asin, overall
order by count(overall) desc limit 2

result:
+-------------+----------+---------+
|    asin     | overall  | rcount  |
+-------------+----------+---------+
| B00A17I99Q  | 1.0      | 18      |
| B008VS8M58  | 1.0      | 17      |
+-------------+----------+---------+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;What are the products with the most negative review predictions?&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;select prediction, asin, count(*) as predictioncount, sum(prediction) as predictionsum from dfs.`/user/mapr/reviewtable` group by prediction, asin order by  sum(prediction)  limit 2;

result:
+-------------+-------------+------------------+----------------+
| prediction  |    asin     | predictioncount  | predictionsum  |
+-------------+-------------+------------------+----------------+
| 0.0         | B007QEUWSI  | 4                | 0.0            |
| 0.0         | B007QTHPX8  | 4                | 0.0            |
+-------------+-------------+------------------+----------------+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Show the review summaries for the  product with the most low review ratings&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;select summary
from dfs.`/user/mapr/reviewtable`
where asin=&apos;B00A17I99Q&apos; and prediction=0.0 limit 5;

result:
+---------------------------------------------------------+
|                         summary                         |
+---------------------------------------------------------+
| A comparison to Fitbit One -- The Holistic Wrist        |
| Fragile, just like the first Jawbone UP!  Overpriced    |
| Great concept, STILL horrible for daily use             |
| Excellent idea, bad ergonomics, worse manufacturing...  |
| get size larger                                         |
+---------------------------------------------------------+
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Querying the Data with the MapR Database Shell&lt;/h2&gt;
&lt;p&gt;The mapr dbshell is a tool that enables you to create and perform basic manipulation of JSON tables and documents. You run dbshell by typing mapr dbshell on the command line after logging into a node in a MapR cluster.&lt;/p&gt;
&lt;h2&gt;Below are some example queries using the MapR dbshell&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Show the review summary, id, prediction  for the  product with the most high review ratings (&lt;code&gt;_id starts with B004TNWD40&lt;/code&gt;)&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;find /user/mapr/reviewtable --where &apos;{&quot;$and&quot;:[{&quot;$eq&quot;:{&quot;overall&quot;:5.0}}, { &quot;$like&quot; : {&quot;_id&quot;:&quot;%B004TNWD40%&quot;} }]}&apos; --f _id,prediction,summary --limit 5

result:

{&quot;_id&quot;:&quot;B004TNWD40_1256083200&quot;,&quot;prediction&quot;:1,&quot;summary&quot;:&quot;Awesome&quot;}
{&quot;_id&quot;:&quot;B004TNWD40_1257120000&quot;,&quot;prediction&quot;:1,&quot;summary&quot;:&quot;for the price you  cant go wrong with this knife&quot;}
{&quot;_id&quot;:&quot;B004TNWD40_1279065600&quot;,&quot;prediction&quot;:1,&quot;summary&quot;:&quot;Amazing knife&quot;}
{&quot;_id&quot;:&quot;B004TNWD40_1302393600&quot;,&quot;prediction&quot;:1,&quot;summary&quot;:&quot;Great little knife&quot;}
{&quot;_id&quot;:&quot;B004TNWD40_1303257600&quot;,&quot;prediction&quot;:1,&quot;summary&quot;:&quot;AWESOME KNIFE&quot;}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Show the review summary, id, for 10 products with negative sentiment prediction and label&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;find /user/mapr/reviewtable --where &apos;{&quot;$and&quot;:[{&quot;$eq&quot;:{&quot;prediction&quot;:0.0}},{&quot;$eq&quot;:{&quot;label&quot;:0.0}} ]}&apos; --f _id,summary --limit 10

result:
{&quot;_id&quot;:&quot;B003Y64RBA_1312243200&quot;,&quot;summary&quot;:&quot;A $3.55 rubber band!&quot;}
{&quot;_id&quot;:&quot;B003Y64RBA_1399334400&quot;,&quot;summary&quot;:&quot;cheap not worthy&quot;}
{&quot;_id&quot;:&quot;B003Y71V2C_1359244800&quot;,&quot;summary&quot;:&quot;Couple of Problems&quot;}
{&quot;_id&quot;:&quot;B003Y73EPY_1349740800&quot;,&quot;summary&quot;:&quot;Short Term Pedals - Eggbeaters 1&quot;}
{&quot;_id&quot;:&quot;B003Y9CMGY_1306886400&quot;,&quot;summary&quot;:&quot;Expensive batteries.&quot;}
{&quot;_id&quot;:&quot;B003YCWFRM_1336089600&quot;,&quot;summary&quot;:&quot;Poor design&quot;}
{&quot;_id&quot;:&quot;B003YCWFRM_1377043200&quot;,&quot;summary&quot;:&quot;Great while it lasted&quot;}
{&quot;_id&quot;:&quot;B003YD0KZU_1321920000&quot;,&quot;summary&quot;:&quot;No belt clip!!!  Just like the other reviewer...&quot;}
{&quot;_id&quot;:&quot;B003YD0KZU_1338768000&quot;,&quot;summary&quot;:&quot;Useless&quot;}
{&quot;_id&quot;:&quot;B003YD1M5M_1354665600&quot;,&quot;summary&quot;:&quot;Can&apos;t recomend this knife.&quot;}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;In this post, you learned how to use the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A Spark machine learning model in a Spark Structured Streaming application&lt;/li&gt;
&lt;li&gt;Spark Structured Streaming with MapR Event Store to ingest messages using the Kafka API&lt;/li&gt;
&lt;li&gt;Spark Structured Streaming to persist to MapR Database so the results are continuously and rapidly available for SQL analysis&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;All of the components of the use case architecture we just discussed can run on the same cluster with the MapR Data Platform. The MapR Data Platform integrates global event streaming, real-time database capabilities, and scalable enterprise storage with Spark, Drill, and machine learning libraries to power the development of next-generation intelligent applications that take advantage of modern computational paradigms and infrastructure.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image23-1607499151723.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Code&lt;/h2&gt;
&lt;p&gt;All of the data and code to train the models and draw your own conclusions, using Apache Spark, are located in GitHub. Refer to the GitHub README for more information about running the code.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/caroljmcdonald/mapr-sparkml-sentiment-classification&quot;&gt;https://github.com/caroljmcdonald/mapr-sparkml-sentiment-classification&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;em&gt;Editor&apos;s Note:  This tutorial is the second part in a series related to this topic. The first part in this series is found here: &lt;a href=&quot;/blog/streaming-machine-learning-pipeline-for-sentiment-analysis-using-apache-&quot;&gt;Part 1.&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Kubernetes Tutorial part 2 of 3: How to Install and Deploy Applications at Scale on K8s ]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may…]]></description><link>https://developer.hpe.com/kubernetes-tutorial-part-2-of-3-how-to-install-and-deploy-applications-at-scale-on-k8s/</link><guid isPermaLink="false">https://developer.hpe.com/kubernetes-tutorial-part-2-of-3-how-to-install-and-deploy-applications-at-scale-on-k8s/</guid><pubDate>Wed, 31 Mar 2021 15:14:49 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Martijn Kieboom&quot;,
&quot;publish&quot;: &quot;2018-04-26T10:46:00.000&quot;,
&quot;tags&quot;: &quot;open-source-software&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt;&lt;/em&gt; &lt;a href=&quot;/blog/kubernetes-tutorial-how-to-install-and-deploy-applications-at-scale-on-k&quot;&gt;Part 1 in this series&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;In my previous blog, I described the reasoning behind containers and how to manage them in large-scale production environments. This blog describes two ways to get started with Kubernetes:&lt;/p&gt;
&lt;p&gt;For a small Kubernetes deployment, we will be using Minikube, a simple solution to run a single-node environment locally on your machine using virtualization technology like VirtualBox or VMware.&lt;/p&gt;
&lt;p&gt;After having some hands-on experience with Kubernetes using the Minikube environment, it is time to deploy an actual multi-node Kubernetes cluster. For that, we will be using a three node environment, running Red Hat/CentOS 7.x.&lt;/p&gt;
&lt;p&gt;Time to roll up your sleeves and get started!&lt;/p&gt;
&lt;h2&gt;Kubernetes Installation Using Minikube&lt;/h2&gt;
&lt;p&gt;If you don&apos;t have experience with Kubernetes, Minikube is a perfect way to take your first steps into the container management world. Minikube is an easy way to deploy a single node Kubernetes environment.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Minikube leverages virtualization technology, like VirtualBox, VMware, Hyper-V, and others. So prior to running Minikube, make sure you have one of these virtualization technologies installed on your client. For a complete list of supported virtualization layers, visit the Minikube GitHub page at: &lt;a href=&quot;https://github.com/kubernetes/minikube&quot;&gt;https://github.com/kubernetes/minikube&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;With a virtualization technology installed on your machine, let&apos;s go ahead and install Minikube.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Download Minikube&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Minikube comes in a precompiled, single executable file that you can simply download to your client:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Install minikube on MacOS
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64 &amp;#x26;&amp;#x26; chmod +x minikube &amp;#x26;&amp;#x26; sudo mv minikube /usr/local/bin/

# Install minikube on Linux
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 &amp;#x26;&amp;#x26; chmod +x minikube &amp;#x26;&amp;#x26; sudo mv minikube /usr/local/bin/

# Install minikube on Windows
# Download the executable file and save it as minikube.exe:
https://storage.googleapis.com/minikube/releases/latest/minikube-windows-amd64.exe

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Launch a Kubernetes Virtual Cluster&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;With Minikube installed, it is time to launch our virtual single node Kubernetes cluster.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Launch Kubernetes cluster
minikube start

# Alternatively, launch Kubernetes cluster with a specific Memory allocation
# default memory is 2048 MB
minikube start --memory 6144

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That&apos;s all! No, really, it is.&lt;/p&gt;
&lt;p&gt;Minikube will automatically download, import, and launch an image matching your virtualization technology. For that reason, it does take a few minutes to complete. Notice that Minikube automatically imported a virtual machine into your virtualization environment.&lt;/p&gt;
&lt;p&gt;There are many parameters you can configure as part of the &apos;start&apos; command (&apos;minikube start --help&apos;), but as Minikube is aimed at ease of use, the default values are a solid standard to get started. One recommended parameter to set is the virtual machine&apos;s memory. It defaults to 2GB, which might be on the small side if you want to try out a few containers on the Minikube cluster.&lt;/p&gt;
&lt;p&gt;With the Kubernetes cluster running, we can use Minikube to connect to the VM and launch the Kubernetes Web Dashboard:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Launch Kubernetes Dashboard
minikube dashboard

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will automatically launch your default browser and open the Kubernetes dashboard. Whereas Kubernetes clusters normally ask for a security token to log in, Minikube is aimed at ease of use and therefore no login is required. Have a look around at the various Kubernetes menu options.&lt;/p&gt;
&lt;p&gt;From here, you can now start deploying the MapR Volume Driver Plugin for Kubernetes.&lt;/p&gt;
&lt;p&gt;If at any point in time you want to stop the Minikube cluster, it is as easy as you can imagine:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Stop Kubernetes
minikube stop

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &apos;stop&apos; command will gently shutdown your virtualized Kubernetes cluster, including all containers that you might have deployed on it. A simple &apos;minikube start&apos; will relaunch the existing VM again, after which you can use &apos;minikube dashboard&apos; to open the Kubernetes Dashboard.&lt;/p&gt;
&lt;p&gt;Once you&apos;re done with the Minikube Kubernetes cluster, simply delete the VM:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Delete the Kubernetes VM
minikube delete

&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Kubernetes Installation on Red Hat/CentOS 7.x&lt;/h2&gt;
&lt;p&gt;Once you have experience with Kubernetes by using Minikube, it is time to deploy a multi-node Kubernetes cluster. This section describes how to deploy a multi-node Kubernetes cluster on an environment running CentOS as the operating system.&lt;/p&gt;
&lt;p&gt;One final note before we start: do not openly connect this deployed cluster to the internet as securing the Kubernetes cluster is out of scope for this blog.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The commands in this prerequisite chapter have to be executed on each of the Kubernetes cluster nodes individually.&lt;/p&gt;
&lt;p&gt;If you&apos;re using AWS EC2 nodes, make sure to enable the &apos;extra&apos; repository:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# On AWS EC2 enable the &apos;extra&apos; repository containing git, docker, etc.

yum-config-manager --enable rhui-REGION-rhel-server-extras
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To get started, we need to disable SELinux as well as memory swapping on all nodes:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Disable SELinux
setenforce 0
sed -i &apos;/^SELINUX./ { s/enforcing/disabled/; }&apos; /etc/selinux/config

# Disable swap
swapoff -a
sudo sed -i &apos;/ swap / s/^\(.*\)$/#\1/g&apos; /etc/fstab

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Additionally, we also need to enable bridged networking for Kubernetes:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Set iptables
cat &amp;#x3C;&amp;#x3C;EOF &gt; /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And, as the last step of the prerequisites, we need to install Docker to run containers:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Install docker
yum install -y docker

# Launch Docker and enable it on system boot
systemctl start docker
systemctl enable docker

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Kubernetes Installation&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The following commands also have to be executed on each of the individual cluster nodes.&lt;/p&gt;
&lt;p&gt;Add the Kubernetes repository, and install the Kubernetes tools—kubelet, kubeadm, and kubectl:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Install kubernetes repo
cat &amp;#x3C;&amp;#x3C;EOF &gt; /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

# Install Kubernetes and start it
yum install -y kubelet kubeadm kubectl
systemctl start kubelet
systemctl enable kubelet

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;At this point, we have the Kubernetes packages installed on all cluster nodes. We can now start configuring Kubernetes. To do so, run the following commands on only one (1) single cluster node.&lt;/p&gt;
&lt;p&gt;On the first node of the cluster, initialize the Kubernetes master by running kubeadm init. Please note that this task might take minutes to complete, as it will pull in all required containers.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Initialize Kubernetes master
# Validate the ip-address of the node:
hostname --ip-address

# If the ip address in the above command is correct, run the following.
# Otherwise manually provide the correct address for apiserver-advertise-address
kubeadm init --pod-network-cidr=10.244.0.0/16 \
--apiserver-advertise-address=$(hostname --ip-address)

# The kubeadm command will take a few minutes and it will print a &apos;kubeadm join&apos;
# command once completed. Make sure to capture and store this &apos;kubeadm join&apos;
# command as it is required to add other nodes to the Kubernetes cluster
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once the cluster is initialized, we can copy the generated configuration file (admin.conf) to the home directory ($HOME/.kube/config) for easy cluster administration using the kubectl cli:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Deploy kube config
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To allow the pods and containers to communicate with each other over the network, a cluster network needs to be set up. Flannel, one of the various cluster networking solutions available, is the one we will use in this blog. For more information on Kubernetes networking, visit: &lt;a href=&quot;https://kubernetes.io/docs/concepts/cluster-administration/networking/&quot;&gt;https://kubernetes.io/docs/concepts/cluster-administration/networking/&lt;/a&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Install Flannel for networking
# Doc: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#44-joining-your-nodes
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;By default, Kubernetes doesn&apos;t schedule pods on the master node, as that could lead to resource contention as well as security issues: pods might consume so many system resources that the Kubernetes master is negatively impacted. For single-node clusters, however (for testing, etc.), you can allow pods to run on the master node as follows:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Allow pods on master (not recommended for production clusters)
kubectl taint nodes --all node-role.kubernetes.io/master-
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It will take a couple of minutes for all containers to start. Use the following command to validate that all pods are running prior to continuing:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Validate all pods are running
kubectl get pods --all-namespaces
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can manage Kubernetes completely via the command line tool kubectl, but having a visual and graphical user interface to manage the cluster state can be very useful as well. To do so, let’s deploy the Kubernetes Dashboard:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;
# Deploy Dashboard web ui
# https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As the Kubernetes dashboard itself runs inside the cluster, we need to expose its service to be able to access the dashboard from the outside world:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Edit the dashboard to be open to the world (for demos only!)
# Change type from &apos;ClusterIP&apos; to &apos;NodePort&apos;
kubectl -n kube-system edit service kubernetes-dashboard
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Kubernetes user administration is out of scope for this blog post. Instead, we will grant cluster administrator rights to the default service account in the kube-system namespace. Again: not for production clusters!&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Make default user part of cluster admin
# not for production clusters, for demos only!
kubectl create clusterrolebinding --user system:serviceaccount:kube-system:default kube-system-cluster-admin --clusterrole cluster-admin
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To log in to the Kubernetes dashboard, a login token is required. To obtain the login token:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Get the login token
kubectl describe serviceaccount default -n kube-system
kubectl describe secret default-token -n kube-system
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Alternatively, it is possible to open the dashboard without a login (once again: not recommended for production systems). Simply click the &apos;skip&apos; button in the Kubernetes dashboard login page after applying the following:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Allow opening the k8 dashboard without login (not for production clusters, for demos only!)
cat &amp;#x3C;&amp;#x3C;EOF &amp;#x3E; k8auth.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
EOF
kubectl create -f k8auth.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Services exposed with type NodePort are, by default, assigned a port in the 30000-32767 range. To get the Kubernetes Dashboard port number, execute the following:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Get the port that kubernetes dashboard runs on (should be a port in 30000+ range)
kubectl -n kube-system get service kubernetes-dashboard
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Launch the Kubernetes dashboard in your favorite internet browser:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Open browser and connect to the Kubernetes Dashboard
https://&amp;#x3C;node-ip&amp;#x3E;:&amp;#x3C;dashboard-port&amp;#x3E;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you run into networking issues when connecting to the Dashboard, try using the Kubernetes proxy to connect to the Kubernetes internal networking:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# If unable to connect to Dashboard, try using the Kubernetes proxy:
kubectl proxy

# With proxy running, open the Dashboard using the following url:
http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With the master node up and running, it is possible to add additional nodes to your Kubernetes cluster. To do so, use the &apos;kubeadm join …&apos; command, as noted earlier in this blog. Please note, however, that the kubeadm command uses security tokens to authenticate itself with the master node. These tokens will expire after 24 hours, after which a new token has to be generated as explained below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Add additional nodes to the cluster (if required) using the earlier noted kubeadm join command
kubeadm join …

# On Master, show all nodes part of the cluster:
kubectl get nodes

# In case the token to join has expired, create a new token:
# On Master, list the existing tokens:
kubeadm token list

# On Master, if there are no valid tokens, create a new token and list it:
kubeadm token create
kubeadm token list

# Join additional nodes in the cluster with the newly created token, e.g.,:
kubeadm join 172.16.1.125:6443 --discovery-token-unsafe-skip-ca-verification --token 5d4164.15b01d9af2e64824
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That&apos;s it: you now have a multi-node Kubernetes environment running!&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Troubleshooting and Reset&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;When running into issues, use the following command to print logging information:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Troubleshooting
journalctl -xeu kubelet
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To remove a node from the cluster:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# On master, remove a node from the cluster (hard)
kubectl get nodes
kubectl delete nodes &amp;#x3C;node-name&amp;#x3E;

# On the removed node, reset and uninstall the Kubernetes installation
kubeadm reset
yum erase kube* -y
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;MapR Volume Driver Plugin for Kubernetes&lt;/h2&gt;
&lt;p&gt;The MapR Volume Driver Plugin for Kubernetes allows running any Docker container from Docker Hub on a Kubernetes cluster, with MapR as the persistent data store for the container. Deployment of the MapR Volume Driver Plugin is very straightforward and works both on the previously described Minikube and on the CentOS Kubernetes environments.&lt;/p&gt;
&lt;p&gt;Make sure to check the latest documentation of the Volume Driver Plugin for any changes: &lt;a href=&quot;https://docs.datafabric.hpe.com/62/PersistentStorage/kdf_installation.html&quot;&gt;https://docs.datafabric.hpe.com/62/PersistentStorage/kdf_installation.html&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The MapR Volume Driver Plugin (like any other Kubernetes configuration and deployment) consists of a set of YAML files used to configure and deploy pods and containers. The YAML files for the MapR Volume Driver Plugin can be found on the public &lt;a href=&quot;https://package.mapr.com/&quot;&gt;package.mapr.com&lt;/a&gt; repository:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Locate and check the latest version of the MapR Volume Driver Plugin:
https://package.mapr.com/tools/KubernetesDataFabric/

# To download the version 1.0.0 files, for example:
wget https://package.mapr.com/tools/KubernetesDataFabric/v1.0.0/kdf-namespace.yaml
wget https://package.mapr.com/tools/KubernetesDataFabric/v1.0.0/kdf-rbac.yaml
wget https://package.mapr.com/tools/KubernetesDataFabric/v1.0.0/kdf-plugin-centos.yaml
wget https://package.mapr.com/tools/KubernetesDataFabric/v1.0.0/kdf-provisioner.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With the YAML files downloaded, you need to specify the IP address of the Kubernetes master in the Volume Driver Plugin YAML configuration:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Configure the MapR Kubernetes storage plugin to point to the Kubernetes Master:
vi kdf-plugin-centos.yaml

- name : KUBERNETES_SERVICE_LOCATION
  value: &quot;changeme!:6443&quot;

# Set the KUBERNETES_SERVICE_LOCATION ip to match your Kubernetes master node
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The final step is to deploy the Kubernetes configuration files to launch the MapR Volume Driver Plugin. We will start by creating the &quot;mapr-system&quot; namespace (kdf-namespace.yaml) in which the Volume Driver Plugin will run. Additionally, we set up the role-based access control (kdf-rbac.yaml) so that containers on Kubernetes can access the MapR Volume Driver Plugin. Finally, we will deploy the MapR Volume Driver Plugin itself.&lt;/p&gt;
&lt;p&gt;Simply use the kubectl command line tool to load the yaml files into Kubernetes:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Launch the various yaml files to deploy the MapR Volume Driver Plugin
kubectl create -f kdf-namespace.yaml
kubectl create -f kdf-rbac.yaml
kubectl create -f kdf-plugin-centos.yaml
kubectl create -f kdf-provisioner.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That&apos;s it! Check your Kubernetes Dashboard to validate the deployment status by navigating to the overview of the &apos;mapr-system&apos; namespace.&lt;/p&gt;
&lt;p&gt;To remove the MapR Volume Driver Plugin from your Kubernetes cluster:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Remove the MapR Volume Driver Plugin:
kubectl delete -f kdf-plugin-centos.yaml
kubectl delete -f kdf-provisioner.yaml
kubectl delete -f kdf-rbac.yaml
kubectl delete -f kdf-namespace.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Additional Resources:&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;/blog/kubernetes-tutorial-how-to-install-and-deploy-applications-at-scale-on-k&quot;&gt;Part 1 in this series&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://kubernetes.io/docs/setup/independent/install-kubeadm/&quot;&gt;https://kubernetes.io/docs/setup/independent/install-kubeadm/&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/&quot;&gt;https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://mapr.com/docs/home/PersistentStorage/kdf_overview.html&quot;&gt;https://mapr.com/docs/home/PersistentStorage/kdf_overview.html&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Open Source Contributor Explains How KubeDirector Empowers Data Intensive Apps]]></title><description><![CDATA[Kartik Mathur As a leading global, edge-to-cloud platform-as-a-service company, Hewlett Packard Enterprise (HPE) prides itself in employing…]]></description><link>https://developer.hpe.com/open-source-contributor-explains-how-kubedirector-empowers-data-intensive-apps/</link><guid isPermaLink="false">https://developer.hpe.com/open-source-contributor-explains-how-kubedirector-empowers-data-intensive-apps/</guid><pubDate>Thu, 18 Mar 2021 09:52:53 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/3/kartik-blog-small-1616160879068.jpg&quot; alt=&quot;Kartik Mathur&quot;&gt;&lt;/p&gt;
&lt;p&gt;As a leading global, &lt;strong&gt;edge-to-cloud platform-as-a-service company&lt;/strong&gt;, Hewlett Packard Enterprise (HPE) prides itself in employing team members who share one common purpose: to advance the way people live and work. In this blog series, you’ll get to meet a number of them as I interview some of the &lt;a href=&quot;https://www.hpe.com/us/en/open-source.html&quot;&gt;open source&lt;/a&gt; experts who make up the HPE team.&lt;/p&gt;
&lt;p&gt;After having graduated with his MS in computer science from Indiana University Bloomington with a research focus on parallel computing and distributed systems, Kartik Mathur started his professional career at AMD. He then worked at several technology startup companies, including BlueData, which was acquired by HPE in 2019. Kartik is currently a Master Technologist at HPE and leads the MLOps initiatives on the HPE Ezmeral Software Platform. He has a keen interest in container orchestration for large scale data processing clusters and is a major contributor to the KubeDirector open source project.&lt;/p&gt;
&lt;h2&gt;How did you get started with open source technologies?&lt;/h2&gt;
&lt;p&gt;I first started contributing to open source for a project that lets you query Cassandra tables using sparkSQL. But honestly, almost any piece of software that I’ve written or consumed was based on open source projects or libraries in some capacity. In my role at BlueData, I first used KubeDirector to orchestrate Big Data and ML pipelines, but then I started contributing as well. I found it so engaging that I’m currently the leading contributor and maintainer for the KubeDirector project.&lt;/p&gt;
&lt;h2&gt;What makes KubeDirector so special?&lt;/h2&gt;
&lt;p&gt;KubeDirector empowers application developers to deploy their applications as a custom resource without having to implement a full-blown Kubernetes Operator. It decouples the operator boilerplate using application intelligence.&lt;/p&gt;
&lt;p&gt;Basically, KubeDirector works as a custom controller for generic applications. It uses standard Kubernetes (K8s) facilities of custom resources and API extensions to implement stateful scaleout application clusters. This is a unique approach that enables the transparent integration with K8s user/resource management, as well as existing K8s clients and tools.&lt;/p&gt;
&lt;p&gt;KubeDirector is unique since it has a rich catalog of complex stateful applications as part of the open-source codebase. And we are constantly adding more and more applications that developers can use as a template/example to onboard their application of choice on Kubernetes. Without KubeDirector, this would be an intimidating task with a huge learning curve, especially for implementing Day2 operations for their applications, like scaling in and out.&lt;/p&gt;
&lt;h2&gt;How do HPE customers benefit from KubeDirector?&lt;/h2&gt;
&lt;p&gt;Over the course of the last few years, Kubernetes has become the de-facto standard for orchestrating containerized applications. It’s worked quite well for stateless applications; less so for stateful applications, such as those focused on AI, machine learning, and big data analytics.&lt;/p&gt;
&lt;p&gt;When containers were first introduced as a way to package microservices, they were designed to be entirely stateless and ephemeral. A container would spin up, do its job, and then disappear, without leaving any record of what happened while it was running. Stateful applications save data to persistent disk storage for use by the server, clients, and other applications. An example of a stateful application is a database or key-value store to which data is saved and retrieved by other applications. There are tools, such as StatefulSets and Persistent Volumes, which help developers build stateful applications on Kubernetes, but it all becomes quite difficult to manage as an application scales.&lt;/p&gt;
&lt;p&gt;Since KubeDirector provides an application-agnostic deployment pattern, it enables developers to run non-cloud native stateful applications on Kubernetes without modifying any code. It makes it easier to deploy data-intensive distributed applications for AI and analytics use cases, such as Hadoop, Spark, Kafka, TensorFlow, etc., on Kubernetes. HPE Ezmeral customers automatically receive the benefits of KubeDirector because it’s integrated into the HPE Ezmeral Software Platform, enabling them to more easily develop AI and ML applications.&lt;/p&gt;
&lt;h2&gt;What’s up next for KubeDirector?&lt;/h2&gt;
&lt;p&gt;The next big ticket item for me to work on is to tighten up the security. I’m currently looking at implementing Istio awareness for KubeDirector application endpoints, providing virtual endpoints to secure the microservices communication between them using JSON Web Tokens (JWT).&lt;/p&gt;
&lt;h2&gt;Is there any advice you’d like to give others who might follow in your footsteps?&lt;/h2&gt;
&lt;p&gt;Robert Noyce, co-founder of Intel Corporation, once said “Knowledge shared is power multiplied.” I’m a huge fan of this quote. I feel as though open source projects are the biggest testament to the power of knowledge sharing. It gives me immense pleasure in being able to learn and collaborate with people worldwide, as being a part of the open source community helps me grow as an engineer and lift others as well.&lt;/p&gt;
&lt;p&gt;To learn more about the open source projects that HPE is involved with, please visit &lt;a href=&quot;https://www.hpe.com/us/en/open-source.html&quot;&gt;our website&lt;/a&gt;. Interested in exploring what HPE offers for developers and data scientists? Check out our &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE DEV site&lt;/a&gt; for a ton of articles, workshops, tutorials, and other resources.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Boost Your Analytics Factory into Hyperdrive]]></title><description><![CDATA[You know that there’s a treasure trove of value hidden in the data streaming all around you. But how do you capture it, analyze it, and…]]></description><link>https://developer.hpe.com/boost-your-analytics-factory-into-hyperdrive/</link><guid isPermaLink="false">https://developer.hpe.com/boost-your-analytics-factory-into-hyperdrive/</guid><pubDate>Tue, 09 Mar 2021 13:03:33 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/ezmeral-day-picture-1615293804678.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;You know that there’s a treasure trove of value hidden in the data streaming all around you. But how do you capture it, analyze it, and extract that golden nugget of information that impacts your applications? More importantly, how do you do this when the data is constantly changing? Critical applications need to be able to make decisions in real-time or risk the possibility of failure.&lt;/p&gt;
&lt;p&gt;In today’s data-driven world, being able to distill data into meaningful insights is key. But the deployment of analytics and data science in today’s organizations tends to be spotty and run through uncoordinated efforts. Organizations are looking for a unified experience that spans across environments. In the free, &lt;a href=&quot;https://hpe.events.cube365.net/hpe/hpeezmeral&quot;&gt;HPE Ezmeral \\ Analytics Unleashed&lt;/a&gt; virtual event being held on March 17, 2021, experts will show you how you can industrialize your data to accelerate your data-driven transformation.&lt;/p&gt;
&lt;p&gt;This 90-minute event, packed with multiple sessions, talks, and demos, will focus on the top challenges many companies face as they attempt to harness the power of their data, and how data scientists are using HPE Ezmeral to overcome these challenges. Experts will demo three real-world Day in the Life examples and discuss best practices in scaling artificial intelligence (AI) to meet the needs of the enterprise.&lt;/p&gt;
&lt;p&gt;To start the day, Kirk Borne, globally known data scientist and thought leader on all things data-at-edge, will share his perspective on a few key questions many in the industry are asking. He’ll also offer his thoughts on how the HPE Ezmeral portfolio can help. Afterwards, Kumar Sreekanti, CTO &amp;#x26; Head of Software and Robert Christiansen, VP of Strategy, Office of the CTO, will give you an inside scoop on how the HPE Ezmeral portfolio helps ensure data is always accessible, scalable, and secure. Later, you’ll hear from three customers, Ericsson, DXC, and ORock, who will describe how they each are able to unlock the value of their data. Some of the topics they’ll cover include containerizing apps in complex enterprise environments, scaling data intensive workloads with speed and security, and managing legacy transformations.&lt;/p&gt;
&lt;p&gt;The Day in the Life Demo Showcase will focus on three scenarios:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;A Day in the Life of Data, which follows the arc of data as it is intelligently stretched and stitched across the HPE Ezmeral Data Fabric.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A Day in the Life of a Data Scientist, where you’ll get to see a demo on how HPE Ezmeral enables a data scientist to self-serve data, spin up infrastructure, and leverage analytic toolkits.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A Day in the Life of an IT Admin, where you’ll be walked through basic infrastructure setup so you can see how HPE Ezmeral simplifies the management of infrastructure for analytics.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;By seeing how others have overcome the challenges of harnessing the power of data through analytics, artificial intelligence, and machine learning, you’ll be able to bring back ideas that will work for your organization. So, take advantage of this free event. There’s no registration required. Simply join us for &lt;a href=&quot;https://hpe.events.cube365.net/hpe/hpeezmeral&quot;&gt;HPE Ezmeral \\ Analytics Unleashed&lt;/a&gt; on March 17th, 2021 at 8 a.m. PT / 4 p.m. GMT.&lt;/p&gt;
&lt;p&gt;We’ll also be sharing details on how HPE is collaborating with technology partners, like Dataiku and Run:AI through the HPE Ezmeral marketplace – a one-stop shop where customers can come to explore, learn, engage, and deploy with technology partners and open source projects. Join the ISV Ecosystem team and our partners to see how you can leverage a wide variety of different validated partner tools to improve the quality and reliability of your end-to-end solutions.&lt;/p&gt;
&lt;p&gt;For more details on the event, check out our blog post on &lt;a href=&quot;https://community.hpe.com/t5/HPE-Ezmeral-Uncut/Virtual-event-Data-is-your-pot-of-gold-Time-to-unleash-its-value/ba-p/7123545#.YEZsf51Kg2x&quot;&gt;HPE Ezmeral: Uncut&lt;/a&gt;. Check out the &lt;a href=&quot;https://developer.hpe.com/platform/hpe-ezmeral-container-platform/home&quot;&gt;HPE Ezmeral Container Platform - now known as HPE Ezmeral Runtime Enterprise&lt;/a&gt; on &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE DEV&lt;/a&gt; for more information on the industry’s first enterprise-grade container platform designed to deploy both cloud-native and non cloud-native applications using Kubernetes – running on bare-metal or virtualized infrastructure, on any public cloud, and at the edge. And for more information on the HPE Ezmeral Data Fabric, check out its &lt;a href=&quot;https://developer.hpe.com/platform/hpe-ezmeral-data-fabric/home&quot;&gt;platform page here&lt;/a&gt;. It enables you to run the right application at the right time in the right place on the right data.&lt;/p&gt;
&lt;p&gt;Finally, if you want to get some real hands-on experience with the HPE Ezmeral Container Platform, take our &lt;strong&gt;free&lt;/strong&gt; &lt;a href=&quot;https://learn.ezmeral.software.hpe.com/path/kubernetes-stateful-apps/container-platform-on-demand-developer-demos&quot;&gt;Workshop-on-Demand&lt;/a&gt;. This is an on-demand training course that leverages Jupyter Notebooks to help you really get to know the technology. You take the course on your own time and interact with our team on Slack if you have any questions. Just &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;join us on slack&lt;/a&gt; and reach out for the dedicated &lt;a href=&quot;https://hpedev.slack.com/archives/C01B60X8SSD&quot;&gt;#hpe-workshop-on-demand Channel&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[How Big Data is Reducing Costs and Improving Outcomes in Health Care]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may…]]></description><link>https://developer.hpe.com/how-big-data-is-reducing-costs-and-improving-outcomes-in-health-care/</link><guid isPermaLink="false">https://developer.hpe.com/how-big-data-is-reducing-costs-and-improving-outcomes-in-health-care/</guid><pubDate>Tue, 09 Mar 2021 12:19:17 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Carol McDonald&quot;,
&quot;publish&quot;: &quot;2016-06-07T07:00:00.000Z&quot;,
&quot;tags&quot;: &quot;machine-learning, use-case, healthcare&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;h2&gt;The Motivation for Big Data&lt;/h2&gt;
&lt;p&gt;Health care costs are driving the demand for big data-driven healthcare applications. U.S. health care spending has outpaced GDP growth for the past several decades and exceeds spending in any other developed country. Despite being more expensive, according to the Organisation for Economic Co-operation and Development (OECD), the US health system ranks last among eleven countries on measures of access, equity, quality, efficiency, and healthy lives. Standards and incentives for digitizing and sharing healthcare data, along with improvements in, and decreasing costs of, storage and parallel processing on commodity hardware, are causing a &lt;a href=&quot;http://www.mckinsey.com/industries/healthcare-systems-and-services/our-insights/the-big-data-revolution-in-us-health-care&quot;&gt;big data revolution in health care&lt;/a&gt; with the goal of &lt;a href=&quot;http://www.nationalacademies.org/hmd/Reports/2012/Best-Care-at-Lower-Cost-The-Path-to-Continuously-Learning-Health-Care-in-America.aspx&quot;&gt;better care at lower cost.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/life-expectancy-1-1611296455217.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Value Based Care&lt;/h2&gt;
&lt;p&gt;A goal of the Affordable Care Act is to improve health care through the meaningful use of health information technology in order to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Improve healthcare quality and coordination so that outcomes are consistent with current professional knowledge&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Reduce healthcare costs, reduce avoidable overuse&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Provide support for reformed payment structures&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/improved-health-care-2-1611296469408.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Health insurance companies, such as Medicare and Medicaid, are shifting from fee-for-service compensation to value-based data-driven incentives that reward high-quality, cost-effective patient care and demonstrate meaningful use of electronic health records.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/improve-health0care-3-1611296489579.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Health Care Data&lt;/h2&gt;
&lt;p&gt;Unstructured data forms about 80% of information in the healthcare industry and is growing exponentially. Getting access to this unstructured data—such as output from medical devices, doctor’s notes, lab results, imaging reports, medical correspondence, clinical data, and financial data—is an invaluable resource for improving patient care and increasing efficiency.&lt;/p&gt;
&lt;p&gt;Examples of healthcare data sources that will benefit from big data and analytics:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Claims: the documents providers submit to insurance companies to get paid. A key component of the &lt;a href=&quot;http://www.edibasics.com/edi-resources/document-standards/hipaa/&quot;&gt;Health Insurance Portability and Accountability Act (HIPAA)&lt;/a&gt; is the establishment of national standards for electronic healthcare transactions in order to improve efficiency by encouraging the widespread use of Electronic Data Interchange (EDI) between healthcare providers and insurance companies. Claim transactions include International Classification of Diseases (ICD) diagnostic codes, medications, dates, provider IDs, the cost, etc.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Electronic Health/Medical Record data (EHR or EMR): Medicare and Medicaid EHR &lt;a href=&quot;https://www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/index.html?redirect=/EHRincentivePrograms/&quot;&gt;incentive programs&lt;/a&gt; were established to encourage professionals and hospitals to adopt and demonstrate meaningful use of certified EHR technology. EHRs facilitate a comprehensive sharing of data with other providers and medical applications. EHRs contain the data from the delivery of healthcare, which includes diagnosis, treatment, prescriptions, lab tests, and radiology. &lt;a href=&quot;http://www.hl7.org/implement/standards/index.cfm?ref=nav&quot;&gt;Health Level Seven International (HL7)&lt;/a&gt; provides standards for the exchange, integration, sharing, and retrieval of electronic health record data.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Pharmaceutical R&amp;#x26;D: Clinical Trials Data, Genomic Data.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Patient behavior and sentiment data.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Medical Device Data: Patient sensor data from the home or hospital.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/health-data-inputs-4-1611296501518.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Big Data Trends in Healthcare&lt;/h2&gt;
&lt;p&gt;There is a move toward evidence-based medicine, which involves making use of all clinical data available and factoring that into clinical and advanced analytics. Capturing and bringing all of the information about a patient together gives a more complete view for insight into care coordination and outcomes-based reimbursement, population health management, and patient engagement and outreach.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/converged-data-platform-mapr-5-1611296512091.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Example Healthcare Big Data Use Cases&lt;/h2&gt;
&lt;h3&gt;Reducing Fraud Waste and Abuse with Big Data Analytics&lt;/h3&gt;
&lt;p&gt;The cost of fraud, waste, and abuse in the healthcare industry is a key contributor to spiraling health care costs in the United States, but big data analytics can be a game changer for health care fraud. The Centers for Medicare and Medicaid Services prevented more than $210.7 million in healthcare fraud in one year using predictive analytics. UnitedHealthcare transitioned to a predictive modeling environment based on a Hadoop big data platform, in order to identify inaccurate claims in a systematic, repeatable way, and generated a 2200% return on their big data/advanced technology investment.&lt;/p&gt;
&lt;p&gt;The key to identifying fraud is the ability to store and go back in history to analyze large unstructured datasets of historical claims and to use machine-learning algorithms to detect anomalies and patterns.&lt;/p&gt;
&lt;p&gt;Healthcare organizations can analyze patient records and billing to detect anomalies such as a hospital’s overutilization of services in short time periods, patients receiving healthcare services from different hospitals in different locations simultaneously, or identical prescriptions for the same patient filled in multiple locations.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/machine-learning-in-health-care-6-1611296525640.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The Centers for Medicare and Medicaid Services uses predictive analytics to assign risk scores to specific claims and providers, to identify billing patterns, and claim aberrancies difficult to detect by previous methods. Rules-based models flag certain charges automatically. Anomaly models raise suspicion based on factors that seem improbable. Predictive models compare charges against a fraud profile and raise suspicion. Graph models raise suspicion based on the relations of a provider; fraudulent billers are often organized as tight networks.&lt;/p&gt;
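&lt;p&gt;To make the &quot;anomaly model&quot; idea concrete, here is a toy sketch (not from the original article) using scikit-learn&apos;s IsolationForest over a few made-up, provider-level claim features; the feature names and numbers are purely illustrative.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Toy anomaly-detection sketch over made-up provider-level claim features
import numpy as np
from sklearn.ensemble import IsolationForest

# one row per provider: [claims_per_day, avg_billed_amount, distinct_patients]
X = np.array([
    [12,  180.0, 10],
    [14,  210.0, 11],
    [13,  175.0,  9],
    [95, 4200.0,  7],   # a suspicious billing pattern
])

model = IsolationForest(contamination=0.25, random_state=0).fit(X)
print(model.predict(X))  # -1 flags likely anomalies, 1 means normal
&lt;/code&gt;&lt;/pre&gt;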
&lt;h3&gt;Predictive Analytics to Improve Outcomes&lt;/h3&gt;
&lt;p&gt;Initiatives such as meaningful use are accelerating the adoption of Electronic Health Records and the volume and detail of patient information is growing rapidly. Being able to combine and analyze a variety of structured and unstructured data across multiple data sources aids in the accuracy of diagnosing patient conditions, matching treatments with outcomes, and predicting patients at risk for disease or readmission.&lt;/p&gt;
&lt;p&gt;Predictive modeling over data derived from EHRs is being used for early diagnosis and is reducing mortality rates from problems such as congestive heart failure and &lt;a href=&quot;http://www.healthcareitnews.com/news/data-analytics-strategy-slashes-sepsis-death-rates&quot;&gt;sepsis.&lt;/a&gt; Congestive Heart Failure (CHF) accounts for the most health care spending. The earlier it is diagnosed the better it can be treated, avoiding expensive complications, but early manifestations can be easily missed by physicians. A machine learning example from Georgia Tech demonstrated that machine-learning algorithms could look at many more factors in patients’ charts than doctors and, by adding additional features, there was a substantial increase in the ability of the model to distinguish people who have CHF from people who don’t.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/uncovering-features-7-1611296534274.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Predictive modeling and machine learning on large sample sizes, with more patient data, can uncover nuances and patterns that couldn’t be previously uncovered. &lt;a href=&quot;http://www.datapine.com/blog/big-data-examples-in-healthcare/&quot;&gt;Optum Labs has collected EHRs&lt;/a&gt; of over 30 million patients to create a database for predictive analytics tools that will help doctors make Big Data-informed decisions to improve patients’ treatment.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/health-care-model-building-and-scoring-8-1611296546479.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Real-time Monitoring of Patients&lt;/h3&gt;
&lt;p&gt;Healthcare facilities are looking to provide more proactive care to their patients by constantly monitoring patient vital signs. The data from these various monitors can be analyzed in real time to send alerts to care providers, so they know instantly about changes in a patient&apos;s condition. Processing real-time events with machine learning algorithms can give physicians the insights to make lifesaving decisions and allow for effective interventions.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/medical-device-to-data-streaming-9-1611296558176.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Big Data Architecture for Healthcare: What do we need to do? And how do we do this at scale?&lt;/h2&gt;
&lt;p&gt;We need to collect the data, process the data, store the data, and finally serve the data for analysis, machine learning, and dashboards.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/health-care-data-architecture-10-1611296569137.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Data Ingestion with NFS&lt;/h3&gt;
&lt;p&gt;The Network File System (NFS) protocol provides remote access to shared disks across networks. An NFS-enabled server can share directories and files with clients, allowing users and programs to access files on remote systems as if they were stored locally.&lt;/p&gt;
&lt;p&gt;Unlike other Hadoop distributions that only allow data to be imported into the cluster as a batch operation, MapR lets you mount the cluster itself via NFS so that your applications can read and write data directly. The MapR Distributed File and Object Store enables direct file modification and multiple concurrent reads and writes via POSIX semantics. An NFS-mounted cluster allows easy ingestion of data sources, such as files, images, etc., from other machines, leveraging standard Linux commands, utilities, applications, and scripts.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/data-sources-collection-storage-11-1611296581906.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;From your MapR Cluster, you can move data to and from more expensive storage using NFS. For example, you can move processed hot data to a relational database or Data Warehouse and you can also move off colder data into lower cost Hadoop storage.&lt;/p&gt;
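&lt;p&gt;As a small illustration of what &quot;as if they were stored locally&quot; means in practice, the sketch below (my own, with hypothetical paths) copies a claims file into an NFS-mounted cluster directory using nothing but ordinary Python file operations; the /mapr/&amp;#x3C;cluster-name&amp;#x3E; mount point convention is assumed.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Ingest a file into an NFS-mounted cluster with ordinary file I/O
# (both paths below are hypothetical)
import shutil

local_file = &apos;/data/edi_claims/claims_20160607.csv&apos;
cluster_dir = &apos;/mapr/my.cluster.com/ingest/claims/&apos;

shutil.copy(local_file, cluster_dir)
&lt;/code&gt;&lt;/pre&gt;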
&lt;h3&gt;Streaming Data Ingestion with the Kafka API&lt;/h3&gt;
&lt;p&gt;As more and more healthcare solutions require real-time analytics and fast moving data, ingesting data into the system using event streaming will become critical. MapR Event Store is a new distributed messaging system that enables producers and consumers to exchange events in real time via the Apache Kafka 0.9 API. Topics are logical collections of messages that organize events into categories.&lt;/p&gt;
&lt;p&gt;Topics are partitioned, spreading the load for parallel messaging across multiple servers, which provides for faster throughput and scalability.&lt;/p&gt;
&lt;p&gt;Messages are not deleted from topics when read and topics can have multiple different consumers. This allows processing of the same messages by different consumers for different purposes.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/kafka-api-producers-and-consumers-12-1611296593323.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
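&lt;p&gt;As a rough sketch of the producer/consumer pattern described above, the snippet below uses the generic kafka-python client; the broker address, topic name, and message format are my own illustrative assumptions (with MapR Event Store you would point a Kafka 0.9-compatible client at a stream path such as /path/stream:topic instead).&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Minimal producer/consumer sketch with the kafka-python client
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(bootstrap_servers=&apos;broker1:9092&apos;)
producer.send(&apos;vitals&apos;, b&apos;patient42\t2016-06-07T10:00:00\theart_rate=87&apos;)
producer.flush()

consumer = KafkaConsumer(&apos;vitals&apos;,
                         bootstrap_servers=&apos;broker1:9092&apos;,
                         auto_offset_reset=&apos;earliest&apos;,
                         consumer_timeout_ms=5000)
for message in consumer:
    print(message.value)
&lt;/code&gt;&lt;/pre&gt;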
&lt;h3&gt;Batch Processing&lt;/h3&gt;
&lt;p&gt;Batch processing is for processing bulk loads of data gathered over a period where a fast response time is not critical; for example, EDI claims gathered over a day and submitted together in a file for processing overnight.&lt;/p&gt;
&lt;p&gt;Apache Hive is an open source Hadoop application for data warehousing. It offers a simple way to apply structure to large amounts of unstructured data, and then perform batch SQL-like queries on that data.&lt;/p&gt;
&lt;p&gt;Apache Spark is a next generation distributed parallel processing framework that provides a rich set of APIs for machine learning, graph processing, and SQL. Spark is much faster than MapReduce for iterative algorithms, because Spark tries to keep data in memory, whereas MapReduce involves more reading from and writing to disk.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/batch-processing-collect-to-processing-data-13-1611296604993.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Stream Processing&lt;/h3&gt;
&lt;p&gt;Spark Streaming brings Spark&apos;s APIs to stream processing, letting you write streaming jobs the same way you write batch jobs. Other popular options for Stream processing are Apache Flink and Apache Storm.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/stream-processing-streaming-and-processing-14-1611296616672.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
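&lt;p&gt;To give a feel for how a streaming job looks, here is a minimal PySpark Streaming sketch of my own; the socket source, record layout, and alert threshold are illustrative assumptions rather than anything prescribed by the article.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Minimal Spark Streaming sketch: flag abnormal vital-sign readings per micro-batch
from pyspark import SparkConf, SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext(conf=SparkConf().setAppName(&apos;vitals-stream&apos;))
ssc = StreamingContext(sc, 10)   # 10-second micro-batches

# assume each socket line is tab-separated: patient_id, heart_rate
lines = ssc.socketTextStream(&apos;localhost&apos;, 9999)
readings = lines.map(lambda line: line.split(&apos;\t&apos;)) \
                .map(lambda fields: (fields[0], int(fields[1])))

alerts = readings.filter(lambda kv: kv[1] &gt; 120)   # hypothetical alert threshold
alerts.pprint()

ssc.start()
ssc.awaitTermination()
&lt;/code&gt;&lt;/pre&gt;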
&lt;h3&gt;Storing in a NoSQL Database&lt;/h3&gt;
&lt;p&gt;For storing large amounts of data, we need a data store that supports fast writes and scales well. MapR Database was designed to scale: data that is accessed together is stored together.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/storing-in-nosql-rdbms-to-mapr-db-15-1611296626000.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;With MapR Database, data is automatically distributed or partitioned across the cluster by key range. Each server is the source for a subset of data. Grouping the data by row key provides for really fast reads and writes.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/group-by-row-for-fast-read-writes-16-1611296635923.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;MapR Database has 2 APIs:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;JSON API for storing document models.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;HBase API for wide column data models (typical for time series data).&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/mapr-db-apis-17-1611296648222.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
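&lt;p&gt;To illustrate the wide-column pattern behind the HBase API mentioned above, here is a generic sketch using the happybase client against an HBase-compatible Thrift gateway; the host, table, column family, and row-key layout are illustrative assumptions (on MapR Database you would more typically use the HBase Java API or the MapR client libraries).&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Wide-column / row-key sketch via the generic HBase API (happybase client)
import happybase

connection = happybase.Connection(&apos;thrift-gateway-host&apos;)   # hypothetical host
table = connection.table(&apos;patient_vitals&apos;)                 # hypothetical table

# composite row key (patient id + timestamp) keeps one patient&apos;s readings together
table.put(b&apos;patient42_20160607100000&apos;, {b&apos;v:heart_rate&apos;: b&apos;87&apos;, b&apos;v:spo2&apos;: b&apos;98&apos;})

# scanning a row-key prefix returns that patient&apos;s readings, sorted by key
for key, data in table.scan(row_prefix=b&apos;patient42_&apos;):
    print(key, data)
&lt;/code&gt;&lt;/pre&gt;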
&lt;h2&gt;Serving the Data&lt;/h2&gt;
&lt;p&gt;End applications, such as dashboards and business intelligence tools, consume the processed data. The output can also be stored back in our database for further processing later.&lt;/p&gt;
&lt;p&gt;Apache Drill enables self-service data exploration on big data with a schema-free SQL query engine. Drill offers the following benefits:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Drill can read from all kinds of data.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Drill is optimized for interactive applications, and thus is designed to process petabytes of data and trillions of records in seconds.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Drill can be used by data analysts, in conjunction with tools like Tableau, for fast visualizations.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
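&lt;p&gt;As a small example of that self-service angle, the sketch below submits a SQL query through Drill&apos;s REST API (default port 8047); the Drillbit host and the queried path are assumptions for illustration only.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Query Apache Drill over its REST API
import requests

payload = {
    &apos;queryType&apos;: &apos;SQL&apos;,
    &apos;query&apos;: &quot;SELECT item_id, count(*) AS nb FROM dfs.`/data/claims` GROUP BY item_id LIMIT 10&quot;,
}
resp = requests.post(&apos;http://drillbit-host:8047/query.json&apos;, json=payload)
resp.raise_for_status()

for row in resp.json()[&apos;rows&apos;]:
    print(row)
&lt;/code&gt;&lt;/pre&gt;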
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/big-data-architecture-components-18-1611296658517.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;All of the components of the architecture we just discussed can run on the same cluster with the MapR Data Platform. There are several advantages of integrating Hadoop, Spark, real-time database capabilities, global event streaming, and scalable enterprise storage:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Maintaining only one cluster means less infrastructure to provision, manage, and monitor for security, reliability, and performance, dramatically lowering both hardware and operational costs.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Having producers and consumers on the same cluster means fewer delays related to copying and moving data between clusters, and between applications.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/mapr-health-care-architecture-19-1611296669267.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Use Case Example Architectures&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/data-lake-architecture-20-1611296679971.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The National Institutes for Health built a data lake to combine separate institute data sets, in order to have all the information at one location where it could be shared and manipulated.&lt;/p&gt;
&lt;p&gt;UnitedHealthcare IT used Hadoop as the basic data framework and built a single platform equipped with the tools needed to analyze information generated by claims, prescriptions, plan participants and contracted care providers, and associated claim review outcomes.&lt;/p&gt;
&lt;h3&gt;Streaming System of Record for Healthcare&lt;/h3&gt;
&lt;p&gt;Liaison Technologies provides cloud-based solutions to help organizations integrate, manage and secure data across the enterprise. One vertical solution they provide is for the healthcare and life sciences industry, which comes with two challenges–meeting HIPAA compliance requirements and the proliferation of data formats and representations. With MapR Event Store, the data lineage portion of the compliance challenge is solved because the stream becomes a system of record by being an infinite, immutable log of each data change.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/streaming-system-of-record-health-care-21-1611296690735.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;To illustrate the latter challenge, a patient record may be consumed in different ways—a document representation, a graph representation, or search—by different users, such as pharmaceutical companies, hospitals, clinics, physicians, etc. By streaming data changes in real-time to the MapR Database HBase, MapR Database JSON document, graph, and search databases, users always have the most up-to-date view of data in the most appropriate format. Further, by implementing this service on the MapR Data Platform, Liaison is able to secure all of the data components together, avoiding data and security silos that alternate solutions require.&lt;/p&gt;
&lt;h3&gt;Genome Processing&lt;/h3&gt;
&lt;p&gt;The Novartis team chose Hadoop and Apache Spark to build a workflow system that allows them to integrate, process and analyze diverse data for Next Generation Sequencing (NGS) research while being responsive to advances in the scientific literature.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/genome-processing-22-1611296700172.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The ability to capture, share, and store huge amounts of electronic healthcare data and transactions, along with advances in technology allowing the storage and fast processing of big data on commodity hardware, is transforming the healthcare industry by improving outcomes and reducing costs.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;References and More Information:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;http://www.mckinsey.com/industries/healthcare-systems-and-services/our-insights/the-big-data-revolution-in-us-health-care&quot;&gt;The Big Data Revolution in Health Care&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;http://www.nationalacademies.org/hmd/Reports/2012/Best-Care-at-Lower-Cost-The-Path-to-Continuously-Learning-Health-Care-in-America.aspx&quot;&gt;Better Care at Lower Cost the Path to Continuously Learning Health Care&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[From Pig to Spark: An Easy Journey to Spark for Apache Pig Developers]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may…]]></description><link>https://developer.hpe.com/from-pig-to-spark-an-easy-journey-to-spark-for-apache-pig-developers/</link><guid isPermaLink="false">https://developer.hpe.com/from-pig-to-spark-an-easy-journey-to-spark-for-apache-pig-developers/</guid><pubDate>Tue, 09 Mar 2021 11:42:12 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Philippe de Cuzey&quot;,
&quot;publish&quot;: &quot;2016-07-13T07:00:00.000Z&quot;,
&quot;tags&quot;: &quot;apache-spark&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;As a data analyst that primarily used Apache Pig in the past, I eventually needed to program more challenging jobs that required the use of Apache Spark, a more advanced and flexible language. At first, Spark may look a bit intimidating, but this blog post will show that the transition to Spark (especially PySpark) is quite easy.&lt;/p&gt;
&lt;p&gt;However, I&apos;m not advocating that you move from Apache Pig to Spark in all cases. Pig is a wonderful language. It&apos;s simple yet efficient when it comes to transforming data through projections and aggregations, and the productivity of Pig can&apos;t be beat for standard Map/Reduce jobs.&lt;/p&gt;
&lt;h2&gt;Apache Pig has great features, but …&lt;/h2&gt;
&lt;p&gt;I like to think of Pig as a high-level pipeline of Map/Reduce commands. As a former SQL programmer, I find it quite intuitive and, in my organization, our Hadoop jobs are still mostly developed in Pig.&lt;/p&gt;
&lt;p&gt;Pig has a lot of qualities: it is stable, scales very well, and integrates natively with the Hive metastore HCatalog. By describing each step atomically, it minimizes conceptual bugs that you often find in complicated SQL code.&lt;/p&gt;
&lt;p&gt;But sometimes Pig has limitations that make it a poor fit for your needs. The three main limitations are:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Pig is a pipeline and doesn’t offer loops or conditional branching (IF..THEN), which can sometimes be mandatory in your code.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;As beautifully stated in &lt;a href=&quot;https://databricks.com/blog/2014/03/20/apache-spark-a-delight-for-developers.html&quot;&gt;an article by Jai Ranganathan and Matei Zaharia&lt;/a&gt;:&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;While scripting frameworks like Apache Pig provide many high-level operators as well, Spark allows you to access these operators in the context of a full programming language—thus, you can use control statements, functions, and classes as you would in a typical programming environment.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Finally, a third Pig limitation is related to input data formats: although Pig is good with CSV and HCatalog, it seems a bit less comfortable with reading and processing some other data formats like JSON (through JsonLoader), whereas Spark integrates them natively.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Give Apache Spark a try&lt;/h2&gt;
&lt;p&gt;Time to take a dip in Spark! Pig and Spark share a common programming model that makes it easy to move from one to the other. Basically, you work through immutable transformations identified by an alias (Pig) or an RDD variable (Spark). Transformations are usually projections (maps), filters, or aggregations like GroupBy, sorts, etc.&lt;/p&gt;
&lt;p&gt;This common programming approach means that, for a Pig developer, the learning curve to Spark is fairly quick.&lt;/p&gt;
&lt;p&gt;PySpark is quite a natural choice for the data analyst who already has some basic Python skills, but the code would be similar in another flavor of Spark, such as Java or Scala.&lt;/p&gt;
&lt;h2&gt;A complete example&lt;/h2&gt;
&lt;p&gt;As an illustration, let’s take an example of a Pig script that loads a log file, filters it for a specific day, calculates the number of log entries grouped by item, and adds the item description from another file:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;/* load a log file of user sessions. Filter for a specific date and count entries per item
*/

f0 = LOAD &apos;logfile&apos; using PigStorage(&apos;\t&apos;) AS (log_date:chararray, item_id:chararray, some_stuff:chararray);
f1 = FILTER f0 BY log_date == &apos;20160515&apos;;
f2 = FOREACH f1 GENERATE item_id;
f3 = GROUP f2 BY item_id;
f4 = FOREACH f3 GENERATE group AS item_id, COUNT(f2) AS nb_entries;

/* add item name
*/

item1 = LOAD &apos;item&apos; using PigStorage(&apos;\t&apos;) AS (item_id:chararray, item_name:chararray);
join1 = JOIN f4 BY item_id LEFT, item1 BY item_id;
result = FOREACH join1 GENERATE f4::item_id, item_name, nb_entries;

STORE result INTO &apos;result_file&apos; USING PigStorage(&apos;\t&apos;);

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The code is fairly simple, and each step performs one transformation.&lt;/p&gt;
&lt;p&gt;Now in Spark, we start with raw Spark using low-level RDDs to show similarities with Pig code. In the code, things are detailed one alias at a time, but obviously production code would be more compact.&lt;/p&gt;
&lt;h2&gt;Raw Spark (using RDD)&lt;/h2&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from pyspark import SparkConf, SparkContext

conf = SparkConf()
sc = SparkContext(conf=conf)

# note: tuple-unpacking lambdas, as used below, require Python 2 / early PySpark
f0 = sc.textFile(&apos;logfile&apos;).map(lambda x: x.split(&apos;\t&apos;))
f1 = f0.filter(lambda x: x[0] == &apos;20160515&apos;)
f3 = f1.groupBy(lambda (log_date, item_id, some_stuff): item_id)
f4 = f3.map(lambda (item_id, iterable): (item_id, len(iterable)))

# add item name
item1 = sc.textFile(&apos;item&apos;).map(lambda x: x.split(&apos;\t&apos;))

# no need to set the key item_id on both parts before performing the join,
# it&apos;s already in first place on each part.

join1 = f4.leftOuterJoin(item1)

result = join1.map(lambda (item_id, (nb_entries, item_name)): (item_id, item_name, str(nb_entries)))

# create a line of tab-separated fields, and save it in the result file
result_to_store = result.map(lambda record: &apos;\t&apos;.join(record))
result_to_store.saveAsTextFile(&apos;result_file&apos;)

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We can see here a similar code outline between Pig and Spark, which makes it easier for a Pig developer to start coding in Spark. One drawback, however, is that for relatively simple operations like this, Pig is still more productive than Spark, even if execution time is better (but not astoundingly better) with Spark.&lt;/p&gt;
&lt;p&gt;Now that we are getting familiar with the low-level RDD API, the code can be improved by using DataFrames and Spark SQL, and rewritten in a more readable form:&lt;/p&gt;
&lt;h2&gt;Spark with DataFrames and SparkSQL&lt;/h2&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext
from pyspark.sql.types import StructType, StructField, StringType

conf = SparkConf()
sc = SparkContext(conf=conf)

sqlContext = SQLContext(sc)

f0 = sc.textFile(&apos;logfile&apos;).map(lambda x: x.split(&apos;\t&apos;))

fpFields = [
   StructField(&apos;log_date&apos;, StringType(), True),
   StructField(&apos;item_id&apos;, StringType(), True),
   StructField(&apos;some_stuff&apos;, StringType(), True)
]

fpSchema = StructType(fpFields)
df_f0 = sqlContext.createDataFrame(f0, fpSchema)
df_f0.registerTempTable(&apos;log&apos;)

f1_df = sqlContext.sql(
   &quot;SELECT log.item_id, count(*) AS nb_entries \
      FROM log \
     WHERE log_date = &apos;20160515&apos; \
  GROUP BY item_id&quot;
)
f1_df.registerTempTable(&apos;log_agg&apos;)

# items dataframe
item1 = sc.textFile(&apos;item&apos;).map(lambda x: x.split(&apos;\t&apos;))

itemFields = [
   StructField(&apos;item_id&apos;, StringType(), True),
   StructField(&apos;item_name&apos;, StringType(), True)
]

itemSchema = StructType(itemFields)
df_item1 = sqlContext.createDataFrame(item1, itemSchema)

df_item1.registerTempTable(&apos;item&apos;)

result = sqlContext.sql(
   &apos;SELECT log_agg.item_id, item_name, format_number(nb_entries, 0) \
      FROM log_agg \
  LEFT OUTER JOIN item ON log_agg.item_id = item.item_id&apos;
)

result_to_store = result.rdd \
     .map(lambda record: &apos;\t&apos;.join(record))

result_to_store.saveAsTextFile(&apos;result_file&apos;)

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I’m sure there are even more compact and elegant ways to do it in Spark SQL, but this is the outline.&lt;/p&gt;
&lt;p&gt;Now we have named fields, type safety, and compact SQL code that is more readable by a data analyst. Productivity has increased, and this is a better alternative to Pig.&lt;/p&gt;
&lt;p&gt;The drawback is that each piece of SQL is now a black box that can only be tested as a whole, which can prove tricky if the result differs from what is expected or if execution is slow. It is then up to the developer to design steps that remain readable and can be executed as individual units of code.&lt;/p&gt;
&lt;h2&gt;Loading data from Hive metastore HCatalog&lt;/h2&gt;
&lt;p&gt;If our data had been stored in the Hive metastore (HCatalog), all the DataFrame metadata would be inherited from the metastore and the Spark code would be even simpler:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from pyspark import SparkConf, SparkContext
from pyspark.sql import HiveContext

conf = SparkConf()
sc = SparkContext(conf=conf)
sqlContext = HiveContext(sc)

f1_df = sqlContext.sql(
   &quot;SELECT item_id, count(*) AS nb_entries \
      FROM my_db.log \
     WHERE log_date = &apos;20160515&apos; \
  GROUP BY item_id&quot;
)

f1_df.registerTempTable(&apos;log_agg&apos;)

result = sqlContext.sql(
   &quot;SELECT log_agg.item_id, item_name, format_number(nb_entries, 0) \
      FROM log_agg \
LEFT OUTER JOIN my_db.item item ON log_agg.item_id = item.item_id&quot;
)

result_to_store = result.rdd \
   .map(lambda record: &apos;\t&apos;.join(record))

result_to_store.saveAsTextFile(&apos;result_file&apos;)

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, this is a more compact and readable piece of code :)&lt;/p&gt;
&lt;p&gt;Let&apos;s push the advantage a bit further in favor of Spark: user-defined functions.&lt;/p&gt;
&lt;h2&gt;User-defined functions&lt;/h2&gt;
&lt;p&gt;As stated previously, in Spark there is obviously no need for UDFs; you would just write the function as a Python method:&lt;/p&gt;
&lt;p&gt;In Pig:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;/* the function below has been written and deployed in a jar file */
DEFINE myFancyUdf com.mydomain.myfunction1;

...

log1 = FOREACH log0 GENERATE field1, myFancyUdf(field1);

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In Spark:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def myFancyUdf(f1):
   # some processing on f1 producing result
   someStuff
   return result

log1 = log0.map(lambda field1: (field1, myFancyUdf(field1)))

&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;More advanced topics&lt;/h2&gt;
&lt;p&gt;In this section, let&apos;s take a look at how two of Pig&apos;s more powerful features translate to Spark:&lt;/p&gt;
&lt;h2&gt;Map-side joins&lt;/h2&gt;
&lt;p&gt;One handy feature of Pig is map-side joins, where one of the tables to join is small enough to be sent to each worker to take part in the Map job (not requiring the more expensive Reduce job). This is conveniently performed by using the “replicated” hint on the &lt;code&gt;JOIN&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Imagine that in our previous example, the ‘item’ table is small enough to fit in memory. The &lt;code&gt;join1&lt;/code&gt; alias becomes:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;join1 = JOIN f4 BY item_id, item1 BY item_id USING &apos;replicated&apos;;

result = FOREACH join1 GENERATE f4::item_id, item_name, nb_entries;

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In Spark this is performed quite easily with &lt;a href=&quot;http://spark.apache.org/docs/latest/rdd-programming-guide.html#broadcast-variables&quot;&gt;broadcast variables&lt;/a&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# broadcast items
item_bc = sc.broadcast(item1.collect())

def getItemName(item_id_to_match):
   &apos;&apos;&apos;
   gets item name from its id
   &apos;&apos;&apos;
   # we know there will be only one result, so we take the first from the list
   (id, name) = filter(lambda (id, name): id == item_id_to_match, item_bc.value)[0]
   return name

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The item table is broadcast to each worker node. The &lt;code&gt;getItemName()&lt;/code&gt; function then finds, in the broadcast table, which record holds a given &lt;code&gt;item_id&lt;/code&gt; and returns its name. This function is called on the map side of the Spark job, for each record processed.&lt;/p&gt;
&lt;p&gt;The complete code now looks like:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def getItemName(item_id_to_match):
   &apos;&apos;&apos;
   gets item name from its id
   &apos;&apos;&apos;
   # we know there will be only one result, so we take the first from the list
   (id, name) = filter(lambda (id, name): id == item_id_to_match, item_bc.value)[0]
   return name

f1_df = sqlContext.sql(
  &quot;SELECT item_id, count(*) AS nb_entries \
     FROM my_db.log \
    WHERE log_date = &apos;20160515&apos; \
 GROUP BY item_id&quot;
)

item_df = sqlContext.sql(
   &quot;SELECT item_id, item_name \
      FROM my_db.item&quot;
)

item_bc = sc.broadcast(item_df.rdd.collect())

# the join is replaced by a lookup in the broadcast table, on the map side
result = f1_df.rdd.map(lambda (item_id, nb_entries): (item_id, getItemName(item_id), str(nb_entries)))

result_to_store = result.map(lambda record: &apos;\t&apos;.join(record))
result_to_store.saveAsTextFile(&apos;result_file&apos;)

&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Window function: get the first n occurrences of a sorted list of grouped-by items&lt;/h2&gt;
&lt;p&gt;It is sometimes necessary to find the top n records of a table, grouped by a common feature. From the log files of our example, let&apos;s get, for each item, the 10 most recent records (in SQL this would be a window function over &lt;code&gt;PARTITION BY&lt;/code&gt;).&lt;/p&gt;
&lt;p&gt;In Pig, this can be accomplished with a piece of code like:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;f0 = LOAD &apos;logfile&apos; USING PigStorage(&apos;\t&apos;) AS (log_date:chararray, item_id:chararray, some_stuff:chararray);

f1 = GROUP f0 BY item_id;

f2 = FOREACH f1 {
   o = ORDER f0 BY log_date DESC;
   l = LIMIT o 10;
   GENERATE FLATTEN(l) AS (log_date, item_id, some_stuff);
}

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In Spark it&apos;s also feasible, either with low-level RDD operations or with Spark SQL &lt;a href=&quot;https://databricks.com/blog/2015/07/15/introducing-window-functions-in-spark-sql.html&quot;&gt;windowing capabilities&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Let’s start with the RDD low-level solution:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# create a tuple with the key for the GroupBy
f1 = f0.map(lambda (log_date, item_id, some_stuff): (item_id, (log_date, some_stuff)))

f2 = f1.groupByKey()

# the result of the GroupBy is a tuple (item_id, iterable over grouped items)
# we sort the iterable according to log_date and retain only the first 10 elements
f3 = f2.map(lambda (item_id, iter1): (item_id, sorted(list(iter1), key=lambda (log_date, some_stuff): log_date, reverse=True)[:10]))

# transform tuples of (item_id, [(log_date, some_stuff), ...]) into tuples of (log_date, item_id, some_stuff)
f4 = f3.flatMapValues(lambda x: x) \
       .map(lambda (item_id, (log_date, some_stuff)): (log_date, item_id, some_stuff))

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It&apos;s not very elegant, but it does the job.&lt;/p&gt;
&lt;p&gt;Then the SparkSQL solution:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;f1_df = sqlContext.sql(
&apos;SELECT \
   log_date, \
   item_id, \
   some_stuff \
 FROM ( \
   SELECT \
     log_date, \
     item_id, \
     some_stuff, \
     dense_rank() OVER (PARTITION BY item_id ORDER BY log_date DESC) AS rank \
   FROM my_db.log) tmp \
 WHERE rank &amp;#x3C;= 10&apos;)

f2 = f1_df.rdd.map(lambda row: (row.log_date, row.item_id, row.some_stuff))

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Much better!&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;I have voluntarily excluded from this blog post some interesting topics such as deploying, debugging, execution monitoring, dynamic resource allocation, partition and split size tuning, sampling, etc. The goal of this particular blog post is to show Pig developers how to start coding in Spark; I hope that from this perspective, you find it is helpful.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Spark Streaming and Twitter Sentiment Analysis]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may…]]></description><link>https://developer.hpe.com/spark-streaming-and-twitter-sentiment-analysis/</link><guid isPermaLink="false">https://developer.hpe.com/spark-streaming-and-twitter-sentiment-analysis/</guid><pubDate>Tue, 09 Mar 2021 09:59:08 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Nicolas A Perez&quot;,
&quot;publish&quot;: &quot;2016-04-19T07:00:00.000Z&quot;,
&quot;tags&quot;: [&quot;apache-spark&quot;,&quot;use-case&quot;]
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;This blog post is the result of my efforts to show to a coworker how to get the insights he needed by using the streaming capabilities and concise API of Apache Spark. In this blog post, you&apos;ll learn how to do some simple, yet very interesting,  analytics that will help you solve real problems by analyzing specific areas of a social network.&lt;/p&gt;
&lt;p&gt;Using a subset of a Twitter stream was the perfect choice to use in this demonstration, since it had everything we needed: an endless and continuous data source that was ready to be explored.&lt;/p&gt;
&lt;h2&gt;Spark Streaming, Minimized&lt;/h2&gt;
&lt;p&gt;Spark Streaming is very well explained &lt;a href=&quot;http://spark.apache.org/docs/latest/streaming-programming-guide.html&quot;&gt;&lt;em&gt;here&lt;/em&gt;&lt;/a&gt;,  so we are going to skip some of the details about the Streaming API and move on to setting up our app.&lt;/p&gt;
&lt;h2&gt;Setting Up Our App&lt;/h2&gt;
&lt;p&gt;Let’s see how to prepare our app before doing anything else.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val config = new SparkConf().setAppName(&quot;twitter-stream-sentiment&quot;)
val sc = new SparkContext(config)
sc.setLogLevel(&quot;WARN&quot;)
val ssc = new StreamingContext(sc, Seconds(5))

System.setProperty(&quot;twitter4j.oauth.consumerKey&quot;, &quot;consumerKey&quot;)
System.setProperty(&quot;twitter4j.oauth.consumerSecret&quot;, &quot;consumerSecret&quot;)
System.setProperty(&quot;twitter4j.oauth.accessToken&quot;, &quot;accessToken&quot;)
System.setProperty(&quot;twitter4j.oauth.accessTokenSecret&quot;, &quot;accessTokenSecret&quot;)

val stream = TwitterUtils.createStream(ssc, None)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here, we have created the Spark Context &lt;strong&gt;&lt;em&gt;sc&lt;/em&gt;&lt;/strong&gt; and set the log level to &lt;em&gt;WARN&lt;/em&gt; to eliminate the noisy log Spark generates. We also created a Streaming Context &lt;strong&gt;&lt;em&gt;ssc&lt;/em&gt;&lt;/strong&gt; using &lt;strong&gt;&lt;em&gt;sc&lt;/em&gt;&lt;/strong&gt;. Then we set up our Twitter credentials (before doing this we needed to follow &lt;a href=&quot;https://iag.me/socialmedia/how-to-create-a-twitter-app-in-8-easy-steps/&quot;&gt;these steps&lt;/a&gt;) that we got from the Twitter website. &lt;em&gt;Now the real fun starts&lt;/em&gt;.&lt;/p&gt;
&lt;h2&gt;What is Trending Right Now on Twitter?&lt;/h2&gt;
&lt;p&gt;It is easy to find out what is trending on Twitter at any given moment; it is just a matter of counting the appearances of each tag on the stream. Let’s see how Spark allows us to do this operation.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val tags = stream.flatMap { status =&gt;
  status.getHashtagEntities.map(_.getText)
}

tags.countByValue()
  .foreachRDD { rdd =&gt;
    val now = org.joda.time.DateTime.now()
    rdd
      .sortBy(_._2)
      .map(x =&gt; (x, now))
      .saveAsTextFile(s&quot;~/twitter/$now&quot;)
  }
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;First, we got the tags from the Tweets, counted how many times each tag appeared, and sorted them by count. After that, we persisted the result in order to point Splunk (or any other tool for that matter) to it. We could build some interesting dashboards using this information in order to track the most trending hashtags. Based on this information, my coworker could create campaigns and use these popular tags to attract a bigger audience.&lt;/p&gt;
&lt;h2&gt;Analyzing Tweets&lt;/h2&gt;
&lt;p&gt;Now we want to add functionality to get an overall opinion of what people think about a set of topics. For the sake of this example, let’s say that we want to know the &lt;strong&gt;&lt;em&gt;sentiment&lt;/em&gt;&lt;/strong&gt; of Tweets about &lt;strong&gt;Big Data&lt;/strong&gt; and &lt;strong&gt;Food&lt;/strong&gt;, two very unrelated topics.&lt;/p&gt;
&lt;p&gt;There are several APIs for analyzing sentiments from Tweets, but we are going to use an interesting library from &lt;strong&gt;The Stanford Natural Language Processing Group&lt;/strong&gt; in order to extract the corresponding &lt;strong&gt;&lt;em&gt;sentiments&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;In our &lt;strong&gt;&lt;em&gt;build.sbt&lt;/em&gt;&lt;/strong&gt; file we need to add the corresponding dependencies.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;libraryDependencies += &quot;edu.stanford.nlp&quot; % &quot;stanford-corenlp&quot; % &quot;3.5.1&quot;
libraryDependencies += &quot;edu.stanford.nlp&quot; % &quot;stanford-corenlp&quot; % &quot;3.5.1&quot; classifier &quot;models&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, we need to select only those Tweets we really care about by filtering the &lt;strong&gt;&lt;em&gt;stream&lt;/em&gt;&lt;/strong&gt; using certain &lt;em&gt;hashtag (#)&lt;/em&gt;. This filtering is quite easy, thanks to a unified Spark API.&lt;/p&gt;
&lt;p&gt;Let’s see how.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val tweets = stream.filter { t =&gt;
  val tags = t.getText.split(&quot; &quot;).filter(_.startsWith(&quot;#&quot;)).map(_.toLowerCase)
  tags.contains(&quot;#bigdata&quot;) &amp;#x26;&amp;#x26; tags.contains(&quot;#food&quot;)
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here, we get all the tags in each Tweet and check that it has been tagged with both &lt;strong&gt;&lt;em&gt;#bigdata&lt;/em&gt;&lt;/strong&gt; and &lt;strong&gt;&lt;em&gt;#food&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Once we have our Tweets, extracting the corresponding sentiment is quite easy. Let’s define a function that extracts the sentiment from the Tweet’s content so we can plug it in our pipeline.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;def detectSentiment(message: String): SENTIMENT_TYPE
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We are going to use this function, assuming it does what it should, and we will put its implementation at the end, since it&apos;s not the focus of this post. In order to get an idea of how it works, let&apos;s build some tests around it.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;it(&quot;should detect not understood sentiment&quot;) {
  detectSentiment(&quot;&quot;) should equal (NOT_UNDERSTOOD)
}

it(&quot;should detect a negative sentiment&quot;) {
  detectSentiment(&quot;I am feeling very sad and frustrated.&quot;) should equal (NEGATIVE)
}

it(&quot;should detect a neutral sentiment&quot;) {
  detectSentiment(&quot;I&apos;m watching a movie&quot;) should equal (NEUTRAL)
}

it(&quot;should detect a positive sentiment&quot;) {
  detectSentiment(&quot;It was a nice experience.&quot;) should equal (POSITIVE)
}

it(&quot;should detect a very positive sentiment&quot;) {
  detectSentiment(&quot;It was a very nice experience.&quot;) should equal (VERY_POSITIVE)
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;These tests should be enough to show how &lt;strong&gt;&lt;em&gt;detectSentiment&lt;/em&gt;&lt;/strong&gt; works.&lt;/p&gt;
&lt;p&gt;Let’s see an example.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val data = tweets.map { status =&gt;
  val sentiment = SentimentAnalysisUtils.detectSentiment(status.getText)
  val tags = status.getHashtagEntities.map(_.getText.toLowerCase)

  (status.getText, sentiment.toString, tags)
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here, &lt;strong&gt;&lt;em&gt;data&lt;/em&gt;&lt;/strong&gt; represents a &lt;em&gt;DStream&lt;/em&gt; of Tweets we want, the associated sentiment, and the hashtags within the Tweet (here we should find the tags we used to filter).&lt;/p&gt;
&lt;h2&gt;SQL Interoperability&lt;/h2&gt;
&lt;p&gt;Now we want to cross reference the sentiment data with an external dataset that we can query using SQL. For my coworker, it makes a lot of sense to be able to &lt;strong&gt;&lt;em&gt;join&lt;/em&gt;&lt;/strong&gt; the Twitter stream with his other dataset.&lt;/p&gt;
&lt;p&gt;Let’s take a look at how we could achieve this.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val sqlContext = new SQLContext(sc)
import sqlContext.implicits._

data.foreachRDD { rdd =&gt;
  rdd.toDF().registerTempTable(&quot;sentiments&quot;)
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We have transformed our stream into a different representation (a &lt;strong&gt;&lt;em&gt;DataFrame&lt;/em&gt;&lt;/strong&gt;), which is also backed by all Spark concepts (resilient, distributed, very fast) and exposed it as a table so my coworker can use his beloved SQL to query different sources.&lt;/p&gt;
&lt;p&gt;The table &lt;em&gt;sentiments&lt;/em&gt; (that we defined from our DataFrame) can be queried like any other table in his system. Another possibility is that we could query other data sources (Cassandra, XML files, or our own binary formatted files) using Spark SQL and cross them with the stream.&lt;/p&gt;
&lt;p&gt;You can find out more information about this topic &lt;a href=&quot;https://medium.com/@anicolaspp/apache-spark-as-a-distributed-sql-engine-4373e254e0f9#.55n08p6w4&quot;&gt;&lt;em&gt;here&lt;/em&gt;&lt;/a&gt; and &lt;a href=&quot;https://medium.com/@anicolaspp/extending-our-spark-sql-query-engine-5f4a088de986#.9jm66wp3o&quot;&gt;&lt;em&gt;here&lt;/em&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;An example of querying a DataFrame is shown next.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;sqlContext.sql(&quot;select * from sentiments&quot;).show()
&lt;/code&gt;&lt;/pre&gt;
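&lt;p&gt;As a minimal sketch of the cross-referencing idea, a second table can be registered with the same SQLContext and joined to the stream-backed one with plain SQL. Everything here is an assumption for illustration: the JSON file path, the &lt;code&gt;campaigns&lt;/code&gt; table and its columns, and the named columns on &lt;code&gt;sentiments&lt;/code&gt; (which would require registering the DataFrame above with explicit names, e.g. &lt;code&gt;rdd.toDF(&quot;text&quot;, &quot;sentiment&quot;, &quot;tags&quot;)&lt;/code&gt;).&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// hypothetical reference dataset; the file path and column names are assumptions
val campaigns = sqlContext.read.json(&quot;/path/to/campaigns.json&quot;) // assumed columns: hashtag, campaign_name
campaigns.registerTempTable(&quot;campaigns&quot;)

// cross the stream-backed table with it using plain SQL
// (assumes sentiments was registered with columns text, sentiment, tags)
sqlContext.sql(
  &quot;SELECT c.campaign_name, s.sentiment, count(*) AS nb_tweets &quot; +
  &quot;FROM sentiments s JOIN campaigns c ON array_contains(s.tags, c.hashtag) &quot; +
  &quot;GROUP BY c.campaign_name, s.sentiment&quot;).show()
&lt;/code&gt;&lt;/pre&gt;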
&lt;h2&gt;Windowed Operations&lt;/h2&gt;
&lt;p&gt;Spark Streaming has the ability to look back in the stream, a functionality most streaming engines lack (if they do have this functionality, it&apos;s very hard to implement).&lt;/p&gt;
&lt;p&gt;In order to implement a windowed operation, you&apos;ll need to &lt;em&gt;checkpoint&lt;/em&gt; the stream, but this is an easy task. You&apos;ll find more information about this &lt;a href=&quot;http://spark.apache.org/docs/latest/streaming-programming-guide.html#checkpointing&quot;&gt;&lt;em&gt;here&lt;/em&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Here&apos;s a small example of this kind of operation:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;tags
  .window(Minutes(1))
  // ... further transformations on the windowed DStream ...
&lt;/code&gt;&lt;/pre&gt;
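&lt;p&gt;As a slightly fuller sketch (reusing the &lt;code&gt;ssc&lt;/code&gt; and &lt;code&gt;tags&lt;/code&gt; values defined earlier; the checkpoint directory and the window sizes are assumptions), a sliding window could recompute the hashtag counts over the last 10 minutes every 30 seconds:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// checkpointing is required for windowed (stateful) operations;
// the directory here is just an example, use any reliable storage path
ssc.checkpoint(&quot;/tmp/twitter-checkpoint&quot;)

// count hashtags over a 10-minute window, sliding every 30 seconds
val windowedTagCounts = tags.countByValueAndWindow(Minutes(10), Seconds(30))

// print the 10 most frequent hashtags of the current window for each batch
windowedTagCounts.foreachRDD { rdd =&gt;
  rdd.sortBy(_._2, ascending = false).take(10).foreach(println)
}
&lt;/code&gt;&lt;/pre&gt;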
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Even though our examples are quite simple, we were able to solve a real life problem using Spark. We now have the ability to identify trending topics on Twitter, which helps us both target and increase our audience. At the same time, we are able to access different data sets using a single set of tools such as SQL.&lt;/p&gt;
&lt;p&gt;Very interesting results came back from &lt;strong&gt;&lt;em&gt;#bigdata&lt;/em&gt;&lt;/strong&gt; and &lt;strong&gt;&lt;em&gt;#food&lt;/em&gt;&lt;/strong&gt; at the same time. Perhaps people Tweet about big data at lunch time—who knows?&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Open source and partners - Newsletter]]></title><link>https://developer.hpe.com/2021-March-04/</link><guid isPermaLink="false">https://developer.hpe.com/2021-March-04/</guid><pubDate>Thu, 04 Mar 2021 06:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[SPIRE Maintainer, Agustín Martínez Fayó, Reveals His Passion for Information Security]]></title><description><![CDATA[agustin martinez fayo As a leading global, edge-to-cloud platform-as-a-service company, Hewlett Packard Enterprise (HPE) prides itself in…]]></description><link>https://developer.hpe.com/spire-maintainer-agustn-martnez-fay-reveals-his-passion-for-information-/</link><guid isPermaLink="false">https://developer.hpe.com/spire-maintainer-agustn-martnez-fay-reveals-his-passion-for-information-/</guid><pubDate>Tue, 02 Mar 2021 14:43:49 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/agustin-martinez-fayo-1614696396460.PNG&quot; alt=&quot;agustin martinez fayo&quot;&gt;&lt;/p&gt;
&lt;p&gt;As a leading global, &lt;strong&gt;edge-to-cloud platform-as-a-service company&lt;/strong&gt;, Hewlett Packard Enterprise (HPE) prides itself in employing team members who share one common purpose: to advance the way people live and work. In this blog series, you’ll get to meet a number of them as I interview some of the &lt;a href=&quot;https://www.hpe.com/us/en/open-source.html&quot;&gt;open source&lt;/a&gt; experts on our team.&lt;/p&gt;
&lt;p&gt;Agustín Martínez Fayó is a principal engineer for SPIRE, the &lt;a href=&quot;https://www.cncf.io/&quot;&gt;Cloud Native Computing Foundation&apos;s&lt;/a&gt; (CNCF) open source project that provides the ability to securely identify software systems in dynamic and heterogeneous environments. He’s a graduate of the Universidad Tecnológica Nacional in Information Systems Engineering and comes to HPE through its recent acquisition of Scytale where he helped to design and implement solutions to connect cloud and container-based services with on-premises services, extending existing identity providers to the cloud. Previously in his career, he worked on the development of database vulnerability assessment software.&lt;/p&gt;
&lt;h2&gt;Can you tell me a little about the SPIRE project and why it’s special?&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/software/spiffe-spire-open-source.html#:~:text=SPIRE%20is%20an%20open%2Dsource,a%20wide%20variety%20of%20environments.&amp;#x26;text=The%20open%2Dsource%20SPIFFE%20and,between%20multiple%20clouds%20and%20clusters.&quot;&gt;SPIRE&lt;/a&gt; is an open source project hosted by the Cloud Native Computing Foundation (CNCF). It implements the &lt;a href=&quot;https://spiffe.io/&quot;&gt;SPIFFE&lt;/a&gt; (Secure Production Identity Framework for Everyone) standard to securely identify software systems in dynamic and heterogeneous environments. It does so through the use of platform-agnostic cryptographic identities in an automated way. In essence, SPIRE exposes the SPIFFE Workload API, which can attest running software systems and issue SPIFFE IDs and SVIDs to them. This allows two workloads to establish trust between each other.&lt;/p&gt;
&lt;p&gt;Both SPIFFE and SPIRE were &lt;a href=&quot;https://www.infoq.com/news/2020/06/spire-identity-framework/&quot;&gt;recently accepted into the CNCF Incubator&lt;/a&gt; to provide a standard and tooling for establishing trust between software services without necessarily using secrets or network-based security controls. The projects enable organizations to deploy consistent, fine-grained cross-service authentication via a “dial-tone” API across heterogeneous environments. Similar to being able to just pick up a phone and connect with anyone because the system just knows how to do it, the Workload API offers a standard way for authentication when connecting with other workloads, no matter where they are or the infrastructure on which they are running. This enables zero trust architectures by delivering continuously attested service identity across cloud, container, and on-premise enterprise IT infrastructures.&lt;/p&gt;
&lt;h2&gt;How did you get involved with SPIFFE/SPIRE?&lt;/h2&gt;
&lt;p&gt;I tend to be pretty passionate about information security. As I was off looking for new challenges, the SPIFFE project was just getting started. Being able to contribute to SPIFFE and SPIRE right from the start looked like the perfect opportunity to engage with a community of security experts and be able to contribute to the development of software that would really help improve the security posture across organizations.&lt;/p&gt;
&lt;h2&gt;What are some of the things you’d like to work on in regards to this project?&lt;/h2&gt;
&lt;p&gt;Continuing to maintain the SPIRE project is important to me. There are a lot of ongoing efforts and proposals to enhance and extend the SPIRE capabilities that will help make it a suitable solution in more environments and use cases. As a maintainer of the project, I would like to help drive those efforts. It&apos;s great to have the opportunity to contribute to the growth and adoption of these projects.&lt;/p&gt;
&lt;h2&gt;Is there anything else you’d like to share with our readers?&lt;/h2&gt;
&lt;p&gt;Yes, actually. If you’re interested in learning more about SPIFFE, the standard for service identity, and SPIRE, the reference implementation for SPIFFE, you might want to check out the book &lt;a href=&quot;https://spiffe.io/book/&quot;&gt;&lt;em&gt;Solving the Bottom Turtle&lt;/em&gt;&lt;/a&gt;. The book distills the experience of renowned security experts to provide a deep understanding of the identity problem and how to solve it. I think you’ll find it very informative and interesting.&lt;/p&gt;
&lt;p&gt;To learn more about the open source projects that HPE is involved with, please visit our &lt;a href=&quot;https://www.hpe.com/us/en/open-source.html&quot;&gt;website&lt;/a&gt;. Interested in exploring what HPE offers for developers and data scientists? Check out our &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE DEV site&lt;/a&gt; for a ton of articles, workshops, tutorials, and other resources.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Spark Streaming with HBase]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may…]]></description><link>https://developer.hpe.com/spark-streaming-with-hbase/</link><guid isPermaLink="false">https://developer.hpe.com/spark-streaming-with-hbase/</guid><pubDate>Fri, 19 Feb 2021 07:00:12 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Carol McDonald&quot;,
&quot;publish&quot;: &quot;2015-09-04T07:00:00.000Z&quot;,
&quot;tags&quot;: &quot;nosql&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;This post will help you get started using &lt;a target=&apos;\_blank&apos;  href=&apos;http://spark.apache.org/streaming/&apos;&gt;Apache Spark Streaming&lt;/a&gt; with HBase on MapR. Spark Streaming is an extension of the core Spark API that enables continuous data stream processing.&lt;/p&gt;
&lt;h2&gt;What is Spark Streaming?&lt;/h2&gt;
&lt;p&gt;First of all, what is streaming? A data stream is an unbounded sequence of data arriving continuously. Streaming divides continuously flowing input data into discrete units for processing. Stream processing is the low-latency processing and analysis of streaming data. Spark Streaming is an extension of the core Spark API that enables scalable, high-throughput, fault-tolerant stream processing of live data. Spark Streaming is for use cases that require a significant amount of data to be quickly processed as soon as it arrives. Example real-time use cases are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Website monitoring&lt;/li&gt;
&lt;li&gt;Network monitoring&lt;/li&gt;
&lt;li&gt;Fraud detection&lt;/li&gt;
&lt;li&gt;Web clicks&lt;/li&gt;
&lt;li&gt;Advertising&lt;/li&gt;
&lt;li&gt;IoT sensors&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Spark Streaming supports data sources such as HDFS directories, TCP sockets, Kafka, Flume, Twitter, etc. Data streams can be processed with Spark&apos;s core APIs, DataFrames and SQL, or machine learning APIs, and can be persisted to a file system, HDFS, a database, or any data source offering a Hadoop OutputFormat.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/sparkstream1-blog-1613718331029.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;How Spark Streaming Works&lt;/h2&gt;
&lt;p&gt;Streaming data is continuous and needs to be batched to be processed. Spark Streaming divides the data stream into batches of &lt;em&gt;x&lt;/em&gt; seconds called DStreams; internally, a DStream is a sequence of RDDs. Your Spark application processes the RDDs using Spark APIs, and the processed results of the RDD operations are returned in batches.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/sparkstream2-blog-1613718342309.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;The Streaming Application Example Architecture&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/sparkstream3-blog-1613718349956.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The Spark Streaming example code does the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Reads streaming data&lt;/li&gt;
&lt;li&gt;Processes the streaming data&lt;/li&gt;
&lt;li&gt;Writes the processed data to an HBase Table&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Other Spark example code does the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Reads HBase Table data written by the streaming code&lt;/li&gt;
&lt;li&gt;Calculates daily summary statistics&lt;/li&gt;
&lt;li&gt;Writes summary statistics to the HBase table Column Family stats&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Example data set&lt;/h2&gt;
&lt;p&gt;The Oil Pump Sensor data comes in as comma separated value (csv) files dropped in a directory. Spark Streaming will monitor the directory and process any files created in that directory. (As stated before, Spark Streaming supports different streaming data sources. For simplicity, this example will use files.) Below is an example of the csv file with some sample data:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/sparkstream4-blog-1613718358120.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;We use a Scala case class to define the Sensor Schema corresponding to the sensor data csv files, and a parseSensor function to parse the comma separated values into the sensor case class.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// schema for sensor data
case class Sensor(resid: String, date: String, time: String, hz: Double, disp: Double, flo: Double,
          sedPPM: Double, psi: Double, chlPPM: Double)

object Sensor {
   // function to parse line of csv data into Sensor class
   def parseSensor(str: String): Sensor = {
       val p = str.split(&quot;,&quot;)
        Sensor(p(0), p(1), p(2), p(3).toDouble, p(4).toDouble, p(5).toDouble, p(6).toDouble,
            p(7).toDouble, p(8).toDouble)
  }
…
}

&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;HBase Table Schema&lt;/h2&gt;
&lt;p&gt;The HBase Table Schema for the streaming data is as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Composite row key of the pump name date and time stamp&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The Schema for the daily statistics summary rollups is as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Composite row key of the pump name and date&lt;/li&gt;
&lt;li&gt;Column Family stats&lt;/li&gt;
&lt;li&gt;Columns for min, max, avg.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/sparkstream5-blog-1613718366157.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
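&lt;p&gt;The example assumes the &lt;code&gt;sensor&lt;/code&gt; table and its column families already exist. As a sketch of how they could be created programmatically (using the same older HBase client API used elsewhere in this post; the exact set of column families, &lt;code&gt;data&lt;/code&gt;, &lt;code&gt;alerts&lt;/code&gt;, and &lt;code&gt;stats&lt;/code&gt;, follows the schema and comments in this article):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.hadoop.hbase.{HBaseConfiguration, HColumnDescriptor, HTableDescriptor, TableName}
import org.apache.hadoop.hbase.client.HBaseAdmin

// create the sensor table with its three column families
val conf = HBaseConfiguration.create()
val admin = new HBaseAdmin(conf)

val tableDesc = new HTableDescriptor(TableName.valueOf(&quot;sensor&quot;))
tableDesc.addFamily(new HColumnDescriptor(&quot;data&quot;))
tableDesc.addFamily(new HColumnDescriptor(&quot;alerts&quot;))
tableDesc.addFamily(new HColumnDescriptor(&quot;stats&quot;))

admin.createTable(tableDesc)
admin.close()
&lt;/code&gt;&lt;/pre&gt;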
&lt;p&gt;The function below converts a sensor object into an HBase Put object, which is used to insert a row into HBase.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val cfDataBytes = Bytes.toBytes(&quot;data&quot;)

object Sensor {
. . .
  //  Convert a row of sensor object data to an HBase put object
  def convertToPut(sensor: Sensor): (ImmutableBytesWritable, Put) = {
      val dateTime = sensor.date + &quot; &quot; + sensor.time
      // create a composite row key: sensorid_date time
      val rowkey = sensor.resid + &quot;_&quot; + dateTime
      val put = new Put(Bytes.toBytes(rowkey))
      // add to column family data, column  data values to put object
      put.add(cfDataBytes, Bytes.toBytes(&quot;hz&quot;), Bytes.toBytes(sensor.hz))
      put.add(cfDataBytes, Bytes.toBytes(&quot;disp&quot;), Bytes.toBytes(sensor.disp))
      put.add(cfDataBytes, Bytes.toBytes(&quot;flo&quot;), Bytes.toBytes(sensor.flo))
      put.add(cfDataBytes, Bytes.toBytes(&quot;sedPPM&quot;), Bytes.toBytes(sensor.sedPPM))
      put.add(cfDataBytes, Bytes.toBytes(&quot;psi&quot;), Bytes.toBytes(sensor.psi))
      put.add(cfDataBytes, Bytes.toBytes(&quot;chlPPM&quot;), Bytes.toBytes(sensor.chlPPM))
      return (new ImmutableBytesWritable(Bytes.toBytes(rowkey)), put)
  }
}

&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Configuration for Writing to an HBase Table&lt;/h2&gt;
&lt;p&gt;You can use the &lt;a target=&apos;\_blank&apos;  href=&apos;https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableOutputFormat.html&apos;&gt;TableOutputFormat&lt;/a&gt; class with Spark to write to an HBase table, similar to how you would write to an HBase table from MapReduce. Below, we set up the configuration for writing to HBase using the &lt;code&gt;TableOutputFormat&lt;/code&gt; class.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val tableName = &quot;sensor&quot;

// set up the Hadoop HBase configuration using TableOutputFormat
val conf = HBaseConfiguration.create()
conf.set(TableOutputFormat.OUTPUT_TABLE, tableName)
val jobConfig: JobConf = new JobConf(conf, this.getClass)
jobConfig.setOutputFormat(classOf[TableOutputFormat])
jobConfig.set(TableOutputFormat.OUTPUT_TABLE, tableName)

&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;The Spark Streaming Example Code&lt;/h2&gt;
&lt;p&gt;These are the basic steps for Spark Streaming code:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Initialize a Spark StreamingContext object.&lt;/li&gt;
&lt;li&gt;Apply transformations and output operations to DStreams.&lt;/li&gt;
&lt;li&gt;Start receiving data and processing it using &lt;code&gt;streamingContext.start()&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Wait for the processing to be stopped using &lt;code&gt;streamingContext.awaitTermination()&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;We will go through each of these steps with the example application code.&lt;/p&gt;
&lt;h2&gt;Initializing the StreamingContext&lt;/h2&gt;
&lt;p&gt;First we create a &lt;a target=&apos;\_blank&apos;  href=&apos;http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.streaming.StreamingContext&apos;&gt;StreamingContext&lt;/a&gt;, the main entry point for streaming functionality, with a 2 second &lt;a target=&apos;\_blank&apos;  href=&apos;http://spark.apache.org/docs/latest/streaming-programming-guide.html#setting-the-right-batch-interval&apos;&gt;batch interval&lt;/a&gt;. (In the code boxes, comments are in grey)&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val sparkConf = new SparkConf().setAppName(&quot;HBaseStream&quot;)

//  create a StreamingContext, the main entry point for all streaming functionality
val ssc = new StreamingContext(sparkConf, Seconds(2))

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, we use the &lt;code&gt;StreamingContext textFileStream(directory)&lt;/code&gt; method to create an input stream that monitors a Hadoop-compatible file system for new files and processes any files created in that directory.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// create a DStream that represents streaming data from a directory source
val linesDStream = ssc.textFileStream(&quot;/user/user01/stream&quot;)

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The linesDStream represents the stream of data; each record is a line of text. Internally, a DStream is a sequence of RDDs, one RDD per batch interval.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/sparkstream6-blog-1613718375057.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Apply Transformations and Output Operations to DStreams&lt;/h2&gt;
&lt;p&gt;Next, we parse the lines of data into Sensor objects, with the map operation on the linesDStream.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// parse each line of data in linesDStream  into sensor objects

val sensorDStream = linesDStream.map(Sensor.parseSensor)

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The map operation applies the &lt;code&gt;Sensor.parseSensor&lt;/code&gt; function on the RDDs in the linesDStream, resulting in RDDs of Sensor objects.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/sparkstream7-blog-1613718382720.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Next, we use the DStream &lt;a target=&apos;\_blank&apos;  href=&apos;https://spark.apache.org/docs/1.0.0/api/java/org/apache/spark/streaming/dstream/DStream.html&apos;&gt;foreachRDD&lt;/a&gt; method to apply processing to each RDD in this DStream. We filter the sensor objects for low psi to create alerts, then we write the sensor and alert data to HBase by converting them to Put objects, and using the PairRDDFunctions &lt;a target=&apos;\_blank&apos;  href=&apos;https://spark.apache.org/docs/1.0.0/api/java/org/apache/spark/rdd/PairRDDFunctions.html#saveAsHadoopDataset(org.apache.hadoop.mapred.JobConf)&apos;&gt;saveAsHadoopDataset&lt;/a&gt; method, which outputs the RDD to any Hadoop-supported storage system using a Hadoop Configuration object for that storage system (see Hadoop Configuration for HBase above).&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// foreachRDD applies the given function to each RDD in the DStream
sensorDStream.foreachRDD { rdd =&gt;
  // filter sensor data for low psi
  val alertRDD = rdd.filter(sensor =&gt; sensor.psi &amp;#x3C; 5.0)

  // convert sensor data to Put objects and write to the HBase table, CF data
  rdd.map(Sensor.convertToPut).saveAsHadoopDataset(jobConfig)

  // convert alerts to Put objects and write to the HBase table, CF alerts
  alertRDD.map(Sensor.convertToPutAlert).saveAsHadoopDataset(jobConfig)
}

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The sensor RDD objects are converted to Put objects and then written to HBase.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/sparkstream8-blog-1613718390677.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Start Receiving Data&lt;/h2&gt;
&lt;p&gt;To start receiving data, we must explicitly call &lt;code&gt;start()&lt;/code&gt; on the StreamingContext, then call &lt;code&gt;awaitTermination&lt;/code&gt; to wait for the streaming computation to finish.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;    // Start the computation
    ssc.start()
    // Wait for the computation to terminate
    ssc.awaitTermination()

&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Spark Reading from and Writing to HBase&lt;/h2&gt;
&lt;p&gt;Now we want to read the HBase sensor table data, calculate daily summary statistics and write these statistics to the stats column family.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/sparkstream9-blog-1613718398619.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The code below reads the psi column data from the HBase sensor table, calculates statistics on this data using &lt;a target=&apos;\_blank&apos;  href=&apos;https://spark.apache.org/docs/0.6.2/api/core/spark/util/StatCounter.html&apos;&gt;StatCounter&lt;/a&gt;, and then writes the statistics to the sensor stats column family.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;     // configure HBase for reading
    val conf = HBaseConfiguration.create()
    conf.set(TableInputFormat.INPUT_TABLE, HBaseSensorStream.tableName)
    // scan data column family psi column
    conf.set(TableInputFormat.SCAN_COLUMNS, &quot;data:psi&quot;)

// Load an RDD of (row key, row Result) tuples from the table
    val hBaseRDD = sc.newAPIHadoopRDD(conf, classOf[TableInputFormat],
      classOf[org.apache.hadoop.hbase.io.ImmutableBytesWritable],
      classOf[org.apache.hadoop.hbase.client.Result])

    // transform (row key, row Result) tuples into an RDD of Results
    val resultRDD = hBaseRDD.map(tuple =&gt; tuple._2)

    // transform into an RDD of (RowKey, ColumnValue)s , with Time removed from row key
    val keyValueRDD = resultRDD.
              map(result =&gt; (Bytes.toString(result.getRow()).
              split(&quot; &quot;)(0), Bytes.toDouble(result.value)))

    // group by rowkey , get statistics for column value
    val keyStatsRDD = keyValueRDD.
             groupByKey().
             mapValues(list =&gt; StatCounter(list))

    // convert rowkey, stats to put and write to hbase table stats column family
    keyStatsRDD.map { case (k, v) =&gt; convertToPut(k, v) }.saveAsHadoopDataset(jobConfig)

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The diagram below shows that the output from &lt;code&gt;newAPIHadoopRDD&lt;/code&gt; is an RDD of row key, result pairs. The &lt;code&gt;PairRDDFunctions saveAsHadoopDataset&lt;/code&gt; saves the Put objects to HBase.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/sparkstream10-blog-1613718405899.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
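&lt;p&gt;The &lt;code&gt;convertToPut(k, v)&lt;/code&gt; helper used in the last line of the snippet above is not shown in the original code. A minimal sketch of what it might look like (the &lt;code&gt;min&lt;/code&gt;, &lt;code&gt;max&lt;/code&gt;, and &lt;code&gt;avg&lt;/code&gt; column names and the &lt;code&gt;stats&lt;/code&gt; column family follow the schema described earlier; everything else is an assumption):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.hadoop.hbase.client.Put
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.util.Bytes
import org.apache.spark.util.StatCounter

val cfStatsBytes = Bytes.toBytes(&quot;stats&quot;)

// convert a (row key, StatCounter) pair into an HBase Put on the stats column family
def convertToPut(key: String, stats: StatCounter): (ImmutableBytesWritable, Put) = {
  val put = new Put(Bytes.toBytes(key))
  put.add(cfStatsBytes, Bytes.toBytes(&quot;min&quot;), Bytes.toBytes(stats.min))
  put.add(cfStatsBytes, Bytes.toBytes(&quot;max&quot;), Bytes.toBytes(stats.max))
  put.add(cfStatsBytes, Bytes.toBytes(&quot;avg&quot;), Bytes.toBytes(stats.mean))
  (new ImmutableBytesWritable(Bytes.toBytes(key)), put)
}
&lt;/code&gt;&lt;/pre&gt;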
&lt;h2&gt;Software&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;You can download the &lt;strong&gt;code and data&lt;/strong&gt; to run these examples from here:
&lt;ul&gt;
&lt;li&gt;Code: &lt;a target=&apos;\_blank&apos;  href=&apos;https://github.com/caroljmcdonald/SparkStreamingHBaseExample&apos;&gt;https://github.com/caroljmcdonald/SparkStreamingHBaseExample&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Running the Application&lt;/h2&gt;
&lt;p&gt;You can run the code as a standalone application.&lt;/p&gt;
&lt;p&gt;Here are the steps summarized:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Log into MapR using userid user01, password mapr.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Build the application using maven.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Copy the jar file and data file to your home directory /user/user01 using scp.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run the streaming app:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt; /opt/mapr/spark/spark-1.3.1/bin/spark-submit --driver-class-path `hbase classpath`
   --class examples.HBaseSensorStream sparkstreamhbaseapp-1.0.jar

&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Copy the streaming data file to the stream directory:&lt;br&gt;
&lt;code&gt;cp sensordata.csv /user/user01/stream/&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Read data and calculate stats for one column:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;   /opt/mapr/spark/spark-1.3.1/bin/spark-submit --driver-class-path `hbase classpath`
    --class examples.HBaseReadWrite sparkstreamhbaseapp-1.0.jar

&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Calculate stats for whole row:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;  /opt/mapr/spark/spark-1.3.1/bin/spark-submit --driver-class-path `hbase classpath`
   --class examples.HBaseReadRowWriteStats sparkstreamhbaseapp-1.0.jar

&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;This concludes the tutorial on Spark Streaming with HBase. You can find more information here:&lt;/p&gt;
&lt;h2&gt;References and More Information:&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a target=&apos;\_blank&apos;  href=&apos;http://spark.apache.org/docs/latest/streaming-programming-guide.html&apos;&gt;Apache Spark Streaming Programming guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a target=&apos;\_blank&apos;  href=&apos;http://shop.oreilly.com/product/0636920028512.do&apos;&gt;Learning Spark O&apos;Reilly Book&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a target=&apos;\_blank&apos;  href=&apos;https://databricks.com/blog/2015/07/30/diving-into-spark-streamings-execution-model.html&apos;&gt;Databricks Spark Streaming&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Getting Started with MapR Event Store]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may…]]></description><link>https://developer.hpe.com/getting-started-with-mapr-event-store/</link><guid isPermaLink="false">https://developer.hpe.com/getting-started-with-mapr-event-store/</guid><pubDate>Fri, 19 Feb 2021 06:51:45 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Tugdual Grall&quot;,
&quot;publish&quot;: &quot;2016-03-10T08:00:00.000Z&quot;,
&quot;tags&quot;: &quot;streaming&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;MapR Event Store is a distributed messaging system for streaming event data at scale. It is integrated into the MapR converged platform. MapR Event Store uses the Apache Kafka API, so if you’re already familiar with Kafka, you’ll find it particularly easy to get started with MapR Event Store.&lt;/p&gt;
&lt;p&gt;Although MapR Event Store generally uses the &lt;a target=&apos;\_blank&apos;  href=&apos;http://kafka.apache.org/documentation.html#introduction&apos;&gt;Apache Kafka programming model&lt;/a&gt;, there are a few key differences. For instance, there is a new kind of object in the MapR file-system called, appropriately enough, a stream. Each stream can handle a huge number of topics, and you can have many streams in a single cluster. Policies such as time-to-live or ACEs (Access Control Expressions) can be set at the stream level for convenient management of many topics together.
If you already have Kafka applications, it’s easy to migrate them over to MapR Event Store.
In this blog post we describe how to run a simple application we originally wrote for Kafka using MapR Event Store instead.  &lt;/p&gt;
&lt;h2&gt;Sample Programs&lt;/h2&gt;
&lt;p&gt;As mentioned above, MapR Event Store uses &lt;a target=&apos;\_blank&apos;  href=&apos;http://kafka.apache.org/documentation.html#api&apos;&gt;Kafka API 0.9.0&lt;/a&gt;, which means it is possible to reuse the same application with minor changes. Before diving into a concrete example, let’s take a look at what has to be changed:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;The topic names change from &quot;&lt;code&gt;topic-name&lt;/code&gt;&quot; to &quot;&lt;code&gt;/stream-name:topic-name&lt;/code&gt;&quot;, as MapR organizes the topics in streams for management reasons (security, TTL, etc.).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The producer and consumer configuration parameters that are not used by MapR Event Store are automatically ignored, so no change here.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The producer and consumer applications are using jars from MapR rather than the Apache Kafka jars.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;You can find a complete application on the &lt;a target=&apos;\_blank&apos;  href=&apos;https://github.com/mapr-demos/mapr-streams-sample-programs&apos;&gt;Sample Programs for MapR Event Store&lt;/a&gt; page. It’s a simple copy that includes minor changes of the &lt;a target=&apos;\_blank&apos;  href=&apos;https://github.com/mapr-demos/kafka-sample-programs&apos;&gt;Sample Programs for Kafka 0.9 API&lt;/a&gt; project.&lt;/p&gt;
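&lt;p&gt;To illustrate the first change, the only difference in the producer code is the topic name, which now carries the stream path. A minimal sketch (the serializer properties are standard Kafka 0.9 producer settings, and the key and value here are just placeholders):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

val props = new Properties()
props.put(&quot;key.serializer&quot;, &quot;org.apache.kafka.common.serialization.StringSerializer&quot;)
props.put(&quot;value.serializer&quot;, &quot;org.apache.kafka.common.serialization.StringSerializer&quot;)
// no bootstrap.servers needed here: with MapR Event Store the client libraries locate the cluster

val producer = new KafkaProducer[String, String](props)

// with Apache Kafka the topic would be &quot;fast-messages&quot;;
// with MapR Event Store it becomes &quot;/sample-stream:fast-messages&quot;
producer.send(new ProducerRecord[String, String](&quot;/sample-stream:fast-messages&quot;, &quot;key&quot;, &quot;hello&quot;))

producer.close()
&lt;/code&gt;&lt;/pre&gt;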
&lt;h2&gt;Prerequisites&lt;/h2&gt;
&lt;p&gt;You will need basic Java programming skills, as well as access to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A running MapR Cluster&lt;/li&gt;
&lt;li&gt;&lt;a target=&apos;\_blank&apos;  href=&apos;https://maven.apache.org/&apos;&gt;Apache Maven 3.0&lt;/a&gt; or later&lt;/li&gt;
&lt;li&gt;Git to clone the &lt;a target=&apos;\_blank&apos;  href=&apos;https://github.com/mapr-demos/mapr-streams-sample-programs&apos;&gt;&lt;a href=&quot;https://github.com/mapr-demos/mapr-streams-sample-programs&quot;&gt;https://github.com/mapr-demos/mapr-streams-sample-programs&lt;/a&gt;&lt;/a&gt; repository&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Running Your First MapR Event Store Application&lt;/h2&gt;
&lt;h3&gt;Step 1: Create the stream&lt;/h3&gt;
&lt;p&gt;A &lt;em&gt;stream&lt;/em&gt; is a collection of topics that you can manage together by:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Setting security policies that apply to all topics in that stream&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Setting a default number of partitions for each new topic that is created in the stream&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Setting a time-to-live for messages in every topic in the stream.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Run the following command, as &lt;code&gt;mapr&lt;/code&gt; user, on your MapR cluster:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ maprcli stream create -path /sample-stream
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;By default, the produce and consume topic permissions belong to the creator of the stream (the Unix user running the maprcli command). It is possible to change the permissions by editing the stream. For example, to make all of the topics available to anybody (public permission), you can run the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ maprcli stream edit -path /sample-stream -produceperm p -consumeperm p -topicperm p
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Step 2: Create the topics&lt;/h3&gt;
&lt;p&gt;We need two topics for the example program, which can be created using &lt;code&gt;maprcli&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ maprcli stream topic create -path /sample-stream  -topic fast-messages
$ maprcli stream topic create -path /sample-stream  -topic summary-markers
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;These topics can be listed using the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ maprcli stream topic list -path /sample-stream
topic            partitions  logicalsize  consumers  maxlag  physicalsize
fast-messages    1           0            0          0       0
summary-markers  1           0            0          0       0
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note that the program will automatically create the topic if it does not already exist. For your applications, you should decide whether it is better to allow programs to automatically create topics simply by virtue of having mentioned them or whether it is better to strictly control which topics exist.&lt;/p&gt;
&lt;h3&gt;Step 3: Compile and package the example programs&lt;/h3&gt;
&lt;p&gt;Go back to the directory where you have the example programs and build an example program.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ cd ..
$ mvn package
...
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The project creates a jar with all the external dependencies ( &lt;code&gt;./target/mapr-streams-examples-1.0-SNAPSHOT-jar-with-dependencies.jar&lt;/code&gt;).&lt;/p&gt;
&lt;p&gt;Note that you can build the project with the Apache Kafka dependencies as long as you do not package them into your application when you run and deploy it. This example has a dependency on the MapR Event Store client instead, which can be found in the &lt;code&gt;mapr.com&lt;/code&gt; maven repository.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;   &amp;#x3C;repositories&gt;
       &amp;#x3C;repository&gt;
           &amp;#x3C;id&gt;mapr-maven&amp;#x3C;/id&gt;
           &amp;#x3C;url&gt;http://repository.mapr.com/maven&amp;#x3C;/url&gt;
           &amp;#x3C;releases&gt;&amp;#x3C;enabled&gt;true&amp;#x3C;/enabled&gt;&amp;#x3C;/releases&gt;
           &amp;#x3C;snapshots&gt;&amp;#x3C;enabled&gt;false&amp;#x3C;/enabled&gt;&amp;#x3C;/snapshots&gt;
       &amp;#x3C;/repository&gt;
   &amp;#x3C;/repositories&gt;
   ...
       &amp;#x3C;dependency&gt;
           &amp;#x3C;groupId&gt;org.apache.kafka&amp;#x3C;/groupId&gt;
           &amp;#x3C;artifactId&gt;kafka-clients&amp;#x3C;/artifactId&gt;
           &amp;#x3C;version&gt;0.9.0.0-mapr-1602&amp;#x3C;/version&gt;
           &amp;#x3C;scope&gt;provided&amp;#x3C;/scope&gt;
       &amp;#x3C;/dependency&gt;
  ...
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Step 4: Run the example producer&lt;/h3&gt;
&lt;p&gt;You can install the MapR Client and run the application locally or copy the jar file onto your cluster (any node). If you are installing the MapR Client, be sure you also install the MapR Kafka package using the following command on CentOS/RHEL:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;yum install mapr-kafka
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ scp ./target/mapr-streams-examples-1.0-SNAPSHOT-jar-with-dependencies.jar mapr@&amp;#x3C;YOUR_MAPR_CLUSTER&gt;:/home/mapr
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The producer will send a large number of messages to &lt;code&gt;/sample-stream:fast-messages&lt;/code&gt; along with occasional messages to &lt;code&gt;/sample-stream:summary-markers&lt;/code&gt;. Since there isn&apos;t any consumer running yet, nobody will receive the messages.
If you compare this with the Kafka example used to build this application, the topic name is the only change to the code.
Any MapR Event Store application will need the MapR Client libraries. One way to make these libraries available is to add them to the application classpath using the &lt;code&gt;/opt/mapr/bin/mapr classpath&lt;/code&gt; command. For example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ java -cp $(mapr classpath):./mapr-streams-examples-1.0-SNAPSHOT-jar-with-dependencies.jar com.mapr.examples.Run producer
Sent msg number 0
Sent msg number 1000
...
Sent msg number 998000
Sent msg number 999000
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The only important difference here between an Apache Kafka application and MapR Event Store application is that the client libraries are different. This causes the MapR Producer to connect to the MapR cluster to post the messages, and not to a Kafka broker.&lt;/p&gt;
&lt;h3&gt;Step 5: Start the example consumer&lt;/h3&gt;
&lt;p&gt;In another window, you can run the consumer using the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ java -cp $(mapr classpath):./mapr-streams-examples-1.0-SNAPSHOT-jar-with-dependencies.jar com.mapr.examples.Run consumer
1 messages received in period, latency(min, max, avg, 99%) = 20352, 20479, 20416.0, 20479 (ms)
1 messages received overall, latency(min, max, avg, 99%) = 20352, 20479, 20416.0, 20479 (ms)
1000 messages received in period, latency(min, max, avg, 99%) = 19840, 20095, 19968.3, 20095 (ms)
1001 messages received overall, latency(min, max, avg, 99%) = 19840, 20479, 19968.7, 20095 (ms)
...
1000 messages received in period, latency(min, max, avg, 99%) = 12032, 12159, 12119.4, 12159 (ms)
998001 messages received overall, latency(min, max, avg, 99%) = 12032, 20479, 15073.9, 19583 (ms)
1000 messages received in period, latency(min, max, avg, 99%) = 12032, 12095, 12064.0, 12095 (ms)
999001 messages received overall, latency(min, max, avg, 99%) = 12032, 20479, 15070.9, 19583 (ms)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note that there is a latency listed in the summaries for the message batches. This is because the consumer wasn&apos;t running when the messages were sent to MapR Event Store, and thus it is only getting them much later, long after they were sent.&lt;/p&gt;
&lt;h2&gt;Monitoring your topics&lt;/h2&gt;
&lt;p&gt;At any time, you can use the maprcli tool to get information about the topic. For example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ maprcli stream topic info -path /sample-stream -topic fast-messages -json
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;-json&lt;/code&gt; option is used to get the topic information as a JSON document.&lt;/p&gt;
&lt;h2&gt;Cleaning up&lt;/h2&gt;
&lt;p&gt;When you are done playing, you can delete the stream and all associated topics using the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ maprcli stream delete -path /sample-stream
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Using this example built from an Apache Kafka application, you have learned how to write, deploy, and run your first MapR Event Store application.
As you can see, the application code is very similar, and only a few changes need to be made (such as changing the topic names). This means you can easily deploy your Kafka applications on MapR and reap the benefits of all the features of MapR Event Store, such as advanced security, geographically distributed deployment, support for a very large number of topics, and much more. It also means that you can immediately use all of your Apache Kafka skills on a MapR deployment.&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Carol McDonald&quot;,
&quot;publish&quot;: &quot;2016-04-22T07:00:00.000Z&quot;,
&quot;tags&quot;: &quot;nosql&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;Many of the systems we want to monitor happen as a stream of events. Examples include event data from web or mobile applications, sensors, or medical devices.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/imag01-1613716304443.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Real-time analysis examples include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Website monitoring&lt;/li&gt;
&lt;li&gt;Network monitoring&lt;/li&gt;
&lt;li&gt;Fraud detection&lt;/li&gt;
&lt;li&gt;Web clicks&lt;/li&gt;
&lt;li&gt;Advertising&lt;/li&gt;
&lt;li&gt;IoT sensors&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Batch processing can give great insights into things that happened in the past, but it lacks the ability to answer the question of &quot;what is happening right now?&quot;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/imag02-1613716314047.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;It is becoming important to process events as they arrive for real-time insights, but high performance at scale is necessary to do this. In this blog post, I&apos;ll show you how to integrate Apache Spark Streaming, MapR Database, and MapR Event Store for fast, event-driven applications.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/imag03-1613716321972.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Example Use Case&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Let&apos;s go over an example which generates lots of data and needs real-time preventive alerts. Remember what happened back in 2010 with BP in the Gulf of Mexico oil spill?&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/imag04-1613716329966.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The example use case we will look at here is an application that monitors oil wells. Sensors in oil rigs generate streaming data, which is processed by Spark and stored in HBase for use by various analytical and reporting tools. We want to store every single event in HBase as it streams in. We also want to filter for, and store, alarms. Daily Spark processing will store aggregated summary statistics.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/imag05-1613716341142.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What do we need to do? And how do we do this with high performance at scale?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;We need to collect the data, process the data, store the data, and, finally, serve the data for analysis, machine learning, and dashboards.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/imag06-1613716353632.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Streaming Data Ingestion&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Spark Streaming supports data sources such as HDFS directories, TCP sockets, Kafka, Flume, Twitter, etc. In our example, we will use MapR Event Store for Apache Kafka, a new distributed messaging system for streaming event data at scale. MapR Event Store enables producers and consumers to exchange events in real time via the Apache Kafka 0.9 API. MapR Event Store integrates with Spark Streaming via the Kafka direct approach.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/imag07-1613716362975.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;MapR Event Store (or Kafka) topics are logical collections of messages. Topics organize events into categories. Topics decouple producers, which are the sources of data, from consumers, which are the applications that process, analyze, and share data.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/imag08-1613716380545.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Topics are partitioned for throughput and scalability. Partitions make topics scalable by spreading the load for a topic across multiple servers. Producers are load balanced between partitions and consumers can be grouped to read in parallel from multiple partitions within a topic for faster performance. Partitioned parallel messaging is a key to high performance at scale.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/imag09-1613716516266.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Another key to high performance at scale is minimizing time spent on disk reads and writes. Compared with older messaging systems, Kafka and MapR Event Store eliminated the need to track message acknowledgements on a per-message, per-listener basis. Messages are persisted sequentially as produced, and read sequentially when consumed. These design decisions mean that non-sequential reading or writing is rare, and allow messages to be handled at very high speeds. MapR Event Store performance scales linearly as servers are added within a cluster, with each server handling more than 1 million messages per second.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Real-time Data Processing Using Spark Streaming&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Spark Streaming brings Spark&apos;s APIs to stream processing, letting you use the same APIs for streaming and batch processing. Data streams can be processed with Spark’s core APIs, DataFrames, GraphX, or machine learning APIs, and can be persisted to a file system, HDFS, MapR XD, MapR Database, HBase, or any data source offering a Hadoop OutputFormat or Spark connector.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/imag10-1613716527479.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Spark Streaming divides the data stream into batches of X seconds called DStreams. A DStream is internally a sequence of RDDs, one per batch interval, and each RDD contains the records received during that interval.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/imag11-1613716535163.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Resilient distributed datasets, or RDDs, are the primary abstraction in Spark. An RDD is a distributed collection of elements, like a Java Collection, except that it’s spread out across multiple nodes in the cluster. The data contained in RDDs is partitioned and operations are performed in parallel on the data cached in memory. Spark caches RDDs in memory, whereas MapReduce involves more reading and writing from disk. Here again, the key to high performance at scale is partitioning and minimizing disk I/O.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/imag12-1613716543034.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;There are two types of operations on DStreams: transformations and output operations.&lt;/p&gt;
&lt;p&gt;Your Spark application processes the DStream RDDs using Spark transformations like map, reduce, and join, which create new RDDs. Any operation applied on a DStream translates to operations on the underlying RDDs, which, in turn, applies the transformation to the elements of the RDD.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/imag13-1613716550447.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Output operations write data to an external system, producing output in batches.&lt;/p&gt;
&lt;p&gt;Examples of output operations are saveAsHadoopFiles, which saves to a Hadoop-compatible file system, and saveAsHadoopDataset, which saves to any Hadoop-supported storage system.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/imag14-1613716557963.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Storing Streaming Data Using HBase&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;For storing lots of streaming data, we need a data store that supports fast writes and scales.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/imag15-1613716565379.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;With MapR Database (HBase API), a table is automatically partitioned across a cluster by key range, and each server is the source for a subset of a table. Grouping the data by key range provides for really fast reads and writes by row key.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/imag16-1613716573353.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Also with MapR Database, each partitioned subset or region of a table has a write and read cache. Writes are sorted in cache and appended to a write-ahead log (WAL); writes and reads to disk are always sequential; and recently read or written data and cached column families stay in memory. All of this provides for really fast reads and writes.&lt;/p&gt;
&lt;p&gt;With a relational database and a normalized schema, query joins cause bottlenecks with lots of data. MapR Database and a de-normalized schema scales because data that is read together is stored together.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/imag17-1613716581199.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;So how do we collect, process, and store real-time events with high performance at scale? The key is partitioning, caching, and minimizing time spent on disk reads and writes for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Messaging with MapR Event Store&lt;/li&gt;
&lt;li&gt;Processing with Spark Streaming&lt;/li&gt;
&lt;li&gt;Storage with MapR Database&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/imag18-1613716588993.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Serving the Data&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;End applications like dashboards, business intelligence tools, and other applications use the processed event data. The processing output can also be stored back in MapR Database, in another Column Family or Table, for further processing later.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/imag19-1613716597526.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Example Use Case Code&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Now we will step through the code for a MapR Event Store producer sending messages, and for Spark Streaming processing the events and storing data in MapR Database.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;MapR Event Store Producer Code&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The steps for a producer sending messages are:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Set producer properties&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The first step is to set the KafkaProducer configuration properties, which will be used later to instantiate a KafkaProducer for publishing messages to topics.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a KafkaProducer&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;You instantiate a KafkaProducer by providing the set of key-value pair configuration properties, which you set up in the first step. Note that the KafkaProducer&amp;#x3C;K,V&gt; is a Java generic class. You need to specify the type parameters as the type of the key-value of the messages that the producer will send.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Build the ProducerRecord message&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The ProducerRecord is a key-value pair to be sent to Kafka. It consists of a topic name to which the record is being sent, an optional partition number, and an optional key and a message value. The ProducerRecord is also a Java generic class, whose type parameters should match the serialization properties set before. In this example, we instantiate the ProducerRecord with a topic name and message text as the value, which will create a record with no key.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Send the message&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Call the send method on the KafkaProducer passing the ProducerRecord, which will asynchronously send a record to the specified topic. This method returns a Java Future object, which will eventually contain the response information. The asynchronous send() method adds the record to a buffer of pending records to send, and immediately returns. This allows sending records in parallel without waiting for the responses and allows the records to be batched for efficiency.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Finally, call the close method on the producer to release resources. This method blocks until all requests are complete.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The code is shown below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;

String topic = &quot;/streams/pump:warning&quot;;
public static KafkaProducer&amp;#x3C;String, String&gt; producer;

// Set the serializer properties used to instantiate the producer
Properties properties = new Properties();
properties.put(&quot;key.serializer&quot;, &quot;org.apache.kafka.common.serialization.StringSerializer&quot;);
properties.put(&quot;value.serializer&quot;, &quot;org.apache.kafka.common.serialization.StringSerializer&quot;);
producer = new KafkaProducer&amp;#x3C;String, String&gt;(properties);
String txt = &quot;sample msg&quot;;
ProducerRecord&amp;#x3C;String, String&gt; rec = new ProducerRecord&amp;#x3C;String, String&gt;(topic, txt);
producer.send(rec);
producer.close();
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Spark Streaming Code&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;These are the basic steps for Spark Streaming code:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Initialize a Spark StreamingContext object. Using this context, create a DStream.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;We use the KafkaUtils createDirectStream method to create an input stream from a Kafka or MapR Event Store topic. This creates a DStream that represents the stream of incoming data, where each record is a line of text.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val ssc = new StreamingContext(sparkConf, Seconds(5))
val dStream = KafkaUtils.createDirectStream[String, String](ssc, kafkaParams, topicsSet)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/imag20-1613716623810.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Apply transformations (which create new DStreams)&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;We parse the message values into Sensor objects, with the map operation on the dStream. The map operation applies the Sensor.parseSensor function on the RDDs in the dStream, resulting in RDDs of Sensor objects.&lt;/p&gt;
&lt;p&gt;Any operation applied on a DStream translates to operations on the underlying RDDs. The map operation is applied on each RDD in the dStream to generate the sensorDStream RDDs.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val sensorDStream = dStream.map(_._2).map(parseSensor)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/imag21-1613716640674.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The oil pump sensor data comes in as strings of comma separated values. We use a Scala case class to define the Sensor schema corresponding to the sensor data, and a parseSensor function to parse the comma separated values into the sensor case class.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/imag22-1613716657923.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;case class Sensor(resid: String, date: String, time: String,
  hz: Double, disp: Double, flo: Double, sedPPM: Double,
  psi: Double, ch1PPM: Double)

def parseSensor(str: String): Sensor = {
  val p = str.split(&quot;,&quot;)
  Sensor(p(0), p(1), p(2), p(3).toDouble, p(4).toDouble, p(5).toDouble, p(6).toDouble, p(7).toDouble, p(8).toDouble)
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, we use the DStream foreachRDD method to apply processing to each RDD in this DStream. We register the DataFrame as a table, which allows us to use it in subsequent SQL statements. We use an SQL query to find the max, min, and average for the sensor attributes.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// for each RDD
sensorDStream.foreachRDD { rdd =&gt;
  val sqlContext = SQLContext.getOrCreate(rdd.sparkContext)
  import sqlContext.implicits._
  rdd.toDF().registerTempTable(&quot;sensor&quot;)
  val res = sqlContext.sql(&quot;&quot;&quot;SELECT resid, date,
    max(hz) as maxhz, min(hz) as minhz, avg(hz) as avghz,
    max(disp) as maxdisp, min(disp) as mindisp, avg(disp) as avgdisp,
    max(flo) as maxflo, min(flo) as minflo, avg(flo) as avgflo,
    max(psi) as maxpsi, min(psi) as minpsi, avg(psi) as avgpsi
    FROM sensor GROUP BY resid, date&quot;&quot;&quot;)

  res.show()
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here is example output from the query which shows the max, min, and average output from our sensors.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/imag24-1613716769080.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;And/or Apply output operations&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The sensorRDD objects are filtered for low psi, the sensor and alert data is converted to Put objects, and then written to HBase, using the saveAsHadoopDataset method. This outputs the RDD to any Hadoop-supported storage system using a Hadoop Configuration object for that storage system.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rdd.map(Sensor.convertToPut).saveAsHadoopDataset(jobConfig)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/imag25-1613716782608.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Start receiving data and processing it. Wait for the processing to be stopped.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;To start receiving data, we must explicitly call start() on the StreamingContext, then call awaitTermination to wait for the streaming computation to finish.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;sensorDStream.foreachRDD { rdd =&gt;
      .  .  .
}
// Start the computation
ssc.start()
// Wait for the computation to terminate
ssc.awaitTermination()
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/image27-1613716793452.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;HBase Table schema&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The HBase Table Schema for the streaming data is as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Composite row key of the pump name, date, and time stamp&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The Schema for the daily statistics summary rollups is as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Composite row key of the pump name and date&lt;/li&gt;
&lt;li&gt;Column Family stats&lt;/li&gt;
&lt;li&gt;Columns for min, max, avg.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/sparkstream5-blog-1613716819138.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
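&lt;p&gt;To make this schema concrete, here is a minimal sketch of how a Sensor event could be converted into an HBase Put keyed by the composite row key described above. The column family, qualifiers, and method names are illustrative assumptions; the example project&apos;s &lt;code&gt;Sensor.convertToPut&lt;/code&gt; may differ in detail.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.hadoop.hbase.client.Put
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.util.Bytes

// Illustrative column family for the raw sensor events
val cfDataBytes = Bytes.toBytes(&quot;data&quot;)

// Sketch only: build the composite row key (pump name + date + time stamp)
// and one Put per event; method names may vary by HBase version.
def convertToPut(sensor: Sensor): (ImmutableBytesWritable, Put) = {
  val rowkey = sensor.resid + &quot;_&quot; + sensor.date + &quot;_&quot; + sensor.time
  val put = new Put(Bytes.toBytes(rowkey))
  put.addColumn(cfDataBytes, Bytes.toBytes(&quot;hz&quot;), Bytes.toBytes(sensor.hz))
  put.addColumn(cfDataBytes, Bytes.toBytes(&quot;disp&quot;), Bytes.toBytes(sensor.disp))
  put.addColumn(cfDataBytes, Bytes.toBytes(&quot;psi&quot;), Bytes.toBytes(sensor.psi))
  (new ImmutableBytesWritable(Bytes.toBytes(rowkey)), put)
}
&lt;/code&gt;&lt;/pre&gt;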
&lt;p&gt;All of the components of the use case architecture we just discussed can run on the same cluster with the MapR Data Platform. There are several advantages of having MapR Event Store on the same cluster as all the other components. For example, maintaining only one cluster means less infrastructure to provision, manage, and monitor. Likewise, having producers and consumers on the same cluster means fewer delays related to copying and moving data between clusters, and between applications.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/imag26-1613716839874.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Software&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;You can download the code, data, and instructions to run this example from here:&lt;/p&gt;
&lt;p&gt;Code: &lt;a target=&quot;_blank&quot; href=&quot;https://github.com/caroljmcdonald/mapr-streams-sparkstreaming-hbase&quot;&gt;https://github.com/caroljmcdonald/mapr-streams-sparkstreaming-hbase&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Summary&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In this blog post, you learned how the MapR Data Platform integrates Hadoop and Spark with real-time database capabilities, global event streaming, and scalable enterprise storage.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;References and More Information:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://learn.ezmeral.software.hpe.com/&quot;&gt;Free Online training &lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;/blog/8nDR4EW79KclyzwBwR5z/getting-started-with-mapr-event-store&quot;&gt;Getting Started with MapR Event Store Blog&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a target=&apos;\_blank&apos;  href=&apos;http://spark.apache.org/docs/latest/streaming-programming-guide.html&apos;&gt;Apache Spark Streaming Programming Guide&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[How to Get Started with Spark Streaming and MapR Event Store Using the Kafka API]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may…]]></description><link>https://developer.hpe.com/how-to-get-started-with-spark-streaming-and-mapr-event-store-using-the-k/</link><guid isPermaLink="false">https://developer.hpe.com/how-to-get-started-with-spark-streaming-and-mapr-event-store-using-the-k/</guid><pubDate>Fri, 19 Feb 2021 06:14:14 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Carol McDonald&quot;,
&quot;publish&quot;: &quot;2017-05-15T07:00:00.000Z&quot;,
&quot;tags&quot;: &quot;apache-spark&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;&lt;em&gt;The telecommunications industry is on the verge of a major transformation through the use of advanced analytics and big data technologies like the MapR Data Platform.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;This post will help you get started using Apache Spark Streaming for consuming and publishing messages with MapR Event Store and the Kafka API. Spark Streaming is an extension of the core Spark API that enables continuous data stream processing. MapR Event Store is a distributed messaging system for streaming event data at scale. MapR Event Store enables producers and consumers to exchange events in real time via the Apache Kafka 0.9 API. MapR Event Store integrates with Spark Streaming via the Kafka direct approach. This post is a simple &quot;how to&quot; example. If you are new to Spark Streaming and the Kafka API you might want to read these first:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;/blog/PkVBvojrpwSOLo5J5xzM/real-time-streaming-data-pipelines-with-apache-apis-kafka-spark-streamin&quot;&gt;&lt;u&gt;Real-Time Streaming Data Pipelines with Apache APIs: Kafka, Spark Streaming, and HBase&lt;/u&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;/blog/8nDR4EW79KclyzwBwR5z/getting-started-with-mapr-event-store&quot;&gt;&lt;u&gt;Getting Started with MapR Event Store&lt;/u&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;/blog/oogwLnyPqLuMyBK6KzV8/spark-streaming-with-hbase&quot;&gt;&lt;u&gt;Spark Streaming with HBase&lt;/u&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;&lt;strong&gt;Example Use Case Data Set&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;The example data set consists of aggregated mobile network data generated by the Telecom Italia cellular network over the cities of Milano and Trento. The data measures the location and level of interaction of the users with the mobile phone network based on mobile events that occurred on the mobile network over the course of two months in 2013. The projects in the challenge used this data to provide insights and to identify and predict mobile phone-based location activity trends and patterns of a population in a large metropolitan area.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;The Data Set Schema&lt;/strong&gt;&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Square id&lt;/strong&gt;: id of the location in the city grid&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Time interval&lt;/strong&gt;: the beginning of the time interval&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Country code&lt;/strong&gt;: the phone country code&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;SMS-in activity&lt;/strong&gt;: SMS received inside the Square id&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;SMS-out activity&lt;/strong&gt;: SMS sent inside the Square id&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Call-in activity&lt;/strong&gt;: received calls inside the Square id&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Call-out activity&lt;/strong&gt;: issued calls inside the Square id&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Internet traffic activity&lt;/strong&gt;: internet traffic inside the Square id&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The Data Records are in TSV format.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Example Use Case Code&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;First, you import the packages needed to integrate MapR Streams (now called MapR Event Store) with Spark Streaming and Spark SQL.&lt;/p&gt;
&lt;p&gt;In order for Spark Streaming to read messages from MapR Event Store, you need to import classes from &lt;em&gt;org.apache.spark.streaming.kafka.v09&lt;/em&gt;. &lt;br&gt;
In order for Spark Streaming to write messages to MapR Event Store, you need to import classes from &lt;em&gt;org.apache.spark.streaming.kafka.producer&lt;/em&gt;.&lt;/p&gt;
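&lt;p&gt;As a rough sketch, and assuming the package names above, the imports look something like this (the exact list in the example project may differ):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.spark.SparkConf
import org.apache.spark.sql.SQLContext
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Consumer side: create a direct stream from a MapR Event Store / Kafka topic
import org.apache.spark.streaming.kafka.v09.KafkaUtils

// Producer side: write DStream results back to a topic
import org.apache.spark.streaming.kafka.producer._
&lt;/code&gt;&lt;/pre&gt;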
&lt;h2&gt;&lt;strong&gt;Parsing the Data Set Records&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;A Scala &lt;code&gt;CallDataRecord&lt;/code&gt; case class defines the schema corresponding to the TSV records. The &lt;code&gt;parseCallDataRecord&lt;/code&gt; function parses the tab separated values into the CallDataRecord case class.&lt;/p&gt;
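&lt;p&gt;A minimal sketch of what this could look like is shown below. The field names follow the schema above, but the exact definitions in the example project may differ.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// Schema for one TSV record of the Telecom Italia data set (field names are illustrative)
case class CallDataRecord(squareId: Int, timeInterval: Long, countryCode: Int,
  smsIn: Double, smsOut: Double, callIn: Double, callOut: Double, internet: Double)

// Parse one tab-separated line into a CallDataRecord
def parseCallDataRecord(line: String): CallDataRecord = {
  val p = line.split(&quot;\t&quot;)
  CallDataRecord(p(0).toInt, p(1).toLong, p(2).toInt,
    p(3).toDouble, p(4).toDouble, p(5).toDouble, p(6).toDouble, p(7).toDouble)
}
&lt;/code&gt;&lt;/pre&gt;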
&lt;h2&gt;&lt;strong&gt;Spark Streaming Code&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;These are the basic steps for the Spark Streaming Consumer Producer code:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Configure Kafka Consumer Producer properties.&lt;/li&gt;
&lt;li&gt;Initialize a Spark StreamingContext object. Using this context, create a DStream which reads messages from a Topic.&lt;/li&gt;
&lt;li&gt;Apply transformations (which create new DStreams).&lt;/li&gt;
&lt;li&gt;Write messages from the transformed DStream to a Topic.&lt;/li&gt;
&lt;li&gt;Start receiving data and processing. Wait for the processing to be stopped.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;We will go through each of these steps with the example application code.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;1) Configure Kafka Consumer Producer properties&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;The first step is to set the KafkaConsumer and KafkaProducer configuration properties, which will be used later to create a DStream for receiving/sending messages to topics. You need to set the following parameters:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Key and value deserializers: for deserializing the message.&lt;/li&gt;
&lt;li&gt;Auto offset reset: to start reading from the earliest or latest message.&lt;/li&gt;
&lt;li&gt;Bootstrap servers: this can be set to a dummy host:port since the broker address is not actually used by MapR Event Store.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;​For more information on the configuration parameters, &lt;a href=&quot;https://docs.datafabric.hpe.com/52/MapR_Streams/differences_in_configuration_parameters_for_producers_and_consumers.html&quot;&gt;see the MapR Event Store documentation.&lt;/a&gt;&lt;/p&gt;
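&lt;p&gt;A minimal sketch of these settings, using the standard Kafka 0.9 property names (the actual values and topic names in the example project may differ), could look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// Consumer settings: deserializers, offset reset, and a dummy bootstrap server
val kafkaParams = Map[String, String](
  &quot;key.deserializer&quot; -&gt; &quot;org.apache.kafka.common.serialization.StringDeserializer&quot;,
  &quot;value.deserializer&quot; -&gt; &quot;org.apache.kafka.common.serialization.StringDeserializer&quot;,
  &quot;auto.offset.reset&quot; -&gt; &quot;earliest&quot;,
  &quot;bootstrap.servers&quot; -&gt; &quot;maprdemo:9092&quot;, // not actually used by MapR Event Store
  &quot;group.id&quot; -&gt; &quot;testgroup&quot;
)

// Hypothetical stream:topic name for the incoming call data records
val topicsSet = Set(&quot;/user/mapr/streams/cdrs:incoming&quot;)
&lt;/code&gt;&lt;/pre&gt;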
&lt;h2&gt;&lt;strong&gt;2) Initialize a Spark StreamingContext object.&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;We use the KafkaUtils createDirectStream method with a StreamingContext object, the Kafka configuration parameters, and a list of topics to create an input stream from a MapR Event Store topic. This creates a DStream that represents the stream of incoming data, where each message is a key value pair. We use the DStream map transformation to create a DStream with the message values.&lt;/p&gt;
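&lt;p&gt;For example (the batch interval and variable names are assumptions, not the exact project code):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val ssc = new StreamingContext(sparkConf, Seconds(2))

// DStream of (key, value) message pairs from the topic, then keep just the values
val messagesDStream = KafkaUtils.createDirectStream[String, String](ssc, kafkaParams, topicsSet)
val valuesDStream = messagesDStream.map(_._2)
&lt;/code&gt;&lt;/pre&gt;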
&lt;h2&gt;&lt;strong&gt;3) Apply transformations (which create new DStreams)&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Next we use the DStream &lt;a target=&apos;\_blank&apos;  href=&apos;https://spark.apache.org/docs/1.0.0/api/java/org/apache/spark/streaming/dstream/DStream.html&apos;&gt;foreachRDD&lt;/a&gt; method to apply processing to each RDD in this DStream.  We parse the message values into CallDataRecord objects, with the map operation on the DStream, then we convert the RDD to a DataFrame, which allows you to use &lt;a target=&apos;\_blank&apos;  href=&apos;http://spark.apache.org/docs/latest/sql-programming-guide.html&apos;&gt;DataFrames and SQL&lt;/a&gt; operations on streaming data. &lt;/p&gt;
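&lt;p&gt;A sketch of this step, reusing the &lt;code&gt;parseCallDataRecord&lt;/code&gt; function and the variable names assumed above:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// Parse the message values into CallDataRecord objects
val cdrDStream = valuesDStream.map(parseCallDataRecord)

cdrDStream.foreachRDD { rdd =&gt;
  val sqlContext = SQLContext.getOrCreate(rdd.sparkContext)
  import sqlContext.implicits._

  // Convert the RDD to a DataFrame so we can use DataFrames and SQL on the streaming data
  val cdrDF = rdd.toDF()
  cdrDF.registerTempTable(&quot;cdr&quot;)
  sqlContext.sql(&quot;SELECT squareId, count(*) AS cnt FROM cdr GROUP BY squareId&quot;).show()
}
&lt;/code&gt;&lt;/pre&gt;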
&lt;h2&gt;&lt;strong&gt;4) Write messages from the transformed DStream to a Topic&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;The CallDataRecord RDD objects are grouped and counted by the squareId. Then a sendToKafka method is used to send the messages with the squareId and count to a topic, as sketched below.&lt;/p&gt;
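&lt;p&gt;One simple way to do this is sketched below: count records per squareId in each batch, then publish one message per result. The example project wraps this logic in its sendToKafka helper; &lt;code&gt;producerProps&lt;/code&gt; and &lt;code&gt;sendTopic&lt;/code&gt; here are assumptions.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

// Count call data records per squareId in each batch
val countsDStream = cdrDStream.map(cdr =&gt; (cdr.squareId, 1L)).reduceByKey(_ + _)

countsDStream.foreachRDD { rdd =&gt;
  rdd.foreachPartition { partition =&gt;
    // producerProps: java.util.Properties with the serializer settings from step 1
    val producer = new KafkaProducer[String, String](producerProps)
    partition.foreach { case (squareId, count) =&gt;
      producer.send(new ProducerRecord[String, String](sendTopic, squareId.toString, count.toString))
    }
    producer.close()
  }
}
&lt;/code&gt;&lt;/pre&gt;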
&lt;h2&gt;&lt;strong&gt;5) Start receiving data and processing it. Wait for the processing to be stopped.&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;To start receiving data, we must explicitly call start() on the StreamingContext, then call awaitTermination to wait for the streaming computation to finish.&lt;/p&gt;
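&lt;p&gt;In code, this is just two calls on the StreamingContext created earlier:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;ssc.start()            // start receiving data and processing it
ssc.awaitTermination() // wait for the streaming computation to finish
&lt;/code&gt;&lt;/pre&gt;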
&lt;h2&gt;&lt;strong&gt;Software&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;You can download the code, data, and instructions to run this example from here: Code: &lt;a target=&quot;_blank&quot; href=&quot;https://github.com/caroljmcdonald/mapr-streams-spark&quot;&gt;&lt;u&gt;https://github.com/caroljmcdonald/mapr-streams-spark&lt;/u&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Summary&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;In this blog post, you learned how to integrate Spark Streaming with MapR Event Store to consume and produce messages using the Kafka API.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;References and More Information:&lt;/strong&gt;&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;This use case example builds upon an example from the book &lt;a target=&apos;\_blank&apos;  href=&apos;https://www.apress.com/us/book/9781484214800&apos;&gt;&lt;u&gt;Pro Spark Streaming By Zubair Nabi&lt;/u&gt;&lt;/a&gt;, which contains other interesting Spark Streaming examples.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.datafabric.hpe.com/52/Spark/Spark_IntegrateMapRStreams.html&quot;&gt;&lt;u&gt;Integrate Spark with MapR Event Store Documentation&lt;/u&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://learn.ezmeral.software.hpe.com/&quot;&gt;&lt;u&gt;Free Online training on MapR Event Store, Spark&lt;/u&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a target=&apos;\_blank&apos;  href=&apos;http://spark.apache.org/docs/latest/streaming-programming-guide.html&apos;&gt;&lt;u&gt;Apache Spark Streaming Programming Guide&lt;/u&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;em&gt;This blog was originally published on September 6, 2016.&lt;/em&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[MapR Database Spark Connector with Secondary Indexes Support]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may…]]></description><link>https://developer.hpe.com/mapr-database-spark-connector-with-secondary-indexes-support/</link><guid isPermaLink="false">https://developer.hpe.com/mapr-database-spark-connector-with-secondary-indexes-support/</guid><pubDate>Fri, 19 Feb 2021 05:57:39 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: [&quot;Nicolas A Perez&quot;],
&quot;publish&quot;: &quot;2019-03-08T07:00:00.000Z&quot;,
&quot;tags&quot;: &quot;apache-spark&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/image2-1613714507521.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;MapR Data Platform offers significant advantages over any other tool on the big data space. MapR Database is one of the core components of the platform, and it offers state-of-the-art capabilities that blow away most of the NoSQL databases out there.&lt;/p&gt;
&lt;p&gt;An important add-on to MapR Database is the ability to use, for writing and querying, Apache Spark through the &lt;a href=&quot;https://docs.datafabric.hpe.com/61/Spark/SparkConnectorsMapRDB.html&quot;&gt;&lt;em&gt;&lt;strong&gt;Connector for Apache Spark&lt;/strong&gt;&lt;/em&gt;&lt;/a&gt;. Using this connector comes in very handy, since it can read and write from Spark to MapR Database, using the different Spark APIs, such as RDDs, &lt;a href=&quot;https://docs.datafabric.hpe.com/60/Spark/SparkSQLandDataFrames.html&quot;&gt;DataFrames&lt;/a&gt;, and Streams.&lt;/p&gt;
&lt;p&gt;Using the connector, we can issue queries like the following one:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val df: DataFrame = sparkSession.loadFromMapRDB(&quot;/tmp/user_profiles&quot;, someSchema)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The resulting type is a &lt;code&gt;DataFrame&lt;/code&gt; that we can use as any other DataFrame from any other source, as we normally do in Spark.&lt;/p&gt;
&lt;p&gt;But when using the provided connector for Apache Spark, problems may start to emerge as soon as we filter our dataset: any filter applied to a field that is part of an index will not be used by the connector to optimize reading from MapR Database. For instance, let&apos;s look at the following query:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val df = sparkSession.loadFromMapRDB(&quot;/tmp/user_profiles&quot;)

val filteredDF = df.filter(&quot;first_name = &apos;Bill&apos;&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The filter is being pushed down, so MapR Database does the filtering and only sends back the data that complies with the filter, reducing the amount of data transferred between MapR Database and Spark. However, if the field &lt;em&gt;&lt;strong&gt;first_name&lt;/strong&gt;&lt;/em&gt; is part of an index, the index is ignored and the table is fully scanned to find the rows that comply with the filter, resulting in a non-optimized query.&lt;/p&gt;
&lt;p&gt;By having an index on a field, we expect to use it so queries on that field are optimized, ultimately speeding up the computation. The provided connector for Apache Spark is simply not using the index capabilities of MapR Database to optimize the reading from MapR Database.&lt;/p&gt;
&lt;h2&gt;Necessity&lt;/h2&gt;
&lt;p&gt;Our team, MapR Professional Services, knows that filtering using MapR Database secondary indexes makes a huge difference in performance. Since many of our customers actually try to take advantage of this feature (secondary indexes), we have taken different approaches in order to force the use of the indexes when using Spark.&lt;/p&gt;
&lt;p&gt;In another blog post, &quot;&lt;a href=&quot;/blog/GJVN37RWmoumz0L2LM8V/how-to-use-secondary-indexes-in-spark-with-open-json-application-interfa&quot;&gt;&lt;em&gt;&lt;strong&gt;How to Use Secondary Indexes in Spark with OJAI&lt;/strong&gt;&lt;/em&gt;&lt;/a&gt;,&quot; a fellow coworker explains some ways to overcome the issue on hand.&lt;/p&gt;
&lt;p&gt;Even when we take some shortcuts, we have to give up some of the nice constructs the default connector has, such as &lt;code&gt;.loadFromMapRDB(...)&lt;/code&gt;. Even though this solution is not scalable, we can reuse some of these ideas, generalizing the concept into a generic approach that can be used for general-purpose computation with Spark.&lt;/p&gt;
&lt;h2&gt;An Independent Connector&lt;/h2&gt;
&lt;p&gt;In the past, I have extended Apache Spark in &lt;em&gt;&lt;strong&gt;so&lt;/strong&gt;&lt;/em&gt; many ways. I have written my own &lt;a href=&quot;https://hackernoon.com/extending-our-spark-sql-query-engine-5f4a088de986&quot;&gt;&lt;em&gt;&lt;strong&gt;Custom Data Sources&lt;/strong&gt;&lt;/em&gt;&lt;/a&gt; and most recently a &lt;a href=&quot;https://hackernoon.com/spark-custom-stream-sources-ec360b8ae240&quot;&gt;&lt;em&gt;&lt;strong&gt;Custom Streaming Source for Spark Structured Streams&lt;/strong&gt;&lt;/em&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Once again, I have sailed into the adventure of writing my own Spark data source, but this time for MapR Database, so we can leverage the full advantage of secondary indexes while keeping the same API that the current MapR Database Connector for Apache Spark already has.&lt;/p&gt;
&lt;p&gt;By the end of this post, we will be able to write a query in the following way while fully using secondary indexes:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val schema = StructType(Seq(StructField(&quot;_id&quot;, StringType), StructField(&quot;uid&quot;, StringType)))

val data = sparkSession
  .loadFromMapRDB(&quot;/user/mapr/tables/data&quot;, schema)
  .filter(&quot;uid = &apos;101&apos;&quot;)
  .select(&quot;_id&quot;)

data.take(3).foreach(println)
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Spark Data Sources, Version 2&lt;/h2&gt;
&lt;p&gt;The following data source implementation uses Spark 2.3.1 and the Data Source API V2.&lt;/p&gt;
&lt;p&gt;Let&apos;s start by looking at the things we need.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;ReadSupportWithSchema&lt;/strong&gt;, which allows us to create a DataSourceReader.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;DataSourceReader&lt;/strong&gt;, which allows us to get the schema for our data, while we specify how to create a &lt;strong&gt;DataReaderFactory&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;SupportsPushDownFilters&lt;/strong&gt;, which allows us to intercept the query filters, so we can push them down to MapR Database.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;SupportsPushDownRequiredColumns&lt;/strong&gt;, which allows us to intercept the query projections, so we can push them down to MapR Database.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Let&apos;s start by implementing &lt;code&gt;ReadSupportWithSchema&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;class Reader extends ReadSupportWithSchema {

  override def createReader(schema: StructType, options: DataSourceOptions): DataSourceReader = {

    val tablePath = options.get(&quot;path&quot;).get()

    new MapRDBDataSourceReader(schema, tablePath)
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As we can see, we simply get the table path and the schema we want to use when reading from MapR Database. Then we pass them to &lt;code&gt;MapRDBDataSourceReader&lt;/code&gt;.&lt;/p&gt;
&lt;h2&gt;MapRDBDataSourceReader&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;MapRDBDataSourceReader&lt;/code&gt; implements &lt;code&gt;DataSourceReader&lt;/code&gt;, and we are also mixing in &lt;code&gt;SupportsPushDownFilters&lt;/code&gt; and &lt;code&gt;SupportsPushDownRequiredColumns&lt;/code&gt; to indicate that we want to push filters and projections down to MapR Database.&lt;/p&gt;
&lt;p&gt;Let&apos;s look at each piece separately, so we can understand them better.  &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;class MapRDBDataSourceReader(schema: StructType, tablePath: String)
  extends DataSourceReader
    with SupportsPushDownFilters
    with SupportsPushDownRequiredColumns {

  private var projections: Option[StructType] = None

  override def readSchema(): StructType = ???

  override def pushFilters(filters: Array[Filter]): Array[Filter] = ???

  override def pushedFilters(): Array[Filter] = ???

  override def pruneColumns(requiredSchema: StructType): Unit = ???

  override def createDataReaderFactories(): util.List[DataReaderFactory[Row]] = ???

}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;projections&lt;/code&gt; variable will hold the schema we want to project, if any. In case we don&apos;t explicitly project fields by doing &lt;code&gt;.select&lt;/code&gt;, we will project all the fields on the &lt;code&gt;schema&lt;/code&gt; variable.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;readSchema&lt;/code&gt; works in conjunction with &lt;code&gt;projections&lt;/code&gt; and &lt;code&gt;pruneColumns&lt;/code&gt;. If in our Spark query we specify a &lt;code&gt;select&lt;/code&gt;, then the selected fields are passed to &lt;code&gt;pruneColumns&lt;/code&gt;, and those are the only fields we will bring from MapR Database.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;private var projections: Option[StructType] = None

override def readSchema(): StructType = projections match {
  case None                  =&gt; schema
  case Some(fieldsToProject) =&gt; fieldsToProject
}

override def pruneColumns(requiredSchema: StructType): Unit = projections =
  Some(requiredSchema)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;code&gt;pushFilters&lt;/code&gt; indicates what filters we have specified in the &lt;code&gt;where&lt;/code&gt; or &lt;code&gt;filter&lt;/code&gt; clause in our Spark query. Basically, we have to decide which of those we want to push down to MapR Database; the other ones will be applied by Spark after the data is in memory.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;private var supportedFilters: List[Filter] = List.empty

override def pushFilters(filters: Array[Filter]): Array[Filter] =
 filters.partition(isSupportedFilter) match {
   case (supported, unsupported) =&gt;
     supportedFilters = supported.toList

     unsupported
 }

override def pushedFilters(): Array[Filter] = supportedFilters.toArray

private def isSupportedFilter(filter: Filter) = filter match {
 case _: And =&gt; true
 case _: Or =&gt; true
 case _: IsNull =&gt; true
 case _: IsNotNull =&gt; true
 case _: In =&gt; true
 case _: StringStartsWith =&gt; true
 case _: EqualTo =&gt; true
 case _: LessThan =&gt; true
 case _: LessThanOrEqual =&gt; true
 case _: GreaterThan =&gt; true
 case _: GreaterThanOrEqual =&gt; true

 case _ =&gt; false
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In the snippet above, the series of filters we are pushing down match the filters handled by the official connector, so we provide the same functionality as the official connector at this level.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;createDataReaderFactories&lt;/code&gt; creates a list of data readers that actually do the heavy work of reading from our source, MapR Database. In our case here, we are getting the table information and creating a reader for each region/partition in the table, so we can take advantage of the parallelism offered by MapR Database.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;override def createDataReaderFactories(): util.List[DataReaderFactory[Row]] =
    com.mapr.db.MapRDB
       .getTable(tablePath)
       .getTabletInfos
       .zipWithIndex
       .map { case (descriptor, idx) =&gt;
          logTabletInfo(descriptor, idx)

          MapRDBTabletInfo(idx,
                           descriptor.getLocations,           
                           descriptor.getCondition.asJsonString)
       }
       .map(createReaderFactory)
       .toList

private def createReaderFactory(tabletInfo: MapRDBTabletInfo) =
 new MapRDBDataPartitionReader(
   tablePath,
   supportedFilters,
   readSchema(),
   tabletInfo,
   hintedIndexes)

}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;MapRDBDataPartitionReader&lt;/h2&gt;
&lt;p&gt;We are almost done, yet the most important part is about to come.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;MapRDBDataPartitionReader&lt;/code&gt; is where we actually build the MapR Database query and execute it in our MapR Database table. Notice that we are passing the table we are going to read from, and the filters and projections we want to push down, along with the partition each particular reader will be reading from. Remember that we are creating multiple instances of this class; each will read from a different MapR Database region/partition.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;class MapRDBDataPartitionReader(table: String,
                               filters: List[Filter],
                               schema: StructType,
                               tabletInfo: MapRDBTabletInfo,
                               hintedIndexes: List[String]
) extends DataReaderFactory[Row] {

  override def createDataReader(): DataReader[Row] = ???
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now we need to connect to MapR Database by opening a connection and creating a document store object.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;class MapRDBDataPartitionReader(table: String,
                               filters: List[Filter],
                               schema: StructType,
                               tabletInfo: MapRDBTabletInfo,
                               hintedIndexes: List[String]
) extends DataReaderFactory[Row] {


  import org.ojai.store._

  @transient private lazy val connection = DriverManager.getConnection(&quot;ojai:mapr:&quot;)

  @transient private lazy val store: DocumentStore = connection.getStore(table)

  override def createDataReader(): DataReader[Row] = ???
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;code&gt;query&lt;/code&gt; creates the final command to be sent to MapR Database. This task is a matter of applying the query condition and the projections to our &lt;code&gt;connection&lt;/code&gt; object.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;private def query = {
  val condition = buildQueryConditionFrom(filters)(connection)

  val query = connection
    .newQuery()
    .select(schema.fields.map(_.name): _*)  // push projections down
    .where(condition)                       // push filters down
    .build()

  query
}
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;buildQueryConditionFrom&lt;/code&gt; method reads the Spark filters and transforms them into OJAI filters with the corresponding data types; this is where we push the filters down.&lt;/p&gt;
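&lt;p&gt;As a rough illustration of the idea (not the library&apos;s actual implementation), a few Spark filters could be translated into an OJAI QueryCondition like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.spark.sql.sources.{EqualTo, Filter, GreaterThan, IsNotNull}
import org.ojai.store.{Connection, QueryCondition}

// Sketch only: handle a few supported filters and AND them together;
// anything not matched here is skipped and left for Spark to apply in memory.
def buildQueryConditionFrom(filters: List[Filter])(connection: Connection): QueryCondition = {
  val condition = connection.newCondition().and()

  filters.foreach {
    case EqualTo(field, value)     =&gt; condition.is(field, QueryCondition.Op.EQUAL, value.toString)
    case GreaterThan(field, value) =&gt; condition.is(field, QueryCondition.Op.GREATER, value.toString)
    case IsNotNull(field)          =&gt; condition.exists(field)
    case _                         =&gt; // not pushed down
  }

  condition.close().build()
}
&lt;/code&gt;&lt;/pre&gt;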
&lt;p&gt;&lt;em&gt;It is very important to notice that since we are using &lt;strong&gt;OJAI&lt;/strong&gt;, it will automatically use any secondary indexes for fields that are part of the filters we are applying. Make sure you check the output at the end of this post.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;documents&lt;/code&gt; is a stream of data coming from MapR Database, based on &lt;code&gt;query&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;@transient private lazy val documents = {
  val queryResult = store.find(query)

  println(s&quot;QUERY PLAN: ${queryResult.getQueryPlan}&quot;)

  queryResult.asScala.iterator
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;code&gt;createDataReader&lt;/code&gt; uses the stream we have created (&lt;code&gt;documents&lt;/code&gt;) to do the actual reading and returning of the data back to Spark.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;override def createDataReader(): DataReader[Row] = new DataReader[Row] {
    override def next(): Boolean = documents.hasNext

    override def get(): Row = {
      val document = ParsableDocument(documents.next())

      val values = schema
                      .fields
                      .foldLeft(List.empty[Any])((xs, field) =&gt;
                         document.get(field) :: xs)
                      .reverse

      Row.fromSeq(values)
    }

    override def close(): Unit = {
      store.close()
      connection.close()
    }
  }
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Notice that &lt;code&gt;ParsableDocument(document).get(field)&lt;/code&gt; handles the transformation from OJAI types back to Spark types. We support all OJAI types, except for &lt;em&gt;Interval&lt;/em&gt;. Types are transformed recursively, so if we have a &lt;em&gt;Map&lt;/em&gt; that has another &lt;em&gt;Map&lt;/em&gt; inside with &lt;em&gt;Arrays&lt;/em&gt; of &lt;em&gt;Ints&lt;/em&gt;, we&apos;ve got you covered.&lt;/p&gt;
&lt;h2&gt;Using Our Connector&lt;/h2&gt;
&lt;p&gt;At this point, we are ready to plug in our custom data source into Spark in the following way:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;sparkSession
  .read
  .format(&quot;com.github.anicolaspp.spark.sql.Reader&quot;)
  .schema(schema)
  .load(path)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This allows us to use our own way to read from MapR Database, so that any filter being applied that is part of a secondary index on the physical table will be used to optimize the reading.&lt;/p&gt;
&lt;h2&gt;Syntax&lt;/h2&gt;
&lt;p&gt;In order to maintain a similar API to the one offered by the default MapR Database Connector, we added some syntax to our library in the following way:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;object MapRDB {

  implicit class ExtendedSession(sparkSession: SparkSession) {

    def loadFromMapRDB(path: String, schema: StructType): DataFrame = {

      sparkSession
        .read
        .format(&quot;com.github.anicolaspp.spark.sql.Reader&quot;)
        .schema(schema)
        .load(path)
    }
  }

}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Notice that our &lt;code&gt;loadFromMapRDB&lt;/code&gt; method requires a &lt;code&gt;schema&lt;/code&gt; to be passed in. This is a small difference from the official connector that supports schema inference. However, this is a design decision, since we know that most of the time we have the schema available. On the other hand, we know that inferring the schema does not always work correctly on the official connector.&lt;/p&gt;
&lt;p&gt;We can now use our connector in the same way we used the default/official connector.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val schema = StructType(Seq(StructField(&quot;_id&quot;, StringType), StructField(&quot;uid&quot;, StringType)))

val data = sparkSession
  .loadFromMapRDB(&quot;/user/mapr/tables/data&quot;, schema)
  .filter(&quot;uid = &apos;101&apos;&quot;)
  .select(&quot;_id&quot;)

data.take(3).foreach(println)
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Using MapR Database Secondary Indexes&lt;/h2&gt;
&lt;p&gt;When we run the code above, the &lt;strong&gt;TRACE&lt;/strong&gt; output from &lt;strong&gt;OJAI&lt;/strong&gt; looks similar to the following:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;QUERY PLAN: {&quot;QueryPlan&quot;:[
  [{
    &quot;streamName&quot;:&quot;DBDocumentStream&quot;,
    &quot;parameters&quot;:{
      &quot;queryConditionPath&quot;:false,
      &quot;indexName&quot;:&quot;uid_idx&quot;,
      &quot;projectionPath&quot;:[
        &quot;uid&quot;,
        &quot;_id&quot;
      ],
      &quot;primaryTable&quot;:&quot;/user/mapr/tables/data&quot;
    }
  }
  ]
]}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Notice that it automatically uses the index called &lt;code&gt;uid_idx&lt;/code&gt;, which is an index for the field &lt;code&gt;uid&lt;/code&gt; that at the same time is the field being used in the Spark filter.&lt;/p&gt;
&lt;h2&gt;Conclusions&lt;/h2&gt;
&lt;p&gt;MapR Database is a powerful tool that runs as part of the MapR Data Platform. The Spark Connector offers an interesting way to interact with MapR Database, since it allows us to use all Spark constructs at scale when working with this NoSQL system. However, sometimes the default connector falls short because it does not use the secondary index capabilities of MapR Database when we need them the most.&lt;/p&gt;
&lt;p&gt;On the other hand, our implementation mimics the Connector API and ensures that the implemented Spark data source uses MapR Database secondary indexes, since it relies on pure OJAI queries that are able to support secondary indexes out of the box.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;em&gt;Our library code can be found here: &lt;a href=&quot;https://github.com/anicolaspp/MapRDBConnector&quot;&gt;MapRDBConnector&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;You can get the binaries directly from Maven Central:&lt;/em&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-xml&quot;&gt;&amp;#x3C;dependency&gt;
  &amp;#x3C;groupId&gt;com.github.anicolaspp&amp;#x3C;/groupId&gt;
  &amp;#x3C;artifactId&gt;maprdbconnector_2.11&amp;#x3C;/artifactId&gt;
  &amp;#x3C;version&gt;1.0.2&amp;#x3C;/version&gt;
&amp;#x3C;/dependency&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Or using sbt:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;libraryDependencies += &quot;com.github.anicolaspp&quot; % &quot;maprdbconnector_2.11&quot; % &quot;1.0.2&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;&lt;em&gt;Disclaimer: This is an independent effort to improve querying MapR Database. This library is not a substitute for the official &lt;a href=&quot;https://docs.datafabric.hpe.com/61/Spark/SparkConnectorsMapRDB.html&quot;&gt;&lt;strong&gt;Connector for Apache Spark&lt;/strong&gt;&lt;/a&gt; offered by MapR as part of its distribution.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;This blog post was originally published on &lt;a href=&quot;https://hackernoon.com/mapr-db-spark-connector-with-secondary-indexes-df41909f28ea?sk%3D3dd8eb1038b07bfbc11ae35c37f60743&quot;&gt;Medium&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Using Python with Apache Spark]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may…]]></description><link>https://developer.hpe.com/using-python-with-apache-spark/</link><guid isPermaLink="false">https://developer.hpe.com/using-python-with-apache-spark/</guid><pubDate>Sat, 13 Feb 2021 07:12:32 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Jim Scott&quot;,
&quot;publish&quot;: &quot;2015-11-23T08:00:00.000Z&quot;,
&quot;tags&quot;: &quot;apache-spark&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;Apache Spark is awesome. &lt;a target=&apos;_blank&apos; href=&apos;https://www.python.org/&apos;&gt;Python&lt;/a&gt; is awesome. This post will show you how to use your favorite programming language to process large datasets quickly.&lt;/p&gt;
&lt;h2&gt;Why Python?&lt;/h2&gt;
&lt;p&gt;Python has become one of the major programming languages, joining the pantheon of essential languages like C, C++, and Java. Why has it become so popular? In large part because Guido van Rossum designed it to be easy to read and learn.&lt;/p&gt;
&lt;p&gt;But it’s more than just easy. It’s also super useful due to its “batteries included” standard library. Python comes with a number of modules for interacting with the operating system, searching text with regular expressions, accessing the Internet, and just about anything else you could think of. You can download lots more or roll your own by interfacing with a C library.&lt;/p&gt;
&lt;p&gt;Since it’s a dynamic, interpreted language, you don’t have to declare variables or deal with memory management bugs like you do with C.&lt;/p&gt;
&lt;p&gt;For this reason, Python appeals to experienced programmers as well as beginners. Google uses it extensively.&lt;/p&gt;
&lt;p&gt;While you can use Scala, which Spark is built upon, there are good reasons to use Python. More people will likely be familiar with Python than with Scala, which will flatten the learning curve.&lt;/p&gt;
&lt;p&gt;Actually installing Spark is beyond the scope of this tutorial. This post will assume you have Python on your machine. Python is standard on most Linux/Unix distributions and Mac OS X. You can easily install Python from its homepage if you’re running Windows or just want a more recent version.&lt;/p&gt;
&lt;p&gt;You can launch the interactive Python shell for Spark with the command &lt;code&gt;./bin/pyspark&lt;/code&gt; from the Spark directory.&lt;/p&gt;
&lt;p&gt;The Spark equivalent of “Hello, world” is a word count. Here it is using Spark on Python, borrowed from the Apache Spark &lt;a target=&apos;_blank&apos; href=&apos;http://spark.apache.org/&apos;&gt;homepage&lt;/a&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;text_file = sc.textFile(&quot;hdfs://...&quot;)

counts = (text_file
    .flatMap(lambda line: line.split())   # split each line into words
    .map(lambda word: (word, 1))          # pair each word with a count of 1
    .reduceByKey(lambda a, b: a + b))     # sum the counts for each word
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This creates an RDD from the text file, splits each line into words, maps each word to a count of one, and then sums the counts for each word.&lt;/p&gt;
&lt;h2&gt;Transformations&lt;/h2&gt;
&lt;p&gt;The defining feature of Apache Spark is its Resilient Distributed Datasets, or RDDs. These RDDs represent the data as immutable collections spread across all of the nodes in a Spark cluster. Operations are built from transformations that are chained together; because a transformation produces a new RDD rather than modifying data in place, all operations on your data are nondestructive, which allows the system to recover from failures.&lt;/p&gt;
&lt;p&gt;Transformations applied to an RDD generate new RDDs. One of the more popular transformations is &lt;code&gt;filter()&lt;/code&gt;, which takes a function as an argument, applies it to every value in your data, and returns only the values for which the function returns true.&lt;/p&gt;
&lt;p&gt;To transform an external text file into an RDD, just use the command &lt;code&gt;MyFile = sc.textFile(&quot;data.txt&quot;)&lt;/code&gt;, where MyFile is the name you want to use and &quot;data.txt&quot; is the name of your file. &lt;code&gt;map()&lt;/code&gt; is similar to &lt;code&gt;filter()&lt;/code&gt; but applies its function to every value and returns the transformed values. &lt;code&gt;flatMap()&lt;/code&gt;, seen in the earlier example, does the same thing but flattens the resulting sequences into a single collection of values.&lt;/p&gt;
&lt;p&gt;Here are some other transformations; a short example combining several of them follows the list:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;reduceByKey()&lt;/code&gt; takes a function as an argument and aggregates the values for each key, for example by adding them together.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;groupByKey()&lt;/code&gt; groups the values for each key into an iterable object.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;distinct()&lt;/code&gt; removes duplicates from an RDD.&lt;/li&gt;
&lt;/ul&gt;
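&lt;p&gt;To make these transformations concrete, here is a small, illustrative PySpark snippet; the sentences in it are made-up sample data for this example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Illustrative example: the sentences below are made-up sample data.
lines = sc.parallelize([&quot;to be or not to be&quot;, &quot;to see or not to see&quot;])

words = lines.flatMap(lambda line: line.split())   # one element per word
long_words = words.filter(lambda w: len(w) &gt; 2)    # keep only words longer than two characters
unique_words = words.distinct()                    # drop duplicate words
word_counts = (words
    .map(lambda w: (w, 1))                         # pair each word with a count of 1
    .reduceByKey(lambda a, b: a + b))              # aggregate the counts for each word
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Each of these calls returns a new RDD; nothing is actually computed until an action (covered below) asks for a result.&lt;/p&gt;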
&lt;h2&gt;Using Collections&lt;/h2&gt;
&lt;p&gt;You can use the &lt;code&gt;collect()&lt;/code&gt; action to gather all the data in an RDD into a local array on the driver, so that you can work with the results directly.&lt;/p&gt;
&lt;p&gt;You can also parallelize an array for use on a Spark cluster.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;data = [1, 2, 3, 4, 5]
distData = sc.parallelize(data)

&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Actions&lt;/h2&gt;
&lt;p&gt;While transformations create new RDDs, actions return a result or write one out. The &lt;code&gt;collect()&lt;/code&gt; call is an example of an action we’ve already seen; &lt;code&gt;sc.parallelize()&lt;/code&gt;, by contrast, creates an RDD from a local collection.&lt;/p&gt;
&lt;p&gt;Here are some common actions; a short example using them follows the list:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;count()&lt;/code&gt; counts the number of elements in an RDD.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;take(n)&lt;/code&gt; fetches the first n elements from an RDD.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;foreach()&lt;/code&gt; applies a function to each element in an RDD.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;saveAsTextFile()&lt;/code&gt; saves an RDD into a text file in the specified path.&lt;/li&gt;
&lt;/ul&gt;
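&lt;p&gt;Here is a short, illustrative snippet that runs these actions against the made-up &lt;code&gt;words&lt;/code&gt; RDD from the transformations example above; the output path is a placeholder:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Actions trigger computation and return a result to the driver (or write data out).
print(words.count())                          # total number of elements in the RDD
print(words.take(3))                          # the first three elements, as a Python list
words.foreach(lambda w: print(w))             # run a function on each element (output goes to the executor logs)
words.saveAsTextFile(&quot;/tmp/word-output&quot;)      # placeholder output directory
&lt;/code&gt;&lt;/pre&gt;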
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Python is a powerful programming language that’s easy to code with. Combined with Apache Spark, you have a powerful, easy way to process Big Data either in real time or with scripts. The MapR distribution gives you everything you need to process Big Data in your favorite language. The &lt;a target=&apos;\_blank&apos;  href=&apos;http://spark.apache.org/documentation.html&apos;&gt;Apache Spark documentation&lt;/a&gt; will give you more info on how to do so.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Getting Started with MapR Client Container]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may…]]></description><link>https://developer.hpe.com/getting-started-with-mapr-client-container/</link><guid isPermaLink="false">https://developer.hpe.com/getting-started-with-mapr-client-container/</guid><pubDate>Sat, 13 Feb 2021 06:58:44 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Tugdual Grall&quot;,
&quot;publish&quot;: &quot;2017-02-07T06:00:00.000Z&quot;,
&quot;tags&quot;: &quot;streaming&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;The MapR Persistent Application Client Container (PACC) is a Docker-based container image that includes a container-optimized MapR client. The PACC provides secure access to MapR Data Platform services, including MapR XD, MapR Database, and MapR Event Store. The PACC makes it fast and easy to run containerized applications that access data in MapR. &lt;/p&gt;
&lt;p&gt;In this article, you will learn how you can easily deploy and run your MapR application in the container.&lt;/p&gt;
&lt;h2&gt;About the application&lt;/h2&gt;
&lt;p&gt;This sample application is made of two services:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&quot;MapR Sensor&quot; is a Java application that captures the information about the computer (CPU, Memory, ...) and publishes the data on a topic every 500ms.&lt;/li&gt;
&lt;li&gt;&quot;MapR Web Application&quot; is a Web application, powered by Jetty, that displays the messages from this topic on a Web page. Jetty HTTP request log files are saved into MapR XD. This provides centralized log management and also makes it possible to run processing or analytics jobs on the logs using Apache Spark or Drill.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/1-1613199562954.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Prerequisite&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Docker 1.12.5 or later&lt;/li&gt;
&lt;li&gt;Apache Maven 3.x or later&lt;/li&gt;
&lt;li&gt;A Git client&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Build and Deploy the Application using Docker&lt;/h2&gt;
&lt;p&gt;Clone the application from Github using the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ git clone https://github.com/mapr-demos/mapr-pacc-sample

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Let&apos;s look at the project structure:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;mapr-pacc-sample
├── pom.xml
├── sensor-service
│   ├── Dockerfile
│   ├── docker.env
│   ├── pom.xml
│   ├── run.sh
│   └── src
└── webserver-service
    ├── Dockerfile
    ├── docker.env
    ├── pom.xml
    ├── run.sh
    └── src
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The project is an Apache Maven multi-module project made of two modules: sensor-service and webserver-service.&lt;/p&gt;
&lt;p&gt;Each module contains a Dockerfile defining a custom image.&lt;/p&gt;
&lt;h2&gt;Create Custom Docker Files&lt;/h2&gt;
&lt;p&gt;The Docker files should be configured to:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Install and configure the MapR Client. This is done by building the container from the MapR PACC image.&lt;/li&gt;
&lt;li&gt;Deploy the Java application. This is done by copying the Jar into the container.&lt;/li&gt;
&lt;li&gt;Run the Java application. This is done by simply calling the Java command.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;Docker file for Sensor/Producer&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Open the file /mapr-pacc-sample/sensor-service/Dockerfile:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;FROM maprtech/pacc:5.2.0_2.0_centos7

# Create a directory for your MapR Application and copy the Application
RUN mkdir -p /usr/share/mapr-apps/
COPY ./target/sensor-service-1.0-SNAPSHOT.jar /usr/share/mapr-apps/sensor-service.jar
COPY run.sh /usr/share/mapr-apps/run.sh
RUN chmod +x /usr/share/mapr-apps/run.sh

CMD [&quot;start&quot;, &quot;/usr/share/mapr-apps/run.sh&quot;, &quot;/apps/sensors:computer&quot;]

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Docker file for Web/Consumer&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Open the file /mapr-pacc-sample/webserver-service/Dockerfile:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;FROM maprtech/pacc:5.2.0_2.0_centos7

EXPOSE 8080

# Create a directory for your MapR Application and copy the Application
RUN mkdir -p /usr/share/mapr-apps/
COPY ./target/webserver-service-1.0-SNAPSHOT-jar-with-dependencies.jar /usr/share/mapr-apps/webserver-service.jar
COPY run.sh /usr/share/mapr-apps/run.sh
RUN chmod +x /usr/share/mapr-apps/run.sh

CMD [&quot;start&quot;, &quot;/usr/share/mapr-apps/run.sh&quot;, &quot;/apps/sensors:computer&quot; , &quot;/mapr/my.cluster.com/apps/logs/&quot;]

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Both Dockerfiles are based on the MapR pacc:5.2.0_2.0_centos7 image, so the applications automatically inherit the components installed in that image: the MapR Database and Streams client, the POSIX Client for Containers, and Java 8.&lt;/p&gt;
&lt;p&gt;Both Docker files are defined with the following steps:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Create a new directory /usr/share/mapr-apps/ to deploy the application &lt;/li&gt;
&lt;li&gt;Copy the application from maven target directory into this directory&lt;/li&gt;
&lt;li&gt;Copy the run.sh file to this folder and make it executable&lt;/li&gt;
&lt;li&gt;For the Web service the HTTP port 8080 is exposed&lt;/li&gt;
&lt;li&gt;The run.sh is automatically started using the CMD command&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The run.sh script creates /apps/logs through the mount point and starts the Java application when the container starts.&lt;/p&gt;
&lt;p&gt;Now that you understand how to create your Dockerfile based on MapR PACC, let&apos;s build and run the application services.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Build and Run Docker images&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Before starting the application, you must create a new MapR Event Store stream and a new folder in the MapR Distributed File and Object Store. Execute the following steps on the MapR cluster:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;1- Create a MapR Stream and Topic&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;On your MapR cluster, using a terminal window, run the following commands to create the /apps/sensors:computer topic:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ maprcli stream create -path /apps/sensors -produceperm p -consumeperm p -topicperm p

$ maprcli stream topic create -path /apps/sensors -topic computer
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;2- Monitor the MapR Stream Topic&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Start the Kafka Consumer Console to monitor the messages in the /apps/sensors:computer topic; in the terminal window of your MapR Cluster run:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ /opt/mapr/kafka/kafka-0.9.0/bin/kafka-console-consumer.sh --new-consumer --bootstrap-server this.will.be.ignored:9092 --topic /apps/sensors:computer

&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: You&apos;ll need to install mapr-kafka package on cluster to execute kafka-console-*.sh commands.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;strong&gt;Application &quot;Sensor Producer&quot;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;1- Build the new custom image&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Build a new custom image for the sensor producer using the following commands:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ cd sensor-service

$ mvn clean package

$ docker build -t mapr-sensor-producer .
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;2- Run the new container&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Run the container with the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ docker run -it -e MAPR_CLDB_HOSTS=192.168.99.18 -e MAPR_CLUSTER=my.cluster.com -e MAPR_CONTAINER_USER=mapr --name producer -i -t mapr-sensor-producer

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This command creates a new container based on the mapr-sensor-producer image that we just built. The command uses the following mandatory variables:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The name of the container is producer&lt;/li&gt;
&lt;li&gt;MAPR_CLDB_HOSTS: the list of CLDB hosts of your MapR cluster&lt;/li&gt;
&lt;li&gt;MAPR_CLUSTER: the name of the cluster&lt;/li&gt;
&lt;li&gt;MAPR_CONTAINER_USER : the user that will be used to run the application&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The MAPR_CLDB_HOSTS and MAPR_CLUSTER variables are used to configure the MapR client embedded in the container.&lt;/p&gt;
&lt;p&gt;The Java application is automatically started by Docker, and you should see messages in the Kafka Console.&lt;/p&gt;
&lt;p&gt;You can start/stop the container, using the following commands:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ docker start producer

$ docker stop producer
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Application &quot;Web Consumer&quot;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;1- Build the new custom image&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Build a new custom image for the web consumer, using the following commands:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ cd webserver-service

$ mvn clean package

$ docker build -t mapr-web-consumer .

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;2- Run the new container&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Run the container with the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ docker run -it --privileged --cap-add SYS_ADMIN --cap-add SYS_RESOURCE --device /dev/fuse -e MAPR_CLDB_HOSTS=192.168.99.18 -e MAPR_CLUSTER=my.cluster.com -e MAPR_CONTAINER_USER=mapr -e MAPR_MOUNT_PATH=/mapr -p 8080:8080 --device /dev/fuse --name web -i -t mapr-web-consumer

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This command creates a new container based on the mapr-web-consumer image that we just built. The command uses the following variables:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The name of the container is web&lt;/li&gt;
&lt;li&gt;The -p 8080:8080 is used to map the HTTP port from the container to your host&lt;/li&gt;
&lt;li&gt;--device /dev/fuse is used to add a fuse client to the container&lt;/li&gt;
&lt;li&gt;--privileged --cap-add SYS_ADMIN --cap-add SYS_RESOURCE are needed for the /dev/fuse to work&lt;/li&gt;
&lt;li&gt;MAPR_CLDB_HOSTS: the list of CLDB hosts of your MapR cluster&lt;/li&gt;
&lt;li&gt;MAPR_CLUSTER: the name of the cluster&lt;/li&gt;
&lt;li&gt;MAPR_CONTAINER_USER : the user that will be used to run the application&lt;/li&gt;
&lt;li&gt;MAPR_MOUNT_PATH: the mount path of MapR FUSE client, providing direct access to MapR XD&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This container starts the Java application, which runs an embedded Jetty server. You can access the Web application from your host, using &lt;code&gt;http://localhost:8080&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;This application is a simple web page that shows the last messages published to the topic. The page is automatically refreshed every 5 seconds.&lt;/p&gt;
&lt;p&gt;The Jetty server saves the HTTP request logs directly in MapR XD, using the FUSE client; you can look at the logs in the /apps/logs folder of your MapR cluster.&lt;/p&gt;
&lt;p&gt;You can start/stop the container, using the following commands:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ docker start web

$ docker stop web
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;In this article, you have learned how to use the MapR PACC to deploy and run applications. The security on the MapR cluster used by this sample has not been enabled; if your cluster is secured, you can use additional MapR configuration to authenticate your services using security token.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Chapel Technical Lead, Brad Chamberlain, Opens Up About Open Source]]></title><description><![CDATA[brad chamberlain for blog As a leading global, edge-to-cloud Platform-as-a-Service company, Hewlett Packard Enterprise (HPE) prides itself…]]></description><link>https://developer.hpe.com/chapel-technical-lead-brad-chamberlain-opens-up-about-open-source/</link><guid isPermaLink="false">https://developer.hpe.com/chapel-technical-lead-brad-chamberlain-opens-up-about-open-source/</guid><pubDate>Fri, 12 Feb 2021 11:26:55 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/brad-chamberlain-for-blog-1613129198318.jpg&quot; alt=&quot;brad chamberlain for blog&quot;&gt;&lt;/p&gt;
&lt;p&gt;As a leading global, edge-to-cloud Platform-as-a-Service company, Hewlett Packard Enterprise (HPE) prides itself in employing team members who share one common purpose: to advance the way people live and work. Because of this, HPE boasts some of the finest &lt;a href=&quot;https://www.hpe.com/us/en/open-source.html&quot;&gt;open source&lt;/a&gt; engineering talent. In this blog series, you’ll get to meet a number of them as I interview some of the open source experts who make up our team.&lt;/p&gt;
&lt;p&gt;Dr. Bradford Chamberlain is the principal engineer for the &lt;a href=&quot;https://chapel-lang.org/&quot;&gt;Chapel parallel programming language&lt;/a&gt;. His research interests center around parallel computation, particularly with respect to programming languages, compilers, algorithms, applications, and scalability. Brad comes to HPE through its recent acquisition of Cray Inc.&lt;/p&gt;
&lt;h3&gt;What attracted you to open source technologies?&lt;/h3&gt;
&lt;p&gt;I like open source because I tend to be a community-minded person. I like the idea of creating technologies that can span organizations and draw upon the expertise of people from a variety of backgrounds. And given that I benefit greatly from other open-source efforts, I really like the notion of giving back in a way that, hopefully, others will find similarly valuable.&lt;/p&gt;
&lt;p&gt;When code is open source, users have more confidence that a project can garner enough interest to live on, even if you, as the original developer, move on to pursue new endeavors. Users can continue to make changes, if needed, even if you aren’t there. For me, this is one of the many reasons that open source technologies are so attractive.&lt;/p&gt;
&lt;h3&gt;How did you first get involved with open source?&lt;/h3&gt;
&lt;p&gt;I got started with open source when I was working on the ZPL programming language while pursuing my Ph.D. in Computer Science &amp;#x26; Engineering at the University of Washington. ZPL was open source only in the sense that we released our source code to users—the development itself was done in an internal repository.  With the Chapel programming language that I work on now, our notion of open source extends to the entire development process.  As an example, our project is &lt;a href=&quot;https://github.com/chapel-lang/chapel&quot;&gt;hosted at GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;In both cases, the choice to use open source was very much for practical reasons. It’s generally quite challenging to get people to try any new programming language; making a language that’s proprietary and closed source only makes it more difficult.  In contrast, when a language is free and open source, people are far more willing to give it a try, due in large part to the flexibility afforded by open source.  It also has the practical benefit of dramatically simplifying our release process given all the architectures, operating systems, and compilers that Chapel is compatible with.&lt;/p&gt;
&lt;h3&gt;Can you tell me a little more about your work on Chapel and why you’re so excited about it?&lt;/h3&gt;
&lt;p&gt;Chapel is a programming language designed to make parallel programming on supercomputers far more productive than current approaches.  However, it is also designed for portability such that Chapel programs can be developed and run on laptops, commodity clusters, or the cloud, for anyone who doesn’t have easy access to a supercomputer.  In practice, I do most of my Chapel programming on a Mac laptop with a high degree of confidence that, once I’ve got it working, I can run it on an HPE Cray supercomputer without any problems.&lt;/p&gt;
&lt;p&gt;Chapel is also designed to appeal to programmers of all expertise levels.  We strive to make it so easy to use that essentially every programmer could program a high-performance computing (HPC) system. Today, programming at such scales tends to require expertise in a number of fairly specialized and explicit programming notations, like C/C++, MPI, OpenMP, and CUDA. With Chapel, we strive to combine the ease-of-use of a language like Python with the performance and scalability you’d expect from conventional HPC techniques.&lt;/p&gt;
&lt;h3&gt;What’s currently on your list of things to do?&lt;/h3&gt;
&lt;p&gt;My team’s current and upcoming projects that I’m most excited about include compiling Chapel to GPUs, refactoring our compiler for speed and flexibility, and optimizing Chapel’s performance for HPE Cray EX. I’m also excited to engage with our community more as we modernize our website using some open source technologies that are new to me (Jekyll and Rouge).  That said, my most immediate task is helping to wrap up our Spring 2021 release, due out in March.&lt;/p&gt;
&lt;h3&gt;As a developer, what you create says a lot about who you are. What impression do you hope you leave with others?&lt;/h3&gt;
&lt;p&gt;When people think of me, I hope they envision someone who is passionate about making supercomputers as straightforward and fun to program as laptops are, yet without sacrificing performance or scalability.  I also strive to be as approachable and affable as my own mentors were, and hope that comes through to people.&lt;/p&gt;
&lt;p&gt;To learn more about the open source projects that HPE is involved with, please visit our &lt;a href=&quot;https://www.hpe.com/us/en/open-source.html&quot;&gt;open source page&lt;/a&gt;. Interested in exploring what HPE offers for developers and data scientists? Check out our &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE DEV site&lt;/a&gt; for a ton of articles, workshops, tutorials, and other resources.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Accessing iLO Redfish APIs and HPE OneView APIs on Ansible AWX]]></title><description><![CDATA[There has been a growing demand in infrastructure management automation and an adaption toward infrastructure-as-code (IAC) with Ansible and…]]></description><link>https://developer.hpe.com/accessing-ilo-redfish-apis-and-hpe-oneview-apis-on-ansible-awx/</link><guid isPermaLink="false">https://developer.hpe.com/accessing-ilo-redfish-apis-and-hpe-oneview-apis-on-ansible-awx/</guid><pubDate>Tue, 09 Feb 2021 03:40:59 GMT</pubDate><content:encoded>&lt;p&gt;There has been a growing demand in infrastructure management automation and an adaption toward infrastructure-as-code (IAC) with Ansible and AWX. Backed by Red Hat, Ansible has become one of the most popular IAC toolings for its simple and easy to understand coding style, masterless and agentless design, and the ability to create custom playbooks and roles for providing extra support to process automation. &lt;a href=&quot;https://github.com/ansible/awx&quot;&gt;AWX&lt;/a&gt;, on the other hand, is the open-sourced version of Ansible Tower, which along with a set of tools, provides a web-based graphical user interface hub for consuming Ansible playbooks.&lt;/p&gt;
&lt;p&gt;Both the &lt;a href=&quot;https://github.com/HewlettPackard/python-ilorest-library&quot;&gt;HPE Python iLO REST Library&lt;/a&gt; and the &lt;a href=&quot;https://github.com/HewlettPackard/oneview-ansible&quot;&gt;HPE OneView SDK for Ansible&lt;/a&gt; do not come bundled with AWX. The AWX project does provide instructions on &lt;a href=&quot;https://github.com/ansible/awx/blob/devel/docs/custom_virtualenvs.md#managing-custom-python-dependencies&quot;&gt;managing custom Python dependencies&lt;/a&gt; on AWX. This blog post is to share the process that we took to set up a custom Python environment for the Python iLO REST Library and the HPE OneView SDK in order to access the iLO Redfish APIs and the HPE OneView APIs from an AWX job.&lt;/p&gt;
&lt;h1&gt;Ansible and AWX setup on localhost&lt;/h1&gt;
&lt;p&gt;Ansible and AWX setup instructions can be found  &lt;a href=&quot;https://github.com/ansible/awx/blob/devel/INSTALL.md#installing-awx&quot;&gt;here&lt;/a&gt; on GitHub. If running behind proxies, make sure the proxy parameters, such as &lt;em&gt;http_proxy&lt;/em&gt;, &lt;em&gt;https_proxy&lt;/em&gt;, and &lt;em&gt;no_proxy&lt;/em&gt; are configured accordingly in the installation inventory file. Once installation completes, the Ansible command becomes available on the localhost, and AWX runs as a containerized application, as shown here:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;bash-4.4# ansible --help
usage: ansible [-h] [--version] [-v] [-b] [--become-method BECOME_METHOD]
               [--become-user BECOME_USER] [-K] [-i INVENTORY] [--list-hosts]
               [-l SUBSET] [-P POLL_INTERVAL] [-B SECONDS] [-o] [-t TREE] [-k]
               [--private-key PRIVATE_KEY_FILE] [-u REMOTE_USER]
               [-c CONNECTION] [-T TIMEOUT]
               [--ssh-common-args SSH_COMMON_ARGS]
               [--sftp-extra-args SFTP_EXTRA_ARGS]
               [--scp-extra-args SCP_EXTRA_ARGS]
               [--ssh-extra-args SSH_EXTRA_ARGS] [-C] [--syntax-check] [-D]
               [-e EXTRA_VARS] [--vault-id VAULT_IDS]
               [--ask-vault-pass | --vault-password-file VAULT_PASSWORD_FILES]
               [-f FORKS] [-M MODULE_PATH] [--playbook-dir BASEDIR]
               [-a MODULE_ARGS] [-m MODULE_NAME]
               pattern
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;[root@localhost ~]# docker ps
CONTAINER ID   IMAGE                COMMAND                  CREATED      STATUS      PORTS                  NAMES
79248db66699   ansible/awx:17.0.1   &quot;/usr/bin/tini -- /u…&quot;   5 days ago   Up 5 days   8052/tcp               awx_task
11e9c78d53cf   ansible/awx:17.0.1   &quot;/usr/bin/tini -- /b…&quot;   5 days ago   Up 5 days   0.0.0.0:80-&gt;8052/tcp   awx_web
46c529f34016   postgres:12          &quot;docker-entrypoint.s…&quot;   5 days ago   Up 5 days   5432/tcp               awx_postgres
75c8e09ad2de   redis                &quot;docker-entrypoint.s…&quot;   5 days ago   Up 5 days   6379/tcp               awx_redis
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;Installing the HPE libraries on AWX&lt;/h1&gt;
&lt;p&gt;The installation of the HPE Python iLO REST library and the HPE OneView Ansible library can be achieved by the following three steps:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Log in to the &lt;em&gt;awx_task&lt;/em&gt; container. Create a custom Python virtual environment and install the HPE Python iLO REST library and the HPE OneView Ansible library using the Python package manager.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Log in to the &lt;em&gt;awx_web&lt;/em&gt; containers. Create a new virtual environment with the same name as the one newly created in the &lt;em&gt;awx_task&lt;/em&gt; container, and then install these two HPE Python libraries again using the Python package manager.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Lastly, add the new Python environment to the &lt;em&gt;custom_virtualenvs&lt;/em&gt; in AWX through its REST APIs.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Keep in mind that the virtual environments must be created in &lt;em&gt;awx_task&lt;/em&gt; and &lt;em&gt;awx_web&lt;/em&gt; first, before adding the new environment to the AWX custom virtual environment list. The following sections describe each of these steps in more detail.&lt;/p&gt;
&lt;h2&gt;Create new custom virtual environments on &lt;em&gt;awx_task&lt;/em&gt; and &lt;em&gt;awx_web&lt;/em&gt;&lt;/h2&gt;
&lt;p&gt;First, access the container&apos;s bash shell with the command &lt;em&gt;docker exec -it&lt;/em&gt;. For example, to access the &lt;em&gt;awx_task&lt;/em&gt; shell:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;[root@localhost ~]# docker exec -it awx_task /bin/bash
bash-4.4#
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Within the container, create a new Python virtual environment. For this example, the virtual environment is created at &lt;em&gt;/opt/hpeAutomation/venv&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;bash-4.4# mkdir -p /opt/hpeAutomation/
bash-4.4# chmod 0755 /opt/hpeAutomation/
bash-4.4# python3 -m venv /opt/hpeAutomation/venv
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Install the HPE libraries on  &lt;em&gt;awx_task&lt;/em&gt; and &lt;em&gt;awx_web&lt;/em&gt;&lt;/h2&gt;
&lt;p&gt;On each of the AWX containers, proceed as follows:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Install the prerequisite gcc package with YUM:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;bash-4.4# yum install gcc -y
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Install the psutil Python module and the HPE libraries:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;bash-4.4# /opt/hpeAutomation/venv/bin/pip3 install psutil
bash-4.4# /opt/hpeAutomation/venv/bin/pip3 install ansible hpOneView hpICsp python-ilorest-library
bash-4.4# git clone https://github.com/HewlettPackard/oneview-ansible.git
bash-4.4# cd oneview-ansible
bash-4.4# cp library/*.py  /opt/hpeAutomation/venv/lib/python3.6/site-packages/ansible/modules/remote_management/oneview/
bash-4.4# cp library/module_utils/oneview.py  /opt/hpeAutomation/venv/lib/python3.6/site-packages/ansible/module_utils 
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Configure the &lt;em&gt;custom_virtualenvs&lt;/em&gt; in AWX&lt;/h2&gt;
&lt;p&gt;Once the Python modules are installed on &lt;em&gt;awx_task&lt;/em&gt; and &lt;em&gt;awx_web&lt;/em&gt;, the last step is to add the newly created virtual environment to &lt;em&gt;custom_virtualenvs&lt;/em&gt; in AWX. This can be done with an HTTP PATCH request to AWX:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;[root@localhost ~]# curl -X PATCH http://AWX_admin_username:AWX_adminpassword@AWX_ip_address/api/v2/settings/system/ \
-d &apos;{&quot;CUSTOM_VENV_PATHS&quot;: [&quot;/var/lib/awx/venv/ansible&quot;, &quot;/opt/hpeAutomation/&quot;]}&apos; \
-H &apos;Content-Type:application/json&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can verify the &lt;em&gt;custom_virtualenvs&lt;/em&gt; in AWX with an HTTP GET request to &lt;em&gt;/api/v2/config/&lt;/em&gt;, as shown below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;[root@localhost ~]# curl -u AWX_admin_username:AWX_admin_password http://AWX_IP_address/api/v2/config/
{
  &quot;time_zone&quot;: &quot;UTC&quot;,
  &quot;license_info&quot;: {
    &quot;license_type&quot;: &quot;open&quot;,
    &quot;valid_key&quot;: true,
    &quot;subscription_name&quot;: &quot;OPEN&quot;,
    &quot;product_name&quot;: &quot;AWX&quot;
  },
  ...
  ..
  .
  &quot;custom_virtualenvs&quot;: [
    &quot;/var/lib/awx/venv/ansible/&quot;,
    &quot;/opt/hpeAutomation/venv/&quot;
  ]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once this is completed, the custom virtual environment becomes available as an &lt;em&gt;Ansible Environment&lt;/em&gt; in &lt;em&gt;AWX Projects&lt;/em&gt;, and the Python libraries become accessible to the Job Templates in the project.
&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/pic1-1612842533941.png&quot; alt=&quot;project&quot;&gt;
&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/pic2-1612842545279.png&quot; alt=&quot;template&quot;&gt;&lt;/p&gt;
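&lt;p&gt;As a quick sanity check of the new environment, a Job Template can run a playbook task that calls a small script using the iLO REST library. The sketch below is illustrative only; the iLO address and credentials are placeholders:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Minimal, illustrative sketch: query an iLO through the python-ilorest-library
# installed in the /opt/hpeAutomation/venv virtual environment.
# The iLO address and credentials below are placeholders.
from redfish import RedfishClient

ilo = RedfishClient(base_url=&quot;https://ilo-ip-or-hostname&quot;,
                    username=&quot;ilo_user&quot;, password=&quot;ilo_password&quot;)
ilo.login()
response = ilo.get(&quot;/redfish/v1/Systems/1/&quot;)   # read the first system resource
print(response.dict.get(&quot;Model&quot;), response.dict.get(&quot;PowerState&quot;))
ilo.logout()
&lt;/code&gt;&lt;/pre&gt;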
&lt;p&gt;There you have it. The AWX is now ready to run jobs for HPE OneView and iLO.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Event-Driven Microservices on the MapR Data Platform]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may…]]></description><link>https://developer.hpe.com/event-driven-microservices-on-the-mapr-data-platform/</link><guid isPermaLink="false">https://developer.hpe.com/event-driven-microservices-on-the-mapr-data-platform/</guid><pubDate>Fri, 05 Feb 2021 07:05:46 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Rachel Silver&quot;,
&quot;publish&quot;: &quot;2016-09-27T07:00:00.000Z&quot;,
&quot;tags&quot;: &quot;use-cases&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;MapR is pleased to announce support for event-driven microservices on the MapR Data Platform. In this blog post, I’d like to explain what this means and how it fits into our bigger idea of “convergence.”&lt;/p&gt;
&lt;h2&gt;What are event-driven microservices?&lt;/h2&gt;
&lt;p&gt;Microservices are simple, single-purpose applications that work in unison via lightweight communications, such as data streams. They allow you to more easily manage segmented efforts to build, integrate, and coordinate your applications in ways that have traditionally been impossible with monolithic applications.&lt;/p&gt;
&lt;p&gt;By breaking up the pieces of a large application and isolating them into smaller microservice apps, you introduce agility, as these can typically be built and maintained by small, often cross-functional teams. And, they offer flexibility by promoting reuse across different solutions.&lt;/p&gt;
&lt;p&gt;Event-driven microservices leverage event streaming engines like MapR Event Store for Apache Kafka as the communications vehicles between them. And by converging file, database, and streaming services using our publish-and-subscribe framework, we can enable you to run analytical workloads using a combination of recent and historical data.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/operational-analytics-in-mapr-converged-platform-1612508715159.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;What advantages do microservices provide?&lt;/h2&gt;
&lt;p&gt;The logical and functional isolation of services provided by our microservices support is ideal for complex workflows in general, and for machine learning training in particular, because it provides a natural infrastructure for tracking the different outputs of evolving application versions.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;To illustrate this, let’s take a look at an example using data versioning and snapshots:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;During your application development life cycle, enhancements to your code will result in different outputs, and these outputs are important to preserve during the development life cycle to compare results and verify improvement.&lt;/p&gt;
&lt;p&gt;MapR Volumes are logical partitions in your cluster that can contain databases, files, and streams. Thus, each application version output can be directed to a specific volume with the associated output data. And, in a microservices architecture, all versions can be deployed in parallel to make live comparisons and ensure a more graceful upgrade process.&lt;/p&gt;
&lt;p&gt;In addition, input data can be organized in a volume and then actively preserved using a snapshot. This creates an immutable copy of the data that can be used as the basis for ongoing testing against future versions of your application. You can keep enhancing your application and run it against a known data set to ensure you can identify changes that are a direct result of your code changes, not due to changes in the data.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/snapshots-in-app-development-1612508723835.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Different versions of database records, files, and event data need to be tracked and managed together in a streaming environment.&lt;/p&gt;
&lt;p&gt;Ready for a real-world example? The diagram below shows the high-level architecture of the converged application blueprint.&lt;/p&gt;
&lt;p&gt;This was built to be specific to stock trading data analysis, but the concepts apply to any environment that deals with combining real-time streams &amp;#x26; historical data. This application is a great example of how you can process a high-speed stream of incoming data and enable both operational and analytical workloads on a single converged cluster. Take a look!&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/blueprint-arch-final-1612508815719.jpg&quot; alt=&quot;App Blueprint&quot;&gt;&lt;/p&gt;
&lt;h2&gt;How does this all work?&lt;/h2&gt;
&lt;p&gt;It starts with an integrated publish-and-subscribe framework that supports event-driven applications.&lt;/p&gt;
&lt;p&gt;The foundation of our microservices offering is our low latency messaging system. It’s adaptable, scalable, and allows you to leverage your converged platform to integrate data-in-motion and data-at-rest to support real-time applications.&lt;/p&gt;
&lt;p&gt;It’s a remarkably versatile framework allowing communication pipelines in hybrid cloud microservice architectures, between local applications, and among Docker containers. Built-in resource multi-tenancy allow you to run both processing and messaging services in the same cluster and on the same nodes.&lt;/p&gt;
&lt;p&gt;MapR Event Store consumers will automatically load-balance across partitions, enabling the application to scale linearly with increasing data rates, and the stream can be queried directly with the results integrated with the output of any microservice app in the pipeline.&lt;/p&gt;
&lt;h2&gt;Simplified Microservices Monitoring and Management&lt;/h2&gt;
&lt;p&gt;As more organizations adopt these microservices architectures, they will need better tools for monitoring and management. The MapR Data Platform has already accomplished much of this with the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Comprehensive monitoring of resource usage using MapR Monitoring components&lt;/li&gt;
&lt;li&gt;Support for containerized applications in Docker&lt;/li&gt;
&lt;li&gt;Continuous high availability and multi-master disaster recovery capabilities&lt;/li&gt;
&lt;li&gt;Unified security with access control expressions for stream access and analytics&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Kubernetized Machine Learning and AI Using KubeFlow]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may…]]></description><link>https://developer.hpe.com/kubernetized-machine-learning-and-ai-using-kubeflow/</link><guid isPermaLink="false">https://developer.hpe.com/kubernetized-machine-learning-and-ai-using-kubeflow/</guid><pubDate>Fri, 05 Feb 2021 07:00:42 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Rachel Silver&quot;,
&quot;publish&quot;: &quot;2018-08-28T07:00:00.000&quot;,
&quot;tags&quot;: &quot;nosql&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;In my previous blog, &lt;a href=&quot;/blog/EEVLz2X9vmSmZg6k6Dqx/end-to-end-machine-learning-using-containerization&quot;&gt;End-to-End Machine Learning Using Containerization&lt;/a&gt;, I covered the advantages of doing machine learning using microservices and how containerization can improve every step of the workflow by providing:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Personalized Development Environments&lt;/li&gt;
&lt;li&gt;Agile Training Capabilities&lt;/li&gt;
&lt;li&gt;Microservices Frameworks for Model Serving&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Today, I&apos;d like to talk about an example open source framework called &lt;a href=&quot;https://www.kubeflow.org/&quot;&gt;KubeFlow&lt;/a&gt;. The KubeFlow infrastructure provides the means to deploy best-of-breed open source systems for machine learning to any cluster running Kubernetes, whether on-premises or in the cloud.&lt;/p&gt;
&lt;h2&gt;Introduction to KubeFlow&lt;/h2&gt;
&lt;p&gt;As Suzy Visvanathan states in her blog, &lt;a href=&quot;/blog/JM9k0E924rtRj1QgQYnM/containers-kubernetes-and-mapr-the-time-is-now&quot;&gt;Containers, Kubernetes, and MapR: The Time is Now&lt;/a&gt;,  &quot;Kubernetes has won the container orchestration war.&quot;&lt;/p&gt;
&lt;p&gt;While the early playing field was rife with competitors, like Mesosphere Marathon, Google Kubernetes, Docker Swarm, OpenStack Magnum, and VMware Photon, it&apos;s become clear that Kubernetes is now the industry&apos;s de facto standard. And, as a result, an ecosystem of tools has begun to emerge around Kubernetes, similarly to how it did when Hadoop first emerged from &lt;a href=&quot;https://www.apache.org/&quot;&gt;Apache&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;As was true of the Hadoop ecosystem when it began (and remains true today), the Kubernetes ecosystem is starting out as a conglomerate of occasionally integrated and interrelated tools intended for use by data scientists and data engineers. The advantage so far for Kubernetes, in this regard, has been the ability to deploy pre-built offerings from container registries, allowing tools to be easily downloaded (&apos;pulled&apos;) and deployed on systems, without the traditional install pain around compiling from source that was frequently present in Hadoop ecosystem projects.&lt;/p&gt;
&lt;p&gt;And this is sufficient for simple deployments of single containers running isolated processes. But, in most cases, users want to scale workflows up and down, using multiple containers to run parallel processes. In order to do this, templatized offerings and the ability to easily deploy them are needed. The most common way that this is managed in Kubernetes is by using &lt;a href=&quot;https://docs.helm.sh/developing_charts/&quot;&gt;Helm Charts&lt;/a&gt;, &lt;a href=&quot;https://coreos.com/operators/&quot;&gt;Operators&lt;/a&gt;, or ksonnets, which are collections of YAML files that describe a deployment template such that it&apos;s reproducible and can be used to generate interconnected pods of containers on demand.&lt;/p&gt;
&lt;p&gt;What KubeFlow does is make all of this functionality a bit more user-friendly by providing some of the commonly used machine learning projects as pre-built templatized offerings (ksonnets) that are pretested to integrate together in one Kubernetes &lt;a href=&quot;https://kubernetes.io/docs/tasks/administer-cluster/namespaces-walkthrough/&quot;&gt;namespace&lt;/a&gt; –not unlike our &lt;a href=&quot;https://docs.datafabric.hpe.com/60/MapREcoSystemPacks.html&quot;&gt;MapR Ecosystem Packs&lt;/a&gt;. The initial list is based off of a common &lt;a href=&quot;https://www.tensorflow.org/&quot;&gt;TensorFlow&lt;/a&gt; deployment pattern and has been opened up, or &lt;a href=&quot;https://en.wikipedia.org/wiki/Democratization_of_technology&quot;&gt;&apos;democratized,&apos;&lt;/a&gt; to support other engines and modules.&lt;/p&gt;
&lt;p&gt;Here are a few of the offerings that are available in KubeFlow, but the list is always growing:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;http://jupyter.org/index.html&quot;&gt;Jupyter Notebook&lt;/a&gt;: an open source web-based data science notebook&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/kubeflow/katib&quot;&gt;Katib&lt;/a&gt;: a hyperparameter tuning framework&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.kubeflow.org/docs/guides/components/tftraining/&quot;&gt;TensorFlow training&lt;/a&gt;: support for TensorFlow training jobs on Kubernetes&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.kubeflow.org/docs/guides/components/pytorch/&quot;&gt;PyTorch training&lt;/a&gt;: support for PyTorch training jobs on Kubernetes&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.kubeflow.org/docs/guides/components/seldon/&quot;&gt;Seldon serving&lt;/a&gt;: a model serving framework for Kubernetes that uses Istio&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Integrating MapR with KubeFlow&lt;/h2&gt;
&lt;p&gt;MapR and KubeFlow are a very natural fit. Both are modeled on the concept of a namespace but use it to manage separate and complementary functions.&lt;/p&gt;
&lt;p&gt;In MapR, the global namespace is the key to unified data access and allows the joining of data across any divide, whether it be geographical or architectural. The MapR Global Namespace allows read/write access to any dataset to which the user has access, as if it were a local resource. This enables data security and isolation at the user, team, and tenant levels, and &lt;a href=&quot;https://docs.datafabric.hpe.com/60/SecurityGuide/Authentication.html&quot;&gt;MapR-SASL tickets&lt;/a&gt; are used to securely authenticate users.&lt;/p&gt;
&lt;p&gt;In KubeFlow, a &lt;a href=&quot;https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/&quot;&gt;Kubernetes namespace&lt;/a&gt; is used to manage cluster compute resources, Kubernetes objects (e.g., pods), and application/job deployments. Namespaces are logical entities that are used to isolate and represent cluster compute resources and jobs at the user and tenant level. &lt;a href=&quot;https://kubernetes.io/docs/concepts/configuration/secret/&quot;&gt;Kubernetes Secrets&lt;/a&gt; are used to authenticate users and can be set up to synchronize with MapR-SASL tickets for seamless integration with platform security.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/image2-1612508426888.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In both cases, namespaces are used for access control and to logically isolate tenant processes and data, which is ideal for multi-tenant organizations looking to easily manage security and performance. These namespaces complement and integrate with each other very nicely, leaving the end user with a seamless experience and the DataOps teams with a simple architecture to manage.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Containerization and a microservices architecture are critical across the entire data science workflow from prototyping to monitoring models in production. KubeFlow is a possible solution that does a really nice job of solving administrative and infrastructure problems while still allowing users to select their own tools. And, with MapR, these workflows can benefit from a best-of-breed data platform to speed the time from sandbox to production.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[End-to-End Machine Learning Using Containerization]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may…]]></description><link>https://developer.hpe.com/end-to-end-machine-learning-using-containerization/</link><guid isPermaLink="false">https://developer.hpe.com/end-to-end-machine-learning-using-containerization/</guid><pubDate>Fri, 05 Feb 2021 06:50:53 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Rachel Silver&quot;,
&quot;publish&quot;: &quot;2018-06-28T10:00:00.000&quot;,
&quot;tags&quot;: &quot;mapr-platform&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;Lately, we&apos;ve been talking a lot about containerization and how Kubernetes and MapR can pair up to enhance the productivity of your data science teams and decrease the time to insights. In this multi-part blog series, I will start with a high-level overview of why Kubernetes and containerization are appealing for data science environments. In a later iteration, I will provide an example of a framework that enables Kubernetized data science on your MapR cluster.&lt;/p&gt;
&lt;p&gt;Earlier this year, we released the MapR Volume Driver for Kubernetes, which enabled MapR customers to use Kubernetes clusters as extensions of their MapR computing space. This volume plugin provides the ability to mount directories from the MapR global namespace easily to Kubernetes pods, enabling stateful applications to run using your data in place.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/kubernetes-1612507940753.jpg&quot; alt=&quot;Kubernetes&quot;&gt;&lt;/p&gt;
&lt;p&gt;While the &lt;a href=&quot;/blog/VqVzX3gAzrT7p5PzPAZA/event-driven-microservices-on-the-mapr-data-platform&quot;&gt;microservices approach&lt;/a&gt; is useful for all types of applications, it&apos;s particularly well-suited for the data science life cycle, starting with the exploration phase through to model deployment. Let&apos;s look at how it could benefit each phase:&lt;/p&gt;
&lt;h2&gt;Personalized Development Environments&lt;/h2&gt;
&lt;p&gt;The need to experiment in isolation is paramount for data scientists during the exploration phase when experimentation with different algorithms is an important part of developing the executable code used for training models. New, innovative, and domain-specific libraries are emerging every day, and data scientists need the ability to experiment with them in an agile way that does not run afoul of IT policy.&lt;/p&gt;
&lt;p&gt;Traditionally, client applications have been enabled using edge nodes. But this paradigm has not worked well for data science as, frequently, the data science teams will be comparing the results from multiple algorithms. The overhead to manage these libraries with all of their dependencies in a shared or distributed environment can be very painful for IT. This makes the potential for an isolated experimental environment very appealing for all sides – especially since security is handled around the perimeter of the container.&lt;/p&gt;
&lt;p&gt;The ability to run multiple containers in parallel, each containing and running their own set of libraries and tools, dramatically reduces friction between IT and data scientists and serves to enhance the productivity of data science teams overall.&lt;/p&gt;
&lt;h2&gt;Training the Trainer: Separating Compute from Storage&lt;/h2&gt;
&lt;p&gt;Traditionally, compute was kept very close to the data and, frequently, this required that data be moved or replicated across many cluster silos to accomplish a variety of analytical tasks. There&apos;s a lot of overhead involved in moving data, so, frequently, less capable tools were adopted to minimize this cost.&lt;/p&gt;
&lt;p&gt;The ability to meaningfully separate compute and storage is a breakthrough that allows developers to easily adjust and test different compute footprints. One example would be moving long-running CPU-based workflows to a GPU to get faster results as research or business priorities change.&lt;/p&gt;
&lt;p&gt;While this can be useful in the exploration and development phase, it&apos;s critical to the training phase. This is the phase when the executable code, typically developed in a data science notebook, is submitted to a scheduler to run against a larger dataset. The output of this job is typically a model, and the faster this training job completes, the more frequently the data scientist can tune their parameters to update and refine their model.&lt;/p&gt;
&lt;p&gt;Where containerization really benefits the training workflow is by allowing you to peg your containerized training code to whatever compute resource you choose, while not requiring you to move any data. In MapR, this is done in Kubernetes using FUSE via the volume plugin. Because this is a POSIX-compliant interface, any Python algorithm can speak to your distributed data store as if it were stored in the container itself.&lt;/p&gt;
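&lt;p&gt;As a quick illustration of that POSIX access (a minimal sketch, not from the original post; the mount point and file name are hypothetical), training code can read a dataset from the mounted data-fabric path with nothing more than standard Python file I/O:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import csv

# Hypothetical path: with the volume plugin, a MapR directory such as
# /mapr/my.cluster.com/datasets is mounted inside the container like a local directory.
TRAINING_DATA = &quot;/mapr/my.cluster.com/datasets/train.csv&quot;

with open(TRAINING_DATA, newline=&quot;&quot;) as f:
    rows = [row for row in csv.reader(f)]

print(f&quot;loaded {len(rows)} training rows from the data fabric&quot;)
&lt;/code&gt;&lt;/pre&gt;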
&lt;p&gt;This sort of flexibility can really make a difference in getting models out of the sandbox and into production quickly.&lt;/p&gt;
&lt;h2&gt;Machine Learning Models as Microservices&lt;/h2&gt;
&lt;p&gt;Microservices have been described as simple, single-purpose applications that work in unison via lightweight communications, such as data streams. They&apos;ve traditionally enabled developers to more easily build, integrate, and manage their applications in an agile way that had typically been impossible with monolithic applications.&lt;/p&gt;
&lt;p&gt;Data science models are typically integrated into applications in order to generate insights, and containerization frameworks are fundamentally &lt;a href=&quot;/blog/0N796xBvYxcyGq8Yo35N/event-driven-microservices-architecture-patterns-and-examples&quot;&gt;microservices architectures&lt;/a&gt;. In this capacity, model deployment architectures benefit greatly from microservices frameworks, because microservices frameworks are typically intended to accommodate functionally and logically isolated applications, running in parallel. This becomes very useful in model deployment scenarios when A/B testing is used, models need to be updated or replaced in place, or inter-model routing can benefit from a streaming data fabric or services mesh.&lt;/p&gt;
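&lt;p&gt;To make the idea concrete, here is a minimal sketch (not from the original post) of a model served as a microservice with simple A/B routing between two versions. It assumes Flask is available, and the two model functions are hypothetical stand-ins:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import random

from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical stand-ins for two packaged model versions.
def model_a(features):
    return {&quot;model&quot;: &quot;A&quot;, &quot;score&quot;: sum(features)}

def model_b(features):
    return {&quot;model&quot;: &quot;B&quot;, &quot;score&quot;: max(features)}

@app.route(&quot;/predict&quot;, methods=[&quot;POST&quot;])
def predict():
    features = request.get_json()[&quot;features&quot;]
    # Route roughly 10% of traffic to the candidate model for A/B testing.
    model = model_b if random.random() &amp;#x3C; 0.1 else model_a
    return jsonify(model(features))

if __name__ == &quot;__main__&quot;:
    app.run(port=8080)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Swapping or updating a model then amounts to redeploying one small container rather than rebuilding a monolithic application.&lt;/p&gt;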
&lt;h2&gt;In Conclusion&lt;/h2&gt;
&lt;p&gt;Data science workflows benefit from containerization in every phase of the pipeline, from exploration and training to deploying models into production.&lt;/p&gt;
&lt;p&gt;Checkout the next iteration of this series, &lt;a href=&quot;/blog/Oj0pNxBE3JsJB02E2KOj/kubernetized-machine-learning-and-ai-using-kubeflow&quot;&gt;Kubernetized Machine Learning and AI Using KubeFlow&lt;/a&gt;, where we&apos;ll dive deep into a new Kubernetes framework that supports end-to-end machine learning and data science.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Containers, Kubernetes, and MapR: The Time is Now]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may…]]></description><link>https://developer.hpe.com/containers-kubernetes-and-mapr-the-time-is-now/</link><guid isPermaLink="false">https://developer.hpe.com/containers-kubernetes-and-mapr-the-time-is-now/</guid><pubDate>Fri, 05 Feb 2021 06:41:32 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Suzy Visvanathan&quot;,
&quot;publish&quot;: &quot;2018-03-06T10:45:00.000&quot;,
&quot;tags&quot;: &quot;mapr-platform&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;If you have been following the world of containers, you already know that Kubernetes has won the container orchestration war. Docker was instrumental in commercializing the use of containers as a method to deploy applications. However, Docker and Kubernetes only address a minor percentage of total applications. They struggle to support stateful applications that require persisted data. With the announcement today, MapR changes all that by enabling Kubernetes to support the containerization of all applications.&lt;/p&gt;
&lt;p&gt;Containers allow an application to be packaged with a minimal set of its dependencies, keeping it lightweight and ensuring that the application remains independent of the environment, infrastructure, and configuration. Containers are stateless by nature, meaning that they retain knowledge of data only during their lifecycle. Containers quickly became a developer&apos;s preferred platform, but it wasn&apos;t long before customers started thinking about deploying applications in production.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/kubernetes-list-1612507221998.png&quot; alt=&quot;Kubernetes List&quot;&gt;&lt;/p&gt;
&lt;p&gt;Containers in production bring forth a completely different set of requirements than containers in development. The biggest set of requirements is around data. The stateless nature of containers runs into significant challenges when relying on data that needs to be persisted across sessions or shared across applications.&lt;/p&gt;
&lt;p&gt;Customers have various issues to consider, resulting in a steep learning curve. The checklist will vary from customer to customer, due in part to the complexity and disparity in environments. Docker, which has become synonymous with containers, even though containers have been in use for a long time, does not necessarily offer solutions for all of these aspects.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/standard-1612507235714.png&quot; alt=&quot;Management de facto standard&quot;&gt;&lt;/p&gt;
&lt;p&gt;If you consider management and orchestration, many solutions quickly cropped up to fill the gap of container management. Mesosphere Marathon, Google Kubernetes, Docker Swarm, OpenStack Magnum, and VMware Photon were all fighting for a piece of the container management pie. Initially, it appeared that Mesos would come out on top, but it has been quickly overtaken by Kubernetes, as is evident in the survey done by CNCF.&lt;/p&gt;
&lt;p&gt;Kubernetes is fast becoming the container orchestration and management de facto standard. A quick primer on Kubernetes summarizes its benefits and many of the reasons for its growth in popularity.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Offers intelligent and balanced scheduling of containers&lt;/li&gt;
&lt;li&gt;Automates creation, deletion, and movement of containers&lt;/li&gt;
&lt;li&gt;Assists with easy scaling of containers&lt;/li&gt;
&lt;li&gt;Offers monitoring and self-healing abilities&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Leading public cloud vendors enable many aspects of services on Kubernetes. Kubernetes itself only manages and orchestrates containers but does not organically offer all of the aspects required for an organization to successfully deploy in production. The MapR integration with Kubernetes can assist customers through their journey of deploying containers in production.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/mapr-cdp-1612507248730.png&quot; alt=&quot;MapR CDP&quot;&gt;&lt;/p&gt;
&lt;p&gt;The MapR Converged Data Platform (MCDP) provides an ideal platform for containerized applications. MCDP is built on the foundation of scale, high availability, and versatility in hosting different types of applications across different environments. The distinctive aspect of the MapR Platform lies in how it combines a distributed data store that offers enterprise storage features to support all applications, including those running in containers, with a full-fledged, robust, big data analytics ecosystem. MapR addresses the aspects required for production deployments, starting with high availability and policy-driven data placement, and extending all the way to a sound disaster recovery strategy. These are fundamental requirements that a data fabric must address. MapR customers build such data fabrics to capture, store, process, and analyze petabytes of data.&lt;/p&gt;
&lt;p&gt;For Kubernetes-based container deployments, MapR brings its data fabric, which abstracts the underlying storage capacity and exposes it as persistent storage volumes to Kubernetes. Customers can now run applications in containers managed by Kubernetes and take advantage of the benefits of the MapR platform.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/versatility-1612507262825.png&quot; alt=&quot;Versatility&quot;&gt;&lt;/p&gt;
&lt;p&gt;MapR achieves this versatility by starting with a few basics:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Vast scalability&lt;/li&gt;
&lt;li&gt;Global namespace&lt;/li&gt;
&lt;li&gt;Support for diverse data&lt;/li&gt;
&lt;li&gt;Native multi-temperature management&lt;/li&gt;
&lt;li&gt;Cloud-grade reliability&lt;/li&gt;
&lt;li&gt;Multi-tenancy and security&lt;/li&gt;
&lt;li&gt;Naturally analytics-ready&lt;/li&gt;
&lt;li&gt;Built-in operational application capability&lt;/li&gt;
&lt;li&gt;Reliable global pub/sub stream transport&lt;/li&gt;
&lt;li&gt;Capable of spanning from edge to edge&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Armed with these strong capabilities, MapR easily and naturally expands to varying use cases, applications, and organizations.&lt;/p&gt;
&lt;p&gt;With Kubernetes and containers starting to play a major role in many of these use cases, and with MapR&apos;s distinct complementary features already baked into its platform, it would be remiss of organizations not to take advantage of such a powerful combined solution.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/mapr-data-fabric-kubernetes-1612507274729.png&quot; alt=&quot;MapR Data Fabric for Kubernetes&quot;&gt;&lt;/p&gt;
&lt;p&gt;This unique position of MapR Data Fabric for Kubernetes can assist with an enterprise organization&apos;s journey with containers and deliver a powerful impact on day-to-day business that will have a long-standing benefit. The time is now to take advantage of MapR Data Fabric for Kubernetes.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Kafka REST Proxy - Performance Tuning for MapR Event Store]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may…]]></description><link>https://developer.hpe.com/kafka-rest-proxy-performance-tuning-for-mapr-event-store/</link><guid isPermaLink="false">https://developer.hpe.com/kafka-rest-proxy-performance-tuning-for-mapr-event-store/</guid><pubDate>Fri, 05 Feb 2021 06:32:34 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Mathieu Dumoulin&quot;,
&quot;publish&quot;: &quot;2017-04-04T12:00:00.000Z&quot;,
&quot;tags&quot;: &quot;nosql&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;MapR Event Store is a “Kafka-esque” message streaming system which, similarly to Apache Kafka, provides very high throughput performance combined with low message latency and high reliability. Unique to MapR Event Store, however, is a broker-less design that vastly simplifies configuration and increases reliability, in addition to providing replication capabilities that enable some pretty cool use cases.&lt;/p&gt;
&lt;p&gt;With MEP 2.0, the MapR Data Platform adds a Kafka REST Proxy server. This upgrade opens MapR Event Store to any language that supports REST API calls over HTTP, which is to say, virtually all modern languages. For example, Python and the requests module work really well.&lt;/p&gt;
&lt;p&gt;But is the Kafka REST Proxy able to access the tremendous performance potential of MapR Event Store at the same level as its primary Java API?&lt;/p&gt;
&lt;p&gt;In this post, I’d like to go over a few performance objectives and provide some guidance to help data engineers get the most out of this very useful technology.&lt;/p&gt;
&lt;h2&gt;The default case&lt;/h2&gt;
&lt;p&gt;We should start with some good news. MapR Event Store is very fast and is shipped by default with settings that should provide enough performance for most applications.&lt;/p&gt;
&lt;h2&gt;Fix very high latency for single API call (with CURL)&lt;/h2&gt;
&lt;p&gt;You have a shiny new MapR 5.2 cluster installed with all the bells and whistles. Everything works great, and you get around to wanting to give MapR Event Store a try. With the REST Proxy, this is a piece of cake.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;curl -X POST -H &quot;Content-Type:application/vnd.kafka.json.v1+json&quot; --data &apos;{&quot;records&quot;:[{&quot;value&quot;:{&quot;foo&quot;:&quot;bar&quot;}}]}&apos; &quot;http://demo1:8082/topics/%2Fstreams%2Ftest%3Atopic1&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And the response takes about 3 seconds to come back. This very high latency is because of the default streams buffer time value of 3000ms.&lt;/p&gt;
&lt;p&gt;To fix, add the following to the kafka-rest.properties file (in /opt/mapr/kafka-rest/kafka-rest-&lt;version&gt;/config):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;consumer.request.timeout.ms=125
streams.buffer.max.time.ms=125
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Reference: &lt;a href=&quot;https://docs.datafabric.hpe.com/62/Kafka/REST-config-parameters.html&quot;&gt;https://docs.datafabric.hpe.com/62/Kafka/REST-config-parameters.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Beware of high CPU if the timeout is very low&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Lowering the value of this property seems to correlate to much higher CPU utilization. When the value is 0, one or two of my cores get pegged to 100%. Above about 125ms, the impact to CPU utilization isn’t noticeable, at least to something like top.&lt;/p&gt;
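&lt;p&gt;If you want to check the effect of the timeout change programmatically rather than eyeballing curl output, here is a minimal sketch (reusing the example endpoint above) that measures the round-trip latency of a single produce call:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import time

import requests

URL = &quot;http://demo1:8082/topics/%2Fstreams%2Ftest%3Atopic1&quot;
HEADERS = {&quot;Content-Type&quot;: &quot;application/vnd.kafka.json.v1+json&quot;}
PAYLOAD = {&quot;records&quot;: [{&quot;value&quot;: {&quot;foo&quot;: &quot;bar&quot;}}]}

start = time.perf_counter()
response = requests.post(URL, headers=HEADERS, json=PAYLOAD)
elapsed_ms = (time.perf_counter() - start) * 1000

print(response.status_code, f&quot;{elapsed_ms:.0f} ms&quot;)
&lt;/code&gt;&lt;/pre&gt;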
&lt;h2&gt;About the URL for the topic&lt;/h2&gt;
&lt;p&gt;The &lt;code&gt;/%2Fstreams%2Ftest%3Atopic1&lt;/code&gt; part of the URL is there because a MapR Event Store topic includes a stream path and a topic name (i.e. /path/to/stream:topic), and that needs to be URL encoded or else it won&apos;t work.&lt;/p&gt;
&lt;p&gt;It’s possible to avoid this by setting a default stream, adding the following property to kafka-rest.properties:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;streams.default.streams=/streams/test
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In that case, the above example URL would simplify to &lt;code&gt;“http://demo1:8082/topics/topic1”.&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Reference: &lt;a href=&quot;https://docs.datafabric.hpe.com/62/Kafka/REST-get-topic-metadata.html&quot;&gt;https://docs.datafabric.hpe.com/62/Kafka/REST-get-topic-metadata.html&lt;/a&gt;&lt;/p&gt;
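&lt;p&gt;If you build the URL from code, the standard library can do the encoding for you. This is a minimal sketch (the stream path and topic are the same example values used above):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from urllib.parse import quote

# Percent-encode the /stream:topic path so the &quot;/&quot; and &quot;:&quot; characters survive in the URL.
stream_topic = &quot;/streams/test:topic1&quot;
encoded = quote(stream_topic, safe=&quot;&quot;)   # -&gt; &quot;%2Fstreams%2Ftest%3Atopic1&quot;

url = f&quot;http://demo1:8082/topics/{encoded}&quot;
print(url)
&lt;/code&gt;&lt;/pre&gt;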
&lt;h2&gt;Increase Throughput Performance&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Number of topics and partitions&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;MapR Event Store is fast by default and handles a lot, albeit not everything, automatically. Some performance tuning comes down to design considerations and isn&apos;t something the streams messaging system can do for you at all.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Partitions &gt; topics&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Pros&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Throughput should be good, and data spread out evenly across the cluster&lt;/li&gt;
&lt;li&gt;Easier to create and use, less moving parts&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Cons&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Finding data specific to a particular object/event type/location will require scanning through more data, which will be slower.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Topics &gt;&gt; partitions&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Pros&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;It’s very efficient to get data from a specific object/event type/location if they are all stored in their own stream.&lt;/li&gt;
&lt;li&gt;A very high number of streams (hundreds of thousands or even millions) will naturally spread across the cluster and will be distributed well across all of its nodes.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Cons&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The consumer needs to specify a regex pattern to pick all (or a group of) data. This may come at a performance penalty compared to a single topic with many partitions.&lt;/li&gt;
&lt;li&gt;Stream split is a relatively heavy operation, and it could trigger high load as new topics are created after the initial creation of topics is done.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Of course, one could also decide to use an intermediate solution, in which there are lots of topics and each topic has some number of partitions. The way to decide is to consider how the application is going to be used and where flexibility is needed. In any case, the default number of partitions for new topics is one, so that’s something to change for sure.&lt;/p&gt;
&lt;p&gt;How to create streams with a custom number of partitions:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;stream create
	 -path Stream Path
	[ -ttl Time to live in seconds. default:604800 ]
	[ -autocreate Auto create topics. default:true ]
	[ -defaultpartitions Default partitions per topic. default:1 ]

$&gt; maprcli stream create -path /streams/test -defaultpartitions 10
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As a rule of thumb, try to keep about 10 partitions per node per topic.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Session keep-alive and record arrays&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;To get the highest throughput, it’s going to be important to reduce overhead to maximize the CPU/network resources that do useful work moving your bits around. Here are some findings from recent engagements with customers using MapR Event Store in pilot and production projects:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Use an array of records as payload&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Instead of producing a single record on each API call, push an array of records.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Bad:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;{&quot;value&quot;:{&quot;foo&quot;:&quot;bar&quot;}}

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Good:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;{&quot;records&quot;:[ {&quot;value&quot;:{&quot;foo1&quot;:&quot;bar1&quot;}},{&quot;value&quot;:{&quot;foo2&quot;:&quot;bar2&quot;}} ,… ]}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Getting the best performance will require some experimentation to find the balance between how frequently to make calls vs. how many records to pack into each call.&lt;/p&gt;
&lt;p&gt;Our own experience shows that the Proxy can handle as much as 280MB/s on very large (100-200KB) message sizes. Internal tests demonstrate that modest 5-node AWS clusters are able to handle millions of small (1-200B) messages per second.&lt;/p&gt;
&lt;p&gt;There is no substitute for experimentation, given variability of data set, throughput, and cluster hardware resources as well as the business requirements of a specific use case.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reuse a session to push data into the REST Proxy&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;We’ve found significant gains from switching from single, isolated POST calls to multiple calls within the same session.&lt;/p&gt;
&lt;p&gt;Here is an example with Python and the excellent requests module:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Bad:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def produce(payload):  
    headers = {&apos;Content-Type&apos;:&apos;application/vnd.kafka.binary.v1+json&apos;}
    r = requests.post(&apos;http://gw1:8082/topics/test&apos;, headers=headers, json=payload)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Good:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def send_messages(url, payload):
    session = requests.Session()
    headers = {&apos;Content-Type&apos;:&apos;application/vnd.kafka.binary.v1+json&apos;}
    while not is_done:  # is_done is assumed to be set elsewhere when production should stop
        response = session.post(url, headers=headers, data=payload)
&lt;/code&gt;&lt;/pre&gt;
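&lt;p&gt;Putting the two recommendations together, a producer might batch records into arrays and push them through one long-lived session. This is a rough sketch only (the endpoint and batch size are hypothetical and should be tuned experimentally, as discussed above):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import requests

URL = &quot;http://gw1:8082/topics/topic1&quot;
HEADERS = {&quot;Content-Type&quot;: &quot;application/vnd.kafka.json.v1+json&quot;}
BATCH_SIZE = 500  # experiment with call frequency vs. records per call

def produce_batched(messages):
    session = requests.Session()  # reuse one HTTP session for every call
    for i in range(0, len(messages), BATCH_SIZE):
        batch = messages[i:i + BATCH_SIZE]
        payload = {&quot;records&quot;: [{&quot;value&quot;: m} for m in batch]}
        session.post(URL, headers=HEADERS, json=payload).raise_for_status()

produce_batched([{&quot;foo&quot;: f&quot;bar{i}&quot;} for i in range(10000)])
&lt;/code&gt;&lt;/pre&gt;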
&lt;p&gt;&lt;strong&gt;Tuning the embedded Jetty server&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;One of the resources that limits the throughput of the Kafka REST Proxy is CPU. It turns out that the Proxy runs the Jetty 9 server in embedded mode, and it is possible to do some tuning at that level.&lt;/p&gt;
&lt;p&gt;There is a good article about tuning the operating system (on both the load generator and the server), the load generators themselves, and Jetty for high load. We cannot tune Jetty itself, as it&apos;s embedded, but have a look at the following link. You can certainly tune the following settings for high load:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;TCP buffer sizes&lt;/li&gt;
&lt;li&gt;Queue sizes for connection listening queue&lt;/li&gt;
&lt;li&gt;Port range on the load generator side, so it won’t starve on ports during high load&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Reference: &lt;a href=&quot;http://wiki.eclipse.org/Jetty/Howto/High_Load&quot;&gt;http://wiki.eclipse.org/Jetty/Howto/High_Load&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;How to increase the memory buffer&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;It is possible to tune the “buffer.memory” parameter. Its default value is 32m. However, this setting cannot exceed the total memory that the producer is going to use. At the end of the day, kafka-rest is a JVM process.&lt;/p&gt;
&lt;p&gt;Without changing any parameters, the Kafka REST API uses 256m of memory at most. Therefore, the “buffer.memory” parameter cannot exceed this value. How come 256m? See the kafka-rest-run-class script (in /opt/mapr/kafka-rest/kafka-rest-&lt;version&gt;/bin). It says the following:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Memory options
if [ -z &quot;$KAFKAREST_HEAP_OPTS&quot; ]; then
  KAFKAREST_HEAP_OPTS=&quot;-Xmx256M&quot;
fi
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;So, if you want to increase “&lt;code&gt;buffer.memory&lt;/code&gt;” beyond 256m, provide the KAFKAREST_HEAP_OPTS value accordingly.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Waste-of-time parameters&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The producer throughput of a single Kafka REST Proxy doesn&apos;t scale by increasing the “producer.threads” parameter. We tried to set it to 20, 50, 500, and even 10,000, but there were no visible performance differences.&lt;/p&gt;
&lt;p&gt;According to &lt;a href=&quot;https://github.com/confluentinc/kafka-rest/issues/181&quot;&gt;https://github.com/confluentinc/kafka-rest/issues/181&lt;/a&gt;, it is not used in Kafka REST code, and the Kafka REST Proxy that runs on MapR is largely identical to the Confluent implementation, only with the libraries changed to MapR libraries. Our implementation shares this known issue for now.&lt;/p&gt;
&lt;h2&gt;Cluster Architecture&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Run the Proxy on dedicated server(s)&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;A great way to ensure optimal performance for performance-critical use cases is to use one or more dedicated servers for the Kafka REST Proxy. Instead of installing it on a shared cluster node, you can install the MapR Client on a separate server and install the REST Proxy there.&lt;/p&gt;
&lt;p&gt;To boost performance further, add additional servers and put them behind a load balancer. From the Client to the cluster, ensure that the network connectivity is as fast as can be afforded, since MapR will take advantage of all the network interfaces on the node automatically.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/picture1-1612507057820.png&quot; alt=&quot;Kafka REST Proxy&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Run two or more Proxy processes on a dedicated node&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;This can be done by running the other server on a different port (e.g. 8083 instead of the default 8082). Given a server with enough physical cores, such as a two-socket design, this strategy can further increase the throughput.&lt;/p&gt;
&lt;p&gt;Note that running two proxy processes on a single server will not scale throughput linearly. Our testing, in one instance, showed throughput increasing from 1,580 msg/s to 2,660 msg/s, good for close to a 70% increase.&lt;/p&gt;
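&lt;p&gt;Instead of (or in addition to) a load balancer, a client can also spread its own requests across the proxy processes. The following sketch is illustrative only and assumes two proxies on ports 8082 and 8083 of a host named gw1:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from itertools import cycle

import requests

# Hypothetical endpoints: the default proxy port plus a second proxy on 8083.
PROXIES = cycle([
    &quot;http://gw1:8082/topics/topic1&quot;,
    &quot;http://gw1:8083/topics/topic1&quot;,
])
HEADERS = {&quot;Content-Type&quot;: &quot;application/vnd.kafka.json.v1+json&quot;}

session = requests.Session()

def produce(payload):
    url = next(PROXIES)  # alternate between the two proxy processes
    return session.post(url, headers=HEADERS, json=payload)

produce({&quot;records&quot;: [{&quot;value&quot;: {&quot;foo&quot;: &quot;bar&quot;}}]})
&lt;/code&gt;&lt;/pre&gt;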
&lt;h2&gt;About message size&lt;/h2&gt;
&lt;p&gt;The performance characteristics of MapR Event Store and the Kafka REST Proxy change depending on the message size. Very small messages will be handled faster than very large messages. Your design should take this difference into consideration and favor smaller messages.&lt;/p&gt;
&lt;p&gt;Keep in mind that the largest message size that can be handled very efficiently is about 100KB. Larger messages will come at some cost in peak performance, with a maximum best practice size of 2MB. Smaller messages are super-efficiently handled, so those are always fine.&lt;/p&gt;
&lt;p&gt;Given the large sweet spot, we’d advise favoring development simplicity and not worrying about it too much until individual messages get over about 100KB in size.&lt;/p&gt;
&lt;h2&gt;Do&apos;s and Don&apos;ts&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;DO choose your performance targets based on the business needs and the use case.&lt;/li&gt;
&lt;li&gt;DO monitor the CPU, memory, and network load of the server running the Kafka REST Proxy.&lt;/li&gt;
&lt;li&gt;DO consider your design (cluster architecture, topics vs. partitions) before changing parameters.&lt;/li&gt;
&lt;li&gt;DO use a session if throughput is important.&lt;/li&gt;
&lt;li&gt;DO favor lots of smaller messages.&lt;/li&gt;
&lt;li&gt;DON&apos;T change default parameters without a clear performance goal (latency, throughput, lower CPU usage, etc.).&lt;/li&gt;
&lt;li&gt;DON’T create messages that are too large (2MB+).&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Some Additional Resources&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Script to measure throughput in MapR Event Store&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;#!/bin/bash

STREAM=&quot;/streams/stream1&quot;
TOPIC=&quot;test&quot;

function sum_of_offset {
  maprcli stream topic info -path $STREAM -topic $TOPIC -json | awk -F&apos;:|,&apos; &apos;/maxoffset/ {n+=$2} END {print n}&apos; 2&gt; /dev/null
}

function epoch_ms {
  date +%s%3N
}

date +%T,%3N

o=$(sum_of_offset); t=$(epoch_ms)

while true
do
  prev_o=$o; prev_t=$t
  o=$(sum_of_offset); t=$(epoch_ms)
  echo &quot;$(date +%T,%3N) $((($o - $prev_o)*1000/($t - $prev_t))) msg/s&quot;
done
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;em&gt;Thank you to Vince Gonzalez, Akihiko Kusanagi, Ted Dunning and Muthu Lalapet for their contributions to this blog post.&lt;/em&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[A Functional Approach to Logging in Apache Spark]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may…]]></description><link>https://developer.hpe.com/a-functional-approach-to-logging-in-apache-spark/</link><guid isPermaLink="false">https://developer.hpe.com/a-functional-approach-to-logging-in-apache-spark/</guid><pubDate>Fri, 05 Feb 2021 05:32:01 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Nicolas A Perez&quot;,
&quot;publish&quot;: &quot;2016-07-28T07:00:00.000Z&quot;,
&quot;tags&quot;: &quot;apache-spark&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;h2&gt;A Functional Approach to Logging in Apache Spark&lt;/h2&gt;
&lt;p&gt;Logging in Apache Spark is very easy to do, since Spark offers access to a log object out of the box; only some configuration setup needs to be done. In a &lt;a href=&quot;/blog/0NBjLpX5VAF3JKoDEqOo/how-to-log-in-apache-spark&quot;&gt;previous post&lt;/a&gt;, we looked at how to do this while identifying some problems that may arise. However, the solution presented might cause some problems when you are ready to collect the logs, since they are distributed across the entire cluster. Even if you utilize YARN log aggregation capabilities, there will be some contention that might affect performance, or you could end up with log interleaves that corrupt the nature of the log itself.&lt;/p&gt;
&lt;p&gt;In this blog post, I will demonstrate how to solve these problems by taking a different, more functional approach.&lt;/p&gt;
&lt;h2&gt;The Monad Writer&lt;/h2&gt;
&lt;p&gt;I do not intend to go over the details about monads or the Monad Writer, so if you would like to learn more, please read “&lt;a target=&apos;\_blank&apos;  href=&apos;https://adit.io/posts/2013-04-17-functors,_applicatives,_and_monads_in_pictures.html&apos;&gt;Functors, Applicatives, And Monads In Pictures&lt;/a&gt;” which is very informative about this topic.&lt;/p&gt;
&lt;p&gt;Just to put things in context, let’s say that the monad writer (&lt;em&gt;writer&lt;/em&gt;) is a container that holds the current value of a computation in addition to the history (log) of the value (set of transformation on the value).&lt;/p&gt;
&lt;p&gt;Because the &lt;em&gt;writer&lt;/em&gt; has monadic properties, it allows us to do functional transformations, and we will soon see how everything sticks together.&lt;/p&gt;
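&lt;p&gt;If monads are unfamiliar, the idea can be sketched in a few lines of plain Python (the post itself uses Scala and Cats; this is just an analogy): every step returns a pair of (value, log lines), and composing steps concatenates the logs alongside the values.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Each &quot;writer&quot; step returns (value, log_lines).
def load(path):
    return list(range(5)), [f&quot;loading data from {path}&quot;]

def double(values):
    return [v * 2 for v in values], [&quot;doubling every value&quot;]

def run(path):
    data, log = load(path)
    data, step_log = double(data)
    return data, log + step_log  # the log travels along with the value

result, log = run(&quot;/tmp/numbers&quot;)
print(log)     # [&apos;loading data from /tmp/numbers&apos;, &apos;doubling every value&apos;]
print(result)  # [0, 2, 4, 6, 8]
&lt;/code&gt;&lt;/pre&gt;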
&lt;h2&gt;A Simplistic Log&lt;/h2&gt;
&lt;p&gt;The following code demonstrates a simplistic log.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.log4j.{Level, LogManager}
import org.apache.spark.{SparkConf, SparkContext}

object app {
  def main(args: Array[String]) {
    val log = LogManager.getRootLogger
    log.setLevel(Level.WARN)

    val conf = new SparkConf().setAppName(&quot;demo-app&quot;)
    val sc = new SparkContext(conf)

    log.warn(&quot;Hello demo&quot;)

    val data = sc.parallelize(1 to 100000)

    log.warn(&quot;I am done&quot;)
  }
}	

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The only thing to note is that logging is actually happening on the Spark driver, so we don’t have synchronization or contention problems. However, everything starts to get complicated once we start distributing our computations.&lt;/p&gt;
&lt;p&gt;The following code won’t work (read this &lt;a href=&quot;/blog/0NBjLpX5VAF3JKoDEqOo/how-to-log-in-apache-spark&quot;&gt;&lt;em&gt;previous post&lt;/em&gt;&lt;/a&gt; to know why)&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val log = LogManager.getRootLogger
val data = sc.parallelize(1 to 100000)

data.map { value =&gt; 
    log.info(value)
    value.toString
}

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;A solution to this was also presented in the &lt;a href=&quot;/blog/0NBjLpX5VAF3JKoDEqOo/how-to-log-in-apache-spark&quot;&gt;previous post&lt;/a&gt;, but it requires extra work to manage the logs.&lt;/p&gt;
&lt;p&gt;Once we start logging on each node of the cluster, we need to go to each node and collect each log file in order to make sense of whatever is in the logs. Hopefully, you are using some kind of tool to help you with this task, such as Splunk, Datalog, etc. However, you still need to know how to get those logs into your system.&lt;/p&gt;
&lt;h2&gt;Our Data Set&lt;/h2&gt;
&lt;p&gt;Our data set is a collection of the class “Person” that is going to be transformed while keeping a unified log of the operations on our data set.&lt;/p&gt;
&lt;p&gt;Let’s suppose we want our data set to get loaded, filter each person who is less than 20 years old, and finally, extract his/her name. This is a very silly example, but it will demonstrate how the logs are produced. You could replace these computations, but the idea of building a unified log will remain.&lt;/p&gt;
&lt;h2&gt;Getting the Writer&lt;/h2&gt;
&lt;p&gt;In order to use the &lt;a target=&apos;\_blank&apos;  href=&apos;https://typelevel.org/projects/&apos;&gt;TypeLevel / Cats&lt;/a&gt; library to import the monad writer, we add the following line to our build.sbt file.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;libraryDependencies += &quot;org.typelevel&quot; %% &quot;cats&quot; % &quot;0.6.1&quot;&lt;/code&gt;&lt;/p&gt;
&lt;h2&gt;Playing with our Data&lt;/h2&gt;
&lt;p&gt;Now, let’s define the transformations we are going to use. First, let’s load the data.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;def loadPeopleFrom(path: String)(implicit sc: SparkContext) = 
  s&quot;loading people from $path&quot; ~&gt; sc.textFile(path)
                                    .map(x =&gt; User(x.split(&quot;,&quot;)(0), x.split(&quot;,&quot;)(1).toInt))

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In here, the ~&gt; operation is defined via implicit conversions as follows:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;implicit class toW(s: String) {
  def ~&gt;[A](rdd: RDD[A]): Writer[List[String], RDD[A]] = Writer(s :: Nil, rdd)
}

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you look closely, our loading operation is not returning an RDD; in fact, it returns the monad writer that keeps track of the logs.&lt;/p&gt;
&lt;p&gt;Let’s define the filter that we want to apply over the collection of users.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;def filter(rdd: RDD[User])(f: User =&gt; Boolean) = &quot;filtering users&quot; ~&gt; rdd.filter(f)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Again, we are applying the same function (~&gt;) to keep track of this transformation.&lt;/p&gt;
&lt;p&gt;Lastly, we define the mapping, which follows the same pattern we just saw.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;def mapUsers(rdd: RDD[User])(prefix: String): Writer[List[String], RDD[String]] = 
  &quot;mapping users&quot; ~&gt; rdd.map(p =&gt; prefix + p.name)

&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Putting it together&lt;/h2&gt;
&lt;p&gt;So far we have only defined our transformations, but we need to stick them together. Scala&apos;s for comprehension is a very convenient way to work with monadic structures. Let&apos;s see how.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val result = for {
  person          &amp;#x3C;- loadPeopleFrom(&quot;~/users_dataset/&quot;)(sc)
  filtered        &amp;#x3C;- filter(person)(_.age &amp;#x3C; 20)
  namesWithPrefix &amp;#x3C;- mapUsers(filtered)(&quot;hello&quot;)
} yield namesWithPrefix

val (log, rdd) = result.run 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Please note that the result is of the type:&lt;code&gt;Writer[List[String], RDD[String]]&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Calling result.run will give us the &lt;code&gt;log: List[String]&lt;/code&gt; and the final computation is expressed by &lt;code&gt;rdd: RDD[String]&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;At this point, we could use Spark logger to write down the log generated by the chain of transformations. Note that this operation will be executed on the Spark master, which implies that one log file will be created that contains all of the log information. We are also removing potential contention problems during the log writes. In addition, we are not locking the log file, which avoids performance issues by creating and writing to the file in a serial way.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;In this blog post, I’ve shown you how to improve logging in Apache Spark by using the Monad Writer. This functional approach allows you to distribute the creation of logs along with your computations, which is something that Spark does very well. However, instead of writing the logs on each worker node, you collect them back on the master to write them down.&lt;/p&gt;
&lt;p&gt;This mechanism has certain advantages over the previous implementation. You can now control exactly how and when your logs are going to be written down, you can boost performance by removing IO operations on the worker nodes, you can remove synchronization issues by writing the logs in a serial way, and you can avoid the hassle of fishing for logs across your entire cluster.&lt;/p&gt;
&lt;p&gt;This post was originally published &lt;a target=&apos;\_blank&apos;  href=&apos;https://medium.com/hackernoon/how-to-log-in-apache-spark-a-functional-approach-e48ffbbd935b#.87l91o1r3&apos;&gt;here.&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[How to Use Secondary Indexes in Spark With Open JSON Application Interface (OJAI)]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may…]]></description><link>https://developer.hpe.com/how-to-use-secondary-indexes-in-spark-with-open-json-application-interfa/</link><guid isPermaLink="false">https://developer.hpe.com/how-to-use-secondary-indexes-in-spark-with-open-json-application-interfa/</guid><pubDate>Fri, 05 Feb 2021 05:25:49 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: [&quot;Ranjit Lingaiah&quot;],
&quot;publish&quot;: &quot;2019-02-12T07:00:00.000Z&quot;,
&quot;tags&quot;: &quot;apache-spark&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Starting with MapR 6.0, MapR Database supports secondary indexes on fields in JSON tables.  &lt;/p&gt;
&lt;p&gt;Indexes provide flexibility to access data stored in MapR Database JSON. Secondary indexes provide efficient access to a wide range of queries on MapR Database JSON tables.&lt;/p&gt;
&lt;p&gt;By default, there is only one index on the &lt;code&gt;_id&lt;/code&gt; column; if applications query any other column, it can result in a full table scan to extract data from the underlying JSON tables. Secondary indexes solve this limitation by reducing the number of documents that applications would have to read from large tables. These indexes can be used with the OJAI API, the MapR Database JSON REST API, or MapR Drill, but not directly from Spark.&lt;/p&gt;
&lt;p&gt;In this blog post, we will look into how Spark can use OJAI API to leverage secondary indexes.&lt;/p&gt;
&lt;h2&gt;How to Use Secondary Indexes in Spark?&lt;/h2&gt;
&lt;p&gt;OJAI is the API for interfacing with MapR Database JSON. Most applications built with OJAI use it for filtering and sorting, which leverages secondary indexes to improve query response times. Here we will see how we can use this API in Spark to leverage secondary indexes.&lt;/p&gt;
&lt;h2&gt;Here are the steps:&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;Create a JSON table (user-info) with some JSON documents. One of the fields from this table will be used to look up fields from another table. The sample program, below, ingests the data into the table.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;// Create JSON table
$ maprcli table create -path /tmp/user-info -tabletype json

// After the data is ingested into the table
$ echo &quot;find /tmp/user-info&quot; | mapr dbshell
====================================================
*                  MapR Database Shell                   *
* NOTE: This is a shell for JSON table operations. *
====================================================
Version: 6.0.1-mapr

MapR Database Shell
maprdb mapr:&gt; find /tmp/user-info
{&quot;_id&quot;:&quot;101&quot;,&quot;address&quot;:{&quot;Pin&quot;:{&quot;$numberLong&quot;:95985},&quot;city&quot;:&quot;sunnyvale&quot;,&quot;street&quot;:&quot;35 town way&quot;},&quot;dob&quot;:{&quot;$dateDay&quot;:&quot;1987-05-04&quot;},&quot;interests&quot;:[&quot;squash&quot;,&quot;comics&quot;,&quot;movies&quot;]}
{&quot;_id&quot;:&quot;102&quot;,&quot;address&quot;:{&quot;Pin&quot;:{&quot;$numberLong&quot;:95652},&quot;city&quot;:&quot;san jose&quot;,&quot;street&quot;:&quot;305 city way&quot;},&quot;dob&quot;:{&quot;$dateDay&quot;:&quot;1976-01-09&quot;},&quot;interests&quot;:[&quot;cricket&quot;,&quot;sketching&quot;]}
2 document(s) found.
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Create another JSON table (data-table) with JSON documents, with one of the fields (uid) in this table matching the field in the table created in step #1, and ingest sample data.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;// Create JSON table
$ maprcli table create -path /tmp/data-table -tabletype json

//Sample data for querying
$ cat data-type.json
{&quot;_id&quot;:&quot;1&quot;,&quot;uid&quot;:&quot;101&quot;,&quot;first_name&quot;:&quot;tom&quot;}
{&quot;_id&quot;:&quot;2&quot;,&quot;uid&quot;:&quot;102&quot;,&quot;first_name&quot;:&quot;john&quot;}
{&quot;_id&quot;:&quot;3&quot;,&quot;uid&quot;:&quot;103&quot;,&quot;first_name&quot;:&quot;sam&quot;}
{&quot;_id&quot;:&quot;4&quot;,&quot;uid&quot;:&quot;104&quot;,&quot;first_name&quot;:&quot;thomas&quot;}
{&quot;_id&quot;:&quot;5&quot;,&quot;uid&quot;:&quot;105&quot;,&quot;first_name&quot;:&quot;david&quot;}
{&quot;_id&quot;:&quot;6&quot;,&quot;uid&quot;:&quot;106&quot;,&quot;first_name&quot;:&quot;robert&quot;}
{&quot;_id&quot;:&quot;7&quot;,&quot;uid&quot;:&quot;107&quot;,&quot;first_name&quot;:&quot;william&quot;}
{&quot;_id&quot;:&quot;8&quot;,&quot;uid&quot;:&quot;108&quot;,&quot;first_name&quot;:&quot;michael&quot;}
{&quot;_id&quot;:&quot;9&quot;,&quot;uid&quot;:&quot;109&quot;,&quot;first_name&quot;:&quot;bill&quot;}
{&quot;_id&quot;:&quot;10&quot;,&quot;uid&quot;:&quot;110&quot;,&quot;first_name&quot;:&quot;jarred&quot;}

// Put the sample data in hdfs
$ hadoop fs -put data-table.json /tmp

// Ingest the sample data into the JSON table
$ mapr importJSON -idfield &apos;_id&apos; -mapreduce true -src /tmp/data-table.json -dst /tmp/data-table
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Create a secondary index on data-table field uid.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ maprcli table index add -path /tmp/data-table -index uid_idx -indexedfields uid
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://docs.datafabric.hpe.com/60/MapR-DB/Indexes/ExaminingOJAIQueryPlan.html&quot;&gt;Enable log tracing&lt;/a&gt; by setting the property &quot;log4j.logger.com.mapr.ojai.store.impl=TRACE, stdout&quot; in &lt;code&gt;log4j.properties&lt;/code&gt;, located in the &lt;code&gt;/opt/mapr/conf&lt;/code&gt; directory. This step is optional; it is used to see whether the OJAI query plan used the secondary index. This is not recommended for production clusters, as it will generate a lot of log data.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The complete sample program is listed below. In this sample program, the &lt;code&gt;getDocuments()&lt;/code&gt; method invokes the OJAI API to leverage the secondary index and returns an RDD.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;/* Copyright (c) 2009 &amp;#x26; onwards. MapR Tech, Inc., All rights reserved */

package com.mapr.demo.spark.ojai.secondaryindex

import com.fasterxml.jackson.annotation.{JsonIgnoreProperties, JsonProperty}
import com.mapr.db.spark.impl.OJAIDocument
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SparkSession
import org.ojai.types.ODate
import com.mapr.db.spark.{field, _}
import org.ojai.store.DriverManager
import scala.collection.mutable.ListBuffer

object SparkOjaiApplication {

  val userInfo = &quot;/tmp/user-info&quot;
  val dataTable = &quot;/tmp/data-table&quot;

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName(&quot;Spark-OJAI Secondary Index Application&quot;).master(&quot;local[*]&quot;).getOrCreate()
    val allUsers = spark.sparkContext.parallelize(getUsers())
    val sc = spark.sparkContext

    //Save users to JSON table
    allUsers.saveToMapRDB(userInfo, createTable = false)

    //Load all the people from the JSON table
    val allUsersInfo = sc.loadFromMapRDB(userInfo)

    //Extract JSON documents using secondary index
    val documentsRDD = allUsersInfo.mapPartitions(getDocuments)

    // print a few documents
    documentsRDD.take(3).foreach(println(_))

    System.out.println(&quot;Number of documents extracted:&quot; + documentsRDD.count())
  }

  //Invokes the OJAI API to query JSON documents using the secondary index.
  def getDocuments(iterator: Iterator[OJAIDocument]): Iterator[String] = {
    val connection = DriverManager.getConnection(&quot;ojai:mapr:&quot;)
    val store = connection.getStore(dataTable)
    val dm  = ListBuffer[String]()

    iterator.foreach(r =&gt; {
      val qs = &quot;{\&quot;$eq\&quot;: {\&quot;uid\&quot;:\&quot;%s\&quot;}}&quot;.format(r.getDoc.getId.getString)
      System.out.println(&quot;Finding  documents for qs:&quot; + qs);
      val  query = connection.newQuery().select(&quot;_id&quot;)
        //This option is not required. OJAI client makes the determination to use secondary index.
        // Since the sample data set is small, I&apos;m enabling this option to use secondary index.
        .setOption(com.mapr.ojai.store.impl.OjaiOptions.OPTION_USE_INDEX, &quot;uid_idx&quot;)
        .where(qs).build()
      val iterator = store.find(query).iterator()
      if (iterator.hasNext) {
        dm += iterator.next().asJsonString()
      }
    })

    //Close the Document Store
    store.close()

    //Close the OJAI connection
    connection.close()

    dm.toIterator
  }

  // User documents. The _id field of users is used to query  the user in the data-table.
  def getUsers(): Array[Person] = {
    val users: Array[Person] = Array(
      Person(&quot;101&quot;, ODate.parse(&quot;1976-1-9&quot;), Seq(&quot;cricket&quot;, &quot;sketching&quot;), Map(&quot;city&quot; -&gt; &quot;san jose&quot;, &quot;street&quot; -&gt; &quot;305 city way&quot;, &quot;Pin&quot; -&gt; 95652)),
      Person(&quot;102&quot;, ODate.parse(&quot;1987-5-4&quot;), Seq(&quot;squash&quot;, &quot;comics&quot;, &quot;movies&quot;), Map(&quot;city&quot; -&gt; &quot;sunnyvale&quot;, &quot;street&quot; -&gt; &quot;35 town way&quot;, &quot;Pin&quot; -&gt; 95985))
    )
    users
  }
}

@JsonIgnoreProperties(ignoreUnknown = true)
case class Person (@JsonProperty(&quot;_id&quot;) id: String, @JsonProperty(&quot;dob&quot;) dob: ODate,
                   @JsonProperty(&quot;interests&quot;) interests: Seq[String], @JsonProperty(&quot;address&quot;) address: Map[String, Any])
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;6&quot;&gt;
&lt;li&gt;To build the sample program, clone the Git repo and use Maven to build the program.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ git clone https://github.com/ranjitreddy2013/spark-using-ojai-secondary-index-example
$ cd spark-using-ojai-secondary-index-example
$ mvn clean install
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;7&quot;&gt;
&lt;li&gt;To run, copy spark-ojai-secondaryindex-1.0-SNAPSHOT.jar from target folder to an edge node or cluster node and submit to the cluster using &lt;code&gt;spark-submit&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;/opt/mapr/spark/spark-2.2.1/bin/spark-submit --class com.mapr.demo.spark.ojai.secondaryindex.SparkOjaiApplication --master yarn --deploy-mode client --driver-java-options &quot;-Dlog4j.configuration=file:///opt/mapr/conf/log4j.properties&quot; --conf &quot;spark.yarn.executor.memoryOverhead=1G&quot;  --executor-memory 2G --num-executors 1 --executor-cores 1 /home/mapr/spark-ojai-secondaryindex-1.0-SNAPSHOT.jar
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;8&quot;&gt;
&lt;li&gt;The sample output is shown below. In addition to this output, there will be DEBUG and TRACE logs.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;Finding  documents for qs:{&quot;$eq&quot;: {&quot;uid&quot;:&quot;101&quot;}}
Finding  documents for qs:{&quot;$eq&quot;: {&quot;uid&quot;:&quot;102&quot;}}
{&quot;_id&quot;:&quot;1&quot;,&quot;first_name&quot;:&quot;tom&quot;,&quot;uid&quot;:&quot;101&quot;}
{&quot;_id&quot;:&quot;2&quot;,&quot;first_name&quot;:&quot;john&quot;,&quot;uid&quot;:&quot;102&quot;}

Number of documents extracted:2
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;9&quot;&gt;
&lt;li&gt;Check the logs to verify whether the secondary index was used by the OJAI query plan. Note the &lt;code&gt;indexName&lt;/code&gt; in the query plan.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;2019-01-11 10:49:35,876 TRACE com.mapr.ojai.store.impl.OjaiDocumentStore logQueryPlan Executor task launch worker for task 204: Ojai Query Plan: &apos;[{&quot;streamName&quot;:&quot;DBDocumentStream&quot;,&quot;parameters&quot;:{&quot;queryConditionPath&quot;:true,&quot;indexName&quot;:&quot;uid_idx&quot;,&quot;projectionPath&quot;:[&quot;_id&quot;],&quot;primaryTable&quot;:&quot;/tmp/data-table&quot;}},{&quot;streamName&quot;:&quot;RowkeyLookup&quot;,&quot;parameters&quot;:{&quot;condition&quot;:&quot;(uid = &quot;88d800cf-39c9-482f-856c-486090c3de2c&quot;)&quot;,&quot;primaryTable&quot;:&quot;/tmp/data-table&quot;}}]&apos;

2019-01-28 12:04:33,087 TRACE com.mapr.ojai.store.impl.OjaiDocumentStore logQueryPlan Executor task launch worker for task 201: Ojai Query Plan: &apos;[{&quot;streamName&quot;:&quot;DBDocumentStream&quot;,&quot;parameters&quot;:{&quot;queryConditionPath&quot;:true,&quot;indexName&quot;:&quot;uid_idx&quot;,&quot;projectionPath&quot;:[&quot;_id&quot;],&quot;primaryTable&quot;:&quot;/tmp/data-table&quot;}}]&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Conclusion:&lt;/h2&gt;
&lt;p&gt;In this blog post, you learned how to use OJAI secondary indexes from Spark.&lt;/p&gt;
&lt;h2&gt;Additional Resources:&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.datafabric.hpe.com/61/MapR-DB/JSON_DB/getting_started_json_ojai_build_java_app.html&quot;&gt;Developing OJAI Applications&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.datafabric.hpe.com/60/MapR-DB/Indexes/DeterminingSIUsage.html&quot;&gt;Determining Secondary Index Usage&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.datafabric.hpe.com/60/MapR-DB/Indexes/admin-adding-indexes.html&quot;&gt;Adding Secondary Indexes on JSON Tables&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Setting Up Spark Dynamic Allocation on MapR]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may…]]></description><link>https://developer.hpe.com/setting-up-spark-dynamic-allocation-on-mapr/</link><guid isPermaLink="false">https://developer.hpe.com/setting-up-spark-dynamic-allocation-on-mapr/</guid><pubDate>Fri, 05 Feb 2021 05:19:26 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Tugdual Grall&quot;,
&quot;publish&quot;: &quot;2016-11-03T07:00:00.000Z&quot;,
&quot;tags&quot;: &quot;apache-spark&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;Apache Spark can use various cluster managers to execute applications (Standalone, YARN, Apache Mesos). When you install Apache Spark on MapR, you can submit an application in Standalone mode or by using YARN.&lt;/p&gt;
&lt;p&gt;This blog post focuses on YARN and dynamic allocation, a feature that lets Spark add or remove executors dynamically based on the workload. You can find more information about this feature in this presentation from Databricks:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a target=&apos;\_blank&apos;  href=&apos;https://www.slideshare.net/databricks/dynamic-allocation-in-spark&apos;&gt;Dynamic Allocation in Spark&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Let’s see how to configure Spark and YARN to use dynamic allocation (which is disabled by default).&lt;/p&gt;
&lt;h2&gt;Prerequisites&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;MapR Data Platform cluster&lt;/li&gt;
&lt;li&gt;Apache Spark for MapR installed&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The example below is for MapR 5.2 with Apache Spark 1.6.1; you just need to adapt the version to your environment.&lt;/p&gt;
&lt;h2&gt;Enabling Dynamic Allocation in Apache Spark&lt;/h2&gt;
&lt;p&gt;The first thing to do is to enable dynamic allocation in Spark. To do this, you need to edit the Spark configuration file on each Spark node&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;/opt/mapr/spark/spark-1.6.1/conf/spark-defaults.conf
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;and add the following entries:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;spark.dynamicAllocation.enabled = true
spark.shuffle.service.enabled = true
spark.dynamicAllocation.minExecutors = 5 
spark.executor.instances = 0

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can find additional configuration options in the &lt;a target=&apos;\_blank&apos;  href=&apos;http://spark.apache.org/docs/1.6.1/configuration.html#dynamic-allocation&apos;&gt;Apache Spark Documentation&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Enabling Spark External Shuffle for YARN&lt;/h2&gt;
&lt;p&gt;Now you need to edit the YARN configuration to add information about Spark Shuffle Service. Edit the following file on each YARN node:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;/opt/mapr/hadoop/hadoop-2.7.0/etc/hadoop/yarn-site.xml

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;and add these properties:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;&amp;#x3C;property&gt;
    &amp;#x3C;name&gt;yarn.nodemanager.aux-services&amp;#x3C;/name&gt;
    &amp;#x3C;value&gt;mapreduce_shuffle,mapr_direct_shuffle,spark_shuffle&amp;#x3C;/value&gt;
&amp;#x3C;/property&gt;
&amp;#x3C;property&gt;
    &amp;#x3C;name&gt;yarn.nodemanager.aux-services.spark_shuffle.class&amp;#x3C;/name&gt;
    &amp;#x3C;value&gt;org.apache.spark.network.yarn.YarnShuffleService&amp;#x3C;/value&gt;
&amp;#x3C;/property&gt;

&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Add Spark Shuffle to YARN classpath&lt;/h2&gt;
&lt;p&gt;Spark Shuffle service must be added to the YARN classpath. The jar is located in the Spark distribution:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;/opt/mapr/spark/spark-1.6.1/lib/spark-1.6.1-mapr-1605-yarn-shuffle.jar
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To do this, add the jar in the following folder on each node:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;/opt/mapr/hadoop/hadoop-2.7.0/share/hadoop/yarn/lib

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can either copy the file or create a symlink:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ ln -s /opt/mapr/spark/spark-1.6.1/lib/spark-1.6.1-mapr-1605-yarn-shuffle.jar /opt/mapr/hadoop/hadoop-2.7.0/share/hadoop/yarn/lib

&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Restart YARN&lt;/h2&gt;
&lt;p&gt;Since you have changed the YARN configuration, you must restart your node managers using the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ maprcli node services -name nodemanager -action restart -nodes [list of nodes]

&lt;/code&gt;&lt;/pre&gt;
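&lt;p&gt;If you want to verify that the node managers came back up after the restart, one quick way (assuming you have maprcli access) is to list the services reported by each node:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# List the services currently running on each cluster node
$ maprcli node list -columns svc
&lt;/code&gt;&lt;/pre&gt;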
&lt;h2&gt;Submitting a Spark Job&lt;/h2&gt;
&lt;p&gt;Your MapR cluster is now ready to use Spark dynamic allocation. This means that when you submit a job, you do not need to specify any resource configuration. For example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;/opt/mapr/spark/spark-1.6.1/bin/spark-submit \
  --class com.mapr.demo.WordCountSorted \
  --master yarn \
  ~/spark-examples-1.0-SNAPSHOT.jar \
  /mapr/my.cluster.com/input/4gb_txt_file.txt \
  /mapr/my.cluster.com/user/mapr/output/

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note that you can still specify the resources, but in this case, the dynamic allocation will not be used for this specific job. For example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;/opt/mapr/spark/spark-1.6.1/bin/spark-submit \
  --class com.mapr.demo.WordCountSorted \
  --master yarn \
  --num-executors 3 \
  --executor-memory 1G \
  ~/spark-examples-1.0-SNAPSHOT.jar \
  /mapr/my.cluster.com/input/4gb_txt_file.txt \
  /mapr/my.cluster.com/user/mapr/output/

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In this blog post, you learned how to set up Spark dynamic allocation on MapR.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[The Challenges of Sharing GPUs and How to Solve Them]]></title><description><![CDATA[pinkish image 4 run ai post Editor’s Note – HPE Ezmeral Container Platform is now HPE Ezmeral Runtime Enterprise. For more information on…]]></description><link>https://developer.hpe.com/the-challenges-of-sharing-gpus-and-how-to-solve-them/</link><guid isPermaLink="false">https://developer.hpe.com/the-challenges-of-sharing-gpus-and-how-to-solve-them/</guid><pubDate>Wed, 03 Feb 2021 15:54:07 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/pinkish-image-4-run-ai-post-1612367610553.JPG&quot; alt=&quot;pinkish image 4 run ai post&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Editor’s Note – HPE Ezmeral Container Platform is now HPE Ezmeral Runtime Enterprise&lt;/strong&gt;. For more information on why the name was changed, please &lt;a href=&quot;https://community.hpe.com/t5/HPE-Ezmeral-Uncut/HPE-Ezmeral-Container-Platform-is-now-HPE-Ezmeral-Runtime/ba-p/7151720#.YW7nOxrMKM8&quot;&gt;click here&lt;/a&gt;.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Whether purchasing GPUs for on-premise machines or renting them in the cloud, GPUs for AI can be expensive. Organizations want to get the most out of their on-premise GPUs and use GPUs in the cloud as efficiently as possible. The HPE Ezmeral team and &lt;a href=&quot;http://www.run.ai/&quot;&gt;Run:AI&lt;/a&gt; recently worked together in a number of customer engagements to help researchers take better advantage of their GPU resources. This post offers some of our takeaways and includes resources that can help you get started in doing the same.&lt;/p&gt;
&lt;p&gt;Though many &lt;a href=&quot;https://www.zdnet.com/article/facebooks-latest-giant-language-ai-hits-computing-wall-at-500-nvidia-gpus/&quot;&gt;articles&lt;/a&gt; cite examples of the &lt;a href=&quot;https://syncedreview.com/2018/05/17/ai-doubling-its-compute-every-3-5-months/&quot;&gt;insatiable demand&lt;/a&gt; for compute resources, in practice, GPUs are often underutilized. When Run:AI starts work with a new customer, we typically see a GPU utilization rate of between 25 and 30 percent. IT is typically surprised by this – they assume that resources are being fully utilized. But, if you think about it, it’s actually quite logical:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;GPUs tend to be idle during non-work hours (e.g. nights, weekends).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;They can also be idle during work breaks (e.g. coffee breaks, lunch).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;They can be idle when a researcher is building a model (e.g. developing a model in a Jupyter notebook). Note: A Jupyter Notebook is a classic example. A user working with a Jupyter Notebook usually alternates between writing code, executing it on the GPU, and examining the results. The GPU is kept idle for long periods of time during this process.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;They can even be idle during the execution of a GPU-consuming application, e.g. training workloads. This is because the application also has work to do on the CPU and must wait for I/O. Note: Most applications have CPU and I/O work in between launching GPU kernels, so the GPU utilization of a deep-learning model running by itself on a GPU can be much less than 100%.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Increasing GPU utilization and minimizing idle times can drastically reduce costs and help achieve model accuracy faster. To do this, one needs to improve the sharing of GPU resources.&lt;/p&gt;
&lt;h3&gt;Sharing a GPU is complex&lt;/h3&gt;
&lt;p&gt;Applications running on the same GPU share its memory. Every byte allocated by one application leaves one less byte for other applications to use. The only way for multiple applications to run simultaneously is to cooperate with one another. Otherwise, applications can easily, and even mistakenly, impact each other.&lt;/p&gt;
&lt;p&gt;In addition, many applications simply assume they are the only ones running on that GPU and allocate the entire GPU memory upfront by default (TensorFlow, for example). This is a common paradigm when using an external processing unit (in addition to the CPU). Code modifications are required to change this default behavior (e.g. enabling memory growth in TensorFlow) and might impact the application performance (e.g. due to fragmentation). In some cases this might not even be possible; for example, when executing sealed Docker images without access to the source code they contain.&lt;/p&gt;
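&lt;p&gt;As a small illustration of the kind of tweak involved (not a Run:AI feature, just a framework-level switch): recent TensorFlow versions let you opt out of the allocate-everything default without touching the source code by setting an environment variable before launching the workload. The script name below is a placeholder:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Ask TensorFlow to grow GPU memory usage on demand instead of grabbing it all upfront
export TF_FORCE_GPU_ALLOW_GROWTH=true
python train_model.py
&lt;/code&gt;&lt;/pre&gt;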
&lt;p&gt;When running multiple containers on the same GPU, cooperation is even less realistic: containers should not be aware of each other, let alone be accessible to one another.&lt;/p&gt;
&lt;p&gt;When multiple users are expected to share a GPU, they must coordinate with one another, which makes this a logistical problem as well as a technical one.&lt;/p&gt;
&lt;p&gt;All the above makes sharing a GPU between applications, and containers in particular, inconvenient and not scalable or dynamic.&lt;/p&gt;
&lt;h3&gt;What do we mean by “dynamic”?&lt;/h3&gt;
&lt;p&gt;Ideally, sharing a GPU would work like this: I get the resources I need when I need them, and you get the resources you need when you need them. That requires dynamic allocation of resources.&lt;/p&gt;
&lt;p&gt;Sharing the same GPU between multiple applications and containers has two requirements.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;The first requirement is to be able to modify the application to use only a portion of the GPU memory (i.e. code changes and manual tweaks). This might seem easy, but not every user can do so, as it can require deep knowledge of the application’s internals and how to configure it. Sometimes it might not even be possible; as explained above, an example of this is receiving a Docker image without control over what application runs inside it or how it is configured.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The second requirement is to decide how much GPU memory should be used by each and every application. This might be relatively easy for a single user to do as there is only a single person who needs to decide. A team might also be able to do so, but in a very inefficient way.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;In instances where a team is involved, the team can decide on a strict policy in which each member gets an equal, static share of the GPU. For example, in a team of three members, each one would be given exactly one third of the GPU memory.&lt;/p&gt;
&lt;p&gt;This might sound satisfactory, but the truth is that the GPU would be underutilized, based on the scenarios outlined above. If, at any time, one of the team members did not use his or her share of the GPU memory, it would simply be left unused. Additionally, the team members would never be able to use more than their share without breaking their agreement and risking out-of-memory failures in other members’ applications.&lt;/p&gt;
&lt;p&gt;This unused GPU memory could have been allocated by another team member, allowing him or her to run more GPU-consuming applications (e.g. larger deep learning models).&lt;/p&gt;
&lt;h3&gt;The answer: Share GPUs by enabling access to fractions of GPUs&lt;/h3&gt;
&lt;p&gt;A fractional GPU system, such as one built by Run:AI, transparently gives data science and AI engineering teams the ability to run multiple workloads simultaneously on a single GPU. Virtualized logical GPUs have their own memory and computing space that containers can use and access as if they were self-contained processors. This enables several deep learning workloads to run in containers side-by-side on the same GPU without interfering with each other.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/gpu-orchestration-dashboard-1612369520951.png&quot; alt=&quot;gpu orchestration dashboard&quot;&gt;&lt;/p&gt;
&lt;p&gt;Fractional GPU capabilities enable simplified sharing of single and multiple GPUs. In the figure above, you can see that more than 70% of the 160 pooled GPUs are fully utilized. Researchers have maximized cluster utilization, and can see that there are still idle GPUs available to any researchers who need them. This is one way &lt;a href=&quot;https://www.hpe.com/us/en/software/marketplace/runai.html&quot;&gt;HPE Ezmeral Container Platform and Run:AI&lt;/a&gt; are able to help customers bring AI solutions to market faster. By maximizing the utilization of their GPU clusters, customers can build and train concurrent AI models without resource limitations. We’ve &lt;a href=&quot;https://docs.run.ai/Researcher/Walkthroughs/walkthrough-fractions/&quot;&gt;shared some resources here&lt;/a&gt; to help you get started with these concepts.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/run-ai-gpu-orchestration-architecture-2-1612435001011.jpg&quot; alt=&quot;run ai gpu orchestration architecture 2&quot;&gt;&lt;/p&gt;
&lt;p&gt;As shown in the picture above, the Run:AI GPU orchestration solution creates a virtualization and acceleration layer over GPU resources that manages granular scheduling, prioritization, and allocation of compute power for the HPE Ezmeral Container Platform. Run:AI provides a dedicated batch scheduler, running on top of HPE Ezmeral Container Platform, to manage GPU-based workloads. Find out more about the Run:AI GPU orchestration solution running on top of HPE Ezmeral Container Platform, including how you can get a free trial of the solution, by visiting the &lt;a href=&quot;https://www.hpe.com/us/en/software/marketplace/runai.html&quot;&gt;HPE Ezmeral Marketplace&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Keep coming back to the &lt;a href=&quot;/blog&quot;&gt;HPE DEV blog&lt;/a&gt; site for more interesting articles and tutorials on related topics.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Build more intelligent apps  - Newsletter]]></title><link>https://developer.hpe.com/2021-February-03/</link><guid isPermaLink="false">https://developer.hpe.com/2021-February-03/</guid><pubDate>Wed, 03 Feb 2021 06:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[DevSecOps: What it is and where it's going]]></title><description><![CDATA[Editor’s note: This article was originally posted on HPE Enterprise.nxt on February 2, 2021 Experts basically agree on the definition of…]]></description><link>https://developer.hpe.com/devsecops-what-it-is-and-where-its-going/</link><guid isPermaLink="false">https://developer.hpe.com/devsecops-what-it-is-and-where-its-going/</guid><pubDate>Tue, 02 Feb 2021 11:03:18 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s note: This article was originally posted on HPE Enterprise.nxt on February 2, 2021&lt;/strong&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Experts basically agree on the definition of DevSecOps, but there isn&apos;t a full consensus on how to do it and where it is leading.&lt;/p&gt;
&lt;p&gt;While the practices that drive DevOps are more than a decade old, integrating security with DevOps, often termed &lt;a href=&quot;https://resources.whitesourcesoftware.com/blog-whitesource/devsecops&quot;&gt;DevSecOps&lt;/a&gt;, remains a challenge. What does DevSecOps mean within an organization? What does successful DevSecOps look like? And can a company improve its DevSecOps practices so security can scale as the organization scales?&lt;/p&gt;
&lt;p&gt;Like DevOps itself, DevSecOps developed organically in the community of developers. It didn&apos;t come down as standards from on high; rather, developers shared ideas about best practices. Consequently, there isn&apos;t one agreed-upon definition of what DevSecOps is, how one implements it, and where current DevSecOps trends are going.&lt;/p&gt;
&lt;p&gt;In 2020, at the KubeCon conference, a media panel of technical leaders in the DevSecOps space sought to tackle the question of how to implement DevSecOps in the enterprise. Panel members illustrated where there was and was not consensus.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/devsecops_-what-it-is-and-where-it-s-going.png&quot; alt=&quot;Block Text&quot; title=&quot;Block text&quot;&gt;&lt;/p&gt;
&lt;h3&gt;DevSecOps: Building apps better for the bottom line&lt;/h3&gt;
&lt;p&gt;In defining DevSecOps, there was considerable agreement among the panelists—with some caveats. For instance, Emily Fox, DevOps security lead at the U.S. Department of Defense, said DevSecOps is about how DevOps organizations move to a security-first risk posture. What that means, she said, is building services with self-sustaining microperimeters, vetted by &quot;trust but verify&quot; defense-in-depth strategies and interactions with the services around them, within their workload environments.&lt;/p&gt;
&lt;p&gt;Nicolas Chaillan, U.S. Air Force chief software officer, agreed, except for security being first. &quot;I want to be careful with &apos;first,&apos;&quot; Chaillan said. &quot;We don&apos;t build software just to be secured. Security needs to be baked in and security needs to be a central piece of the process, but I don&apos;t think it&apos;s first. I think it&apos;s a continuous cycle.&quot;&lt;/p&gt;
&lt;p&gt;He added, &quot;I certainly agree with continuous monitoring and zero trust enforcement. For us, that means behavior detection and monitoring zero trust enforcement down to the function layer—really reducing the attack surface and continuously monitoring.&quot;&lt;/p&gt;
&lt;p&gt;According to Peter Bosch, distinguished engineer at Cisco Systems, when it comes to successful DevSecOps, security team members must be adequately integrated with development teams. As a developer &quot;long before DevSecOps was a term,&quot; Bosch recalled being stuck writing code and then waiting for three months for a firewall rule to be opened so it could function properly. By integrating those teams, making sure the security team becomes part of the application delivery process, the speed of application development goes up and the robustness of applications increases, he said.&lt;/p&gt;
&lt;p&gt;&quot;You will not run into your &lt;a href=&quot;https://owasp.org/www-project-top-ten/&quot;&gt;OWASP Top 10 security issues&lt;/a&gt; as frequently,&quot; Bosch added. &quot;And you will start making applications that actually contribute better to the bottom line.&quot;&lt;/p&gt;
&lt;p&gt;Sunil James, senior director at Hewlett Packard Enterprise, also saw DevSecOps as a way to move development efforts forward securely and quickly. When done correctly, DevSecOps enables enterprises to move at &quot;cloud speed,&quot; he said.&lt;/p&gt;
&lt;p&gt;&quot;The impetus for this [DevOps] movement is cloud. And cloud materially drives the cadence of our ability to deliver,&quot; James noted. &quot;While not many organizations are delivering at the scale and the pace of a Google or a Netflix, they&apos;d like to. They&apos;d like to get to that point because they can then bring value to their customers faster and faster.&quot;&lt;/p&gt;
&lt;p&gt;And that&apos;s what DevSecOps provides: In addition to a continuous way to deliver secure development and security policies deeper into the application development and operations, DevSecOps also helps &quot;create the consistency and speed with which you can actually offer these capabilities,&quot; James said.&lt;/p&gt;
&lt;h3&gt;DevSecOps and regulatory compliance&lt;/h3&gt;
&lt;p&gt;DevSecOps isn&apos;t just about software code security, but also about maintaining compliance with various industry and government regulations. As Fox explained, teams must contend with many different types of compliance, including legal compliance, compliance that mandates how data is handled, security policy, and control compliance. Each compliance requirement, she said, helps ensure that organizations can secure their mission, systems, applications, and datasets.&lt;/p&gt;
&lt;p&gt;Still, &quot;we have a lot of challenges in this space because cloud native is continually evolving and security is constantly playing catch up,&quot; Fox said. &quot;Therefore, the security tooling to allow us to do compliance in a secure, automated, easy-to-understand fashion is still behind and leaves a lot of room for improvement.&quot;&lt;/p&gt;
&lt;p&gt;Fortunately, better tools are on the way. For one, there is the &lt;a href=&quot;https://pages.nist.gov/OSCAL/&quot;&gt;Open Security Controls Assessment Language (OSCAL) from NIST&lt;/a&gt;, which aims to provide a standardized and automated way to publish, implement, and assess security controls. Chaillan noted that the U.S. Department of Defense is partnering with NIST to implement OSCAL development in its environment and to help automate some of the department&apos;s security controls.&lt;/p&gt;
&lt;p&gt;Still, the sheer size and complexity of the Defense Department&apos;s environment make it challenging. &quot;If you try to reach a complete analysis of your stack every time you make a change, it&apos;s very difficult,&quot; Chaillan said.&lt;/p&gt;
&lt;p&gt;However, by slicing the technology stack into layers, one can automate the mapping of &lt;a href=&quot;https://nvd.nist.gov/800-53&quot;&gt;NIST Special Publication 800-53&lt;/a&gt; controls with OSCAL. &quot;I think there is something [to this approach],&quot; he said. &quot;Of course, it&apos;s very nascent, and there are very few tools that support OSCAL today, so we need companies to bring that capability as a tool.&quot;&lt;/p&gt;
&lt;h3&gt;Automating security and compliance with &apos;policy first&apos;&lt;/h3&gt;
&lt;p&gt;Another way to help DevOps teams succeed at integrating security more tightly into their workflow is to place security controls directly within development environments and continuous pipelines as part of a so-called policy-first initiative. As Fox explained, when developers attempt to commit a project, such in-line controls can help ensure, for example, that secrets aren&apos;t part of the package.&lt;/p&gt;
&lt;p&gt;One of the most straightforward ways to potentially stop that from happening is with pre-commit hooks. Fox said she expects to see policies like these baked into developer tools. Such checks are then reinforced with additional tests throughout the development pipeline to ensure that workloads being deployed remain compliant.&lt;/p&gt;
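&lt;p&gt;As a minimal sketch of such an in-line control (the patterns are illustrative only; real secret scanners are far more thorough), a Git pre-commit hook might look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;#!/bin/sh
# Saved as .git/hooks/pre-commit - blocks commits whose staged changes look like they contain secrets
if git diff --cached | grep -E -i &quot;aws_secret_access_key|BEGIN RSA PRIVATE KEY|api[_-]key&quot;; then
  echo &quot;Possible secret detected in staged changes; commit aborted.&quot;
  exit 1
fi
exit 0
&lt;/code&gt;&lt;/pre&gt;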
&lt;p&gt;In addition to NIST&apos;s work and open source projects such as the &lt;a href=&quot;https://www.openpolicyagent.org/&quot;&gt;Open Policy Agent&lt;/a&gt;, other standards have been developed to help organizations enable a policy-first footing. One such set of standards is &lt;a href=&quot;https://developer.hpe.com/platform/spiffe-and-spire-projects/home&quot;&gt;SPIFFE (Secure Production Identity Framework For Everyone) and SPIRE&lt;/a&gt; (a software implementation of the SPIFFE API). &quot;It&apos;s been happening around SPIFFE and other standards,&quot; James said.&lt;/p&gt;
&lt;p&gt;Essentially, SPIFFE is a workload API that creates trust between workloads and system actions. Because SPIFFE is an API and eliminates manual key generation and distribution, authentication such as Kerberos and OAuth can be fully automated within cloud workloads. SPIRE is a software implementation of the SPIFFE API and can be integrated with cloud providers, middleware layers, hardware trust mechanisms, and more.&lt;/p&gt;
&lt;p&gt;According to James, the approaches to policy-first DevOps are moving toward allow/deny lists that help create scalable, automated mechanisms that can determine what something is and what it will be allowed to do and then create a centralized framework to manage the policy over time.&lt;/p&gt;
&lt;p&gt;James added that the policy-first mindset isn&apos;t optional in today&apos;s fast-paced environment. &quot;When you&apos;re looking at organizations scaling their use of microservices—whether they are going from five, to 10, to 100, to whatever the ultimate number might be—they&apos;re not going to be able to get their hands around [management challenges] unless they have a policy-first mindset,&quot; he said.&lt;/p&gt;
&lt;p&gt;&quot;We&apos;ve seen these patterns before DevSecOps, though. We&apos;ve seen policy-first in other domains, such as application security and information security. And I think there are a lot of lessons learned that are going to find their way back into the cloud-native landscape.&quot;&lt;/p&gt;
&lt;h3&gt;Lessons for leaders&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;When done right, DevSecOps enables enterprises to make secure software at &quot;cloud speed.&quot;&lt;/li&gt;
&lt;li&gt;The security policy-first mindset isn&apos;t optional in today&apos;s fast-paced environment.&lt;/li&gt;
&lt;li&gt;DevSecOps isn&apos;t just about software code security but also about maintaining compliance with various industry and government regulations.&lt;/li&gt;
&lt;/ul&gt;
&lt;br /&gt;
&lt;p&gt;&lt;u&gt;&lt;strong&gt;About the author:&lt;/strong&gt;&lt;/u&gt;&lt;/p&gt;
&lt;p&gt;George V. Hulme is an award-winning journalist and internationally recognized information security and business technology writer. He has covered business, technology, and IT security topics for more than 20 years. His work has appeared in CSOOnline, ComputerWorld, InformationWeek, and dozens of other technology publications. He is also a founding editor at DevOps.com&lt;/p&gt;</content:encoded></item><item><title><![CDATA[How to Integrate Custom Data Sources Into Apache Spark]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may…]]></description><link>https://developer.hpe.com/how-to-integrate-custom-data-sources-into-apache-spark/</link><guid isPermaLink="false">https://developer.hpe.com/how-to-integrate-custom-data-sources-into-apache-spark/</guid><pubDate>Fri, 29 Jan 2021 05:41:49 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Nicolas A Perez&quot;,
&quot;publish&quot;: &quot;2016-05-10T07:00:00.000Z&quot;,
&quot;tags&quot;: &quot;apache-spark&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;Streaming data is a hot topic these days, and Apache Spark is an excellent framework for streaming. In this blog post, I&apos;ll show you how to integrate custom data sources into Spark.&lt;/p&gt;
&lt;p&gt;Spark Streaming provides the ability to stream from a variety of sources while using the same concise API for accessing data streams, performing SQL queries, or creating machine learning algorithms. These abilities make Spark a preferable framework for streaming (or any type of workflow) applications, since we can use all aspects of the framework.&lt;/p&gt;
&lt;p&gt;The challenge is figuring out how to integrate custom data sources into Spark so we can leverage its power without needing to change to more standard sources. It might seem logical to change, but in some cases it is just not possible or convenient to do so.&lt;/p&gt;
&lt;h2&gt;Streaming Custom Receivers&lt;/h2&gt;
&lt;p&gt;Spark offers several extension points, as we saw when we extended the Data Source API to integrate our custom data store into Spark SQL.&lt;/p&gt;
&lt;p&gt;In this example, we are going to do the same, but we are also going to extend the streaming API so we can stream from &lt;strong&gt;anywhere&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;In order to implement our custom receiver, we need to extend the Receiver[A] class. Note that it has type annotation, so we can enforce type safety on our DStream from the streaming client side point of view.&lt;/p&gt;
&lt;p&gt;We are going to use this custom receiver to stream orders that one of our applications sent over a socket.&lt;/p&gt;
&lt;p&gt;The structure of the data traveling through the network looks like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;1 5
1 1 2
2 1 1
2 1 1
4 1 1
2 2
1 2 2
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We first receive the order ID and the total amount of the order, and then we receive the line items of the order. The first value is the item ID, the second is the order ID (matching the order it belongs to), and the third is the cost of the item. In this example, we have two orders. The first one has four items and the second has only one.&lt;/p&gt;
&lt;p&gt;The idea is to hide all of this from our Spark application, so what it receives on the DStream is a complete order defined on a stream as follows:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val orderStream: DStream[Order] = .....
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;At the same time, we are also using the receiver to stream our custom streaming source. Even though it sends the data over a socket, it will be quite complicated to use the standard socket stream from Spark, since we will not be able to control how the data is coming in. In addition, we have the problem of conforming orders on the application itself. This could be very complicated, since, once we are in the app space, we are running in parallel, and it is hard to sync all of this incoming data. However, in the receiver space it is easy to create orders from the raw input text.&lt;/p&gt;
&lt;p&gt;Let’s take a look at what our initial implementation looks like.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// Imports needed by the receiver shown in this and the following snippets
import java.io.{BufferedReader, InputStreamReader}
import java.net.Socket

import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.receiver.Receiver

case class Order(id: Int, total: Int, items: List[Item] = null)
case class Item(id: Int, cost: Int)

class OrderReceiver(host: String, port: Int) extends Receiver[Order](StorageLevel.MEMORY_ONLY)  {

  override def onStart(): Unit = {

    println(&quot;starting...&quot;)

    val thread = new Thread(&quot;Receiver&quot;) {
      override def run() {receive() }
    }

    thread.start()
  }

  override def onStop(): Unit = stop(&quot;I am done&quot;)

  def receive() = ....
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Our OrderReceiver extends Receiver[Order] which allows us to store an Order (type annotated) inside Spark. We also need to implement the onStart() and onStop() methods. Note that onStart() creates a thread so it is non-blocking, which is very important for proper behavior.&lt;/p&gt;
&lt;p&gt;Now, let’s take a look at the receive method. This is where the magic really happens.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;def receive() = {
    val socket = new Socket(host, port)
    var currentOrder: Order = null
    var currentItems: List[Item] = null

    val reader = new BufferedReader(new InputStreamReader (socket.getInputStream(), &quot;UTF-8&quot;))

    while (!isStopped()) {
      var userInput = reader.readLine()

      if (userInput == null) stop(&quot;Stream has ended&quot;)
      else {
        val parts = userInput.split(&quot; &quot;)

        if (parts.length == 2) {
          if (currentOrder != null) {
            store(Order(currentOrder.id, currentOrder.total, currentItems))
          }

          currentOrder = Order(parts(0).toInt, parts(1).toInt)
          currentItems = List[Item]()
        }
        else {
          currentItems = Item(parts(0).toInt, parts(2).toInt) :: currentItems // parts(1) is the order ID; parts(2) is the item cost
        }
      }
    }
  }
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here, we create a socket and point it to our source. Then we simply start reading from it until a stop command has been dispatched or our socket has no more data on it. Note that we are reading the same structure we have defined previously (how our data is being sent). Once we have completely read an Order, we call store(…) so it gets saved into Spark.&lt;/p&gt;
&lt;p&gt;There is nothing left to do here but to use our receiver in our application, which looks like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.dstream.DStream

val config = new SparkConf().setAppName(&quot;streaming&quot;)
val sc = new SparkContext(config)
val ssc = new StreamingContext(sc, Seconds(5))

val stream: DStream[Order] = ssc.receiverStream(new OrderReceiver(host, port))
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note how we have created the stream using our custom OrderReceiver (the type annotation on val stream is only for clarity; it is not required). From now on, we can use the stream (DStream[Order]) like any other stream we have used in any other application.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;stream.foreachRDD { rdd =&gt;
  rdd.foreach { order =&gt;
    println(order.id)
    order.items.foreach(println)
  }
}
&lt;/code&gt;&lt;/pre&gt;
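&lt;p&gt;To try the receiver end to end, one rough way is to replay the sample records over a socket with netcat while the streaming application runs. The file name and port below are assumptions, and the exact netcat flags vary between versions:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Listen on port 9999 (the port passed to OrderReceiver) and stream the sample orders to the client that connects
nc -lk 9999 &amp;#x3C; orders.txt
&lt;/code&gt;&lt;/pre&gt;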
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;Spark Streaming comes in very handy when processing sources that generate endless data. You can use the same API that you use for Spark SQL and other components in the system, but it is also flexible enough to be extended to meet your particular needs.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Containers vs. VMs: A 5-Minute Guide to Understanding Their Differences]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may…]]></description><link>https://developer.hpe.com/containers-vs-vms-a-5-minute-guide-to-understanding-their-differences/</link><guid isPermaLink="false">https://developer.hpe.com/containers-vs-vms-a-5-minute-guide-to-understanding-their-differences/</guid><pubDate>Fri, 29 Jan 2021 05:33:44 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Suzy Visvanathan&quot;,
&quot;publish&quot;: &quot;2018-05-16T10:45:00.000&quot;,
&quot;tags&quot;: &quot;mapr-platform&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;Almost every organization is investing in, or thinking of investing in, containers. Containers are transitioning from hype to a tangible way to deploy applications, but that doesn’t mean virtual machines are suddenly out of date or no longer needed. While there are many articles highlighting the technical differences between container and VM images, articulating the advantages of one over the other for one’s business needs, and when to use which, requires a closer look.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/containers-vs-vms-wide-1611898725434.jpg&quot; alt=&quot;Containers vs. VMs&quot;&gt;&lt;/p&gt;
&lt;h2&gt;What’s in the package&lt;/h2&gt;
&lt;p&gt;Containers take only what the application needs and share system resources like OS, CPU, and memory, thereby making them easier to deploy. VMs, on the other hand, take up a lot of system resources within each image, which translates to a lot of memory and CPU cycles. Because of their agile nature, containers are the deployment of choice in development and testing environments.&lt;/p&gt;
&lt;p&gt;Containers share a single host operating system, whereas a hypervisor can host VMs running different operating systems. If you have one or a few consistent sets of applications on a single operating system, consider containers. If you have diverse applications with varying operating systems, consider VMs.&lt;/p&gt;
&lt;h2&gt;It’s all about the money&lt;/h2&gt;
&lt;p&gt;Since containers don’t package system resources the way VMs do, you can run at least twice as many applications (often more) on the same server with containers as you could with VMs. This advantage maximizes resource usage and brings down operating costs. Agile development and testing speeds up time to market with containers, more so than with VMs.&lt;/p&gt;
&lt;p&gt;Both container and VM sprawl are real problems that administrators face; however, because of the elasticity and portability of containers, it is far easier to end up running two or three times as many containers as you need for the same set of applications than it is with VMs. So, while you can pack more applications onto a single host server with containers, running more containers than you need will still end up consuming more resources.&lt;/p&gt;
&lt;p&gt;Put simply, if you are looking to develop applications or run a single or a handful of applications in multiple instances and resource footprint is a concern, consider containers. If you are looking to run multiple applications and resource footprint can be fluid, consider VMs.&lt;/p&gt;
&lt;h2&gt;About that security...&lt;/h2&gt;
&lt;p&gt;Security, by and large, has been the single biggest problem around containers. Containers, by their very nature of sharing OS, require root access, which makes the data vulnerable and at risk for unauthorized access. While there are several workarounds for this issue, they are quite lengthy and need to be thought out in detail. VMs, on the other hand, have a very robust, rich set of security services that make them attractive for sensitive data and for production environments. VMs also have a very mature ecosystem in terms of network, storage, data protection, and recovery that can make them better for production environments.&lt;/p&gt;
&lt;h2&gt;Forecast is mostly cloudy&lt;/h2&gt;
&lt;p&gt;Cloud solutions have made huge strides in the services they offer. However, the cost of using the cloud is not always as low as one might expect. The elasticity of containers allows organizations to create containers on demand and tear them down when done. Scaling your services up and down in the cloud by spawning new containers is easier and more cost-effective than it is with VMs. Containers in the cloud also allow you to use only the minimum cloud resources you need for your service, thereby keeping your subscription costs down.&lt;/p&gt;
&lt;p&gt;To keep it simple, if you have a heavy development, test, or integration environment, switch to containers. If you have multiple applications with varying characteristics requiring a secure environment, remain on VMs. If you are looking to deploy in the cloud or offer services in the cloud, standardizing the deployment in containers will be a good idea. Eventually, it will be prudent to envision and plan for both containers and VMs to coexist in both data centers and in the cloud, since each has its own benefits and challenges.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Kafka Connect and Kafka REST API on MapR: Streaming Just Became a Whole Lot Easier!]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may…]]></description><link>https://developer.hpe.com/kafka-connect-and-kafka-rest-api-on-mapr-streaming-just-became-a-whole-l/</link><guid isPermaLink="false">https://developer.hpe.com/kafka-connect-and-kafka-rest-api-on-mapr-streaming-just-became-a-whole-l/</guid><pubDate>Fri, 29 Jan 2021 05:29:20 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Ankur Desai&quot;,
&quot;publish&quot;: &quot;2016-12-09T06:00:00.000Z&quot;,
&quot;tags&quot;: &quot;nosql&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;In my previous blog post &lt;a href=&quot;/blog/LOV2B97WzAiAzmYlY17y/real-time-event-streaming-what-are-your-options&quot;&gt;Real-Time Event Streaming: What Are Your Options?&lt;/a&gt;, I explained the three major components of a streaming architecture. Most streaming architectures have three major components – producers, a streaming system, and consumers. Producers (such as Apache Flume) publish event data into a streaming system after collecting it from the data source, transforming it into the desired format, and optionally filtering, aggregating, and enriching it. The streaming or messaging system (such as Apache Kafka or MapR Event Store) takes the data published by the producers, persists it, and reliably delivers it to consumers. Consumers are typically stream processing engines (such as Apache Spark) that subscribe to data from streams and manipulate or analyze that data to look for alerts and insights. Furthermore, once the data is processed, it may need to be persisted in a database or a file for future use by downstream applications.&lt;/p&gt;
&lt;p&gt;The following diagram illustrates the typical streaming architecture:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/picture1-1611898227918.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;However, as streaming becomes more pervasive, we are looking to simplify this architecture and, at the same time, make it more agile. Enter Kafka Connect and the Kafka REST API. The following diagram illustrates a new, simple, agile way of setting up your streaming:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/picture2-1611898241971.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Kafka Connect: Easily connect common data systems with Kafka&lt;/h2&gt;
&lt;p&gt;Kafka Connect provides pre-built connectors that allow legacy data stores (such as databases and data warehouses) and modern data stores (such as HDFS) to connect with Kafka. This connection eliminates the need to build a custom “producer” or “consumer” application to help these data systems publish/subscribe to Kafka. It also eliminates the need for a third-party data collector that provides connectors to these data stores.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/picture3-1611898250133.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Kafka Connect provides a convenient, reliable connection to the most common data stores. It helps ingest data into Kafka as well as push data from Kafka into the most commonly used data systems. Moreover, to eliminate the need for custom producer apps, it allows pull-based ingestion of data, supporting sources that don&apos;t know how to push. Similarly, to eliminate custom consumer apps, it allows push-based export of data from Kafka, supporting data systems that don&apos;t know how to pull data from Kafka. As Kafka Connect continues to mature, more connectors will be created, opening up a large range of sources and sinks that can connect to Kafka out of the box.&lt;/p&gt;
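&lt;p&gt;To give a feel for what using a pre-built connector looks like, the hypothetical example below registers the stock file source connector with a Kafka Connect worker through its REST interface. The worker address, file path, and topic name are assumptions made for illustration:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Register a file source connector that tails /tmp/orders.log into the &quot;orders&quot; topic
curl -X POST http://localhost:8083/connectors \
  -H &quot;Content-Type: application/json&quot; \
  -d &apos;{&quot;name&quot;: &quot;file-source-demo&quot;,
       &quot;config&quot;: {
         &quot;connector.class&quot;: &quot;org.apache.kafka.connect.file.FileStreamSourceConnector&quot;,
         &quot;tasks.max&quot;: &quot;1&quot;,
         &quot;file&quot;: &quot;/tmp/orders.log&quot;,
         &quot;topic&quot;: &quot;orders&quot;}}&apos;
&lt;/code&gt;&lt;/pre&gt;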
&lt;h2&gt;Kafka REST Proxy: Connect with Kafka using HTTP&lt;/h2&gt;
&lt;p&gt;New age data sources such as sensors, mobile devices, etc., know how to communicate using HTTP. However, they often do not have enough computing resources to run a Kafka producer application and a Kafka client. This deficiency is why the Kafka REST API is a game changer. It allows these devices to publish/subscribe to Kafka topics easily, which makes the architecture much more agile. Any device that can communicate using HTTP can now communicate directly with Kafka. This development has massive implications in simplifying IoT architectures. Any car, thermostat, machine sensor, etc., can now communicate directly with Kafka.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/picture4-1611898257677.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The Kafka REST API eliminates intermediate data collectors and simplifies the architecture by directly connecting the data sources with Kafka. Any programming language in any runtime environment can now connect with Kafka using HTTP. This ability gives developers the freedom to use the development framework of their choice and connect with Kafka using simple REST APIs, which reduces the time-to-market for streaming applications.&lt;/p&gt;
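&lt;p&gt;As a rough sketch of what this looks like in practice, a device (or any HTTP client) could publish a reading through a Kafka REST proxy with a single request. The proxy address, topic name, and payload below are illustrative assumptions; on MapR, topics are typically addressed through the stream they belong to:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Publish one JSON record to the &quot;sensor-readings&quot; topic via a REST proxy listening on port 8082
curl -X POST http://localhost:8082/topics/sensor-readings \
  -H &quot;Content-Type: application/vnd.kafka.json.v2+json&quot; \
  -d &apos;{&quot;records&quot;: [{&quot;value&quot;: {&quot;device&quot;: &quot;thermostat-42&quot;, &quot;temp_c&quot;: 21.5}}]}&apos;
&lt;/code&gt;&lt;/pre&gt;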
&lt;h2&gt;MapR Data Platform: Further simplifying the architecture&lt;/h2&gt;
&lt;p&gt;The MapR Platform further simplifies the streaming architecture by providing event streaming, stream processing, and persistence (both database and files) on one single platform, in one system, in one cluster. You can connect the data sources with MapR Event Store - which is a more secure, reliable, and performant replacement for Kafka - using the Kafka REST API or Kafka Connect. All the components of your streaming architecture will be available on MapR, within one platform.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/picture5-1611898264954.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;MapR Event Store uses the same APIs as Kafka (0.9), which means that applications built with Kafka as the messaging system can be easily ported over to MapR Event Store and vice versa. Without the converged platform, event streaming, stream processing, and persistence would run as separate systems that would need to be connected. Connected systems require cross-cluster data movement, which introduces additional latency. They also require more hardware, since resources cannot be shared across siloed systems and have higher administration cost. The MapR Data Platform eliminates these problems by providing one single system for all components (except the original data sources) of the streaming architecture.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Real-Time Event Streaming: What Are Your Options?]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may…]]></description><link>https://developer.hpe.com/real-time-event-streaming-what-are-your-options/</link><guid isPermaLink="false">https://developer.hpe.com/real-time-event-streaming-what-are-your-options/</guid><pubDate>Fri, 29 Jan 2021 05:22:57 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Ankur Desai&quot;,
&quot;publish&quot;: &quot;2016-04-14T07:00:00.000Z&quot;,
&quot;tags&quot;: &quot;open-source&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;With the Internet of Things expected to bring billions of devices online, a lot of people are excited about the potential value of event streaming, that is, ingesting and analyzing lots of real-time data for immediate decision-making. But streaming also introduces new concepts and components that need a closer look. This blog post is intended to provide an introduction to the components of a typical streaming architecture and various options available at each stage.&lt;/p&gt;
&lt;h2&gt;Three Components of a Streaming Architecture&lt;/h2&gt;
&lt;p&gt;Most streaming architectures have three major components – producers, a streaming system, and consumers.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;A producer&lt;/strong&gt; is a software-based system that is connected to the data source. Producers publish event data into a streaming system after collecting it from the data source, transforming it into the desired format, and optionally filtering, aggregating, and enriching it.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The streaming system&lt;/strong&gt; takes the data published by the producers, persists it, and reliably delivers it to consumers.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Consumers&lt;/strong&gt; are typically stream processing engines that subscribe to data from streams and manipulate or analyze that data to look for alerts and insights. There are lots of options to choose from, and more are on the way. Let’s look at what your options are for each stage.&lt;/p&gt;
&lt;h3&gt;&lt;strong&gt;Stage 1: Producers&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Data producers collect the data from data sources, convert it to the desired format, and publish the data into streaming platforms such as &lt;a target=&apos;\_blank&apos;  href=&apos;http://kafka.apache.org/&apos;&gt;Apache Kafka&lt;/a&gt; and MapR Event Store. Apache Flume is commonly used as a producer to Kafka. StreamSets is an up-and-coming data collector that may be worth a look.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a target=&apos;\_blank&apos;  href=&apos;https://flume.apache.org/&apos;&gt;Apache Flume&lt;/a&gt;&lt;/strong&gt; is a distributed system for efficiently collecting, aggregating, and moving large amounts of data. Flume has a source and sink architecture. A Flume source collects the event data from the data sources. A Flume sink puts the event into an external repository, which is often a streaming system like Apache Kafka or MapR Event Store.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a target=&apos;\_blank&apos;  href=&apos;https://streamsets.com/product/&apos;&gt;StreamSets Data Collector&lt;/a&gt;&lt;/strong&gt; is open source software for the development and operation of complex data flows. It provides a graphical IDE for building ingest pipelines. StreamSets can help you connect with Kafka without writing a single line of code. StreamSets Data Collector includes out-of-the-box connectors for Kafka and many other sources and destinations.&lt;/p&gt;
&lt;h3&gt;&lt;strong&gt;Stage 2: Streaming System&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Two event transport systems that can easily scale to deliver billions of events per second are Apache Kafka and MapR Event Store. The ability to linearly scale to deliver billions of events per second differentiates Kafka and MapR Event Store from traditional messaging queues like &lt;a target=&apos;\_blank&apos;  href=&apos;http://www.tibco.com/products/automation/enterprise-messaging/enterprise-message-service&apos;&gt;Tibco EMS&lt;/a&gt; and &lt;a target=&apos;\_blank&apos;  href=&apos;http://www-03.ibm.com/software/products/en/ibm-mq&apos;&gt;IBM MQ&lt;/a&gt;. Kafka and MapR Event Store both use a publish-subscribe model in which the data producer is the publisher and the data consumer is the subscriber.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Apache Kafka&lt;/strong&gt; is great at handling large volumes of data. You can set up a cluster as a data backbone, which gives you great scalability. You can then easily expand the cluster as needed without downtime. Kafka also stores messages on disk and replicates them within the cluster to reduce the risk of data loss when you encounter a hardware failure.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;MapR Event Store for Apache Kafka&lt;/strong&gt; is like Kafka – in fact, it uses &lt;a target=&apos;\_blank&apos;  href=&apos;http://kafka.apache.org/documentation.html&apos;&gt;the Kafka 0.9 API&lt;/a&gt; – but it has certain enterprise features that provide additional support for very large and geographically diverse networks where data integrity is essential. MapR Event Store is integrated with the MapR Data Platform, which combines file storage, database services, and processing frameworks in a single cluster. That means batch, interactive, and stream processing engines all have direct access to event streams, which reduces data movement and ensures consistency.&lt;/p&gt;
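&lt;p&gt;For a sense of how streams are provisioned on the MapR side, a stream and a topic can be created with maprcli before producers start publishing. The path and topic name below are just examples:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Create a stream (a first-class object in the MapR namespace) and a topic inside it
$ maprcli stream create -path /sample-stream
$ maprcli stream topic create -path /sample-stream -topic sensor_readings
&lt;/code&gt;&lt;/pre&gt;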
&lt;h3&gt;&lt;strong&gt;Stage 3: Consumers (Processing)&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;MapR Event Store and Kafka can deliver data from a wide variety of sources, at IoT scale. It’s then up to the processing engines to do something with it. Four important engines to know about include Apache Spark Streaming, Apache Flink, Apache Storm, and Apache Apex.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Spark Streaming&lt;/strong&gt; is a built-in component of Apache Spark. Spark Streaming can consume event streams from MapR Event Store, Kafka, and many other systems. By being a built-in component of Spark, Spark Streaming runs in-memory, and allows you to run ad-hoc queries on stream data. Spark Streaming can be more accurately described as “micro-batching,” or processing small amounts of batch information in quick bursts.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a target=&apos;\_blank&apos;  href=&apos;http://storm.apache.org/&apos;&gt;Apache Storm&lt;/a&gt;&lt;/strong&gt; is another popular event processing engine. Unlike Spark, Storm is a pure real-time event-based analytics engine, which makes it most useful in situations in which each event needs to be processed instantaneously. Storm actually processes each event as soon as it is delivered.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a target=&apos;\_blank&apos;  href=&apos;https://flink.apache.org/&apos;&gt;Apache Flink&lt;/a&gt;&lt;/strong&gt; works in-memory and is notable for its speed and scalability. Similar to Storm, it is a pure real-time event-based processing engine. The differences between the two are quite technical, having to do with such things as the way each ensures data reliability, the programming languages they support, and the capabilities of their respective APIs. Flink supports the Apache Storm API, to make the transition from Storm easy for developers familiar with Storm.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a target=&apos;\_blank&apos;  href=&apos;http://apex.incubator.apache.org/&apos;&gt;Apache Apex&lt;/a&gt;&lt;/strong&gt; is a YARN-native platform that unifies stream and batch processing. It lowers the expertise required to write big data applications by providing a simple API that enables users to write or reuse generic Java code.&lt;/p&gt;
&lt;p&gt;You can see that the enthusiasm over real-time processing is being met with a host of technologies. And the landscape is constantly evolving.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Scaling with Kafka – Common Challenges Solved]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may…]]></description><link>https://developer.hpe.com/scaling-with-kafka-common-challenges-solved/</link><guid isPermaLink="false">https://developer.hpe.com/scaling-with-kafka-common-challenges-solved/</guid><pubDate>Fri, 29 Jan 2021 05:13:45 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Will Ochandarena&quot;,
&quot;publish&quot;: &quot;2016-05-05T07:00:00.000Z&quot;,
&quot;tags&quot;: &quot;streaming&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;I attended several Kafka Summit sessions in San Francisco given by Kafka users and talked to many more people at the booth and in the hallway. The general consensus is that the Kafka model and API are well suited to building horizontally scalable, asynchronous data pipelines for data integration and stream processing. That said, the companies operating these systems at scale – billions of events per day or multiple data centers – described a consistent set of challenges.&lt;/p&gt;
&lt;p&gt;We launched MapR Event Store, a publish-subscribe event streaming system built on top of the MapR platform, exposing the Kafka API. By leveraging the strong MapR foundation that supports our distributed file and object store (MapR XD) and database (MapR Database), both of which effortlessly scale to petabytes of data and thousands of nodes across multiple data centers, MapR Event Store inherently avoids several of the challenges described by the companies operating it at scale. In this blog, I’d like to highlight a few of these challenges and how MapR Event Store overcomes them.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Topic Balancing&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The most universal pain point I heard had to do with how Kafka balances topic partitions between cluster nodes. Because Kafka assumes all partitions are equal in terms of size and throughput, a common occurrence is for multiple “heavy” partitions to be placed on the same node, resulting in hot spotting and storage imbalances. To overcome this, these companies devote a lot of resources to monitoring and intervene manually each time an issue is found, migrating partitions between nodes.&lt;/p&gt;
&lt;p&gt;Rather than pin partitions to a single node, MapR Event Store splits partitions into smaller linked objects, called “partitionlets”, that are spread among the nodes in the cluster. As data is written to the cluster, the active partitionlets (those handling new data) are dynamically balanced according to load, minimizing hotspotting.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Infinite Topic Persistence&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Another common issue that came up was handling long-term persistence of streaming data. As mentioned above, Kafka partitions are pinned to a single node, meaning they can’t outgrow the storage capacity of that node. Because of this, a common design pattern is to shovel streaming data from Kafka into HDFS for long-term persistence, also known as the Lambda architecture. This creates huge complications around reprocessing old data (either for new use cases or for fixing bugs in existing streaming apps), as it forces companies to write two versions of each app – one that processes data from Kafka, the other from HDFS.&lt;/p&gt;
&lt;p&gt;The MapR partitionlet approach described above also eliminates this issue, as it allows all historical data to be stored in a single system – MapR Event Store. This allows streaming apps, new or existing, to simply “scroll back to 0” and reprocess months or even years of historical data.&lt;/p&gt;
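&lt;p&gt;Because the Kafka API is exposed, a reprocessing job can, in principle, be an ordinary Kafka consumer that rewinds to the start of the topic. Below is a minimal sketch using the kafka-python client; the stream path, topic name, and broker address are placeholders, and any MapR-specific client configuration is omitted.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from kafka import KafkaConsumer, TopicPartition

# Hypothetical topic name; MapR Event Store topics are typically addressed
# as &quot;/stream-path:topic-name&quot; through the Kafka API.
TOPIC = &quot;/apps/clickstream:events&quot;

consumer = KafkaConsumer(
    bootstrap_servers=&quot;broker:9092&quot;,          # placeholder address
    enable_auto_commit=False,
    value_deserializer=lambda v: v.decode(&quot;utf-8&quot;),
)

# Take partition 0 explicitly and rewind to the very first message,
# i.e. &quot;scroll back to 0&quot; and replay the full history.
partition = TopicPartition(TOPIC, 0)
consumer.assign([partition])
consumer.seek_to_beginning(partition)

for message in consumer:
    # Apply the same processing logic the live streaming app uses.
    print(message.value)
&lt;/code&gt;&lt;/pre&gt;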
&lt;p&gt;&lt;strong&gt;Global Replication&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;LinkedIn gave a presentation, “More Clusters, More Problems,” to a packed room on designing for multiple data centers. They listed several design challenges, including:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Establishing tiered cluster types (local and aggregate) to avoid forming topology loops that infinitely replicate data.&lt;/li&gt;
&lt;li&gt;Ensuring topic names are globally unique due to lack of namespacing.&lt;/li&gt;
&lt;li&gt;Ensuring identical partition configuration on both ends to prevent re-ordering.&lt;/li&gt;
&lt;li&gt;No ability to handle failover of applications in a disaster recovery scenario, as message offsets aren’t synchronized between clusters.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;By building global replication into the platform, MapR Event Store overcomes all of the challenges above and more.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Exploring Data Fabric and Containers in HPE DEVs new Munch & Learn monthly gatherings]]></title><description><![CDATA[munch learn feb 2021 HPE DEV Munch & Learn technical talks are designed to provide attendees with an opportunity to learn more about today’s…]]></description><link>https://developer.hpe.com/exploring-data-fabric-and-containers-in-hpe-devs-new-munch-learn-monthly/</link><guid isPermaLink="false">https://developer.hpe.com/exploring-data-fabric-and-containers-in-hpe-devs-new-munch-learn-monthly/</guid><pubDate>Thu, 28 Jan 2021 18:22:24 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/munch-learn-feb-2021-1611858129501.jpg&quot; alt=&quot;munch learn feb 2021&quot;&gt;&lt;/p&gt;
&lt;p&gt;HPE DEV Munch &amp;#x26; Learn technical talks are designed to provide attendees with an opportunity to learn more about today’s most popular technologies. These free, 60-minute informative meetups are sponsored by the HPE DEV Community and provide the unique opportunity to connect with some of the industry’s leading technologists.&lt;/p&gt;
&lt;h3&gt;January’s inaugural talk on data fabric with Ted Dunning and Ellen Friedman&lt;/h3&gt;
&lt;p&gt;In the January 27th Munch &amp;#x26; Learn session, over 160 technologists attended to listen to Ellen Friedman and Ted Dunning explore the impact of a unifying data fabric and how it works. Ellen moderated the session, starting with an invitation to make these informal meetups fun, interactive, and informative. She introduced Ted Dunning, HPE Ezmeral Data Fabric CTO, who started with a business-level look at three different use cases and then quickly moved to a deeper technical level to examine how data infrastructure played a role in each one.&lt;/p&gt;
&lt;p&gt;Ted showed a variety of processes that are handled efficiently by the data platform rather than having to be coded over and over again into applications. The examples included:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Event stream replication from many edge sources to the core data center&lt;/li&gt;
&lt;li&gt;How data infrastructure handles massive scale of data mirrored from edge to core in the case of autonomous car manufacture&lt;/li&gt;
&lt;li&gt;How a unifying data infrastructure supports true multitenancy for a large customer&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The data infrastructure used is the HPE Ezmeral Data Fabric, a software solution based on technology from the acquisition of MapR Technologies by HPE and now part of the HPE Ezmeral Software Portfolio. This ability to run very different applications – including AI and analytics, modern and legacy, containerized and non-containerized – together on the same data system is a central theme of their free, newly published O’Reilly ebook: &lt;em&gt;&lt;a href=&quot;https://www.hpe.com/us/en/resources/software/ai-and-analytics-systems.html&quot;&gt;AI and Analytics at Scale: Lessons from Real-World Production Systems&lt;/a&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;During the talk, Ted acknowledged that people may be accepting certain trade-offs between scale, resilience, performance, flexibility, or cost that aren’t actually necessary. He pointed out that, when you release yourself from the limitations imposed by some data infrastructure technologies, you can avoid these tradeoffs, and then proceeded to dive into technical detail on how the HPE Ezmeral Data Fabric makes this possible.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/ellen-munch-and-learn-2-1612197191552.jpg&quot; alt=&quot;ellen munch and learn 2&quot;&gt;&lt;/p&gt;
&lt;p&gt;While listening to the session, attendees participated in polls and had the opportunity to ask questions. In keeping with the Munch and Learn theme, several of the attendees also shared pictures of their favorite “munchies” on the accompanying Slack channel.&lt;/p&gt;
&lt;h3&gt;February talk on containers with Tom Phelan and Nigel Poulton&lt;/h3&gt;
&lt;p&gt;During the next session, February 24th, Nigel Poulton hosted Tom Phelan, HPE Fellow and CTO of the HPE Ezmeral Runtime Enterprise, who discussed container architectures and how they can leverage the Kubernetes Container orchestrator to deploy and manage stateful, as well as microservice-based, applications.&lt;/p&gt;
&lt;p&gt;As part of the discussion, Tom described how containers require fewer system resources than traditional virtual machine environments, allowing applications to be deployed more easily and run on different operating systems and hardware platforms. Tom also explained how the HPE Ezmeral Machine Learning Operations package helps support collaboration between data scientists.&lt;/p&gt;
&lt;p&gt;To view what’s up for our next Munch &amp;#x26; Learn session, check out our &lt;a href=&quot;/blog/munch-and-learn&quot;&gt;schedule&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Top Trends: Machine Learning, Microservices, Containers, Kubernetes, Cloud to Edge. What are they and how do they fit together?]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may…]]></description><link>https://developer.hpe.com/top-trends-machine-learning-microservices-containers-kubernetes-cloud-to/</link><guid isPermaLink="false">https://developer.hpe.com/top-trends-machine-learning-microservices-containers-kubernetes-cloud-to/</guid><pubDate>Fri, 22 Jan 2021 06:42:26 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Carol McDonald&quot;,
&quot;publish&quot;: &quot;2018-02-28T12:00:00.000&quot;,
&quot;tags&quot;: &quot;use-cases&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;Developers, data scientists, and IT operations are working together to build intelligent apps with new technologies and architectures because of the flexibility, speed of delivery, and maintainability that they make possible. This post will go over some top trending technologies, such as machine learning, containers, Kubernetes, event streams (Kafka API), DataOps, and cloud to edge computing, which are driving this revolution.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/industries-1611298009552.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;AI, Machine Learning, Deep Learning&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Predictive machine learning uses algorithms to find patterns in data and then uses a model that recognizes those patterns to make predictions on new data.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/predictive-ml-1611298021253.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
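&lt;p&gt;As a toy illustration of that train-then-predict loop (the data, feature meanings, and fraud framing below are invented purely for illustration and are not part of the original article), a few lines of scikit-learn are enough to fit a model on labeled examples and score new ones:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from sklearn.linear_model import LogisticRegression

# Made-up historical observations: [amount, hour_of_day] and a fraud label.
X_train = [[20.0, 10], [15.5, 14], [950.0, 3], [870.0, 2]]
y_train = [0, 0, 1, 1]                        # 0 = normal, 1 = fraudulent

model = LogisticRegression().fit(X_train, y_train)

# The trained model applies the learned pattern to new, unseen data.
print(model.predict([[900.0, 4], [18.0, 12]]))   # likely [1 0]
&lt;/code&gt;&lt;/pre&gt;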
&lt;p&gt;Why is this so hot? Analytical technology has changed dramatically over the last decade, with more powerful and less expensive distributed computing across commodity servers, streaming analytics, and improved machine learning technologies, enabling companies to store and analyze both far more data and many different types of it. According to &lt;a href=&quot;https://www.gartner.com/newsroom/id/3812063&quot;&gt;Gartner&lt;/a&gt;, over the next few years, virtually every app, application, and service will incorporate some level of AI or machine learning.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/ml-examples-1611298030328.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Microservices&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The &lt;a href=&quot;https://martinfowler.com/articles/microservices.html&quot;&gt;microservice architectural style&lt;/a&gt; is an approach to developing an application as a suite of small, independently deployable services built around specific business capabilities.&lt;/p&gt;
&lt;p&gt;A monolithic application puts all of its functionality into a single process; scaling requires replicating the whole application, which has limitations. With microservices, functionality is put into separate services, allowing these services to be distributed and replicated across servers.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/microservices-1611298039137.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;http://ostatic.com/blog/q-a-maprs-jack-norris-on-the-impact-of-microservices&quot;&gt;A microservices approach is well-aligned to a typical big data deployment&lt;/a&gt;. You can gain modularity, extensive parallelism, and cost-effective scaling by deploying services across many commodity hardware servers. Microservices modularity facilitates independent updates/deployments and helps to avoid single points of failure, which can help prevent large-scale outages.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Event-Driven Microservices&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;A common architecture pattern combined with microservices is event sourcing using an append-only publish subscribe event stream such as MapR Event Streams (which provides a Kafka API).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/microservices-with-cdp-1611298050238.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;MapR Event Store provides high performance messaging, which can scale to very high throughput levels, easily delivering millions of messages per second on modest hardware. The publish/subscribe Kafka API provides decoupled communications, wherein producers don&apos;t know who subscribes, and consumers don&apos;t know who publishes, making it easy to add new listeners or new publishers without disrupting existing processes.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/kafka-api-integration-1611298061346.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;When you combine these messaging capabilities with the simple concept of microservices, you can greatly enhance the agility with which you build, deploy, and maintain complex data pipelines. Pipelines are constructed by simply chaining together multiple microservices, each of which listens for the arrival of some data, performs its designated task, and optionally publishes its own messages to a topic.&lt;/p&gt;
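&lt;p&gt;That is essentially the whole programming model: listen on one topic, do one job, publish to the next. The sketch below shows one such link in a chain using the kafka-python client against any Kafka-API-compatible stream; the topic names, broker address, and message fields are placeholders, not details from the original article.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import json
from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    &quot;ratings-raw&quot;,                                  # upstream topic (placeholder)
    bootstrap_servers=&quot;broker:9092&quot;,
    group_id=&quot;ratings-enricher&quot;,
    value_deserializer=lambda v: json.loads(v.decode(&quot;utf-8&quot;)),
)
producer = KafkaProducer(
    bootstrap_servers=&quot;broker:9092&quot;,
    value_serializer=lambda v: json.dumps(v).encode(&quot;utf-8&quot;),
)

# One microservice = one loop: consume an event, perform its designated task,
# and optionally publish a new event for the next service in the pipeline.
for event in consumer:
    rating = event.value
    rating[&quot;normalized&quot;] = rating[&quot;stars&quot;] / 5.0    # the designated task
    producer.send(&quot;ratings-enriched&quot;, rating)       # downstream topic (placeholder)
&lt;/code&gt;&lt;/pre&gt;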
&lt;p&gt;Take, for example, an online shopping application&apos;s item rating functionality, as shown in the image below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/online-shopping-application-1611298073294.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;This could be decomposed into the following microservices:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;a service publishes &quot;Rate Item&quot; events to a Topic&lt;/li&gt;
&lt;li&gt;a service reads from the stream and persists a materialized view of the ratings in a NoSQL document datastore&lt;/li&gt;
&lt;li&gt;a browse item ratings service reads from the NoSQL document datastore&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/rating-functionality-decomposed-1611298085852.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;With event-driven microservices, new functionality can easily be added by deploying new services; for example, recommendation, prediction, and fraud detection services, as shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/deploying-new-services-1611298100670.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Development teams can deploy new services or service upgrades more frequently and with less risk, because the production version does not need to be taken offline. Both versions of the service simply run in parallel, consuming new data as it arrives and producing multiple versions of output. Both output streams can be monitored over time; the older version can be decommissioned when it ceases to be useful.&lt;/p&gt;
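&lt;p&gt;With the Kafka API, running two versions side by side needs no special machinery: each version joins its own consumer group, so each receives every new event and writes to its own output topic for comparison. The sketch below illustrates the idea; the topic names, group names, and scoring functions are illustrative placeholders.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import json
from kafka import KafkaConsumer, KafkaProducer

def run_model_version(group_id, output_topic, score):
    # A distinct consumer group receives a full copy of the input stream.
    consumer = KafkaConsumer(
        &quot;transactions&quot;,                       # shared input topic (placeholder)
        bootstrap_servers=&quot;broker:9092&quot;,
        group_id=group_id,
        value_deserializer=lambda v: json.loads(v.decode(&quot;utf-8&quot;)),
    )
    producer = KafkaProducer(
        bootstrap_servers=&quot;broker:9092&quot;,
        value_serializer=lambda v: json.dumps(v).encode(&quot;utf-8&quot;),
    )
    for event in consumer:
        result = {&quot;id&quot;: event.value[&quot;id&quot;], &quot;score&quot;: score(event.value)}
        producer.send(output_topic, result)   # version-specific output topic

# Old and new models consume identical data, each typically in its own process;
# their output streams can then be monitored side by side.
# run_model_version(&quot;scorer-v1&quot;, &quot;scores-v1&quot;, model_v1.predict)   # hypothetical models
# run_model_version(&quot;scorer-v2&quot;, &quot;scores-v2&quot;, model_v2.predict)
&lt;/code&gt;&lt;/pre&gt;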
&lt;p&gt;&lt;strong&gt;Event Streams and Machine Learning Logistics&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Combining event streams with machine learning can handle the logistics of machine learning in a flexible way by:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Making input and output data available to independent consumers&lt;/li&gt;
&lt;li&gt;Managing and evaluating multiple models and easily deploying new models&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/event-streams-and-ml-logistics-1611298114104.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Architectures for these types of applications are discussed in more detail in the &lt;a href=&quot;https://www.scribd.com/document/435442431/Spark2018eBook-pdf&quot;&gt;eBooks&lt;/a&gt; &lt;em&gt;Machine Learning Logistics&lt;/em&gt;, &lt;em&gt;Streaming Architecture&lt;/em&gt;, and &lt;em&gt;Microservices and Containers&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Containers&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;A &lt;a href=&quot;https://www.docker.com/what-container&quot;&gt;container image&lt;/a&gt; packages an entire runtime environment: an application, plus all its dependencies, libraries and other binaries, and configuration files needed to execute the application. Compared to virtual machines, containers have similar resources and isolation benefits, but are more lightweight, because containers virtualize the operating system instead of the hardware. Containers are more portable and efficient; they take up less space, use far fewer system resources, and can be spun up in seconds.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/containers-1611298125788.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;To learn more about containers, click &lt;a href=&quot;https://www.docker.com/what-container&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;DevOps and Containers&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Similar to how the agile software development movement broke down the handoff between business requirements, development, and testing, &lt;a href=&quot;https://en.wikipedia.org/wiki/DevOps&quot;&gt;DevOps&lt;/a&gt; breaks down silos between developers and operations with a collaborative process.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/devops-and-containers-1611298138209.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;To learn more about DevOps, click &lt;a href=&quot;https://en.wikipedia.org/wiki/DevOps&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Containers provide greater efficiency for developers: instead of waiting for operations to provision machines, DevOps teams can quickly package an application into a container and deploy it easily and consistently across different platforms, whether a laptop, a private data center, a public cloud, or hybrid environment.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/with-and-without-containers-1611298147637.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Containers and Microservices&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Containers are perfect for microservices; each service can be packaged, and each instance deployed as a container, providing the following benefits:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Services can be isolated to specific resources&lt;/li&gt;
&lt;li&gt;Each container can be health checked&lt;/li&gt;
&lt;li&gt;Containers can be started upon demand and stopped independently of each other&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Container and Cloud&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The National Institute of Standards and Technology &lt;a href=&quot;https://martinfowler.com/bliki/CloudComputing.html&quot;&gt;defines a cloud&lt;/a&gt; as access to a pool of computing resources that can be rapidly provisioned and made available with four deployment models: private, community, public, and hybrid. With containers, developers can deploy their microservices directly into production without porting efforts. This ability to deploy across different platforms is destined to become much more important in the emerging hybrid IT environment, in which infrastructure is a combination of existing legacy systems, on-premises and off-premises, private cloud and public cloud.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Orchestration of Containers and Cloud&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://kubernetes.io/&quot;&gt;&lt;strong&gt;Kubernetes&lt;/strong&gt;&lt;/a&gt; has been a big step toward making containers mainstream. Kubernetes automates &quot;container orchestration&quot;: deployment, scaling, and management of containerized applications.&lt;/p&gt;
&lt;p&gt;Kubernetes introduced a high-level abstraction layer called a &quot;pod&quot; that enables multiple containers to run on a host machine and share resources without the risk of conflict. A pod can be used to define shared services, like a directory or storage, and expose them to all the containers in the pod.&lt;/p&gt;
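&lt;p&gt;As a concrete sketch of that idea, the snippet below uses the official Kubernetes Python client to create one pod with two containers that share an emptyDir volume. The pod name, images, and paths are placeholders, and a working kubeconfig is assumed.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from kubernetes import client, config

config.load_kube_config()   # assumes a working kubeconfig

shared = client.V1Volume(name=&quot;shared-data&quot;,
                         empty_dir=client.V1EmptyDirVolumeSource())
mount = client.V1VolumeMount(name=&quot;shared-data&quot;, mount_path=&quot;/data&quot;)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name=&quot;demo-pod&quot;),
    spec=client.V1PodSpec(
        # Two containers in one pod share the network namespace and this volume.
        containers=[
            client.V1Container(name=&quot;writer&quot;, image=&quot;busybox&quot;,
                               command=[&quot;sh&quot;, &quot;-c&quot;, &quot;date &gt; /data/out; sleep 3600&quot;],
                               volume_mounts=[mount]),
            client.V1Container(name=&quot;reader&quot;, image=&quot;busybox&quot;,
                               command=[&quot;sh&quot;, &quot;-c&quot;, &quot;sleep 3600&quot;],
                               volume_mounts=[mount]),
        ],
        volumes=[shared],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace=&quot;default&quot;, body=pod)
&lt;/code&gt;&lt;/pre&gt;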
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/kubernetes-1611298156941.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;This simplifies the management of machines and services, enabling a single administrator to manage thousands of containers running simultaneously.&lt;/p&gt;
&lt;p&gt;Kubernetes allows you to orchestrate across on-site deployments to public or private clouds and to hybrid deployments in between. On-premises computation is also moving quickly to containerized orchestration, and when you can interchangeably schedule services anywhere, you have a real revolution.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;DataOps&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Just as the broader IT world has embraced the concept of DevOps, which uses new technologies and processes to bring application developers and operations together in a cohesive and mutually beneficial manner, the data world today is moving toward DataOps. DataOps is an emerging practice utilized by large organizations with teams of data scientists, developers, and other data-focused roles that train machine learning models and deploy them to production. The goal of using a DataOps methodology is to create an agile, self-service workflow that fosters collaboration and boosts creativity while respecting data governance policies. A DataOps practice supports cross-functional collaboration and fast time-to-value. It is characterized by processes as well as the use of enabling technologies, such as the MapR Data Platform.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/dataops-1611298166619.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Combining microservices, containers, and event streams with DataOps makes managing and evaluating multiple models and easily deploying new models more efficient and agile.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/continuous-model-deployment-1611298180998.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;IoT, Edge Computing, Machine Learning, and the Cloud&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;From automobile manufacturers to oil and gas companies, businesses across the globe seek to derive real business value from outcomes like predicting equipment failures, avoiding accidents, improving diagnostics, and more. There is a growing requirement for edge computing, which brings analytics and machine learning models close to IoT data sources. What makes &lt;a href=&quot;https://www.cbronline.com/feature/edge-computing-artificial-intelligence-iot&quot;&gt;Edge different&lt;/a&gt; is the ability to enable real-time analytics, leveraging local compute for running and feeding machine learning models. In the world of IoT, fast analytics is essential for anomaly detection, fraud detection, aircraft monitoring, oil rig monitoring, manufacturing monitoring, utility monitoring, and health sensor monitoring, where alerts may need to be acted upon rapidly. Imagine how, if machine learning had detected the BP valve pressure anomaly before the Deepwater Horizon explosion in the Gulf of Mexico, &lt;a href=&quot;https://en.wikipedia.org/wiki/Deepwater_Horizon_explosion&quot;&gt;the largest environmental disaster in U.S. history could have been avoided&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/cloud-to-the-edge-1611298192469.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.gartner.com/newsroom/id/3812063&quot;&gt;Cloud to the Edge&lt;/a&gt;, also called &lt;a href=&quot;https://www.cbronline.com/in-depth/iot-fog-computing-cio&quot;&gt;Fog&lt;/a&gt;, is one of Gartner&apos;s top technology trends for 2018, in which a cloud service-oriented model is combined with edge computing for distributed processing that spans the continuum between the cloud and edge. Ted Dunning, Chief Application Architect at MapR, predicts that we will see a full-scale data fabric extend right to the edge next to devices, and, in some cases, we will see threads of the fabric extend right into the devices themselves.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;A confluence of several different technology shifts has dramatically changed the way that applications are being built. The combination of machine learning, event-driven microservices, containers, DataOps, and cloud to edge computing is accelerating the development of next-generation intelligent applications, which are taking advantage of modern computational paradigms, powered by modern computational infrastructure. The MapR Data Platform integrates global event streaming, real-time database capabilities, and scalable enterprise storage with a collection of data processing and analytical engines to power this new generation of data processing pipelines and intelligent applications.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/mapr-cdp-1611298203271.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Want to learn more?&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://thenewstack.io/microservices-running-containers-need-streaming-platform/&quot;&gt;Why Microservices Running in Containers Need a Streaming Platform&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.gartner.com/newsroom/id/3812063&quot;&gt;Gartner Identifies the Top 10 Strategic Technology Trends for 2018&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.gartner.com/smarterwithgartner/gartner-top-10-strategic-technology-trends-for-2018/&quot;&gt;Gartner Top 10 Strategic Technology Trends for 2018&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Association Rule Mining – Not Your Typical Data Science Algorithm]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may…]]></description><link>https://developer.hpe.com/association-rule-mining-not-your-typical-data-science-algorithm/</link><guid isPermaLink="false">https://developer.hpe.com/association-rule-mining-not-your-typical-data-science-algorithm/</guid><pubDate>Fri, 22 Jan 2021 06:28:06 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Kirk Borne&quot;,
&quot;publish&quot;: &quot;2014-04-28T07:00:00.000Z&quot;,
&quot;tags&quot;: &quot;machine-learning&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;Many machine learning algorithms that are used for data mining and data science work with numeric data. And many algorithms tend to be very mathematical (such as Support Vector Machines). But, association rule mining is perfect for categorical (non-numeric) data and it involves little more than simple counting! That’s the kind of algorithm that &lt;a href=&quot;http://codecapsule.com/2010/04/15/efficient-counting-mapreduce/&quot;&gt;MapReduce is really good at&lt;/a&gt;, and it can also lead to some really interesting discoveries.&lt;/p&gt;
&lt;p&gt;Association rule mining is primarily focused on finding frequent co-occurring associations among a collection of items. It is sometimes referred to as “Market Basket Analysis”, since that was the original application area of association mining. The goal is to find associations of items that occur together more often than you would expect from a random sampling of all possibilities. The classic example of this is the famous Beer and Diapers association that is often mentioned in data mining books. The story goes like this: men who go to the store to buy diapers will also tend to buy beer at the same time. Let us illustrate this with a simple example.&lt;/p&gt;
&lt;p&gt;Suppose that a store’s retail transactions database includes the following information:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;There are 600,000 transactions in total.&lt;/li&gt;
&lt;li&gt;7,500 transactions contain diapers (1.25 percent)&lt;/li&gt;
&lt;li&gt;60,000 transactions contain beer (10 percent)&lt;/li&gt;
&lt;li&gt;6,000 transactions contain &lt;u&gt;both&lt;/u&gt; diapers and beer (1.0 percent)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If there was no association between beer and diapers (i.e., they are statistically independent), then we expect only 10% of diaper purchasers to also buy beer (since 10% of all customers buy beer). However, we discover that 80% (=6000/7500) of diaper purchasers also buy beer. This is a factor of 8 increase over what was expected – that is called Lift, which is the ratio of the observed frequency of co-occurrence to the expected frequency. This was determined simply by counting the transactions in the database. So, in this case, the association rule would state that diaper purchasers will also buy beer with a Lift factor of 8.&lt;/p&gt;
&lt;p&gt;In statistics, Lift is simply estimated by the ratio of the joint probability of two items x and y, divided by the product of their individual probabilities:  Lift = P(x,y)/[P(x)P(y)]. If the two items are statistically independent, then P(x,y)=P(x)P(y), corresponding to Lift = 1 in that case. Note that anti-correlation yields Lift values less than 1, which is also an interesting discovery – corresponding to mutually exclusive items that rarely co-occur together.&lt;/p&gt;
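&lt;p&gt;The arithmetic really is just counting. Plugging the numbers from the example above into the Lift formula (a minimal sketch in Python):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;total   = 600_000   # all transactions
diapers =   7_500   # transactions containing diapers
beer    =  60_000   # transactions containing beer
both    =   6_000   # transactions containing both diapers and beer

# Lift = P(x,y) / (P(x) * P(y)), which reduces to (both * total) / (diapers * beer)
lift       = (both * total) / (diapers * beer)
confidence = both / diapers        # share of diaper buyers who also buy beer

print(lift, confidence)            # 8.0 0.8
&lt;/code&gt;&lt;/pre&gt;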
&lt;p&gt;The above simple example was made up, and it is very rare in real world cases to have Lift factors as high as 8. But, there was a case where it did happen. That case was discovered by Walmart in 2004 when a series of hurricanes crossed the state of Florida. After the first hurricane, there were several more hurricanes seen in the Atlantic Ocean heading toward Florida, and so &lt;a href=&quot;http://www.nytimes.com/2004/11/14/business/yourmoney/14wal.html&quot;&gt;Walmart mined their massive retail transaction database&lt;/a&gt; to see what their customers really wanted to buy prior to the arrival of a hurricane. They found one particular item that increased in sales by a factor of 7 over normal shopping days. That was a huge Lift factor for a real-world case. That one item was not bottled water, or batteries, or beer, or flashlights, or generators, or any of the usual things that we might imagine. The item was &lt;a href=&quot;http://www.hurricaneville.com/pop_tarts.html&quot;&gt;strawberry pop tarts&lt;/a&gt;! One could imagine lots of reasons why this was the most desired product prior to the arrival of a hurricane – pop tarts do not require refrigeration, they do not need to be cooked, they come in individually wrapped portions, they have a long shelf life, they are a snack food, they are a breakfast food, kids love them, and we love them. Despite these “obvious” reasons, it was still a huge surprise! And so Walmart stocked their stores with tons of strawberry pop tarts prior to the next hurricanes, and they sold out of them. That is a win-win: Walmart wins by making the sale, and customers win by getting the product that they most want.&lt;/p&gt;
&lt;p&gt;Another example of association mining was provided to me by a colleague of mine at George Mason University. He is a professor of geoinformation systems and earth science. He used this algorithm to examine the characteristics of hurricanes (internal wind speed, atmospheric pressure in the eye of the hurricane, wind shear, rainfall amounts, direction and propagation speed of the hurricane, etc.​), and he found a strong association between the final strength (category) of the hurricane and the values of those different characteristics. He was able to predict hurricane intensification and its ultimate strength more accurately with association mining than the standard hurricane model used by the national hurricane center. That was an amazing application of an algorithm that was initially developed for retail store transaction mining.&lt;/p&gt;
&lt;p&gt;There was an equally impressive scientific application that I encountered several years ago when I was working at NASA. Every summer we would have student interns working with us. These students were usually college undergraduates, and they were always very bright. In one of those summers, we had a student who was not yet a senior in high school. He heard me give a lunch talk on data mining, which I presented to the full cohort of summer interns that year. He was working on a project with a NASA space physicist to try to predict when solar energetic particles would reach the earth after the occurrence of a major solar storm on the Sun. He decided to try association mining. What he did was very clever. Similar to the hurricane example mentioned above, he collected characteristics of the solar storms on the Sun and geomagnetic events around the earth (as measured by NASA’s satellites) to look for predictive patterns. But the special thing that he did was to look at time-shifted data values. For example, he compared events on the Sun with geo events with time lags of 1 hour, 2 hours, 3 hours, 4 hours, etc. in order to see when the peak correlation (association!) occurred. He found it – the strongest geomagnetic effects were measured around the earth at approximately 2-3 hours after the solar storm. His NASA mentor called me into his office to show me the amazing discovery by this high school junior, using the simple techniques that I taught him in my lunchtime seminar. We were all quite impressed!&lt;/p&gt;
&lt;p&gt;The above examples illustrate two very useful approaches when mining your own big data collections: (1) search for rare and unusual co-occurring associations of non-numeric items (which then makes for powerful predictive analytics); and (2) if you have time-based data, consider the effects of introducing a time lag in your data mining experiments to see if the strength of the correlation reaches its peak some time later. My final example of association rule mining was discovered many years ago by a major electronics store that sold video cameras and video (VHS) players/recorders. They mined their retail customer database and found that customers who buy a VHS player/recorder tend to come back to the store about 3-4 months later to buy a video camera (camcorder). The store then used this information to send discount coupons for camcorders to all of its customers who bought VHS player/recorders a few months earlier, in order to bring those customers back into the store to purchase a camcorder. Apparently, this customer engagement program worked! And its success was due to association rule mining. With the massive quantities of big data that are now available, and with powerful technologies to perform analytics on those data, one can only imagine what surprising and useful associations are waiting to be discovered that can boost your bottom line. Start counting!&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Carol McDonald&quot;,
&quot;publish&quot;: &quot;2015-04-09T07:00:00.000Z&quot;,
&quot;tags&quot;: &quot;machine-learning&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;Recommendation engines help narrow your choices to those that best meet your particular needs. In this post, we’re going to take a closer look at how all the different components of a recommendation engine work together. We’re going to use collaborative filtering on movie ratings data to recommend movies. The key components are a collaborative filtering algorithm in &lt;a target=&apos;_blank&apos; href=&apos;http://mahout.apache.org/&apos;&gt;Apache Mahout&lt;/a&gt; to build and train a machine learning model and search technology from &lt;a target=&apos;_blank&apos; href=&apos;https://www.elastic.co/elasticsearch/&apos;&gt;Elasticsearch&lt;/a&gt; to simplify deployment of the recommender.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/recommendation-engine-video-1611295605722.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;What is Recommendation?&lt;/h2&gt;
&lt;p&gt;Recommendation is a class of machine learning that uses data to predict a user&apos;s preference for or rating of an item.  Recommender systems are used in industry to recommend:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Books and other products (e.g. Amazon)&lt;/li&gt;
&lt;li&gt;Music (e.g. Pandora)&lt;/li&gt;
&lt;li&gt;Movies (e.g. Netflix)&lt;/li&gt;
&lt;li&gt;Restaurants (e.g. Yelp)&lt;/li&gt;
&lt;li&gt;Jobs (e.g. LinkedIn)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/netflix-recommendation-engine-1611295620220.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The recommender relies on the following observations:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Behavior of users is the best clue to what they want.&lt;/li&gt;
&lt;li&gt;Co-occurrence is a simple basis that allows Apache Mahout to compute significant indicators of what should be recommended.&lt;/li&gt;
&lt;li&gt;There are similarities between the weighting of indicator scores in output of such a model and the mathematics that underlie text retrieval engines.&lt;/li&gt;
&lt;li&gt;This mathematical similarity makes it possible to exploit text-based search to deploy a Mahout recommender using a search engine like Elasticsearch.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/recommendation-engine-architecture-1611295634594.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Architecture of the Recommendation Engine&lt;/h2&gt;
&lt;p&gt;The architecture of the recommendation engine is shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/architecture-recommendation-engine-1611295647852.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Movie information data is reformatted and then stored in Elasticsearch for searching.&lt;/li&gt;
&lt;li&gt;An item-similarity algorithm from Apache Mahout is run with user movie ratings data to create recommendation indicators for movies. These indicators are added to the movie documents in Elasticsearch.  &lt;/li&gt;
&lt;li&gt;Searches of a user&apos;s preferred movies among the indicators of other movies will return a list of new films sorted by relevance to the user&apos;s taste.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Collaborative Filtering with Mahout&lt;/h2&gt;
&lt;p&gt;A Mahout-based collaborative filtering engine looks at what users have historically done and tries to estimate what they are likely to do in the future, given the chance. This is accomplished by looking at a history of which items users have interacted with. In particular, Mahout looks at how items co-occur in user histories. Co-occurrence is a simple basis that allows Apache Mahout to compute significant indicators of what should be recommended. Suppose that Ted likes movies A, B, and C. Carol likes movies A and B. To recommend a movie to Bob, we can note that since he likes movie B and since Ted and Carol also liked movie B, movie A is a possible recommendation. Of course, this is a tiny example. In real situations, we would have vastly more data to work with.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/recommendation-grid-1611295660227.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In order to get useful indicators for recommendation, Mahout’s ItemSimilarity program builds three matrices from the user history:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;1. History matrix:&lt;/strong&gt;  contains the interactions between users and items as a user-by-item binary matrix.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/history-matrix-1611295671003.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;2. Co-occurrence matrix:&lt;/strong&gt;  transforms the history matrix into an item-by-item matrix, recording which items co-occur or appear together in user histories.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/co-occurrence-matrix-1611295696721.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In this example movie A and movie B co-occur once, while movie A and movie C co-occur twice. The co-occurrence matrix cannot be used directly as recommendation indicators because very common items will tend to occur with lots of other items simply because they are common.  &lt;/p&gt;
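&lt;p&gt;The co-occurrence step itself is plain matrix arithmetic: if H is the binary user-by-item history matrix, then the product of the transpose of H with H is the item-by-item co-occurrence matrix. A tiny, made-up example (not the dataset pictured above) illustrates this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import numpy as np

# Rows = users, columns = movies A, B, C; 1 means the user interacted with that movie.
H = np.array([
    [1, 1, 1],   # user 1 interacted with A, B, C
    [1, 1, 0],   # user 2 interacted with A, B
    [0, 1, 0],   # user 3 interacted with B
])

# Item-by-item co-occurrence: entry (i, j) counts users who interacted with both i and j.
cooccurrence = H.T @ H
print(cooccurrence)
# [[2 2 1]
#  [2 3 1]
#  [1 1 1]]
&lt;/code&gt;&lt;/pre&gt;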
&lt;p&gt;&lt;strong&gt;3. Indicator matrix:&lt;/strong&gt; The indicator matrix retains only the anomalous (interesting) co-occurrences that will serve as clues for recommendation. Some items (in this case, movies) are so popular that almost everyone likes them, meaning they will co-occur with almost every item, which makes them less interesting (anomalous) for recommendations.  Co-occurrences that are too sparse to understand are also not anomalous and thus are not retained. In this example, movie A is an indicator for movie B.    &lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/indicator-matrix-1611295706790.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Mahout runs multiple MapReduce jobs to calculate the co-occurrences of items in parallel. (Mahout 1.0 runs on Apache Spark). Mahout’s ItemSimilarityJob uses the log likelihood ratio test (LLR) to determine which co-occurrences are sufficiently anomalous to be of interest as indicators. The output gives pairs of items with a similarity greater than the threshold you provide.&lt;/p&gt;
&lt;p&gt;The output of the Mahout ItemSimilarity job gives items that identify interesting co-occurrences, or that indicate recommendation, for each item. For example, the Movie B row shows Movie A is indicated, and this means that liking Movie A is an indicator that you will like Movie B.  &lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/indicator-matrix-2-1611295716322.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Elasticsearch Search Engine&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/elasticsearch-search-engine-1611295728868.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Elasticsearch is an open-source search engine built on top of Apache Lucene™, a full-text search engine library. Full-text search uses precision and recall to evaluate search results:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Precision = proportion of top-scoring results that are relevant&lt;/li&gt;
&lt;li&gt;Recall = proportion of relevant results that are top-scoring&lt;/li&gt;
&lt;/ul&gt;
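<p></p>&lt;p&gt;As a quick, made-up illustration of those two measures, suppose a query returns five top-scoring documents and the collection contains four documents that are actually relevant:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;returned = {&quot;doc1&quot;, &quot;doc2&quot;, &quot;doc3&quot;, &quot;doc4&quot;, &quot;doc5&quot;}   # top-scoring results
relevant = {&quot;doc2&quot;, &quot;doc4&quot;, &quot;doc6&quot;, &quot;doc7&quot;}           # documents that are truly relevant

hits = returned &amp; relevant              # relevant results that were actually returned

precision = len(hits) / len(returned)   # 2 / 5 = 0.4
recall    = len(hits) / len(relevant)   # 2 / 4 = 0.5

print(precision, recall)                # 0.4 0.5
&lt;/code&gt;&lt;/pre&gt;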
&lt;p&gt;Elasticsearch stores documents, which are made up of different fields. Each field has a name and content. Fields can be indexed and stored to allow documents to be found by searching for content found in fields.&lt;/p&gt;
&lt;p&gt;For our recommendation engine, we store movie meta data such as id, title, genre, and also movie recommendation indicators, in a JSON document:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
&quot;id&quot;: &quot;65006&quot;,
&quot;title&quot;: &quot;Electric Horseman&quot;,
&quot;year&quot;: &quot;2008&quot;,
&quot;genre&quot;: [&quot;Mystery&quot;,&quot;Thriller&quot;]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The output row from the indicator matrix that identified significant or interesting co-occurrence is stored in the Elasticsearch movie document indicator field. For example, since Movie A is an indicator for Movie B, we will store Movie A in the indicator field in the document for Movie B. That means that when we search for movies with Movie A as an indicator, we will find Movie B and present it as a recommendation.&lt;/p&gt;
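&lt;p&gt;In code, that amounts to adding an indicators field to each movie document and then querying that field with a user&apos;s history. Below is a rough sketch using the official Python client; the index name, field names, document IDs, and the 8.x client style are assumptions for illustration, not details from the original article.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from elasticsearch import Elasticsearch

es = Elasticsearch(&quot;http://localhost:9200&quot;)   # placeholder address; 8.x client assumed

# Store Movie B with Movie A recorded as one of its recommendation indicators.
es.index(index=&quot;movies&quot;, id=&quot;movieB&quot;, document={
    &quot;title&quot;: &quot;Movie B&quot;,
    &quot;genre&quot;: [&quot;Drama&quot;],
    &quot;indicators&quot;: [&quot;movieA&quot;],
})

# A user who liked Movie A and Movie X: search the indicator field with that history.
results = es.search(index=&quot;movies&quot;, query={
    &quot;match&quot;: {&quot;indicators&quot;: &quot;movieA movieX&quot;}
})
for hit in results[&quot;hits&quot;][&quot;hits&quot;]:
    print(hit[&quot;_source&quot;][&quot;title&quot;], hit[&quot;_score&quot;])   # Movie B is returned, ranked by relevance
&lt;/code&gt;&lt;/pre&gt;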
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2021/1/recommendation-matrix-1-1611295740359.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Search engines are optimized to find a collection of fields by similarity to a query. We will use the search engine to find movies with the most similar indicator fields to a query.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE InfoSight Northbound APIs: Bringing the Power of InfoSight Home]]></title><description><![CDATA[infosight There is just so much to keep tabs on, isn’t there? There are the tools you use to monitor your Data Center. Then, there are the…]]></description><link>https://developer.hpe.com/hpe-infosight-northbound-apis-bringing-the-power-of-infosight-home/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-infosight-northbound-apis-bringing-the-power-of-infosight-home/</guid><pubDate>Wed, 20 Jan 2021 11:20:19 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/infosight-1611162508158.jpg&quot; alt=&quot;infosight&quot;&gt;&lt;/p&gt;
&lt;p&gt;There is just so much to keep tabs on, isn’t there? There are the tools you use to monitor your Data Center. Then, there are the support &amp;#x26; case management and other IT service management (ITSM) &amp;#x26; BI tools that you are using such as Service Now, not to mention the complexity introduced when your infrastructure stack consists of products from different companies.  It costs you time and effort to constantly have to manually consolidate information from all of these different places. And, with so many tools each having their own stack, your IT environment can grow quite siloed and you can miss out on critical information related to your infrastructure, which can cost you a lot of money. This can all be quite nightmarish and those of us who developed HPE InfoSight understand that.&lt;/p&gt;
&lt;p&gt;Using HPE InfoSight as an integral part of your cloud-based AI Ops and monitoring solution stack, you now have the ability to pull all the critical information regarding the health and wellness of your HPE infrastructure directly from InfoSight into any of your ITSM/BI/Home grown tools such as Service Now. You can use it to apply your own business logic &amp;#x26; rules to optimize your work and the environment, quickly and easily. HPE InfoSight does this through harnessing the power of REST APIs in our new offering: “InfoSight North Bound APIs for Nimble Storage”. This product is built on top of the case automation system and integrated with the Wellness Dashboard in HPE InfoSight.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The current offering:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;HPE InfoSight North Bound APIs for Nimble Storage consists of a set of two public API endpoints for programmatic access to HPE InfoSight Wellness Issue details, enabling synchronization with service desks and other operational tools. &lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;/Issues:&lt;/strong&gt; Retrieve a list of wellness issues with filters for severity, date range, etc.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;/Issue:&lt;/strong&gt; Get details for a specific wellness issue&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;It also provides a complementary Service Now Integration Configuration Pack to easily integrate the APIs for those of you who use Service Now. &lt;/p&gt;
&lt;p&gt;&lt;strong&gt;How to access the APIs:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;It is super easy and fast to integrate these APIs into your environment.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Log in to &lt;a href=&quot;https://infosight.hpe.com/&quot;&gt;https://infosight.hpe.com&lt;/a&gt;. Note: Admin role is required.&lt;/li&gt;
&lt;li&gt;Select the correct Organization and navigate to Settings/API Access&lt;/li&gt;
&lt;li&gt;Add a new Application, select the Wellness API, and generate a Client Key and Client Secret pair. These will be required when making Wellness API calls.&lt;/li&gt;
&lt;li&gt;Integrate Wellness APIs into your application.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;You can learn more about the technical specifications by checking out our &lt;a href=&quot;https://infosight.hpe.com/InfoSight/media/cms/active/public/pubs_HPE_infosight_wellness_spec.pdf&quot;&gt;technical specification document&lt;/a&gt;.&lt;/p&gt;
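&lt;p&gt;If you are not using the Service Now pack, any HTTP client can call these endpoints. The sketch below only illustrates the general shape of such an integration; the base URL, authentication headers, query parameter names, and response fields shown here are assumptions, and the technical specification linked above is the authoritative reference.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import requests

# Every value below is a placeholder; the real base URL, authentication handshake,
# and parameter names are defined in the Wellness API technical specification.
BASE_URL = &quot;https://infosight.hpe.com/api/wellness&quot;
HEADERS  = {&quot;X-Client-Key&quot;: &quot;your-client-key&quot;, &quot;X-Client-Secret&quot;: &quot;your-client-secret&quot;}

# /issues: retrieve a list of wellness issues, filtered by severity and date range.
issues = requests.get(f&quot;{BASE_URL}/issues&quot;, headers=HEADERS,
                      params={&quot;severity&quot;: &quot;critical&quot;, &quot;startDate&quot;: &quot;2021-01-01&quot;}).json()

# /issue: get the details for one specific wellness issue.
for issue in issues.get(&quot;issues&quot;, []):
    detail = requests.get(f&quot;{BASE_URL}/issue&quot;, headers=HEADERS,
                          params={&quot;id&quot;: issue[&quot;id&quot;]}).json()
    print(detail)
&lt;/code&gt;&lt;/pre&gt;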
&lt;p&gt;&lt;strong&gt;What’s in store for the future:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Right now, this offering is available for Nimble Storage and allows access to Wellness Information. We plan on extending this offering to other partner products, as well as add performance and asset metrics, in the future.&lt;/p&gt;
&lt;p&gt;Go check out these APIs. We would love to hear what you think. Connect with us on &lt;a href=&quot;https://app.slack.com/client/T5SNJCC7K/GLWKH9CG5&quot;&gt;HPE DEV Slack workspace&lt;/a&gt; to share your feedback.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Making AI a Reality ]]></title><description><![CDATA[data drive ai Main body of this blog post was originally published on H2O.ai blog. Published here with permission. Do you want to make AI a…]]></description><link>https://developer.hpe.com/making-ai-a-reality/</link><guid isPermaLink="false">https://developer.hpe.com/making-ai-a-reality/</guid><pubDate>Fri, 15 Jan 2021 07:48:29 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/data-drive-ai-1610724616865.JPG&quot; alt=&quot;data drive ai&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Main body of this blog post was originally published on &lt;a href=&quot;https://www.h2o.ai/blog/making-ai-a-reality/&quot;&gt;H2O.ai blog&lt;/a&gt;. Published here with permission.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Do you want to make AI a part of your company? You can’t just mandate AI. But you can lead by example. &lt;/p&gt;
&lt;p&gt;All too often, especially in companies new to AI and machine learning, team leaders may be tasked by their managers to “start using AI” without having clear goals set about how AI may help the business or how to build an effective AI team. It’s understandable that executives want to take advantage of the great potential value they see that AI and machine learning are delivering for other companies, but successfully putting AI to work in your own enterprise requires more than passing down the order to do it. Whether you are the executive choosing to add AI to your business or a team leader responsible for making it so, be aware that there are &lt;a href=&quot;https://www.h2o.ai/blog/in-a-world-wh&quot;&gt;practical steps to put in place&lt;/a&gt; before you plunge into hiring someone to make models. One of the first practical steps to develop and implement AI is to build a data-aware culture within your organization. &lt;/p&gt;
&lt;h2&gt;Data-Driven Choices&lt;/h2&gt;
&lt;p&gt;One of the challenges in developing serious data skills is to get buy-in about the importance of data-driven decisions. People with extensive experience in their particular business sector hold valuable insights – knowledge that is important to the success of AI projects built to address essential business goals. But this experience may result in people relying on “gut feeling” about business processes, predicted performance and the potential for new lines of business. A key step in building the data-aware culture needed to support AI systems is to develop an appreciation among stakeholders in your organization &lt;strong&gt;for what data can tell you.&lt;/strong&gt;  Show - rather than tell - how data can augment, reinforce or refute their experienced view or gut feeling. &lt;/p&gt;
&lt;p&gt;One way to do this is to &lt;a href=&quot;https://www.h2o.ai/blog/the-benefits-of-budget-allocation-with-ai-driven-marketing-mix-models/&quot;&gt;make use of data in making your own decisions&lt;/a&gt; and to provide transparency to your teams about how data is influencing your decisions. Develop the habit of asking, “What do we know and &lt;strong&gt;how do we know it?&lt;/strong&gt;” This habit may involve collecting more metrics about essential business processes than you already do, but more often it means making use of the data you already have. Help people within your organization to see data as a way to expand understanding, not just as a report card on their own performance. &lt;/p&gt;
&lt;p&gt;Data sources have proliferated in the last few years – data is literally everywhere. This variety of data provides a rich resource for AI and machine learning. You also can raise awareness within your organization of the wide range of data sources you have available and how they can inform you not only about your own business processes but also about the behavior of your customers. &lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/ellen-ai-picture1-1610696869470.png&quot; alt=&quot;ellen ai picture1&quot;&gt;&lt;/p&gt;
&lt;p&gt;Using automated, machine-based learning systems makes it feasible to make decisions based on data at a level and speed that may not be practical for humans to do. This, in turn, lets you take advantage of what data tells you about conditions in the world around you, and how they affect your business. It is also the best way to inform you about how your business needs to adjust to changes in the world, a situation underlined dramatically by the &lt;a href=&quot;https://www.h2o.ai/covid-19/&quot;&gt;COVID pandemic&lt;/a&gt;. &lt;/p&gt;
&lt;p&gt;Why is it important to get buy-in from stakeholders? One of the biggest barriers to building effective AI systems is to set the business context for AI in realistic ways. Getting buy-in from stakeholders is important to ensure cooperation in &lt;a href=&quot;https://www.h2o.ai/webinars/?commid=433866&quot;&gt;defining business goals for AI&lt;/a&gt; and for ensuring adequate resources are assigned to develop, implement and maintain AI systems. &lt;/p&gt;
&lt;h2&gt;AI is a Team Sport&lt;/h2&gt;
&lt;p&gt;Getting up-to-speed with AI may require hiring talented data scientists or selecting the right company to provide data science as a service. But whether you build an in-house data science team of AI and machine learning experts or work with external talent, there’s more to building an AI system than just the specialists who code and train the models.   &lt;/p&gt;
&lt;p&gt;Who needs to be on your list of AI talent? Data engineers are a critical part of the success of AI and machine learning projects. In fact, it’s essential to budget a major part of the time and resources allotted to a particular AI project to the effort needed to handle the logistics, from data preparation for training sets to &lt;a href=&quot;https://www.h2o.ai/blog/deploying-models-to-maximise-the-impact-of-machine-learning-part-1/&quot;&gt;deployment and management of AI models.&lt;/a&gt; The following figure shows the relative effort of model building to all the other parts of a machine learning project.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/ellen-ai-picture2-1610696879470.png&quot; alt=&quot;ellen ai picture2&quot;&gt;&lt;/p&gt;
&lt;p&gt;This requirement for many steps surrounding the specific model building process creates a need for a range of skills beyond the ability to work with algorithms. It’s important, then, to think of AI as a team sport. &lt;/p&gt;
&lt;h2&gt;DataOps helps bring AI to life&lt;/h2&gt;
&lt;p&gt;Making AI practical and profitable for your business requires the cooperation and collaboration of people with different skills. One of the best ways to achieve this is through a DataOps approach in which you assemble a cross-skill team that has members collectively focused on a shared goal. That style of work improves intra-team communication and avoids the sense that asking someone with a particular specialized skill to do their part is “asking a favor” or imposing on their time. Instead, people are more apt to work together efficiently when they share a goal and understand what needs to be done. &lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/ellen-ai-picture3-1610696885974.png&quot; alt=&quot;ellen ai picture3&quot;&gt;&lt;/p&gt;
&lt;p&gt;Technology can help with this team approach as well. Tools that improve &lt;a href=&quot;https://www.h2o.ai/blog/interview-with-patrick-hall-machine-learning-h2o-ai-machine-learning-interpretability/&quot;&gt;machine learning interpretability&lt;/a&gt; make it easier for data scientists to communicate clearly to others how models are making decisions and how data is influencing those decisions. The &lt;a href=&quot;https://www.hpe.com/us/en/software/marketplace/h2o.html&quot;&gt;H2O Driverless AI&lt;/a&gt; platform is an example of technology that not only makes it easier to develop AI models but also easier to explain them. &lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/ellen-ai-picture4-1610696892273.png&quot; alt=&quot;ellen ai picture4&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Next Steps&lt;/h2&gt;
&lt;p&gt;Building effective AI systems also requires efficient access to data. AI is only as good as the data used to train its models, and &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;HPE Ezmeral Data Fabric&lt;/a&gt; provides multi-API direct global access to data. H2O Driverless AI, for example, can access data stored in the HPE Ezmeral Data Fabric &lt;em&gt;directly&lt;/em&gt;, without having to copy it out to another system. Furthermore, HPE Ezmeral Data Fabric is the core data layer in the &lt;a href=&quot;https://www.hpe.com/us/en/software/ezmeral-runtime.html&quot;&gt;HPE Ezmeral Runtime Enterprise&lt;/a&gt; for cloud-native and distributed non-cloud-native applications. Containerization of AI applications is a big advantage, making it possible to train, deploy, and run AI models in predictable environments that you can easily control. H2O Driverless AI has been tested and &lt;a href=&quot;https://www.hpe.com/us/en/software/marketplace/h2o.html&quot;&gt;validated for compatibility&lt;/a&gt; with the HPE Ezmeral Container Platform.&lt;/p&gt;
&lt;p&gt;Another offering in the HPE Ezmeral Software Portfolio, &lt;a href=&quot;https://www.hpe.com/us/en/solutions/machine-learning-operations.html&quot;&gt;HPE Ezmeral ML Ops&lt;/a&gt;, also works together with H2O Driverless AI to make data science more efficient through automatic machine learning.  &lt;/p&gt;
&lt;p&gt;Find out more about these AI-enabling technologies, including how you can sign up for a 21-day free trial of H2O Driverless AI, by visiting the &lt;a href=&quot;https://www.hpe.com/us/en/software/marketplace/h2o.html&quot;&gt;HPE Ezmeral Marketplace&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;For additional information about building a data-aware culture, as well as other practical steps in bringing AI into your business, read the free ebook &lt;em&gt;&lt;a href=&quot;https://www.h2o.ai/resources/ebook/practical-advice-for-making-ai-part-of-your-companys-future/&quot;&gt;Practical Advice for Making AI Part of Your Company’s Future&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;And please check out the &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE DEV blog&lt;/a&gt; for more articles on developer-focused topics.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Synchronized Volume Snapshots for Distributed Workloads on Kubernetes]]></title><description><![CDATA[Typically, Persistent Volume Claims on Kubernetes are treated as a singular entity completely decoupled from your workload. The actual…]]></description><link>https://developer.hpe.com/synchronized-volume-snapshots-for-distributed-workloads-on-kubernetes/</link><guid isPermaLink="false">https://developer.hpe.com/synchronized-volume-snapshots-for-distributed-workloads-on-kubernetes/</guid><pubDate>Thu, 14 Jan 2021 18:10:48 GMT</pubDate><content:encoded>&lt;p&gt;Typically, Persistent Volume Claims on Kubernetes are treated as a singular entity completely decoupled from your workload. The actual physical location doesn&apos;t really matter. But what if you wanted an atomic operation where all Persistent Volume Claims that make up an application in a microservice architecture need to be atomically protected to ensure referential integrity? Would you stop the application, sequence the operation or take a shotgun approach and hope for the best?&lt;/p&gt;
&lt;p&gt;In this blog post, we&apos;ll use the HPE CSI Driver for Kubernetes to create Volume Groups that allow users to group Persistent Volume Claims together and use those Volume Groups to perform CSI Volume Snapshots through Snapshot Groups.&lt;/p&gt;
&lt;p&gt;In other storage infrastructure management systems, what the HPE CSI Driver calls &quot;Volume Groups&quot; is usually referred to as &quot;Consistency Groups&quot;, the industry-standard way to create volume snapshots with referential integrity. This capability was introduced in the HPE CSI Driver for Kubernetes v1.4.0 and more information about the release may be found on &lt;a href=&quot;https://community.hpe.com/t5/Around-the-Storage-Block/HPE-CSI-Driver-for-Kubernetes-v1-4-0-with-expanded-ecosystem-and/ba-p/7118180&quot;&gt;Around The Storage Block&lt;/a&gt;.&lt;/p&gt;
&lt;h1&gt;TL;DR&lt;/h1&gt;
&lt;p&gt;A variant of the steps demonstrated below has been captured in a screencast available on YouTube. If you prefer watching and listening instead of reading, please go ahead and watch the screencast.&lt;/p&gt;
&lt;p&gt;Just don’t forget to come back to read the &quot;Learn more&quot; section at the end of this article for important information.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://youtu.be/zUj-bJ_KqHU&quot; title=&quot;Synchronize Volume Snapshots for Distributed Workloads using the HPE CSI Driver for Kubernetes&quot;&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/untitled-1610648109128.png&quot; alt=&quot;Synchronize Volume Snapshots for Distributed Workloads using the HPE CSI Driver for Kubernetes&quot;&gt;&lt;/a&gt;
Watch on &lt;a href=&quot;https://youtu.be/zUj-bJ_KqHU&quot;&gt;YouTube&lt;/a&gt;!&lt;/p&gt;
&lt;h1&gt;Prerequisites&lt;/h1&gt;
&lt;p&gt;The examples we&apos;re going to walk through require that the HPE CSI Driver for Kubernetes v1.4.0 or later has been installed along with the CSI external snapshotter. Examples also assume a &lt;code&gt;VolumeSnapshotClass&lt;/code&gt; named &quot;hpe-snapshot&quot; exists on the cluster.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Install the HPE CSI Driver using the &lt;a href=&quot;https://artifacthub.io/packages/helm/hpe-storage/hpe-csi-driver&quot;&gt;Helm Chart&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Enable &lt;a href=&quot;https://scod.hpedev.io/csi_driver/using.html#enabling_csi_snapshots&quot;&gt;CSI snapshots&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Add a &lt;a href=&quot;https://scod.hpedev.io/csi_driver/deployment.html#add_a_hpe_storage_backend&quot;&gt;HPE storage backend&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Create a &lt;a href=&quot;https://scod.hpedev.io/csi_driver/using.html#using_csi_snapshots&quot;&gt;VolumeSnapshotClass&lt;/a&gt; and of course a &lt;a href=&quot;https://scod.hpedev.io/csi_driver/using.html#base_storageclass_parameters&quot;&gt;StorageClass&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;No particular parameters are needed in either the &lt;code&gt;VolumeSnapshotClass&lt;/code&gt; or &lt;code&gt;StorageClass&lt;/code&gt; but the backend &lt;code&gt;Secret&lt;/code&gt; is assumed to be named &quot;hpe-backend&quot; and reside in the &quot;hpe-storage&quot; &lt;code&gt;Namespace&lt;/code&gt;.&lt;/p&gt;
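&lt;p&gt;For reference, here is a minimal sketch of what the assumed &quot;hpe-snapshot&quot; &lt;code&gt;VolumeSnapshotClass&lt;/code&gt; could look like. The API version and secret parameters depend on your cluster&apos;s snapshot CRDs and driver version, so treat this as an illustration and check &lt;a href=&quot;https://scod.hpedev.io/csi_driver/using.html#using_csi_snapshots&quot;&gt;SCOD&lt;/a&gt; for the exact definition:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: hpe-snapshot
driver: csi.hpe.com
deletionPolicy: Delete
parameters:
  # standard CSI snapshotter secret references, matching the assumptions above
  csi.storage.k8s.io/snapshotter-secret-name: hpe-backend
  csi.storage.k8s.io/snapshotter-secret-namespace: hpe-storage
&lt;/code&gt;&lt;/pre&gt;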
&lt;p&gt;In the examples below, we&apos;re using HPE Nimble Storage. Any Container Storage Provider (CSP) that supports &lt;code&gt;VolumeGroups&lt;/code&gt; and &lt;code&gt;SnapshotGroups&lt;/code&gt; will work.&lt;/p&gt;
&lt;h1&gt;Pick an application&lt;/h1&gt;
&lt;p&gt;In order to illustrate the fact that multiple snapshots are being created, either pick an application that requires multiple volumes or deploy a microservice stack comprised of multiple stateful applications. In this example we&apos;ll use WordPress from the &lt;code&gt;bitnami/wordpress&lt;/code&gt; Helm Chart.&lt;/p&gt;
&lt;p&gt;We&apos;re using the following &quot;values&quot; file for the deployment:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;mariadb:
  architecture: replication
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Add the Bitnami repo:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;helm repo add bitnami https://charts.bitnami.com/bitnami
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Install the WordPress chart:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;helm install my-wordpress bitnami/wordpress -f wp-values.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once deployed, there should be three &lt;code&gt;PersistentVolumeClaims&lt;/code&gt; on the cluster.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;kubectl get pvc -o name
persistentvolumeclaim/data-my-wordpress-mariadb-primary-0
persistentvolumeclaim/data-my-wordpress-mariadb-secondary-0
persistentvolumeclaim/my-wordpress
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Adding content to the WordPress site being deployed is completely optional as restore procedures are not covered in this tutorial. However, if you&apos;re interested in performing a restore from the &lt;code&gt;VolumeSnapshots&lt;/code&gt; we&apos;re creating, please see the CSI snapshots tutorial in this &lt;a href=&quot;/blog/PklOy39w8NtX6M2RvAxW/hpe-csi-driver-for-kubernetes-snapshots-clones-and-volume-expansion&quot;&gt;HPE DEV blog post&lt;/a&gt;.&lt;/p&gt;
&lt;h1&gt;Volume Groups&lt;/h1&gt;
&lt;p&gt;The &lt;code&gt;CustomResourceDefinition&lt;/code&gt; (CRD) that users interact with is called a &lt;code&gt;VolumeGroup&lt;/code&gt;. In order to facilitate the creation of &lt;code&gt;VolumeGroups&lt;/code&gt; a Kubernetes cluster administrator needs to create another &lt;code&gt;CRD&lt;/code&gt; called a &lt;code&gt;VolumeGroupClass&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;The Volume Group Provisioner (depicted below) is a Kubernetes CSI sidecar container that performs a number of duties to facilitate volume grouping.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/groups-1610651079691.png&quot; alt=&quot;picture3&quot;&gt;&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;VolumeGroupContent&lt;/code&gt; &lt;code&gt;CRD&lt;/code&gt; is managed solely by the Volume Group Provisioner.&lt;/p&gt;
&lt;p&gt;Let&apos;s start by creating a &lt;code&gt;VolumeGroupClass&lt;/code&gt;. The remaining YAML in this tutorial is assumed to be pasted into a terminal verbatim, like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;kubectl create -f- &amp;#x3C;hit ENTER&gt;
&amp;#x3C; paste the YAML content &gt;
^D &amp;#x3C;hit CTRL+D on a new line&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Let&apos;s begin:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
apiVersion: storage.hpe.com/v1
kind: VolumeGroupClass
metadata:
  name: my-volume-group-class
provisioner: csi.hpe.com
deletionPolicy: Delete
parameters:
  description: &quot;HPE CSI Driver for Kubernetes Volume Group&quot;
  csi.hpe.com/volume-group-provisioner-secret-name: hpe-backend
  csi.hpe.com/volume-group-provisioner-secret-namespace: hpe-storage
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;At this stage nothing has been created on the backend storage system. The next step is to create a blank &lt;code&gt;VolumeGroup&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
apiVersion: storage.hpe.com/v1
kind: VolumeGroup
metadata:
  name: my-volume-group
spec:
  volumeGroupClassName: my-volume-group-class
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, logging in to the backend HPE Nimble Storage array, we&apos;ll see a new Volume Collection has been created.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;Nimble OS $ volcoll --list
--------------------+---------------+-------------------------------------------
Volume Collection    Application     Owned By
Name                 Synchronization
--------------------+---------------+-------------------------------------------
volumegroup-e96aa858-93e9-424c-a593-6a6f216368c0 none            nva-grp
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Inspecting further:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;Nimble OS $ volcoll --info volumegroup-e96aa858-93e9-424c-a593-6a6f216368c0
Name: volumegroup-e96aa858-93e9-424c-a593-6a6f216368c0
Description: HPE CSI Driver for Kubernetes Volume Group
Owned by: nva-grp
Application synchronization: none
Application server: N/A
Application ID: N/A
Cluster name: N/A
Service name: N/A
VMware vCenter hostname: N/A
VMware vCenter username: N/A
VMware vCenter password: N/A
Backup agent hostname: N/A
Backup agent username: N/A
Backup agent password: N/A
Associated volumes: none
Associated pinned volumes: none
Snapshot collection count: 0
Created: Jan 13 2021 16:40:48
Last configuration change: Jan 13 2021 16:40:48
Replication Type: Periodic Snapshot
Synchronous Replication State: N/A
Synchronous Replication Last In Sync: N/A
Synchronous Replication Resync %: N/A
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Adding members to the &lt;code&gt;VolumeGroup&lt;/code&gt; is done by adding annotations to &lt;code&gt;PersistentVolumeClaims&lt;/code&gt;. This can be done in a number of ways: the claims may be created with the annotation already in place, or the annotation may be added later with either the &lt;code&gt;kubectl patch&lt;/code&gt; or the &lt;code&gt;kubectl annotate&lt;/code&gt; command. I prefer using the &lt;code&gt;kubectl annotate&lt;/code&gt; command. Let&apos;s do it.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;kubectl annotate pvc/data-my-wordpress-mariadb-primary-0 \
  pvc/data-my-wordpress-mariadb-secondary-0 \
  pvc/my-wordpress \
  csi.hpe.com/volume-group=my-volume-group 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Annotations don&apos;t need to be added all at once. They can be added (or removed) individually.&lt;/p&gt;
&lt;p&gt;At this stage you can inspect the &lt;code&gt;VolumeGroup&lt;/code&gt; on the Kubernetes cluster or the Volume Collection on the array to confirm that &lt;code&gt;PersistentVolumeClaims&lt;/code&gt; have been added to the &lt;code&gt;VolumeGroup&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;kubectl get volumegroup/my-volume-group \
  -o &apos;jsonpath={.spec.persistentVolumeClaimNames[*]}&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, let&apos;s create some atomic &lt;code&gt;VolumeSnapshots&lt;/code&gt;.&lt;/p&gt;
&lt;h1&gt;Snapshot Groups&lt;/h1&gt;
&lt;p&gt;The &lt;code&gt;SnapshotGroup&lt;/code&gt; &lt;code&gt;CRD&lt;/code&gt; is primarily what users interact with. The Snapshot Group Snapshotter depicted below carries out all the backend work and populates Kubernetes with the necessary &lt;code&gt;CRDs&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/snaps-1610647869530.png&quot; alt=&quot;picture4&quot;&gt;&lt;/p&gt;
&lt;p&gt;In a similar fashion to &lt;code&gt;VolumeGroupClasses&lt;/code&gt;, a &lt;code&gt;SnapshotGroupClass&lt;/code&gt; needs to be created by an administrator.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
apiVersion: storage.hpe.com/v1
kind: SnapshotGroupClass
metadata:
  name: my-snapshot-group-class
snapshotter: csi.hpe.com
deletionPolicy: Delete
parameters:
  csi.hpe.com/snapshot-group-snapshotter-secret-name: hpe-backend
  csi.hpe.com/snapshot-group-snapshotter-secret-namespace: hpe-storage
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, users may create &lt;code&gt;SnapshotGroups&lt;/code&gt;. Upon creation, the &lt;code&gt;SnapshotGroups&lt;/code&gt; will automatically be populated by &lt;code&gt;VolumeSnapshots&lt;/code&gt; by the Snapshot Group Snapshotter. Let&apos;s create a &lt;code&gt;SnapshotGroup&lt;/code&gt; and see.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
apiVersion: storage.hpe.com/v1
kind: SnapshotGroup
metadata:
  name: my-snapshot-group-1
spec:
  source:
    kind: VolumeGroup
    apiGroup: storage.hpe.com
    name: my-volume-group
  snapshotGroupClassName: my-snapshot-group-class
  volumeSnapshotClassName: hpe-snapshot
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &quot;hpe-snapshot&quot; &lt;code&gt;VolumeSnapshotClass&lt;/code&gt; is not created as part of this tutorial; it should already exist on the cluster as described in the prerequisites above. The &lt;code&gt;.spec.source&lt;/code&gt; indicates our previously created &lt;code&gt;VolumeGroup&lt;/code&gt; to snapshot. Now, let&apos;s check for &lt;code&gt;VolumeSnapshots&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;kubectl get volumesnapshots -o name
volumesnapshot.snapshot.storage.k8s.io/my-snapshot-group-1-data-my-wordpress-mariadb-primary-0
volumesnapshot.snapshot.storage.k8s.io/my-snapshot-group-1-data-my-wordpress-mariadb-secondary-0
volumesnapshot.snapshot.storage.k8s.io/my-snapshot-group-1-my-wordpress
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Presto! Further, logging in to the backend array, we can now see a new Snapshot Collection with populated entries.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;Nimble OS $ snapcoll --list
--------------------+---------------------------------------+-------+-----------
Volume Collection    Snapshot Collection                     Num     Replication
Name                 Name                                    Snaps   Status
--------------------+---------------------------------------+-------+-----------
volumegroup-e96aa858-93e9-424c-a593-6a6f216368c0 snapshot-fde72bb7-6633-4e6f-841c-3efaa0444710      3 N/A
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;A few points to note:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;SnapshotGroups&lt;/code&gt; may be deleted from the cluster and will in turn delete the &lt;code&gt;VolumeSnapshots&lt;/code&gt; and backend Snapshot Collection&lt;/li&gt;
&lt;li&gt;&lt;code&gt;VolumeSnapshots&lt;/code&gt; may be used to perform restores (clones) just like any other &lt;code&gt;VolumeSnapshot&lt;/code&gt; (see the sketch after this list)&lt;/li&gt;
&lt;/ul&gt;
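&lt;p&gt;As a hedged illustration of that last point, restoring (cloning) one of the snapshots is just a matter of creating a new &lt;code&gt;PersistentVolumeClaim&lt;/code&gt; with a &lt;code&gt;dataSource&lt;/code&gt; referencing the &lt;code&gt;VolumeSnapshot&lt;/code&gt;. The claim name, size, and &lt;code&gt;StorageClass&lt;/code&gt; below are hypothetical; the requested size must be at least that of the source volume:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-wordpress-clone
spec:
  dataSource:
    name: my-snapshot-group-1-my-wordpress
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: hpe-standard
&lt;/code&gt;&lt;/pre&gt;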
&lt;h1&gt;Learn more&lt;/h1&gt;
&lt;p&gt;This is just the tip of the iceberg when it comes to data management innovation from HPE. Stay tuned to the &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE Developer Community&lt;/a&gt; to learn more about upcoming features and capabilities.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Explore the HPE CSI Driver for Kubernetes &lt;a href=&quot;https://github.com/hpe-storage/csi-driver&quot;&gt;on GitHub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Check out the release announcement on &lt;a href=&quot;https://community.hpe.com/t5/Around-the-Storage-Block/HPE-CSI-Driver-for-Kubernetes-v1-4-0-with-expanded-ecosystem-and/ba-p/7118180&quot;&gt;Around The Storage Block&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Install the latest HPE CSI Driver using the &lt;a href=&quot;https://artifacthub.io/packages/helm/hpe-storage/hpe-csi-driver&quot;&gt;Helm Chart from ArtifactHub.io&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Visit the &lt;a href=&quot;https://scod.hpedev.io&quot;&gt;HPE Storage Container Orchestration Documentation&lt;/a&gt; (SCOD) portal for everything pertaining to the HPE CSI Driver&lt;/li&gt;
&lt;li&gt;Sign up for a &lt;a href=&quot;/hackshack/workshops&quot;&gt;HPE DEV Hack Shack Workshop&lt;/a&gt; to learn about CSI on the HPE Ezmeral Container Platform&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;HPE is eager to learn what our customers and partners are doing with Kubernetes and data management. Join the HPE DEV Slack community to share your thoughts and engage with the team. Sign up at &lt;a href=&quot;http://slack.hpedev.io&quot;&gt;slack.hpedev.io&lt;/a&gt; and sign in at &lt;a href=&quot;http://hpedev.slack.com&quot;&gt;hpedev.slack.com&lt;/a&gt;, we hang out in #nimblestorage, #3par-primera and #kubernetes.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Streaming Data Pipeline to Transform, Store and Explore Healthcare Dataset With Apache Kafka API, Apache Spark, Apache Drill, JSON and MapR Database]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may…]]></description><link>https://developer.hpe.com/streaming-data-pipeline-to-transform-store-and-explore-healthcare-datase/</link><guid isPermaLink="false">https://developer.hpe.com/streaming-data-pipeline-to-transform-store-and-explore-healthcare-datase/</guid><pubDate>Thu, 14 Jan 2021 05:59:02 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Carol McDonald&quot;,
&quot;publish&quot;: &quot;2018-02-27T12:00:00.000&quot;,
&quot;tags&quot;: &quot;use-cases&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;In the past, big data was processed in batch, often on a once-a-day basis. Now data is dynamic, and data-driven businesses need instant results from continuously changing data. Data pipelines, which combine real-time stream processing with the collection, analysis and storage of large amounts of data, enable modern, real-time applications, analytics and reporting.&lt;/p&gt;
&lt;p&gt;This post is based on a recent workshop I helped develop and deliver at a large health services and innovation company&apos;s analytics conference. This company is combining streaming data pipelines with data science on top of the MapR Data Platform to improve healthcare outcomes, improve access to appropriate care, better manage cost, and reduce fraud, waste and abuse.&lt;/p&gt;
&lt;p&gt;In this post we will:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Use Apache Spark streaming to consume &lt;a href=&quot;https://www.cms.gov/openpayments/&quot;&gt;Medicare Open payments&lt;/a&gt; data using the Apache Kafka API&lt;/li&gt;
&lt;li&gt;Transform the streaming data into JSON format and save to the MapR Database document database.&lt;/li&gt;
&lt;li&gt;Query the MapR Database JSON table with Apache Spark SQL, Apache Drill, and the Open JSON API (OJAI) and Java.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/example-streamline-processing-pipeline-1610604239860.png&quot; alt=&quot;Example Streamline Processing Pipeline&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Example Use Case Data Set&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://www.cms.gov/openpayments/&quot;&gt;Open Payments&lt;/a&gt; is a federal program, launched in 2013, that collects information about the payments drug and device companies make to physicians and teaching hospitals for things like travel, research, gifts, speaking fees, and meals.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/facts-about-open-payments-data-1610604259299.png&quot; alt=&quot;Facts About Open Payments Data&quot;&gt;&lt;/p&gt;
&lt;p&gt;A large Health payment dataset, JSON, Apache Spark, MapR Event Store, and MapR Database are an interesting combination for a health analytics workshop because:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;JSON is an open-standard and efficient format that is easy for computer languages to manipulate. Newer standards for exchanging healthcare information such as &lt;a href=&quot;https://www.hl7.org/fhir/overview.html&quot;&gt;FHIR&lt;/a&gt; are easier to implement because they use a modern suite of API technology, including JSON.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://spark.apache.org/docs/latest/sql-programming-guide.html&quot;&gt;Apache Spark SQL, Dataframes, and Datasets&lt;/a&gt; make it easy to load, process, transform, and analyze JSON data. MapR Event Store is a distributed messaging system for streaming event data at scale. MapR Event Store integrates with Spark Streaming via the Kafka API.&lt;/li&gt;
&lt;li&gt;MapR Database, a high performance NoSQL database, supports JSON documents as a native data store. MapR Database makes it easy to store, query and build applications with JSON documents. The Spark connector makes it easy to build real-time or batch pipelines between your JSON data and MapR Database and leverage Spark within the pipeline.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/mapr-db-table-1610604274960.png&quot; alt=&quot;MapR Database table&quot;&gt;&lt;/p&gt;
&lt;h2&gt;How Do You Build a Data Pipeline That Handles Millions of Events in Real Time at Scale?&lt;/h2&gt;
&lt;p&gt;A common data pipeline architecture pattern is event sourcing using an append-only publish-subscribe event stream such as MapR Event Streams (which provides a Kafka API). MapR Event Store Topics are logical collections of events, which organize events into categories and decouple producers from consumers, making it easy to add new producers and consumers. Topics are partitioned for throughput and scalability, producers are load balanced, and consumers can be grouped to read in parallel. MapR Event Store can scale to very high throughput levels, easily delivering millions of messages per second using very modest hardware.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/kafka-api-1610604290555.png&quot; alt=&quot;Kafka API&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Processing Streaming Data with Spark&lt;/h2&gt;
&lt;p&gt;Apache Spark Streaming is an extension of the core Spark API that enables continuous data stream processing. Data streams can be processed with Spark&apos;s core, SQL, GraphX, or machine learning APIs, and can be persisted to a file system, HDFS, MapR XD, MapR Database, HBase, or any data source offering a Hadoop OutputFormat or Spark connector. Stream processing of events is useful for filtering, transforming, creating counters and aggregations, correlating values, joining streams together, machine learning, and publishing to a different topic for pipelines.&lt;/p&gt;
&lt;p&gt;MapR Event Streams integrates with Spark Streaming via the Kafka direct approach. The MapR Database OJAI Connector for Apache Spark enables you to use &lt;a href=&quot;https://docs.datafabric.hpe.com/62/Spark/SavingDStreamMapRDB.html&quot;&gt;MapR Database as a sink for Apache Spark Data Streams&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/application-1610604303183.png&quot; alt=&quot;Application&quot;&gt;&lt;/p&gt;
&lt;p&gt;The incoming data is in CSV format, an example is shown below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;&quot;NEW&quot;,&quot;Covered Recipient Physician&quot;,,,,&quot;132655&quot;,&quot;GREGG&quot;,&quot;D&quot;,&quot;ALZATE&quot;,,&quot;8745 AERO DRIVE&quot;,&quot;STE 200&quot;,&quot;SAN DIEGO&quot;,&quot;CA&quot;,&quot;92123&quot;,&quot;United States&quot;,,,&quot;Medical Doctor&quot;,&quot;Allopathic &amp;#x26; Osteopathic Physicians|Radiology|Diagnostic Radiology&quot;,&quot;CA&quot;,,,,,&quot;DFINE, Inc&quot;,&quot;100000000326&quot;,&quot;DFINE, Inc&quot;,&quot;CA&quot;,&quot;United States&quot;,90.87,&quot;02/12/2016&quot;,&quot;1&quot;,&quot;In-kind items and services&quot;,&quot;Food and Beverage&quot;,,,,&quot;No&quot;,&quot;No Third Party Payment&quot;,,,,,&quot;No&quot;,&quot;346039438&quot;,&quot;No&quot;,&quot;Yes&quot;,&quot;Covered&quot;,&quot;Device&quot;,&quot;Radiology&quot;,&quot;StabiliT&quot;,,&quot;Covered&quot;,&quot;Device&quot;,&quot;Radiology&quot;,&quot;STAR Tumor Ablation System&quot;,,,,,,,,,,,,,,,,,&quot;2016&quot;,&quot;06/30/2017&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;There are a lot of fields in this data that we will not use; we will parse the following fields:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/parse-fields-1610604316662.png&quot; alt=&quot;Parse Fields&quot;&gt;&lt;/p&gt;
&lt;p&gt;And transform them into the following JSON document for storing in MapR Database:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{

    &quot;_id&quot; :&quot;317150_08/26/2016_346122858&quot;,
    &quot;physician_id&quot; :&quot;317150&quot;,
    &quot;date_payment&quot; :&quot;08/26/2016&quot;,
    &quot;record_id&quot; :&quot;346122858&quot;,
    &quot;payer&quot; :&quot;Mission Pharmacal Company&quot;,
    &quot;amount&quot; :9.23,
    &quot;Physician_Specialty&quot; :&quot;Obstetrics &amp;#x26; Gynecology&quot;,
    &quot;Nature_of_payment&quot; :&quot;Food and Beverage&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/transform-1610604333567.png&quot; alt=&quot;Transform&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Spark Kafka Consumer Producer Code&lt;/strong&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: Code snippets are shown here. You can download the complete code and instructions from the GitHub link at the end of this post.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Parsing the Data Set Records&lt;/h2&gt;
&lt;p&gt;A Scala Payment case class defines the schema corresponding to the CSV data that we are interested in. The parsePayment function parses a line of comma separated values into the Payment case class.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/parsing-data-set-records-1610604356991.png&quot; alt=&quot;Parsing the Data Set Records&quot;&gt;&lt;/p&gt;
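&lt;p&gt;The screenshot above isn&apos;t copy-paste friendly, so here is a rough Scala sketch of what such a case class and parser could look like. Field names follow the JSON document shown earlier, and the column positions are illustrative rather than the exact indices used in the workshop code:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// Schema for the CSV fields we care about (field names follow the JSON example above)
case class Payment(physician_id: String, date_payment: String, record_id: String,
                   payer: String, amount: Double, physician_specialty: String,
                   nature_of_payment: String)

// Parse one line of comma-separated values into a Payment.
// Naive split for illustration; the real data is quoted CSV, so a proper CSV parser is preferable.
def parsePayment(line: String): Payment = {
  val f = line.split(&quot;,&quot;)
  Payment(f(5), f(31), f(45), f(27), f(30).toDouble, f(19), f(34))
}
&lt;/code&gt;&lt;/pre&gt;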
&lt;p&gt;A PaymentwId class defines the JSON document schema for MapR Database.&lt;/p&gt;
&lt;p&gt;In order to save the JSON objects to MapR Database, we need to define the _id field, which is the row key and primary index for MapR Database. The parsePaymentwID function creates an object with the _id equal to a combination of the physician id, the date, and the record id. Since MapR Database rows are partitioned and sorted by row key when inserted, the payment documents will be grouped by physician and date in MapR Database. This gives very fast queries, aggregations, and sorting by physician id and date. We will also look at secondary indexes later in this post.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/parsing-data-set-records-2-1610604371777.png&quot; alt=&quot;Parsing the Data Set Records&quot;&gt;&lt;/p&gt;
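&lt;p&gt;A rough sketch of the PaymentwId class and the parsePaymentwID function follows, again as an illustration of the description above rather than the verbatim workshop code:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// JSON document schema for MapR Database; _id is the row key and primary index
case class PaymentwId(_id: String, physician_id: String, date_payment: String,
                      record_id: String, payer: String, amount: Double,
                      physician_specialty: String, nature_of_payment: String)

// Build the _id from the physician id, date and record id so that documents
// sort (and therefore group) by physician and date in MapR Database
def parsePaymentwID(line: String): PaymentwId = {
  val p = parsePayment(line)
  PaymentwId(s&quot;${p.physician_id}_${p.date_payment}_${p.record_id}&quot;,
             p.physician_id, p.date_payment, p.record_id, p.payer,
             p.amount, p.physician_specialty, p.nature_of_payment)
}
&lt;/code&gt;&lt;/pre&gt;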
&lt;h3&gt;Spark Streaming Code&lt;/h3&gt;
&lt;p&gt;We use the KafkaUtils createDirectStream method with Kafka configuration parameters to create an input stream from a MapR Event Store topic. This creates a DStream that represents the stream of incoming data, where each message is a key value pair. We use the DStream map transformation to create a DStream with the message values, and then another map transformation with the parsePaymentwID function to create a DStream of PaymentwID objects.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/spark-streaming-code-1-1610604388626.png&quot; alt=&quot;Spark Streaming Code&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/spark-streaming-code-2-1610604403278.png&quot; alt=&quot;Spark Streaming Code&quot;&gt;&lt;/p&gt;
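&lt;p&gt;In textual form, the streaming setup shown in the screenshots boils down to something like the following sketch. The topic path, batch interval, and consumer settings are placeholders, and the exact Kafka integration package (kafka09 vs. kafka010) depends on your Spark distribution:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}

// spark is an existing SparkSession
val ssc = new StreamingContext(spark.sparkContext, Seconds(2))

// Minimal consumer settings; add bootstrap.servers and others as needed for your environment
val kafkaParams = Map[String, String](
  &quot;key.deserializer&quot;   -&gt; &quot;org.apache.kafka.common.serialization.StringDeserializer&quot;,
  &quot;value.deserializer&quot; -&gt; &quot;org.apache.kafka.common.serialization.StringDeserializer&quot;,
  &quot;group.id&quot;           -&gt; &quot;payments-consumer&quot;)

// Subscribe to the MapR Event Store topic via the Kafka API (topic path is a placeholder)
val topics = Set(&quot;/streams/payments:topic1&quot;)
val messages = KafkaUtils.createDirectStream[String, String](
  ssc, LocationStrategies.PreferConsistent,
  ConsumerStrategies.Subscribe[String, String](topics, kafkaParams))

// Keep only the message values and parse them with the parsePaymentwID function sketched above
val paymentDStream = messages.map(_.value()).map(parsePaymentwID)
paymentDStream.print(3)
&lt;/code&gt;&lt;/pre&gt;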
&lt;p&gt;The output of the paymentDStream.print(3) is shown below&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/output-paymentdstream-1610604421331.png&quot; alt=&quot;output of the paymentDStream.print(3)&quot;&gt;&lt;/p&gt;
&lt;p&gt;For storing lots of streaming data, we need a data store that supports fast writes and scales. The MapR Database Spark Connector DStream saveToMapRDB method performs a parallel partitioned bulk insert of JSON PaymentwID objects into MapR Database:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/mapr-db-spark-connector-dstream-savetomaprdb-method-1610604444456.png&quot; alt=&quot;MapR Database Spark Connector DStream saveToMapRDB method&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/save-to-mapr-db-json-1610604463458.png&quot; alt=&quot;Save to MapR Database JSON&quot;&gt;&lt;/p&gt;
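&lt;p&gt;A condensed sketch of that step is shown below. The import path and table path follow the MapR Database OJAI connector documentation linked above and may vary between connector versions:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import com.mapr.db.spark.streaming._   // adds saveToMapRDB to DStreams (OJAI connector)

// Parallel, partitioned bulk insert of each micro-batch into the MapR Database JSON table
// (table path is a placeholder)
paymentDStream.saveToMapRDB(&quot;/apps/payments&quot;, createTable = false)

ssc.start()
ssc.awaitTermination()
&lt;/code&gt;&lt;/pre&gt;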
&lt;h3&gt;MapR Database&lt;/h3&gt;
&lt;p&gt;One of the challenges when you are processing lots of data is where to store it. With MapR Database (HBase API or JSON API), a table is automatically partitioned into tablets across a cluster by key range, providing for scalable and fast reads and writes by row key.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/scalable-fast-reads-writes-row-key-1610604478573.png&quot; alt=&quot;Scalable and Fast Reads and Writes by Row Key&quot;&gt;&lt;/p&gt;
&lt;p&gt;The Spark MapR Database Connector leverages the Spark &lt;a href=&quot;https://databricks.com/blog/2015/01/09/spark-sql-data-sources-api-unified-data-access-for-the-spark-platform.html&quot;&gt;DataSource API&lt;/a&gt;. The connector architecture has a connection object in every Spark Executor, allowing for distributed parallel writes, reads, or scans with MapR Database tablets.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/connection-in-spark-executor-1610604492674.png&quot; alt=&quot;Connection in Every Spark Executor&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Querying MapR Database JSON with Spark SQL&lt;/h2&gt;
&lt;p&gt;The Spark MapR Database Connector enables users to perform complex SQL queries and updates on top of MapR Database using a Spark Dataset, while applying critical techniques such as Projection and filter pushdown, custom partitioning, and data locality.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/querying-application-1610604507438.png&quot; alt=&quot;Querying Application&quot;&gt;&lt;/p&gt;
&lt;p&gt;A Spark &lt;a href=&quot;http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.sql.Dataset&quot;&gt;Dataset&lt;/a&gt; is a distributed collection of data. Dataset is a newer interface, which provides the benefits of strong typing, the ability to use powerful lambda functions, and efficient object serialization/deserialization, combined with the benefits of Spark SQL&apos;s optimized execution engine.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/dataset-1610604520672.png&quot; alt=&quot;Dataset&quot;&gt;&lt;/p&gt;
&lt;p&gt;A DataFrame is a Dataset organized into named columns: Dataset[Row]. (In Spark 2.0, the DataFrame APIs merged with the &lt;a href=&quot;https://databricks.com/blog/2016/01/04/introducing-apache-spark-datasets.html&quot;&gt;Dataset&lt;/a&gt; APIs.)&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/unified-apache-spark-1610604532100.png&quot; alt=&quot;Unified Apache Spark&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Loading data from MapR Database into a Spark Dataset&lt;/h3&gt;
&lt;p&gt;To &lt;a href=&quot;https://docs.datafabric.hpe.com/62/Spark/LoadDataFromMapRDBasDataset.html&quot;&gt;load data from a MapR Database JSON&lt;/a&gt; table into an Apache Spark Dataset, we first define the Scala class and &lt;a href=&quot;https://spark.apache.org/docs/latest/sql-programming-guide.html&quot;&gt;Spark StructType &lt;/a&gt;matching the structure of the JSON objects in the MapR Database table.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/load-data-from-mapr-db-json-1610604545905.png&quot; alt=&quot;load data from a MapR Database JSON&quot;&gt;&lt;/p&gt;
&lt;p&gt;Next, we invoke the loadFromMapRDB method on a SparkSession object, providing the tableName, schema, and case class. This will return a Dataset of PaymentwId objects:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/dataset-paymentwid-objects-1610604560801.png&quot; alt=&quot;Dataset of PaymentwId objects&quot;&gt;&lt;/p&gt;
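&lt;p&gt;A sketch of that call is shown below; the exact loadFromMapRDB overload differs between connector versions, so treat this as an illustration of the description above, with a placeholder table path:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import com.mapr.db.spark.sql._          // adds loadFromMapRDB to SparkSession (OJAI connector)
import org.apache.spark.sql.{Encoders, SparkSession}

val spark = SparkSession.builder().appName(&quot;payments-query&quot;).getOrCreate()
import spark.implicits._

// Schema derived from the PaymentwId case class defined earlier
val schema = Encoders.product[PaymentwId].schema

// Load the JSON table into a typed Dataset of PaymentwId objects
val payments = spark.loadFromMapRDB[PaymentwId](&quot;/apps/payments&quot;, schema).as[PaymentwId]
payments.show(5)
&lt;/code&gt;&lt;/pre&gt;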
&lt;h3&gt;Explore and query the Payment data with Spark SQL&lt;/h3&gt;
&lt;p&gt;Datasets provide a domain-specific language for structured data manipulation in Scala, Java, and Python. Below are some examples in Scala. The Dataset show() action displays the top 20 rows in a tabular form.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/20-rows-tabular-form-1610604575925.png&quot; alt=&quot;20 rows in a tabular form&quot;&gt;&lt;/p&gt;
&lt;p&gt;What are the top 5 nature of payments by count?&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/5-nature-payments-by-count-1610604589757.png&quot; alt=&quot;What are the top 5 nature of payments by count&quot;&gt;&lt;/p&gt;
&lt;p&gt;What is the count of Payers with payment amounts &gt; $1000?&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/payments-greater-than-1000-1610604608196.png&quot; alt=&quot;Payers with payment amounts &gt; $1000&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can register a Dataset as a temporary table using a given name and then run Spark SQL. Here are some example Spark SQL queries on the payments dataset:&lt;/p&gt;
&lt;p&gt;What are the top 5 physician specialties by amount with count?&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/5-physician-speicialties-1610604623826.png&quot; alt=&quot;5 physician specialties by amount with count&quot;&gt;&lt;/p&gt;
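&lt;p&gt;In textual form, the kinds of queries shown in the screenshots could look roughly like this, using the payments Dataset loaded above (column names follow the sketched case class, not necessarily the exact workshop schema):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.spark.sql.functions.desc

// Dataset/DataFrame API: top 5 natures of payment by count
payments.groupBy(&quot;nature_of_payment&quot;).count().orderBy(desc(&quot;count&quot;)).show(5)

// Register a temporary view and use Spark SQL:
// top 5 physician specialties by total amount, with count
payments.createOrReplaceTempView(&quot;payments&quot;)
spark.sql(&quot;&quot;&quot;
  SELECT physician_specialty, count(*) AS cnt, sum(amount) AS total
  FROM payments
  GROUP BY physician_specialty
  ORDER BY total DESC
  LIMIT 5
&quot;&quot;&quot;).show()
&lt;/code&gt;&lt;/pre&gt;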
&lt;h2&gt;Querying the Data with Apache Drill&lt;/h2&gt;
&lt;p&gt;Apache Drill is an open source, low-latency query engine for big data that delivers interactive SQL analytics at petabyte scale. Drill provides a massively parallel processing execution engine, built to perform distributed query processing across the various nodes in a cluster.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/apache-drill-1610604648035.png&quot; alt=&quot;Apache Drill&quot;&gt;&lt;/p&gt;
&lt;p&gt;With Drill, you can use SQL to interactively query and join data from files in JSON, Parquet, or CSV format, Hive, and NoSQL stores, including HBase, MapR Database, and Mongo, without defining schemas. MapR provides a &lt;a href=&quot;https://package.mapr.com/tools/MapR-JDBC/MapR_Drill/&quot;&gt;Drill JDBC&lt;/a&gt; driver that you can use to connect Java applications and BI tools, such as SquirreL and Spotfire, to Drill. Below is a snippet of Java code for querying MapR Database using Drill and JDBC:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/java-code-querying-1610604663404.png&quot; alt=&quot;Java code for querying MapR Database using Drill and JDBC&quot;&gt;&lt;/p&gt;
&lt;p&gt;Partial output for this query is shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/partial-output-for-query-1610604688017.png&quot; alt=&quot;Partial output for this query&quot;&gt;&lt;/p&gt;
&lt;p&gt;Below are some example SQL queries using the Drill shell.&lt;/p&gt;
&lt;p&gt;What are the top 5 physicians by total amount?&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/5-physicians-by-total-amount-1610604702767.png&quot; alt=&quot;top 5 physicians by total amount&quot;&gt;&lt;/p&gt;
&lt;p&gt;What are the distinct payers in the Payments table?&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/distinct-payers-payments-table-1610604719490.png&quot; alt=&quot;distinct payers in the Payments table&quot;&gt;&lt;/p&gt;
&lt;p&gt;Follow the instructions in the GitHub code README to &lt;a href=&quot;https://docs.datafabric.hpe.com/62/Drill/optimizing_queries_with_indexes.html&quot;&gt;add a secondary index to MapR Database&lt;/a&gt; and try more queries.&lt;/p&gt;
&lt;h2&gt;Querying with the Open JSON API (OJAI)&lt;/h2&gt;
&lt;p&gt;Below is a Java example of using the &lt;a href=&quot;https://docs.datafabric.hpe.com/62/MapR-DB/JSON_DB/QueryingWithOJAI.html&quot;&gt;OJAI Query interface to query documents in a MapR Database JSON&lt;/a&gt; table:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/java-example-1610604732007.png&quot; alt=&quot;Java example&quot;&gt;&lt;/p&gt;
&lt;p&gt;Partial output for this query is shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/partial-output-1610604746301.png&quot; alt=&quot;Partial output&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;In this blog post, you&apos;ve learned how to consume streaming Open Payments CSV data, transform it to JSON, store it in a document database, and explore it with SQL using Apache Spark, MapR Event Store, MapR Database, OJAI, and Apache Drill.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Code&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;You can download the code and data to run these examples from &lt;a href=&quot;https://github.com/mapr-demos/mapr-es-db-spark-payment&quot;&gt;here&lt;/a&gt;. Refer to the README for complete instructions to run the code.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Running the Code&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;All of the components of the use case architecture we just discussed can run on the same cluster with the MapR Data Platform.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/mapr-cdp-1610604758846.png&quot; alt=&quot;MapR-CDP&quot;&gt;&lt;/p&gt;
&lt;p&gt;This example was developed using the &lt;a href=&quot;https://docs.datafabric.hpe.com/62/MapRContainerDevelopers/MapRContainerDevelopersOverview.html&quot;&gt;MapR 6.0 container for developers&lt;/a&gt;, a docker container that enables you to create a single node MapR cluster. The container is lightweight and designed to run on your laptop. Refer to the code README &lt;a href=&quot;https://github.com/mapr-demos/mapr-es-db-spark-payment&quot;&gt;here&lt;/a&gt; for instructions on running the code.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Predicting Forest Fires with Spark Machine Learning]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may…]]></description><link>https://developer.hpe.com/predicting-forest-fires-with-spark-machine-learning/</link><guid isPermaLink="false">https://developer.hpe.com/predicting-forest-fires-with-spark-machine-learning/</guid><pubDate>Thu, 14 Jan 2021 05:41:09 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Ian Downard&quot;,
&quot;publish&quot;: &quot;2018-07-26T10:45:00.000&quot;,
&quot;tags&quot;: &quot;use-case&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;Anytime you have lat/long coordinates, you have an opportunity to do data science with k-means clustering and visualization on a map. This is a story about how I used geo data with k-means clustering on a topic that has affected me personally - wildfires!&lt;/p&gt;
&lt;p&gt;Every summer wildfires become front of mind for thousands of people who live in the West, Pacific Northwest, and Northern Rockies regions of the United States. Odds are, if you don&apos;t see the flames first hand, you will probably see smoke influencing the weather, road closures, and calls for caution by local authorities.&lt;/p&gt;
&lt;p&gt;I&apos;ve lived in Oregon for about 10 years. In that time I&apos;ve had more than one close encounter with a forest fire. The summer of 2017 was especially bad. A fire in the Columbia River Gorge blew smoke and ash through my neighborhood. Earlier in the year, I crossed paths with firefighters attempting to control a fire in steep rugged terrain in southern Washington. I was stunned to see the size of their equipment and trucks so badass they could be in a Mad Max movie.&lt;/p&gt;
&lt;p&gt;Fire fighting is expensive! &lt;a href=&quot;https://www.usda.gov/media/press-releases/2017/09/14/forest-service-wildland-fire-suppression-costs-exceed-2-billion&quot;&gt;Wildland fire suppression costs exceeded $2 billion in 2017&lt;/a&gt;, making it the most expensive year on record for the U.S. Forest Service. Fires also have a tendency to explode in size. It&apos;s not at all unusual for fires to grow by 50,000 acres in one day when winds are high and the terrain is steep. Take a moment to conceptualize those numbers. $2 billion! More than 50,000 acres, burned in one day!&lt;/p&gt;
&lt;h2&gt;A Beckoning to Data Science&lt;/h2&gt;
&lt;p&gt;I focused my study on optimizing against the following two constraints:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The cost it takes to move water tanks and other heavy equipment.&lt;/li&gt;
&lt;li&gt;The propensity for fires to explode in size if not suppressed quickly.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Both of these things can be minimized by staging equipment as close as possible to where fires are likely to occur.&lt;/p&gt;
&lt;h2&gt;The Problem&lt;/h2&gt;
&lt;p&gt;Minimize the cost and time to respond to fires by staging firefighting assets as close as possible to where fires are likely to occur.&lt;/p&gt;
&lt;h2&gt;The Solution&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image10-1610603073723.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;My goal is to predict where forest fires are prone to occur by partitioning the locations of past burns into clusters whose centroids can be used to optimally place water tanks and heavy fire fighting equipment as near as possible to where fires are likely to occur. The k-means clustering algorithm is aptly suited for this purpose.&lt;/p&gt;
&lt;h2&gt;The Data&lt;/h2&gt;
&lt;p&gt;The U.S. Forest Service provides datasets that describe forest fires that have occurred in Canada and the United States since the year 2000. That data can be downloaded from the Forest Service at &lt;a href=&quot;https://fsapps.nwcg.gov/gisdata.php&quot;&gt;https://fsapps.nwcg.gov/gisdata.php&lt;/a&gt;. For my purposes, this dataset is provided in an inconvenient &lt;a href=&quot;http://doc.arcgis.com/en/arcgis-online/reference/shapefiles.htm&quot;&gt;shapefile&lt;/a&gt; format. It needs to be transformed to CSV in order to be easily analyzed with Apache Spark (no pun intended). Also, the records after 2008 have a different schema than prior years, so, after converting the shapefiles to CSV, they&apos;ll need to be ingested into Spark using separate user-defined schemas.&lt;/p&gt;
&lt;p&gt;By the way, this complexity is typical. Raw data is hardly ever suitable for machine learning without cleansing. The process of cleaning and unifying messy data sets is called data wrangling, and it frequently comprises the bulk of the effort involved in real-world machine learning.&lt;/p&gt;
&lt;h2&gt;The Science&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image7-1610603093049.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The data wrangling that precedes machine learning typically involves writing expressions in R, SQL, Scala, and/or Python to join and transform sampled datasets. Often, getting these expressions right involves a lot of trial and error. Ideally, you want to test those expressions without the burden of compiling and running a full program. Data scientists have embraced web-based notebooks such as Apache Zeppelin for this purpose because they allow you to interactively transform datasets and let you know right away if what you&apos;re trying to do will work properly.&lt;/p&gt;
&lt;p&gt;The Zeppelin notebook I wrote for this study contains a combination of Bash, Python, Scala, and Angular code. A screenshot of the Zeppelin notebook I created for this study is shown &lt;a href=&quot;https://github.com/iandow/iandow.github.io/blob/master/img/firenotebook.png&quot;&gt;here&lt;/a&gt; and the notebook file can be downloaded &lt;a href=&quot;https://raw.githubusercontent.com/iandow/iandow.github.io/master/_includes/Forest%20Fire%20Prediction.json&quot;&gt;here&lt;/a&gt;. Essentially, I used Zeppelin to accomplish the following tasks:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Download historical forest fire data&lt;/li&gt;
&lt;li&gt;Transform raw data into a desirable format&lt;/li&gt;
&lt;li&gt;Identify lat/long feature columns for k-means clustering&lt;/li&gt;
&lt;li&gt;Load data into Spark ML and train a k-means model&lt;/li&gt;
&lt;li&gt;Map the clusters showing regions where fires have been concentrated in the past&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The code and documentation I wrote for this study is available here:&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/mapr-demos/mapr-sparkml-streaming-wildfires&quot;&gt;https://github.com/mapr-demos/mapr-sparkml-streaming-wildfires&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Code Walkthrough&lt;/h2&gt;
&lt;p&gt;Here&apos;s the bash code I use to download the dataset:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image2-1610603101302.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Here&apos;s the Python code I used to convert the downloaded datasets to CSV files:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image9-1610603111027.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Here&apos;s the Scala code I use to ingest the CSV files and train a k-means model with Spark libraries:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image12-1610603119677.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
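&lt;p&gt;Since the code above is shown as images, here is a condensed Scala sketch of the Spark ML portion. The file paths, column names, and the value of k are illustrative, not the exact choices from my notebook:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.spark.ml.clustering.KMeans
import org.apache.spark.ml.feature.VectorAssembler

// Read the wrangled CSV of historical fires; only latitude and longitude are needed
val fires = spark.read.option(&quot;header&quot;, &quot;true&quot;).option(&quot;inferSchema&quot;, &quot;true&quot;)
  .csv(&quot;/data/fires.csv&quot;)
  .select(&quot;lat&quot;, &quot;lon&quot;)

// Assemble the lat/long columns into the feature vector used by k-means
val assembler = new VectorAssembler().setInputCols(Array(&quot;lat&quot;, &quot;lon&quot;)).setOutputCol(&quot;features&quot;)
val features = assembler.transform(fires)

// Train a k-means model; k is the number of staging areas and is a tuning choice
val model = new KMeans().setK(12).setFeaturesCol(&quot;features&quot;).fit(features)
model.clusterCenters.foreach(println)

// Persist the model so it can be reused for real-time assignment later
model.write.overwrite().save(&quot;/models/fire-kmeans&quot;)
&lt;/code&gt;&lt;/pre&gt;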
&lt;p&gt;The resulting cluster centers are shown below. Where would you stage fire fighting equipment?&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image3-1610603128020.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;These circles were calculated by analyzing the locations for fires that have occurred in the past. These points can be used to help stage fire fighting equipment as near as possible to regions prone to burn, but how do we know which staging area should respond when a new forest fire starts? We can use our previously saved model to answer that question. The Scala code for that looks like this:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image4-1610603136768.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
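&lt;p&gt;In textual form, that lookup could be sketched like this, reusing the model saved above (the coordinates are example values):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.spark.ml.clustering.KMeansModel
import org.apache.spark.ml.linalg.Vectors
import spark.implicits._

// Load the saved model and ask which centroid (staging area) a newly reported fire belongs to
val model = KMeansModel.load(&quot;/models/fire-kmeans&quot;)
val newFire = Seq(Tuple1(Vectors.dense(45.37, -121.69))).toDF(&quot;features&quot;)
model.transform(newFire).select(&quot;prediction&quot;).show()   // index of the closest staging-area centroid
&lt;/code&gt;&lt;/pre&gt;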
&lt;h2&gt;Operationalizing This Model as a Real-Time &quot;Fire Response&quot; App&lt;/h2&gt;
&lt;p&gt;The previous code excerpt shows how the model we developed could be used to identify which fire station (i.e., centroid) should be assigned to a given wildfire. We could operationalize this as a real-time fire response application with the following ML pipeline:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image1-1610603349597.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Most machine learning applications are initially architected with a synchronous pipeline like the one shown above, but there are limitations to this simplistic approach. Since it is only architected for a single model, your options are limited when it comes to the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;How do you A/B test different versions of your model?&lt;/li&gt;
&lt;li&gt;How do you load balance inference requests?&lt;/li&gt;
&lt;li&gt;How do you process inference requests with multiple models optimized for different objectives (e.g., speed versus accuracy)?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In order to do these things, the model must be a modular component in the pipeline, and model results should rendezvous at a point where they can be compared, monitored, and selected based upon user-defined criteria. This design pattern can be achieved with an architecture called the rendezvous architecture.&lt;/p&gt;
&lt;h2&gt;The Rendezvous Architecture&lt;/h2&gt;
&lt;p&gt;The rendezvous architecture is a machine learning pipeline that allows multiple models to process inference requests and rendezvous at a point where user-defined logic can be applied to choose which ML result to return to the requester. Such logic could say, &quot;Give me the fastest result,&quot; or &quot;give me the highest confidence score after waiting 10 seconds.&quot; The rendezvous point also gives us a point where models can be monitored and requests can be captured where model results significantly disagree with each other.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image5-1610603365540.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Note the emphasis on streams. Streams buffer requests in an infinite, resilient, and replayable queue. This makes it easy to hotswap models and scale ML executors in a microservices fashion. It also guarantees traceability for every inference request and response.&lt;/p&gt;
&lt;p&gt;If you&apos;d like to learn more about the rendezvous architecture, read the highly recommended &lt;em&gt;Machine Learning Logistics&lt;/em&gt; by Ted Dunning and Ellen Friedman. It was published in 2017 and &lt;a href=&quot;https://www.oreilly.com/library/view/machine-learning-logistics/9781491997628/&quot;&gt;is available as a free downloadable eBook&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;This was a story about how I used geo data with k-means clustering that relates to a topic that deeply affects a lot of people - wildfires! Anytime you have lat / long coordinates you have an opportunity to do data science with k-means clustering and visualization on a map. I hope this example illustrates the basics of k-means clustering and also gives some perspective on how machine learning models can be operationalized in production scenarios using streaming interfaces.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Spark Custom Streaming Sources]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may…]]></description><link>https://developer.hpe.com/spark-custom-streaming-sources/</link><guid isPermaLink="false">https://developer.hpe.com/spark-custom-streaming-sources/</guid><pubDate>Thu, 14 Jan 2021 05:34:57 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: [&quot;Nicolas A Perez&quot;],
&quot;publish&quot;: &quot;2019-03-14T07:00:00.000Z&quot;,
&quot;tags&quot;: &quot;apache-spark&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;Apache Spark is one of the most versatile big data frameworks out there. The ability to mix different kinds of workloads, in-memory processing, and a functional style makes it desirable for anyone coming to code in the data processing world.&lt;/p&gt;
&lt;p&gt;One important aspect of Spark is that it has been built for extensibility. Writing new connectors for the &lt;strong&gt;RDD&lt;/strong&gt; API or extending the &lt;strong&gt;DataFrame/Dataset&lt;/strong&gt; API allows third parties to integrate with Spark with ease. Most people will use one of the built-in APIs, such as Kafka for stream processing or JSON/CSV for file processing. However, there are times when we need more specific implementations, closer to our own needs. For example, we might have a proprietary database we use in our company, and there will not be a connector for it. We can simply write one, as we explained in this previous post &lt;a href=&quot;/blog/XvlK6AnLW6cRQAzVL8XL/spark-data-source-api-extending-our-spark-sql-query-engine&quot;&gt;&lt;em&gt;&lt;strong&gt;Spark Data Source API. Extending Our Spark SQL Query Engine&lt;/strong&gt;&lt;/em&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Starting with Spark 2.0, we can create sources for streaming data, which gave life to the &lt;em&gt;Spark Structured Streaming API&lt;/em&gt;. As we would imagine, there are some built-in streaming sources, Kafka being one of them, alongside &lt;strong&gt;FileStreamSource, TextSocketSource&lt;/strong&gt;, etc.&lt;/p&gt;
&lt;p&gt;Using the new &lt;em&gt;Structured Streaming API&lt;/em&gt; should be preferred over the old &lt;strong&gt;DStream&lt;/strong&gt; API. However, the same problem as before presents itself again: how can we extend this new API so we can use our own streaming sources? The answer to this question is in this blog post.&lt;/p&gt;
&lt;h2&gt;Extensibility Points&lt;/h2&gt;
&lt;p&gt;Let&apos;s start by reviewing the main components that we need to touch on in order to create our own streaming source.&lt;/p&gt;
&lt;p&gt;First of all, &lt;strong&gt;StreamSourceProvider&lt;/strong&gt; indicates which source will be used as the stream reader.&lt;/p&gt;
&lt;p&gt;Secondly, &lt;strong&gt;DataSourceRegister&lt;/strong&gt; allows us to register our source within Spark, so it becomes available for stream processing.&lt;/p&gt;
&lt;p&gt;Thirdly, &lt;strong&gt;Source&lt;/strong&gt; is the interface that we need to implement to provide the streaming source behavior itself.&lt;/p&gt;
&lt;h2&gt;Our Streaming Source&lt;/h2&gt;
&lt;p&gt;For the sake of this post, we will implement a rather easy streaming source, but the same concepts apply to any streaming source that you need to implement on your own.&lt;/p&gt;
&lt;p&gt;Our streaming source is called &lt;strong&gt;InMemoryRandomStrings&lt;/strong&gt;. It generates a sequence of random strings along with their lengths, viewed as a &lt;strong&gt;DataFrame&lt;/strong&gt; of pairs.&lt;/p&gt;
&lt;p&gt;Since we want to keep it simple, we will store the batches in memory and discard them when the process is done. &lt;strong&gt;InMemoryRandomStrings&lt;/strong&gt; is not fault-tolerant, since the data is generated at processing time, in contrast to the built-in Kafka source, where the data actually lives in a Kafka cluster. In most real-world scenarios, our data is stored in advanced systems that keep it secure and consistent; MapR Event Store for Apache Kafka and MapR Database are just a couple of examples.&lt;/p&gt;
&lt;p&gt;We can start by defining our &lt;strong&gt;StreamSourceProvider&lt;/strong&gt;, which defines how our &lt;strong&gt;Source&lt;/strong&gt; is created.&lt;/p&gt;
&lt;p&gt;The class &lt;strong&gt;DefaultSource&lt;/strong&gt; is our &lt;strong&gt;StreamSourceProvider&lt;/strong&gt;, and we need to implement the two required functions, &lt;strong&gt;sourceSchema&lt;/strong&gt; and &lt;strong&gt;createSource&lt;/strong&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// Typical imports for this provider (assuming Spark 2.x)
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.execution.streaming.Source
import org.apache.spark.sql.sources.{DataSourceRegister, StreamSourceProvider}
import org.apache.spark.sql.types.StructType

class DefaultSource extends StreamSourceProvider with DataSourceRegister {

  override def sourceSchema(sqlContext: SQLContext,
                            schema: Option[StructType],
                            providerName: String,
                            parameters: Map[String, String]): (String, StructType) = {

    (shortName(), InMemoryRandomStrings.schema)
  }

  override def createSource(sqlContext: SQLContext,
                            metadataPath: String,
                            schema: Option[StructType],
                            providerName: String,
                            parameters: Map[String, String]): Source = {

    new InMemoryRandomStrings(sqlContext)
  }

  override def shortName(): String = &quot;InMemoryRandomStrings&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;InMemoryRandomStrings.schema&lt;/strong&gt; is the fixed schema we are going to use for the example, but the schema can be dynamically passed in.&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;createSource&lt;/strong&gt; function then returns an instance of &lt;strong&gt;InMemoryRandomStrings&lt;/strong&gt; that is our actual &lt;strong&gt;Source&lt;/strong&gt;.&lt;/p&gt;
&lt;h2&gt;InMemoryRandomStrings&lt;/h2&gt;
&lt;p&gt;Now, let&apos;s see &lt;strong&gt;InMemoryRandomStrings&lt;/strong&gt; code in parts, so we can focus on all the details.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;class InMemoryRandomStrings(sqlContext: SQLContext) extends Source {

  override def schema: StructType = InMemoryRandomStrings.schema

  override def getOffset: Option[Offset] = ???

  override def getBatch(start: Option[Offset], end: Offset): DataFrame = ???

  override def commit(end: Offset): Unit = ???

  override def stop(): Unit = ???
}

object InMemoryRandomStrings {

  lazy val schema = StructType(List(StructField(&quot;value&quot;, StringType), StructField(&quot;ts&quot;, LongType)))

}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;code&gt;schema&lt;/code&gt; returns the schema that our source uses; in our case, we know that the schema is fixed.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;getOffset&lt;/code&gt; should return the latest offset seen by our source.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;class InMemoryRandomStrings(sqlContext: SQLContext) extends Source {
  private var offset: LongOffset = LongOffset(-1)

  override def schema: StructType = InMemoryRandomStrings.schema

  override def getOffset: Option[Offset] = this.synchronized {
    println(s&quot;getOffset: $offset&quot;)

    if (offset.offset == -1) None else Some(offset)
  }

  override def getBatch(start: Option[Offset], end: Offset): DataFrame = ???

  override def commit(end: Offset): Unit = ???

  override def stop(): Unit = ???
}

object InMemoryRandomStrings {

  lazy val schema = StructType(List(StructField(&quot;value&quot;, StringType), StructField(&quot;ts&quot;, LongType)))

}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Notice that we added a variable called &lt;code&gt;offset&lt;/code&gt; that will keep track of the seen data. Then, we return &lt;code&gt;None&lt;/code&gt; if our source has never seen any data, &lt;code&gt;Some(offset)&lt;/code&gt; otherwise.&lt;/p&gt;
&lt;p&gt;Now, let&apos;s see how our source can produce some data; we will use a running thread for it. Please notice the &lt;strong&gt;dataGeneratorStartingThread&lt;/strong&gt; function.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;class InMemoryRandomStrings(sqlContext: SQLContext) extends Source {
  private var offset: LongOffset = LongOffset(-1)

  private var batches = collection.mutable.ListBuffer.empty[(String, Long)]

  private val incrementalThread = dataGeneratorStartingThread()

  override def schema: StructType = InMemoryRandomStrings.schema

  override def getOffset: Option[Offset] = this.synchronized {
    println(s&quot;getOffset: $offset&quot;)

    if (offset.offset == -1) None else Some(offset)
  }

  override def getBatch(start: Option[Offset], end: Offset): DataFrame = ???

  override def commit(end: Offset): Unit = ???

  override def stop(): Unit = incrementalThread.stop()

  private def dataGeneratorStartingThread() = {
    val t = new Thread(&quot;increment&quot;) {
      setDaemon(true)
      override def run(): Unit = {

        while (true) {
          try {
            this.synchronized {
              offset = offset + 1

              val value = Random.nextString(Random.nextInt(5))

              batches.append((value, offset.offset))
            }
          } catch {
            case e: Exception =&gt; println(e)
          }

          Thread.sleep(100)
        }
      }

    }

    t.start()

    t
  }
}

object InMemoryRandomStrings {

  lazy val schema = StructType(List(StructField(&quot;value&quot;, StringType), StructField(&quot;ts&quot;, LongType)))

}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here, we have added a thread that generates random values and increments the offset, storing both the value and the offset in an internal buffer. The thread starts running as soon as our source is created. The &lt;code&gt;stop&lt;/code&gt; function stops the running thread.&lt;/p&gt;
&lt;p&gt;At this point, we are only two functions away from our goal.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;getBatch&lt;/code&gt; returns a &lt;strong&gt;DataFrame&lt;/strong&gt; back to Spark with data within the passed offset range.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;override def getBatch(start: Option[Offset], end: Offset): DataFrame = this.synchronized {

    val s = start.flatMap(LongOffset.convert).getOrElse(LongOffset(-1)).offset + 1
    val e = LongOffset.convert(end).getOrElse(LongOffset(-1)).offset + 1

    println(s&quot;generating batch range $start ; $end&quot;)

    val data = batches
      .par
      .filter { case (_, idx) =&gt; idx &gt;= s &amp;#x26;&amp;#x26; idx &amp;#x3C;= e }
      .map { case (v, _) =&gt; (v, v.length) }
      .seq

    val rdd = sqlContext
      .sparkContext
      .parallelize(data)
      .map { case (v, l) =&gt; InternalRow(UTF8String.fromString(v), l.toLong) }

    sqlContext.sparkSession.internalCreateDataFrame(rdd, schema, isStreaming = true)
  }
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We can see that we read the data from our internal buffer, keeping only the entries whose indexes fall within the requested range. From there, we generate the &lt;strong&gt;DataFrame&lt;/strong&gt; that we then send back to Spark.&lt;/p&gt;
&lt;p&gt;Finally, &lt;code&gt;commit&lt;/code&gt; is how Spark indicates to us that it will not request offsets less than or equal to the one being passed. In other words, we can remove all data from our internal buffer with an offset less than or equal to the one passed to &lt;code&gt;commit&lt;/code&gt;. In this way, we can save some memory and avoid running out of heap space.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;override def commit(end: Offset): Unit = this.synchronized {

    val committed = LongOffset.convert(end).getOrElse(LongOffset(-1)).offset

    val toKeep = batches.filter { case (_, idx) =&gt; idx &gt; committed }

    batches = toKeep
  }
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, we have completed our source; the entire code is the following:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// Typical imports for the complete source (assuming Spark 2.x)
import org.apache.spark.sql.{DataFrame, SQLContext}
import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.execution.streaming.{LongOffset, Offset, Source}
import org.apache.spark.sql.types.{LongType, StringType, StructField, StructType}
import org.apache.spark.unsafe.types.UTF8String

import scala.util.Random

class InMemoryRandomStrings(sqlContext: SQLContext) extends Source {
  private var offset: LongOffset = LongOffset(-1)

  private var batches = collection.mutable.ListBuffer.empty[(String, Long)]

  private val incrementalThread = dataGeneratorStartingThread()

  override def schema: StructType = InMemoryRandomStrings.schema

  override def getOffset: Option[Offset] = this.synchronized {
    println(s&quot;getOffset: $offset&quot;)

    if (offset.offset == -1) None else Some(offset)
  }

  override def getBatch(start: Option[Offset], end: Offset): DataFrame = this.synchronized {

    val s = start.flatMap(LongOffset.convert).getOrElse(LongOffset(-1)).offset + 1
    val e = LongOffset.convert(end).getOrElse(LongOffset(-1)).offset + 1

    println(s&quot;generating batch range $start ; $end&quot;)

    val data = batches
      .par
      .filter { case (_, idx) =&gt; idx &gt;= s &amp;#x26;&amp;#x26; idx &amp;#x3C;= e }
      .map { case (v, _) =&gt; (v, v.length) }
      .seq

    val rdd = sqlContext
      .sparkContext
      .parallelize(data)
      .map { case (v, l) =&gt; InternalRow(UTF8String.fromString(v), l.toLong) }

    sqlContext.sparkSession.internalCreateDataFrame(rdd, schema, isStreaming = true)
  }

  override def commit(end: Offset): Unit = this.synchronized {

    val committed = LongOffset.convert(end).getOrElse(LongOffset(-1)).offset

    val toKeep = batches.filter { case (_, idx) =&gt; idx &gt; committed }

    batches = toKeep
  }

  override def stop(): Unit = incrementalThread.stop()

  private def dataGeneratorStartingThread() = {
    val t = new Thread(&quot;increment&quot;) {
      setDaemon(true)
      override def run(): Unit = {

        while (true) {
          try {
            this.synchronized {
              offset = offset + 1

              val value = Random.nextString(Random.nextInt(5))

              batches.append((value, offset.offset))
            }
          } catch {
            case e: Exception =&gt; println(e)
          }

          Thread.sleep(100)
        }
      }
    }

    t.start()

    t
  }
}


object InMemoryRandomStrings {
  lazy val schema = StructType(List(StructField(&quot;value&quot;, StringType), StructField(&quot;ts&quot;, LongType)))
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Using Our Custom Source&lt;/h2&gt;
&lt;p&gt;Now, we need to plug our source into the Spark Structured Streaming API by indicating the correct format to be used.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val r = sparkSession
    .readStream
    .format(&quot;com.github.anicolaspp.spark.sql.streaming.DefaultSource&quot;)
    .load()
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here, we use the regular &lt;code&gt;.readStream&lt;/code&gt; API and specify that the stream format is our implementation of &lt;code&gt;StreamSourceProvider&lt;/code&gt;, that is: &lt;em&gt;&lt;strong&gt;com.github.anicolaspp.spark.sql.streaming.DefaultSource&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Now we can query our streaming source as any other &lt;strong&gt;DataFrame&lt;/strong&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;r.createTempView(&quot;w&quot;)

    sparkSession
      .sql(&quot;select ts, count(*) as c from w group by ts order by ts, c desc&quot;)
      .writeStream
      .format(&quot;console&quot;)
      .outputMode(OutputMode.Complete())
      .start()
      .awaitTermination()
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The output will look similar to this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;-------------------------------------------
Batch: 3
-------------------------------------------
+---+---+
| ts|  c|
+---+---+
|  0| 81|
|  1| 78|
|  2| 74|
|  3| 82|
|  4| 80|
+---+---+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;What we see is a continuous aggregation of the data generated by our source.&lt;/p&gt;
&lt;h2&gt;Conclusions&lt;/h2&gt;
&lt;p&gt;Apache Spark is the way to go when processing data at scale, and its capabilities outperform almost any other tool out there. It can also be extended in many different ways, and as we have seen, we can write our own data sources and streaming sources so they can be plugged into our Spark code with ease.&lt;/p&gt;
&lt;p&gt;Originally posted January 14, 2019, &lt;a href=&quot;https://hackernoon.com/spark-custom-stream-sources-ec360b8ae240&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;The entire project and source code can be found here: &lt;a href=&quot;https://github.com/anicolaspp/SparkStreamSources&quot;&gt;SparkStreamSources&lt;/a&gt;.&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Happy Coding.&lt;/em&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Journey Science in Telecom: Take Customer Experience to the Next Level]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may…]]></description><link>https://developer.hpe.com/journey-science-in-telecom-take-customer-experience-to-the-next-level/</link><guid isPermaLink="false">https://developer.hpe.com/journey-science-in-telecom-take-customer-experience-to-the-next-level/</guid><pubDate>Thu, 14 Jan 2021 05:27:21 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Ronald van Loon&quot;,
&quot;publish&quot;: &quot;2017-06-03T12:00:00.000&quot;,
&quot;tags&quot;: &quot;mapr&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;Journey Science, being derived from connected data from different customer activities, has become pivotal for the telecommunications industry, providing the means to drastically improve the customer experience and retention. It has the ability to link together scattered pieces of data and enhance a telco business’s objectives. Siloed approaches are becoming obsolete – take call centers as an example – there is only so much that you can do with data from only one system.&lt;/p&gt;
&lt;p&gt;By using insights from customer journey analytics, telco businesses can better measure the user experience, and make informed decisions for refining it. The data not only allows them to take a proactive approach towards customer satisfaction, but enables the prediction of future failures as well. With customer journey analytics, you can evaluate the touchpoints to journeys, and revamp your strategies to better cater to customers’ needs.&lt;/p&gt;
&lt;p&gt;In the telecom industry, it is difficult for a business to effectively manage the massive volume of data with existing systems and technology. There are several areas where telecom companies need to make improvements, such as reducing costs, improving customer experience, and increasing conversion rates. To do so, they need to derive meaning from the collected data by finding connections within it. This linked data is also known as journeys. Journeys provide relevant data that enables well-grounded business decisions by looking at customer transactions as a whole and determining where direct improvements are needed.&lt;/p&gt;
&lt;h2&gt;Customer Journey Analytics is Transforming Telecommunications&lt;/h2&gt;
&lt;p&gt;Many leading telco businesses are embracing the Journey Science concept and deem it the best way to make a greater impact on their target audience. One good way to better understand digital journeys is through a multi-channel, end-to-end view. Journey Science, at its best, provides enhanced data accessibility and increased analytics agility, and helps weave together disparate pieces of data. This makes it possible for telco businesses to link structured and unstructured data back to their strategic objectives and quickly modify them to cope with evolving customer demands. However, in order to gain insight into customer experience through journey analytics, it is critical to focus not only on individual moments but on the customers’ end-to-end experiences as well.&lt;/p&gt;
&lt;h2&gt;Customer Experience Boost&lt;/h2&gt;
&lt;p&gt;The main benefit of customer journey analytics for telco companies is that it enables them to better recognize customer needs and assess their satisfaction level. While most people think Journey Science is all about marketing, it mainly focuses on the services domain. For example, a customer seeking technical support for their device has multiple paths to resolution. Journey Science enables businesses to evaluate each step of the journey experience and figure out the critical points that could negatively impact customer experience. With this kind of information, businesses can develop strategies to overcome hurdles customers face on all such touchpoints, resulting in improved customer experience.&lt;/p&gt;
&lt;h2&gt;Improving Customer Journeys through Transparency&lt;/h2&gt;
&lt;p&gt;&lt;em&gt;Connecting the Dots&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;To improve customer experience, it is essential to connect all the data down to the individual customer level to fully understand the required changes. For telco businesses to completely understand customer journeys, they must gather data from many different channels and track the individual journey each customer experiences. Typically, more than 50 percent of customers make multi-channel journeys, which means that connecting all of the data is extremely important for understanding their behavior. Because technology is so deeply rooted in everyday life, many journeys start in digital channels but eventually move into a human channel for completion.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Utilizing Aggregate and Raw Data&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Apart from giving a complete picture of customer journeys, the analytics let you tap into different levels of aggregation, allowing you to view raw data as well. With journey mapping, telco businesses can benefit from both in-depth data points and aggregated data sets. Since a single customer journey can compile hundreds of thousands of data points, having aggregated views makes it much easier to pinpoint and prioritize the problematic areas. On the other hand, some journeys may yield unclear results, for example, unusual behavior of a customer on a webpage. In such cases, access to the raw data lets you focus on one key area and gain invaluable insights.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Making Changes through Data Availability&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Effective utilization of data from customer journey analytics allows telcos to revamp their strategies as well as make smaller improvements on a continuous basis. Getting immediate feedback on a given change is critical for understanding its impact: you can determine whether the intended results will be realized, or whether you should scale up or sustain the change. However, a manual, project-based approach that only provides an overview of the required data will not be enough to transform journeys successfully. Instead, you should opt for an agile, iterative, analytic approach built on continuous data availability.&lt;/p&gt;
&lt;p&gt;It wouldn’t be wrong to say that all those ad-hoc, manual, project-based approaches using snapshots of data have severe limitations.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/picture1-1610602144488.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Better data accessibility to more than 18 telco raw data sources as a prerequisite&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;How the Customer Journey Differs in Fixed and Mobile Telco&lt;/h2&gt;
&lt;p&gt;&lt;em&gt;Mobile (mobile data usage, subscriptions, charges, and mobile data access)&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Several small customer journeys can be linked together to make improvements to a mobile telco operation. One great way is through customer engagement, i.e. moving down to individualized journeys of each customer instead of mass-segmentation. Journey Science opens doors for mobile telco companies to take personalization up a notch and provide customized recommendations based on the journeys of each customer. You should also utilize real-time context to enhance customer engagement for better results.&lt;/p&gt;
&lt;p&gt;Mobile customer experience comprises several touchpoints where a subscriber interacts with a service provider agent – during retail, billing, customer support, visible marketing campaigns, and so on. Consider the three customers below, who take three different journeys to perform the same action.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/picture2-1610602154635.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Fixed line providers (phone, internet, entertainment)&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Fixed line providers have an additional interaction channel with field technicians being deployed to customers’ homes for service. These field service appointments are a major part of customer experience and often have significant variability for different customers. Consider the following journey which involves multiple appointments, agent phone calls, and delays:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/picture3-1610602163126.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Improving key journeys for fixed telcos&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;Journey Science is Moving towards Predictive Analytics&lt;/h2&gt;
&lt;p&gt;The Journey Science concept is becoming increasingly popular across the telco industry, as assessing the journeys of individual customers allows businesses to develop customized strategies. Moreover, it allows telco businesses to anticipate the potential pitfalls that lead to a negative customer experience and prevent them altogether. By tapping into customer journey data, telcos can streamline their operations and provide a better, more satisfying experience to their customers.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/picture4-1610602170940.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Derived value from Customer Journey data by Journey Science &amp;#x26; Journey Analytics&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;In today’s world, customer satisfaction is the keystone for success in every industry, including telco. Businesses should turn to the Journey Science movement and optimize their processes by carefully analyzing customer journeys and making improvements accordingly. Effective utilization of customer journey analytics leads to better redesign efforts, ultimately reducing costs, enhancing customer experience, and strengthening the bottom line.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;This article was originally published on February 1, 2017 at &lt;a href=&quot;http://www.datasciencecentral.com/profiles/blogs/journey-science-in-telecom-take-customer-experience-to-the-next&quot;&gt;Data Science Central&lt;/a&gt;. Follow Ronald van Loon on &lt;a href=&quot;https://www.linkedin.com/in/ronald-van-loon-5411a/&quot;&gt;LinkedIn&lt;/a&gt; or &lt;a href=&quot;https://twitter.com/Ronald_vanLoon&quot;&gt;Twitter&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Introducing the HPE GL20 IoT Gateway]]></title><description><![CDATA[HPE IoT Gateways enable organizations to rapidly acquire, analyze and take action on real-time data as it’s being collected for additional…]]></description><link>https://developer.hpe.com/introducing-the-hpe-gl20-iot-gateway/</link><guid isPermaLink="false">https://developer.hpe.com/introducing-the-hpe-gl20-iot-gateway/</guid><pubDate>Tue, 12 Jan 2021 10:36:54 GMT</pubDate><content:encoded>&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/h4-1610447678726.jpg&quot; width=&quot;80%&quot; height=&quot;80%&quot;&gt;  
&lt;p&gt;HPE IoT Gateways enable organizations to rapidly acquire, analyze and take action on real-time data as it’s being collected for additional analysis at a later stage. Bringing computing and analytics close to the edge accelerates the speed of your decision-making and reduces the chance of lost opportunities or a missed red flag. In this post, I’ll be discussing the HPE GL20 IoT Gateway, a fun device for the edge. The HPE GL20 has an 8-bit digital input/output (DIO) capability designed specifically to address IoT needs. For detailed product information, check out the &lt;a href=&quot;https://h20195.www2.hpe.com/v2/GetDocument.aspx?docname=c04884769&quot;&gt;Quick Spec&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;In this post, I would like to focus on the DIO capability and demonstrate how to develop with the digital input/output pins. After reading this tutorial, you will understand how to work with the DIO feature using Python. The two GitHub repositories shown below are highly relevant to this blog post. Make sure you can access them:&lt;/p&gt;
&lt;p&gt;•	&lt;a href=&quot;https://github.com/helloezmeral/HPE-GL20-GPIO&quot;&gt;GitHub – helloezmeral/HPE-GL20-GPIO&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;•	&lt;a href=&quot;https://github.com/helloezmeral/HPE-GL20-gRPC&quot;&gt;GitHub - helloezmeral/HPE-GL20-gRPC&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The first repository contains information used for controlling input/output pins using Python directly, and the second repository is about wrapping the code into a gRPC service and preparing a docker image for developers to access DIO on the HPE GL20 easily.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; gRPC is a high-performance, open source universal RPC framework. This open source remote procedure call system was initially developed at Google in 2015.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Here we go. Let’s have some fun!&lt;/p&gt;
&lt;h2&gt;Potential use case&lt;/h2&gt;
&lt;p&gt;Digital input/output pins tend to be important for edge applications because they need to interact with microcontrollers, like IoT sensors or actuators, which may not have the ability to communicate wirelessly. It can be very handy to work with DIO.  You can connect the output pin to LEDs for indication, relays, or even interrupt pins on microcontrollers to initiate an edge process.&lt;/p&gt;
&lt;p&gt;For example, you can design an IoT device that sleeps all the time to conserve battery power and wake it up using output pins. Input pins on an HPE GL20 can be used to measure the environment or as a hardware control button. A touch sensor that sends a high signal when the board is touched, or a PIR sensor that sends a high signal when a human passes by, can easily be integrated with the HPE GL20. For example, if you want to design an edge device that detects anyone passing by, takes a photo, and sends it back to the cloud, the HPE GL20 is a great device for achieving this goal.&lt;/p&gt;
&lt;h2&gt;Demo&lt;/h2&gt;
&lt;p&gt;To illustrate how to use the DIO feature on an HPE GL20, I set up a touch sensor as the input to the HPE GL20 and two LEDs as the output. See the picture below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/h3-1610447669565.png&quot; alt=&quot;h3&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/h2-1610447657582.jpg&quot; alt=&quot;h2&quot;&gt;&lt;/p&gt;
&lt;p&gt;DIO is enabled internally by an i2c device. Thus, we need to set up an i2c environment before we can use it. In Bash, you may need to install &lt;code&gt;i2c-tools&lt;/code&gt; with &lt;code&gt;sudo apt install i2c-tools&lt;/code&gt;. In Python, you may need to install the module with &lt;code&gt;sudo pip3 install smbus2&lt;/code&gt;. You&apos;ll also need to determine which bus the i2c-gpio chip is connected to; run &lt;code&gt;sudo i2cdetect -l&lt;/code&gt; to obtain the bus number. Once everything is installed, download the file &lt;strong&gt;&lt;em&gt;pyGL20.py&lt;/em&gt;&lt;/strong&gt; into the same folder as your Python script.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo apt install i2c-tools
sudo pip3 install smbus2
sudo i2cdetect -l
wget https://raw.githubusercontent.com/helloezmeral/HPE-GL20-GPIO/master/pyGL20.py
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Before writing the script, we need to know how the circuit is routed internally. The input pins are configured as pull-up, which means that when nothing is connected to a pin, it returns a high signal. The output pins are configured as open collector, which means each pin either connects to ground or is left floating; in technical terms, this is referred to as output low or high impedance.&lt;/p&gt;
&lt;p&gt;Now you can start writing your code. The syntax for writing GPIO code is very similar to Arduino code. First, import the module with &lt;code&gt;from pyGL20 import GPIO&lt;/code&gt;. Next, initiate the i2c bus communication channel; if your GPIO chip is connected to i2c-0, you should write &lt;code&gt;IO = GPIO(0)&lt;/code&gt;. Once this is done, you can control all pins with commands like &lt;code&gt;IO.digitalWrite(IO.PIN6, True)&lt;/code&gt; to set pin 6 high, or &lt;code&gt;IO.digitalRead(IO.PIN0)&lt;/code&gt; to read the level of pin 0. In the Arduino world, you would use the signature &lt;code&gt;setup()&lt;/code&gt; and &lt;code&gt;loop()&lt;/code&gt; functions; here you can do the same thing with a &lt;code&gt;while True:&lt;/code&gt; loop. Another useful Arduino feature is &lt;code&gt;delay(1000)&lt;/code&gt;, which pauses the processor for 1000 ms. In Python, you can do a similar thing by importing the time module and calling &lt;code&gt;time.sleep(1)&lt;/code&gt; to sleep for one second.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# blink.py
from pyGL20 import GPIO
import time

&quot;&quot;&quot;
Please run 
::
sudo i2cdetect -l
::
to find the corresponding i2c-x bus of &quot;SMBus I801 adapter at f040&quot;
and replace the number x in the variable below.
&quot;&quot;&quot;
IO = GPIO(0) # This means i2c-0

while True:
    print(IO.digitalWrite(IO.PIN6, True))
    time.sleep(0.1)
    print(IO.digitalWrite(IO.PIN(6), False)) # both work
    time.sleep(0.1)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can find more information about this in this repository:
&lt;a href=&quot;https://github.com/helloezmeral/HPE-GL20-GPIO&quot;&gt;https://github.com/helloezmeral/HPE-GL20-GPIO&lt;/a&gt;&lt;/p&gt;
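&lt;p&gt;To tie this back to the demo hardware described above, here is a small, hedged variation of the blink script that reads the touch sensor on an input pin and mirrors its state onto an LED pin. The pin assignments (PIN0 for the sensor, PIN6 for the LED) are assumptions for illustration; adjust them to match your own wiring.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# touch_led.py - illustrative sketch; pin assignments are assumptions
from pyGL20 import GPIO
import time

IO = GPIO(0)  # i2c-0, as reported by sudo i2cdetect -l on this machine

while True:
    touched = IO.digitalRead(IO.PIN0)        # remember: input pins are pull-up, so an open pin reads high
    IO.digitalWrite(IO.PIN6, bool(touched))  # mirror the sensor state onto the LED pin
    time.sleep(0.05)
&lt;/code&gt;&lt;/pre&gt;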
&lt;h2&gt;Using gRPC service&lt;/h2&gt;
&lt;p&gt;To further assist you in writing a GPIO application, I have prepared a docker-compose file that you can use to deploy a gRPC server to control the digital I/O pins within seconds. gRPC is a modern, open source, high-performance RPC framework. With this service installed, you can control the GPIO of an HPE GL20 over the network. I have prepared all the necessary files &lt;a href=&quot;https://github.com/helloezmeral/HPE-GL20-gRPC/releases/tag/v0.9&quot;&gt;here&lt;/a&gt; in a GitHub release, which includes a docker-compose YAML file and two gRPC code files: &lt;strong&gt;&lt;em&gt;GL20_pb2.py&lt;/em&gt;&lt;/strong&gt; contains the generated request and response classes, and &lt;strong&gt;&lt;em&gt;GL20_pb2_grpc.py&lt;/em&gt;&lt;/strong&gt; contains the generated client and server classes.&lt;/p&gt;
&lt;p&gt;First, set up the gRPC service on the HPE GL20. Download the file &lt;strong&gt;&lt;em&gt;docker-compose.yaml&lt;/em&gt;&lt;/strong&gt; and run it with &lt;code&gt;docker-compose up&lt;/code&gt; or &lt;code&gt;docker-compose up -d&lt;/code&gt; (the &lt;code&gt;-d&lt;/code&gt; flag runs it in detached mode). Hooray! You just deployed a gRPC service for controlling the HPE GL20 GPIO.&lt;/p&gt;
&lt;p&gt;The next step prepares the files required for the client call. &lt;strong&gt;&lt;em&gt;GL20_pb2.py&lt;/em&gt;&lt;/strong&gt; and &lt;strong&gt;&lt;em&gt;GL20_pb2_grpc.py&lt;/em&gt;&lt;/strong&gt; are required since they contain the code that defines the calling procedure and the information exchanged. The syntax for controlling the GPIO pins is nearly the same as in the first approach.&lt;/p&gt;
&lt;p&gt;You can read the example code in the gRPC folder of the GitHub repository &lt;a href=&quot;https://github.com/helloezmeral/HPE-GL20-gRPC&quot;&gt;here&lt;/a&gt;. The repository also provides the Docker image for the gRPC microservice that exposes the GPIO of the GL20.&lt;/p&gt;
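&lt;p&gt;As a rough idea of what such a client call can look like, here is a hypothetical sketch. The actual service, stub, and message names are defined by the .proto file in the repository and by the generated &lt;strong&gt;&lt;em&gt;GL20_pb2.py&lt;/em&gt;&lt;/strong&gt; and &lt;strong&gt;&lt;em&gt;GL20_pb2_grpc.py&lt;/em&gt;&lt;/strong&gt; files, so the names and port below are placeholders only.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# grpc_client_sketch.py - hypothetical example; the real stub and message names
# come from the generated GL20_pb2.py / GL20_pb2_grpc.py files
import grpc
import GL20_pb2
import GL20_pb2_grpc

channel = grpc.insecure_channel(&quot;gl20.local:50051&quot;)  # assumed host and port
stub = GL20_pb2_grpc.GL20Stub(channel)               # hypothetical stub class

# Hypothetical call mirroring IO.digitalWrite(IO.PIN6, True)
reply = stub.DigitalWrite(GL20_pb2.DigitalWriteRequest(pin=6, value=True))
print(reply)
&lt;/code&gt;&lt;/pre&gt;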
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;In this tutorial, I have shown you two ways to write a simple Python script to access the HPE GL20 DIO. The first way uses a Bash utility named &lt;strong&gt;&lt;em&gt;i2c-tools&lt;/em&gt;&lt;/strong&gt; and a Python module called &lt;strong&gt;&lt;em&gt;smbus2&lt;/em&gt;&lt;/strong&gt; to access the DIO directly. This is suitable for those who want full control of their code. The second way is an easy-to-use gRPC service Docker image that exposes DIO over the network, so you can control DIO from the same device or remotely from the cloud.&lt;/p&gt;
&lt;p&gt;I just wanted to note that using &lt;code&gt;time.sleep(1)&lt;/code&gt; as a delay is not always a wise option when it comes to writing an embedded application. During the delay, Python does nothing but wait, which makes your code inefficient. Python concurrency mechanisms such as threading or multiprocessing are recommended instead; a minimal threading sketch follows the list of links below. However, this really isn&apos;t today&apos;s topic. If you want to learn more about it, you might want to check out these blog posts on Python concurrency.&lt;/p&gt;
&lt;p&gt;•	&lt;a href=&quot;/blog/understanding-concurrency-in-python-part-1-threading&quot;&gt;HPE Developer | Understanding Concurrency in Python Part 1 – Threading&lt;/a&gt;&lt;br&gt;
•	&lt;a href=&quot;/blog/understanding-concurrency-in-python-part-2-multiprocessing&quot;&gt;HPE Developer | Understanding Concurrency in Python Part 2 – Multiprocessing&lt;/a&gt;&lt;br&gt;
•	&lt;a href=&quot;/blog/understanding-concurrency-in-python-part-3-asyncio&quot;&gt;HPE Developer | Understanding Concurrency in Python Part 3 – Asyncio&lt;/a&gt;&lt;br&gt;
•	&lt;a href=&quot;https://docs.python.org/3/library/concurrent.futures.html&quot;&gt;concurrent.futures — Launching parallel tasks&lt;/a&gt;&lt;/p&gt;
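&lt;p&gt;Here is that sketch (illustrative only, not from the original post): the blink loop moves onto a background thread so the main program stays free to do other work.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# threaded_blink.py - minimal sketch of moving the blink loop off the main thread
import threading
import time
from pyGL20 import GPIO

IO = GPIO(0)  # adjust to the bus reported by sudo i2cdetect -l

def blink(stop_event: threading.Event, interval: float = 0.1) -&gt; None:
    state = False
    while not stop_event.is_set():
        state = not state
        IO.digitalWrite(IO.PIN6, state)  # toggle the LED pin
        time.sleep(interval)             # only this worker thread sleeps

stop = threading.Event()
worker = threading.Thread(target=blink, args=(stop,), daemon=True)
worker.start()

# The main thread is free to do other work here (read sensors, serve requests, etc.)
time.sleep(5)
stop.set()
worker.join()
&lt;/code&gt;&lt;/pre&gt;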
&lt;p&gt;Another quick note for you: sometimes you might require serial/USART communications with other devices. A Python module called &lt;strong&gt;&lt;em&gt;PySerial&lt;/em&gt;&lt;/strong&gt; is handy for dealing with serial communications.&lt;/p&gt;
&lt;p&gt;There are numerous features on HPE GL20. I&apos;ll just stop here in my discussion of the digital input/output capability of HPE GL20. I hope you enjoyed my tutorial on how to use DIO with the HPE GL20 and find many great ways to apply this feature. If you have any questions regarding this blog post, you can drop a message to me. My email is: &lt;a href=&quot;mailto:cenz@hpe.com&quot;&gt;cenz@hpe.com&lt;/a&gt; . Or reach me via &lt;a href=&quot;https://hpedev.slack.com/&quot;&gt;HPE DEV slack workspace&lt;/a&gt;: &lt;strong&gt;&lt;em&gt;@hpe.cenz&lt;/em&gt;&lt;/strong&gt;. Happy hacking.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Architecting the World’s Largest Biometric Identity System: The Aadhaar Experience]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may…]]></description><link>https://developer.hpe.com/architecting-the-worlds-largest-biometric-identity-system-the-aadhaar-ex/</link><guid isPermaLink="false">https://developer.hpe.com/architecting-the-worlds-largest-biometric-identity-system-the-aadhaar-ex/</guid><pubDate>Thu, 07 Jan 2021 23:06:37 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Michele Nemschoff&quot;,
&quot;publish&quot;: &quot;2015-02-13T08:00:00.000Z&quot;,
&quot;tags&quot;: &quot;use-case&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;Dr. Pramod Varma, Chief Architect and Technology Advisor to Unique Identification Authority of India (UIDAI), gave an informative talk titled “Architecting World&apos;s Largest Biometric Identity System - Aadhaar Experience” at the Strata + Hadoop World 2014 conference held in New York.&lt;/p&gt;
&lt;p&gt;Dr. Varma began his talk by explaining why the Aadhaar project was created. In India, the inability to prove one’s identity is one of the biggest barriers that prevents the poor from accessing benefits and subsidies. India is a country with 1.2 billion residents in over 640,000 villages. The Indian government spends $50 billion on direct subsidies (food coupons for rice, cooking gas, etc.) every year. Both public and private agencies in India require proof of identity before providing services or benefits to those living in India.&lt;/p&gt;
&lt;p&gt;Until the introduction of the Aadhaar program, there was no verifiable identity number program that both residents and agencies could use. As a result, every time Indian residents tried to receive benefits, they had to undergo an arduous personal identification process. What made it even more difficult was that the various service providers had different document and information requirements. This made it especially hard for India’s poor residents, who often lacked documentation and found it difficult to access services.&lt;/p&gt;
&lt;p&gt;The &lt;a target=&apos;\_blank&apos;  href=&apos;http://en.wikipedia.org/wiki/Unique_Identification_Authority_of_India&apos;&gt;&lt;u&gt;Unique Identification (Aadhaar) project&lt;/u&gt;&lt;/a&gt; was created in order to provide every resident of India with a unique identification number that can be used to access a variety of services and benefits. The project enables residents in India to receive food coupons, receive cooking gas deliveries, open checking accounts, apply for loans, insurance, pensions, property deeds, etc. In addition, the program makes it possible for the Indian government to make sure that welfare benefits go directly to the right person.&lt;/p&gt;
&lt;p&gt;They are using MapR to build the project’s biometric database (the largest in the world), which can verify a person’s identity within 200 milliseconds. The database includes an iris scan, digital fingerprints, a digital photo, and text-based data for every resident.&lt;/p&gt;
&lt;p&gt;The uniqueness of an Aadhaar identity makes it possible to eliminate fake and duplicate accounts, and the online authentication system provides a mechanism for the paperless, electronic and instantaneous verification of a person’s identity. Over 690 million Aadhaars (IDs) have been issued so far, with 10 million new IDs issued every 10 days. The target is to complete 1 billion IDs by 2015.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/aadhaar-number-id-1610061218903.png&quot; alt=&quot;Aadhaar Project ID Number&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Architecture at a Glance&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The entire technology architecture behind Aadhaar is based on principles of openness, linear scalability, strong security, and most importantly vendor neutrality. The backbone of the Aadhaar technology was developed using the following principles:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Open architecture&lt;/strong&gt; – Building the Aadhaar system with true openness meant that they relied on open standards to ensure interoperability; the platform approach with open APIs made it possible for the ecosystem to build on top of Aadhaar APIs; vendor neutrality was ensured across the application components by using open and standard interfaces. The identity system was designed to work with any device, any form factor, and on any network.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Design for scale&lt;/strong&gt; – The Aadhaar system is expected to issue more than 1.2 billion identities, and will continue to grow as the resident population expands. Since every new enrollment requires biometric de-duplication across the entire system, every component needs to scale to very large volumes. This meant that the system needed to be able to handle hundreds of millions of transactions across billions of records doing hundreds of trillions of biometric matches every day. In addition all online services such as Aadhaar authentication, e-KYC services, and update services must work with high availability and sub-second performance. In order to achieve such massive scalability, the program established network and data center load balancing and a multi-location distributed architecture for horizontal scale.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Data Security&lt;/strong&gt; – The security and privacy of one’s data is a foundation of the Aadhaar system. The system uses 2048-bit PKI encryption and tamper detection using HMAC in order to ensure that no one can decrypt and misuse the data. Resident data and raw biometrics are always kept encrypted, even within UIDAI data centers. In addition, the system does not keep track of any transactional data.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Enrollment&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Since the Aadhaar system launched four years ago, the Aadhaar platform has grown in capability. Over 690 million Aadhaar numbers have been issued so far using the system, making it the largest biometric identity repository in the world, resulting in 600+ trillion biometric matches every day. They are on target to complete 1 billion Aadhaars by 2015.&lt;/p&gt;
&lt;p&gt;The amount of biometric data collected is approximately 3-5 MB per person, which maps to a total of 10-15 petabytes of data.&lt;/p&gt;
&lt;p&gt;Between 60,000 and 80,000 small laptops with the Aadhaar enrollment system installed are used in remote villages. Laptops, generators, chairs, and tables are transported to the villages via donkeys, and the systems are then set up in each camp. There are currently 150,000 operators and supervisors who are trained and certified to operate the enrollment stations. On average, each station enrolls about 50 people per day, resulting in approximately 1 million new enrollments every day.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Authentication&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The Aadhaar authentication system provides multi-factor authentication, based on “what you have” (something the user uniquely has, such as a mobile phone or laptop that accesses email, etc.) and “who you are” using resident fingerprints patterns, iris scans, a signature, or handwriting. By combining one or more factors, the resident’s authentication could be strengthened. In addition, Authentication User Agency (AUA)-specific factors such as ATM cards or smart cards or passwords may also be used in conjunction with Aadhaar authentication to further strengthen user authentication.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/aadhaar-project-server-provider-1610061236041.png&quot; alt=&quot;Aadhaar project server provider&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Authentication Overview&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Aadhaar in Action&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/aadhaar-action-1-1610061249888.png&quot; alt=&quot;Aadhaar in action&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Payments through micro ATMs&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/aadhaar-action-2-1610061264885.png&quot; alt=&quot;Aadhaar project in action&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Future Plans&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Future plans call for Aadhaar to be used for digital signatures, electronic documents and digital locker services. Also, Aadhaar could also be used for college/university certificates, as well as credit registries. Dr. Varma emphasized that the Aadhaar system is a great example of using Hadoop technology to make a difference to every resident of India. The Aadhaar identity platform and Aadhaar-enabled applications are helping a billion people in India to participate in the digital economy, and benefit from government, public and private sector services that are tailored to them.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Apache Spark as a Distributed SQL Engine]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may…]]></description><link>https://developer.hpe.com/apache-spark-as-a-distributed-sql-engine/</link><guid isPermaLink="false">https://developer.hpe.com/apache-spark-as-a-distributed-sql-engine/</guid><pubDate>Thu, 07 Jan 2021 22:53:50 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Nicolas A Perez&quot;,
&quot;publish&quot;: &quot;2016-03-17T07:00:00.000Z&quot;,
&quot;tags&quot;: &quot;apache-spark&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;SQL has been around for a while, and people like it. However, the engines that power SQL have changed over time in order to solve new problems and keep up with demands from consumers.&lt;/p&gt;
&lt;p&gt;Traditional engines such as Microsoft SQL Server had some problems with scalability that they have solved with time and cloud-based solutions. On the other hand, others have been built from the ground up to work in a distributed environment so they can put performance at the top of their priority list.&lt;/p&gt;
&lt;p&gt;There is not a tool for all use cases. In fact, we believe that tools are built with use cases in mind, to solve a specific problem. Then they evolve to a more mature stage where they can be used to solve many other problems.&lt;/p&gt;
&lt;p&gt;In a traditional SQL environment, the data is represented by tables and the relationships between them, but this representation is sometimes not enough, so new tools have been developed to solve this. We can find organizations everywhere that don’t use relational databases; instead, they prefer non-SQL alternatives.&lt;/p&gt;
&lt;h2&gt;Hadoop&lt;/h2&gt;
&lt;p&gt;In the Hadoop world, we have a variety of different query engines; each of them has its own particularities, and they each solve a wide variety of problems.&lt;/p&gt;
&lt;p&gt;In any Hadoop distribution, we can find Apache Hive, a SQL-like tool that offers data warehouse infrastructure and capabilities for big data queries and analysis.&lt;/p&gt;
&lt;p&gt;Depending on the Hadoop distribution, we can also find Apache Impala and Apache Drill. All of them offer more or less the same capabilities, sharing a common goal. We can use SQL or SQL-like languages to query data stored in Hadoop. They also have their own limitations and advantages that you should be aware of.&lt;/p&gt;
&lt;h2&gt;Apache Spark&lt;/h2&gt;
&lt;p&gt;Apache Spark is a lightning-fast cluster computing engine that can be deployed in a Hadoop cluster or in standalone mode. It can also be used as a SQL engine like the others we mentioned. Spark, however, offers some advantages over the previous ones.&lt;/p&gt;
&lt;p&gt;Spark exposes APIs for different languages such as Scala, Java, Python, and R. This makes it accessible to many types of people, such as developers, data scientists, and those with statistics experience.&lt;/p&gt;
&lt;p&gt;Iterative algorithms are easily implemented in Spark, especially machine learning ones.&lt;/p&gt;
&lt;p&gt;Let’s walk through an example of how to use Spark as a SQL engine.&lt;/p&gt;
&lt;h2&gt;Exploring Our Data Source&lt;/h2&gt;
&lt;p&gt;Our data set is a simple folder with a few terabytes of CSV-formatted files, and each file is about 40 MB. The size of the files does not affect performance, because they are stored in a MapR cluster. MapR takes care of the &lt;a href=&quot;https://blog.cloudera.com/the-small-files-problem/&quot;&gt;Hadoop small file problem&lt;/a&gt; as I explain in &lt;a href=&quot;https://medium.com/hackernoon/how-mapr-improves-our-productivity-and-simplify-our-design-2d777ab53120&quot;&gt;this post&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Because we are using MapR, copying files to the cluster is quite easy, since we have mounted a volume to our local file system.&lt;/p&gt;
&lt;p&gt;In order to mount the MapR volume, we run this command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo mount_nfs -o &quot;hard,nolock&quot; 10.21.112.209:/mapr/mapr.domain.com/datalake /Users/anicolaspp/mapr/
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, if we run POSIX commands again in our local folder, they will in fact be executed in the MapR cluster.&lt;/p&gt;
&lt;h2&gt;Preparing the Environment for Auto Schema Discovery&lt;/h2&gt;
&lt;p&gt;We are going to create a Spark application using Scala that will allow us to execute SQL statements over our data stored in the MapR Distribution.&lt;/p&gt;
&lt;p&gt;In &lt;a href=&quot;https://medium.com/@anicolaspp/sbt-scala-and-spark-6a57c0a2623a&quot;&gt;this post&lt;/a&gt; I explained how to create an application in Spark and the preliminary steps we need to follow.&lt;/p&gt;
&lt;p&gt;Our app class will look as follows:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;/**
 * Created by anicolaspp.
 */
 
import org.apache.spark
import org.apache.spark._
import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.sql.hive.thriftserver.HiveThriftServer2
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.types.{StringType, StructField, StructType}

object app {  
    
    def main(args: Array[String]) {    
        
        val conf = new SparkConf().setAppName(&quot;testing&quot;)
        val sc = new SparkContext(conf)
        val sql = new HiveContext(sc)
    
        sql.setConf(&quot;hive.server2.thrift.port&quot;, &quot;10001&quot;)   

        val delimiter = &quot;\t&quot;    
        val data = sc.textFile(&quot;datalake/myTestDataFolder/&quot;)   
        val headers = data.first.split(delimiter)
        val schema = StructType(headers.map(h =&gt; StructField(h, StringType)))
        val rowRDD  = data.map(p =&gt; Row.fromSeq(p.split(delimiter)))
        val dataFrame = sql.createDataFrame(rowRDD, schema)
    
        dataFrame.registerTempTable(&quot;someTableName&quot;)
        
        HiveThriftServer2.startWithContext(sql)
        
        while (true) {      
            
            Thread.`yield`()    
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Let’s review our code.&lt;/p&gt;
&lt;p&gt;First, we create the Spark Context based on a Config object.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val conf = new SparkConf().setAppName(&quot;testing&quot;)
val sc = new SparkContext(conf)
val sql = new HiveContext(sc)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then, we set the thrift port to avoid conflicts with other components such as Hive.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;sql.setConf(&quot;hive.server2.thrift.port&quot;, &quot;10001&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, we set our CSV delimiter that in this case is the tab character. We also set the location of our data set by creating a Resilient Distributed Dataset (RDD) using the Spark Context (sc).&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val delimiter = &quot;\t&quot;
val data = sc.textFile(&quot;datalake/myTestDataFolder/&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;At this point, we want to be able to serve our data without worrying about the schema of our file; we want a self-service BI environment as I explained &lt;a href=&quot;https://medium.com/hackernoon/how-mapr-improves-our-productivity-and-simplify-our-design-2d777ab53120&quot;&gt;here&lt;/a&gt;. Using the headers from our data files, we can create the schema automatically, so we don’t have to worry about schema changes in the future. Once we have the schema, we create a DataFrame that we are going to expose in order to be queried using SQL.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val headers = data.first.split(delimiter)
val schema = StructType(headers.map(h =&gt; StructField(h, StringType)))
val rowRDD = data.map(p =&gt; Row.fromSeq(p.split(delimiter)))
val dataFrame = sql.createDataFrame(rowRDD, schema)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The only part missing is the one that registers our data set as a table in the Hive metastore; we do that as follows:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;dataFrame.registerTempTable(&quot;someTableName&quot;)
HiveThriftServer2.startWithContext(sql)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We have a loop just to keep our app alive. Note that RDD transformations are lazy and will only be executed when a query is submitted for execution.&lt;/p&gt;
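&lt;p&gt;As a quick illustration of that laziness, here is a minimal sketch (reusing the HiveContext sql and the someTableName table registered above) of a query that forces the whole pipeline, from reading the files to building the rows, to actually run:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// Up to this point nothing has been read from disk; the RDD and DataFrame
// definitions are only a plan. Submitting a query materializes the results.
val sample = sql.sql(&quot;SELECT * FROM someTableName LIMIT 10&quot;)
sample.show()
&lt;/code&gt;&lt;/pre&gt;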
&lt;h2&gt;Deploying Our Application&lt;/h2&gt;
&lt;p&gt;We build and test our app using SBT, and the resulting .jar can be copied to the cluster in the same way we copy files in our local file system.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;cp pathToOurJar/app.jar /Users/anicolaspp/mapr/testing
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Remember this is possible because we have previously mounted a MapR volume in our local file system.&lt;/p&gt;
&lt;p&gt;Now we need to submit our application in the cluster, and we do that by using the spark-submit command. Detailed documentation about submitting Spark applications can be found on the &lt;a href=&quot;https://spark.apache.org/docs/latest/running-on-yarn.html&quot;&gt;Spark website&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;In our cluster, we run:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;./spark-submit --master yarn /mapr/mapr.domain.com/datalake/testing/testing_2.10-1.0.jar
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Our application should start running on YARN as we indicated when submitting it.&lt;/p&gt;
&lt;p&gt;Our SQL engine is ready to be queried, so let’s move forward and test it out.&lt;/p&gt;
&lt;h2&gt;SQL Clients&lt;/h2&gt;
&lt;p&gt;An easy way to test our SQL engine is to run &lt;a href=&quot;https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-Beeline%E2%80%93NewCommandLineShell&quot;&gt;beeline&lt;/a&gt;, a command line tool that works as an SQL client.&lt;/p&gt;
&lt;p&gt;We can find beeline in the Spark bin folder. To start it, we type ./beeline.&lt;/p&gt;
&lt;p&gt;Within beeline, we need to connect to the endpoint we defined in our application, so we run:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;!connect jdbc:hive2://localhost:10001
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We should be ready to run SQL statements, but let’s verify we can see the table we registered.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;show tables;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Spark SQL will return a list of the registered tables, including the one we registered in our application (someTableName).&lt;/p&gt;
&lt;p&gt;In the same way, we can connect using other clients such as MicroStrategy or Tableau. We have tried both, and each can build and execute queries on tables registered by Spark applications. We can also combine different sources (Spark SQL, MS SQL Server, Hive, Impala, etc.), which gives us the flexibility of combining relational sources with non-relational data.&lt;/p&gt;
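&lt;p&gt;Any JDBC-capable program can do the same. As a hedged sketch (it assumes the Hive JDBC driver, org.apache.hive.jdbc.HiveDriver, is available on the classpath), a small Scala client could connect to the endpoint we exposed and run a query:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import java.sql.DriverManager

// Connect to the Thrift endpoint started by our application; host, port, and
// table name match the example above, and credentials are left empty.
Class.forName(&quot;org.apache.hive.jdbc.HiveDriver&quot;)
val connection = DriverManager.getConnection(&quot;jdbc:hive2://localhost:10001&quot;, &quot;&quot;, &quot;&quot;)
val statement = connection.createStatement()
val results = statement.executeQuery(&quot;SELECT count(*) FROM someTableName&quot;)
while (results.next()) {
  println(results.getLong(1))
}
connection.close()
&lt;/code&gt;&lt;/pre&gt;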
&lt;p&gt;Spark SQL performs quite well, often better than the other SQL-on-Hadoop providers, but be aware that performance can degrade under certain conditions and use cases.&lt;/p&gt;
&lt;h2&gt;Why Apache Spark&lt;/h2&gt;
&lt;p&gt;Certainly, Spark SQL offers some of the functionality that other tools within Hadoop have. However, the possibility of exploring complex data sets is rather unique to Spark, since we can code custom serialization / deserialization processes in our application. Using Spark SQL, we can connect to any data source and present it as tables to be consumed by SQL clients. This is as easy as changing how we read the data from those sources by changing the serializer in our application.&lt;/p&gt;
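&lt;p&gt;To make that point concrete, here is a small sketch (reusing the SparkContext sc from the application above; the fixed-width parser is purely hypothetical) showing that supporting another source format is mostly a matter of swapping the function that turns raw records into rows:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.spark.sql.Row

// The tab-delimited parser used in this post.
def parseTabDelimited(line: String): Row = Row.fromSeq(line.split(&quot;\t&quot;))

// A hypothetical parser for a fixed-width legacy format: same output type,
// so the rest of the pipeline (schema, DataFrame, Thrift server) is unchanged.
def parseFixedWidth(line: String): Row =
  Row.fromSeq(Seq(line.substring(0, 10).trim, line.substring(10, 20).trim))

val rows = sc.textFile(&quot;datalake/myTestDataFolder/&quot;).map(parseTabDelimited)
&lt;/code&gt;&lt;/pre&gt;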
&lt;h2&gt;Endings&lt;/h2&gt;
&lt;p&gt;There are very useful tools that we can use within Hadoop to query data in an SQL fashion and all of them have their advantages. The Spark SQL module from Apache Spark offers some flexibility that others lack while keeping performance as one of the main priorities.&lt;/p&gt;
&lt;p&gt;Spark is not the only tool you can use, but we strongly advise that you include it in big data solutions where SQL statements are to be executed. You might need to use a mix of different tools, but Spark should be an important part of the system you are building.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://medium.com/@anicolaspp/apache-spark-as-a-distributed-sql-engine-4373e254e0f9&quot;&gt;View the original.&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[More community activities - Newsletter]]></title><link>https://developer.hpe.com/2021-January-06/</link><guid isPermaLink="false">https://developer.hpe.com/2021-January-06/</guid><pubDate>Wed, 06 Jan 2021 06:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Enabling Python 3 with OpenSSL/FIPS on Microsoft Windows]]></title><description><![CDATA[Introduction Federal Information Processing Standard (FIPS) are a set of encryption algorithms and is mandatory in all computer systems and…]]></description><link>https://developer.hpe.com/enabling-python-3-with-opensslfips-on-microsoft-windows/</link><guid isPermaLink="false">https://developer.hpe.com/enabling-python-3-with-opensslfips-on-microsoft-windows/</guid><pubDate>Mon, 21 Dec 2020 08:48:53 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/fips-compliant-1611244310749.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Federal Information Processing Standards (FIPS) are a set of standards covering approved encryption algorithms, and compliance is mandatory for computer systems and software used by non-military American government agencies, government contractors, and vendors who work with those agencies. When new software is developed, it needs to be FIPS-compliant. Thus, there is a need to enable Python with FIPS, but the default Python package comes without FIPS, as shown in the screenshot below.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/openssl_init-1609929682543.PNG&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;This blog will explain, step-by-step, how to enable Python 3 with the OpenSSL/FIPS standard on a Microsoft Windows platform so that any new software compiled out of it, is FIPS-compliant.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Cygwin environment&lt;/li&gt;
&lt;li&gt;Python 3&lt;/li&gt;
&lt;li&gt;Visual Studio 2017&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;STEPS&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 1&lt;/strong&gt;
Download the OpenSSL and FIPS source from &lt;a href=&quot;http://www.openssl.org&quot;&gt;http://www.openssl.org&lt;/a&gt; and the Python 3 source from &lt;a href=&quot;http://www.python.org&quot;&gt;http://www.python.org&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 2&lt;/strong&gt;
Convert all symlinks in the archives to regular files. This is done in a Linux or Cygwin environment.&lt;/p&gt;
&lt;p&gt;Untar&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ tar -zxvf openssl-fips-2.0.16.tar.gz
$ tar -zxvf openssl-1.0.2u.tar.gz
$ tar -zxvf Python-3.8.6.tgz
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Zip -9&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ zip -9 -r openssl-fips-2.0.16.zip openssl-fips-2.0.16
$ zip -9 -r openssl-1.0.2u.zip openssl-1.0.2u
$ zip -9 -r Python-3.8.6.zip Python-3.8.6
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Finally, unzip in Windows to the c:\work\ folder.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 3&lt;/strong&gt;
Install NASM for x64 and copy the contents of C:\Program Files\NASM to the C:\work\openssl-1.0.2u\ folder.&lt;/p&gt;
&lt;p&gt;Note: NASM is the Netwide Assembler and can be downloaded from &lt;a href=&quot;https://www.nasm.us/pub/nasm/releasebuilds&quot;&gt;https://www.nasm.us/pub/nasm/releasebuilds&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 4&lt;/strong&gt;
Build the FIPS module using the VS 2015 Native Tools Command prompt.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;&gt; cd openssl-fips-2.0.16
&gt; ms\do_fips
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Rename the folder “out32dll” to “lib”.&lt;/p&gt;
&lt;p&gt;Rename the folder “util” to “bin”.&lt;/p&gt;
&lt;p&gt;Move “fips_standalone_sha1.exe” from “lib” to “bin”.&lt;/p&gt;
&lt;p&gt;This is done so that OpenSSL can compile in the next steps.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 5&lt;/strong&gt;
Build OpenSSL module using the VS 2015 Native Tools Command prompt.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;cd openssl-1.0.2u
perl Configure VC-WIN64A no-zlib no-idea no-mdc2 no-rc5 no-ssl2 no-ssl3 fips --with-fipslibdir=C:\usr\local\ssl\fips-2.0
ms\do_win64a
nmake -f ms\nt.mak all
nmake -f ms\nt.mak install
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The openssl.exe, libeay32.dll, and ssleay32.dll files are generated in the C:\usr\local\ssl\bin\ folder.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image-20201221151001541-1608549015693.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 6&lt;/strong&gt;
Unzip Python in Windows and create an &apos;externals&apos; directory at the root (c:\work\Python-3.8.6) folder.&lt;/p&gt;
&lt;p&gt;Under &apos;externals&apos;, create a directory for openssl-1.0.2u and copy all the contents of c:\usr\local\ssl\ to this directory.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image-20201221152015627-1608548808308.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 7&lt;/strong&gt;
Under &apos;externals&apos;, create a directory for openssl-bin-1.0.2u/amd64 and copy all the files from c:\work\openssl-1.0.2u\out32dll to this directory.&lt;/p&gt;
&lt;p&gt;Also, copy all the files from c:\work\openssl-1.0.2u\inc32 to the openssl-bin-1.0.2u/amd64/include directory.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 8&lt;/strong&gt;
Patch/Add/Modify these codes to files as shown below under Python-3.8.6 source (c:\work\Python-3.8.6).&lt;/p&gt;
&lt;p&gt;Lib\ssl.py:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;try:
    from _ssl import FIPS_mode, FIPS_mode_set
    print(&apos;successful import&apos;)
except ImportError as e:
    print(&apos;error in importing&apos;)
    print(e)   
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Modules/_ssl.c:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-c&quot;&gt;static PyObject *
_ssl_FIPS_mode_impl(PyObject *module) {
    return PyLong_FromLong(FIPS_mode());
}

static PyObject *
_ssl_FIPS_mode_set_impl(PyObject *module, int n) {
    if (FIPS_mode_set(n) == 0) {
        _setSSLError(ERR_error_string(ERR_get_error(), NULL) , 0, __FILE__, __LINE__);
        return NULL;
    }
    Py_RETURN_NONE;
}

static PyMethodDef PySSL_methods[] = {
    _SSL__TEST_DECODE_CERT_METHODDEF
    _SSL_RAND_ADD_METHODDEF
    _SSL_RAND_BYTES_METHODDEF
    _SSL_RAND_PSEUDO_BYTES_METHODDEF
    _SSL_RAND_EGD_METHODDEF
    _SSL_RAND_STATUS_METHODDEF
    _SSL_GET_DEFAULT_VERIFY_PATHS_METHODDEF
    _SSL_ENUM_CERTIFICATES_METHODDEF
    _SSL_ENUM_CRLS_METHODDEF
    _SSL_TXT2OBJ_METHODDEF
    _SSL_NID2OBJ_METHODDEF
    _SSL_FIPS_MODE_METHODDEF
    _SSL_FIPS_MODE_SET_METHODDEF
    {NULL,                  NULL}            /* Sentinel */
}; 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Modules/clinic/_ssl.c.h:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-c&quot;&gt;PyDoc_STRVAR(_ssl_FIPS_mode__doc__,
&quot;FIPS Mode&quot;);

#define _SSL_FIPS_MODE_METHODDEF    \
    {&quot;FIPS_mode&quot;, (PyCFunction)_ssl_FIPS_mode, METH_NOARGS, _ssl_FIPS_mode__doc__},    

static PyObject *
_ssl_FIPS_mode_impl(PyObject *module);

static PyObject *
_ssl_FIPS_mode(PyObject *module, PyObject *Py_UNUSED(ignored))
{
    return _ssl_FIPS_mode_impl(module);
}

PyDoc_STRVAR(_ssl_FIPS_mode_set_doc__,
&quot;FIPS Mode Set&quot;);

#define _SSL_FIPS_MODE_SET_METHODDEF    \
    {&quot;FIPS_mode_set&quot;, (PyCFunction)_ssl_FIPS_mode_set, METH_O, _ssl_FIPS_mode_set_doc__},   

static PyObject *
_ssl_FIPS_mode_set_impl(PyObject *module, int n);

static PyObject *
_ssl_FIPS_mode_set(PyObject *module, PyObject *arg)
{
    PyObject *return_value = NULL;
    int n;

    if (!PyArg_Parse(arg, &quot;i:FIPS_mode_set&quot;, &amp;n)) {
        goto exit;
    }
    return_value = _ssl_FIPS_mode_set_impl(module, n);

exit:
    return return_value;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The above code is used to enable functions ssl.FIPS_mode() and ssl.FIPS_mode_set().&lt;/p&gt;
&lt;p&gt;Reference: &lt;a href=&quot;https://stackoverflow.com/questions/49493537/how-to-implement-fips-mode-and-fips-mode-set-in-python-3-6s-ssl-module&quot;&gt;https://stackoverflow.com/questions/49493537/how-to-implement-fips-mode-and-fips-mode-set-in-python-3-6s-ssl-module&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 9&lt;/strong&gt;
Modify PCbuild/openssl.props as shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/openssl-1608656128730.PNG&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 10&lt;/strong&gt;
Open PCbuild/python.props and change entries as shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image-20201221155609284-1608549075946.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 11&lt;/strong&gt;
Open Python Solution under PCbuild/pcbuild.sln in VS 2017/2015.&lt;/p&gt;
&lt;p&gt;Change the link settings of _hashlib and _ssl projects under Python as shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image-20201221155243791-1608549098426.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Change compile settings _hashlib and _ssl projects as shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image-20201221155403908-1608549116108.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 12&lt;/strong&gt;
Now, build _hashlib.pyd and _ssl.pyd in VS 2017/2015.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 13&lt;/strong&gt;
Copy these built .pyd files to the Python binary installation directory, c:\python38\DLLs.&lt;/p&gt;
&lt;p&gt;Copy ssl.py to the c:\python38\Lib folder.&lt;/p&gt;
&lt;p&gt;Copy the OpenSSL DLLs (libeay32.dll and ssleay32.dll) to the c:\python38\DLLs folder.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 14&lt;/strong&gt;
Start Python and use these commands to check the OpenSSL/FIPS version.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image-20201221161218408-1608549142246.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Voila...now Python 3.8 has OpenSSL with FIPS.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Summary&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In this blog, I have covered the following steps for enabling Python 3 with FIPS:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Download the required packages.&lt;/li&gt;
&lt;li&gt;Compile both OpenSSL and FIPS and link them both.&lt;/li&gt;
&lt;li&gt;Make the required changes to Python source before linking to the new OpenSSL and compile.&lt;/li&gt;
&lt;li&gt;Copy the newly generated binaries to the installed Python location.&lt;/li&gt;
&lt;li&gt;Test the version and check if FIPS is enabled.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I hope this blog is useful to the entire developer community!! Make sure you check out other blog posts on &lt;a href=&quot;/blog&quot;&gt;HPE DEV&lt;/a&gt; for more useful tutorials.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Best Practices on Migrating from a Data Warehouse to a Big Data Platform]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may…]]></description><link>https://developer.hpe.com/best-practices-on-migrating-from-a-data-warehouse-to-a-big-data-platform/</link><guid isPermaLink="false">https://developer.hpe.com/best-practices-on-migrating-from-a-data-warehouse-to-a-big-data-platform/</guid><pubDate>Wed, 16 Dec 2020 07:00:29 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Michael Farnbach&quot;,
&quot;publish&quot;: &quot;2016-10-24T07:00:00.000Z&quot;,
&quot;tags&quot;: &quot;use-cases&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;Offloading cold or unused data and ETL workloads from a data warehouse to Hadoop/big data platforms is a very common starting point for enterprises beginning their big data journey. Platforms like Hadoop provide an economical way to store data and do bulk processing of large data sets; hence, it’s not surprising that cost is the primary driver for this initial use case.&lt;/p&gt;
&lt;p&gt;What do these projects look like when they are actually implemented? In this post, we’ll take a look at the different factors to think about, provide a methodology for implementing data warehouse offloads, and demonstrate how things translate in a Hadoop/big data world. In the traditional data warehouse world, people are very used to sequencing tasks and workflows. Data has to be extracted from source systems, transformed, and then loaded into the target, i.e., the data warehouse.&lt;/p&gt;
&lt;p&gt;In the traditional data warehousing world, structure and schemas are essential, which leads to clearly defined transformations. In the Hadoop and big data world, data doesn’t need to be stored in a structured format. New tools work without a schema, apply schema on read, or are optimized for columnar, key-value, and document databases. There is no real extract and load step—it’s all about the transformations that occur after the data lands in the cluster. When offloading from a data warehouse, both data and transformations are being moved. &lt;strong&gt;Data lifecycle&lt;/strong&gt; is an important topic with three main areas to consider: data ingest, data integrations, and data delivery.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Data Ingest&lt;/strong&gt;: When it comes to data ingestion, what’s important is to map out your existing data flows to understand what modifications might be necessary in the Hadoop architecture. MapR offers various alternatives when it comes to ingest. You can use the unique NFS capability that comes only with the MapR Platform to ingest data into the cluster. On the storage side, it’s important to understand if data needs to be partitioned by day, for example, and whether updates will be incremental or full rewrites. When it comes to transformations, the big difference in the Hadoop world is that these occur after the fact and the critical step of defining a schema for transforming data is not a requirement.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Data Integrations&lt;/strong&gt;: In the traditional world of data warehousing, customers have often built their data models using a star schema methodology, or a 3NF, or perhaps a mix of both. These techniques provide a compact relational understanding of the data and comprise a centralized data model. This can be leveraged in the Hadoop architecture and one can build data microservices on top of this which can be denormalized, cubed, or otherwise aggregated and interpreted for specific applications.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Data Delivery:&lt;/strong&gt; At some point within the big data journey, customers will want some sort of OLAP-like capabilities and will build cubes to surface data easily to end users. Using tools from the broad Hadoop ecosystem, these data “microservices” can be built on top of streaming and batch models, using SQL and full programming languages.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The visual below shows how the data lifecycle is accomplished and how it aids in offloading data and transformations to a Hadoop-based environment.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/data-lifecycle-mapr-1608102123491.png&quot; alt=&quot;MapR - Data Lifecycle&quot;&gt;&lt;/p&gt;
&lt;p&gt;Another key topic to address is around &lt;strong&gt;data structures.&lt;/strong&gt; There are potential decisions to be made on data models regarding architecture in the context of data warehouse offloads.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Some of the unique capabilities offered in the MapR Platform with MapR Database can help facilitate key-based lookups, even through SQL, for example.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;When it comes to delimited files, the MapR Platform can work directly on these, removing the need to architect metadata for these files, which can then be put in compressed indexed formats like Avro and Parquet to speed up regular reporting and exploration queries.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;JSON is increasingly becoming a key format for nested data and is very flexible. Handling JSON data is a key strength for the MapR Platform. In the data warehousing world, it’s common to find 2D table structures and various aggregations to put nested dimensional data into different entities. These can be interpreted differently by different functions within an organization. Restructuring these entities provides a distinct challenge in a pure relational data warehousing data flow, but can be handled more easily by using JSON in the MapR Platform.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/billing-strategy-etl-offload-mapr-1608102141072.png&quot; alt=&quot;MapR - Billing Strategy - ETL Offload&quot;&gt;&lt;/p&gt;
&lt;p&gt;Above is an example of a customer in the telecommunications space that migrated portions of their data warehouse workload to a MapR cluster. You can see that they are benefiting from both a performance and price perspective.&lt;/p&gt;
&lt;p&gt;When it finally comes to data migration, there are a couple of different perspectives to consider and where the MapR Platform can help this effort.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Existing stored procedures implemented in a current data warehouse can be moved over with reasonable effort, though this will require some development work to cover platform-specific features and SQL compatibility.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;On the systems architecture front, components can be ported to different toolsets altogether (e.g., SQL to Pig) but with the ability to retain their interfaces to other parts of the data workflow. Re-architecting at this level can improve development speed and manageability, while helping facilitate a more intuitive and efficient interface to other processes in the data workflow.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;After the data migration and transformation efforts are done, there are quite a few downstream benefits. New analytical tools and methods can be applied to derive new business insights. Use cases such as customer 360 and deeper analytics on existing business processes can be made available to business stakeholders and improve operational efficiency. As seen in the customer example above, there are cost savings and performance improvements to be gained as well. Clearly, data warehouse migration and offload initiatives can benefit not only the bottom line but also the top line. For more information on solutions MapR offers in this area, we encourage you to check out the data warehouse optimization area on our website.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Cloud vs. On-Premises – What Are the Best Options for Deploying Microservices with Containers?]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may…]]></description><link>https://developer.hpe.com/cloud-vs-on-premises-what-are-the-best-options-for-deploying-microservic/</link><guid isPermaLink="false">https://developer.hpe.com/cloud-vs-on-premises-what-are-the-best-options-for-deploying-microservic/</guid><pubDate>Wed, 16 Dec 2020 06:55:47 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Jim Scott&quot;,
&quot;publish&quot;: &quot;2018-04-18T11:00:00.000&quot;,
&quot;tags&quot;: &quot;cloud-computing&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;Microservices are great. Don&apos;t just take my word for it: there are many sources saying the same thing, and the industry is transitioning to this model. Containers are a key piece of the success of microservices. Once a microservice is built and put into a container, the task of deploying the application comes next. For the sake of simplicity, I am going to focus on the type of infrastructure to deploy services. Deployment options fall into four basic models, which can be &lt;strong&gt;mixed and matched&lt;/strong&gt; as needed.&lt;/p&gt;
&lt;h2&gt;On-Premises Infrastructure&lt;/h2&gt;
&lt;p&gt;On-premises infrastructure has been the dominant enterprise computing model for more than 50 years. Organizations maintain their own equipment and software in a captive data center with full control over all aspects of processing, scheduling, administration, and maintenance. Many organizations in regulated industries have had no choice but to use an on-premises model because of the need to tightly control data and document processes. However, the cost and significant capital expense involved with building and maintaining on-premises infrastructure is prompting many organizations to shift some or all of their workloads to more-flexible cloud options. On-premises computing won&apos;t go away anytime soon, however. Legacy equipment and applications may be incompatible with a cloud environment, and organizations that want to protect investments in hardware and software may choose to maintain on-premises investments for years until depreciation cycles have run their course and applications can be redeveloped.&lt;/p&gt;
&lt;h2&gt;Public Cloud&lt;/h2&gt;
&lt;p&gt;Public cloud makes resources, such as processors, memory, operating systems, applications, and storage, available over the public internet on a pay-per-usage basis. Think of it as a computer in the sky. Public cloud is like using a local server, but the server is virtualized and managed elsewhere by a cloud provider with a high degree of automation.&lt;/p&gt;
&lt;p&gt;Organizations use public cloud for a variety of reasons, but the most popular are flexibility, scalability, and ease of administration. Public cloud instances can be launched with a few mouse clicks and just as easily taken down when no longer needed. Developers and end-users can, in many cases, deploy their own cloud instances without approval from IT and its accompanying delays. Billing is usually based upon usage, which gives organizations accountability and flexibility to pay only for the resources they use. Public cloud instances can be scaled up or down with relative ease, and many cloud providers offer best-of-breed automation tools to make administration easy. Public cloud is also an excellent platform for developing applications that will &quot;live&quot; in the cloud, such as those meant for use on mobile devices or with services that are exposed via APIs.&lt;/p&gt;
&lt;h2&gt;Private Cloud&lt;/h2&gt;
&lt;p&gt;For organizations that want the flexible automation benefits of public cloud but need to keep resources on-premises for control or compliance reasons, private cloud is a popular alternative. This model provides the same scalability, automation, and flexibility advantages of public cloud and on-premises environments that can be physically secured and tightly managed. Private clouds can be built using existing data center equipment architecture or licensed from public cloud providers, which could deliver what is essentially a version of their existing services in a secure environment. True private cloud is more than just virtualization. The research firm Wikibon &lt;a href=&quot;https://wikibon.com/true-private-cloud-will-begin-shipping-to-the-market-in-2016/&quot;&gt;defines&lt;/a&gt; it as encompassing converged architecture, virtualized software and hardware, self-service provisioning, orchestration/automation, and a single point of control.&lt;/p&gt;
&lt;h2&gt;Hybrid Cloud&lt;/h2&gt;
&lt;p&gt;When you combine a public and private cloud, you get a hybrid cloud. This architecture combines both models in a manner that is seamless and that permits workloads to easily move back and forth. This gives organizations a combination of control and flexibility that can be adjusted to the situation. Hybrid architecture preserves existing hardware and software investments while giving companies the flexibility to move applications to the cloud as resources and budgets permit. Not all applications can be moved easily, and some may continue to live for a long time in private data centers. In those cases, organizations may opt for a &quot;cloud bursting&quot; approach, in which demand spills over to a duplicate or compatible cloud application as needed. This reduces the need to add on-premises infrastructure that sits idle much of the time. There are even cloud-cloud options, in which applications move back and forth between multiple public clouds.&lt;/p&gt;
&lt;h2&gt;The Plethora of Options&lt;/h2&gt;
&lt;p&gt;Perhaps the biggest challenge facing IT with respect to cloud is the obvious realization that there is no one single cloud. Rather, there are many, and most enterprises will deploy applications to several of them simultaneously. The challenge is one of orchestration and integration of data across various clouds.&lt;/p&gt;
&lt;p&gt;For example, consider the app/dev environment. Ideally, a lot of prototyping and testing would be done in the public cloud, with its scale-on-demand capabilities that make it easy for developers to get this vital work done without imposing on internal resources. Most organizations still choose to retain their most sensitive data on premises, however. Yet to complete the app/dev process from development to test to production, data must flow securely and seamlessly among these different environments in what is known as the hybrid IT world (a combination of on-premises and off-premises, public and private cloud as well as use of traditional on-premises non-cloud clusters).&lt;/p&gt;
&lt;p&gt;What is sorely needed to make all this happen is a distributed data platform and processing model that scales easily across all locations and environments. In other words, a converged data platform.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Spark Data Source API: Extending Our Spark SQL Query Engine]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may…]]></description><link>https://developer.hpe.com/spark-data-source-api-extending-our-spark-sql-query-engine/</link><guid isPermaLink="false">https://developer.hpe.com/spark-data-source-api-extending-our-spark-sql-query-engine/</guid><pubDate>Wed, 16 Dec 2020 06:52:16 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019 may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Nicolas A Perez&quot;,
&quot;publish&quot;: &quot;2016-03-23T07:00:00.000Z&quot;,
&quot;tags&quot;: &quot;apache-spark&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;In my last post, &lt;a href=&quot;/blog/xArv3gJz67Tl0Rv0kLVn/apache-spark-as-a-distributed-sql-engine&quot;&gt;Apache Spark as a Distributed SQL Engine&lt;/a&gt;, we explained how we could use SQL to query our data stored within Hadoop. Our engine is capable of reading CSV files from a distributed file system, auto-discovering the schema from the files, and exposing them as tables through the Hive metastore. All this was done to be able to connect standard SQL clients to our engine and explore our dataset without manually defining the schema of our files, avoiding ETL work.&lt;/p&gt;
&lt;p&gt;Spark provides a framework that can be extended and we will push its capabilities even further by extending some of its functionalities.&lt;/p&gt;
&lt;h2&gt;Spark Data Source API&lt;/h2&gt;
&lt;p&gt;The Data Source API allows us to manage structured data in any format. Spark already has some standard structures built in such as Avro and Parquet, yet third parties have created new readers for CSV, JSON and others by extending this API. Today we are going to create our own.&lt;/p&gt;
&lt;p&gt;We have two reasons to extend the API.&lt;/p&gt;
&lt;p&gt;First, we want a library that is capable of reading our legacy format and transform our current data source into a new one that is easier to use.&lt;/p&gt;
&lt;p&gt;Second, we want to share this library across all our applications that use our data avoiding complex packaging of applications that need to be shared in order to achieve the same goal.&lt;/p&gt;
&lt;h2&gt;The Data Source&lt;/h2&gt;
&lt;p&gt;Our data source consists of a collection of files where each file is an entity by itself. For the sake of this example, we have defined a simple format where each file is a text file containing the information of a user, one field per line. Let’s see an example of a file.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;Pepe
20
Miami
Cuba

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This file represents a user called ‘Pepe’ who is 20 years old, lives in Miami and was born in Cuba.&lt;/p&gt;
&lt;p&gt;In the real world, the format can be as complicated as we want, but the process we are going to explain will not change.&lt;/p&gt;
&lt;p&gt;Each file has the same format and we have millions of them. We also want to expose them to be queried in SQL.&lt;/p&gt;
&lt;h2&gt;Our Implementation&lt;/h2&gt;
&lt;p&gt;In order to extend the Data Source API, we need to implement certain classes from the Spark framework, so our custom reader can be loaded and used.&lt;/p&gt;
&lt;p&gt;Let’s start by creating a Spark application as the entry point to our example. We can do this by following the post &lt;a href=&quot;https://medium.com/@anicolaspp/sbt-scala-and-spark-6a57c0a2623a#.yzj69ycnz&quot;&gt;SBT, Scala and Spark&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The first thing we need to do once the app has been created is to link the correct Spark libraries. We are going to be running the examples on Spark 1.5.1, and our sbt file is defined as follows.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;name := &quot;spark-datasource&quot;  
version := &quot;1.0&quot;  
scalaVersion := &quot;2.11.7&quot;  
libraryDependencies += &quot;org.apache.spark&quot; % &quot;spark-core_2.11&quot; % &quot;1.5.1&quot;  
libraryDependencies += &quot;org.apache.spark&quot; % &quot;spark-sql_2.11&quot; % &quot;1.5.1&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Creating Our Schema&lt;/h2&gt;
&lt;p&gt;The starting extension point of the Data Source API is the RelationProvider class. The RelationProvider class will be used to create the necessary relations of our data.&lt;/p&gt;
&lt;p&gt;We also need to mix in the SchemaRelationProvider trait, which allows us to create the schema that we want.&lt;/p&gt;
&lt;p&gt;We need to create a class named DefaultSource, and Spark will look for it in a given package. The DefaultSource class will extend RelationProvider and mix in SchemaRelationProvider.&lt;/p&gt;
&lt;p&gt;Our code so far looks as follows:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;class DefaultSource extends RelationProvider with SchemaRelationProvider {  
  override def createRelation(sqlContext: SQLContext, parameters: Map[String, String])  
    : BaseRelation = {  
    createRelation(sqlContext, parameters, null)  
  }  
  override def createRelation(sqlContext: SQLContext, parameters: Map[String, String]  
    , schema: StructType)  
    : BaseRelation = {  
    parameters.getOrElse(&quot;path&quot;, sys.error(&quot;&apos;path&apos; must be specified for our data.&quot;))  
    return new LegacyRelation(parameters.get(&quot;path&quot;).get, schema)(sqlContext)  
  }  
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In the code, we are basically creating a LegacyRelation object, which defines the relation we want to create. Think of a relation as a collection of tuples with a known schema.&lt;/p&gt;
&lt;p&gt;Let’s see how our Relation class is implemented.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;class LegacyRelation(location: String, userSchema: StructType)  
(@transient val sqlContext: SQLContext)  
  extends BaseRelation  
       with Serializable {  
  override def schema: StructType = {  
    if (this.userSchema != null) {  
      return this.userSchema  
    }  
    else {  
      return StructType(Seq(StructField(&quot;name&quot;, StringType, true),   
                            StructField(&quot;age&quot;, IntegerType, true)))  
    }  
  }  
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here we are overriding the schema function so it returns the schema we want. In this example, we know the schema of our data, but here we could do anything we need to obtain the required schema. If the data were CSV, for instance, we could infer the schema from the headers of the file.&lt;/p&gt;
&lt;p&gt;Notice that we only want the name and age fields instead of the entire content of our entities.&lt;/p&gt;
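&lt;p&gt;If the files were CSV with a header row, for instance, a minimal sketch of that inference (an assumption for illustration, not part of our legacy format) could replace the schema method of LegacyRelation, reading the first line at the given location and building the fields from it:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// Hypothetical header-based schema inference for CSV input: every header
// becomes a nullable string field, mirroring the approach from the last post.
override def schema: StructType = {
  if (this.userSchema != null) this.userSchema
  else {
    val header = sqlContext.sparkContext.textFile(location).first()
    StructType(header.split(&quot;,&quot;).map(h =&gt; StructField(h.trim, StringType, true)))
  }
}
&lt;/code&gt;&lt;/pre&gt;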
&lt;p&gt;The next step is to test that we are getting the correct schema and we can do this by adding the following code to our app.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;object app {  
  def main(args: Array[String]) {  
    val config = new SparkConf().setAppName(&quot;testing provider&quot;)  
    val sc = new SparkContext(config)  
    val sqlContext = new SQLContext(sc)  

    val df = sqlContext  
              .read  
              .format(&quot;com.nico.datasource.dat&quot;)  
              .load(&quot;/Users/anicolaspp/data/&quot;)     

    df.printSchema()  
  }  
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This code creates a SparkContext and an SQLContext from it. Using the SQLContext, we set the format by passing the package name (remember, Spark will look in this package for the DefaultSource class). Then we load the data at the specified path into a DataFrame using our provider.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;df.printSchema()
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will print the schema we defined and the output should look as follows.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;root  
 |-- name: string (nullable = true)  
 |-- age: integer (nullable = true)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;At this point, we have only created the schema we want, but there is nothing that says how to read the data and how to structure it into our defined schema.&lt;/p&gt;
&lt;h2&gt;Reading Data Into Our Schema&lt;/h2&gt;
&lt;p&gt;In order to read from our data source, our LegacyRelation class needs to mix in the TableScan trait. TableScan has a method we need to implement, with the following signature:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;def buildScan(): RDD[Row]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The method buildScan should return all rows from our data source. In our particular case, each row will be the selected content of each file. Let’s take a look at our implementation of the buildScan.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;override def buildScan(): RDD[Row] = {  
    val rdd = sqlContext  
                .sparkContext  
                .wholeTextFiles(location)  
                .map(x =&gt; x._2)  

    val rows = rdd.map(file =&gt; {  
      val lines = file.split(&quot;\n&quot;)  
      Row.fromSeq(Seq(lines(0), lines(1)))  
    })  
    rows  
  }
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here we are using the wholeTextFiles method, which reads each entire file (each file is an entity); we then take the first two lines (the only fields we want) and create a row from them. The result is a collection of rows where each row is created using only the part of the file we care about.&lt;/p&gt;
&lt;p&gt;This will be enough to modify our app so it prints out the content of our data source. The app now looks as follows.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;object app {  
  def main(args: Array[String]) {  
    val config = new SparkConf().setAppName(&quot;testing provider&quot;)  
    val sc = new SparkContext(config)  
    val sqlContext = new SQLContext(sc)  

    val df = sqlContext  
              .read  
              .format(&quot;com.nico.datasource.dat&quot;)  
              .load(&quot;/Users/anicolaspp/data/&quot;)     

    df.show()  
  }  
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Even though we are reading the desired format into a data frame, there is no information about the field types of our data. Our schema definition supports different data types, yet we are not enforcing them.&lt;/p&gt;
&lt;p&gt;Let’s modify our buildScan method so it infers the type information when creating each row.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;override def buildScan(): RDD[Row] = {  
    val schemaFields = schema.fields  
    val rdd = sqlContext  
                .sparkContext  
                .wholeTextFiles(location)  
                .map(x =&gt; x._2)  

    val rows = rdd.map(file =&gt; {  
      val lines = file.split(&quot;\n&quot;)  

      val typedValues = lines.zipWithIndex.map {  
        case (value, index) =&gt; {  
          val dataType = schemaFields(index).dataType  
          castValue(value, dataType)  
        }  
      }  
      Row.fromSeq(typedValues)  
    })  

    rows  
  }  

   private def castValue(value: String, toType: DataType) = toType match {  
    case _: StringType      =&gt; value  
    case _: IntegerType     =&gt; value.toInt  
  }
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here, the only change is that we are casting each value read from our files into its correct type, inferred from the schema.fields object. In our particular case, we only care that name is a String and age is an Integer, but again, we could be very creative at this point.&lt;/p&gt;
&lt;p&gt;Now, our final LegacyRelation class will look as follows.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;class LegacyRelation(location: String, userSchema: StructType)  
  (@transient val sqlContext: SQLContext)  
  extends BaseRelation  
      with TableScan with Serializable {  
  override def schema: StructType = {  
    if (this.userSchema != null) {  
      return this.userSchema  
    }  
    else {  
      return StructType(Seq(StructField(&quot;name&quot;, StringType, true),   
                            StructField(&quot;age&quot;, IntegerType, true)))  
    }  
  }  
  private def castValue(value: String, toType: DataType) = toType match {  
    case _: StringType      =&gt; value  
    case _: IntegerType     =&gt; value.toInt  
  }  
  override def buildScan(): RDD[Row] = {  
    val schemaFields = schema.fields  
    val rdd = sqlContext  
              .sparkContext  
              .wholeTextFiles(location)  
              .map(x =&gt; x._2)  

    val rows = rdd.map(file =&gt; {  
      val lines = file.split(&quot;\n&quot;)  
      val typedValues = lines.zipWithIndex.map{  
        case (value, index) =&gt; {  
          val dataType = schemaFields(index).dataType  
          castValue(value, dataType)  
        }  
      }  
      Row.fromSeq(typedValues)  
    })  
    rows  
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now we can load our data into a DataFrame and register it to be used by SQL clients, as we explained in our previous post. Our app is as simple as shown below.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;object app {  
  def main(args: Array[String]) {  
    val config = new SparkConf().setAppName(&quot;testing provider&quot;)  
    val sc = new SparkContext(config)  
    val sqlContext = new SQLContext(sc)  
    val df = sqlContext  
              .read  
              .format(&quot;com.nico.datasource.dat&quot;)  
              .load(&quot;/Users/anicolaspp/data/&quot;)     

    df.registerTempTable(&quot;users&quot;)  
    sqlContext.sql(&quot;select name from users&quot;).show()  
  }  
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We have shown enough to read a custom format into a data frame so we can take advantage from the DataFrame API, yet more can be done.&lt;/p&gt;
&lt;p&gt;The Data Source API not only offers functionality for reading data, but also for writing it in a custom format. This is very powerful if we want to transform a data set from one format to another. Let’s see how we add these capabilities to our existing driver.&lt;/p&gt;
&lt;h2&gt;Writing a Formatter&lt;/h2&gt;
&lt;p&gt;Let’s suppose we want to save our data so it can be read from other standard systems. We are going to load our custom data source and create a CSV-like output from it.&lt;/p&gt;
&lt;p&gt;In order to support save calls from the API, our DefaultSource class has to mix in the CreatableRelationProvider trait. This trait has a method called createRelation that we need to implement. Let’s take a look at it.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;override def createRelation(sqlContext: SQLContext, mode: SaveMode,   
    parameters: Map[String, String], data: DataFrame): BaseRelation = {  

    saveAsCsvFile(data, parameters.get(&quot;path&quot;).get)  
    createRelation(sqlContext, parameters, data.schema)  
  }  

  def saveAsCsvFile(data: DataFrame, path: String) = {  
    val dataCustomRDD = data.rdd.map(row =&gt; {  
      val values = row.toSeq.map(value =&gt; value.toString)  
      values.mkString(&quot;,&quot;)  
    })  
    dataCustomRDD.saveAsTextFile(path)  
  }
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We are basically saving our data frame as a CSV-like file and then returning a relation with a known schema.&lt;/p&gt;
&lt;p&gt;The saveAsCsvFile method creates an RDD[String] with our data formatted as CSV, then saves it to the given path. For simplicity, we did not include the headers in our output files, but remember we can do whatever we need to output the data in the format we require.&lt;/p&gt;
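&lt;p&gt;For example, here is a hedged sketch of a hypothetical variant that also emits a header line, built from the DataFrame schema and unioned in front of the data (so it lands in its own output part file):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.spark.sql.DataFrame

// Hypothetical variant of saveAsCsvFile that writes the column names as well.
def saveAsCsvFileWithHeader(data: DataFrame, path: String) = {
  val header = data.schema.fieldNames.mkString(&quot;,&quot;)
  val rowsAsCsv = data.rdd.map(row =&gt; row.toSeq.map(_.toString).mkString(&quot;,&quot;))
  val withHeader = data.sqlContext.sparkContext.parallelize(Seq(header)) ++ rowsAsCsv
  withHeader.saveAsTextFile(path)
}
&lt;/code&gt;&lt;/pre&gt;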
&lt;p&gt;The entire code of our DefaultSource class is the following.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;class DefaultSource extends RelationProvider   
    with SchemaRelationProvider   
    with CreatableRelationProvider {  
  override def createRelation(sqlContext: SQLContext,   
    parameters: Map[String, String]): BaseRelation = {  

        createRelation(sqlContext, parameters, null)  
  }  
  override def createRelation(sqlContext: SQLContext,   
    parameters: Map[String, String], schema: StructType): BaseRelation = {  

        parameters.getOrElse(&quot;path&quot;, sys.error(&quot;&apos;path&apos; must be specified for CSV data.&quot;))  
        return new LegacyRelation(parameters.get(&quot;path&quot;).get, schema)(sqlContext)  
  }  
  def saveAsCsvFile(data: DataFrame, path: String) = {  
    val dataCustomRDD = data.rdd.map(row =&gt; {  
      val values = row.toSeq.map(value =&gt; value.toString)  
      values.mkString(&quot;,&quot;)  
    })  
    dataCustomRDD.saveAsTextFile(path)  
  }  
  override def createRelation(sqlContext: SQLContext, mode: SaveMode,   
    parameters: Map[String, String], data: DataFrame): BaseRelation = {  

        saveAsCsvFile(data, parameters.get(&quot;path&quot;).get)  
        createRelation(sqlContext, parameters, data.schema)  
  }  
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In order to save our original data in a CSV-like format, we modify our app as follows.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;object app {  
  def main(args: Array[String]) {  
    val config = new SparkConf().setAppName(&quot;testing provider&quot;)  
    val sc = new SparkContext(config)  
    val sqlContext = new SQLContext(sc)  

    val df = sqlContext  
              .read  
              .format(&quot;com.nico.datasource.dat&quot;)  
              .load(&quot;/Users/anicolaspp/data/&quot;)     

    df.write  
      .format(&quot;com.nico.datasource.dat&quot;)  
      .save(&quot;/Users/anicolaspp/data/output&quot;)  
  }  
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note that every time we read/write our data, we need to specify the package name where our DefaultSource class is located.&lt;/p&gt;
&lt;p&gt;We can now package our library and include it in any project that needs to use the data source we described. Many other libraries are being created to support all the formats we can imagine, and now you can create your own to contribute to the community or just to use in your own projects.&lt;/p&gt;
&lt;h2&gt;Endings&lt;/h2&gt;
&lt;p&gt;We have seen how to load data from a custom format into data frames using the Spark Data Source API. We also reviewed the classes involved in the process, especially how Spark uses our DefaultSource from our package to perform the required operations. We also implemented an output formatter so our data frames can be saved, as we like to.&lt;/p&gt;
&lt;p&gt;There is much more we can do with the Data Source API, but finding the right documentation has been quite hard in my experience. I believe that better documentation could be created, specifically for those parts of the API that are very useful when extending them.&lt;/p&gt;
&lt;p&gt;Even though our example shows how to extend the Data Source API to support a simple format, it can be modified to read and write more complex types such as binary encoded entities.&lt;/p&gt;
&lt;p&gt;The ability to integrate our own data types into Spark makes it one of the top frameworks for data processing out there.&lt;/p&gt;
&lt;p&gt;In the Hadoop world, we can find a lot of tools that share goals and functionality, but none of them is as flexible and versatile as Spark. This makes Spark very desirable in this field. If we are interested in a processing framework capable of working under almost any circumstances, then Apache Spark is the way to go.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://medium.com/@anicolaspp/extending-our-spark-sql-query-engine-5f4a088de986&quot;&gt;View the original&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Analyzing Flight Delays with Apache Spark GraphFrames and MapR Database]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may…]]></description><link>https://developer.hpe.com/analyzing-flight-delays-with-apache-spark-graphframes-and-mapr-database/</link><guid isPermaLink="false">https://developer.hpe.com/analyzing-flight-delays-with-apache-spark-graphframes-and-mapr-database/</guid><pubDate>Wed, 16 Dec 2020 06:33:59 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Carol McDonald&quot;,
&quot;publish&quot;: &quot;2018-11-16T07:00:00.000Z&quot;,
&quot;tags&quot;: &quot;nosql&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;Apache Spark GraphX made it possible to run graph algorithms within Spark. GraphFrames integrates GraphX and DataFrames and makes it possible to perform Graph pattern queries without moving data to a specialized graph database.&lt;/p&gt;
&lt;p&gt;This blog will help you get started using Apache Spark GraphFrames Graph Algorithms and Graph Queries with MapR Database JSON document database. We will begin with an overview of Graph and GraphFrames concepts, then we will analyze a real flight dataset for January-August 2018 stored in a MapR Database table.  &lt;/p&gt;
&lt;p&gt;Graphs provide a powerful way to analyze the connections in a Dataset. GraphX is the Apache Spark component for graph-parallel and data-parallel computations, built upon a branch of mathematics called graph theory. It is a distributed graph processing framework that sits on top of the Spark core. GraphX brings the speed and scalability of parallel, iterative processing to graphs for big datasets. It partitions graphs that are too large to fit in the memory of a single computer among multiple computers in a cluster. In addition, GraphX partitions vertices independently of edges, which avoids the load imbalance often suffered when putting all the edges for a vertex onto a single machine.&lt;/p&gt;
&lt;p&gt;With Spark 2.0 and later versions, big improvements were implemented to make Spark easier to program and execute faster: Spark SQL and the Dataset/DataFrame APIs provide ease of use, space efficiency, and performance gains with Spark SQL&apos;s optimized execution engine. GraphFrames extends Spark GraphX to provide the DataFrame API, making the analysis easier to use, more efficient, and simplifying data pipelines.&lt;/p&gt;
&lt;h2&gt;Overview of Some Graph Concepts&lt;/h2&gt;
&lt;p&gt;A graph is a mathematical structure used to model relations between objects. A graph is made up of vertices and edges that connect them. The vertices are the objects, and the edges are the relationships between them.  &lt;/p&gt;
&lt;p&gt;An undirected graph is a graph where the edges have no direction associated with them; the relationship holds in both directions. An example of an undirected graph is Facebook friendship. If Ted is a friend of Carol, then Carol is also a friend of Ted.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image7-1608101013070.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;A &lt;strong&gt;directed graph&lt;/strong&gt; is a graph where the edges have a direction associated with them. An example of a directed graph is a Twitter follower. Carol can follow Oprah without implying that Oprah follows Carol.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image18-1608101023047.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;GraphFrame Property Graph&lt;/h2&gt;
&lt;p&gt;Spark GraphFrames support graph computation with a distributed property graph. A property graph is a directed multigraph, which can have multiple edges in parallel. Every edge and vertex has user-defined properties associated with it. The parallel edges allow multiple relationships between the same vertices.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image28-1608101031846.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;With GraphFrames, vertices and edges are represented as DataFrames, which adds the advantages of querying with Spark SQL and support for DataFrame data sources like Parquet, JSON, CSV, and also MapR Database with the MapR Database Spark Connector.&lt;/p&gt;
&lt;h2&gt;Graph Algorithms versus Graph Queries&lt;/h2&gt;
&lt;p&gt;Graph analysis comes in two forms:  graph algorithms and graph pattern queries. Let’s look at an example of each and how GraphFrames integrates the two.&lt;/p&gt;
&lt;h2&gt;PageRank Graph Algorithm&lt;/h2&gt;
&lt;p&gt;The breakthrough for the creators of the Google search engine was the PageRank graph algorithm, which represents pages as nodes and links as edges and measures the importance of a page by the number and rank of the pages that link to it. The PageRank algorithm is useful for measuring the importance of a vertex in a graph. Examples include a Twitter user with lots of important followers, or an airport with lots of connections to other well-connected airports.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image2-1608101039047.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Many graph algorithms, such as PageRank, shortest path, and connected components, repeatedly aggregate properties of neighboring vertices. These algorithms can be implemented as a sequence of steps in which vertices pass messages to their neighboring vertices, and the messages are then aggregated at the destination vertex.&lt;/p&gt;
&lt;p&gt;You can visualize the PageRank algorithm as each page sending a message with its rank of importance to each page it points to. In the beginning, each page has the same rank, equal to 1 divided by the total number of pages.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image1-1608101046388.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The messages are aggregated and calculated at each vertex with the sum of all of the incoming messages becoming the new page rank. This is calculated iteratively so that links from more important pages are more important.&lt;/p&gt;
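&lt;p&gt;To make this iterative update concrete, here is a minimal plain-Scala sketch (not from the original post) of the idea on a tiny three-page link graph; the 0.15 reset probability matches the value used with GraphFrames later in this post:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// Tiny link graph: each page lists the pages it links to
val links = Map(&quot;A&quot; -&gt; Seq(&quot;B&quot;, &quot;C&quot;), &quot;B&quot; -&gt; Seq(&quot;C&quot;), &quot;C&quot; -&gt; Seq(&quot;A&quot;))
val n = links.size

// Every page starts with the same rank, 1/N
var ranks = links.keys.map(page =&gt; page -&gt; 1.0 / n).toMap

for (_ &lt;- 1 to 10) {
  // Each page sends rank/outDegree to every page it links to
  val messages = links.toSeq.flatMap { case (page, out) =&gt;
    out.map(dest =&gt; dest -&gt; ranks(page) / out.size)
  }
  // The messages are aggregated at each destination to form the new rank
  ranks = messages.groupBy(_._1).map { case (page, msgs) =&gt;
    page -&gt; (0.15 / n + 0.85 * msgs.map(_._2).sum)
  }
}

ranks.foreach(println)
&lt;/code&gt;&lt;/pre&gt;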
&lt;h2&gt;Graph Motif Queries  &lt;/h2&gt;
&lt;p&gt;Graph motifs are recurrent patterns in a graph. Graph queries search a graph for all occurrences of a given motif or pattern. As an  example, to recommend who to follow on Twitter, you could search for patterns where A follows B and B follows C, but A does not follow C. Here is a  GraphFrames Motif Query for this pattern, to find the edges from a to b and b to c  for which there is no edge from a to c:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;graph.find(&quot;(a)-[]-&gt;(b); (b)-[]-&gt;(c); !(a)-[]-&gt;(c)&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt; (vertices are denoted by parentheses ( ), while edges are denoted by square brackets [ ] )&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image24-1608101055589.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image3-1608101062281.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;GraphFrames integrate graph algorithms and graph queries, enabling optimizations across graph and Spark SQL queries, without requiring data to be moved to specialized graph databases.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image12-1608101070628.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Graph Examples&lt;/h2&gt;
&lt;p&gt;Examples of connected data that can be represented by graphs include:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Recommendation Engines:&lt;/strong&gt; Recommendation algorithms can use graphs where the nodes are the users and products, and their respective attributes and the edges are the ratings or purchases of the products by users. Graph algorithms can calculate weights for how similar users rated or purchased similar products.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image11-1608101079964.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fraud:&lt;/strong&gt; Graphs are useful for fraud detection algorithms in banking, healthcare, and network security. In healthcare, graph algorithms can explore the connections between patients, doctors, and pharmacy prescriptions. In banking, graph algorithms can explore the relationship between credit card applicants and phone numbers and addresses or between credit cards customers and merchant transactions. In network security, graph algorithms can explore data breaches.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image13-1608101087917.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;These are just some examples of the uses of graphs. Next, we will look at a specific example, using Spark GraphFrames.&lt;/p&gt;
&lt;h2&gt;Example Flight Scenario&lt;/h2&gt;
&lt;p&gt;As a starting simple example, we will analyze 3 flights; for each flight, we have the following information:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image9-1608101096479.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Originating Airport&lt;/th&gt;
&lt;th&gt;Destination Airport&lt;/th&gt;
&lt;th&gt;Distance&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;SFO&lt;/td&gt;
&lt;td&gt;ORD&lt;/td&gt;
&lt;td&gt;1800 miles&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ORD&lt;/td&gt;
&lt;td&gt;DFW&lt;/td&gt;
&lt;td&gt;800 miles&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DFW&lt;/td&gt;
&lt;td&gt;SFO&lt;/td&gt;
&lt;td&gt;1400 miles&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;In this scenario, we are going to represent the airports as vertices and flight routes as edges. For our graph, we will have three vertices, each representing an airport. The vertices each have the airport code as the ID, and the city as a property:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Vertex Table for Airports&lt;/strong&gt;&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;id&lt;/th&gt;
&lt;th&gt;city&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;SFO&lt;/td&gt;
&lt;td&gt;San Francisco&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ORD&lt;/td&gt;
&lt;td&gt;Chicago&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DFW&lt;/td&gt;
&lt;td&gt;Dallas Fort Worth&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;The edges have the Source ID, the Destination ID, and the distance as a property.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Edges Table for Routes&lt;/strong&gt;&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Src&lt;/th&gt;
&lt;th&gt;Dst&lt;/th&gt;
&lt;th&gt;Distance&lt;/th&gt;
&lt;th&gt;Delay&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;SFO&lt;/td&gt;
&lt;td&gt;ORD&lt;/td&gt;
&lt;td&gt;1800&lt;/td&gt;
&lt;td&gt;40&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ORD&lt;/td&gt;
&lt;td&gt;DFW&lt;/td&gt;
&lt;td&gt;800&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DFW&lt;/td&gt;
&lt;td&gt;SFO&lt;/td&gt;
&lt;td&gt;1400&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;&lt;strong&gt;Launching the Spark Interactive Shell with GraphFrames&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Because GraphFrames is a separate package from Spark, start the Spark shell, specifying the GraphFrames package as shown below: &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$SPARK_HOME/bin/spark-shell --packages graphframes:graphframes:0.6.0-spark2.3-s_2.11
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Define Vertices&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;First, we will import the DataFrames, GraphX, and GraphFrames packages.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.spark._
import org.apache.spark.graphx._
import org.apache.spark.sql._
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._
import org.apache.spark.sql.types.StructType
import org.graphframes._
import spark.implicits._
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We define airports as vertices. A vertex DataFrame must have an ID column and may have multiple attribute columns. In this example, each airport vertex consists of:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Vertex ID→ id&lt;/li&gt;
&lt;li&gt;Vertex Property → city&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Vertex Table for Airports&lt;/strong&gt;&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;id&lt;/th&gt;
&lt;th&gt;city&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;SFO&lt;/td&gt;
&lt;td&gt;San Francisco&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;We define a DataFrame with the above properties, which will be used for the vertices in the GraphFrame.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// create vertices with id and city
case class Airport(id: String, city: String) extends Serializable

val airports=Array(Airport(&quot;SFO&quot;,&quot;San Francisco&quot;),Airport(&quot;ORD&quot;,&quot;Chicago&quot;),Airport(&quot;DFW&quot;,&quot;Dallas Fort Worth&quot;))

val vertices = spark.createDataset(airports).toDF

vertices.show

--- result: ---
+---+-----------------+
| id|             city|
+---+-----------------+
|SFO|    San Francisco|
|ORD|          Chicago|
|DFW|Dallas Fort Worth|
+---+-----------------+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Define Edges&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Edges are the flights between airports. An edge DataFrame must have src and dst columns and may have multiple relationship columns. In our example, an edge consists of:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Edge origin ID → src&lt;/li&gt;
&lt;li&gt;Edge destination ID → dst&lt;/li&gt;
&lt;li&gt;Edge property distance → dist&lt;/li&gt;
&lt;li&gt;Edge property delay→ delay&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Edges Table for Flights&lt;/strong&gt;&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;id&lt;/th&gt;
&lt;th&gt;src&lt;/th&gt;
&lt;th&gt;dst&lt;/th&gt;
&lt;th&gt;dist&lt;/th&gt;
&lt;th&gt;delay&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;SFO_ORD_2017-01-01_AA&lt;/td&gt;
&lt;td&gt;SFO&lt;/td&gt;
&lt;td&gt;ORD&lt;/td&gt;
&lt;td&gt;1800&lt;/td&gt;
&lt;td&gt;40&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;We define a DataFrame with the above properties, which will be used for the edges in the GraphFrame.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// create  flights with srcid, destid , distance
case class Flight(id: String, src: String,dst: String, dist: Double, delay: Double)

val flights=Array(Flight(&quot;SFO_ORD_2017-01-01_AA&quot;,&quot;SFO&quot;,&quot;ORD&quot;,1800, 40),Flight(&quot;ORD_DFW_2017-01-01_UA&quot;,&quot;ORD&quot;,&quot;DFW&quot;,800, 0),Flight(&quot;DFW_SFO_2017-01-01_DL&quot;,&quot;DFW&quot;,&quot;SFO&quot;,1400, 10))

val edges = spark.createDataset(flights).toDF
edges.show

--- result: ---
+--------------------+---+---+------+-----+
|                  id|src|dst|  dist|delay|
+--------------------+---+---+------+-----+
|SFO_ORD_2017-01-0...|SFO|ORD|1800.0| 40.0|
|ORD_DFW_2017-01-0...|ORD|DFW| 800.0|  0.0|
|DFW_SFO_2017-01-0...|DFW|SFO|1400.0| 10.0|
+--------------------+---+---+------+-----+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Create the GraphFrame&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Below, we create a GraphFrame by supplying a vertex DataFrame and an edge DataFrame. It is also possible to create a GraphFrame with just an edge DataFrame; then the vertices will be equal to the unique src and dst ids from the edge DataFrame.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// define the graph
val graph = GraphFrame(vertices, edges)

// show graph vertices
graph.vertices.show

--- result: ---
+---+-----------------+
| id|             city|
+---+-----------------+
|SFO|    San Francisco|
|ORD|          Chicago|
|DFW|Dallas Fort Worth|
+---+-----------------+

// show graph edges
graph.edges.show

--- result: ---
+--------------------+---+---+------+-----+
|                  id|src|dst|  dist|delay|
+--------------------+---+---+------+-----+
|SFO_ORD_2017-01-0...|SFO|ORD|1800.0| 40.0|
|ORD_DFW_2017-01-0...|ORD|DFW| 800.0|  0.0|
|DFW_SFO_2017-01-0...|DFW|SFO|1400.0| 10.0|
+--------------------+---+---+------+-----+
&lt;/code&gt;&lt;/pre&gt;
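&lt;p&gt;As a small side note (a sketch, not part of the original post), the edge-only variant mentioned above looks like this; the vertices are inferred from the distinct src and dst ids:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// build a graph from the edges alone; vertices come from the distinct src/dst ids
val graphFromEdges = GraphFrame.fromEdges(edges)

// expect three single-column rows: SFO, ORD, DFW (no city property this time)
graphFromEdges.vertices.show
&lt;/code&gt;&lt;/pre&gt;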
&lt;h2&gt;Querying the GraphFrame&lt;/h2&gt;
&lt;p&gt;Now we can query the GraphFrame to answer the following questions:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;How many airports are there?&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// How many airports?
graph.vertices.count

--- result: --- Long = 3
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;How many flights are there between airports?&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// How many flights?
graph.edges.count

--- result: --- = 3
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Which flight routes are greater than 1000 miles in distance?&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// routes &gt; 1000 miles distance?
graph.edges.filter(&quot;dist &gt; 1000&quot;).show

+--------------------+---+---+------+-----+
|                  id|src|dst|  dist|delay|
+--------------------+---+---+------+-----+
|SFO_ORD_2017-01-0...|SFO|ORD|1800.0| 40.0|
|DFW_SFO_2017-01-0...|DFW|SFO|1400.0| 10.0|
+--------------------+---+---+------+-----+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The GraphFrames triplets put all of the edge, src, and dst columns together in a DataFrame.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// triplets = src edge dst
graph.triplets.show

--- result: ---
+--------------------+--------------------+--------------------+
|                 src|                edge|                 dst|
+--------------------+--------------------+--------------------+
| [SFO,San Francisco]|[SFO_ORD_2017-01-...|       [ORD,Chicago]|
|       [ORD,Chicago]|[ORD_DFW_2017-01-...|[DFW,Dallas Fort ...|
|[DFW,Dallas Fort ...|[DFW_SFO_2017-01-...| [SFO,San Francisco]|
+--------------------+--------------------+--------------------+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;What are the longest distance routes?&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// print out longest routes
graph.edges
  .groupBy(&quot;src&quot;, &quot;dst&quot;)
  .max(&quot;dist&quot;)
  .sort(desc(&quot;max(dist)&quot;)).show

+---+---+---------+
|src|dst|max(dist)|
+---+---+---------+
|SFO|ORD|   1800.0|
|DFW|SFO|   1400.0|
|ORD|DFW|    800.0|
+---+---+---------+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next we will analyze flight delays and distances using the real flight data.&lt;/p&gt;
&lt;h2&gt;Loading and Querying Flight Data with Spark DataFrames and MapR Database JSON&lt;/h2&gt;
&lt;p&gt;The Spark MapR Database Connector enables users to perform complex SQL queries and updates on top of MapR Database using a Spark Dataset, while applying critical techniques such as projection and filter pushdown, custom partitioning, and data locality.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image5-1608101104461.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;With MapR Database, a table is automatically partitioned into tablets across a cluster by key range, providing for scalable and fast reads and writes by row key. In this use case, the row key, the id, starts with the origin, destination airport codes, so the table is automatically partitioned and sorted by the edge src, dst with Atlanta first.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image15-1608101113190.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The Spark MapR Database Connector leverages the Spark &lt;a href=&quot;https://databricks.com/blog/2015/01/09/spark-sql-data-sources-api-unified-data-access-for-the-spark-platform.html&quot;&gt;DataSource API&lt;/a&gt;. The connector architecture has a connection object in every Spark Executor, allowing for distributed parallel writes, reads, or scans with MapR Database tablets (partitions).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image29-1608101121307.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;MapR Database supports JSON documents as a native data store, making it easy to store, query, and build applications with JSON documents. For the flights MapR Database table we have the following JSON schema. We will focus on the highlighted id, the src, dst, depdelay, and distance attributes. The MapR Database table will be automatically partitioned and sorted by the rowkey or id, which consists of the src, dst, date, carrier and flight number.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image4-1608101128380.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Loading The Flight MapR Database Table Data into a Spark Dataset&lt;/h2&gt;
&lt;p&gt;In our use case, edges are the flights between airports. An edge must have src and dst columns and can have multiple relationship columns. In our example, an edge consists of:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;id&lt;/th&gt;
&lt;th&gt;src&lt;/th&gt;
&lt;th&gt;dst&lt;/th&gt;
&lt;th&gt;distance&lt;/th&gt;
&lt;th&gt;depdelay&lt;/th&gt;
&lt;th&gt;carrier&lt;/th&gt;
&lt;th&gt;crsdephour&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;SFO_ORD_2017-01-01_AA&lt;/td&gt;
&lt;td&gt;SFO&lt;/td&gt;
&lt;td&gt;ORD&lt;/td&gt;
&lt;td&gt;1800&lt;/td&gt;
&lt;td&gt;40&lt;/td&gt;
&lt;td&gt;AA&lt;/td&gt;
&lt;td&gt;17&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Below, we define the flight schema, corresponding to a row in the MapR Database JSON Flight Table.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// define the Flight Schema
case class Flight(id: String,fldate: String,month:Integer, dofW: Integer, carrier: String, src: String,dst: String, crsdephour: Integer, crsdeptime: Integer, depdelay: Double, crsarrtime: Integer, arrdelay: Double, crselapsedtime: Double, dist: Double)

val schema = StructType(Array(
    StructField(&quot;id&quot;, StringType, true),
    StructField(&quot;fldate&quot;, StringType, true),
    StructField(&quot;month&quot;, IntegerType, true),
    StructField(&quot;dofW&quot;, IntegerType, true),
    StructField(&quot;carrier&quot;, StringType, true),
    StructField(&quot;src&quot;, StringType, true),
    StructField(&quot;dst&quot;, StringType, true),
    StructField(&quot;crsdephour&quot;, IntegerType, true),
    StructField(&quot;crsdeptime&quot;, IntegerType, true),
    StructField(&quot;depdelay&quot;, DoubleType, true),
    StructField(&quot;crsarrtime&quot;, IntegerType, true),
    StructField(&quot;arrdelay&quot;, DoubleType, true),
    StructField(&quot;crselapsedtime&quot;, DoubleType, true),
    StructField(&quot;dist&quot;, DoubleType, true)
  ))
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To &lt;a href=&quot;https://docs.datafabric.hpe.com/62/Spark/LoadDataFromMapRDBasDataset.html&quot;&gt;load data from a MapR Database JSON&lt;/a&gt; table into an Apache Spark Dataset, we invoke the loadFromMapRDB method on a SparkSession object, providing the tableName, schema, and case class. This returns a Dataset of Flight objects:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import com.mapr.db._
import com.mapr.db.spark._
import com.mapr.db.spark.impl._
import com.mapr.db.spark.sql._

// table path as used later in this post (see the explain output below)
val tableName = &quot;/user/mapr/flighttable&quot;

val df = spark.loadFromMapRDB[Flight](tableName, schema)

df.show

--- result: ---
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image22-1608101142872.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Define Vertices&lt;/h2&gt;
&lt;p&gt;We define airports as vertices. Vertices can have properties or attributes associated with them. For each airport, we have the following information:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Vertex Table for Airports&lt;/strong&gt;&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;id&lt;/th&gt;
&lt;th&gt;city&lt;/th&gt;
&lt;th&gt;state&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;SFO&lt;/td&gt;
&lt;td&gt;San Francisco&lt;/td&gt;
&lt;td&gt;CA&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Note that our dataset contains only a subset of the airports in the USA; below are the airports in our dataset shown on a map.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image23-1608101155741.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Below, we read the airports information into a DataFrame from a JSON file.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// create  airports DataFrame

val airports = spark.read.json(&quot;maprfs:///data/airports.json&quot;)
airports.createOrReplaceTempView(&quot;airports&quot;)
airports.show

--- result: ---
+-------------+-------+-----+---+  
|         City|Country|State| id|  
+-------------+-------+-----+---+  
|      Chicago|    USA|   IL|ORD|  
|     New York|    USA|   NY|LGA|  
|       Boston|    USA|   MA|BOS|  
|      Houston|    USA|   TX|IAH|  
|       Newark|    USA|   NJ|EWR|  
|       Denver|    USA|   CO|DEN|  
|        Miami|    USA|   FL|MIA|  
|San Francisco|    USA|   CA|SFO|  
|      Atlanta|    USA|   GA|ATL|  
|       Dallas|    USA|   TX|DFW|  
|    Charlotte|    USA|   NC|CLT|  
|  Los Angeles|    USA|   CA|LAX|  
|      Seattle|    USA|   WA|SEA|  
+-------------+-------+-----+---+
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Create the Property Graph&lt;/h2&gt;
&lt;p&gt;Again, in this scenario, we are going to represent the airports as vertices and flights as edges. Below, we create a GraphFrame by supplying a vertex DataFrame and an edge DataFrame. The airports and flights DataFrames are then available as graph.vertices and graph.edges. Since GraphFrame vertices and edges are stored as DataFrames, many queries are just DataFrame (or SQL) queries.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// define the graphframe
val graph = GraphFrame(airports, df)

// filter graph vertices
graph.vertices.filter(&quot;State=&apos;TX&apos;&quot;).show

+-------+-------+-----+---+
|   City|Country|State| id|
+-------+-------+-----+---+
|Houston|    USA|   TX|IAH|
| Dallas|    USA|   TX|DFW|
+-------+-------+-----+---+
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Querying the GraphFrame&lt;/h2&gt;
&lt;p&gt;Now we can query the GraphFrame to answer the following questions:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;How many airports are there?&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// How many airports?
val numairports = graph.vertices.count

--- result: ---
 Long = 13
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;How many flights are there?&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// How many flights
val numflights = graph.edges.count

--- result: ---
// Long = 282628
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Which flight routes have the longest distance?&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// show the longest distance routes
graph.edges
.groupBy(&quot;src&quot;, &quot;dst&quot;)
.max(&quot;dist&quot;)
.sort(desc(&quot;max(dist)&quot;)).show(4)

--- result: ---
+---+---+---------+  
|src|dst|max(dist)|  
+---+---+---------+  
|MIA|SEA|   2724.0|  
|SEA|MIA|   2724.0|  
|BOS|SFO|   2704.0|  
|SFO|BOS|   2704.0|  
+---+---+---------+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Which flight routes have the highest average delays?&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;graph.edges
.groupBy(&quot;src&quot;, &quot;dst&quot;)
.avg(&quot;depdelay&quot;)
.sort(desc(&quot;avg(depdelay)&quot;)).show(5)

--- result: ---

+---+---+------------------+  
|src|dst|     avg(depdelay)|  
+---+---+------------------+  
|ATL|EWR|25.520159946684437|  
|DEN|EWR|25.232164449818622|  
|MIA|SFO|24.785953177257525|  
|MIA|EWR|22.464104423495286|  
|IAH|EWR| 22.38344914718888|  
+---+---+------------------+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Which flight hours have the highest average delays?&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;graph.edges
.groupBy(&quot;crsdephour&quot;)
.avg(&quot;depdelay&quot;)
.sort(desc(&quot;avg(depdelay)&quot;)).show(5)

--- result: ---

+----------+------------------+                                                
|crsdephour|     avg(depdelay)|
+----------+------------------+
|        18| 24.24118415324336|
|        19|23.348782771535582|
|        21|19.617375231053604|
|        16| 19.30346232179226|
|        17| 18.77857142857143|
+----------+------------------+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;What are the longest delays for flights that are greater than 1500 miles in distance?&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// flights &gt;  1500 miles distance ordered by delay

graph.edges.filter(&quot;dist &gt; 1500&quot;)
.orderBy(desc(&quot;depdelay&quot;)).show(3)

--- result: ---
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image21-1608101167386.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What is the average delay  for delayed flights departing from Atlanta?&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;graph.edges.filter(&quot;src = &apos;ATL&apos; and depdelay &gt; 1&quot;)
.groupBy(&quot;src&quot;, &quot;dst&quot;)
.avg(&quot;depdelay&quot;).sort(desc(&quot;avg(depdelay)&quot;)).show

--- result: ---

+---+---+------------------+
|src|dst|     avg(depdelay)|
+---+---+------------------+
|ATL|EWR|  58.1085801063022|
|ATL|ORD| 46.42393736017897|
|ATL|DFW|39.454460966542754|
|ATL|LGA| 39.25498489425982|
|ATL|CLT| 37.56777108433735|
|ATL|SFO| 36.83008356545961|
+---+---+------------------+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Projection and filter push down into MapR Database&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;You can see the physical plan for a DataFrame query by calling the explain method shown below. Here we see projection and filter pushdown: the scan of the src, dst, and depdelay columns and the filter on the depdelay column are pushed down into MapR Database, so the scanning and filtering take place in MapR Database before the data is returned to Spark. Projection pushdown minimizes data transfer between MapR Database and the Spark engine by omitting unnecessary fields from table scans. It is especially beneficial when a table contains many columns. Filter pushdown improves performance by reducing the amount of data passed between MapR Database and the Spark engine when filtering data.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;graph.edges.filter(&quot;src = &apos;ATL&apos; and depdelay &gt; 1&quot;)
.groupBy(&quot;src&quot;, &quot;dst&quot;)
.avg(&quot;depdelay&quot;).sort(desc(&quot;avg(depdelay)&quot;)).explain

== Physical Plan ==
*(3) Sort [avg(depdelay)#273 DESC NULLS LAST], true, 0
+- Exchange rangepartitioning(avg(depdelay)#273 DESC NULLS LAST, 200)
   +- *(2) HashAggregate(keys=[src#5, dst#6],
         functions=[avg(depdelay#9)])
      +- Exchange hashpartitioning(src#5, dst#6, 200)
         +- *(1) HashAggregate(keys=[src#5, dst#6],
            functions=[partial_avg(depdelay#9)])
            +- *(1) Filter (((isnotnull(src#5) &amp;#x26;&amp;#x26;
                isnotnull(depdelay#9)) &amp;#x26;&amp;#x26;
                            (src#5 = ATL)) &amp;#x26;&amp;#x26; (depdelay#9 &gt; 1.0))
               +- *(1) Scan MapRDBRelation(/user/mapr/flighttable [src#5,dst#6,depdelay#9] PushedFilters: [IsNotNull(src), IsNotNull(depdelay), EqualTo(src,ATL), GreaterThan(depdelay,1.0)]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;What are the worst hours for delayed flights departing from Atlanta?&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;graph.edges.filter(&quot;src = &apos;ATL&apos; and depdelay &gt; 1&quot;)
 .groupBy(&quot;crsdephour&quot;)
 .avg(&quot;depdelay&quot;)
 .sort(desc(&quot;avg(depdelay)&quot;)).show(5)

--- result: ---
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image25-1608101178701.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What are the four most frequent flight routes in the data set? In other words, what is the count of flights for all possible flight routes, sorted?&lt;/strong&gt; (Note: we will use the returned DataFrame later)&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val flightroutecount = graph.edges
 .groupBy(&quot;src&quot;, &quot;dst&quot;)
 .count()

flightroutecount.orderBy(desc(&quot;count&quot;)).show(4)

--- result: ---
+---+---+-----+  
|src|dst|count|  
+---+---+-----+  
|LGA|ORD| 4442|  
|ORD|LGA| 4426|  
|LAX|SFO| 4406|  
|SFO|LAX| 4354|  
+---+---+-----+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Result shown as a bar chart:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image14-1608101187466.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Vertex Degrees&lt;/h2&gt;
&lt;p&gt;The degree of a vertex is the number of edges that touch the vertex. GraphFrames provides vertex inDegree, outDegree, and degree operations, which determine the number of incoming edges, outgoing edges, and total edges. Using the GraphFrames degree operation, we can answer the following question.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Which airports have the most incoming and outgoing flights?&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;graph.degrees.orderBy(desc(&quot;degree&quot;)).show(3)

--- result: ---
+---+------+  
| id|degree|  
+---+------+  
|ORD| 64386|  
|ATL| 60382|  
|LAX| 53733|  
+---+------+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Result shown as a bar chart:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image16-1608101195455.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
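&lt;p&gt;The inDegrees and outDegrees operations mentioned above work the same way; here is a brief sketch (output omitted):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// airports with the most incoming flights
graph.inDegrees.orderBy(desc(&quot;inDegree&quot;)).show(3)

// airports with the most outgoing flights
graph.outDegrees.orderBy(desc(&quot;outDegree&quot;)).show(3)
&lt;/code&gt;&lt;/pre&gt;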
&lt;h2&gt;PageRank&lt;/h2&gt;
&lt;p&gt;Another GraphFrames operation is PageRank, which is based on the Google PageRank algorithm.&lt;/p&gt;
&lt;p&gt;PageRank measures the importance of each vertex in a graph, by determining which vertices have the most edges with other vertices. In our example, we can use PageRank to determine which airports are the most important, by measuring which airports have the most connections to other airports with lots of connections. We have to specify either a convergence tolerance or, as in the code below, a reset probability and a maximum number of iterations. Note that the results are similar to the degrees operation, but the algorithm is different.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What are the most important airports, according to PageRank?&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// use pageRank
val ranks = graph.pageRank.resetProbability(0.15).maxIter(10).run()

ranks.vertices.orderBy($&quot;pagerank&quot;.desc).show()

--- result: ---
+-------------+-------+-----+---+------------------+  
|         City|Country|State| id|          pagerank|  
+-------------+-------+-----+---+------------------+  
|      Chicago|    USA|   IL|ORD| 1.421132695625478|  
|      Atlanta|    USA|   GA|ATL|1.3389970164746383|  
|  Los Angeles|    USA|   CA|LAX|1.2010647369509115|  
|       Dallas|    USA|   TX|DFW|1.1270726146978445|  
|       Denver|    USA|   CO|DEN|1.0590628954667447|  
|San Francisco|    USA|   CA|SFO| 1.024613545715222|  
|     New York|    USA|   NY|LGA|0.9449041443648624|  
|       Boston|    USA|   MA|BOS|0.8774889102400271|  
|       Newark|    USA|   NJ|EWR|0.8731704325953235|  
|        Miami|    USA|   FL|MIA|0.8507611366339813|  
|      Houston|    USA|   TX|IAH|0.8350494969577277|  
|    Charlotte|    USA|   NC|CLT|0.8049025258215664|  
|      Seattle|    USA|   WA|SEA|0.6417798484556717|  
+-------------+-------+-----+---+------------------+
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Message Passing via AggregateMessages&lt;/h2&gt;
&lt;p&gt;Many important graph algorithms are iterative algorithms, since properties of vertices depend on properties of their neighbors, which depend on properties of their neighbors. Pregel is an iterative graph processing model, developed at Google, which uses a sequence of iterations of messages passing between vertices in a graph. GraphFrames provides aggregateMessages, which implements an aggregation message-passing API, based on the Pregel model.  GraphFrames aggregateMessages sends messages between vertices and aggregates message values from the neighboring edges and vertices of each vertex.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image19-1608101204300.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The code below shows how to use aggregateMessages to compute the average flight delay by the originating airport. The flight delay for each flight is sent to the src vertex, then the average is calculated at the vertices.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.graphframes.lib.AggregateMessages

val AM = AggregateMessages
val msgToSrc = AM.edge(&quot;depdelay&quot;)
val agg = { graph.aggregateMessages
  .sendToSrc(msgToSrc)    
  .agg(avg(AM.msg).as(&quot;avgdelay&quot;))
  .orderBy(desc(&quot;avgdelay&quot;))
  .limit(5) }
agg.show()

--- result: ---
+---+------------------+  
| id|          avgdelay|  
+---+------------------+  
|EWR|17.818079459546404|  
|MIA|17.768691978431264|  
|ORD|  16.5199551010227|  
|ATL|15.330084535057185|  
|DFW|15.061909338459074|  
+---+------------------+
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Motif Find for Graph Pattern Queries&lt;/h2&gt;
&lt;p&gt;Motif finding searches for structural patterns in a graph. In this example, we want to find flights with no direct connection. First, we create a subgraph from the flightroutecount DataFrame that we created earlier, which gives us a subgraph with all the possible flight routes. Then, we do a find on the pattern shown here to search for flights from a to b and b to c, that do not have a flight from a to c.  Finally, we use a DataFrame filter to remove duplicates. This shows how Graph queries can be easily combined with DataFrame operations like filter.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What are the flight routes with no direct connection?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image8-1608101212457.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val subGraph = GraphFrame(graph.vertices, flightroutecount)
val res = subGraph
 .find(&quot;(a)-[]-&gt;(b); (b)-[]-&gt;(c); !(a)-[]-&gt;(c)&quot;)
 .filter(&quot;c.id != a.id&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image26-1608101221169.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Shortest Path Graph Algorithm&lt;/h2&gt;
&lt;p&gt;Shortest path computes the shortest paths from each vertex to the given sequence of landmark vertices. Here, we search for the shortest path from each airport to LGA. The results show that there are no direct flights from LAX, SFO, SEA, or EWR to LGA (their distances are greater than 1).&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val results = graph.shortestPaths.landmarks(Seq(&quot;LGA&quot;)).run()
results.show()

--- result: ---
+---+----------+
| id| distances|
+---+----------+
|IAH|[LGA -&gt; 1]|
|CLT|[LGA -&gt; 1]|
|LAX|[LGA -&gt; 2]|
|DEN|[LGA -&gt; 1]|
|DFW|[LGA -&gt; 1]|
|SFO|[LGA -&gt; 2]|
|LGA|[LGA -&gt; 0]|
|ORD|[LGA -&gt; 1]|
|MIA|[LGA -&gt; 1]|
|SEA|[LGA -&gt; 2]|
|ATL|[LGA -&gt; 1]|
|BOS|[LGA -&gt; 1]|
|EWR|[LGA -&gt; 2]|
+---+----------+
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Breadth First Search  Graph Algorithm&lt;/h2&gt;
&lt;p&gt;Breadth-first search (BFS) finds the shortest path from beginning vertices to end vertices. The beginning and end vertices are specified as DataFrame expressions, and maxPathLength specifies the limit on the length of paths. Here we see that there are no direct flights between LAX and LGA.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val paths = graph.bfs.fromExpr(&quot;id = &apos;LAX&apos;&quot;)
 .toExpr(&quot;id = &apos;LGA&apos;&quot;)
 .maxPathLength(1).run()

paths.show()

+----+-------+-----+---+  
|City|Country|State| id|  
+----+-------+-----+---+  
+----+-------+-----+---+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here we set the maxPathLength to 2. The results show some flights connecting through IAH for flights from LAX to LGA.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val paths = graph.bfs.fromExpr(&quot;id = &apos;LAX&apos;&quot;)
 .toExpr(&quot;id = &apos;LGA&apos;&quot;)
 .maxPathLength(2).run().limit(4)
paths.show()
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image17-1608101229234.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can combine motif searching with DataFrame operations. Here we want to find connecting flights between LAX and LGA using a Motif find query. We use a Motif query to search for the pattern of a flying to c, connecting through b; then we use a DataFrame filter on the results for A=LAX and C=LGA. The results show some flights connecting through IAH for flights from LAX to LGA.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;graph.find(&quot;(a)-[ab]-&gt;(b); (b)-[bc]-&gt;(c)&quot;)
.filter(&quot;a.id = &apos;LAX&apos;&quot;)
.filter(&quot;c.id = &apos;LGA&apos;&quot;).limit(4)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image6-1608101238194.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Combining a Motif find with DataFrame operations, we can narrow these results down further, for example flights with the arrival flight time before the departure flight time, and/or with a specific carrier.&lt;/p&gt;
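&lt;p&gt;For example, a hypothetical refinement of the motif above (a sketch, not from the original post; the carrier value is just an example) keeps only itineraries where the first leg is scheduled to arrive before the second leg departs:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// connecting LAX-to-LGA itineraries where leg 1 arrives before leg 2 departs,
// on an example carrier (&quot;AA&quot;)
graph.find(&quot;(a)-[ab]-&gt;(b); (b)-[bc]-&gt;(c)&quot;)
  .filter(&quot;a.id = &apos;LAX&apos; and c.id = &apos;LGA&apos;&quot;)
  .filter(&quot;ab.crsarrtime &lt; bc.crsdeptime&quot;)
  .filter(&quot;ab.carrier = &apos;AA&apos;&quot;)
  .limit(4)
  .show()
&lt;/code&gt;&lt;/pre&gt;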
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;GraphFrames provides a scalable and easy way to query and process large graph datasets, which can be used to solve many types of analysis problems. In this post, we gave an overview of the GraphFrames graph processing APIs. We encourage you to try out GraphFrames in more depth on some of your own projects.&lt;/p&gt;
&lt;p&gt;All of the components of the use case we just discussed can run on the same cluster with the MapR Data Platform.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image27-1608101246907.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Code&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;You can download the code and data to run these examples from here: &lt;a href=&quot;https://github.com/mapr-demos/mapr-spark2-ebook&quot;&gt;https://github.com/mapr-demos/mapr-spark2-ebook&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Apache Spark Packages, from XML to JSON]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may…]]></description><link>https://developer.hpe.com/apache-spark-packages-from-xml-to-json/</link><guid isPermaLink="false">https://developer.hpe.com/apache-spark-packages-from-xml-to-json/</guid><pubDate>Fri, 11 Dec 2020 03:26:34 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Nicolas A Perez&quot;,
&quot;publish&quot;: &quot;2016-08-23T07:00:00.000Z&quot;,
&quot;tags&quot;: &quot;apache-spark&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;The Apache Spark community has put a lot of effort into extending Spark. Recently, we wanted to transform an XML dataset into something that was easier to query. We were mainly interested in doing data exploration on top of the billions of transactions that we get every day. XML is a well-known format, but sometimes it can be complicated to work with. In Apache Hive, for instance, we could define the structure of the schema of our XML and then query it using SQL.&lt;/p&gt;
&lt;p&gt;However, it was hard for us to keep up with the changes on the XML structure, so the previous option was discarded. We were using Spark Streaming capabilities to bring these transactions to our cluster, and we were thinking of doing the required transformations within Spark. However, the same problem remained, as we had to change our Spark application every time the XML structure changed.&lt;/p&gt;
&lt;p&gt;There must be another way!&lt;/p&gt;
&lt;p&gt;There is an Apache Spark package from the community that we could use to solve these problems. In this blog post, I&apos;ll walk you through how to use an Apache Spark package from the community to read any XML file into a DataFrame.&lt;/p&gt;
&lt;p&gt;Let’s load the Spark shell and see an example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-Scala&quot;&gt;./spark-shell --packages com.databricks:spark-xml_2.10:0.3.3
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In here, we just added the XML package to our Spark environment. This of course can be added when writing a Spark app and packaging it into a jar file.&lt;/p&gt;
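&lt;p&gt;For reference, a rough sketch of the sbt equivalent (an assumption about the build tooling, not from the original post), using the same artifact coordinates as the shell command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-Scala&quot;&gt;// build.sbt (sketch): pull in the same spark-xml artifact used above
libraryDependencies += &quot;com.databricks&quot; % &quot;spark-xml_2.10&quot; % &quot;0.3.3&quot;
&lt;/code&gt;&lt;/pre&gt;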
&lt;p&gt;Using the package, we can read any XML file into a DataFrame. When loading the DataFrame, we could specify the schema of our data, but this was our main concern in the first place, so we will let Spark infer it. The inference of the DataFrame schema is a very powerful trick, since we no longer need to know the schema in advance and it can change at any time.&lt;/p&gt;
&lt;p&gt;Let’s see how we load our XML files into a DataFrame:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-Scala&quot;&gt;val df = sqlContext
          .read
          .format(&quot;com.databricks.spark.xml&quot;)
          .option(&quot;rowTag&quot;, &quot;OrderSale&quot;)
          .load(&quot;~/transactions_xml_folder/&quot;)

df.printSchema

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Printing the DataFrame schema gives us an idea of what the inference system has done.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-Scala&quot;&gt;root
 |-- @ApplicationVersion: string (nullable = true)
 |-- @BusinessDate: string (nullable = true)
 |-- @Change: double (nullable = true)
 |-- @EmployeeId: long (nullable = true)
 |-- @EmployeeName: string (nullable = true)
 |-- @EmployeeUserId: long (nullable = true)
 |-- @MealLocation: long (nullable = true)
 |-- @MessageId: string (nullable = true)
 |-- @OrderNumber: long (nullable = true)
 |-- @OrderSourceTypeId: long (nullable = true)
 |-- @PosId: long (nullable = true)
 |-- @RestaurantType: long (nullable = true)
 |-- @SatelliteNumber: long (nullable = true)
 |-- @SpmHostOrderCode: string (nullable = true)
 |-- @StoreNumber: long (nullable = true)
 |-- @TaxAmount: double (nullable = true)
 |-- @TaxExempt: boolean (nullable = true)
 |-- @TaxInclusiveAmount: double (nullable = true)
 |-- @TerminalNumber: long (nullable = true)
 |-- @TimeZoneName: string (nullable = true)
 |-- @TransactionDate: string (nullable = true)
 |-- @TransactionId: long (nullable = true)
 |-- @UTCOffSetMinutes: long (nullable = true)
 |-- @Version: double (nullable = true)
 |-- Items: struct (nullable = true)
 |    |-- MenuItem: struct (nullable = true)
 |    |    |-- #VALUE: string (nullable = true)
 |    |    |-- @AdjustedPrice: double (nullable = true)
 |    |    |-- @CategoryDescription: string (nullable = true)
 |    |    |-- @DepartmentDescription: string (nullable = true)
 |    |    |-- @Description: string (nullable = true)
 |    |    |-- @DiscountAmount: double (nullable = true)
 |    |    |-- @Id: long (nullable = true)
 |    |    |-- @PLU: long (nullable = true)
 |    |    |-- @PointsRedeemed: long (nullable = true)
 |    |    |-- @Price: double (nullable = true)
 |    |    |-- @PriceLessIncTax: double (nullable = true)
 |    |    |-- @PriceOverride: boolean (nullable = true)
 |    |    |-- @ProductivityUnitQuantity: double (nullable = true)
 |    |    |-- @Quantity: long (nullable = true)
 |    |    |-- @TaxAmount: double (nullable = true)
 |    |    |-- @TaxInclusiveAmount: double (nullable = true)
 |-- OrderTaxes: struct (nullable = true)
 |    |-- TaxByImposition: struct (nullable = true)
 |    |    |-- #VALUE: string (nullable = true)
 |    |    |-- @Amount: double (nullable = true)
 |    |    |-- @ImpositionId: long (nullable = true)
 |    |    |-- @ImpositionName: string (nullable = true)
 |-- Payments: struct (nullable = true)
 |    |-- Payment: struct (nullable = true)
 |    |    |-- #VALUE: string (nullable = true)
 |    |    |-- @AccountIDLast4: string (nullable = true 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;At this point, we could use any SQL tool to query our XML using Spark SQL. Please read this post &lt;a href=&quot;https://medium.com/@anicolaspp/apache-spark-as-a-distributed-sql-engine-4373e254e0f9#.w77z4ml3r&quot;&gt;Apache Spark as a Distributed SQL Engine&lt;/a&gt; to learn more about Spark SQL. Going a step further, we could use tools that can read data in JSON format. JSON datasets are especially useful if you have something like Apache Drill.&lt;/p&gt;
&lt;p&gt;As we could expect, with Spark we can do any kind of transformations, but there is no need to write a fancy JSON encoder because Spark already supports these features. Let’s convert our DataFrame to JSON and save it our file system.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-Scala&quot;&gt;val jsons = df.toJSON
jsons.saveAsTextFile(&quot;~/json_folder/&quot;)

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When applying the JSON function to the DataFrame, we get an &lt;code&gt;RDD[String]&lt;/code&gt; with the JSON representation of our data. Then we save the RDD as a plain text file. Now, we could use Drill to read and query our new dataset and, of course, we can always go back to Spark if we need to do more complicated operations or transformations.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Transforming our dataset from XML to JSON is an easy task in Spark, but the advantages of JSON over XML are a big deal. We now can rest assured that XML schema changes are not going to affect us at all. We have removed ourselves from the burden of changing our application for every XML change. We can also use powerful tools to query our JSON dataset such as Apache Drill in a schema free fashion while our clients can report on our data using SQL.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Types of Machine Learning – Part #2 in the Intro to AI/ML Series]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may…]]></description><link>https://developer.hpe.com/types-of-machine-learning-part-2-in-the-intro-to-aiml-series/</link><guid isPermaLink="false">https://developer.hpe.com/types-of-machine-learning-part-2-in-the-intro-to-aiml-series/</guid><pubDate>Wed, 09 Dec 2020 07:38:36 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Saira Kennedy&quot;,
&quot;publish&quot;: &quot;2018-09-28T07:00:00.000Z&quot;,
&quot;tags&quot;: &quot;machine-learning&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;Based on MapR Academy course, &lt;a href=&quot;https://learn.ezmeral.software.hpe.com/bus-introduction-to-artificial-intelligence-and-machine-learning&quot;&gt;Introduction to Artificial Intelligence and Machine Learning&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;In this post – second in the Intro to AI/ML Series – we discuss the different methods of machine learning and some of the most common algorithms available for your projects. &lt;a href=&quot;/blog/kkLZ0mNpXqhQMZLRJQqr/artificial-intelligence-and-machine-learning-what-are-they-and-why-are-t&quot;&gt;Read the first blog in this series here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image12-1607499687612.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Types of Machine Learning&lt;/h2&gt;
&lt;p&gt;There are a few different types of machine learning, but they generally fall into these main groups. Supervised and unsupervised learning are the primary learning types, along with semi-supervised learning.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image8-1607499695694.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Other methods sit in the middle or on the outskirts of these methodologies, such as reinforcement learning, which describes a machine that creates a continuous learning loop as it trains itself on its own results. These results are then fed back into the system as input data. We won&apos;t be going into much detail on these other techniques, but further information can be found online.&lt;/p&gt;
&lt;p&gt;First, let&apos;s understand what differentiates each of these learning types from each other and how they work.&lt;/p&gt;
&lt;h2&gt;Supervised Learning: Defined&lt;/h2&gt;
&lt;p&gt;Supervised learning uses labeled data to train machines to learn the relationships between given inputs and outputs.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image4-1607499702919.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;A label is a known description given to objects in the data, which trains the machine on what to look for. Labels also provide the structure of the algorithm output, as any result must be one of these labels. Therefore, you can think of labels as a schema, defining the possible output that we want the machine to look for.&lt;/p&gt;
&lt;p&gt;Think of supervised learning as the algorithm to use when data scientists have labeled input data and when the type of behavior to predict is known. We want the machine to learn the patterns used to classify this data and apply those patterns to classify new data.&lt;/p&gt;
&lt;h2&gt;How Supervised Learning Works&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;First, labeled or classified data is loaded into the system. The preparation of labeled data makes this step the most time-consuming, as it is often done by a human trainer.&lt;/li&gt;
&lt;li&gt;The model is trained and connections to inputs and outputs are made.&lt;/li&gt;
&lt;li&gt;As new data is introduced, the algorithm is applied.&lt;/li&gt;
&lt;li&gt;Output is categorized data.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image9-1607499710729.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In the previous post, we provided the cat example on labeled data: the trained labels include ears, nose, tail, paws, and cat. The algorithm applies these labels to the presented data, in this case an image of a cat, and returns the known output: &quot;cat,&quot; yes.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image10-1607499718508.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
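&lt;p&gt;To make these steps concrete, here is a minimal, hypothetical Spark MLlib sketch (not part of the original course material) that follows the same load-train-apply flow, with a label of 1.0 standing for &quot;spam&quot; and 0.0 for &quot;not spam&quot;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.linalg.Vectors

// 1. Labeled training data: a known label plus numeric features
val training = spark.createDataFrame(Seq(
  (1.0, Vectors.dense(0.9, 0.1)),
  (0.0, Vectors.dense(0.1, 0.8)),
  (1.0, Vectors.dense(0.8, 0.2)),
  (0.0, Vectors.dense(0.2, 0.9))
)).toDF(&quot;label&quot;, &quot;features&quot;)

// 2. Train the model: it learns the relationship between inputs and outputs
val model = new LogisticRegression().fit(training)

// 3. Apply the trained model to new, unseen data
val test = spark.createDataFrame(Seq(
  Tuple1(Vectors.dense(0.85, 0.15))
)).toDF(&quot;features&quot;)

// 4. The output is categorized data: a predicted label for each row
model.transform(test).select(&quot;features&quot;, &quot;prediction&quot;).show()
&lt;/code&gt;&lt;/pre&gt;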
&lt;h2&gt;Pros and Cons of Supervised Learning&lt;/h2&gt;
&lt;p&gt;Supervised learning always has a clear objective and can be easily measured for accuracy. The training of the machine is also tightly controlled, which leads to very specific behavioral outcomes.&lt;/p&gt;
&lt;p&gt;On the downside, it is often very labor-intensive, as all data needs to be labeled before the model is trained, which can take hundreds of hours of specialized human effort. The costs can become astronomical. This creates an overall slower training process and may also limit the data that it can work with.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Pros&lt;/th&gt;
&lt;th&gt;Cons&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Very clear objective&lt;/td&gt;
&lt;td&gt;Often labor-intensive&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Easy to measure accuracy&lt;/td&gt;
&lt;td&gt;Limited data to work with&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Controlled training&lt;/td&gt;
&lt;td&gt;Limited insights&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Finally, insights may be more limited, as the predicted behavior is described in advance. There is no freedom for the machine to explore other possibilities, as we will see with unsupervised learning.&lt;/p&gt;
&lt;p&gt;In supervised learning, there are primarily two categories of algorithms: classification and regression.&lt;/p&gt;
&lt;p&gt;A classification algorithm organizes input data as belonging to one of several predefined classes. This algorithm is the most useful for providing categorical results that fit within the predefined labels. It is very effective with well-calculated if-then rules and distinguishes one class of objects from another.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;Algorithm or Task&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Supervised&lt;/td&gt;
&lt;td&gt;Classification (used to predict a categorical result)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Supervised&lt;/td&gt;
&lt;td&gt;Regression (used to predict the output value given the input value)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;Supervised Algorithms&lt;/h2&gt;
&lt;h2&gt;Classification: Used to predict a categorical result&lt;/h2&gt;
&lt;p&gt;Some common use cases for classification algorithms include credit card fraud detection and email spam detection, both of which are binary classification problems, meaning there are only two possible output values. Data is labeled, for example, as fraud/non-fraud or spam/non-spam.&lt;/p&gt;
&lt;p&gt;Generally, if the question we are asking of a model is open-ended or if the potential answers are not categorical, then we aren&apos;t dealing with a classification problem, but more likely a regression one.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image13-1607499726836.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;A regression algorithm attempts to predict the output value given the input value. Regression problems are predictive of a continuous numerical, as opposed to categorical, result.&lt;/p&gt;
&lt;p&gt;Think of this continuous value as a range or average, something that is estimating the relationship between variables.&lt;/p&gt;
&lt;h2&gt;Regression: Used to predict the output value given the input value&lt;/h2&gt;
&lt;p&gt;For example, this type of algorithm can be used to determine how profitable a credit card model is. It is also used to predict customer or employee churn.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image2-1607499735202.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Regression algorithms determine the strength of correlation between two attributes, allowing you to find a predictive range of likelihood.&lt;/p&gt;
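&lt;p&gt;As a rough sketch of the idea in Python (assuming scikit-learn; the claim and fraud amounts below are invented), a regression model fits the relationship between an input value and a continuous output value:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from sklearn.linear_model import LinearRegression

# Toy data: claimed amount (feature) and amount of fraud (label)
X_train = [[1000], [2000], [3000], [4000], [5000]]
y_train = [150, 290, 460, 610, 740]

# Fit a line that estimates the relationship between the two
model = LinearRegression()
model.fit(X_train, y_train)

# Predict the fraud amount for a new claim of 3500
print(model.predict([[3500]]))
&lt;/code&gt;&lt;/pre&gt;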
&lt;h2&gt;Supervised Algorithms Table&lt;/h2&gt;
&lt;p&gt;This table depicts some of the most common algorithms used with supervised learning types. It is important to understand that many machine learning data models will use more than one, and sometimes many, different algorithms for a project.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;Algorithm or Task&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Classification&lt;/td&gt;
&lt;td&gt;Naïve Bayes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Classification&lt;/td&gt;
&lt;td&gt;Logistic Regression&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Classification&lt;/td&gt;
&lt;td&gt;Support Vector Machines (SVMs)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Regression&lt;/td&gt;
&lt;td&gt;Linear Regression&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Both&lt;/td&gt;
&lt;td&gt;Decision Trees/Random Forest&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Both&lt;/td&gt;
&lt;td&gt;&lt;em&gt;k&lt;/em&gt;-Nearest Neighbors (&lt;em&gt;k&lt;/em&gt;-NN)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Both&lt;/td&gt;
&lt;td&gt;Gradient Boosting Algorithms&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;Unsupervised Learning: Defined&lt;/h2&gt;
&lt;p&gt;While supervised learning involves having labeled data to find input-output relationships during the training phase, unsupervised learning has no knowledge of the output label. In this type of ML, the machine finds groups and patterns in the data on its own, and there is no specific outcome or target to predict.&lt;/p&gt;
&lt;p&gt;Think of unsupervised learning as the algorithm to use when we don&apos;t know how to classify the data and we want the machine to classify or group it for us.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image11-1607499742282.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;How Unsupervised Learning Works&lt;/h2&gt;
&lt;p&gt;Here are the steps of how the unsupervised learning algorithm works:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;First, unlabeled raw data is loaded into the system.&lt;/li&gt;
&lt;li&gt;Next, the algorithm analyzes the data.&lt;/li&gt;
&lt;li&gt;It looks for patterns on its own.&lt;/li&gt;
&lt;li&gt;Then, it identifies and groups patterns of behavior and provides output results.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image1-1607499749885.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Pros and Cons of Unsupervised Learning&lt;/h2&gt;
&lt;p&gt;Compared to supervised learning, unsupervised learning projects are much faster to implement, as no data labeling is required. In this regard, it uses fewer human resources. It also interprets data on its own and has the potential to provide unique, disruptive insights for a business to consider.&lt;/p&gt;
&lt;p&gt;However, unsupervised learning can be difficult to measure for accuracy because there is no expected result to compare it to. It can require more experimentation and tuning to get meaningful results.&lt;/p&gt;
&lt;p&gt;Lastly, unsupervised learning does not natively handle high-dimensional data, or massively large datasets with considerable variance, well. This is known as the curse of dimensionality. In some cases, the dimensions, or number of variables, may need to be reduced for it to work effectively. This requires human-intensive data cleansing.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Pros&lt;/th&gt;
&lt;th&gt;Cons&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Very fast to start&lt;/td&gt;
&lt;td&gt;Difficult to measure accuracy&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Disruptive insights&lt;/td&gt;
&lt;td&gt;Requires more experimentation and tuning&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Curse of dimensionality&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;Common Use Cases for Unsupervised Algorithms&lt;/h2&gt;
&lt;p&gt;Let&apos;s take a look at a common use case example using cluster analysis. Cluster analysis has the goal of organizing raw data into related groups and is often used for anomaly detection.&lt;/p&gt;
&lt;p&gt;A security company uses it to identify unusual patterns in network traffic, indicating potential signs of a security breach or intrusion.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image5-1607499758343.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Unsupervised Learning Use Case – Anomaly Detection&lt;/h2&gt;
&lt;p&gt;Recall the steps of how this type of algorithm works.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;First, the security company streams in raw network traffic data.&lt;/li&gt;
&lt;li&gt;Next, the algorithm analyzes the data on its own and looks for unusual patterns.&lt;/li&gt;
&lt;li&gt;Then, it identifies patterns of behavior as either normal or suspect.&lt;/li&gt;
&lt;li&gt;When suspect behavior is identified, the output is provided and the company is notified.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;With this example using anomaly detection, a scatter plot may return results looking something like the image below. The green dots indicate behavior that is grouped together as normal, and the red dots show the potential outliers that are sent back as suspect.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image14-1607499766291.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
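&lt;p&gt;A minimal Python sketch of this idea, assuming scikit-learn and a handful of invented traffic measurements, might look like the following. The observations are clustered on their own, and the smaller cluster is treated as the suspect group:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import numpy as np
from sklearn.cluster import KMeans

# Toy network-traffic features (e.g., packets/sec and bytes/sec); values are invented
traffic = np.array([[10, 200], [12, 210], [11, 190], [13, 205],
                    [9, 195], [11, 198], [52, 940], [48, 910]])

# Group the unlabeled observations into two clusters
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(traffic)
labels = kmeans.labels_

# Treat the smaller cluster as the potential outliers (the suspect behavior)
suspect_cluster = np.argmin(np.bincount(labels))
print(traffic[labels == suspect_cluster])
&lt;/code&gt;&lt;/pre&gt;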
&lt;p&gt;This table depicts some unsupervised learning algorithms. The most common algorithm here is &lt;em&gt;k&lt;/em&gt;-means, for cluster analysis, which is what we&apos;ve just focused on with our security use case example on anomaly detection.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;Algorithm or Task&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Unsupervised&lt;/td&gt;
&lt;td&gt;&lt;em&gt;k&lt;/em&gt;-Means: Cluster Analysis&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Unsupervised&lt;/td&gt;
&lt;td&gt;Association Rule Learning&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Unsupervised&lt;/td&gt;
&lt;td&gt;Dimensionality Reduction Techniques (PCA, SVD)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
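&lt;p&gt;As an illustration of the dimensionality reduction techniques in the last row, here is a small Python sketch (assuming scikit-learn and NumPy; the data is randomly generated) that projects five correlated variables down to two principal components:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import numpy as np
from sklearn.decomposition import PCA

# Toy high-dimensional data: 6 observations with 5 correlated variables
rng = np.random.default_rng(0)
base = rng.normal(size=(6, 2))
data = np.hstack([base, base @ rng.normal(size=(2, 3))])

# Reduce the 5 original dimensions down to 2 principal components
pca = PCA(n_components=2)
reduced = pca.fit_transform(data)

print(reduced.shape)                  # (6, 2)
print(pca.explained_variance_ratio_)  # share of variance kept by each component
&lt;/code&gt;&lt;/pre&gt;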
&lt;h2&gt;Semi-Supervised Learning: Defined&lt;/h2&gt;
&lt;p&gt;Semi-supervised learning includes a combination of supervised and unsupervised learning types together. Usually, this means that only a part of the provided input data is labeled, which the machine is trained on. It then learns to create additional labels and classifiers for raw data, on its own, which in turn gets added back to the original training data set.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image6-1607499774519.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;How Semi-Supervised Learning Works&lt;/h2&gt;
&lt;p&gt;As a combination of the previous two learning types, let&apos;s look at how a common self-training algorithm, from the semi-supervised learning method, works:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;First, an initial set of labeled input training data is loaded into the system.&lt;/li&gt;
&lt;li&gt;The model is trained on the data. Then, a new data set of unlabeled data is presented.&lt;/li&gt;
&lt;li&gt;The algorithm infers new labels and classifiers to apply to the new data. High-confidence data, or data that scores well based on the algorithm, is added back to the original labeled data set. From here, the machine progressively adapts and learns in an iterative process.&lt;/li&gt;
&lt;li&gt;In some cases, when the inferred labels and the rule-based engine conflict, a human is needed for verification.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image15-1607499782353.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
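&lt;p&gt;A minimal Python sketch of self-training, assuming scikit-learn (which marks unlabeled points with -1) and a handful of invented one-dimensional measurements, looks like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

# Toy data: a few labeled points plus unlabeled points marked with -1
X = np.array([[1.0], [1.2], [0.9], [5.0], [5.2], [4.8], [1.1], [5.1], [3.0]])
y = np.array([0, 0, 0, 1, 1, 1, -1, -1, -1])

# Fit on the labeled points, then iteratively label high-confidence
# unlabeled points and retrain on the enlarged training set
self_training = SelfTrainingClassifier(LogisticRegression(), threshold=0.8)
self_training.fit(X, y)

# Labels the model has inferred for the previously unlabeled points
print(self_training.transduction_)
&lt;/code&gt;&lt;/pre&gt;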
&lt;h2&gt;Semi-Supervised Algorithms Table&lt;/h2&gt;
&lt;p&gt;This table depicts some semi-supervised learning algorithms.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;Algorithm or Task&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Semi-Supervised&lt;/td&gt;
&lt;td&gt;Self-Training Algorithms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Semi-Supervised&lt;/td&gt;
&lt;td&gt;Generative Model – Gaussian Mixture Model&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Semi-Supervised&lt;/td&gt;
&lt;td&gt;Graph-Based Algorithms – Label Propagation&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;Check your Knowledge&lt;/h2&gt;
&lt;p&gt;Classify the items listed below as a supervised, semi-supervised, or unsupervised learning method:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image16-1607499789317.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Answer Key:&lt;/p&gt;
&lt;p&gt;Works on both labeled data and raw data: Semi-supervised&lt;/p&gt;
&lt;p&gt;Easiest data preparation method: Unsupervised&lt;/p&gt;
&lt;p&gt;Only uses labeled input data: Supervised&lt;/p&gt;
&lt;p&gt;Infers patterns on its own: Unsupervised&lt;/p&gt;
&lt;p&gt;Output is predefined: Supervised&lt;/p&gt;
&lt;p&gt;Can be used to automate data labeling: Semi-supervised&lt;/p&gt;
&lt;h2&gt;More Resources:  Machine Learning Libraries&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/12/image3-1607499796807.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Where do we go from here?&lt;/h2&gt;
&lt;p&gt;Keep your eyes out for the next post in this series, discussing real-world use cases for AI and ML.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[ Introducing HPE Nimble Storage SDK for Go]]></title><description><![CDATA[In the cloud-native, API-first economy, the infrastructure platform relevance increases when gauged against how well it integrates with…]]></description><link>https://developer.hpe.com/introducing-hpe-nimble-storage-sdk-for-go/</link><guid isPermaLink="false">https://developer.hpe.com/introducing-hpe-nimble-storage-sdk-for-go/</guid><pubDate>Wed, 25 Nov 2020 18:25:52 GMT</pubDate><content:encoded>&lt;p&gt;In the cloud-native, API-first economy, an infrastructure platform&apos;s relevance increases when gauged against how well it integrates with others. The Go Programming Language is considered to be the language of choice for the cloud and for building microservices. Many large, recognized open source projects use Go, such as Kubernetes, Docker, InfluxDB and Terraform. Some of the most successful companies in the new API economy use Go, including Uber, Netflix, Google, and of course, Hewlett Packard Enterprise (HPE). Go was invented by Google and designed to scale concurrency across networked systems and support large codebases, which many of the successful projects can attest to.&lt;/p&gt;
&lt;p&gt;Today HPE introduces the HPE Nimble Storage SDK for Go. Built on top of the robust REST API available in NimbleOS, developers may take advantage of the full suite of APIs to build robust applications utilizing HPE Nimble Storage features and capabilities.&lt;/p&gt;
&lt;h1&gt;Example usage&lt;/h1&gt;
&lt;p&gt;There are some more elaborate examples available &lt;a href=&quot;https://github.com/hpe-storage/nimble-golang-sdk/tree/master/examples&quot;&gt;in the GitHub repo&lt;/a&gt;, but let&apos;s see how a complete example program could work. Instructions on how to install Go on your computer are &lt;a href=&quot;https://golang.org/doc/install&quot;&gt;available here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Save this file to &lt;code&gt;createvolume.go&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-go&quot;&gt;package main

import (
	&quot;fmt&quot;
	&quot;flag&quot;
	&quot;github.com/hpe-storage/nimble-golang-sdk/pkg/client/v1/nimbleos&quot;
	&quot;github.com/hpe-storage/nimble-golang-sdk/pkg/service&quot;
)

func main() {

	// Input parameters
	groupParam      := flag.String(&quot;g&quot;, &quot;myarray&quot;, &quot;Nimble group `IP/hostname`&quot;)
	usernameParam   := flag.String(&quot;u&quot;, &quot;admin&quot;, &quot;Nimble array `username`&quot;)
	passwordParam   := flag.String(&quot;p&quot;, &quot;admin&quot;, &quot;Nimble array `password`&quot;)
	volumeNameParam := flag.String(&quot;v&quot;, &quot;myvol&quot;, &quot;Volume `name` to create&quot;)
	volumeSizeParam := flag.Int64(&quot;s&quot;, 5120, &quot;Volume `size in MB`&quot;)

	// Parse flags
	flag.Parse()

	// Create new group service
	groupService, err := service.NewNsGroupService(
		*groupParam,
		*usernameParam,
		*passwordParam,
		&quot;v1&quot;,
		true)

	if err != nil {
		fmt.Printf(&quot;Unable to connect to group, %+v\n&quot;, err.Error())
		return
	}

	// Logout when finished
	defer groupService.LogoutService()

	// Set mandatory volume attributes
	newVolume := &amp;#x26;nimbleos.Volume{
		Name:	volumeNameParam,
		Size:	volumeSizeParam,
	}

	// Assign volume service instance
	volSvc := groupService.GetVolumeService()

	// Create volume
	volume, err := volSvc.CreateVolume(newVolume)

	if err != nil {
		fmt.Printf(&quot;Failed to create volume, %+v\n&quot;, err.Error())
		return
	}

	// Volume created
	fmt.Printf(&quot;Volume \&quot;%s\&quot; created (%s)\n&quot;, *volume.Name, *volume.ID)
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Go is a compiled language, so, in order to run the example, it needs to be compiled:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;go build createvolume.go
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, we have the ability to create volumes from a remote command-line:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;./createvolume -h
Usage of ./createvolume:
  -g IP/hostname
    	Nimble group IP/hostname (default &quot;myarray&quot;)
  -p password
    	Nimble array password (default &quot;admin&quot;)
  -s size in MB
    	Volume size in MB (default 5120)
  -u username
    	Nimble array username (default &quot;admin&quot;)
  -v name
    	Volume name to create (default &quot;myvol&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Let&apos;s create a new volume:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;./createvolume -g myarray1.example.com -u admin -p admin -v myvol1 -s 10240
Volume &quot;myvol1&quot; created (06412618225c784ec900000000000000000000000b)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And that is how simple it is with only a few lines of code to create a custom CLI for a common operation.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; It&apos;s extremely bad practice to supply a password on the command line; please use other means to provide the password to the SDK in a production scenario.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h1&gt;Available endpoints&lt;/h1&gt;
&lt;p&gt;Each of the API endpoints available on the REST service is available to the SDK and is mapped from the plural version of the endpoint to the singular version of the service. For example, &lt;code&gt;v1/volumes&lt;/code&gt; is accessible on &lt;code&gt;myService.GetVolumeService()&lt;/code&gt; (&lt;code&gt;myService&lt;/code&gt; being a &lt;code&gt;GroupService&lt;/code&gt; object).&lt;/p&gt;
&lt;p&gt;These are the currently available services in the SDK.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;AccessControlRecordService		MasterKeyService
ActiveDirectoryMembershipService	NetworkConfigService
AlarmService				NetworkInterfaceService
ApplicationCategoryService		PerformancePolicyService
ApplicationServerService		PoolService
ArrayService				ProtectionScheduleService
AuditLogService				ProtectionTemplateService
ChapUserService				ProtocolEndpointService
ControllerService			ReplicationPartnerService
DiskService				ShelfService
EventService				SnapshotCollectionService
FibreChannelConfigService		SnapshotService
FibreChannelInitiatorAliasService	SoftwareVersionService
FibreChannelInterfaceService		SpaceDomainService
FibreChannelPortService			SubnetService
FibreChannelSessionService		TokenService
FolderService				UserGroupService
GroupService				UserService
InitiatorGroupService			VersionService
InitiatorService			VolumeCollectionService
JobService				VolumeService
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The list of services will grow as new versions of NimbleOS introduce new features.&lt;/p&gt;
&lt;h1&gt;Start writing code&lt;/h1&gt;
&lt;p&gt;The SDK is available immediately and we encourage feedback and collaboration. Check out the following resources to learn more.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;HPE Nimble Storage SDK for Go &lt;a href=&quot;https://github.com/hpe-storage/nimble-golang-sdk&quot;&gt;on GitHub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://tour.golang.org/&quot;&gt;Learn Go&lt;/a&gt; from the official language creators&lt;/li&gt;
&lt;li&gt;Learn more about HPE Nimble Storage on &lt;a href=&quot;https://developer.hpe.com/platform/hpe-nimble-storage/home&quot;&gt;the HPE DEV platform page&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The HPE Nimble Storage team hangs out on the HPE DEV Slack community in &lt;a href=&quot;https://hpedev.slack.com/archives/C7TTAHRUN&quot;&gt;#NimbleStorage&lt;/a&gt;. Sign up at &lt;a href=&quot;https://slack.hpedev.io&quot;&gt;slack.hpedev.io&lt;/a&gt; and login to the Slack at &lt;a href=&quot;https://hpedev.slack.com&quot;&gt;hpedev.slack.com&lt;/a&gt;. Please tell us what you&apos;re building!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Apache Spark Machine Learning Tutorial]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may…]]></description><link>https://developer.hpe.com/apache-spark-machine-learning-tutorial/</link><guid isPermaLink="false">https://developer.hpe.com/apache-spark-machine-learning-tutorial/</guid><pubDate>Wed, 25 Nov 2020 03:08:41 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Carol McDonald&quot;,
&quot;publish&quot;: &quot;2016-02-22T08:00:00.000Z&quot;,
&quot;update&quot;: &quot;2019-02-20T08:00:00.000Z&quot;,
&quot;tags&quot;: [&quot;machine-learning&quot;,&quot;tutorial&quot;]
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;In this blog post, we will give an introduction to machine learning and deep learning,  and we will go over the main Spark machine learning algorithms and techniques with some real-world use cases. The goal is to give you a better understanding of what you can do with machine learning. Machine learning is becoming more accessible to developers, and data scientists work with domain experts, architects, developers, and data engineers, so it is important for everyone to have a better understanding of the possibilities. Every piece of information that your business generates has potential to add value. This overview is meant to provoke a review of your own data to identify new opportunities.&lt;/p&gt;
&lt;p&gt;With Apache Spark 2.0 and later versions, big improvements were implemented to make Spark easier to program and execute faster:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;the Spark SQL and the Dataset/DataFrame APIs provide ease of use, space efficiency, and performance gains with Spark SQL&apos;s optimized execution engine.  &lt;/li&gt;
&lt;li&gt;Spark ML provides a uniform set of high-level APIs, built on top of DataFrames with the goal of making machine learning scalable and easy.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image16-1606274834703.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;What Is Machine Learning?&lt;/h2&gt;
&lt;p&gt;Machine learning uses algorithms to find patterns in data and then uses a model that recognizes those patterns to make predictions on new data.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image4-1606274844591.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;There are typically two phases in machine learning:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Data Discovery: The first phase involves analysis on historical data to build and train the machine learning model.&lt;/li&gt;
&lt;li&gt;Analytics Using the Model: The second phase uses the model in production on new data.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In production, models need to be continuously monitored and updated with new models when needed.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image21-1606274852952.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In general, machine learning may be broken down into two main types, supervised and unsupervised, with approaches that fall in between. Supervised learning algorithms use labeled data; unsupervised learning algorithms find patterns in unlabeled data. Semi-supervised learning uses a mixture of labeled and unlabeled data. Reinforcement learning trains algorithms to maximize rewards based on feedback.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image24-1606274861604.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Three Common Categories of Techniques for Machine Learning&lt;/h2&gt;
&lt;p&gt;Three common categories of machine learning techniques are classification, clustering, and collaborative filtering.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image25-1606274869972.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Classification: Gmail uses a machine learning technique called classification to designate if an email is spam or not, based on the data of an email: the sender, recipients, subject, and message body. Classification takes a set of data with known labels and learns how to label new records based on that information.&lt;/li&gt;
&lt;li&gt;Clustering: Google News uses a technique called clustering to group news articles into different categories, based on title and content. Clustering algorithms discover groupings that occur in collections of data.&lt;/li&gt;
&lt;li&gt;Collaborative Filtering: Amazon uses a machine learning technique called collaborative filtering (commonly referred to as recommendation) to determine which products users will like, based on their history and similarity to other users.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Supervised Learning: Classification and Regression&lt;/h2&gt;
&lt;p&gt;Supervised algorithms use labeled data in which both the input and target outcome, or label, are provided to the algorithm.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image13-1606274878211.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Supervised learning is also called predictive modeling or predictive analytics, because you build a model that is capable of making predictions.&lt;/p&gt;
&lt;p&gt;Some examples of predictive modeling are classification and regression. Classification identifies which category an item belongs to (e.g., whether a transaction is fraud or not fraud), based on labeled examples of known items (e.g., transactions known to be fraud or not).  Logistic regression predicts a probability (e.g., the probability of fraud). Linear regression predicts a numeric value (e.g., the amount of fraud).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image23-1606274887440.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Classification and Regression Example&lt;/h2&gt;
&lt;p&gt;Classification and regression take a set of data with known labels and predetermined features and learn how to label new records based on that information. Features are the &quot;if questions&quot; that you ask. The label is the answer to those questions.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image6-1606274895544.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Regression Example&lt;/h3&gt;
&lt;p&gt;Let&apos;s go through an example of car insurance fraud:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;What are we trying to predict?
&lt;ul&gt;
&lt;li&gt;This is the label: the amount of fraud&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;What are the &quot;if questions&quot; or properties that you can use to predict?
&lt;ul&gt;
&lt;li&gt;These are the features: to build a classifier model, you extract the features of interest that most contribute to the classification.&lt;/li&gt;
&lt;li&gt;In this simple example, we will use the claimed amount.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image7-1606274904051.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Linear regression models the relationship between the Y &quot;Label&quot; and the X &quot;Feature,&quot; in this case the relationship between the amount of fraud and the claimed amount. The coefficient measures the impact of the feature, the claimed amount, on the label, the fraud amount.&lt;/p&gt;
&lt;p&gt;Multiple linear regression models the relationship between two or more &quot;Features&quot; and a response &quot;Label.&quot; For example, if we wanted to model the relationship between the amount of fraud and the age of the claimant, the claimed amount, and the severity of the accident, the multiple linear regression function would look like this:&lt;/p&gt;
&lt;p&gt;   Y&lt;sub&gt;i&lt;/sub&gt; = β&lt;sub&gt;0&lt;/sub&gt; + β&lt;sub&gt;1&lt;/sub&gt;X&lt;sub&gt;1&lt;/sub&gt; + β&lt;sub&gt;2&lt;/sub&gt;X&lt;sub&gt;2&lt;/sub&gt; + · · · + β&lt;sub&gt;p&lt;/sub&gt; X&lt;sub&gt;p&lt;/sub&gt; + Ɛ&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Amount Fraud = intercept + (coefficient1 * age) + (coefficient2 * claimed Amount) + (coefficient3 * severity) + error.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The coefficients measure the impact on the fraud amount of each of the features.&lt;/p&gt;
&lt;p&gt;Some examples of linear regression include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Given historical car insurance fraudulent claims and features of the claims, such as age of the claimant, claimed amount, and severity of the accident, predict the amount of fraud.&lt;/li&gt;
&lt;li&gt;Given historical real estate sales prices and features of houses (square feet, number of bedrooms, location, etc.), predict a house&apos;s price.&lt;/li&gt;
&lt;li&gt;Given historical neighborhood crime statistics, predict crime rate.&lt;/li&gt;
&lt;/ul&gt;
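&lt;p&gt;As a small sketch of the multiple regression above in Spark ML (PySpark is assumed to be available; the claims data is invented), the feature columns are assembled into a vector and a linear regression model is fit against the fraud amount:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

spark = SparkSession.builder.appName(&quot;fraud-regression&quot;).getOrCreate()

# Toy claims data: age, claimed amount, severity, and the fraud amount
claims = spark.createDataFrame([
    (25, 4000.0, 3, 900.0),
    (40, 1500.0, 1, 100.0),
    (33, 6000.0, 4, 1500.0),
    (52, 2500.0, 2, 300.0),
], [&quot;age&quot;, &quot;claimedAmount&quot;, &quot;severity&quot;, &quot;amountFraud&quot;])

# Assemble the feature columns into a single vector column
assembler = VectorAssembler(inputCols=[&quot;age&quot;, &quot;claimedAmount&quot;, &quot;severity&quot;],
                            outputCol=&quot;features&quot;)
train = assembler.transform(claims)

# Fit amountFraud ~ age + claimedAmount + severity
lr = LinearRegression(featuresCol=&quot;features&quot;, labelCol=&quot;amountFraud&quot;)
model = lr.fit(train)

# The intercept and one coefficient per feature, as in the formula above
print(model.intercept, model.coefficients)
&lt;/code&gt;&lt;/pre&gt;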
&lt;h3&gt;Classification Example&lt;/h3&gt;
&lt;p&gt;Let&apos;s go through an example of debit card fraud:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;What are we trying to predict?
&lt;ul&gt;
&lt;li&gt;This is the label: probability of fraud&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;What are the &quot;if questions&quot; or properties that you can use to make predictions?
&lt;ul&gt;
&lt;li&gt;Is the amount spent today &gt; historical average?&lt;/li&gt;
&lt;li&gt;Are there transactions in multiple countries today?&lt;/li&gt;
&lt;li&gt;Are the number of transactions today &gt; historical average?&lt;/li&gt;
&lt;li&gt;Are the number of new merchant types today high compared to the last 3 months?&lt;/li&gt;
&lt;li&gt;Are there multiple purchases today from merchants with a category code of risk?&lt;/li&gt;
&lt;li&gt;Is there unusual signing activity today, compared to historically using a PIN?&lt;/li&gt;
&lt;li&gt;Are there new state purchases compared to the last 3 months?&lt;/li&gt;
&lt;li&gt;Are there foreign purchases today compared to the last 3 months?&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To build a classifier model, you extract the features of interest that most contribute to the classification.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image18-1606274916889.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Logistic regression measures the relationship between the Y &quot;Label&quot; and the X &quot;Features&quot; by estimating probabilities using a &lt;a href=&quot;https://en.wikipedia.org/wiki/Logistic_function&quot;&gt;logistic function&lt;/a&gt;. The model predicts a probability, which is used to predict the label class.&lt;/p&gt;
&lt;p&gt;Some examples of classification include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Given historical car insurance fraudulent claims and features of the claims, such as age of the claimant, claimed amount, and severity of the accident, predict the probability of fraud.&lt;/li&gt;
&lt;li&gt;Given patient characteristics, predict the probability of congestive heart failure.&lt;/li&gt;
&lt;li&gt;Credit card fraud detection (fraud, not fraud)&lt;/li&gt;
&lt;li&gt;Credit card application (good credit, bad credit)&lt;/li&gt;
&lt;li&gt;Email spam detection (spam, not spam)&lt;/li&gt;
&lt;li&gt;Text sentiment analysis (happy, not happy)&lt;/li&gt;
&lt;li&gt;Predicting patient risk  (high risk patient, low risk patient)&lt;/li&gt;
&lt;li&gt;Classifying a tumor (malignant, not malignant)&lt;/li&gt;
&lt;/ul&gt;
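&lt;p&gt;A minimal Spark ML sketch of such a classifier (PySpark assumed; the transaction features and labels are invented) could look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from pyspark.sql import SparkSession
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.linalg import Vectors

spark = SparkSession.builder.appName(&quot;fraud-classifier&quot;).getOrCreate()

# Toy labeled data: label (1.0 = fraud) and a feature vector per transaction
train = spark.createDataFrame([
    (1.0, Vectors.dense([900.0, 3.0])),
    (0.0, Vectors.dense([40.0, 1.0])),
    (1.0, Vectors.dense([750.0, 4.0])),
    (0.0, Vectors.dense([55.0, 1.0])),
], [&quot;label&quot;, &quot;features&quot;])

# Fit a logistic regression model that estimates the probability of fraud
lr = LogisticRegression(maxIter=10)
model = lr.fit(train)

# Score a new transaction
test = spark.createDataFrame([(Vectors.dense([600.0, 3.0]),)], [&quot;features&quot;])
model.transform(test).select(&quot;probability&quot;, &quot;prediction&quot;).show()
&lt;/code&gt;&lt;/pre&gt;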
&lt;h2&gt;Spark Supervised Algorithms Summary&lt;/h2&gt;
&lt;p&gt;Classification&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Logistic regression&lt;/li&gt;
&lt;li&gt;Decision tree classifier&lt;/li&gt;
&lt;li&gt;Random forest classifier&lt;/li&gt;
&lt;li&gt;Gradient-boosted tree classifier&lt;/li&gt;
&lt;li&gt;Multilayer perceptron classifier&lt;/li&gt;
&lt;li&gt;Linear Support Vector Machine&lt;/li&gt;
&lt;li&gt;Naive Bayes&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Regression&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Linear regression&lt;/li&gt;
&lt;li&gt;Generalized linear regression&lt;/li&gt;
&lt;li&gt;Decision tree regression&lt;/li&gt;
&lt;li&gt;Random forest regression&lt;/li&gt;
&lt;li&gt;Gradient-boosted tree regression&lt;/li&gt;
&lt;li&gt;Survival regression&lt;/li&gt;
&lt;li&gt;Isotonic regression&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Unsupervised Learning         &lt;/h2&gt;
&lt;p&gt;Unsupervised learning, also sometimes called descriptive analytics, does not have  labeled data provided in advance. These algorithms discover similarities, or regularities, in the input data.  An example of unsupervised learning is grouping similar customers, based on purchase data.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image22-1606274925028.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Clustering&lt;/h3&gt;
&lt;p&gt;In clustering, an algorithm classifies inputs into categories by analyzing similarities between input examples.  Some clustering use cases include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Search results grouping&lt;/li&gt;
&lt;li&gt;Grouping similar customers&lt;/li&gt;
&lt;li&gt;Grouping similar patients&lt;/li&gt;
&lt;li&gt;Text categorization&lt;/li&gt;
&lt;li&gt;Network security anomaly detection (anomaly detection finds what is &lt;strong&gt;not&lt;/strong&gt; similar, meaning the outliers from clusters)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image12-1606274934428.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The &lt;em&gt;k&lt;/em&gt;-means algorithm groups observations into &lt;em&gt;k&lt;/em&gt; clusters in which each observation belongs to the cluster with the nearest mean from its cluster center.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image17-1606274943600.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;An example of clustering is a company that wants to segment its customers in order to better tailor products and offerings. Customers could be grouped on features such as demographics and purchase histories. Clustering with unsupervised learning is often combined with supervised learning in order to get more valuable results. For example, in a banking customer 360 use case, customers were first clustered based on answers to a survey. The customer groups were analyzed and then labeled with customer personas. Next, the persona labels were linked by customer ID with customer features, such as types of accounts and purchases. Finally, supervised machine learning was applied and tested with the labeled customers, allowing it to link the survey customer personas with their banking actions and provide insights.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image3-1606275231189.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
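&lt;p&gt;A small Spark ML sketch of such a segmentation (PySpark assumed; the customer features are invented) groups customers into &lt;em&gt;k&lt;/em&gt; clusters without any labels:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from pyspark.sql import SparkSession
from pyspark.ml.clustering import KMeans
from pyspark.ml.linalg import Vectors

spark = SparkSession.builder.appName(&quot;customer-segments&quot;).getOrCreate()

# Toy customer features (e.g., age and yearly purchases); no labels are provided
customers = spark.createDataFrame([
    (Vectors.dense([23.0, 1500.0]),),
    (Vectors.dense([25.0, 1700.0]),),
    (Vectors.dense([61.0, 400.0]),),
    (Vectors.dense([58.0, 350.0]),),
], [&quot;features&quot;])

# Group the customers into k = 2 clusters based only on their features
kmeans = KMeans(k=2, seed=1)
model = kmeans.fit(customers)

# Each customer is assigned to the cluster with the nearest center
model.transform(customers).select(&quot;features&quot;, &quot;prediction&quot;).show()
print(model.clusterCenters())
&lt;/code&gt;&lt;/pre&gt;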
&lt;h3&gt;Frequent Pattern Mining, Association, Co-Occurrence, Market Basket Recommendations&lt;/h3&gt;
&lt;p&gt;Frequent pattern or association rule mining finds frequent co-occurring associations among a collection of items, such as products often purchased together. A famous story about association rule mining is the &quot;beer and diaper&quot; story. An analysis of behavior of grocery shoppers discovered that men who buy diapers often also buy beer.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image2-1606275329553.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.nytimes.com/2004/11/14/business/yourmoney/what-walmart-knows-about-customers-habits.html&quot;&gt;Walmart mined their massive retail transaction database&lt;/a&gt; to see what their customers really wanted to buy prior to the arrival of a hurricane. They found one particular item which had an increase in sales by a factor of 7 over normal shopping days, a huge lift factor for a real-world case. The item was not bottled water, batteries, beer, flashlights, generators, or any of the usual things that you might imagine: it was strawberry pop tarts!&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image20-1606274973982.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Another example is from Target, which analyzed that when a woman starts buying scent-free lotion, vitamin supplements, and a combination of some other items, it signals she could be pregnant. Unfortunately, Target sent a coupon for baby items to a teenager whose father questioned why she was receiving such coupons.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image5-1606274983558.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Co-occurrence analysis is useful for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Store layouts&lt;/li&gt;
&lt;li&gt;Determining which products to put on specials, promotions, coupons, etc.&lt;/li&gt;
&lt;li&gt;Identifying cohorts of similar healthcare patients&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Collaborative Filtering&lt;/h3&gt;
&lt;p&gt;Collaborative filtering algorithms recommend items (this is the filtering part) based on preference information from many users (this is the collaborative part). The collaborative filtering approach is based on similarity; people who liked similar items in the past will like similar items in the future. The goal of a collaborative filtering algorithm is to take preferences data from users and create a model that can be used for recommendations or predictions. Ted likes movies A, B, and C. Carol likes movies B and C. We take this data and run it through an algorithm to build a model. Then, when we have new data, such as Bob likes movie B, we use the model to predict that C is a possible recommendation for Bob.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image9-1606274992738.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
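&lt;p&gt;A minimal Spark ML sketch of this (PySpark assumed; the user IDs, movie IDs, and ratings are invented to mirror the Ted/Carol/Bob example) uses alternating least squares to learn the preference model:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName(&quot;movie-recommendations&quot;).getOrCreate()

# Toy ratings: (userId, movieId, rating); Ted=0, Carol=1, Bob=2 and A=0, B=1, C=2
ratings = spark.createDataFrame([
    (0, 0, 5.0), (0, 1, 4.0), (0, 2, 5.0),
    (1, 1, 4.0), (1, 2, 5.0),
    (2, 1, 4.0),
], [&quot;userId&quot;, &quot;movieId&quot;, &quot;rating&quot;])

# Alternating least squares learns latent factors for users and items
als = ALS(userCol=&quot;userId&quot;, itemCol=&quot;movieId&quot;, ratingCol=&quot;rating&quot;,
          rank=5, maxIter=10, coldStartStrategy=&quot;drop&quot;)
model = als.fit(ratings)

# Recommend two movies per user; movie C (id 2) can surface for Bob this way
model.recommendForAllUsers(2).show(truncate=False)
&lt;/code&gt;&lt;/pre&gt;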
&lt;h2&gt;Spark Unsupervised Algorithms Summary&lt;/h2&gt;
&lt;p&gt;Clustering&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;k&lt;/em&gt;-means&lt;/li&gt;
&lt;li&gt;Latent Dirichlet allocation (LDA)&lt;/li&gt;
&lt;li&gt;Gaussian mixture model (GMM)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Collaborative Filtering&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt; Alternating least squares (ALS)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Frequent Pattern Mining&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;FP-Growth Algorithm&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Deep Learning&lt;/h2&gt;
&lt;p&gt;Deep learning is the name for multilayered neural networks, which are networks composed of several &quot;hidden layers&quot; of nodes between the input and output.  There are many variations of neural networks, which you can learn more about on this &lt;a href=&quot;http://www.asimovinstitute.org/neural-network-zoo/&quot;&gt;neural network cheat sheet.&lt;/a&gt;  Improved algorithms, GPUs, and massively parallel processing (MPP) have given rise to networks with thousands of layers.  Each node takes input data and a weight and outputs a confidence score to the nodes in the next layer, until the output layer is reached, where the error of the score is calculated.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image15-1606275001688.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;With &lt;a href=&quot;https://en.wikipedia.org/wiki/Backpropagation&quot;&gt;backpropagation&lt;/a&gt; inside of a process called &lt;a href=&quot;https://en.wikipedia.org/wiki/Gradient_descent&quot;&gt;gradient descent&lt;/a&gt;, the errors are sent back through the network again and the weights are adjusted, improving the model. This process is repeated thousands of times, adjusting a model&apos;s weights in response to the error it produces, until the error can&apos;t be reduced anymore.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image26-1606275012401.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
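&lt;p&gt;To give a feel for the weight-update step, here is a toy single-weight gradient descent sketch in plain Python with NumPy (the data is invented; real deep learning frameworks do this across millions of weights):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import numpy as np

# Toy data for a single-weight model y = w * x (roughly y = 2x)
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 8.1])

w, learning_rate = 0.0, 0.01
for step in range(500):
    error = w * x - y                 # prediction error
    grad = 2 * np.mean(error * x)     # gradient of the mean squared error
    w -= learning_rate * grad         # adjust the weight to reduce the error

print(w)  # approaches roughly 2.0
&lt;/code&gt;&lt;/pre&gt;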
&lt;p&gt;During this process, the layers learn the optimal features for the model, which has the advantage that features do not need to be predetermined. However, this has the disadvantage that the model&apos;s decisions are not explainable. Because explaining the decisions can be important, researchers are developing &lt;a href=&quot;http://www.sciencemag.org/news/2017/07/how-ai-detectives-are-cracking-open-black-box-deep-learning&quot;&gt;new ways to understand the black box of deep learning&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;There are different variations of deep learning algorithms, which can be used to build data-driven applications, such as the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Deep Neural Networks for improved traditional algorithms
&lt;ul&gt;
&lt;li&gt;Finance: enhanced fraud detection through identification of more complex patterns&lt;/li&gt;
&lt;li&gt;Manufacturing: enhanced identification of defects, based on deeper anomaly detection&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Convolutional Neural Networks for images
&lt;ul&gt;
&lt;li&gt;Retail: in-store activity analysis of video to measure traffic&lt;/li&gt;
&lt;li&gt;Satellite images: labeling terrain, classifying objects&lt;/li&gt;
&lt;li&gt;Automotive: recognition of roadways and obstacles&lt;/li&gt;
&lt;li&gt;Healthcare: diagnostic opportunities from x-rays, scans, etc.&lt;/li&gt;
&lt;li&gt;Insurance: estimating claim severity, based on photographs&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Recurrent Neural Networks for sequenced data
&lt;ul&gt;
&lt;li&gt;Customer satisfaction: transcription of voice data to text for NLP analysis&lt;/li&gt;
&lt;li&gt;Social media: real-time translation of social and product forum posts&lt;/li&gt;
&lt;li&gt;Photo captioning: search archives of images for new insights&lt;/li&gt;
&lt;li&gt;Finance: Predicting behavior based via time series analysis (also enhanced recommendation systems)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Deep Learning with Spark&lt;/h2&gt;
&lt;p&gt;Deep learning libraries or frameworks that can be leveraged with Spark include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;BigDL&lt;/li&gt;
&lt;li&gt;Spark Deep Learning Pipelines&lt;/li&gt;
&lt;li&gt;TensorFlowOnSpark&lt;/li&gt;
&lt;li&gt;dist-keras&lt;/li&gt;
&lt;li&gt;H2O Sparkling Water&lt;/li&gt;
&lt;li&gt;PyTorch&lt;/li&gt;
&lt;li&gt;Caffe&lt;/li&gt;
&lt;li&gt;MXNet&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Using Spark ML&lt;/h2&gt;
&lt;p&gt;With Apache Spark 2.0 and later versions, big improvements were implemented to make Spark easier to program and execute faster:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;the Spark SQL and the Dataset/DataFrame APIs provide ease of use, space efficiency, and performance gains.&lt;/li&gt;
&lt;li&gt;Spark ML provides a uniform set of high-level APIs, built on top of DataFrames with the goal of making machine learning scalable and easy. Having ML APIs built on top of DataFrames provides the scalability of partitioned data processing with the ease of SQL for data manipulation.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image8-1606275020336.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can use a Spark ML Pipeline to pass your data through transformers in order to extract the features, an estimator to produce a model, and an evaluator to evaluate the model.  &lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image19-1606275029231.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
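&lt;p&gt;A small PySpark sketch of such a pipeline (the transaction data is invented) chains a feature transformer, an estimator, and an evaluator:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = SparkSession.builder.appName(&quot;pipeline-example&quot;).getOrCreate()

# Toy transactions: amount, foreign (0/1), and label (1.0 = fraud)
train = spark.createDataFrame([
    (900.0, 1, 1.0), (40.0, 0, 0.0), (750.0, 1, 1.0),
    (55.0, 0, 0.0), (600.0, 1, 1.0), (80.0, 0, 0.0),
], [&quot;amount&quot;, &quot;foreign&quot;, &quot;label&quot;])
test = spark.createDataFrame([
    (700.0, 1, 1.0), (30.0, 0, 0.0),
], [&quot;amount&quot;, &quot;foreign&quot;, &quot;label&quot;])

# Transformer (feature extraction) followed by an estimator (the model)
assembler = VectorAssembler(inputCols=[&quot;amount&quot;, &quot;foreign&quot;], outputCol=&quot;features&quot;)
lr = LogisticRegression(featuresCol=&quot;features&quot;, labelCol=&quot;label&quot;)
pipeline = Pipeline(stages=[assembler, lr])

# Fit the whole pipeline, score the held-out data, and evaluate the model
model = pipeline.fit(train)
predictions = model.transform(test)
evaluator = BinaryClassificationEvaluator(labelCol=&quot;label&quot;)
print(evaluator.evaluate(predictions))  # area under the ROC curve
&lt;/code&gt;&lt;/pre&gt;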
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;A confluence of several different technology shifts have dramatically changed machine learning applications. The combination of distributed computing, streaming analytics, and machine learning is accelerating the development of next-generation intelligent applications, which are taking advantage of modern computational paradigms, powered by modern computational infrastructure. The MapR Data Platform integrates global event streaming, real-time database capabilities, and scalable enterprise storage with a collection of data processing and analytical engines to power this new generation of data processing pipelines and intelligent applications.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Demystifying AI, Machine Learning and Deep Learning]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may…]]></description><link>https://developer.hpe.com/demystifying-ai-machine-learning-and-deep-learning/</link><guid isPermaLink="false">https://developer.hpe.com/demystifying-ai-machine-learning-and-deep-learning/</guid><pubDate>Wed, 25 Nov 2020 02:31:24 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Carol McDonald&quot;,
&quot;publish&quot;: &quot;2017-08-16T12:00:00.000&quot;,
&quot;tags&quot;: [&quot;machine-learning&quot;,&quot;deep learning&quot;]
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;Deep learning, machine learning, artificial intelligence - all buzzwords and representative of the future of analytics. In this post we will explain what machine learning and deep learning are at a high level with some real-world examples. In future posts we will explore vertical use cases. The goal of this is not to turn you into a data scientist, but to give you a better understanding of what you can do with machine learning. Machine learning is becoming more accessible to developers, and data scientists work with domain experts, architects, developers and data engineers, so it is important for everyone to have a better understanding of the possibilities. Every piece of information that your business generates has potential to add value. This and future posts are meant to provoke a review of your own data to identify new opportunities.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/mlexamples-1606272628062.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;What is Artificial Intelligence?&lt;/h2&gt;
&lt;p&gt;Throughout the history of AI, the definition has been continuously redefined. AI is an umbrella term: the idea started in the 1950s, machine learning is a subset of AI, and deep learning is a subset of ML.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/ai-1606272643389.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Image reference: &lt;a href=&quot;https://blogs.nvidia.com/blog/2016/07/29/whats-difference-artificial-intelligence-machine-learning-deep-learning-ai/&quot;&gt;AI/ML/DL&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;In 1985, when I was a student interning at the NSA, AI was also a very hot topic. At the NSA I even took an MIT video (VCR) class on AI, which was about expert systems. Expert systems capture an expert&apos;s knowledge in a rules engine. Rules engines have wide use in industries such as finance and healthcare, and more recently for event processing, but when data is changing, rules can become difficult to update and maintain. Machine learning has the advantage that it learns from the data, and it can give data-driven probabilistic predictions.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/system-1606272654657.png&quot; alt=&quot;Expert System&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Image reference: &lt;a href=&quot;https://www.pcmag.com/encyclopedia/term/42865/expert-system&quot;&gt;expert-system&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.foxbusiness.com/features/7-tips-for-machine-learning-success&quot;&gt;According to Ted Dunning&lt;/a&gt;, it is better to use precise terminology, like machine learning or deep learning, instead of the word AI, because before we get something to work well, we call it AI; afterwards, we always call it something else. AI is better used as a word for the next frontier.&lt;/p&gt;
&lt;h2&gt;How Has Analytics Changed in the Last 10 Years?&lt;/h2&gt;
&lt;p&gt;According to &lt;a href=&quot;https://hbr.org/2017/06/how-analytics-has-changed-in-the-last-10-years-and-how-its-stayed-the-same&quot;&gt;Thomas Davenport in the HBR&lt;/a&gt;, analytical technology has changed dramatically over the last decade, with more powerful and less expensive distributed computing across commodity servers, streaming analytics, and improved machine learning technologies, enabling companies to store and analyze both far more data and many different types of it.&lt;/p&gt;
&lt;p&gt;Traditionally data was stored on a RAID system, sent to a multi-core server for processing and sent back for storage, which caused a bottleneck on data transfer, and was expensive. With file and table storage like MapR XD and MapR Database, data is distributed across a cluster, and Hadoop technologies like MapReduce, Pig, and Hive send the computing task to where the data resides.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/xd-1606272673595.png&quot; alt=&quot;MapR XD&quot;&gt;&lt;/p&gt;
&lt;p&gt;Technologies like Apache Spark speed up parallel processing of distributed data even more with iterative algorithms by caching data in memory across iterations and using lighter weight threads.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/spark-1606272692913.png&quot; alt=&quot;Apache Spark&quot;&gt;&lt;/p&gt;
&lt;p&gt;MapR Event Store, a new distributed messaging system for streaming event data at scale, combined with Stream processing like Apache Spark streaming, or Apache Flink, speed up parallel processing of real time events with machine learning models.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/maprstreams-1606272706000.png&quot; alt=&quot;MapR Event Store&quot;&gt;&lt;/p&gt;
&lt;p&gt;Graphics processing units (GPUs) have sped up multi-core servers for parallel processing. A GPU has a massively parallel architecture consisting of thousands of smaller, more efficient cores designed for handling multiple tasks simultaneously, whereas a CPU consists of a few cores optimized for sequential serial processing. &lt;a href=&quot;https://www.kdnuggets.com/2017/06/deep-learning-demystifying-tensors.html&quot;&gt;In terms of potential performance, the evolution from the Cray-1 to today’s clusters with lots of GPUs is roughly a million times what was once the fastest computer on the planet, at a tiny fraction of the cost&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/gpu-1606272719191.png&quot; alt=&quot;GPU&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Image reference: &lt;a href=&quot;http://www.nvidia.com/object/what-is-gpu-computing.html&quot;&gt;AI Computing&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;What is Machine Learning?&lt;/h2&gt;
&lt;p&gt;Machine learning uses algorithms to find patterns in data, and then uses a model that recognizes those patterns to make predictions on new data.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/ml-1606272731259.png&quot; alt=&quot;ML&quot;&gt;&lt;/p&gt;
&lt;p&gt;In general, machine learning may be broken down into two main types, supervised and unsupervised, with approaches that fall in between. Supervised learning algorithms use labeled data; unsupervised learning algorithms find patterns in unlabeled data. Semi-supervised learning uses a mixture of labeled and unlabeled data. Reinforcement learning trains algorithms to maximize rewards based on feedback.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/ml2-1606272742434.png&quot; alt=&quot;ML&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Supervised Learning&lt;/h2&gt;
&lt;p&gt;Supervised algorithms use labeled data in which both the input and target outcome, or label, are provided to the algorithm.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/sl-1606272757444.png&quot; alt=&quot;Supervised Learning&quot;&gt;&lt;/p&gt;
&lt;p&gt;Supervised Learning is also called predictive modeling or predictive analytics, because you build a model that is capable of making predictions. Some examples of predictive modeling are classification and regression. Classification identifies which category an item belongs to (for example whether a transaction is fraud or not fraud), based on labeled examples of known items (for example transactions known to be fraud or not).  Logistic regression predicts a probability, for example the probability of fraud. Linear regression predicts a numeric value, for example the amount of fraud.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/fraud-1606272770701.png&quot; alt=&quot;Fraud&quot;&gt;&lt;/p&gt;
&lt;p&gt;Some examples of Classification include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;credit card fraud detection (fraud, not fraud)&lt;/li&gt;
&lt;li&gt;credit card application   (good credit, bad credit)&lt;/li&gt;
&lt;li&gt;email spam detection (spam, not spam)&lt;/li&gt;
&lt;li&gt;text sentiment analysis (happy, not happy)&lt;/li&gt;
&lt;li&gt;Predicting patient risk  (high risk patient, low risk patient)&lt;/li&gt;
&lt;li&gt;classifying a tumor as malignant or not&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Some examples of logistic regression (or other algorithms)  include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;given historical car insurance fraudulent claims and features of the claims such as age of the claimant, claimed amount, severity of the accident,  predict the probability of fraud.&lt;/li&gt;
&lt;li&gt;given patient characteristics predict the probability of congestive heart failure.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Some examples of linear regression include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Given historical car insurance fraudulent claims and features of the claims such as age of the claimant, claimed amount, severity of the accident,  predict the amount of fraud.&lt;/li&gt;
&lt;li&gt;Given historical real estate sales prices and features of houses (square feet, number of bedrooms, location, etc.), predict a house’s price.&lt;/li&gt;
&lt;li&gt;Given historical neighborhood crime statistics, predict crime rate.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;There are other Supervised and Unsupervised learning Algorithms &lt;a href=&quot;https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html&quot;&gt;shown below&lt;/a&gt;, which we won’t go over, but we will look at one example of each in more detail.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/algorithm-1606272784378.png&quot; alt=&quot;Algorithms&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Image reference: &lt;a href=&quot;https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html&quot;&gt;Supervised and Unsupervised learning Algorithms&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;Classification Example Debit Card Fraud&lt;/h2&gt;
&lt;p&gt;Classification takes a set of data with known labels and pre-determined features and learns how to label new records based on that information. Features are the “if questions” that you ask. The label is the answer to those questions.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/duck-1606272802082.png&quot; alt=&quot;Duck&quot;&gt;&lt;/p&gt;
&lt;p&gt;Let’s go through an example of Debit Card Fraud:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;What are we trying to predict?
&lt;ul&gt;
&lt;li&gt;Whether a debit card transaction is fraud or not&lt;/li&gt;
&lt;li&gt;Fraud is the Label:  True or False&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;What are the “if questions” or properties that you can use to make predictions?
&lt;ul&gt;
&lt;li&gt;Is the amount spent today &gt; historical average?&lt;/li&gt;
&lt;li&gt;Are there transactions in multiple countries today?&lt;/li&gt;
&lt;li&gt;Are the number of transactions today &gt; historical average?&lt;/li&gt;
&lt;li&gt;Are the number of new merchant types today high compared to the last 3 months?&lt;/li&gt;
&lt;li&gt;Are there multiple purchases today from merchants with a category code of risk?&lt;/li&gt;
&lt;li&gt;Is there unusual signing activity today, compared to historically using a PIN?&lt;/li&gt;
&lt;li&gt;Are there new state purchases compared to the last 3 months?&lt;/li&gt;
&lt;li&gt;Are there foreign purchases today compared to the last 3 months?&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To build a classifier model, you extract the features of interest that most contribute to the classification.&lt;/p&gt;
&lt;h2&gt;Decision Trees&lt;/h2&gt;
&lt;p&gt;Decision trees create a model that predicts the class or label, based on several input features. Decision trees work by evaluating a question containing a feature at every node and selecting a branch to the next node, based on the answer. A possible decision tree for predicting debit card fraud is shown below. The feature questions are the nodes, and the answers “yes” or “no” are the branches in the tree to the child nodes. (Note that a real tree would have more nodes).&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Q1: Is the amount spent in 24 hours &gt; the average?
&lt;ul&gt;
&lt;li&gt;Yes: Q2: Are there multiple purchases today from risky merchants?
&lt;ul&gt;
&lt;li&gt;Yes: Fraud 90%&lt;/li&gt;
&lt;li&gt;No: Not Fraud 50%&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/decisiontree-1606272815273.png&quot; alt=&quot;Decision Tree&quot;&gt;&lt;/p&gt;
&lt;p&gt;Decision trees are popular because they are easy to visualize and explain. The accuracy of models can be improved by combining algorithms with ensemble methods. An ensemble example is random forest, which combines multiple decision trees, each trained on a random subset of the data.&lt;/p&gt;
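&lt;p&gt;As a rough illustration of how such a classifier could be trained, here is a minimal Spark ML sketch. It assumes a hypothetical &lt;code&gt;transactionsDF&lt;/code&gt; DataFrame whose feature columns correspond to the “if questions” above and whose &lt;code&gt;fraud&lt;/code&gt; column is the 0/1 label; the column names are illustrative, not from the original post.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.DecisionTreeClassifier
import org.apache.spark.ml.feature.VectorAssembler

// Assemble the hypothetical feature columns into a single feature vector.
val assembler = new VectorAssembler()
  .setInputCols(Array(&quot;amountOverAvg&quot;, &quot;multipleCountries&quot;, &quot;txnCountOverAvg&quot;, &quot;riskyMerchantCount&quot;))
  .setOutputCol(&quot;features&quot;)

// Decision tree classifier with the 0/1 fraud column as the label.
val dt = new DecisionTreeClassifier()
  .setLabelCol(&quot;fraud&quot;)
  .setFeaturesCol(&quot;features&quot;)

// Train on 80% of the data and predict on the held-out 20%.
val Array(train, test) = transactionsDF.randomSplit(Array(0.8, 0.2))
val model = new Pipeline().setStages(Array(assembler, dt)).fit(train)
val predictions = model.transform(test)
&lt;/code&gt;&lt;/pre&gt;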
&lt;h2&gt;Unsupervised Learning&lt;/h2&gt;
&lt;p&gt;Unsupervised learning, also sometimes called descriptive analytics, does not have labeled data provided in advance. These algorithms discover similarities, or regularities, in the input data. An example of unsupervised learning is grouping similar customers, based on purchase data.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/ul-1606272827176.png&quot; alt=&quot;Unsupervised Learning&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Clustering&lt;/h2&gt;
&lt;p&gt;In clustering, an algorithm classifies inputs into categories by analyzing similarities between input examples.  Some clustering use cases include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;search results grouping&lt;/li&gt;
&lt;li&gt;grouping similar customers&lt;/li&gt;
&lt;li&gt;grouping similar patients&lt;/li&gt;
&lt;li&gt;Text categorization&lt;/li&gt;
&lt;li&gt;Network Security Anomaly detection (finds what is not similar, the outliers from clusters)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/cluster-1606272838221.png&quot; alt=&quot;Clustering&quot;&gt;&lt;/p&gt;
&lt;p&gt;The K-means algorithm groups observations into K clusters, where each observation belongs to the cluster whose mean (cluster center) is nearest.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/k-means-1606272852036.png&quot; alt=&quot;K-means&quot;&gt;&lt;/p&gt;
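&lt;p&gt;A minimal Spark ML sketch of K-means, assuming a hypothetical &lt;code&gt;customersDF&lt;/code&gt; DataFrame with illustrative numeric feature columns (the names are not from the original post):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.spark.ml.clustering.KMeans
import org.apache.spark.ml.feature.VectorAssembler

// Combine the numeric customer features into a single feature vector.
val assembler = new VectorAssembler()
  .setInputCols(Array(&quot;age&quot;, &quot;income&quot;, &quot;purchasesPerMonth&quot;))
  .setOutputCol(&quot;features&quot;)
val features = assembler.transform(customersDF)

// Group the customers into K = 5 clusters.
val kmeans = new KMeans().setK(5).setFeaturesCol(&quot;features&quot;)
val model = kmeans.fit(features)

// Each row gets a cluster id in the &quot;prediction&quot; column; the cluster centers are the means.
val segmented = model.transform(features)
model.clusterCenters.foreach(println)
&lt;/code&gt;&lt;/pre&gt;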
&lt;p&gt;An example of clustering is a company that wants to segment its customers in order to better tailor products and offerings. Customers could be grouped on features such as demographics and purchase histories. Clustering with unsupervised learning is often combined with supervised learning in order to get more valuable results. For example, in this banking customer 360 use case, customers were first segmented based on answers to a survey. The customer groups were analyzed and labeled with customer personas. These labels were then linked by customer ID to features such as types of accounts and purchases. Finally, supervised machine learning was applied and tested with the labeled customers, making it possible to link the survey customer personas with their banking actions and provide insights.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/survey-1606272868644.png&quot; alt=&quot;Surveys&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Deep Learning&lt;/h2&gt;
&lt;p&gt;Deep learning is the name for multilayered neural networks, which are networks composed of several “hidden layers” of nodes between the input and output. There are many variations of neural networks, which you can learn more about on this &lt;a href=&quot;https://www.asimovinstitute.org/neural-network-zoo/&quot;&gt;neural network cheat sheet&lt;/a&gt;. Improved algorithms, GPUs, and massively parallel processing (MPP) have given rise to networks with thousands of layers. Each node takes input data and a weight and outputs a confidence score to the nodes in the next layer, until the output layer is reached, where the error of the score is calculated. With &lt;a href=&quot;https://en.wikipedia.org/wiki/Backpropagation&quot;&gt;backpropagation&lt;/a&gt; inside of a process called &lt;a href=&quot;https://en.wikipedia.org/wiki/Gradient_descent&quot;&gt;gradient descent&lt;/a&gt;, the errors are sent back through the network and the weights are adjusted, improving the model. This process is repeated thousands of times, adjusting a model’s weights in response to the error it produces, until the error can’t be reduced any more.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/process-1606272882881.png&quot; alt=&quot;Process&quot;&gt;&lt;/p&gt;
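&lt;p&gt;To make the idea of adjusting a weight in response to the error it produces concrete, here is a tiny, illustrative gradient descent loop. It fits a single weight to made-up toy data rather than training a neural network, but the repeated “compute error, adjust weight” cycle is the same idea:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// Toy (x, y) pairs; the underlying relationship is roughly y = 2 * x.
val data = Seq((1.0, 2.0), (2.0, 4.1), (3.0, 5.9))
var w = 0.0                 // the single weight being learned
val learningRate = 0.01

for (_ &lt;- 1 to 1000) {
  // Gradient of the mean squared error with respect to w.
  val grad = data.map { case (x, y) =&gt; 2 * (w * x - y) * x }.sum / data.size
  w -= learningRate * grad  // adjust the weight in the direction that reduces the error
}
println(f&quot;learned weight: $w%.2f&quot;)  // converges close to 2.0
&lt;/code&gt;&lt;/pre&gt;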
&lt;p&gt;During this process the layers learn the optimal features for the model, which has the advantage that features do not need to be predetermined. However this has the disadvantage that the model’s decisions are not explainable. Because explaining the decisions can be important, researchers are developing &lt;a href=&quot;https://www.sciencemag.org/news/2017/07/how-ai-detectives-are-cracking-open-black-box-deep-learning&quot;&gt;new ways to understand the black box of deep learning&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;There are different variations of Deep Learning Algorithms, which can be used to build data-driven applications such as the following:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/qss-1606272896542.png&quot; alt=&quot;Deep Learning QSS&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Deep Neural Networks for Improved Traditional Algorithms&lt;/em&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Finance: Enhanced Fraud Detection through identification of more complex patterns&lt;/li&gt;
&lt;li&gt;Manufacturing: Enhanced identification of defects based on deeper anomaly detection&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;em&gt;Convolutional Neural Networks for images&lt;/em&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Retail: in-store activity analysis of video to measure traffic&lt;/li&gt;
&lt;li&gt;Satellite images: labeling terrain, classifying objects&lt;/li&gt;
&lt;li&gt;Automotive: recognition of roadways and obstacles&lt;/li&gt;
&lt;li&gt;Healthcare: diagnostic opportunities from x-rays, scans, etc.&lt;/li&gt;
&lt;li&gt;Insurance: estimating claim severity based on photographs&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;em&gt;Recurrent Neural Networks for sequenced data&lt;/em&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Customer satisfaction: transcription of voice data to text for NLP analysis&lt;/li&gt;
&lt;li&gt;Social media: real-time translation of social and product forum posts&lt;/li&gt;
&lt;li&gt;Photo captioning: search archives of images for new insights&lt;/li&gt;
&lt;li&gt;Finance: predicting behavior via time series analysis (also enhances recommendation systems)&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Additional Resources and References&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://www.foxbusiness.com/features/7-tips-for-machine-learning-success&quot;&gt;7 tips for Machine Learning Success&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.forbes.com/sites/bernardmarr/2016/09/30/what-are-the-top-10-use-cases-for-machine-learning-and-ai/&quot;&gt;The Top 10 AI And Machine Learning Use Cases Everyone Should Know About&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;http://www.r2d3.us/visual-intro-to-machine-learning-part-1/&quot;&gt;Visual Introduction to machine learning&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://francesco-ai.medium.com/a-brief-history-of-ai-baf0f362f5d6&quot;&gt;A Brief History of AI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.mckinsey.com/business-functions/mckinsey-analytics/our-insights/how-artificial-intelligence-can-deliver-real-value-to-companies&quot;&gt;How artificial intelligence can deliver real value to companies&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Best Practices for Migrating Your Apps to Containers and Kubernetes]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may…]]></description><link>https://developer.hpe.com/best-practices-for-migrating-your-apps-to-containers-and-kubernetes/</link><guid isPermaLink="false">https://developer.hpe.com/best-practices-for-migrating-your-apps-to-containers-and-kubernetes/</guid><pubDate>Wed, 25 Nov 2020 02:26:04 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Suzy Visvanathan&quot;,
&quot;publish&quot;: &quot;2018-05-15T10:45:00.000&quot;,
&quot;tags&quot;: &quot;mapr-platform&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;I was having a discussion with a potential customer.  As I listened, I realized that this was a company going through a massive digital transformation.  They were not doing it in bits and pieces but rather through a massive overhaul. The discussion inevitably centered around containers, managing container-based workloads, and the steps they are taking to make this transformation happen.&lt;/p&gt;
&lt;p&gt;That led me to ask: “What should an organization know, consider, and do in order to migrate to containers?” This response may not be specific to a digital transformation but, in general, applies to the considerations one must take in order to adopt containers.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/containers-wide-1606271200572.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;This blog assumes the reader has a fair knowledge of containers, Kubernetes, and some of their unique value adds.&lt;/p&gt;
&lt;p&gt;Many organizations made architecture, infrastructure, and solution decisions a long time ago and have followed the “if not broken, don’t fix it” model. Understandably so, since keeping up with technology trends came with a cost – investment in new skill sets, procurement of the latest, usually more expensive equipment, and, more importantly, disruption to business. For example, when VMware came out with the concept of virtualization, it took significant commitment from organizations to move from a physical to a virtual environment.&lt;/p&gt;
&lt;p&gt;A few years down the road, Amazon (and then Microsoft and Google) made public cloud and a subscription-based cost model very attractive for deploying on infrastructure managed by someone else. This required a significant change in mindset – to educate oneself in the benefits of infrastructure as a service and then to adopt cloud-native architecture in order to move to the cloud.&lt;/p&gt;
&lt;p&gt;Of all the technology trends introduced by innovators, Docker – with their containers technology – made it easy to be adopted, since it circumvented the complications of prior inventions by following the open source model. Open source brought containers more quickly to the enterprise and is propelling the maturity of containers, which is noticeably faster than the journey that VMs or cloud technology underwent. But with the proliferation of containers, managing them became a problem.  This prompted several vendors to come up with solutions, with Kubernetes largely becoming the de facto container management platform.&lt;/p&gt;
&lt;p&gt;Cloud Native Computing Foundation (CNCF), which spearheads the development of open source projects like Kubernetes, has been busy releasing improvements at a rapid pace to keep up with the pace of adoption.&lt;/p&gt;
&lt;h2&gt;Move legacy applications to containers? Or not?&lt;/h2&gt;
&lt;p&gt;Containers allow applications to be broken into smaller, manageable microservices. Each microservice is self-sufficient and can be changed and updated on its own without the need to touch the other services. For instance, if a revision or an update needs to be made, only the affected services need be changed and compiled instead of the entire application having to be recompiled. Kubernetes can be used to schedule and manage these individual services. In order to gain the full benefits of containers and Kubernetes, assess your legacy applications to see if they can be broken into modules. A classic example is how Uber uses thousands of microservices to improve scalability and reliability.&lt;/p&gt;
&lt;p&gt;In his book &lt;a href=&quot;https://www.academia.edu/41522528/A_Practical_Guide_to_Microservices_and_Containers_Mastering_the_Cloud_Data_and_Digital_Transformation&quot;&gt;A Practical Guide to Microservices and Containers&lt;/a&gt;, Jim Scott, Director of Enterprise Strategy at MapR, draws out the intricate details of scaling and microservices by citing a simple web application.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/microservices-1606271160940.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Microservices may sound fancy, but not all legacy applications can be broken down into smaller modules. Following these simple steps may help greatly along the journey:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Stop adding anything more to legacy applications. Starting fresh is definitely a viable choice, especially if the decision has been made to fully convert to containers. Rewriting legacy applications into a microservices architecture must also be considered.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If legacy applications cannot be broken down into modules, the simplest thing to do may just be to enclose the application in a single container.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Just splitting applications into containers doesn’t automatically make them scalable. Proper planning is needed to determine how these individual containers are to be run. Kubernetes creates containers in Pods and offers a DaemonSet, which is an automated way to create Pods of containers as more server nodes are added. Using such features to scale with microservices needs to be considered upfront.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If certain applications depend on specific performance characteristics, pinning containers to certain hardware may be required. Kubernetes offers a feature, called StatefulSets, that allows containers to be locked to underlying infrastructure. If planned carefully, Pods carrying certain services can be spread across servers with different performance characteristics to get the optimal environment.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If applications simply cannot be broken down, it may be easier and cheaper to just write a new one.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Designing for containers&lt;/h2&gt;
&lt;p&gt;When designing new applications to run in containers, one key aspect to keep in mind is the concept of decoupling. Tightly coupling an application and its data together may be a non-starter. Keeping the characteristics of a container in mind, separating the data from the application’s dependencies from the start will speed things up greatly. Making use of cloud-native architecture and tools, along with a microservices approach, will ease the transition.&lt;/p&gt;
&lt;h2&gt;Decide the environment&lt;/h2&gt;
&lt;p&gt;Decide, before starting, whether these containers will be run in the data center or in the cloud.
Running them in the cloud offers certain advantages: choosing the right servers, and everything else that comes with maintaining an on-premises cluster, is no longer your concern. Expanding and shrinking in the cloud with containers reduces cost and speeds up deployment.&lt;/p&gt;
&lt;p&gt;However, there is a value-add in running containers in data centers. Containers and Kubernetes are still concepts that many organizations are becoming familiar with. Deploying them in small phases, in a controlled environment, will greatly assist in understanding the benefits of containers before “shifting” everything to containers and the cloud.&lt;/p&gt;
&lt;p&gt;One aspect every organization must understand is that digital transformation is not an end but a journey that never really ends. With careful planning and understanding of the environment, concept, and benefits, and by using tips as described in this blog, organizations can successfully embark on their digital transformation.&lt;/p&gt;
&lt;p&gt;MapR offers a robust data platform to deploy applications in containers and Kubernetes. The MapR Data Platform can be run in on-premises data centers and across clouds, enabling disparate environments to be considered.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[The 5-Minute Guide to Understanding the Significance of Apache Spark]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may…]]></description><link>https://developer.hpe.com/the-5-minute-guide-to-understanding-the-significance-of-apache-spark/</link><guid isPermaLink="false">https://developer.hpe.com/the-5-minute-guide-to-understanding-the-significance-of-apache-spark/</guid><pubDate>Wed, 25 Nov 2020 02:19:48 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Nitin Bandugula&quot;,
&quot;publish&quot;: &quot;2015-06-23T07:00:00.000Z&quot;,
&quot;tags&quot;: &quot;apache-spark&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;In this blog, I’d like to talk about the differences between Apache Spark and MapReduce, why it’s easier to develop on Spark, and the top five use cases.&lt;/p&gt;
&lt;h2&gt;So what is Spark?&lt;/h2&gt;
&lt;p&gt;Spark is another execution framework. Like MapReduce, it works with the filesystem to distribute your data across the cluster and process that data in parallel. Like MapReduce, it also takes a set of instructions from an application written by a developer. MapReduce was generally coded in Java; Spark supports not only Java, but also Python and Scala, a newer language that has some attractive properties for manipulating data.&lt;/p&gt;
&lt;h2&gt;What are the key differences between Spark and MapReduce?&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Spark tries to keep things in memory, whereas MapReduce keeps shuffling things in and out of disk.&lt;/strong&gt; MapReduce inserts barriers, and it takes a long time to write things to disk and read them back. Hence MapReduce can be slow and laborious. The elimination of this restriction makes Spark orders of magnitude faster. For things like SQL engines such as Hive, a chain of MapReduce operations is usually needed, and this requires a lot of I/O activity. On to disk, off of disk—on to disk, off of disk. When similar operations are run on Spark, Spark can keep things in memory without I/O, so you can keep operating on the same data quickly. This results in dramatic improvements in performance, and that means Spark definitely moves us into at least the interactive category. For the record, there are some benefits to MapReduce doing all that recording to disk — as recording everything to disk allows for the possibility of restarting after failure. If you’re running a multi-hour job, you don’t want to begin again from scratch. For applications on Spark that run in the seconds or minutes, restart is obviously less of an issue.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;It’s easier to develop for Spark&lt;/strong&gt;. Spark is much more powerful and expressive in terms of how you give it instructions to crunch data. Spark has a Map and a Reduce function like MapReduce, but it adds others like Filter, Join and Group-by, so it’s easier to develop for Spark. In fact, Spark provides for lots of instructions that are a higher level of abstraction than what MapReduce provided. You can think more about how you want the data processed, rather than about how to cajole MapReduce into doing what you want. This might not seem that important, until you look at this: &lt;a href=&quot;https://hadoop.apache.org/docs/r1.2.1/mapred_tutorial.html#Example%3A+WordCount+v2.0&quot;&gt;MapReduce-Wordcount&lt;/a&gt;. This is the code to calculate a count of words in a text file, done in MapReduce (not Spark). It’s over 100 lines of code, and fairly unintuitive. The equivalent in Spark is found on this page: &lt;a href=&quot;https://spark.apache.org/examples.html&quot;&gt;Spark Examples&lt;/a&gt; (look for the Word Count example). It’s four lines versus over 100 (a sketch of the Spark version is shown after this list). If you’re trying to do risk calculations on Wall Street, which one are you going to choose? Same thing goes for someone writing a new analytics application or a new query engine. It’s a no-brainer.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
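&lt;p&gt;For reference, the Spark word count from the examples page linked above looks roughly like this in Scala (the input and output paths are placeholders):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// Count words in a text file with Spark; sc is the SparkContext.
val textFile = sc.textFile(&quot;hdfs://path/to/input.txt&quot;)
val counts = textFile
  .flatMap(line =&gt; line.split(&quot; &quot;))   // split each line into words
  .map(word =&gt; (word, 1))             // pair each word with a count of 1
  .reduceByKey(_ + _)                 // sum the counts for each word
counts.saveAsTextFile(&quot;hdfs://path/to/output&quot;)
&lt;/code&gt;&lt;/pre&gt;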
&lt;p&gt;Related to this theme of ease of development, Spark is more intelligent about how it operates on data. Spark supports lazy evaluation. Normally we don’t like anything to be lazy, but in this case, lazy evaluation means that if you tell Spark to operate on a set of data, it listens to what you ask it to do, writes down some shorthand for it so it doesn’t forget, and then does absolutely nothing. It will continue to do nothing, until you ask it for the final answer.&lt;/p&gt;
&lt;p&gt;Why is this great? Because often work magically goes away. This is a bit like when you were in high school, and your mom came in to ask you to do a chore (“fetch me some milk for tonight’s dinner”). Your response: say that you were going to do it, then keep right on doing what you were already doing. Sometimes your mom would come back in and say she didn’t need the chore done after all (“I substituted water instead”). Magic, work saved! Sometimes the laziest finish first.&lt;/p&gt;
&lt;p&gt;Spark is the same. It waits until you’re done giving it operators, and only when you ask it to give you the final answer does it evaluate, and it always looks to limit how much work it has to do. Suppose you first ask Spark to filter a petabyte of data for something—say, find you all the point of sale records for the Chicago store—then next you ask for it to give you just the first result that comes back. This is a really common thing to do. Sometimes a data analyst just wants to see a typical record for the Chicago store. If Spark were to run things explicitly as you gave it instructions, it would load the entire file, then filter for all the Chicago records, then once it had all those, pick out just the first line for you. That’s a huge waste of time and resources. Spark will instead wait to see the full list of instructions, and understand the entire chain as a whole. If you only wanted the first line that matches the filter, then Spark will just find the first Chicago POS record, then it will emit that as the answer, and stop. It’s much easier than first filtering everything, then picking out only the first line.&lt;/p&gt;
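&lt;p&gt;A minimal sketch of what this looks like in code (the file path and column name are made up for illustration):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import spark.implicits._

// Nothing is read yet: Spark only records what it has been asked to do.
val pos = spark.read.json(&quot;maprfs:///data/pos_records.json&quot;)
val chicago = pos.filter($&quot;store&quot; === &quot;Chicago&quot;)

// Only now does Spark do any work, and it stops as soon as it finds one matching record.
chicago.take(1)
&lt;/code&gt;&lt;/pre&gt;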
&lt;p&gt;Now, you could write your MapReduce jobs more intelligently to similarly avoid over-using resource, but it’s much more difficult to do that. Spark makes this happen automatically for you. Normally, software like Hive goes into contortions to avoid running too many MapReduce jobs, and programmers write very complex and hard-to-read code to force as much as possible into each Map and Reduce job. This makes development hard, and makes the code hard to maintain over time. By using Spark instead, you can write code that describes how you want to process data, not how you want the execution to run, and then Spark “does the right thing” on your behalf to run it as efficiently as possible. This is the same thing a good high-level programming language does: it raises the abstraction layer, letting the developer talk more powerfully and expressively, and does the work behind the scenes to ensure it runs as fast as possible.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Spark also adds libraries for doing things like machine learning, streaming, graph programming and SQL&lt;/strong&gt; (see image below). This also makes things much easier for developers. These libraries are integrated, so improvements in Spark over time provide benefits to the additional packages as well. Most data analysts would otherwise have to resort to using lots of other unrelated packages to get their work done, which makes things complex. Spark’s libraries are designed to all work together, on the same piece of data, which is more integrated and easier to use. Spark Streaming in particular provides a way to do real-time stream processing. The Apache Storm project was designed to do this kind of work, but Spark is much easier to develop for than Storm. Spark will enable developers to do real-time analysis of everything from trading data to web clicks, in an easy-to-develop environment, with tremendous speed.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/spark-core-stack-db-1606270847095.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;So to summarize, Spark is promising to speed up application development by 10-100x, make applications more portable and extensible, and make the actual application run 100x faster.&lt;/strong&gt; This should make it clear why people are excited.&lt;/p&gt;
&lt;h2&gt;How does someone develop an application for Spark?&lt;/h2&gt;
&lt;p&gt;Spark is a set of libraries. You can program to those libraries from three programming languages today: Java, Python, and a newer language called Scala. There aren’t a lot of Scala developers today, while there are millions of Java and Python developers. But Scala is better designed to work with Spark, and it offers the greatest reduction in the number of lines of code to stand up an application. Many complex applications that were hundreds of lines can be re-written in Scala in less than a hundred lines. The design methodology of Scala is more congruent with that of Spark than that of any other language. Scala also compiles down to the same bytecode that a Java Virtual Machine (JVM) executes, so any existing code you’ve got in Java can be used by Scala. This is one of Scala’s biggest wins: it gives you the best of both worlds. Scala offers first-class support for integrating Java; in fact, much of Scala is actually directly reliant on Java. All the Java code you’ve already got working can be repurposed. And the really important code that represents the critical part of the data-crunching application can be re-written in Scala in a much smaller form factor that is much easier for developers to read, repurpose and maintain. Note that with Java 8.0 supporting lambda expressions, a Java developer can become a lot more productive without switching to Scala completely.&lt;/p&gt;
&lt;p&gt;The libraries provided by Spark were discussed previously. The developer community is excited about Spark because everything is integrated with Spark. If you wanted to do applications with MapReduce, you had a bunch of problems. First, MapReduce pretty much had to be done with Java. That’s not the case with Spark: Python and Scala are first-class citizens. Second, you had to marry up MapReduce with other technologies. Wanted machine learning? You’ve got to separately integrate something like Mahout, H2O, or Oryx to get things done, and you’ve got to figure out how it works, and how to bolt it on. Wanted a graph database, with inbuilt tools for graph analytics? Well, again, you’ve got to select from Giraph, TitanDB, neo4j, or some other technology. The point is that the integration of all these parts would definitely not be seamless; each of them wants to be used in its own way.&lt;/p&gt;
&lt;p&gt;Spark offers a different model. You get SQL, machine learning, graph analytics and streaming in a single set of libraries that all work together with the Spark core. You can manipulate the same datasets with all of these. And when Spark core gets improvements, all of the libraries also improve. Integration is much easier, applications are far easier to maintain, costs go down, developers are happier. Most important for the field teams to understand: If a company developing applications is going to make a bet on a single foundation for your applications, Spark is looking like the best choice right now.&lt;/p&gt;
&lt;p&gt;Spark does not replace Hadoop. You still need a single data layer, preferably one that is hyper-scalable and extremely fast, and that’s where MapR comes in. MapR makes Spark faster, more scalable, and more reliable.&lt;/p&gt;
&lt;h2&gt;What are the Spark use cases?&lt;/h2&gt;
&lt;p&gt;Databricks (a company founded by the creators of Apache Spark) lists the following cases for Spark:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Data integration and ETL&lt;/li&gt;
&lt;li&gt;Interactive analytics or business intelligence&lt;/li&gt;
&lt;li&gt;High performance batch computation&lt;/li&gt;
&lt;li&gt;Machine learning and advanced analytics&lt;/li&gt;
&lt;li&gt;Real-time stream processing&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Tons of people are doing data integration and ETL on MapReduce, as well as batch computation, machine learning and batch analytics. But these things are going to be much faster on Spark. Interactive analytics and BI are possible on Spark, and the same goes for real-time stream processing. So some of the new use cases are just the old use cases, done faster, while some are totally new. There are some things that just couldn’t have been done with acceptable performance on MapReduce.&lt;/p&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;In this blog post, you’ve learned about the key differences between Spark and MapReduce, why it’s easier to develop on Spark, and the top five use cases.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Event Driven Microservices Architecture Patterns and Examples]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may…]]></description><link>https://developer.hpe.com/event-driven-microservices-architecture-patterns-and-examples/</link><guid isPermaLink="false">https://developer.hpe.com/event-driven-microservices-architecture-patterns-and-examples/</guid><pubDate>Thu, 19 Nov 2020 00:06:45 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Carol McDonald&quot;,
&quot;publish&quot;: &quot;2017-02-08T06:00:00.000Z&quot;,
&quot;tags&quot;: &quot;nosql,microservices&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;In this blog we will discuss some patterns that are often used in microservices applications that need to scale:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Event Stream&lt;/li&gt;
&lt;li&gt;Event Sourcing&lt;/li&gt;
&lt;li&gt;Polyglot Persistence&lt;/li&gt;
&lt;li&gt;Memory Image&lt;/li&gt;
&lt;li&gt;Command Query Responsibility Separation&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;The Motivation&lt;/h2&gt;
&lt;p&gt;&lt;a target=&apos;\_blank&apos;  href=&apos;https://eng.uber.com/service-oriented-architecture/&apos;&gt;Uber&lt;/a&gt;, &lt;a target=&apos;\_blank&apos;  href=&apos;https://www.infoq.com/presentations/scale-gilt/&apos;&gt;Gilt&lt;/a&gt; and others have moved from a monolithic to a microservices architecture because they needed to scale. A monolithic application puts all of its functionality into a single process, meaning that scaling requires replicating the whole application, which has limitations.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/picture1-1605744773149.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Sharing normalized tables in a clustered RDBMS does not scale well because distributed transactions and joins can cause concurrency bottlenecks.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/picture2-1605744781148.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The &lt;a target=&apos;\_blank&apos;  href=&apos;https://martinfowler.com/articles/microservices.html&apos;&gt;microservice architectural style&lt;/a&gt; is an approach to developing an application as a suite of small independently deployable services built around specific business capabilities.  &lt;a target=&apos;\_blank&apos;  href=&apos;https://ostatic.com/blog/q-a-maprs-jack-norris-on-the-impact-of-microservices&apos;&gt;A microservices approach is well aligned to a typical big data deployment&lt;/a&gt;.  You can gain modularity, extensive parallelism and cost-effective scaling by deploying services across many commodity hardware servers. Microservices modularity facilitates independent updates/deployments, and helps to avoid single points of failure, which can help prevent large-scale outages. &lt;/p&gt;
&lt;h2&gt;Event Stream&lt;/h2&gt;
&lt;p&gt;When moving from a monolithic to a microservices architecture, a common architectural pattern is event sourcing using an append-only event stream such as Kafka or MapR Event Store (which provides a Kafka 0.9 API). With MapR Event Store (or Kafka), events are grouped into logical collections of events called Topics. Topics are partitioned for parallel processing. You can think of a partitioned Topic like a queue: events are delivered in the order they are received.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/picture3-1605744789588.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Unlike a queue, events are persisted; even after they are delivered, they remain on the partition, available to other consumers.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/picture4-1605744796164.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Older messages are automatically deleted based on the Stream’s time-to-live setting. If the setting is 0 then they will never be deleted.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/picture5-1605744806846.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Messages are not deleted from Topics when read, and topics can have multiple different consumers. This allows processing of the same messages by different consumers for different purposes. Pipelining is also possible where a consumer enriches an event and publishes it to another topic.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/picture6-1605744813964.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
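&lt;p&gt;A minimal producer sketch using the Kafka 0.9 API, which works against both Kafka and MapR Event Store; the stream and topic name and the event payload are placeholders, not from the original post (the key is an account id and the value a signed amount):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

val props = new Properties()
props.put(&quot;key.serializer&quot;, &quot;org.apache.kafka.common.serialization.StringSerializer&quot;)
props.put(&quot;value.serializer&quot;, &quot;org.apache.kafka.common.serialization.StringSerializer&quot;)
// Plain Kafka would also need bootstrap.servers here.

// With MapR Event Store, the topic is addressed as /stream-path:topic-name.
val producer = new KafkaProducer[String, String](props)
producer.send(new ProducerRecord[String, String](&quot;/streams/accounts:transactions&quot;, &quot;BradA&quot;, &quot;-50.0&quot;))
producer.close()
&lt;/code&gt;&lt;/pre&gt;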
&lt;h2&gt;Event Sourcing&lt;/h2&gt;
&lt;p&gt;&lt;a target=&apos;\_blank&apos;  href=&apos;https://martinfowler.com/eaaDev/EventSourcing.html&apos;&gt;Event Sourcing&lt;/a&gt; is an architectural pattern in which the state of the application is determined by a sequence of events each of which is recorded in an append-only Event store or Stream. As an example, imagine that each “event” is an incremental update to an entry in a database. In this case, the state of a particular entry is simply the accumulation of events pertaining to that entry. In the example below the Stream persists the queue of all deposit and withdrawal events, and the database table persists the current account balances.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/picture7-1605744821094.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Which one of these, the Stream or the Database, makes a better system of record? The events in the Stream can be used to reconstruct the current account balances in the Database, but not the other way around. Database replication actually works by suppliers writing changes to a change log, and consumers applying the changes locally. Another well known example of this is a source code version control system.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/picture8-1605744828551.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;With a Stream, events can be re-played to create a new view, index, cache, &lt;a target=&apos;\_blank&apos;  href=&apos;https://martinfowler.com/bliki/MemoryImage.html&apos;&gt;memory image&lt;/a&gt;, or materialized view of the data.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/picture9-1605744835216.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The Consumer simply reads the messages from the oldest to the latest to create a new View of the data.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/picture10-1605744842963.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
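&lt;p&gt;A sketch of that replay using the Kafka 0.9 consumer API; to keep the example self-contained, the event key is assumed to be the account id and the event value just the signed amount (both are illustrative assumptions):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import java.util.{Arrays, Properties}
import org.apache.kafka.clients.consumer.KafkaConsumer
import scala.collection.JavaConverters._

val props = new Properties()
props.put(&quot;key.deserializer&quot;, &quot;org.apache.kafka.common.serialization.StringDeserializer&quot;)
props.put(&quot;value.deserializer&quot;, &quot;org.apache.kafka.common.serialization.StringDeserializer&quot;)
props.put(&quot;group.id&quot;, &quot;balance-view-builder&quot;)
props.put(&quot;auto.offset.reset&quot;, &quot;earliest&quot;)   // start from the oldest event to rebuild the view

val consumer = new KafkaConsumer[String, String](props)
consumer.subscribe(Arrays.asList(&quot;/streams/accounts:transactions&quot;))

// Apply each deposit/withdrawal in order to build a new materialized view of account balances.
val balances = scala.collection.mutable.Map[String, Double]().withDefaultValue(0.0)
for (record &lt;- consumer.poll(10000).asScala) {
  balances(record.key) += record.value.toDouble
}
&lt;/code&gt;&lt;/pre&gt;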
&lt;p&gt;There are several advantages for modeling application state with streams:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Lineage&lt;/strong&gt;: to ask how did BradA’s balance get so low?&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Auditing&lt;/strong&gt;:  it gives an audit trail, who deposited/withdrew from account id BradA? This is how accounting transactions work.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Rewind&lt;/strong&gt;: to see what the status of the accounts were last year.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Integrity&lt;/strong&gt;: can I trust the data hasn’t been tampered with?
&lt;ul&gt;
&lt;li&gt;yes because Streams are immutable.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a href=&quot;https://docs.datafabric.hpe.com/62/MapR_Streams/replicating_streams.html&quot;&gt;The Replication of MapR Event Store&lt;/a&gt; gives a powerful testing and debugging technique. A replica of a Stream can be used to replay a version of events for testing or debugging purposes.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/picture11-1605744849895.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Different databases and schemas for different needs&lt;/h2&gt;
&lt;p&gt;There are lots of databases out there. Each &lt;a target=&apos;\_blank&apos;  href=&apos;https://martinfowler.com/bliki/PolyglotPersistence.html&apos;&gt;uses different technologies depending on how the data is used&lt;/a&gt;, optimized for a type of write or read pattern: graph query, search, document. What if you need to have the same set of data for different databases, for different types of incoming queries? The Stream can act as the distribution point for multiple databases, each one providing a different read pattern. All changes to the application state are persisted to an event store, which is the system of record. The event store makes it possible to rebuild state by re-running the events in the stream.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/picture12-1605744856503.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Events funnel out to databases, which are consumers of the stream. Polyglot persistence provides different specialized materialized views.&lt;/p&gt;
&lt;h2&gt;CQRS&lt;/h2&gt;
&lt;p&gt;&lt;a target=&apos;\_blank&apos;  href=&apos;https://martinfowler.com/bliki/CQRS.html&apos;&gt;Command and Query Responsibility Segregation (CQRS)&lt;/a&gt; is a pattern that separates the read model and Queries from the write model and Commands often using event sourcing.  Let’s look at how an online shopping  application’s item rating functionality could be separated using the CQRS pattern.  The functionality, shown below in a monolithic application, consists of users rating items they have bought, and browsing item ratings while shopping.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/picture13-1605744863999.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In the CQRS design shown below we isolate and separate the Rate Item write “command” from the Get Item Ratings read “query” using event sourcing.  Rate Item events are published to a Stream. A handler process reads from the stream and persists a materialized view of the ratings for an item in a NoSQL document-style database.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/picture14-1605744871613.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/picture15-1605744879088.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;NoSQL and De-normalization&lt;/h2&gt;
&lt;p&gt;With MapR Database, a table is automatically partitioned across a cluster by key range, and each server is the source for a subset of a table. Grouping the data by key range provides for really fast read and writes by row key. With MapR Database you design your schema so that the data that is read together is stored together.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/picture16-1605744887363.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Often with MapR Database, you de-normalize or store in one table what would be multiple tables in a normalized relational database. If your entities exist in a one-to-many relationship, it’s possible to model it in MapR Database HBase as a single row or MapR Database JSON as a single document. In the example below, the item and related ratings are stored together and can be read together with a single get on the indexed row key. This makes the reads a lot faster than joining tables together.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/17b-1605744895310.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
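&lt;p&gt;As a rough illustration, a denormalized MapR Database JSON document for this example might look like the following (the field names and values are made up):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// One document holds the item and its ratings, so a single get by _id returns both.
val itemWithRatings = &quot;&quot;&quot;
{
  &quot;_id&quot;: &quot;item_10001&quot;,
  &quot;name&quot;: &quot;Trail Running Shoe&quot;,
  &quot;price&quot;: 89.99,
  &quot;ratings&quot;: [
    { &quot;userId&quot;: &quot;u42&quot;, &quot;stars&quot;: 5 },
    { &quot;userId&quot;: &quot;u77&quot;, &quot;stars&quot;: 4 }
  ]
}
&quot;&quot;&quot;
&lt;/code&gt;&lt;/pre&gt;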
&lt;h2&gt;Event Sourcing: New Uses of Data&lt;/h2&gt;
&lt;p&gt;An advantage of using an Event Stream for the rate item and other shopping related events is shown here. This design lets us use this data more broadly. Raw or enriched events can be stored in inexpensive storage such as MapR XD. Historical ratings data can be used to build a machine learning model for recommendations. Having a long retention time for data in the queue is also very useful. For example, that data could be processed to build a collection of shopping transaction histories stored in a data format such as Parquet that allows very efficient querying. Other processes might use historical data and streaming shopping related events with machine learning to &lt;a target=&apos;\_blank&apos;  href=&apos;https://www.forbes.com/sites/bernardmarr/2015/11/10/big-data-a-game-changer-in-the-retail-sector/?sh=5884f9339f37&apos;&gt;predict shopping trends&lt;/a&gt;, to detect fraud, or to build a real-time display of where transactions are happening.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/picture18-1605744754814.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Fashion Retailer’s Event Driven Architecture&lt;/h2&gt;
&lt;p&gt;A major fashion retailer wanted to increase in-season agility and inventory discipline in order to react to demand changes and reduce markdowns. The Event driven solution architecture is shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/picture19-1605744904534.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Weather, world events, and logistical data are collected in real time via MapR Event Store, allowing for real-time analysis of potential logistical impacts and rerouting of inventory.&lt;/li&gt;
&lt;li&gt;Apache Spark is used for batch and streaming analytics processing, and for machine learning to predict supply chain disruptions and generate product recommendations.&lt;/li&gt;
&lt;li&gt;Data is stored in MapR Database providing scalable, fast reads and writes. Apache Drill is used for interactive exploration and preprocessing of the data with a schema-free SQL query engine.&lt;/li&gt;
&lt;li&gt;ODBC with Drill provides support for existing BI tools.&lt;/li&gt;
&lt;li&gt;MapR’s Enterprise capabilities provide for global data center replication.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;In this blog post, we discussed event driven microservice architecture using the following design patterns: &lt;a target=&apos;\_blank&apos;  href=&apos;https://martinfowler.com/eaaDev/EventSourcing.html&apos;&gt;Event Sourcing,&lt;/a&gt; &lt;a target=&apos;\_blank&apos;  href=&apos;https://martinfowler.com/bliki/CQRS.html&apos;&gt;Command Query Responsibility Separation&lt;/a&gt;, and &lt;a target=&apos;\_blank&apos;  href=&apos;https://martinfowler.com/bliki/PolyglotPersistence.html&apos;&gt;Polyglot Persistence&lt;/a&gt;. All of the components of the architectures we discussed can run on the same cluster with the MapR Data Platform. &lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/picture20-1605744911661.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;References and More Information&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a target=&apos;\_blank&apos;  href=&apos;https://www.eweek.com/cloud/10-advantages-to-building-enterprise-applications-with-microservices&apos;&gt;10 Advantages to Building Enterprise Applications with Microservices&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a target=&apos;\_blank&apos;  href=&apos;https://ostatic.com/blog/q-a-maprs-jack-norris-on-the-impact-of-microservices&apos;&gt;MapR&apos;s Jack Norris on the Impact of Microservices&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a target=&apos;\_blank&apos;  href=&apos;https://martin.kleppmann.com/2015/03/04/turning-the-database-inside-out.html&apos;&gt;Turning the database upside down&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a target=&apos;\_blank&apos;  href=&apos;http://milinda.pathirage.org/kappa-architecture.com/&apos;&gt;Kappa Architecture&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a target=&apos;\_blank&apos;  href=&apos;https://www.oreilly.com/library/view/making-sense-of/9781492042563/&apos;&gt;Making Sense of Stream Processing&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a target=&apos;\_blank&apos;  href=&apos;https://www.infoq.com/presentations/uber-stream-processing/&apos;&gt;Stream processing in Uber&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a target=&apos;\_blank&apos;  href=&apos;http://highscalability.com/blog/2015/1/26/paper-immutability-changes-everything-by-pat-helland.html&apos;&gt;Immutability Changes Everything&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a target=&apos;\_blank&apos;  href=&apos;https://highlyscalable.wordpress.com/2012/03/01/nosql-data-modeling-techniques/&apos;&gt;NoSQL Data Modeling Techniques&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[How Spark Runs Your Applications]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may…]]></description><link>https://developer.hpe.com/how-spark-runs-your-applications/</link><guid isPermaLink="false">https://developer.hpe.com/how-spark-runs-your-applications/</guid><pubDate>Wed, 18 Nov 2020 23:48:48 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Carol McDonald&quot;,
&quot;publish&quot;: &quot;2018-10-31T07:00:00.000Z&quot;,
&quot;tags&quot;: &quot;apache-spark&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;Recall that your Spark application runs as a set of parallel tasks. In this blog post, we will go over how Spark translates Dataset transformations and actions into an execution model.  &lt;/p&gt;
&lt;p&gt;With Spark 2.0 and later versions, big improvements were implemented to make Spark easier to program and execute faster.&lt;/p&gt;
&lt;p&gt;In order to understand how your application runs on a cluster, an important thing to know about Dataset transformations is that they fall into two types, narrow and wide, which we will discuss first, before explaining the execution model.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image18-1605743734060.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Narrow and Wide Transformations  &lt;/h2&gt;
&lt;p&gt;As a review, transformations create a new Dataset from an existing one.  Narrow transformations do not have to move data between partitions when creating a new dataset from an existing one. Some example narrow transformations are &lt;code&gt;filter&lt;/code&gt; and &lt;code&gt;select&lt;/code&gt;, which are used in the example below to retrieve flight information for the carrier &quot;AA&quot;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// select and filter are narrow transformations
df.select($&quot;carrier&quot;,$&quot;origin&quot;,  $&quot;dest&quot;, $&quot;depdelay&quot;, $&quot;crsdephour&quot;).filter($&quot;carrier&quot; === &quot;AA&quot; ).show(2)

result:
+-------+------+----+--------+----------+
|carrier|origin|dest|depdelay|crsdephour|
+-------+------+----+--------+----------+
|     AA|   ATL| LGA|     0.0|        17|
|     AA|   LGA| ATL|     0.0|        13|
+-------+------+----+--------+----------+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Multiple narrow transformations can be performed on a Dataset in memory, in a process called pipelining, making narrow transformations very efficient.&lt;/p&gt;
&lt;p&gt;Wide transformations cause data to be moved between partitions when creating a new Dataset, in a process called the shuffle.  With wide transformation shuffles, data is sent across the network to other nodes and written to disk, causing network and disk I/O, and making the shuffle a costly operation. Some example wide transformations are &lt;code&gt;groupBy&lt;/code&gt;, &lt;code&gt;agg&lt;/code&gt;, &lt;code&gt;sortBy&lt;/code&gt;, and &lt;code&gt;orderBy&lt;/code&gt;. Below is a wide transformation to count the number of flights by carrier.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;df.groupBy(&quot;carrier&quot;).count.show
result:

+-------+-----+
|carrier|count|
+-------+-----+
|     UA|18873|
|     AA|10031|
|     DL|10055|
|     WN| 2389|
+-------+-----+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image12-1605743742466.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;The Spark Execution Model&lt;/h2&gt;
&lt;p&gt;The Spark execution model can be defined in three phases: creating the logical plan, translating that into a physical plan, and then executing the tasks on a cluster.&lt;/p&gt;
&lt;p&gt;You can view useful information about your Spark jobs in real time in a web browser with this URL: http://&lt;driver-node&gt;:4040. For Spark applications that have finished, you can use the Spark history server to see this information in a web browser at this URL: http://&lt;server-url&gt;:18080.  Let’s walk through the three phases and the Spark UI information about the phases, with some example code.&lt;/p&gt;
&lt;h2&gt;The Logical Plan&lt;/h2&gt;
&lt;p&gt;In the first phase, the logical plan is created. This is the plan that shows which steps will be executed when an action gets applied. Recall that when you apply a transformation on a Dataset, a new Dataset is created. When this happens, that new Dataset points back to the parent, resulting in a lineage or directed acyclic graph (DAG) for how Spark will execute these transformations.  &lt;/p&gt;
&lt;h2&gt;The Physical Plan&lt;/h2&gt;
&lt;p&gt;Actions trigger the translation of the logical DAG into a physical execution plan. The Spark Catalyst query optimizer creates the physical execution plan for DataFrames, as shown in the diagram below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image9-1605743750828.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;(Image reference: Databricks)&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;The physical plan identifies resources, such as memory partitions and compute tasks, that will execute the plan.&lt;/p&gt;
&lt;h2&gt;Viewing the Logical and Physical Plan&lt;/h2&gt;
&lt;p&gt;You can see the logical and physical plan for a Dataset by calling the &lt;code&gt;explain(true)&lt;/code&gt; method. In the code below, we see that the DAG for df2 consists of a &lt;code&gt;FileScan&lt;/code&gt;, a &lt;code&gt;Filter&lt;/code&gt; on &lt;code&gt;depdelay&lt;/code&gt;, and a &lt;code&gt;Project&lt;/code&gt; (selecting columns).&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.spark.sql.types._
import org.apache.spark.sql._
import org.apache.spark.sql.functions._

var file = &quot;maprfs:///data/flights20170102.json&quot;

case class Flight(_id: String, dofW: Long, carrier: String, origin: String, dest: String, crsdephour: Long, crsdeptime: Double, depdelay: Double,crsarrtime: Double, arrdelay: Double, crselapsedtime: Double, dist: Double) extends Serializable

val df = spark.read.format(&quot;json&quot;).option(&quot;inferSchema&quot;, &quot;true&quot;).load(file).as[Flight]

val df2 = df.filter($&quot;depdelay&quot; &gt; 40)

df2.take(1)

result:
Array[Flight] = Array(Flight(MIA_IAH_2017-01-01_AA_2315, 7,AA,MIA,IAH,20,2045.0,80.0,2238.0,63.0,173.0,964.0))

df2.explain(true)

result:
== Parsed Logical Plan ==
&apos;Filter (&apos;depdelay &gt; 40)
+- Relation[_id#8,arrdelay#9,…] json

== Analyzed Logical Plan ==
_id: string, arrdelay: double…
Filter (depdelay#15 &gt; cast(40 as double))
+- Relation[_id#8,arrdelay#9…] json

== Optimized Logical Plan ==
Filter (isnotnull(depdelay#15) &amp;#x26;&amp;#x26; (depdelay#15 &gt; 40.0))
+- Relation[_id#8,arrdelay#9,…] json

== Physical Plan ==
*Project [_id#8, arrdelay#9,…]
+- *Filter (isnotnull(depdelay#15) &amp;#x26;&amp;#x26; (depdelay#15 &gt; 40.0))
   +- *FileScan json [_id#8,arrdelay#9,…] Batched: false, Format: JSON, Location: InMemoryFileIndex[maprfs:///..],
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image11-1605743758827.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can see more details about the plan produced by Catalyst on the web UI SQL tab (http://&lt;driver-node&gt;:4040/SQL/).  Clicking on the query description link displays the DAG and details for the query.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image19-1605743768630.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In the code below, after the &lt;code&gt;explain&lt;/code&gt;, we see that the physical plan for df3 consists of a &lt;code&gt;FileScan&lt;/code&gt;, &lt;code&gt;Filter&lt;/code&gt;, &lt;code&gt;Project&lt;/code&gt;, &lt;code&gt;HashAggregate&lt;/code&gt;, &lt;code&gt;Exchange&lt;/code&gt;, and &lt;code&gt;HashAggregate&lt;/code&gt;. The &lt;strong&gt;Exchange&lt;/strong&gt; is the shuffle caused by the &lt;code&gt;groupBy&lt;/code&gt; transformation. Spark performs a hash aggregation for each partition before shuffling the data in the Exchange. After the exchange, there is a hash aggregation of the previous sub-aggregations. Note that we would have an in-memory scan instead of a file scan in this DAG, if df2 were cached.  &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val df3 = df2.groupBy(&quot;carrier&quot;).count

df3.collect

result:
Array[Row] = Array([UA,2420], [AA,757], [DL,1043], [WN,244])

df3.explain

result:
== Physical Plan ==
*HashAggregate(keys=[carrier#124], functions=[count(1)])
+- Exchange hashpartitioning(carrier#124, 200)
   +- *HashAggregate(keys=[carrier#124], functions=[partial_count(1)])
      +- *Project [carrier#124]
         +- *Filter (isnotnull(depdelay#129) &amp;#x26;&amp;#x26; (depdelay#129 &gt; 40.0))
            +- *FileScan json [carrier#124,depdelay#129]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image6-1605743777932.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Clicking on the SQL tab link for this query displays the DAG below.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image5-1605743786007.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Executing the Tasks on a Cluster&lt;/h2&gt;
&lt;p&gt;In the third phase, the tasks are scheduled and executed on the cluster. The scheduler splits the graph into stages, based on the transformations. The narrow transformations (transformations without data movement) will be grouped (pipe-lined) together into a single stage. The physical plan for this example has two stages, with everything before the exchange in the first stage.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image16-1605743813015.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Each stage is comprised of tasks, based on partitions of the Dataset, which will perform the same computation in parallel.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image17-1605743851321.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The scheduler submits the stage task set to the task scheduler, which launches the tasks via a cluster manager. The stages are executed in order, and the action is considered complete when the final stage in a job completes. This sequence can occur many times as new Datasets are created.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image4-1605743864900.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Here is a summary of the components of execution:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Task:&lt;/strong&gt; a unit of execution that runs on a single machine&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Stage:&lt;/strong&gt; a group of tasks, based on partitions of the input data, which will perform the same computation in parallel&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Job:&lt;/strong&gt; has one or more stages&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Pipelining:&lt;/strong&gt; collapsing of Datasets into a single stage, when Dataset transformations can be computed without data movement&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;DAG:&lt;/strong&gt; Logical graph of Dataset operations&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Exploring the Task Execution on the Web UI&lt;/h2&gt;
&lt;p&gt;Here is a screenshot of the web UI Jobs tab, after running the code above. The Jobs page gives you detailed execution information for active and recently completed Spark jobs. It gives you the performance of a job and also the progress of running jobs, stages, and tasks. In this example, Job Id 2 is the job that was triggered by the &lt;code&gt;collect&lt;/code&gt; action on df3.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image2-1605743873603.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Clicking the link in the Description column on the Jobs page takes you to the Job Details page. This page gives you details on the progress of the job, stages, and tasks.  We see this job consists of 2 stages, with 2 tasks in the stage before the shuffle and 200 in the stage after the shuffle.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image15-1605743880620.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The number of tasks corresponds to the partitions: after reading the file in the first stage, there are 2 partitions; after a shuffle, the default number of partitions is 200. You can see the number of partitions on a Dataset with the &lt;code&gt;rdd.partitions.size&lt;/code&gt; method shown below.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;df3.rdd.partitions.size
result: Int = 200

df2.rdd.partitions.size
result: Int = 2
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Under the Stages tab, you can see the details for a stage by clicking on its link in the description column.  &lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image8-1605743887018.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Here we have summary metrics for completed tasks and aggregated metrics by executor. You can use these metrics to identify problems with an executor or with task distribution. If task process times are not balanced, then resources could be wasted.&lt;/p&gt;
&lt;p&gt;The Storage tab provides information about persisted Datasets. A Dataset is persisted if you called &lt;code&gt;persist&lt;/code&gt; or &lt;code&gt;cache&lt;/code&gt; on it, followed by an action to compute it. This page tells you which fraction of the Dataset’s underlying RDD is cached and the quantity of data cached in various storage media. Look at this page to see whether important Datasets fit into memory. You can also click on a Dataset’s link to view more details about it. If you no longer need a cached Dataset, you can call &lt;code&gt;unpersist&lt;/code&gt; to uncache it.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image7-1605743893255.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Try caching df2 and performing an action, then see how it shows up as persisted on the Storage tab and how it changes the plan and execution time for df3 on the Job Details page. Notice that the execution time is faster after caching.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;df2.cache
df2.count
df3.collect
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Notice how the first stage is skipped in job4, when df2 is cached and df3 collect is executed again.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image14-1605743900253.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
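&lt;p&gt;When you no longer need the cached Dataset, a one-line call frees the memory, and its entry should disappear from the Storage tab:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// Remove the cached blocks for df2; later jobs will read from the file again
df2.unpersist()
&lt;/code&gt;&lt;/pre&gt;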
&lt;p&gt;The Environment tab lists all the active properties of your Spark application environment. Use this page when you want to see which configuration flags are enabled. Only values specified through &lt;code&gt;spark-defaults.conf&lt;/code&gt;, SparkSession, or the command line will be displayed here. For all other configuration properties, the default value is used.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image3-1605743907003.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
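&lt;p&gt;As a hedged illustration (the property and value below are only examples, not recommendations), configuration can be set when building the SparkSession, in spark-defaults.conf, or on the spark-submit command line; anything set this way shows up on the Environment tab:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.spark.sql.SparkSession

// Illustrative only: override the default shuffle partition count (200) at session creation.
// The same property could instead be set in spark-defaults.conf,
// or passed as: spark-submit --conf spark.sql.shuffle.partitions=64 ...
val spark = SparkSession.builder()
  .appName(&quot;flight-analysis&quot;)
  .config(&quot;spark.sql.shuffle.partitions&quot;, &quot;64&quot;)
  .getOrCreate()
&lt;/code&gt;&lt;/pre&gt;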
&lt;p&gt;Under the Executors tab, you can see processing and storage for each executor:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Shuffle Read Write Columns:&lt;/strong&gt; shows size of data transferred between stages&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Storage Memory Column:&lt;/strong&gt; shows the current used/available memory&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Task Time Column:&lt;/strong&gt; shows task time/garbage collection time&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Use this page to confirm that your application has the amount of resources you were expecting.  You can look at the thread call stack by clicking on the thread dump link.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image20-1605743914411.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;In this post, we discussed the Spark execution model, and we explored task execution on the Spark Web UI. This understanding of how Spark runs your applications is important when debugging, analyzing, and tuning the performance of your applications.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Real Time Credit Card Fraud Detection with Apache Spark and Event Streaming]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may…]]></description><link>https://developer.hpe.com/real-time-credit-card-fraud-detection-with-apache-spark-and-event-stream/</link><guid isPermaLink="false">https://developer.hpe.com/real-time-credit-card-fraud-detection-with-apache-spark-and-event-stream/</guid><pubDate>Wed, 18 Nov 2020 23:37:40 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Carol McDonald&quot;,
&quot;publish&quot;: &quot;2016-05-03T07:00:00.000Z&quot;,
&quot;tags&quot;: &quot;apache-spark, &quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;In this post, we are going to discuss building a real-time solution for credit card fraud detection.&lt;/p&gt;
&lt;p&gt;There are two phases to real-time fraud detection:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The first phase involves analysis and forensics on historical data to build the machine learning model.  &lt;/li&gt;
&lt;li&gt;The second phase uses the model in production to make predictions  on live events. &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/spark-fraud-1-1605742738606.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Building the Model&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Classification&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Classification is a family of supervised machine learning algorithms that identify which category an item belongs to (for example whether a transaction is fraud  or not fraud), based on labeled examples of known items (for example transactions known to be fraud or not). Classification takes a set of data with known labels and pre-determined features and learns how to label new records based on that information.  Features are the “if questions” that you ask. The label is the answer to those questions. In the example below, if it walks, swims, and quacks like a duck, then the label is &quot;duck&quot;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/spark-fraud-2-1605742750342.jpg&quot; alt=&quot;recommendation engine with ducks&quot;&gt;&lt;/p&gt;
&lt;p&gt;Let’s go through an example of car insurance fraud:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;What are we trying to predict?   
&lt;ul&gt;
&lt;li&gt;This is the Label: The Amount of Fraud&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;What are the  “if questions” or properties that you can use to predict ?
&lt;ul&gt;
&lt;li&gt;These are the Features. To build a classifier model, you extract the features of interest that most contribute to the classification.&lt;/li&gt;
&lt;li&gt;In this simple example, we will use the claimed amount.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/spark-fraud-3-1605742767766.jpg&quot; alt=&quot;fraud detection with spark&quot;&gt;&lt;/p&gt;
&lt;p&gt;Linear regression models the relationship between the Y “Label” and the X “Feature”,  in this case the relationship between the amount of fraud and the claimed amount.  The coefficient measures the impact of the feature, the claimed amount, on the label, the fraud amount.  &lt;/p&gt;
&lt;p&gt;Multiple linear regression models the relationship between two or more “Features” and a response “Label”. For example, if we wanted to model the relationship between the amount of fraud and the age of the claimant, the claimed amount, and the severity of the accident, the multiple linear regression function would look like this:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/spark-fraud-4-1605742785380.jpg&quot; alt=&quot;fraud detection algorithm &quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;AmntFraud = intercept + coeff1 * age + coeff2 * claimedAmnt + coeff3 * severity + error.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The coefficients measure the impact of each feature on the fraud amount.&lt;/p&gt;
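&lt;p&gt;As a tiny illustration of the formula above (the intercept and coefficient values below are invented for illustration, not fitted from any data), the prediction is just a weighted sum of the features plus an intercept:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// Toy sketch of the multiple linear regression formula; values are made up.
val intercept = 100.0
val (coeffAge, coeffClaimed, coeffSeverity) = (-2.0, 0.3, 50.0)

def predictedFraudAmount(age: Double, claimedAmnt: Double, severity: Double): Double =
  intercept + coeffAge * age + coeffClaimed * claimedAmnt + coeffSeverity * severity

// Example: a 40 year old claimant, a 10,000 claim, severity 2
println(predictedFraudAmount(40, 10000, 2))
&lt;/code&gt;&lt;/pre&gt;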
&lt;p&gt;Let’s take credit card fraud as another example:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Example Features: transaction amount, type of merchant, distance from and time since the last transaction.&lt;/li&gt;
&lt;li&gt;Example Label:  Probability of Fraud&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/spark-fraud-5-1605742798937.jpg&quot; alt=&quot;fraud detection with spark&quot;&gt;&lt;/p&gt;
&lt;p&gt;Logistic regression measures the relationship between the Y “Label” and the X “Features” by estimating probabilities using a &lt;a target=&apos;\_blank&apos;  href=&apos;https://en.wikipedia.org/wiki/Logistic_function&apos;&gt;logistic function&lt;/a&gt;. The model predicts a probability, which is then used to predict the label class.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Classification: identifies which category (eg fraud or not fraud)&lt;/li&gt;
&lt;li&gt;Linear Regression: predicts a value (eg amount of fraud)&lt;/li&gt;
&lt;li&gt;Logistic Regression: predicts a probability (eg probability of fraud)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Linear and logistic regression are just a couple of the algorithms used in machine learning; there are many more, as shown &lt;a target=&apos;\_blank&apos;  href=&apos;http://scikit-learn.org/stable/tutorial/machine_learning_map/&apos;&gt;in this cheat sheet&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/spark-fraud-6-1605742814936.png&quot; alt=&quot;fraud detection with spark&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Feature Engineering&lt;/h2&gt;
&lt;p&gt;Feature engineering is the process of transforming raw data into inputs for a machine learning algorithm. Feature engineering is extremely dependent on the type of use case and potential data sources.  &lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/spark-fraud-7-1605742828673.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;(reference &lt;a target=&apos;\_blank&apos;  href=&apos;http://shop.oreilly.com/product/0636920028512.do&apos;&gt;Learning Spark&lt;/a&gt;)&lt;/p&gt;
&lt;p&gt;Looking more in depth at the credit card fraud example for feature engineering,  our goal is to distinguish normal card usage from fraudulent card usage.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Goal: we are looking for someone using the card other than the cardholder&lt;/li&gt;
&lt;li&gt;Strategy: we want to design features to measure the differences between recent and historical activities.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For a credit card transaction we have features associated with the transaction, features associated with the card holder, and features derived from transaction history. Some examples of each are shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/spark-fraud-8-1605742836968.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/spark-fraud-9-1605742844958.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/spark-fraud-9a-1605742851836.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Model Building Workflow&lt;/h2&gt;
&lt;p&gt;A typical supervised machine learning workflow has the following steps:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Feature engineering to transform historical data into feature and label inputs for a machine learning algorithm.&lt;/li&gt;
&lt;li&gt;Split the data into two parts, one for building the model and one for testing the model.&lt;/li&gt;
&lt;li&gt;Build the model with the training features and labels&lt;/li&gt;
&lt;li&gt;Test the model with the test features to get predictions. Compare the test predictions to the test labels.&lt;/li&gt;
&lt;li&gt;Loop until satisfied with the model accuracy:
&lt;ul&gt;
&lt;li&gt;Adjust the model fitting parameters, and repeat tests.&lt;/li&gt;
&lt;li&gt;Adjust the features and/or machine learning algorithm and repeat tests.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/spark-fraud-9b-1605742860922.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
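&lt;p&gt;A rough sketch of this workflow with Spark ML is shown below. The data source, column names, and parameter values are hypothetical; the point is only to illustrate the split / fit / evaluate loop described above:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
import org.apache.spark.ml.feature.VectorAssembler

// Hypothetical input: a DataFrame with a numeric &quot;label&quot; column (1.0 = fraud)
// and numeric feature columns produced by feature engineering.
val data = spark.read.parquet(&quot;/path/to/engineered-features&quot;)   // illustrative path

val assembler = new VectorAssembler()
  .setInputCols(Array(&quot;amount&quot;, &quot;merchantRiskScore&quot;, &quot;distanceFromLast&quot;, &quot;timeSinceLast&quot;))
  .setOutputCol(&quot;features&quot;)

// Split the data into a training set and a test set
val Array(train, test) = assembler.transform(data).randomSplit(Array(0.8, 0.2), seed = 42)

// Build the model with the training features and labels
val model = new LogisticRegression().setMaxIter(10).fit(train)

// Test the model: score the held-out split and compare predictions to the known labels
val predictions = model.transform(test)
val auc = new BinaryClassificationEvaluator().evaluate(predictions)
println(s&quot;Area under ROC: $auc&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Adjusting the model parameters or the feature columns and repeating the last two steps corresponds to the tuning loop above.&lt;/p&gt;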
&lt;h2&gt;Real Time Fraud Detection Solution in Production&lt;/h2&gt;
&lt;p&gt;The figure below shows the high-level architecture of a real time fraud detection solution, which is capable of high performance at scale. Credit card transaction events are delivered through the MapR Event Store messaging system, which supports the Kafka 0.9 API. The events are processed and checked for fraud by Spark Streaming, using Spark Machine Learning with the deployed model. MapR XD, which supports the POSIX NFS API and the HDFS API, is used for storing event data. MapR Database, a NoSQL database which supports the HBase API, is used for storing and providing fast access to credit card holder profile data.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/spark-fraud-9c-1605742870860.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Streaming Data Ingestion&lt;/h2&gt;
&lt;p&gt;MapR Event Store is a new distributed messaging system which enables producers and consumers to exchange events in real time via the Apache Kafka 0.9 API. MapR Event Store topics are logical collections of messages which organize events into categories. In this solution there are 3 categories:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Raw Trans: raw credit card transaction events.&lt;/li&gt;
&lt;li&gt;Enriched: credit card transaction events, enriched with card holder features, which were predicted not to be fraud.&lt;/li&gt;
&lt;li&gt;Fraud Alert: credit card transaction events enriched with card holder features which were predicted to be fraud.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Topics are partitioned, spreading the load for parallel messaging across multiple servers,  which provides for faster throughput and scalability.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/spark-fraud-9d-1605742879420.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
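&lt;p&gt;Because MapR Event Store supports the Kafka 0.9 API, publishing into these topics looks like ordinary Kafka producer code. A minimal sketch follows; the broker address, topic name, and payload are placeholders:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

// Illustrative producer configuration; the broker address is a placeholder.
val props = new Properties()
props.put(&quot;bootstrap.servers&quot;, &quot;broker1:9092&quot;)
props.put(&quot;key.serializer&quot;, &quot;org.apache.kafka.common.serialization.StringSerializer&quot;)
props.put(&quot;value.serializer&quot;, &quot;org.apache.kafka.common.serialization.StringSerializer&quot;)

val producer = new KafkaProducer[String, String](props)

// Publish a raw transaction event, keyed by card id, to the raw transactions topic
producer.send(new ProducerRecord[String, String](&quot;raw-trans&quot;, &quot;card-123&quot;, &quot;{ transaction json }&quot;))
producer.close()
&lt;/code&gt;&lt;/pre&gt;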
&lt;h2&gt;Real-time Fraud Prediction Using Spark Streaming&lt;/h2&gt;
&lt;p&gt;Spark Streaming lets you use the same Spark APIs for streaming and batch processing, meaning that well-modularized Spark functions written for offline machine learning can be reused for real-time machine learning.&lt;/p&gt;
&lt;p&gt;The data flow for the real time fraud detection using Spark Streaming is as follows:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Raw events come into Spark Streaming as DStreams; a DStream is internally a sequence of RDDs. RDDs are like a Java Collection, except that the data elements contained in RDDs are partitioned across a cluster. RDD operations are performed in parallel on the data cached in memory, making the iterative algorithms often used in machine learning much faster when processing lots of data.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/spark-fraud-9e-wide-1605857312275.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;The credit card transaction data is parsed to get the features associated with the transaction.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/spark-fraud-9f-1605742904521.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Card holder features and profile history are read from MapR Database using the account number as the row key.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/spark-fraud-9g-1605742912597.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;Some derived features are re-calculated with the latest transaction data.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/spark-fraud-9h-1605742920965.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;5&quot;&gt;
&lt;li&gt;
&lt;p&gt;Features are run with the model algorithm to produce fraud prediction scores.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Non fraud events enriched with derived features are published to the enriched topic. Fraud events with derived features are published to the fraud topic.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
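&lt;p&gt;A condensed sketch of this flow is shown below, assuming the Spark 1.6-era direct Kafka connector. The broker, topic name, payload format, and the toy scoring function are placeholders, and the profile lookup and feature derivation steps are omitted:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

// Hypothetical stand-in for the deployed model: flags large amounts as likely fraud.
def score(amount: Double): Double = if (amount &gt; 1000.0) 0.9 else 0.1

val ssc = new StreamingContext(new SparkConf().setAppName(&quot;fraud-sketch&quot;), Seconds(5))

// Direct approach (no receivers); broker and topic names are placeholders.
val kafkaParams = Map(&quot;metadata.broker.list&quot; -&gt; &quot;broker1:9092&quot;)
val raw = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
  ssc, kafkaParams, Set(&quot;raw-trans&quot;))

raw
  .map { case (_, value) =&gt; value.toDouble }   // step 2: pretend each event is just an amount
  .map(amount =&gt; (amount, score(amount)))      // step 5: apply the model to the features
  .foreachRDD { rdd =&gt;
    // step 6: fraud-scored events would be published to the fraud alert topic and the
    // rest to the enriched topic; shown here as a simple printout
    rdd.filter(_._2 &gt; 0.5).take(5).foreach(println)
  }

ssc.start()
ssc.awaitTermination()
&lt;/code&gt;&lt;/pre&gt;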
&lt;h2&gt;Storage of Credit Card Events&lt;/h2&gt;
&lt;p&gt;Messages are not deleted from topics when read, and topics can have multiple different consumers; this allows the same messages to be processed by different consumers for different purposes.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/spark-fraud-9i-1605742927931.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In this solution, MapR Event Store consumers read and store all raw events, enriched events, and alerts to MapR XD for future analysis, model training, and updating. MapR Event Store consumers also read enriched events and alerts to update the card holder features in MapR Database. Alert events are used to update dashboards in real time as well.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/spark-fraud-9j-1605742935016.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Rapid Reads and Writes with MapR Database&lt;/h2&gt;
&lt;p&gt;With MapR Database (HBase API), a table is automatically partitioned across a cluster by key range, and each server is the source for a subset of a table. Grouping the data by key range provides for really fast reads and writes by row key.&lt;/p&gt;
&lt;p&gt;Also, with MapR Database, each partitioned subset or region of a table has a write and read cache. Recently read or written data and cached column families are available in memory; all of this provides for really fast reads and writes.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/spark-fraud-9k-1605742944012.jpg&quot; alt=&quot;fraud detection with Hadoop, NoSQL, streaming&quot;&gt;&lt;/p&gt;
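&lt;p&gt;Since MapR Database exposes the HBase API, a point read of a card holder profile by row key looks like standard HBase client code. A small sketch, with an illustrative table path and column names:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.{Get, HTable}
import org.apache.hadoop.hbase.util.Bytes

// Illustrative table path and column names
val table = new HTable(HBaseConfiguration.create(), &quot;/apps/cardholder_profiles&quot;)

// Point read by row key (the account number); only the region owning that key is touched
val result = table.get(new Get(Bytes.toBytes(&quot;account-12345&quot;)))
val avgAmount = Bytes.toDouble(result.getValue(Bytes.toBytes(&quot;stats&quot;), Bytes.toBytes(&quot;avg_amount&quot;)))
table.close()
&lt;/code&gt;&lt;/pre&gt;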
&lt;p&gt;All of the components of the use case architecture we just discussed can run on the same cluster with the MapR Data Platform. There are several advantages of having MapR Event Store on the same cluster as all the other components. For example, maintaining only one cluster means less infrastructure to provision, manage, and monitor. Likewise, having producers and consumers on the same cluster means fewer delays related to copying and moving data between clusters, and between applications.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/spark-fraud-9l-1605742956048.jpg&quot; alt=&quot;fraud detection software stack&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;In this blog post, you learned how the MapR Data Platform integrates Hadoop and Spark with real-time database capabilities, global event streaming, and scalable enterprise storage.&lt;/p&gt;
&lt;p&gt;References and More Information:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://learn.ezmeral.software.hpe.com/&quot;&gt;Free Online training&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a target=&apos;\_blank&apos;  href=&apos;http://spark.apache.org/docs/latest/streaming-programming-guide.html&apos;&gt;Apache Spark Streaming Programming Guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Fraud Analytics Using Descriptive, Predictive, and Social Network Techniques: A Guide to Data Science for Fraud Detection Book, by Wouter Verbeke; Veronique Van Vlasselaer; Bart Baesens&lt;/li&gt;
&lt;li&gt;Learning Spark Book, By Holden Karau, Andy Konwinski, Patrick Wendell, Matei Zaharia&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Artificial Intelligence and Machine Learning: What Are They and Why Are They Important?]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may…]]></description><link>https://developer.hpe.com/artificial-intelligence-and-machine-learning-what-are-they-and-why-are-t/</link><guid isPermaLink="false">https://developer.hpe.com/artificial-intelligence-and-machine-learning-what-are-they-and-why-are-t/</guid><pubDate>Thu, 12 Nov 2020 08:03:23 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Saira Kennedy&quot;,
&quot;publish&quot;: &quot;2018-09-21T07:00:00.000Z&quot;,
&quot;tags&quot;: &quot;machine-learning&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;Editor&apos;s Note: This post is based on the MapR Academy course, &lt;a href=&quot;https://learn.ezmeral.software.hpe.com/bus-introduction-to-artificial-intelligence-and-machine-learning&quot;&gt;&lt;em&gt;Introduction to Artificial Intelligence and Machine Learning&lt;/em&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;div style=&quot;padding:75% 0 0 0;position:relative;&quot;&gt;&lt;iframe src=&quot;https://player.vimeo.com/video/279683304?title=0&amp;byline=0&amp;portrait=0&quot; style=&quot;position:absolute;top:0;left:0;width:100%;height:100%;&quot; frameborder=&quot;0&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;&lt;/div&gt;
&lt;p&gt;This post will give you a basic background on artificial intelligence and machine learning. It&apos;s a good place to gain an intro-level, working understanding of the categories, how they fit together, and their differences.&lt;/p&gt;
&lt;p&gt;In this post, we begin by defining the differences between artificial intelligence and machine learning and what these terms mean. In future posts, we will discuss the different methods of machine learning and some of the most common algorithms available for your projects. After that, you&apos;ll learn about real-world use cases, utilizing this technology, and the unique value MapR solutions provide for machine learning endeavors.&lt;/p&gt;
&lt;h2&gt;Where do AI and ML fit in data science?&lt;/h2&gt;
&lt;p&gt;Before we get into the finer details of artificial intelligence and machine learning, let&apos;s see how it fits in the larger world of data science. Because this field is rapidly changing, some people may be confused or disagree about the overall landscape and terms being used in the industry, so let&apos;s clarify how we will be defining them in this blog.&lt;/p&gt;
&lt;p&gt;Think of a series of Russian nesting dolls, but now imagine them as futuristic robot dolls, like the ones pictured below. With this analogy, we can have multiple robot dolls nested within each level.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image6-1605168354596.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;With these nesting robot dolls, version 2.0, the largest doll represents the entire field of data science. The second doll represents artificial intelligence, and the next doll represents machine learning. A fourth doll, for deep learning, can also be nested within the machine learning doll, but we won&apos;t be going into much depth on that topic in this post.&lt;/p&gt;
&lt;h2&gt;What is Data Science?&lt;/h2&gt;
&lt;p&gt;Both artificial intelligence and machine learning nest under the largest doll of data science, whose purpose is to extract insights from data. Data science analyzes large amounts of data to deliver value and give businesses a competitive edge across all industries. As an example, retail businesses analyze buyer habits to better target recommendations and promotions to their customers.&lt;/p&gt;
&lt;p&gt;In the growing world of big data, it is important to have an effective data science strategy to help make informed business decisions. All of these fields have become more prominent as attempts to meet the growing demand for more efficient ways to extract value from data at scale have increased.&lt;/p&gt;
&lt;p&gt;Using artificial intelligence to accomplish these goals is a natural outgrowth of the big data movement.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image13-1605168362957.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;What is Artificial Intelligence?&lt;/h2&gt;
&lt;p&gt;Artificial intelligence describes a machine that is capable of imitating and performing intelligent human behavior. Some of these tasks could include problem-solving and decision-making or specific activities requiring acute perception, recognition, or translation abilities.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image11-1605168371076.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Remember from the robot nesting doll example that machine learning is nested within artificial intelligence; therefore, all machine learning counts as AI, but not all AI counts as machine learning. Other robot dolls within classic AI, without machine learning, would include expert systems using symbolic artificial intelligence and AI Planning.&lt;/p&gt;
&lt;h2&gt;Symbolic Artificial Intelligence&lt;/h2&gt;
&lt;p&gt;Symbolic artificial intelligence is one of the first approaches to AI, which is based on the assertion that human intelligence can be achieved through the manipulation of symbols. This is the basis for physical symbol systems, also called formal systems, which center around three basic concepts that follow human thinking abilities:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Symbols, like the plus sign as the physical joining of two perpendicular lines, are first encoded in our brains.&lt;/li&gt;
&lt;li&gt;Thoughts are the structures, like the plus sign means to add things together.&lt;/li&gt;
&lt;li&gt;The manipulation process is the act of thinking, or applying the symbol and its structure together, like when we use the plus sign in a mathematical equation for one plus two equals three.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image15-1605168382441.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;To understand this concept more clearly, let&apos;s take a look at a couple of specific examples. Physical Symbol System examples include algebra, formal logic, and even chess.&lt;/p&gt;
&lt;h2&gt;Physical Symbol Example:  Algebra&lt;/h2&gt;
&lt;p&gt;In algebra, the numbers and mathematical operators of plus, x, equals, et cetera, are the symbols. The equations and formulas are the expressions, and the calculation is the manipulated expression response.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image1-1605168391109.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Physical Symbol System Example:  Formal Logic&lt;/h2&gt;
&lt;p&gt;With formal logic problems, words like &quot;if,&quot; &quot;or,&quot; and &quot;not,&quot; are the symbols. The structures are true or false statements, and the manipulation process is the application of the rules of logical deduction, which produce the final expression. So, we could say &quot;If a primarily healthy adult has a fever, then they may have the flu.&quot;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image5-1605168398975.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Physical Symbol System Example:  Chess&lt;/h2&gt;
&lt;p&gt;And in games, such as chess, the defined number of pieces are the symbols, and the legal chess moves are the structures. The manipulated expressions are the resulting positions of the pieces on the board after each move.&lt;/p&gt;
&lt;p&gt;This AI approach states that machines are capable of mimicking this behavior. Though interest in this approach has faded over time, it led to the development of expert systems, which are widely considered to be one of the first successful forms of artificial intelligence.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image10-1605168407973.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Expert Systems&lt;/h2&gt;
&lt;p&gt;An expert system is a computer system that is designed to solve problems by imitating the decision-making abilities of a human expert. This system uses two subsystems: a knowledge base and an inference engine.&lt;/p&gt;
&lt;p&gt;Input data is presented to an expert system for training. This data is then reasoned through production, or If-Then, rules. Together, the data and reasoning production rules create the knowledge base of an expert system.&lt;/p&gt;
&lt;p&gt;The inference engine applies the rules to the data and facts in the knowledge base, and then deduces new facts from it.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image2-1605168415913.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
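&lt;p&gt;To make the split between the knowledge base and the inference engine concrete, here is a tiny toy sketch (not taken from any real expert system shell) of facts, If-Then production rules, and a forward-chaining inference loop:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// Toy sketch: facts plus If-Then production rules form the knowledge base,
// and the recursive loop plays the role of the inference engine.
case class Rule(ifAll: Set[String], thenFact: String)

def infer(facts: Set[String], rules: Seq[Rule]): Set[String] = {
  val derived = rules.collect {
    case r if r.ifAll.subsetOf(facts) =&gt; r.thenFact
  }.toSet
  if (derived.subsetOf(facts)) facts else infer(facts ++ derived, rules)
}

val rules = Seq(Rule(Set(&quot;healthy adult&quot;, &quot;has fever&quot;), &quot;may have the flu&quot;))
println(infer(Set(&quot;healthy adult&quot;, &quot;has fever&quot;), rules))
// Set(healthy adult, has fever, may have the flu)
&lt;/code&gt;&lt;/pre&gt;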
&lt;h2&gt;AI Planning&lt;/h2&gt;
&lt;p&gt;Automated planning and scheduling, also known as AI Planning, is another branch of classic AI. AI Planning can be done in known environments, and it describes a system that coordinates strategies or action sequences from an initial state, in order to achieve a specified goal state. The actions may be executed by autonomous robots, intelligent agents, or unmanned vehicles, or a combination of them.&lt;/p&gt;
&lt;p&gt;This field has such a wide variety of project scopes with varying complexity that the level of programming effort and the human resources required for AI Planning became too much for most organizations to support.&lt;/p&gt;
&lt;p&gt;Today, machine learning has taken over this field, as it offers a much more agile approach.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image3-1605168423145.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Artificial Intelligence Examples&lt;/h2&gt;
&lt;p&gt;As we now realize, artificial intelligence varies greatly in its potential complexity.&lt;/p&gt;
&lt;p&gt;It includes both piles of if-then statements, as with the simple rule-based, expert systems used in classic AI, along with more complex statistical models that use learning algorithms to generate predictions.&lt;/p&gt;
&lt;p&gt;Then, there is also the Hollywood version of AI: super-fancy computer systems, specialized robots, and advanced androids.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image7-1605168431287.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;It may all seem a bit campy, but every day we get closer and closer to this type of reality. Mainly, this is because we are now teaching machines how to learn, and grow, on their own.&lt;/p&gt;
&lt;h2&gt;What is Machine Learning?&lt;/h2&gt;
&lt;p&gt;Now that we know about artificial intelligence, how about machine learning? This is where AI really starts to get interesting.&lt;/p&gt;
&lt;p&gt;Machine learning describes machines that are taught to learn and make decisions by examining large amounts of input data. It makes calculated suggestions and/or predictions based on analyzing this information and performs tasks that are considered to require human intelligence. This includes activities like speech recognition, translation, visual perception, and more.&lt;/p&gt;
&lt;p&gt;The field of machine learning also encompasses the area of deep learning. The key difference between machine learning and artificial intelligence is the term &quot;learning.&quot;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image9-1605168439805.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Why is Machine Learning Important?&lt;/h2&gt;
&lt;p&gt;Machines learn and provide intelligent insights through a sophisticated use of learning algorithms.&lt;/p&gt;
&lt;p&gt;To provide business value, the machine is trained to learn patterns from data and then can proceed autonomously on new and changing data. This creates a dynamic feedback loop, which allows it to efficiently generate more models to gain further insights, even more accurately, without requiring additional resources or human interaction.&lt;/p&gt;
&lt;p&gt;With continuous advancement in this field, machines are becoming increasingly self-healing, self-organizing, and self-architecting, seamlessly producing greater value for businesses.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image12-1605168447402.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;What is Deep Learning?&lt;/h2&gt;
&lt;p&gt;Also known as artificial neural networks, deep learning is one of the most talked about sub-areas of machine learning.&lt;/p&gt;
&lt;p&gt;Deep learning performs machine learning in a hierarchy of layers, where the output of decisions from one layer feeds into the next layer. This model is loosely patterned after the brain&apos;s neural networks and has been setting new records of accuracy when applied to sound and image recognition.&lt;/p&gt;
&lt;p&gt;The term &quot;deep&quot; refers to the number of layers in a network; some networks go deeper than others by using many layers rather than just one.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/image4-1605168455124.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Where do we go from here?&lt;/h2&gt;
&lt;p&gt;Next in this series, we&apos;ll discuss the different learning methods used in machine learning, such as supervised, unsupervised, and semi-supervised types, along with some of the most common algorithms available for your projects.&lt;/p&gt;
&lt;p&gt;For more information on this course, please visit: &lt;a href=&quot;https://learn.ezmeral.software.hpe.com/bus-introduction-to-artificial-intelligence-and-machine-learning&quot;&gt;&lt;em&gt;Introduction to Artificial Intelligence and Machine Learning&lt;/em&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Performance Tuning of an Apache Kafka/Spark Streaming System - Telecom Case Study]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may…]]></description><link>https://developer.hpe.com/performance-tuning-of-an-apache-kafkaspark-streaming-system-telecom-case/</link><guid isPermaLink="false">https://developer.hpe.com/performance-tuning-of-an-apache-kafkaspark-streaming-system-telecom-case/</guid><pubDate>Thu, 12 Nov 2020 07:48:12 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Mathieu Dumoulin&quot;,
&quot;publish&quot;: &quot;2017-05-31T12:00:00.000&quot;,
&quot;tags&quot;: &quot;apache-spark&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;h2&gt;&lt;em&gt;Real-world case study in the telecom industry&lt;/em&gt;&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;/blog/MPxYKrwlr5SZQGoLnZ2X/performance-tuning-of-an-apache-kafkaspark-streaming-system&quot;&gt;In a previous post&lt;/a&gt;, I pointed out how we were successfully able to accelerate an Apache Kafka/Spark Streaming/Apache Ignite application and turn a development prototype into a useful, stable streaming application – one that actually exceeded the performance goals set for the application. In this post, I’ll cover how we were able to tune a Kafka/Spark Streaming system and run it stably, without backing up under maximum production load.&lt;/p&gt;
&lt;p&gt;Many of the lessons learned during this project would also apply to a similar system implemented using the MapR Data Platform. However, as we’ll explain later, a lot of the issues could have been avoided entirely, or at least greatly mitigated by using a converged platform instead of a multi-cluster approach.&lt;/p&gt;
&lt;p&gt;As of this writing, the MapR Data Platform is the only production-ready implementation of such a platform.&lt;/p&gt;
&lt;h2&gt;Goal of the System&lt;/h2&gt;
&lt;p&gt;The Kafka/Spark Streaming system aims to provide better customer support by giving the operator’s support staff always up-to-date call quality information for all of its mobile customers.&lt;/p&gt;
&lt;p&gt;Mobile customers, while making calls and using data, connect to the operator’s infrastructure and generate logs in many different systems. Three specific logs were identified that, if correlated with each other, give visibility into the actual quality of service experienced by each individual customer. The three logs were selected because they can be correlated through a simple relational database-like join operation.&lt;/p&gt;
&lt;p&gt;To improve customer support, the call quality information needs to be kept updated in near real time; otherwise, it has no value. This has led, down the road, to building a streaming architecture rather than a batch job.
The data volume at production load reaches several GB/s, generated by several million mobile customers, 24 hours a day, 365 days a year. Performance and stability at that scale is required for the system to reach production.&lt;/p&gt;
&lt;h2&gt;Project SLA Goals&lt;/h2&gt;
&lt;p&gt;The application has clear performance requirements based on the known worst-case throughput of the input data. This log data is generated by real-world use of the services of the company. If the application is to be useful at all, as a real-time streaming application, it must be able to handle this data without getting behind.&lt;/p&gt;
&lt;p&gt;In terms of numbers, the goal is to handle up to 3GB/min of input data. For this large mobile operator, such throughput represents about 150,000-200,000 events/second. Ordinarily, the throughput is about half of that value, or 1.5GB/min and 60,000-80,000 events/second.&lt;/p&gt;
&lt;h2&gt;Data Sources&lt;/h2&gt;
&lt;p&gt;The raw data sources are the logs of three remote systems, labeled A, B, and C here, where the log from A comprises about 84-85% of the entries, the log from B about 1-2%, and the log from C about 14-15%. The fact that the data is unbalanced is one of the (many) sources of difficulty in this application.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/picture1-1605167436242.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The raw data is ingested into the system by a single Kafka producer into Kafka running on 6 servers. The producer reads the various logs and adds each log&apos;s records into its own topic. As there are three logs, there are three Kafka topics.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/picture2-1605167445109.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The data is consumed by a Spark Streaming application, which picks up each topic, does a simple filter to cut out unnecessary fields, a map operation to transform the data, and then a foreachRDD operation (each micro-batch generates an RDD in Spark Streaming) that saves the data to Ignite and to HDFS as Hive tables for backup.&lt;/p&gt;
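&lt;p&gt;In outline, and with the parsing and the Ignite/Hive writes reduced to placeholders, the streaming job described above does roughly the following (broker and topic names are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.rdd.RDD
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

// Placeholders for the real sinks described above (Ignite shared cache, Hive ORC backup).
def saveToIgnite(rdd: RDD[Array[String]]): Unit = ()
def saveToHive(rdd: RDD[Array[String]]): Unit = ()

val ssc = new StreamingContext(new SparkConf().setAppName(&quot;telecom-etl-sketch&quot;), Seconds(30))
val kafkaParams = Map(&quot;metadata.broker.list&quot; -&gt; &quot;broker1:9092&quot;)   // placeholder broker

// One direct stream per topic; the topic name here is a placeholder for log A, B or C.
val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
  ssc, kafkaParams, Set(&quot;topic-a&quot;))

stream
  .map { case (_, line) =&gt; line.split(&quot;\\|&quot;) }   // simple parse into fields
  .filter(_.length &gt; 10)                          // cut out records missing required fields
  .map(_.take(10))                                // keep only the fields that are needed
  .foreachRDD { rdd =&gt;
    saveToIgnite(rdd)   // in-memory table read later by the hourly join job
    saveToHive(rdd)     // HDFS backup as Hive tables
  }

ssc.start()
ssc.awaitTermination()
&lt;/code&gt;&lt;/pre&gt;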
&lt;p&gt;A second batch Spark application runs once per hour on the data stored in-memory in Ignite to join the records from the three separate logs into a single table. The batch job has a maximum data size of about 100GB. The cluster CPU resources should be sufficient to process this amount of data in one hour or less.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/picture3-1605167453170.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Ignite stores 3 hours’ worth of data at all times, to account for calls that begin in one hour and end in the hour being processed, as well as calls that begin in the target hour and end in the next one. The telecom operator judges that calls so long they aren’t captured in this scheme can be ignored, as they are very rare.&lt;/p&gt;
&lt;p&gt;It’s worth noting that a better all-streaming architecture could have avoided the whole issue of the intermediate representation in the first place. It’s an illustrative, real-world case where more time and thought spent upfront could have made the entire project finish faster than rushing headlong into coding the first working solution that came to mind.&lt;/p&gt;
&lt;h2&gt;System Hardware and Software: At the Bleeding Edge of Open Source Big Data&lt;/h2&gt;
&lt;p&gt;The cluster has a lot of CPU and memory resources. It has 12 nodes of enterprise-grade servers, each equipped with two E5 Xeon CPUs (16 physical cores), 256GB memory, and eight 6TB spinning HDD (2 for OS in RAID 1). Each server has one 10GbE network interface.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/picture4-1605167461409.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The technology stack selected for this project is centered around Kafka 0.8 for streaming the data into the system, &lt;a href=&quot;https://spark.apache.org/&quot;&gt;Apache Spark 1.6&lt;/a&gt; for the ETL operations (essentially a bit of filtering and transformation of the input, then a join), and Apache Ignite 1.6 as an in-memory shared cache that makes it easy to connect the streaming input part of the application with the part that joins the data. Backup is done to HDFS as Hive ORC tables, which also serve as a just-in-case backup for Ignite and cover a future need for other analytics use cases (there were none at the time).&lt;/p&gt;
&lt;p&gt;The Spark applications are both coded in Scala 2.10 and &lt;a href=&quot;http://spark.apache.org/docs/1.6.2/streaming-kafka-integration.html#approach-2-direct-approach-no-receivers&quot;&gt;Kafka’s direct approach&lt;/a&gt; (no receivers). Apache Ignite has a really nice Scala API with a magic &lt;a href=&quot;https://ignite.apache.org/features/igniterdd.html&quot;&gt;IgniteRDD&lt;/a&gt; that can allow applications to share in-memory data, a key feature for this system to reduce coding complexity.&lt;/p&gt;
&lt;p&gt;The cluster is running Apache Hadoop&apos;s HDFS as a distributed storage layer, with resources managed by Mesos 0.28. Finally, HBase is used as the ultimate data store for the final joined data. It will be queried by other systems outside the scope of this project. The cluster design with all relevant services is shown in the table above.&lt;/p&gt;
&lt;h2&gt;Performance Issues&lt;/h2&gt;
&lt;p&gt;The original system had several issues:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Performance&lt;/li&gt;
&lt;/ol&gt;
&lt;ul&gt;
&lt;li&gt;First Spark Streaming job is not stable&lt;/li&gt;
&lt;li&gt;Second Spark batch job can’t process 1 hour of data before the next hour of data arrives&lt;/li&gt;
&lt;/ul&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Stability: The application crashes under load&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;A Spark Streaming application is said to be stable if the processing time of each micro-batch is less than or equal to that micro-batch time. In this case, the application processes each 30 seconds of data in as much as 6 minutes. We need a 12x speedup.&lt;/p&gt;
&lt;p&gt;Second, there is a batch process to join data one hour at a time that was targeted to run in 30 minutes but was taking over 2 hours to complete.&lt;/p&gt;
&lt;p&gt;Third, the application was randomly crashing after running for a few hours. Stability of such a complex, fully open-source stack should never be assumed. Rather, it is the result of a constant effort by the team to better understand the system. We can expect that there will still be a lot of learning required to keep the system up and running once it is moved to production as well.&lt;/p&gt;
&lt;h2&gt;Performance Tuning&lt;/h2&gt;
&lt;p&gt;In my opinion, all performance and stability issues stem from the terrible idea of management to push a very good POC project developed on AWS into production on some on-premises hardware. It’s hard to believe, but they fully expected the POC code to run as-is on a production system it was never tested on.&lt;/p&gt;
&lt;p&gt;Regardless, the task was set, and we had only a few short days to identify what could be done and get the system up to production speed. Final QA testing of the system was barely 1 week away, and management wasn’t in the mood to accept delays. We got to work...&lt;/p&gt;
&lt;h2&gt;First target: Improve Spark Streaming Performance&lt;/h2&gt;
&lt;p&gt;At maximum load, the Spark Streaming application is taking between 4.5 to 6 minutes for each micro-batch of 30 seconds. We need to find 9-12x speedup worth of improvements.&lt;/p&gt;
&lt;p&gt;Spark has a lot of moving parts, but it will always be true that fast algorithms beat tweaking the configuration. In this case, there is nothing to get from the code; it’s all very parallelizable with no obvious issues, like doing two computations separately when they could be combined or any O(n^2) loop-in-another loop issues. The job is nothing more than a filter and a map.&lt;/p&gt;
&lt;p&gt;What we need to determine, then, is whether the job is indeed being processed in parallel to make the most of all those CPU cores. In a Spark Streaming job, Kafka partitions map 1 to 1 with Spark partitions.&lt;/p&gt;
&lt;h2&gt;Increase Parallelism: Increase Number of Partitions in Kafka&lt;/h2&gt;
&lt;p&gt;A quick check of the Spark UI shows 36 partitions. As each server has 6 physical disks, I assume the partitioning was chosen by the formula nodes * physical disks per node = partition count per topic. Quickly checking online reveals that partitioning is quite a bit more complex than that, and the formula used to decide on the partition count isn’t from any known Kafka best practices guide. (Ref: &lt;a href=&quot;https://www.confluent.io/blog/how-to-choose-the-number-of-topicspartitions-in-a-kafka-cluster/&quot;&gt;https://www.confluent.io/blog/how-to-choose-the-number-of-topicspartitions-in-a-kafka-cluster/&lt;/a&gt; )&lt;/p&gt;
&lt;p&gt;The input data was unbalanced and most of the application processing time was spent processing Topic 1 (with 85% of the throughput). Kafka partitions are matched 1:1 with the number of partitions in the input RDD, leading to only 36 partitions, meaning we can only keep 36 cores busy on this task. To increase the parallelism, we need to increase the number of partitions. What we did was split topic 1 into 12 topics each with 6 partitions for a total of 72 partitions. The way it was done was a simple modification to the producer to evenly divide the data from the first log into 12 topics instead of just one. Zero code needed to be modified on the consumer side.&lt;/p&gt;
&lt;p&gt;We also right-sized the number of partitions for the two other topics, in proportion to their relative importance in the input data, so we set topic 2 to two partitions and topic 3 to eight partitions.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/completed-jobs-1605256886579.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Running more tasks in parallel. Before tuning, each stage always had 36 partitions!&lt;/em&gt;&lt;/p&gt;
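&lt;p&gt;The producer-side change was small. A hedged sketch of the idea, with illustrative topic names and the modern producer API, spreading records from the first log round-robin over 12 topics:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

// Illustrative only: spread records from the first log evenly over 12 topics so the
// streaming job sees 72 input partitions (12 topics x 6 partitions) instead of 36.
val props = new Properties()
props.put(&quot;bootstrap.servers&quot;, &quot;broker1:9092&quot;)   // placeholder broker list
props.put(&quot;key.serializer&quot;, &quot;org.apache.kafka.common.serialization.StringSerializer&quot;)
props.put(&quot;value.serializer&quot;, &quot;org.apache.kafka.common.serialization.StringSerializer&quot;)
val producer = new KafkaProducer[String, String](props)

var counter = 0L
def sendLogALine(line: String): Unit = {
  val topic = s&quot;log-a-${counter % 12}&quot;           // round-robin over log-a-0 .. log-a-11
  producer.send(new ProducerRecord[String, String](topic, line))
  counter += 1
}
&lt;/code&gt;&lt;/pre&gt;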
&lt;h2&gt;Fix RPC Timeout Exceptions&lt;/h2&gt;
&lt;p&gt;When looking at the application logs, we could see a lot of RPC timeout exceptions. We do a web search and find what we believe is the relevant JIRA (&lt;a href=&quot;https://issues.apache.org/jira/browse/SPARK-14140&quot;&gt;SPARK-14140 in JIRA&lt;/a&gt;). The recommended fix is to increase the spark.executor.heartbeatInterval from 10s (default) to 20s.&lt;/p&gt;
&lt;p&gt;I think this could be caused by nodes getting busy from disk or CPU spikes because of Kafka, Ignite, or garbage collector pauses. Since Spark runs on all nodes, the issue was random (see the cluster services layout table in the first section).&lt;/p&gt;
&lt;p&gt;The configuration change fixed this issue completely. We haven’t seen it happen since. (Yay!)&lt;/p&gt;
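&lt;p&gt;For reference, it is a one-line change. Here is a sketch of the same setting applied on a SparkConf; it can equally live in spark-defaults.conf or be passed with --conf at submit time:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.spark.SparkConf

// Raise the executor heartbeat interval from the 10s default to 20s.
// Equivalent at submit time: spark-submit --conf spark.executor.heartbeatInterval=20s ...
val conf = new SparkConf().set(&quot;spark.executor.heartbeatInterval&quot;, &quot;20s&quot;)
&lt;/code&gt;&lt;/pre&gt;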
&lt;h2&gt;Increase Driver and Executor Memory&lt;/h2&gt;
&lt;p&gt;Out of memory issues and random crashes of the application were solved by increasing the memory from 20g per executor to 40g per executor as well as 40g for the driver. Happily, the machines in the production cluster were heavily provisioned with memory. This is a good practice with a new application, since you don’t know how much you will need at first.&lt;/p&gt;
&lt;p&gt;The issue was difficult to debug with precision and reliable information, since the Spark UI reports very little memory consumption. In practice, as this setting is easy to change, we empirically settled on 40g being the smallest memory size for the application to run stably.&lt;/p&gt;
&lt;h2&gt;Right Size the Executors&lt;/h2&gt;
&lt;p&gt;The original application was running only 3 executors with 72 total cores. We configured the application to run with 80 cores with a maximum of 10 cores per executor, for a total of 8 executors. Note that with 16 real cores per node on a 10 node cluster, we’re leaving plenty of resources for Kafka brokers, Ignite, and HDFS/NN to run on.&lt;/p&gt;
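&lt;p&gt;A hedged sketch of how the memory and executor sizing above might be expressed as standalone-mode properties; note that driver memory in particular must be set before the driver JVM starts, so in practice it goes in spark-defaults.conf or on the command line:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.spark.SparkConf

// Sizing from the text: 40g driver, 40g per executor, 10 cores per executor, 80 cores total.
// Equivalent submit-time flags: spark-submit --driver-memory 40g --executor-memory 40g
//                                            --executor-cores 10 --total-executor-cores 80 ...
val conf = new SparkConf()
  .set(&quot;spark.executor.memory&quot;, &quot;40g&quot;)
  .set(&quot;spark.executor.cores&quot;, &quot;10&quot;)
  .set(&quot;spark.cores.max&quot;, &quot;80&quot;)
  .set(&quot;spark.driver.memory&quot;, &quot;40g&quot;)  // only takes effect if set before the driver starts
&lt;/code&gt;&lt;/pre&gt;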
&lt;h2&gt;Increase the Batch Window from 30s to 1m&lt;/h2&gt;
&lt;p&gt;The data is pushed into Kafka by the producer as batches every 30s, as it is gathered by FTP batches from the remote systems. Such an arrangement is common in telecom applications due to a need to deal with equipment and systems from a bewildering range of manufacturers, technology, and age.&lt;/p&gt;
&lt;p&gt;This meant that the input stream was very spiky, when looking at the processing time from the Spark UI’s streaming tab.&lt;/p&gt;
&lt;p&gt;Increasing the window to 1m allowed us to smooth out the input and gave the system a chance to process the data in 1 minute or less and still be stable.&lt;/p&gt;
&lt;p&gt;To make sure of it, the team used test data that simulated the known worst-case data, and with the new settings the Spark Streaming job was now indeed stable. We also tried it on real production data, and everything looked good. Win!&lt;/p&gt;
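&lt;p&gt;The change itself is a single line where the StreamingContext is created (a sketch; the rest of the job is unchanged):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Batch window widened from 30 seconds to 1 minute to smooth out the spiky FTP-batch input
val ssc = new StreamingContext(new SparkConf().setAppName(&quot;telecom-etl&quot;), Seconds(60))
&lt;/code&gt;&lt;/pre&gt;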
&lt;h2&gt;Drop Requirement to Save Hive Tables to HDFS&lt;/h2&gt;
&lt;p&gt;Discussion with the project managers revealed that Hive was not actually part of the requirements for the streaming application! Mainly, this is because the other analytics, mostly SQL requests, could be serviced from the data in HBase.&lt;/p&gt;
&lt;p&gt;Considering the goal of the system, the worst-case scenario for missing data is that a customer&apos;s call quality information cannot be found... which is already the case. In other words, the consequence of data loss is not negative; rather, the consequence of gaining data is additional insights. If the great majority of the data is processed and stored, the business goals can be reached.&lt;/p&gt;
&lt;p&gt;There wasn’t much point in saving the data to Hive mid-flight for increased fault-tolerance either, as once the data is in Ignite, it’s safe even if the Spark application crashes. This made Ignite an even more critical part of the application, despite it having some issues of its own. It was a difficult decision that we made entirely due to the advanced stage of the project. As we’ll explain in more detail in the conclusion, the architecture itself was problematic, and it’s not time to play with architecture when you’re a week or two from production.&lt;/p&gt;
&lt;h2&gt;Spark Performance Tuning Results&lt;/h2&gt;
&lt;p&gt;The Spark Streaming application finally became stable, with an optimized runtime of 30-35s.&lt;/p&gt;
&lt;p&gt;As it turns out, cutting out Hive also sped up the second Spark application that joins the data together, which now ran in 35m, putting both applications well within the project requirements.&lt;/p&gt;
&lt;p&gt;With the improvements described in the next sections, the final runtime of the Spark Streaming job came down to the low-20-second range, for a final speedup of a bit over 12 times.&lt;/p&gt;
&lt;h2&gt;Second Target: Improve System Stability&lt;/h2&gt;
&lt;p&gt;We had to work quite hard on stability. Several strategies were required, as we will explain below.&lt;/p&gt;
&lt;h2&gt;Make the Spark Streaming Application Stable&lt;/h2&gt;
&lt;p&gt;The work we did to fix the performance had a direct impact on system stability. If both applications are stable themselves and running on right-sized resources, then the system has the best chance to be stable overall.&lt;/p&gt;
&lt;h2&gt;Remove Mesos and Use Spark Standalone&lt;/h2&gt;
&lt;p&gt;The initial choice of Mesos to manage resources was forward-looking, but ultimately we decided to drop it from the final production system. At the outset, the plan was to have Mesos manage all the applications. But the team could never get Kafka and Ignite to play nice with Mesos, so those services were running in standalone mode, leaving only Spark to be managed by Mesos. With more time, there is little doubt all the applications could have been properly configured to work with Mesos.&lt;/p&gt;
&lt;p&gt;Proposing to remove Mesos was a bit controversial, as Mesos is much more advanced and cool than Spark running in standalone mode.&lt;/p&gt;
&lt;p&gt;But the issue with Mesos was twofold:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Control over executor size and number was poor, a known issue (SPARK-5095) with Spark 1.6 and now fixed in Spark 2.X.&lt;/li&gt;
&lt;li&gt;Ignite and Kafka aren’t running on Mesos, only Spark is. Given the schedule pressure, the team had given up trying to get those two services running in Mesos.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Mesos can only allocate resources well if it controls all of them. In this system, Kafka and Ignite run outside of Mesos’ knowledge, so it will assign resources to the Spark applications incorrectly.&lt;/p&gt;
&lt;p&gt;In addition, it’s a single purpose cluster, so we can live with customizing the sizing of the resources for each application with a global view of the system’s resources. There is little need for dynamic resource allocations, scheduling queues, multi-tenancy, and other buzzwords.&lt;/p&gt;
&lt;h2&gt;Change the Ignite Memory Model&lt;/h2&gt;
&lt;p&gt;It is a known issue that when the heap controlled by the JVM gets very big (&gt; 32GB), the cost of garbage collection is quite large. We could indeed see this when the join application ran: in the stages with a 25GB shuffle, some tasks showed spikes in GC time from the 10-second range up to more than a minute.&lt;/p&gt;
&lt;p&gt;The initial configuration of Ignite was to run ONHEAP_TIERED, with 48GB worth of data cached on heap and overflow spilling to 12GB of off-heap memory. That setting was changed to the OFFHEAP_TIERED model. While slightly slower due to serialization cost, OFFHEAP_TIERED doesn&apos;t rely on the JVM’s garbage collection. It still runs in memory, so we estimated it would be a net gain.&lt;/p&gt;
&lt;p&gt;With this change, the run time for each batch dutifully came down by about five seconds, from 30 seconds to about 25 seconds. In addition, successive batches tended to have much more similar processing times, with a delta of 1-3 seconds, whereas previously they would vary by 5 to 10 seconds or more.&lt;/p&gt;
&lt;h2&gt;Update the Ignite JVM Settings&lt;/h2&gt;
&lt;p&gt;We followed the recommended JVM options as found in Ignite documentation’s performance tuning section (&lt;a href=&quot;http://apacheignite.gridgain.org/docs/jvm-and-system-tuning&quot;&gt;http://apacheignite.gridgain.org/docs/jvm-and-system-tuning&lt;/a&gt;).&lt;/p&gt;
&lt;h2&gt;Improve the Spark Code&lt;/h2&gt;
&lt;p&gt;Some parts of the code assumed reliability, like queries to Ignite, when in fact there was a possibility of the operations failing. These problems can be fixed in the code, which now handles exceptions more gracefully, though there is probably work left to do to increase its robustness. We can only find the remaining spots by letting the application run.&lt;/p&gt;
&lt;h2&gt;Reassign ZooKeeper to Nodes 10-12&lt;/h2&gt;
&lt;p&gt;Given the cluster is of medium size, it’s worth spreading the services as much as possible. We moved the ZooKeeper services from nodes 1-3 to nodes 10-12.&lt;/p&gt;
&lt;h2&gt;Final System Architecture&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/picture6-1605167479768.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Tuning this application took about 1 week of full-time work. The main information we used came from the Spark UI and the Spark logs, which are easily accessible from the Spark UI. The Jobs and Stages views, as well as the streaming UI, are really very useful.&lt;/p&gt;
&lt;h2&gt;Essential Takeaways&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Migrating a streaming application from a prototype on AWS to an on-site cluster requires time in the schedule for testing&lt;/li&gt;
&lt;li&gt;Not testing the AWS prototype with realistic data was a big mistake&lt;/li&gt;
&lt;li&gt;Including many “bleeding-edge” OSS components (Apache Ignite and Mesos) with expectations of very high reliability is unrealistic&lt;/li&gt;
&lt;li&gt;A better architecture design could have simplified the system tremendously&lt;/li&gt;
&lt;li&gt;Tuning a Kafka/Spark Streaming application requires a holistic understanding of the entire system; it’s not just about changing parameter values of Spark. It’s a combination of the data flow characteristics, the application goals and value to the customer, the hardware and services, the application code, and then playing with Spark parameters.&lt;/li&gt;
&lt;li&gt;The MapR Data Platform would have cut the development time, complexity, and cost for this project.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This project was a hell of a dive into the deep end of the pool for a telecom operator with very little experience with the open-source enterprise big data world. They should be applauded for their ambition and desire to take up such a challenge with the goal of benefiting their customers. But a better choice of platform and application architecture could have made their life a lot easier.&lt;/p&gt;
&lt;h2&gt;A Converged Platform is the Correct Approach&lt;/h2&gt;
&lt;p&gt;In fact, the requirements for this project show the real-world business need for a state-of-the-art converged platform with a fast distributed file system, a high performance key-value store for persistence and real-time access, and high performance streaming capabilities.&lt;/p&gt;
&lt;p&gt;A MapR-based solution would have been a lot easier to build and maintain. Since MapR Event Store is built in, there is one less cluster to manage (bye bye, Kafka brokers). The Spark application could run with the same code but without needing to rely on a speculative open-source project like Apache Ignite.&lt;/p&gt;
&lt;p&gt;Saving to MapR Database uses the same HBase API, so likely no code change there either, and you’re saving to a DB that’s built into the native-C MapR XD file system, so that’s going to be very fast as well. Finally, sharing resources is simplified by running only Spark on YARN or in standalone mode, while the platform deals with the resource requirements of MapR Event Store, MapR XD, and MapR Database with reliability and performance, backed by highly trained support engineers available 24/7 for every single part of this application.&lt;/p&gt;
&lt;p&gt;Given this system is heading into production for a telecom operator with 24/7 reliability expectation, I’d argue that built-in simplicity, performance, and support are pretty compelling and hopefully will be adopted by this customer for the next iteration of the system. (Stay tuned!)&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Kubernetes, Kafka Event Sourcing Architecture Patterns and Use Case Examples]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may…]]></description><link>https://developer.hpe.com/kubernetes-kafka-event-sourcing-architecture-patterns-and-use-case-examp/</link><guid isPermaLink="false">https://developer.hpe.com/kubernetes-kafka-event-sourcing-architecture-patterns-and-use-case-examp/</guid><pubDate>Wed, 11 Nov 2020 07:02:01 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Carol McDonald&quot;,
&quot;publish&quot;: &quot;2018-05-01T11:00:00.000&quot;,
&quot;tags&quot;: &quot;use-cases&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;With the rapidly changing business and technology landscape of today, developers, data scientists, and IT operations are working together to build intelligent  applications with new technologies and dynamic architectures because of the flexibility, speed of delivery, and maintainability that they make possible.  This post will go over the technologies that are facilitating evolutionary architectures: containers, Kubernetes, and the Kafka API. Then we will look at some Kafka event sourcing architecture patterns and use case examples.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/containers-1605078458080.png&quot; alt=&quot;Containers&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Containers Architecture&lt;/h2&gt;
&lt;p&gt;Containers simplify going from development to deployment, without having to worry about portability or reproducibility. Developers can package an application plus all its dependencies, libraries, and configuration files needed to execute the application into a container image.  A container is a runnable instance of an image. Container images can be pulled from a registry and deployed anywhere the container runtime is installed: your laptop, servers on-premises, or in the cloud.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/docker-host-1605078472533.png&quot; alt=&quot;Docker_Host&quot;&gt;&lt;/p&gt;
&lt;p&gt;image reference &lt;a href=&quot;https://docs.docker.com/engine/docker-overview/#docker-architecture&quot;&gt;https://docs.docker.com/engine/docker-overview/#docker-architecture&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Compared to virtual machines, containers have similar resources and isolation benefits, but are lighter in weight, because containers virtualize the operating system instead of the hardware. Containers are more portable and efficient, take up less space, use far fewer system resources, and can be spun up in seconds.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/vm-containers-1605078486231.png&quot; alt=&quot;Virtual Machines and Containers&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Kubernetes Architecture&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://kubernetes.io/&quot;&gt;Kubernetes&lt;/a&gt; provides a platform to configure, automate, and manage:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;intelligent and balanced scheduling of containers&lt;/li&gt;
&lt;li&gt;creation, deletion, and movement of containers&lt;/li&gt;
&lt;li&gt;easy scaling of containers&lt;/li&gt;
&lt;li&gt;monitoring and self-healing abilities&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;A Kubernetes cluster is composed of at least one master node, which manages the cluster, and multiple worker nodes, where containerized applications run using Pods.  A Pod is a logical grouping of one or more containers, which are scheduled together and share resources. Pods enable multiple containers to run on a host machine and share resources such as storage, networking, and container runtime information.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/clusters-1605078500093.png&quot; alt=&quot;Clusters&quot;&gt;&lt;/p&gt;
&lt;p&gt;The Master node manages the cluster in this way:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The API server parses the YAML configuration and stores the  configuration in the etcd key value store.&lt;/li&gt;
&lt;li&gt;The etcd stores and replicates the current configuration and run state of the cluster.&lt;/li&gt;
&lt;li&gt;The scheduler schedules pods on worker nodes.&lt;/li&gt;
&lt;li&gt;The controller manager manages the state of non-terminating control loops, such as pod replicas.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The  &lt;a href=&quot;https://martinfowler.com/articles/microservices.html&quot;&gt;microservice architectural style&lt;/a&gt; is an approach to developing an application as a suite of small independently deployable services built around specific business capabilities. A microservice approach is well aligned to containers and Kubernetes. You can gain modularity, extensive parallelism, and cost-effective scaling by deploying services across many nodes. Microservices modularity facilitates independent updates/deployments and helps to avoid single points of failure, which can help prevent large-scale outages.&lt;/p&gt;
&lt;p&gt;The MapR Data Fabric includes a natively integrated Kubernetes volume driver to provide persistent storage volumes for access to any data located on-premises, across clouds, and to the edge. Stateful applications can now be easily deployed in containers for production use cases, machine learning pipelines, and multi-tenant use cases.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/mapr-cdp-1605078512665.png&quot; alt=&quot;MapR CDP&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Event-Driven Microservices Architecture&lt;/h2&gt;
&lt;p&gt;Most business data is produced as a sequence of events, or an event stream: for example, web or mobile app interactions, sensor data, bank transactions, and medical devices all continuously generate events.  Microservices often have an event-driven architecture, using an append-only event stream, such as Kafka or MapR Event Streams (which provides a Kafka API).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/mapr-es-1605078524402.png&quot; alt=&quot;MapR Event Store&quot;&gt;&lt;/p&gt;
&lt;p&gt;With MapR Event Store (or Kafka), events are grouped into logical collections of events called &quot;topics.&quot; Topics are partitioned for parallel processing. You can think of a partitioned topic as an event log: new events are appended to the end and, like a queue, events are delivered in the order they are received.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/mapr-cluster-1605078538269.png&quot; alt=&quot;MapR Cluster&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/mapr-cluster-2-1605078550838.png&quot; alt=&quot;MapR Cluster 2&quot;&gt;&lt;/p&gt;
&lt;p&gt;Unlike a queue, events are not deleted after they are delivered; they remain on the partition, available to other consumers.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/events-1605078564306.png&quot; alt=&quot;Events&quot;&gt;&lt;/p&gt;
&lt;p&gt;Older messages are automatically deleted, based on the stream&apos;s time-to-live setting; if the setting is 0, they will never be deleted.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/messages-1605078583134.png&quot; alt=&quot;Messages&quot;&gt;&lt;/p&gt;
&lt;p&gt;Messages are not deleted from topics when read, and topics can have multiple different consumers; this allows processing of the same messages by different consumers for different purposes. Pipelining is also possible, where a consumer enriches an event and publishes it to another topic.&lt;/p&gt;
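&lt;p&gt;A pipelining step of that kind can be sketched in a few lines of Python using the open-source kafka-python client (shown purely for illustration; the broker address and topic names are made up):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(&quot;raw-events&quot;,                 # hypothetical input topic
                         bootstrap_servers=&quot;broker1:9092&quot;,
                         auto_offset_reset=&quot;earliest&quot;)
producer = KafkaProducer(bootstrap_servers=&quot;broker1:9092&quot;)

for message in consumer:
    enriched = message.value + b&quot; | enriched&quot;          # stand-in for real enrichment logic
    producer.send(&quot;enriched-events&quot;, enriched)         # publish to the next topic in the pipeline
&lt;/code&gt;&lt;/pre&gt;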
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/kafka-api-1605078598421.png&quot; alt=&quot;Kafka API&quot;&gt;&lt;/p&gt;
&lt;p&gt;MapR Event Store provides scalable high performance messaging, easily delivering millions of messages per second on modest hardware. The publish/subscribe Kafka API provides decoupled communications, making it easy to add new listeners or new publishers without disrupting existing processes.&lt;/p&gt;
&lt;p&gt;When you combine these messaging capabilities with the simple concept of microservices, you can greatly enhance the agility with which you build, deploy, and maintain complex data pipelines. Pipelines are constructed by simply chaining together multiple microservices, each of which listens for the arrival of some data, performs its designated task, and optionally publishes its own messages to a topic.&lt;/p&gt;
&lt;h2&gt;The Stream Is the System of Record&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://martinfowler.com/eaaDev/EventSourcing.html&quot;&gt;Event Sourcing&lt;/a&gt; is an architectural pattern in which the state of the application is determined by a sequence of events, each of which is recorded in an append-only event store or stream. As an example, imagine that each &quot;event&quot; is an incremental update to an entry in a database. In this case, the state of a particular entry is simply the accumulation of events pertaining to that entry. In the example below, the stream persists the queue of all deposit and withdrawal events, and the database table persists the current account balances.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/entry-database-1605078612009.png&quot; alt=&quot;Entry Database&quot;&gt;&lt;/p&gt;
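&lt;p&gt;As a minimal illustration of that idea in plain Python (the deposit and withdrawal events are made up, not taken from any real banking schema), the current balance of each account is simply the accumulation of its events:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Each event is an incremental update; state is the accumulation of events.
events = [
    {&quot;account&quot;: &quot;A&quot;, &quot;type&quot;: &quot;deposit&quot;,    &quot;amount&quot;: 100},
    {&quot;account&quot;: &quot;A&quot;, &quot;type&quot;: &quot;withdrawal&quot;, &quot;amount&quot;: 30},
    {&quot;account&quot;: &quot;B&quot;, &quot;type&quot;: &quot;deposit&quot;,    &quot;amount&quot;: 50},
]

balances = {}
for event in events:  # replaying the stream from the beginning rebuilds the table
    delta = event[&quot;amount&quot;] if event[&quot;type&quot;] == &quot;deposit&quot; else -event[&quot;amount&quot;]
    balances[event[&quot;account&quot;]] = balances.get(event[&quot;account&quot;], 0) + delta

print(balances)  # {&apos;A&apos;: 70, &apos;B&apos;: 50}
&lt;/code&gt;&lt;/pre&gt;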
&lt;p&gt;Which one of these, the stream or the database, makes a better system of record? The events in the stream can be used to reconstruct the current account balances in the database, but not the other way around. Database replication actually works by suppliers writing changes to a change log, and consumers applying the changes locally.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/replication-1605078622856.png&quot; alt=&quot;Replication&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Adding Microservices to a Bank Monolithic Application with Change Data Capture&lt;/h2&gt;
&lt;p&gt;Banks often have mainframe applications, which are expensive to run, difficult to update, and also difficult to completely replace.  Let&apos;s look at how we could incrementally add event-driven microservices to a monolithic bank application, which consists of payment transactions and batch jobs for fraud detection, statements, and promotion emails.&lt;/p&gt;
&lt;p&gt;In the design shown below, payment transactions from the monolithic database commit log are published to a stream, which is set to never throw data away.  The immutable event store (stream) becomes the system of record, with events processed by different data pipelines, based on the use case.  Event data pipelines funnel out to &lt;a href=&quot;https://martinfowler.com/bliki/PolyglotPersistence.html&quot;&gt;polyglot persistence&lt;/a&gt;, different data storage technologies, each one providing different materialized views: MapR Database HBase and MapR Database JSON document, graph, and search databases, so that microservices always have the most up-to-date view of their data in the most appropriate format.  Using a different model for reading than for writing is the &lt;a href=&quot;http://martinfowler.com/bliki/CQRS.html&quot;&gt;Command Query Responsibility Segregation&lt;/a&gt; pattern.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/pattern-1605078634993.png&quot; alt=&quot;Pattern&quot;&gt;&lt;/p&gt;
&lt;p&gt;The event store provides for rebuilding state by re-running the events in the stream — this is the &lt;a href=&quot;http://martinfowler.com/eaaDev/EventSourcing.html&quot;&gt;Event Sourcing&lt;/a&gt; pattern. Events can be reprocessed to create a new index, cache, or view of the data.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/reprocessing-events-1605078645854.png&quot; alt=&quot;Reprocessing Events&quot;&gt;&lt;/p&gt;
&lt;p&gt;The consumer simply reads from the oldest message to the latest to create a new view of the data.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/create-new-view-1605078659099.png&quot; alt=&quot;Create New View&quot;&gt;&lt;/p&gt;
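&lt;p&gt;A sketch of that replay with the Kafka API, again using the kafka-python client purely for illustration (topic name and broker address are hypothetical; with MapR Event Store the topic would be addressed through its stream path):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from kafka import KafkaConsumer

# Start from the oldest retained message and rebuild a materialized view.
consumer = KafkaConsumer(
    &quot;payments&quot;,                     # hypothetical topic
    bootstrap_servers=&quot;broker1:9092&quot;,
    auto_offset_reset=&quot;earliest&quot;,   # read from the beginning of each partition
    enable_auto_commit=False,
    consumer_timeout_ms=10000,      # stop once caught up (for this sketch only)
)

view = {}
for message in consumer:
    view[message.key] = message.value  # last-write-wins view keyed by message key
&lt;/code&gt;&lt;/pre&gt;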
&lt;p&gt;With the payment transactions now coming in as an event stream, real time fraud detection, using Spark Machine Learning and Streaming, could be added more easily than before, as shown in the data flow below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/stream-processing-1605078670497.png&quot; alt=&quot;Stream Processing&quot;&gt;&lt;/p&gt;
&lt;p&gt;Having a long retention time for events in the stream allows for more analysis and functionality to be added. For example, a materialized view of card location histories could be stored in a data format such as Parquet, which provides very efficient querying.&lt;/p&gt;
&lt;h2&gt;Evolving the Architecture by Adding Events and Microservices&lt;/h2&gt;
&lt;p&gt;With more event sources, stream processing and machine learning can be added to provide new functionality.  Machine learning techniques across a wide range of interactions — including click stream, click through rates, call center reports, customer preferences, and purchase data — can be used to provide insights, such as: financial recommendations, predictions, alerts, and relevant offers.  For example, web click stream analysis combined with purchase history can be used to segment customers who share behavioral affinities into groups, in order to better target advertisements.  Lead events can be added to a stream when a customer clicks on targeted offers, triggering updates to the customer profile in MapR Database and automated campaigns to prospects.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/microservices-1605078682511.png&quot; alt=&quot;Microservices&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Healthcare Event Sourcing Examples&lt;/h2&gt;
&lt;p&gt;Now let&apos;s look at how a stream-first architecture has been implemented in healthcare. Data from hospitals, providers, and labs flow into the ALLOY Health Platform.  MapR Event Store solves the data lineage problem of HIPAA compliance because the stream becomes a system of record by being an infinite, immutable log of each data change. Polyglot persistence solves the problem of storing multiple data formats. By streaming data changes in real time to the MapR Database HBase API/MapR Database JSON API, graph, and search databases, materialized views can be provided, explored, and analyzed for different use cases, such as population health queries and patient matching.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/sourcing-examples-1605078694102.png&quot; alt=&quot;Sourcing Examples&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Other healthcare stream processing and machine learning data pipeline examples include:&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Using predictive analytics on claims events to reduce fraud, waste, and abuse in healthcare payments.&lt;/li&gt;
&lt;li&gt;Using predictive analytics across multiple sources from over &lt;a href=&quot;https://www.datapine.com/blog/big-data-examples-in-healthcare/&quot;&gt;30 million patients&lt;/a&gt; to:
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://healthcare-conversation.com/2015/07/06/todays-predictive-analytics-should-provide-timely-actionable-intelligence/&quot;&gt;provide doctors with timely, actionable intelligence&lt;/a&gt; to aid in the accuracy of diagnosing patient conditions&lt;/li&gt;
&lt;li&gt;aid in the matching of treatments with outcomes&lt;/li&gt;
&lt;li&gt;predict patients at risk for disease or readmission&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Retail Event Sourcing Example&lt;/h2&gt;
&lt;p&gt;A major retailer wanted to increase in-season agility and inventory discipline in order to react to demand changes and reduce markdowns.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/retail-event-example-1605078705622.png&quot; alt=&quot;Retail Event Example&quot;&gt;&lt;/p&gt;
&lt;p&gt;Data is collected from point of sale transactions, inventory status and pricing, competitive intelligence, social media, weather, and customers (scrubbed of personal identification), allowing for a centralized analysis of correlations and patterns that are relevant to improving business.  Big data algorithms analyze in-store and online purchases, Twitter trends, local sports events, and weather buying patterns to build innovative applications that personalize customer experience while increasing the efficiency of logistics. Point of sale transactions are analyzed to provide product recommendations or discounts based on which products were bought together or before another product. Predictive analytics is used to know what products sell more on particular days in certain kinds of stores, in order to reduce overstock and stay properly stocked on the most in-demand products, thereby helping to optimize the supply chain.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;A confluence of several different technology shifts has dramatically changed the way that applications are being built. The combination of event-driven microservices, containers, Kubernetes, and machine learning data pipelines is accelerating the development of next-generation intelligent applications, which are taking advantage of modern computational paradigms, powered by modern computational infrastructure. The MapR Data Platform integrates global event streaming, real-time database capabilities, and scalable enterprise storage with a collection of data processing and analytical engines to power this new generation of data processing pipelines and intelligent applications.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/mapr-converged-1605078721345.png&quot; alt=&quot;MapR CDP&quot;&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Kafka vs. MapR Event Store: Why MapR?]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may…]]></description><link>https://developer.hpe.com/kafka-vs-mapr-event-store-why-mapr/</link><guid isPermaLink="false">https://developer.hpe.com/kafka-vs-mapr-event-store-why-mapr/</guid><pubDate>Wed, 11 Nov 2020 06:51:11 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Ian Downard&quot;,
&quot;publish&quot;: &quot;2017-01-11T08:00:00.000Z&quot;,
&quot;tags&quot;: &quot;streaming&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;A lot of people choose MapR as their core platform for processing and storing big data because of its advantages for speed and performance. MapR consistently performs faster than any other big data platform for all kinds of applications, including Hadoop, distributed file I/O, NoSQL data storage, and data streaming. In this post, I’m focusing on the latter to provide some perspective on how much better/faster/cheaper MapR Event Store can be compared to Apache Kafka as a data streaming technology.&lt;/p&gt;
&lt;p&gt;MapR Event Store for Apache Kafka is a cluster-based messaging system for streaming data at scale. It’s integrated into the MapR Data Platform and implements the Apache Kafka Java API so applications written for Kafka can also run on MapR Event Store. What differentiates the MapR Event Store technology from Kafka are its built-in features for global replication, security, multi-tenancy, high availability, and disaster recovery—all of which it inherits from the MapR Data Platform. From an operational perspective, these features make MapR Event Store easier to manage than Kafka, but there are speed advantages, too. I’ve been looking at this a lot lately, trying to understand where and why MapR Event Store outperforms Kafka. In this blog post, I will share with you how clearly &lt;strong&gt;MapR Event Store can transport a much faster stream of data, with much larger message sizes, and to far more topics than what can be achieved with Kafka&lt;/strong&gt;.&lt;/p&gt;
&lt;h2&gt;Test Strategy&lt;/h2&gt;
&lt;p&gt;In this study, I wanted to compare Kafka and MapR Event Store as to how they perform “off the shelf” without the burden of tuning my test environment to perfectly optimize performance in each test scenario. So, I have pretty much stuck with the default settings for services and clients. The only exceptions are that I configured each Kafka topic with a replication factor of 3 and configured producers to send messages synchronously, since these are the default modes for MapR Event Store. I also disabled stream compression in order to control message sizes and measure throughput more precisely.&lt;/p&gt;
&lt;h2&gt;Test Configurations&lt;/h2&gt;
&lt;p&gt;I measured performance from both producer and consumer perspectives. However, consumers run faster than producers, so I focused primarily on the producer side since the throughput of a stream is bounded by the throughput of its producers. I used two threads in my producer clients so that message generation could happen in parallel with sending messages and waiting for acknowledgments. I used the following properties for producers and topics:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;acks = all
batch.size = 16384
latency.ms = 0ms
block.on.buffer.full = true
compression = none
default.replication.factor = 3

&lt;/code&gt;&lt;/pre&gt;
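&lt;p&gt;For readers who prefer Python to the JUnit tests referenced below, a roughly equivalent synchronous producer can be sketched with the open-source kafka-python client (this only illustrates the settings above, it is not the benchmark code; the broker address is made up, and the topic name matches the one created in the commands further down):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers=&quot;broker1:9092&quot;,
    acks=&quot;all&quot;,                # wait for all in-sync replicas, as in the test
    batch_size=16384,
    linger_ms=0,
    compression_type=None,     # compression disabled to control message sizes
)

payload = b&quot;x&quot; * 100                        # e.g. a 100-byte record
future = producer.send(&quot;t-00000&quot;, payload)  # topic created in the setup commands
record_metadata = future.get(timeout=10)    # blocking on the ack makes the send synchronous
&lt;/code&gt;&lt;/pre&gt;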
&lt;p&gt;My test environment consisted of three Ubuntu servers running Kafka 2.11-0.10.0.1 or MapR 5.2 on Azure VMs sized with the following specs:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Intel Xeon CPU E5-2660 2.2 GHz processor with 16 cores&lt;/li&gt;
&lt;li&gt;SSD disk storage with 64,000 Mbps cached / 51,200 uncached max disk throughput&lt;/li&gt;
&lt;li&gt;112GB of RAM&lt;/li&gt;
&lt;li&gt;Virtual networking throughput between 1 and 2 Gbits/sec (I measured this quantitatively since I couldn’t easily find virtual network throughput specs from Microsoft).&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Performance Metrics&lt;/h2&gt;
&lt;p&gt;Throughput, latency, and loss are the most important metrics for measuring the performance of a message bus system. MapR Event Store and Kafka both guarantee zero loss through at-least-once semantics. MapR provides some advantages when it comes to latency, but typically both MapR Event Store and Kafka deliver messages sufficiently quickly for real-time applications. For those reasons, I chose to focus on throughput in this study.&lt;/p&gt;
&lt;p&gt;Throughput is important because if an application generates messages faster than a message bus can consume and deliver them, then those messages must be queued. Queueing increases end-to-end latency and destabilizes applications when queues grow too large.&lt;/p&gt;
&lt;p&gt;Furthermore, throughput in Kafka and MapR Event Store is sensitive to the size of the messages being sent and to the distribution of those messages into topics. So, I analyzed those two attributes independently in order to measure how message size and stream topics affect throughput.&lt;/p&gt;
&lt;h2&gt;Throughput Performance&lt;/h2&gt;
&lt;p&gt;To measure producer throughput, I measured how fast a single producer could publish a sustained flow of messages to a single topic with 1 partition and 3x replication. I ran this test for a variety of message sizes to see how that affects throughput. The results show MapR Event Store consistently achieving much higher throughput than Kafka and having a much higher capacity for handling large message sizes, as shown below.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/streams-tput-bytes-1605077662806.png&quot; alt=&quot;Throughput MB/s&quot;&gt;&lt;/p&gt;
&lt;p&gt;MapR Event Store doesn’t just send a faster volume of data than Kafka; it also has the capacity to send more records per second. We can see this by plotting throughput in terms of raw record count, as shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/streams-tput-records-1605077683740.png&quot; alt=&quot;Throughput Msgs/s&quot;&gt;&lt;/p&gt;
&lt;p&gt;I recorded these results with two different code bases. First, I used custom tests that I wrote using the Java unit test framework (JUnit), then I used the performance test scripts included with Kafka and MapR. These different approaches did not produce exactly the same results but they were close, as shown below. This correlation helps validate the conclusions stated above, that &lt;strong&gt;MapR Event Store can transport a larger volume of data and more frequent messages than Kafka&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/streams-tput-v1v2-1605077697749.png&quot; alt=&quot;Throughput Correlation&quot;&gt;&lt;/p&gt;
&lt;h2&gt;How does MapR Event Store achieve more than 4x the throughput of Kafka?&lt;/h2&gt;
&lt;p&gt;There are a lot of reasons why MapR Event Store is faster, and without getting too technical, I’ll mention just a few. First, the MapR Event Store client more efficiently flushes data to the MapR Event Store server. It spawns its own threads to do this work, whereas Kafka uses the client application threads directly to flush to a Kafka broker, which in many cases is limited to just a single thread.&lt;/p&gt;
&lt;p&gt;On the server side, MapR Event Store inherits efficient I/O patterns from the core MapR storage layer which keeps files coherent and clean so that I/O operations can be efficiently buffered and addressed to sequential locations on disk. Replication is more efficient, too, since the underlying MapR storage platform has distributed synchronous replication built in, along with other operational features that simply don’t exist in Kafka, such as snapshots, mirroring, quotas, access controls, etc.&lt;/p&gt;
&lt;h2&gt;Replicating this test&lt;/h2&gt;
&lt;p&gt;My JUnit tests for benchmarking Kafka and MapR Event Store are available at &lt;a href=&quot;https://github.com/iandow/kafka_junit_tests&quot;&gt;https://github.com/iandow/kafka_junit_tests&lt;/a&gt;. Here are the commands that I used to generate the data shown above:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;git clone https://github.com/iandow/kafka_junit_tests
cd kafka_junit_tests
# Create a Kafka topic...
/opt/kafka_2.11-0.10.0.1/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic t-00000 --config compression.type=uncompressed
# or create a MapR Event Store topic.
maprcli stream create -path /user/mapr/iantest -produceperm p -consumeperm p -topicperm p -defaultpartitions 1 -compression off
# Then compile.
mvn -e -Dtest=MessageSizeSpeedTest test
# Test data will be saved in size-count.csv

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can also measure throughput using the performance test utilities included with Kafka and MapR. Here are the commands that I used to do that:&lt;/p&gt;
&lt;p&gt;Kafka script:&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://gist.github.com/iandow/bf5df0f9b4f19e6a19aa5a7a93b7c81c&quot;&gt;https://gist.github.com/iandow/bf5df0f9b4f19e6a19aa5a7a93b7c81c&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;MapR script:&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://gist.github.com/iandow/0750185f1d3631301d476b426c109a50&quot;&gt;https://gist.github.com/iandow/0750185f1d3631301d476b426c109a50&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Topic Scalability&lt;/h2&gt;
&lt;p&gt;Another major advantage that MapR Event Store holds over Kafka relates to how well it can handle large quantities of stream topics. Topics are the primary means of organizing stream data; however, there is overhead associated with categorizing streams into topics, and producer throughput is sensitive to that overhead. I quantified this by measuring how fast a single producer could publish a sustained flow of messages to an increasingly large quantity of topics. This is essentially a &quot;fan-out&quot; producer (illustrated below) and it is very common for fast data pipelines to use this pattern so that data can be more easily consumed downstream.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/streams-fanout-1605077711198.png&quot; alt=&quot;Fanout Producer&quot;&gt;&lt;/p&gt;
&lt;p&gt;Each of the topics created for this scenario was configured with a single partition and 3x replication. Record size was held constant at 100 bytes.&lt;/p&gt;
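&lt;p&gt;A fan-out producer of this shape can be sketched in a few lines of Python (using the open-source kafka-python client purely for illustration; the real measurements were made with the JUnit tests linked above, and the broker address and topic count here are made up):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers=&quot;broker1:9092&quot;, acks=&quot;all&quot;)

record = b&quot;x&quot; * 100                          # record size held constant at 100 bytes
topics = [&quot;t-%05d&quot; % i for i in range(50)]   # an increasingly large quantity of topics

# Publish a sustained flow of messages, spread across all the topics.
for i in range(1000000):
    producer.send(topics[i % len(topics)], record)

producer.flush()
&lt;/code&gt;&lt;/pre&gt;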
&lt;p&gt;It’s clear from the following graph that &lt;strong&gt;MapR Event Store scales to a larger quantity of topics than Kafka&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/streams-tput-topics-1605077727421.png&quot; alt=&quot;Topic Scalability&quot;&gt;&lt;/p&gt;
&lt;h2&gt;How does MapR Event Store handle so many more topics than Kafka?&lt;/h2&gt;
&lt;p&gt;A topic is just metadata in MapR Event Store; it does not introduce overhead to normal operations. MapR Event Store uses only one data structure for a stream, no matter how many topics it has, and the MapR storage system provides extremely fast and scalable storage for that data.&lt;/p&gt;
&lt;p&gt;On the other hand, Kafka represents each topic by at least one directory and several files in a general purpose file system. The more topics/partitions Kafka has, the more files it creates. This makes it harder to buffer disk operations and perform sequential I/O, and it increases the complexity of what ZooKeeper must manage.&lt;/p&gt;
&lt;h2&gt;Replicating this test&lt;/h2&gt;
&lt;p&gt;This scenario can be run with another JUnit test from &lt;a href=&quot;https://github.com/iandow/kafka_junit_tests&quot;&gt;https://github.com/iandow/kafka_junit_tests&lt;/a&gt;, as follows:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;git clone https://github.com/iandow/kafka_junit_tests
cd kafka_junit_tests
# For MapR only, create the stream first:
maprcli stream create -path /user/mapr/taq -produceperm p -consumeperm p -topicperm p -compression off
mvn -e -Dtest=ThreadCountSpeedTest test
# Test data will be saved in thread-count.csv

&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Partition Scalability&lt;/h2&gt;
&lt;p&gt;Stream topics are often subdivided into partitions in order to allow multiple consumers to read from a topic simultaneously. Both Kafka and MapR Event Store allow topics to be partitioned, but partitions in MapR Event Store are much more powerful and easier to manage than partitions in Kafka. For example, Kafka requires partitions to fit within the disk space of a single cluster node and cannot split them across machines. MapR Event Store is not limited by the storage capacity of any one node because the MapR storage system automatically grows (or shrinks) partitions across servers. I’ll talk more about these operational advantages later, but let’s consider the performance implications of partitioning now.&lt;/p&gt;
&lt;p&gt;ZooKeeper elects separate nodes to be leaders for each partition. Leaders are responsible for processing the client reads and writes for their designated partition. This helps load balance client requests across the cluster, but it complicates the work that ZooKeeper must do to keep topics synchronized and replicated. Leader election takes time and does not scale well. In my tests, I saw leader election take at least 0.1 seconds per partition, and it ran serially. So, for example, it would take more than 10 seconds to configure a topic with 100 partitions, that is, if ZooKeeper didn’t crash, which it frequently did when I created topics with 100 or more partitions.&lt;/p&gt;
&lt;p&gt;In MapR Event Store, I had no problem streaming data to topics with thousands of partitions, as shown below. This graph shows the throughput for a producer sending synchronously to a 3x replicated topic subdivided into an increasingly large number of partitions. I could not run my test in Kafka beyond 400 partitions, so that line is cut short.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/streams-tput-partitions-1605077753761.png&quot; alt=&quot;Partitioning Scalability&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Replicating this test&lt;/h2&gt;
&lt;p&gt;I used the performance scripts included with Kafka and MapR to generate the partition vs. throughput data shown above. Here is the script I used to run this test in Kafka:&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://gist.github.com/iandow/625d783333a53b592f0381e6b37ee9ab&quot;&gt;https://gist.github.com/iandow/625d783333a53b592f0381e6b37ee9ab&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;That script will silently freeze if ZooKeeper fails, but it will continue once ZooKeeper starts again. So in another terminal, I simultaneously ran the following script to automatically restart ZooKeeper if it fails (which it is likely to do during this test):&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://gist.github.com/iandow/2dc07bde132669706467e8ee45507561&quot;&gt;https://gist.github.com/iandow/2dc07bde132669706467e8ee45507561&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Here is the script I used to generate partitions vs. throughput data in MapR:&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://gist.github.com/iandow/8074962f6205552c9cdc3fceccdd9793&quot;&gt;https://gist.github.com/iandow/8074962f6205552c9cdc3fceccdd9793&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Operational Advantages for MapR Event Store&lt;/h2&gt;
&lt;p&gt;Increasing throughput capacity and decreasing message latency can often be accomplished simply by adding nodes to your distributed messaging cluster. However, doing so costs money and complicates management, so essentially saying that MapR Event Store performs better than Kafka is another way of saying that operating a distributed messaging platform can be done with less hardware on MapR than with Kafka.&lt;/p&gt;
&lt;p&gt;However, unless you’re working on applications that scale to extreme lengths, then the challenges you face with Kafka are more likely to be operational rather than performance in nature. And this is where the MapR total cost of ownership really shines.&lt;/p&gt;
&lt;p&gt;Not only does MapR Event Store execute with higher performance, it also addresses major operational deficiencies in Kafka. Here are three examples relating to replication, scaling, and mirroring:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Kafka requires that the MirrorMaker processes be manually configured in order to replicate across clusters. Replication is easy to configure with MapR Event Store and supports unique capabilities for replicating streams across data centers and allowing streams to be updated in multiple locations at the same time.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Kafka’s mirroring design simply forwards messages to a mirror cluster. The offsets in the source cluster are useless in the mirror, which means consumers and producers cannot automatically failover from one cluster to a mirror. MapR continuously transfers updated records for near real-time replication and preserves message offsets in all replicated copies.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Kafka requires partitions to fit within the disk space of a single cluster node; they cannot be split across machines. This is especially risky, because ZooKeeper could automatically assign multiple large partitions to a node that doesn’t have space for them. You can move them manually, but that can quickly become unmanageable. MapR Event Store is not limited by the storage capacity of any one node because it distributes stream data across the cluster.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;MapR Event Store outperforms Kafka in big ways. I measured the performance of distributed streaming in a variety of cases that focused on the effects of message size and topic quantity, and I saw MapR Event Store transport a much faster stream of data, with much larger message sizes, and to far more topics than what could be achieved with Kafka on a similarly sized cluster. Although performance isn’t the only thing that makes MapR Event Store desirable over Kafka, it offers one compelling reason to consider it.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Configure Jupyter Notebook for Spark 2.1.0 and Python]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may…]]></description><link>https://developer.hpe.com/configure-jupyter-notebook-for-spark-210-and-python/</link><guid isPermaLink="false">https://developer.hpe.com/configure-jupyter-notebook-for-spark-210-and-python/</guid><pubDate>Thu, 05 Nov 2020 17:04:32 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Mathieu Dumoulin&quot;,
&quot;publish&quot;: &quot;2017-07-07T12:00:00.000&quot;,
&quot;tags&quot;: &quot;apache-spark&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;I&apos;ll guess that many people reading this have spent time wrestling with a configuration to get Python and Spark to play nicely. Having gone through the process myself, I&apos;ve documented my steps and will share my knowledge, hoping it will save some time and frustration for some of you.&lt;/p&gt;
&lt;p&gt;This article targets the latest releases of MapR 5.2.1 and the MEP 3.0 version of Spark 2.1.0. It should work equally well for earlier releases of MapR 5.0 and 5.1. In fact, I&apos;ve tested this to work with MapR 5.0 with MEP 1.1.2 (Spark 1.6.1) for a customer.&lt;/p&gt;
&lt;p&gt;The version of Jupyter is 4.3. It seems to have changed quite a bit since earlier versions, so most of the information I found in blogs was pretty outdated. Hence my having so much trouble getting everything working to my satisfaction.&lt;/p&gt;
&lt;p&gt;My goals:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Run PySpark successfully from either a cluster node or an edge node&lt;/li&gt;
&lt;li&gt;Run python code in YARN distributed mode&lt;/li&gt;
&lt;li&gt;Have access to modules like numpy, scipy, pandas and others.&lt;/li&gt;
&lt;li&gt;Do all this using Jupyter in server mode that I access from my own laptop&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I&apos;m leaving out Jupyter server mode security, which could be the topic of a future blog, potentially. I&apos;ve implemented it before and found &lt;a href=&quot;https://jupyter-notebook.readthedocs.io/en/latest/security.html&quot;&gt;THE JUPYTER DOCUMENTATION&lt;/a&gt; explains setting it up for encryption (HTTPS) and authentication to be pretty good.&lt;/p&gt;
&lt;h2&gt;Installing Python&lt;/h2&gt;
&lt;p&gt;Verify your version of Python:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;python --version
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If it&apos;s Python 2.6.X, it&apos;s probably a good idea to use a recent build of Python 2.7. If it&apos;s Python 2.7.X, then you&apos;ll need to choose whether or not to use the system Python.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The system Python is easier to make work: it&apos;s already there and shared everywhere.&lt;/li&gt;
&lt;li&gt;An isolated, separate Python (Anaconda or a separate build) is harder to get working, but provides a more consistent environment where each user can have their own (and only their own) modules installed.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I will use Miniconda with 64-bit Python 2.7 throughout. It works very well. Using Python 3 would be just the same, with the only difference being in terms of code and module compatibility. Either will work fine with Spark.&lt;/p&gt;
&lt;p&gt;Note: Python 3.6 doesn&apos;t work with Spark 1.6.1 See &lt;a href=&quot;https://issues.apache.org/jira/browse/SPARK-19019&quot;&gt;SPARK-19019&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Installing Anaconda&lt;/h2&gt;
&lt;p&gt;There is a choice between Anaconda and Miniconda, as well as between python 2.7 and Python 3.6.&lt;/p&gt;
&lt;p&gt;Miniconda is very nice because the download is small and you only install what you need. Anaconda is very nice for having everything installed from the start, so for most needs all the required modules will already be there.&lt;/p&gt;
&lt;p&gt;Here, we show installing miniconda and Python 2.7 (64bits):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;wget https://repo.continuum.io/miniconda/Miniconda2-latest-Linux-x86_64.sh
bash Miniconda2-latest-Linux-x86_64.sh -b -p /opt/miniconda2
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To install it on all nodes at once, we recommend checking out Clustershell.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;#copy the file to all nodes
clush -ac Miniconda2-latest-Linux-x86_64.sh

#install on all nodes at same time:
clush -aB bash Miniconda2-latest-Linux-x86_64.sh -b -p /opt/miniconda2
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt;: Get all nodes in exactly the same state, with Python/Anaconda installed &lt;strong&gt;exactly&lt;/strong&gt; in the same location and with exactly the same modules installed on every node. Miss that here, and it guarantees weird errors that will be hard to diagnose.&lt;/p&gt;
&lt;h2&gt;Update Spark environment to use Python 2.7:&lt;/h2&gt;
&lt;p&gt;Add to &lt;code&gt;/opt/mapr/spark/spark-2.1.0/conf/spark-env.sh&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;    export PYSPARK_PYTHON=/opt/miniconda2/bin/python
    export PYSPARK_DRIVER_PYTHON=/opt/miniconda2/bin/python
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Update file on all nodes:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# using clustershell to copy file (&quot;c&quot;) to all nodes (&quot;a&quot;)
clush -ac /opt/mapr/spark/spark-2.1.0/conf/spark-env.sh
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: this is known to work on previous MEP versions. I have also tested it with MEP 1.1.2 (Spark 1.6.1) and it worked very well. Just use the correct path to Spark and it will work just fine.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Testing&lt;/h2&gt;
&lt;p&gt;For testing, let&apos;s use some Ebay auction data.&lt;/p&gt;
&lt;p&gt;Copy the data into the folder: &lt;code&gt;/user/mapr/data&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Start pyspark and run the following code:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;&gt;&gt;&gt; auctionRDD = sc.textFile(&quot;/user/mapr/data/auctiondata.csv&quot;).map(lambda line:line.split(&quot;,&quot;))
&gt;&gt;&gt; auctionRDD.first()
[u&apos;8213034705&apos;, u&apos;95&apos;, u&apos;2.927373&apos;, u&apos;jake7870&apos;, u&apos;0&apos;, u&apos;95&apos;, u&apos;117.5&apos;, u&apos;xbox&apos;, u&apos;3&apos;]
&gt;&gt;&gt; auctionRDD.count()
10654
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Ok, so now we have a working pyspark shell!&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: Don&apos;t do this as &lt;code&gt;root&lt;/code&gt; or as user &lt;code&gt;mapr&lt;/code&gt; on a production cluster. However, for tutorials, user &lt;code&gt;mapr&lt;/code&gt; is convenient, as it is a superuser and you don&apos;t need to worry about file permissions on MapR.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Errors:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;pyspark java.io.IOException: Cannot run program &quot;python2.7&quot;: error=2, No such file or directory&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This error is because the driver and/or the executors can&apos;t find the python executable. It&apos;s fixed by setting the PYSPARK_PYTHON (and PYSPARK_DRIVER_PYTHON) variables in &lt;code&gt;spark-env.sh&lt;/code&gt; (see above)&lt;/p&gt;
&lt;h2&gt;ipython Notebook&lt;/h2&gt;
&lt;p&gt;If you want to be able to use Spark when launching the ipython shell:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Ensure SPARK_HOME env variable is defined.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;export SPARK_HOME=/opt/mapr/spark/spark-2.1.0
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Install ipython with Anaconda&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;/opt/miniconda2/bin/conda install jupyter
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Add a ipython profile named pyspark&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;ipython profile create pyspark
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;Add &lt;code&gt;~/.ipython/profile_pyspark/startup/00-pyspark-setup.py&lt;/code&gt;:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import os
import sys
spark_home = os.environ.get(&apos;SPARK_HOME&apos;, None)
if not spark_home:
    raise ValueError(&apos;SPARK_HOME environment variable is not set&apos;)
sys.path.insert(0, os.path.join(spark_home, &apos;python&apos;))
sys.path.insert(0, os.path.join(spark_home, &apos;python/lib/py4j-0.10.4-src.zip&apos;))
exec(open(os.path.join(spark_home, &apos;python/pyspark/shell.py&apos;)).read())
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;5&quot;&gt;
&lt;li&gt;Launch&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;/opt/miniconda2/bin/ipython --profile=pyspark
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Try the sample code above. It should also work without issue.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/ipythonsamplecode-1604596105535.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Jupyter Notebook&lt;/h2&gt;
&lt;p&gt;Now on to Jupyter. In this case, we&apos;re looking to have the notebook run on an edge node (less ideally, on a cluster node) in server mode and access it from our development laptop.&lt;/p&gt;
&lt;p&gt;The following instructions assume the user &lt;code&gt;mapr&lt;/code&gt;, but should work equally well for any other user. For production use, never use the &lt;code&gt;mapr&lt;/code&gt; user, as it is a superuser with read-write access to all data in MapR.&lt;/p&gt;
&lt;p&gt;With Anaconda:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;clush -aB /opt/miniconda2/bin/conda install jupyter -y
&lt;/code&gt;&lt;/pre&gt;
&lt;ol&gt;
&lt;li&gt;Allow remote login to a notebook:&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Generate a profile:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;  /opt/miniconda2/bin/jupyter notebook --generate-config
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This generates the following file: &lt;code&gt;$HOME/.jupyter/jupyter_notebook_config.py&lt;/code&gt;. In this file, we&apos;re going to update the &lt;code&gt;c.NotebookApp.ip&lt;/code&gt; setting. The default value is &apos;localhost&apos;, meaning you can&apos;t log in remotely. Setting it to &apos;*&apos; makes the notebook accessible from anywhere, which is fine for development but not so good for production.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;  c.NotebookApp.ip = &apos;*&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;About Security:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;It&apos;s a good time to remind you about security. It&apos;s pretty easy to configure Jupyter to use HTTPS and require a password. See the &lt;a href=&quot;https://jupyter-notebook.readthedocs.io/en/latest/public_server.html#securing-a-notebook-server&quot;&gt;Jupyter documentation&lt;/a&gt;. Basic auth over HTTPS is a reasonable minimum security level for a production system.&lt;/p&gt;
&lt;p&gt;Important: the user which runs the notebook matters, as that user&apos;s permissions are used to access files on MapR. If you run it as user mapr, then everything is accessible because it&apos;s a superuser account. For production, you want to run it as a less privileged user.&lt;/p&gt;
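&lt;p&gt;As a minimal illustration (not part of the original walkthrough), the relevant lines in &lt;code&gt;jupyter_notebook_config.py&lt;/code&gt; could look like the sketch below; the certificate paths and password hash are placeholders you would generate yourself:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# ~/.jupyter/jupyter_notebook_config.py -- illustrative hardening sketch; paths are placeholders
c.NotebookApp.ip = &apos;*&apos;                             # listen on all interfaces instead of localhost
c.NotebookApp.open_browser = False                 # server mode, no local browser
c.NotebookApp.certfile = u&apos;/path/to/notebook.pem&apos;  # enable HTTPS
c.NotebookApp.keyfile = u&apos;/path/to/notebook.key&apos;
# hash generated with: python -c &quot;from notebook.auth import passwd; print(passwd())&quot;
c.NotebookApp.password = u&apos;sha1:&lt;generated hash&gt;&apos;
&lt;/code&gt;&lt;/pre&gt;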
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;The startup script from the ipython step is helpful:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;  [mapr@ip-10-0-0-180 ~]$ cat .ipython/profile_default/startup/00-default-setup.py
  import os
  import sys
  spark_home = os.environ.get(&apos;SPARK_HOME&apos;, None)
  if not spark_home:
      raise ValueError(&apos;SPARK_HOME environment variable is not set&apos;)
  sys.path.insert(0, os.path.join(spark_home, &apos;python&apos;))
  sys.path.insert(0, os.path.join(spark_home, &apos;python/lib/py4j-0.10.4-src.zip&apos;))
  execfile(os.path.join(spark_home, &apos;python/pyspark/shell.py&apos;))
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This step is essential or else the kernel won&apos;t initialize properly. Alternatively, you can paste the code above into the first cell to initialize pyspark first. Another alternative is to use the &lt;code&gt;findspark&lt;/code&gt; module, which probably does something similar to this, but with less code.&lt;/p&gt;
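&lt;p&gt;For reference, a minimal &lt;code&gt;findspark&lt;/code&gt; sketch (assuming the package has been installed into the miniconda environment, for example with &lt;code&gt;pip install findspark&lt;/code&gt;) would look something like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import findspark
# point findspark at the MapR Spark install; it adds the pyspark libraries to sys.path
findspark.init(&apos;/opt/mapr/spark/spark-2.1.0&apos;)

from pyspark import SparkContext
sc = SparkContext(appName=&apos;findspark-test&apos;)
print(sc.parallelize(range(100)).count())
&lt;/code&gt;&lt;/pre&gt;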
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Add a PySpark kernel. Create the &lt;code&gt;kernel.json&lt;/code&gt; file in the location shown below:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;[mapr@ip-10-0-0-20 ~]$ cat .ipython/kernels/pyspark/kernel.json
{
 &quot;display_name&quot;: &quot;pySpark (Spark 2.1.0)&quot;,
 &quot;language&quot;: &quot;python&quot;,
 &quot;argv&quot;: [
  &quot;/opt/miniconda2/bin/python&quot;,
  &quot;-m&quot;,
  &quot;ipykernel&quot;,
  &quot;-f&quot;,
  &quot;{connection_file}&quot;
 ],
&quot;env&quot;: {
        &quot;CAPTURE_STANDARD_OUT&quot;: &quot;true&quot;,
        &quot;CAPTURE_STANDARD_ERR&quot;: &quot;true&quot;,
        &quot;SEND_EMPTY_OUTPUT&quot;: &quot;false&quot;,
        &quot;SPARK_HOME&quot;: &quot;/opt/mapr/spark/spark-2.1.0&quot;
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;Start a notebook and have fun with Spark and Python!&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;  jupyter notebook --no-browser
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/jupyternotebook-nobrowser-1604596122293.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Open your browser to the indicated link and... Success!&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/browsersuccess-1604596151125.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Launch Jupyter notebook instead of pyspark&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;Update &lt;code&gt;$SPARK_HOME/conf/spark-env.sh&lt;/code&gt;:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;  [mapr@ip-10-0-0-20 ~]$ tail /opt/mapr/spark/spark-2.1.0/conf/spark-env.sh
  export PYSPARK_DRIVER_PYTHON=/opt/miniconda2/bin/jupyter

  # Setup env variable for Jupyter + Pyspark
  export PYSPARK_DRIVER_PYTHON_OPTS=&quot;notebook --no-browser&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Launch pyspark; it will now launch a Jupyter notebook&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;  $SPARK_HOME/bin/pyspark
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Thanks&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://mengdong.github.io/2016/08/08/fully-armed-pyspark-with-ipython-and-jupyter/&quot;&gt;Dong Meng&apos;s blog&lt;/a&gt; proved to be a life saver. Check him out: &lt;a href=&quot;https://mengdong.github.io&quot;&gt;https://mengdong.github.io&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Performance Tuning of an Apache Kafka/Spark Streaming System]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may…]]></description><link>https://developer.hpe.com/performance-tuning-of-an-apache-kafkaspark-streaming-system/</link><guid isPermaLink="false">https://developer.hpe.com/performance-tuning-of-an-apache-kafkaspark-streaming-system/</guid><pubDate>Thu, 05 Nov 2020 16:39:38 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Mathieu Dumoulin&quot;,
&quot;publish&quot;: &quot;2017-01-17T06:00:00.000Z&quot;,
&quot;tags&quot;: &quot;apache-hive&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;&lt;em&gt;Real-world case study in the telecom industry&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Debugging a real-life distributed application can be a pretty daunting task. Most common Google searches don&apos;t turn out to be very useful, at least at first. In this blog post, I will give a fairly detailed account of how we managed to accelerate by almost 10x an Apache Kafka/Spark Streaming/Apache Ignite application and turn a development prototype into a useful, stable streaming application that eventually exceeded the performance goals set for the application.&lt;/p&gt;
&lt;p&gt;The lessons learned here are fairly general and extend easily to similar systems using MapR Event Store as well as Kafka.&lt;/p&gt;
&lt;p&gt;This project serves as a concrete case for the need of a converged platform, which integrates the full software stack to support the requirements of this system: real-time streams and big data distributed processing and persistence. The MapR Data Platform is the only currently available production-ready implementation of such a platform as of this writing.&lt;/p&gt;
&lt;h2&gt;Goal of the system&lt;/h2&gt;
&lt;p&gt;To meet the needs of the telecom company, the goal of the application is to join together the log data from three separate systems. When the data is joined, it becomes possible to correlate the network conditions to a particular call for any particular customer, thus allowing customer support to provide accurate and useful information to customers who are unsatisfied with their phone service. The application has great additional value if it can do this work in real time rather than as a batch job, since call quality information that is 6 hours old has no real value for customer service or network operations.&lt;/p&gt;
&lt;p&gt;Basically, this is a fairly straight-up ETL job that would normally be done as a batch job for a data warehouse but now has to be done in real time as a streaming distributed architecture.&lt;/p&gt;
&lt;p&gt;More concretely, the overall picture is to stream the input data from a remote server into a distributed cluster, do some data cleaning and augmentation, join the records from the three logs, and persist the joined data as a single table into a database.&lt;/p&gt;
&lt;h2&gt;The problems with the original system&lt;/h2&gt;
&lt;p&gt;The original system had several issues centered around performance and stability.&lt;/p&gt;
&lt;p&gt;First, the streaming application was not stable. In a Spark Streaming application, the stream is said to be stable if the processing time of each microbatch is equal to or less than the batch time. In this case, the streaming part of the application was receiving data in 30-second windows but was taking between 4.5 and 6 minutes to process.&lt;/p&gt;
&lt;p&gt;Second, there is a batch process to join data one hour at a time that was targeted to run in 30 minutes but was taking over 2 hours to complete.&lt;/p&gt;
&lt;p&gt;Third, the application was randomly crashing after running for a few hours.&lt;/p&gt;
&lt;h2&gt;The cluster hardware, software stack, and input data&lt;/h2&gt;
&lt;p&gt;The cluster hardware is pretty good, with 12 enterprise-class server nodes, each equipped with two E5 Xeon CPUs (16 physical cores), 256GB of memory, and eight 6TB spinning HDDs. The network is 10Gb Ethernet.&lt;/p&gt;
&lt;p&gt;The technology stack selected for this project is centered around Kafka 0.8 for streaming the data into the system, Apache Spark 1.6 for the ETL operations (essentially a bit of filter and transformation of the input, then a join), and the use of &lt;a target=&apos;\_blank&apos;  href=&apos;https://ignite.apache.org/&apos;&gt;Apache Ignite 1.6&lt;/a&gt; as an in-memory shared cache to make it easy to connect the streaming input part of the application with joining the data. Apache Hive is also used to serve as a disk backup for Ignite in case of failure and for separate analytics application.&lt;/p&gt;
&lt;p&gt;The initial cluster was configured as follows:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th align=&quot;center&quot;&gt;&lt;strong&gt;Node&lt;/strong&gt;&lt;/th&gt;
&lt;th align=&quot;center&quot;&gt;&lt;strong&gt;Zk&lt;/strong&gt;&lt;/th&gt;
&lt;th align=&quot;center&quot;&gt;&lt;strong&gt;NN&lt;/strong&gt;&lt;/th&gt;
&lt;th align=&quot;center&quot;&gt;&lt;strong&gt;HDFS&lt;/strong&gt;&lt;/th&gt;
&lt;th align=&quot;center&quot;&gt;&lt;strong&gt;Mesos&lt;/strong&gt;&lt;/th&gt;
&lt;th align=&quot;center&quot;&gt;&lt;strong&gt;Mesos Master&lt;/strong&gt;&lt;/th&gt;
&lt;th align=&quot;center&quot;&gt;&lt;strong&gt;Kafka&lt;/strong&gt;&lt;/th&gt;
&lt;th align=&quot;center&quot;&gt;&lt;strong&gt;Spark Worker&lt;/strong&gt;&lt;/th&gt;
&lt;th align=&quot;center&quot;&gt;&lt;strong&gt;Ignite&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td align=&quot;center&quot;&gt;1&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;center&quot;&gt;2&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;center&quot;&gt;3&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;center&quot;&gt;...&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;center&quot;&gt;7&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;center&quot;&gt;8&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;center&quot;&gt;...&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;center&quot;&gt;12&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;x&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;The cluster is running Apache Hadoop&apos;s HDFS as a distributed storage layer, with resources managed by Mesos 0.28. Finally, HBase is used as the ultimate data store for the final joined data. It will be queried by other systems outside the scope of this project.&lt;/p&gt;
&lt;p&gt;The performance requirement of the system is to handle an input throughput of up to 3GB/min, or 150-200,000 events/second, representing the known peak data throughput, plus an additional margin. The ordinary throughput is about half of that value or 1.5GB/min and 60,000-80,000 events/second.&lt;/p&gt;
&lt;p&gt;The raw data source are the logs of three remote systems, labeled A, B, and C here: Log A comprises about 84-85% of the entries, Log B about 1-2%, and Log C about 14-15%. The fact that the data is unbalanced is one of the (many) sources of difficulty in this application.&lt;/p&gt;
&lt;p&gt;The Spark applications are both coded in Scala 2.10 and &lt;a target=&apos;\_blank&apos;  href=&apos;http://spark.apache.org/docs/1.6.2/streaming-kafka-integration.html#approach-2-direct-approach-no-receivers&apos;&gt;Kafka’s direct approach&lt;/a&gt; (no receivers). Apache Ignite has a really nice Scala API with a magic &lt;a target=&apos;\_blank&apos;  href=&apos;https://ignite.apache.org/features/igniterdd.html&apos;&gt;IgniteRDD&lt;/a&gt; that can allow applications to share in-memory data, a key feature for this system to reduce coding complexity.&lt;/p&gt;
&lt;h2&gt;The application architecture&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/picture2-1604595435073.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The raw data is ingested into the system by a single Kafka producer into Kafka running on 6 servers. The producer reads the various logs and adds each log&apos;s records into its own topic. As there are three logs, there are three Kafka topics. Each topic is split into 36 partitions. Most likely, there are 36 partitions because there are 6 nodes, each with 6 disks assigned to HDFS, and the Kafka documentation recommends roughly one partition per physical disk as a guideline.&lt;/p&gt;
&lt;p&gt;The data is consumed by a Spark Streaming application which picks up each topic and then does a simple filter to cut out unnecessary fields, a map operation to transform the data, and a foreachRDD operation (each micro-batch generates an RDD in Spark Streaming) that saves the data to Ignite and to Hive.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/picture3-1604595444583.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The streaming app is very straightforward: map, filter, and foreach partition to save to Ignite&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;A second &quot;regular&quot; Spark application runs on the data stored in-memory by Ignite to join the records from the three separate logs into a single table in batches of 1 hour. This job is done using Spark&apos;s DataFrame API, which is ideally suited to the task. The second part involves no more than 100GB worth of data, and the cluster hardware is properly sized to handle that amount of data.&lt;/p&gt;
&lt;p&gt;Three hours of data are accumulated into Ignite, because the vast majority of calls last for less than an hour, and we want to run the join on one hour’s worth of data at a time. Since some calls will start in one batch and finish in another, the system keeps three hours and only processes the middle one-hour batch, thus the join can succeed on close to 100% of the records.&lt;/p&gt;
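&lt;p&gt;The production join job is written in Scala and is not reproduced in this post. Purely as an illustration of the idea, a simplified PySpark DataFrame sketch of the hour-window join might look like the following; it is written against the modern SparkSession API rather than the Spark 1.6 code used in the project, and the table names, column names, and timestamps are hypothetical:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName(&apos;hourly-join-sketch&apos;).getOrCreate()

# hypothetical tables holding the three cleaned log feeds buffered over 3 hours
log_a = spark.table(&apos;log_a&apos;)
log_b = spark.table(&apos;log_b&apos;)
log_c = spark.table(&apos;log_c&apos;)

# the middle one-hour slice we actually want to join
start, end = &apos;2017-01-17 01:00:00&apos;, &apos;2017-01-17 02:00:00&apos;
anchor = log_a.filter((F.col(&apos;event_time&apos;) &gt;= start) &amp; (F.col(&apos;event_time&apos;) &lt; end))

# join the anchor hour against the full 3-hour buffer of the other two logs,
# so calls that spill into the neighboring hours still find their matches
joined = (anchor
          .join(log_b, &apos;call_id&apos;)
          .join(log_c, &apos;call_id&apos;))

joined.write.mode(&apos;append&apos;).saveAsTable(&apos;joined_calls&apos;)
&lt;/code&gt;&lt;/pre&gt;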
&lt;p&gt;It’s worth noting that a better all-streaming architecture could have avoided the whole issue with the intermediate representation in the first place. It’s an illustrative, real-world case of how spending more time and thought up-front can finish the entire project faster than rushing headlong into coding the first working solution that comes to mind.&lt;/p&gt;
&lt;h2&gt;Performance tuning&lt;/h2&gt;
&lt;p&gt;The main issues for these applications were caused by trying to run a development system&apos;s code, which had been tested on AWS instances, on a physical, on-premises cluster running on real data. The original developer was never given access to the production cluster or the real data.&lt;/p&gt;
&lt;p&gt;Apache Ignite was a huge source of problems, principally because it is such a new project that nobody had any real experience with it and also because it is not a very mature project yet.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;First target: Improve Spark Streaming performance&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;The Spark Streaming application was running in about 4.5 minutes, and the project goal was to run in about 30 seconds. We needed to find improvements worth a 9x speedup, and due to time constraints, we couldn’t afford to change any code!&lt;/p&gt;
&lt;p&gt;The system had to be ready for production testing within a week, so the code from the architecture and algorithm point of view was assumed to be correct and good enough that we could reach the performance requirement only with tuning.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fix RPC timeout exceptions&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;We found the correct solution from somebody having the same problem, as seen in &lt;a target=&apos;\_blank&apos;  href=&apos;https://issues.apache.org/jira/browse/SPARK-14140&apos;&gt;SPARK-14140 in JIRA&lt;/a&gt;. They recommend increasing the spark.executor.heartbeatInterval from 10s to 20s.&lt;/p&gt;
&lt;p&gt;I think this problem may be caused by nodes getting busy from disk or CPU spikes because of Kafka, Ignite, or garbage collector pauses. Since Spark runs on all nodes, the issue was random. (See the cluster services layout table in the first section.)&lt;/p&gt;
&lt;p&gt;The configuration change fixed this issue completely. We haven’t seen it happen since.&lt;/p&gt;
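&lt;p&gt;For illustration only (the project itself is written in Scala and was tuned through configuration rather than code changes), here is a PySpark-style sketch of where the setting lives; in practice it can equally be passed as &lt;code&gt;--conf&lt;/code&gt; to spark-submit or set in &lt;code&gt;spark-defaults.conf&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName(&apos;streaming-etl&apos;)
        # raise the heartbeat interval so busy executors are not flagged as lost (SPARK-14140)
        .set(&apos;spark.executor.heartbeatInterval&apos;, &apos;20s&apos;)
        # the Spark docs recommend keeping spark.network.timeout well above the heartbeat interval
        .set(&apos;spark.network.timeout&apos;, &apos;120s&apos;))
sc = SparkContext(conf=conf)
&lt;/code&gt;&lt;/pre&gt;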
&lt;p&gt;&lt;strong&gt;Increase driver and executor memory&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Out of memory issues and random crashes of the application were solved by increasing the memory from 20g per executor to 40g per executor as well as 40g for the driver. Happily, the machines in the production cluster were heavily provisioned with memory. This is a good practice with a new application, since you don’t know how much you will need at first.&lt;/p&gt;
&lt;p&gt;The issue was difficult to debug with precision, lacking accurate information, since the Spark UI reports very little memory consumption. In practice, as this setting is easy to change, we empirically settled on 40g being the smallest memory size for the application to run stably.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Increase parallelism: increase number of partitions in Kafka&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The input data was unbalanced, and most of the application processing time was spent processing Topic 1 (with 85% of the throughput). Kafka partitions are matched 1:1 with the number of partitions in the input RDD, leading to only 36 partitions, meaning we can only keep 36 cores busy on this task. To increase the parallelism, we need to increase the number of partitions. So we split topic 1 into 12 topics, each with 6 partitions, for a total of 72 partitions. We made a simple modification to the producer to divide the data evenly from the first log into 12 topics, instead of just one. Zero code needed to be modified on the consumer side.&lt;/p&gt;
&lt;p&gt;We also right-sized the number of partitions for the two other topics, in proportion to their relative importance in the input data, so we set topic 2 to 2 partitions and topic 3 to 8 partitions.&lt;/p&gt;
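&lt;p&gt;The consumer is Scala in the real project, but a PySpark sketch makes the parallelism point concrete: with the direct (receiver-less) approach, every Kafka partition becomes one partition of the micro-batch RDD, so more Kafka partitions means more tasks that can run in parallel. The topic names and broker addresses below are hypothetical:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils  # spark-streaming-kafka-0-8 API (Spark 1.6/2.x)

sc = SparkContext(appName=&apos;direct-stream-sketch&apos;)
ssc = StreamingContext(sc, 60)  # 1-minute batches, see the batch-window discussion below

# topic 1 split into 12 topics of 6 partitions each, plus the two smaller topics
topics = [&apos;log_a_%02d&apos; % i for i in range(12)] + [&apos;log_b&apos;, &apos;log_c&apos;]
kafka_params = {&apos;metadata.broker.list&apos;: &apos;broker1:9092,broker2:9092&apos;}

stream = KafkaUtils.createDirectStream(ssc, topics, kafka_params)

def log_partitions(rdd):
    # one RDD partition per Kafka partition in the subscribed topics
    print(&apos;partitions in this batch: %d&apos; % rdd.getNumPartitions())

stream.foreachRDD(log_partitions)
ssc.start()
ssc.awaitTermination()
&lt;/code&gt;&lt;/pre&gt;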
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/picture4-1604595452674.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Running more tasks in parallel. Before tuning, each stage always had 36 partitions!&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Right-size the executors&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The original application was running only 3 executors with 72 total cores. We configured the application to run with 80 cores at a maximum of 10 cores per executor, for a total of 8 executors. Note that with 16 real cores per node on a 10-node cluster, we’re leaving plenty of resources for Kafka brokers, Ignite, and HDFS/NN to run on.&lt;/p&gt;
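&lt;p&gt;Again purely as a PySpark-style sketch (the real applications are Scala, and these values can just as well be set via spark-submit or &lt;code&gt;spark-defaults.conf&lt;/code&gt;), the memory and executor sizing described above corresponds to the following Spark properties:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName(&apos;streaming-etl&apos;)
        # note: driver memory must normally be set via spark-submit or spark-defaults.conf,
        # since the driver JVM is already running by the time this code executes
        .set(&apos;spark.driver.memory&apos;, &apos;40g&apos;)
        .set(&apos;spark.executor.memory&apos;, &apos;40g&apos;)  # up from 20g to stop random out-of-memory crashes
        .set(&apos;spark.executor.cores&apos;, &apos;10&apos;)    # at most 10 cores per executor
        .set(&apos;spark.cores.max&apos;, &apos;80&apos;))        # 80 cores total -&gt; 8 executors
sc = SparkContext(conf=conf)
&lt;/code&gt;&lt;/pre&gt;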
&lt;p&gt;&lt;strong&gt;Increase the batch window from 30s to 1m&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The data is pushed into Kafka by the producer as batches every 30s, as it is gathered by FTP batches from the remote systems. Such an arrangement is common in telecom applications due to a need to deal with equipment and systems from a bewildering range of manufacturers, technology, and ages.&lt;/p&gt;
&lt;p&gt;This meant that the input stream was very lumpy, as could be seen in the Spark UI&apos;s Streaming tab.&lt;/p&gt;
&lt;p&gt;Increasing the window to 1m allowed us to smooth out the input and gave the system a chance to process the data in 1 minute or less and still be stable.&lt;/p&gt;
&lt;p&gt;To make sure of it, the team generated test data that simulated the known worst-case data, and with the new settings, the Spark Streaming job was now indeed stable. The team was also able to switch easily between the test data and the real production data stream, and to throttle the producers to configure how much data to let into the system. This was extremely helpful for testing various configurations quickly and seeing whether we had made progress or not.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Drop requirement to save to Hive, only use Ignite&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/picture5-1604595575920.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Discussion with the project managers revealed that Hive was not actually part of the requirements for the streaming application! Mainly, this is because the data in HBase could just as easily be used by the analytics; also, in the context of this application, each individual record doesn&apos;t actually need to be processed with a 100% guarantee.&lt;/p&gt;
&lt;p&gt;Indeed, in light of the goal of the system, the worst-case scenario for missing data is that a customer&apos;s call quality information cannot be found... which is already the case. In other words, the risk of data loss is not a deal-breaker, and the upside to gaining data is additional insights. As long as the great majority of the data is processed and stored, the business goals can be reached.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Results of all optimizations&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The streaming application finally became stable, with an optimized runtime of 30-35s.&lt;/p&gt;
&lt;p&gt;As it turns out, cutting out Hive also sped up the second Spark application that joins the data together, so that it now ran in 35m, which meant that both applications were now well within the project requirements.&lt;/p&gt;
&lt;p&gt;With the improvements from the next part, the final processing time of the Spark Streaming job went down into the low 20-second range, for a final speedup of a bit over 12 times.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Second target: Improve System Stability&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;We had to work quite hard on stability. Several strategies were required, as we will explain below.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Make the Spark Streaming application stable&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The work we did to fix the performance had a direct impact on system stability. If both applications are stable themselves and running on right-sized resources, then the system has the best chance to be stable overall.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Remove Mesos and use Spark Standalone&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The initial choice of Mesos to manage resources was forward-looking, but ultimately we decided to drop it from the final production system. At the outset, the plan was to have Mesos manage all the applications. But the team never could get Kafka and Ignite to play nicely with Mesos, so they were running in standalone mode, leaving only Spark to be managed by Mesos. With more time, there is little doubt that all the applications could have been properly configured to work with Mesos.&lt;/p&gt;
&lt;p&gt;Proposing to remove Mesos was a bit controversial, as Mesos is much more advanced and cool than Spark running in standalone mode.&lt;/p&gt;
&lt;p&gt;But the issue with Mesos was twofold:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Control over executor size and number was poor, a known issue (&lt;a target=&apos;\_blank&apos;  href=&apos;https://issues.apache.org/jira/browse/SPARK-5095&apos;&gt;SPARK-5095&lt;/a&gt;) with Spark 1.6 and fixed in Spark 2.0.&lt;/li&gt;
&lt;li&gt;Ignite and Kafka weren’t running inside Mesos, just Spark. Because of schedule pressure, the team had given up on trying to get those two services running in Mesos.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Mesos can only ever allocate resources well if it actually controls resources. In the case of this system, Kafka and Ignite are running outside of Mesos’ knowledge, meaning it’s going to assign resources to the Spark applications incorrectly.&lt;/p&gt;
&lt;p&gt;In addition, it’s a single-purpose cluster, so we can live with customizing the sizing of the resources for each application with a global view of the system’s resources. There is little need for dynamic resource allocations, scheduling queues, multi-tenancy, and other buzzwords.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Change the Ignite memory model&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;It is a known issue that when the heap controlled by the JVM gets very big (&gt;32GB), the cost of garbage collection is quite large. We could indeed see this problem when the join application runs: the stages with 25GB shuffle had some rows with spikes in GC time, ranging from 10 seconds up to more than a minute.&lt;/p&gt;
&lt;p&gt;The initial configuration of Ignite was to run ONHEAP_TIERED with 48GB worth of data cached on heap, then overflow drops to 12GB of off-heap memory. That setting was changed to the OFFHEAP_TIERED model. While slightly slower due to serialization cost, OFFHEAP_TIERED doesn&apos;t result in big garbage collections. It still runs in memory, so we estimated it would be a net gain.&lt;/p&gt;
&lt;p&gt;With this change, the run time for each batch dutifully came down by about five seconds, from 30 seconds down to about 25 seconds. In addition, successive batches tended to have much more similar processing time with a delta of 1-3 seconds, whereas it would previously vary by over 5 to 10 seconds.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Update the Ignite JVM settings&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;We followed the recommended JVM options as found in Ignite documentation’s performance tuning section (&lt;a target=&quot;_blank&quot; href=&quot;https://apacheignite.readme.io/docs/jvm-and-system-tuning&quot;&gt;https://apacheignite.readme.io/docs/jvm-and-system-tuning&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Improve the Spark code&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Some parts of the code assumed reliability, such as queries to Ignite, when in fact the operations could fail. These problems were fixed in the code, which now handles exceptions more gracefully, though there is probably work left to increase the robustness of the code. We can only find these spots by letting the application run.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reassign ZooKeeper to nodes 10-12&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Given that the cluster is medium-sized, it’s worth spreading the services as much as possible. We moved the ZooKeeper services from nodes 1-3 to nodes 10-12.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Tuning this application took about 1 week of full-time work. The main sources of information were the Spark UI and the Spark logs, which are easily accessible from the UI. The views of Jobs and Stages, as well as the Streaming tab, are really very useful.&lt;/p&gt;
&lt;h2&gt;What I learned&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Migrating a streaming application from a prototype on AWS to an on-premise cluster requires schedule time for testing&lt;/li&gt;
&lt;li&gt;Not testing the AWS prototype with realistic data was a big mistake&lt;/li&gt;
&lt;li&gt;Including many “bleeding-edge” OSS components (Apache Ignite and Mesos) with expectations of very high reliability is unrealistic&lt;/li&gt;
&lt;li&gt;A better architecture design could have simplified the system tremendously&lt;/li&gt;
&lt;li&gt;Tuning a Kafka/Spark Streaming application requires a holistic understanding of the entire system. It’s not simply about changing the parameter values of Spark; it’s a combination of the data flow characteristics, the application goals and value to the customer, the hardware and services, the application code, and then playing with Spark parameters.&lt;/li&gt;
&lt;li&gt;MapR Data Platform would have cut the development time, complexity, and cost for this project.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The project is a first for this particular telecom company, and they decided to go all-out on such an advanced, 100% open-source platform. They should be applauded for their pioneering spirit. But a better choice of platform and application architecture would have made their lives a lot easier.&lt;/p&gt;
&lt;h2&gt;The need for a converged big-data platform is now&lt;/h2&gt;
&lt;p&gt;In fact, the requirements for this project show the real-world business need for a state-of-the-art converged platform with a fast distributed files system, high-performance key-value store for persistence, and real-time streaming capabilities.&lt;/p&gt;
&lt;p&gt;A MapR solution could probably skip the requirement for a still speculative open-source project like Ignite, since the full software stack required by the architecture is already built-in and fully supported. Given this system is heading into production for a telecom operator with 24/7 reliability expectation, such an advantage is considerable.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Data Fabric: The Future of Data Management]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may…]]></description><link>https://developer.hpe.com/data-fabric-the-future-of-data-management/</link><guid isPermaLink="false">https://developer.hpe.com/data-fabric-the-future-of-data-management/</guid><pubDate>Thu, 05 Nov 2020 16:35:05 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Karen Whipple&quot;,
&quot;publish&quot;: &quot;2018-08-08T07:00:00.000&quot;,
&quot;tags&quot;: &quot;&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;&quot;Data fabric&quot; is a relatively new term, but its importance is not. For years, enterprises have struggled to integrate all their data into a single, scalable platform. A data fabric simply describes a comprehensive way to achieve that goal.&lt;/p&gt;
&lt;p&gt;And, with &lt;a href=&quot;https://www.emc.com/leadership/digital-universe/2014iview/executive-summary.htm&quot;&gt;data growing at a rate of 40 percent per year&lt;/a&gt; (and not showing any signs of slowing down), achieving that goal is more important than ever.&lt;/p&gt;
&lt;p&gt;Let&apos;s learn more about what a data fabric is, why you should care, and how MapR is offering a data fabric that&apos;s more advanced than any other.&lt;/p&gt;
&lt;h2&gt;What Is Data Fabric?&lt;/h2&gt;
&lt;p&gt;As companies expand, use an increasing number of applications, and collect data into more and more silos, their need for a better data solution grows exponentially.&lt;/p&gt;
&lt;p&gt;When data is isolated and relegated to silos, it becomes stagnant and inaccessible. Legacy systems only make it more difficult to access data, and the result is lowered productivity and efficiency.&lt;/p&gt;
&lt;p&gt;Plus, it&apos;s not only silos that separate data — even state-of-the-art technologies like data centers and the cloud can inadvertently work to divide data into inconvenient clusters.&lt;/p&gt;
&lt;p&gt;The cloud is particularly troublesome because it makes it difficult for companies to leverage data from public clouds as well as that which is stored on-site.&lt;/p&gt;
&lt;p&gt;EnterpriseTech &lt;a href=&quot;https://enterprisetech.com/2017/10/02/global-data-fabric-takes-diversity-data-types/&quot;&gt;provides an excellent illustration&lt;/a&gt; of this problem:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&quot;For example, you might have an analyst in Cleveland who needs to run a query on an exception report generated by a robotic arm in Munich, Germany. To detect anomalies, the data from that robotic arm needs to be consistent with your data warehouse. With global strong consistency, your data is consistent between the device – in this case the robotic arm – and the data center. The data fabric ensures this data is available and consistent regardless of where it originates and where it is accessed.&quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Just like an actual piece of fabric, a data fabric can be placed over a wide expanse of space. That&apos;s the main idea behind data fabrics, which can encompass a wide variety of data locations and sources.&lt;/p&gt;
&lt;p&gt;What a data fabric does is enable the processing, management, analysis, and storage of almost any amount of data from a multitude of sources. The data fabric then enables applications and tools to access that data using an array of interfaces. It&apos;s also important to note that data fabrics leverage data in real time.  &lt;/p&gt;
&lt;h2&gt;Why Should You Care?&lt;/h2&gt;
&lt;p&gt;As we mentioned above, enterprises to whom data is important often find that their current systems are too slow, too divided, and inefficient.&lt;/p&gt;
&lt;p&gt;And, in larger companies, having multiple groups of people store and retrieve data in their own way results in a fragmented network of data that&apos;s far from unified.&lt;/p&gt;
&lt;p&gt;Unlike other solutions, a data fabric is not a band-aid fix. Rather, it&apos;s a permanent and scalable way to bring all your data under the umbrella of one unified platform.&lt;/p&gt;
&lt;p&gt;A data fabric is able to solve a number of problems, such as:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Low data availability&lt;/li&gt;
&lt;li&gt;Little to no reliability in terms of storage and security&lt;/li&gt;
&lt;li&gt;Siloed data&lt;/li&gt;
&lt;li&gt;Poor scalability that&apos;s unable to adapt to different amounts of data&lt;/li&gt;
&lt;li&gt;Reliance on underperforming legacy systems&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;What Makes MapR&apos;s Data Fabric Unique?&lt;/h2&gt;
&lt;p&gt;MapR&apos;s Vice President of Technology Strategy, Crystal Valentine, puts it best in &lt;a href=&quot;https://forbes.com/sites/danwoods/2017/11/14/mapr-why-data-fabric-is-now-vital-to-the-app-stack&quot;&gt;an interview with Forbes&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&quot;From our founding, MapR has had a singular focus on building a platform with the capabilities to support next-gen applications. Our vision of a future in which applications can leverage large amounts of distributed data in real time to provide enormous competitive advantages is quickly becoming reality, and our strong growth numbers are testament to the fact that that is resonating with our customers.&quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Although other data fabric solutions are available, most focus on more basic functions, such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Storing and retrieving data from various sources&lt;/li&gt;
&lt;li&gt;Accessing those sources, using one standard method or application program interface (API)&lt;/li&gt;
&lt;li&gt;Integrating data both within and across sources&lt;/li&gt;
&lt;li&gt;Applying processing or real-time analytics to data from any source&lt;/li&gt;
&lt;li&gt;Syncing data with cloud storage&lt;/li&gt;
&lt;li&gt;Providing backup and disaster recovery (DR) support&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;MapR&apos;s data fabric goes above and beyond by allowing you to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Unify data silos&lt;/li&gt;
&lt;li&gt;Deploy anywhere&lt;/li&gt;
&lt;li&gt;Scale up&lt;/li&gt;
&lt;li&gt;Replicate data across data centers, edge devices, and cloud instances&lt;/li&gt;
&lt;li&gt;Search and explore data with Apache Drill&lt;/li&gt;
&lt;li&gt;Safely secure your data with a single security model with granular controls&lt;/li&gt;
&lt;li&gt;Easily move exabytes of data across edge, on-premises, and cloud deployments&lt;/li&gt;
&lt;li&gt;Extend IoT applications to remote locations&lt;/li&gt;
&lt;li&gt;Increase your data transfer speed&lt;/li&gt;
&lt;li&gt;Make faster decisions&lt;/li&gt;
&lt;li&gt;Effortlessly access data&lt;/li&gt;
&lt;li&gt;Reduce both operational and capital expenses&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Plus, our roster of high-profile clients, who are winning at data with the MapR Data Fabric, speaks for itself. Our customers include Hewlett-Packard, SAP SE, NTT Security, Mediahub, and more.&lt;/p&gt;
&lt;p&gt;In &lt;a href=&quot;https://dzone.com/articles/the-rise-of-the-data-fabric&quot;&gt;an article for DZone&lt;/a&gt;, Dan Kusnetzky says:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&quot;In my industry research, I&apos;ve come across a number of definitions of the data fabric notion. So far, the one being put forward by MapR Technologies seems to be the most comprehensive. Furthermore, the company has been enjoying some level of success as it persuades customers to adopt its approach. The company appears to understand the importance of distributed processing in the era of mobile, cloud, and the fast proliferation of IoT and how all of these are important members of enterprise computing applications. They also appear to understand that these devices also represent potential threats and have discussed how to bring them in safely. The company has also considered where and how data should be stored so that it is available broadly, but still secure and reliable.&quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Essentially, MapR goes beyond the basic requirements of a data fabric to provide our customers with a truly revolutionary product that makes data analysis, storage, and security fast, reliable, and easy.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Big Data Opportunities for Telecommunications]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may…]]></description><link>https://developer.hpe.com/big-data-opportunities-for-telecommunications/</link><guid isPermaLink="false">https://developer.hpe.com/big-data-opportunities-for-telecommunications/</guid><pubDate>Thu, 05 Nov 2020 16:27:29 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Carol McDonald&quot;,
&quot;publish&quot;: &quot;2017-05-09T00:00:00.000Z&quot;,
&quot;tags&quot;: &quot;apache-spark&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;&lt;em&gt;The telecommunications industry is on the verge of a major transformation through the use of advanced analytics and big data technologies like the MapR Data Platform.&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;The Motivation for Big Data&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;With the rapid expansion of smart phones and other connected mobile devices, communications service providers (CSPs) need to rapidly process, store, and derive insights from the diverse volume of data traveling across their networks. Big data analytics can help CSPs improve profitability by optimizing network services/usage, enhancing customer experience, and improving security.  According to McKinsey, &lt;a target=&quot;_blank&quot; href=&quot;http://www.mckinsey.com/industries/telecommunications/our-insights/telcos-the-untapped-promise-of-big-data&quot;&gt;the potential for Telcos to profit from applying data science effectively is substantial&lt;/a&gt;. Examples include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Predicting the periods of heaviest network usage, and targeting steps to relieve congestion&lt;/li&gt;
&lt;li&gt;Identifying the customers most likely to defect, and targeting steps to prevent churn&lt;/li&gt;
&lt;li&gt;Identifying the customers most likely to have problems paying bills, and targeting steps to improve the recovery of payments&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/0-1605076495751.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Big Data Use Cases In Telecom&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Telecommunication companies collect massive amounts of data from call detail records, mobile phone usage, network equipment, server logs, billing, and social networks, providing lots of information about their customers and network, but how can telecom companies use this data to improve their business?&lt;/p&gt;
&lt;p&gt;Most telecom use cases fall into these main categories: customer acquisition and retention, network services optimization, and security.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/1-1605076513981.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Example Telecom Big Data Use Cases&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Data-Driven Improvement of Services or Product&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Telecoms need to share data between cell towers, users, and processing centers, and due to the sheer volume of this data, it is important to process it near the source and then efficiently transfer it to various data centers for further use. &lt;u&gt;MapR Event Store&lt;/u&gt;, a new distributed messaging system, is uniquely effective at transporting huge amounts of data and making it available with reliable geo-distributed replication across multiple data centers. With MapR Event Store, you can replicate streams in a master-slave, many-to-one, or multi-master configuration between thousands of geographically distributed clusters.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/2-1605076542289.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;One MapR customer is utilizing MapR Event Store to collect real-time data from all of its regional data centers and bring it to the HQ Central Data Center.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;BEFORE&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Before, the customer was using FTP to transfer data from antennas to regional data centers and to the HQ Central Data center, but the FTP transfer meant extreme latency throughout the data pipeline.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/3-1605076561221.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;AFTER&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Now data is collected at regional data centers with MapR Event Store and made available in real time to regional dashboards.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/4-1605076603434.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;MapR Event Store Topics at regional data centers are replicated in a many-to-one configuration to the HQ Central Data Center, making events available in real time to the HQ dashboard. This means they can now monitor global performance and react fast enough to improve customer services.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/11/5-1-1605076623532.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Being able to process high throughput geo-distributed events in real time enables:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Understanding how and where service issues are trending and how that is affecting customers.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Crowd-based antenna optimization&lt;/strong&gt;: Monitor quickly changing network usage patterns, and reconfigure network support to handle short-term surges, such as heavy usage near a stadium during a sporting event.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Optimizing Services with Equipment Monitoring, Capacity Planning, and Preventative Maintenance&lt;/strong&gt;:
&lt;ul&gt;
&lt;li&gt;Dropped calls&lt;/li&gt;
&lt;li&gt;Lack of network coverage, resulting in poor customer experience&lt;/li&gt;
&lt;li&gt;Bandwidth issues&lt;/li&gt;
&lt;li&gt;Poor download times&lt;/li&gt;
&lt;li&gt;Inordinate service wait times&lt;/li&gt;
&lt;li&gt;Switching, frequency utilization, capacity use&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Analyzing these events in real time is the key to timely insights on network services in order to improve customer satisfaction.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Customer 360&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Using data science in order to better understand and predict customer behavior is an iterative process, which involves:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Data Discovery and Model Creation:
&lt;ul&gt;
&lt;li&gt;Analysis of historical data.&lt;/li&gt;
&lt;li&gt;Identifying new data sources, which traditional analytics or databases are not using due to the format, size, or structure.&lt;/li&gt;
&lt;li&gt;Collecting, correlating, and analyzing data across multiple data sources.&lt;/li&gt;
&lt;li&gt;Knowing and applying the right kind of machine learning algorithms to get value out of the data.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Using the Model in production to make predictions.&lt;/li&gt;
&lt;li&gt;Data Discovery and updating the Model with new data.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;In order to understand the customer, a number of factors can be analyzed such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Customer demographic data (age, marital status, etc.)&lt;/li&gt;
&lt;li&gt;Sentiment analysis of social media&lt;/li&gt;
&lt;li&gt;Customer usage patterns, geographical usage trends&lt;/li&gt;
&lt;li&gt;Calling-circle data&lt;/li&gt;
&lt;li&gt;Browsing behavior from clickstream logs&lt;/li&gt;
&lt;li&gt;Support call center statistics&lt;/li&gt;
&lt;li&gt;Historical data that shows patterns of behavior that suggest churn&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;With this analysis, telecom companies can gain insights to predict and enhance the customer experience, prevent churn, and tailor marketing campaigns.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Threat Detection&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Solutionary, a subsidiary of NTT Group, is a leader in Managed Security Services. They provide Threat Intelligence, Incident Response, Compliance and Vulnerability Management as a service to their clients. Their platform collects and correlates vast amounts of data from logs, endpoints, firewalls, and network devices.&lt;/p&gt;
&lt;p&gt;They needed to improve scalability as the data volume grew, but it was cost-prohibitive with their existing Oracle database solution. The old solution could not process the unstructured log data at scale and there were also major performance issues.&lt;/p&gt;
&lt;p&gt;They replaced their RDBMS solution with the MapR Data Platform to achieve scalability while still meeting reliability requirements. Their new solution combines machine learning algorithms, complex event processing, and predictive analytics to detect real-time security threats.&lt;/p&gt;
&lt;p&gt;All of the components of the use case architectures we just discussed can run on the same cluster with the MapR Data Platform, which provides advantages such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Less complexity, fewer moving parts, fewer things to manage: converging multiple clusters for Streams/HBase/Spark/Hadoop into one cluster.&lt;/li&gt;
&lt;li&gt;&quot;Joining&quot; data sources into one core data mediation platform so that applications consume data in an easier way.&lt;/li&gt;
&lt;li&gt;Unified security.&lt;/li&gt;
&lt;li&gt;High reliability and high availability, replication from datacenter to datacenter.&lt;/li&gt;
&lt;li&gt;Multi-tenancy: MapR Event Store is able to have essentially unlimited topics for lots of tenants.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Telecom is a classic example of the big data issues of huge volume and velocity of data flow, but CSPs also have high requirements for quick responses to events, security, and reliability. The use cases we just went over showed how telecom companies can not only address these requirements, but also profit from the huge amount of information in their data to improve their business.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[MapR, Kubernetes, Spark and Drill: A Two-Part Guide to Accelerating Your AI and Analytics Workloads]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may…]]></description><link>https://developer.hpe.com/mapr-kubernetes-spark-and-drill-a-two-part-guide-to-accelerating-your-ai/</link><guid isPermaLink="false">https://developer.hpe.com/mapr-kubernetes-spark-and-drill-a-two-part-guide-to-accelerating-your-ai/</guid><pubDate>Tue, 03 Nov 2020 16:17:29 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Suresh Ollala&quot;,
&quot;publish&quot;: &quot;2019-04-02T07:00:01.000Z&quot;,
&quot;tags&quot;: &quot;mapr-platform&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;&lt;em&gt;Highly available, scalable, and elastic data platform for analytics and AI/ML&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/image3-1604420243941.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The early days of big data platforms focused on optimizing storage and compute on commodity hardware, mostly by software-level capabilities such as data locality support, better local caching, and effective data distribution. This worked well, but at a cost of flexibility.&lt;/p&gt;
&lt;p&gt;Things have changed over the last 5 years – hardware innovation in networking, CPU, and disk has made data locality no longer a bottleneck. A hybrid infrastructure of on-premises and cloud technology adoption forced innovation to scale storage and compute separately: AI/ML workloads require compute burst infrastructure, and cloud-native apps require flexibility of scheduling.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/image1-1604420235871.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Scalability and Manageability&lt;/h2&gt;
&lt;p&gt;Separating storage from compute makes scaling infrastructure to match bursty and rapidly growing workloads as well as datasets more manageable. One can scale up or down strictly based on the workload and independent of each other. It also provides total flexibility in matching resources to the actual storage and compute capacity required at any given time. The separation of the two also reduces the complexity of managing multiple environments. For instance, production, development, and ad hoc analytics processes could run against the same dataset, thereby eliminating the administrative overhead of data replication and potential synchronization.&lt;/p&gt;
&lt;h2&gt;Agility&lt;/h2&gt;
&lt;p&gt;Decoupling storage and compute gives infrastructure greater agility. By relying on modern scalable storage systems, teams don&apos;t have to know the compute and storage capacity needed well in advance, freeing them from having to do a guesstimate, which results in either over-provisioning or under-provisioning the resources.&lt;/p&gt;
&lt;p&gt;In addition, with innovation in systems and cloud infrastructure, it&apos;s easier to choose storage-optimized, compute-optimized, and memory-optimized systems. This removes lock-in to a particular set of machines or vendors. Flexible configurations enable admins to determine to what degree to optimize for cost vs. performance. For example, if a particular workload needs to be processed quickly and cost is not a key factor, then the admin can configure more nodes to speed up processing. If controlling cost is critical, then the admin can configure fewer nodes and utilize other cost-saving features, such as Auto Scaling and Spot Instances.&lt;/p&gt;
&lt;h2&gt;Lower TCO&lt;/h2&gt;
&lt;p&gt;Decoupling storage and compute can lead to lower total cost of ownership. Modern data infrastructure allows admins to configure the infrastructure for a pay-per-use model, so organizations only pay for storage space and point-in-time compute capacity. Today&apos;s organizations depend on a flexible infrastructure to speed up new initiatives, which requires low cost and faster infrastructure provisioning. DevOps culture is widely adopted to cut down on the red tape for system procurement and provisioning. It is important to note that organizations are looking for hybrid infrastructure, where cost is an important factor to consider along with consistent user and admin experience.&lt;/p&gt;
&lt;h2&gt;Evolution of Kubernetes as an Infrastructure Neutral Platform&lt;/h2&gt;
&lt;p&gt;Kubernetes has evolved to meet the needs of Developer, IT, and DevOps users.&lt;/p&gt;
&lt;p&gt;If you are a &lt;strong&gt;Developer&lt;/strong&gt;, Kubernetes provides the necessary APIs that are needed today to build the next generation of applications, while at the same time supporting the more traditional ones. This lets developers focus on the app&apos;s functionality, instead of worrying about all the nitty-gritty details of how to manage the application. With Kubernetes, the developer experience is consistent, whether they work on their laptops or in a production environment. The application will not break when moved from a local machine to production, which saves the headache of debugging it.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;IT Operations Perspective:&lt;/strong&gt; System administrators like Kubernetes because it automates a lot of the mundane, boring, and error-prone operational tasks to keep applications running smoothly at scale. With Kubernetes, a server outage no longer keeps them up at night.&lt;/p&gt;
&lt;h2&gt;An Infrastructure Framework for Today&lt;/h2&gt;
&lt;p&gt;These days, developers are called on to write applications that run across multiple operating environments, including dedicated on-prem servers, virtualized private clouds, and public clouds such as AWS and Azure. Traditionally, applications and the tooling that support them have been closely tied to the underlying infrastructure, so it was costly to use other deployment models, despite their potential advantages. This meant that applications became dependent on a particular environment in several respects, including performance issues related to a specific network architecture, adherence to cloud provider-specific constructs, such as proprietary orchestration techniques, and dependencies on a particular back-end storage system.&lt;/p&gt;
&lt;p&gt;PaaS tries to get around these issues, but often at the cost of imposing strict requirements in areas like programming languages and application frameworks. Thus, PaaS is off limits to many development teams.&lt;/p&gt;
&lt;p&gt;Kubernetes eliminates infrastructure lock-in by providing core capabilities for containers without imposing restrictions. It achieves this through a combination of features within the Kubernetes platform, including Pods and Services.&lt;/p&gt;
&lt;h2&gt;Better Management Through Modularity&lt;/h2&gt;
&lt;p&gt;Containers allow applications to be decomposed into smaller parts with a clear separation of concerns. The abstraction layer provided for an individual container image allows us to fundamentally rethink how distributed applications are built. This modular approach enables faster development by smaller, more focused teams that are each responsible for specific containers. It also allows us to isolate dependencies and make wider use of well-tuned, smaller components.&lt;/p&gt;
&lt;p&gt;But this can&apos;t be achieved by containers alone; it requires a system for integrating and orchestrating these modular parts. Kubernetes achieves this in part using Pods – typically a collection of containers that are controlled as a single application. The containers share resources, such as file systems, kernel namespaces, and an IP address. By allowing containers to be collocated in this manner, Kubernetes removes the temptation to cram too much functionality into a single container image.&lt;/p&gt;
&lt;p&gt;The concept of a Service in Kubernetes is used to group together a collection of Pods that perform a similar function. Services can be easily configured for discoverability, observability, horizontal scaling, and load balancing.&lt;/p&gt;
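&lt;p&gt;To make the Pod and Service concepts concrete, here is a minimal sketch (not from the original post) that uses the official Kubernetes Python client to create a Service grouping all Pods that carry the label app=web. The Service name, namespace, and ports are illustrative assumptions.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Minimal sketch: expose every Pod labelled app=web behind one Service.
# Assumes the official &quot;kubernetes&quot; Python client and a reachable cluster;
# the Service name, namespace, and ports below are examples only.
from kubernetes import client, config

config.load_kube_config()   # read cluster credentials from ~/.kube/config
core = client.CoreV1Api()

service = client.V1Service(
    metadata=client.V1ObjectMeta(name=&quot;web-svc&quot;),
    spec=client.V1ServiceSpec(
        selector={&quot;app&quot;: &quot;web&quot;},   # any Pod with this label becomes a backend
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)
core.create_namespaced_service(namespace=&quot;default&quot;, body=service)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Because the Service selects Pods by label rather than by name, Pods can be added, replaced, or rescheduled without any change to the Service definition.&lt;/p&gt;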
&lt;h2&gt;Deploying and Updating Software at Scale&lt;/h2&gt;
&lt;p&gt;DevOps emerged as a method to speed the process of building, testing, and releasing software. Its corollary has been a shift in emphasis from managing infrastructure to managing how software is deployed and updated at scale. Most infrastructure frameworks don&apos;t support this model, but Kubernetes does, in part through Kubernetes Controllers. Thanks to controllers, it&apos;s easy to use infrastructure to manage the application lifecycle.&lt;/p&gt;
&lt;p&gt;The Deployment Controller simplifies a number of complex management tasks. For example:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Scalability.&lt;/strong&gt; Software can be deployed for the first time in a scale-out manner across Pods, and deployments can be scaled in or out at any time.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Visibility.&lt;/strong&gt; Identify completed, in-process, and failing deployments with status querying capabilities.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Time savings.&lt;/strong&gt; Pause a deployment at any time and resume it later.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Version control.&lt;/strong&gt; Update deployed Pods using newer versions of application images and roll back to an earlier deployment if the current version is not stable.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Among other possibilities, Kubernetes simplifies a few specific deployment operations that are especially valuable to developers of modern applications. These include the following (a short sketch using the Kubernetes Python client follows the list):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Horizontal autoscaling.&lt;/strong&gt; Kubernetes automatically sizes a deployment&apos;s number of Pods based on the usage of specified resources (within defined limits).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Rolling updates.&lt;/strong&gt; Updates to a Kubernetes deployment are orchestrated in &quot;rolling fashion,&quot; across the deployment&apos;s Pods. These rolling updates are orchestrated while working with optional predefined limits on the number of Pods that can be unavailable and the number of spare Pods that may exist temporarily.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Canary deployments.&lt;/strong&gt; A useful pattern when deploying a new version of a deployment is to first test the new deployment in production, in parallel with the previous version, and then scale up the new deployment while simultaneously scaling down the previous deployment.&lt;/li&gt;
&lt;/ul&gt;
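&lt;p&gt;As a rough illustration of the operations listed above (a sketch, not part of the original article), the snippet below uses the Kubernetes Python client to scale a Deployment and to trigger a rolling update by patching its Pod template. The Deployment name, namespace, container name, and images are assumptions.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Sketch: scale a Deployment and roll out a new image version.
# Assumes a Deployment named &quot;web&quot; already exists in the &quot;default&quot; namespace
# and that its single container is also named &quot;web&quot;; adjust to your cluster.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Scalability: change the replica count at any time.
apps.patch_namespaced_deployment_scale(
    name=&quot;web&quot;, namespace=&quot;default&quot;,
    body={&quot;spec&quot;: {&quot;replicas&quot;: 5}},
)

# Rolling update / version control: patching the Pod template image makes the
# Deployment controller replace Pods gradually; patching back to the previous
# image later acts as a rollback.
apps.patch_namespaced_deployment(
    name=&quot;web&quot;, namespace=&quot;default&quot;,
    body={&quot;spec&quot;: {&quot;template&quot;: {&quot;spec&quot;: {
        &quot;containers&quot;: [{&quot;name&quot;: &quot;web&quot;, &quot;image&quot;: &quot;nginx:1.21&quot;}]
    }}}},
)
&lt;/code&gt;&lt;/pre&gt;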
&lt;p&gt;Kubernetes marks a breakthrough for DevOps because it allows teams to keep pace with the requirements of modern software development. In the absence of Kubernetes, teams have often been forced to script their own software deployment, scaling, and update workflows. Some organizations employ large teams to handle those tasks alone. Kubernetes allows us to derive maximum utility from containers and build cloud-native applications that can run anywhere, independent of cloud-specific requirements. This is clearly the efficient model for application development and operations we&apos;ve been waiting for.&lt;/p&gt;
&lt;h2&gt;Why MapR is Best Suited for Kubernetes&lt;/h2&gt;
&lt;p&gt;The MapR Data Platform, time-tested across very large deployments, aligns with Kubernetes by offering several &quot;add-on&quot; features. MapR augments the capabilities of Kubernetes by offering heuristics on optimal CPU resources required by app containers. For instance, to run a Spark cluster, MapR offers a typical t-shirt sizing of small, medium, and large options with CPU and memory resources to pick from. This ability, combined with the auto scaling features of Kubernetes, lends end users the capability to create and scale compute up and down intelligently. MapR also introduces the concept of a tenant, with CPU resources dedicated to it, wherein a tenant can be any app container – Spark, Drill, or a custom app. Multiple tenants with resource isolation can be efficiently run on the same platform with MapR data volumes mounted to tenants in an exclusive or shared mode.&lt;/p&gt;
&lt;p&gt;With these succinct capabilities, MapR along with Kubernetes offers ease of use and management, making it attractive for organizations to embrace Kubernetes and running applications in containers.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Better Complex Event Processing at Scale Using a Microservices-based Streaming Architecture (Part 1)]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may…]]></description><link>https://developer.hpe.com/better-complex-event-processing-at-scale-using-a-microservices-based-str/</link><guid isPermaLink="false">https://developer.hpe.com/better-complex-event-processing-at-scale-using-a-microservices-based-str/</guid><pubDate>Tue, 03 Nov 2020 16:05:36 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Mathieu Dumoulin&quot;,
&quot;publish&quot;: &quot;2017-01-09T06:00:00.000Z&quot;,
&quot;tags&quot;: &quot;machine-learning&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;&lt;em&gt;A microservice-based streaming architecture combined with an open source rule engine makes real-time business rules easy&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;This post is intended as a detailed account of a project I undertook to integrate an OSS business rules engine with a modern, Kafka-style stream messaging system. The goal of the project, an exercise in what is better known as Complex Event Processing (CEP), is to enable real-time decisions on streaming data, such as in IoT use cases.&lt;/p&gt;
&lt;p&gt;After much writing, I’ve decided to split the post into two parts. In the first part, I’ll explain what CEP is and why it is useful, then describe the architectural solution and why we feel it is a good fit for many production use cases.&lt;/p&gt;
&lt;p&gt;In &lt;a href=&quot;https://developer.hpe.com/blog/real-time-smart-city-traffic-monitoring-using-microservices-based-streaming-architecture-part-2/&quot;&gt;the second post&lt;/a&gt;, I’ll show a concrete example based on a road traffic monitoring system and give as much detail as possible about how it was made.&lt;/p&gt;
&lt;p&gt;So without further ado, on to part 1!&lt;/p&gt;
&lt;h2&gt;Overview&lt;/h2&gt;
&lt;p&gt;As of 2015, the worldwide enterprise application software market is worth around 150 billion USD, according to Gartner Inc. It’s a huge market where one of the most common types of application revolves around applying some kind of business logic to data generated from various aspects of the business.&lt;/p&gt;
&lt;p&gt;These days, modern enterprise applications need to connect to ever more types of data sources, scale with the size of the data and the number of users, be reliable, and perform quickly. Long, custom application development cycles of one year or more are unappealing as business needs and conditions change, thus rendering the application obsolete before it is even put into production.&lt;/p&gt;
&lt;p&gt;In very large, country-wide, regional or global organisations, or organisations with exceptional data use in industries like finance, healthcare or IT, the needs stay the same, but must be met using big data technologies. This opens up a whole new class of difficulties that have made the cost of developing enterprise applications at scale extremely expensive, and it puts up very high barriers in terms of IT infrastructure and know-how requirements.&lt;/p&gt;
&lt;p&gt;So what is needed is a way to run business logic on data collected across a variety of sources, potentially at very large scales and ideally in real time, like an Internet of Things-type of application.&lt;/p&gt;
&lt;h2&gt;Understanding Complex Event Processing (CEP)&lt;/h2&gt;
&lt;p&gt;Complex event processing, or CEP for short, is not as complex as the name might suggest. Fundamentally, CEP is about applying business rules to streaming event data. Event data is simply data with a timestamp field. Examples of this kind of data might be log entries for a web server, receipts from purchases, or sensor data, all of which can be viewed as a constant stream of events. Applying rules to this streaming data enables useful actions to be taken in response.&lt;/p&gt;
&lt;p&gt;Here is an example for a smart home which has sensors at the doors, a smart WiFi router, and room movement detectors. With CEP streaming all the data into a home server, a user could make some rules like the following:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;If it&apos;s daytime and the door is closed and no phones are connected to the WiFi, set the house to “nobody home”&lt;/li&gt;
&lt;li&gt;If nobody is home and the door is unlocked, then lock the door and turn on the alarm&lt;/li&gt;
&lt;li&gt;If nobody is home and it&apos;s winter, lower the house temperature to 18C&lt;/li&gt;
&lt;li&gt;If nobody is home and it&apos;s summer, turn off the air conditioning&lt;/li&gt;
&lt;li&gt;If nobody is home and the door is unlocked by a family member, then turn off the alarm and set the house to “people are home”&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Having a bunch of simple rules like these will quickly add up to a very smart home indeed. In fact, such capabilities are already available for purchase in several competing smart home &quot;hub&quot; devices that use common protocols to read information from compatible sensor devices around the house and then push actions back when some rules are met.&lt;/p&gt;
&lt;p&gt;This kind of example can be easily ported to many other domains. For example, in retail, purchase histories and beacons could be used to generate personalized, location-sensitive messages or coupons. In industrial applications, many machine tools could be operated and maintained more easily using a combination of relatively simple logical rules such as, &quot;If the red button of this machine is lit, then it must be stopped.&quot;&lt;/p&gt;
&lt;h2&gt;CEP Rule-engine vs. Hand Coding&lt;/h2&gt;
&lt;p&gt;The engineers reading this are probably not very impressed so far, since applying simple rules to streaming events sounds straightforward. A smart home use case such as the one described above could easily (well, to a point) be handled entirely by hand coding in Python, running on an old repurposed PC or even a Raspberry Pi.&lt;/p&gt;
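&lt;p&gt;To illustrate what such a hand-coded approach might look like (a sketch, not code from the original project), the smart home rules above can be written as plain Python conditionals. The sensor state keys and action names are hypothetical.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Hand-coded smart home rules: inspect the sensor state, return actions to take.
# The state keys and action names are illustrative, not a real sensor schema.
def apply_rules(state):
    actions = []
    if state[&quot;daytime&quot;] and state[&quot;door_closed&quot;] and state[&quot;phones_on_wifi&quot;] == 0:
        state[&quot;nobody_home&quot;] = True
    if state[&quot;nobody_home&quot;] and not state[&quot;door_locked&quot;]:
        actions += [&quot;lock_door&quot;, &quot;turn_on_alarm&quot;]
    if state[&quot;nobody_home&quot;] and state[&quot;season&quot;] == &quot;winter&quot;:
        actions.append(&quot;set_temperature_18C&quot;)
    if state[&quot;nobody_home&quot;] and state[&quot;season&quot;] == &quot;summer&quot;:
        actions.append(&quot;turn_off_air_conditioning&quot;)
    return actions

print(apply_rules({&quot;daytime&quot;: True, &quot;door_closed&quot;: True, &quot;phones_on_wifi&quot;: 0,
                   &quot;nobody_home&quot;: False, &quot;door_locked&quot;: False, &quot;season&quot;: &quot;winter&quot;}))
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Even this tiny sketch hints at why the approach degrades as the system grows: every new rule is another hard-coded branch that only a developer can change, test, and redeploy.&lt;/p&gt;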
&lt;p&gt;What are the parts of this type of project?&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Data ingest  &lt;/li&gt;
&lt;li&gt;Defining rules on the data&lt;/li&gt;
&lt;li&gt;Executing the rules  &lt;/li&gt;
&lt;li&gt;Taking action from rules when the conditions are met.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Good software architecture calls for trying to make the parts most likely to change easy to change, at the cost of making other parts more difficult. What is the part most likely to change? Data ingest will only change when a new sensor is added, but a given sensor&apos;s data will not change suddenly. Executing rules in the abstract is always the same; what varies is the rule itself. Taking an action, once coded and working, doesn&apos;t really change, but it should be easy to add new actions over time.&lt;/p&gt;
&lt;p&gt;When the use case starts to scale and the number of rules increases, the efficiency of the rules processing engine becomes important. And as the number of rules grows, making rules easy to edit is not just a &quot;nice to have&quot; feature, but a core requirement.&lt;/p&gt;
&lt;p&gt;Another often-used argument is the separation of business logic from the software development life cycle (SDLC). The business needs to move faster than software development, and by using a rules engine, the two can move independently for the most part.&lt;/p&gt;
&lt;h2&gt;CEP is “Baked Into” IoT Applications&lt;/h2&gt;
&lt;p&gt;CEP is almost a requirement for any kind of IoT application such as smart homes, smart agriculture, Industry 4.0, or telecom data. It&apos;s a requirement in the sense that putting aside how the feature is implemented, IoT needs to apply rules to streaming event data. This is true whether it&apos;s at small scale in a single private home or at large scale across several factories scattered across the globe.&lt;/p&gt;
&lt;p&gt;An ideal design, based on what we just described, argues against a hand-coded solution and in favor of what is known as a &quot;business rules processing engine.&quot; Several exist in the open source world, the most well known being Drools.&lt;/p&gt;
&lt;h2&gt;Drools: Open Source Business Rules Engine&lt;/h2&gt;
&lt;p&gt;Drools is an open source project developed under the JBoss umbrella of open source projects. It is a project with a long history of active development and it’s currently at version 6.5.0.Final with version 7 in beta. It is reasonably modern as it supports Java 8&apos;s vastly improved API.&lt;/p&gt;
&lt;p&gt;Drools has all the characteristics we are looking for in terms of a rules engine, with a well-defined DSL to define rules, and a rules engine based on the RETE algorithm that is well optimized and very fast. In addition, the documentation is thorough and there are a good number of books available to learn all about how to use this powerful framework.&lt;/p&gt;
&lt;p&gt;Finally, Drools comes with a GUI called Workbench that allows us to create and edit rules visually without any need for coding. This is a killer feature that puts the power of rules within the reach of business analysts.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/picture1-1604419877337.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Streaming Architecture Enables CEP for Big Data&lt;/h2&gt;
&lt;p&gt;A streaming architecture is a critical component to CEP. The entire point of CEP is to make decisions in (near) real-time over streaming data, as opposed to taking actions from analysis of historical data done as a batch process.&lt;/p&gt;
&lt;p&gt;CEP is all about agility: potentially complex behavior arises from the interaction of many simple rules, all applied over the data in memory and in real time. A streaming, microservices-based architecture is becoming a standard for modern, large-scale systems.&lt;/p&gt;
&lt;p&gt;I also presented a talk on this topic at Strata Singapore 2016. &lt;a target=&quot;_blank&quot; href=&quot;http://www.slideshare.net/mathieudumoulin2/cep-simplified-streaming-architecture-strata-singapore-2016&quot;&gt;Please go take a look on Slideshare&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/picture2-1604419889212.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In general terms, the solution will look like the graph above. Data sources, such as sensors, cash registers, or logs, are collected and, with light ETL, added to a stream. The data is then consumed by a program that simply passes the data as facts into the Drools KieSession. This is the in-memory workspace where the rule engine uses pattern matching to see what rules can fire based on the facts present in memory.&lt;/p&gt;
&lt;p&gt;In our proposed architecture, the rules reside in the Drools Workbench, a GUI rule editor which also serves as version control and as a repository for the rules to be deployed to production.&lt;/p&gt;
&lt;p&gt;The main benefit of this approach is to keep the process of maintaining the application itself completely independent from the process of editing the rules that create value for the business. Engineers are left with the clear task of making sure the system is performing well and is stable, while the business side can focus on the rules.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/picture3-1604419915057.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In the diagram above, we can see how this may look more concretely with an implementation using a MapR cluster. It would be equally valid to use a Kafka cluster in its place for this particular application, although that would result in less potential for new use cases and an increased burden of system administration. The reason for this is that a Kafka cluster is strictly limited to supporting streaming, whereas using a cluster that is converged allows for additional use cases, both operational and analytical, right there on the same cluster.&lt;/p&gt;
&lt;p&gt;A key point here is the second arrow going &lt;strong&gt;back&lt;/strong&gt; from the CEP Engine to the stream. It illustrates the important concept of using streams &lt;strong&gt;for input and output&lt;/strong&gt; that is at the core of streaming architectures. That is also why Enterprise IT Systems is shown to get its data from the stream as well.&lt;/p&gt;
&lt;p&gt;The flow of data looks like so:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/picture4-1604419923661.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Data flows from the data source to an Event Producer, which is just a stream producer or calls to a REST endpoint using the new &lt;a target=&quot;_blank&quot; href=&quot;https://github.com/confluentinc/kafka-rest&quot;&gt;Kafka REST Proxy&lt;/a&gt;. The REST proxy is also supported by MapR Event Store from MapR Ecosystem Pack 2.0.&lt;/p&gt;
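&lt;p&gt;As an illustration of the Event Producer side (a sketch only; the post describes using the Kafka REST Proxy or MapR Event Store, and the broker address and topic name here are assumptions), a Kafka-style producer in Python could look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Sketch of an Event Producer pushing sensor events onto an input topic.
# Uses the kafka-python package; the broker address and topic name are examples.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers=&quot;broker:9092&quot;,
    value_serializer=lambda event: json.dumps(event).encode(&quot;utf-8&quot;),
)
producer.send(&quot;sensor-events&quot;, {&quot;sensor&quot;: &quot;door-1&quot;, &quot;state&quot;: &quot;open&quot;, &quot;ts&quot;: 1483948800})
producer.flush()
&lt;/code&gt;&lt;/pre&gt;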
&lt;p&gt;The CEP Engine can read data off the stream, and gets its rules from the Drools Workbench. From a streaming architecture point of view, the Drools Workbench and the CEP Engine are a unit, a single microservice, so to speak, as they are entirely self-contained and don’t have any external dependencies.&lt;/p&gt;
&lt;p&gt;As rules fire in the rules processing algorithm, some external actions will need to be taken. Those actions may be an insert or update of a table in a corporate DB, indexing to Elasticsearch to serve data to a Kibana dashboard, or sending a notification. But instead of tightly coupling the systems together by making the call directly from the CEP Engine to the external system, we output the data from the CEP Engine back into another topic in the stream. Another microservice or application (like &lt;a target=&quot;_blank&quot; href=&quot;https://cdap.io/&quot;&gt;Cask.co&lt;/a&gt; or &lt;a target=&quot;_blank&quot;  href=&quot;https://streamsets.com/&quot;&gt;Streamsets&lt;/a&gt;) will handle that flow.&lt;/p&gt;
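&lt;p&gt;Putting the pieces together, the loop below sketches the decoupling just described: read facts from the input topic, evaluate rules, and write the resulting actions to an output topic instead of calling external systems directly. It is a conceptual Python stand-in; the engine described in this post is Drools running in a Java process, and evaluate_rules() is a hypothetical placeholder for inserting facts into the KieSession and firing rules.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Conceptual CEP loop: stream in, rules, stream out (a Python stand-in for the
# Java/Drools engine described in the post; topics and broker are examples).
import json
from kafka import KafkaConsumer, KafkaProducer

def evaluate_rules(fact):
    # Hypothetical placeholder for the rule engine (Drools KieSession in the
    # real project). Returns a list of action events derived from the fact.
    actions = []
    if fact.get(&quot;speed_kmh&quot;, 0) == 0 and fact.get(&quot;expected_flow&quot;) == &quot;moving&quot;:
        actions.append({&quot;type&quot;: &quot;alert&quot;, &quot;reason&quot;: &quot;traffic stopped&quot;, &quot;fact&quot;: fact})
    return actions

consumer = KafkaConsumer(&quot;sensor-events&quot;, bootstrap_servers=&quot;broker:9092&quot;,
                         value_deserializer=lambda b: json.loads(b.decode(&quot;utf-8&quot;)))
producer = KafkaProducer(bootstrap_servers=&quot;broker:9092&quot;,
                         value_serializer=lambda e: json.dumps(e).encode(&quot;utf-8&quot;))

for message in consumer:
    for action in evaluate_rules(message.value):
        # Actions go back onto the stream; downstream services (DB writer,
        # Elasticsearch indexer, notifier) consume them independently.
        producer.send(&quot;cep-actions&quot;, action)
&lt;/code&gt;&lt;/pre&gt;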
&lt;h2&gt;In Conclusion&lt;/h2&gt;
&lt;p&gt;Complex Event Processing has been around for quite a while, but is now finally coming into its own. On the hardware side, servers with a lot of memory are much more commonplace. On the software side, it’s possible to create a useful, production-grade CEP system entirely out of OSS, without needing to resort to expensive, custom-coded streaming applications.&lt;/p&gt;
&lt;p&gt;Combining a Kafka-style stream messaging system with Drools provides an organization with much needed agility in separating the very different tasks for creating and maintaining an enterprise streaming application and defining and editing business logic for real-time decisions.&lt;/p&gt;
&lt;p&gt;In &lt;a href=&quot;https://developer.hpe.com/blog/real-time-smart-city-traffic-monitoring-using-microservices-based-streaming-architecture-part-2/&quot;&gt;the next blog post&lt;/a&gt;, we will cover a concrete use case that puts all this into practice and will show how such a system can be implemented using nothing more than Java, a MapR cluster, and the Drools Workbench running on a Wildfly application server.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Provisioning Secure Access Controls in MapR Database]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may…]]></description><link>https://developer.hpe.com/provisioning-secure-access-controls-in-mapr-database/</link><guid isPermaLink="false">https://developer.hpe.com/provisioning-secure-access-controls-in-mapr-database/</guid><pubDate>Mon, 02 Nov 2020 06:21:10 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Jimit Shah&quot;,
&quot;publish&quot;: &quot;2016-05-27T07:00:00.000Z&quot;,
&quot;tags&quot;: &quot;nosql&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;The ability to store and retrieve JSON documents using the OJAI standard has introduced a very powerful way to work with data in your MapR cluster. MapR Boolean Access Control Expressions (ACEs) provide a powerful, easy way to enforce authorization for providing secure access to data across the MapR Data Platform including MapR XD, MapR Event Store, and MapR Database.&lt;/p&gt;
&lt;h2&gt;MULTIPLE LEVELS&lt;/h2&gt;
&lt;p&gt;Access control using ACEs can be administered at multiple levels, separating administration and data access operations. Permissions for operations like creating a column family, deleting a column family, or setting ACEs can be controlled by setting ACEs as a part of table properties. If you are not familiar with the concept of a column family, just think of it as a logical grouping of data elements for setting a consistent policy (like permissions, TTL, etc.) on those elements. This construct exists in the MapR Database wide column data model as well, though the underlying implementation is completely different in the JSON document model. In MapR Database JSON, a JSON document can be stored as a composite of multiple JSON sub-documents which are part of different user-defined column families. More information about column families in MapR Database JSON can be found at the &lt;a href=&quot;https://docs.datafabric.hpe.com/51/MapR-DB/JSON_DB/column_families_in_json_tables.html&quot;&gt;Docs site&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Data access can be at column family and fieldpath levels within a table. In a JSON document, a field is the smallest unit that can be assigned a value, which can be a scalar (integer, string, double, boolean, etc.) or a collection type (array, map). A fieldpath is a dot-separated set of fields. In the fieldpath “address.home.street”, “address”, “home”, and “street” are individual fields; “address” and “home” are map types, while “street” is a scalar type.&lt;/p&gt;
&lt;p&gt;Both column family and fieldpath level permissions work in conjunction to decide the final access privileges to a single JSON fieldpath or sub-document.&lt;/p&gt;
&lt;p&gt;MapR Database supports three types of data access permissions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;READ: used for managing permissions for get and scan operations&lt;/li&gt;
&lt;li&gt;WRITE: used for managing permissions for put and update operations&lt;/li&gt;
&lt;li&gt;TRAVERSE: used for managing permissions for READ and WRITE access&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;While read and write permissions are independent of each other, traverse works in tandem with read and write permissions, which I’ll explain shortly. For now, let’s take a look at how the ACEs work at multiple levels.&lt;/p&gt;
&lt;p&gt;In the following illustration, a JSON document is represented as a hierarchical structure with sub-documents rooted at respective column families. In this case, the JSON document has three column families: ‘default’, ‘CF1’, ‘CF2’. ‘Default’ is the root of the document. Each column family contains the data of all the fields and their descendant fields under it. While the number of column families and how they’re laid out in a document is entirely configurable by the admin, in MapR Database the ‘default’ column family is always the root of the document.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/aces-1-1604298011058.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;While the boundaries between the different subcomponents of a document are invisible to the developer using MapR Database, the way these components are laid out is important for the admin to administer security and ensure good performance.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Permissions on column families are independent of each other. Permissions set on ‘default’ have no bearing on data access on ‘CF1’ or ‘CF2’. Likewise, permissions on ‘CF2’ have no effect on data access on ‘default’ or ‘CF1’.&lt;/li&gt;
&lt;li&gt;Different column families can have different access control configurations. This way, the admin can control which parts of the document are accessible to which user, group or role.&lt;/li&gt;
&lt;li&gt;Permissions set on a column family or fieldpath are applied to every row in the table that has data in that column family or fieldpath.&lt;/li&gt;
&lt;li&gt;The permissions of column families and fieldpaths work in conjunction (see the sketch after this list):
&lt;ul&gt;
&lt;li&gt;Final Permission(Data in Column Family) = Permissions(Column Family) &amp;#x26; Permissions(Fieldpath)&lt;/li&gt;
&lt;li&gt;If a user, group, or role doesn’t have permission to read data on CF1, then the user, group, or role cannot read any data under CF1.&lt;/li&gt;
&lt;li&gt;Likewise, if a user, group, or role has permission to read data on CF2, but has only permission to read a subset of fieldpaths in CF2, then the user, group, or role can read only that subset of fieldpaths.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
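&lt;p&gt;The conjunction rule can be captured in a few lines (an illustrative model, not MapR&apos;s implementation): a principal may read a field only when both the column family ACE and the fieldpath ACE allow it.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Illustrative model of the conjunction rule (not the actual MapR implementation):
# read access to a field requires both the column family ACE and the fieldpath
# ACE to grant it. Here the ACEs are simplified to plain sets of group names.
def can_read(group, cf_read_acl, field_read_acl):
    return group in cf_read_acl and group in field_read_acl

cf_read_acl    = {&quot;hr&quot;, &quot;finance&quot;}   # Permissions(Column Family)
field_read_acl = {&quot;finance&quot;}         # Permissions(Fieldpath)
print(can_read(&quot;finance&quot;, cf_read_acl, field_read_acl))  # True
print(can_read(&quot;hr&quot;, cf_read_acl, field_read_acl))       # False: fieldpath denies
&lt;/code&gt;&lt;/pre&gt;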
&lt;h2&gt;FINE-GRAINED&lt;/h2&gt;
&lt;p&gt;With support for native JSON documents, we have extended ACEs to provide the most fine-grained control over your data’s security. Now, admins can set read, write, and traverse permissions for any user, group, or role, to any part of a JSON document or a JSON subdocument, down to a single JSON fieldpath, with a scalar value. This is independent of access privileges to other parts of a document. All this is possible while preserving maximum space efficiency and performance.&lt;/p&gt;
&lt;h2&gt;TRAVERSE PERMISSION&lt;/h2&gt;
&lt;p&gt;In order to provide flexibility in enforcing access to data in MapR Database JSON, we introduced a new type of permission, called the ‘TRAVERSE’ permission, which lets the admin grant some users access to certain parts of the JSON document while masking access to the rest of the document. As the name suggests, this new type of permission allows ‘traversing’ a JSON document to discover its ‘structure’ or ‘self-describing schema’, without giving away access to any part of the data without explicit permission.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The traverse permission has to be used with either a read or write permission. On its own, it has no effect on allowing or denying access. In a complex multi-level document, this lets certain parts of the document be made accessible to certain users and groups while the rest remain off limits, and vice versa.&lt;/li&gt;
&lt;li&gt;Traverse permission works in tandem with read/write permission in a hierarchical manner. If a traverse permission is set at a higher level in a JSON document, and a read/write permission is set at a lower level, then the read/write operation is allowed only on a fieldpath that terminates at or after the lower level.&lt;/li&gt;
&lt;li&gt;A good way of using the power of traverse permissions is to set it at the column family or at the highest field under a column family and then provide read/write access as desired at specific fieldpaths contained in the column family or top-level field.&lt;/li&gt;
&lt;li&gt;For instance, let’s say traverse permission for user ‘m7user1’ is set at fieldpath “a.b” and read permission for the same user is set at fieldpath “a.b.c.d”. For user ‘m7user1’, all read operations up to path “a.b.c” are disallowed; only read operations on fieldpath “a.b.c.d”, or on fieldpaths beginning with “a.b.c.d”, are allowed (see the sketch after this list). If no write permission has been set for user ‘m7user1’ explicitly at the column family or fieldpath level, no writes are allowed, even though traverse permission has been set at “a.b”.&lt;/li&gt;
&lt;li&gt;Now, if we set a write permission for user ‘m7user1’ at “a.b.c” under a column family, all write operations are allowed at fieldpaths beginning with “a.b.c”, including “a.b.c” itself, while all write operations at “”, “a”, or “a.b” will be disallowed. Note that read operations will still be disallowed at “a.b.c”, but write operations will be allowed for user ‘m7user1’.&lt;/li&gt;
&lt;/ul&gt;
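&lt;p&gt;To make the ‘m7user1’ example concrete, the sketch below (an illustrative model of this particular example, not MapR code) evaluates read requests against a traverse grant at “a.b” and a read grant at “a.b.c.d”: reads succeed only on the granted fieldpath or its descendants.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Illustrative model of traverse + read evaluation for a single user,
# simplified to this example; the real evaluation happens inside MapR Database.
def at_or_below(path, grant_path):
    # True if path equals grant_path or is a descendant of it.
    return path == grant_path or path.startswith(grant_path + &quot;.&quot;)

def can_read(path, traverse_at, read_at):
    # Traverse alone never grants access; the requested path must sit under the
    # traverse grant AND at or below the path where read was explicitly granted.
    return at_or_below(path, traverse_at) and at_or_below(path, read_at)

print(can_read(&quot;a.b.c&quot;, traverse_at=&quot;a.b&quot;, read_at=&quot;a.b.c.d&quot;))     # False
print(can_read(&quot;a.b.c.d&quot;, traverse_at=&quot;a.b&quot;, read_at=&quot;a.b.c.d&quot;))   # True
print(can_read(&quot;a.b.c.d.e&quot;, traverse_at=&quot;a.b&quot;, read_at=&quot;a.b.c.d&quot;)) # True
&lt;/code&gt;&lt;/pre&gt;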
&lt;p&gt;Another example is a simplified version of a ‘Personnel Record’ that may be used in a company-wide people database. Different business functions—such as Finance, HR, or Engineering—would use the same database. Depending on which business function an employee belongs to, she may belong to one of the following group IDs (‘gid’) on the MapR cluster: ‘finance’, ‘engineering’ or ‘hr’.&lt;/p&gt;
&lt;p&gt;&lt;u&gt;‘Personnel Record’ JSON Document&lt;/u&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
    _id: &quot;ENG12345&quot;,
    address: {
        home: {
            street: &quot;116 Severn Dr&quot;,
            city: &quot;Mars&quot;,
            state: &quot;CA&quot;,
            zip: 96066
        },
        work: {
            street: &quot;5440 5th Ave&quot;,
            city: &quot;Cranberry&quot;,
            state: &quot;CA&quot;,
            zip: 95213
        }
    },
    dob: {
        month: &quot;January&quot;,
        day: &quot;30&quot;,
        year: &quot;1989&quot;
    },
    name: {
        first: &quot;John&quot;,
        last: &quot;Doe&quot;
    },
    photo: &quot;&amp;#x3C;binary:photo&gt;&quot;,
    salary: 123456.00,
    sex: &quot;male&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Depending on which gid a user belongs to, she may or may not have access to an employee’s information.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Engineering employees may be able to look at only the name, work address, sex, home city and zip, day and month of birth&lt;/li&gt;
&lt;li&gt;Finance employees may be able to look at the name, home and work addresses, and full date of birth, and read/write the salary.&lt;/li&gt;
&lt;li&gt;HR employees may be able to read/write at all the fields except for salary.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The following illustration shows how the permissions may be configured. Imagine the JSON document laid out as a hierarchical structure like a tree. The ‘Personnel Database’ table has only one column family ‘default’. The admin ‘root’ manages permissions for the different groups - ‘hr’, ‘finance’ and ‘engineering’. The following is one possible way for the admin to set permissions:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/aces-2-700-1604298029724.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;COLUMN FAMILY PERMISSIONS&lt;/h2&gt;
&lt;p&gt;The admin can set independent read, write, and traverse permissions on the entire column family. When no column permissions are specified, all the columns (or fieldpaths in the JSON document) will inherit the same permissions as the column family they belong to.&lt;/p&gt;
&lt;p&gt;In the illustration, no permissions have been set for the field ‘sex’. In that case, the field is readable by the user ‘root’, or anyone belonging to groups ‘hr’ or ‘finance’ while only a member of the ‘hr’ group can modify it.&lt;/p&gt;
&lt;p&gt;However, for the field ‘dob’, a column permission for read has been added so that anyone from ‘hr’, ‘finance’ or ‘engineering’ can read it. An additional column permission for read has been specified for ‘dob.year’ so that no one from ‘engineering’ can read it. Hence, the final read access can be computed as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Fieldpath ‘default: dob.day’ is readable by anyone in groups ‘hr’, ‘finance’ and ‘engineering’&lt;/li&gt;
&lt;li&gt;Fieldpath ‘default: dob.month’ is readable by anyone in groups ‘hr’, ‘finance’ and ‘engineering’&lt;/li&gt;
&lt;li&gt;Fieldpath ‘default: dob.year’ is readable by anyone in groups ‘hr’ and ‘finance’ but not by anyone in ‘engineering’&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;No permission for write has been specified so the column family permission for write will prevail for ‘dob’ and its descendants. Hence, the final write access can be computed as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Only user ‘root’ or any user from group ‘hr’ can insert or update a value for any fieldpath originating at ‘dob’ in the default column family​&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;COLUMN PERMISSIONS&lt;/h2&gt;
&lt;p&gt;Like column family permissions, the admin can set independent read, write, and traverse permissions on a fieldpath or the predecessor of multiple fieldpaths.&lt;/p&gt;
&lt;p&gt;In the ‘Personnel Record’ illustration, individual column permissions have been set on fieldpaths having scalar values, or fieldpaths having multiple children as documents and/or scalar values.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;All fieldpaths beginning with ‘default:address.home’, except for ‘default:address.home.street’, are readable by anyone from ‘hr’, ‘finance’ and ‘engineering’, whereas only ‘hr’ and ‘finance’ can read data at ‘default:address.home.street’ fieldpath. Only a user with gid ‘hr’ can modify any field under ‘default:address.home’.&lt;/li&gt;
&lt;li&gt;On the other hand, all the fieldpaths under ‘default:address.work’ are readable by everyone in groups ‘hr’, ‘finance’ and ‘engineering’ while only users from ‘hr’ are allowed to modify them&lt;/li&gt;
&lt;li&gt;For fieldpath ‘default:salary’, only users from ‘finance’ can access the data at the fieldpath. No one from ‘hr’, ‘engineering’ or even the DB admin can read or update the value under ‘default:salary’. The column permissions at ‘default:salary’, in conjunction with column family permissions at ‘default’, effectively allow only users of group ‘finance’ to access the ‘salary’ fieldpath under ‘default’.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;While this illustration is by no means the most optimal way to administer security within a MapR Database JSON document, it demonstrates how a database administrator can use ACEs along with placement of correlated data in the same column family in a hierarchical fashion so that most coarse-grained permissions may be specified at the higher levels of the document, closer to the column family, and more specific permissions administered as one goes deeper in the hierarchy of information, along the different fieldpaths.&lt;/p&gt;
&lt;h2&gt;BEST PRACTICES&lt;/h2&gt;
&lt;p&gt;Let’s look at some really easy ways to begin using ACEs for administering access to JSON documents. In order to understand how ACE syntax works, we have some great docs over at &lt;a href=&quot;https://docs.datafabric.hpe.com/51/SecurityGuide/ACEs.html&quot;&gt;Docs&lt;/a&gt; explaining the different parts of an ACE Boolean Expression and how to set it for column families and fieldpaths.&lt;/p&gt;
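&lt;p&gt;As a rough feel for what these expressions look like (a simplified illustration only; see the linked docs for the actual grammar and operators), an ACE is a boolean expression over principals such as u:jsmith or g:finance. The toy evaluator below handles only OR-combinations of user and group terms.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Toy evaluator for a tiny subset of ACE-style expressions: terms like
# &quot;u:name&quot; or &quot;g:group&quot; combined with &quot;|&quot; (OR). This is only an illustration
# of the idea; the real ACE grammar and evaluation live in the MapR platform.
def matches(term, user, groups):
    kind, _, name = term.partition(&quot;:&quot;)
    if kind == &quot;u&quot;:
        return name == user
    if kind == &quot;g&quot;:
        return name in groups
    return False

def ace_allows(expression, user, groups):
    return any(matches(t.strip(), user, groups) for t in expression.split(&quot;|&quot;))

print(ace_allows(&quot;g:hr | g:finance&quot;, user=&quot;jsmith&quot;, groups={&quot;finance&quot;}))  # True
print(ace_allows(&quot;g:hr | g:finance&quot;, user=&quot;jsmith&quot;, groups={&quot;crm&quot;}))      # False
&lt;/code&gt;&lt;/pre&gt;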
&lt;p&gt;I’ve categorized the most basic use cases for using ACEs. We’ll see how to use ACEs for READ access. The same methods can be applied to manage WRITE access.&lt;/p&gt;
&lt;p&gt;Let’s take the example of a taxi trip database. The following is a simplified version of how a trip record may look.&lt;/p&gt;
&lt;p&gt;&lt;u&gt;Taxi Trip Record&lt;/u&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
    trip_id: &quot;SFO12345&quot;,
    trip_info: {
        trip_source: {
            lat: 37.40308,
            long: -121.96983,
            address: {
                street: &quot;4900 Marie P DeBartolo Way&quot;,
                city: &quot;Santa Clara&quot;,
                state: &quot;CA&quot;,
                zip: 95054
            },
            name: &quot;Levi&apos;s Stadium&quot;
        },
        trip_destination: {
            lat: 37.418434,
            long: -121.94498,
            address: {
                street: &quot;350 Holger Way&quot;,
                city: &quot;San Jose&quot;,
                state: &quot;CA&quot;,
                zip: 95134
            },
            name: &quot;MapR Technologies&quot;
        }
    },
    rider_id: &quot;m7user1&quot;,
    driver_id: &quot;uberlyftcoexist&quot;,
    vehicle: {
        VIN: &quot;VWW12345354564D234DD&quot;,
        license: &quot;7GTZ113&quot;,
        make: &quot;Toyota&quot;,
        model: &quot;Camry&quot;,
        year: &quot;2015&quot;,
        color: &quot;black&quot;,
        photo: &quot;&amp;#x3C;binary:jpg&gt;&quot;
    },
    billing: {
        merchant: &quot;Visa&quot;,
        transactionKey: &quot;12345678901234&quot;,
        amount: &quot;10.45&quot;
    },
    reviews: {
        driver_review: {
            stars: 5,
            comment: &quot;Safe, on time&quot;
        },
        rider_review: {
            stars: 4
        }
    }
}

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;For a taxi trip database table, the admin could lay out the column families as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;‘default’ column family at root of the document&lt;/li&gt;
&lt;li&gt;‘trip_info’ column family at ‘trip_info’ JSON path&lt;/li&gt;
&lt;li&gt;‘billing_info’ column family at ‘billing’ JSON path&lt;/li&gt;
&lt;li&gt;‘reviews’ column family at ‘reviews’ JSON path&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/aces-3-1604298037997.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;INCLUDE FEW&lt;/h2&gt;
&lt;p&gt;In this use case, the admin wants to grant access to a document or part of a document to a limited set of users and/or groups.&lt;/p&gt;
&lt;p&gt;Let’s say the parties involved in a trip are ‘user_rider’, ‘user_driver’ and groups ‘billing’, ‘navigation’, ‘traffic’, ‘crm’, ‘cust_happiness’. Each group represents a team with a specific function in the taxi company’s business: ‘billing’ is responsible for processing and bookkeeping of payments; ‘navigation’ and ‘traffic’ are responsible for analyzing, computing routes and providing navigation to drivers; and ‘cust_happiness’ and ‘crm’ are responsible for customer satisfaction and addressing grievances.&lt;/p&gt;
&lt;p&gt;Example: The billing information should be visible only to the users in ‘billing’, while the reviews are relevant only for users in ‘cust_happiness’ and ‘crm’. The admin can set the permissions on ‘billing’ and ‘reviews’ column families to allow only ‘billing’ and ‘cust_happiness’ groups to have access to the data in respective column families.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/aces-4-1604298045986.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;EXCLUDE FEW&lt;/h2&gt;
&lt;p&gt;In this use case, the admin wants to deny access to a document or part of a document to a limited set of users and/or groups.&lt;/p&gt;
&lt;p&gt;Example: The ‘trip_id’ is an internal unique trip identifier that should be visible to everyone using the database except ‘user_rider’ and ‘user_driver’. The admin can set permissions on the ‘trip_id’ field to exclude access for ‘user_rider’ and ‘user_driver’, while giving all users and groups access to read or write the ‘default’ column family.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/aces-5-1604298055030.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;1:1&lt;/h2&gt;
&lt;p&gt;In this use case, the admin wants to provide access to specific parts of a document to specific users and/or groups.&lt;/p&gt;
&lt;p&gt;Example: Let’s give both ‘user_driver’ and ‘user_rider’ read access to each other’s reviews, but also give them write access to the reviews they authored. The admin can set permissions on the ‘reviews’ column family to allow both users to traverse the ‘reviews’ subdocument while making only specific fieldpaths writable for these users.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/aces-6-1604298064571.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;CLOSING COMMENTS&lt;/h2&gt;
&lt;p&gt;MapR Database ACEs are a very flexible and powerful method for provisioning secure access control for your data residing on MapR Database. If you have questions about ACEs or MapR Database, feel free to ask your questions on &lt;a href=&quot;https://community.datafabric.hpe.com/s/&quot;&gt;Ezmeral Data Fabric Community.&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Kubernetes Tutorial: How to Install and Deploy Applications at Scale on K8s - Part 1 of 3]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may…]]></description><link>https://developer.hpe.com/kubernetes-tutorial-how-to-install-and-deploy-applications-at-scale-on-k/</link><guid isPermaLink="false">https://developer.hpe.com/kubernetes-tutorial-how-to-install-and-deploy-applications-at-scale-on-k/</guid><pubDate>Mon, 02 Nov 2020 06:12:48 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Martijn Kieboom&quot;,
&quot;publish&quot;: &quot;2018-04-26T10:45:00.000&quot;,
&quot;tags&quot;: [&quot;open-source-software&quot;,&quot;kubernetes&quot;]
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Containers are hot! But how do you manage hundreds or even thousands of containers in a production environment to support a 24/7 business? Various container management solutions have jumped into that business, but one is getting a lot of attention and adoption at the moment: Kubernetes.&lt;/p&gt;
&lt;p&gt;Originally designed by Google but now open-sourced, Kubernetes is being adopted by many commercial vendors, including Docker Enterprise, Red Hat OpenShift, and Mesosphere as well as all major cloud providers. So there are plenty of ways to manage your containers using Kubernetes.&lt;/p&gt;
&lt;p&gt;In this first of three blog posts, we will look into the business benefits that can be achieved by combining MapR with Kubernetes to run and manage your containers.&lt;/p&gt;
&lt;h2&gt;Why Containers&lt;/h2&gt;
&lt;p&gt;What we hear from customers in their journey toward making data actionable are the following challenges:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Introducing new innovations or capabilities&lt;/li&gt;
&lt;li&gt;Maintaining business SLAs in a changing environment&lt;/li&gt;
&lt;li&gt;Using or introducing legacy services as a result of mergers and acquisitions&lt;/li&gt;
&lt;li&gt;Ongoing upgrade of applications/services&lt;/li&gt;
&lt;li&gt;Difficulty of packaging and distributing apps to end customers&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Introducing new innovations while maintaining existing business SLAs is a big challenge, and the two interests often conflict. IT organizations focus mainly on delivering existing SLAs, so the business can experience pushback when it wants to launch new, innovative products.&lt;/p&gt;
&lt;p&gt;In addition, organizations are even more pressured when they acquire or merge businesses as that brings in the challenge of onboarding existing legacy systems.&lt;/p&gt;
&lt;p&gt;Finally, technology updates are coming faster than ever before. Upgrading applications and services is becoming more complex and challenging every day. This goes hand in hand with the complexity involved in how apps are packaged and distributed to your end customers.&lt;/p&gt;
&lt;p&gt;In the following paragraphs, we will have a look at what MapR Technologies has to offer to overcome these challenges and really put your data into action.&lt;/p&gt;
&lt;h2&gt;MapR Volume Driver Plugin for Kubernetes&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/mapr-volume-driver-plugin-1604297556537.png&quot; alt=&quot;MapR Volume Driver Plugin for Kubernetes&quot;&gt;&lt;/p&gt;
&lt;p&gt;Let’s have a look at how we can combine existing applications with new innovative applications and services:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Applications&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Placing applications and even microservices in container pods is a first step in making them flexible and agile. This allows us to distribute the application or service to where it runs best. It also allows physical separation of different types of applications. This way you can easily run classic applications and processes (for example, an ETL process) as well as an innovative machine learning application for image classification using TensorFlow in the same environment.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Compute - Kubernetes&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Google’s Kubernetes is quickly gaining adoption as the container scheduler and orchestration solution that allows running applications and services anywhere. To maintain the agility and flexibility of the container-based applications running on Kubernetes, the MapR Kubernetes Volume Driver Plugin gives all applications and microservices seamless access to the MapR Data Platform.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Data Stores&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The data required by these different applications and (micro)services can, however, be anywhere, as data nowadays is distributed geographically across multiple environments. From edge environments to a combination of private and public cloud, where the data actually is stored should be transparent to the applications and services.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Data - MapR Data Platform&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;That’s where the MapR Data Platform with its Global Namespace comes in. It virtually combines all MapR clusters into a single Global Data Fabric, providing applications and (micro)services seamless access to all data, irrespective of its physical location.&lt;/p&gt;
&lt;h2&gt;Business Benefits&lt;/h2&gt;
&lt;p&gt;Combining MapR with Kubernetes delivers the following business benefits to any organization:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Faster innovation while running ongoing business and operations&lt;/li&gt;
&lt;li&gt;Flexible scaling (up and down) to accommodate business needs&lt;/li&gt;
&lt;li&gt;Easy integration of mergers and acquisitions&lt;/li&gt;
&lt;li&gt;Ease of maintaining and rolling out upgrades&lt;/li&gt;
&lt;li&gt;Ease of packaging and distributing apps to end customers&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To summarize the business benefits of the powerful combination of Kubernetes with the MapR Data Platform:&lt;/p&gt;
&lt;p&gt;Combining ongoing business operations with deploying new business innovations has never been easier. Scaling any application or service to accommodate ever-changing business and customer needs is simply a matter of scaling the number of application container pods up or down. Onboarding legacy services as part of mergers and acquisitions doesn’t have to stop you from innovating in parallel, and rolling out new applications and services results in quicker innovation and faster time to market. And finally, easier packaging and distribution of applications and services to your customers allows you to adopt new technologies and innovations immediately.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/blog/kubernetes-tutorial-part-2-of-3-how-to-install-and-deploy-applications-at-scale-on-k8s&quot;&gt;In the following blog post&lt;/a&gt;, we will start deploying a Kubernetes cluster and load the MapR Volume Driver Plugin for Kubernetes to allow enabling these business benefits.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[New releases, new features - Newsletter]]></title><link>https://developer.hpe.com/2020-November-02/</link><guid isPermaLink="false">https://developer.hpe.com/2020-November-02/</guid><pubDate>Mon, 02 Nov 2020 06:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[HPE DEV Hack Shack Coding Challenges: Are You Ready to Compete?]]></title><description><![CDATA[coding challenge I am really excited to announce that we have created a new community activity for you – the HPE DEV Hack Shack Coding…]]></description><link>https://developer.hpe.com/hpe-dev-hack-shack-coding-challenges-are-you-ready-to-compete/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-dev-hack-shack-coding-challenges-are-you-ready-to-compete/</guid><pubDate>Fri, 30 Oct 2020 15:04:29 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/10/coding-challenge-1604080946671.png&quot; alt=&quot;coding challenge&quot;&gt;&lt;/p&gt;
&lt;p&gt;I am really excited to announce that we have created a new community activity for you – the HPE DEV Hack Shack Coding Challenges. Typically, we ran these challenges at physical events, but, because they became so popular, we are now providing them virtually. HPE DEV will post a coding challenge in the &lt;a href=&quot;/hackshack/&quot;&gt;Hack Shack&lt;/a&gt; for the community to solve, offering cool prizes to those whose solution is chosen as the winner of the challenge. Submissions will be judged on technical achievement and completeness of the code and quiz answers. Winners will be notified and will be able to choose a prize from a given catalog.&lt;/p&gt;
&lt;p&gt;The idea is to offer the community a fun event that shares knowledge. The challenges have been designed to be simple enough that most data scientists, developers, and DevOps engineers can work on them. In this blog post, I’ll cover the details of the coding challenges program, including instructions for participants.&lt;/p&gt;
&lt;p&gt;These challenges are being designed to offer students a hands-on method of understanding specific open source and HPE technologies guided by a subject matter expert (SME) and detailed instructions provided in a Jupyter Notebook format. Students will have access to other SMEs through Slack should any questions arise during their challenge session.&lt;/p&gt;
&lt;h2&gt;One challenge to get started, with more coming shortly&lt;/h2&gt;
&lt;p&gt;Many of these challenges will be extensions to technical workshops we’ve hosted in the past. You can find the video &lt;a href=&quot;/hackshack/replays/0&quot;&gt;replays of these workshops&lt;/a&gt; on the &lt;a href=&quot;/hackshack/&quot;&gt;HPE DEV Hack Shack&lt;/a&gt;. Through the workshops, students received hands-on experience with coding practices through an associated Jupyter Notebook. The HPE DEV Hack Shack challenges will utilize much of the same material, including the Jupyter Notebook. For those of you who are new to the topic being covered, it is recommended you watch the corresponding &lt;a href=&quot;/hackshack/replays/0&quot;&gt;video replay&lt;/a&gt; first and then sign up for the challenge.&lt;/p&gt;
&lt;p&gt;In mid-November, we will start with a single challenge:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Deploy a front-end app on a Kubernetes cluster&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Students will have 4 hours in which to go through the challenge, which includes time to review the video replay of the corresponding technical workshop, follow the Jupyter Notebook instructions, and save their work to their local laptop should they want to do more work in their own environment or retake the challenge. They’ll also be able to connect with SMEs through off-line support.&lt;/p&gt;
&lt;h2&gt;How it works&lt;/h2&gt;
&lt;p&gt;To take the challenge, go to the  &lt;a href=&quot;/hackshack/&quot;&gt;Hack Shack&lt;/a&gt; and navigate to the &lt;strong&gt;Challenges&lt;/strong&gt; page. From there, select which challenge you want to take and click on the &lt;strong&gt;Challenge me&lt;/strong&gt; button:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/10/coding-challenge-1-1604071470980.png&quot; alt=&quot;coding challenge&quot;&gt;&lt;/p&gt;
&lt;p&gt;At this point, the registration panel pops up.
Enter the details requested and click on the &lt;strong&gt;Take on the challenge&lt;/strong&gt; button. In a matter of just a few minutes, you’ll start your challenge.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/10/coding-challenge-2-1604430877028.png&quot; alt=&quot;coding challenge&quot;&gt;&lt;/p&gt;
&lt;p&gt;By pressing the &lt;strong&gt;Take on the Challenge&lt;/strong&gt; button, you initiate a back-end automated registration process. This process spawns a dedicated notebook environment for you and sends you a welcome email indicating that you have been registered in the database.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/10/coding-challenge-3-1604071494820.png&quot; alt=&quot;welcome email&quot;&gt;&lt;/p&gt;
&lt;p&gt;Not long after, it sends you a second email providing a link to your challenge Jupyter Notebook, along with your StudentID and password.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/10/coding-challenge-4-1604071501876.png&quot; alt=&quot;credentials email&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;IMPORTANT: Receipt of this email indicates that the challenge environment is ready for you to begin. Similar to our HPE DEV Workshops-on-Demand, you will have just 4 hours from the receipt of this second email to complete the challenge. It is recommended that you only register for a challenge when you know you will have the next 4 hours to work on it. We advise you to regularly save your work and download the Jupyter Notebook to refer to later should you not be able to finish the challenge within the given 4-hour time slot. If you cannot finish the challenge in that time, you will need to take the challenge again from the beginning.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;When you click on the &lt;strong&gt;Start Challenge&lt;/strong&gt; button found in your second email, it will bring you to a Sign In page where you will log into the challenge with your StudentID and the password provided in your second email.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/10/coding-challenge-5-1604072032740.png&quot; alt=&quot;jupyter sign in&quot;&gt;&lt;/p&gt;
&lt;p&gt;Once you log in, open the challenge folder on the left by double-clicking on it.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/10/coding-challenge-6-1604072039116.png&quot; alt=&quot;jupter notebook home screen&quot;&gt;&lt;/p&gt;
&lt;p&gt;Each notebook generally has several sections. Start with the &lt;strong&gt;Challenge Overview&lt;/strong&gt; and follow the instructions from there on how to work through the challenge and submit your work.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/10/coding-challenge-7-1604367130983.png&quot; alt=&quot;jupter notebook&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/10/coding-challenge-8-1604367142859.png&quot; alt=&quot;jupter notebook&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Don’t forget to save your work&lt;/h2&gt;
&lt;p&gt;One hour prior to the end of the 4-hour period, you will receive an email reminding you that your session is coming to a close and that you should download the challenge notebook if you anticipate using it in the future. At the end of every session, the environment is cleaned up automatically, so be sure to save your work.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/10/coding-challenge-9-1604072055584.png&quot; alt=&quot;reminder email&quot;&gt;&lt;/p&gt;
&lt;p&gt;At the end of the challenge, you will receive a final email indicating that the challenge is over. In the email, you will also be asked to take a short survey. The results of this survey will help us improve how we offer the challenges in the future. Your feedback is very important to our being able to meet your needs, so we encourage you to take just a few minutes to fill it out.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/10/coding-challenge-10-1604072062041.png&quot; alt=&quot;challenge end email&quot;&gt;&lt;/p&gt;
&lt;h2&gt;How to get help&lt;/h2&gt;
&lt;p&gt;You’ll have access to SME assistance through our challenges Slack channel. We staff the channel to answer questions between 4 am and 4 pm EST, Monday through Friday. You may want to schedule your challenge during those hours if you think you’ll have questions or need additional help.&lt;/p&gt;
&lt;p&gt;There will be a limited number of seats available for each challenge. These seats will be filled on a first-come, first-served basis. We look forward to offering these challenges to you and giving you the chance to win cool prizes. Remember to check back on the &lt;a href=&quot;/blog&quot;&gt;HPE DEV blog&lt;/a&gt; for any further updates.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE DEV Hack Shack expands for premier Kubernetes event]]></title><description><![CDATA[kubecon na image Mark your calendars for November 17-20, 2020! KubeCon | CloudNativeCon, North America is coming, the Cloud Native Computing…]]></description><link>https://developer.hpe.com/hpe-dev-hack-shack-expands-for-premier-kubernetes-event/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-dev-hack-shack-expands-for-premier-kubernetes-event/</guid><pubDate>Wed, 28 Oct 2020 19:02:50 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/kubecon-na-image-1603912007661.png&quot; alt=&quot;kubecon na image&quot;&gt;&lt;/p&gt;
&lt;p&gt;Mark your calendars for November 17-20, 2020! &lt;a href=&quot;https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/&quot;&gt;KubeCon | CloudNativeCon, North America&lt;/a&gt;, the &lt;a href=&quot;http://cncf.io/&quot;&gt;Cloud Native Computing Foundation’s (CNCF)&lt;/a&gt; flagship event, is coming, and you won’t want to miss it! Because of its increased involvement with the CNCF and commitment to open source technologies, Hewlett Packard Enterprise (HPE) is excited to upgrade its sponsorship to the Diamond level. This allows us to more deeply connect with technologists worldwide, offering them easy access to experts, technology, and tools to accelerate digital transformation.&lt;/p&gt;
&lt;p&gt;Solutions for security, data-centric workloads, and workflow challenges will be front and center, with lots of opportunities to engage in direct conversations with HPE subject matter experts. Make sure you attend the HPE speaker sessions and demos, and take advantage of the hands-on technical workshops in the HPE Developer Community Hack Shack. You’ll get to hear HPE speaker sessions on how to scale machine learning without compromising privacy and view a tutorial on how to use the Container Storage Interface (CSI) primitives with Kubernetes. Using machine learning to harness data and operationalize data pipelines using KubeDirector and other products in the HPE Ezmeral Software Portfolio will also be highlighted throughout the virtual booth. &lt;a href=&quot;https://community.hpe.com/t5/hpe-ezmeral-uncut/hpe-showcasing-enterprise-edge-to-cloud-solutions-at-kubecon/ba-p/7107337&quot;&gt;Check here for details on HPE’s overall presence at the event&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Explore our virtual Hack Shack and see what’s new!&lt;/h2&gt;
&lt;p&gt;The HPE DEV Community Hack Shack offers developers, designers, and data scientists the opportunity to connect with HPE subject matter experts and collaborate with them to accelerate innovation using open source and HPE technologies. It’s a unique place designed to give virtual events a more personal touch and extend the experience beyond the event.&lt;/p&gt;
&lt;p&gt;The HPE DEV team is pleased to now offer Workshops-on-Demand through the Hack Shack. In our technical workshops, you’ll learn about HPE and open source solutions and get hands-on experience with these technologies, including the HPE Ezmeral Container REST API.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/10/hack-shack-home-page-1604432949140.png&quot; alt=&quot;hack shack home page&quot;&gt;&lt;/p&gt;
&lt;p&gt;Here’s a snapshot of what you will find in the virtual Hack Shack:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;/hackshack/workshops&quot;&gt;WORKSHOPS:&lt;/a&gt; Check out our Workshops-on-Demand. These are free, hands-on training sessions you can access at any time, from anywhere. Using a Jupyter Notebook environment, you’ll have the opportunity to gain hands-on experience across different technologies, like Grommet and the HPE Ezmeral Container Platform REST API.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;/hackshack/challenges&quot;&gt;CHALLENGES:&lt;/a&gt; Our coding challenge offers an exciting opportunity to put what you’ve learned to the test and compete with others for prizes. This challenge will task you with deploying a sample front-end application on a CNCF-certified Kubernetes cluster running on HPE Ezmeral Container Platform using the API calls.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;/hackshack/replays&quot;&gt;REPLAYS:&lt;/a&gt; Augmenting our Workshops-on-Demand, we’ve posted replays of many of the technical workshops we’ve offered live in the past. View them to learn more about the &lt;a href=&quot;/hackshack/replays/1&quot;&gt;HPE Ezmeral Container Platform&lt;/a&gt;, &lt;a href=&quot;/hackshack/replays/5&quot;&gt;SPIFFE and SPIRE authentication&lt;/a&gt;, and the &lt;a href=&quot;/hackshack/replays/2&quot;&gt;HPE Container Storage Interface&lt;/a&gt; for Kubernetes.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;/hackshack/ezmeral&quot;&gt;HPE EZMERAL:&lt;/a&gt; We’ve made it easy for you to find detailed information on the &lt;a href=&quot;https://developer.hpe.com/platform/hpe-ezmeral-container-platform/home&quot;&gt;HPE Ezmeral Container Platform&lt;/a&gt;, &lt;a href=&quot;https://developer.hpe.com/platform/hpe-ezmeral-data-fabric/home&quot;&gt;HPE Ezmeral Data Fabric&lt;/a&gt;, &lt;a href=&quot;https://www.hpe.com/us/en/solutions/machine-learning-operations.html&quot;&gt;HPE Ezmeral ML Ops&lt;/a&gt;, along with other products in this innovative set of software that can be deployed on any cloud, on any hardware, and is 100% open source Kubernetes.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;/hackshack/arcade&quot;&gt;ARCADE:&lt;/a&gt;  In our arcade, along with a bunch of cool stickers, you’ll find Hack Shack Attack! Give our popular retro-style video game a try and compete with your friends for the highest score.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;/hackshack/community&quot;&gt;COMMUNITY:&lt;/a&gt;  We invite you to join and contribute your expertise in our blog forum or deliver an on-demand workshop. Connect with others in the community via &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;Slack&lt;/a&gt; and &lt;a href=&quot;https://twitter.com/HPE_Developer&quot;&gt;Twitter&lt;/a&gt; channels to start conversations and get answers to questions. And you can sign up for our HPE DEV Newsletter to stay up-to-date on the newest blog posts and tutorials.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Prizes? Did someone say prizes?&lt;/h2&gt;
&lt;p&gt;Take our &lt;a href=&quot;/hackshack/challenges&quot;&gt;HPE DEV Hack Shack coding challenge&lt;/a&gt; and you could win a prize! Our coding challenges are timed events where developers are asked to create their own code to achieve a defined function while taking specific parameters into account. For KubeCon | CloudNativeCon NA, we will be challenging you to deploy a sample front-end application on a CNCF-certified Kubernetes cluster running on HPE Ezmeral Container Platform using the API calls. Submissions will be judged on technical achievement and completeness of the code and quiz answers. For details on how to participate, &lt;a href=&quot;/blog/hpe-dev-hack-shack-coding-challenges-are-you-ready-to-compete&quot;&gt;please refer to this blog post&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;But wait… there’s more!&lt;/h2&gt;
&lt;p&gt;Visit the &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE DEV Community Portal&lt;/a&gt; via the &lt;a href=&quot;/hackshack/&quot;&gt;Hack Shack Lobby&lt;/a&gt;. There, you can access a treasure trove of information, including:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/platforms&quot;&gt;PLATFORMS&lt;/a&gt; – Access GitHub resources and software development kits (SDKs)&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;/blog&quot;&gt;BLOG&lt;/a&gt; – Learn through blog posts and tutorials&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/projects&quot;&gt;OPEN SOURCE&lt;/a&gt; – Discover platforms, apps and contributions&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/events&quot;&gt;EVENTS&lt;/a&gt; – Learn about upcoming events&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/community&quot;&gt;COMMUNITY&lt;/a&gt; – Participate in and contribute to the community&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In other sections of the KubeCon | CloudNativeCon North America HPE virtual booth, you’ll explore HPE’s different software-enabling and application development technologies, including:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://spiffe.io/&quot;&gt;The SPIFFE and SPIRE Projects&lt;/a&gt;. With the recent acquisition of Scytale, HPE is the leading contributor for CNCF’s SPIFFE and SPIRE projects. These projects help organizations build a foundation for zero trust through the use of platform agnostic, cryptographic service identity.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/ezmeral.html&quot;&gt;The HPE Ezmeral Software Portfolio&lt;/a&gt;, including the industry’s first enterprise-grade container platform for cloud-native and distributed non cloud-native applications. Built on open source Kubernetes, this unified container platform runs on any infrastructure; either on-premises, in multiple public clouds, in a hybrid model, or at the edge.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The &lt;a href=&quot;https://www.hpe.com/us/en/storage/intelligent-storage.html?chatsrc=ot-en&amp;#x26;jumpid=ps_8r5mdg32xs_aid-520023673&amp;#x26;gclid=Cj0KCQiAs67yBRC7ARIsAF49CdU6O6Hbaj1lwT8tcrU702BzRnZboWNQILTShb0cCk-eEk7nUjQ-yhMaAv4fEALw_wcB&amp;#x26;gclsrc=aw.ds%22%20%5Ct%20%22_blank&quot;&gt;intelligent data platform&lt;/a&gt; from HPE, focused on persistent and ephemeral storage use cases for Kubernetes and cloud-native workloads in private, public and hybrid clouds. Visit the respective platform pages for an overview:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/platform/hpe-nimble-storage/home&quot;&gt;HPE Nimble Storage and HPE Cloud Volumes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/platform/hpe-3par-and-primera/home&quot;&gt;HPE Primera&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For details on HPE’s overall presence at KubeCon | CloudNativeCon North America, please &lt;a href=&quot;https://community.hpe.com/t5/hpe-ezmeral-uncut/hpe-showcasing-enterprise-edge-to-cloud-solutions-at-kubecon/ba-p/7107337&quot;&gt;refer to our community blog post here&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Don&apos;t miss these other interesting HPE sessions&lt;/h2&gt;
&lt;p&gt;You can also find a number of other sessions highlighting HPE offerings at KubeCon | CloudNativeCon North America, including:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://sched.co/eoDy&quot;&gt;Scaling machine learning without compromising privacy&lt;/a&gt;&lt;/strong&gt;, a Keynote session presented by lead data scientist and Distinguished Technologist, Nanda Vijaydev of HPE, Thursday, November 19, 1:18pm - 1:23pm&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://kccncna20.sched.com/event/f6jb?iframe=no&quot;&gt;No More Moats: Protecting Your Cloud Native Infrastructure with Zero Trust&lt;/a&gt;&lt;/strong&gt;, presented by HPE software engineer, Daniel Feldman, Tuesday, November 17, 1:00pm - 1:30pm&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://kccncna20.sched.com/type/Tutorials/101+%28sessions+for+those+new+to+the+conference+overall+and%2For+beginners+to+the+conference+content&quot;&gt;Introduction to using the CSI Primitives&lt;/a&gt;&lt;/strong&gt;, presented by Tech Marketing Engineer and Master Technologist, Michael Mattsson of HPE, Friday, November 20, 5:05pm - 6:30pm&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Refer to the &lt;a href=&quot;https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/program/schedule/&quot;&gt;program schedule&lt;/a&gt; for updates on times and locations.&lt;/p&gt;
&lt;p&gt;It can be challenging to engage with and talk to the right people at virtual events. Many who attend physical conferences come specifically to talk with industry and subject matter experts. That’s why the &lt;a href=&quot;https://developer.hpe.com/community&quot;&gt;HPE DEV community&lt;/a&gt; has worked hard to deliver its &lt;a href=&quot;/hackshack/&quot;&gt;HPE DEV Community Hack Shack&lt;/a&gt; virtual experience. We want to ensure you have a place where you can do just that. For questions or help, be sure to connect with us in our virtual room at the booth or on our &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;HPE DEV Slack Channel&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Streaming Machine learning pipeline for Sentiment Analysis using Apache APIs: Kafka, Spark and Drill - Part 1]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may…]]></description><link>https://developer.hpe.com/streaming-machine-learning-pipeline-for-sentiment-analysis-using-apache-/</link><guid isPermaLink="false">https://developer.hpe.com/streaming-machine-learning-pipeline-for-sentiment-analysis-using-apache-/</guid><pubDate>Wed, 28 Oct 2020 15:43:34 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: [&quot;Carol McDonald&quot;],
&quot;publish&quot;: &quot;2019-05-07T07:00:00.000Z&quot;,
&quot;tags&quot;: &quot;machine-learning&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;Text mining and analysis of social media, emails, support tickets, chats, product reviews, and recommendations have become a valuable resource used in almost all industry verticals to study data patterns in order to help businesses to gain insights, understand customers, predict and enhance the customer experience, tailor marketing campaigns, and aid in decision-making.&lt;/p&gt;
&lt;p&gt;Sentiment analysis uses machine learning algorithms to determine how positive or negative text content is. Example use cases of sentiment analysis include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Quickly understanding the tone from customer reviews
&lt;ul&gt;
&lt;li&gt;To gain insights about what customers like or dislike about a product or service&lt;/li&gt;
&lt;li&gt;To gain insights about what might influence buying decisions of new customers&lt;/li&gt;
&lt;li&gt;To give businesses market awareness&lt;/li&gt;
&lt;li&gt;To address issues early&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Understanding stock market sentiment to gain insights for financial signal predictions&lt;/li&gt;
&lt;li&gt;Determining what people think about customer support&lt;/li&gt;
&lt;li&gt;Social media monitoring&lt;/li&gt;
&lt;li&gt;Brand/product/company popularity/reputation/perception monitoring&lt;/li&gt;
&lt;li&gt;Discontented customer detection monitoring and alerts&lt;/li&gt;
&lt;li&gt;Marketing campaign monitoring/analysis&lt;/li&gt;
&lt;li&gt;Customer service opinion monitoring/analysis&lt;/li&gt;
&lt;li&gt;Brand sentiment attitude analysis&lt;/li&gt;
&lt;li&gt;Customer feedback analytics&lt;/li&gt;
&lt;li&gt;Competition sentiment analytics&lt;/li&gt;
&lt;li&gt;Brand influencers monitoring&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Manually analyzing the abundance of text produced by customers or potential customers is time-consuming; machine learning is more efficient and with streaming analysis, insights can be provided in real time.&lt;/p&gt;
&lt;p&gt;This is the first in a series of blog posts, which discusses the architecture of a data pipeline that combines streaming data with machine learning and fast storage.  In this first part, we will explore sentiment analysis using Spark machine learning data pipelines. We will work with a dataset of Amazon product reviews and build a machine learning model to classify reviews as positive or negative.  In the &lt;a href=&quot;/blog/streaming-ml-pipeline-for-sentiment-analysis-using-apache-apis-kafka-spark-and-drill-part-2&quot;&gt;second part&lt;/a&gt; of this tutorial, we will use this machine learning model with streaming data to classify documents in real time.  The second post will discuss using the saved model with streaming data to do real-time analysis of product sentiment, storing the  results in MapR Database, and making them rapidly available for Spark and Drill SQL.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/image7-1603902952832.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In this post, we will go over the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Overview of classification and sentiment analysis concepts&lt;/li&gt;
&lt;li&gt;Building feature vectors from text documents&lt;/li&gt;
&lt;li&gt;Training a machine learning model to classify positive and negative reviews using logistic regression&lt;/li&gt;
&lt;li&gt;Evaluating and saving the machine learning model&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Classification&lt;/h2&gt;
&lt;p&gt;Classification is a family of supervised machine learning algorithms that identify which category an item belongs to (such as whether an email is spam or not), based on labeled data (such as the email subject and message text). Some common use cases for classification include credit card fraud detection, email spam detection, and sentiment analysis.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/image3-1603902962230.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Classification takes a set of data with known labels and predetermined features and learns how to label new records, based on that information. Features are the properties that you can use to make predictions. To build a classifier model, you explore and extract the features that most contribute to the classification.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/image15-1603902978389.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Let&apos;s go through an example of sentiment analysis: classifying review text as positive or negative.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;What are we trying to predict?
&lt;ul&gt;
&lt;li&gt;In this example, the customer review ratings are used to label reviews as positive or not. A review with 4 to 5 stars is considered a positive review, and a review with 1 to 2 stars is considered a negative review.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;What are the properties that you can use to make predictions?
&lt;ul&gt;
&lt;li&gt;The review text words are used as the features to discover positive or negative similarities in order to categorize customer text sentiment as positive or negative.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Machine Learning Workflow&lt;/h2&gt;
&lt;p&gt;Using Machine Learning is an iterative process, which involves:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Data discovery and model creation
&lt;ul&gt;
&lt;li&gt;Analysis of historical data&lt;/li&gt;
&lt;li&gt;Identifying new data sources, which traditional analytics or databases are not using, due to the format, size, or structure&lt;/li&gt;
&lt;li&gt;Collecting, correlating, and analyzing data across multiple data sources&lt;/li&gt;
&lt;li&gt;Knowing and applying the right kind of machine learning algorithms to get value out of the data&lt;/li&gt;
&lt;li&gt;Training, testing, and evaluating the results of machine learning algorithms to build a model&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Using the model in production to make predictions&lt;/li&gt;
&lt;li&gt;Data discovery and updating the model with new data&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/image18-1603902991041.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Feature Extraction&lt;/h2&gt;
&lt;p&gt;Features are the interesting properties in the data that you can use to make predictions. Feature engineering is the process of transforming raw data into inputs for a machine learning algorithm. In order to be used in Spark machine learning algorithms, features have to be put into feature vectors, which are vectors of numbers representing the value for each feature. To build a classifier model, you extract and test to find the features of interest that most contribute to the classification.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/image8-1603902999171.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Apache Spark for Text Feature Extraction&lt;/h2&gt;
&lt;p&gt;The TF-IDF (Term Frequency–Inverse Document Frequency) feature extractors in &lt;a href=&quot;http://spark.apache.org/docs/latest/ml-features.html#23tf-idf&quot;&gt;SparkMLlib&lt;/a&gt; can be used to convert text words into feature vectors. TF-IDF calculates the most important words in a single document compared to a collection of documents. For each word in a collection of documents, it computes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Term Frequency (TF), which is the number of times a word occurs in a specific document&lt;/li&gt;
&lt;li&gt;Document Frequency (DF), which is the number of times a word occurs in a collection of documents&lt;/li&gt;
&lt;li&gt;Term Frequency-Inverse Document Frequency (TF-IDF), which measures the significance of a word in a document (the word occurs a lot in that document, but is rare in the collection of documents)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For example, if you had a collection of reviews about bike accessories, then the word &apos;returned&apos; in a review would be more significant for that document than the word &apos;bike.&apos; In the simple example below, there is one positive text document and one negative text document, with the word tokens &apos;love,&apos; &apos;bike,&apos; and &apos;returned&apos; (after filtering to remove insignificant words like &apos;this&apos; and &apos;I&apos;). The TF, DF, and TF-IDF calculations are shown. The word &apos;bike&apos; has a TF of 1 in 2 documents (word count in each document), a document frequency of 2 (word count in the set of documents), and a TF-IDF of ½ (TF divided by DF).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/image1-1603903007533.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
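&lt;p&gt;To make this concrete, here is a minimal sketch (not from the original post) of TF-IDF feature extraction with Spark ML’s &lt;code&gt;Tokenizer&lt;/code&gt;, &lt;code&gt;HashingTF&lt;/code&gt;, and &lt;code&gt;IDF&lt;/code&gt; on two toy documents like the ones above. It assumes a spark-shell session, and the column and variable names are illustrative only:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.spark.ml.feature.{HashingTF, IDF, Tokenizer}
import spark.implicits._

// two toy documents: one positive, one negative
val docs = Seq((1.0, &quot;love this bike&quot;), (0.0, &quot;returned this bike&quot;))
  .toDF(&quot;label&quot;, &quot;text&quot;)

// split the text into word tokens
val toyTokenizer = new Tokenizer().setInputCol(&quot;text&quot;).setOutputCol(&quot;words&quot;)

// term frequency (TF) vectors, then inverse document frequency (IDF) weighting
val toyTF = new HashingTF().setInputCol(&quot;words&quot;).setOutputCol(&quot;tf&quot;)
val toyIDF = new IDF().setInputCol(&quot;tf&quot;).setOutputCol(&quot;features&quot;)

val tfDF = toyTF.transform(toyTokenizer.transform(docs))
val tfidfDF = toyIDF.fit(tfDF).transform(tfDF)

// words shared by both documents (like &apos;bike&apos;) get down-weighted
tfidfDF.select(&quot;label&quot;, &quot;features&quot;).show(false)
&lt;/code&gt;&lt;/pre&gt;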
&lt;h2&gt;Logistic Regression&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://spark.apache.org/docs/latest/ml-classification-regression.html#23logistic-regression&quot;&gt;Logistic regression is a popular method to predict a binary response.&lt;/a&gt; It is a special case of generalized linear models that predicts the probability of the outcome. Logistic regression measures the relationship between the Y &quot;Label&quot; and the X &quot;Features&quot; by estimating probabilities using &lt;a href=&quot;https://en.wikipedia.org/wiki/Logistic_regression&quot;&gt;a logistic function.&lt;/a&gt; The model predicts a probability, which is used to predict the label class.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/image13-1603903017520.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In our text classification case, logistic regression tries to predict the probability of a review text being positive or negative, given the label and feature vector of TF-IDF values.  Logistic regression finds the best fit weight for each word in the collection of text by multiplying each TF-IDF feature by a weight and passing the sum through a sigmoid function, which transforms the input x into the output y, a number between 0 and 1.  In other words, logistic regression can be understood as finding the &lt;a href=&quot;http://logisticregressionanalysis.com/86-what-is-logistic-regression/&quot;&gt;parameters that best fit:&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/image16-1603903026176.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/image19-1603903034716.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
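&lt;p&gt;As a small illustrative sketch (not part of the original post), the logistic function shown in the figures above can be written directly in Scala. The function and parameter names here are just for illustration of how a weighted sum of TF-IDF features is squashed into a probability between 0 and 1:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// the sigmoid squashes any real-valued score into the range (0, 1)
def sigmoid(z: Double): Double = 1.0 / (1.0 + math.exp(-z))

// probability that a review is positive, given its TF-IDF feature values
// and the weights learned by logistic regression
def positiveProbability(features: Array[Double],
                        weights: Array[Double],
                        intercept: Double): Double =
  sigmoid(features.zip(weights).map { case (x, w) =&gt; x * w }.sum + intercept)
&lt;/code&gt;&lt;/pre&gt;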
&lt;p&gt;Logistic regression has the following advantages:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Can handle sparse data&lt;/li&gt;
&lt;li&gt;Fast to train&lt;/li&gt;
&lt;li&gt;Weights can be interpreted&lt;/li&gt;
&lt;li&gt;Positive weights will correspond to the words that are positive&lt;/li&gt;
&lt;li&gt;Negative weights will correspond to the words that are negative&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Data Exploration and Feature Extraction&lt;/h2&gt;
&lt;p&gt;We will be using a dataset of Amazon sports and outdoor products review data, which you can download here: &lt;a href=&quot;http://jmcauley.ucsd.edu/data/amazon/&quot;&gt;http://jmcauley.ucsd.edu/data/amazon/&lt;/a&gt;. The dataset has the following schema:&lt;br&gt;
*&lt;em&gt;Italicized fields are for sentiment analysis&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;reviewerID&lt;/code&gt; - ID of the reviewer, e.g., A2SUAM1J3GNN3B&lt;br&gt;
&lt;code&gt;asin&lt;/code&gt; - ID of the product, e.g., 0000013714&lt;br&gt;
&lt;code&gt;reviewerName&lt;/code&gt; - name of the reviewer&lt;br&gt;
&lt;code&gt;helpful&lt;/code&gt; - helpfulness rating of the review, e.g., 2/3&lt;br&gt;
*&lt;code&gt;reviewText&lt;/code&gt; - &lt;em&gt;text of the review&lt;/em&gt;&lt;br&gt;
*&lt;code&gt;overall&lt;/code&gt; - &lt;em&gt;rating of the product&lt;/em&gt;&lt;br&gt;
*&lt;code&gt;summary&lt;/code&gt; - &lt;em&gt;summary of the review&lt;/em&gt;&lt;br&gt;
&lt;code&gt;unixReviewTime&lt;/code&gt; - time of the review (Unix time)&lt;br&gt;
&lt;code&gt;reviewTime&lt;/code&gt; - time of the review (raw)&lt;/p&gt;
&lt;p&gt;The dataset has the following JSON format:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
	&quot;reviewerID&quot;: &quot;A1PUWI9RTQV19S&quot;,
	&quot;asin&quot;: &quot;B003Y5C132&quot;,
	&quot;reviewerName&quot;: &quot;kris&quot;,
	&quot;helpful&quot;: [0, 1],
	&quot;reviewText&quot;: &quot;A little small in hind sight, but I did order a .30 cal box. Good condition, and keeps my ammo organized.&quot;,
	&quot;overall&quot;: 5.0,
	&quot;summary&quot;: &quot;Nice ammo can&quot;,
	&quot;unixReviewTime&quot;: 1384905600,
	&quot;reviewTime&quot;: &quot;11 20, 2013&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In this scenario, we will use logistic regression to predict the label of positive or not, based on the following:&lt;/p&gt;
&lt;p&gt;Label :&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;overall - rating of the product 4-5  = 1 Positive&lt;/li&gt;
&lt;li&gt;overall - rating of the product 1-2  =  0 Negative&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Features :&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;reviewText + summary  of the review → TF-IDF features&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;USING THE SPARK ML PACKAGE&lt;/h2&gt;
&lt;p&gt;Spark ML provides a uniform set of high-level APIs, built on top of DataFrames with the goal of making machine learning scalable and easy. Having ML APIs built on top of DataFrames provides the scalability of partitioned data processing with the ease of SQL for data manipulation.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/image9-1603903043455.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;We will use an ML Pipeline to pass the data through transformers in order to extract the features and an estimator to produce the model.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Transformer: A transformer is an algorithm that transforms one &lt;code&gt;DataFrame&lt;/code&gt; into another &lt;code&gt;DataFrame&lt;/code&gt;. We will use transformers to get a &lt;code&gt;DataFrame&lt;/code&gt; with a features vector column.&lt;/li&gt;
&lt;li&gt;Estimator: An estimator is an algorithm that can be fit on a &lt;code&gt;DataFrame&lt;/code&gt; to produce a transformer. We will use an estimator to train a model, which can transform input data to get predictions.&lt;/li&gt;
&lt;li&gt;Pipeline: A pipeline chains multiple transformers and estimators together to specify an ML workflow.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Load the Data from a File into a DataFrame&lt;/h2&gt;
&lt;p&gt;The first step is to load our data into a &lt;code&gt;DataFrame&lt;/code&gt;. Below, we &lt;a href=&quot;http://spark.apache.org/docs/latest/sql-programming-guide.html#23manually-specifying-options&quot;&gt;specify the data source format and path to load into a &lt;code&gt;DataFrame&lt;/code&gt;&lt;/a&gt;.  Next, we use the &lt;code&gt;withColumn&lt;/code&gt; method to add a column combining the review summary with the review text, and we drop columns that are not needed.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.spark._
import org.apache.spark.ml._
import org.apache.spark.sql._

var file =&quot;/user/mapr/data/revsporttrain.json&quot;

val df0  = spark.read.format(&quot;json&quot;)
 .option(&quot;inferSchema&quot;, &quot;true&quot;)
 .load(file)

val df = df0.withColumn(&quot;reviewTS&quot;,
  concat($&quot;summary&quot;, lit(&quot; &quot;),$&quot;reviewText&quot;))
 .drop(&quot;helpful&quot;)
 .drop(&quot;reviewerID&quot;)
 .drop(&quot;reviewerName&quot;)
 .drop(&quot;reviewTime&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;DataFrame printSchema&lt;/code&gt; displays the schema:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;df.printSchema

root
 |-- asin: string (nullable = true)
 |-- overall: double (nullable = true)
 |-- reviewText: string (nullable = true)
 |-- summary: string (nullable = true)
 |-- unixReviewTime: long (nullable = true)
 |-- reviewTS: string (nullable = true)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;DataFrame show&lt;/code&gt; method displays the first 20 rows or the specified number of rows:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;df.show(5)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/image5-1603903053652.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Summary Statistics&lt;/h2&gt;
&lt;p&gt;Spark DataFrames include some &lt;a href=&quot;https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#23pyspark.sql.DataFrame&quot;&gt;built-in functions&lt;/a&gt; for statistical processing. The &lt;code&gt;describe()&lt;/code&gt; function performs summary statistics calculations on all numeric columns and returns them as a &lt;code&gt;DataFrame&lt;/code&gt;. Below, we analyze the product rating overall column:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;df.describe(&quot;overall&quot;).show

result:
+-------+------------------+
|summary|           overall|
+-------+------------------+
|  count|            200000|
|   mean|          4.395105|
| stddev|0.9894654790262587|
|    min|               1.0|
|    max|               5.0|
+-------+------------------+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In the code below, we filter to remove neutral ratings (=3), then a Spark &lt;a href=&quot;https://spark.apache.org/docs/2.2.0/ml-features.html#23bucketizer&quot;&gt;Bucketizer&lt;/a&gt; is used to add a label 0/1 column to the dataset for Positive (overall rating &gt;=4) and not positive (overall rating &amp;#x3C;4) reviews. Then, the resulting total counts are displayed. Grouping the data by the label column and counting the number of instances in each group shows that there are roughly 13 times as many positive samples as not positive samples.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val df1 = df.filter(&quot;overall !=3&quot;)

val bucketizer = new Bucketizer()
.setInputCol(&quot;overall&quot;)
.setOutputCol(&quot;label&quot;)
.setSplits(Array(Double.NegativeInfinity, 4.0,
 Double.PositiveInfinity))

val df2= bucketizer.transform(df1)

df2.groupBy(&quot;overall&quot;,&quot;label&quot;).count.show

result:
+-------+-----+------+
|overall|label| count|
+-------+-----+------+
|    2.0|  0.0|  6916|
|    5.0|  1.0|127515|
|    1.0|  0.0|  6198|
|    4.0|  1.0| 43303|
+-------+-----+------+
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Stratified Sampling&lt;/h2&gt;
&lt;p&gt;In order to ensure that our model is sensitive to the negative samples, we can put the two sample types on the same footing using stratified sampling. The DataFrame &lt;code&gt;sampleBy()&lt;/code&gt; function does this when provided with fractions of each sample type to be returned. Here, we&apos;re keeping all of the negative instances but downsampling the positive instances to 10%, then displaying the results.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val fractions = Map(1.0 -&gt; .1, 0.0 -&gt; 1.0)
val df3 = df2.stat.sampleBy(&quot;label&quot;, fractions, 36L)
df3.groupBy(&quot;label&quot;).count.show

result:

+-----+-----+
|label|count|
+-----+-----+
|  0.0|13114|
|  1.0|17086|
+-----+-----+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Below, the data is split into a training data set and a test data set: 80% of the data is used to train the model, and 20% will be used for testing.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// split into training and test dataset
val splitSeed = 5043
val Array(trainingData, testData) = df3.randomSplit(Array(0.8, 0.2), splitSeed)
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Feature Extraction and Pipelining&lt;/h2&gt;
&lt;p&gt;The ML package needs the label and feature vector to be added as columns to the input &lt;code&gt;DataFrame&lt;/code&gt;. We set up a pipeline to pass the data through transformers in order to extract the features and label.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;RegexTokenizer&lt;/code&gt; takes an input text column and returns a &lt;code&gt;DataFrame&lt;/code&gt; with an additional column of the text split into an array of words by using the provided regex pattern.   The &lt;code&gt;StopWordsRemover&lt;/code&gt; filters out words which should be excluded, because the words appear frequently and don&apos;t carry as much meaning – for example, &apos;I,&apos; &apos;is,&apos; &apos;the.&apos;&lt;/p&gt;
&lt;p&gt;In the code below, the &lt;code&gt;RegexTokenizer&lt;/code&gt; will split up the column with the review and summary text into a column with an array of words, which will then be filtered by the &lt;code&gt;StopWordsRemover&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val tokenizer = new RegexTokenizer()
.setInputCol(&quot;reviewTS&quot;)
.setOutputCol(&quot;reviewTokensUf&quot;)
.setPattern(&quot;\\s+|[,.()\&quot;]&quot;)

val remover = new StopWordsRemover()
.setStopWords(StopWordsRemover
.loadDefaultStopWords(&quot;english&quot;))
.setInputCol(&quot;reviewTokensUf&quot;)
.setOutputCol(&quot;reviewTokens&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;An example of  the results of the &lt;code&gt;RegexTokenizer&lt;/code&gt; and &lt;code&gt;StopWordsRemover&lt;/code&gt;, taking as input column &lt;code&gt;reviewTS&lt;/code&gt; and adding the &lt;code&gt;reviewTokens&lt;/code&gt; column of filtered words, is shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/image11-1603903062165.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th align=&quot;left&quot;&gt;reviewTS&lt;/th&gt;
&lt;th align=&quot;left&quot;&gt;reviewTokens&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;resistance was good but quality wasn&apos;t So it worked well for a couple weeks, but during a lunge workout, it snapped on me. I liked it and thought it was a great product until this happened. I noticed small rips on the band. This could have been the issue.&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Array(resistance, good, quality, worked, well, couple, weeks, lunge, workout, snapped, liked, thought, great, product, happened, noticed, small, rips, band, issue)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;A &lt;code&gt;CountVectorizer&lt;/code&gt; is used to convert the array of word tokens from the previous step to vectors of word token counts.  The &lt;code&gt;CountVectorizer&lt;/code&gt; is performing the TF part of TF-IDF feature extraction.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val cv = new CountVectorizer()
.setInputCol(&quot;reviewTokens&quot;)
.setOutputCol(&quot;cv&quot;)
.setVocabSize(200000)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;An example of  the results of the &lt;code&gt;CountVectorizer&lt;/code&gt;, taking as input column &lt;code&gt;reviewTokens&lt;/code&gt; and adding the &lt;code&gt;cv&lt;/code&gt; column of vectorized word counts, is shown below.  In the &lt;code&gt;cv&lt;/code&gt; column: 56004 is the size of the TF word vocabulary; the second array is the position of the word in the word vocabulary ordered by term frequency across the corpus; the third array is the count of the word (TF) in the &lt;code&gt;reviewTokens&lt;/code&gt; text.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th align=&quot;left&quot;&gt;reviewTokens&lt;/th&gt;
&lt;th align=&quot;left&quot;&gt;cv&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;Array(resistance, good, quality, worked, well, couple, weeks, lunge, workout, snapped, liked, thought, great, product, happened, noticed, small, rips, band, issue)&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;&lt;span style=&quot;overflow-wrap: anywhere;&quot;&gt;(56004,[1,2,6,8,13,31,163,168,192,276,487,518,589,643,770,955,1194,1297,4178,19185],[1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0])&lt;/span&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Below, the &lt;code&gt;cv&lt;/code&gt; column created by the &lt;code&gt;CountVectorizer&lt;/code&gt; (the TF part of TF-IDF feature extraction) is used as the input for IDF. IDF takes feature vectors created by the &lt;code&gt;CountVectorizer&lt;/code&gt; and down-weights features which appear frequently in a collection of texts (the IDF part of TF-IDF feature extraction). The output &lt;code&gt;features&lt;/code&gt; column is the TF-IDF features vector, which the logistic regression function will use.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// list of feature columns
val idf = new IDF()
.setInputCol(&quot;cv&quot;)
.setOutputCol(&quot;features&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;An example of the results of the IDF, taking as input column &lt;code&gt;cv&lt;/code&gt; and adding the &lt;code&gt;features&lt;/code&gt; column of vectorized TF-IDF, is shown below. In the &lt;code&gt;features&lt;/code&gt; column, 56004 is the size of the word vocabulary; the second array is the position of the word in the word vocabulary ordered by term frequency across the corpus; the third array is the TF-IDF of the word in the reviewTokens text.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th align=&quot;left&quot;&gt;cv&lt;/th&gt;
&lt;th align=&quot;left&quot;&gt;features&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;&lt;span style=&quot;overflow-wrap: anywhere;&quot;&gt; (56004,[1,2,6,8,13,31,163,168,192,276,487,518,589,643,770,955,1194,1297,4178,19185],[1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0])&lt;/span&gt;&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;&lt;span style=&quot;overflow-wrap: anywhere;&quot;&gt;(56004,[1,2,6,8,13,31,163,168,192,276,487,518,589,643,770,955,1194,1297,4178,19185],[1.3167453737971118,1.3189162538557524,1.5214341820160893,1.9425118863569042,2.052613811061827,2.3350290362765134,3.188779919701724,3.245760634740672,3.316430208091361,3.620260266951124,4.115700971877636,4.165254786332365,4.655788580192657,4.32745920096672,4.781242886345692,5.001914248514512,5.008106218762434,5.169529657918772,6.793673717742568,8.990898295078788])&lt;/span&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;The final element in our pipeline is an estimator, a logistic regression classifier, which will train on the vector of labels and features and return a (transformer) model.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// create Logistic Regression estimator
// regularizer parameters avoid overfitting

val lr = new LogisticRegression()
.setMaxIter(100)
.setRegParam(0.02)
.setElasticNetParam(0.3)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Below, we put the &lt;code&gt;Tokenizer&lt;/code&gt;, &lt;code&gt;CountVectorizer&lt;/code&gt;, IDF,  and Logistic Regression Classifier in a pipeline.  A pipeline chains multiple transformers and estimators together to specify an ML workflow for training and using a model.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val steps =  Array( tokenizer, remover, cv, idf,lr)
val pipeline = new Pipeline().setStages(steps)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/image10-1603903070796.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;TRAIN THE MODEL&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/image17-1603903079716.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Next, we train the logistic regression model &lt;a href=&quot;https://spark.apache.org/docs/latest/ml-classification-regression.html#23logistic-regression&quot;&gt;with elastic net regularization&lt;/a&gt;. The model is trained by making associations between the input features and the labeled output associated with those features. The &lt;code&gt;pipeline.fit&lt;/code&gt; method returns a fitted pipeline model.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val model = pipeline.fit(trainingData)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/image6-1603903088693.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Note: another option for training the model is to tune the parameters, using grid search, and select the best model, using k-fold cross validation with a Spark CrossValidator and a ParamGridBuilder.&lt;/p&gt;
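&lt;p&gt;As a hedged sketch of that alternative (not part of the original post), a &lt;code&gt;CrossValidator&lt;/code&gt; can wrap the pipeline defined above; the grid values below are illustrative only:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.spark.ml.tuning.{CrossValidator, ParamGridBuilder}
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator

// candidate regularization settings for the logistic regression stage
val paramGrid = new ParamGridBuilder()
  .addGrid(lr.regParam, Array(0.01, 0.02, 0.1))
  .addGrid(lr.elasticNetParam, Array(0.0, 0.3, 0.8))
  .build()

// 3-fold cross validation over the whole pipeline, scored by area under ROC
val crossValidator = new CrossValidator()
  .setEstimator(pipeline)
  .setEvaluator(new BinaryClassificationEvaluator())
  .setEstimatorParamMaps(paramGrid)
  .setNumFolds(3)

val cvModel = crossValidator.fit(trainingData)
val bestModel = cvModel.bestModel
&lt;/code&gt;&lt;/pre&gt;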
&lt;p&gt;Next, we can get the &lt;code&gt;CountVectorizer&lt;/code&gt; and &lt;code&gt;LogisticRegression&lt;/code&gt; model from the fitted pipeline model, in order to print out the coefficient weights of the words in the text vocabulary (the word feature importance).&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// get vocabulary from the CountVectorizer
val vocabulary = model.stages(2)
.asInstanceOf[CountVectorizerModel]
.vocabulary

// get the logistic regression model
val lrModel = model.stages.last
.asInstanceOf[LogisticRegressionModel]

// Get array of coefficient weights
val weights = lrModel.coefficients.toArray

// create array of word and corresponding weight
val word_weight = vocabulary.zip(weights)

// create a dataframe with word and weights columns
val cdf = sc.parallelize(word_weight)
.toDF(&quot;word&quot;,&quot;weights&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Recall that logistic regression learns a coefficient weight for each feature x (in this case, a word) so that the formula best predicts the probability of the outcome Y, 1 or 0 (in this case, positive or negative text sentiment). The weights can be interpreted as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Positive weights will correspond to the words that are positive&lt;/li&gt;
&lt;li&gt;Negative weights will correspond to the words that are negative&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Below, we sort the weights in descending order to show the most positive words. The results show that &apos;great,&apos; &apos;perfect,&apos; &apos;easy,&apos; &apos;works,&apos; and &apos;excellent&apos; are the most important positive words.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// show the most positive weighted words
cdf.orderBy(desc(&quot;weights&quot;)).show(10)

result:
+---------+-------------------+
|     word|             weight|
+---------+-------------------+
|    great| 0.6078697902359276|
|  perfect|0.34404726951273945|
|excellent|0.28217372351853814|
|     easy|0.26293906850341764|
|     love|0.23518819188672227|
|    works|  0.229342771859023|
|     good| 0.2116386469012886|
|   highly| 0.2044040462730194|
|     nice|0.20088266981583622|
|     best|0.18194893152633945|
+---------+-------------------+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Below, we sort the weights in ascending order to show the most negative words. The results show that &apos;returned,&apos; &apos;poor,&apos; &apos;waste,&apos; and &apos;useless&apos; are the most important negative words.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// show the most negative sentiment words
cdf.orderBy(&quot;weights&quot;).show(10)

result:
+-------------+--------------------+
|         word|              weight|
+-------------+--------------------+
|     returned|-0.38185206877117467|
|         poor|-0.35366409294425644|
|        waste| -0.3159724826017525|
|      useless| -0.2914292653060789|
|       return| -0.2724012497362986|
|disappointing| -0.2666580559444479|
|        broke| -0.2656765359468423|
| disappointed|-0.23852780960293438|
|    returning|-0.22432617475366876|
|         junk|-0.21457169691127467|
+-------------+--------------------+
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Predictions and Model Evaluation&lt;/h2&gt;
&lt;p&gt;The performance of the model can be determined, using the test data set that has not been used for any training. We transform the test &lt;code&gt;DataFrame&lt;/code&gt; with the pipeline model, which will pass the test data, according to the pipeline steps, through the feature extraction stage, estimate with the logistic regression model, and then return the label predictions in a column of a new &lt;code&gt;DataFrame&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val predictions = model.transform(testData)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/image4-1603903098021.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;BinaryClassificationEvaluator&lt;/code&gt; provides a metric to measure how well a fitted model does on the test data. The default metric for this evaluator is the area under the ROC curve. The area measures the ability of the test to correctly classify true positives from false positives. A random predictor would have .5. The closer the value is to 1, the better its predictions are.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/image14-1603903107176.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Below, we pass the predictions &lt;code&gt;DataFrame&lt;/code&gt; (which has a &lt;code&gt;rawPrediction&lt;/code&gt; column and a label column) to the &lt;code&gt;BinaryClassificationEvaluator&lt;/code&gt;, which returns .93 as the area under the ROC curve.  &lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val evaluator = new BinaryClassificationEvaluator()  
val areaUnderROC = evaluator.evaluate(predictions)

result:  0.9350783400583272
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Below, we calculate some more metrics. The number of false/true positives and negative predictions is also useful:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;True positives are how often the model correctly predicts positive sentiment.&lt;/li&gt;
&lt;li&gt;False positives are how often the model incorrectly predicts positive sentiment.&lt;/li&gt;
&lt;li&gt;True negatives indicate how often the model correctly predicts negative sentiment.&lt;/li&gt;
&lt;li&gt;False negatives indicate how often the model incorrectly predicts negative sentiment.&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val lp = predictions.select(&quot;prediction&quot;, &quot;label&quot;)
val counttotal = predictions.count().toDouble
val correct = lp.filter($&quot;label&quot; === $&quot;prediction&quot;)
 .count()
val wrong = lp.filter(&quot;label != prediction&quot;)
 .count()
val ratioWrong = wrong / counttotal
val ratioCorrect = correct / counttotal

val truen =( lp.filter($&quot;label&quot; === 0.0)
 .filter($&quot;label&quot; === $&quot;prediction&quot;)
 .count()) /counttotal

val truep = (lp.filter($&quot;label&quot; === 1.0)
 .filter($&quot;label&quot; === $&quot;prediction&quot;)
 .count())/counttotal

val falsen = (lp.filter($&quot;label&quot; === 0.0)
 .filter(not($&quot;label&quot; === $&quot;prediction&quot;))
 .count())/counttotal

val falsep = (lp.filter($&quot;label&quot; === 1.0)
 .filter(not($&quot;label&quot; === $&quot;prediction&quot;))
 .count())/counttotal

val precision= truep / (truep + falsep)
val recall= truep / (truep + falsen)
val fmeasure= 2 * precision * recall / (precision + recall)
val accuracy=(truep + truen) / (truep + truen + falsep + falsen)

result:
counttotal: 6112.0
correct: 5290.0
wrong: 822.0
ratioWrong: 0.13448952879581152
ratioCorrect: 0.8655104712041884
truen: 0.3417866492146597
truep: 0.5237238219895288
falsen: 0.044829842931937175
falsep: 0.08965968586387435
precision: 0.8538276873833023
recall: 0.9211510791366907
fmeasure: 0.8862126245847176
accuracy: 0.8655104712041886
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Below, we print out the summary and review token words for the reviews with the highest probability of a negative sentiment:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;predictions.filter($&quot;prediction&quot; === 0.0)
.select(&quot;summary&quot;,&quot;reviewTokens&quot;,&quot;overall&quot;,&quot;prediction&quot;)
.orderBy(desc(&quot;rawPrediction&quot;)).show(5)

result:
+--------------------+--------------------+-------+----------+
|             summary|        reviewTokens|overall|prediction|
+--------------------+--------------------+-------+----------+
|  Worthless Garbage!|[worthless, garba...|    1.0|       0.0|
|Decent but failin...|[decent, failing,...|    1.0|       0.0|
|over rated and po...|[rated, poorly, m...|    2.0|       0.0|
|dont waste your m...|[dont, waste, mon...|    1.0|       0.0|
|Cheap Chinese JUNK! |[cheap, chinese,....|    1.0|       0.0|
+--------------------+--------------------+-------+----------+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Below we print out the summary and review token words for the reviews with the highest probability of a positive sentiment:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;predictions.filter($&quot;prediction&quot; === 1.0)
.select(&quot;summary&quot;,&quot;reviewTokens&quot;,&quot;overall&quot;,&quot;prediction&quot;)
.orderBy(&quot;rawPrediction&quot;).show(5)

result:
+--------------------+--------------------+-------+----------+
|             summary|        reviewTokens|overall|prediction|
+--------------------+--------------------+-------+----------+
|               great|[great, excellent...|    5.0|       1.0|
|Outstanding Purchase|[outstanding, pur...|    5.0|       1.0|
|A fantastic stov....|[fantastic, stov....|    5.0|       1.0|
|Small But Delight...|[small, delightfu...|    5.0|       1.0|
|Kabar made a good...|[kabar, made, goo...|    5.0|       1.0|
+--------------------+--------------------+-------+----------+
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Saving the Model&lt;/h2&gt;
&lt;p&gt;We can now save our fitted pipeline model to the distributed file store for later use in production. This saves both the feature extraction stage and the logistic regression model in the pipeline.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;var dir = &quot;/user/mapr/sentmodel/&quot;
model.write.overwrite().save(dir)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The result of saving the pipeline model is a JSON file for metadata and Parquet files for model data. We can reload the model with the load command; the original and reloaded models are the same:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val sameModel = org.apache.spark.ml.PipelineModel.load(dir)
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;There are plenty of great tools to build classification models. Apache Spark provides an excellent framework for building solutions to business problems that can extract value from massive, distributed datasets.&lt;/p&gt;
&lt;p&gt;Machine learning algorithms cannot answer all questions perfectly. But they do provide evidence for humans to consider when interpreting results, assuming the right question is asked in the first place.&lt;/p&gt;
&lt;h2&gt;Code&lt;/h2&gt;
&lt;p&gt;All of the data and code to train the models and make your own conclusions, using Apache Spark, are located on GitHub; refer to the GitHub README for more information about running the code.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/caroljmcdonald/mapr-sparkml-sentiment-classification&quot;&gt;https://github.com/caroljmcdonald/mapr-sparkml-sentiment-classification&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Fast data processing pipeline for predicting flight delays using Apache APIs: Kafka, Spark Streaming and Machine Learning (part 1)]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may…]]></description><link>https://developer.hpe.com/fast-data-processing-pipeline-for-predicting-flight-delays-using-apache-/</link><guid isPermaLink="false">https://developer.hpe.com/fast-data-processing-pipeline-for-predicting-flight-delays-using-apache-/</guid><pubDate>Wed, 21 Oct 2020 05:53:58 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;&quot;authorDisplayName&quot;: &quot;Carol McDonald&quot;,
&quot;publish&quot;: &quot;2018-01-10T10:30:00.000Z&quot;,
&quot;tags&quot;: &quot;use-cases&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;According to &lt;a href=&quot;https://hbr.org/2017/06/how-analytics-has-changed-in-the-last-10-years-and-how-its-stayed-the-same&quot;&gt;Thomas Davenport in the HBR&lt;/a&gt;, analytical technology has changed dramatically over the last decade, with more powerful and less expensive distributed computing across commodity servers, streaming analytics, and improved machine learning technologies, enabling companies to store and analyze both far more data and many different types of it. Werner Vogel stated in his recent &lt;a href=&quot;https://www.infoq.com/news/2017/12/vogels-21st-century-architecture&quot;&gt;keynote at AWS re:invent&lt;/a&gt; that key technology drivers of today are data, the Internet of Things (IoT), and machine learning.&lt;/p&gt;
&lt;p&gt;Leveraging the huge amounts of data coming from the Internet of Things requires processing events in real time, applying machine learning to add value, and scalable fast storage. This is the first in a series of blogs, which discusses the architecture of an end-to-end application that combines streaming data with machine learning and fast storage. In this first post, I’ll help you get started using Apache Spark’s &lt;a href=&quot;https://spark.apache.org/docs/latest/ml-pipeline.html&quot;&gt;ML pipelines&lt;/a&gt; with a &lt;a href=&quot;https://spark.apache.org/docs/latest/ml-classification-regression.html#decision-tree-classifier&quot;&gt;Decision Tree Classifier&lt;/a&gt; to predict flight delays.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/spark-rdd-based-api-1603259821463.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/machine-learning-logistics-and-streaming-architecture-1603259842484.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;What is Machine Learning?&lt;/h2&gt;
&lt;p&gt;Machine learning uses algorithms to find patterns in data, and then uses a model that recognizes those patterns to make predictions on new data.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/predictions-1603259860781.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;There are typically two phases in machine learning with real-time data:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Data Discovery: The first phase involves analysis on historical data to build the machine learning model.&lt;/li&gt;
&lt;li&gt;Analytics Using the Model: The second phase uses the model in production on live events. (Note that Spark does provide some streaming machine learning algorithms, but you still often need to do an analysis of historical data.)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/historical-data-1603259872698.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In general, machine learning may be broken down into supervised learning, unsupervised learning, and approaches that fall in between the two.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/machine-learning-1603259887263.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Supervised algorithms use labeled data in which both the input and target outcome, or label, are provided to the algorithm.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/labeled-data-1603259898645.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Supervised Learning is also called predictive modeling or predictive analytics, because you build a model that is capable of making predictions.&lt;/p&gt;
&lt;p&gt;Unsupervised learning algorithms find patterns in unlabeled data. Semi-supervised learning uses a mixture of labeled and unlabeled data. Reinforcement learning trains algorithms to maximize rewards based on feedback.&lt;/p&gt;
&lt;h2&gt;Three Categories of Techniques for Machine Learning&lt;/h2&gt;
&lt;p&gt;Three common categories of machine learning techniques are Classification, Clustering and Collaborative Filtering.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/three-common-catgories-1603259911430.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Classification: Gmail uses a machine learning technique called classification to designate if an email is spam or not, based on the data of an email: the sender, recipients, subject, and message body. Classification takes a set of data with known labels and learns how to label new records based on that information.&lt;/li&gt;
&lt;li&gt;Clustering: Google News uses a technique called clustering to group news articles into different categories, based on title and content. Clustering algorithms discover groupings that occur in collections of data.&lt;/li&gt;
&lt;li&gt;Collaborative Filtering: Amazon uses a machine learning technique called collaborative filtering (commonly referred to as recommendation), to determine which products users will like based on their history and similarity to other users.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In this example we will be using a supervised machine learning algorithm for classification of flight delays.&lt;/p&gt;
&lt;h2&gt;CLASSIFICATION&lt;/h2&gt;
&lt;p&gt;Classification is a family of supervised machine learning algorithms that identify which category an item belongs to (e.g., whether a transaction is fraud or not fraud), based on labeled examples of known items (e.g., transactions known to be fraud or not). Classification takes a set of data with known labels and pre-determined features and learns how to label new records based on that information. Features are the “if questions” that you ask. The label is the answer to those questions. In the example below, if it walks, swims, and quacks like a duck, then the label is &quot;duck.&quot;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/duck-1603259924086.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Let’s go through an example for flight delays:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;What are we trying to predict?
&lt;ul&gt;
&lt;li&gt;Whether a flight will be delayed or not.&lt;/li&gt;
&lt;li&gt;Delayed is the Label: True or False&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;What are the “if questions” or properties that you can use to make predictions?
&lt;ul&gt;
&lt;li&gt;What is the Originating Airport?&lt;/li&gt;
&lt;li&gt;What is the Destination Airport?&lt;/li&gt;
&lt;li&gt;What is the Scheduled time of departure?&lt;/li&gt;
&lt;li&gt;What is the scheduled time of arrival?&lt;/li&gt;
&lt;li&gt;What is the day of the week?&lt;/li&gt;
&lt;li&gt;What is the Airline Carrier?&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;DECISION TREES&lt;/h2&gt;
&lt;p&gt;Decision trees create a model that predicts the label (or class) by evaluating a set of rules that follow an IF-THEN-ELSE pattern. The IF-THEN-ELSE feature questions are the nodes, and the true/false answers are the branches to the child nodes. A decision tree model estimates the minimum number of true/false questions needed to assess the probability of making a correct decision. Below is an example of a simplified decision tree for flight delays:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Q1: If the scheduled departure time &amp;#x3C; 10:15 AM
&lt;ul&gt;
&lt;li&gt;T:Q2: If the originating airport is in the set {ORD, ATL,SFO}
&lt;ul&gt;
&lt;li&gt;T:Q3: If the day of the week is in {Monday, Sunday}
&lt;ul&gt;
&lt;li&gt;T:Delayed=1&lt;/li&gt;
&lt;li&gt;F: Delayed=0&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;F: Q3: If the destination airport is in the set {SFO,ORD,EWR}
&lt;ul&gt;
&lt;li&gt;T: Delayed=1&lt;/li&gt;
&lt;li&gt;F: Delayed=0&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;F: Q2: If the originating airport is not in the set {BOS, MIA}
&lt;ul&gt;
&lt;li&gt;T:Q3: If the day of the week is in the set {Monday , Sunday}
&lt;ul&gt;
&lt;li&gt;T: Delayed=1&lt;/li&gt;
&lt;li&gt;F: Delayed=0&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;F: Q3: If the destination airport is not in the set {BOS, MIA}
&lt;ul&gt;
&lt;li&gt;T: Delayed=1&lt;/li&gt;
&lt;li&gt;F: Delayed=0&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/decision-tree-1603259936871.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;EXAMPLE USE CASE DATA SET&lt;/h2&gt;
&lt;p&gt;Our data is from &lt;a href=&quot;http://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=236&amp;#x26;DB_Short_Name=On-Time&quot;&gt;http://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=236&amp;#x26;DB_Short_Name=On-Time&lt;/a&gt;. We are using flight information for January, February, March, April and May 2017. For each flight, we have the following information:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Field&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Description&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Example Value&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;id&lt;/td&gt;
&lt;td&gt;Unique identifier: composed of carrier code, date, origin, destination, flight number&lt;/td&gt;
&lt;td&gt;AA_2017-02-22_SFO_ORD_150&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;dofW (Integer)&lt;/td&gt;
&lt;td&gt;Day of week (1=Monday 7=Sunday)&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;carrier (String)&lt;/td&gt;
&lt;td&gt;Carrier code&lt;/td&gt;
&lt;td&gt;AA&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;origin(String)&lt;/td&gt;
&lt;td&gt;Origin Airport Code&lt;/td&gt;
&lt;td&gt;JFK&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;dest (String)&lt;/td&gt;
&lt;td&gt;Destination airport code&lt;/td&gt;
&lt;td&gt;LAX&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;crsdephour(Integer)&lt;/td&gt;
&lt;td&gt;Scheduled departure hour&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;crsdeptime(Double)&lt;/td&gt;
&lt;td&gt;Scheduled departure time&lt;/td&gt;
&lt;td&gt;900.0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;depdelay (Double)&lt;/td&gt;
&lt;td&gt;Departure delay in minutes&lt;/td&gt;
&lt;td&gt;40.0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;crsarrtime (Double)&lt;/td&gt;
&lt;td&gt;Scheduled arrival time&lt;/td&gt;
&lt;td&gt;1230.0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;arrdelay (Double)&lt;/td&gt;
&lt;td&gt;Arrival delay minutes&lt;/td&gt;
&lt;td&gt;40.0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;crselapsedtime (Double)&lt;/td&gt;
&lt;td&gt;Elapsed time&lt;/td&gt;
&lt;td&gt;390.0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;dist  (Double)&lt;/td&gt;
&lt;td&gt;Distance&lt;/td&gt;
&lt;td&gt;2475.0&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;I have already cleaned up, limited the number of airports and carriers, and transformed the data into two JSON files, one for training and one for testing. (You can see the code for the cleanup in the GitHub repository.) The JSON file has the following format:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
    &quot;_id&quot;: &quot;AA_2017-01-01_ATL_LGA_1678&quot;,
    &quot;dofW&quot;: 7,
    &quot;carrier&quot;: &quot;AA&quot;,
    &quot;origin&quot;: &quot;ATL&quot;,
    &quot;dest&quot;: &quot;LGA&quot;,
    &quot;crsdephour&quot;: 17,
    &quot;crsdeptime&quot;: 1700,
    &quot;depdelay&quot;: 0.0,
    &quot;crsarrtime&quot;: 1912,
    &quot;arrdelay&quot;: 0.0,
    &quot;crselapsedtime&quot;: 132.0,
    &quot;dist&quot;: 762.0
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can run the &lt;a href=&quot;https://github.com/caroljmcdonald/spark-ml-flightdelay&quot;&gt;code for this example&lt;/a&gt; with MapR 5.2.1 or MapR 6.0 (which includes Spark 2.1).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/zepplin-notebook-1603259951128.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Load The Data From a File Into a Dataframe&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/scala-case-class-1603259961799.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;We use a Scala case class and &lt;a href=&quot;http://spark.apache.org/docs/latest/sql-programming-guide.html#programmatically-specifying-the-schema&quot;&gt;StructType&lt;/a&gt; to define the schema, corresponding to a line in the JSON data file.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/define-schema-for-json-file-data-1603259974595.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Below we &lt;a href=&quot;http://spark.apache.org/docs/latest/sql-programming-guide.html#manually-specifying-options&quot;&gt;specify the data source, schema and class to load into a Dataset&lt;/a&gt;. We load the data from January and February, which we will use for training the model. (Note that specifying the schema when loading data into a DataFrame will &lt;a href=&quot;http://data-informed.com/6-steps-to-get-top-performance-from-the-changes-in-spark-2-0/&quot;&gt;give better performance&lt;/a&gt; than schema inference.)&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/read-data-json-file-into-dataset-1603259991766.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
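&lt;p&gt;For readers who cannot see the notebook screenshots, here is a rough sketch of this step. The case class field types, the file path, and the variable names are illustrative assumptions (the exact code is in the linked GitHub repository), and a SparkSession named &lt;code&gt;spark&lt;/code&gt; is assumed:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.spark.sql.Encoders
import spark.implicits._

// Case class mirroring one line of the JSON flight data (field names taken from the schema table above).
case class Flight(_id: String, dofW: Int, carrier: String, origin: String, dest: String,
  crsdephour: Int, crsdeptime: Double, depdelay: Double, crsarrtime: Double,
  arrdelay: Double, crselapsedtime: Double, dist: Double) extends Serializable

// Derive the schema from the case class and load the January/February JSON into a Dataset[Flight].
val schema = Encoders.product[Flight].schema
val trainingData = spark.read.schema(schema).json(&quot;/path/to/flights_jan_feb.json&quot;).as[Flight]
trainingData.cache()
&lt;/code&gt;&lt;/pre&gt;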
&lt;p&gt;The Dataframe show method displays the first 20 rows:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/dataframe-show-method-1603260003975.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Here we load data from March and April which we will use for testing the model:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/march-april-data-1603260015870.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Summary Statistics&lt;/h2&gt;
&lt;p&gt;Spark DataFrames include some &lt;a href=&quot;https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrame&quot;&gt;built-in functions&lt;/a&gt; for statistical processing. The describe() function performs summary statistics calculations on all numeric columns and returns them as a DataFrame.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/perform-summary-statistics-1603260030384.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Data Exploration&lt;/h2&gt;
&lt;p&gt;We can use Spark SQL to explore the dataset. Here are some example queries using the Spark SQL:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/register-dataset-as-a-temporary-view-1603260043715.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Below we display information for the longest departure delays:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/top-5-longest-departure-delays-1603260056308.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Below we display the average departure delay by Carrier:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/average-departure-delay-1603260067246.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
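&lt;p&gt;As a rough sketch of the exploration shown above, continuing with the illustrative &lt;code&gt;trainingData&lt;/code&gt; Dataset from the earlier sketch (the view name and queries are examples, not the exact notebook code):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// Summary statistics for the numeric columns.
trainingData.describe(&quot;depdelay&quot;, &quot;arrdelay&quot;, &quot;crselapsedtime&quot;, &quot;dist&quot;).show()

// Register the Dataset as a temporary view so it can be queried with Spark SQL.
trainingData.createOrReplaceTempView(&quot;flights&quot;)

// Average departure delay by carrier.
spark.sql(&quot;select carrier, avg(depdelay) as avgdelay from flights group by carrier order by avgdelay desc&quot;).show()
&lt;/code&gt;&lt;/pre&gt;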
&lt;p&gt;We want to predict flight delays where depdelay &gt; 40 minutes, so let’s explore this data. Below we see that United Airlines and Delta have the highest count of flight delays for Jan &amp;#x26; Feb 2017 (the training set).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/count-of-departure-delays-by-carrier-1603260084783.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In the query below we see that Monday and Sunday have the highest count of flight delays.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/count-of-departure-delays-by-day-of-the-week-1603260094738.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In the query below we see that the hours between 13:00-19:00 have the highest count of flight delays.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/count-of-departure-delays-by-hour-of-day-1603260105210.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In the query below we see that the origin airports Chicago and Atlanta have the highest count of flight delays.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/count-of-departure-delays-by-origin-1603260115273.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In the query below we see that the destination airports San Francisco and Newark have the highest count of flight delays.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/count-of-departure-delays-by-destination-1603260125772.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In the query below we see the count of departure delays by Origin and destination. The routes ORD-&gt;SFO and DEN-&gt;SFO have the highest delays, maybe because of weather in January and February? Adding weather to this dataset would give better results, but that is left as an exercise for the reader.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/delays-by-origin-destination-1603260137188.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In the code below a Spark &lt;a href=&quot;https://spark.apache.org/docs/2.2.0/ml-features.html#bucketizer&quot;&gt;Bucketizer&lt;/a&gt; is used to split the dataset into delayed and not delayed flights with a delayed 0/1 column. Then the resulting total counts are displayed. Grouping the data by the delayed field and counting the number of instances in each group shows that there are roughly 8 times as many not delayed samples as delayed samples.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/add-labels-for-delayed-flights-1603260149465.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
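&lt;p&gt;A minimal sketch of this step, assuming the 40-minute threshold described above and the illustrative &lt;code&gt;trainingData&lt;/code&gt; Dataset from the earlier sketches:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.spark.ml.feature.Bucketizer

// Bucket depdelay at 40 minutes into a 0.0/1.0 delayed label column.
val delaybucketizer = new Bucketizer()
  .setInputCol(&quot;depdelay&quot;)
  .setOutputCol(&quot;delayed&quot;)
  .setSplits(Array(Double.NegativeInfinity, 40.0, Double.PositiveInfinity))

val trainingWithLabel = delaybucketizer.transform(trainingData)

// Roughly 8 times as many not delayed (0.0) samples as delayed (1.0) samples.
trainingWithLabel.groupBy(&quot;delayed&quot;).count().show()
&lt;/code&gt;&lt;/pre&gt;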
&lt;p&gt;In the query below we see the count of not delayed (0=dark blue) and delayed (1=light blue) flights by origin.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/delays-by-origin-1603260162630.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Stratified Sampling&lt;/h2&gt;
&lt;p&gt;In order to ensure that our model is sensitive to the delayed samples, we can put the two sample types on the same footing using stratified sampling. The DataFrame sampleBy() function does this when provided with fractions of each sample type to be returned. Here, we&apos;re keeping all instances of delayed flights, but downsampling the not delayed instances to 29%, then displaying the results.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/stratify-the-sampling-to-fewr-not-delayed-1603260173743.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
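&lt;p&gt;A sketch of the stratified sampling, reusing the labeled DataFrame from the previous sketch (the seed is arbitrary, and the temporary label is dropped again here so that the pipeline sketched later can re-create it as one of its stages):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// Keep every delayed flight (1.0) and sample about 29% of the not delayed flights (0.0).
val stratifiedData = trainingWithLabel.stat
  .sampleBy(&quot;delayed&quot;, Map(0.0 -&gt; 0.29, 1.0 -&gt; 1.0), 36L)
  .drop(&quot;delayed&quot;)

stratifiedData.count()
&lt;/code&gt;&lt;/pre&gt;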
&lt;h2&gt;FEATURES ARRAY&lt;/h2&gt;
&lt;p&gt;To build a classifier model, you extract the features that most contribute to the classification. In this scenario, we will build a tree to predict the label of delayed or not based on the following features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Label → delayed = 0
&lt;ul&gt;
&lt;li&gt;Delayed = 1  if delay &gt; 40 minutes&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Features → {day of the week, scheduled departure time, scheduled arrival time, carrier, scheduled elapsed time, origin, destination, distance}&lt;/li&gt;
&lt;/ul&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;delayed&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;dofW&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;crsdepTime&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;crsArrTime&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;carrier&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;elapTime&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;origin&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;dest&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;dist&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1.0/0.0&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;1015&lt;/td&gt;
&lt;td&gt;1230&lt;/td&gt;
&lt;td&gt;AA&lt;/td&gt;
&lt;td&gt;385.0&lt;/td&gt;
&lt;td&gt;JFK&lt;/td&gt;
&lt;td&gt;LAX&lt;/td&gt;
&lt;td&gt;2475.0&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;In order for the features to be used by a machine learning algorithm, they must be transformed and put into Feature Vectors, which are vectors of numbers representing the value for each feature.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/reference-learning-spark-1603260186018.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;USING THE SPARK ML PACKAGE&lt;/h2&gt;
&lt;p&gt;The &lt;a href=&quot;http://spark.apache.org/docs/latest/ml-guide.html&quot;&gt;ML package&lt;/a&gt; is the newer library of machine learning routines. &lt;a href=&quot;http://spark.apache.org/docs/latest/ml-pipeline.html#pipeline-components&quot;&gt;Spark ML provides a uniform set of high-level APIs built on top of DataFrames.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/ml-pipeline-1603260196123.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;We will use an ML Pipeline to pass the data through transformers in order to extract the features and an estimator to produce the model.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Transformer: A Transformer is an algorithm which transforms one DataFrame into another DataFrame. We will use a transformer to get a DataFrame with a features vector column.&lt;/li&gt;
&lt;li&gt;Estimator: An Estimator is an algorithm which can be fit on a DataFrame to produce a Transformer. We will use an estimator to train a model which can transform data to get predictions.&lt;/li&gt;
&lt;li&gt;Pipeline: A Pipeline chains multiple Transformers and Estimators together to specify a ML workflow.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Feature Extraction and Pipelining&lt;/h2&gt;
&lt;p&gt;The ML package needs the label and feature vector to be added as columns to the input DataFrame. We set up a pipeline to pass the data through transformers in order to extract the features and label. We use a StringIndexer to encode a string column to a column of number indices. We use a OneHotEncoder to map a column of number indices to a column of binary vectors, with at most a single one-value. Encoding categorical features allows decision trees to treat categorical features appropriately, improving performance. An example of StringIndexing and OneHotEncoding for carrier is shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/example-stringindexing-1603260208808.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/combination-stringindexer-onehotencoder-1603260221098.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
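&lt;p&gt;Roughly, the indexing and encoding stages for the categorical columns can be written as follows (the column name suffixes are illustrative; this is a sketch rather than the exact notebook code):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.spark.ml.feature.{OneHotEncoder, StringIndexer}

// Index each categorical string column, then one-hot encode the resulting index column.
val categoricalCols = Array(&quot;carrier&quot;, &quot;origin&quot;, &quot;dest&quot;)

val indexers = categoricalCols.map { colName =&gt;
  new StringIndexer().setInputCol(colName).setOutputCol(colName + &quot;Indexed&quot;)
}
val encoders = categoricalCols.map { colName =&gt;
  new OneHotEncoder().setInputCol(colName + &quot;Indexed&quot;).setOutputCol(colName + &quot;Enc&quot;)
}
&lt;/code&gt;&lt;/pre&gt;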
&lt;p&gt;Below a Bucketizer is used to add a label of delayed 0/1. The VectorAssembler combines a given list of columns into a single feature vector column.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/vectorassembler-transformer-1603260233625.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
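&lt;p&gt;A sketch of this step, reusing the &lt;code&gt;delaybucketizer&lt;/code&gt; and the encoded column names from the earlier sketches (the feature column list follows the feature table above and is illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.spark.ml.feature.VectorAssembler

// Combine the encoded categorical columns and the numeric columns into one features vector.
val featureCols = Array(&quot;carrierEnc&quot;, &quot;destEnc&quot;, &quot;originEnc&quot;,
  &quot;dofW&quot;, &quot;crsdephour&quot;, &quot;crselapsedtime&quot;, &quot;crsarrtime&quot;, &quot;crsdeptime&quot;, &quot;dist&quot;)

val assembler = new VectorAssembler()
  .setInputCols(featureCols)
  .setOutputCol(&quot;features&quot;)
&lt;/code&gt;&lt;/pre&gt;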
&lt;p&gt;The result of running these transformers in a pipeline will be to add a label and features column to the dataset as shown below.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/add-label-features-column-1603260245500.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The final element in our pipeline is an estimator (a decision tree classifier), training on the vector of labels and features.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/create-decision-tree-estimator-1603260259278.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Below we chain the indexers and tree in a Pipeline.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/chain-indexers-1603260267909.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/tree-pipeline-1603260278538.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
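&lt;p&gt;Putting the sketches together, the estimator and the pipeline look roughly like this (the stage variables are carried over from the earlier sketches; this is not the exact notebook code):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.spark.ml.{Pipeline, PipelineStage}
import org.apache.spark.ml.classification.DecisionTreeClassifier

// Decision tree estimator trained on the delayed label and the features vector.
val dTree = new DecisionTreeClassifier()
  .setLabelCol(&quot;delayed&quot;)
  .setFeaturesCol(&quot;features&quot;)

// Chain the indexers, encoders, label bucketizer, assembler and tree into one pipeline.
val featureStages: Seq[PipelineStage] = indexers.toSeq ++ encoders.toSeq
val steps = (featureStages ++ Seq(delaybucketizer, assembler, dTree)).toArray
val pipeline = new Pipeline().setStages(steps)
&lt;/code&gt;&lt;/pre&gt;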
&lt;h2&gt;TRAIN THE MODEL&lt;/h2&gt;
&lt;p&gt;We would like to determine which parameter values of the decision tree produce the best model. A common technique for model selection is k-fold cross validation, where the data is randomly split into k partitions. Each partition is used once as the testing data set, while the rest are used for training. Models are then generated using the training sets and evaluated with the testing sets, resulting in k model performance measurements. The model parameters leading to the highest performance metric produce the best model.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/ml-cross-validation-process-1603260289452.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Spark ML supports k-fold cross validation with a transformation/estimation pipeline to try out different combinations of parameters, using a process called grid search, where you set up the parameters to test, and a cross validation evaluator to construct a model selection workflow.&lt;/p&gt;
&lt;p&gt;Below, we use a ParamGridBuilder to construct the parameter grid. We define an Evaluator, which will evaluate the model by comparing the test label column with the test prediction column. We use a CrossValidator for model selection.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/setup-crossvalidator-with-parameters-1603260303905.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The CrossValidator uses the Estimator Pipeline, the Parameter Grid, and the Classification Evaluator to fit the training data set and returns a model.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/use-cross-validator-estimator-1603260318523.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
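&lt;p&gt;A sketch of the model selection setup, continuing with the pipeline from the earlier sketches (the candidate maxDepth values are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
import org.apache.spark.ml.tuning.{CrossValidator, ParamGridBuilder}

// Grid search over the tree depth.
val paramGrid = new ParamGridBuilder()
  .addGrid(dTree.maxDepth, Array(4, 5, 6))
  .build()

// Score each candidate on the held-out fold by area under the ROC curve.
val evaluator = new BinaryClassificationEvaluator()
  .setLabelCol(&quot;delayed&quot;)

// 3-fold cross validation over the pipeline and the parameter grid.
val crossval = new CrossValidator()
  .setEstimator(pipeline)
  .setEvaluator(evaluator)
  .setEstimatorParamMaps(paramGrid)
  .setNumFolds(3)

// Fit on the stratified training data; returns the best model found.
val cvModel = crossval.fit(stratifiedData)
&lt;/code&gt;&lt;/pre&gt;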
&lt;p&gt;The CrossValidator uses the ParamGridBuilder to iterate through the maxDepth parameter of the decision tree and evaluate the models, repeating 3 times per parameter value for reliable results.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/crossvalidator-1603260329067.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Next, we can get the best decision tree model, in order to print out the decision tree and feature importances. (Note that the OneHotEncoders increase the number of features. In order to understand this printout better, I built a tree without the encoders, which gave a slightly lower accuracy.)&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/get-the-best-decision-tree-model-1603260338670.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;We find that the best tree model produced using the cross-validation process is one with a depth of 6. The toDebugString() function provides a print of the tree&apos;s decision nodes and final prediction outcomes at the end leaves. Below is a partial printout of the decision tree:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/treemodel-todebugstring-1603260350331.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
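&lt;p&gt;In the sketches above the tree is the last pipeline stage, so pulling it out of the fitted cross-validator to inspect its rules and feature importances looks roughly like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.spark.ml.PipelineModel
import org.apache.spark.ml.classification.DecisionTreeClassificationModel

// The best model is a fitted PipelineModel; its last stage is the fitted decision tree.
val bestPipeline = cvModel.bestModel.asInstanceOf[PipelineModel]
val treeModel = bestPipeline.stages.last.asInstanceOf[DecisionTreeClassificationModel]

println(&quot;Learned classification tree model:\n&quot; + treeModel.toDebugString)
println(&quot;Feature importances: &quot; + treeModel.featureImportances)
&lt;/code&gt;&lt;/pre&gt;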
&lt;p&gt;The features numbers correspond to the following:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;( 0=carrierIndexed, 1=destIndexed, 2=originIndexed, 3=dofW, 4=crsdephour, 5=crselapsedtime, 6=crsarrtime, 7=crsdeptime, 8=dist)&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Below we can see that the feature importance in order is&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;scheduled departure time  (feature 7)&lt;/li&gt;
&lt;li&gt;destination  (feature 1)&lt;/li&gt;
&lt;li&gt;origin (feature 2)&lt;/li&gt;
&lt;li&gt;scheduled arrival time (feature 6)&lt;/li&gt;
&lt;li&gt;day of the week  (feature 3)&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/featureimportances-1603260369952.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/features-importances-1603260389746.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Decision trees are often used for feature selection because they provide an automated mechanism for determining the most important features (those closest to the tree root).&lt;/p&gt;
&lt;h2&gt;Predictions and Model Evaluation&lt;/h2&gt;
&lt;p&gt;The actual performance of the model can be determined using the test data set that has not been used for any training or cross-validation activities.&lt;/p&gt;
&lt;p&gt;We transform the test DataFrame with the model pipeline, which will transform the features according to the pipeline, estimate, and then return the label predictions in a column of a new DataFrame.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/dataframe-1603260403535.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/get-predictions-from-test-dataset-1603260414077.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The evaluator will provide us with the score of the predictions, measured as the area under the ROC curve. The area measures the ability of the test to correctly classify true positives from false positives. A random predictor would score 0.5. The closer the value is to 1, the better its predictions are. In this case, the evaluation returns a score of 0.81.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/evaluator-predictions-1603260427433.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/evaluate-predictions-accuarcy-1603260438521.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
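&lt;p&gt;A sketch of this evaluation step, assuming the March/April test data is loaded the same way as the training data and reusing the evaluator from the cross-validation sketch (the path is illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// Load the held-out March/April data.
val testData = spark.read.schema(schema).json(&quot;/path/to/flights_mar_apr.json&quot;).as[Flight]

// Pass the test data through the fitted pipeline, then score the predictions.
val predictions = cvModel.transform(testData)
val areaUnderROC = evaluator.evaluate(predictions)
&lt;/code&gt;&lt;/pre&gt;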
&lt;p&gt;Below, we calculate some more metrics. The number of false/true positive and negative predictions is also useful:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;True positives are how often the model correctly predicted delayed flights.&lt;/li&gt;
&lt;li&gt;False positives are how often the model incorrectly predicted delayed flights.&lt;/li&gt;
&lt;li&gt;True negatives indicate how often the model correctly predicted not delayed flights.&lt;/li&gt;
&lt;li&gt;False negatives indicate how often the model incorrectly predicted not delayed flights.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/calculate-some-more-metrics-1603260455772.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/calculate-some-more-metrics-2-1603260464963.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;SAVE THE MODEL&lt;/h2&gt;
&lt;p&gt;We can now save our fitted Pipeline for later use with streaming events. This saves both the feature extraction stage and the decision tree model chosen by model tuning.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/save-model-to-file-system-1603260479132.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
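&lt;p&gt;Roughly, saving the tuned model looks like this (the directory path is illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// Persist the fitted cross-validated pipeline (feature stages plus the tuned tree) for later use.
cvModel.write.overwrite().save(&quot;/user/mapr/flightmodel&quot;)
&lt;/code&gt;&lt;/pre&gt;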
&lt;p&gt;The result of saving the pipeline model is a JSON file for metadata and Parquet files for model data. We can reload the model with the load command; the original and reloaded models are the same:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;val sameCVModel = CrossValidatorModel.load(&quot;../cfModel&quot;)&lt;/code&gt;&lt;/p&gt;
&lt;h2&gt;Code&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;You can download the code and data to run these examples from here: &lt;a href=&quot;https://github.com/caroljmcdonald/spark-ml-flightdelay&quot;&gt;https://github.com/caroljmcdonald/spark-ml-flightdelay&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/caroljmcdonald/spark-ml-flightdelay/blob/master/notebooks/sparkmlpipelineflightdelays.json&quot;&gt;Zeppelin Notebook for the code&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Running the Code&lt;/h2&gt;
&lt;p&gt;All of the components of the use case architecture we just discussed can run on the same cluster with the MapR Data Platform.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/mapr-cdp-1603260494864.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE firmware updates: Part 3 - The Redfish update service]]></title><description><![CDATA[This blog has been moved to the Server Management Portal]]></description><link>https://developer.hpe.com/hpe-firmware-updates-part-3-the-redfish-update-service/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-firmware-updates-part-3-the-redfish-update-service/</guid><pubDate>Mon, 19 Oct 2020 13:25:05 GMT</pubDate><content:encoded>&lt;br&gt;
&lt;p&gt;&lt;big&gt;This blog has been moved to the &lt;a href=&quot;https://servermanagementportal.ext.hpe.com/docs/references_and_material/blogposts/firmware_updates/part3/firmware_update_part3&quot;&gt;Server Management Portal&lt;/a&gt;&lt;/big&gt;&lt;/p&gt;
&lt;br&gt;</content:encoded></item><item><title><![CDATA[How to Use a Table Load Tool to Batch Puts into HBase/MapR Database]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may…]]></description><link>https://developer.hpe.com/how-to-use-a-table-load-tool-to-batch-puts-into-hbasemapr-database/</link><guid isPermaLink="false">https://developer.hpe.com/how-to-use-a-table-load-tool-to-batch-puts-into-hbasemapr-database/</guid><pubDate>Thu, 15 Oct 2020 06:28:44 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Terry He&quot;,
&quot;publish&quot;: &quot;2015-04-22T07:00:00.000Z&quot;,
&quot;tags&quot;: &quot;nosql&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;Apache HBase is an in-Hadoop database that delivers wide-column schema flexibility with strongly consistent reads and writes. Clients can access HBase data through either a native Java API, a Thrift or REST gateway, or now through a C API, making it very easy to access data. MapR Database, yet another in-Hadoop database, has the same HBase APIs, but provides enterprise-grade features for production deployments of HBase applications.&lt;/p&gt;
&lt;p&gt;Put, Get and Scan are some of the prominent programming APIs that get used in the context of HBase applications. For certain write-heavy workloads, Put operations can get slow, so batching these Put operations is a commonly used technique to increase the overall throughput of the system. The following program illustrates a table load tool, which is a great utility program that can be used for batching Puts into an HBase/MapR Database table. The program creates a simple HBase table with a single column within a column family and inserts 100000 rows with 100 bytes of data. The batch size for the Puts is set to 500 in this example.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Random;
import java.util.zip.CRC32;

import org.apache.hadoop.hbase.util.Pair;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableExistsException;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class LoadTableMTBatch {

	static long uniqueSeed = System.currentTimeMillis();
	static long[] count;
	static long[] latency;
	static int[] keySizes;
	public static long printPerNum = 10000;
	public static boolean noCRC = false;
	public static long keySize = 8;
	public static long startRow = 0;
	public static int batchSize = 500;
	public static int preSplit = 1; // Used as a demo - not an accurate key distribution
	public static boolean flush = false;
	public static boolean autoFlush = false;
        public static final String KEY_PREFIX=&quot;user&quot;;
	public static final long startKey = 0L;
	public static final long endKey = 999999999999999L;
	public static final String HBASE_RESOURCE_NAME = &quot;/opt/mapr/hbase/hbase-0.98.9/conf/hbase-site.xml&quot;;
	public static String ZOOKEEPER_NODES = &quot;localhost&quot;; //Default to localhost, only needed for accessing HBase
	public static final Pair ZOOKEEPER_SETTINGS = new Pair(
			&quot;hbase.zookeeper.quorum&quot;, ZOOKEEPER_NODES);

	public static void usage(String arg) {
		System.err.println(&quot;bad token: &quot; + arg);
		System.err
				.println(&quot;loadMT -rows &amp;#x3C;100000&gt; -valuesize &amp;#x3C;100 bytes&gt; -debug -path -threads &amp;#x3C;10&gt; -batchSize &amp;#x3C;500&gt; -numCF &amp;#x3C;1&gt; -numC &amp;#x3C;1&gt; -preSplit &amp;#x3C;1&gt; -zookeeperNodes -AutoFlush -flush&quot;);
		System.exit(1);
	}

	public static void main(String[] args) throws java.io.IOException {
		Configuration conf = HBaseConfiguration.create();
		String tableName = null;
		long numRows = 100000;
		long numCF = 1;
		long numC = 1;
		long valueSize = 100;
		int numThreads = 10;
		boolean augment = false;

		for (int i = 0; i &amp;#x3C; args.length; ++i) {
			if (args[i].equals(&quot;-rows&quot;)) {
				i++;
				if (i &gt;= args.length)
					usage(args[i]);
				numRows = Long.parseLong(args[i]);
			} else if (args[i].equals(&quot;-path&quot;)) {
				i++;
				if (i &gt;= args.length)
					usage(args[i]);
				tableName = args[i];
			} else if (args[i].equals(&quot;-debug&quot;)) {
				conf.set(&quot;fs.mapr.trace&quot;, &quot;debug&quot;);
			} else if (args[i].equals(&quot;-valuesize&quot;)) {
				i++;
				if (i &gt;= args.length)
					usage(args[i]);
				valueSize = Long.parseLong(args[i]);
			} else if (args[i].equals(&quot;-threads&quot;)) {
				i++;
				if (i &gt;= args.length)
					usage(args[i]);
				numThreads = Integer.parseInt(args[i]);
			} else if (args[i].equals(&quot;-p&quot;)) {
				i++;
				if (i &gt;= args.length)
					usage(args[i]);
				printPerNum = Long.parseLong(args[i]);
			} else if (args[i].equals(&quot;-hbase&quot;)) {
				i++;
				if (i &gt;= args.length)
					usage(args[i]);
				conf.addResource(new Path(args[i]));
			} else if (args[i].equals(&quot;-numCF&quot;)) {
				i++;
				if (i &gt;= args.length)
					usage(args[i]);
				numCF = Integer.parseInt(args[i]);
			} else if (args[i].equals(&quot;-numC&quot;)) {
				i++;
				if (i &gt;= args.length)
					usage(args[i]);
				numC = Integer.parseInt(args[i]);
			} else if (args[i].equals(&quot;-batchSize&quot;)) {
				i++;
				if (i &gt;= args.length)
					usage(args[i]);
				batchSize = Integer.parseInt(args[i]);
			} else if (args[i].equals(&quot;-preSplit&quot;)) {
				i++;
				if (i &gt;= args.length)
					usage(args[i]);
				preSplit = Integer.parseInt(args[i]);
			} else if (args[i].equals(&quot;-zookeeperNodes&quot;)) {
				i++;
				if (i &gt;= args.length)
					usage(args[i]);
				ZOOKEEPER_NODES = args[i];
			} else if (args[i].equals(&quot;-AutoFlush&quot;)) {
				autoFlush = true;
			} else if (args[i].equals(&quot;-flush&quot;)) {
				flush = true;
			} else {
				usage(args[i]);
			}
		}
		if (tableName == null) {
			System.out.println(&quot;Must specify path&quot;);
			usage(&quot;path&quot;);
		}
		LoadTableMTBatch lt = new LoadTableMTBatch();
		try {
			LoadTableMTBatch.init(conf, tableName, numRows, numCF, numC,
					valueSize, augment);
			lt.loadTable(conf, tableName, numRows, numCF, numC, valueSize,
					numThreads);
		} catch (Exception e) {
			e.printStackTrace();
			System.exit(-1);
		}
	}

	public void generateKeySizes() {
		Random rand = new Random(uniqueSeed);
		keySizes = new int[10];
		keySizes[0] = rand.nextInt(5) + 5;
		keySizes[1] = rand.nextInt(40) + 10;
		keySizes[2] = rand.nextInt(50) + 50;
		keySizes[3] = rand.nextInt(400) + 100;
		keySizes[4] = rand.nextInt(500) + 500;
		keySizes[5] = rand.nextInt(4000) + 1000;
		keySizes[6] = rand.nextInt(5000) + 5000;
		keySizes[7] = rand.nextInt(10000) + 10000;
		keySizes[8] = rand.nextInt(12000) + 20000;
		keySizes[9] = rand.nextInt(32 * 1024 - 1) + 1;
	}

	public void loadTable(Configuration conf, String tableName, long numRows,
			long numCF, long numC, long valueSize, int numThreads)
			throws Exception {
		Thread[] loadThreads = new Thread[numThreads];
		count = new long[numThreads];
		latency = new long[numThreads];

		if (keySize &amp;#x3C; 1) {
			generateKeySizes();
		}

		long offset = (endKey - startKey) / numThreads;
		for (int i = 0; i &amp;#x3C; loadThreads.length; i++) {
			latency[i] = 0;
			// [Remainder of the listing omitted: it creates a LoadTableRunnable worker
			// thread for each slot (region-aware when the table is pre-split), starts
			// the threads, and reports insert counts and latencies while they run.
			// The class also defines the createTable(), init() and load() methods,
			// key/value generation helpers, the LoadTableRunnable inner class, and a
			// LongRandom helper for picking keys in a long range; see the original
			// post for the complete code.]
		}
	}
}
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;a id=&quot;product-names&quot; /&gt;
## MapR to HPE Product Naming
| Historical MapR Product Name | New Product Name |
| --- | --- |
| MapR (Document) Database Enterprise Premier | HPE Ezmeral Data Fabric Document Database |</content:encoded></item><item><title><![CDATA[HPE OneView 5.4 Ecosystem SDKs introduce new methods for ILO configuration and default API versioning]]></title><description><![CDATA[HPE OneView Ecosystem SDKs (Ansible, Python, Golang, Terraform, Chef, Puppet, PowerShell and Ruby) now support  HPE OneView 5.4 (REST API…]]></description><link>https://developer.hpe.com/hpe-oneview-54-ecosystem-sdks-introduce-new-methods-for-ilo-configuratio/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-oneview-54-ecosystem-sdks-introduce-new-methods-for-ilo-configuratio/</guid><pubDate>Tue, 13 Oct 2020 18:55:34 GMT</pubDate><content:encoded>&lt;p&gt;HPE OneView Ecosystem SDKs (&lt;a href=&quot;https://github.com/HewlettPackard/oneview-ansible&quot;&gt;Ansible&lt;/a&gt;, &lt;a href=&quot;https://github.com/HewlettPackard/oneview-python&quot;&gt;Python&lt;/a&gt;, &lt;a href=&quot;https://github.com/HewlettPackard/oneview-golang&quot;&gt;Golang&lt;/a&gt;, &lt;a href=&quot;https://github.com/HewlettPackard/terraform-provider-oneview/releases/tag/v1.3.0&quot;&gt;Terraform&lt;/a&gt;, &lt;a href=&quot;https://github.com/HewlettPackard/oneview-chef&quot;&gt;Chef&lt;/a&gt;, &lt;a href=&quot;https://github.com/HewlettPackard/oneview-puppet&quot;&gt;Puppet&lt;/a&gt;, &lt;a href=&quot;https://github.com/HewlettPackard/POSH-HPOneView&quot;&gt;PowerShell&lt;/a&gt; and &lt;a href=&quot;https://github.com/HewlettPackard/oneview-sdk-ruby&quot;&gt;Ruby&lt;/a&gt;) now support  HPE OneView 5.4 (REST API version 2000). Each release introduces time savings and error reduction enhancements.&lt;/p&gt;
&lt;p&gt;With the latest release of the HPE OneView PowerShell library, you can now configure iLO settings through HPE OneView server profiles and server profile templates. This release adds the iLO helper cmdlets New-OVServerProfileIloPolicy, New-OVIloLocalUserAccount and New-OVIloDirectoryGroup for server profiles and server profile templates, which enable the configuration of iLO settings and eliminate the need for a separate login to the iLO to apply the necessary settings.&lt;/p&gt;
&lt;p&gt;The newest releases of Ansible, Python, Golang, Terraform, Chef, Puppet and Ruby SDKs introduce an enhanced method to set the default API version to the appliance’s max API version. This ensures that valid API settings are used as default settings, eliminating a potential source of error.&lt;/p&gt;
&lt;p&gt;HPE OneView Python v5.4.0 and HPE OneView Ansible Module v5.8.0 introduce a breaking change from the previous SDK version. From this version onwards, the previous hpOneView module name has been renamed to hpeOneView. All HPE OneView libraries and examples will import the hpeOneView module as a parent for both SDKs.&lt;/p&gt;
&lt;p&gt;HPE OneView SDKs enable the automation of provisioning of physical infrastructure on-demand using software-defined templates from HPE OneView. This enables active and reactive monitoring and automated deployments, in addition to the provisioning of networks, storage and server infrastructure. You can also use these SDKs to develop a resource topology similar to that of a public cloud on physical infrastructure. This provides public cloud-like “node management”, with the extra flexibility to directly configure underlying infrastructure when needed.&lt;/p&gt;
&lt;p&gt;To simplify installation and maintenance for container environments, HPE OneView 5.4 SDKs are available as Docker images.  &lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-ansible&quot;&gt;Ansible&lt;/a&gt; , &lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-terraform&quot;&gt;Terraform&lt;/a&gt; , &lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-chef&quot;&gt;Chef&lt;/a&gt; , &lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-puppet&quot;&gt;Puppet&lt;/a&gt; , &lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-python&quot;&gt;Python&lt;/a&gt; , &lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-golang&quot;&gt;Golang&lt;/a&gt; and &lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-ruby&quot;&gt;Ruby&lt;/a&gt; SDKs are now all available on Docker Hub. All prerequisite materials are incorporated into the container images to enable streamlined deployment to simplify maintenance, improve infrastructure agility, and reduce costs.&lt;/p&gt;
&lt;h2&gt;For more information:&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.hpe.com/platform/hpe-oneview/home&quot;&gt;HPE OneView on the HPE DEV platform site&lt;/a&gt;&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;SDK&lt;/th&gt;
&lt;th&gt;GitHub&lt;/th&gt;
&lt;th&gt;Docker Hub&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Ansible&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/HewlettPackard/oneview-ansible/releases/tag/v5.8.0&quot;&gt;HPE OneView Ansible Module v5.8.0&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-ansible&quot;&gt;HPE OneView SDK Docker Image for Ansible&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Terraform&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/HewlettPackard/terraform-provider-oneview/releases/tag/v1.5.0&quot;&gt;HPE OneView Terraform Provider v1.5.0&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-terraform&quot;&gt;HPE OneView SDK Docker Image for Terraform&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Chef&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/HewlettPackard/oneview-chef/releases/tag/v3.6.0&quot;&gt;HPE OneView Chef Cookbook v3.6.0&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-chef&quot;&gt;HPE OneView SDK Docker Image for Chef&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Puppet&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/HewlettPackard/oneview-puppet/releases/tag/v2.8.0&quot;&gt;HPE OneView Puppet Module v2.8.0&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-puppet&quot;&gt;HPE OneView SDK Docker Image for Puppet&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Python&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/HewlettPackard/oneview-python/releases/tag/v5.4.0&quot;&gt;HPE OneView Python SDK v5.4.0&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-python&quot;&gt;HPE OneView SDK Docker Image for Python&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Golang&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/HewlettPackard/oneview-golang/releases/tag/v1.6.0&quot;&gt;HPE OneView Golang SDK v1.6.0&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-golang&quot;&gt;HPE OneView SDK Docker Image for Golang&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ruby&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/HewlettPackard/oneview-sdk-ruby/releases/tag/v5.16.0&quot;&gt;HPE OneView Ruby SDK v5.16.0&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-ruby&quot;&gt;HPE OneView SDK Docker Image for Ruby&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PowerShell&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://github.com/HewlettPackard/POSH-HPEOneView/releases/tag/v5.40.2551.2353&quot;&gt;HPE OneView PowerShell module v5.40.2551.2353&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;</content:encoded></item><item><title><![CDATA[New Grommet release offers new components and improves accessibility]]></title><description><![CDATA[With the help of our rock star community, the Grommet team has been able to continuously introduce new components and enhancements. As a…]]></description><link>https://developer.hpe.com/new-grommet-release-offers-new-components-and-improves-accessibility/</link><guid isPermaLink="false">https://developer.hpe.com/new-grommet-release-offers-new-components-and-improves-accessibility/</guid><pubDate>Thu, 08 Oct 2020 16:15:08 GMT</pubDate><content:encoded>&lt;p&gt;With the help of our rock star community, the Grommet team has been able to continuously introduce new components and enhancements. As a matter of fact, at least 10 new components have been added over the last several months. The most recent release, Grommet 2.15.0, features two new components – Card and DateInput – along with enhancements to others, like DataChart and SkipLinks.&lt;/p&gt;
&lt;p&gt;The new &lt;a href=&quot;https://v2.grommet.io/card&quot;&gt;Card&lt;/a&gt; component works as a container of information that delivers a wide variety of content, contextual background colors, and powerful display options. The new Grommet Card component provides options to customize a header, body, and footer using CardHeader, CardBody, and CardFooter components. The new &lt;a href=&quot;https://v2.grommet.io/dateinput&quot;&gt;DateInput&lt;/a&gt; component offers developers a high level of control for date range input, from a single date to a range of dates, through numerous specified properties.&lt;/p&gt;
&lt;p&gt;One of the more interesting enhancements came from work we’ve been doing with the &lt;a href=&quot;https://www.hpe.com/us/en/cloud-services.html&quot;&gt;HPE GreenLake Cloud Services&lt;/a&gt; organization to help visualize AI data charting through the &lt;a href=&quot;https://v2.grommet.io/datachart&quot;&gt;DataChart&lt;/a&gt; component. By extending the DataChart API, we were able to better visually represent 4-dimensional data and complex chart combinations.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/4-dimensions-1602173691496.jpg&quot; alt=&quot;4 dimensions&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/complex-charts-1602173698168.png&quot; alt=&quot;complex charts&quot;&gt;&lt;/p&gt;
&lt;p&gt;Accessibility received a lot of attention in this release. &lt;a href=&quot;https://v2.grommet.io/skiplinks&quot;&gt;SkipLinks&lt;/a&gt;, in particular, has been reworked significantly to align with Web Content Accessibility Guidelines (WCAG). The behavior of the SkipLinks component has been improved to allow the layer to open with a keyboard tab, edit SkipLinks messages, and enable access to interactive elements when the SkipLinks layer is shown. Other accessibility enhancements include the addition of &lt;strong&gt;messages&lt;/strong&gt; prop to Video and &lt;strong&gt;a11yTitle&lt;/strong&gt; support to TextArea.&lt;/p&gt;
&lt;p&gt;We had an impressive list of contributors to this release. We’re so grateful for your collaboration. We’d like to also extend our thanks to our &lt;a href=&quot;https://github.com/grommet/grommet/wiki/What-is-grommet-stable-and-how-to-use-it%3F&quot;&gt;stable branch&lt;/a&gt; (beta test) users. Without your help in testing all the different elements before they were fully baked, we could never have gotten so far so fast. Thank you for all the feedback you gave us to make Grommet 2.15.0 such an exciting release. The information you provided on the new DateInput component was particularly valuable. Your input will help us ensure that we meet the needs of real world use cases such as yours, while offering flexibility for future implementations.&lt;/p&gt;
&lt;p&gt;You can find details of all the enhancements in this newest version in the &lt;a href=&quot;https://github.com/grommet/grommet/releases/tag/v2.15.0&quot;&gt;release notes&lt;/a&gt;.
For more examples on how to use these components, check out &lt;a href=&quot;https://storybook.grommet.io/&quot;&gt;Storybook here&lt;/a&gt; or go directly to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://storybook.grommet.io/?path=/story/card--clickable&quot;&gt;Card&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://storybook.grommet.io/?path=/story/dateinput--form&quot;&gt;DateInput&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://storybook.grommet.io/?path=/story/datachart--everything&quot;&gt;DataChart&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://storybook.grommet.io/?path=/story/skiplinks--simple&quot;&gt;SkipLinks&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Expand your skills easily - Newsletter]]></title><link>https://developer.hpe.com/2020-October-01/</link><guid isPermaLink="false">https://developer.hpe.com/2020-October-01/</guid><pubDate>Thu, 01 Oct 2020 05:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[ Get started with the HPE Nimble Storage Content Collection for Ansible ]]></title><description><![CDATA[With the initial release of the HPE Nimble Storage Content Collection for Ansible, it’s now possible to manage many aspects of a Nimble…]]></description><link>https://developer.hpe.com/get-started-with-the-hpe-nimble-storage-content-collection-for-ansible/</link><guid isPermaLink="false">https://developer.hpe.com/get-started-with-the-hpe-nimble-storage-content-collection-for-ansible/</guid><pubDate>Tue, 29 Sep 2020 18:33:30 GMT</pubDate><content:encoded>&lt;p&gt;With the initial release of the &lt;a href=&quot;https://community.hpe.com/t5/around-the-storage-block/introducing-hpe-nimble-storage-content-collection-for-ansible/ba-p/7103452&quot;&gt;HPE Nimble Storage Content Collection for Ansible&lt;/a&gt;, it’s now possible to manage many aspects of a Nimble array using either Red Hat Ansible Tower or open source Ansible. Ansible is an IT automation platform that embraces an idempotent resource model, which is essential in declarative infrastructure management paradigms. In this blog post, we’ll go through a few examples on how to kickstart your Ansible automation projects with the newly released HPE Nimble Storage modules.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/screen-shot-2020-10-01-at-15454-pm-1601594378015.png&quot; alt=&quot;picture1&quot;&gt;&lt;/p&gt;
&lt;p&gt;All the functionality is embedded in the content collection as modules. For the initial release, these are the available modules.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Module&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;hpe_nimble_access_control_record&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Manage access control records&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;hpe_nimble_array&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Manage array&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;hpe_nimble_chap_user&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Manage iSCSI CHAP users&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;hpe_nimble_disk&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Manage disks and media&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;hpe_nimble_encryption&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Manage encryption settings&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;hpe_nimble_fc&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Manage fibre channel configuration&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;hpe_nimble_group&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Manage the array group&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;hpe_nimble_info&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Query and collect information of any resource from an array&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;hpe_nimble_initiator_group&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Manage initiator groups&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;hpe_nimble_network&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Manage network configuration&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;hpe_nimble_partner&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Manage replication partners&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;hpe_nimble_performance_policy&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Manage performance policies&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;hpe_nimble_pool&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Manage storage pools&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;hpe_nimble_protection_schedule&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Manage protection schedules&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;hpe_nimble_protection_template&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Manage protection templates&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;hpe_nimble_shelf&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Manage shelves&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;hpe_nimble_snapshot_collection&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Manage snapshot collections&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;hpe_nimble_snapshot&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Manage volume snapshots&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;hpe_nimble_user&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Manage array users&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;hpe_nimble_user_policy&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Manage array user policies&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;hpe_nimble_volume&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Manage volumes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;hpe_nimble_volume_collection&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Manage volume collections&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;All modules are documented per Ansible community standards. The modules have not yet been merged into the official Collection Index and documentation is provided from &lt;a href=&quot;https://hpe-storage.github.io/nimble-ansible-modules&quot;&gt;the GitHub repo&lt;/a&gt; as a temporary solution.&lt;/p&gt;
&lt;h1&gt;Preface&lt;/h1&gt;
&lt;p&gt;In the following examples, one node acts as the Ansible management host (node21) and two nodes (node22 and node23) act as iSCSI SAN hosts. There’s one HPE Nimble Storage array (nva) in the environment. Since collections are a fairly new construct in the Ansible universe, Ansible 2.9 or newer is required, along with Python 3.6 or newer for the HPE Nimble Storage SDK for Python (which the Ansible modules rely on). We’ll also assume that iSCSI, multipathing and SAN connectivity are established between the SAN hosts and the HPE Nimble Storage array. NimbleOS 5.0 or newer is required on the array. These requirements are also listed in &lt;a href=&quot;https://hpe-storage.github.io/nimble-ansible-modules/&quot;&gt;the modules&apos; documentation&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/hpedev-alster-for-blog-reva-1601597390641.png&quot; alt=&quot;picture2&quot;&gt;&lt;/p&gt;
&lt;p&gt;Since the collection contains a whopping twenty-two modules, one blog post can’t cover the entire suite. Expect a series of blog posts over the coming months to cover more use cases. In this first installment, we’ll cover basic volume provisioning, snapshotting, cloning, inspecting, mutating and ultimately decommissioning volumes. Host attachment included!&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: All examples listed below are available in &lt;a href=&quot;https://github.com/NimbleStorage/automation-examples&quot;&gt;this GitHub repo&lt;/a&gt; (change directory to &lt;code&gt;ansible/introduction&lt;/code&gt;).&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h1&gt;Installation&lt;/h1&gt;
&lt;p&gt;No special privileges are needed on the Ansible host. Privileges are needed on managed hosts where we want to attach storage. Let’s begin with installing Ansible and the required HPE Nimble Storage SDK for Python using &lt;code&gt;pip&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ pip3 install ansible nimble-sdk --user
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, use &lt;code&gt;ansible-galaxy&lt;/code&gt; to install the HPE Nimble Storage Content Collection for Ansible.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ ansible-galaxy collection install hpe.nimble
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We’re now ready to create an inventory and start writing playbooks to manage storage resources.&lt;/p&gt;
&lt;h1&gt;Inventory&lt;/h1&gt;
&lt;p&gt;There are many different ways to write inventory files (classic .ini or YAML) and store variables. In this series of playbooks the variables will be stored with the nodes in the inventory groups. Playbooks will use the &lt;code&gt;-e&lt;/code&gt; flag to provide &quot;extra&quot; variables to make the playbooks reusable for different nodes and storage resources. In a scenario where the infrastructure is managed via source code management (SCM), resources and nodes would be stored in separate variable files.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-ini&quot;&gt;[nodes]
node22 ansible_host=192.168.159.22
node23 ansible_host=192.168.159.23

[nodes:vars]
ansible_user=vagrant
ansible_password=vagrant

[arrays]
nva nimble_host=192.168.59.130 nimble_discovery_ip=192.168.59.130

[arrays:vars]
ansible_connection=local
nimble_user=admin
nimble_pass=admin

[all:vars]
ansible_python_interpreter=/usr/bin/python3
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Let’s break these sections down:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;[nodes]&lt;/strong&gt; A group of my SAN-connected nodes; name resolution is not working in my sandbox, so I need to address nodes by IP address&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;[nodes:vars]&lt;/strong&gt; All nodes will share these variables&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;[arrays]&lt;/strong&gt; A group of arrays I have available in my environment, each with my custom variables to access the REST API and iSCSI discovery IP address&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;[arrays:vars]&lt;/strong&gt; All arrays will share these variables, &lt;strong&gt;ansible_connection&lt;/strong&gt; is set to local as Ansible is not logging in to the array itself, only indirect via the REST API&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;[all:vars]&lt;/strong&gt; Ansible will run on nearly any available Python version, but the Nimble SDK requires Python 3.6 or newer, so the interpreter is pointed at Python 3&lt;/li&gt;
&lt;/ul&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: Don’t put passwords in your inventory file. Use encrypted variable files with &lt;code&gt;ansible-vault&lt;/code&gt; for that purpose.&lt;/p&gt;
&lt;/blockquote&gt;
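&lt;p&gt;As a minimal sketch of that approach (the file and directory names here are illustrative), the array credentials could live in an encrypted group variables file instead of under &lt;code&gt;[arrays:vars]&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ mkdir -p group_vars/arrays
$ ansible-vault create group_vars/arrays/vault.yml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The vaulted file would then hold &lt;code&gt;nimble_user&lt;/code&gt; and &lt;code&gt;nimble_pass&lt;/code&gt;, and playbooks would be run with &lt;code&gt;--ask-vault-pass&lt;/code&gt; (or &lt;code&gt;--vault-password-file&lt;/code&gt;) so Ansible can decrypt the credentials at runtime.&lt;/p&gt;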
&lt;p&gt;Also, an &lt;code&gt;ansible.cfg&lt;/code&gt; is put in place to pick up the &lt;code&gt;inventory&lt;/code&gt; file in the current directory.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-ini&quot;&gt;[defaults]
inventory = inventory
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;Hello world&lt;/h1&gt;
&lt;p&gt;A pattern I usually employ before doing anything is to check if my inventory is reachable.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ ansible -m ping all
nva | SUCCESS =&gt; {
    &quot;changed&quot;: false,
    &quot;ping&quot;: &quot;pong&quot;
}
node22 | SUCCESS =&gt; {
    &quot;changed&quot;: false,
    &quot;ping&quot;: &quot;pong&quot;
}
node23 | SUCCESS =&gt; {
    &quot;changed&quot;: false,
    &quot;ping&quot;: &quot;pong&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I created a playbook that simplifies future playbooks by creating the initiator groups for all nodes on all arrays. Initiator groups are required for assigning access control records (ACRs) to volumes, so this step could also be incorporated as a task just before assigning the ACR.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---

- name: Retrieve node facts
  hosts: nodes

- name: Create iSCSI initiators for hosts in nodes group
  gather_facts: false
  hosts: arrays
  collections:
    - hpe.nimble
  tasks:
    - name: Add host
      hpe_nimble_initiator_group:
        host: &quot;{{ nimble_host }}&quot;
        username: &quot;{{ nimble_user }}&quot;
        password: &quot;{{ nimble_pass }}&quot;
        access_protocol: iscsi
        name: &quot;{{ item }}&quot;
        iscsi_initiators: [
          {
            iqn: &quot;{{ hostvars[item][&apos;ansible_iscsi_iqn&apos;] }}&quot;,
            label: &quot;{{ item }}&quot;
          }
        ]
        description: &quot;Initiator Group for {{ item }}&quot;
        state: present
      with_items: &quot;{{ groups.nodes }}&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Pay attention to the &lt;code&gt;collections&lt;/code&gt; stanza. That allows the modules to be discovered properly. It’s also possible to omit the collections stanza but then modules need to be called out with their fully qualified names, &lt;code&gt;hpe.nimble.hpe_nimble_initiator_group&lt;/code&gt;, which isn’t very pretty to look at.&lt;/p&gt;
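&lt;p&gt;For illustration, here is the same task written with the fully qualified name and without the &lt;code&gt;collections&lt;/code&gt; stanza (abridged to the fields that matter for the comparison):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;  tasks:
    - name: Add host
      hpe.nimble.hpe_nimble_initiator_group:
        host: &quot;{{ nimble_host }}&quot;
        username: &quot;{{ nimble_user }}&quot;
        password: &quot;{{ nimble_pass }}&quot;
        access_protocol: iscsi
        name: &quot;{{ item }}&quot;
        state: present
      with_items: &quot;{{ groups.nodes }}&quot;
&lt;/code&gt;&lt;/pre&gt;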
&lt;p&gt;That said, the fully qualified module name is good to know about. If you need to look up any of the module options or examples, it’s available right at your fingertips. For example, to look up the documentation for the &lt;code&gt;hpe_nimble_initiator_group&lt;/code&gt; module, simply use the &lt;code&gt;ansible-doc&lt;/code&gt; command.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ ansible-doc hpe.nimble.hpe_nimble_initiator_group
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, run the playbook.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ ansible-playbook connect.yaml

PLAY [Retrieve node facts] *****************************************************

TASK [Gathering Facts] *********************************************************
ok: [node23]
ok: [node22]

PLAY [Create iSCSI initiators for hosts in nodes group] ************************

TASK [Add host] ****************************************************************
ok: [nva] =&gt; (item=node22)
ok: [nva] =&gt; (item=node23)

PLAY RECAP *********************************************************************
node22                     : ok=1    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
node23                     : ok=1    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
nva                        : ok=1    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The output will vary depending on whether the initiator groups already exist. This playbook can be re-run after each inventory update to ensure all initiator groups exist as expected.&lt;/p&gt;
&lt;h1&gt;Volume provisioning&lt;/h1&gt;
&lt;p&gt;The next example provides a whole lot more content. The purpose of &lt;code&gt;provision.yaml&lt;/code&gt; is to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Create a new Nimble volume, &lt;strong&gt;volume_name&lt;/strong&gt;, on &lt;strong&gt;nimble_array&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Create an ACR for the &lt;strong&gt;volume_igroup&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Discover and attach the volume from the host&lt;/li&gt;
&lt;li&gt;Format the volume and mount it on the host&lt;/li&gt;
&lt;li&gt;Make the filesystem writeable by an application&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;There’s also an optional parameter, &lt;code&gt;volume_size&lt;/code&gt;. It falls back to &lt;code&gt;1000&lt;/code&gt; (MiB) if not specified.&lt;/p&gt;
&lt;p&gt;By parameterizing the playbook, it becomes very flexible to reuse. It could also quite easily be turned into an Ansible role to make it even more reusable between projects.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---

- name: Provision a Nimble Volume to a SAN host 
  gather_facts: false
  connection: local
  hosts: &quot;{{ nimble_array }}&quot;
  collections:
    - hpe.nimble
  tasks:
    - block:
        - name: Create Volume
          hpe_nimble_volume:
            host: &quot;{{ nimble_host }}&quot;
            username: &quot;{{ nimble_user }}&quot;
            password: &quot;{{ nimble_pass }}&quot;
            state: present
            name: &quot;{{ volume_name }}&quot;
            size: &quot;{{ volume_size | default(&apos;1000&apos;) }}&quot;
            description: &quot;Volume for {{ volume_igroup }}&quot;
          register: volume
        
        - name: Set facts to pass on to node play
          set_fact:
            volume_target_name: &quot;{{ volume.attrs.target_name }}&quot;
            volume_serial_number: &quot;{{ volume.attrs.serial_number }}&quot;

      when: volume_clone is not defined

    - block: 
        - name: Create Volume from Snapshot
          hpe_nimble_volume:
            host: &quot;{{ nimble_host }}&quot;
            username: &quot;{{ nimble_user }}&quot;
            password: &quot;{{ nimble_pass }}&quot;
            state: present
            name: &quot;{{ volume_name }}&quot;
            size: &quot;{{ volume_size | default(&apos;1000&apos;) }}&quot;
            description: &quot;Volume for {{ volume_igroup }}&quot;
            snapshot: &quot;{{ snapshot_name | default(False) }}&quot;
            clone: &quot;{{ volume_clone | default(False) }}&quot;
            parent: &quot;{{ clone_from | default(False) }}&quot;
          register: volume
 
        - name: Set facts to pass on to node play
          set_fact:
            volume_target_name: &quot;{{ volume.attrs.target_name }}&quot;
            volume_serial_number: &quot;{{ volume.attrs.serial_number }}&quot;

      when: volume_clone is defined

    - name: Create ACR
      hpe_nimble_access_control_record:
        host: &quot;{{ nimble_host }}&quot;
        username: &quot;{{ nimble_user }}&quot;
        password: &quot;{{ nimble_pass }}&quot;
        state: present
        initiator_group: &quot;{{ volume_igroup }}&quot;
        volume: &quot;{{ volume_name }}&quot;

- name: Attach a volume, format and mount it on a host
  hosts: &quot;{{ volume_igroup }}&quot;
  tasks:
    - name: Discover Target
      become: yes
      open_iscsi:
        portal: &quot;{{ hostvars[nimble_array][&apos;nimble_discovery_ip&apos;] }}&quot;
        discover: yes
        show_nodes: yes
    
    - name: Attach Target
      become: yes
      open_iscsi:
        target: &quot;{{ hostvars[nimble_array][&apos;volume_target_name&apos;] }}&quot;
        login: yes
        automatic: yes
        show_nodes: yes

    - name: Set volume device fact
      set_fact:
        volume_device_path: /dev/disk/by-id/dm-uuid-mpath-2{{ hostvars[nimble_array][&apos;volume_serial_number&apos;] }}

    - name: Create Filesystem
      become: yes
      filesystem:
        fstype: xfs
        dev: &quot;{{ volume_device_path }}&quot;
    
    - name: Mount the Volume
      become: yes
      mount:
        fstype: xfs
        state: mounted
        path: /mnt/{{ volume_name }}
        src: &quot;{{ volume_device_path }}&quot;

    - name: Set Permissions
      become: yes
      file:
        owner: &quot;{{ ansible_env.LOGNAME }}&quot;
        path: /mnt/{{ volume_name }}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;There are a few opinionated decisions made here, like mounting the filesystem under &lt;code&gt;/mnt/volume_name&lt;/code&gt;, but other than that, the example is quite comprehensive. Let’s run it!&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ ansible-playbook provision.yaml -e nimble_array=nva \
    -e volume_name=myvol1 -e volume_igroup=node22
    
PLAY [Provision a Nimble Volume to a SAN host] *********************************************************************************

TASK [Create Volume] ***********************************************************************************************************
changed: [nva]

TASK [Set facts to pass on to node play] ***************************************************************************************
ok: [nva]

TASK [Create Volume from Snapshot] *********************************************************************************************
skipping: [nva]

TASK [Set facts to pass on to node play] ***************************************************************************************
skipping: [nva]

TASK [Create ACR] **************************************************************************************************************
changed: [nva]

PLAY [Attach a volume, format and mount it on a host] **************************************************************************

TASK [Gathering Facts] *********************************************************************************************************
ok: [node22]

TASK [Discover Target] *********************************************************************************************************
ok: [node22]

TASK [Attach Target] ***********************************************************************************************************
changed: [node22]

TASK [Set volume device fact] **************************************************************************************************
ok: [node22]

TASK [Create Filesystem] *******************************************************************************************************
changed: [node22]

TASK [Mount the Volume] ********************************************************************************************************
changed: [node22]

TASK [Set Permissions] *********************************************************************************************************
changed: [node22]

PLAY RECAP *********************************************************************************************************************
node22                     : ok=7    changed=4    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
nva                        : ok=3    changed=2    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Bliss! A new volume was created and correctly mapped to the node. On the node, the LUN was discovered, attached, formatted and mounted (including adding an entry to &lt;code&gt;/etc/fstab&lt;/code&gt;) and is ready for use. This can be inspected by calling the &lt;code&gt;command&lt;/code&gt; or &lt;code&gt;shell&lt;/code&gt; module ad hoc.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ ansible -m command node22 -a &apos;df -h /mnt/myvol1&apos;
node22 | CHANGED | rc=0 &gt;&gt;
Filesystem                                     Size  Used Avail Use% Mounted on
/dev/mapper/223119d4f2e94525f6c9ce900e7f81e0a  997M   33M  965M   4% /mnt/myvol1
&lt;/code&gt;&lt;/pre&gt;
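&lt;p&gt;The roughly 1 GiB of capacity comes from the playbook’s &lt;code&gt;volume_size&lt;/code&gt; default of 1000 MiB. To provision a larger volume, pass the variable explicitly, for example (the volume name here is just illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ ansible-playbook provision.yaml -e nimble_array=nva \
    -e volume_name=myvol2 -e volume_igroup=node22 -e volume_size=5000
&lt;/code&gt;&lt;/pre&gt;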
&lt;h1&gt;Inspecting and mutating resources&lt;/h1&gt;
&lt;p&gt;Among the list of modules there’s a module not specifically mapped to a resource — the &lt;code&gt;hpe_nimble_info&lt;/code&gt; module. This module is a very powerful query tool that allows you to extract any piece of metadata from every API resource available on the array. The result set can also be filtered and limited in the query, hence preserving resources as the array only has to return the objects and metadata being requested. This functionality is also available in the Python SDK and the REST API directly and it made perfect sense to make it available natively in Ansible.&lt;/p&gt;
&lt;p&gt;Let’s grab some attributes from the volume we just created.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---

- name: Query a Nimble array volume
  gather_facts: false
  connection: local
  hosts: &quot;{{ nimble_array }}&quot;
  collections:
    - hpe.nimble
  tasks:
    - name: Inspect volume
      hpe_nimble_info:
        host: &quot;{{ nimble_host }}&quot;
        username: &quot;{{ nimble_user }}&quot;
        password: &quot;{{ nimble_pass }}&quot;
        gather_subset:
          - volumes:
              query:
                name: &quot;{{ volume_name }}&quot;
              fields:
                - limit_iops
                - description
                - iscsi_sessions
      register: query

    - name: Results
      debug:
        var: query
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here we’re gathering the IOPS limit of the volume along with the iSCSI sessions and description. Let’s see what we get.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ ansible-playbook query.yaml -e nimble_array=nva -e volume_name=myvol1

PLAY [Query a Nimble array volume] *********************************************************************************************

TASK [Inspect volume] **********************************************************************************************************
ok: [nva]

TASK [Results] *****************************************************************************************************************
ok: [nva] =&gt; {
    &quot;query&quot;: {
        &quot;changed&quot;: false,
        &quot;failed&quot;: false,
        &quot;message&quot;: &quot;Fetched the subset details.&quot;,
        &quot;nimble_info&quot;: {
            &quot;volumes&quot;: [
                {
                    &quot;description&quot;: &quot;Volume for node22&quot;,
                    &quot;iscsi_sessions&quot;: [
                        {
                            &quot;data_digest_enabled&quot;: false,
                            &quot;header_digest_enabled&quot;: false,
                            &quot;id&quot;: &quot;346c943e6f668ac68800000000000000000000000f&quot;,
                            &quot;initiator_ip_addr&quot;: &quot;192.168.59.190&quot;,
                            &quot;initiator_name&quot;: &quot;iqn.1994-05.com.redhat:bcbe2e27f0&quot;,
                            &quot;num_connections&quot;: 1,
                            &quot;pr_key&quot;: 0,
                            &quot;session_id&quot;: &quot;346c943e6f668ac68800000000000000000000000f&quot;,
                            &quot;target_ip_addr&quot;: &quot;192.168.59.132&quot;
                        }
                    ],
                    &quot;limit_iops&quot;: -1
                }
            ]
        },
        &quot;return_status&quot;: true
    }
}

PLAY RECAP *********************************************************************************************************************
nva                        : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The registered fact returned exactly what we asked for: the IOPS limit, the description and details about the iSCSI sessions.&lt;/p&gt;
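&lt;p&gt;The same pattern works for any attribute the REST API exposes on a resource. As an illustrative variation, the task below asks for the volume’s target name and serial number instead, the same attributes that &lt;code&gt;provision.yaml&lt;/code&gt; and the decommissioning playbook later in this post rely on:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;    - name: Inspect volume
      hpe_nimble_info:
        host: &quot;{{ nimble_host }}&quot;
        username: &quot;{{ nimble_user }}&quot;
        password: &quot;{{ nimble_pass }}&quot;
        gather_subset:
          - volumes:
              query:
                name: &quot;{{ volume_name }}&quot;
              fields:
                - target_name
                - serial_number
      register: query
&lt;/code&gt;&lt;/pre&gt;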
&lt;p&gt;Many attributes of a resource can be mutated during runtime. In this example, I want to set a new volume description and limit the IOPS to 1000 for a particular volume. Since Ansible is idempotent, you can add the &lt;code&gt;limit_iops&lt;/code&gt; and &lt;code&gt;description&lt;/code&gt; to the “Create Volume” stanza in the &lt;code&gt;provision.yaml&lt;/code&gt; playbook. But then all subsequent volumes created from the playbook will get the same attributes.&lt;/p&gt;
&lt;p&gt;Let’s create a playbook to mutate these fields.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---

- name: Mutate a Nimble array volume
  gather_facts: false
  connection: local
  hosts: &quot;{{ nimble_array }}&quot;
  collections:
    - hpe.nimble
  tasks:
    - name: Mutate volume
      hpe_nimble_volume:
        host: &quot;{{ nimble_host }}&quot;
        username: &quot;{{ nimble_user }}&quot;
        password: &quot;{{ nimble_pass }}&quot;
        limit_iops: &quot;{{ volume_iops | default(&apos;-1&apos;) }}&quot;
        description: &quot;{{ volume_description | default(&apos;&apos;) }}&quot;
        name: &quot;{{ volume_name }}&quot;
        state: present
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Run the playbook with a few extra variables.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ ansible-playbook tune.yaml -e nimble_array=nva \
    -e volume_name=myvol1 -e volume_iops=1000 \
    -e &quot;volume_description=&apos;Mutated by Ansible&apos;&quot;

PLAY [Mutate a Nimble array volume] ********************************************************************************************

TASK [Mutate volume] ***********************************************************************************************************
changed: [nva]

PLAY RECAP *********************************************************************************************************************
nva                        : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If we re-run the query, we should be able to see the IOPS has been capped and the description updated.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ ansible-playbook query.yaml -e nimble_array=nva -e volume_name=myvol1

PLAY [Query a Nimble array volume] *********************************************************************************************

TASK [Inspect volume] **********************************************************************************************************
ok: [nva]

TASK [Results] *****************************************************************************************************************
ok: [nva] =&gt; {
    &quot;query&quot;: {
        &quot;changed&quot;: false,
        &quot;failed&quot;: false,
        &quot;message&quot;: &quot;Fetched the subset details.&quot;,
        &quot;nimble_info&quot;: {
            &quot;volumes&quot;: [
                {
                    &quot;description&quot;: &quot;Mutated by Ansible&quot;,
                    &quot;iscsi_sessions&quot;: [
                        {
                            &quot;data_digest_enabled&quot;: false,
                            &quot;header_digest_enabled&quot;: false,
                            &quot;id&quot;: &quot;346c943e6f668ac68800000000000000000000000f&quot;,
                            &quot;initiator_ip_addr&quot;: &quot;192.168.59.190&quot;,
                            &quot;initiator_name&quot;: &quot;iqn.1994-05.com.redhat:bcbe2e27f0&quot;,
                            &quot;num_connections&quot;: 1,
                            &quot;pr_key&quot;: 0,
                            &quot;session_id&quot;: &quot;346c943e6f668ac68800000000000000000000000f&quot;,
                            &quot;target_ip_addr&quot;: &quot;192.168.59.132&quot;
                        }
                    ],
                    &quot;limit_iops&quot;: 1000
                }
            ]
        },
        &quot;return_status&quot;: true
    }
}

PLAY RECAP *********************************************************************************************************************
nva                        : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This is a very brief demonstration on how to accomplish a storage management task in a simple and reusable manner. This playbook could be put on a crontab to throttle IOPS at certain times of day to accommodate a certain business need. It can also be made available to application owners who want to be able to tweak their application storage resources. It’s then practical to use something like Ansible Tower to enable role-based access control (RBAC) for inventories and playbooks. We’ll cover Ansible Tower in a future blog. Stay tuned!&lt;/p&gt;
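&lt;p&gt;As a minimal sketch of the crontab idea (the paths and times below are purely illustrative), the cap could be applied during business hours and lifted again in the evening by re-running &lt;code&gt;tune.yaml&lt;/code&gt; with different values, remembering that &lt;code&gt;-1&lt;/code&gt; means unlimited IOPS:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;0 8 * * 1-5  cd /path/to/playbooks &amp;#x26;&amp;#x26; ansible-playbook tune.yaml -e nimble_array=nva -e volume_name=myvol1 -e volume_iops=1000
0 18 * * 1-5 cd /path/to/playbooks &amp;#x26;&amp;#x26; ansible-playbook tune.yaml -e nimble_array=nva -e volume_name=myvol1 -e volume_iops=-1
&lt;/code&gt;&lt;/pre&gt;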
&lt;h1&gt;Snapshotting and cloning&lt;/h1&gt;
&lt;p&gt;Very advanced workflows can be put together using the snapshot capabilities of the HPE Nimble Storage array. In this very brief example, we’ll create a snapshot of the volume that just got created and attach it to our second host in the inventory.&lt;/p&gt;
&lt;p&gt;First, create some content and flush the buffers. This way we’ll have something to look at once our clone comes up on the target host.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ ansible -m shell node22 -a &apos;date &gt; /mnt/myvol1/date.txt &amp;#x26;&amp;#x26; sync&apos;
node22 | CHANGED | rc=0 &gt;&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, create a snapshot with the &lt;code&gt;snapshot.yaml&lt;/code&gt; playbook.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---

- name: Create a Snapshot of a Nimble Volume
  gather_facts: false
  connection: local
  hosts: &quot;{{ nimble_array }}&quot;
  collections:
    - hpe.nimble
  tasks:
    - name: Snapshot operation
      hpe_nimble_snapshot:
        host: &quot;{{ nimble_host }}&quot;
        username: &quot;{{ nimble_user }}&quot;
        password: &quot;{{ nimble_pass }}&quot;
        volume: &quot;{{ volume_name }}&quot;
        name: &quot;{{ snapshot_name }}&quot;
        description: &quot;Snapshot created by Ansible&quot;
        state: &quot;{{ snapshot_state | default(&apos;present&apos;) }}&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Snapshot!&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ ansible-playbook snapshot.yaml -e nimble_array=nva \
    -e volume_name=myvol1 -e snapshot_name=mysnap1

PLAY [Create a Snapshot of a Nimble Volume] ************************************************************************************

TASK [Snapshot operation] ******************************************************************************************************
changed: [nva]

PLAY RECAP *********************************************************************************************************************
nva                        : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;There’s now a snapshot named “mysnap1” on “myvol1”. We want to create a clone from that snapshot for our new volume that we’re attaching to the second host in the inventory. We’ll reuse the &lt;code&gt;provision.yaml&lt;/code&gt; playbook by simply adding a few more variables: &lt;code&gt;snapshot_name&lt;/code&gt;, &lt;code&gt;volume_clone&lt;/code&gt; and &lt;code&gt;clone_from&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ ansible-playbook provision.yaml -e nimble_array=nva \
    -e volume_name=myvol1-clone -e snapshot_name=mysnap1 \
    -e volume_clone=yes -e clone_from=myvol1 \
    -e volume_igroup=node23

PLAY [Provision a Nimble Volume to a SAN host] *********************************************************************************

TASK [Create Volume] ***********************************************************************************************************
skipping: [nva]

TASK [Set facts to pass on to node play] ***************************************************************************************
skipping: [nva]

TASK [Create Volume from Snapshot] *********************************************************************************************
changed: [nva]

TASK [Set facts to pass on to node play] ***************************************************************************************
ok: [nva]

TASK [Create ACR] **************************************************************************************************************
changed: [nva]

PLAY [Attach a volume, format and mount it on a host] **************************************************************************

TASK [Gathering Facts] *********************************************************************************************************
ok: [node23]

TASK [Discover Target] *********************************************************************************************************
ok: [node23]

TASK [Attach Target] ***********************************************************************************************************
changed: [node23]

TASK [Set volume device fact] **************************************************************************************************
ok: [node23]

TASK [Create Filesystem] *******************************************************************************************************
ok: [node23]

TASK [Mount the Volume] ********************************************************************************************************
changed: [node23]

TASK [Set Permissions] *********************************************************************************************************
ok: [node23]

PLAY RECAP *********************************************************************************************************************
node23                     : ok=7    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
nva                        : ok=3    changed=2    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We should now be able to do a visual inspection of the content from our two nodes. Let’s check the file we created just before the snapshot was taken.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ ansible -m command node22 -a &apos;cat /mnt/myvol1/date.txt&apos;
node22 | CHANGED | rc=0 &gt;&gt;
Tue Sep 29 02:17:11 UTC 2020
$ ansible -m command node23 -a &apos;cat /mnt/myvol1-clone/date.txt&apos;
node23 | CHANGED | rc=0 &gt;&gt;
Tue Sep 29 02:17:11 UTC 2020
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Presto! The clone may now be manipulated safely without impacting the parent.&lt;/p&gt;
&lt;p&gt;In future blogs, we’ll explore how to structure playbooks to idempotently “refresh” a clone from a parent in a safe and secure manner. That, in turn, empowers users to access volumes for dev and test on production-like data without running the risk of mistakenly disrupting mission-critical applications.&lt;/p&gt;
&lt;h1&gt;Volume decommissioning&lt;/h1&gt;
&lt;p&gt;Housekeeping is important. What’s being provisioned should also be able to be decommissioned in the same manner. Since provisioning a storage resource and attaching it to a host is a multi-task playbook, one can’t simply declare it “absent”. Tasks need to be stacked in the reverse order with different verbs to effectively dismantle the data access and volumes.&lt;/p&gt;
&lt;p&gt;For completeness, here’s how to unmount and detach a volume from a host and subsequently offline and delete the volume from the Nimble array.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---

- name: Prepare to delete Nimble Volume
  gather_facts: false
  connection: local
  hosts: &quot;{{ nimble_array }}&quot;
  collections:
    - hpe.nimble
  tasks:
    - name: Fetch Volume info
      hpe_nimble_info:
        host: &quot;{{ nimble_host }}&quot;
        username: &quot;{{ nimble_user }}&quot;
        password: &quot;{{ nimble_pass }}&quot;
        gather_subset:
          - volumes:
              fields:
                - target_name
              query:
                name: &quot;{{ volume_name }}&quot;
              detail: True
      register: volumes
      failed_when: volumes.nimble_info.volumes.0.target_name is not defined

    - name: Set facts to pass on to node play
      set_fact:
        volume_target_name: &quot;{{ volumes.nimble_info.volumes.0.target_name }}&quot;

- name: Unmount and detach a Nimble Volume from a host
  hosts: &quot;{{ volume_igroup }}&quot;
  tasks:
    - name: Unmount the Volume
      become: yes
      mount:
        fstype: xfs
        state: absent
        path: /mnt/{{ volume_name }}
    
    - name: Detach Target
      become: yes
      open_iscsi:
        target: &quot;{{ hostvars[nimble_array][&apos;volume_target_name&apos;] }}&quot;
        login: no

    - name: Delete Target
      become: yes
      command: iscsiadm -m node -o delete -T {{ hostvars[nimble_array][&apos;volume_target_name&apos;] }}

- name: Delete a Nimble Volume
  gather_facts: false
  connection: local
  hosts: &quot;{{ nimble_array }}&quot;
  tags:
    - array_only
  collections:
    - hpe.nimble
  tasks:
    - name: Offline Volume
      hpe_nimble_volume:
        host: &quot;{{ nimble_host }}&quot;
        username: &quot;{{ nimble_user }}&quot;
        password: &quot;{{ nimble_pass }}&quot;
        state: present
        online: False
        force: True
        name: &quot;{{ volume_name }}&quot;
      register: offline
      until: offline is not failed

    - name: Delete Volume
      hpe_nimble_volume:
        host: &quot;{{ nimble_host }}&quot;
        username: &quot;{{ nimble_user }}&quot;
        password: &quot;{{ nimble_pass }}&quot;
        state: absent
        name: &quot;{{ volume_name }}&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Let’s run it! Beware — this operation is irreversible. Make sure volume names and nodes are correct. We&apos;ll start with the clone, as the parent can&apos;t be removed unless all dependent volumes have been destroyed.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ ansible-playbook decommission.yaml -e nimble_array=nva \
    -e volume_name=myvol1-clone -e volume_igroup=node23

PLAY [Prepare to delete Nimble Volume] *****************************************

TASK [Fetch Volume info] *******************************************************
ok: [nva]

TASK [Set facts to pass on to node play] ***************************************
ok: [nva]

PLAY [Unmount and detach a Nimble Volume from a host] **************************

TASK [Gathering Facts] *********************************************************
ok: [node23]

TASK [Unmount the Volume] ******************************************************
changed: [node23]

TASK [Detach Target] ***********************************************************
changed: [node23]

TASK [Delete Target] ***********************************************************
changed: [node23]

PLAY [Delete a Nimble Volume] **************************************************

TASK [Offline Volume] **********************************************************
changed: [nva]

TASK [Delete Volume] ***********************************************************
changed: [nva]

PLAY RECAP *********************************************************************
node23                     : ok=4    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
nva                        : ok=4    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Go ahead and destroy the parent volume.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ ansible-playbook decommission.yaml -e nimble_array=nva \
    -e volume_name=myvol1 -e volume_igroup=node22

...

PLAY RECAP *********************************************************************************************************************
node22                     : ok=4    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
nva                        : ok=4    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
&lt;/code&gt;&lt;/pre&gt;
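&lt;p&gt;One more detail worth noting: the final play in &lt;code&gt;decommission.yaml&lt;/code&gt; is tagged &lt;code&gt;array_only&lt;/code&gt;. If the host side has already been cleaned up (or the host itself is gone), the array-side offline and delete tasks could be run on their own, for example like this (values are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ ansible-playbook decommission.yaml --tags array_only \
    -e nimble_array=nva -e volume_name=myvol1 -e volume_igroup=node22
&lt;/code&gt;&lt;/pre&gt;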
&lt;h1&gt;Next steps&lt;/h1&gt;
&lt;p&gt;Expect more content on HPE DEV to frame more use cases relevant for automating storage management with the Ansible modules. This will include more advanced snapshot and clone management, scheduling and resource life-cycles. We also plan on going through user management, advanced reporting and network configuration. Every module will be touched on to describe its use and options to derive value from infrastructure automation.&lt;/p&gt;
&lt;p&gt;The HPE Nimble Storage Content Collection for Ansible is available immediately and installable from &lt;a href=&quot;https://galaxy.ansible.com/hpe/nimble/&quot;&gt;Ansible Galaxy&lt;/a&gt; and the certified collection is also available on &lt;a href=&quot;https://cloud.redhat.com/ansible/automation-hub/hpe/nimble&quot;&gt;Red Hat Automation Hub&lt;/a&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Check out &lt;a href=&quot;https://hpe-storage.github.io/nimble-ansible-modules/&quot;&gt;the documentation&lt;/a&gt; on the hosted GitHub Pages&lt;/li&gt;
&lt;li&gt;The content collection is hosted in the &lt;a href=&quot;https://galaxy.ansible.com/hpe/nimble&quot;&gt;hpe.nimble&lt;/a&gt; namespace on Ansible Galaxy&lt;/li&gt;
&lt;li&gt;Source Code &lt;a href=&quot;https://github.com/hpe-storage/nimble-ansible-modules&quot;&gt;is available&lt;/a&gt; on GitHub&lt;/li&gt;
&lt;li&gt;Read the product announcement on &lt;a href=&quot;https://community.hpe.com/t5/around-the-storage-block/introducing-hpe-nimble-storage-content-collection-for-ansible/ba-p/7103452&quot;&gt;Around The Storage Block&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;We are open to feedback and collaboration. Join us on the HPE DEV Slack community and hang out with the team in #NimbleStorage. Signup &lt;a href=&quot;https://slack.hpedev.io&quot;&gt;here&lt;/a&gt; and join us at &lt;a href=&quot;https://hpedev.slack.com/archives/C7TTAHRUN&quot;&gt;hpedev.slack.com&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[How to Persist Kafka Data as JSON in NoSQL Storage Using MapR Event Store and MapR Database]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may…]]></description><link>https://developer.hpe.com/how-to-persist-kafka-data-as-json-in-nosql-storage-using-mapr-event-stor/</link><guid isPermaLink="false">https://developer.hpe.com/how-to-persist-kafka-data-as-json-in-nosql-storage-using-mapr-event-stor/</guid><pubDate>Fri, 25 Sep 2020 06:11:31 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Ian Downard&quot;,
&quot;publish&quot;: &quot;2016-10-31T07:00:00.000Z&quot;,
&quot;tags&quot;: &quot;nosql&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;h2&gt;Streaming data is like the phrase, “Now you see it. Now you don’t!”&lt;/h2&gt;
&lt;p&gt;One of the challenges when working with streams, especially streams of fast data, is the transitory nature of the data. Kafka streams are characterized by a retention period that defines the point at which messages will be permanently deleted. For many applications, such as those fed by streams of rapidly generated sensor data, the retention period is a desirable and convenient way to purge stale data, but in other cases, such as with insurance or banking applications, record-retention laws may require data to be persisted far beyond the point at which that data has any practical value to streaming analytics. This is challenging in situations where rapidly ingested data creates pressure on stream consumers designed to write streaming records to a database. Even if we can ensure these consumers keep up, we still need to guarantee zero data loss in the unlikely event that they do fail.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;HPE Ezmeral Data Fabric Event Data Streams (Formerly MapR Event Store) and HPE Ezmeral Data Fabric Document Database (Formerly MapR Database) work together to provide a scalable and fault tolerant way to save streaming data in long-term storage.&lt;/strong&gt; They both have a distributed, scale-out design based on the HPE Ezmeral Data Fabric (Formerly MapR Data Platform). Furthermore, as a NoSQL data store, HPE Ezmeral Data Fabric Document Database makes it easy to persist data with the same schema encoded into streams. This is not only convenient for the developer but also minimizes the work required to transform streaming records into persistable objects.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/persist-kafka-json-streams-mapr-02_0-1601014236654.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Let&apos;s illustrate these concepts with an example that persists streaming data in 5 simple steps:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Set up stream and database connections.&lt;/li&gt;
&lt;li&gt;Consume records from a MapR stream using the standard Kafka API.&lt;/li&gt;
&lt;li&gt;Convert each consumed record to a JSON object.&lt;/li&gt;
&lt;li&gt;Persist that JSON object in HPE Ezmeral Data Fabric Document Database.&lt;/li&gt;
&lt;li&gt;Update the stream cursor to ensure graceful recovery should a stream consumer fail.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Step 1: Set up stream and database connections&lt;/h2&gt;
&lt;p&gt;Before we can do anything interesting, we first have to set up our stream and database connections. We&apos;ll use the following two options that relate to fault tolerance:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;We disable the &lt;code&gt;enable.auto.commit&lt;/code&gt; consumer option in order to commit stream cursors only after their corresponding records have been written to the database.&lt;/li&gt;
&lt;li&gt;We disable the &lt;code&gt;BUFFERWRITE&lt;/code&gt; table option in order to ensure database writes are not buffered on the client.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;With these options we&apos;re sacrificing speed for higher fault tolerance but we can compensate for that tradeoff by creating more topic partitions and running more concurrent consumers in parallel.&lt;/p&gt;
&lt;p&gt;So, here is what our database and consumer configurations look like:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;Table table;
String tableName = &quot;/user/mapr/ticktable&quot;;
if (MapRDB.tableExists(tableName)) {
    table = MapRDB.getTable(tableName);
} else {
    table = MapRDB.createTable(tableName);
}
table.setOption(Table.TableOption.BUFFERWRITE, false);

Properties props = new Properties();
props.put(&quot;enable.auto.commit&quot;,&quot;false&quot;);
props.put(&quot;group.id&quot;, &quot;mygroup&quot;);
props.put(&quot;auto.offset.reset&quot;, &quot;earliest&quot;);
props.put(&quot;key.deserializer&quot;, &quot;org.apache.kafka.common.serialization.StringDeserializer&quot;);
props.put(&quot;value.deserializer&quot;, &quot;org.apache.kafka.common.serialization.ByteArrayDeserializer&quot;);
consumer = new KafkaConsumer&amp;#x3C;String, byte[]&gt;(props);
List&amp;#x3C;String&gt; topics = new ArrayList&amp;#x3C;&gt;();
topics.add(topic);
consumer.subscribe(topics);
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Step 2: Consume records from the stream&lt;/h2&gt;
&lt;p&gt;To consume records from a stream, you first poll the stream. This gives you a collection of &lt;code&gt;ConsumerRecords&lt;/code&gt; which you then iterate through in order to access each individual stream record. This is standard Kafka API stuff, and it looks like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;ConsumerRecords&amp;#x3C;String, byte[]&gt; records = consumer.poll(TIMEOUT);
Iterator&amp;#x3C;ConsumerRecord&amp;#x3C;String, byte[]&gt;&gt; iter = records.iterator();
while (iter.hasNext())
{
    ConsumerRecord&amp;#x3C;String, byte[]&gt; record = iter.next();
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Step 3: Convert streamed records to JSON&lt;/h2&gt;
&lt;p&gt;Before we write consumer records to the database, we need to put each record in a format that has columns. In our example we’re streaming byte arrays, which by themselves have no field-related attributes, so we need to convert these byte arrays into a type containing attributes that will correspond to columns in our database. We’ll do this with a Java object, defined in &lt;a target=&apos;_blank&apos;  href=&apos;https://gist.github.com/iandow/92d3276e50a7e77f41e69f5c69c8563b&apos;&gt;Tick.java&lt;/a&gt;, which uses the @JsonProperty annotation to conveniently convert Tick objects encoded as byte arrays into a JSON document, like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;Tick tick = new Tick(record.value());
Document document = MapRDB.newDocument((Object)tick);
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Step 4: Persist the JSON document to HPE Ezmeral Data Fabric Document Database&lt;/h2&gt;
&lt;p&gt;This part is easy. We can insert each JSON document as a new row in a table in MapR Database with one line of code, like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;table.insertOrReplace(tick.getTradeSequenceNumber(), document);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The first parameter in the insertOrReplace method is the Document ID (or row key). It’s a property of our dataset that the value returned by &lt;code&gt;tick.getTradeSequenceNumber()&lt;/code&gt; is unique for each record, so we’re referencing that as the Document ID for our table insert in order to avoid persisting duplicate records even if duplicate messages are consumed from the stream. This guarantees idempotency in our stream consumer.&lt;/p&gt;
&lt;h2&gt;Step 5: Update the stream cursor&lt;/h2&gt;
&lt;p&gt;Finally, we’ll update the cursor in our stream topic. In the unlikely event that our stream consumer fails, this ensures that a new consumer will be able to continue working from where the last consumer left off.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;consumer.commitSync();
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;The design we just outlined provides a scalable approach to persisting stream data. It ensures thread-safety by processing immutable stream data with idempotent stream consumers and achieves fault tolerance by updating stream cursors only after records have been persisted in HPE Ezmeral Data Fabric Document Database. This represents an elastic, responsive, resilient, and message-driven design consistent with the characteristics of reactive microservices. This is a reliable approach to persisting Kafka streams in long-term NoSQL storage.&lt;/p&gt;
&lt;p&gt;If you&apos;d like to see a complete example application that uses the techniques described in this post, check out &lt;a target=&apos;_blank&apos;  href=&apos;https://github.com/mapr-demos/finserv-application-blueprint/blob/master/src/main/java/com/mapr/demo/finserv/Persister.java&apos;&gt;Persister.java&lt;/a&gt; in the &lt;a target=&apos;_blank&apos;  href=&apos;https://github.com/mapr-demos/finserv-application-blueprint&apos;&gt;Application blueprint&lt;/a&gt; on GitHub.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products referenced are now part of the &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;HPE Ezmeral Data Fabric&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;To view more articles on this topic, be sure to check back regularly on the &lt;a href=&quot;/blog&quot;&gt;HPE DEV blog site&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[How to Build Stanzas Using MapR Installer for Easy and Efficient Provisioning]]></title><description><![CDATA[Original Post Information: Editor’s Note: MapR products referenced are now part of the HPE Ezmeral Data Fabric. The MapR Installer provides…]]></description><link>https://developer.hpe.com/how-to-build-stanzas-using-mapr-installer-for-easy-and-efficient-provisi/</link><guid isPermaLink="false">https://developer.hpe.com/how-to-build-stanzas-using-mapr-installer-for-easy-and-efficient-provisi/</guid><pubDate>Sat, 19 Sep 2020 05:31:19 GMT</pubDate><content:encoded>&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Prashant Rathi&quot;,
&quot;publish&quot;: &quot;2016-12-09T08:00:00.000Z&quot;,
&quot;tags&quot;: &quot;streaming&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products referenced are now part of the &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;HPE Ezmeral Data Fabric&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The MapR Installer provides cluster operators an intuitive way to set up a cluster using a step-by-step wizard. The wizard guides you through:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Selecting a Core Version and ecosystem services&lt;/li&gt;
&lt;li&gt;Using Auto-Provisioning templates&lt;/li&gt;
&lt;li&gt;Specifying a list of Nodes and Disks&lt;/li&gt;
&lt;li&gt;Grouping Services and laying them out across nodes&lt;/li&gt;
&lt;li&gt;Verifying all dependencies before cluster installation&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;We have set up multiple clusters for several of our enterprise customers and learned quite a bit in the process. Increasingly, these deployments have not only grown in number, but have also evolved based on the type, purpose, and lifetime of these clusters. If you add to the mix the rapid innovation in the community and the complexity that it brings, there is a clear demand for a higher level of automated and consistent cluster provisioning.&lt;/p&gt;
&lt;p&gt;Installer Stanzas enable API-driven installation for the industry’s only converged data platform. With this capability, operators can build a Stanza that contains the layout and settings for the cluster to be installed and pass it programmatically to the installer to execute the set of instructions.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/stanza-1600932620174.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;This new capability is very useful when you need a script-based tool to install the software and you do not want to click through the menus and options provided by the installer wizard. While this method provides less visual feedback than the GUI version, it can be faster and more efficient at installing software on clusters with many nodes. Not only that, but once a Stanza gets defined, you can automate the cluster setup process for each successive cluster creation with a minimum set of changes.&lt;/p&gt;
&lt;p&gt;Read the detailed &lt;a href=&quot;https://docs.datafabric.hpe.com/61/AdvancedInstallation/Stanzas/SilentInstaller.html&quot;&gt;“how-to” guide here&lt;/a&gt;. At the heart of these Stanzas is a YAML file. You must configure a YAML file before using this method to install or upgrade a cluster. Sample YAML files (basic and advanced) can be found in the installer package, but here are the top-level sections:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;&lt;strong&gt;Environment&lt;/strong&gt;&lt;/em&gt; – specifies the mapr_core_version&lt;/li&gt;
&lt;li&gt;&lt;em&gt;&lt;strong&gt;Config&lt;/strong&gt;&lt;/em&gt; – specifies the list of nodes (with login information), disks, and other configuration info. Also includes the list of services chosen from pre-existing templates or custom-defined from MapR Ecosystem Pack (MEP) versions.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;&lt;strong&gt;Groups (optional)&lt;/strong&gt;&lt;/em&gt; – selection of services grouped across nodes for advanced layout option&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Here’s an example structure for a 3-node cluster:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;environment:
  mapr_core_version: 5.2.0
config:
  hosts:
    - demonode[1-3].example.com
  ssh_id: root
  license_type: enterprise
  mep_version: 2.0
  disks:
    - /dev/sdb
    - /dev/sdc
  services:
    template-05-converged:
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;MapR Installer Stanzas come with the following set of commands that can be executed on the command line:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;u&gt;Install&lt;/u&gt; – use for a fresh install, an incremental install, or an upgrade of a cluster&lt;/li&gt;
&lt;li&gt;&lt;u&gt;Uninstall&lt;/u&gt; – use to uninstall a cluster&lt;/li&gt;
&lt;li&gt;&lt;u&gt;Export&lt;/u&gt; – use to generate a YAML file that captures the state of the cluster&lt;/li&gt;
&lt;li&gt;&lt;u&gt;List&lt;/u&gt; – use to list nodes, services, and groups in a cluster&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Quick set of steps to get started:&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.datafabric.hpe.com/61/MapRInstaller.html&quot;&gt;Download the Installer&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.datafabric.hpe.com/61/AdvancedInstallation/Stanzas/SilentInstaller.html&quot;&gt;Review the detailed documentation here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Start building new clusters!&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;To view more articles on this topic, be sure to check back regularly on the &lt;a href=&quot;/blog&quot;&gt;HPE DEV blog site&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[CRUD with the New Golang Client for MapR Database]]></title><description><![CDATA[Original Post Information: Editor’s Note: MapR products referenced are now part of the HPE Ezmeral Data Fabric.  Introduction HPE Ezmeral…]]></description><link>https://developer.hpe.com/crud-with-the-new-golang-client-for-mapr-database/</link><guid isPermaLink="false">https://developer.hpe.com/crud-with-the-new-golang-client-for-mapr-database/</guid><pubDate>Fri, 18 Sep 2020 20:27:10 GMT</pubDate><content:encoded>&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: [&quot;Magnus Pierre&quot;],
&quot;publish&quot;: &quot;2019-04-24T07:00:00.000Z&quot;,
&quot;tags&quot;: &quot;hpe-ezmeral-data-fabric&quot;,&quot;hpe-ezmeral&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products referenced are now part of the &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;HPE Ezmeral Data Fabric&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/image2-1600461098259.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;HPE Ezmeral Data Fabric Document Database (formerly MapR Database) has come a long way since its inception. Today, the database can not only be used as a replacement for HBase, implementing the HBase API, but also as a document store, implementing an open source API called OJAI (Open JSON Application Interface).&lt;/p&gt;
&lt;p&gt;Document stores have increased in popularity over the last few years, since they make it easy to store meaningful entities (documents) in a database and then do operations on them without having to retrieve the complete document. A typical example is to ask for a certain subtree of a document identified with a key or a set of documents that fulfill a certain condition. However, documents can also be modified in place, replaced, and deleted – basically, what is traditionally called CRUD (Create, Read, Update, Delete) in the database world. OJAI is an API to provide CRUD and flexible query capabilities to clients in a simple, consistent manner.&lt;/p&gt;
&lt;h2&gt;New to Document Stores?&lt;/h2&gt;
&lt;p&gt;For those that have no experience with document stores, here’s a brief explanation, based on a comparison with traditional RDBMS technology.&lt;/p&gt;
&lt;p&gt;Let’s consider a traditional Third Normal Form (3NF) RDBMS database. One reason to divide data into tables/entities is to make data changes easier. Another is to simplify access. The normalization rules all database developers are educated in are there to guide the developers in how to organize/store the data, so it can be accessed and processed in a consistent manner by the consumers. Albeit a very useful and valuable evolution for data storage, the step of splitting the data into separate entities/tables aims to solve historical technical limitations that today are more easily handled. Normalizing and splitting data does make data access simple. It forces you to look at the data and make decisions on type and how it relates to other data, but it also comes with a lot of problems:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The original format is not kept, which creates lineage issues.&lt;/li&gt;
&lt;li&gt;The data often needs to be transformed in order to fit into the defined entities, increasing the lineage issues as well as introducing new points where data can be wrongly represented.&lt;/li&gt;
&lt;li&gt;Depending on relationships (i.e. the table structure populated), it can be very complex and time-consuming to ingest new data.&lt;/li&gt;
&lt;li&gt;Data types (domains) need to be decided, which can be very hard for dynamic data and so on.&lt;/li&gt;
&lt;li&gt;And above all, it is very work-intensive and error-prone, meaning it takes a lot of time to develop a good database.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Let’s now consider another approach, where we throw all the rules out the window. Instead, we’ll create a table for customers and then store everything related to customers – orders, transactions, address information, products purchased, products offered, along with the complete history of the customer – in the same table. Please ignore the fact that it would not make sense in an RDBMS to do this. Consider it more from a conceptual perspective. Let’s also ignore types and so on. What would be the positive things this could give you?&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;It would be easy to get a comprehensive overview of a single customer.&lt;/li&gt;
&lt;li&gt;Joins would be minimized, since most relevant data is already stored together.&lt;/li&gt;
&lt;li&gt;Conditions and filters could cover the whole scope of the customer engagement easily.&lt;/li&gt;
&lt;li&gt;Projections (what to return) would be central to extract a portion of the table to work with at a given time.&lt;/li&gt;
&lt;li&gt;Filtering would basically replace joins as the primary means of deciding what to retrieve.&lt;/li&gt;
&lt;li&gt;Locking would be simpler, since all information related to a customer would be in a single entity.&lt;/li&gt;
&lt;li&gt;Less initial work.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;What would be the drawbacks?&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Harder to change the structure.&lt;/li&gt;
&lt;li&gt;Since cardinality would be different for different types of data, one would have to deal with this in a more programmatic manner.&lt;/li&gt;
&lt;li&gt;Duplication of data embedded in the structure.&lt;/li&gt;
&lt;li&gt;Hard to optimize access for a certain part of the table.&lt;/li&gt;
&lt;li&gt;Structure would dictate access path.&lt;/li&gt;
&lt;li&gt;Structure would dictate how you could process the data.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;However, provided well-designed tools and APIs are available to operate on these &lt;em&gt;complex&lt;/em&gt; structures, then the model of grouping all data together can make sense. This is really what a document database system such as the HPE Ezmeral Data Fabric Document Database provides – i.e., the tools necessary to process complex documents easily. There’s nothing in a document store that defines what the structure needs to look like and how each &lt;em&gt;row&lt;/em&gt; can be different. So, in the example of the customer table, it could be one table with everything in it, or N documents with parts in it. The division is decided by the developers and what makes sense from the use case of the data.&lt;/p&gt;
&lt;h2&gt;HPE Ezmeral Data Fabric Document Database&lt;/h2&gt;
&lt;p&gt;HPE Ezmeral Data Fabric Document Database supports JSON documents. A JSON document is hierarchical in nature, which I prefer to see as a small table of its own. Each document can have its unique structure in the database. The only requirement on the document, other than it has to be valid JSON, is that it needs to be identified by an ID (_id). This ID can be used to find a unique document, or if the ID is generated using some type of logic, it can be used by filters.&lt;/p&gt;
&lt;p&gt;In the HPE Ezmeral Data Fabric Document Database, trees in a document can be stored separately in so-called column families. Learn how to parallelize access &lt;a href=&quot;https://docs.datafabric.hpe.com/61/MapR-DB/introduction-to-column-families.html&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;You can also improve scans by placing secondary indexes on attributes. Learn about that and more &lt;a href=&quot;https://docs.datafabric.hpe.com/61/MapR-DB/Indexes/Indexes.html&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;None of this really matters for our Golang discussion, but you get the concept of a complete database environment supporting the document concept in full.&lt;/p&gt;
&lt;h2&gt;Enter Language Bindings for OJAI&lt;/h2&gt;
&lt;p&gt;OJAI was originally a pure Java API, but since the API is really well designed, it fits well in other programming languages without significant changes to the conventions of the API. In order to support new languages, MapR introduced the Data Access Gateway 2.0 (DAG) in MapR Core 6.1. DAG provides the service back end for the language bindings, and in MapR Ecosystem Pack (MEP) 6.0 we shipped NodeJS and Python language bindings for HPE Ezmeral Data Fabric Document Database. Now it’s time to apply the same support for Golang (Go) and C#.&lt;/p&gt;
&lt;p&gt;Comparing our language bindings with the original API, the functions defined are basically the same in each language. If you understand OJAI in one language, you will also understand it in another language implementation. Of course, there are slight differences, and these are mostly dependent on language specifics. A dynamic language, such as JavaScript, is not the same as Golang, which is strictly typed, so deviations driven by language conventions or capabilities are to be expected. But, in general, the language implementations are very close to the original API.&lt;/p&gt;
&lt;p&gt;Learn about supporting HPE Ezmeral Data Fabric Architecture for the Language Bindings &lt;a href=&quot;https://docs.datafabric.hpe.com/61/MapROverview/MapRDataAccessGateway.html#MapRDataAccessGateway&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/image1-1600461364270.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The client library connects to the Data Access Gateway (DAG) over gRPC, and the Data Access Gateway talks directly with the HPE Ezmeral Data Fabric Document Database using MapR RPC.&lt;/p&gt;
&lt;p&gt;Multiple Data Access Gateways can run on the same cluster, and you can integrate an external load balancer for high availability and failover of the DAG service. We have found that the gRPC-based DAG 2.0 performs on par with the Java OJAI client. Always scale-test your application and have at least two DAG services behind load balancers for HA.&lt;/p&gt;
&lt;h2&gt;The Golang Client&lt;/h2&gt;
&lt;p&gt;Golang, or “Go,” hardly needs an introduction. It is a fairly new, up-and-coming programming language out of Google that is as fast as C but much easier to write. It is used in real production environments at Google. Good open source tools, such as Kubernetes and Docker, have been written in it, so it is quite battle-tested already.&lt;/p&gt;
&lt;h2&gt;Installation of the HPE Ezmeral Data Fabric Document Database Golang Client&lt;/h2&gt;
&lt;p&gt;Installation is very simple. In the project where you are to use the client, install it using: &lt;code&gt;go get github.com/mapr/maprdb-go-client&lt;/code&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Please observe that you need to have Golang installed on the system prior to running the command.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Once you have done this step, you can start using the API in the go files of your choice. A good convention, when using this API, is to provide an alias to the package in the import to get a simpler naming convention in the code itself:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/code-1-1600461373867.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;This will define client as the alias for the go client, meaning that everything you use from the API is referenced using client and then the function call you want to make.&lt;/p&gt;
&lt;h2&gt;Make a Connection&lt;/h2&gt;
&lt;p&gt;Connections in the Golang client are established using a connection string. A description of the parts included in a connection string can be found &lt;a href=&quot;https://docs.datafabric.hpe.com/61/MapR-DB/JSON_DB/GettingStartedGoOJAI.html&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Some examples of connection strings:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;dag-somesystem.com:5678?auth=basic;user=mapr;password=somepasswd&lt;/li&gt;
&lt;li&gt;ojai:mapr:thin:v1@localhost:5768?auth=basic;user=fred;password=george;sslCA=/opt/app/conf/rootca.pem&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In the example code, we use a very simple convenience function to create the connection string, based on some values that reside in a &lt;code&gt;Struct&lt;/code&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/code-2-1600461384388.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
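&lt;p&gt;To make this concrete, here is a minimal sketch of opening a connection from Go. It is not the post&apos;s own example code: the package alias follows the convention above, the connection string reuses the first example shown earlier, and the function names &lt;code&gt;MakeConnection&lt;/code&gt; and &lt;code&gt;Close&lt;/code&gt; are assumptions about the client API that you should check against the documentation.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;package main

import (
    &quot;fmt&quot;

    client &quot;github.com/mapr/maprdb-go-client&quot;
)

func main() {
    // Hypothetical connection string, modeled on the first example above.
    connectionString := &quot;dag-somesystem.com:5678?auth=basic;user=mapr;password=somepasswd&quot;

    // MakeConnection and Close are assumed names; check the client documentation.
    conn, err := client.MakeConnection(connectionString)
    if err != nil {
        panic(err)
    }
    defer conn.Close()

    fmt.Println(&quot;connected to the Data Access Gateway&quot;)
}
&lt;/code&gt;&lt;/pre&gt;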
&lt;h2&gt;Create / Get a Table&lt;/h2&gt;
&lt;p&gt;Once you have the connection string, you can connect to the HPE Ezmeral Data Fabric Document Database and open a table (document store). In this example code, the table is created if the one asked for does not exist. This is probably not what you would like to do in production, but for simple loads and test purposes, this is very convenient:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/code-3-1600461391951.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
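&lt;p&gt;Continuing the sketch above, the “create it if it does not exist” behavior could look roughly like the following. &lt;code&gt;GetOrCreateStore&lt;/code&gt; and the table path are assumptions, not the post&apos;s own code.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;// Continuing from the connection sketch above.
// GetOrCreateStore (an assumed name) opens the document store and creates it
// when it does not exist, which is convenient for tests but rarely what you
// want in production.
store, err := conn.GetOrCreateStore(&quot;/user/mapr/demo_table&quot;)
if err != nil {
    panic(err)
}
fmt.Println(&quot;store ready:&quot;, store)
&lt;/code&gt;&lt;/pre&gt;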
&lt;h2&gt;Insert Data&lt;/h2&gt;
&lt;p&gt;OJAI contains several convenience functions to, for instance, make sure you create valid documents, assign fields, and much more.&lt;/p&gt;
&lt;p&gt;Here is a simple function that does the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;It receives a simple message of the type string that is used to create a Document by calling the function &lt;code&gt;CreateDocumentFromString&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;It assigns the ID from the current timestamp (for example, &lt;code&gt;time.Now().String()&lt;/code&gt;), using the OJAI &lt;code&gt;Document.SetIdString&lt;/code&gt; function (not the best key, but it fits this simple example).&lt;/li&gt;
&lt;li&gt;It assigns a simple attribute called &lt;code&gt;test&lt;/code&gt; the value 0 of type Float64 in the Document, using &lt;code&gt;Document.SetFloat64&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;It prints out the Document as a JSON, using the &lt;code&gt;Document.AsJsonString()&lt;/code&gt; function to stdout.&lt;/li&gt;
&lt;li&gt;It adds the Document to the database, using &lt;code&gt;DocumentStore.InsertDocument()&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/code-4-1600461399780.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
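&lt;p&gt;Because the original snippet is shown as a screenshot, here is a rough Go equivalent of the steps listed above. The function names (&lt;code&gt;CreateDocumentFromString&lt;/code&gt;, &lt;code&gt;SetIdString&lt;/code&gt;, &lt;code&gt;SetFloat64&lt;/code&gt;, &lt;code&gt;AsJsonString&lt;/code&gt;, &lt;code&gt;InsertDocument&lt;/code&gt;) come from the text itself, but their exact signatures, and the assumption that documents are created from the connection, should be verified against the client documentation.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;// Rough sketch of the insert flow described above; signatures are assumptions.
// Assumes the imports from the earlier sketch plus &quot;time&quot;.
func insertMessage(conn *client.Connection, store *client.DocumentStore, message string) error {
    // Create a Document from the incoming JSON string.
    doc, err := conn.CreateDocumentFromString(message)
    if err != nil {
        return err
    }

    // Use the current timestamp as the _id (not an ideal key, but it matches the example).
    doc.SetIdString(time.Now().String())

    // Add a simple float64 attribute called &quot;test&quot; with the value 0.
    doc.SetFloat64(&quot;test&quot;, 0)

    // Print the document as JSON, then insert it into the store.
    fmt.Println(doc.AsJsonString())
    return store.InsertDocument(doc)
}
&lt;/code&gt;&lt;/pre&gt;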
&lt;h2&gt;Query data&lt;/h2&gt;
&lt;p&gt;Querying data in OJAI can be done in a few different ways:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;Find by ID&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Find by Query&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Find All&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;FindQueryString&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;FindQueryMap&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;code&gt;Find by ID&lt;/code&gt; searches the system for a specific ID. As you remember, the ID is the row key identifying the document, so this is blazingly fast. &lt;code&gt;Find All&lt;/code&gt; is the equivalent of a scan in HBase.&lt;/p&gt;
&lt;h2&gt;Making a Query&lt;/h2&gt;
&lt;p&gt;Queries are made using the convenience function &lt;code&gt;MakeQuery&lt;/code&gt;. &lt;code&gt;MakeQuery&lt;/code&gt; receives a variadic list of &lt;code&gt;QueryOptions&lt;/code&gt;, which represent &lt;code&gt;SELECT&lt;/code&gt;, &lt;code&gt;WHERE&lt;/code&gt;, &lt;code&gt;ORDER BY&lt;/code&gt;, &lt;code&gt;OFFSET&lt;/code&gt;, and &lt;code&gt;LIMIT&lt;/code&gt;, and produces a &lt;code&gt;Query&lt;/code&gt; object. It has the same basic structure as Mutations and Conditions, and you work with them in a similar fashion. You also have the option to define an OJAI query directly as a JSON string and use it as-is.&lt;/p&gt;
&lt;p&gt;A simple query looks like the following:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/code-5-1600461407820.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
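&lt;p&gt;As a hedged illustration, the same idea can be sketched with the OJAI query written directly as a JSON string and passed to &lt;code&gt;FindQueryString&lt;/code&gt;, one of the options listed earlier. The &lt;code&gt;$select&lt;/code&gt; and &lt;code&gt;$where&lt;/code&gt; keys are standard OJAI query syntax; the exact &lt;code&gt;FindQueryString&lt;/code&gt; signature and the result-iteration calls are assumptions about the Go client.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;// Query sketch: project _id and test for documents where test equals 0.
// The query is written directly as an OJAI JSON string.
query := `{&quot;$select&quot;: [&quot;_id&quot;, &quot;test&quot;], &quot;$where&quot;: {&quot;$eq&quot;: {&quot;test&quot;: 0}}}`

// FindQueryString comes from the list above; how results are iterated is an assumption.
result, err := store.FindQueryString(query)
if err != nil {
    panic(err)
}
for _, doc := range result.DocumentList() {
    fmt.Println(doc)
}
&lt;/code&gt;&lt;/pre&gt;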
&lt;h2&gt;How to Change/Update Data&lt;/h2&gt;
&lt;p&gt;Data is changed through the concept of mutations. Mutations are executed on the server and are applied using a &lt;code&gt;DocumentMutation&lt;/code&gt; struct. You create a &lt;code&gt;DocumentMutation&lt;/code&gt; struct and provide the wanted &lt;code&gt;MutationOperations&lt;/code&gt; to the function &lt;code&gt;MakeDocumentMutation&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;The following mutations exist in OJAI and the respective convenience functions in the Go-client:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Append
&lt;ul&gt;
&lt;li&gt;Read-modify-write operation. Use it to append specified values to existing binary, string, or array type fields.&lt;/li&gt;
&lt;li&gt;Go convenience functions:
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;AppendSlice&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;AppendString&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Decrement
&lt;ul&gt;
&lt;li&gt;Decrements the value in the &lt;code&gt;fieldpath&lt;/code&gt;. To decrement multiple field paths, use an array notation to list the field paths.&lt;/li&gt;
&lt;li&gt;Go convenience functions:
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;DecrementFloat64&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;DecrementFloat64ByOne&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;DecrementInt&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;DecrementIntByOne&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Delete
&lt;ul&gt;
&lt;li&gt;Removes either a single field or a list of fields from a document. If the field does not exist, the delete ignores that field.&lt;/li&gt;
&lt;li&gt;Go convenience functions:
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;Delete&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Increment
&lt;ul&gt;
&lt;li&gt;Increments the value in the &lt;code&gt;fieldpath&lt;/code&gt;. To increment multiple field paths, use an array notation to list the field paths.&lt;/li&gt;
&lt;li&gt;Go convenience functions:
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;IncrementFloat64&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;IncrementFloat64ByOne&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;IncrementInt&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;IncrementIntByOne&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Merge
&lt;ul&gt;
&lt;li&gt;Combines a nested document with an existing document at a specified fieldpath&lt;/li&gt;
&lt;li&gt;Go convenience functions:
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;MergeDocument&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;MergeMap&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Put
&lt;ul&gt;
&lt;li&gt;Replace operation&lt;/li&gt;
&lt;li&gt;Go convenience functions:
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;SetOrReplace&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Set
&lt;ul&gt;
&lt;li&gt;Updates one or more fields in a document.&lt;/li&gt;
&lt;li&gt;Go convenience functions:
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;Set&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Set&amp;#x3C;Go Type&gt; example SetInt&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It is also possible to provide the mutation as a map of the form:
&lt;code&gt;map[string] interface{}&lt;/code&gt;, where each entry is of the same form – i.e., &lt;code&gt;map[string] interface{}&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;It is quite easy to create a static map, but if you need to generate maps and add them dynamically, you probably need a convenience function that merges maps into one map and returns it. I have an example of such a convenience function, called &lt;a href=&quot;https://github.com/mapr/maprdb-go-client/search?q=MergeMaps&amp;#x26;type=code&quot;&gt;&lt;code&gt;MergeMaps&lt;/code&gt;&lt;/a&gt;, in the GitHub repository.&lt;/p&gt;
&lt;p&gt;Since two different forms of creating mutations exist, a set of functions exists to convert the mutation type to a form that can be sent to the database:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/code-6-1600461418004.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;These functions are:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/code-7-1600461425614.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Now we can send the &lt;code&gt;MapOrStructMutation&lt;/code&gt; to the database using:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;Update&lt;/code&gt;, &lt;code&gt;CheckAndUpdate&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;code&gt;CheckAndUpdate&lt;/code&gt; is interesting, since it allows you to only do the update if a certain condition is validated to be true. This is where the OJAI API shows its true powers. By doing proper checking this way, you can avoid having to return the data to the client, do the manipulation, and then write it back to the database. All steps can happen in the database, where they belong, local to the data. This is a fundamental feature, important for everyone that deals with mutable structures with big data sizes.&lt;/p&gt;
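&lt;p&gt;As a sketch of the map form described above (not the post&apos;s own code), a mutation can be built as a &lt;code&gt;map[string] interface{}&lt;/code&gt; with standard OJAI keys such as &lt;code&gt;$set&lt;/code&gt; and &lt;code&gt;$increment&lt;/code&gt;, and then sent with &lt;code&gt;Update&lt;/code&gt;. The field name &lt;code&gt;updates&lt;/code&gt;, the variable &lt;code&gt;docID&lt;/code&gt;, and the way &lt;code&gt;Update&lt;/code&gt; receives the _id and the mutation are assumptions.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;// Map-form mutation: set &quot;test&quot; to 1 and increment a hypothetical &quot;updates&quot; counter.
mutation := map[string]interface{}{
    &quot;$set&quot;:       map[string]interface{}{&quot;test&quot;: 1},
    &quot;$increment&quot;: map[string]interface{}{&quot;updates&quot;: 1},
}

// docID is assumed to hold the document _id; the Update signature is an assumption.
if err := store.Update(docID, mutation); err != nil {
    panic(err)
}
&lt;/code&gt;&lt;/pre&gt;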
&lt;h2&gt;Conditions&lt;/h2&gt;
&lt;p&gt;Conditions share the same structure as mutations, i.e., the same underlying type structure, so you work with them in a similar fashion. A good way of understanding conditions is the HPE Ezmeral Data Fabric Document Database official documentation about writing conditions directly in JSON. With the Go client, it is not strictly necessary to write conditions this way, however, since there are convenience functions defined for each. Examples of conditions can be found in the &lt;a href=&quot;https://github.com/mapr/maprdb-go-client&quot;&gt;referenced GitHub repository&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;How to Delete Documents&lt;/h2&gt;
&lt;p&gt;Delete is very straightforward and easily conducted, using the convenience functions: &lt;a href=&quot;https://github.com/mapr/maprdb-go-client/search?q=DeleteByIdBinary&quot;&gt;&lt;code&gt;DeleteByIdBinary&lt;/code&gt;&lt;/a&gt;, &lt;a href=&quot;https://github.com/mapr/maprdb-go-client/search?q=DeleteByIdString&quot;&gt;&lt;code&gt;DeleteByIdString&lt;/code&gt;&lt;/a&gt;, and &lt;a href=&quot;https://github.com/mapr/maprdb-go-client/search?q=DeleteDoc&quot;&gt;&lt;code&gt;DeleteDoc&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;DeleteByIdBinary&lt;/code&gt; and &lt;code&gt;DeleteByIdString&lt;/code&gt; delete a document based on its identity (_id), whilst &lt;code&gt;DeleteDoc&lt;/code&gt; bases the delete on a Document on the client. Whether &lt;code&gt;DeleteDoc&lt;/code&gt; matches the complete document before deleting or just matches the identity of the document and then deletes is unknown to me. My guess is that it matches the ID, but I will validate this.&lt;/p&gt;
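&lt;p&gt;For completeness, here is a one-line sketch using &lt;code&gt;DeleteByIdString&lt;/code&gt; from the list above, assuming it takes the document _id as a string and returns an error.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;// Delete the document whose _id matches docID (assumed signature).
if err := store.DeleteByIdString(docID); err != nil {
    panic(err)
}
&lt;/code&gt;&lt;/pre&gt;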
&lt;h2&gt;Behind the Scenes of OJAI&lt;/h2&gt;
&lt;p&gt;OJAI clients use the Data Access Gateway 2.0 introduced in MapR Core 6.1. However, in order to support features such as secondary indexes and queries on massive volumes of data, OJAI uses an internal service called the OJAI Distributed QueryService. This QueryService is based on Apache Drill and uses Drill&apos;s capabilities to query massive volumes of data efficiently. Drill is a distributed query engine that can query almost anything, but in this case, it is geared towards querying only the HPE Ezmeral Data Fabric Document Database. By utilizing Drill, OJAI gets access to advanced SQL behind the scenes, a good cost-based query planner, and, of course, the parallel execution engine for processing the intermediary steps that may result from a complex query.&lt;/p&gt;
&lt;p&gt;HPE Ezmeral Data Fabric has the ability to run multiple Drill clusters on one and the same cluster. In production, it may be important to isolate the load from OJAI clients using the Drill QueryService from regular use of Apache Drill, and this is easily done by assigning multiple Drill clusters. Please see the official documentation regarding this &lt;a href=&quot;https://docs.datafabric.hpe.com/61/Drill/config-multiple-drill-clusters.html&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;This blog post has focused on describing how to do CRUD using the Golang client for the HPE Ezmeral Data Fabric Document Database. The client provides all the tools necessary to do real applications with great database support on massive amounts of data. Mutations with conditions are key to being able to mutate data in a big data setting, since:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;You cannot return the data to the client due to volume.&lt;/li&gt;
&lt;li&gt;Even if you could, the round trip to fetch the data to the client, do the change, and then back again would be very costly.&lt;/li&gt;
&lt;li&gt;The database architecture also makes it possible to do proper applications in, for instance, simple languages such as JavaScript that you would normally not use when processing millions upon millions of rows of data.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The Go client for the HPE Ezmeral Data Fabric Document Database supports really complex OJAI statements and is backed up with an architecture that is scalable and reliable.&lt;/p&gt;
&lt;p&gt;First, we have the gateway service, which can be individually scaled across the cluster and load balanced to support massive amounts of clients. Then, we have the database with its always-on and distributed architecture, where a document can be distributed across column families for parallelization with support for in-database operations, such as mutations and support for secondary indexes. Finally, we have the query service for complex queries built on Apache Drill with its scalable architecture, leading to basically an infinitely scalable solution, where each component is individually scalable based on need.&lt;/p&gt;
&lt;p&gt;Make sure to take a deep look at the client that fits your needs best – Node.js, Python, Golang, C#, and of course Java. You will see that they share the same architecture, the same ease of use, and the same well-designed API, making them a very good base for future applications.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products referenced are now part of the &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;HPE Ezmeral Data Fabric&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;To view more articles on this topic, be sure to check back regularly on the &lt;a href=&quot;/blog&quot;&gt;HPE DEV blog site&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Containers: Best Practices for Running in Production]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may…]]></description><link>https://developer.hpe.com/containers-best-practices-for-running-in-production/</link><guid isPermaLink="false">https://developer.hpe.com/containers-best-practices-for-running-in-production/</guid><pubDate>Wed, 16 Sep 2020 16:06:50 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Suzy Visvanathan&quot;,
&quot;publish&quot;: &quot;2018-05-17T10:45:00.000&quot;,
&quot;tags&quot;: &quot;hpe-ezmeral-data-fabric&quot;,&quot;hpe-ezmeral&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;Market shifts are propelling organizations to become digital. Companies feel pressure to find better ways to offer services, respond faster and proactively to their customers, and manage growth. In their quest for digital maturity, customers are looking for solutions that not only address the aspect of capacity growth, but also other aspects such as performance, handling of different types of data through different protocols, and the ability to adopt newer technology trends that make them more efficient.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/9/containers-wide-1600290064568.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Containers are one technology trend propelling a transition for IT. With the advent of Docker, containers became a popular method for DevOps and are quickly being adopted in production as well. But as I talk to customers, I find that many are still quite undecided about adopting containers, especially in production environments. There are several nuances to keep in mind when considering deploying applications in containers in production:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Start small.&lt;/strong&gt;  Unless you have already made an organization-wide decision to deploy containers, start with a handful of containers and measure the time it takes to build and deploy an application on containers versus, say, in VMs or bare metal. This will give you an idea of whether it is worthwhile moving to containers or not. Since containers consume CPU resources, such an exercise will help you plan your cluster configuration as well.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Orchestration tool?&lt;/strong&gt; A Google search on this topic will produce a litany of articles that compare Kubernetes with Mesos or Docker Swarm and proclaim that Kubernetes has won the war. While that is great, most of you don’t need to start right away with any of these tools. Don’t get me wrong: I am a big proponent of these orchestration tools, but if you envision having just a few dozen containers in the next year or two, you are better off managing the environment manually, since it will give you good hands-on insight into how your environment behaves. If you are thinking instead of deploying hundreds or thousands of containers, by all means choose an orchestration tool.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Persistent storage for containers.&lt;/strong&gt;  When experimenting with containers, you can probably make do with local storage. If, on the other hand, you are contemplating running containers in production, then you will need persistent storage. Choose a data platform that is versatile in what it offers and not just storage for containers. A platform that has been built from the ground up for scale, reliability, and security with the ability to expand to newer technology trends is the ideal choice.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Network considerations.&lt;/strong&gt;  There are several CNI (Container Network Interface) solutions that are now available. Docker itself offers a pluggable interface and third-party vendors like Calico and Flannel offer solutions. Existing network frameworks in your data center should suffice for most deployments but evaluating CNI solutions to assess fit and needs in your data center is worth contemplating.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Standardizing on containers.&lt;/strong&gt; When building new applications, take into account design considerations to run these applications in containers, even if they won’t be deployed as containers in the short term. This will save a lot of time and cost in reformatting them later.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Containers and microservices.&lt;/strong&gt; Containers enable applications to be broken down into smaller modules, paving the way for a microservice architecture. Not all applications, especially legacy ones, can be easily broken down and may end up being more expensive to refactor, especially if you are moving to the cloud.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;HPE offers several capabilities to address many of the above highlighted best practices. The &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;HPE Ezmeral Data Fabric&lt;/a&gt; brings together a solid foundation with a distributed, scalable file system and an upper layer that caters to diverse types of data – files, tables, containers, and more. The Ezmeral Data Fabric for Kubernetes extends its capabilities to offer persistent storage for containers.&lt;/p&gt;
&lt;p&gt;Armed with the right amount of education and access to community, one can easily make the transition to deploying containers in production. Taking a page out of DevOps, having operations teams understand, deploy, and educate other stakeholders will speed up understanding of the benefits and therefore adoption.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Boost skills with FREE on-demand software technology workshops]]></title><description><![CDATA[gettyimages 860310256_1600_0_72_rgb Given the physical restrictions placed on us by the COVID-19 pandemic today, everyone appears to be…]]></description><link>https://developer.hpe.com/boost-skills-with-free-on-demand-software-technology-workshops/</link><guid isPermaLink="false">https://developer.hpe.com/boost-skills-with-free-on-demand-software-technology-workshops/</guid><pubDate>Wed, 16 Sep 2020 12:46:52 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/7/gettyimages-860310256_1600_0_72_rgb-1600260582799.JPG&quot; alt=&quot;gettyimages 860310256_1600_0_72_rgb&quot;&gt;&lt;/p&gt;
&lt;p&gt;Given the physical restrictions placed on us by the COVID-19 pandemic today, everyone appears to be finding new ways to connect and learn.&lt;/p&gt;
&lt;p&gt;It is in this spirit that the HPE DEV team is making their highly popular Jupyter Notebook-based workshops available for free. These on-demand workshops, which premiered at the HPE Discover Virtual Event, have met with rave reviews worldwide. They provide developers, data scientists, and IT professionals the opportunity to get more hands-on experience working with HPE and open source technologies.&lt;/p&gt;
&lt;p&gt;These on-demand workshops offer students an understanding of specific technologies through a hands-on experience guided by a subject matter expert (SME) and detailed instructions provided in a Jupyter Notebook format. Students will have access to other SMEs through Slack should any questions arise during their session. In this blog post, I’ll cover the details of this learning-on-demand program.&lt;/p&gt;
&lt;p&gt;Unlike the video &lt;a href=&quot;/hackshack/replays/0&quot;&gt;replays&lt;/a&gt; found on the &lt;a href=&quot;/hackshack/&quot;&gt;HPE DEV Hack Shack&lt;/a&gt;, these on-demand workshops will allow students to interact with the Jupyter Notebook and get hands-on experience with coding practices. Students will have 4 hours to go through the course, which includes time to review the video replay, follow the Jupyter Notebook instructions, and save their work to their local laptop should they want to do more work in their own environment or retake the workshop. They’ll also be able to connect with SMEs through off-line support. For those who are new to the topic being covered, it is recommended you watch the corresponding &lt;a href=&quot;/hackshack/replays/0&quot;&gt;video replay&lt;/a&gt; first and then sign up for the course.&lt;/p&gt;
&lt;h2&gt;How it works&lt;/h2&gt;
&lt;p&gt;Those who wish to enroll in the workshop should go to the &lt;a href=&quot;/hackshack/&quot;&gt;Hack Shack&lt;/a&gt; and navigate to the &lt;strong&gt;Workshops&lt;/strong&gt; page. From there, select which on-demand workshop you want to take and click on the &lt;strong&gt;Register&lt;/strong&gt; button:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/7/navigate-register-button-1600260600088.png&quot; alt=&quot;navigate register button&quot;&gt;&lt;/p&gt;
&lt;p&gt;At this point, the registration panel pops up.
Enter the details requested and click on the &lt;strong&gt;Take the Workshop&lt;/strong&gt; button. In a matter of just a few minutes, you’ll start your workshop.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/7/take-the-workshop-pramod-creds-highlighted-1600260627823.png&quot; alt=&quot;take the workshop pramod creds highlighted&quot;&gt;&lt;/p&gt;
&lt;p&gt;By pressing the &lt;strong&gt;Take the Workshop button&lt;/strong&gt;, you initiate a back-end automated registration process. This process spawns a dedicated notebook environment for you and sends you a welcome email indicating that you have been registered in the database.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/7/welcome-email-modified-1600260633309.png&quot; alt=&quot;welcome email modified&quot;&gt;&lt;/p&gt;
&lt;p&gt;Not long after, it sends you a second email providing a link to your workshop, along with your StudentID and password.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/7/highlighted-clock-1600260594291.jpg&quot; alt=&quot;highlighted clock&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;IMPORTANT: Receipt of this email indicates that the workshop environment is ready for you to begin. You will have just 4 hours from the receipt of this second email to complete the workshop. It is recommended that you only register for a workshop when you know you will have the next 4 hours to work on it. We advise you to regularly save your work and download the Jupyter Notebook to refer to later should you not be able to finish the course within the given 4-hour time slot. If you cannot finish the workshop in that time, you will need to run the course again from the beginning.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;The Jupyter Notebook-based workshops&lt;/h2&gt;
&lt;p&gt;A Jupyter Notebook is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. The Jupyter Project was an important step forward for sharing and interactive development.  &lt;a href=&quot;https://jupyter.org/index.html&quot;&gt;Project Jupyter’s&lt;/a&gt; &lt;a href=&quot;https://jupyterhub.readthedocs.io/en/stable/&quot;&gt;JupyterHub&lt;/a&gt; was created to support many users. The Hub can offer notebook servers to an entire class of students, a corporate data science workgroup, a scientific research project team, or a high-performance computing group. With &lt;a href=&quot;https://github.com/jupyterhub/jupyterhub&quot;&gt;JupyterHub&lt;/a&gt;, you can create a multi-user Hub that spawns, manages, and proxies multiple instances of the single-user &lt;a href=&quot;https://mybinder.org/v2/gh/ipython/ipython-in-depth/master?filepath=binder/Index.ipynb&quot;&gt;Jupyter Notebook&lt;/a&gt; server.&lt;/p&gt;
&lt;p&gt;As explained in Fred Passeron’s earlier post, &lt;a href=&quot;/blog/jupyter-saved-my-day&quot;&gt;Jupyter saved my day&lt;/a&gt;, for our on-demand workshops the notebooks contain simple pieces of Python or PowerShell code to interact with the different APIs available in the HPE portfolio. All instructions are provided in markdown format. We centralize the different notebooks on a single JupyterHub server.&lt;/p&gt;
&lt;p&gt;When you click on the &lt;strong&gt;Start Workshop&lt;/strong&gt; button found in your second email, it will bring you to a &lt;strong&gt;Sign In&lt;/strong&gt; page where you will log into the workshop with your StudentID and the password provided in your second email.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/7/workshop-screen-1-3-1600260638100.png&quot; alt=&quot;workshop screen 1 (3)&quot;&gt;&lt;/p&gt;
&lt;p&gt;Once you log in, open the workshop folder on the left by double-clicking on it.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/7/workshop-screen-2-1600260642513.png&quot; alt=&quot;workshop screen 2&quot;&gt;&lt;/p&gt;
&lt;p&gt;Each notebook generally has several sections. Start with the &lt;strong&gt;Read Me First&lt;/strong&gt; and follow the instructions from there.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/7/workshop-screen-3-1600260646340.png&quot; alt=&quot;workshop screen 3&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/7/read-me-first-page-1600260610550.png&quot; alt=&quot;read me first page&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Don’t forget to save your work&lt;/h2&gt;
&lt;p&gt;One hour prior to the end of the 4-hour period, you will receive an email reminding you that your session is coming to a close and that you should download the workshop notebook if you anticipate using it in the future. At the end of every session, the environment is cleaned up automatically, so be sure to save your work.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/7/one-hr-warning-email-1600260622217.png&quot; alt=&quot;one hr warning email&quot;&gt;&lt;/p&gt;
&lt;p&gt;At the end of the workshop, you will receive a final email indicating that the workshop is over. In the email, you will also be asked to take a short survey. The results of this survey will help us improve how we offer the workshops-on-demand in the future. Your feedback is very important in helping us meet your needs, so we encourage you to take just a few minutes to fill it out.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/7/end-workshop-survey-1600260589980.png&quot; alt=&quot;end workshop survey&quot;&gt;&lt;/p&gt;
&lt;h2&gt;How to get help&lt;/h2&gt;
&lt;p&gt;To receive SME assistance through our &lt;a href=&quot;https://hpedev.slack.com/archives/C01B60X8SSD&quot;&gt;workshop Slack channel&lt;/a&gt;, make sure you join the &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;workspace&lt;/a&gt;. We staff the channel to answer questions between 4 am and 4 pm EST Monday through Friday. You may want to schedule your course during those hours if you think you’ll have questions or need additional help.&lt;/p&gt;
&lt;p&gt;There will be a limited number of seats available. These seats will be filled on a first-come, first-served basis. We look forward to the opportunity of offering these workshops to you. Remember to &lt;a href=&quot;/hackshack/workshops&quot;&gt;check the Hack Shack workshops library&lt;/a&gt; regularly for any further updates.&lt;/p&gt;
&lt;p&gt;Even if you are not a hard-core developer, you may have heard about the open source search and analytics engine ELK. The acronym ELK stands for &lt;a href=&quot;https://www.elastic.co/home&quot;&gt;Elasticsearch + Logstash + Kibana&lt;/a&gt;. Though many have heard the term ELK, some have never really had any hands-on experience with it. This was the case with me until it was decided the HPE DEV team would offer software code challenges during our recent HPE Discover Virtual Experience.&lt;/p&gt;
&lt;p&gt;The team was faced with the difficult task of having to monitor all the registrations that came in for these challenges over the course of the multi-week event, as well as identify those who finished them and when. Since the HPE Discover Virtual Experience was a worldwide event, registrations for any of our four code challenges could come in at any time from anywhere on Earth. I decided to put ELK to the test to see if it could help us figure this out.&lt;/p&gt;
&lt;h2&gt;What is ELK?&lt;/h2&gt;
&lt;p&gt;Elasticsearch is a highly scalable, distributed, open source search and analytics engine used for all types of structured and unstructured data, including textual, numerical, and geospatial data. Kibana is an open source data visualization dashboard created for Elasticsearch. Logstash, also part of the stack, is a server‑side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a &quot;stash&quot; like Elasticsearch.&lt;/p&gt;
&lt;p&gt;Elasticsearch started as an open source project and, like many open source projects, grew exponentially from a few downloads in 2010 to 250 million downloads in 2018. Out of this open source maelstrom came additional software components, such as Logstash and Kibana, which now complete this very popular software stack. The Elastic business model adds commercial software on top of the open source ELK Stack.&lt;/p&gt;
&lt;p&gt;To support those attempting our code challenges, we initially set up a Slack Channel on our Slack workspace monitored by enough team members to cover all time zones. The registration process was designed to automatically spawn a dedicated notebook environment on the JupyterHub server in our datacenter and send emails to customers to get them connected so they could start working on their challenge.&lt;/p&gt;
&lt;p&gt;While all data relative to the registration process was stored in a database, we didn’t have time to build a front-end application to manage this information, so not all support staff had access to it. We were left with the question: how would each of us know if there were one or more challenges going on at any point in time? We needed some sort of dashboard to display this information that all team members could quickly look at, from anywhere, at any time.&lt;/p&gt;
&lt;h2&gt;Keep it simple&lt;/h2&gt;
&lt;p&gt;Being tasked with building a dashboard meant I could finally get my hands dirty with ELK. Within a few minutes, I had both &lt;a href=&quot;https://www.elastic.co/guide/en/elasticsearch/reference/7.9/install-elasticsearch.html&quot;&gt;Elasticsearch&lt;/a&gt; and &lt;a href=&quot;https://www.elastic.co/guide/en/kibana/7.9/install.html&quot;&gt;Kibana&lt;/a&gt; installed on the Linux virtual machine on which our registration process was hosted. It was easy to do, as I used the Elastic website, which has extremely precise installation instructions for both Elasticsearch and Kibana. Though I had an Elasticsearch instance, its store was empty and I still needed to feed data to it. I needed to get the right data stored there before I could build a Kibana dashboard.&lt;/p&gt;
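&lt;p&gt;Once both packages are installed and started, a quick sanity check (assuming a package-based install and the default ports) is to hit each service locally:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Elasticsearch answers on port 9200 with basic cluster information
curl -s http://localhost:9200
# Kibana answers on port 5601; /api/status reports its health
curl -s http://localhost:5601/api/status
&lt;/code&gt;&lt;/pre&gt;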
&lt;h2&gt;There is an API? Then yes, we can do it&lt;/h2&gt;
&lt;p&gt;The only way to interact with Elasticsearch is through its REST API, which listens on port 9200 by default. If working with REST APIs is new to you, please take a look at my article on &lt;a href=&quot;/blog/understanding-api-basics-and-the-value-they-provide&quot;&gt;understanding the basic principles of REST APIs&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;My colleague Pramod, who had built the registration process application, was smart enough to provide a REST API to its backend application. This enabled me to retrieve from the database:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Customer details&lt;/li&gt;
&lt;li&gt;Challenge details&lt;/li&gt;
&lt;li&gt;Booking details&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The API returned JSON. Elasticsearch also uses JSON, so it all looked too easy, giving the impression that I only needed:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;One REST API call to retrieve customer data from the registration application&lt;/li&gt;
&lt;li&gt;Another API call to upload this data into Elasticsearch&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;NDJSON?&lt;/h2&gt;
&lt;p&gt;It turns out that it wasn’t quite as easy as that. In order to bulk upload data into Elasticsearch, you have to feed it not JSON but NDJSON, which I had never heard of. NDJSON stands for Newline Delimited JSON. So, it looked like a little &lt;strong&gt;data massaging&lt;/strong&gt; exercise was needed.&lt;/p&gt;
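&lt;p&gt;To make the bulk format concrete, here is a minimal sketch of a &lt;code&gt;customers.json&lt;/code&gt; file in NDJSON form: each document is preceded by an action line telling Elasticsearch which index it belongs to, and every line is a complete JSON object. The field names below are purely illustrative, not the actual fields from our registration database:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Write an illustrative NDJSON bulk file (action line + document line per customer)
cat &amp;#x3C;&amp;#x3C;&apos;EOF&apos; &gt; customers.json
{ &quot;index&quot;: { &quot;_index&quot;: &quot;customers&quot; } }
{ &quot;email&quot;: &quot;jane@example.com&quot;, &quot;challenge&quot;: &quot;hpe-oneview&quot;, &quot;startTime&quot;: &quot;2020-06-23T09:00:00Z&quot; }
{ &quot;index&quot;: { &quot;_index&quot;: &quot;customers&quot; } }
{ &quot;email&quot;: &quot;sam@example.com&quot;, &quot;challenge&quot;: &quot;redfish&quot;, &quot;startTime&quot;: &quot;2020-06-23T10:30:00Z&quot; }
EOF
&lt;/code&gt;&lt;/pre&gt;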
&lt;p&gt;So, my new algorithm was:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;GET JSON customer data from registration App customer API&lt;/li&gt;
&lt;li&gt;Prepare JSON data for bulk load&lt;/li&gt;
&lt;li&gt;POST NDJSON data to Elasticsearch customer index in bulk mode&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Trial and error&lt;/h2&gt;
&lt;p&gt;I experimented with the Elasticsearch REST API for a few hours in order to create a customer index and populate data for this index. I exclusively used the following single call to get this done:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;curl -H &quot;Content-Type:application/x-ndjson&quot; -X POST &quot;http://localhost:9200/customers/_bulk&quot; --data-binary @customers.json 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When things went wrong with the format of my NDJSON file, I deleted my index and restarted fresh using:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;curl -H &quot;Content-Type:application/x-ndjson&quot; -X DELETE &quot;http://localhost:9200/customers&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;There are a lot of great articles available to help with formatting the data for bulk import, but &lt;a href=&quot;https://www.elastic.co/guide/en/elasticsearch/reference/7.9/docs-bulk.html&quot;&gt;this particular one&lt;/a&gt; was of great help to me.&lt;/p&gt;
&lt;p&gt;I found that calling our own application to retrieve customer details was easy enough. I finally had all the individual pieces nailed down. All I needed next was to write a script to orchestrate all of this.&lt;/p&gt;
&lt;h2&gt;My worst nightmare: Regular Expressions&lt;/h2&gt;
&lt;p&gt;I know there are lots of programming languages that can perform powerful string manipulations. Specifically, I wanted something quick (like bash?), which didn’t require anything else installed on the system (like bash?), so I decided to manipulate the JSON file using a simple sed command in a bash (yes bash!) script. My regular expressions were a little rusty, so I struggled a bit, but I got help from a hard-core Linux colleague (who prefers to remain anonymous, but writes lots of great Redfish blog posts for us). His motto is “I can do anything with a RegEx!”. So finally, I managed to turn the input JSON data into ready-to-be-imported NDJSON.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: It did require five successive sed commands, not just a single one as anticipated.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;That was it! I now had a script that would load the latest customer data into my Elasticsearch instance. I added this script to the crontab on our server so it ran every 15 minutes, and I was now ready to move on to building the dashboard.&lt;/p&gt;
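&lt;p&gt;For anyone who wants to reproduce this, here is a minimal sketch of what such a collection script could look like. The registration API endpoint is hypothetical, and the sketch uses jq for the JSON-to-NDJSON massaging instead of the five sed commands I actually ended up with, purely to keep the example short:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;#!/bin/bash
# Hypothetical registration API endpoint returning a JSON array of customers
REG_API=&quot;http://localhost:3000/api/customers&quot;
ES=&quot;http://localhost:9200&quot;

# 1. GET the customer data from the registration application
curl -s &quot;$REG_API&quot; -o /tmp/customers-raw.json

# 2. Turn the JSON array into NDJSON: one action line plus one document line per customer
jq -c &apos;.[] | ({ &quot;index&quot;: { &quot;_index&quot;: &quot;customers&quot; } }, .)&apos; /tmp/customers-raw.json &gt; /tmp/customers.json

# 3. Recreate the index and bulk load the fresh data
curl -s -X DELETE &quot;$ES/customers&quot;
curl -s -H &quot;Content-Type:application/x-ndjson&quot; -X POST &quot;$ES/customers/_bulk&quot; --data-binary @/tmp/customers.json
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;A crontab entry such as &lt;code&gt;*/15 * * * * /usr/local/bin/load-customers.sh&lt;/code&gt; (the path being just an example) then refreshes the index every 15 minutes.&lt;/p&gt;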
&lt;h2&gt;Simple yet secure&lt;/h2&gt;
&lt;p&gt;Kibana has a web interface that, by default, starts on port 5601. However, for a more secure approach, we used nginx (which was already installed on our server) as a reverse proxy for Kibana. I found a great article that helped me handle this configuration.&lt;/p&gt;
&lt;p&gt;The steps involve manipulating nginx configuration files under /etc/nginx/sites-available to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Create a kibanaadmin user using basic authentication (see the example below)&lt;/li&gt;
&lt;li&gt;Create a reverse proxy from our system port 8080 to Kibana 5601&lt;/li&gt;
&lt;li&gt;Enable SSL&lt;/li&gt;
&lt;/ul&gt;
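&lt;p&gt;For the first step, a minimal sketch (assuming openssl is available on the server and the htpasswd.users path used in the config below) looks like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Create the kibanaadmin user for nginx basic authentication
sudo sh -c &quot;echo -n &apos;kibanaadmin:&apos; &gt;&gt; /etc/nginx/htpasswd.users&quot;
# Prompts for a password and appends an htpasswd-compatible (apr1) hash
sudo sh -c &quot;openssl passwd -apr1 &gt;&gt; /etc/nginx/htpasswd.users&quot;
&lt;/code&gt;&lt;/pre&gt;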
&lt;p&gt;Our config file looked like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;server {

    listen [::]:8080 ssl ipv6only=on; # managed by Certbot
    listen 8080 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/ourservernamegoeshere/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/ourservernamegoeshere/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    server_name ourservernamegoeshere.hpedev.io;

    auth_basic &quot;Restricted Access&quot;;
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection &apos;upgrade&apos;;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After restarting nginx, the team could log into our Kibana web interface from anywhere and start building shiny dashboards.&lt;/p&gt;
&lt;h2&gt;It’s dashboard time!&lt;/h2&gt;
&lt;p&gt;Kibana dashboards are made of individual tiles called visualizations, and Kibana offers lots of different visualization widgets.&lt;/p&gt;
&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/7/elk-1-1600073613043.png&quot; alt=&quot;Kibana virtualization options&quot;&gt;
&lt;p&gt;As you can see, there are many types of visualization tools you can use and configure using the fields found in your index. In our case, we had the following data available in our customers’ index:&lt;/p&gt;
&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/7/elk-2-1600073620575.png&quot; alt=&quot;ElasticSearch fields showing up in Kibana&quot;&gt;
&lt;p&gt;I decided to keep it simple and visualise the following data points:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Total number of active challenges&lt;/li&gt;
&lt;li&gt;Active challenges by type&lt;/li&gt;
&lt;li&gt;Total customers who registered&lt;/li&gt;
&lt;li&gt;Total challenges that were taken&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In no time, I was able to get my first version of a challenge dashboard.&lt;/p&gt;
&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/7/elk-3-1600114029770.png&quot; alt=&quot;Kibana dashboard&quot;&gt;
&lt;p&gt;It was a good start, but I also wanted to know when the active challenges had started. So, I decided to add another visualisation to show the start time in a Date Histogram, as shown below.&lt;/p&gt;
&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/7/elk-4-1600114050511.png&quot; alt=&quot;Kibana dashboard&quot;&gt;
&lt;p&gt;This looked really nice because, not only could we immediately see if there were challenges in flight, but we could also check what happened during the previous night, for example.&lt;/p&gt;
&lt;h2&gt;There is an API? Then yes, we can do it… again&lt;/h2&gt;
&lt;p&gt;Because our challenge participants had to use Github to collect the source material for the challenges and open pull requests (PR) to submit their responses, the next thing that we wanted to visualise was activity on the GitHub repos. But wait, GitHub has an API right?&lt;/p&gt;
&lt;p&gt;Absolutely! So, I went back to the data collection script, and now that I knew about NDJSON and regex, I was able to collect data for each of our GitHub repos in no time using the GitHub API:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Number of Forks&lt;/li&gt;
&lt;li&gt;Number of Pull Requests&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I created a new index in Elasticsearch for GitHub data and updated my script to load the GitHub data as I had done previously (more sed and regex). Finally, I created a new tile to visualize the GitHub data.&lt;/p&gt;
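&lt;p&gt;If you want to do something similar, both data points can be read from GitHub&apos;s public REST API (unauthenticated calls work for public repositories, within rate limits). A rough sketch, with a placeholder repository name:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;OWNER_REPO=&quot;HewlettPackard/hpe-notebooks&quot;   # placeholder; point this at your own repo

# The fork count is part of the repository object
curl -s &quot;https://api.github.com/repos/$OWNER_REPO&quot; | jq &apos;.forks_count&apos;

# Count open pull requests by listing them (first page only; paginate for large repos)
curl -s &quot;https://api.github.com/repos/$OWNER_REPO/pulls?state=open&quot; | jq length
&lt;/code&gt;&lt;/pre&gt;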
&lt;p&gt;Et voilà:
&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/7/elk-5-1600114070320.png&quot; alt=&quot;Kibana dashboard&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Ready to drill down?&lt;/h2&gt;
&lt;p&gt;There were so many things we could have added to our dashboard; it seemed limitless. But I discovered that you could actually drill down by right-clicking on certain tiles. So, I decided to implement drill down to display the emails of our participants. That was a nice enhancement as it allowed us to immediately know who was participating and be in a position to provide support. Again, even that was pretty easy and smooth, even for a newbie in Kibana.&lt;/p&gt;
&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/7/elk-6-1600073644276.png&quot; alt=&quot;Kibana drill down&quot;&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;I’ve only scratched the surface of what you can do with the ELK Stack in this post. And this was a result of spending just a few hours on it only using the community part of the stack. There is a lot more to the ELK Stack, both in terms of data that we could feed in (complete system monitoring using Beats/Logstash for example) and Kibana widgets we can use to display data in various forms. Give it a try and, like me, you may get instantaneously hooked.  Don’t forget to check back on our &lt;a href=&quot;/blog&quot;&gt;HPE DEV blog&lt;/a&gt; for more tutorials on interesting open source topics.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[All HPE OneView Ecosystem SDKs now support OneView 5.3 automation]]></title><description><![CDATA[With our newest update, all of the HPE OneView Ecosystem SDKs now support HPE OneView 5.3  (REST API version 1800) automation features. The…]]></description><link>https://developer.hpe.com/all-hpe-oneview-ecosystem-sdks-now-support-oneview-53-automation/</link><guid isPermaLink="false">https://developer.hpe.com/all-hpe-oneview-ecosystem-sdks-now-support-oneview-53-automation/</guid><pubDate>Fri, 04 Sep 2020 06:50:36 GMT</pubDate><content:encoded>&lt;p&gt;With our newest update, all of the HPE OneView Ecosystem SDKs now support &lt;a href=&quot;https://www.hpe.com/us/en/integrated-systems/software.html?chatsrc=ot-en&amp;#x26;jumpid=ps_ixkqsmug5a_aid-520023673&amp;#x26;gclid=EAIaIQobChMIpPL8tqD16QIVUfDACh0i3g6WEAAYASAAEgLJ-_D_BwE&amp;#x26;gclsrc=aw.ds&quot;&gt;HPE OneView 5.3&lt;/a&gt;  (REST API version 1800) automation features. The HPE OneView Ecosystem actively supports &lt;a href=&quot;https://github.com/HewlettPackard/oneview-ansible&quot;&gt;Ansible&lt;/a&gt;, &lt;a href=&quot;https://github.com/HewlettPackard/oneview-python&quot;&gt;Python&lt;/a&gt;, &lt;a href=&quot;https://github.com/HewlettPackard/oneview-golang&quot;&gt;Golang&lt;/a&gt;, &lt;a href=&quot;https://github.com/HewlettPackard/terraform-provider-oneview/releases/tag/v1.3.0&quot;&gt;Terraform&lt;/a&gt;, &lt;a href=&quot;https://github.com/HewlettPackard/oneview-chef&quot;&gt;Chef&lt;/a&gt;, &lt;a href=&quot;https://github.com/HewlettPackard/oneview-puppet&quot;&gt;Puppet&lt;/a&gt;, &lt;a href=&quot;https://github.com/HewlettPackard/POSH-HPOneView&quot;&gt;PowerShell&lt;/a&gt;, and &lt;a href=&quot;https://github.com/HewlettPackard/oneview-sdk-ruby&quot;&gt;Ruby&lt;/a&gt; SDKs.  The updated SDKs allow IT organizations to leverage the latest HPE OneView features and provide the ability to automate software-defined infrastructure from core to cloud. With such a diverse partner ecosystem, IT organizations benefit from being able to integrate HPE OneView within their preferred platform&apos;s existing management frameworks.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/integrated-systems/software.html&quot;&gt;HPE OneView&lt;/a&gt;  uses software-defined intelligence via a template-driven approach to automate the deployment, provisioning, updating, and integration of resources, such as compute, storage, and networking infrastructure. Designed with a modern, standards-based API, HPE OneView enables IT administrators to deploy and update using only a single line of code. This makes composing new infrastructure not only faster and more agile, but also more predictable.&lt;/p&gt;
&lt;p&gt;HPE offers SDKs for industry-leading software deployment, provisioning, and configuration management tools, including Ansible, Terraform, Chef, and Puppet. HPE also provides API language support for Python, Ruby, Golang, and PowerShell languages. Developers can therefore easily build integrations, custom automations and scalable solutions. The SDKs allow for integration with cloud-based platforms to automatically provision and configure new machines, enabling administrators to create a resource topology similar to that of a public cloud on their own physical infrastructure.&lt;/p&gt;
&lt;p&gt;To simplify installation and maintenance for customers with container environments, HPE OneView 5.3 SDKs are available as Docker images. &lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-ansible&quot;&gt;Ansible&lt;/a&gt;, &lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-terraform&quot;&gt;Terraform&lt;/a&gt;, &lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-chef&quot;&gt;Chef&lt;/a&gt;, &lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-puppet&quot;&gt;Puppet&lt;/a&gt;, &lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-python&quot;&gt;Python&lt;/a&gt;, &lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-golang&quot;&gt;Golang&lt;/a&gt;, and &lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-ruby&quot;&gt;Ruby&lt;/a&gt; SDKs are now all available on Docker Hub. All prerequisite materials are incorporated into the container images to enable streamlined deployment, which will simplify maintenance, improve infrastructure agility, and reduce costs.&lt;/p&gt;
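&lt;p&gt;As a quick illustration, pulling one of these images from Docker Hub is a one-liner (the tag shown here is only an example; check the repository&apos;s Tags page on Docker Hub for the version you need):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Example pull of the HPE OneView SDK for Python image (tag is illustrative)
docker pull hewlettpackardenterprise/hpe-oneview-sdk-for-python:v5.3.0
&lt;/code&gt;&lt;/pre&gt;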
&lt;h2&gt;For more information:&lt;/h2&gt;
&lt;p&gt;HPE DEV OneView: &lt;a href=&quot;https://developer.hpe.com/platform/hpe-oneview/home&quot;&gt;https://developer.hpe.com/platform/hpe-oneview/home&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Ansible GitHub release:  &lt;a href=&quot;https://github.com/HewlettPackard/oneview-ansible/releases/tag/v5.7.0&quot;&gt;https://github.com/HewlettPackard/oneview-ansible/releases/tag/v5.7.0&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Ansible Docker Hub: &lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-ansible&quot;&gt;HPE OneView SDK Docker Image for Ansible&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Terraform GitHub Release: &lt;a href=&quot;https://github.com/HewlettPackard/terraform-provider-oneview/releases/tag/v1.4.0&quot;&gt;https://github.com/HewlettPackard/terraform-provider-oneview/releases/tag/v1.4.0&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Terraform Docker Hub: &lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-terraform&quot;&gt;HPE OneView SDK Docker Image for Terraform&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Chef GitHub Release: &lt;a href=&quot;https://github.com/HewlettPackard/oneview-chef/releases/tag/v3.5.0&quot;&gt;https://github.com/HewlettPackard/oneview-chef/releases/tag/v3.5.0&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Chef Docker Hub: &lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-chef&quot;&gt;HPE OneView SDK Docker Image for Chef&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Puppet GitHub Release: &lt;a href=&quot;https://github.com/HewlettPackard/oneview-puppet/releases/tag/v2.7.0&quot;&gt;https://github.com/HewlettPackard/oneview-puppet/releases/tag/v2.7.0&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Puppet Docker Hub: &lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-puppet&quot;&gt;HPE OneView SDK Docker Image for Puppet&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Python GitHub Release:  &lt;a href=&quot;https://github.com/HewlettPackard/oneview-python/releases/tag/v5.3.0&quot;&gt;https://github.com/HewlettPackard/oneview-python/releases/tag/v5.3.0&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Python Docker Hub: &lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-python&quot;&gt;HPE OneView SDK Docker Image for Python&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Golang GitHub Release:   &lt;a href=&quot;https://github.com/HewlettPackard/oneview-golang/releases/tag/v1.4.0&quot;&gt;https://github.com/HewlettPackard/oneview-golang/releases/tag/v1.5.0&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Golang Docker Hub: &lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-golang&quot;&gt;HPE OneView SDK Docker Image for Golang&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Ruby GitHub release: &lt;a href=&quot;https://github.com/HewlettPackard/oneview-sdk-ruby/releases/tag/v5.15.0&quot;&gt;https://github.com/HewlettPackard/oneview-sdk-ruby/releases/tag/v5.15.0&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Ruby Docker Hub: &lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-ruby&quot;&gt;HPE OneView SDK Docker Image for Ruby&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;PowerShell  GitHub release: &lt;a href=&quot;https://github.com/HewlettPackard/POSH-HPEOneView/releases/tag/v5.30.2515.1313&quot;&gt;https://github.com/HewlettPackard/POSH-HPEOneView/releases/tag/v5.30.2515.1313&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Orchestrating containers - Newsletter]]></title><link>https://developer.hpe.com/2020-September-01/</link><guid isPermaLink="false">https://developer.hpe.com/2020-September-01/</guid><pubDate>Tue, 01 Sep 2020 05:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[HPE DEV Technology Workshops: To infinity and beyond! ]]></title><description><![CDATA[didier1 In an earlier blog post, one of our team members, Frederic Passeron, discussed how Jupyter Notebooks saved the day as we…]]></description><link>https://developer.hpe.com/hpe-dev-technology-workshops-to-infinity-and-beyond/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-dev-technology-workshops-to-infinity-and-beyond/</guid><pubDate>Fri, 28 Aug 2020 14:36:25 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/7/didier1-1598625790616.jpg&quot; alt=&quot;didier1&quot;&gt;&lt;/p&gt;
&lt;p&gt;In an earlier &lt;a href=&quot;/blog/jupyter-saved-my-day&quot;&gt;blog post&lt;/a&gt;, one of our team members, &lt;a href=&quot;https://twitter.com/FredPasseron&quot;&gt;Frederic Passeron&lt;/a&gt;, discussed how Jupyter Notebooks saved the day as we transitioned from traditional face-to-face events to virtual events. As a matter of fact, with the adoption of this open source technology, our team has been able to deliver over 60 live, hands-on workshops to our developer audience since going virtual during this pandemic.&lt;/p&gt;
&lt;p&gt;We first offered this hands-on experience at technology events sponsored by Hewlett Packard Enterprise (HPE), such as the Technology and Solutions Summit (TSS), ASPIRE, and finally the HPE Discover Virtual Experience event this summer. Based on the feedback we’ve received from participants, these workshops were highly successful and proved to work well for remote, virtual environments.&lt;br&gt;
&lt;br/&gt;
Their success indicates that Jupyter Notebook-based workshops should have a life that extends beyond the individual events for which they were created. As such, we have made a number of decisions relative to these workshops and how they’ll continue to be made available to the HPE Developer Community going forward. Here&apos;s what we have decided to do:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Keep the virtual, online &lt;a href=&quot;/hackshack&quot;&gt;Hack Shack&lt;/a&gt; open between events&lt;/li&gt;
&lt;li&gt;Create workshops that can be deployed on-demand&lt;/li&gt;
&lt;li&gt;Maintain a library of Jupyter Notebooks that have been developed for these workshops&lt;/li&gt;
&lt;/ul&gt;
&lt;br/&gt;
&lt;p&gt;Because people were only just starting to learn about our virtual workshops, we saw that there was an important need to keep &lt;a href=&quot;/hackshack&quot;&gt;the HPE Hack Shack&lt;/a&gt; open between the events we support. We will continue to customise and roll it out at upcoming venues, as we just did for KubeCon EMEA and plan to do for KubeCon NA, but between events the experience will remain in place online, offering workshop replays, code challenges, an arcade section, and more.&lt;/p&gt;
&lt;p&gt;We also determined that it would be important to make our workshops available on-demand. On-demand versions will start to be rolled out this fall.  Stay tuned for an announcement of the pilot program.&lt;/p&gt;
&lt;p&gt;Finally, we decided to create a home for the Jupyter Notebooks we developed for these recent events and those we plan on developing in the future. We created a &lt;a href=&quot;https://github.com/HewlettPackard/hpe-notebooks&quot;&gt;GitHub repo&lt;/a&gt; to house these Jupyter Notebooks and a process by which one can submit them (through Pull Requests and following &lt;a href=&quot;https://github.com/HewlettPackard/hpe-notebooks/blob/master/CONTRIBUTING.md&quot;&gt;these instructions&lt;/a&gt;), so the entire community can contribute their own notebooks.&lt;/p&gt;
&lt;p&gt;Our team will be responsible for maintaining this repo, which we hope will grow rapidly (though maybe not exactly to infinity). Feel free to &lt;a href=&quot;https://github.com/HewlettPackard/hpe-notebooks&quot;&gt;take a look&lt;/a&gt;. Don’t hesitate to use these with your own teams. We welcome any feedback you receive regarding them so we can work to improve them for the future.  Finally, please share some of your Jupyter Notebooks with us. It’s a great way to contribute to the community. After all, &lt;a href=&quot;https://developer.hpe.com/community&quot;&gt;HPE DEV&lt;/a&gt; is all about &lt;strong&gt;sharing&lt;/strong&gt;, &lt;strong&gt;communicating&lt;/strong&gt; and &lt;strong&gt;collaborating&lt;/strong&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Software development reset – A remote DevOps epiphany]]></title><description><![CDATA[e1 No aspect of society seems able to escape being altered by the tragic outbreak of COVID-19. The now well-known slogan “Stay-at-Home” has…]]></description><link>https://developer.hpe.com/software-development-reset-a-remote-devops-epiphany/</link><guid isPermaLink="false">https://developer.hpe.com/software-development-reset-a-remote-devops-epiphany/</guid><pubDate>Wed, 26 Aug 2020 19:55:02 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/7/e1-1598484223320.jpg&quot; alt=&quot;e1&quot;&gt;&lt;/p&gt;
&lt;p&gt;No aspect of society seems able to escape being altered by the tragic outbreak of COVID-19. The now well-known slogan “Stay-at-Home” has given rise to a future where working from home has become the &lt;a href=&quot;https://www.gartner.com/en/newsroom/press-releases/2020-04-03-gartner-cfo-surey-reveals-74-percent-of-organizations-to-shift-some-employees-to-remote-work-permanently2&quot;&gt;norm rather than the exception&lt;/a&gt;. Even the IT sector has felt its impact. New ways of working and collaborating are being quickly deployed and implemented. Many organizations are learning they need to re-engineer business and IT processes and how they are consumed in order to adapt.&lt;/p&gt;
&lt;p&gt;Within many software development environments, software design, creation, and testing are done on-premises, behind the corporate firewall, with operations and development working within a physical office environment. As organisations move towards adopting DevOps work practices, these principles of DevOps are being applied within the four walls of an office environment. Now, with the new reality of working from home, &lt;strong&gt;remote&lt;/strong&gt; DevOps practices need to be looked at to reduce the development cycle through continuous integration and delivery.&lt;/p&gt;
&lt;h2&gt;The pandemic’s influence on DevOps&lt;/h2&gt;
&lt;p&gt;I believe, as a consequence of this dreadful crisis, the need to produce more web-enabled products and services at speed will become a priority for many more organisations. Hence, embracing DevOps processes, strategies, and a cultural ethos in the remote working environment can ultimately safeguard companies going forward by harmonizing remote agile collaboration and automation tools and practices throughout their development environments.&lt;/p&gt;
&lt;p&gt;This end-to-end process can bring together users, developers, and IT operations through the application of continuous integration, delivery and testing processes within automation pipelines. A well-defined structure for remote work, paired with good remote tools, can ensure that development teams’ productivity remains high, speed is embraced, and the quality of code is maintained through static and dynamic code analysis tools.&lt;/p&gt;
&lt;h2&gt;What remote working means for developers&lt;/h2&gt;
&lt;p&gt;Essentially, working from home, whether you are a developer or not, means that you do not need to travel to a traditional office space and you can do your work at home. How you do it depends on your preference and your organisational culture. Some organisations may insist on some sort of oversight and may employ some sort of tracking software, so they can make sure the developer is “punching in and out” at corporate times. Working from home requires developers to have some personal traits to guarantee effectiveness, such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Initiative&lt;/strong&gt; - Most developers I have come across don’t enjoy constant supervision, hence those that can start and work through a project on their own fare well remotely.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Discipline&lt;/strong&gt; - This is often a key failing in developers, where coding is never complete as there is always a better way of doing it, improving it, etc. The willpower to resist this inclination and keep your eyes on the ball is the key to working remotely.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;A dedicated space (not your bedroom)&lt;/strong&gt; - Developers, like all remote workers, need a place to work in that is separate from social home activities. Such a place provides the mindset that, when you are there, you know you should be working and not procrastinating.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;A willingness to improve communications skills&lt;/strong&gt; - Continuous communications with colleagues, bosses, customers, etc. will ensure that developers become trusted partners despite not working in the same room together. This inevitably involves having clear and constructive conversations with users, which underpins seamless software delivery.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Remote DevOps and user stories&lt;/h2&gt;
&lt;p&gt;Most developers know &lt;em&gt;user stories&lt;/em&gt; (epics) are used as the basis for estimating, planning, and understanding whether value is/was delivered to customers. This is a key component to an effective DevOps strategy. It breaks large tasks down into smaller chunks that deliver value independently to make the work easier to visualize and understand. The potential benefits are obvious, as meaningful segments of work can be accomplished simultaneously within an inherent customer focus.&lt;/p&gt;
&lt;p&gt;Remote virtual work should not decrease the efficiency of good user stories (even though a physical story card is not on constant display) since digital tickets are virtually available. For instance, use a tool like Miro&lt;sup&gt;©&lt;/sup&gt; online, a collaborative whiteboard used to create virtual Post-it notes with the basic 3-C concept of card, conversation and confirmation – writing on the virtual Post-it &lt;code&gt;as a ‘user type’, I want / need ‘goal’ so that I can accomplish ‘justification/business value’&lt;/code&gt;. (Of course, don’t forget the context of this Post-it.)&lt;/p&gt;
&lt;p&gt;The brevity will ensure that greater emphasis is put on the conversations between the various stakeholders. It’s only once the conversation is complete that formal confirmation happens and there is mutual agreement on what will be delivered and when.&lt;/p&gt;
&lt;h2&gt;Developer virtual myths&lt;/h2&gt;
&lt;p&gt;The new normal of virtual working may pressure some developers into a quarantine-style existence and personal traits will push individual developers differently. It is therefore important to dispel any remote working folklore myths and legends that some associate with virtual working, namely:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Working remotely means I am isolated and on my own.&lt;/strong&gt; To some developers, this is actually bliss (i.e. the much renowned hacker in his bedroom), as it may allow them to focus without office distractions. However, an effective developer is not an island and hence needs to collaborate continuously with his peers, customers and partners. Developers do not necessarily have to physically see people to avoid the feeling of loneliness or isolation. There are a number of virtual spaces (like Miro or Skype for example) that provide that real collaborative feeling of catching-up with colleagues.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;As a junior member of the development team, you can’t work alone.&lt;/strong&gt; This is so untrue that it is unbelievable that some developers feel this way. This is not a failing of junior developers, but of the organisation that hired them, when the organisation does not have the correct processes in place to support them. This can include, for example, a lack of effective feedback, inadequate onboarding, and insufficient senior developer mentoring, to name a few.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Developers lose or misunderstand company culture.&lt;/strong&gt; Office culture evolves over time. Culture can include ethics, values, integrity, principles, and reliability. The key to remote culture assimilation is to focus on how the organization uses its remote channels to convey what is expected.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Developers cannot build personal relationships if they don’t meet.&lt;/strong&gt; Millennials have come to terms with the nature of friendship over social media. This should be no different for developers when working remotely. Using virtual tools to communicate can assist developers in having more direct technical conversations online when needed, as a developer is able to produce a considered response to an issue and not respond just because someone is standing in front of you.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Developer collaboration is reduced through working remotely.&lt;/strong&gt; For developer teams that are not accustomed to remote work, shifting to a virtual world just requires revisiting approaches to collaboration, communication, security, quality control, individual and team performance, work assignment, compliance monitoring, and governance.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Three steps to get to remote DevOps&lt;/h2&gt;
&lt;p&gt;I suggest that the following three basic remote DevOps principles need to be considered when approaching DevOps remotely:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Remote teamwork is essential.&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Sitting down with a colleague at their desk is not possible. You will need to use remote communications tools to chat and collaborate. But, in this remote world, it is essential to explain the problem comprehensively with well thought-out descriptions and documentation. Eye contact and body language will not be the key to understanding – documentation will be. At last, developers will be forced to document their software.&lt;br&gt;
Obviously, remote working provides better work-life balance in terms of the flexibility of work and when to work in a 24hr period. However, it will mean that someone working from home cannot be an island and needs to be integrated fully into the corporate ethos and standards through effective communications and contact.  Additionally, corporate security policies need to be maintained end to end through effective security tools, encryption and policies. For example, through HPE’s GitHub repository, teams can share code securely with appropriate fine-grained access controls and effective two-factor authentication.&lt;/p&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;&lt;strong&gt;Corporate continuous integration within a remote working environment is mandatory.&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Developers writing code from home, like the young hackers in their bedrooms, require secure access to the corporate environment so that they can spin up a container, run their code, and start testing. Within a CI pipeline, automated services will take that code, spin up a container and run all the needed tests and standards (formats). This pipeline will provide the results back to the users in seconds, provide a repository (like HPE’s GitHub) for all code, all in an automated way.&lt;br&gt;
Effective CI/CD pipelines are outcomes of the DevOps world and, as such, are critical in virtual work as well. There are many CI/CD tools (e.g., Jenkins) that can assist in this endeavour. This brings us to integration needs, which are even more fundamental in the virtual world. To help, DevOps apps such as GitHub can be the best place for remote co-workers to share code. However, there will also be a need to use cloud resources to run software containers across clusters. This is where Kubernetes provides the building blocks for PaaS services.&lt;/p&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;&lt;strong&gt;Corporate continuous deployment within the remote working environment is also mandatory.&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;This is so code can be automatically merged safely into the master production environment by wrapping the code in a bundle, creating a new container, and deploying it. It also ensures that IT production and system administration staff (who are also working remotely) can instantly view it and support it.&lt;br&gt;
Today’s new normal still requires continuous software delivery, which is demanded by the ever increasing virtual world of apps. Gone are the days when software deployment was at the behest of long-winded and time-consuming IT processes. Users demand new app releases in real-time. Virtual DevOps requires the tools, automation, and resources to deploy at will. This is where hybrid “as-a-Service” services are essential to continuous deployment, as they provide this ability.
Fundamental to these three steps for remote DevOps is the need to have an effective cloud working experience that developers can trust and use/consume. This cloud service must be designed with DevOps in mind in terms of the required performance for a development environment; in other words, a true Platform-as-a-Service (PaaS) provision.&lt;/p&gt;
&lt;p&gt;What I mean by this is that it must go over and above the basic well-known marketing terminology of cloud services, i.e. elasticity (which we need to provision the right cloud services on request), agility (for rapid testing and experimentation environments), self-provisioning (for dynamic service utilization within an auto-scaling environment) and security (for appropriate access, firewalls, VPNs, Secure Endpoints, etc.).&lt;/p&gt;
&lt;p&gt;Through such a cloud service, DevOps developers will get the level of redundancy needed to continue their work in today’s environment. In addition, their work will be automatically backed up and they will encounter minimal process interruptions.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;This is just a start as to how to deploy a remote DevOps working world. You could go even further and automate many of the administration jobs by setting up measurements and metrics that will auto-manage the space and performance requirements of the containers, servers, etc., which, in turn, could lead to a NoOps nirvana.&lt;/p&gt;
&lt;p&gt;As DevOps comes of age, it will undoubtedly be affected by this frightful crisis that makes a remote environment a mandatory condition. We have always known DevOps adoption would take time, and time has a way of introducing change. It has always been said that transitioning to DevOps is, first and foremost, a cultural shift, and then a process and organisational shift. The need for remote working has highlighted the need to work differently – with open collaboration, trust, working to an output not a time, reduction in the human touch, and many other things which have always been part of the DevOps culture.&lt;/p&gt;
&lt;p&gt;Get started with DevOps and your transformation today by visiting our &lt;a href=&quot;https://www.hpe.com/us/en/services.html&quot;&gt;HPE Pointnext website&lt;/a&gt;. And be sure to come back and check out the &lt;a href=&quot;/blog&quot;&gt;HPE DEV blog&lt;/a&gt; for more articles like this.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Tutorial: Enabling Remote Copy using the HPE CSI Driver for Kubernetes on HPE Primera]]></title><description><![CDATA[The addition of new features to the HPE CSI Driver for Kubernetes never stops and, with the newly released 1.3.0 version of the CSI Driver…]]></description><link>https://developer.hpe.com/tutorial-enabling-remote-copy-using-the-hpe-csi-driver-for-kubernetes-on/</link><guid isPermaLink="false">https://developer.hpe.com/tutorial-enabling-remote-copy-using-the-hpe-csi-driver-for-kubernetes-on/</guid><pubDate>Wed, 26 Aug 2020 02:02:34 GMT</pubDate><content:encoded>&lt;p&gt;The addition of new features to the HPE CSI Driver for Kubernetes never stops and, with the newly released 1.3.0 version of the CSI Driver, comes the much requested support for HPE Primera and 3PAR Remote Copy Peer Persistence. Remote Copy support within Kubernetes provides enhanced availability and transparent failover for disaster recovery protection with Kubernetes. As more and more applications migrate into Kubernetes, HPE recommends customers deploy mission-critical applications with replicated persistent volumes to ensure that these applications are highly available and resistant to failure. HPE Primera and 3PAR Remote Copy can serve as the foundation for a disaster recovery solution.&lt;/p&gt;
&lt;h1&gt;Configuring Remote Copy in the HPE CSI Driver&lt;/h1&gt;
&lt;p&gt;In the example I show here, I will start with an existing single zone Kubernetes cluster. For the most up-to-date information and examples on HPE Storage and containers, please refer to &lt;a href=&quot;https://scod.hpedev.io&quot;&gt;HPE Storage Container Orchestrator Documentation&lt;/a&gt; (SCOD). Currently, the HPE CSI Driver for Kubernetes only supports HPE Primera and 3PAR Remote Copy in 2DC Peer Persistence mode. Remote Copy Periodic (async) mode is not currently supported but will be available in a future release.&lt;/p&gt;
&lt;p&gt;For information on creating a Peer Persistence configuration, review the &lt;a href=&quot;https://techhub.hpe.com/eginfolib/storage/docs/Primera/RemoteCopy/RCconfig/GUID-1F726F48-A372-4ED8-B1D7-9545D091AE98.html#GUID-1F726F48-A372-4ED8-B1D7-9545D091AE98&quot;&gt;HPE Primera Peer Persistence Host OS Support Matrix&lt;/a&gt; for the supported host OSs and host persona requirements. Refer to &lt;a href=&quot;https://support.hpe.com/hpesc/public/docDisplay?docLocale=en_US&amp;#x26;docId=emr_na-a00088914en_us&quot;&gt;HPE Primera OS: Configuring data replication using Remote Copy over IP&lt;/a&gt; for more information.&lt;/p&gt;
&lt;h2&gt;Requirements:&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Single zone Kubernetes cluster&lt;/li&gt;
&lt;li&gt;Deployment of HPE CSI Driver for Kubernetes&lt;/li&gt;
&lt;li&gt;Create Secrets for Primary and Target arrays&lt;/li&gt;
&lt;li&gt;Create CustomResourceDefinition (CRD) for Peer Persistence&lt;/li&gt;
&lt;li&gt;Create StorageClass for replicated volumes&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Deploy the HPE CSI Driver for Kubernetes&lt;/h2&gt;
&lt;p&gt;I will start by installing the latest version of the HPE CSI Driver for Kubernetes, which as of this writing is version 1.3.0. Here are two methods you can use to do this:&lt;/p&gt;
&lt;h4&gt;Fresh installation of the HPE CSI Driver for Kubernetes&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;helm repo add hpe https://hpe-storage.github.io/co-deployments/
helm repo update
helm install hpe-csi hpe/hpe-csi-driver --namespace hpe-csi --version 1.3.0
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Upgrading an existing deployment of the HPE CSI Driver for Kubernetes&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;helm repo update
helm search repo hpe-csi-driver -l
helm upgrade hpe-csi hpe/hpe-csi-driver --namespace &amp;#x3C;namespace&gt; --version 1.3.0
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I can check the status of the deployment by running the following command.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;kubectl get all -n hpe-csi
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If I used a different namespace during the deployment, I can use this command instead.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;kubectl get pods --all-namespaces -l &apos;app in (nimble-csp, primera3par-csp, hpe-csi-node, hpe-csi-controller)&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You should see something like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
hpe-csi       hpe-csi-controller-6d9bb97cd-njnj9   7/7     Running   0          1m
hpe-csi       hpe-csi-node-dlcz5                   2/2     Running   0          1m
...
hpe-csi       nimble-csp-745cb4d948-6449z          1/1     Running   0          1m
hpe-csi       primera3par-csp-867984bf86-dkf2d     1/1     Running   0          1m
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Create Remote Copy link Secrets&lt;/h2&gt;
&lt;p&gt;Here&apos;s a tip for creating Kubernetes objects at the command line.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;kubectl create -f-
&amp;#x3C; paste the YAML &gt;
^D (CTRL + D)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With the HPE CSI Driver deployed, you will need to create two Secrets, one for each Primera array that is part of the Remote Copy link (i.e. default-primera-secret and secondary-primera-secret).&lt;/p&gt;
&lt;h4&gt;Primary Array&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;apiVersion: v1
kind: Secret
metadata:
  name: default-primera-secret
  namespace: hpe-csi
stringData:
  serviceName: primera3par-csp-svc
  servicePort: &quot;8080&quot;
  backend: 10.0.0.2
  username: 3paradm
  password: 3pardata
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Secondary Array&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;apiVersion: v1
kind: Secret
metadata:
  name: secondary-primera-secret
  namespace: hpe-csi
stringData:
  serviceName: primera3par-csp-svc
  servicePort: &quot;8080&quot;
  backend: 10.0.0.3
  username: 3paradm
  password: 3pardata
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Create Peer Persistence CustomResourceDefinition&lt;/h2&gt;
&lt;p&gt;Next, you will need to create a &lt;code&gt;CustomResourceDefinition&lt;/code&gt; that holds the target array information that will be used when creating the volume pairs.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;apiVersion: storage.hpe.com/v1
kind: HPEReplicationDeviceInfo
metadata:
  name: replication-crd
spec:
  target_array_details:
  - targetCpg: SSD_r6
    targetName: primera-c670
    targetSecret: secondary-primera-secret
    #targetSnapCpg: SSD_r6 (optional)
    targetSecretNamespace: hpe-csi
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Create Peer Persistence StorageClass&lt;/h2&gt;
&lt;p&gt;Next, define the &lt;strong&gt;remoteCopyGroup: &amp;#x3C;remote_copy_group_name&gt;&lt;/strong&gt; and the &lt;strong&gt;replicationDevices: &amp;#x3C;replication_crd_name&gt;&lt;/strong&gt; parameters. The HPE CSI Driver can use an existing Remote Copy Group or, if it doesn&apos;t exist, it will create a new one. The CSI Driver will also use the information from the &lt;code&gt;CRD&lt;/code&gt; to create the replicated volume on the target array.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: &quot;false&quot;
  name: rep-sc
provisioner: csi.hpe.com
reclaimPolicy: Delete
allowVolumeExpansion: true
parameters:
  csi.storage.k8s.io/fstype: xfs
  csi.storage.k8s.io/controller-expand-secret-name: default-primera-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: hpe-csi
  csi.storage.k8s.io/controller-publish-secret-name: default-primera-secret
  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-csi
  csi.storage.k8s.io/node-publish-secret-name: default-primera-secret
  csi.storage.k8s.io/node-publish-secret-namespace: hpe-csi
  csi.storage.k8s.io/node-stage-secret-name: default-primera-secret
  csi.storage.k8s.io/node-stage-secret-namespace: hpe-csi
  csi.storage.k8s.io/provisioner-secret-name: default-primera-secret
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-csi
  description: &quot;Volume created using Peer Persistence with the HPE CSI Driver for Kubernetes&quot;
  accessProtocol: fc

# Primera customizations
  cpg: SSD_r6
  remoteCopyGroup: new-rcg
  replicationDevices: replication-crd
  provisioning_type: tpvv
  allowOverrides: description,provisioning_type,cpg,remoteCopyGroup
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once you have created the &lt;code&gt;StorageClass&lt;/code&gt; within the cluster, you can request Persistent Volumes as normal.&lt;/p&gt;
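&lt;p&gt;If you saved the manifest above to a file, a quick way to create the StorageClass and confirm it is registered (the filename here is just an example) is:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;kubectl create -f rep-sc.yaml
kubectl get sc rep-sc
&lt;/code&gt;&lt;/pre&gt;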
&lt;h2&gt;Create Peer Persistence PVC&lt;/h2&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: replicated-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi
  storageClassName: rep-sc
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As volumes are created using Remote Copy, the replication between the &lt;strong&gt;default&lt;/strong&gt; and &lt;strong&gt;secondary&lt;/strong&gt; Primeras will be transparent to Kubernetes. There are &lt;a href=&quot;https://youtu.be/Eet92dOra24&quot;&gt;multiple videos&lt;/a&gt; out on YouTube demonstrating how automatic transparent failover works with various workloads, and within Kubernetes it is no different. In the case of an array failure, automatic transparent failover will manage the pathing between the primary and secondary arrays on the worker nodes so that application I/O is not interrupted.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;kubectl get pvc
NAME             STATUS    VOLUME                            CAPACITY   ACCESS MODES   STORAGECLASS               AGE
replicated-pvc   Bound     pvc-ca03a916-a6fb-434c-bc00-6b8   200Gi      RWO            rep-sc                     1m
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Finally, run &lt;strong&gt;showrcopy&lt;/strong&gt; on both Primeras to see the sync status of the Remote Copy Group.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ showrcopy

Remote Copy System Information
Status: Started, Normal

Target Information

Name              ID Type Status Options Policy
virt-primera-c670  4 IP   ready  -       mirror_config

Link Information

Target            Node  Address     Status Options
virt-primera-c670 0:3:1 172.17.20.5 Up     -
virt-primera-c670 1:3:1 172.17.20.6 Up     -
receive           0:3:1 receive     Up     -
receive           1:3:1 receive     Up     -

Group Information

Name         Target            Status   Role       Mode     Options
new-rcg      virt-primera-c670 Started  Primary    Sync     auto_failover,path_management
  LocalVV                         ID  RemoteVV                          ID SyncStatus    LastSyncTime
  pvc-ca03a916-a6fb-434c-bc00-6b8 168 pvc-ca03a916-a6fb-434c-bc00-6b8   83 Synced        NA
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This verifies the replication status of the volumes created within your Kubernetes cluster.&lt;/p&gt;
&lt;p&gt;By adding Remote Copy support to the HPE CSI Driver, along with volume snapshot and cloning capabilities, HPE gives you many options for your disaster recovery strategy and the peace of mind that your mission-critical application data is protected.&lt;/p&gt;
&lt;h1&gt;Next Steps&lt;/h1&gt;
&lt;p&gt;Stay tuned to the &lt;a href=&quot;/blog&quot;&gt;HPE DEV blog&lt;/a&gt; for future posts regarding the HPE CSI Driver for Kubernetes. In the meantime, check out the blog about the new &lt;a href=&quot;/blog/8nlLVWP1RKFROlvZJDo9/introducing-kubernetes-csi-sidecar-containers-from-hpe&quot;&gt;Volume Mutator capabilities of the HPE CSI Driver&lt;/a&gt;. Also, if you want to learn more about Kubernetes, CSI, and the integration with HPE storage products, you can find a ton of resources out on &lt;a href=&quot;https://scod.hpedev.io&quot;&gt;SCOD&lt;/a&gt;! If you are already on Slack or an HPE employee, connect with us on Slack. If you are a new user, signup at &lt;a href=&quot;https://slack.hpedev.io&quot;&gt;slack.hpedev.io&lt;/a&gt;. We hang out in #kubernetes, #nimblestorage and #3par-primera.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Introducing Kubernetes CSI Sidecar Containers from HPE]]></title><description><![CDATA[With the release of the upcoming HPE CSI Driver for Kubernetes version 1.3.0, Hewlett Packard Enterprise (HPE) introduces the concept of…]]></description><link>https://developer.hpe.com/introducing-kubernetes-csi-sidecar-containers-from-hpe/</link><guid isPermaLink="false">https://developer.hpe.com/introducing-kubernetes-csi-sidecar-containers-from-hpe/</guid><pubDate>Tue, 25 Aug 2020 01:45:01 GMT</pubDate><content:encoded>&lt;p&gt;With the release of the upcoming HPE CSI Driver for Kubernetes version 1.3.0, Hewlett Packard Enterprise (HPE) introduces the concept of Container Storage Interface (CSI) extensions to the CSI driver using Kubernetes CSI sidecar containers. This concept is not foreign to anyone familiar with the CSI architecture as most new major features get implemented as a sidecar in a true microservice architecture. Services are tightly coupled and communicate over a UNIX socket using a high-speed Remote Procedure Call (RPC) interface, gRPC, for secure and reliable communication.&lt;/p&gt;
&lt;p&gt;The interface allows third parties to write extensions to their drivers to expose a particular storage platform’s differentiating feature where it’s difficult to conceive a broad stroke feature in a vendor neutral manner. It’s also possible to leapfrog SIG Storage (the Kubernetes working group for storage) for features currently in the discovery or design phase if customer demand is being prioritized over standardization.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/csi-130-slate-1598320004312.png&quot; alt=&quot;picture1&quot;&gt;&lt;/p&gt;
&lt;p&gt;The first (yes, there’s quite a few in the works) CSI sidecar is a volume mutator. It will allow end-users to alter their &lt;code&gt;PersistentVolumeClaims&lt;/code&gt; (PVCs) during runtime, even while the &lt;code&gt;PersistentVolume&lt;/code&gt; (PV) is mounted and serving a workload. What attributes are mutable depends on the backend Container Storage Provider (CSP) being used. Also, what attributes are allowed to be altered by an end-user is controlled by the Kubernetes cluster administrator through the &lt;code&gt;StorageClass&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Let’s go through an example on how you could put the volume mutator to work using the HPE Nimble Storage CSP.&lt;/p&gt;
&lt;h1&gt;Mutating persistent volume claims&lt;/h1&gt;
&lt;p&gt;With the CSI driver deployed and a HPE Nimble Storage backend configured, it’s good to understand what attributes are mutable. On the &lt;a href=&quot;https://scod.hpedev.io/container_storage_provider/hpe_nimble_storage/index.html#common_parameters_for_provisioning_and_cloning&quot;&gt;HPE Storage Container Orchestration Documentation&lt;/a&gt; (SCOD) portal for the respective CSP, you&apos;ll find the supported parameters. For reference, this table represents the current mutable attributes.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Attribute&lt;/th&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;destroyOnDelete&lt;/td&gt;
&lt;td&gt;Boolean&lt;/td&gt;
&lt;td&gt;Used to control deletion of volume in the backend after PV removal&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;description&lt;/td&gt;
&lt;td&gt;Text&lt;/td&gt;
&lt;td&gt;Volume description&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;folder&lt;/td&gt;
&lt;td&gt;Text&lt;/td&gt;
&lt;td&gt;Place volume into an existing folder&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;limitIops&lt;/td&gt;
&lt;td&gt;Integer&lt;/td&gt;
&lt;td&gt;Change IOPS limits on volume&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;limitMbps&lt;/td&gt;
&lt;td&gt;Integer&lt;/td&gt;
&lt;td&gt;Change Throughput limits on volume&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;performancePolicy&lt;/td&gt;
&lt;td&gt;Text&lt;/td&gt;
&lt;td&gt;Change performance policy for volume (within the same block size)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;dedupeEnabled&lt;/td&gt;
&lt;td&gt;Boolean&lt;/td&gt;
&lt;td&gt;Enable/Disable deduplication on volume&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;thick&lt;/td&gt;
&lt;td&gt;Boolean&lt;/td&gt;
&lt;td&gt;Thick/thin provisioning of volume&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;syncOnDetach&lt;/td&gt;
&lt;td&gt;Boolean&lt;/td&gt;
&lt;td&gt;Controls whether a snapshot of the volume is synced to the replication partner each time it is detached from a node&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;For the purposes of this example, let’s assume we want to allow users to be in control of a few storage attributes. We will also allow them to override the parameters during creation of the &lt;code&gt;PVC&lt;/code&gt;. &lt;a href=&quot;https://scod.hpedev.io/csi_driver/using.html#using_pvc_overrides&quot;&gt;Overriding parameters&lt;/a&gt; during creation is a cornerstone feature that has been part of the HPE primary storage solution since the FlexVolume days.&lt;/p&gt;
&lt;p&gt;Create a default &lt;code&gt;StorageClass&lt;/code&gt; with the &lt;code&gt;allowOverrides&lt;/code&gt; and &lt;code&gt;allowMutations&lt;/code&gt; set to allow certain performance tuning.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: &quot;true&quot;
  name: hpe-standard
provisioner: csi.hpe.com
parameters:
  csi.storage.k8s.io/fstype: xfs
  csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
  csi.storage.k8s.io/controller-publish-secret-namespace: kube-system
  csi.storage.k8s.io/node-publish-secret-name: hpe-backend
  csi.storage.k8s.io/node-publish-secret-namespace: kube-system
  csi.storage.k8s.io/node-stage-secret-name: hpe-backend
  csi.storage.k8s.io/node-stage-secret-namespace: kube-system
  csi.storage.k8s.io/provisioner-secret-name: hpe-backend
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
  description: &quot;Volume created by the HPE CSI Driver for Kubernetes&quot;
  allowOverrides: description,limitIops,limitMbps,performancePolicy
  allowMutations: description,limitIops,limitMbps,performancePolicy
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The volume mutator sidecar is dependent on the &lt;code&gt;&quot;csi.storage.k8s.io/provisioner-secret-name&quot;&lt;/code&gt; and &lt;code&gt;&quot;csi.storage.k8s.io/provisioner-secret-namespace&quot;&lt;/code&gt; to mutate volumes.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Next, create a &lt;code&gt;PVC&lt;/code&gt; with the following &lt;code&gt;.metadata.annotations&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data
  annotations:
    csi.hpe.com/description: This is my volume description
    csi.hpe.com/limitIops: &quot;10000&quot;
    csi.hpe.com/limitMbps: &quot;200&quot;
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 32Gi
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Switching over to the backend array, you can see that the volume was created with the desired overrides.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;Nimble OS $ vol --info pvc-2d1795ec-7bce-4af8-b841-437a435f29e1 | egrep -iw &apos;description|iops|throughput|performance&apos;
Description: This is my volume description
Performance policy: default
IOPS Limit: 10000
Throughput Limit (MiB/s): 200
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The volume name may be retrieved with &lt;code&gt;kubectl get pvc/my-data&lt;/code&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Let’s edit the object definition. This can be done with &lt;code&gt;kubectl edit&lt;/code&gt; or you can create a YAML file and subsequently patch the &lt;code&gt;PVC&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data
  annotations:
    csi.hpe.com/description: Need more oomph!
    csi.hpe.com/performancePolicy: double-down
    csi.hpe.com/limitIops: &quot;50000&quot;
    csi.hpe.com/limitMbps: &quot;1000&quot;
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 32Gi
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Patch the &lt;code&gt;PVC&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;kubectl patch pvc/my-data --patch &quot;$(cat my-data-boost.yaml)&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Back on the array, you can see that the attributes have changed.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;Nimble OS $ vol --info pvc-2d1795ec-7bce-4af8-b841-437a435f29e1 | egrep -iw &apos;description|iops|throughput|performance&apos;
Description: Need more oomph!
Performance policy: double-down
IOPS Limit: 50000
Throughput Limit (MiB/s): 1000
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Since the &lt;code&gt;.spec.csi.volumeAttributes&lt;/code&gt; of the &lt;code&gt;PV&lt;/code&gt; that the backend volume was created with are immutable, the latest successful changes are annotated on the &lt;code&gt;PV&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    csi.hpe.com/description: Need more oomph!
    csi.hpe.com/limitIops: &quot;50000&quot;
    csi.hpe.com/limitMbps: &quot;1000&quot;
    csi.hpe.com/performancePolicy: double-down
    pv.kubernetes.io/provisioned-by: csi.hpe.com
...
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Further adjustments may be performed anytime at any stage of the lifecycle of the &lt;code&gt;PV&lt;/code&gt;.&lt;/p&gt;
&lt;h1&gt;Use cases&lt;/h1&gt;
&lt;p&gt;Given the gamut of options for the HPE Nimble Storage CSP, there are a number of creative ways to accelerate certain use cases that require runtime tuning of storage characteristics.&lt;/p&gt;
&lt;h2&gt;Performance management&lt;/h2&gt;
&lt;p&gt;Like in the example above, throttling volumes to adhere to a certain performance characteristic is by far the most prolific use case, especially if there&apos;s cost associated with the performance limits. The use case can be further extended by allowing users to move volumes between folders on the Nimble array, such as Gold, Silver and Bronze, all with different performance caps. Certain restrictions apply. See the &lt;a href=&quot;https://scod.hpedev.io/container_storage_provider/hpe_nimble_storage/index.html#common_parameters_for_provisioning_and_cloning&quot;&gt;documentation&lt;/a&gt; for more information.&lt;/p&gt;
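&lt;p&gt;For instance, here is a minimal sketch of such a tier move (assuming the cluster administrator has added &lt;code&gt;folder&lt;/code&gt; and &lt;code&gt;limitIops&lt;/code&gt; to &lt;code&gt;allowMutations&lt;/code&gt; in the &lt;code&gt;StorageClass&lt;/code&gt;, and that a folder named &lt;code&gt;gold&lt;/code&gt; already exists on the array; the IOPS value is illustrative only):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data
  annotations:
    # hypothetical tier move: place the volume in the existing gold folder
    csi.hpe.com/folder: gold
    # raise the IOPS cap to match the new tier (illustrative value)
    csi.hpe.com/limitIops: &quot;25000&quot;
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 32Gi
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Applying it with &lt;code&gt;kubectl patch&lt;/code&gt;, as shown earlier, asks the CSP to move the volume and adjust its IOPS cap.&lt;/p&gt;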
&lt;h2&gt;Data reduction changes&lt;/h2&gt;
&lt;p&gt;Using compression and deduplication may be desirable for the initial ingest of a dataset, but future churn might conflict with the workload’s requirements, so the data reduction capabilities may be toggled at will. The need might also arise during runtime to prioritize space reservation; toggling thin provisioning with the &lt;code&gt;thick&lt;/code&gt; parameter may be used to control the reservation.&lt;/p&gt;
&lt;h2&gt;Data migration control&lt;/h2&gt;
&lt;p&gt;In the event you need to perform a workload transition between clusters, it’s practical to apply &lt;code&gt;destroyOnDelete: &quot;false&quot;&lt;/code&gt; and &lt;code&gt;syncOnDetach: &quot;true&quot;&lt;/code&gt; on the backend volume. This ensures the replica destination gets updated with the latest data from the source when destaging the workload. Also, retaining the volume on the array while the Kubernetes objects are cleaned out of the source namespace is necessary if the replica destination is configured to reverse the replication after the transition.&lt;/p&gt;
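&lt;p&gt;As a rough sketch (again assuming the cluster administrator has listed &lt;code&gt;destroyOnDelete&lt;/code&gt; and &lt;code&gt;syncOnDetach&lt;/code&gt; under &lt;code&gt;allowMutations&lt;/code&gt; in the &lt;code&gt;StorageClass&lt;/code&gt;), the source &lt;code&gt;PVC&lt;/code&gt; could be annotated before destaging the workload:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data
  annotations:
    # keep the backend volume when the PV is deleted on the source cluster
    csi.hpe.com/destroyOnDelete: &quot;false&quot;
    # sync a snapshot to the replication partner each time the volume is detached
    csi.hpe.com/syncOnDetach: &quot;true&quot;
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 32Gi
&lt;/code&gt;&lt;/pre&gt;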
&lt;p&gt;It will be exciting to see what other use cases will surface from the installed base with this new capability!&lt;/p&gt;
&lt;h1&gt;Next steps&lt;/h1&gt;
&lt;p&gt;The HPE CSI Driver for Kubernetes version 1.3.0 will become available in the next few weeks. &lt;code&gt;StorageClasses&lt;/code&gt; may then be created with the &lt;code&gt;allowMutations&lt;/code&gt; parameter and the CSI volume mutator may be used without any further tweaks.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://scod.hpedev.io/csi_driver/using.html#using_volume_mutations&quot;&gt;Using volume mutations&lt;/a&gt; on SCOD&lt;/li&gt;
&lt;li&gt;Overview of the &lt;a href=&quot;https://scod.hpedev.io/csi_driver/index.html&quot;&gt;HPE CSI Driver for Kubernetes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Source code &lt;a href=&quot;https://github.com/hpe-storage/csi-driver&quot;&gt;available on GitHub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Check out the HPE primary storage platform pages: &lt;a href=&quot;https://developer.hpe.com/platform/hpe-nimble-storage/home&quot;&gt;HPE Nimble Storage&lt;/a&gt; and &lt;a href=&quot;https://developer.hpe.com/platform/hpe-3par-and-primera/home&quot;&gt;HPE Primera&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Watch the &lt;a href=&quot;https://hpedev.io&quot;&gt;HPE Developer Community&lt;/a&gt; for future exciting updates to the HPE CSI Driver for Kubernetes!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE firmware updates: Part 1 – File types and Smart Components]]></title><description><![CDATA[This blog post has been moved to the Server Management portal.]]></description><link>https://developer.hpe.com/hpe-firmware-updates-part-1-file-types-and-smart-components/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-firmware-updates-part-1-file-types-and-smart-components/</guid><pubDate>Fri, 21 Aug 2020 15:07:31 GMT</pubDate><content:encoded>&lt;br&gt;
&lt;p&gt;&lt;big&gt;This blog post has been moved to the &lt;a href=&quot;https://servermanagementportal.ext.hpe.com/docs/references_and_material/blogposts/firmware_updates/part1/firmware_update_part1&quot;&gt;Server Management portal&lt;/a&gt;.&lt;/big&gt;&lt;/p&gt;
&lt;br&gt;</content:encoded></item><item><title><![CDATA[HPE firmware updates: Part 2 – Interaction in operating modes]]></title><description><![CDATA[This blog post has been moved to the Server Management Portal]]></description><link>https://developer.hpe.com/hpe-firmware-updates-part-2-interaction-in-operating-modes/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-firmware-updates-part-2-interaction-in-operating-modes/</guid><pubDate>Thu, 20 Aug 2020 11:38:44 GMT</pubDate><content:encoded>&lt;br&gt;
&lt;p&gt;&lt;big&gt;This blog post has been moved to the &lt;a href=&quot;https://servermanagementportal.ext.hpe.com/docs/references_and_material/blogposts/firmware_updates/part2/firmware_update_part2&quot;&gt;Server Management Portal&lt;/a&gt;&lt;/big&gt;&lt;/p&gt;
&lt;br&gt;</content:encoded></item><item><title><![CDATA[HPE DEV Hack Shack attracts over 10K visitors at HPE Discover event]]></title><description><![CDATA[hack shack image1 March 2020 marked a turning point for physical tradeshows and events. Faced with travel and gathering restrictions imposed…]]></description><link>https://developer.hpe.com/hpe-dev-hack-shack-attracts-over-10k-visitors-at-hpe-discover-event/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-dev-hack-shack-attracts-over-10k-visitors-at-hpe-discover-event/</guid><pubDate>Wed, 19 Aug 2020 07:42:44 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/7/hack-shack-image1-1597850961298.png&quot; alt=&quot;hack shack image1&quot;&gt;&lt;/p&gt;
&lt;p&gt;March 2020 marked a turning point for physical tradeshows and events. Faced with travel and gathering restrictions imposed by the pandemic, many companies scrambled to find new ways to connect with their customers. Virtual events quickly rose to the forefront as a preferred venue. But how to make a virtual event as engaging, interactive, and valuable as a physical event had yet to be determined. For Hewlett Packard Enterprise (HPE), this was an important consideration as it planned its premier customer technology event, HPE Discover.&lt;/p&gt;
&lt;p&gt;Since the inception of the Discover events, HPE senior executives, technologists, partners, and IT thought leaders found that it was a great way to connect with customers. One of the highlights on the show floor in recent years has been the HPE DEV Hack Shack. There, developers would come and chat with other developers, participate in hands-on workshops, and take coding challenges; all in a relaxed, fun-filled environment.&lt;/p&gt;
&lt;p&gt;As the venue changed from being a physical event, the immediate challenge for the HPE DEV team was how to replicate this experience virtually. The Dev team rose to the challenge and successfully attracted over 10,000 visitors during the HPE Discover 2020 Virtual Event. Here’s how they did it.&lt;/p&gt;
&lt;h2&gt;Bringing the Hack Shack online&lt;/h2&gt;
&lt;p&gt;While most of the HPE Discover Virtual Experience was hosted on a virtual event platform from Intrado Digital Media, the &lt;a href=&quot;/hackshack/&quot;&gt;Hack Shack&lt;/a&gt; was linked to as an off-platform site designed by HPE developers, which used &lt;a href=&quot;https://developer.hpe.com/platform/grommet/home&quot;&gt;Grommet&lt;/a&gt; for its underlying web application structure. From the HPE Discover Virtual Event Intrado platform, attendees would click on a link to “enter” the Hack Shack lobby. In the lobby, a quick video tour explained how to navigate the site, pointing out all the different types of activities that were offered during the event.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/7/hack-shack-image2-1597850953889.png&quot; alt=&quot;hack shack image2&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Workshops&lt;/h2&gt;
&lt;p&gt;As one of the major attractions of a physical Hack Shack is the opportunity for attendees to meet up with other developers and subject matter experts (SMEs) in hands-on workshops to learn and transfer knowledge, the virtual Hack Shack needed a way to accomplish something similar. To do this, Dev team members created a set of live workshops where students could interact with the presenter online using Jupyter Notebooks. SMEs were on hand to help facilitate the workshops, answer questions, and ensure no attendee got left behind during the session. This approach proved to be very successful, judging by the high scores given by participants in the post-workshop surveys.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/7/hack-shack-image3-1597850947424.png&quot; alt=&quot;hack shack image3&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Coding Challenges&lt;/h2&gt;
&lt;p&gt;Another popular physical Hack Shack feature the team wanted to incorporate into the virtual experience was its coding challenges. These challenges are timed events where developers are asked to create their own code to achieve a defined function while taking specific parameters into account. They challenge an attendee’s understanding of concepts delivered through the virtual workshops and offer an opportunity to apply what was learned into a real world example. The team is pleased to report that three challenge winners were selected and some cool prizes were awarded.&lt;/p&gt;
&lt;h2&gt;Fun and games&lt;/h2&gt;
&lt;p&gt;The Hack Shack is also well-known for its fun Hack Shack Attack! retro arcade style video game. Given the fact that the game is already an app, it was fairly easy to integrate it into the new, online Hack Shack Arcade. This attracted a lot of attendees who competed to achieve the highest score each week, with the weekly winner being awarded a prize along with some serious bragging rights! Stickers and dev art were also offered in the arcade for attendees to download and use for wallpaper or social posts.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/7/hack-shack-image4-1597853418103.png&quot; alt=&quot;hack shack image4&quot;&gt;&lt;/p&gt;
&lt;h2&gt;A virtual gathering place&lt;/h2&gt;
&lt;p&gt;The Hack Shack has always been a place for the community to gather. As one might imagine, this is not easily done virtually. Live workshops provided one excellent touch point. The Hack Shack Community page provided another, offering attendees social connection opportunities, like Slack and Twitter, to use to connect with peers. It also highlighted the HPE DEV Newsletter, which not only delivers the latest news directly to subscribers&apos; email box, but also offers an opportunity for community members to submit their own articles to be placed on the HPE DEV blog site. This proved interesting to many, as 146 new members signed up over the course of the event. The Learn On-Demand videos also provided another touch point.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/7/hack-shack-image5-1597850929851.png&quot; alt=&quot;hack shack image5&quot;&gt;&lt;/p&gt;
&lt;p&gt;The HPE DEV team worked long and hard to bring this experience online where it can be shared with a much broader audience. We are now participating at KubeCon + CloudNativeCon EU and are planning to offer the &lt;a href=&quot;/hackshack/&quot;&gt;Hack Shack&lt;/a&gt; at other future virtual events. It’s no surprise that virtual events will continue on into the foreseeable future. As a matter of fact, on June 9, 2020 &lt;a href=&quot;https://builtin.com/marketing/virtual-event-engagement-strategies&quot;&gt;Brian Nordli posted a blog article&lt;/a&gt; on the increasing popularity of virtual events. In it, he pointed out that many people were already planning virtual events for 2021 and 2022. He emphasized why this was by quoting Ben Chodor, president of the virtual event platform company Intrado Digital Media, who remarked &lt;em&gt;“Even when everything gets back to normal, there’s going to be a lot of people who aren’t going to feel comfortable going to Vegas or Orlando with 10,000 other people.”&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;If you happen to be at any of these virtual events, look up the &lt;a href=&quot;/hackshack/&quot;&gt;HPE DEV Hack Shack&lt;/a&gt;. If you need hints on how to create something similar for your event, check in with one of our HPE DEV team members on our &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;Slack channel&lt;/a&gt;. There’s a good chance they’ll be able to offer some excellent advice.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Datasets, DataFrames, and Spark SQL for Processing of Tabular Data]]></title><description><![CDATA[Original Post Information: With Apache Spark 2.0 and later versions, big improvements were implemented to make Spark easier to program and…]]></description><link>https://developer.hpe.com/datasets-dataframes-and-spark-sql-for-processing-of-tabular-data/</link><guid isPermaLink="false">https://developer.hpe.com/datasets-dataframes-and-spark-sql-for-processing-of-tabular-data/</guid><pubDate>Wed, 19 Aug 2020 06:24:20 GMT</pubDate><content:encoded>&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Carol McDonald&quot;,
&quot;publish&quot;: &quot;2018-10-24T07:00:00.000Z&quot;,
&quot;tags&quot;: &quot;spark&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;With Apache Spark 2.0 and later versions, big improvements were implemented to make Spark easier to program and execute faster. The Spark SQL and the Dataset/DataFrame APIs provide ease of use, space efficiency, and performance gains with Spark SQL&apos;s optimized execution engine. In this blog post we will give an introduction to Spark Datasets, DataFrames and Spark SQL. This is part 2 of a multi-blog series. You can read &lt;a href=&quot;/blog/4jqBP6MO3rc1Yy0QjMOq/spark-101-what-is-it-what-it-does-and-why-it-matters&quot;&gt;part 1 here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products referenced are now part of the &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;HPE Ezmeral Data Fabric&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;A Spark Dataset is a distributed collection of typed objects, which are partitioned across multiple nodes in a cluster and can be operated on in parallel. Datasets can be created from MapR XD files, MapR Database tables, or MapR Event Store topics, and can be cached, allowing reuse across parallel operations. A Dataset can be manipulated using functional transformations (map, flatMap, filter, etc.) and/or Spark SQL. A DataFrame is a Dataset of Row objects and represents a table of data with rows and columns. A DataFrame consists of partitions, each of which is a range of rows in cache on a data node.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image12-1597819064184.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;The SparkSession Object&lt;/h2&gt;
&lt;p&gt;As discussed before, a Spark application runs as independent processes, coordinated by the SparkSession object in the driver program. The entry point to programming in Spark is the org.apache.spark.sql.SparkSession class, which you use to create a SparkSession object as shown below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val spark =
SparkSession.builder().appName(&quot;example&quot;).master(&quot;local[*]&quot;).getOrCreate()
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you are using the spark-shell or a notebook, the SparkSession object is already created and available as the variable &lt;code&gt;spark&lt;/code&gt;.&lt;/p&gt;
&lt;h2&gt;Interactive Analysis with the Spark Shell&lt;/h2&gt;
&lt;p&gt;The Spark shell provides an easy way to learn Spark interactively. You can start the shell with the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ /[installation path]/bin/spark-shell --master local[2]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can enter the code from the rest of this post into the Spark shell. Outputs from the shell are prefaced with ---result---.&lt;/p&gt;
&lt;h2&gt;Exploring U.S. Flight Data with Spark Datasets and DataFrames&lt;/h2&gt;
&lt;p&gt;To go over some core concepts of Spark Datasets, we will be using some flight information from the &lt;a href=&quot;https://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=236&amp;#x26;DB_Short_Name=On-Time&quot;&gt;United States Department of Transportation&lt;/a&gt;. Later, we will use this same data to predict flight delays, so we want to explore the flight attributes that contribute most to flight delays. Using Spark Datasets, we will explore the data to answer questions like: which airline carriers, days of the week, originating airport, and hours of the day have the highest number of flight delays, when a delay is greater than 40 minutes?&lt;/p&gt;
&lt;p&gt;The flight data is in JSON files, with each flight having the following information:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;_id&lt;/strong&gt;: ID composed of carrier, date, origin, destination, flight number&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;dofW&lt;/strong&gt;: day of week (1=Monday, 7=Sunday)        &lt;/li&gt;
&lt;li&gt;&lt;strong&gt;carrier&lt;/strong&gt;: carrier code        &lt;/li&gt;
&lt;li&gt;&lt;strong&gt;origin&lt;/strong&gt;: origin airport code        &lt;/li&gt;
&lt;li&gt;&lt;strong&gt;dest&lt;/strong&gt;: destination airport code&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;crsdephour&lt;/strong&gt;: scheduled departure hour        &lt;/li&gt;
&lt;li&gt;&lt;strong&gt;crsdeptime&lt;/strong&gt;: scheduled departure time        &lt;/li&gt;
&lt;li&gt;&lt;strong&gt;depdelay&lt;/strong&gt;: departure delay in minutes        &lt;/li&gt;
&lt;li&gt;&lt;strong&gt;crsarrtime&lt;/strong&gt;: scheduled arrival time        &lt;/li&gt;
&lt;li&gt;&lt;strong&gt;arrdelay&lt;/strong&gt;: arrival delay minutes        &lt;/li&gt;
&lt;li&gt;&lt;strong&gt;crselapsedtime&lt;/strong&gt;: elapsed time        &lt;/li&gt;
&lt;li&gt;&lt;strong&gt;dist&lt;/strong&gt;: distance        &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It appears in the following format:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;{    
&quot;_id&quot;: &quot;AA_2017-01-01_ATL_LGA_1678&quot;,
&quot;dofW&quot;: 7,
&quot;carrier&quot;: &quot;AA&quot;,
&quot;origin&quot;: &quot;ATL&quot;,
&quot;dest&quot;: &quot;LGA&quot;,
&quot;crsdephour&quot;: 17,
&quot;crsdeptime&quot;: 1700,
&quot;depdelay&quot;: 0.0,
&quot;crsarrtime&quot;: 1912,
&quot;arrdelay&quot;: 0.0,
&quot;crselapsedtime&quot;: 132.0,
&quot;dist&quot;: 762.0
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;(&lt;a href=&quot;https://github.com/mapr-demos/mapr-spark2-ebook&quot;&gt;The complete data and code for all examples are available here.&lt;/a&gt;)&lt;/p&gt;
&lt;h2&gt;Loading Data from a File into a Dataset&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image10-1597819073355.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;With the SparkSession &lt;code&gt;read&lt;/code&gt; method, we can read data from a file into a DataFrame, specifying the file type, file path, and input options for the schema. The schema can optionally be inferred from the contents of the JSON file, but you will get better performance and accuracy by specifying the schema.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.spark.sql.types._
import org.apache.spark.sql._
import org.apache.spark.sql.functions._

val schema = StructType(Array(
    StructField(&quot;_id&quot;, StringType, true),
    StructField(&quot;dofW&quot;, IntegerType, true),
    StructField(&quot;carrier&quot;, StringType, true),
    StructField(&quot;origin&quot;, StringType, true),
    StructField(&quot;dest&quot;, StringType, true),
    StructField(&quot;crsdephour&quot;, IntegerType, true),
    StructField(&quot;crsdeptime&quot;, DoubleType, true),
    StructField(&quot;depdelay&quot;, DoubleType, true),
    StructField(&quot;crsarrtime&quot;, DoubleType, true),
    StructField(&quot;arrdelay&quot;, DoubleType, true),
    StructField(&quot;crselapsedtime&quot;, DoubleType, true),
    StructField(&quot;dist&quot;, DoubleType, true)
  ))
var file = &quot;maprfs:///data/flights.json&quot;

val df = spark.read.format(&quot;json&quot;).option(&quot;inferSchema&quot;, &quot;false&quot;).schema(schema).load(file)

---result:---
df: org.apache.spark.sql.DataFrame = [_id: string, dofW: int ... 10 more fields]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The take method returns an array with objects from this Dataset, which we see is of type Row.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;df.take(1)

---result:---
Array[org.apache.spark.sql.Row] =
Array([ATL_LGA_2017-01-01_17_AA_1678, 7, AA, ATL, LGA, 17, 1700.0, 0.0, 1912.0, 0.0, 132.0, 762.0])
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If we supply a case class with the &lt;code&gt;as&lt;/code&gt; method when loading the data, then the data is read into a Dataset of typed objects corresponding to the case class.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;case class Flight(_id: String, dofW: Integer, carrier: String, origin: String, dest: String, crsdephour: Integer, crsdeptime: Double, depdelay: Double,crsarrtime: Double, arrdelay: Double, crselapsedtime: Double, dist: Double) extends Serializable

val df = spark.read.format(&quot;json&quot;).option(&quot;inferSchema&quot;, &quot;false&quot;).schema(schema).load(file).as[Flight]

---result---:
df: org.apache.spark.sql.Dataset[Flight] = [_id: string, dofW: int ... 10 more fields]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now the take method returns an array of Flight objects.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;df.take(1)

---result:---
Array[Flight] = Array(Flight(ATL_LGA_2017-01-01_17_AA_1678, 7,AA,ATL,LGA,17,1700.0,0.0,1912.0,0.0,132.0,762.0))
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Transformations and Actions&lt;/h2&gt;
&lt;p&gt;There are two types of operations you can perform on a Dataset:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;transformations&lt;/strong&gt;: create a new Dataset from the current Dataset&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;actions&lt;/strong&gt;: trigger computation and return a result to the driver program&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image1-1597819082131.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Transformations are lazily evaluated, which means they are not computed immediately. A transformation is executed only when it is triggered by an action. Once an action has run and the value is returned, the Dataset is no longer in memory, unless you call the &lt;code&gt;cache&lt;/code&gt; method on the Dataset. If you will reuse a Dataset for more than one action, you should cache it.&lt;/p&gt;
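&lt;p&gt;As a quick illustration (a sketch using the flight &lt;code&gt;df&lt;/code&gt; Dataset loaded above):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// filter is a lazy transformation: nothing is computed yet
val longDelays = df.filter(flight =&gt; flight.depdelay &gt; 40)

// mark the Dataset to be cached the first time it is computed
longDelays.cache()

// count is an action: it triggers the computation and populates the cache
longDelays.count()

// subsequent actions reuse the cached data instead of recomputing it
longDelays.take(2)
&lt;/code&gt;&lt;/pre&gt;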
&lt;p&gt;&lt;strong&gt;Datasets and Type Safety&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Datasets are composed of typed objects, which means that transformation syntax errors (like a typo in the method name) and analysis errors (like an incorrect input variable type) can be caught at compile time. DataFrames are composed of untyped Row objects, which means that only syntax errors can be caught at compile time. A Spark SQL query is expressed as a string, which means that syntax errors and analysis errors are only caught at runtime. Datasets save a developer’s time by catching errors sooner, even while typing when using an IDE.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image2-1597819089388.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
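&lt;p&gt;For example, a small sketch against the typed flight Dataset defined above (the misspelled field and the SQL string are shown commented out, purely for illustration):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// compiles: depdelay is a field of the Flight case class
df.filter(flight =&gt; flight.depdelay &gt; 40)

// would not compile: value depdelays is not a member of Flight
// df.filter(flight =&gt; flight.depdelays &gt; 40)

// the equivalent SQL string is only checked when the query runs, failing at runtime
// spark.sql(&quot;select * from flights where depdelays &gt; 40&quot;).show
&lt;/code&gt;&lt;/pre&gt;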
&lt;p&gt;&lt;strong&gt;Dataset Transformations&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Here is a list of some commonly used typed transformations, which can be used on Datasets of typed objects (Dataset[T]).&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Transformation&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;map&lt;/td&gt;
&lt;td&gt;Returns a new Dataset with the result of applying the input function to each element&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;filter&lt;/td&gt;
&lt;td&gt;Returns a new Dataset containing the elements where the input function is true&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;groupByKey&lt;/td&gt;
&lt;td&gt;Returns a KeyValueGroupedDataset where the data is grouped by the given key function&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;This example filter transformation on the flight Dataset returns a Dataset with flights that departed at 10 AM. The take action returns an array of flight objects to the driver program.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;df.filter(flight =&gt; flight.crsdephour == 10).take(3)

---result:---
Array[Flight] = Array(Flight(ORD_DEN_2017-01-01_AA_2300, 7,AA,ORD,DEN,10,1005.0,5.0,1145.0,3.0,160.0,888.0), Flight(MIA_ORD_2017-01-01_AA_2439,7,AA,MIA,ORD,10, 1005.0,4.0,1231.0,0.0,206.0,1197.0))
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;DataFrame Transformations&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Here is a list of some commonly used untyped transformations, which can be used on Dataframes (Dataset[Row]).&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Transformation&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;select&lt;/td&gt;
&lt;td&gt;Selects a set of columns&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;join&lt;/td&gt;
&lt;td&gt;Joins with another DataFrame, using the given join expression&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;groupBy&lt;/td&gt;
&lt;td&gt;Groups the DataFrame, using the specified columns&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;This &lt;code&gt;groupBy&lt;/code&gt; transformation example groups the flight Dataset by carrier, then the count action counts the number of flights for each carrier. The show action prints out the resulting DataFrame rows in tabular format.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;df.groupBy(&quot;carrier&quot;).count().show()

---result:---
+-------+-----+
|carrier|count|
+-------+-----+
|     UA|18873|
|     AA|10031|
|     DL|10055|
|     WN| 2389|
+-------+-----+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here is a list of some commonly used Dataset actions.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Action&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;show(n)&lt;/td&gt;
&lt;td&gt;Displays the first n rows in a tabular form&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;take(n)&lt;/td&gt;
&lt;td&gt;Returns the first n objects in the Dataset in an array&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;count&lt;/td&gt;
&lt;td&gt;Returns the number of rows in the Dataset&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Here is an example using typed and untyped transformations and actions to get the destinations with the highest number of departure delays, where a delay is greater than 40 minutes. We count the departure delays greater than 40 minutes by destination and sort them with the highest first.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;df.filter($&quot;depdelay&quot; &gt; 40).groupBy(&quot;dest&quot;).count()
.orderBy(desc(&quot;count&quot;)).show(3)

---result:---
+----+-----+
|dest|count|
+----+-----+
| SFO|  711|
| EWR|  620|
| ORD|  593|
+----+-----+
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Exploring the Flight Dataset with Spark SQL&lt;/h2&gt;
&lt;p&gt;Now let’s explore the flight Dataset using Spark SQL and DataFrame transformations. After we register the DataFrame as a SQL temporary view, we can use SQL functions on the SparkSession to run SQL queries, which will return the results as a DataFrame. We cache the DataFrame, since we will reuse it and because Spark can cache DataFrames or Tables in columnar format in memory, which can improve memory usage and performance.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// cache DataFrame in columnar format in memory
df.cache

// create Table view of DataFrame for Spark SQL
df.createOrReplaceTempView(&quot;flights&quot;)

// cache flights table in columnar format in memory
spark.catalog.cacheTable(&quot;flights&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Below, we display information for the top five longest departure delays with Spark SQL and with DataFrame transformations (where a delay is considered greater than 40 minutes):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// Spark SQL
spark.sql(&quot;select carrier,origin, dest, depdelay,crsdephour, dist, dofW from flights where depdelay &gt; 40 order by depdelay desc limit 5&quot;).show

// same query using DataFrame transformations

df.select($&quot;carrier&quot;,$&quot;origin&quot;,$&quot;dest&quot;,$&quot;depdelay&quot;, $&quot;crsdephour&quot;).filter($&quot;depdelay&quot; &gt; 40).orderBy(desc( &quot;depdelay&quot; )).show(5)

---result:---
+-------+------+----+--------+----------+
|carrier|origin|dest|depdelay|crsdephour|
+-------+------+----+--------+----------+
|     AA|   SFO| ORD|  1440.0|         8|
|     DL|   BOS| ATL|  1185.0|        17|
|     UA|   DEN| EWR|  1138.0|        12|
|     DL|   ORD| ATL|  1087.0|        19|
|     UA|   MIA| EWR|  1072.0|        20|
+-------+------+----+--------+----------+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Below, we display the average departure delay by carrier:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// DataFrame transformations

df.groupBy(&quot;carrier&quot;).agg(avg(&quot;depdelay&quot;)).show

---result:---
+-------+------------------+
|carrier|     avg(depdelay)|
+-------+------------------+
|     UA|17.477878450696764|
|     AA| 10.45768118831622|
|     DL|15.316061660865241|
|     WN|13.491000418585182|
+-------+------------------+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With a notebook like Zeppelin or Jupyter, you can also display the SQL results in graph formats.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// Spark SQL
%sql select carrier, avg(depdelay)
 from flights
 group by carrier
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image9-1597819097878.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Let’s explore this data for flight delays, when the departure delay is greater than 40 minutes. Below, we see that United Airlines and Delta have the highest count of flight delays for January and February 2017 (the training set).&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// Count of Departure Delays by Carrier (where delay &gt; 40 minutes)

df.filter($&quot;depdelay&quot; &gt; 40)
.groupBy(&quot;carrier&quot;).count.orderBy(desc( &quot;count&quot;)).show(5)

---result:---
+-------+-----+
|carrier|count|
+-------+-----+
|     UA| 2420|
|     DL| 1043|
|     AA|  757|
|     WN|  244|
+-------+-----+
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-sql&quot;&gt;// Count of Departure Delays by Carrier (where delay &gt; 40 minutes)

%sql
select carrier, count(depdelay)
from flights where depdelay &gt; 40
group by carrier
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image11-1597819106731.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In the query below, we see that Monday (1), Tuesday (2), and Sunday (7) have the highest count of flight delays.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-sql&quot;&gt;// Count of Departure Delays by Day of the Week

%sql
select dofW, count(depdelay)
from flights where depdelay &gt; 40
group by dofW
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image8-1597819122799.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In the query below, we see that the hours between 13:00-19:00 have the highest count of flight delays.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-sql&quot;&gt;%sql
select crsdephour, count(depdelay)
from flights where depdelay &gt; 40
group by crsdephour order by crsdephour
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image7-1597819140727.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In the query below, we see that the originating airports, Chicago and Atlanta, have the highest count of flight delays.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-sql&quot;&gt;%sql
select origin, count(depdelay)
from flights where depdelay &gt; 40
group by origin
ORDER BY count(depdelay) desc
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image6-1597819149739.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In the query below, we see the count of departure delays by origin and destination. The routes ORD-&gt;SFO and DEN-&gt;SFO have the highest delays, maybe because of weather in January and February. Adding weather to this Dataset would give better results.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-sql&quot;&gt;%sql
select origin, dest, count(depdelay)
from flights where depdelay &gt; 40
group by origin, dest
ORDER BY count(depdelay) desc
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image5-1597819157738.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;You have now learned how to load data into Spark Datasets and DataFrames and how to explore tabular data with Spark SQL. These code examples can be reused as the foundation to solve many types of business problems.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Resource Allocation Configuration for Spark on YARN]]></title><description><![CDATA[Original Post Information: In this blog post, I will explain the resource allocation configurations for Spark on YARN, describe the yarn…]]></description><link>https://developer.hpe.com/resource-allocation-configuration-for-spark-on-yarn/</link><guid isPermaLink="false">https://developer.hpe.com/resource-allocation-configuration-for-spark-on-yarn/</guid><pubDate>Wed, 19 Aug 2020 06:14:55 GMT</pubDate><content:encoded>&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Hao Zhu&quot;,
&quot;publish&quot;: &quot;2015-09-11T07:00:00.000Z&quot;,
&quot;tags&quot;: &quot;spark&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;In this blog post, I will explain the resource allocation configurations for Spark on YARN, describe the yarn-client and yarn-cluster modes, and include examples.&lt;/p&gt;
&lt;p&gt;Spark can request two resources in YARN: CPU and memory. Note that Spark configurations for resource allocation are set in spark-defaults.conf, with names like spark.xx.xx. Some of them have a corresponding flag for client tools such as spark-submit/spark-shell/pyspark, with names like --xx-xx. Where a configuration has a corresponding client-tool flag, the flag is shown in parentheses &quot;()&quot; after the configuration. For example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;spark.driver.cores 
(--driver-cores)
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;1. yarn-client vs. yarn-cluster mode&lt;/h2&gt;
&lt;p&gt;There are two deploy modes that can be used to launch Spark applications on YARN per the &lt;a target=&apos;_blank&apos;  href=&apos;https://spark.apache.org/docs/latest/running-on-yarn.html&apos;&gt;Spark documentation&lt;/a&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;In &lt;strong&gt;yarn-client&lt;/strong&gt; mode, the driver runs in the client process and the application master is only used for requesting resources from YARN.&lt;/li&gt;
&lt;li&gt;In &lt;strong&gt;yarn-cluster&lt;/strong&gt; mode, the Spark driver runs inside an application master process that is managed by YARN on the cluster, and the client can go away after initiating the application.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;2. Application Master (AM)&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;a. yarn-client&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/reallocation-blog-img1-1597817848006.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Let’s look at the settings below as an example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;[root@h1 conf]# cat spark-defaults.conf |grep am
spark.yarn.am.cores     4
spark.yarn.am.memory 777m
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;By default, &lt;code&gt;spark.yarn.am.memoryOverhead&lt;/code&gt; is AM memory * 0.07, with a minimum of 384. This means that if we set spark.yarn.am.memory to 777M, the actual AM container size would be 2G. This is because 777+Max(384, 777 * 0.07) = 777+384 = 1161, and container sizes are rounded up to a multiple of the default yarn.scheduler.minimum-allocation-mb=1024, so a 2GB container will be allocated to the AM. As a result, a (2G, 4 Cores) AM container with Java heap size -Xmx777M is allocated:&lt;/p&gt;
&lt;p&gt;Assigned container container_1432752481069_0129_01_000001 of capacity&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;&amp;#x3C;memory:2048, vCores:4, disks:0.0&gt;

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;b. yarn-cluster&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In yarn-cluster mode, the Spark driver is inside the YARN AM. The driver-related configurations listed below also control the resource allocation for AM.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/reallocation-blog-img2-1597817860836.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Take a look at the settings below as an example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;MASTER=yarn-cluster /opt/mapr/spark/spark-1.3.1/bin/spark-submit --class org.apache.spark.examples.SparkPi  \
--driver-memory 1665m \
--driver-cores 2 \
/opt/mapr/spark/spark-1.3.1/lib/spark-examples*.jar 1000

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Since 1665+Max(384,1665*0.07)=1665+384=2049 &gt; 2048(2G), a 3G container will be allocated to AM. As a result, a (3G, 2 Cores) AM container with Java heap size -Xmx1665M is allocated:&lt;br&gt;
Assigned container container_1432752481069_0135_02_000001 of capacity&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;&amp;#x3C;**memory:3072, vCores:2**, disks:0.0&gt;

&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;3. Containers for Spark executors&lt;/h2&gt;
&lt;p&gt;For Spark executor resources, yarn-client and yarn-cluster modes use the same configurations:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/reallocation-blog-img3-1597817872761.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In &lt;code&gt;spark-defaults.conf&lt;/code&gt;, &lt;code&gt;spark.executor.memory&lt;/code&gt; is set to 2g.&lt;/p&gt;
&lt;p&gt;With the default of two executor instances, Spark will start 2 (3G, 1 core) executor containers, since 2048+Max(384, 2048 * 0.07) = 2048+384 = 2432 rounds up to 3G, each with Java heap size -Xmx2048M: Assigned container container_1432752481069_0140_01_000002 of capacity&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;&amp;#x3C;**memory:3072, vCores:1**, disks:0.0&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Assigned container container_1432752481069_0140_01_000003 of capacity&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;&amp;#x3C;**memory:3072, vCores:1**, disks:0.0&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;However, one core per executor means only one task can be running at any time for one executor. In the case of a broadcast join, the memory can be shared by multiple running tasks in the same executor if we increase the number of cores per executor.&lt;/p&gt;
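&lt;p&gt;For example, a sketch of the corresponding settings in &lt;code&gt;spark-defaults.conf&lt;/code&gt; (values are illustrative only):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;spark.executor.instances 2
spark.executor.memory    2g
# with 2 cores, each executor can run two tasks concurrently that share its 2g heap
spark.executor.cores     2
&lt;/code&gt;&lt;/pre&gt;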
&lt;p&gt;Note that if &lt;a target=&apos;_blank&apos;  href=&apos;https://spark.apache.org/docs/latest/job-scheduling.html#dynamic-resource-allocation&apos;&gt;dynamic resource allocation&lt;/a&gt; is enabled by setting &lt;code&gt;spark.dynamicAllocation.enabled&lt;/code&gt; to true, Spark can scale the number of executors registered with this application up and down based on the workload. In this case, you do not need to specify &lt;code&gt;spark.executor.instances&lt;/code&gt; manually.&lt;/p&gt;
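&lt;p&gt;A minimal sketch of enabling dynamic allocation in &lt;code&gt;spark-defaults.conf&lt;/code&gt; (it also requires the external shuffle service; the executor bounds shown are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;spark.dynamicAllocation.enabled      true
spark.shuffle.service.enabled        true
spark.dynamicAllocation.minExecutors 1
spark.dynamicAllocation.maxExecutors 10
&lt;/code&gt;&lt;/pre&gt;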
&lt;h2&gt;Key takeaways:&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Spark driver resource related configurations also control the YARN application master resource in yarn-cluster mode.&lt;/li&gt;
&lt;li&gt;Be aware of the max (7%, 384m) overhead off-heap memory when calculating the memory for executors.&lt;/li&gt;
&lt;li&gt;The number of CPU cores per executor controls the number of concurrent tasks per executor.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In this blog post, you’ve learned about resource allocation configurations for Spark on YARN. If you have any further questions, please &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;reach out to us via Slack&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Make sure you check the &lt;a href=&quot;/blog&quot;&gt;HPE DEV blog&lt;/a&gt; regularly to view more articles on this subject.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[How to Log in Apache Spark]]></title><description><![CDATA[Editor’s Note: MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may…]]></description><link>https://developer.hpe.com/how-to-log-in-apache-spark/</link><guid isPermaLink="false">https://developer.hpe.com/how-to-log-in-apache-spark/</guid><pubDate>Wed, 19 Aug 2020 06:01:02 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products and solutions sold prior to the acquisition of such assets by Hewlett Packard Enterprise Company in 2019, may have older product names and model numbers that differ from current solutions. For information about current offerings, which are now part of HPE Ezmeral Data Fabric, please visit &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;https://www.hpe.com/us/en/software/data-fabric.html&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Nicolas A Perez&quot;,
&quot;publish&quot;: &quot;2016-03-01T08:00:00.000Z&quot;,
&quot;tags&quot;: &quot;spark&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;An important part of any application is the underlying log system we incorporate into it. Logs are not only for debugging and traceability, but also for business intelligence. Building a robust logging system within our apps can provide significant insights into the business problems we are trying to solve.&lt;/p&gt;
&lt;h2&gt;Log4j in Apache Spark&lt;/h2&gt;
&lt;p&gt;Spark uses &lt;code&gt;log4j&lt;/code&gt; as the standard library for its own logging. Everything that happens inside Spark gets logged to the shell console and to the configured underlying storage. Spark also provides a template for app writers, so we can use the same &lt;code&gt;log4j&lt;/code&gt; libraries and add whatever &lt;code&gt;messages&lt;/code&gt; we want to Spark’s existing logging implementation.&lt;/p&gt;
&lt;h2&gt;Configuring Log4j&lt;/h2&gt;
&lt;p&gt;Under the &lt;code&gt;SPARK_HOME/conf&lt;/code&gt; folder, there is a &lt;code&gt;log4j.properties.template&lt;/code&gt; file which serves as a starting point for our own &lt;code&gt;logging&lt;/code&gt; system.&lt;/p&gt;
&lt;p&gt;Based on this file, we created the &lt;code&gt;log4j.properties&lt;/code&gt; file and put it under the same directory.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;log4j.properties&lt;/code&gt; looks like follows:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;log4j.appender.myConsoleAppender=org.apache.log4j.ConsoleAppender  
log4j.appender.myConsoleAppender.layout=org.apache.log4j.PatternLayout  
log4j.appender.myConsoleAppender.layout.ConversionPattern=%d [%t] %-5p %c - %m%n  

log4j.appender.RollingAppender=org.apache.log4j.DailyRollingFileAppender  
log4j.appender.RollingAppender.File=/var/log/spark.log  
log4j.appender.RollingAppender.DatePattern=&apos;.&apos;yyyy-MM-dd  
log4j.appender.RollingAppender.layout=org.apache.log4j.PatternLayout  
log4j.appender.RollingAppender.layout.ConversionPattern=[%p] %d %c %M - %m%n  

log4j.appender.RollingAppenderU=org.apache.log4j.DailyRollingFileAppender  
log4j.appender.RollingAppenderU.File=/var/log/sparkU.log  
log4j.appender.RollingAppenderU.DatePattern=&apos;.&apos;yyyy-MM-dd  
log4j.appender.RollingAppenderU.layout=org.apache.log4j.PatternLayout  
log4j.appender.RollingAppenderU.layout.ConversionPattern=[%p] %d %c %M - %m%n  
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;By default, everything goes to console and file&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;log4j.rootLogger=INFO, RollingAppender, myConsoleAppender  
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;My custom logging goes to another file&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;log4j.logger.myLogger=INFO, RollingAppenderU  
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;The noisier spark logs go to file only&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;log4j.logger.spark.storage=INFO, RollingAppender  
log4j.additivity.spark.storage=false  
log4j.logger.spark.scheduler=INFO, RollingAppender  
log4j.additivity.spark.scheduler=false  
log4j.logger.spark.CacheTracker=INFO, RollingAppender  
log4j.additivity.spark.CacheTracker=false  
log4j.logger.spark.CacheTrackerActor=INFO, RollingAppender  
log4j.additivity.spark.CacheTrackerActor=false  
log4j.logger.spark.MapOutputTrackerActor=INFO, RollingAppender  
log4j.additivity.spark.MapOutputTrackerActor=false  
log4j.logger.spark.MapOutputTracker=INFO, RollingAppender  
log4j.additivty.spark.MapOutputTracker=false
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Basically, we want to hide all the logs Spark generates so we don’t have to deal with them in the shell, redirecting them to the file system instead. On the other hand, we want our own logs to be logged in the shell and in a separate file so they don’t get mixed up with the ones from Spark. From here, we can point Splunk to the file where our own logs are written, which in this particular case is &lt;code&gt;/var/log/sparkU.log&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;This (&lt;code&gt;log4j.properties&lt;/code&gt;) file is picked up by Spark when the application starts, so we don’t have to do anything aside from placing it in the designated location.&lt;/p&gt;
&lt;h2&gt;Writing Our Own Logs&lt;/h2&gt;
&lt;p&gt;Now that we have configured the components that Spark requires in order to manage our logs, we just need to start writing logs within our apps.&lt;/p&gt;
&lt;p&gt;In order to show how this is done, let’s write a small app that helps us in the demonstration.&lt;/p&gt;
&lt;p&gt;Our app:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.log4j.{Level, LogManager}
import org.apache.spark.{SparkConf, SparkContext}

object app {  
 def main(args: Array[String]) {  
   val log = LogManager.getRootLogger  
   log.setLevel(Level.WARN)  

   val conf = new SparkConf().setAppName(&quot;demo-app&quot;)  
   val sc = new SparkContext(conf)  

   log.warn(&quot;Hello demo&quot;)  

   val data = sc.parallelize(1 to 100000)  

   log.warn(&quot;I am done&quot;)  
 }  
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Running this Spark app will demonstrate that our log system works. We will be able to see &lt;code&gt;Hello demo&lt;/code&gt; and &lt;code&gt;I am done&lt;/code&gt; messages being logged in the shell and in the file system, while the Spark logs will only go to the file system.&lt;/p&gt;
&lt;p&gt;So far, everything seems fine, yet there is a problem we haven’t mentioned.&lt;/p&gt;
&lt;p&gt;The class &lt;code&gt;org.apache.log4j.Logger&lt;/code&gt; is not &lt;code&gt;serializable&lt;/code&gt;, which implies we cannot use it inside a &lt;code&gt;closure&lt;/code&gt; while doing operations on some parts of the Spark API.&lt;/p&gt;
&lt;p&gt;For example, if we do the following in our app:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;val log = LogManager.getRootLogger  
val data = sc.parallelize(1 to 100000)  

data.map { value =&gt;   
   log.info(value)  
   value.toString  
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will fail when running on Spark. Spark complains that the &lt;code&gt;log&lt;/code&gt; object is not &lt;code&gt;Serializable&lt;/code&gt;, so it cannot be sent over the network to the Spark workers.&lt;/p&gt;
&lt;p&gt;This problem is actually easy to solve. Let’s create a class that does something to our data set while doing a lot of logging.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;class Mapper(n: Int) extends Serializable{  
 @transient lazy val log = org.apache.log4j.LogManager.getLogger(&quot;myLogger&quot;)  

 def doSomeMappingOnDataSetAndLogIt(rdd: RDD[Int]): RDD[String] =  
   rdd.map{ i =&gt;  
     log.warn(&quot;mapping: &quot; + i)  
     (i + n).toString  
   }  
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;code&gt;Mapper&lt;/code&gt; receives an &lt;code&gt;RDD[Int]&lt;/code&gt;, returns an &lt;code&gt;RDD[String]&lt;/code&gt;, and logs each value being mapped. Note how the &lt;code&gt;log&lt;/code&gt; object has been marked as &lt;code&gt;@transient&lt;/code&gt;, which tells the serialization system to ignore it. &lt;code&gt;Mapper&lt;/code&gt; is serialized and sent to each worker, but the &lt;code&gt;log&lt;/code&gt; object is resolved lazily on the worker only when it is needed, solving our problem.&lt;/p&gt;
&lt;p&gt;Another solution is to wrap the &lt;code&gt;log&lt;/code&gt; object in an &lt;code&gt;object&lt;/code&gt; construct (a Scala singleton) and use it everywhere. We would rather keep &lt;code&gt;log&lt;/code&gt; within the class that uses it, but the alternative is also valid.&lt;/p&gt;
&lt;p&gt;At this point, our entire app looks like the following:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import org.apache.log4j.{Level, LogManager, PropertyConfigurator}  
import org.apache.spark._  
import org.apache.spark.rdd.RDD  

class Mapper(n: Int) extends Serializable{  
 @transient lazy val log = org.apache.log4j.LogManager.getLogger(&quot;myLogger&quot;)  
 def doSomeMappingOnDataSetAndLogIt(rdd: RDD[Int]): RDD[String] =  
   rdd.map{ i =&gt;  
     log.warn(&quot;mapping: &quot; + i)  
     (i + n).toString  
   }  
}  
object Mapper {  
 def apply(n: Int): Mapper = new Mapper(n)  
}  
object app {  
 def main(args: Array[String]) {  
   val log = LogManager.getRootLogger  
   log.setLevel(Level.WARN)  
   val conf = new SparkConf().setAppName(&quot;demo-app&quot;)  
   val sc = new SparkContext(conf)  

   log.warn(&quot;Hello demo&quot;)  

   val data = sc.parallelize(1 to 100000)  
   val mapper = Mapper(1)  
   val other = mapper.doSomeMappingOnDataSetAndLogIt(data)  
   other.collect()  

   log.warn(&quot;I am done&quot;)  
 }  
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Conclusions&lt;/h2&gt;
&lt;p&gt;Our logs are now shown in the shell and also stored in a file of their own, while Spark’s logs are hidden from the shell and written to a separate file. We also solved the serialization problem that appears when trying to log from the workers.&lt;/p&gt;
&lt;p&gt;We can now build more robust BI systems based on our own Spark logs, just as we do with the non-distributed systems and applications we have today. Having the right insights is an important aspect of Business Intelligence, and this can help you achieve that.&lt;/p&gt;
&lt;p&gt;This post was originally published &lt;a target=&apos;\_blank&apos;  href=&apos;https://medium.com/@anicolaspp/how-to-log-in-apache-spark-f4204fad78a#.xo31z5vrd&apos;&gt;here&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Building Dynamic Machine Learning Pipelines with KubeDirector]]></title><description><![CDATA[Editor’s Note – HPE Ezmeral Container Platform is now HPE Ezmeral Runtime Enterprise. For more information on why the name was changed…]]></description><link>https://developer.hpe.com/building-dynamic-machine-learning-pipelines-with-kubedirector/</link><guid isPermaLink="false">https://developer.hpe.com/building-dynamic-machine-learning-pipelines-with-kubedirector/</guid><pubDate>Fri, 14 Aug 2020 10:58:54 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note – HPE Ezmeral Container Platform is now HPE Ezmeral Runtime Enterprise&lt;/strong&gt;. For more information on why the name was changed, please &lt;a href=&quot;https://community.hpe.com/t5/HPE-Ezmeral-Uncut/HPE-Ezmeral-Container-Platform-is-now-HPE-Ezmeral-Runtime/ba-p/7151720#.YW7nOxrMKM8&quot;&gt;click here&lt;/a&gt;.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Imagine that you’re a data scientist who’s been asked to create an application or service that can predict travel time for a proposed taxi ride. The application needs to have the ability to update the service with new data as it becomes available, so its predictions take recent patterns into account. In this tutorial, we’ll show you how to set up a Machine Learning (ML) pipeline using KubeDirector to train, register, and query your model.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/blog/running-non-cloud-native-apps-on-kubernetes-with-kubedirector&quot;&gt;KubeDirector&lt;/a&gt; was introduced to the open source community to address stateful application deployment in standard Kubernetes clusters. In the latest release (&lt;a href=&quot;https://github.com/bluek8s/kubedirector/releases/tag/v0.5.0&quot;&gt;version 0.5&lt;/a&gt;), KubeDirector allows multiple clusters to share data very easily using a new feature called &lt;strong&gt;&lt;em&gt;Connections&lt;/em&gt;&lt;/strong&gt;. This feature helps users create large-scale, dynamic, stateful containerized applications, such as those found in Machine Learning (ML) pipelines, and allows them to evolve continuously as models improve and the data changes.&lt;/p&gt;
&lt;p&gt;A basic ML pipeline consists of three stages:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Training, where the input datasets are processed to create ML models&lt;/li&gt;
&lt;li&gt;Model registration, where the models to be used are identified and characterized&lt;/li&gt;
&lt;li&gt;Inferencing, where selected models are made available for answering queries&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;After a brief description of the KubeDirector architecture used for this solution, we’ll show you how to train, register, and finally query your model for answers.  The sample dataset used, and available &lt;a href=&quot;https://s3.amazonaws.com/nyc-tlc/&quot;&gt;here under trip data&lt;/a&gt;, contains taxi ride data that will be used to infer total time per ride for given taxi pickup and drop-off locations. And yes, this is pre-2020 data. Things look much different now due to the pandemic. That’s why it’s important that your ML model is flexible and can take real-time data into account.&lt;/p&gt;
&lt;p&gt;Our solution uses three KubeDirector Applications (kdapps): A training deployment kdapp named &lt;strong&gt;&lt;em&gt;training-engine&lt;/em&gt;&lt;/strong&gt;, a Jupyter Notebook kdapp named &lt;strong&gt;&lt;em&gt;jupyter-notebook&lt;/em&gt;&lt;/strong&gt;, and an inferencing deployment kdapp named &lt;strong&gt;&lt;em&gt;deployment-engine&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;A kdapp is a custom resource (CR) that can be created in any Kubernetes cluster. The kdapp instructs KubeDirector on how a particular kind of virtual application cluster should be deployed and managed. The three kdapps used in this solution are examples that can be found online in the &lt;a href=&quot;https://github.com/bluek8s/kubedirector/tree/master/deploy/example_catalog&quot;&gt;KubeDirector github example catalog&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Register Your KubeDirector Applications with Kubernetes&lt;/h3&gt;
&lt;p&gt;Assuming that KubeDirector is already deployed and running in the Kubernetes cluster, these kdapp CRs can be created to register apps with KubeDirector, e.g. by using kubectl:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;kubectl create -f cr-app-training-engine.json
kubectl create -f cr-app-jupyter-notebook.json
kubectl create -f cr-app-deployment-engine.json
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This is a one-time step that builds a catalog of applications that data scientists can now instantiate as needed.&lt;/p&gt;
&lt;p&gt;Once a kdapp is created, an instance of that app can be deployed by creating a KubeDirector virtual cluster (kdcluster) CR. A kdcluster identifies the desired kdapp and specifies runtime configuration parameters, such as the size and resource requirements of the virtual cluster.&lt;/p&gt;
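&lt;p&gt;As a rough illustration of what such a CR looks like, here is a minimal kdcluster sketch. The apiVersion, role id, and sizing shown here are assumptions modeled on the examples in the KubeDirector example catalog and may differ in your environment; only the app name, catalog, and instance name echo this tutorial.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;# Minimal kdcluster sketch (apiVersion, role id, and sizing are assumptions)
apiVersion: kubedirector.hpe.com/v1beta1
kind: KubeDirectorCluster
metadata:
  name: training-engine-instance
spec:
  app: training-engine
  appCatalog: local
  roles:
    - id: controller
      members: 1
      resources:
        requests:
          cpu: 2
          memory: 4Gi
        limits:
          cpu: 2
          memory: 4Gi
&lt;/code&gt;&lt;/pre&gt;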
&lt;h3&gt;KDApp Connections&lt;/h3&gt;
&lt;p&gt;For the purposes of this discussion, one of the most interesting parts of the kdcluster spec is the Connections stanza, which identifies other resources of interest to that kdcluster. This is detailed in the GitHub readme link &lt;a href=&quot;https://github.com/bluedatainc/solutions/tree/master/MLOps/examples/KDApp%20connections&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The KubeDirector GitHub repo includes &lt;a href=&quot;https://github.com/bluek8s/kubedirector/tree/master/deploy/example_clusters&quot;&gt;examples of kdcluster CRs&lt;/a&gt; that instantiate these kdapps.  This tutorial will show you how to create kdclusters much like those examples, except that yours will add information about the Connections that each kdcluster will use.&lt;/p&gt;
&lt;p&gt;The input data and trained ML models for this example pipeline will live in a &lt;strong&gt;&lt;em&gt;project repository&lt;/em&gt;&lt;/strong&gt; of shared persistent storage. The kdapps used will expect to be able to access that repository through directories mounted within the app containers. There are multiple ways to implement this arrangement, but here you’ll use the &lt;strong&gt;&lt;em&gt;FS Mounts&lt;/em&gt;&lt;/strong&gt; feature of the &lt;a href=&quot;https://www.hpe.com/info/container-platform&quot;&gt;HPE Ezmeral Container Platform&lt;/a&gt; to access our pre-integrated &lt;a href=&quot;https://www.hpe.com/info/data-fabric&quot;&gt;HPE Ezmeral Data Fabric&lt;/a&gt; (formerly known as MapR Data Platform) for persistent container storage. The final piece of the puzzle will be the use of &lt;em&gt;ConfigMap&lt;/em&gt; resources to describe and locate the ML models used for drawing inferences.&lt;/p&gt;
&lt;p&gt;These components come together to build the pipeline like so:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/7/mlops-figure-1-1597402607009.png&quot; alt=&quot;mlops figure 1&quot;&gt;&lt;/p&gt;
&lt;p&gt;A project repository is used to store the key data components needed in any ML pipeline, such as source data, the model itself and the scoring script.&lt;/p&gt;
&lt;p&gt;In this example, the project repository is maintained within the HPE Ezmeral Data Fabric, since KubeDirector was itself deployed using the HPE Ezmeral Container Platform (which includes our data fabric by default); KubeDirector is a key component of the HPE Ezmeral Container Platform. However, KubeDirector is an open-source project and it can be used with any Kubernetes cluster and corresponding project repository.&lt;/p&gt;
&lt;h2&gt;Training&lt;/h2&gt;
&lt;p&gt;Once you have registered the ML training kdapp and the Jupyter Notebook kdapp for Model Building, you’ll need to launch instances of those applications as KubeDirector clusters (kdclusters) to put the ML pipeline to work. First, launch an instance of the ML training kdapp:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;kubectl create -f cr-cluster-training-engine.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;KubeDirector Connection creation: Jupyter Notebook kdcluster -&gt; Training engine kdcluster&lt;/h3&gt;
&lt;p&gt;Next, you’ll need to launch an instance of your Jupyter Notebook kdcluster. Before you do that, you need to modify the example kdcluster YAML file and add a new Connections stanza to the top-level “spec” section. This modification would look similar to the following code:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;spec:
  app: &quot;jupyter-notebook&quot;
  appCatalog: &quot;local&quot;
  connections:
    clusters:
      - &quot;training-engine-instance&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, use the modified yaml file to launch an instance of your Jupyter Notebook kdcluster:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;kubectl create -f cr-cluster-jupyter-notebook.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The runtime configuration of the connected training-engine kdcluster, including its service endpoints, is then injected by KubeDirector into the jupyter-notebook app. The notebook app uses this information when KubeDirector triggers configuration scripts to run within the app containers.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note that these Connections can also be edited in the kdcluster spec later, for example if the notebook should be redirected to work with a different training engine.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;Training the model&lt;/h3&gt;
&lt;p&gt;Now that the notebook kdcluster is running, a user can access the Jupyter Notebook web UI via its network service port and then direct the connected training deployment to generate models from the input data in the usual way.&lt;/p&gt;
&lt;h3&gt;Using a little training magic&lt;/h3&gt;
&lt;p&gt;Those familiar with Jupyter Notebooks know that they include support for predefined &lt;em&gt;magic&lt;/em&gt; functions that can be called within a notebook using a command-line-style syntax. These magic functions expand the capabilities of the notebook, adding support for things like &lt;em&gt;%history&lt;/em&gt;, &lt;em&gt;%edit&lt;/em&gt;, &lt;em&gt;%rerun&lt;/em&gt;, &lt;em&gt;%recall&lt;/em&gt;, &lt;em&gt;%macro&lt;/em&gt;, &lt;em&gt;%save&lt;/em&gt;, and &lt;em&gt;%pastebin&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Jupyter Notebook also includes hooks that allow defining custom magics. The kdapps in our solution utilize these custom commands (magics) to seamlessly integrate with KubeDirector Connections. This includes custom magics to handle remotely submitting training code and retrieving results and logs. These magic functions make REST API calls to the API server that runs as part of a training environment.&lt;/p&gt;
&lt;p&gt;For example the following magic functions are included:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;%attachments&lt;/em&gt;: Returns a list of connected training environments&lt;/li&gt;
&lt;li&gt;&lt;em&gt;%logs --url&lt;/em&gt;: URL of the training server load balancer&lt;/li&gt;
&lt;li&gt;&lt;em&gt;%%&amp;#x3C;&amp;#x3C;training_cluster_name&gt;&gt;&lt;/em&gt;: Submits training code to the training environment&lt;/li&gt;
&lt;/ul&gt;
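&lt;p&gt;To give a rough idea of how a custom magic like these could be wired up under the hood, here is a minimal, hypothetical sketch using IPython’s standard magic registration hook. The endpoint URL and payload format are purely illustrative and are not taken from the actual kdapps.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Hypothetical sketch of a custom cell magic that submits code to a training
# service over REST. Run inside a notebook; URL and payload shape are illustrative.
import requests
from IPython.core.magic import register_cell_magic

TRAINING_API = &quot;http://training-engine-instance:8080/api/v1/jobs&quot;  # assumed endpoint


@register_cell_magic
def submit_training(line, cell):
    &quot;&quot;&quot;Send the cell body to the training service and print the job id.&quot;&quot;&quot;
    response = requests.post(TRAINING_API, json={&quot;code&quot;: cell, &quot;args&quot;: line})
    response.raise_for_status()
    print(&quot;Submitted job:&quot;, response.json().get(&quot;id&quot;))
&lt;/code&gt;&lt;/pre&gt;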
&lt;p&gt;The below screenshot from the &lt;a href=&quot;https://github.com/bluedatainc/solutions/blob/master/MLOps/examples/NYCTaxi/TensorFlow/TensorflowPipelineFullTaxiDataSet.ipynb&quot;&gt;GitHub site&lt;/a&gt; shows you an example of how a Jupyter Notebook can use these magic commands:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/7/mlops-figure-2-1597402615159.png&quot; alt=&quot;mlops figure 2&quot;&gt;&lt;/p&gt;
&lt;p&gt;Note at the top of this screenshot of the Jupyter Notebook that the &lt;em&gt;%attachments&lt;/em&gt; magic command is used to retrieve the training cluster name, &lt;em&gt;trainingengineinstance&lt;/em&gt;. That name is then used in the following magic command, &lt;em&gt;%%trainingengineinstance&lt;/em&gt;, which submits the training code to the training environment – truly magic!&lt;/p&gt;
&lt;p&gt;As the training engine generates models, it will store the model data into the project repository. From the training engine’s point of view, it is simply writing to a designated subdirectory of its filesystem.&lt;/p&gt;
&lt;p&gt;Below is a snapshot from the HPE Ezmeral Container Platform web user interface. It illustrates the use of the integrated HPE Ezmeral Data Fabric; specifically, the FS Mounts feature is used here for organizing the data.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/7/mlops-figure-3-1597402623656.png&quot; alt=&quot;mlops figure 3&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Model registration&lt;/h2&gt;
&lt;p&gt;Next, you’ll need to create a &lt;em&gt;ConfigMap&lt;/em&gt; resource to store metadata about the model to be used in deployments. Here&apos;s an example ConfigMap for the model data generated above:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: tensorflowmodel
data:
  name: tensorflowmodel
  description: &quot;example model&quot;
  model-version: &quot;1&quot;
  path: /bd-fs-mnt/TenantShare/models/10yrdatasetchecknames/0_tf
  scoring-path: /bd-fs-mnt/TenantShare/code/TF_Scoring.py
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The two crucial parameters that will be used by the deployment-engine kdcluster are the &lt;strong&gt;&lt;em&gt;path&lt;/em&gt;&lt;/strong&gt; and the &lt;strong&gt;&lt;em&gt;scoring-path&lt;/em&gt;&lt;/strong&gt;, each of which identifies a location within the project repository as seen from inside an app container. The path is the location of the serialized model data; it can be retrieved from the job log produced when the model was trained. The scoring-path locates a script that the deployment engine will use to deserialize and process the model.&lt;/p&gt;
&lt;h2&gt;Create inference deployment&lt;/h2&gt;
&lt;p&gt;This is what you’ve been waiting for! How long will my taxi ride take? You’ve trained a model from a data set, and now it’s time to extract value from your model.  To do this, you need to create a deployment-engine.&lt;/p&gt;
&lt;p&gt;A deployment-engine kdcluster is used to stand up services that will allow clients to draw ride-time inferences from the models you have created and registered.&lt;/p&gt;
&lt;p&gt;A deployment engine will serve inferences for one or more models. To specify models to a deployment-engine kdcluster, the Connections feature once again comes in handy. Instead of naming other kdclusters of interest, this time it will be used to name ConfigMap resources, one for each model that the deployment should use. In this case, just use the one example model that was given.&lt;/p&gt;
&lt;h3&gt;KubeDirector Connection creation: Inference server kdcluster -&gt; Model ConfigMap&lt;/h3&gt;
&lt;p&gt;For this deployment, the example &lt;strong&gt;&lt;em&gt;cr-cluster-endpoint-wrapper.yaml&lt;/em&gt;&lt;/strong&gt; file can be used. Just as the Jupyter Notebook kdcluster YAML file was modified, this kdcluster YAML file will be edited to include a Connections stanza. A new property is added to the top-level spec section, similar to the following:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;spec:
  app: deployment-engine
  appCatalog: &quot;local&quot;
  connections:
    configmaps:
      - &quot;tensorflowmodel&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When these ConfigMap Connections are named in a kdcluster spec, KubeDirector will inject the contents of those ConfigMaps into a JSON file within the kdcluster&apos;s app containers where they can be used by the deployment app. If the contents of the ConfigMaps change, KubeDirector will immediately update this file. The deployment app can therefore reference this file to find the models it should use, which it can then load from the FS Mount directories exposed within its containers.&lt;/p&gt;
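&lt;p&gt;To make the mechanism concrete, here is a rough sketch of how a deployment app might consume such a file. The file path and JSON layout are assumptions for illustration only; the actual location and format are defined by KubeDirector. The &lt;code&gt;path&lt;/code&gt; and &lt;code&gt;scoring-path&lt;/code&gt; keys echo the ConfigMap created above.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Hypothetical sketch: read the JSON KubeDirector writes into the container and
# pick up the connected models. Path and JSON layout are assumptions.
import json

CONNECTIONS_FILE = &quot;/etc/guestconfig/configmeta.json&quot;  # assumed path


def load_models(connections_file=CONNECTIONS_FILE):
    with open(connections_file) as f:
        meta = json.load(f)
    models = []
    # Assume connected ConfigMaps appear under a &quot;configmaps&quot; key, one per model
    for cm in meta.get(&quot;connections&quot;, {}).get(&quot;configmaps&quot;, []):
        models.append({
            &quot;name&quot;: cm.get(&quot;name&quot;),
            &quot;model_path&quot;: cm.get(&quot;path&quot;),
            &quot;scoring_script&quot;: cm.get(&quot;scoring-path&quot;),
        })
    return models
&lt;/code&gt;&lt;/pre&gt;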
&lt;p&gt;This key use of KubeDirector Applications, Clusters, and Connections is what makes an ML pipeline created with KubeDirector very dynamic – it can easily accept constantly updating data sets and models of that data. The &lt;a href=&quot;https://www.nbcnewyork.com/news/local/nyc-taxis-for-hire-cars-took-sharp-hit-with-pandemic-and-are-only-slowly-coming-back/2541796/&quot;&gt;taxi ride data for New York City in 2020&lt;/a&gt; will likely look very different from the sample pre-pandemic dataset. ML pipelines need to be constantly monitored and often retrained to accommodate this model drift, as the ML model lifecycle is a highly iterative and dynamic process.&lt;/p&gt;
&lt;p&gt;A similar process could be implemented using ConfigMaps mounted into application pods using native Kubernetes mechanisms. An important benefit of the Connections feature, however, is that the connected set of ConfigMaps can grow, shrink, and change while the pod remains running. In terms of the use case here, that means you could add another model to be handled by the deployment engine without interrupting any current requests that the engine is processing.&lt;/p&gt;
&lt;h3&gt;Serving Queries&lt;/h3&gt;
&lt;p&gt;The “haproxy” service port on the inference deployment can now be used to service REST API queries, using the model created earlier to make inferences about how long a proposed taxi ride will take. An example script for making queries to this service can be found &lt;a href=&quot;https://github.com/bluedatainc/solutions/blob/master/MLOps/examples/NYCTaxi/TensorFlow/query_api_script_tf.py&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Here’s that script in action, sending a query to the inferencing deployment previously created:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/7/mlops-figure-4-1597402633083.png&quot; alt=&quot;mlops figure 4&quot;&gt;&lt;/p&gt;
&lt;p&gt;As the output shows, the example script queries the model via REST API calls, providing a few ride parameters, and the model returns a prediction.&lt;/p&gt;
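&lt;p&gt;If you just want the shape of such a call, here is a minimal, hypothetical sketch of the same kind of query written with Python’s requests library. The host, port, endpoint path, and field names are illustrative; the real ones are defined by the deployment engine and the linked example script.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Hypothetical sketch of querying the inference deployment over REST.
# Host, port, path, and field names below are illustrative only.
import requests

INFERENCE_URL = &quot;http://deployment-engine-haproxy:32700/predict&quot;  # assumed endpoint

ride = {
    &quot;pickup_location&quot;: 142,                    # assumed pickup zone id
    &quot;dropoff_location&quot;: 236,                   # assumed drop-off zone id
    &quot;pickup_datetime&quot;: &quot;2019-06-01T08:30:00&quot;,  # assumed timestamp field
}

response = requests.post(INFERENCE_URL, json=ride)
response.raise_for_status()
print(&quot;Predicted ride time (minutes):&quot;, response.json())
&lt;/code&gt;&lt;/pre&gt;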
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;While we’ve presented the steps from input dataset to query result sequentially here, in “steady state” operation all of these steps can happen in a different order and/or at the same time.&lt;/p&gt;
&lt;p&gt;Building our pipeline using Kubernetes resources and the open source KubeDirector project lets us deal flexibly with a wide range of configurations and changes, with minimal disruption. The possibilities include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Multiple notebooks per training deployment, with each notebook’s unique password given to a group or a single user for finer-grained access control.&lt;/li&gt;
&lt;li&gt;Multiple training deployments to implement different training methods or access different datasets.&lt;/li&gt;
&lt;li&gt;Changing existing notebooks to connect to different training deployments, without losing any of the working environment of the notebook.&lt;/li&gt;
&lt;li&gt;Multiple inferencing deployments, for access control or load-balancing.&lt;/li&gt;
&lt;li&gt;Multiple models served per inferencing deployment, at different service ports or different URLs.&lt;/li&gt;
&lt;li&gt;Changing the set of models served by an inferencing deployment, without interrupting its operation.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Find out more about KubeDirector applications and KubeDirector clusters on the &lt;a href=&quot;https://github.com/bluek8s/kubedirector/wiki&quot;&gt;KubeDirector GitHub wiki&lt;/a&gt;. From there you can download open-source KubeDirector and use it in your own Kubernetes clusters!&lt;/p&gt;
&lt;p&gt;And to see how we use KubeDirector in our HPE Ezmeral Container Platform, check out the interactive demo environment &lt;a href=&quot;http://www.hpe.com/engage/containerplatform&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Look for more KubeDirector posts coming soon on the &lt;a href=&quot;/blog&quot;&gt;HPE DEV blog site&lt;/a&gt;.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;This article was co-authored by:&lt;/p&gt;
&lt;/blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Joel Baxter, Distinguished Technologist&lt;/li&gt;
&lt;li&gt;Kartik Mathur, Master Technologist&lt;/li&gt;
&lt;li&gt;Don Wake, Technical Marketing Engineer&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[How to set and enforce tagging policies in HPE GreenLake for Private Cloud Enterprise]]></title><description><![CDATA[Editor’s Note – NAME CHANGE: HPE GreenLake for Private Cloud is now part of HPE GreenLake for Private Cloud Enterprise. Introduction HPE…]]></description><link>https://developer.hpe.com/how-to-set-and-enforce-tagging-policies-in-hpe-greenlake-for-private-clo/</link><guid isPermaLink="false">https://developer.hpe.com/how-to-set-and-enforce-tagging-policies-in-hpe-greenlake-for-private-clo/</guid><pubDate>Mon, 03 Aug 2020 13:15:13 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note – NAME CHANGE: HPE GreenLake for Private Cloud is now part of HPE GreenLake for Private Cloud Enterprise.&lt;/strong&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;HPE GreenLake for private cloud allows customers to assign metadata to their instances in the form of tags. An instance is a set of virtual machines that composes a horizontally scalable entity or a service suite, like a database. Tags help customers manage, report on, and filter their instances, providing business context for resource consumption and cost. The goal of this article is to discuss tags and the process of creating and enforcing tagging policies.&lt;/p&gt;
&lt;h2&gt;Tags&lt;/h2&gt;
&lt;p&gt;A tag is simply a label, consisting of a customer-defined key and value, that makes it easier to manage and filter instances. Tags allow you to categorize your instances in different ways. For example, you could define a set of tags for your instances that helps you track each instance’s owner, department, and purpose.&lt;/p&gt;
&lt;p&gt;The following diagram illustrates how tagging works. In this example, I assigned two tags to each instance. One tag defines the key &lt;strong&gt;Owner&lt;/strong&gt; and the other delineates the key &lt;strong&gt;Purpose&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/7/b1-1596460810780.png&quot; alt=&quot;b1&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Tag Policies&lt;/h2&gt;
&lt;p&gt;With HPE GreenLake for private cloud tag policies, you can enforce that an instance is provisioned with a tag that has a specific key and value, or require that the tag’s value come from a specific list of values. The list of acceptable values you designate can be any Option List that already exists in HPE GreenLake for private cloud.&lt;/p&gt;
&lt;p&gt;Option Lists can be populated in a number of ways, including manually within HPE GreenLake for private cloud or from a REST API. In this article, I will illustrate the process of creating Option Lists designated as type &quot;Manual&quot;.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; Refer to &lt;a href=&quot;https://cmpdocs.privatecloud.greenlake.hpe.com/en/latest/provisioning/library/library.html&quot;&gt;HPE GreenLake for private cloud documentation&lt;/a&gt; to obtain more information on Option Lists&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Other Tag Policy capabilities include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Multiple tag policies can be combined to enforce a comprehensive tag compliance program&lt;/li&gt;
&lt;li&gt;An Instance details page will warn when tags are not in compliance&lt;/li&gt;
&lt;li&gt;Administrators have the option to enable strict enforcement, stopping instance provisioning that would violate the tag policies&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Creating an Option List of Type &quot;Manual&quot;&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Navigate to Provisioning - Library - OPTIONS LISTS and select the +ADD key.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/7/b2-1596460824531.png&quot; alt=&quot;b2&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Enter a NAME for the Option List, select the &lt;strong&gt;&quot;MANUAL&quot;&lt;/strong&gt; type, provide the DATASET, and click SAVE CHANGES.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/7/b3-1596460832147.png&quot; alt=&quot;b3&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; JSON entries must be formatted as shown in the following example.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;code&gt;[{&quot;name&quot;: &quot;Purpose1&quot;,&quot;value&quot;: &quot;Development&quot;},{&quot;name&quot;: &quot;Purpose2&quot;,&quot;value&quot;: &quot;Production&quot;}]&lt;/code&gt;&lt;/p&gt;
&lt;h2&gt;Creating a Tag Policy&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Navigate to Administration - Policies and select the +ADD POLICY key.&lt;/li&gt;
&lt;li&gt;In the TYPE drop-down, select &lt;strong&gt;Tags&lt;/strong&gt;. The new policy modal will show options pertaining specifically to tagging, as shown below:&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/7/b4-1596460839745.png&quot; alt=&quot;b4&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Set the options as required. In this example, the following options are set:
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;TYPE:&lt;/strong&gt; Tags&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;NAME:&lt;/strong&gt; A name given to this specific policy&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;ENABLED:&lt;/strong&gt; When checked, the policy will be put into effect upon Save&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;STRICT ENFORCEMENT:&lt;/strong&gt; When checked, new instances will not be provisioned if they violate the tag policy. When unchecked, users are only warned for non-compliance.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;KEY:&lt;/strong&gt; HPE GreenLake for private cloud requires a tag be added with the entered Key&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;VALUE LIST:&lt;/strong&gt; If set, a tag with the key indicated above must have a value contained in the Option List selected&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;SCOPE:&lt;/strong&gt; In this example case, it is set to Global, but tag policies can also be targeted to Groups, Clouds, Users and Roles&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/7/b5-1596460846266.png&quot; alt=&quot;b5&quot;&gt;&lt;/p&gt;
&lt;p&gt;Once you have clicked &quot;SAVE CHANGES&quot;, the policy is in effect so long as the &quot;ENABLED&quot; box is checked (which is true by default for new policies).&lt;/p&gt;
&lt;h2&gt;Checking the Tag Policy enforcement&lt;/h2&gt;
&lt;p&gt;With the Tag Policy in place, you can see it in action when trying to provision an instance. Since I am enforcing, with STRICT ENFORCEMENT, that a tag with the key &quot;Purpose&quot; exists, the provisioning wizard will not allow me to progress past the configuration tab if the tag is not set, as shown below.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/7/b6-1596460852753.png&quot; alt=&quot;b6&quot;&gt;&lt;/p&gt;
&lt;p&gt;If an Option List is associated with the Tag Policy, you will see a similar validation error if you enter a tag key with no value or with a value that is not in the Option List.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/7/b7-1596460859332.png&quot; alt=&quot;b7&quot;&gt;&lt;/p&gt;
&lt;p&gt;If you return to the policy and deselect the &quot;STRICT ENFORCEMENT&quot; option, you will no longer be prevented from provisioning an instance, even when its tags violate the policy. You will, however, see a message on the instance details page with information on which tag policy is being violated by the given instance, as shown below:&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/7/b8-1596470828071.png&quot; height=&quot;100&quot; width=&quot;700&quot; align=&quot;left&quot;&gt;&lt;/p&gt;
&lt;p&gt;In a future blog entry, I’ll show how you can use tags to provide showback reporting and budgeting with business context in Consumption Analytics. Make sure you check the &lt;a href=&quot;/blog&quot;&gt;HPE DEV blog&lt;/a&gt; site often to find future posts related to this topic. If you have questions, please feel free to connect with me on the &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;HPE DEV Slack channel&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Optimizing and streamlining coding practices - Newsletter]]></title><link>https://developer.hpe.com/2020-August-03/</link><guid isPermaLink="false">https://developer.hpe.com/2020-August-03/</guid><pubDate>Mon, 03 Aug 2020 05:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[The HPE DEV Community Hack Shack – A place to gather, learn, and play]]></title><description><![CDATA[From August 17-20, the Cloud Native Computing Foundation (CNCF) will welcome technologists from all over the world to virtually connect at…]]></description><link>https://developer.hpe.com/the-hpe-dev-community-hack-shack-a-place-to-gather-learn-and-play/</link><guid isPermaLink="false">https://developer.hpe.com/the-hpe-dev-community-hack-shack-a-place-to-gather-learn-and-play/</guid><pubDate>Fri, 31 Jul 2020 07:39:28 GMT</pubDate><content:encoded>&lt;p&gt;From August 17-20, the &lt;a href=&quot;http://cncf.io&quot;&gt;Cloud Native Computing Foundation (CNCF)&lt;/a&gt; will welcome technologists from all over the world to virtually connect at their flagship event, &lt;a href=&quot;https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/&quot;&gt; KubeCon | CloudNativeCon EU&lt;/a&gt; . There, Platinum Sponsor Hewlett Packard Enterprise (HPE), will host the HPE DEV Community Hack Shack virtual experience. HPE will also highlight the new HPE Ezmeral software portfolio (which includes the HPE Ezmeral Container Platform) and showcase its strategic offerings in the areas of containers and Kubernetes.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/7/kubblog2-1596183367896.png&quot; alt=&quot;kubblog2&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Meet you in the Hack Shack!&lt;/h2&gt;
&lt;p&gt;The HPE DEV Community Hack Shack is a virtual experience designed to help developers, designers, and data scientists connect with the HPE DEV team and collaborate with them to build innovative solutions. It provides a place where you can engage with experts to learn about HPE and open source solutions and work with them to guide product development. At physical events, the Hack Shack was known for its unique, informal, and fun atmosphere. Our online experience aims to bring that same feeling to this virtual event!&lt;/p&gt;
&lt;h2&gt;Navigating the Hack Shack&lt;/h2&gt;
&lt;p&gt;Within the HPE DEV Hack Shack, you’ll find the following tabs where you’ll be able to collaborate, learn, interact with experts, and compete for fun prizes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;/hackshack/challenges&quot;&gt;CHALLENGES:&lt;/a&gt; Compete with others for prizes in a number of gaming and coding challenges.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;/hackshack/arcade&quot;&gt;ARCADE:&lt;/a&gt; Access our well-known Hack Shack Attack! retro video game to compete for the highest score.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;/hackshack/replays&quot;&gt;REPLAYS:&lt;/a&gt; Watch some of the in-depth Technical Workshops we’ve offered in the past. Topics include the HPE Ezmeral Container Platform, SPIFFE and SPIRE fundamentals, and the HPE Container Storage Interface for Kubernetes.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;/hackshack/ezmeral&quot;&gt;HPE EZMERAL:&lt;/a&gt; Get detailed information regarding the HPE Ezmeral software portfolio.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;/hackshack/community&quot;&gt;COMMUNITY:&lt;/a&gt; Connect with HPE experts and participate in on-demand training. While there, sign up for our HPE DEV Newsletter to stay up to date on the newest blog posts and tutorials. As an incentive to become a part of the community, those who sign up for the &lt;a href=&quot;https://developer.hpe.com/event/kubecon-europe-2020?listid=10605211&quot;&gt;HPE DEV Newsletter&lt;/a&gt; between August 17 and August 20 will be entered into a drawing to win a prize.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/7/kubblog3-1596183377034.png&quot; alt=&quot;kubblog3&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Looking for more developer info?&lt;/h2&gt;
&lt;p&gt;Visit the &lt;a href=&quot;https://developer.hpe.com&quot;&gt;HPE DEV Community Portal&lt;/a&gt; via the &lt;a href=&quot;/hackshack&quot;&gt;Hack Shack Lobby&lt;/a&gt;. There, you can access a treasure trove of information, including:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/platforms&quot;&gt;PLATFORMS&lt;/a&gt; – Access GitHub resources and software development kits (SDKs)&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;/blog&quot;&gt;BLOG&lt;/a&gt; – Read blogs and tutorials&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/projects&quot;&gt;OPEN SOURCE&lt;/a&gt; – Discover platforms, apps and contributions&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/events&quot;&gt;EVENTS&lt;/a&gt; – Plan on attending upcoming events&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/community&quot;&gt;COMMUNITY&lt;/a&gt; – Participate and contribute to the community&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;At the &lt;a href=&quot;https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/&quot;&gt;KubeCon | CloudNativeCon EU&lt;/a&gt; HPE virtual booth, you’ll get to explore HPE’s different software-enabling and application development technologies, including:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://www.hpe.com/us/en/solutions/container-platform.html&quot;&gt;The HPE Ezmeral Container Platform&lt;/a&gt;, the industry’s first Kubernetes container platform software designed to run both cloud-native and non cloud-native applications, enabling true hybrid cloud operations across any location.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.hpe.com/us/en/storage/intelligent-storage.html?chatsrc=ot-en&amp;#x26;jumpid=ps_8r5mdg32xs_aid-520023673&amp;#x26;gclid=Cj0KCQiAs67yBRC7ARIsAF49CdU6O6Hbaj1lwT8tcrU702BzRnZboWNQILTShb0cCk-eEk7nUjQ-yhMaAv4fEALw_wcB&amp;#x26;gclsrc=aw.ds&quot;&gt;The intelligent data platform&lt;/a&gt; from &lt;a href=&quot;https://www.hpe.com/us/en/storage.html&quot;&gt;HPE storage&lt;/a&gt;, focused on persistent storage use cases for Kubernetes in private, public, and hybrid clouds and how to enable CI/CD pipelines.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://spiffe.io/&quot;&gt;The SPIFFE and SPIRE Projects&lt;/a&gt;. With the recent acquisition of Scytale, HPE is the leading contributor for CNCF’s SPIFFE and SPIRE projects. These projects help organizations build a foundation for zero trust through the use of platform agnostic, cryptographic service identity.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Sessions outside the booth at KubeCon | CloudNativeCon EU&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/&quot;&gt;KubeCon | CloudNativeCon EU&lt;/a&gt; will also offer two sessions highlighting HPE offerings:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Predictable Performance Through Prometheus and Performance Aware Scheduling&lt;/strong&gt;, co-sponsored by HPE and Intel and delivered by Chief Technologist, Tom Golway (HPE), and Software Engineer, Killian Muldoon (Intel)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Taming Data/State Challenges for ML Applications and Kubeflow&lt;/strong&gt;, presented by HPE Distinguished Technologist, Skyler Thomas&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Refer to the &lt;a href=&quot;https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/program/schedule/&quot;&gt;program schedule&lt;/a&gt; for specific times and locations.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/7/grommet-1596183385085.png&quot; alt=&quot;grommet&quot;&gt;&lt;/p&gt;
&lt;p&gt;Virtual events can be challenging. Many who attend physical conferences come specifically to engage personally with industry and subject matter experts. That’s why the &lt;a href=&quot;https://developer.hpe.com/community&quot;&gt;HPE DEV community&lt;/a&gt; has worked hard to deliver its &lt;a href=&quot;/hackshack&quot;&gt;HPE DEV Community Hack Shack&lt;/a&gt; virtual experience. We want to ensure you have a place where you can do just that. For questions or help, be sure to connect with us on our &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;HPE DEV Slack Channel&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Master the automation universe the easy way!  Part 2: The Art of Packing! ]]></title><description><![CDATA[StackStorm integration pack Recently, I introduced you to StackStorm. In this post, I'll present StackStorm integration packs! StackStorm is…]]></description><link>https://developer.hpe.com/master-the-automation-universe-the-easy-way-part-2-the-art-of-packing/</link><guid isPermaLink="false">https://developer.hpe.com/master-the-automation-universe-the-easy-way-part-2-the-art-of-packing/</guid><pubDate>Tue, 28 Jul 2020 13:46:21 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/7/stackstorm-part2-1-1595944683785.png&quot; alt=&quot;StackStorm integration pack&quot;&gt;&lt;/p&gt;
&lt;p&gt;Recently, &lt;a href=&quot;/blog/master-the-automation-universe-the-easy-way-part-1-introduction-to-stack&quot;&gt;I introduced you to StackStorm&lt;/a&gt;. In this post, I&apos;ll present StackStorm integration packs! StackStorm is one of the most innovative automation platforms I have had the opportunity of becoming familiar with. When you first start working with StackStorm, it can appear to be a bit overwhelming. But fear not. I will share with you all the secrets I have discovered in my unceasing googling of the interwebs. Naturally, I can&apos;t take you on a super deep dive in this post, so I will leave that to the documents page. But you can check out the full documentation over &lt;a href=&quot;https://docs.stackstorm.com/packs.html&quot;&gt;here&lt;/a&gt;. Now, let&apos;s dive in!&lt;/p&gt;
&lt;p&gt;A StackStorm integration pack is a predefined set of actions, sensors, rules, Python or shell scripts, and other miscellaneous items. A StackStorm pack has a specific structure that looks like the listing below. Nested under the actions directory is the workflows directory.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Contents of a pack
actions/                     # action metadata (YAML) plus scripts and workflows/
rules/                       # rules that map triggers to actions
sensors/                     # sensor code and metadata
aliases/                     # action aliases
policies/                    # policies
tests/                       # unit tests
etc/                         # any additional things (e.g. training, scripts)
config.schema.yaml           # Configuration schema
packname.yaml.example        # example of config, used in CI
pack.yaml                    # pack definition file
requirements.txt             # requirements for python packs
requirements-tests.txt       # requirements for python tests
icon.png                     # 64x64 png icon
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The good news is, if you have some existing Python or shell scripts you are using for automating things, you will be able to recycle them with StackStorm. I will be working with Python scripts in this example.&lt;/p&gt;
&lt;p&gt;The first thing to do is pair the Python file &lt;strong&gt;myPython.py&lt;/strong&gt; with a &lt;strong&gt;myPython.yaml&lt;/strong&gt; file. When this is done, the resulting pair of files represents a StackStorm &lt;strong&gt;action&lt;/strong&gt;. It can get a little confusing here. You will need a YAML file and a Python script in the &lt;strong&gt;actions directory&lt;/strong&gt;, and both of them should have &lt;strong&gt;the same base name&lt;/strong&gt;. You can think of the YAML file as something that introduces the Python script.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Actions&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Since the best place to start a journey is at the beginning, let me show you how easy it is to start packing by taking a look at the action YAML file.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
name: get_switches
pack: hpecfm
description: Get an array of switches from the hpecfm controller
runner_type: python-script
entry_point: get_switches.py
enabled: true
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As you can see, defining the action is fairly straightforward. The &lt;strong&gt;YAML&lt;/strong&gt; file starts with the action name and the name of the pack where the action is located. A description is always recommended. The runner type here is a Python script, but there are many others you can use as well (e.g. shell scripts, webhooks). Our entry point will be a Python script called &lt;strong&gt;get_switches&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Here&apos;s a closer look at the get_switches Python script:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# A python script for getting a dictionary of switches

from pyhpecfm import fabric
from lib.actions import HpecfmBaseAction


class switchLookup(HpecfmBaseAction):
    def run(self):
        # Get switches from hpecfm controller.
        switches = fabric.get_switches(self.client)
        if isinstance(switches, list):
            # Setup a list for holding dictionaries
            switch_data = []
            # Iterate through switch data from CFM API
            for i in switches:
                # Build dictionary for return
                out = {
                      &apos;u_health&apos;: i[&apos;health&apos;],
                      &apos;u_ip_address&apos;: i[&apos;ip_address&apos;],
                      &apos;u_mac_address&apos;: i[&apos;mac_address&apos;],
                      &apos;u_name&apos;: i[&apos;name&apos;],
                      &apos;u_sw_version&apos;: i[&apos;sw_version&apos;]
                      }
                switch_data.append(out)

            return (True, switch_data)
        return (False, switches)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can take just about any Python script you have written in the past and wrap it up in a Python class that extends the pack&apos;s base action. &lt;strong&gt;HpecfmBaseAction&lt;/strong&gt; is the class that performs the &lt;strong&gt;API&lt;/strong&gt; authentication (other packs would have a different base action). It uses credentials in the &lt;strong&gt;/opt/stackstorm/configs&lt;/strong&gt; directory to authenticate.&lt;/p&gt;
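&lt;p&gt;To give a rough idea of what a base action like this can look like, here is a minimal, hypothetical sketch. The &lt;code&gt;Action&lt;/code&gt; base class comes from StackStorm itself; the config keys and the client setup are placeholders for illustration, not the real &lt;strong&gt;HpecfmBaseAction&lt;/strong&gt; code.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Hypothetical sketch of a pack base action that handles API authentication.
# The config keys and client factory below are illustrative placeholders.
from st2common.runners.base_action import Action


class HpecfmBaseAction(Action):
    def __init__(self, config):
        super(HpecfmBaseAction, self).__init__(config)
        # Pack configuration is loaded by StackStorm from /opt/stackstorm/configs
        host = self.config.get(&apos;cfm_host&apos;)          # assumed config key
        username = self.config.get(&apos;cfm_username&apos;)  # assumed config key
        password = self.config.get(&apos;cfm_password&apos;)  # assumed config key
        # Stand-in for however the pack builds its authenticated pyhpecfm client
        self.client = self._create_client(host, username, password)

    def _create_client(self, host, username, password):
        raise NotImplementedError(&quot;replace with the real pyhpecfm client setup&quot;)
&lt;/code&gt;&lt;/pre&gt;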
&lt;p&gt;Back in &lt;strong&gt;get_switches.py&lt;/strong&gt;, the script runs the &lt;strong&gt;fabric.get_switches()&lt;/strong&gt; function and assigns the output to a variable called &lt;strong&gt;switches&lt;/strong&gt;. A little checking is then performed to verify that the returned data is a Python list; if it is, the script proceeds. You&apos;ll receive a lot of information back in the returned results, but you only need a few fields. So, set up a loop, grab the five fields that you need, and return the &lt;strong&gt;switch_data&lt;/strong&gt; array. By creating this action, you can now reference it in workflows.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Workflows&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Workflows are YAML files that can run one or many actions. &lt;strong&gt;Orquesta&lt;/strong&gt;, StackStorm&apos;s new workflow application, allows you to do some fairly complex workflows. If you&apos;re interested in learning more, check out &lt;a href=&quot;https://docs.stackstorm.com/orquesta/index.html&quot;&gt;Orquesta&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Typically, a &lt;strong&gt;sensor&lt;/strong&gt; would identify an event and fire a &lt;strong&gt;trigger&lt;/strong&gt;, and that trigger would cause a &lt;strong&gt;workflow&lt;/strong&gt; to spring into action. Getting to know StackStorm is like eating an elephant: you&apos;ll have to take one bite at a time. Because of this, I will discuss sensors, triggers, and rules in another blog.&lt;/p&gt;
&lt;p&gt;Earlier, I talked about how actions need the same name, but different file types. Workflows will appear odd at first because they will have the exact same name &lt;strong&gt;and&lt;/strong&gt; file type. A workflow needs a YAML file, stored in the actions directory, just like the actions. It will also require &lt;strong&gt;another YAML&lt;/strong&gt; file with the exact same name stored in the &lt;strong&gt;/actions/workflows&lt;/strong&gt; directory. Don&apos;t worry. After you make several hundred StackStorm integration packs, you will get used to this. My little &quot;trick&quot; to help keep things straight in my head is to use an underscore in the title. For example, for an action, I would use &lt;strong&gt;do_this.yaml&lt;/strong&gt; and for a workflow I would use &lt;strong&gt;dothis.yaml&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;OK, I&apos;m getting to the end, so let me wrap up with this example. Here, I have an action, &lt;strong&gt;get_switches&lt;/strong&gt;, but it&apos;s just an action. If I want to really automate, I need a workflow to run this action and then do something with the results. Below you will find the &lt;strong&gt;getswitches&lt;/strong&gt; workflow. We start with the introductory YAML, which points us to the entry_point, another YAML file called getswitches.yaml.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
name: getswitches
description: A workflow for getting plexxi switches into stackstorm.
runner_type: orquesta
entry_point: workflows/getswitches.yaml
enabled: true
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here is the &lt;strong&gt;getswitches&lt;/strong&gt; workflow YAML file, stored in the actions/workflows directory.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;version: 1.0

description: A workflow to copy switch inventory from hpecfm to ServiceNow.

tasks:
  getswitches:
    action: hpecfm.get_switches
    next:
      - when: &amp;#x3C;% succeeded() %&gt;
        publish:
          - switches: &amp;#x3C;% result().result %&gt;
        do: sendsnow

  sendsnow:
    action: hpecfm.sendsnow switches=&amp;#x3C;% ctx().switches %&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When the workflow runs, it calls the &lt;strong&gt;get_switches&lt;/strong&gt; action. We just have to prefix the action with the name of the pack that contains it. Once it has the Python list from the action, it can &lt;strong&gt;publish&lt;/strong&gt; it in the &lt;strong&gt;context&lt;/strong&gt; in an array called switches. The context is a strange concept at first, but it&apos;s just a place for saving things so other tasks can read them. The workflow goes on to run a final task called &lt;strong&gt;sendsnow&lt;/strong&gt;. You can see that &lt;strong&gt;sendsnow&lt;/strong&gt; (no underscore) is another &lt;strong&gt;workflow&lt;/strong&gt; in the hpecfm StackStorm pack. Yes, workflows can call other workflows. I&apos;ll just pause here a second and let that sink in…&lt;/p&gt;
&lt;p&gt;This is the introductory YAML file for &lt;strong&gt;sendsnow&lt;/strong&gt;. It tells StackStorm to run the sendsnow workflow and, via the &lt;strong&gt;parameters&lt;/strong&gt; tag, tells the workflow to expect an array (Python list) called &lt;strong&gt;switches&lt;/strong&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
name: sendsnow
description: A workflow for sending plexxi switches to servicenow.
runner_type: orquesta
entry_point: workflows/sendsnow.yaml
enabled: true
parameters:
  switches:
    required: true
    type: array
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I use &lt;strong&gt;with&lt;/strong&gt; and &lt;strong&gt;item&lt;/strong&gt; to iterate through the switches array. Remember, StackStorm saved the array in the context referenced by ctx().&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;version: 1.0

description: &quot;Send hpecfm switches to snow&quot;

input:
  - switches

tasks:

    snowswitches:
      with: &amp;#x3C;% ctx().switches %&gt;
      action: servicenow.create_record table=&quot;u_cfm_asset&quot; payload=&apos;&amp;#x3C;% item() %&gt;&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;What is interesting to note is that the final action being called lives in another StackStorm integration pack called &lt;strong&gt;servicenow&lt;/strong&gt;. This means I have the ServiceNow integration pack installed, and it has an action called create_record. StackStorm integration packs are freely available on the StackStorm Exchange &lt;a href=&quot;https://exchange.stackstorm.org/&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Note that, in this example, I have workflows that can call any series of actions (or workflows) from a variety of integration packs, which are just waiting for you to use them. If you are not using StackStorm, guess what? You are going to have to write all those Python scripts yourself. Why reinvent the wheel? Using StackStorm, you can save yourself from having to write thousands of lines of Python and instead just use a handful of simple YAML files to stitch things together.&lt;/p&gt;
&lt;p&gt;After this workflow runs, all of the switches in the network are added to an asset inventory tracking database in Service Now. You could easily modify this for asset management of servers or storage devices as well. Keep an eye out on the &lt;a href=&quot;/blog&quot;&gt;HPE DEV blog site&lt;/a&gt; for my next post where I will be talking about sensors, triggers, and rules. Our journey into StackStorm is just getting started, so stay tuned, you seekers of knowledge!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Container Images for HPE OneView SDKs are now available]]></title><description><![CDATA[In keeping with Hewlett Packard Enterprise’s strategic vision on container use for hybrid IT, I am pleased to report that container images…]]></description><link>https://developer.hpe.com/container-images-for-hpe-oneview-sdks-are-now-available/</link><guid isPermaLink="false">https://developer.hpe.com/container-images-for-hpe-oneview-sdks-are-now-available/</guid><pubDate>Mon, 13 Jul 2020 18:18:16 GMT</pubDate><content:encoded>&lt;p&gt;In keeping with Hewlett Packard Enterprise’s strategic vision on container use for hybrid IT, I am pleased to report that container images of HPE OneView 5.2 SDKs are now available. Docker images of &lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-ansible&quot;&gt;Ansible&lt;/a&gt;, &lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-terraform&quot;&gt;Terraform&lt;/a&gt;, &lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-chef&quot;&gt;Chef&lt;/a&gt;, &lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-puppet&quot;&gt;Puppet&lt;/a&gt;, &lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-python&quot;&gt;Python&lt;/a&gt;, &lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-golang&quot;&gt;Golang&lt;/a&gt; and &lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-ruby&quot;&gt;Ruby&lt;/a&gt; SDKs are now all available on Docker Hub. All prerequisite materials are incorporated into the container images to enable streamlined deployment, which will allow you to simplify maintenance, improve infrastructure agility, and reduce costs.  In addition, you can expect all SDK releases in the future to incorporate an updated Docker image.&lt;/p&gt;
&lt;p&gt;Why this focus on containers? The advantage of containers is that they offer a virtual runtime environment that runs on top of a single operating system (OS) kernel. Containers virtualize at the operating system level, with multiple containers running atop the OS kernel directly, instead of virtualizing the hardware stack as the virtual machine approach does. This makes containers far more lightweight: they share the OS kernel, start much faster, and use a fraction of the memory compared to booting an entire OS.&lt;/p&gt;
&lt;p&gt;Container images include everything required to enable a streamlined deployment process. They eliminate the need to sort through complex support matrices and package dependencies, offering one succinct manifest that can be version controlled and that allows for easy replication across machines in a cluster. Container images also include the software dependencies needed by the SDK, such as specific versions of programming languages, runtimes, and other software libraries.&lt;/p&gt;
&lt;p&gt;Containerized SDKs simplify maintenance, as the images contain only the necessary package dependencies. In addition, they do not require a dedicated node or OS, which guarantees deployment consistency. Combined with a service-based architecture, the entire unit that developers are asked to reason about becomes much smaller, leading to greater agility and productivity. Using containers eases development, testing, and overall management.&lt;/p&gt;
&lt;p&gt;All this translates to reduced cost through increased productivity. Containers don’t only help developers: when containers are used, IT operations teams can focus on application deployment and management without bothering with details such as specific software versions and configurations. Teams spend less time debugging and diagnosing differences between environments, and more time shipping new functionality for users. Container use also means fewer bugs overall, since developers can make assumptions in dev and test environments that they can be sure will hold true in production.&lt;/p&gt;
&lt;p&gt;SDK Docker images are available in Docker Hub and GitHub SDK repositories. For more information, please refer to the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-terraform&quot;&gt;HPE OneView SDK for Terraform&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-ruby&quot;&gt;HPE OneView SDK for Ruby&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-python&quot;&gt;HPE OneView SDK for Python&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-puppet&quot;&gt;HPE OneView SDK for Puppet&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-chef&quot;&gt;HPE OneView SDK for Chef&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-golang&quot;&gt;HPE OneView SDK for Golang&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://hub.docker.com/repository/docker/hewlettpackardenterprise/hpe-oneview-sdk-for-ansible&quot;&gt;HPE OneView SDK for Ansible&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Kubernetes Application Containers: Managing Containers and Cluster Resources]]></title><description><![CDATA[Original Post Information: Application Containers – What They Are, What They Do, And Why They Matter  We’ll start with an overview of what…]]></description><link>https://developer.hpe.com/kubernetes-application-containers-managing-containers-and-cluster-resour/</link><guid isPermaLink="false">https://developer.hpe.com/kubernetes-application-containers-managing-containers-and-cluster-resour/</guid><pubDate>Fri, 10 Jul 2020 02:52:44 GMT</pubDate><content:encoded>&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Suzanne Ferry&quot;,
&quot;publish&quot;: &quot;2018-10-12T07:00:00.000Z&quot;,
&quot;tags&quot;: &quot;containers, kubernetes&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;h2&gt;Application Containers – What They Are, What They Do, And Why They Matter&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image10-1594350208331.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;We’ll start with an overview of what application containers are and how they are used in enterprises to help improve deployment time, consistency, and the efficiency and availability of applications. Along the way, we will cover key characteristics of what containers can and cannot do, and how they compare to virtual machines.&lt;/p&gt;
&lt;p&gt;We will also cover how Kubernetes is used to orchestrate containers and associated resources. We’ll discuss how Kubernetes schedules the deployment of containers, scales container resources, manages communication between applications in containers, and is used to monitor the health and availability of containers. Combining the HPE Ezmeral Data Fabric with application containers and Kubernetes makes applications fully capable of consuming and processing live, enterprise-level data.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products referenced are now part of the &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;HPE Ezmeral Data Fabric&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;What’s an Application Container?&lt;/h2&gt;
&lt;p&gt;An application container is a stand-alone, all-in-one package for a software application. Containers include the application binaries, plus the software dependencies and the hardware requirements needed to run, all wrapped up into an independent, self-contained unit.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image13-1594350297951.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Deploy Containers&lt;/h2&gt;
&lt;p&gt;The application container is dropped into a system, then runs using the local hardware and operating system. Since it includes all of the necessary dependencies, the container functions exactly the same when deployed on a laptop, on a server, on a virtual machine, in the cloud, or on any other compatible system.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image9-1594350308304.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Share Containers&lt;/h2&gt;
&lt;p&gt;As a self-contained package, an application container can easily be moved to a different system, or even be uploaded and downloaded using a software hub, to be shared with others.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image22-1594350318108.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Containerized Applications in the Enterprise Environment&lt;/h2&gt;
&lt;p&gt;Let’s take a look at an example of how and why you might containerize an application in an enterprise environment.&lt;/p&gt;
&lt;p&gt;This example tracks how containers are used in the development cycle of an app designed to monitor the health sensors on wearable devices. The sensor data is streamed to the customer’s health care provider network, which performs machine learning on the data, to look for warning signs of health issues.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image18-1594350327111.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Development of the health monitoring sensor tracking app is done by a team of developers, working on laptops. They commit their work to a software hub every day. For reasons that will become clear in a moment, let’s assume that the development laptops are all 3 - 4 years old, use spinning hard drives, have 16 GB of RAM, and each has its own specialized version of some flavor of Linux and an IDE. The application works flawlessly in their development environment.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image21-1594350336265.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Containerized Microservices&lt;/h2&gt;
&lt;p&gt;Because the health monitoring application is large and complex, it gets broken into microservices, in order to run more efficiently. Each service is then packaged into a separate application container.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image6-1594350345325.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Complete Containers&lt;/h2&gt;
&lt;p&gt;Each container includes the final code binaries for the individual services.&lt;/p&gt;
&lt;p&gt;Included are the dependencies for each service, such as the libraries and APIs needed to communicate with the sensors on any device the app supports, geolocation to know where the data is coming from, data streaming, web protocols, authentication with the health care system, and anything else needed for the app to run.&lt;/p&gt;
&lt;p&gt;Also included is a YAML (YAML Ain&apos;t Markup Language) file, which defines the CPU, RAM, network, and storage needs for each service.&lt;/p&gt;
&lt;p&gt;Each container includes only the dependencies it needs for the single service in it. The microservices approach allows each container to specialize for its service. Two services can even use different versions of the same library if needed, because the container environment allows them to function independently of each other.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image17-1594350356545.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Application Testing Environment&lt;/h2&gt;
&lt;p&gt;Once application development is complete, the containers are tested by the QA team.&lt;/p&gt;
&lt;p&gt;The QA environment uses a dozen nodes with CentOS, running a stripped-down version of the live environment. The servers are a few years old; they vary greatly in CPU speed, core count, amount of RAM, and network cards. Some have spinning disks, while others use SSDs.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image15-1594350365360.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Because containers include all the software dependencies each service needs, the application runs exactly the same on the QA servers as it did on the development laptops cited earlier.&lt;/p&gt;
&lt;p&gt;Containers define what hardware resources are required for each service to run. The server carves these resources out when the service is needed and provides them for the container. The service runs inside the containerized environment, which is spared from having its resources cannibalized by other applications on the server. When the service is no longer needed, the container is shut down and the resources are released back to the system.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image12-1594350391823.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Application Live Environment&lt;/h2&gt;
&lt;p&gt;When the application clears QA, it is ready to be released on the live environment, which, in this example, is made up of 100 nodes in a distributed cluster. These nodes use CentOS on the very latest hardware with 12 cores, 256 GB of RAM, SSDs, and gigabit network cards.&lt;/p&gt;
&lt;p&gt;Again, because the services are containerized with all of the dependencies they need, they run just as they did on the development laptops and QA servers, albeit quite a bit faster.  The application services run happily in their containers, safe from disturbance by any other applications on the system, pulling resources when needed and releasing them when not.&lt;/p&gt;
&lt;h2&gt;High Availability with Containers&lt;/h2&gt;
&lt;p&gt;To create high availability for the application, each service is scaled by spinning up multiple instances of the container image and distributing them across the cluster. Each container only includes a single service and its dependencies, which makes them very lightweight. Cloning and launching new copies of a container takes just seconds. Quickly, several copies of the services are spread out across the cluster, each performing the work sent to it and returning results, independent of the other copies in the environment.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image2-1594350677060.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Share Containers&lt;/h2&gt;
&lt;p&gt;To share containers, you could, for example, just upload them to the cloud, where they can be downloaded. Since the containers include everything the app needs to run, the app will function the same as it does on your own clusters.&lt;/p&gt;
&lt;h2&gt;How Containers and Virtual Machines Differ&lt;/h2&gt;
&lt;p&gt;You might be wondering how using containers to develop an application differs from running the application in a virtual machine.&lt;/p&gt;
&lt;p&gt;An application container is similar to, but not the same as, a virtual machine. A container is a package of the application and its dependencies. It will run on any compatible OS and hardware. Containers are designed to be streamlined application packages and nothing else. In the case of the example, the containers were streamlined down to microservices, so that each container is as lean and efficient as possible.&lt;/p&gt;
&lt;p&gt;A virtual machine, on the other hand, includes a complete copy of an operating system, along with any applications and software dependencies running on it. In addition, a virtual machine requires a hypervisor layer to talk to the host server. A VM can be capable of performing more complex tasks, but requires a much larger package, plus more overhead in time and resources from a server.&lt;/p&gt;
&lt;p&gt;An application container is a clean and effective package for running a single application as efficiently as possible.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image7-1594350685473.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Application Container Characteristics - Lightweight&lt;/h2&gt;
&lt;p&gt;Application containers consist of just an application, its software dependencies, and a small YAML file with its hardware requirements. They use the OS and infrastructure native to the system they are deployed onto. Therefore, a container is very lightweight when compared to other virtualization techniques, like VMs. They can be ported quickly to other environments and take just seconds to deploy, clone, or even relaunch in the case of a problem.&lt;/p&gt;
&lt;p&gt;While a container can include more than one application, that is not generally considered a best practice. An application container is a streamlined environment, designed to run an application efficiently. Adding more applications adds complexity and potential conflict to the package. In some cases, tightly coupled applications may share a container. In the vast majority of cases, however, an application will have its own specialized container.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image5-1594350700977.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Containers Are Scalable&lt;/h2&gt;
&lt;p&gt;Since a container only includes an application and its dependencies, it has a very small footprint on the server. Because of this small size, several copies of a container can be launched on each server.&lt;/p&gt;
&lt;p&gt;A higher application density leads to more efficient processing, as there are more copies of the application to distribute the workload.&lt;/p&gt;
&lt;p&gt;Availability of the app is also greatly improved. The loss of a single container will not affect the overall function of the service, when there are several others to pick up the lost production.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image11-1594350709356.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Containers Are Portable&lt;/h2&gt;
&lt;p&gt;The small size of an application container makes it quick to move between servers, up to a cloud, or to mirror to another cluster. In addition, a container is completely self-sufficient and has all of the resources that the application needs to run.&lt;/p&gt;
&lt;p&gt;When moving a container, it can just be dropped onto its new home and launched in a plug-and-play fashion. You do not need to test for compatibility or load additional software.&lt;/p&gt;
&lt;p&gt;The containerized app can even run directly off of an instance in the cloud, giving you absolute flexibility over availability and the run-time environment.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image4-1594350718828.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Containers Have an Ecosystem&lt;/h2&gt;
&lt;p&gt;The application container ecosystem is very large and diverse. Thousands of containerized applications are available on hubs to use as templates or foundations for proprietary apps. A simple, pre-built container with Apache Web Server can be downloaded, saving the development time and resources needed to create common services over and over.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image8-1594350727418.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Containers Help with Secure Multi-Tenancy&lt;/h2&gt;
&lt;p&gt;Because each application container creates an isolated environment for its application, the application sees the resources allocated to it as though they were the entire machine. Other copies of the same container are &quot;unaware&quot; of each other.&lt;/p&gt;
&lt;p&gt;As a result, it’s easy to have multiple different applications on the same server, all running simultaneously. Each application uses the resources assigned to it. There are no concerns about an application taking resources from another application and causing it to crash. When an application completes its work, resources are released back to the system.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image16-1594350736509.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Containers Offer Flexible Deployment&lt;/h2&gt;
&lt;p&gt;You now know that application containers are completely self-contained. Any application in a container runs in exactly the same manner on any hardware. Because the container includes all of its dependencies, it doesn’t require the OS to supply any code. As long as the hardware can match the needs of the application, it will happily chug along no matter where you put it.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image14-1594350746049.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Containers Are Easy to Maintain and Specialized for Microservices&lt;/h2&gt;
&lt;p&gt;When an application is containerized, it is saved as an image. As the application grows, and as changes are made, these changes are saved as a new image on top of the original one. Therefore, any time you need to push or pull an upgrade of a containerized application, all you need to move and install is the new image. You do not need the entire container each time.&lt;/p&gt;
&lt;p&gt;Containers are a highly specialized runtime environment, designed to run a single application as efficiently as possible. Because of this specialization, they are perfectly designed to break up large applications into microservices.&lt;/p&gt;
&lt;p&gt;Each microservice can run in its container when called upon, then release the resources back to the system when it is done. You can also define a different number of clones for each container, based on the use of each service, allowing for more copies of services that are used more frequently in your app.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image24-1594350756822.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;What Containers Are NOT – Persistent or Stateful&lt;/h2&gt;
&lt;p&gt;When a container is shut down, either intentionally or due to some failure, everything in the container goes down with it. You can save out results as part of your application, but other progress and data, like logs, are lost. Each time you need to use the application, or spin up a new copy, it starts from a new beginning state. The state of the application cannot be saved and built upon with a new iteration.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image3-1594350768170.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;What Containers Are NOT – Data-Aware&lt;/h2&gt;
&lt;p&gt;A container does not know where its data comes from, nor where it is going. The container is completely enclosed in its environment, and cannot see data sources for work or storage. External data can be accessed, and results from work can be saved out of the container environment, but the container has no idea what or where that data is.&lt;/p&gt;
&lt;p&gt;Therefore, the app in the container cannot take advantage of data locality to make sure the work and data are on the same node. This may result in extra movement of data and reduce the efficiency of the service in the container.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image20-1594350784627.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;What Containers Are NOT – Environment-Aware&lt;/h2&gt;
&lt;p&gt;A container is not aware that the hardware provisioned to it is only a slice of the server; as far as it can tell, it has the whole machine. This is helpful in that the containerized application is protected, but it also means that the application is not aware of, nor can it take advantage of, common services for enterprise applications, like networking with other apps, scheduling, distribution, and load balancing.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image19-1594350794873.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Container Summary&lt;/h2&gt;
&lt;p&gt;You have seen how application containers speed up the pipeline from development to delivery. They solve the problem of getting an app to work reliably across different environments, making development processes much more efficient and flexible.&lt;/p&gt;
&lt;p&gt;Breaking up large applications into microservices and putting them into containers provides fast and flexible services that can quickly be deployed and moved between systems.&lt;/p&gt;
&lt;p&gt;Containers are not a stand-alone solution, however, and have a few inherent limitations.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image1-1594350805136.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;Using Kubernetes to Manage Containers and Cluster Resources&lt;/h2&gt;
&lt;p&gt;This section discusses how Kubernetes manages containers and resources.&lt;/p&gt;
&lt;p&gt;Kubernetes is an environment that automates the orchestration of application containers. What does &quot;Kubernetes automated orchestration&quot; cover? It covers deployment, scaling, management, monitoring, and upgrades of individual, containerized microservices. Kubernetes takes care of all of the maintenance and tools around running application containers, so you can focus on application functionality.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image3-1594350972863.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Application Deployment&lt;/h2&gt;
&lt;p&gt;Deploying an application with Kubernetes requires just a single command. In the background, Kubernetes creates the runtime environment, requests the needed resources, handles the launch of the services, and provides each with an IP address. It also scales the containers across the cluster until each service is deployed to the level requested and maintains these levels 24/7.&lt;/p&gt;
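&lt;p&gt;To make this concrete, here is a minimal sketch (not from the original post) of the same idea using the official Kubernetes Python client; the deployment name, labels, image, and replica count are made-up placeholders:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from kubernetes import client, config

# Load credentials from the local kubeconfig (e.g., ~/.kube/config).
config.load_kube_config()
apps = client.AppsV1Api()

# Describe the desired state: three replicas of a hypothetical streaming service.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name=&quot;sensor-stream&quot;, labels={&quot;app&quot;: &quot;sensor-stream&quot;}),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={&quot;app&quot;: &quot;sensor-stream&quot;}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={&quot;app&quot;: &quot;sensor-stream&quot;}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name=&quot;sensor-stream&quot;, image=&quot;example/sensor-stream:1.0&quot;)
            ]),
        ),
    ),
)

# Kubernetes creates the Pods, assigns each an IP, and keeps three copies running.
apps.create_namespaced_deployment(namespace=&quot;default&quot;, body=deployment)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The same desired state is more often written as a YAML deployment file and applied with a single &lt;code&gt;kubectl apply&lt;/code&gt; command; the client call above is just the programmatic equivalent.&lt;/p&gt;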
&lt;h2&gt;Application Scaling&lt;/h2&gt;
&lt;p&gt;You decide how many clones of each service are needed. Because the services are containerized, you can set different levels for different parts of the app. When you first deploy, you calculate some starting numbers for each service. Kubernetes makes sure each service is running the correct number of copies. If there are too few, it will launch more. If there are too many, it will kill a few until the correct number are running.&lt;/p&gt;
&lt;h2&gt;Application Scale&lt;/h2&gt;
&lt;p&gt;Suppose you determine that there are too many copies of a service running and they are sitting dormant, or that application usage has increased and you need more copies to handle the load. You can change the settings on the deployment file, redeploy, and Kubernetes will update the number of each running service to meet the new requirements.&lt;/p&gt;
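&lt;p&gt;As an illustrative sketch (again assuming the hypothetical &lt;code&gt;sensor-stream&lt;/code&gt; deployment from the earlier example), updating the replica count with the Kubernetes Python client might look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Raise the desired replica count from 3 to 5; Kubernetes launches the extra copies
# (or, if the count were lowered, it would terminate the surplus ones).
apps.patch_namespaced_deployment(
    name=&quot;sensor-stream&quot;,
    namespace=&quot;default&quot;,
    body={&quot;spec&quot;: {&quot;replicas&quot;: 5}},
)
&lt;/code&gt;&lt;/pre&gt;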
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image9-1594350995532.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;High Availability&lt;/h2&gt;
&lt;p&gt;Kubernetes watches how many copies of each service are up. If a container has a failure and goes down, Kubernetes launches a new copy. Kubernetes continually verifies that the number of each service on the system matches what was requested.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image8-1594351003832.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;If an entire server goes down, Kubernetes redeploys the missing containers on other nodes, again until the number of services running matches the defined limits. You can rest assured that your app will maintain the level of availability you require, as long as your data center itself is active.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image7-1594351011817.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Load Balancing&lt;/h2&gt;
&lt;p&gt;Kubernetes continuously monitors the usage of containers across nodes, verifying that the work is evenly distributed. If it finds an underused container or resource, it moves work to that resource, and may even move copies of a container to underused hardware.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image2-1594351020037.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Communication&lt;/h2&gt;
&lt;p&gt;When applications are broken into microservices, the individual services need to talk to each other, in order to pass along client information. Kubernetes creates a service within itself to enable the different microservices to communicate. This communication service determines which containers can use it, based on labels on the container, and then defines a port that can be used by any container with that label.&lt;/p&gt;
&lt;p&gt;As a service reads data from a wearable device on a customer, it will pass that data to the other services in the app that will stream the data, authenticate it with the health-care provider, and so on. Each instance of any service can use the same port to communicate with the other microservices in the app or any other services on the cluster that it needs.&lt;/p&gt;
&lt;p&gt;The communication service in Kubernetes is persistent, independent of the services that use it. If a container goes down or a new container is spun up, the service will continue to be available at its port to any application with the correct label.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image5-1594351028278.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Multi-Tenancy&lt;/h2&gt;
&lt;p&gt;Let&apos;s consider the example of a health-monitoring application, serving thousands of users, sending data to a variety of health-care providers. With Kubernetes, the services could be divided up by health-care provider. Each provider could offer a differing number of services, based on usage, or could even provide variations on a service to a client, based on that client&apos;s particular needs.&lt;/p&gt;
&lt;p&gt;For example, say that this application spins up three copies of the app for users of Mega-Health, but provides four copies to Health R Us because they have a larger customer base. In addition, Health R Us uses a communication protocol different from Mega-Health – so, a separate microservice is used to connect to their system.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image4-1594351052333.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Rolling Upgrades&lt;/h2&gt;
&lt;p&gt;When an application update is ready to roll out, the Kubernetes deployment file needs to be updated with the new information.&lt;/p&gt;
&lt;p&gt;Kubernetes will gradually kill existing containers with the current version of the app and spin up new containers with the updated version, until all containers for that service are running the new version.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image6-1594351071753.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Rolling Downgrades&lt;/h2&gt;
&lt;p&gt;If there is a problem along the way, you can roll back the upgrade with a single command. Kubernetes will gradually kill containers with the new 2.0 version of the app and replace them with new instances of the older 1.0 version.&lt;/p&gt;
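&lt;p&gt;As a rough sketch of what such an upgrade looks like programmatically (using the same hypothetical &lt;code&gt;sensor-stream&lt;/code&gt; deployment and image names as the earlier examples), updating the image triggers the rolling replacement described above:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Point the container at the 2.0 image; Kubernetes gradually replaces the Pods
# running 1.0 with new Pods running 2.0 until the rollout is complete.
apps.patch_namespaced_deployment(
    name=&quot;sensor-stream&quot;,
    namespace=&quot;default&quot;,
    body={&quot;spec&quot;: {&quot;template&quot;: {&quot;spec&quot;: {&quot;containers&quot;: [
        {&quot;name&quot;: &quot;sensor-stream&quot;, &quot;image&quot;: &quot;example/sensor-stream:2.0&quot;}
    ]}}}},
)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;A rollback works the same way in reverse, by patching the image back to 1.0 or by using &lt;code&gt;kubectl rollout undo&lt;/code&gt;.&lt;/p&gt;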
&lt;hr&gt;
&lt;p&gt;This third section provides an overview of Kubernetes architecture.&lt;/p&gt;
&lt;h2&gt;Kubernetes Architecture&lt;/h2&gt;
&lt;p&gt;Here you can see the standard architecture for running application containers with Kubernetes.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image15-1594351169191.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Kubernetes Object&lt;/h2&gt;
&lt;p&gt;An object in Kubernetes is anything that persists in the system, such as Pods, deployment records like ReplicationControllers, ReplicaSets, and Deployment controllers. Objects are used to define the desired state of the Kubernetes system.&lt;/p&gt;
&lt;p&gt;Pods define what is to be on the system; deployments define parameters of the Pods, such as how many copies; and services define their connections. When objects are created, Kubernetes will ensure that the objects persist, and the system matches what is defined by the objects.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image10-1594351181271.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Kubernetes Pod&lt;/h2&gt;
&lt;p&gt;A Pod is the basic unit of organization in Kubernetes. A Pod will contain one or more containers, all of which share storage, network, and deployment specifications. Everything in a Pod will be deployed together, at the same time, in the same location, and will share a schedule along with the deployment parameters.&lt;/p&gt;
&lt;p&gt;A Pod can contain more than one container, but in most practical applications, each Pod contains just a single container. Similar to the container itself, a Pod is a specialized environment. Each Pod has its own unique specifications, and those are usually optimized for the container in the Pod. Tightly coupled service containers that all perform a similar function, such as file locators or controllers, may share a Pod, but in most cases a single container keeps its environment optimized for the service it runs.&lt;/p&gt;
&lt;p&gt;If you do have multiple containers within a single Pod, they share an IP address that they can use to communicate outside of the Pod, and they can see each other through localhost.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image8-1594351189416.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Kubernetes Name&lt;/h2&gt;
&lt;p&gt;Each object in Kubernetes is given a Name. The name of each object is provided to Kubernetes in the deployment record. Object names need to be unique within a namespace, but can be reused across separate namespaces in the system. If an object is removed from the system, the name can freely be used by another object.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image16-1594351201487.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Kubernetes Name Example&lt;/h2&gt;
&lt;p&gt;For example, let’s say you have a Pod that contains AwesomeApp. You want 3 instances of the Pod, so you name the Pod objects AwesomeApp1, AwesomeApp2, and AwesomeApp3. You decide that you want another copy, so you spin up a new instance of the Pod. Since it is in the same namespace as the other instances, you name it AwesomeApp4. Something happens along the line, and service AwesomeApp2 fails. Kubernetes kills the Pod that contains AwesomeApp2 and spins up a new copy. The name AwesomeApp2 is free to use now, since there is no object in the namespace with that name.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image11-1594351210989.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;If you create another namespace in the system, you are free to use the names AwesomeApp1, AwesomeApp2, and AwesomeApp3 again in this space.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image5-1594351227395.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Kubernetes UID&lt;/h2&gt;
&lt;p&gt;The UID is a unique, internal identifier for each object in the system. The UID is defined by Kubernetes when the object is created and is used by the system to differentiate between clones of the same object.&lt;/p&gt;
&lt;p&gt;For example, say you have a Pod that contains AwesomeApp. You want 3 instances of the Pod, so let’s say Kubernetes assigns them the UIDs 001, 002, and 003. If you make a new namespace and deploy 3 new Pods into that space, each new Pod will be given its own unique UID. The UID of each object must be unique on the system, even across namespaces.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image14-1594351236821.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Kubernetes Label&lt;/h2&gt;
&lt;p&gt;Labels are key-value pairs used to identify and describe objects, such as &quot;version:1.1&quot; or &quot;cluster:sf&quot;. An object can have as many labels as you want, but cannot have more than one of the same key.&lt;/p&gt;
&lt;p&gt;Labels are not used by the Kubernetes system, but provide a way for users to organize and map the objects in the system. For example, you can use them to perform an action on all Pods with the label &quot;version:1.1&quot; or a different action on all Pods with the label &quot;cluster:tokyo&quot;.&lt;/p&gt;
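&lt;p&gt;For instance (a small sketch with hypothetical label values, not from the original post), the Python client can filter objects by label selector:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# List only the Pods carrying the label version=1.1, then act on them as a group.
pods = core.list_namespaced_pod(namespace=&quot;default&quot;, label_selector=&quot;version=1.1&quot;)
for pod in pods.items:
    print(pod.metadata.name, pod.metadata.labels)
&lt;/code&gt;&lt;/pre&gt;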
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image12-1594351250937.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Kubernetes ReplicaSet (ReplicationController)&lt;/h2&gt;
&lt;p&gt;ReplicaSets and ReplicationControllers are used to ensure that the correct number of Pods are running. The ReplicaSet defines what Pods are in it and how many copies of the Pods are to exist across the system. The desired number of Pods in the system can easily be scaled up or down by updating the ReplicaSet. Kubernetes then updates the number of Pods on the system to match the new specifications.&lt;/p&gt;
&lt;p&gt;A ReplicaSet defines the Pods in it by the container image and one or more labels. Therefore, a different ReplicaSet can be made for different uses of the same container. For example, you can decide to have three copies of a container on your cluster in San Francisco, but only two copies of the same container on the Tokyo cluster. Alternatively, you could have a ReplicaSet for your San Francisco cluster running version 1.1 of a service, while the Tokyo copies run version 1.2 of the same service.&lt;/p&gt;
&lt;p&gt;ReplicaSets allow you to customize the number and types of services for your app, running across the environment.&lt;/p&gt;
&lt;p&gt;ReplicaSet recently replaced the ReplicationController. Both perform the same function in a Kubernetes system.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image9-1594351262324.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Kubernetes Deployment Controller&lt;/h2&gt;
&lt;p&gt;The Deployment Controller defines the state of Deployment Objects, like Pods and ReplicaSets. In the example of deploying the health care app in a Kubernetes environment, we commonly referred to the state of the system being updated to match a defined state. Deployments are the object used to define and maintain the state of the system. Deployments are used to create and deploy, scale, monitor, roll back, and otherwise manage the state of Pods and ReplicaSets on the system.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image7-1594351271206.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Kubernetes Namespace&lt;/h2&gt;
&lt;p&gt;Namespaces are useful in multi-tenant systems, to divide resources among different users of the system. Similar Pods can be deployed in different namespaces with access restrictions to different user groups of the system, each providing unique specifications for that group.&lt;/p&gt;
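&lt;p&gt;As a small, hypothetical sketch of that idea with the Kubernetes Python client, each tenant could be given its own namespace (the provider name below is made up):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Create a namespace per tenant so identically named Pods, quotas, and access
# controls can exist side by side without colliding.
core.create_namespace(
    body=client.V1Namespace(metadata=client.V1ObjectMeta(name=&quot;mega-health&quot;))
)
&lt;/code&gt;&lt;/pre&gt;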
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image6-1594351280707.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Kubernetes Service&lt;/h2&gt;
&lt;p&gt;When you launch a container, it is given an IP address to communicate in and out. This is fine for the life of the container, but each container has its own IP address, and the life cycle of any given container is not consistent.&lt;/p&gt;
&lt;p&gt;A Kubernetes Service solves this by opening a stable port through which application containers can reach the other apps in the system. This port remains open and consistent, even as instances of the containers are started and stopped. Therefore, communication between your application and other services on the system remains available as long as your system is running.&lt;/p&gt;
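&lt;p&gt;A minimal sketch of such a Service with the Python client (reusing the hypothetical &lt;code&gt;sensor-stream&lt;/code&gt; labels and ports, which are not from the original post) might look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# The Service gives every Pod matching the selector one stable name and port,
# no matter how often the individual Pods (and their IPs) come and go.
service = client.V1Service(
    metadata=client.V1ObjectMeta(name=&quot;sensor-stream&quot;),
    spec=client.V1ServiceSpec(
        selector={&quot;app&quot;: &quot;sensor-stream&quot;},
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)
core.create_namespaced_service(namespace=&quot;default&quot;, body=service)
&lt;/code&gt;&lt;/pre&gt;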
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image3-1594351299565.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Kubernetes Volume&lt;/h2&gt;
&lt;p&gt;Kubernetes Volumes solve the problem containers have with persistent data storage, at least to a point. A volume in Kubernetes is associated with the Pod, not the container within the Pod. Therefore, a container can fail and be relaunched, and the Volume will persist. The relaunched container can pick up where the failed one left off. If the Pod is killed, however, the Volume will be removed with it.&lt;/p&gt;
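&lt;p&gt;Here is a brief, hypothetical sketch of a Pod-scoped volume with the Python client (an &lt;code&gt;emptyDir&lt;/code&gt; volume, which lives and dies with the Pod exactly as described above):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# The emptyDir volume belongs to the Pod, not the container: a restarted container
# in the same Pod still sees /data, but deleting the Pod deletes the volume too.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name=&quot;sensor-stream-worker&quot;),
    spec=client.V1PodSpec(
        volumes=[client.V1Volume(name=&quot;scratch&quot;, empty_dir=client.V1EmptyDirVolumeSource())],
        containers=[client.V1Container(
            name=&quot;worker&quot;,
            image=&quot;example/sensor-stream:1.0&quot;,
            volume_mounts=[client.V1VolumeMount(name=&quot;scratch&quot;, mount_path=&quot;/data&quot;)],
        )],
    ),
)
core.create_namespaced_pod(namespace=&quot;default&quot;, body=pod)
&lt;/code&gt;&lt;/pre&gt;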
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image13-1594351309586.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Kubernetes Architecture Summary&lt;/h2&gt;
&lt;p&gt;We’ve seen how application containers make the development pipeline faster and provide a specialized environment for apps to run more efficiently and consistently across systems.&lt;/p&gt;
&lt;p&gt;We’ve also seen how Kubernetes orchestrates the use of containers at the enterprise level. With just a simple deployment file, and an occasional command, Kubernetes manages the deployment, scale, partitioning, distribution, and availability of containerized applications.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image4-1594351346680.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;This section discusses how Kubernetes and Containers work on MapR.&lt;/p&gt;
&lt;p&gt;You’re almost done with your introduction to using application containers and Kubernetes. The last part of your journey is to see how to put this all together in a real-world, enterprise environment.&lt;/p&gt;
&lt;h2&gt;Application Container Example&lt;/h2&gt;
&lt;p&gt;Throughout this Kubernetes blog series, we’ve used a fictional health care app to demonstrate how application containers and Kubernetes can be used in an enterprise environment. Let’s now take a look at a couple of different ways you can use the tools in the MapR Data Platform to deploy and manage such an app on a single platform and take advantage of unique features available with MapR.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image15-1594351490580.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;MapR Solution: On-Premises&lt;/h2&gt;
&lt;p&gt;The scenario in this blog series involves streaming data to a health provider cluster for processing. In this example, the solution means processing the IoT data in an on-premises cluster.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image6-1594351501203.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Stream Sensor Data to MapR Cluster&lt;/h2&gt;
&lt;p&gt;The data is created by a wearable device. An app on that device uses the MapR Event Store service to stream the data to on-premises storage at the customer’s health care system.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image16-1594351510942.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Store Data in POSIX-Compliant Distributed File System&lt;/h2&gt;
&lt;p&gt;The IoT data is streamed directly into the MapR XD POSIX-compliant distributed file and object store and saved in their native JSON, CSV, or Parquet format.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image13-1594351527979.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Process Data Natively on MapR&lt;/h2&gt;
&lt;p&gt;Using both files saved in MapR XD and new live streaming, we can process the data with Spark to compare live information to legacy data saved in the system.&lt;/p&gt;
&lt;p&gt;In addition, MapR XD allows for the different sources of data to be tiered, based on level of access, giving faster access to data that is used more frequently.&lt;/p&gt;
&lt;p&gt;Kubernetes can spin up containerized machine learning apps on MapR to analyze the data natively as it streams in, all on the same cluster, saving the time of transforming the data before it is processed. In addition, MapR can scale the compute and storage independently. If more data is coming in, you can add more application containers on MapR to support the increased demand.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image3-1594351538907.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;MapR Solution: Cloud&lt;/h2&gt;
&lt;p&gt;Alternatively, MapR provides an all cloud-based solution to the wearable health care app. In this solution, we containerize the MapR Data Platform and move it to the data, rather than bringing the data to the MapR cluster. This is vastly more efficient, as data can be processed where it is created, saving time and resources needed to move the data to the cluster.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image5-1594351546333.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Stream Data to the Cloud&lt;/h2&gt;
&lt;p&gt;Wearable IoT devices create data and send it to the cloud, using the MapR Event Store for Apache Kafka services. A cloud provider close to the device reduces data transfer time.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image9-1594351553676.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Deploy Containerized MapR on Cloud&lt;/h2&gt;
&lt;p&gt;The MapR Data Platform is spun up in a container in the same cloud environment. The MapR Data Platform is broken into microservices, and the cluster can be spun up in just seconds. The MapR Event Store ingests the data into MapR XD on the cloud.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image14-1594351561392.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;As clients in other areas create data on their wearable devices, MapR Event Store sends that data to cloud platforms hosted nearby. Containerized MapR clusters are spun up in those cloud environments as well, ingesting the data into MapR XD.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image7-1594351569349.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Process Natively in the Cloud&lt;/h2&gt;
&lt;p&gt;Spark processes the data close to where it was created, greatly reducing time and resources needed to move the data.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image2-1594351576898.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Global Namespace Views Content as a Single Source&lt;/h2&gt;
&lt;p&gt;The MapR Global Namespace allows all of this data to be processed at its local center, yet viewed together as though it were coming from a single cluster, without the need to move any data between clouds or to on-premises storage.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image8-1594351584377.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;MapR Data Platform Components&lt;/h2&gt;
&lt;p&gt;The following components of the MapR Data Platform can be used to make our health care app example functional, using application containers and Kubernetes.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;MapR Data Platform&lt;/li&gt;
&lt;li&gt;MapR Event Store for Apache Kafka&lt;/li&gt;
&lt;li&gt;MapR Distributed File and Object Store (MapR XD)&lt;/li&gt;
&lt;li&gt;Cloud Integration&lt;/li&gt;
&lt;li&gt;Global Namespace&lt;/li&gt;
&lt;li&gt;Live Data Processing&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;MapR Data Platform&lt;/h2&gt;
&lt;p&gt;The MapR Data Platform is a single, complete platform that supports enterprise-level data storage and processing needs.&lt;/p&gt;
&lt;p&gt;Just as containers provide a self-contained environment for an app to run as efficiently as possible, the MapR Data Platform provides a single environment for streaming, ingesting, processing, and storing data from the edge, IoT devices, the cloud, on-premises, or any combination of data sources and types.&lt;/p&gt;
&lt;p&gt;Just as Kubernetes handles all of the orchestration and maintenance of application containers, the MapR Data Platform handles all of the orchestration, distribution, scale, connectivity, replication, security, and high availability of your data and processing. MapR will take care of the maintenance, and your team can focus on the results.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image4-1594351592863.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;MapR Event Store for Apache Kafka&lt;/h2&gt;
&lt;p&gt;MapR Event Store for Apache Kafka provides a platform for streaming data live from IoT devices. With MapR, you can stream data at an enterprise level, easily handling data from all of your customer&apos;s wearable devices.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image17-1594351600815.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;MapR XD&lt;/h2&gt;
&lt;p&gt;MapR XD is a POSIX-compliant distributed file and object store. This allows you to directly ingest data from IoT devices in their native JSON, CSV, or Parquet formats, then directly query this data with Apache Drill, without spending any time or resources processing the data. You can also include large binary files like images, audio, or video for something like a security app that monitors a streaming CCTV feed. All of your data can be processed as it is streaming and gains the high availability, security, and replication advantages of a MapR cluster.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image11-1594351610742.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;MapR Data Storage Solutions: Cloud Integration&lt;/h2&gt;
&lt;p&gt;MapR natively supports data storage and application containers in all major cloud providers.&lt;/p&gt;
&lt;p&gt;The IoT data from your customer devices can be streamed to a cloud storage environment. From there, it can be accessed for processing from an on-premises cluster or even processed directly, using containers that are deployed in the same, or a different, cloud.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image18-1594351618408.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;MapR Data Storage Solutions: Global Namespace&lt;/h2&gt;
&lt;p&gt;MapR provides a global namespace for all these different data sources used on the platform. This global namespace allows all data sources that are used by the application containers on your cluster to be seen as coming from a single source. Therefore, data does not have to be moved or copied, saving valuable time and resources. In addition, live streaming data and data saved in the cloud or on-premises can all be processed together, without the need for any preprocessing or consolidation.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image1-1594351626084.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Live Data Processing&lt;/h2&gt;
&lt;p&gt;In our fictional application, we stream the data to on-premises storage at the customer’s health care system. MapR can spin up machine learning applications to analyze the data as it streams in, all on the same cluster, saving the time of copying or transferring the data. In addition, MapR can scale the compute and storage independently. If more data is coming in, just add more application containers to support the increased demand.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image12-1594351634768.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;MapR Data Platform&lt;/h2&gt;
&lt;p&gt;All of these tools in the MapR Data Platform share the same high availability, security, and replication technology, consistent across MapR. And with the global namespace available with MapR, containerized apps finally have a persistent data source that remains throughout the lifetime of the cluster.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image4-1594351645052.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products referenced are now part of the &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;HPE Ezmeral Data Fabric&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Tips and Best Practices to Take Advantage of Spark 2.x]]></title><description><![CDATA[Original post information: Editor’s Note: MapR products referenced are now part of the HPE Ezmeral Data Fabric.  With Apache Spark 2.0 and…]]></description><link>https://developer.hpe.com/tips-and-best-practices-to-take-advantage-of-spark-2x/</link><guid isPermaLink="false">https://developer.hpe.com/tips-and-best-practices-to-take-advantage-of-spark-2x/</guid><pubDate>Wed, 08 Jul 2020 05:54:32 GMT</pubDate><content:encoded>&lt;h2&gt;Original post information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Carol McDonald&quot;,
&quot;publish&quot;: &quot;2019-02-05T07:00:00.000Z&quot;,
&quot;tags&quot;: &quot;spark&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products referenced are now part of the &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;HPE Ezmeral Data Fabric&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image1-1594188264230.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;With Apache Spark 2.0 and later versions, big improvements were implemented to enable Spark to execute faster, making many earlier tips and best practices obsolete. This blog post will first give a quick overview of what changed and then offer some tips for taking advantage of these changes.&lt;/p&gt;
&lt;h2&gt;Project Tungsten&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://databricks.com/glossary/tungsten&quot;&gt;Tungsten&lt;/a&gt; is the code name for the Spark project that makes changes to Apache Spark’s execution engine, focusing on improvements to the efficiency of memory and CPU usage.  Tungsten builds upon ideas from modern compilers and massively parallel processing (MPP) technologies, such as Apache Drill, &lt;a href=&quot;https://prestodb.io/docs/current/overview/concepts.html%23query-execution-model&quot;&gt;Presto&lt;/a&gt;, and &lt;a href=&quot;https://arrow.apache.org/&quot;&gt;Apache Arrow&lt;/a&gt;.  Spark 2.x improvements include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;To reduce JVM object memory size, creation, and garbage collection processing, Spark explicitly manages memory and converts most operations to operate directly against binary data.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://arrow.apache.org/&quot;&gt;Columnar layout for memory data&lt;/a&gt; avoids unnecessary I/O and accelerates analytical processing performance on modern CPUs and GPUs.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image10-1594188764938.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Vectorization allows the CPU to operate on vectors, which are arrays of column values from multiple records. This takes advantage of modern CPU designs, by keeping all pipelines full to achieve efficiency.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image9-1594188775215.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;To improve the speed of data processing through more effective use of L1/ L2/L3 CPU caches, Spark algorithms and data structures exploit memory hierarchy with cache-aware computation.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Spark SQL’s Catalyst Optimizer underpins all the major new APIs in Spark 2.0 and later versions, from &lt;a href=&quot;https://databricks.com/glossary/what-are-dataframes&quot;&gt;DataFrames&lt;/a&gt; and &lt;a href=&quot;https://databricks.com/glossary/what-are-datasets&quot;&gt;Datasets&lt;/a&gt; to Structured Streaming.  The Catalyst optimizer handles: analysis, logical optimization, physical planning, and code generation to compile parts of queries to Java bytecode.  Catalyst now supports both rule-based and cost-based optimization.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image12-1594188785209.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Spark SQL “Whole-Stage Java Code Generation” optimizes CPU usage by generating a single optimized function in bytecode for the set of operators in a SQL query (when possible), instead of generating iterator code for each operator.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image11-1594188795498.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Tips for Taking Advantage of Spark 2.x Improvements&lt;/h2&gt;
&lt;h3&gt;Use Dataset, DataFrames, Spark SQL&lt;/h3&gt;
&lt;p&gt;In order to take advantage of Spark 2.x, you should be using Datasets, DataFrames, and Spark SQL, instead of RDDs.  Datasets, DataFrames, and Spark SQL provide the following advantages:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Compact columnar memory format&lt;/li&gt;
&lt;li&gt;Direct memory access&lt;/li&gt;
&lt;li&gt;Reduced garbage collection processing overhead&lt;/li&gt;
&lt;li&gt;Catalyst query optimization&lt;/li&gt;
&lt;li&gt;Whole-stage code generation&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;When possible, use Spark SQL functions – for example, &lt;code&gt;to_date()&lt;/code&gt;, &lt;code&gt;hour()&lt;/code&gt; – instead of custom UDFs in order to benefit from the advantages above.&lt;/p&gt;
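&lt;p&gt;For example, here is a small PySpark sketch (the column name and sample values are made up for illustration) that uses the built-in functions rather than a Python UDF, so the work stays inside Catalyst and whole-stage code generation:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from pyspark.sql import SparkSession
from pyspark.sql.functions import to_date, hour

spark = SparkSession.builder.appName(&quot;builtin-functions&quot;).getOrCreate()

df = spark.createDataFrame(
    [(&quot;2019-02-05 07:30:00&quot;,), (&quot;2019-02-05 18:45:00&quot;,)],
    [&quot;departure_ts&quot;],
)

# Built-in functions are optimized by Catalyst; an equivalent Python UDF would
# force row-by-row serialization between the JVM and Python instead.
result = df.select(
    to_date(&quot;departure_ts&quot;).alias(&quot;departure_date&quot;),
    hour(&quot;departure_ts&quot;).alias(&quot;departure_hour&quot;),
)
result.show()
&lt;/code&gt;&lt;/pre&gt;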
&lt;p&gt;Datasets provide the advantage of compile time type safety over DataFrames. However, Dataset functional transformations (like map) will not take advantage of  query optimization, whole-stage code generation, and reduced GC.&lt;/p&gt;
&lt;h3&gt;Use the Best Data Store for Your Use Case&lt;/h3&gt;
&lt;p&gt;Spark supports several data formats, including CSV, JSON, ORC, and Parquet, and several data sources and connectors, including popular NoSQL databases and distributed messaging stores.&lt;/p&gt;
&lt;p&gt;But just because Spark supports a given data storage or format doesn’t mean you’ll get the same performance with all of them. Typically, data pipelines will involve multiple data sources and sinks and multiple formats to support different use cases and different read/write latency requirements. Here are some guidelines:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;File data stores are good for write-once (append-only), read-many use cases. CSV and JSON data formats give excellent write path performance but are slower for reading; these formats are good candidates for collecting raw data, for example logs, which require high-throughput writes. Parquet is slower for writing but gives the best performance for reading; this format is good for BI and analytics, which require low-latency reads.&lt;/li&gt;
&lt;li&gt;Apache HBase and MapR Database (now part of &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;HPE Ezmeral Data Fabric&lt;/a&gt;) are good for random read/write use cases. MapR Database supports consistent, predictable, high throughput, fast reads and writes with efficient updates, automatic partitioning, and sorting.&lt;/li&gt;
&lt;li&gt;Apache Kafka and MapR Event Store for Kafka (also now part of &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;HPE Ezmeral Data Fabric&lt;/a&gt;) are good for scalable reading and writing of real-time streaming data.  MapR Event Store is good for data pipelines with stream-first architecture patterns and kappa or lambda architectures.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image13-1594188803710.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;CSV and JSON Tips and Best Practices&lt;/h3&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image8-1594188812308.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;When persisting and compressing CSV and JSON files, make sure the compression format is splittable and offers a reasonable balance of speed and compression. ZIP compression is not splittable, whereas Snappy is; Snappy also gives reasonable compression at high speed. When reading CSV and JSON files, you will get better performance by specifying the schema instead of using inference; specifying the schema also reduces data type errors and is recommended for production code.&lt;/p&gt;
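&lt;p&gt;Here is a small sketch of reading CSV with an explicit schema versus schema inference; the column names and the path &lt;code&gt;/user/mapr/data/flights.csv&lt;/code&gt; are illustrative assumptions:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;import org.apache.spark.sql.types._

// Hypothetical schema for a CSV of raw flight records
val flightSchema = StructType(Seq(
  StructField(&quot;id&quot;, StringType),
  StructField(&quot;src&quot;, StringType),
  StructField(&quot;dst&quot;, StringType),
  StructField(&quot;depdelay&quot;, DoubleType)
))

// Preferred for production: supply the schema up front
val flightsCsv = spark.read
  .schema(flightSchema)
  .option(&quot;header&quot;, &quot;true&quot;)
  .csv(&quot;/user/mapr/data/flights.csv&quot;)

// Slower and more error-prone: let Spark scan the data to infer types
val inferred = spark.read
  .option(&quot;header&quot;, &quot;true&quot;)
  .option(&quot;inferSchema&quot;, &quot;true&quot;)
  .csv(&quot;/user/mapr/data/flights.csv&quot;)
&lt;/code&gt;&lt;/pre&gt;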
&lt;h3&gt;Parquet Tips and Best Practices&lt;/h3&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image16-1594188820503.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Apache Parquet gives the fastest read performance with Spark. Parquet arranges data in columns, putting related values in close proximity to each other to optimize query performance, minimize I/O, and facilitate compression. Parquet detects and encodes the same or similar data, using a technique that conserves resources. Parquet also stores column metadata and statistics, which can be pushed down to filter columns (discussed below).  Spark 2.x has a vectorized Parquet reader that does decompression and decoding in column batches, providing ~ 10x faster read performance.&lt;/p&gt;
&lt;p&gt;Parquet files are immutable; modifications require a rewrite of the dataset. For streaming data, you can stream to a fast read/write data store, then extract data to Parquet files for specific analytic use cases or stream new datasets to a new partition (see partitioning below).&lt;/p&gt;
&lt;h3&gt;Parquet Partitioning&lt;/h3&gt;
&lt;p&gt;Spark table partitioning optimizes reads by storing files in a hierarchy of  directories based on partitioning columns.  For example, a directory structure could be organized by location, such as state/city, or by date, such as year/month, shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image7-1594188830995.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;DataFrames can be saved as persistent tables into a Hive metastore, using the &lt;a href=&quot;https://spark.apache.org/docs/latest/sql-data-sources-load-save-functions.html&quot;&gt;saveAsTable&lt;/a&gt; command. If you do not have Hive set up, Spark will create a default local Hive metastore (using Derby). Persistent tables have several optimization benefits: they keep partition and statistics metadata, and they can be bucketed (discussed later).&lt;/p&gt;
&lt;p&gt;As an example with the flight dataset, a lot of queries about departure delays are organized around the originating airport (the src column), so this could make a good partitioning column.  Here is a JSON row from this Dataset:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{    
&quot;id&quot;: &quot;ATL_LGA_2017-01-01_AA_1678&quot;,
&quot;dofW&quot;: 7,
&quot;carrier&quot;: &quot;AA&quot;,
&quot;src&quot;: &quot;ATL&quot;,
&quot;dst&quot;: &quot;LGA&quot;,
&quot;crsdephour&quot;: 17,
&quot;crsdeptime&quot;: 1700,
&quot;depdelay&quot;: 0.0,
&quot;crsarrtime&quot;: 1912,
&quot;arrdelay&quot;: 0.0,
&quot;crselapsedtime&quot;: 132.0,
&quot;dist&quot;: 762.0
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here is the code to persist a flights DataFrame as a table consisting of Parquet files partitioned by the src column:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;df.write.format(&quot;parquet&quot;)
.partitionBy(&quot;src&quot;)
.option(&quot;path&quot;, &quot;/user/mapr/data/flights&quot;)
.saveAsTable(&quot;flights&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Below is the resulting directory structure as shown by a Hadoop list files command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;hadoop fs -ls /user/mapr/data/flights
  /user/mapr/data/flights/src=ATL
  /user/mapr/data/flights/src=BOS
  /user/mapr/data/flights/src=CLT
  /user/mapr/data/flights/src=DEN
  /user/mapr/data/flights/src=DFW
  /user/mapr/data/flights/src=EWR
  /user/mapr/data/flights/src=IAH
  /user/mapr/data/flights/src=LAX
  /user/mapr/data/flights/src=LGA
  /user/mapr/data/flights/src=MIA
  /user/mapr/data/flights/src=ORD
  /user/mapr/data/flights/src=SEA
  /user/mapr/data/flights/src=SFO
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Below, we see that the src=DEN subdirectory contains two Parquet files:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;hadoop fs -ls /user/mapr/data/flights/src=DEN

/user/mapr/data/flights/src=DEN/part-00000-deb4a3d4-d8c3-4983-8756-ad7e0b29e780.c000.snappy.parquet
/user/mapr/data/flights/src=DEN/part-00001-deb4a3d4-d8c3-4983-8756-ad7e0b29e780.c000.snappy.parquet
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Partition Pruning and Predicate Pushdown&lt;/h3&gt;
&lt;p&gt;Partition pruning is a performance optimization that limits the number of files and partitions that Spark reads when querying. After partitioning the data, queries that match certain partition filter criteria improve performance by allowing Spark to read only a subset of the directories and files. When partition filters are present, the Catalyst optimizer pushes down the partition filters. The scan reads only the directories that match the partition filters, thus reducing disk I/O. For example, the following query reads only the files in the src=DEN partition directory in order to query the average departure delay for flights originating from Denver.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;df.filter(&quot;src = &apos;DEN&apos; and depdelay &gt; 1&quot;)
.groupBy(&quot;src&quot;, &quot;dst&quot;).avg(&quot;depdelay&quot;)
.sort(desc(&quot;avg(depdelay)&quot;)).show()

result:
+---+---+------------------+
|src|dst|     avg(depdelay)|
+---+---+------------------+
|DEN|EWR|54.352020860495436|
|DEN|MIA| 48.95263157894737|
|DEN|SFO|47.189473684210526|
|DEN|ORD| 46.47721518987342|
|DEN|DFW|44.473118279569896|
|DEN|CLT|37.097744360902254|
|DEN|LAX|36.398936170212764|
|DEN|LGA| 34.59444444444444|
|DEN|BOS|33.633187772925766|
|DEN|IAH| 32.10775862068966|
|DEN|SEA|30.532345013477087|
|DEN|ATL| 29.29113924050633|
+---+---+------------------+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Or in SQL:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-sql&quot;&gt;%sql
select src, dst, avg(depdelay)
from flights where src=&apos;DEN&apos; and depdelay &gt; 1
group by src, dst
ORDER BY src
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can see the physical plan for a DataFrame query in the Spark web UI SQL tab (discussed below) or by calling the explain method, as shown below. In the FileScan line of the plan, note the partition filter pushdown: the src=DEN filter is pushed down into the Parquet file scan. This minimizes the files and data scanned and reduces the amount of data passed back to the Spark engine for the average aggregation on the departure delay.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;df.filter(&quot;src = &apos;DEN&apos; and depdelay &gt; 1&quot;)
.groupBy(&quot;src&quot;, &quot;dst&quot;).avg(&quot;depdelay&quot;)
.sort(desc(&quot;avg(depdelay)&quot;)).explain

== Physical Plan ==
TakeOrderedAndProject(limit=1001, orderBy=[avg(depdelay)#304 DESC NULLS LAST], output=[src#157,dst#149,avg(depdelay)#314])

+- *(2) HashAggregate(keys=[src#157, dst#149],
       functions=[avg(depdelay#152)],
       output=[src#157, dst#149, avg(depdelay)#304])

   +- Exchange hashpartitioning(src#157, dst#149, 200)

      +- *(1) HashAggregate(keys=[src#157, dst#149],
             functions=[partial_avg(depdelay#152)],
             output=[src#157, dst#149, sum#321, count#322L])

         +- *(1) Project [dst#149, depdelay#152, src#157]

            +- *(1) Filter (isnotnull(depdelay#152) &amp;#x26;&amp;#x26; (depdelay#152 &gt; 1.0))

               +- *(1) FileScan parquet default.flights[dst#149,depdelay#152,src#157]
                       Batched: true, Format: Parquet,
                       Location: PrunedInMemoryFileIndex[maprfs:/user/mapr/data/flights/src=DEN],
                       PartitionCount: 1, PartitionFilters: [isnotnull(src#157), (src#157 = DEN)],
                       PushedFilters: [IsNotNull(depdelay), GreaterThan(depdelay,1.0)],
                       ReadSchema: struct&amp;#x3C;dst:string,depdelay:double&amp;#x3E;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The physical plan is read from the bottom up, whereas the DAG is read from the top down. Note: the Exchange means a shuffle occurred between stages.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image2-1594188838987.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Partitioning Tips&lt;/h3&gt;
&lt;p&gt;The partition columns should be used frequently in queries for filtering and should have a small range of values with enough corresponding data to distribute the files in the directories. You want to avoid too many small files, which make scans less efficient with excessive parallelism.  You also want to avoid having too few large files, which can hurt parallelism.&lt;/p&gt;
&lt;h3&gt;Coalesce and Repartition&lt;/h3&gt;
&lt;p&gt;Before or while writing a DataFrame, you can use &lt;code&gt;df.coalesce(N)&lt;/code&gt; to reduce the number of partitions without shuffling, or &lt;code&gt;df.repartition(N)&lt;/code&gt; to increase or decrease the number of partitions, shuffling data across the network to achieve even load balancing.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;df.write.format(&quot;parquet&quot;)
.repartition(13)
.partitionBy(&quot;src&quot;)
.option(&quot;path&quot;, &quot;/user/mapr/data/flights&quot;)
.saveAsTable(&quot;flights&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Bucketing&lt;/h3&gt;
&lt;p&gt;Bucketing is another data organization technique that groups data with the same bucket value across a fixed number of “buckets.”  This can improve performance in wide transformations and joins by avoiding “shuffles.”  With wide transformation shuffles, data is sent across the network to other nodes and written to disk, causing network and disk I/O and making the shuffle a costly operation. Below is a shuffle caused by a df.groupBy(&quot;carrier&quot;).count; if this dataset were bucketed by “carrier,” then the shuffle could be avoided.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image14-1594188846233.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Bucketing is similar to partitioning, but partitioning creates a directory for each partition, whereas bucketing distributes data across a fixed number of buckets by a hash on the bucket value.  Tables can be bucketed on more than one value and bucketing can be used with or without partitioning.&lt;/p&gt;
&lt;p&gt;As an example with the flight dataset, here is the code to persist a flights DataFrame as a table, consisting of Parquet files partitioned by the src column and bucketed by the dst and carrier columns (sorting by the id will sort by the src, dst, flightdate, and carrier, since that is what the id is made up of):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;df.write.format(&quot;parquet&quot;)
.sortBy(&quot;id&quot;)
.partitionBy(&quot;src&quot;)
.bucketBy(4,&quot;dst&quot;,&quot;carrier&quot;)
.option(&quot;path&quot;, &quot;/user/mapr/data/flightsbkdc&quot;)
.saveAsTable(&quot;flightsbkdc&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The resulting directory structure is the same as before, with the files in the src directories bucketed by dst and carrier.  The code below computes statistics on the table, which can then be used by the Catalyst optimizer.  Next, the partitioned and bucketed table is read into a new DataFrame df2.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;spark.sql(&quot;ANALYZE TABLE flightsbkdc COMPUTE STATISTICS&quot;)
val df2  = spark.table(&quot;flightsbkdc&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, let’s look at the optimizations for the following query:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;df2.filter(&quot;src = &apos;DEN&apos; and depdelay &gt; 1&quot;)
.groupBy(&quot;src&quot;, &quot;dst&quot;,&quot;carrier&quot;)
.avg(&quot;depdelay&quot;)
.sort(desc(&quot;avg(depdelay)&quot;)).show()

result:
+---+---+-------+------------------+
|src|dst|carrier|     avg(depdelay)|
+---+---+-------+------------------+
|DEN|EWR|     UA| 60.95841209829867|
|DEN|LAX|     DL|59.849624060150376|
|DEN|SFO|     UA|59.058282208588956|
. . .
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here again, we see partition filter and filter pushdown, but we also see that there is no “Exchange” like there was before bucketing, which means there was no shuffle to aggregate by src, dst, and carrier.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;== Physical Plan ==
TakeOrderedAndProject(limit=1001, orderBy=[avg(depdelay)#491 DESC NULLS LAST], output=[src#460,dst#452,carrier#451,avg(depdelay)#504])

+- *(1) HashAggregate(keys=[src#460, dst#452, carrier#451],
       functions=[avg(depdelay#455)],
       output=[src#460, dst#452, carrier#451, avg(depdelay)#491])

   +- *(1) HashAggregate(keys=[src#460, dst#452, carrier#451],
          functions=[partial_avg(depdelay#455)],
          output=[src#460, dst#452, carrier#451, sum#512, count#513L])

      +- *(1) Project [carrier#451, dst#452, depdelay#455, src#460]

         +- *(1) Filter (isnotnull(depdelay#455) &amp;#x26;&amp;#x26; (depdelay#455 &gt; 1.0))

            +- *(1) FileScan parquet default.flightsbkdc[carrier#451,dst#452,depdelay#455,src#460]
                    Batched: true, Format: Parquet,
                    Location: PrunedInMemoryFileIndex[maprfs:/user/mapr/data/flightsbkdc/src=DEN],
                    PartitionCount: 1, PartitionFilters: [isnotnull(src#460), (src#460 = DEN)],
                    PushedFilters: [IsNotNull(depdelay), GreaterThan(depdelay,1.0)],
                    ReadSchema: struct&amp;#x3C;carrier:string,dst:string,depdelay:double&amp;#x3E;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In the DAG below, we see that there is no exchange shuffle, and we see “Whole-Stage Java Code Generation,” which optimizes CPU usage by generating a single optimized function in bytecode.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image15-1594188853521.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Bucketing Tips&lt;/h3&gt;
&lt;p&gt;Partitioning should only be used with columns that have a limited number of values; bucketing works well when the number of unique values is large. Columns which are used often in queries and provide high selectivity are good choices for bucketing.  Spark tables that are bucketed store metadata about how they are bucketed and sorted, which optimizes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Queries on bucketed values (Spark 2.4 supports bucket pruning)&lt;/li&gt;
&lt;li&gt;Aggregations on bucketed values (wide transformations)&lt;/li&gt;
&lt;li&gt;Joins on bucketed values (see the sketch after this list)&lt;/li&gt;
&lt;/ul&gt;
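&lt;p&gt;To illustrate the join case, here is a minimal sketch, assuming two hypothetical DataFrames &lt;code&gt;flightsDf&lt;/code&gt; and &lt;code&gt;carrierInfoDf&lt;/code&gt; that share a &lt;code&gt;carrier&lt;/code&gt; column. When both tables are bucketed (and sorted) on the join key with the same number of buckets, the sort-merge join can read matching buckets directly, and the physical plan should show no Exchange before the join:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;// Persist both sides bucketed and sorted on the join key
flightsDf.write.format(&quot;parquet&quot;)
  .bucketBy(8, &quot;carrier&quot;).sortBy(&quot;carrier&quot;)
  .option(&quot;path&quot;, &quot;/user/mapr/data/flights_by_carrier&quot;)
  .saveAsTable(&quot;flights_by_carrier&quot;)

carrierInfoDf.write.format(&quot;parquet&quot;)
  .bucketBy(8, &quot;carrier&quot;).sortBy(&quot;carrier&quot;)
  .option(&quot;path&quot;, &quot;/user/mapr/data/carrier_info&quot;)
  .saveAsTable(&quot;carrier_info&quot;)

// Joining the two bucketed tables on the bucketing column avoids the shuffle
val joined = spark.table(&quot;flights_by_carrier&quot;)
  .join(spark.table(&quot;carrier_info&quot;), &quot;carrier&quot;)
joined.explain
&lt;/code&gt;&lt;/pre&gt;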
&lt;h2&gt;Data Modeling, Partitioning, and Filter Pushdown&lt;/h2&gt;
&lt;h3&gt;Data Modeling: Partitioning and Row Key Design&lt;/h3&gt;
&lt;p&gt;With MapR Database (now part of &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;HPE Ezmeral Data Fabric&lt;/a&gt;), a table is automatically partitioned into tablets across a cluster by key range, providing for scalable and fast reads and writes by row key.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image6-1594188884373.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In this use case, the row key (the id) starts with the origin and destination airport codes, followed by the flightdate and carrier, so the table is automatically partitioned and sorted by src, dst, date, and carrier.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image4-1594188895684.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Data Modeling: Avoiding JOINS with Nested Entities&lt;/h3&gt;
&lt;p&gt;If your tables exist in a one-to-many relationship, it’s possible to model them as a single document; this can avoid expensive joins. In the one-to-many relationship example below, we have an order table, which has a one-to-many relationship with an order items table.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image5-1594188904918.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Here is a nested entity example of this one-to-many relationship in a document database.  In this example, the order and related line items are stored together and can be read together with a find on the row key (_id). This makes the reads a lot faster than joining tables together.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
     &quot;_id&quot;: &quot;123&quot;,
     &quot;date&quot;: &quot;10/10/2017&quot;,
     &quot;ship_status&quot;: &quot;backordered&quot;,
     &quot;orderitems&quot;: [
          {
               &quot;itemid&quot;: &quot;4348&quot;,
               &quot;price&quot;: 10.00
          },
          {
               &quot;itemid&quot;: &quot;5648&quot;,
               &quot;price&quot;: 15.00
          }
     ]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Projection and Filter Pushdown into MapR Database (now part of &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;HPE Ezmeral Data Fabric&lt;/a&gt;)&lt;/h3&gt;
&lt;p&gt;Below, we see the physical plan for a DataFrame query, with projection and filter pushdown highlighted in red. This means that the scanning of the src, dst, and depdelay columns and the filter on the depdelay column are pushed down into MapR Database, meaning that the scanning and filtering will take place in MapR Database before returning the data to Spark. Projection pushdown minimizes data transfer between MapR Database and the Spark engine by omitting unnecessary fields from table scans. It is especially beneficial when a table contains many columns. Filter pushdown improves performance by reducing the amount of data passed between MapR Database and the Spark engine when filtering data.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;df.filter(&quot;src = &apos;ATL&apos; and depdelay &gt; 1&quot;)
.groupBy(&quot;src&quot;, &quot;dst&quot;)
.avg(&quot;depdelay&quot;).sort(desc(&quot;avg(depdelay)&quot;)).explain

== Physical Plan ==
*(3) Sort [avg(depdelay)#273 DESC NULLS LAST], true, 0
+- Exchange rangepartitioning(avg(depdelay)#273 DESC NULLS LAST, 200)
   +- *(2) HashAggregate(keys=[src#5, dst#6],
         functions=[avg(depdelay#9)])
      +- Exchange hashpartitioning(src#5, dst#6, 200)
         +- *(1) HashAggregate(keys=[src#5, dst#6],
               functions=[partial_avg(depdelay#9)])
            +- *(1) Filter (((isnotnull(src#5) &amp;#x26;&amp;#x26; isnotnull(depdelay#9)) &amp;#x26;&amp;#x26; (src#5 = ATL)) &amp;#x26;&amp;#x26; (depdelay#9 &gt; 1.0))
               +- *(1) Scan MapRDBRelation(/user/mapr/flighttable) [src#5,dst#6,depdelay#9]
                       PushedFilters: [IsNotNull(src), IsNotNull(depdelay), EqualTo(src,ATL), GreaterThan(depdelay,1.0)]
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Spark Web UI and SQL Tips&lt;/h2&gt;
&lt;p&gt;Here is a summary of tips and what to look for:&lt;/p&gt;
&lt;h3&gt;SQL Tab&lt;/h3&gt;
&lt;p&gt;You can see details about the query plan produced by Catalyst on the web UI SQL tab.  In the query plan details, you can check and see:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The amount of time for each stage.&lt;/li&gt;
&lt;li&gt;If partition filters, projection, and filter pushdown are occurring.&lt;/li&gt;
&lt;li&gt;Shuffles between stages (Exchange) and the amount of data shuffled. If joins or aggregations are shuffling a lot of data, consider bucketing.  You can set the number of partitions to use when shuffling with the spark.sql.shuffle.partitions option.&lt;/li&gt;
&lt;li&gt;The join algorithm being used. A broadcast join should be used when one table is small; a sort-merge join should be used for large tables. You can use a broadcast hint to guide Spark to broadcast a table in a join. For faster joins between large tables using the sort-merge algorithm, you can use bucketing to pre-sort and group the tables, which avoids shuffling in the sort-merge join (a short sketch of these settings follows below).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Use the Spark SQL &lt;code&gt;ANALYZE TABLE tablename COMPUTE STATISTICS&lt;/code&gt; command to take advantage of cost-based optimization in the Catalyst Planner.&lt;/p&gt;
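&lt;p&gt;As a small, illustrative sketch of the items above (the DataFrames &lt;code&gt;flightsDf&lt;/code&gt; and &lt;code&gt;airportsDf&lt;/code&gt; are assumptions), here is how the shuffle partition setting, a broadcast hint, and table statistics can be applied:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;import org.apache.spark.sql.functions.broadcast

// Tune the number of partitions used by shuffles (joins, aggregations)
spark.conf.set(&quot;spark.sql.shuffle.partitions&quot;, &quot;200&quot;)

// Hint Spark to broadcast a small dimension table so the large side
// is not shuffled for the join
val joined = flightsDf.join(broadcast(airportsDf), Seq(&quot;src&quot;))

// Collect table-level statistics for the cost-based optimizer
spark.sql(&quot;ANALYZE TABLE flights COMPUTE STATISTICS&quot;)
&lt;/code&gt;&lt;/pre&gt;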
&lt;h3&gt;Stages Tab&lt;/h3&gt;
&lt;p&gt;You can use the stage detail metrics to identify problems with an executor or task distribution.  Things to look for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Tasks that are taking longer and/or killed tasks. If your task process time is not balanced, resources could be wasted.&lt;/li&gt;
&lt;li&gt;Shuffle read size that is not balanced.&lt;/li&gt;
&lt;li&gt;If your partitions/tasks are not balanced, consider repartitioning, as described under Coalesce and Repartition above.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Storage Tab&lt;/h3&gt;
&lt;p&gt;Caching Datasets can make execution faster if the data will be reused. You can use the storage tab to see if important Datasets are fitting into memory.&lt;/p&gt;
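&lt;p&gt;For example (a minimal sketch reusing the flights DataFrame from earlier), caching a filtered Dataset that several queries reuse lets those queries read from memory, and the Storage tab shows how much of it fits:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;// Cache a Dataset that will be reused across several queries
val denFlights = df.filter(&quot;src = &apos;DEN&apos;&quot;).cache()

denFlights.count()                 // first action materializes the cache
denFlights.groupBy(&quot;dst&quot;).count()  // later queries read from memory

denFlights.unpersist()             // release the memory when finished
&lt;/code&gt;&lt;/pre&gt;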
&lt;h3&gt;Executors Tab&lt;/h3&gt;
&lt;p&gt;You can use the executors tab to confirm that your application has the amount of resources needed.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Shuffle Read Write Columns: shows size of data transferred between stages&lt;/li&gt;
&lt;li&gt;Storage Memory Column: shows the current used/available memory&lt;/li&gt;
&lt;li&gt;Task Time Column: shows task time/garbage collection time&lt;/li&gt;
&lt;/ul&gt;
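&lt;p&gt;The executor resources themselves are set when the application is configured; the values below are purely illustrative assumptions that you would compare against what the Executors tab reports at runtime:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;import org.apache.spark.sql.SparkSession

// Example resource settings (illustrative values, not recommendations)
val spark = SparkSession.builder()
  .appName(&quot;flight-analysis&quot;)
  .config(&quot;spark.executor.memory&quot;, &quot;4g&quot;)
  .config(&quot;spark.executor.cores&quot;, &quot;2&quot;)
  .config(&quot;spark.executor.instances&quot;, &quot;4&quot;)
  .getOrCreate()
&lt;/code&gt;&lt;/pre&gt;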
&lt;h2&gt;References and More Information&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://databricks.com/blog/2015/04/28/project-tungsten-bringing-spark-closer-to-bare-metal.html&quot;&gt;Project Tungsten: Bringing Apache Spark Closer to Bare Metal&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://databricks.com/blog/2016/05/23/apache-spark-as-a-compiler-joining-a-billion-rows-per-second-on-a-laptop.html&quot;&gt;Apache Spark as a Compiler&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://spark.apache.org/docs/latest/sql-performance-tuning.html&quot;&gt;Apache Spark SQL Performance Tuning&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://databricks.com/session/optimizing-apache-spark-sql-joins&quot;&gt;Spark Summit Session: Optimizing Apache Spark SQL Joins&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://db-blog.web.cern.ch/blog/luca-canali/2017-06-diving-spark-and-parquet-workloads-example&quot;&gt;Diving into Spark and Parquet Workloads, by Example&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://databricks.com/session/hive-bucketing-in-apache-spark&quot;&gt;Spark Summit Hive Bucketing in Apache Spark&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://databricks.com/session/lessons-from-the-field-episode-ii-applying-best-practices-to-your-apache-spark-applications&quot;&gt;Lessons from the Field, Episode II: Applying Best Practices to Your Apache Spark Applications&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://databricks.com/session/why-you-should-care-about-data-layout-in-the-filesystem&quot;&gt;Why You Should Care about Data Layout in the Filesystem&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://databricks.com/session/spark-parquet-in-depth&quot;&gt;Spark + Parquet In Depth&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;http://c2fo.io/c2fo/spark/aws/emr/2016/07/06/apache-spark-config-cheatsheet/&quot;&gt;Apache Spark: Config Cheatsheet&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://spark.apache.org/docs/latest/configuration.html&quot;&gt;Spark Configuration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://dzone.com/articles/talend-and-apache-spark-debugging-and-logging-best&quot;&gt;Apache Spark: Debugging and Logging Best Practices&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Editor’s Note:&lt;/strong&gt; MapR products referenced are now part of the &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;HPE Ezmeral Data Fabric&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Data Modeling Guidelines for NoSQL JSON Document Databases]]></title><description><![CDATA[Original Post Information: In this blog post, I’ll discuss how NoSQL data modeling is different from traditional relational schema data…]]></description><link>https://developer.hpe.com/data-modeling-guidelines-for-nosql-json-document-databases/</link><guid isPermaLink="false">https://developer.hpe.com/data-modeling-guidelines-for-nosql-json-document-databases/</guid><pubDate>Wed, 08 Jul 2020 05:22:33 GMT</pubDate><content:encoded>&lt;h2&gt;Original Post Information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Carol McDonald&quot;,
&quot;publish&quot;: &quot;2017-10-26T12:00:00.000&quot;,
&quot;tags&quot;: &quot;data modeling&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;In this blog post, I’ll discuss how NoSQL data modeling is different from traditional relational schema data modeling, and I’ll also provide you with some guidelines for document database data modeling.&lt;br&gt;
Document databases, such as MapR Database (now part of &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;HPE Ezmeral Data Fabric&lt;/a&gt;), are sometimes called &quot;schema-less&quot;, but this is a misnomer. Document databases don&apos;t require the same predefined structure as a relational database, but you do have to define the facets of how you plan to organize your data. Typically, with a NoSQL data store, you want to aggregate your data so that it can quickly be read together, instead of relying on joins. A properly designed data model can make all the difference in how your application performs. One of our solution architects worked with a customer and, in a one-hour conversation about schema design, was able to improve access performance by a factor of 1,000. These concepts matter.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Why NoSQL?&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Simply put, the motivation behind NoSQL is data volume, velocity, and/or variety. MapR Database (now part of HPE Ezmeral Data Fabric) provides for data variety with two different data models:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;HPE Ezmeral Data Fabric as a Wide column database with an Apache HBase API&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;HPE Ezmeral Data Fabric as a Document database with an Open JSON API&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;img src=&quot;/uploads/media/2020/6/mapr-db-json-1594186549663.png&quot; alt=&quot;HPE Ezmeral Data Fabric&quot; width=&quot;900&quot;&gt;
&lt;p&gt;HPE Ezmeral Data Fabric JSON is different from other document data stores in that the row key design is the same for both models, and both can store data (columns or documents) with different access patterns in a different column family with the same row key.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Relational vs. NoSQL Data Modeling&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;In relational design, the focus and effort is around describing the entity and its relation with other entities; the queries and indexes are designed later. With a relational database, you normalize your schema, which eliminates redundant data and makes storage efficient. Then queries with joins bring the data back together again. However, joins cause bottlenecks on read; with data distributed across a cluster, this model does not scale horizontally. With HPE Ezmeral Data Fabric, a table is automatically partitioned across a cluster by key range, and each server is the source for a subset of a table (called a tablet). HPE Ezmeral Data Fabric encourages a “query-first” schema design: queries should be identified first, then the row key should be designed to distribute the data evenly and to give a meaningful primary index to query by. The row document (JSON) or columns (HBase) should be designed to group together data that will be read together. With HPE Ezmeral Data Fabric, you de-normalize your schema to store in one row or document what would be multiple tables with indexes in a relational world. Grouping the data by key range provides for fast reads and writes by row key.&lt;/p&gt;
&lt;img src=&quot;/uploads/media/2020/6/mapr-db-read-writes-1594186238593.png&quot; alt=&quot;HPE Ezmeral Data Fabric faster read and writes&quot; width=&quot;900&quot;&gt;
&lt;h2&gt;&lt;strong&gt;NoSQL Data Modeling Process&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;It is useful to start off with &lt;a href=&quot;https://en.wikipedia.org/wiki/Entity%E2%80%93relationship_model&quot;&gt;Entity Relationship&lt;/a&gt; modeling in order to define the entities, relationships, and attributes in your application:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Entities: Main objects in your application&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Attributes: properties of the objects in your application&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Relationships: connections between entities - 1-1, 1-many, many-many&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/artist-performs-song-1594186280849.png&quot; alt=&quot;Artist - Performs - Song&quot;&gt;&lt;/p&gt;
&lt;p&gt;The E-R model can be used with your query and data access patterns to define the physical model so that the data that is read together is stored together.
As a modeling example we will use a social application similar to reddit (Note: I do not know how reddit is really implemented). Here are the use cases:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Users can post URLs to articles by category (like news, sports…).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/reddit-use-case-1-1594186305947.png&quot; alt=&quot;Reddit use case&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Users can then make comments on posts&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/reddit-use-case-2-1594186327388.png&quot; alt=&quot;Reddit use case&quot;&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Some of the query requirements are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Display the posts by category and date (most recent first)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Display the comments by post&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Display the posts by userid&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;&lt;strong&gt;Logical Model Example&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;This is an E-R Diagram for our example social application:
&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/e-r-diagram-1594186359002.png&quot; alt=&quot;E-R Diagram&quot;&gt;&lt;/p&gt;
&lt;p&gt;The Entities are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;User, Post, Comment, Category&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The relations are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;A User makes a post&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A Post has comments&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A Post belongs to a category&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;&lt;strong&gt;Relational Model Example&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;This is the relational model for the example social application:
&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/relation-model-social-app-1594186378259.png&quot; alt=&quot;Relation Model for Social Application&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Users are stored in the user table&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The posted URL is  stored in the Post table with a foreign key to the user that posted it, and a foreign key to the category for the post.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Comments about a post are stored in the comments table with a foreign key to the post and a foreign key to the user that commented.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;&lt;strong&gt;Normalization&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;In a relational database, you normalize the schema to eliminate redundancy by putting repeating information into a table of its own. In this example below, we have an order table, which has a one-to-many relationship with an order items table. The order items table has a foreign key with the id of the corresponding order.&lt;/p&gt;
&lt;img src=&quot;/uploads/media/2020/6/order-items-table-1594186396974.png&quot; alt=&quot;Order Items Table&quot; width=&quot;900&quot;&gt;
&lt;h2&gt;&lt;strong&gt;Denormalization&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;In a denormalized datastore, you store in one table what would be multiple tables with indexes in a relational world. &lt;a href=&quot;https://en.wikipedia.org/wiki/Denormalization&quot;&gt;Denormalization&lt;/a&gt; can be thought of as a replacement for joins. Often with NoSQL, you de-normalize or duplicate data so that data is accessed and stored together.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Parent-Child Relationship–Embedded Entity&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Here is an example of denormalization of the SALES_ITEM schema in a Document database:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
   &quot;_id&quot;: &quot;123&quot;,
   &quot;date&quot;: &quot;10/10/2017&quot;,
   &quot;ship_status&quot;: &quot;backordered&quot;,
   &quot;orderitems&quot;: [
       {
           &quot;itemid&quot;: &quot;4348&quot;,
           &quot;price&quot;: 10.00
       },
       {
           &quot;itemid&quot;: &quot;5648&quot;,
           &quot;price&quot;: 15.00
       }]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If your tables exist in a one-to-many relationship, it’s possible to model them as a single document. In this example, the order and related line items are stored together and can be read together with a find on the row key (_id). This makes the reads a lot faster than joining tables together.
Note that the default maximum row size is 32 MB, and the optimal size is between 50-100 KB. If the embedded entities are really long, they can be bucketed by row key, or you can store just the id of the embedded entity (which would require your application to also query that table).&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Document Model Example&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;This is the document model for the example social application:&lt;/p&gt;
&lt;img src=&quot;/uploads/media/2020/6/document-model-social-app-1594186415200.png&quot; alt=&quot;Document Model for Social Application&quot; width=&quot;900&quot;&gt;
&lt;p&gt;There are 2 tables in the document model compared to 4 in the relational:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;User details are stored in the user table&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Posted URLs are stored in the Post table&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The row key is composed of the category and a reverse timestamp so that posts will be grouped by category with the most recent first.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;There is a secondary index on the posted by attribute, to query by who submitted the URL.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Comments are embedded in the post table&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;&lt;strong&gt;Composite Row Key Design&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Row keys are the primary index for MapR Database, now part of HPE Ezmeral Data Fabric (HPE Ezmeral Data Fabric JSON 6.0 also has secondary indexes). Data is automatically distributed as it is written by sorted row key range. You can include multiple data elements in a “composite” row key, which can be useful for grouping rows together for finding by key range. For example, if you wanted to group posts by category and date, you could use a row key like &lt;code&gt;“SPORTS_20131012”&lt;/code&gt; (if you want the most recent first, use a reverse timestamp). If you wanted to group restaurants by location, you could use a row key like &lt;code&gt;“TN_NASHVL_PANCAKEPANTRY”&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/prefix-1594186429138.png&quot; alt=&quot;Prefix&quot;&gt;&lt;/p&gt;
&lt;p&gt;Another option is to add a hash prefix to the row key in order to get good distribution, and still have a secondary grouping.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/prefix-row-key-1594186446213.png&quot; alt=&quot;Prefix the row key&quot;&gt;&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Generic Data, Event Data, and Entity-Attribute-Value&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Generic data is often expressed as name value or entity attribute value. In a relational database, this is complicated to represent because every row represents an instance of a &lt;strong&gt;similar object&lt;/strong&gt;. JSON allows easy variation across records.  Here is an example of clinical patient event data:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;patientid-timestamp, Temperature , &quot;102&quot;
patientid-timestamp, Coughing, &quot;True&quot;
patientid-timestamp, Heart Rate, &quot;98&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This is the document model for the clinical patient event data:&lt;/p&gt;
&lt;img src=&quot;/uploads/media/2020/6/document-model-event-data-1594186470650.png&quot; alt=&quot;Document Model Event Data&quot; width=&quot;900&quot;&gt;
&lt;p&gt;The row key is the patient ID plus a timestamp. The variable event type and measurement are stored as name-value pairs.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Tree, Adjacency List, Graph Data&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Here is an example of a tree, or &lt;a href=&quot;https://en.wikipedia.org/wiki/Adjacency_list&quot;&gt;adjacency list&lt;/a&gt;:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/document-model-tree-1594186489204.png&quot; alt=&quot;Document Model Tree&quot;&gt;&lt;/p&gt;
&lt;p&gt;Here is a document model for the tree shown above (there are multiple ways to represent trees):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
   &quot;_id&quot;: &quot;USA&quot;,
   &quot;type&quot;: &quot;country&quot;,
   &quot;children&quot;: [&quot;TN&quot;, &quot;FL&quot;],
   &quot;parent&quot;: null
}
{
   &quot;_id&quot;: &quot;TN&quot;,
   &quot;type&quot;: &quot;state&quot;,
   &quot;children&quot;: [&quot;Nashville&quot;, &quot;Memphis&quot;],
   &quot;parent&quot;: &quot;USA&quot;
}
{
   &quot;_id&quot;: &quot;FL&quot;,
   &quot;type&quot;: &quot;state&quot;,
   &quot;children&quot;: [&quot;Miami&quot;, &quot;Jacksonville&quot;],
   &quot;parent&quot;: &quot;USA&quot;
}
{
   &quot;_id&quot;: &quot;Nashville&quot;,
   &quot;type&quot;: &quot;city&quot;,
   &quot;children&quot;: [],
   &quot;parent&quot;: &quot;TN&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Each document is a tree node, with the row key equal to the node id. The parent field stores the parent node id. The children field stores an array of children node ids. A secondary index on the parent and children fields allows you to quickly find the parent or children nodes.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Inheritance Mapping&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;In modern object-oriented programming models, different object types can be related, for instance, by extending the same base type. In object-oriented design, these objects are considered instances of the same base type, as well as instances of their respective subtypes. It is useful to store objects in a single database table to simplify comparisons and calculations over multiple objects. But we also need to allow objects of each subtype to store their respective attributes, which may not apply to the base type or to other subtypes. This does not match a relational model but is very easy to do with a document model. Here is an example of object inheritance for store products, where bike, pedal, and jersey are all types of store products:
&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/inheritance-mapping-1594186510565.png&quot; alt=&quot;Inheritance Mapping Online Store Example&quot;&gt;&lt;/p&gt;
&lt;p&gt;In this online store example, the type of product is a prefix in the row key. Some of the name-value pairs differ and may be missing, depending on the type of product. This allows you to model different product types in the same table and to find a group of products easily by product type.&lt;/p&gt;
&lt;img src=&quot;/uploads/media/2020/6/product-type-1594186531426.png&quot; alt=&quot;Product Types&quot; width=&quot;900&quot;&gt;
&lt;p&gt;In this blog post, you learned how document database data modeling is different from traditional relational schema modeling, and you also got some guidelines for document database data modeling.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;For More Information:&lt;/strong&gt;&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/ojai/ojai&quot;&gt;Open JSON Application Interface&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/ojai/ojai/wiki&quot;&gt;Open JSON Application Interface wiki&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://www.infoq.com/articles/unified-data-modeling-for-relational-and-nosql-databases&quot;&gt;Unified Data Modeling for Relational and NoSQL Databases&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://highlyscalable.wordpress.com/2012/03/01/nosql-data-modeling-techniques/&quot;&gt;NOSQL DATA MODELING TECHNIQUES&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Spark 101: What Is It, What It Does, and Why It Matters]]></title><description><![CDATA[Original post information:  In this blog post, we will give an introduction to Apache Spark and its history and explore some of the areas in…]]></description><link>https://developer.hpe.com/spark-101-what-is-it-what-it-does-and-why-it-matters/</link><guid isPermaLink="false">https://developer.hpe.com/spark-101-what-is-it-what-it-does-and-why-it-matters/</guid><pubDate>Fri, 03 Jul 2020 06:19:21 GMT</pubDate><content:encoded>&lt;h2&gt;Original post information:&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;&quot;authorDisplayName&quot;: &quot;Carol McDonald&quot;,
&quot;publish&quot;: &quot;2018-10-17T08:00:00.000Z&quot;,
&quot;tags&quot;: &quot;spark&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image5-1593756989434.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;In this blog post, we will give an introduction to Apache Spark and its history and explore some of the areas in which its particular set of capabilities show the most promise. We will discuss the relationship to other key technologies and provide some helpful pointers.&lt;/p&gt;
&lt;p&gt;With Spark 2.0 and later versions, big improvements were implemented to make Spark easier to program and execute faster.&lt;/p&gt;
&lt;h2&gt;What Is Apache Spark?&lt;/h2&gt;
&lt;p&gt;Spark is a general-purpose distributed data processing engine that is suitable for use in a wide range of circumstances. On top of the Spark core data processing engine, there are libraries for SQL, machine learning, graph computation, and stream processing, which can be used together in an application. Programming languages supported by Spark include: Java, Python, Scala, and R. Application developers and data scientists incorporate Spark into their applications to rapidly query, analyze, and transform data at scale. Tasks most frequently associated with Spark include  ETL and SQL batch jobs across large data sets, processing of streaming data from sensors, IoT, or financial systems, and machine learning tasks.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image7-1593817446675.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;History&lt;/h3&gt;
&lt;p&gt;In order to understand Spark, it helps to understand its history. Before Spark, there was MapReduce, a resilient distributed processing framework, which enabled Google to index the exploding volume of content on the web, across large clusters of commodity servers. &lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image6-1593757021829.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;There were 3 core concepts to the Google strategy:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Distribute data:&lt;/strong&gt; when a data file is uploaded into the cluster, it is split into chunks, called data blocks, and distributed amongst the data nodes and replicated across the cluster.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Distribute computation:&lt;/strong&gt; users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs and a reduce function that merges all intermediate values associated with the same intermediate key. Programs written in this functional style are automatically parallelized and executed on a large cluster of commodity machines in the following way:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The mapping process runs on each assigned data node, working only on its block of data from a distributed file.&lt;/li&gt;
&lt;li&gt;The results from the mapping processes are sent to the reducers in a process called &quot;shuffle and sort&quot;: key/value pairs from the mappers are sorted by key, partitioned by the number of reducers, and then sent across the network and written to key sorted &quot;sequence files&quot; on the reducer nodes.&lt;/li&gt;
&lt;li&gt;The reducer process executes on its assigned node and works only on its subset of the data (its sequence file). The output from the reducer process is written to an output file.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Tolerate faults:&lt;/strong&gt; both data and computation can tolerate failures by failing over to another node for data or processing.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;MapReduce word count execution example:&lt;/h3&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image4-1593757039111.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Some iterative algorithms, like PageRank, which Google used to rank websites in their search engine results, require chaining multiple MapReduce jobs together, which causes a lot of reading and writing to disk. When multiple MapReduce jobs are chained together, for each MapReduce job, data is read from a distributed file block into a map process, written to and read from a SequenceFile in between, and then written to an output file from a reducer process.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image3-1593817522915.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;A year after Google published a &lt;a href=&quot;http://static.googleusercontent.com/media/research.google.com/en/us/archive/mapreduce-osdi04.pdf&quot;&gt;white paper describing the MapReduce&lt;/a&gt; framework (2004), Doug Cutting and Mike Cafarella created Apache Hadoop™.&lt;/p&gt;
&lt;p&gt;Apache Spark™ began life in 2009 as a project within the AMPLab at the University of California, Berkeley. Spark became an incubated project of the Apache Software Foundation in 2013, and it was promoted early in 2014 to become one of the Foundation’s top-level projects. Spark is currently one of the most active projects managed by the Foundation, and the community that has grown up around the project includes both prolific individual contributors and well-funded corporate backers, such as Databricks, IBM, and China’s Huawei.&lt;/p&gt;
&lt;p&gt;The goal of the Spark project was to keep the benefits of MapReduce’s scalable, distributed, fault-tolerant processing framework, while making it more efficient and easier to use. The advantages of Spark over MapReduce are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Spark executes much faster by caching data in memory across multiple parallel operations, whereas MapReduce involves more reading and writing from disk.&lt;/li&gt;
&lt;li&gt;Spark runs multi-threaded tasks inside of JVM processes, whereas MapReduce runs each task as a heavier-weight JVM process. This gives Spark faster startup, better parallelism, and better CPU utilization.&lt;/li&gt;
&lt;li&gt;Spark provides a richer functional programming model than MapReduce.&lt;/li&gt;
&lt;li&gt;Spark is especially useful for parallel processing of distributed data with &lt;strong&gt;iterative&lt;/strong&gt; algorithms.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;How a Spark Application Runs on a Cluster&lt;/h2&gt;
&lt;p&gt;The diagram below shows a Spark application running on a cluster.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A Spark application runs as independent processes, coordinated by the SparkSession object in the driver program.&lt;/li&gt;
&lt;li&gt;The resource or cluster manager assigns tasks to workers, one task per partition.&lt;/li&gt;
&lt;li&gt;A task applies its unit of work to the dataset in its partition and outputs a new partition dataset. Because iterative algorithms apply operations repeatedly to data, they benefit from caching datasets across iterations.&lt;/li&gt;
&lt;li&gt;Results are sent back to the driver application or can be saved to disk.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/image1-1593757064272.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Spark supports the following resource/cluster managers:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Spark Standalone&lt;/strong&gt; – a simple cluster manager included with Spark&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Apache Mesos&lt;/strong&gt; – a general cluster manager that can also run Hadoop applications&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Apache Hadoop YARN&lt;/strong&gt; – the resource manager in Hadoop 2&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Kubernetes&lt;/strong&gt; – an open source system for automating deployment, scaling, and management of containerized applications&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Spark also has a local mode, where the driver and executors run as threads on your computer instead of a cluster, which is useful for developing your applications from a personal computer.&lt;/p&gt;
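&lt;p&gt;For instance, here is a minimal local-mode sketch (the sample path is an assumption): the driver and executors run as threads in a single JVM, which makes it easy to develop and test on a laptop before submitting to a cluster:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;import org.apache.spark.sql.SparkSession

// Local mode: driver and executors run as threads on this machine
val spark = SparkSession.builder()
  .appName(&quot;spark-101-local&quot;)
  .master(&quot;local[*]&quot;)   // use all available cores
  .getOrCreate()

val df = spark.read.json(&quot;/path/to/sample.json&quot;)  // hypothetical sample file
df.printSchema()
spark.stop()
&lt;/code&gt;&lt;/pre&gt;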
&lt;h2&gt;What Does Spark Do?&lt;/h2&gt;
&lt;p&gt;Spark is capable of handling several petabytes of data at a time, distributed across a cluster of thousands of cooperating physical or virtual servers. It has an extensive set of developer libraries and APIs and supports languages such as Java, Python, R, and Scala; its flexibility makes it well-suited for a range of use cases. Spark is often used with distributed data stores such as &lt;a href=&quot;https://www.hpe.com/us/en/software/data-fabric.html&quot;&gt;HPE Ezmeral Data Fabric&lt;/a&gt;, Hadoop’s HDFS, and Amazon’s S3, with popular NoSQL databases such as HPE Ezmeral Data Fabric, Apache HBase, Apache Cassandra, and MongoDB, and with distributed messaging stores such as HPE Ezmeral Data Fabric and Apache Kafka.&lt;/p&gt;
&lt;p&gt;Typical use cases include:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Stream processing:&lt;/strong&gt; From log files to sensor data, application developers are increasingly having to cope with &quot;streams&quot; of data. This data arrives in a steady stream, often from multiple sources simultaneously. While it is certainly feasible to store these data streams on disk and analyze them retrospectively, it can sometimes be sensible or important to process and act upon the data as it arrives. Streams of data related to financial transactions, for example, can be processed in real time to identify– and refuse– potentially fraudulent transactions.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Machine learning:&lt;/strong&gt; As data volumes grow, machine learning approaches become more feasible and increasingly accurate. Software can be trained to identify and act upon triggers within well-understood data sets before applying the same solutions to new and unknown data. Spark’s ability to store data in memory and rapidly run repeated queries makes it a good choice for training machine learning algorithms. Running broadly similar queries again and again, at scale, significantly reduces the time required to go through a set of possible solutions in order to find the most efficient algorithms.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Interactive analytics:&lt;/strong&gt; Rather than running pre-defined queries to create static dashboards of sales or production line productivity or stock prices, business analysts and data scientists want to explore their data by asking a question, viewing the result, and then either altering the initial question slightly or drilling deeper into results. This interactive query process requires systems such as Spark that are able to respond and adapt quickly.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Data integration:&lt;/strong&gt; Data produced by different systems across a business is rarely clean or consistent enough to simply and easily be combined for reporting or analysis. Extract, transform, and load (ETL) processes are often used to pull data from different systems, clean and standardize it, and then load it into a separate system for analysis. Spark (and Hadoop) are increasingly being used to reduce the cost and time required for this ETL process.&lt;/p&gt;
&lt;h2&gt;Who Uses Spark?&lt;/h2&gt;
&lt;p&gt;A wide range of technology vendors have been quick to support Spark, recognizing the opportunity to extend their existing big data products into areas where Spark delivers real value, such as interactive querying and machine learning. Well-known companies such as IBM and Huawei have invested significant sums in the technology, and a growing number of startups are building businesses that depend in whole or in part upon Spark. For example, in 2013 the Berkeley team responsible for creating Spark founded Databricks, which provides a hosted end-to-end data platform powered by Spark. The company is well-funded, having received $247 million across  four rounds of investment in 2013, 2014, 2016 and 2017, and Databricks employees continue to play a prominent role in improving and extending the open source code of the Apache Spark project.&lt;/p&gt;
&lt;p&gt;The major Hadoop vendors, including MapR, Cloudera, and Hortonworks, have all moved to support YARN-based Spark alongside their existing products, and each vendor is working to add value for its customers. Elsewhere, IBM, Huawei, and others have all made significant investments in Apache Spark, integrating it into their own products and contributing enhancements and extensions back to the Apache project. Web-based companies, like Chinese search engine Baidu, e-commerce operation Taobao, and social networking company Tencent, all run Spark-based operations at scale, with Tencent’s 800 million active users reportedly generating over 700 TB of data per day for processing on a cluster of more than 8,000 compute nodes.&lt;/p&gt;
&lt;p&gt;In addition to those web-based giants, pharmaceutical company Novartis depends upon Spark to reduce the time required to get modeling data into the hands of researchers, while ensuring that ethical and contractual safeguards are maintained.&lt;/p&gt;
&lt;h2&gt;What Sets Spark Apart?&lt;/h2&gt;
&lt;p&gt;There are many reasons to choose Spark, but the following three are key:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Simplicity:&lt;/strong&gt; Spark’s capabilities are accessible via a set of rich APIs, all designed specifically for interacting quickly and easily with data at scale. These APIs are well-documented and structured in a way that makes it straightforward for data scientists and application developers to quickly put Spark to work.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Speed:&lt;/strong&gt; Spark is designed for speed, operating both in memory and on disk. Using Spark, a team from Databricks &lt;a href=&quot;https://spark.apache.org/news/spark-wins-daytona-gray-sort-100tb-benchmark.html&quot;&gt;tied for first place&lt;/a&gt; with a team from the University of California, San Diego, in the 2014 Daytona GraySort benchmarking challenge. The challenge involves processing a static data set; the Databricks team was able to process 100 terabytes of data stored on solid-state drives in just 23 minutes, and the previous winner took 72 minutes by using Hadoop and a different cluster configuration. Spark can perform even better when supporting interactive queries of data stored in memory. In those situations, there are claims that Spark can be 100 times faster than Hadoop’s MapReduce.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Support:&lt;/strong&gt; Spark supports a range of programming languages, including Java, Python, R, and Scala. Spark includes support for tight integration with a number of leading storage solutions in the Hadoop ecosystem and beyond, including HPE Ezmeral Data Fabric (file system, database, and event store), Apache Hadoop (HDFS), Apache HBase, and Apache Cassandra. Furthermore, the Apache Spark community is large, active, and international. A growing set of commercial providers, including Databricks, IBM, and all of the main Hadoop vendors, deliver comprehensive support for Spark-based solutions.&lt;/p&gt;
&lt;h3&gt;The Power of Data Pipelines&lt;/h3&gt;
&lt;p&gt;Much of Spark&apos;s power lies in its ability to combine very different techniques and processes together into a single, coherent whole. Outside Spark, the discrete tasks of selecting data, transforming that data in various ways, and analyzing the transformed results might easily require a series of separate processing frameworks, such as Apache Oozie. Spark, on the other hand, offers the ability to combine these together, crossing boundaries between batch, streaming, and interactive workflows in ways that make the user more productive.&lt;/p&gt;
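&lt;p&gt;To make the idea concrete, here is a minimal sketch of such a pipeline using the PySpark DataFrame API. The file paths, column names, and filter condition are purely illustrative; the point is that extraction, transformation, and loading are expressed as one chained job rather than as a series of separate frameworks.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from pyspark.sql import SparkSession
from pyspark.sql.functions import col

# Start (or reuse) a Spark session
spark = SparkSession.builder.appName(&quot;simple-etl-pipeline&quot;).getOrCreate()

# Extract: read raw CSV data (path and columns are illustrative)
orders = spark.read.option(&quot;header&quot;, &quot;true&quot;).csv(&quot;/data/raw/orders.csv&quot;)

# Transform: filter, cast, and aggregate entirely in memory
revenue_by_region = (orders
    .filter(col(&quot;status&quot;) == &quot;COMPLETE&quot;)
    .withColumn(&quot;amount&quot;, col(&quot;amount&quot;).cast(&quot;double&quot;))
    .groupBy(&quot;region&quot;)
    .sum(&quot;amount&quot;))

# Load: write the result for downstream reporting
revenue_by_region.write.mode(&quot;overwrite&quot;).parquet(&quot;/data/curated/revenue_by_region&quot;)
&lt;/code&gt;&lt;/pre&gt;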
&lt;p&gt;Spark jobs perform multiple operations consecutively, in memory, and only spilling to disk when required by memory limitations. Spark simplifies the management of these disparate processes, offering an integrated whole – a data pipeline that is easier to configure, easier to run, and easier to maintain. In use cases such as ETL, these pipelines can become extremely rich and complex, combining large numbers of inputs and a wide range of processing steps into a unified whole that consistently delivers the desired result.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[July is hot with news! - Newsletter]]></title><link>https://developer.hpe.com/2020-July-02/</link><guid isPermaLink="false">https://developer.hpe.com/2020-July-02/</guid><pubDate>Thu, 02 Jul 2020 05:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Meet you in the Hack Shack!]]></title><description><![CDATA[Prior to the pandemic, developers looked forward to connecting with one another at the HPE DEV Hack Shack at events like HPE Technology and…]]></description><link>https://developer.hpe.com/meet-you-in-the-hack-shack/</link><guid isPermaLink="false">https://developer.hpe.com/meet-you-in-the-hack-shack/</guid><pubDate>Tue, 30 Jun 2020 16:48:15 GMT</pubDate><content:encoded>&lt;p&gt;Prior to the pandemic, developers looked forward to connecting with one another at the HPE DEV Hack Shack at events like HPE Technology and Solutions Summit (TSS) and HPE Discover. Out of necessity for the health and safety of everyone, these events have gone virtual. While there’s some concern that this detracts from the personal touch offered at physical events, the option of going virtual has significant benefits. One major advantage is being able to reach a broader audience than just those folks who can afford to fly into town for an event.&lt;/p&gt;
&lt;p&gt;The Hack Shack, by design, is a place where people meet, collaborate, and learn from one another. It’s meant to be a place where innovative thinkers can share their ideas and engage in a bit of camaraderie. It’s also a place where event attendees come to relax and have a bit of fun, participating in competitive coding challenges and games to achieve high scores and win prizes. How does one make all this virtual?&lt;/p&gt;
&lt;p&gt;This was the challenge posed to our group of talented developers and designers. And though it’s impossible to say whether we’ve been successful or not with only a week of the virtual event under way, what I can say is that what we have been able to accomplish has been very well received. In addition, the work that we’ve done also puts us in a great position to offer a permanent online Hack Shack, complete with the ability to offer on-demand workshops and challenges, continuing the education of devs, data scientists, and designers around the world.&lt;/p&gt;
&lt;p&gt;During the first week of the HPE Discover Virtual Event, we hosted 10 sessions/workshops at the Hack Shack:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Hack Shack Workshop W479: Introduction to the HPE Container Platform REST API&lt;/li&gt;
&lt;li&gt;Hack Shack Workshop W480: API 101 - API basics and the value they provide&lt;/li&gt;
&lt;li&gt;Hack Shack Workshop W481: Aruba API yourself!&lt;/li&gt;
&lt;li&gt;Hack Shack Workshop W482: Redfish API use with PowerShell, Python, &amp;#x26; Bash/cURL&lt;/li&gt;
&lt;li&gt;Hack Shack Workshop W485: AI 101 - Convolutional neural network (CNN) for MNIST&lt;/li&gt;
&lt;li&gt;Hack Shack Workshop W486: Automate apps with the HPE Container Platform&lt;/li&gt;
&lt;li&gt;Hack Shack Session T491: What data scientists can learn from software developers&lt;/li&gt;
&lt;li&gt;Hack Shack Session T492: Accelerate innovation with DevOps for machine learning&lt;/li&gt;
&lt;li&gt;Making AI Real B329: A View From a Practitioner&lt;/li&gt;
&lt;li&gt;Hack Shack Session T490: Demystifying AI Technology Choices&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Each of the hands-on workshops included a presenter who ran the students through Jupyter notebooks covering the material and a couple of subject matter experts (SMEs) who helped facilitate the sessions and answer questions. The Jupyter Notebook format worked out really well. It provided the students with a fool-proof method to follow along with the instructor, step-by-step, as well as the opportunity to download material for later use.&lt;/p&gt;
&lt;p&gt;The workshops were capped for attendance so as to preserve a high-quality interactive experience for the students. The API 101 session proved to be very popular, as well as the one covering how to automate apps with the HPE Container Platform, where a lot of the participants remained thoroughly engaged to the very end and asked a lot of questions.&lt;/p&gt;
&lt;p&gt;Because the workshops are done virtually and you’re not physically in the room with the students, we knew it would be difficult to gauge how people felt during the sessions, i.e. whether someone was confused or getting frustrated about something. To mitigate this, we used a format where students could ask questions and receive help from the SMEs online. We also included a survey at the end of each workshop. Answers provided in the surveys will be used to help improve the workshops as we move forward.&lt;/p&gt;
&lt;p&gt;It was heartening to hear some of the feedback that the students provided:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/hs-blog-figure1-1593537773963.jpg&quot; alt=&quot;hs blog figure1&quot;&gt;&lt;/p&gt;
&lt;p&gt;While the workshops are great, there’s a whole lot more you can do at the &lt;a href=&quot;/hackshack/&quot;&gt;HPE Discover Virtual Event Hack Shack&lt;/a&gt;. Starting Week 2, we will be offering coding challenges where you can show off your coding chops and compete with others for prizes. And we’ll have the Arcade open for those of you who just want a chance to go after the IT Monster in the Hack Shack Attack! game. You can even download stickers! One of the best ways to find out what’s happening in the Hack Shack is to scroll down on our new page and check out our little video tour &lt;strong&gt;This Week in the Hack Shack&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/hs-blog-figure2-1593537838340.jpg&quot; alt=&quot;hs blog figure2&quot;&gt;&lt;/p&gt;
&lt;p&gt;For those of you who are looking for the Hack Shack through the HPE Discover Virtual Event catalog, note that the way you view the available sessions and workshops has changed since the days leading up to the event. In order to see the catalog and determine which sessions you wish to attend, you must first register. Don’t worry… it doesn’t cost anything. It’s just that, now that the event is running, things other than the catalog take center stage.&lt;/p&gt;
&lt;p&gt;When you &lt;a href=&quot;https://www.hpe.com/us/en/discover.html&quot;&gt;register&lt;/a&gt;, make sure that the password you choose is &lt;strong&gt;ALPHANUMERIC ONLY!&lt;/strong&gt; Don’t try putting in any fancy or special characters. If you run into problems with the registration, there is online chat and the opportunity to email folks who can help.
If you haven’t already signed up for them, I’ve found two easy methods you can use to navigate to the Hack Shack Workshops and Challenges.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Option 1&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Click on this &lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.sessions/?l=1043&amp;#x26;locale=en_US&quot;&gt;link&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;This should bring you to a screen that looks like what’s shown below. This is where you log in:&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/hs-blog-figure3-1593537844172.jpg&quot; alt=&quot;hs blog figure3&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Once logged in, you will be brought to a page where you can navigate to the session catalog (Content Catalog). Click on this link.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/hs-blog-figure4-1593537855197.jpg&quot; alt=&quot;hs blog figure4&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;Once you are in the catalog, search for &lt;strong&gt;Hack Shack&lt;/strong&gt; to find Workshops and Challenges.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/hs-blog-figure5-1593537864210.jpg&quot; alt=&quot;hs blog figure5&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;5&quot;&gt;
&lt;li&gt;You can also filter by Content Type to find the Hack Shack Workshops and Challenges.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/hs-blog-figure6-1593537873734.jpg&quot; alt=&quot;hs blog figure6&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Option 2&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;You can also find what Hack Shack Workshops and Challenges are available by navigating through the Hack Shack Lobby. Each week, the lobby schedule will be updated to show what sessions are available for registration. Simply click on the &lt;a href=&quot;/hackshack&quot;&gt;Hack Shack Lobby link&lt;/a&gt; to follow the highlighted Steps 1, 2, and 3.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/hs-blog-figure7-1593537881565.jpg&quot; alt=&quot;hs blog figure7&quot;&gt;&lt;/p&gt;
&lt;p&gt;We hope you’ll take advantage of attending the &lt;a href=&quot;https://www.hpe.com/us/en/discover.html&quot;&gt;HPE Discover Virtual Event&lt;/a&gt;. It’s free, informative, and chock full of material you just can’t get anywhere else. And we hope you’ll continue to follow us at the &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE DEV site&lt;/a&gt; where we’ll be opening up new platform pages, offering more tutorials and blog posts, and work to provide you with a continuing Hack Shack experience!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Get involved in the open source community! Part 3: Contributing back to the community]]></title><description><![CDATA[git101-part1-git icon 1788c In the Part 1 of my blog series, I discussed how to get started with Git and leverage some of the content…]]></description><link>https://developer.hpe.com/get-involved-in-the-open-source-community-part-3-contributing-back-to-th/</link><guid isPermaLink="false">https://developer.hpe.com/get-involved-in-the-open-source-community-part-3-contributing-back-to-th/</guid><pubDate>Wed, 24 Jun 2020 08:06:31 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/git-icon-1788c-1590702885345.png&quot; alt=&quot;git101-part1-git icon 1788c&quot;&gt;&lt;/p&gt;
&lt;p&gt;In the &lt;a href=&quot;/blog/get-involved-in-the-open-source-community-part-1-getting-started-with-gi&quot;&gt;Part 1&lt;/a&gt; of my blog series, I discussed how to get started with Git and leverage some of the content provided by the open source community. In &lt;a href=&quot;/blog/get-involved-in-the-open-source-community-part-2-sharing-with-the-commun&quot;&gt;Part 2&lt;/a&gt; of the series, I covered how to create and populate your first repo on GitHub. So far, we have covered use cases 1, 2 and 3 from the list below:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Use case 1: I’d like to use something from the community&lt;/li&gt;
&lt;li&gt;Use case 2: I&apos;d like to report an issue on a repository&lt;/li&gt;
&lt;li&gt;Use case 3: I&apos;d like to share something with the community&lt;/li&gt;
&lt;li&gt;Use case 4: I&apos;d like to contribute code to a repository&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In this article, I’m covering the last use case, where we will modify code from an existing repo and submit it back to its owner.&lt;/p&gt;
&lt;h1&gt;Use case 4: I&apos;d like to contribute code to a repo&lt;/h1&gt;
&lt;p&gt;In use case 1, we saw that we could use code from the community using &lt;code&gt;git clone&lt;/code&gt;. In use case 2, we reported an issue on a repo that we cloned and used. But what if you would like to contribute back to this project? For example, you might have discovered a bug that you have already fixed in your copy of the code and would like to share this fix. Or maybe you have enhanced a section of the code or fixed the documentation. Basically, anything that belongs to the project can be contributed back. In cases like this, the best approach is to &lt;code&gt;fork&lt;/code&gt; the original repository into your own GitHub account (which we have created in use case 3) and work on this private copy. Let&apos;s see how this works.&lt;/p&gt;
&lt;h2&gt;Step 1: Forking a repo into your own GitHub account&lt;/h2&gt;
&lt;p&gt;In this lab, we would like to fork the &lt;a href=&quot;https://github.com/Didier-Lalli/WelcomeGitDidier&quot;&gt;https://github.com/Didier-Lalli/WelcomeGitDidier&lt;/a&gt; repo in order to improve it. This step is done from the GUI of GitHub and requires that you own a GitHub account yourself and are logged into it.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Make sure you are connected to your GitHub account&lt;/li&gt;
&lt;li&gt;Then, open &lt;a href=&quot;https://github.com/Didier-Lalli/WelcomeGitDidier&quot;&gt;https://github.com/Didier-Lalli/WelcomeGitDidier&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/welcomegit-1592986241565.png&quot; alt=&quot;git-part3-welcomegit&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Click on the Fork button in the upper right corner&lt;/li&gt;
&lt;li&gt;After a while (depending on the size of the repo), you’ll be redirected to your own GitHub account. There, you will find your own copy of the WelcomeGitDidier repo, mentioning that it was forked from another source.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/forkedrepo-1592986346589.png&quot; alt=&quot;git-part3-forkedrepo&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Step 2: Cloning the forked repo&lt;/h2&gt;
&lt;p&gt;In order to start making changes to this repo, we need to clone it locally like we did in &lt;a href=&quot;/blog/get-involved-in-the-open-source-community-part-1-getting-started-with-gi&quot;&gt;Part 1&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Open a terminal session (terminal on Mac, PowerShell on Windows).&lt;/p&gt;
&lt;p&gt;1/ Clone your copy of the WelcomeGitDidier&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ git clone https://github.com/&amp;#x3C;YourGitHubUsername&gt;/WelcomeGitDidier
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;For example, for me it’s:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ git clone https://github.com/didou06/WelcomeGitDidier
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;2/ Check files part of the repo&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ cd WelcomeGitDidier
$ ls -l
total 16
-rw-r--r--  1 lalli  staff  20 May 25 18:11 README.md
-rw-r--r--  1 lalli  staff  79 May 25 18:11 helloworld.py
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;3/ Check content of Python file&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ cat helloworld.py
# This is part of lab 4 of our Git101 Jupyter Notebook 
print(&quot;Hello world !&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;4/ Check status of repo&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ git status
On branch master
Your branch is up to date with &apos;origin/master&apos;.
nothing to commit, working tree clean
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Step 3: Making changes to the cloned repo&lt;/h2&gt;
&lt;p&gt;Let&apos;s say that we would like to make a contribution to the Python script. For example, we would like to change that &quot;Hello world!&quot; into &quot;Hello World!&quot; or, even better, add another line that prints your name or “hello world” in a different language. You decide, but please make sure you keep what was already there in the file. Just keep adding to it.&lt;/p&gt;
&lt;p&gt;1/ Edit helloworld.py with your favorite editor and add a line at the end of the file with something like:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;print(&quot;Hello Didou06, many thanks for the contribution&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Save the file&lt;/p&gt;
&lt;p&gt;2/ Check status of repo&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ git status
On branch master
Your branch is up to date with &apos;origin/master&apos;.
Changes not staged for commit:
  (use &quot;git add &amp;#x3C;file&gt;...&quot; to update what will be committed)
  (use &quot;git checkout -- &amp;#x3C;file&gt;...&quot; to discard changes in working directory)
	modified:   helloworld.py
no changes added to commit (use &quot;git add&quot; and/or &quot;git commit -a&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can see that Git is telling us in the &lt;code&gt;git status&lt;/code&gt; output that changes have been made to a file (helloworld.py) that have not been staged (nor committed) yet. We are going to commit those changes, but if you would like to cancel the changes you made and revert to the original content of the cloned repo (from the master branch at origin), you can use &lt;code&gt;git checkout -f&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;3/ Before you commit those changes, make sure the new code works fine. This is very important. You should not commit non-working code. We can run the Python code with:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ python helloworld.py 
Hello world !
Hello Didou06, many thanks for the contribution
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now that we are happy about this new version of the code, let&apos;s add the file and commit the changes like we did in &lt;a href=&quot;/blog/get-involved-in-the-open-source-community-part-2-sharing-with-the-commun&quot;&gt;Part 2&lt;/a&gt; blog:&lt;/p&gt;
&lt;p&gt;4/ Commit changes locally using &lt;code&gt;git add&lt;/code&gt; and &lt;code&gt;git commit&lt;/code&gt; and verify that Git has picked up the change using &lt;code&gt;git status&lt;/code&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ git add helloworld.py
$ git commit -m &quot;Updated helloworld for Part 3 blog&quot;
[master 0a7dbb0] Updated helloworld for Part 3 blog
 1 file changed, 1 insertion(+)
$ git status
On branch master
Your branch is ahead of &apos;origin/master&apos; by 1 commit.
  (use &quot;git push&quot; to publish your local commits)
nothing to commit, working tree clean
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Step 4: Working in your own branch&lt;/h2&gt;
&lt;p&gt;At this point we are ready to &lt;code&gt;git push&lt;/code&gt; our changes. But before we do this, let&apos;s discuss branches. Branches are a key concept that are at the heart of Git.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: Mastering branches is beyond the scope of this lab, but you can read more about it &lt;a href=&quot;https://git-scm.com/book/en/v2/Git-Branching-Branches-in-a-Nutshell&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;It is not considered good practice to push changes to the master branch, which is the default when first cloning a repo (note all the &quot;On branch master&quot; messages in the output from the git commands). Instead, developers typically create a branch (in most cases corresponding to a theme they have worked on, such as a new feature or a fix) and push their changes to that branch. Git allows as many branches as you&apos;d like. Git also provides the tooling to merge branches with the master branch when all validation checks have been successful.&lt;/p&gt;
&lt;p&gt;So, let&apos;s create a branch for this update called &amp;#x3C;yourname&gt;/AddedMyName.&lt;/p&gt;
&lt;p&gt;This next script will do the following:&lt;/p&gt;
&lt;p&gt;1/ Create a branch called &amp;#x3C;yourname&gt;/AddedMyName using &lt;code&gt;git branch&lt;/code&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ git branch didou06/AddedMyName
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;2/ Switch to using that new branch instead of the master using &lt;code&gt;git checkout&lt;/code&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ git checkout didou06/AddedMyName
Switched to branch &apos;didou06/AddedMyName&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can see that you are now operating from this new branch, but still within your local repo. It&apos;s now time to push your changes back to your remote repo using a &lt;code&gt;git push&lt;/code&gt; command.&lt;/p&gt;
&lt;p&gt;3/ Push changes to your own remote repo using &lt;code&gt;git push&lt;/code&gt;. You will be prompted for your GitHub username and password.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ git push origin didou06/AddedMyName
Username for &apos;https://github.com&apos;: didou06
Password for &apos;https://didou06@github.com&apos;: 
Enumerating objects: 5, done.
Counting objects: 100% (5/5), done.
Delta compression using up to 8 threads
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 413 bytes | 413.00 KiB/s, done.
Total 3 (delta 0), reused 0 (delta 0)
remote: 
remote: Create a pull request for &apos;didou06/AddedMyName&apos; on GitHub by visiting:
remote:      https://github.com/Didou06/WelcomeGitDidier/pull/new/didou06/AddedMyName
remote: 
To https://github.com/Didou06/WelcomeGitDidier.git
 * [new branch]      didou06/AddedMyName -&gt; didou06/AddedMyName
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;4/ Verify that Git has picked up the change using &lt;code&gt;git status&lt;/code&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ git status
On branch didou06/AddedMyName
nothing to commit, working tree clean
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;5/ Now, open GitHub and verify that you have a new branch (in addition to master) in the branches section of the repo page&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/branches-1592986802278.png&quot; alt=&quot;git-part3-branches&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can drill down to the content of &lt;strong&gt;helloworld.py&lt;/strong&gt; to verify that your changes are there.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/updatedpython-1592986856289.png&quot; alt=&quot;git-part3-updatedpython&quot;&gt;&lt;/p&gt;
&lt;p&gt;6/ At this stage, your only remote repo is WelcomeGitDidier on your own GitHub account. You can verify this using the command &lt;code&gt;git remote -v&lt;/code&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ git remote -v
origin https://github.com/Didou06/WelcomeGitDidier.git (fetch)
origin https://github.com/Didou06/WelcomeGitDidier.git (push)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;But what would happen if the original repo that you forked was to change? It is possible that, if you forked something a while ago, things have evolved on the original master branch, and your branch is now out of sync. One way to fix this is to add the original repo as an additional remote repo to your local repo using &lt;code&gt;git remote&lt;/code&gt;. This is a best practice, and that remote repo is usually named &lt;strong&gt;upstream&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;7/ Let&apos;s do this now using &lt;code&gt;git remote add&lt;/code&gt; and check again using &lt;code&gt;git remote -v&lt;/code&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ git remote add upstream https://github.com/Didier-Lalli/WelcomeGitDidier
$ git remote -v
origin	https://github.com/Didou06/WelcomeGitDidier.git (fetch)
origin	https://github.com/Didou06/WelcomeGitDidier.git (push)
upstream	https://github.com/Didier-Lalli/WelcomeGitDidier (fetch)
upstream	https://github.com/Didier-Lalli/WelcomeGitDidier (push)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;8/ If you need to integrate some recent changes made upstream, first download them with &lt;code&gt;git fetch upstream&lt;/code&gt;, then incorporate them using the &lt;code&gt;git merge&lt;/code&gt; command.&lt;/p&gt;
&lt;p&gt;This merge might generate conflicts if changes were made to the same section of one of the files. Git will do its best to merge the changes automatically but, when it cannot, it will tell you, and you will have to fix those conflicts by hand (in a text editor) before committing the changes to your repo. You can use &lt;code&gt;git status&lt;/code&gt; to check whether this is the case.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ git merge upstream/master 
Already up to date.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This is not the case in our example.&lt;/p&gt;
&lt;h2&gt;Step 5: Opening a Pull Request&lt;/h2&gt;
&lt;p&gt;It&apos;s now time to contribute these changes back to the original WelcomeGitDidier repo. This is done by opening a so-called &lt;strong&gt;Pull Request&lt;/strong&gt; (often abbreviated PR by developers). This action tells the owner of the repo that you forked (that&apos;s me) that you are proposing some changes and you are asking the owner to review and pull those changes (thus the Pull Request term) into the shared repo master copy. These pull requests are created from the GitHub web page.&lt;/p&gt;
&lt;p&gt;1/ In GitHub, list the branches available to identify the one that you have just created (&amp;#x3C;yourname&gt;/AddedMyName)&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/branches-1592986802278.png&quot; alt=&quot;git-part3-branches&quot;&gt;&lt;/p&gt;
&lt;p&gt;2/ Use the &lt;code&gt;New pull request&lt;/code&gt; button to open a new PR. Notice that this is opening a pull request on the original repo we forked earlier in the lab. Put a simple comment with your email, check that the changes you made are part of the pull request at the bottom of the page, and hit the Create pull request button.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/openingpr-1592986965058.png&quot; alt=&quot;git-part3-openingpr&quot;&gt;&lt;/p&gt;
&lt;p&gt;3/ You can now see your &lt;strong&gt;Pull Request&lt;/strong&gt; (there might be other ones already there). It now requires approval from the owner and might lead to an exchange between the owner and the contributor, until the owner accepts (or rejects) the PR.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/viewingpr-1592987054957.png&quot; alt=&quot;git-part3-viewingpr&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: You can continue to make updates to this branch (and by default this Pull Request) until it is accepted by the owner of the repo.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This terminates use case 4: You should now be able to open a Pull Request to enhance a public repo. Congratulations. You are now a real open source contributor. Welcome to the community!&lt;/p&gt;
&lt;h1&gt;Where do I go from here?&lt;/h1&gt;
&lt;p&gt;We only scratched the surface of Git in these three blog posts. We showed you some typical use cases in order to illustrate the most important Git actions:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Cloning an existing public repo (covered in &lt;a href=&quot;/blog/get-involved-in-the-open-source-community-part-1-getting-started-with-gi&quot;&gt;Part 1&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Opening an issue on a public repo (covered in &lt;a href=&quot;/blog/get-involved-in-the-open-source-community-part-1-getting-started-with-gi&quot;&gt;Part 1&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Creating your GitHub account and populating a first public repo (covered in &lt;a href=&quot;/blog/get-involved-in-the-open-source-community-part-2-sharing-with-the-commun&quot;&gt;Part 2&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Forking a public repo and opening a pull request (covered here)&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;If you want to discover more about Git, I recommend the following resources:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://www.git-scm.com/book/en/v2&quot;&gt;The Pro Git Book&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://guides.github.com/introduction/git-handbook/&quot;&gt;The Git Handbook&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=SWYqp7iY_Tc&amp;#x26;feature=youtu.be&quot;&gt;Video Git Crash Course&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This wraps up the series. Stay tuned to &lt;a href=&quot;/blog&quot;&gt;HPE DEV&lt;/a&gt; for more articles and tutorials, and start contributing some code back to your favorite existing projects.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Master the automation universe the easy way! Part 1: Introduction to StackStorm]]></title><description><![CDATA[stackstorm In case you haven’t noticed, the world is obsessed with automation. You hear it every day in meetings with sales people, team…]]></description><link>https://developer.hpe.com/master-the-automation-universe-the-easy-way-part-1-introduction-to-stack/</link><guid isPermaLink="false">https://developer.hpe.com/master-the-automation-universe-the-easy-way-part-1-introduction-to-stack/</guid><pubDate>Mon, 22 Jun 2020 07:20:59 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/stackstorm-1592810807425.png&quot; alt=&quot;stackstorm&quot;&gt;&lt;/p&gt;
&lt;p&gt;In case you haven’t noticed, the world is obsessed with automation. You hear it every day in meetings with salespeople and team members, and in the latest blog entries about data centers. Inevitably, someone will mention the words “REST API” and, to prove they really know what they are talking about, they quickly mention Salt, Chef, Puppet, and Ansible, just like a trusted advisor would. Talk about name-dropping!&lt;/p&gt;
&lt;p&gt;But not everyone is reading off the same script. The automation space is filled with a lot of choices when it comes to picking a solution and there are plenty of ways one can &lt;em&gt;automate&lt;/em&gt; tasks. When I hear the word &lt;em&gt;automate&lt;/em&gt;, I naturally think &lt;em&gt;remove humans from the process&lt;/em&gt;.  This means choosing a solution that can be &lt;em&gt;aware&lt;/em&gt; of the environment, something that can listen and watch. Then, when a predefined event happens, automation can spring into action and do the heavy lifting for us humans.&lt;/p&gt;
&lt;p&gt;When we look at automation tools like Salt, Chef, Puppet, and Ansible, they all have a place in the automation world. These are tried and true industry solutions but, for the most part, need to be initiated by some sort of process, even if it’s simply logging in and manually kicking it off. StackStorm takes a different approach. StackStorm (st2) is an event-based automation framework that is often described as &lt;em&gt;If this, then that&lt;/em&gt; automation. StackStorm can use sensors to monitor systems and listen for specific events. If an event happens, then a rule can be applied to run a single action or a complex set of actions called a &lt;em&gt;workflow&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/stackstorm-view-1592810817247.png&quot; alt=&quot;stackstorm view&quot;&gt;&lt;/p&gt;
&lt;p&gt;StackStorm has quite a few moving parts. The good news is you can start small, automating with a couple of &lt;strong&gt;actions&lt;/strong&gt;, and later begin using &lt;strong&gt;sensors&lt;/strong&gt; and &lt;strong&gt;rules&lt;/strong&gt; as your understanding grows. Let’s take a closer look at actions. &lt;strong&gt;Actions&lt;/strong&gt; are just scripts, pieces of code that can perform automation or remediation tasks. They are the workhorse of StackStorm. Actions can restart a service on a server, spin up a VM, or send an alert or notification, just to name a few possibilities. These examples are really not all that impressive, but hold on, ‘cause we’re just getting started.&lt;/p&gt;
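&lt;p&gt;To give you a feel for how simple an action can be, here is a minimal sketch of a custom Python action. It assumes StackStorm’s standard Python action interface; the action name, parameters, and remediation logic are purely illustrative.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from st2common.runners.base_action import Action


class RestartServiceAction(Action):
    # Illustrative action: restart a service on a remote host (names are placeholders)

    def run(self, host, service):
        # self.logger and self.config are provided by the Action base class
        self.logger.info(&quot;Restarting %s on %s&quot;, service, host)
        # Real remediation logic (SSH, REST call, etc.) would go here
        return {&quot;host&quot;: host, &quot;service&quot;: service, &quot;restarted&quot;: True}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;A small YAML metadata file registers an action like this with StackStorm, after which rules and workflows can call it just like any of the actions that ship in a pack.&lt;/p&gt;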
&lt;p&gt;&lt;strong&gt;Sensors&lt;/strong&gt; are deployed to watch event/alarm queues. An alarm sensor can listen for specific alarms and recognize when the alarm is present. Next, the sensor will load a &lt;strong&gt;trigger&lt;/strong&gt; that is assigned to a &lt;strong&gt;rule&lt;/strong&gt;. When the trigger fires, the rule that is tied to that trigger runs the actions or workflows that are assigned to the rule. This could be something like sending a notification to PagerDuty, spinning a VM up or down, or opening an incident report in &lt;strong&gt;ServiceNow&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;My point here is that when an event happens, several things are put into motion with no human intervention. Sure, tools like Ansible, Chef, and Puppet have capabilities to take action and make changes, but what they are missing is the ability to automatically &lt;em&gt;know&lt;/em&gt; when something happens.&lt;/p&gt;
&lt;p&gt;StackStorm sensors, actions, triggers, rules, and workflows are all provided together in StackStorm packs. If you have an st2 server, you can obtain all the pre-written automation required to integrate with something like &lt;strong&gt;ServiceNow&lt;/strong&gt; simply by typing &lt;code&gt;st2 pack install servicenow&lt;/code&gt;. With one command, you have installed all the automation software you need to fully integrate with &lt;strong&gt;ServiceNow&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;If you point your web browser to &lt;a href=&quot;https://exchange.stackstorm.com&quot;&gt;exchange.stackstorm.com&lt;/a&gt;, you will discover over 170 StackStorm automation packs ready for you to consume. Azure, AWS, VMware… hundreds of actions you can install and use to automate just about anything.&lt;/p&gt;
&lt;p&gt;I have been deep into StackStorm for over a year and a half now. I’ve developed st2 packs for HPE OneView, iLoAmplifier, HPE Composable Fabric, Aruba CX and Qumulo and I am just getting started. Want to learn more? Head on over to my &lt;a href=&quot;https://github.com/xod442/stackstorm-tutorial&quot;&gt;StackStorm tutorial&lt;/a&gt; and you, too, can master the automation universe, if you’re into that sort of thing. Keep an eye out on the &lt;a href=&quot;/blog&quot;&gt;HPE DEV blog&lt;/a&gt; site for more interesting articles and tutorials on automation.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Introducing an NFS Server Provisioner for the HPE CSI Driver for Kubernetes]]></title><description><![CDATA[With the release of HPE CSI Driver for Kubernetes 1.2.0, a new set of features has been made available as a technology preview, including an…]]></description><link>https://developer.hpe.com/introducing-an-nfs-server-provisioner-for-the-hpe-csi-driver-for-kuberne/</link><guid isPermaLink="false">https://developer.hpe.com/introducing-an-nfs-server-provisioner-for-the-hpe-csi-driver-for-kuberne/</guid><pubDate>Sat, 20 Jun 2020 21:04:47 GMT</pubDate><content:encoded>&lt;p&gt;With the release of HPE CSI Driver for Kubernetes 1.2.0, a new set of features has been made available as a technology preview, including an NFS Server Provisioner and a Pod Monitor. The motivation behind including these new features is explored in the official announcement – &lt;a href=&quot;https://community.hpe.com/t5/around-the-storage-block/tech-preview-network-file-system-server-provisioner-for-hpe-csi/ba-p/7092948&quot;&gt;Tech preview: Network File System Server Provisioner for HPE CSI Driver for Kubernetes&lt;/a&gt;. In this blog post, I’ll demonstrate how to put these features to good use!&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The features and capabilities showcased within this blog are considered beta and subject to change. Do not use them for production workloads until the official general availability release, currently on target for version 1.3.0 of the HPE CSI Driver for Kubernetes.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This tutorial assumes version 1.2.0 or later of the HPE CSI Driver for Kubernetes has been installed with a functioning backend, such as HPE Nimble Storage, HPE Primera or HPE 3PAR. The CSI driver is available as a &lt;a href=&quot;https://hub.helm.sh/charts/hpe-storage/hpe-csi-driver&quot;&gt;Helm chart&lt;/a&gt; or &lt;a href=&quot;https://operatorhub.io/operator/hpe-csi-operator&quot;&gt;Operator&lt;/a&gt;.&lt;/p&gt;
&lt;h1&gt;Background&lt;/h1&gt;
&lt;p&gt;The Container Storage Providers supported by the HPE CSI Driver are block storage solutions that serve volumes over either iSCSI or Fibre Channel. Inherently, traditional filesystems on these volumes are either XFS, ext3/4 or Btrfs. In other words, these are non-clustered filesystems that only allow a single host at a time to access the volumes. With that limitation in mind, the HPE CSI Driver will only support &lt;code&gt;ReadWriteOnce&lt;/code&gt; (RWO) &lt;code&gt;PersistentVolumeClaims&lt;/code&gt; (PVCs) natively. In an effort to serve multiple &lt;code&gt;Pods&lt;/code&gt; across multiple Kubernetes worker nodes, the user would have to be creative by either running Rook or the upstream NFS server provisioner to provide what is called &lt;code&gt;ReadWriteMany&lt;/code&gt; (RWX) and &lt;code&gt;ReadOnlyMany&lt;/code&gt; (ROX) access modes.&lt;/p&gt;
&lt;p&gt;In an effort to simplify deployment, Hewlett Packard Enterprise opted to create a solution that is seamless for the HPE CSI Driver users and administrators.  More information about the design and the motivation behind it can be found in the &lt;a href=&quot;https://community.hpe.com/t5/around-the-storage-block/tech-preview-network-file-system-server-provisioner-for-hpe-csi/ba-p/7092948&quot;&gt;tech preview blog post&lt;/a&gt;.&lt;/p&gt;
&lt;h1&gt;Enable the NFS Server Provisioner&lt;/h1&gt;
&lt;p&gt;Enabling the NFS Server Provisioner for PVCs is straightforward, as it’s controlled by the &lt;code&gt;StorageClass&lt;/code&gt; parameter &lt;code&gt;nfsResources&lt;/code&gt;. What’s important to understand is that, once &lt;code&gt;nfsResources&lt;/code&gt; has been enabled on a &lt;code&gt;StorageClass&lt;/code&gt;, all PVCs, including RWO claims, will be served by an NFS server set up by the NFS Server Provisioner.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; All API object declarations below assume &lt;code&gt;kubectl create -f-&lt;/code&gt;; paste the YAML stanza, then hit CTRL-D on a new line after pasting.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This is the shipping default &lt;code&gt;StorageClass&lt;/code&gt; with &lt;code&gt;nfsResources&lt;/code&gt; enabled.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: &quot;true&quot;
  name: hpe-standard
provisioner: csi.hpe.com
parameters:
  csi.storage.k8s.io/controller-expand-secret-name: nimble-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: kube-system
  csi.storage.k8s.io/controller-publish-secret-name: nimble-secret
  csi.storage.k8s.io/controller-publish-secret-namespace: kube-system
  csi.storage.k8s.io/node-publish-secret-name: nimble-secret
  csi.storage.k8s.io/node-publish-secret-namespace: kube-system
  csi.storage.k8s.io/node-stage-secret-name: nimble-secret
  csi.storage.k8s.io/node-stage-secret-namespace: kube-system
  csi.storage.k8s.io/provisioner-secret-name: nimble-secret
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
  description: Volume created by the HPE CSI Driver for Kubernetes
  accessProtocol: iscsi
  csi.storage.k8s.io/fstype: xfs
  nfsResources: &quot;true&quot;
reclaimPolicy: Delete
allowVolumeExpansion: true
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Creating an RWX claim is as simple as it can be.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-rwx-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 32Gi
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It’s now possible to create a &lt;code&gt;Deployment&lt;/code&gt; with multiple replicas to access the claim.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 5
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - image: datamattsson/my-app
          name: my-app
          volumeMounts:
            - name: my-app
              mountPath: /data
      volumes:
        - name: my-app
          persistentVolumeClaim:
            claimName: my-rwx-pvc
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once deployed and scaled, it should look like what you see below.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ kubectl get pods -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP          NODE  
my-app-6bfbb6f87f-5gr8k   1/1     Running   0          37s   10.45.0.2   tme-lnx-worker4 
my-app-6bfbb6f87f-rzn64   1/1     Running   0          37s   10.36.0.2   tme-lnx-worker1  
my-app-6bfbb6f87f-tk4vx   1/1     Running   0          37s   10.44.0.3   tme-lnx-worker2  
my-app-6bfbb6f87f-vkqrc   1/1     Running   0          37s   10.44.0.2   tme-lnx-worker2  
my-app-6bfbb6f87f-z87p2   1/1     Running   0          37s   10.44.0.1   tme-lnx-worker2
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Pro tip:&lt;/strong&gt; Creating a ROX claim requires the &lt;code&gt;Pod&lt;/code&gt; to mount the claim read-only. Please check the documentation on the &lt;a href=&quot;https://scod.hpedev.io/csi_driver/using.html#using_the_nfs_server_provisioner&quot;&gt;HPE Storage Container Orchestrator Documentation&lt;/a&gt; (SCOD) for more details.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h1&gt;Inspecting the actual NFS Deployment&lt;/h1&gt;
&lt;p&gt;While all parts of both the NFS client and server are deployed transparently for the user, it is important to understand what actually ends up running on the Kubernetes cluster. By default, each PVC creates a single-replica &lt;code&gt;Deployment&lt;/code&gt;, a &lt;code&gt;Service&lt;/code&gt;, and an RWO PVC that maps to a supported backend. The NFS mount for the Pods accessing the claim is taken care of by the HPE CSI Driver.&lt;/p&gt;
&lt;p&gt;Where the NFS servers get deployed is controlled by the &lt;code&gt;nfsNamespace&lt;/code&gt; parameter. The default is &quot;hpe-nfs&quot;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/csi-120-rev1b-1592687082644.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Object &lt;a href=&quot;https://scod.hpedev.io/csi_driver/diagnostics.html#nfs_server_provisioner_resources&quot;&gt;naming conventions and other diagnostics&lt;/a&gt; are available on SCOD.&lt;/p&gt;
&lt;h1&gt;User control of the NFS server&lt;/h1&gt;
&lt;p&gt;Sometimes it’s desired to have a single default &lt;code&gt;StorageClass&lt;/code&gt; on the cluster for all access mode needs. This is possible but requires non-portable PVCs and a tweak to the &lt;code&gt;StorageClass&lt;/code&gt; using &lt;code&gt;allowOverrides&lt;/code&gt; and omitting &lt;code&gt;nfsResources&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Another important detail is that the NFS server needs to be deployed in the same Namespace as the requesting claim to allow users to create CSI snapshots and clones.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: &quot;true&quot;
  name: hpe-standard
provisioner: csi.hpe.com
parameters:
  csi.storage.k8s.io/controller-expand-secret-name: nimble-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: kube-system
  csi.storage.k8s.io/controller-publish-secret-name: nimble-secret
  csi.storage.k8s.io/controller-publish-secret-namespace: kube-system
  csi.storage.k8s.io/node-publish-secret-name: nimble-secret
  csi.storage.k8s.io/node-publish-secret-namespace: kube-system
  csi.storage.k8s.io/node-stage-secret-name: nimble-secret
  csi.storage.k8s.io/node-stage-secret-namespace: kube-system
  csi.storage.k8s.io/provisioner-secret-name: nimble-secret
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
  description: Volume created by the HPE CSI Driver for Kubernetes
  accessProtocol: iscsi
  csi.storage.k8s.io/fstype: xfs
  allowOverrides: nfsResources,nfsNamespace
reclaimPolicy: Delete
allowVolumeExpansion: true
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;By annotating the PVC, a user can request the NFS Server Provisioner to serve the claim. The user can also deploy the server in the &lt;code&gt;Namespace&lt;/code&gt; requesting the claim.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-other-rwx-pvc
  annotations:
    csi.hpe.com/nfsResources: &quot;true&quot;
    csi.hpe.com/nfsNamespace: default
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 32Gi
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Inspecting the &lt;code&gt;Namespace&lt;/code&gt; where the claim was created, we may observe the API objects that were created.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ kubectl get configmap,deploy,pvc,service -o name
configmap/hpe-nfs-config
deployment.apps/hpe-nfs-053b6374-db9c-46c9-94d9-d3c3e59a55e4
persistentvolumeclaim/hpe-nfs-053b6374-db9c-46c9-94d9-d3c3e59a55e4
persistentvolumeclaim/my-other-rwx-pvc
service/hpe-nfs-053b6374-db9c-46c9-94d9-d3c3e59a55e4
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The user may now use the requesting claim as a &lt;code&gt;dataSource&lt;/code&gt; in a new claim to clone it. For a comprehensive tutorial on how to use CSI snapshots and clones, check out this previous blog post: &lt;a href=&quot;/blog/PklOy39w8NtX6M2RvAxW/hpe-csi-driver-for-kubernetes-snapshots-clones-and-volume-expansion&quot;&gt;HPE CSI Driver for Kubernetes: Snapshots, Clones and Volume Expansion&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;Advanced configuration&lt;/h1&gt;
&lt;p&gt;The tech preview is currently hardcoded to allow twenty running NFS servers per worker node in the cluster. Request limits are also unrestricted. You’re encouraged to tinker with the request limits during the tech preview, as the engineering team is looking for real-world guidance. In the GA release, the request limits will no longer be unrestricted. We know that the NFS server has a memory footprint of around 150MiB when starting cold, and it all comes down to how much buffer cache you want to set aside for servicing cached read requests. We also know the NFS server will consume quite a lot of CPU cycles during load tests.&lt;/p&gt;
&lt;p&gt;These are the &lt;code&gt;StorageClass&lt;/code&gt; parameters that control the request limits:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;nfsResourceLimitsCpuM&lt;/strong&gt;: Specify CPU limits for the server &lt;code&gt;Deployment&lt;/code&gt; in milli CPU. Default: no limits applied. Example: &quot;500m&quot;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;nfsResourceLimitsMemoryMi&lt;/strong&gt;: Specify memory limits (in megabytes) for the server &lt;code&gt;Deployment&lt;/code&gt;. Default: no limits applied. Example: &quot;500Mi&quot;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It’s also possible to fine tune the NFS server itself. The &lt;code&gt;ConfigMap&lt;/code&gt; &quot;hpe-nfs-config&quot; in the &lt;code&gt;Namespace&lt;/code&gt; where the server is deployed represents the running server configuration. Samples can be found in the &lt;a href=&quot;https://github.com/nfs-ganesha/nfs-ganesha/tree/master/src/config_samples&quot;&gt;NFS-Ganesha GitHub&lt;/a&gt; repo.&lt;/p&gt;
&lt;p&gt;The NFS client mount options are also tunable. This can be useful for making tweaks to fit a certain best practice for running a particular application over NFS. Do note that NFSv4 is the only version supported at this time.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;nfsMountOptions&lt;/strong&gt;: Customize NFS mount options for the &lt;code&gt;Pods&lt;/code&gt; to the server &lt;code&gt;Deployment&lt;/code&gt;. Default: &quot;nolock,hard,vers=4&quot;&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;Automatic recovery with the Pod Monitor&lt;/h1&gt;
&lt;p&gt;In an effort to ensure the NFS servers are kept alive, HPE introduced a customer-facing feature called Pod Monitor. It monitors &lt;code&gt;Pods&lt;/code&gt; on the cluster with the label &lt;code&gt;monitored-by: hpe-csi&lt;/code&gt;. It checks the status of the &lt;code&gt;Pod&lt;/code&gt; at 30-second intervals (tunable) and watches for the &lt;code&gt;NodeLost&lt;/code&gt; transition. It then verifies that the node is indeed unreachable and effectively deletes the &lt;code&gt;Pod&lt;/code&gt; and &lt;code&gt;VolumeAttachments&lt;/code&gt; to let Kubernetes reschedule the &lt;code&gt;Pod&lt;/code&gt; on a healthy node.&lt;/p&gt;
&lt;p&gt;The Pod Monitor is necessary because the Kubernetes defaults used to perform automatic recovery from node outages are too conservative and would leave workloads stalled for over 10 minutes before they recover. And, there would still be problems with the &lt;code&gt;VolumeAttachment&lt;/code&gt;, as CSI won’t forcefully remove it because, for all it knows, the volume is still mounted on the node that became isolated. As the CSPs supported by the HPE CSI Driver strip the initiator groups from a volume before applying the new ones, split brain would never happen. There might be dirty buffers, but modern filesystems recover gracefully and, if there’s application-level corruption, there should be backups or at least snapshots of the volume to recover from.&lt;/p&gt;
&lt;p&gt;Read more about the &lt;a href=&quot;https://scod.hpedev.io/csi_driver/monitor.html&quot;&gt;Pod Monitor&lt;/a&gt; on SCOD as it’s possible to apply it to any workload backed by a HPE CSI Driver volume.&lt;/p&gt;
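&lt;p&gt;If you want to check which workloads are currently opted in to the Pod Monitor, one way is to list the Pods carrying the &lt;code&gt;monitored-by: hpe-csi&lt;/code&gt; label. Below is a minimal sketch using the official Kubernetes Python client; it assumes a working kubeconfig and is purely illustrative.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from kubernetes import client, config

# Load credentials from the local kubeconfig (in-cluster config also works)
config.load_kube_config()
v1 = client.CoreV1Api()

# List every Pod, in any Namespace, that the Pod Monitor is watching
pods = v1.list_pod_for_all_namespaces(label_selector=&quot;monitored-by=hpe-csi&quot;)
for pod in pods.items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)
&lt;/code&gt;&lt;/pre&gt;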
&lt;h1&gt;Summary&lt;/h1&gt;
&lt;p&gt;Please take the NFS Server Provisioner and Pod Monitor for a spin if you get the chance. We value your feedback and we keep all channels of communication open for this purpose. If you’re an HPE employee, join our Slack community at &lt;a href=&quot;https://hpedev.slack.com&quot;&gt;hpedev.slack.com&lt;/a&gt;. If you’re not an employee, sign up at &lt;a href=&quot;https://slack.hpedev.io&quot;&gt;slack.hpedev.io&lt;/a&gt; first. It’s also possible to &lt;a href=&quot;https://github.com/hpe-storage/csi-driver/issues/new?title=Feedback%20on%20RWX%20functionality%20in%201.2.0&quot;&gt;report issues through GitHub&lt;/a&gt;. Beta feedback, questions and concerns may also be raised through a regular support ticket. Please check with your HPE representative of the respective storage backend on how to log a support case.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;HPE CSI Driver for Kubernetes &lt;a href=&quot;https://hub.helm.sh/charts/hpe-storage/hpe-csi-driver&quot;&gt;Helm Chart&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;HPE CSI Operator on &lt;a href=&quot;https://operatorhub.io/operator/hpe-csi-operator&quot;&gt;OperatorHub.io&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Read the Tech Preview announcement blog COMING SOON!&lt;/li&gt;
&lt;li&gt;Visit &lt;a href=&quot;https://scod.hpedev.io&quot;&gt;HPE Storage Container Orchestrator Documentation&lt;/a&gt; (SCOD)&lt;/li&gt;
&lt;li&gt;HPE CSI Driver &lt;a href=&quot;https://github.com/hpe-storage/csi-driver&quot;&gt;source code&lt;/a&gt; on GitHub&lt;/li&gt;
&lt;li&gt;Browse the &lt;a href=&quot;https://developer.hpe.com/api/hpe-nimble-csp/&quot;&gt;CSP specification&lt;/a&gt; if you want to include your platform with the HPE CSI Driver&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Until my next blog post, happy containerizing!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[All HPE Composable Ecosystem SDKs now support OneView 5.2 automation]]></title><description><![CDATA[HPE OneView 5.2 (REST API version 1600) automation features are now supported by all of the HPE OneView SDKs. HPE’s Composable Ecosystem…]]></description><link>https://developer.hpe.com/all-hpe-composable-ecosystem-sdks-now-support-oneview-52-automation/</link><guid isPermaLink="false">https://developer.hpe.com/all-hpe-composable-ecosystem-sdks-now-support-oneview-52-automation/</guid><pubDate>Wed, 17 Jun 2020 18:50:49 GMT</pubDate><content:encoded>&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/integrated-systems/software.html&quot;&gt;HPE OneView 5.2&lt;/a&gt; (REST API version 1600) automation features are now supported by all of the HPE OneView SDKs. HPE’s Composable Ecosystem actively supports &lt;a href=&quot;https://github.com/HewlettPackard/oneview-ansible&quot;&gt;Ansible&lt;/a&gt;, &lt;a href=&quot;https://github.com/HewlettPackard/oneview-python&quot;&gt;Python&lt;/a&gt;, &lt;a href=&quot;https://github.com/HewlettPackard/oneview-golang&quot;&gt;Golang&lt;/a&gt;, &lt;a href=&quot;https://github.com/HewlettPackard/terraform-provider-oneview/releases/tag/v1.3.0&quot;&gt;Terraform&lt;/a&gt;, &lt;a href=&quot;https://github.com/HewlettPackard/oneview-chef&quot;&gt;Chef&lt;/a&gt;, &lt;a href=&quot;https://github.com/HewlettPackard/oneview-puppet&quot;&gt;Puppet&lt;/a&gt;, &lt;a href=&quot;https://github.com/HewlettPackard/POSH-HPOneView&quot;&gt;PowerShell&lt;/a&gt; and &lt;a href=&quot;https://github.com/HewlettPackard/oneview-sdk-ruby&quot;&gt;Ruby&lt;/a&gt; SDKs. Using the unified HPE OneView API and these popular tools, IT administrators can deploy and update servers, storage, and networking simultaneously, using only a single line of code. Composing new infrastructure is now not only faster and more agile, but also more predictable.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/integrated-systems/software.html&quot;&gt;HPE OneView&lt;/a&gt; uses software-defined intelligence in a template-driven approach to automate the deployment, provisioning, updating, and integration of resources, such as compute, storage, and networking infrastructure. Designed with a modern, standards-based API, HPE OneView gives IT organizations the ability to connect their software-defined infrastructure from core to cloud within a diverse partner ecosystem. IT organizations can leverage this partner ecosystem to integrate HPE OneView within their existing management frameworks for their preferred platforms.&lt;/p&gt;
&lt;p&gt;HPE offers SDKs for industry-leading software deployment, provisioning, and configuration management tools, including Ansible, Terraform, Chef and Puppet. These SDKs can be used to configure HPE OneView managed infrastructure resources. IT organizations can use these SDKs to streamline the task of configuring and maintaining a company&apos;s servers. The SDKs allow for integration with cloud-based platforms to automatically provision and configure new machines, enabling administrators to create a resource topology similar to that of a public cloud on their own physical infrastructure.&lt;/p&gt;
&lt;p&gt;Developers can programmatically control HPE OneView managed resources using infrastructure-as-code for physical compute, storage, and fabric resources. Infrastructure-as-code enables complete datacenter automation, consistent reproducibility, versioning, and roll back.&lt;/p&gt;
&lt;p&gt;The HPE OneView SDKs also provide API support for the Python, Ruby, Golang, and PowerShell languages, enabling developers to easily build integrations and scalable solutions with HPE OneView. With this language support, you can integrate popular automation tools based on these languages with HPE OneView.&lt;/p&gt;
&lt;p&gt;For more information:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/platform/hpe-oneview/home&quot;&gt;HPE DEV OneView&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/oneview-ansible&quot;&gt;Ansible GitHub release&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/terraform-provider-oneview&quot;&gt;Terraform GitHub release&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/oneview-chef/releases/tag/v3.4.0&quot;&gt;Chef GitHub release&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/oneview-puppet/releases/tag/v2.6.0&quot;&gt;Puppet GitHub release&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/oneview-python/releases/tag/v5.2.0&quot;&gt;Python GitHub release&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/oneview-golang/releases/tag/v1.4.0&quot;&gt;Golang GitHub release&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/oneview-sdk-ruby/releases/tag/v5.13.0&quot;&gt;Ruby GitHub release&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/POSH-HPOneView&quot;&gt;PowerShell&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[HPE achieves gold for large-scale enterprise Kubernetes deployments]]></title><description><![CDATA[cloudnativelogo Editor’s Note – HPE Ezmeral Container Platform is now HPE Ezmeral Runtime Enterprise. For more information on why the name…]]></description><link>https://developer.hpe.com/hpe-achieves-gold-for-large-scale-enterprise-kubernetes-deployments/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-achieves-gold-for-large-scale-enterprise-kubernetes-deployments/</guid><pubDate>Wed, 17 Jun 2020 15:29:22 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/cloudnativelogo-1593006822299.png&quot; alt=&quot;cloudnativelogo&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Editor’s Note – HPE Ezmeral Container Platform is now HPE Ezmeral Runtime Enterprise&lt;/strong&gt;. For more information on why the name was changed, please &lt;a href=&quot;https://community.hpe.com/t5/HPE-Ezmeral-Uncut/HPE-Ezmeral-Container-Platform-is-now-HPE-Ezmeral-Runtime/ba-p/7151720#.YW7nOxrMKM8&quot;&gt;click here&lt;/a&gt;.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Hewlett Packard Enterprise (HPE) is now a Gold member of the &lt;a href=&quot;https://www.cncf.io/&quot;&gt;Cloud Native Computing Foundation&lt;/a&gt; (CNCF). Over the past several months, HPE has made significant contributions to the open source community. These contributions are instrumental in helping enterprises modernize their applications across hybrid cloud, multi-cloud, and edge environments. As a Gold member, HPE emphasizes its continued dedication to promote open source projects.&lt;/p&gt;
&lt;p&gt;CNCF hosts critical components of the global technology infrastructure, serving as the vendor-neutral home for many of the fastest-growing open source projects, including Kubernetes, Prometheus, and Envoy. Its goal is to build sustainable ecosystems for cloud-native software. HPE recognizes the crucial role CNCF plays in bringing together the world’s top developers, end users, and vendors and has been an active member of CNCF since 2017.&lt;/p&gt;
&lt;p&gt;At HPE, our focus is on more than just next generation cloud-native applications. We are dedicated to delivering breakthrough innovation and a new approach to helping enterprises deploy containerized environments at scale for both cloud-native and non-cloud-native applications.  Our contributions range from delivering products and solutions for cloud-native storage and Kubernetes to providing advisory and professional support and training services focused on designing and implementing Kubernetes and cloud-native technologies. CNCF recently awarded HPE the Kubernetes &lt;a href=&quot;https://landscape.cncf.io/format=card-mode&amp;#x26;organization=hewlett-packard-enterprise&amp;#x26;selected=hpe-kcsp&quot;&gt;Certified Service Provider&lt;/a&gt; (KCSP) designation, as a pre-qualified service provider with deep experience in helping enterprises successfully adopt and deploy Kubernetes.&lt;/p&gt;
&lt;p&gt;HPE has actively collaborated with the open source community for many years, providing expertise and deep contributions to numerous open source projects. Our commitment to fostering innovation with open source Kubernetes and the container ecosystem has been accelerated through our acquisitions of &lt;a href=&quot;https://www.hpe.com/us/en/newsroom/press-release/2019/05/hewlett-packard-enterprise-integrates-bluedata-to-accelerate-ai-and-data-driven-innovation-in-the-enterprise.html&quot;&gt;BlueData&lt;/a&gt; and &lt;a href=&quot;https://www.hpe.com/us/en/newsroom/press-release/2019/08/hpe-advances-its-intelligent-data-platform-with-acquisition-of-mapr-business-assets.html&quot;&gt;MapR&lt;/a&gt;. Building on these recent software acquisitions, HPE recently introduced our ground-breaking new container management solution: &lt;a href=&quot;https://www.hpe.com/us/en/solutions/container-platform.html&quot;&gt;the HPE Ezmeral Container Platform&lt;/a&gt;.&lt;/p&gt;
&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/hpe-kubernetescertified-1592924398917.png&quot; style=&quot;float: left; margin-right: 10px;&quot;&gt;
&lt;p&gt;The HPE Ezmeral Container Platform is an integrated turnkey solution with BlueData software as the multi-cluster container management control plane and the MapR distributed file system as the unified &lt;a href=&quot;https://www.hpe.com/info/data-fabric&quot;&gt;data fabric&lt;/a&gt; for persistent storage. HPE Ezmeral Container Platform provides the ability to deploy and manage multiple versions of 100% open source Kubernetes and was recently certified by CNCF as a &lt;a href=&quot;https://landscape.cncf.io/selected=hpe-container-platform&quot;&gt;Kubernetes Certified Distribution&lt;/a&gt; — ensuring conformance, interoperability, consistency, and access to the latest open source Kubernetes features.&lt;/p&gt;
&lt;p&gt;This new platform uniquely addresses the requirements for large-scale enterprise Kubernetes deployments across a wide range of use cases, from machine learning and edge analytics to CI/CD pipelines and application modernization. IT teams can use it to manage multiple Kubernetes clusters with multi-tenant container isolation and pre-integrated persistent storage. It includes innovations such as &lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.sessions/?l=1043&amp;#x26;sid=20267_0&amp;#x26;locale=en_US&quot;&gt;KubeDirector&lt;/a&gt;, an open source project from HPE that enables organizations to run non-cloud-native stateful applications on Kubernetes without modifying code. The HPE Ezmeral Container Platform also provides developers with secure, on-demand access to their environments. This access lets them develop apps and release code faster, while providing the container portability benefits of build once and deploy anywhere.&lt;/p&gt;
&lt;p&gt;More recently, &lt;a href=&quot;https://www.hpe.com/us/en/newsroom/blog-post/2020/02/hpe-acquires-scytale-to-advance-open-secure-edge-to-cloud-strategy.html&quot;&gt;HPE acquired Scytale&lt;/a&gt;, a founding contributor to CNCF’s &lt;a href=&quot;https://github.com/spiffe/spiffe&quot;&gt;SPIFFE&lt;/a&gt; (the Secure Production Identity Framework for Everyone) and SPIRE (the SPIFFE Runtime Environment) open source projects. The SPIFFE and SPIRE projects deliver a foundational capability—service identity authentication—for cloud- and container-deployed micro-services. The projects have rapidly grown in popularity, receiving contributions from Google, Pinterest, Square, Uber, and others.&lt;/p&gt;
&lt;p&gt;The open source SPIFFE and SPIRE projects enable organizations to deliver secure, high performance, multi-cloud, and multi-tenant IT infrastructure more quickly to employees, customers, and partners. In fact, the CNCF Technical Oversight Committee (TOC) just &lt;a href=&quot;https://www.cncf.io/blog/2020/06/22/toc-approves-spiffe-and-spire-to-incubation/&quot;&gt;promoted them from sandbox to incubation-level hosted projects&lt;/a&gt;. Learn more in this new blog post &lt;a href=&quot;https://community.hpe.com/t5/shifting-to-software-defined/bringing-trusted-computing-to-the-cloud/ba-p/7092622#.XvDMTJpKiM8&quot;&gt;Bringing Trusted Computing to the Cloud&lt;/a&gt; from my colleague Sunil James, founder and CEO of Scytale, now at HPE.&lt;/p&gt;
&lt;p&gt;HPE looks forward to taking on new responsibilities and doubling down on our contributions within the CNCF as a Gold member. Working with CNCF, our goal is to accelerate the creation and adoption of open, standards-based data access; advance new capabilities for cloud-native applications and environments; and promote new technologies and solutions that support them.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.cncf.io/announcement/2020/06/22/cloud-native-computing-foundation-announces-hewlett-packard-enterprise-as-gold-member/&quot;&gt;Read the CNCF press release announcing HPE as a Gold member&lt;/a&gt;. You can also check out more information on &lt;a href=&quot;https://www.hpe.com/info/container-platform&quot;&gt;HPE Container Platform&lt;/a&gt; and HPE’s &lt;a href=&quot;https://developer.hpe.com/projects&quot;&gt;open source&lt;/a&gt; initiatives. We also recommend you keep an eye on the &lt;a href=&quot;/blog&quot;&gt;HPE DEV blog&lt;/a&gt; site for more interesting articles on containers, Kubernetes, and other open source offerings.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE GreenLake for private cloud]]></title><description><![CDATA[blog greenlake intro 1200p The complexity of today’s IT brings many challenges. IT organizations find themselves dealing with clustered…]]></description><link>https://developer.hpe.com/hpe-greenlake-for-private-cloud/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-greenlake-for-private-cloud/</guid><pubDate>Tue, 16 Jun 2020 10:51:11 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/blog-greenlake-intro-1200p-1592501708480.jpg&quot; alt=&quot;blog greenlake intro 1200p&quot;&gt;&lt;/p&gt;
&lt;p&gt;The complexity of today’s IT brings many challenges. IT organizations find themselves dealing with clustered applications, multiple data centers, public, private, and hybrid clouds, and applications with complex dependencies. Deploying and operating enterprise workloads across these varied environments is often a time-consuming, manual process, which is both inefficient and often fraught with errors.&lt;/p&gt;
&lt;p&gt;You need a solution and approach that can orchestrate and automate your processes and ensure that all tasks happen efficiently and in the proper order. HPE GreenLake provides a solution for private clouds that enables savings in both time and money, elimination of errors, and construction of more predictable and reliable services.&lt;/p&gt;
&lt;h3&gt;The all-encompassing cloud&lt;/h3&gt;
&lt;p&gt;In the early days of cloud computing, private clouds promised the scalability, elasticity, and manageability of public clouds, combined with the security and control within on-premises datacenter environments. Providing a private cloud turned out to be harder than everyone expected. Some of the earliest private cloud implementations didn’t offer the same scalability, elasticity, and resilience characteristic of cloud environments. Fortunately, over time, vendor offerings improved.&lt;/p&gt;
&lt;p&gt;While private clouds still retain their importance, they are now necessarily part of a broader discussion. The role of a private cloud is looked at more in terms of how it is implemented within a hybrid cloud strategy, taking into account the application it serves and the desired outcome. For instance, depending on the application, it could be cheaper to run a workload in a private cloud versus a public cloud (or vice versa).&lt;/p&gt;
&lt;p&gt;Managing private cloud environments can be quite arduous, involving many repetitive tasks. These tasks might include sizing, provisioning and configuring resources like virtual machines (VMs), establishing VM clusters and load balancing, creating storage logical unit numbers (LUNs), invoking virtual networks, making the actual deployment and then monitoring and managing availability and performance. Although each of these processes is effective on its own, performing them manually is inefficient and can lead to errors. These errors then require troubleshooting, which delays the workload&apos;s availability. They may also expose security vulnerabilities that can put the enterprise at risk.&lt;/p&gt;
&lt;p&gt;With complete automation across the cloud environment, an organization eliminates these repetitive and manual processes for workload deployment and management. To achieve cloud automation, an IT team needs to use orchestration and automation tools that run on top of their virtualized environment. Orchestration enables an administrator to codify the various steps and processes involved with workload deployment and management, while automation invokes those steps without human intervention.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/greenlake-central-blueprint-image-1592499731966.png&quot; alt=&quot;greenlake central blueprint image&quot;&gt;&lt;/p&gt;
&lt;p&gt;The &lt;a href=&quot;https://www.hpe.com/psnow/doc/a50003040enw?jumpid=in_lit-psnow-red&quot;&gt;HPE GreenLake for private cloud technical whitepaper&lt;/a&gt; describes the application blueprint capabilities inherent with HPE GreenLake that provides this automation for private clouds. The solution enables savings in both time and money, elimination of errors, and construction of more predictable and reliable services beyond bare virtual machines.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Create agile infrastructure with HPE OneView API based automation ]]></title><description><![CDATA[The HPE OneView Python SDK v5.2 is now available, providing support for HPE OneView 5.2 (REST API version 1600) and Image Streamer 5.2 (API…]]></description><link>https://developer.hpe.com/create-agile-infrastructure-with-hpe-oneview-api-based-automation/</link><guid isPermaLink="false">https://developer.hpe.com/create-agile-infrastructure-with-hpe-oneview-api-based-automation/</guid><pubDate>Mon, 15 Jun 2020 19:23:25 GMT</pubDate><content:encoded>&lt;p&gt;The HPE OneView Python SDK v5.2 is now available, providing support for HPE OneView 5.2 (REST API version 1600) and Image Streamer 5.2 (API 1600). This release leverages new Python standards, including refactored base classes and the introduction of mixed classes, and now supports 30 HPE OneView managed resources.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/integrated-systems/software.html&quot;&gt;HPE OneView&lt;/a&gt; takes a software-defined, programmatic approach to enable agile management of infrastructure with efficient workflow automation, a modern RESTful API, and a comprehensive partner ecosystem. By automating the provisioning of physical infrastructure on-demand, using software-defined templates from HPE OneView, this integration allows administrators to create a resource topology and user experience similar to that of a public cloud on their own physical infrastructure.&lt;/p&gt;
&lt;p&gt;The HPE OneView Python SDK allows developers who use the Python language to programmatically control HPE OneView managed resources using infrastructure-as-code for physical compute, storage, and fabric resources. Infrastructure-as-code enables complete datacenter automation, significantly increasing reliability, compliance, and deployment flexibility. HPE has provided a wide range of &lt;a href=&quot;https://github.com/HewlettPackard/oneview-python/blob/master/examples/README.md&quot;&gt;examples&lt;/a&gt; to help you use the OneView API and test functionality.&lt;/p&gt;
&lt;p&gt;For more information:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/oneview-python&quot;&gt;GitHub Release&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/oneview-python/releases/tag/v5.2.0&quot;&gt;Release notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[HPE OneView Ansible Module v5.6.0. Now Available]]></title><description><![CDATA[HPE is pleased to announce the availability of the HPE OneView Ansible Module v5.6.0. This module provides integration of HPE OneView with…]]></description><link>https://developer.hpe.com/hpe-oneview-ansible-module-v560-now-available/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-oneview-ansible-module-v560-now-available/</guid><pubDate>Mon, 15 Jun 2020 19:09:06 GMT</pubDate><content:encoded>&lt;p&gt;HPE is pleased to announce the availability of the HPE OneView Ansible Module v5.6.0. This module provides integration of HPE OneView with Ansible by Red Hat®, an industry-leading software deployment, provisioning, and configuration management tool. This new module supports HPE OneView 5.2 (REST API version 1600). It also provides support for Image Streamer 5.2 (API 1600).&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/integrated-systems/software.html&quot;&gt;HPE OneView&lt;/a&gt; uses software-defined intelligence in a template-driven approach to deploy, provision, update, and integrate resources, such as compute, storage, and networking infrastructure. Designed with a modern, standards-based API, HPE OneView gives IT organizations the ability to connect their software-defined infrastructure from core to cloud by provisioning turnkey private cloud infrastructure within a diverse partner ecosystem. IT organizations can leverage the partner ecosystem to integrate HPE OneView within their existing management frameworks for their preferred platforms. These capabilities allow teams to deliver projects consistently while meeting desired outcomes for key stakeholders.&lt;/p&gt;
&lt;p&gt;The HPE OneView Ansible module enables the transformation of on-premises infrastructure through software-defined, automated provisioning of bare-metal resources, including servers, storage, and networking, as part of an application deployment process. Using Ansible with HPE OneView allows customers to create a flexible and adaptive infrastructure that is essential to addressing the need for organizational agility and supporting initiatives that accelerate the delivery of customer and business value.&lt;/p&gt;
&lt;p&gt;For more information:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/oneview-ansible/releases/tag/v5.6.0&quot;&gt;Release content&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/oneview-ansible/blob/master/CHANGELOG.md&quot;&gt;List of supported resources and changes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/oneview-ansible-samples/blob/master/infrastructure-as-code/infrastructure-as-code.md&quot;&gt;Whitepaper: Infrastructure as code with HPE OneView and Ansible by Red Hat&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.hpe.com/h22228/video-gallery/us/en/700000796/EN/US/7f333bd5-49f8-4a91-9891-a66554ea402c/demo-video-ansible-integration-with-hpe-oneview/video?lang=en-US&quot;&gt;Demonstration Video&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Dockerizing your NodeJS based backend Applications ]]></title><description><![CDATA[Hello developers!  In this blog post, I will take you through a step-by-step procedure that uses best practices to dockerize your NodeJS…]]></description><link>https://developer.hpe.com/dockerizing-your-nodejs-based-backend-applications/</link><guid isPermaLink="false">https://developer.hpe.com/dockerizing-your-nodejs-based-backend-applications/</guid><pubDate>Mon, 15 Jun 2020 14:51:00 GMT</pubDate><content:encoded>&lt;p&gt;Hello developers!  In this blog post, I will take you through a step-by-step procedure that uses best practices to dockerize your NodeJS backend applications. To better understand this tutorial, the reader should have some basic familiarity with Docker as-a-container technology and NodeJS-based backend applications. Briefly, NodeJS is the most popular JavaScript-based server side ecosystem. It is mostly used to build highly scalable, fault tolerant microservices-based backend enterprise applications. Many enterprise application developers prefer NodeJS over other programming languages, such as JAVA, because of its light memory footprint, event driven, non-blocking model. NodeJS is also very popular in building end-to-end applications in JavaScript, typically using MEAN (Mongo, Express, Angular, Node) and MERN (Mongo, Express, React, Node) stacks. In this post, I do not intend to focus on NodeJS-based application development, but rather on how to dockerize already developed NodeJS based applications.&lt;/p&gt;
&lt;p&gt;Let’s first build a sample web server using Express, a NodeJS package typically used for building REST APIs in server-side JavaScript. Below are the steps we are going to perform in this article:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Create and initialize a sample NodeJS-based backend application using the express package&lt;/li&gt;
&lt;li&gt;Dockerize the NodeJS application (i.e., create a Dockerfile for the application)&lt;/li&gt;
&lt;li&gt;Build a docker image out of the Dockerfile&lt;/li&gt;
&lt;li&gt;Run the built image as a docker container and access the NodeJS backend application in a web client, typically a web browser&lt;/li&gt;
&lt;li&gt;Push the finalized docker image to Docker Hub or any private docker registry using the docker CLI&lt;/li&gt;
&lt;/ul&gt;
&lt;blockquote&gt;
&lt;p&gt;Prerequisite: You must have the latest versions of Node and Docker installed on your machine (supported platforms are Linux, Windows, and macOS). Throughout this article, I will be using Docker for Windows 10 with the Hyper-V hypervisor. Follow the official documentation to install Node and Docker on your platform.&lt;/p&gt;
&lt;/blockquote&gt;
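&lt;p&gt;If you want to confirm the prerequisites are in place, a quick version check from any terminal is enough (the exact version numbers will differ on your machine):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;node --version
npm --version
docker --version
&lt;/code&gt;&lt;/pre&gt;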
&lt;p&gt;&lt;strong&gt;STEP 1: Create and initialize a sample NodeJS based application&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Create sample project directory as below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;mkdir sample_node_app
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Initialize the NodeJS project using the node package manager (npm). It will generate a project skeleton with a package.json file (also called the project descriptor), which carries all the metadata and dependency information for your application.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;cd sample_node_app &amp;#x26;&amp;#x26; npm init
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once the project skeleton is ready, edit the &quot;scripts&quot; section of the package.json file as shown below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;cat package.json
     {
         &quot;name&quot;: &quot;sampleapp&quot;,
         &quot;version&quot;: &quot;1.0.0&quot;,
         &quot;description&quot;: &quot;This is sample Node Project&quot;,
          &quot;main&quot;: &quot;index.js&quot;,
          &quot;scripts&quot;: {
           &quot;start&quot;: &quot;node index.js&quot;
          },
          &quot;author&quot;: &quot;rahul kumar&quot;,
          &quot;license&quot;: &quot;ISC&quot;
    }
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In order to build a web server using NodeJS, install the express package using npm as below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;npm install express --save
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;By this time, your package.json should look like what’s shown below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;cat package.json
    {
       &quot;name&quot;: &quot;sampleapp&quot;,
       &quot;version&quot;: &quot;1.0.0&quot;,
        &quot;description&quot;: &quot;This is sample Node Project&quot;,
        &quot;main&quot;: &quot;index.js&quot;,
        &quot;scripts&quot;: {
           &quot;start&quot;: &quot;node index.js&quot;
         },
        &quot;author&quot;: &quot;rahul kumar&quot;,
        &quot;license&quot;: &quot;ISC&quot;,
       &quot;dependencies&quot;: {
             &quot;express&quot;: &quot;^4.17.1&quot;
       }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once we have the project skeleton ready, let’s add express code to build a simple web server that listens on a certain port, for example, port 4000.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;cat index.js
const express = require(&apos;express&apos;);
const application=new express();
const PORT=4000;
application.get(&apos;/&apos;,(req,resp)=&gt;{	
	resp.send(`Congrats ! Your Node Express server is running on PORT ${PORT}`);
});
application.listen(PORT,()=&gt;{
	console.log(`Node express server is running on ${PORT}. Enjoy NodeJS`)
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now that we have the Node express server ready, let’s run it!&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt; npm start
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once you have started the web server using the above command, you will see the following on the console:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/node-1592239169346.PNG&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Then, you can access the application on &lt;a href=&quot;http://localhost:4000&quot;&gt;http://localhost:4000&lt;/a&gt; in a browser.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/browser-1592239218657.PNG&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Okay! Now we have the simplest possible NodeJS backend application running on port 4000.&lt;/p&gt;
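&lt;p&gt;If you prefer the command line, you can also verify the server from a terminal (assuming curl is installed on your machine):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;curl http://localhost:4000
# Congrats ! Your Node Express server is running on PORT 4000
&lt;/code&gt;&lt;/pre&gt;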
&lt;p&gt;&lt;strong&gt;STEP 2: Dockerize the application&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Now, let’s create a Dockerfile in the project directory.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;cat Dockerfile
# Base image used 
FROM alpine
# Installing project dependencies
RUN npm install
# Running default command 
CMD [&quot;npm&quot;, &quot;start&quot;]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Simple enough. So, now we have the smallest possible Dockerfile. Let’s move on to the next step.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;STEP 3: Building the docker image&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Navigate to the directory where your Dockerfile is present, also known as the “build context” for your Dockerfile. Note that you can give the Dockerfile a different name if you desire. If you do, you need to use the -f option in the docker CLI while building the docker image. By default, the docker CLI looks for a file named Dockerfile in the build context.&lt;/p&gt;
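&lt;p&gt;For example, if you had named the file Dockerfile.dev (a hypothetical name used here purely for illustration), the build command would look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;docker build -f Dockerfile.dev -t sample_node_app .
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In our case, we keep the default file name, so a plain build is all we need:&lt;/p&gt;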
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;docker build .
Sending build context to Docker daemon   2.01MB
Step 1/3 : FROM alpine
latest: Pulling from library/alpine
df20fa9351a1: Pull complete                                                                                             Digest: sha256:185518070891758909c9f839cf4ca393ee977ac378609f700f60a771a2dfe321
Status: Downloaded newer image for alpine:latest
 ---&gt; a24bb4013296
Step 2/3 : RUN npm install
 ---&gt; Running in 0b3e3ae93e9c
/bin/sh: npm: not found
The command &apos;/bin/sh -c npm install&apos; returned a non-zero code: 127
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We have found a problem: We need a Docker base image that has node and npm already installed.&lt;/p&gt;
&lt;p&gt;Problem resolution: Why are we getting this error? To answer this, you need to understand how to select the base image for your custom (or project-specific) Dockerfile. The rule of thumb is to choose your base image based on the programs you need in order to build your custom image. In this case, npm is the package manager for node and it is required for installing the dependencies that your project needs inside the container. In the Docker world, alpine images are very small (just a few megabytes), and the alpine image we selected does not include npm.&lt;/p&gt;
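&lt;p&gt;If you want to see this for yourself before changing anything, you can check for npm in a throwaway container started from the same base image (an optional sanity check, not part of the build):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;docker run --rm alpine sh -c &quot;command -v npm || echo &apos;npm not found&apos;&quot;
# npm not found
&lt;/code&gt;&lt;/pre&gt;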
&lt;p&gt;One way to overcome this problem is to use a base image that already has node and npm installed. The best option in this case is the official Node base image found on the Docker Hub public registry. The full-fledged official Node image is fairly large, probably on the order of 300-400 MB, and we don&apos;t need an image that big. What we need is a slim version of the Node image that at least includes npm. That is why Node, like many other official images, also publishes an alpine tag.&lt;/p&gt;
&lt;p&gt;Let’s try to modify our Dockerfile to fix this.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;cat Dockerfile
# Base image used 
FROM node:alpine
# Install project dependencies
RUN npm install
# Run default command
CMD [&quot;npm&quot;, &quot;start&quot;]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, build it again with the alpine tag of the official node base image:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;docker build .
Sending build context to Docker daemon   2.01MB
Step 1/3 : FROM node:alpine
alpine: Pulling from library/node
cbdbe7a5bc2a: Pull complete                                                                                                                                                                  fb0e3739aee1: Pull complete                                                                                                                                                                  738de7869598: Pull complete                                                                                                                                                                  ffd68be3d86c: Pull complete                                                                                                                                                                  Digest: sha256:7d11fea6d901bfe59999dda0fa3514438628a134d43c27c2eaec43cc8f4b98d5
Status: Downloaded newer image for node:alpine
 ---&gt; 3bf5a7d41d77
Step 2/3 : RUN npm install
 ---&gt; Running in 091b75730aa4
npm WARN saveError ENOENT: no such file or directory, open &apos;/package.json&apos;
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN enoent ENOENT: no such file or directory, open &apos;/package.json&apos;
npm WARN !invalid#2 No description
npm WARN !invalid#2 No repository field.
npm WARN !invalid#2 No README data
npm WARN !invalid#2 No license field.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We have found a problem: We need to ensure the docker container is aware of our package.json file.&lt;/p&gt;
&lt;p&gt;Problem resolution: You see this warning or error because the node package manager always relies on the package.json file (generated by npm init during project initialization) to install project dependencies. It shouldn’t be any surprise that the container doesn&apos;t have this package.json in its file system snapshot, because the whole of our project source code is on the local hard disk. The file system snapshot found in the container is merely the one that comes from the node:alpine base image. One solution to this problem is to make your project source code available in the container. Hence, before we install project dependencies using npm install in the Dockerfile, package.json must be made available inside the container file system. The way we do that is with the COPY instruction of the Dockerfile.&lt;/p&gt;
&lt;p&gt;Here is the syntax:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# COPY docker instruction syntax
COPY &amp;#x3C;PATH to folder to copy from&gt; &amp;#x3C;destination inside container&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Notice that there are various ways in which you can make your source code available to the container. The preferred ways are docker volumes and bind mounts. But in this example, we will directly copy the whole source code into the container file system. Here is the updated Dockerfile:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;cat Dockerfile
# Base image used  
FROM node:alpine 
COPY ./ ./
# Install project dependencies
RUN npm install
# Running default command
CMD [&quot;npm&quot;, &quot;start&quot;]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Let’s build and tag it in order to push it to Docker Hub.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;docker build -t rajput/sample_node_app .
Sending build context to Docker daemon   2.01MB
Step 1/4 : FROM node:alpine
 ---&gt; 3bf5a7d41d77
Step 2/4 : COPY ./ ./
 ---&gt; 586eada1b908
Step 3/4 : RUN npm install
 ---&gt; Running in 90bcbf81b81c
npm WARN sampleapp@1.0.0 No repository field.

audited 50 packages in 2.925s
found 0 vulnerabilities

Removing intermediate container 90bcbf81b81c
 ---&gt; bab8c88e351b
Step 4/4 : CMD [&quot;npm&quot;,&quot;start&quot;]
 ---&gt; Running in bcb475eb3b19
Removing intermediate container bcb475eb3b19
 ---&gt; 14119783c338
Successfully built 14119783c338
Successfully tagged rajput/sample_node_app:latest
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We have now successfully built a docker image for our project. Let’s verify it using the following:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;docker images 
REPOSITORY               TAG                 IMAGE ID            CREATED              SIZE
rajput/sample_node_app   latest              14119783c338        About a minute ago   119MB
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As you can see, we have built the docker image and tagged it. Let’s run it now.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;STEP 4: Running the Docker container&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Run the docker image that we built in the previous step and see what happens.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;docker run rajput/sample_node_app
&gt; sampleapp@1.0.0 start /
&gt; node index.js

Node express server is running on 4000. Enjoy NodeJS
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Great! Our NodeJS express server is running. Let’s access our application on &lt;a href=&quot;http://localhost:4000&quot;&gt;http://localhost:4000&lt;/a&gt;.
Uh oh! Something’s wrong. We are not able to connect to our application.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/access-1592307009901.PNG&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Let’s see if the NodeJS container is running to find the root cause.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;docker ps
CONTAINER ID        IMAGE                    COMMAND                  CREATED             STATUS              PORTS                    NAMES
c2408a62350e        rajput/sample_node_app   &quot;docker-entrypoint.s…&quot;   14 seconds ago      Up 13 seconds                                practical_tharp
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Although we have successfully built our docker image and the container is running, the application still can&apos;t be accessed. As you can see, the PORTS column in the output above is empty. To understand why, you need to understand Docker networking concepts. Port 4000 is open inside the docker container, but we are trying to reach port 4000 on our host, where the docker engine is running. One way to solve this problem is to route the traffic arriving on the host to the container where our application resides. Each docker container is an isolated sandbox with its own namespaces, ports, mounts, etc. So, let’s make sure that whenever a request arrives on the host it is routed or redirected to the application in the container. This concept is called port mapping at run time in the docker networking world.&lt;/p&gt;
&lt;p&gt;Here’s how it looks:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# syntax for port mapping
docker run -p &amp;#x3C;PORT on HOST&gt;: &amp;#x3C;PORT in container&gt; &amp;#x3C;Image name or ID&gt;
docker run -p 8080:4000 rajput/sample_node_app
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And now we can access our Node application, which is running in the container!&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/sucaccess-1592307705946.PNG&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Let’s visualize the NodeJS container file system snapshot by running a shell inside it.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;docker run -it rajput/sample_node_app sh
/ # ls
Dockerfile         etc                lib                node_modules       package.json       run                sys                var
bin                home               media              opt                proc               sbin               tmp
dev                index.js           mnt                package-lock.json  root               srv                usr
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Let’s see if we can improve on this: Our application source code is mixed with the root file system of the container.&lt;/p&gt;
&lt;p&gt;You can see the application source code, including package.json, was copied into the root of the container file system. To avoid leaving the container’s root file system in an undesired state once it is mixed with the project source code, it is recommended that you choose your own custom application directory inside the container file system. This helps prevent your project source code from accidentally overriding parts of the file system. We can achieve this by using the WORKDIR instruction in our Dockerfile. This instruction ensures that any further commands in the Dockerfile (those that follow WORKDIR) will be executed relative to WORKDIR in the container.&lt;/p&gt;
&lt;p&gt;We will use this to optimize our Dockerfile:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;cat Dockerfile
# Base image used  
FROM node:alpine 
WORKDIR /usr/mynodeapp
COPY ./ ./
# Install project dependencies
RUN npm install
# Running default command
CMD [&quot;npm&quot;, &quot;start&quot;]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Build and tag the image from the Dockerfile:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;docker build -t rajput/sample_node_app  .
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Run the container out of the successfully built image:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;docker run -p 8080:4000 rajput/sample_node_app
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Notice that you might get an error here saying:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;Error response from daemon: driver failed programming external connectivity on endpoint bold_germain (22e41c9ded62fe9f7347d0bbe116e4b23ad7c33890f5ec7401c0566441210616): Bind for 0.0.0.0:8080 failed: port is already allocated.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In order to resolve this, you need to ensure that the port on your host machine is not already allocated before running the NodeJS container. One common way to solve this is to stop the container that is running on the conflicting port, or to choose a different port on your host machine. You can stop the running container using the command below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;docker container stop &amp;#x3C;container ID&gt;
&lt;/code&gt;&lt;/pre&gt;
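&lt;p&gt;If you are not sure which container is holding the port, you can filter the running containers by published port (the publish filter is available in reasonably recent versions of the docker CLI):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;docker ps --filter &quot;publish=8080&quot;
&lt;/code&gt;&lt;/pre&gt;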
&lt;p&gt;Let&apos;s list the running containers:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;docker ps
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Examine the file system of the NodeJS container:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;docker exec -it 13940468222f sh
/usr/mynodeapp # ls
Dockerfile         index.js           node_modules       package-lock.json  package.json
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can see that our project source code is no longer in the root file system; instead, it is in /usr/mynodeapp/.&lt;/p&gt;
&lt;p&gt;There is still an opportunity to improve this Dockerfile. The Node package manager (npm) only uses package.json to install project-specific dependencies; it doesn’t care about the rest of your application source code. To take advantage of Docker’s layer caching, first copy package.json and run npm install, and only then copy the remaining source code. This ensures that a source code change in your application does not reinstall project dependencies again and again; dependencies are reinstalled only when new dependencies are added or an existing dependency version is changed in your project’s package.json file. It is always a best practice to keep the things that change more frequently, such as project or application code, toward the bottom of the Dockerfile.&lt;/p&gt;
&lt;p&gt;If you have done all this, the new Dockerfile should look like what’s shown below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;cat Dockerfile
# Base image
FROM node:alpine
MAINTAINER geeks@hpe.com
# Working directory
WORKDIR /usr/mynodeapp
# Install project dependencies first so this layer is cached
COPY ./package.json ./
RUN npm install
# Copy the rest of the application source code
COPY ./  ./
# Run default command
CMD [&quot;npm&quot;, &quot;start&quot;]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Let’s build and tag the image again. Ensure that the tag name for your image is unique at any point in time.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;docker build -t  rajput/sample_node_app .
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once the finalized image is successfully built, we can run the NodeJS container out of it:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;docker run -p 8080:4000 rajput/sample_node_app
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Good, so we have the final docker image for our NodeJS based backend application. Let’s push it to Docker Hub.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;STEP 5: Push the image to Docker Hub&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Congratulations! It&apos;s time to push the docker image to Docker Hub. Docker Hub is a public registry for your docker images. You need to have a Docker Hub account in order to push your image to it.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;docker login
Username: rajput
Password:
Login Succeeded
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;docker push rajput/sample_node_app
The push refers to repository [docker.io/rajput/sample_node_app]
312072b77e32: Pushed                                                                                                                                                                         5a3885fb97b9: Pushed                                                                                                                                                                         latest: digest: sha256:707eae883285a5209283aea94950fee5c9f9357a36b1d6f53c60cb659fd950ec size: 1782
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once you have pushed your image, this is what it will look like on the Docker Hub public registry.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/6/hub-1592318453804.PNG&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
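&lt;p&gt;From any machine with Docker installed, you can now pull and run the published image (substitute your own Docker Hub namespace if you tagged the image differently):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;docker pull rajput/sample_node_app
docker run -p 8080:4000 rajput/sample_node_app
&lt;/code&gt;&lt;/pre&gt;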
&lt;p&gt;Even though we have made great progress, we are still left with the problem that any change in the application source code requires rebuilding the docker image. Any time we want to modify project source code, we need to rebuild the image again. This is because the source code still lives on our local hard disk, is not referenced by the container, and is copied into the container only when the docker image is built. One way to solve this problem is to use docker volumes or bind mounts, which let you change your source code on the local hard disk and have the change reflected in the running container without even rebuilding the image. I plan on writing a follow-on article that will address this.&lt;/p&gt;
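&lt;p&gt;As a quick preview of that approach, here is a minimal sketch of a bind mount passed to docker run. It assumes the WORKDIR /usr/mynodeapp from our final Dockerfile and is meant to be run from the project directory in a Linux or macOS shell (on Windows PowerShell, substitute ${PWD} for $(pwd)). The extra anonymous volume keeps the node_modules directory installed at build time from being hidden by the mount:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Mount the current project directory over the working directory in the container,
# while preserving the node_modules directory created during the image build.
docker run -p 8080:4000 -v /usr/mynodeapp/node_modules -v &quot;$(pwd)&quot;:/usr/mynodeapp rajput/sample_node_app
&lt;/code&gt;&lt;/pre&gt;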
&lt;p&gt;Along with addressing this issue, I plan on showing you the same application built with multiple containers. We will add a NoSQL backend database, like Redis or Mongo, to the NodeJS application service layer. It will be a multi-tier app with a proper service and database layer. We will leverage Docker Compose to run both containers, one for the database and one for the Express server, and see how they interact with each other.
As a full-stack developer with a sincere passion for the latest software technologies, I like to help others by writing tutorials like this. I hope you enjoyed my post. You might want to check out another post I wrote &lt;a href=&quot;https://developers.redhat.com/blog/author/rkumar/&quot;&gt;here&lt;/a&gt;. Keep checking back on the &lt;a href=&quot;/blog&quot;&gt;HPE DEV blog site&lt;/a&gt; for new and interesting articles on containers!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Using Raw Block and Ephemeral Inline Volumes on Kubernetes]]></title><description><![CDATA[With the release of the HPE CSI Driver for Kubernetes 1.2.0 quite a few new Container Storage Interface (CSI) concepts were introduced as…]]></description><link>https://developer.hpe.com/using-raw-block-and-ephemeral-inline-volumes-on-kubernetes/</link><guid isPermaLink="false">https://developer.hpe.com/using-raw-block-and-ephemeral-inline-volumes-on-kubernetes/</guid><pubDate>Thu, 11 Jun 2020 21:18:38 GMT</pubDate><content:encoded>&lt;p&gt;With the release of the &lt;a href=&quot;https://community.hpe.com/t5/around-the-storage-block/hpe-csi-driver-for-kubernetes-1-2-0-available-now/ba-p/7091977&quot;&gt;HPE CSI Driver for Kubernetes 1.2.0&lt;/a&gt; quite a few new Container Storage Interface (CSI) concepts were introduced as fully supported features. As always, new capabilities introduce new YAML stanzas that needs to be understood to take full advantage of these capabilities. In this blog post, we’ll explore how to expose raw block volumes and the different ways to declare an ephemeral inline volume for Kubernetes Pods.&lt;/p&gt;
&lt;p&gt;Many CSI drivers support these capabilities. For the examples below, we’ll use a recent version of the HPE CSI Driver with the default &lt;code&gt;StorageClass&lt;/code&gt; installed on Kubernetes 1.18. Do note that ephemeral inline volumes are still considered beta in Kubernetes.&lt;/p&gt;
&lt;h1&gt;Raw block volumes&lt;/h1&gt;
&lt;p&gt;Kubernetes supports running a diverse set of applications with various needs when it comes to infrastructure requirements, such as compute, networking and storage. Historically, a “volume” on Kubernetes translates to a POSIX-like filesystem to store persistent data at a given path inside a &lt;code&gt;Pod&lt;/code&gt;. With the introduction of raw block volumes, there’s now a way to present the underlying block device on which the filesystem is normally created. This is beneficial for applications that are capable of addressing the device directly to store data. It effectively removes the double-buffering effects that filesystems introduce along with the POSIX semantics and filesystem internals. Applications that truly can take advantage of raw block volumes on Kubernetes are few and far between.&lt;/p&gt;
&lt;p&gt;The concept of presenting a raw block volume to a &lt;code&gt;Pod&lt;/code&gt; on Kubernetes is very similar to how Raw Device Mappings (RDMs) are presented on VMware vSphere, where a virtual machine gets unfettered direct access to a LUN on a storage fabric exposed to the VMware ESX host.&lt;/p&gt;
&lt;p&gt;Let’s compare the Kubernetes minutia needed to declare a regular volume versus a raw block volume.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;apiVersion: v1 
kind: PersistentVolumeClaim 
metadata: 
  name: block-device 
spec: 
  accessModes: 
  - ReadWriteOnce 
  resources: 
    requests: 
      storage: 32Gi 
  volumeMode: Block 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This is a conventional Persistent Volume Claim (PVC). The only thing that stands out is the &lt;code&gt;.spec.volumeMode&lt;/code&gt;. By default, &lt;code&gt;volumeMode&lt;/code&gt; is set to &lt;code&gt;Filesystem&lt;/code&gt; and is usually never called out explicitly. Setting the &lt;code&gt;volumeMode&lt;/code&gt; attribute to &lt;code&gt;Block&lt;/code&gt; will change this, presenting the device itself, once it is exposed to a &lt;code&gt;Pod&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;To be able to address the block device, there are additional details that need to be declared in the &lt;code&gt;Pod&lt;/code&gt; specification. Let’s bring up a &lt;code&gt;Pod&lt;/code&gt; as an example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
apiVersion: v1 
kind: Pod 
metadata: 
  name: ioping 
spec: 
  containers: 
  - name: ioping 
    image: hpestorage/ioping 
    command: [ &quot;ioping&quot; ] 
    args: [ &quot;/dev/xvda&quot; ] 
    volumeDevices: 
    - name: raw 
      devicePath: /dev/xvda 
  volumes: 
  - name: raw 
    persistentVolumeClaim: 
      claimName: block-device 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;.spec.volumes&lt;/code&gt; stanza is exactly the same as it would be for using a filesystem. It’s the &lt;code&gt;.spec.containers.volumeDevices&lt;/code&gt; and &lt;code&gt;.spec.containers.volumeDevices.devicePath&lt;/code&gt; that just got introduced. Creating the above PVC and &lt;code&gt;Pod&lt;/code&gt; would result in the following log output:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ kubectl logs -f pod/ioping 
4 KiB &amp;#x3C;&amp;#x3C;&amp;#x3C; /dev/xvda (block device 32 GiB): request=1 time=1.10 ms (warmup) 
4 KiB &amp;#x3C;&amp;#x3C;&amp;#x3C; /dev/xvda (block device 32 GiB): request=2 time=1.01 ms 
4 KiB &amp;#x3C;&amp;#x3C;&amp;#x3C; /dev/xvda (block device 32 GiB): request=3 time=862.1 us 
4 KiB &amp;#x3C;&amp;#x3C;&amp;#x3C; /dev/xvda (block device 32 GiB): request=4 time=1.11 ms 
4 KiB &amp;#x3C;&amp;#x3C;&amp;#x3C; /dev/xvda (block device 32 GiB): request=5 time=895.1 us 
4 KiB &amp;#x3C;&amp;#x3C;&amp;#x3C; /dev/xvda (block device 32 GiB): request=6 time=1.11 ms 
4 KiB &amp;#x3C;&amp;#x3C;&amp;#x3C; /dev/xvda (block device 32 GiB): request=7 time=976.4 us 
4 KiB &amp;#x3C;&amp;#x3C;&amp;#x3C; /dev/xvda (block device 32 GiB): request=8 time=853.5 us (fast) 
4 KiB &amp;#x3C;&amp;#x3C;&amp;#x3C; /dev/xvda (block device 32 GiB): request=9 time=912.7 us 
^C 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It’s evident that we are indeed accessing a raw block device from inside the &lt;code&gt;Pod&lt;/code&gt;.&lt;/p&gt;
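&lt;p&gt;If you want to double-check from inside the running container, you can list the device node; assuming the image includes a shell and basic utilities, the leading &lt;code&gt;b&lt;/code&gt; in the permissions column confirms a block device:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;kubectl exec -it ioping -- ls -l /dev/xvda
&lt;/code&gt;&lt;/pre&gt;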
&lt;h1&gt;Real world example for raw block volumes: Rook&lt;/h1&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/rook/rook&quot;&gt;Rook&lt;/a&gt; is a Cloud Native Computing Foundation (CNCF) incubator project (a graduation proposal in the works at this time) to provide open source cloud-native storage for Kubernetes. Rook provides object, file and block storage to Kubernetes using &lt;a href=&quot;https://ceph.io/&quot;&gt;CEPH&lt;/a&gt;. Rook is complementary to the HPE CSI Driver, which only provides block storage (Nimble, Primera, 3PAR), whereas Rook gives the option to deploy a distributed filesystem on Kubernetes backed by enterprise storage to present additional data access protocols.&lt;/p&gt;
&lt;p&gt;Let’s assume we have deployed the Rook Operator on the Kubernetes cluster. Creating a new &lt;code&gt;CephCluster&lt;/code&gt; is done as follows:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
apiVersion: ceph.rook.io/v1 
kind: CephCluster 
metadata: 
  name: rook-ceph 
  namespace: rook-ceph 
spec: 
  cephVersion: 
    image: ceph/ceph:v14.2.9 
  dataDirHostPath: /var/lib/rook 
  mon: 
    count: 3 
    volumeClaimTemplate: 
      spec: 
        storageClassName: hpe-standard 
        resources: 
          requests: 
            storage: 10Gi 
  storage: 
   storageClassDeviceSets: 
    - name: set1 
      count: 3 
      portable: true 
      tuneSlowDeviceClass: false 
      volumeClaimTemplates: 
      - metadata: 
          name: data 
        spec: 
          resources: 
            requests: 
              storage: 32Gi 
          storageClassName: hpe-standard 
          volumeMode: Block 
          accessModes: 
            - ReadWriteOnce 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Pay attention to the &lt;code&gt;volumeMode: Block&lt;/code&gt; attribute in the specification. We can further inspect the PVC created by the &lt;code&gt;StatefulSet&lt;/code&gt; that has been declared:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;kubectl get pvc -n rook-ceph -l ceph.rook.io/DeviceSetPVCId=set1-data-0 -o json | json items.0.spec 
{ 
  &quot;accessModes&quot;: [ 
    &quot;ReadWriteOnce&quot; 
  ], 
  &quot;resources&quot;: { 
    &quot;requests&quot;: { 
      &quot;storage&quot;: &quot;32Gi&quot; 
    } 
  }, 
  &quot;storageClassName&quot;: &quot;hpe-standard&quot;, 
  &quot;volumeMode&quot;: &quot;Block&quot;, 
  &quot;volumeName&quot;: &quot;pvc-26e4e5d5-0e08-46c6-9a2e-679e0bde6264&quot; 
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It’s now possible to use the CEPH cluster to create &lt;a href=&quot;https://rook.io/docs/rook/v1.3/ceph-filesystem.html&quot;&gt;filesystems&lt;/a&gt; and &lt;a href=&quot;https://rook.io/docs/rook/v1.3/ceph-object.html&quot;&gt;object stores&lt;/a&gt;.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note!&lt;/strong&gt; Use Rook at your own risk. This is an example, not an endorsement.&lt;/p&gt;
&lt;/blockquote&gt;
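&lt;p&gt;A couple of kubectl queries give a quick sense of whether the cluster came up as expected (the namespace and labels below assume a standard Rook deployment; adjust them if your install differs):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;kubectl -n rook-ceph get cephcluster rook-ceph
kubectl -n rook-ceph get pods -l app=rook-ceph-osd
&lt;/code&gt;&lt;/pre&gt;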
&lt;h1&gt;Ephemeral inline volumes&lt;/h1&gt;
&lt;p&gt;The term ephemeral inline volume is quite a mouthful for what it is – a temporary placement of data which you don’t really care about long-term, most commonly talked about as a &quot;scratch disk&quot;. However, this is a very important construct for data intensive applications where Kubernetes administrators now have the ability to dictate placement of IO intensive applications that require temporary storage. Up until the introduction of ephemeral inline volumes, applications have simply used the container runtime provided union filesystem inside the container for scratch space or used other shared mechanisms like &lt;code&gt;hostPath&lt;/code&gt; or &lt;code&gt;emptyDir&lt;/code&gt;. Sharing resources on the host has its challenges. First off, the Kubernetes admin has no means to put any sort of boundaries in place for an individual container. That, in turn, could lead to potentially having a single container consume the entire host filesystem and starve other containers on the host for resources.&lt;/p&gt;
&lt;p&gt;The term “inline” means the volume declaration resides inside the &lt;code&gt;Pod&lt;/code&gt; specification. Each &lt;code&gt;Pod&lt;/code&gt;, regardless of replica count, will be given a dedicated &lt;code&gt;ReadWriteOnce&lt;/code&gt; volume as per the declaration. Let’s see what it looks like.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
apiVersion: v1 
kind: Pod 
metadata: 
  name: my-pod-inline-mount-2 
spec: 
  containers: 
    - name: pod-datelog-1 
      image: nginx 
      command: [&quot;bin/sh&quot;] 
      args: [&quot;-c&quot;, &quot;while true; do date &gt;&gt; /data/mydata.txt; sleep 1; done&quot;] 
      volumeMounts: 
        - name: my-volume-1 
          mountPath: /data 
  volumes: 
    - name: my-volume-1 
      csi: 
       driver: csi.hpe.com 
       fsType: ext3 
       volumeAttributes: 
         csi.storage.k8s.io/ephemeral: &quot;true&quot; 
         inline-volume-secret-name: nimble-secret 
         inline-volume-secret-namespace: kube-system 
         accessProtocol: &quot;iscsi&quot; 
         size: &quot;7Gi&quot; 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The interesting part here is the &lt;code&gt;.spec.volumes.csi&lt;/code&gt; stanza. This is the bare minimum set of parameters required to provision an inline volume. Any additional parameters supported by the Container Storage Provider (CSP) may be used here. Note that there’s no &lt;code&gt;StorageClass&lt;/code&gt; at play here. All parameters, including the &lt;code&gt;Secret&lt;/code&gt;, need to be part of the declaration. This is where a word of caution is warranted. Handing out the &lt;code&gt;Secret&lt;/code&gt; to a user is the same as handing over credentials to the CSP backend!&lt;/p&gt;
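&lt;p&gt;Once the &lt;code&gt;Pod&lt;/code&gt; above is running, a quick sanity check can confirm that the inline volume is mounted and sized as requested. This is just a sketch, reusing the &lt;code&gt;Pod&lt;/code&gt; name from the example above:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Confirm the scratch volume is mounted at /data with the requested size
kubectl exec my-pod-inline-mount-2 -- df -h /data
# Peek at the data the container is writing to the volume
kubectl exec my-pod-inline-mount-2 -- tail -n 3 /data/mydata.txt
&lt;/code&gt;&lt;/pre&gt;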
&lt;p&gt;CSI ephemeral inline volumes provide a means for the Kubernetes admin to make cluster users aware of how and where temporary storage resources may be provisioned. This is not a particularly good idea with the HPE Nimble Storage CSP at this time as there’s no mechanism to create the necessary separation. With the HPE 3PAR/Primera CSP, it’s possible to create a separate Virtual Domain for inline volumes and the user is essentially a tenant on the backend storage array.&lt;/p&gt;
&lt;h1&gt;Next steps&lt;/h1&gt;
&lt;p&gt;It’s always exciting to talk about new features and capabilities. Take the new CSI driver for a spin and let us know what you think. We hang out on the HPE DEV Slack community. Sign up on &lt;a href=&quot;https://slack.hpedev.io&quot;&gt;slack.hpedev.io&lt;/a&gt; if you’re an external HPE user or login directly at &lt;a href=&quot;https://hpedev.slack.com&quot;&gt;hpedev.slack.com&lt;/a&gt; if you’re an HPE employee.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;HPE CSI Driver for Kubernetes &lt;a href=&quot;https://hub.helm.sh/charts/hpe-storage/hpe-csi-driver&quot;&gt;Helm Chart&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;HPE CSI Operator for Kubernetes on &lt;a href=&quot;https://operatorhub.io/operator/hpe-csi-operator&quot;&gt;OperatorHub.io&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Learn about the CSI driver on &lt;a href=&quot;https://scod.hpedev.io&quot;&gt;HPE Storage Container Orchestrator Documentation&lt;/a&gt; (SCOD)&lt;/li&gt;
&lt;li&gt;Visit the &lt;a href=&quot;https://developer.hpe.com/platform/hpe-nimble-storage/home&quot;&gt;HPE Nimble Storage&lt;/a&gt; or &lt;a href=&quot;https://developer.hpe.com/platform/hpe-3par-and-primera/home&quot;&gt;HPE Primera&lt;/a&gt; platform pages&lt;/li&gt;
&lt;li&gt;Read the HPE CSI Driver for Kubernetes release blog on &lt;a href=&quot;https://community.hpe.com/t5/around-the-storage-block/hpe-csi-driver-for-kubernetes-1-2-0-available-now/ba-p/7091977&quot;&gt;Around The Storage Block&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Get involved in the open source community! Part 2: Sharing with the community ]]></title><description><![CDATA[git101-part1-git icon 1788c In the Part 1 of my blog series, I discussed how to get started with Git and leverage some of the content…]]></description><link>https://developer.hpe.com/get-involved-in-the-open-source-community-part-2-sharing-with-the-commun/</link><guid isPermaLink="false">https://developer.hpe.com/get-involved-in-the-open-source-community-part-2-sharing-with-the-commun/</guid><pubDate>Fri, 05 Jun 2020 12:47:25 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/git-icon-1788c-1590702885345.png&quot; alt=&quot;git101-part1-git icon 1788c&quot;&gt;&lt;/p&gt;
&lt;p&gt;In &lt;a href=&quot;/blog/get-involved-in-the-open-source-community-part-1-getting-started-with-gi&quot;&gt;Part 1&lt;/a&gt; of my blog series, I discussed how to get started with Git and leverage some of the content provided by the open source community. We already covered use case 1 and use case 2 from the list below:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Use case 1: I’d like to use something from the community&lt;/li&gt;
&lt;li&gt;Use case 2: I&apos;d like to report an issue on a repository&lt;/li&gt;
&lt;li&gt;Use case 3: I&apos;d like to share something with the community&lt;/li&gt;
&lt;li&gt;Use case 4: I&apos;d like to contribute code to a repository&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In this article I am covering use case 3, where we will start contributing to the community.&lt;/p&gt;
&lt;h1&gt;Use case 3: I&apos;d like to share something with the community&lt;/h1&gt;
&lt;p&gt;Let&apos;s try sharing something with the open source community. In this use case, you will create your first repository on GitHub. And if you don&apos;t have a GitHub account yet, it will also be a good time to create one in order to start sharing things with others. GitHub started as a place for sharing and versioning source code, but many other things can be shared on GitHub, including PDF files (tutorials, for example), config files, markdown files, and Jupyter Notebooks. So, don&apos;t be shy and start to contribute.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: Anything can be shared, but Git works best on non-binary content.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Step 1: Creating a personal account on Github&lt;/h2&gt;
&lt;p&gt;For those of you who do not have a GitHub account (or equivalent), now is a good time to create one. To do so:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Connect to  &lt;a href=&quot;https://github.com/join&quot;&gt;https://github.com/join&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Choose a username, enter your email, and select a password&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/signup-1591362322035.png&quot; alt=&quot;git101-part2-signup&quot;&gt;&lt;/p&gt;
&lt;p&gt;It’s as simple as that!&lt;/p&gt;
&lt;h2&gt;Step 2: Create an empty repo called WelcomeGit&lt;/h2&gt;
&lt;p&gt;Let’s create our first repository (a.k.a. repo) using the following instructions:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Use the + sign next to your profile to Add a repository&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/newrepo-1591362355840.png&quot; alt=&quot;git101-part2-newrepo&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Provide WelcomeGit for a repo name&lt;/li&gt;
&lt;li&gt;Provide a description&lt;/li&gt;
&lt;li&gt;Make it Public&lt;/li&gt;
&lt;li&gt;Do not initialize a README file&lt;/li&gt;
&lt;li&gt;Do not setup an ignore file&lt;/li&gt;
&lt;li&gt;Select Apache License 2.0&lt;/li&gt;
&lt;/ol&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: It is good practice to select a licensing scheme as soon as you create a repository. There are many choices, and the specifics of each scheme are beyond the scope of this article. We selected Apache License 2.0, which is a very permissive license for your code (basically, anyone can use it for any purpose and there is no requirement to share changes). You can find out more regarding open source licensing schemes &lt;a href=&quot;https://opensource.org/licenses&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/createrepo-1591362387371.png&quot; alt=&quot;git101-part2-createrepo&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Step 3: Working securely with GitHub&lt;/h2&gt;
&lt;p&gt;You could use your GitHub password from the CLI, but it&apos;s not considered good practice to use passwords in clear text in a terminal session. Our recommendation is to use a specifically generated token instead (which you can revoke when you no longer need it). For this, we will follow the instructions described &lt;a href=&quot;https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: Another popular option is to use an SSH key to access GitHub and avoid providing any password or token. Creating a public/private keypair is beyond the scope of this lab, so we will stick to HTTPS and use a token. But feel free to check how to import your SSH key &lt;a href=&quot;https://help.github.com/en/github/authenticating-to-github/adding-a-new-ssh-key-to-your-github-account&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Let&apos;s go:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/settings/profile&quot;&gt;Open Personal Settings&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/settings/apps&quot;&gt;Select the Developer settings&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/settings/tokens&quot;&gt;Select Personal access tokens&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/settings/tokens/new&quot;&gt;Click Generate new token&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Set up a note such as &quot;Git101 Token&quot;, select &quot;repo&quot; for the scope and click Generate token (this will prompt you for your GitHub account password)&lt;/li&gt;
&lt;li&gt;Copy the generated token to the clipboard and use it when prompted for your Git password&lt;/li&gt;
&lt;/ol&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: You cannot display the value of a token again once you close the window. You will need to generate a new token if you forget its value.&lt;/p&gt;
&lt;/blockquote&gt;
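&lt;p&gt;Optionally, you can ask Git to remember the token for a while so that you are not prompted on every push. This is a sketch using Git&apos;s built-in credential cache; on macOS you may prefer the &lt;code&gt;osxkeychain&lt;/code&gt; helper, and on Windows the credential manager that ships with Git for Windows:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Keep credentials in memory for one hour (adjust the timeout to taste)
git config --global credential.helper &apos;cache --timeout=3600&apos;
&lt;/code&gt;&lt;/pre&gt;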
&lt;h2&gt;Step 4: Build content into your WelcomeGit repo from Git CLI&lt;/h2&gt;
&lt;p&gt;Open a terminal session (terminal on Mac, PowerShell on Windows).&lt;/p&gt;
&lt;p&gt;1/ Create a folder called &lt;strong&gt;welcomegit&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ mkdir welcomegit
$ cd welcomegit
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;2/ Initialize Git on this empty folder using &lt;code&gt;git init&lt;/code&gt;. Once Git is initialized, all changes in this folder will be tracked by Git. Your folder has become a local repo(sitory).&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ git init
Initialized empty Git repository in /Users/lalli/welcomegit/.git/
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;3/ Create a &lt;strong&gt;README.md&lt;/strong&gt; file with just one line in it. For example: “This is my README file”&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ cat &gt;&gt; README.md
This is my README file
^D
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;4/ Let’s query the status of our local repo using &lt;code&gt;git status&lt;/code&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ git status
On branch master
No commits yet
Untracked files:
  (use &quot;git add &amp;#x3C;file&gt;...&quot; to include in what will be committed)
	README.md
nothing added to commit but untracked files present (use &quot;git add&quot; to track)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Git tells you that a file was discovered and it needs to be added to Git to make it part of the repo.&lt;/p&gt;
&lt;p&gt;5/ Add the file to the repo with &lt;code&gt;git add&lt;/code&gt; (&lt;code&gt;git add .&lt;/code&gt; will add all files in the folder)&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ git add README.md
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;6/ Let’s query the status of our local repo using &lt;code&gt;git status&lt;/code&gt; again&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ git status
On branch master
No commits yet
Changes to be committed:
  (use &quot;git rm --cached &amp;#x3C;file&gt;...&quot; to unstage)
	new file:   README.md
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We see that the new file is now part of the repo, but the changes have not yet been committed.&lt;/p&gt;
&lt;p&gt;7/ Before we commit, let’s set a couple of Git configuration settings (your identity, recorded in every commit)&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ git config --global user.email &amp;#x3C;your-git-email&gt;
$ git config --global user.name &amp;#x3C;your-git-username&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;8/ Commit changes to repo with &lt;code&gt;git commit&lt;/code&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ git commit -m &quot;Added README.md to repo&quot;
[master (root-commit) ab3adf5] Added README.md to repo
 1 file changed, 1 insertion(+)
 create mode 100644 README.md
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;9/ Let’s now connect this local repo to our WelcomeGit repo (created in Step 2) using &lt;code&gt;git remote&lt;/code&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: Best practice is to call this remote: origin&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ git remote add origin &amp;#x3C;your-repo-URL-goes-here&gt; -m master
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;For example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ git remote add origin https://github.com/didou06/WelcomeGit -m master
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;10/ Verify status of our remote repo with &lt;code&gt;git remote -v&lt;/code&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ git remote -v 
origin https://github.com/Didou06/WelcomeGit.git (fetch)
origin https://github.com/Didou06/WelcomeGit.git (push)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;11/ It’s now time to push our changes to the remote repo using &lt;code&gt;git push&lt;/code&gt;. You will be prompted for your GitHub username and password (paste the token from Step 3 as the password).&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ git push --set-upstream origin master
Username for &apos;https://github.com&apos;: didou06
Password for &apos;https://didou06@github.com&apos;: 
Enumerating objects: 3, done.
Counting objects: 100% (3/3), done.
Writing objects: 100% (3/3), 247 bytes | 247.00 KiB/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To https://github.com/Didou06/WelcomeGit.git
 * [new branch]      master -&gt; master
Branch &apos;master&apos; set up to track remote branch &apos;master&apos; from &apos;origin&apos;.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;12/ Query status of repo one last time&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ git status
On branch master
Your branch is up to date with &apos;origin/master&apos;.
nothing to commit, working tree clean
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That’s already quite a few commands. There are many more, but these are the most important ones to learn in order to get started.&lt;/p&gt;
&lt;h2&gt;Step 5: Verify WelcomeGit repo in web GUI&lt;/h2&gt;
&lt;p&gt;On your GitHub repo, you can verify that you now have a &lt;strong&gt;README.md&lt;/strong&gt; file with some content (refresh page, if needed). Well done!&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/readmeadded-1591365686744.png&quot; alt=&quot;git101-part2-readmeadded&quot;&gt;&lt;/p&gt;
&lt;p&gt;That&apos;s it. From now on, every change made in the WelcomeGit folder on your machine (add file with &lt;code&gt;git add&lt;/code&gt;, delete file with &lt;code&gt;git rm&lt;/code&gt;, rename file with &lt;code&gt;git mv&lt;/code&gt;, modify content of files) will be tracked by Git. It is your choice to commit changes locally like we did in Step 4 using &lt;code&gt;git commit&lt;/code&gt;. It is also your choice to push changes from your local repo to your remote repo using &lt;code&gt;git push&lt;/code&gt;. You can add contributors to your project, and each will work the same way, using a local repo, then pushing back centrally. Git makes it possible to scale to hundreds of contributors as shown in the &lt;a href=&quot;https://github.com/grommet/grommet&quot;&gt;Grommet GitHub&lt;/a&gt;, for example.&lt;/p&gt;
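&lt;p&gt;To make that day-to-day workflow concrete, here is an illustrative sketch of the edit/stage/commit/push cycle on this repo (the file change is just an example):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Modify a tracked file
echo &quot;More details about this project&quot; &gt;&gt; README.md
# Review what changed, stage it, and record it locally
git status
git add README.md
git commit -m &quot;Expand README&quot;
# Publish the commit to the remote repo on GitHub
git push
&lt;/code&gt;&lt;/pre&gt;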
&lt;p&gt;This concludes use case 3: Congratulations! You have created your GitHub account and populated your first repo from the command line using Git. You have just become an active member of the open source community :-).&lt;/p&gt;
&lt;p&gt;If you want to discover more about Git, I recommend the following resources:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://www.git-scm.com/book/en/v2&quot;&gt;The Pro Git Book&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://guides.github.com/introduction/git-handbook/&quot;&gt;The Git Handbook&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=SWYqp7iY_Tc&amp;#x26;feature=youtu.be&quot;&gt;Video Git Crash Course&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Stay tuned to &lt;a href=&quot;/blog&quot;&gt;HPE DEV&lt;/a&gt; for the last article where I will cover use case 4. In this use case, you will start contributing some code back to an existing project.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[WOW - Part 3 - Conveying the business value in a purposefully designed UX]]></title><description><![CDATA[picture1 What is WOW? Previously in my last two posts (Part 1, Part 2), I covered the first three stages of a methodology our group…]]></description><link>https://developer.hpe.com/wow-part-3-conveying-the-business-value-in-a-purposefully-designed-ux/</link><guid isPermaLink="false">https://developer.hpe.com/wow-part-3-conveying-the-business-value-in-a-purposefully-designed-ux/</guid><pubDate>Thu, 04 Jun 2020 16:59:57 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/picture1-1591290304651.png&quot; alt=&quot;picture1&quot;&gt;&lt;/p&gt;
&lt;h2&gt;What is WOW?&lt;/h2&gt;
&lt;p&gt;Previously in my last two posts (&lt;a href=&quot;/blog/wow-a-practiced-and-perfected-design-process-part-1-uncovering-the-merit&quot;&gt;Part 1&lt;/a&gt;, &lt;a href=&quot;/blog/wow-part-2-communicating-with-customers-and-constructors&quot;&gt;Part 2&lt;/a&gt;), I covered the first three stages of a methodology our group developed called WOW (Why On What with customers and constructors). This workflow demonstrates how to quantify the value of the design process and provides enterprise UX designers with a practiced and perfected path to achieve success.&lt;/p&gt;
&lt;p&gt;The WOW methodology helps creative teams focus on four important stages of a project:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Uncover why the business (and this particular project) needs a UX&lt;/li&gt;
&lt;li&gt;Involve the customer early in the design phase&lt;/li&gt;
&lt;li&gt;Ensure the constructor (developer) has the right information during design implementation&lt;/li&gt;
&lt;li&gt;Convey the business value of the design&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Now that we’ve helped uncover why a project needs UX design (by illustrating the benefits of the tools it offers and shown how a designer’s use of empathy can smooth the path to achieving a successful design), I’d like to cover the last part of WOW. In this article, I’ll show you how to articulate the business value of the design process.&lt;/p&gt;
&lt;h2&gt;UX deliverables with data, comparisons, and designs&lt;/h2&gt;
&lt;p&gt;The tools that designers use to create designs, such as wireframes or screen designs, are not enough to demonstrate the value of design. Audiences understand the screens but, at the same time, they also like to debate the screens. Using data and comparison studies can help avoid vague and lengthy debates that rarely result in any specific outcome.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/picture3-1591290322580.png&quot; alt=&quot;picture3&quot;&gt;&lt;/p&gt;
&lt;p&gt;Our data shows that UX reviews were among the lengthiest meetings for our team over the past few years. I agree these meetings are very engaging! But simply bringing a few data points into the discussion can cut the meeting duration by 10 to 12 minutes.&lt;/p&gt;
&lt;h2&gt;Wireframes versus the voice of a customer?&lt;/h2&gt;
&lt;p&gt;Wireframes have a notorious history of being treated as a designer’s opinion. Wireframes, as the primary deliverable, are not enough to help stakeholders realize the value the UX design process brings to the project. Facts, testimonials, and comparative studies should be the primary deliverables given to stakeholders. Wireframes should only be used as a secondary method that supports the primary deliverables.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/picture5-1591290355435.png&quot; alt=&quot;picture5&quot;&gt;&lt;/p&gt;
&lt;p&gt;As part of the WOW process, designers will have gone through stages that help them accumulate the facts required to sway opinion. They will be able to show how the UX design process added value from the very beginning, identifying potential gaps in the project and adding competitive value. They will be able to express the value of the UX in customer terms, relaying specifically what customers found helpful and necessary for the success of the product experience.&lt;/p&gt;
&lt;h2&gt;Key takeaways&lt;/h2&gt;
&lt;p&gt;As I’ve shown in these past three blog posts, there is a lot more to UX design than what is often considered. UX design isn’t just used to make an application look pretty. UX design brings many benefits to the product development process, in that it:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Offers specific UX tools and techniques that can help identify the existing gaps in the product.&lt;/li&gt;
&lt;li&gt;Promotes empathy, not just with customers but also with UI developers (the constructors).&lt;/li&gt;
&lt;li&gt;Provides tools to demonstrate the business value achieved through UX.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/picture6-1591290392865.png&quot; alt=&quot;picture6&quot;&gt;&lt;/p&gt;
&lt;p&gt;I hope that, by using this WOW methodology, you will find it easier to convince others of the value of the UX design process and achieve greater success in your next project. I’m always interested in hearing about your experience. Feel free to reach out to me at &lt;a href=&quot;https://twitter.com/uxwithparul&quot;&gt;@uxwithparul&lt;/a&gt; if you have any questions. Don’t forget to check back at &lt;a href=&quot;/blog&quot;&gt;HPE DEV&lt;/a&gt; and see what other tutorials we have that can make your life easier.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE Container Platform REST API – Part 2: Deploying containerized applications]]></title><description><![CDATA[Editor’s Note – HPE Ezmeral Container Platform is now HPE Ezmeral Runtime Enterprise. For more information on why the name was changed…]]></description><link>https://developer.hpe.com/hpe-container-platform-rest-api-part-2-deploying-containerized-applicati/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-container-platform-rest-api-part-2-deploying-containerized-applicati/</guid><pubDate>Thu, 04 Jun 2020 16:41:01 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note – HPE Ezmeral Container Platform is now HPE Ezmeral Runtime Enterprise&lt;/strong&gt;. For more information on why the name was changed, please &lt;a href=&quot;https://community.hpe.com/t5/HPE-Ezmeral-Uncut/HPE-Ezmeral-Container-Platform-is-now-HPE-Ezmeral-Runtime/ba-p/7151720#.YW7nOxrMKM8&quot;&gt;click here&lt;/a&gt;.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;In my previous blog post, &lt;a href=&quot;/blog/hpe-container-platform-rest-api-part-1-authenticating&quot;&gt;HPE Container platform REST API – Part 1: Authenticating&lt;/a&gt;, I introduced the HPE Container Platform (HPE CP) REST API. I showed you how to authenticate to the HPE Container Platform API endpoint and retrieve data from objects in a secure way using the command line cURL. Continuing with this series, my second article will walk you through the steps you need to take to deploy containerized applications programmatically on Kubernetes clusters that are managed by the HPE Container Platform. It will show you how to take the REST API authentication call you established while going through the first blog and apply it to a real life scenario focused on the following areas:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Deployment of cloud native microservices based applications&lt;/li&gt;
&lt;li&gt;Deployment of non-cloud native, stateful, distributed analytics workloads using pre-configured &lt;a href=&quot;/blog/kubedirector-the-easy-way-to-run-complex-stateful-applications-on-kubern&quot;&gt;KubeDirector applications&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Deploying stateless and stateful containerized applications using a programmatic approach&lt;/h2&gt;
&lt;p&gt;This tutorial assumes you have established a login session with the HPE Container Platform as explained in the first part of this series. The next step in deploying a containerized application in Kubernetes clusters managed by the HPE Container Platform is to get the kubeconfig file for your tenant working context.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: A working context establishes the user identity, its tenant name and role (member or admin). Based on this context, tenant users are granted privileges and permissions to create and manage resources for their tenant on Kubernetes clusters managed by HPE CP.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The HPE CP REST API call below allows you to obtain the kubeconfig file used to access the Kubernetes cluster for your tenant user account based on your assigned role (tenant member or tenant admin), just as if you had downloaded it from the HPE CP UI.&lt;/p&gt;
&lt;p&gt;The REST API call is a &lt;strong&gt;GET&lt;/strong&gt; request for the target URL &lt;strong&gt;/api/v2/k8skubeconfig&lt;/strong&gt; authenticated for your working tenant context (X-BDS-SESSION). Here, the kubeconfig file is saved as &lt;em&gt;config&lt;/em&gt; in your $HOME/.kube directory. The call retrieves a configuration file suitable for use by a Kubernetes API client such as &lt;em&gt;kubectl&lt;/em&gt;, which includes a valid session location (token) for your current session.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;
curl -k -s --request GET &quot;https://&amp;#x3C;Gateway-IP-Address-or-fqdn&gt;:8080/api/v2/k8skubeconfig&quot; \
--header &quot;X-BDS-SESSION: $sessionlocation&quot; \
--header &apos;Accept: application/json&apos; \
--header &apos;Content-Type: application/json&apos; &gt; $HOME/.kube/config

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can now send Kubernetes API requests using kubectl to deploy enterprise workloads to the Kubernetes cluster with the privileges assigned to your tenant role.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: Once the session location expires (after 24 hours by default), any attempt to execute kubectl commands will prompt you for your password and will require you to log in again before continuing.&lt;/p&gt;
&lt;/blockquote&gt;
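&lt;p&gt;Before deploying anything, a quick sanity check confirms that the downloaded kubeconfig works as expected. This is an illustrative sketch; the resources you can see depend on your tenant role:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Point kubectl at the kubeconfig retrieved above
export KUBECONFIG=$HOME/.kube/config
# Show the current context and verify that the tenant namespace answers
kubectl config get-contexts
kubectl get pods
&lt;/code&gt;&lt;/pre&gt;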
&lt;p&gt;Let&apos;s see how this works by deploying a simple hello-world stateless, microservices-based application. We’ll also try it with a complex, distributed, stateful, big data analytics-processing application such as Spark.&lt;/p&gt;
&lt;h3&gt;Hello-World stateless application deployment&lt;/h3&gt;
&lt;p&gt;The hello-world application is a &lt;strong&gt;stateless&lt;/strong&gt; application because it does not require persistence of data or application state. It is a very simple application that returns &lt;code&gt;Hello Kubernetes!&lt;/code&gt; when accessed. The YAML file below describes the application resources involved, such as the Deployment, the Pod, the Docker container image and port, and the NodePort Service used to expose the application outside of the Kubernetes cluster.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-demouser
spec:
  selector:
    matchLabels:
      run: hello-demouser
  replicas: 2
  template:
    metadata:
      labels:
        run: hello-demouser
    spec:
      containers:
        - name: hello-world-demouser
          image: gcr.io/google-samples/node-hello:1.0
          ports:
            - containerPort: 8080
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-service-demouser
spec:
  selector:
    run: hello-demouser
  ports:
  - name: http-hello
    protocol: TCP
    port: 8080
    targetPort: 8080
  type: NodePort

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The next step is to save the file, for example as &lt;em&gt;hello-world-app.yaml&lt;/em&gt;, and deploy the application with &lt;code&gt;kubectl apply -f hello-world-app.yaml&lt;/code&gt;. As shown by the &lt;code&gt;kubectl get&lt;/code&gt; command below, this simple hello-world application will be represented by the standard Kubernetes resources (Deployment, Pods, and Service) that compose your containerized application.&lt;/p&gt;
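&lt;p&gt;For reference, the deployment step itself, plus a wait for the rollout to finish, might look like this (a sketch, reusing the names from the YAML above):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Create the Deployment and Service, then wait until both replicas are available
kubectl apply -f hello-world-app.yaml
kubectl rollout status deployment/hello-world-demouser
&lt;/code&gt;&lt;/pre&gt;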
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;kubectl get deploy,pod,service 

 
NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/hello-world-demouser   2/2     2            2           3m6s

NAME                                        READY   STATUS    RESTARTS   AGE
pod/hello-world-demouser-54b6fcd974-mt8sc   1/1     Running   0          3m6s
pod/hello-world-demouser-54b6fcd974-wnkld   1/1     Running   0          3m6s

NAME                                   TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                                                    
service/hello-world-service-demouser   NodePort  10.96.113.175  &amp;#x3C;none&gt;  8080:32295/TCP                         
  

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;For this tutorial, the HPE Container Platform has been configured to automatically map the NodePort service endpoint to the HPE Container Platform gateway host. In this setup, access to application services running in containers in the HPE Container Platform is proxied via the gateway host and a port number greater than 10000. The following &lt;code&gt;kubectl&lt;/code&gt; command can be used to obtain the application service endpoint from the annotations:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;kubectl describe service hello-world-service-demouser

Name:                     hello-world-service-demouser
Namespace:                k8shacktenant
Labels:                   hpecp.hpe.com/hpecp-internal-gateway=true
Annotations:              hpecp-internal-gateway/8080: gateway1.hpedevlab.net:10012
                          kubectl.kubernetes.io/last-applied-configuration:
                            {&quot;apiVersion&quot;:&quot;v1&quot;,&quot;kind&quot;:&quot;Service&quot;,&quot;metadata&quot;:{&quot;annotations&quot;:{},&quot;name&quot;:&quot;hello-world-service-demouser&quot;,&quot;namespace&quot;:&quot;k8shacktenant&quot;},&quot;spec&quot;...
Selector:                 run=hello-demouser
Type:                     NodePort
IP:                       10.96.113.175
Port:                     http-hello  8080/TCP
TargetPort:               8080/TCP
NodePort:                 http-hello  32295/TCP
Endpoints:                10.192.0.217:8080,10.192.1.102:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason  Age   From         Message
  ----    ------  ----  ----         -------
  Normal  HpeCp   16s   hpecp-agent  Created HPECP K8S service


&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In this example, the application service endpoint is &lt;em&gt;gateway1.hpedevlab.net:10012&lt;/em&gt;. You can connect to the application using the cURL command below or connect through your favorite browser:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;curl -k -s --request GET https://gateway1.hpedevlab.net:10012
Hello Kubernetes!

&lt;/code&gt;&lt;/pre&gt;
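&lt;p&gt;If you would rather script the lookup of the gateway mapping than read it from &lt;code&gt;kubectl describe&lt;/code&gt;, a simple (illustrative) alternative is to filter the service annotations directly:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Print only the hpecp-internal-gateway annotation that carries the external endpoint
kubectl get service hello-world-service-demouser -o json | grep hpecp-internal-gateway
&lt;/code&gt;&lt;/pre&gt;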
&lt;h3&gt;Spark stateful application deployment using KubeDirector&lt;/h3&gt;
&lt;p&gt;HPE has been working within the open source Kubernetes community to add capabilities that enable the running of &lt;strong&gt;stateful&lt;/strong&gt; analytics workloads (e.g: data-intensive, AI/ML and analytics-processing distributed applications) on Kubernetes.  The open source project is known as &lt;strong&gt;Kubernetes Director&lt;/strong&gt; or &lt;strong&gt;KubeDirector&lt;/strong&gt; for short.&lt;/p&gt;
&lt;p&gt;KubeDirector is a key component of the HPE Container Platform, implemented as a Kubernetes custom controller (also known as operator) on every Kubernetes cluster managed by the HPE Container Platform. You can find more information about KubeDirector by visiting the HPE DEV portal &lt;a href=&quot;https://developer.hpe.com/platform/hpe-container-platform/home&quot;&gt;here.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;A &lt;strong&gt;stateful&lt;/strong&gt; application may require persistence of network identity (i.e.: hostname) and persistence of certain mount points across application cluster nodes rescheduling, restarts, upgrades and rollbacks. In the HPE Container Platform, these applications generally refer to a distributed, single-node or multi-node application &lt;strong&gt;virtual cluster&lt;/strong&gt;. Each application virtual cluster node runs as a &lt;strong&gt;container&lt;/strong&gt; in a Pod in the Kubernetes cluster managed by the HPE Container Platform.&lt;/p&gt;
&lt;p&gt;In our HPE CP deployment, three pre-configured KubeDirector application types have been installed on the Kubernetes cluster managed by the HPE Container Platform. As a tenant user, you can get the list of KubeDirector applications that are visible to your tenant using the &lt;code&gt;kubectl&lt;/code&gt; command below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;kubectl get kubedirectorapp
NAME                  AGE
centos7x              37d
ml-jupyter-notebook   37d
spark221e2            37d


&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, let’s inspect the definition of the Spark application type using the kubectl command &lt;code&gt;kubectl describe kubedirectorapp spark221e2&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Spark is a non-cloud native multi-tier application with tightly coupled and interdependent services. As shown in the output of the command below, the spark221e2 KubeDirector application describes the application metadata: the service roles, the service endpoint ports and port name prefixes (derived from the URL scheme), the Docker images, the configuration packages, the cardinality (minimum number of members in a role), and the root file system directories (e.g., /etc, /bin, /opt, /var, /usr) of the containers to persist beyond the life span of the containers. This means stateful applications that require writing data to their root file systems can now successfully run on Kubernetes.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;kubectl describe kubedirectorapp spark221e2
Name:         spark221e2
Namespace:    k8shacktenant
Labels:       &amp;#x3C;none&gt;
Annotations:  &amp;#x3C;none&gt;
API Version:  kubedirector.hpe.com/v1beta1
Kind:         KubeDirectorApp
Metadata:
  Creation Timestamp:  2020-03-24T08:25:15Z
  Generation:          1
  Resource Version:    981021
  Self Link:           /apis/kubedirector.hpe.com/v1beta1/namespaces/k8shacktenant/kubedirectorapps/spark221e2
  UID:                 753618be-1edf-470c-bff5-385d0e76fafe
Spec:
  Config:
    Role Services:
      Role ID:  controller
      Service I Ds:
        ssh
        spark
        spark-master
        spark-worker
      Role ID:  worker
      Service I Ds:
        ssh
        spark-worker
      Role ID:  jupyter
      Service I Ds:
        ssh
        jupyter-nb
    Selected Roles:
      controller
      worker
      jupyter
  Config Schema Version:  7
  Distro ID:              bluedata/spark221e2
  Label:
    Description:  Spark 2.2.1 with Jupyter notebook
    Name:         Spark 2.2.1 + Jupyter
  Roles:
    Cardinality:  1
    Config Package:
      Package URL:   file:///opt/configscripts/appconfig-2.6.tgz
    Id:              controller
    Image Repo Tag:  docker.io/bluedata/sparkbase:2.2
    Persist Dirs:
      /usr
      /opt
      /var
      /data
    Cardinality:  0+
    Config Package:
      Package URL:   file:///opt/configscripts/appconfig-2.6.tgz
    Id:              worker
    Image Repo Tag:  docker.io/bluedata/sparkbase:2.2
    Persist Dirs:
      /usr
      /opt
      /var
      /data
    Cardinality:  0+
    Config Package:
      Package URL:   file:///opt/configscripts/appconfig-2.6.tgz
    Id:              jupyter
    Image Repo Tag:  docker.io/bluedata/jupyter:2.3
    Persist Dirs:
      /usr
      /opt
      /var
      /data
  Services:
    Endpoint:
      Is Dashboard:  false
      Port:          22
    Id:              ssh
    Label:
      Name:  SSH
    Endpoint:
      Is Dashboard:  true
      Path:          /
      Port:          8080
      URL Scheme:    http
    Id:              spark
    Label:
      Name:  Spark master
    Endpoint:
      Is Dashboard:  false
      Port:          7077
      URL Scheme:    spark
    Id:              spark-master
    Label:
      Name:  Spark master
    Endpoint:
      Is Dashboard:  true
      Path:          /
      Port:          8081
      URL Scheme:    http
    Id:              spark-worker
    Label:
      Name:  Spark worker
    Endpoint:
      Is Dashboard:  true
      Path:          /
      Port:          8888
      URL Scheme:    http
    Id:              jupyter-nb
    Label:
      Name:          Jupyter Notebook
  Systemd Required:  true
  Version:           2.7
Events:              &amp;#x3C;none&gt;

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;A configuration manifest YAML file is then used to create an application virtual cluster that &lt;strong&gt;instantiates&lt;/strong&gt; a defined KubeDirector application type. The configuration file describes the attributes of a given KubeDirector application instance, such as the application instance name, the KubeDirector application type, the service roles with their number of members and compute sizes, and the persistent storage size.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: The Spark KubeDirector Application variant used in this tutorial is a distributed implementation of the data-processing Spark cluster where the master (Spark driver) and worker (Spark executors) services run on different cluster nodes (1 controller node and 2 worker nodes). A separate Jupyter node is used as an interactive client to execute programs on the Spark cluster.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;apiVersion: &quot;kubedirector.hpe.com/v1beta1&quot;
kind: &quot;KubeDirectorCluster&quot;
metadata: 
  name: &quot;spark221e2-demouser&quot;
spec: 
  app: &quot;spark221e2&quot;
  appCatalog: &quot;local&quot;
  roles: 
    - 
      id: &quot;controller&quot;
      members: 1
      resources: 
        requests: 
          memory: &quot;4Gi&quot;
          cpu: &quot;2&quot;
        limits: 
          memory: &quot;4Gi&quot;
          cpu: &quot;2&quot;
      storage: 
        size: &quot;10Gi&quot;
    - 
      id: &quot;worker&quot;
      members: 2
      resources: 
        requests: 
          memory: &quot;4Gi&quot;
          cpu: &quot;2&quot;
        limits: 
          memory: &quot;4Gi&quot;
          cpu: &quot;2&quot;
      storage: 
        size: &quot;10Gi&quot;
    - 
      id: &quot;jupyter&quot;
      members: 1
      resources: 
        requests: 
          memory: &quot;4Gi&quot;
          cpu: &quot;2&quot;
        limits: 
          memory: &quot;4Gi&quot;
          cpu: &quot;2&quot;
      storage: 
        size: &quot;10Gi&quot;

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, you can save the file, for example as &lt;em&gt;spark-config.yaml&lt;/em&gt;, and deploy the application with &lt;code&gt;kubectl apply -f spark-config.yaml&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Your Spark application virtual cluster will be represented in the Kubernetes cluster by a resource of type &lt;strong&gt;KubeDirectorCluster&lt;/strong&gt;, with the name that was indicated inside the YAML file used to create it. You can use the &lt;code&gt;kubectl describe kubedirectorcluster &amp;#x3C;clustername&gt;&lt;/code&gt; command below to observe the status of all the resources that compose the virtual cluster, the state of the virtual cluster, and any events logged against it. Upon successful creation of the virtual cluster, the state of the cluster should have a value of &lt;strong&gt;&quot;configured&quot;&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;As shown in the output of the kubectl commands below, the instance of the KubeDirector Application virtual cluster is made up of a &lt;strong&gt;StatefulSet&lt;/strong&gt; per role (Spark controller, Spark workers, and jupyter), a &lt;strong&gt;Pod&lt;/strong&gt; (a cluster node) per service role member, a &lt;strong&gt;NodePort Service&lt;/strong&gt; per service role member, a &lt;strong&gt;headless service&lt;/strong&gt; for the application cluster, and a &lt;strong&gt;Persistent Volume Claim (pvc)&lt;/strong&gt; per Pod that requested persistent storage.&lt;/p&gt;
&lt;p&gt;Some lines have been deleted from the output of the kubectl command to display the essential information.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;kubectl describe kubedirectorcluster spark221e2-demouser
Name:         spark221e2-demouser
Namespace:    k8shacktenant
…
…
Kind:         KubeDirectorCluster
…
…
Status:
  Cluster Service:  kdhs-lbjmt
  …
  …
  Roles:
    Id:  controller
    Members:
      Node ID:  1
      Pod:      kdss-j2v6x-0
      Pvc:      p-kdss-j2v6x-0
      Service:  s-kdss-j2v6x-0
      State:    configured
    Stateful Set:  kdss-j2v6x
    …
    …
    Id:  worker
    Members:
      Node ID:  2
      Pod:      kdss-7dtqk-0
      Pvc:      p-kdss-7dtqk-0
      Service:  s-kdss-7dtqk-0
      State:    configured

      …
      …
      Node ID:  3
      Pod:      kdss-7dtqk-1
      Pvc:      p-kdss-7dtqk-1
      Service:  s-kdss-7dtqk-1
      State:    configured
    Stateful Set:  kdss-7dtqk
    …
    …   
    Id: jupyter
    Members:
      Node ID:  4
      Pod:      kdss-mwkh4-0
      Pvc:      p-kdss-mwkh4-0
      Service:  s-kdss-mwkh4-0
      State:    configured
    Stateful Set:  kdss-mwkh4
    …
    …
    State:      configured
Events:
  Type    Reason   Age                    From          Message
  ----    ------   ----                   ----          -------
  Normal  Cluster  31m                    kubedirector  new
  Normal  Role     31m                    kubedirector  creating role
  …
  …

&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;kubectl get all -l kubedirector.hpe.com/kdcluster=spark221e2-demouser
NAME               READY   STATUS    RESTARTS   AGE
pod/kdss-7dtqk-0   1/1     Running   0          31m
pod/kdss-7dtqk-1   1/1     Running   0          31m
pod/kdss-j2v6x-0   1/1     Running   0          31m
pod/kdss-mwkh4-0   1/1     Running   0          31m

NAME                     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                                                     
service/kdhs-lbjmt       ClusterIP   None           &amp;#x3C;none&gt;       8888/TCP                                                    
service/s-kdss-7dtqk-0   NodePort    10.96.141.31   &amp;#x3C;none&gt;   	22:32677/TCP,8081:31223/TCP                                 
service/s-kdss-7dtqk-1   NodePort    10.96.44.156   &amp;#x3C;none&gt;   22:32143/TCP,8081:30019/TCP                                 
service/s-kdss-j2v6x-0   NodePort    10.96.215.65   &amp;#x3C;none&gt;   22:30358/TCP,8080:31430/TCP,7077:31358/TCP,8081:31160/TCP   
service/s-kdss-mwkh4-0   NodePort    10.96.5.87     &amp;#x3C;none&gt;   22:30390/TCP,8888:30227/TCP                                 

NAME                          READY   AGE
statefulset.apps/kdss-7dtqk   2/2     31m
statefulset.apps/kdss-j2v6x   1/1     31m
statefulset.apps/kdss-mwkh4   1/1     31m

&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;kubectl get pvc -l kubedirector.hpe.com/kdcluster=spark221e2-demouser

NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
p-kdss-7dtqk-0   Bound    mapr-pv-84ebfd58-e33e-4535-8856-6ef3b54f84a5   10Gi       RWO            default        31m
p-kdss-7dtqk-1   Bound    mapr-pv-898d2f58-f516-40ef-bc37-7bd418888f78   10Gi       RWO            default        31m
p-kdss-j2v6x-0   Bound    mapr-pv-c635bfa2-b773-430b-aabf-ecc3e0a8bfb5   10Gi       RWO            default        31m
p-kdss-mwkh4-0   Bound    mapr-pv-49b9ffb5-b14a-4703-95a3-95b4695eb4f7   10Gi       RWO            default        31m

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In the output above, the HPE Container Platform defines HPE Data Fabric (MapR) as the default Kubernetes StorageClass for the Kubernetes clusters it manages. HPE CP uses the MapR Container Storage Interface (CSI) storage plugin to expose the HPE Data Fabric as a storage provider to the Kubernetes containerized workloads (Pods) that request persistent storage.&lt;/p&gt;
&lt;p&gt;The ClusterIP service is the headless service required by a Kubernetes StatefulSet to work. It maintains a stable Pod network identity (that is, persistence of the hostname of the Pods across Pods rescheduling).&lt;/p&gt;
&lt;p&gt;The NodePort services expose the application services outside the Kubernetes cluster. The HPE Container Platform automatically maps the NodePort Service endpoints to the HPE Container Platform gateway host on ports greater than 10000. To get a report on all the services related to a specific virtual cluster, you can use the command &lt;code&gt;kubectl describe service -l kubedirector.hpe.com/kdcluster=kubedirectorclusterName&lt;/code&gt;.&lt;/p&gt;
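&lt;p&gt;For example, to see only the gateway-mapped endpoints of the Spark virtual cluster deployed above, an illustrative one-liner is to filter that report for the &lt;code&gt;hpecp-internal-gateway&lt;/code&gt; annotations:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# List the gateway host:port mappings for every service of the virtual cluster
kubectl describe service -l kubedirector.hpe.com/kdcluster=spark221e2-demouser | grep -i hpecp-internal-gateway
&lt;/code&gt;&lt;/pre&gt;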
&lt;p&gt;Now, you can connect to the Spark dashboard and the Jupyter Notebook from your browser and start your real time data processing.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/picture1-1591289916989.png&quot; alt=&quot;picture1&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/picture2-1591289929686.png&quot; alt=&quot;picture2&quot;&gt;&lt;/p&gt;
&lt;p&gt;To increase or decrease the number of members in a role, you would just have to edit the configuration YAML file for your application and use the &lt;code&gt;kubectl apply -f file.yaml&lt;/code&gt; command to implement the changes. The KubeDirector operator will manage the application cluster expansion or shrinkage for you.&lt;/p&gt;
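&lt;p&gt;As a sketch, scaling the worker role of the Spark virtual cluster from 2 to 3 members could look like this (assuming the &lt;em&gt;spark-config.yaml&lt;/em&gt; shown earlier, where only the worker role has &lt;code&gt;members: 2&lt;/code&gt;):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Bump the worker role member count in the manifest, then re-apply it
sed -i &apos;s/members: 2/members: 3/&apos; spark-config.yaml
kubectl apply -f spark-config.yaml
# Watch KubeDirector grow the worker StatefulSet with an additional Pod
kubectl get pods -l kubedirector.hpe.com/kdcluster=spark221e2-demouser -w
&lt;/code&gt;&lt;/pre&gt;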
&lt;p&gt;Hopefully, this blog has helped you learn how to programmatically interact with the HPE Container Platform to deploy both cloud native stateless, microservices based applications and non-cloud native distributed stateful &lt;a href=&quot;/blog/running-non-cloud-native-apps-on-kubernetes-with-kubedirector&quot;&gt;KubeDirector&lt;/a&gt; applications for various use cases.&lt;/p&gt;
&lt;p&gt;You can stay up to date with the latest news from HPE DEV by &lt;a href=&quot;https://developer.hpe.com/newsletter-signup&quot;&gt;signing up for our monthly newsletter.&lt;/a&gt; In it, you will find more awesome developers and data scientists focused posts about the HPE Container Platform. You can also follow our community on &lt;a href=&quot;https://twitter.com/HPE_Developer&quot;&gt;Twitter&lt;/a&gt; and join the conversation on our &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;HPE DEV Slack Channel.&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[We’re here to help - Newsletter]]></title><link>https://developer.hpe.com/2020-June-02/</link><guid isPermaLink="false">https://developer.hpe.com/2020-June-02/</guid><pubDate>Tue, 02 Jun 2020 05:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Get involved in the open source community! Part 1: Getting started with Git]]></title><description><![CDATA[Git git101-part1-git icon 1788c Wikipedia cites: "Git is a distributed version-control system (VCS) for
tracking changes in source code…]]></description><link>https://developer.hpe.com/get-involved-in-the-open-source-community-part-1-getting-started-with-git/</link><guid isPermaLink="false">https://developer.hpe.com/get-involved-in-the-open-source-community-part-1-getting-started-with-git/</guid><pubDate>Thu, 28 May 2020 18:03:56 GMT</pubDate><content:encoded>&lt;h2&gt;Git&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/git-icon-1788c-1590702885345.png&quot; alt=&quot;git101-part1-git icon 1788c&quot;&gt;&lt;/p&gt;
&lt;p&gt;Wikipedia states: &quot;Git is a distributed version-control system (VCS) for
tracking changes in source code during software development. It is
designed for coordinating work among programmers, but it can be used to
track changes in any set of files. Its goals include speed, data
integrity, and support for distributed, non-linear workflows. Git was
created by Linus Torvalds in 2005 for development of the Linux kernel,
with other kernel developers contributing to its initial development.
Its current maintainer since 2005 is Junio Hamano. As with most other
distributed version-control systems, and unlike most client–server
systems, every Git directory on every computer is a full-fledged
repository with complete history and full version-tracking abilities,
independent of network access or a central server. Git is free and
open-source software distributed under the terms of the GNU General
Public License version 2&quot;.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: The origin of the name Git is unclear and subject to many
interpretations :-).&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The latest version is 2.25, released on January 13, 2020.&lt;/p&gt;
&lt;p&gt;Competitor products include Subversion, Microsoft Team Foundation
Server, Mercurial, CVS, Perforce, Microsoft Visual SourceSafe, Rational
ClearCase. According to a survey from StackOverflow, Git was used by
87.2% of developers in 2018 and is still growing.&lt;/p&gt;
&lt;h2&gt;GitHub&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;/img/github-mark.png&quot; alt=&quot;git101-part1-github mark light 120px plus&quot;&gt;&lt;/p&gt;
&lt;p&gt;Some commercial companies are providing Source Code Management (SCM)
solutions based on Git. One of the most popular is GitHub
(&lt;a href=&quot;http://github.com&quot;&gt;http://github.com&lt;/a&gt;). It was bought by Microsoft in
2018. From Wikipedia, “as of January 2020, GitHub reports having over 40
million users and more than 100 million repositories (including at least
28 million public repositories), making it the largest host of source
code in the world.”&lt;/p&gt;
&lt;p&gt;Competitors to GitHub include: Bitbucket, Microsoft Team Foundation
Server, Gitlab, Phabricator, Assembla, Beanstalk, Helix Core, Gerrit,
SourceForge. We can safely claim that Git is a key component of the open
source community and has contributed to its success.&lt;/p&gt;
&lt;h2&gt;Git Command Line Interface (CLI)&lt;/h2&gt;
&lt;p&gt;Most developers interact with Git using its command line interface (CLI)
in a terminal/command window. But for those of you not comfortable with
CLI, lots of Git graphical user interfaces (GUIs) exist and most of them
are open source. Choosing one is generally a matter of personal
preference. This
&lt;a href=&quot;https://en.wikipedia.org/wiki/Comparison_of_Git_GUIs&quot;&gt;site&lt;/a&gt; might help
you choose one. Even if you choose to use a Git GUI, it’s best to learn
the basics of Git using the CLI. We will be using the CLI in this blog
series.&lt;/p&gt;
&lt;h2&gt;Installing Git on your machine&lt;/h2&gt;
&lt;p&gt;There are multiple ways to install Git on your machine. For Windows
platforms, one option is to install from
&lt;a href=&quot;https://gitforwindows.org/&quot;&gt;https://gitforwindows.org/&lt;/a&gt;. For Mac, the
easiest is to use brew: brew install git. Both will install the command
line interface (CLI) to Git. If you prefer graphical user interfaces,
you have plenty of options, too. A recommended one, for both Windows and
Mac, is GitHub Desktop. That being said, you might not need any of
these as there is a very good integration of Git in most code editors.
For example, Visual Studio Code, which has now become a very popular
open source Integrated Development Environment (IDE), has very good
support for Git.&lt;/p&gt;
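&lt;p&gt;Whichever route you choose, a quick check that Git is installed, followed by setting your identity (recorded in every commit), might look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Confirm Git is available on the PATH
git --version
# Tell Git who you are; these values are recorded in every commit you create
git config --global user.name &quot;Your Name&quot;
git config --global user.email &quot;you@example.com&quot;
&lt;/code&gt;&lt;/pre&gt;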
&lt;h2&gt;Getting involved in the open source community&lt;/h2&gt;
&lt;p&gt;To get you involved in the open source community, we will take you
through four typical use cases involving Git:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Use case 1: I’d like to use something from the community&lt;/li&gt;
&lt;li&gt;Use case 2: I&apos;d like to report an issue on a repository&lt;/li&gt;
&lt;li&gt;Use case 3: I&apos;d like to share something with the community&lt;/li&gt;
&lt;li&gt;Use case 4: I&apos;d like to contribute code to a repository&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To help you get started using Git, I will cover use cases 1 and 2 in
this blog post. In my next post, Part 2, I will cover how to create your
own GitHub account to share something with the community. And in my
final post, Part 3, I will cover how to contribute code to an existing
repository.&lt;/p&gt;
&lt;h2&gt;Use case 1: I’d like to use something from the community&lt;/h2&gt;
&lt;p&gt;This is probably the most frequent use case. While looking for a
solution to a problem, you discover that someone has already provided a
great open source solution with a GitHub repository pointer. How can you
take advantage of this? There are actually two options: you can clone this
repo or you can fork it. You would most likely clone the repository
locally if all you want is to take a look at the content and try it. You
should fork a repository to your own GitHub account in the case where
you would like to modify the content and contribute it back to the
original project. I will show you how to fork in use case 4. In this use
case, we will use git clone.&lt;/p&gt;
&lt;h3&gt;Step 1: Cloning a repo locally&lt;/h3&gt;
&lt;p&gt;Let&apos;s imagine that you have found a great repo at
&lt;a href=&quot;https://github.com/Didier-Lalli/WelcomeGitDidier&quot;&gt;https://github.com/Didier-Lalli/WelcomeGitDidier&lt;/a&gt;
and you&apos;d like to use the Python program shared by its author.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/welcomegit-1590699942819.png&quot; alt=&quot;git101-part1-welcomegit&quot;&gt;&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;Clone or download&lt;/strong&gt; button shows the URL of the repo. Copy it to
the clipboard. Open a terminal session (terminal on Mac, PowerShell on
Windows).&lt;/p&gt;
&lt;p&gt;1/ Clone repo in that folder with &lt;code&gt;git clone&lt;/code&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ git clone &amp;#x3C;PasteClipboardContentHere&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It should look like:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ git clone https://github.com/Didier-Lalli/WelcomeGitDidier.git
Cloning into &apos;WelcomeGitDidier&apos;...
remote: Enumerating objects: 11, done.
remote: Counting objects: 100% (11/11), done.
remote: Compressing objects: 100% (7/7), done.
remote: Total 11 (delta 0), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (11/11), done.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;2/ Change to that folder and list the content&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ cd WelcomeGitDidier
$ ls -l
-rw-r--r--  1 lalli  staff  20 May 25 18:11 README.md
-rw-r--r--  1 lalli  staff  79 May 25 18:11 helloworld.py
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can now see that there are two files and one of them is
&lt;strong&gt;helloworld.py&lt;/strong&gt;. That’s the code we’d like to execute.&lt;/p&gt;
&lt;h3&gt;Step 2: Running shared code&lt;/h3&gt;
&lt;p&gt;You can now use the Python program with: &lt;code&gt;python helloworld.py&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Note: you need to have a Python environment running on your machine. If
this is not the case, check
&lt;a href=&quot;https://www.python.org/&quot;&gt;https://www.python.org/&lt;/a&gt; to get started.&lt;/p&gt;
&lt;p&gt;1/ Let’s take a quick look at the code&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ cat helloworld.py 
# This is part of labs of our Git101 Jupyter Notebook 
print(&quot;Hello world!&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;2/ Execute code&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ python helloworld.py 
Hello world!
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;3/ Check the status of your copy of the repo with &lt;code&gt;git status&lt;/code&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ git status
On branch master
Your branch is up to date with &apos;origin/master&apos;.
nothing to commit, working tree clean
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This terminates use case 1: You should now be able to clone a public
repo and execute code provided by the author. But there will be cases
where you will want to modify some of the code that was provided by the
author. When this happens, you have two options:&lt;/p&gt;
&lt;p&gt;1/ Contact the author and ask him/her to make the change and &lt;code&gt;git clone&lt;/code&gt;
it again&lt;/p&gt;
&lt;p&gt;2/ Make the change yourself in your private copy of the repo and then
tell the author about it&lt;/p&gt;
&lt;p&gt;I’ll tackle the second option in another post, where we’ll look at
contributing code to a project (use case 4). But for now, let’s cover
the first option in our next use case.&lt;/p&gt;
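&lt;p&gt;One small tip related to that first option: once the author has made the change, you don&apos;t necessarily have to delete your folder and clone everything again. From inside your local copy, a simple &lt;code&gt;git pull&lt;/code&gt; will typically be enough to fetch and merge the author&apos;s latest changes:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ cd WelcomeGitDidier
$ git pull
&lt;/code&gt;&lt;/pre&gt;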
&lt;h2&gt;Use case 2: I&apos;d like to report an issue on a repo&lt;/h2&gt;
&lt;p&gt;There are many ways to get involved in the open source community. Most
of them involve writing code, such as when you’re designing new features
or fixing bugs, which we will cover in use case 4 in another post. There
is another way to contribute, which is to open an issue (think of it as
opening a ticket) to either signal a problem you have found in the
code or propose ideas for new features or enhancements. GitHub offers
this capability in each project/repo web page.&lt;/p&gt;
&lt;p&gt;Let&apos;s imagine that, in the &lt;strong&gt;WelcomeGitDidier&lt;/strong&gt; repo from use case 1, we
would like to ask the owner to change the Python code, for example, to
display &quot;Hello World!&quot; instead of &quot;Hello world!&quot;.&lt;/p&gt;
&lt;h3&gt;Opening an issue on WelcomeGitDidier&lt;/h3&gt;
&lt;p&gt;Go back to
&lt;strong&gt;&lt;a href=&quot;https://github.com/Didier-Lalli/WelcomeGitDidier&quot;&gt;https://github.com/Didier-Lalli/WelcomeGitDidier&lt;/a&gt;&lt;/strong&gt;
and locate the &lt;strong&gt;Issues&lt;/strong&gt; tab. Create a new issue, set a title and a
description for it, and click &lt;strong&gt;Submit new issue&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/newissuecreated-1590699914277.png&quot; alt=&quot;git101-part1-newissuecreated&quot;&gt;&lt;/p&gt;
&lt;p&gt;That&apos;s it! The owner(s) of the project will use this issue list to track
feedback from the community and, hopefully, take action.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: Taking a look at the Issues section of a project is always a
good way to get a sense of its level of activity and openness.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This terminates use case 2: You should now be able to open an issue on a
public repo.&lt;/p&gt;
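&lt;p&gt;As a side note, if you prefer working from a terminal, the same kind of issue can also be opened with the GitHub CLI (&lt;code&gt;gh&lt;/code&gt;). This is entirely optional and assumes you have the CLI installed and authenticated; the title and body below are just example text for the scenario above:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ gh issue create --repo Didier-Lalli/WelcomeGitDidier --title &quot;Capitalize World in output&quot; --body &quot;Please display Hello World! instead of Hello world!&quot;
&lt;/code&gt;&lt;/pre&gt;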
&lt;p&gt;If you want to discover more about Git, I recommend the following
resources:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://www.git-scm.com/book/en/v2&quot;&gt;The Pro Git Book&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://guides.github.com/introduction/git-handbook&quot;&gt;The Git
Handbook&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://youtu.be/SWYqp7iY_Tc&quot;&gt;Video Git Crash Course&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Stay tuned to &lt;a href=&quot;https://developer.hpe.com/blog&quot;&gt;HPE DEV&lt;/a&gt; for the next
articles, where I will cover use case 3 and use case 4, and where you will
start contributing to the community.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE Discover’s brand-new, virtual HPE DEV Hack Shack!]]></title><description><![CDATA[hack shack larger The world has changed, shifting everyone’s priorities. Digital transformation is no longer something that’s being planned…]]></description><link>https://developer.hpe.com/hpe-discovers-brand-new-virtual-hpe-dev-hack-shack/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-discovers-brand-new-virtual-hpe-dev-hack-shack/</guid><pubDate>Wed, 27 May 2020 14:17:00 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/hack-shack-larger-1590677030652.png&quot; alt=&quot;hack shack larger&quot;&gt;&lt;/p&gt;
&lt;p&gt;The world has changed, shifting everyone’s priorities. Digital transformation is no longer something that’s being planned. It’s something that is &lt;em&gt;imperative&lt;/em&gt;, &lt;strong&gt;now&lt;/strong&gt;. Hewlett Packard Enterprise (HPE) is here to help. HPE Discover offers you the opportunity to connect with subject matter experts and explore solutions through instructor-led workshops to help you realize how HPE can help move your business forward.&lt;/p&gt;
&lt;p&gt;This year, HPE is bringing &lt;a href=&quot;https://www.hpe.com/us/en/discover.html&quot;&gt;HPE Discover&lt;/a&gt; to you as an on-going virtual experience, offering more opportunities for customers to attend from all over the world – free of charge. We will be kicking off the event on June 23rd and producing it across three different time zones. As part of the virtual experience, you’ll be given the opportunity to view important keynotes, check out what’s new, and hear from key industry analysts. Best of all, you’ll get to experience the all-new, virtual HPE DEV Hack Shack! In the Hack Shack, you’ll interact with experts to learn and try out new solutions for building, designing, and using software. If you’ve ever been to the HPE DEV Hack Shack at HPE Discover, you’ll know how valuable and fun it can be. In addition to the Jupyter Notebooks-based workshops, we’ll be offering competitive coding challenges and fun games to play.&lt;/p&gt;
&lt;h2&gt;What you’ll find in the Hack Shack&lt;/h2&gt;
&lt;p&gt;In the HPE DEV Hack Shack you’ll find experts giving talks on a wide variety of topics, from AI and machine learning to containers and DevOps. Workshops are open to all levels with some aimed specifically at beginners, like API 101 or Introduction to the HPE Container Platform. Here’s a quick list of some of our currently planned sessions and workshops. For more complete information, please visit the HPE Discover &lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.sessions/?l=1043&amp;#x26;locale=en_US&quot;&gt;session catalog&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Hack Shack T490: Demystifying AI Technology Choices&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Speaker: &lt;a href=&quot;mailto:arti.garg@hpe.com&quot;&gt;Arti Garg&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Hack Shack T491: What Data Scientists Can Learn From Software Developers&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Speaker: &lt;a href=&quot;mailto:glyn.bowden@hpe.com&quot;&gt;Glyn Bowden&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Hack Shack T492: Accelerate Innovation With DevOps for Machine Learning&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Speaker: &lt;a href=&quot;mailto:nanda.vijaydev@hpe.com&quot;&gt;Nanda Vijaydev&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Hack Shack W479: Introduction to the HPE Container Platform REST API&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Speaker: &lt;a href=&quot;mailto:denis.Choukroun@hpe.com&quot;&gt;Denis Choukroun&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Hack Shack W480: API 101 - API basics and the value they provide&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Speaker: &lt;a href=&quot;mailto:Didier.Lalli@hpe.com&quot;&gt;Didier Lalli&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Hack Shack W481: Aruba API yourself!&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Speaker: &lt;a href=&quot;mailto:joe.neville@hpe.com&quot;&gt;Joe Neville&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Hack Shack W482: Redfish API use with PowerShell, Python, &amp;#x26; Bash/cURL&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Speaker: &lt;a href=&quot;mailto:francois.donze@hpe.com&quot;&gt;Francois Donze&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Hack Shack W485: AI 101 - Convolutional neural network (CNN) for MNIST&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Speaker: &lt;a href=&quot;mailto:hana.malha@hpe.com&quot;&gt;Hana Malha&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Hack Shack W486: Automate apps with the HPE Container Platform&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Speaker: &lt;a href=&quot;mailto:Chris.Crawford@hpe.com&quot;&gt;Chris Crawford&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Over the following weeks, we’ll add a number of different Hack Shack Challenges that test your hacking prowess. Perhaps you’d like to try deploying your app in a Kubernetes cluster. Or, maybe you’d like to try streamlining queries using Redfish. You can even check out the new Grommet Designer by using it to design your own app!&lt;/p&gt;
&lt;p&gt;As you may know, HPE is hyper focused on delivering everything as-a-service, all in a hybrid cloud environment. We embrace whatever form of cloud our customers adopt, providing products and services that are singularly focused on delivery through hybrid IT. So, where does the Hack Shack fit in? HPE DEV provides IT folks, whether they’re developers, IT admins, data scientists, or machine learning engineers, the resources needed to take advantage of hybrid cloud. We’ll help you accelerate your recovery by showing you easy ways to become more agile in your business. And we’ll help you accelerate your digital transformation through innovative solutions to complex problems, using open source software, and implementing DevOps methodologies and ITOps tools.&lt;/p&gt;
&lt;p&gt;For more details on available courses and times, please visit the HPE Discover site &lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.sessions/?l=1043&amp;#x26;locale=en_US&quot;&gt;session catalog&lt;/a&gt;. Register now! It’s free, informative, and fun!&lt;/p&gt;
&lt;p&gt;Note: Starting June 23rd, please log in to the &lt;a href=&quot;https://content.attend.hpe.com/go/virtualplatform.landing/?l=1043&amp;#x26;local=en_US&quot;&gt;HPE Discover Virtual Experience&lt;/a&gt; and search by Session ID or by area of interest (i.e. Hack Shack) to view a session.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[PharML.Bind COVID-19 Compound Affinity Prediction ]]></title><description><![CDATA[picture1 Hewlett Packard Enterprise (HPE), in collaboration with the Medical University of South Carolina (MUSC), recently announced the…]]></description><link>https://developer.hpe.com/pharmlbind-covid-19-compound-affinity-prediction/</link><guid isPermaLink="false">https://developer.hpe.com/pharmlbind-covid-19-compound-affinity-prediction/</guid><pubDate>Tue, 26 May 2020 14:47:37 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/picture1-1590504636961.png&quot; alt=&quot;picture1&quot;&gt;&lt;/p&gt;
&lt;p&gt;Hewlett Packard Enterprise (HPE), in collaboration with the Medical University of South Carolina (MUSC), recently announced the open-sourcing of PharML.Bind, a powerful plug-and-play framework for drug discovery. In addition to the codebase for training, inference, data pre-processing and visualization, we are releasing an ensemble of six Molecular-Highway Graph Neural Networks (MH-GNNs) that have been pre-trained for hundreds of hours on state-of-the-art HPE Cray Supercomputers. This announcement is expected to have far reaching implications for COVID-19 research and will ultimately affect how research for other diseases is handled. For all you data scientists out there, I thought I would take a moment to talk a little more about this initiative, offer some examples of how the research can be applied, and direct you to where you can get started with your own research.&lt;/p&gt;
&lt;p&gt;This document will present some examples that produce encouraging results relative to SARS-CoV-2 and point users to additional resources. It should be considered an overview of the Quick-Start Guide, which is provided on &lt;a href=&quot;https://github.com/jbalma/pharml/tree/master/docs&quot;&gt;GitHub.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;In the Quick-Start Guide, you’ll learn how to use the ensemble of pre-trained MH-GNNx5 models to make affinity predictions relative to a given structure file (PDB). The guide outlines, in (far) more detail, how users can set up PharML.Bind at home to test various compound sets against relevant COVID-19 structures and even fine-tune the pre-trained models for other interesting mappings. For this post, we’ll just discuss some examples of the framework and introduce some of its capabilities.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: Any compounds or discussion around treatments for COVID-19 should be considered just that – discussions.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Example Experiment&lt;/h2&gt;
&lt;p&gt;PharML.Bind is a framework for predicting compound affinity for full protein structures. It differs from other approaches concerning affinity prediction in two ways:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;It learns the physical rules governing binding interactions from high-resolution data&lt;/strong&gt;. This means that rather than approximating the rules of quantum mechanics (docking) or simulating the kinetics of the interaction through Molecular Dynamics (MD) or Quantum Chemistry (QC) based numerical methods, PharML.Bind can use whatever information is embedded in real-world data to make decisions. And it does so orders of magnitude faster than alternative approaches.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;It is active-site and compound-pose agnostic.&lt;/strong&gt; This means that it uses full-protein structure when making predictions and it doesn’t require users to specify the region of interest along the protein target for a given compound. In addition, it does not need to determine how that compound binds to a given target, just whether it does or not.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Fundamental to the capability and efficiency of PharML.Bind is its use of graph neural networks and their associated graph representations used to test candidate molecules against a given protein’s conformation. The MH-GNNx5 models that were released have been trained to represent the binding behaviour across tens of thousands of proteins paired with hundreds of thousands of compounds currently approved under the FDA. These models can be put to immediate use in generating potential compounds rank-ordered by their potential affinity for COVID-19 molecular structures.&lt;/p&gt;
&lt;p&gt;PharML.Bind utilizes PDB files to define the spatial conformations of a protein of interest (targets), and SDF files (and CSV files) to define sets of compounds (ligands) to be tested against those proteins. You can learn about PDB files and how they are organized &lt;a href=&quot;https://pdb101.rcsb.org/learn/guide-to-understanding-pdb-data/introduction&quot;&gt;here.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;For our purposes, PDB files only need to contain the biomolecular structure information for how the protein is arranged in a 3D space. The hierarchical definition of a protein in PDB format is defined from the bottom up:&lt;/p&gt;
&lt;p&gt;atoms -&gt; residues -&gt; chains -&gt; assemblies&lt;/p&gt;
&lt;p&gt;The most fundamental unit of a protein is the atom. Atoms are defined by their type and location in X,Y,Z space – no other information is needed. The atomic resolution varies depending on the method used to gather the data (X-ray crystallography, NMR, or Cryo-EM), but typically is on the order of a few angstroms (1 angstrom = 0.1 nanometers).&lt;/p&gt;
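&lt;p&gt;As a quick, optional illustration of this layout (separate from the PharML.Bind workflow itself), you can download a structure from the public RCSB archive and peek at its raw ATOM records from a shell. The download URL pattern and the whitespace-based column positions below are assumptions based on common PDB conventions, so treat this as a rough sketch rather than a parser:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Fetch the &quot;spike&quot; glycoprotein structure (6VSB) from the RCSB archive
curl -s -O https://files.rcsb.org/download/6VSB.pdb

# Count the ATOM records, then print atom name and X, Y, Z for the first few atoms
grep -c &apos;^ATOM&apos; 6VSB.pdb
awk &apos;$1 == &quot;ATOM&quot; {print $3, $7, $8, $9}&apos; 6VSB.pdb | head -5
&lt;/code&gt;&lt;/pre&gt;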
&lt;p&gt;For example, using one of these pre-trained models for inference, we can generate affinity predictions for the well-known “spike” glycoprotein (6VSB). For this particular protein, composed of over 20,000 atoms, we can predict rank-ordered affinity across a set of more than 300,000 compounds in under 25 minutes (1314.1s) using only a single Cray CS-Storm Server containing 8 Nvidia V100 GPUs. Traditional docking methods would require weeks to produce similar results.&lt;/p&gt;
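&lt;p&gt;To put that rate in perspective, 300,000 compounds in 1314.1 seconds works out to roughly 230 affinity predictions per second against a 20,000-atom target, on a single 8-GPU node.&lt;/p&gt;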
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/picture2-1590504725019.png&quot; alt=&quot;picture2&quot;&gt;&lt;/p&gt;
&lt;p&gt;With that rate in mind, we laid out an experiment to refine the set of drugs predicted to bind to the primary infection mechanism of the SARS-CoV-2 virus.&lt;/p&gt;
&lt;p&gt;The virus targets human cells in the lungs, heart, kidneys, and intestines. It does so by evolving a structure (made of protein) that fits spatially into the “active site” defined by a distinct set of proteins on the cell surface. Specifically, the shape of some parts of this spike are highly likely to “stick” to the ACE2-receptor, so long as the ACE2-receptor is also bonded to (or in “complex” with) an amino acid transporter. The conformation of this combined structure on the surface defines a high affinity “lock” for the virus to fit into via its novel “key” at the tip of these spikes. The key in this case is referred to as a Receptor Binding Domain (RBD) and serves as the primary mechanism by which the SARS-CoV-2 assembly infects its host cells. For a great overview of how SARS-CoV-2 is suspected to interact with ACE2, check out Dr Scott Klioze’s video &lt;a href=&quot;https://www.youtube.com/watch?v=W1k1sUoLPlA&amp;#x26;feature=youtu.be&quot;&gt;here.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Using what we learned above about how PharML.Bind works, can we use this to predict interactions between the human cell’s receptor site and the RBD of the virus’s spikes?&lt;/p&gt;
&lt;p&gt;Yes, we can. However, because we don’t know much about the active site on the ACE2 receptor that SARS-CoV-2 uses to attach to human cells, we’ll need two structures in total to make reasonable assertions about what might potentially interfere with the mechanism of action for the RBD.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/image-1590529010362.png&quot; alt=&quot;image&quot;&gt;&lt;/p&gt;
&lt;p&gt;Notice that items 1 &amp;#x26; 2 listed above are of the same spike structure but were obtained from two different experiments with varied resolutions. Item 2 shows the same glycoprotein structure but with the RBD structure bound to ACE-2. This data was collected via X-Ray Crystallography and has the highest resolution.&lt;/p&gt;
&lt;p&gt;In order to maximize the likelihood that a subset of the compounds we test against these structures is useful for treating infection by SARS-CoV-2, we need to carefully think through the question we’re asking PharML.Bind to solve:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/picture4-1590504846111.png&quot; alt=&quot;picture4&quot;&gt;&lt;/p&gt;
&lt;p&gt;One of the things that makes the SARS-CoV-2 virus so dangerous is that it seems to exploit a never-before-observed active site of the ACE2 receptor at the surface of the cell. Even with the advent of Cryo-EM technologies in the last few years, producing crystallized proteins appropriate for X-Ray Crystallography or atomic resolution microscopy is an incredibly complex and time-consuming process. This complexity greatly limits the size, type and interactions we can currently observe for many proteins.&lt;/p&gt;
&lt;p&gt;The ACE2 receptor is found on cells which make up the lungs, GI tract, kidneys and blood. It appears that there are two primary RBDs along these spike structures. Because PharML.Bind is active-site agnostic (meaning it doesn’t need information about where the active site is along a structure in order to make predictions), the spikes are a great place to start searching for potential active regions which might interact with existing drugs. We can carefully tune the test cases we run so that we end up with a short list of drugs that potentially bind with high-affinity to the RBD at the tip of the spike. This potentially could limit an infection or prevent the virus from spreading once inside a host.&lt;/p&gt;
&lt;p&gt;Let’s look at how the conformation of the RBD changes when it is in complex with ACE2 versus the open state. I’ve highlighted a chunk of the amino-acid sequence for a particular part of the protein (PLQSYGF) in green for reference.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/picture5-1590504935947.png&quot; alt=&quot;picture5&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/picture6-1590504955042.png&quot; alt=&quot;picture6&quot;&gt;&lt;/p&gt;
&lt;p&gt;Now, we can clearly see that there is definitely a region along the spike’s RBD which strongly interacts with the ACE2 receptor. The change to the RBD’s shape is noticeable between the two structures. The next step is to run the entire set of FDA-approved compounds against these two structures, and attempt to make some inferences about compounds that might strongly interact with the RBD before it binds to ACE2, thereby protecting cells from infection.&lt;/p&gt;
&lt;p&gt;The next steps are a bit more involved and require some preprocessing of the BindingDB FDA-Approved Compounds dataset, running the actual inference phase with Tensorflow in a Conda environment, and finally doing some additional post-processing. For a detailed walk-through, please visit the &lt;a href=&quot;https://github.com/jbalma/pharml/tree/master/docs&quot;&gt;Quick-Start Guide,&lt;/a&gt; provided on GitHub.&lt;/p&gt;
&lt;p&gt;After running the experiment proposed above, we can immediately see that the distribution of compounds PharML.Bind predicts is not random and, in fact, has some interesting features.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/picture5-1590505329500.png&quot; alt=&quot;picture5&quot;&gt;&lt;/p&gt;
&lt;p&gt;The histogram to the right shows compounds from the FDA-approved dataset arranged by their molecular similarity (where similarity comes from their pair-wise Tanimoto score).  If we selected compounds at random from this dataset, we would expect to see a gaussian distribution (orange). However, we see that PharML.Bind has selected non-gaussian sets of compounds as potential binders against the target protein (in this case the main protease, or Mpro). It has selected some particular compounds that are highly similar (&gt;0.7 fingerprint similarity). This implies that PharML is detecting features present only in specific subsets or classes of compounds available in the dataset.&lt;/p&gt;
&lt;p&gt;We can look at the summary CSV produced after many COVID-19 structures were run against this same database. Each of the ensemble models has cast its vote for which compounds it thinks should bind. The post-processing phase aggregates these votes, and then rank-orders them by their probability to bind. The image below shows some of the interesting output for the two variations on the structure we’ve been looking at.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/screenshot-bindingdb-fda-1590529064031.jpg&quot; alt=&quot;screenshot bindingdb fda&quot;&gt;&lt;/p&gt;
&lt;p&gt;When rank-ordering compounds that have CHEMBL IDs, PharML.Bind will also produce a convenient web-based summary of the results. Let’s take a look at what we see for &lt;strong&gt;CHEMBL20:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/screenshot-bindingdb-compound-1590529046012.jpg&quot; alt=&quot;screenshot bindingdb compound&quot;&gt;&lt;/p&gt;
&lt;p&gt;Here, we can see that this is an FDA-approved drug, which goes by various trade names. It is predicted by PharML.Bind to interact with spike glycoprotein of SARS-CoV-2. Because it appears in the “open” conformation of the spike, but not in the ACE2-Bound conformation, we have some confidence that the compound will actually bind somewhere that potentially interferes with the spike’s interaction with its target receptor. Obviously, there are plenty of other structures we can now investigate just as quickly and across much larger datasets. To dig deeper into the results generated for several other structures comprising SARS-CoV-2 against the BindingDB FDA-approved dataset, visit the subdirectory on GitHub &lt;a href=&quot;https://github.com/jbalma/pharml/tree/master/examples&quot;&gt;here.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;In general, we observe PharML.Bind making overlapping predictions of compounds either in clinical trials for treatment of COVID-19 or which align with docking and MD simulations, suggesting further study is warranted for many more compounds than currently in clinical trials. For more information on how these compounds were generated, and how you can use PharML.Bind in your own research, please consult the Quick-Start Guide and GitHub examples page linked above.&lt;/p&gt;
&lt;p&gt;In this article, we showed how you can get started with plug-and-play drug discovery with the newly released Open-Therapeutics framework PharML.Bind. We reviewed how to use a pre-trained model on a new target structure (the spike protrusion of COVID-19) against a large database of FDA-approved drugs. By using an ensemble of models, we were able to assign confidence scores to each drug predicted to bind and use the results to zoom in on a few interesting FDA-approved drugs that might be viable for repurposing in the fight against COVID-19.&lt;/p&gt;
&lt;p&gt;Hopefully, sharing this information with you will help spark ideas on how you might apply this sort of machine learning to your activities. We will be building out more material on the HPE DEV site specific to AI and machine learning. Make sure you check back at &lt;a href=&quot;/blog&quot;&gt;HPE DEV&lt;/a&gt; for more interesting blogs as we move forward.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE Container Platform REST API – Part 1: Authenticating  ]]></title><description><![CDATA[image001 Editor’s Note – HPE Ezmeral Container Platform is now HPE Ezmeral Runtime Enterprise. For more information on why the name was…]]></description><link>https://developer.hpe.com/hpe-container-platform-rest-api-part-1-authenticating/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-container-platform-rest-api-part-1-authenticating/</guid><pubDate>Tue, 26 May 2020 03:08:40 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/image001-1590504102737.png&quot; alt=&quot;image001&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Editor’s Note – HPE Ezmeral Container Platform is now HPE Ezmeral Runtime Enterprise&lt;/strong&gt;. For more information on why the name was changed, please &lt;a href=&quot;https://community.hpe.com/t5/HPE-Ezmeral-Uncut/HPE-Ezmeral-Container-Platform-is-now-HPE-Ezmeral-Runtime/ba-p/7151720#.YW7nOxrMKM8&quot;&gt;click here&lt;/a&gt;.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Businesses are challenged today with being able to run their existing monolithic applications alongside their cloud-native apps in hybrid cloud environments. The &lt;a href=&quot;https://developer.hpe.com/platform/hpe-container-platform/home&quot;&gt;HPE Container Platform&lt;/a&gt; (HPE CP) uses container technology to make it simpler and more cost-effective to deploy, run and manage both cloud native microservices enterprise workloads and non-cloud native monolithic applications with containers. Businesses can take advantage of the HPE Container Platform for a variety of use cases, including machine learning (ML), data analytics, and DevOps workloads. Its abilities make the HPE Container Platform ideal for helping IT accelerate their application development and deployment on containers on-demand through a self-service portal.&lt;/p&gt;
&lt;p&gt;In this two-part blog series, I am going to discuss how the HPE Container Platform exposes a RESTful API that provides programmable access to capabilities via a self-service portal. I will then share some recent learnings and experience doing this.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note: This series does not cover how to perform IT administrative tasks through the HPE CP REST API.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;These blog posts are targeted at developers who want to get started with the REST API so they can interact with the HPE Container Platform programmatically. This series is also designed for solution architects who simply want to understand the product from a developer and data scientist’s perspective so they can discuss its capabilities with their customers’ developers and data analysts.&lt;/p&gt;
&lt;p&gt;In the first part of the series, you will interact with the HPE CP REST API using a handy graphical tool called &lt;a href=&quot;https://www.postman.com/&quot;&gt;Postman.&lt;/a&gt; You will also have the opportunity to use cURL &lt;a href=&quot;https://curl.haxx.se/&quot;&gt;(Command-line URL),&lt;/a&gt; the universal and well-appreciated command-line utility from the Linux community. In the second part, I will go a step further to explain how you can use this REST API to deploy containerized applications using a programmatic approach.&lt;/p&gt;
&lt;p&gt;If you are not already familiar with REST API calls and Postman, I encourage you to check out the &lt;a href=&quot;/blog/understanding-api-basics-and-the-value-they-provide&quot;&gt;Understanding API basics and the value they provide&lt;/a&gt; article. It explains REST API concepts such as HTTP verbs you call against a REST API service, the headers and payloads, and how to use Postman to make REST API calls.&lt;/p&gt;
&lt;h2&gt;The HPE Container Platform High-Level Architecture&lt;/h2&gt;
&lt;p&gt;The diagram below depicts a simplified view of the physical architecture within the HPE Container Platform being deployed for this tutorial. It illustrates how you can interact programmatically with the HPE Container Platform.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/picture2-1590462775163.png&quot; alt=&quot;picture2&quot;&gt;&lt;/p&gt;
&lt;p&gt;The HPE Container Platform deployment includes the following key components:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;Controller host&lt;/strong&gt; manages all the hosts that comprise the HPE Container Platform deployment.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Kubernetes (K8s) hosts&lt;/strong&gt; are under the direct control of the Controller host. These hosts can be grouped into one or more distinct Kubernetes clusters that run containerized applications.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Gateway host&lt;/strong&gt; acts as a proxy server that carries client requests, i.e. HPE CP UI, REST API, K8s API (kubectl commands) to the HPE Container Platform controller, to one of the Kubernetes clusters, or to one of the containerized application services running in one of the Kubernetes clusters. Containerized application service endpoints are exposed outside the Kubernetes cluster to users via the gateway re-mapped ports.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Authentication Proxy&lt;/strong&gt; handles user authentication and forwards authenticated K8s API traffic (kubectl commands) to the Kubernetes cluster master and returns any responses to the request back to the user.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;HPE Data Fabric&lt;/strong&gt; (a MapR File System) is a storage provider for persistent volumes for the containerized applications that require persistence of data. The default StorageClass is available out of the box from the HPE Data Fabric (MapR) using the HPE Container Storage Interface (CSI) driver for MapR.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;The HPE Container Platform REST API Reference&lt;/h2&gt;
&lt;p&gt;The HPE CP REST API allows you to execute multiple actions programmatically, from performing administrative tasks like creating Kubernetes clusters to deploying applications for various use cases in a shared multi-tenant environment.&lt;/p&gt;
&lt;p&gt;Before you can call the HPE CP REST API, you need to know what calls can be placed. The REST API reference documentation describes each object type, along with the supported operations, request input parameters, response model, and response codes.&lt;/p&gt;
&lt;p&gt;To access the HPE CP REST API reference documentation, obtain the IP address or hostname of the HPE Container Platform &lt;strong&gt;Gateway&lt;/strong&gt; host or &lt;strong&gt;Controller&lt;/strong&gt; host from the platform administrator. Then, in a web browser, navigate to the following URL: &lt;strong&gt;http(s)://Gateway-or-Controller-IP-address-or-fqdn/apidocs&lt;/strong&gt;. The access protocol for the HPE Container Platform REST API reference documentation varies depending on whether your platform has been configured to use HTTPS secure protocol (recommended) or non-secure HTTP protocol.&lt;/p&gt;
&lt;h2&gt;Session Authentication&lt;/h2&gt;
&lt;p&gt;With the exception of a few calls, most of the calls you make against the HPE CP REST API must be authenticated with a token that is retrieved by sending a username, password, and tenant name to the HPE CP API server. The HPE Container Platform uses a &lt;em&gt;session location&lt;/em&gt; to identify the &lt;strong&gt;working tenant context&lt;/strong&gt; of a given REST API operation. The session location is then stored in the HTTP header of subsequent requests to perform Create, Read, Update, and Delete (CRUD) operations using HTTP verbs, such as POST, GET, PUT, PATCH, and DELETE.&lt;/p&gt;
&lt;p&gt;A tenant is a group of users created by the platform administrator. A tenant is allocated a quota of resources such as CPU, GPU, memory, storage, and Kubernetes clusters resources. A tenant can represent, for example, an office location, a business unit, a department, or a project. Tenant users can then deploy applications within the context of their tenant. Once a resource is actively in use by one tenant, it will not be shared with other tenants. The platform administrator assigns users roles (tenant Member or Tenant Admin) and tenant membership through either LDAP/AD authentication groups or local directory user accounts.&lt;/p&gt;
&lt;p&gt;A working context establishes the user identity, its tenant name, and role (member or admin). Based on this context, tenant users are granted privileges and permissions to create and manage resources for their tenant on Kubernetes clusters managed by HPE CP.&lt;/p&gt;
&lt;p&gt;You request an authentication &lt;em&gt;session location&lt;/em&gt; by issuing an authentication request for a new login session, providing your username/password credentials and your tenant name in the JSON body.&lt;/p&gt;
&lt;p&gt;This is confirmed by looking at the HPE CP REST API reference documentation for session object: &lt;strong&gt;POST /api/v2/session&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/picture3-1590462845389.png&quot; alt=&quot;picture3&quot;&gt;&lt;/p&gt;
&lt;p&gt;You then get the session location (in the form &lt;em&gt;/api/v2/session/sessionId&lt;/em&gt;) from the JSON response header with a status &lt;em&gt;201 (created)&lt;/em&gt;, which means that the session object has been created successfully as the result of the HTTP POST request. This is also confirmed by looking at the REST API reference guide, as you can see below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/picture4-1590462859828.png&quot; alt=&quot;picture4&quot;&gt;&lt;/p&gt;
&lt;p&gt;You then extract and save the session location value from the JSON response header. For each subsequent call, you set a new HTTP header with its key set to &lt;strong&gt;X-BDS-SESSION&lt;/strong&gt; and its value set to the session location value. This will set the &lt;strong&gt;working tenant context&lt;/strong&gt; for your request.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Now, let’s put it into action with Postman:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The REST API calls shown below explicitly specify JSON as the exchange format between the API client (Postman) and the API server (HPE Container Platform).&lt;/p&gt;
&lt;p&gt;Communication with the REST API uses HTTP or HTTPS on port 8080. The communication protocol varies depending on whether your platform has been configured to use HTTPS secure protocol (recommended) or non-secure HTTP protocol.&lt;/p&gt;
&lt;p&gt;All the API calls are in the form:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;An HTTP verb such as GET, POST, DELETE, PATCH, UPDATE, PUT&lt;/li&gt;
&lt;li&gt;The target REST API object URL:  &lt;strong&gt;http(s)://Gateway-IP-or-fqdn:8080/api/v2/object&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/picture5-1590462875550.png&quot; alt=&quot;picture5&quot;&gt;&lt;/p&gt;
&lt;p&gt;In the example here, I am acting as a tenant user and I request an authentication session location through a &lt;strong&gt;POST /api/v2/session&lt;/strong&gt; API call. The user credentials and tenant name are provided in the request body using Postman Environment variables:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/picture6-1590464352997.png&quot; alt=&quot;picture6&quot;&gt;&lt;/p&gt;
&lt;p&gt;When you click the &lt;strong&gt;Send&lt;/strong&gt; button, you get the response, &lt;em&gt;201 created&lt;/em&gt;, in the response body, which means the session resource was successfully created.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/picture7-1590464474273.png&quot; alt=&quot;picture7&quot;&gt;&lt;/p&gt;
&lt;p&gt;The response header provides the resource path for the created session object in the form &lt;code&gt;/api/v2/session/SessionId&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/picture8-1590464521826.png&quot; alt=&quot;picture8&quot;&gt;&lt;/p&gt;
&lt;p&gt;All subsequent REST API calls will need to specify the &lt;em&gt;session location&lt;/em&gt; resource path in their headers to use as &lt;strong&gt;working tenant context&lt;/strong&gt;. The REST API call must include an &lt;strong&gt;X-BDS-SESSION&lt;/strong&gt; header with the value of the session object’s resource path &lt;code&gt;/api/v2/session/SessionId&lt;/code&gt; previously created. In the example below, the GET REST API call request will fetch information about the session you have just established with the HPE Container Platform as a tenant user:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/picture9-1590464546066.png&quot; alt=&quot;picture9&quot;&gt;&lt;/p&gt;
&lt;p&gt;As a tenant user, the response body lists information about your session, such as your username, tenant name, and expiration time. Notice that the session location token will remain valid for 1440 minutes (or 24 hours), after which time you will have to establish a new login session.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/picture10-1590464591318.png&quot; alt=&quot;picture10&quot;&gt;&lt;/p&gt;
&lt;h2&gt;From Postman to code&lt;/h2&gt;
&lt;p&gt;You may ask yourself, how do these calls translate into code you can use in your own application? Postman can help with that thanks to its embedded code-generators. Imagine you have been experimenting with a POST session request in Postman and you would like to run the same POST call from &lt;em&gt;cURL&lt;/em&gt; in a script or from other code languages. As shown in the picture below, Postman lets you generate snippets of code in various languages and frameworks that will help you do this. You can use the &lt;strong&gt;Code&lt;/strong&gt; link under the blue &lt;strong&gt;Send&lt;/strong&gt; button to open the &lt;em&gt;GENERATE CODE SNIPPETS&lt;/em&gt; panel and select your preferred language or framework. In the rest of this blog, cURL is used as the preferred scripting language.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/picture11-1590464615471.png&quot; alt=&quot;picture11&quot;&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;curl -k -i -s --location --request POST &apos;http(s)://&amp;#x3C;Gateway-IP-Address-or-fqdn&gt;:8080/api/v2/session&apos; \
--header &apos;Accept: application/json&apos; \
--header &apos;Content-Type: application/json&apos; \
--data-raw &apos;{
	&quot;name&quot;: &quot;&amp;#x3C;YourUsername&gt;&quot;,
	&quot;password&quot;: &quot;&amp;#x3C;YourPassword&gt;&quot;,
	&quot;tenant_name&quot;: &quot;&amp;#x3C;YourTenantName&gt;&quot;
}&apos;

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;An example of the response header received is shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/picture12-1590464628659.png&quot; alt=&quot;picture12&quot;&gt;&lt;/p&gt;
&lt;p&gt;Extract the session location path value &lt;code&gt;/api/v2/session/&amp;#x3C;sessionId&gt;&lt;/code&gt; from the header response and use it in any subsequent REST API calls. As shown by the example below, you can use a &lt;em&gt;GET&lt;/em&gt; REST API call for object &lt;strong&gt;/api/v2/session&lt;/strong&gt; to fetch information about the session you have just established with the HPE Container Platform as a tenant user. You can combine the &lt;em&gt;cURL&lt;/em&gt; command with &lt;strong&gt;jq&lt;/strong&gt;, the command-line JSON processor, to parse the JSON body responses and obtain a structured view of the output, as shown here:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;curl -k -s --location --request GET &apos;http(s)://&amp;#x3C;Gateway-IP-Address-or-fqdn&gt;:8080/api/v2/session&apos; \
--header &apos;X-BDS-SESSION: /api/v2/session/&amp;#x3C;SessionId&gt;&apos; \
--header &apos;Accept: application/json&apos; \
--header &apos;Content-Type: application/json&apos; | jq 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/picture13-1590464642369.png&quot; alt=&quot;picture13&quot;&gt;&lt;/p&gt;
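&lt;p&gt;If you are scripting these steps end to end, you can also capture the session location directly into a shell variable instead of copying it from the response header by hand. Here is a minimal bash sketch, assuming the session path is returned in the standard &lt;em&gt;Location&lt;/em&gt; response header and that your platform uses HTTPS:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Log in and keep only the Location header value (the session location path)
SESSION=$(curl -k -i -s --request POST &apos;https://&amp;#x3C;Gateway-IP-Address-or-fqdn&gt;:8080/api/v2/session&apos; \
--header &apos;Content-Type: application/json&apos; \
--data-raw &apos;{&quot;name&quot;: &quot;&amp;#x3C;YourUsername&gt;&quot;, &quot;password&quot;: &quot;&amp;#x3C;YourPassword&gt;&quot;, &quot;tenant_name&quot;: &quot;&amp;#x3C;YourTenantName&gt;&quot;}&apos; \
| awk -F&apos;: &apos; &apos;tolower($1) == &quot;location&quot; {print $2}&apos; | tr -d &apos;\r&apos;)

# Reuse the captured session location as the working tenant context
curl -k -s --request GET &apos;https://&amp;#x3C;Gateway-IP-Address-or-fqdn&gt;:8080/api/v2/session&apos; \
--header &quot;X-BDS-SESSION: ${SESSION}&quot; \
--header &apos;Accept: application/json&apos; | jq
&lt;/code&gt;&lt;/pre&gt;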
&lt;p&gt;Although sessions have a time to live (TTL) of 24 hours, it is a best practice in REST API programming to clean up and delete those sessions when done with your REST API calls. You can use a &lt;em&gt;DELETE&lt;/em&gt; call to the target object &lt;strong&gt;/api/v2/session/SessionId&lt;/strong&gt; to achieve this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;curl -k -i -s --request DELETE &apos;http(s)://&amp;#x3C;Gateway-IP-Address-or-fqdn&gt;:8080/api/v2/session/&amp;#x3C;SessionId&gt;&apos; \
--header &apos;X-BDS-SESSION: /api/v2/session/&amp;#x3C;SessionId&gt;&apos; \
--header &apos;Accept: application/json&apos; \
--header &apos;Content-Type: application/json&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You have now learned the basics of programmatic access to the HPE Container Platform through its REST API. In &lt;a href=&quot;/blog/hpe-container-platform-rest-api-part-2-deploying-containerized-applicati&quot;&gt;the next article&lt;/a&gt; in this series, I will discuss how you can programmatically deploy cloud-native, stateless, microservices-based applications and non-cloud-native, distributed, stateful applications within the context of Kubernetes clusters managed by the HPE Container Platform.
You can stay up to date with the latest news from HPE DEV by &lt;a href=&quot;https://developer.hpe.com/newsletter-signup&quot;&gt;signing up for our monthly newsletter.&lt;/a&gt; In it, you will receive more awesome developer and data scientist focused posts about the HPE Container Platform. You can also follow our community on &lt;a href=&quot;https://twitter.com/HPE_Developer&quot;&gt;Twitter&lt;/a&gt; and join the conversation on our &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;HPE DEV Slack Channel.&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[The neuroscience behind a design system]]></title><description><![CDATA[taylor cropped (3) After joining the Hewlett Packard Enterprise (HPE) Experience Studio as a UI Developer this past July, my first main…]]></description><link>https://developer.hpe.com/the-neuroscience-behind-a-design-system/</link><guid isPermaLink="false">https://developer.hpe.com/the-neuroscience-behind-a-design-system/</guid><pubDate>Wed, 13 May 2020 14:53:52 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/taylor-cropped-3-1589386197151.png&quot; alt=&quot;taylor cropped (3)&quot;&gt;&lt;/p&gt;
&lt;p&gt;After joining the Hewlett Packard Enterprise (HPE) Experience Studio as a UI Developer this past July, my first main project was to work on the HPE Design System. The HPE Design System is a design, development, and research effort aimed at creating a consistent user experience across the breadth of software offered by HPE. The HPE Design System will put in place a prescribed model for all HPE software developers to adhere to, optimizing the way HPE teams create user experiences for their customers.&lt;/p&gt;
&lt;p&gt;I graduated last May with a B.S. in Computational Neuroscience, so, out of interest – or maybe habit – I’m constantly thinking about the way the brain interprets and perceives our experiences. Since I started work on the HPE Design System, I’ve begun to think more broadly about &lt;em&gt;why&lt;/em&gt; we invest time in building one. Because, there’s a lot more to a design system than the slick, tightly-wrapped feeling that accompanies the term. A design system is more than just a library of components and design standards; it’s an entire visual and interactive language that engages with a user’s emotion, attention, and decision-making abilities. The way I see it, the benefits of creating a design system are grounded in three important aspects of cognition: emotion, attention, and decision making.&lt;/p&gt;
&lt;h2&gt;Emotion and memory&lt;/h2&gt;
&lt;p&gt;Our memories are incredibly emotionally driven. The saying, “Someone might not remember what you say to them, but they’ll remember how you made them feel” holds more weight than a simple push for thoughtful interactions. In a similar way, with regard to applications, a user may not always remember the words on the page or the exact images used, but they will likely remember the feeling of that experience. Like a go-to friend whose demeanor I know I can rely on in times of stress or joy, a design system allows various applications and products to form a united experience that establishes a relationship of trust with the user regardless of whether this is their first time using an application or their hundredth.&lt;/p&gt;
&lt;h2&gt;Attention is limited&lt;/h2&gt;
&lt;p&gt;Humans are incredibly smart, but our attentional capacity is, in fact, quite limited. Working memory, which refers to the information that we are currently attending to or “working with”, has a capacity defined roughly at 7 ± 2 items and a duration of about 20 seconds. However, our attention can be aided by consistency and the ability to group information.&lt;/p&gt;
&lt;p&gt;A well thought out design system helps a user focus their attention and break information into digestible pieces. By developing consistency in how information is presented and how a user is able to engage with that information, a design system liberates the user in a way that allows them to focus on what they’re trying to do as opposed to how to do it.&lt;/p&gt;
&lt;h2&gt;Decision-making&lt;/h2&gt;
&lt;p&gt;Ultimately, a user engages with an application to complete a task. Maybe that task is to check the status of a server or to manage user permissions, or maybe it’s just to explore a new topic. Regardless of the user’s goal, a design system enables and empowers the user with regard to this activity. By homing in on the ways emotion and attention drive our perception, a design system allows a user to make quicker, more accurate decisions about how to achieve what they set out to do.&lt;/p&gt;
&lt;h2&gt;Helping teams help customers&lt;/h2&gt;
&lt;p&gt;Beyond the end user’s experience, a design system helps expedite the design and development cycle for teams using it. In a sense, a design system is like bumpers on a bowling lane that gently redirect an application towards the ideal end deliverable. It provides templates and patterns to point them in the right direction, as well as resources and a community to engage with, amongst other things. A design system should be seen as being just as valuable to the internal teams designing/developing it as it is to the end user. It should be an aid to internal development teams in creating the best possible experience for the end user.&lt;/p&gt;
&lt;p&gt;By establishing a consistent visual and interaction language, the HPE Design System enables experiences to be crafted with uncompromising integrity. These benefits are not found just at the surface level but grounded in the ways our brains perceive the world.&lt;/p&gt;
&lt;p&gt;In conclusion, I’m really excited about the work I am doing on the HPE Design System and the improved experiences it will create for HPE customers as well as the internal teams using it. A design system leverages the knowledge we have about emotion, attention, and decision making to optimize the experience users have with an application. Check out the work that has started on the &lt;a href=&quot;https://design-system.hpe.design/&quot;&gt;design system.&lt;/a&gt; For other updates from the HPE DEV team, be sure to keep visiting &lt;a href=&quot;/blog&quot;&gt;HPE DEV blog.&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Introducing the HPE Nimble Storage SDK for Python]]></title><description><![CDATA[Python is by far one of the most approachable programming languages available today. Easy to read semantics, a humongous library for…]]></description><link>https://developer.hpe.com/introducing-the-hpe-nimble-storage-sdk-for-python/</link><guid isPermaLink="false">https://developer.hpe.com/introducing-the-hpe-nimble-storage-sdk-for-python/</guid><pubDate>Fri, 08 May 2020 22:17:59 GMT</pubDate><content:encoded>&lt;p&gt;Python is by far one of the most approachable programming languages available today. Easy to read semantics, a humongous library for integrating with anything under the sun and best of all, it’s all open source, available on every popular operating system platform. Today, Hewlett Packard Enterprise (HPE) releases HPE Nimble Storage SDK for Python 1.0.0. It allows customers to extract even more value from their HPE Nimble Storage array by abstracting functionality into larger frameworks, whether it’s for custom automation, resource reporting, conformance adherence or any data the business might need to extract from the storage system in a programmatic fashion.&lt;/p&gt;
&lt;p&gt;The software development kit (SDK) includes all public application programming interfaces (APIs) available via the Nimble OS REST API (REST stands for REpresentational State Transfer). It is now trivial to write both simple and intricate applications interfacing with Nimble OS. Historically, our Python users have used the popular Requests library to manipulate API resources on the array and we’ve now abstracted away that interaction to ultimately make it simpler and more structured to interface with Nimble OS.&lt;/p&gt;
&lt;p&gt;In this blog post, we’ll get you started with the SDK and show you how to write a simple asset reporting program whose output you can copy and paste into a spreadsheet. In a more realistic scenario, this data would’ve been inserted into an inventory and asset tracking system.&lt;/p&gt;
&lt;h1&gt;Getting started&lt;/h1&gt;
&lt;p&gt;Like all Python libraries, the HPE Nimble Storage SDK for Python is hosted on PyPi, the Python Package Index. Most Python distributions come with the command line (CLI) tool, &lt;code&gt;pip&lt;/code&gt;, to manage installations from PyPi. This tutorial assumes &lt;code&gt;pip&lt;/code&gt; is already installed.&lt;/p&gt;
&lt;p&gt;From the CLI, install the SDK.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;pip install nimble-sdk 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Python 3.6 or newer is required. If multiple Python distributions are installed on your system, make sure Python 3 is installed or suffix Python commands with a ‘3’, like &lt;code&gt;pip3&lt;/code&gt; and &lt;code&gt;python3&lt;/code&gt;.&lt;/p&gt;
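&lt;p&gt;If you prefer to keep the SDK and its dependencies isolated from your system Python, you can optionally install it into a virtual environment first. A minimal sketch using Python’s built-in &lt;code&gt;venv&lt;/code&gt; module (the environment name below is just an example):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Create and activate an isolated environment, then install the SDK into it
python3 -m venv nimble-env
source nimble-env/bin/activate
pip install nimble-sdk
&lt;/code&gt;&lt;/pre&gt;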
&lt;p&gt;Let’s run an interactive example against a Nimble array and pull out a list of drives in the array.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;python3 
Python 3.7.7 (default, Mar 10 2020, 15:43:03) 
[Clang 11.0.0 (clang-1100.0.33.17)] on darwin 
Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. 
&gt;&gt;&gt; from nimbleclient import NimOSClient 
&gt;&gt;&gt; import pprint
&gt;&gt;&gt; client = NimOSClient(&quot;192.168.1.1&quot;, &quot;admin&quot;, &quot;admin&quot;)
&gt;&gt;&gt; pprint.pprint(client.disks.list())
[&amp;#x3C;Disk(id=2c49686580b78e0b160001000000000a0000000100)&gt;,
 &amp;#x3C;Disk(id=2c49686580b78e0b160001000000000a0000000200)&gt;,
 &amp;#x3C;Disk(id=2c49686580b78e0b160001000000000a0000000300)&gt;,
 &amp;#x3C;Disk(id=2c49686580b78e0b160001000000000a0000000400)&gt;,
 &amp;#x3C;Disk(id=2c49686580b78e0b160001000000000a0000000500)&gt;,
 &amp;#x3C;Disk(id=2c49686580b78e0b160001000000000a0000000600)&gt;,
 &amp;#x3C;Disk(id=2c49686580b78e0b160001000000000a0000000700)&gt;,
 &amp;#x3C;Disk(id=2c49686580b78e0b160001000000000a0000000800)&gt;,
 &amp;#x3C;Disk(id=2c49686580b78e0b160001000000000a0000000900)&gt;,
 &amp;#x3C;Disk(id=2c49686580b78e0b160001000000000a0000000a00)&gt;,
 &amp;#x3C;Disk(id=2c49686580b78e0b160001000000000a0000000b00)&gt;,
 &amp;#x3C;Disk(id=2c49686580b78e0b160001000000000a0000000c00)&gt;,
 &amp;#x3C;Disk(id=2c49686580b78e0b160001000000000a0000000d00)&gt;,
 &amp;#x3C;Disk(id=2c49686580b78e0b160001000000000a0000000e00)&gt;,
 &amp;#x3C;Disk(id=2c49686580b78e0b160001000000000a0000000f00)&gt;,
 &amp;#x3C;Disk(id=2c49686580b78e0b160001000000000a0000001000)&gt;]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Let’s pull out more info about a particular drive, in the same session.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&gt;&gt;&gt; pprint.pprint(client.disks.get(&quot;2c49686580b78e0b160001000000000a0000000c00&quot;).attrs)
{&apos;array_id&apos;: &apos;0949686580b78e0b16000000000000000000000001&apos;,
 &apos;array_name&apos;: &apos;nva-test&apos;,
 &apos;bank&apos;: 0,
 &apos;block_type&apos;: &apos;block_unknown&apos;,
 &apos;disk_internal_stat_1&apos;: &apos;00e29a628e&apos;,
 &apos;firmware_version&apos;: &apos;1.0&apos;,
 &apos;hba&apos;: 2,
 &apos;id&apos;: &apos;2c49686580b78e0b160001000000000a0000000c00&apos;,
 &apos;is_dfc&apos;: False,
 &apos;model&apos;: &apos;VMware Virtual S&apos;,
 &apos;path&apos;: &apos;/dev/sdm&apos;,
 &apos;port&apos;: 8,
 &apos;raid_id&apos;: 10,
 &apos;raid_resync_average_speed&apos;: 0,
 &apos;raid_resync_current_speed&apos;: 0,
 &apos;raid_resync_percent&apos;: 100,
 &apos;raid_state&apos;: &apos;okay&apos;,
 &apos;serial&apos;: &apos;/dev/sdm&apos;,
 &apos;shelf_id&apos;: &apos;2d49686580b78e0b16000000010000637300000013&apos;,
 &apos;shelf_location&apos;: &apos;A.0&apos;,
 &apos;shelf_location_id&apos;: 0,
 &apos;shelf_serial&apos;: &apos;cs-19d266&apos;,
 &apos;size&apos;: 39728447488,
 &apos;slot&apos;: 12,
 &apos;smart_attribute_list&apos;: [],
 &apos;state&apos;: &apos;in use&apos;,
 &apos;type&apos;: &apos;hdd&apos;,
 &apos;vendor&apos;: &apos;Nimble&apos;,
 &apos;vshelf_id&apos;: 0}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To get an idea of what APIs are available, hitting &lt;code&gt;&amp;#x3C;TAB&gt;&lt;/code&gt; in an interactive session, just before entering which resource to manipulate, will list the available APIs.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&gt;&gt;&gt; client.&amp;#x3C;TAB&gt; 
client.access_control_records           client.folders                          client.snapshot_collections 
client.active_directory_memberships     client.groups                           client.snapshots 
client.alarms                           client.initiator_groups                 client.software_versions 
client.application_categories           client.initiators                       client.space_domains 
client.application_servers              client.jobs                             client.subnets 
client.arrays                           client.key_managers                     client.support 
client.audit_log                        client.master_key                       client.tokens 
client.chap_users                       client.network_configs                  client.user_groups 
client.controllers                      client.network_interfaces               client.user_policies 
client.disks                            client.performance_policies             client.users 
client.events                           client.pools                            client.versions 
client.fibre_channel_configs            client.protection_schedules             client.volume_collections 
client.fibre_channel_initiator_aliases  client.protection_templates             client.volumes 
client.fibre_channel_interfaces         client.protocol_endpoints               client.witnesses 
client.fibre_channel_ports              client.replication_partners 
client.fibre_channel_sessions           client.shelves 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;How to interact with each of these resources is available in the &lt;a href=&quot;https://hpe-storage.github.io/nimble-python-sdk&quot;&gt;documentation&lt;/a&gt; hosted on GitHub.&lt;/p&gt;
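&lt;p&gt;Each of these collections follows the same &lt;code&gt;list()&lt;/code&gt;, &lt;code&gt;get()&lt;/code&gt; and &lt;code&gt;attrs&lt;/code&gt; pattern shown above for disks. As a rough sketch (the &lt;code&gt;name&lt;/code&gt; and &lt;code&gt;size&lt;/code&gt; attribute keys used below are assumptions; check the documentation for the exact fields returned by your NimOS version), iterating over volumes could look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Sketch only: assumes each volume exposes &apos;id&apos;, &apos;name&apos; and &apos;size&apos; attributes.
from nimbleclient import NimOSClient

client = NimOSClient(&quot;192.168.1.1&quot;, &quot;admin&quot;, &quot;admin&quot;)

for vol in client.volumes.list():
    # Fetch the full attribute set for each volume, just like client.disks.get() above
    details = client.volumes.get(vol.attrs.get(&apos;id&apos;)).attrs
    print(details.get(&apos;name&apos;), details.get(&apos;size&apos;))
&lt;/code&gt;&lt;/pre&gt;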
&lt;h1&gt;Hello World&lt;/h1&gt;
&lt;p&gt;Running Python interactively is a practical sidekick while developing a new script, helping you find attributes and other syntax minutiae. Let’s write a real “Hello World” Python program that extracts array metadata that can be used to track inventory.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;#!/usr/bin/env python3

# Import the required modules
import argparse
from nimbleclient import NimOSClient

# The NimbleInventory class
class NimbleInventory:
    def __init__(self, username, password, arrays):
        self.inventory = {}
        self.username = username
        self.password = password
        self.arrays = arrays
        self.fetch()

    # Loop over requested arrays and gather the data
    def fetch(self):
        for array in self.arrays:
            try:
                api = NimOSClient(array, self.username, self.password)
                entry = api.arrays.get()
                self.inventory[entry.attrs.get(&apos;name&apos;)] = [entry.attrs.get(&apos;version&apos;), 
                                           entry.attrs.get(&apos;extended_model&apos;),
                                           entry.attrs.get(&apos;serial&apos;)]
            except Exception:
                self.inventory[array] = [&apos;n/a&apos;, &apos;n/a&apos;, &apos;n/a&apos;]

    # Print the list of arrays in a spreadsheet friendly manner
    def report(self):
        for xlsout in self.inventory:
            print (&quot;{name}\t{version}\t{model}\t{serial}&quot;.format(name=xlsout, 
                                                         version=self.inventory[xlsout][0],
                                                         model=self.inventory[xlsout][1], 
                                                         serial=self.inventory[xlsout][2]))

# If called directly
if __name__ == &apos;__main__&apos;:
    # Parse CLI arguments
    parser = argparse.ArgumentParser(description=&apos;Nimble array inspector&apos;)
    parser.add_argument(&apos;--username&apos;, required=True,
                    type=str, help=&apos;Nimble OS username&apos;)
    parser.add_argument(&apos;--password&apos;, required=True, 
                    type=str, help=&apos;Nimble OS password&apos;)
    parser.add_argument(&apos;--array&apos;, required=True, type=str, 
                    help=&apos;Nimble array hostname or IP address&apos;, nargs=&apos;+&apos;)
    args = parser.parse_args()

    # New inventory
    data = NimbleInventory(args.username, args.password, args.array)

    # Report 
    data.report()
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Running the program against a set of arrays produces something like the following (the command line is included for reference):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;./nimble_inventory.py --username admin --password admin --array nimble-array1 nimble-array2 nimble-array3 nimble-array4 
nimble-array1    5.1.3.100-668356-opt    Virtual-6G-12T-320F    cs-XXXXX 
nimble-array2    5.1.4.0-683149-opt    AF3000-2P-11T    AF-XXXXX 
nimble-array3    5.1.3.0-663613-opt    CS3000-2P-21T-1440F    AF-XXXXX 
nimble-array4    5.0.7.300-644174-opt    AF5000-2F-46T    AF-XXXXX 
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; In a real-world use case, a read-only user account should be set up on the targeted systems instead of an administrative one.&lt;/p&gt;
&lt;/blockquote&gt;
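&lt;p&gt;Along the same lines, a small hardening tweak (not part of the original script, just a suggested variation) is to make the password prompt interactive so the credential doesn&apos;t end up in the shell history. Python&apos;s standard &lt;code&gt;getpass&lt;/code&gt; module makes this a small change:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Suggested variation: prompt for the password when --password is omitted
import getpass

parser.add_argument(&apos;--password&apos;, required=False, type=str,
                help=&apos;Nimble OS password (prompted for interactively if omitted)&apos;)
args = parser.parse_args()

password = args.password or getpass.getpass(&apos;Nimble OS password: &apos;)
data = NimbleInventory(args.username, password, args.array)
&lt;/code&gt;&lt;/pre&gt;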
&lt;h1&gt;Next steps&lt;/h1&gt;
&lt;p&gt;The HPE Nimble Storage SDK for Python source code and documentation links have been added to the HPE DEV portal &lt;a href=&quot;https://developer.hpe.com/platform/hpe-nimble-storage/home&quot;&gt;platform page for HPE Nimble Storage&lt;/a&gt;, which makes it easy to find in the future.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/hpe-storage/nimble-python-sdk&quot;&gt;Source code&lt;/a&gt; on GitHub&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://hpe-storage.github.io/nimble-python-sdk&quot;&gt;Documentation&lt;/a&gt; hosted on GitHub Pages&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://pypi.org/project/nimble-sdk/&quot;&gt;Python package&lt;/a&gt; hosted on PyPI&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Come hang out with us on our Slack community. HPE employees may sign up directly at &lt;a href=&quot;https://hpedev.slack.com&quot;&gt;hpedev.slack.com&lt;/a&gt;; partners and customers need to sign up at &lt;a href=&quot;https://slack.hpedev.io&quot;&gt;slack.hpedev.io&lt;/a&gt; first. Also, watch this space for future releases around software development kits for HPE Nimble Storage. There are plenty of really exciting projects in the works. Stay tuned!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[WOW - Part 2: Communicating with customers and constructors]]></title><description><![CDATA[title slide What is WOW? In my previous post, I explained that many businesses struggle to understand the true value designers bring to a…]]></description><link>https://developer.hpe.com/wow-part-2-communicating-with-customers-and-constructors/</link><guid isPermaLink="false">https://developer.hpe.com/wow-part-2-communicating-with-customers-and-constructors/</guid><pubDate>Fri, 08 May 2020 16:40:10 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/title-slide-1588956219016.jpg&quot; alt=&quot;title slide&quot;&gt;&lt;/p&gt;
&lt;h2&gt;What is WOW?&lt;/h2&gt;
&lt;p&gt;In my &lt;a href=&quot;/blog/wow-a-practiced-and-perfected-design-process-part-1-uncovering-the-merit&quot;&gt;previous post,&lt;/a&gt; I explained that many businesses struggle to understand the true value designers bring to a project. To help UX designers overcome this issue, I’m sharing a methodology our group developed called WOW (Why On What with customers and constructors). This workflow demonstrates how to quantify the value of the design process and provides enterprise UX designers with a practiced and perfected path to achieve success.
To briefly summarize, the WOW methodology helps creative teams focus on four important stages of a project:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Uncover why the business (and this particular project) needs a UX&lt;/li&gt;
&lt;li&gt;Involve the customer early in the design phase&lt;/li&gt;
&lt;li&gt;Ensure the constructor (developer) has the right information during design implementation&lt;/li&gt;
&lt;li&gt;Convey the business value of the design&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;In part one of this blog series, I discussed the first part of the workflow – addressing why businesses should embrace UX design. Here, in part two, I will cover the next two steps of the process – focusing on communications with the customer (user) of the UX design and the constructor (developer). Keeping close communications with these two personas is key to any successful UX design.&lt;/p&gt;
&lt;h2&gt;Customer - Right brain of product&lt;/h2&gt;
&lt;p&gt;An essential ingredient in the creative UX design process is customer communications. You can think of this as flexing the right half of the brain during the creative process. This is where empathy really starts to come into play. Working with the customer directly helps bring the right needs, insights, and feel to a product. Customer interactions can help remove opinionated workflows and bring clarity to the early phases of design.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/part-2-slide-2-_-after-title-slide-1588956251003.jpg&quot; alt=&quot;part 2 slide 2 _ after title slide&quot;&gt;&lt;/p&gt;
&lt;p&gt;Customer interactions can be done through face-to-face or remote meetings. With the right set of questions, a designer can quickly gather enough information to make designs move ahead without ambiguity. During your interactions, help customers keep a “feel as if you are using the product” mindset and make sure you ask the right questions to ensure clarity. Your work here is to help them understand that you truly care about how they feel about using the product. Remember, it’s not so important at this point to consider “how” things are implemented. You’ll address the “how” later.&lt;/p&gt;
&lt;p&gt;Because customers use a UI (user interface) or real screens to interact with products, you’ll use wireframes as the intermediary steps to help create the prescribed vision of the UX and achieve the final UI. You can then take advantage of these wireframes and use them to help communicate with the constructor.&lt;/p&gt;
&lt;h2&gt;Constructor - Left brain of product&lt;/h2&gt;
&lt;p&gt;Another essential ingredient in the UX design process is ensuring clear and accurate communications with the constructors of the product, aka the developers. This is like working the left half of the brain, where you need to be more concerned about technical feasibility and how things work from the developer’s point of view.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/part-2-slide-3-1588956289625.jpg&quot; alt=&quot;part 2 slide 3&quot;&gt;&lt;/p&gt;
&lt;p&gt;Spending the time to ensure a developer understands the interaction, component usage, and other subtle aspects of a design is very important for the success of the UX. This step in the process is called the designer-developer handoff. Oftentimes this important phase is missed in the design process because of time constraints, resource dependencies, etc. This phase is so important that it should truly be a non-dismissible requirement. This is why we have integrated it as a distinct step in the WOW UX-flow.&lt;/p&gt;
&lt;p&gt;A strong designer–developer relationship based on mutual understanding is key for the overall success of a UX. Navigating this step carefully will help the designer identify any design shortcomings well before production release. It will also help the developer better understand the requirements and develop faster with fewer iterations. Design system approaches, like being three sprints ahead (3-SA), employing iterative development, and using UI templates, are all particularly helpful in these designer-developer handoffs.&lt;/p&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;By this point in using a UX design process, the designer has already added value to the product. Potential functionality gaps have been identified and competitive features have been added. Relying on empathy, the customer has been heard and the developer has all the information required to achieve the desired result. What’s left is using the tools available to demonstrate the business value achieved through the UX. In Part 3, I’ll discuss more on wireframes and their use in addressing the final stage of WOW, conveying the overall business value of the design.&lt;/p&gt;
&lt;p&gt;Again, please feel free to reach out to me &lt;a href=&quot;https://twitter.com/uxwithparul&quot;&gt;@uxwithparul&lt;/a&gt; if you have any questions. You’ll be able to find subsequent posts in this series and view other informative blogs at &lt;a href=&quot;/blog&quot;&gt;HPE DEV.&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Here to help  - Newsletter]]></title><link>https://developer.hpe.com/2020-May-01/</link><guid isPermaLink="false">https://developer.hpe.com/2020-May-01/</guid><pubDate>Fri, 01 May 2020 05:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Tutorial: How to get started with the HPE CSI Driver and HPE Primera and 3PAR]]></title><description><![CDATA[Tutorial: How to get started with the HPE CSI Driver and HPE Primera and 3PAR With the release of the HPE Container Storage Interface (CSI…]]></description><link>https://developer.hpe.com/tutorial-how-to-get-started-with-the-hpe-csi-driver-and-hpe-primera-and-/</link><guid isPermaLink="false">https://developer.hpe.com/tutorial-how-to-get-started-with-the-hpe-csi-driver-and-hpe-primera-and-/</guid><pubDate>Thu, 30 Apr 2020 16:12:11 GMT</pubDate><content:encoded>&lt;h1&gt;Tutorial: How to get started with the HPE CSI Driver and HPE Primera and 3PAR&lt;/h1&gt;
&lt;p&gt;With the release of the HPE Container Storage Interface (CSI) driver for Kubernetes back in January, HPE has been hard at work on integrating additional platforms into the CSI driver framework. Initially, the HPE CSI Driver for Kubernetes only supported Nimble Storage; now, with the latest v1.1.1, comes &lt;a href=&quot;https://community.hpe.com/t5/hpe-storage-tech-insiders/hpe-csi-driver-for-kubernetes-1-1-1-and-hpe-3par-and-hpe-primera/ba-p/7086675&quot;&gt;support for HPE Primera and 3PAR arrays&lt;/a&gt;. In this tutorial, I will walk you through the steps of deploying the CSI driver with HPE Primera, and then we will deploy a Wordpress site using persistent storage. With that, let&apos;s get going!&lt;/p&gt;
&lt;h2&gt;Assumptions&lt;/h2&gt;
&lt;p&gt;I will be starting with an existing Kubernetes cluster. This can be a fresh install with &lt;a href=&quot;https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/&quot;&gt;kubeadm&lt;/a&gt; or &lt;a href=&quot;https://kubernetes.io/docs/setup/production-environment/tools/kubespray/&quot;&gt;kubespray&lt;/a&gt;. The deployment of Kubernetes is outside of the scope of this document. If you don&apos;t have a cluster up and running, I recommend that you get started there.&lt;/p&gt;
&lt;p&gt;I am also assuming that &lt;code&gt;kubectl&lt;/code&gt; is installed and configured to communicate with the cluster, and that Helm 3 is available to deploy the HPE CSI Driver for Kubernetes. If not, here are some good resources to get set up: &lt;a href=&quot;https://kubernetes.io/docs/tasks/tools/install-kubectl/&quot;&gt;Install and Set Up kubectl&lt;/a&gt; and &lt;a href=&quot;https://helm.sh/docs/&quot;&gt;https://helm.sh/docs/&lt;/a&gt;.&lt;/p&gt;
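&lt;p&gt;A quick sanity check (not part of the original walkthrough) to confirm both tools are in place before moving on:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Confirm kubectl can reach the cluster and that Helm 3 is installed
kubectl cluster-info
helm version --short
&lt;/code&gt;&lt;/pre&gt;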
&lt;h2&gt;Deploying the HPE CSI Driver and HPE 3PAR and Primera Container Storage Provider with Helm&lt;/h2&gt;
&lt;p&gt;To get started with the deployment of the HPE CSI Driver, check out the &lt;a href=&quot;https://scod.hpedev.io/&quot;&gt;HPE Storage Container Orchestrator Documentation&lt;/a&gt; (SCOD for short) site. SCOD is an umbrella documentation project for all Kubernetes and Docker integrations for HPE primary storage tailored for IT Ops, developers and partners. It includes HPE 3PAR, HPE Primera, HPE Cloud Volumes and HPE Nimble Storage.&lt;/p&gt;
&lt;p&gt;The HPE CSI Driver is deployed by using industry standard means, either a Helm chart or an Operator. For this tutorial, I will be using Helm to deploy the CSI driver.&lt;/p&gt;
&lt;p&gt;The official Helm chart for the HPE CSI Driver for Kubernetes is hosted on &lt;a href=&quot;https://hub.helm.sh/charts/hpe-storage/hpe-csi-driver&quot;&gt;hub.helm.sh&lt;/a&gt;. There, you will find the configuration and installation instructions for the chart.&lt;/p&gt;
&lt;p&gt;The first step of installing the HPE CSI Driver is creating the &lt;strong&gt;values.yaml&lt;/strong&gt; file.&lt;/p&gt;
&lt;p&gt;Please refer to this sample &lt;a href=&quot;https://github.com/hpe-storage/co-deployments/tree/master/helm/values/csi-driver&quot;&gt;values.yaml&lt;/a&gt; file.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;vi primera-values.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Copy the following into the file. Make sure to set the &lt;strong&gt;backendType: primera3par&lt;/strong&gt; and the &lt;strong&gt;backend&lt;/strong&gt; to the array IP along with the array username and password.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# HPE backend storage type (nimble, primera3par)
backendType: primera3par

secret:
  # parameters for specified backendType (nimble, primera3par)
  create: true
  backend: 192.168.1.10
  username: 3paradm
  password: 3pardata
  servicePort: &quot;8080&quot;

## For creating the StorageClass automatically:
storageClass:
  create: false
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; &lt;br /&gt;
The user specified will need at a minimum the &lt;strong&gt;edit&lt;/strong&gt; role on the array.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Save and exit.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;IMPORTANT&lt;/strong&gt;&lt;br /&gt;Deploying the HPE CSI Driver with the HPE 3PAR and Primera CSP currently doesn&apos;t support the creation of the &lt;strong&gt;default&lt;/strong&gt; &lt;code&gt;StorageClass&lt;/code&gt; in the Helm chart. Make sure to set &lt;strong&gt;create: false&lt;/strong&gt; or omit the &lt;code&gt;StorageClass&lt;/code&gt; section.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;Installing the chart&lt;/h3&gt;
&lt;p&gt;To install the chart with the name hpe-csi:&lt;/p&gt;
&lt;p&gt;Add the HPE CSI Driver for Kubernetes helm repo:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;helm repo add hpe https://hpe-storage.github.io/co-deployments
helm repo update
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Install the latest chart:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;helm install hpe-csi hpe/hpe-csi-driver --namespace kube-system -f primera-values.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Wait a few minutes as the deployment finishes.&lt;/p&gt;
&lt;p&gt;Verify that everything is up and running correctly by listing the pods.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;kubectl get pods --all-namespaces -l &apos;app in (primera3par-csp, hpe-csi-node, hpe-csi-controller)&apos;
NAMESPACE     NAME                                  READY     STATUS    RESTARTS   AGE
kube-system   hpe-csi-controller-84d8569476-vt7xg   5/5       Running   0          13m
kube-system   hpe-csi-node-s4c8z                    2/2       Running   0          13m
kube-system   primera3par-csp-66f775b555-2qclg      1/1       Running   0          13m
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;kubectl get secret -n kube-system | grep primera3par
primera3par-secret                Opaque                                5         13m
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If all of the components show in the &lt;code&gt;Running&lt;/code&gt; state, then the HPE CSI driver and the HPE 3PAR and Primera Container Storage Provider have been successfully deployed.&lt;/p&gt;
&lt;h2&gt;Using the HPE CSI Driver and HPE 3PAR and Primera Container Storage Provider&lt;/h2&gt;
&lt;p&gt;Now, let&apos;s validate the deployment by creating a &lt;code&gt;StorageClass&lt;/code&gt; and a &lt;code&gt;PersistentVolumeClaim&lt;/code&gt;, and then deploying a Wordpress site.&lt;/p&gt;
&lt;p&gt;We need to create a &lt;code&gt;StorageClass&lt;/code&gt; API object using the HPE CSI driver, along with the parameters specific to HPE 3PAR and Primera CSP, as well as the &lt;code&gt;Secret&lt;/code&gt; used for the &lt;strong&gt;primera3par&lt;/strong&gt; backend. For a full list of supported parameters, please refer to the &lt;a href=&quot;https://scod.hpedev.io/container_storage_provider/hpe_3par_primera/index.html&quot;&gt;CSP specific documentation&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The below YAML declarations are meant to be created with &lt;code&gt;kubectl create&lt;/code&gt;. Either copy the content to a file on the host where &lt;code&gt;kubectl&lt;/code&gt; is being executed, or copy &amp;#x26; paste into the terminal, like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;kubectl create -f-
&amp;#x3C; paste the YAML &gt;
^D (CTRL + D)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Create a &lt;code&gt;StorageClass&lt;/code&gt; API object for a Primera Data Reduction volume.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: primera-reduce-sc
provisioner: csi.hpe.com
parameters:
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/provisioner-secret-name: primera3par-secret
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
  csi.storage.k8s.io/controller-publish-secret-name: primera3par-secret
  csi.storage.k8s.io/controller-publish-secret-namespace: kube-system
  csi.storage.k8s.io/node-publish-secret-name: primera3par-secret
  csi.storage.k8s.io/node-publish-secret-namespace: kube-system
  csi.storage.k8s.io/node-stage-secret-name: primera3par-secret
  csi.storage.k8s.io/node-stage-secret-namespace: kube-system
  # Required for volume expansion (resize) support on Kubernetes 1.15 and later
  csi.storage.k8s.io/controller-expand-secret-name: primera3par-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: kube-system
  cpg: SSD_r6
  provisioning_type: reduce
  accessProtocol: fc
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Create a &lt;code&gt;PersistentVolumeClaim&lt;/code&gt; for MariaDB for use by Wordpress. This object creates a &lt;code&gt;PersistentVolume&lt;/code&gt; as defined. Make sure to reference the correct &lt;code&gt;.spec.storageClassName&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: data-my-wordpress-mariadb-0
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: primera-reduce-sc
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, let&apos;s make another for the Wordpress application.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: primera-reduce-sc
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Let&apos;s verify that the &lt;code&gt;PersistentVolumeClaims&lt;/code&gt; were created and bound successfully.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;kubectl get pvc
NAME                          STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
data-my-wordpress-mariadb-0   Bound     pvc-1abdb7d7-374e-45b3-8fa1-534131ec7ec6   50Gi       RWO            primera-reduce-sc   1m
my-wordpress                  Bound     pvc-ff6dc8fd-2b14-4726-b608-be8b27485603   20Gi       RWO            primera-reduce-sc   1m
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The above output means that the HPE CSI Driver successfully provisioned new volumes based upon the &lt;strong&gt;primera-reduce-sc&lt;/strong&gt; &lt;code&gt;StorageClass&lt;/code&gt;. The volumes are not attached to any node yet. A volume will only be attached to a node once a scheduled workload requests its &lt;code&gt;PersistentVolumeClaim&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Now, let&apos;s use Helm to deploy Wordpress using the &lt;code&gt;PVC&lt;/code&gt; created previously. When Wordpress is deployed, the volumes will be attached, formatted and mounted.&lt;/p&gt;
&lt;p&gt;The first step is to add the Bitnami chart repository, which hosts the Wordpress chart.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm search repo bitnami/wordpress
NAME                    CHART VERSION   APP VERSION     DESCRIPTION
bitnami/wordpress       9.2.1           5.4.0           Web publishing platform for building blogs and ...
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Deploy Wordpress by setting &lt;code&gt;persistence.existingClaim=&amp;#x3C;existing_PVC&gt;&lt;/code&gt; to the &lt;code&gt;PVC&lt;/code&gt; created in the previous step.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;helm install my-wordpress bitnami/wordpress --version 9.2.1 --set service.type=ClusterIP,wordpressUsername=admin,wordpressPassword=adminpassword,mariadb.mariadbRootPassword=secretpassword,persistence.existingClaim=my-wordpress,allowEmptyPassword=false
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Check to verify that Wordpress and MariaDB were deployed and are in the &lt;strong&gt;Running&lt;/strong&gt; state. This may take a few minutes.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;kubectl get pods
NAME                            READY     STATUS    RESTARTS   AGE
my-wordpress-69b7976c85-9mfjv   1/1       Running   0          2m
my-wordpress-mariadb-0          1/1       Running   0          2m
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Finally, let&apos;s take a look at the Wordpress site. You can use &lt;code&gt;kubectl port-forward&lt;/code&gt; to access the Wordpress application from within the Kubernetes cluster to verify everything is working correctly.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;kubectl port-forward svc/my-wordpress 80:80
Forwarding from 127.0.0.1:80 -&gt; 8080
Forwarding from [::1]:80 -&gt; 8080
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt;&lt;br /&gt;If you already have something running locally on port 80, modify the port-forward to use an unused local port (e.g. 5000:80).&lt;/p&gt;
&lt;/blockquote&gt;
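&lt;p&gt;For example, to forward to local port 5000 instead, and then browse to &lt;strong&gt;http://127.0.0.1:5000&lt;/strong&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Forward local port 5000 to the Wordpress service port 80
kubectl port-forward svc/my-wordpress 5000:80
&lt;/code&gt;&lt;/pre&gt;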
&lt;p&gt;Open a browser on your workstation to &lt;strong&gt;&lt;a href=&quot;http://127.0.0.1&quot;&gt;http://127.0.0.1&lt;/a&gt;&lt;/strong&gt; and you should see, &lt;strong&gt;&quot;Hello World!&quot;&lt;/strong&gt;. Access the admin console at: &lt;strong&gt;&lt;a href=&quot;http://127.0.0.1/admin&quot;&gt;http://127.0.0.1/admin&lt;/a&gt;&lt;/strong&gt; using the user/password used to deploy the Helm Chart. Happy Blogging!&lt;/p&gt;
&lt;h2&gt;Next steps&lt;/h2&gt;
&lt;p&gt;Stay tuned to HPE DEV for future blogs regarding the HPE CSI Driver for Kubernetes. In the meantime, if you want to learn more about Kubernetes, CSI and the integration with HPE storage products, you can find a ton of Resources out on &lt;a href=&quot;https://scod.hpedev.io/&quot;&gt;SCOD&lt;/a&gt;! If you are already on Slack or an HPE employee, connect with us on &lt;a href=&quot;https://hpedev.slack.com/&quot;&gt;Slack&lt;/a&gt;. If you are a new user, signup at &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;slack.hpedev.io&lt;/a&gt;. We hang out in #kubernetes, #nimblestorage and #3par-primera.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Multiple HPE efforts help fight COVID-19]]></title><description><![CDATA[title slide for covid blog Scientists around the world are racing to determine how to defeat COVID-19. Technology companies, like Hewlett…]]></description><link>https://developer.hpe.com/multiple-hpe-efforts-help-fight-covid-19/</link><guid isPermaLink="false">https://developer.hpe.com/multiple-hpe-efforts-help-fight-covid-19/</guid><pubDate>Tue, 28 Apr 2020 15:22:08 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/title-slide-for-covid-blog-1588088234878.jpg&quot; alt=&quot;title slide for covid blog&quot;&gt;&lt;/p&gt;
&lt;p&gt;Scientists around the world are racing to determine how to defeat COVID-19. Technology companies, like Hewlett Packard Enterprise (HPE), are in a unique position to assist. As a company whose purpose is to advance the way people live and work, HPE is proud that its technology and its employees’ expertise are being called into action to help organizations address the COVID-19 crisis. Here are four ways that HPE’s technology and talent are on the front lines in the fight against COVID-19, and how you can help.&lt;/p&gt;
&lt;h2&gt;Key member of the COVID-19 High Performance Computing Consortium&lt;/h2&gt;
&lt;p&gt;Given enough compute power, scientists can accelerate finding a resolution to the COVID-19 pandemic. Computing resources help researchers collect, process, and analyze the massive amounts of data required to understand and model the virus’ genetic coding. Epidemiological data scientists rely on supercomputing processing to help them understand disease conditions and distribution patterns within the population. This data is essential to identify risk factors and determine health-related policies.&lt;/p&gt;
&lt;p&gt;As part of the &lt;a href=&quot;https://covid19-hpc-consortium.org/&quot;&gt;COVID-19 High Performance Computing Consortium,&lt;/a&gt; HPE joins other key industry, government, and academic partners in providing COVID-19 researchers with access to the world’s most powerful high-performance computing resources. One way consortium members are helping is by offering those with approved COVID-19 related research proposals free access to technology resources required to find a cure for the virus. More specifics can be found in the &lt;a href=&quot;https://www.whitehouse.gov/briefings-statements/white-house-announces-new-partnership-unleash-u-s-supercomputing-resources-fight-covid-19/&quot;&gt;White House Announces New Partnership to Unleash U.S. Supercomputing Resources to Fight COVID-19&lt;/a&gt; article.&lt;/p&gt;
&lt;h2&gt;HPE AI/ML projects could lead to quicker cures overall&lt;/h2&gt;
&lt;p&gt;With the acquisition of supercomputing leader, Cray Inc., HPE added powerful computing resources to its arsenal. HPE DEV community member, Sreenivas Rangan Sukumar, explains that the HPE artificial intelligence (AI) / machine learning (ML) team currently has several projects in the works helping scientists find treatments. Three projects, in particular, may set the basis for an end-to-end pipeline by which cures for numerous diseases can be found more quickly than ever before.&lt;/p&gt;
&lt;p&gt;The first project, &lt;a href=&quot;https://github.com/jbalma/pharml&quot;&gt;PharML.Bind,&lt;/a&gt; is an AI-driven approach that screens databases of potential drug compounds. It works by feeding supercomputer-based simulations of molecular dynamics (docking) to validate chemicals as potential targets for a vaccine or drug. Current bio-physical molecular docking is expensive and takes several hours in simulation testing for each potential chemical, which slows results. In contrast, PharML.Bind is computationally affordable and significantly faster in predicting the binding potential for a chemical to a target virus protein.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/screen-shot-2020-04-21-at-24215-pm-1588088160374.PNG&quot; alt=&quot;screen shot 2020 04 21 at 2.42.15 pm&quot;&gt;&lt;/p&gt;
&lt;p&gt;Another project addresses the treatment phase of the COVID-19 pandemic. This project focuses on using AI to augment human researchers in validating a screen by connecting-the-dots across literature and curated-databases (such as Biosamples, Uniprot, Clinical Trials, DrugBank, PubChem etc.). The project uses the Cray Graph Engine, &lt;a href=&quot;https://www.cray.com/blog/cray-graph-engine-takes-trillion-triples/&quot;&gt;the world’s fastest graph database,&lt;/a&gt; to host, query, and reason with a knowledgebase of 100+ billion facts in a few seconds – something that otherwise would have taken months. Now using a supercomputer, researchers have access to this integrated body of knowledge – all in one place.&lt;/p&gt;
&lt;p&gt;Finally, several HPE engineers and developers are working on the &lt;a href=&quot;https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge/data&quot;&gt;White House COVID-19 “Kaggle” Challenge.&lt;/a&gt; This urgent call-to-action is asking the world’s AI experts for assistance in answering key scientific questions related to COVID-19. These questions include: “What is known about transmission, incubation, and environmental stability?”, “What do we know about COVID-19 risk factors?”, and “What worked and what didn’t?”&lt;/p&gt;
&lt;p&gt;While each of these COVID-19 specific AI/ML efforts will provide significant contributions to the community individually, the combined use of these projects can assist healthcare efforts far beyond this immediate crisis. As Sreenivas Rangan Sukumar points out, accelerated chemical-structure discovery and literature-based evidence discovery means faster cures for many other illnesses.&lt;/p&gt;
&lt;h2&gt;The Folding@Home effort&lt;/h2&gt;
&lt;p&gt;In addition to HPE offering corporate resources, HPE employees are personally doing what they can to help. Numerous engineers and developers are volunteering CPU time from their home computers as part of a distributed computing project called &lt;a href=&quot;https://foldingathome.org/&quot;&gt;Folding@Home.&lt;/a&gt; An open source project initially focused on fighting different types of cancer and neurological diseases, Folding@Home now has a major focus on COVID-19.&lt;/p&gt;
&lt;p&gt;Greg Bowman, PhD, an assistant professor of biochemistry and molecular biophysics at Washington University School of Medicine in St. Louis, leads this open source activity. The goal is to simulate the dynamics of COVID-19 proteins in the hunt for new therapeutic opportunities. &lt;a href=&quot;https://foldingathome.org/2020/03/15/coronavirus-what-were-doing-and-how-you-can-help-in-simple-terms/&quot;&gt;By unravelling the mysteries of protein dynamics, including the folding process,&lt;/a&gt; Dr. Bowman hopes to better understand how viral proteins work and design therapies to stop the coronavirus.&lt;/p&gt;
&lt;p&gt;As pointed out in HPE DEV member Michael Mattsson’s &lt;a href=&quot;https://datamattsson.tumblr.com/post/613349069061046272/lets-hack-covid-19&quot;&gt;blog,&lt;/a&gt; a couple of HPE community teams are already actively supporting Folding@Home within HPE: HPE DEV (246814) and WeAreHPE (247332). The worldwide response to this effort has been tremendous. Folding@Home recently clocked 470 PFLOPs of compute power, which is more than two times higher than the peak performance of ORNL&apos;s Summit, the world&apos;s most powerful supercomputer. In just over a month, the WeAreHPE team placed in the top 500 in terms of work unit count, ranking 470 out of 251,480.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/folding-at-home-1-1588088181250.png&quot; alt=&quot;folding at home 1&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Networking with HPE Aruba and Airheads volunteers&lt;/h2&gt;
&lt;p&gt;In addition to compute power, network infrastructure and the skills required to set it up are also needed to address this crisis. Many hospitals are struggling to handle the patient load and are building temporary COVID-related healthcare facilities.&lt;/p&gt;
&lt;p&gt;To help healthcare organizations set up quickly, Aruba created the Airheads Volunteer Corps, an opt-in registry of volunteer network engineers ready to assist in the build out of networks for medical facilities battling this pandemic. In creating this registry, Aruba connects those in need of IT assistance, with those who have the ability to help. HPE is also supporting this effort by donating thousands of secure connectivity kits. Learn more &lt;a href=&quot;https://community.arubanetworks.com/t5/Community-Matters-Blog/Airheads-Volunteer-Corps-and-Healthcare-Connectivity-Bundles/ba-p/645495&quot;&gt;here.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;From Genoa, Italy, Aruba’s Stefano Brioschi shared how he and Chiara Cipollini, HPE account manager for the Italian cruise market, volunteered time and equipment to build a Wi-Fi network on a passenger vessel. The Grandi Navi Veloci (GNV) &lt;a href=&quot;https://www.seatrade-cruise.com/news/splendid-case-study-how-first-passenger-ship-was-transformed-coronavirus-relief&quot;&gt;Splendid, was being turned into a fully-equipped hospital ship,&lt;/a&gt; but the ship’s iron construction prevented adequate mobile connectivity. The two connected with local partner, Mantero Sistemi, to deploy a wireless infrastructure to support communications between ambulance crews, doctors, medical devices, the ship’s crew, and patients.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/aruba-3-1588088198966.jpg&quot; alt=&quot;aruba 3&quot;&gt;&lt;/p&gt;
&lt;p&gt;In only 5 days and working with a team of 10 people, they were able to create a network composed of 70 access points, 10 switches, and over 4 kilometers of UTP cable. Through this effort, they provided Wi-Fi coverage across the various decks of the ship, helping to form a controlled facility where patients could recover before returning to their homes.&lt;/p&gt;
&lt;h2&gt;Join the effort&lt;/h2&gt;
&lt;p&gt;The HPE DEV team supports many of these efforts. Interested in volunteering your expertise to help find a cure? Register to our &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;Slack&lt;/a&gt; workspace and join us on the &lt;a href=&quot;https://hpedev.slack.com/archives/CSA0R2T7B&quot;&gt;#hpedev-volunteers&lt;/a&gt; channel to learn more about how you can join the HPE DEV Folding@home team (TeamID: 246814). If you’re interested in participating in the Aruba Airheads opt-in registry, you can &lt;a href=&quot;https://connect.arubanetworks.com/Airheads_Volunteer_Corps&quot;&gt;volunteer here.&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[COVID-19 limits accessibility – Free offer for HPE iLO Advanced opens it up]]></title><description><![CDATA[COVID-19 has introduced a variety of new challenges for many HPE customers and partners, such as closed offices, inaccessible server rooms…]]></description><link>https://developer.hpe.com/covid-19-limits-accessibility-free-offer-for-hpe-ilo-advanced-opens-it-u/</link><guid isPermaLink="false">https://developer.hpe.com/covid-19-limits-accessibility-free-offer-for-hpe-ilo-advanced-opens-it-u/</guid><pubDate>Thu, 23 Apr 2020 15:56:14 GMT</pubDate><content:encoded>&lt;p&gt;COVID-19 has introduced a variety of new challenges for many HPE customers and partners, such as closed offices, inaccessible server rooms, and/or work-from-home mandates. Like many businesses across the globe, you may be facing the need to quickly adapt and transform, in order to continue to serve your customers’ needs. If you are an HPE customer or partner (or plan to be), know that we are here to support you, so you can focus your efforts on your customers.&lt;/p&gt;
&lt;p&gt;HPE is here to help. As of now, HPE iLO Advanced is free for all HPE customers and partners through 2020. Configure, monitor, and update HPE servers seamlessly from anywhere, TODAY. &lt;a href=&quot;https://community.hpe.com/t5/Servers-The-Right-Compute/HPE-iLO-Advanced-is-now-FREE-for-all-HPE-customers-and-partners/ba-p/7084121#.Xp3R9MhKg2w&quot;&gt;Read more here&lt;/a&gt; and learn how to claim your free license.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Top 13 Capabilities Within SPIFFE and SPIRE Released In 2019]]></title><description><![CDATA[With the acquisition of Scytale this past February, Hewlett Packard Enterprise (HPE) acquired a seasoned team of experts in cloud-native…]]></description><link>https://developer.hpe.com/top-13-capabilities-within-spiffe-and-spire-released-in-2019/</link><guid isPermaLink="false">https://developer.hpe.com/top-13-capabilities-within-spiffe-and-spire-released-in-2019/</guid><pubDate>Tue, 21 Apr 2020 00:42:07 GMT</pubDate><content:encoded>&lt;p&gt;With the acquisition of Scytale this past February, Hewlett Packard Enterprise (HPE) acquired a seasoned team of experts in cloud-native security and zero-trust networking. The Scytale team is recognized as founding contributors to Cloud Native Foundation’s (CNCF’s) SPIFFE (the Secure Production Identity Framework for Everyone) and SPIRE (the SPIFFE Runtime Environment) open source projects.&lt;/p&gt;
&lt;p&gt;HPE is fully committed to continuing Scytale’s stewardship and contributions to SPIFFE and SPIRE, as these projects will play a fundamental role in HPE’s plans to deliver a dynamic, open, and secure edge-to-cloud platform.&lt;/p&gt;
&lt;p&gt;The &lt;em&gt;following&lt;/em&gt; content was originally posted on Scytale’s blog:&lt;/p&gt;
&lt;p&gt;Over the course of 2019, the Scytale team continued its work shepherding the SPIFFE and SPIRE communities towards building standards and tooling that support the service identity concept. Part of the Cloud Native Computing Foundation (CNCF), these projects have grown in popularity and seen an increasing number of contributions from engineering teams at Amazon, Bloomberg, Google, Pinterest, Square, TransferWise, Uber, Yahoo Japan, and more.&lt;/p&gt;
&lt;p&gt;These community-led innovations and contributions have enabled SPIFFE to authenticate to more types of software systems and improved the deployment, operability, and performance of SPIRE in large-scale environments.&lt;/p&gt;
&lt;h2&gt;Authenticate to Service Mesh, Cloud Platforms, Databases, and more&lt;/h2&gt;
&lt;p&gt;Until recently, SPIFFE has primarily been used to secure communications between services identified by a shared SPIFFE identity provider (IdP). While numerous contributions were made throughout this past year, thirteen key capabilities were released in 2019, which allow you to use SPIFFE and SPIRE to enable strong trust between cloud-native services and other shared services—including databases, service meshes, and public cloud providers. Some enhance ease of deployment and operability, while others offer performance and scalability improvements. These key features include:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;SPIFFE federation:&lt;/strong&gt; This feature allows services in disparate domains identified by independent SPIFFE identity providers, such as SPIRE, to securely authenticate and communicate with each other.  Key use cases for SPIFFE federation include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Federation between multiple mesh implementations&lt;/li&gt;
&lt;li&gt;Federating trust across different domains within an organization&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;      A number of emerging projects and platforms have adopted SPIFFE. This includes Istio, SPIRE, Scytale Enterprise, and several service meshes, including NGINX (F5 Networks) and Grey Matter (Decipher Technology Studios). SPIFFE federation will enable service interoperability across all these distributed environments. View this recent session from KubeCon to see how you can &lt;a href=&quot;https://www.youtube.com/watch?v=cx_NnvbsCP4&quot;&gt;secure communications between meshes and beyond with SPIFFE federation.&lt;/a&gt;&lt;/p&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;OIDC federation:&lt;/strong&gt; SPIRE now supports OIDC (OpenID Connect). With federation primitives built into SPIFFE and SPIRE, you can directly authenticate to OIDC-compatible validators without having to generate or manage secrets. For example, a system running within an on-premise data center managed by SPIRE can now directly authenticate to cloud platforms like AWS without sharing secrets or private keys. &lt;a href=&quot;https://www.youtube.com/watch?v=db_3LefoG9k&amp;#x26;feature=youtu.be&quot;&gt;Here is a video that demonstrates this capability.&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Database authentication:&lt;/strong&gt; You can now use SPIRE-issued identities to directly authenticate to databases and other systems that support legacy x509 authentication. This approach negates the need for a secret store and instead relies on short-lived asymmetric keys. This approach can also provide traffic encryption. To learn more, &lt;a href=&quot;https://www.youtube.com/watch?v=YFll-3jgFrU&amp;#x26;feature=youtu.be&quot;&gt;view this demo.&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Ease of Deployment and Operability&lt;/h3&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Plugin infrastructure refactoring:&lt;/strong&gt; This feature enables plugins to provide auxiliary services, such as health check, to the SPIRE core. You can also request services from the SPIRE core or use server-like telemetry. The Notifier Plugin was added as a result of this refactoring. This plugin detects certain events and then performs an action in response.  An example of this is the Kubernetes Bundle notifier plugin, which resolves a bootstrapping problem within Kubernetes. It publishes the latest trust bundle to a Kubernetes ConfigMap when the SPIRE server is started and the bundle is updated.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Registration API remote access:&lt;/strong&gt; This enhancement allows remote software services to authenticate against the registration API via SVIDs. It enables greater deployment flexibility by allowing software services to manage registrations, even if these services are not running next to a SPIRE server.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Kubernetes workload registrar:&lt;/strong&gt; This registrar automates the registration of Kubernetes workloads on SPIRE when they are registered in Kubernetes. Using this capability, you can interact exclusively with the Kubernetes API server to get SPIRE fully deployed and operational. &lt;a href=&quot;https://github.com/spiffe/spire/tree/master/support/k8s/k8s-workload-registrar&quot;&gt;View this link for more details.&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Telemetry and logging:&lt;/strong&gt; These capabilities have been significantly improved, helping engineers better track and troubleshoot performance issues in production environments. Several telemetry points were added, including a SQL datastore plugin. In addition, existing points were audited, augmented, and labeled with meaningful descriptions.  Finally, key health and performance metrics from the SPIRE server can now be exported directly to Prometheus, Statsd, and DataDog.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Upstream plugin for AWS Secrets Manager and AWS Certificate Manager:&lt;/strong&gt; This plugin enables you to use AWS Secrets Manager or AWS Certificate Manager as an upstream signing authority for SPIRE-issued identities. It can improve your security posture since keys do not need to be stored on disk.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;MySQL support:&lt;/strong&gt; Now, MySQL can be used to store configuration data for the SPIRE server.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Docker-based workload attestor:&lt;/strong&gt; This capability enables you to use selectors based on Docker labels and the container&apos;s image ID.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Improved documentation:&lt;/strong&gt; New documentation enhancements make it easier to get started on SPIFFE and SPIRE. Documentation now includes a guide for getting started with Kubernetes, as well as use cases and case studies. You can find these on SPIFFE.io.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Performance and Scalability Improvements&lt;/h3&gt;
&lt;ol start=&quot;12&quot;&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Nested SPIRE:&lt;/strong&gt; This new capability enables you to organize your SPIRE deployment into a hierarchy in which one SPIRE server can deliver CA identities to a downstream SPIRE server. This enables you to segregate your SPIRE deployment so you can improve fault management and resiliency.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Performance improvements:&lt;/strong&gt; The team made several performance improvements, particularly within the datastore plugin. These enhancements optimize database performance and speed up scale-related hot spots.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;HPE and the Scytale team are grateful for all the contributors that help keep the momentum going with SPIFFE and SPIRE. The Scytale team is now working along with the HPE DEV team to expand contribution opportunities. We’re looking forward to even more contributions in 2020.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Introducing HPE Storage Container Orchestrator Documentation]]></title><description><![CDATA[Writing intuitive, end-user facing documentation for complex systems that is easy to navigate and consume is not a simple task.  The Hewlett…]]></description><link>https://developer.hpe.com/introducing-hpe-storage-container-orchestrator-documentation/</link><guid isPermaLink="false">https://developer.hpe.com/introducing-hpe-storage-container-orchestrator-documentation/</guid><pubDate>Mon, 20 Apr 2020 19:56:01 GMT</pubDate><content:encoded>&lt;p&gt;Writing intuitive, end-user facing documentation for complex systems that is easy to navigate and consume is not a simple task.  The Hewlett Packard Enterprise (HPE) storage team picked up the responsibility of consolidating all documentation pertaining to Kubernetes and Docker integration with HPE storage solutions under one umbrella. &lt;a href=&quot;https://scod.hpedev.io&quot;&gt;HPE Storage Container Orchestrator Documentation&lt;/a&gt;, or SCOD for short, is a documentation portal that is now officially available to customers and partners interested in HPE persistent storage solutions that they can integrate with their Kubernetes or Docker projects. Let&apos;s explore the toolchain that makes writing documentation effortless and beautiful!&lt;/p&gt;
&lt;h1&gt;Lean and modern tooling&lt;/h1&gt;
&lt;p&gt;SCOD stores all the documentation source files in markdown on GitHub. When a pull request is merged to the master branch (after a careful review), the documentation portal is automatically rebuilt using a GitHub Action. The site itself is hosted by GitHub and uses GitHub Pages to serve the content. MkDocs is the rendering engine used for SCOD. Its theme is based on the widely popular &lt;a href=&quot;https://docs.readthedocs.io&quot;&gt;readthedocs.io&lt;/a&gt; look and feel, augmented with some HPE DEV flair.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/screen-shot-2020-04-17-at-100820-am-1587412198284.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The unique advantage of using MkDocs is that a tech writer may render the entire project locally and immediately observe what the content would look like in the final product. The tech writer may also use their favorite text editing tool as long as it doesn’t add any hidden formatting.  This gives both writers and reviewers a very lean and efficient way to publish documentation without obstacles and impediments posed by dinosaur tools and error prone review processes. It’s single source, single output and provides a shared view in GitHub to review each other’s branches and carefully track changes.&lt;/p&gt;
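&lt;p&gt;As a quick illustration (assuming a standard MkDocs setup; the exact theme and plugins used by SCOD are defined in its repository), a local preview is only a couple of commands away:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Install MkDocs and serve the docs locally with live reload at http://127.0.0.1:8000
pip install mkdocs
mkdocs serve
&lt;/code&gt;&lt;/pre&gt;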
&lt;h1&gt;Full steam ahead!&lt;/h1&gt;
&lt;p&gt;It’s difficult to estimate how far a system built using the SCOD toolchain would scale, but for single-project documentation it’s a very efficient process with minimal overhead and gets everything out of the way for tech writers to simply do what they’re supposed to do – write and review documentation! Expect a flurry of updates in the coming months. HPE looks forward to helping you learn more about the HPE storage integrations with Kubernetes and Docker through SCOD.&lt;/p&gt;
&lt;p&gt;Would you like to know more?&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Learn about &lt;a href=&quot;https://pages.github.com/&quot;&gt;GitHub Pages&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Deploy MkDocs &lt;a href=&quot;https://github.com/marketplace/actions/deploy-mkdocs&quot;&gt;GitHub Action&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Build your own site with &lt;a href=&quot;https://www.mkdocs.org&quot;&gt;MkDocs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Check out the &lt;a href=&quot;https://datamattsson.tumblr.com/post/612351271067893760/the-perfect-documentation-storm&quot;&gt;complete cookbook&lt;/a&gt; for SCOD&lt;/li&gt;
&lt;li&gt;Visit &lt;a href=&quot;https://scod.hpedev.io&quot;&gt;SCOD&lt;/a&gt;!&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Using Jitsi Meet to reach out to others]]></title><description><![CDATA[picture11 Along with half of the world’s population, I find myself working from home due to sheltering requirements imposed on us by the…]]></description><link>https://developer.hpe.com/using-jitsi-meet-to-reach-out-to-others/</link><guid isPermaLink="false">https://developer.hpe.com/using-jitsi-meet-to-reach-out-to-others/</guid><pubDate>Wed, 15 Apr 2020 21:37:50 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/picture11-1586988196990.png&quot; alt=&quot;picture11&quot;&gt;&lt;/p&gt;
&lt;p&gt;Along with half of the world’s population, I find myself working from home due to sheltering requirements imposed on us by the COVID-19 pandemic. In these troubled times, communication with my friends and family has become critical. I found that I needed ways to stay in touch with them easily. Of course, mobile apps offer a great diversity of options to stay in touch with one another. But conducting a multi-user video conference on a mobile phone is neither easy nor convenient.&lt;/p&gt;
&lt;p&gt;There are many people today who are looking for ways to virtually connect through video conferencing. From a more professional standpoint, teachers need to be able to instruct and stay in touch with their students. Small companies could use it for meetings. Doctor/patient interactions could also benefit from such a tool. But it would need to be very inexpensive (i.e. free) and easy to use.&lt;/p&gt;
&lt;p&gt;As a Hewlett Packard Enterprise (HPE) employee, I have access to many company-provided ways to video conference, e.g. through Teams, Skype, and Zoom. Even though these technologies aim to be easy to use and affordable for everyone, I thought I might check the open source side of the world to see what was available there as well.&lt;/p&gt;
&lt;p&gt;Zoom is free for 40 minutes. That’s nice, but is it enough? Looking a little deeper, I found &lt;a href=&quot;https://jitsi.org/jitsi-meet/&quot;&gt;Jitsi Meet.&lt;/a&gt; Jitsi is free and ready to go whenever you want. Just open the &lt;a href=&quot;https://meet.jit.si/&quot;&gt;link&lt;/a&gt; and start your meeting. That’s all it takes.&lt;/p&gt;
&lt;p&gt;For those who are a bit more technical, you may want to consider setting up your own Jitsi Meet server at home. This could be a service you could deliver to your family and friends that would offer you a bit more control and privacy, as you wouldn’t have to rely on an external provider. To help those of you who may be interested in setting up your own Jitsi Meet instance, I thought I’d share my experience. In this post, I’ll run you through the installation process and show you how to use it.&lt;/p&gt;
&lt;h2&gt;What is Jitsi?&lt;/h2&gt;
&lt;p&gt;Jitsi is a set of open-source projects that allows you to easily build and deploy secure videoconferencing solutions. At the heart of Jitsi are two projects, Jitsi Videobridge and Jitsi Meet. These projects allow you to have conferences on the internet, while other projects in the community enable features such as audio, dial-in, recording, and simulcasting.&lt;/p&gt;
&lt;h2&gt;Why Jitsi?&lt;/h2&gt;
&lt;p&gt;Why was I interested in finding an open source-based solution? I wanted a living project with a community behind it in case I ran into issues or required support. I also needed a solution that I could set up myself and would not require too many resources.&lt;/p&gt;
&lt;p&gt;Official minimum hardware requirements for the Jitsi Meet server are very low. One gigabyte of memory and one CPU core plus 25 gigabytes of storage should be enough when used on a Linux platform. It can be set up directly on your own hardware or on a virtual machine hosted by your favorite Cloud provider.&lt;/p&gt;
&lt;p&gt;I own a ProLiant Microserver Gen8 at home. It runs a hypervisor and hosts several virtual machines (OPNsense Firewall and Nextcloud server). I use this server to perform tests on new operating systems and software stacks. Since I had some room left on the system, I decided to create a new VM for this new project.&lt;/p&gt;
&lt;p&gt;My home network looks like this:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/picture2-1586988219313.png&quot; alt=&quot;picture2&quot;&gt;&lt;/p&gt;
&lt;p&gt;I followed several guides on how to perform the installation:&lt;/p&gt;
&lt;p&gt;Jitsi-Meet official install guide:
&lt;a href=&quot;https://github.com/jitsi/jitsi-meet/blob/master/doc/quick-install.md&quot;&gt;https://github.com/jitsi/jitsi-meet/blob/master/doc/quick-install.md&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://vitux.com/how-to-install-jitsi-meet-video-conference-platform-on-ubuntu/&quot;&gt;https://vitux.com/how-to-install-jitsi-meet-video-conference-platform-on-ubuntu/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;HA Proxy – Let’s Encrypt:
&lt;a href=&quot;https://forum.opnsense.org/index.php?topic=12126.0&quot;&gt;https://forum.opnsense.org/index.php?topic=12126.0&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Advanced Configuration: Behind NAT configuration
&lt;a href=&quot;https://github.com/jitsi/jitsi-meet/blob/master/doc/quick-install.md&quot;&gt;https://github.com/jitsi/jitsi-meet/blob/master/doc/quick-install.md&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Secure Domain: First four steps only.
&lt;a href=&quot;https://github.com/jitsi/jicofo#secure-domain&quot;&gt;https://github.com/jitsi/jicofo#secure-domain&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Installing Jitsi Meet&lt;/h2&gt;
&lt;p&gt;My VM requirements:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;CPU: 1&lt;/li&gt;
&lt;li&gt;RAM: 2 GB&lt;/li&gt;
&lt;li&gt;HDD: 25 GB&lt;/li&gt;
&lt;li&gt;NIC: 1&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Initial Setup&lt;/h2&gt;
&lt;p&gt;Jitsi Meet requirements:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A server running Ubuntu 18.04 LTS&lt;/li&gt;
&lt;li&gt;A non-root user with sudo privileges&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Before starting, update your operating system to the latest packages with the following commands:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;
sudo apt-get update -y
sudo apt-get upgrade -y

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once your system is up to date, restart it to apply the changes.
Next, you will need to set a relevant hostname (node1 in this example) and FQDN on your system. You can do this by running the following command:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;sudo hostnamectl set-hostname node1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Next, open the /etc/hosts file and add the FQDN.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;sudo nano /etc/hosts&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Add the following line and adapt to your host and domain names:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;127.0.1.1 node1.example.com node1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Add the public IP and the relevant domain name, too.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;X.X.X.X node1.example.com node1&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Save and close the file. Then, verify the hostname with the following command:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;hostname -f&lt;/code&gt;&lt;/p&gt;
&lt;h2&gt;Swap Adjustment&lt;/h2&gt;
&lt;p&gt;With a limited amount of memory on the VM, I needed to create and enable a swap file.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;
sudo dd if=/dev/zero of=/swapfile count=2048 bs=1M
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo &apos;/swapfile none swap sw 0 0&apos; | sudo tee -a /etc/fstab

&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Install Java&lt;/h2&gt;
&lt;p&gt;Next, you will need to install Java on your system. You can install OpenJDK JRE 8 by running the following command:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;sudo apt-get install -y openjdk-8-jre-headless&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Once Java is installed, verify the Java version with the following command:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;java -version&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Output:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;
openjdk version &quot;10.0.2&quot; 2018-07-17
OpenJDK Runtime Environment (build 10.0.2+13-Ubuntu-1ubuntu0.18.04.3)
OpenJDK 64-Bit Server VM (build 10.0.2+13-Ubuntu-1ubuntu0.18.04.3, mixed mode)

&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Install Nginx&lt;/h2&gt;
&lt;p&gt;Jitsi Meet uses Nginx as a reverse proxy. So you will need to install it on your system. You can install it with the following command:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;sudo apt-get install nginx -y&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Once Nginx is installed, you can check the Nginx service with the following command:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;sudo systemctl status nginx&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Output:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;
● nginx.service - A high performance web server and a reverse proxy server
   Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2018-11-14 10:06:01 UTC; 1h 4min ago
     Docs: man:nginx(8)
 Main PID: 16734 (nginx)
    Tasks: 2 (limit: 1114)
   CGroup: /system.slice/nginx.service
           ├─16734 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
           └─18709 nginx: worker process

Nov 14 10:06:01 node1 systemd[1]: Starting A high performance web server and a reverse proxy server...

&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Install Jitsi Meet&lt;/h2&gt;
&lt;p&gt;By default, Jitsi Meet is not available in the Ubuntu 18.04 package repositories. So, you will need to add the official Jitsi repository to your system’s package sources.&lt;/p&gt;
&lt;p&gt;You can do this by running the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;wget -qO - https://download.jitsi.org/jitsi-key.gpg.key | sudo apt-key add -
sudo sh -c &quot;echo &apos;deb https://download.jitsi.org stable/&apos; &gt; /etc/apt/sources.list.d/jitsi.list&quot;

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, update the repository and install Jitsi Meet with the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;
sudo apt-get update -y
sudo apt-get install jitsi-meet -y

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;During the installation process, you will need to provide your hostname as shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/picture4-1586988248790.png&quot; alt=&quot;picture4&quot;&gt;&lt;/p&gt;
&lt;p&gt;Provide your hostname and click on the &lt;strong&gt;OK&lt;/strong&gt; button. You will be asked to select the SSL certificate as shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/picture6-1586988261662.png&quot; alt=&quot;picture6&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Local Firewall Update&lt;/h2&gt;
&lt;p&gt;Jitsi Meet requires ports 80 and 443 (TCP) and a range from 10000 to 20000 (UDP) to be opened locally and on your network to work properly. Consider also opening SSH for remote access. You can update the firewall rules by running the following commands:&lt;/p&gt;
&lt;p&gt;Ubuntu Firewall rules:&lt;/p&gt;
&lt;p&gt;Allow SSH for remote access:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;sudo ufw allow ssh&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Allow ports 80 and 443 (TCP) and the UDP range from 10000 to 20000 for Jitsi Meet:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;
sudo ufw allow http
sudo ufw allow https
sudo ufw allow in 10000:20000/udp

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Enable firewall rules.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;sudo ufw enable&lt;/code&gt;&lt;/p&gt;
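&lt;p&gt;Once the firewall is enabled, you can double-check that the rules above are active by asking ufw for its current configuration:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;sudo ufw status verbose&lt;/code&gt;&lt;/p&gt;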
&lt;h2&gt;Local Area Network Firewall Rules (OPNsense)&lt;/h2&gt;
&lt;p&gt;As I had already set up several servers behind my firewall, I had to use HA Proxy to distribute the different HTTP / HTTPS requests to the different servers (Nextcloud, Jitsi Meet).&lt;/p&gt;
&lt;p&gt;I followed the thread below to set everything up. &lt;a href=&quot;https://forum.opnsense.org/index.php?topic=12126.0&quot;&gt;https://forum.opnsense.org/index.php?topic=12126.0&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The thread actually covered the HA Proxy and the Let’s Encrypt topics. This proved to be very helpful in the end. Certificates are now handled by the HA Proxy layer in front of the different servers.&lt;/p&gt;
&lt;h2&gt;Secure Meeting Room Access&lt;/h2&gt;
&lt;p&gt;By default, anyone who has access to your Jitsi instance will be able to start a conference. If your server is open to the world, anyone can have a chat with anyone else. (Unfortunately, I don’t have unlimited resources. Therefore, I needed to set some limits.)&lt;/p&gt;
&lt;p&gt;If you want to limit the ability to start a conference to registered users only, set up a &quot;secure domain&quot;. Follow the instructions at &lt;a href=&quot;https://github.com/jitsi/jicofo#secure-domain&quot;&gt;https://github.com/jitsi/jicofo#secure-domain&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Follow the first 4 steps and you’ll be ready to go! This allows you to set up a user whose credentials grant the rights needed to create a room. A rough sketch of what these steps look like is shown below.&lt;/p&gt;
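&lt;p&gt;For reference only, here is a sketch of what those four steps typically involve on a Debian/Ubuntu package installation, assuming the &lt;code&gt;node1.example.com&lt;/code&gt; domain used earlier. File paths, property names, and service names can vary between Jitsi versions, so follow the official guide linked above for the authoritative details.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;
# 1. In /etc/prosody/conf.avail/node1.example.com.cfg.lua, require authentication
#    on the main VirtualHost and add an anonymous guest domain:
#        VirtualHost &quot;node1.example.com&quot;
#            authentication = &quot;internal_hashed&quot;
#        VirtualHost &quot;guest.node1.example.com&quot;
#            authentication = &quot;anonymous&quot;
#            c2s_require_encryption = false

# 2. In /etc/jitsi/meet/node1.example.com-config.js, point guests at that domain:
#        anonymousdomain: &apos;guest.node1.example.com&apos;,

# 3. In /etc/jitsi/jicofo/sip-communicator.properties, make jicofo require an
#    authenticated user before a room can be opened:
#        org.jitsi.jicofo.auth.URL=XMPP:node1.example.com

# 4. Create the user allowed to open rooms, then restart the services
#    (the videobridge service may be named jitsi-videobridge on older packages):
sudo prosodyctl register myuser node1.example.com mypassword
sudo systemctl restart prosody jicofo jitsi-videobridge2

&lt;/code&gt;&lt;/pre&gt;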
&lt;h2&gt;Access Jitsi Meet&lt;/h2&gt;
&lt;p&gt;If you have followed my instructions, Jitsi Meet is now up and listening on port 443. Open your web browser and type the URL &lt;code&gt;https://node1.example.com&lt;/code&gt; or &lt;code&gt;https://your-server-ip&lt;/code&gt;. You will be redirected to the following page:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/picture7-1586988283022.png&quot; alt=&quot;picture7&quot;&gt;&lt;/p&gt;
&lt;p&gt;Here, provide the room name you desire and click on the &lt;strong&gt;GO&lt;/strong&gt; button. You should see the following on your screen:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/picture8-1586988303937.png&quot; alt=&quot;picture8&quot;&gt;&lt;/p&gt;
&lt;p&gt;Click on the &lt;strong&gt;Allow&lt;/strong&gt; button to start the live video conference.&lt;/p&gt;
&lt;h2&gt;Some last comments&lt;/h2&gt;
&lt;p&gt;After a few hours of work, I had the server ready to go and I could open my first room. If you’ve followed my instructions, you should now be all set to easily meet with your friends, family, and colleagues!&lt;/p&gt;
&lt;p&gt;Keeping a social link is very important during these times. That’s why simple solutions like Jitsi Meet can really be helpful. Jitsi Meet offers an easy way for people to connect.&lt;/p&gt;
&lt;p&gt;There are so many possible applications for this technology. Educators can use this solution, with the relevant hardware configuration, so that an entire class can easily attend a course. Small company meetings could also be hosted via a dedicated server.&lt;/p&gt;
&lt;p&gt;For those who might find technology a little more daunting or less accessible, Jitsi Meet can still be used easily with a simple browser. Think how this could help the elderly keep in touch and interact with their families.&lt;/p&gt;
&lt;p&gt;Stay tuned to the &lt;a href=&quot;/blog&quot;&gt;HPE DEV blog site&lt;/a&gt; for more helpful tutorials.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[WOW – A practiced and perfected design process  Part 1 – Uncovering the merit of UX design to projects]]></title><description><![CDATA[title slide What is WOW? The HPE Experience Studio works with design teams throughout Hewlett Packard Enterprise (HPE) to deliver a…]]></description><link>https://developer.hpe.com/wow-a-practiced-and-perfected-design-process-part-1-uncovering-the-merit/</link><guid isPermaLink="false">https://developer.hpe.com/wow-a-practiced-and-perfected-design-process-part-1-uncovering-the-merit/</guid><pubDate>Mon, 13 Apr 2020 17:45:00 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/title-slide-1587659126865.jpg&quot; alt=&quot;title slide&quot;&gt;&lt;/p&gt;
&lt;h1&gt;What is WOW?&lt;/h1&gt;
&lt;p&gt;The HPE Experience Studio works with design teams throughout Hewlett Packard Enterprise (HPE) to deliver a consistent user experience (UX) for customers. We are constantly learning, iterating, and innovating the UX design process across the HPE portfolio. As the manager for experience design, I’ve found that many businesses struggle with understanding the true value UX designers bring to a project. In this series of blog posts, I want to share with you a method our group developed that demonstrates this value and provides enterprise UX designers with a practiced and perfected path to achieving success. This is a workflow we call WOW (Why On What with customers and constructors). You’ll note that, as with any good UX design, we used a little creativity in the spelling, with two reflected “C’s” (representing the customers and the constructors) used to create the “O”.&lt;/p&gt;
&lt;p&gt;WOW is a domain and persona agnostic workflow that has been perfected over a period of time. The flow doesn’t provide any governance on tools, activities, or timelines, but instead sets the right framework in place to bring about a level of certainty and success to a project while moving from one part of the process to the next. It is an easy, engaging methodology that helps creative teams focus on four important stages of a project:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Uncover why the business (and this particular project) needs a UX&lt;/li&gt;
&lt;li&gt;Involve the customer early in the design phase&lt;/li&gt;
&lt;li&gt;Ensure the constructor (developer) has the right information during design implementation&lt;/li&gt;
&lt;li&gt;Convey the business value of the design&lt;/li&gt;
&lt;/ol&gt;
&lt;br/&gt;
&lt;h2&gt;Uncovering the project value of the UX using a contextual presentation&lt;/h2&gt;
&lt;p&gt;Traditionally, designers tend to focus on what UX design tools and methods they plan to use for a project when they present to stakeholders (such as software developers, program and product managers, etc.). As they attempt to convey the value of a purposefully designed UX, they also cover the revenue and efficiency impact the design will have on customers. Though this approach is good for on-stage presentations, in the HPE Experience Studio, we’ve found that this method is not effective in meetings where stakeholders are looking to understand the value of the UX in the context of the overall project.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/part-1-slide-2-1587659158892.jpg&quot; alt=&quot;part 1 slide 2&quot;&gt;&lt;/p&gt;
&lt;p&gt;Presentations like this tend to generalize outcomes from design efforts, which can undermine the importance of the UX for a project. Stakeholders can become overwhelmed or confused and may not be able to relate the benefits presented directly to the UX within the context of their projects. Because of this, it is important to avoid generalizing UX outcomes and to instead talk about the impact of the UX in terms of the project at hand.&lt;/p&gt;
&lt;p&gt;This point became very clear as we worked with several groups and practiced a concept called contextual inquiry, which is part of a &lt;a href=&quot;https://en.wikipedia.org/wiki/User-centered_design&quot;&gt;user-centered design&lt;/a&gt; (UCD) research method. Experience in this area helped us to devise a contextual UX presentation, which focuses on business KPIs that can be used to easily articulate the design-value for a specific project.&lt;/p&gt;
&lt;h2&gt;Shift mindsets from pretty to pragmatic by focusing on tools that address KPIs&lt;/h2&gt;
&lt;p&gt;Many times a UX project is perceived as an attempt to beautify the existing look and feel of a product. Though this is one important aspect of UX design, there is far more to it. More importantly, enterprise stakeholders should understand how the UX design relates to the overall value of the product.&lt;/p&gt;
&lt;p&gt;Almost every enterprise product is developed with four KPIs (key performance indicators) in mind:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;New customer acquisition&lt;/li&gt;
&lt;li&gt;Customer retention&lt;/li&gt;
&lt;li&gt;Increased yield&lt;/li&gt;
&lt;li&gt;Reduced costs&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Often, the reduced cost KPI is the biggest impediment for considering UX proposals and investing in a focus on UX within a product.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/part-1-slide-3-1587659291004.jpg&quot; alt=&quot;part 1 slide 3&quot;&gt;&lt;/p&gt;
&lt;p&gt;Using specific tools, the UX design process can actually play a key role in helping developers address KPIs and easily justify “Why” a product team should invest in UX design. Tools like &lt;a href=&quot;https://en.wikipedia.org/wiki/System_usability_scale&quot;&gt;SUS (System Usability Scale)&lt;/a&gt; and heuristic evaluations can uncover design debts present in the product that could lead to issues down the road.  Heuristic analysis and evaluations can illustrate the scope of improvements a properly designed UX can bring in terms of product usability, encouraging the &lt;em&gt;acquisition of new customers&lt;/em&gt;. Using a SWOT analysis, designers can pinpoint current product functionality debts and highlight competitive advantages, which will help &lt;em&gt;increase the yield&lt;/em&gt; of the product.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/part-1-slide-4-1587659222302.jpg&quot; alt=&quot;part 1 slide 4&quot;&gt;&lt;/p&gt;
&lt;p&gt;By measuring the product’s system usability with SUS, the UX design process addresses the requirement to retain customers by making sure customers feel they have been heard and that their requirements are addressed on a timely basis. Performing a UX-competitive analysis on a product helps product managers identify and prioritize investment opportunities. This can lower expenses by reducing the number of wrong investments. As you can see, the design process not only uncovers issues but also very elegantly uncovers gaps and opportunities, a combination that is often hard to find!&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/part-1-slide-5-1587659254292.jpg&quot; alt=&quot;part 1 slide 5&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;There is a lot more to UX design than what is often considered. UX design processes aren’t intended to just make an application look pretty. The benefits of using a thoughtfully developed UX design process extend way beyond the resulting user interface (UI). Benefits can be obtained immediately by involving UX designers at the very beginning of a project. As I’ve pointed out, in the first part of our WOW workflow, designers use tools that really help uncover the business value of UX design. In Part 2, I will cover steps 2 and 3 of WOW; the importance of involving the customer early in the design phase, and ensuring that the constructor (developer) has the right information during design implementation.&lt;/p&gt;
&lt;p&gt;Feel free to reach out to me &lt;a href=&quot;https://twitter.com/uxwithparul&quot;&gt;@uxwithparul&lt;/a&gt; if you have any questions. Follow subsequent posts in this series and view other informative blogs at &lt;a href=&quot;/blog&quot;&gt;HPE DEV.&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[App DEV and the HPE Container Platform]]></title><description><![CDATA[hpecpimage2 Editor’s Note – HPE Ezmeral Container Platform is now HPE Ezmeral Runtime Enterprise. For more information on why the name was…]]></description><link>https://developer.hpe.com/app-dev-and-the-hpe-container-platform/</link><guid isPermaLink="false">https://developer.hpe.com/app-dev-and-the-hpe-container-platform/</guid><pubDate>Thu, 09 Apr 2020 15:41:06 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/3/hpecpimage2-1586446910822.jpg&quot; alt=&quot;hpecpimage2&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Editor’s Note – HPE Ezmeral Container Platform is now HPE Ezmeral Runtime Enterprise&lt;/strong&gt;. For more information on why the name was changed, please &lt;a href=&quot;https://community.hpe.com/t5/HPE-Ezmeral-Uncut/HPE-Ezmeral-Container-Platform-is-now-HPE-Ezmeral-Runtime/ba-p/7151720#.YW7nOxrMKM8&quot;&gt;click here&lt;/a&gt;.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;The HPE Container Platform offers enterprise IT and software development teams an agile application development and delivery platform. It can be used for both cloud-native microservices-based applications, as well as non-cloud-native monolithic applications. In addition to providing IT Operations teams with a unified control plane based on 100% open source Kubernetes, the HPE Container Platform gives data scientists and application developers a broad set of tools. They can use these tools to build a variety of use cases – including application modernization, CI/CD pipelines, machine learning, and edge analytics.&lt;/p&gt;
&lt;p&gt;The HPE Container Platform combines several innovations from HPE. One such innovation is KubeDirector, an open source project integrated into the HPE Container Platform. KubeDirector manages an application’s dependency on access to the root file system. This dependency on root file systems usually makes it impossible for non-cloud-native applications to run in containers. With KubeDirector, enterprises can deploy all their enterprise applications on a common Kubernetes framework. The ability to deploy and manage these as containers allows them to take advantage of portability and cloud economics.&lt;/p&gt;
&lt;p&gt;In addition to KubeDirector, the HPE Container Platform offers a rich set of additional functionality. For example, it provides a high level of security with integrations into enterprise security and authentication services, such as SAML (Security Assertion Markup Language), AD (Active Directory), Kerberos, and more. HPE is working on additional extensions in this area, including integrating HPE’s “silicon root of trust”, the link between HPE silicon and its Integrated Lights-Out (iLO) management system, in support of the concept of Zero Trust. The SPIRE and SPIFFE open source initiatives, contributed by HPE’s recently acquired Scytale, are expected to play an important role in this.&lt;/p&gt;
&lt;p&gt;Using the HPE Container Platform also provides unprecedented resiliency and scalability, supporting standard CSI Storage drivers and several different types of storage, including the HPE Data Fabric (previously the MapR Data Fabric). These are important elements that developers or data scientists can leverage in their applications without writing a single line of code. These capabilities empower data scientists and data analysts to quickly stitch together machine learning and analytics pipelines that can then be deployed on-premises, in multiple public clouds, or at the edge.&lt;/p&gt;
&lt;p&gt;Today’s enterprises look to benefit from the many advantages provided through a cloud-native architecture. They desire the capabilities it offers to rapidly process data and glean insights that help them better service their customers. They want to be able to pay for only the services they need. And they appreciate being able to develop and run applications in an agile environment that allows them to deploy applications on any infrastructure – on-premises, cloud, or at the edge.&lt;/p&gt;
&lt;p&gt;Learn more about how HPE Container Platform enables enterprises to achieve their transformation goals by reading Matheen Raza’s blog post, &lt;a href=&quot;https://www.hpe.com/us/en/insights/articles/the-cloud-is-an-experience-not-a-destination-2002.html?sdfsdf&quot;&gt;The cloud is an experience, not a destination.&lt;/a&gt; In his blog, you’ll also find a link to a podcast by Chris Gardner, principal analyst serving infrastructure and operations professionals, Forrester and Tom Phelan, Fellow, hybrid IT infrastructure, HPE on how containers and open source Kubernetes accelerate innovation. Stay up-to-date on what’s happening in this area by going to the HPE DEV &lt;a href=&quot;https://developer.hpe.com/platform/hpe-container-platform/home&quot;&gt;HPE Container Platform page.&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE OneView Chef Cookbook now supports HPE OneView 5.0]]></title><description><![CDATA[The HPE OneView Chef Cookbook 3.2.0 now supports HPE OneView 5.0 (REST API version 1200). The Cookbook provides Chef recipes to interact…]]></description><link>https://developer.hpe.com/hpe-oneview-chef-cookbook-now-supports-hpe-oneview-50/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-oneview-chef-cookbook-now-supports-hpe-oneview-50/</guid><pubDate>Mon, 06 Apr 2020 16:38:30 GMT</pubDate><content:encoded>&lt;p&gt;The HPE OneView Chef Cookbook 3.2.0 now supports HPE OneView 5.0 (REST API version 1200). The Cookbook provides Chef recipes to interact with HPE OneView and HPE Synergy Image Streamer APIs, enabling IT automation teams to easily build integrations and scalable solutions. By automating the provisioning of physical infrastructure on-demand, using software-defined templates from HPE OneView, this integration allows administrators to create a resource topology and user experience similar to that of a public cloud on their own physical infrastructure. In addition, it significantly increases reliability, compliance, and deployment flexibility.&lt;/p&gt;
&lt;p&gt;Chef recipes configure a series of HPE OneView and HPE Synergy Composable Infrastructure resources to a particular state. They define how the resources are to be configured, which specific versions of software to run, and ensure that software is installed in the correct order, based on dependencies. Chef checks that each resource is properly configured and corrects the configuration of any resources that are not in the desired state. This lets IT organizations streamline the task of configuring and maintaining a company&apos;s servers, in addition to enabling integration with cloud-based platforms to automatically provision and configure new machines.&lt;/p&gt;
&lt;p&gt;A Chef Cookbook is where a collection of recipes are stored. Each cookbook should relate to a single task, but can have multiple recipes or server configurations stored together. The Chef server stores each of these cookbooks. When a new Chef client node checks in with the server, recipes are sent to tell the node how to configure itself. The client will then check in every now and again to make sure that no changes have occurred and no modifications need to be made. If there is a change, the client will take action to manage the change. Patches and updates can be rolled out over an entire infrastructure simply by changing the recipe, eliminating the need to interact with each machine individually.&lt;/p&gt;
&lt;p&gt;Chef is one of the industry&apos;s most popular Infrastructure-as-Code tools. It is based on the Ruby programming language, which can be leveraged to model solutions for complex or custom scenarios. Additional support can be found in Chef’s open-source development community.&lt;/p&gt;
&lt;p&gt;The list of supported resources and changes is available at:&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/HewlettPackard/oneview-chef/releases/tag/v3.2.0&quot;&gt;https://github.com/HewlettPackard/oneview-chef/releases/tag/v3.2.0&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The repository with code and examples is available on GitHub at:&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/HewlettPackard/oneview-chef&quot;&gt;https://github.com/HewlettPackard/oneview-chef&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Chef Supermarket details are available at:&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://supermarket.chef.io/cookbooks/oneview&quot;&gt;https://supermarket.chef.io/cookbooks/oneview&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Technology helps during uncertain times  - Newsletter]]></title><link>https://developer.hpe.com/2020-April-01/</link><guid isPermaLink="false">https://developer.hpe.com/2020-April-01/</guid><pubDate>Wed, 01 Apr 2020 05:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Project Golang SDK: A valuable lesson in doing]]></title><description><![CDATA[This year, I decided to learn the Golang programming language. I have always preferred learning any language or technology by working on a…]]></description><link>https://developer.hpe.com/project-golang-sdk-a-valuable-lesson-in-doing/</link><guid isPermaLink="false">https://developer.hpe.com/project-golang-sdk-a-valuable-lesson-in-doing/</guid><pubDate>Mon, 30 Mar 2020 15:54:22 GMT</pubDate><content:encoded>&lt;p&gt;This year, I decided to learn the Golang programming language. I have always preferred learning any language or technology by working on a project, rather than just implementing some simple example programs from tutorials. So, I carefully considered what sort of project I could do that would not only provide me with a valuable learning experience, but also make a positive contribution. In this blog post, I’d like to share with you how the project I chose helped me achieve my goal in learning Golang.&lt;/p&gt;
&lt;p&gt;In considering the project I wanted to take on, I had a strong urge to work on developing an SDK (software development kit) for Hewlett Packard Enterprise’s (HPE) Composable Fabric Manager API (application programming interface). Why an SDK? Glad you asked! SDKs are like wrappers to APIs, giving them extra capabilities. For instance:&lt;/p&gt;
&lt;p&gt;Developers don’t have to worry about creating a JSON payload in the right format and taking care of the endpoints with parameters. All they need to do is to send the required parameters and the SDK will take care of REST operations. More importantly, the developer can implement his software suite employing the programming language he is most comfortable with. In this case, it would be Golang.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;An SDK adds another layer of security to the application.&lt;/li&gt;
&lt;li&gt;An SDK also isolates the developer from changes in the API, which significantly reduces the time and effort needed to maintain the code base.&lt;/li&gt;
&lt;li&gt;SDKs simplify development of applications that need to scale to multiple devices.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Given all this, I was pretty much convinced that my idea to implement the Golang SDK was a good one. I started off by reading the Golang documentation to understand the syntax, see how similar or how different it is from other programming languages, and learn about features that are exclusive to Golang.&lt;/p&gt;
&lt;p&gt;Here are a few key things I quickly learned:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Golang is a compiled language. It also supports cross compilation. For those who may be unfamiliar with this term, cross compilation is the mechanism for creating executable code for a platform other than the one on which the compiler is running.&lt;/li&gt;
&lt;li&gt;It was mostly built for those who want to write less code and achieve more.&lt;/li&gt;
&lt;li&gt;It is predominantly a functional programming language. However, Golang does incorporate a touch of an object-oriented approach.&lt;/li&gt;
&lt;li&gt;It supports the development of web applications, APIs, etc. pretty well.&lt;/li&gt;
&lt;li&gt;Golang doesn’t have exceptions or assertions, which might feel a bit odd at first if one is used to coding with programming languages like Python, Java or C#.&lt;/li&gt;
&lt;li&gt;Golang also has a functional testing module built in, which makes it a great programming language to use in CI/CD pipelines.&lt;/li&gt;
&lt;li&gt;Golang is also web aware. That is, it can pull dependencies directly from the web or from version control systems like GitHub, SVN, etc.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Truly, I would not be surprised if a Python lover discovers a similar love for Golang!&lt;/p&gt;
&lt;p&gt;After learning the basics of Golang, and few advanced concepts, I started working on the design of my SDK. I must say, it involved a lot of thought and iteration. Finally, I came up with a nice design and started the implementation. Here are some of the things I learned while designing the SDK for this API:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The SDK must mimic the API. This means that the resource files in the SDK must implement functions corresponding to the endpoints which are defined by the API. For example, if you have an API endpoint for CRUD operations (CRUD — Create, Read, Update and Delete, which is very common in REST API terminologies) that is /rest/v1/users, then the SDK must have a method like AddUser(newUser), GetUsers(), UpdateUser(userUUID, updatedUserInfo) and DeleteUser(userUUID). The naming convention must be very intuitive. (A sketch of the REST calls such methods would wrap is shown after this list.)&lt;/li&gt;
&lt;li&gt;A standard must be followed to map the API resources and their attributes to the resources and attributes in the SDK. This means that, in some sense, the resources are datatypes in the SDK. This is an advantage, since the developer can map the API resources to variables instead of parsing the JSON payloads all by himself.&lt;/li&gt;
&lt;li&gt;The SDK should have documentation. Reading the documentation for either the API or the SDK must help you understand how the API works. Any limitations must be explicitly stated.&lt;/li&gt;
&lt;li&gt;The SDK must implement a good error handling mechanism, with error logging and error messages. Logging must not record any sensitive information.&lt;/li&gt;
&lt;/ol&gt;
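&lt;p&gt;To make the first point concrete, here is a rough sketch of the kind of raw REST calls that SDK methods such as AddUser() or GetUsers() would wrap. The host name, token handling, and payload fields below are purely illustrative assumptions, not the actual Composable Fabric Manager API; the point is simply that the SDK hides the endpoints, headers, and JSON formatting from the developer.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;
# Illustrative only: host, authentication scheme, and payloads are assumptions.
CFM=https://cfm.example.com
TOKEN=&quot;&lt;session token obtained from a login endpoint&gt;&quot;

# AddUser(newUser)                       -&gt;  POST /rest/v1/users with a JSON payload
curl -k -X POST &quot;$CFM/rest/v1/users&quot; \
     -H &quot;Authorization: Bearer $TOKEN&quot; -H &quot;Content-Type: application/json&quot; \
     -d &apos;{&quot;username&quot;: &quot;jane&quot;, &quot;role&quot;: &quot;viewer&quot;}&apos;

# GetUsers()                             -&gt;  GET /rest/v1/users
curl -k -X GET &quot;$CFM/rest/v1/users&quot; -H &quot;Authorization: Bearer $TOKEN&quot;

# UpdateUser(userUUID, updatedUserInfo)  -&gt;  PUT /rest/v1/users/&lt;uuid&gt;
curl -k -X PUT &quot;$CFM/rest/v1/users/123e4567-uuid&quot; \
     -H &quot;Authorization: Bearer $TOKEN&quot; -H &quot;Content-Type: application/json&quot; \
     -d &apos;{&quot;role&quot;: &quot;admin&quot;}&apos;

# DeleteUser(userUUID)                   -&gt;  DELETE /rest/v1/users/&lt;uuid&gt;
curl -k -X DELETE &quot;$CFM/rest/v1/users/123e4567-uuid&quot; -H &quot;Authorization: Bearer $TOKEN&quot;

&lt;/code&gt;&lt;/pre&gt;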
&lt;p&gt;I built the unit test suite alongside the implementation, together with a system test suite. Most of the time, it is easier to implement functional testing only. But sometimes, it is not. With all this done, I am now confident that I can demo my Golang SDK.&lt;/p&gt;
&lt;p&gt;As Dave Packard once pointed out, it’s important to always set an extraordinary goal by which all of your trivial goals can be achieved. Going through the process that I have shared above, I not only learned a new programming language, but also produced something valuable. I am looking forward to getting this project reviewed by the Open Source Review Board at HPE and possibly making it an open source project. If this project becomes open source, then I’ll probably have to work on deciding on the contribution guidelines, test frameworks, how to build validations, and even set up a CI/CD pipeline using Travis CI or Jenkins or GitHub Actions or whatever. A lot of work to do, to be sure!&lt;/p&gt;
&lt;p&gt;Interested in learning more about SDKs and APIs? You’ll find a lot of good material on the &lt;a href=&quot;/blog&quot;&gt;HPE DEV blog&lt;/a&gt; site. Make sure you check it out. Or, sign up for the &lt;a href=&quot;https://developer.hpe.com/newsletter-signup&quot;&gt;HPE DEV newsletter&lt;/a&gt; to read the latest articles.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE OneView RUBY SDK now available with support for HPE OneView 5.0]]></title><description><![CDATA[With a modern, RESTful API and a comprehensive partner ecosystem, HPE OneView delivers a software-defined, programmatic approach to managing…]]></description><link>https://developer.hpe.com/hpe-oneview-ruby-sdk-now-available-with-support-for-hpe-oneview-50/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-oneview-ruby-sdk-now-available-with-support-for-hpe-oneview-50/</guid><pubDate>Thu, 19 Mar 2020 19:59:13 GMT</pubDate><content:encoded>&lt;p&gt;With a modern, RESTful API and a comprehensive partner ecosystem, HPE OneView delivers a software-defined, programmatic approach to managing infrastructure for efficient workflow automation. The &lt;a href=&quot;https://github.com/HewlettPackard/oneview-sdk-ruby/releases/tag/v5.10.0&quot;&gt;HPE OneView Ruby SDK v5.10.0&lt;/a&gt; is now available and supports HPE OneView 5.0 (REST API version 1200). The HPE OneView RUBY SDK extends the HPE OneView API language support for the Ruby language. This Ruby SDK provides a Ruby Library to easily interact with the HPE OneView API, enabling developers to easily build integrations and scalable solutions with HPE OneView.&lt;/p&gt;
&lt;p&gt;This SDK allows Ruby developers to programmatically control HPE OneView managed resources using an infrastructure-as-code approach for physical compute, storage, and fabric resources. Using infrastructure-as-code enables complete datacenter automation, consistent reproducibility, versioning, and roll back.&lt;/p&gt;
&lt;p&gt;Ruby is a popular software language that supports multiple programming paradigms, including procedural, object-oriented, and functional programming. A number of widely used tools and frameworks, such as Puppet, are written in Ruby. In addition, it is a popular choice for building web and data analysis applications.&lt;/p&gt;
&lt;p&gt;For more information:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/oneview-sdk-ruby/releases/tag/v5.10.0&quot;&gt;Release content&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/oneview-sdk-ruby&quot;&gt;Code repository and examples&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://rubygems.org/gems/oneview-sdk/versions/5.10.0&quot;&gt;Ruby Gem&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[HPE OneView Puppet Module v2.4.0 now supports HPE OneView 5.0]]></title><description><![CDATA[Hewlett Packard Enterprise is pleased to introduce the HPE OneView Puppet Module v2.4.0 that now supports HPE OneView 5.0 (REST API version…]]></description><link>https://developer.hpe.com/hpe-oneview-puppet-module-v240-now-supports-hpe-oneview-50/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-oneview-puppet-module-v240-now-supports-hpe-oneview-50/</guid><pubDate>Thu, 19 Mar 2020 19:56:04 GMT</pubDate><content:encoded>&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/integrated-systems/software.html&quot;&gt;Hewlett Packard Enterprise is pleased to introduce the HPE OneView&lt;/a&gt; Puppet Module v2.4.0 that now supports HPE OneView 5.0 (REST API version 1200). The &lt;a href=&quot;https://github.com/HewlettPackard/oneview-puppet/releases/tag/v2.4.0&quot;&gt;HPE OneView Puppet Module&lt;/a&gt; automates the configuring and provisioning of bare-metal infrastructure. This enables faster and easier app and service delivery, while increasing reliability, maintaining compliance, and offering deployment flexibility. By automating the provisioning of physical infrastructure on-demand using software-defined templates from HPE OneView, administrators can create a resource topology similar to that of a public cloud on their own physical infrastructure.&lt;/p&gt;
&lt;p&gt;The Puppet Module for HPE OneView allows for management of HPE OneView appliances through the use of Puppet manifests and resource style declarations. The manifests use the HPE OneView Ruby SDK to make calls to the HPE OneView REST API. The module adds several resource types to Puppet and custom methods to allow users to easily create, update, query, and delete hardware resources.&lt;/p&gt;
&lt;p&gt;Puppet offers customers a mature configuration management solution. It is easy to install and features an excellent graphical user interface (GUI).  It runs on all major operating systems, such as Linux, Windows, Unix, and even Mac OS X.&lt;/p&gt;
&lt;p&gt;For more information:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/oneview-puppet/releases/tag/v2.4.0&quot;&gt;Release content&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/oneview-puppet&quot;&gt;Code repository and examples&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://forge.puppet.com/hewlettpackard/oneview&quot;&gt;Puppet Forge&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[HPE CSI Driver for Kubernetes: Snapshots, Clones and Volume Expansion]]></title><description><![CDATA[The Container Storage Interface (CSI) introduces enterprise data management, such as volume snapshots and volume clones as native Kubernetes…]]></description><link>https://developer.hpe.com/hpe-csi-driver-for-kubernetes-snapshots-clones-and-volume-expansion/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-csi-driver-for-kubernetes-snapshots-clones-and-volume-expansion/</guid><pubDate>Thu, 19 Mar 2020 03:36:56 GMT</pubDate><content:encoded>&lt;p&gt;The Container Storage Interface (CSI) introduces enterprise data management, such as volume snapshots and volume clones as native Kubernetes objects. In Kubernetes 1.17, these interfaces have matured to a beta state. Recently, Hewlett Packard Enterprise (HPE) &lt;a href=&quot;https://community.hpe.com/t5/HPE-Storage-Tech-Insiders/HPE-CSI-Driver-for-Kubernetes-1-1-0-Generally-Available/ba-p/7082995&quot;&gt;released version 1.1.0&lt;/a&gt; of the HPE CSI Driver for Kubernetes with full support for these features. Let’s walk through how a Kubernetes user can take advantage of these constructs to become more agile by deploying, testing and running stateful applications on Kubernetes.&lt;/p&gt;
&lt;h1&gt;Deploy the CSI driver with Helm&lt;/h1&gt;
&lt;p&gt;In this tutorial, upstream Kubernetes 1.17.4 is being used along with Helm 3. Let’s add in the HPE storage container orchestrator deployments Helm repository.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;helm repo add hpe-storage https://hpe-storage.github.io/co-deployments/
&quot;hpe-storage&quot; has been added to your repositories
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Before installing the Helm chart, a values file needs to be created to instruct the driver on how to find and authenticate to the backend storage system. For this exercise, the &lt;code&gt;StorageClass&lt;/code&gt; that will be installed is also marked as the default &lt;code&gt;StorageClass&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Contents of a file named values.yaml
secret:
  backend: 192.168.1.1
  username: admin
  password: admin
storageClass:
  defaultClass: true
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Install the CSI driver into the &quot;kube-system&quot; namespace.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;helm install hpe-csi-driver hpe-storage/hpe-csi-driver --version 1.1.0 --namespace kube-system -f values.yaml
NAME: hpe-csi-driver
LAST DEPLOYED: Tue Mar 17 09:21:12 2020
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The required components should come online fairly quickly. The following &lt;code&gt;kubectl&lt;/code&gt; command may be used to monitor the driver workloads.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;kubectl get pods --all-namespaces -l &apos;app in (nimble-csp, hpe-csi-node, hpe-csi-controller)&apos;
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   hpe-csi-controller-7d9cd6b855-zzmd9   5/5     Running   0          15s
kube-system   hpe-csi-node-dk5t4                    2/2     Running   0          15s
kube-system   hpe-csi-node-pwq2d                    2/2     Running   0          15s
kube-system   nimble-csp-546c9c4dd4-5lsdt           1/1     Running   0          15s
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As described above, a default &lt;code&gt;StorageClass&lt;/code&gt; is also deployed on the cluster.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;kubectl get storageclass
NAME                     PROVISIONER   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
hpe-standard (default)   csi.hpe.com   Delete          Immediate           true                   20s
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;Kubernetes distribution specific details&lt;/h1&gt;
&lt;p&gt;As per the Kubernetes Special Interest Group (SIG) Storage, the snapshot controllers, custom resource definitions and RBAC resources should be deployed on the cluster by the vendor of the Kubernetes distribution, not the CSI driver vendor. These resources are not deployed on upstream Kubernetes 1.17.4, which is being used in this tutorial. Now, let’s deploy the necessary resources.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/deploy/kubernetes/snapshot-controller/rbac-snapshot-controller.yaml
kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The resources will be deployed in the &quot;default&quot; namespace.&lt;/p&gt;
&lt;p&gt;For each CSI driver that supports snapshots, at least one &lt;code&gt;VolumeSnapshotClass&lt;/code&gt; object needs to be created. There’s only one backend that supports snapshots on this cluster and the  &lt;code&gt;VolumeSnapshotClass&lt;/code&gt; is therefore marked as default, which makes it easy for users to not care about implementation details.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: hpe-snapshot
  annotations:
    snapshot.storage.kubernetes.io/is-default-class: &quot;true&quot;
driver: csi.hpe.com
deletionPolicy: Delete
parameters:
  description: &quot;Snapshot created by the HPE CSI Driver&quot;
  csi.storage.k8s.io/snapshotter-secret-name: nimble-secret
  csi.storage.k8s.io/snapshotter-secret-namespace: kube-system
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; All YAML presented in this blog post should be created with &lt;code&gt;kubectl create -f- &amp;#x3C;hit ENTER, then paste the content and hit CTRL-D on a new line&gt;&lt;/code&gt; unless otherwise specified.&lt;/p&gt;
&lt;h1&gt;Get started!&lt;/h1&gt;
&lt;p&gt;The next set of tasks is completely agnostic to which particular storage vendor’s CSI driver is being used. This is the ideal behavior for users interacting with Kubernetes, who should not have to worry about implementation details.&lt;/p&gt;
&lt;p&gt;For this tutorial, a dual replica Redis deployment is being used as an example application. The default is three replicas, but the cluster only has two worker nodes and affinity rules can’t be fulfilled for the Redis chart with three replicas.&lt;/p&gt;
&lt;p&gt;Ensure the upstream Helm stable repo is accessible.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;helm repo add stable https://kubernetes-charts.storage.googleapis.com
&quot;stable&quot; has been added to your repositories
helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the &quot;stable&quot; chart repository
...Successfully got an update from the &quot;hpe-storage&quot; chart repository
Update Complete. ⎈ Happy Helming!⎈
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In subsequent examples, some output has been truncated to enhance readability. Now, let’s install Redis and insert some data.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;helm install prod stable/redis-ha --version 4.4.1 --set-string replicas=2
kubectl exec -it prod-redis-ha-server-0 sh -n default
Defaulting container name to redis.
/data $ redis-cli set hpedev testing
OK
/data $ exit
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;There’s a key in the Redis database named &quot;hpedev&quot; with the value &quot;testing&quot;. Imagine this as the state of the application that needs to be preserved. Now, let’s create a snapshot of the Persistent Volume Claims (PVC) that the Redis application requested.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;
---

apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: snapshot-0
spec:
  source:
    persistentVolumeClaimName: data-prod-redis-ha-server-0

---

apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: snapshot-1
spec:
  source:
    persistentVolumeClaimName: data-prod-redis-ha-server-1
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If everything went well, the snapshots may be enumerated.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;kubectl get volumesnapshots
NAME         READYTOUSE   SOURCEPVC                     SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS   SNAPSHOTCONTENT                                    CREATIONTIME   AGE
snapshot-0   true         data-prod-redis-ha-server-0                           10Gi          hpe-snapshot    snapcontent-abc0c69c-a22e-499e-8353-b6a6611cd283   16s            17s
snapshot-1   true         data-prod-redis-ha-server-1                           10Gi          hpe-snapshot    snapcontent-b7f81e8c-661d-42df-b452-13cdab878505   16s            17s
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We now have the opportunity to instantiate another Redis instance using these snapshots as the source for a new deployment. The key here is that the PVCs need to be created before we bring the new deployment online. Since naming is deterministic for Helm charts, this is quite simple. Create the following PVCs.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;
---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-test-redis-ha-server-0
spec:
  dataSource:
    name: snapshot-0
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-test-redis-ha-server-1
spec:
  dataSource:
    name: snapshot-1
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, bring up a &quot;test&quot; deployment and read the key we inserted prior and insert another key we want to use for a subsequent test.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;helm install test stable/redis-ha --version 4.4.1 --set-string replicas=2
kubectl exec -it test-redis-ha-server-0 sh -n default
Defaulting container name to redis.
/data $ redis-cli get hpedev
&quot;testing&quot;
/data $ redis-cli set upgrade anothertest
OK
/data $ exit
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It’s now possible to transform the data of the &quot;test&quot; deployment without disturbing the data of the &quot;prod&quot; deployment. This opens up the possibility to create advanced testing and development workflows that use an exact representation of production data. Whether this dataset is a few bytes or a handful of terabytes, the operation will only take a few seconds to execute as the snapshots and clones are not making any copies of the source data.&lt;/p&gt;
&lt;p&gt;The &quot;upgrade&quot; key inserted above will now be used in the next workflow. Clone directly from an existing PVC without creating a snapshot. Let’s now create a set of new PVCs.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;
---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-clone-redis-ha-server-0
spec:
  dataSource:
    name: data-test-redis-ha-server-0
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-clone-redis-ha-server-1
spec:
  dataSource:
    name: data-test-redis-ha-server-1
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Create a new &quot;clone&quot; Redis instance and retrieve the key from the previous workflow.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;helm install clone stable/redis-ha --version 4.4.1 --set-string replicas=2
$ kubectl exec -it clone-redis-ha-server-0 sh -n default
Defaulting container name to redis.
/data $ redis-cli get upgrade
&quot;anothertest&quot;
/data $ exit
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Not only does this demonstrate the ability to clone directly from a PVC as declared in the &lt;code&gt;dataSource&lt;/code&gt;, but it also demonstrates the ability to perform a cloning operation on an already existing clone. It’s also possible to create forks of the Redis database to create even more sophisticated workflows. A practical example would be to create a snapshot of a production instance, clone from that instance, obfuscate some data for the end user (could be a potentially IO intensive operation when working with terabyte datasets) and then use the obfuscated clone as a source for subsequent workflows presented to end-users. The idea is to obfuscate the data once and stamp out many new permutations quickly from that one source.&lt;/p&gt;
&lt;h1&gt;Volume expansion&lt;/h1&gt;
&lt;p&gt;One of the most common “Day 2” operations in storage and data management is to expand volume capacity. This feature has been in beta since Kubernetes 1.16 and is now available in the HPE CSI Driver for Kubernetes as a supported feature. In true Kubernetes fashion, the end user who created the PVC may simply increase the capacity in the PVC specification, and the CSI resizer will pick it up and perform all the necessary operations. These operations include increasing the backend storage system volume size, rescanning the multipath device on the host and finally growing the filesystem. This used to be a tedious operation that required both a storage admin and a Kubernetes admin to satisfy a user requirement, which is very counter-productive.&lt;/p&gt;
&lt;p&gt;Let’s expand the storage requests for the Redis production instance (this can be done with &lt;code&gt;kubectl edit&lt;/code&gt; or &lt;code&gt;kubectl patch&lt;/code&gt; as well; a &lt;code&gt;kubectl patch&lt;/code&gt; form is sketched after the specification below). Run &lt;code&gt;kubectl apply&lt;/code&gt; with the following specification.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;
---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-prod-redis-ha-server-0
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 32Gi

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-prod-redis-ha-server-1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 32Gi
&lt;/code&gt;&lt;/pre&gt;
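&lt;p&gt;As an alternative to applying the full claim manifests, the same expansion can typically be done in place with &lt;code&gt;kubectl patch&lt;/code&gt;. The following one-liner is a sketch of that form for the first claim; the second claim would be patched the same way.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;
kubectl patch pvc data-prod-redis-ha-server-0 \
  -p &apos;{&quot;spec&quot;:{&quot;resources&quot;:{&quot;requests&quot;:{&quot;storage&quot;:&quot;32Gi&quot;}}}}&apos;

&lt;/code&gt;&lt;/pre&gt;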
&lt;p&gt;The original size of the PVC was 10Gi. In a few moments, the new size should have been picked up by the &lt;code&gt;Pod&lt;/code&gt; running Redis.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;kubectl exec -it prod-redis-ha-server-0 sh -n default
Defaulting container name to redis.
/data $ df -h .
Filesystem                Size      Used Available Use% Mounted on
/dev/mapper/mpathi       32.0G     32.7M     32.0G   0% /data
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;Next steps&lt;/h1&gt;
&lt;p&gt;Stay tuned to HPE DEV for future blogs regarding the HPE CSI Driver for Kubernetes. In the meantime, connect with us on &lt;a href=&quot;https://hpedev.slack.com&quot;&gt;Slack&lt;/a&gt;. We hang out in #kubernetes and #nimblestorage.&lt;/p&gt;
&lt;p&gt;If you want to learn more about Kubernetes, CSI and the integration with HPE storage products, here are a few pointers that would get you started.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Learn about Kubernetes on &lt;a href=&quot;https://kubernetes.io&quot;&gt;kubernetes.io&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Explore the &lt;a href=&quot;https://kubernetes-csi.github.io/&quot;&gt;Container Storage Interface&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Check out the &lt;a href=&quot;https://developer.hpe.com/platform/hpe-container-platform/home&quot;&gt;HPE Container Platform&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;GitHub repository for the &lt;a href=&quot;https://github.com/hpe-storage/csi-driver&quot;&gt;HPE CSI Driver for Kubernetes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Why DevSecOps approach is key to mainstream container use]]></title><description><![CDATA[Editor’s note: This article was originally posted on HPE Enterprise.nxt on March 17, 2020 Container-based deployments are gaining in…]]></description><link>https://developer.hpe.com/why-devsecops-approach-is-key-to-mainstream-container-use/</link><guid isPermaLink="false">https://developer.hpe.com/why-devsecops-approach-is-key-to-mainstream-container-use/</guid><pubDate>Tue, 17 Mar 2020 11:46:32 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s note: This article was originally posted on HPE Enterprise.nxt on March 17, 2020&lt;/strong&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;br /&gt;
&lt;p&gt;Container-based deployments are gaining in popularity, but to maintain this momentum, security must start early in project development.&lt;/p&gt;
&lt;p&gt;Containers offer a range of benefits across enterprise computing environments―from corporate data centers to the cloud to the edge―and IT organizations are looking to increase use cases. But for container-based deployments to continue on this upward trajectory, security is a must, and it needs to be addressed at the start of development using DevSecOps best practices.&lt;/p&gt;
&lt;p&gt;Join this discussion with HPE Pointnext Services&apos; Simon Leech and host Dana Gardner to learn about the advantages containers bring, what it takes to move your projects from proof of concept to mainstream production, and why a DevSecOps approach to security is key to that.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=ayoJY1EGsAw&quot;&gt;&lt;img src=&quot;https://img.youtube.com/vi/ayoJY1EGsAw/hqdefault.jpg&quot; alt=&quot;Containers go mainstream&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;p&gt;&lt;strong&gt;Dana Gardner:&lt;/strong&gt; Hello, and welcome to the next edition of the BriefingsDirect Voice of Innovation podcast series. I&apos;m &lt;a href=&quot;https://www.linkedin.com/in/danagardner/&quot;&gt;Dana Gardner&lt;/a&gt;, principal analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on the latest insights into modern IT deployment architecture strategies.&lt;/p&gt;
&lt;p&gt;Container-based deployment models have rapidly gained popularity, from cloud models to corporate data centers. IT operators are now looking to extend the benefits of containers to more use cases, including the computing edge.&lt;/p&gt;
&lt;p&gt;Yet, in order to push containers further into the mainstream, security concerns need to be addressed across this new end-to-end container deployment spectrum―and that means security addressed during development and employment under the rubric of &lt;a href=&quot;https://www.devsecops.org/blog/tag/DevSecOps+Explained&quot;&gt;DevSecOps&lt;/a&gt; best practices.&lt;/p&gt;
&lt;p&gt;Stay with us now as we examine the escalating benefits that come from secure and robust container use with our guest, &lt;a href=&quot;https://www.linkedin.com/in/simonleech/&quot;&gt;Simon Leech&lt;/a&gt;, worldwide security and risk management practice at Hewlett Packard Enterprise (HPE) Pointnext Services. Welcome, Simon.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Simon Leech:&lt;/strong&gt; Hey, Dana. Good afternoon.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Gardner:&lt;/strong&gt; Simon, are we at an inflection point where we&apos;re going to see containers take off in the mainstream? Why is this the next level of virtualization?&lt;/p&gt;
&lt;h3&gt;Mainstream containers coming&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Leech:&lt;/strong&gt; We are certainly seeing a lot of interest from our customers when we speak to them about the best practices they want to follow in terms of rapid application development.&lt;/p&gt;
&lt;p&gt;One of the things that always held people back a little bit with virtualization was that you were always reliant on an operating system to manage the applications that sit on top of that OS and the application code that you would deploy to that environment.&lt;/p&gt;
&lt;p&gt;But what we have seen with containers is that, as everything starts to follow a cloud-native approach, we start to deal with our applications as lots of individual microservices that all communicate with one another to provide the application experience to the user. It makes a lot more sense from a development perspective to be able to approach development in these small, microservice-based or module-based increments.&lt;/p&gt;
&lt;p&gt;So, while we are not seeing a massive influx of container-based projects going into mainstream production at the moment, there are certainly a lot of customers dipping their toes in the water to identify the best possibilities to adopt and address container use within their own application development environments.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Gardner:&lt;/strong&gt; And because we saw developers grok the benefits of containers early and often, we have also seen them operate within a closed environment, not necessarily thinking about deployment. Is now the time to get developers thinking differently about containers, as not just perhaps a proof of concept (POC) or test environment but as ready for the production mainstream?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Leech:&lt;/strong&gt; Yes. One of the challenges I have seen with what you just described is that a lot of container projects start as a developer&apos;s project on his laptop. So the developer goes out there, identifies a container-based technology as something interesting to play around with, and, as time goes by, realizes he can actually make a lot of progress by developing his applications using a container-based architecture.&lt;/p&gt;
&lt;p&gt;What that means from an organizational perspective is that this is often done under the radar of management. One of the things we are discussing with our customers as we go and talk about addressing DevSecOps and DevOps initiatives is to make sure that you do get that buy-in from the executive team and so you can start to enable some top-down integration.&lt;/p&gt;
&lt;p&gt;Don&apos;t just see containers as a developer&apos;s laptop project, but look at it broadly and understand how you can integrate that into the overall IT processes that your organization is operating with. And that does require a good level of buy-in from the top.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Gardner:&lt;/strong&gt; I imagine this requires a lifecycle approach to containers thinking―not just about the development but in how they are going to be used over time and in different places.&lt;/p&gt;
&lt;p&gt;Now, 451 Research recently predicted that the &lt;a href=&quot;https://451research.com/images/Marketing/press_releases/Application-container-market-will-reach-2-7bn-in-2020_final_graphic.pdf&quot;&gt;market for containers will hit $2.7 billion this year&lt;/a&gt;. Why do you think that the IT operators―the people who will be inheriting these applications and microservices―will also take advantage of containers? What does it bring to their needs and requirements beyond what the developers get out of it?&lt;/p&gt;
&lt;h3&gt;Quick-change code artists&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Leech:&lt;/strong&gt; One of the biggest advantages from an operational perspective is the ability to make fast changes to the code you are using. So, whereas in the traditional application development environment, a developer would need to make a change to some code and it would involve requesting a downtime to be able to update the complete application, with a container-based architecture, you only have to update parts of the architecture.&lt;/p&gt;
&lt;p&gt;So it allows you to make many more changes than you previously would have been able to deliver to the organization, and it allows you to address those changes very rapidly.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Gardner:&lt;/strong&gt; How does this allow for a more common environment to extend across hybrid IT, from on premises to cloud to hybrid cloud and then ultimately to the edge?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Leech:&lt;/strong&gt; Well, applications developed in containers and developed within a cloud-native approach typically are very portable. So you don&apos;t need to be restricted to a particular version or limits, for example. The container itself runs on top of any OS of the same genre. Obviously, you can&apos;t run a Windows container on top of a Linux OS or vice versa.&lt;/p&gt;
&lt;p&gt;But within the general Linux space, that pretty much has compatibility. So it makes it very easy for the containers to be developed in one environment and then released into different environments.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Gardner:&lt;/strong&gt; And that portability extends to the hyperscale cloud environments, the public cloud, so is there a multicloud extensibility benefit?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Leech:&lt;/strong&gt; Yes, definitely. You see a lot of developers developing their applications in an on-premises environment with the intention that they are going to be provisioned into a cloud. If they are done properly, it shouldn&apos;t matter if that&apos;s a &lt;a href=&quot;https://cloud.google.com/&quot;&gt;Google Cloud Platform&lt;/a&gt; instance, a &lt;a href=&quot;https://azure.microsoft.com/en-us/&quot;&gt;Microsoft Azure&lt;/a&gt; instance, or &lt;a href=&quot;https://aws.amazon.com/&quot;&gt;Amazon Web Services (AWS)&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Gardner:&lt;/strong&gt; We have quite an opportunity in front of us with containers across the spectrum of continuous development and deployment and for multiple deployment scenarios. What challenges do we need to think about to embrace this as a lifecycle approach?&lt;/p&gt;
&lt;p&gt;What are the challenges to providing security specifically, making sure that the containers are not going to add risk and, in fact, improve the deployment productivity of organizations?&lt;/p&gt;
&lt;h3&gt;Make security a business priority&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Leech:&lt;/strong&gt; When I address the security challenges with customers, I always focus on two areas. The first is the business challenge of adopting containers and the security concerns and constraints that come along with that. And the second is much more around the technology or technical challenges.&lt;/p&gt;
&lt;p&gt;If you begin by looking at the business challenges of how to adopt containers securely, this requires a cultural shift, as I already mentioned. If we are going to adopt containers, we need to make sure we get the appropriate executive support and move past the concept that the developer is doing everything on his laptop. We also need to train our coders on the need for secure coding.&lt;/p&gt;
&lt;p&gt;A lot of developers have as their main goal producing high-quality software fast, and they are not trained as security specialists. It makes a lot of sense to put an education program in place that trains those internal coders so that they understand the need to think a little bit more about security, especially in a container environment where you have fast release cycles and sometimes the security checks get missed or don&apos;t get properly implemented. It&apos;s good to start with a very secure baseline.&lt;/p&gt;
&lt;p&gt;And once you have addressed the cultural shift, the next thing is to think about the role of the security team in your container development team, your DevOps development teams. And I always like to try and discuss with my customers the value of getting a security guy into the product development team from day one.&lt;/p&gt;
&lt;p&gt;Often, we see in a traditional IT space that the application gets built, the infrastructure gets designed, and then the day before it&apos;s all going to go into production someone calls security. Security comes along and says, &quot;Hey, have you done risk assessments on this?&quot; And that ends up delaying the project.&lt;/p&gt;
&lt;p&gt;If you introduce the security person into the small, agile team as you build it to deliver your container development strategy, then they can think together with the developers. They can start doing risk assessments and threat modeling right from the very beginning of the project. It allows us to reduce delays that you might have with security testing.&lt;/p&gt;
&lt;p&gt;At the same time, it also allows us to shift our testing left. In a traditional waterfall model, testing happens right before the product goes live, but in a DevOps or DevSecOps model, it&apos;s much better to embed security best practices and proper tooling right into the continuous integration/continuous delivery (CI/CD) pipeline.&lt;/p&gt;
&lt;p&gt;The last point around the business view is that, again, going back to the comment I made earlier, developers often are not aware of secure coding and how to make things secure. Providing a secure-by-default approach―or even a security self-service approach―helps here. Giving developers a secure registry, for example, that provides known good container images, or providing infrastructure and compliance as code, lets them follow a much more template-based approach to security. That also pays a lot of dividends in the quality of the software as it goes out the door.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Gardner:&lt;/strong&gt; Are we talking about the same security precautions that traditional IT people might be accustomed to but now extending to containers? Or is there something different about how containers need to be secured?&lt;/p&gt;
&lt;h3&gt;Updates, the container way&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Leech:&lt;/strong&gt; A lot of the principles are the same. So, there&apos;s obviously still a need for network security tools. There&apos;s still a need to do vulnerability assessments. There is still a need for encryption capabilities. But the difference with the way you would go about using technical controls to protect a container environment is all around this concept of the &lt;a href=&quot;https://thenewstack.io/how-to-lock-down-the-kernel-to-secure-the-container/&quot;&gt;shared kernel&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;An interesting white paper has been released by the &lt;a href=&quot;https://www.nist.gov/&quot;&gt;National Institute of Standards and Technology&lt;/a&gt; in the U.S., &lt;a href=&quot;https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-190.pdf&quot;&gt;SP 800-190&lt;/a&gt;, which is their Application Container Security Guide. And this paper identifies five container security challenges around risks with the images, registry, orchestrator, the containers themselves, and the host OS.&lt;/p&gt;
&lt;p&gt;So, when we&apos;re looking at defining a security architecture for our customers, we always look at the risks within those five areas and try to define a security model that protects those best of all.&lt;/p&gt;
&lt;p&gt;One of the important things to understand when we&apos;re talking about securing containers is that we have a different approach to the way we do updates. In a traditional environment, we take a gold image for a virtual machine. We deploy it to the hypervisor. Then we realize that if there is a missing patch, or a required update, that we roll that update out using whatever patch management tools we use.&lt;/p&gt;
&lt;p&gt;In a container environment, we take a completely different approach. We never update running containers. The source of your known good image is your registry. The registry is where we update containers, have updated versions of those containers, and use the container orchestration platform to make sure that next time somebody calls a new container that it&apos;s launched from the new container image.&lt;/p&gt;
&lt;p&gt;It&apos;s important to remember we don&apos;t update things in the running environment. We always use the container lifecycle and involve the orchestration platform to make those updates. And that&apos;s really a change in the mindset for a lot of security professionals, because they think, &quot;OK, I need to do a vulnerability assessment or risk assessment. Let me get out my Qualys and my Rapid7,&quot; or whatever, and, &quot;I&apos;m going to scan the environment. I&apos;m going to find out what&apos;s missing, and then I&apos;m going to deploy patches to plug in the risk.&quot;&lt;/p&gt;
&lt;p&gt;So we need to make sure that our vulnerability assessment process gets built right into the CI/CD pipeline and into the container orchestration tools we use to address that needed change in behavior.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Gardner:&lt;/strong&gt; It certainly sounds like the orchestration tools are playing a larger role in container security management. Do those in charge of the container orchestration need to be thinking more about security and risk?&lt;/p&gt;
&lt;h3&gt;Simplify app separation&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Leech:&lt;/strong&gt; Yes and no. I think the orchestration platform definitely plays a role and the individuals that use it will need to be controlled in terms of making sure there is good privileged account management and integration into the enterprise authentication services. But there are a lot of capabilities built into the orchestration platforms today that make the job easier.&lt;/p&gt;
&lt;p&gt;One of the challenges we&apos;ve seen for a long time in software development, for example, is that developers take shortcuts by hard coding cleartext passwords into the code, because it&apos;s easier. And, yeah, that&apos;s understandable. You don&apos;t need to worry about managing or remembering passwords.&lt;/p&gt;
&lt;p&gt;But what you see a lot of orchestration platforms offering is the capability to deliver secrets management. So rather than storing the password within the code, you can now request the secret from the secrets management platform that the orchestration platform offers to you.&lt;/p&gt;
&lt;p&gt;These orchestration tools also give you the capability to separate container workloads for differing sensitivity levels within your organization.&lt;/p&gt;
&lt;p&gt;For example, you would not want to run containers that operate your web applications on the same physical host as containers that operate your financial applications. Why? Because although you have the capability with the container environment using separate namespaces to separate the individual container architectures from one another, it&apos;s still a good security best practice to run those on completely different physical hosts or in a virtualized container environment on top of different VMs.&lt;/p&gt;
&lt;p&gt;This provides physical separation between the applications. Very often the orchestrators will allow you to provide that functionality within the environment without having to think too much about it.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Gardner:&lt;/strong&gt; There is another burgeoning new area where containers are being used. Not just in applications and runtime environments but also for data and persistent data. HPE has been leading the charge on making containers appropriate for use with data in addition to applications.&lt;/p&gt;
&lt;p&gt;How should the all-important security around data caches and different data sources enter into our thinking?&lt;/p&gt;
&lt;h3&gt;Save a slice for security&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Leech:&lt;/strong&gt; Because containers are temporary instances, it&apos;s important that you&apos;re not actually storing any data within the container itself. At the same time, as importantly, you&apos;re not storing any of that data on the host OS either.&lt;/p&gt;
&lt;p&gt;It&apos;s important to provide persistent storage on an external storage array. Looking at storage arrays from HPE, for example, we have Nimble Storage or Primera. Through plug-ins, they have the capability to interact with the container environment and provide you with persistent storage that remains even as containers are being provisioned and deprovisioned.&lt;/p&gt;
&lt;p&gt;So your container itself, as I said, doesn&apos;t store any of the data, but a well-architected application infrastructure will allow you to store that on a third-party storage array.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Gardner:&lt;/strong&gt; Simon, I&apos;ve had an opportunity to &lt;a href=&quot;https://community.hpe.com/t5/Transforming-IT/5-cultural-changes-to-make-container-security-and-DevSecOps-real/ba-p/7070077#.XlPrGC3MzUI&quot;&gt;read some of your blogs&lt;/a&gt;, and one of your statements jumped out: &quot;The organizational culture still lags behind when it comes to security.&quot; What did you mean by that? And how does that organizational culture need to be examined, particularly with an increased use of containers?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Leech:&lt;/strong&gt; It&apos;s about getting the security guys involved in the DevSecOps projects early on in the lifecycle of that project. Don&apos;t bring them to the table toward the end of the project. Make them a valuable member of that team. There was a comment made about the idea of a &lt;a href=&quot;http://blog.idonethis.com/two-pizza-team/&quot;&gt;two-pizza team&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;A two-pizza team means a meeting should never have more people in it than can be fed by two pizzas, and I think that that applies equally to development teams when you&apos;re working on container projects. They don&apos;t need to be big; they don&apos;t need to be massive.&lt;/p&gt;
&lt;p&gt;It&apos;s important to make sure there&apos;s enough pizza saved for the security guy! You need to have that security guy in the room from the beginning to understand what the risks are. That&apos;s a lot of where this cultural shift needs to change. And as I said, executive support plays a strong role in making sure that that happens.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Gardner:&lt;/strong&gt; We&apos;ve talked about people and process. There is also, of course, that third leg of the stool: the technology. Are the people building container platforms, like HPE, thinking along these lines as well? What does the technology and the way it&apos;s being designed bring to the table to help organizations be DevSecOps-oriented?&lt;/p&gt;
&lt;h3&gt;Select specific, secure solutions&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Leech:&lt;/strong&gt; There are a couple of ways that &lt;a href=&quot;https://www.csoonline.com/article/3398485/28-devsecops-tools-for-baking-security-into-the-development-process.html&quot;&gt;technology solutions&lt;/a&gt; are going to help. The first are the pre-production commercial solutions. These are the things that tend to get integrated into the orchestration platform itself, like image scanning, secure registry services, and secrets management.&lt;/p&gt;
&lt;p&gt;A lot of those are going to be built into any container orchestration platform that you choose to adopt. There are also commercial solutions that support similar functions. It&apos;s always up to an organization to do a thorough assessment of whether their needs can be met by the standard functions in the orchestration platform or if they need to look at some of the third-party vendors in that space, like &lt;a href=&quot;https://www.aquasec.com/&quot;&gt;Aqua Security&lt;/a&gt; or &lt;a href=&quot;https://www.twistlock.com/&quot;&gt;Twistlock&lt;/a&gt;, which was recently acquired by &lt;a href=&quot;https://www.paloaltonetworks.com/&quot;&gt;Palo Alto Networks&lt;/a&gt;, I believe.&lt;/p&gt;
&lt;p&gt;And then there are the solutions that I would gather up as post-production commercial solutions. These are for things such as runtime protection of the container environment, container forensic capabilities, and network overlay products that allow you to separate your workloads at the network level and provide container-based firewalls and that sort of stuff.&lt;/p&gt;
&lt;p&gt;Very few of these capabilities are actually built into the orchestration platforms. They tend to be third parties such as &lt;a href=&quot;https://sysdig.com/&quot;&gt;Sysdig&lt;/a&gt;, &lt;a href=&quot;https://www.guardicore.com/&quot;&gt;Guardicore&lt;/a&gt;, and &lt;a href=&quot;https://neuvector.com/&quot;&gt;NeuVector&lt;/a&gt;. And then there&apos;s another bucket of solutions, which are more open source solutions. These typically focus on a single function in a very cost-effective way and are typically open source community-led. And these are solutions such as &lt;a href=&quot;https://www.sonarqube.org/&quot;&gt;SonarQube&lt;/a&gt;, Platform as a Service (PaaS), and &lt;a href=&quot;https://sysdig.com/opensource/falco/&quot;&gt;Falco&lt;/a&gt;, which is the open source project that Sysdig runs. You also have &lt;a href=&quot;https://dockerbench.com/&quot;&gt;Docker Bench&lt;/a&gt; and &lt;a href=&quot;https://www.projectcalico.org/&quot;&gt;Calico&lt;/a&gt;, a networking security tool.&lt;/p&gt;
&lt;p&gt;But no single solution covers all of an enterprise customer&apos;s requirements. It remains a bit of a task to assess where you have security shortcomings, what products you need, and who&apos;s going to be the best partner to deliver those products with those technology solutions for you.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Gardner:&lt;/strong&gt; And how are you designing HPE Pointnext Services to fill that need to provide guidance across this still dynamic ecosystem of different solutions? How does the services part of the equation shake out?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Leech:&lt;/strong&gt; We obviously have the technology solutions that we have built. For example, the HPE Container Platform, which is based around technology that we acquired as part of &lt;a href=&quot;https://www.hpe.com/us/en/solutions/bluedata.html&quot;&gt;the BlueData acquisition&lt;/a&gt;. But at the end of the day, these are products. Companies need to understand how they can best use those products within their own specific enterprise environments.&lt;/p&gt;
&lt;p&gt;I&apos;m part of Pointnext Services, within the advisory and professional services team. A lot of the work that we do is around advising customers on the best approaches they can take. On one hand, we&apos;d like them to purchase our HPE technology solutions, but on the other hand, a container-based engagement needs to be a services-led engagement, especially in the early phases where a lot of customers aren&apos;t necessarily aware of all of the changes they&apos;re going to have to make to their IT model.&lt;/p&gt;
&lt;p&gt;At Pointnext, we deliver a number of container-oriented services, both in the general container implementation area as well as more specifically around container security. For example, I have developed and delivered transformation workshops around DevSecOps.&lt;/p&gt;
&lt;p&gt;We also have container security planning workshops where we can help customers to understand the security requirements of containers in the context of their specific environments. A lot of this work is based around some discovery we&apos;ve done to build our own container security solution reference architecture.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Gardner:&lt;/strong&gt; Do you have any examples of organizations that have worked toward a DevSecOps perspective on continuous delivery and cloud native development? How are people putting this to work on the ground?&lt;/p&gt;
&lt;h3&gt;Edge elevates container benefits&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Leech:&lt;/strong&gt; A lot of the customers we deal with today are still in the early phases of adopting containers. We see a lot of POC engagement where a particular customer may be wanting to understand how they could take traditional applications and modernize or architect those into cloud-native or container-based applications.&lt;/p&gt;
&lt;p&gt;There&apos;s a lot of experimentation going on. A lot of the implementations we see start off small, so the customer may buy a single technology stack for the purpose of testing and playing around with containers in their environment. But they have intentions within 12 to 18 months of being able to take that into a production setting and reaping the benefits of container-based deployments.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Gardner:&lt;/strong&gt; And over the past few years, we&apos;ve heard an awful lot of the benefits for moving closer to the computing edge, bringing more compute and even data and analytics processing to the edge. This could be in a number of vertical industries, from autonomous vehicles to manufacturing and healthcare.&lt;/p&gt;
&lt;p&gt;But one of the concerns, if we move more compute to the edge, is will security risks go up? Is there something about doing container security properly that will make that edge more robust and more secure?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Leech:&lt;/strong&gt; Yes, a container project done properly can actually be more secure than a traditional VM environment. This begins from the way you manage the code in the environment. And when you&apos;re talking about edge deployments, that rings very true.&lt;/p&gt;
&lt;p&gt;From the perspective of the amount of resources it has to use, it&apos;s going to be a lot lighter, when you&apos;re talking about something like autonomous driving, to have a shared kernel rather than lots of instances of a VM running, for example.&lt;/p&gt;
&lt;p&gt;From a strictly security perspective, if you deal with container lifecycle management effectively; involve the security guys early; have a process around releasing, updating, and retiring container images into your registry; and have a process around introducing security controls and code scanning in your software development lifecycle―making sure that every container that gets released is signed with an appropriate enterprise signing key―then you have something that is very repeatable, compared with a traditional virtualized approach to application and delivery.&lt;/p&gt;
&lt;p&gt;That&apos;s one of the big benefits of containers. It&apos;s very much a declarative environment. It&apos;s something that you prescribe. This is how it&apos;s going to look. And it&apos;s going to be repeatable every time you deploy that. Whereas with a VM environment, you have a lot of VM sprawl. And there are a lot of changes across the different platforms as different people have connected and changed things along the way for their own purposes.&lt;/p&gt;
&lt;p&gt;There are many benefits with the tighter control in a container environment. That can give you some very good security benefits.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Gardner:&lt;/strong&gt; What comes next? How do organizations get started? How should they set themselves up to take advantage of containers in the right way, a secure way?&lt;/p&gt;
&lt;h3&gt;Begin with risk evaluation&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Leech:&lt;/strong&gt; The first step is to do the appropriate due diligence. Containers are not going to be for every application. There are going to be certain things that you just can&apos;t modernize, and they&apos;re going to remain in your traditional data center for a number of years.&lt;/p&gt;
&lt;p&gt;I suggest looking for the projects that are going to give you the quickest wins and use those POCs to demonstrate the value that containers can deliver for your organization. Make sure that you do an appropriate risk assessment, and work with the services organizations that can help you. The advantage of a services organization is that they&apos;ve probably been there with another customer previously, so they can use the best practices and experience they have already gained to help your organization adopt containers.&lt;/p&gt;
&lt;p&gt;Just make sure that you approach it using a DevSecOps model. There is a lot of discussion in the market at the moment about the name. Should we be calling it DevSecOps, or should we call it SecDevOps or DevOpsSec? My personal opinion is to call it DevSecOps, because security in a DevSecOps model sits right in the middle of development and operations, and that&apos;s really where it belongs.&lt;/p&gt;
&lt;p&gt;In terms of assets, there is plenty of information out there; a Google search will find you a lot of material. But as I mentioned earlier, the &lt;a href=&quot;https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-190.pdf&quot;&gt;NIST white paper SP 800-190&lt;/a&gt; is a great starting point, not only to understand container security challenges but also to get a good understanding of what containers can deliver for you.&lt;/p&gt;
&lt;p&gt;At the same time, at &lt;a href=&quot;https://www.hpe.com/us/en/home.html&quot;&gt;HPE&lt;/a&gt;, we are also committed to delivering relevant information to our customers. If you look on our website, you will see a lot of articles about best practices on container deployments, case studies, and architectures for running container orchestration platforms on our hardware. All of this is available for people to download and to consume.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Gardner:&lt;/strong&gt; I&apos;m afraid we will have to leave it there. We have been exploring how container-based deployment models have gained popularity, from cloud models to corporate data centers. And we have learned how, in order to push containers further into the mainstream, security concerns need to be addressed across this new end-to-end container deployment spectrum.&lt;/p&gt;
&lt;p&gt;So please join me in thanking our guest, Simon Leech, worldwide security and risk management practice at HPE Pointnext Services. Thank you so much, Simon.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Leech:&lt;/strong&gt; Thanks for having me.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Gardner:&lt;/strong&gt; I learned a lot. And thanks as well to our audience for joining this sponsored BriefingsDirect Voice of Innovation discussion. I&apos;m Dana Gardner, principal analyst at Interarbor Solutions, your host for this ongoing series of HPE-supported discussions.&lt;/p&gt;
&lt;p&gt;Thanks again for listening. Please pass this along to your IT community, and do come back next time.&lt;/p&gt;
&lt;p&gt;Container deployment best practices: Lessons for leaders&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Begin with a risk assessment―containers aren&apos;t appropriate for every application.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Look for projects that will give quick wins and show value.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Approach container projects using a DevSecOps model, with security integrated from the start.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.&lt;/p&gt;
&lt;p&gt;&lt;u&gt;&lt;strong&gt;About the author:&lt;/strong&gt;&lt;/u&gt;&lt;/p&gt;
&lt;p&gt;Since 1999, Dana Gardner has emerged as a leading identifier of enterprise software solutions, strategies, partnerships, and markets and new IT business value opportunities. He is frequently quoted as a thought leader in top news and IT industry publications such as The New York Times, The Wall Street Journal, The Boston Globe, The Washington Post, Businessweek, San Francisco Chronicle, Reuters, Associated Press, MSNBC.com, CNN.com, and more. As a skilled multimedia communicator and evangelist, he has written dozens of industry reports on the business benefits of IT and Internet innovation for advancing general productivity, improving employee efficiency, and reducing total IT costs. As founder and president of Interarbor Solutions, Dana has taken a strong track record in consulting services for IT vendors, carriers, and enterprises to yet another level: the exciting new communications capabilities around Internet social media. Businesses of all kinds are quickly exploiting blogs, podcasts, and video podcasts for education, communications, and viral outreach. Dana practices what he preaches as a frequent blogger, on ZDNet and his personal blog, as well as a podcaster. He began podcasting as a founding member of the Gillmor Gang in 2005.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Jupyter saved my day]]></title><description><![CDATA[Prepping for an industry event always requires a lot of planning, and sometimes a lot of last minute juggling. Preparing for the Hack Shack…]]></description><link>https://developer.hpe.com/jupyter-saved-my-day/</link><guid isPermaLink="false">https://developer.hpe.com/jupyter-saved-my-day/</guid><pubDate>Tue, 10 Mar 2020 20:48:19 GMT</pubDate><content:encoded>&lt;p&gt;Prepping for an industry event always requires a lot of planning, and sometimes a lot of last minute juggling. Preparing for the Hack Shack activities for HPE’s Technology and Solutions Summit (TSS) event in Paris was no different. With the event only a few weeks away, everything appeared to be ready to go. We had lined up the subject matter experts, reserved our space, attracted a registered audience, and arranged for a dedicated infrastructure to host the technical activities. Unfortunately, the Coronavirus situation escalated and, for the safety of everyone, HPE decided to adjust the onsite event and turn parts of it into a virtual event.&lt;/p&gt;
&lt;h2&gt;How do you switch to a virtual event quickly?&lt;/h2&gt;
&lt;p&gt;Before the announcement of the onsite event cancellation, event organizers had already contacted the different teams who were participating to figure out how the event could be delivered virtually. They proposed that a subset of the sessions be offered. Those selected would need to be eligible for remote delivery.&lt;/p&gt;
&lt;p&gt;Because our team had already learned how to deal with remote delivery, most of our sessions already fit that requirement. This made it fairly easy to propose a new agenda that included nearly all of the originally planned workshops. It has been a few years since we last shipped material to an event location to support our labs or workshops. Due to budget constraints, and also simple noise issues (it is not easy to run a lab with racks full of servers running next to you), most of our lab activities became remote-based labs anyway. A simple and reliable internet connection is now all that one needs to run them.&lt;/p&gt;
&lt;h2&gt;Jupyter saves the day&lt;/h2&gt;
&lt;p&gt;HPE DEV Hack Shack sessions are a little more complicated than simple slide presentations. We always do our best to provide attendees with a hands-on experience that they can’t get elsewhere. So, how would we achieve similar results in a virtual situation?&lt;/p&gt;
&lt;p&gt;During a TSS event in 2018, a participant provided his solution to a Hack Shack challenge in a Jupyter Notebook format. Didier Lalli, our team leader, found both the solution and the format interesting. As a consequence, he proposed that our team adopt this format for any future labs. I volunteered to take a look at Jupyter Notebooks and see how we could best use it to deliver our labs. In the end, I proposed we use Jupyterhub, as I felt it was the most appropriate tool to address our needs. As a happy happenstance, this decision put us in a great position to turn TSS 2020 into a virtual event.&lt;/p&gt;
&lt;h2&gt;What are Jupyter and Jupyterhub?&lt;/h2&gt;
&lt;p&gt;To give you a bit of background: The Jupyter project (named for Julia, Python, and R) grew out of a project called IPython, an advanced Python interpreter that improves your productivity when coding in Python. IPython has gradually evolved, notably with the creation of Notebooks, and now offers a web interface (with documents stored in JSON format) that allows you to run a Python kernel and code directly in a browser, with intermediate results displayed inline. The Jupyter Project was an important step forward for sharing and &quot;interactive&quot; development. &lt;a href=&quot;https://jupyter.org/index.html&quot;&gt;Project Jupyter’s&lt;/a&gt; &lt;a href=&quot;https://jupyterhub.readthedocs.io/en/stable/&quot;&gt;JupyterHub&lt;/a&gt; was created to support many users. The Hub can offer notebook servers to an entire class of students, a corporate data science workgroup, a scientific research project team, or a high-performance computing group. With &lt;a href=&quot;https://github.com/jupyterhub/jupyterhub&quot;&gt;JupyterHub,&lt;/a&gt; you can create a multi-user Hub that spawns, manages, and proxies multiple instances of the single-user &lt;a href=&quot;https://mybinder.org/v2/gh/ipython/ipython-in-depth/master?filepath=binder/Index.ipynb&quot;&gt;Jupyter Notebook&lt;/a&gt; server.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/1/picture1-1583962183422.png&quot; alt=&quot;picture1&quot;&gt;&lt;/p&gt;
&lt;p&gt;The setup we plan on using for the TSS event consists of one HPE ProLiant DL360 Gen 10 with 2x Intel Xeon Silver 4114 CPUs running at 2.2GHz, each with 10 cores (20 threads in Hyper-Threading mode). The system has 160 GB of RAM. It runs an Ubuntu 18.04 distribution, and installation was performed following the Jupyter guidelines provided &lt;a href=&quot;https://jupyterhub.readthedocs.io/en/stable/installation-guide-hard.html&quot;&gt;here&lt;/a&gt;. We are leveraging Python and PowerShell kernels. I installed the &lt;a href=&quot;https://github.com/timkpaine/jupyterlab_iframe/&quot;&gt;jupyterlab_iframe&lt;/a&gt; extension as well, but have not made any use of it so far.&lt;/p&gt;
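&lt;p&gt;If you want to experiment with a similar setup, a JupyterHub deployment is driven by a single Python configuration file. The fragment below is a minimal sketch under assumed values; the port, notebook directory and user names are illustrative placeholders rather than our production configuration.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
# jupyterhub_config.py -- minimal illustrative sketch, not our production configuration.
# &quot;c&quot; is the configuration object JupyterHub injects when it loads this file.

c.JupyterHub.bind_url = &quot;http://:8000&quot;          # hypothetical listening address and port
c.Spawner.default_url = &quot;/lab&quot;                   # open JupyterLab instead of the classic notebook UI
c.Spawner.notebook_dir = &quot;~/notebooks&quot;           # per-user directory where lab notebooks live

c.Authenticator.admin_users = {&quot;admin&quot;}          # hypothetical admin account
c.LocalAuthenticator.create_system_users = True  # create local accounts for new students
&lt;/code&gt;&lt;/pre&gt;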
&lt;h2&gt;What is a Jupyter Notebook?&lt;/h2&gt;
&lt;p&gt;The Jupyter Notebook is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. Uses include data cleaning and transformation, numerical simulation, statistical modeling, data visualization, machine learning, and much more. In our case, notebooks contain simple pieces of Python or PowerShell code that interact with the different APIs available in the HPE portfolio. Instructions are provided in Markdown format. We centralize the different notebooks on a single JupyterHub server, which allows us to replicate them fairly easily, through Ansible playbooks, across the 40 students currently configured on the server. When changes are necessary (fixing a typo in a notebook, for instance), updates can be performed quickly by just launching the playbook.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/1/picture2-1583962189410.png&quot; alt=&quot;picture2&quot;&gt;&lt;/p&gt;
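&lt;p&gt;To give you a feel for what a lab cell looks like, here is a short, purely illustrative example of the kind of Python code a notebook might contain. The endpoint, credentials, and JSON fields are placeholders, not a real lab environment or a specific HPE API.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
# Illustrative notebook cell: log in to a REST API and list some resources.
# The URL, credentials, and field names below are placeholders, not real lab values.
import requests

BASE_URL = &quot;https://lab-appliance.example.com&quot;   # hypothetical appliance address

# Authenticate and grab a session token
login = requests.post(
    BASE_URL + &quot;/rest/login-sessions&quot;,
    json={&quot;userName&quot;: &quot;student01&quot;, &quot;password&quot;: &quot;********&quot;},
    verify=False,  # lab appliances often use self-signed certificates
)
token = login.json()[&quot;sessionID&quot;]

# Use the token to query the appliance
servers = requests.get(
    BASE_URL + &quot;/rest/server-hardware&quot;,
    headers={&quot;Auth&quot;: token},
    verify=False,
)
print(servers.json())
&lt;/code&gt;&lt;/pre&gt;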
&lt;h2&gt;Advantages of using Jupyter Notebooks&lt;/h2&gt;
&lt;p&gt;From a course standpoint, the notebook format is a really simple and flexible solution. The HPE DEV team used to deliver labs in a very academic fashion, providing Adobe Acrobat Reader (PDF) files to lab attendees. Attendees would follow the lab instructions and copy and paste commands from the PDF file into a Windows terminal server session or a simple PuTTY session. In many cases, errors would occur because hidden characters (in credentials, for instance) were copied from the PDF document into the session.&lt;/p&gt;
&lt;p&gt;In addition, the format was not the most flexible when changes needed to be made to the master document (a doc file to be converted to PDF). One would have to redistribute the new version of the file to all the students, making things cumbersome. Even though we always managed to overcome these issues, Didier’s idea was quite appealing since most of our labs and workshops use coding exercises based on Python and PowerShell.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/1/picture3-1583962195154.png&quot; alt=&quot;picture3&quot;&gt;&lt;/p&gt;
&lt;p&gt;Taking Didier’s idea, from a logistical standpoint, we architected a solution that is accessible over the internet. Using Jupyter Notebooks allows us to deliver labs from virtually anywhere. The students only need internet access to our JupyterHub server et voila!
They can even download the notebooks and run them locally on their laptops if they want to do them again later on or reuse some of the code for their own projects.&lt;/p&gt;
&lt;p&gt;As I worked on developing this solution, I realized that there were so many new possibilities for its use, like Jupyter Notebooks-as-a-Service. This could be a service that would allow you to learn about the latest and greatest updates from our different APIs. At a minimum, these Jupyter Notebooks will be used in the Hack Shack for future events, like HPE Discover 2020. We are also investigating a simple sharing possibility leveraging our GitHub repository in order to provide people with some of our notebooks.&lt;/p&gt;
&lt;p&gt;We are just starting off using this Jupyter technology. The future holds great promise as the forthcoming HPE Container Platform will allow us to deploy and use Jupyter Notebooks beyond simple API calls. For instance, we will have the opportunity to dive into the Artificial Intelligence world through notebooks that make use of TensorFlow technology. Please be sure to check back at &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE DEV&lt;/a&gt; for a follow up on this.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Second internal HPE Hackathon around IoT and AI]]></title><description><![CDATA[picture1 Under the banner of *"We burn for IoT and AI", the HPE presales IoT & Data Analytics team recently hosted our second Hackathon in…]]></description><link>https://developer.hpe.com/second-internal-hpe-hackathon-around-iot-and-ai/</link><guid isPermaLink="false">https://developer.hpe.com/second-internal-hpe-hackathon-around-iot-and-ai/</guid><pubDate>Thu, 05 Mar 2020 18:11:36 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/1/picture1-1583431980969.png&quot; alt=&quot;picture1&quot;&gt;&lt;/p&gt;
&lt;p&gt;Under the banner of &quot;We burn for IoT and AI&quot;, the HPE presales IoT &amp;#x26; Data Analytics team recently hosted our second Hackathon in Boeblingen, Germany. As more and more concrete IoT and AI projects come our way, we find hackathons helpful in developing our knowledge base. Hackathons help us build proficiency by going beyond slide presentations and offering hands-on experience with the technologies required to address the needs of IoT and AI projects.&lt;/p&gt;
&lt;p&gt;As a Hybrid IT presales consultant who was responsible for the planning of the first and the second hackathon, I am proud to say that we not only doubled the number of participants but also the number of use cases we developed. About 20 highly motivated people from the HPE AI, Data Analytics &amp;#x26; IoT Competence Center, HPE Pointnext Advisory and Professional Services, and HPE Pointnext Delivery groups pursued the goal of educating themselves and creating tangible outcomes. Besides colleagues from more traditional HPE departments, colleagues from new HPE acquisitions, BlueData and MapR, participated and made a noticeable impact on the development of the use cases. The HPE Customer Technology Center (CTC) in Boeblingen provided us with the perfect location for planning, developing and implementing our visions.&lt;/p&gt;
&lt;p&gt;Use case topics ranged from IoT to AI, with the goal of building Edge-to-Cloud Data Pipelines powered by HPE infrastructure and software that address today’s customer challenges. To give you an idea of what we were able to achieve, here’s a summary of the use cases we covered.&lt;/p&gt;
&lt;h2&gt;Industrial Data Fabric from Edge-to-Cloud with OT Link &amp;#x26; MapR Edge Cluster&lt;/h2&gt;
&lt;p&gt;In our first use case, we aimed at setting up an Edge-to-Cloud Data Pipeline with an industrial focus. To achieve this, we built a MapR Edge cluster on an HPE Edgeline server and connected it to an already installed MapR datalake on HPE Apollo servers in the data center. Using features like the MapR global namespace and the event store for Apache Kafka, we were able to collect data at the edge and process it in the core/cloud. The data collection performed by an HPE OT Link instance brought the industrial focus to the use case. In fact, HPE OT Link was used to combine operation technology capabilities with enterprise-class IT capabilities in a single system. In the end, we were able to route Industrial IoT data acquired by the OT Link through a MapR Data Pipeline to the core/cloud.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/1/picture1-1583268352426.png&quot; alt=&quot;picture1&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Image Classification using HPE ML Ops, powered by NVIDIA GPU acceleration&lt;/h2&gt;
&lt;p&gt;The goal of this use case was to recognize server parts through an Android app to assist users in differentiating components. A corresponding model for the classification of the components was trained using HPE ML Ops. The training and inference took place on HPE Apollo servers, equipped with Nvidia GPUs. In particular, HPE ML Ops along with GPU acceleration noticeably shortened the training process. The final result was a deep-learning model that was deployed with the feature set of HPE ML Ops.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/1/picture2-1583268373183.png&quot; alt=&quot;picture2&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Industrial Data Pipeline and visualization for OEE calculation with Edgeline, OT Link and Azure IoT Hub&lt;/h2&gt;
&lt;p&gt;In the industrial Data Pipeline use case, we wanted to calculate the key performance indicator for Overall Equipment Effectiveness (OEE) in HPE OT Link and visualize it utilizing OT Link’s dashboard capabilities and Grafana. For this purpose, we gathered a variety of machine data and built different flows in OT Link in order to calculate the OEE. We sent the telemetry data to Microsoft Azure and added the Azure IoT Edge Device integration in OT Link. As a final result, data was sent to Azure and written into a time-series database. The real-time data is visualized in Grafana and in OT Link.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/1/picture3-1583268389028.png&quot; alt=&quot;picture3&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Twitter Sentiment Analysis using MapR Event Store, Kafka, and R on HPE ML Ops&lt;/h2&gt;
&lt;p&gt;In the last use case, our goal was to perform social media sentiment analysis using HPE ML Ops. For this purpose, a Data Pipeline was built for batch analytics. Using the statistical programming language R, various statistical analyses of the popularity of a given keyword were conducted. In a second step, a streaming component utilizing the MapR Event Store for Apache Kafka was added. The resulting real-time data was visualized using a dashboard displaying various reports. The result of this use case is an in-depth batch analysis and a real-time dashboard evaluation of the popularity of the given keyword.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/1/picture4-1583268402834.png&quot; alt=&quot;picture4&quot;&gt;&lt;/p&gt;
&lt;p&gt;Some impressions of a week full of hacking and new developments around HPE products and IoT/AI can be seen below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/1/picture5-1583268416136.png&quot; alt=&quot;picture5&quot;&gt;&lt;/p&gt;
&lt;p&gt;Apart from topic-related synergy effects, the team spirit between colleagues in different positions was strengthened. We were excited that our organizational leaders were also interested in our challenges and came around to watch us tackle these challenges. Just like they did for our first hackathon, &lt;a href=&quot;https://developer.hpe.com/community&quot;&gt;HPE DEV&lt;/a&gt; supported us by supplying not only equipment but also helpful tips for planning the event. Additionally, we benefited from Nvidia’s support, since the Nvidia GPUs made a significant difference in training and inference for our second use case.&lt;/p&gt;
&lt;p&gt;I want to thank the participants for a great week and fantastic achievements. A special thank you goes to our leaders for making this possible.&lt;/p&gt;
&lt;p&gt;In the end, all I can say is that the AI, Data Analytics &amp;#x26; IoT Competence Center is already looking forward to the next Hackathon!&lt;/p&gt;
&lt;p&gt;Will you be among the 10,000 technologists heading to Amsterdam from August 13 – 16, 2020 to attend &lt;a href=&quot;https://www.cncf.io/&quot;&gt;Cloud Native Computing Foundation’s&lt;/a&gt; flagship conference, &lt;a href=&quot;https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/&quot;&gt;KubeCon | CloudNativeCon EU&lt;/a&gt;? We will be, and we’re so excited for the opportunity to demonstrate a little magic while we show off what Hewlett Packard Enterprise (HPE) offers in the areas of containers and Kubernetes (k8s)!&lt;/p&gt;
&lt;p&gt;HPE will be featured in two KubeCon breakout sessions. Both will take place on Wednesday, April 1st. The first session, &lt;strong&gt;Topology Aware Scheduling using Prometheus and Telemetry Aware Scheduler&lt;/strong&gt;, is co-sponsored by HPE and Intel and given by Chief Technologist, Tom Golway (HPE), and Cloud Software Engineer, Swati Sehgal (Intel). The second breakout session, &lt;strong&gt;Taming Data/State Challenges for ML Applications and Kubeflow&lt;/strong&gt;, will be presented by HPE Distinguished Technologist, Skyler Thomas. Refer to the &lt;a href=&quot;https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/program/schedule/&quot;&gt;program schedule&lt;/a&gt; for specific times and locations.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/1/picture10-1582133922632.png&quot; alt=&quot;picture10&quot;&gt;&lt;/p&gt;
&lt;p&gt;On the event floor at booth P17, our magician will greet you and introduce you to HPE representatives from our different software-enabling and application development groups. You’ll get to learn about some pretty magical technologies and programs, including:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The &lt;a href=&quot;https://www.hpe.com/us/en/solutions/container-platform.html&quot;&gt;HPE Container Platform&lt;/a&gt;, the industry’s first Kubernetes container platform software designed to run both cloud-native and non cloud-native applications, enabling true hybrid cloud operations across &lt;strong&gt;any&lt;/strong&gt; location. Now if that’s not magical, we don’t know what is!&lt;/li&gt;
&lt;li&gt;The &lt;a href=&quot;https://www.hpe.com/us/en/storage/intelligent-storage.html?chatsrc=ot-en&amp;#x26;jumpid=ps_8r5mdg32xs_aid-520023673&amp;#x26;gclid=Cj0KCQiAs67yBRC7ARIsAF49CdU6O6Hbaj1lwT8tcrU702BzRnZboWNQILTShb0cCk-eEk7nUjQ-yhMaAv4fEALw_wcB&amp;#x26;gclsrc=aw.ds&quot;&gt;intelligent data platform&lt;/a&gt; from &lt;a href=&quot;https://www.hpe.com/us/en/storage.html&quot;&gt;HPE storage&lt;/a&gt;, focused on persistent storage use cases for Kubernetes in private, public, and hybrid clouds and how to enable CI/CD pipelines. Another bit of magic we know you’ve been waiting to see.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE DEV&lt;/a&gt;, a growing community that helps developers and designers focus on creating the best possible software experiences using open source and HPE products. The magic here is how much easier you can make your life by tapping into these resources.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You’ll want to catch at least one of our theater sessions that will run throughout the day. There will be a random drawing for an Argon Kit after each presentation. Presenters will cover topics such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;An introduction to the HPE Container Platform&lt;/li&gt;
&lt;li&gt;Exploring the benefits of joining the HPE DEV community&lt;/li&gt;
&lt;li&gt;Persistent Storage for Kubernetes&lt;/li&gt;
&lt;li&gt;Hybrid Cloud CI/CD with Nimble/CV/HCP&lt;/li&gt;
&lt;li&gt;Taming Data and State Challenges with Kubeflow&lt;/li&gt;
&lt;li&gt;App centric MongoDB 3PAR/Primera&lt;/li&gt;
&lt;li&gt;SPIRE and SPIFFE&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;HPE’s newest acquisition, Scytale, a cloud native security startup that is built on the open-source Secure Production Identity Framework for Everyone (SPIFFE) protocol, will also be in attendance in a separate booth (S48) on the tradeshow floor. Some Scytale presentations will also be given in booth P17.&lt;/p&gt;
&lt;p&gt;As always, the &lt;a href=&quot;https://developer.hpe.com/community&quot;&gt;HPE DEV community&lt;/a&gt; is excited to have an opportunity to connect with developers and designers at an event such as this. If you’re going, make sure you stop on by and see us at HPE booth P17. Hope to see you in Amsterdam!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Understanding Concurrency in Python Part 3 - Asyncio]]></title><description><![CDATA[picture1 If you have been following my blog posts on concurrency in Python, you’ll remember that I first covered the threading library in…]]></description><link>https://developer.hpe.com/understanding-concurrency-in-python-part-3-asyncio/</link><guid isPermaLink="false">https://developer.hpe.com/understanding-concurrency-in-python-part-3-asyncio/</guid><pubDate>Wed, 19 Feb 2020 17:32:20 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/1/picture1-1582323160514.png&quot; alt=&quot;picture1&quot;&gt;&lt;/p&gt;
&lt;p&gt;If you have been following my blog posts on concurrency in Python, you’ll remember that I first covered the &lt;a href=&quot;/blog/understanding-concurrency-in-python-part-1-threading&quot;&gt;threading library in Python&lt;/a&gt; and found that it helped significantly with multi-threaded execution of I/O bound tasks. However, it did not improve efficiencies for CPU bound tasks. In &lt;a href=&quot;/blog/understanding-concurrency-in-python-part-2-multiprocessing&quot;&gt;Part 2&lt;/a&gt;, I pointed out how multiprocessing can help you get around this. In this post, I will show you how the Python asyncio library can help achieve concurrency of processes by letting the application have control over context switching. This library can come in handy when you are dealing with APIs that are implemented with a polling mechanism to handle asynchronous tasks.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Asyncio&lt;/em&gt; was added to the Python standard library in version 3.4. It is a single-thread, single-process cooperative multitasking library that uses co-routines, event loops, and awaitable objects to achieve concurrency. An asyncio task has exclusive use of the CPU until it voluntarily hands control back to the event loop. Asyncio is very beneficial for I/O bound operations, but not very helpful for CPU bound operations.&lt;/p&gt;
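&lt;p&gt;As a minimal sketch of this cooperative hand-off (an illustration added here, not part of the original example), two co-routines can interleave on a single thread because awaiting &lt;em&gt;asyncio.sleep&lt;/em&gt; hands control back to the event loop. Both workers finish in roughly one second rather than two:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
import asyncio

async def worker(name, delay):
    print(name, &quot;started&quot;)
    await asyncio.sleep(delay)   # pause here and hand control back to the event loop
    print(name, &quot;finished&quot;)

async def main():
    # both co-routines run on one thread; the event loop interleaves them
    await asyncio.gather(worker(&quot;task-1&quot;, 1), worker(&quot;task-2&quot;, 1))

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
loop.close()

&lt;/code&gt;&lt;/pre&gt;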
&lt;p&gt;Let’s use the same I/O bound example we used in Part 1 and Part 2 of this series to fetch responses from websites and see how to implement concurrency of this I/O bound operation using asyncio.&lt;/p&gt;
&lt;p&gt;Step 1: Import the necessary libraries and modules.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
import asyncio
import requests
import time

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Step 2: Define the co-routines. A co-routine can be defined by prefixing the keyword async before the function definition. An asyncio co-routine is a function that can pause and resume during execution. This means that it acts, more or less, like a &lt;a href=&quot;https://wiki.python.org/moin/Generators&quot;&gt;generator&lt;/a&gt; in Python.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
async def get_response(site):
    return requests.get(site)

async def main():
    # define some sites to query
    tasks = []
    sites = [&quot;http://www.google.com&quot;, &quot;http://www.linkedin.com&quot;,
             &quot;http://www.quora.com&quot;, &quot;http://www.facebook.com&quot;]

    for site in sites:
        tasks.append(asyncio.ensure_future(get_response(site)))
    
    await asyncio.gather(*tasks)

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;em&gt;ensure_future&lt;/em&gt; takes in a co-routine and wraps it in a future, an object that will eventually hold the co-routine’s result. While a co-routine is waiting, its execution is temporarily suspended; once the result it is waiting on becomes available, the co-routine is resumed.&lt;/p&gt;
&lt;p&gt;Now, create the event loop and call the co-routines to fetch responses. Capture the time taken for execution to see how asyncio performs.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
start_time = time.time()
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
loop.close()
print(&quot;Time taken for asyncio&quot;, time.time()-start_time)


&lt;/code&gt;&lt;/pre&gt;
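&lt;p&gt;As a side note, on Python 3.7 and newer the same thing can be written more compactly with &lt;em&gt;asyncio.run()&lt;/em&gt;, which creates, runs, and closes the event loop for you. This is just an alternative sketch; it is not the version timed below.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
start_time = time.time()
asyncio.run(main())   # Python 3.7+: handles event loop setup and teardown for you
print(&quot;Time taken for asyncio&quot;, time.time()-start_time)

&lt;/code&gt;&lt;/pre&gt;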
&lt;p&gt;Using asyncio in this example, I got the following results:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/1/picture3-1582323145443.png&quot; alt=&quot;picture3&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can see that the asyncio execution took less time than a regular, iterative execution of the same I/O bound operation.&lt;/p&gt;
&lt;p&gt;The complete code we used to illustrate how the asyncio library helps would look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
import asyncio
import requests
import time

async def get_response(site):
    return requests.get(site)

async def main():
    # define some sites to query
    tasks = []
    sites = [&quot;http://www.google.com&quot;, &quot;http://www.linkedin.com&quot;,
             &quot;http://www.quora.com&quot;, &quot;http://www.facebook.com&quot;]

    for site in sites:
        tasks.append(asyncio.ensure_future(get_response(site)))

    await asyncio.gather(*tasks)

start_time = time.time()
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
loop.close()

print(&quot;Time taken for asyncio&quot;, time.time()-start_time)

&lt;/code&gt;&lt;/pre&gt;
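&lt;p&gt;One caveat worth noting: &lt;em&gt;requests.get()&lt;/em&gt; is a synchronous call, so inside a co-routine it does not hand control back to the event loop while it waits on the network. For fully non-blocking HTTP requests, asyncio is usually paired with an asynchronous HTTP client. The sketch below assumes the third-party &lt;em&gt;aiohttp&lt;/em&gt; package is installed; it is not part of the original example.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
import asyncio
import aiohttp   # assumption: installed separately, e.g. pip install aiohttp

async def get_response(session, site):
    async with session.get(site) as response:
        return await response.text()   # awaiting here lets the other requests make progress

async def main():
    sites = [&quot;http://www.google.com&quot;, &quot;http://www.linkedin.com&quot;,
             &quot;http://www.quora.com&quot;, &quot;http://www.facebook.com&quot;]
    async with aiohttp.ClientSession() as session:
        await asyncio.gather(*(get_response(session, site) for site in sites))

asyncio.run(main())   # Python 3.7+

&lt;/code&gt;&lt;/pre&gt;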
&lt;p&gt;Note - You might come across an error that says ‘event loop is already running’ while executing the above code in Jupyter Notebooks. In that case, either run the code from your terminal as a regular Python program or fix the issue by installing nest_asyncio from &lt;a href=&quot;https://pypi.org/project/nest-asyncio/&quot;&gt;https://pypi.org/project/nest-asyncio/&lt;/a&gt; and applying it in your code, as shown in the short snippet below.&lt;/p&gt;
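&lt;p&gt;The fix itself is only two lines (assuming the nest-asyncio package has been installed):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
import nest_asyncio

nest_asyncio.apply()   # patches the notebook&apos;s already-running loop so run_until_complete() works

&lt;/code&gt;&lt;/pre&gt;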
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;As you can see from the example above, the Python asyncio library can be very advantageous when you want the application to have more control over context switching than the operating system, especially when dealing with APIs that have a polling mechanism or any other I/O bound operation.&lt;/p&gt;
&lt;p&gt;Hopefully, after reading through all three of my articles, you now have a basic understanding of the libraries that are available in Python for concurrency, including:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Threading&lt;/li&gt;
&lt;li&gt;Multiprocessing&lt;/li&gt;
&lt;li&gt;Asyncio&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;By reading these posts, I hope you also learned to determine which library to employ based on whether the operation is I/O bound or CPU bound. My three articles just covered the basics of concurrency. Is there more to it? Definitely! You may want to explore more on locks in threading, pools in multiprocessing, and a bunch of other cool features found in &lt;a href=&quot;https://docs.python.org/3/library/asyncio.html&quot;&gt;asyncio.&lt;/a&gt;&lt;/p&gt;
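&lt;p&gt;For instance, a multiprocessing pool can spread the &lt;em&gt;cpu_bound()&lt;/em&gt; function from Part 2 across a fixed set of worker processes. The snippet below is a small sketch of that idea; it is not covered in these articles.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
from multiprocessing import Pool

def cpu_bound(num):
    return sum(i for i in range(num*1000000))

if __name__ == &quot;__main__&quot;:
    numbers = [11, 23, 53, 34]
    with Pool(processes=4) as pool:        # four reusable worker processes
        results = pool.map(cpu_bound, numbers)
    print(results)

&lt;/code&gt;&lt;/pre&gt;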
&lt;p&gt;You’ll find more articles and tutorials like this on the &lt;a href=&quot;/blog&quot;&gt;HPE blog site.&lt;/a&gt; Remember to check back often to see what’s new! Don’t forget, you can follow me on Twitter &lt;a href=&quot;https://twitter.com/deyagondsamarth&quot;&gt;@deyagondsamarth&lt;/a&gt; or connect with me on &lt;a href=&quot;https://hpedev.slack.com/?redir=%2Fteam%2FUQM0ZTE1F&quot;&gt;Slack.&lt;/a&gt; Happy coding!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Understanding Concurrency in Python Part 2 - Multiprocessing]]></title><description><![CDATA[picture2 In Part 1 of this series, Understanding Concurrency in Python, I covered the timing of multi-threaded executions. Through specific…]]></description><link>https://developer.hpe.com/understanding-concurrency-in-python-part-2-multiprocessing/</link><guid isPermaLink="false">https://developer.hpe.com/understanding-concurrency-in-python-part-2-multiprocessing/</guid><pubDate>Wed, 19 Feb 2020 17:23:08 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/1/picture2-1582133413956.png&quot; alt=&quot;picture2&quot;&gt;&lt;/p&gt;
&lt;p&gt;In Part 1 of this series, &lt;a href=&quot;/blog/understanding-concurrency-in-python-part-1-threading&quot;&gt;Understanding Concurrency in Python&lt;/a&gt;, I covered the timing of multi-threaded executions. Through specific examples, I showed you that, no matter how many cores you have on your computer, the Python threading library does not really help you fully exploit the abilities of multi-threading. But a resolution is available, which I promised to show you. I am covering that resolution here in Part 2. Thanks to the Python multiprocessing library, you can take complete advantage of all the cores in your computer and more efficiently handle CPU bound functions.&lt;/p&gt;
&lt;p&gt;Multiprocessing allows you to run functions as independent Python processes on different cores. Even though these processes run independently, they can still communicate with each other when needed.&lt;/p&gt;
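&lt;p&gt;One common way for these independent processes to communicate is through a &lt;em&gt;multiprocessing.Queue&lt;/em&gt;. The short sketch below (an illustration added here, not part of the example that follows) shows a child process handing a result back to its parent:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
import multiprocessing

def square(num, queue):
    queue.put(num * num)          # send the result back to the parent process

if __name__ == &quot;__main__&quot;:
    queue = multiprocessing.Queue()
    p = multiprocessing.Process(target=square, args=(7, queue))
    p.start()
    p.join()
    print(queue.get())            # prints 49

&lt;/code&gt;&lt;/pre&gt;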
&lt;p&gt;Let’s look at an example where multiprocessing helps us achieve concurrency and the speed required to handle a CPU bound function. To compare and contrast our results, we will be using the same example we looked at in our previous post to determine how threading affected CPU bound functions.&lt;/p&gt;
&lt;p&gt;Step 1: Import the necessary libraries and modules.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
import multiprocessing
import time

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Step 2: Define a CPU intensive function cpu_bound() that accepts a number, multiplies it by a million, and calculates the sum of all the numbers in a range of 0 to that product. For ease of reference, we’ll use the same example we used in Understanding Concurrency in Python Part 1 – Threading. As we did previously, remember to create an additional list of random numbers. The example we used before is shown below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
def cpu_bound(num):
    return sum([i for i in range(num*1000000)])

numbers = [11, 23, 53, 34]

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When we previously used this example as we looked at threading, we determined the time taken for regular execution of this function iterated over the list of numbers. In that instance, the results were as shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/1/picture4-1582133425400.png&quot; alt=&quot;picture4&quot;&gt;&lt;/p&gt;
&lt;p&gt;Step 3: This time, let’s capture the time taken to execute this CPU intensive function using the Python multiprocessing library to invoke multiple processes. Here, we use the Process class of the multiprocessing library, which takes two parameters: &lt;em&gt;target&lt;/em&gt;, which is set to the &lt;em&gt;cpu_bound&lt;/em&gt; function, and &lt;em&gt;args&lt;/em&gt;, the arguments to pass to &lt;em&gt;cpu_bound&lt;/em&gt;. The processes are started in one loop and joined in a second loop, so the timing reflects when all of them have finished.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
start_time = time.time()
processes = []
for number in numbers:
    p = multiprocessing.Process(target=cpu_bound, args=(number,))
    p.start()                     # the processes run in parallel on separate cores
    processes.append(p)

for p in processes:
    p.join()                      # wait for every process to finish before stopping the clock

print(&quot;Time taken for multi-processing execution&quot;, time.time()-start_time)

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When I re-executed the regular and multiprocessing-fashioned code, the results were as follows:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/1/picture5-1582133442778.png&quot; alt=&quot;picture5&quot;&gt;&lt;/p&gt;
&lt;p&gt;This is so awesome! You can see the drastic reduction in the time taken to execute the same &lt;em&gt;cpu_bound()&lt;/em&gt; function using the multiprocessing library, where the &lt;em&gt;cpu_bound&lt;/em&gt; function is executed as an independent process for every number in the list.&lt;/p&gt;
&lt;p&gt;The complete code that we used to illustrate how the multiprocessing library helps would look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
import multiprocessing
import time


def cpu_bound(num):
    return sum([i for i in range(num*1000000)])

numbers = [11, 23, 53, 34]

start_time = time.time()
for number in numbers:
    cpu_bound(number)

print(&quot;Time taken for regular execution&quot;, time.time()-start_time)
start_time = time.time()
processes = []
for number in numbers:
    p = multiprocessing.Process(target=cpu_bound, args=(number,))
    p.start()
    processes.append(p)

for p in processes:
    p.join()

print(&quot;Time taken for multi-processing execution&quot;, time.time()-start_time)

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you are working on a Microsoft Windows-based system, you can open your Task Manager while the above Python code is running to see the number of Python processes being spawned.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/1/picture6-1582133457943.png&quot; alt=&quot;picture6&quot;&gt;&lt;/p&gt;
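&lt;p&gt;One related caveat: on Windows (and on macOS with recent Python versions), new processes are started with the &quot;spawn&quot; method, which re-imports your script in each child process. The process-creation code therefore needs to sit under an &lt;em&gt;if __name__ == &quot;__main__&quot;&lt;/em&gt; guard, roughly as in this small sketch:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
import multiprocessing

def cpu_bound(num):
    return sum(i for i in range(num*1000000))

if __name__ == &quot;__main__&quot;:
    # Without this guard, each spawned child re-imports the module, re-runs the
    # process-creation code, and multiprocessing raises a RuntimeError.
    p = multiprocessing.Process(target=cpu_bound, args=(11,))
    p.start()
    p.join()

&lt;/code&gt;&lt;/pre&gt;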
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;As promised, I’ve shown you how to achieve efficient concurrency in Python for CPU bound functions using the Multiprocessing library. This method executes the target method as independent processes for every input, thus utilizing the CPU resources to the maximum extent. However, the control over context switching between the processes is still with the operating system. This might be a concern at times. Is there an alternative for this? Yes, there is! So, make sure you check out my next post, &lt;em&gt;Understanding Concurrency in Python Part 3 – Asyncio&lt;/em&gt;, to learn more about how this library can let an application have control over context switching and execute multiple functions simultaneously. You can read all my blog posts on &lt;a href=&quot;/blog&quot;&gt;HPE DEV.&lt;/a&gt; Feel free to reach out to me with any questions on &lt;a href=&quot;https://hpedev.slack.com/?redir=%2Fteam%2FUQM0ZTE1F&quot;&gt;Slack&lt;/a&gt; or connect with me on Twitter &lt;a href=&quot;https://twitter.com/deyagondsamarth&quot;&gt;@deyagondsamarth.&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Automation is not testing]]></title><description><![CDATA[picture1 I recently attended a webinar where the speaker commented that "Automation is not testing". It started me thinking about testing in…]]></description><link>https://developer.hpe.com/automation-is-not-testing/</link><guid isPermaLink="false">https://developer.hpe.com/automation-is-not-testing/</guid><pubDate>Tue, 18 Feb 2020 17:55:26 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/1/picture1-1582049527283.png&quot; alt=&quot;picture1&quot;&gt;&lt;/p&gt;
&lt;p&gt;I recently attended a webinar where the speaker commented that &quot;Automation is not testing&quot;. It started me thinking about testing in general and some of the challenges it can present.
We all know that we should be testing our scripts, but there are plenty of potential pitfalls when starting this journey. In this article, I’m going to jump straight into the deep end and tackle one of the struggles that took me a while to work through.&lt;/p&gt;
&lt;p&gt;Imagine the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;You have a local device with a REST API.&lt;/li&gt;
&lt;li&gt;You write a library that accesses that REST API.&lt;/li&gt;
&lt;li&gt;You write tests for that library so it runs against the local device’s REST API.&lt;/li&gt;
&lt;li&gt;You push the library to GITHUB so other people can leverage your work. (You are a good person, right? )&lt;/li&gt;
&lt;li&gt;You configure TravisCI or CircleCI for integration testing.&lt;/li&gt;
&lt;li&gt;You realize all your tests fail because Travis/Circle doesn’t have access to your internal device.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;And now, your GITHUB badges all show red, and no one trusts your code. Which brings us to &lt;em&gt;vcrpy.&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;What’s vcrpy?&lt;/h2&gt;
&lt;p&gt;Wow! So glad you asked! &lt;em&gt;Vcrpy&lt;/em&gt; is a Python library that automatically records the responses returned by a REST API and captures them in a local file that you can replay later.
According to the &lt;em&gt;vcrpy&lt;/em&gt; docs (available &lt;a href=&quot;https://vcrpy.readthedocs.io/en/latest/&quot;&gt;here&lt;/a&gt;), the three main benefits of the library are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The ability to work offline&lt;/li&gt;
&lt;li&gt;Completely deterministic tests&lt;/li&gt;
&lt;li&gt;Increased test execution speed&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Let’s dig in by writing a quick piece of code that is going to access a REST API. In this case, I’m going to use a public API, but let’s imagine it’s behind your firewall on a device where you don’t want anyone fiddling with anything.&lt;/p&gt;
&lt;p&gt;For this example, I’m going to be using the public API at &lt;a href=&quot;https://api.kanye.rest&quot;&gt;https://api.kanye.rest&lt;/a&gt;. This is a public API that responds to every GET request with a quote from Kanye West.
Let’s use Python to create a small library that will access the API, print a message, and return the quote as JSON.
If you want to take a look at the library, feel free to check out the GITHUB repository &lt;a href=&quot;https://github.com/netmanchris/pykanyerest&quot;&gt;pykanyerest.&lt;/a&gt;
If you don’t want to leave this page, I’ve included the library function below.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
import requests
import json


def get_new_quote():
    url = &apos;https://api.kanye.rest&apos;
    r = requests.get(url)
    quote = json.loads(r.text)
    print (&quot;New Kanye Quote coming up!&quot;)
    return quote

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now that we’ve got the new function built, let’s take a quick look at what we get. We will run the function and capture it in a python variable called x.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
x = get_new_quote()
New Kanye Quote coming up!

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, let’s use the json library to take a look at what was returned.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
json.dumps(x)
&apos;{&quot;quote&quot;: &quot;I’m nice at ping pong&quot;}&apos;

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now that we’ve got some working code, let’s run it again and capture this as the python variable y.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
y = get_new_quote()
New Kanye Quote coming up!
json.dumps(y)
&apos;{&quot;quote&quot;: &quot;The world is our office&quot;}&apos;

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;em&gt;Hmmmmm…&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;As you can see, each time you perform a GET against the API, a different quote is returned. You can imagine this probably isn’t the best thing for testing because we really don’t have a clear indication as to what &lt;em&gt;exactly&lt;/em&gt; we’re going to expect from this API.&lt;/p&gt;
&lt;p&gt;We do know a couple of things that can be used for testing:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;the returned object is a python dictionary&lt;/li&gt;
&lt;li&gt;the returned object has a single key/value pair&lt;/li&gt;
&lt;li&gt;the returned object’s first key is “quote”&lt;/li&gt;
&lt;li&gt;the returned object’s first value is going to be an object of type STR&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Imagine code that could test for all these things.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Take your time… I’ll wait.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;While you were imagining it, I wrote a quick test to help make sure that the API returns what we expect.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;If you don’t expect anything, you deserve what you get, right?&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
from unittest import TestCase
from pykanyerest.quotes import *


class TestGetNewQuote(TestCase):
    &quot;&quot;&quot;
    Test Case for get_new_quote function from kanye.rest
    &quot;&quot;&quot;

    def test_GetNewQuote(self):
        &quot;&quot;&quot;
        &quot;&quot;&quot;
        quote = get_new_quote()
        self.assertEqual(type(quote), dict)
        self.assertEqual(len(quote), 1)
        keys = quote.keys()
        self.assertIn(&apos;quote&apos;, keys)
        self.assertEqual(type(quote[&apos;quote&apos;]), str)

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you were to run this test, you would find that it passes. &lt;em&gt;You can trust me on this, right?&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Now, we’ve already got a lot we can hang our tests on, but imagine we want to also test the exact contents of the value.
The code to test would now look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
class TestGetNewQuote(TestCase):
    &quot;&quot;&quot;
    Test Case for get_new_quote function from kanye.rest
    &quot;&quot;&quot;

    def test_GetNewQuote(self):
        &quot;&quot;&quot;
        &quot;&quot;&quot;
        quote = get_new_quote()
        self.assertEqual(type(quote), dict)
        self.assertEqual(len(quote), 1)
        keys = quote.keys()
        self.assertIn(&apos;quote&apos;, keys)
        self.assertEqual(type(quote[&apos;quote&apos;]), str)
        self.assertEqual(quote[&apos;quote&apos;], &quot;If I got any cooler I would freeze to death&quot;)

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you ran THIS code, you would now find it fails.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
Traceback (most recent call last):
  File &quot;/Users/christopheryoung/PycharmProjects/OpenTestingBlog/tests/test_quotes.py&quot;, line 18, in test_GetNewQuote
    self.assertEqual(quote[&apos;quote&apos;], &quot;If I got any cooler I would freeze to death&quot;)
  File &quot;/Applications/PyCharm.app/Contents/helpers/pycharm/teamcity/diff_tools.py&quot;, line 38, in _patched_equals
    raise error
teamcity.diff_tools.EqualsAssertionError:  :: People only get jealous when they care. != If I got any cooler I would freeze to death

&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Why did it fail?&lt;/h2&gt;
&lt;p&gt;Well, of course it failed, because the API is supposed to return a &lt;em&gt;NEW&lt;/em&gt; quote every time we hit it.&lt;/p&gt;
&lt;p&gt;This is where the &lt;em&gt;vcrpy&lt;/em&gt; library comes in SUPER handy, as it can record and freeze the API response so we can make sure the last test passes every time.&lt;/p&gt;
&lt;p&gt;So, the first thing we’re going to do is to import the VCR library into our test file and configure the new test to record the API response into a file on the local filesystem.
As you can see below, we’ve really only added two lines.
The first is the “import vcr” line at the top that makes the vcrpy library available to the test script. The second is the decorator on top of the “test_GetNewQuote” function, which does two things:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Tells the script where to copy the results of the API call the first time you run it&lt;/li&gt;
&lt;li&gt;Tells the script where to look for the results every subsequent time you run it.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
import vcr
from unittest import TestCase
from pykanyerest.quotes import *


class TestGetNewQuote(TestCase):
    &quot;&quot;&quot;
    Test Case for get_new_quote function from kanye.rest
    &quot;&quot;&quot;

    @vcr.use_cassette(cassette_library_dir=&apos;./test_pykanyerest/fixtures/cassettes&apos;)
    def test_GetNewQuote(self):
        &quot;&quot;&quot;
        &quot;&quot;&quot;
        quote = get_new_quote()
        self.assertEqual(type(quote), dict)
        self.assertEqual(len(quote), 1)
        keys = quote.keys()
        self.assertIn(&apos;quote&apos;, keys)
        self.assertEqual(type(quote[&apos;quote&apos;]), str)
        self.assertEqual(quote[&apos;quote&apos;], &quot;If I got any cooler I would freeze to death&quot;)

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once you’ve run the test once, a new cassette file named after the test (test_GetNewQuote) will appear in the ./test_pykanyerest/fixtures/cassettes folder with the following contents:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;
interactions:
- request:
    body: null
    headers:
      Accept:
      - &apos;*/*&apos;
      Accept-Encoding:
      - gzip, deflate
      Connection:
      - keep-alive
      User-Agent:
      - python-requests/2.22.0
    method: GET
    uri: https://api.kanye.rest/
  response:
    body:
      string: &apos;{&quot;quote&quot;:&quot;I&apos;&apos;m a creative genius&quot;}&apos;
    headers:
      Access-Control-Allow-Headers:
      - Content-Type
      Access-Control-Allow-Methods:
      - GET
      Access-Control-Allow-Origin:
      - &apos;*&apos;
      CF-RAY:
      - 55a5b75cac3aecee-YUL
      Connection:
      - keep-alive
      Content-Length:
      - &apos;33&apos;
      Content-Type:
      - application/json
      Date:
      - Fri, 24 Jan 2020 23:16:38 GMT
      Expect-CT:
      - max-age=604800, report-uri=&quot;https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct&quot;
      Server:
      - cloudflare
      Set-Cookie:
      - __cfduid=d1c38c52e7d73039e047e768fdea93b5d1579907798; expires=Sun, 23-Feb-20
        23:16:38 GMT; path=/; domain=.kanye.rest; HttpOnly; SameSite=Lax
      Vary:
      - Accept-Encoding
    status:
      code: 200
      message: OK
version: 1

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you look closely, you can see a lot of information in there, including the contents of the response body string, which is now ‘{“quote”:”I’m a creative genius”}’.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Note: For those paying attention, you will probably have guessed that the test we created above will still fail, as the Kanye quote we were looking for has changed again.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Let’s change the test to look for the new quote that’s captured in the test_GetNewQuote file above.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
import vcr
from unittest import TestCase
from pykanyerest.quotes import *


class TestGetNewQuote(TestCase):
    &quot;&quot;&quot;
    Test Case for get_new_quote function from kanye.rest
    &quot;&quot;&quot;

    @vcr.use_cassette(cassette_library_dir=&apos;./test_pykanyerest/fixtures/cassettes&apos;)
    def test_GetNewQuote(self):
        &quot;&quot;&quot;
        &quot;&quot;&quot;
        quote = get_new_quote()
        self.assertEqual(type(quote), dict)
        self.assertEqual(len(quote), 1)
        keys = quote.keys()
        self.assertIn(&apos;quote&apos;, keys)
        self.assertEqual(type(quote[&apos;quote&apos;]), str)
        self.assertEqual(quote[&apos;quote&apos;], &quot;I&apos;m a creative genius&quot;)

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now when we run the tests, they all pass.&lt;/p&gt;
&lt;p&gt;And when we run them again?
… They still pass.&lt;/p&gt;
&lt;p&gt;And when we run them again?
… &lt;strong&gt;They still pass.&lt;/strong&gt;
We can now run the same test a million times and get the same answer, which is pretty cool, right?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;One More Thing&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;One of the other major benefits of using something like &lt;em&gt;vcrpy&lt;/em&gt; is we can now continue to refactor our code even when we don’t have access to the resource we’re testing.&lt;/p&gt;
&lt;p&gt;Stuck in a plane with no internet? No problem, you’ve captured the &lt;em&gt;actual&lt;/em&gt; response from the original server. Your code has no clue you’re not connected.&lt;/p&gt;
&lt;p&gt;No access to the corporate network? No problem, you’ve captured the &lt;em&gt;actual&lt;/em&gt; response from the original server. Your code has no clue you’re not connected.&lt;/p&gt;
&lt;p&gt;TravisCI has no access to your internal resources? No problem, you’ve captured the actual response from the original server and posted to GITHUB. TravisCI has no clue you’re not connected.&lt;/p&gt;
&lt;h2&gt;VCRPY&lt;/h2&gt;
&lt;p&gt;I’ve only begun to scratch the surface of this library, but hopefully it will spark your curiosity to investigate it a bit more and see how you can write some new tests for that fancy REST API on your new infrastructure.&lt;/p&gt;
&lt;p&gt;Just remember, you don’t want to write your secret usernames and passwords to the public GITHUB. The nice thing is that the vcrpy also has the ability to hide credentials. I’ll leave it up to you to see if you can figure that part out.&lt;/p&gt;
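&lt;p&gt;If you want a head start, the &lt;em&gt;use_cassette&lt;/em&gt; decorator accepts filtering options for exactly this purpose. The sketch below is based on the vcrpy documentation rather than on code from this article, and the &lt;em&gt;api_key&lt;/em&gt; parameter name is purely hypothetical:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
import vcr
from unittest import TestCase
from pykanyerest.quotes import *


class TestGetNewQuoteScrubbed(TestCase):

    # filter_headers and filter_query_parameters strip the named values
    # out of the cassette before it is written to disk
    @vcr.use_cassette(cassette_library_dir=&apos;./test_pykanyerest/fixtures/cassettes&apos;,
                      filter_headers=[&apos;authorization&apos;],
                      filter_query_parameters=[&apos;api_key&apos;])
    def test_GetNewQuoteScrubbed(self):
        quote = get_new_quote()
        self.assertIn(&apos;quote&apos;, quote.keys())

&lt;/code&gt;&lt;/pre&gt;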
&lt;p&gt;&lt;a href=&quot;https://kontrolissues.net/mentions/netmanchris/&quot;&gt;@netmanchris&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Understanding Concurrency in Python Part 1 - Threading]]></title><description><![CDATA[picture3 Although most developers understand the basic concepts of concurrency and parallelism, the nuances can be pretty tricky to…]]></description><link>https://developer.hpe.com/understanding-concurrency-in-python-part-1-threading/</link><guid isPermaLink="false">https://developer.hpe.com/understanding-concurrency-in-python-part-1-threading/</guid><pubDate>Mon, 10 Feb 2020 22:17:20 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/1/picture3-1581373479403.png&quot; alt=&quot;picture3&quot;&gt;&lt;/p&gt;
&lt;p&gt;Although most developers understand the basic concepts of concurrency and parallelism, the nuances can be pretty tricky to understand. At a high level, these are techniques employed to execute multiple processes or threads simultaneously while ensuring the CPU is used to its maximum extent. To provide you with a more complete understanding of concurrency in Python, I’ve written a three-part tutorial. I will start off by covering threading, and then delve into multiprocessing and asyncio.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Concurrency&lt;/strong&gt; is when processes are executed on a single processor by context switching, and they appear to be running simultaneously. &lt;strong&gt;Parallelism&lt;/strong&gt; is when processes are executed on multiple processors or cores and are actually running simultaneously.&lt;/p&gt;
&lt;p&gt;Python provides multiple libraries to achieve concurrency, namely threading, multiprocessing, and asyncio. These libraries will be better employed if we understand a few aspects about concurrency in Python.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;CPython enforces the GIL (Global Interpreter Lock), which allows only one thread to execute Python bytecode at a time. A thread needs to acquire this exclusive lock before executing any bytecode.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Concurrency is preferred when the process is either I/O bound or CPU bound. I/O bound processes are those that communicate with devices that are slower than the processor.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For example, a process talking to a poor network connection, printer/scanner, etc. is an I/O bound process. CPU bound processes are those that do significant CPU intensive computations. Here, the resource that limits the speed of execution is the CPU, unlike those in I/O bound processes.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;In I/O bound processes, threads release the GIL while they wait on slow devices, so they rarely compete for it; in CPU bound processes, by contrast, threads constantly contend for the GIL.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The threading and asyncio libraries are best used when the process is I/O bound, and the multiprocessing library is good to use when the process is CPU bound.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Here are some examples of how to employ the threading library in detail:&lt;/p&gt;
&lt;p&gt;Remember, this library is best for dealing with I/O bound functions.&lt;/p&gt;
&lt;p&gt;Start by looking at an I/O bound function for fetching responses from several websites. (I’m going to refer to this example in Parts 2 and 3 of this series, so make sure you take notes!) If you execute this task in both a regular and multi-threaded fashion and capture the time taken to fetch responses from all the sites, you can see it’s faster with a multi-threaded execution and slower with a regular execution.&lt;/p&gt;
&lt;p&gt;Step 1: First, import the necessary libraries and modules.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
import threading
import time
import requests

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Step 2:  Define a function get_response() that accepts site as an input and fetches the response data from that site using requests.get() method.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
def get_response(site):
    return requests.get(site)

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Step 3: Create a list of several websites. Append any site and as many sites as you want to the list.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
sites = [&quot;http://www.google.com&quot;, &quot;http://www.linkedin.com&quot;,
         &quot;http://www.quora.com&quot;, &quot;http://www.facebook.com&quot;]


&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Step 4: Iterate through this list of sites and invoke the function get_response() for each site. Capture and print the time taken for this complete iteration using a time.time() method.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
start_time = time.time()
for site in sites:
    get_response(site)  

print(&quot;Time taken for regular execution&quot;, time.time()-start_time)

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Step 5: Now, define threads using the threading library with target to get_response() function and arguments set to sites in the list.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
threads = [threading.Thread(target=get_response, args=(site,))
          for site in sites]

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Step 6: Iterate over these threads and start these threads using the thread.start() method. Use the thread.join() method to wait till the thread execution completes. Also, capture the time using the time.time() method to see the time taken to complete the execution.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
start_time = time.time()
for thread in threads:
    thread.start()
for thread in threads:
    thread.join()
print(&quot;Time taken for multi-threaded execution&quot;, time.time()-start_time)

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/1/picture1-1581373455996.png&quot; alt=&quot;picture1&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can see that the multi-threaded execution of this I/O bound task is way faster than the regular execution. The efficiency gain from multi-threaded execution is significant even in a scenario like this, with only four sites to fetch responses from. Imagine how much more of an advantage we would see when the list of sites grows longer! Did you just think about working on a mind-blowing web scraping project?&lt;/p&gt;
&lt;p&gt;The consolidated code would look like what’s shown below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import threading
import time
import requests


def get_response(site):
    return requests.get(site)

sites = [&quot;http://www.google.com&quot;, &quot;http://www.linkedin.com&quot;,
         &quot;http://www.quora.com&quot;, &quot;http://www.facebook.com&quot;]

start_time = time.time()
for site in sites:
    get_response(site)

print(&quot;Time taken for regular execution&quot;, time.time()-start_time)

threads = [threading.Thread(target=get_response, args=(site,))
           for site in sites]
start_time = time.time()
for thread in threads:
    thread.start()
for thread in threads:
    thread.join()
print(&quot;Time taken for multi-threaded execution&quot;, time.time()-start_time)

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, let’s consider a CPU bound function and observe how a threading library isn’t of much help in achieving any further efficiency.&lt;/p&gt;
&lt;p&gt;Step 1: Again, be sure to first import the necessary libraries and modules.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
import threading
import time

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Step 2: This time, define a CPU intensive function cpu_bound() that accepts a number, multiplies it by 10^6 and calculates the sum of all numbers in a range of 0 to that product.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def cpu_bound(num):
    return sum([i for i in range(num*1000000)])
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Step 3: Create a list of random numbers.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
numbers = [11, 23, 53, 34]

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Step 4: Just like in the last example, iterate over these numbers and invoke the cpu intensive function cpu_bound(). Capture the time taken to complete the execution. Print out the time taken for regular execution.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
start_time = time.time()
for number in numbers:
    cpu_bound(number)

print(&quot;Time taken for regular execution&quot;, time.time()-start_time)

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Step 5: As shown previously, define the variable threads using threading.Thread() method with target function set to cpu_bound and arguments set to the numbers in the list.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
threads = [threading.Thread(target=cpu_bound, args=(number,))
          for number in numbers]

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Step 6: Iterate over these threads and start the execution of these threads using the thread.start() method. Use the thread.join() method to wait till the thread execution completes. Also, capture the time using the time.time() method to see the time taken to complete the execution and print it out.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
start_time = time.time()
for thread in threads:
    thread.start()
for thread in threads:
    thread.join()
print(&quot;Time taken for multi-threaded execution&quot;, time.time() - start_time)

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/1/picture2-1581373442995.png&quot; alt=&quot;picture2&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can see that employing the threading module did not help us much in achieving any further efficiency while executing a CPU bound function. This is because only one thread can hold the GIL at a time, so for CPU bound code the threads effectively take turns, and the overhead of switching between them leaves the multi-threaded version no faster than plain sequential execution.&lt;/p&gt;
&lt;p&gt;The consolidated code would look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import threading
import time


def cpu_bound(num):
    return sum([i for i in range(num*1000000)])

numbers = [11, 23, 53, 34]

start_time = time.time()
for number in numbers:
    cpu_bound(number)

print(&quot;Time taken for regular execution&quot;, time.time()-start_time)

threads = [threading.Thread(target=cpu_bound, args=(number,))
           for number in numbers]

start_time = time.time()
for thread in threads:
    thread.start()
for thread in threads:
    thread.join()
print(&quot;Time taken for multi-threaded execution&quot;, time.time()-start_time)

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Through these two examples, it’s apparent that multi-threaded execution takes more than, or almost the same amount of time as, that of regular execution while handling CPU bound functions. What you need to understand is that, &lt;em&gt;no matter how many cores you have on your computer, the threading library of Python will not help you to completely exploit the abilities of multi-threading&lt;/em&gt;. Because of this, any CPU intensive functions won’t benefit from multi-threaded execution.&lt;/p&gt;
&lt;p&gt;Are there any instances wherein we can take complete advantage of all the cores that our computer has and efficiently handle CPU bound functions?&lt;/p&gt;
&lt;p&gt;Yes! Python supplies a Multiprocessing library that helps this exact situation. In my next post that covers more of the nuances of concurrency, Understanding Concurrency in Python Part 2 – Multiprocessing, I will explain more about this. Remember to check back on the HPE DEV blog site often to keep up with the many different tutorials we post. If you want to, you can also follow me on Twitter &lt;a href=&quot;https://twitter.com/deyagondsamarth&quot;&gt;@deyagondsamarth.&lt;/a&gt; or connect with me on &lt;a href=&quot;https://hpedev.slack.com/?redir=%2Fteam%2FUQM0ZTE1F&quot;&gt;slack.&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Meet Eric Soderberg and Shimrit Yacobi - HPE DEV Team Members working on Grommet ]]></title><description><![CDATA[In this Meet the HPE DEV Team blog, I’d like to introduce Eric Soderberg @ericsoderberg and Shimrit (aka “Shimi”) Yacobi, members of the HPE…]]></description><link>https://developer.hpe.com/meet-eric-soderberg-and-shimrit-yacobi-hpe-dev-team-members-working-on-g/</link><guid isPermaLink="false">https://developer.hpe.com/meet-eric-soderberg-and-shimrit-yacobi-hpe-dev-team-members-working-on-g/</guid><pubDate>Fri, 07 Feb 2020 18:33:36 GMT</pubDate><content:encoded>&lt;p&gt;In this Meet the HPE DEV Team blog, I’d like to introduce Eric Soderberg &lt;a href=&quot;https://twitter.com/ericsoderberg?lang=en&quot;&gt;@ericsoderberg&lt;/a&gt; and Shimrit (aka “Shimi”) Yacobi, members of the &lt;a href=&quot;https://developer.hpe.com/community&quot;&gt;HPE Dev Community&lt;/a&gt;  who are focused on &lt;a href=&quot;https://developer.hpe.com/platform/grommet/home&quot;&gt;Grommet.&lt;/a&gt; Grommet is a React-based library of reusable UI components that help developers and designers create web applications. This open source UI development and design framework simplifies the way web applications are built by providing a package of commonly used interface elements from which developers and designers can choose to use.&lt;/p&gt;
&lt;h2&gt;Eric Soderberg – Co-creator of Grommet&lt;/h2&gt;
&lt;p&gt;Awaking each morning with the hope of finding unexpected beauty is the way Eric  best describes his approach to life. He is constantly seeking out the fresh, different, and creative – what he describes as the many reflections of God. He lives simply and with purpose. The one car he owns is driven mostly by his wife, while Eric bikes back and forth to work, rain or shine. An elder at his local church, Eric finds a great deal of fulfillment in his volunteer work.&lt;/p&gt;
&lt;p&gt;As co-creator of Grommet, Eric brings this sense of beauty, simplicity, and service to his work. When customers expressed frustration at the profusion of disparate user interfaces across HPE products (a result of HPE having acquired many different companies), Eric worked closely with Chris Carlozzi &lt;a href=&quot;https://twitter.com/chriscarlozzi?lang=en&quot;&gt;@chriscarlozzi&lt;/a&gt; to design a user experience framework that was accessible, responsive, and simple. Seeking a neutral framework that could be broadly adopted, they started with ReactJS and developed the Grommet libraries for theming, accessibility, and modularity.&lt;/p&gt;
&lt;p&gt;Eric encouraged developers in the different groups to use Grommet by building sample applications and showing them how easily it could be done. As a result, applications across HPE began to look more unified, and Grommet itself improved as the learnings from each interaction found their way back into the code.&lt;/p&gt;
&lt;h2&gt;Shimi Yacobi – Grommet Core Developer and Community Manager&lt;/h2&gt;
&lt;p&gt;Finding herself as the only female in the advanced computer classes in her high school, Shimi gravitated naturally towards engineering, and wound up in a career she truly loves. An outgoing and personable developer, Shimi enjoys yoga, hiking, scuba diving, and spending time with her family. She is also very involved with the community, including a local Women’s Network focused on empowering women in technology.&lt;/p&gt;
&lt;p&gt;Shimi first encountered Grommet while she was working as a developer in the &lt;a href=&quot;https://www.hpe.com/us/en/integrated-systems/hyper-converged.html&quot;&gt;HPE Hyperconverged&lt;/a&gt; systems group. She is now a core developer for Grommet, as well as the Grommet Community Manager. In this latter role, she connects with developers and designers to understand their requirements and extends the Grommet framework to address their current and future needs. She also provides consulting to Grommet users on best practices and how to get things done using the framework. Shimi values the feedback provided by each contributor, as being so tuned into the community ensures that Grommet is always on the cutting edge.&lt;br&gt;
To keep up to date with everything that’s going on with Grommet, make sure you connect with them on &lt;a href=&quot;https://grommet.slack.com/&quot;&gt;Slack.&lt;/a&gt; You can follow the &lt;a href=&quot;https://twitter.com/grommet_io&quot;&gt;Grommet&lt;/a&gt; Twitter handle or follow &lt;a href=&quot;https://twitter.com/ericsoderberg&quot;&gt;Eric&lt;/a&gt; directly on Twitter. And don’t forget to check out the HPE DEV portal to learn more about the &lt;a href=&quot;https://developer.hpe.com/platform/grommet/home&quot;&gt;Grommet platform.&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[New HPE OneView Ansible Module v5.4.0 includes support for HPE OneView 5.0]]></title><description><![CDATA[picture1 HPE OneView offers a software-defined, programmatic approach to managing infrastructure with efficient workflow automation, a…]]></description><link>https://developer.hpe.com/new-hpe-oneview-ansible-module-v540-includes-support-for-hpe-oneview-50/</link><guid isPermaLink="false">https://developer.hpe.com/new-hpe-oneview-ansible-module-v540-includes-support-for-hpe-oneview-50/</guid><pubDate>Fri, 07 Feb 2020 18:29:39 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/1/picture1-1581100387470.png&quot; alt=&quot;picture1&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/integrated-systems/software.html&quot;&gt;HPE OneView&lt;/a&gt; offers a software-defined, programmatic approach to managing infrastructure with efficient workflow automation, a modern RESTful API, and a comprehensive partner ecosystem. HPE’s software development kits (SDKs) and modules enable HPE OneView to integrate with partner products and take advantage of the benefits they offer.&lt;/p&gt;
&lt;p&gt;HPE is pleased to announce the availability of the HPE OneView Ansible Module v5.4.0. This module provides integration of HPE OneView with Ansible by Red Hat®, an industry-leading software deployment, provisioning, and configuration management tool. This new module supports HPE OneView 5.0 (REST API version 1200). It also provides backwards compatibility, extending support to API 800 and API 1000.&lt;/p&gt;
&lt;p&gt;The HPE OneView Ansible module enables the automated provisioning of bare-metal resources, including servers, storage, and networking as part of the application deployment process. Using Ansible with HPE OneView allows customers to create a flexible and adaptive infrastructure, essential to addressing the need for organizational agility and supporting initiatives that accelerate the delivery of customer and business value.&lt;/p&gt;
&lt;p&gt;For more information&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/oneview-ansible/releases/tag/v5.4.0&quot;&gt;Release content&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/oneview-ansible/blob/master/CHANGELOG.md&quot;&gt;List of supported resources and changes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/oneview-ansible&quot;&gt;Code repository and examples&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Whitepaper: &lt;a href=&quot;https://github.com/HewlettPackard/oneview-ansible-samples/blob/master/infrastructure-as-code/infrastructure-as-code.md&quot;&gt;Infrastructure as code with HPE OneView and Ansible by Red Hat&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Demystifying code and design - Newsletter]]></title><link>https://developer.hpe.com/2020-February-03/</link><guid isPermaLink="false">https://developer.hpe.com/2020-February-03/</guid><pubDate>Mon, 03 Feb 2020 06:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Refactoring in Python]]></title><description><![CDATA[01 refactoring 1024 What is refactoring, and why do we need it? Too often, developers don’t take the time required to refactor and refine…]]></description><link>https://developer.hpe.com/refactoring-in-python/</link><guid isPermaLink="false">https://developer.hpe.com/refactoring-in-python/</guid><pubDate>Thu, 30 Jan 2020 22:57:21 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2020/1/01-refactoring-1024-1581354751665.jpg&quot; alt=&quot;01 refactoring 1024&quot;&gt;&lt;/p&gt;
&lt;h2&gt;What is refactoring, and why do we need it?&lt;/h2&gt;
&lt;p&gt;Too often, developers don’t take the time required to refactor and refine their code, resulting in the accumulation of technical debt that needs to be addressed somewhere down the road. Refactoring is a technique used to improve the non-functional qualities of code by restructuring it internally without altering its external behavior. These non-functional qualities are attributes like modularity, readability, testability, and maintainability, the absence of which adds technical debt to the code. Refactoring code is like putting the final touches on a painting.&lt;/p&gt;
&lt;h2&gt;Refactoring techniques for Python:&lt;/h2&gt;
&lt;p&gt;While the same refactoring techniques can be employed across different programming languages, this article focuses on refactoring techniques that are more relevant for the Python programming language.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Flat is better than nested&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;When a &lt;em&gt;while&lt;/em&gt; &lt;em&gt;loop&lt;/em&gt; does nothing on each iteration except check the truth value of an ‘if condition’ nested inside it, the &lt;em&gt;loop&lt;/em&gt; &lt;em&gt;condition&lt;/em&gt; and the nested &lt;em&gt;if&lt;/em&gt; &lt;em&gt;condition&lt;/em&gt; can be combined with an &lt;em&gt;AND&lt;/em&gt; logical operator and used as the new &lt;em&gt;loop&lt;/em&gt; &lt;em&gt;condition&lt;/em&gt; to yield the same result. The code works either way, but when refactored as described, it becomes easier to read and debug.&lt;/p&gt;
&lt;p&gt;Below is an example code snippet before refactoring:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def _upheap(self, index):

    parent_index = (index-1)//2
    while index &gt; 0:  # condition driving the while loop
        if self._heap[parent_index] &gt; self._heap[index]:  # if conditional
            self._swap(parent_index, index)
            index = parent_index
            parent_index = (index-1)//2

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The refactored code snippet is shown below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def _upheap(self, index):

    parent_index = (index-1)//2
    while index &gt; 0 and self._heap[parent_index] &gt; self._heap[index]:
        self._swap(parent_index, index)
        index = parent_index
        parent_index = (index-1)//2

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Doesn’t the code look much more refined now?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Global variables are always a bad idea&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Every function in code has access to global variables. If the global variables are modified, it’s hard to find out which function made the change. Debugging becomes tougher as the code grows. When it comes to readability, strange variables will start popping up from out of nowhere. This is because the global variables in use might be declared somewhere very far into the code. (Don’t you think that declaring the variables close to their usage in the code is a good refactoring idea? This would be especially helpful in Python, which is a dynamically-typed programming language.)&lt;/p&gt;
&lt;p&gt;To avoid changing variables that shouldn’t be changed, it is always a good idea not to implement global variables. Global constants, however, are pretty handy.&lt;/p&gt;
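&lt;p&gt;For example, a module-level constant (a small hypothetical sketch) is safe to share across functions precisely because nothing reassigns it:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
SECONDS_PER_MINUTE = 60   # a constant: read by many functions, modified by none


def to_minutes(seconds):
    return seconds / SECONDS_PER_MINUTE


def to_seconds(minutes):
    return minutes * SECONDS_PER_MINUTE

&lt;/code&gt;&lt;/pre&gt;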
&lt;p&gt;Consider the below snippet that uses a global variable.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;x = 10


def increment_x_by_one():

    global x
    x += 1
    return x


def increment_x_by_two():

    global x
    x += 2
    return x

print(increment_x_by_one())
print(increment_x_by_two())

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The output will be as shown below:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;11&lt;/code&gt;
&lt;code&gt;13&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;We wanted the second output to be 12, but the earlier call to &lt;code&gt;increment_x_by_one&lt;/code&gt; had already modified the global variable x away from its initial value of 10. This is what causes the incorrect result.&lt;/p&gt;
&lt;p&gt;Now, let’s refactor the code, avoiding the use of a global variable.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def increment_x_by_one(x):
    x += 1
    return x


def increment_x_by_two(x):
    x += 2
    return x

x = 10

print(increment_x_by_one(x))
print(increment_x_by_two(x))

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The output now will be accurate, as shown below:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;11&lt;/code&gt;
&lt;code&gt;12&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;The result is absolutely as expected, and the function that modified variable x can be easily tracked.&lt;/p&gt;
&lt;h2&gt;Don’t use magic literals&lt;/h2&gt;
&lt;p&gt;Many times, string literals and numeric values are used directly, as opposed to being assigned to a variable, even though they imply something significant in the code. It is good to store them as variables with proper names to improve the code’s readability. Those string literals and numeric values might be referred to in multiple places. If they are to be modified, then all their references need to be traced, which is, again, an error-prone effort.&lt;/p&gt;
&lt;p&gt;Consider the below snippet before refactoring:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;if age &gt; 21:
    return True


&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, consider the refactored code:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;age_limit = 21
if age &gt; age_limit:
    return True


&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Comprehending the code block is much easier now, isn’t it?&lt;/p&gt;
&lt;h2&gt;Remove the commented code&lt;/h2&gt;
&lt;p&gt;Often working/non-working code sections are commented out when prompted to try an alternative logic, and then they are left “as is”. This makes the readability of the code much more difficult. Also, other comments might get overlooked. So, the commented code should be erased. The best way to keep track of your commented code is via a version control mechanism, like git, using proper commit messages.&lt;/p&gt;
&lt;h2&gt;Address the redundancy&lt;/h2&gt;
&lt;p&gt;If two blocks of code are doing the same logic or same set of instructions, then address the redundancy by either extracting a function out of the common code or by placing the redundant instruction on an appropriate line before or after (but only in one place).&lt;/p&gt;
&lt;p&gt;Consider the below example before refactoring:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;x = 10

if x % 2 == 0:
    print(&quot;The number&quot;, x, &quot;is even&quot;)
    print(&quot;The square of the number is&quot;, x**2)
else:
    print(&quot;The number&quot;, x, &quot;is odd&quot;)
    print(&quot;The square of the number is&quot;, x**2)

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, let’s refactor it:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;x = 10

if x % 2 == 0:
    print(&quot;The number&quot;, x, &quot;is even&quot;)
else:
    print(&quot;The number&quot;, x, &quot;is odd&quot;)

print(&quot;The square of the number is&quot;, x**2)

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Refactoring not only makes the code more readable, it also leaves just one copy of the duplicated statement to maintain if the message ever changes.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Many refactoring tools are available as software packages or extensions for almost every programming language. Some IDE’s (Integrated Development Environments) like PyCharm have built-in refactoring tools. Practice refactoring your code before committing it to the codebase. Besides meeting the non-functional requirements, a neatly refactored code eases debugging, too.&lt;/p&gt;
&lt;p&gt;And guess what?! This blog post, too, was refactored a couple of times to improve its readability. You see, fundamentals work everywhere!&lt;/p&gt;
&lt;p&gt;I hope you found my post on refactoring useful. For more coding tips, keep checking the &lt;a href=&quot;/blog&quot;&gt;HPE DEV blog.&lt;/a&gt; And if you have any questions, feel free to reach out to me &lt;a href=&quot;https://twitter.com/deyagondsamarth&quot;&gt;@deyagondsamarth&lt;/a&gt; or connect with me on &lt;a href=&quot;https://hpedev.slack.com/?redir=%2Fteam%2FUQM0ZTE1F&quot;&gt;Slack&lt;/a&gt;. Happy coding!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Using Grommet with Gatsby]]></title><description><![CDATA[picture1 Gatsby is a popular, free, and open source framework based on ReactJS that helps developers quickly build websites and apps…]]></description><link>https://developer.hpe.com/using-grommet-with-gatsby/</link><guid isPermaLink="false">https://developer.hpe.com/using-grommet-with-gatsby/</guid><pubDate>Tue, 21 Jan 2020 17:34:14 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/picture1-1579628420945.png&quot; alt=&quot;picture1&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.gatsbyjs.org/&quot;&gt;Gatsby&lt;/a&gt; is a popular, free, and open source framework based on &lt;a href=&quot;https://reactjs.org/&quot;&gt;ReactJS&lt;/a&gt; that helps developers quickly build websites and apps. Because Gatsby is a PWA (Progressive Web App) generator, it provides code and data splitting out-of-the-box. Gatsby loads only the critical HTML, CSS (cascading style sheets), data, and JavaScript required so websites load quickly. Gatsby uses a GraphQL query interface to easily get data from just about any source, making clicking around the website feel very fast. You can augment these capabilities and make great looking applications and websites mobile-friendly, accessible, and responsive by using &lt;a href=&quot;https://v2.grommet.io/&quot;&gt;Grommet.&lt;/a&gt; In this tutorial, I’ll show you how to get started using Grommet components and styles with Gatsby to make websites and applications more inviting.&lt;/p&gt;
&lt;h2&gt;Pre-requisites&lt;/h2&gt;
&lt;p&gt;This tutorial assumes you have &lt;a href=&quot;https://yarnpkg.com/lang/en/&quot;&gt;Yarn&lt;/a&gt; as well as &lt;a href=&quot;https://nodejs.org/en/download/&quot;&gt;Node.js and npm&lt;/a&gt; installed.&lt;/p&gt;
&lt;h2&gt;Step 1: Create a basic Gatsby app&lt;/h2&gt;
&lt;p&gt;First, use npm to install the Gatsby commands and create a basic Gatsby app with the
&lt;code&gt;gatsby new &amp;#x3C;name&gt;&lt;/code&gt; command. You can use an existing Gatsby starter (there are some that include Grommet), but we&apos;ll use the default Gatsby minimal app for this example.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;npm install -g gatsby-cli
gatsby new gatsby-with-grommet 
cd gatsby-with-grommet
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Step 2: Add the Grommet dependencies&lt;/h2&gt;
&lt;p&gt;Add Grommet to the dependencies. We&apos;ll use Yarn, since Gatsby uses it when doing
&lt;code&gt;gatsby new&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;yarn add grommet grommet-icons styled-components
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Step 3: Start up a development server&lt;/h2&gt;
&lt;p&gt;Start up a Gatsby development server to make changes and see the effect of those changes in real time.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;gatsby develop
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, you can view the result in your local browser at &lt;code&gt;http://localhost:8000/&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/picture2-1579628526610.png&quot; alt=&quot;picture2&quot;&gt;&lt;/p&gt;
&lt;p&gt;We aren&apos;t using Grommet components yet, so let&apos;s add some in, along with the Grommet themes.&lt;/p&gt;
&lt;h2&gt;Step 4: Start using Grommet components&lt;/h2&gt;
&lt;p&gt;First, replace the contents of the &lt;code&gt;src/components/layout.css&lt;/code&gt; file with this simple bit of CSS to reset some browser defaults we don&apos;t want. We&apos;ll rely on Grommet&apos;s internal styles rather than use a separate CSS file to style the user interface (UI).&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-css&quot;&gt;html {
  box-sizing: border-box;
}

*,
*:before,
*:after {
  box-sizing: inherit;
}

body {
  margin: 0;
  padding: 0;
  font-weight: normal;
}

img {
  max-width: 100%;
  height: auto;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, make the main Layout component using the Grommet wrapper and theme, as well as the other equivalent Grommet components, instead of the literal html elements. Change &lt;code&gt;src/components/layout.js&lt;/code&gt; to look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-jsx&quot;&gt;import React from &quot;react&quot;
import PropTypes from &quot;prop-types&quot;
import { useStaticQuery, graphql } from &quot;gatsby&quot;

import Header from &quot;./header&quot;
import &quot;./layout.css&quot;
import { Grommet, Anchor, Box, Footer, Text } from &quot;grommet&quot;
import { grommet } from &quot;grommet/themes&quot;

const Layout = ({ children }) =&gt; {
  const data = useStaticQuery(graphql`
    query SiteTitleQuery {
      site {
        siteMetadata {
          title
        }
      }
    }
  `)

  return (
    &amp;#x3C;Grommet
      theme={grommet}
      full
      style={{
        display: &quot;flex&quot;,
        flexDirection: &quot;column&quot;,
      }}
    &gt;
      &amp;#x3C;Header siteTitle={data.site.siteMetadata.title} /&gt;
      &amp;#x3C;Box as=&quot;main&quot; pad=&quot;medium&quot; flex overflow=&quot;auto&quot;&gt;
        {children}
      &amp;#x3C;/Box&gt;
      &amp;#x3C;Footer background=&quot;light-4&quot; justify=&quot;center&quot; pad=&quot;small&quot;&gt;
        &amp;#x3C;Text textAlign=&quot;center&quot; size=&quot;small&quot;&gt;
          © {new Date().getFullYear()}, Built with
          {` `}
          &amp;#x3C;Anchor href=&quot;https://www.gatsbyjs.org&quot;&gt;Gatsby&amp;#x3C;/Anchor&gt;
          {` and `}
          &amp;#x3C;Anchor href=&quot;https://v2.grommet.io&quot;&gt;Grommet&amp;#x3C;/Anchor&gt;
        &amp;#x3C;/Text&gt;
      &amp;#x3C;/Footer&gt;
    &amp;#x3C;/Grommet&gt;
  )
}

Layout.propTypes = {
  children: PropTypes.node.isRequired,
}

export default Layout
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You’ll also want to change &lt;code&gt;src/components/header.js&lt;/code&gt; to use the Grommet equivalents.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-jsx&quot;&gt;import { Link } from &quot;gatsby&quot;
import PropTypes from &quot;prop-types&quot;
import React from &quot;react&quot;
import { Header as GrommetHeader, Heading } from &quot;grommet&quot;

const Header = ({ siteTitle }) =&gt; (
  &amp;#x3C;GrommetHeader background=&quot;brand&quot; justify=&quot;center&quot;&gt;
    &amp;#x3C;Heading&gt;
      &amp;#x3C;Link
        to=&quot;/&quot;
        style={{
          color: `white`,
          textDecoration: `none`,
        }}
      &gt;
        {siteTitle}
      &amp;#x3C;/Link&gt;
    &amp;#x3C;/Heading&gt;
  &amp;#x3C;/GrommetHeader&gt;
)

Header.propTypes = {
  siteTitle: PropTypes.string,
}

Header.defaultProps = {
  siteTitle: ``,
}

export default Header
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Finally, change &lt;code&gt;src/pages/index.js&lt;/code&gt; to use Grommet equivalents.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-jsx&quot;&gt;import React from &quot;react&quot;
import { Link } from &quot;gatsby&quot;
import { Box, Heading, Paragraph } from &quot;grommet&quot;

import Layout from &quot;../components/layout&quot;
import Image from &quot;../components/image&quot;
import SEO from &quot;../components/seo&quot;

const IndexPage = () =&gt; (
  &amp;#x3C;Layout&gt;
    &amp;#x3C;SEO title=&quot;Home&quot; /&gt;
    &amp;#x3C;Heading&gt;Hi people&amp;#x3C;/Heading&gt;
    &amp;#x3C;Paragraph&gt;Welcome to your new Gatsby site.&amp;#x3C;/Paragraph&gt;
    &amp;#x3C;Paragraph&gt;Now go build something great.&amp;#x3C;/Paragraph&gt;
    &amp;#x3C;Box width={{ max: &quot;300px&quot; }} pad=&quot;small&quot;&gt;
      &amp;#x3C;Image /&gt;
    &amp;#x3C;/Box&gt;
    &amp;#x3C;Link to=&quot;/page-2/&quot;&gt;Go to page 2&amp;#x3C;/Link&gt;
  &amp;#x3C;/Layout&gt;
)

export default IndexPage
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Make similar changes to &lt;code&gt;src/pages/page-2.js&lt;/code&gt;.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Note that Gatsby has a way of optimizing and lazy-loading images that’s implemented in the example &lt;code&gt;src/components/image.js&lt;/code&gt;. You can use something like this rather than Grommet&apos;s &lt;code&gt;&amp;#x3C;Image&gt;&lt;/code&gt; component to help optimize a Gatsby site.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;You should now see the Grommet styling in the browser.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/picture3-1579638304412.png&quot; alt=&quot;picture3&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Step 5: Routing&lt;/h2&gt;
&lt;p&gt;Gatsby defines routes automatically for pages in the &lt;code&gt;src/pages&lt;/code&gt; folder, so many Gatsby applications don&apos;t have to set up routes or even interact with the router directly. Gatsby uses &lt;code&gt;@reach/router&lt;/code&gt; instead of React Router. For efficient internal links, Gatsby applications use Gatsby&apos;s &lt;code&gt;&amp;#x3C;Link&gt;&lt;/code&gt; component. To change routes programmatically, Gatsby also exports a &lt;code&gt;navigate&lt;/code&gt; helper built on &lt;code&gt;@reach/router&lt;/code&gt;. One way to get Grommet&apos;s styling for these internal links is to make a version of Grommet&apos;s &lt;code&gt;&amp;#x3C;Anchor&gt;&lt;/code&gt; that calls &lt;code&gt;navigate&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Create a file in &lt;code&gt;src/components&lt;/code&gt; called &lt;code&gt;link.js&lt;/code&gt; that contains the following:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-jsx&quot;&gt;import React from &quot;react&quot;
import PropTypes from &quot;prop-types&quot;
import { Anchor } from &quot;grommet&quot;
import { navigate } from &quot;gatsby&quot;

const Link = ({ to, ...rest }) =&gt; (
  &amp;#x3C;Anchor
    href={to}
    onClick={ev =&gt; {
      navigate(to)
      ev.preventDefault()
    }}
    {...rest}
  /&gt;
)

Link.propTypes = {
  to: PropTypes.string,
}
export default Link
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, change all the places that use Gatsby&apos;s &lt;code&gt;&amp;#x3C;Link&gt;&lt;/code&gt; to use this Link component instead. For example, change &lt;code&gt;src/pages/index.js&lt;/code&gt; to import this new component and not the Gatsby Link component.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-diff&quot;&gt;import React from &quot;react&quot;
// import { Link } from &quot;gatsby&quot;
import Link from &quot;../components/link&quot;
import { Box, Heading, Paragraph } from &quot;grommet&quot;

import Layout from &quot;../components/layout&quot;
import Image from &quot;../components/image&quot;
import SEO from &quot;../components/seo&quot;

const IndexPage = () =&gt; (
  &amp;#x3C;Layout&gt;
    &amp;#x3C;SEO title=&quot;Home&quot; /&gt;
    &amp;#x3C;Heading&gt;Hi people&amp;#x3C;/Heading&gt;
    &amp;#x3C;Paragraph&gt;Welcome to your new Gatsby site.&amp;#x3C;/Paragraph&gt;
    &amp;#x3C;Paragraph&gt;Now go build something great.&amp;#x3C;/Paragraph&gt;
    &amp;#x3C;Box width={{ max: &quot;300px&quot; }} pad=&quot;small&quot;&gt;
      &amp;#x3C;Image /&gt;
    &amp;#x3C;/Box&gt;
    &amp;#x3C;Link to=&quot;/page-2/&quot;&gt;Go to page 2&amp;#x3C;/Link&gt;
  &amp;#x3C;/Layout&gt;
)

export default IndexPage
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Because you are now using this other Link component, the &lt;strong&gt;Go to page 2&lt;/strong&gt; link on the main page picks up the Grommet styling.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/picture4-1579628555064.png&quot; alt=&quot;picture4&quot;&gt;&lt;/p&gt;
&lt;p&gt;Make the same change to &lt;code&gt;src/pages/page-2.js&lt;/code&gt; and &lt;code&gt;src/components/header.js&lt;/code&gt; to ensure those use the Grommet link styling as well.&lt;/p&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;You can see the complete result at &lt;a href=&quot;https://github.com/MikeKingdom/gatsby-with-grommet-starter&quot;&gt;https://github.com/MikeKingdom/gatsby-with-grommet-starter&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Gatsby is a great framework for making high-performing websites and web applications. Try adding Grommet to make these websites even better looking, responsive, accessible, and mobile-friendly.
If you have any questions, please join me on the &lt;a href=&quot;https://app.slack.com/client/T04LMHMUT/C04LMHN59&quot;&gt;Grommet Slack channel.&lt;/a&gt; You’ll find a lot of support there. And don’t forget to check out the &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE DEV site&lt;/a&gt; to learn more about all our platforms.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[The Tools We Use to Shape User Experiences]]></title><description><![CDATA[picture4 As a designer myself, I’ve found that folks who don’t get the chance to work closely with designers are often curious to know what…]]></description><link>https://developer.hpe.com/the-tools-we-use-to-shape-user-experiences/</link><guid isPermaLink="false">https://developer.hpe.com/the-tools-we-use-to-shape-user-experiences/</guid><pubDate>Wed, 15 Jan 2020 21:15:44 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/picture4-1579123006064.png&quot; alt=&quot;picture4&quot;&gt;&lt;/p&gt;
&lt;p&gt;As a designer myself, I’ve found that folks who don’t get the chance to work closely with designers are often curious to know what we use to shape the designs and experiences we create. How do we get the results we want to achieve?&lt;/p&gt;
&lt;p&gt;Just as developers rely on a handful of tools to code their projects, designers use a handful of tools as well. And, in the same vein, different tools are used to solve different design problems. For instance, it’s hard to edit a vector image in a raster program and vice versa.&lt;/p&gt;
&lt;p&gt;This article will explore some of the common tools used by experience designers at the &lt;a href=&quot;https://hpe.design/&quot;&gt;HPE Experience Studio&lt;/a&gt; throughout the design process to work with stakeholders (i.e. product managers, developers, and design engineers) to develop a solution. What each of these tools provides in terms of details and functionality for prototyping ranges widely across a spectrum, from low-fidelity (sketches and notes for high-level brainstorming and collaboration) to high-fidelity (final, polished pixel-perfect mock ups).&lt;/p&gt;
&lt;h2&gt;Wireframes&lt;/h2&gt;
&lt;p&gt;Wireframes are low-fidelity designs used to communicate an idea. Just as the name implies, they are simple lines, boxes, and text (or dummy text) used to pull together a concept or layout well enough to give a clear understanding of the direction being taken. Wireframes are used early in the development process to establish the basic structure of a page before more time is invested in the visual design and content. Designers use this fidelity of work so that stakeholders don’t get caught up in the fine details of a design, like choice of colors and copy specifics.&lt;/p&gt;
&lt;p&gt;Many of the tools used to create wireframes are also used for other types of deliverables. Here are a couple popular ones.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://balsamiq.com/&quot;&gt;Balsamiq&lt;/a&gt; – This tool is almost specifically tailored for wireframes and is very easy for beginners to learn.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://www.axure.com/&quot;&gt;Axure&lt;/a&gt; – This is another tool that has a specific wireframe “mode”. It is a really powerful tool that can be used for many deliverables; however, it has a steep learning curve. Axure can also be used for high-fidelity mockups and interactive prototypes.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;High-Fidelity Mockups&lt;/h2&gt;
&lt;p&gt;High-fidelity mockups are screens intended to reflect a finished product (or something very close to it). They are often used as the final stage in design, when visual details are being worked out. High-fidelity mockups are static screens that can also be pulled together to create an interactive prototype (I’ll cover more on this later).&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://www.sketch.com/&quot;&gt;Sketch&lt;/a&gt; – This is a very powerful desktop-based tool, and the one our team uses the most. You can use it to go from low fidelity to high fidelity all in one place. While I am a big fan of Sketch, it does have some limitations – particularly in the cloud storage and prototyping arena. Fortunately, Sketch developers are currently addressing these limitations.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://www.figma.com/&quot;&gt;Figma&lt;/a&gt; – This product is available both as a cloud and desktop platform. You can use either Figma’s web-based tool or download their desktop version. Very similar to Sketch, Figma is a newer tool that offers some interactive and code-based design capabilities Sketch does not. I personally love Figma, as their web-based app is responsive, easy to use, and convenient. It also makes it easier to collaborate with others, since the file can be accessed by more than one person in the web app.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://www.adobe.com/products/xd.html&quot;&gt;Adobe XD&lt;/a&gt; – Any list of high-fidelity mockup tools would be remiss if it didn’t include an Adobe product. Adobe XD is a high-fidelity mockup and prototyping tool that has many of the same functions as the tools listed above. It was a bit late to the game (it’s only been officially launched and out of beta for a year), and it just doesn’t feel as easy to learn and use as Sketch and Figma.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Interactive Prototypes&lt;/h2&gt;
&lt;p&gt;Interactive prototypes are designs that supply a level of interactivity with a user. They might be screens linked together by buttons or displayed interactivity through a form. The intent of their use is to make the design and user experience simulate what you would use in the real app or website. These prototypes can range in fidelity from low to high, as they are stitched together views of the deliverables listed above: wireframes and high-fidelity mockups. Below are some of the tools used to create interactive prototypes.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://www.invisionapp.com/&quot;&gt;Invision&lt;/a&gt; – With this product, you can upload any images and connect them to create a prototype. In our group, we rely on the stack Sketch + Craft + Invision to deliver interactive prototypes. I use Sketch, as covered above, plus a Sketch plug-in called Craft, which allows users to create links and connect artboards (screens) in Sketch, and then export to Invision. I could skip the Craft plug-in, but the plug-in makes life a lot easier by removing a step for exporting and uploading images.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://www.framer.com/&quot;&gt;FramerX&lt;/a&gt; – This really cool product allows you to both design and prototype using the same tool. It also allows some code editing of React components. This code editing ability is very helpful because you can use UI libraries based in React (like Grommet) to play around with components and understand what is behind them a little better.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://zeplin.io/&quot;&gt;Zeplin&lt;/a&gt; – Our team doesn’t use Zeplin, but since it is one of the most popular and widely used prototype tools according to this &lt;a href=&quot;https://uxtools.co/survey-2019/#prototyping&quot;&gt;survey&lt;/a&gt;, I’ve included it in this list.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://reactjs.org/&quot;&gt;React&lt;/a&gt; and &lt;a href=&quot;https://v2.grommet.io/&quot;&gt;Grommet&lt;/a&gt; – Since HPE has its own UI library (Grommet), it makes sense that the team I work in builds some of our own prototypes directly in code with React.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The funny thing about most of the tools listed above is that many of them can equally be used to create wireframes, mockups, and prototypes. It all depends on what you are looking to do and how you want to get there. Another consideration you might want to take into account when choosing which tools to work with is how you will be hosting the files, whether it’s via a company server or in the cloud. Most of these tools are easily affordable and have a low learning curve to get started. That means that even if you are not a designer, you can explore these tools to communicate design decisions with your colleagues and stakeholders.&lt;/p&gt;
&lt;p&gt;Check out &lt;a href=&quot;https://hpe.design/resources&quot;&gt;HPE design tools&lt;/a&gt; you can use with many of the applications listed above. Also, be sure to visit &lt;a href=&quot;https://hpe.design/&quot;&gt;hpe.design&lt;/a&gt; to learn more about our team and what we do.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[The Hack Shack Returns to HPE TSS Paris 2020]]></title><description><![CDATA[picture3 HPE Technology and Solutions Summit (TSS) is the largest and most comprehensive technical knowledge transfer event held by Hewlett…]]></description><link>https://developer.hpe.com/the-hack-shack-returns-to-hpe-tss-paris-2020/</link><guid isPermaLink="false">https://developer.hpe.com/the-hack-shack-returns-to-hpe-tss-paris-2020/</guid><pubDate>Wed, 15 Jan 2020 20:16:51 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/picture3-1579119466925.png&quot; alt=&quot;picture3&quot;&gt;&lt;/p&gt;
&lt;p&gt;HPE Technology and Solutions Summit (TSS) is the largest and most comprehensive technical knowledge transfer event held by Hewlett Packard Enterprise (HPE).  Delivered annually since its inception in 2006, HPE TSS has established a reputation as a renowned training initiative. With separate events held in Europe, Asia-Pacific, and the US, presales consultants and solutions architects from HPE and partner communities  from all over the world benefit from direct access to HPE technology experts, engineers, and chief technologists. In Europe this year, the city of lights (Paris, France) will once again host &lt;a href=&quot;https://h41382.www4.hpe.com/tss/&quot;&gt;this event March 9-13, 2020.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;If you plan on attending, you will receive up-to-the-minute training on product developments, roadmap updates, and strategic insights. You will even have opportunities to practice and apply your knowledge under non-disclosure. As evidenced by the schedule, many exciting sessions are planned.&lt;/p&gt;
&lt;p&gt;The HPE DEV team will again be present at this event. We even have our own dedicated area, called the Hack Shack. It’s made up of a garden and a lab room, providing a cozy place where you can learn and have some fun. Many different activities will take place in the Hack Shack throughout the week. To discover more about the HPE DEV program, head straight to the Hack Shack desk. It opens at the same time as the Presales Suite, which is where attendees learn how HPE tools and resources make daily business easier and more effective.&lt;/p&gt;
&lt;p&gt;In the garden, we will host different code and game challenges where players compete with other players and coders for prizes. On Monday, you’ll find breakout sessions that describe the HPE DEV program and its importance to you. These sessions will be followed by several technical workshops where you can learn about the different API ecosystems available in the pan HPE portfolio. The workshops will cover &lt;a href=&quot;https://developer.hpe.com/platform/hpe-oneview/home&quot;&gt;HPE OneView,&lt;/a&gt; &lt;a href=&quot;https://developer.hpe.com/platform/ilo-restful-api/home&quot;&gt;ILO Redfish,&lt;/a&gt; &lt;a href=&quot;https://developer.hpe.com/platform/hpe-simplivity/home&quot;&gt;HPE SimpliVity,&lt;/a&gt; &lt;a href=&quot;https://developer.hpe.com/platform/hpe-nimble-storage/home&quot;&gt;HPE Nimble Storage,&lt;/a&gt; &lt;a href=&quot;https://developer.hpe.com/platform/grommet/home&quot;&gt;Grommet&lt;/a&gt; and &lt;a href=&quot;https://www.hpe.com/us/en/integrated-systems/composable-fabric.html#portfolio&quot;&gt;HPE Composable Fabric Manager.&lt;/a&gt; Our open labs on Wednesday and Friday will cover the &lt;a href=&quot;https://developer.hpe.com/platform/bluedata/home&quot;&gt;HPE Container Platform,&lt;/a&gt; &lt;a href=&quot;https://www.arubanetworks.com/&quot;&gt;Aruba&lt;/a&gt; networking products, and a really cool facial recognition app.&lt;/p&gt;
&lt;p&gt;You’ll find so much to do at the Hack Shack in a fun and inviting atmosphere. Presales attendees and partners will have the opportunity to engage with HPE experts during the different sessions. The complete agenda will be posted soon on the &lt;a href=&quot;https://h41382.www4.hpe.com/tss/#agenda&quot;&gt;HPE TSS website.&lt;/a&gt; You’ll find all the details you need when you search for Hack Shack as the session type. The Hack Shack will be located on floor 7.3 next to the generic lab rooms.&lt;/p&gt;
&lt;p&gt;Make sure you stop by the Hack Shack to learn more about the HPE DEV program, as well as other key DevOps contributions and value offered through the HPE DEV community. Hope to see you at the event!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[ Teaching Students to Code Reaps Rewards]]></title><description><![CDATA[picture1 In a world where software drives everything, from work and education to transportation and entertainment, developing an…]]></description><link>https://developer.hpe.com/teaching-students-to-code-reaps-rewards/</link><guid isPermaLink="false">https://developer.hpe.com/teaching-students-to-code-reaps-rewards/</guid><pubDate>Wed, 15 Jan 2020 20:01:10 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/picture1-1579118573174.png&quot; alt=&quot;picture1&quot;&gt;&lt;/p&gt;
&lt;p&gt;In a world where software drives everything, from work and education to transportation and entertainment, developing an understanding of code and how it works at an early age is important. In addition to cultivating opportunities for future careers, teaching students to code helps them develop crucial skills that support math and processing concepts and encourages critical thinking.&lt;/p&gt;
&lt;h2&gt;HPE DEV supports teaching students to code&lt;/h2&gt;
&lt;p&gt;Under its charter, Build – Communicate – Collaborate, the HPE DEV community supports initiatives dedicated to teaching programming to young students, such as the &lt;a href=&quot;https://code.org/learn&quot;&gt;Hour of Code&lt;/a&gt;™ project and the &lt;a href=&quot;https://www.codeclubworld.org/&quot;&gt;Code Club.&lt;/a&gt; Volunteers, like Didier Lalli and Frederic Passeron, see it as a way to encourage an interest in STEM (Science, Technology, Engineering and Mathematics), develop future talent, and give back to the community by exposing how application programming works. I recently had the opportunity to speak with Didier and Frederic to discuss the programs and the differences between them.&lt;/p&gt;
&lt;h2&gt;How these programs work&lt;/h2&gt;
&lt;p&gt;Didier, who participated in the &lt;a href=&quot;https://www.webtimemedias.com/article/sophia-initiative-code-club-pour-initier-les-collegiens-au-codage?old_id=65306&quot;&gt;Code Club in Sophia Antipolis,&lt;/a&gt; set the stage. “While the Hour of Code and the Code Club are two different initiatives, both expose students to programming skills at an early age, doing so in a fun and engaging manner. Both programs have us work within classroom settings alongside the student’s regular math or technology teacher.”&lt;/p&gt;
&lt;p&gt;“We introduce software programming concepts through the use of a program called &lt;a href=&quot;https://scratch.mit.edu/&quot;&gt;Scratch,&lt;/a&gt;” Didier continued. “This is a graphical programming language developed by the Massachusetts Institute of Technology (MIT) that allows students to create their own interactive stories, games, and animations. After we present the software program, we describe the necessary steps used to connect to the website where they will interact with it. The students then select an activity they wish to perform.”&lt;/p&gt;
&lt;p&gt;Frederic pointed out how the use of Scratch energized the sessions. “Scratch is so graphical and simple to use. It makes it easy for the students to understand programming concepts like &apos;loops&apos;, &apos;if/then&apos; conditions, and iterative processes. Even students who weren’t initially excited about participating were quickly hooked. I’d say that, within ten minutes, all were thoroughly engaged.”&lt;/p&gt;
&lt;p&gt;Frederic showed me the program, demonstrating how students could choose from different characters and make the character move across the screen using simple commands. Seeing Scratch in action, I immediately understood how this felt more like a game rather than a lesson.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/picture2-1579118765885.png&quot; alt=&quot;How Scratch presents code to students&quot;&gt;&lt;/p&gt;
&lt;p&gt;“Learning this way sure is a far cry from how I learned to code years ago, what with reading all the documentation and writing so many lines of code, only to find that a small typo stopped my code from actually working.”&lt;/p&gt;
&lt;h2&gt;Comparing the different programs&lt;/h2&gt;
&lt;p&gt;While Didier worked with the Code Club, Frederic volunteered with the Hour of Code project. The main difference between the two programs is really about reach. The Code Club boasts &lt;a href=&quot;https://codeclub.org/en/about&quot;&gt;over 13,000 clubs around the world in over 160 different countries&lt;/a&gt; and runs all year long. The Hour of Code touches &lt;a href=&quot;https://code.org/learn&quot;&gt;millions of students in over 180 countries&lt;/a&gt; and is held during Computer Science Week, usually in December or January. Whether a volunteer works with one group or the other generally depends on local availability. However, there are some other points of differentiation between the programs.&lt;/p&gt;
&lt;p&gt;For instance, the Code Club is sponsored by &lt;a href=&quot;https://www.raspberrypi.org/about/&quot;&gt;Raspberry Pi Foundation&lt;/a&gt; and mixes volunteers from different companies (such as HPE, ARM, Cadence, Orange Labs, and SAP) to deliver one-hour sessions to the same class over ten weeks. For the first 5 or 6 sessions, the students engage in pre-defined coursework. The next 4 weeks are dedicated to a project where students are given free rein to let their imagination run wild and design their own game.&lt;/p&gt;
&lt;p&gt;The Hour of Code, on the other hand, is backed by &lt;a href=&quot;https://csfirst.withgoogle.com/c/cs-first/en/code-your-hero/overview.html?utm_source=google&amp;#x26;utm_medium=cpc&amp;#x26;utm_campaign=20191105-Firewood_HOC19--hsms-ins-&amp;#x26;src=cpc-google-20191105-Firewood_HOC19--hsms-ins-&amp;#x26;gclid=CjwKCAiAx_DwBRAfEiwA3vwZYpw3qtT6quajGSUh5HM4eHi-UbNxq1jqYjfwnhhOWTCX7G2ibRubfRoCFCcQAvD_BwE&quot;&gt;Google&lt;/a&gt; and tends to be taught by volunteers from a single company, such as HPE. Its format is based on 1.5-hour sessions for groups of 30 students. A single volunteer cycle consists of four sessions, allowing the volunteers to teach a total of 120 students.&lt;/p&gt;
&lt;p&gt;“Another thing that sets the Hour of Code apart,” explains Frederic, “is that it actively promotes diversity. This program works at increasing gender, racial, and socioeconomic diversity in tech education and, by extension, in the tech workforce. I see that sometimes girls are shy about their math and technical skills. Hour of Code points out that female developers are rare, yet they are in high demand by companies. It encourages girls and others who may not see themselves as fitting into ‘geek’ roles to seek out these types of careers.”&lt;/p&gt;
&lt;h2&gt;Join HPE DEV in giving of your time and talent&lt;/h2&gt;
&lt;p&gt;Teaching kids to code doesn’t just benefit students. Volunteers like Didier and Frederic find it rewarding as well. “Understanding how a PC, tablet, or phone operates is a good thing for all kids, even those that don’t end up working in the tech field. Removing the veil of magic behind how things work helps them understand how their involvement can have an effect,” Didier explained. “Hopefully, they will apply this newfound self-awareness to the world we all live in and strive to make it better.”&lt;/p&gt;
&lt;p&gt;“For volunteers,” he continued, “it’s very interesting to see how kids react to problems; how some are quick to get it and some more interested in designing icons than spending time on the logic of their program. While each student is different, it’s obvious that a number of them already show a real attraction to software development.”&lt;/p&gt;
&lt;p&gt;Frederic expressed how participating in activities like this helped him realize another ambition. “When I was in college, I studied to become an English teacher. Life intervened, and I found myself working for HPE instead. Volunteering for Hour of Code helped to finally close that loop, offering me the opportunity to teach.” As Frederic continued, he pointed out other things that attracted him to the program. “I think that the Hour of Code program is particularly important in that it promotes diversity. It does so by telling young girls that they can play an important part in the future of code, since female coders are rare and therefore, very valuable.”&lt;/p&gt;
&lt;p&gt;For those interested in volunteering for &lt;a href=&quot;https://www.codeclubworld.org/about/countries/&quot;&gt;Code Club&lt;/a&gt; teaching opportunities, connect with your local Code Club. Volunteers for &lt;a href=&quot;https://hourofcode.com/us/how-to&quot;&gt;Hour of Code&lt;/a&gt; should connect with local schools to learn how to participate. As an additional benefit for HPE employees, the HPE Gives program provides credit for HPE volunteers for programs like these.&lt;/p&gt;
&lt;p&gt;Giving has long been a key component of the Hewlett Packard Enterprise company culture, tracing back to the values of the company founders, Bill Hewlett and David Packard. As David Packard once said, “The betterment of society is not a job to be left to a few. It&apos;s a responsibility to be shared by all.” That’s why HPE DEV supports initiatives like these. Check out our &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;HPE DEV Slack&lt;/a&gt; #hpedev-volunteers channel to discuss how we all can make a difference by sharing our skills and knowledge with others. Connect on Twitter with &lt;a href=&quot;https://twitter.com/DidierLalli&quot;&gt;Didier&lt;/a&gt; and &lt;a href=&quot;https://twitter.com/FredPasseron&quot;&gt;Frederic&lt;/a&gt; to follow their adventures with HPE DEV.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE OneView Python SDK now available with support for HPE OneView 5.0 ]]></title><description><![CDATA[Use HPE OneView software-defined intelligence to save time and effort The HPE OneView Python SDK v5.0 is now available, providing support…]]></description><link>https://developer.hpe.com/hpe-oneview-python-sdk-now-available-with-support-for-hpe-oneview-50/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-oneview-python-sdk-now-available-with-support-for-hpe-oneview-50/</guid><pubDate>Wed, 08 Jan 2020 18:31:57 GMT</pubDate><content:encoded>&lt;h2&gt;Use HPE OneView software-defined intelligence to save time and effort&lt;/h2&gt;
&lt;p&gt;The HPE OneView Python SDK v5.0 is now available, providing support for HPE OneView 5.0 (REST API version 1200). The HPE OneView Python SDK allows developers who use the Python language to programmatically control HPE OneView managed resources. It also provides for the integration of popular automation tools based on Python, such as Ansible, with HPE OneView. These capabilities enable complete datacenter automation using infrastructure-as-code for physical compute, storage, and fabric resources.&lt;/p&gt;
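&lt;p&gt;To give a rough idea of what this looks like in practice, here is a minimal, hypothetical sketch of connecting to an appliance and listing server hardware through the SDK&apos;s &lt;code&gt;OneViewClient&lt;/code&gt; entry point. The appliance address and credentials are placeholders, and the exact module path and resource calls should be verified against the README of the repository you use:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Minimal sketch only. Verify the module path (hpOneView vs. hpeOneView)
# and the available resources against the SDK README for your release.
from hpOneView.oneview_client import OneViewClient

config = {
    &quot;ip&quot;: &quot;oneview.example.com&quot;,      # placeholder appliance address
    &quot;api_version&quot;: 1200,              # REST API version for HPE OneView 5.0
    &quot;credentials&quot;: {
        &quot;userName&quot;: &quot;administrator&quot;,  # placeholder credentials
        &quot;password&quot;: &quot;secret&quot;
    }
}

oneview_client = OneViewClient(config)

# List the managed server hardware and print each server&apos;s name.
for server in oneview_client.server_hardware.get_all():
    print(server[&quot;name&quot;])

&lt;/code&gt;&lt;/pre&gt;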
&lt;p&gt;In addition, the HPE OneView Python SDK code was refactored. SDK code now resides in two different repositories: &lt;a href=&quot;https://github.com/HewlettPackard/oneview-python&quot;&gt;HPE OneView-Python&lt;/a&gt; and &lt;a href=&quot;https://github.com/HewlettPackard/python-hpOneView&quot;&gt;python-hpOneView.&lt;/a&gt; Going forward, the &lt;a href=&quot;https://github.com/HewlettPackard/oneview-python&quot;&gt;HPE OneView-Python&lt;/a&gt; repository will be updated with the latest SDKs. The &lt;a href=&quot;https://github.com/HewlettPackard/python-hpOneView&quot;&gt;python-hpOneView&lt;/a&gt; repository will support legacy SDKs for existing customers who do not wish to change their client code.&lt;/p&gt;
&lt;p&gt;For more information about the HPE OneView Python SDK v5.0 release, please go to: &lt;a href=&quot;https://github.com/HewlettPackard/oneview-python/releases/tag/v5.0.0&quot;&gt;https://github.com/HewlettPackard/oneview-python/releases/tag/v5.0.0&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[New Year's Resolutions - Newsletter]]></title><link>https://developer.hpe.com/2020-January-06/</link><guid isPermaLink="false">https://developer.hpe.com/2020-January-06/</guid><pubDate>Mon, 06 Jan 2020 06:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[HPE OneView Golang SDK is now available with support for HPE OneView 5.0]]></title><description><![CDATA[The HPE OneView Golang SDK v1.2.0 now supports HPE OneView 5.0 (REST API version 1200). The HPE OneView Golang SDK extends the HPE OneView…]]></description><link>https://developer.hpe.com/hpe-oneview-golang-sdk-is-now-available-with-support-for-hpe-oneview-50/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-oneview-golang-sdk-is-now-available-with-support-for-hpe-oneview-50/</guid><pubDate>Mon, 16 Dec 2019 17:01:31 GMT</pubDate><content:encoded>&lt;p&gt;The HPE OneView Golang SDK v1.2.0 now supports HPE OneView 5.0 (REST API version 1200). The HPE OneView Golang SDK extends the HPE OneView API language support for the &lt;a href=&quot;https://golang.org/&quot;&gt;go language,&lt;/a&gt; which is popular for cloud-native applications. With language support, you can integrate popular tools based on Golang, such as Terraform with HPE OneView. &lt;a href=&quot;https://www.hpe.com/us/en/integrated-systems/software.html&quot;&gt;HPE OneView&lt;/a&gt; takes a software-defined, programmatic approach to managing infrastructure with efficient workflow automation, a modern RESTful API, and a comprehensive partner ecosystem.&lt;/p&gt;
&lt;p&gt;The HPE OneView Golang SDK allows developers who use the go language to programmatically control HPE OneView managed resources using infrastructure-as-code for physical compute, storage, and fabric resources. Infrastructure-as-code enables complete datacenter automation, consistent reproducibility, versioning, and roll back.&lt;/p&gt;
&lt;p&gt;FOR MORE INFORMATION:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/oneview-golang/releases/tag/v1.2.0&quot;&gt;Release content&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/oneview-golang/blob/master/CHANGELOG.md&quot;&gt;List of supported resources and changes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/oneview-golang&quot;&gt;Code repository and examples&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[HPE OneView Terraform provider now supports HPE OneView 5.0]]></title><description><![CDATA[HPE OneView Terraform provider v1.2.0 now supports HPE OneView 5.0 (REST API version 1200). Terraform, by HashiCorp, is a popular tool for…]]></description><link>https://developer.hpe.com/hpe-oneview-terraform-provider-now-supports-hpe-oneview-50/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-oneview-terraform-provider-now-supports-hpe-oneview-50/</guid><pubDate>Mon, 16 Dec 2019 16:58:24 GMT</pubDate><content:encoded>&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/integrated-systems/software.html&quot;&gt;HPE OneView&lt;/a&gt; Terraform provider v1.2.0 now supports HPE OneView 5.0 (REST API version 1200). &lt;a href=&quot;https://www.terraform.io/&quot;&gt;Terraform&lt;/a&gt;, by &lt;a href=&quot;https://www.hashicorp.com/&quot;&gt;HashiCorp,&lt;/a&gt; is a popular tool for infrastructure automation that facilitates a multicloud or hybrid cloud approach to infrastructure management. When integrated with HPE OneView, Terraform is a powerful orchestration tool that can be used to create, manage, and update infrastructure resources. These resources may be physical machines, VMs, network switches, containers, or others.&lt;/p&gt;
&lt;p&gt;HPE OneView Terraform provider automates the provisioning of physical infrastructure on-demand using software-defined templates from HPE OneView. This enables administrators to create a resource topology similar to that of a public cloud on their own physical infrastructure. With this type of resource topology, administrators can easily migrate applications and workloads to an on-premises private cloud environment to realize their hybrid cloud strategy.&lt;/p&gt;
&lt;p&gt;FOR MORE INFORMATION:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/terraform-provider-oneview/releases/tag/v1.2.0&quot;&gt;Release content&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/terraform-provider-oneview/blob/master/CHANGELOG.md&quot;&gt;List of supported resources and changes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/terraform-provider-oneview&quot;&gt;Code repository and examples&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/terraform-provider-oneview/blob/master/README.md&quot;&gt;Details about the Terraform provider&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[New HPE DEV leadership team discusses program focus and expansion]]></title><description><![CDATA[picture1 As Hewlett Packard Enterprise (HPE) pivots to an as-a-Service and software-enabled infrastructure company, I was eager to find out…]]></description><link>https://developer.hpe.com/new-hpe-dev-leadership-team-discusses-program-focus-and-expansion/</link><guid isPermaLink="false">https://developer.hpe.com/new-hpe-dev-leadership-team-discusses-program-focus-and-expansion/</guid><pubDate>Mon, 09 Dec 2019 16:10:49 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/picture1-1575907933380.png&quot; alt=&quot;picture1&quot;&gt;&lt;/p&gt;
&lt;p&gt;As Hewlett Packard Enterprise (HPE) pivots to an as-a-Service and software-enabled infrastructure company, I was eager to find out more about how HPE DEV can assist developers and designers in modernizing their legacy applications, as well help them create new, cloud-native apps. And with the recent &lt;a href=&quot;https://www.hpe.com/us/en/newsroom/press-release/2019/11/Hewlett-Packard-Enterprise-introduces-Kubernetes-based-platform-for-bare-metal-and-edge-to-cloud-deployments.html&quot;&gt;HPE Container Platform announcement,&lt;/a&gt; I wanted to know more about HPE DEV’s plans to attract developers, designers, data scientists, and DevOps teams to join the community – especially given the teams’ expertise in API (application programming interface) development.&lt;/p&gt;
&lt;p&gt;To discover answers, I reached out to the new HPE DEV community leads: Distinguished Technologist, Didier Lalli, and hybrid IT technology consultant, Denis Choukroun.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Dale:&lt;/strong&gt; Didier and Denis, thank you for taking the time to talk about HPE DEV and how it can assist the community. I wonder if you could provide a high-level summary of what the HPE Developer Community Program is and what it offers to its members.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Didier:&lt;/strong&gt; The HPE Developer Community Program was launched during KubeCon in Austin, Texas December 2017. The program is built around three main pillars: Build, Communicate, and Collaborate with the goal of helping developers and designers accelerate their innovation. We do this by assisting them with our APIs to integrate with products such as &lt;a href=&quot;https://developer.hpe.com/platform/hpe-oneview/home&quot;&gt;HPE OneView,&lt;/a&gt; &lt;a href=&quot;https://developer.hpe.com/platform/hpe-simplivity/home&quot;&gt;HPE SimpliVity&lt;/a&gt; and &lt;a href=&quot;https://developer.hpe.com/platform/ilo-restful-api/home&quot;&gt;iLO.&lt;/a&gt; But this isn’t all we do. For example, we share information about open source projects we support or initiate, such as &lt;a href=&quot;https://developer.hpe.com/platform/grommet/home&quot;&gt;Grommet.&lt;/a&gt; We reach out through several communication channels, so members of the community can select which channel they prefer to engage on. These include our &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;web site&lt;/a&gt;, a monthly newsletter, &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;Slack&lt;/a&gt;, &lt;a href=&quot;https://twitter.com/HPE_Developer?lang=en&quot;&gt;Twitter,&lt;/a&gt; and a Yammer group for folks within HPE.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Denis:&lt;/strong&gt; Didier described what HPE DEV offers to the community very well. I would just like to add that the HPE DEV portal offers developers, designers, DevOps engineers, and data scientists a voice of their own. Here they can share and learn via blogs about HPE products and solutions, along with API integration and automation capabilities.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Dale:&lt;/strong&gt; As HPE evolves into a company focused on software-defined infrastructures and delivering everything-as-a-Service, where does HPE DEV fit in? I understand the group assists developers in providing the best user experience in their applications through research and open source projects, like Grommet. I also know others in the group develop tutorials using APIs to develop cool applications. How does HPE DEV fit into the overall company strategy and what services does it provide?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Didier:&lt;/strong&gt; The group we belong to, the HPE Experience Studio, has two components: a User Experience team (UX) and a Developer Experience team (DEVX). The HPE Developer Community Program is managed by the DEVX team; but we leverage research, learnings, and development done by the UX team. We encourage contributions from the UX team, as they contribute to the design and development of the as-a-Service model, which is central to HPE’s new delivery strategy. Our DEVX team also adds to these projects. We look at improving the developer’s experience by ensuring common ways to access services. As an example, there is research and prototyping going on right now for a unique command line (CLI) experience that spans across multiple HPE services.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Denis:&lt;/strong&gt; Yes, this is exactly the case. HPE is pivoting to an as-a-Service edge to cloud company helping ITOps and developers consume infrastructure in different ways. For example, the recent HPE announcement about the HPE Container Platform gives the HPE Developer Community Program more opportunities to help developers release new code quicker. It also enables us to assist data scientists with the deployment of AI/ML and Big Data tools. All this can now be done faster and anywhere – on premises and in the cloud.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Dale:&lt;/strong&gt; How do you, personally, fit into HPE DEV? What has been your role in the past and what new responsibilities have you recently taken on?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Didier:&lt;/strong&gt; Over 30 years, I played many roles. I started as a Software Engineer at Digital Equipment Corporation (DEC), writing mostly in C, C++, and ADA. Then, I moved to pre-sales, mostly focused on PC networking. At the time, connecting a PC to the rest of the network was a real challenge. I spent almost 15 years in EMEA pre-sales for HPE ProLiant systems, but I was always considered the software guy in a hardware team. I was lucky enough to witness the birth of Docker containers and had the opportunity to travel throughout Europe to teach our folks how to master this new technology. More recently, my focus returned to development. I participated in the creation of the HPE OneView Ecosystem, working with partners and ISVs (i.e. Turbonomic, Mesosphere, SaltStack) to help them better use the HPE OneView API and develop joint solutions for our customers. While I was doing this, I discovered the real power of REST API. Since then, I’ve worked on several of our product APIs, providing some sort of SDK (software development kit) in PowerShell, Python and, more recently in Go. When the HPE Developer Community Program started, I saw it as the perfect fit for me, and I contributed a lot, especially through blog posts and tutorials. Today I’m proud to be the technical lead of the HPE Developer Community Program.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Denis:&lt;/strong&gt; Like Didier, I have more than 30 years of IT experience in various technology roles, including work as an operations and support networking engineer, a technical consultant, and a solution architect. I started as Local Area Network engineer at DEC. I then got the chance to be part of the first IT outsourcing (Managed Service) team. We delivered Wide Area Network support to customers who outsourced their IT infrastructure to the company. When Hewlett Packard acquired Compaq, I worked as a technical consultant and solution architect focused on converged infrastructure and hybrid IT solutions for both internal and customer-facing engagements. During this period, Didier and I co-authored the “Programming CloudSystem Matrix for Dummies” book. Just recently, I took on the role of program manager of HPE DEV. This is an area where I feel I can add a lot of value due to the project management skills I’ve learned through other roles.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Dale:&lt;/strong&gt; Do you have any specific goals associated with your new roles?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Didier:&lt;/strong&gt; As the technical lead of the program, I have a number of specific goals for 2020. A major goal is to grow the community both in terms of engagement as well as representation. This not only means increasing visits to our web site, but also expanding participation via Slack conversations and the number of followers on our Twitter  channel. It also means inviting more content contributors from HPE by reaching out to HPE teams who aren’t currently represented. By doing this, all community members will benefit, gaining access to a wider knowledge base and driving consistency, interoperability and ease of use throughout products. Ultimately we’d like to change how developers view HPE. Today, a lot of developers we meet at events wonder what we have to offer to them so we have to spend a lot of time explaining it. The goal of HPE DEV is to change this perception so developers start to recognize the value of developing on HPE platforms.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Denis:&lt;/strong&gt; I have specific goals as well. As an ambassador of the HPE Developer Community Program, my goal is to help developers and front-end application designers accelerate innovation and development time. So I work hard to spread the message about the value HPE DEV brings, evangelizing HPE DEV initiatives and offers to developers at HPE events and open source conferences.  In my role as program manager, my primary goal is to coordinate and ensure clarity among all the multiple projects related to HPE Developer Community Program.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Dale:&lt;/strong&gt; Denis, I understand you recently attended the Linux Foundation’s premier event, KubeCon + CloudNativeCon in San Diego and heard about the HPE Container Platform announcement first-hand. It sounds pretty exciting. Can you give some details about the announcement, what you learned there, and your thoughts on how this might impact the work HPE DEV takes on?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Denis:&lt;/strong&gt; It was great to be able to attend this conference in San Diego and hear the HPE announcement around containers and Kubernetes at a time when the containerization of applications is on the rise. The HPE Container Platform is of paramount importance to HPE. This announcement was the highlight of the event, focusing on the convergence of HPE innovations. It combines the BlueData container-based control plane with the &lt;a href=&quot;/blog/kubedirector-the-easy-way-to-run-complex-stateful-applications-on-kubern&quot;&gt;KubeDirector&lt;/a&gt; open source project, together with the MapR data fabric for container persistent data storage, and integrates Kubernetes, the de facto open source standard for container orchestration. This emergent technology will enable coders to develop applications faster and deploy them anywhere. The HPE Container Platform will undoubtedly impact HPE DEV’s work, as this container platform gives us opportunities to expand the reach of the community to data scientists, DevOps, and ITOps engineers.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Dale:&lt;/strong&gt; Modern apps, written using containers, certainly have an advantage in cloud environments. They’re more portable and scale much more easily. What do you think this announcement means for existing apps? How will this benefit enterprises overall?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Didier:&lt;/strong&gt; This announcement is a real game changer. This gives us a great story to deliver to developers. The HPE Container Platform lets you not only run modern cloud-native applications, but also legacy applications that you have always wanted to containerize, but couldn&apos;t. The HPE Container Platform allows more traditional apps, the ones with mostly persistent data, to operate in a modern container environment, with no need of rewriting them.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Dale:&lt;/strong&gt; Given all this, what platforms do you believe will be the most important to HPE and our customers as we move forward?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Didier:&lt;/strong&gt; I think we will continue to provide a lot of assistance in the use of our existing APIs, but we will also explore several new areas. First, as already discussed, we will provide more content regarding design and research, thanks to the other half of the team, the UX group. Then, we will look to add more content from other products within the HPE portfolio that may not yet be covered within the HPE Developer Community. I’m thinking possibly about Cloud Volumes, Composable Fabric Manager, or even Aruba. Finally, we will provide more focus on our team around the HPE Container Platform, so that we can help developers and their ITOps teams easily adopt this promising platform.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Denis:&lt;/strong&gt; The HPE Container Platform is certainly an important piece of HPE’s hybrid stack. It will drive our focus toward containers. This solution addresses the evolving needs of application developers and the capabilities enterprises demand. It should help cut down complexity and time for developers, data scientists, DevOps, and ITOps teams to migrate their applications (both cloud-native and monolithic traditional apps) to containers and deploy them anywhere.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Dale:&lt;/strong&gt; This deeper focus on assisting developers with platforms to modernize their applications all sounds very exciting. How can folks connect and engage as a part of the HPE Developer Community?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Didier:&lt;/strong&gt; The best way to connect with us is to go to &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;hpedev.io,&lt;/a&gt; our web portal. From there you can find all the different links that will allow you to subscribe to the monthly newsletter, join us on &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;Slack&lt;/a&gt; or follow us on &lt;a href=&quot;https://twitter.com/HPE_Developer?lang=en&quot;&gt;Twitter.&lt;/a&gt; Give it a try!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Think container first]]></title><description><![CDATA[Editor’s Note – HPE Ezmeral Container Platform is now HPE Ezmeral Runtime Enterprise. For more information on why the name was changed…]]></description><link>https://developer.hpe.com/think-container-first/</link><guid isPermaLink="false">https://developer.hpe.com/think-container-first/</guid><pubDate>Thu, 05 Dec 2019 17:13:30 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note – HPE Ezmeral Container Platform is now HPE Ezmeral Runtime Enterprise&lt;/strong&gt;. For more information on why the name was changed, please &lt;a href=&quot;https://community.hpe.com/t5/HPE-Ezmeral-Uncut/HPE-Ezmeral-Container-Platform-is-now-HPE-Ezmeral-Runtime/ba-p/7151720#.YW7nOxrMKM8&quot;&gt;click here&lt;/a&gt;.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Attendees at KubeCon + CloudNativeCon 2019 were on hand to see the new HPE Container Platform that Hewlett Packard Enterprise (HPE) announced at the event. The HPE Container Platform is the industry’s first enterprise-grade Kubernetes-based container platform designed for both cloud-native applications and monolithic applications with persistent storage. With the HPE Container Platform, enterprise customers can accelerate application development for new and existing apps – running on bare-metal or virtualized infrastructure, on any public cloud, and at the edge. You may be asking yourself, what does this have to do with HPE, a leading infrastructure company?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Everything.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;As a developer, you know first-hand how IT depends on software to optimize resources and manage what otherwise would be unmanageable. Almost every business today operates as a software business, as vendors write code to connect their systems to the rest of the universe – whether it’s to ensure operation in the cloud, provide a user interface on a smart phone, collect data from edge devices, or simply check in on stock availability in the warehouse. Code, and its ability to run anywhere and everywhere, is key to today’s businesses.&lt;/p&gt;
&lt;p&gt;But it’s not always easy to get your code to run everywhere. Newly written apps? Yeah, that’s really no problem. Containers and Kubernetes orchestration offer DevOps agility and quick creation of cloud-native applications. But other legacy apps, the ones that have been around for a while and businesses rely on, just aren’t easy to move. IT operations (ITops) managers have been scratching their heads, trying to figure out how they can make the most of cloud-enabled economics and still work within the confines of what they’re stuck with. While many good reasons justify hybrid cloud implementations, such as data security, sometimes ITops groups feel as though they just have to accept a hybrid cloud environment, because it would be too heavy a lift and costly to cloud-enable these types of apps.&lt;/p&gt;
&lt;p&gt;HPE understands this. After all, as a worldwide enterprise, it has had to deal with these same issues. Phil Davis, HPE President of Hybrid IT, stated, “The cloud is not a destination. It’s an experience and operating model.” In regards to this recent announcement, Phil Davis &lt;a href=&quot;https://www.hpe.com/us/en/newsroom/blog-post/2019/11/With-HPE-Container-Platform-enterprise-wide-containerization-is-a-reality.html&quot;&gt;reiterated this&lt;/a&gt; fact and added “… the cloud experience can offer considerable benefits to organizations, such as speed, ease of provisioning, scalability, and pay-per-use. But the vast majority of existing non-cloud-native apps have been left behind. They were written over the past decades and often are the core applications and data which run the business.”&lt;/p&gt;
&lt;p&gt;You and your fellow ITops teams, developers, and designers most likely appreciate how containerization for both new and existing apps is going to help your organization move forward. But others in your business may not. They may not understand why they should look toward HPE for the industry’s first Kubernetes platform designed to run legacy and cloud native apps that enables a true hybrid cloud environment across any location – on premises, in public clouds, and at the edge.&lt;/p&gt;
&lt;p&gt;You can help them. You can help business managers envision their company as one built on the accumulation of data they can actually put to use to provide a better service to their customers and as one that runs as economically as possible using public cloud services in ways they couldn’t previously imagine.&lt;/p&gt;
&lt;p&gt;Start by showing &lt;a href=&quot;https://www.youtube.com/watch?v=1G0D7ZY0dvk&amp;#x26;feature=youtu.be&quot;&gt;this short video&lt;/a&gt; to those around you who don&apos;t understand how containers can improve the business. It’s concise and gets right to the point. And it will hopefully open up some conversations you may have been meaning to have for a while.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE DEV and the new HPE Container Platform at KubeCon 2019]]></title><description><![CDATA[picture1 HPE Developer Community teammate, Pramod Sareddy, and I just returned from KubeCon + CloudNativeCon North America 2019. During my…]]></description><link>https://developer.hpe.com/hpe-dev-and-the-new-hpe-container-platform-at-kubecon-2019/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-dev-and-the-new-hpe-container-platform-at-kubecon-2019/</guid><pubDate>Wed, 04 Dec 2019 20:27:38 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/picture1-1575491346999.png&quot; alt=&quot;picture1&quot;&gt;&lt;/p&gt;
&lt;p&gt;HPE Developer Community teammate, Pramod Sareddy, and I just returned from KubeCon + CloudNativeCon North America 2019. During my long journey back home to France, I decided to write about what I learned at the event while the breath-taking view of the San Diego bay was still fresh in my mind.&lt;/p&gt;
&lt;p&gt;It had been a busy week for the HPE Developer Community team as we greeted dozens of event attendees and updated them on what’s new from Hewlett Packard Enterprise (HPE). The event offered a great opportunity to exchange thoughts and information with developers, front-end application designers, enterprise solution architects, and chief technology officers who were interested in containers and hybrid cloud solutions. The HPE Developer Community team also got to meet the HPE designers and architects of the newly announced HPE Container Platform.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/picture2-1575491389586.png&quot; alt=&quot;picture2&quot;&gt;&lt;/p&gt;
&lt;p&gt;At the conference, HPE presented and demoed the newly announced HPE Container Platform, as well as a &lt;a href=&quot;https://community.hpe.com/t5/HPE-Storage-Tech-Insiders/HPE-storage-supporting-the-HPE-Container-Platform-at-KubeCon/ba-p/7070094#.Xd8FCuhKh9A&quot;&gt;true hybrid cloud CI/CD pipeline&lt;/a&gt; enabled by the HPE Container Platform, HPE Nimble storage, and HPE Cloud Volumes. Multiple in-booth theater sessions focused on these topics. Attendees crowded around the HPE booth to interact with HPE experts and get a chance to look at these new innovations.&lt;/p&gt;
&lt;p&gt;Also in the booth, Pramod and I presented the benefits of joining the HPE Developer Community. Many attendees were interested in how the community can help developers, designers, and DevOps and ITOps engineers accelerate innovation and speed up their development time. They were also eager to sign up for our &lt;a href=&quot;https://developer.hpe.com/newsletter-signup&quot;&gt;HPE Developer community newsletter,&lt;/a&gt; which earned them a chance to win one of our daily prizes.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/picture3-1575491496447.png&quot; alt=&quot;picture3&quot;&gt;&lt;/p&gt;
&lt;p&gt;We also showed off a pretty cool face recognition application Pramod wrote in &lt;a href=&quot;https://developer.hpe.com/platform/grommet/home&quot;&gt;Grommet,&lt;/a&gt; which runs on containers in a Kubernetes cluster, hosted on an HPE EdgeLine system. Another attraction in our booth was our HPE DEV-developed &lt;a href=&quot;https://github.com/HewlettPackard/hpe-hack-shack-attack&quot;&gt;Hack Shack Attack&lt;/a&gt; gaming competition where players try to tame the IT monster. The game proved very popular among the developer community, as it earned participants extra chances to win a prize. We selected winners through a raffle at the end of each day.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/picture4-1575491590488.png&quot; alt=&quot;picture4&quot;&gt;&lt;/p&gt;
&lt;p&gt;The highlight of the event for us was witnessing the convergence of HPE innovations, resulting in the newly announced &lt;a href=&quot;https://www.hpe.com/us/en/solutions/container-platform.html&quot;&gt;HPE Container Platform.&lt;/a&gt; The technology combines the BlueData container-based control plane with the &lt;a href=&quot;/blog/kubedirector-the-easy-way-to-run-complex-stateful-applications-on-kubern&quot;&gt;KubeDirector&lt;/a&gt; open source project, along with the MapR data fabric for container persistent data storage and integrates Kubernetes, the de facto open source standard for container orchestration. This announcement piqued everyone’s attention, as evidenced by a continuous stream of event attendees stopping by to learn more about the &lt;a href=&quot;https://www.hpe.com/us/en/newsroom/press-release/2019/11/Hewlett-Packard-Enterprise-introduces-Kubernetes-based-platform-for-bare-metal-and-edge-to-cloud-deployments.html&quot;&gt;announcement.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/picture5-1575491696591.png&quot; alt=&quot;picture5&quot;&gt;&lt;/p&gt;
&lt;p&gt;In short, the HPE Container Platform (a hardware agnostic, enterprise-grade container management software product) extends the benefits of running cloud-native enterprise applications with CI/CD micro services architecture on containers to non-cloud-native applications with persistent storage. Distributed, stateful applications, such as AI, ML, and big data applications, as well as enterprise traditional legacy workloads (which are generally not cloud-native and require persistent storage), can now run at scale on containers without the need to re-architect them. The HPE Container Platform delivers a seamless, cloud-like experience whether on-premises, at the edge, in multiple public clouds, or in a hybrid model. This makes the HPE Container Platform ideal for helping application developers, DevOps/ITops/CloudOps engineers, and data scientists accelerate their application development and deployment on containers, on-demand through a self-service portal and a RESTful API that surfaces programmable access.&lt;/p&gt;
&lt;p&gt;The HPE Container Platform is now available for preview on any customers’ infrastructure of choice (bare metal and VM) and any of the major public clouds. To learn more about this unique HPE container software platform visit the &lt;a href=&quot;https://developer.hpe.com/platform/bluedata/home&quot;&gt;HPE DEV portal&lt;/a&gt; and check out our blog article &lt;a href=&quot;/blog/running-non-cloud-native-apps-on-kubernetes-with-kubedirector&quot;&gt;here!&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;For more opportunities to contribute your insights, knowledge and experience to building the best possible developer experience around containers and open source Kubernetes, as well as learn from your peers, join the conversation on our &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;HPE DEV Slack Channel&lt;/a&gt; and &lt;a href=&quot;https://developer.hpe.com/signup&quot;&gt;join the community.&lt;/a&gt; Stay up to date with the latest news from HPE DEV by &lt;a href=&quot;https://developer.hpe.com/newsletter-signup&quot;&gt;signing up for our monthly newsletter&lt;/a&gt; and we will send you more awesome developer-focused posts about HPE projects and platforms every month. You can also follow our community on &lt;a href=&quot;https://twitter.com/HPE_Developer&quot;&gt;Twitter.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;We look forward to seeing you again at &lt;a href=&quot;https://events19.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2020/&quot;&gt;KubeCon + CloudNativeCon EMEA 2020,&lt;/a&gt; in Amsterdam, from March 30 to April 2, 2020.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Understanding API basics and the value they provide ]]></title><description><![CDATA[picture1 Over the course of my career, I’ve come across many technical colleagues who were experts on a given product or technology, but…]]></description><link>https://developer.hpe.com/understanding-api-basics-and-the-value-they-provide/</link><guid isPermaLink="false">https://developer.hpe.com/understanding-api-basics-and-the-value-they-provide/</guid><pubDate>Tue, 03 Dec 2019 16:58:32 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/picture1-1575393131999.png&quot; alt=&quot;picture1&quot;&gt;&lt;/p&gt;
&lt;p&gt;Over the course of my career, I’ve come across many technical colleagues who were experts on a given product or technology, but always from a user’s point of view. They knew how to design a solution, sell it to the customer, install it, and customize it to meet the customer’s requirements. And, sometimes, they could even debug rather difficult problems. Yet, when developers asked these technology experts about possible integration of their product with third-party software, or when they were pressed to provide scripts and automate processes, the technology experts were at a loss. They simply did not understand the product from a developer’s perspective.&lt;/p&gt;
&lt;p&gt;And for good reason. There’s a certain amount of magic that happens within an application done by the code itself. Many applications you work with today, on PCs, servers, or off in the cloud, include a hidden jewel called the Application Programming Interface (API). In a nutshell, an API makes it possible to interact with an application programmatically. This means that developers can integrate their application with your applications in a standardized way. This becomes very useful because it opens up a whole lot of new use cases and business opportunities. But in order for you to take advantage of these opportunities, you need to understand the product’s API well enough so that you can discuss its capabilities with developers.&lt;/p&gt;
&lt;h2&gt;REST and HTTP&lt;/h2&gt;
&lt;p&gt;Exploring how you interact programmatically with a product is no easy task. Developers are more familiar with this exercise, but as a non-developer, you might wonder how to get started. The good news is that a de facto standard for building APIs has emerged in the last few years. It’s called REST (REpresentational State Transfer). Let me first explain what REST is, and then I’ll define a few important concepts used by REST APIs, such as HTTP and JSON.&lt;/p&gt;
&lt;p&gt;Most of you are familiar with the terms HTTP (HyperText Transfer Protocol) and URL (Uniform Resource Locator) because you use a web browser every day. Yet you may not know how they relate to REST and APIs. REST is an architecture that uses a set of principles to describe how networked resources are defined and addressed, leveraging HTTP to build APIs.&lt;/p&gt;
&lt;p&gt;HTTP defines how messages are formatted and transmitted, as well as what actions Web servers should take in response to various commands. In HTTP, one uses verbs to describe an action that will apply to a given URL. When you launch a web browser, the implicit verb is GET. However, there are other verbs in the HTTP specifications such as POST, PUT, PATCH, and DELETE. REST APIs use these verbs to describe possible actions. The URL in a REST API call is, like in a browser, the target to which the verb must be applied. For example, &lt;code&gt;GET http://google.com&lt;/code&gt; means &quot;retrieve the content of the main index page at google.com, and render it in my browser window&quot;.&lt;/p&gt;
&lt;p&gt;Let’s review the most important HTTP verbs:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;GET – Retrieve object instance properties&lt;/li&gt;
&lt;li&gt;PUT/PATCH – Modify object instance properties (two different ways)&lt;/li&gt;
&lt;li&gt;POST – Create a new object instance&lt;/li&gt;
&lt;li&gt;DELETE – Remove an object instance&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Passing data back and forth&lt;/h2&gt;
&lt;p&gt;We now know how to use an HTTP verb to describe an action and a URL to point to something in the API, but we still need a mechanism to exchange data with the application at the other end of the API. This mechanism is used, for example, to create new items or to retrieve a list of items from the application. We use a couple of techniques to do this. The first technique uses something described by the HTTP specifications called HTTP headers. A header allows additional information to be attached to the API call as a key/value pair of strings.&lt;/p&gt;
&lt;p&gt;For example, you can add an &lt;code&gt;Accept&lt;/code&gt; header with the value &lt;code&gt;application/json&lt;/code&gt; to tell the API to return the result in JSON. There is no fixed list of HTTP headers, but there are many well-known headers used by APIs, such as Accept, Content-Type, and X-API-Version. Headers are useful for passing a simple text value.&lt;/p&gt;
&lt;p&gt;But what if you want to pass more data to your API? For this, you can use the HTTP message payload, the body of your HTTP call. This is typically used to specify the credentials to get access to an API.&lt;/p&gt;
&lt;h2&gt;JSON&lt;/h2&gt;
&lt;p&gt;To pass data back and forth with an API, you need to make sure the client (calling the API) and the API agree on a format for this data. The most popular format nowadays is called JSON (JavaScript Object Notation). JSON is a machine-readable format that remains fairly easy for humans to read. It typically looks like a series of key/value pairs separated by colons, with string values in quotes and multiple key/value pairs separated by commas. The following is a simple example of a valid JSON snippet:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;
{
  &quot;userName&quot;: &quot;foo@hpe.com&quot;,
  &quot;password&quot;: &quot;super-secure-password&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;JSON is case sensitive, so make sure you have the key names right (&quot;userName&quot; is not the same as &quot;username&quot;). You can express complex values by nesting JSON values and by using square brackets to indicate a collection of similar items as shown below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &quot;members&quot;: [
    { &quot;lastname&quot;: &quot;max&quot;,
      &quot;firstname&quot;: &quot;mad&quot;,
      &quot;age&quot;: 50 },
    { &quot;lastname&quot;: &quot;james&quot;,
      &quot;firstname&quot;: &quot;bond&quot;,
      &quot;age&quot;: 60 }
  ],
  &quot;count&quot;: 2
}
&lt;/code&gt;&lt;/pre&gt;
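&lt;p&gt;To see how a client might consume a payload like this, here is a minimal Python sketch (using only the standard &lt;code&gt;json&lt;/code&gt; module, and reusing the members example above) that parses the nested structure and walks the collection:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import json

payload = &quot;&quot;&quot;
{ &quot;members&quot;: [
    { &quot;lastname&quot;: &quot;max&quot;, &quot;firstname&quot;: &quot;mad&quot;, &quot;age&quot;: 50 },
    { &quot;lastname&quot;: &quot;james&quot;, &quot;firstname&quot;: &quot;bond&quot;, &quot;age&quot;: 60 }
  ],
  &quot;count&quot;: 2
}
&quot;&quot;&quot;

# json.loads() turns the JSON text into Python dictionaries and lists
data = json.loads(payload)

print(data[&quot;count&quot;])            # prints: 2
for member in data[&quot;members&quot;]:
    print(member[&quot;firstname&quot;], member[&quot;lastname&quot;], member[&quot;age&quot;])
&lt;/code&gt;&lt;/pre&gt;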
&lt;p&gt;An API might accept more than one format, and it may also provide responses in more than one format (JSON, XML, TEXT). Therefore, the convention is to use HTTP headers to specify which format you are going to use to pass data to the API (in your payload) and what data format you expect for the response. These headers are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Accept: application/json (please provide me with a response in JSON)&lt;/li&gt;
&lt;li&gt;Content-Type: application/json (I’m handing out a payload in JSON)&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Putting it all together&lt;/h2&gt;
&lt;p&gt;In summary, an API uses:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A URL (you will also see the term URI, Uniform Resource Identifier) to specify an object ID or a class of objects&lt;/li&gt;
&lt;li&gt;A verb to specify what action to take on that URL/URI&lt;/li&gt;
&lt;li&gt;One or more HTTP headers to attach certain properties on the API call, such as what format to accept for the response&lt;/li&gt;
&lt;li&gt;A payload to pass data to the API&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Let’s look at a few examples:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;GET http(s)://myAPIendpoint/rest/servers with HEADER Accept: application/json: Retrieves the list of servers from API endpoint http(s)://myAPIendpoint in JSON format&lt;/li&gt;
&lt;li&gt;POST http(s)://myAPIendpoint/rest/servers with HEADER Content-Type: application/json and appropriate JSON payload: Creates a new object of type server&lt;/li&gt;
&lt;li&gt;DELETE  http(s)://myAPIendpoint/rest/servers/1212138349873: Deletes server object given provided UUID&lt;/li&gt;
&lt;/ul&gt;
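&lt;p&gt;To make these examples more concrete, here is a short Python sketch of the same three calls using the popular requests library (installed with pip install requests). Keep in mind this is only an illustration: the endpoint, the server properties, and the ID are hypothetical, borrowed from the examples above, and a real API would document its own resources and payloads:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import requests

BASE = &quot;https://myAPIendpoint/rest&quot;   # hypothetical API endpoint

# GET the list of servers, asking for the response in JSON
response = requests.get(BASE + &quot;/servers&quot;, headers={&quot;Accept&quot;: &quot;application/json&quot;})
print(response.status_code, response.json())

# POST a new server object, handing out a JSON payload
new_server = {&quot;name&quot;: &quot;my-new-server&quot;}   # hypothetical properties
response = requests.post(BASE + &quot;/servers&quot;,
                         headers={&quot;Content-Type&quot;: &quot;application/json&quot;},
                         json=new_server)

# DELETE a server object given its ID
response = requests.delete(BASE + &quot;/servers/1212138349873&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you prefer not to write any code, the same calls can be placed interactively with Postman, described next.&lt;/p&gt;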
&lt;h2&gt;Postman to the rescue&lt;/h2&gt;
&lt;p&gt;Since a browser can only really handle GET, it is clearly not the appropriate tool for manipulating an API. The good news is that a fantastic tool called Postman exists, which comes with a free version you can use to explore APIs. Visit &lt;a href=&quot;http://getpostman.com&quot;&gt;http://getpostman.com&lt;/a&gt; to find the version that works for you (Windows, Mac and Linux). Once installed, start Postman on your machine. You do not need to sign in to use it for exploring APIs. But since Postman is more than just an API client tool, you’ll want to check out its full feature set.&lt;/p&gt;
&lt;h2&gt;Let’s find a sandbox to give it a try&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Make sure you have access to the Internet for this next section.&lt;/p&gt;
&lt;p&gt;Millions of APIs are available on the Internet today. Most of them are fee-based and require proper authentication, but a few free APIs are still available, which you can use with or without authentication. I selected a very simple weather report API that doesn’t require authentication. If you prefer, you can browse this list to find another API: &lt;a href=&quot;https://github.com/public-apis/public-apis&quot;&gt;https://github.com/public-apis/public-apis&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The API end-point is:
&lt;a href=&quot;https://www.metaweather.com/api/&quot;&gt;https://www.metaweather.com/api/&lt;/a&gt;. According to the documentation, we first issue a call to find a location on Earth:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Querying API for WOEID of a city, such as Paris&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;In Postman, make sure the URL is set to &lt;a href=&quot;https://www.metaweather.com/api/location/search/?query=Paris&quot;&gt;https://www.metaweather.com/api/location/search/?query=Paris&lt;/a&gt;, and that GET is the selected verb before you hit Send to issue the API call. You can see in the response section that it is, indeed, provided in JSON, which is somewhat readable:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;[{&quot;title&quot;:&quot;Paris&quot;,&quot;location_type&quot;:&quot;City&quot;,&quot;woeid&quot;:615702,&quot;latt_long&quot;:&quot;48.856930,2.341200&quot;}]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;From the JSON response, we can extract the woeid (Where On Earth ID) for Paris (615702) and use this value to query for a weather report in Paris:&lt;/p&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Query API for weather report in Paris using WOEID&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Still in Postman, make sure the URL is set to &lt;a href=&quot;https://www.metaweather.com/api/location/615702/&quot;&gt;https://www.metaweather.com/api/location/615702/&lt;/a&gt; and hit Send again:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;
{
    &quot;consolidated_weather&quot;: [
        {
            &quot;id&quot;: 6618427756118016,
            &quot;weather_state_name&quot;: &quot;Showers&quot;,
            &quot;weather_state_abbr&quot;: &quot;s&quot;,
            &quot;wind_direction_compass&quot;: &quot;W&quot;,
            &quot;created&quot;: &quot;2019-11-06T12:36:06.334166Z&quot;,
            &quot;applicable_date&quot;: &quot;2019-11-06&quot;,
            &quot;min_temp&quot;: 8.605,
            &quot;max_temp&quot;: 12.215,
            &quot;the_temp&quot;: 11.875,
            &quot;wind_speed&quot;: 4.475518589403219,
            &quot;wind_direction&quot;: 277.3125549075835,
            &quot;air_pressure&quot;: 1005.5,
            &quot;humidity&quot;: 75,
            &quot;visibility&quot;: 10.535348564383998,
            &quot;predictability&quot;: 73
        },
        {
            &quot;id&quot;: 5413015522377728,
            &quot;weather_state_name&quot;: &quot;Light Rain&quot;,
            &quot;weather_state_abbr&quot;: &quot;lr&quot;,
            &quot;wind_direction_compass&quot;: &quot;SW&quot;,
            &quot;created&quot;: &quot;2019-11-06T12:36:09.637230Z&quot;,
            &quot;applicable_date&quot;: &quot;2019-11-07&quot;,
            &quot;min_temp&quot;: 7.0,
            &quot;max_temp&quot;: 10.62,
            &quot;the_temp&quot;: 10.41,
            &quot;wind_speed&quot;: 8.787927365497495,
            &quot;wind_direction&quot;: 230.868718440178,
            &quot;air_pressure&quot;: 1000.0,
            &quot;humidity&quot;: 68,
            &quot;visibility&quot;: 15.52930883639545,
            &quot;predictability&quot;: 75
        },
        {
            &quot;id&quot;: 6503888796516352,
            &quot;weather_state_name&quot;: &quot;Showers&quot;,
            &quot;weather_state_abbr&quot;: &quot;s&quot;,
            &quot;wind_direction_compass&quot;: &quot;ESE&quot;,
            &quot;created&quot;: &quot;2019-11-06T12:36:12.628582Z&quot;,
            &quot;applicable_date&quot;: &quot;2019-11-08&quot;,
            &quot;min_temp&quot;: 4.395,
            &quot;max_temp&quot;: 9.465,
            &quot;the_temp&quot;: 9.11,
            &quot;wind_speed&quot;: 3.1676953831490766,
            &quot;wind_direction&quot;: 119.31121355342638,
            &quot;air_pressure&quot;: 1005.5,
            &quot;humidity&quot;: 76,
            &quot;visibility&quot;: 13.218118686868687,
            &quot;predictability&quot;: 73
        },
        {
            &quot;id&quot;: 6070183267401728,
            &quot;weather_state_name&quot;: &quot;Light Rain&quot;,
            &quot;weather_state_abbr&quot;: &quot;lr&quot;,
            &quot;wind_direction_compass&quot;: &quot;SW&quot;,
            &quot;created&quot;: &quot;2019-11-06T12:36:15.316626Z&quot;,
            &quot;applicable_date&quot;: &quot;2019-11-09&quot;,
            &quot;min_temp&quot;: 4.02,
            &quot;max_temp&quot;: 9.68,
            &quot;the_temp&quot;: 9.61,
            &quot;wind_speed&quot;: 4.867012487862881,
            &quot;wind_direction&quot;: 224.66670428043918,
            &quot;air_pressure&quot;: 1012.5,
            &quot;humidity&quot;: 73,
            &quot;visibility&quot;: 12.878850015907101,
            &quot;predictability&quot;: 75
        },
        {
            &quot;id&quot;: 5082031551676416,
            &quot;weather_state_name&quot;: &quot;Showers&quot;,
            &quot;weather_state_abbr&quot;: &quot;s&quot;,
            &quot;wind_direction_compass&quot;: &quot;WNW&quot;,
            &quot;created&quot;: &quot;2019-11-06T12:36:18.342176Z&quot;,
            &quot;applicable_date&quot;: &quot;2019-11-10&quot;,
            &quot;min_temp&quot;: 4.32,
            &quot;max_temp&quot;: 9.120000000000001,
            &quot;the_temp&quot;: 9.379999999999999,
            &quot;wind_speed&quot;: 5.455819375540178,
            &quot;wind_direction&quot;: 286.2336385713633,
            &quot;air_pressure&quot;: 1006.5,
            &quot;humidity&quot;: 72,
            &quot;visibility&quot;: 12.235109460749225,
            &quot;predictability&quot;: 73
        },
        {
            &quot;id&quot;: 6399005695148032,
            &quot;weather_state_name&quot;: &quot;Showers&quot;,
            &quot;weather_state_abbr&quot;: &quot;s&quot;,
            &quot;wind_direction_compass&quot;: &quot;W&quot;,
            &quot;created&quot;: &quot;2019-11-06T12:36:21.245629Z&quot;,
            &quot;applicable_date&quot;: &quot;2019-11-11&quot;,
            &quot;min_temp&quot;: 4.1850000000000005,
            &quot;max_temp&quot;: 8.07,
            &quot;the_temp&quot;: 6.65,
            &quot;wind_speed&quot;: 2.8226837270341205,
            &quot;wind_direction&quot;: 267.5,
            &quot;air_pressure&quot;: 1009.0,
            &quot;humidity&quot;: 77,
            &quot;visibility&quot;: 9.999726596675416,
            &quot;predictability&quot;: 73
        }
    ],
    &quot;time&quot;: &quot;2019-11-06T14:45:55.699444+01:00&quot;,
    &quot;sun_rise&quot;: &quot;2019-11-06T07:44:50.213694+01:00&quot;,
    &quot;sun_set&quot;: &quot;2019-11-06T17:23:01.239529+01:00&quot;,
    &quot;timezone_name&quot;: &quot;LMT&quot;,
    &quot;parent&quot;: {
        &quot;title&quot;: &quot;France&quot;,
        &quot;location_type&quot;: &quot;Country&quot;,
        &quot;woeid&quot;: 23424819,
        &quot;latt_long&quot;: &quot;46.71,1.72&quot;
    },
    &quot;sources&quot;: [
        {
            &quot;title&quot;: &quot;BBC&quot;,
            &quot;slug&quot;: &quot;bbc&quot;,
            &quot;url&quot;: &quot;http://www.bbc.co.uk/weather/&quot;,
            &quot;crawl_rate&quot;: 360
        },
        {
            &quot;title&quot;: &quot;Forecast.io&quot;,
            &quot;slug&quot;: &quot;forecast-io&quot;,
            &quot;url&quot;: &quot;http://forecast.io/&quot;,
            &quot;crawl_rate&quot;: 480
        },
        {
            &quot;title&quot;: &quot;HAMweather&quot;,
            &quot;slug&quot;: &quot;hamweather&quot;,
            &quot;url&quot;: &quot;http://www.hamweather.com/&quot;,
            &quot;crawl_rate&quot;: 360
        },
        {
            &quot;title&quot;: &quot;Met Office&quot;,
            &quot;slug&quot;: &quot;met-office&quot;,
            &quot;url&quot;: &quot;http://www.metoffice.gov.uk/&quot;,
            &quot;crawl_rate&quot;: 180
        },
        {
            &quot;title&quot;: &quot;OpenWeatherMap&quot;,
            &quot;slug&quot;: &quot;openweathermap&quot;,
            &quot;url&quot;: &quot;http://openweathermap.org/&quot;,
            &quot;crawl_rate&quot;: 360
        },
        {
            &quot;title&quot;: &quot;Weather Underground&quot;,
            &quot;slug&quot;: &quot;wunderground&quot;,
            &quot;url&quot;: &quot;https://www.wunderground.com/?apiref=fc30dc3cd224e19b&quot;,
            &quot;crawl_rate&quot;: 720
        },
        {
            &quot;title&quot;: &quot;World Weather Online&quot;,
            &quot;slug&quot;: &quot;world-weather-online&quot;,
            &quot;url&quot;: &quot;http://www.worldweatheronline.com/&quot;,
            &quot;crawl_rate&quot;: 360
        }
    ],
    &quot;title&quot;: &quot;Paris&quot;,
    &quot;location_type&quot;: &quot;City&quot;,
    &quot;woeid&quot;: 615702,
    &quot;latt_long&quot;: &quot;48.856930,2.341200&quot;,
    &quot;timezone&quot;: &quot;Europe/Paris&quot;
}

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As you can see, a fair amount of information is provided by the API, including latitude, longitude, temperature, and weather conditions in Paris (which is not great at the moment by the way). As a developer, you could parse this JSON response and use it in your own application.&lt;/p&gt;
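&lt;p&gt;For example, here is a minimal Python sketch of the same two-step flow we just walked through in Postman, using the requests library to look up the WOEID and then print a short forecast from the consolidated_weather list (assuming the metaweather.com endpoint is reachable from your machine):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import requests

API = &quot;https://www.metaweather.com/api&quot;

# Step 1: look up the WOEID (Where On Earth ID) for Paris
locations = requests.get(API + &quot;/location/search/&quot;, params={&quot;query&quot;: &quot;Paris&quot;}).json()
woeid = locations[0][&quot;woeid&quot;]   # 615702 in the response above

# Step 2: fetch the weather report for that WOEID and parse it
report = requests.get(API + &quot;/location/&quot; + str(woeid) + &quot;/&quot;).json()
for day in report[&quot;consolidated_weather&quot;]:
    print(day[&quot;applicable_date&quot;], day[&quot;weather_state_name&quot;], day[&quot;the_temp&quot;])
&lt;/code&gt;&lt;/pre&gt;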
&lt;h2&gt;Now what?&lt;/h2&gt;
&lt;p&gt;Congratulations! You have placed your first API calls using Postman. This simple API only accepted GET calls, so one might argue everything could have been handled by a standard browser. The advantage of using Postman is that you can save your calls and start building collections of API calls that are useful in your job. We used a very limited API to get the weather report in Paris, but the exact same technique (using Postman) could be applied with other REST APIs, such as the ones from HPE OneView, HPE SimpliVity or HPE iLO. I plan on discussing these in future articles.&lt;/p&gt;
&lt;p&gt;If you have questions or want to connect, you can reach me on Twitter &lt;a href=&quot;https://twitter.com/DidierLalli&quot;&gt;@DidierLalli.&lt;/a&gt; In addition, don’t forget to keep an eye out on the &lt;a href=&quot;/blog&quot;&gt;HPE DEV blog site&lt;/a&gt; for future articles on this topic.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[A Faster Path to Application Modernization - Newsletter]]></title><link>https://developer.hpe.com/2019-December-02/</link><guid isPermaLink="false">https://developer.hpe.com/2019-December-02/</guid><pubDate>Mon, 02 Dec 2019 06:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Using HPE Cloud Volumes with Amazon EKS]]></title><description><![CDATA[Running applications on Kubernetes is becoming more mainstream by the minute. Kubernetes is fairly unambiguous to the developer but may pose…]]></description><link>https://developer.hpe.com/using-hpe-cloud-volumes-with-amazon-eks/</link><guid isPermaLink="false">https://developer.hpe.com/using-hpe-cloud-volumes-with-amazon-eks/</guid><pubDate>Fri, 29 Nov 2019 17:05:55 GMT</pubDate><content:encoded>&lt;p&gt;Running applications on Kubernetes is becoming more mainstream by the minute. Kubernetes is fairly unambiguous to the developer but may pose challenges for the Ops teams. Private cloud, hybrid cloud or public cloud? Managed Kubernetes or BYO (build your own) Kubernetes? There’s an endless matrix of combinations. It becomes even more challenging when you start caring about stateful applications and persistent storage, as data has gravity in the sense that it becomes sticky to where you deploy your application. This technical tutorial will explore the paradigm of managed Kubernetes and how Hewlett Packard Enterprise (HPE) brings value to the public cloud by providing advanced data services to Kubernetes on Amazon Web Services (AWS).&lt;/p&gt;
&lt;p&gt;At KubeCon this year, HPE put together &lt;a href=&quot;https://community.hpe.com/t5/HPE-Storage-Tech-Insiders/HPE-storage-supporting-thech-HPE-Container-Platform-at-KubeCon/ba-p/7070094&quot;&gt;a hybrid cloud CI/CD pipeline&lt;/a&gt; using the new HPE Container Platform and Amazon EKS (Elastic Kubernetes Service). Central to the solution was the integration of HPE Nimble Storage and HPE Cloud Volumes, two key components that enable true hybrid cloud for containers. This blog post will focus on getting started with Amazon EKS, deploying the HPE Volume Driver for Kubernetes FlexVolume Plugin onto EKS, and using HPE Cloud Volumes instead of Amazon EBS (Elastic Block Storage) for persistent storage.&lt;/p&gt;
&lt;h1&gt;Managed Kubernetes in Public Cloud&lt;/h1&gt;
&lt;p&gt;There are plenty of ways to run Kubernetes in the public cloud. Many distributions of Kubernetes allow you to deploy directly to the hyper-scalers by using simple cloud native tooling to instantiate Kubernetes clusters. But doing so brings the inconvenience of managing your own control plane, the Kubernetes masters with their associated services, which can be a bit daunting and not highly valuable or impactful work. Managed Kubernetes also comes in different shapes and sizes. The most common model is centered around creating clusters, where the cloud provider manages the entire control plane for you, and only worker nodes are deployed as regular instances in your cloud infrastructure. This model brings the benefit of being able to choose instance type (CPU, GPU, and memory) and allows customization of the worker nodes, like installing custom packages and other tweaks necessary to run a certain workload.&lt;/p&gt;
&lt;p&gt;The managed Kubernetes service I’m going to focus on in this blog post is Amazon EKS. HPE recently released the &lt;a href=&quot;https://github.com/hpe-storage/flexvolume-driver&quot;&gt;HPE Volume Driver for Kubernetes FlexVolume Plugin&lt;/a&gt; version 3.1, which supports HPE Cloud Volumes and includes support not only for EKS, but also for Azure AKS (Azure Kubernetes Service). As HPE has just established connectivity with Google Cloud, expect full support for GKE (Google Kubernetes Engine) in the near future.&lt;/p&gt;
&lt;h1&gt;Prerequisites&lt;/h1&gt;
&lt;p&gt;This tutorial assumes a certain familiarity with AWS and HPE Cloud Volumes, as well as the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;HPE Cloud Volumes connectivity that is set up according to &lt;a href=&quot;https://docs.cloudvolumes.hpe.com/help/&quot;&gt;the documentation&lt;/a&gt; in a region that provides Amazon EKS. The litmus test is whether you can connect a cloud volume to an EC2 instance in the same region.&lt;/li&gt;
&lt;li&gt;A VPC ID and IPv4 CIDR of the existing VPC from where HPE Cloud Volumes can be accessed.&lt;/li&gt;
&lt;li&gt;HPE Cloud Volumes API Access Key and API Secret Key generated for use by the FlexVolume driver. Make sure you&apos;re logged in to &lt;a href=&quot;https://cloudvolumes.hpe.com/login&quot;&gt;cloudvolumes.hpe.com&lt;/a&gt; and &lt;a href=&quot;https://cloudvolumes.hpe.com/usersettings&quot;&gt;copy from here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;The environment variables &lt;code&gt;AWS_ACCESS_KEY_ID&lt;/code&gt;, &lt;code&gt;AWS_SECRET_ACCESS_KEY&lt;/code&gt; and &lt;code&gt;AWS_DEFAULT_REGION&lt;/code&gt; set.&lt;/li&gt;
&lt;li&gt;The utilities &lt;code&gt;aws&lt;/code&gt;, &lt;code&gt;helm&lt;/code&gt; (version 2), &lt;code&gt;kubectl&lt;/code&gt; and &lt;code&gt;eksctl&lt;/code&gt; installed on the machine you’re using to provision your EKS cluster (further info outlined where used below).&lt;/li&gt;
&lt;li&gt;An SSH public key that was generated from the provisioning host.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you have any questions on the prerequisites or need clarification, please feel free to reach out to me &lt;a href=&quot;https://hpedev.slack.com/&quot;&gt;on Slack&lt;/a&gt;.&lt;/p&gt;
&lt;h1&gt;Configure your subnets&lt;/h1&gt;
&lt;p&gt;By default, &lt;code&gt;eksctl&lt;/code&gt; creates new VPCs and subnets for the cluster it provisions. This is impractical when using HPE Cloud Volumes, as you want to use an existing VPC with subnets already accessible from HPE Cloud Volumes. In this tutorial, I’ll tag two subnets for the internal load-balancer and two subnets for the external load-balancer. These subnets will be referenced in the EKS setup later.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;aws ec2 create-tags --resources subnet-AA000000 subnet-BB000000 --tags Key=kubernetes.io/role/elb,Value=1 
aws ec2 create-tags --resources subnet-CC000000 subnet-DD000000 --tags Key=kubernetes.io/role/internal-elb,Value=1 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The above command produces no output if it is successful. Ensure the environment variables are set as outlined in the prerequisites. Learn more about &lt;a href=&quot;https://aws.amazon.com/cli/&quot;&gt;the AWS CLI here&lt;/a&gt;.&lt;/p&gt;
&lt;h1&gt;Fire up the EKS cluster&lt;/h1&gt;
&lt;p&gt;The tool &lt;code&gt;eksctl&lt;/code&gt; was initially developed outside of Amazon. It’s now a joint effort between Amazon and the original author, Weaveworks. Please see &lt;a href=&quot;https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html&quot;&gt;the official documentation&lt;/a&gt; for the &lt;code&gt;eksctl&lt;/code&gt; tool to learn how to get started.&lt;/p&gt;
&lt;p&gt;Review and edit the &lt;code&gt;eksctl&lt;/code&gt; command below to fit the target environment. Ensure there’s a public SSH key in place, the environment variables are set, and the subnets have been tagged.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;eksctl create cluster \ 
--name HPEDEV \ 
--version 1.13 \ 
--region ${AWS_DEFAULT_REGION} \ 
--nodegroup-name standard-workers \ 
--node-type t3.medium \ 
--nodes 3 \ 
--nodes-min 1 \ 
--nodes-max 4 \ 
--ssh-public-key ~/.ssh/id_rsa.pub \ 
--node-ami auto \ 
--vpc-public-subnets subnet-AA000000,subnet-BB000000 \ 
--vpc-private-subnets subnet-CC000000,subnet-DD000000 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Kubernetes 1.12 or 1.13 is currently required for the FlexVolume driver when deployed on EKS. Stay tuned for future updates.&lt;/p&gt;
&lt;p&gt;This command takes a few minutes to execute. Here’s an example output from a successful run:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;[ℹ]  eksctl version 0.8.0 
[ℹ]  using region us-west-2 
[✔]  using existing VPC (vpc-00000000) and subnets (private:[subnet-CC000000 subnet-DD000000] public:[subnet-AA000000 subnet-AA000000]) 
[!]  custom VPC/subnets will be used; if resulting cluster doesn&apos;t function as expected, make sure to review the configuration of VPC/subnets 
[ℹ]  nodegroup &quot;standard-workers&quot; will use &quot;ami-04e247c4613de71fa&quot; [AmazonLinux2/1.13] 
[ℹ]  using SSH public key &quot;/home/ubuntu/.ssh/id_rsa.pub&quot; as &quot;eksctl-HPEDEV-nodegroup-standard-workers-d0:ac:7f:86:04:d6:1a:ec:71:cc:3d:75:cf:1b: 
f2:46&quot; 
[ℹ]  using Kubernetes version 1.13 
[ℹ]  creating EKS cluster &quot;HPEDEV&quot; in &quot;us-west-2&quot; region 
[ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup 
[ℹ]  if you encounter any issues, check CloudFormation console or try &apos;eksctl utils describe-stacks --region=us-west-2 --cluster=HPEDEV&apos; 
[ℹ]  CloudWatch logging will not be enabled for cluster &quot;HPEDEV&quot; in &quot;us-west-2&quot; 
[ℹ]  you can enable it with &apos;eksctl utils update-cluster-logging --region=us-west-2 --cluster=HPEDEV&apos; 
[ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster &quot;HPEDEV&quot; in &quot;us-west-2&quot; 
[ℹ]  2 sequential tasks: { create cluster control plane &quot;HPEDEV&quot;, create nodegroup &quot;standard-workers&quot; } 
[ℹ]  building cluster stack &quot;eksctl-HPEDEV-cluster&quot; 
[ℹ]  deploying stack &quot;eksctl-HPEDEV-cluster&quot; 
[ℹ]  building nodegroup stack &quot;eksctl-HPEDEV-nodegroup-standard-workers&quot; 
[ℹ]  deploying stack &quot;eksctl-HPEDEV-nodegroup-standard-workers&quot; 
[✔]  all EKS cluster resources for &quot;HPEDEV&quot; have been created 
[✔]  saved kubeconfig as &quot;/home/ubuntu/.kube/config&quot; 
[ℹ]  adding identity &quot;arn:aws:iam::000000000000:role/eksctl-HPEDEV-nodegroup-standard-NodeInstanceRole-000000000000&quot; to auth ConfigMap 
[ℹ]  nodegroup &quot;standard-workers&quot; has 0 node(s) 
[ℹ]  waiting for at least 1 node(s) to become ready in &quot;standard-workers&quot; 
[ℹ]  nodegroup &quot;standard-workers&quot; has 3 node(s) 
[ℹ]  node &quot;ip-172-31-1-186.us-west-2.compute.internal&quot; is not ready 
[ℹ]  node &quot;ip-172-31-24-30.us-west-2.compute.internal&quot; is not ready 
[ℹ]  node &quot;ip-172-31-28-199.us-west-2.compute.internal&quot; is ready 
[ℹ]  kubectl command should work with &quot;/home/ubuntu/.kube/config&quot;, try &apos;kubectl get nodes&apos; 
[✔]  EKS cluster &quot;HPEDEV&quot; in &quot;us-west-2&quot; region is ready 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;At this point, you should have connectivity to the cluster using &lt;code&gt;kubectl&lt;/code&gt;, as shown below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;$ kubectl get nodes 
NAME                                          STATUS   ROLES    AGE   VERSION 
ip-172-31-1-186.us-west-2.compute.internal    Ready    &amp;#x3C;none&gt;   2m    v1.13.11-eks-5876d6 
ip-172-31-24-30.us-west-2.compute.internal    Ready    &amp;#x3C;none&gt;   2m    v1.13.11-eks-5876d6 
ip-172-31-28-199.us-west-2.compute.internal   Ready    &amp;#x3C;none&gt;   2m    v1.13.11-eks-5876d6 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;/uploads/media/2019/10/screen-shot-2019-11-26-at-22906-pm-1574814807294.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;You&apos;re now ready to deploy the FlexVolume driver!&lt;/p&gt;
&lt;h1&gt;Deploy the FlexVolume driver&lt;/h1&gt;
&lt;p&gt;The easiest way to install the HPE Volume Driver for Kubernetes FlexVolume Plugin is to use Helm. The FlexVolume driver Helm chart is available on &lt;a href=&quot;https://hub.helm.sh/charts/hpe-storage/hpe-flexvolume-driver&quot;&gt;hub.helm.sh&lt;/a&gt;. But before you deploy the driver, you need to create a &lt;code&gt;values.yaml&lt;/code&gt; file that corresponds to the HPE Cloud Volumes setup. This file is used as input to Helm.&lt;/p&gt;
&lt;p&gt;Use this &lt;code&gt;values.yaml&lt;/code&gt; file as a starting point:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;--- 
backend: cloudvolumes.hpe.com 
username: &amp;#x3C; HPE Cloud Volumes API Access Key &gt; 
password: &amp;#x3C; HPE Cloud Volumes API Secret Key &gt; 
pluginType: cv 
fsType: xfs 
storageClass: 
  defaultClass: true 
cv: 
  config: 
    snapPrefix: BaseFor 
    automatedConnection: true 
    existingCloudSubnet: &amp;#x3C; CIDR network in VPC, i.e 172.31.0.0/16 &gt; 
    region: us-west-2 
    privateCloud: &amp;#x3C; VPC ID, i.e vpc-00000000 &gt; 
    cloudComputeProvider: &quot;Amazon AWS&quot; 
    perfPolicy: Other 
    volumeType: GPF 
    encryption: true 
    protectionTemplate: twicedaily:4 
    destroyOnRm: true 
    limitIOPS: &quot;1000&quot; 
    initiators: 
      - &apos;&quot;eth0&quot;&apos; 
      - &apos;&quot;eth1&quot;&apos; 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, you need to install Tiller on your cluster. Helm 3 does not require Tiller; however, the Helm chart has not been tested with Helm 3, so I&apos;m using Helm 2 for this exercise.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;kubectl -n kube-system create serviceaccount tiller 
kubectl create clusterrolebinding tiller \ 
  --clusterrole=cluster-admin \ 
  --serviceaccount=kube-system:tiller 
helm init --service-account tiller 
kubectl -n kube-system  rollout status deploy/tiller-deploy 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Tiller should now be installed on the cluster. Now install the chart:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;helm repo add hpe-storage https://hpe-storage.github.io/co-deployments/ 
helm install hpe-storage/hpe-flexvolume-driver --version 3.1.0 --namespace kube-system --name hpe-flexvolume-driver -f values.yaml 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It should take less than a minute or so to install the required components. Run the below &lt;code&gt;kubectl&lt;/code&gt; command to verify that all components are running:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;kubectl -n kube-system get deploy/hpe-dynamic-provisioner deploy/cv-cp deploy/hpe-flexvolume-driver 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It should output something similar to this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;NAME                                            READY   UP-TO-DATE   AVAILABLE   AGE 
deployment.extensions/hpe-dynamic-provisioner   1/1     1            1           30s 
deployment.extensions/cv-cp                     1/1     1            1           30s 
NAME                                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE 
daemonset.extensions/hpe-flexvolume-driver   3         3         3       3            3           &amp;#x3C;none&gt;          30s 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Incidentally, the cluster will end up with two default storage classes, which will block provisioning from a default storage class. Annotate the built-in “gp2” storage class to become non-default:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;kubectl annotate sc/gp2 storageclass.kubernetes.io/is-default-class=false --overwrite 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The EKS cluster is now ready for users to declare PVCs (Persistent Volume Claims) served by HPE Cloud Volumes!&lt;/p&gt;
&lt;h1&gt;Hello World&lt;/h1&gt;
&lt;p&gt;In the current incarnation of HPE Cloud Volumes it may take some time for the first PV (Persistent Volume) to connect to the first Pod. Network resources are provisioned in the background as needed. The second volume to attach only takes a few seconds. Now create a PVC and attach a workload.&lt;/p&gt;
&lt;p&gt;Here’s an example PVC that creates a 64GiB volume:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;--- 
apiVersion: v1 
kind: PersistentVolumeClaim 
metadata: 
  name: mypvc 
spec: 
  accessModes: 
  - ReadWriteOnce 
  resources: 
    requests: 
      storage: 64Gi 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Either create a file with the PVC stanza above or paste to &lt;code&gt;stdin&lt;/code&gt; with &lt;code&gt;kubectl create -f &amp;#x3C;file&gt;&lt;/code&gt;. Once the PVC is declared, a PV will be created. Inspect it with the following:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;kubectl get pvc 
NAME    STATUS   VOLUME                                              CAPACITY   ACCESS MODES   STORAGECLASS   AGE 
mypvc   Bound    hpe-standard-5b75aaa8-10a7-11ea-8232-0a210d2b9ace   64Gi       RWO            hpe-standard   18s 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Volume creation does not attach the device to any host. The attachment happens once a workload requests the claim. So, declare a simple Pod to attach the claim:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;--- 
apiVersion: v1 
kind: Pod 
metadata: 
  name: ioping 
spec: 
  containers: 
  - name: ioping 
    image: hpestorage/ioping 
    command: [ &quot;ioping&quot; ] 
    args: [ &quot;/data&quot; ] 
    volumeMounts: 
    - name: data 
      mountPath: /data 
  volumes: 
  - name: data 
    persistentVolumeClaim: 
      claimName: mypvc 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Depending on the environment, this initial deployment could take up to 10 minutes. I’ve been bouncing clusters up and down all day, hence my pod was up in less than a minute:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;kubectl get pod 
NAME     READY   STATUS    RESTARTS   AGE 
ioping   1/1     Running   0          53s 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Logs from the Pod indicate an XFS filesystem mounted on /data, served by a multipath device:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;kubectl logs pod/ioping 
4 KiB &amp;#x3C;&amp;#x3C;&amp;#x3C; /data (xfs /dev/mapper/mpatha): request=1 time=8.26 ms (warmup) 
4 KiB &amp;#x3C;&amp;#x3C;&amp;#x3C; /data (xfs /dev/mapper/mpatha): request=2 time=7.05 ms 
4 KiB &amp;#x3C;&amp;#x3C;&amp;#x3C; /data (xfs /dev/mapper/mpatha): request=3 time=7.33 ms 
4 KiB &amp;#x3C;&amp;#x3C;&amp;#x3C; /data (xfs /dev/mapper/mpatha): request=4 time=8.13 ms 
4 KiB &amp;#x3C;&amp;#x3C;&amp;#x3C; /data (xfs /dev/mapper/mpatha): request=5 time=8.44 ms 
^C 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;At this point, you can also login to the HPE Cloud Volumes portal to observe the attached volume.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/uploads/media/2019/10/screen-shot-2019-11-26-at-43542-pm-1574815092842.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h1&gt;Summary&lt;/h1&gt;
&lt;p&gt;Many infrastructure choices are available today that enable Kubernetes. Likewise, there are many options on the persistent storage side. HPE Cloud Volumes gives Ops teams much-needed relief, as they may rely on a service to persist their application data rather than becoming storage administrators. Not many of the cloud-native software-defined storage (SDS) solutions for Kubernetes give you that luxury. Consuming storage services directly from the cloud provider creates an unnecessary lock-in situation, and the lack of enterprise-grade features is what drives users to alternative SDS solutions. HPE checks all the boxes by providing reliable and performant storage for Kubernetes. And it’s for private cloud, hybrid cloud, and public cloud, as well as managed Kubernetes and BYO Kubernetes. Multicloud mobility and competitive pricing come bundled!&lt;/p&gt;
&lt;p&gt;Start today by learning more about:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://cloudvolumes.hpe.com&quot;&gt;HPE Cloud Volumes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/hpe-storage/flexvolume-driver/tree/master/examples/kubernetes/hpe-cloud-volumes&quot;&gt;HPE Cloud Volumes StorageClass parameters&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://aws.amazon.com/eks/&quot;&gt;Amazon EKS&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[DevOps D-Day speakers focus on real-world use cases]]></title><description><![CDATA[picture3 A large crowd of DevOps fanatics gathered in Marseille, France on November 14th to participate in sessions and workshops at the 5th…]]></description><link>https://developer.hpe.com/devops-d-day-speakers-focus-on-real-world-use-cases/</link><guid isPermaLink="false">https://developer.hpe.com/devops-d-day-speakers-focus-on-real-world-use-cases/</guid><pubDate>Thu, 21 Nov 2019 16:00:12 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/picture3-1574352241077.png&quot; alt=&quot;picture3&quot;&gt;&lt;/p&gt;
&lt;p&gt;A large crowd of DevOps fanatics gathered in Marseille, France, on November 14th to participate in sessions and workshops at the 5th DevOps D-Day event. The event was held in the VIP lounges of the arena of the famous French soccer club, Olympique de Marseille, right in the heart of the city. Despite the bad weather, 1,200 people and 24 sponsors gathered for the event.&lt;/p&gt;
&lt;p&gt;The conference featured around fifty different sessions ranging from high-level talks about how to develop a DevOps mindset to detailed examples of how companies implemented DevOps within their organization. There were numerous sessions on specific technologies as well, with Kubernetes being a very popular topic. Hewlett Packard Enterprise (HPE) and longtime partner, AntemetA, co-sponsored a large booth and two keynote sessions. These were presented by Christian Schutz who spoke about the adoption of Kubernetes in French companies over the course of 2019 and how to build a CI/CD pipeline with Kubernetes and OpenStack.&lt;/p&gt;
&lt;p&gt;During the breaks between sessions, crowds gathered in our booth, eager to play the famous arcade game, Space Invaders. The player with the highest score won a prize offered by AntemetA. Those of us on the HPE Dev team spent a lot of time conversing with interesting event attendees from companies such as La Poste and Airbus Helicopters, as well as many university students who came looking for swag (and jobs). We also had a drawing for a prize where we randomly chose a name from those who signed up for the &lt;a href=&quot;https://hpe-developer.8ar.ms/newsletter-signup&quot;&gt;HPE Dev Newsletter.&lt;/a&gt; The winner of our drawing was from NeoPost. He came to pick up the gift and have his picture taken at the end of the day.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/small-version-didier-with-winner-3-1574454720874.jpg&quot; alt=&quot;small version didier with winner 3&quot;&gt;&lt;/p&gt;
&lt;p&gt;The few sessions I went to focused on customer experiences, which I always find interesting. In the sessions I attended, the speakers all delivered their talk in French in a laidback, interesting way. I found two sessions particularly engaging.&lt;/p&gt;
&lt;p&gt;The first was from the sports gear manufacturer, Decathlon, which discussed their DevOps implementation and the toolset they put in place for their developer teams. The speaker explained that, within Decathlon, they have a developer community that participates in regular face-to-face meetups on particular technical subjects, and their technology choices are made in conjunction with suggestions from the community.&lt;/p&gt;
&lt;p&gt;The second talk I attended was given by an IT Manager from CMA CGM, one of the world’s top container transportation and shipping companies, who talked about their DevOps transformation. It was quite funny to hear a physical container shipping company talk about software containers, and how DevOps, when not done correctly, can quickly become DevOops!&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/picture1-1574352210141.png&quot; alt=&quot;picture1&quot;&gt;&lt;/p&gt;
&lt;p&gt;Don’t we know it! DevOps requires many enterprises to change a lot of deeply entrenched processes. These changes can be challenging for some, as evidenced in our recent blog &lt;a href=&quot;/blog/devops-and-its-impact-on-project-management&quot;&gt;DevOps and its impact on project management.&lt;/a&gt; Don’t forget to check back and see what new blogs are up on the &lt;a href=&quot;/blog&quot;&gt;HPE Dev site&lt;/a&gt; on this and other interesting topics.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Insights from HPE DEV at Microsoft Ignite 2019 ]]></title><description><![CDATA[picture6 The light shone brightly on HPE DEV at Microsoft Ignite 2019, as we shared knowledge, collaborated, and connected with developers…]]></description><link>https://developer.hpe.com/insights-from-hpe-dev-at-microsoft-ignite-2019/</link><guid isPermaLink="false">https://developer.hpe.com/insights-from-hpe-dev-at-microsoft-ignite-2019/</guid><pubDate>Wed, 20 Nov 2019 17:04:41 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/picture6-1574269899492.png&quot; alt=&quot;picture6&quot;&gt;&lt;/p&gt;
&lt;p&gt;The light shone brightly on HPE DEV at Microsoft Ignite 2019, as we shared knowledge, collaborated, and connected with developers. Our objectives at this show were straightforward – show how to build the best possible experiences with Hewlett Packard Enterprise (HPE) solutions. We did that by:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Demonstrating how HPE development toolsets and frameworks make it easy to build out end-to-end applications&lt;/li&gt;
&lt;li&gt;Exhibiting real-world use cases with practical suggestions and ideas&lt;/li&gt;
&lt;li&gt;Promoting the HPE DEV Community and showcasing how we’re encouraging developer collaboration&lt;/li&gt;
&lt;li&gt;Spotlighting key HPE offerings around IoT and edge computing, Microsoft Azure and Azure Stack&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;HPE DEV supported the HPE tagline &lt;strong&gt;Fear no Cloud&lt;/strong&gt; by demonstrating how attendees could consume Microsoft Azure Cognitive Services at the edge and by demystifying DevOps for use with Azure Stack as part of a hybrid IT infrastructure.&lt;/p&gt;
&lt;p&gt;A live demo showing Face Detection with Azure Cognitive Services was one of the main attractions at the event. The demo offered a great opportunity for attendees to visualize an application architecture built with lightweight containers running in a Kubernetes cluster on a ruggedized HPE Edgeline EL300 server receiving a live feed from a simple, everyday camera. Each attendee was able to interact with the application and find out more about how edge processing works in real-world use cases combining artificial intelligence (AI) with the best HPE solutions for edge computing.&lt;/p&gt;
&lt;p&gt;The HPE DEV demonstration of DevOps workflows with an HPE ProLiant for Microsoft Azure Stack Hub satisfied developer curiosity regarding how hybrid IT can support and extend an application’s capability from edge to cloud with a &lt;strong&gt;Fear No Cloud&lt;/strong&gt; mindset.&lt;/p&gt;
&lt;p&gt;As at previous HPE DEV events, attendees had the opportunity to play the &lt;a href=&quot;https://github.com/HewlettPackard/hpe-hack-shack-attack&quot;&gt;Hack Shack Attack&lt;/a&gt; game, an open source 8-bit arcade game. To add to the fun, as part of a raffle encouraging folks to sign up for the HPE DEV newsletter, we offered a prize each day to one lucky winner.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/picture4-1574269876302.png&quot; alt=&quot;picture4&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Everyone waiting for the results of the daily raffle&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/picture3-1574269854835.png&quot; alt=&quot;picture3&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;And the winner is…&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;There were many announcements during Microsoft Ignite, but the two most significant to HPE were:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Azure Arc:&lt;/strong&gt; Microsoft was among the first of the big cloud vendors to bet big on hybrid deployments. With Azure Arc, the company is taking this a step further. It will let enterprises use Azure to manage their resources across clouds -- including those of competitors, such as AWS and Google Cloud.&lt;/p&gt;
&lt;p&gt;HPE played a key role in the announcement, with the HPE Superdome Flex featured prominently on the announcement stage. HPE will participate in the preview of the product and, although it’s not yet publicly available, customers can sign up for this now. For more information on &lt;a href=&quot;https://azure.microsoft.com/en-us/blog/azure-arc-extending-azure-management-to-any-infrastructure/?ActivityID=NA&amp;#x26;AssetID=NA&amp;#x26;elq2=~~eloqua..type--emailfield..syntax--recipientid..encodeFor--url~~&quot;&gt;Azure Arc, check out this link.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Azure Stack Portfolio:&lt;/strong&gt;
Microsoft also launched a full portfolio of hybrid solutions, spanning from the edge to Azure Stack Hub (previously referred to as Azure Stack).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/picture2-1574269831398.png&quot; alt=&quot;picture2&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://azure.microsoft.com/en-us/blog/expanding-the-azure-stack-portfolio-to-run-hybrid-applications-across-the-cloud-datacenter-and-the-edge/?ActivityID=NA&amp;#x26;AssetID=NA&amp;#x26;elq2=~~eloqua..type--emailfield..syntax--recipientid..encodeFor--url~~&quot;&gt;Click here for specifics on this new portfolio.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The HPE DEV area was the talk of the town within the HPE booth. It had a stronger-than-ever presence given the number of developers and Microsoft MVPs in attendance who wanted to know more about real-world use case examples and to see sample code. Many wanted to sign up for the HPE DEV Community right away and stay plugged into HPE DEV collaborations.&lt;/p&gt;
&lt;p&gt;As the tradeshow floor activities wound down, Microsoft hosted an event at Universal Studios. Like everyone else, HPE DEV joined in the fun.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/picture1-1574269799671.png&quot; alt=&quot;picture1&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Pramod, Vivek, and Emmy join in the celebration&lt;/em&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Running Non-Cloud-Native Apps on Kubernetes with KubeDirector]]></title><description><![CDATA[Editor’s Note – HPE Ezmeral Container Platform is now HPE Ezmeral Runtime Enterprise. For more information on why the name was changed…]]></description><link>https://developer.hpe.com/running-non-cloud-native-apps-on-kubernetes-with-kubedirector/</link><guid isPermaLink="false">https://developer.hpe.com/running-non-cloud-native-apps-on-kubernetes-with-kubedirector/</guid><pubDate>Mon, 18 Nov 2019 17:16:18 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Editor’s Note – HPE Ezmeral Container Platform is now HPE Ezmeral Runtime Enterprise&lt;/strong&gt;. For more information on why the name was changed, please &lt;a href=&quot;https://community.hpe.com/t5/HPE-Ezmeral-Uncut/HPE-Ezmeral-Container-Platform-is-now-HPE-Ezmeral-Runtime/ba-p/7151720#.YW7nOxrMKM8&quot;&gt;click here&lt;/a&gt;.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;This week at &lt;a href=&quot;https://events19.linuxfoundation.org/events/kubecon-cloudnativecon-north-america-2019/&quot;&gt;KubeCon North America,&lt;/a&gt; Hewlett Packard Enterprise (HPE) unveiled the new &lt;a href=&quot;https://www.hpe.com/us/en/newsroom/press-release/2019/11/Hewlett-Packard-Enterprise-introduces-Kubernetes-based-platform-for-bare-metal-and-edge-to-cloud-deployments.html&quot;&gt;HPE Container Platform.&lt;/a&gt; It’s the industry’s first Kubernetes-based software platform designed to run both cloud-native and non-cloud-native applications in containers, enabling true hybrid cloud operations across any location: on-premises, public clouds, and the edge.&lt;/p&gt;
&lt;p&gt;It’s widely acknowledged that open source Kubernetes is the de facto standard for the orchestration of containerized cloud-native applications. However, it’s another thing entirely to use Kubernetes to orchestrate the containerized deployment and management of non-cloud-native monolithic applications as well. How can HPE make such a bold claim?&lt;/p&gt;
&lt;p&gt;Let’s take a peek under the hood at one of the technical innovations in the HPE Container Platform that permits HPE to back up such an aggressive claim.
By way of background, I’m one of the co-founders of BlueData, a software company that HPE acquired just about a year ago. The HPE Container Platform represents the next major release of BlueData’s container-based software platform, combined with integrated persistent container storage from MapR (also recently acquired by HPE), and open source Kubernetes for container orchestration.&lt;/p&gt;
&lt;p&gt;Prior to the acquisition, the BlueData team initiated an Apache open source project called KubeDirector. The initiative was focused on running non-cloud-native, stateful applications on Kubernetes. You can refer to my earlier blog post &lt;a href=&quot;https://kubernetes.io/blog/2018/10/03/kubedirector-the-easy-way-to-run-complex-stateful-applications-on-kubernetes/&quot;&gt;here,&lt;/a&gt; where I described the KubeDirector project. Now, with the HPE Container Platform, KubeDirector has come into its own as a key component of the platform -- delivering on its objective to deploy and manage non-cloud-native applications on Kubernetes.&lt;/p&gt;
&lt;h2&gt;KubeDirector and Non-Cloud-Native Applications&lt;/h2&gt;
&lt;p&gt;Non-cloud-native, monolithic applications (e.g. stateful applications with persistent storage) have a number of specific requirements. For example, they typically require fixed network configuration (i.e. static IP addresses). Kubernetes provides a construct, known as &lt;a href=&quot;/blog/kubedirector-the-easy-way-to-run-complex-stateful-applications-on-kubern&quot;&gt;StatefulSets,&lt;/a&gt; which permits an application to be deployed with stable, unique network identifiers. This means that the IP address of a container will be preserved across pod rescheduling.&lt;/p&gt;
&lt;p&gt;However, another attribute of non-cloud-native applications is that they may store data in /etc or other directories typically located on the root (“/”) file system of the container. When Kubernetes restarts a crashed container, the storage associated with the original instance of the container is lost. This means that any data stored by the stateful application in the /etc directory will be lost.&lt;/p&gt;
&lt;p&gt;Kubernetes uses the &lt;a href=&quot;https://kubernetes.io/docs/concepts/storage/persistent-volumes/&quot;&gt;PersistentVolume&lt;/a&gt; construct to overcome this default behavior and allow the container to store data in a unit of storage that persists beyond the lifespan of a single container. But PersistentVolumes cannot easily be used for the root or “/” file system of a container. Any data written to the root file system will be lost when the container exits.&lt;/p&gt;
&lt;p&gt;By clever use of the Kubernetes &lt;a href=&quot;https://kubernetes.io/docs/concepts/workloads/pods/init-containers/&quot;&gt;Init Container&lt;/a&gt; concept, KubeDirector overcomes this limitation and allows the root file system of a container to persist beyond the life span of the container. In fact, KubeDirector permits the user to specify which of the directories typically located on the root file system (/etc, /bin, etc) need to be preserved. This means stateful applications that write data to their root file systems can now successfully run on Kubernetes.&lt;/p&gt;
&lt;p&gt;Another common attribute of non-cloud-native, stateful applications is that they require careful management to survive node loss, software upgrade, and application cluster expansion and contraction. Common Kubernetes application deployment tools such as Helm or Kubeflow are “client side” only mechanisms. This means that once they are used to deploy the application, they are out of the picture. There is no “server side” component to help the application respond to events such as node (container) loss, software upgrade, or application cluster expansion. In order to support these sorts of operations, an application-specific &lt;a href=&quot;https://kubernetes.io/docs/concepts/extend-kubernetes/operator/&quot;&gt;Operator&lt;/a&gt; needs to be written. But writing an application Operator is not an easy task – it requires deep application domain knowledge as well as familiarity with the Kubernetes custom resource APIs.&lt;/p&gt;
&lt;p&gt;KubeDirector is an application deployment tool as well as a Kubernetes Operator, and it is application-neutral. In other words, the KubeDirector Operator can be used for multiple applications: it can deploy and manage any application using Kubernetes without additional code being written. The term KubeDirectorApp identifies a given application type managed by KubeDirector. A simple YAML file is used to describe the attributes of a given KubeDirectorApp. This YAML file is given to the KubeDirector Operator, which can then deploy the application and manage application cluster-specific operations such as expansion, contraction, upgrade, and node loss.&lt;/p&gt;
&lt;p&gt;The use of the single KubeDirector operator to manage multiple KubeDirectorApps also removes the complex “Day 2” operations required to add a new Operator to a running Kubernetes cluster. No need to grant escalated privileges in order to add a new Operator to support a new application. Just write the YAML file describing the new application and KubeDirector does the rest.&lt;/p&gt;
&lt;p&gt;In short, with KubeDirector, a lot less effort is required to add support for a new application to Kubernetes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;No need to create or modify an application-specific Operator&lt;/li&gt;
&lt;li&gt;No need to rely on or certify an Operator from elsewhere&lt;/li&gt;
&lt;li&gt;No need to register a new Custom Resource Definition and/or update user/group ACLs&lt;/li&gt;
&lt;li&gt;No need to (possibly dramatically) change clients to deal with new schemas&lt;/li&gt;
&lt;li&gt;No added complexity, thanks to an easy mechanism for configuring per-user, per-KubeDirectorApp permissions&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The new HPE Container Platform will be generally available early next year, so stay tuned for more. Many innovative aspects of the HPE Container Platform ease the deployment and management of containerized applications for environments requiring enterprise-grade security, performance, and scalability – running on bare-metal or virtualized infrastructure, from edge to core to cloud. And one thing is for sure: with KubeDirector under the hood, HPE is delivering on its claim to run non-cloud-native, stateful applications on Kubernetes.&lt;/p&gt;
&lt;p&gt;If you’re at &lt;a href=&quot;https://events19.linuxfoundation.org/events/kubecon-cloudnativecon-north-america-2019/&quot;&gt;KubeCon + CloudNativeCon&lt;/a&gt; this week, you can join my presentation on Thursday afternoon with my colleague Joel Baxter to learn more: &lt;a href=&quot;https://kccncna19.sched.com/event/UaaF/kubedirector-deploying-complex-stateful-applications-on-kubernetes-joel-baxter-thomas-phelan-hewlett-packard-enterprise&quot;&gt;KubeDirector - Deploying Complex Stateful Applications on Kubernetes.&lt;/a&gt; You can also stop by the HPE booth (P28) at KubeCon to meet us and ask any questions, and check out the KubeDirector GitHub repo &lt;a href=&quot;https://github.com/bluek8s/kubedirector/&quot;&gt;here.&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[10th anniversary of DevOps Days celebrated in Ghent]]></title><description><![CDATA[picture2 I recently enjoyed attending DevOps Days in Ghent, one of the prettiest cities in Europe. Ghent was especially beautiful during the…]]></description><link>https://developer.hpe.com/10th-anniversary-of-devops-days-celebrated-in-ghent/</link><guid isPermaLink="false">https://developer.hpe.com/10th-anniversary-of-devops-days-celebrated-in-ghent/</guid><pubDate>Thu, 14 Nov 2019 15:56:42 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/picture2-1573748147341.png&quot; alt=&quot;picture2&quot;&gt;&lt;/p&gt;
&lt;p&gt;I recently enjoyed attending DevOps Days in Ghent, one of the prettiest cities in Europe. Ghent was especially beautiful during the two days I was at the event, given the gorgeous weather we experienced. Hewlett Packard Enterprise (HPE) was a Gold Sponsor for this 10th year anniversary celebration of DevOps Days, and Stefan De Schuyter (Belgium presales), Simon Leech, Mario Devargas (PointNext), and I were excited to be able to present the HPE Dev Community to the 500+ attendees who came to Ghent.&lt;/p&gt;
&lt;p&gt;To encourage conversations between attendees and the 25+ vendors at the event, the organizers offered a prize for those who went around to each booth to have a form stamped. Our local HPE host, Stefan, immediately hooked people and brought them into our booth by saying “What does HPE have for you? Stamps, stickers, and APIs.” It was a fun way to start the dialog as we gave each visitor a stamp. We also engaged in conversations about HPE Composable Infrastructure and our DevOps transformation services.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/picture3-1573748200636.png&quot; alt=&quot;picture3&quot;&gt;&lt;/p&gt;
&lt;p&gt;The DevOps Days session format was somewhat unusual. There was a good balance of 15-minute breakout sessions, 5-minute &quot;Ignite&quot; sessions, and so-called OpenSpace sessions where attendees got to choose subjects for an open discussion in which they could both listen and contribute. The Ignite sessions were delivered through automatically advancing slides, and most speakers were just incredible in their ability to convey their thoughts and still keep up with the timing. The event used very creative formats to communicate with the audience and provided me with a lot of good ideas on how I could improve my own presentation skills.&lt;/p&gt;
&lt;p&gt;During our time at the event, we were also very lucky to have &quot;the godfather of DevOps&quot;, Patrick Debois, visit our booth and shake hands with the gremlin, our mascot. I would say that the show was very successful, given everyone’s smiling faces and the whole engaged atmosphere at the event.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/shaking-hands-with-stack-4-1573752028593.png&quot; alt=&quot;shaking hands with stack (4)&quot;&gt;&lt;/p&gt;
&lt;p&gt;In the past decade, DevOps Days has grown from a single event to over 80 events that are now being hosted in different locations around the world. Each technical conference features session topics on software development, IT infrastructure operations, and the intersection between them. They are hosted by local volunteers and deliver a combination of curated talks combined with self-organized open space content focused on areas like automation, testing, security, and organizational culture. Make sure you check the &lt;a href=&quot;https://devopsdays.org/&quot;&gt;DevOps Days website&lt;/a&gt; to find a DevOps Days event close to you…I highly recommend them.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Meet the HPE DEV Team – Blaine Southam and Michael Mattsson]]></title><description><![CDATA[HPE DEV community members hail from a variety of backgrounds. In this edition of our Meet the HPE DEV Team blog, I’m going to introduce you…]]></description><link>https://developer.hpe.com/meet-the-hpe-dev-team-blaine-southam-and-michael-mattsson/</link><guid isPermaLink="false">https://developer.hpe.com/meet-the-hpe-dev-team-blaine-southam-and-michael-mattsson/</guid><pubDate>Wed, 06 Nov 2019 18:16:14 GMT</pubDate><content:encoded>&lt;p&gt;&lt;a href=&quot;https://developer.hpe.com/community&quot;&gt;HPE DEV community&lt;/a&gt; members hail from a variety of backgrounds. In this edition of our &lt;strong&gt;Meet the HPE DEV Team&lt;/strong&gt; blog, I’m going to introduce you to
Blaine Southam and Michael Mattsson.&lt;/p&gt;
&lt;h2&gt;Blaine Southam – Distinguished Technologist by day / ham radio operator by night&lt;/h2&gt;
&lt;p&gt;As a Distinguished Technologist, Blaine is often pulled into numerous projects. Folks rely on him to gather the right people, current processes, and correct technologies required to ensure the success of a new project. He’s worked as the Composable API Ecosystem principal architect, as the designer for multiple HPE OneView plugins, and was also responsible for the HPE OneView Global Dashboard and the HPE HC380 hyperconverged offering. Blaine also led the HPE Synergy / VMware Cloud Foundation integration project, which was recently added as an HPE GreenLake service. As the current HPE Cloud Ecosystem Architect, Blaine is now involved in helping internal and external partners offer their content as-a-Service through the HPE portal.&lt;/p&gt;
&lt;p&gt;Blaine is the type of person who’s often involved in many projects while at work, and with his four kids, he’s kept pretty busy at home as well. When he’s not spending time with his family, Blaine volunteers with the youth group at his church, planning activities and summer camps. Yet, even with all this going on, he’s somehow found the time to get his Amateur Radio License (aka Ham radio). Recently, Blaine was excited to receive a message from a gentleman in Germany who congratulated him on getting his license. If you’d like to connect with Blaine, you can reach him on Twitter &lt;a href=&quot;https://twitter.com/bsoutham&quot;&gt;@bsoutham.&lt;/a&gt; Or, better yet, use his call sign, KE0YAN, and give him a shout out over the radio.&lt;/p&gt;
&lt;h2&gt;Michael Mattsson – Master Technologist and avid gamer&lt;/h2&gt;
&lt;p&gt;As a systems engineer and developer, Michael has had the opportunity to work for a number of different companies on very interesting projects over the years. Starting off in an R&amp;#x26;D group at Volvo, Michael built a lot of custom automation and system management software that garnered the attention of Volvo partner, Proact. Proact offered him a consulting role for their top accounts, where his ability to see the broader picture and his talent for automation led to key open storage projects. This set him up for his next career move, one that literally moved him from Europe to the U.S., with a new employer located in Redwood City, California.&lt;/p&gt;
&lt;p&gt;Although Michael’s seen a lot of change in his career, one thing’s remained constant – his focus on data and storage. An HPE Master Technologist, Michael is currently working as a technical marketing engineer for the HPE storage marketing group. An HPE Nimble Storage expert, he concentrates on container ecosystems and how they relate to storage. I got a chuckle when Michael told me “If I had a cent for every time I spelled Kubernetes, today I would be a millionaire.” Michael attributes his becoming a developer to his early exposure to video games, starting with Pong when he was only four years old. A life-long gamer who’s been through just about every home game console there is, Michael even has his own arcade at home to which he devotes a great deal of time for care and maintenance. You can catch up with Michael on Twitter &lt;a href=&quot;https://twitter.com/datamattsson&quot;&gt;@datamattsson.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Take a look at &lt;a href=&quot;https://www.youtube.com/watch?v=hfY6Ko02yiU&amp;#x26;feature=youtu.be&quot;&gt;Blaine and Michael’s video&lt;/a&gt; to learn a little more about them. And, if you ever have questions on HPE OneView, cloud ecosystems, containers, or HPE Nimble Storage, don’t be afraid to reach out to Blaine or Michael on &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;Slack.&lt;/a&gt; Blaine can be reached at bsoutham and you can address Michael at michaelm.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE Hackathon builds greater competency in IoT and AI ]]></title><description><![CDATA[florian hackathon first image1 In May of this year, the HPE Customer Technology Center (CTC) in Böeblingen, Germany held its first HPE…]]></description><link>https://developer.hpe.com/hpe-hackathon-builds-greater-competency-in-iot-and-ai/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-hackathon-builds-greater-competency-in-iot-and-ai/</guid><pubDate>Wed, 06 Nov 2019 18:12:07 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/florian-hackathon-first-image1-1573154920702.jpg&quot; alt=&quot;florian hackathon first image1&quot;&gt;&lt;/p&gt;
&lt;p&gt;In May of this year, the HPE Customer Technology Center (CTC) in Böblingen, Germany held its first HPE hackathon. The event brought together a group of Hewlett Packard Enterprise (HPE) developers and designers from IoT &amp;#x26; Data Analytics, Presales, Pointnext services, and certain delivery groups who wished to become more knowledgeable about Internet of Things (IoT) and artificial intelligence (AI) implementations. Noticing that the more concrete IoT and AI projects became, the harder it was to find enough proficient resources to handle them, the group planned a hackathon that could help multiply the knowledge base.&lt;/p&gt;
&lt;p&gt;Hackathons can help build proficiency by going beyond PowerPoint presentations and getting down into the finer details to determine how the technology really works. The group locked themselves away for a week to learn more about these topics. To focus their efforts, they defined two projects to work on using HPE technologies; one covered data pipelining and the other covered data analytics.&lt;/p&gt;
&lt;h2&gt;Data Pipeline with HPE OTLink&lt;/h2&gt;
&lt;p&gt;The scope of the first project was to set up a data pipeline, from data acquisition to analytics. The pipeline gathered data from multiple different types of devices, including temperature and humidity sensors, Philips® Hue light bulbs, door sensors, and edge hardware. The data was acquired using HPE Edgeline systems with HPE OTLink software and sent to a dashboard in the Azure cloud using open-source data pipeline services. In a second step, we established bi-directional communication to execute commands from the cloud at the edge.&lt;/p&gt;
&lt;h2&gt;Data Analytics Provisioning and Analytics with HPE BlueData&lt;/h2&gt;
&lt;p&gt;The second project set up a BlueData software stack on HPE Apollo infrastructure with NVIDIA GPUs to provision multiple services from a marketplace, which were then used for further data science work. The team implemented an image recognition service that enables users to recognize different server options, such as fans or hard drives, by video. The image below shows the overall architecture, something that could be applied to a midsized manufacturing customer.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/chart-2-2-1573155663722.jpg&quot; alt=&quot;chart 2 (2)&quot;&gt;&lt;/p&gt;
&lt;p&gt;The CTC group spent a lot of time coding, and enjoyed the opportunity to work with one another and share their knowledge. They felt that it was truly worth the time and effort. Because they found it so successful, they are hoping to hold a similar event the third week in January 2020 and are thinking of inviting select partners to participate.&lt;/p&gt;
&lt;p&gt;If you’re interested in learning more, connect with Florian Doerr at &lt;a href=&quot;mailto:florian.doerr@hpe.com&quot;&gt;florian.doerr@hpe.com&lt;/a&gt; or &lt;a href=&quot;https://twitter.com/florian_doerr&quot;&gt;@florian_doerr.&lt;/a&gt; For more opportunities to work with HPE DEV, connect with the HPE Developer Community on the &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;HPE DEV Slack channel.&lt;/a&gt; And don’t forget to monitor the &lt;a href=&quot;/blog&quot;&gt;HPE DEV blogs&lt;/a&gt; to hear more about what’s going on in the HPE DEV world.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Destination: Agility - Newsletter]]></title><link>https://developer.hpe.com/2019-November-01/</link><guid isPermaLink="false">https://developer.hpe.com/2019-November-01/</guid><pubDate>Fri, 01 Nov 2019 05:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Use Grommet Face Detection and Microsoft Azure Cognitive Services to identify people in pictures]]></title><description><![CDATA[picture1 In this article, I am going to show you how to perform face detection and recognition using the Grommet Face Detection application…]]></description><link>https://developer.hpe.com/use-grommet-face-detection-and-microsoft-azure-cognitive-services-to-ide/</link><guid isPermaLink="false">https://developer.hpe.com/use-grommet-face-detection-and-microsoft-azure-cognitive-services-to-ide/</guid><pubDate>Fri, 01 Nov 2019 02:40:45 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/picture1-1572576650255.png&quot; alt=&quot;picture1&quot;&gt;&lt;/p&gt;
&lt;p&gt;In this article, I am going to show you how to perform face detection and recognition using the Grommet Face Detection application and the Microsoft Azure Face API, which is available through Azure Cognitive Services. This is a fun application we have used at different events to showcase the many uses of Grommet. While fun, it also has practical applications in healthcare, banking, retail, and security industries.&lt;/p&gt;
&lt;p&gt;The front-end face detection application is used to fetch the image from the network camera, send it to Azure Face API, and display the facial recognition response. The Azure Cognitive Services Face API is used to recognize human faces in a picture. When you send a picture from the face detection application to the Azure Face API,  it detects human faces in the image and tells you that, within a certain position, there is a face. It will even identify certain face-related attributes such as gender, estimated age, emotion, head position, facial hair, eyeglasses, and a lot more!&lt;/p&gt;
&lt;h2&gt;Why use the Grommet Face Detection application?&lt;/h2&gt;
&lt;p&gt;I built this face detection/recognition application to show how quickly you can build a responsive web app that interacts with Microsoft Azure Cognitive Services using Grommet, HPE’s open source UI development and design framework. The face-detection scenario leverages the Azure Face API in conjunction with Node.js to extract and recognize the facial expressions and emotions of various people.&lt;/p&gt;
&lt;p&gt;You can use this same methodology to build your own personalized application. For instance, you can design it so that when you walk into the room, the computer recognizes you and automatically sets the lights and music per your preference. There are other use cases as well. Imagine building a natural language interactive robot. You could design it so that it looks at someone’s face and recognizes them when they walk into a store. It could determine that person’s emotions by the way he or she looks. If angry, the robot knows and reacts accordingly. If happy, perhaps it might try to sell that person more items. To show you how this all works, instead of building a complex robot, I built a simple Face Detection Web Application using HPE Grommet that interacts with Microsoft Cognitive Services. You can learn more about Grommet by reading  a previous HPE DEV blog - &lt;a href=&quot;/blog/grommet-the-glue-that-binds-development-and-design&quot;&gt;Grommet – the Glue that Binds Development and Design.&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;What exactly are Microsoft Cognitive Services?&lt;/h2&gt;
&lt;p&gt;Microsoft Cognitive Services allow applications to hear, see, and understand the world just like you would. If you were to pause for a second and think of all the people you know, your family, your friends, your boss and so on, what would come to mind? Are you thinking of their faces? And emotions?&lt;/p&gt;
&lt;p&gt;Microsoft Cognitive Services provides you the power to recognize all of that in your applications. And it’s so simple to use! All you need to do is make a REST API call using your preferred programming language, and you will enable your application to do amazing things. Microsoft Cognitive Services comprises Decision, Vision, Speech, Knowledge, Search, and Language APIs. Within each of these buckets, there are numerous capabilities. In this post, I’m going to focus on how I used the Azure Face API, which is part of the Vision service, to detect, recognize, and analyze the human faces in images.&lt;/p&gt;
&lt;h2&gt;Recognizing  and identifying a human face&lt;/h2&gt;
&lt;p&gt;There are two aspects of facial recognition; the first is recognizing that there is a face within a picture and the second is identifying whose face it is. Cognitive Services allow you to easily recognize faces in a picture. You send a picture, and it tells you that, at a certain rectangular position, there is a face. It will even tell you things about the face, like age, gender, expression, whether eyeglasses are worn, and a lot more!&lt;/p&gt;
&lt;p&gt;To identify the face in the picture using the Azure Face API, you first need to build a library (the dataset of your AI machine learning model) of people with associated faces. To do this, you first need to create a Person group. To that Person group, you add numerous images of people.  For example, I could upload pictures of Hewlett Packard Enterprise (HPE) CEO Antonio Neri to the library, along with other faces. I then assign Antonio’s name to (one or more) pictures that have him in it. Once you have created your database of people, you then train the model. Training the model is an important step for your AI model to make more accurate identifications. After that, input a picture that the program has never seen before. This would be a picture of a person who is already in the library. It could also be a picture with more than one person, where one of the people in the picture is part of our library. Based on matching it against those already in the library, the face detection application will &quot;identify&quot; the person, as in &quot;I think this is Antonio Neri, and my confidence is 99%.&quot;&lt;/p&gt;
&lt;p&gt;I must mention that given the number of steps required, I can’t show every line of code in this article. However, you can download the source code and set it up. The instructions can be found in the &lt;strong&gt;Set up the code&lt;/strong&gt; section of this article.&lt;/p&gt;
&lt;h2&gt;Set up the Face API in Azure&lt;/h2&gt;
&lt;p&gt;Like many Cognitive Services APIs, to use the Face API, you need to provision it for use in the Azure Portal. Log in to the Azure portal, click on the &lt;strong&gt;Create a resource&lt;/strong&gt; link, and choose the option that allows you to create a new instance of the Face API. You will be provided with a Cognitive service API endpoint and subscription keys used for every call to your deployed instance of the Azure Cognitive Services Face API as shown below.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/picture2-1572576983408.png&quot; alt=&quot;picture2&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/picture3-1572577002107.png&quot; alt=&quot;picture3&quot;&gt;&lt;/p&gt;
&lt;p&gt;The application will ask you to define certain parameters. Among other things, you’ll need to specify a pricing tier. For this example, the F0 subscription-level free tier is fine. Once you’ve created a resource, grab the endpoint address and the two keys. I chose to put mine into an environment file called .env inside the face detection source code folder.&lt;/p&gt;
&lt;h2&gt;Authenticate to the Face API&lt;/h2&gt;
&lt;p&gt;Calls to the Face API are simple REST calls. They can be GET/POST/DELETE or any other such common HTTP verb wrapped in a Face API call with your preferred programming language, whether it’s C#, Node.js, cURL, Go, Java, JavaScript, Python, PHP, or Ruby. All the API calls must be authenticated. To authenticate the call, you need to pass in a specific HTTP header called Ocp-Apim-Subscription-Key, and then pass in either of the two keys you previously saved. The reason Microsoft Azure provides two keys is for redundancy. If one key gets compromised, you can choose to use the other while the first one is regenerated.&lt;/p&gt;
&lt;p&gt;I used the fetch method inside my face detection application to make REST API calls. The instructions to download the source code based on Node.js can be found in the &lt;strong&gt;Set up the code&lt;/strong&gt; section of this article.&lt;/p&gt;
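&lt;p&gt;To make the authentication pattern concrete, here is a minimal sketch (not part of the original application source) that verifies your credentials by listing the person groups in your Face API resource. The construction of baseUrl and the environment variable names below are assumptions for illustration only:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;
// Minimal sketch, assuming baseUrl is your Face API endpoint plus /face/v1.0
// and subscriptionKey is one of the two keys saved from the Azure portal.
// The environment variable names used here are hypothetical.
const baseUrl = `${process.env.REACT_APP_FACE_ENDPOINT}/face/v1.0`;
const subscriptionKey = process.env.REACT_APP_SUBSCRIPTION_KEY;

// List the person groups in this Face API resource; a successful
// response confirms that the endpoint and key are valid.
async function listPersonGroups() {
  const res = await fetch(`${baseUrl}/persongroups`, {
    method: &apos;GET&apos;,
    headers: {
      &apos;Ocp-Apim-Subscription-Key&apos;: subscriptionKey,
    },
  }).catch(err =&gt; {
    console.log(&apos;err&apos;, err);
  });
  return res.json();
}

&lt;/code&gt;&lt;/pre&gt;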
&lt;h2&gt;Face detection&lt;/h2&gt;
&lt;p&gt;Detection, as the name suggests, detects human faces in an image. But it can do so much more than that. When you call up the Detect method, you have a choice of requesting multiple details of the recognized faces. Of course, the more details you request, the longer the request takes. In your inputs to the detect method, you can pass in various optional request parameters like FaceID, FaceLandmarks, and FaceAttributes (smile, emotion, age, gender, and so on).&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;
// adding face detect attributes
const addImageParams =
  &apos;returnFaceId=true&amp;#x26;returnFaceLandmarks=false&amp;#x26;returnFaceAttributes=age,gender,smile,facialHair,glasses,emotion,hair,makeup,accessories,headPose&apos;;

// URI for face Detection
const detectUri = `${baseUrl}/detect?${addImageParams}`;
// API call to Detect human faces in an image, return face rectangles, and optionally with faceIds,
// landmarks, and attributes
async function fetchFaceEntries(imageData) {
  const blob = await dataURLtoBlob(imageData);
  const faceDetect = await fetch(detectUri, {
    method: &apos;POST&apos;,
    headers: {
      &apos;Content-Type&apos;: &apos;application/octet-stream&apos;,
      &apos;Ocp-Apim-Subscription-Key&apos;: subscriptionKey,
    },
    body: blob,
  }).catch(err =&gt; {
    console.log(&apos;err&apos;, err);
  });

  return faceDetect;
}

&lt;/code&gt;&lt;/pre&gt;
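&lt;p&gt;The fetchFaceEntries function above relies on a dataURLtoBlob helper that isn’t shown in this article. A minimal browser-side sketch of such a helper (an assumption, not the original implementation from the repo) could look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;
// Sketch, assuming the image data arrives as a data: URL (for example,
// from a canvas). The fetch API can decode data: URLs directly, so the
// result can be returned as a Blob for an application/octet-stream body.
async function dataURLtoBlob(dataUrl) {
  const res = await fetch(dataUrl);
  return res.blob();
}

&lt;/code&gt;&lt;/pre&gt;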
&lt;h2&gt;Lights, camera, action!&lt;/h2&gt;
&lt;p&gt;Now that you have the basics, let me show you the steps required to work with the Face Detection application.&lt;/p&gt;
&lt;h2&gt;Step 1: Create a Person Group&lt;/h2&gt;
&lt;p&gt;Creating a Person Group is a matter of issuing a PUT request to your Face API endpoint URL that looks like this:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;https://{endpoint}/face/v1.0/persongroups/{personGroupId}&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;The personGroupId is simply a user-provided string. Along with such a request, you need to include a request body with the following JSON object:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;{ &quot;name&quot;: personGroupId }&lt;/code&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;
// URI to create a person group mentioned in .env file
const personGroupUri = `${baseUrl}/persongroups/${personGroupName}?`;
// create a person group
async function createPersonGroup() {
  const personGroup = await fetch(personGroupUri, {
    method: &apos;PUT&apos;,
    headers: {
      &apos;Content-Type&apos;: &apos;application/json&apos;,
      &apos;Ocp-Apim-Subscription-Key&apos;: subscriptionKey,
    },
    body: JSON.stringify({
      name: personGroupName,
    }),
  }).catch(err =&gt; {
    console.log(&apos;err&apos;, err);
  });

const pGroup = await personGroup.json();
  return pGroup;
}

&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Step 2: Create a Person&lt;/h2&gt;
&lt;p&gt;Once you’ve created a Person Group, the next step is to add people into that Person Group. This is a simple POST request to your Face API endpoint:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;https://{endpoint}/face/v1.0/persongroups/{personGroupId}/persons&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Note that the request URL includes the personGroupId. This is how you tell the Face API which Person Group a Person belongs in. Also, you need to specify the name of the person you’re adding as a JSON object that looks like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;	{ &quot;name&quot;: personName}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The name is the display name of the target person. The REST API call returns the personId that was created. You will need this personId to associate a face with the person in the next step.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;
// URI to create a new person in a specified person group
const personUri = `${baseUrl}/persongroups/${personGroupName}/persons`;
// create a new person in a specified person group
async function createPerson(personName) {
  const person = await fetch(personUri, {
    method: &apos;POST&apos;,
    headers: {
      &apos;Content-Type&apos;: &apos;application/json&apos;,
      &apos;Ocp-Apim-Subscription-Key&apos;: subscriptionKey,
    },
    body: JSON.stringify({
      name: personName,
    }),
  }).catch(err =&gt; {
    console.log(&apos;err&apos;, err);
  });
  const personResponse = await person.json();
  return personResponse;
}

&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Step 3: Add a Person Face&lt;/h2&gt;
&lt;p&gt;Adding a Person Face (an image that contains the face of the person) is another REST call. This time it’s a POST request to the following URL:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;https://{endpoint}/face/v1.0/persongroups/{personGroupId}/persons/{personId}/persistedFaces&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;As you can see from the URL, you’re posting to the /persistedFaces URL for the given person in the given person group. The next question is: How do you specify the actual file contents? There are two ways:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Either you can specify the content-type header to be application/json, and send the following JSON object in the body of the POST request:
&lt;code&gt;{ &quot;url&quot;: &quot;url_to_the_image&quot; }&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Or you can specify the content-type header to be application/octet-stream and send the contents of the image.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;
// API call to add the current face image to a specific person
async function addImage(imageSrc, personId) {
  const addImageUrl = `${baseUrl}/persongroups/${personGroupName}/persons/${personId}/persistedFaces?`;
  const buff = await dataUriToBuffer(imageSrc);
  await fetch(`${addImageUrl}${addImageParams}`, {
    method: &apos;POST&apos;,
    body: buff,
    headers: {
      &apos;Content-Type&apos;: &apos;application/octet-stream&apos;,
      &apos;Ocp-Apim-Subscription-Key&apos;: subscriptionKey,
    },
    credentials: &apos;same-origin&apos;,
  });
}

&lt;/code&gt;&lt;/pre&gt;
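&lt;p&gt;The addImage function above uses the second option and sends the raw image bytes. If your image is already reachable at a URL, a sketch of the first option could look like the following; addImageByUrl is a hypothetical helper name, not part of the original source:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;
// Sketch (assumption): add a face to a person by passing an image URL
// in a JSON body, using the application/json content type.
async function addImageByUrl(imageUrl, personId) {
  const addFaceUrl = `${baseUrl}/persongroups/${personGroupName}/persons/${personId}/persistedFaces`;
  await fetch(addFaceUrl, {
    method: &apos;POST&apos;,
    headers: {
      &apos;Content-Type&apos;: &apos;application/json&apos;,
      &apos;Ocp-Apim-Subscription-Key&apos;: subscriptionKey,
    },
    body: JSON.stringify({ url: imageUrl }),
  }).catch(err =&gt; {
    console.log(&apos;err&apos;, err);
  });
}

&lt;/code&gt;&lt;/pre&gt;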
&lt;h2&gt;Step 4: Train the Model&lt;/h2&gt;
&lt;p&gt;Once you’ve created the Person Group, added Persons, and added Faces to the Persons, you need to train the person group before you can start asking Face API to identify people. Training the Person Group is yet another REST call. You simply issue a POST request to the following URL specifying the personGroupId of the person group you want to train. The subscription key must be passed in the request header (not shown here) as for any other Face API call:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;https://{endpoint}/face/v1.0/persongroups/{personGroupId}/train&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;The training task is asynchronous. Training time depends on the number of person entries and faces in the person group, and it could take anywhere from several seconds to a few minutes. To check the training status, issue a GET request to the following URL:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;https://{endpoint}/face/v1.0/persongroups/{personGroupId}/training&lt;/code&gt;&lt;/p&gt;
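&lt;p&gt;Since this step doesn’t have a code listing in the article, here is a hedged sketch of what the training call and the status check might look like, following the same fetch pattern used elsewhere in the application; trainPersonGroup and getTrainingStatus are hypothetical helper names:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;
// Sketch (assumption): start training for the person group.
async function trainPersonGroup() {
  await fetch(`${baseUrl}/persongroups/${personGroupName}/train`, {
    method: &apos;POST&apos;,
    headers: {
      &apos;Ocp-Apim-Subscription-Key&apos;: subscriptionKey,
    },
  }).catch(err =&gt; {
    console.log(&apos;err&apos;, err);
  });
}

// Sketch (assumption): check the asynchronous training status; poll until
// the returned status is no longer &quot;running&quot;.
async function getTrainingStatus() {
  const res = await fetch(`${baseUrl}/persongroups/${personGroupName}/training`, {
    method: &apos;GET&apos;,
    headers: {
      &apos;Ocp-Apim-Subscription-Key&apos;: subscriptionKey,
    },
  });
  return res.json();
}

&lt;/code&gt;&lt;/pre&gt;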
&lt;p&gt;Congratulations! With your model complete, now you can send an input picture and have the Face API identify the person.&lt;/p&gt;
&lt;p&gt;In my example, I called the appropriate methods to create a Person named “Pramod” in the Person Group called “hpe”, added that person’s face to the Person Group, and trained the model.&lt;/p&gt;
&lt;h2&gt;Step 5: Detect or recognize the person&lt;/h2&gt;
&lt;p&gt;Remember, the way recognition works is that you must first detect that there is a face in the picture. You send the picture to the Face API, and the Face API detect method returns an identifier (the FaceID) for each detected face. This identifier is temporary and it’s only good for 48 hours. However, Person IDs are permanent.&lt;/p&gt;
&lt;p&gt;You send the picture to Face API by issuing a POST request to the following URL:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;https://{endpoint}/face/v1.0/detect[?returnFaceId][&amp;#x26;returnFaceLandmarks][&amp;#x26;returnFaceAttributes][&amp;#x26;recognitionModel][&amp;#x26;returnRecognitionModel][&amp;#x26;detectionModel]&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Also, you must include the following body:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;{
    &quot;url&quot;: &quot;http://example.com/1.jpg&quot; // URL of input image
}

&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;// adding face detect attributes
const addImageParams =
  &apos;returnFaceId=true&amp;#x26;returnFaceLandmarks=false&amp;#x26;returnFaceAttributes=age,gender,smile,facialHair,glasses,emotion,hair,makeup,accessories,headPose&apos;;

// URI for face Detection
const detectUri = `${baseUrl}/detect?${addImageParams}`;
// API call to Detect human faces in an image, return face rectangles, and optionally with faceIds,
// landmarks, and attributes
async function fetchFaceEntries(imageData) {
  const blob = await dataURLtoBlob(imageData);
  const faceDetect = await fetch(detectUri, {
    method: &apos;POST&apos;,
    headers: {
      &apos;Content-Type&apos;: &apos;application/octet-stream&apos;,
      &apos;Ocp-Apim-Subscription-Key&apos;: subscriptionKey,
    },
    body: blob,
  }).catch(err =&gt; {
    console.log(&apos;err&apos;, err);
  });
  return faceDetect;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Step 6: Identify the person&lt;/h2&gt;
&lt;p&gt;Once you have the Face ID, you can identify the person from the person group with personGroupId. You do so by issuing a POST request to the following URL:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;https://{endpoint}/face/v1.0/identify&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Also, you must include the following body:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;{
 &apos;personGroupId&apos;: personGroupId,
 &quot;faceIds&quot;: [faceId],
 &quot;maxNumOfCandidatesReturned&quot;: 1,
 &quot;confidenceThreshold&quot;: 0.5
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note that, in the body, you specify the number of candidates you’d like to have in the result set and the minimum confidence threshold of recognition that should be met. Also, you pass in the Face ID.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;// URI for face Identify
const identifyUri = `${baseUrl}/identify?`;
// Call this after detecting faces, with face IDs,
// to identify the faces from the people group
async function identifyFaceFromGroup(faceIdsArray, personGroupId) {
  const res = await fetch(identifyUri, {
    method: &apos;POST&apos;,
    body: JSON.stringify({
      faceIds: faceIdsArray,
      personGroupId,
    }),
    headers: {
      &apos;Content-Type&apos;: &apos;application/json&apos;,
      &apos;Ocp-Apim-Subscription-Key&apos;: subscriptionKey,
    },
    credentials: &apos;same-origin&apos;,
  }).catch(err =&gt; {
    console.log(&apos;err&apos;, err);
  });
  return res;
}

&lt;/code&gt;&lt;/pre&gt;
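&lt;p&gt;The identify response returns candidate personIds and confidence scores rather than names, so to produce a result like “I think this is Antonio Neri” you still need to look each personId up in the person group. A sketch of that lookup (getPersonName is a hypothetical helper, not part of the original source) might be:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-javascript&quot;&gt;
// Sketch (assumption): resolve a candidate personId to the display name
// that was assigned when the person was created in Step 2.
async function getPersonName(personId, personGroupId) {
  const res = await fetch(
    `${baseUrl}/persongroups/${personGroupId}/persons/${personId}`,
    {
      method: &apos;GET&apos;,
      headers: {
        &apos;Ocp-Apim-Subscription-Key&apos;: subscriptionKey,
      },
    },
  ).catch(err =&gt; {
    console.log(&apos;err&apos;, err);
  });
  const person = await res.json();
  return person.name;
}

&lt;/code&gt;&lt;/pre&gt;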
&lt;h2&gt;The Face Detection system&lt;/h2&gt;
&lt;p&gt;In my trials using the Face Detection application, I implemented the setup below. It consists of an HPE Edgeline server on which I created a Kubernetes cluster and deployed the front-end Face Detection application. It is backed by a camera application used to fetch the images from the network cameras and send them to the Azure Face API for facial recognition.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; You can clone the repo and run it on your laptop with Node.js. Also, you can send any image URL as input to the Azure Face API. I chose to get the images from network cameras and deployed the application on an HPE Edgeline server.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/picture6-1572577698549.png&quot; alt=&quot;picture6&quot;&gt;&lt;/p&gt;
&lt;p&gt;In this example, users log in to the face detection application running on the EL300 Kubernetes cluster. As soon as the user logs in, the face detection app fetches the current image from the network camera and loads the screen below.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/picture7-1572577723410.png&quot; alt=&quot;picture7&quot;&gt;&lt;/p&gt;
&lt;p&gt;Once you click on the Detect Face button, the image will be sent to Azure Face API and you will get the below result with rectangles covering the faces along with data on facial attributes.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/picture9-1572577749047.png&quot; alt=&quot;picture9&quot;&gt;&lt;/p&gt;
&lt;p&gt;Since I added only one image with the name “Pramod” to the face detection library, Azure Face API was able to predict the name with 69% confidence and the rest of the faces came up as unknown with other facial attributes like gender, age, hair, emotions and so on.&lt;/p&gt;
&lt;h2&gt;Set up the code&lt;/h2&gt;
&lt;p&gt;As you can see, there are a lot of steps, and I wasn’t able to explain every single detail in this article. However, you can get the full source code for this and set it up on your computer easily. Just follow these instructions:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Ensure that you have Node.js version 8.x or newer installed, as well as npm 6 or later.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Clone this GitHub repository &lt;a href=&quot;https://github.com/reddypramod85/facedetection&quot;&gt;https://github.com/reddypramod85/facedetection&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Provision an instance of the Face API in Azure, as explained earlier in this article.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Get the endpoint and the two subscription keys for the provisioned instance of Face API.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a .env file and place one of the keys, the PersonGroupName (e.g., “hpe”), and the image URL (e.g., &lt;code&gt;&quot;http://localhost:5000/images/oscongroup.png&quot;&lt;/code&gt;) in the .env file as shown below:&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;REACT_APP_SUBSCRIPTION_KEY=&quot;Azure Face API subscription key&quot;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;REACT_APP_PERSON_GROUP_NAME=hpe&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;REACT_APP_CAMERA_IMAGE_URL=&lt;code&gt;&quot;http://localhost:5000/images/oscongroup.png&quot;&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;ol start=&quot;6&quot;&gt;
&lt;li&gt;Run &lt;code&gt;npm install&lt;/code&gt; to install the dependencies for the Grommet face detection application.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Next steps&lt;/h2&gt;
&lt;p&gt;There are many use cases where face detection is helpful. It can help in retail environments. For instance, imagine that you walk into a store, and a camera instantly recognizes you and sends you discount codes on your phone that you must use in the next 20 minutes. The possibilities are really endless, and this is only one of the many capabilities of Microsoft Cognitive Services. I hope to cover some of these other features in future articles. Keep a lookout on the &lt;a href=&quot;/blog&quot;&gt;HPE DEV blog site&lt;/a&gt; for more articles on this subject.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Grace Hopper Celebration spotlights women technologists]]></title><description><![CDATA[brittany ghc 2 I recently had the opportunity to attend my first Grace Hopper Celebration (GHC), the world’s largest gathering of women…]]></description><link>https://developer.hpe.com/grace-hopper-celebration-spotlights-women-technologists/</link><guid isPermaLink="false">https://developer.hpe.com/grace-hopper-celebration-spotlights-women-technologists/</guid><pubDate>Wed, 30 Oct 2019 20:53:14 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/brittany-ghc-2-1572897237615.png&quot; alt=&quot;brittany ghc 2&quot;&gt;&lt;/p&gt;
&lt;p&gt;I recently had the opportunity to attend my first Grace Hopper Celebration (GHC), the world’s largest gathering of women technologists. Co-founded by Dr. Anita Borg and Dr. Telle Whitney in 1994 and inspired by the legacy of Admiral Grace Murray Hopper, the event focuses on the research and career interests of women in computing. The experience was truly inspiring. Being surrounded by 25,000 women technologists was incredibly empowering and motivating, and for once, I felt as though I wasn’t in the minority!&lt;/p&gt;
&lt;p&gt;Over the course of the 3-day conference, I attended workshops, keynotes, panel discussions, and a career fair. Two keynote sessions book-ended the celebration, serving as the kickoff and closing of the conference. I felt the energy as soon as I walked into the hall. The vibe was so uplifting with music, dancing, and cheering. Women were also shouting #We will statements, like:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/picture2-1572469485663.png&quot; alt=&quot;picture2&quot;&gt;&lt;/p&gt;
&lt;p&gt;#We will challenge our own goals&lt;/p&gt;
&lt;p&gt;#We will challenge the status quo&lt;/p&gt;
&lt;p&gt;#We will increase the visibility of women in tech&lt;/p&gt;
&lt;p&gt;#We will make Katherine Johnson as well-known as Neil Armstrong&lt;/p&gt;
&lt;p&gt;#We will celebrate diversity&lt;/p&gt;
&lt;p&gt;The first keynote featured numerous speakers and started off with Brenda Darden Wilkerson, the CEO of AnitaB.org, the organization producing this event. As an advocate for diversity, Brenda emphasized, “It is my mission to achieve 50/50 tech equity by 2025.”&lt;/p&gt;
&lt;p&gt;Another speaker, Ana Roca Castro, spoke about igniting the genius in children. Ana started &lt;a href=&quot;https://www.geniusplaza.com/en/&quot;&gt;Genius Plaza&lt;/a&gt; to ensure every child received the best education possible, no matter where that child was born or raised. Genius Plaza provides access to personalized learning content for children in arts, math, and science. Ana added to our #we will statements by exclaiming “#We will ignite the genius in every child!”&lt;/p&gt;
&lt;p&gt;Another inspiring woman at the conference was Dr. Fei-Fei Li. She is the inventor of &lt;a href=&quot;http://www.image-net.org/&quot;&gt;ImageNet,&lt;/a&gt; a large visual database designed for visual object recognition software research. “ImageNet is an image database organized according to the &lt;a href=&quot;https://wordnet.princeton.edu/&quot;&gt;WordNet&lt;/a&gt; hierarchy in which each node of the hierarchy is depicted by hundreds and thousands of images.”&lt;/p&gt;
&lt;p&gt;When Dr. Li helped design ImageNet in 2006, some of her colleagues tried to dissuade her by telling her the project was foolish. Yet, now the database consists of over 14 million images. The work behind ImageNet is widely considered foundational to modern machine learning for computer vision. One thing Dr. Li said really stuck with me. She said, “It’s okay to feel small sometimes. But together, we can be big enough to accomplish anything.”&lt;/p&gt;
&lt;p&gt;These were just a couple of the inspiring talks during the welcoming keynote. I would encourage everyone to listen to the recording of this &lt;a href=&quot;https://ghc.anitab.org/&quot;&gt;session.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;With over 400 sessions held during the conference, it was very difficult to choose which ones to attend. Each one I attended was very good. Some that I found the most interesting included:&lt;/p&gt;
&lt;h2&gt;Building a (better) Open Source Community&lt;/h2&gt;
&lt;p&gt;Led by Lisa Tagliaferri – Manager of Developer Education at DigitalOcean&lt;/p&gt;
&lt;p&gt;This session provided insights on how people could maintain their open source projects.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Lisa covered gender and diversity as it pertained to major technology companies, as well as how it plays out in open source projects.&lt;/li&gt;
&lt;li&gt;She also offered tips and resources for users on open source projects, including tutorials on how to get started and troubleshooting guides.&lt;/li&gt;
&lt;li&gt;She explored how open source projects could do more to promote inclusivity and diversity.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/picture3-1572469470326.png&quot; alt=&quot;picture3&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Branching Out: GitHub and Core skills for Contributions to Open Source&lt;/h2&gt;
&lt;p&gt;Led by Lily Sturmann, Parul Singh – Software Engineers at Red Hat, Inc.&lt;/p&gt;
&lt;p&gt;This session went into detail about forking, cloning, and other basics of contributing code.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;In this workshop, Lily and Parul walked attendees through creating a pull request on an open source project; a minimal sketch of that workflow appears after this list. (Lucky for me, I do this almost every day, so I was able to assist the person next to me.)&lt;/li&gt;
&lt;li&gt;This session provided a great way to network with the people working at the same table.&lt;/li&gt;
&lt;/ul&gt;
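&lt;p&gt;For anyone who wants to try the contribution workflow covered in this session, here is a minimal, illustrative sketch of the usual fork, clone, branch, and pull-request cycle (the repository, organization, and branch names are placeholders, not taken from the workshop):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# 1. Fork the project on GitHub (via the web UI), then clone your fork
git clone https://github.com/&amp;#x3C;your-username&gt;/&amp;#x3C;project&gt;.git
cd &amp;#x3C;project&gt;

# 2. Keep a reference to the original repository so you can stay in sync
git remote add upstream https://github.com/&amp;#x3C;original-org&gt;/&amp;#x3C;project&gt;.git

# 3. Create a topic branch for your change
git checkout -b my-first-contribution

# 4. Commit your work and push the branch to your fork
git add .
git commit -m &quot;Describe the change&quot;
git push origin my-first-contribution

# 5. On GitHub, open a pull request from your branch against the upstream project
&lt;/code&gt;&lt;/pre&gt;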
&lt;h2&gt;How to develop Creativity and Need-driven Production? Using Tech-Fashion as an Example&lt;/h2&gt;
&lt;p&gt;Led by Kitty Yeung – Creative Technologist at Microsoft&lt;/p&gt;
&lt;p&gt;This session focused on incorporating technology into fashion.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Kitty explained how she was able to create a flower that could be worn by a mother and her daughter. If the daughter started to wander off too far, the flower would vibrate to let the mother know her daughter was out of range. She went into a little bit of detail explaining how she incorporated circuits into this flower.&lt;/li&gt;
&lt;li&gt;Kitty gave a similar talk at Hackaday Supercon, which you can view &lt;a href=&quot;https://www.youtube.com/watch?v=KTL_1zz_cRc&quot;&gt;here&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Virtual Reality for Brain Surgeries: Enhancing Visuality &amp;#x26; Improving Patient Outcomes&lt;/h2&gt;
&lt;p&gt;Led by Prachi Shah – Software Engineer, Verily Life Sciences, Google&lt;/p&gt;
&lt;p&gt;This session centered on an amazing surgical application that is helping transform how we look at MRIs and CT scans: going from 2D images to 3D-printed models, and then to virtual reality environments that help surgeons prepare for operations.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Prachi explained that doctors currently use 2D images to plan 3D surgeries, and that so much more could be done using technology.&lt;/li&gt;
&lt;li&gt;She explained how technologists are now using 3D modeling pulled from CT and MRI scans to print 3D models of the brain. These can be used for physical planning as well as patient education. Taking it a step further, she described the opportunity to provide VR visualizations that let neurosurgeons rehearse operating on a patient’s brain.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/picture4-1572469494011.png&quot; alt=&quot;picture4&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Empowering Women to Improve Women’s Health through Tech, Education &amp;#x26; Engagement&lt;/h2&gt;
&lt;p&gt;Led by Nimmi Ramanujam – Professor of Engineering &amp;#x26; Director of the Center for Global Women’s Health Technologies, Duke University&lt;/p&gt;
&lt;p&gt;The goal of this session was to promote the development of technology that can profoundly impact women’s health.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Nimmi is passionate about empowering women to know their own health and discussed how she wanted to develop a low-cost technology that could help women and doctors detect and help prevent cervical cancer.&lt;/li&gt;
&lt;li&gt;I really recommend watching her &lt;a href=&quot;https://www.youtube.com/watch?v=LePaY_Ms6_o&quot;&gt;Social Impact Abie Award Winner Video.&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Edge AI with Raspberry Pi&lt;/h2&gt;
&lt;p&gt;Led by Penny Anderson, Director of Engineering, MathWorks&lt;/p&gt;
&lt;p&gt;In this workshop, we developed a deep-learning application which detects objects in an image. We then deployed and ran the application on a Raspberry Pi board.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;During the workshop, we divided into groups and explored the process of how the code detects objects and determines the age of a person in a specific image.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/picture5-1572469501148.png&quot; alt=&quot;picture5&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Empower the Next Generation: Supporting K-12 Education as an Industry Professional&lt;/h2&gt;
&lt;p&gt;Led by Amy Liu and Crystal Hsieh – Software Engineers at LinkedIn&lt;/p&gt;
&lt;p&gt;This session explored the lack of computer science teachers in schools today and proposed potential solutions to the problem. The speakers provided resources, like LinkedIn’s &lt;a href=&quot;https://github.com/linkedin/high-school-trainee&quot;&gt;High School Trainee Program,&lt;/a&gt; which is a great open source tool that helps you get started by providing program materials.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/picture6-1572469513141-no-white-border-1572906184377.jpg&quot; alt=&quot;picture6 1572469513141 no white border&quot;&gt;&lt;/p&gt;
&lt;p&gt;During my time at the conference, I also learned some statistics about women in the technology workforce, including the fact that only &lt;strong&gt;2% of the computing workforce was made up of Hispanic women.&lt;/strong&gt; I felt honored to be able to represent that segment and attend the Celebrating Latinas in Technical Roles Reception which was held at the event.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/imageedit_2_6801947141-1572579297293.png&quot; alt=&quot;imageedit_2_6801947141&quot;&gt;&lt;/p&gt;
&lt;p&gt;After three days of enjoying the incredibly inspiring stories about different women’s journeys in technology, I could think of no better way to end the conference than the closing keynote, which featured more Abie Award speakers, including Dr. Vivienne Ming, co-founder of Socos Labs, and Nonny De la Peña, the founder and CEO of Emblematic Group.&lt;/p&gt;
&lt;p&gt;Dr. Vivienne Ming explained how she enjoyed using AI to think in ways no one else has ever done before. She also explained that courage is something that you practice and share with others. Explore more of her talks &lt;a href=&quot;https://www.youtube.com/watch?v=1lpGcWxDv98&quot;&gt;here.&lt;/a&gt; Nonny De la Peña, a tele-immersive journalist, discussed how she got involved in this new industry. She started recording audio in a food bank when a man ended up going into a diabetic coma. The recording showed the real power of this type of journalism. Nonny continues to create videos in which people can use virtual reality to put themselves inside a story and feel what it is like to be somewhere without actually being physically present. Some of her talks can be found in these &lt;a href=&quot;https://www.ted.com/talks/nonny_de_la_pena_the_future_of_news_virtual_reality?language=en&quot;&gt;TED Talks.&lt;/a&gt; This final keynote was truly inspiring, and you can view it &lt;a href=&quot;https://ghc.anitab.org/&quot;&gt;here.&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;HPE Women&lt;/h2&gt;
&lt;p&gt;Finally, I benefitted from the chance to meet and get to know some of the other inspiring women who also work at HPE. I loved hearing stories from women around the world about their journey and path into the tech industry. I was overwhelmed by the fact that women I had just met reached out to let me know they were proud of me. Each of our stories and journeys was remarkable and the conference provided such a great opportunity to share them with one another.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/picture8-1572469528275.png&quot; alt=&quot;picture8&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/picture9-1572469543131.png&quot; alt=&quot;picture9&quot;&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Contributors enhance Grommet during Hacktoberfest]]></title><description><![CDATA[picture2 Because of Hacktoberfest, the Grommet GitHub repository has been as lively as a Biergarten in Germany these past few weeks. For…]]></description><link>https://developer.hpe.com/contributors-enhance-grommet-during-hacktoberfest/</link><guid isPermaLink="false">https://developer.hpe.com/contributors-enhance-grommet-during-hacktoberfest/</guid><pubDate>Mon, 28 Oct 2019 22:31:02 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/picture2-1572302279782.png&quot; alt=&quot;picture2&quot;&gt;&lt;/p&gt;
&lt;p&gt;Because of Hacktoberfest, the &lt;a href=&quot;https://github.com/grommet/grommet&quot;&gt;Grommet GitHub repository&lt;/a&gt; has been as lively as a Biergarten in Germany these past few weeks. For those of you who don’t know, Hacktoberfest is a month-long celebration aimed at encouraging coders of all backgrounds and skill levels worldwide to contribute to open source software. In only the first sixteen days, coders from all over submitted 54 commits to the Grommet code base. With the help of seventeen Hacktoberfest rockstar contributors, we got to a point where 93% of our storybook examples are React hook-friendly and more than a dozen Grommet components had been refactored to use hooks!&lt;/p&gt;
&lt;p&gt;The issue that had the most traffic was Refactoring components and stories to use hooks. The ability to use hooks has recently been added to React and is proving to be very popular because it solves a number of issues. Hooks are functions that let you “hook into” React state and lifecycle features from function components. Hooks don’t work inside classes — they let you use React without classes. This issue alone attracted more than 50 pull requests (PRs) and helps keep Grommet at the cutting edge of React development.&lt;/p&gt;
&lt;p&gt;Developers also tackled issues of &lt;a href=&quot;/blog/using-typescript-in-grommet-applications&quot;&gt;TypeScript,&lt;/a&gt; layout, and the implementation of best practices and standards. Some were bug fixes, but many more were enhancements to the code. You can find all the issues on the &lt;a href=&quot;https://github.com/grommet/grommet/issues?utf8=%E2%9C%93&amp;#x26;q=+label%3Ahacktoberfest&quot;&gt;Grommet Hacktoberfest repository.&lt;/a&gt; As the HPE Experience Studio senior engineer and open source developer, I couldn’t be happier with the engagement we received around this October event run by &lt;a href=&quot;https://www.digitalocean.com/&quot;&gt;DigitalOcean&lt;/a&gt; and &lt;a href=&quot;https://dev.to/&quot;&gt;DEV design.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;For those of you unfamiliar with &lt;a href=&quot;https://v2.grommet.io/&quot;&gt;Grommet,&lt;/a&gt; it is a React-based library of reusable UI components that help developers and designers create web applications. This open source UI development and design tool simplifies the way web applications are built by providing a package of commonly used interface elements from which developers and designers can choose to use. To keep up to date with everything that’s going on with Grommet, make sure you connect with us on &lt;a href=&quot;https://grommet.slack.com/&quot;&gt;Slack.&lt;/a&gt;  Or, check out the HPE DEV portal to learn more about the &lt;a href=&quot;https://developer.hpe.com/platform/grommet/home&quot;&gt;Grommet platform.&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Mingle with Microsoft experts at Microsoft Ignite]]></title><description><![CDATA[screen shot 2019 10 23 at 10.14.35 am HPE DEV is gearing up for Microsoft Ignite 2019, an annual conference for developers and IT…]]></description><link>https://developer.hpe.com/mingle-with-microsoft-experts-at-microsoft-ignite/</link><guid isPermaLink="false">https://developer.hpe.com/mingle-with-microsoft-experts-at-microsoft-ignite/</guid><pubDate>Wed, 23 Oct 2019 15:45:39 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/10/screen-shot-2019-10-23-at-101435-am-1571847321995.png&quot; alt=&quot;screen shot 2019 10 23 at 10.14.35 am&quot;&gt;&lt;/p&gt;
&lt;p&gt;HPE DEV is gearing up for &lt;a href=&quot;https://www.microsoft.com/en-us/ignite&quot;&gt;Microsoft Ignite 2019&lt;/a&gt;, an annual conference for developers and IT professionals hosted by Microsoft where Hewlett Packard Enterprise (HPE) will be showcasing its capabilities around hybrid cloud solutions. Under the tagline &lt;a href=&quot;https://www.hpe.com/us/en/alliance/microsoft/ignite2019.html&quot;&gt;Fear No Cloud with HPE at Microsoft Ignite 2019,&lt;/a&gt; HPE will demonstrate how to enable the consumption of Microsoft Azure as-a-Service with HPE GreenLake, unlock insights from data with SQL Server 2019, and run Azure services at the Edge with HPE Edgeline Converged Edge Systems. As evidenced by the &lt;a href=&quot;https://myignite.techcommunity.microsoft.com/sessions&quot;&gt;schedule,&lt;/a&gt; many exciting sessions are planned for this November 4-8 event in Orlando, Florida.&lt;/p&gt;
&lt;p&gt;Microsoft Ignite 2019 attendees will have the opportunity to learn from 50+ HPE Microsoft experts with over 750 years of combined Microsoft experience; &lt;a href=&quot;https://www.hpe.com/us/en/alliance/microsoft/ignite2019.html#booth&quot;&gt;attend 60+ booth theater sessions delivered&lt;/a&gt; by HPE, Microsoft, and partner experts; and explore &lt;a href=&quot;https://www.hpe.com/us/en/alliance/microsoft/ignite2019.html#demos&quot;&gt;Hybrid Cloud, Intelligent Edge and SQL solutions demos.&lt;/a&gt; All this can be found in booth #2549 in the partner showcase. Customers and partners will also be able to engage with HPE experts in two HPE Microsoft Ignite breakout sessions held at the HUB showroom, theater number 7, Tuesday, November 5th. For specifics on dozens of booth sessions, technical breakout sessions, workshops, and partner panel luncheons, visit the &lt;a href=&quot;https://www.hpe.com/us/en/alliance/microsoft/ignite2019.html&quot;&gt;HPE/Microsoft Alliance page&lt;/a&gt; and download the session flyer.&lt;/p&gt;
&lt;p&gt;Numerous opportunities will be available for attendees to enjoy food and drink, and mingle with industry experts to discover answers to today’s biggest business challenges. You’re invited to attend the Edge and Cloud luncheon panel sessions and HPE Microsoft Ignite Expert Happy Hours. Make sure you stop by HPE booth #2549 to learn more about HPE/Microsoft solutions, as well as other key DevOps contributions and the value offered through the &lt;a href=&quot;https://developer.hpe.com/community&quot;&gt;HPE DEV community.&lt;/a&gt; Hope to see you at the event!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE DEV heads to KubeCon + CloudNativeCon, San Diego]]></title><description><![CDATA[kubecon2 HPE DEV is gearing up for KubeCon | CloudNativeCon North America, a major open source developer conference focused on furthering…]]></description><link>https://developer.hpe.com/hpe-dev-heads-to-kubecon-cloudnativecon-san-diego/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-dev-heads-to-kubecon-cloudnativecon-san-diego/</guid><pubDate>Wed, 09 Oct 2019 15:45:53 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/kubecon2-1570637685840.png&quot; alt=&quot;kubecon2&quot;&gt;&lt;/p&gt;
&lt;p&gt;HPE DEV is gearing up for &lt;a href=&quot;https://events19.linuxfoundation.org/events/kubecon-cloudnativecon-north-america-2019/&quot;&gt;KubeCon | CloudNativeCon North America,&lt;/a&gt; a major open source developer conference focused on furthering the education and advancement of Kubernetes and cloud native computing. As evidenced by the &lt;a href=&quot;https://kccncna19.sched.com/&quot;&gt;schedule,&lt;/a&gt; a ton of exciting sessions are planned for this November 18-21 event in San Diego, California.&lt;/p&gt;
&lt;p&gt;A cross-organizational team from Hewlett Packard Enterprise (HPE) will staff the booth (P28) aimed at promoting and educating attendees on what HPE offers in the areas of containers and Kubernetes (k8s). Attendees will get the opportunity to talk with representatives from HPE DEV, HPE Storage and other HPE business areas.&lt;/p&gt;
&lt;p&gt;BlueData, a leading provider of container-based software solutions that HPE recently acquired to accelerate AI and data-driven innovation in the enterprise, will also be there. Tom Phelan, Chief Architect and co-founder of BlueData, &lt;a href=&quot;https://twitter.com/tapbluedata/status/1169810013851795456?s=20&quot;&gt;recently expressed his excitement&lt;/a&gt; in being selected, along with his colleague, Joel Baxter, to talk about the &lt;a href=&quot;/blog/complex-stateful-applications-on-kubernetes-kubedirector-version-02&quot;&gt;KubeDirector&lt;/a&gt; open source project at the event.&lt;/p&gt;
&lt;p&gt;Make sure you stop by the HPE booth (P28) to learn more about what HPE is doing in this exciting area, as well as other key DevOps contributions and value offered through the &lt;a href=&quot;https://developer.hpe.com/community&quot;&gt;HPE DEV community.&lt;/a&gt; Looking forward to seeing you at the event!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[DevOps and its impact on project management]]></title><description><![CDATA[hpe20160726048 As a project manager, a great deal of my time is spent on driving efficiency within the Hewlett Packard Enterprise (HPE…]]></description><link>https://developer.hpe.com/devops-and-its-impact-on-project-management/</link><guid isPermaLink="false">https://developer.hpe.com/devops-and-its-impact-on-project-management/</guid><pubDate>Mon, 07 Oct 2019 20:18:47 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/hpe20160726048-1570557978016.jpg&quot; alt=&quot;hpe20160726048&quot;&gt;&lt;/p&gt;
&lt;p&gt;As a project manager, a great deal of my time is spent on driving efficiency within the Hewlett Packard Enterprise (HPE) Chief Design Office. I’ve been lucky enough to spend most of my career in an age when DevOps is highlighted as a superior method for software development. It’s also exciting to see the DevOps philosophy gain momentum and expand into other organizations. Many teams now recognize the benefits of agile methodologies, and continuously work to streamline processes, accelerate feedback loops, and quicken the pace of innovation.&lt;/p&gt;
&lt;p&gt;Since I live the DevOps methodology every day, I have to remind myself that not everyone comes at their work with this mindset. I recently came across a blog that really hit home for me. It was an article written by Angel Rivera entitled &lt;a href=&quot;https://circleci.com/blog/devops-did-not-exist/&quot;&gt;DevOps didn’t exist when I started as a developer: How this one principle changed my career.&lt;/a&gt; Angel has a unique take on the subject, following a timeline from the mid-1990s to today and relating how much things have changed. The author helps the reader understand that others, depending on how long they have worked in this industry, may still be steeped in historical approaches to application development, and how important open conversations can be to overcoming this bias.&lt;/p&gt;
&lt;p&gt;As Angel points out, DevOps is more than just a set of software development practices that combine software development and IT operations in a way that shortens the development and release cycle. It’s a way of doing business based on a firm foundation of trust and cooperation. It is not easily achieved and is deeply rooted in culture. One must nurture a DevOps culture for it to take hold and thrive.&lt;/p&gt;
&lt;p&gt;I like to think the work we are doing here in the Chief Design Office with HPE DEV helps continue to break down silos as we work across different organizations, both internally and outside of Hewlett Packard Enterprise, to help everyone become a little more agile. I would love to hear your thoughts. Connect with us on &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;Slack&lt;/a&gt; and share your stories on what DevOps means to you.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Advancing open source projects - Newsletter]]></title><link>https://developer.hpe.com/2019-October-04/</link><guid isPermaLink="false">https://developer.hpe.com/2019-October-04/</guid><pubDate>Fri, 04 Oct 2019 05:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Hacktoberfest 2019 - Help Grommet and win a free t-shirt in the process]]></title><description><![CDATA[picture1 Are you a developer looking to help drive the growth of open source software and make positive contributions to the ever-growing…]]></description><link>https://developer.hpe.com/hacktoberfest-2019-help-grommet-and-win-a-free-t-shirt-in-the-process/</link><guid isPermaLink="false">https://developer.hpe.com/hacktoberfest-2019-help-grommet-and-win-a-free-t-shirt-in-the-process/</guid><pubDate>Mon, 30 Sep 2019 16:06:29 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture1-1569859873855.png&quot; alt=&quot;picture1&quot;&gt;&lt;/p&gt;
&lt;p&gt;Are you a developer looking to help drive the growth of open source software and make positive contributions to the ever-growing Grommet community? Are you a fan of &lt;a href=&quot;https://v2.grommet.io/&quot;&gt;Grommet&lt;/a&gt; and have ideas on some features you’d like to see? Well, here’s your chance to contribute and get a free t-shirt in the process!&lt;/p&gt;
&lt;p&gt;Hacktoberfest is a month-long celebration run by &lt;a href=&quot;https://www.digitalocean.com/&quot;&gt;DigitalOcean&lt;/a&gt; and &lt;a href=&quot;https://dev.to/&quot;&gt;DEV design&lt;/a&gt; aimed at encouraging coders worldwide to contribute to open source software. It is open to all backgrounds and skill levels, from experienced developers to students just learning to code.&lt;/p&gt;
&lt;p&gt;To participate, all you have to do is sign up anytime between October 1 and October 31 and submit four &lt;a href=&quot;https://help.github.com/en/articles/about-pull-requests&quot;&gt;pull requests&lt;/a&gt; to any public GitHub repositories. To contribute to Grommet, access any of the Grommet issues in the Hacktoberfest challenge by going to the &lt;a href=&quot;https://github.com/grommet/grommet/labels/hacktoberfest&quot;&gt;Grommet GitHub Repo.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;For specifics on the event, including rules and details on how to get your t-shirt, go to the &lt;a href=&quot;https://hacktoberfest.digitalocean.com/&quot;&gt;Hacktoberfest 2019 home page.&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Is continuous deployment of modern web applications in Microsoft Azure really so difficult? ]]></title><description><![CDATA[In my studies on Microsoft Azure fundamentals, I often need to dive deep into some of the many Azure services available, all the way from…]]></description><link>https://developer.hpe.com/is-continuous-deployment-of-modern-web-applications-in-microsoft-azure-r/</link><guid isPermaLink="false">https://developer.hpe.com/is-continuous-deployment-of-modern-web-applications-in-microsoft-azure-r/</guid><pubDate>Thu, 26 Sep 2019 16:24:18 GMT</pubDate><content:encoded>&lt;p&gt;In my studies on Microsoft Azure fundamentals, I often need to dive deep into some of the many Azure services available, all the way from Infrastructure-as-a-Service (IaaS) functionalities (compute, networking, storage, firewall, load balancer) to advanced Platform-as-a-Service (PaaS) services. I recently explored how to implement a Continuous Integration/Continuous Delivery (CI/CD) pipeline with the Microsoft Azure PaaS &lt;a href=&quot;https://docs.microsoft.com/en-us/azure/app-service/overview&quot;&gt;App Service,&lt;/a&gt; a fully-managed web hosting platform used to host any web-based application supported by Azure.&lt;/p&gt;
&lt;p&gt;CI/CD is used by software developers to build, test, integrate, deliver, and deploy software rapidly, reliably, and repeatedly with minimal human intervention. Given the increased emphasis on developers to set up CI/CD pipelines, I wanted to assess how difficult this would be to do in this environment.&lt;/p&gt;
&lt;p&gt;I created a Web App resource (an instance of Azure App Service) inside my Azure subscription and tried to set up a CI/CD pipeline from the popular DevOps source code version control system, GitHub. It turned out that it was so simple and fast I wanted to show you how to use the Azure App Service and one of its built-in pipeline orchestrators, &lt;a href=&quot;https://github.com/projectkudu/kudu/wiki&quot;&gt;Kudu.&lt;/a&gt; I will share with you a method similar to what my colleague, Maddu Rebanna, shared in his blog article, &lt;a href=&quot;/blog/deploy-a-full-stack-application-on-netlify-that-includes-a-cicd-pipeline&quot;&gt;Deploy a full stack application on Netlify that includes a CI/CD pipeline.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Note: Azure offers a free App Service plan SKU (Stock Keeping Unit). It is a great option for developers to quickly deploy, host, and test modern web applications developed in a variety of languages (.NET, .NET Core, Java, Ruby, Node.js, PHP, or Python) at no cost, while enabling continuous deployment from GitHub. If you don’t already have a free Azure account, go ahead and sign up &lt;a href=&quot;https://azure.microsoft.com/en-us/free/&quot;&gt;here.&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;THE BASIC STEPS REQUIRED&lt;/h2&gt;
&lt;p&gt;Azure App Service is a PaaS that provides the complete platform, both hardware (compute resources) and software (OS and runtime stack), on which cloud applications run. This means that you, as a developer, just have to focus on designing and developing your web applications.&lt;/p&gt;
&lt;p&gt;Once you have an account in GitHub and an account in Azure public cloud, you only need to follow three basic steps to deploy and publish a web application using continuous deployment. These steps are:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Create a &lt;em&gt;Web App&lt;/em&gt; resource in Azure. Creating a Web App allocates a set of hosting resources within the Azure App Service platform on which your web application will run. For this example, I suggest you use a free dev/test hosting App Service plan so as not to incur any cost for your Azure account. The free App Service plan SKU remains free even after the one-year Azure trial period is over.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Authorize your Azure account to connect to your GitHub repository. (This only needs to be done if it is the first time you have established a delivery pipeline between Azure and your GitHub account.)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Connect your GitHub application build repository and branch to the Web App resource you just deployed in your Azure account in order to set up the delivery pipeline.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
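&lt;p&gt;If you prefer the command line to the portal, the same three steps can be approximated with the Azure CLI. The sketch below is only illustrative: the resource group, plan, app, and repository names are example values, and the GitHub connection may first require you to authorize Azure or supply a personal access token. The portal walkthrough that follows shows the exact flow I used.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Assumes the Azure CLI is installed and you are signed in (az login)

# Step 1: resource group, free (F1) App Service plan, and the Web App itself
az group create --name CICDRg1 --location westus
az appservice plan create --name MyFreePlan --resource-group CICDRg1 --sku F1
az webapp create --name my-sample-web-app --resource-group CICDRg1 --plan MyFreePlan

# Steps 2 and 3: connect the GitHub repository and branch to the Web App
# (you may be prompted to authorize GitHub, or you can pass --git-token)
az webapp deployment source config --name my-sample-web-app --resource-group CICDRg1 \
  --repo-url https://github.com/my-org/AlpineSkiHouse --branch master
&lt;/code&gt;&lt;/pre&gt;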
&lt;p&gt;I used Visual Studio and an ASP.NET Core web application template available in Visual Studio to create a sample web application named AlpineSkiHouse and copied the Visual Studio generated source repo to my organization’s GitHub in a public repository.&lt;/p&gt;
&lt;h2&gt;STEP BY STEP GUIDE&lt;/h2&gt;
&lt;p&gt;Now, from the Azure portal, let’s take a closer look at these three simple steps:&lt;/p&gt;
&lt;p&gt;STEP 1:&lt;/p&gt;
&lt;p&gt;Create an ASP.NET Web Application in an existing resource group (CICDRg1), specifying a free SKU (Free F1) App Service Plan to host the web application:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture1-1569515900610.png&quot; alt=&quot;picture1&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture2-1569515891835.png&quot; alt=&quot;picture2&quot;&gt;&lt;/p&gt;
&lt;p&gt;Azure App Service gives you a custom URL for your published Web App resource under the azurewebsites.net domain. For a production environment, a company would have to use a paid-tier App Service Plan to map its &lt;a href=&quot;https://docs.microsoft.com/en-us/azure/app-service/app-service-web-tutorial-custom-domain&quot;&gt;custom DNS domain&lt;/a&gt; name to its Azure Web App. For this test, you can use the default azurewebsites.net domain.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture3-1569515886048.png&quot; alt=&quot;picture3&quot;&gt;&lt;/p&gt;
&lt;p&gt;STEP 2:&lt;/p&gt;
&lt;p&gt;Next, select the &lt;strong&gt;Deployment Center&lt;/strong&gt; in the left menu of the Web App resource you just created. Select &lt;strong&gt;GitHub&lt;/strong&gt; and follow the authorization prompts to sign in to your GitHub account. This will authorize the Azure App Service to make the connection to your GitHub account using OAuth, an open-standard authorization protocol. You only need to authorize once with GitHub source control service.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture4-1569515878902.png&quot; alt=&quot;picture4&quot;&gt;&lt;/p&gt;
&lt;p&gt;STEP 3:&lt;/p&gt;
&lt;p&gt;Then, select &lt;strong&gt;App Service Build Service&lt;/strong&gt; (the built-in pipeline provider, Kudu), specify the GitHub location (organization, repository, and branch) of your code, and click Continue to set up the delivery pipeline.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture5-1569515872332.png&quot; alt=&quot;picture5&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture6-1569515866508.png&quot; alt=&quot;picture6&quot;&gt;&lt;/p&gt;
&lt;p&gt;Finally, click Finish to confirm the setup of the delivery pipeline for your Azure Web App.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture7-1569515859003.png&quot; alt=&quot;picture7&quot;&gt;&lt;/p&gt;
&lt;p&gt;Once your web application code repository is connected to your Azure Web App, the Azure App Service build provider (Kudu) does the rest for you. It syncs code from the deployment source (GitHub) and executes a series of steps to build your application and get it into a runnable state. Kudu will also automatically sync any future commits to the Web App hosted on the Azure App Service platform.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture8-1569515852645.png&quot; alt=&quot;picture8&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture9-1569515845266.png&quot; alt=&quot;picture9&quot;&gt;&lt;/p&gt;
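&lt;p&gt;A simple way to see the auto-sync in action is to push a small change to the connected branch and watch a new deployment appear in the Deployment Center. The commands below are only an illustrative sketch (the file name and commit message are arbitrary examples):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# In your local clone of the GitHub repository connected to the Web App
echo &quot;CI/CD smoke test&quot; &gt;&gt; deployment-test.txt
git add deployment-test.txt
git commit -m &quot;Trigger a redeploy via Kudu auto-sync&quot;
git push origin master

# Kudu detects the new commit, rebuilds the application, and redeploys it;
# the new deployment shows up in the Deployment Center
&lt;/code&gt;&lt;/pre&gt;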
&lt;p&gt;After a minute or so, you will see that your web application is deployed and published in Azure App Service. Click on the URL to access your web application from your browser.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture10-1569515838569.png&quot; alt=&quot;picture10&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture11-1569515830186.png&quot; alt=&quot;picture11&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;It’s that easy!!!&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;We just explored one of the many methods you can use to set up a CI/CD pipeline and enable continuous deployment in Azure. I hope you will find this blog article helpful to quickly and easily deploy and test your web applications through the Azure App Service built-in pipeline orchestrator, Kudu. The &lt;a href=&quot;https://docs.microsoft.com/en-us/azure/app-service/&quot;&gt;Azure App Service documentation&lt;/a&gt; will provide you with all of the information you need to jumpstart your knowledge of Azure App Service and Azure continuous deployment services, such as Kudu build service or Azure DevOps pipelines.&lt;/p&gt;
&lt;p&gt;Please remember to follow our &lt;a href=&quot;/blog&quot;&gt;HPE DEV blog&lt;/a&gt; posts for more information on this and other topics designed to streamline your application development environment.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE+SUSE reprise their Redfish Workshop in Lyon, France in October]]></title><description><![CDATA[HPE Open Source and Linux Technology Strategist, Bruno Cornec will once again be a speaker at the upcoming Open Source Summit in Lyon…]]></description><link>https://developer.hpe.com/hpesuse-reprise-their-redfish-workshop-in-lyon-france-in-october/</link><guid isPermaLink="false">https://developer.hpe.com/hpesuse-reprise-their-redfish-workshop-in-lyon-france-in-october/</guid><pubDate>Wed, 25 Sep 2019 15:21:20 GMT</pubDate><content:encoded>&lt;p&gt;HPE Open Source and Linux Technology Strategist, Bruno Cornec will once again be a speaker at the &lt;a href=&quot;https://events.linuxfoundation.org/events/open-source-summit-europe-2019/&quot;&gt;upcoming Open Source Summit&lt;/a&gt; in Lyon, October 28th – 30th, 2019. His hands-on lab will help attendees learn more about Docker and how to use containers. As he did at the previous summit held in &lt;a href=&quot;/blog/redfish-workshop-at-the-open-source-summit-na-2019&quot;&gt;San Diego last August,&lt;/a&gt; Bruno will also act as the coordinator for the co-located HPE+SUSE Redfish Workshop.&lt;/p&gt;
&lt;p&gt;This &lt;a href=&quot;http://trac.project-builder.org/wiki/RedfishWSEurope2019&quot;&gt;Redfish Workshop&lt;/a&gt; will take place immediately after the summit on Thursday, October 31, 2019 from 9:00am – 5:00pm at the Lyon Convention Centre. At the event, system administrators, architects, and developers can see live demos, learn how to use Redfish, and interact with &lt;a href=&quot;https://en.wikipedia.org/wiki/Redfish_(specification)&quot;&gt;Redfish&lt;/a&gt; Project technical experts.&lt;/p&gt;
&lt;p&gt;System and software engineers, developers, and administrators can all benefit from knowledgeable speakers who will point out the benefits derived from a standard management layer that can be used to deploy, configure, and manage an environment. Other topics will also be covered, including the use of DMTF tools for system configuration and the REST API to perform Redfish operations from Python. Attendees will get the chance to practice the concepts being presented through interactive sessions, demos, and labs.&lt;/p&gt;
&lt;p&gt;This is a free event, sponsored by Hewlett Packard Enterprise (HPE) and SUSE.&lt;/p&gt;
&lt;p&gt;For more information about the event, please visit the &lt;a href=&quot;http://trac.project-builder.org/wiki/RedfishWSEurope2019&quot;&gt;Redfish Workshop page.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;IMPORTANT:&lt;/strong&gt; A limited number of seats are available for this event, so please &lt;a href=&quot;https://framaforms.org/redfish-workshop-oss-europe-2019-registration-form-1567095132&quot;&gt;register&lt;/a&gt; as soon as you can.&lt;/p&gt;
&lt;p&gt;If you have expertise you would like to share during this workshop, please email &lt;a href=&quot;mailto:Bruno.Cornec@hpe.com&quot;&gt;Bruno.Cornec@hpe.com&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Exploring how to deliver everything as-a-Service - Newsletter]]></title><link>https://developer.hpe.com/2019-September-13/</link><guid isPermaLink="false">https://developer.hpe.com/2019-September-13/</guid><pubDate>Fri, 13 Sep 2019 05:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Using TypeScript in Grommet Applications]]></title><description><![CDATA[typescriptimage Recently, I spent time learning TypeScript, a typed superset of JavaScript that offers optional static type-checking along…]]></description><link>https://developer.hpe.com/using-typescript-in-grommet-applications/</link><guid isPermaLink="false">https://developer.hpe.com/using-typescript-in-grommet-applications/</guid><pubDate>Mon, 09 Sep 2019 18:08:08 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/typescriptimage-1568052628959.jpeg&quot; alt=&quot;typescriptimage&quot;&gt;&lt;/p&gt;
&lt;p&gt;Recently, I spent time learning TypeScript, a typed superset of JavaScript that offers optional static type-checking along with the latest ECMAScript features. TypeScript can be pretty painful to wrap your head around, given the breadth of its capabilities. But I found that TypeScript has many benefits. For instance, declaring types as you write your application, whether they are a number, string, array, or function, can save developers time by catching problems early in the development process. And, if you are maintaining a project with others using TypeScript, it will cut down on runtime errors because you are able to surface bugs during compilation. In this tutorial, I will show you how to get started with TypeScript within the confines of a React and Grommet project.&lt;/p&gt;
&lt;h2&gt;Step 1&lt;/h2&gt;
&lt;p&gt;To get started, you might want to check out another blog in which Ian Bovard does a great job going step-by-step in helping you install Grommet into your Create React application - &lt;a href=&quot;/blog/using-your-first-grommet-component-with-create-react-app&quot;&gt;Using Your First Grommet Component with Create-React-App.&lt;/a&gt; You can add TypeScript to any of your projects, at any point, if this is something you’d like to try.&lt;/p&gt;
&lt;h2&gt;Step 2&lt;/h2&gt;
&lt;p&gt;Assuming you already have a Grommet app, to add TypeScript to your Grommet application, all you need to do is cd into that project and run the following command.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;yarn add typescript @types/node @types/react @types/react-dom @types/jest

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Or, if you are using npm, then use this instead:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;npm install --save typescript @types/node @types/react @types/react-dom @types/jest

&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Step 3&lt;/h2&gt;
&lt;p&gt;Once you’ve added TypeScript to the Grommet application, you can pick up where Ian talks about App.js with regard to the Heading component.&lt;/p&gt;
&lt;p&gt;As you go through your files, change the App.js to App.tsx. This will trigger the use of TypeScript.&lt;/p&gt;
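&lt;p&gt;In a Create React App project, the rename is just a file move. Here is a minimal sketch, assuming the default src/ layout and a recent version of react-scripts (which detects the .tsx file when the dev server restarts):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# From the root of your Grommet / Create React App project
mv src/App.js src/App.tsx

# Restart the development server; react-scripts picks up the .tsx file
# and generates tsconfig.json and react-app-env.d.ts for you
yarn start
&lt;/code&gt;&lt;/pre&gt;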
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/screen-shot-2019-09-09-at-122230-pm-1568053417737.png&quot; alt=&quot;screen shot 2019 09 09 at 12.22.30 pm&quot;&gt;&lt;/p&gt;
&lt;p&gt;Following my own suggestion for this example, I will start with the Heading component that was used in &lt;a href=&quot;/blog/using-your-first-grommet-component-with-create-react-app&quot;&gt;Ian’s blog:&lt;/a&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;import React from &quot;react&quot;;
import { Grommet, grommet, Heading } from &quot;grommet&quot;;
function App() {
  return (
    &amp;#x3C;Grommet className=&quot;App&quot; theme={grommet}&gt;
      &amp;#x3C;Heading level=&apos;1&apos;&gt;TypeScript Rocks&amp;#x3C;/Heading&gt;
    &amp;#x3C;/Grommet&gt;
  );
}

export default App;

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As shown in Ian’s blog, each component has its own props. In the Grommet project, these props have been declared in their own TypeScript files to define the types that should be accepted by each prop.&lt;/p&gt;
&lt;p&gt;Let’s try to break a few things to make sure that the types are being checked correctly. The Heading component has various props you can manipulate, including things like size, color, level, and margin. You can change these props to fit your specific design requirements. In this example, we will change the level prop of the Heading component. It accepts a number from 1-6 or a string containing a number from 1-6. If we instead pass in the string “small”, we should get a TypeScript error.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/screen-shot-2019-09-09-at-121650-pm-1568053104287.png&quot; alt=&quot;screen shot 2019 09 09 at 12.16.50 pm&quot;&gt;&lt;/p&gt;
&lt;p&gt;The great thing about TypeScript is that when you have an error, you can hover over it and it will explain the type options that can be accepted, as well as what was given. In this example, we received an error that stated:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/screen-shot-2019-09-12-at-93544-am-1568737040433.png&quot; alt=&quot;screen shot 2019 09 12 at 9.35.44 am&quot;&gt;&lt;/p&gt;
&lt;p&gt;As you can see, the error was very clear in stating what prop types were expected and what was given. This shows what broke the project, so the level prop can be changed back to “1”. Once this is changed, you will see the error disappear without having to debug the whole application!&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/screen-shot-2019-09-09-at-122738-pm-1568053721635.png&quot; alt=&quot;screen shot 2019 09 09 at 12.27.38 pm&quot;&gt;&lt;/p&gt;
&lt;p&gt;See what happens after I’ve made the change?
Yay! The error has disappeared!&lt;/p&gt;
&lt;p&gt;Through this very simple TypeScript error example, you can see how using TypeScript can save a developer a lot of time debugging. Imagine if someone were not using TypeScript in their project. If the string “small” was passed to the Heading level prop, it would not have immediately been recognized as an error. Although you can do a prop check using &lt;a href=&quot;https://reactjs.org/docs/typechecking-with-proptypes.html&quot;&gt;PropTypes&lt;/a&gt; in React, errors are only caught at runtime. Using TypeScript allows you to see errors while you’re developing an application, catching them as you compile. However, it is good practice to use both PropTypes and TypeScript to check for errors.&lt;/p&gt;
&lt;p&gt;Using TypeScript is particularly advantageous when you are onboarding someone new onto a project. The team can rest assured that no matter what new code is added, the new developer will need to follow the established types, making it harder to introduce new mistakes in a code base the developer may not be as familiar with.&lt;/p&gt;
&lt;p&gt;For more help, each component in Grommet has an index.d.ts file that contains all of the prop types for each specific component. Another easy way to quickly interact with Grommet and TypeScript can be found in the &lt;a href=&quot;https://codesandbox.io/s/grommet-ts-ugq2y&quot;&gt;Grommet sandbox&lt;/a&gt; the team put together to help developers who are using TypeScript.&lt;/p&gt;
&lt;p&gt;Lately, more and more users have been incorporating TypeScript into their Grommet projects because it helps them find errors faster and results in cleaner code. The team continuously tries to make it easier for TypeScript users to use Grommet. If you run into any problems with what I described above or have any additional suggestions, please check out the resources listed below.&lt;/p&gt;
&lt;p&gt;Resources:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/grommet/grommet&quot;&gt;Grommet Github&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://slackin.grommet.io/&quot;&gt;Grommet Slack&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Once you have joined our Grommet Slack, be sure to check out the #typescript channel.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[KubeDirector: The easy way to run complex stateful applications on Kubernetes]]></title><description><![CDATA[KubeDirector is an open source project designed to make it easy to run complex stateful scale-out application clusters on Kubernetes…]]></description><link>https://developer.hpe.com/kubedirector-the-easy-way-to-run-complex-stateful-applications-on-kubern/</link><guid isPermaLink="false">https://developer.hpe.com/kubedirector-the-easy-way-to-run-complex-stateful-applications-on-kubern/</guid><pubDate>Mon, 09 Sep 2019 17:42:49 GMT</pubDate><content:encoded>&lt;p&gt;KubeDirector is an open source project designed to make it easy to run complex stateful scale-out application clusters on Kubernetes. KubeDirector is built using the custom resource definition (CRD) framework and leverages the native Kubernetes API extensions and design philosophy. This enables transparent integration with Kubernetes user/resource management as well as existing clients and tools.&lt;/p&gt;
&lt;p&gt;We recently &lt;a href=&quot;https://medium.com/@thomas_phelan/operation-stateful-introducing-bluek8s-and-kubernetes-director-aa204952f619/&quot;&gt;introduced the KubeDirector project&lt;/a&gt;, as part of a broader open source Kubernetes initiative we call BlueK8s. I’m happy to announce that the pre-alpha code for &lt;a href=&quot;https://github.com/bluek8s/kubedirector/&quot;&gt;KubeDirector&lt;/a&gt; is now available. And in this blog post, I’ll show how it works.&lt;/p&gt;
&lt;p&gt;KubeDirector provides the following capabilities:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The ability to run non-cloud native stateful applications on Kubernetes without modifying the code. In other words, it’s not necessary to decompose these existing applications to fit a microservices design pattern.&lt;/li&gt;
&lt;li&gt;Native support for preserving application-specific configuration and state.&lt;/li&gt;
&lt;li&gt;An application-agnostic deployment pattern, minimizing the time to onboard new stateful applications to Kubernetes.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;KubeDirector enables data scientists familiar with data-intensive distributed applications such as Hadoop, Spark, Cassandra, TensorFlow, Caffe2, etc. to run these applications on Kubernetes – with a minimal learning curve and no need to write GO code. The applications controlled by KubeDirector are defined by some basic metadata and an associated package of configuration artifacts. The application metadata is referred to as a KubeDirectorApp resource.&lt;/p&gt;
&lt;p&gt;To understand the components of KubeDirector, clone the repository on &lt;a href=&quot;https://github.com/bluek8s/kubedirector/&quot;&gt;GitHub&lt;/a&gt; using a command similar to:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;git clone https://&amp;#x3C;userid&gt;@github.com/bluek8s/kubedirector
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The KubeDirectorApp definition for the Spark 2.2.1 application is located in the file &lt;code&gt;kubedirector/deploy/example_catalog/cr-app-spark221e2.json&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt; ~&gt; cat kubedirector/deploy/example_catalog/cr-app-spark221e2.json
 {
    &quot;apiVersion&quot;: &quot;kubedirector.bluedata.io/v1alpha1&quot;,
    &quot;kind&quot;: &quot;KubeDirectorApp&quot;,
    &quot;metadata&quot;: {
        &quot;name&quot; : &quot;spark221e2&quot;
    },
    &quot;spec&quot; : {
        &quot;systemctlMounts&quot;: true,
        &quot;config&quot;: {
            &quot;node_services&quot;: [
                {
                    &quot;service_ids&quot;: [
                        &quot;ssh&quot;,
                        &quot;spark&quot;,
                        &quot;spark_master&quot;,
                        &quot;spark_worker&quot;
                    ],
…
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The configuration of an application cluster is referred to as a KubeDirectorCluster resource. The KubeDirectorCluster definition for a sample Spark 2.2.1 cluster is located in the file &lt;code&gt;kubedirector/deploy/example_clusters/cr-cluster-spark221.e1.yaml&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;~&gt; cat kubedirector/deploy/example_clusters/cr-cluster-spark221.e1.yaml
apiVersion: &quot;kubedirector.bluedata.io/v1alpha1&quot;
kind: &quot;KubeDirectorCluster&quot;
metadata:
  name: &quot;spark221e2&quot;
spec:
  app: spark221e2
  roles:
  - name: controller
    replicas: 1
    resources:
      requests:
        memory: &quot;4Gi&quot;
        cpu: &quot;2&quot;
      limits:
        memory: &quot;4Gi&quot;
        cpu: &quot;2&quot;
  - name: worker
    replicas: 2
    resources:
      requests:
        memory: &quot;4Gi&quot;
        cpu: &quot;2&quot;
      limits:
        memory: &quot;4Gi&quot;
        cpu: &quot;2&quot;
  - name: jupyter
…
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Running Spark on Kubernetes with KubeDirector&lt;/h2&gt;
&lt;p&gt;With KubeDirector, it’s easy to run Spark clusters on Kubernetes.&lt;/p&gt;
&lt;p&gt;First, verify that Kubernetes (version 1.9 or later) is running, using the command &lt;code&gt;kubectl version&lt;/code&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;~&gt; kubectl version
Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;11&quot;, GitVersion:&quot;v1.11.3&quot;, GitCommit:&quot;a4529464e4629c21224b3d52edfe0ea91b072862&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2018-09-09T18:02:47Z&quot;, GoVersion:&quot;go1.10.3&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;}
Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;11&quot;, GitVersion:&quot;v1.11.3&quot;, GitCommit:&quot;a4529464e4629c21224b3d52edfe0ea91b072862&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2018-09-09T17:53:03Z&quot;, GoVersion:&quot;go1.10.3&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Deploy the KubeDirector service and the example KubeDirectorApp resource definitions with the commands:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;cd kubedirector
make deploy
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;These will start the KubeDirector pod:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;~&gt; kubectl get pods
NAME                           READY     STATUS     RESTARTS     AGE
kubedirector-58cf59869-qd9hb   1/1       Running    0            1m   
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;List the installed KubeDirector applications with &lt;code&gt;kubectl get KubeDirectorApp&lt;/code&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;~&gt; kubectl get KubeDirectorApp
NAME           AGE
cassandra311   30m
spark211up     30m
spark221e2     30m
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now you can launch a Spark 2.2.1 cluster using the example KubeDirectorCluster file and the &lt;code&gt;kubectl create -f deploy/example_clusters/cr-cluster-spark221.e1.yaml&lt;/code&gt; command. Verify that the Spark cluster has been started:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;~&gt; kubectl get pods
NAME                             READY     STATUS    RESTARTS   AGE
kubedirector-58cf59869-djdwl     1/1       Running   0          19m
spark221e2-controller-zbg4d-0    1/1       Running   0          23m
spark221e2-jupyter-2km7q-0       1/1       Running   0          23m
spark221e2-worker-4gzbz-0        1/1       Running   0          23m
spark221e2-worker-4gzbz-1        1/1       Running   0          23m
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The running services now include the Spark services:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;~&gt; kubectl get service
NAME                                TYPE         CLUSTER-IP        EXTERNAL-IP    PORT(S)                                                    AGE
kubedirector                        ClusterIP    10.98.234.194     &amp;#x3C;none&gt;         60000/TCP                                                  1d
kubernetes                          ClusterIP    10.96.0.1         &amp;#x3C;none&gt;         443/TCP                                                    1d
svc-spark221e2-5tg48                ClusterIP    None              &amp;#x3C;none&gt;         8888/TCP                                                   21s
svc-spark221e2-controller-tq8d6-0   NodePort     10.104.181.123    &amp;#x3C;none&gt;         22:30534/TCP,8080:31533/TCP,7077:32506/TCP,8081:32099/TCP  20s
svc-spark221e2-jupyter-6989v-0      NodePort     10.105.227.249    &amp;#x3C;none&gt;         22:30632/TCP,8888:30355/TCP                                20s
svc-spark221e2-worker-d9892-0       NodePort     10.107.131.165    &amp;#x3C;none&gt;         22:30358/TCP,8081:32144/TCP                                20s
svc-spark221e2-worker-d9892-1       NodePort     10.110.88.221     &amp;#x3C;none&gt;         22:30294/TCP,8081:31436/TCP                                20s
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Pointing the browser at port 31533 connects to the Spark Master UI:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/kubedirector-1568051725410.png&quot; alt=&quot;kubedirector&quot;&gt;&lt;/p&gt;
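&lt;p&gt;Port 31533 is the NodePort that maps to the Spark Master UI (service port 8080) in the listing above, so you can browse to that port on any cluster node’s IP address. If you don’t have direct network access to the nodes, one alternative (illustrative only; substitute the controller service name from your own &lt;code&gt;kubectl get service&lt;/code&gt; output) is to port-forward the service to your workstation:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Find a node IP to use with the NodePort (31533 in this example)
kubectl get nodes -o wide

# Or forward the Spark Master UI port (8080) to your local machine
kubectl port-forward service/svc-spark221e2-controller-tq8d6-0 8080:8080
# Then open http://localhost:8080 in your browser
&lt;/code&gt;&lt;/pre&gt;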
&lt;p&gt;That’s all there is to it! In fact, in the example above we also deployed a Jupyter notebook along with the Spark cluster.&lt;/p&gt;
&lt;p&gt;To start another application (e.g. Cassandra), just specify another KubeDirectorApp file:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;kubectl create -f deploy/example_clusters/cr-cluster-cassandra311.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;See the running Cassandra cluster:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;~&gt; kubectl get pods
NAME                              READY     STATUS    RESTARTS   AGE
cassandra311-seed-v24r6-0         1/1       Running   0          1m
cassandra311-seed-v24r6-1         1/1       Running   0          1m
cassandra311-worker-rqrhl-0       1/1       Running   0          1m
cassandra311-worker-rqrhl-1       1/1       Running   0          1m
kubedirector-58cf59869-djdwl      1/1       Running   0          1d
spark221e2-controller-tq8d6-0     1/1       Running   0          22m
spark221e2-jupyter-6989v-0        1/1       Running   0          22m
spark221e2-worker-d9892-0         1/1       Running   0          22m
spark221e2-worker-d9892-1         1/1       Running   0          22m
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now you have a Spark cluster (with a Jupyter notebook) and a Cassandra cluster running on Kubernetes. Use &lt;code&gt;kubectl get service&lt;/code&gt; to see the set of services.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;~&gt; kubectl get service
NAME                                TYPE         CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                   AGE
kubedirector                        ClusterIP    10.98.234.194    &amp;#x3C;none&gt;        60000/TCP                                                 1d
kubernetes                          ClusterIP    10.96.0.1        &amp;#x3C;none&gt;        443/TCP                                                   1d
svc-cassandra311-seed-v24r6-0       NodePort     10.96.94.204     &amp;#x3C;none&gt;        22:31131/TCP,9042:30739/TCP                               3m
svc-cassandra311-seed-v24r6-1       NodePort     10.106.144.52    &amp;#x3C;none&gt;        22:30373/TCP,9042:32662/TCP                               3m
svc-cassandra311-vhh29              ClusterIP    None             &amp;#x3C;none&gt;        8888/TCP                                                  3m
svc-cassandra311-worker-rqrhl-0     NodePort     10.109.61.194    &amp;#x3C;none&gt;        22:31832/TCP,9042:31962/TCP                               3m
svc-cassandra311-worker-rqrhl-1     NodePort     10.97.147.131    &amp;#x3C;none&gt;        22:31454/TCP,9042:31170/TCP                               3m
svc-spark221e2-5tg48                ClusterIP    None             &amp;#x3C;none&gt;        8888/TCP                                                  24m
svc-spark221e2-controller-tq8d6-0   NodePort     10.104.181.123   &amp;#x3C;none&gt;        22:30534/TCP,8080:31533/TCP,7077:32506/TCP,8081:32099/TCP 24m
svc-spark221e2-jupyter-6989v-0      NodePort     10.105.227.249   &amp;#x3C;none&gt;        22:30632/TCP,8888:30355/TCP                               24m
svc-spark221e2-worker-d9892-0       NodePort     10.107.131.165   &amp;#x3C;none&gt;        22:30358/TCP,8081:32144/TCP                               24m
svc-spark221e2-worker-d9892-1       NodePort     10.110.88.221    &amp;#x3C;none&gt;        22:30294/TCP,8081:31436/TCP                               24m
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Get Involved&lt;/h2&gt;
&lt;p&gt;KubeDirector is a fully open source, Apache v2 licensed project – the first of multiple open source projects within a broader initiative we call BlueK8s. The pre-alpha code for KubeDirector has just been released, and we would love for you to join the growing community of developers, contributors, and adopters. Follow &lt;a href=&quot;https://twitter.com/BlueK8s/&quot;&gt;@BlueK8s&lt;/a&gt; on Twitter and get involved through these channels:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;KubeDirector &lt;a href=&quot;https://join.slack.com/t/bluek8s/shared_invite/enQtNDUwMzkwODY5OTM4LTRhYmRmZmE4YzY3OGUzMjA1NDg0MDVhNDQ2MGNkYjRhM2RlMDNjMTI1NDQyMjAzZGVlMDFkNThkNGFjZGZjMGY/&quot;&gt;chat room on Slack&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;KubeDirector &lt;a href=&quot;https://github.com/bluek8s/kubedirector/&quot;&gt;GitHub repo&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Deploying Complex Stateful Applications on Kubernetes with KubeDirector]]></title><description><![CDATA[Kubernetes is clearly the container orchestrator of choice for cloud-native stateless applications. And with StatefulSets and Persistent…]]></description><link>https://developer.hpe.com/deploying-complex-stateful-applications-on-kubernetes-with-kubedirector/</link><guid isPermaLink="false">https://developer.hpe.com/deploying-complex-stateful-applications-on-kubernetes-with-kubedirector/</guid><pubDate>Mon, 09 Sep 2019 17:36:33 GMT</pubDate><content:encoded>&lt;p&gt;Kubernetes is clearly the container orchestrator of choice for cloud-native stateless applications. And with StatefulSets and Persistent Volumes, it’s now possible to run stateful applications on Kubernetes. Tools like Kustomize, Helm, and Kubeflow help tackle some of the deployment complexity for stateful applications. However, running complex stateful applications for distributed AI, machine learning, and big data analytics on Kubernetes remains beyond the reach of most users.&lt;/p&gt;
&lt;p&gt;Enter KubeDirector. KubeDirector is an open source Apache project that uses the standard Kubernetes custom resource functionality and API extensions to deploy and manage complex stateful scale-out application clusters. With KubeDirector, you can run complex stateful clusters for AI, machine learning, and big data analytics on Kubernetes without writing a single line of Go code.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=X2kEk5wLe9g&quot;&gt;This webinar&lt;/a&gt; will provide an overview of the KubeDirector architecture, show how to author the metadata and artifacts required for an example stateful application (e.g. with Spark, Jupyter, and Cassandra), and demonstrate the deployment and management of the cluster on Kubernetes using KubeDirector.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Complex Stateful Applications on Kubernetes: KubeDirector version 0.2]]></title><description><![CDATA[Last summer, I wrote here about our BlueK8s initiative and a new open source project for deploying and managing complex stateful scale-out…]]></description><link>https://developer.hpe.com/complex-stateful-applications-on-kubernetes-kubedirector-version-02/</link><guid isPermaLink="false">https://developer.hpe.com/complex-stateful-applications-on-kubernetes-kubedirector-version-02/</guid><pubDate>Mon, 09 Sep 2019 17:26:49 GMT</pubDate><content:encoded>&lt;p&gt;Last summer, &lt;a href=&quot;https://www.bluedata.com/blog/2018/07/operation-stateful-bluek8s-and-kubernetes-director/&quot;&gt;I wrote here about our BlueK8s initiative and a new open source project&lt;/a&gt; for deploying and managing complex stateful scale-out applications on Kubernetes: &lt;strong&gt;KubeDirector.&lt;/strong&gt; KubeDirector enables data scientists familiar with data-intensive distributed applications such as Hadoop, Spark, Cassandra, TensorFlow, Caffe2, etc. to easily run these applications on Kubernetes.&lt;/p&gt;
&lt;p&gt;In &lt;a href=&quot;https://kubernetes.io/blog/2018/10/03/kubedirector-the-easy-way-to-run-complex-stateful-applications-on-kubernetes/&quot;&gt;my blog post on the Kubernetes site in the fall&lt;/a&gt;, I introduced version 0.1 of KubeDirector and described how it works. Since then, we’ve seen a lot of interest in KubeDirector from the community, and we’re very excited about the progress so far. The BlueData team behind this effort is &lt;a href=&quot;https://www.bluedata.com/blog/2018/11/hpe-and-bluedata-joining-forces-in-ai-ml-big-data/&quot;&gt;now part of HPE&lt;/a&gt;, and the KubeDirector project continues to move full steam ahead.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/1mbaqzb3tkv_zx36nxxdgbw-1568050291276.jpeg&quot; alt=&quot;1*mbaqzb3tkv_zx36nxxdgbw&quot;&gt;&lt;/p&gt;
&lt;p&gt;To that end, we just pushed out the next release and our first public update of KubeDirector: version 0.2. You can check out the full details on our GitHub site &lt;a href=&quot;https://github.com/bluek8s/kubedirector/releases/tag/v0.2.0&quot;&gt;here&lt;/a&gt;: &lt;a href=&quot;https://github.com/bluek8s/kubedirector/releases/tag/v0.2.0&quot;&gt;https://github.com/bluek8s/kubedirector/releases/tag/v0.2.0&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Some of the highlights of what’s new in version 0.2 of KubeDirector include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A fully deployable Cloudera 5.14.2 image is now available in the catalog of example applications&lt;/li&gt;
&lt;li&gt;Cluster launch performance has been enhanced through additional work on launch parallelization&lt;/li&gt;
&lt;li&gt;The “configcli” tool used in application setup is now included in the “nodeprep” directory.&lt;/li&gt;
&lt;li&gt;We’ve made additional improvements to the Makefile support and functionality:
&lt;ul&gt;
&lt;li&gt;KubeDirector can now be built and deployed on Ubuntu systems&lt;/li&gt;
&lt;li&gt;“make deploy” now waits for deployment to succeed before returning&lt;/li&gt;
&lt;li&gt;“make teardown” now waits for teardown to finish before returning.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;KubeDirector actions are now recorded as Kubernetes events and can be viewed with the standard “kubectl describe” command (see the sketch after this list)&lt;/li&gt;
&lt;li&gt;KubeDirector has been tested on the following Kubernetes platforms:
&lt;ul&gt;
&lt;li&gt;DigitalOcean Kubernetes (DOK)&lt;/li&gt;
&lt;li&gt;Google Kubernetes Engine (GKE)&lt;/li&gt;
&lt;li&gt;Amazon Elastic Container Service for Kubernetes (EKS)&lt;/li&gt;
&lt;li&gt;Kubernetes version 1.13.2 on CentOS kernels&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
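&lt;p&gt;As a quick illustration of the events item above, here is a minimal sketch; it assumes the KubeDirectorCluster resource kind used by the example files and a virtual cluster named spark221e2, so substitute your own resource name:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# List the KubeDirector virtual clusters in the current namespace
~&gt; kubectl get kubedirectorclusters
# Describe one of them; KubeDirector actions appear in the Events section at the bottom
~&gt; kubectl describe kubedirectorcluster spark221e2
&lt;/code&gt;&lt;/pre&gt;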
&lt;p&gt;See below for a screenshot of KubeDirector v0.2 running four pods of a Spark cluster:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/16syu8q9lacfchtp9-ctdca-1568050296650.png&quot; alt=&quot;1*6syu8q9lacfchtp9 ctdca&quot;&gt;&lt;/p&gt;
&lt;p&gt;One of those pods is a Jupyter notebook, as shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/1bazfzfp7zyqvpvtww-ekbw-1568050305473.png&quot; alt=&quot;1*bazfzfp7zyqvpvtww ekbw&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Join the Community&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;We’re working towards the next version of KubeDirector (and the broader BlueK8s initiative) and we’d welcome your help as developers, contributors, and adopters. Follow &lt;a href=&quot;https://twitter.com/BlueK8s/&quot;&gt;@BlueK8s&lt;/a&gt; on Twitter and get involved through these channels:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;KubeDirector &lt;a href=&quot;https://bluek8s.slack.com/join/shared_invite/enQtNTQzNDQzNjQwMDMyLTdjYjE0ZTg0OGJhZWUxMzhkZTZjNDg5ODIyNzZmNzZiYTk4ZjQxNDFjYzk4OWM0MjFlNmVkNWNlNmFjNzkzNjQ&quot;&gt;chat room on Slack&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;KubeDirector &lt;a href=&quot;https://github.com/bluek8s/kubedirector/&quot;&gt;GitHub repo&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Say NO to (Cloud) Vendor Lock-in!]]></title><description><![CDATA[gettyimages 761603815 During my 30 year technology career, working for Digital Equipment Corporation (DEC), Compaq Computer, Hewlett Packard…]]></description><link>https://developer.hpe.com/say-no-to-cloud-vendor-lock-in/</link><guid isPermaLink="false">https://developer.hpe.com/say-no-to-cloud-vendor-lock-in/</guid><pubDate>Tue, 03 Sep 2019 17:13:05 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/gettyimages-761603815-1567544802064.jpg&quot; alt=&quot;gettyimages 761603815&quot;&gt;&lt;/p&gt;
&lt;p&gt;During my 30-year technology career, working for Digital Equipment Corporation (DEC), Compaq Computer, Hewlett Packard (HP), and now Hewlett Packard Enterprise (HPE), I was in the center of the action when the industry-standard server tsunami hit the computer industry. I rode the wave as we moved away from vendor-specific operating systems (HP-UX, SunOS, AIX, VMS) running on vendor-specific servers (HP, IBM, DEC, Sun) that were powered by vendor-specific processors (PA-RISC, Alpha, Sparc, Power, etc.). Intel x86-powered servers, running Linux, available from any kind of vendor, quickly took over. Death to vendor lock-in! That was our motto at the time.&lt;/p&gt;
&lt;p&gt;Soon after, database and other software enablers ran into a similar situation. It was an open source snowball effect, having everything to do with avoiding vendor lock-in and encouraging customers to part ways with Oracle, DB2, and RDB. When virtualization technologies came along, they again started out very vendor-specific (and still are with VMware, for the most part), but open source solutions also emerged, such as OpenStack. These solutions now allow customers to build open source private clouds that run on any industry-standard server.&lt;/p&gt;
&lt;p&gt;Next came the container revolution. Right from the start, containers appeared as an open source solution, and that certainly helped drive their incredible adoption rate. The success of the container management framework, Kubernetes, is the latest proof of the power of open source and vendor lock-in avoidance.&lt;/p&gt;
&lt;p&gt;So, what’s the story with public cloud? Some vendors, like Google, are active open source contributors. For the most part, though, public cloud vendors are acting like the HP/Compaq/IBM/DEC of the ‘80s: they propose attractive and innovative solutions only available on their specific cloud technologies and lock you in. The minute you start using some of the best APIs from AWS right in the heart of your code, you are hooked! If you decide to run your application on another cloud, you must rewrite the application to a different API. There are multiple reasons why the ability to deploy on another cloud could become important for your application, whether it’s because of a large customer requirement, merger and acquisition, or cost, as explained by one of HashiCorp’s founders &lt;a href=&quot;https://www.reddit.com/r/devops/comments/91afzz/why_multicloud/e2x156y/&quot;&gt;here.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;If you think about it, public cloud today is all about vendor lock-in. Going &lt;a href=&quot;https://en.wikipedia.org/wiki/Serverless_computing&quot;&gt;serverless&lt;/a&gt; (i.e. AWS Lambda, Azure Functions, Google Cloud Functions), meaning the ability to write code that can run on anything and for which you just pay for what’s used, isn&apos;t going to help, as it locks you up even tighter. But if serverless is what developers really want, then the next logical step would be cloudless. With cloudless, you could write cloud-agnostic applications that can run on any public or private cloud in an everything-as-a-service consumption model. Given everything that I’ve seen, this makes the most sense and is probably where the industry is headed so make sure you learn more about &lt;a href=&quot;https://www.hpe.com/us/en/insights/articles/cloudless-1906.html&quot;&gt;Cloudless&lt;/a&gt; from &lt;a href=&quot;https://www.hpe.com/cloudless&quot;&gt;here.&lt;/a&gt; Finally, don’t make the same mistakes some made 20 years ago and get entangled in proprietary solutions: say &lt;strong&gt;No&lt;/strong&gt; to vendor lock-in now!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[So you know how to code. Is that enough? ]]></title><description><![CDATA[picture1 You've probably spent considerable time mastering a programming language. You may even be an expert in something like JavaScript…]]></description><link>https://developer.hpe.com/so-you-know-how-to-code-is-that-enough/</link><guid isPermaLink="false">https://developer.hpe.com/so-you-know-how-to-code-is-that-enough/</guid><pubDate>Tue, 03 Sep 2019 17:08:20 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture1-1567530751772.png&quot; alt=&quot;picture1&quot;&gt;&lt;/p&gt;
&lt;p&gt;You&apos;ve probably spent considerable time mastering a programming language. You may even be an expert in something like JavaScript, CSS, Python or perhaps C/C++.  You&apos;ve written so many lines of code you cannot remember all the projects. And you know you can probably solve just about any problem using your favorite language. That&apos;s awesome!&lt;/p&gt;
&lt;p&gt;But what happens when that language becomes less relevant and is replaced by something else? Technology changes faster than almost anything else (well, maybe not as fast as that green light you never seem to catch in traffic). As a developer, you are constantly challenged to learn the latest programming language or tool that will give you that extra edge to stand out from the rest of the pack. And, despite your best efforts, that&apos;s not always going to happen. But you can do plenty of other things to stay ahead in this fast-paced environment.&lt;/p&gt;
&lt;p&gt;To be a successful developer, you need more than the ability to code. Here are some characteristics I&apos;ve seen experienced developers display:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;An enthusiasm for continuous learning.&lt;/strong&gt; Yes, this is an age old adage, and you&apos;ve heard it many times. But it equates to more than just formal and informal training. It&apos;s about inviting others to critique your work, and making them feel comfortable with giving you feedback by demonstrating how you listen and act on that feedback. It&apos;s about trying something new that&apos;s outside your comfort zone, such as doing a presentation or conducting customer research. It also means finding other interests and bringing those conversations and thoughts into your development work. These are just some of the many non-traditional ways to keep learning.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Adaptability to changing situations.&lt;/strong&gt; Every day the tech industry is flooded with new announcements that portend change -- change that could affect your job, what you do, and what makes you happy. Embracing that change and finding a path forward is key to being successful. You might need to learn a new skill, get placed with a new set of teammates, or watch your favorite project go away. These things are out of your control. Professional developers will focus on what they can control, adapt as necessary, and find a path forward.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;A desire to contribute in a meaningful way.&lt;/strong&gt; Yes, you wrote your code, made your deadline and can claim success. But, is the code bug free? Is it efficient and performant code? How will the customer or others react to consuming your code? Skilled developers look beyond meeting just the milestone to ensure that what they are doing contributes to a successful outcome for the customer. They also look to see that what they are doing helps lift the contributions of their teammates as well as the broader organization.&lt;/p&gt;
&lt;p&gt;These are just a few common traits I&apos;ve seen in developers who I thought were successful. They were not always the experts in the latest languages or tools, but they were inherently an important part of the team.&lt;/p&gt;
&lt;p&gt;Do you know any successful developers? What characteristics do they display?  Share your thoughts and stories with me on Slack at &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;hpedev.slack.com&lt;/a&gt; or connect with me on Twitter &lt;a href=&quot;https://twitter.com/KrenekJeff&quot;&gt;@KrenekJeff.&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Introducing a multi-vendor CSI driver for Kubernetes]]></title><description><![CDATA[In true HPE storage tradition, we’re introducing an open source, multi-platform and multi-vendor container storage interface (CSI) driver…]]></description><link>https://developer.hpe.com/introducing-a-multi-vendor-csi-driver-for-kubernetes/</link><guid isPermaLink="false">https://developer.hpe.com/introducing-a-multi-vendor-csi-driver-for-kubernetes/</guid><pubDate>Fri, 30 Aug 2019 17:57:28 GMT</pubDate><content:encoded>&lt;p&gt;In true HPE storage tradition, we’re introducing an open source, multi-platform and multi-vendor container storage interface (CSI) driver for Kubernetes. In essence, it&apos;s meant to support multiple block and file backends from the HPE portfolio. We encourage others to implement our specification (more on this below) to take advantage of the driver architecture for any file and block backend, including those not from HPE. In the same way our FlexVolume driver (dory) and Dynamic Provisioner (doryd) provided an abstraction for any vendor’s Docker Volume plugin to Kubernetes, we are bringing a similar concept to life with the HPE CSI Driver for Kubernetes. In this blog post, I’ll walk you through the architecture, specification and deployment model. But first, let’s go through the basics of what CSI is.&lt;/p&gt;
&lt;h1&gt;Container Storage Interface&lt;/h1&gt;
&lt;p&gt;CSI is a specification that allows container orchestration systems to implement a standardized interface to interact with storage. Kubernetes happens to be one container orchestration implementation that supports CSI. The CSI specification has evolved at a rapid pace since its inception nearly two years ago, steadily adding new features and capabilities. The Kubernetes community declared CSI stable and made it Generally Available (GA) in Kubernetes 1.13, which was released earlier this year. CSI improves the quality of life for both Dev and Ops staff. The developer gets a very consistent interface that allows them to consume storage resources. The Ops person benefits from a very consistent deployment model, as CSI drivers and relevant sidecar containers are simply Kubernetes workloads.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/slides-csi-hpedev-market-1566866211263.png&quot; alt=&quot;HPE CSI Driver for Kubernetes with HPE Nimble Storage CSP&quot;&gt;&lt;/p&gt;
&lt;p&gt;The Kubernetes implementation of CSI acts like an extension. This means the specification can evolve outside of &lt;a href=&quot;https://github.com/kubernetes/kubernetes&quot;&gt;kubernetes/kubernetes&lt;/a&gt; and be integrated selectively by downstream Kubernetes vendors. There’s an ongoing project to migrate all relevant Kubernetes in-tree storage drivers to CSI, so all Kubernetes users need to be familiar with CSI to leverage persistent storage in Kubernetes moving forward. The extensibility of CSI leans on a concept where you declare Custom Resource Definitions (CRDs) to extend the functionality of Kubernetes. Further, vendors may write their own CRDs to abstract vendor-specific capabilities for end-users. At HPE, we are providing one such example in our deployment where you can easily extract node information that is relevant from a storage mapping perspective.&lt;/p&gt;
&lt;p&gt;There are many advantages to using CSI over the previous interfaces that were available for storage vendors. At HPE, we used the parameter “overload” concept, as well as extensive PVC annotation, to abstract platform functionality for end-users. With CSI, many data management capabilities become API objects in Kubernetes. As an example, an underlying storage system snapshot becomes a snapshot API object.&lt;/p&gt;
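&lt;p&gt;To make that concrete, here is a minimal sketch of what requesting a snapshot as an API object can look like with the alpha snapshot API available at the time of writing; the class and PVC names are placeholders, so treat this as an illustration rather than a copy-paste recipe:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ cat my-snapshot.yaml
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: my-snapshot
spec:
  snapshotClassName: my-snapshot-class   # placeholder VolumeSnapshotClass
  source:
    name: my-pvc                         # placeholder PVC backed by a CSI volume
    kind: PersistentVolumeClaim

$ kubectl create -f my-snapshot.yaml
&lt;/code&gt;&lt;/pre&gt;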
&lt;p&gt;The table below gives you an idea of where we’ve been and where we’re going with CSI.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/slides-csi-hpedev-feat-1566866257740.png&quot; alt=&quot;CSI spec 1.1.0 capabilities implemented by the HPE CSI Driver for Kubernetes&quot;&gt;&lt;/p&gt;
&lt;h1&gt;Container Storage Provider&lt;/h1&gt;
&lt;p&gt;The new HPE CSI Driver architecture introduces a sidecar deployment we call a Container Storage Provider (CSP). The CSP is unique per storage platform and responds to a minimal set of APIs that interfaces with the storage platform. The HPE CSI Driver does all the heavy lifting on the nodes themselves that is common across storage platforms, such as attach/detach a block device or mount a remote filesystem. The key here is that introducing a new storage platform to Kubernetes using the new HPE CSI Driver will now be quite trivial, as you only need to respond at the API endpoint and don’t need any knowledge of CSI or Kubernetes. The CSP can be written in any language capable of providing a RESTful API on a network port. Some vendors may choose to keep the CSP proprietary to protect their IP as it’s just a microservice and part of a larger system that happens to be open source. Also, running the CSP on Kubernetes is not a requirement either, as it could be an external service as part of an appliance or delivered as SaaS.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/slides-csi-hpedev-1567184427188.png&quot; alt=&quot;Communication diagram for the HPE CSI Driver for Kubernetes&quot;&gt;&lt;/p&gt;
&lt;p&gt;In the communication diagram above, you can see what component talks to which endpoint and the distinct boundaries of responsibilities. Innovation will happen at the HPE CSI Driver layer, and CSPs will then be able to introduce the newly supported endpoints. For example, a future CSI spec will introduce replica API objects. The HPE CSI Driver takes care of all the minutiae the CSI CRDs need. When the new CSP API spec is ready, storage platform teams may implement the new API endpoint in their CSP and be confident that the CSI-specific work and potential host connectivity are already taken care of.&lt;/p&gt;
&lt;p&gt;The HPE CSI Driver determines which spec of CSI the running version of Kubernetes supports. The CSP requires no modification or gated behavior based on which CSI spec is enforced on the Kubernetes cluster. This means a single CSP can support a wide diversity of Kubernetes environments without modification.&lt;/p&gt;
&lt;h1&gt;HPE Nimble Storage public beta&lt;/h1&gt;
&lt;p&gt;Our initial beta driver includes a CSP for HPE Nimble Storage with other HPE platforms and services being developed as we speak. The CSP has been in the works for quite a while, and we expect full feature parity with our legacy FlexVolume plugin at its launch, plus a few additional features, as we’ve evolved these two implementations in tandem. The HPE CSI Driver currently supports the CSI v1.1.0 feature set, which the CSP implements: Dynamic Provisioning (with a comprehensive set of parameters), Raw Block Volume, Volume Snapshots, Volume Expansion, Volume Cloning, and Inline Ephemeral Volumes.&lt;/p&gt;
&lt;p&gt;We’ve also taken the opportunity to modernize the deployment of the entire solution with a &lt;a href=&quot;https://helm.sh&quot;&gt;Helm&lt;/a&gt; chart. Helm is used to package software that runs on Kubernetes to give users a consistent way of running their workloads. Since the HPE Nimble Storage CSP runs on Kubernetes, this is the most practical way to deploy it.&lt;/p&gt;
&lt;p&gt;Some of the features are easier to show than describe, so let us introduce the HPE CSI Driver for Kubernetes &lt;a href=&quot;https://www.youtube.com/watch?v=TK5H4o3Tg_s&quot;&gt;in this interview with the architect&lt;/a&gt;, followed by a demo.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=TK5H4o3Tg_s&quot;&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/hpecsi-beta-thumb-1566865524360.png&quot; alt=&quot;Introducing the multi-platform and multi-vendor HPE CSI Driver (beta) for Kubernetes&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; All the examples used in the demo can be found &lt;a href=&quot;https://github.com/NimbleStorage/container-examples/tree/master/misc/CSI-beta/K8s-1.15&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;As you can see, it’s quite easy to set up and use (you need an HPE Nimble Storage array to actually provision resources). We support a plethora of different use cases besides the few things we squeezed into the demo. Please see the comprehensive list of &lt;a href=&quot;https://github.com/hpe-storage/csi-driver/blob/master/examples/kubernetes/hpe-nimble-storage/README.md&quot;&gt;StorageClass parameters&lt;/a&gt; on GitHub.&lt;/p&gt;
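&lt;p&gt;To give a feel for what consuming the driver looks like, here is a minimal sketch of a StorageClass and a PersistentVolumeClaim; the provisioner name and the parameter shown are assumptions for illustration only, so check the StorageClass parameter list linked above for the exact values:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ cat hpe-storageclass-pvc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-standard
provisioner: csi.hpe.com            # assumed provisioner name; verify in the driver README
parameters:
  description: Volume created by the HPE CSI Driver for Kubernetes   # assumed parameter
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: hpe-standard

$ kubectl create -f hpe-storageclass-pvc.yaml
&lt;/code&gt;&lt;/pre&gt;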
&lt;h1&gt;Future&lt;/h1&gt;
&lt;p&gt;We expect to hit GA later in the fall. More platforms, protocols, and services should follow shortly. We also expect a few example open source CSPs to surface that will be able to use the HPE CSI Driver from developers who may not own any HPE product. Since everything needed to build a CSP for any storage platform is available in the spec, we might see any number of non-HPE CSP implementations.&lt;/p&gt;
&lt;p&gt;The team has been hard at work bringing all this functionality together in this beta. We can’t be more excited to share it with the community! Stay tuned to &lt;a href=&quot;https://hpedev.io&quot;&gt;hpedev.io&lt;/a&gt; for news and updates as we bring more platforms and features into the fold.&lt;/p&gt;
&lt;h2&gt;Resources&lt;/h2&gt;
&lt;p&gt;These are the various resources available around the HPE CSI Driver for Kubernetes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;HPE CSI Driver for Kubernetes on &lt;a href=&quot;https://github.com/hpe-storage/csi-driver&quot;&gt;GitHub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;HPE Nimble Storage CSP &lt;a href=&quot;https://github.com/hpe-storage/csi-driver/blob/master/examples/kubernetes/hpe-nimble-storage/README.md&quot;&gt;StorageClass parameters&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Container Storage Interface &lt;a href=&quot;https://github.com/container-storage-interface/spec&quot;&gt;specification&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Container Storage Provider &lt;a href=&quot;https://github.com/hpe-storage/container-storage-provider&quot;&gt;specification&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Container Storage Provider &lt;a href=&quot;https://developer.hpe.com/api/hpe-nimble-csp&quot;&gt;Swagger documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Join us on the &lt;a href=&quot;https://hpedev.slack.com/&quot;&gt;HPE DEV slack&lt;/a&gt;! We&apos;re hanging out in #Kubernetes and #NimbleStorage&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://community.hpe.com/t5/HPE-Storage-Tech-Insiders/HPE-CSI-Driver-for-Kubernetes-and-Red-Hat-OpenShift-in-beta/ba-p/7059941&quot;&gt;HPE CSI Driver for Kubernetes and Red Hat OpenShift in beta&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Meet the HPE DEV Team – Jeff Krenek and Vivek Kulkarni]]></title><description><![CDATA[As promised in my last Meet the HPE DEV Team blog, I’m going to introduce you to additional members of the HPE Dev Community. This time I’d…]]></description><link>https://developer.hpe.com/meet-the-hpe-dev-team-jeff-krenek-and-vivek-kulkarni/</link><guid isPermaLink="false">https://developer.hpe.com/meet-the-hpe-dev-team-jeff-krenek-and-vivek-kulkarni/</guid><pubDate>Wed, 28 Aug 2019 15:56:28 GMT</pubDate><content:encoded>&lt;p&gt;As promised in my last &lt;a href=&quot;/blog/meet-the-hpe-dev-team&quot;&gt;Meet the HPE DEV Team&lt;/a&gt; blog, I’m going to introduce you to additional members of the &lt;a href=&quot;https://developer.hpe.com/community&quot;&gt;HPE Dev Community.&lt;/a&gt; This time I’d like to focus on some of our &lt;a href=&quot;https://www.hpe.com/us/en/solutions/cloud/azure-hybrid-cloud.html&quot;&gt;Azure Stack&lt;/a&gt; experts, Jeff Krenek and Vivek Kulkarni, who work on accelerating the customer adoption of hybrid IT environments. As part of the Azure Stack Innovation Centers, they bring in customer and partner developers to experiment and try out Microsoft Azure Stack, which often results in these developers becoming better informed on the potential of using this platform.&lt;/p&gt;
&lt;p&gt;Jeff manages the code incubation team for HPE DEV as well as the Azure Stack Innovation Centers. He knows hybrid IT and &lt;a href=&quot;https://www.hpe.com/us/en/integrated-systems/hyper-converged.html&quot;&gt;hyperconverged solutions&lt;/a&gt; well, having previously acted as the lead product architect for HPE’s hyperconverged products. We rely on Jeff’s many years of experience. Not only can he pull the right people together to attend to a specific issue, as he does with our strike teams, he also knows a lot about what’s gone on before. He knows what’s worked, what hasn’t, and how to best apply previously obtained insights to new situations. Jeff was a born manager. It’s easy to see how he likes to support people and help them grow. The fact that he’s on the advisory board for the Computer Engineering Department for his alma mater attests to this, as does the fact that interns seem to do very well in this group. As the father of four kids in college, I suspect he’s had a lot of practice!&lt;/p&gt;
&lt;p&gt;Vivek is a Certified Specialist for Azure Solutions. He helps build the solutions for users and operators to run scripts and use cases for Azure Stack, as well as solutions to assist in Azure Stack migration. Although Vivek has only been with the team for a couple of years, he has a great deal of experience with Microsoft Azure, having worked at Microsoft for some time. Not only does Vivek assist our Azure Stack customers, but he also lends his Azure expertise to other groups inside HPE, including the &lt;a href=&quot;https://www.hpe.com/us/en/services/it-consumption.html&quot;&gt;HPE GreenLake&lt;/a&gt; team, to help them develop products and services around Azure. A graduate of B.V. Bhoomareddi College of Engineering and Technology, Vivek is a great addition to our team.&lt;/p&gt;
&lt;p&gt;As I pointed out in my last post, given that HPE is a leader in the technology industry, it has the advantage of being able to recruit some pretty impressive personalities. And they’re all very approachable. You can reach Jeff on Twitter &lt;a href=&quot;https://twitter.com/KrenekJeff&quot;&gt;@KrenekJeff&lt;/a&gt; and Vivek &lt;a href=&quot;https://twitter.com/@vkhpedev&quot;&gt;@vkhpedev.&lt;/a&gt; You can also connect with them on &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;Slack.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;But, enough from me. Why don’t you &lt;a href=&quot;https://www.youtube.com/watch?v=bbglDBHnLh0&amp;#x26;feature=youtu.be&quot;&gt;watch the video&lt;/a&gt; and let them tell you their own stories.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Developing a Client App for Accessing Services via the Open Service Broker API]]></title><description><![CDATA[picture13 The Open Service Broker (OSB) API gives customers a uniform and less complicated way to access services offered by providers. The…]]></description><link>https://developer.hpe.com/developing-a-client-app-for-accessing-services-via-the-open-service-brok/</link><guid isPermaLink="false">https://developer.hpe.com/developing-a-client-app-for-accessing-services-via-the-open-service-brok/</guid><pubDate>Wed, 28 Aug 2019 15:36:16 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture13-1567007521162.png&quot; alt=&quot;picture13&quot;&gt;&lt;/p&gt;
&lt;p&gt;The Open Service Broker (OSB) API gives customers a uniform and less complicated way to access services offered by providers. The service broker acts as the middleman that links the two parties, transferring and implementing services between them. It is the missing link that enables better service consumption, allowing various kinds of client applications to reach services from any provider. The OSB API is the language specification that provides this communication. In this tutorial, we will show you how to develop a client app for accessing services using OSB. For the purposes of this post, we will be using Grommet OSB Broker, a broker we deployed in AWS cloud.&lt;/p&gt;
&lt;p&gt;Note: For more information regarding service brokers, please see Pramod Sareddy’s blog post &lt;a href=&quot;/blog/using-open-service-broker-as-a-quick-and-easy-way-to-offer-everything-as&quot;&gt;Using Open Service Broker as a Quick and Easy Way to Offer Everything as-a-Service.&lt;/a&gt; You can also view another example walkthrough in Peng Liu’s post about &lt;a href=&quot;/blog/an-open-service-broker-project-delivers-a-sample-devops-environment-to-a&quot;&gt;An Open Service Broker Project Delivers a Sample DevOps Environment to AWS.&lt;/a&gt; To access API documentation, access the &lt;a href=&quot;https://www.openservicebrokerapi.org/&quot;&gt;Open Service Broker website&lt;/a&gt; and the &lt;a href=&quot;https://github.com/openservicebrokerapi/servicebroker/blob/master/spec.md&quot;&gt;OSB documentation repository.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The stand-alone client application we developed is an example of an OSB client with a graphical user interface (GUI). Its purpose is to communicate with service brokers and deploy and manage services for the user. The service broker we use in this example is one that provisions an EC2 instance on AWS and installs a Grommet environment for a developer.&lt;/p&gt;
&lt;p&gt;To register a service broker, you must first know the IP address of a running broker (e.g. ip-address:port). If you have this address, as well as valid user credentials (username and password), you will be able to connect to the broker. You should also choose a name and description for the local client app’s representation of that broker.&lt;/p&gt;
&lt;p&gt;Once registered, the client will fetch a broker’s catalog of services. The app will update to reflect the available services in the catalog page. From there, you can search for, or select, the desired service. Selecting a service will bring up a form for deploying an instance of that service. In this example, you will select the grommet service. You can then choose the service plan you want to use and fill in the inputs the broker needs. A plan is essentially a tier of service that may differ in capabilities, size, price, etc. from other plans.&lt;/p&gt;
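&lt;p&gt;Behind the GUI, each of these operations is a plain OSB REST call. As a rough sketch (the address and credentials below are placeholders for your own broker), fetching a broker&apos;s catalog boils down to:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# The X-Broker-API-Version header is required by the OSB specification
$ curl -u USER:PASSWORD \
    -H &quot;X-Broker-API-Version: 2.14&quot; \
    http://BROKER-ADDRESS:PORT/v2/catalog
&lt;/code&gt;&lt;/pre&gt;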
&lt;p&gt;Here is the workflow for using the Grommet service via our app:&lt;/p&gt;
&lt;h2&gt;Step 0 - Download and Install the App&lt;/h2&gt;
&lt;p&gt;Follow the directions in the README.md and download the source code at the &lt;a href=&quot;https://github.com/HewlettPackard/hpe-openservicebroker-clientapp&quot;&gt;HPE public GitHub repository.&lt;/a&gt; Installation is simple and fast, and there is no backend to the app. The only prerequisites to have installed are Node &gt;= 8.10 and yarn.&lt;/p&gt;
&lt;h2&gt;Step 1 - Log in to the App&lt;/h2&gt;
&lt;p&gt;Authentication is not really implemented at this time. Simply enter ‘user’ for the username and ‘password’ for the password to access the platform.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture1-1567007165491.png&quot; alt=&quot;picture1&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Step 2 – Deploy a broker&lt;/h2&gt;
&lt;p&gt;In the real world, the broker could be deployed out on the Internet, on a local server, or anywhere in between. For the purposes of this post, we&apos;ll be using the Grommet OSB Broker, deployed on AWS.&lt;/p&gt;
&lt;h2&gt;Step 3 - Register Grommet Broker&lt;/h2&gt;
&lt;p&gt;On the broker settings page, you can register a broker. You should click the ‘add broker’ tile and then complete the register form. You can choose a name and an optional description. The broker’s address (URL) must be known in order to access it, and a username and password are required to authenticate to the broker. The username and password to register our broker are both ‘ubuntu’. If the registration fails, an alert will display an error message. Otherwise, the catalog for the broker will be fetched, which we will soon see reflected in the catalog page of our platform. The broker settings page will then be populated with a new broker tile, displaying the name and description of the registered broker. Clicking on this tile will display a details panel that shows the broker’s address, status, and time created. You may choose to edit the broker’s name and description, or delete the broker. Deleting the broker would remove its services from the catalog.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture5-1567007472045.png&quot; alt=&quot;picture5&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture6-1567007478316.png&quot; alt=&quot;picture6&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Step 4 - Select Grommet Service from Catalog&lt;/h2&gt;
&lt;p&gt;You can now click on the ‘Catalog’ link to arrive at a page containing the broker’s service as a tile. There is also a search field that could be useful for finding a specific service by name if there are many on the page. Clicking the ‘grommet’ service tile will enable you to deploy an instance of that service.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture7-1567007485144.png&quot; alt=&quot;picture7&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Step 5 - Deploy Service Instance&lt;/h2&gt;
&lt;p&gt;You should select the desired plan for the service. Choosing plan-1 for the Grommet service will produce a form that requires inputs for AWS implementation details. You must also name the instance, and that name must be unique. After clicking the submit button, the app will instruct the broker to instantiate the service. If something goes wrong with the API call, an error will be displayed, and no instance will be created. If the provisioning is successful, you will be redirected to the Deployed Instances part of the app. You will see a tile that bears the name of the instance you just created.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture8-1567007492340.png&quot; alt=&quot;picture8&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Step 6 - Access the Instance&lt;/h2&gt;
&lt;p&gt;Clicking on the aforementioned tile will produce a details panel for accessing that instance. The status should be ‘loading’, because the EC2 provisioning takes a while to process on the broker’s end. The app will continuously poll the broker’s last-operation endpoint to retrieve the status of the deployment. It will only do so for as long as the broker’s maximum-polling property will allow it to, if specified. Eventually, the status should turn green and say ‘loaded’. You will then have access to the instance’s access details, i.e. the public IP address of the EC2 instance.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture9-1567007498208.png&quot; alt=&quot;picture9&quot;&gt;&lt;/p&gt;
&lt;p&gt;Deployed&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture10-1567007504694.png&quot; alt=&quot;picture10&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture11-1567007510153.png&quot; alt=&quot;picture11&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Step 7 - Delete the Service Instance&lt;/h2&gt;
&lt;p&gt;On the same panel, you can choose to delete the instance. Clicking the delete button will remove the service instance and stop the EC2 instance. Do this only when you are done with the AWS virtual machine!&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture12-1567007515231.png&quot; alt=&quot;picture12&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Next Steps&lt;/h2&gt;
&lt;p&gt;The OSB specification recommends an endpoint to bind/unbind to a service instance. Essentially, this is where the app and the service instance “connect”. This is currently unavailable in our app and will be developed in the future. After a service instance has been provisioned, you should see a button labeled ‘Bind’ that will initiate an HTTP PUT call to the /v2/service_instances/:instance_id/service_bindings/:binding_id endpoint. This feature must also handle polling the /v2/service_instances/:instance_id/service_bindings/:binding_id/last_operation endpoint until the bind/unbind returns the state ‘succeeded’. These operations will be used by more complex brokers.&lt;/p&gt;
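&lt;p&gt;For reference, here is a rough sketch of what that bind call looks like at the HTTP level; the IDs, address, and credentials are placeholders, and the service_id and plan_id values come from the broker&apos;s catalog:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Create a binding; accepts_incomplete=true allows the broker to complete the bind asynchronously
$ curl -u USER:PASSWORD -X PUT \
    -H &quot;X-Broker-API-Version: 2.14&quot; \
    -H &quot;Content-Type: application/json&quot; \
    -d &apos;{&quot;service_id&quot;: &quot;SERVICE-ID&quot;, &quot;plan_id&quot;: &quot;PLAN-ID&quot;}&apos; \
    &quot;http://BROKER-ADDRESS:PORT/v2/service_instances/INSTANCE-ID/service_bindings/BINDING-ID?accepts_incomplete=true&quot;

# Poll until the binding reports state &quot;succeeded&quot;
$ curl -u USER:PASSWORD -H &quot;X-Broker-API-Version: 2.14&quot; \
    http://BROKER-ADDRESS:PORT/v2/service_instances/INSTANCE-ID/service_bindings/BINDING-ID/last_operation
&lt;/code&gt;&lt;/pre&gt;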
&lt;p&gt;Since the OSB specification says authentication is optional, we’ve only implemented basic authentication. This can be enhanced in the future to use JSON web tokens or more robust techniques. Also, session management can be implemented after a user is authenticated successfully. This would involve creating a session store on the backend using an in-memory database like Redis. Right now, if a user refreshes the page, the session info is lost.&lt;/p&gt;
&lt;h2&gt;Resources&lt;/h2&gt;
&lt;p&gt;Access the Open Service Broker Client App &lt;a href=&quot;https://github.com/HewlettPackard/hpe-openservicebroker-clientapp&quot;&gt;here on GitHub.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Click &lt;a href=&quot;https://www.youtube.com/watch?v=ERwrlvc1KdU&amp;#x26;feature=youtu.be&quot;&gt;here to view a video&lt;/a&gt; that steps you through the process.&lt;/p&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;We had a great time developing this client example and hope you will find it helpful in testing and implementing your service brokers. Please remember to follow our &lt;a href=&quot;/blog&quot;&gt;HPE DEV blog&lt;/a&gt; posts for more information on this and other topics designed to streamline your application development environment.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[CodeWars, A Unique, Mind Boggling Coding Competition for Senior High School Students]]></title><description><![CDATA[picture10 On July 27, 2019, Hewlett Packard Enterprise (HPE) held an exciting and fun event at the HPE campus in Mahadevapura in Bengaluru…]]></description><link>https://developer.hpe.com/codewars-a-unique-mind-boggling-coding-competition-for-senior-high-schoo/</link><guid isPermaLink="false">https://developer.hpe.com/codewars-a-unique-mind-boggling-coding-competition-for-senior-high-schoo/</guid><pubDate>Wed, 21 Aug 2019 22:29:46 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture10-1566426968816.png&quot; alt=&quot;picture10&quot;&gt;&lt;/p&gt;
&lt;p&gt;On July 27, 2019, Hewlett Packard Enterprise (HPE) held an exciting and fun event at the HPE campus in Mahadevapura in Bengaluru -- the 4th edition of CodeWars. A legacy that started 22 years ago in Houston with just 80 students has now evolved to become one of the most anticipated and sought after events exclusively organized for 11th and 12th grade students – &lt;a href=&quot;http://www.hpcodewars.org/&quot;&gt;CodeWars!&lt;/a&gt; CodeWars is a high school computer programming competition designed to spark interest in careers in science, technology, engineering, and math (STEM).&lt;/p&gt;
&lt;p&gt;CodeWars has become synonymous with a battlefield for budding coders across all regions where HPE, the company that was recognized as one of &lt;a href=&quot;https://www.forbes.com/best-employers-for-new-grads/list/#tab:overall&quot;&gt;the Best Employers for New College Graduates by Forbes (2018),&lt;/a&gt; has an established presence. In this competition, HPE challenges young coders using 20 problems of increasing levels of complexity that are to be cracked within a time frame of 3 hours. This particular CodeWars event successfully combined several quality ingredients: a high-tech setting, a wide range of programming challenges, plenty of food, music, and giveaways - all in an exciting, stimulating, and competitive environment.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture11-1566426977946.png&quot; alt=&quot;picture11&quot;&gt;&lt;/p&gt;
&lt;p&gt;CodeWars gives students the opportunity to see what it is like to code for a technology company. At these events, students get to solve the real-life problems they are likely to encounter in their future professional careers. Those in attendance found it exciting to see a completely packed cafeteria with enthusiastic students eagerly looking forward to a day full of challenging competitions. While a few students looked nervous, others chilled out, playing solitaire to pass the time. About 100 teams from various schools across the city participated in the event. All students put their best foot forward battling for the winning title.&lt;/p&gt;
&lt;p&gt;Students used some of the most popular programming languages, such as Python, Java, and C++, to crack the problems. The challenges ranged from questions as simple as finding the difference between a number and its reverse to more complex ones, such as finding circular primes.&lt;/p&gt;
&lt;p&gt;In addition to coding challenges, the students were quizzed on HPE trivia. It was exhilarating to see how the students answered these questions with ease, including some tricky ones such as what landmark innovation of HP is used in households even today and what term is used to define the green HPE rectangular logo. The students weren’t the only ones challenged, however. The judges themselves found it very difficult to narrow the selection down to just 3 winners. Because of this, they decided to make some special mentions and give a shout-out to a few of the top-performing teams.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture12-1566427031557.png&quot; alt=&quot;picture12&quot;&gt;&lt;/p&gt;
&lt;p&gt;At CodeWars, students often go all out and dress up in costumes reflecting their favorite fantasy characters. The extensive effort that participants put in for this event was clearly visible. They were not just determined to ace the tough coding challenges, but they also left no stone unturned in sporting the best possible costumes. Attendees saw so many iconic characters come alive that the event almost doubled as a mini Comicon. The dedication amongst students was such that many teams coded with their masks and accessories on throughout the entire event.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture13-1566427040293.png&quot; alt=&quot;picture13&quot;&gt;&lt;/p&gt;
&lt;p&gt;To summarize, CodeWars 2019 India was a great concoction of quality ingredients: a high-tech setting, a wide range of programming challenges, plenty of coding, and tempting giveaways -- all delivered in an exciting, stimulating, and competitive environment.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[How to Register a Grommet OSB Broker in a Kubernetes Service Catalog]]></title><description><![CDATA[registering a broker In my previous article, Using Open Service Broker as a Quick and Easy Way to Offer Everything as-a-Service, we examined…]]></description><link>https://developer.hpe.com/how-to-register-a-grommet-osb-broker-in-a-kubernetes-service-catalog/</link><guid isPermaLink="false">https://developer.hpe.com/how-to-register-a-grommet-osb-broker-in-a-kubernetes-service-catalog/</guid><pubDate>Wed, 21 Aug 2019 18:44:52 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/registering-a-broker-1566920379874.png&quot; alt=&quot;registering a broker&quot;&gt;&lt;/p&gt;
&lt;p&gt;In my previous article, &lt;a href=&quot;/blog/using-open-service-broker-as-a-quick-and-easy-way-to-offer-everything-as&quot;&gt;Using Open Service Broker as a Quick and Easy Way to Offer Everything as-a-Service,&lt;/a&gt; we examined what a Open Service Broker (OSB) API is and how it can be used to expose the Grommet development environment as-a-Service. Now, I would like to show you how to register and consume services offered by the Grommet OSB Broker in a Kubernetes Service Catalog to provision, bind, unbind, and deprovision a Grommet Dev Instance.&lt;/p&gt;
&lt;p&gt;This tutorial will be helpful for developers in many companies who today deploy Kubernetes clusters to ensure scalability for their applications. Applications running inside Kubernetes clusters may need access to third-party services, like databases or additional storage, and you need to be able to provide that service to app developers as part of the Kubernetes Service Catalog. One way of exposing a service is to use OSB. Once you register your broker inside the Kubernetes Service Catalog, you can see the service, and then you can provision and bind the service to your application.&lt;/p&gt;
&lt;p&gt;This tutorial assumes that you&apos;ve installed a Service Catalog onto your Kubernetes cluster. If you haven&apos;t, please see the &lt;a href=&quot;https://github.com/kubernetes-sigs/service-catalog/blob/master/docs/install.md&quot;&gt;installation instructions.&lt;/a&gt; Optionally you may install the Service Catalog CLI, svcat. Examples for both svcat and kubectl are provided.&lt;/p&gt;
&lt;p&gt;All commands shown assume that you&apos;re operating out of the root of this repository.&lt;/p&gt;
&lt;p&gt;In the figure below, you can see the overall architecture setup of the Kubernetes Service Catalog running on premise interacting with Grommet OSB broker running in AWS cloud to provision, bind, and deprovision a Grommet Dev Instance in the cloud.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture1-1566414261415.png&quot; alt=&quot;picture1&quot;&gt;&lt;/p&gt;
&lt;p&gt;In this architecture, the Kubernetes Service Catalog establishes an association with the Grommet OSB broker by sending a GET request to the /v2/catalog endpoint, which then responds with a 200 OK and a body containing all the information about the services it offers. Kubernetes stores and exposes these services to consumers via the Kubernetes Service Catalog. Cloud operators will instantiate the service by sending a PUT request to the /v2/service_instances/:instance_id endpoint. The broker will do the actual provisioning of the service instance. Let’s discuss each of these components individually.&lt;/p&gt;
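&lt;p&gt;To make the provisioning step concrete, here is a rough sketch of the PUT request the Service Catalog issues against that endpoint on your behalf; the credentials, instance ID, and GUIDs are placeholders, and the service_id and plan_id values come from the catalog response:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ curl -u USER:PASSWORD -X PUT \
    -H &quot;X-Broker-API-Version: 2.14&quot; \
    -H &quot;Content-Type: application/json&quot; \
    -d &apos;{&quot;service_id&quot;: &quot;SERVICE-ID&quot;, &quot;plan_id&quot;: &quot;PLAN-ID&quot;, &quot;organization_guid&quot;: &quot;ORG-GUID&quot;, &quot;space_guid&quot;: &quot;SPACE-GUID&quot;}&apos; \
    http://3.86.206.101:8099/v2/service_instances/INSTANCE-ID
&lt;/code&gt;&lt;/pre&gt;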
&lt;h2&gt;Kubernetes Internal Architecture&lt;/h2&gt;
&lt;p&gt;To start, I will briefly cover the basics of how Kubernetes works internally. There is an API server that listens to user requests. Users perform most actions by declaratively describing Kubernetes resources in yaml files that get written through to Etcd, a shared key-value store. In my example here, a user is declaring some object Foo, which creates a record in Etcd.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture2-1566414265789.png&quot; alt=&quot;picture2&quot;&gt;&lt;/p&gt;
&lt;p&gt;As you can see, we have a Foo Controller, which is watching the shared Etcd through the API Server for any changes in Foo objects. Now that we just created a new one, our Foo Controller sees the change and begins taking actions to implement this change of state.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture3-1566414269747.png&quot; alt=&quot;picture3&quot;&gt;&lt;/p&gt;
&lt;p&gt;This results in the Foo controller creating a new Foo object. Depending on what Foo is, this action could be doing something directly or it could be sending a command to another Kubernetes component. Nevertheless, the general principle remains the same. This particular example was for some object-type Foo, but there are many resources in Kubernetes and, correspondingly, many API servers and controllers.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture4-1566414273123.png&quot; alt=&quot;picture4&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Kubernetes Service Catalog&lt;/h2&gt;
&lt;p&gt;A Kubernetes Service Catalog is an extension API that enables applications running in Kubernetes clusters to easily use externally managed software offerings, such as a datastore service offered by a cloud provider or a standalone VM, like a Grommet Dev Environment as-a-Service.&lt;/p&gt;
&lt;p&gt;The service catalog provides a way to list, provision, and bind with externally managed services from service brokers without needing detailed knowledge about how those services are created or managed.&lt;/p&gt;
&lt;p&gt;With that in mind, here is an overview of what the service catalog looks like. It’s a custom Kube API server and controller that maintains the state for five new resource types that correspond to their equivalents from the &lt;a href=&quot;https://www.openservicebrokerapi.org&quot;&gt;OSB API.&lt;/a&gt; The controller implements the client side of the OSB API, allowing it to query service brokers for their catalogs, and to manipulate them so it can provision and maintain services. It also makes use of a native Kubernetes resource, Secrets, to inject credentials for service bindings into running Pods, but more on that in a moment.&lt;/p&gt;
&lt;p&gt;Everything I mentioned about the OSB architecture in my &lt;a href=&quot;/blog/using-open-service-broker-as-a-quick-and-easy-way-to-offer-everything-as&quot;&gt;previous blog,&lt;/a&gt; is contained in this interface between the controller and the service brokers. App developers don’t have to be aware of it, and they can continue to use Kubernetes the same as before, issuing normal CRUD commands through the API service.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture5-1566414276654.png&quot; alt=&quot;picture5&quot;&gt;&lt;/p&gt;
&lt;p&gt;Once again, this tutorial assumes that you&apos;ve installed a Service Catalog onto your Kubernetes cluster. If you haven&apos;t, please see the &lt;a href=&quot;https://github.com/kubernetes-sigs/service-catalog/blob/master/docs/install.md&quot;&gt;installation instructions.&lt;/a&gt; Optionally, you may install the Service Catalog CLI, svcat. Examples for both svcat and kubectl are provided.&lt;/p&gt;
&lt;p&gt;All commands in this post assume that you&apos;re operating out of the root of this repository.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;NOTE: For the purposes of this post, we&apos;ll be using Grommet OSB broker, a broker that we deployed in AWS cloud and one that is accessible at this &lt;a href=&quot;http://3.86.206.101:8099/&quot;&gt;endpoint.&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Step 1 – Deploy a broker&lt;/h2&gt;
&lt;p&gt;In the real world, the broker could be deployed within our cluster, next to our cluster in the same data center, out on the Internet, or anywhere in between. For the purpose of this post, we&apos;ll be using Grommet OSB broker, a broker that we deployed in AWS cloud.&lt;/p&gt;
&lt;h2&gt;Step 2 - Register Grommet Broker&lt;/h2&gt;
&lt;p&gt;In this second step, the cluster operator creates a ClusterServiceBroker resource within the servicecatalog.k8s.io group. This resource contains the URL and connection details necessary to access a service broker endpoint. The service catalog controller manager triggers a call to the external service broker for a list of all available services. The service broker returns a list of available managed services and a list of service plans, which are cached locally as ClusterServiceClass and ClusterServicePlan resources respectively. A cluster operator can then get the list of available managed services and service plans using the kubectl get clusterserviceclasses or clusterserviceplans commands.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture6-1566414280554.png&quot; alt=&quot;picture6&quot;&gt;&lt;/p&gt;
&lt;p&gt;Because we haven&apos;t created any resources in the service-catalog API server yet, querying the service catalog returns an empty list of resources:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ svcat get brokers
  NAME   URL   STATUS
+------+-----+--------+

$ kubectl get clusterservicebrokers,clusterserviceclasses,serviceinstances,servicebindings
No resources found.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We&apos;ll register a broker server with the catalog by creating a new ClusterServiceBroker resource:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ cat grommet-broker-clusterservicebroker.yaml
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ClusterServiceBroker
metadata:
  name: grommet-broker
spec:
  url: http://3.86.206.101:8099
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ kubectl create -f grommet-broker-clusterservicebroker.yaml
clusterservicebroker.servicecatalog.k8s.io/grommet-broker created
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When we create this ClusterServiceBroker resource, the service catalog controller responds by querying the broker server to see what services it offers and creates a ClusterServiceClass for each.&lt;/p&gt;
&lt;p&gt;We can check the status of the broker by entering the following commands:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ svcat describe broker grommet-broker
 Name:     grommet-broker
  Scope:    cluster
  URL:      http://3.86.206.101:8099
  Status:   Ready - Successfully fetched catalog entries from broker @ 2019-07-09 19:17:10 +0000 UTC

$ kubectl get clusterservicebrokers grommet-broker -o yaml
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ClusterServiceBroker
metadata:
  creationTimestamp: &quot;2019-07-09T19:17:10Z&quot;
  finalizers:
  - kubernetes-incubator/service-catalog
  generation: 1
  name: grommet-broker
  resourceVersion: &quot;8&quot;
  selfLink: /apis/servicecatalog.k8s.io/v1beta1/clusterservicebrokers/grommet-broker
  uid: 266bed9b-a27e-11e9-9f3e-3aac54c90eba
spec:
  relistBehavior: Duration
  relistRequests: 0
  url: http://3.86.206.101:8099
status:
  conditions:
  - lastTransitionTime: &quot;2019-07-09T19:17:10Z&quot;
    message: Successfully fetched catalog entries from broker.
    reason: FetchedCatalog
    status: &quot;True&quot;
    type: Ready
  lastCatalogRetrievalTime: &quot;2019-07-09T19:17:10Z&quot;
  reconciledGeneration: 1
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Notice that the status reflects that the broker&apos;s catalog of service offerings has been successfully added to our cluster&apos;s service catalog.&lt;/p&gt;
&lt;h2&gt;Step 3 – Viewing ClusterServiceClasses and ClusterServicePlans&lt;/h2&gt;
&lt;p&gt;The controller has already created a ClusterServiceClass for each service that the grommet broker provides. We can view the ClusterServiceClass resources available:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ svcat get classes
   NAME     NAMESPACE     DESCRIPTION
+---------+-----------+-----------------+
  grommet               grommet service

$ kubectl get clusterserviceclasses
NAME                                   EXTERNAL-NAME   BROKER           AGE
97ca7e25-8f63-44a7-99d1-a75729ebfb5e   grommet         grommet-broker   4m30s

&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;NOTE: The above kubectl command uses a custom set of columns. The NAME field is the Kubernetes name of the ClusterServiceClass and the EXTERNAL-NAME field is the human-readable name for the service that the broker returns.&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ svcat describe class grommet
  Name:              grommet
  Scope:             cluster
  Description:       grommet service
  Kubernetes Name:   97ca7e25-8f63-44a7-99d1-a75729ebfb5e
  Status:            Active
  Tags:              ui, grommet
  Broker:            grommet-broker

Plans:
       NAME         DESCRIPTION
+----------------+----------------+
  grommet-plan-1   t2.micro instance with NodeJS
  grommet-plan-2   t2.small instance with NodeJS

$ kubectl get clusterserviceclasses 97ca7e25-8f63-44a7-99d1-a75729ebfb5e -o yaml
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ClusterServiceClass
metadata:
  creationTimestamp: &quot;2019-07-09T19:17:10Z&quot;
  name: 97ca7e25-8f63-44a7-99d1-a75729ebfb5e
  ownerReferences:
  - apiVersion: servicecatalog.k8s.io/v1beta1
    blockOwnerDeletion: false
    controller: true
    kind: ClusterServiceBroker
    name: grommet-broker
    uid: 266bed9b-a27e-11e9-9f3e-3aac54c90eba
  resourceVersion: &quot;5&quot;
  selfLink: /apis/servicecatalog.k8s.io/v1beta1/clusterserviceclasses/97ca7e25-8f63-44a7-99d1-a75729ebfb5e
  uid: 268d3344-a27e-11e9-9f3e-3aac54c90eba
spec:
  bindable: true
  bindingRetrievable: false
  clusterServiceBrokerName: grommet-broker
  description: grommet service
  externalID: 97ca7e25-8f63-44a7-99d1-a75729ebfb5e
  externalMetadata:
    displayName: The Grommet Broker
    listing:
      blurb: Add a blurb here
      imageUrl: http://example.com/cat.gif
      longDescription: UI component library, in a galaxy far far away...
    provider:
      name: The grommet
  externalName: grommet
  planUpdatable: true
  requires:
  - route_forwarding
  tags:
  - ui
  - grommet
status:
  removedFromBrokerCatalog: false
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Additionally, the controller created a ClusterServicePlan for each of the plans for the broker&apos;s services. We can view the ClusterServicePlan resources available in the cluster:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ svcat get plans
       NAME        NAMESPACE    CLASS     DESCRIPTION
+----------------+-----------+---------+----------------+
  grommet-plan-1               grommet   t2.micro instance with NodeJS
  grommet-plan-2               grommet   t2.small instance with NodeJS

$ kubectl get clusterserviceplans
NAME                                   EXTERNAL-NAME    BROKER           CLASS                                  AGE
2a44ed0e-2c09-4be6-8a81-761ddba2f733   grommet-plan-1   grommet-broker   97ca7e25-8f63-44a7-99d1-a75729ebfb5e   7m2s
e3c4f66b-b7ae-4f64-b5a3-51c910b19ac0   grommet-plan-2   grommet-broker   97ca7e25-8f63-44a7-99d1-a75729ebfb5e   7m2s

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can view the details of a ClusterServicePlan with this command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ svcat describe plan grommet/grommet-plan-1

$ kubectl get clusterserviceplans 2a44ed0e-2c09-4be6-8a81-761ddba2f733 -o yaml
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ClusterServicePlan
metadata:
  creationTimestamp: &quot;2019-07-09T19:17:10Z&quot;
  name: 2a44ed0e-2c09-4be6-8a81-761ddba2f733
  ownerReferences:
  - apiVersion: servicecatalog.k8s.io/v1beta1
    blockOwnerDeletion: false
    controller: true
    kind: ClusterServiceBroker
    name: grommet-broker
    uid: 266bed9b-a27e-11e9-9f3e-3aac54c90eba
  resourceVersion: &quot;6&quot;
  selfLink: /apis/servicecatalog.k8s.io/v1beta1/clusterserviceplans/2a44ed0e-2c09-4be6-8a81-761ddba2f733
  uid: 268e25b5-a27e-11e9-9f3e-3aac54c90eba
spec:
  clusterServiceBrokerName: grommet-broker
  clusterServiceClassRef:
    name: 97ca7e25-8f63-44a7-99d1-a75729ebfb5e

&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Step 4 – Creating a New ServiceInstance&lt;/h2&gt;
&lt;p&gt;Here, the cluster operator initiates the provisioning of a new instance by creating a ServiceInstance resource within the servicecatalog.k8s.io group. When the ServiceInstance resource is created, the service catalog controller manager initiates a call to the external service broker to provision an instance of the service.&lt;/p&gt;
&lt;p&gt;The service broker creates a new instance of the managed service and returns an HTTP response. A cluster operator can then check the status of the instance to see if it is ready.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture7-1566414285460.png&quot; alt=&quot;picture7&quot;&gt;&lt;/p&gt;
&lt;p&gt;Now that a ClusterServiceClass named grommet exists within our cluster&apos;s service catalog, we can create a ServiceInstance that points to it.&lt;/p&gt;
&lt;p&gt;Unlike ClusterServiceBroker and ClusterServiceClass resources, ServiceInstance resources must be namespaced. Create a namespace with the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ kubectl create namespace grommet-ns
namespace/grommet-ns created
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then, create the ServiceInstance:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ cat grommet-broker-instance.yaml
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: grommet-broker-instance
  namespace: grommet-ns
spec:
  clusterServiceClassExternalName: grommet
  clusterServicePlanExternalName: grommet-plan-1
  parameters:
    region: &quot;us-east-1&quot;
    Access_Key_ID: &quot;XXXXXXXXXXXXXX&quot;
    Secret_Access_Key: &quot;XXXXXXXXXXX&quot;
    Image_ID: &quot;ami-05f07ee3c7aa&quot;
    Flavor: &quot;t2.small&quot;
    NodeJS_version: &quot;12.1.0&quot;

$ kubectl create -f grommet-broker-instance.yaml -n grommet-ns
serviceinstance.servicecatalog.k8s.io/grommet-broker-instance created
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After the ServiceInstance is created, the service catalog controller will communicate with the appropriate broker server to initiate provisioning. Check the status of that process:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ svcat describe instance -n grommet-ns grommet-broker-instance
  Name:           grommet-broker-instance
  Namespace:      grommet-ns
  Status:         Ready - The instance was provisioned successfully @ 2019-07-09 23:15:56 +0000 UTC
  DashboardURL:   http://:3000
  Class:          grommet
  Plan:           grommet-plan-1

Parameters:
  Access_Key_ID: XXXXXXXXX
  Flavor: t2.small
  Image_ID: ami-05f07ee3c7aa
  NodeJS_version: 12.1.0
  Secret_Access_Key: XXXXXXXXXXX
  region: us-east-1

Bindings:
No bindings defined

$ kubectl get serviceinstances -n grommet-ns grommet-broker-instance -o yaml
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  creationTimestamp: &quot;2019-07-09T22:40:41Z&quot;
  finalizers:
  - kubernetes-incubator/service-catalog
  generation: 1
  name: grommet-broker-instance
  namespace: grommet-ns
  resourceVersion: &quot;83&quot;
  selfLink: /apis/servicecatalog.k8s.io/v1beta1/namespaces/grommet-ns/serviceinstances/grommet-broker-instance
  uid: 9479a488-a29a-11e9-9f3e-3aac54c90eba
spec:
  clusterServiceClassExternalName: grommet
  clusterServiceClassRef:
    name: 97ca7e25-8f63-44a7-99d1-a75729ebfb5e
  clusterServicePlanExternalName: grommet-plan-1
  clusterServicePlanRef:
    name: 2a44ed0e-2c09-4be6-8a81-761ddba2f733
  externalID: 9479a40b-a29a-11e9-9f3e-3aac54c90eba
  parameters:
    Access_Key_ID: XXXXXXXXXX
    Flavor: t2.small
    Image_ID: ami-05f07ee3c7aaadaaa
    NodeJS_version: 12.1.0
    Secret_Access_Key: XXXXXXXXXX
    region: us-east-1
  updateRequests: 0
  userInfo:
    groups:
    - system:masters
    - system:authenticated
    uid: &quot;&quot;
    username: kubernetes-admin
status:
  asyncOpInProgress: false
  conditions:
  - lastTransitionTime: &quot;2019-07-09T23:15:56Z&quot;
    message: The instance was provisioned successfully
    reason: ProvisionedSuccessfully
    status: &quot;True&quot;
    type: Ready
  dashboardURL: http://:3000
  deprovisionStatus: Required
  externalProperties:
    clusterServicePlanExternalID: 2a44ed0e-2c09-4be6-8a81-761ddba2f733
    clusterServicePlanExternalName: grommet-plan-1
    parameterChecksum: 2ffa186d88170935135d51e53d4048f2950386d5e3a54e08e811bac054f78779
    parameters:
      Access_Key_ID: XXXXXXXXXX
      Flavor: t2.small
      Image_ID: ami-05f07ee3c7aaadaaa
      NodeJS_version: 12.1.0
      Secret_Access_Key: XXXXXXXXXX
      region: us-east-1
    userInfo:
      groups:
      - system:masters
      - system:authenticated
      uid: &quot;&quot;
      username: kubernetes-admin
  observedGeneration: 1
  orphanMitigationInProgress: false
  provisionStatus: Provisioned
  reconciledGeneration: 1
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Step 5 – Requesting a ServiceBinding to use the ServiceInstance&lt;/h2&gt;
&lt;p&gt;After a new instance has been provisioned, a cluster operator must bind to the managed service to get the connection credentials and service account details the application needs to access the service. This is done by creating a ServiceBinding resource.&lt;/p&gt;
&lt;p&gt;After the ServiceBinding is created, the service catalog makes a call to the external service broker requesting the information necessary to bind with the service instance.&lt;/p&gt;
&lt;p&gt;The service broker enables the application permissions/roles for the appropriate service account.&lt;/p&gt;
&lt;p&gt;The service broker returns the information necessary to connect and access the managed service instance. This is provider and service-specific so the information returned may differ between service providers and their managed services. In our case, the Grommet broker returns the Grommet Dev Instance access credentials.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture8-1566414289475.png&quot; alt=&quot;picture8&quot;&gt;&lt;/p&gt;
&lt;p&gt;Now that our ServiceInstance has been created, we can bind to it. Create a ServiceBinding resource:&lt;/p&gt;
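&lt;p&gt;The contents of grommet-broker-binding.yaml aren&apos;t listed in this post, but a minimal manifest, reconstructed from the spec fields that appear in the binding output later in this step, would look roughly like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ cat grommet-broker-binding.yaml
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: grommet-broker-binding
  namespace: grommet-ns
spec:
  instanceRef:
    name: grommet-broker-instance
  secretName: grommet-broker-binding
&lt;/code&gt;&lt;/pre&gt;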
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ kubectl create -f grommet-broker-binding.yaml
servicebinding.servicecatalog.k8s.io/grommet-broker-binding created
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After the ServiceBinding resource is created, the service catalog controller will communicate with the appropriate broker server to initiate binding. Generally, this will cause the broker server to create and issue credentials that the service catalog controller will insert into a Kubernetes Secret. We can check the status of this process like so:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ svcat describe binding -n grommet-ns grommet-broker-binding
  Name:        grommet-broker-binding
  Namespace:   grommet-ns
  Status:      Ready - Injected bind result @ 2019-07-09 23:37:18 +0000 UTC
  Secret:      grommet-broker-binding
  Instance:    grommet-broker-instance

Parameters:
  No parameters defined

Secret Data:
  uri        25 bytes
  username   6 bytes

$ kubectl get servicebindings -n grommet-ns grommet-broker-binding -o yaml
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  creationTimestamp: &quot;2019-07-09T23:37:17Z&quot;
  finalizers:
  - kubernetes-incubator/service-catalog
  generation: 1
  name: grommet-broker-binding
  namespace: grommet-ns
  resourceVersion: &quot;90&quot;
  selfLink: /apis/servicecatalog.k8s.io/v1beta1/namespaces/grommet-ns/servicebindings/grommet-broker-binding
  uid: 7cf4999a-a2a2-11e9-9f3e-3aac54c90eba
spec:
  externalID: 7cf498f3-a2a2-11e9-9f3e-3aac54c90eba
  instanceRef:
    name: grommet-broker-instance
  secretName: grommet-broker-binding
  userInfo:
    groups:
    - system:masters
    - system:authenticated
    uid: &quot;&quot;
    username: kubernetes-admin
status:
  asyncOpInProgress: false
  conditions:
  - lastTransitionTime: &quot;2019-07-09T23:37:18Z&quot;
    message: Injected bind result
    reason: InjectedBindResult
    status: &quot;True&quot;
    type: Ready
  externalProperties:
    userInfo:
      groups:
      - system:masters
      - system:authenticated
      uid: &quot;&quot;
      username: kubernetes-admin
  orphanMitigationInProgress: false
  reconciledGeneration: 1
  unbindStatus: Required
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Notice that the status has a Ready condition set. This means our binding is ready to use! If we look at the Secrets in our grommet-ns namespace, we should see a new one:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ kubectl get secrets -n grommet-ns
NAME                     TYPE                                  DATA   AGE
default-token-hjm6z      kubernetes.io/service-account-token   3      139m
grommet-broker-binding   Opaque                                2      3m37s
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;Notice that a new Secret named grommet-broker-binding has been created.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Step 6 – Delete the ServiceBinding&lt;/h2&gt;
&lt;p&gt;Now, let&apos;s unbind the instance:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ svcat unbind -n grommet-ns grommet-broker-instance
deleted grommet-broker-binding
&lt;/code&gt;&lt;/pre&gt;
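&lt;p&gt;If you prefer kubectl, deleting the ServiceBinding resource directly should have the same effect (a sketch using the resource names created earlier):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ kubectl delete servicebindings -n grommet-ns grommet-broker-binding
servicebinding.servicecatalog.k8s.io &quot;grommet-broker-binding&quot; deleted
&lt;/code&gt;&lt;/pre&gt;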
&lt;p&gt;After the deletion is complete, we should see that the Secret is gone:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ kubectl get secrets -n grommet-ns
NAME                  TYPE                                  DATA   AGE
default-token-hjm6z   kubernetes.io/service-account-token   3      154m
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Step 7 – Deleting the ServiceInstance&lt;/h2&gt;
&lt;p&gt;There may be times when you want to delete a ServiceInstance. In that case, you can deprovision it with the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ svcat deprovision -n grommet-ns grommet-broker-instance
deleted grommet-broker-instance
&lt;/code&gt;&lt;/pre&gt;
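&lt;p&gt;The kubectl equivalent is to delete the ServiceInstance resource, which triggers the same deprovisioning call to the broker (again, just a sketch):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ kubectl delete serviceinstances -n grommet-ns grommet-broker-instance
serviceinstance.servicecatalog.k8s.io &quot;grommet-broker-instance&quot; deleted
&lt;/code&gt;&lt;/pre&gt;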
&lt;h2&gt;Step 8 – Deleting the ClusterServiceBroker&lt;/h2&gt;
&lt;p&gt;Next, remove the ClusterServiceBroker resource. This tells the service catalog to remove the broker&apos;s services from the catalog. Do so with this command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ kubectl delete clusterservicebrokers grommet-broker
clusterservicebroker.servicecatalog.k8s.io &quot;grommet-broker&quot; deleted
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You should now see that all the ClusterServiceClass resources that came from that broker have also been deleted:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ svcat get classes
  NAME   NAMESPACE   DESCRIPTION
+------+-----------+-------------+

$ kubectl get clusterserviceclasses
No resources found.
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Next Steps&lt;/h2&gt;
&lt;p&gt;There are many ways to consume the services offered by an OSB broker such as the Grommet OSB broker. Using a Kubernetes Service Catalog is one option. In our next article, you’ll learn how to download a standalone Open Service broker client application built on Grommet, and how to register an OSB inside that application. You can also use it to test an OSB you already have. Keep an eye out on the &lt;a href=&quot;/blog&quot;&gt;HPE DEV site&lt;/a&gt; for more articles on OSB.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Using Open Service Broker as a Quick and Easy Way to Offer Everything as-a-Service ]]></title><description><![CDATA[everything as a service 2 Service brokers can be very helpful in delivering applications, development environments, or anything as-a-Service…]]></description><link>https://developer.hpe.com/using-open-service-broker-as-a-quick-and-easy-way-to-offer-everything-as/</link><guid isPermaLink="false">https://developer.hpe.com/using-open-service-broker-as-a-quick-and-easy-way-to-offer-everything-as/</guid><pubDate>Mon, 19 Aug 2019 15:29:52 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/everything-as-a-service-2-1568045844717.png&quot; alt=&quot;everything as a service 2&quot;&gt;&lt;/p&gt;
&lt;p&gt;Service brokers can be very helpful in delivering applications, development environments, or anything as-a-Service. Before the use of service brokers, setting up a development environment as-a-Service required a lot of time and manual effort. Service brokers compliant with the Open Service Broker (OSB) API specification can intuitively provision a new instance of a service they offer and provide all of the information your app or container needs to connect to it.&lt;/p&gt;
&lt;p&gt;OSB packages all the pieces required for communication between the service provider and the consumer of that service, basically acting as the middle man who ensures clear communication and helps things work in a more automated way. Before OSB, developers had to download or create all the missing linkages. OSB eliminates that need.&lt;/p&gt;
&lt;p&gt;In this tutorial, I will explain what an OSB API is and how to use it. For this example, I will show you how to expose the Grommet development environment as-a-Service. From there, you can use similar methods to offer other application environments as a service.&lt;/p&gt;
&lt;h2&gt;What is an Open Service Broker API&lt;/h2&gt;
&lt;p&gt;The OSB API defines an HTTP(S) interface between platforms and service brokers. The OSB API specifies how automated deployment, management, and use of services are handled. For example, app developers can use it to instantiate services and attach them to apps without having to care about how those services are ultimately created or managed. The OSB API is based on the service architecture originally developed for a well-known PaaS platform called Cloud Foundry.&lt;/p&gt;
&lt;p&gt;The client side of the OSB API is implemented by the client platform, such as the Kubernetes Service Catalog, that wants to use services exposed through OSB. The server side is implemented by the service provider and consists of the endpoints the client calls to provision and manipulate services. The three most important endpoints to remember are the catalog, service instances, and service bindings, which I’ll go over in more detail later.&lt;/p&gt;
&lt;h2&gt;Why use an Open Service Broker&lt;/h2&gt;
&lt;p&gt;It’s important to understand the problem the OSB API is attempting to solve. In some cases, an application might require access to a network of resources it depends on -- databases, caches, email systems, subscriptions to subsystem APIs, among others. Previously, these were manually and statically configured; now with today’s agile software development processes, where app instances are quickly spun up and deleted, they must be managed dynamically.&lt;/p&gt;
&lt;p&gt;So, how do we solve this problem? Keep in mind we want to take as much of the cognitive load off the app developers as possible. We don’t want app devs to be responsible for starting or configuring all these services. It’s important to ensure applications are as plug-and-play as possible. Similarly, we don’t want cloud providers to be saddled with that responsibility, either.&lt;/p&gt;
&lt;p&gt;Service brokers are the missing link between the consumer and the provider. The broker holds the information about the services being provided, and carries out the details of ordering, provisioning, and connecting these services to the application being built by the consumer. Additionally, it automates steps that used to be performed by IT operations with multiple infrastructure management tools.&lt;/p&gt;
&lt;h2&gt;Open Service Broker (OSB) terms&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Broker: A broker offers a set of capabilities called services&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Ex: Grommet Broker, MongoDB Broker&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Catalog: A collection of services offered by a broker&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A marketplace/catalog contains services from one or more brokers&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Service: a capability managed by the service broker&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Ex: Grommet Dev Environment as a Service&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Service instance: an instantiation of a particular service&apos;s capability&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Ex: Grommet Dev Environment VM&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Binding: relationship between a service instance and an application&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Ex: Credentials created to access &apos;Grommet Dev Environment&apos;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Ok, now that we understand what brokers and all those other terms are and the value that they bring, I’m going to briefly go over how to write a sample broker, like the Grommet OSB broker, in Python. I’ll also show the basic workflow of registering the Grommet OSB broker with your cloud platform so that the platform can consume the services it offers.&lt;/p&gt;
&lt;h2&gt;Register a Service Broker&lt;/h2&gt;
&lt;p&gt;The first thing that needs to happen is we need to associate a broker with our platform somehow -- this is called registering a broker. To do this, your cloud platform sends a GET request to the /v2/catalog endpoint on the broker, which responds with a 200 OK and a body containing all the information about the services the broker offers. It’s up to the cloud platform to store this data and make it available to users.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;NOTE: The Grommet OSB broker exposes services via /v2/catalog endpoint. Below, you can find the sample Python code exposing the service called “grommet” with two plans - “grommet-plan-1” and “grommet-plan-2”.&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def catalog():
    &quot;&quot;&quot;
    Return the catalog of services handled
    by this broker

    GET /v2/catalog:

    HEADER:
        X-Broker-API-Version: &amp;#x3C;version&gt;

    return:
      JSON document with details about the
      services offered through this broker

      Using OSB Spec of Get Catalog:
      https://github.com/openservicebrokerapi/servicebroker/blob/v2.13/spec.md
    &quot;&quot;&quot;
#    api_version = bottle.request.headers.get(&apos;X-Broker-API-Version&apos;)
    print(&quot;inside catalog&quot;)
#    if (not api_version or not (api_version_is_valid(api_version))):
#        bottle.abort(
#            409,
#            &quot;Missing or incompatible %s. Expecting version %.0f.%.0f or later&quot; % (
#                X_BROKER_API_VERSION_NAME,
#                X_BROKER_API_MAJOR_VERSION,
#                X_BROKER_API_MINOR_VERSION))
    return {&quot;services&quot;: [service]}
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &quot;services&quot;: [
    {
      &quot;id&quot;: &quot;97ca7e25-8f63-44a7-99d1-a75729eb&quot;,
      &quot;name&quot;: &quot;grommet&quot;,
      &quot;description&quot;: &quot;Grommet Dev Environment as a Service&quot;,
      ...
      &quot;plans&quot;: [
        {
          &quot;id&quot;: &quot;123&quot;,
          &quot;name&quot;: &quot;grommet-plan-1&quot;,
          &quot;description&quot;: &quot;t2.micro instance with NodeJS&quot;,
          ...
        },
        {
          &quot;id&quot;: &quot;456&quot;,
          &quot;name&quot;: &quot;grommet-plan-2&quot;,
          &quot;description&quot;: &quot;t2.small instance with NodeJS&quot;,
          ...
        }
      ]
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;
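&lt;p&gt;You don&apos;t need a platform to exercise this endpoint. Once the broker is running, a plain HTTP GET returns the catalog JSON shown above. The request below is only a sketch: BROKER_URL is a placeholder for wherever you run the broker, and the API version header simply matches the OSB spec version referenced in the code comments:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# BROKER_URL is a placeholder, for example http://localhost:8099
$ curl -s -H &quot;X-Broker-API-Version: 2.13&quot; &quot;$BROKER_URL/v2/catalog&quot;
&lt;/code&gt;&lt;/pre&gt;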
&lt;h2&gt;Creating a Service&lt;/h2&gt;
&lt;p&gt;Now, if I want to create a service instance, I send a PUT request to /v2/service_instances/:instance_id, where the instance ID is a unique ID for the instance being created. It’s up to the platform to create and track these IDs. Once the instance has been created, the broker returns a 201 to the platform (or a 202 if provisioning continues asynchronously, as in the sample code below).&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;NOTE: The Grommet OSB broker exposes a /v2/service_instances/:instance_id endpoint to create a service instance. Below, you can find the sample Python code creating an EC2 instance in the AWS cloud.&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def provision(instance_id):
    &quot;&quot;&quot;
    Provision an instance of this service
    for the given org and space

    PUT /v2/service_instances/&amp;#x3C;instance_id&gt;:
        &amp;#x3C;instance_id&gt; is provided by the Cloud
          Controller and will be used for future
          requests to bind, unbind and deprovision

    BODY:
        {
          &quot;service_id&quot;:        &quot;&amp;#x3C;service-guid&gt;&quot;,
          &quot;plan_id&quot;:           &quot;&amp;#x3C;plan-guid&gt;&quot;,
          &quot;organization_guid&quot;: &quot;&amp;#x3C;org-guid&gt;&quot;,
          &quot;space_guid&quot;:        &quot;&amp;#x3C;space-guid&gt;&quot;
        }

    return:
        JSON document with details about the
        services offered through this broker
    &quot;&quot;&quot;
    if bottle.request.content_type != &apos;application/json&apos;:
        bottle.abort(415, &apos;Unsupported Content-Type: expecting application/json&apos;)
    # get the JSON document in the BODY
    provision_details = bottle.request.json
    # Provision and launch the EC2 instance
    instance_info = create_ec2_instance(image_id, instance_type, keypair_name, user_data)
    global ec2_instance_id
    ec2_instance_id = instance_info[&quot;InstanceId&quot;]
    bottle.response.status = 202
    #ec2_ip_addr = instance_info[&apos;Instances&apos;][0][&apos;PublicIpAddress&apos;]
    dashboard_url = &quot;http://&quot;+ec2_ip_addr+&quot;:3000&quot;
    return {&quot;dashboard_url&quot;: dashboard_url}
&lt;/code&gt;&lt;/pre&gt;
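&lt;p&gt;From the platform&apos;s point of view, provisioning is a single PUT with a JSON body matching the fields in the docstring above. The request below is only a sketch; BROKER_URL is a placeholder, the instance ID is made up, and the service and plan IDs come from the sample catalog shown earlier:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ curl -X PUT &quot;$BROKER_URL/v2/service_instances/my-instance-id&quot; \
    -H &quot;Content-Type: application/json&quot; \
    -H &quot;X-Broker-API-Version: 2.13&quot; \
    -d &apos;{&quot;service_id&quot;: &quot;97ca7e25-8f63-44a7-99d1-a75729eb&quot;, &quot;plan_id&quot;: &quot;123&quot;, &quot;organization_guid&quot;: &quot;org&quot;, &quot;space_guid&quot;: &quot;space&quot;}&apos;
&lt;/code&gt;&lt;/pre&gt;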
&lt;h2&gt;Binding a Service&lt;/h2&gt;
&lt;p&gt;Once I create an instance, I need to create a binding to access that instance. This is done in a similar way, with a PUT request to /v2/service_instances/:instance_id/service_bindings/:binding_id, where binding_id is some globally unique ID that the platform has created. We again receive a 201 response.&lt;/p&gt;
&lt;p&gt;Example: Grommet OSB broker /v2/service_instances/:instance_id/service_bindings/:binding_id endpoint&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;NOTE: The Grommet OSB broker exposes a /v2/service_instances/:instance_id/service_bindings/:binding_id endpoint to create a binding for an instance. Below, you can find the sample Python code used to create access credentials for an EC2 instance provisioned in the previous step. End users can use these credentials to access the Grommet Dev Environment and start creating Grommet-based web apps.&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def bind(instance_id, binding_id):
    &quot;&quot;&quot;
    Bind an existing instance with the
    for the given org and space

    PUT /v2/service_instances/&amp;#x3C;instance_id&gt;/service_bindings/&amp;#x3C;binding_id&gt;:
        &amp;#x3C;instance_id&gt; is the Cloud Controller provided
          value used to provision the instance
        &amp;#x3C;binding_id&gt; is provided by the Cloud Controller
          and will be used for future unbind requests

    BODY:
        {
          &quot;plan_id&quot;:           &quot;&amp;#x3C;plan-guid&gt;&quot;,
          &quot;service_id&quot;:        &quot;&amp;#x3C;service-guid&gt;&quot;,
          &quot;app_guid&quot;:          &quot;&amp;#x3C;app-guid&gt;&quot;
        }

    return:
        JSON document with credentials and access details
        for the service based on this binding
        http://docs.cloudfoundry.org/services/binding-credentials.html
    &quot;&quot;&quot;
    if bottle.request.content_type != &apos;application/json&apos;:
        bottle.abort(415, &apos;Unsupported Content-Type: expecting application/json&apos;)
    # get the JSON document in the BODY
    binding_details = bottle.request.json
    bottle.response.status = 201
    uri =&quot;http://&quot;+ec2_ip_addr+&quot;:3000&quot;
    return {&quot;credentials&quot;: {&quot;uri&quot;: uri, &quot;username&quot;: &quot;ubuntu&quot;}}
&lt;/code&gt;&lt;/pre&gt;
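&lt;p&gt;Again, the platform-side call is just an HTTP PUT against the binding URL, with a body matching the docstring. A sketch with placeholder IDs:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ curl -X PUT &quot;$BROKER_URL/v2/service_instances/my-instance-id/service_bindings/my-binding-id&quot; \
    -H &quot;Content-Type: application/json&quot; \
    -H &quot;X-Broker-API-Version: 2.13&quot; \
    -d &apos;{&quot;plan_id&quot;: &quot;123&quot;, &quot;service_id&quot;: &quot;97ca7e25-8f63-44a7-99d1-a75729eb&quot;, &quot;app_guid&quot;: &quot;my-app&quot;}&apos;
&lt;/code&gt;&lt;/pre&gt;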
&lt;h2&gt;Unbinding a Service&lt;/h2&gt;
&lt;p&gt;Once I am done using the instance and no longer need it, I can delete the binding to remove access to that instance. To do this, I send a DELETE request to /v2/service_instances/:instance_id/service_bindings/:binding_id, where binding_id is the globally unique ID that the platform created for the binding. I will again receive a 200 or 202 response, and the cloud platform will clear the access credentials it stored locally.&lt;/p&gt;
&lt;p&gt;Example: Grommet OSB broker DELETE /v2/service_instances/:instance_id/service_bindings/:binding_id endpoint&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;NOTE: The Grommet OSB broker exposes a DELETE /v2/service_instances/:instance_id/service_bindings/:binding_id endpoint to delete a binding for an instance. Below, you can find the sample Python code to delete the binding for a previously provisioned EC2 instance.&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def unbind(instance_id, binding_id):
    &quot;&quot;&quot;
    Unbind an existing instance associated
    with the binding_id provided

    DELETE /v2/service_instances/&amp;#x3C;instance_id&gt;/service_bindings/&amp;#x3C;binding_id&gt;:
        &amp;#x3C;instance_id&gt; is the Cloud Controller provided
          value used to provision the instance
        &amp;#x3C;binding_id&gt; is the Cloud Controller provided
          value used to bind the instance

    return:
        As of API 2.3, an empty JSON document
        is expected
    &quot;&quot;&quot;
    return {}
&lt;/code&gt;&lt;/pre&gt;
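&lt;p&gt;The matching client call is an HTTP DELETE on the same binding URL. Note that the OSB specification passes service_id and plan_id as query parameters on delete; this sample broker doesn&apos;t inspect them, but a spec-compliant platform would send them. A sketch with placeholders:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ curl -X DELETE &quot;$BROKER_URL/v2/service_instances/my-instance-id/service_bindings/my-binding-id?service_id=97ca7e25-8f63-44a7-99d1-a75729eb&amp;plan_id=123&quot; \
    -H &quot;X-Broker-API-Version: 2.13&quot;
&lt;/code&gt;&lt;/pre&gt;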
&lt;h2&gt;Deprovision Instance&lt;/h2&gt;
&lt;p&gt;Now, if I want to remove the provisioned service instance, I send a DELETE request to /v2/service_instances/:instance_id, where the instance ID identifies the instance to be deleted. It’s up to the platform to create and track these IDs. Once the instance has been deleted, the broker returns a 200 OK to the platform, or it returns a 202 Accepted if deletion is still in progress.&lt;/p&gt;
&lt;p&gt;Example: Grommet OSB broker DELETE /v2/service_instances/:instance_id endpoint&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;NOTE: The Grommet OSB broker exposes a DELETE /v2/service_instances/:instance_id endpoint to delete an instance. Below, you can find the sample Python code used to delete a previously provisioned EC2 instance.&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;def deprovision(instance_id):
    &quot;&quot;&quot;
    Deprovision an existing instance of this service

    DELETE /v2/service_instances/&amp;#x3C;instance_id&gt;:
        &amp;#x3C;instance_id&gt; is the Cloud Controller provided
          value used to provision the instance

   return:
        As of API 2.3, an empty JSON document
        is expected
    &quot;&quot;&quot;
    # send response
    #ec2_client = boto3.client(&apos;ec2&apos;)
    #ec2.Instance(&apos;i-00434b87058703892&apos;).terminate()
    #ec2.instances.filter(InstanceIds=ids).terminate()
    deprovision_details = bottle.request.json
    return {}
&lt;/code&gt;&lt;/pre&gt;
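&lt;p&gt;And the client call to remove the instance itself is a DELETE on the instance URL, again with service_id and plan_id as query parameters per the OSB spec (placeholders as before):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ curl -X DELETE &quot;$BROKER_URL/v2/service_instances/my-instance-id?service_id=97ca7e25-8f63-44a7-99d1-a75729eb&amp;plan_id=123&quot; \
    -H &quot;X-Broker-API-Version: 2.13&quot;
&lt;/code&gt;&lt;/pre&gt;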
&lt;p&gt;Now, I have shown you the sample Python code of the Grommet OSB broker that exposes the Grommet Dev Environment as-a-Service. You can deploy this broker anywhere in the cloud or on premises and share the broker endpoint with users so they can start consuming the services offered.&lt;/p&gt;
&lt;h2&gt;Get and run the OSB broker service&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Create an OSB working folder in the hosting VM. Use any folder you prefer.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/reddypramod85/osb-grommet-python&quot;&gt;Get the OSB broker service&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;You can download the zip file and unzip the package to the OSB working folder.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Install the required packages
&lt;ul&gt;
&lt;li&gt;sudo pip3 install --no-cache-dir -r requirements.txt&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Start the OSB service broker:
&lt;ul&gt;
&lt;li&gt;python3 osb_template.py&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Next Steps&lt;/h2&gt;
&lt;p&gt;There are many ways to write an OSB broker, like the Grommet OSB Broker, following the OSB API specification. Using Python is one option. In future articles, I will explore other approaches. Keep a look out on the &lt;a href=&quot;/blog&quot;&gt;HPE DEV blog site&lt;/a&gt; for my next article, &lt;strong&gt;How to Register a Grommet OSB broker in Kubernetes Service Catalog,&lt;/strong&gt; where I will show how to register and consume the services offered by Grommet OSB Broker in the Kubernetes Service Catalog and show you how to provision, bind, unbind, and deprovision a Grommet Dev instance.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Up Close and Personal - Newsletter]]></title><link>https://developer.hpe.com/2019-August-13/</link><guid isPermaLink="false">https://developer.hpe.com/2019-August-13/</guid><pubDate>Tue, 13 Aug 2019 05:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Celebrate the beginning of DevOps]]></title><description><![CDATA[gearhead banner Something very special is just around the corner--something of historic proportions. This October, in beautiful Ghent…]]></description><link>https://developer.hpe.com/celebrate-the-beginning-of-devops/</link><guid isPermaLink="false">https://developer.hpe.com/celebrate-the-beginning-of-devops/</guid><pubDate>Thu, 08 Aug 2019 16:30:30 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/gearhead-banner-1565283401379.png&quot; alt=&quot;gearhead banner&quot;&gt;&lt;/p&gt;
&lt;p&gt;Something very special is just around the corner--something of historic proportions. This October, in beautiful Ghent, Belgium, &lt;a href=&quot;https://devopsdays.org/&quot;&gt;DevOpsDays&lt;/a&gt; will be celebrating its &lt;a href=&quot;https://devopsdays.org/events/2019-ghent/welcome/&quot;&gt;10th year anniversary&lt;/a&gt; in the very place where the term DevOps was first coined. This is where software developers and ITOps staff first gathered to talk about the miscommunications between these two groups and looked for ways to get around the so-called &lt;a href=&quot;https://sellegi.se/glossary/wall-of-confusion/&quot;&gt;“Wall of Confusion”. &lt;/a&gt;&lt;/p&gt;
&lt;p&gt;If you’re not familiar with DevOpsDays, it is a set of events that happens year-round throughout the world focusing on software development, IT operations, and the intersection between the two. This worldwide series of technical conferences grew out of the efforts of one man, Patrick Debois.&lt;/p&gt;
&lt;p&gt;Frustrated by the constant miscommunication between operations and development teams, Patrick hoped to push organizations from traditional software development cycles (where the development team throws a software release “over the wall” to IT operations) to a more agile process where developers and ITOps staff work together.&lt;/p&gt;
&lt;p&gt;In 2007, Patrick was a system administrator working on a huge data migration project for the Belgian government. Patrick saw hope in being able to apply the &lt;a href=&quot;https://agilemanifesto.org/&quot;&gt;Agile&lt;/a&gt; methodology to infrastructure, and together with Andrew Shafer, created the Agile Systems Administration Group.&lt;/p&gt;
&lt;p&gt;In 2009, Patrick was unable to attend the Velocity O’Reilly conference in the U.S. where the now mythical session &lt;a href=&quot;https://fr.slideshare.net/jallspaw/10-deploys-per-day-dev-and-ops-cooperation-at-flickr/15-Lowering_risk_of_changethrough_tools&quot;&gt;10+ Deploys a Day: Dev and Ops Cooperation at Flickr&lt;/a&gt; was given by John Allspaw and Paul Hammond. Undeterred, Patrick organized his own conference in Ghent to propagate this idea of the two groups cooperating and closely collaborating. He wanted a conference name that was short, easy to remember, and fitting, so DevOpsDays (#DevOpsDays) was born. The term DevOps quickly emerged and gained attention via social networks.&lt;/p&gt;
&lt;p&gt;Hewlett-Packard Enterprise (HPE) will be supporting the 10th anniversary of this event as a Gold sponsor because we believe DevOps is critical to building applications for the New Idea Economy. The HPE DEV Community will be there to showcase how we work to design and build infrastructure as code to support greater agility. Don’t miss this opportunity to join us October 28th, 2019 in Ghent to celebrate &lt;a href=&quot;http://cloudplatformonline.com/rs/248-TPC-286/images/DORA-State%20of%20DevOps.pdf&quot;&gt;and accelerate the coming together of DevOps.&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Redfish Workshop at the Open Source Summit NA 2019]]></title><description><![CDATA[Bruno Cornec, the HPE Open Source and Linux Technology Strategist, will be speaking at the upcoming Open Source Summit  in San Diego, August…]]></description><link>https://developer.hpe.com/redfish-workshop-at-the-open-source-summit-na-2019/</link><guid isPermaLink="false">https://developer.hpe.com/redfish-workshop-at-the-open-source-summit-na-2019/</guid><pubDate>Mon, 05 Aug 2019 16:14:11 GMT</pubDate><content:encoded>&lt;p&gt;Bruno Cornec, the HPE Open Source and Linux Technology Strategist, will be speaking at the &lt;a href=&quot;https://events.linuxfoundation.org/events/open-source-summit-north-america-2019/&quot;&gt;upcoming Open Source Summit&lt;/a&gt;  in San Diego, August 21st – 23rd, 2019. He will deliver two sessions – one on Docker (a hands-on lab that’s designed to help you understand what it means to containerize) and one on Redfish. Bruno will also act as the coordinator for the co-located, first-ever HPE+SUSE Redfish Workshop.&lt;/p&gt;
&lt;p&gt;The &lt;a href=&quot;http://trac.project-builder.org/wiki/RedfishWSNA2019&quot;&gt;Redfish Workshop&lt;/a&gt; is a one day event, held Tuesday, August 20, 2019 from 9:00am – 5:00pm at the Hilton San Diego Bayfront. At the event, you can see live demos and interact with &lt;a href=&quot;https://en.wikipedia.org/wiki/Redfish_(specification)&quot;&gt;Redfish&lt;/a&gt; Project technical experts. In this workshop, system administrators, architects, and developers can learn how to use Redfish and about the benefit derived from a standard management layer to deploy, configure, and manage an environment.&lt;/p&gt;
&lt;p&gt;Knowledgeable and engaging speakers will help you better understand topics like using DMTF tools for system configuration and using the REST API to perform Redfish operations from Python. You’ll also get clued into the latest news on the standard and its future evolution. Through interactive sessions, demos, and labs, you’ll get the chance to better understand and practice the concepts being presented.&lt;/p&gt;
&lt;p&gt;This is a free event, sponsored by Hewlett Packard Enterprise (HPE) and SUSE.&lt;/p&gt;
&lt;p&gt;Who can benefit:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;System/Software Developers&lt;/li&gt;
&lt;li&gt;Tech Leads / Development Leads&lt;/li&gt;
&lt;li&gt;Software Architects&lt;/li&gt;
&lt;li&gt;Chief Engineers&lt;/li&gt;
&lt;li&gt;System Engineers&lt;/li&gt;
&lt;li&gt;Development Engineers&lt;/li&gt;
&lt;li&gt;DevOps / System Administrators&lt;/li&gt;
&lt;li&gt;Application Engineers&lt;/li&gt;
&lt;li&gt;Open Source Technologists&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For more information about the event, please visit the &lt;a href=&quot;http://trac.project-builder.org/wiki/RedfishWSNA2019&quot;&gt;Redfish Workshop page.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;IMPORTANT:&lt;/strong&gt; There are a limited number of seats available for this event, so please &lt;a href=&quot;https://framaforms.org/redfish-workshop-oss-na-2019-registration-form-1564098902&quot;&gt;register&lt;/a&gt; as soon as you can.&lt;/p&gt;
&lt;p&gt;If you have expertise you would like to share during this workshop, please email &lt;a href=&quot;mailto:Bruno.Cornec@hpe.com&quot;&gt;Bruno.Cornec@hpe.com&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Designing, Installing, and Configuring a Failover Cluster with Hyper-V Managed by SCVMM ]]></title><description><![CDATA[System Center Virtual Machine Manager (SCVMM) is a management tool developed by Microsoft to efficiently manage Hyper-V in a scalable way…]]></description><link>https://developer.hpe.com/designing-installing-and-configuring-a-failover-cluster-with-hyper-v-man/</link><guid isPermaLink="false">https://developer.hpe.com/designing-installing-and-configuring-a-failover-cluster-with-hyper-v-man/</guid><pubDate>Mon, 05 Aug 2019 15:40:49 GMT</pubDate><content:encoded>&lt;p&gt;System Center Virtual Machine Manager (SCVMM) is a management tool developed by Microsoft to efficiently manage Hyper-V in a scalable way. While Hyper-V includes its own tools for managing virtual machines (VMs), as an enterprise scales to include failover clustering, Hyper-V replication, or multiple Hyper-V hosts across a variety of physical servers, SCVMM helps to simplify the management of the virtualized infrastructure.&lt;/p&gt;
&lt;p&gt;In this tutorial, you will learn how to install SCVMM in a Microsoft Windows Failover Cluster to improve the availability of applications and services. All you need is at least two servers running Windows Server 2016 with the Hyper-V role in a domain environment to create this failover configuration using the SCVMM application.&lt;/p&gt;
&lt;p&gt;For the purposes of this tutorial, I used two blade servers, each with a 12-core CPU and 64 GB of memory, in an &lt;a href=&quot;https://www.hpe.com/us/en/integrated-systems/synergy.html&quot;&gt;HPE Synergy&lt;/a&gt; 12000 Frame. I created two profiles with three different network connections and three storage volumes (one for boot and two shared, with one for VMs and one for quorum).&lt;/p&gt;
&lt;p&gt;After a firmware upgrade with the latest Service Pack for ProLiant (SPP), I booted the systems from the virtual CD-ROM with the Windows Server 2016 ISO file. Once the servers became available, I configured the network with a static IP address, enabled Remote Desktop Service, installed Windows updates, and then activated the Windows key.&lt;/p&gt;
&lt;p&gt;The servers were ready to join the Windows domain after they were rebooted to apply the new configuration.&lt;/p&gt;
&lt;p&gt;Next, I installed the Hyper-V role, then the Failover Clustering and Multipath I/O features, after which I rebooted the systems again.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture1-1565019791083.png&quot; alt=&quot;picture1&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture2-1565019843536.png&quot; alt=&quot;picture2&quot;&gt;&lt;/p&gt;
&lt;p&gt;The shared volumes were created to be added to the failover cluster from disks that were part of the Cluster Shared Volume (CSV).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture3-1565019886020.png&quot; alt=&quot;picture3&quot;&gt;&lt;/p&gt;
&lt;p&gt;As you can see, the Hyper-V Failover Cluster can now be validated.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture4-1565019943404.png&quot; alt=&quot;picture4&quot;&gt;&lt;/p&gt;
&lt;p&gt;I then installed the prerequisites for SCVMM, including SQL Server 2016, Windows ADK (Assessment and Deployment Kit), and Windows PE (Pre-installation Environment), and created specific users and groups in the domain. Note that you must run the SCVMM Setup as an administrator. After answering a few questions, I launched the application and connected to the Hyper-V Cluster.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture5-1565020033667.png&quot; alt=&quot;picture5&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture6-1565020039994.png&quot; alt=&quot;picture6&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/8/picture7-1565020045576.png&quot; alt=&quot;picture7&quot;&gt;&lt;/p&gt;
&lt;p&gt;For more detailed information, check out the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.microsoft.com/en-us/windows-server/failover-clustering/prestage-cluster-adds&quot;&gt;Prestage cluster computer objects in Active Directory Domain Services (AD DS)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.microsoft.com/en-us/windows-server/failover-clustering/configure-ad-accounts&quot;&gt;Configuring cluster accounts in Active Directory&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;http://www.garethjones294.com/installing-system-center-virtual-machine-manager-2016-step-by-step-quick-start-deployment-guide/&quot;&gt;Installing System Center Virtual Machine Manager (SCVMM)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://support.microsoft.com/en-gb/help/2770582/event-id-1222-when-you-create-a-windows-server-2012-failover-cluster&quot;&gt;Event ID 1222&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I hope you found this tutorial helpful in setting up System Center Virtual Machine Manager (SCVMM) in a Microsoft Windows Failover Cluster. Continue to monitor our &lt;a href=&quot;/blog&quot;&gt;blog posts&lt;/a&gt; for more hints that you can use to make your development environment more efficient.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE OneView Terraform provider now available with support for HPE OneView 4.1]]></title><description><![CDATA[HPE OneView Terraform provider v1.0.1 now supports HPE OneView 4.1 (REST API version 800). Terraform, by HashiCorp, is a popular tool for…]]></description><link>https://developer.hpe.com/hpe-oneview-terraform-provider-now-available-with-support-for-hpe-onevie/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-oneview-terraform-provider-now-available-with-support-for-hpe-onevie/</guid><pubDate>Fri, 02 Aug 2019 20:57:51 GMT</pubDate><content:encoded>&lt;p&gt;&lt;a href=&quot;https://hpe.com/info/oneview&quot;&gt;HPE OneView&lt;/a&gt; Terraform provider v1.0.1 now supports HPE OneView 4.1 (REST API version 800). &lt;a href=&quot;https://www.terraform.io/&quot;&gt;Terraform&lt;/a&gt;, by &lt;a href=&quot;https://www.hashicorp.com/&quot;&gt;HashiCorp&lt;/a&gt;, is a popular tool for infrastructure automation and is rapidly increasing in adoption. The Terraform community includes a large number of providers that facilitate a multicloud or hybrid cloud approach to infrastructure management. As part of the HPE OneView comprehensive partner ecosystem, Terraform is a powerful orchestration tool used to create, manage, and update infrastructure resources. These resources may be physical machines, VMs, network switches, containers, or others.&lt;/p&gt;
&lt;p&gt;HPE OneView Terraform provider automates the provisioning of physical infrastructure on-demand using software-defined templates from HPE OneView. One way administrators could use this to their benefit would be to create a resource topology similar to that of a public cloud on their own physical infrastructure. By creating the resource topology this way, it&apos;s easier for administrators to migrate applications and workloads to an on-premise, private cloud as part of a hybrid cloud strategy.&lt;/p&gt;
&lt;p&gt;FOR MORE INFORMATION&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/terraform-provider-oneview/releases/tag/v1.0.1&quot;&gt;Release content&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/terraform-provider-oneview/blob/master/CHANGELOG.md&quot;&gt;List of supported resources and changes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/terraform-provider-oneview&quot;&gt;Code repository and examples&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/terraform-provider-oneview/blob/master/README.md&quot;&gt;Details about the Terraform provider&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Returning from the land of ports and developers, AKA OSCON 2019 in Portland!]]></title><description><![CDATA[5bf2e1a0cd93d0796238ae01-blog-content-1564600558790 Some HPE DEV team members and I just returned from beautiful Portland, Oregon with…]]></description><link>https://developer.hpe.com/returning-from-the-land-of-ports-and-developers-aka-oscon-2019-in-portla/</link><guid isPermaLink="false">https://developer.hpe.com/returning-from-the-land-of-ports-and-developers-aka-oscon-2019-in-portla/</guid><pubDate>Wed, 31 Jul 2019 19:12:15 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/picture1-1564600558780.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1564600558790&quot;&gt;&lt;/p&gt;
&lt;p&gt;Some HPE DEV team members and I just returned from beautiful Portland, Oregon with another successful &lt;a href=&quot;https://conferences.oreilly.com/oscon/oscon-or&quot;&gt;OSCON&lt;/a&gt; in the books. As a Gold sponsor for OSCON for the past two years, the HPE Developer Community always looks forward to attending this premier open source event. I’m always astounded by the technical proficiency of this crowd and their knowledge depth. We also enjoy meeting with developers, like you, who are using Grommet and our open source solutions and discussing your experience with these products face-to-face. This is truly a developer’s conference.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/picture2-1564600566235.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1564600566237&quot;&gt;&lt;/p&gt;
&lt;p&gt;As a last-minute addition to our booth, Blaine, Jeff, Pramod, and I decided to host a &lt;a href=&quot;https://github.com/HewlettPackard/hpe-hack-shack-attack&quot;&gt;Hack Shack Attack!&lt;/a&gt; gaming competition, offering awesome collector Lego sets as prizes. For those of you who don’t know, &lt;strong&gt;Hack Shack Attack!&lt;/strong&gt; is our open source, 8-bit arcade game that gives players a chance to tame the IT monster. Since this game was a late addition to our booth, I had to run out to Gamestop the evening before the event to pick up an Xbox controller and write the controller bindings overnight. The game was originally built to support Nvidia Shield controllers, which would be almost impossible to find in a retail store. That being said, I am glad we brought the game and updated the controller bindings, because it proved wildly popular with attendees. We saw a total of 1,040 game plays! That IT monster didn’t know what hit him.&lt;/p&gt;
&lt;p&gt;As usual, Grommet and our Grommet Gremlin, Stack (the purple guy you see in the footer of the &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE DEV home page&lt;/a&gt;), drew a crowd. Pramod Sareddy drummed up a lot of attention for Grommet during his Open Service Broker (OSB) + Grommet for the Enterprise presentation. You can learn more about what Pramod presented in our &lt;a href=&quot;/blog/solving-enterprise-devops-and-front-end-challenges-with-open-source-at-o&quot;&gt;pre-OSCON blog.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Talking with developers and their experiences with our products often leads to inspired innovation, both on our part and theirs. It’s discussions like these that help us find better ways to support the HPE Developer Community. We look forward to seeing everyone again at OSCON 2020!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE OneView Golang SDK now available with support for HPE OneView 4.1]]></title><description><![CDATA[HPE OneView takes
a software-defined, programmatic approach to managing infrastructure with efficient workflow automation, a modern RESTful…]]></description><link>https://developer.hpe.com/hpe-oneview-golang-sdk-now-available-with-support-for-hpe-oneview-41/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-oneview-golang-sdk-now-available-with-support-for-hpe-oneview-41/</guid><pubDate>Tue, 30 Jul 2019 21:05:42 GMT</pubDate><content:encoded>&lt;p&gt;&lt;a href=&quot;https://hpe.com/info/oneview&quot;&gt;HPE OneView&lt;/a&gt; takes
a software-defined, programmatic approach to managing infrastructure with efficient workflow automation, a modern RESTful API, and a comprehensive partner ecosystem. The HPE OneView Golang SDK v1.0.0 now supports HPE OneView 4.1 (REST API version 800). The HPE OneView Golang SDK extends the HPE OneView API language support to include the &lt;a href=&quot;https://golang.org&quot;&gt;Go language&lt;/a&gt;, which is rapidly increasing in adoption and is a popular favorite for cloud-native applications.&lt;/p&gt;
&lt;p&gt;The SDK allows Go developers to programmatically control HPE OneView managed resources using an infrastructure-as-code approach for physical compute, storage, and fabric resources. Benefits of this infrastructure-as-code approach include complete data center automation, consistent reproducibility, versioning, and rollback.&lt;/p&gt;
&lt;h2&gt;For more information&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/oneview-golang/releases/tag/v1.0.0&quot;&gt;Release content&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/oneview-golang/blob/v1.0.0/CHANGELOG.md&quot;&gt;List of supported resources and changes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/HewlettPackard/oneview-golang&quot;&gt;Code repository and examples&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[HPE DEV and Design Gear Up for August Conferences: JSConf and React Rally]]></title><description><![CDATA[5bf2e1a0cd93d0796238ae01-blog-content-1564504318352 HPE DEV and Design engineers are looking forward to attending two upcoming conferences…]]></description><link>https://developer.hpe.com/hpe-dev-and-design-gear-up-for-august-conferences-jsconf-and-react-rally/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-dev-and-design-gear-up-for-august-conferences-jsconf-and-react-rally/</guid><pubDate>Tue, 30 Jul 2019 16:25:53 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/picture1-1564504318351.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1564504318352&quot;&gt;&lt;/p&gt;
&lt;p&gt;HPE DEV and Design engineers are looking forward to attending two upcoming conferences – &lt;a href=&quot;https://2019.jsconf.us/&quot;&gt;JSConf&lt;/a&gt; and &lt;a href=&quot;https://www.reactrally.com/&quot;&gt;React Rally.&lt;/a&gt; HPE Chief Designer, Chris Carlozzi, as well as UX Designer/Developer, Eric Soderberg, and HPE Senior Open Source Developer, Shimrit Yacobi, will be participating in panel discussions at JSConf. HPE DEV lead, Alex Mejias, and members of his software development team will staff the HPE DEV area at React Rally.&lt;/p&gt;
&lt;h2&gt;JSConf US 2019&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://2019.jsconf.us/&quot;&gt;JSConf US 2019&lt;/a&gt; is a unique developer conference that invites the JavaScript community to spend three days listening to impactful speaker sessions, communing with industry leaders, and participating in lots of &lt;a href=&quot;https://2019.jsconf.us/about/activities/&quot;&gt;fun activities.&lt;/a&gt; JSConf US is the only conference where attendees can learn how to push their favorite language beyond the confines of the browser and into robots and video games.&lt;/p&gt;
&lt;p&gt;The JSConf format provides &lt;a href=&quot;https://2019.jsconf.us/schedule/#first-day-of-talks&quot;&gt;two different tracks for attendees.&lt;/a&gt; SitePen, a JavaScript pioneer and modern web consultancy specializing in enterprise JavaScript, is sponsoring one track, and Hewlett Packard Enterprise (HPE) is sponsoring the other. While the SitePen track includes sessions from a curated set of speakers, HPE’s track features a unique format that encourages anyone to speak on a first-come/first-speak basis. These talks, some of the most exciting at JSConf, tend to cover the full range of web development and the world of JavaScript.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://jsconf.com/&quot;&gt;Historically, JSConf&lt;/a&gt; has been the launch pad for many web innovations. Their call-for-papers process is designed to shine the spotlight on the latest ideas and future leaders of the JavaScript world. This volunteer-run conference focuses on shared social experiences and does its best to make a difference in the community. Some examples include offering scholarships to attend the conference, serving vegetarian food alternatives, and providing childcare services during the conference. JSConf will be held August 12-13 in Carlsbad, CA.&lt;/p&gt;
&lt;h2&gt;React Rally&lt;/h2&gt;
&lt;p&gt;Another community conference HPE DEV and Design engineers are looking forward to is &lt;a href=&quot;https://www.reactrally.com/&quot;&gt;React Rally.&lt;/a&gt; This conference focuses on the React ecosystem and topics that are interesting to React developers. Its friendly, welcoming atmosphere, engaging talks from new and established speakers, and opportunities for attendees to chat with industry leaders make React Rally a popular event among those in the React community.&lt;/p&gt;
&lt;p&gt;A unique attribute of this conference is the audience and speaker interaction. There are no Q&amp;#x26;As after the talks. Instead, audience members are encouraged to discuss issues with speakers during planned downtime sessions, thereby facilitating more effective sharing of perspectives.&lt;/p&gt;
&lt;p&gt;With this goal in mind, React Rally features a lot of downtime throughout the day where speakers and the audience can interact informally in a more relaxed atmosphere. This unstructured time is valued just as much as the structured sessions, and it’s referred to as the hallway track. At other conferences, attendees sometimes skip a talk in order to hang out in the hallway and talk to all the interesting people they meet at the conference. Organizers didn’t want React Rally attendees to have to choose between sessions and informal conversations, so they put a lot of emphasis on break times, encouraging this type of discourse.&lt;/p&gt;
&lt;p&gt;If you happen to be going to React Rally August 22-23, 2019 in Salt Lake City, Utah, keep an eye out for HPE DEV members like Alex Mejias. The HPE DEV team is looking forward to being there to engage in conversations with you.&lt;/p&gt;
&lt;h2&gt;Keep up with our HPE DEV and Design activities&lt;/h2&gt;
&lt;p&gt;HPE DEV and Design is here to support the HPE Developer Community. One of the best ways we can do this is through meet-and-greet opportunities like these two developer conferences. Keep up with where you can find us by checking our &lt;a href=&quot;https://developer.hpe.com/events&quot;&gt;HPE DEV Events&lt;/a&gt; page regularly.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Beyond HPE Discover Las Vegas '19 - Newsletter]]></title><link>https://developer.hpe.com/2019-July-16/</link><guid isPermaLink="false">https://developer.hpe.com/2019-July-16/</guid><pubDate>Tue, 16 Jul 2019 05:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Deploy a Full Stack Application on Netlify that Includes a CI/CD Pipeline]]></title><description><![CDATA[5bf2e1a0cd93d0796238ae01-blog-content-1562947940344 Continuous Integration/Continuous Delivery & Deployment (CI/CD) is becoming a software…]]></description><link>https://developer.hpe.com/deploy-a-full-stack-application-on-netlify-that-includes-a-cicd-pipeline/</link><guid isPermaLink="false">https://developer.hpe.com/deploy-a-full-stack-application-on-netlify-that-includes-a-cicd-pipeline/</guid><pubDate>Fri, 12 Jul 2019 15:44:24 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/use-case-graphic_continuous-delivery-1562947940343.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1562947940344&quot;&gt;&lt;/p&gt;
&lt;p&gt;Continuous Integration/Continuous Delivery &amp;#x26; Deployment (CI/CD) is a software development practice in which you build, test, integrate, deliver, and deploy software every time a developer or a development team merges code into a particular branch. Continuous integration, automated testing, and automated deployment capabilities allow software to be developed and deployed rapidly, reliably, and repeatedly with minimal human intervention. This blog post explains how to enable a CI/CD pipeline for a full stack application, showing you how to host a static website on Netlify and how to set up continuous deployment.&lt;/p&gt;
&lt;h2&gt;Project structure:&lt;/h2&gt;
&lt;p&gt;In order to deploy the site on Netlify, a few configurations need to be added to the project directory to enable the CI/CD pipeline.&lt;/p&gt;
&lt;p&gt;For the purpose of this example, let’s assume that you have an application already created using the &lt;code&gt;create-react-app&lt;/code&gt; CLI tool.&lt;/p&gt;
&lt;p&gt;At the root of the project, add a &lt;code&gt;netlify.toml&lt;/code&gt; file and include the following text.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;[build]
  command = &quot;yarn build&quot;
  publish = &quot;dist&quot;

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Create a folder called &lt;code&gt;dist&lt;/code&gt; at the project root so that when Netlify runs its build command, it automatically copies the compiled output files into a &lt;code&gt;dist&lt;/code&gt; folder. You can also delegate this task to Netlify and have it create the build folder during compilation using commands provided as part of the build command.&lt;/p&gt;
&lt;p&gt;The scripts section of your package.json file should look like this when the deployment setup is complete.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&quot;scripts&quot;: {
    &quot;start&quot;: &quot;react-scripts start&quot;,
    &quot;build&quot;: &quot;run-p build:**&quot;,
    &quot;build:app&quot;: &quot;react-scripts build&quot;,
    &quot;test&quot;: &quot;react-scripts test&quot;,
    &quot;eject&quot;: &quot;react-scripts eject&quot;
  },


&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Push all these changes to your Git repository to move on to the deployment part of the exercise.&lt;/p&gt;
&lt;h2&gt;Getting started on Netlify:&lt;/h2&gt;
&lt;p&gt;In this section, I will walk you through how easy it is to launch your site on Netlify. If you are not already a Netlify user, go ahead and sign up for free &lt;a href=&quot;https://app.netlify.com/signup&quot;&gt;here&lt;/a&gt; first.&lt;/p&gt;
&lt;h2&gt;Step 1: Create new team&lt;/h2&gt;
&lt;p&gt;Each Netlify site belongs to a team, even if it’s only a team of one. Teams can have multiple Netlify users as members, and a Netlify user can be a member of multiple teams.&lt;/p&gt;
&lt;h2&gt;Step 2: Create new site from Git&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/picture2-1562946531885.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1562946531886&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;a) Connect your Git provider where the source code is hosted. Once you authorize Netlify to access the Git provider account, the Netlify app is installed in your GitHub profile to allow access specifically to a repo or to all repos hosted under the Git account.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/picture3-1562946579014.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1562946579015&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;b)	&lt;strong&gt;Pick&lt;/strong&gt; a repository or &lt;strong&gt;Select All&lt;/strong&gt; repositories to avoid doing this step for every repo.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/picture5-1562946635493.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1562946635494&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;c) This is where you provide all of the deployment settings. Your build settings should match the build settings provided in the &lt;code&gt;[build]&lt;/code&gt; section of the netlify.toml file that was added at the beginning of the exercise.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/picture6-1562946692251.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1562946692252&quot;&gt;&lt;/p&gt;
&lt;p&gt;Click on the &lt;code&gt;Deploy site&lt;/code&gt; button and watch the deployment progress in the deploy log provided.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/picture7-1562946739478.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1562946739479&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/picture8-1562946768653.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1562946768653&quot;&gt;&lt;/p&gt;
&lt;p&gt;At this point, you can preview the deployment for this specific build or go back to the home page on the app.netlify.com site to find the default URL allocated on the Netlify domain.&lt;/p&gt;
&lt;p&gt;The URL should look like this &lt;code&gt;https://something-abc-38642d.netlify.com&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;If your app needs to serve routes other than the ‘/’ route, the setup above presents a problem: those routes won’t resolve. To fix this, add a ‘_redirects’ file to the public folder in your project directory, where the index.html also resides. This allows any other routes to work with the app. Without it, you will land on a “page not found” error or be redirected to the home page.&lt;/p&gt;
&lt;p&gt;_redirects:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;/*    /index.html   200

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Having completed the above, you should now have a working frontend application. But when you want to have your server running on the same host, you need to take a few extra steps, which I’ll explain next.&lt;/p&gt;
&lt;h2&gt;Server side setup:&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Pre-requisites:&lt;/strong&gt; You need to install the &lt;code&gt;netlify-lambda&lt;/code&gt; package to enable serverless lambda functions for running the API. Since the lambda functions run as a service on the same host, you also need two more dependencies: 1) the &lt;code&gt;http-proxy-middleware&lt;/code&gt; package to set up a localhost proxy, and 2) the &lt;code&gt;npm-run-all&lt;/code&gt; package to run all build commands in parallel.&lt;/p&gt;
&lt;p&gt;Once you install the &lt;code&gt;netlify-lambda&lt;/code&gt; and &lt;code&gt;http-proxy-middleware&lt;/code&gt; packages, update the scripts section of the package.json file as shown below.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;
&quot;scripts&quot;: {
    &quot;start&quot;: &quot;react-scripts start&quot;,
    &quot;start:lambda&quot;: &quot;netlify-lambda serve src/lambda&quot;,
    &quot;build&quot;: &quot;run-p build:**&quot;,
    &quot;build:app&quot;: &quot;react-scripts build&quot;,
    &quot;build:lambda&quot;: &quot;netlify-lambda build src/lambda&quot;,
    &quot;test&quot;: &quot;react-scripts test&quot;,
    &quot;eject&quot;: &quot;react-scripts eject&quot;
  },

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, create a setupProxy.js file inside the &lt;code&gt;src&lt;/code&gt; directory and add the following code:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; You do not need to import this file anywhere. It is automatically registered by create-react-app when you start the development server.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;
const proxy = require(&apos;http-proxy-middleware&apos;);

module.exports = function(app) {
  app.use(proxy(&apos;/.netlify/functions/&apos;, { 
    target: &apos;http://localhost:9000/&apos;,
    &quot;pathRewrite&quot;: {
      &quot;^/\\.netlify/functions&quot;: &quot;&quot;
    }
  }));
};

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;At the root of the project, update &lt;code&gt;netlify.toml&lt;/code&gt; file with the following code:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;[build]
  command = &quot;yarn build&quot;
  functions = &quot;functions&quot;
  publish = &quot;dist&quot;

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, add a &lt;code&gt;lambda&lt;/code&gt; folder under the &lt;code&gt;src&lt;/code&gt; directory to start setting up the server-side functionality. For simplicity, we will write a simple Express.js app with a few routes.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;
// DB connection
const mongoose = require(&apos;mongoose&apos;);
mongoose.connect(process.env.MONGODB_URI, { useNewUrlParser: true });

const express = require(&quot;express&quot;);
const cors = require(&apos;cors&apos;)
const serverless = require(&apos;serverless-http&apos;);

const app = express();
const router = express.Router();

const userSchema = mongoose.Schema({
  name: {type: String, required: true},
  email: {type: String},
});

const User = mongoose.model(&apos;User&apos;, userSchema);

app.use(express.json());
app.use(express.urlencoded({extended:true}));
app.use(cors());

router.get(&apos;/ping&apos;, (req, res) =&gt; {
  res.send(&apos;pong!&apos;);
});

// Get all user records
router.get(&apos;/users&apos;, (req, res) =&gt; {
  User.find()
    .then(users =&gt; res.send(users))
    .catch(err =&gt; {
      res.send(err);
    });
});

// Model routes
app.use(`/.netlify/functions/api`, router);

module.exports = app;

module.exports.handler = serverless(app);

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With this setup, when Netlify runs the build command as part of deployment, it builds the frontend app into the dist directory and places the API into the functions directory.&lt;/p&gt;
&lt;p&gt;You can make GET requests on the API with &lt;code&gt;http://localhost:9000/.netlify/functions/api/users&lt;/code&gt; if you are running locally or &lt;code&gt;https://something-abc-38642d.netlify.com/.netlify/functions/api/users&lt;/code&gt; if you are running in production mode. Custom domains can be set up on the Netlify site if you want your site to run on a non-Netlify domain. As a bonus, Netlify automatically enables an SSL/TLS certificate with the help of Let’s Encrypt.&lt;/p&gt;
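&lt;p&gt;To tie the two halves together, here is a minimal sketch of how the React frontend might call that users route. The relative path comes from the Express app above; the &lt;code&gt;loadUsers&lt;/code&gt; helper and the file name are hypothetical and only meant to illustrate the pattern.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;
// src/api.js (hypothetical helper) - calling the serverless API from the React app.
// The relative path works locally (proxied by setupProxy.js to port 9000) and in
// production, because the functions are served from the same host as the site.
const API_BASE = &apos;/.netlify/functions/api&apos;;

export async function loadUsers() {
  const response = await fetch(`${API_BASE}/users`);
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return response.json();
}

// Example usage inside a React component:
// loadUsers().then(users =&gt; console.log(users)).catch(console.error);

&lt;/code&gt;&lt;/pre&gt;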
&lt;p&gt;Now, every time you push changes to the master (or a PR is created to merge a particular branch into the master), the Netlify CI/CD pipeline automatically builds the code and deploys it into production.&lt;/p&gt;
&lt;p&gt;CI/CD is integral to building and deploying software today, given smaller teams, constant change, fast and real-time feedback, and quick app deployment. CI/CD provides clear benefits not only to businesses but also to stakeholders such as product owners, development teams, and end users. While building a CI/CD pipeline requires additional effort, once the CI/CD process stabilizes within an organization, it offers substantial advantages, such as reduced costs, faster deployments, and increased ROI. In addition, with CI/CD automation, developers can invest more time building better software solutions rather than spending that time on code integration and delivery.&lt;/p&gt;
&lt;p&gt;The CI/CD methodology facilitates the building and improvement of great apps with faster time to market. Improved automation also allows for a more streamlined app development cycle, enabling developers to receive feedback more quickly and build better, more consistent apps.&lt;/p&gt;
&lt;p&gt;I hope this blog helps you to set up the CI/CD pipeline using Netlify for any of your personal projects or a project setup for an entire team. Please continue to follow our &lt;a href=&quot;/blog&quot;&gt;blogs&lt;/a&gt; for more hints and tutorials designed to help you optimize your development environment.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[The Power of Single Sign-On with HPE OneView]]></title><description><![CDATA[An infrastructure management tool, like HPE OneView, allows IT to do most day-to-day infrastructure administration tasks. But there are…]]></description><link>https://developer.hpe.com/the-power-of-single-sign-on-with-hpe-oneview/</link><guid isPermaLink="false">https://developer.hpe.com/the-power-of-single-sign-on-with-hpe-oneview/</guid><pubDate>Thu, 11 Jul 2019 19:44:24 GMT</pubDate><content:encoded>&lt;p&gt;An infrastructure management tool, like &lt;a href=&quot;https://www.hpe.com/us/en/integrated-systems/software.html&quot;&gt;HPE OneView,&lt;/a&gt; allows IT to do most day-to-day infrastructure administration tasks. But there are cases where other related tools can be useful too. It can be tedious to remember how to log on to all these different tools. Sure, you can use a password safe tool, or (please don’t do this…) a TXT file edited with Notepad.&lt;/p&gt;
&lt;p&gt;A better way is to use a single sign-on between different tools, meaning that you log on once with the most encompassing tool, and then drill down to other dependent tools without having to log on again. This is very straightforward when you use a GUI that allows you to follow hyperlinks, but it can also be done with APIs and your favorite programming language.&lt;/p&gt;
&lt;p&gt;In this article I will show you how to use &lt;a href=&quot;https://www.getpostman.com/&quot;&gt;Postman&lt;/a&gt; to work with REST APIs. I will begin with &lt;a href=&quot;https://buy.hpe.com/b2c/us/en/software/converged-infrastructure-management-software/converged-infrastructure-management/oneview-management-software/hpe-oneview-global-dashboard/p/1009187269&quot;&gt;HPE OneView Global Dashboard&lt;/a&gt; which gives you an overall view of a datacenter, one that could be very large (up to 20,000 servers are supported in the latest version). From there, I will drill down with single sign-on (SSO) to an HPE OneView appliance. And from there, to the &lt;a href=&quot;https://developer.hpe.com/platform/ilo-restful-api/home&quot;&gt;Redfish API&lt;/a&gt; provided by the iLO of a server – all without having to pass credentials again after the initial logon.&lt;/p&gt;
&lt;p&gt;So, let’s start with the HPE OneView Global Dashboard and log on with a POST call to /rest/login-sessions with a few headers shown below and a body set to this JSON content:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{&quot;userName&quot;:&quot;your_username&quot;,&quot;password&quot;:&quot;your_password&quot;}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You should see a response like this:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/picture1-1562874385556.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1562874385558&quot;&gt;&lt;/p&gt;
&lt;p&gt;The important data returned is the content of the token key (the sessionID key carries the same value). We will carry this token in the “auth” header of all subsequent calls to the HPE OneView Global Dashboard API.&lt;/p&gt;
&lt;p&gt;Let’s look at how to connect to an HPE OneView appliance that was previously registered in our HPE OneView Global Dashboard. We can get a list of appliances with a GET to /rest/appliances, from which we get the UUID of the appliance we want. Here it is shown as 5224a1f9-f272-4501-8fbb-80b3d9c13339.&lt;/p&gt;
&lt;p&gt;When we do a GET on /rest/appliances/5224a1f9-f272-4501-8fbb-80b3d9c13339/sso we receive a response like this:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/picture2-1562874422291.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1562874422292&quot;&gt;&lt;/p&gt;
&lt;p&gt;Once again, the value we are interested in here is the sessionID.  We will use it in the auth header for subsequent calls to the HPE OneView appliance. We did not have to pass a username/password to the HPE OneView appliance, since we received a token from our HPE OneView Global Dashboard authenticated session. We can use this token to do anything we want to this particular HPE OneView appliance. For instance, we can get a list of servers managed by the appliance with a GET call to /rest/server-hardware, this time directing it to the IP address of the HPE OneView appliance instead of HPE OneView Global Dashboard.&lt;/p&gt;
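&lt;p&gt;If you would rather script this drill-down than click through Postman, here is a minimal Node.js sketch of the same sequence, under a few stated assumptions: the host names, credentials, appliance UUID, and the X-API-Version value are placeholders to adapt to your environment (use the headers shown in the screenshots above), and TLS verification is disabled on the assumption of lab appliances with self-signed certificates. Only the /rest/login-sessions, /rest/appliances/{uuid}/sso, and /rest/server-hardware calls and the “auth” header come from the flow described in this article.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;
// sso-drilldown.js - minimal sketch; run with: node sso-drilldown.js
const https = require(&apos;https&apos;);

const OVGD_HOST = &apos;ovgd.example.com&apos;;          // placeholder: your Global Dashboard
const APPLIANCE_UUID = &apos;5224a1f9-f272-4501-8fbb-80b3d9c13339&apos;;  // taken from GET /rest/appliances
const APPLIANCE_HOST = &apos;oneview.example.com&apos;;  // placeholder: the appliance IP or hostname

// Tiny JSON-over-HTTPS helper; TLS verification is skipped for self-signed lab certificates.
function call(host, method, path, headers, body) {
  const payload = body ? JSON.stringify(body) : null;
  return new Promise((resolve, reject) =&gt; {
    const req = https.request({
      host, method, path,
      headers: {
        &apos;Content-Type&apos;: &apos;application/json&apos;,
        &apos;X-API-Version&apos;: &apos;800&apos;,   // assumption: match the API version your environment expects
        ...headers,
      },
      rejectUnauthorized: false,
    }, (res) =&gt; {
      let data = &apos;&apos;;
      res.on(&apos;data&apos;, (chunk) =&gt; { data += chunk; });
      res.on(&apos;end&apos;, () =&gt; resolve(JSON.parse(data)));
    });
    req.on(&apos;error&apos;, reject);
    if (payload) req.write(payload);
    req.end();
  });
}

async function main() {
  // 1. Log on to the Global Dashboard once; keep the token for the &quot;auth&quot; header.
  const login = await call(OVGD_HOST, &apos;POST&apos;, &apos;/rest/login-sessions&apos;, {},
    { userName: &apos;your_username&apos;, password: &apos;your_password&apos; });

  // 2. Single sign-on: exchange the dashboard session for an appliance session.
  const sso = await call(OVGD_HOST, &apos;GET&apos;,
    `/rest/appliances/${APPLIANCE_UUID}/sso`, { auth: login.token });

  // 3. Talk to the HPE OneView appliance directly - no username/password needed.
  const servers = await call(APPLIANCE_HOST, &apos;GET&apos;, &apos;/rest/server-hardware&apos;,
    { auth: sso.sessionID });
  console.log(servers);
}

main().catch(console.error);

&lt;/code&gt;&lt;/pre&gt;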
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/picture3-1562874460779.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1562874460780&quot;&gt;&lt;/p&gt;
&lt;p&gt;Now, I’ll show you how to use the Redfish-compliant REST API of the iLO of a server to get more details than what surfaces in HPE OneView. Following the same principle, we will find the UUID of the server we are interested in, and we will make a GET call to /rest/server-hardware/{uuid}/remoteConsoleUrl&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/picture5-1562874508772.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1562874508773&quot;&gt;&lt;/p&gt;
&lt;p&gt;From the response, we get a link to the iLO remote console of that server, but we can use the data for other purposes as well. It gives us the IP address of the iLO (192.168.3.105) and a session key (8da4257ecf181c186f2510a03ac2a2fe). With that, we can make calls to the iLO without having to create a session with username/password credentials. For example, we can get the list of all DIMM memory slots in the server and which DIMMs are installed in them.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/picture6-1562874564543.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1562874564544&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/picture7-1562874569155.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1562874569156&quot;&gt;&lt;/p&gt;
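&lt;p&gt;And here is a matching sketch for the last hop. The iLO address and session key below are the example values from the response above; passing that key in the X-Auth-Token header and reading the standard Redfish memory collection at /redfish/v1/Systems/1/Memory/ are assumptions based on typical iLO Redfish behavior, so adjust them to whatever your iLO exposes.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;
// ilo-memory.js - minimal sketch: query the iLO Redfish API with the session key
// obtained through single sign-on, without creating a new iLO session.
const https = require(&apos;https&apos;);

const ILO_HOST = &apos;192.168.3.105&apos;;                        // iLO IP from the remoteConsoleUrl response
const SESSION_KEY = &apos;8da4257ecf181c186f2510a03ac2a2fe&apos;;  // session key from the same response

function redfishGet(path) {
  return new Promise((resolve, reject) =&gt; {
    https.get({
      host: ILO_HOST,
      path,
      headers: { &apos;X-Auth-Token&apos;: SESSION_KEY },  // assumption: the SSO key acts as a Redfish session token
      rejectUnauthorized: false,                 // iLOs commonly present self-signed certificates
    }, (res) =&gt; {
      let data = &apos;&apos;;
      res.on(&apos;data&apos;, (chunk) =&gt; { data += chunk; });
      res.on(&apos;end&apos;, () =&gt; resolve(JSON.parse(data)));
    }).on(&apos;error&apos;, reject);
  });
}

async function main() {
  // The Redfish memory collection lists every DIMM slot; follow each member link for details.
  const memory = await redfishGet(&apos;/redfish/v1/Systems/1/Memory/&apos;);
  for (const member of memory.Members) {
    console.log(member[&apos;@odata.id&apos;]);
  }
}

main().catch(console.error);

&lt;/code&gt;&lt;/pre&gt;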
&lt;p&gt;As you can see, single sign-on is an easy, yet powerful, way to use different tools with minimum hassle. I hope you found this tutorial useful. Please continue to follow our &lt;a href=&quot;/blog&quot;&gt;blog posts&lt;/a&gt; for more ways to optimize your software development environment. I would be very interested to see how you are creating your own shortcuts. Feel free to connect with me and the team on &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;Slack&lt;/a&gt; to share your experiences and ask questions.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE Discover Las Vegas Attendees Win Big at the Hack Shack]]></title><description><![CDATA[5bf2e1a0cd93d0796238ae01-blog-content-1562772944972 Folks come to Las Vegas to win, and at HPE Discover ’19, it was no different. The HPE…]]></description><link>https://developer.hpe.com/hpe-discover-las-vegas-attendees-win-big-at-the-hack-shack/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-discover-las-vegas-attendees-win-big-at-the-hack-shack/</guid><pubDate>Wed, 10 Jul 2019 15:34:34 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/picture1-1562772944968.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1562772944972&quot;&gt;&lt;/p&gt;
&lt;p&gt;Folks come to Las Vegas to win, and at HPE Discover ’19, it was no different. The HPE Discover Las Vegas Hack Shack welcomed about 1,125 guests, many of them developers, who came to try their hand at winning some cool prizes such as drones, Raspberry Pi kits, tee shirts, and hats. The Hack Shack proved to be a popular destination, with attendance surpassing even that of an interesting Disney booth situated at the entrance to the event. Here’s how some Hack Shack guests came away as big winners.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/picture2-1562772953069.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1562772953071&quot;&gt;&lt;/p&gt;
&lt;p&gt;One opportunity attendees had to win prizes came from participating in Hack Shack technical coding challenges. These challenges ranged from writing simple scripts that displayed the words “Hello World,” to more complicated hacks that required the coder to enable a camera deployed at the network edge to failover in the event of an outage. This latter challenge proved very popular, as it featured products that attendees were keen to learn more about, such as Kubernetes, HPE Edgeline servers, and a fun and innovative facial recognition software app.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/picture3-1562772960861.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1562772960862&quot;&gt;&lt;/p&gt;
&lt;p&gt;It didn’t matter which challenge guests attempted. The winners were chosen based on four criteria:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Usefulness&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Uniqueness&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Creativity&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Completeness&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;First prize winner, Matt Burke, took the &lt;a href=&quot;https://www.hpe.com/us/en/integrated-systems/simplivity.html&quot;&gt;HPE SimpliVity&lt;/a&gt; API challenge, which required the coder to build a simple report that showed the status of on-going backups within a user-defined time period. Matt created a web application using NodeJs as a backend and designed a front end using HTML, CSS, and Javascript. What made his solution stand out was the attention he paid to making it visually attractive and user friendly.&lt;/p&gt;
&lt;p&gt;Chris Price, who came in second place, completed an &lt;a href=&quot;https://www.hpe.com/us/en/integrated-systems/software.html&quot;&gt;HPE OneView&lt;/a&gt; challenge using Python to programmatically find how many BL460 Gen10 servers had greater than 128 GB of memory. He included user interaction in his script, enabling the user to select various options, such as blades and memory size.&lt;/p&gt;
&lt;p&gt;Perhaps one of the most impressive hacks was from third place winner, John Frakes, who completed an HPE OneView challenge using Python. When John arrived, he was unfamiliar with Python. But because he wanted to participate, he took it upon himself to learn some Python basics and not only completed the challenge but also included additional functionality. As part of the solution, John parameterized the queries so that users could change what types of servers were chosen, without having to change the code. He also took the time to ensure that the output was formatted nicely on the screen, instead of simple, plain text. In appreciation of the assistance he received from the HPE DEV team, who helped him find ways to familiarize himself with Python, John agreed to spend some time developing Python language bindings for the HPE OneView Global Dashboard and contribute them to the community.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/picture4-1562772966574.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1562772966575&quot;&gt;&lt;/p&gt;
&lt;p&gt;Another way attendees could win prizes at the Hack Shack was to play the 8-bit arcade game, &lt;a href=&quot;https://github.com/HewlettPackard/hpe-hack-shack-attack&quot;&gt;Hack Shack Attack!&lt;/a&gt;  This cool, retro-style game focused on taming the IT Monster drew almost 1,400 game plays. The first day winner was Wayne Holland, although Wayne was quickly ousted from that spot the next morning by another attendee, Jared, on his first play through. Determined to regain the top spot on the leaderboard, Wayne played diligently until he became the ultimate winner, receiving his prize at the Wednesday night award ceremony, where beer, sunglasses, and popcorn were enjoyed by all.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/picture5-1562772973840.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1562772973841&quot;&gt;&lt;/p&gt;
&lt;p&gt;While there was a lot of friendly competition going on at the Hack Shack, it wasn’t all fun and games. During the event, HPE DEV team members were on hand in the Hack Shack to help educate attendees and supply customers with information they needed to make purchasing decisions. Some of the most popular technical workshops featured HPE OneView and HPE SimpliVity, giving attendees a deeper insight on how to code and integrate these products into their businesses. Customers were delighted to have the opportunity to chat one-on-one with product architects. And while everyone was excited to take home a prize or at least a little swag, probably the most important takeaway guests appreciated was learning how they could extend HPE product capabilities through APIs. Learning that they could do more with their existing investments made all the participants feel as if they were winners.&lt;/p&gt;
&lt;p&gt;For those of you who missed any of the sessions or just want to go back and review something, we’ve made all the &lt;a href=&quot;https://lv19.hpedev.io/&quot;&gt;workshop session presentations and challenges available on our HPE DEV Hack Shack HPE Discover Las Vegas 2019&lt;/a&gt; site. Check it out. You can even still play &lt;a href=&quot;https://github.com/HewlettPackard/hpe-hack-shack-attack&quot;&gt;Hack Shack Attack!,&lt;/a&gt; which is available on GitHub.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/picture6-1562772980984.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1562772980985&quot;&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Helping Connect the HPE DEV Community]]></title><description><![CDATA[5bf2e1a0cd93d0796238ae01-blog-content-1562774149709 Hey there… Dale Rensing here. You may remember my name as the editor of this newsletter…]]></description><link>https://developer.hpe.com/helping-connect-the-hpe-dev-community/</link><guid isPermaLink="false">https://developer.hpe.com/helping-connect-the-hpe-dev-community/</guid><pubDate>Wed, 10 Jul 2019 15:26:11 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/connecting-community2-1562774149692.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1562774149709&quot;&gt;&lt;/p&gt;
&lt;p&gt;Hey there… Dale Rensing here. You may remember my name as the editor of this newsletter, but I haven’t really introduced myself properly. Although I’m fairly new to the Hewlett Packard Enterprise (HPE) Chief Design Office, I’ve been around the technology industry for quite some time. I’ve been lucky in my career to often find myself working with teams of highly-talented engineers engaged in cutting-edge technology, getting to report on their innovation and extrapolating its value in customer terms. As a writer and the news editor of the HPE Newsletter and the HPE DEV website, my job now is to connect you with all activities within HPE DEV.&lt;/p&gt;
&lt;p&gt;HPE DEV builds, communicates, and collaborates with developers and operators in an open community, with the aim of making the deployment of applications across traditional and cloud infrastructures simple and effortless. Part of my job includes working hand-in-hand with HPE DEV engineers, encouraging them to share news and tutorials that are pertinent to the overall community. I also report on different topics – chasing news stories at events with Sir Hackington and bringing more awareness to open source projects from HPE, such as Grommet.&lt;/p&gt;
&lt;h2&gt;Methods to Facilitate Engagement&lt;/h2&gt;
&lt;p&gt;The HPE DEV website and our monthly Newsletter were put in place to enable better communication and collaboration between HPE DEV developers, UX designers, and you. You may have noticed some recent, minor modifications to these platforms, although nothing overly dramatic. I believe in the adage “If it ain’t broke, don’t fix it.” Still, that doesn’t mean there isn’t room for continued improvement. Over the next few months, you’ll see me reaching out in different ways to learn more about how you prefer to engage. I’d like to understand more about who you are and what you do. You may see a couple of quick surveys come your way. I encourage you to take the time to fill them out so I can learn more about your preferences and find easier ways to engage.&lt;/p&gt;
&lt;p&gt;Part of facilitating collaboration is helping you get to know the HPE DEV team members and what roles they play. To that end, I’m in the process of developing some short videos that introduce the different team members to you. You’ll see these come out in subsequent newsletters. If there are other suggestions you might have to help foster more collaboration within the community, I’d love to hear about them.&lt;/p&gt;
&lt;p&gt;I look forward to engaging with you more closely as we grow the HPE DEV community. Please feel free to connect with me on Slack at &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;hpedev.slack.com&lt;/a&gt; at @Dale Rensing if you have something specific, you’d like to discuss. You can also follow me and the team on Twitter &lt;a href=&quot;https://twitter.com/HPE_Developer?lang=en&quot;&gt;@HPE_DevCom.&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[OneView Global Dashboard 101: Chock-Full of Useful Features]]></title><description><![CDATA[5bf2e1a0cd93d0796238ae01-blog-content-1560890766707 To simplify the management of hybrid cloud infrastructures, the HPE OneView Global…]]></description><link>https://developer.hpe.com/oneview-global-dashboard-101-chock-full-of-useful-features/</link><guid isPermaLink="false">https://developer.hpe.com/oneview-global-dashboard-101-chock-full-of-useful-features/</guid><pubDate>Tue, 18 Jun 2019 20:32:33 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/picture1-1560890766706.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1560890766707&quot;&gt;&lt;/p&gt;
&lt;p&gt;To simplify the management of hybrid cloud infrastructures, the &lt;a href=&quot;https://www.hpe.com/us/en/integrated-systems/software.html&quot;&gt;HPE OneView&lt;/a&gt; Global Dashboard (OVGD) provides a unified view of the health and inventory of Hewlett Packard Enterprise (HPE) servers, profiles, enclosures, &lt;a href=&quot;https://www.hpe.com/us/en/integrated-systems/synergy.html&quot;&gt;HPE Synergy&lt;/a&gt; frames, and HPE 3PAR storage systems across data centers.&lt;/p&gt;
&lt;p&gt;HPE just released OVGD version 1.8, and I’m excited to tell you about what’s new. Can you believe it? This is our 9th release in less than three years! Our 10th is planned around OVGD’s 3rd birthday in October of this year. The team prides itself on having met every release on time. We are also happy with the features in each release of the product.&lt;/p&gt;
&lt;p&gt;The OVGD team designs software that helps you tame your IT monster, but sometimes features can sneak in unnoticed by users. Let’s take a quick look back and examine some of these newer releases to see if there is something you didn’t know about or a feature you have used and would like to comment on. We constantly strive to make the product better in the hopes of making you more efficient. So, keep us in mind and don’t hesitate to reach out via &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;Slack&lt;/a&gt; or &lt;a href=&quot;https://twitter.com/HPE_Developer&quot;&gt;Twitter&lt;/a&gt; if you think our product team can help you.&lt;/p&gt;
&lt;h2&gt;Data Aggregator&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;OVGD supplies data about:
&lt;ul&gt;
&lt;li&gt;Enclosures&lt;/li&gt;
&lt;li&gt;Server hardware (You get loads more data with 1.8! Just click the Server hardware page, then choose a particular resource and view all the data in the detailed pane on the right side of your screen).&lt;/li&gt;
&lt;li&gt;Server profiles&lt;/li&gt;
&lt;li&gt;Server profile templates&lt;/li&gt;
&lt;li&gt;Storage systems&lt;/li&gt;
&lt;li&gt;Storage pools&lt;/li&gt;
&lt;li&gt;Volumes&lt;/li&gt;
&lt;li&gt;SAN managers&lt;/li&gt;
&lt;li&gt;SANs&lt;/li&gt;
&lt;li&gt;Converged systems&lt;/li&gt;
&lt;li&gt;Appliance alerts (critical only)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Using single sign-on, you can log on and use HPE OneView/Composer to manage a resource with a single click!&lt;/li&gt;
&lt;li&gt;With all the data provided, OVGD can be your one-stop shop to see your data center’s health, and help you get where you need to go to fix any issues or find out more information.&lt;/li&gt;
&lt;li&gt;Using the dashboard and critical alert reporting, you can quickly understand system health across data centers and use single sign-on to “drill down” to wherever the problem resides.&lt;/li&gt;
&lt;li&gt;You can click on the wrench icon on each resource page to decide what columns will be displayed by default, ensuring you get the view you need every time you log in.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;New in Version 1.8&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;This version jumps up the scale numbers! We are going from supporting 50 HPE OneView and/or Composers to 75, with up to 20,000 servers and 25 concurrent users.&lt;/li&gt;
&lt;li&gt;We have a new resource page for your Interconnects.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Report Generator&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;We’ve got sixteen out-of-the-box reports that help you view your data center through different lenses.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/picture2-1560890771858.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1560890771858&quot;&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Reports have a lot of flexibility hidden underneath those ellipses you see on the screen.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Manage Report Content,&lt;/strong&gt; with the same familiar wrench icon, lets you choose which columns to display.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Export&lt;/strong&gt; allows you to export the data in CSV format. It also reflects any customizations you’ve made, such as search, sort, filter, and column customizations.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Schedule&lt;/strong&gt; enables a report to be emailed to whomever you’d like at a configurable cadence (with the customizations mentioned above).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Mail&lt;/strong&gt; allows you to mail the report right at the very moment it is ready (again, including any customizations made to the report).&lt;/li&gt;
&lt;li&gt;If you want to schedule an email with a search applied, a column sorted a certain way, columns added, or filters applied, you’ll need to first &lt;strong&gt;Save&lt;/strong&gt; a custom report and then schedule it. This feature allows you to create custom reports to suit your team’s needs. For example:&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Say Dave is responsible for keeping track of your Gen10s, and Wendy is in charge of your Gen9s and Gen8s. Create the ‘Dave Server Inventory’ report with a filter selected to limit the report and schedule it to arrive at work when he does. You can then do likewise for Wendy and her responsibilities.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Delete&lt;/strong&gt; is for custom reports that you no longer need.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;New in Version 1.8&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Two new reports, with all of the same flexibility mentioned above, let you see the health and status of all your interconnects and enclosures, respectively:
&lt;ul&gt;
&lt;li&gt;Interconnect Inventory&lt;/li&gt;
&lt;li&gt;Enclosure Inventory&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;OVGD 1.8 is now available, and we are excited to continue delivering a product that helps you tame your IT monster. We are driven to solve the problems you are facing. Nobody knows those problems better than you, so please get in touch with us to help us best understand your needs.&lt;/p&gt;
&lt;p&gt;If you want to know more about OVGD, you might want to check out this product &lt;a href=&quot;https://www.youtube.com/watch?v=qsmNvNoy-qw&quot;&gt;overview video&lt;/a&gt; or go &lt;a href=&quot;https://buy.hpe.com/b2c/us/en/enterprise-software/converged-infrastructure-management-software/converged-infrastructure-management/oneview-management-software/hpe-oneview-global-dashboard/p/1009187269&quot;&gt;here&lt;/a&gt; to download it and try it yourself.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Solving Enterprise DevOps and Front End Challenges with Open Source at OSCON 2019]]></title><description><![CDATA[5bf2e1a0cd93d0796238ae01-blog-content-1560460835964 For over 20 years, the O’Reilly Open Source Software Conference (OSCON) has been the…]]></description><link>https://developer.hpe.com/solving-enterprise-devops-and-front-end-challenges-with-open-source-at-o/</link><guid isPermaLink="false">https://developer.hpe.com/solving-enterprise-devops-and-front-end-challenges-with-open-source-at-o/</guid><pubDate>Thu, 13 Jun 2019 21:11:32 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/picture1-1560460835957.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1560460835964&quot;&gt;&lt;/p&gt;
&lt;p&gt;For over 20 years, the &lt;a href=&quot;https://conferences.oreilly.com/oscon/oscon-or/public/content/about&quot;&gt;O’Reilly Open Source Software Conference (OSCON)&lt;/a&gt; has been the focal point of the open source movement, earning a reputation as the destination for all things free and open. OSCON welcomes anyone who’s passionate about open source; from software developers, designers, and architects, to activists, hackers, and geeks. This year the event will take place in Portland, Oregon, from July 15 – 19, 2019.&lt;/p&gt;
&lt;p&gt;Some may be surprised to see Hewlett Packard Enterprise (HPE) at an event like OSCON, but those “in the know” will recognize HPE as a leading innovator who has a lot to offer open source software development and design teams. Attendees will get a chance to see how an HPE-developed, easy-to-use open source component library, like &lt;a href=&quot;https://v2.grommet.io/&quot;&gt;Grommet,&lt;/a&gt; can be used to create responsive, mobile-first projects.&lt;/p&gt;
&lt;p&gt;At this year’s OSCON event, Pramod Sareddy, Full-stack Developer for HPE DEV, will present &lt;a href=&quot;https://conferences.oreilly.com/oscon/oscon-or/public/schedule/detail/78263&quot;&gt;“Solving enterprise DevOps and front end challenges with open source”.&lt;/a&gt; For those planning to attend, make sure you catch his talk starting at 11:00 am on Wednesday, July 17th. Pramod will also show the audience how Open Service Broker (OSB) can save time and effort by exposing a web development framework.&lt;/p&gt;
&lt;p&gt;OSCON attendees can expect to hear from innovative programmers, talented managers, and senior developers who are all doing amazing things with open source. The event features over a hundred sessions that cover a full range of open source languages and platforms, including practical tutorials designed to enhance technical skills. With fun evening events and receptions, Birds of a Feather sessions, awards ceremonies, late night parties, and OSCON activities around town, there will be plenty of learning and networking opportunities for everyone.&lt;/p&gt;
&lt;p&gt;Check out all the &lt;a href=&quot;https://conferences.oreilly.com/oscon/oscon-or/schedule/2019-07-15&quot;&gt;sessions&lt;/a&gt; and conference &lt;a href=&quot;https://conferences.oreilly.com/oscon/oscon-or/public/schedule/stype/1350&quot;&gt;events&lt;/a&gt; that are planned. I hope to see  you there.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Accelerate Next - Newsletter]]></title><link>https://developer.hpe.com/2019-June-12/</link><guid isPermaLink="false">https://developer.hpe.com/2019-June-12/</guid><pubDate>Wed, 12 Jun 2019 05:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[An Insider’s Look into the HPE Aspire 2019 Experience]]></title><description><![CDATA[kubekon An Insider’s Look into the HPE Aspire 2019 Experience By Pramod Sareddy Recently, I was fortunate enough to join over 2,000 Solution…]]></description><link>https://developer.hpe.com/an-insiders-look-into-the-hpe-aspire-2019-experience/</link><guid isPermaLink="false">https://developer.hpe.com/an-insiders-look-into-the-hpe-aspire-2019-experience/</guid><pubDate>Tue, 11 Jun 2019 17:43:54 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/kubekon-1560274711136.png&quot; alt=&quot;kubekon&quot;&gt;&lt;/p&gt;
&lt;h1&gt;An Insider’s Look into the HPE Aspire 2019 Experience&lt;/h1&gt;
&lt;p&gt;&lt;em&gt;By Pramod Sareddy&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Recently, I was fortunate enough to join over 2,000 Solution and Technology Architect professionals who gathered at the &lt;a href=&quot;http://www.hpeaspire.com/&quot;&gt;HPE Aspire 2019&lt;/a&gt; internal employee and partner conference. The event was held May 19-23 at the &lt;a href=&quot;https://www.swandolphin.com/&quot;&gt;Swan and Dolphin Resort&lt;/a&gt; in Orlando, Florida. While there, event attendees enjoyed four days focused on growing their Hewlett Packard Enterprise (HPE) product and solution technical expertise and connecting with HPE engineers and subject matter experts (SMEs), as well as other industry specialists.&lt;/p&gt;
&lt;p&gt;The HPE DEV booth shared space with other HPE and sponsor booths in the HPE Aspire Showcase. In the booth we were able to meet, collaborate, and network with event sponsors and other members of the HPE community, including &lt;a href=&quot;https://techpro.hpe.com/hpelogin.aspx?HPPSESSION=NO&quot;&gt;Tech Pro presales support&lt;/a&gt;. The theme for this year’s show was &quot;HPE Aspire Legends&quot; and revolved around legendary characters from movies, sports, music, and television. Jack Sparrow from Pirates of the Caribbean even made a personal appearance! During the keynote, we all enjoyed a dance performance. The event hosts thought of everything – even providing us with HPE-branded bread as a snack between meals.&lt;/p&gt;
&lt;p&gt;In addition to fielding questions from sales reps while working the booth, I was approached by a number of HPE employees interested in the &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE DEV portal&lt;/a&gt;. HPE software engineers were happy to see us there and showed a strong interest in digging deeper and contributing content. Based on our discussions, some are now considering posting scripts that they developed to connect their products to &lt;a href=&quot;https://www.hpe.com/us/en/integrated-systems/software.html&quot;&gt;HPE OneView&lt;/a&gt;. &lt;a href=&quot;https://www.arubanetworks.com/&quot;&gt;Aruba Networks&lt;/a&gt; engineers expressed interest in becoming more closely aligned with the HPE DEV portal as well.
One HPE presales engineer was very excited to see that specific API documentation his developer customers had been searching for was easily accessible on HPE DEV. I pointed out that we had not only HPE OneView and &lt;a href=&quot;https://www.hpe.com/us/en/integrated-systems/synergy.html&quot;&gt;HPE Synergy&lt;/a&gt; material, but also information on other platforms, like &lt;a href=&quot;https://www.hpe.com/us/en/integrated-systems/simplivity.html&quot;&gt;HPE SimpliVity&lt;/a&gt; and &lt;a href=&quot;https://www.hpe.com/us/en/storage/nimble.html&quot;&gt;HPE Nimble Storage&lt;/a&gt;, including all the supporting tools and modules.&lt;/p&gt;
&lt;p&gt;All in all, it was a great opportunity to showcase the wide variety of resources HPE DEV provides.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE DEV at KubeCon + CloudNativeCon Europe 2019]]></title><description><![CDATA[group-picture HPE DEV at KubeCon + CloudNativeCon Europe 2019 Didier Lalli For three intense days, KubeCon + CloudNativeCon Europe 2019 was…]]></description><link>https://developer.hpe.com/hpe-dev-at-kubecon-cloudnativecon-europe-2019/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-dev-at-kubecon-cloudnativecon-europe-2019/</guid><pubDate>Tue, 11 Jun 2019 17:40:44 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/grouppicture-1560274699625.png&quot; alt=&quot;group-picture&quot;&gt;&lt;/p&gt;
&lt;h1&gt;HPE DEV at KubeCon + CloudNativeCon Europe 2019&lt;/h1&gt;
&lt;p&gt;&lt;em&gt;Didier Lalli&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;For three intense days, &lt;a href=&quot;https://events.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2019/&quot;&gt;KubeCon + CloudNativeCon Europe 2019&lt;/a&gt; was held in Barcelona, Spain, May 20-23, 2019. The show, which took place in the Fira Gran Via, drew more than 7,000 attendees, a substantial increase from the 4,400 who attended last year. A cross-organizational team from Hewlett Packard Enterprise (HPE) staffed the event booth with representatives from the product business units, HPE DEV, HPE Storage, HPE Pre-Sales, and HPE Pointnext services. Though the team came from different groups within the company, everyone had one goal in mind: promote and educate attendees on what HPE offers in the areas of containers and Kubernetes (k8s).&lt;/p&gt;
&lt;p&gt;I was fortunate to attend the conference, meeting other attendees who were all extremely engaged. Conversations at the booth revolved around HPE’s offerings for storage, &lt;a href=&quot;https://www.hpe.com/us/en/solutions/infrastructure/composable-infrastructure.html&quot;&gt;composable infrastructure&lt;/a&gt;, and HPE Pointnext services, as well as the DevOps value offered via the &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE DEV community&lt;/a&gt;. I talked with people from small startups, who were unfamiliar with the split between Hewlett Packard (HP) and HPE, and those who were unfamiliar with what the acronym “HPE” meant. I also spoke with attendees from large companies, like Airbus and Orange Labs, looking to manage their infrastructure as code.&lt;/p&gt;
&lt;p&gt;As always, booth swag was in high demand, especially our various stickers. We held a drawing each day for prizes and the winners were selected from new subscribers to the HPE DEV newsletter. We also offered attendees the &lt;a href=&quot;http://www.techdemand.io/whitepaper/solutions/hybrid-cloud-management-for-dummies/&quot;&gt;Hybrid Cloud Management for Dummies&lt;/a&gt; booklet, which proved very popular. Our local host, Jessica Navas, helped with translation to ensure communications ran smoothly both in Spanish and English. With the great weather, intriguing conversations, and incredible tapas, it was an amazing show to attend.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Elaborate the Monitoring of KVM-based Private Clouds Using Open Source Tools such as Nagios]]></title><description><![CDATA[Today, many cloud service providers offer OpenStack-based private cloud management solutions that can be delivered as a Software-as-a…]]></description><link>https://developer.hpe.com/elaborate-the-monitoring-of-kvm-based-private-clouds-using-open-source-t/</link><guid isPermaLink="false">https://developer.hpe.com/elaborate-the-monitoring-of-kvm-based-private-clouds-using-open-source-t/</guid><pubDate>Wed, 29 May 2019 21:28:19 GMT</pubDate><content:encoded>&lt;p&gt;Today, many cloud service providers offer OpenStack-based private cloud management solutions that can be delivered as a Software-as-a-Service (SaaS). In SaaS delivery models for OpenStack-based private clouds, there are two main parts: the OpenStack control plane and the node components. The control plane is generally hosted in the cloud and managed, maintained, and operated by the cloud service provider. The compute, network, and storage resources of the private cloud platforms, however, remain on-premises where customers can maintain greater control and security. The OpenStack control plane REST API endpoint is the only control plane component accessible to customers. Host agents are typically installed on compute servers in the data center to connect the customer’s on-premises compute servers to the hosted control plane through a secure communication channel.&lt;/p&gt;
&lt;p&gt;While the OpenStack-based control plane is hosted and managed by the cloud service provider, IT professionals continue to report that they need guidance about the on-premises components. In particular, they want to know what they should monitor with their open source monitoring software, so they know when there is a problem that affects their OpenStack-based private cloud environment.&lt;/p&gt;
&lt;p&gt;In this blog post, you will learn how to extend your ability to monitor KVM-based private clouds using the enterprise open source monitoring platform Nagios with the NRPE (Nagios Remote Plugin Executor) add-on.&lt;/p&gt;
&lt;h2&gt;About Nagios – the open source monitoring software&lt;/h2&gt;
&lt;p&gt;Nagios is an open source enterprise monitoring platform that allows you to monitor systems, networks and infrastructure with alerting services for servers, switches, applications and services.&lt;/p&gt;
&lt;p&gt;Installation and configuration of Nagios Core and the NRPE add-on are beyond the scope of this article. More information about Nagios can be found &lt;a href=&quot;https://www.nagios.org/&quot;&gt;here&lt;/a&gt;, and a Nagios Core installation and configuration guide can be found &lt;a href=&quot;https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/4/en/&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Step-by-step instructions to install the Nagios Core server, NRPE add-on, and NRPE daemon on Linux-based systems (CentOS 7 and Ubuntu) are available here:&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.digitalocean.com/community/tutorials/how-to-install-nagios-4-and-monitor-your-servers-on-centos-7&quot;&gt;CentsOS7&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.digitalocean.com/community/tutorials/how-to-install-nagios-4-and-monitor-your-servers-on-ubuntu-14-04#monitor-an-ubuntu-host-with-nrpe&quot;&gt;Ubuntu&lt;/a&gt;&lt;/p&gt;
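&lt;p&gt;As a rough idea of what the compute-server side involves, the NRPE daemon and the standard check plugins can, for example, be installed from the EPEL repository on CentOS 7 (on Ubuntu the equivalent packages are nagios-nrpe-server and monitoring-plugins). This is only a minimal sketch; follow the guides linked above for the full installation and configuration:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Minimal sketch: install the NRPE daemon and the standard Nagios plugins on a
# CentOS 7 compute server from EPEL, then start the daemon.
sudo yum install -y epel-release
sudo yum install -y nrpe nagios-plugins-all
sudo systemctl enable nrpe
sudo systemctl start nrpe
&lt;/code&gt;&lt;/pre&gt;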
&lt;br/&gt;
&lt;h2&gt;Services and resources to check for a KVM-based private cloud platform:&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Compute server general health monitoring
&lt;ul&gt;
&lt;li&gt;Reachability of KVM compute servers from Nagios monitoring server&lt;/li&gt;
&lt;li&gt;Health of KVM compute servers (CPU load, processes, storage, memory)&lt;/li&gt;
&lt;li&gt;NTP clock on KVM compute servers&lt;/li&gt;
&lt;li&gt;Reachability of proxy server from KVM compute servers (if a proxy server is used to reach the Internet)&lt;/li&gt;
&lt;li&gt;DNS hostname resolution for OpenStack control plane API  endpoint from KVM compute servers&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;OpenStack control plane REST API endpoint status
&lt;ul&gt;
&lt;li&gt;Check status of OpenStack Control plane REST API endpoint&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Secure channel communication between on-premises private cloud platform and hosted control plane
&lt;ul&gt;
&lt;li&gt;Check for errors in host agent logs&lt;/li&gt;
&lt;li&gt;Check for errors in secure communication agent logs&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;OpenStack services
&lt;ul&gt;
&lt;li&gt;Check for OpenStack compute services (Nova services)&lt;/li&gt;
&lt;li&gt;Check for Networking agent services (Neutron services)&lt;/li&gt;
&lt;li&gt;Check for block storage volume services (Cinder services)&lt;/li&gt;
&lt;li&gt;Check for image services (Glance services)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;OpenStack Virtual Machine instance status on private cloud
&lt;ul&gt;
&lt;li&gt;Check Virtual Machine instance status for private cloud&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;br/&gt;
&lt;h2&gt;What does it take to implement this monitoring solution?&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;A Nagios core server (the monitoring server) with access to the private cloud compute servers&lt;/li&gt;
&lt;li&gt;Installation of Nagios NRPE add-on on Nagios core server&lt;/li&gt;
&lt;li&gt;Installation of Nagios NRPE daemon on each compute server (the monitored host)&lt;/li&gt;
&lt;li&gt;Installation of NRPE ‘standard’ plugins on each compute server&lt;/li&gt;
&lt;li&gt;A Linux-based workstation with OpenStack command line interface (CLI) installed&lt;/li&gt;
&lt;li&gt;Development and testing of custom Nagios check plugins&lt;/li&gt;
&lt;li&gt;Installation and configuration of custom check plugins on each compute server (see the example following this list)&lt;/li&gt;
&lt;li&gt;Configuration of ‘servers’ and ‘service’ definitions on Nagios server (monitoring server) for monitoring the resources and services on remote compute servers&lt;/li&gt;
&lt;/ul&gt;
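&lt;p&gt;To illustrate the last two items, here is a minimal, hypothetical sketch of installing a custom plugin on a compute server, registering it with the NRPE daemon, and verifying it from the Nagios core server. The plugin name check_os_endpoint.sh, the command name, and the paths are illustrative examples and depend on how Nagios and NRPE were installed; the corresponding ‘service’ definition on the Nagios server is not shown here.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# On the monitored compute server: install the custom plugin and register it
# with the NRPE daemon (plugin and command names are hypothetical examples).
sudo cp check_os_endpoint.sh /usr/local/nagios/libexec/
sudo chmod +x /usr/local/nagios/libexec/check_os_endpoint.sh
echo &quot;command[check_os_endpoint]=/usr/local/nagios/libexec/check_os_endpoint.sh&quot; | sudo tee -a /usr/local/nagios/etc/nrpe.cfg
sudo systemctl restart nrpe

# From the Nagios core server: confirm the NRPE daemon answers, then run the new check.
/usr/local/nagios/libexec/check_nrpe -H &amp;#x3C;compute-server-ip&gt;
/usr/local/nagios/libexec/check_nrpe -H &amp;#x3C;compute-server-ip&gt; -c check_os_endpoint
&lt;/code&gt;&lt;/pre&gt;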
&lt;br/&gt;
&lt;h2&gt;Private cloud platform health monitoring with Nagios core and NRPE add-on&lt;/h2&gt;
&lt;p&gt;Figure 1 and Figure 2 show Nagios Core-based monitoring, with the NRPE add-on, of the KVM-based private cloud services and resources, as well as the OpenStack services (Nova, Neutron, Glance, Cinder, and VM instance status) monitored from a Linux workstation with the OpenStack CLI installed:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/picture-1559166146677.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1559166146679&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Figure 1 – Health monitoring of a compute server&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/picture11-1559166225373.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1559166225374&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Figure 2 – Health monitoring of OpenStack control plane services&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;In this scenario, standard Nagios NRPE check plugins are used to monitor compute server health (processes, CPU load, storage disk space, reachability, NTP time, DNS resolution). HPE DEV developed a few simple bash scripts and leveraged several others from existing sample code available on GitHub to create custom checks (also known as plugins). These custom checks are used to monitor the status of OpenStack services such as Nova, Cinder, Glance, and Neutron, as well as the OpenStack control plane REST API endpoint. They are also used to verify the secure communication channel between the on-premises compute servers and the control plane hosted in the cloud.&lt;/p&gt;
&lt;h2&gt;Some of the custom checks are shown here:&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Custom check to monitor the OpenStack control plane endpoint status&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Here is a custom NRPE plugin developed to check the status of the OpenStack control plane REST API endpoint:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;#!/bin/bash
#
#Custom Nagios nrpe plugin
#This plugin checks OpenStack control plane REST API endpoint
#status
#

URL=&quot;https://&amp;#x3C;OpenStack-control-plane-endpoint-URL&gt;/rest/status&quot;
proxysrv=&quot;http://&amp;#x3C;proxy-server&gt;:&amp;#x3C;port&gt;&quot;

status=$(curl -k -x $proxysrv -H &quot;accept: application/json&quot; -X GET $URL | jq -r &quot;.service&quot;)

if [ &quot;$status&quot; != &quot;OK&quot; ]; then
  echo &quot;CRITICAL - OpenStack control plane API endpoint is not responding - $status&quot;
  exit 2
else
  echo &quot;OK - OpenStack control plane API is $status&quot;
  exit 0
fi
&lt;/code&gt;&lt;/pre&gt;
&lt;br/&gt;
&lt;p&gt;&lt;strong&gt;Custom check to monitor for error in host agent logs&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Here is a custom NRPE plugin developed to check for a host agent error running on the compute server:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;#!/bin/bash
#
#custom Nagios nrpe plugin
#This plugin checks hostagent log for errors
#

output=`tail -1 /var/log/hostagent.log | grep -i &apos;error\|failed&apos;`

if [ -z &quot;$output&quot; ]; then
  echo &quot;OK- No error or failure in hostagent.log&quot;
  exit 0
else
  echo &quot;CRITICAL- $output&quot;
  exit 2
fi
&lt;/code&gt;&lt;/pre&gt;
&lt;br/&gt;
&lt;p&gt;&lt;strong&gt;Custom check (obtained from GitHub) to monitor OpenStack Nova compute services:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Here is a custom NRPE plugin obtained from GitHub to check the status of the OpenStack Nova compute services. Very similar custom checks are used to monitor other OpenStack services such as Cinder, Neutron, and Glance.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;#!/bin/bash
#source: https://github.com/taha-bindas/openstack_nagios
#
#Custom Nagios nrpe plugin for openstack
#This plugin checks whether the nova services are enabled and up
#

export http_proxy=http://&amp;#x3C;proxy-server&gt;:&amp;#x3C;port&gt;
export https_proxy=http://&amp;#x3C;proxy-server&gt;:&amp;#x3C;port&gt;
#
export OS_AUTH_URL=https://&amp;#x3C;OpenStack-control-plane-endpoint&gt;/keystone/v3
export OS_IDENTITY_API_VERSION=3
export OS_REGION_NAME=&quot;&amp;#x3C;region-Name&gt;&quot;
export OS_USERNAME=&quot;&amp;#x3C;admin-username&gt;&quot;
export OS_PASSWORD=&quot;&amp;#x3C;admin-password&gt;&quot;
export OS_PROJECT_NAME=&quot;&amp;#x3C;Tenant-Name&gt;&quot;
export OS_PROJECT_DOMAIN_ID=${OS_PROJECT_DOMAIN_ID:-&quot;default&quot;}
export OS_USER_DOMAIN_ID=default

EXIT=&quot;GOOD&quot;

CMD=$(openstack compute service list | egrep -v &quot;+----|Binary&quot; | awk -F&apos;|&apos; &apos;{print $2,$3,$4,$5,$6,$7,$8}&apos;)
while read ID BINARY HOST ZONE STATUS STATE REASON
do
        if [ &quot;$STATE&quot; != &quot;up&quot; ]
        then
                EXIT=&quot;BAD&quot;
                echo &quot;$BINARY on $HOST is $STATE, reason: $REASON&quot;
                continue
        fi
done &amp;#x3C; &amp;#x3C;(echo &quot;$CMD&quot;)
if [ &quot;$EXIT&quot; == &quot;BAD&quot; ]
then
        echo &quot;CRITICAL - not all nova services are up and running: $CMD&quot;
        exit 2
else
        echo &quot;OK - all nova services are up and running&quot;
        exit 0
fi
&lt;/code&gt;&lt;/pre&gt;
&lt;br/&gt;
&lt;p&gt;&lt;strong&gt;Custom check (obtained from GitHub) to monitor OpenStack Virtual Machine instance status:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Here is a custom NRPE plugin obtained from GitHub to check the status of OpenStack Virtual Machine instances across all tenants.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;#!/bin/bash
#Created by: Taha Ali (tahazohair@gmail.com)
#Created Date: 4/11/2014
#https://github.com/taha-bindas/openstack_nagios
#
#Nagios nrpe plugin for openstack
#This plugin checks whether each openstack vm is in an
#active state, across all Projects/Tenants
#
export http_proxy=http://&amp;#x3C;proxy-server&gt;:&amp;#x3C;port&gt;
export https_proxy=http://&amp;#x3C;proxy-server&gt;:&amp;#x3C;port&gt;  
#
export OS_AUTH_URL=https://&amp;#x3C;OpenStack-control-plane-endpoint&gt;/keystone/v3
export OS_IDENTITY_API_VERSION=3
export OS_REGION_NAME=&quot;&amp;#x3C;region-Name&gt;&quot;
export OS_USERNAME=&quot;&amp;#x3C;admin-username&gt;&quot;    
export OS_PASSWORD=&quot;&amp;#x3C;admin-password&gt;&quot;    
export OS_PROJECT_NAME=&quot;&amp;#x3C;Tenant-Name&gt;&quot;    
export OS_PROJECT_DOMAIN_ID=${OS_PROJECT_DOMAIN_ID:-&quot;default&quot;}
export OS_USER_DOMAIN_ID=default

#set -x.     

CMD=$(openstack server list --all-projects | egrep -v &quot;+---------|ID&quot; | awk -F&apos;|&apos; &apos;{print $3,$4,$5}&apos;)
# Read from process substitution (not a pipeline) so the exit codes below
# terminate the plugin itself rather than a subshell.
while read VM STATUS NETWORK
do
  case $STATUS in
    ERROR)
      echo &quot;CRITICAL - $VM is currently in $STATUS state&quot;
      exit 2
      ;;
    SUSPENDED)
      echo &quot;CRITICAL - $VM is currently in $STATUS state&quot;
      exit 2
      ;;
    PAUSED)
      echo &quot;WARNING - $VM is currently in $STATUS state&quot;
      exit 1
      ;;
  esac
done &amp;#x3C; &amp;#x3C;(echo &quot;$CMD&quot;)
echo &quot;OK - all VMs are in an OK state&quot;
exit 0
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I hope this article helped you determine how to monitor the health of a KVM-based private cloud compute layer including the service availability in the managed OpenStack control plane portion of the service delivered via SaaS. For more tutorials like this, continue to monitor &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;hpedev.io.&lt;/a&gt; Consider &lt;a href=&quot;https://developer.hpe.com/newsletter-signup&quot;&gt;subscribing to our newsletter&lt;/a&gt; which highlights our latest posts.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Destination: Discover - Newsletter]]></title><link>https://developer.hpe.com/2019-may-29/</link><guid isPermaLink="false">https://developer.hpe.com/2019-may-29/</guid><pubDate>Wed, 29 May 2019 05:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[An Open Service Broker Project Delivers a Sample DevOps Environment to AWS]]></title><description><![CDATA[As described by Wikipedia, “DevOps is a set of software development practices that combines software development (Dev) and information…]]></description><link>https://developer.hpe.com/an-open-service-broker-project-delivers-a-sample-devops-environment-to-a/</link><guid isPermaLink="false">https://developer.hpe.com/an-open-service-broker-project-delivers-a-sample-devops-environment-to-a/</guid><pubDate>Thu, 23 May 2019 19:07:42 GMT</pubDate><content:encoded>&lt;p&gt;As described by &lt;a href=&quot;https://en.wikipedia.org/wiki/DevOps&quot;&gt;Wikipedia, “DevOps&lt;/a&gt; is a set of software development practices that combines software development (Dev) and information technology operations (Ops).” DevOps shortens the systems development life cycle while delivering features, fixes, and updates frequently in close alignment with business objectives. In a DevOps environment, different disciplines collaborate, making quality everyone&apos;s “job.” DevOps is often hailed as a giant step towards bringing better software to market faster and many organizations now view it as an important goal to establish an efficient DevOps pipeline and attain a true DevOps environment.&lt;/p&gt;
&lt;p&gt;Because it is so unlike traditional software development methods, implementing DevOps introduces some challenges. DevOps works differently: the working environment uses distinct virtual machines, tool sets, and utilities, and the infrastructure and surrounding software environment need to be provisioned from scratch, maintained, and managed differently from ITOps.&lt;/p&gt;
&lt;p&gt;There are many ways to provision a DevOps environment. This blog describes one method that uses Open Service Broker, AWS CloudFormation, and Ansible. The implementation consists of a Python-based broker service, an AWS CloudFormation template, and Ansible playbooks, all kept under version control so that you can trace changes and find and debug potential problems. This method has several advantages:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Open Service Broker (OSB) is a standard way of delivering services. Any portal or component that supports the OSB API can surface such a service in its own catalog and make it available to its users. &lt;a href=&quot;https://www.hpe.com/us/en/solutions/cloud/onesphere.html&quot;&gt;HPE OneSphere&lt;/a&gt; supports key OSB APIs, so you can easily register the DevOps broker with HPE OneSphere for consumption by HPE OneSphere customers.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;AWS CloudFormation provides a common language to describe and provision all the infrastructure resources in an AWS cloud environment. Using AWS CloudFormation also makes it easier to manage the resources. For example, a DevOps environment normally requires more than one virtual machine. With AWS CloudFormation, you can group all virtual machines into one &lt;a href=&quot;https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacks.html&quot;&gt;stack.&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Ansible is used to provision tools, utilities, and applications in the infrastructure. The tools and applications may be any software such as Java, Git, Nginx, MariaDB, PHP, and Jenkins. IT admins can centrally manage the Ansible playbooks and then push them out to the DevOps environment. Different versions of applications can be installed dynamically on the fly instead of being pre-burned into a virtual machine image.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Prepare the environment for running OSB service and Ansible&lt;/h3&gt;
&lt;p&gt;In this first step, prepare the virtual machine environment for hosting the OSB broker service. A condensed command-line sketch of these preparation steps follows the lists below.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;An Ubuntu 16.04 virtual machine running on Azure was chosen as the hosting VM for the OSB broker service. Other Linux distributions (such as CentOS) will also work. The hosting environment can be either in a cloud (such as AWS) or on-premises (such as VMware).&lt;/li&gt;
&lt;li&gt;In the hosting VM, use ssh-keygen to generate an SSH key pair, then create a key pair in AWS and copy the contents of the public key into it. Remember the key pair name; it will be used later.&lt;/li&gt;
&lt;li&gt;Make sure that &lt;a href=&quot;http://ubuntuhandbook.org/index.php/2017/07/install-python-3-6-1-in-ubuntu-16-04-lts/&quot;&gt;python 3.5.2 or above is installed&lt;/a&gt; in the hosting VM.&lt;/li&gt;
&lt;li&gt;Make sure that &lt;a href=&quot;https://linuxize.com/post/how-to-install-pip-on-ubuntu-18.04/&quot;&gt;pip3 is installed&lt;/a&gt; in the hosting VM.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://tecadmin.net/install-ansible-on-ubuntu-16-04-xenial/&quot;&gt;Install Ansible&lt;/a&gt; in the hosting VM.&lt;/li&gt;
&lt;li&gt;Make sure the Ansible working folder exists in the hosting VM.
&lt;ul&gt;
&lt;li&gt;Normally, it is “/etc/ansible”. Create the folder if it does not exist.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.ansible.com/ansible/latest/reference_appendices/galaxy.html#installing-roles&quot;&gt;Install the following Ansible roles in the Ansible working folder&lt;/a&gt; in the hosting VM.
&lt;ul&gt;
&lt;li&gt;apache&lt;/li&gt;
&lt;li&gt;jenkins&lt;/li&gt;
&lt;li&gt;php&lt;/li&gt;
&lt;li&gt;phpunit&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Configure Ansible dynamic inventory modules in the Ansible working folder in the hosting VM.
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/ansible/ansible/blob/devel/contrib/inventory/ec2.py&quot;&gt;Download ec2.py&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Modify first line of ec2.py to make sure python3 is chosen: #!/usr/bin/env python3.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/ansible/ansible/blob/devel/contrib/inventory/ec2.ini&quot;&gt;Download ec2.ini&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Modify ec2.ini to point at the right regions, for example: regions = us-west-1 (and leave the exclusion line commented out: #regions_exclude = us-gov-west-1, cn-north-1)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html&quot;&gt;Install awscli&lt;/a&gt; in the hosting VM.&lt;/li&gt;
&lt;/ul&gt;
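&lt;p&gt;Putting the preparation steps above together, here is a condensed, hypothetical sketch of the commands involved on the Ubuntu hosting VM. Package names, role names, the key pair name, and URLs are illustrative only; refer to the guides linked above for the authoritative, distribution-specific steps.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Condensed sketch of preparing the hosting VM (Ubuntu 16.04); adjust to your environment.
sudo apt-get update
sudo apt-get install -y python3 python3-pip curl

# Install Ansible and the AWS CLI (the linked guides describe distribution-specific alternatives).
sudo pip3 install ansible awscli

# Generate an SSH key pair and import the public key into AWS EC2.
# Assumes the AWS CLI has already been configured with credentials (aws configure);
# the key pair name DevOpsKey is a hypothetical example.
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N &quot;&quot;
aws ec2 import-key-pair --key-name DevOpsKey --public-key-material file://~/.ssh/id_rsa.pub

# Ansible working folder, roles, and EC2 dynamic inventory.
# The role names below are placeholders for the apache, jenkins, php, and phpunit roles you choose.
sudo mkdir -p /etc/ansible
ansible-galaxy install &amp;#x3C;apache-role&gt; &amp;#x3C;jenkins-role&gt; &amp;#x3C;php-role&gt; &amp;#x3C;phpunit-role&gt;
sudo curl -o /etc/ansible/ec2.py https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.py
sudo curl -o /etc/ansible/ec2.ini https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.ini
sudo sed -i &apos;1s|.*|#!/usr/bin/env python3|&apos; /etc/ansible/ec2.py
sudo chmod +x /etc/ansible/ec2.py
&lt;/code&gt;&lt;/pre&gt;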
&lt;h3&gt;Get and run the OSB broker service&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Create an OSB working folder in the hosting VM. Use any folder you prefer.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.hpe.com/peng-liu/osb-devops&quot;&gt;Get the OSB broker service&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;You can download the zip file and unzip the package to the OSB working folder.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Install the required packages
&lt;ul&gt;
&lt;li&gt;sudo pip3 install --no-cache-dir -r requirements.txt&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Start the OSB service broker:
&lt;ul&gt;
&lt;li&gt;python3 osb_template.py&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Register the DevOps OSB service in HPE OneSphere&lt;/h3&gt;
&lt;p&gt;Log in to HPE OneSphere instance, choose Settings (see Figure 1), then click on Catalog Registry.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/picture1-1558638905136.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1558638905137&quot;&gt;&lt;/p&gt;
&lt;p&gt;Figure 1. HPE OneSphere Settings&lt;/p&gt;
&lt;p&gt;Register the OSB broker service in the catalog by clicking the plus (+) sign in the Catalog Registry page.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/picture11-1558638965078.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1558638965078&quot;&gt;&lt;/p&gt;
&lt;p&gt;Figure 2. Register OSB broker service.&lt;/p&gt;
&lt;p&gt;The broker service now appears in the HPE OneSphere catalog.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/picture13-1558639040081.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1558639040081&quot;&gt;&lt;/p&gt;
&lt;p&gt;Figure 3. The OSB broker service appears as osb-devops.&lt;/p&gt;
&lt;h3&gt;Deploy an instance of the OSB broker service – DevOps environment&lt;/h3&gt;
&lt;p&gt;The DevOps OSB broker service offers two virtual environments; one is for Jenkins with PHP and PHPUnit, and the other is for an Apache Webserver with PHP.&lt;/p&gt;
&lt;p&gt;Deploying this broker service provisions two AWS CloudFormation stacks. Each stack has one virtual machine. Corresponding software will be installed into each VM respectively, with all dependent tools and utilities. The deployment will also create security groups for both virtual machines and open ports for Jenkins service and WebServer.&lt;/p&gt;
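&lt;p&gt;If you prefer the command line to the AWS console, the stacks created by the deployment can also be inspected with the AWS CLI from the hosting VM. This is just an illustrative sketch; the actual stack names are generated by the broker service:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# List stacks that are being created or are already complete
aws cloudformation list-stacks --stack-status-filter CREATE_IN_PROGRESS CREATE_COMPLETE

# Show the details (including outputs such as the SSH connection and WebUrl) of one stack;
# the stack name below is a placeholder.
aws cloudformation describe-stacks --stack-name &amp;#x3C;jenkins-or-webserver-stack-name&gt;
&lt;/code&gt;&lt;/pre&gt;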
&lt;p&gt;Figure 4 shows the screen used to start deploying an instance of the DevOps environment.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/picture14-1558639104803.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1558639104804&quot;&gt;&lt;/p&gt;
&lt;p&gt;Figure 4. Start deploying an instance of the OSB broker service.&lt;/p&gt;
&lt;p&gt;The deployment of OSB broker service within HPE OneSphere is in progress.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/picture15-1558639161150.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1558639161151&quot;&gt;&lt;/p&gt;
&lt;p&gt;Figure 5. The deployment in progress.&lt;/p&gt;
&lt;p&gt;In the AWS CloudFormation service portal, two stacks (one for Jenkins and the other for the web server) are in the process of being deployed.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/picture16-1558639251375.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1558639251376&quot;&gt;&lt;/p&gt;
&lt;p&gt;Figure 6. Showing the DevOps stacks creation in progress.&lt;/p&gt;
&lt;p&gt;After a while, the deployment of the OSB service broker completes.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/picture20-1558639363748.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1558639363749&quot;&gt;&lt;/p&gt;
&lt;p&gt;Figure 7. The completed OSB service broker deployment&lt;/p&gt;
&lt;p&gt;After an instance of the OSB broker service is successfully deployed, you can access the DevOps environment from either the AWS CloudFormation portal or HPE OneSphere portal.&lt;/p&gt;
&lt;p&gt;In the AWS CloudFormation portal, when you click on each stack, you should see both the SSH connection information and a WebUrl link.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/picture111-1558639415294.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1558639415295&quot;&gt;&lt;/p&gt;
&lt;p&gt;Figure 8. AWS CloudFormation portal showing the SSH connection and WebUrl link.&lt;/p&gt;
&lt;p&gt;In the HPE OneSphere portal, access information is available by clicking on the Access link in the deployment page.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/picture671-1558639474817.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1558639474818&quot;&gt;&lt;/p&gt;
&lt;p&gt;Figure 9. Access page shows both SSH connection and Web URL link.&lt;/p&gt;
&lt;p&gt;To open the Jenkins portal, click on the WebUrl link of the Jenkins stack. Follow the on-screen instructions to log in with a default initial password, and you will be connected to the Jenkins portal.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/picture551-1558639531779.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1558639531780&quot;&gt;&lt;/p&gt;
&lt;p&gt;Figure 10. Jenkins portal.&lt;/p&gt;
&lt;p&gt;A web server home page, served from a sample PHP index page, will open when you click on the WebUrl link of the WebServer stack.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/picturel1-1558639581208.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1558639581209&quot;&gt;&lt;/p&gt;
&lt;p&gt;Figure 11. Web server index page.&lt;/p&gt;
&lt;h3&gt;Delete the DevOps deployment&lt;/h3&gt;
&lt;p&gt;You can delete a deployment in HPE OneSphere. The deletion will also delete the DevOps stacks in AWS, which means that the security groups and all virtual machine instances will be deleted.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/pictures1-1558639696893.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1558639696893&quot;&gt;&lt;/p&gt;
&lt;p&gt;Figure 12. Deleting the DevOps deployment from HPE OneSphere.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/picture001-1558639745847.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1558639745848&quot;&gt;&lt;/p&gt;
&lt;p&gt;Figure 13. DevOps stacks are being deleted in the AWS CloudFormation portal.&lt;/p&gt;
&lt;h3&gt;Unregister the OSB broker service&lt;/h3&gt;
&lt;p&gt;You can also disable an OSB broker service and delete it from the HPE OneSphere portal.&lt;/p&gt;
&lt;h3&gt;Use the test program to simulate a service consumer&lt;/h3&gt;
&lt;p&gt;The &lt;a href=&quot;https://github.hpe.com/peng-liu/osb-devops&quot;&gt;test program (test.py) in the repository&lt;/a&gt; calls out to the OSB broker service to list the catalog and provision the DevOps environment in AWS. It can be used and modified to include more functions to simulate a portal environment in which the OSB broker service is to be registered.&lt;/p&gt;
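&lt;p&gt;Under the covers, “listing the catalog” is the standard Open Service Broker API call GET /v2/catalog. As a rough, hypothetical example, you could also exercise the broker directly with curl from the hosting VM; the port and any credentials depend entirely on how osb_template.py is configured:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Hypothetical smoke test of the broker&apos;s catalog endpoint; adjust the port
# (5000 here) and add authentication to match your osb_template.py configuration.
curl -s -H &quot;X-Broker-API-Version: 2.13&quot; http://localhost:5000/v2/catalog
&lt;/code&gt;&lt;/pre&gt;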
&lt;h3&gt;Next Steps&lt;/h3&gt;
&lt;p&gt;This blog shows a very simple beginning step to automate the DevOps provisioning process using OSB, AWS CloudFormation, and Ansible. Stay tuned for more blog posts regarding this topic, and more mature and complete packages for managing the DevOps environment.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[My Journey from Student to HPE Full-stack Developer ]]></title><description><![CDATA[5bf2e1a0cd93d0796238ae01-blog-content-1558709131030 This time last year, I was walking across the stage to receive my diploma. In that…]]></description><link>https://developer.hpe.com/my-journey-from-student-to-hpe-full-stack-developer/</link><guid isPermaLink="false">https://developer.hpe.com/my-journey-from-student-to-hpe-full-stack-developer/</guid><pubDate>Thu, 23 May 2019 18:08:07 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/unsquished-brittany-grad-picture-1558709131030.jpg&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1558709131030&quot;&gt;&lt;/p&gt;
&lt;p&gt;This time last year, I was walking across the stage to receive my diploma. In that moment, I had a million thoughts racing through my head. For example, what is the next step in my journey? What kind of position do I see myself attaining in the next few years? And, most importantly, how do I not fall off this stage in front of all these people! For the past five years, I had a routine: go to class, work, and then study or work on projects/homework. The routine as I knew it would come to an end, and I had no idea what my next steps were going to be. Just like any other new graduate, I was concerned with the uncertainty of landing a job and whether or not those long years spent getting my undergrad were going to pay off.&lt;/p&gt;
&lt;h2&gt;Landing my first position&lt;/h2&gt;
&lt;p&gt;Luckily for me, I had the chance to join Hewlett Packard Enterprise (HPE). I had so many questions when I first started working. For example, what was I going to be working on and how should I prepare before my start date? When my first day arrived, I was excited to sit alongside the user experience (UX) team. UX was something that interested me, but since it was not in my course work, I did not know much about the research and the design process. But you learn a lot through on-the-job training. Within a few months, I learned enough to work with another designer on a web application that counted down the days to HPE’s premier event, HPE Discover Madrid. I even incorporated my own ideas into the design!&lt;/p&gt;
&lt;h2&gt;Learning about Grommet&lt;/h2&gt;
&lt;p&gt;As a new hire, there was so much that was unfamiliar to me. I knew nothing about the React or &lt;a href=&quot;https://v2.grommet.io/&quot;&gt;Grommet&lt;/a&gt; software development and design tools. I learned to seek help through different resources, such as the &lt;a href=&quot;https://slackin.grommet.io/&quot;&gt;Grommet Slack community.&lt;/a&gt; In the beginning, I was nervous about asking the community for help and advice, but once I posed my first question, I received a quick and helpful response. They always provided great answers, as well as insightful ideas, on how I might be able to implement changes in my code base.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://v2.grommet.io/components&quot;&gt;Grommet&lt;/a&gt; is an awesome library to use to get started. After getting familiar with the different components of Grommet as well as React, I started to gain more confidence as a developer. Grommet is now the first thing I think of whenever I have a new project to get started on. I tend to think about the different components that Grommet has to offer and how I can leverage them to get the design I want. Once I have an idea of which components I will use, I start looking at the examples that Grommet provides. The Grommet site contains excellent resources, such as a &lt;a href=&quot;https://storybook.grommet.io/?path=/story/components--all&quot;&gt;storybook&lt;/a&gt; which displays an example and code for each component. The majority of the components in Grommet also have a link to &lt;a href=&quot;https://codesandbox.io/s/github/grommet/grommet-sandbox?initialpath=box&amp;#x26;module=%2Fsrc%2FBox.js&quot;&gt;Code SandBox,&lt;/a&gt; an open editor for users to be able to play around with the code as well as see their changes. These tools are very useful, especially for someone who is just starting out with React and Grommet. The editor allows you to play with the different components that Grommet offers and become more familiar with them.&lt;/p&gt;
&lt;p&gt;In these past nine months, I learned not only about React, Grommet, and many other tools and languages; I also learned more about myself as a developer. Coming straight out of school, I was not sure about the type of work I would enjoy. Now I know that I really enjoy developing front-end applications. I also know I dislike writing tests for applications! (Even though it is a must.)&lt;/p&gt;
&lt;p&gt;For those of you who are in the same position I was in this time last year, I can assure you that the transition from student to developer is not as nerve-wracking as it seems. I do have some good advice, based on what I have learned, that may shed some light on your path forward.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Do not be afraid to push yourself. When I was given projects, I thought to myself -- how am I ever going to implement this code?! However, I’ve learned to take each project and break the pieces apart. Take each challenge one step at a time.&lt;/li&gt;
&lt;li&gt;In school, my goals included turning in a project on time, getting a good grade on a test, etc. Once you start working, things will change. But I still suggest you set goals for yourself. This way you can keep track of how you’re doing. Think of it now, though, as if you are grading yourself!&lt;/li&gt;
&lt;li&gt;Embrace the opportunity to grow. Getting feedback from other developers is a helpful way to see where your strengths are and what you need to work on. Take the feedback as a learning opportunity given to you by another experienced developer that has most likely been in your footsteps.&lt;/li&gt;
&lt;li&gt;It’s okay to step out of your comfort zone. I’m guilty of staying in my lane; however, it is important to be able to step out and be heard. As a new developer, you may believe that your opinions and thoughts do not matter, but they do, and they should be heard!&lt;/li&gt;
&lt;li&gt;Last, but not least, write clean code. This is something that I am guilty of not doing. Working on a team keeps me on my feet because the last thing I want to do is commit code to a team project that is not even formatted correctly!&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I hope this advice helps anyone else who is on the same path as I was. If you would like to see more blogs for advice and tutorials, check out our &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE DEV.&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[GitHub – The Project Management Tool You Didn’t Know You Wanted ]]></title><description><![CDATA[5bf2e1a0cd93d0796238ae01-blog-content-1558638303118 The project management landscape is filled with lots of tools and processes that can be…]]></description><link>https://developer.hpe.com/github-the-project-management-tool-you-didnt-know-you-wanted/</link><guid isPermaLink="false">https://developer.hpe.com/github-the-project-management-tool-you-didnt-know-you-wanted/</guid><pubDate>Thu, 23 May 2019 18:00:37 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/hpe-dev-github-1558638303117.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1558638303118&quot;&gt;&lt;/p&gt;
&lt;p&gt;The project management landscape is filled with lots of tools and processes that can be used for a wide variety of purposes. Managing a software development operation today, however, doesn’t always lend itself to generalities. Taking in a broad view of a project to understand how all the pieces connect and contribute requires a subtle set of skills. Being able to plan work, identify obstacles, and ensure effective communications is essential for the long-term success of a team. In today’s dynamic, fast-paced industry, this is very difficult to achieve.&lt;/p&gt;
&lt;h2&gt;Project Management Landscape&lt;/h2&gt;
&lt;p&gt;Although an overwhelming number of agile project management tools are available today, each person has his or her own preference. Initially, this wouldn’t appear to be a problem, because a team can come together for a singular project and determine what tool they will use. However, due to the dynamic nature of the industry, if you or your colleagues work across different teams, you could easily end up working with multiple tools, each with different processes and terminology. Efficiency and morale plummet because teams end up spending more time and effort learning about the tools that were merely meant to facilitate their work. In larger companies, the use of multiple tools also affects communication with executives, because you now require a third view that summarizes the status of each project across all the tools. When you compound this challenge over multiple quarters/years, it becomes unmanageable.&lt;/p&gt;
&lt;p&gt;There must be a better solution. As a software development project manager, I was looking for a tool that had the basic functionality needed for agile project management. I also wanted a tool that was:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Easy to use&lt;/li&gt;
&lt;li&gt;Flexible enough to meet the needs of different teams and objectives&lt;/li&gt;
&lt;li&gt;Transparent across multiple projects and organizations&lt;/li&gt;
&lt;li&gt;Able to offer an executive-level view&lt;/li&gt;
&lt;li&gt;Scalable over time to ensure the team won’t need to switch tools again in the near future&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;As a project manager, I always try to remove personal opinion-based reasoning from the decision-making process. So, my first question was: What was the best project management tool that meets all of the above requirements? I also tried to think out of the box. What if the solution wasn’t a project management tool? What if it was a software development platform instead?&lt;/p&gt;
&lt;h2&gt;GitHub&lt;/h2&gt;
&lt;p&gt;GitHub is the leading software development platform with over 22 million users. For years, my manager/technical lead, Alex Mejias, and I felt the pain of creating/moving cards in a project management tool and then creating issues in GitHub for the developers because that was where the code was managed and maintained. Earlier this year, we decided we had enough. We wanted to try and manage all the aspects of a project within GitHub itself. The switch proved to be successful because GitHub has the basic functionality of any project management tool, but it&apos;s so much more than that. The more we used it, the more we realized that GitHub met all the needs listed above. GitHub even offered additional benefits, including:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A complete history of the project located in the same place where the project is being maintained&lt;/li&gt;
&lt;li&gt;Shared terminology across multiple teams and projects throughout the lifecycle of any particular project&lt;/li&gt;
&lt;li&gt;A tight integration of issue lifecycle with codebase changes. Pushing code to the project can automatically close issues, removing a lot of manual maintenance (see the example after this list)&lt;/li&gt;
&lt;li&gt;The fact that most developers already have experience using GitHub due to the high-user base, which makes onboarding a breeze&lt;/li&gt;
&lt;li&gt;Vast amounts of documentation&lt;/li&gt;
&lt;/ul&gt;
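&lt;p&gt;As a small illustration of that issue/codebase integration, GitHub recognizes closing keywords in commit and pull request messages, so a commit like the following will automatically close the referenced issue when it reaches the default branch (the issue number is, of course, just a made-up example):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Referencing an issue with a closing keyword; issue number 42 is a made-up example.
git commit -m &quot;Fix broken login redirect (closes #42)&quot;
git push
&lt;/code&gt;&lt;/pre&gt;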
&lt;p&gt;Just like any other decision based on a long-term team goal, the complete value of using GitHub as a project management tool will become visible over time through the overall success of the team. Based on previous experience, I believe the benefits of using GitHub to manage digital projects for our current needs far outweigh any other project management tool. This is something new for our Chief Design Office at HPE, and we are still working through all the details. I would love to hear about other teams’ experiences so that we may collaborate and learn from each other. We invite you to &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;join us on Slack&lt;/a&gt; to share stories.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[See Where You Can Find The Open Source UI Tool, Grommet, at HPE Discover]]></title><description><![CDATA[5bf2e1a0cd93d0796238ae01-blog-content-1558636315561 The Hewlett Packard Enterprise (HPE) premier event, HPE Discover, is just around the…]]></description><link>https://developer.hpe.com/see-where-you-can-find-the-open-source-ui-tool-grommet-at-hpe-discover/</link><guid isPermaLink="false">https://developer.hpe.com/see-where-you-can-find-the-open-source-ui-tool-grommet-at-hpe-discover/</guid><pubDate>Thu, 23 May 2019 17:43:56 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/imageedit_2_3129272736-1558636315560.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1558636315561&quot;&gt;&lt;/p&gt;
&lt;p&gt;The Hewlett Packard Enterprise (HPE) premier event, &lt;a href=&quot;https://www.hpe.com/events/discover/&quot;&gt;HPE Discover,&lt;/a&gt; is just around the corner, and the HPE DEV &amp;#x26; Design team members are busy preparing their sessions for the HPE Hack Shack. Everyone is excited to interact with event attendees and share ideas and helpful hints on how to simplify and improve the software development and design process for today’s applications. A large part of what the group has to offer centers around a specific open source UI development and design tool called &lt;a href=&quot;https://v2.grommet.io/&quot;&gt;Grommet&lt;/a&gt;. Grommet is a React-based library of reusable UI components that help developers and designers create web applications. It simplifies the way web applications are built by providing a package of commonly used interface elements from which developers and designers can choose.&lt;/p&gt;
&lt;p&gt;If you were to walk the HPE Discover show floor and stop by the HPE Hack Shack (DEMO403), you would notice the visual incarnation of Grommet – a purple, gremlin-like character the team endearingly refers to as Stack, the Grommet mascot. You will find him on stickers, posters, slide presentations, large promotional screens, etc. What you might not immediately notice is how Grommet, the UI tool itself, permeates not just this booth, but also other areas of the show floor.&lt;/p&gt;
&lt;p&gt;For instance, Grommet is the subject of numerous workshops and Hack Shack Challenges. Several of the HPE Design Workshops feature Grommet, including:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.sessions/?l=39&amp;#x26;sid=18625_8975&amp;#x26;locale=en_US&quot;&gt;HSW8625&lt;/a&gt; &lt;strong&gt;HPE Design Workshop – User experience research tools and techniques that drive customer experience&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Tuesday, June 18 at 4:30pm.&lt;br&gt;
Thursday, June 20 at 9:00am.&lt;br&gt;
Learn how insights from user experience research and design can improve the outcome of your product roadmap using a multi-disciplinary approach to UX research and design strategy.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.sessions/?l=39&amp;#x26;sid=18593_9028&amp;#x26;locale=en_US&quot;&gt;HSW8593&lt;/a&gt; &lt;strong&gt;HPE Design Workshop – Tools that help your customers and partners build amazing experiences&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Wednesday, June 19 at 12:00pm.&lt;br&gt;
See all the resources available to you through the HPE Design team. They will walk through the hpe.design website and share tips and tricks   you can use to create world-class experiences.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.sessions/?l=39&amp;#x26;sid=18623_9256&amp;#x26;locale=en_US&quot;&gt;HSW8623&lt;/a&gt; &lt;strong&gt;Hack Shack Workshop – Learn about HPE’s most popular open source library, Grommet, a class-leading React based UI/UX framework&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Wednesday, June 19 at 2:00pm.&lt;br&gt;
Are you ready to turn stunning designs into pixel-perfect accessible web applications? Learn how you can start using Grommet to build class-leading web applications.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Hack Shack Challenge -- You’ve heard of soup to nuts. Now, join us for a napkin to template challenge.&lt;/strong&gt; Tuesday, June 18 – Thursday, June 20.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This challenge runs throughout the event and shows how easy it is to code in Grommet. Participants start off sketching out their ideas on a napkin and then, using a template project, develop that idea into a publishable UI. The fun part is that you don’t have to be an expert coder or designer. Even beginners are welcome to try their hand at coding and compete for cool prizes.&lt;/p&gt;
&lt;p&gt;Some of the less obvious places you’ll find Grommet are where it runs behind the scenes. Grommet powers the Leaderboard that displays the Hack Shack Attack game results. This is where passersby can see who is currently in line to win the grand prize that will be given out at the awards reception on Wednesday, 5–6pm, in the Hack Shack. The website that houses each session presentation PDF, which enables attendees to download and take the information home with them, is also powered by Grommet. Even the team’s own &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE DEV&lt;/a&gt; and &lt;a href=&quot;https://hpe.design/&quot;&gt;HPE Design&lt;/a&gt; websites run with Grommet.&lt;/p&gt;
&lt;p&gt;It’s also interesting to note that the technologies presented at HPE Discover, &lt;a href=&quot;https://www.hpe.com/us/en/integrated-systems/software.html&quot;&gt;HPE OneView,&lt;/a&gt; &lt;a href=&quot;/platform/nimble-storage/home&quot;&gt;HPE Nimble storage,&lt;/a&gt; &lt;a href=&quot;https://www.hpe.com/us/en/integrated-systems/simplivity.html&quot;&gt;HPE SimpliVity,&lt;/a&gt; and &lt;a href=&quot;https://www.hpe.com/us/en/integrated-systems/composable-fabric.html&quot;&gt;HPE Composable Fabric,&lt;/a&gt; were all developed using Grommet. A special demo presented elsewhere on the show floor, &lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.sessions/?l=39&amp;#x26;sid=18314_0&amp;#x26;locale=en_US#session-details-page&quot;&gt;(DEMO1409)&lt;/a&gt; &lt;strong&gt;The data gold rush: Trapped at the edge today, Swarm Learning is how we’re mining for data intelligence tomorrow,&lt;/strong&gt; was also developed using Grommet. In this demonstration of AI (artificial intelligence), event attendees will learn how training edge devices may be more effective than the current method where machines are trained at the core and only reference the edge for data and insights.&lt;/p&gt;
&lt;p&gt;If you are going to the HPE Discover event in Las Vegas, June 18-20, take a close look around the HPE DEV &amp;#x26; Design Hack Shack and see how many times you can find Grommet. For developers and designers who’d like to understand more about how Grommet can provide significant benefits, seek out developer support through the &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE Developer Community Program&lt;/a&gt; and consider joining the &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;HPE DEV community.&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE SimpliVity PowerShell Module]]></title><description><![CDATA[HPE SimpliVity PowerShell Module This PowerShell module utilizes the HPE SimpliVity REST API to display information and manage an HPE…]]></description><link>https://developer.hpe.com/hpe-simplivity-powershell-module/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-simplivity-powershell-module/</guid><pubDate>Thu, 16 May 2019 01:37:52 GMT</pubDate><content:encoded>&lt;h1&gt;HPE SimpliVity PowerShell Module&lt;/h1&gt;
&lt;p&gt;This PowerShell module utilizes the HPE SimpliVity REST API to display information and manage an HPE SimpliVity federation. It works by connecting to any HPE OmniStack virtual controller in your environment. With the release of HPE SimpliVity V4.0.0 and above, you can now also implement and connect to a management virtual appliance, which is recommended.&lt;/p&gt;
&lt;p&gt;All cmdlets are written as advanced cmdlets with comment-based help, and the majority can accept the output of another cmdlet as input. Most cmdlets that show information have parameters to limit the number of objects returned. The cmdlets have been written to adhere to the current recommendations for the REST API; for example, they limit the number of records returned for virtual machine and backup objects.&lt;/p&gt;
&lt;p&gt;Most &quot;Get&quot; commands display default properties; use Format-List or Select-Object to show all the properties. For example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;    PS C:\&gt; Connect-SVT -OVC 192.168.1.11 -Credential $Cred
    PS C:\&gt; Get-SVThost
 
    HostName      DataCenterName    ClusterName   FreeSpaceGB    ManagementIP   StorageIP   FederationIP 
    --------      --------------    -----------   -----------    ------------   ---------    ------------
   srvr1.sg.com     SunGod          Production1         2,671    192.168.1.11   192.168.2.1   192.168.3.1
   srvr2.sg.com     SunGod          Production1         2,671    192.168.1.12   192.168.2.2   192.168.3.2
   srvr3.sg.com     SunGod          DR1                 2,671    192.170.1.11   192.170.2.1   192.170.3.1
 
    PS C:\&gt;Get-SVThost -HostName 192.168.1.1 | Select-Object *
 
    PolicyEnabled            : True
    ClusterId                : 3baba7ec-6d02-4fb6-b510-5ce19cd9c1d0
    StorageMask              : 255.255.255.0
    Model                    : HPE SimpliVity 380 Series 4000
    HostName                 : srvr1.sg.com
    .
    .
    .
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Latest Update&lt;/h2&gt;
&lt;p&gt;Refer to the release notes &lt;a href=&quot;https://github.com/atkinsroy/HPESimpliVity/blob/master/RELEASENOTES.md&quot;&gt;here&lt;/a&gt; for more details.&lt;/p&gt;
&lt;p&gt;The module contains 58 exported cmdlets, divided into the following feature categories:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th align=&quot;left&quot;&gt;&lt;strong&gt;Datastore&lt;/strong&gt;&lt;/th&gt;
&lt;th align=&quot;left&quot;&gt;&lt;strong&gt;Backup&lt;/strong&gt;&lt;/th&gt;
&lt;th align=&quot;left&quot;&gt;&lt;strong&gt;Backup Policy&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;Get-SVTdatastore&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Copy-SVTbackup&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Get-SVTpolicy&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;Get-SVTdatastoreComputeNode&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Get-SVTbackup&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Get-SVTpolicySchedule&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;Get-SVTexternalStore&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Get-SVTfile&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;New-SVTpolicy&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;New-SVTdatastore&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Lock-SVTbackup&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;New-SVTpolicyRule&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;New-SVTexternalStore&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;New-SVTbackup&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Remove-SVTpolicy&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;Publish-SVTdatastore&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Remove-SVTbackup&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Remove-SVTpolicyRule&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;Remove-SVTdatastore&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Rename-SVTbackup&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Rename-SVTpolicy&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;Remove-SVTexternalStore&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Restore-SVTfile&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Resume-SVTpolicy&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;Resize-SVTdatastore&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Set-SVTbackupRetention&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Suspend-SVTpolicy&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;Set-SVTdatastorePolicy&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Stop-SVTbackup&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Update-SVTpolicyRule&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;Set-SVTexternalStore&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Update-SVTbackupUniqueSize&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;Unpublish-SVTdatastore&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th align=&quot;left&quot;&gt;&lt;strong&gt;Cluster &amp;#x26; Utility&lt;/strong&gt;&lt;/th&gt;
&lt;th align=&quot;left&quot;&gt;&lt;strong&gt;Host&lt;/strong&gt;&lt;/th&gt;
&lt;th align=&quot;left&quot;&gt;&lt;strong&gt;Virtual Machine&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;Connect-SVT&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Get-SVTdisk&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Get-SVTvm&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;Get-SVTcapacity&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Get-SVThardware&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Get-SVTvmReplicaSet&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;Get-SVTcluster&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Get-SVThost&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Move-SVTvm&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;Get-SVTclusterConnected&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Get-SVTshutdownStatus&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;New-SVTclone&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;Get-SVTmetric&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Get-SVTthroughput&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Restore-SVTvm&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;Get-SVTtask&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Remove-SVThost&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Set-SVTvm&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;Get-SVTtimezone&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Start-SVTshutdown&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Start-SVTvm&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;Get-SVTversion&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Stop-SVTshutdown&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;Stop-SVTvm&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot;&gt;Set-SVTtimezone&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;&lt;/td&gt;
&lt;td align=&quot;left&quot;&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2&gt;Requirements&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Windows PowerShell V5.1 or PowerShell Core V7.x (PowerShell Core V6.x is not recommended)&lt;/li&gt;
&lt;li&gt;The IP address and the credentials of an authorized OmniStack user account&lt;/li&gt;
&lt;li&gt;The module has been tested with HPE SimpliVity V4.0.1 and should be compatible with older versions, although this has not been verified.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Installation&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Install or update the HPESimpliVity module from the PowerShell Gallery using the following respective commands:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;    PS C:\&gt; Install-Module -Name HPESimpliVity
 # or
    PS C:\&gt; Update-Module -Name HPESimpliVity
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The module is signed, so it will work with an execution policy set to &apos;RemoteSigned&apos;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Restart PowerShell to load the module, or type:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;    PS C:\&gt; Import-Module HPESimpliVity -Force
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;After this, the module will automatically load in new PowerShell sessions. Issue the following commands to confirm:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;    PS C:\&gt; Get-Command -Module HPESimpliVity
    PS C:\&gt; Get-Help Connect-SVT
    PS C:\&gt; Get-Help Get-SVTbackup
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;Once installed, you’re ready to connect to the OmniStack virtual controller or Management Virtual Appliance, as follows:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;    PS C:\&gt; $Cred = Get-Credential -Message &apos;Enter OVC/MVA Credentials&apos;
    PS C:\&gt; Connect-SVT -OVC &amp;#x3C;IP-or-FQDN-of-OVC-or-MVA&gt; -Credential $Cred
    PS C:\&gt; Get-SVThost
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;Or, if you need to run commands in batch (non-interactively), save your credentials to a file first:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;    PS C:\&gt; $Cred = Get-Credential -Username &apos;administrator@vsphere.local&apos; | Export-Clixml .\OVCcred.XML 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;and then in your script, import the credential:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;    PS C:\&gt; $Cred = Import-CLIXML .\OVCcred.XML
    PS C:\&gt; Connect-SVT -OVC &amp;#x3C;IP-or-FQDN-of-OVC-or-MVA&gt; -Credential $Cred
    PS C:\&gt; Get-SVThost
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; You must log in with an admin account (e.g. an account with the vCenter Admin Role for VMware environments).&lt;/p&gt;
&lt;h2&gt;Known issues with the REST API (HPE SimpliVity V4.0.1)&lt;/h2&gt;
&lt;p&gt;The API has some documented and undocumented issues:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;OMNI-69918: GET /virtual_machines fails with OutOfMemoryError. The HPE SimpliVity module limits the number of VMs returned to 8000, as per the recommendation&lt;/li&gt;
&lt;li&gt;OMNI-46361: REST API GET operations for backup objects have sorting and filtering constraints. Comma-separated lists for filtering backup objects are not supported when connecting to OmniStack Virtual Controllers. Comma-separated lists CAN be used when connected to a Management Virtual Appliance. For example, the following commands all work when connected to an MVA:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;    PS C:\&gt; Get-SVTbackup -VmName Vm1,Vm2,Vm3
    PS C:\&gt; Get-SVTbackup -Destination Cluster1,Cluster2
    PS C:\&gt; Get-SVTbackup -Destination StoreOnce-Data01,StoreOnce-Data02
    PS C:\&gt; Get-SVTbackup -Datastore DS01,DS02
    PS C:\&gt; Get-SVTbackup -BackupName Test1,Test2
    PS C:\&gt; Get-SVTbackup -BackupState FAILED,SAVING,QUEUED
    PS C:\&gt; Get-SVTbackup -BackupId a9e82f..., bef1bd...
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;Backups stored on external stores cannot be deleted if the VM has been deleted, with a “backup not found” error. This does not apply to backups stored on SimpliVity clusters. This restriction is specific to the API; the CLI command &lt;code&gt;svt-backup-delete&lt;/code&gt; works as expected for external store backups.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;PUT /policies/policyId/rules/ruleId&lt;/code&gt; API call (implemented in &lt;code&gt;Update-SVTpolicyRule&lt;/code&gt;) doesn’t work as expected in some circumstances. Changing a rule’s destination is not supported (this is documented). In addition, changing the consistency type to anything other than NONE or DEFAULT doesn’t work; if you attempt to change the consistency type to VSS, for example, the command is ignored. In this scenario, a workaround is to delete the rule entirely from the policy using &lt;code&gt;Remove-SVTpolicyRule&lt;/code&gt; and then use &lt;code&gt;New-SVTpolicyRule&lt;/code&gt; to create a new rule with the desired destination, consistency type and other settings, as shown in the sketch after this list.&lt;/li&gt;
&lt;li&gt;Using &lt;code&gt;GET /backups&lt;/code&gt; with a specific cluster_id (implemented as &lt;code&gt;Get-SVTbackup -DestinationName ClusterName&lt;/code&gt;) displays backups located on the specified cluster AND backups located on external stores. This issue only applies when connected to an OVC; calls to an MVA work as expected. In either case, filtering on an external store works as expected (e.g. &lt;code&gt;Get-SVTbackup -DestinationName ExternalStore1&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;
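&lt;p&gt;As a sketch of the rule-replacement workaround mentioned above, the sequence below removes the affected rule and recreates it with the desired consistency type. &lt;code&gt;Remove-SVTpolicyRule&lt;/code&gt; and &lt;code&gt;New-SVTpolicyRule&lt;/code&gt; are part of the module, but the parameter names and values shown here are illustrative assumptions; check &lt;code&gt;Get-Help&lt;/code&gt; for the exact syntax of your installed version:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;    PS C:\&gt; # Remove the rule that cannot be updated in place (parameter names are assumptions)
    PS C:\&gt; Remove-SVTpolicyRule -PolicyName &apos;Silver&apos; -RuleNumber 1
    PS C:\&gt; # Recreate the rule with the desired destination and consistency type
    PS C:\&gt; New-SVTpolicyRule -PolicyName &apos;Silver&apos; -DestinationName Cluster2 -ConsistencyType VSS
&lt;/code&gt;&lt;/pre&gt;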
&lt;p&gt;If you would like to keep up to date with the latest features, please visit the project website on &lt;a href=&quot;https://github.com/atkinsroy/HPESimpliVity&quot;&gt;GitHub&lt;/a&gt; and subscribe to receive notifications. Updates are published to the PowerShell Gallery at the same time.&lt;/p&gt;
&lt;p&gt;Refer to the &lt;a href=&quot;https://developer.hpe.com/platform/hpe-simplivity/home&quot;&gt;HPE SimpliVity platform page&lt;/a&gt; on HPE DEV for more information. Don&apos;t forget to check out our other posts on the &lt;a href=&quot;/blog&quot;&gt;HPE DEV blog site&lt;/a&gt; or join us on our #simplivity &lt;a href=&quot;https://slack.hpedev.io/&quot;&gt;Slack Channel&lt;/a&gt; to ask a question.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Meet Us at the Hack Shack at HPE Discover Las Vegas, June 18-20]]></title><description><![CDATA[5bf2e1a0cd93d0796238ae01-blog-content-1557861662231 HPE DEV is gearing up for Hewlett Packard Enterprise’s (HPE) premier event, HPE Discover…]]></description><link>https://developer.hpe.com/meet-us-at-the-hack-shack-at-hpe-discover-las-vegas-june-18-20/</link><guid isPermaLink="false">https://developer.hpe.com/meet-us-at-the-hack-shack-at-hpe-discover-las-vegas-june-18-20/</guid><pubDate>Tue, 14 May 2019 19:19:11 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/img_0246-1557861662220.jpg&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1557861662231&quot;&gt;&lt;/p&gt;
&lt;p&gt;HPE DEV is gearing up for Hewlett Packard Enterprise’s (HPE) premier event, &lt;a href=&quot;https://www.hpe.com/events/discover/&quot;&gt;HPE Discover,&lt;/a&gt; to be held June 18-20, 2019 in Las Vegas. Expect three action-packed days filled with talks, workshops, and events designed to help your business become more agile. Software developers and designers will want to block off a good amount of time to spend at the HPE DEV &amp;#x26; Design’s Hack Shack, which offers opportunities for hands-on learning and a place to become more engaged with the HPE Developer and Design Community. There, you’ll learn how HPE supports developers and designers in creating innovative solutions to complex problems, collaborating with them using open source software, “design thinking”, DevOps and ITOps tools.&lt;/p&gt;
&lt;p&gt;The Hack Shack features two types of interactive activities: Workshops and Challenges. In the Workshops, you’ll get a chance to see innovative technologies in action. There will also be several design-related workshops, including some focused on &lt;a href=&quot;https://v2.grommet.io/&quot;&gt;Grommet&lt;/a&gt; – an open source user interface (UI) toolkit designed by HPE. In the Challenges, which run from show open till show close, we encourage attendees to apply their skills and build out their application ideas while competing for cool prizes.&lt;/p&gt;
&lt;p&gt;At the Hack Shack you’ll find over twenty sessions where everyone is welcome, from beginners to experts. Below is a list of all Hack Shack Workshops and Challenges going on at the event.
&lt;br /&gt;&lt;/p&gt;
&lt;h2&gt;HPE OneView&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.sessions/?l=39&amp;#x26;sid=18598_9074&amp;#x26;locale=en_US&quot;&gt;HSW8598&lt;/a&gt; &lt;strong&gt;Hack Shack Workshop – Dive into infrastructure automation with the HPE OneView API&lt;/strong&gt;.&lt;br&gt;
Wednesday, June 19 at 3:00pm.&lt;br&gt;
During this interactive session, you’ll explore the HPE OneView API using Postman. After that, you’ll use PowerShell or Python to write a script that automates HPE OneView. By the end, you will have written a simple example of infrastructure as code. Be sure to bring your laptop.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.sessions/?l=39&amp;#x26;sid=19834_9253&amp;#x26;locale=en_US&quot;&gt;HSW9834&lt;/a&gt; &lt;strong&gt;Hack Shack Workshop – HPE OneView Global Dashboard REST API&lt;/strong&gt;.&lt;br&gt;
Wednesday, June 19 at 4:00pm.&lt;br&gt;
Thursday, June 20 at 11:00am.&lt;br&gt;
Experiment with HPE OneView Global Dashboard REST API. Learn how you can interact programmatically with multiple HPE OneView appliances in a large, distributed environment through a single pane of glass. Be sure to bring your laptop to follow along.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Hack Shack Challenge with HPE OneView: Locate the rogue server in a large data center&lt;/strong&gt;.&lt;br&gt;
Tuesday, June 18 – Thursday, June 20 during show floor hours.&lt;br&gt;
Apply the skills that you’ve learned to find the misbehaving server using only limited information and iLO logs. No registration required. Just show up and start coding.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Hack Shack Challenge with HPE OneView Global Dashboard: Creating and populating a resource group&lt;/strong&gt;.&lt;br&gt;
Tuesday, June 18 – Thursday, June 20 during show floor hours.&lt;br&gt;
Show off your ability to enhance an existing PowerShell module or create and populate a group of HPE OneView resources using HPE Global Dashboard REST API. No registration required. Just show up and start coding.
&lt;br /&gt;&lt;/p&gt;
&lt;h2&gt;HPE SimpliVity&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.sessions/?l=39&amp;#x26;sid=18622_9045&amp;#x26;locale=en_US&quot;&gt;HSW8622&lt;/a&gt; &lt;strong&gt;Hack Shack Workshop – Use HPE SimpliVity REST API fundamentals to build a simple automation script in HPE SimpliVity&lt;/strong&gt;.&lt;br&gt;
Wednesday, June 19 at 1:00pm.&lt;br&gt;
Thursday, June 20 at 10:00am.&lt;br&gt;
Bring your laptop and learn how to interact with the HPE SimpliVity hyperconverged platform through its REST API. Explore the API description and use PowerShell or Python to automate the kind of HPE SimpliVity use case that may come up in your day-to-day work.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Hack Shack Challenge with HPE SimpliVity API: Build simple reports that show the status of on-going backups with user-defined time interval&lt;/strong&gt;.&lt;br&gt;
Tuesday, June 18 – Thursday, June 20 during show floor hours.&lt;br&gt;
Leverage the granularity and simplicity of the SimpliVity REST API to build a report. Keep it simple or unleash your creativity to come up with a cool 3D graphical representation. No registration required. Just show up and start coding.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Hack Shack Challenge with HPE SimpliVity: Build a solution to inject user scripts before and after a backup&lt;/strong&gt;.&lt;br&gt;
Tuesday, June 18 – Thursday, June 20 during show floor hours.&lt;br&gt;
Use the SimpliVity REST API to build a solution for pre-and-post backup script injection to enable backup life cycle management use cases. No registration required. Just show up and start coding.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Hack Shack Challenge with HPE SimpliVity: Create the most powerful backup script&lt;/strong&gt;.&lt;br&gt;
Tuesday, June 18 – Thursday, June 20 during show floor hours.&lt;br&gt;
Learn innovative ways of producing bulletproof backups using the HPE SimpliVity REST API. No registration required. Just show up and start coding.
&lt;br /&gt;&lt;/p&gt;
&lt;h2&gt;HPE Nimble Storage&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.sessions/?l=39&amp;#x26;sid=18596_9138&amp;#x26;locale=en_US&quot;&gt;HSW8596&lt;/a&gt; &lt;strong&gt;Hack Shack Workshop – Improve day 2 operations with your own custom automation with the HPE Nimble Storage REST API&lt;/strong&gt;.&lt;br&gt;
Thursday, June 20 at 12:00pm.&lt;br&gt;
Bring your own laptop with VMware Workstation Player (or VMware Fusion for Mac) installed and get ready to use REST API tools to create your own custom automation for HPE Nimble Storage. You’ll get access to your own personal Nimble Virtual Array (a limited-time offer, but a great perk). To participate, a basic knowledge of block storage is helpful.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.sessions/?l=39&amp;#x26;sid=18597_8916&amp;#x26;locale=en_US&quot;&gt;HSW8597&lt;/a&gt; &lt;strong&gt;Hack Shack Workshop – Persist, optimize and accelerate using persistent storage in Kubernetes&lt;/strong&gt;.&lt;br&gt;
Tuesday, June 18 at 9:00am.&lt;br&gt;
In this workshop, we’ll explain the differences between storing data locally versus externally and what workload types in Kubernetes are suitable for persistent storage. An overview of API-related objects, such as StorageClass, PersistentVolumeClaim and PersistentVolume, will be included, along with storage plugins and drivers.
&lt;br /&gt;&lt;/p&gt;
&lt;h2&gt;HPE Composable Fabric&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.sessions/?l=39&amp;#x26;sid=18621_8970&amp;#x26;locale=en_US&quot;&gt;HSW8621&lt;/a&gt; &lt;strong&gt;Hack Shack Workshop – Use network optimization with Composable Fabric to accelerate operations&lt;/strong&gt;.&lt;br&gt;
Tuesday, June 18 at 1:00pm.&lt;br&gt;
Here, you’ll explore how Composable Fabric Affinity technology can streamline operations by automatically adjusting to different network needs (i.e. daytime versus nighttime). Different example workloads will be examined to show how they compete for bandwidth and how adding an automated storage affinity, i.e. HPE SimpliVity, can help.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Hack Shack Challenge with HPE Composable Fabric Manager: Initiate Composable Fabric operations through social applications&lt;/strong&gt;.&lt;br&gt;
Tuesday, June 18 – Thursday, June 20 during show floor hours.&lt;br&gt;
Build a dashboard that shows the current network view configuration of the Composable Fabric mesh network quickly. Prizes will be awarded to the most innovative solution. No registration required. Just show up and start coding.
&lt;br /&gt;&lt;/p&gt;
&lt;h2&gt;HPE Design Workshops&lt;/h2&gt;
&lt;p&gt;At the Hack Shack, you’ll get to meet the HPE application design team and learn more about the resources they can provide to help develop applications.  HPE Design workshops include:&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.sessions/?l=39&amp;#x26;sid=18626_8949&amp;#x26;locale=en_US&quot;&gt;HSW8626&lt;/a&gt; &lt;strong&gt;HPE Design Workshop – Design Thinking for disruptive ideas - Empathy &amp;#x26; Brainstorming&lt;/strong&gt;.&lt;br&gt;
Tuesday, June 18 at 11:00am and 12:00pm.&lt;br&gt;
Do you have ideas that need to be explored? What if you had no limitations? Discover your next big idea in IT in this fast-paced human-centered collaborative design session.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.sessions/?l=39&amp;#x26;sid=18625_8975&amp;#x26;locale=en_US&quot;&gt;HSW8625&lt;/a&gt; &lt;strong&gt;HPE Design Workshop – User experience research tools and techniques that drive customer experience&lt;/strong&gt;.&lt;br&gt;
Tuesday, June 18 at 4:30pm.&lt;br&gt;
Thursday, June 20 at 9:00am.&lt;br&gt;
Learn how insights from user experience research and design can improve the outcome of your product roadmap using a multi-disciplinary approach to UX research and design strategy.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.sessions/?l=39&amp;#x26;sid=18628_9003&amp;#x26;locale=en_US&quot;&gt;HSW8628&lt;/a&gt; &lt;strong&gt;HPE Design Workshop – Design Sprint, Go! Rapid Prototyping &amp;#x26; The Customer Feedback Loop&lt;/strong&gt;.&lt;br&gt;
Wednesday, June 19 at 10:00am and 11:00am.&lt;br&gt;
Don’t make long term investments on ideas that have unproven customer value. Learn the value that comes when you pick a use case, design mockups, develop a prototype and test it with a customer.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.sessions/?l=39&amp;#x26;sid=18593_9028&amp;#x26;locale=en_US&quot;&gt;HSW8593&lt;/a&gt; &lt;strong&gt;HPE Design Workshop – Tools that help your customers and partners build amazing experiences&lt;/strong&gt;.  &lt;br&gt;
Wednesday, June 19 at 12:00pm.&lt;br&gt;
See all the resources available to you through the HPE Design team.  We’ll walk through the hpe.design website and share tips and tricks that you can use to create world class experiences.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.sessions/?l=39&amp;#x26;sid=18623_9256&amp;#x26;locale=en_US&quot;&gt;HSW8623&lt;/a&gt; &lt;strong&gt;Hack Shack Workshop – Learn about HPE’s most popular open source library, Grommet, a class-leading React based UI/UX framework&lt;/strong&gt;.&lt;br&gt;
Wednesday, June 19 at 2:00pm.&lt;br&gt;
Are you ready to turn stunning designs into pixel-perfect accessible web applications? Learn how you can start using Grommet to build class-leading web applications.
&lt;br /&gt;&lt;/p&gt;
&lt;h2&gt;More Fun Workshops and Challenges&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.sessions/?l=39&amp;#x26;sid=19718_8994&amp;#x26;locale=en_US&quot;&gt;HSW9718&lt;/a&gt; &lt;strong&gt;Hack Shack Workshop – Using RedFish API, monitor and manage HPE ProLiant servers from core to edge&lt;/strong&gt;.&lt;br&gt;
Wednesday, June 19 at 9:00am.&lt;br&gt;
Join us for a demonstration of the configuration and monitoring of Redfish Event Service compliant infrastructure using Nagios Core monitoring engine.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.sessions/?l=39&amp;#x26;sid=19835_9252&amp;#x26;locale=en_US&quot;&gt;HSW9835&lt;/a&gt; &lt;strong&gt;Hack Shack Workshop — Simplify edge/IoT management with intent-driven computing&lt;/strong&gt;.&lt;br&gt;
Tuesday, June 18 at 10:00am.&lt;br&gt;
Thursday, June 20 at 1:00pm.&lt;br&gt;
See how an intent-based computing framework, Libere, can free users from tedious and complicated configuration tasks by automatically managing applications based on performance goals.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Hack Shack Challenge - Join HPE “Hello World” and get started on understanding scripting&lt;/strong&gt;.&lt;br&gt;
Tuesday, June 18 – Thursday, June 20 during show floor hours.&lt;br&gt;
Learn how to code or improve your current coding skills. Your hack can be as simple as showing Hello World in notepad or a text editor like Vim. To win the prize, remember, creativity counts! No registration required. Just show up and start coding.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Hack Shack Challenge - You’ve heard of soup to nuts. Now, join us for a napkin to template challenge&lt;/strong&gt;.&lt;br&gt;
Tuesday, June 18 – Thursday, June 20 during show floor hours.&lt;br&gt;
In this two stage hackathon, develop something using public APIs. First, take a napkin and sketch your ideas on it. Then build out your idea using the napkin as a reference and the template project for starters. Judging will encompass both the napkin and the final published UI. No registration required. Just show up and start coding.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Hack Shack Challenge for HPE Proliant for Azure Stack Administrators/Operators – Provisioning services and enabling DevOps for tenants&lt;/strong&gt;.&lt;br&gt;
Tuesday, June 18 – Thursday, June 20 during show floor hours.&lt;br&gt;
As an Azure Stack cloud administrator, you can create IaaS and PaaS offers that your tenant users can self-subscribe to. Here you’ll learn how to use PowerShell and/or the admin portal to create and configure service offers with plans and quotas, and enable user subscriptions. No registration required. Just show up and start coding.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Hack Shack Challenge to seamlessly deploy hybrid cloud solution infrastructure with HPE Proliant for Azure Stack – Deploy apps to Azure and Azure Stack using an infrastructure-as-code template&lt;/strong&gt;.&lt;br&gt;
Tuesday, June 18 – Thursday, June 20 during show floor hours.&lt;br&gt;
Explore the concept of using infrastructure as code to deploy application resources globally via Azure or Azure Stack. You’ll also learn how to integrate HPE Nimble solutions as external storage for virtual machines running on Azure Stack. No registration required. Just show up and start coding.&lt;/p&gt;
&lt;p&gt;If you’re planning to be at the &lt;a href=&quot;https://www.hpe.com/events/discover/&quot;&gt;HPE Discover 2019 event,&lt;/a&gt; remember to stop by booth 2088 and meet HPE DEV &amp;#x26; Design experts, attend Workshops, participate in the Hack Shack Challenges, or just chill and relax with us. The &lt;a href=&quot;https://content.attend.hpe.com/go/agendabuilder.sessions/?l=39&amp;#x26;locale=en_US&quot;&gt;HPE Discover sessions catalog&lt;/a&gt; can give you more details on the workshops and challenges. To continue your experience after the show, &lt;a href=&quot;https://developer.hpe.com/signup&quot;&gt;sign up at the HPE DEV website&lt;/a&gt; to join the HPE DEV community. #letshackshack&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Using Your First Grommet Component with Create-React-App]]></title><description><![CDATA[In this tutorial, I will walk you through the steps it takes to set up your first grommet component using create-react-app. Prerequisites…]]></description><link>https://developer.hpe.com/using-your-first-grommet-component-with-create-react-app/</link><guid isPermaLink="false">https://developer.hpe.com/using-your-first-grommet-component-with-create-react-app/</guid><pubDate>Fri, 10 May 2019 15:27:12 GMT</pubDate><content:encoded>&lt;p&gt;In this tutorial, I will walk you through the steps it takes to set up your first grommet component using create-react-app.&lt;/p&gt;
&lt;h2&gt;Prerequisites&lt;/h2&gt;
&lt;p&gt;This tutorial assumes that you have node.js and a package manager: either npm (installed with node.js) or yarn. Create React App can be installed globally, or run locally using npx (npx comes with npm 5.2+).&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://nodejs.org/en/download/&quot;&gt;Node Download Page&lt;/a&gt; - The latest LTS version will have node.js, npm, and npx.&lt;br&gt;
&lt;a href=&quot;https://facebook.github.io/create-react-app/docs/getting-started&quot;&gt;Create React App Local Install&lt;/a&gt; - Instructions to install it locally can be found here (npm versions 5.2+).&lt;br&gt;
&lt;a href=&quot;https://gist.github.com/gaearon/4064d3c23a77c74a3614c498a8bb1c5f&quot;&gt;Create React App Global Install&lt;/a&gt; - Instructions to install it globally can be found here (npm versions 5.1 or earlier).&lt;br&gt;
&lt;a href=&quot;https://yarnpkg.com/en/docs/getting-started&quot;&gt;Yarn Package Manager - &lt;em&gt;Not Required&lt;/em&gt;&lt;/a&gt; - I use yarn as my primary package manager; however, npm will work fine for this tutorial.&lt;/p&gt;
&lt;h2&gt;Clean-up&lt;/h2&gt;
&lt;p&gt;Create and name your application with create-react-app. Then navigate into the folder of your newly created application and open it with the editor of your choice.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;If you have create-react-app installed globally,&lt;/em&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;create-react-app grommet-rules
cd grommet-rules
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;em&gt;or if you use npx.&lt;/em&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;npx create-react-app grommet-rules
cd grommet-rules
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once inside, delete the files &lt;code&gt;App.css&lt;/code&gt;, &lt;code&gt;App.test.js&lt;/code&gt;, &lt;code&gt;index.css&lt;/code&gt;, and &lt;code&gt;logo.svg&lt;/code&gt; that are located in the src folder. These files will not be used in this tutorial.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Delete the highlighted files.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/deletethese-1557502375028.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1557502375030&quot;&gt;&lt;/p&gt;
&lt;p&gt;All associated imports and components in index.js and App.js must be deleted as well.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;In App.js, delete the highlighted imports and header component.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/delete-app-1557502556518.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1557502556519&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;In index.js, delete the highlighted import.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/delete-index-1557502646695.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1557502646696&quot;&gt;&lt;/p&gt;
&lt;p&gt;Your &lt;code&gt;App.js&lt;/code&gt; file should look like this,&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-jsx&quot;&gt;import React from &apos;react&apos;;
function App() {
  return (
    &amp;#x3C;div className=&quot;App&quot;&gt;
    &amp;#x3C;/div&gt;
  );

}
export default App;

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;and your &lt;code&gt;index.js&lt;/code&gt; file should look like this.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-jsx&quot;&gt;import React from &apos;react&apos;;
import ReactDOM from &apos;react-dom&apos;;
import App from &apos;./App&apos;;
import * as serviceWorker from &apos;./serviceWorker&apos;;

ReactDOM.render(&amp;#x3C;App /&gt;, document.getElementById(&apos;root&apos;));
serviceWorker.unregister();
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;It&apos;s sanity check time!&lt;/strong&gt;&lt;br&gt;
Run your application with&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;yarn start
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;or&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;npm start

&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;p&gt;When you run your application, do you see a blank screen? Yes? Cool!&lt;br&gt;
When you add text to &lt;code&gt;App.js&lt;/code&gt;, is it visible on the screen? Yes? Rad! It&apos;s time to move on.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Set-up&lt;/h2&gt;
&lt;p&gt;Install the grommet packages and dependencies using a package manager of your choice.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;yarn add grommet grommet-icons styled-components
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;or&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;npm install grommet grommet-icons styled-components --save
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You will not be using grommet-icons and styled-components in this tutorial, but if you would like to keep using this application to practice using grommet in the future, it would be best if you installed them now.&lt;/p&gt;
&lt;p&gt;A requirement for grommet is to import it and use it as a top-level node. You can do this in &lt;code&gt;App.js&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;In &lt;code&gt;App.js&lt;/code&gt;, import grommet and replace the div component with the Grommet component. Your app is now ready to use all that grommet has to offer.&lt;/em&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-jsx&quot;&gt;import React from &apos;react&apos;;
import { Grommet } from &apos;grommet&apos;
function App() {
  return (
    &amp;#x3C;Grommet className=&quot;App&quot;&gt;
    &amp;#x3C;/Grommet&gt;
  );

}
export default App;
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Using Your First Component&lt;/h2&gt;
&lt;p&gt;Grommet components, such as Heading, can be imported and included in very much the same way as you did in the previous step.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;In &lt;code&gt;App.js&lt;/code&gt;, import the Heading component, put it within the Grommet component, and add some text.&lt;/em&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-jsx&quot;&gt;import React from &apos;react&apos;;
import { Grommet, Heading } from &apos;grommet&apos;
function App() {
  return (
    &amp;#x3C;Grommet className=&quot;App&quot;&gt;
      &amp;#x3C;Heading&gt;
        Please work!
      &amp;#x3C;/Heading&gt;
    &amp;#x3C;/Grommet&gt;
  );

}
export default App;
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;p&gt;It&apos;s sanity check time, yet again:&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Is the text visible in your application? Yes? It&apos;s time to work with properties!&lt;/p&gt;
&lt;h2&gt;Props&lt;/h2&gt;
&lt;p&gt;Properties can be used to position and style your grommet components. The Heading component has two properties, color and size, that you can manipulate.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;In &lt;code&gt;App.js&lt;/code&gt;, within your Heading component add the properties color and size, and give them values.&lt;br&gt;
Detailed information on how the properties of each component can be modified can be found at &lt;a href=&quot;https://v2.grommet.io/components&quot;&gt;grommet components docs&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-jsx&quot;&gt;import React from &apos;react&apos;;
import { Grommet, Heading } from &apos;grommet&apos;
function App() {
  return (
    &amp;#x3C;Grommet className=&quot;App&quot;&gt;
      &amp;#x3C;Heading
        size=&apos;large&apos;
        color=&apos;#00739D&apos;
      &gt;
        I&apos;ve Mastered Grommet!
      &amp;#x3C;/Heading&gt;
    &amp;#x3C;/Grommet&gt;
  );

}
export default App;

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It&apos;s time to check your work.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/tutorialcomplete-1557503147092.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1557503147092&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Maybe, dial it back a bit.&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;Resources&lt;/h2&gt;
&lt;p&gt;You&apos;ve created your grommet playground, and all that&apos;s left to do is practice. Here are a few resources that I found helpful when I first began familiarizing myself with grommet.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://v2.grommet.io/components&quot;&gt;Grommet Components Docs&lt;/a&gt; - This is useful for when you want information on each individual component, the properties that they have, and the default and accepted values for each property.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/grommet/grommet&quot;&gt;Grommet Github&lt;/a&gt; - It&apos;s always important to dig through the closed and open issues for any solutions not covered in the docs.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;http://slackin.grommet.io/&quot;&gt;Grommet Slack Inviter&lt;/a&gt; - When I&apos;ve exhausted all my resources, the grommet community is eager to help and quick to answer.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Grommet – the Glue that Binds Development and Design]]></title><description><![CDATA[Simple, consistent, and thoughtfully designed user interfaces (UI) can go a long way to integrating business applications and streamlining…]]></description><link>https://developer.hpe.com/grommet-the-glue-that-binds-development-and-design/</link><guid isPermaLink="false">https://developer.hpe.com/grommet-the-glue-that-binds-development-and-design/</guid><pubDate>Thu, 09 May 2019 14:59:02 GMT</pubDate><content:encoded>&lt;p&gt;Simple, consistent, and thoughtfully designed user interfaces (UI) can go a long way to integrating business applications and streamlining operations. Hewlett Packard Enterprise (HPE) is a huge proponent of simplifying complex environments. That’s why HPE developed &lt;a href=&quot;http://v2.grommet.io&quot;&gt;Grommet,&lt;/a&gt; an open source UI development tool, and made it available to anyone looking to create consistent user experiences for their applications.&lt;/p&gt;
&lt;p&gt;When I first started working with the Chief Design Office at HPE, I was very excited for the opportunity to learn more about Grommet. Design is playing a bigger part in computing applications today. To me, Grommet is like the glue that binds function and design together. As evidenced by input received through &lt;a href=&quot;http://developer.hpe.com&quot;&gt;HPE DEV&lt;/a&gt;  and several tradeshows and events, Grommet is both liked and used, sustained by a vibrant, grassroots community. It has attracted plenty of developer input with over 4,000 commits from a wide variety of different contributors.&lt;/p&gt;
&lt;p&gt;For those of you unfamiliar with Grommet, it is a library of reusable UI components for developers and designers. Grommet makes the application creation process both agile and easy. It does so by offering developers commonly used interface elements in a Node package and delivering them as building blocks to create web applications. It also gives designers a full library of design assets and templates. Grommet uses React as an underlying technology to help deliver its modularity and adds accessibility, responsiveness, and theming.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/screen-shot-2019-05-09-at-31254-pm2-1557501148565.png&quot; alt=&quot;screen shot 2019 05 09 at 3.12.54 pm[2]&quot;&gt;&lt;/p&gt;
&lt;p&gt;Grommet works using a mobile-first strategy in combination with a task-based design approach. Its components offer nothing more and nothing less than what’s required to complete a task, while still being flexible enough to customize and tailor it to your own needs.&lt;/p&gt;
&lt;p&gt;Grommet’s libraries give both designers and developers a way to collaborate and inject consistent user experiences into their applications easily. It offers ready-to-use layouts, type, colors, controls, and mechanisms to incorporate and present content. You might want to check out this link for &lt;a href=&quot;https://v2.grommet.io/components&quot;&gt;specifics on Grommet components.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Grommet is used by both internal HPE and external developers to build enterprise-class and consumer-grade applications that can be used in web, desktop, and mobile-friendly formats. It underpins many of the company’s software-defined platforms, like &lt;a href=&quot;http://www.hpe.com/us/en/integrated-systems/synergy.html&quot;&gt; HPE Synergy,&lt;/a&gt; HPE Storage, Aruba networks and HPE Server management. HPE provides developer support for Grommet through the &lt;a href=&quot;https://developer.hpe.com/community&quot;&gt;HPE Developer Community&lt;/a&gt; and &lt;a href=&quot;https://github.com/grommet/grommet&quot;&gt;GitHub.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;As much as I’ve learned about Grommet, I still have more questions. Like, what’s up with that purple Gremlin creature that keeps showing up on Grommet pages and Grommet stickers? Stay tuned for a future post where I hope to uncover the answer. For more thoughts on Grommet, you might want to check out the &lt;a href=&quot;https://www.hpe.com/us/en/insights/articles/getting-to-know-grommet-an-open-source-ui-dev-tool-1808.html&quot;&gt;blog&lt;/a&gt;  by John Paul Mueller of DataCon Services. You can also keep up with what’s going on by connecting with us at &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE DEV&lt;/a&gt; and subscribing to our &lt;a href=&quot;https://developer.hpe.com/newsletter-signup&quot;&gt;newsletter.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/5/rsz_1rsz_stack-waving-1557501164196.png&quot; alt=&quot;rsz_1rsz_stack waving&quot;&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[A Busy Month for HPE Developer - Newsletter]]></title><link>https://developer.hpe.com/2019-April-29/</link><guid isPermaLink="false">https://developer.hpe.com/2019-April-29/</guid><pubDate>Mon, 29 Apr 2019 05:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Virtualization is Not Cloud Computing]]></title><description><![CDATA[5bf2e1a0cd93d0796238ae01-blog-content-1556211177810 It seems everyone in the IT industry is focused on cloud computing. The industry has…]]></description><link>https://developer.hpe.com/virtualization-is-not-cloud-computing/</link><guid isPermaLink="false">https://developer.hpe.com/virtualization-is-not-cloud-computing/</guid><pubDate>Thu, 25 Apr 2019 15:27:57 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/4/gettyimages-160673315-1556211177673.jpg&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1556211177810&quot;&gt;&lt;/p&gt;
&lt;p&gt;It seems everyone in the IT industry is focused on cloud computing. The industry has seen these trends in the past: distributed computing, client-server computing, and virtualization to name a few. You may feel that if you&apos;re not in the cloud, you are falling behind and missing some great opportunities. Well, you may be.  To determine if you are, it’s important to understand what cloud computing is and how it&apos;s different from previous models and technologies.&lt;/p&gt;
&lt;p&gt;For over a decade, organizations have employed virtualization to make workloads more portable, easier to configure, and – most importantly – capable of more fully utilizing all available compute resources. Virtualization technologies allow users to abstract the physical compute environment and cram a bunch more stuff onto a fixed set of compute resources.&lt;/p&gt;
&lt;p&gt;Virtualization is fairly easy for IT and developers to adopt because the basic behavior models do not change. IT operators can still employ the same constructs with virtualized machines that they are used to using with physical infrastructure. They can provision a virtual machine, configure it, power it up, patch it, etc. And developers do not really need to change their programming models, either. The coding models and tools they use are generally the same between physical and virtual environments. Developers don’t really need to think about whether the target environment is physical or virtualized.&lt;/p&gt;
&lt;p&gt;Cloud computing is different. It&apos;s a new way of thinking about and approaching how IT operators manage environments and how developers employ those resources to solve business problems. Cloud computing is a new model that requires new methods and techniques to fully realize its benefits.&lt;/p&gt;
&lt;p&gt;For IT operators, this means that you are no longer concerned with just machines, memory, and networking. You must now learn about platform services (PaaS) and software services (SaaS) and how to effectively manage those for your developers and organizations. Given that resources often reside outside the organization, you may feel like cloud computing, especially public cloud, obsoletes your job. But it doesn’t – it actually makes you more essential. You are needed even more now to effectively manage your IT resources across both private and public domains.&lt;/p&gt;
&lt;p&gt;Developers face similar challenges with cloud computing.  Yes, as a developer, you still need to know your programming languages and SDKs.  But you have a whole new set of resources and constructs to employ that were not available in previous models. For example, cloud computing enables you to easily automate things like spinning up a database or webserver. You can also take advantage of new constructs like serverless computing. This frees you from thinking about virtual machines and resources and enables you to focus on the business problem and applications, letting the cloud environment handle the routine tasks. The cloud environment has a whole new set of tools for doing this -- and who doesn’t like learning about new tools?&lt;/p&gt;
&lt;p&gt;You may run workloads in public clouds such as Amazon, Azure, Google, or others, and think this is ‘cloud computing’.  But if you’re simply managing virtual machines and infrastructure (IaaS), you’re just doing ‘virtualization’ in the cloud and missing the real opportunities provided by the cloud model.  Similarly for developers, if you are ignoring the additional platform and software services, you might also be missing out. Employing platform and software services in the cloud frees you up from concentrating on compute resources so you can focus your attention on solving business issues.&lt;/p&gt;
&lt;p&gt;Keep in touch with us at &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE DEV&lt;/a&gt; to learn more about serverless computing, including Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS) concepts and services.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE DEV at Google Cloud Next `19]]></title><description><![CDATA[Sir Hackington Appbuilder III recounts HPE DEV experiences at Google Cloud Next ‘19 April 9-11 5bf2e1a0cd93d0796238ae01-blog-content…]]></description><link>https://developer.hpe.com/hpe-dev-at-google-cloud-next-19/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-dev-at-google-cloud-next-19/</guid><pubDate>Tue, 23 Apr 2019 20:04:56 GMT</pubDate><content:encoded>&lt;h1&gt;Sir Hackington Appbuilder III recounts HPE DEV experiences at Google Cloud Next ‘19 April 9-11&lt;/h1&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/4/untitled1-1556056663049.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1556056663050&quot;&gt;&lt;/p&gt;
&lt;p&gt;Sir Hackington Appbuilder III here. As an important member of the HPE Developer Community (HPE DEV), I get to travel to a lot of shows and hang out with event attendees. I just got back from #GoogleNext19 and wanted to tell you all about it!&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/4/picture12-1556051302495.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1556051302503&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://cloud.withgoogle.com/next/sf&quot;&gt;Google Cloud Next &apos;19&lt;/a&gt;, held April 9-11, 2019 in Moscone Center, San Francisco, brought together some of the brightest minds in tech for three full days of networking, learning, and problem solving. Thousands of IT professionals and executives across industries participated. Hewlett Packard Enterprise (HPE) commanded a presence at the event with an &lt;a href=&quot;http://cloud.google.com/blog/topics/partners/google-cloud-partners-with-hpe-on-hybrid-cloud-next19&quot;&gt; important announcement on groundbreaking partnerships that help simplify the pathway to hybrid cloud.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;On the first day, HPE announced that it would further expand its commitment to deliver the options and the experiences customers desire for hybrid cloud by strategically aligning with two powerful industry players, Nutanix and Google Cloud. These partnerships help to extend the fast-growing, ever-evolving HPE GreenLake ecosystem. The partnerships aim to provide customers with a consistent experience across public cloud and on-premises environments.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/4/picture1112-1556051430387.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1556051430388&quot;&gt;&lt;/p&gt;
&lt;p&gt;A key part of the announcement brought developer event attendees to the HPE DEV booth to ask about next steps and how they could participate. HPE will offer validated designs for Google Kubernetes Engine (GKE) based on &lt;a href=&quot;http://www.hpe.com/us/en/integrated-systems/simplivity.html&quot;&gt; HPE SimpliVity &lt;/a&gt; hyperconverged solutions, and many developers came to the HPE booth looking for information on how to interact with HPE SimpliVity using the HPE APIs.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/4/picture14-1556051540938.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1556051540940&quot;&gt;&lt;/p&gt;
&lt;p&gt;HPE DEV booth personnel were happy to interact with the attendees, showing them the HPE DEV portal and where they could find the tools they need. A few folks came in just to see me. One kind person even took me for a walk.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/4/picture51-1556051572650.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1556051572651&quot;&gt;&lt;/p&gt;
&lt;p&gt;There was also quite an interest in &lt;a href=&quot;https://v2.grommet.io/&quot;&gt;Grommet&lt;/a&gt;. Attendees expressed their excitement about how HPE was supporting open source and provided feedback from having used the UI framework. The use of Grommet seems to be a grassroots engagement starting with new users. Attendees seemed equally delighted to obtain Grommet stickers, as well as stickers for other HPE products.&lt;/p&gt;
&lt;p&gt;The highlight of each day was the daily prize giveaway where, through a raffle, one lucky event attendee walked away with an Argon Raspberry Pi kit just for having signed up for the HPE DEV newsletter.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/4/picture15-1556051644744.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1556051644745&quot;&gt;&lt;/p&gt;
&lt;p&gt;Here’s a link if you are interested in finding out more information on the HPE announcement about entering into &lt;a href=&quot;http://www.hpe.com/us/en/newsroom/blog-post/2019/04/two-groundbreaking-partnerships-help-simplify-the-pathway-to-hybrid-cloud.html&quot;&gt;strategic partnerships with Nutanix and Google Cloud.&lt;/a&gt; Hope to see you soon at our next event!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE DEV to attend KubeCon + CloudNativeCon Europe 2019 in May]]></title><description><![CDATA[5bf2e1a0cd93d0796238ae01-blog-content-1556049624648 HPE DEV to attend KubeCon + CloudNativeCon Europe 2019 in May Hewlett Packard Enterprise…]]></description><link>https://developer.hpe.com/hpe-dev-to-attend-kubecon-cloudnativecon-europe-2019-in-may/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-dev-to-attend-kubecon-cloudnativecon-europe-2019-in-may/</guid><pubDate>Tue, 23 Apr 2019 19:56:01 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/4/picture1-1556049624644.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1556049624648&quot;&gt;&lt;/p&gt;
&lt;h1&gt;HPE DEV to attend KubeCon + CloudNativeCon Europe 2019 in May&lt;/h1&gt;
&lt;p&gt;Hewlett Packard Enterprise (HPE) is excited to be attending &lt;a href=&quot;https://events.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2019/&quot;&gt;KubeCon + CloudNativeCon Europe 2019&lt;/a&gt; in Barcelona, Spain, May 20-23. Over 6,000 talented individuals in the industry are expected to gather, making it a great place to network with industry professionals and learn from top cloud native experts.
At the HPE Booth (G3), our team of subject matter experts will focus on 3 main topics that will help DevOps administrators and developers take advantage of Kubernetes clusters. These include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Containers-as-a-service and Kubernetes Clusters-as-a-service&lt;/li&gt;
&lt;li&gt;Persistent Storage for Kubernetes&lt;/li&gt;
&lt;li&gt;The HPE DEV community&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Through &lt;a href=&quot;https://www.hpe.com/us/en/solutions/cloud/onesphere.html&quot;&gt;HPE OneSphere&lt;/a&gt; and our Persistent Storage for Kubernetes, HPE provides DevOps administrators with automated, easy-to-manage Kubernetes Clusters-as-a-Service, deployable across hybrid cloud architectures (public and private) in a coherent and cost-controlled way. We offer both centrally managed clusters and consumer-deployed and -managed clusters on public cloud (AWS), on-prem (VMware ESX) and on bare metal, to enable agile DevOps environments.&lt;/p&gt;
&lt;p&gt;If you are a DevOps Admin concerned with providing Kubernetes Clusters and Containers to the masses, or a developer who would like to understand the benefits of joining the HPE DEV community, please visit us at the HPE Booth (G3) in the Sponsor Showcase. When you stop by our booth, sign up for the HPE DEV newsletter and be entered for a chance to win a prize!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Creating a Powershell Module as a wrapper for the HPE Global Dashboard REST API]]></title><description><![CDATA[With the release of HPE Global Dashboard version 1.60 HPE shipped a new (public) RESTful API. The documentation for the API is available…]]></description><link>https://developer.hpe.com/creating-a-powershell-module-as-a-wrapper-for-the-hpe-global-dashboard-r/</link><guid isPermaLink="false">https://developer.hpe.com/creating-a-powershell-module-as-a-wrapper-for-the-hpe-global-dashboard-r/</guid><pubDate>Tue, 23 Apr 2019 16:59:48 GMT</pubDate><content:encoded>&lt;p&gt;With the release of HPE Global Dashboard &lt;a href=&quot;https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-a00056235en_us&amp;#x26;docLocale=en_US&quot;&gt;version 1.60&lt;/a&gt; HPE shipped a new (public) RESTful API.&lt;/p&gt;
&lt;p&gt;The documentation for the API is available through the Help section of the appliance itself, and there&apos;s a version of the API published on &lt;a href=&quot;https://app.swaggerhub.com/apis-docs/hpe-global-dashboard/hpe-one_view_global_dashboard_rest_api/2&quot;&gt;swaggerhub&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;You can also head over to &lt;a href=&quot;https://developer.hpe.com/platform/hpe-oneview-global-dashboard/home&quot;&gt;HPE Dev&lt;/a&gt; to get more information about the API and there&apos;s also a &lt;a href=&quot;/blog/accessing-the-hpe-oneview-global-dashboard-api&quot;&gt;&quot;getting started&quot; post&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Currently there&apos;s no official Powershell module from HPE that lets you work with HPE Global Dashboard, and I haven&apos;t heard of any plans to create one either, so I decided to create one myself.&lt;/p&gt;
&lt;h2&gt;The HPE Global Dashboard Powershell module&lt;/h2&gt;
&lt;p&gt;At the time of this writing the module is at version 0.5.0.&lt;/p&gt;
&lt;p&gt;It doesn&apos;t support all the functionality from the REST API yet. I&apos;ve focused on the GET stuff and more specifically on the things we use in our environment. Currently the following can be done through the module:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Connect, (and disconnect) to a HPE Global Dashboard (GD) instance&lt;/li&gt;
&lt;li&gt;Add, reconnect and remove a OneView appliance to the GD instance&lt;/li&gt;
&lt;li&gt;List the OneView appliances connected to the GD instance&lt;/li&gt;
&lt;li&gt;List the current GD appliance certificate&lt;/li&gt;
&lt;li&gt;List Converged Systems&lt;/li&gt;
&lt;li&gt;List Enclosures&lt;/li&gt;
&lt;li&gt;List Server Hardware&lt;/li&gt;
&lt;li&gt;List Server Profiles and Profile Templates&lt;/li&gt;
&lt;li&gt;List Storage Systems&lt;/li&gt;
&lt;li&gt;Create and delete logical groups&lt;/li&gt;
&lt;li&gt;List Logical groups&lt;/li&gt;
&lt;li&gt;List members of a logical group&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Install module&lt;/h3&gt;
&lt;p&gt;The easiest way to install the module is through the &lt;a href=&quot;https://www.powershellgallery.com/packages/GlobalDashboardPS&quot;&gt;Powershell Gallery&lt;/a&gt; (note that you should start PowerShell as an Administrator).&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;Find-Module -Repository PSGallery -Name GlobalDashboardPS | Install-Module
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/4/gdps-psgallery-1556038459722.png&quot; alt=&quot;Install module from PS Gallery&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can also download the code from GitHub and import it into your Powershell session in whatever way you prefer.&lt;/p&gt;
&lt;h3&gt;Usage&lt;/h3&gt;
&lt;p&gt;To check the available functions, we can use the built-in Get-Command cmdlet:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;Get-Command -Module GlobalDashboardPS
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/4/gdps-functions-1556038517571.png&quot; alt=&quot;Available functions&quot;&gt;&lt;/p&gt;
&lt;p&gt;To start off we need to connect to the HPE Global Dashboard Instance. Note that currently you need to specify your credentials. I will add support for PS Credentials going forward.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;Connect-OVGD -Server &quot;your-appliance-ip&quot; -Username &quot;your-username&quot; -Directory &quot;your-directory&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Just to show the basic usage of the module, I&apos;ll demonstrate how to list Server Hardware, first with the default format (pulled from the ps1xml formatting file).&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;Get-OVGDServerHardware
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/4/gdps-serverhwlist-1556038565601.png&quot; alt=&quot;List server hardware&quot;&gt;&lt;/p&gt;
&lt;p&gt;To see all details of an object, you need to select its properties explicitly:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;Get-OVGDServerHardware | select -First 1 *
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/4/gdps-alldetails-1556038588225.png&quot; alt=&quot;Server details&quot;&gt;&lt;/p&gt;
&lt;p&gt;All functions have help text where you can see examples of the usage.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;Get-Help Get-OVGDServerHardware

Get-Help Get-OVGDServerHardware -Examples
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Paging and querying&lt;/h3&gt;
&lt;p&gt;Note that the API pages its results and defaults to returning 25 objects. You can specify the number of objects you want returned in the module functions, but the module does not support paging as such yet.&lt;/p&gt;
&lt;p&gt;Currently you can query for specific objects, but you need to use the object&apos;s ID. I am working on adding support for querying by name as well.&lt;/p&gt;
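&lt;p&gt;As a rough illustration of the two points above, something like the following should work. The parameter and property names here are assumptions for illustration only; check &lt;code&gt;Get-Help&lt;/code&gt; on the installed version for the exact syntax:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;# Return more than the default 25 objects (the count parameter name is an assumption)
Get-OVGDServerHardware -Count 100

# Query a specific object by its ID (querying by name is not supported yet; parameter name is an assumption)
Get-OVGDServerHardware -Entity &quot;your-server-hardware-id&quot;

# Until name queries are supported, you can filter client-side instead (property name is an assumption)
Get-OVGDServerHardware -Count 100 | Where-Object { $_.Name -like &quot;*esx01*&quot; }
&lt;/code&gt;&lt;/pre&gt;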
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;The full source of this project is available on &lt;a href=&quot;https://github.com/rumart/GlobalDashboardPS&quot;&gt;GitHub&lt;/a&gt; and the module is available on the &lt;a href=&quot;https://www.powershellgallery.com/packages/GlobalDashboardPS&quot;&gt;Powershell Gallery&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;There&apos;s quite a few things missing in the module, but hopefully I will be able to add in stuff going forward. Please feel free to contribute if you want!&lt;/p&gt;
&lt;p&gt;The &lt;a href=&quot;https://github.com/rumart/GlobalDashboardPS/blob/master/changelog.md&quot;&gt;changelog&lt;/a&gt; should be up to date with current and upcoming features.&lt;/p&gt;
&lt;p&gt;If you have any questions, comments, requests or want to contribute please contact me on &lt;a href=&quot;https://twitter.com/RudiMartinsen&quot;&gt;Twitter&lt;/a&gt;, or open an issue on &lt;a href=&quot;https://github.com/rumart/GlobalDashboardPS/issues&quot;&gt;GitHub&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[From Developer to Manager to Freight Captain]]></title><description><![CDATA[5bf2e1a0cd93d0796238ae01-blog-content-1556306907820 You’re a developer. You spend the majority of your day clacking away at your keyboard…]]></description><link>https://developer.hpe.com/from-developer-to-manager-to-freight-captain/</link><guid isPermaLink="false">https://developer.hpe.com/from-developer-to-manager-to-freight-captain/</guid><pubDate>Wed, 17 Apr 2019 19:35:59 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/4/container-1638068_1920-1556306907818.jpg&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1556306907820&quot;&gt;&lt;/p&gt;
&lt;p&gt;You’re a developer. You spend the majority of your day clacking away at your keyboard outputting lines and lines of code and working hard to vet your solutions. At the end of the day, you check your code in and mark issues as closed. Each day, you deliver your daily dose of syntactic morsels for everyone to admire.&lt;/p&gt;
&lt;p&gt;Because of the hard work you’ve put in, you earn the respect of others and move up to a technical lead position. You now spend the majority of your day architecting delightfully elegant solutions for technically complex problems. You mentor your team and keep them technically on-point while keeping an eye out for technical soundness. You pick up on the concept of delegation and really start to learn what it takes to be a leader. When you wake up in the morning, you know your team is on track with well-crafted and thoughtful solutions. When your day comes winding to an end, you feel satisfied knowing all of the project issues your team has closed. As you review the new and exciting features you’ve developed together, you feel pride in your team. Life is good!&lt;/p&gt;
&lt;p&gt;Time continues to pass and now you are officially leading as an engineering manager. You spend your time delegating, communicating, planning, and strategically positioning your team and organization for utter success. Your technical skills are neatly packed away and put in a drawer. And now, when you wrap up your day, &lt;em&gt;you&lt;/em&gt; have nothing tangible to deliver.&lt;/p&gt;
&lt;p&gt;Having made this same transition myself, going from ending my day delivering tangibles to what feels like piloting a freighter on an endless journey at sea, I know how strange this can feel. At first, I felt as if I wasn’t being productive. When I wrapped up the day, I found myself trying to reflect on all of the things I accomplished – but was left with the feeling that the whole day was one giant blur of meetings and conversations I had with my team and the other groups I work with.&lt;/p&gt;
&lt;p&gt;The transition from individual contributor to manager is not always an easy journey. The skillset used to technically lead or write code has very little direct carryover to people management and thoughtful leadership. It wasn’t until I replaced “I” with “we” that I really started seeing the tangible deliverables of this new role. Enabling my team to achieve all of the end-of-the-day items I previously accomplished and keeping them focused on ‘full speed ahead’, while avoiding the occasional iceberg, became my new deliverable.&lt;/p&gt;
&lt;p&gt;If you are traveling this same journey from developer to manager, you will face many challenges. Perhaps you have already transitioned and recognize many of these issues. The best advice I can give to folks making this transition is as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Practice forbearance. Avoid the urge to open your code editor to quickly fix a bug in the app your team is working on. This practice is like leaving the bridge of the ship to focus on ensuring the cargo is safe. You’re not doing your team any favors by injecting your code into their projects. Instead, guide them to write effective code by transferring your technical knowledge to them.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Embrace failure. Let your guard down and accept the failure scenarios. We all fall down. It’s how we pick ourselves back up and prevent future failure that truly defines us. The best learning experiences will always come from failure.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Actively conduct retrospectives. Similar to allowing for failure, your retrospectives will be a great place for everyone to learn what worked and what didn’t work with a project.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Set goals. Make sure you set short and long term goals with your team. Reflect on these goals regularly.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Keep the skills drawer open. Find other ways to keep your coding skills sharp. You’ll still need them to remain a relevant leader to your team.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;All in all, avoid focusing on the tangible deliverables. Focus on your team’s happiness and steering your cargo safely to port. If your team is well focused, happy, and motivated, the pieces they are building will fall correctly into place.&lt;/p&gt;
&lt;p&gt;Read more of our blogs at &lt;a href=&quot;/blog&quot;&gt;hpedev.io&lt;/a&gt;. And if you haven’t yet signed up for our &lt;a href=&quot;/newsletter-signup&quot;&gt;Newsletter&lt;/a&gt;, consider doing so to get more advice, tips, and tools to assist you with your own journey.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Welcoming New Faces and New Roles - Newsletter]]></title><link>https://developer.hpe.com/2019-March-29/</link><guid isPermaLink="false">https://developer.hpe.com/2019-March-29/</guid><pubDate>Fri, 29 Mar 2019 05:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Hacking away at the Paris Technology and Solutions Summit (TSS)]]></title><description><![CDATA[5bf2e1a0cd93d0796238ae01-blog-content-1553888959463 This year, and for the first time at an HPE Technology and Solutions Summit, the HPE…]]></description><link>https://developer.hpe.com/hacking-away-at-the-paris-technology-and-solutions-summit-tss/</link><guid isPermaLink="false">https://developer.hpe.com/hacking-away-at-the-paris-technology-and-solutions-summit-tss/</guid><pubDate>Thu, 28 Mar 2019 21:05:04 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/3/didier-couch-from-chris-1553888959463.jpg&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1553888959463&quot;&gt;&lt;/p&gt;
&lt;p&gt;This year, and for the first time at an HPE Technology and Solutions Summit, the HPE Developer Program decided to deliver a Hack Shack experience to event attendees. During the week of March 11 in Paris, France, we offered developer- and designer-centric activities to 3,200 HPE and partner presales employees. In this very special Hack Shack – complete with modern décor, a comfortable sofa, an espresso machine, and video games – HPE hosted technical workshops for attendees to follow along on their own laptops and learn how to use the Application Programming Interfaces (APIs) for such HPE products as &lt;a href=&quot;http://www.hpe.com/us/en/integrated-systems/software.html&quot;&gt;HPE OneView,&lt;/a&gt; &lt;a href=&quot;http://www.hpe.com/us/en/solutions/cloud/onesphere.html&quot;&gt;HPE OneSphere,&lt;/a&gt; &lt;a href=&quot;http://www.hpe.com/us/en/integrated-systems/simplivity.html&quot;&gt;HPE SimpliVity&lt;/a&gt; and the &lt;a href=&quot;http://h20392.www2.hpe.com/portal/swdepot/displayProductInfo.do?productNumber=Z7550-63371&quot;&gt;HPE OneView Global Dashboard&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/3/lslsl-1553803713926.png&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1553803713931&quot;&gt;&lt;/p&gt;
&lt;p&gt;One didn’t need to be a developer to participate in these well-attended workshops; anyone could join to understand what it means to use REST APIs. The idea behind the Hack Shack was to make the event attendees feel more comfortable discussing APIs with DevOps and DevIT staff. The tools used during these labs included Postman and scripting languages such as PowerShell or Python. The Hack Shack also came fully equipped with Macs and PCs; so in case someone forgot to bring their laptop, they could still participate. We also hosted a session explaining both the HPE DEV program and the Design@HPE program. Dana and Parul offer some more insights into Design@HPE in this short interview that was placed on &lt;a href=&quot;http://twitter.com/SBUCloud/status/1105771231008247809&quot;&gt;Twitter&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/3/didier-people-standing-from-chris-1553888995259.jpg&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1553888995260&quot;&gt;&lt;/p&gt;
&lt;p&gt;On Wednesday afternoon and Friday morning, we hosted simple hackathons. We proposed a number of challenges to our attendees, using the same HPE OneView, HPE OneSphere, HPE SimpliVity, and HPE OneView Global Dashboard APIs. The most innovative, passionate, and creative ways of solving the challenges were rewarded with cool gifts, like a Fitbit watch and Raspberry Pi units. We ran 17 sessions in the Hack Shack, which were attended by 251 participants. All in all, we learned a lot and had a lot of fun.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/3/didier-handshake-from-chris-1553889004324.jpg&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1553889004325&quot;&gt;&lt;/p&gt;
&lt;p&gt;Want to see more? Check out Didier’s interview on &lt;a href=&quot;http://twitter.com/SBUCloud/status/1106149221701468166&quot;&gt;Twitter&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[ Discovering HPE DEV – A Newcomer’s perspective]]></title><description><![CDATA[Digital transformation has flipped the industry’s focus from hardware-centricity to a software-defined reality. What makes up the server…]]></description><link>https://developer.hpe.com/discovering-hpe-dev-a-newcomers-perspective/</link><guid isPermaLink="false">https://developer.hpe.com/discovering-hpe-dev-a-newcomers-perspective/</guid><pubDate>Thu, 28 Mar 2019 21:03:14 GMT</pubDate><content:encoded>&lt;p&gt;Digital transformation has flipped the industry’s focus from hardware-centricity to a software-defined reality. What makes up the server, and where it’s located, is no longer as important as what’s running on it and how seamlessly these applications can be accessed and used across the infrastructure. No matter how the infrastructure fabric is woven throughout on premise and cloud resources, the most important thing is to ensure the best user experience (UX).&lt;/p&gt;
&lt;p&gt;As a new member of the HPE DEV team, I was excited to learn about the great work that’s coming out of this group. This high-energy team is dedicated to the continuous improvement of both software development and UX design.  Whether the software is being developed internally to HPE or is open source code – these folks are here to help.&lt;/p&gt;
&lt;p&gt;Modern collaborative DevOps methods have fundamentally changed software development, providing more efficient, modular processes that enable quicker time to market and allow developers to respond to customers’ needs faster. Additionally, these processes focus more on the importance of the user experience. After all, this is where customers live – at the user interface.&lt;/p&gt;
&lt;p&gt;With the shift to a more developer-centric mindset, HPE is empowering developers to manage infrastructure as code more effectively. Building an open source community allows HPE DEV engineers to work with developers directly to incorporate &lt;a href=&quot;http://www.interaction-design.org/literature/article/what-is-design-thinking-and-why-is-it-so-popular&quot;&gt;Design-Thinking&lt;/a&gt; principles that will help the developers produce better applications for their customers.&lt;/p&gt;
&lt;p&gt;When I joined this team, I found it interesting that both development AND design were combined in the same group. In my experience, developers and designers spoke very different languages. This is partly because the skills that make a successful programmer aren’t the same as those required to create a good UI. But then I learned about &lt;a href=&quot;http://v2.grommet.io/&quot;&gt;Grommet&lt;/a&gt;– a set of libraries that enable consistency in the design of code and application UIs, giving both designers and developers a way to inject consistent user experiences into enterprise apps easily. This open source UI dev tool is now being used by internal HPE and external developers to build enterprise applications that can be used in a variety of formats: web, desktop, and mobile. It also permeates HPE’s cloud and software-defined management products.&lt;/p&gt;
&lt;p&gt;As HPE continues to find better ways to work more closely with developers, you’ll see more coming out of this organization. The HPE DEV and design teams can connect you to HPE subject matter experts and resources. They can also help you develop software and solutions that improve automation, service delivery, and workflows, as well as your UX design. Maybe you’re new like me. If so, welcome to our newsletter! For those of you who have known us for a while, it would be great to connect and hear about some of your experiences with our team.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Design Thinking Workshops at TSS Brighten a Rainy Week at the Paris Expo]]></title><description><![CDATA[5bf2e1a0cd93d0796238ae01-blog-content-1553889135906 The weather at the Technology and Solutions Summit at the Paris Expo  at the Paris Expo…]]></description><link>https://developer.hpe.com/design-thinking-workshops-at-tss-brighten-a-rainy-week-at-the-paris-expo/</link><guid isPermaLink="false">https://developer.hpe.com/design-thinking-workshops-at-tss-brighten-a-rainy-week-at-the-paris-expo/</guid><pubDate>Thu, 28 Mar 2019 20:49:25 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/3/dana-paris-from-chris-1553889135905.jpg&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1553889135906&quot;&gt;&lt;/p&gt;
&lt;p&gt;The weather at the &lt;a href=&quot;http://h41382.www4.hpe.com/tss/&quot;&gt;Technology and Solutions Summit at the Paris Expo&lt;/a&gt; this past week was definitely a shade of gray to say the least. Rainy, cold, damp, and overcast -- with no sunshine the entire week. That didn’t keep Parul Tyagi and me from our mission to deliver Design Thinking workshops to conference attendees in the Hack Shack. What is a Hack Shack? It’s a place to hang out, write some code, have a coffee, share ideas, and brainstorm the next big idea, but most of all to have fun! The overall experience is better captured in this &lt;a href=&quot;http://twitter.com/SBUCloud/status/1106149221701468166/video/1&quot;&gt;tweet by Didier Lalli,&lt;/a&gt; one of the Hack Shack founders.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/3/dana-hack-shack-from-chris-1553889149915.jpg&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1553889149916&quot;&gt;&lt;/p&gt;
&lt;p&gt;The Hack Shack format was introduced to code lovers last year at HPE Discover in Las Vegas. The same setup was implemented at HPE Discover in Madrid this past November where we also introduced Design Thinking Workshops as part of the Hack Shack experience. This is where participants explore ideas through a guided ideation framework popularized by the &lt;a href=&quot;http://dschool.stanford.edu/&quot;&gt;Stanford d.school.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;In the Design Workshops, participants learn how to build upon ideas with their teammates by deferring judgement and by applying a few constraints that cause their ideas to “flare,” resulting in even more ideas being generated. Participants are asked to look to unexpected places for inspiration, a technique the d.school refers to as “analogous inspiration.” This is where something unrelated can help inform a solution.&lt;/p&gt;
&lt;p&gt;During my d.school training, facilitators Jeremy Utley and Perry Klabahn shared an analogous case study from Fairchild Semiconductor. By looking at the operating model of the floral delivery industry (an unexpected place), Fairchild was able to solve a supply chain issue they were having with a manufacturing component that was continuously on backorder. Neat, huh?&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/3/dana-accessory-table-from-chris-1553889169006.jpg&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1553889169007&quot;&gt;&lt;/p&gt;
&lt;p&gt;During the brainstorm sessions at TSS, the evaluation of ideas was left to the very end. The purpose of ideation is to get as many ideas out in the open by deferring judgement while focusing on quantity versus quality. Then towards the end of the session, participants evaluate all of the ideas based on a few criteria:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Which ideas are potentially a quick win?&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Which ideas are the most disruptive?&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Which ideas are the most delightful for end users?&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;By enforcing constraints throughout the process, participants become focused and learn how to prioritize. Each session is unique, and there are many “a-ha” moments for everyone. Participants are continuously surprised at the volume and variety of ideas this process promotes and often express interest in carrying the process forward into their daily work lives.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/3/dana-blond-woman-from-chris-1553889180664.jpg&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1553889180668&quot;&gt;&lt;/p&gt;
&lt;p&gt;By the end of the week, we delivered seven well-attended workshop sessions -- a few were at capacity, which meant we had to turn people away. Over the past six months, we have been busy delivering Design Thinking to a wide variety of groups throughout HPE. This grassroots effort is just the beginning of a cultural mindset shift to one that is more design-led.&lt;/p&gt;
&lt;p&gt;In addition to workshops, we delivered three design talks at the Hack Shack at TSS. We introduced the audience to our recently formed Chief Design Office, led by Bryan Jacquot, VP and Chief Design Officer for the Software Defined and Cloud (SDCG) organization and to our recently formed research function led by Amy Reitz, Ph.D. Additionally, we cited two recent ground-breaking publications that measure the impact of design maturity on the industry. The first, &lt;a href=&quot;http://www.mckinsey.com/business-functions/mckinsey-design/our-insights/the-business-value-of-design&quot;&gt;The Business Value of Design by McKinsey&amp;#x26;Co,&lt;/a&gt; discusses the idea that design adoption is not about the number of designers at a company, but more about design as a cultural mindset. The second, &lt;a href=&quot;http://www.invisionapp.com/design-better/design-maturity-model/&quot;&gt;The New Design Frontier by InVision,&lt;/a&gt; discusses design maturity across 2,200 companies and 74 industries, highlighting the differences between Visionaries (companies who have adopted design as a business strategy) and Producers (those who simply focus on the pane of glass).&lt;/p&gt;
&lt;p&gt;We would love to hear your thoughts on the topic of design maturity. Stay tuned for more reporting from upcoming events such as Google Cloud Next ’19, HPE Discover Las Vegas, and more…&lt;/p&gt;
&lt;p&gt;See you next time,&lt;/p&gt;
&lt;p&gt;Dana and Parul&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2019/3/dana-and-parul-from-chris-1553889189849.jpg&quot; alt=&quot;5bf2e1a0cd93d0796238ae01-blog-content-1553889189852&quot;&gt;&lt;/p&gt;
&lt;p&gt;Want to see more? Check out Dana’s and Parul’s interview that was recorded and shared on &lt;a href=&quot;http://twitter.com/SBUCloud/status/1105771231008247809&quot;&gt;Twitter.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Want to learn more about the Business Value of Design? Download the &lt;a href=&quot;https://www.mckinsey.com/~/media/McKinsey/Business%20Functions/McKinsey%20Design/Our%20insights/The%20business%20value%20of%20design/The-business-value-of-design-vF.ashx&quot;&gt;McKinsey &amp;#x26; Co. article here.&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Coffee and Critical Alerts]]></title><description><![CDATA[What could be better than starting your day with a hot cup of coffee and a custom-made critical alerts report? Ok, an exaggeration, but…]]></description><link>https://developer.hpe.com/coffee-and-critical-alerts/</link><guid isPermaLink="false">https://developer.hpe.com/coffee-and-critical-alerts/</guid><pubDate>Wed, 20 Feb 2019 22:40:05 GMT</pubDate><content:encoded>&lt;p&gt;What could be better than starting your day with a hot cup of coffee and a custom-made critical alerts report? Ok, an exaggeration, but OneView Global Dashboard’s 1.7 (OVGD) release has several new features including a new report – Critical Alerts. This new report combined with the existing feature of scheduling reports can make a powerful ally in keeping your data center healthy by cutting time spent hunting for problems to fix and instead having them presented to you.&lt;/p&gt;
&lt;p&gt;In this post I’ll share how to create a custom Critical Alerts report and how to schedule it.&lt;/p&gt;
&lt;p&gt;Note: In order to create a custom report, enable email, and schedule a report, the user must have a role of type Infrastructure administrator.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/12/blog-figure1-1550702346651.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;After logging in to OVGD, select the Reports page to see the &lt;em&gt;Critical Alerts&lt;/em&gt; report. Selecting the report runs it. The ellipsis in the top right of the report shows the options available to you – see Figure 1.&lt;/p&gt;
&lt;p&gt;If customizations are desired, you will need to save a custom version of this report by selecting &lt;em&gt;Save&lt;/em&gt;. Customizations could include a search, sorting on a different column in the table, filtering by selecting any subset of the meters in the two meter groups (&lt;em&gt;Alerts by Occurred Date&lt;/em&gt; and &lt;em&gt;Alerts by Resource Type&lt;/em&gt;), or adding or removing columns from the table. You can add or remove columns by selecting &lt;em&gt;Manage Report Content&lt;/em&gt;, also shown in Figure 1.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/12/blog-figure2-1550702392981.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Here is an example. Say you have an employee who troubleshoots all of the &lt;em&gt;server-hardware&lt;/em&gt; issues. For that person, you could create a custom report filtered on the server-hardware meter underneath the Alerts by Resource Type meter group. Other customizations could be easily added as well. Once the report has the customizations you desire, remember to save it. Run that saved report and again click the ellipsis, but this time select &lt;em&gt;Schedule&lt;/em&gt;. See Figure 2.&lt;/p&gt;
&lt;p&gt;Whoever is responsible for getting your data center from red to green status can have a custom made email sent to them each morning with their area of expertise highlighted. Or everyone could receive the same emails by adding multiple recipients to each one.&lt;/p&gt;
&lt;p&gt;OVGD has lots of flexibility built into the &lt;em&gt;Reports&lt;/em&gt; feature. A cup of coffee combined with OVGD email may be just what you need to save some time and start your day off right.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[A look ahead at 2019 - Newsletter]]></title><link>https://developer.hpe.com/2019-February-19/</link><guid isPermaLink="false">https://developer.hpe.com/2019-February-19/</guid><pubDate>Tue, 19 Feb 2019 06:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[The Advent of Ephemeral Infrastructure as Code]]></title><description><![CDATA[“Ephemeral infrastructure as code” refers to compute, storage, and networking that is programmatically assembled, provisioned and configured…]]></description><link>https://developer.hpe.com/the-advent-of-ephemeral-infrastructure-as-code/</link><guid isPermaLink="false">https://developer.hpe.com/the-advent-of-ephemeral-infrastructure-as-code/</guid><pubDate>Wed, 02 Jan 2019 15:40:59 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;“Ephemeral infrastructure as code”&lt;/strong&gt; refers to compute, storage, and networking that is programmatically assembled, provisioned and configured on demand to suit a variety of workloads. Once each workload execution is complete, resulting output is pushed to permanent storage and the infrastructure is available to be re-assembled and re-provisioned. These “composable” building blocks of compute, storage, and networking resources are available in the flexible workhorse known as HPE Synergy. Synergy offers a mix and match of compute nodes and storage drawers within each frame and is managed by a pair of Composer modules, running HPE OneView. A pair of redundant Image Streamer appliances is also available, which provides flexibility for automatically provisioning operating system and applications onto the Synergy nodes. Synergy plus Image Streamer is a powerful combination that is used at HudsonAlpha Institute for Biotechnology, a nonprofit genomics and genetics research institute in Huntsville, AL, to achieve ephemeral infrastructure as code.&lt;/p&gt;
&lt;p&gt;You may be thinking, “Write code and maintain GitHub repos? Isn&apos;t that more like a Developer or DevOps Engineer role? That&apos;s not really an IT thing.” But it can and should be! If your IT operations team is not writing code to communicate with infrastructure via API (application programming interface), you are missing an opportunity to decrease errors and increase innovation and speed of delivery of resources to developers and other end-users. Start by stocking the data center with API-capable hardware and then adopt a DevOps mentality within IT. There&apos;s something for everyone … HPE has OneView SDKs (Software Development Kits) available for a variety of languages including Python, Ruby, Go, Java, Hubot, and PowerShell.&lt;/p&gt;
&lt;p&gt;For the HudsonAlpha environment, I&apos;ve chosen to use the Python SDK to provision 6 use cases onto Synergy nodes including OpenStack private cloud, bare metal Docker, and Kubernetes (K8s) cluster running on Fedora 27 and CentOS 7.4. The use cases align with the requirements for our genomic application pipelines.&lt;/p&gt;
&lt;p&gt;I&apos;ve written a single Python script that can provision any one of the use cases and requires just a single argument of a use case configuration file. The Python script sends requests to Synergy&apos;s Composer via the OneView API, which applies the appropriate OneView profile template onto a Synergy node. The profile template contains an Image Streamer Deployment Plan, which consists of a Golden Image and Plan Scripts. The act of applying the profile template to the node includes automatic creation of an Image Streamer iSCSI boot volume for that node and creation of a clone of the Golden Image that is automatically customized per the Plan Scripts. The node boots from the iSCSI volume on Image Streamer and is fully personalized. How do I know once a node has booted and is ready? My Plan Scripts include automatic registration of the node with Hashicorp Consul, which is a service discovery tool. I can query Consul to see active nodes. I also include automatic installation of the DataDog monitoring daemon, so as soon as the node boots, metrics for the host OS and applications are visible within DataDog.&lt;/p&gt;
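&lt;p&gt;The actual provisioning script lives in the GitHub repo linked below. As a rough illustration of the flow just described, here is a minimal, hypothetical Python sketch that drives the OneView REST API directly with the &lt;code&gt;requests&lt;/code&gt; library. The endpoint paths, headers, and payload fields are assumptions based on the public OneView REST API documentation, not an excerpt from the HudsonAlpha code.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Hypothetical sketch: provision a Synergy node by applying a OneView server
# profile template, as described above. Endpoints, headers, and fields are
# assumptions based on the public HPE OneView REST API, not the author&apos;s script.
import json
import sys

import requests

def provision(config_path):
    cfg = json.load(open(config_path))      # the use-case configuration file
    base = &apos;https://&apos; + cfg[&apos;composer_ip&apos;]

    # Authenticate against the Composer and keep the session token.
    session = requests.post(base + &apos;/rest/login-sessions&apos;,
                            json={&apos;userName&apos;: cfg[&apos;user&apos;], &apos;password&apos;: cfg[&apos;password&apos;]},
                            verify=False).json()
    headers = {&apos;Auth&apos;: session[&apos;sessionID&apos;], &apos;X-Api-Version&apos;: &apos;300&apos;}

    # Find the profile template for this use case (OpenStack, Docker, K8s, ...).
    templates = requests.get(base + &apos;/rest/server-profile-templates&apos;,
                             headers=headers, verify=False).json()[&apos;members&apos;]
    template = next(t for t in templates if t[&apos;name&apos;] == cfg[&apos;template&apos;])

    # Creating a profile from the template triggers the Image Streamer deployment
    # plan; the node then boots from its customized iSCSI volume.
    profile = {&apos;name&apos;: cfg[&apos;profile_name&apos;],
               &apos;serverProfileTemplateUri&apos;: template[&apos;uri&apos;],
               &apos;serverHardwareUri&apos;: cfg[&apos;server_hardware_uri&apos;]}
    response = requests.post(base + &apos;/rest/server-profiles&apos;,
                             json=profile, headers=headers, verify=False)
    print(&apos;Profile request accepted:&apos;, response.status_code)

if __name__ == &apos;__main__&apos;:
    provision(sys.argv[1])
&lt;/code&gt;&lt;/pre&gt;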
&lt;p&gt;HPE provides ready-to-use Artifact Bundles containing Deployment Plans for RHEL and SLES operating systems. But as I’ve shown, it is possible to create your own as well. See “docs” in my GitHub repo for instructions on creating Fedora 27 and CentOS 7.4 Golden Images, and more information about Consul and DataDog. Plan Scripts and OS Build Plans are stored within “Artifact Bundles” and Python scripts are inside the “projects” directory. Image Streamer is flexible, and there are many ways to accomplish automation. My approach and our use cases at HudsonAlpha are always evolving. I look forward to reading how others are using this technology!&lt;/p&gt;
&lt;p&gt;GitHub: &lt;a href=&quot;https://hudsonalpha.github.io/synergy&quot;&gt;https://hudsonalpha.github.io/synergy&lt;/a&gt;&lt;br&gt;
HudsonAlpha Institute for Biotechnology: &lt;a href=&quot;http://www.hudsonalpha.org&quot;&gt;http://www.hudsonalpha.org&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Twitter: &lt;a href=&quot;https://twitter.com/katmullican&quot;&gt;@katmullican&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE Developer At Discover - Newsletter]]></title><link>https://developer.hpe.com/2018-December-19/</link><guid isPermaLink="false">https://developer.hpe.com/2018-December-19/</guid><pubDate>Wed, 19 Dec 2018 06:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[ A view of the network in the Hack Shack at HPE Discover Madrid 2018]]></title><description><![CDATA[Discover Madrid Stickers #LETSHACKSHACKHowdy! Brian here, just back from the Hack Shack at HPE Discover Madrid. Hanging out with customers…]]></description><link>https://developer.hpe.com/a-view-of-the-network-in-the-hack-shack-at-hpe-discover-madrid-2018/</link><guid isPermaLink="false">https://developer.hpe.com/a-view-of-the-network-in-the-hack-shack-at-hpe-discover-madrid-2018/</guid><pubDate>Fri, 14 Dec 2018 14:20:30 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/12/dmad-stickers-1544797453006.png&quot; alt=&quot;Discover Madrid Stickers&quot;&gt;&lt;/p&gt;
&lt;h1&gt;#LETSHACKSHACK&lt;/h1&gt;
&lt;p&gt;Howdy! Brian here, just back from the Hack Shack at HPE Discover Madrid. Hanging out with customers and partners was the best part of the show. Having fun helping them accelerate their cloud operational efficiency was icing on the cake. Yes, I said having fun. Having fun with code is a big part of the Hack Shack mantra and just one reason why I enjoy my job. Struggling with integration? Upgrade the API! Dealing with repetitive tasks? Automate! I code where I can make a difference and I have a great time doing it.&lt;/p&gt;
&lt;p&gt;My colleagues (&lt;a href=&quot;https://twitter.com/netmanchris&quot;&gt;Chris Young&lt;/a&gt; and Mark Parenti) and I had the opportunity to represent &lt;a href=&quot;https://www.hpe.com/us/en/integrated-systems/composable-fabric.html&quot;&gt;HPE Composable Fabric&lt;/a&gt; in the Hack Shack. We set out to do the impossible - ok, maybe just the highly improbable. Our mission: make network programming comprehensible and accessible. I know you must be chuckling about that mission statement, but read on! You are about to embark on a journey of wonder and awe and see how truly simple your network can be.&lt;/p&gt;
&lt;p&gt;HPE Composable Fabric Manager (fka: Plexxi) was designed and developed from the beginning to be fully API driven. The web-based user interface uses the same RESTful API available to everyone - no special endpoints or mysterious parameters - and fully documented with Swagger.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/12/plexxi-swagger-1544797482316.png&quot; alt=&quot;Plexxi Swagger Screenshot&quot;&gt;&lt;/p&gt;
&lt;p&gt;To make programming even easier in the Hack Shack, we pulled together Python bindings for the API, and everyone’s favorite coding example, “hello world”. Several of the customers dropping by the Hack Shack with their laptops successfully programmed the Composable Fabric switches in just a few minutes. Let me show you how they were able to do so much in so little time.&lt;/p&gt;
&lt;p&gt;The first component is a client class that wraps the Composable Fabric API in easy to access properties and methods. Initializing the client with authentication credentials and the address of the Composable Fabric Manager instance establishes a secure connection. Convenience methods offer easy access to fabric switches, switch ports, and more. For our exploration, getting switches and ports along with updating ports will suffice.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/12/plexxi-ss-01-1544797520119.png&quot; alt=&quot;plexxi ss 01&quot;&gt;&lt;/p&gt;
&lt;p&gt;Once we have the API client initialized, a call to the &lt;code&gt;get_switches()&lt;/code&gt; method returns the fabric switches, and &lt;code&gt;get_ports(switch_uuid)&lt;/code&gt; returns the ports for a given switch. We’ll use the port identifiers to update their status later with calls to &lt;code&gt;update_ports()&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/12/plexxi-ss-02-1544797572994.png&quot; alt=&quot;plexxi ss 02&quot;&gt;&lt;/p&gt;
&lt;p&gt;The “secret sauce” in our example is mapping the characters in “hello world” to switch ports in binary ASCII! Then it’s just a simple matter of administratively enabling or disabling the appropriate ports to send our message to the world.
Sure, this example has limited applicability in your data center and only your geekiest friends will appreciate it, but it illustrates the simplicity of programming the Composable Fabric API, making even the most powerful operations simple and automatable. In fact, many of the integrations in Composable Fabric Manager do exactly that - leverage internal and external APIs to automate network discovery, configuration, and remediation.&lt;/p&gt;
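&lt;p&gt;The screenshots above show the bindings we used in the Hack Shack. Since they are images rather than text, here is a rough, hypothetical sketch of the same idea built around the client methods named above (&lt;code&gt;get_switches()&lt;/code&gt;, &lt;code&gt;get_ports()&lt;/code&gt;, and &lt;code&gt;update_ports()&lt;/code&gt;); the import path, constructor arguments, and port fields are assumptions, not the published bindings.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Hypothetical sketch of the &quot;hello world&quot; demo. The client methods are the
# ones described above; the module name, constructor arguments, and the
# &apos;admin_state&apos; field are assumptions, not the real bindings&apos; signatures.
from cfm_client import CFMClient   # hypothetical Python bindings module

client = CFMClient(host=&apos;cfm.example.net&apos;, username=&apos;admin&apos;, password=&apos;secret&apos;)

switch = client.get_switches()[0]          # first fabric switch
ports = client.get_ports(switch[&apos;uuid&apos;])   # its ports, in order

# Map each character of the message to eight ports in binary ASCII:
# a 1 bit enables a port, a 0 bit disables it.
message = &apos;hello world&apos;
bits = &apos;&apos;.join(format(ord(ch), &apos;08b&apos;) for ch in message)

updates = []
for port, bit in zip(ports, bits):         # stops at whichever runs out first
    updates.append({&apos;uuid&apos;: port[&apos;uuid&apos;],
                    &apos;admin_state&apos;: &apos;enabled&apos; if bit == &apos;1&apos; else &apos;disabled&apos;})

client.update_ports(updates)
print(&apos;Spelled out&apos;, repr(message), &apos;across&apos;, len(updates), &apos;ports&apos;)
&lt;/code&gt;&lt;/pre&gt;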
&lt;p&gt;The Hack Shack at Discover 2018 in Madrid was a great experience and I loved meeting all of you, hearing your ideas and inspiring you. I look forward to seeing you again next year. If you didn’t stop by this time, be sure to visit us at the Discover 2019 event in Vegas.  Ask for me and let’s sit down and talk networking geek.&lt;/p&gt;
&lt;p&gt;And don’t forget! Keep an eye out for the upcoming Python bindings on &lt;a href=&quot;https://hpedev.io&quot;&gt;HPEDEV.IO&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Get Hands on with HPE Hybrid Cloud and Container Solutions at KubeCon, December 11-13]]></title><description><![CDATA[kubecon na 2018 KubeCon is the Cloud Native Computing Foundation’s flagship conference, which gathers adopters and technologists from…]]></description><link>https://developer.hpe.com/get-hands-on-with-hpe-hybrid-cloud-and-container-solutions-at-kubecon-de/</link><guid isPermaLink="false">https://developer.hpe.com/get-hands-on-with-hpe-hybrid-cloud-and-container-solutions-at-kubecon-de/</guid><pubDate>Mon, 03 Dec 2018 19:49:02 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/12/kubecon-na-2018-1543866788069.jpg&quot; alt=&quot;kubecon na 2018&quot;&gt;&lt;/p&gt;
&lt;p&gt;KubeCon is the Cloud Native Computing Foundation’s flagship conference, which gathers adopters and technologists from leading open source and cloud native communities for four days to further the education and advancement of cloud native computing. This year, HPE will be at &lt;a href=&quot;https://events.linuxfoundation.org/events/kubecon-cloudnativecon-north-america-2018/co-located-events/&quot;&gt;KubeCon 2018&lt;/a&gt; in Seattle, WA from December 10-13, 2018.&lt;/p&gt;
&lt;p&gt;From beginners to experts, join HPE for their workshops covering the basics of how to implement HPE hybrid cloud and container solutions. Don’t miss your chance to walk through the challenges of deploying and managing applications in hybrid cloud environments and get help from HPE’s experts in the following areas:&lt;/p&gt;
&lt;h2&gt;End to End Hybrid Cloud Management using HPE OneSphere&lt;/h2&gt;
&lt;p&gt;Digital disruption is having a profound effect on every company. IT departments are being asked to support an increasing number of providers and manage a faster application cadence. HPE OneSphere helps by providing a managed cloud layer on existing virtualized infrastructure and offers a single portal to manage a hybrid cloud and container environment. In this workshop, you’ll learn how HPE OneSphere, with its powerful REST API, can compose hybrid clouds capable of supporting both traditional and cloud-native applications.&lt;/p&gt;
&lt;h2&gt;Storage for Containers&lt;/h2&gt;
&lt;p&gt;Come find out how HPE allows you to deploy a robust persistent storage layer with advanced data services for stateful containers on-premises and in the cloud using HPE 3PAR, Nimble, and Cloud Volumes.&lt;/p&gt;
&lt;h2&gt;Containers with End-to-End Life Cycle using HPE Composable Infrastructure&lt;/h2&gt;
&lt;p&gt;HPE Composable Infrastructure provides the flexible infrastructure you need to incorporate container services into DevOps, host existing applications, add new micro services, and modernize legacy applications. It enables you to dynamically provision and scale applications, whether they run in VMs or containers. The software-defined architecture allows you to compose compute, storage, and networking resources to target specific workloads. And HPE Image Streamer enables stateless deployment of the operating system and other supporting software required to make the resources ready to run. Join this workshop to learn more.&lt;/p&gt;
&lt;h2&gt;HPE ProLiant for Microsoft Azure Stack&lt;/h2&gt;
&lt;p&gt;Azure Stack allows you to bring modern cloud services to your sensitive data and edge applications that may not yet be suitable for the public cloud. Join HPE experts to learn how the HPE ProLiant for Microsoft Azure Stack is a hybrid cloud solution that enables you to deliver Azure services from your data center.&lt;/p&gt;
&lt;p&gt;To learn more about KubeCon 2018, &lt;a href=&quot;https://events.linuxfoundation.org/events/kubecon-cloudnativecon-north-america-2018/co-located-events/&quot;&gt;click here&lt;/a&gt;. See you at the show!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Configuring threads for Optimal performance in HPE PowerShell Cmdlets]]></title><description><![CDATA[Initially when HPE PowerShell cmdlets (1.x version) were launched it had a limit of 256 thread limit which was good for a certain range of…]]></description><link>https://developer.hpe.com/configuring-threads-for-optimal-performance-in-hpe-powershell-cmdlets/</link><guid isPermaLink="false">https://developer.hpe.com/configuring-threads-for-optimal-performance-in-hpe-powershell-cmdlets/</guid><pubDate>Mon, 03 Dec 2018 07:06:34 GMT</pubDate><content:encoded>&lt;p&gt;Initially when HPE PowerShell cmdlets (1.x version) were launched it had a limit of 256 thread limit which was good for a certain range of IPs but was not sufficient for a big set of input IP data like more than 10k range. Then we re-designed the default thread limit to 64 in iLO and BIOS modules which received good feedback initially but later few customers reported that its slow performance when a big range of data is provided. Then we came up with the idea of configurable threads where on a particular PowerShell session users can configure the threads above default or below that value using Get-HPEXYZMaxThreadLimit and Set-HPEXYZMaxThreadLimit (XYZ - means module name like iLO or BIOS) cmdlets like always go for even exponential power of 2 number. For e.g., 64, 128, 256(default), 512, 1024, 2048, 4096 (Maximum).&lt;/p&gt;
&lt;p&gt;The real problem started here: customers did not know which thread limit to configure, and without that knowledge they often set a higher limit on less capable hardware, with lower performance as the result. Even with capable hardware, simply raising the limit often does not help. A case study confirmed that increasing the thread limit alone does not improve performance, because more time ends up being spent switching between threads than doing the real work (processing iLO IPs).&lt;/p&gt;
&lt;p&gt;For example, one customer had 350 reachable iLO IP addresses in an input CSV list. With 256 threads configured, performance was worse than with a 64-thread limit on the same input data. The table below shows how long the discovery and connect cmdlets took under different maximum thread settings. After more tests with different thread limits, it became evident that a relationship between the input IP count and the configured thread count needs to be maintained for optimal performance.&lt;/p&gt;
&lt;h1&gt;Our team recommends maintaining a 5:1 ratio for a fixed list of input IP addresses (all reachable, valid iLOs).&lt;/h1&gt;
&lt;p&gt;That is, configuring one thread for every five iLO IP addresses is optimal. For example, if the input IP count is 320, the optimal setting is 64 threads. Anything higher or lower (such as 128 or 32) will not give the best performance.&lt;/p&gt;
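&lt;p&gt;For illustration only, here is a small Python helper that applies this 5:1 guideline and rounds to the power-of-two values the cmdlets accept. The function name and the rounding choice are mine, not part of the HPE modules.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Illustrative helper for the 5:1 guideline above: one thread per five
# reachable iLO IPs, rounded up to the power-of-two values the
# Set-HPE*MaxThreadLimit cmdlets accept (64 minimum, 4096 maximum).
# The rounding choice here is an assumption, not official HPE guidance.
def suggested_max_threads(ilo_count):
    threads = 64
    while threads * 5 &lt; ilo_count and threads &lt; 4096:
        threads *= 2
    return threads

print(suggested_max_threads(320))    # 64, matching the example above
print(suggested_max_threads(1750))   # 512
&lt;/code&gt;&lt;/pre&gt;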
&lt;p&gt;&lt;em&gt;Physical Server Configuration: BL460c Gen9 with 8 Core (Hyper-Threaded to 16), 32GB of memory, Network Card FlexFabric 10GB.&lt;/em&gt;&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tests&lt;/th&gt;
&lt;th&gt;Version&lt;/th&gt;
&lt;th&gt;Count&lt;/th&gt;
&lt;th&gt;Total&lt;/th&gt;
&lt;th&gt;Discovery&lt;/th&gt;
&lt;th&gt;Connect&lt;/th&gt;
&lt;th&gt;Disconnect&lt;/th&gt;
&lt;th&gt;Server&lt;/th&gt;
&lt;th&gt;MaxThread&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Test1&lt;/td&gt;
&lt;td&gt;2.1&lt;/td&gt;
&lt;td&gt;346&lt;/td&gt;
&lt;td&gt;01:33&lt;/td&gt;
&lt;td&gt;00:37&lt;/td&gt;
&lt;td&gt;00:22&lt;/td&gt;
&lt;td&gt;00:03&lt;/td&gt;
&lt;td&gt;00:29&lt;/td&gt;
&lt;td&gt;64&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Test2&lt;/td&gt;
&lt;td&gt;2.1&lt;/td&gt;
&lt;td&gt;346&lt;/td&gt;
&lt;td&gt;02:34&lt;/td&gt;
&lt;td&gt;00:43&lt;/td&gt;
&lt;td&gt;00:29&lt;/td&gt;
&lt;td&gt;00:05&lt;/td&gt;
&lt;td&gt;01:15&lt;/td&gt;
&lt;td&gt;256&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Test3&lt;/td&gt;
&lt;td&gt;2.1&lt;/td&gt;
&lt;td&gt;346&lt;/td&gt;
&lt;td&gt;02:39&lt;/td&gt;
&lt;td&gt;00:40&lt;/td&gt;
&lt;td&gt;00:35&lt;/td&gt;
&lt;td&gt;00:05&lt;/td&gt;
&lt;td&gt;01:18&lt;/td&gt;
&lt;td&gt;512&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;What if the input data is given as a range, such as 192.168.10.0-255 (256 IPs) or 192.168.10-12 (768 IPs)? In that case, the 5:1 ratio may not hold, because it is not known in advance which addresses in the range are actually iLOs. Here, a higher maximum thread limit gives better performance. The data below provides more insight.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Physical Server Configuration: BL460c Gen9 with 8 Core (Hyper-Threaded to 16), 32GB of memory, Network Card FlexFabric 10GB.&lt;/em&gt;&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tests&lt;/th&gt;
&lt;th&gt;Version&lt;/th&gt;
&lt;th&gt;Count&lt;/th&gt;
&lt;th&gt;Total&lt;/th&gt;
&lt;th&gt;Discovery&lt;/th&gt;
&lt;th&gt;Connect&lt;/th&gt;
&lt;th&gt;Disconnect&lt;/th&gt;
&lt;th&gt;Server&lt;/th&gt;
&lt;th&gt;MaxThread&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Test1&lt;/td&gt;
&lt;td&gt;2.1&lt;/td&gt;
&lt;td&gt;118&lt;/td&gt;
&lt;td&gt;02:12&lt;/td&gt;
&lt;td&gt;01:36&lt;/td&gt;
&lt;td&gt;00:23&lt;/td&gt;
&lt;td&gt;00:01&lt;/td&gt;
&lt;td&gt;00:11&lt;/td&gt;
&lt;td&gt;64&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Test2&lt;/td&gt;
&lt;td&gt;2.1&lt;/td&gt;
&lt;td&gt;119&lt;/td&gt;
&lt;td&gt;01:49&lt;/td&gt;
&lt;td&gt;01:16&lt;/td&gt;
&lt;td&gt;00:18&lt;/td&gt;
&lt;td&gt;00:01&lt;/td&gt;
&lt;td&gt;00:12&lt;/td&gt;
&lt;td&gt;256&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Test3&lt;/td&gt;
&lt;td&gt;2.1&lt;/td&gt;
&lt;td&gt;119&lt;/td&gt;
&lt;td&gt;00:43&lt;/td&gt;
&lt;td&gt;00:43&lt;/td&gt;
&lt;td&gt;00:19&lt;/td&gt;
&lt;td&gt;00:01&lt;/td&gt;
&lt;td&gt;00:12&lt;/td&gt;
&lt;td&gt;512&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;You can see that as the maximum thread limit increases, the total time decreases, meaning higher thread limits give better performance here. That was not the case in the earlier results, where all of the input addresses were valid iLOs.&lt;/p&gt;
&lt;p&gt;The bottom line: choose the maximum thread limit based on the type of input IP addresses you have. Of course, the host hardware needs sufficient CPU and RAM to support higher thread configurations. I hope this helps customers who work with large IP ranges choose the maximum thread limit accordingly (the default is 256).&lt;/p&gt;
&lt;p&gt;Note: The performance numbers above depend on various factors, such as network (LAN/WAN) load and iLO server load at the time. They are for reference only; please do not treat them as ideal figures for the maximum thread limit.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Building HPE's new design portal with headless Wordpress and Next.JS]]></title><description><![CDATA[Work smarter, not harder I can't quite recall who planted the "work smarter, not harder" seed in my head but for the past six months, it has…]]></description><link>https://developer.hpe.com/building-hpes-new-design-portal-with-headless-wordpress-and-nextjs/</link><guid isPermaLink="false">https://developer.hpe.com/building-hpes-new-design-portal-with-headless-wordpress-and-nextjs/</guid><pubDate>Tue, 27 Nov 2018 20:35:27 GMT</pubDate><content:encoded>&lt;h2&gt;Work smarter, not harder&lt;/h2&gt;
&lt;p&gt;I can&apos;t quite recall who planted the &quot;work smarter, not harder&quot; seed in my head but for the past six months, it has been resonating non-stop in an almost comical manner. This project reflected that mantra to a T.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://media.giphy.com/media/SSw1KX5otdqPm/source.gif&quot; alt=&quot;work smarter not harder&quot;&gt;&lt;/p&gt;
&lt;h2&gt;The brief ...&lt;/h2&gt;
&lt;p&gt;An application concept recently arrived at my desk regarding constructing a new design portal for HPE, similar to the HPE DEV portal you&apos;re currently using. The goal for the application is to house all things related to HPE&apos;s new design program, including all of the various design resources we offer. My career at HPE has generally revolved around building or leading the construction of this same type of web application in various flavors. The stack usually includes a &lt;a href=&quot;https://v2.grommet.io/&quot;&gt;grommet&lt;/a&gt; based front end and some form of a custom content management system with an API. In fact, this very developer portal was built upon 100% custom solutions built over the course of three months. With each iteration of this fullstack scope, we&apos;ve gradually whittled away the complexity of a fully custom backend and frontend. The difference between this new design portal project and previous projects was that we had &lt;strong&gt;3 weeks&lt;/strong&gt; to ship it for &lt;a href=&quot;https://www.hpe.com/events/discover/&quot;&gt;Discover &apos;18 Madrid&lt;/a&gt;. The &lt;em&gt;&quot;work smarter, not harder&quot;&lt;/em&gt; mantra showed up in full swing. What was once a Grommet based CMS was replaced with a headless instance of Wordpress running in a Docker container. A custom Webpack configuration which constantly needed tuning and a homebrewed server-side rendering engine were replaced by &lt;a href=&quot;https://nextjs.org/&quot;&gt;Next.JS&lt;/a&gt; with Grommet.&lt;/p&gt;
&lt;h2&gt;Off to the races&lt;/h2&gt;
&lt;p&gt;As we began building on this new stack I felt a huge weight being lifted off of my shoulders. I was no longer focusing on whether or not the tooling or build processes were working as expected and I could focus on what was important - the end user&apos;s experience. Not having to wrestle a webpack config or babel transpiling gave us the runway we needed to make sure this project was successful.&lt;/p&gt;
&lt;p&gt;Next&apos;s server-side rendering and bundling worked flawlessly out of the box, and it integrated perfectly with Grommet. New stacks come with a learning curve; however, this one was quite low. Here are some items we needed to pick up to create a great Grommet-based Next frontend:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Handling page &lt;a href=&quot;https://github.com/zeit/next.js#routing&quot;&gt;routing&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;The next/link component and nesting anchor tags within it&lt;/li&gt;
&lt;li&gt;Handling route parameters, &lt;a href=&quot;https://github.com/zeit/next.js/tree/canary/examples/parameterized-routing&quot;&gt;here&apos;s a great example&lt;/a&gt; of the approach we took&lt;/li&gt;
&lt;li&gt;Creating new components to handle SEO and meta tags&lt;/li&gt;
&lt;li&gt;Extending Next&apos;s custom babel config to support styled components, here&apos;s a &lt;a href=&quot;https://dev.to/aprietof/nextjs--styled-components-the-really-simple-guide----101c&quot;&gt;great article&lt;/a&gt; explaining how to handle this&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;While we had cleared up tooling and build complexities on the front end, there was still the matter of handling dynamic content. I had kicked around headless Wordpress in a few projects but always walked away wondering if it was the right choice. As time has gone by, the ecosystem has ironed out a lot of the complexities that made me question the solution; the WP API has now become a staple in web application development. We started out by referencing &lt;a href=&quot;https://github.com/postlight/headless-wp-starter&quot;&gt;Postlight&lt;/a&gt; to get a better understanding of the current state of headless Wordpress. I ended up not using Postlight&apos;s approach to Wordpress, but it did provide me with some keen insight into proper architecture. Ultimately, we chose to roll our own environment-variable-driven Docker Compose setup which would handle Wordpress, MySQL, SSL, Nginx, and backups, and obtain certificates on the fly from Let&apos;s Encrypt. Our final Wordpress build included the following plugins:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://wordpress.org/plugins/acf-to-wp-api/&quot;&gt;ACF to WP API&lt;/a&gt; to expose custom metadata in the Wordpress API&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.advancedcustomfields.com/&quot;&gt;Advanced Custom&lt;/a&gt; Fields to allow for flexible content entry&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://wordpress.org/plugins/custom-post-type-ui/&quot;&gt;Custom Post Type UI&lt;/a&gt; to quickly create and edit post types without requiring code changes or rebuilds&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://deliciousbrains.com/wp-migrate-db-pro/&quot;&gt;WP Migrate Pro&lt;/a&gt; to effortlessly sync data from our staging to our production environment&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://wordpress.org/plugins/wp-rest-api-v2-menus/&quot;&gt;WP-REST-API V2 Menus&lt;/a&gt; to expose Wordpress&apos; native nav menu editing to the API&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That&apos;s all it took to create a fully featured CMS in about a week.&lt;/p&gt;
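&lt;p&gt;A quick way to sanity-check a headless install like this one is to pull content straight from the WP REST API. The sketch below assumes the standard &lt;code&gt;/wp-json/wp/v2&lt;/code&gt; routes are enabled and uses a placeholder domain; this is the same kind of call a headless frontend makes at render time.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Minimal sanity check of a headless WordPress install: fetch the latest posts
# over the built-in WP REST API. The domain is a placeholder; /wp-json/wp/v2/posts
# is WordPress&apos; standard REST route for posts.
import requests

API = &apos;https://cms.example.com/wp-json/wp/v2&apos;

resp = requests.get(API + &apos;/posts&apos;, params={&apos;per_page&apos;: 5})
resp.raise_for_status()

for post in resp.json():
    # &apos;slug&apos; and &apos;title.rendered&apos; are standard fields in the posts response.
    print(post[&apos;slug&apos;], &apos;-&apos;, post[&apos;title&apos;][&apos;rendered&apos;])
&lt;/code&gt;&lt;/pre&gt;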
&lt;p&gt;&lt;img src=&quot;https://media.giphy.com/media/XH6MU5zmqIpAA/giphy.gif&quot; alt=&quot;gotchas&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Headless Wordpress gotchas&lt;/h2&gt;
&lt;p&gt;I&apos;ve picked up a few things from building Wordpress APIs; here&apos;s what to look out for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Your API will most likely live in its own sub-domain to avoid exposing non-traditional web ports like &lt;code&gt;8000&lt;/code&gt; in your URLs&lt;/li&gt;
&lt;li&gt;Keep &lt;a href=&quot;https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS&quot;&gt;CORS&lt;/a&gt;, cross-origin resource sharing, in mind. You may need to include a &lt;a href=&quot;https://github.com/ahmadawais/WP-REST-Allow-All-CORS&quot;&gt;plugin&lt;/a&gt; to allow cross-domain fetches&lt;/li&gt;
&lt;li&gt;If you&apos;re running Wordpress in Docker consider state, be sure to mount and backup the media uploads volume and of course your MySQL database&lt;/li&gt;
&lt;li&gt;Disable your Wordpress front end with &lt;a href=&quot;https://github.com/postlight/headless-wp-starter/blob/master/wordpress/wp-content/themes/postlight-headless-wp/index.php&quot;&gt;redirects&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;The perception of value through customization&lt;/h2&gt;
&lt;p&gt;I was once naive enough to think custom solutions added enough technical value to a project to disregard the delivery timeline, and I simply worked harder. While building these custom solutions is absolutely invaluable when it comes to learning, it&apos;s not always a sustainable practice. I still struggle to justify to my colleagues my team&apos;s decision to use Wordpress for our data layer, but when all of my technical hurdles come solved out of the box, it&apos;s hard to refute the value of a staple like Wordpress. When we look at buying a car, we generally don&apos;t buy the pieces separately and then build the engine. We find the car we like and we drive it.&lt;/p&gt;
&lt;p&gt;Now that you&apos;ve listened to my long-winded spiel about libraries, content management, and frameworks &lt;strong&gt;&lt;a href=&quot;https://hpe.design/&quot;&gt;check out the new HPE design portal!&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://media.giphy.com/media/BRpMznCmYTiik/giphy.gif&quot; alt=&quot;88 mph&quot;&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Don’t miss it! HPE Discover Hack Shack offers something for everyone at every skill level!]]></title><description><![CDATA[HPE Discover Madrid kicks off next week! If you are going to the event, the HPE Discover Hack Shack will offer unique hackathons, technical…]]></description><link>https://developer.hpe.com/dont-miss-it-hpe-discover-hack-shack-offers-something-for-everyone-at-ev/</link><guid isPermaLink="false">https://developer.hpe.com/dont-miss-it-hpe-discover-hack-shack-offers-something-for-everyone-at-ev/</guid><pubDate>Mon, 26 Nov 2018 17:59:48 GMT</pubDate><content:encoded>&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/events/discover/&quot;&gt;HPE Discover Madrid&lt;/a&gt; kicks off next week! If you are going to the event, the HPE Discover Hack Shack will offer unique hackathons, technical workshops, and coding challenges focused on a variety of HPE products, including HPE OneSphere, HPE SimpliVity, and the new HPE Composable Fabric Manager API. Whatever your skill level, the HPE Discover Hack Shack has a session for you. Even if you don&apos;t know how to code, the Hack Shack located in the Transformation Showcase is still your destination to participate and win some great prizes.&lt;/p&gt;
&lt;p&gt;If you will be at HPE Discover next week, make sure the below Hack Shack activities are on your must-attend list:&lt;/p&gt;
&lt;h2&gt;Design Collaboration Workshop at the Hack Shack: Discover Your Next Big Idea!&lt;/h2&gt;
&lt;p&gt;Do you have an amazing idea that will accelerate your company’s business, but you don’t know how to develop it further? Join the &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;Design@HPE&lt;/a&gt; team in the HPE Discover Hack Shack to take it to the next level. To help understand your vision, the design team will get to know more about you, your business, and how you interact with HPE products and solutions. You will collaborate with the team through immersive design and creative activities to help build out your next big idea! Everyone is welcome to attend this session.&lt;/p&gt;
&lt;h2&gt;Design Playground Workshop at the Hack Shack: Engage in New User Experience Gameplay&lt;/h2&gt;
&lt;p&gt;Get engaged with the Design@HPE team at HPE Discover in an interactive design playground to help create some amazing design experiences. You’ll interact with HPE’s latest prototypes while the design team listens to gain insight about who you are, what you do, and how you use HPE solutions. Through interactive play, we will have the opportunity to build out some memorable designs for future solutions and products. And, you get a chance to win cool prizes! Everyone is welcome to attend this session.&lt;/p&gt;
&lt;h2&gt;Interested in API’s? Learn how to make hybrid cloud simple with the HPE OneSphere API at this Hack Shack Workshop&lt;/h2&gt;
&lt;p&gt;Beginners or experts, come to the HPE Discover Hack Shack to learn more about &lt;a href=&quot;https://www.hpe.com/us/en/solutions/cloud/hybrid-it-management.html&quot;&gt;HPE OneSphere&lt;/a&gt; and hybrid cloud management. In this workshop, you will learn how to interact with HPE OneSphere’s SaaS platform through its powerful REST API. HPE experts will start from the HPE DEV website, explore the API description, and then use the Postman open source tool to interact with the API. Remember to bring your laptop so you can learn how to make your code come alive in this session! Experts will help you use PowerShell (or Python) to write a script to automate hybrid cloud operations.&lt;/p&gt;
&lt;h2&gt;Dive into infrastructure automation with the HPE OneView API in a Discovery Workshop at the Hack Shack&lt;/h2&gt;
&lt;p&gt;Bring your laptop, come in, get comfortable, and be ready to learn about &lt;a href=&quot;https://www.hpe.com/us/en/integrated-systems/software.html&quot;&gt;HPE OneView&lt;/a&gt;, HPE’s &lt;a href=&quot;https://www.hpe.com/us/en/solutions/infrastructure/composable-infrastructure.html&quot;&gt;composable infrastructure&lt;/a&gt; management platform. During this HPE Discover interactive session, you’ll explore the API description and use the Postman open source tool to interact with it. Next, you will use PowerShell (or Python) to write a script that automates HPE OneView. By the end of the workshop, you will have written a simple example of infrastructure as code. Beginners and experts are welcome!&lt;/p&gt;
&lt;h2&gt;HPE OneSphere Hackathon – Create a super cool unified Hybrid Cloud Dashboard&lt;/h2&gt;
&lt;p&gt;Do you dare enter the 2-hour HPE OneSphere multi-cloud management challenge? In this session, you will build a functional report that shows all the existing deployments grouped by projects for each cloud provider. All solutions are accepted — from text reports to super cool graphical dashboards. Bring your friends, pick your favorite language, and let the challenge begin! The creators of the most innovative hack will also walk away with great prizes! Beginners and expert developers welcome.&lt;/p&gt;
&lt;h2&gt;HPE OneView Hackathon — Find the Misbehaving Server in a Large Datacenter&lt;/h2&gt;
&lt;p&gt;Want to challenge yourself to build a solution for finding a misbehaving server -- with only limited information and some logs from its iLO in just 2 hours? If this kind of engineering challenge meets your operational role, this hack is for you! All solutions are accepted as long as they do the job. Bring your co-workers, pick your favorite language, and show off your app-building chops. Winners will head home with great prizes! Beginners and expert developers welcome.&lt;/p&gt;
&lt;h2&gt;HPE Hack Shack: Welcome to the Hello World Hackathon&lt;/h2&gt;
&lt;p&gt;Enter the HPE Discover Hack Shack backyard to learn how to code or improve your current coding skills! Use your favorite language or embrace the challenge of using a new language you want to learn. Your hack can be as simple as showing Hello World in notepad or a text editor like Vim. Or, you can send a handshake message to a server and have the server return Hello World back to you. But remember, creativity counts! And of course, prizes will be awarded. Beginners and expert developers welcome.&lt;/p&gt;
&lt;h2&gt;What is the HPE Composable Fabric Manager API? Learn all about it at the Hack Shack Discovery Workshop&lt;/h2&gt;
&lt;p&gt;Interested in learning about HPE’s latest technology acquisition? Bring your laptop and come learn about the HPE &lt;a href=&quot;https://www.hpe.com/us/en/integrated-systems/composable-fabric.html&quot;&gt;Composable Fabric&lt;/a&gt; Manager and its REST API. Access the Swagger-based API documentation from the management appliance, explore the API documentation, and execute API commands. You will use Python to write a script to automate network configuration and make it come to life. For beginners to experts, come prepared to code. Then apply your knowledge in the HPE Composable Fabric Manager Hackathon.&lt;/p&gt;
&lt;h2&gt;HPE Composable Fabric Manager Hackathon&lt;/h2&gt;
&lt;p&gt;Are you a networking engineer who has to track metrics and build dashboards? Think you can build a dashboard that shows the current configuration of the network view of the Composable Fabric mesh network – all in 2 hours? Come to this session in the HPE Discover Hack Shack to test your skills! All solutions are accepted, from text reports, graphical interpretations, representations of data, and anything in between. Pick your favorite language and get hacking! Prizes will be awarded to the most innovative solution; beginners and expert developers are all welcome.&lt;/p&gt;
&lt;h2&gt;Hack Shack HPE SimpliVity Workshop: Learn more about the HPE SimpliVity REST API fundamentals to build a simple automation script.&lt;/h2&gt;
&lt;p&gt;Are you an engineer interested in learning about HPE SimpliVity? Bring your laptop and join the HPE Developer Community in the HPE Discover Hack Shack backyard to learn how to interact with the HPE SimpliVity hyperconverged platform through its REST API. Explore the API description, communicate with the HPE OmniStack Virtual Controller, and finally use PowerShell (or Python) to automate the kind of HPE SimpliVity use case that may come up in your day-to-day work. For beginners to expert developers alike, come prepared to code at this session.&lt;/p&gt;
&lt;h2&gt;HPE SimpliVity Backup Status Report Hackathon&lt;/h2&gt;
&lt;p&gt;Are you an engineer eager to try your hand at creating a backup status report in under 2 hours? Enter this hack and challenge yourself on hyperconvergence. This hack will focus on creating a report that shows the status of on-going, successful, and failed backups. Leverage the granularity and simplicity of the HPE SimpliVity REST API to build out your solution. Prizes will be awarded for the most innovative hack. Beginners to expert developers are invited.&lt;/p&gt;
&lt;h2&gt;HPE SimpliVity User Scripts and Backup Hackathon&lt;/h2&gt;
&lt;p&gt;Are you an engineer tasked with producing and understanding backups? Wish there were more innovative ways to create them? This 2-hour challenge can get you started on creating that innovation. You will leverage the HPE SimpliVity REST API to build a solution for pre- and post-backup script injection to enable backup life cycle management use cases. Use your favorite language and see if your hack walks away with the winning prize! Beginners and expert developers are welcome.&lt;/p&gt;
&lt;p&gt;There is still time to register for HPE Discover Madrid 2018, &lt;a href=&quot;https://attend.hpe.com/discover2018madrid/&quot;&gt;click here&lt;/a&gt; for more info.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Using Postman with HPE SimpliVity ]]></title><description><![CDATA[Summary This article is the first in a series that describes how to use the HPE SimpliVity API. The articles are targeted at developers and…]]></description><link>https://developer.hpe.com/using-postman-with-hpe-simplivity/</link><guid isPermaLink="false">https://developer.hpe.com/using-postman-with-hpe-simplivity/</guid><pubDate>Fri, 09 Nov 2018 17:39:07 GMT</pubDate><content:encoded>&lt;h1&gt;Summary&lt;/h1&gt;
&lt;p&gt;This article is the first in a series that describes how to use the HPE SimpliVity API. The articles are targeted at developers and architects that want to understand the REST API’s capabilities and are interested in learning how to build automation and integration with HPE SimpliVity.&lt;/p&gt;
&lt;p&gt;This article uses Postman to place HPE SimpliVity REST API calls without writing any code. Postman is a free application from Postdot Technologies. It is available for Windows, Mac, Linux and as a Chrome browser plugin. You can download and install it from &lt;code&gt;https://www.getpostman.com/postman&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;At the end of this article, you should be able to use Postman to create variables to store the values that you need to use in other Postman exercises and to retrieve the set of virtual machines managed by an HPE SimpliVity federation.&lt;/p&gt;
&lt;p&gt;This article assumes that you understand how to navigate and use the Postman interface. If you are totally new to Postman, I recommend that you watch a few videos on creating variables in Postman.&lt;/p&gt;
&lt;h1&gt;Let&apos;s get started!&lt;/h1&gt;
&lt;p&gt;To begin, I need the credentials to access the HPE SimpliVity virtual controller through TCP/IP. You can use the vCenter local SSO administrator for the vCenter used in the HPE SimpliVity federation. I use my domain username, which has the &lt;code&gt;username@domainname&lt;/code&gt; format.&lt;/p&gt;
&lt;p&gt;I also need to disable Postman’s certificate validation. To turn the feature off, I click the icon on the Postman ribbon. (You can skip this step if the SSL certificate used by HPE SimpliVity is signed by a trusted root certificate authority.)&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/11/svt-turn-off-ssl-cert-1542122449127.png&quot; alt=&quot;svt turn off ssl cert&quot;&gt;&lt;/p&gt;
&lt;h1&gt;Set up environment variables&lt;/h1&gt;
&lt;p&gt;I use variables to store the values that I know I need to use in my Postman requests. First, I declare the variables that I use as input to REST API requests:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;OVC_IP&lt;/code&gt;: Stores the virtual controller IP address.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;VcenterAdminName&lt;/code&gt;:  Stores my vCenter user name.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;vCenterAdminPasswd&lt;/code&gt;: Stores my vCenter password.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Next, I declare a set of variables to store the results of my REST API requests:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;token&lt;/code&gt;: Stores the access token that is returned from an authenticated session. (We’ll learn how to authenticate in the next article.) I&apos;ll need this for many requests, so storing it in a variable makes it easy to transfer it among the requests.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;TASKID&lt;/code&gt;: Stores HPE SimpliVity task IDs.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;VMID&lt;/code&gt;: Stores the virtual machine ID.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For example:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/11/svt-postman-variable-definitions-1542122682743.png&quot; alt=&quot;svt postman variable definitions&quot;&gt;&lt;/p&gt;
&lt;h1&gt;Define the URI, header, and body parameters to request an access token&lt;/h1&gt;
&lt;p&gt;For my first REST request, I use the POST action to send the access token request. I need to define the URI, header, and body parameters for it.&lt;/p&gt;
&lt;p&gt;The URI to request an access token has this syntax:
&lt;code&gt;https://simplivity@&amp;#x3C;OVC_IP&gt;/api/oauth/token&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;I use the &lt;code&gt;OVC_IP&lt;/code&gt; variable that I defined earlier.&lt;/p&gt;
&lt;p&gt;I need to send the &lt;code&gt;Accept&lt;/code&gt; and the &lt;code&gt;Content-Type&lt;/code&gt; headers with my request, so I define two key-value pairs for them. For example:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/11/svt-url-body-params-1542122702407.png&quot; alt=&quot;svt url body params&quot;&gt;&lt;/p&gt;
&lt;p&gt;Finally, I define three key-value pairs for the Body of the request: &lt;code&gt;grant_type&lt;/code&gt;, &lt;code&gt;username&lt;/code&gt;, and &lt;code&gt;password&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;In Postman, I select &lt;code&gt;form-data&lt;/code&gt; to store these key-value pairs. I assign the &lt;code&gt;VcenterAdminName&lt;/code&gt; and &lt;code&gt;vCenterAdminPasswd&lt;/code&gt; variables to the &lt;code&gt;username&lt;/code&gt; and &lt;code&gt;password&lt;/code&gt; keys.&lt;/p&gt;
&lt;p&gt;For example:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/11/svt-formdata-body-params-1542122691875.png&quot; alt=&quot;svt formdata body params&quot;&gt;&lt;/p&gt;
&lt;p&gt;I then move to the Tests tab, where I define a script that intercepts the return value from the REST request. It parses the response for the access token (&lt;code&gt;data.access_token&lt;/code&gt;) and stores it in the &lt;code&gt;token&lt;/code&gt; variable.&lt;/p&gt;
&lt;p&gt;For example:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/11/svt-test-script-postman-1542124697949.png&quot; alt=&quot;svt test script postman&quot;&gt;&lt;/p&gt;
&lt;p&gt;Assuming that you have entered the &lt;code&gt;OVC_IP&lt;/code&gt;, &lt;code&gt;username&lt;/code&gt;, &lt;code&gt;password&lt;/code&gt; and the key-value pairs, you are ready to send a REST request by clicking &lt;strong&gt;Send&lt;/strong&gt;.
That&apos;s it! You just made your first HPE SimpliVity REST request.&lt;/p&gt;
&lt;p&gt;This request returns the access token that you need to use on many HPE SimpliVity requests. The access token value from my example is shown highlighted in yellow in the following example.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/11/svt-request-access-token-1542124824497.png&quot; alt=&quot;svt request access token&quot;&gt;&lt;/p&gt;
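&lt;p&gt;If you want to reproduce this token request outside of Postman, here is a minimal Python sketch using the &lt;code&gt;requests&lt;/code&gt; library. It mirrors the Postman variables above; the IP address, credentials, and the &lt;code&gt;Accept&lt;/code&gt; header value are assumptions, so check the API reference for the exact values your federation expects.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Minimal sketch: request an HPE SimpliVity access token with Python instead of Postman.
import requests
import urllib3

urllib3.disable_warnings()  # same effect as turning off certificate validation in Postman

OVC_IP = &quot;192.168.1.100&quot;                  # assumption: your virtual controller IP
USERNAME = &quot;administrator@vsphere.local&quot;  # vCenter user in username@domainname format
PASSWORD = &quot;password&quot;

resp = requests.post(
    f&quot;https://{OVC_IP}/api/oauth/token&quot;,
    auth=(&quot;simplivity&quot;, &quot;&quot;),              # the simplivity@ user info from the URI becomes Basic auth
    data={&quot;grant_type&quot;: &quot;password&quot;, &quot;username&quot;: USERNAME, &quot;password&quot;: PASSWORD},
    headers={&quot;Accept&quot;: &quot;application/vnd.simplivity.v1+json&quot;},  # assumed Accept value
    verify=False,
)
resp.raise_for_status()
token = resp.json()[&quot;access_token&quot;]       # the same value Postman stores in the token variable
print(token)
&lt;/code&gt;&lt;/pre&gt;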
&lt;p&gt;You can now validate that the token’s current value is the same as the value in the body of the POST result. It’s so easy, isn’t it?&lt;/p&gt;
&lt;h1&gt;This is easy, let&apos;s take it up a notch!&lt;/h1&gt;
&lt;p&gt;Now that I have the access token, I can perform other REST operations using the same variables that I built before.&lt;/p&gt;
&lt;p&gt;Let’s give it a try and get all of the virtual machines managed by HPE SimpliVity.&lt;/p&gt;
&lt;p&gt;Start by creating a GET operation in a new Request tab of Postman.  Just like before, we need to define the URI for the GET request. It has this syntax:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;https://&amp;#x3C;OVC_IP&gt;/api/virtual_machines&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Since I have defined the variable OVC_IP, let’s use it for our next adventure.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/11/svt-get-all-vms-url-1542125040577.png&quot; alt=&quot;svt get all vms url&quot;&gt;&lt;/p&gt;
&lt;p&gt;Enter the parameters for this request by clicking on the Params tab.&lt;/p&gt;
&lt;p&gt;Provide the &lt;code&gt;limit&lt;/code&gt;, &lt;code&gt;offset&lt;/code&gt;, and &lt;code&gt;show_optional_fields&lt;/code&gt;, as defined in the specification for the REST API GET VM call. You can find the definition here:
&lt;a href=&quot;https://developer.hpe.com/api/simplivity/endpoint?&amp;#x26;path=%2Fvirtual_machines&quot;&gt;https://developer.hpe.com/api/simplivity/endpoint?&amp;#x26;path=%2Fvirtual_machines&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/11/svt-get-all-vms-params-1542125124954.png&quot; alt=&quot;svt get all vms params&quot;&gt;&lt;/p&gt;
&lt;p&gt;I also need to provide the access token that I acquired in the previous POST request.  Because I used variables, there is no need to worry about copying the 16+ character hexadecimal value without error. I just enter the Authorization value in the form of &lt;code&gt;Bearer {{token}}&lt;/code&gt; then click &lt;em&gt;Send&lt;/em&gt;, and let Postman do the magic.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/11/svt-get-all-vms-headers-1542125232197.png&quot; alt=&quot;svt get all vms headers&quot;&gt;&lt;/p&gt;
&lt;p&gt;The request returns the list of the virtual machines that are managed by HPE SimpliVity. You can see the offset value and the limit value for validation of the parameters that I entered for this request.&lt;/p&gt;
&lt;p&gt;I highlighted the VM ID below because it is one of the fields that is useful for many other requests.&lt;/p&gt;
&lt;p&gt;You can use the same strategy that I did above by entering the VM ID into another variable and so on.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/11/svt-offset-limit-1542125309822.png&quot; alt=&quot;svt offset limit&quot;&gt;&lt;/p&gt;
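&lt;p&gt;For comparison, here is the same GET request as a small Python sketch, reusing the &lt;code&gt;token&lt;/code&gt; obtained earlier. The parameter values and the response field names are assumptions based on the API reference linked above.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Minimal sketch: list the virtual machines managed by HPE SimpliVity.
import requests

resp = requests.get(
    f&quot;https://{OVC_IP}/api/virtual_machines&quot;,
    headers={
        &quot;Authorization&quot;: f&quot;Bearer {token}&quot;,
        &quot;Accept&quot;: &quot;application/vnd.simplivity.v1+json&quot;,  # assumed Accept value
    },
    params={&quot;limit&quot;: 500, &quot;offset&quot;: 0, &quot;show_optional_fields&quot;: &quot;false&quot;},
    verify=False,
)
resp.raise_for_status()
for vm in resp.json()[&quot;virtual_machines&quot;]:   # assumed response field name
    print(vm[&quot;id&quot;], vm[&quot;name&quot;])               # the VM ID is reused by many other requests
&lt;/code&gt;&lt;/pre&gt;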
&lt;p&gt;That&apos;s it!&lt;/p&gt;
&lt;h1&gt;What have I learned?&lt;/h1&gt;
&lt;p&gt;Postman is great for experimentation to develop my understanding of the REST API. Postman supports the use of variables that can be manipulated with very little effort.&lt;/p&gt;
&lt;p&gt;Do I want to write a script using Postman? Probably not. I use PowerShell as my choice for automation as you can see in the following YouTube video: &lt;a href=&quot;https://www.youtube.com/watch?v=pBnadRc1Vsw&quot;&gt;https://www.youtube.com/watch?v=pBnadRc1Vsw&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Scripting languages, such as PowerShell, provide a rich library of features that I need to perform automation.&lt;/p&gt;
&lt;p&gt;To learn more about using Python curl to obtain an access token, go to &lt;a href=&quot;https://developer.hpe.com/platform/hpe-simplivity/authenticating-against-hpe-omnistack-api&quot;&gt;https://developer.hpe.com/platform/hpe-simplivity/authenticating-against-hpe-omnistack-api&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Storage management with Redfish]]></title><description><![CDATA[Updated: December 10, 2024 Introduction Integrated Lights-Out (iLO)  is an HPE ProLiant and Synergy server-embedded technology that delivers…]]></description><link>https://developer.hpe.com/storage-management-with-redfish/</link><guid isPermaLink="false">https://developer.hpe.com/storage-management-with-redfish/</guid><pubDate>Tue, 30 Oct 2018 17:05:18 GMT</pubDate><content:encoded>&lt;style&gt; li { font-size: 27px; line-height: 33px; max-width: none; } &lt;/style&gt;
&lt;style&gt;
.warning-box {
    background-color: #fff3cd; /* Light yellow background */
    /* border: 2px solid #ffeb3b; /* Yellow border */
    color: #856404; /* Dark text color for contrast */
    padding: 20px; /* Padding inside the box */
    border-radius: 5px; /* Rounded corners */
    margin: 20px 0; /* Margin for spacing */
    width: 80%; /* Width of the rectangle */
    max-width: 600px; /* Maximum width of the rectangle */
    margin-left: auto; /* Center the rectangle horizontally */
    margin-right: auto; /* Center the rectangle horizontally */
    /* box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1); /* Optional: Adds a shadow for depth */
}

.warning-box p {
    margin: 0; /* Remove default margin from the paragraph */
    font-weight: bold; /* Bold text */
}

.warning-box a {
    color: blue; /* Change the color of the link */
    text-decoration: none; /* Remove the underline */
    /*font-style: italic; /* Make the text italic */
    /*font-weight: bold; /* Make the text bold */
        }
&lt;/style&gt;
&lt;p&gt;Updated: December 10, 2024&lt;/p&gt;
&lt;div class=&quot;warning-box&quot;&gt;
    &lt;p&gt;
    NOTE:
    &lt;/p&gt;
    &lt;p&gt;
    Due to the deprecation of the
    &lt;a href=&quot;https://servermanagementportal.ext.hpe.com/docs/redfishservices/ilos/ilo6/ilo6_adaptation/#hpe-smart-storage-model-oem-deprecated&quot; target=&quot;_blank&quot;&gt;SmartStorageConfig&lt;/a&gt; data model and the adoption of the &lt;a href=&quot;https://developer.hpe.com/blog/overview-of-the-platform-level-data-model-for-redfish%C2%AE-device-enablement-standard/&quot; target=&quot;_blank&quot;&gt;PLDM for RDE&lt;/a&gt; standard by HPE iLO and storage
    device providers, this blog post is deprecated.
    &lt;/p&gt;
    &lt;p&gt;
    Refer to the &lt;a href=&quot;https://servermanagementportal.ext.hpe.com/docs/redfishservices/ilos/supplementdocuments/storage/#storage-data-models&quot; target=&quot;_blank&quot;&gt;Storage data models&lt;/a&gt; documentation section for up to date information concerning Redfish storage management.
    &lt;/p&gt;
&lt;/div&gt;
&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Integrated Lights-Out (&lt;a href=&quot;https://www.hpe.com/info/ilo&quot; target=&quot;_blank&quot;&gt;iLO&lt;/a&gt;)  is an HPE ProLiant and Synergy server-embedded technology that delivers the core foundation for server intelligence along with out-of-band and in-band management facilities. This technology is a combination of the iLO ASIC that is part of the server board and the firmware that powers the ASIC. Out of the box, HPE iLO simplifies server setup, provides access to server health information, and enables server management at scale, including basic remote administration. Different generations of ProLiant Servers carry different versions of the iLO ASIC.&lt;/p&gt;
&lt;p&gt;In HPE ProLiant and Synergy Gen10 servers, HPE iLO 5 introduced the management of storage controllers via its graphical user interface and via the &lt;a href=&quot;https://www.dmtf.org/standards/redfish&quot; target=&quot;_blank&quot;&gt;Redfish&lt;/a&gt; RESTful API standard. Although &lt;a href=&quot;https://www.youtube.com/channel/UCIZhrIYcNh3wHLiY4ola5ew/search?query=logicaldrive&quot;  target=&quot;_blank&quot;&gt;videos&lt;/a&gt; already exist that cover the graphical user interface, I wanted to address this feature with a pure Redfish API approach, bypassing the &lt;a href=&quot;https://github.com/HewlettPackard/python-redfish-utility/releases/latest&quot; target=&quot;_blank&quot;&gt;iLOrest interface tool&lt;/a&gt; and its &lt;code&gt;SmartController&lt;/code&gt; macro commands.&lt;/p&gt;
&lt;p&gt;In this article, you start by learning how to clean up and prepare a SmartRAID (SR) storage controller to receive a configuration with one or more logical drives, using an HPE proprietary OEM process. Then, in this fresh environment, you will learn how to create a simple RAID array configuration before moving on to more complex ones.&lt;/p&gt;
&lt;h2&gt;Foreword&lt;/h2&gt;
&lt;p&gt;I&apos;ve used the &lt;a href=&quot;https://www.getpostman.com/&quot; target=&quot;_blank&quot;&gt;Postman&lt;/a&gt; API development tool to illustrate the examples. This should give you the ability to implement these raw examples using your own preferred language.&lt;/p&gt;
&lt;p&gt;The reader should have a working knowledge of HTTP &lt;a href=&quot;https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol#Request_methods&quot; target=&quot;_blank&quot;&gt;request methods&lt;/a&gt; such as &lt;code&gt;GET&lt;/code&gt;, &lt;code&gt;PUT&lt;/code&gt;, and &lt;code&gt;PATCH&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Moreover, it is assumed that the reader knows how to manage Redfish sessions as described in the &lt;a href=&quot;https://servermanagementportal.ext.hpe.com/docs/concepts/redfishauthentication/&quot; target=&quot;_blank&quot;&gt;HPE Redfish API Reference documentation&lt;/a&gt;. More hints for managing iLO sessions with Redfish can be found in this &lt;a href=&quot;https://developer.hpe.com/blog/managing-ilo-sessions-with-redfish/&quot; target=&quot;_blank&quot;&gt;article&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Storage data models&lt;/h2&gt;
&lt;p&gt;The Redfish standard defines a &lt;code&gt;storage&lt;/code&gt; data model as part of the &lt;code&gt;ComputerSystem&lt;/code&gt; resource type under the &lt;code&gt;/redfish/v1/Systems/{item}/&lt;/code&gt; URI. With the implementation of new standards, like the Platform Level Data Model for Redfish Device Enablement (&lt;a href=&quot;https://developer.hpe.com/blog/overview-of-the-platform-level-data-model-for-redfish%C2%AE-device-enablement-standard/&quot; target=&quot;_blank&quot;&gt;PLDM for RDE&lt;/a&gt;), this model is fully operational in terms of read, write and event operations against modern external provider devices. Starting at Gen10 and Gen10 Plus (firmware 2.30+), HPE servers can take advantage of it. Note that it is the only storage data model implemented in Gen11 servers and beyond.&lt;/p&gt;
&lt;p&gt;HPE initially developed the &lt;code&gt;SmartStorage&lt;/code&gt; Redfish OEM data model for HPE ProLiant DL580 Gen8 servers, before any Redfish specification was published. This model supports inventory (GET) and monitoring (Events) features only.&lt;/p&gt;
&lt;p&gt;In HPE ProLiant Gen10, the &lt;code&gt;SmartStorageConfig&lt;/code&gt; resource was added to support the system&apos;s configuration. This OEM model uses a proprietary API that only supports the SmartRAID (SR) line of HPE storage controllers.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: The HPE OEM &lt;code&gt;SmartStorageConfig&lt;/code&gt; data model is removed in HPE Gen11 servers.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: This article focuses only on the HPE OEM &lt;code&gt;SmartStorageConfig&lt;/code&gt; data model.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;As stated earlier, HPE Gen10 and Gen10 Plus servers implement the description and the configuration of their Smart Storage SR devices respectively in the &lt;code&gt;SmartStorage&lt;/code&gt; and &lt;code&gt;SmartStorageConfigX&lt;/code&gt; (&lt;strong&gt;X&lt;/strong&gt; being an ID number) entry points of the same &lt;code&gt;ComputerSystem&lt;/code&gt; type. When multiple controllers are present in a system, a &lt;code&gt;SmartStorage/ArrayControllers/{item}/&lt;/code&gt; entry point is present for each controller description and a &lt;code&gt;SmartStorageConfigX&lt;/code&gt; entry point exists for each controller configuration. Note that &lt;code&gt;{item}&lt;/code&gt; and &lt;strong&gt;&lt;code&gt;X&lt;/code&gt;&lt;/strong&gt; are not correlated.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/0-smartstoragearraysentrypoints.png&quot; alt=&quot;HPE&amp;#x27;s legacy SmartStorage URIs&quot; title=&quot;HPE&amp;#x27;s legacy SmartStorage URIs&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/0-smartstorageconfigentrypoints.png&quot; alt=&quot;HPE&amp;#x27;s SmartStorageConfig URIs&quot; title=&quot;HPE&amp;#x27;s SmartStorageConfig URIs&quot;&gt;&lt;/p&gt;
&lt;p&gt;Since the infrastructure used for this article contains only HPE storage devices and the iLO 5 firmware version is below 2.30, the &lt;code&gt;Storage&lt;/code&gt; DMTF URI is not populated. Only the &lt;code&gt;SmartStorage&lt;/code&gt; and &lt;code&gt;SmartStorageConfigX&lt;/code&gt; locations are populated. The following picture shows the URIs of four disk drives in the first controller of our infrastructure. Their properties can be obtained with a simple &lt;code&gt;GET&lt;/code&gt; request.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/1-physicaldrives.png&quot; alt=&quot;Physical Drives seen from the SmartStorage location&quot; title=&quot;Physical Drives seen from the SmartStorage location&quot;&gt;&lt;/p&gt;
&lt;p&gt;If you only need the physical location of the disks to use for your logical drives, you can &lt;code&gt;GET&lt;/code&gt; them from the &lt;code&gt;SmartStorageConfig&lt;/code&gt; location as shown below. Note that for the first controller found in the system, the &lt;strong&gt;&lt;code&gt;X&lt;/code&gt;&lt;/strong&gt; ID number is omitted.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/2-physicaldrives.png&quot; alt=&quot;Physical Drives seen from the SmartStorageConfig location&quot; title=&quot;Physical Drives seen from the SmartStorageConfig location&quot;&gt;&lt;/p&gt;
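&lt;p&gt;As an illustration, the following minimal Python sketch performs the same &lt;code&gt;GET&lt;/code&gt; with the &lt;code&gt;requests&lt;/code&gt; library. The iLO address, the credentials, the system ID (&lt;code&gt;1&lt;/code&gt;) and the &lt;code&gt;PhysicalDrives&lt;/code&gt; property name are assumptions based on the screenshots above; production code should use Redfish sessions as mentioned in the Foreword.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Minimal sketch: read the physical drives from the SmartStorageConfig location.
import requests
import urllib3

urllib3.disable_warnings()

ILO_IP = &quot;192.168.1.45&quot;               # assumption: your iLO address
AUTH = (&quot;administrator&quot;, &quot;password&quot;)  # assumption: Basic auth instead of a Redfish session

resp = requests.get(
    f&quot;https://{ILO_IP}/redfish/v1/Systems/1/SmartStorageConfig/&quot;,
    auth=AUTH,
    verify=False,
)
resp.raise_for_status()
for drive in resp.json().get(&quot;PhysicalDrives&quot;, []):   # property name as shown in the screenshot
    print(drive)
&lt;/code&gt;&lt;/pre&gt;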
&lt;h2&gt;Storage controller configuration process&lt;/h2&gt;
&lt;p&gt;As described in my previous blog, &lt;a href=&quot;https://developer.hpe.com/blog/master-the-redfish-server-states-to-improve-your-monitoring-and-manageme/&quot; target=&quot;_blank&quot;&gt;Master the Redfish Server States to improve your monitoring and management applications&lt;/a&gt;, and to be sure the configuration process happens smoothly, you should first verify that your managed systems are not at the &lt;code&gt;UEFI/System Utilities&lt;/code&gt; level. The &lt;code&gt;Off&lt;/code&gt; state is a good starting point.&lt;/p&gt;
&lt;p&gt;Modifications of BIOS and storage controller configurations are performed in two stages as explained in the &lt;a href=&quot;https://developer.hpe.com/blog/setting-bios-and-storage-controller-properties-with-redfish/&quot; target=&quot;_blank&quot;&gt;Setting Bios and Storage Controller Properties with Redfish&lt;/a&gt; article. The first stage consists of setting the required parameters in the &quot;pending settings area&quot; of the Redfish data model (&lt;code&gt;/redfish/v1/Systems/{item}/SmartStorageConfig/Settings/&lt;/code&gt;). During the next server reboot, parameters in the &quot;pending settings area&quot;, after validation, are transferred to the &quot;current configuration&quot; at &lt;code&gt;/redfish/v1/Systems/{item}/SmartStorageConfig/&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Upon reboot, and once the server has finished its Power-On Self-Test (POST), you should check carefully whether the modifications have been accepted as the current configuration.&lt;/p&gt;
&lt;h2&gt;The DataGuard property&lt;/h2&gt;
&lt;p&gt;The management of HPE Smart Storage devices requires a proper understanding of the &lt;code&gt;DataGuard&lt;/code&gt; property part of the &lt;code&gt;SmartStorageConfig&lt;/code&gt; sub-tree. The value of this attribute &quot;&lt;em&gt;indicates  whether or not data destructive actions are allowed&lt;/em&gt;&quot; as explained in the &lt;a href=&quot;https://servermanagementportal.ext.hpe.com/docs/redfishservices/ilos/ilo5/ilo5_290/ilo5_other_resourcedefns290/#dataguard&quot;&gt;API Reference documentation&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This property is set in the pending settings (&lt;code&gt;SmartStorageConfig/Settings&lt;/code&gt;) along with the directives to be performed by the Smart Storage device (i.e. Logical Volume Creation, Deletion...). During the next POST, the firmware checks its value and performs, or does not perform, the requested directives.&lt;/p&gt;
&lt;p&gt;If the value is &lt;code&gt;Strict&lt;/code&gt;, which is the default value when not changed in the pending settings, the firmware denies any destructive data action (create/delete logical drives or clear drive metadata....).&lt;/p&gt;
&lt;p&gt;If the value is set to &lt;code&gt;Disabled&lt;/code&gt;, destructive data actions are allowed. Finally, when the value is &lt;code&gt;Permissive&lt;/code&gt;, destructive data actions are allowed only on the specified objects.&lt;/p&gt;
&lt;p&gt;Refer to the &lt;a href=&quot;https://servermanagementportal.ext.hpe.com/docs/redfishservices/ilos/supplementdocuments/storage/&quot; target=&quot;_blank&quot;&gt;iLO RESTful API documentation&lt;/a&gt; for more information.&lt;/p&gt;
&lt;h2&gt;Storage controller and physical disks preparation&lt;/h2&gt;
&lt;p&gt;Imagine you just received a bunch of brand new servers or you want to re-deploy servers for a new project. In this context, you would want to completely remove the entire storage configuration, as well as all metadata stored on the physical drives, to get a clean Smart Storage subsystem.&lt;/p&gt;
&lt;p&gt;The HPE OEM storage subsystem models logical drives as a JSON collection stored in an array of objects represented like: &lt;code&gt;&quot;LogicalDrives&quot;: [{ld-1},{ld-2},...,{ld-n}]&lt;/code&gt;. Each &lt;code&gt;ld-i&lt;/code&gt; of this array contains the necessary properties to describe precisely the corresponding logical drive.&lt;/p&gt;
&lt;p&gt;The following screenshot shows the &lt;code&gt;LogicalDrives&lt;/code&gt; array containing one element and its attributes. In it, you can see that this logical drive is a RAID 0 made of a single data drive (&lt;code&gt;1I:3:1&lt;/code&gt;).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/3-logicaldrivecollection.png&quot; alt=&quot;The LogicalDrives[] array&quot; title=&quot;The LogicalDrives[] array&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Removing all logical drives&lt;/h3&gt;
&lt;p&gt;Removing all the logical drives of an HPE SR Smart Storage Controller is equivalent to removing all the elements of the &lt;code&gt;LogicalDrives&lt;/code&gt; array. In practice, you send the Redfish server a request containing an empty &lt;code&gt;LogicalDrives&lt;/code&gt; array.&lt;/p&gt;
&lt;p&gt;Since this action is data destructive, you must disable the &lt;code&gt;DataGuard&lt;/code&gt; property to make sure the firmware allows this operation during the next reboot/POST of the system.&lt;/p&gt;
&lt;p&gt;The next screenshot contains all the necessary information to complete this remove operation:  Perform an HTTP &lt;code&gt;PUT&lt;/code&gt; to the pending settings (&lt;code&gt;SmartStorageConfig/Settings/&lt;/code&gt;):&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/3-deletealllogicaldrives.png&quot; alt=&quot;Deleting all HPE Logical drives&quot; title=&quot;Deleting all HPE Logical drives&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Clearing disk drives configuration metadata&lt;/h3&gt;
&lt;p&gt;HPE Smart Array RAID uses a reserved area at the end of each physical drive to store information about the logical drive configuration they are part of.  When the &lt;a href=&quot;https://www.hpe.com/us/en/product-catalog/detail/pip.5409020.html&quot; target=&quot;_blank&quot;&gt;Smart Storage Administrator&lt;/a&gt; (SSA) application or Redfish is used to delete a logical drive, the metadata is cleaned up.  However, if drives are moved around, there may be leftover metadata on the drives.  The controller may show a failed logical drive or keep the drive from being presented to the OS.  The &lt;code&gt;ClearConfigurationMetadata&lt;/code&gt; action with the &lt;code&gt;DataGuard&lt;/code&gt; property disabled using the &lt;code&gt;PATCH&lt;/code&gt; method can be used to remedy this problem.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/4-clearconfigmetadata.png&quot; alt=&quot;Clear disk drives configuration metadata&quot;&gt;&lt;/p&gt;
&lt;p&gt;If you want to perform a single request removing all the logical drives and clearing the metadata, you have to perform a &lt;code&gt;PUT&lt;/code&gt; of the pending settings with the following body:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
    &quot;Actions&quot;: [
        {
            &quot;Action&quot;: &quot;ClearConfigurationMetadata&quot;
        }
        ],
    &quot;LogicalDrives&quot;: [],
    &quot;DataGuard&quot;: &quot;Disabled&quot;
}
&lt;/code&gt;&lt;/pre&gt;
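&lt;p&gt;Outside of Postman, this is what sending that body could look like in Python: a minimal sketch that reuses the &lt;code&gt;ILO_IP&lt;/code&gt; and &lt;code&gt;AUTH&lt;/code&gt; assumptions from the earlier sketch and assumes system ID &lt;code&gt;1&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Minimal sketch: PUT the clean-up body above to the pending settings area.
import requests

body = {
    &quot;Actions&quot;: [{&quot;Action&quot;: &quot;ClearConfigurationMetadata&quot;}],
    &quot;LogicalDrives&quot;: [],
    &quot;DataGuard&quot;: &quot;Disabled&quot;,
}
resp = requests.put(
    f&quot;https://{ILO_IP}/redfish/v1/Systems/1/SmartStorageConfig/Settings/&quot;,
    json=body,
    auth=AUTH,
    verify=False,
)
print(resp.status_code, resp.json())   # reboot the server to apply the pending settings
&lt;/code&gt;&lt;/pre&gt;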
&lt;h3&gt;Removing a single/specific logical drive&lt;/h3&gt;
&lt;p&gt;To delete a specific logical drive you have to send a &lt;code&gt;PUT&lt;/code&gt; request to the &lt;code&gt;SmartStorageConfig/settings&lt;/code&gt; (pending settings). This request must contain the unique identifier of the desired logical drive as well as the &lt;code&gt;DataGuard&lt;/code&gt; property set to &lt;code&gt;Permissive&lt;/code&gt; to avoid the removal of other logical drives, if any:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/7-deletespecificlogicaldrive.png&quot; alt=&quot;Specific logical drive deletion&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Sanitizing / erasing disk drives&lt;/h3&gt;
&lt;p&gt;In addition to the removal of logical drives and the metadata cleanup of physical drives, you may want to erase / sanitize a list of physical drives. To perform this (long) operation, send the following &lt;code&gt;PATCH&lt;/code&gt; action with the list of drives to erase, separated by commas. Don&apos;t forget to disable the &lt;code&gt;DataGuard&lt;/code&gt; property as well:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/5-sanitizephysicaldrive.png&quot; alt=&quot;Sanitize physical disk drives&quot;&gt;&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;ErasePattern&lt;/code&gt; property supports the following &lt;a href=&quot;https://servermanagementportal.ext.hpe.com/docs/redfishservices/ilos/ilo5/ilo5_290/ilo5_other_resourcedefns290/#actions-array&quot;&gt;values&lt;/a&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-text&quot;&gt;    SanitizeRestrictedBlockErase
    SanitizeUnrestrictedBlockErase
    SanitizeRestrictedOverwrite
    SanitizeUnrestrictedOverwrite
    SanitizeRestrictedCryptoScramble
    SanitizeUnrestrictedCryptoScramble
    OnePass
    TwoPass
    ThreePass
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As mentioned above, the sanitize process is extremely long and you can retrieve the erase status in the disk drive properties under the &lt;code&gt;SmartStorage&lt;/code&gt; sub-tree:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/6-erasestatus.png&quot; alt=&quot;Sanitize / Erase status&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Logical drive creation&lt;/h2&gt;
&lt;p&gt;Logical drives can be created using the &lt;code&gt;PUT&lt;/code&gt; method. In some circumstances a &lt;code&gt;PATCH&lt;/code&gt; can be used. To keep things simple, I&apos;ll only use the &lt;code&gt;PUT&lt;/code&gt; method.&lt;/p&gt;
&lt;p&gt;For this section, start with the clean infrastructure generated previously to create a 100GB RAID1 logical drive with a &lt;code&gt;Roaming&lt;/code&gt; spare drive.&lt;/p&gt;
&lt;p&gt;Then, add a RAID0 Logical drive located on a single hard disk without spare drive and spanning the entire disk (300GB).&lt;/p&gt;
&lt;p&gt;Finally, add a second logical drive of 50 GB in the RAID1 array created earlier.&lt;/p&gt;
&lt;p&gt;While this scenario may not be completely realistic, the goal here is to be didactic rather than realistic.&lt;/p&gt;
&lt;h3&gt;First logical drive creation&lt;/h3&gt;
&lt;p&gt;In this clean context, you just need to send a &lt;code&gt;PUT&lt;/code&gt; request to the &lt;code&gt;SmartStorageConfig/settings&lt;/code&gt; (pending settings) with the properties of the desired logical drive as an element of the &lt;code&gt;LogicalDrives&lt;/code&gt; array and the &lt;code&gt;DataGuard&lt;/code&gt; value set to &lt;code&gt;Disabled&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;The following picture shows the entire body of the &lt;code&gt;PUT&lt;/code&gt; request with the mandatory properties in green boxes: &lt;code&gt;DataGuard&lt;/code&gt;, &lt;code&gt;DataDrives&lt;/code&gt; list, &lt;code&gt;Raid&lt;/code&gt; type, &lt;code&gt;SpareDrives&lt;/code&gt; list and &lt;code&gt;SpareRebuildMode&lt;/code&gt;. Other properties will be defaulted if not supplied.&lt;/p&gt;
&lt;p&gt;A &lt;code&gt;200&lt;/code&gt; return status with a &lt;code&gt;SystemResetRequired&lt;/code&gt; &quot;Error&quot; message indicates that the remote SmartArray pending settings have been changed.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/8-logical-raid1spare-creation.png&quot; alt=&quot;RAID1 with Spare drive creation&quot;&gt;&lt;/p&gt;
&lt;p&gt;Upon reboot of the server, you should verify that the current configuration contains only the &lt;code&gt;Success&lt;/code&gt; message. This confirms that the configuration was applied successfully.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/9-runningzoneaftercreation.png&quot; alt=&quot;Successful Logical Drive creation&quot;&gt;&lt;/p&gt;
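&lt;p&gt;For reference, a minimal Python sketch of the same &lt;code&gt;PUT&lt;/code&gt; request is shown below. The mandatory properties correspond to the green boxes in the screenshot; the &lt;code&gt;LogicalDriveName&lt;/code&gt;, &lt;code&gt;CapacityGiB&lt;/code&gt;, &lt;code&gt;Raid&lt;/code&gt; value, and drive locations are assumptions to adapt to your own system.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Minimal sketch: create a 100GB RAID1 logical drive with a roaming spare.
import requests

body = {
    &quot;DataGuard&quot;: &quot;Disabled&quot;,
    &quot;LogicalDrives&quot;: [
        {
            &quot;LogicalDriveName&quot;: &quot;RAID1-1-100&quot;,   # assumed name
            &quot;Raid&quot;: &quot;Raid1&quot;,                     # assumed enumeration value
            &quot;CapacityGiB&quot;: 100,                  # assumed property name for the 100GB size
            &quot;DataDrives&quot;: [&quot;1I:3:1&quot;, &quot;1I:3:2&quot;],  # assumption: adjust to your drive locations
            &quot;SpareDrives&quot;: [&quot;1I:3:3&quot;],
            &quot;SpareRebuildMode&quot;: &quot;Roaming&quot;,
        }
    ],
}
resp = requests.put(
    f&quot;https://{ILO_IP}/redfish/v1/Systems/1/SmartStorageConfig/Settings/&quot;,
    json=body,
    auth=AUTH,
    verify=False,
)
print(resp.status_code)   # 200 with SystemResetRequired means the pending settings were accepted
&lt;/code&gt;&lt;/pre&gt;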
&lt;h3&gt;Creating more logical drives&lt;/h3&gt;
&lt;p&gt;Now, I would like to show you how to add a logical drive without altering the existing configuration. This operation could be written as &quot;&lt;code&gt;=+&lt;/code&gt;&quot; in high-level programming languages. I will use this analogy to build the body of the corresponding &lt;code&gt;PUT&lt;/code&gt; request.&lt;/p&gt;
&lt;p&gt;The &quot;&lt;code&gt;=&lt;/code&gt;&quot; portion of this process is performed by copying the entire &lt;code&gt;SmartStorageConfig&lt;/code&gt; current configuration into the body of a &lt;code&gt;PUT&lt;/code&gt; request. The remote iLO Redfish server is smart enough to keep only needed properties.&lt;/p&gt;
&lt;p&gt;The &quot;&lt;code&gt;+&lt;/code&gt;&quot; part consists of adding the desired logical drive into the &lt;code&gt;LogicalDrives&lt;/code&gt; array and changing the &lt;code&gt;DataGuard&lt;/code&gt; value.&lt;/p&gt;
&lt;p&gt;The body of this &lt;code&gt;PUT&lt;/code&gt; request is now ready to be sent to the &lt;code&gt;SmartStorageConfig/settings&lt;/code&gt; (pending settings) of the managed iLO.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/11-addlogicaldrive.png&quot; alt=&quot;Add a logical drive in a separate array&quot;&gt;&lt;/p&gt;
&lt;p&gt;As always, you must check the return status, reboot if it is &lt;code&gt;Ok&lt;/code&gt;, and verify that the current configuration contains only a single &lt;code&gt;Success&lt;/code&gt; message to be sure that your modification was applied.&lt;/p&gt;
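&lt;p&gt;The whole &quot;&lt;code&gt;=+&lt;/code&gt;&quot; pattern can be sketched in a few lines of Python. This is only an illustration of the read-modify-write idea under the same assumptions as the previous sketches; the new drive&apos;s name and location are placeholders.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Minimal sketch of the &quot;=+&quot; pattern: copy the current configuration, add a drive, PUT it back.
import requests

# &quot;=&quot;: read the current configuration
current = requests.get(
    f&quot;https://{ILO_IP}/redfish/v1/Systems/1/SmartStorageConfig/&quot;,
    auth=AUTH, verify=False,
).json()

# &quot;+&quot;: append the new RAID0 logical drive and allow the destructive change
current[&quot;LogicalDrives&quot;].append({
    &quot;LogicalDriveName&quot;: &quot;RAID0-300&quot;,   # assumed name for the single-disk 300GB drive
    &quot;Raid&quot;: &quot;Raid0&quot;,                   # assumed enumeration value
    &quot;DataDrives&quot;: [&quot;1I:3:4&quot;],          # assumption: the unused disk, no spare drives
})
current[&quot;DataGuard&quot;] = &quot;Disabled&quot;

# PUT the whole body back to the pending settings; iLO keeps only the properties it needs
resp = requests.put(
    f&quot;https://{ILO_IP}/redfish/v1/Systems/1/SmartStorageConfig/Settings/&quot;,
    json=current, auth=AUTH, verify=False,
)
print(resp.status_code)
&lt;/code&gt;&lt;/pre&gt;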
&lt;h3&gt;Adding a logical drive in an existing disk array&lt;/h3&gt;
&lt;p&gt;This last exercise consists of adding a RAID0 logical drive of 50GB in the first storage array that was created earlier. This array has a capacity of 300GB, and only 100GB is consumed by logical drive &lt;code&gt;RAID1-1-100&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;As I showed you in the previous section you have to perform a &quot;&lt;code&gt;=+&lt;/code&gt;&quot; operation including a modification of the &lt;code&gt;DataGuard&lt;/code&gt; property and the addition of a logical drive.&lt;/p&gt;
&lt;p&gt;The following screenshot shows the characteristics of the added RAID0 logical drive. It mentions its size as well as the &lt;code&gt;DataDrives&lt;/code&gt; list and the spare drive used in the storage array created  for the RAID1 logical drive in my first example. This ensures the creation of the desired logical drive in the right place.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/12-addlogicaldrivetoexistingarray.png&quot; alt=&quot;Adding a logical drive in an existing array&quot; title=&quot;Adding a logical drive in an existing array&quot;&gt;&lt;/p&gt;
&lt;p&gt;After reboot, the requested RAID0 logical drive is visible in the current configuration. Note that the &lt;code&gt;SpareRebuildMode&lt;/code&gt; has been automatically adjusted to &lt;code&gt;Dedicated&lt;/code&gt; since &lt;code&gt;Roaming&lt;/code&gt; is not a valid value anymore.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/img/13-runningzoneafteraddition.png&quot; alt=&quot;Current configuration after the addition of logical drive in an existing array&quot; title=&quot;Current configuration after the addition of logical drive in an existing array&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Redfish is a powerful, open standard for managing systems. With a RESTful interface, it is designed to leverage existing internet standards and tool chains, making it usable by amateurs and professionals alike. The ability to manage HPE Smart Storage with Redfish is a major step forward toward full control of HPE Gen10 and future servers using a single RESTful API.&lt;/p&gt;
&lt;p&gt;Make sure you continue to follow my blog posts on HPE DEV for more hints on working with Redfish in HPE environments. You can connect with me in &lt;a href=&quot;https://hpedev.slack.com/&quot; target=&quot;_blank&quot;&gt;the Redfish channel on Slack&lt;/a&gt; for specific questions.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Building Partnerships - Newsletter]]></title><link>https://developer.hpe.com/2018-October-29/</link><guid isPermaLink="false">https://developer.hpe.com/2018-October-29/</guid><pubDate>Mon, 29 Oct 2018 05:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Accessing the HPE OneView Global Dashboard API]]></title><description><![CDATA[By Brad Stanley Software Engineer working on Global Dashboard## Welcome
With OneView Global Dashboard 1.6, we are happy to introduce its…]]></description><link>https://developer.hpe.com/accessing-the-hpe-oneview-global-dashboard-api/</link><guid isPermaLink="false">https://developer.hpe.com/accessing-the-hpe-oneview-global-dashboard-api/</guid><pubDate>Thu, 11 Oct 2018 15:14:06 GMT</pubDate><content:encoded>&lt;p&gt;By Brad Stanley&lt;/p&gt;
&lt;p&gt;Software Engineer working on Global Dashboard&lt;/p&gt;
&lt;h2&gt;Welcome&lt;/h2&gt;
&lt;p&gt;With OneView Global Dashboard 1.6, we are happy to introduce its REST API! This means that you can query all of the appliances that have been added to Global Dashboard via the command line, and even via scripts if you are ambitious enough. There will be more on that later. In addition to querying the appliances, you can also add and delete appliances with our REST API.&lt;/p&gt;
&lt;h3&gt;REST API?&lt;/h3&gt;
&lt;p&gt;First, HPE OneView has some great blogs about its REST API, along with some overview information about how to use it.&lt;/p&gt;
&lt;p&gt;Check out &lt;a href=&quot;/blog/first-step-with-programming-the-hpe-composable-api&quot;&gt;this blog&lt;/a&gt; for a quick introduction to REST APIs, which includes information about tools like POSTman. And &lt;a href=&quot;/blog/curling-through-the-oneview-api&quot;&gt;here&lt;/a&gt; you could read a blog about cURL syntax.&lt;/p&gt;
&lt;h3&gt;Hackathon&lt;/h3&gt;
&lt;p&gt;Recently, we had a hackathon at work. This gave a few developers a chance to play with our REST API.  It is possible that offers of free doughnuts and pizza also enticed the developers, but hopefully, it was primarily the opportunity to create something new.&lt;/p&gt;
&lt;p&gt;In the script we created, a few different things would happen.&lt;/p&gt;
&lt;p&gt;First, a OneView appliance was added to a Global Dashboard.&lt;/p&gt;
&lt;p&gt;Then, some summary data was obtained. This was done by using the endpoints for the various resources, e.g. enclosures, server profiles, etc. The most interesting data came from resource alerts. Global Dashboard collects all of the critical alerts from the appliances it monitors. When there are enough critical alerts, you can begin to see the same alerts from different HPE OneView appliances. Using the REST API and some simple scripting, the script would print out how many instances of each alert there are. This could be an extremely useful tool for anyone troubleshooting at a data center.&lt;/p&gt;
&lt;p&gt;Let’s say someone has the same exact alert on X different HPE OneView appliances. The script could alert you to which HPE OneView appliances have that same alert, so that the solution could then be applied to quickly address those X alerts. That saves whoever is troubleshooting valuable time.&lt;/p&gt;
&lt;p&gt;Beyond that, you could build an interactive script that would let you type in notes. With that, when a fix is found, you could type in what that fix is and associate it with the critical alert. Then, the next time that alert rears its ugly head, your beloved script could give you a head start by informing you how you solved it last time. The script could grow over time and be a one-stop seek-and-destroy alerts shop.&lt;/p&gt;
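&lt;p&gt;The alert-counting idea is simple enough to sketch in a few lines of Python. This assumes you have already fetched the alerts as a list of dictionaries (see the REST calls below); the &lt;code&gt;description&lt;/code&gt; field name is an assumption.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Minimal sketch: count how many times each critical alert appears across appliances.
from collections import Counter

def count_duplicate_alerts(alerts):
    # alerts: list of alert dictionaries returned by the resource-alerts endpoint
    counts = Counter(alert.get(&quot;description&quot;, &quot;unknown&quot;) for alert in alerts)
    for description, count in counts.most_common():
        if count &gt; 1:
            print(f&quot;{count} occurrences: {description}&quot;)
&lt;/code&gt;&lt;/pre&gt;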
&lt;h3&gt;Using REST API&lt;/h3&gt;
&lt;p&gt;There are two key steps in the above example: adding an appliance, and then querying the resource alerts endpoint.&lt;/p&gt;
&lt;p&gt;To add an appliance there are a few steps, and this is where it can be helpful to use a tool like POSTman. A Python sketch of the same flow follows the list below.&lt;/p&gt;
&lt;h4&gt;&lt;strong&gt;Add an appliance&lt;/strong&gt;&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;POST https://&amp;#x3C;Global Dashboard IP&gt;/rest/login-sessions&lt;/code&gt;
&lt;ul&gt;
&lt;li&gt;Look at your Global Dashboard’s API docs for more information: &lt;code&gt;https://&amp;#x3C;Global Dashboard IP&gt;/apidoc/#tag/Login-Sessions&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;For Global Dashboard versions 2.1 and earlier, this will need to be done with x-api-version having a value of 2&lt;/li&gt;
&lt;li&gt;And content-type will need to be specified with application/json&lt;/li&gt;
&lt;li&gt;If you look at your Global Dashboard’s API docs, you’ll see under Header Parameters that these two fields are required&lt;/li&gt;
&lt;li&gt;You’ll also notice a required Request Body, which is comprised of your userName, password and optionally an authLoginDomain
&lt;ul&gt;
&lt;li&gt;An example of this can be seen at the same API doc link that is above&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Send the POST and you’ll get back a response that includes a token&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;code&gt;GET https://&amp;#x3C;Global Dashboard IP&gt;/rest/certificates/https/remote/&amp;#x3C;HPE OneView IP&gt;&lt;/code&gt;
&lt;ul&gt;
&lt;li&gt;Look at your Global Dashboard’s API docs for more information: &lt;code&gt;https://&amp;#x3C;Global Dashboard IP&gt;/apidoc/#tag/Certificates&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Again, pay attention to the required parameters, one of them is auth which is the token you retrieved from the POST call&lt;/li&gt;
&lt;li&gt;This will return a large body of information, which will be needed for the next call&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;code&gt;POST https://&amp;#x3C;Global Dashboard IP&gt;/rest/certificates/servers&lt;/code&gt;
&lt;ul&gt;
&lt;li&gt;Look at your Global Dashboard’s API docs for more information: &lt;code&gt;https://&amp;#x3C;Global Dashboard IP&gt;/apidoc/#tag/Certificates&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;The request body has a lot of information mentioned in the API docs, but it is simple: copy the whole body that was returned from the previous call, and paste that into the body of this call&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;code&gt;POST https://&amp;#x3C;Global Dashboard IP&gt;/rest/appliances&lt;/code&gt;
&lt;ul&gt;
&lt;li&gt;Look at your Global Dashboard’s API docs for more information: &lt;code&gt;https://&amp;#x3C;Global Dashboard IP&gt;/apidoc/#tag/Appliances&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;The POST to /certificates/servers added the HPE OneView certificate to Global Dashboard, which enables the HPE OneView to now be added&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
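&lt;p&gt;Here is a minimal Python sketch of the add-appliance flow above. The endpoints, the &lt;code&gt;x-api-version&lt;/code&gt; and &lt;code&gt;auth&lt;/code&gt; headers, and the &lt;code&gt;userName&lt;/code&gt;/&lt;code&gt;password&lt;/code&gt; body fields come from the steps listed; the IP addresses and the &lt;code&gt;token&lt;/code&gt; response field name are assumptions, so confirm them against your Global Dashboard API docs.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Minimal sketch of the add-appliance flow (check your apidoc for the exact field names).
import requests
import urllib3

urllib3.disable_warnings()

GD_IP = &quot;192.168.1.10&quot;    # assumption: your Global Dashboard address
OV_IP = &quot;192.168.1.20&quot;    # assumption: the HPE OneView appliance to add
HEADERS = {&quot;x-api-version&quot;: &quot;2&quot;, &quot;content-type&quot;: &quot;application/json&quot;}

# 1. Log in and grab the token from the response
login = requests.post(f&quot;https://{GD_IP}/rest/login-sessions&quot;, headers=HEADERS,
                      json={&quot;userName&quot;: &quot;administrator&quot;, &quot;password&quot;: &quot;secret&quot;}, verify=False)
token = login.json()[&quot;token&quot;]              # assumed response field name
auth_headers = dict(HEADERS, auth=token)   # the required auth header carries the token

# 2. Fetch the OneView appliance certificate...
cert = requests.get(f&quot;https://{GD_IP}/rest/certificates/https/remote/{OV_IP}&quot;,
                    headers=auth_headers, verify=False)

# 3. ...and POST the whole returned body back to trust it
requests.post(f&quot;https://{GD_IP}/rest/certificates/servers&quot;, headers=auth_headers,
              json=cert.json(), verify=False)

# 4. Finally, POST to /rest/appliances to add the OneView appliance; the required
#    request body fields are described in the Appliances section of the API docs.
&lt;/code&gt;&lt;/pre&gt;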
&lt;h4&gt;&lt;strong&gt;Query resource alerts&lt;/strong&gt;&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;GET https://&amp;#x3C;Global Dashboard IP&gt;/rest/resource-alerts&lt;/code&gt;
&lt;ul&gt;
&lt;li&gt;Look at your Global Dashboard’s API docs for more information: &lt;code&gt;https://&amp;#x3C;Global Dashboard IP&gt;/apidoc/#tag/Resource-Alerts&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;This will return at most one page of alerts, which is 25
&lt;ul&gt;
&lt;li&gt;In order to get more alerts, append &lt;code&gt;?count=-1&lt;/code&gt; to your query so it will look like: &lt;code&gt;https://&amp;#x3C;Global Dashboard IP&gt;/rest/resource-alerts?count=-1&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;That will return up to 500 alerts, and if you have more than 500 alerts you would use a query like the following to get more (see the Python sketch after this list)
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;https://&amp;#x3C;Global Dashboard IP&gt;/rest/resource-alerts?count=500&amp;#x26;start=500&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;This endpoint is unique because it also allows the user to get back data in a CSV format&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
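&lt;p&gt;A small Python sketch of the paging logic described above might look like this. It reuses &lt;code&gt;GD_IP&lt;/code&gt; and &lt;code&gt;auth_headers&lt;/code&gt; from the previous sketch, and the &lt;code&gt;members&lt;/code&gt; field name for the returned alert list is an assumption.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Minimal sketch: pull all resource alerts, 500 at a time, using count and start.
import requests

def get_all_resource_alerts():
    alerts, start = [], 0
    while True:
        resp = requests.get(
            f&quot;https://{GD_IP}/rest/resource-alerts&quot;,
            headers=auth_headers,
            params={&quot;count&quot;: 500, &quot;start&quot;: start},
            verify=False,
        )
        resp.raise_for_status()
        page = resp.json().get(&quot;members&quot;, [])   # assumed field name for the alert list
        alerts.extend(page)
        if len(page) != 500:                    # a partial or empty page means we are done
            return alerts
        start += 500
&lt;/code&gt;&lt;/pre&gt;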
&lt;h2&gt;Until Next Time&lt;/h2&gt;
&lt;p&gt;Throughout the API docs, which are accessible from your Global Dashboard, there are a number of different endpoints and information about required parameters and how to use them. Keep an eye on this blog for more about our REST API and other exciting features from OneView Global Dashboard.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Infrastructure as code with HPE OneView and Ansible by Red Hat]]></title><description><![CDATA[Infrastructure as code with HPE OneView and Ansible by Red Hat One of the ways DevOps practitioners achieve agility is by provisioning the…]]></description><link>https://developer.hpe.com/infrastructure-as-code-with-hpe-oneview-and-ansible-by-red-hat/</link><guid isPermaLink="false">https://developer.hpe.com/infrastructure-as-code-with-hpe-oneview-and-ansible-by-red-hat/</guid><pubDate>Wed, 03 Oct 2018 03:40:59 GMT</pubDate><content:encoded>&lt;h1&gt;Infrastructure as code with HPE OneView and Ansible by Red Hat&lt;/h1&gt;
&lt;p&gt;One of the ways DevOps practitioners achieve agility is by provisioning the infrastructure at the same time as the application or service. They use automation to provision infrastructure using the same scripts to deploy exact copies of development, test, and production environments. These techniques work well in cloud and virtualized environments, but often hit a wall when applied to physical infrastructure.
&lt;a href=&quot;https://hpe.com/info/oneview&quot;&gt;HPE OneView&lt;/a&gt; and &lt;a href=&quot;https://www.ansible.com/&quot;&gt;Ansible by Red Hat&lt;/a&gt; enable physical infrastructure as code, bringing cloud like agility to the private data center.&lt;/p&gt;
&lt;p&gt;The &lt;a href=&quot;https://github.com/HewlettPackard/oneview-ansible-samples/blob/master/infrastructure-as-code/infrastructure-as-code.md&quot;&gt;whitepaper&lt;/a&gt; &lt;em&gt;Infrastructure as code with HPE OneView and Ansible by Red Hat&lt;/em&gt; and accompanying &lt;a href=&quot;https://github.com/HewlettPackard/oneview-ansible-samples/tree/master/infrastructure-as-code&quot;&gt;GitHub repository&lt;/a&gt; guides customers and partners in automating the provisioning of physical infrastructure. These playbooks can be checked into source control allowing you to treat infrastructure as code.&lt;/p&gt;
&lt;p&gt;Benefits of this infrastructure as code approach include, complete data center automation, consistent reproducibility, versioning, and rollback.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Running HPE OneView Ansible modules in a container]]></title><description><![CDATA[Running HPE OneView Ansible modules in a container The HPE OneView Ansible modules are one of the most popular HPE OneView integrations.
The…]]></description><link>https://developer.hpe.com/running-hpe-oneview-ansible-modules-in-a-container/</link><guid isPermaLink="false">https://developer.hpe.com/running-hpe-oneview-ansible-modules-in-a-container/</guid><pubDate>Tue, 02 Oct 2018 20:58:25 GMT</pubDate><content:encoded>&lt;h1&gt;Running HPE OneView Ansible modules in a container&lt;/h1&gt;
&lt;p&gt;The &lt;a href=&quot;https://github.com/HewlettPackard/oneview-ansible&quot;&gt;HPE OneView Ansible modules&lt;/a&gt; are one of the most popular &lt;a href=&quot;https://hpe.com/developers/oneview&quot;&gt;HPE OneView integrations&lt;/a&gt;.
The Ansible modules automate the provisioning of physical infrastructure on-demand using software-defined templates from &lt;a href=&quot;https://hpe.com/info/oneview&quot;&gt;HPE OneView&lt;/a&gt;. A containerized version of the Ansible modules is available at the &lt;a href=&quot;https://store.docker.com/community/images/hewlettpackardenterprise/oneview-ansible-debian&quot;&gt;Docker Store&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Docker containers provide a low friction way to get developer environments and CI/CD pipelines up and running easily and consistently. The &lt;a href=&quot;https://github.com/HewlettPackard/oneview-ansible-samples/tree/master/oneview-ansible-in-container&quot;&gt;oneview-ansible-in-container&lt;/a&gt; sample has an Ansible playbook you can use to try out the &lt;code&gt;oneview-ansible-debian&lt;/code&gt; container. It also has a &lt;a href=&quot;https://github.com/HewlettPackard/oneview-ansible-samples/blob/master/oneview-ansible-in-container/oneview-ansible-in-container.md&quot;&gt;how-to guide&lt;/a&gt; that walks you through the steps needed to set up and run the playbook container.  Once you have set up an environment to run Ansible playbooks using the HPE OneView modules in a container, you can switch to other directories and try other samples.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Using the SSMC/CLI HPE 3PAR Ansible Module with Virtual Domains]]></title><description><![CDATA[Using the 3PAR SSMC/CLI & HPE 3PAR Ansible Module to Create and Manage Virtual Domains in a Multi-Tenant Scenario This guide walks through…]]></description><link>https://developer.hpe.com/using-the-ssmccli-hpe-3par-ansible-module-with-virtual-domains/</link><guid isPermaLink="false">https://developer.hpe.com/using-the-ssmccli-hpe-3par-ansible-module-with-virtual-domains/</guid><pubDate>Mon, 01 Oct 2018 18:49:29 GMT</pubDate><content:encoded>&lt;h1&gt;Using the 3PAR SSMC/CLI &amp;#x26; HPE 3PAR Ansible Module to Create and Manage Virtual Domains in a Multi-Tenant Scenario&lt;/h1&gt;
&lt;p&gt;This guide walks through the steps to create and manage Virtual Domains within a 3PAR using a mixture of the SSMC or CLI and the Ansible module for 3PAR. This guide targets those who want to set up and manage multi-tenancy on a 3PAR using configuration management tools like Ansible.&lt;/p&gt;
&lt;p&gt;You can also find this content over at:&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/hpe-storage/hpe3par-examples/tree/master/automation_tools/ansible/demo/virtual_domains&quot;&gt;https://github.com/hpe-storage/hpe3par-examples&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Assumptions&lt;a name=&quot;assumptions&quot;&gt;&lt;/a&gt;&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;HPE 3PAR configured and zoned correctly&lt;/li&gt;
&lt;li&gt;WSAPI enabled&lt;/li&gt;
&lt;li&gt;Super user access to 3PAR&lt;/li&gt;
&lt;li&gt;Ansible installed on workstation&lt;/li&gt;
&lt;li&gt;Knowledge of Ansible and playbooks&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Configuring Virtual Domains in SSMC&lt;a name=&quot;virtualdomain&quot;&gt;&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;Domains allow an administrator to create up to 1,024 domains, or spaces, within a system, where each domain is dedicated to a specific application. A subset of the system users has assigned rights over the domains. Domains can be useful in scenarios where a single system is used to manage data from several different independent applications.
For more information, refer to the &lt;strong&gt;HPE 3PAR Storeserv Storage Concepts Guide&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;The first step in setting up your 3PAR for multi-tenancy is to create a new virtual domain. Currently, the configuration of Domains/Users can only be done within the SSMC or via the 3PAR CLI (using &lt;strong&gt;createdomain &amp;#x3C;domain&gt;&lt;/strong&gt;, &lt;strong&gt;createuser &amp;#x3C;username&gt; &amp;#x3C;domainname&gt; &amp;#x3C;role&gt;&lt;/strong&gt;). The example shown below will be done within the SSMC.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Log in to the SSMC with Super user access.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/9/ssmc_login-1538420080349.jpg&quot; alt=&quot;SSMC login&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;In the mega menu, click &lt;strong&gt;Show All &gt; Domains&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/9/3par_domains-1538420435371.jpg&quot; alt=&quot;3PAR domains&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Click &lt;strong&gt;Create domain&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/9/3par_domains_create-1538420530820.jpg&quot; alt=&quot;domains&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;Enter the name of the domain. In this example, &lt;code&gt;bob_domain&lt;/code&gt;. Then click &lt;strong&gt;Add systems&lt;/strong&gt;. Specify the &lt;strong&gt;3PAR&lt;/strong&gt; where the domain will be created. Once complete, click &lt;strong&gt;Create&lt;/strong&gt;.&lt;a name=&quot;bob&quot;&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;blockquote&gt;
&lt;p&gt;You may ask why I am using Bob, because everyone knows Bob is cool!&lt;a name=&quot;bob&quot;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/9/3par_domains_create_bob_65-1538420610188.jpg&quot; alt=&quot;create domain&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;5&quot;&gt;
&lt;li&gt;In the mega menu, click &lt;strong&gt;Show All &gt; Users&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/9/3par_users_menu-1538420664979.jpg&quot; alt=&quot;users&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;6&quot;&gt;
&lt;li&gt;Click &lt;strong&gt;Create User&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/9/create_user-1538420731817.jpg&quot; alt=&quot;create user&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;7&quot;&gt;
&lt;li&gt;Specify the &lt;strong&gt;NEW&lt;/strong&gt; user name and password&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/9/create_user_bob_65-1538420769787.jpg&quot; alt=&quot;create bob_user&quot;&gt;&lt;/p&gt;
&lt;ol start=&quot;8&quot;&gt;
&lt;li&gt;Click &lt;strong&gt;Add Authorizations&lt;/strong&gt;, choose the domain created previously (&lt;code&gt;bob_domain&lt;/code&gt; on &lt;strong&gt;virt-3par&lt;/strong&gt; system). Choose the &lt;strong&gt;edit&lt;/strong&gt; Role for the user. Click Add.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/9/add_authorization-1538420815894.jpg&quot; alt=&quot;authorization&quot;&gt;&lt;/p&gt;
&lt;h3&gt;Repeat these steps as necessary to configure additional Domains and Users within your 3PAR.&lt;/h3&gt;
&lt;p&gt;If you want to use Ansible to configure the domains and users, Ansible can pass CLI commands to the 3PAR using the &lt;code&gt;shell&lt;/code&gt; module.&lt;/p&gt;
&lt;p&gt;For example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-YAML&quot;&gt;---
- name: Create 3PAR Domain &amp;#x26; Users
  hosts: localhost
  tasks:
    - name: install sshpass
      package:
        name: sshpass
        state: present
      become: yes

    - name: Create Domain
      shell: /usr/bin/sshpass -p 3pardata ssh -oStrictHostKeyChecking=no 3paradm@192.168.1.50 &quot;createdomain bob_domain&quot;
      register: domain

    - name: print domain
      debug:
        msg: &quot;{{ domain }}&quot;

    - name: Create Users
      shell: /usr/bin/sshpass -p 3pardata ssh -oStrictHostKeyChecking=no 3paradm@192.168.1.50 &quot;createuser -c Password1 bob_user bob_domain edit&quot;
      register: users    

    - name: print users
      debug:
        msg: &quot;{{ users }}&quot;
&lt;/code&gt;&lt;/pre&gt;
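&lt;p&gt;If you later need to tear this demo configuration back down, the same pattern works in reverse. The sketch below is a hedged example using the same &lt;code&gt;shell&lt;/code&gt; module approach; it assumes the standard &lt;code&gt;removeuser&lt;/code&gt; and &lt;code&gt;removedomain&lt;/code&gt; 3PAR CLI commands, so verify the exact syntax (and any confirmation flags) against your 3PAR CLI reference before running it.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-YAML&quot;&gt;---
- name: Remove 3PAR Domain and Users
  hosts: localhost
  tasks:
    # Remove the user first, then the (now empty) domain.
    - name: Remove User
      shell: /usr/bin/sshpass -p 3pardata ssh -oStrictHostKeyChecking=no 3paradm@192.168.1.50 &quot;removeuser -f bob_user&quot;
      register: users

    - name: Remove Domain
      shell: /usr/bin/sshpass -p 3pardata ssh -oStrictHostKeyChecking=no 3paradm@192.168.1.50 &quot;removedomain -f bob_domain&quot;
      register: domain
&lt;/code&gt;&lt;/pre&gt;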
&lt;hr&gt;
&lt;h1&gt;Using Ansible to configure CPGs, Hosts, Volumes and more&lt;a name=&quot;ansible&quot;&gt;&lt;/a&gt;&lt;/h1&gt;
&lt;p&gt;The following section demonstrates the process to configure resources like &lt;strong&gt;CPGs&lt;/strong&gt;, &lt;strong&gt;3PAR hosts&lt;/strong&gt;, etc. and assign them to the newly created domain(s). Remember that domain users are only able to view/edit resources they have access to and are unable to view/edit resources from other domains unless authorized to do so.&lt;/p&gt;
&lt;p&gt;Also, everything from this point forward is done via the HPE 3PAR Ansible Storage Modules on a Linux (RHEL, CentOS, Ubuntu) system with Ansible (ver. 2.5 or later) installed.&lt;/p&gt;
&lt;h2&gt;Storage Admin Perspective&lt;a name=&quot;admin&quot;&gt;&lt;/a&gt;&lt;/h2&gt;
&lt;hr&gt;
&lt;h3&gt;Let&apos;s get started.&lt;/h3&gt;
&lt;p&gt;First, we need to make sure we have the 3PAR storage module for Ansible downloaded.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Clone the HPE 3PAR Ansible Storage Modules&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;git clone https://github.com/HewlettPackard/hpe3par_ansible_module
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Install the HPE 3PAR Python SDK&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;pip install hpe3par_sdk
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, clone the &lt;code&gt;hpe3par-examples&lt;/code&gt; repo to get access to the Virtual Domain demo.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;git clone https://github.com/hpe-storage/hpe3par-examples
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Generic Ansible housekeeping&lt;a name=&quot;housekeeping&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;Configure &lt;code&gt;ansible.cfg&lt;/code&gt; to know about the 3PAR Ansible Storage Modules.&lt;/li&gt;
&lt;/ol&gt;
&lt;blockquote&gt;
&lt;p&gt;If you have other modules already installed on this system, you can move the &lt;strong&gt;Modules&lt;/strong&gt; folder from this repo to that directory.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre&gt;&lt;code&gt;vi /etc/ansible/ansible.cfg
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Under the &lt;code&gt;[defaults]&lt;/code&gt; section, edit &lt;code&gt;library&lt;/code&gt; to point to your Modules directory.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;[defaults]

# some basic default values...

inventory      = /etc/ansible/hosts
library        = /root/workspace/hpe3par_ansible_module/Modules
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;h3&gt;Understanding the 3PAR Ansible playbooks&lt;a name=&quot;understanding&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Navigate to the &lt;code&gt;hpe3par_examples/automation_tools/ansible/demo/virtual_domains&lt;/code&gt; folder. Here we find two Ansible playbooks and the &lt;code&gt;properties/&lt;/code&gt; folder.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;virtual_domains_demo_3par_admin.yml&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;virtual_domains_demo_3par_user.yml&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;properties/storage_system_properties.yml&lt;/strong&gt; (This is a configuration file containing the 3PAR IP address and the Storage Admin username and password for the 3PAR array)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;properties/storage_system_properties_bob.yml&lt;/strong&gt; (This is a configuration file containing the 3PAR IP address and the Domain user username and password for the 3PAR array)&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code&gt;cd ~/workspace/hpe3par_examples/automation_tools/ansible/demo/virtual_domains

[root@ansible-host virtual_domains]# ls -la
total 12
drwxr-xr-x. 4 root root  137 Sep 26 13:00 .
drwxr-xr-x. 3 root root   29 Sep 26 09:23 ..
drwxr-xr-x. 2 root root  254 Sep 26 13:00 img
drwxr-xr-x. 2 root root  116 Sep 26 09:23 properties
-rw-r--r--. 1 root root 2219 Sep 26 13:00 README.md
-rw-r--r--. 1 root root 2065 Sep 26 09:23 virtual_domains_demo_3par_admin.yml
-rw-r--r--. 1 root root 2065 Sep 26 09:23 virtual_domains_demo_3par_create-domain-users_cli.yml
-rw-r--r--. 1 root root 1847 Sep 26 09:23 virtual_domains_demo_3par_user.yml
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;h3&gt;Configuring Storage System Property files&lt;a name=&quot;property&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;Let&apos;s configure the &lt;code&gt;properties/storage_system_properties.yml&lt;/code&gt; file and add the 3PAR IP address. Enter the &lt;strong&gt;Storage Admin&lt;/strong&gt; username and password. Save the file.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;vi properties/storage_system_properties.yml
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;storage_system_ip: &quot;192.168.1.50&quot;
storage_system_username: &quot;3paradm&quot;
storage_system_password: &quot;3pardata&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Edit the &lt;code&gt;properties/storage_system_properties_bob.yml&lt;/code&gt; and configure the 3PAR IP address. Enter the &lt;strong&gt;bob_user&lt;/strong&gt; username and password. Save the file.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;vi properties/storage_system_properties_bob.yml
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;storage_system_ip: &quot;192.168.1.50&quot;
storage_system_username: &quot;bob_user&quot;
storage_system_password: &quot;Password1&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;h3&gt;Configuring Ansible Vault - encrypt/decrypt&lt;a name=&quot;vault&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Ansible Vault is a feature of Ansible that allows you to keep sensitive data, such as passwords or keys, in encrypted files rather than as plaintext in playbooks or roles.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Run &lt;code&gt;ansible-vault encrypt&lt;/code&gt; on each of the properties files. Enter a unique password for each file.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;ansible-vault encrypt properties/storage_system_properties.yml

ansible-vault encrypt properties/storage_system_properties_bob.yml
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Check to verify they are encrypted. You should see something similar to below.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;$ head -2 properties/storage_system_properties.yml
$ANSIBLE_VAULT;1.1;AES256
33636137356335
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If you need to edit the encrypted file, you can run &lt;code&gt;ansible-vault edit file.yml&lt;/code&gt;, enter the vault password and perform the edits. Alternatively, if you need to decrypt the file, run &lt;code&gt;ansible-vault decrypt file.yml&lt;/code&gt; and enter the vault password.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;hr&gt;
&lt;h3&gt;Now let&apos;s get on to the fun stuff&lt;/h3&gt;
&lt;p&gt;We will be working in the &lt;code&gt;virtual_domains_demo_3par_admin.yml&lt;/code&gt; playbook. This playbook is run by the Storage Admin to create &lt;strong&gt;CPGs&lt;/strong&gt; and assign &lt;strong&gt;Hosts&lt;/strong&gt; to the domain we created previously.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; These are very simple examples to help you understand the capabilities of the Virtual Domains within the HPE 3PAR system. You can expand these to add multiple CPGs and multiple Hosts within the same playbook without ever having to log into the SSMC. This is the power of automating the configuration of the HPE 3PAR Storage System using the Ansible Storage Modules.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;When we open the file, we see multiple sections. Again, we assume that you are familiar with &lt;strong&gt;YAML&lt;/strong&gt; and Ansible playbooks, so the layout and structure should be easy to follow.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
- name: Virtual Domains on 3PAR Ansible Demo playbook - Admin
  hosts: localhost
  become: no

  vars:
    cpg_name: &apos;bob_cpg_FC_r6&apos;
    host_name: &apos;scom.virtware.co&apos;
    domain: &apos;bob_domain&apos;
    iscsi_names: &apos;iqn.1991-05.com.microsoft:scom.virtware.co&apos;

  tasks:
    - name: Load Storage System Vars
      include_vars: &apos;properties/storage_system_properties.yml&apos;

    - name: Create CPG &quot;{{ cpg_name }}&quot;
      hpe3par_cpg:
        # 3PAR CPG options found here: https://github.com/HewlettPackard/hpe3par_ansible_module#modules
        storage_system_ip: &quot;{{ storage_system_ip }}&quot;
        storage_system_username: &quot;{{ storage_system_username }}&quot;
        storage_system_password: &quot;{{ storage_system_password }}&quot;
        state: present
        domain: &quot;{{ domain }}&quot;
        cpg_name: &quot;{{ cpg_name }}&quot;
        growth_increment: 32.5
        growth_increment_unit: GiB
        growth_limit: 100
        growth_limit_unit: GiB
        growth_warning: 90
        growth_warning_unit: GiB
        raid_type: R6
        set_size: 6
        high_availability: MAG
        disk_type: FC

    - name: Create Host &quot;{{ host_name }}&quot;
      hpe3par_host:
        # 3PAR Host options found here: https://github.com/HewlettPackard/hpe3par_ansible_module#modules
        storage_system_ip: &quot;{{ storage_system_ip }}&quot;
        storage_system_username: &quot;{{ storage_system_username }}&quot;
        storage_system_password: &quot;{{ storage_system_password }}&quot;
        state: present
        host_name: &quot;{{ host_name }}&quot;
        host_domain: &quot;{{ domain }}&quot;
        host_persona: WINDOWS_SERVER

    - name: Add iSCSI paths to Host &quot;{{ host_name }}&quot;
      hpe3par_host:
        storage_system_ip: &quot;{{ storage_system_ip }}&quot;
        storage_system_username: &quot;{{ storage_system_username }}&quot;
        storage_system_password: &quot;{{ storage_system_password }}&quot;
        state: add_iscsi_path_to_host
        host_name: &quot;{{ host_name }}&quot;
        host_iscsi_names: &quot;{{ iscsi_names }}&quot;

&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;h3&gt;Configuring Variables section&lt;/h3&gt;
&lt;p&gt;There are several sections where you can specify variables, allowing maximum flexibility when creating playbooks. They can be specified at the &lt;strong&gt;playbook&lt;/strong&gt; level (Global), in an &lt;strong&gt;external file&lt;/strong&gt; (properties/storage_system_properties.yml), or at the &lt;strong&gt;task&lt;/strong&gt; level.&lt;/p&gt;
&lt;p&gt;In the &lt;code&gt;vars&lt;/code&gt; section, you can modify the CPG/Host names to be added to the &lt;code&gt;bob_domain&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; In order to assign the new CPGs and Hosts to the domain, you must specify a domain in the &lt;code&gt;domain: &apos;bob_domain&apos;&lt;/code&gt; variable. This variable is then used within each of the tasks (&lt;strong&gt;domain:&lt;/strong&gt; and &lt;strong&gt;host_domain:&lt;/strong&gt;), where required, to assign the CPG or Host to the domain. If the domain is &lt;strong&gt;not&lt;/strong&gt; specified, the CPG or Host is &lt;strong&gt;not&lt;/strong&gt; assigned to a domain and is not accessible to the Domain user when they log into the array.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Modify the &lt;strong&gt;host_name&lt;/strong&gt; and &lt;strong&gt;iscsi_names&lt;/strong&gt; to match a host and iSCSI iqns you want to add from your environment.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;In the &lt;code&gt;tasks&lt;/code&gt; section, for example in the &lt;strong&gt;Create CPG&lt;/strong&gt; task, you can add/modify the variables (growth_limit, raid_type, etc.) in the tasks, as well as move them into the &lt;strong&gt;vars&lt;/strong&gt; section if needed.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a name=&quot;limits&quot;&gt;&lt;/a&gt;In the case of Multi-Tenancy, creating growth limits and defining disk characteristics on CPGs is critical in order to enforce boundaries per tenant (this prevents one tenant from consuming the entire storage array), as well as to ensure all tenants get the appropriate resources and performance per their needs. In the playbook above, we specified a &lt;strong&gt;100GB growth limit (with a 90GB warning)&lt;/strong&gt; on the CPG, therefore restricting the users within the &lt;code&gt;bob_domain&lt;/code&gt; from using more than 100GB of storage space on the array. All of this is configurable by the Storage Admin.&lt;/p&gt;
&lt;/blockquote&gt;
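&lt;p&gt;To illustrate the kind of expansion mentioned in the note above, the &lt;strong&gt;Create CPG&lt;/strong&gt; task could loop over a list of CPG names so that several CPGs are created in the domain in a single pass. This is only a sketch; the second CPG name below is hypothetical and the other parameters simply reuse the values from the playbook above.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;    - name: Create CPGs in the domain
      hpe3par_cpg:
        storage_system_ip: &quot;{{ storage_system_ip }}&quot;
        storage_system_username: &quot;{{ storage_system_username }}&quot;
        storage_system_password: &quot;{{ storage_system_password }}&quot;
        state: present
        domain: &quot;{{ domain }}&quot;
        cpg_name: &quot;{{ item }}&quot;
        growth_limit: 100
        growth_limit_unit: GiB
        growth_warning: 90
        growth_warning_unit: GiB
        raid_type: R6
        set_size: 6
        high_availability: MAG
        disk_type: FC
      # with_items loops the task over each CPG name in the list
      with_items:
        - &apos;bob_cpg_FC_r6&apos;
        - &apos;bob_cpg_FC_r6_backup&apos;
&lt;/code&gt;&lt;/pre&gt;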
&lt;p&gt;Also, you can specify an external variables file, like &lt;code&gt;storage_system_properties.yml&lt;/code&gt;. This gives us the ability to encrypt the external file without affecting the main playbook.&lt;/p&gt;
&lt;hr&gt;
&lt;h3&gt;Configuring Tasks sections&lt;a name=&quot;admin_tasks&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;We have 4 main tasks in this example.&lt;/p&gt;
&lt;p&gt;These tasks are taken from the main (CPG, Host, Volume, etc) playbooks found in the 3PAR Storage Module here: &lt;a href=&quot;https://github.com/HewlettPackard/hpe3par_ansible_module/tree/master/playbooks&quot;&gt;https://github.com/HewlettPackard/hpe3par_ansible_module/tree/master/playbooks&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Please refer to the &lt;a href=&quot;https://github.com/HewlettPackard/hpe3par_ansible_module/blob/master/Modules/readme.md&quot;&gt;Modules README&lt;/a&gt; for detailed information on each Module including required/optional parameters.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Load Storage System Vars&lt;/strong&gt; (load the encrypted storage system IP, username/password)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Create CPG&lt;/strong&gt; (create CPGs per the provided specifications)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Create Host&lt;/strong&gt; (create a basic 3PAR host)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Add iSCSI paths to Host&lt;/strong&gt; (modify the host and add iSCSI IQNs or FC WWNs)&lt;/li&gt;
&lt;/ol&gt;
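&lt;p&gt;The same looping technique can be applied to the &lt;strong&gt;Create Host&lt;/strong&gt; task if you need to register several hosts in the domain at once. A minimal sketch (the second host name is hypothetical, and the module parameters are the same ones used in the playbook above):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;    - name: Create Hosts in the domain
      hpe3par_host:
        storage_system_ip: &quot;{{ storage_system_ip }}&quot;
        storage_system_username: &quot;{{ storage_system_username }}&quot;
        storage_system_password: &quot;{{ storage_system_password }}&quot;
        state: present
        host_name: &quot;{{ item }}&quot;
        host_domain: &quot;{{ domain }}&quot;
        host_persona: WINDOWS_SERVER
      # with_items loops the task over each host name in the list
      with_items:
        - &apos;scom.virtware.co&apos;
        - &apos;scom2.virtware.co&apos;
&lt;/code&gt;&lt;/pre&gt;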
&lt;hr&gt;
&lt;h3&gt;Running the Playbook&lt;a name=&quot;admin_run&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Now that we know what is going on within the admin playbook, we can run it in order to create the CPG and Host resources within the &lt;strong&gt;bob_domain&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;We run this playbook with the &lt;code&gt;ansible-playbook --ask-vault-pass&lt;/code&gt; option in order to decrypt the &lt;code&gt;storage_system_properties.yml&lt;/code&gt; file.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;$ ansible-playbook --ask-vault-pass virtual_domains_demo_3par_admin.yml
Vault password:

PLAY [Virtual Domains on 3PAR Ansible Demo playbook - Admin] **************************

TASK [Gathering Facts] ****************************************************************
ok: [localhost]

TASK [Load Storage System Vars] *******************************************************
ok: [localhost]

TASK [Create CPG &quot;bob_cpg_FC_r6&quot;] *****************************************************
ok: [localhost]

TASK [Create Host &quot;scom.virtware.co&quot;] *************************************************
ok: [localhost]

TASK [Add iSCSI paths to Host &quot;scom.virtware.co&quot;] ***************************************************************************************
ok: [localhost]

PLAY RECAP ****************************************************************************
localhost                  : ok=5    changed=0    unreachable=0    failed=0

&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;h3&gt;Success.&lt;/h3&gt;
&lt;p&gt;This sample playbook demonstrates how a Storage Admin can quickly and programmatically configure Domains, Users, and storage resources using a combination of the SSMC and Ansible playbooks.&lt;/p&gt;
&lt;h2&gt;Storage User Perspective&lt;a name=&quot;user&quot;&gt;&lt;/a&gt;&lt;/h2&gt;
&lt;hr&gt;
&lt;p&gt;Now that we have finished configuring a Domain and Users, created CPGs, and added hosts into the Domain, let&apos;s cover how a user can consume the 3PAR using Ansible playbooks while still being bound to the limits placed on the domain by the Storage Admin.&lt;/p&gt;
&lt;p&gt;This section closely follows my other blog post about using the Ansible modules to provision storage on an HPE 3PAR: &lt;a href=&quot;/blog/storage-provisioning-using-ansible-with-hpe-3par-storage&quot;&gt;Storage Provisioning using Ansible with HPE 3PAR Storage&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Assumptions&lt;a name=&quot;user_assumptions&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Domains/users have been created&lt;/li&gt;
&lt;li&gt;CPGs and Hosts have been assigned to the domain&lt;/li&gt;
&lt;li&gt;Reviewed &lt;a href=&quot;#housekeeping&quot;&gt;Generic Ansible housekeeping&lt;/a&gt; section&lt;/li&gt;
&lt;li&gt;Vault password to unlock the &lt;strong&gt;properties/storage_system_properties_bob.yml&lt;/strong&gt; file. Check out the &lt;a href=&quot;#vault&quot;&gt;Vault section&lt;/a&gt; for more info.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Let&apos;s get started&lt;/h3&gt;
&lt;p&gt;Let&apos;s take a look at our Ansible playbooks again. Since we have everything ready for us on the array, it is very simple to run the playbooks as a user. The example playbook is a simple demonstration of how you can turn your &lt;strong&gt;Infrastructure into Code&lt;/strong&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Navigate to the &lt;strong&gt;hpe3par_examples/automation_tools/ansible/demo/virtual_domains&lt;/strong&gt; folder.&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code&gt;cd hpe3par_examples/automation_tools/ansible/demo/virtual_domains
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here we find the &lt;code&gt;virtual_domains_demo_3par_user.yml&lt;/code&gt; playbook and the &lt;code&gt;properties/storage_system_properties_bob.yml&lt;/code&gt; (This file contains the 3PAR IP address, 3PAR Domain user username/password).&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;[root@ansible-host virtual_domains]# ls -la
total 12
drwxr-xr-x. 4 root root  137 Sep 26 13:00 .
drwxr-xr-x. 3 root root   29 Sep 26 09:23 ..
drwxr-xr-x. 2 root root  254 Sep 26 13:00 img
drwxr-xr-x. 2 root root  116 Sep 26 09:23 properties
-rw-r--r--. 1 root root 2219 Sep 26 13:00 README.md
-rw-r--r--. 1 root root 2065 Sep 26 09:23 virtual_domains_demo_3par_admin.yml
-rw-r--r--. 1 root root 2065 Sep 26 09:23 virtual_domains_demo_3par_create-domain-users_cli.yml
-rw-r--r--. 1 root root 1847 Sep 26 09:23 virtual_domains_demo_3par_user.yml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Before we go much further, make sure you have configured the &lt;code&gt;properties/storage_system_properties_bob.yml&lt;/code&gt; with the appropriate 3PAR array information. Please review &lt;a href=&quot;#property&quot;&gt;Storage System Property Files&lt;/a&gt; for more info.&lt;/p&gt;
&lt;p&gt;Let&apos;s look at the &lt;code&gt;virtual_domains_demo_3par_user.yml&lt;/code&gt; playbook.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;vi virtual_domains_demo_3par_user.yml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When we open the file, we see multiple sections. Again, we assume that you are familiar with &lt;strong&gt;YAML&lt;/strong&gt; and Ansible playbooks, so the layout and structure should be easy to follow.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
- name: Virtual Domains on 3PAR Ansible Demo playbook - Bob User
  hosts: localhost

  vars:
    volume_name: &apos;bob_demo_volume&apos;
    cpg_name: &apos;bob_cpg_FC_r6&apos;
    host_name: &apos;scom.virtware.co&apos;
    domain: &apos;bob_domain&apos;
    vol_size: 10
    vol_size_unit: &apos;GiB&apos;
    autolun: False
    lunid: 110

  tasks:
    - name: Load Storage System Vars
      include_vars: &apos;properties/storage_system_properties_bob.yml&apos;

    - name: Load VolumeSet Vars
      include_vars: &apos;properties/volumeset_properties.yml&apos;

    - name: Create Volume &quot;{{ volume_name }}&quot;
      hpe3par_volume:
        storage_system_ip: &quot;{{ storage_system_ip }}&quot;
        storage_system_username: &quot;{{ storage_system_username }}&quot;
        storage_system_password: &quot;{{ storage_system_password }}&quot;
        state: present
        volume_name: &quot;{{ item }}&quot;
        cpg: &quot;{{ cpg_name }}&quot;
        size: &quot;{{ vol_size }}&quot;
        size_unit: &quot;{{ vol_size_unit }}&quot;
      with_items: &quot;{{ [&apos;volume_bob_1&apos;, &apos;volume_bob_2&apos;, &apos;volume_bob_3&apos;] }}&quot;

    - name: Create volume set &quot;{{ volumeset_name }}&quot;
      hpe3par_volumeset:
        storage_system_ip: &quot;{{ storage_system_ip }}&quot;
        storage_system_username: &quot;{{ storage_system_username }}&quot;
        storage_system_password: &quot;{{ storage_system_password }}&quot;
        state: present
        volumeset_name: &quot;{{ volumeset_name }}&quot;
        setmembers: &quot;{{ add_vol_setmembers }}&quot;

    - name: Create VLUN
      hpe3par_vlun:
        storage_system_ip: &quot;{{ storage_system_ip }}&quot;
        storage_system_username: &quot;{{ storage_system_username }}&quot;
        storage_system_password: &quot;{{ storage_system_password }}&quot;
        state: export_volumeset_to_host
        volume_set_name: &quot;{{ volumeset_name }}&quot;
        host_name: &quot;{{ host_name }}&quot;
        lunid: &quot;{{ lunid }}&quot;
        autolun: &quot;{{ autolun }}&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The sections within the user playbook are very similar to the one used by the admin user. From this playbook, you are able to provision volumes from the CPGs and resources specified by the Storage Admin (&lt;a href=&quot;#limits&quot;&gt;100GB limit&lt;/a&gt;) and export the volumes to hosts. The main difference here is that you are authenticating to the 3PAR with a 3PAR Domain user (&lt;strong&gt;bob_user&lt;/strong&gt;) rather than a 3PAR Super user.&lt;/p&gt;
&lt;p&gt;The 3PAR Domain user is specified in the &lt;code&gt;properties/storage_system_properties_bob.yml&lt;/code&gt; file. This is covered in the &lt;a href=&quot;#property&quot;&gt;Storage System Property Files&lt;/a&gt; section.&lt;/p&gt;
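&lt;p&gt;The &lt;strong&gt;Load VolumeSet Vars&lt;/strong&gt; task also expects a &lt;code&gt;properties/volumeset_properties.yml&lt;/code&gt; file that defines the volume set name and its members. That file is not shown in the listing above; based on the variables the playbook references and the task output further below, a minimal version might look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;volumeset_name: &apos;bob_volumeset&apos;
add_vol_setmembers:
  - &apos;volume_bob_1&apos;
  - &apos;volume_bob_2&apos;
  - &apos;volume_bob_3&apos;
&lt;/code&gt;&lt;/pre&gt;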
&lt;hr&gt;
&lt;h3&gt;Configuring Variables section - User&lt;/h3&gt;
&lt;p&gt;A quick review of variables: there are several sections where you can specify variables, allowing maximum flexibility when creating playbooks. They can be specified at the playbook level (Global), in an external file (properties/storage_system_properties.yml), or at the task level.&lt;/p&gt;
&lt;p&gt;In the &lt;code&gt;vars&lt;/code&gt; section, you can modify the &lt;strong&gt;Volume&lt;/strong&gt;, &lt;strong&gt;CPG&lt;/strong&gt;, &lt;strong&gt;3PAR Host&lt;/strong&gt;, &lt;strong&gt;Volume Size&lt;/strong&gt;, etc. to meet your needs.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Be aware that if you exceed the 100GB limit on the CPG (as defined by the Storage Admin in our demo), either by creating a volume larger than 100GB or by creating multiple volumes that exceed a cumulative 100GB in size, the playbook fails with an error that the allocation is larger than the limit.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Since we are logged into the &lt;code&gt;bob_domain&lt;/code&gt; as the &lt;code&gt;bob_user&lt;/code&gt;, we don&apos;t have to explicitly define the domain when creating Volumes, Volume Sets, etc. because the domain is inherited from the User&apos;s domain.&lt;/p&gt;
&lt;p&gt;In the &lt;code&gt;tasks&lt;/code&gt; section, for example in the &lt;strong&gt;Create Volume&lt;/strong&gt; task, you can use the &lt;code&gt;with_items&lt;/code&gt; option (with_items functions as a loop in Ansible) to create multiple volumes at runtime rather than creating multiple tasks to create individual volumes.&lt;/p&gt;
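&lt;p&gt;If the volumes need different sizes, the same loop can iterate over a list of dictionaries instead of plain names. The sketch below is hypothetical (volume names and sizes are made up for illustration) but uses only the module parameters already shown in the playbook above.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;    - name: Create Volumes of different sizes
      hpe3par_volume:
        storage_system_ip: &quot;{{ storage_system_ip }}&quot;
        storage_system_username: &quot;{{ storage_system_username }}&quot;
        storage_system_password: &quot;{{ storage_system_password }}&quot;
        state: present
        volume_name: &quot;{{ item.name }}&quot;
        cpg: &quot;{{ cpg_name }}&quot;
        size: &quot;{{ item.size }}&quot;
        size_unit: &quot;{{ vol_size_unit }}&quot;
      # Each item carries its own name and size in GiB
      with_items:
        - { name: &apos;bob_data_volume&apos;, size: 20 }
        - { name: &apos;bob_log_volume&apos;, size: 5 }
&lt;/code&gt;&lt;/pre&gt;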
&lt;hr&gt;
&lt;h3&gt;Configuring Tasks sections - User&lt;a name=&quot;user_tasks&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;We have 5 main tasks in this example.&lt;/p&gt;
&lt;p&gt;These tasks are taken from the main (CPG, Host, Volume, etc) playbooks found in the 3PAR Storage Module here:  &lt;a href=&quot;https://github.com/HewlettPackard/hpe3par_ansible_module/tree/master/playbooks&quot;&gt;https://github.com/HewlettPackard/hpe3par_ansible_module/tree/master/playbooks&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Please refer to the &lt;a href=&quot;https://github.com/HewlettPackard/hpe3par_ansible_module/blob/master/Modules/readme.md&quot;&gt;Modules README&lt;/a&gt; for detailed information on each Module including optional parameters.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Load Storage System Vars&lt;/strong&gt; (load the encrypted storage system IP, username/password)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Load VolumeSet Variables&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Create Volume&lt;/strong&gt; (create 3 Volumes as defined)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Create VolumeSet&lt;/strong&gt; (create a VolumeSet for the 3 Volumes)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Create VLUN&lt;/strong&gt; (Export the VolumeSet to the 3PAR Host (i.e. scom.virtware.co))&lt;/li&gt;
&lt;/ol&gt;
&lt;hr&gt;
&lt;h3&gt;Running the Playbook - User&lt;a name=&quot;user_run&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Now that we have configured our playbook, we can run it in order to create the &lt;strong&gt;Volumes&lt;/strong&gt;, &lt;strong&gt;VolumeSet&lt;/strong&gt; all within the &lt;code&gt;bob_domain&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;We run this playbook with the &lt;code&gt;ansible-playbook --ask-vault-pass&lt;/code&gt; option in order to decrypt the &lt;strong&gt;storage_system_properties_bob.yml&lt;/strong&gt; file.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;$ ansible-playbook --ask-vault-pass virtual_domains_demo_3par_user.yml
Vault password:

PLAY [Virtual Domains on 3PAR Ansible Demo playbook - Bob User] *************************

TASK [Gathering Facts] ******************************************************************
ok: [localhost]

TASK [Load Storage System Vars] *********************************************************
ok: [localhost]

TASK [Load VolumeSet Vars] **************************************************************
ok: [localhost]

TASK [Create Volume &quot;bob_demo_volume&quot;] **************************************************
ok: [localhost] =&gt; (item=volume_bob_1)
ok: [localhost] =&gt; (item=volume_bob_2)
ok: [localhost] =&gt; (item=volume_bob_3)

TASK [Create volume set &quot;bob_volumeset&quot;]*************************************************
ok: [localhost]

TASK [Create VLUN] **********************************************************************
ok: [localhost]

PLAY RECAP ******************************************************************************
localhost                  : ok=6    changed=0    unreachable=0    failed=0

&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;h3&gt;Success.&lt;a name=&quot;user_success&quot;&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;This sample playbook demonstrates how a Storage User can quickly and programmatically provision storage and manage those storage resources within their Domain while still being under the control of the Storage Admin.&lt;/p&gt;
&lt;p&gt;Now let&apos;s verify that these volumes have been exported to the Host.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Log into the &lt;strong&gt;SSMC&lt;/strong&gt; as &lt;code&gt;bob_user&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click on &lt;strong&gt;Virtual Volumes&lt;/strong&gt;. You should see 3 volumes and also see that they are successfully exported to &lt;strong&gt;scom.virtware.co&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/9/ssmc_success-1538420946527.jpg&quot; alt=&quot;Success SSMC&quot;&gt;&lt;/p&gt;
&lt;p&gt;Now let&apos;s check &lt;strong&gt;scom.virtware.co&lt;/strong&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;You may need to perform a &lt;strong&gt;Rescan Disks&lt;/strong&gt; to see the new volumes.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/9/scom_success-1538421007139.jpg&quot; alt=&quot;Success SCOM&quot;&gt;&lt;/p&gt;
&lt;p&gt;You should be able to see all of the volumes available to the Windows Server.&lt;/p&gt;
&lt;p&gt;This concludes this demo. With this information, you will be able to create Multiple Domains, Users, and manage the resources that are available to each. You also will understand how to apply the appropriate controls around CPGs and your domains in order to facilitate multi-tenancy storage domain.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Introducing Nemo: HPE Nimble Storage Advanced Data Services Emulator for Containers]]></title><description><![CDATA[The HPE Nimble Storage Docker Volume plugin brings a plethora of powerful capabilities to container ecosystems like Docker, Mesos and…]]></description><link>https://developer.hpe.com/introducing-nemo-hpe-nimble-storage-advanced-data-services-emulator-for-/</link><guid isPermaLink="false">https://developer.hpe.com/introducing-nemo-hpe-nimble-storage-advanced-data-services-emulator-for-/</guid><pubDate>Mon, 01 Oct 2018 17:33:50 GMT</pubDate><content:encoded>&lt;p&gt;The &lt;a href=&quot;https://store.docker.com/plugins/nimble&quot;&gt;HPE Nimble Storage Docker Volume plugin&lt;/a&gt; brings a plethora of powerful capabilities to container ecosystems like Docker, Mesos and Kubernetes. The plugin can be used with either &lt;a href=&quot;https://hpe.com/storage/nimble&quot;&gt;Nimble arrays&lt;/a&gt; or in conjunction with a &lt;a href=&quot;https://hpe.com/storage/cloudvolumes&quot;&gt;HPE Cloud Volumes&lt;/a&gt; account. Both platforms are very suitable for high performing mission-critical production workloads, which is to be expected. However, given the advanced functionality the Docker Volume plugin provides, wouldn&apos;t one be more comfortable experimenting with the features in a completely isolated sandbox? On your laptop?&lt;/p&gt;
&lt;p&gt;Nemo is the &lt;a href=&quot;https://github.com/NimbleStorage/Nemo&quot;&gt;HPE Nimble Storage Advanced Data Services Emulator for Containers&lt;/a&gt;. It&apos;s a free-standing Open Source project blessed by HPE. It emulates some of the Advanced Data Services that the real plugin provides and it only depends on OpenZFS being available on the host OS. OpenZFS provides similar data management capabilities such as snapshot, clones and metadata storage per filesystem. Nemo is not providing block storage; it is only a local filesystem and does not come with a support contract.&lt;/p&gt;
&lt;p&gt;HPE Nimble Storage does not use OpenZFS in any way in the shipping products.&lt;/p&gt;
&lt;h1&gt;The Soda Challenge&lt;/h1&gt;
&lt;p&gt;Nemo is designed to emulate the HPE Nimble Storage Docker Volume plugin, so at a glance they look very similar. But if you were actually comparing the two one to one, it&apos;s apples and oranges in terms of functionality. The official plugins are targeted towards mission-critical Enterprise applications and workloads. Nemo is for the dabbler and tinkerer who is unable to access a Nimble array or HPE Cloud Volumes but wants to learn how Advanced Data Services are incorporated into DevOps-centric environments.&lt;/p&gt;
&lt;p&gt;The options available to Nemo are divided into a number of self-explanatory sections to help lay out the level of difference between the two:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Universal Options: Help and size, very rudimentary&lt;/li&gt;
&lt;li&gt;Nimble Compatible Global Options: Basic parameters&lt;/li&gt;
&lt;li&gt;Nimble Compatible Clone Options: Clone parameters&lt;/li&gt;
&lt;li&gt;Nimble Compatible Import Dataset as Clone Options: Import existing OpenZFS datasets&lt;/li&gt;
&lt;li&gt;Nimble Compatible Vanity Options: These options do not provide any real functionality&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These sections are outlined below and available in the &lt;code&gt;-o help&lt;/code&gt; output from the plugin:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;gem:~ mmattsson$ sudo docker volume create -d nemo -o help
Error response from daemon: create c6e7ad6cd8199feac4c485a5e79ad4edef8e231e9fa179e0191a1e5fadfe88fc: VolumeDriver.Create:  -o help

Nemo: HPE Nimble Storage Advanced Data Services Emulator for Containers

Create, Clone or Import an existing OpenZFS dataset into a locally scoped 
Docker Volume. All options are optional. Every &apos;-o key=value&apos; will be stored 
on the OpenZFS Dataset.

********************************************************************************

                            D I S C L A I M E R

 HPE Nimble Storage is not using OpenZFS in any of its products. Nemo is a tool
 to educate users how to integrate Advanced Data Services into DevOps workflows 
 using common developer and IT operations tools without owning or using a 
 HPE Nimble Storage product. Nemo is not supported by HPE Nimble Storage.

********************************************************************************
 
Universal Options:
  -o help           This help
  -o size=X         X is the quota of volume specified in GiB, 0 means no quota

Nimble Compatible Global Options:
  -o description=X      X is a vanity description set on the volume
  -o sizeInGiB=X        X is the alternative to &apos;size&apos;
  -o destroyOnRm=X      X is either &apos;true&apos; or &apos;false&apos; to actually destroy a 
                        dataset after it has been removed. Can be &quot;imported&quot; 
                        with &apos;importVol&apos;. Global default runtime flag available
  -o fsMode=X           X X is 1 to 4 octal digits that represent the file mode
                        to be applied to the root directory of the filesystem
  -o fsOwner=X:Y        X:Y is the uid:gid that should own the root directory of
                        the filesystem, in the form of uid:gid (string or nums)

Nimble Compatible Clone Options:
  -o cloneOf=X          X is the name of Docker Volume to create a clone of
  -o snapshot=X         X is the name of the snapshot to base the clone on 
                        (optional, if missing, a new snapshot is created)
  -o createSnapshot=X   &apos;true&apos; or &apos;false&apos;, indicates that a new snapshot of the
                        volume should be taken and used for the clone
  -o destroyOnDetach=X  indicates that the dataseut (including snapshots)
                        backing this volume should be destroyed when this volume
                        is unmounted or detached

Nimble Compatible Import Dataset as Clone Options:
  -o importVolAsClone=X X is an exisiting dataset without ZDVP properties to
                        import into a ZDVP dataset as a clone
  -o snapshot=X         X is an optional dataset snapshot to import clone from
  -o createSnapshot=X   X is either &apos;true&apos; or &apos;false&apos; to create a above snapshot
  -o destroyOnDetach=X  X indicates that the dataset (including snapshots)
                        backing this volume should be destroyed when this volume
                        is unmounted or detached

Nimble Compatible Import Options:
  -o importVol=X    X is an exisiting unmounted dataset without ZDVP properties
                    to import into a ZDVP dataset
  -o snapshot=X     X is an optional dataset snapshot to import from
  -o restore=X      X is dataset snapshot name to roll back to on import
  -o forceImport=X  Vanity flag, accepts any value, all imports are forced

Nimble Compatible Vanity Options (will be displayed in the &quot;Status&quot; field):
  -o limitIOPS=X            Defaults to &apos;-1&apos;
  -o limitMBPS=X            Defaults to &apos;-1&apos;
  -o dedupe=X               Defaults to &apos;false&apos;
  -o thick=X                Defaults to &apos;false&apos;
  -o encryption=X           Defaults to &apos;none&apos;
  -o folder=X               Defaults to &apos;none&apos;
  -o pool=X                 Defaults to &apos;default&apos;
  -o perfPolicy=X           Defaults to &apos;DockerDefault&apos;
  -o protectionTemplate=X   Defaults to &apos;none&apos;

Additional OpenZFS Options:
 -o help=OpenZFS
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;docker volume inspect&lt;/code&gt; output is fairly compatible with what a production system returns, which matters if the JSON output is being parsed programmatically. This includes the list of snapshots on a particular volume (snapshot naming conventions and timestamps differ slightly).&lt;/p&gt;
&lt;h1&gt;Get started!&lt;/h1&gt;
&lt;p&gt;Nemo is primarily being distributed in source form but will be made available for broader developer-friendly container ecosystems. There&apos;s a managed Docker Volume plugin available that installs pretty much out-of-the-box on an Ubuntu 18.04 machine. Instructions to get OpenZFS and Nemo rolling on RHEL and CentOS are available for the tinkerer &lt;a href=&quot;https://github.com/NimbleStorage/Nemo/tree/master/runtime/docker-v2#other-distributions&quot;&gt;in the repo&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This is what a &quot;get to know Nemo session&quot; could look like on an Ubuntu server provisioned with &lt;code&gt;docker-machine&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;gem:~ mmattsson$ docker-machine create --driver generic --generic-ssh-port=22 --generic-ssh-user=vagrant --generic-ip-address=192.168.59.131 --generic-ssh-key=.vagrant/machines/default/vmware_fusion/private_key nemo
Running pre-create checks...
Creating machine...
(nemo) Importing SSH key...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with ubuntu(systemd)...
Installing Docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env nemo
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;gem:~ mmattsson$ eval $(docker-machine env nemo)
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;gem:~ mmattsson$ docker version
Client:
 Version:           18.06.0-ce
 API version:       1.38
 Go version:        go1.10.3
 Git commit:        0ffa825
 Built:             Wed Jul 18 19:05:26 2018
 OS/Arch:           darwin/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.06.1-ce
  API version:      1.38 (minimum version 1.12)
  Go version:       go1.10.3
  Git commit:       e68fc7a
  Built:            Tue Aug 21 17:23:15 2018
  OS/Arch:          linux/amd64
  Experimental:     false
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;gem:~ mmattsson$ docker plugin install --alias nemo --grant-all-permissions nimblestorage/nemo:1.0.0
1.0.0: Pulling from nimblestorage/nemo
6ff473112dfe: Download complete 
Digest: sha256:8f256853b8d2a97aa233938f93ac69686be4f5a280c7372e2cd672898519828c
Status: Downloaded newer image for nimblestorage/nemo:1.0.0
Installed plugin nimblestorage/nemo:1.0.0
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;gem:~ mmattsson$ docker volume create -d nemo myvol1
myvol1
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;gem:~ mmattsson$ docker volume inspect myvol1
[
    {
        &quot;CreatedAt&quot;: &quot;2018-09-26T16:13:10Z&quot;,
        &quot;Driver&quot;: &quot;nemo:latest&quot;,
        &quot;Labels&quot;: {},
        &quot;Mountpoint&quot;: &quot;&quot;,
        &quot;Name&quot;: &quot;myvol1&quot;,
        &quot;Options&quot;: {},
        &quot;Scope&quot;: &quot;local&quot;,
        &quot;Status&quot;: {
            &quot;ApplicationCategory&quot;: &quot;Virtual Server&quot;,
            &quot;CachePinned&quot;: &quot;false&quot;,
            &quot;CachingEnabled&quot;: &quot;true&quot;,
            &quot;Connections&quot;: &quot;0&quot;,
            &quot;DedupeEnabled&quot;: &quot;false&quot;,
            &quot;Description&quot;: &quot;Docker knows this dataset as myvol1&quot;,
            &quot;EncryptionCipher&quot;: &quot;none&quot;,
            &quot;Folder&quot;: &quot;&quot;,
            &quot;Group&quot;: &quot;nemo&quot;,
            &quot;LimitIOPS&quot;: &quot;-1&quot;,
            &quot;LimitMBPS&quot;: &quot;-1&quot;,
            &quot;LimitSnapPercentOfSize&quot;: &quot;-1&quot;,
            &quot;LimitVolPercentOfSize&quot;: &quot;100&quot;,
            &quot;PerfPolicy&quot;: &quot;DockerDefault&quot;,
            &quot;Pool&quot;: &quot;default&quot;,
            &quot;SnapUsageMiB&quot;: &quot;0&quot;,
            &quot;Snapshots&quot;: [],
            &quot;ThinlyProvisioned&quot;: &quot;true&quot;,
            &quot;VolSizeMiB&quot;: 10240,
            &quot;VolUsageMiB&quot;: 0,
            &quot;destroyOnDetach&quot;: &quot;false&quot;,
            &quot;destroyOnRm&quot;: &quot;false&quot;,
            &quot;mountConflictDelay&quot;: &quot;30&quot;
        }
    }
]
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;gem:~ mmattsson$ docker run --rm -it -v myvol1:/data bash
Unable to find image &apos;bash:latest&apos; locally
latest: Pulling from library/bash
4fe2ade4980c: Pull complete 
ec6d9ca5c66a: Pull complete 
d8685fbd86ca: Pull complete 
Digest: sha256:8634afcddefc8a10565b22d685df782058b096712a91bf45d75633f368dda729
Status: Downloaded newer image for bash:latest
bash-4.4# df -h /data
Filesystem                Size      Used Available Use% Mounted on
tank/v2/myvol1           10.0G    128.0K     10.0G   0% /data
bash-4.4# touch /data/myfile.txt
bash-4.4# exit
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;gem:~ mmattsson$ docker volume create -d nemo -o cloneOf=myvol1 myvol1-clone
myvol1-clone
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;gem:~ mmattsson$ docker run --rm -it -v myvol1-clone:/data bash
bash-4.4# ls /data
myfile.txt
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In this simple workflow:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A docker-machine was setup and the Nemo managed Docker Volume plugin was installed&lt;/li&gt;
&lt;li&gt;A default 10GiB volume got created&lt;/li&gt;
&lt;li&gt;A file was created on the volume&lt;/li&gt;
&lt;li&gt;A clone of the volume was created and attached to a new container&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The exact same workflow, along with several others, is available with the HPE Nimble Storage Docker Volume plugin.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; By default Nemo creates a pool on a loopback device mapped to a 64GiB file in &lt;code&gt;/var/lib/nemo&lt;/code&gt; on the host.&lt;/p&gt;
&lt;h1&gt;Plenty of fish in the sea&lt;/h1&gt;
&lt;p&gt;One might wonder if Nemo is compatible with &lt;a href=&quot;https://github.com/hpe-storage/dory&quot;&gt;Dory&lt;/a&gt;. That is a given! Nemo fully implements the Docker Volume API, so &lt;code&gt;dory&lt;/code&gt; and &lt;code&gt;doryd&lt;/code&gt; pretty much work stock, with the exception that certain defaults need to be overridden. More details on the Kubernetes integration are available &lt;a href=&quot;https://github.com/NimbleStorage/Nemo/tree/master/runtime/k8s&quot;&gt;in the repo&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Example setup procedures for Kubernetes:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;gem:~ mmattsson$ kubectl create -f https://raw.githubusercontent.com/NimbleStorage/Nemo/master/runtime/k8s/daemonset-nemod.yaml
daemonset.apps/nemod created
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;gem:~ mmattsson$ kubectl create -f https://raw.githubusercontent.com/NimbleStorage/Nemo/master/runtime/k8s/deploy-doryd.yaml
clusterrole.rbac.authorization.k8s.io/doryd created
clusterrolebinding.rbac.authorization.k8s.io/doryd created
serviceaccount/doryd created
deployment.extensions/kube-storage-controller-doryd created
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The cluster is now ready to create Storage Classes, Persistent Volume Claims or use the FlexVolume driver inline. A &lt;code&gt;StorageClass&lt;/code&gt; and &lt;code&gt;StatefulSet&lt;/code&gt; example is provided:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;gem:~ mmattsson$ kubectl create -f https://raw.githubusercontent.com/NimbleStorage/Nemo/master/runtime/k8s/sc-transactionaldb.yaml
storageclass.storage.k8s.io/transactionaldb created
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;gem:~ mmattsson$ kubectl create -f https://raw.githubusercontent.com/NimbleStorage/Nemo/master/runtime/k8s/statefulset-mariadb.yaml
secret/mariadb created
statefulset.apps/mariadb created
service/mariadb created
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;PersistentVolumeClaim&lt;/code&gt; created by the &lt;code&gt;volumeClaimTemplate&lt;/code&gt; part of the &lt;code&gt;StatefulSet&lt;/code&gt; can be inspected with &lt;code&gt;kubectl&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;gem:~ mmattsson$ kubectl get -o yaml pvc/mariadb-mariadb-0
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: &quot;yes&quot;
    pv.kubernetes.io/bound-by-controller: &quot;yes&quot;
  creationTimestamp: 2018-09-28T00:03:06Z
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app: mariadb
  name: mariadb-mariadb-0
  namespace: default
  resourceVersion: &quot;111181&quot;
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/mariadb-mariadb-0
  uid: e0572150-c2b1-11e8-a34c-000c290a512a
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: transactionaldb
  volumeName: transactionaldb-e0572150-c2b1-11e8-a34c-000c290a512a
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  phase: Bound
&lt;/code&gt;&lt;/pre&gt;
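&lt;p&gt;For quick experiments outside of a &lt;code&gt;StatefulSet&lt;/code&gt;, you can also hand-craft a claim against the same class. A minimal sketch (the claim name is hypothetical):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-nemo-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: transactionaldb
&lt;/code&gt;&lt;/pre&gt;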
&lt;p&gt;Because Nemo attaches to local storage only, Pods and StatefulSets are the most common patterns for deploying stateful applications. In single-node clusters, like the one I used to create these examples, Replication Controllers and Deployments would also work as expected.&lt;/p&gt;
&lt;h1&gt;Next steps&lt;/h1&gt;
&lt;p&gt;We are simply testing the waters with Nemo (pun intended) to figure out if this could be helpful for developers who want to learn about Advanced Data Services, understand how to integrate those services into CI/CD pipelines, kick the tires on lifting &amp;#x26; shifting an application, or validate more traditional workflows where you would provide production data to developers for Copy Data Management in containerized environments.&lt;/p&gt;
&lt;p&gt;Please feel free to give it a whirl and don&apos;t be afraid to reach out on &lt;a href=&quot;https://www.labs.hpe.com/slack&quot;&gt;Slack&lt;/a&gt; or file any issues you may find on &lt;a href=&quot;https://github.com/NimbleStorage/Nemo&quot;&gt;GitHub&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Design at HPE Developer - Newsletter]]></title><link>https://developer.hpe.com/2018-September-29/</link><guid isPermaLink="false">https://developer.hpe.com/2018-September-29/</guid><pubDate>Sat, 29 Sep 2018 05:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Sir Hackington Appbuilder III Tames the IT Monster]]></title><description><![CDATA[My name’s Sir Hackington Appbuilder III for those of you just meeting me, and I’m the trusted guardian of all the swag when members of the…]]></description><link>https://developer.hpe.com/sir-hackington-appbuilder-iii-tames-the-it-monster/</link><guid isPermaLink="false">https://developer.hpe.com/sir-hackington-appbuilder-iii-tames-the-it-monster/</guid><pubDate>Mon, 24 Sep 2018 15:44:54 GMT</pubDate><content:encoded>&lt;p&gt;My name’s Sir Hackington Appbuilder III for those of you just meeting me, and I’m the trusted guardian of all the swag when members of the HPE Developer Community (HPE DEV) head out to conferences. I’d like to tell you a little about our recent time at VMworld Las Vegas that was hosted at the end of August.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/9/vmworld_collage-1538516807872.jpg&quot; alt=&quot;vmworld_collage&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Tame the IT Monster&lt;/h2&gt;
&lt;p&gt;HPE’s theme at VMworld this year was Tame the IT Monster. What does that mean? Simply put, it means reducing IT complexity as businesses continue on their digital transformation journey. With &lt;a href=&quot;https://www.wsj.com/articles/SB10001424053111903480904576512250915629460&quot;&gt;software eating the world&lt;/a&gt; and a substantial shift from a hardware-based to a software-based tech economy, companies must embrace a DevOps mindset to increase their opportunity for new business growth and to serve their customers better. HPE DEV promotes taming the IT monster with a two-pronged approach: by offering multiple integrations for HPE products, and by building deep connections to developers across the industry to collaborate and pool collective knowledge.&lt;/p&gt;
&lt;p&gt;HPE DEV is always excited to talk to people at conferences – especially VMworld. That’s because the HPE DEV portal hosts many integration activities with VMware that include applications and products like &lt;a href=&quot;https://developer.hpe.com/platform/hpe-oneview/home&quot;&gt;HPE OneView&lt;/a&gt;, &lt;a href=&quot;https://developer.hpe.com/platform/hpe-simplivity/home&quot;&gt;HPE SimpliVity&lt;/a&gt;, and &lt;a href=&quot;https://www.hpe.com/us/en/integrated-systems/synergy.html&quot;&gt;HPE Synergy&lt;/a&gt;. Spreading the word through our subject matter experts and having these guys on hand to talk through the integration projects was a great way to let customers and partners understand where they can become more involved in HPE DEV’s community activities.&lt;/p&gt;
&lt;p&gt;Not everyone who stopped by the booth knew about these VMware integrations, but the HPE DEV team made sure that everyone they met with came away fully informed about the specific integrations and the HPE DEV portal more broadly. Even folks from the operations side of their businesses were excited to get signed up for the HPE DEV newsletter so they can forward it to their teams’ developers and engineers.&lt;/p&gt;
&lt;p&gt;In addition to getting folks to sign up for a newsletter, HPE DEV team members worked to spread the word about the community by participating in collaborative activities at VMworld. Pramod Sareddy, HPE DEV’s representative at VMworld, was invited to attend two VMworld VBlogger sessions, where VMware experts, developers and industry bloggers met up to compare notes and share VMware knowledge with each other. This was a great opportunity for Pramod to promote what HPE DEV is doing and get direct feedback from this group. One of the items he promoted was the contributions made to the community by &lt;a href=&quot;/blog/the-advent-of-ephemeral-infrastructure-as-code&quot;&gt;HudsonAlpha&lt;/a&gt;, which got the attendees excited to contribute in a similar way.&lt;/p&gt;
&lt;h2&gt;Spreading the word&lt;/h2&gt;
&lt;p&gt;One of the biggest challenges HPE DEV currently faces is spreading the word about the community and the value it’s providing. For instance, did you know that HPE DEV has an &lt;a href=&quot;https://github.com/HewlettPackard/simplivity-vra-plugin&quot;&gt;HPE SimpliVity plugin for VMware vRealize Automation (vRA)&lt;/a&gt;, which allows users to back up, restore, clone, and move their virtual machines on-demand via the vRA interface? Or that HPE DEV offers &lt;a href=&quot;https://www.hpe.com/us/en/product-catalog/detail/pip.4152978.html&quot;&gt;HPE OneView for VMware vCenter&lt;/a&gt;, which seamlessly integrates manageability features with VMware virtualization solutions?&lt;/p&gt;
&lt;p&gt;I’ll brag a little here—it’s my presence that often draws people over to our booth. From there, HPE DEV conversations start around specific integrations, and then they typically turn into even larger conversations about HPE DEV and all the platforms they support. HPE DEV is committed to developers, DevOps, and helping businesses make their digital transformation successful.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/9/img_5414_resized_1-1538501085089.jpg&quot; alt=&quot;img_5414_resized_1&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Want to see more of HPE DEV?&lt;/h2&gt;
&lt;p&gt;Missed HPE DEV at VMworld? There are still plenty of chances to see the HPE DEV team this year at other developer-friendly conferences all over the world. For more information about where you can find them, visit the &lt;a href=&quot;https://developer.hpe.com/events&quot;&gt;Events&lt;/a&gt; page. HPE DEV is going to have a busy autumn, with appearances at &lt;a href=&quot;https://www.vmworld.com/en/europe/index.html&quot;&gt;VMworld Europe&lt;/a&gt;, &lt;a href=&quot;https://www.hpe.com/events/discover/&quot;&gt;HPE Discover&lt;/a&gt;, and &lt;a href=&quot;https://reinvent.awsevents.com/&quot;&gt;AWS re:Invent&lt;/a&gt;! And don’t forget to follow the &lt;a href=&quot;https://developer.hpe.com/community&quot;&gt;Community&lt;/a&gt; page to see ways to get involved and contribute. See you soon!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE at Microsoft Ignite event, September 24-27]]></title><description><![CDATA[hpe at ms ignite Hewlett Packard Enterprise (HPE) will be at the Microsoft Ignite event, September 24th through 27th in Orlando, Florida…]]></description><link>https://developer.hpe.com/hpe-at-microsoft-ignite-event-september-24-27/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-at-microsoft-ignite-event-september-24-27/</guid><pubDate>Wed, 12 Sep 2018 19:16:44 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/9/hpe-at-ms-ignite-1536779956115.png&quot; alt=&quot;hpe at ms ignite&quot;&gt;&lt;/p&gt;
&lt;p&gt;Hewlett Packard Enterprise (HPE) will be at the &lt;a href=&quot;https://www.microsoft.com/en-us/ignite&quot;&gt;Microsoft Ignite&lt;/a&gt; event, September 24th through 27th in Orlando, Florida. The event, which welcomes thousands of IT professionals each year, is a chance for HPE to showcase their latest innovative technologies. At the event, HPE will host several sessions and demos for attendees to get a chance to see technologies such as &lt;a href=&quot;https://www.hpe.com/us/en/solutions/cloud/azure-hybrid-cloud.html&quot;&gt;HPE ProLiant for Microsoft Azure Stack&lt;/a&gt;, &lt;a href=&quot;https://www.hpe.com/us/en/integrated-systems/software.html&quot;&gt;HPE OneView&lt;/a&gt;, &lt;a href=&quot;https://www.hpe.com/us/en/solutions/cloud/hybrid-it-management.html&quot;&gt;HPE OneSphere&lt;/a&gt;, &lt;a href=&quot;https://www.hpe.com/us/en/integrated-systems/simplivity.html/&quot;&gt;HPE SimpliVity&lt;/a&gt; and the &lt;a href=&quot;https://developer.hpe.com/&quot;&gt;HPE Developer Community&lt;/a&gt; up-close. Below, find a list of the sessions HPE will host at the event.&lt;/p&gt;
&lt;p&gt;On Monday, September 24 at 5:45 PM, don’t miss &lt;a href=&quot;https://myignite.techcommunity.microsoft.com/sessions/66918?source=sessions&quot;&gt;HPE Edgeline and Microsoft Azure powering the Intelligent Edge (THR1134)&lt;/a&gt;. Come hear how HPE and Microsoft are helping customers with the digital transformation of their business with HPE Edgeline Converged Edge Systems and Microsoft Azure innovation that spans the edge to the cloud.&lt;/p&gt;
&lt;p&gt;On September 26, 2018 starting at 2:15 PM, join HPE for &lt;a href=&quot;https://myignite.techcommunity.microsoft.com/sessions/66917?source=sessions&quot;&gt;How to tame your hybrid cloud (BRK1123)&lt;/a&gt;. This breakout session will offer a chance to learn how to make Hybrid IT simple so that data and applications are always optimized and agile, without compromising performance, security or control. HPE experts will discuss how, with your right mix of Hybrid IT in place, powered by software-defined infrastructure, your enterprise will be ready to accelerate innovation and thrive in our digital world.&lt;/p&gt;
&lt;p&gt;Join HPE OneView experts for &lt;strong&gt;Take control of your virtual and physical server management with HPE OneView for Microsoft System Center&lt;/strong&gt;. Come learn how to avoid costly mistakes and added complexity by streamlining operations with &lt;a href=&quot;https://www.hpe.com/us/en/integrated-systems/software.html&quot;&gt;HPE OneView&lt;/a&gt; for Microsoft System Center. This integration provides comprehensive system health and alerting, driver and firmware updates, HPE fabric visualization, and provisioning of HPE servers, networking and storage. Create consistency for software deployment and updates and enable a faster response in the event of server or storage failure, reducing your risk for downtime. &lt;em&gt;Monday, September 24th at 4:30 PM and Wednesday, September 26th at 11:00 AM&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;If you are in the market for an Azure Stack solution, don’t miss &lt;strong&gt;Top 5 Reasons HPE Delivers the Best Microsoft Azure Stack Solution&lt;/strong&gt;. Join this session to understand the 5 key areas that sets HPE apart from other Azure Stack offerings. Don’t miss this chance to hear how HPE is the only vendor who delivers &lt;a href=&quot;https://www.hpe.com/us/en/solutions/cloud/azure-hybrid-cloud.html&quot;&gt;the most configurable Azure Stack solution&lt;/a&gt; available, combined with the best service and support on the market. &lt;em&gt;Monday, September 24th at 2:30 PM and Wednesday, September 26th at 1:00 PM&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Need a real-time view of utilization and costs across your hybrid cloud?  Meet HPE OneSphere&lt;/strong&gt;. Whether it&apos;s multiple data centers or multiple clouds, getting a comprehensive view of resources, workloads and costs can be a challenge. Join this session to see how &lt;a href=&quot;https://www.hpe.com/us/en/solutions/cloud/hybrid-it-management.html&quot;&gt;HPE OneSphere&lt;/a&gt; hybrid cloud management enables you to quickly and easily get high-level or fine-grained visibility into costs and optimize digital services across your hybrid estate in real-time, instead of at the end of a billing cycle. &lt;em&gt;Monday, September 24th at 4:00 PM and Wednesday, September 26th at 3:00 PM&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/integrated-systems/hyper-converged.html&quot;&gt;Hyperconvergence&lt;/a&gt; is a hot topic! At Ignite, come learn &lt;strong&gt;3 reasons why HPE SimpliVity hyperconverged infrastructure simplifies IT environments to deliver dramatic business results&lt;/strong&gt;. This session will look at how &lt;a href=&quot;https://www.hpe.com/us/en/integrated-systems/simplivity.html&quot;&gt;HPE SimpliVity&lt;/a&gt; hyperconverged infrastructure &lt;a href=&quot;http://www.intel.com/&quot;&gt;powered by Intel®&lt;/a&gt; with Hyper-V is designed and optimized for virtualized workloads and how it can streamline IT costs and operations, increase agility and time to production, prevent data loss and maximize uptime. Learn how you can radically transform your IT environment by uniting HPE SimpliVity advanced data services with Microsoft Hyper-V hypervisor and see a live demo in action.  &lt;em&gt;Monday, September 24th at 5:00 PM and Tuesday, September 25th at 4:30 PM&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Lastly, don’t miss &lt;strong&gt;Ease your hybrid cloud management with HPE OneView for Microsoft Azure Log Analytics&lt;/strong&gt; while at the event. This session will focus on how you can manage your on-premises HPE infrastructure with the same Microsoft tools used for cloud services. Experts will discuss how HPE OneView for Microsoft Azure Log Analytics brings visibility of the underlying HPE infrastructure, including hardware and firmware inventory, infrastructure health and status, and long term event correlation and trend analysis. &lt;em&gt;Monday, September 24th at 6:30 PM and Wednesday, September 26th at 4:30 PM&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Stop by booth 2125 at the &lt;a href=&quot;https://www.microsoft.com/en-us/ignite&quot;&gt;Microsoft Ignite&lt;/a&gt; event to meet HPE experts, join these sessions and more. More details about HPE at Microsoft Ignite can be &lt;a href=&quot;https://community.hpe.com/t5/Alliances/Attending-Microsoft-Ignite-September-24-28-See-HPE-there/ba-p/7016641#.W5KFU-hKiM8&quot;&gt;found here&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Using the Azure Stack cost monitor]]></title><description><![CDATA[Overview The Azure Stack cost monitor scripts are used to pull resource consumption data from an Azure Stack instance.  This data can then…]]></description><link>https://developer.hpe.com/using-the-azure-stack-cost-monitor/</link><guid isPermaLink="false">https://developer.hpe.com/using-the-azure-stack-cost-monitor/</guid><pubDate>Fri, 07 Sep 2018 21:43:43 GMT</pubDate><content:encoded>&lt;h1&gt;Overview&lt;/h1&gt;
&lt;p&gt;The Azure Stack cost monitor scripts are used to pull resource consumption data from an Azure Stack instance.  This data can then be imported, manipulated and displayed in a variety of ways. This article describes how to pull this data and provides an example of what it looks like when displayed in a PowerBI dashboard.&lt;/p&gt;
&lt;h1&gt;Prerequisites&lt;/h1&gt;
&lt;ul&gt;
&lt;li&gt;PowerShell for Azure Stack&lt;/li&gt;
&lt;li&gt;Operator level access to Azure Stack&lt;/li&gt;
&lt;li&gt;Connectivity to Azure Stack ARM endpoints&lt;/li&gt;
&lt;li&gt;Familiarity with PowerBI&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;Concepts used in tool&lt;/h1&gt;
&lt;p&gt;The tool obtains resource consumption data directly  from the Azure Stack ARM endpoint using the resource usage API.   The tool workflow is as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Confirm resource rates for Azure Stack meters&lt;/li&gt;
&lt;li&gt;Run the tool against the Azure Stack ARM endpoint with operator credentials&lt;/li&gt;
&lt;li&gt;Use the JSON output to display the resource usage data in PowerBI&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;Procedure&lt;/h1&gt;
&lt;h2&gt;Install PowerShell for Azure Stack&lt;/h2&gt;
&lt;p&gt;If you have not already done so, install PowerShell for Azure Stack using &lt;a href=&quot;https://docs.microsoft.com/azure/azure-stack/azure-stack-powershell-install&quot;&gt;these instructions&lt;/a&gt;. Be sure it&apos;s installed on the computer you will use to access the Azure Stack endpoints.&lt;/p&gt;
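&lt;p&gt;For reference, a minimal installation sketch is shown below. The module names and the 2018-03-01-hybrid API profile are assumptions based on typical Azure Stack environments of this period; always follow the linked instructions for the exact versions matching your Azure Stack build.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;# Sketch only: confirm module names and versions against the official installation instructions.
# Install the profile loader and switch to the API profile supported by your Azure Stack build.
Install-Module -Name AzureRM.BootStrapper -Force
Use-AzureRmProfile -Profile 2018-03-01-hybrid -Force

# Install the Azure Stack PowerShell module.
Install-Module -Name AzureStack -Force
&lt;/code&gt;&lt;/pre&gt;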
&lt;h2&gt;Verify connectivity to Azure Stack ARM endpoint&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Launch PowerShell&lt;/li&gt;
&lt;li&gt;Copy the lines below into your PowerShell window.  Replace the name and endpoint details for your Azure Stack environment&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;# To get this value for Azure Stack integrated systems, contact your administrator who deployed Azure Stack.
#Example: For an Azure Stack that has been deployed with regionname=seattle; externalFQDN=stackcloud.com use value as, https://adminmanagement.seattle.stackcloud.com
$ArmEndpoint = &quot;&amp;#x3C;Admin Resource Manager endpoint for your environment&gt;&quot;

#Use Azure Stack operator credentials to log in
Add-AzureRMEnvironment  -Name &quot;AzureStackAdmin&quot; -ArmEndpoint $ArmEndpoint

# After signing in to your environment, Azure Stack cmdlets can be easily targeted at your Azure Stack instance.
Add-AzureRmAccount -EnvironmentName &quot;AzureStackAdmin&quot;

Get-AzureRmResourceGroup
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;These commands should list the resource groups available on the Azure Stack instance.   If this succeeds, then you have verified connectivity to the Azure Stack endpoint.  If you encounter any errors, check with your Azure Stack administrator to fix the connectivity.&lt;/p&gt;
&lt;h2&gt;Export resource usage data&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Download the &lt;a href=&quot;https://github.com/HewlettPackard/hpe-azurestack/blob/master/Usage-Monitor/Get-AzureStackUsageDataWithCost.ps1&quot;&gt;usage monitor tool&lt;/a&gt; to a local folder.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Supply the parameter values you want to use with the tool. The complete list of parameters is documented in the tool itself. Among the parameters that should be specified are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Granularity : Daily or Hourly&lt;/li&gt;
&lt;li&gt;Time range : From, To datetimes&lt;/li&gt;
&lt;li&gt;Aggregation Level : Provider or Tenant&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;#
# EXAMPLE1: To get AzureStack Admin usage report with hourly granularity
# 
.\Get-AzsUsageInfo.ps1 -StartTime 3/01/2018 -EndTime 3/21/2018 -AzureStackDomain azurestack.local -AzureStackRegion &quot;local&quot; -AzureStackCloudName &quot;Local MAS Cloud&quot; -AADDomain mydir.onmicrosoft.com -Granularity Hourly
# *** The generated output file will be &amp;#x3C;AzureStackRegion&gt;-&amp;#x3C;AzureStackDomain&gt;-Hourly-UsageSummary.json
       
#
# EXAMPLE2: To get AzureStack tenant usage report with daily granularity
.\Get-AzsUsageInfo.ps1 -StartTime 3/01/2018 -EndTime 3/21/2018 -AzureStackDomain azurestack.local -AzureStackRegion &quot;local&quot; -AzureStackCloudName &quot;Local MAS Cloud&quot; -AADDomain mydir.onmicrosoft.com -Granularity Daily -TenantUsage

# The generated output file will be &amp;#x3C;AzureStackRegion&gt;-&amp;#x3C;AzureStackDomain&gt;-Daily-TenantUsageSummary.json
&lt;/code&gt;&lt;/pre&gt;
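&lt;p&gt;Before building reports, it can help to sanity-check the generated JSON from PowerShell. The short sketch below is illustrative only: it assumes the output file name follows the pattern described above, which will vary with your region, domain and granularity parameters.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;# Adjust the file name to match the output produced by your run of the tool.
$usageFile = &quot;.\local-azurestack.local-Daily-TenantUsageSummary.json&quot;

# Load the exported usage data, count the records and show one sample entry.
$usage = Get-Content -Path $usageFile -Raw | ConvertFrom-Json
&quot;Records: {0}&quot; -f @($usage).Count
$usage | Select-Object -First 1
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If the record count and fields look reasonable, import the same file into PowerBI for visualization.&lt;/p&gt;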
&lt;h2&gt;Generate a PowerBI dashboard&lt;/h2&gt;
&lt;p&gt;Use the data from the generated JSON files with PowerBI to create dashboard reports with your desired visualizations.&lt;/p&gt;
&lt;h1&gt;Examples&lt;/h1&gt;
&lt;p&gt;Below are example PowerBI dashboards using data extracted from HPE Azure Stack lab environments.&lt;/p&gt;
&lt;p&gt;The first dashboard shows daily usage over a 2 month period:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/uploads/media/2018/9/dailyusagesummary-1536947449691.png&quot; alt=&quot;&quot; title=&quot;DailyUsageReport&quot;&gt;&lt;/p&gt;
&lt;p&gt;The second dashboard shows hourly usage over a 3 day period:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/uploads/media/2018/9/hourlyusagesummary-1536947466387.png&quot; alt=&quot;&quot; title=&quot;HourlyUsageReport&quot;&gt;&lt;/p&gt;
&lt;h1&gt;Summary&lt;/h1&gt;
&lt;p&gt;Controlling cloud resource consumption and cost is a major consideration for cloud customers. This tool enables you to see which resources are driving your costs, as well as which ones may be silently wasting capacity. This allows Azure Stack operators to better monitor their consumption throughout the month.&lt;/p&gt;
&lt;p&gt;To further extend this tool, the entire process could be automated by periodically exporting usage data and refreshing the dashboard.&lt;/p&gt;
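&lt;p&gt;As a rough illustration of that idea, the sketch below registers a Windows scheduled task that re-runs the export every night. The script path and parameter list are placeholders, not part of the tool itself; supply the full set of parameters shown in the export examples above.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-powershell&quot;&gt;# Placeholders: point $scriptPath at your copy of the tool and complete the parameter list.
$scriptPath = &quot;C:\AzureStackUsage\Get-AzureStackUsageDataWithCost.ps1&quot;
$arguments  = &quot;-NoProfile -File $scriptPath -Granularity Daily&quot;

# Run the export daily at 01:00 so the PowerBI dashboard can be refreshed from fresh JSON.
$action  = New-ScheduledTaskAction -Execute &quot;powershell.exe&quot; -Argument $arguments
$trigger = New-ScheduledTaskTrigger -Daily -At 1am
Register-ScheduledTask -TaskName &quot;AzureStackUsageExport&quot; -Action $action -Trigger $trigger
&lt;/code&gt;&lt;/pre&gt;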
&lt;h1&gt;Next Steps&lt;/h1&gt;
&lt;p&gt;If you&apos;d like to learn more about development on the HPE ProLiant for Microsoft Azure Stack as well as other HPE platforms, join the HPE Developer community below.&lt;/p&gt;
&lt;p&gt;If you have any new project ideas for Azure Stack, contact the team at &lt;a href=&quot;mailto:asic@hpe.com&quot;&gt;asic@hpe.com&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Sir Hackington Appbuilder III Goes to Google Cloud Next]]></title><description><![CDATA[Hey, it’s me, your old friend, Sir Hackington Appbuilder III. For those of you who haven’t met me yet, that’s right, I’m the dog here. More…]]></description><link>https://developer.hpe.com/sir-hackington-appbuilder-iii-goes-to-google-cloud-next/</link><guid isPermaLink="false">https://developer.hpe.com/sir-hackington-appbuilder-iii-goes-to-google-cloud-next/</guid><pubDate>Tue, 28 Aug 2018 14:50:51 GMT</pubDate><content:encoded>&lt;p&gt;Hey, it’s me, your old friend, Sir Hackington Appbuilder III. For those of you who haven’t met me yet, that’s right, I’m the dog here. More importantly, I’m the event mascot of the HPE Developer Community (HPE DEV). Some people like to say that dogs are man’s best friend. In my case, I’m the best friend a developer can have, especially at conferences.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/8/img_20180726_140212-1535468013423.jpg&quot; alt=&quot;Sir Hackington&quot;&gt;&lt;/p&gt;
&lt;p&gt;HPE DEV recently brought me to Google Cloud Next in San Francisco from July 24-26. Even though I’m a dog I’m really into all the things HPE DEV is into. So of course, in 2018, that means lots of thinking about cloud.  Cloud platforms and infrastructure are very important to enterprise IT these days, and HPE is a key player in this space. HPE DEV was really excited to spread the good word about all the new innovations they provide to businesses looking to get the most from their cloud strategies.  For instance HPE DEV showed off a number of the API references currently on the HPE DEV portal that allow developers to dive in and start working on 3rd party applications to integrate with HPE OneSphere and build powerful solutions for their customers. For those who don’t know, HPE OneSphere is HPE’s as-a-service hybrid cloud management solution.&lt;/p&gt;
&lt;h2&gt;HPE DEV Presentations&lt;/h2&gt;
&lt;p&gt;HPE DEV worked hand-in-hand with the HPE OneSphere team to run multiple sessions at the Google Next event booth, all geared toward the developer community.&lt;/p&gt;
&lt;p&gt;At the event, HPE DEV gave demos on how HPE is helping developers be more productive in the API economy. Getting APIs to work well with cloud platforms is a huge part of developers’ tasks these days, and the developers who stopped by the booth came away impressed with HPE’s API work, because it allows them to get up and running on 3rd party applications right away.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/8/img_20180724_162809-1535468245061.jpg&quot; alt=&quot;img_20180724_162809&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/8/img_20180725_113330-1535468321884.jpg&quot; alt=&quot;img_20180725_113330&quot;&gt;&lt;/p&gt;
&lt;p&gt;The team also provided sessions on multi-cloud management using HPE OneSphere, and showed a customer use-case on managing containers effectively using HPE Composable Infrastructure with HPE Synergy and Google Cloud Platform. As most of the developers at the conference were interested in leveraging Google Cloud Platform in their work, showcasing the ways that these developers can get the most from both HPE and Google was a big hit.&lt;/p&gt;
&lt;h2&gt;Key takeaways&lt;/h2&gt;
&lt;p&gt;I may be biased because I hang out with developers all the time, and they take good care of me, but developers really are the best. Google Cloud Next this year had a large number of developers visiting, more than I sometimes see at other conferences I attend. The large number of developers makes it easy for HPE DEV to start up conversations at our booth. I’d like to think it’s all because of my winning personality, but I have to give the HPE DEV team the credit—developers are eager to talk to their peers, and they’re also very interested to hear about what HPE is doing in and with the developer community.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/8/img_20180725_161340-1535468427944.jpg&quot; alt=&quot;img_20180725_161340&quot;&gt;&lt;/p&gt;
&lt;p&gt;Many initially think of HPE as just an infrastructure company, but when HPE DEV comes to conferences with a large developer presence, that perception changes. But don’t just take my word for it. Some of our partners from HudsonAlpha, a nonprofit institute that translates the power of genomics into real world results, stopped by the HPE DEV booth to talk about the benefits of working with HPE.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.pscp.tv/w/1yNGaXRNAQbKj&quot;&gt;https://www.pscp.tv/w/1yNGaXRNAQbKj&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/8/img_20180725_113415-1535468605630.jpg&quot; alt=&quot;img_20180725_113415&quot;&gt;&lt;/p&gt;
&lt;p&gt;As the world moves from an IT Ops model to a DevOps model, HPE is dedicated to being a leader and innovator, and developers are receptive to what the team is accomplishing. So much so that around 100 developers signed up to join our HPE DEV newsletter mailing list! Woof woof! (That means sign up for our newsletter today in mascot dog lingo.)&lt;/p&gt;
&lt;p&gt;In other words, HPE DEV is making inroads with the developer community. Which is pretty exciting for a company that sometimes isn’t thought of as having a significant developer profile. Sometimes building word of mouth on an individual by individual basis is a great way to go, and the HPE DEV team really felt that at Google Cloud Next.&lt;/p&gt;
&lt;h2&gt;Want to see more of HPE DEV?&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/8/google-cloud-next-event-presence-1535468699526.JPG&quot; alt=&quot;google cloud next event presence&quot;&gt;&lt;/p&gt;
&lt;p&gt;Missed HPE DEV at Google Cloud Next? No worries -- they will be traveling all over the world in the next few months to conferences of interest to developers. For more information about where you can find the HPE DEV team, visit the &lt;a href=&quot;https://developer.hpe.com/events&quot;&gt;Events&lt;/a&gt; page. And follow the &lt;a href=&quot;https://developer.hpe.com/community&quot;&gt;Community&lt;/a&gt; page to see highlights of previous appearances and ways to get involved and contribute. Next up, come check in with us at &lt;a href=&quot;https://www.vmworld.com/en/us/index.html&quot;&gt;VMworld&lt;/a&gt;, Las Vegas booth 1300 to learn more! I’ll be there!&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/8/hackignton-1535468992281.jpg&quot; alt=&quot;hackignton&quot;&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Managing iLO sessions with Redfish®]]></title><description><![CDATA[This blog post has been moved to the Server Management Portal.]]></description><link>https://developer.hpe.com/managing-ilo-sessions-with-redfish/</link><guid isPermaLink="false">https://developer.hpe.com/managing-ilo-sessions-with-redfish/</guid><pubDate>Mon, 27 Aug 2018 13:44:46 GMT</pubDate><content:encoded>&lt;br&gt;
&lt;p&gt;&lt;big&gt;This blog post has been moved to the &lt;a href=&quot;https://servermanagementportal.ext.hpe.com/docs/references_and_material/blogposts/etc/managingilosessions/managingilosessionswithredfish&quot;&gt;Server Management Portal&lt;/a&gt;.&lt;/big&gt;&lt;/p&gt;
&lt;br&gt;</content:encoded></item><item><title><![CDATA[Announcing the Introduction of HPE OneView 4.1]]></title><description><![CDATA[We are pleased to announce the latest version of HPE OneView HPE OneView 4.1 has been released. Built with software-defined intelligence and…]]></description><link>https://developer.hpe.com/announcing-the-introduction-of-hpe-oneview-41/</link><guid isPermaLink="false">https://developer.hpe.com/announcing-the-introduction-of-hpe-oneview-41/</guid><pubDate>Tue, 21 Aug 2018 22:50:40 GMT</pubDate><content:encoded>&lt;h1&gt;We are pleased to announce the latest version of HPE OneView&lt;/h1&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/info/oneview&quot;&gt;HPE OneView&lt;/a&gt; 4.1 has been released. Built with software-defined intelligence and a unified API, HPE OneView enables you to deploy infrastructure faster, simplify lifecycle operations, increase productivity, and accelerate time to value.&lt;/p&gt;
&lt;p&gt;Over one million licenses have shipped so far, demonstrating the value customers are seeing in deploying HPE OneView.&lt;/p&gt;
&lt;p&gt;For full details on top new features for HPE OneView 4.1, please read the &lt;a href=&quot;https://community.hpe.com/t5/Shifting-to-Software-Defined/HPE-simplifies-infrastructure-management-with-announcement-of/ba-p/7004750#.W3yUt-hKj4a&quot;&gt;announcement blog&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Don’t miss HPE DEV at VMWorld 2018, August 26-30]]></title><description><![CDATA[Hewlett Packard Enterprise (HPE) will be attending VMWorld in Las Vegas, NV, August 26--30, 2018. VMworld 2018 is VMware’s premier digital…]]></description><link>https://developer.hpe.com/dont-miss-hpe-dev-at-vmworld-2018-august-26-30/</link><guid isPermaLink="false">https://developer.hpe.com/dont-miss-hpe-dev-at-vmworld-2018-august-26-30/</guid><pubDate>Fri, 17 Aug 2018 19:31:21 GMT</pubDate><content:encoded>&lt;p&gt;Hewlett Packard Enterprise (HPE) will be attending &lt;a href=&quot;https://www.vmworld.com/en/us/index.html&quot;&gt;VMWorld&lt;/a&gt; in Las Vegas, NV, August 26--30, 2018. VMworld 2018 is VMware’s premier digital infrastructure event, where businesses can find the tools and partners they need to accelerate their software-defined journey and launch the digital transformation.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer.8ar.ms/uploads/media/2018/8/hpe-dev-at-vmworld-1534534980822.png&quot; alt=&quot;HPE DEV at VMWorld&quot;&gt;
At the event, the HPE Developer Community (HPE DEV) will join other teams from across HPE as they strive to &lt;em&gt;Tame the IT Monster&lt;/em&gt;. HPE and VMware offer the broadest portfolio of integrated and certified solutions that help companies deliver proven outcomes for their business demands. During VMworld, the HPE DEV team will be focused on highlighting the HPE-VMware partnership and how it is helping accelerate DevOps initiatives with a rich set of APIs and automated workflows.&lt;/p&gt;
&lt;p&gt;In addition, HPE DEV will be holding a daily raffle of Amazon Echo Dots for everyone who signs up for the developer newsletter. Sir Hackington Appbuilder III, the HPE DEV booth mascot, will make an appearance as he does at every HPE DEV event, and the team will be handing out stickers along with plenty of expert knowledge.&lt;/p&gt;
&lt;p&gt;Stop by HPE’s booth (#1300) to meet experts from across HPE, not just HPE DEV, and learn about the latest technology from HPE, including &lt;a href=&quot;https://www.hpe.com/us/en/integrated-systems/software.html&quot;&gt;HPE OneView&lt;/a&gt;, &lt;a href=&quot;https://www.hpe.com/us/en/integrated-systems/synergy.html&quot;&gt;HPE Synergy&lt;/a&gt;, and more.
To learn more about &lt;a href=&quot;https://www.vmworld.com/en/us/index.html&quot;&gt;VMWorld 2018&lt;/a&gt;, click here. See you at the event!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Welcome to the HPE Developer Newsletter]]></title><link>https://developer.hpe.com/2018-August-10/</link><guid isPermaLink="false">https://developer.hpe.com/2018-August-10/</guid><pubDate>Fri, 10 Aug 2018 05:00:00 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Sir Hackington Appbuilder III Tells You What Happened at OSCON]]></title><description><![CDATA[Sir Hackington Appbuilder III here. Yeah, take a look at the picture. I’m the dog with the lanyards. I’m pretty cute, huh? That’s what…]]></description><link>https://developer.hpe.com/sir-hackington-appbuilder-iii-tells-you-what-happened-at-oscon/</link><guid isPermaLink="false">https://developer.hpe.com/sir-hackington-appbuilder-iii-tells-you-what-happened-at-oscon/</guid><pubDate>Fri, 10 Aug 2018 01:00:29 GMT</pubDate><content:encoded>&lt;p&gt;Sir Hackington Appbuilder III here. Yeah, take a look at the picture. I’m the dog with the lanyards. I’m pretty cute, huh? That’s what people tell me. Sure, they also call me a mascot, but I’d like to think I’m the most important member of the HPE Developer Community (HPE DEV).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/8/photo-2-1533863198104.JPG&quot; alt=&quot;photo 2&quot;&gt;&lt;/p&gt;
&lt;p&gt;HPE DEV recently attended the 20th annual Open Source Convention, or &lt;a href=&quot;https://community.hpe.com/t5/Shifting-to-Software-Defined/Don-t-miss-HPE-at-OSCON-2018-July-16-19/ba-p/7011089#.W2s6gShKiM8&quot;&gt;OSCON&lt;/a&gt; for short. The convention, put on by O’Reilly, was held from July 16-19 in Portland, Oregon. The HPE team enjoyed some cool weather in the Pacific Northwest and the opportunity to meet a bunch of cool developers spreading the word about HPE’s open source efforts and projects. And they got to show me off as well.&lt;/p&gt;
&lt;p&gt;OSCON is geared exclusively toward developers. Everybody was in jeans and t-shirts. You know, the kinds of people that look like they’d be eager to take me on a walk. Sadly, that didn’t happen because these are also the kinds of people HPE DEV loves to talk to. Which they did. A lot. But, all in all I was pretty pleased with how many in the developer community were interested in talking with my team and wanted to sign up for the monthly newsletter to stay in touch.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/8/1-1533863099426.png&quot; alt=&quot;1&quot;&gt;&lt;/p&gt;
&lt;p&gt;At the event, HPE DEV members Alex Mejias and Jim “Schreck” Schreckengast delivered a presentation on open source for the enterprise, which was extremely well received. They stressed that community development and engagement are critical for adoption; otherwise, developers are just dropping code over a wall and hoping for the best. And of course, having an open source strategy that harmonizes with the business is vital.&lt;/p&gt;
&lt;p&gt;You can check out that presentation &lt;a href=&quot;https://cdn.oreillystatic.com/en/assets/1/event/274/Open-sourcing%20enterprise%20software%20_sponsored%20by%20HPE_%20Presentation.pptx&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/8/2-1533863118364.png&quot; alt=&quot;2&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Key takeaways&lt;/h2&gt;
&lt;p&gt;One of the things attendees learned at the event was that while many developers do not necessarily associate HPE with open source, they were eager to hear about HPE’s presence in the open source community. Grommet, an open source tool set developed by HPE, especially piqued people’s interest. Grommet allows enterprise companies to easily design essential and modern web experiences. Alex Mejias, one of Grommet’s core team members and the community’s leading Grommet evangelist, was a hit with his ability to speak to and demonstrate the ease of using the tools.&lt;/p&gt;
&lt;p&gt;Another thing I learned--the Dev community loves stickers (about as much as I love bones!) All the presenters loaded up attendees with stickers. See that purple guy? That’s Stack, our other mascot (but he’s not real like me!) Stack was our most popular sticker by far, so expect to see more of him. Everybody loves a cute mascot, right? Especially when you feature him on another cute mascot. Ahem, I’m talking about me, Sir Hackington Appbuilder III.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/8/3-1533863149410.png&quot; alt=&quot;3&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/8/img_0066-1533863211157.JPG&quot; alt=&quot;img_0066&quot;&gt;&lt;/p&gt;
&lt;p&gt;HPE DEV is making inroads with the developer community, which they continued at OSCON. While many people think of HPE as just an infrastructure company, the presence of HPE DEV at conferences dedicated to the developer community is starting to change that perception. As the world moves from an IT Ops model to a DevOps model, HPE is dedicated to being a leader and innovator.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/8/4-1533863163033.png&quot; alt=&quot;4&quot;&gt;&lt;/p&gt;
&lt;p&gt;Missed HPE DEV at OSCON? No worries. They will be making appearances at dev-friendly conferences around the United States and Europe over the next few months. For more information about where you can find them, visit the &lt;a href=&quot;https://developer.hpe.com/events&quot;&gt;Events&lt;/a&gt; page. And follow the &lt;a href=&quot;https://developer.hpe.com/community&quot;&gt;Community&lt;/a&gt; page to see highlights of previous appearances and ways to get involved and contribute. And come check us next at &lt;a href=&quot;https://www.vmworld.com/en/us/index.html&quot;&gt;VMworld&lt;/a&gt; to learn more! I’ll be there!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[ DevOps – It’s an IT thing, isn’t it?]]></title><description><![CDATA[“We can never judge the lives of others, because each person knows only their own pain and renunciation. It's one thing to feel that you are…]]></description><link>https://developer.hpe.com/devops-its-an-it-thing-isnt-it/</link><guid isPermaLink="false">https://developer.hpe.com/devops-its-an-it-thing-isnt-it/</guid><pubDate>Thu, 09 Aug 2018 16:13:21 GMT</pubDate><content:encoded>&lt;p&gt;“&lt;a href=&quot;http://www.azquotes.com/quote/350077&quot;&gt;We can never judge the lives of others, because each person knows only their own pain and renunciation. It&apos;s one thing to feel that you are on the right path, but it&apos;s another to think that yours is the only path.&lt;/a&gt;” &lt;a href=&quot;http://www.azquotes.com/author/3041-Paulo_Coelho&quot;&gt;Paulo Coelho&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Over the last 4 years as a CIO Advisor for HPE, I have been advising organisations in the adoption of collaborative working across the business, not just IT. Co-creation across the business and IT boundaries can create effective lean &amp;#x26; agile practices that take full advantage of DevOps processes. These organisations have spanned multiple geographies, technologies, cultural backgrounds and skill sets. Through these experiences I have learnt a lot – mostly that these transformations are very hard, that they normally take much longer than anyone wants, and that it is super important to learn from others on your journey: be ready to face both the oases and the deserts, and don’t stop.&lt;/p&gt;
&lt;h2&gt;DevOps Transformation Journey Choices&lt;/h2&gt;
&lt;p&gt;Here is an understatement – &lt;strong&gt;Change is difficult.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In all my long experience as a CIO and CIO Advisor (30 yrs. +) I have never seen any organisation relishing the thought of having to change any process, people, activity, project, etc etc.  It is normally done with great care and attention.  Hence, there are various facets when effecting change, even when agile practices may already be mainstream in an organisation.&lt;/p&gt;
&lt;p&gt;Consequently, moving beyond agile to unleash the full potential of the business by the use of DevOps practices requires the next level of collaboration, co-creation and automation. But, as with all life journeys, there will be blockers on the route – whether cultural, process or simply people. It is important, then, to adopt the right decision tools to address the pain points and convert the pain into relief, so that it becomes the easiest way to get it done anyway. The choice of people (champions of change), culture (away from not invented here), processes (from rigid to supple) and tools (adopting agile/collaborative tools) will dictate your journey choices and the difficulty in getting to your destination.&lt;/p&gt;
&lt;p&gt;As Paolo Coelho states – “In a forest of a hundred thousand trees, no two leaves are alike. And no two journeys along the same path are alike”&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer.8ar.ms/uploads/media/2018/8/hpe20180226034_800_0_72_rgb-1533835764985.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h3&gt;My 3-Step Plan for an Effective DevOps Transformation Journey&lt;/h3&gt;
&lt;p&gt;This is my 3-step approach to DevOps Transformation that I use with organisations that are on this journey. It is NOT a sequential step approach; it is an agile step approach – that is to say, sometimes sequential, other times concurrent, but mainly aligned to the culture and capacity of the organisation.&lt;/p&gt;
&lt;p&gt;However, because cultural change, politics and behaviour underpin every aspect of any transformational change programme it is essential to read each step within this context and ask a critical question each time in terms of “is the organisation ready for this?”  For instance, if your initiative is about accelerating the delivery of new capabilities within your sales and marketing department – but it needs the support and training of 25,000 front-line staff – can you afford this, how long will it take, what’s your ROI, etc etc…&lt;/p&gt;
&lt;p&gt;You need to understand where the boundaries are politically and economically and figure out how to break those down.&lt;/p&gt;
&lt;h3&gt;Step 1 – Expand agile practices beyond IT&lt;/h3&gt;
&lt;p&gt;Too often DevOps is seen as the mandate solely of IT and there is an assumption that the business and in particular its C-level executives don’t take any interest in the software development function nor understand words like “agile” and “waterfall”, never mind “DevOps” – they just want it to get done.&lt;/p&gt;
&lt;p&gt;It is essential to explain that there has been a shift in the way software is developed with the advent of agile and DevOps, one that resembles the lean manufacturing shift of the 80s and 90s. Furthermore, it is a well understood fact that “software is eating the world” and most businesses need apps (both mobile and web apps) to sell their products. Any time you have an app, software development is seen as a critical part of the day-to-day business strategy. Hence, C-level executives are definitely taking a deeper interest in how these things can be delivered in an effective way.&lt;/p&gt;
&lt;p&gt;Still, without true business involvement, customer feedback and effective operations to break down silos and improve outcomes, the expected efficiencies and benefits will never come.  All stakeholders in an organisation must be involved with full support from its leadership and management across all disciplines.&lt;/p&gt;
&lt;h3&gt;Step 2 – Shift Left within a Continuous Culture&lt;/h3&gt;
&lt;p&gt;Continuous this, continuous that, continuous everything – every organisational discipline has had this notion ingrained into its very DNA as a process of finding and eliminating waste on an ongoing basis. Add to that today’s agile age, where continuous delivery is a critical outcome for survival, and it becomes part of an organisation’s business culture that must involve everyone: leadership, management and employees.&lt;/p&gt;
&lt;p&gt;If we also add the premise of “Shift Left” across an organisation you would then move things that we typically do in later stages earlier. (It is human nature, but many people tend to defer particularly tough issues) – even shift left in terms of the customer by empowering the customer further.&lt;/p&gt;
&lt;p&gt;DevOps is predicated on shifting left in its testing approach, performing testing earlier in the software delivery lifecycle, which will inevitably eliminate long back-end dependencies and increase quality.&lt;/p&gt;
&lt;h3&gt;Step 3 – Empower a culture of Fail Fast not Fail-Silently&lt;/h3&gt;
&lt;p&gt;When I first started my business work life, it was the norm to “fail silently”: you mask the failure, hope it does not get spotted, blame someone else or simply ignore the issue. The new way of thinking at innovative companies is for the business to fail fast &amp;#x26; fail often, as long as they learn the lesson. This strategy can help the company grow in an agile way. Across the organisation, not just technology, DevOps “failing” principles can include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Fail Early, the sooner the failure is spotted, the sooner the learning begins and the sooner you can ultimately fix it. This will also allow you to get real and fast feedback about what works and what does not;&lt;/li&gt;
&lt;li&gt;Fail Fast, so that we can begin the learning process as fast as possible. In DevOps, for instance, a test-driven process where you write a failing test before you even produce the code;&lt;/li&gt;
&lt;li&gt;Fail Often, the more things you try, the more failures you will see and therefore the more chances to both learn and steer your output in the right direction. In addition, this will remove the need to waste time by working on incorrect avenues;&lt;/li&gt;
&lt;li&gt;Fail Better, with early and frequent failures you maximise the learning, which will deliver customer advocacy.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;According to Heraclitus, the Greek philosopher, “the only thing constant is change.” Indeed, your business, and particularly the way you run its digital performance, is no exception. But although DevOps is obviously the evolution of software development from waterfall delivery to agile delivery, its premise and baseline are predicated on sound business principles that must be applied across the business – NOT just IT.&lt;/p&gt;
&lt;p&gt;True transformation and change, continuous agile processes, collaborative and co-creation working, effective governance and quality services are a matter for all disciplines in an organisation. IT obviously must enable the business with effective technologies and governance, but it cannot do it alone – it needs a true partner in the business to deliver effective business solutions and outcomes.&lt;/p&gt;
&lt;p&gt;“&lt;a href=&quot;https://quotesia.com/paulo-coelho-quote/908683&quot;&gt;In our obsessive wish to arrive, we often forget the most important thing, which is the journey&lt;/a&gt;” , &lt;a href=&quot;https://quotesia.com/paulo-coelho-quotes&quot;&gt;Paulo Coelho&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/services.html&quot;&gt;Get started with your transformation today by visiting our HPE Pointnext Web-site&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Mario Devargas&lt;/p&gt;
&lt;p&gt;WW Strategic Transformation&lt;/p&gt;
&lt;p&gt;Hewlett Packard Enterprise&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Master the Redfish Server States to improve your monitoring and management applications]]></title><description><![CDATA[This blog post has been moved to the Server Management Portal.]]></description><link>https://developer.hpe.com/master-the-redfish-server-states-to-improve-your-monitoring-and-manageme/</link><guid isPermaLink="false">https://developer.hpe.com/master-the-redfish-server-states-to-improve-your-monitoring-and-manageme/</guid><pubDate>Mon, 06 Aug 2018 14:07:07 GMT</pubDate><content:encoded>&lt;br&gt;
&lt;p&gt;&lt;big&gt;This blog post has been moved to the &lt;a href=&quot;https://servermanagementportal.ext.hpe.com/docs/references_and_material/blogposts/etc/masterserverstates/masterserverstates&quot;&gt;Server Management Portal&lt;/a&gt;.&lt;/big&gt;&lt;/p&gt;
&lt;br&gt;</content:encoded></item><item><title><![CDATA[Using the SimpliVity API with Java]]></title><description><![CDATA[Using the SimpliVity API with Java This sample Java code performs authentication, issues example GET requests, performs a POST operation (in…]]></description><link>https://developer.hpe.com/using-the-simplivity-api-with-java/</link><guid isPermaLink="false">https://developer.hpe.com/using-the-simplivity-api-with-java/</guid><pubDate>Fri, 03 Aug 2018 16:27:34 GMT</pubDate><content:encoded>&lt;h1&gt;&lt;strong&gt;Using the SimpliVity API with Java&lt;/strong&gt;&lt;/h1&gt;
&lt;p&gt;This sample Java code performs authentication, issues example GET requests, performs a POST operation (in this case, creating a policy), and monitors the status of the operation using a task instance.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;package main.java;
import java.security.SecureRandom;
import java.security.cert.X509Certificate;
import java.util.Arrays;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;
import javax.net.ssl.SSLSession;
import javax.net.ssl.HostnameVerifier;
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpMethod;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.util.LinkedMultiValueMap;
import org.springframework.util.MultiValueMap;
import org.springframework.web.client.RestTemplate;
import org.json.JSONObject;
import com.sun.org.apache.xerces.internal.impl.dv.util.Base64;

public class RestClient {

    private static String access_token;
    private String BASE_URL;
    static final String HMS_USERNAME = &quot;HMS_USER&quot;;
    static final String HMS_PASSWORD = &quot;HMS_PASS&quot;;
    public RestClient(String hostIp)
    {
        BASE_URL = &quot;https://&quot;+hostIp+&quot;/api/&quot;;
    }

    // Create a trust manager that does not validate certificate chains.
    private void enableSSL()
    {
        TrustManager[] trustAllCerts = new TrustManager[]
        { new X509TrustManager()
        {
            public X509Certificate[] getAcceptedIssuers()
            {
                return new X509Certificate[0];
            }
            public void checkClientTrusted(X509Certificate[] certs, String authType)
            {
            }
            public void checkServerTrusted(X509Certificate[] certs, String authType)
            {
            }
        } };
        try {
            SSLContext sc = SSLContext.getInstance(&quot;TLSv1.2&quot;);
            sc.init(null, trustAllCerts, new SecureRandom());
            HttpsURLConnection.setDefaultSSLSocketFactory(sc.getSocketFactory());
            HttpsURLConnection.setDefaultHostnameVerifier(new HostnameVerifier() {
                public boolean verify(String hostname, SSLSession session) {
                    return true;
                }
            });

        } catch (Exception e) {
        }
    }

    /*
     * Authenticate user and retrieve access token.
     */
     public String getAccessToken()
    {
        enableSSL();

        String encoding = Base64.encode(&quot;simplivity:&quot;.getBytes());

        RestTemplate restTemplate = new RestTemplate();
        MultiValueMap&amp;#x3C;String, String&gt; body = new LinkedMultiValueMap&amp;#x3C;String, String&gt;();
        body.add(&quot;username&quot;, HMS_USERNAME);
        body.add(&quot;password&quot;, HMS_PASSWORD);
        body.add(&quot;grant_type&quot;, &quot;password&quot;);
        HttpHeaders headers = new HttpHeaders();
        headers.set(&quot;Accept&quot;, &quot;application/json&quot;);
        headers.set(&quot;Authorization&quot;, &quot;Basic &quot; + encoding);
        HttpEntity&amp;#x3C;?&gt; entity = new HttpEntity&amp;#x3C;Object&gt;(body, headers);
        ResponseEntity&amp;#x3C;String&gt; res = restTemplate.exchange(
            BASE_URL+&quot;oauth/token&quot;, HttpMethod.POST, entity,
            String.class);
        JSONObject jsonObj = new JSONObject(res.getBody());
        access_token = (String) jsonObj.get(&quot;access_token&quot;);
        System.out.println(&quot;Authenticated user and retrieved access token: &quot;+ access_token);
        return access_token;
    }

    /*
     * Issue a GET request: GET /policies.
     */
    public Object getPolicies()
    {
        RestTemplate restTemplate = new RestTemplate();
        HttpHeaders headers = new HttpHeaders();
        headers.setAccept(Arrays.asList(MediaType.APPLICATION_JSON));
        headers.set(&quot;Authorization&quot;, &quot;Bearer &quot; + access_token);
        HttpEntity&amp;#x3C;?&gt; entity = new HttpEntity&amp;#x3C;Object&gt;(&quot;parameters&quot;, headers);
        ResponseEntity&amp;#x3C;String&gt; res = restTemplate.exchange(BASE_URL+&quot;policies&quot;, HttpMethod.GET, entity, String.class);
        JSONObject jsonObj = new JSONObject(res.getBody());
        Object policies=  jsonObj.get(&quot;policies&quot;);
        System.out.println(policies.toString());
        return policies;
    }

    /*
     * Issue a GET request with sorting and filtering:
     * GET the first 100 policies
     * sorted in ascending order by name
     * and show only the name and rules fields.
     */
    public Object getFirst100Policies()
    {
        RestTemplate restTemplate = new RestTemplate();
        HttpHeaders headers = new HttpHeaders();
        headers.set(&quot;Accept&quot;, &quot;application/json&quot;);
        headers.set(&quot;Authorization&quot;, &quot;Bearer &quot; + access_token);
        HttpEntity&amp;#x3C;?&gt; entity = new HttpEntity&amp;#x3C;Object&gt;(&quot;parameters&quot;, headers);
        ResponseEntity&amp;#x3C;String&gt; res = restTemplate.exchange(BASE_URL+&quot;policies?fields=name,rules&amp;#x26;limit=100&amp;#x26;offset=0&amp;#x26;sort=name&amp;#x26;order=ascending&quot;,
            HttpMethod.GET, entity, String.class);
        JSONObject jsonObj = new JSONObject(res.getBody());
        Object policies=  jsonObj.get(&quot;policies&quot;);
        System.out.println(policies.toString());
        System.out.println(&quot;Limit: &quot;+ jsonObj.get(&quot;limit&quot;));
        System.out.println(&quot;Count: &quot;+ jsonObj.get(&quot;count&quot;));
        return policies;
    }

    /*
     * Issue a POST request: Create a new policy.
     */
    public void createNewPolicy()
    {
        RestTemplate restTemplate = new RestTemplate();
        // Set a custom media type.
        MediaType myMediaType = new MediaType(&quot;application&quot;, &quot;vnd.simplivity.v1+json&quot;);

        // Set the headers.
        HttpHeaders headers = new HttpHeaders();   
        headers.setAccept(Arrays.asList(myMediaType));
        headers.setContentType(myMediaType);
        headers.set(&quot;Authorization&quot;, &quot;Bearer &quot; + access_token);

        // Form the POST body.
        String policyMo =  &quot;{\&quot;name\&quot;: \&quot;randomPolicyName4\&quot;}&quot;;
        HttpEntity&amp;#x3C;?&gt; entity = new HttpEntity&amp;#x3C;String&gt;(policyMo, headers);

        // Issue the POST operation and expect a task object in return.
        ResponseEntity&amp;#x3C;String&gt; res = restTemplate.exchange(BASE_URL+&quot;policies&quot;, HttpMethod.POST, entity, String.class);
        JSONObject jsonObj = new JSONObject(res.getBody());
        Object task = jsonObj.get(&quot;task&quot;);
        JSONObject taskJson = new JSONObject(task.toString());
        String taskId = taskJson.getString(&quot;id&quot;);
        String state = taskJson.getString(&quot;state&quot;);

        // Monitor the status of the policy creation operation by using a loop to query
        // the task while this task is IN_PROGRESS.
        // The state field in the JSON response body indicates the status.
        while(state.equals(&quot;IN_PROGRESS&quot;))
        {
            // Wait one second.
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            // Poll the task with the same authenticated headers; the task resource also requires the bearer token.
            res = restTemplate.exchange(BASE_URL+&quot;tasks/&quot;+taskId, HttpMethod.GET,
                new HttpEntity&amp;#x3C;Object&gt;(headers), String.class);
            jsonObj = new JSONObject(res.getBody());
            task = jsonObj.get(&quot;task&quot;);
            taskJson = new JSONObject(task.toString());
            state = taskJson.getString(&quot;state&quot;);
        }
        System.out.println(&quot;Task object: &quot; +task.toString());    
    }

    public static void main(String[] args)
    {
        RestClient restClient = new RestClient(&quot;10.150.1.71&quot;);
        // Authenticate user and retrieve access token.
        restClient.getAccessToken();
        // GET policies.
        restClient.getPolicies();
        // GET first 100 policies sorted in ascending order by name 
        // and showing only the name and rules fields.
        restClient.getFirst100Policies();
        // POST /policies: Create a new policy.
        restClient.createNewPolicy();
    }
}
&lt;/code&gt;&lt;/pre&gt;</content:encoded></item><item><title><![CDATA[Setting Bios and Storage Controller Properties with Redfish]]></title><description><![CDATA[This blog post has been moved to the HPE server management portal.]]></description><link>https://developer.hpe.com/setting-bios-and-storage-controller-properties-with-redfish/</link><guid isPermaLink="false">https://developer.hpe.com/setting-bios-and-storage-controller-properties-with-redfish/</guid><pubDate>Thu, 19 Jul 2018 15:11:00 GMT</pubDate><content:encoded>&lt;style&gt; li { font-size: 27px; line-height: 33px; max-width: none; } &lt;/style&gt;
&lt;style&gt;
.warning-box {
    background-color: #fff3cd; /* Light yellow background */
    /* border: 2px solid #ffeb3b; /* Yellow border */
    color: #856404; /* Dark text color for contrast */
    padding: 20px; /* Padding inside the box */
    border-radius: 5px; /* Rounded corners */
    margin: 20px 0; /* Margin for spacing */
    width: 80%; /* Width of the rectangle */
    max-width: 600px; /* Maximum width of the rectangle */
    margin-left: auto; /* Center the rectangle horizontally */
    margin-right: auto; /* Center the rectangle horizontally */
    /* box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1); /* Optional: Adds a shadow for depth */
}

.warning-box p {
    margin: 0; /* Remove default margin from the paragraph */
    font-weight: bold; /* Bold text */
}

.warning-box a {
    color: blue; /* Change the color of the link */
    text-decoration: none; /* Remove the underline */
    /*font-style: italic; /* Make the text italic */
    /*font-weight: bold; /* Make the text bold */
        }
&lt;/style&gt;
&lt;!--StartFragment--&gt;
&lt;div class=&quot;warning-box&quot;&gt;
&lt;p&gt;This blog post has been moved to the &lt;a href=&quot;https://servermanagementportal.ext.hpe.com/docs/references_and_material/blogposts/etc/biosandstorageprops/biosandstorageprops&quot; target=&quot;_blank&quot;&gt;HPE server management portal&lt;/a&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;!--EndFragment--&gt;</content:encoded></item><item><title><![CDATA[HPE Discover Hackathon Wrap-up: Top 10 list of what you should know]]></title><description><![CDATA[hpe dev hack shack team At HPE Discover 2018 Las Vegas this year, HPE held their first ever hackathon for customers in a Hack Shack located…]]></description><link>https://developer.hpe.com/hpe-discover-hackathon-wrap-up-top-10-list-of-what-you-should-know/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-discover-hackathon-wrap-up-top-10-list-of-what-you-should-know/</guid><pubDate>Tue, 10 Jul 2018 16:10:36 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/6/hpe-dev-hack-shack-team-1531348164725.jpg&quot; alt=&quot;hpe dev hack shack team&quot;&gt;&lt;/p&gt;
&lt;p&gt;At HPE Discover 2018 Las Vegas this year, HPE held their first ever hackathon for customers in a Hack Shack located in the event showcase. The goal of the hackathon was to promote innovation through open source collaboration.&lt;/p&gt;
&lt;p&gt;During the 3-day event, over 280 people experienced the HPE Hack Shack. They entered a large workspace with slated wood walls containing 10 tables, chairs, and computers available for hacking – all complete with a mood lighting disco ball and background music. A backyard recreational area was also provided (just for hanging out) complete with Astroturf grass, swings, video games, a giant Jenga tower, and a Corn Hole game.&lt;/p&gt;
&lt;p&gt;Seventy-three enthusiastic would-be hackers sat down for one of five two-hour sessions to test their coding chops. The participants’ skills ranged from absolute beginner to intermediate levels. All participants received a coveted HPE DEV T-shirt. And one talented and creative hacker secured the grand prize of a DJI Mavic Air Drone with accessories. Second place received a DJI Spark drone with accessories. Amazon Echo Dots were also awarded to top finishers.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/6/hpe-dev-hack-shack-line-1531348159077.jpg&quot; alt=&quot;hpe dev hack shack line&quot;&gt;&lt;/p&gt;
&lt;h1&gt;Top 10 things you should know about the first-ever HPE Discover Hackathon&lt;/h1&gt;
&lt;h3&gt;10) You didn’t need good looks or coding skills to get into the HPE Discover Hack Shack&lt;/h3&gt;
&lt;p&gt;But you did need a willingness to learn. Because no coding talents were required, even novices participated…and did well!&lt;/p&gt;
&lt;h3&gt;9) Pretty much everyone thought it was cool&lt;/h3&gt;
&lt;p&gt;Hackathons are a fun way to learn software development and coding skills—and the first ever HPE Discover hackathon delivered. Participants were able to tap into skills that helped them better understand infrastructure as code, a great skill to have in this market.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/6/hack-shack-runnerup-1531348144583.jpg&quot; alt=&quot;Hack Shack Ryan Jensen &quot;&gt;&lt;/p&gt;
&lt;h3&gt;8) A common question: “Why am I coding ‘Hello World’?”&lt;/h3&gt;
&lt;p&gt;Traditionally, “Hello World” is the very first program people write. It’s used to illustrate the basic syntax of programming languages. Hey, we didn’t make the rules…we just follow them…well, at least sometimes.&lt;/p&gt;
&lt;h3&gt;7) Most common reaction of participants: Shock and awe-some!&lt;/h3&gt;
&lt;p&gt;The HPE Developer Community (HPE DEV) expected as much. Many people typically think of HPE as an infrastructure company. HPE DEV is starting to change that. An awesome team of experts are working hard to build out a vibrant, collaborative, and contributing software developer community. So stay tuned…&lt;/p&gt;
&lt;h3&gt;6) Top question asked: Where’s the beer?&lt;/h3&gt;
&lt;p&gt;As esteemed hosts, the HPE DEV team saved it for the end, knowing all hackathons end well with a beer. Besides, you don’t win friends (or hackathons) over salad.&lt;/p&gt;
&lt;h3&gt;5) Everyone loved Sir Hackington Appbuilder III&lt;/h3&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/6/hpe-dev-sir-hackington-1531348171119.jpg&quot; alt=&quot;Sir Hackington III&quot;&gt;&lt;/p&gt;
&lt;p&gt;Who doesn’t love an adorable puppy – even if he is a statue? Sir Hackington Appbuilder III will be attending all upcoming hackathons, as he is now the HPE DEV team’s official mascot.&lt;/p&gt;
&lt;h3&gt;4) The winner’s hack was amazing&lt;/h3&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/6/hack-shack-winner-michael-lai-1531348152252.jpg&quot; alt=&quot;Michael Lai Hack Shack winner&quot;&gt;&lt;/p&gt;
&lt;p&gt;Congratulations to Michael Lai, who participated in the HPE OneView “detect infrastructure failures” hack. He created a server profile from a template and applied it to server hardware. He then created an alert resource to monitor the server profile. His script runs every 10 seconds, monitoring the status and sending a status alert to the user.&lt;/p&gt;
&lt;h3&gt;3) HPE loves Open Source and sharing!&lt;/h3&gt;
&lt;p&gt;The winner’s script is in a state that can be easily extended and shared with the open source community to monitor infrastructure resources. It’s also something HPE DEV will add as an example to the SDKs – along with other hacks.&lt;/p&gt;
&lt;h3&gt;2) Who says you can’t teach an old dog new tricks&lt;/h3&gt;
&lt;p&gt;As the first-ever HPE Discover Hackathon, the HPE DEV team wasn’t really sure what attendees would think. The “Hello World” hack was eagerly embraced; beginners really do want to learn something new. And in today’s software-defined data center, the magic of coding is a great new skill to develop, and HPE DEV is here to help you grow your skills.&lt;/p&gt;
&lt;h3&gt;1) Overall takeaway from HPE Discover Hackathon&lt;/h3&gt;
&lt;p&gt;Everyone had a great time and will definitely come back for future Discover Hackathons. The HPE DEV team is already brainstorming creative and fun challenges for the next one!&lt;/p&gt;
&lt;h1&gt;A final word from our sponsor:&lt;/h1&gt;
&lt;p&gt;The HPE Developer Community connects you with HPE experts, resources and other developers worldwide to help you improve automation, service delivery, and workflow assets using open source software on HPE platforms and solutions. If you are interested in learning more or participating in a future HPE Hackathon, please reach out to &lt;a href=&quot;https://www.labs.hpe.com/slack&quot;&gt;HPE DEV on SLACK&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/6/hackathon-1530031408442.png&quot; alt=&quot;hackathon&quot;&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE OneView Puppet module now available with support for HPE OneView 4.0]]></title><description><![CDATA[HPE OneView Puppet module now available with support for HPE OneView 4.0 The HPE OneView Puppet module provides Puppet manifests and…]]></description><link>https://developer.hpe.com/hpe-oneview-puppet-module-now-available-with-support-for-hpe-oneview-40/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-oneview-puppet-module-now-available-with-support-for-hpe-oneview-40/</guid><pubDate>Tue, 03 Jul 2018 16:50:59 GMT</pubDate><content:encoded>&lt;h1&gt;HPE OneView Puppet module now available with support for HPE OneView 4.0&lt;/h1&gt;
&lt;p&gt;The HPE OneView Puppet module provides Puppet manifests and resources to interact with HPE OneView and HPE Synergy Image Streamer APIs, enabling developers to easily build integrations and scalable solutions.&lt;/p&gt;
&lt;p&gt;The HPE OneView Puppet module v2.3.0 now supports HPE OneView 4.0 (REST API version 600).&lt;/p&gt;
&lt;p&gt;The list of supported resources and changes is available at: &lt;a href=&quot;https://github.com/HewlettPackard/oneview-puppet/blob/v2.3.0/CHANGELOG.md&quot;&gt;https://github.com/HewlettPackard/oneview-puppet/blob/v2.3.0/CHANGELOG.md&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Release content is available at: &lt;a href=&quot;https://github.com/HewlettPackard/oneview-puppet/releases/tag/v2.3.0&quot;&gt;https://github.com/HewlettPackard/oneview-puppet/releases/tag/v2.3.0&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The repository with code and examples is available on GitHub at: &lt;a href=&quot;https://github.com/HewlettPackard/oneview-puppet&quot;&gt;https://github.com/HewlettPackard/oneview-puppet&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Puppet Forge details are available at: &lt;a href=&quot;https://forge.puppet.com/hewlettpackard/oneview&quot;&gt;https://forge.puppet.com/hewlettpackard/oneview&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Winners named in the HPE Developer Community Hackathon  ]]></title><description><![CDATA[hackathon On May 24, HPE developers from around the world and across various HPE organizations began participating in the HPE Hackathon. A…]]></description><link>https://developer.hpe.com/winners-named-in-the-hpe-developer-community-hackathon/</link><guid isPermaLink="false">https://developer.hpe.com/winners-named-in-the-hpe-developer-community-hackathon/</guid><pubDate>Tue, 26 Jun 2018 16:40:23 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/6/hackathon-1530031408442.png&quot; alt=&quot;hackathon&quot;&gt;&lt;/p&gt;
&lt;p&gt;On May 24, HPE developers from around the world and across various HPE organizations began participating in the HPE Hackathon. A total of 49 individual hackers took up the challenge, comprising 18 teams that ranged in size from 1-7 participants. In the end, 12 teams crossed the finish line and presented outstanding projects to a distinguished panel of 4 judges.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/6/may24-2-1530031551619.jpg&quot; alt=&quot;may24 2&quot;&gt;&lt;/p&gt;
&lt;p&gt;Judging took place from June 4th-10th. Teams were given 15 minutes to present their projects, which were judged on the following criteria:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;20%: Completeness &amp;#x26; Presentation&lt;/li&gt;
&lt;li&gt;20%: Technical Merit&lt;/li&gt;
&lt;li&gt;20%: Originality&lt;/li&gt;
&lt;li&gt;40%: Potential Customer Impact&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Participant experience varied widely, from teams with little to no coding background (looking to grow their skills) all the way to very experienced coders. Each team was assigned a coach from the Developer Experience &amp;#x26; Incubation team to guide and assist the team’s work.&lt;/p&gt;
&lt;h1&gt;And the winners are…&lt;/h1&gt;
&lt;p&gt;Congratulations to &lt;strong&gt;Team Chatbot&lt;/strong&gt;, who was awarded first place for their project. Team members include Jesse Olsen, Dana Lynn, and Jeroen Kleen. Team Chatbot built an awesome chatbot based on a React/Redux platform. More details about this project will be coming soon on the HPE DevCom portal.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/6/ric-hack-1530031624425.jpg&quot; alt=&quot;ric hack&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Team Reality Distortion&lt;/strong&gt; received second place honors. Congratulations to the one-man team of Damian Janiszewski.&lt;/p&gt;
&lt;p&gt;Damian developed an HPE OneSphere extension module that exports HPE OneSphere metrics for Prometheus time series database and alerting system. Prometheus metrics are then easily visualized using Grafana dashboards.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/6/image-uploaded-from-ios-1530031727049.jpg&quot; alt=&quot;hackathon warsaw&quot;&gt;&lt;/p&gt;
&lt;p&gt;Third place went to &lt;strong&gt;Team Last Bench&lt;/strong&gt; for their project. Contributing team members include Gandharva S, Shiva Kumar M, Krishna Kanth Mallela, Manjunath Patil, and Sahana Sampige Prabhakar.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Thank you to all participants&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The HPE DevCom would like to thank all of the HPE Hackathon participants for their outstanding creativity and hard work. A partial list of other completed projects includes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Team NoName&lt;/strong&gt;: Collect &amp;#x26; display HPE OneSphere data using R&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Team Everest&lt;/strong&gt;: Ruby bindings for HPE OneSphere&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Team Hela&lt;/strong&gt;: HPE OneSphere metrics collection using Grafana&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Team Spectrum Spiders&lt;/strong&gt;: Ansible module for HPE OneSphere&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Team Ashlesha&lt;/strong&gt;: Enable provisioning of AWS CloudFormation templates via the HPE OneSphere catalog&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Team Last Bench&lt;/strong&gt;: ServiceNow integration with HPE OneSphere&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Team India Matrix Master&lt;/strong&gt;: Terraform module for HPE OneSphere&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The work completed by all of the teams during the hackathon will be leveraged by the HPE Developer Community (@HPE_DevCom). More information about these projects and many others will be available in upcoming articles on the community’s blog site.&lt;/p&gt;
&lt;h1&gt;Why participate in a hackathon?&lt;/h1&gt;
&lt;p&gt;Hackathons are a great way for companies to foster innovation through collaboration and creative problem-solving. And that’s exactly what the HPE Developer Community witnessed. The hackathon was designed to:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Foster cross-organization collaboration and innovation&lt;/li&gt;
&lt;li&gt;Provide an opportunity for personal growth (growing software development skills in HPE)&lt;/li&gt;
&lt;li&gt;Provide an opportunity for the community to create tangible customer-centric solutions&lt;/li&gt;
&lt;li&gt;Build a foundation upon which to create a robust internal HPE developer community&lt;/li&gt;
&lt;li&gt;Have fun!&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;If you are interested in learning more or participating in a future HPE Hackathon, please reach out to &lt;a href=&quot;https://www.labs.hpe.com/slack&quot;&gt;the HPE DevCom on SLACK&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE OneView Chef cookbook now available with support for HPE OneView 4.0]]></title><description><![CDATA[HPE OneView Chef cookbook now available with support for HPE OneView 4.0 The HPE OneView Chef cookbook provides Chef recipes to interact…]]></description><link>https://developer.hpe.com/hpe-oneview-chef-cookbook-now-available-with-support-for-hpe-oneview-40/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-oneview-chef-cookbook-now-available-with-support-for-hpe-oneview-40/</guid><pubDate>Wed, 06 Jun 2018 19:55:30 GMT</pubDate><content:encoded>&lt;h1&gt;HPE OneView Chef cookbook now available with support for HPE OneView 4.0&lt;/h1&gt;
&lt;p&gt;The HPE OneView Chef cookbook provides Chef recipes to interact with HPE OneView and HPE Synergy Image Streamer APIs, enabling developers to easily build integrations and scalable solutions.&lt;/p&gt;
&lt;p&gt;The HPE OneView Chef cookbook v3.1.0 now supports HPE OneView 4.0 (REST API version 600).&lt;/p&gt;
&lt;p&gt;The list of supported resources and changes is available at: &lt;a href=&quot;https://github.com/HewlettPackard/oneview-chef/blob/v3.1.0/CHANGELOG.md&quot;&gt;https://github.com/HewlettPackard/oneview-chef/blob/v3.1.0/CHANGELOG.md&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Release content is available at: &lt;a href=&quot;https://github.com/HewlettPackard/oneview-chef/releases/tag/v3.1.0&quot;&gt;https://github.com/HewlettPackard/oneview-chef/releases/tag/v3.1.0&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The repository with code and examples is available on GitHub at: &lt;a href=&quot;https://github.com/HewlettPackard/oneview-chef&quot;&gt;https://github.com/HewlettPackard/oneview-chef&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Chef Supermarket details are available at: &lt;a href=&quot;https://supermarket.chef.io/cookbooks/oneview/versions/3.1.0&quot;&gt;https://supermarket.chef.io/cookbooks/oneview/versions/3.1.0&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Storage Provisioning using Ansible with HPE 3PAR Storage]]></title><description><![CDATA[Storage Provisioning using Ansible with HPE 3PAR Storage Recently I was asked to look at getting the HPE 3PAR to work with Ansible. My…]]></description><link>https://developer.hpe.com/storage-provisioning-using-ansible-with-hpe-3par-storage/</link><guid isPermaLink="false">https://developer.hpe.com/storage-provisioning-using-ansible-with-hpe-3par-storage/</guid><pubDate>Wed, 30 May 2018 22:22:15 GMT</pubDate><content:encoded>&lt;h1&gt;Storage Provisioning using Ansible with HPE 3PAR Storage&lt;/h1&gt;
&lt;p&gt;Recently I was asked to look at getting the HPE 3PAR to work with Ansible. My exposure to Ansible before this was by name recognition only; I didn’t have any working knowledge of it. So, like every good challenge, I set off to learn this new “tool”.&lt;/p&gt;
&lt;p&gt;After watching several YouTube videos, I got an elementary understanding of what was possible with Ansible. I must say, it was refreshing that it was YAML-based compared to some of the other languages I had been using, i.e. JavaScript and Ruby. YAML is a much “simpler” language to understand and read.&lt;/p&gt;
&lt;p&gt;If you aren’t familiar with YAML, it stands for “YAML Ain&apos;t Markup Language”. It is used widely among programming languages as well as configuration files for many of the popular apps. So if you have dug around much within Linux, chances are you have come across YAML.&lt;/p&gt;
&lt;p&gt;Here is a quick example from Ansible’s website. Take a look at the documentation to learn more about the formatting for YAML. Reading YAML is pretty straightforward.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;# Employee records
-  martin:
    name: Martin D&apos;vloper
    job: Developer
    skills:
      - python
      - perl
      - pascal
-  tabitha:
    name: Tabitha Bitumen
    job: Developer
    skills:
      - lisp
      - fortran
      - erlang
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Again, this is a simple example, but these same principles apply from the most basic to the most complex Ansible playbooks.&lt;/p&gt;
&lt;p&gt;Now let’s talk more specifically about Ansible and playbooks.&lt;/p&gt;
&lt;p&gt;Ansible’s website says it the best:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Playbooks are Ansible’s configuration, deployment, and orchestration language. They can describe a policy you want your remote systems to enforce, or a set of steps in a general IT process.
If Ansible modules (modules are Ansible “plugins” that control system resources, like services, packages, or files) are the tools in your workshop, playbooks are your instruction manuals, and your inventory of hosts are your raw material.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;At a basic level, playbooks can be used to manage configurations of and deployments to remote machines. At a more advanced level, they can sequence multi-tier rollouts involving rolling updates, and can delegate actions to other hosts, interacting with monitoring servers and load balancers along the way.&lt;/p&gt;
&lt;p&gt;With this in mind, I will be specifically talking about creating Ansible playbooks to automate storage deployments. Many developers and operations teams are already using Ansible to help deploy infrastructure as well as apps within their datacenters, so it makes sense to include storage in these processes and workflows. Since I work primarily with the HPE 3PAR StoreServ array, naturally I wanted to start there and figure out how to make Ansible talk to the HPE 3PAR’s Web Services API (3PAR’s REST API).&lt;/p&gt;
&lt;p&gt;I will be covering two different methods of how you can use Ansible with the 3PAR. If this is already &quot;TL;DR&quot; for you and you just want to know how to use the new Storage module, you can skip over this section and go to the section &lt;strong&gt;&quot;New Better Way (with the Storage Module)&quot;&lt;/strong&gt;.&lt;/p&gt;
&lt;h1&gt;Old Method (without the Storage Module)&lt;/h1&gt;
&lt;p&gt;When I first started looking into how to connect Ansible to the WSAPI, I quickly realized that since we didn’t have a module (at the time), we would have to do it “old school” using REST calls. Since I had recently come from several other projects where I had to learn REST (that is another topic in and of itself), this wasn’t going to be too difficult.
I was able to find the Ansible uri module to work with REST. So let’s take a look at how to write a raw playbook using REST, and then I can show you the new HPE 3PAR module so you understand the benefits that come from the work done by the HPE engineering team.
Here is a “simple” example of a custom playbook using the uri module to query the HPE 3PAR for volumes.&lt;/p&gt;
&lt;p&gt;Let’s start with creating the playbook. Every playbook (since it is YAML based) starts with &lt;code&gt;---&lt;/code&gt; and then the name of the playbook. Here you can define the hosts that it will run against. Since we are using the &lt;strong&gt;uri&lt;/strong&gt; module on the local machine to connect to a 3PAR, we set this to &lt;strong&gt;localhost&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
- name: Connect to 3par - get volumes
  hosts: localhost

  vars:
    auth_3par_user: &quot;3paruser&quot;
    auth_3par_password: &quot;3parpass&quot;
    ip_address_3par: 192.168.1.50
    rest_api_url_3par: &quot;https://{{ ip_address_3par }}:8080/api/v1&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The next section includes your variables. These can be included in the playbook (as shown), provided at runtime, or placed in a configuration file. It can also be a mix of these; for example, global variables could live in a configuration file while the runtime variables are configured in the playbook or provided at the command line with the -e flag.&lt;/p&gt;
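&lt;p&gt;For example, you can override a playbook variable at runtime with the -e flag. The playbook file name below is just a hypothetical name for the example playbook above:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# -e (or --extra-vars) overrides variables defined in the playbook
ansible-playbook get_volumes_playbook.yml -e &quot;ip_address_3par=192.168.1.51&quot;
&lt;/code&gt;&lt;/pre&gt;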
&lt;blockquote&gt;
&lt;p&gt;Be aware, &lt;strong&gt;YAML&lt;/strong&gt; is space sensitive. Always make sure your sections are indented with the same number of spaces, otherwise they may be interpreted differently. Also, tabs have been outlawed in YAML to prevent errors between editors. You have been warned.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The next section contains the task(s). A playbook can contain a single task or multiple tasks to accomplish the desired configuration. Each task is denoted with the &lt;code&gt;- name:&lt;/code&gt; descriptor, which provides a meaningful name for the task. Each task calls upon an Ansible module that is able to accomplish the desired end state, in our case the &lt;strong&gt;uri&lt;/strong&gt; module. Under the &lt;strong&gt;uri&lt;/strong&gt; module, you will see the various parameters that are required for Ansible to talk with the HPE 3PAR.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;tasks:
  - name: check if 3par WSAPI is running
    uri:
      url: &quot;{{ rest_api_url_3par }}/credentials&quot;
      method: POST
      headers:
        Content-Type: &quot;application/json&quot;
      body_format: json
      body: &quot;{ &apos;user&apos;: &apos;{{ auth_3par_user }}&apos;, &apos;password&apos;: &apos;{{ auth_3par_password }}&apos; }&quot;
      status_code: 201
      return_content: yes
      validate_certs: no
    register: output

  - name: Parsing key
    debug:
      msg: &quot;{{ output.json.key }}&quot;

  - name: GET 3PAR volumes
    uri:
      url: &quot;{{ rest_api_url_3par }}/volumes&quot;
      method: GET
      headers:
        Content-Type: &quot;application/json&quot;
        X-HP3PAR-WSAPI-SessionKey: &quot;{{ output.json.key }}&quot;
        Accept: &quot;application/json&quot;
      status_code: 200
      return_content: yes
      validate_certs: no
    register: volume_output

  - name: Parsing volume GET
    debug:
      msg: &quot;{{ volume_output }}&quot;

  - name: release authentication key
    uri:
      url: &quot;{{ rest_api_url_3par }}/credentials/{{ output.json.key }}&quot;
      method: DELETE
      headers:
        Content-Type: &quot;application/json&quot;
      validate_certs: no   
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As you can see, there are 5 tasks in this playbook.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Authenticates to the HPE 3PAR and acquires the required SessionKey&lt;/li&gt;
&lt;li&gt;Prints the SessionKey to the terminal (debug)&lt;/li&gt;
&lt;li&gt;Queries the HPE 3PAR for all volumes using the SessionKey&lt;/li&gt;
&lt;li&gt;Outputs the query results to the terminal&lt;/li&gt;
&lt;li&gt;Releases the SessionKey&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;There are a lot of steps involved here to do something so simple. The nice thing, though, is that once you have this initial playbook built, managing your HPE 3PAR (i.e. provisioning volumes, creating vvsets, etc.) is rather straightforward, often only requiring a change to the REST call &lt;code&gt;url: &quot;{{ rest_api_url_3par }}/volumes&quot;&lt;/code&gt;.&lt;/p&gt;
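&lt;p&gt;As a quick illustration, swapping the endpoint in the GET task queries a different resource. The sketch below assumes the WSAPI also exposes a &lt;code&gt;/hosts&lt;/code&gt; collection alongside &lt;code&gt;/volumes&lt;/code&gt;; check the 3PAR WSAPI reference for the endpoints available on your array:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;  - name: GET 3PAR hosts
    uri:
      url: &quot;{{ rest_api_url_3par }}/hosts&quot;
      method: GET
      headers:
        Content-Type: &quot;application/json&quot;
        X-HP3PAR-WSAPI-SessionKey: &quot;{{ output.json.key }}&quot;
        Accept: &quot;application/json&quot;
      status_code: 200
      return_content: yes
      validate_certs: no
    register: hosts_output
&lt;/code&gt;&lt;/pre&gt;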
&lt;p&gt;You can include additional tasks, like exporting volumes, to make these playbooks more robust and part of a larger workflow. When I created a number of these playbooks in this manner, I quickly found out there were a lot of limitations to using the uri module, which I want to highlight.&lt;/p&gt;
&lt;h3&gt;Impressions of using REST/uri module&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;First&lt;/strong&gt; is the &lt;strong&gt;&lt;em&gt;length&lt;/em&gt;&lt;/strong&gt; of your playbooks. This was a simple playbook, but it was still &lt;strong&gt;57 lines of code&lt;/strong&gt;. That is 57 different places to make a mistake (especially with spaces). When I created some larger bare-metal provisioning workflows, they were hundreds of lines of code, which becomes harder to follow, error prone, and far more complex than I want.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Second&lt;/strong&gt;, the &lt;code&gt;uri&lt;/code&gt; module. I had to use native &lt;strong&gt;REST&lt;/strong&gt; calls to talk with the 3PAR. I was fortunate to have learned &lt;strong&gt;REST&lt;/strong&gt; in previous projects, but it isn’t easy to pick up on the fly; if you had to get up and running quickly, this would be another HUGE learning curve in addition to learning Ansible.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Lastly&lt;/strong&gt;, there are no &lt;em&gt;3PAR-specific resources&lt;/em&gt; to work with. I was dependent on the &lt;code&gt;uri&lt;/code&gt; module and then parsing &lt;strong&gt;json&lt;/strong&gt; in order to get the specific information I needed in my playbooks, like &lt;strong&gt;volume names&lt;/strong&gt;, &lt;strong&gt;lun id&lt;/strong&gt;, &lt;strong&gt;SessionKeys&lt;/strong&gt;, etc. An important part of automation is the ability to target an object (be it an &lt;strong&gt;HPE 3PAR volume&lt;/strong&gt;, &lt;strong&gt;ESXi host&lt;/strong&gt;, or &lt;strong&gt;nginx service&lt;/strong&gt;) so you can act upon it later in your workflows.&lt;/p&gt;
&lt;p&gt;This is where the HPE 3PAR modules for Ansible come into play:
&lt;a href=&quot;https://github.com/HewlettPackard/hpe3par_ansible_module&quot;&gt;https://github.com/HewlettPackard/hpe3par_ansible_module&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;New Better Way (with the Storage Module)&lt;/h1&gt;
&lt;p&gt;Okay, like I said, the previous method is the long way to do it. If you like writing &lt;strong&gt;REST&lt;/strong&gt; calls and enjoy having really long playbooks, then continue doing what you are doing. Otherwise, HPE has partnered with Red Hat and the Ansible team to develop high-quality storage modules for the HPE 3PAR array. This collaboration has produced HPE 3PAR-specific storage modules, playbooks, and resources for managing your HPE 3PAR storage arrays.&lt;/p&gt;
&lt;p&gt;The HPE 3PAR modules for Ansible are delivered as a set of modules and example playbooks to provision the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;CPG&lt;/li&gt;
&lt;li&gt;Host&lt;/li&gt;
&lt;li&gt;Volume&lt;/li&gt;
&lt;li&gt;VLUN&lt;/li&gt;
&lt;li&gt;Host Set&lt;/li&gt;
&lt;li&gt;Volume Set&lt;/li&gt;
&lt;li&gt;Volume Offline Clone&lt;/li&gt;
&lt;li&gt;Volume Online Clone&lt;/li&gt;
&lt;li&gt;Volume Snapshot&lt;/li&gt;
&lt;li&gt;QOS&lt;/li&gt;
&lt;li&gt;Flash Cache&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;So let’s look at what a playbook looks like using the HPE 3PAR modules for Ansible.&lt;/p&gt;
&lt;p&gt;The first step to using the storage module is to download it from GitHub. For Ansible 2.5 and earlier, you will need to download it from GitHub; after the release of Ansible 2.6, it will be part of Ansible core and included when you download future versions.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;git clone https://github.com/HewlettPackard/hpe3par_ansible_module
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once we have it downloaded, you can look at the various items included with the Module.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/5/image1-1528126916512.png&quot; alt=&quot;image1&quot;&gt;&lt;/p&gt;
&lt;p&gt;We are most interested in the modules and playbooks. The modules folder contains the code (written in Python) behind the playbooks; these modules do all of the work on the 3PAR, so we don’t have to write lengthy and complex REST calls like in the earlier example. They also provide HPE 3PAR resources and actions that we can then use within our playbooks, such as creating volumes, modifying 3PAR hosts, or deleting host sets. We don’t have to modify anything here; the modules are simply available when we create our playbooks.&lt;/p&gt;
&lt;p&gt;Now let&apos;s look at the playbooks. There is a pretty large list of available playbooks here.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/5/image2-1528127089601.png&quot; alt=&quot;image2&quot;&gt;&lt;/p&gt;
&lt;p&gt;The name of the playbooks correlates to the actions and resources that you will be working with. So for example if we want to create a &lt;strong&gt;&lt;em&gt;snapshot&lt;/em&gt;&lt;/strong&gt;, we would be working with the &lt;strong&gt;snapshot_playbook.yml&lt;/strong&gt; or if we want to manage &lt;strong&gt;&lt;em&gt;volumes&lt;/em&gt;&lt;/strong&gt;, we would be using the &lt;strong&gt;volume_playbook.yml&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Let’s look at the volume playbook:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/5/image3-1528127229536.png&quot; alt=&quot;image3&quot;&gt;&lt;/p&gt;
&lt;p&gt;First, &lt;strong&gt;&lt;em&gt;variables&lt;/em&gt;&lt;/strong&gt;. Within the playbook, variables are referenced with double curly brackets: &lt;code&gt;{{ variable }}&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Second, &lt;strong&gt;&lt;em&gt;state&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;&lt;em&gt;state&lt;/em&gt;&lt;/strong&gt; option tells Ansible what action to perform. Ansible is a change management tool, so you don’t write step-by-step code; instead, playbooks define your environment and how it should be configured.&lt;/p&gt;
&lt;p&gt;Let’s look at the &lt;strong&gt;&lt;em&gt;Create Volume task&lt;/em&gt;&lt;/strong&gt; as an example, where state=present. Upon running the playbook, the Create Volume task calls the HPE 3PAR storage module to check whether the volume exists on the array; if it doesn’t, it will be created. The state tells Ansible that the volume must be present.&lt;/p&gt;
&lt;p&gt;Looking back at the &lt;strong&gt;volume_playbook.yml&lt;/strong&gt; as well as any of the other playbooks, you will see the many state definitions, each performing a specific action. Some of them include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;present&lt;/li&gt;
&lt;li&gt;convert_type&lt;/li&gt;
&lt;li&gt;set_snap_cpg&lt;/li&gt;
&lt;li&gt;change_snap_cpg&lt;/li&gt;
&lt;li&gt;grow&lt;/li&gt;
&lt;li&gt;grow_to_size&lt;/li&gt;
&lt;li&gt;modify&lt;/li&gt;
&lt;li&gt;delete&lt;/li&gt;
&lt;/ul&gt;
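&lt;p&gt;For example, a minimal sketch of a task that removes a volume with the &lt;strong&gt;&lt;em&gt;delete&lt;/em&gt;&lt;/strong&gt; state might look like the following. This sketch assumes the delete state only needs the connection details and the volume name; check the module documentation in the repository for the exact parameters each state expects.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;- name: Delete Volume &quot;{{ volume_name }}&quot;
  hpe3par_volume:
    storage_system_ip=&quot;{{ storage_system_ip }}&quot;
    storage_system_username=&quot;{{ storage_system_username }}&quot;
    storage_system_password=&quot;{{ storage_system_password }}&quot;
    state=delete
    volume_name=&quot;{{ volume_name }}&quot;
&lt;/code&gt;&lt;/pre&gt;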
&lt;p&gt;Now let’s create another playbook using these modules to create a volume and then export it to a host.&lt;/p&gt;
&lt;p&gt;Open your favorite editor.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;vi demo_playbook.yml&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Let’s provide a name for our playbook and it will be running against &lt;code&gt;localhost&lt;/code&gt; again.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-YAML&quot;&gt;---
- name: Demo 3PAR Ansible playbook
  hosts: localhost
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next let’s define our variables. In this example, we will use a combination of locally defined &lt;strong&gt;variables&lt;/strong&gt; and an external configuration file (&lt;code&gt;include_vars&lt;/code&gt;). Under &lt;code&gt;vars&lt;/code&gt;, we specify the volume specifics.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-YAML&quot;&gt;vars:
  volume_name: &apos;demo_ansible_volume&apos;
  size: 10
  size_unit: &apos;GiB&apos;
  cpg: &apos;FC_r1&apos;
  vlun_host_name: &apos;example_host&apos;
  autolun: False
  lunid: 110

tasks:
  - name: Load Storage System Vars
    include_vars: &apos;properties/storage_system_properties.yml&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Before we go further, here is what the &lt;code&gt;properties/storage_system_properties.yml&lt;/code&gt; file looks like. It is a simple file with 3 lines, defining the connection information for the HPE 3PAR.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-YAML&quot;&gt;storage_system_ip: &quot;192.168.1.50&quot;
storage_system_username: &quot;3paruser&quot;
storage_system_password: &quot;3parpass&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; You can include any of your variables in a combination of configuration files or within the playbook itself. It is completely up to you. I like to use configuration files for my “global” variables or to use them as “profiles” to limit the user-defined variables in my playbooks.&lt;/p&gt;
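&lt;p&gt;For instance, you could keep one properties file per array and load whichever “profile” you need at runtime. The sketch below is one possible way to do this; the &lt;code&gt;profile&lt;/code&gt; variable and the per-array file names are hypothetical and not part of the module:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;tasks:
  # Pass -e &quot;profile=storage_system_lab&quot; at the command line to pick a different array;
  # without it, the default properties file is loaded
  - name: Load Storage System Vars
    include_vars: &quot;properties/{{ profile | default(&apos;storage_system_properties&apos;) }}.yml&quot;
&lt;/code&gt;&lt;/pre&gt;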
&lt;p&gt;Now let’s create our tasks. As I mentioned before, we can copy and paste these from the playbook templates found under the playbooks folder in the HPE 3PAR Storage Module. We will be copying the &lt;strong&gt;&lt;em&gt;Create Volume&lt;/em&gt;&lt;/strong&gt; action from the &lt;strong&gt;volume_playbook.yml&lt;/strong&gt; and the &lt;strong&gt;&lt;em&gt;Create VLUN&lt;/em&gt;&lt;/strong&gt; action from the &lt;strong&gt;volume_to_host_vlun_playbook.yml&lt;/strong&gt; playbook.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-YAML&quot;&gt;- name: Create Volume &quot;{{ volume_name }}&quot;
  hpe3par_volume:
    storage_system_ip=&quot;{{ storage_system_ip }}&quot;
    storage_system_username=&quot;{{ storage_system_username }}&quot;
    storage_system_password=&quot;{{ storage_system_password }}&quot;
    state=present
    volume_name=&quot;{{ volume_name }}&quot;
    cpg=&quot;{{ cpg }}&quot;
    size=&quot;{{ size }}&quot;
    size_unit=&quot;{{ size_unit }}&quot;

- name: Create VLUN
  hpe3par_vlun:
    storage_system_ip=&quot;{{ storage_system_ip }}&quot;
    storage_system_username=&quot;{{ storage_system_username }}&quot;
    storage_system_password=&quot;{{ storage_system_password }}&quot;
    state=export_volume_to_host
    volume_name=&quot;{{ vlun_volume_name }}&quot;
    host_name=&quot;{{ vlun_host_name }}&quot;
    lunid=&quot;{{ lunid }}&quot;
    autolun=&quot;{{ autolun }}&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We need to make one minor change. Under the &lt;strong&gt;&lt;em&gt;Create VLUN&lt;/em&gt;&lt;/strong&gt; task, we need to modify the following variable:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-YAML&quot;&gt;volume_name=&quot;{{ vlun_volume_name }}&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;to&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-YAML&quot;&gt;volume_name=&quot;{{ volume_name }}&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This allows you to link the two tasks by using the volume you created as the volume to export in the following task. Other than that, we are done and have created a simple, yet powerful, provisioning playbook.&lt;/p&gt;
&lt;p&gt;With it all together, it should look like this and is ready to run.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;---
- name: Demo 3PAR Ansible playbook
  hosts: localhost

  vars:
    volume_name: &apos;demo_ansible_volume&apos;
    size: 100
    size_unit: &apos;GiB&apos;
    cpg: &apos;FC_r6&apos;
    vlun_host_name: &apos;example_host&apos;
    autolun: False
    lunid: 110
  
  tasks:
    - name: Load Storage System Vars
      include_vars: &apos;properties/storage_system_properties.yml&apos;

    - name: Create Volume &quot;{{ volume_name }}&quot;
      hpe3par_volume:
        storage_system_ip=&quot;{{ storage_system_ip }}&quot;
        storage_system_username=&quot;{{ storage_system_username }}&quot;
        storage_system_password=&quot;{{ storage_system_password }}&quot;
        state=present
        volume_name=&quot;{{ volume_name }}&quot;
        cpg=&quot;{{ cpg }}&quot;
        size=&quot;{{ size }}&quot;
        size_unit=&quot;{{ size_unit }}&quot;
      
    - name: Create VLUN
      hpe3par_vlun:
        storage_system_ip=&quot;{{ storage_system_ip }}&quot;
        storage_system_username=&quot;{{ storage_system_username }}&quot;
        storage_system_password=&quot;{{ storage_system_password }}&quot;
        state=export_volume_to_host
        volume_name=&quot;{{ volume_name }}&quot;
        host_name=&quot;{{ vlun_host_name }}&quot;
        lunid=&quot;{{ lunid }}&quot;
        autolun=&quot;{{ autolun }}&quot;
  
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now let’s run the playbook.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ansible-playbook demo_playbook.yml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You should see the following output:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/5/image4-1528127766708_80-1528129228459.png&quot; alt=&quot;image4 1528127766708_80&quot;&gt;&lt;/p&gt;
&lt;p&gt;If you look at your HPE 3PAR array, you will now see your new volume.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/5/image5-1528128461213.png&quot; alt=&quot;image5&quot;&gt;&lt;/p&gt;
&lt;p&gt;Congratulations on creating a playbook using the HPE 3PAR Storage modules for Ansible.&lt;/p&gt;
&lt;p&gt;The HPE 3PAR Storage modules make the job much simpler and give you greater flexibility in integrating HPE 3PAR into your DevOps and operations team workflows. Looking back at our original playbook example, it took 57 lines of code simply to read the volumes from the HPE 3PAR. With the modules, we were able to create a volume and then export it to a server in 37 lines.&lt;/p&gt;
&lt;p&gt;This really is just the beginning of what you can do with the HPE 3PAR Storage modules for Ansible. This was a standard provisioning playbook. You can create playbooks that include tasks from bare-metal provisioning, OS deployment, storage provisioning all the way to application deployment and management. You can take these playbooks and integrate them into Ansible Tower to build out a self-service portal.&lt;/p&gt;
&lt;p&gt;The future looks brighter than ever when it comes to HPE. The landscape of the datacenter is constantly changing and the demand for storage and rapid deployments continue to grow. Thank you for following this tutorial and hopefully it has been useful and sparked interest and ideas about how to use Ansible and HPE Storage. Wherever you are on your journey, we&apos;re very eager to hear what challenges exist out there today. I genuinely believe HPE can help out in a multitude of ways.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[A behind the scenes look into the projects of the HPE Hackathon]]></title><description><![CDATA[HPE Hackathon On May 24, HPE developers from around the world began participating in the HPE Hackathon. A hackathon is a coding contest…]]></description><link>https://developer.hpe.com/a-behind-the-scenes-look-into-the-projects-of-the-hpe-hackathon/</link><guid isPermaLink="false">https://developer.hpe.com/a-behind-the-scenes-look-into-the-projects-of-the-hpe-hackathon/</guid><pubDate>Thu, 24 May 2018 20:29:14 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/5/hpe-hackathon-3d-v1-1-1526997723888.jpg&quot; alt=&quot;HPE Hackathon&quot;&gt;&lt;/p&gt;
&lt;p&gt;On May 24, HPE developers from around the world began participating in the HPE Hackathon. A hackathon is a coding contest where developers, programmers, and other creative experts come together to collaborate on a range of software development topics. In the &lt;a href=&quot;/blog/psssst-ive-got-the-inside-scoop-on-the-upcoming-hpe-hackathon&quot;&gt;HPE Hackathon&lt;/a&gt;, developers will focus on &lt;a href=&quot;https://www.hpe.com/us/en/solutions/cloud/hybrid-it-management.html&quot;&gt;HPE OneSphere&lt;/a&gt;, HPE’s new hybrid-cloud management platform.&lt;/p&gt;
&lt;p&gt;The HPE Hackathon has 19 different teams of up to 5 people each, located all over the world. Each team is focusing on a unique aspect of the HPE OneSphere API for their project. Below is a sampling of the different types of projects the teams will be working on during the event:&lt;/p&gt;
&lt;h2&gt;Team: India Matrix Master&lt;/h2&gt;
&lt;h3&gt;Location: Fort Collins, CO and India&lt;/h3&gt;
&lt;p&gt;This team of 5 is building a Terraform plugin for HPE OneSphere, allowing Terraform users to describe the desired state of their infrastructure in the Terraform language and make sure this desired state is reached by running Terraform. The description could be a mix of HPE OneSphere and other infrastructure providers already handled by Terraform.&lt;/p&gt;
&lt;p&gt;Expected outcome: An open source Terraform plugin for HPE OneSphere hosted on the HPE Developer Github&lt;/p&gt;
&lt;h2&gt;Team: Spectrum Spider&lt;/h2&gt;
&lt;h3&gt;Location: US and Canada&lt;/h3&gt;
&lt;p&gt;This team of 5 is building an Ansible module for HPE OneSphere. Ansible is a very popular language for describing IT infrastructure. The ecosystem around Ansible (now owned by Red Hat) is huge, with hundreds of modules available to automate just about everything. This project would allow Ansible playbooks authors to include HPE OneSphere in their IT infrastructure automation.&lt;/p&gt;
&lt;p&gt;Expected outcome: An open source Ansible module for HPE OneSphere hosted on the HPE Developer Github&lt;/p&gt;
&lt;h2&gt;Team: Last Bench Squad&lt;/h2&gt;
&lt;h3&gt;Location: India&lt;/h3&gt;
&lt;p&gt;This team of 5 is using ServiceNow to control HPE OneSphere project and user management. The idea is to build an integration between ServiceNow and HPE OneSphere to allow a ServiceNow user to order a new development team environment in HPE OneSphere from a ServiceNow form following the governance rules already in place.&lt;/p&gt;
&lt;p&gt;Expected outcome: An importable ServiceNow project on the HPE Developer Github&lt;/p&gt;
&lt;h2&gt;Team: HPE OneSphere ChatBot&lt;/h2&gt;
&lt;h3&gt;Location: Fort Collins, CO&lt;/h3&gt;
&lt;p&gt;This team of 3 is building a Hubot plugin which will allow users to chat in natural language (in Slack, for example) and get answers from HPE OneSphere.&lt;/p&gt;
&lt;p&gt;Expected outcome: An open source Hubot plugin on the HPE Developer Github&lt;/p&gt;
&lt;h2&gt;Team: NoName&lt;/h2&gt;
&lt;h3&gt;Location: Poland&lt;/h3&gt;
&lt;p&gt;This team of one is made up of a young data scientist who will be building a sample program to demonstrate how to use R, a programming language and free software environment for statistical computing and graphics, to extract insights from HPE OneSphere.&lt;/p&gt;
&lt;p&gt;Expected outcome: A blog article with sample R code&lt;/p&gt;
&lt;h2&gt;Team: Alesha&lt;/h2&gt;
&lt;h3&gt;Location: India and Japan&lt;/h3&gt;
&lt;p&gt;This team of 2 is building a plugin for HPE OneSphere to allow description of infrastructure using Cloud Formation (from AWS).&lt;/p&gt;
&lt;p&gt;Expected outcome: An open source CloudFormation plugin for HPE OneSphere hosted on the HPE Developer Github&lt;/p&gt;
&lt;h2&gt;Team: Vizit&lt;/h2&gt;
&lt;h3&gt;Location: United States&lt;/h3&gt;
&lt;p&gt;This team of one is building a rendering tool using D3 (a famous open source library) to display HPE OneSphere configurations.&lt;/p&gt;
&lt;p&gt;Expected outcome: A demo program hosted on the HPE Developer Github&lt;/p&gt;
&lt;p&gt;HPE Hackathon participants will create a 15 minute presentation about what they have accomplished, which will be presented to the judges on June 4. HPE Hackathon winners will be announced on June 5th.&lt;/p&gt;
&lt;h1&gt;Stay tuned for more insider access and the results of the HPE Hackathon!&lt;/h1&gt;</content:encoded></item><item><title><![CDATA[Psssst: I’ve got the inside scoop on the upcoming HPE Hackathon ]]></title><description><![CDATA[HPE Hackathon You may not be aware, but something big is about to happen within the Developer Community at HPE. On May 24, HPE developers…]]></description><link>https://developer.hpe.com/psssst-ive-got-the-inside-scoop-on-the-upcoming-hpe-hackathon/</link><guid isPermaLink="false">https://developer.hpe.com/psssst-ive-got-the-inside-scoop-on-the-upcoming-hpe-hackathon/</guid><pubDate>Tue, 22 May 2018 14:00:54 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/5/hpe-hackathon-3d-v1-1-1526997723888.jpg&quot; alt=&quot;HPE Hackathon&quot;&gt;&lt;/p&gt;
&lt;p&gt;You may not be aware, but something big is about to happen within the Developer Community at HPE. On May 24, HPE developers from around the world will participate in the group’s first hackathon.&lt;/p&gt;
&lt;p&gt;For those unfamiliar with the term, a hackathon is a coding contest where developers, programmers, and other creative experts come together to collaborate on a range of software development topics. Participants work quickly, often over a short period of time, to innovate and solve challenges, resulting in usable software everyone can benefit from.&lt;/p&gt;
&lt;p&gt;Hackathons are becoming popular and useful activities because they foster innovation through collaboration and creative problem solving. They can also stimulate some interesting results. For instance, &lt;a href=&quot;https://hackernoon.com/these-13-new-startups-were-born-at-hackathons-b758c37dde42&quot;&gt;this article tells how 13 new startup companies&lt;/a&gt; were created during hackathons, which demonstrates the power of harnessing the expertise of a creative group of people.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;“Hackathons let developers go beyond their business-as-usual, day-to-day operations – giving them the opportunity to work with other talented people on a particular challenge over a short timeframe,”&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;explains Said A. Syed, Director of Developer Experience and Product Incubation, HPE Software-Defined and Cloud Group.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;“This type of concentrated brainpower and effort can lead to transformative innovation that produces new and disruptive solutions, capabilities, and products.”&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;HPE Hackathon – focus on HPE OneSphere&lt;/h2&gt;
&lt;p&gt;The HPE Hackathon will be hosted in Fort Collins, Colorado, where developers will attend in person. Others will join remotely from all over the world: Poland, Brazil, India, Sweden, Ireland, Canada, and other areas of the United States. Forty-six attendees make up nineteen different teams. The teams include up to five people each, with names such as India Matrix Master, Spectrum Spiders, Last Bench Squad, Kubewarriors, Say What?, and Reality Distortion.&lt;/p&gt;
&lt;p&gt;For this event, the hackathon focuses on the &lt;a href=&quot;https://www.hpe.com/us/en/solutions/cloud/hybrid-it-management.html&quot;&gt;HPE OneSphere&lt;/a&gt; API. A variety of projects are already in the works that will provide the open source community with more integration modules for HPE OneSphere, HPE’s new hybrid-cloud management platform.&lt;/p&gt;
&lt;h2&gt;What’s happening?&lt;/h2&gt;
&lt;p&gt;Officially, the hackathon started with a two hour training session on the HPE OneSphere API, held on May 10th. Following this training, all attendees were provided an account on a live prepopulated HPE OneSphere platform.&lt;/p&gt;
&lt;p&gt;May 24-25 is when things really get serious, as coaching and finalizing sessions are available to help participants finish their projects. The following week, participants will create a 15-minute presentation about what they have accomplished, which gets presented to the judges on June 4. Winners will be announced on June 5th. Throughout June, HPE experts within the developer community will be working with the teams to publish the open source hackathon code onto the &lt;a href=&quot;https://github.com/hewlettpackard/&quot;&gt;HPE Github&lt;/a&gt;, where it becomes available for everyone to use and improve upon.&lt;/p&gt;
&lt;p&gt;Participants on the winning team will each receive a high-end drone. If a team’s project can be open sourced and is pushed to the HPE Github, each team member will receive either an Amazon Echo Dot or Google Home Mini.  And all hackathon participants will receive HPE swag (T-shirts and other fun gifts).&lt;/p&gt;
&lt;h2&gt;Fun collaboration that leads to innovation and benefits customers&lt;/h2&gt;
&lt;p&gt;Alex Mejias, Senior Fullstack Developer and one of the HPE Hackathon organizers explains the purpose of the event.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;“This hackathon provides a fun and social format to collaborate with HPE subject matter experts to create new ideas around existing software. And most of the projects created during the hackathon will be put through HPE’s Open Source Review Board (OSRB) to be available for free to the public.”&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Another HPE Hackathon organizer, Didier Lalli, Distinguished Technologist at HPE, details some of the projects that are planned.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;“Projects range from front end data visualizations using libraries like D3 and Grafana, to DevOps with Ansible and Terraform, to Chat bots. We wanted to provide a set of topics that use HPE OneSphere’s REST API to its fullest potential. We’re covering the full stack rainbow here; our goal is to push the boundaries on what can be developed.”&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The hackathon organizers have established a permanent &lt;a href=&quot;https://www.labs.hpe.com/slack&quot;&gt;Slack channel&lt;/a&gt; to support live questions from the participants. They’ve also assembled a geographically dispersed team in EMEA and on both the east and west coast of the United States to provide constant support for all of the teams during the event May 24-25.&lt;/p&gt;
&lt;p&gt;Syed is looking forward to the HPE hackathon and excited about the anticipated benefits for HPE customers.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;“HPE is in a unique position to solve complex problems for our customers. During the hackathon, HPE experts are taking on 19 projects that present unique real-world challenges and working together to solve them. The hackathon is a fun way to bring an amazing group of talented people together to solve some of these challenges quickly.”&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Good luck to all the participants of the HPE Hackathon!&lt;/h2&gt;</content:encoded></item><item><title><![CDATA[HPE at ChefConf 2018: May 22-25]]></title><description><![CDATA[Chef Conf ChefConf is the annual three day gathering of the DevOps and Continuous Automation community. This year, Hewlett Packard…]]></description><link>https://developer.hpe.com/hpe-at-chefconf-2018-may-22-25/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-at-chefconf-2018-may-22-25/</guid><pubDate>Fri, 18 May 2018 16:10:21 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/5/hpe-dev-chef-conf-2018-1526660123064.jpg&quot; alt=&quot;Chef Conf&quot;&gt;&lt;/p&gt;
&lt;p&gt;ChefConf is the annual three day gathering of the DevOps and Continuous Automation community. This year, Hewlett Packard Enterprise (HPE) will be on-hand at &lt;a href=&quot;http://chefconf.chef.io/&quot;&gt;ChefConf 2018&lt;/a&gt; in Chicago to discuss the latest and greatest in DevOps workflow, infrastructure, compliance, and application automation.&lt;/p&gt;
&lt;p&gt;The event, which takes place May 22-25, gives attendees a chance to join sessions and demos delivered by leading experts in DevOps, cloud-native development, and IT strategies from around the world. If you are going to the event next week, don’t miss these sessions from HPE experts:&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Complacency is a killer; Innovation is not enough; what’s next?&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;&lt;em&gt;Tuesday, May 22nd at 10:30 AM&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;“That’s the way we’ve always done it” are the most dangerous words in the English language to businesses. In this session, HPE experts will examine the impact of complacency and how innovation alone is no longer enough in the modern digital age.  Today’s technology savvy consumers and startups are disrupting all industries. Learn how traditional enterprise IT can transform to support ever-changing business demands.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Deploy your applications anywhere safely with Habitat and HPE OneSphere&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;&lt;em&gt;Tuesday, May 22nd, 6:00 PM&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.hpe.com/us/en/solutions/cloud/hybrid-it-management.html&quot;&gt;HPE OneSphere&lt;/a&gt; provides a hybrid cloud management platform that spans public clouds, like AWS, as well as private clouds. In this demo, HPE will show how the HPE OneSphere APIs deploy habitat applications to both public cloud as well as the &lt;a href=&quot;https://www.hpe.com/us/en/integrated-systems/synergy.html&quot;&gt;HPE Synergy&lt;/a&gt; platform running in the data center (without any change to the underlying application).&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE OneView Ruby SDK now available with support for HPE OneView 4.0]]></title><description><![CDATA[HPE OneView Ruby SDK now available with support for HPE OneView 4.0 The HPE OneView Ruby SDK provides a Ruby library to interact with HPE…]]></description><link>https://developer.hpe.com/hpe-oneview-ruby-sdk-now-available-with-support-for-hpe-oneview-40/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-oneview-ruby-sdk-now-available-with-support-for-hpe-oneview-40/</guid><pubDate>Wed, 04 Apr 2018 21:08:12 GMT</pubDate><content:encoded>&lt;h1&gt;HPE OneView Ruby SDK now available with support for HPE OneView 4.0&lt;/h1&gt;
&lt;p&gt;The HPE OneView Ruby SDK provides a Ruby library to interact with HPE OneView and HPE Synergy Image Streamer APIs, enabling developers to easily build integrations and scalable solutions.&lt;/p&gt;
&lt;p&gt;The HPE OneView Ruby SDK v5.4.0 now supports HPE OneView 4.0 (REST API version 600).&lt;/p&gt;
&lt;p&gt;The list of supported resources and changes is available at: &lt;a href=&quot;https://github.com/HewlettPackard/oneview-sdk-ruby/blob/v5.4.0/CHANGELOG.md&quot;&gt;https://github.com/HewlettPackard/oneview-sdk-ruby/blob/v5.4.0/CHANGELOG.md&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Release content is available at: &lt;a href=&quot;https://github.com/HewlettPackard/oneview-sdk-ruby/releases/tag/v5.4.0&quot;&gt;https://github.com/HewlettPackard/oneview-sdk-ruby/releases/tag/v5.4.0&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The repository with code and examples is available on GitHub at: &lt;a href=&quot;https://github.com/HewlettPackard/oneview-sdk-ruby&quot;&gt;https://github.com/HewlettPackard/oneview-sdk-ruby&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Details on API coverage (additional APIs will be included in subsequent releases) can be found at: &lt;a href=&quot;https://github.com/HewlettPackard/oneview-sdk-ruby/blob/v5.4.0/endpoints-support.md&quot;&gt;https://github.com/HewlettPackard/oneview-sdk-ruby/blob/v5.4.0/endpoints-support.md&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;RubyGems details are available at: &lt;a href=&quot;https://rubygems.org/gems/oneview-sdk/versions/5.4.0&quot;&gt;https://rubygems.org/gems/oneview-sdk/versions/5.4.0&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Automating 3PAR provisioning with Chef]]></title><description><![CDATA[Using Chef to configure your 3PAR Storeserv Traditionally storage has been one of the last components to be integrated into Automation. It…]]></description><link>https://developer.hpe.com/automating-3par-provisioning-with-chef/</link><guid isPermaLink="false">https://developer.hpe.com/automating-3par-provisioning-with-chef/</guid><pubDate>Mon, 02 Apr 2018 08:46:25 GMT</pubDate><content:encoded>&lt;h1&gt;Using Chef to configure your 3PAR Storeserv&lt;/h1&gt;
&lt;p&gt;Traditionally, storage has been one of the last components to be integrated into automation. It has been viewed as one of the “untouchables”, along with networking, due to its complexity and risk potential. With the increased development of APIs, such as the 3PAR WSAPI, incorporating storage provisioning tasks into your DevOps project is not only becoming simpler but also a requirement to meet SLAs and policy requirements.&lt;/p&gt;
&lt;h2&gt;A quick intro to Chef&lt;/h2&gt;
&lt;p&gt;Chef is a configuration management tool that uses a declarative syntax. It is different than scripting in a few ways, but mainly:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;You define what the environment should look like, not how to do it. This removes a lot of complexity, and simplifies your configuration code.&lt;/li&gt;
&lt;li&gt;Chef code lives primarily in recipes. One or more recipes are packaged inside a cookbook that has a single purpose. Cookbooks act like policies that are versioned and checked into source control, unlike many scripts.&lt;/li&gt;
&lt;li&gt;Chef integrates with a whole host of tools that make testing easy and meaningful. ChefSpec, InSpec, and Test Kitchen are some of these tools. Just like your applications, you can test and release your configuration code using automated pipelines.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Some common reasons you&apos;d want (or need) to use a configuration management tool like Chef are to have success at scale, increase consistency, and gain visibility into your environment.
For more in-depth information on Chef and its architecture, refer to &lt;a href=&quot;https://docs.chef.io&quot;&gt;https://docs.chef.io&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Developing with Chef and 3PAR&lt;/h2&gt;
&lt;p&gt;If you&apos;re new to Chef, I&apos;d encourage you to go through some of the tutorials and self-paced trainings at &lt;a href=&quot;https://learn.chef.io&quot;&gt;learn.chef.io&lt;/a&gt; before going any further.&lt;/p&gt;
&lt;p&gt;About the only thing you&apos;ll need to get started developing is some basic terminal knowledge and the ChefDK installed. This will give you Ruby, as well as tools such as Berkshelf, Test Kitchen, ChefSpec, Foodcritic, and Rubocop. For this guide, we&apos;ll use a Bash terminal; if you&apos;re on Windows, you may have to modify some of the system commands accordingly.&lt;/p&gt;
&lt;h2&gt;Prerequisites&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;3PAR OS - 3.2.2 MU4, 3.3.1 MU1&lt;/li&gt;
&lt;li&gt;Ruby - 2.4.1 or higher&lt;/li&gt;
&lt;li&gt;Chef - 12.1 or higher&lt;/li&gt;
&lt;li&gt;WSAPI service should be enabled on the 3PAR storage arrays&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Examples&lt;/h2&gt;
&lt;p&gt;This section will help you get started using the 3PAR cookbook, and in doing so, explain some key concepts pertaining to Chef and how to use the 3PAR cookbook. It is also designed to show you how to create a basic recipe that includes real, working code and comments about how the resources work.&lt;/p&gt;
&lt;p&gt;The main README found on &lt;a href=&quot;https://github.com/HewlettPackard/hpe3par_chef_cookbook&quot;&gt;https://github.com/HewlettPackard/hpe3par_chef_cookbook&lt;/a&gt; shows a few details about the resources in this cookbook, but mainly includes code to demonstrate the properties you can set on a resource. As such, many of those code &quot;examples&quot; are incomplete or contain much more than you&apos;d want to use.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;To get started, we&apos;ll create a new directory named &lt;strong&gt;chef-repo&lt;/strong&gt;; this is where we&apos;ll put our new cookbooks. This directory also holds the &lt;strong&gt;&lt;em&gt;.chef/knife.rb&lt;/em&gt;&lt;/strong&gt; configuration file used to connect to a Chef server.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ mkdir chef-repo
$ cd chef-repo
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;We’ll create a directory within our &lt;strong&gt;&lt;em&gt;chef-repo&lt;/em&gt;&lt;/strong&gt; named cookbooks, where our cookbooks will live:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ mkdir cookbooks
$ cd cookbooks
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Now we will need to download the &lt;strong&gt;&lt;em&gt;hpe3par&lt;/em&gt;&lt;/strong&gt; cookbook.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# You can use knife to download the hpe3par cookbook or you can use git – use whichever you are most comfortable with

#----------------------------------------
# Using git example
# (make sure you are in chef-repo/cookbooks)
$ git clone https://github.com/HewlettPackard/hpe3par_chef_cookbook.git
# We need to rename the directory from hpe3par_chef_cookbook to hpe3par - see note
$ mv hpe3par_chef_cookbook hpe3par

#----------------------------------------
# Using knife example
$ knife cookbook site download hpe3par

# Now we will need to untar the download
$ tar xvf hpe3par-*.tar.gz
# Now the hpe3par cookbook lives in chef-repo/hpe3par

# You can remove the tar file if you’d like, it is no longer needed
$ rm hpe3par-*.tar.gz
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;      &lt;strong&gt;Note:&lt;/strong&gt; If you clone directly from &lt;a href=&quot;https://github.com/HewlettPackard/hpe3par_chef_cookbook&quot;&gt;https://github.com/HewlettPackard/hpe3par_chef_cookbook&lt;/a&gt;, you need to rename the cookbook directory from &lt;strong&gt;&lt;em&gt;hpe3par_chef_cookbook&lt;/em&gt;&lt;/strong&gt; to &lt;strong&gt;&lt;em&gt;hpe3par&lt;/em&gt;&lt;/strong&gt; to use the resources.&lt;/p&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;Now we can use the chef command to generate a new cookbook named &apos;my_3par&apos;:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ chef generate cookbook my_3par
$ cd my_3par
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can examine the files and folders that were created with &lt;code&gt;ls -la&lt;/code&gt;; the most important ones that we will be working with are &lt;strong&gt;&lt;em&gt;metadata.rb&lt;/em&gt;&lt;/strong&gt; and &lt;strong&gt;&lt;em&gt;recipes/default.rb&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;
&lt;ol start=&quot;5&quot;&gt;
&lt;li&gt;
&lt;p&gt;At this point, as noted in the main README, we need to specify a dependency to the &lt;code&gt;hpe3par&lt;/code&gt; cookbook. We do this by adding the following line to the end of the &lt;strong&gt;&lt;em&gt;metadata.rb&lt;/em&gt;&lt;/strong&gt; file:&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt; # my_3par/metadata.rb
 ...
 depends &apos;hpe3par&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;      &lt;strong&gt;Tip:&lt;/strong&gt; Check out the &lt;a href=&quot;https://docs.chef.io/config_rb_metadata.html&quot;&gt;Chef docs&lt;/a&gt; to see what all else you can put in the &lt;strong&gt;&lt;em&gt;metadata.rb&lt;/em&gt;&lt;/strong&gt; file.&lt;/p&gt;
&lt;ol start=&quot;6&quot;&gt;
&lt;li&gt;Now we can create our first recipe. Open up &lt;strong&gt;&lt;em&gt;recipes/default.rb&lt;/em&gt;&lt;/strong&gt; and add the following lines:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;#Configure Storage system
#Enter name, IP, user, and password

node.default[&apos;hpe3par&apos;][&apos;storage_system&apos;] = {
        name: &apos;my-3par&apos;,
        ip: &apos;192.168.0.100&apos;,
        user: &apos;3par_user&apos;,
        password: &apos;secret123&apos;
}


#Create Virtual Volume ‘my_1st_chef_vol’ definitions

hpe3par_virtual_volume &apos;my_1st_chef_vol&apos; do
  storage_system node[&apos;hpe3par&apos;][&apos;storage_system&apos;]
  cpg node.default[&apos;hpe3par&apos;][&apos;virtual_volume&apos;][&apos;cpg&apos;] = &apos;FC_r1&apos;
  size node.default[&apos;hpe3par&apos;][&apos;virtual_volume&apos;][&apos;size&apos;] = 10.0
  size_unit node.default[&apos;hpe3par&apos;][&apos;virtual_volume&apos;][&apos;size_unit&apos;] = &apos;GiB&apos;
  type node.default[&apos;hpe3par&apos;][&apos;virtual_volume&apos;][&apos;type&apos;] = &apos;thin&apos;
  snap_cpg node.default[&apos;hpe3par&apos;][&apos;virtual_volume&apos;][&apos;snap_cpg&apos;] = &apos;FC_r1&apos;

  action :create
end
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Tip:&lt;/strong&gt; We are defining the 3PAR connectivity settings within the recipe. There may be instances where you have multiple recipes in a cookbook; rather than defining the storage system (or other parameters, i.e. virtual volumes, vvsets, etc.) within each recipe, we can create global variables by setting parameters in the &lt;strong&gt;&lt;em&gt;attributes/default.rb&lt;/em&gt;&lt;/strong&gt; file.&lt;/p&gt;
&lt;p&gt;For example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-ruby&quot;&gt;# global storage system - configured in attributes/default.rb
default[&apos;hpe3par&apos;][&apos;storage_system&apos;] = {
    name: &apos;my-3par&apos;,
    ip: &apos;192.168.0.100&apos;,
    user: &apos;3par_user&apos;,
    password: &apos;secret123&apos;
}

#virtual_volume global parameters
default[&apos;hpe3par&apos;][&apos;virtual_volume&apos;][&apos;name&apos;] = &apos;my_1st_chef_vol&apos;
default[&apos;hpe3par&apos;][&apos;virtual_volume&apos;][&apos;cpg&apos;] = &apos;FC_r1&apos;
default[&apos;hpe3par&apos;][&apos;virtual_volume&apos;][&apos;size&apos;] = 1024.0
default[&apos;hpe3par&apos;][&apos;virtual_volume&apos;][&apos;size_unit&apos;] = &apos;MiB&apos;
default[&apos;hpe3par&apos;][&apos;virtual_volume&apos;][&apos;type&apos;] = &apos;thin&apos;
default[&apos;hpe3par&apos;][&apos;virtual_volume&apos;][&apos;compression&apos;] = false
default[&apos;hpe3par&apos;][&apos;virtual_volume&apos;][&apos;snap_cpg&apos;] = &apos;FC_r1&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The recipe will change slightly as well. Note that the inline parameter values are gone; they are now defined in the &lt;strong&gt;&lt;em&gt;attributes/default.rb&lt;/em&gt;&lt;/strong&gt; file:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-ruby&quot;&gt;#Example recipe using attributes/default.rb definition file

#Create Virtual Volume ‘my_1st_chef_vol’ definitions

hpe3par_virtual_volume node[&apos;hpe3par&apos;][&apos;virtual_volume&apos;][&apos;name&apos;] do
  storage_system node[&apos;hpe3par&apos;][&apos;storage_system&apos;]
  cpg node[&apos;hpe3par&apos;][&apos;virtual_volume&apos;][&apos;cpg&apos;]
  size node[&apos;hpe3par&apos;][&apos;virtual_volume&apos;][&apos;size&apos;]
  size_unit node[&apos;hpe3par&apos;][&apos;virtual_volume&apos;][&apos;size_unit&apos;]
  type node[&apos;hpe3par&apos;][&apos;virtual_volume&apos;][&apos;type&apos;]
  snap_cpg node[&apos;hpe3par&apos;][&apos;virtual_volume&apos;][&apos;snap_cpg&apos;]

  action :create
end
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To learn more about attributes go here: &lt;a href=&quot;https://docs.chef.io/attributes.html&quot;&gt;docs.chef.io/attributes&lt;/a&gt;&lt;/p&gt;
&lt;ol start=&quot;7&quot;&gt;
&lt;li&gt;Now let’s run our recipe and see what happens:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;$ chef-client -z -o my_3par::default
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If you see an error saying it can&apos;t find the correct cookbook, this is because Chef either can’t find our cookbooks or we haven&apos;t downloaded the cookbook dependencies (&lt;strong&gt;&lt;em&gt;hpe3par&lt;/em&gt;&lt;/strong&gt;) yet. We can fix this by creating a &lt;strong&gt;&lt;em&gt;knife.rb&lt;/em&gt;&lt;/strong&gt; file at &lt;strong&gt;&lt;em&gt;chef-repo/.chef/knife.rb&lt;/em&gt;&lt;/strong&gt; and adding:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-ruby&quot;&gt;# See http://docs.chef.io/config_rb_knife.html for more information on knife configuration options

current_dir = File.dirname(__FILE__)
cookbook_path [&quot;#{current_dir}/../cookbooks&quot;]

# If you&apos;re behind a proxy, you&apos;ll need to set the http_proxy environment variable or set the http_proxy option here
# http_proxy &apos;http://proxy.company.com:3128&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now download the &lt;strong&gt;hpe3par&lt;/strong&gt; cookbook by running:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# (make sure we are in chef-repo/cookbooks)
$ pwd
$ knife cookbook site download hpe3par

# Now we will need to untar the download
$ tar xvf hpe3par-*.tar.gz
# Now the hpe3par cookbook lives in chef-repo/cookbooks/hpe3par

# You can remove the tar file if you’d like, it is no longer needed
$ rm hpe3par-*.tar.gz
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;8&quot;&gt;
&lt;li&gt;Now re-run &lt;strong&gt;&lt;code&gt;chef-client -z -o my_3par::default&lt;/code&gt;&lt;/strong&gt;. This time it should succeed, and if you log into your SSMC, you&apos;ll see the new volume &lt;strong&gt;&lt;em&gt;my_1st_chef_vol&lt;/em&gt;&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;That&apos;s it! You can re-run the recipe and it will tell you if the volume already exists.&lt;/p&gt;
&lt;p&gt;Now that you&apos;ve gotten your feet wet, please take another look at the main README to see the complete list of resources available to you. The 3PAR cookbook includes many of the same capabilities that are available via the SSMC.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[In-band management with HPE iLOrest and a LiveCD]]></title><description><![CDATA[Updated March 4 2024 In a previous blog, I explained how the HPE RESTful interface tool (iLOREST) is able to communicate securely with the…]]></description><link>https://developer.hpe.com/in-band-management-with-ilorest-and-a-livecd/</link><guid isPermaLink="false">https://developer.hpe.com/in-band-management-with-ilorest-and-a-livecd/</guid><pubDate>Fri, 30 Mar 2018 11:19:26 GMT</pubDate><content:encoded>&lt;style&gt; li { font-size: 27px; line-height: 33px; max-width: none; } &lt;/style&gt;
&lt;p&gt;Updated March 4 2024&lt;/p&gt;
&lt;p&gt;In a &lt;a href=&quot;https://developer.hpe.com/blog/in-band-management-with-ilorest-and-a-livecd&quot; target=&quot;_blank&quot;&gt;previous blog&lt;/a&gt;, I explained how the HPE RESTful interface tool (&lt;a href=&quot;https://github.com/HewlettPackard/python-redfish-utility/releases/latest&quot; target=&quot;_blank&quot;&gt;iLOREST&lt;/a&gt;) is able to communicate securely with the underlying iLO and perform in band management of the lower layers of the server. This blog post provides a method to create a custom bootable LiveCD that embeds the HPE ilorest utility and potentially other management tools.&lt;/p&gt;
&lt;p&gt;After booting the server on the LiveCD, it is possible, locally or from a remote location, to perform in-band management operations.&lt;/p&gt;
&lt;p&gt;Although the solution presented here is based on Red Hat / CentOS distributions, it can be adapted to other Linux distributions as well as to Windows, since iLOrest has been supported in WinPE since version 1.5 of the tool (at that time this utility was still called &lt;code&gt;hprest&lt;/code&gt;).&lt;/p&gt;
&lt;h2&gt;High level solution description&lt;/h2&gt;
&lt;p&gt;The Red Hat / CentOS ecosystem provides the &lt;code&gt;livecd-tools&lt;/code&gt; package for an easy generation of customized bootable media. It contains the &lt;code&gt;livecd-creator&lt;/code&gt; utility for the creation of bootable ISO using a classical kickstart configuration file.&lt;/p&gt;
&lt;p&gt;The basic steps for embedding &lt;code&gt;ilorest&lt;/code&gt; in a customized bootable LiveCD are:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Setup of a RHEL / CentOS Media Server configured with the &lt;code&gt;livecd-tools&lt;/code&gt; and the &lt;code&gt;createrepo&lt;/code&gt; packages. The custom repository containing iLOrest (see next step) will also be hosted on this Media Server&lt;/li&gt;
&lt;li&gt;Creation of a custom repository on the Media Server holding the iLOrest package (and potentially other packages).&lt;/li&gt;
&lt;li&gt;Customization of the LiveCD configuration file.&lt;/li&gt;
&lt;li&gt;Creation of the ISO bootable image.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Once the LiveCD ISO file is created we will describe how to use it:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Presentation of the LiveCD to a managed server as an iLO Virtual Drive.&lt;/li&gt;
&lt;li&gt;Boot of the managed server on the LiveCD ISO image.&lt;/li&gt;
&lt;li&gt;Submission of iLOrest in-band management commands to the managed server using SSH.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Media Server setup&lt;/h2&gt;
&lt;p&gt;The following instructions suppose you are logged in as root in a CentOS or RHEL 7 server with an Internet connection or with the ability to retrieve all required packages mentioned below.&lt;/p&gt;
&lt;p&gt;The first step in the above list is not fully described in this article; I assume that &lt;a href=&quot;https://httpd.apache.org/download.cgi&quot;&gt;Apache&lt;/a&gt; or another Web server application is installed and configured to provide Linux YUM repositories to other servers on your network.&lt;/p&gt;
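&lt;p&gt;If you need a starting point, here is one possible layout. It is only a sketch: it assumes Apache&apos;s default DocumentRoot of &lt;code&gt;/var/www/html&lt;/code&gt; and a kit directory of your choosing (referred to as &lt;code&gt;&amp;#x3C;KIT-DIR&gt;&lt;/code&gt; later in this article), so adapt the paths to your own web server configuration:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Hypothetical layout - adjust to your environment
MediaServer# mkdir -p /var/www/html/&amp;#x3C;KIT-DIR&gt;
MediaServer# systemctl enable --now httpd
# Directories created under /var/www/html/&amp;#x3C;KIT-DIR&gt; (such as the myrepo
# repository built in the next section) then become reachable at
# http://&amp;#x3C;MediaServerIP&gt;/&amp;#x3C;KIT-DIR&gt;/
&lt;/code&gt;&lt;/pre&gt;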
&lt;p&gt;You can obtain the &lt;code&gt;livecd-tools&lt;/code&gt; package from the &lt;a href=&quot;http://fedoraproject.org/wiki/EPEL&quot; target=&quot;_blank&quot;&gt;Extra Packages for Enterprise Linux 7&lt;/a&gt; (EPEL) repository.&lt;/p&gt;
&lt;p&gt;On a CentOS server issue:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;MediaServer# yum install epel-release
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Read the &lt;a href=&quot;http://fedoraproject.org/wiki/EPEL&quot;&gt;EPEL Wiki&lt;/a&gt; for detailed installation instructions if you are using a RHEL Media Server.&lt;/p&gt;
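&lt;p&gt;On a RHEL 7 Media Server this typically boils down to installing the &lt;code&gt;epel-release&lt;/code&gt; package straight from the Fedora project, for example (check the EPEL Wiki for the current instructions for your release):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;MediaServer# yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
&lt;/code&gt;&lt;/pre&gt;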
&lt;p&gt;Install the &lt;code&gt;livecd-tools&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;MediaServer# yum install livecd-tools
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you review the content of this package you will notice that it contains several binary tools and a minimal kickstart file for Fedora:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;MediaServer# rpm -ql livecd-tools
...
/usr/bin/livecd-creator
...
/usr/share/doc/livecd-tools-13.4.9/livecd-fedora-minimal.ks
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We could use this minimal configuration file as a starting point and modify it to create a RHEL or CentOS configuration.&lt;/p&gt;
&lt;p&gt;Another possibility is to download a public configuration file, more suitable for our RHEL / CentOS Media Server in order to have only a few modifications to perform.&lt;/p&gt;
&lt;p&gt;If not already done, create a directory that will contain the LiveCD ISO image. Make sure that this directory is HTTP-accessible by the managed server. Then, download the configuration file in this directory:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;MediaServer# mkdir ISOs
MediaServer# cd ISOs

MediaServer# wget https://raw.githubusercontent.com/CentOS/sig-core-livemedia/master/kickstarts/centos-7-livecd.cfg
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We will customize this kickstart file later.&lt;/p&gt;
&lt;h2&gt;Create a package repository for &lt;code&gt;ilorest&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;If not already done, download the &lt;a href=&quot;https://github.com/HewlettPackard/python-redfish-utility/releases/latest&quot; target=&quot;_blank&quot;&gt;latest RESTful Interface Tool package&lt;/a&gt; and store it on your Media Server in a location accessible from the network:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;MediaServer# ls myrepo
ilorest-2.1-73.x86_64.rpm
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Create your custom iLOrest repository with &lt;code&gt;createrepo(8)&lt;/code&gt; and verify that a &lt;code&gt;repodata&lt;/code&gt; directory has been created along with the iLOrest package. The &lt;code&gt;createrepo&lt;/code&gt; utility is present by default on RHEL / CentOS, so you should not have to install it:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;MediaServer# createrepo myrepo
…
MediaServer# ls myrepo
ilorest-2.1-73.x86_64.rpm  repodata
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Ultimately, you should verify that this repository can be accessed by other servers on your network. A simple local test could be to retrieve the &lt;code&gt;repomd.xml&lt;/code&gt; file using &lt;code&gt;wget&lt;/code&gt; or &lt;code&gt;curl&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;MediaServer# cd /tmp
MediaServer# wget http://localhost/&amp;#x3C;Kit-DIR&gt;/myrepo/repodata/repomd.xml
...
2016-07-13 16:06:34 (237 MB/s) - “repomd.xml” saved [2974/2974]
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Customization of the LiveCD configuration file&lt;/h2&gt;
&lt;p&gt;It is now time to edit and customize the RHEL/CentOS LiveCD configuration file. The minimum modifications are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Allow SSH and HTTP protocols through the firewall&lt;/li&gt;
&lt;li&gt;Start service &lt;code&gt;sshd&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Add the iLOrest repository&lt;/li&gt;
&lt;li&gt;Add the iLOrest package&lt;/li&gt;
&lt;li&gt;Modify the &lt;code&gt;sshd&lt;/code&gt; config file to allow root’s SSH connections with a null password because the LiveCD post configuration script deletes the root password string (&lt;code&gt;passwd -d root&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The corresponding modified and added lines are:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Host# vi centos-7-livecd.cfg
…
firewall --enabled --ssh --http
…
services --enabled=NetworkManager,sshd
…
repo --name=myrepo --baseurl=http://&amp;#x3C;MediaServerIP&gt;/&amp;#x3C;KIT-DIR&gt;/myrepo/
...
%packages
ilorest
...
%post
echo &quot;Enabling SSH access with null root password&quot;
cd /etc/ssh
sed -i &apos;s/#\(PermitEmptyPasswords\) no/\1 yes/&apos; sshd_config
...
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I suggest performing the following additional modifications to reflect your environment location and needs:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;timezone Europe/Paris --isutc
...
skipx
# xconfig --startxonboot
…
# gdm-libs
...

# gnome-settings-daemon-updates

&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Creation of the ISO bootable LiveCD&lt;/h2&gt;
&lt;p&gt;The last step is to generate the bootable LiveCD ISO file:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Host# livecd-creator --config centos-7-livecd.cfg
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The above command generates a bootable &lt;code&gt;livecd-centos-7-livecd-&amp;#x3C;timestamp&gt;.iso&lt;/code&gt; image containing an OS and the required packages including iLOrest.&lt;/p&gt;
&lt;h2&gt;Presenting the LiveCD to a server&lt;/h2&gt;
&lt;p&gt;To boot the server on this image, we just need to present it as an iLO Virtual Drive and make sure it boots automatically during next reboot. The iLO Web Graphical User Interface can help to perform those tasks in an interactive manner.&lt;/p&gt;
&lt;p&gt;However, for didactic reasons, we will use the iLOrest interface tool in an Out-Of-Band manner from the Media Server. This tool provides atomic and macro commands for reading and setting parameters in the iLO, the BIOS or other subsystems.&lt;/p&gt;
&lt;p&gt;Moreover, it respects the good practices described in the &lt;em&gt;Getting Started with the iLO X Redfish API – A primer for coders&lt;/em&gt; articles (&lt;a href=&quot;https://sourceforge.net/p/redfish-lab/wiki/Getting-started-with-the-iLO4-Redfish-API/&quot;&gt;iLO 4&lt;/a&gt;, &lt;a href=&quot;https://sourceforge.net/p/redfish-lab/wiki/Getting-started-with-the-iLO5-Redfish-API/&quot;&gt;iLO 5&lt;/a&gt;): It does not assume that REST objects are at a specific location, but smartly crawls the entire mesh of REST types and sub-types.&lt;/p&gt;
&lt;p&gt;Fortunately, iLOrest provides a builtin &lt;code&gt;virtualmedia&lt;/code&gt; macro command described in its &lt;a href=&quot;https://servermanagementportal.ext.hpe.com/docs/redfishclients/ilorest-userguide/ilocommands/#virtualmedia-command&quot; target=&quot;_blank&quot;&gt;online documentation&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;To present the LiveCD to the managed server, the following basic steps will be performed from the Media Server:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Open an iLOrest session with the managed iLO.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Retrieve the ID number of the CD/DVD media type (usually 2).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Present the LiveCD as a Virtual CD/DVD using the builtin &lt;code&gt;virtualmedia&lt;/code&gt; command with the &lt;code&gt;--bootnextreset&lt;/code&gt; option to trigger the next reboot on this Virtual Drive.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Close the session.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Opening an Out-of-Band management session with iLOrest is very straightforward. You just need to supply the IP address of the managed iLO and the privileged credentials:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Host# ilorest login &amp;#x3C;iLO_IP&gt; -u &amp;#x3C;username&gt; -p &amp;#x3C;password&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Use the &lt;code&gt;virtualmedia&lt;/code&gt; command with no arguments to retrieve the CD/DVD Media ID:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Host# export VID=$(ilorest virtualmedia | awk &apos;/CD/ {print $1}&apos;  | cut -b2)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We are now ready to present the virtual CD/DVD with:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Host# ilorest virtualmedia $VID http://&amp;#x3C;MediaServerIP&gt;/livecd-centos-7-livecd-&amp;#x3C;timestamp&gt;.iso --bootnextreset
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Make sure you logout this iLOrest session:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Host# ilorest logout
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;In-Band Management with iLOrest&lt;/h2&gt;
&lt;p&gt;Once the managed server is powered on, it boots on the LiveCD and can be accessed via SSH, as root without supplying any password. This gives us the possibility to send iLOrest in-band management commands to the managed server:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;MediaServer# ssh root@livecd ilorest login
MediaServer# ssh root@livecd ilorest types
MediaServer# ssh root@livecd ilorest select VirtualMedia.
MediaServer# ssh root@livecd ilorest ls --json
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The above commands can be included in a shell script file to perform specific actions or review specific BIOS or iLO parameters.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;We’ve learned how to create a Linux (RHEL / CentOS) LiveCD embedding the iLOrest RESTful interface tool and enabling in-band management of a single managed server. If you have more than one server to manage, you can use the same solution to configure all of them the exact same way at once, using a tool like the parallel distributed shell (&lt;code&gt;pdsh&lt;/code&gt;) or the cluster shell (&lt;code&gt;clush&lt;/code&gt;), as sketched below. Look forward to more blog posts covering topics like &lt;code&gt;pdsh&lt;/code&gt; and &lt;code&gt;clush&lt;/code&gt;.&lt;/p&gt;
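&lt;p&gt;As a teaser, here is a rough sketch of what that could look like with &lt;code&gt;pdsh&lt;/code&gt;, assuming hypothetical LiveCD host names &lt;code&gt;livecd1&lt;/code&gt; to &lt;code&gt;livecd4&lt;/code&gt; that are reachable over SSH as described above:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Run the same iLOrest in-band commands on several LiveCD hosts at once
MediaServer# pdsh -R ssh -l root -w livecd[1-4] &apos;ilorest login; ilorest types&apos;
&lt;/code&gt;&lt;/pre&gt;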
&lt;p&gt;Don&apos;t forget to check out some of my other &lt;a href=&quot;https://developer.hpe.com/search/?term=donze&quot; target=&quot;_blank&quot;&gt;blog posts&lt;/a&gt; on the HPE Developer portal to learn more about Redfish tips and tricks.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[How to change the factory generated iLO Administrator password]]></title><description><![CDATA[This blog post has been moved to the Server Management Portal]]></description><link>https://developer.hpe.com/how-to-change-the-factory-generated-ilo-administrator-password/</link><guid isPermaLink="false">https://developer.hpe.com/how-to-change-the-factory-generated-ilo-administrator-password/</guid><pubDate>Thu, 29 Mar 2018 15:34:24 GMT</pubDate><content:encoded>&lt;br&gt;
&lt;p&gt;&lt;big&gt;This blog post has been moved to the &lt;a href=&quot;https://servermanagementportal.ext.hpe.com/docs/references_and_material/blogposts/etc/changefactorypasswd/changefactorypasswd&quot;&gt;Server Management Portal &lt;/a&gt;&lt;/big&gt;&lt;/p&gt;
&lt;br&gt;</content:encoded></item><item><title><![CDATA[The Redfish Event Service]]></title><description><![CDATA[This blog post has been moved to the Server Management Portal.]]></description><link>https://developer.hpe.com/the-redfish-event-service/</link><guid isPermaLink="false">https://developer.hpe.com/the-redfish-event-service/</guid><pubDate>Tue, 27 Mar 2018 15:17:22 GMT</pubDate><content:encoded>&lt;p&gt;This blog post has been moved to the &lt;strong&gt;&lt;a href=&quot;https://servermanagementportal.ext.hpe.com/docs/references_and_material/blogposts/etc/eventservice/redfisheventservice&quot;&gt;Server Management Portal&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Why we created HPE Deep Learning Cookbook]]></title><description><![CDATA[A history behind the Cookbook In the beginning of 2014, when Hewlett Packard Labs (still HP Labs back then, within Hewlett-Packard…]]></description><link>https://developer.hpe.com/why-we-created-hpe-deep-learning-cookbook/</link><guid isPermaLink="false">https://developer.hpe.com/why-we-created-hpe-deep-learning-cookbook/</guid><pubDate>Wed, 21 Mar 2018 06:00:26 GMT</pubDate><content:encoded>&lt;h2&gt;A history behind the Cookbook&lt;/h2&gt;
&lt;p&gt;In the beginning of 2014, when &lt;a href=&quot;https://www.labs.hpe.com/&quot;&gt;Hewlett Packard Labs&lt;/a&gt; (still HP Labs back then, within Hewlett-Packard) embarked on its journey towards Memory Driven Computing and &lt;a href=&quot;https://www.labs.hpe.com/the-machine&quot;&gt;The Machine&lt;/a&gt;, the “software” people of Labs were asked to find good applications where something like The Machine would shine. We started with algorithms and problems which were still challenging to run on existing systems. Monte Carlo simulations, graph inference, search space optimization and deep learning were, among others, on our list. We started to model the performance of these algorithms for different system architectures. How would performance change if we had more powerful compute nodes or a faster interconnect between nodes in the system? For deep learning, we soon realized that the answers largely depend on the topology of the artificial neural network. An optimal hardware system to train a deep convolutional neural network is not always the best one to train a fully connected model. Depending on the model you want to train (or use in production to run inference), you’ll need different hardware to minimize training or inference time. It is similar to cooking, where depending on what you want to cook, you need different ingredients in different proportions.&lt;/p&gt;
&lt;p&gt;In parallel with our Memory Driven Computing efforts, artificial intelligence
and deep learning started to gain interest from enterprise customers. During our
interactions with customers, we started to hear questions about the choice of
optimal hardware/software environment to run deep learning workloads. How to
choose from a sea of available options today? How to size and configure
infrastructure? A need for the Cookbook became clear.&lt;/p&gt;
&lt;p&gt;On our journey towards the Cookbook, we first came up with analytical
performance models to predict performance of various machine learning algorithms
(including deep learning) depending on compute power and the number of compute
nodes in a system, and properties of the interconnect between the compute nodes.
These simple models were useful for rough estimates, but didn’t take into
account many subtleties of compute systems. We had to get into
benchmarking, in addition to analytical modeling, to reason based on real
performance data.&lt;/p&gt;
&lt;h2&gt;Deep Learning Benchmarking Suite&lt;/h2&gt;
&lt;p&gt;We wanted to be able to collect performance data on various hardware systems
with various software and for different deep learning workloads. We wanted the
results to be consistent, reproducible and comparable. This means we need to
make sure that we run exactly the same workloads on multiple systems. We needed
a benchmarking tool. We had several options:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Use existing projects that target the deep learning domain. Among the most
recognized by the community are &lt;a href=&quot;https://svail&quot;&gt;DeepBench&lt;/a&gt; from Baidu and the
&lt;a href=&quot;https://github.com/soumith/convnet-benchmarks&quot;&gt;convnet-benchmarks&lt;/a&gt;. These
projects aim at benchmarking low-level functionality such as convolution or
matrix multiply operations or use simplified models.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use example training scripts that are part of their respective frameworks,
something most companies do. Typical examples are &lt;a href=&quot;https://github.com/tensorflow/models/tree/master/research/inception&quot;&gt;TensorFlow&apos;s
Inception​,&lt;/a&gt;
&lt;a href=&quot;https://github.com/apache/incubator-mxnet/tree/master/example/image-classification&quot;&gt;MXNet&apos;s image
classification&lt;/a&gt;
and &lt;a href=&quot;https://github.com/caffe2/caffe2/tree/master/caffe2/python/examples&quot;&gt;Caffe2&apos;s
ResNet50​&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Unfortunately, none of these options provides what we need - a way to collect
performance data for different deep learning workloads (and not only low-level
operations) in a consistent and reproducible manner across a range of software
and hardware combinations. Thus, we decided to create our own tool - &lt;a href=&quot;github.com/HewlettPackard/dlcookbook-dlbs&quot;&gt;Deep
Learning Benchmarking Suite&lt;/a&gt;. It is
open sourced and available on github for everyone who wants to run reproducible
and consistent deep learning benchmarks.&lt;/p&gt;
&lt;h2&gt;Deep Learning Performance Guide&lt;/h2&gt;
&lt;p&gt;We’ve been using Deep Learning Benchmarking Suite internally at HPE for some
time already. We’ve collected a variety of performance data on many hardware and
software configurations, and we continue to collect more and more data. Now we
want to share this data with everyone, so we can guide the choice of optimal
hardware and software environment in the open. Deep Learning Performance Guide
is a tool to do so. It is a web-based tool connected to our vast database of
benchmarking data. We plan to open this tool to the public at the end of March, 2018.
The first version of this tool will be based entirely on data from actual
benchmarks. In the future we plan to incorporate our analytical performance
models into the Performance Guide so it will provide performance estimates for
untested hardware/software configurations alongside real collected
performance measurements for tested configurations.&lt;/p&gt;
&lt;h2&gt;Reference Designs&lt;/h2&gt;
&lt;p&gt;Reference designs are the last component of our Cookbook toolset. These are
default hardware recipes for selected classes of deep learning workloads. So far
we have released Image Classification Reference Designs. We created those by
collecting performance data (with our Benchmarking Suite) for most common
convolutional neural networks (widely used for image classification problems),
on multiple hardware and software configurations, and by analyzing the collected
data with Performance Guide.&lt;/p&gt;
&lt;h2&gt;So, what is in the Cookbook?&lt;/h2&gt;
&lt;p&gt;HPE Deep Learning Cookbook is a toolset to characterize deep learning workloads
and to guide the choice of optimal hardware and software configurations for any
given workload. It consists of:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Deep Learning Benchmarking Suite - a tool to run deep learning benchmarks&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Deep Learning Performance Guide - a web-based tool to compare and analyze
the results of deep learning benchmarks. In the next version we will
integrate analytical/machine learning models into this tool to predict
performance of deep learning workloads for situations where we cannot run
actual benchmarks. These performance models already exist; we just need to
add them to the Performance Guide.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Reference Designs - recipes (descriptions) of default recommended hardware
configurations for classes of selected deep learning workloads, such as
image classification, natural language processing, speech recognition, video
analytics, and others.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;We created the Cookbook with the following objectives in mind:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;To be able to make recommendations to our customers on the optimal
hardware/software combinations for running their specific and varied deep
learning workloads for both the development (training) and deployment
(inference) stages.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To be able to run reproducible and consistent deep learning benchmarks in
different hardware/software environments that include a range of hardware
systems, deep learning frameworks and deep learning workloads. This would be
especially useful for performance benchmarking, capacity planning and
product qualification.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To provide the community with a standard tool that enables apples-to-apples
comparison of various systems.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To be able to validate and justify design options for future products.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;</content:encoded></item><item><title><![CDATA[HPE OneView Ansible Modules v5.0.0 has been released]]></title><description><![CDATA[HPE OneView Ansible Modules v5.0.0 has been released The HPE OneView Ansible modules have been updated to support OneView 4.0 (REST API…]]></description><link>https://developer.hpe.com/hpe-oneview-ansible-modules-v500-has-been-released/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-oneview-ansible-modules-v500-has-been-released/</guid><pubDate>Tue, 06 Mar 2018 20:24:23 GMT</pubDate><content:encoded>&lt;h1&gt;HPE OneView Ansible Modules v5.0.0 has been released&lt;/h1&gt;
&lt;p&gt;The HPE OneView Ansible modules have been updated to support OneView 4.0 (REST API version 600)&lt;/p&gt;
&lt;p&gt;The Ansible modules automate the provisioning of physical infrastructure on-demand using software-defined templates from HPE OneView.&lt;/p&gt;
&lt;p&gt;The list of supported resources and changes is available at:
&lt;a href=&quot;https://github.com/HewlettPackard/oneview-ansible/blob/v5.0.0/CHANGELOG.md&quot;&gt;https://github.com/HewlettPackard/oneview-ansible/blob/v5.0.0/CHANGELOG.md&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Release content is available at:
&lt;a href=&quot;https://github.com/HewlettPackard/oneview-ansible/releases/tag/v5.0.0&quot;&gt;https://github.com/HewlettPackard/oneview-ansible/releases/tag/v5.0.0&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The repository where code and examples exist is available on GitHub at:
&lt;a href=&quot;https://github.com/HewlettPackard/oneview-ansible&quot;&gt;https://github.com/HewlettPackard/oneview-ansible&lt;/a&gt;&lt;/p&gt;
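&lt;p&gt;As a quick illustration only (see the repository README for the complete and authoritative setup steps), getting the modules onto an Ansible control node generally amounts to cloning the repository and pointing Ansible at its module library:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ git clone https://github.com/HewlettPackard/oneview-ansible.git
$ cd oneview-ansible
# Path below follows the repository layout; the README may list additional
# environment variables to export
$ export ANSIBLE_LIBRARY=$(pwd)/library
&lt;/code&gt;&lt;/pre&gt;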
&lt;p&gt;Details about the Ansible modules, parameters, example usage, etc.:
&lt;a href=&quot;https://github.com/HewlettPackard/oneview-ansible/blob/master/oneview-ansible.md&quot;&gt;https://github.com/HewlettPackard/oneview-ansible/blob/master/oneview-ansible.md&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Additional information about using Ansible with HPE OneView can be found in the Ansible section at:
&lt;a href=&quot;https://developer.hpe.com/platform/hpe-oneview/home&quot;&gt;https://developer.hpe.com/platform/hpe-oneview/home&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE OneView Python SDK v4.5.0 has been released]]></title><description><![CDATA[HPE OneView Python SDK v4.5.0 has been released The HPE OneView python SDK has been updated to support OneView 4.0 (REST API version 60…]]></description><link>https://developer.hpe.com/hpe-oneview-python-sdk-v450-has-been-released/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-oneview-python-sdk-v450-has-been-released/</guid><pubDate>Tue, 06 Mar 2018 19:56:14 GMT</pubDate><content:encoded>&lt;h1&gt;HPE OneView Python SDK v4.5.0 has been released&lt;/h1&gt;
&lt;p&gt;The HPE OneView python SDK has been updated to support OneView 4.0 (REST API version 600).&lt;/p&gt;
&lt;p&gt;The SDK allows python developers to programmatically control HPE OneView managed resources  using an infrastructure-as-code approach for physical compute, storage, and fabric resources.&lt;/p&gt;
&lt;p&gt;The list of supported resources and changes is available at:
&lt;a href=&quot;https://github.com/HewlettPackard/python-hpOneView/blob/v4.5.0/CHANGELOG.md&quot;&gt;https://github.com/HewlettPackard/python-hpOneView/blob/v4.5.0/CHANGELOG.md&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Release content is available at:
&lt;a href=&quot;https://github.com/HewlettPackard/python-hpOneView/releases/tag/v4.5.0&quot;&gt;https://github.com/HewlettPackard/python-hpOneView/releases/tag/v4.5.0&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The repository where code and examples exist is available on GitHub at:
&lt;a href=&quot;https://github.com/HewlettPackard/python-hpOneView&quot;&gt;https://github.com/HewlettPackard/python-hpOneView&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Details on API coverage (additional APIs will be included in subsequent releases):
&lt;a href=&quot;https://github.com/HewlettPackard/python-hpOneView/blob/master/endpoints-support.md&quot;&gt;https://github.com/HewlettPackard/python-hpOneView/blob/master/endpoints-support.md&lt;/a&gt;&lt;/p&gt;
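&lt;p&gt;Since the SDK is published on PyPI, installing this release is typically a one-liner (assuming a working pip; adjust for virtual environments as needed):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ pip install hpOneView==4.5.0
&lt;/code&gt;&lt;/pre&gt;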
&lt;p&gt;PyPi details are available at:
&lt;a href=&quot;https://pypi.python.org/pypi/hpOneView/4.5.0&quot;&gt;https://pypi.python.org/pypi/hpOneView/4.5.0&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Creating a Python version that enforces FIPS]]></title><description><![CDATA[Creating a Python version that enforces FIPS Version 2.2 and later of our python-redfish-utility can enforce the FIPS mode of the operating…]]></description><link>https://developer.hpe.com/creating-a-python-version-that-enforces-fips/</link><guid isPermaLink="false">https://developer.hpe.com/creating-a-python-version-that-enforces-fips/</guid><pubDate>Thu, 15 Feb 2018 18:04:25 GMT</pubDate><content:encoded>&lt;h1&gt;&lt;strong&gt;Creating a Python version that enforces FIPS&lt;/strong&gt;&lt;/h1&gt;
&lt;p&gt;Version 2.2 and later of our python-redfish-utility can enforce the FIPS mode of the operating system it is used on. If an OS is in FIPS mode, but uses a non-FIPS encryption algorithm, Python will crash. If used with a FIPS-validated module such as the OpenSSL FIPS module, a project can be FIPS-compliant. By default, Python does not ship with a FIPS version of OpenSSL, so we must build Python from source to meet the OpenSSL FIPS requirement. In this blog post, we cover building Python 2.7 from source with a FIPS version of OpenSSL in Windows and Linux. In addition, we provide instructions for adding extra functions to the Python SSL library to set and get the OpenSSL FIPS mode.
&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Patching Python with our FIPS mode functions&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Before building Python from source, we must add extra functions so we can get and set OpenSSL FIPS mode. To add this functionality, we must change the &lt;code&gt;_ssl.c&lt;/code&gt; and &lt;code&gt;ssl.py&lt;/code&gt; source files. A patch file that adds the required functions is available &lt;a href=&quot;https://github.com/HewlettPackard/python-redfish-utility/tree/master/patches&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Building Python in Linux&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;When you build Python from source, it uses the version of OpenSSL on the local system. If you already have a FIPS version of OpenSSL that you are comfortable with, you can use that. If not, you must update OpenSSL. You can also install an alternate version of OpenSSL and use it with some modifications to the Python setup files. Use an OpenSSL version with shared objects, because that is what Python uses to build its &lt;code&gt;pyd&lt;/code&gt; and
&lt;code&gt;lib&lt;/code&gt; files.&lt;/p&gt;
&lt;p&gt;Once you have the OpenSSL you want on the system, you can install Python with the required modifications. If you are using an OpenSSL FIPS from &lt;code&gt;/usr/local/ssl/&lt;/code&gt;, you do not need to modify the setup files. If you are using an alternate installation of OpenSSL or one in a different location, changes are required. You must configure Python with the location of the OpenSSL &lt;code&gt;include&lt;/code&gt; and &lt;code&gt;lib&lt;/code&gt; files in the &lt;code&gt;setup.py&lt;/code&gt; file. Additionally, you must configure Python with the OpenSSL location, and uncomment code in the &lt;code&gt;Modules/Setup.dist&lt;/code&gt; file. The following example shows a patch file with the required changes. The file locations differ based on the location of the OpenSSL version you want to use.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/2/7-1518719833337.png&quot; alt=&quot;7&quot;&gt;&lt;/p&gt;
&lt;p&gt;After the required changes are complete (if any), we can build Python just as we normally would. Once Python is successfully installed, you can use the extra SSL functions to enable FIPS mode in OpenSSL.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Building Python in Windows&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Windows does not come with a version of OpenSSL, which complicates things. Each installation of Python on Windows comes with its own non-FIPS version of OpenSSL, so we will use our own. To make things even more interesting, Python 2.7 only has Visual Studio projects up to 9.0 (Visual Studio 2008 Professional). The easiest way to get the required runtimes is to build using this version of Visual Studio. If using this version is not an option, and you need a more recent version, there are resources available to describe how to do that.&lt;/p&gt;
&lt;p&gt;To begin, we must build a FIPS version of OpenSSL in Windows. Make sure that you build OpenSSL with shared objects, because that is what Python uses to build its pyd and lib files. Documentation is available online if you need assistance building OpenSSL with the FIPS module.&lt;/p&gt;
&lt;p&gt;After a FIPS version of OpenSSL is built on the system, we can start building Python. Before building Python, a few changes in the Python source are required. The first change is in the &lt;code&gt;pyproject.vsprops&lt;/code&gt; file located in the &lt;code&gt;PC\(VS version)\&lt;/code&gt; location of the Python source code. We must update the location of OpenSSL to our new version. In the following example, you can see that Visual Studio looks for an OpenSSL folder from the externalsDir, and the externalsDir is two folders above our current location and is named externals. This folder does not exist by default, so we must create it and add our OpenSSL folder.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/2/8-1518719845111.png&quot; alt=&quot;8&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/2/9-1518719854112.png&quot; alt=&quot;9&quot;&gt;&lt;/p&gt;
&lt;p&gt;Next, change the location to our new OpenSSL.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/2/10-1518719873936.png&quot; alt=&quot;10&quot;&gt;&lt;/p&gt;
&lt;p&gt;The next files to change are the &lt;code&gt;_hashlib.vcproj&lt;/code&gt; and &lt;code&gt;_ssl.vcproj&lt;/code&gt; files, located in the same directory, for building the &lt;code&gt;_hashlib.pyd&lt;/code&gt; and &lt;code&gt;_ssl.pyd&lt;/code&gt; files. We must update both files with the locations we compile and link to when building. Find the configuration you intend to build Python with in each file, and navigate to the &lt;strong&gt;VCPreBuildEventTool&lt;/strong&gt; entry. The following example shows the Release|x64 configuration. Since we already built OpenSSL, we do not need to run the build_ssl.py script, so you can remove the &lt;strong&gt;CommandLine&lt;/strong&gt; entry.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/2/11-1518719882829.png&quot; alt=&quot;11&quot;&gt;&lt;/p&gt;
&lt;p&gt;Since we are not building using build_ssl.py, the libraries and include files might not be in the location Python is expecting. To fix this issue, change the &lt;strong&gt;VCCLCompilerTool&lt;/strong&gt; entry. This entry must point to the &lt;code&gt;include&lt;/code&gt; folder of your built OpenSSL.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/2/12-1518719933237.png&quot; alt=&quot;12&quot;&gt;&lt;/p&gt;
&lt;p&gt;We also must change the &lt;strong&gt;VCLinkerTool&lt;/strong&gt; entry to our new library locations.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/2/13-1518720063593.png&quot; alt=&quot;13&quot;&gt;&lt;/p&gt;
&lt;p&gt;After you make these changes, you should be able to successfully build Python, &lt;code&gt;_ssl.pyd&lt;/code&gt;, and &lt;code&gt;_hashlib.pyd&lt;/code&gt;. You might get errors building other extensions, but solving those issues is beyond the scope of this blog. If you want to update only &lt;code&gt;_ssl.pyd&lt;/code&gt; and &lt;code&gt;_hashlib.pyd&lt;/code&gt;, there is a shortcut you can take. Instead of building all of Python, right-click on both the &lt;strong&gt;_hashlib&lt;/strong&gt; project and &lt;strong&gt;_ssl&lt;/strong&gt; project respectively, and then select &lt;strong&gt;Build&lt;/strong&gt; in Visual Studio.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/2/14-1518720073858.png&quot; alt=&quot;14&quot;&gt;&lt;/p&gt;
&lt;p&gt;This action will provide the &lt;code&gt;pyd&lt;/code&gt; files that you need. Install Python by using the installer from the Python website, and then replace the &lt;code&gt;_ssl.pyd&lt;/code&gt; and &lt;code&gt;_hashlib.pyd&lt;/code&gt; files in the &lt;code&gt;DLLs&lt;/code&gt; folder with your new versions.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/2/15-1518720081531.png&quot; alt=&quot;15&quot;&gt;&lt;/p&gt;
&lt;p&gt;You also must replace the &lt;code&gt;ssl.py&lt;/code&gt; file in the &lt;code&gt;Lib&lt;/code&gt; folder with your new version.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/2/16-1518720090640.png&quot; alt=&quot;16&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Using our FIPS mode functions&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Now that we have our new FIPS version of &lt;code&gt;_ssl&lt;/code&gt; and &lt;code&gt;_hashlib&lt;/code&gt;, we can make sure that they work correctly in the Python shell.
Importing ssl and checking the OpenSSL version, we can confirm that it has been updated with a FIPS version of SSL.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/2/17-1518720097685.png&quot; alt=&quot;17&quot;&gt;&lt;/p&gt;
&lt;p&gt;We can now try one of our newly added functions. Let’s check the &lt;strong&gt;FIPS mode&lt;/strong&gt; of OpenSSL using our first function, FIPS_mode. This function returns a long type 0 for not enforcing FIPS, and 1 for enforcing FIPS. It is important to note that whether or not the OS is in a FIPS mode, when we start Python, OpenSSL is not enforcing FIPS mode.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/2/18-1518720288533.png&quot; alt=&quot;18&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;FIPS_mode&lt;/strong&gt; returned a 0, so we are not enforcing FIPS. Using a hash that is not FIPS-compliant will work.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/2/19-1518720279167.png&quot; alt=&quot;19&quot;&gt;&lt;/p&gt;
&lt;p&gt;Let’s set Python’s OpenSSL to enforce FIPS mode, by using our second new function &lt;strong&gt;FIPS_mode_set&lt;/strong&gt; and passing it a long value of 1. After we set it, let’s verify that OpenSSL is enforcing FIPS by using the previous function again.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/2/20-1518720318403.png&quot; alt=&quot;20&quot;&gt;&lt;/p&gt;
&lt;p&gt;Now that we are enforcing FIPS mode in OpenSSL, if we try to use a hash that is not FIPS-compliant Python will crash with the following error:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/2/21-1518720331286.png&quot; alt=&quot;21&quot;&gt;&lt;/p&gt;
&lt;p&gt;If you want to enforce FIPS, make sure that you use OpenSSL hash functions, and not Python hash functions. This means using the &lt;strong&gt;new&lt;/strong&gt; constructor instead of directly calling a hash function that Python already knows.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/2/22-1518720342935.png&quot; alt=&quot;22&quot;&gt;&lt;/p&gt;
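&lt;p&gt;To tie it all together, here is a minimal sketch of the flow from a shell, assuming the patched Python 2.7 build described above (the &lt;strong&gt;FIPS_mode&lt;/strong&gt; and &lt;strong&gt;FIPS_mode_set&lt;/strong&gt; functions only exist after applying our patch):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Check the OpenSSL version and the current FIPS state (0 = not enforcing)
$ python -c &quot;import ssl; print ssl.OPENSSL_VERSION; print ssl.FIPS_mode()&quot;

# Turn FIPS enforcement on, confirm it, and hash through OpenSSL via hashlib.new()
$ python -c &quot;import ssl, hashlib; ssl.FIPS_mode_set(1); print ssl.FIPS_mode(); print hashlib.new(&apos;sha256&apos;, &apos;data&apos;).hexdigest()&quot;
&lt;/code&gt;&lt;/pre&gt;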
&lt;p&gt;Now that Python is updated to use a FIPS version of OpenSSL, and we included extra functions to set OpenSSL to enforce FIPS, we can make sure that we are using only FIPS algorithms and hashes. If you want to enforce the OS FIPS mode, check for specific Environment Variables or Registries. For examples, see our GitHub for the &lt;a href=&quot;https://github.com/HewlettPackard/python-redfish-utility&quot;&gt;python-redfish-utility&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Updating Python and Openssl on OS X]]></title><description><![CDATA[Updating Python and Openssl on OS X Security is a vital part of any server. On our Enterprise servers we provide options for higher levels…]]></description><link>https://developer.hpe.com/updating-python-and-openssl-on-os-x/</link><guid isPermaLink="false">https://developer.hpe.com/updating-python-and-openssl-on-os-x/</guid><pubDate>Mon, 18 Dec 2017 18:53:11 GMT</pubDate><content:encoded>&lt;h1&gt;&lt;strong&gt;Updating Python and Openssl on OS X&lt;/strong&gt;&lt;/h1&gt;
&lt;p&gt;Security is a vital part of any server. On our Enterprise servers we provide options for higher levels of security. However, Mac computers ship with an older version of Openssl, a vital component for our python-redfish-library. In this blog we will cover updating Openssl to at least 1.0.0 to allow support for our python-redfish-library.
&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Higher Security Settings&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;“High Security” is one of the Higher Security levels offered on our Gen10 servers. When “High Security” or other higher security settings are enabled, Openssl must be used to connect to the server. Below is a screenshot where these security settings can be changed. It can be found under Security &gt; Encryption.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/12/1-1513693864107.jpg&quot; alt=&quot;openssl1&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Running Examples without Openssl&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Here we show an example of an attempt to execute an example script on a Mac without updated Openssl.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/12/2-1513693937238.png&quot; alt=&quot;openssl2&quot;&gt;&lt;/p&gt;
&lt;p&gt;Note in the Terminal we are utilizing Python 2.7.11 as well as OpenSSL 0.9.8zg. Here we attempted to execute the first example, ex01_get_resource_directory.py. Since our version is outdated, we have an error in attempting to connect.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Installing Homebrew&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;To perform our upgrades, we will use Homebrew. Homebrew is a package manager for macOS. We will be using it to install the most recent version (at least v1.0) of Openssl, and then use that version to install a newer version of Python.&lt;/p&gt;
&lt;p&gt;Homebrew installs into &lt;code&gt;/usr/local/&lt;/code&gt; so it must be accessible to the user.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;sudo chown -R $(whoami) /usr/local&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;The above command will give the user permissions to install to &lt;code&gt;/usr/local/&lt;/code&gt;.
To install Homebrew:
&lt;code&gt;/usr/bin/ruby -e &quot;$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)&quot;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Paste this command into your console, and Homebrew will install itself. Note that this should not be done with sudo.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/2/3-1518714501022.png&quot; alt=&quot;ssl3&quot;&gt;&lt;/p&gt;
&lt;p&gt;Upon installing Homebrew, the first thing we want to do is update it.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;brew update&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Then we want to install openssl
&lt;code&gt;brew install openssl&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;If you’re behind a firewall and use a proxy, add &lt;code&gt;ALL_PROXY=proxy&lt;/code&gt; before your command:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;ALL_PROXY=socks5://127.0.0.1:9001 brew install openssl&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Substitute &lt;code&gt;socks5://127.0.0.1:9001 &lt;/code&gt; with your own proxy. This command can be used for both the update and install commands.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Linking Openssl and Installing Python&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;We can attempt to create symlinks through homebrew for Openssl with the following command:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;brew link openssl --force&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;This first command may work in certain versions of Homebrew, but recent updates have changed how Homebrew processes openssl installation. Alternatively, we can create these symlinks manually with the following commands:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;ln -s /usr/local/opt/openssl/lib/libcrypto.1.0.0.dylib /usr/local/lib/&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;ln -s /usr/local/opt/openssl/lib/libssl.1.0.0.dylib /usr/local/lib/&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Now that the symlinks have been created, we can install Python:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;brew install python --with-brewed-openssl&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/2/4-1518714723500.png&quot; alt=&quot;ssl4&quot;&gt;&lt;/p&gt;
&lt;p&gt;Upon completion, we will have Python installed with the new version of Openssl. In the example below, we have opened the new version of Python, as well as checked the version of Openssl, which shows as 1.0.2j.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/2/5-1518716804698.png&quot; alt=&quot;ssl5.2&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Setting the new Python as default&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The newly installed Python is different from the system Python that comes prepackaged on the Mac. We need to ensure that it becomes the preferred Python installation. We can use the &lt;code&gt;which python&lt;/code&gt; command to find which installation is being used.
Often, the default directory for system Python is:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;/usr/bin/python2.7&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Homebrew’s Python would be installed under something like:
&lt;code&gt;/usr/local/bin/python2.7&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;In order to fix this, we simply edit our &lt;code&gt;/etc/paths&lt;/code&gt; file. We want to ensure that &lt;code&gt;/usr/local/&lt;/code&gt; is at the top of the list to ensure that version of Python is used first. See example paths file below:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;/usr/local/bin&lt;/code&gt;
&lt;code&gt;/usr/local/sbin&lt;/code&gt;
&lt;code&gt;/usr/bin&lt;/code&gt;
&lt;code&gt;/bin&lt;/code&gt;
&lt;code&gt;/usr/sbin&lt;/code&gt;
&lt;code&gt;/sbin&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Note that we do not want to remove the system Python, since some Mac services rely on the system version of Python. Instead, we simply want to redirect it to use our new version of Python.&lt;/p&gt;
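&lt;p&gt;After editing the paths file, open a new Terminal session and verify that the Homebrew Python is now the one being picked up and that it links against the newer Openssl. A quick, optional sanity check:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;which python&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;python -c &quot;import ssl; print ssl.OPENSSL_VERSION&quot;&lt;/code&gt;&lt;/p&gt;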
&lt;p&gt;&lt;strong&gt;Running the new Python with Openssl&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Now that we’ve updated Openssl, and integrated it with Python, we can use this new version of Python to utilize our python-redfish-library. Note that &lt;code&gt;pip install&lt;/code&gt; may need to be run to reinstall the redfish library (&lt;code&gt;pip install python-redfish-library&lt;/code&gt;). In the example below, IDLE was used to run the first example again:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2018/2/6-1518717614095.png&quot; alt=&quot;6.2ssl&quot;&gt;&lt;/p&gt;
&lt;p&gt;As you can see, our Python has since been updated to the newest version as of writing, 2.7.12. Additionally, the example runs successfully instead of failing.&lt;/p&gt;
&lt;p&gt;Now that we’ve successfully updated Openssl and Python on our Mac, we can now utilize the python-redfish-library to its full extent in managing remote servers via the iLO RESTful API.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Doryd: A Dynamic Provisioner for Docker Volume plugins]]></title><description><![CDATA[Many vendors have invested quite a lot of effort into building robust Docker Volume plugins, including HPE Nimble Storage and HPE 3PAR…]]></description><link>https://developer.hpe.com/doryd-a-dynamic-provisioner-for-docker-volume-plugins/</link><guid isPermaLink="false">https://developer.hpe.com/doryd-a-dynamic-provisioner-for-docker-volume-plugins/</guid><pubDate>Wed, 06 Dec 2017 00:39:08 GMT</pubDate><content:encoded>&lt;p&gt;Many vendors have invested quite a lot of effort into building robust Docker Volume plugins, including HPE Nimble Storage and HPE 3PAR. Docker and Kubernetes (K8s) share the same fundamental principle of being able to bind mount a host filesystem inside a container through the &lt;code&gt;mnt&lt;/code&gt; namespace. Any other similarity pretty much stops there as far as compatibility goes. K8s has a multitude of Persistent Volume (PV) plugins mainly focused on IaaS provider frameworks and native filesystems.&lt;/p&gt;
&lt;p&gt;The FlexVolume plugin is part of K8s and allows vendors such as HPE to provide out-of-tree FlexVolume drivers. What is important to understand here is that the FlexVolume plugin only supports static provisioning. A K8s cluster administrator must create PVs manually for K8s users to create Persistent Volume Claims (PVC) against. This might seem a bit tedious but is still magnitudes more gracious than creating a volume on the backend array manually and presenting it to the cluster (don&apos;t forget to add those IQNs or WWNs to the target when you expand your cluster!), which is what the iSCSI or FC plugins for K8s require. That said, I would argue static provisioning is good enough if you only have a handful of applications that require PVs on a fairly static cluster, without any need for advanced data services or for letting users dynamically create PVs based on their needs.&lt;/p&gt;
&lt;p&gt;Storage Classes (SC) allow dynamic provisioning of PVs based on PVCs created by a user. This feature has been around since K8s 1.2 and was promoted to beta in 1.4. SCs allow a cluster administrator to define named classes with certain attributes, such as which provisioner to use, default PV plugin parameters and so on. SCs enable policy-based storage management which essentially abstracts storage minutia for the user. This blog post will discuss this concept in great detail.&lt;/p&gt;
&lt;p&gt;Our story around Dory, the FlexVolume driver that speaks whale, would not be complete without a dynamic provisioner, as cluster administrators have better things to do than provision PVs for their users. A provisioner is a very simple daemon that listens for PVCs and satisfies those claims based on the defined SCs.&lt;/p&gt;
&lt;p&gt;The rest of this blog post walks through the steps on how to get started with Dory and Doryd. Just replace &lt;code&gt;nimble&lt;/code&gt; in the examples below with your particular Docker Volume plugin to get started.&lt;/p&gt;
&lt;h1&gt;Dory: A FlexVolume Recap&lt;/h1&gt;
&lt;p&gt;Dory is an &lt;a href=&quot;https://github.com/hpe-storage/dory&quot;&gt;open source project available on GitHub&lt;/a&gt;. The latest instructions on how to build and install Dory will always be available in the &lt;a href=&quot;https://github.com/hpe-storage/dory/blob/master/docs/dory/README.md&quot;&gt;README.md&lt;/a&gt;, steps vary between distributions. I&apos;ve covered this &lt;a href=&quot;https://community.hpe.com/t5/HPE-Nimble-Storage-Tech-Blog/Dory-A-FlexVolume-Driver-that-speaks-Whale/ba-p/6986638&quot;&gt;in the past&lt;/a&gt; but it did not include Doryd and the naming schemes required for Doryd.&lt;/p&gt;
&lt;p&gt;I currently have an Ubuntu 16.04.3 machine in front of me with K8s 1.8.4 and the latest Nimble Linux Toolkit (NLT) installed, but I chose to install the binary instead. NLT contains the HPE Nimble Storage Docker Volume plugin and the rest of the guide assumes all hosts have the Docker Volume plugin installed and working.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;sudo mkdir -p /usr/libexec/kubernetes/kubelet-plugins/volume/exec/dev.hpe.com~nimble
sudo curl -sLo /usr/libexec/kubernetes/kubelet-plugins/volume/exec/dev.hpe.com~nimble/nimble \
https://dl.bintray.com/hpe-storage/dory/dory-master
sudo chmod 755 /usr/libexec/kubernetes/kubelet-plugins/volume/exec/dev.hpe.com~nimble/nimble
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; What is being made available to the kubelet is a FlexVolume driver referenced as &lt;code&gt;dev.hpe.com/nimble&lt;/code&gt;. Changing &lt;code&gt;dev.hpe.com&lt;/code&gt; to something else will break the stock provisioner we&apos;re going to use in the examples below.&lt;/p&gt;
&lt;p&gt;In the same directory where you placed the &lt;code&gt;dory&lt;/code&gt; binary, in my case &lt;code&gt;dev.hpe.com~nimble&lt;/code&gt;, you need to copy the &lt;code&gt;dory.json&lt;/code&gt; file from the Dory repository to &lt;code&gt;dev.hpe.com~nimble/nimble.json&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;sudo curl -sLo /usr/libexec/kubernetes/kubelet-plugins/volume/exec/dev.hpe.com~nimble/nimble.json \
https://raw.githubusercontent.com/hpe-storage/dory/master/src/nimblestorage/cmd/dory/dory.json
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The key here is to identify the socket file where your Docker Volume plugin accepts API calls. Since this can vary between plugins as well as distributions, please &lt;code&gt;curl&lt;/code&gt; an API call to the socket file you intend to use. For example, in my case:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;sudo curl -XPOST --unix-socket /run/docker/plugins/nimble.sock http:/Plugin.Activate
{&quot;Implements&quot;:[&quot;VolumeDriver&quot;]}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; &lt;code&gt;curl --unix-socket&lt;/code&gt; is a fairly new thing for &lt;code&gt;curl&lt;/code&gt; and could be missing from your distribution.&lt;/p&gt;
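&lt;p&gt;A quick way to check is to look at the curl version; support for &lt;code&gt;--unix-socket&lt;/code&gt; was added in curl 7.40.0, so anything older will need a newer package or a different API client:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# --unix-socket requires curl 7.40.0 or newer
curl --version | head -1
&lt;/code&gt;&lt;/pre&gt;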
&lt;p&gt;Adjust your &lt;code&gt;dory.json&lt;/code&gt; accordingly. Full documentation for each key is available in &lt;a href=&quot;https://github.com/hpe-storage/dory/blob/master/docs/dory/README.md#building&quot;&gt;the Dory repo&lt;/a&gt;. In most cases, only &lt;code&gt;dockerVolumePluginSocketPath&lt;/code&gt; needs to be changed.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-JSON&quot;&gt;{
    &quot;logFilePath&quot;: &quot;/var/log/dory.log&quot;,
    &quot;logDebug&quot;: false,
    &quot;stripK8sFromOptions&quot;: true,
    &quot;dockerVolumePluginSocketPath&quot;: &quot;/run/docker/plugins/nimble.sock&quot;,
    &quot;createVolumes&quot;: true,
    &quot;enable1.6&quot;: false,
    &quot;listOfStorageResourceOptions&quot;: [ &quot;size&quot;, &quot;sizeInGiB&quot; ],
    &quot;factorForConversion&quot;: 1073741824
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Neither Dory nor Doryd cares whether Docker is installed on the host or not. As long as there is a daemon responding to Docker Volume API calls, you&apos;re good. In other words, Dory and Doryd should work just fine with other container engines compatible with K8s, such as &lt;code&gt;rkt&lt;/code&gt; or &lt;code&gt;cri-o&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Another gotcha worth mentioning if you intend to create PVs directly against the FlexVolume driver: the underlying volume is not created until a pod or deployment requests an attachment of the PV. In other words, just creating the PV and PVC won&apos;t create the Docker Volume itself.&lt;/p&gt;
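&lt;p&gt;For reference, a minimal sketch of such a static PV could look like the example below. This is only an illustration of the pattern; the name is made up and the &lt;code&gt;options&lt;/code&gt; map is simply handed to the Docker Volume plugin, so use whatever keys your plugin understands:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-YAML&quot;&gt;---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-example
spec:
  capacity:
    storage: 16Gi
  accessModes:
    - ReadWriteOnce
  flexVolume:
    driver: dev.hpe.com/nimble
    options:
      # keys here are passed down to the Docker Volume plugin on attach
      description: &quot;Statically provisioned FlexVolume PV&quot;
&lt;/code&gt;&lt;/pre&gt;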
&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; If you&apos;re using K8s &amp;#x3C; 1.8, the kubelet needs to be restarted to pick up the installed FlexVolume driver. This step may differ depending on how you run your kubelet; on Ubuntu 16.04, where the cluster has been installed with &lt;code&gt;kubeadm&lt;/code&gt;, simply:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;sudo systemctl restart kubelet
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Watch &lt;code&gt;/var/log/dory.log&lt;/code&gt; for initialization:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;tail /var/log/dory.log
Info : 2017/12/02 20:50:31 dory.go:55: [5202] entry  : Driver=nimble Version=1.1.0-d95ad289 Socket=/run/docker/plugins/nimble.sock Overridden=true
Info : 2017/12/02 20:50:31 dory.go:58: [5202] request: init []
Info : 2017/12/02 20:50:31 dory.go:68: [5202] reply  : init []: {&quot;status&quot;:&quot;Success&quot;,&quot;capabilities&quot;:{&quot;attach&quot;:false}}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We&apos;re now ready to move on to Doryd!&lt;/p&gt;
&lt;h1&gt;Kubekuddle this!&lt;/h1&gt;
&lt;p&gt;Conveniently enough, &lt;code&gt;doryd&lt;/code&gt; runs as a DaemonSet (DS); a DS ensures the provisioner runs (and keeps running) on all the cluster nodes. You&apos;re given the option to build the &lt;code&gt;doryd&lt;/code&gt; image and specify your own DS specification; however, the following specification is known to work under most circumstances:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;kubectl apply -f https://raw.githubusercontent.com/hpe-storage/dory/985f0313440cd2181c27e5d94b31f9fe75714c3f/examples/ds-doryd.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You may inspect the DS on the cluster by running:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;kubectl describe ds/doryd
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You&apos;ll see something similar to this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Name:           doryd
Selector:       daemon=dory-daemon
Node-Selector:  &amp;#x3C;none&gt;
Labels:         daemon=dory-daemon
Annotations:    kubectl.kubernetes.io/last-applied-configuration={&quot;apiVersion&quot;:&quot;extensions/v1beta1&quot;,&quot;kind&quot;:&quot;DaemonSet&quot;,&quot;metadata&quot;:{&quot;annotations&quot;:{},&quot;name&quot;:&quot;doryd&quot;,&quot;namespace&quot;:&quot;default&quot;},&quot;spec&quot;:{&quot;template&quot;:{&quot;metadata&quot;...
Desired Number of Nodes Scheduled: 1
Current Number of Nodes Scheduled: 1
Number of Nodes Scheduled with Up-to-date Pods: 1
Number of Nodes Scheduled with Available Pods: 1
Number of Nodes Misscheduled: 0
Pods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  daemon=dory-daemon
  Containers:
   dory:
    Image:        nimblestorage/doryd
    Port:         &amp;#x3C;none&gt;
    Environment:  &amp;#x3C;none&gt;
    Mounts:
      /etc/kubernetes from k8s (rw)
      /run/docker/plugins/ from dockersocket (rw)
      /usr/libexec/kubernetes/kubelet-plugins/volume/exec from flexvolumedriver (rw)
  Volumes:
   k8s:
    Type:  HostPath (bare host directory volume)
    Path:  /etc/kubernetes/
   flexvolumedriver:
    Type:  HostPath (bare host directory volume)
    Path:  /usr/libexec/kubernetes/kubelet-plugins/volume/exec
   dockersocket:
    Type:  HostPath (bare host directory volume)
    Path:  /run/docker/plugins/
Events:
  Type    Reason            Age   From        Message
  ----    ------            ----  ----        -------
  Normal  SuccessfulCreate  1m    daemon-set  Created pod: doryd-6m52b
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That&apos;s it! We&apos;re ready to create some Storage Classes!&lt;/p&gt;
&lt;h1&gt;Defining and consuming Storage Classes&lt;/h1&gt;
&lt;p&gt;Specifications for SCs are defined by the cluster admin. The parameter most likely to be specific to your particular environment is the &lt;code&gt;provisioner&lt;/code&gt; in the class. The parameters you want passed down to the underlying Docker Volume plugin have their own block as well. HPE Nimble Storage, in the examples below, has a ton of different parameters, but let&apos;s keep it lean and simple for these particular examples.&lt;/p&gt;
&lt;p&gt;A popular pattern throughout the history of storage has been the notion of categorizing storage into gold, silver and bronze &quot;tiers&quot;, where each tier has certain performance, reliability or availability characteristics. At this time, HPE Nimble Storage goes from fast to faster; reliability and availability are the same across the portfolio, so we can&apos;t use this traditional tier paradigm. Instead, let&apos;s optimize each class for a certain workload and capacity. SCs named &quot;transactionaldb&quot;, &quot;unstructuredfile&quot; and &quot;securearchive&quot; would be more descriptive examples.&lt;/p&gt;
&lt;p&gt;HPE Nimble Storage uses Predictive Analytics to look across the install base to help optimize Performance Policies, such as block size, compression, caching and quota behavior. There&apos;s also the concept of Protection Templates where a storage administrator may define snapshot schedules, retention and replication to either another HPE Nimble Storage array or to the public cloud (Amazon AWS/Azure) via &lt;a href=&quot;https://www.hpe.com/us/en/storage/cloud-volumes.html&quot;&gt;HPE Cloud Volumes&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The below YAML files are available in the &lt;a href=&quot;https://github.com/NimbleStorage/container-examples/tree/master/dory/doryd-intro&quot;&gt;container-examples repository&lt;/a&gt; on GitHub.&lt;/p&gt;
&lt;h2&gt;transactionaldb&lt;/h2&gt;
&lt;p&gt;An example SC optimized for a transactional workload on an all-flash pool where we would attach a stock Protection Template.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-YAML&quot;&gt;---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
 name: transactionaldb
provisioner: dev.hpe.com/nimble
parameters:
  description: &quot;Volume provisioned by doryd from transactionaldb StorageClass&quot;
  perfPolicy: &quot;SQL Server&quot;
  protectionTemplate: &quot;Retain-48Hourly-30Daily-52Weekly&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Create the SC with:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;kubectl create -f https://raw.githubusercontent.com/NimbleStorage/container-examples/master/dory/doryd-intro/sc-transactionaldb.yml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This SC may then be referenced as such:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-YAML&quot;&gt;---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mariadb-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 16Gi
  storageClassName: transactionaldb
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Let&apos;s create the PVC and walk through what we can expect:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;kubectl create -f https://raw.githubusercontent.com/NimbleStorage/container-examples/master/dory/doryd-intro/pvc-example.yml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;A subtle difference between using SCs and the FlexVolume driver directly is that the Docker Volumes actually get created upon creating the PVC. Example inspection:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ kubectl get pvc/example-claim 
NAME            STATUS    VOLUME                                                 CAPACITY   ACCESS MODES   STORAGECLASS      AGE
example-claim   Bound     transactionaldb-8db1a469-d7f3-11e7-8f86-000c291bed2c   16Gi       RWO            transactionaldb   9m
$ kubectl get pv/transactionaldb-8db1a469-d7f3-11e7-8f86-000c291bed2c 
NAME                                                   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                   STORAGECLASS      REASON    AGE
transactionaldb-8db1a469-d7f3-11e7-8f86-000c291bed2c   16Gi       RWO            Delete           Bound     default/example-claim   transactionaldb             10m
$ docker volume inspect transactionaldb-8db1a469-d7f3-11e7-8f86-000c291bed2c --format &apos;{{.Driver }} {{ .Name }}&apos;
nimble transactionaldb-8db1a469-d7f3-11e7-8f86-000c291bed2c
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That particular PVC could then be referenced from a pod or deployment to have the volume dynamically attached upon request. A complete example that creates a MariaDB deployment with a &lt;code&gt;transactionaldb&lt;/code&gt; SC is available in the example repo. For completeness:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;kubectl create -f https://raw.githubusercontent.com/NimbleStorage/container-examples/master/dory/doryd-intro/mariadb.yml
&lt;/code&gt;&lt;/pre&gt;
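&lt;p&gt;The repo has the full deployment spec; the relevant part is simply a pod template that mounts the claim. A minimal, hypothetical sketch (names and image tag are illustrative, not taken from the repo) could look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-YAML&quot;&gt;---
apiVersion: v1
kind: Pod
metadata:
  name: mariadb-example
spec:
  containers:
    - name: mariadb
      image: mariadb:10.1
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: &quot;example&quot;
      volumeMounts:
        # mount the dynamically provisioned volume at the MariaDB data directory
        - name: data
          mountPath: /var/lib/mysql
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: mariadb-claim
&lt;/code&gt;&lt;/pre&gt;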
&lt;p&gt;The below examples require your array to have certain resources set up and pre-configured, but they are discussed here only in the context of Storage Classes.&lt;/p&gt;
&lt;h2&gt;unstructuredfile&lt;/h2&gt;
&lt;p&gt;Optimizing for a traditional file/web server dishing out fairly static content, with another stock Protection Template on a hybrid flash pool named &lt;code&gt;hybridflash&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-YAML&quot;&gt;---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
 name: unstructuredfile
provisioner: dev.hpe.com/nimble
parameters:
  description: &quot;Volume provisioned by doryd from unstructuredfile StorageClass&quot;
  perfPolicy: &quot;Windows File Server&quot;
  protectionTemplate: &quot;Retain-30Daily&quot;
  pool: hybridflash
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;securearchive&lt;/h2&gt;
&lt;p&gt;For PVs requiring a secure destination for archival data, you could potentially have an array in the group that has more stringent security measures and is set up for long-term storage and protection. The below example would provision PVs in a performance restricted folder (an HPE Nimble Storage concept of collectively restricting bandwidth, IOPS and capacity for a group of volumes). A custom Performance Policy would deny caching of volumes and set a block size of 32KiB. Volumes are also encrypted at rest.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-YAML&quot;&gt;---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
 name: securearchive
provisioner: dev.hpe.com/nimble
parameters:
  description: &quot;Volume provisioned by doryd from securearchive StorageClass&quot;
  perfPolicy: &quot;Archive&quot;
  protectionTemplate: &quot;Retain-90Daily-130Biweekly&quot;
  encryption: &quot;true&quot;
  folder: &quot;ElephantStorage&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;Advanced use cases for HPE Nimble Storage&lt;/h1&gt;
&lt;p&gt;One of the most useful features offered by the HPE Nimble Storage Docker Volume plugin is the ability to create Zero-Copy Clones and Ephemeral Clones from any given snapshot or from the current representation of the data in the volume. While this is something we&apos;re exploring with customers and prospects, I just want to give a short glimpse of how powerful these interfaces are.&lt;/p&gt;
&lt;h2&gt;Zero-Copy Clone from Production&lt;/h2&gt;
&lt;p&gt;It&apos;s not too uncommon for developers to want a sandbox copy of the latest production database to develop against and refresh at will. I refrain from using the term &quot;&lt;a href=&quot;http://searchstorage.techtarget.com/definition/copy-data-management-CDM&quot;&gt;copy data management&lt;/a&gt;&quot; as it means a lot of different things depending on your background, but that&apos;s essentially exactly what this is, if you don&apos;t care about the anonymization (which you can handle yourself, or which some databases have built in with RBAC).&lt;/p&gt;
&lt;p&gt;Assuming there is a production pod spawned that is actively using &lt;code&gt;transactionaldb-26063d7f-d7f2-11e7-8f86-000c291bed2c&lt;/code&gt;. The cluster administrator would then create an SC along these lines:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-YAML&quot;&gt;---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
 name: from-production
provisioner: dev.hpe.com/nimble
parameters:
  description: &quot;Clone from production database.&quot;
  cloneOf: &quot;transactionaldb-26063d7f-d7f2-11e7-8f86-000c291bed2c&quot;
  snapshot: &quot;nightly-locktables&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The developer may then create/delete PVCs against that particular SC to refresh his data from the &lt;code&gt;nightly-locktables&lt;/code&gt; snapshot.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-YAML&quot;&gt;---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clone
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 0Gi
  storageClassName: from-production
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The volume being created could be inspected as such:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ docker volume inspect from-production-dc10ebf3-d7f8-11e7-8f86-000c291bed2c --format &apos;{{ .Status.Parent }} {{ .Status.ParentSnapshot }}&apos;
transactionaldb-26063d7f-d7f2-11e7-8f86-000c291bed2c.kubernetes nightly-locktables
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The keys used in the custom output filter above are HPE Nimble Storage specific.&lt;/p&gt;
&lt;p&gt;Assuming that there is a particular column that needs to be scrambled before a developer gains access to it, how would you work around that? The cluster administrator would create a FlexVolume PV (no SC) based on the parameters above, run a batch pod that scrambles the column, and later create an SC based on that processed clone. It&apos;s really that simple!&lt;/p&gt;
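&lt;p&gt;A rough, hypothetical sketch of what such a one-off FlexVolume PV could look like, reusing the clone parameters from the SC above (the name is made up, and the options are simply handed to the Docker Volume plugin):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-YAML&quot;&gt;---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: scramble-me
spec:
  capacity:
    storage: 16Gi
  accessModes:
    - ReadWriteOnce
  flexVolume:
    driver: dev.hpe.com/nimble
    options:
      # clone the production volume from the named snapshot
      cloneOf: &quot;transactionaldb-26063d7f-d7f2-11e7-8f86-000c291bed2c&quot;
      snapshot: &quot;nightly-locktables&quot;
&lt;/code&gt;&lt;/pre&gt;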
&lt;h2&gt;Ephemeral Clones for ETL and CI/CD pipelines&lt;/h2&gt;
&lt;p&gt;In the case where a temporary view is needed of a particular PV, be it for ETL (Extract, Transform, Load) or CI/CD (Continuous Integration/Continuous Deployment/Delivery) pipelines, it becomes extremely powerful to create ephemeral representations of PVs. Think of a container that has terabytes of data in it, ready for processing, without having to ship the data in a container image.&lt;/p&gt;
&lt;p&gt;This SC will recreate the underlying Docker Volume for each attachment:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-YAML&quot;&gt;---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
 name: ephemeral
provisioner: dev.hpe.com/nimble
parameters:
  description: &quot;Instant Ephemeral Clone from production database.&quot;
  cloneOf: &quot;transactionaldb-26063d7f-d7f2-11e7-8f86-000c291bed2c&quot;
  createSnapshot: &quot;true&quot;
  destroyOnDetach: &quot;true&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: PVs and PVCs will remain intact and do not need to be manually deleted. Just reschedule the pod (or delete/create it) and the data will be refreshed from the source. This differs from the developer use case in the sense that a developer might need to redeploy against the dataset he modified himself and not necessarily re-create the entire stack upon re-deploy. Also, &lt;code&gt;destroyOnDetach&lt;/code&gt; will destroy the dependent snapshot, keeping the production volume clean from residual snapshots that would eventually reference deleted data from the parent volume.&lt;/p&gt;
&lt;p&gt;This is just the tip of the iceberg. If you have a lot of data that need to be represented in different stages for your containers, HPE has the solution!&lt;/p&gt;
&lt;h1&gt;Start today!&lt;/h1&gt;
&lt;p&gt;HPE is working feverishly to release an officially supported product of Dory and Doryd exclusively for HPE Nimble Storage and HPE 3PAR arrays and to go through the hoops of certifying the productized version against K8s and Red Hat OpenShift (a popular Enterprise K8s PaaS). Nothing prevents anyone from using the bits built from the Dory repository with any other vendor&apos;s Docker Volume plugin; that was the original intent: to translate the Docker Volume API into something K8s may consume. Please submit any issues you may find through GitHub and we&apos;ll get them straightened out. Always make sure to check for &lt;a href=&quot;https://github.com/hpe-storage/dory/blob/master/docs/plugins/README.md&quot;&gt;tested Docker Volume plugins&lt;/a&gt; and please submit pull requests against that page to help us keep track of what works!&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.hpe.com&quot;&gt;HPE DEV&lt;/a&gt; is a developer community that is launching to foster HPE related open source software, projects and integrations, with contributions not only from HPE; we invite anyone who wants to collaborate on our projects. It&apos;s still early days and we&apos;ve only just launched the portal, which serves as an umbrella across GitHub repos, Puppet Forge, Ansible Galaxy and, of course, this blog.&lt;/p&gt;
&lt;p&gt;We also have a Slack channel that is open to the public; please register &lt;a href=&quot;https://www.labs.hpe.com/slack&quot;&gt;here&lt;/a&gt; and join the conversation. I&apos;m &lt;code&gt;michaelm&lt;/code&gt; and you&apos;ll find me in #NimbleStorage #Kubernetes and #Docker. KubeCon is just around the corner, so don&apos;t be shy about stopping by the HPE DEV booth to have a chat about &lt;a href=&quot;https://github.com/hpe-storage/dory&quot;&gt;Dory&lt;/a&gt;, &lt;a href=&quot;https://www.hpe.com/us/en/solutions/cloud/hybrid-it-management.html&quot;&gt;HPE OneSphere&lt;/a&gt; or &lt;a href=&quot;https://developer.hpe.com&quot;&gt;HPE DEV&lt;/a&gt; itself (there will be schwag!).&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Join us at KubeCon 2017]]></title><description><![CDATA[HPE is at KubeCon KubeCon 2017 is here! This time Linux Foundation is organizing both Kubecon and CloudNativeCon together in Austin…]]></description><link>https://developer.hpe.com/join-us-at-kubecon-2017/</link><guid isPermaLink="false">https://developer.hpe.com/join-us-at-kubecon-2017/</guid><pubDate>Wed, 29 Nov 2017 19:10:21 GMT</pubDate><content:encoded>&lt;h1&gt;HPE is at KubeCon&lt;/h1&gt;
&lt;p&gt;KubeCon 2017 is here! This time the Linux Foundation is organizing both KubeCon and CloudNativeCon together at the Austin Convention Center in Austin, TX, December 6-8. KubeCon + CloudNativeCon gathers all CNCF (Cloud Native Computing Foundation) projects under one roof. Leading vendors and technologists from open source cloud native communities are attending KubeCon 2017 to further the advancement of cloud native computing.
Hewlett Packard Enterprise (HPE) has realigned its business focus to make hybrid IT simple. It wants to help customers create a hybrid IT model that streamlines day-to-day IT operations and accelerates their digital transformation. HPE&apos;s major initiatives and recent product launches, such as the new HPE Developer Community Portal, HPE OneSphere, HPE Synergy, HPE SimpliVity HyperConverged solutions, HPE InfoSight predictive analytics, and other solutions, reinforce that vision and its support for Cloud Native Computing.&lt;/p&gt;
&lt;p&gt;Given HPE’s commitment to Cloud Native Computing, and specifically the advancement and integration of Kubernetes and other container solutions in our products and services, HPE has joined the CNCF as a silver member and will be participating in KubeCon as a sponsor. At KubeCon 2017, kicking off today, HPE will be highlighting:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The new HPE Developer Community Portal&lt;/li&gt;
&lt;li&gt;HPE OneSphere – a SaaS based Multi-Cloud Management solution&lt;/li&gt;
&lt;li&gt;HPE’s contribution to Kubernetes Project Dory&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The HPE Developer Community Portal is being announced at KubeCon 2017. The new HPE Developer Community Portal is a place where members can interact with HPE’s new open-source community and create innovative solutions to complex problems by collaborating with like-minded coders. Go to &lt;a href=&quot;http://bit.ly/2AWshlw&quot;&gt;http://bit.ly/2AWshlw&lt;/a&gt; and join the conversation.&lt;/p&gt;
&lt;p&gt;HPE OneSphere is an as-a-service multi-cloud management platform that simplifies management of multi-cloud environments and on-premises infrastructure. Through a unified view, IT can compose hybrid clouds capable of supporting both traditional and cloud-native applications, and the platform offers cost analytics across on-premises and off-premises cloud consumption. It provides a simple VM and container vending capability, allowing developers to focus on developing solutions instead of worrying about deploying VMs and containers.&lt;/p&gt;
&lt;p&gt;HPE Project Dory is an open source project led by HPE. Dory is an open source FlexVolume driver that allows the use of any storage that has a Docker Volume driver to persist data for Kubernetes. It manages data with Kubernetes and platforms such as OpenShift and dynamically provisions persistent volumes with Kubernetes storage classes.&lt;/p&gt;
&lt;p&gt;Stop by HPE Booth #S9 at KubeCon 2017 to learn more about these solutions, have some interesting conversations, view demos and win some raffle prizes.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Apps and Infrastructure as Code with Ansible using HPE Cloud Volumes and Amazon AWS]]></title><description><![CDATA[HPE Cloud Volumes (formerly Nimble Cloud Volumes) introduces the exciting capability of consuming HPE Nimble Storage as a service in the…]]></description><link>https://developer.hpe.com/apps-and-infrastructure-as-code-with-ansible-using-hpe-cloud-volumes-and/</link><guid isPermaLink="false">https://developer.hpe.com/apps-and-infrastructure-as-code-with-ansible-using-hpe-cloud-volumes-and/</guid><pubDate>Wed, 29 Nov 2017 09:18:49 GMT</pubDate><content:encoded>&lt;p&gt;&lt;a href=&quot;https://cloudvolumes.hpe.com&quot;&gt;HPE Cloud Volumes&lt;/a&gt; (formerly &lt;a href=&quot;https://www.nimblestorage.com/cloud/&quot;&gt;Nimble Cloud Volumes&lt;/a&gt;) introduces the exciting capability of consuming HPE Nimble Storage as a service in the public cloud. We recently came out of beta and we&apos;re introducing new features constantly. We just published our &lt;a href=&quot;https://docs.cloudvolumes.hpe.com/help/rest/api-reference&quot;&gt;REST API for HPE Cloud Volumes&lt;/a&gt;, which all of a sudden opens up a plethora of new use cases for the service. One of them is the ability to include HPE Cloud Volumes when managing applications and infrastructure as code as most cloud-native shops do today. Also, we added basic HPE Cloud Volumes functionality to the &lt;a href=&quot;https://galaxy.ansible.com/NimbleStorage/Ansinimble/&quot;&gt;HPE Nimble Storage Ansible role&lt;/a&gt;. The following tutorial will walk through deploying &lt;a href=&quot;https://www.docker.com/community-edition&quot;&gt;Docker CE&lt;/a&gt; on HPE Cloud Volumes and deploying a distributed application on top of Docker Swarm using &lt;a href=&quot;https://www.ansible.com&quot;&gt;Ansible&lt;/a&gt;.&lt;/p&gt;
&lt;h1&gt;Introduction&lt;/h1&gt;
&lt;p&gt;As my day job is primarily in the container space, I couldn&apos;t be more excited about being able to consume HPE Nimble Storage as a service in the public cloud. While we don&apos;t have any container specific integration at this point (please see Future below), such as a Docker Volume plugin, there are still some unique benefits we bring to the container use case over traditional EBS (&lt;a href=&quot;https://aws.amazon.com/ebs/&quot;&gt;Elastic Block Storage&lt;/a&gt;) from AWS.&lt;/p&gt;
&lt;p&gt;In essence, I&apos;m going to relocate the Docker host &lt;code&gt;/var/lib/docker&lt;/code&gt; filesystem to a volume hosted by HPE Cloud Volumes. At a glance, these are the immediate benefits:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Millions of times more reliable block storage over EBS&lt;/li&gt;
&lt;li&gt;Instant snapshots, clones and restore without having to copy objects to S3&lt;/li&gt;
&lt;li&gt;Clone volumes to any cloud, not just AWS&lt;/li&gt;
&lt;li&gt;Instant resize of block devices and immediately being able to expand the filesystem&lt;/li&gt;
&lt;li&gt;Run multicloud Docker Swarm clusters with a single pane of management for storage consumption&lt;/li&gt;
&lt;li&gt;More than twice the random performance and granularly being able to dynamically set IOPS limits&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I think these are very compelling reasons why you want to host your applications on HPE Cloud Volumes for better reliability, performance and data services.&lt;/p&gt;
&lt;p&gt;We also support replication of on-premises HPE Nimble Storage arrays to HPE Cloud Volumes for data migration and Hybrid IT use cases. This functionality will be covered and included in our Ansible role at a later date. Specifically, how to replicate any on-premises volume and clone it into a container to run workloads.&lt;/p&gt;
&lt;h1&gt;Assumptions&lt;/h1&gt;
&lt;p&gt;Certain assumptions are being made that are not covered in depth here, as they&apos;re a bit out of scope:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://www.ansible.com&quot;&gt;Ansible&lt;/a&gt; is an IT automation platform that lets users define and manipulate the state of apps and infrastructure in simple to read &lt;a href=&quot;https://en.wikipedia.org/wiki/YAML&quot;&gt;YAML&lt;/a&gt; files. This blog post is not the right forum to get started with Ansible but it&apos;s used here merely as an example.&lt;/li&gt;
&lt;li&gt;Following along these examples requires a HPE Cloud Volumes account with automation setup for Amazon AWS as explained in &lt;a href=&quot;https://cloudvolumes.hpe.com/ncv-help/automating/&quot;&gt;the HPE Cloud Volumes documentation&lt;/a&gt;. This includes setting up an IAM role for HPE Cloud Volumes.&lt;/li&gt;
&lt;li&gt;The EC2 security group attached to the VPC requires SSH access from the Ansible host network and access to port 9000.&lt;/li&gt;
&lt;li&gt;We&apos;re deploying &lt;a href=&quot;https://minio.io&quot;&gt;Minio&lt;/a&gt; in distributed mode. Minio is a standalone AWS S3 compatible storage server. Some knowledge of AWS S3 is helpful but not required.&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;Overview&lt;/h1&gt;
&lt;p&gt;This stack diagram illustrates our end-state to help you better understand what we&apos;re deploying.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/11/blog-diagram-hpecv-1511946990044.png&quot; alt=&quot;Architecture Overview&quot;&gt;&lt;/p&gt;
&lt;h1&gt;Setup&lt;/h1&gt;
&lt;p&gt;The example goes from spinning up eight EC2 instances that consume HPE Cloud Volumes, installing Docker, setting up a Docker Swarm, deploying &lt;a href=&quot;https://www.minio.io&quot;&gt;Minio&lt;/a&gt;/&lt;a href=&quot;https://www.nginx.com&quot;&gt;NGINX&lt;/a&gt; and setting up an ELB (&lt;a href=&quot;https://aws.amazon.com/elasticloadbalancing/&quot;&gt;Elastic Load Balancer&lt;/a&gt;) for external facing traffic. Some utility playbooks show off some of the Ansible role features and capabilities, such as expanding storage and setting new IOPS limits. There&apos;s also a playbook provided to destroy all the resources created during the deployment phase.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;DISCLAIMER&lt;/strong&gt;: While I&apos;ve gone to great lengths to ensure resources are tagged appropriately, I do not check for conflicting tags that may have unforeseen side effects when connecting and deleting resources.&lt;/p&gt;
&lt;h2&gt;git clone and setup basic parameters&lt;/h2&gt;
&lt;p&gt;All playbooks and roles are hosted on &lt;a href=&quot;https://github.com/NimbleStorage/automation-examples&quot;&gt;GitHub&lt;/a&gt;. The Ansible host uses Ubuntu 16.04.3 in these examples but any Linux distro will do, please adjust the install procedure accordingly.&lt;/p&gt;
&lt;p&gt;Prepare Ansible host (Ubuntu 16.04.3):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;sudo apt-get update 
sudo apt-get install -y git python python-boto python-boto3 python-jmespath python-pip
sudo pip install --upgrade pyopenssl ansible
git clone https://github.com/NimbleStorage/automation-examples
cd automation-examples/cloud/varlibdocker
ansible-galaxy install -r galaxy.txt 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; The rest of this guide assumes current working directory &lt;code&gt;automation-examples/cloud/varlibdocker&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Copy &lt;code&gt;group_vars/all/main.yml-dist&lt;/code&gt; and &lt;code&gt;group_vars/all/secrets.yml-dist&lt;/code&gt; to their respective basenames:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;for i in group_vars/all/*-dist; do mv ${i} $(sed -e &apos;s/-dist$//&apos; &amp;#x3C;&amp;#x3C;&amp;#x3C; ${i});done
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Edit &lt;code&gt;group_vars/all/main.yml&lt;/code&gt; to fit your AWS environment:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-YAML&quot;&gt;---
# Configure these
swarm_region: us-west-1        # This is the region you&apos;re provisioning in
swarm_key: myawskey            # This is the named IAM key to use for your EC2 instances
swarm_subnet: subnet-00000000  # The subnet ID your HPE Cloud Volumes will be provisioned to
swarm_cidr: &quot;0.0.0.0/0&quot;        # The CIDR of the above subnet
swarm_cloud: vpc-00000000      # The VPC the HPE Cloud Volumes will be provisioned to
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;There are a few other preference parameters to look at, like instance and cluster sizes. These are not necessary to tune.&lt;/p&gt;
&lt;p&gt;Edit &lt;code&gt;group_vars/all/secrets.yml&lt;/code&gt; and store your HPE Cloud Volumes credentials:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-YAML&quot;&gt;---
cloud_portal_access_key: nimble            # Your HPE Cloud Volumes key
cloud_portal_access_secret: nimblestorage  # Your HPE Cloud Volumes secret
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; While not required, it&apos;s highly recommended to protect credentials with Ansible Vault.&lt;/p&gt;
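&lt;p&gt;As one possible approach, you could encrypt the secrets file in place with Ansible Vault and then supply the vault password when running plays:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# encrypt the secrets file (you will be prompted for a vault password)
ansible-vault encrypt group_vars/all/secrets.yml

# later, run plays with the vault password prompt
ansible-playbook site.yml --ask-vault-pass
&lt;/code&gt;&lt;/pre&gt;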
&lt;p&gt;The next few steps require you to download and install the named AWS key that is referenced in the &lt;code&gt;swarm_key&lt;/code&gt; variable and name it &lt;code&gt;ec2.pem&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;scp user@host:mykey.pem ec2.pem
chmod 0400 ec2.pem
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, we rely on the latest &lt;code&gt;ec2.py&lt;/code&gt; dynamic inventory script. Download accordingly:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;wget -O contrib/inventory/ec2.py https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.py
chmod 755 contrib/inventory/ec2.py
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Both the &lt;code&gt;ec2.py&lt;/code&gt; inventory script and the Ansible modules rely on environment variables with your access key, secret key and the AWS region you&apos;re working with:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;set +o history
export AWS_ACCESS_KEY_ID=nimble
export AWS_SECRET_ACCESS_KEY=nimblestorage
export AWS_REGION=us-west-1
set -o history
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you have instances running in EC2 under your account, now is a good time to see if your credentials are good:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;./contrib/inventory/ec2.py --list
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This should eventually list your EC2 instances&apos; external IP addresses in various groupings based on tags, instance names, internal names and a very handy list of variables made available to the instance with an &lt;code&gt;ec2_&lt;/code&gt; prefix.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Building the EC2 dynamic inventory seems to take an excessive amount of time on occasion. Please be patient.&lt;/p&gt;
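&lt;p&gt;As a quick sanity check of the per-instance variables, dynamic inventory scripts also accept a single-host query; for example (substitute one of your own instances&apos; public IP addresses):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;./contrib/inventory/ec2.py --host 54.241.130.88
&lt;/code&gt;&lt;/pre&gt;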
&lt;h1&gt;Deploy!&lt;/h1&gt;
&lt;p&gt;The next few steps discuss the operation of the playbooks provided. Please ensure that the environment variables are properly set as described above and that the current working directory is &lt;code&gt;automation-examples/cloud/varlibdocker&lt;/code&gt;.&lt;/p&gt;
&lt;h2&gt;site.yml&lt;/h2&gt;
&lt;p&gt;As per convention, &lt;code&gt;site.yml&lt;/code&gt; sets up the environment end-to-end and after execution you should have a Minio environment set up at the ELB URL presented at the end of the play.&lt;/p&gt;
&lt;p&gt;Execute as follows:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ansible-playbook site.yml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Four main plays are executed and divided as follows:&lt;/p&gt;
&lt;h3&gt;deploy.yml&lt;/h3&gt;
&lt;p&gt;This playbook deploys the instances as specified in &lt;code&gt;group_vars/all/main.yml&lt;/code&gt;. Since the base AMI is the official vanilla Ubuntu from Canonical a few extra bootstrap steps are required to make the instances eligible for Ansible automation.&lt;/p&gt;
&lt;p&gt;The play also creates a Cloud Volume for each instance and attaches it on &lt;code&gt;/var/lib/docker&lt;/code&gt;. It uses most defaults from the &lt;code&gt;NimbleStorage.Ansinimble&lt;/code&gt; role which is a 10GiB volume capped at 1000 IOPS and presented on the networks specified in &lt;code&gt;group_vars/all/main.yml&lt;/code&gt;. You should see eight volumes appear on the HPE Cloud Volumes portal with the naming convention &lt;code&gt;varlibdocker-&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;The Cloud Volume is then mapped and connected to the instances with the &lt;code&gt;ncvol&lt;/code&gt; utility which is downloaded and installed from the HPE Cloud Volumes portal as part of the play itself.&lt;/p&gt;
&lt;h3&gt;prepare.yml&lt;/h3&gt;
&lt;p&gt;This play installs Docker from the official repos supplied by Docker, Inc. Plain and simple.&lt;/p&gt;
&lt;h3&gt;formation.yml&lt;/h3&gt;
&lt;p&gt;One of the master nodes is tagged as the pet (or master manager, depending on how you see it) and the pet node will initialize the Docker Swarm and join the remaining managers and workers to the cluster.&lt;/p&gt;
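&lt;p&gt;Under the hood, swarm formation boils down to roughly the following Docker commands; this is just an illustrative sketch of the flow, as the playbook handles token distribution for you:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# on the pet node
docker swarm init
docker swarm join-token manager
docker swarm join-token worker

# on each remaining node, using the token printed above
docker swarm join --token &amp;#x3C;token&gt; &amp;#x3C;pet-node-ip&gt;:2377
&lt;/code&gt;&lt;/pre&gt;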
&lt;h3&gt;app_deploy.yml&lt;/h3&gt;
&lt;p&gt;Since the Docker integration with Ansible is a bit behind, bare &lt;code&gt;docker&lt;/code&gt; commands are run on the pet node to deploy the latest official Docker Swarm Minio example &lt;a href=&quot;https://raw.githubusercontent.com/minio/minio/master/docs/orchestration/docker-swarm/docker-compose-secrets.yaml&quot;&gt;provided by Minio&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;What is important to understand here is that Minio uses locally named volumes in this example and data will be locked to wherever the distributed Minio cluster is instantiated. Therefore, intentionally, I put a constraint on the service post-deploy to lock the services to whichever node they were first started on. Minio will survive loss of half the nodes in distributed mode due to its erasure coding but Docker Swarm has no means to guarantee any two services will be placed correctly and you&apos;ll in most cases end up with a broken Minio at the next service update or restart. Highly undesirable. To allow services to roam freely in the cluster a shared filesystem or an external Docker Volume plugin is required. We discuss the latter in the Future section below.&lt;/p&gt;
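&lt;p&gt;The pinning itself uses standard Docker service constraints. As a hedged illustration (the service and node names here are hypothetical, not taken from the playbook), adding such a constraint after deployment could look like:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# pin a Minio replica to the node it was first scheduled on
docker service update --constraint-add &apos;node.hostname == worker-1&apos; minio_minio1
&lt;/code&gt;&lt;/pre&gt;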
&lt;p&gt;A trained eye will also see that there&apos;s an NGINX global service being deployed. This is because of a long-standing &lt;a href=&quot;https://forums.aws.amazon.com/thread.jspa?threadID=33085&quot;&gt;issue with ELB&lt;/a&gt; and the NGINX trick is simply used to workaround that particular issue (the executive version: you can&apos;t have multiple backend ports). The &lt;code&gt;Dockerfile&lt;/code&gt; and &lt;code&gt;nginx.conf&lt;/code&gt; file is included in &lt;code&gt;files/nginx_lb&lt;/code&gt;. The image being pulled is from my personal Docker Hub account, please enjoy!&lt;/p&gt;
&lt;p&gt;The last step deploys an ELB that fronts the NGINX server. The ELB is an unencrypted TCP load-balancer.&lt;/p&gt;
&lt;h2&gt;Hello World!&lt;/h2&gt;
&lt;p&gt;If the &lt;code&gt;site.yml&lt;/code&gt; play completes successfully, you should see a URL at the end of the play. This is where your Minio instance is running. Please proceed to login with username &lt;code&gt;nimble&lt;/code&gt; and password &lt;code&gt;nimblestorage&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Just before the profile summary and play recap:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;
TASK [deploy_elb : debug] ***************************************************************
Wednesday 22 November 2017  08:10:24 +0000 (0:00:01.599)       0:12:08.675 ****
ok: [localhost] =&gt; {
    &quot;msg&quot;: &quot;http://minio-271113730.us-west-1.elb.amazonaws.com:9000&quot;
}

PLAY RECAP ******************************************************************************
13.56.224.202              : ok=73   changed=18   unreachable=0    failed=0   
13.56.248.32               : ok=73   changed=18   unreachable=0    failed=0   
13.57.10.167               : ok=73   changed=18   unreachable=0    failed=0   
52.53.127.122              : ok=73   changed=18   unreachable=0    failed=0   
52.53.220.8                : ok=73   changed=18   unreachable=0    failed=0   
54.193.104.48              : ok=73   changed=18   unreachable=0    failed=0   
54.215.225.69              : ok=73   changed=18   unreachable=0    failed=0   
54.241.130.88              : ok=88   changed=31   unreachable=0    failed=0   
localhost                  : ok=23   changed=11   unreachable=0    failed=0   

Wednesday 22 November 2017  08:10:24 +0000 (0:00:00.031)       0:12:08.706 *************
========================================================================================
deploy_minio : ...and add constraints ------------------------------------------ 180.81s
Get Ansible working ------------------------------------------------------------- 28.61s
deploy_ec2 : Launch Workers ----------------------------------------------------- 28.20s
deploy_ec2 : Launch Managers ---------------------------------------------------- 22.56s
install_docker : Install Docker ------------------------------------------------- 18.54s
Refresh inventory --------------------------------------------------------------- 15.24s
NimbleStorage.Ansinimble : Install prerequisite packages ------------------------ 12.69s
NimbleStorage.Ansinimble : Install prerequisite packages ------------------------ 12.62s
NimbleStorage.Ansinimble : Install prerequisite packages ------------------------ 12.62s
NimbleStorage.Ansinimble : Install prerequisite packages ------------------------ 12.54s
NimbleStorage.Ansinimble : Install prerequisite packages ------------------------ 12.51s
NimbleStorage.Ansinimble : Install prerequisite packages ------------------------ 12.51s
NimbleStorage.Ansinimble : Install prerequisite packages ------------------------ 12.45s
NimbleStorage.Ansinimble : Install prerequisite packages ------------------------ 12.41s
install_docker : Add Docker repo ------------------------------------------------- 9.44s
Validate Docker ------------------------------------------------------------------ 7.51s
deploy_minio : command ----------------------------------------------------------- 6.59s
NimbleStorage.Ansinimble : Connect Cloud Volume ---------------------------------- 5.03s
NimbleStorage.Ansinimble : Connect Cloud Volume ---------------------------------- 4.86s
NimbleStorage.Ansinimble : Connect Cloud Volume ---------------------------------- 4.86s
Playbook run took 0 days, 0 hours, 12 minutes, 8 seconds
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Use a web browser to visit that URL or skip directly to the &lt;code&gt;mc&lt;/code&gt; part below the screenshots:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/11/minio-login-1511947049263.png&quot; alt=&quot;Minio Login&quot;&gt;&lt;/p&gt;
&lt;p&gt;Once logged in, you&apos;ll see something that resembles this:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/11/minio-empty-1511947029503.png&quot; alt=&quot;Minio Empty&quot;&gt;&lt;/p&gt;
&lt;p&gt;Here you can either create buckets and upload files or you can simply use the &lt;code&gt;mc&lt;/code&gt; utility to access Minio:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;sudo wget -O /usr/local/bin/mc https://dl.minio.io/client/mc/release/linux-amd64/mc
sudo chmod 755 /usr/local/bin/mc
mc config host add mys3 http://minio-271113730.us-west-1.elb.amazonaws.com:9000 nimble nimblestorage
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With &lt;code&gt;mc&lt;/code&gt; you can perform normal unix-like commands. Let&apos;s create a bucket and upload this repo&apos;s roles directory:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;mc mb mys3/mybucket
mc cp -r roles mys3/mybucket
mc ls mys3/mybucket
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote&gt;
&lt;p&gt;From an operational perspective, what we&apos;ve done here is quite powerful. Everything is provisioned from human-readable YAML files that can be revision controlled and peer reviewed. Nothing is manual. Installing all the applications and infrastructure manually is just tedious and error-prone and not repeatable. This is Applications and Infrastructure as Code.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Expand and boost performance&lt;/h2&gt;
&lt;p&gt;Included in the repository are a couple of utility playbooks. One expands the capacity and the other sets a new IOPS limit. Both have a fairly conservative default, and passing extra variables to the play is recommended.&lt;/p&gt;
&lt;p&gt;Before expanding the volumes, load up the Minio landing page that indicates how much space there is. After the play has finished, reload the page. You should see the extra space accordingly.&lt;/p&gt;
&lt;p&gt;Expand Cloud Volume:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ansible-playbook util_expand_cloud_volume.yml -e &apos;{&quot;cloud_volume_size&quot;: 1000000}&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Extra variables such as &lt;code&gt;-e&lt;/code&gt; can normally be passed as key/value pairs without JSON syntax. However, the REST APIs have strict data types and Jinja2 (the template engine Ansible relies on) converts to plain strings when not passing parameters in JSON.&lt;/p&gt;
&lt;p&gt;If you didn&apos;t tinker with the default volume and cluster sizes, you should have had similar space as observed in the screenshots above. The new capacity is immediately available to Minio and should look something like this:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/11/minio-resized-1511947069528.png&quot; alt=&quot;Minio Resized&quot;&gt;&lt;/p&gt;
&lt;p&gt;While it&apos;s entirely possible to write a playbook that takes both capacity and IOPS in one go, it&apos;s more intuitive to perform controlled updates in isolated increments.&lt;/p&gt;
&lt;p&gt;Set a new IOPS limit:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ansible-playbook util_iops_cloud_volume.yml -e &apos;{&quot;cloud_volume_iops&quot;: 10000 }&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The new limit and capacity should be reflected in the HPE Cloud Volumes portal:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/11/portal-1511947086096.png&quot; alt=&quot;Portal&quot;&gt;&lt;/p&gt;
&lt;h2&gt;Destroy&lt;/h2&gt;
&lt;p&gt;Once done playing in the sandbox, the following playbook destroys &lt;strong&gt;everything&lt;/strong&gt; provisioned, including your HPE Cloud Volumes, ELB and EC2 instances.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ansible-playbook util_stack_destroy.yml
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;Future&lt;/h1&gt;
&lt;p&gt;The future looks brighter than ever when it comes to HPE Cloud Volumes and specialized host application integrations such as containers. HPE Nimble Storage has a strong vision that we&apos;re currently executing against that will gradually enable more use cases, especially for DevOps. We recently introduced replication to HPE Cloud Volumes, which, in conjunction with our on-premises HPE Nimble Storage Docker Volume plugin and the upcoming HPE Nimble Storage Docker Volume plugin for HPE Cloud Volumes, will allow for some advanced use cases for multicloud persistent storage. Think running CI/CD pipelines on cloned datasets, presenting data to ETL workflows, and Hybrid IT deployments with multicloud strategies that are not even remotely possible with the constraints of traditional environments.&lt;/p&gt;
&lt;p&gt;Running persistent storage in the public cloud for containers does not come without challenges. Docker Volumes for Docker, Persistent Volumes for Kubernetes, or using DVDI to patch in a volume for Marathon with Mesos: they&apos;re all subject to the same &lt;a href=&quot;http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/volume_limits.html&quot;&gt;inherent limitations&lt;/a&gt; of EBS (or &lt;a href=&quot;https://docs.microsoft.com/en-us/azure/azure-subscription-service-limits#virtual-machine-disk-limits&quot;&gt;Azure Disk Storage&lt;/a&gt;). Assessing the storage market, there are a number of vendors out there trying to combat the persistent-storage-on-EBS paradigm by abstracting the block devices (with or without filesystems) into container native storage elements and creating resiliency in software through replication or mirroring schemes. Storage management becomes part of the application domain, and compute cycles and N-way replication schemes become part of the end user&apos;s accounting, along with any software licensing schemes imposed by the vendor. There are of course very viable open source options that are free to use but tie more complexity to the back of the application owner, who at the end of the day just wanted some persistent storage for his application.&lt;/p&gt;
&lt;p&gt;We hope to address a number of shortcomings with our persistent storage drivers and plugins for the top three container orchestrators and leverage the native storage frameworks provided without any side-effects.&lt;/p&gt;
&lt;h1&gt;Conclusions&lt;/h1&gt;
&lt;p&gt;Thank you for following this tutorial and hopefully it has been useful and sparked interest and ideas about Containers, Hybrid IT and multicloud. Wherever you are on your journey, we&apos;re very eager to hear what challenges exist out there today, particularly when it comes to presenting data across public cloud boundaries and data intense use cases. I genuinely believe HPE can help out in a multitude of ways.&lt;/p&gt;
&lt;p&gt;Please join our Slack community, it&apos;s just starting up, &lt;a href=&quot;https://www.labs.hpe.com/slack&quot;&gt;register here and say hello&lt;/a&gt;, I&apos;m user &lt;code&gt;michaelm&lt;/code&gt;. Follow &lt;a href=&quot;https://twitter.com/drajen&quot;&gt;me&lt;/a&gt; and &lt;a href=&quot;https://twitter.com/HPE_Developer&quot;&gt;HPE DEV&lt;/a&gt; on Twitter!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[CPUs for GPU-enabled deep learning]]></title><description><![CDATA[A role of CPUs in Deep Learning pipelines and how many CPU cores is enough for training on a GPU-enabled system How CPUs are typically used…]]></description><link>https://developer.hpe.com/cpus-for-gpu-enabled-deep-learning/</link><guid isPermaLink="false">https://developer.hpe.com/cpus-for-gpu-enabled-deep-learning/</guid><pubDate>Sun, 26 Nov 2017 09:48:58 GMT</pubDate><content:encoded>&lt;h1&gt;A role of CPUs in Deep Learning pipelines and how many CPU cores is enough for training on a GPU-enabled system&lt;/h1&gt;
&lt;h2&gt;How CPUs are typically used in deep learning pipelines&lt;/h2&gt;
&lt;p&gt;Although GPUs are the main engine today used to train deep neural networks, training is not possible without CPUs. And not only because a CPU is required to manage GPU kernels. It has other tasks to perform.&lt;/p&gt;
&lt;h3&gt;Preprocessing&lt;/h3&gt;
&lt;p&gt;A common use for CPUs in Deep Learning pipelines is to perform data preprocessing. There are several reasons why preprocessing should be done on CPUs.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;CPUs provide the ability to overlap data preprocessing with training. In this case two operations happen simultaneously: 1) preparing data for the next batch (on CPUs), and 2) optimizing a neural network on the current batch (on GPUs). The main goal is to constantly keep GPUs busy crunching numbers. We want GPUs to be focused on training and not waiting for the next batch of samples to be ready.&lt;/li&gt;
&lt;li&gt;An opportunity to stage the data feeding pipeline with CPUs, if data is stored in a high latency storage. With CPUs the pipelined process looks as follows (see reference below for TensorFlow high performance models that introduced this schema):
&lt;ul&gt;
&lt;li&gt;Stage 1: Copy data from database or remote file system into a host memory;&lt;/li&gt;
&lt;li&gt;Stage 2: Preprocess data on CPUs and store result in host memory;&lt;/li&gt;
&lt;li&gt;Stage 3: Copy preprocessed data from host memory to GPU memory.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;If raw training data is large and needs to be transformed during the preprocessing (for image classification, this may mean resizing original high resolution images to a standard &lt;em&gt;256x256&lt;/em&gt; resolution), doing it on GPUs has the following implications:
&lt;ul&gt;
&lt;li&gt;Copying large volumes of data from host memory to GPU memory via PCIe lanes;&lt;/li&gt;
&lt;li&gt;Allocating large amount of expensive GPU memory for data preprocessing, stealing it from training operators.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;For preprocessing on GPUs there should exist efficient GPU implementation of preprocessing operations&lt;/li&gt;
&lt;li&gt;Some frameworks like TensorFlow, if doing preprocessing on GPUs:
&lt;ul&gt;
&lt;li&gt;Will first copy data from host memory to GPU memory;&lt;/li&gt;
&lt;li&gt;Will do preprocessing on GPU;&lt;/li&gt;
&lt;li&gt;Will copy preprocessed data back to host memory;&lt;/li&gt;
&lt;li&gt;Will copy again preprocessed data to GPU memory.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;CPUs are a better fit for certain data transformation tasks than GPUs.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;How many cores/CPUs do we need?&lt;/h2&gt;
&lt;p&gt;How do we decide how many CPUs (and how many cores per CPU) we need to run a training job on a GPU-enabled system?&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;One factor to consider is the memory bandwidth. The optimal number of CPU cores is the number of cores that saturate memory bandwidth. If we have more cores, they start to fight for memory access slowing down the entire preprocessing pipeline.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The topology of a neural network and its computational profile also influences the choice of CPU cores. Deeper models with higher computational complexity take a longer time for a single forward/backward pass to complete, leaving enough time for a CPU to prepare the next batch of training samples.  Neural network models with lower computational complexity may benefit from more CPU cores, as it will speed up the preprocessing and prevent the GPUs from being stalled while waiting for the next batch of data.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If data preprocessing is overlapped with forward/backward passes, we need at least two CPU cores for a one-GPU system. One core is responsible for launching CUDA kernels for each layer/operator in forward/backward passes. The second core is responsible for prefetching data for the next batch.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Let&apos;s now discuss some of the options in greater details.&lt;/p&gt;
&lt;h3&gt;Option 1: No CPU/GPU overlapping.&lt;/h3&gt;
&lt;p&gt;The simplest approach is to not overlap preprocessing with training. We first fetch data from data store, preprocess it, copy it to GPU, compute gradients and update weights. We then continue to fetch data for a next batch. Can this be efficient? It depends on where data is stored and on the computational complexity of the model. Several observations to take into account:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;If a model is computationally expensive, data feeding may take a relatively small amount of time compared to forward/backward passes on a GPU. In this case the difference in time between overlapped and sequential processing may be small and may not be worth the trouble of an overlapped option.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If data is stored on a low latency storage with a high bandwidth interconnect, data fetching may not be an issue either.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If data is already preprocessed, and data fetching is fast enough, again, one might skip overlapping.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Option 2: Overlapping CPU/GPU with non-pipelined preprocessing&lt;/h3&gt;
&lt;p&gt;In the general case, we need one CPU core per GPU to run CUDA kernels and &lt;em&gt;M&lt;/em&gt; cores to do prefetching/preprocessing. &lt;em&gt;M&lt;/em&gt; will depend on the complexity of preprocessing and the complexity of the model. If we want to copy data asynchronously, we need an additional CPU core per GPU for that.&lt;/p&gt;
&lt;h3&gt;Option 3: Overlapping CPU/GPU with pipelined preprocessing&lt;/h3&gt;
&lt;p&gt;Google in a blog post on high performance TensorFlow models suggests that it makes sense to use pipelined (staged) preprocessing routines. In particular, they suggest having three running stages:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Fetching data from remote storage into host local memory&lt;/li&gt;
&lt;li&gt;Preprocessing data in host local memory&lt;/li&gt;
&lt;li&gt;Copying data from host memory to GPU memory&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The three stages communicate to each other via queues. In this case, we need at least three CPU cores for preprocessing.&lt;/p&gt;
&lt;h2&gt;Data preprocessing with popular frameworks&lt;/h2&gt;
&lt;h3&gt;Caffe&lt;/h3&gt;
&lt;p&gt;Caffe&apos;s default data layers (written in C++) can have one background thread to prefetch data. They can prefetch up to &lt;i&gt;prefetch_count&lt;/i&gt; batches. That prefetching thread is responsible for fetching data and preprocessing it. It then copies data to the GPU in asynchronous mode. Having multiple prefetched batches will result in larger memory consumption. This basically means that we need just two CPU cores per GPU. But you can write your own, non-standard data layer that takes advantage of multiple prefetch threads.&lt;/p&gt;
&lt;h3&gt;TensorFlow&lt;/h3&gt;
&lt;p&gt;A great post on efficient data feeding is the &lt;a href=&quot;https://www.tensorflow.org/performance/performance_models&quot;&gt;TensorFlow high performance models&lt;/a&gt; guide. In theory, we can utilize as many cores as we have.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Scaling deep learning workloads]]></title><description><![CDATA[Data parallelism, weak and strong scaling, and what you need to know to scale a single training job to multiple CPUs or GPUs Training of…]]></description><link>https://developer.hpe.com/scaling-deep-learning-workloads/</link><guid isPermaLink="false">https://developer.hpe.com/scaling-deep-learning-workloads/</guid><pubDate>Sun, 26 Nov 2017 09:38:36 GMT</pubDate><content:encoded>&lt;h1&gt;Data parallelism, weak and strong scaling, and what you need to know to scale a single training job to multiple CPUs or GPUs&lt;/h1&gt;
&lt;p&gt;Training of many state-of-the-art deep neural networks is a very compute-intensive task. It can take hours, days or even weeks to train a model with a single computational device, such as a CPU or GPU. To speed up the training, you have to scale out and distribute the computations to multiple devices. The most commonly used approach to distribute the training is &lt;code&gt;data parallelism&lt;/code&gt;, where every computational device possesses its own replica of the model and computes a model update based on its own shard of data. Two options are possible for data parallelism: &lt;code&gt;strong&lt;/code&gt; and &lt;code&gt;weak&lt;/code&gt; scaling. HPE&apos;s &lt;a href=&quot;https://hewlettpackard.github.io/dlcookbook-dlbs&quot;&gt;Deep Learning Benchmarking Suite&lt;/a&gt; supports both.&lt;/p&gt;
&lt;h4&gt;&lt;strong&gt;Strong scaling&lt;/strong&gt;&lt;/h4&gt;
&lt;p&gt;Strong scaling assumes that the problem size remains the same and we vary the number of computational devices. From a deep learning point of view it means that we fix a batch size and vary a number of CPUs/GPUs to train a neural network. For instance, given a batch size of 1024 training samples, an eight-GPU system will give us the following benchmarking configurations:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;#GPUs&lt;/th&gt;
&lt;th&gt;GPUs IDs&lt;/th&gt;
&lt;th&gt;per-GPU batch size&lt;/th&gt;
&lt;th&gt;effective batch size&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;1024&lt;/td&gt;
&lt;td&gt;1024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;0,1&lt;/td&gt;
&lt;td&gt;512&lt;/td&gt;
&lt;td&gt;1024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;0,1,2,3&lt;/td&gt;
&lt;td&gt;256&lt;/td&gt;
&lt;td&gt;1024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;0,1,2,3,4,5,6,7&lt;/td&gt;
&lt;td&gt;128&lt;/td&gt;
&lt;td&gt;1024&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;The efficiency of strong scaling is calculated in the following manner:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;efficiency = t1 / (N * tN) * 100% 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Where &lt;em&gt;t1&lt;/em&gt; is the time to solve a problem with one compute device, &lt;em&gt;N&lt;/em&gt; is the number of compute devices, and &lt;em&gt;tN&lt;/em&gt; is the time to solve the same problem with these N devices. In the ideal case, &lt;em&gt;tN&lt;/em&gt; is &lt;em&gt;N&lt;/em&gt; times smaller than &lt;em&gt;t1&lt;/em&gt; and we get an efficiency of 100%.&lt;/p&gt;
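&lt;p&gt;As a purely illustrative example: if a fixed-size problem takes &lt;em&gt;t1&lt;/em&gt; = 100 seconds on one GPU and &lt;em&gt;tN&lt;/em&gt; = 16 seconds on eight GPUs, the strong-scaling efficiency is 100 / (8 * 16) * 100% ≈ 78%.&lt;/p&gt;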
&lt;p&gt;With strong scaling, an &lt;code&gt;effective batch size&lt;/code&gt; is constant, and per-device batch size is varying. In case of synchronous training, strong scaling with multiple devices is equivalent to training with a single device. The effective batch will be the same, and model updates will happen based on gradients computed for the same number of training samples.&lt;/p&gt;
&lt;h4&gt;&lt;strong&gt;Weak scaling&lt;/strong&gt;&lt;/h4&gt;
&lt;p&gt;Weak scaling assumes we keep the amount of work per compute device per iteration fixed. With &lt;em&gt;N&lt;/em&gt; compute devices we end up solving an &lt;em&gt;N&lt;/em&gt; times larger problem than with one. In the deep learning world, this means we keep the per-device batch size fixed. With the same hardware setup as described above, we get the following benchmarking configurations, assuming a per-GPU batch size of 128:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;#GPUs&lt;/th&gt;
&lt;th&gt;GPUs IDs&lt;/th&gt;
&lt;th&gt;per-GPU batch size&lt;/th&gt;
&lt;th&gt;effective batch size&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;128&lt;/td&gt;
&lt;td&gt;128&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;0,1&lt;/td&gt;
&lt;td&gt;128&lt;/td&gt;
&lt;td&gt;256&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;0,1,2,3&lt;/td&gt;
&lt;td&gt;128&lt;/td&gt;
&lt;td&gt;512&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;0,1,2,3,4,5,6,7&lt;/td&gt;
&lt;td&gt;128&lt;/td&gt;
&lt;td&gt;1024&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;The efficiency of weak scaling is calculated in the following way:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;efficiency = t1 / tN * 100% 
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Where &lt;em&gt;t1&lt;/em&gt; is the time of solving a problem with one compute device and &lt;em&gt;tN&lt;/em&gt; is the time to solve the &lt;em&gt;N&lt;/em&gt; times larger problem with &lt;em&gt;N&lt;/em&gt; compute devices. Ideally the &lt;em&gt;tN&lt;/em&gt; time is exactly the same as &lt;em&gt;t1&lt;/em&gt; and we get ideal efficiency of 100%.&lt;/p&gt;
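&lt;p&gt;As a purely illustrative example: if one GPU processes its batch of 128 in &lt;em&gt;t1&lt;/em&gt; = 10 seconds and eight GPUs process the eight times larger effective batch in &lt;em&gt;tN&lt;/em&gt; = 12.5 seconds, the weak-scaling efficiency is 10 / 12.5 * 100% = 80%.&lt;/p&gt;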
&lt;p&gt;With weak scaling, a per-device batch size is constant and the &lt;code&gt;effective batch size&lt;/code&gt; increases with the assignment of more compute devices to the training job.&lt;/p&gt;
&lt;p&gt;It is much easier to utilize a larger number of compute devices efficiently with weak scaling, as the amount of work per unit doesn&apos;t decrease when more units are added. At the same time, weak scaling may lead to very large effective batches, where convergence suffers and training ends up not being faster after all.&lt;/p&gt;
&lt;h2&gt;Multi-GPU benchmarking with DLBS&lt;/h2&gt;
&lt;p&gt;All supported frameworks (BVLC/NVIDIA Caffe, Caffe2, MXNet and TensorFlow) can be benchmarked in multi-GPU mode, with the exception of Intel&apos;s fork of Caffe and NVIDIA&apos;s inference engine TensorRT.&lt;/p&gt;
&lt;h3&gt;Weak scaling&lt;/h3&gt;
&lt;p&gt;The default option implemented in DLBS is &lt;code&gt;weak&lt;/code&gt; scaling. Users need to provide the &lt;code&gt;exp.device_batch&lt;/code&gt; and &lt;code&gt;exp.gpus&lt;/code&gt; parameters. The first one is an integer value specifying a per-GPU batch size. The second one is a comma-separated list of GPU identifiers. For instance, the following command line launches a TensorFlow benchmark with ResNet50 on an eight-GPU system with a per-GPU batch size of 32:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;python experimenter.py run -Pexp.framework=&apos;&quot;tensorflow&quot;&apos; \
                           -Pexp.model=&apos;&quot;resnet50&quot;&apos; \
                           -Pexp.gpus=&apos;&quot;0,1,2,3,4,5,6,7&quot;&apos; \
                           -Pexp.device_batch=32 \
                           -Pexp.bench_root=&apos;&quot;./benchmarks/my_experiment&quot;&apos;\
                           -Pexp.log_file=&apos;&quot;${exp.bench_root}/tf_resnet50.log&quot;&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;DLBS will compute the internal parameter &lt;code&gt;exp.effective_batch&lt;/code&gt; by multiplying the number of GPUs by the per-GPU device batch size:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
    &quot;exp.num_gpus&quot;: &quot;$(len(&apos;${exp.gpus}&apos;.replace(&apos;,&apos;, &apos; &apos;).split()))$&quot;,
    &quot;exp.device&quot;: &quot;$(&apos;gpu&apos; if ${exp.num_gpus} &gt; 0 else &apos;cpu&apos;)$&quot;,
    &quot;exp.effective_batch&quot;: &quot;$(${exp.num_gpus}*${exp.device_batch} if &apos;${exp.device}&apos; == &apos;gpu&apos; else ${exp.device_batch})$&quot;
}
&lt;/code&gt;&lt;/pre&gt;
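&lt;p&gt;For the TensorFlow command shown above, these expressions evaluate to &lt;code&gt;exp.num_gpus&lt;/code&gt; = 8, &lt;code&gt;exp.device&lt;/code&gt; = gpu and &lt;code&gt;exp.effective_batch&lt;/code&gt; = 8 * 32 = 256.&lt;/p&gt;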
&lt;h3&gt;Strong scaling&lt;/h3&gt;
&lt;p&gt;There are several ways to benchmark strong scaling. One is to adjust the per-device batch size with the parameter &lt;code&gt;exp.device_batch&lt;/code&gt; so that the effective batch size does not change. The mechanism of &lt;a href=&quot;https://hewlettpackard.github.io/dlcookbook-dlbs/#/intro/intro?id=extensions&quot;&gt;extensions&lt;/a&gt; can be used to disable benchmarks that would result in undesirable effective batch sizes.&lt;/p&gt;
&lt;p&gt;The second approach is to provide the effective batch size as a base parameter and compute the per-device batch size from it. For instance, to explore Caffe2&apos;s strong scaling with the AlexNet model and 1024 images in a batch on an eight-GPU system, users can run the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;python experimenter.py run -Pexp.framework=&apos;&quot;caffe2&quot;&apos; \
                           -Pexp.model=&apos;&quot;alexnet&quot;&apos; \
                           -Vexp.gpus=&apos;[&quot;0&quot;, &quot;0,1&quot;, &quot;0,1,2,3&quot;, &quot;0,1,2,3,4,5,6,7&quot;]&apos; \
                           -Pexp.effective_batch=1024\
                           -Pexp.device_batch=&apos;&quot;$(${exp.effective_batch}/${exp.num_gpus})$&quot;&apos;\
                           -Pexp.bench_root=&apos;&quot;./benchmarks/my_experiment&quot;&apos;\
                           -Pexp.log_file=&apos;&quot;${exp.bench_root}/${exp.id}.log&quot;&apos;
&lt;/code&gt;&lt;/pre&gt;
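&lt;p&gt;With &lt;code&gt;exp.effective_batch&lt;/code&gt; fixed at 1024, the four GPU configurations listed in &lt;code&gt;exp.gpus&lt;/code&gt; correspond to computed per-device batch sizes of 1024, 512, 256 and 128, matching the strong scaling table above.&lt;/p&gt;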
&lt;p&gt;For an introduction to specifying benchmark configurations, read the &lt;a href=&quot;https://hewlettpackard.github.io/dlcookbook-dlbs/#/intro/intro&quot;&gt;introduction&lt;/a&gt; section. For commonly used parameters and framework specific parameters, read the &lt;a href=&quot;https://hewlettpackard.github.io/dlcookbook-dlbs/#/parameters/parameters?id=parameters&quot;&gt;parameters&lt;/a&gt; and the &lt;a href=&quot;https://hewlettpackard.github.io/dlcookbook-dlbs/#/frameworks/frameworks?id=frameworks&quot;&gt;framework specific parameters&lt;/a&gt; sections.&lt;/p&gt;
&lt;h3&gt;BVLC/NVIDIA Caffe&lt;/h3&gt;
&lt;p&gt;The family of Caffe frameworks supports multi-GPU training with the NCCL/NCCL2 library. The only configuration parameter that affects this is the comma-separated list of  GPUs, &lt;code&gt;exp.gpus&lt;/code&gt;.&lt;/p&gt;
&lt;h3&gt;TensorFlow&lt;/h3&gt;
&lt;p&gt;We use the &lt;a href=&quot;https://github.hpe.com/labs/dlcookbook/tree/master/python/tf_cnn_benchmarks&quot;&gt;tf_cnn_benchmarks&lt;/a&gt; project as the backend for the TensorFlow framework. The following parameters affect multi-GPU training:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;code&gt;exp.gpus&lt;/code&gt; Comma separated list of GPUs to use.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;tensorflow.var_update&lt;/code&gt; The method for managing variables: &lt;em&gt;parameter_server&lt;/em&gt;, &lt;em&gt;replicated&lt;/em&gt;, &lt;em&gt;distributed_replicated&lt;/em&gt;, &lt;em&gt;independent&lt;/em&gt; (&lt;a href=&quot;https://github.hpe.com/labs/dlcookbook/blob/master/python/tf_cnn_benchmarks/tf_cnn_benchmarks.py#L164&quot;&gt;source code&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;&lt;code&gt;tensorflow.use_nccl&lt;/code&gt; Whether to use nccl all-reduce primitives where possible (&lt;a href=&quot;https://github.hpe.com/labs/dlcookbook/blob/master/python/tf_cnn_benchmarks/tf_cnn_benchmarks.py#L168&quot;&gt;source code&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;tensorflow.local_parameter_device&lt;/code&gt; Device to use as parameter server: cpu or gpu (see &lt;a href=&quot;https://github.hpe.com/labs/dlcookbook/blob/master/python/tf_cnn_benchmarks/tf_cnn_benchmarks.py#L92&quot;&gt;source code&lt;/a&gt;)&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Caffe2&lt;/h3&gt;
&lt;p&gt;In the current version of the &lt;a href=&quot;https://github.hpe.com/labs/dlcookbook/tree/master/python/caffe2_benchmarks&quot;&gt;Caffe2 backend&lt;/a&gt;, the only parameter that affects multi-GPU training is &lt;code&gt;exp.gpus&lt;/code&gt;.&lt;/p&gt;
&lt;h3&gt;MXNet&lt;/h3&gt;
&lt;p&gt;Currently, in our &lt;a href=&quot;https://github.hpe.com/labs/dlcookbook/tree/master/python/mxnet_benchmarks&quot;&gt;MXNet backend&lt;/a&gt; two parameters affect multi-GPU training:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;code&gt;exp.gpus&lt;/code&gt; Comma-separated list of GPUs to use.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;mxnet.kv_store&lt;/code&gt; The method used to aggregate gradients: &apos;local&apos;, &apos;device&apos;, &apos;dist_sync&apos;, &apos;dist_device_sync&apos; or &apos;dist_async&apos;. See &lt;a href=&quot;https://mxnet.incubator.apache.org/how_to/multi_devices.html&quot;&gt;this page&lt;/a&gt; for more details.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;References&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;&lt;a href=&quot;https://arxiv.org/pdf/1610.06276.pdf&quot;&gt;Modeling Scalability of Distributed Machine Learning&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.sharcnet.ca/help/index.php/Measuring_Parallel_Scaling_Performance&quot;&gt;Measuring Parallel Scaling Performance&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;</content:encoded></item><item><title><![CDATA[Intelligent Provisioning ]]></title><link>https://developer.hpe.com/intelligent-provisioning/</link><guid isPermaLink="false">https://developer.hpe.com/intelligent-provisioning/</guid><pubDate>Wed, 22 Nov 2017 18:48:50 GMT</pubDate><content:encoded></content:encoded></item><item><title><![CDATA[Surviving in the Schema while running initial inventory]]></title><description><![CDATA[In previous articles, we managed to login and acquire a login session.
Now let us discover the Composable Infrastructure managed by HPE…]]></description><link>https://developer.hpe.com/surviving-in-the-schema-while-running-initial-inventory/</link><guid isPermaLink="false">https://developer.hpe.com/surviving-in-the-schema-while-running-initial-inventory/</guid><pubDate>Mon, 11 Sep 2017 18:45:18 GMT</pubDate><content:encoded>&lt;p&gt;In previous articles, we managed to login and acquire a login session.
Now let us discover the Composable Infrastructure managed by HPE
OneView, in a top down approach through the objects available. The
following illustration shows the objects we will check during this
process:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/ov-inventory-schema-1-1505155314126.png&quot; alt=&quot;Datacenter structure&quot;&gt;&lt;/p&gt;
&lt;h1&gt;So what exactly is a Datacenter?&lt;/h1&gt;
&lt;p&gt;From the graph, we know that the topmost object is a Datacenter, so let
us use our REST client to browse for the available Datacenters in our
configuration using /rest/datacenters. Before we do so, how do we know
which properties make up a Datacenter object? There is a technique to
find this out: append /schema to the end of the URL, for example
/rest/datacenters/schema, to retrieve the schema for a datacenter.&lt;/p&gt;
&lt;p&gt;Note: This technique is still experimental in HPE OneView v2.0, and not
all resources are described by a GET /schema command, although the ones
typically used to create an initial inventory are. This technique is
likely to evolve in the future.&lt;/p&gt;
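&lt;p&gt;If you prefer scripting to a browser-based REST client, a minimal Python sketch along the same lines could look like the following. The appliance address and session token are placeholders that you must supply (the token is obtained from POST /rest/login-sessions as shown in the earlier authentication article), and verify=False is only there because the appliance ships with a self-signed certificate.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Illustrative sketch: retrieve the datacenter schema from HPE OneView.
import json
import requests

APPLIANCE = &quot;https://oneview.example.com&quot;  # placeholder appliance address
HEADERS = {
    &quot;Accept&quot;: &quot;application/json&quot;,
    &quot;X-API-Version&quot;: &quot;200&quot;,
    &quot;auth&quot;: &quot;your-sessionID-goes-here&quot;,    # placeholder session token
}

resp = requests.get(APPLIANCE + &quot;/rest/datacenters/schema&quot;,
                    headers=HEADERS, verify=False)
resp.raise_for_status()

# Pretty-print the schema instead of pasting it into an online parser
print(json.dumps(resp.json(), indent=2))
&lt;/code&gt;&lt;/pre&gt;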
&lt;p&gt;Let us try with our HTTP Requester Client.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/ov-inventory-schema-2-1505155330418.png&quot; alt=&quot;REST call via HTTP to retrieve the datacenters schema&quot;&gt;&lt;/p&gt;
&lt;p&gt;If you find the result difficult to read, copy the response JSON and
open another browser tab to an online JSON parser (such as
&lt;a href=&quot;http://json.parser.online.fr&quot;&gt;http://json.parser.online.fr&lt;/a&gt;), and paste the JSON response in the left
pane there.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/ov-inventory-schema-3-1505155337227.png&quot; alt=&quot;JSON online parser to parse the retrieved JSON object&quot;&gt;&lt;/p&gt;
&lt;p&gt;From this tool, you can collapse and expand the properties of an object
and understand what is available. For example, from the capture below, we
can see that a Datacenter has a dozen properties such as width, depth,
or status.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/ov-inventory-schema-4-1505155343740.png&quot; alt=&quot;JSON online parser have collapse and expand option to understand what is available in the JSON object&quot;&gt;&lt;/p&gt;
&lt;p&gt;Alternatively, we can use Postman (from Chrome). The result window
allows you to expand or collapse elements directly in the body of the
response, which is very convenient:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/ov-inventory-schema-5-1505155351099.png&quot; alt=&quot;Postman REST tool for chrome allows to expand or collapse elements directly in the body of the response&quot;&gt;&lt;/p&gt;
&lt;p&gt;We can continue this exploration and drill down on width, which has a
description, a type (integer), and min and max values. We can also
find a required flag, which is important, as required properties are
mandatory when creating new objects with the POST method. We can find
other types of objects as well. For example, expand the currency property
and you will see that its type is string.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/ov-inventory-schema-6-1505155367635.png&quot; alt=&quot;drill down to individual properties of an object&quot;&gt;&lt;/p&gt;
&lt;p&gt;Integer and string are simple types, but if you continue to scroll down
the list of properties, you will see the contents property, which is an
array of items, of type object. An item is constructed from a number of
properties such as rotation, x, y and a resourceUri. We also call this a
collection.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/ov-inventory-schema-7-1505155374560.png&quot; alt=&quot;Array of items with in JSON object&quot;&gt;&lt;/p&gt;
&lt;p&gt;Note: In JSON, integers are integral numeric values (3, -24 …) while
numbers can be integers or floating-point values (3, -24, 1.5, -3.3333
…).&lt;/p&gt;
&lt;p&gt;We understood from the exploration of the datacenter schema that a
datacenter is composed of a collection of zero or more racks, for which we
now have a resourceUri.&lt;/p&gt;
&lt;h1&gt;So what is a Rack?&lt;/h1&gt;
&lt;p&gt;Before we start exploring the content of our datacenter, let us look at
the schema for a rack by querying the /rest/racks/schema URL with our REST
client tool.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/ov-inventory-schema-8-1505155381704.png&quot; alt=&quot;schema for a rack using URL in REST client tool&quot;&gt;&lt;/p&gt;
&lt;p&gt;Again, we can cut/paste the REST client response into a JSON parser to
navigate it easily. From there we can discover the properties of a rack,
such as width, height, depth, and a collection of rackMounts, which
represents zero or more items contained in the rack (enclosure,
server-hardware, power delivery device).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/ov-inventory-schema-9-1505155389486.png&quot; alt=&quot;paste the REST client response into a JSON parser to discover the properties of a rack&quot;&gt;&lt;/p&gt;
&lt;p&gt;We can expand the rackMounts property to understand what composes a
rack. We discover that it is made of a collection of items, and that
each item has a mountUri to access it.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/ov-inventory-schema-10-1505155395931.png&quot; alt=&quot;expand the rackMounts property to understand what composes a rack&quot;&gt;&lt;/p&gt;
&lt;p&gt;We now understand that an HPE OneView instance contains a collection of
one or more datacenters, each composed of a collection of zero or more
racks, each of which contains a collection of one or more items such as a
server or an enclosure. We know enough to start our first inventory.&lt;/p&gt;
&lt;h1&gt;Let&apos;s look it up now!&lt;/h1&gt;
&lt;p&gt;Let&apos;s now use our REST Client to explore the object instances found in
our environment. Starting from the topmost objects, the datacenters.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/ov-inventory-schema-11-1505155402777.png&quot; alt=&quot;REST Client to explore the object instances found in our environment. Starting from the topmost objects, the datacenters&quot;&gt;&lt;/p&gt;
&lt;p&gt;From the result JSON we find out that we have only one datacenter,
called &quot;Sophia Antipolis&quot;, and in this datacenter we have a single rack
whose URI is /rest/racks/Rack-221.&lt;/p&gt;
&lt;p&gt;Note: total is the total number of elements while count is the number of
elements returned in this &quot;page&quot;. prevPageUri and nextPageUri allow you to
navigate to other pages in case of a large number of items.&lt;/p&gt;
&lt;p&gt;We can now drill down to the content of Rack-221.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/ov-inventory-schema-12-1505155409949.png&quot; alt=&quot;drill down to the content of RACK-221&quot;&gt;&lt;/p&gt;
&lt;p&gt;From that query, we discover that our Rack-221 is an &quot;HP 42U Intelligent
Series Rack&quot; and that in this rack we have, from the bottom up:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;One 10U enclosure (/rest/enclosures/09SGH102X6J1)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Another 10U enclosure (/rest/enclosures/09SGH100X6J1)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;One 2U
server (/rest/server-hardware/37333036-3831-584D-5131-303030333037)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;And finally, a 1U
server (/rest/server-hardware/37333036-3831-584D-5131-303030323038).&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;We have, with the mountUri property, a direct access to the URI of each
item in the rack.&lt;/p&gt;
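&lt;p&gt;To tie together what we have discovered so far, here is a minimal, hypothetical Python sketch of such an inventory walk. The appliance address and session token are placeholders, and the exact property names should always be confirmed against the corresponding /schema output.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Illustrative sketch: walk datacenters, their racks and the items they contain.
import requests

APPLIANCE = &quot;https://oneview.example.com&quot;  # placeholder appliance address
HEADERS = {
    &quot;Accept&quot;: &quot;application/json&quot;,
    &quot;X-API-Version&quot;: &quot;200&quot;,
    &quot;auth&quot;: &quot;your-sessionID-goes-here&quot;,    # placeholder session token
}

def get(uri):
    resp = requests.get(APPLIANCE + uri, headers=HEADERS, verify=False)
    resp.raise_for_status()
    return resp.json()

for dc in get(&quot;/rest/datacenters&quot;)[&quot;members&quot;]:
    print(&quot;Datacenter:&quot;, dc[&quot;name&quot;])
    # &apos;contents&apos; is the collection of items (racks) with their resourceUri
    for item in dc.get(&quot;contents&quot;, []):
        rack = get(item[&quot;resourceUri&quot;])     # e.g. /rest/racks/Rack-221
        print(&quot;  Rack:&quot;, rack.get(&quot;name&quot;, item[&quot;resourceUri&quot;]))
        # each rackMounts entry points at an enclosure or server via mountUri
        for mount in rack.get(&quot;rackMounts&quot;, []):
            print(&quot;    Item:&quot;, mount[&quot;mountUri&quot;])
&lt;/code&gt;&lt;/pre&gt;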
&lt;h1&gt;Hugh! What is an Enclosure?&lt;/h1&gt;
&lt;p&gt;We have not yet explored what an enclosure is, so let&apos;s be curious and
take a quick look at the schema of an enclosure. In HPE terminology, an
enclosure is a C7000 enclosure, and it looks something like this:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/ov-inventory-schema-13-1505155418059.png&quot; alt=&quot;C7000 enclosure&quot;&gt;&lt;/p&gt;
&lt;p&gt;It is composed of many elements, such as blade servers (deviceBays),
onboard administrators (managerBays), interconnect modules
(interconnectBays), fans (fanBays), power supplies (powerSupplyBays). If
we look at the schema of an enclosure (/rest/enclosures/schema) we can
confirm that.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/ov-inventory-schema-14-1505155424179.png&quot; alt=&quot;ov inventory schema 14&quot;&gt;&lt;/p&gt;
&lt;p&gt;We can also see that an enclosure is composed of a deviceBays array of
items. If we now query our first enclosure
(/rest/enclosures/09SGH102X6J1 for example), we can see that this
is an enclosure called Encl2, and we can see many of the useful
parameters that make up an enclosure.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/ov-inventory-schema-15-1505155430487.png&quot; alt=&quot;explore enclosure deviceBays array of items&quot;&gt;&lt;/p&gt;
&lt;p&gt;We can also discover that it is composed of 16 device bays and that each
of these bays is, in our case, a blade server for which we are given the
deviceUri (/rest/server-hardware/31393736-3831-4753-4831-30325837524E
for example). Note that this is a server-hardware resource, similar to
the rack-mount servers we already discovered in the rack earlier.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/ov-inventory-schema-16-1505155436138.png&quot; alt=&quot;Explore the enclosure deviceBays array of items&quot;&gt;&lt;/p&gt;
&lt;p&gt;If you checked all the items in the deviceBays collection, you would
discover 14 servers, and another 14 servers in the second enclosure
(Encl1), plus our 2 rack-mount systems, which makes a total of 30
server-hardware resources.&lt;/p&gt;
&lt;p&gt;Note: servers in bays 1 and 2 are double-height servers, and as such,
they also use bays 9 and 10. In such a case, if you look at the server
URI of bay x and bay x+8, you will see the same server URI.&lt;/p&gt;
&lt;h1&gt;Et voilà! The result of our initial discovery&lt;/h1&gt;
&lt;p&gt;Based on this information we can now build a complete picture of the
discovered environment. We could represent this in our own software
component, storing some of the properties of each object, plus
the resource URI for quick retrieval of the full object properties.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/ov-inventory-schema-17-1505155441995.png&quot; alt=&quot;complete picture of the discovered environment&quot;&gt;&lt;/p&gt;
&lt;p&gt;In a next article, we will discuss how to find and associate the
management processors of each of these servers.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[PowerShell-Defined Infrastructure]]></title><description><![CDATA[In previous articles
(here,
here
and
here),
we used POSTman to experiment with the Composable Infrastructure API,
and then we reused some of…]]></description><link>https://developer.hpe.com/powershell-defined-infrastructure/</link><guid isPermaLink="false">https://developer.hpe.com/powershell-defined-infrastructure/</guid><pubDate>Mon, 11 Sep 2017 17:07:33 GMT</pubDate><content:encoded>&lt;p&gt;In previous articles
(&lt;a href=&quot;https://community.dev.hpe.com/t5/Blogs/First-steps-with-programming-the-HPE-Composable-Infrastructure/ba-p/235724&quot;&gt;here&lt;/a&gt;,
&lt;a href=&quot;https://community.dev.hpe.com/t5/Blogs/Authenticating-against-HPE-Composable-Infrastructure-API/ba-p/235893&quot;&gt;here&lt;/a&gt;
and
&lt;a href=&quot;https://community.dev.hpe.com/t5/Blogs/Surviving-in-the-Schema-while-running-our-first-inventory/ba-p/235998&quot;&gt;here&lt;/a&gt;),
we used POSTman to experiment with the Composable Infrastructure API,
and then we reused some of these experiments
(&lt;a href=&quot;https://community.dev.hpe.com/t5/Blogs/quot-cURL-ing-quot-through-the-HPE-Composable-Infrastructure-API/ba-p/236298&quot;&gt;here&lt;/a&gt;),
to script some automation using cURL in a Shell script (or from a
Windows Command prompt with some patience). Well, for Windows there is a
far better solution to script and automate a HPE Composable
Infrastructure, and it uses Windows PowerShell.&lt;/p&gt;
&lt;p&gt;Windows PowerShell is THE command-line interface (CLI) and scripting
language unanimously adopted by Windows administrators, for several
reasons including:&lt;/p&gt;
&lt;p&gt;• Its standard and normalized syntax. PowerShell uses a verb-noun
syntax to interact with the system. You use the Get verb to query and
provide the object noun to specify the resource. For example,
Get-Process returns a list of processes and Get-EventLog returns a list
of event logs. To create new resources, you use the verb New; to update,
you use the verb Set, and so on. You learn the syntax once and apply the
same pattern to different resources[1]&lt;/p&gt;
&lt;p&gt;• Its extensibility. By default, Windows PowerShell provides commands, or
more precisely Cmdlets, to interact with the OS. Microsoft also provides a
rich set of libraries to interact with other components of Windows such
as Active Directory, DHCP, networking... Libraries are packaged as modules
and administrators simply import the modules to get access to the Cmdlets
related to those components. For example, to get a list of Active Directory
users, you can run Get-ADUser; to get a list of disk volumes you can use
Get-Volume. Going further, any Microsoft product ships with its
own PowerShell module and, as a result, Windows PowerShell can be seen as
a management platform rather than just a rigid CLI.&lt;/p&gt;
&lt;p&gt;On that trend, HPE provides an HPE OneView PowerShell module, which
consists of a set of script functions exported as Cmdlets. The script
functions wrap PowerShell code around the REST API and hide the
complexity of using the REST API to interact with HPE OneView. Windows
administrators can now use their familiar syntax to interact with HPE
OneView, for example Get-HPOVNetwork, Get-HPOVServer and so on.&lt;/p&gt;
&lt;h1&gt;Installing the HPE OneView PowerShell module&lt;/h1&gt;
&lt;p&gt;The HPE OneView PowerShell library is an HPE Open Source project that is
hosted on &lt;a href=&quot;http://hewlettpackard.github.io/POSH-HPOneView/&quot;&gt;GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/ps-basic-1-1505150268392.png&quot; alt=&quot;Link to PowerShell library .exe hosted on GitHub&quot;&gt;&lt;/p&gt;
&lt;p&gt;The easiest way to install the PowerShell library is to select and
download the
&lt;a href=&quot;https://github.com/HewlettPackard/POSH-HPOneView/releases&quot;&gt;installer&lt;/a&gt;
to run. The Installer provides digitally signed assets and verifies
system requirements. You can also download the repository (zip or
tar.gz), which will only contain the library components, and it is up to
the administrator to put the library into the required directory and
validate that their system meets the requirements. Once it is installed,
you can use it in your PowerShell environment (interactive or script)
using an import-module command:&lt;/p&gt;
&lt;p&gt;import-module hponeview.200&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/ps-basic-2-1505150284255.png&quot; alt=&quot;ps basic 2&quot;&gt;&lt;/p&gt;
&lt;h1&gt;Looking for help?&lt;/h1&gt;
&lt;p&gt;We can check what this library offers with the following command:&lt;/p&gt;
&lt;p&gt;get-command -module hponeview.200&lt;/p&gt;
&lt;p&gt;There is a long list of commands available. You can get more details on
a given Cmdlet using the help command:&lt;/p&gt;
&lt;p&gt;help connect-hpovmgmt&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/ps-basic-3-1505150291900.png&quot; alt=&quot;import-module command of hponeview.200&quot;&gt;&lt;/p&gt;
&lt;h1&gt;Connect…&lt;/h1&gt;
&lt;p&gt;Let us try this immediately:&lt;/p&gt;
&lt;p&gt;Connect-HPOVMgmt 213.30.139.22:37441 -username administrator -password
password&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/ps-basic-4-1505150298918.png&quot; alt=&quot;connect command to HPEOneview&quot;&gt;&lt;/p&gt;
&lt;h1&gt;And start scripting!&lt;/h1&gt;
&lt;p&gt;Once connected we can use the different Cmdlets to start automating an
HPE Composable Infrastructure from PowerShell. For example, let us
gather versions:&lt;/p&gt;
&lt;p&gt;Get-HPOVVersion&lt;/p&gt;
&lt;p&gt;Get-HPOVVersion -ApplianceVer&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/ps-basic-5-1505150306720.png&quot; alt=&quot;Get HPE OneView version command&quot;&gt;&lt;/p&gt;
&lt;p&gt;As we did in previous articles using POSTman or cURL, let us now
retrieve the enclosures managed in this HPE Composable Infrastructure
using:&lt;/p&gt;
&lt;p&gt;Get-HPOVEnclosure&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/ps-basic-6-1505150314278.png&quot; alt=&quot;Retrieve HPE OneView enclosures&quot;&gt;&lt;/p&gt;
&lt;p&gt;We can also retrieve the list of servers available:&lt;/p&gt;
&lt;p&gt;Get-HPOVServer&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/ps-basic-7-1505150322382.png&quot; alt=&quot;Retrieve HPE OneView servers&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can see that the PowerShell library handles a lot of the hard work
we had to do while programming directly against the REST API. The HPE
OneView PowerShell library handles things such as HTTP headers and
JSON parsing for us.&lt;/p&gt;
&lt;h1&gt;We can always REST&lt;/h1&gt;
&lt;p&gt;However, what if we need to get something which does not have a Cmdlet
available yet? For example, in our REST-based inventory we listed the
datacenters managed by the HPE OneView appliance, but there is no
GET-HPOVDatacenter in the HPE OneView Module. Well, there is a solution
for this in the library: it is called Send-HPOVRequest, and it is a
great way to access the REST API directly. Send-HPOVRequest is very
similar to what we did with cURL in a previous article.&lt;/p&gt;
&lt;p&gt;Let us use this to retrieve our datacenter list:&lt;/p&gt;
&lt;p&gt;$datacenters=Send-HPOVRequest &quot;/rest/datacenters&quot;&lt;/p&gt;
&lt;p&gt;$datacenters.members[0].name&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/ps-basic-8-1505150331230.png&quot; alt=&quot;Retrieve datacenters using Send-HPOVRequest cmdlet&quot;&gt;&lt;/p&gt;
&lt;p&gt;We can use help Send-HPOVRequest to find out more about the Cmdlet. It
shows that the default verb is GET, but we can also use other HTTP
verbs. For example, let us say we want to change the currency property of
our Sophia Antipolis datacenter from USD to Euros:&lt;/p&gt;
&lt;p&gt;$datacenters.members[0].currency&lt;/p&gt;
&lt;p&gt;USD&lt;/p&gt;
&lt;p&gt;$datacenters.members[0].currency=&quot;Euros&quot;&lt;/p&gt;
&lt;p&gt;Send-HPOVRequest $datacenters.members[0].uri &quot;PUT&quot;
$datacenters.members[0]&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/ps-basic-9-1505150337250.png&quot; alt=&quot;ps basic 9&quot;&gt;&lt;/p&gt;
&lt;h1&gt;Feel like contributing?&lt;/h1&gt;
&lt;p&gt;Well, even better. If you think that instead of using Send-HPOVRequest,
it would be better to write a CmdLet for GET-HPOVDatacenter, and you
feel like doing it yourself, then you can contribute using the following
procedure:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;From GitHub, fork the repo&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Make the code change to add your contribution&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;From GitHub, submit a Pull request for review&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;Beware with asynchronous operations&lt;/h1&gt;
&lt;p&gt;By design, HPE OneView is an asynchronous, task-based system. Therefore,
Cmdlets might initiate an asynchronous operation and return immediately
(this is documented in the Cmdlet help). In these cases, it is best
practice to retrieve the task object returned by the Cmdlet and use
Wait-HPOVTaskComplete to wait for task completion before moving on to
the next step. For example, let us use this technique to create a new
server profile:&lt;/p&gt;
&lt;p&gt;New-HPOVServerProfile -name TestProfile -assignmentType &apos;unassigned&apos;
-serverhardwaretype &quot;BL460c Gen8 1&quot; -enclosuregroup &quot;HPE Composable
Infrastructure Enclosures&quot; | Wait-HPOVTaskComplete&lt;/p&gt;
&lt;h1&gt;The sky is the limit!&lt;/h1&gt;
&lt;p&gt;Well, I hope I gave you enough details to get your attention so that you
can start writing your own PowerShell scripts. We have also made
available on the
&lt;a href=&quot;https://github.com/HewlettPackard/POSH-HPOneView/wiki&quot;&gt;Wiki&lt;/a&gt; of the
GitHub page, a number of sample scripts, so do not hesitate to leverage
these as well. We will come back to some of these in upcoming articles.&lt;/p&gt;
&lt;p&gt;[1] Microsoft Approved PowerShell Verbs:
&lt;a href=&quot;https://msdn.microsoft.com/en-us/library/ms714428(v=vs.85).aspx&quot;&gt;https://msdn.microsoft.com/en-us/library/ms714428(v=vs.85).aspx&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Leveraging HPE OneView Single Sign On to iLO Management Processors]]></title><description><![CDATA[In this previous article, Surviving in the Schema… while running our
first
inventory,
we discussed how to run a discovery of the environment…]]></description><link>https://developer.hpe.com/leveraging-hpe-oneview-single-sign-on-to-ilo-management-processors/</link><guid isPermaLink="false">https://developer.hpe.com/leveraging-hpe-oneview-single-sign-on-to-ilo-management-processors/</guid><pubDate>Mon, 11 Sep 2017 16:49:26 GMT</pubDate><content:encoded>&lt;p&gt;In this previous article, &lt;a href=&quot;https://community.dev.hpe.com/t5/Blogs/Surviving-in-the-Schema-while-running-our-first-inventory/ba-p/235998&quot;&gt;Surviving in the Schema… while running our
first
inventory&lt;/a&gt;,
we discussed how to run a discovery of the environment managed by HPE
OneView. We derived a tree structure of the following format:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/ov-ilo-sso-1-1505149237398.png&quot; alt=&quot;Datacenter tree structure&quot;&gt;&lt;/p&gt;
&lt;p&gt;We can check the schema for a server-hardware using our favorite REST
client tool (Postman extension for Chrome in our case) with the
following command: GET /rest/server-hardware/schema against the HPE
OneView API.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/ov-ilo-sso-2-1505149245923.png&quot; alt=&quot;check the schema for a server-hardware using our favorite REST client tool&quot;&gt;&lt;/p&gt;
&lt;p&gt;We notice that a management processor (mp) is also available in a
server-hardware and that some details about the management processor are
provided (mpFirmwareVersion, mpHostInfo, mpModel, mpState).&lt;/p&gt;
&lt;h1&gt;Don&apos;t you like my iLO?&lt;/h1&gt;
&lt;p&gt;I am sure most of you are familiar with the concept of a management
processor in servers, but in the ProLiant family of servers, these are
best known as iLO (integrated Lights-Out). iLO management cards have
been provided in ProLiant servers since 2001. They have since been
replaced by a custom chip on the server motherboard and are extremely
popular in datacenter management. Today, iLO 4 is the current generation
of iLO processor, present on Gen8 and Gen9 ProLiant servers. iLO
management processors provide a wide variety of great options when it
comes to managing servers in an out-of-band manner. To start with,
the management IP address of the iLO offers a web console to manage the
server-hardware.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/ov-ilo-sso-3-1505149253257.png&quot; alt=&quot;ILO remote console to the system&quot;&gt;&lt;/p&gt;
&lt;p&gt;But probably the most used feature of the iLO processor is the
capability to open a remote console to the system, which really allows
you to be &quot;on&quot; the system without ever having to enter a computer room,
whether or not the server is powered on or has an operating system
installed.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/ov-ilo-sso-4-1505149260202.png&quot; alt=&quot;ILO remote console to the system&quot;&gt;&lt;/p&gt;
&lt;p&gt;The HPE OneView web console provides a direct link to the web interface of
the iLO of any given server-hardware, as well as a direct link to the
iLO Remote Console of the server. This is great, but what would also be
nice is if third-party applications integrating with HPE OneView through
the Composable API could leverage this feature and embed those links in
their own application context, allowing users to reach, in one single click,
the iLO Web Console or Remote Console of any discovered server-hardware.
This article describes how to do this.&lt;/p&gt;
&lt;h1&gt;So how do we SSO?&lt;/h1&gt;
&lt;p&gt;iLO Management Processors are protected by credentials. Users can be
added to the authorized users list, or you can even leverage an existing
LDAP infrastructure, but to get started you have to access the iLO using
a factory generated administrator password provided with every server
that HPE ships with an iLO. However, because the server was discovered
and placed under the management of HPE OneView, there is a special
&quot;agreement&quot; between HPE OneView and each iLO to allow for a Single Sign
On (SSO) from HPE OneView to log in to iLOs. This means that valid
credentials in HPE OneView allow for pass-through authentication to
all managed iLOs. This is an extremely powerful feature, which allows
configuration of large groups of server iLOs with just an HPE OneView
account.&lt;/p&gt;
&lt;p&gt;We query for the list of server-hardware available to us using a GET
/rest/server-hardware, and from there pick a given one of them using its
URI (for example
/rest/server-hardware/37333036-3831-4753-4831-30325838524E)&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/ov-ilo-sso-5-1505149266642.png&quot; alt=&quot;query for the list of server-hardware available to us&quot;&gt;&lt;/p&gt;
&lt;p&gt;We can see all the details about this particular server-hardware
instance and its iLO. All? Well, not quite, because if we check the HPE
OneView API Reference we can find the following two commands:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;GET /rest/server-hardware/{id}/iloSsoUrl&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;GET /rest/server-hardware/{id}/remoteConsoleUrl&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/ov-ilo-sso-6-1505149274227.png&quot; alt=&quot;HPE Oneview API reference to retrieve server hardware&quot;&gt;&lt;/p&gt;
&lt;p&gt;Note: As you can see, the HPE OneView API Reference describes these calls,
but they appear neither in the server-hardware description nor in the
schema description.&lt;/p&gt;
&lt;p&gt;If we try to use the first one:
/rest/server-hardware/37333036-3831-4753-4831-30325838524E/iloSsoUrl&lt;/p&gt;
&lt;p&gt;We obtain the following result:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/ov-ilo-sso-7-1505149281204.png&quot; alt=&quot;Retrieve server hardware ILO SSO Url&quot;&gt;&lt;/p&gt;
&lt;p&gt;While if we try to query HPE OneView with the following additional URI:
/rest/server-hardware/37333036-3831-4753-4831-30325838524E/remoteConsoleUrl&lt;/p&gt;
&lt;p&gt;We get the following response:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/ov-ilo-sso-8-1505149288674.png&quot; alt=&quot;Oneview API reference to get server hardware&quot;&gt;&lt;/p&gt;
&lt;p&gt;Using these two calls, we can offer three types of interesting
integrations:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Open the Web interface of the iLO of that given server&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Open the iLO Remote Console of that given server&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use the iLO REST API to set/get particular attributes of a given
server&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h1&gt;Direct link to iLO Web Console&lt;/h1&gt;
&lt;p&gt;An application might decide to provide a link to the iLO Web Interface
as part of its own representation of a server-hardware object. In this
case, it will need to run a GET /rest/server-hardware/{id}/iloSsoUrl to
retrieve the URL to use when the user clicks to open the iLO Web
Interface.&lt;/p&gt;
&lt;h1&gt;Direct link to iLO Remote Console&lt;/h1&gt;
&lt;p&gt;In addition to providing a link to the iLO Web Console, an application
might decide to provide a direct link to the iLO Remote Console of a
server. In order to do this, a call to GET
/rest/server-hardware/{id}/remoteConsoleUrl would have to be initiated,
and the retrieved link should be opened.&lt;/p&gt;
&lt;p&gt;Note: This link requires the HP iLO Remote Console application installed
on the client machine on which the link is clicked. This application can
be downloaded from the Overview page of the iLO Web Interface of any iLO
management processor&lt;/p&gt;
&lt;h1&gt;Use the iLO REST API&lt;/h1&gt;
&lt;p&gt;Finally, since the iLO 4 management processor offers a powerful REST API, it
is also possible to use SSO to manipulate the configuration of the iLO using
its REST API. More details about the iLO REST API can be found on the
following web page: &lt;a href=&quot;http://www.hpe.com/info/ilo&quot;&gt;www.hpe.com/info/ilo&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Note: the iLO REST API is available since firmware version 2.0, and is
Redfish compliant starting at version 2.3&lt;/p&gt;
&lt;p&gt;For example, you could decide that the application wants to change the
server power capping value based on some internal rules. For this, you
would have to use a PATCH method and pass an HTTP header X-Auth-Token
with the value of the SSO session token retrieved from a GET
/rest/server-hardware/{id}/iloSsoUrl.&lt;/p&gt;
&lt;p&gt;Let&apos;s review an example and imagine you retrieved from GET
/rest/server-hardware/{Server id}/remoteConsoleUrl, the following:&lt;/p&gt;
&lt;p&gt;{&quot;remoteConsoleUrl&quot;:&quot;hplocons://addr=&lt;strong&gt;192.168.1.184&lt;/strong&gt;&amp;#x26;sessionkey=&lt;strong&gt;823b6814fd3aa2d9f82ec9f34504b6d9&lt;/strong&gt;&quot;}&lt;/p&gt;
&lt;p&gt;You would have to assemble the following PATCH command:&lt;/p&gt;
&lt;p&gt;PATCH https://&lt;strong&gt;192.168.1.184/redfish/v1/Chassis/1/PowerMetrics&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;With HTTP Header: X-Auth-Token = &lt;strong&gt;823b6814fd3aa2d9f82ec9f34504b6d9&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;And a body of: {&quot;PowerLimit&quot;: {&quot;LimitInWatts&quot;: 250}}&lt;/p&gt;
&lt;p&gt;In order to cap the given server-hardware to a maximum power utilization
of 250 Watts.&lt;/p&gt;
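&lt;p&gt;As a rough, hypothetical Python sketch (not an official HPE sample; the appliance address and server id are placeholders and error handling is omitted), the whole sequence could look like this, reusing the session key parsed from remoteConsoleUrl as the Redfish X-Auth-Token:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;# Illustrative sketch: power-cap a server through its iLO Redfish API,
# reusing the SSO session key obtained from HPE OneView.
from urllib.parse import parse_qs

import requests

APPLIANCE = &quot;https://oneview.example.com&quot;           # placeholder address
SERVER_ID = &quot;37333036-3831-4753-4831-30325838524E&quot;  # example server id
OV_HEADERS = {
    &quot;Accept&quot;: &quot;application/json&quot;,
    &quot;X-API-Version&quot;: &quot;200&quot;,
    &quot;auth&quot;: &quot;your-sessionID-goes-here&quot;,             # placeholder token
}

# 1. Ask HPE OneView for an SSO-enabled remote console URL
r = requests.get(APPLIANCE + &quot;/rest/server-hardware/&quot; + SERVER_ID + &quot;/remoteConsoleUrl&quot;,
                 headers=OV_HEADERS, verify=False)
r.raise_for_status()
console_url = r.json()[&quot;remoteConsoleUrl&quot;]
# e.g. hplocons://addr=192.168.1.184&amp;amp;sessionkey=823b6814fd3aa2d9f82ec9f34504b6d9
params = parse_qs(console_url.partition(&quot;://&quot;)[2])
ilo_addr = params[&quot;addr&quot;][0]
session_key = params[&quot;sessionkey&quot;][0]

# 2. PATCH the Redfish PowerMetrics resource, authenticating with the SSO key
patch = requests.patch(&quot;https://&quot; + ilo_addr + &quot;/redfish/v1/Chassis/1/PowerMetrics&quot;,
                       headers={&quot;X-Auth-Token&quot;: session_key},
                       json={&quot;PowerLimit&quot;: {&quot;LimitInWatts&quot;: 250}},
                       verify=False)
print(patch.status_code)
&lt;/code&gt;&lt;/pre&gt;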
&lt;p&gt;Note: Priority should be given to changing server-hardware settings via the
HPE OneView API, keeping the iLO REST API for things not available in
the HPE OneView API.&lt;/p&gt;
&lt;h1&gt;Tighter partner integration = best user experience&lt;/h1&gt;
&lt;p&gt;As we discovered, there are different levels of integration that can be
built into an HPE OneView partner application. Of course, the tighter
the integration, the richer the user experience for our joint
customers.&lt;/p&gt;
infrastructure at your
fingertips!
that we can use a scripting language to…]]></description><link>https://developer.hpe.com/hpe-composable-infrastructure-api-can-java-too/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-composable-infrastructure-api-can-java-too/</guid><pubDate>Fri, 08 Sep 2017 16:49:18 GMT</pubDate><content:encoded>&lt;p&gt;We have seen in the previous articles, &lt;a href=&quot;https://community.dev.hpe.com/t5/Blogs/PowerShell-defined-infrastructure-at-your-fingertips/ba-p/236848&quot;&gt;PowerShell-defined
infrastructure at your
fingertips!&lt;/a&gt;
that we can use a scripting language to automate HPE Composable
Infrastructure, and control it in a Software-defined way. However, what
about our developer community? Some say, &quot;Real developers don&apos;t use
scripting languages&quot;, and I will not argue and discuss the differences
between a scripting language and a programming language. Instead, this
blog introduces how to get started programming HPE Composable
Infrastructure with Java.&lt;/p&gt;
&lt;h1&gt;We need an Eclipse&lt;/h1&gt;
&lt;p&gt;Most programmers use an Integrated Development Environment (IDE) to
write code. The most famous one has to be Eclipse, an open source IDE
that you can get from &lt;a href=&quot;https://eclipse.org/downloads&quot;&gt;https://eclipse.org/downloads&lt;/a&gt;. Other famous ones
are Microsoft Visual Studio, and its recent open source incarnation
called Visual Studio Code, which you can get from
&lt;a href=&quot;https://code.visualstudio.com/download&quot;&gt;https://code.visualstudio.com/download&lt;/a&gt;. In this article, we will use
Eclipse.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/api-java-1-1505088036254.png&quot; alt=&quot;Eclipse IDE&quot;&gt;&lt;/p&gt;
&lt;h1&gt;Hello HPE OneView World!&lt;/h1&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;All programmers know this; it all starts with a Hello World! So
let&apos;s try to write our first Java program and print on the screen
the version of our HPE OneView appliance. To do this:&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a new Java Project called OneView&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a new Java class called HelloOneViewWorld with all defaults
and hit Finish&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Cut and paste the following Java code which will be the starting
point for our experiments&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;	import java.io.BufferedReader;
	import java.io.IOException;
	import java.io.InputStreamReader;
	import java.net.HttpURLConnection;
	import java.net.URL;

	public class HelloOneViewWorld {
		public static void main(String[] args) {
			try {
			}
			catch (IOException e) {
				e.printStackTrace();
			}
		}
	}
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;Within the Try {} block, create a new URL object pointing to your HP
OneView appliance:&lt;/li&gt;
&lt;/ul&gt;
&lt;blockquote&gt;
&lt;p&gt;URL url = &lt;strong&gt;new&lt;/strong&gt; URL(&quot;https:///rest/version&quot;);&lt;/p&gt;
&lt;/blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Next, create an URLConnection object. Then specify some of the
arguments such as Method (GET), and Property
(Accept:application/json), remember we already had to do the same
when we used POSTman in &lt;a href=&quot;https://community.dev.hpe.com/t5/Blogs/API-version-What-API-version/ba-p/235776&quot;&gt;API version? What API version?
&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;	HttpURLConnection conn = (HttpURLConnection)url.openConnection();
	conn.setRequestMethod(&quot;GET&quot;);
	conn.setRequestProperty(&quot;Accept&quot;, &quot;application/json&quot;);
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;Next, we want to read the result in a BufferedReader object with:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;	BufferedReader br = new BufferedReader(new InputStreamReader((conn.getInputStream())));
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;To keep it simple, let&apos;s only continue if we received a Status code of 200&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;	if (conn.getResponseCode() != 200) {
		System.out.println(&quot;Error from HPE OneView.... \n&quot;);
		return;
	}
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;Finally, let&apos;s print the result on the console with&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;    String output;
    System.out.println(&quot;Hello World! HPE OneView Version is: \n&quot;);
    while ((output = br.readLine()) != null) {
		System.out.println(output);
    }
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;And close the connection&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;	conn.disconnect();
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;Save your work&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You can download the entire module from the attachments of this article.
Nevertheless, things should look like this:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/api-java-2-1505088497854.png&quot; alt=&quot;Eclipse IDE workspace structure&quot;&gt;&lt;/p&gt;
&lt;p&gt;When you run this code, you get the following displayed on the Eclipse
Console:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Hello World! HPE OneView Version is:&lt;/p&gt;
&lt;p&gt;{&quot;currentVersion&quot;:200,&quot;minimumVersion&quot;:1}&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h1&gt;When SSL security strikes again…&lt;/h1&gt;
&lt;p&gt;Well, maybe not… Because, unfortunately, or fortunately I should say
(security is important), if you run this code you will get a little
delay (while trying to open a connection to your HPE OneView) and finally
a timeout error with something like this in the stack trace:&lt;/p&gt;
&lt;p&gt;at sun.security.ssl.SSLSocketImpl.connect(Unknown Source)&lt;/p&gt;
&lt;p&gt;Which tells us that there was a security problem connecting to this URL,
and the reason is that by default, HPE OneView Appliances install with a
self-signed certificate, which is not considered secure by client
applications. Most developers are familiar with what has to be done at
this point:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Option 1: Explicitly remove all security checks from your Java code
(not advisable). There are many articles describing how to do this;
this one is among the more readable:
&lt;a href=&quot;http://www.nakov.com/blog/2009/07/16/disable-certificate-validation-in-java-ssl-connections/&quot;&gt;http://www.nakov.com/blog/2009/07/16/disable-certificate-validation-in-java-ssl-connections/&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Option 2: Reconfigure your HPE OneView appliance to join an existing
corporate Public Key infrastructure (PKI)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Option 3: Import the self-signed certificate of the HPE OneView
appliance, in the Eclipse certificates keystore using a utility
called keytool (more precisely in the keystore of the Java
environment used by Eclipse). There are plenty of articles on how to
do this, but this one does a good job:
&lt;a href=&quot;http://www.grim.se/guide/jre-cert&quot;&gt;http://www.grim.se/guide/jre-cert&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Let&apos;s describe briefly the steps involved with option 3:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Locate the Java run time environment used by Eclipse (Project
properties, then Java Build Path, Libraries)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Open a command prompt and change directory to the root of the Java
run time library&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Open a browser to the HPE OneView Appliance&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Export certificate to the root of Java run time library with name:
OneView.cer&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In a command prompt, run:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;cd bin&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;keytool -import -file ../OneView.cer -alias MyOneView -keystore
../lib/security/cacerts&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;when prompted for password, use: changeit&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;review changes and make note of the DNS names associated with
this certificate; you will use one of these as the base URL in
your program&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Type yes to import certificate in keystore&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You can use keytool -list to verify it has been imported, and
keytool -delete to clean up&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If the DNS names found in the certificate are not valid in your
DNS environment, add a line for it in your system hosts file
(in c:\windows\system32\drivers\etc\).&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Once the certificate is loaded in the keystore (meaning that your Java
environment now trusts the HPE OneView self-signed certificate), your
Java code will proceed.&lt;/p&gt;
&lt;h1&gt;Getting a session token&lt;/h1&gt;
&lt;p&gt;We can now move to the next logical step, which is to authenticate
against the HPE OneView REST API and obtain a session token as we already
did in &lt;a href=&quot;https://community.dev.hpe.com/t5/Blogs/Authenticating-against-HPE-Composable-Infrastructure-API/ba-p/235893&quot;&gt;Authenticating against HPE Composable Infrastructure
API&lt;/a&gt;.
So let&apos;s create another Java class called GetToken.java in Eclipse, and
cut/paste the code from HelloOneViewWorld.java.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Add another import for java.io.OutputStream&lt;/p&gt;
&lt;p&gt;import java.io.OutputStream;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Change the URL object to use:&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;blockquote&gt;
&lt;p&gt;URL url = new URL(&quot;https://{HPEOneViewappliance}/rest/login-sessions&quot;);&lt;/p&gt;
&lt;/blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Replace the connection properties with the following:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;	conn.setRequestMethod(&quot;POST&quot;);
	conn.setDoOutput(true);
	conn.setRequestProperty(&quot;Accept&quot;, &quot;application/json&quot;);
	conn.setRequestProperty(&quot;Content-Type&quot;, &quot;application/json&quot;);
	conn.setRequestProperty(&quot;X-API-Version&quot;, &quot;200&quot;);
	String body = &quot;{\&quot;userName\&quot;:\&quot;administrator\&quot;,\&quot;password\&quot;:\&quot;password\&quot;}&quot;;
	OutputStream os = conn.getOutputStream();
	os.write(body.getBytes());
	os.flush();
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;Replace the Hello World string with &quot; HPE OneView Response is: \n&quot;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Everything else can remain unchanged and the final code should look like
this:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/api-java-3-1505088504470.png&quot; alt=&quot;Java code to get Oneview appliance token&quot;&gt;&lt;/p&gt;
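&lt;p&gt;In case the screenshot is hard to read or copy from, here is a minimal
sketch of what GetToken.java can look like once these changes are applied.
The appliance address and the credentials are placeholders that you must
replace with your own values.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class GetToken {
	public static void main(String[] args) throws Exception {
		// Placeholder appliance address: replace with your HPE OneView appliance
		URL url = new URL(&quot;https://oneview.example.net/rest/login-sessions&quot;);
		HttpURLConnection conn = (HttpURLConnection) url.openConnection();
		conn.setRequestMethod(&quot;POST&quot;);
		conn.setDoOutput(true);
		conn.setRequestProperty(&quot;Accept&quot;, &quot;application/json&quot;);
		conn.setRequestProperty(&quot;Content-Type&quot;, &quot;application/json&quot;);
		conn.setRequestProperty(&quot;X-API-Version&quot;, &quot;200&quot;);

		// Placeholder credentials: replace with a valid HPE OneView account
		String body = &quot;{\&quot;userName\&quot;:\&quot;administrator\&quot;,\&quot;password\&quot;:\&quot;password\&quot;}&quot;;
		OutputStream os = conn.getOutputStream();
		os.write(body.getBytes());
		os.flush();

		// Read and print the JSON response, which contains the sessionID
		System.out.println(&quot; HPE OneView Response is: \n&quot;);
		BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
		String line;
		while ((line = in.readLine()) != null) {
			System.out.println(line);
		}
		in.close();
	}
}
&lt;/code&gt;&lt;/pre&gt;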
&lt;p&gt;You can now execute this code, which, provided you solved the SSL issue
mentioned above, will return the following:&lt;/p&gt;
&lt;p&gt;HPE OneView Response is:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;	{&quot;partnerData&quot;:{},
	&quot;sessionID&quot;:&quot;MjA5NjEwOTM0MTQ4bcFsogYf-fHCmOPdSsLu0259scrx_SFV&quot;}
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;And continue from there…&lt;/h1&gt;
&lt;p&gt;Once you have a version number and a session token you can place any
call you&apos;d like on the HPE OneView API. Remember to always use the
following connection properties:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;	conn.setRequestProperty(&quot;Accept&quot;, &quot;application/json&quot;);
	conn.setRequestProperty(&quot;X-API-Version&quot;, &quot;YourSelectedVersion&quot;);
	conn.setRequestProperty(&quot;auth&quot;, &quot;your-sessionID-goes-here&quot;);
&lt;/code&gt;&lt;/pre&gt;
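&lt;p&gt;As a purely illustrative sketch (the class name, the appliance address and
the sessionID are placeholders), a follow-up call such as GET
/rest/server-hardware only differs from GetToken.java by the verb, the URI
and the headers above:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class GetServerHardware {
	public static void main(String[] args) throws Exception {
		// Placeholders: appliance address and the sessionID returned by /rest/login-sessions
		String appliance = &quot;oneview.example.net&quot;;
		String sessionID = &quot;your-sessionID-goes-here&quot;;

		URL url = new URL(&quot;https://&quot; + appliance + &quot;/rest/server-hardware&quot;);
		HttpURLConnection conn = (HttpURLConnection) url.openConnection();
		conn.setRequestMethod(&quot;GET&quot;);
		conn.setRequestProperty(&quot;Accept&quot;, &quot;application/json&quot;);
		conn.setRequestProperty(&quot;X-API-Version&quot;, &quot;200&quot;);
		conn.setRequestProperty(&quot;auth&quot;, sessionID);

		// Print the raw JSON response; parsing it is up to you (or to the Java SDK below)
		BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
		String line;
		while ((line = in.readLine()) != null) {
			System.out.println(line);
		}
		in.close();
	}
}
&lt;/code&gt;&lt;/pre&gt;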
&lt;p&gt;However, you should also be aware that a Java SDK for HPE OneView is
available on GitHub
(&lt;a href=&quot;https://github.com/HewlettPackard/oneview-sdk-java&quot;&gt;https://github.com/HewlettPackard/oneview-sdk-java&lt;/a&gt;), and this SDK
simplifies many things when doing Java programming by creating a
lightweight level of abstraction between your Java code and the HPE
OneView API. Most importantly, it parses the resulting JSON into
simple Java properties for you. We will cover
this in a forthcoming article.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[First step with programming the HPE Composable API]]></title><description><![CDATA[A bit of history In December 2015, in London, HPE announced Synergy, the first platform
architected from the ground up for composability…]]></description><link>https://developer.hpe.com/first-step-with-programming-the-hpe-composable-api/</link><guid isPermaLink="false">https://developer.hpe.com/first-step-with-programming-the-hpe-composable-api/</guid><pubDate>Fri, 08 Sep 2017 16:36:19 GMT</pubDate><content:encoded>&lt;h1&gt;A bit of history&lt;/h1&gt;
&lt;p&gt;In December 2015, in London, HPE announced Synergy, the first platform
architected from the ground up for composability. Earlier that year, HPE
had announced the Composable API, and the Composable ecosystem. This
ecosystem is now growing because more and more software partners have
expressed interest in integrating with the Composable API. Some have
already demonstrated products, some have products in development, but
all of them are using the same Composable API, also referred to as the
HPE OneView API. This API allows HPE OneView and the Synergy Composer
(powered by HPE OneView) to be controlled by another software entity,
thus it is an essential step in building a software-defined
infrastructure. The controlling software entity might be as simple as a
small script (in PowerShell or Python), it might also be a configuration
management system, such as Chef, Ansible or any kind of software that
needs to interact with the underlying infrastructure (VMware vCenter,
Microsoft System Center, Docker Machine, etc.). In all cases an API is a
prerequisite for a software defined infrastructure and the growing
ecosystem around it. HPE OneView and the Synergy Composer are no
exception; they have shipped with an API since v1.0 in September 2013.&lt;/p&gt;
&lt;h1&gt;REST API in 30 seconds&lt;/h1&gt;
&lt;p&gt;There are many types of APIs, but the most widely used API in modern
software, is called REST (for Representational State Transfer). A REST
API (or RESTful API as it is often referred to) has a number of design
principles:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;It uses HTTP and HTTPS as a communication protocol&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;It uses a very simple set of HTTP verbs to initiate actions (PUT,
GET, POST, DELETE, PATCH)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Objects are represented by their URI (Uniform Resource Identifier)&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The following are examples of REST calls:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;POST &lt;a href=&quot;https://oneviewsrv.cilab.net/rest/login-sessions&quot;&gt;https://oneviewsrv.cilab.net/rest/login-sessions&lt;/a&gt; (means: apply
for an identity token)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;GET &lt;a href=&quot;https://oneviewsrv.cilab.net/rest/alerts&quot;&gt;https://oneviewsrv.cilab.net/rest/alerts&lt;/a&gt; (means: get the list
of alerts)&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;More generically, we can see that any call is in the form of:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Verb protocol://server-end-point/URI&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Where &lt;em&gt;Verb&lt;/em&gt; can be one of the following, and its action is applied to
the provided URI, over the specified protocol at a given server
endpoint:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;GET: to read the value of a resource&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;PUT: to change the value of a resource&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;POST: to create a new instance of a resource&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;DELETE: to remove an instance of a resource&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;PATCH: to update values of a resource.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note: PATCH is part of the newest additions to the HTTP protocol and
it is not yet fully supported by the Composable API&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h1&gt;RTFM: Read The Fantastic Manuals&lt;/h1&gt;
&lt;p&gt;Ok, enough talking! Let&apos;s try to make our first call to the HPE
Composable API. Where do we start? Well, the API is fully described in
the product’s online Help, and it is also available online at
&lt;a href=&quot;http://www.hpe.com/info/oneview/docs&quot;&gt;http://www.hpe.com/info/oneview/docs&lt;/a&gt;. There, you will find a
downloadable version of the document, which is nice to have on your
laptop as it presents itself as a set of HTML pages that is easy to
navigate. As an example, once you open the help page, you can search for
&quot;version&quot; and you will find something like this:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/progapi-1-1504889058235.png&quot; alt=&quot;Response of get API version REST call&quot;&gt;&lt;/p&gt;
&lt;p&gt;There is a page like this for each single API call available in HPE
OneView. From the information provided, we can see that a call to GET
/rest/version will return the versions supported by the API. Let’s use
this for our first interaction with the HPE OneView API. If you scroll
down a bit in the help page, you will also find what kind of response is
expected:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/progapi-2-1504889064687.png&quot; alt=&quot;Response of get API version REST call&quot;&gt;&lt;/p&gt;
&lt;p&gt;We can see that this API returns a Content-Type of JSON (JavaScript
Object Notation). This is a very popular format used to exchange data
with an API. It is similar to XML, just a little more human readable.
Many APIs are moving away from XML as the exchange format in favor
of JSON. HPE OneView uses JSON for input parameters and responses.
There are many web sites for you to learn about JSON, but I have found
this online parser very helpful to check the syntax of a JSON payload
before using it: &lt;a href=&quot;http://json.parser.online.fr/&quot;&gt;http://json.parser.online.fr/&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;Pick a REST Tool&lt;/h1&gt;
&lt;p&gt;So now we have all the information we need to place our first HPE
OneView API call to retrieve the version numbers. However, if you are
new to REST programming, you might ask yourself: how do I place a REST
call to an API without writing any code? There are a number of simple
solutions for this. For example, there are several browser plug-ins
available and I am sure you will find one for your favorite browser
(HttpRequester for Firefox, POSTman for Chrome, just to name a few). My
favorite is &lt;strong&gt;HttpRequester&lt;/strong&gt; for Firefox and it looks like this:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/progapi-3-1504889071823.png&quot; alt=&quot;HttpRequester tool for Firefox&quot;&gt;&lt;/p&gt;
&lt;p&gt;If you prefer Chrome, then &lt;strong&gt;POSTman&lt;/strong&gt; would look like this:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/progapi-4-1504889079190.png&quot; alt=&quot;POSTman for Google Chrome&quot;&gt;&lt;/p&gt;
&lt;h1&gt;Let&apos;s go!&lt;/h1&gt;
&lt;p&gt;Both are very easy to install and intuitive to use, but I will continue
to use HttpRequester in the rest of this document. So let’s fill out the
following fields:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Field&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;th&gt;Try it here&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;URL&lt;/td&gt;
&lt;td&gt;https://{HPEOneViewappliance}/rest/version&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://213.30.139.22:37441/rest/version&quot;&gt;https://213.30.139.22:37441/rest/version&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Verb&lt;/td&gt;
&lt;td&gt;GET&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Header&lt;/td&gt;
&lt;td&gt;Accept=application/json&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;&lt;strong&gt;Note: Before you start, make sure you open a browser connection to
your HPE OneView API at least once and accept the self-signed
certificate. Otherwise HttpRequester will not work.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;There is no request payload to worry about for the GET /rest/version
call. You can specify an HTTP Header to explicitly tell the API that we
expect a response in JSON; you might get an XML response otherwise. When
ready, press the &lt;strong&gt;Submit&lt;/strong&gt; button.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/progapi-5-1504889085855.png&quot; alt=&quot;Retrieve the oneview version using HttpRequester for firefox&quot;&gt;&lt;/p&gt;
&lt;p&gt;HttpRequester will contact the URL specified and call GET version from
the API. In the Response section, as illustrated above, you want to
check the HTTP status code. A value of 200 means it was successful.
Next, you can check the Response body formatted as JSON, to find out
about the version. This response will have to be parsed to retrieve the
currentVersion value. But we can read pretty easily that
currentVersion=200 (meaning HPE OneView V2.0).&lt;/p&gt;
&lt;p&gt;Congratulations! You have successfully placed your first call to the HPE
OneView API. As you have seen, GET version does not require any
authentication. In fact, it is one of the few calls you can place
without having to provide proper authentication. In subsequent articles,
we will dive deeper into the API and learn about how to authenticate and
retrieve information about the HPE OneView resources.&lt;/p&gt;
&lt;p&gt;Although this example might seem very basic, any integration with HPE
OneView should always start by querying the API versions supported;
therefore, this is always going to be your very first step.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Do not miss the HPE Composable Infrastructure Bus]]></title><description><![CDATA[One API and 2 Busses In previous articles, we discussed how to get started with programming
the HPE Composable Infrastructure API. We…]]></description><link>https://developer.hpe.com/do-not-miss-the-hpe-composable-infrastructure-bus/</link><guid isPermaLink="false">https://developer.hpe.com/do-not-miss-the-hpe-composable-infrastructure-bus/</guid><pubDate>Thu, 07 Sep 2017 18:09:49 GMT</pubDate><content:encoded>&lt;h1&gt;One API and 2 Busses&lt;/h1&gt;
&lt;p&gt;In previous articles, we discussed how to get started with programming
the HPE Composable Infrastructure API. We reviewed some of the concept
on the versioning, the authentication and the discovery of the
infrastructure, via a REST API approach. Well, there is an additional
component to the HPE Composable Infrastructure, which we have not yet
discussed called the Message Bus. Actually, there are two of these
busses. One is called the State Change Message Bus (SCMB) and the second
one is called the Metrics Streaming Message Bus (MSMB).&lt;/p&gt;
&lt;h1&gt;So what is a message bus?&lt;/h1&gt;
&lt;p&gt;Before we look at the specifics of the HPE Composable Infrastructure
message busses, let us look at what a message bus is. A message bus is a
programming concept which allows for the exchange of data between a
producer and one or more consumers in a guaranteed, asynchronous
manner. This concept has evolved over time and there is now a well-known
standard for it, called AMQP (Advanced Message Queuing Protocol). Most
complex software solutions, which need to communicate asynchronously
between two components, use AMQP to do so nowadays. In this technology
we can identify a Producer (in our case it is going to be our HPE
Composable Infrastructure brain, HPE OneView), which generates and puts
data (called messages), on different message queues (via an Exchange),
and then we can have any number of consumers, which have declared
themselves as subscribers of a given queue. We call the sum of these
message queues a Message Bus. Consumers can be any program implementing AMQP.
The most recognized implementation of AMQP is called RabbitMQ. There
are RabbitMQ server side components and client side components. For
example, HPE OneView has implemented the server side of the message bus
using RabbitMQ.&lt;/p&gt;
&lt;p&gt;The following picture illustrates the concept of producer/consumer and
message bus:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/messabus-1-1504810187875.png&quot; alt=&quot;Illustrate the concept of producer/consumer and message bus&quot;&gt;&lt;/p&gt;
&lt;h1&gt;HPE OneView Message Bus? What for?&lt;/h1&gt;
&lt;p&gt;HPE OneView uses message busses internally to exchange data between
its own components. It also exposes two of these busses externally and
allows client applications to subscribe to them and retrieve data from
those asynchronous queues. For example, you can, as an external
application, subscribe to all events related to the creation of new
networks in HPE Composable Infrastructure. If you followed my previous
articles, and continued exploring the HPE Composable infrastructure API,
you probably know by now, that there is a call to retrieve network
configuration (/rest/networks). So one way (let&apos;s call it the &quot;API way&quot;)
to achieve this would be with the following algorithm:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;While true Do
	Query for the list of networks using /rest/networks
	Compare with the old list of networks
	If the new list of networks is bigger than the old list
	Then compute the delta
		For each network in the delta: do-work-on-new-network(network)
	EndIf
	Sleep 15 min
EndWhile
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We call this technique interval polling. It works fine but it has
several drawbacks:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;It&apos;s not real time; if a new network is created, you might only become
aware of it, in the worst case, 15 minutes later&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;It&apos;s pulling a lot of data from the API, especially if you reduce
the polling interval to be more accurate, or if you are polling
multiple resource types&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;It requires storing the state of the networks in memory (this can
grow big)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;It requires a lot of computation to get the delta between the old
networks list and the new networks list&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In most cases the delta will be empty and you will have wasted a
lot of time and energy polling and computing for nothing&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;A better approach would be using a message bus (in our case HPE
OneView&apos;s SCMB) and the algorithm is the following:&lt;/p&gt;
&lt;p&gt;Subscribe to SCMB(Events of Interest= Network.Create,
Callback=Do-Work-On-New-Network)&lt;/p&gt;
&lt;p&gt;That&apos;s it! All you really need to do is to tell RabbitMQ that you want
to call Do-Work-On-New-Network whenever a new Network is created on the
Composable Infrastructure by the HPE OneView Web interface or by its
REST API.&lt;/p&gt;
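&lt;p&gt;To make this a little more concrete, here is a minimal sketch of such a
subscriber written with the RabbitMQ Java client. The appliance address is
a placeholder, and the SSL setup is deliberately simplified: a real HPE
OneView appliance expects the client keypair and root CA certificate
described in the Security section below, loaded into an SSLContext and
passed to useSslProtocol().&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.Envelope;

public class ScmbListener {
	public static void main(String[] args) throws Exception {
		ConnectionFactory factory = new ConnectionFactory();
		factory.setHost(&quot;oneview.example.net&quot;); // placeholder appliance address
		factory.setPort(5671);                  // assumed AMQP-over-SSL port of the appliance
		// Simplified SSL setup for this sketch; a real appliance requires the client
		// keypair and CA certificate (see the Security section) in an SSLContext
		factory.useSslProtocol();

		Connection connection = factory.newConnection();
		Channel channel = connection.createChannel();

		// Create a private queue and bind it to the scmb exchange with our routing key
		String queue = channel.queueDeclare().getQueue();
		channel.queueBind(queue, &quot;scmb&quot;, &quot;scmb.ethernet-networks.Created.#&quot;);

		channel.basicConsume(queue, true, new DefaultConsumer(channel) {
			@Override
			public void handleDelivery(String consumerTag, Envelope envelope,
					AMQP.BasicProperties properties, byte[] body) {
				// This is where Do-Work-On-New-Network would go; here we just print the message
				System.out.println(new String(body));
			}
		});
	}
}
&lt;/code&gt;&lt;/pre&gt;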
&lt;p&gt;In fact that is exactly what some of our ecosystem partners have done.
For example, Arista has implemented such a technique to allow
top-of-rack ARISTA switches to detect and react to new networks created
in HPE OneView. In this case, the reaction is to automatically propagate
the VLAN configuration to the top-of-rack switches. HPE Network Management
Tool iMC v7.2 also implemented the same type of automation.&lt;/p&gt;
&lt;h1&gt;Routing Keys&lt;/h1&gt;
&lt;p&gt;When we subscribe to a message bus, we have to specify what we called in
the previous chapter: &quot;Events of Interest&quot;. In HPE OneView, this is
called a routing key. It specifies the type of messages you are
interested in and want to subscribe to. Routing keys are used by
the Exchange to drop messages in the right consumer queues. The generic
form of a routing key is:&lt;/p&gt;
&lt;p&gt;&amp;#x3C;exchange-name&gt;.&amp;#x3C;resource-category&gt;.&amp;#x3C;change-type&gt;.&amp;#x3C;resource-uri&gt;&lt;/p&gt;
&lt;p&gt;With:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;exchange-name: scmb or msmb&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;resource-category: any valid resource type from HPE OneView&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;change-type: Created, Deleted or Updated&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;resource-uri: any valid resource URI&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The following table shows a number of valid routing keys for the SCMB:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Routing Key&lt;/th&gt;
&lt;th&gt;Meaning&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;scmb.#&lt;/td&gt;
&lt;td&gt;all messages&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;scmb.tasks.#&lt;/td&gt;
&lt;td&gt;all messages about any tasks&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;scmb.ethernet-networks.#&lt;/td&gt;
&lt;td&gt;all messages about any Ethernet networks&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;scmb.fc-networks.#&lt;/td&gt;
&lt;td&gt;all messages about any FibreChannel networks&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;scmb.interconnects.#&lt;/td&gt;
&lt;td&gt;all messages about any interconnects&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;scmb.enclosures.#&lt;/td&gt;
&lt;td&gt;all messages about any enclosures&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;scmb.*.Created.#&lt;/td&gt;
&lt;td&gt;only create messages for any type of resources&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;scmb.server-hardware.#&lt;/td&gt;
&lt;td&gt;all messages about any server hardware&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;scmb.connections.Deleted.#&lt;/td&gt;
&lt;td&gt;only delete messages for any connections&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;scmb.enclosures.Updated./rest/enclosures/Encl1&lt;/td&gt;
&lt;td&gt;only update messages from Encl1&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Note: routing keys are case sensitive. * means anything for a given
field, while # means anything for the rest of the string. So scmb.# is
the same as scmb.*.*.*&lt;/p&gt;
&lt;p&gt;The MSMB only accepts one routing key: msmb.#&lt;/p&gt;
&lt;p&gt;The following picture shows routing keys in action with three different
partner applications subscribing to different types of messages from the
HPE Composable Infrastructure.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/messabus-2-1504810196244.png&quot; alt=&quot;routing keys in action with three different partner application subscribing to different types of messages&quot;&gt;&lt;/p&gt;
&lt;h1&gt;Security&lt;/h1&gt;
&lt;p&gt;Security of this mechanism is implemented through SSL. In order to start
using the SCMB or the MSMB, a client application needs to use the REST
API to:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Create the RabbitMQ keypair, if not already created with&lt;/p&gt;
&lt;p&gt;POST /rest/certificates/client/rabbitmq and a body of
{&quot;type&quot;:&quot;RabbitMqClientCertV2&quot;,&quot;commonName&quot;:&quot;default&quot;}&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Download the necessary certificate and private key using: GET
/rest/certificates/client/rabbitmq/keypair/default&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Separate and save the client certificate and the private key
(procedure depends on what language is used by application)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Download the HPE Composable Infrastructure Appliance root CA
certificate using GET /rest/certificates/ca&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Store the root ca certificate in appropriate store (depends on
language used)&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Note: you can use DELETE /rest/certificates/ca/rabbitmq_readonly and
then POST /rest/certificates/client/rabbitmq to regenerate a new client
certificate and private key&lt;/p&gt;
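&lt;p&gt;As a hedged illustration of steps 1, 2 and 4, the sketch below issues
those three REST calls with the same plain Java approach used earlier in
this series. The appliance address and sessionID are placeholders, and the
responses are simply printed; splitting and storing the certificate and
private key (steps 3 and 5) is left out.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class ScmbCertificates {
	// Placeholders: appliance address and a valid sessionID (see the authentication article)
	static final String APPLIANCE = &quot;oneview.example.net&quot;;
	static final String SESSION_ID = &quot;your-sessionID-goes-here&quot;;

	public static void main(String[] args) throws Exception {
		// Step 1: create the RabbitMQ keypair, if not already created
		call(&quot;POST&quot;, &quot;/rest/certificates/client/rabbitmq&quot;,
				&quot;{\&quot;type\&quot;:\&quot;RabbitMqClientCertV2\&quot;,\&quot;commonName\&quot;:\&quot;default\&quot;}&quot;);
		// Step 2: download the client certificate and private key
		call(&quot;GET&quot;, &quot;/rest/certificates/client/rabbitmq/keypair/default&quot;, null);
		// Step 4: download the appliance root CA certificate
		call(&quot;GET&quot;, &quot;/rest/certificates/ca&quot;, null);
	}

	static void call(String verb, String uri, String body) throws Exception {
		HttpURLConnection conn =
				(HttpURLConnection) new URL(&quot;https://&quot; + APPLIANCE + uri).openConnection();
		conn.setRequestMethod(verb);
		conn.setRequestProperty(&quot;Accept&quot;, &quot;application/json&quot;);
		conn.setRequestProperty(&quot;Content-Type&quot;, &quot;application/json&quot;);
		conn.setRequestProperty(&quot;X-API-Version&quot;, &quot;200&quot;);
		conn.setRequestProperty(&quot;auth&quot;, SESSION_ID);
		if (body != null) {
			conn.setDoOutput(true);
			OutputStream os = conn.getOutputStream();
			os.write(body.getBytes());
			os.flush();
		}
		// Print the raw JSON response for each call
		BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
		String line;
		while ((line = in.readLine()) != null) {
			System.out.println(line);
		}
		in.close();
	}
}
&lt;/code&gt;&lt;/pre&gt;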
&lt;h1&gt;Message Bus messages attributes&lt;/h1&gt;
&lt;p&gt;Once the queue is set up and the application is &quot;listening&quot;, it will receive
messages in JSON. We can find the format of the messages received from each
message bus from the online documentation,
&lt;a href=&quot;http://h17007.www1.hpe.com/docs/enterprise/servers/oneview2.0/cic-rest/en/content/c_using-msg-bus.html&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The following is an example of an SCMB message:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{&quot;data&quot;:null,
&quot;timestamp&quot;:&quot;2016-07-04T15:26:02.109Z&quot;,
&quot;resourceUri&quot;:&quot;/rest/ethernet-networks/e5facce5-bfe9-4ce2-b793-f0e1d907d620&quot;,
&quot;eTag&quot;:&quot;3a7f25c9-9bb0-4405-8d56-7600447e3411&quot;,
&quot;changeType&quot;:&quot;Created&quot;,
&quot;newState&quot;:&quot;Active&quot;,
&quot;associatedTask&quot;:&quot;/rest/tasks/C15F02F8-CE5E-47C9-A090-CB8CFF3F158A&quot;,
&quot;newSubState&quot;:null,
&quot;userInitiatedTask&quot;:false,
&quot;changedAttributes&quot;:null,
&quot;resource&quot;:{&quot;type&quot;:&quot;ethernet-networkV2&quot;,
&quot;vlanId&quot;:2112,
&quot;ethernetNetworkType&quot;:&quot;Tagged&quot;,
&quot;internalVlanId&quot;:0,
&quot;smartLink&quot;:true,
&quot;connectionTemplateUri&quot;:&quot;/rest/connection-templates/0a9ecfd2-bfc7-450b-b4cf-48d7bf1044f2&quot;,
&quot;purpose&quot;:&quot;General&quot;,
&quot;privateNetwork&quot;:false,
&quot;name&quot;:&quot;DemoNetwork&quot;,
&quot;state&quot;:&quot;Active&quot;,
&quot;description&quot;:null,
&quot;status&quot;:&quot;OK&quot;,
&quot;category&quot;:&quot;ethernet-networks&quot;,
&quot;eTag&quot;:&quot;3a7f25c9-9bb0-4405-8d56-7600447e3411&quot;,
&quot;modified&quot;:&quot;2016-07-04T15:26:02.050Z&quot;,
&quot;created&quot;:&quot;2016-07-04T15:26:02.049Z&quot;,
&quot;uri&quot;:&quot;/rest/ethernet-networks/e5facce5-bfe9-4ce2-b793-f0e1d907d620&quot;}}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note: Because it is not using a REST API technique, Message Bus JSON
messages are always sent using the current API version of the appliance&lt;/p&gt;
&lt;h3&gt;SCMB vs MSMB&lt;/h3&gt;
&lt;p&gt;As we have seen, the SCMB is used to get information about changes made
in the HPE Composable Infrastructure. It was the first bus to be exposed
by the HPE Composable Infrastructure and opened to partner consumption.
Since HPE OneView 2.0, a new bus was introduced called MSMB, for Metrics
Streaming Message Bus. This bus is used to get regular consumption
values for the different resources of the HPE Composable
Infrastructure such as power devices, enclosures and server hardware.&lt;/p&gt;
&lt;p&gt;SCMB Consumer applications should always select their routing key with
care and make sure that they can consume messages efficiently, and pick
them up as fast as the appliance puts them on the queues.&lt;/p&gt;
&lt;h1&gt;Putting it all together&lt;/h1&gt;
&lt;p&gt;We have built a demo &quot;consumer&quot; application to browse the HPE OneView
Message Busses, so let us start it up and, as a simple example, subscribe
to all Ethernet network changes.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/messabus-3-1504810203042.png&quot; alt=&quot;demo &amp;#x22;consumer&amp;#x22; application to browse the Ethernet network changes of HPE OneView Message Busses&quot;&gt;&lt;/p&gt;
&lt;p&gt;Then let us use the HPE OneView web console to create a new Ethernet
network:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/messabus-4-1504810211206.png&quot; alt=&quot;HPE OneView web console to create a new Ethernet network&quot;&gt;&lt;/p&gt;
&lt;p&gt;Back on the MessageBus Explorer, we can see that it has been notified of
that change in the HPE Composable Infrastructure and can potentially
take the necessary actions. In our case, it simply displays the details
of the messages in a list.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/messabus-5-1504810217256.png&quot; alt=&quot;displays the modified or created or updated details of the Oneview messages in a list.&quot;&gt;&lt;/p&gt;
&lt;p&gt;We can select to view the JSON details, which displays the following
content:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{&quot;data&quot;:null,
&quot;timestamp&quot;:&quot;2016-07-04T15:26:02.109Z&quot;,
&quot;resourceUri&quot;:&quot;/rest/ethernet-networks/e5facce5-bfe9-4ce2-b793-f0e1d907d620&quot;,
&quot;eTag&quot;:&quot;3a7f25c9-9bb0-4405-8d56-7600447e3411&quot;,
&quot;changeType&quot;:&quot;Created&quot;,
&quot;newState&quot;:&quot;Active&quot;,
&quot;associatedTask&quot;:&quot;/rest/tasks/C15F02F8-CE5E-47C9-A090-CB8CFF3F158A&quot;,
&quot;newSubState&quot;:null,
&quot;userInitiatedTask&quot;:false,
&quot;changedAttributes&quot;:null,
&quot;resource&quot;:{&quot;type&quot;:&quot;ethernet-networkV2&quot;,
&quot;vlanId&quot;:2112,
&quot;ethernetNetworkType&quot;:&quot;Tagged&quot;,
&quot;internalVlanId&quot;:0,
&quot;smartLink&quot;:true,
&quot;connectionTemplateUri&quot;:&quot;/rest/connection-templates/0a9ecfd2-bfc7-450b-b4cf-48d7bf1044f2&quot;,
&quot;purpose&quot;:&quot;General&quot;,
&quot;privateNetwork&quot;:false,
&quot;name&quot;:&quot;DemoNetwork&quot;,
&quot;state&quot;:&quot;Active&quot;,
&quot;description&quot;:null,
&quot;status&quot;:&quot;OK&quot;,
&quot;category&quot;:&quot;ethernet-networks&quot;,
&quot;eTag&quot;:&quot;3a7f25c9-9bb0-4405-8d56-7600447e3411&quot;,
&quot;modified&quot;:&quot;2016-07-04T15:26:02.050Z&quot;,
&quot;created&quot;:&quot;2016-07-04T15:26:02.049Z&quot;,
&quot;uri&quot;:&quot;/rest/ethernet-networks/e5facce5-bfe9-4ce2-b793-f0e1d907d620&quot;}}
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;Summary&lt;/h1&gt;
&lt;p&gt;Message Bus is a very powerful and standard method to integrate with an
ecosystem. In the HPE Composable Infrastructure ecosystem, it provides a
very complementary approach to the existing HPE Composable
Infrastructure REST API, and allows partner applications to integrate
tighter and more efficiently with HPE Composable Infrastructure.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Curling through the OneView API]]></title><description><![CDATA[In previous articles, we used a REST client plug-in to Firefox or Chrome
to exercise the HPE Composable Infrastructure API. This is nice…]]></description><link>https://developer.hpe.com/curling-through-the-oneview-api/</link><guid isPermaLink="false">https://developer.hpe.com/curling-through-the-oneview-api/</guid><pubDate>Thu, 07 Sep 2017 17:33:30 GMT</pubDate><content:encoded>&lt;p&gt;In previous articles, we used a REST client plug-in to Firefox or Chrome
to exercise the HPE Composable Infrastructure API. This is nice, for
discovery and understanding of the API, but the next step toward a
software-defined infrastructure is to be able to automate those actions,
in a script. One possible approach to writing automation scripts using a
REST API such as the HPE Composable Infrastructure API, is to use a tool
like curl, and open source command line tool able to exchange data with
a web application or API using an URL syntax. cURL (Command-line URL) is
available for just about any device and any operating system, and
actually used by millions of users. More on cURL from
&lt;a href=&quot;https://curl.haxx.se&quot;&gt;https://curl.haxx.se&lt;/a&gt;.&lt;/p&gt;
&lt;h1&gt;cURL syntax 101&lt;/h1&gt;
&lt;p&gt;Let us remember how we described a REST call in &lt;a href=&quot;https://community.dev.hpe.com/t5/Blogs/First-steps-with-programming-the-HPE-Composable-Infrastructure/ba-p/235724&quot;&gt;our first
article&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Verb protocol://server-end-point/URI&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;With some examples such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;POST &lt;a href=&quot;https://oneviewsrv.cilab.net/rest/login-sessions&quot;&gt;https://oneviewsrv.cilab.net/rest/login-sessions&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;GET &lt;a href=&quot;https://oneviewsrv.cilab.net/rest/alerts&quot;&gt;https://oneviewsrv.cilab.net/rest/alerts&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;So how do these calls translate in cURL?&lt;/p&gt;
&lt;p&gt;We can call cURL with no options to see the syntax: &lt;strong&gt;curl
[options...] &amp;#x3C;url&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;There are many available options but let us focus on the most important
ones:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Option&lt;/th&gt;
&lt;th&gt;Meaning&lt;/th&gt;
&lt;th&gt;Example&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;H&lt;/td&gt;
&lt;td&gt;HTTP Header&lt;/td&gt;
&lt;td&gt;-H &quot;accept: application/json&quot;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;-H &quot;content-type: application/json&quot;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;d&lt;/td&gt;
&lt;td&gt;data or payload&lt;/td&gt;
&lt;td&gt;-d &apos;{&quot;userName&quot;:&quot;Administrator&quot;,&quot;password&quot;:&quot;password&quot;}&apos;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;command&lt;/td&gt;
&lt;td&gt;-X GET &lt;a href=&quot;https://213.30.139.22:37441/rest/version&quot;&gt;https://213.30.139.22:37441/rest/version&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;-X POST &lt;a href=&quot;https://213.30.139.22:37441/rest/login-sessions&quot;&gt;https://213.30.139.22:37441/rest/login-sessions&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;k&lt;/td&gt;
&lt;td&gt;insecure&lt;/td&gt;
&lt;td&gt;bypasses SSL certificate validation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;i&lt;/td&gt;
&lt;td&gt;include HEADER in response&lt;/td&gt;
&lt;td&gt;useful to check the response status code, but not used when parsing JSON result&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;x&lt;/td&gt;
&lt;td&gt;use a proxy&lt;/td&gt;
&lt;td&gt;-x &lt;a href=&quot;http://mycompanyproxy.com:8888&quot;&gt;http://mycompanyproxy.com:8888&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;So let us try!&lt;/p&gt;
&lt;p&gt;On a Linux machine with Internet access, type the following command,
which will retrieve the version of the API.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-postscript&quot;&gt;curl -i -k -H &quot;accept: application/json&quot; \\
-X GET https://213.30.139.22:37441/rest/version
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/curl-1-1504806712277.png&quot; alt=&quot;Curl command to retrieve the version of the  API.&quot;&gt;&lt;/p&gt;
&lt;h1&gt;Parsing JSON&lt;/h1&gt;
&lt;p&gt;We can see in the last line of the above example that the response is
provided in JSON (we asked for it with our accept HTTP header). So in
order to write powerful scripts, we will need a mechanism to parse the
JSON responses to extract the fields that we need. We have found this
convenient open source tool called jq (command line JSON processor)
available from &lt;a href=&quot;http://stedolan.github.io/jq/download/linux64/jq&quot;&gt;http://stedolan.github.io/jq/download/linux64/jq&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Once installed you can use it like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-postscript&quot;&gt;curl -k -H &quot;accept: application/json&quot; \\
-X GET https://213.30.139.22:37441/rest/version | jq -r &quot;.&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/curl-2-1504806725594.png&quot; alt=&quot;command line JSON processor to parse the JSON&quot;&gt;&lt;/p&gt;
&lt;p&gt;This will just pretty print the JSON response, but you can also extract
a field, for example to retrieve the currentVersion.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-postscript&quot;&gt;curl -k -H &quot;accept: application/json&quot; \\
-X GET https://213.30.139.22:37441/rest/version | jq -r &quot;.currentVersion&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/curl-3-1504806732409.png&quot; alt=&quot;Retrieve API version using command line JSON parser&quot;&gt;&lt;/p&gt;
&lt;p&gt;So let us pretend we would like to capture the currentVersion in a variable
within a shell script; we could do the following:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-postscript&quot;&gt;# Retrieve API version
currentVersion=$(curl -k -H &quot;accept: application/json&quot; -X GET
https://213.30.139.22:37441/rest/version | jq -r &quot;.currentVersion&quot;)

echo $currentVersion
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/curl-4-1504806739646.png&quot; alt=&quot;retrieve current version of API using open source jq tool&quot;&gt;&lt;/p&gt;
&lt;h1&gt;Getting a Session token&lt;/h1&gt;
&lt;p&gt;Let us now apply this technique to login to our HPE OneView Appliance
and start managing our HPE Composable Infrastructure. As we have already
seen in previous articles, we need to send a POST to the
/rest/login-sessions API.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-postscript&quot;&gt;curl -k -H &quot;accept: application/json&quot; -H &quot;content-type:
application/json&quot; \\
-d &apos;{&quot;userName&quot;:&quot;Administrator&quot;,&quot;password&quot;:&quot;password&quot;}&apos; \\
-X POST https://213.30.139.22:37441/rest/login-sessions | jq -r &quot;.&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/curl-5-1504806745891.png&quot; alt=&quot;Getting a Session token using POST Curl command&quot;&gt;&lt;/p&gt;
&lt;p&gt;We can then extract the sessionID into a variable using the following
syntax:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-postscript&quot;&gt;sessionID=$(curl -k -H &quot;accept: application/json&quot; -H &quot;content-type:
application/json&quot; \\
-d &apos;{&quot;userName&quot;:&quot;Administrator&quot;,&quot;password&quot;:&quot;password&quot;}&apos; \\
-X POST https://213.30.139.22:37441/rest/login-sessions | jq -r &quot;.sessionID&quot;)

echo $sessionID
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/curl-6-1504806754438.png&quot; alt=&quot;extract the sessionID in a variable&quot;&gt;&lt;/p&gt;
&lt;h1&gt;Putting it all together&lt;/h1&gt;
&lt;p&gt;Now that we have retrieved the API version (currentVersion) and a login
session (sessionID), we are ready to explore the full HPE Composable
Infrastructure API with a simple shell script. For example, let us
enumerate the models of the servers managed in this environment:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-postscript&quot;&gt;curl -k -H &quot;accept: application/json&quot; -H &quot;content-type:
application/json&quot; \\
-H &quot;x-api-version: $currentVersion&quot; -H &quot;auth: $sessionID&quot; \\
-X GET https://213.30.139.22:37441/rest/server-hardware \\
| jq -r &quot;.members[].model&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/curl-7-1504806761092.png&quot; alt=&quot;enumerate the models of the servers managed in this environment&quot;&gt;&lt;/p&gt;
&lt;h1&gt;cURL on Linux and Windows&lt;/h1&gt;
&lt;p&gt;But what about Windows? Some of you might ask. The good news is that
everything we have just shown here, can be done on Windows. If you use
cygwin, the cURL package is part of the default distribution and jq can
be easily compiled from sources located at
&lt;a href=&quot;https://stedolan.github.io/jq/download/&quot;&gt;https://stedolan.github.io/jq/download/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Another option is to install jq and cURL for Windows, and in that case,
you will have to perform a few syntax changes to the examples we showed
above.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;cURL is available from:
&lt;a href=&quot;http://winampplugins.co.uk/curl/curl_7_48_0_openssl_nghttp2_x64.7z&quot;&gt;http://winampplugins.co.uk/curl/curl_7_48_0_openssl_nghttp2_x64.7z&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;jq is available from:
&lt;a href=&quot;https://github.com/stedolan/jq/releases/download/jq-1.5/jq-win64.exe&quot;&gt;https://github.com/stedolan/jq/releases/download/jq-1.5/jq-win64.exe&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Once curl and jq are installed and in the default path, we can then
execute the following command to retrieve a sessionID:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-postscript&quot;&gt;curl -k -H &quot;accept: application/json&quot; -H &quot;content-type:
application/json&quot; -X POST
https://213.30.139.22:37441/rest/login-sessions -d
&quot;{\\&quot;userName\\&quot;:\\&quot;Administrator\\&quot;,\\&quot;password\\&quot;:\\&quot;password\\&quot;}&quot; |
jq -r &quot;.sessionID&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/curl-8-1504806767303.png&quot; alt=&quot;Curl command to retrieve a sessionID using jq tool&quot;&gt;&lt;/p&gt;
&lt;p&gt;However, if you really need to store it in a variable in the Windows
command interpreter, things get a little more complicated:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-postscript&quot;&gt;for /f &quot;delims=&quot; %a in (&apos;curl -k -H &quot;accept: application/json&quot; -H &quot;content-type: application/json&quot; -X POST https://213.30.139.22:37441/rest/login-sessions -d &quot;{\&quot;userName\&quot;:\&quot;Administrator\&quot;,\&quot;password\&quot;:\&quot;password\&quot;}&quot; ^| jq -r &quot;.sessionID&quot;&apos;) do @set sessionID=%a

echo %sessionID%
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/curl-9-1504806773156.png&quot; alt=&quot;store session id into a variable in Windows command interpreter&quot;&gt;&lt;/p&gt;
&lt;h1&gt;And the last good news is…&lt;/h1&gt;
&lt;p&gt;So I asked early on: how do these calls translate into cURL? Here is the
last piece of good news: POSTman can help you with that. Imagine you
have been experimenting with a GET in POSTman, and you would like to run
the same GET from cURL in a script.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/curl-10-1504806780133.png&quot; alt=&quot;POSTman can help to generate Curl code&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can use the &lt;strong&gt;Generate Code&lt;/strong&gt; button in the upper right corner and
select cURL from the drop down to get the right string ready to paste in
your own script.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/curl-11-1504806788067.png&quot; alt=&quot;Generated Curl code from POSTman REST tool&quot;&gt;&lt;/p&gt;
&lt;h1&gt;What is next?&lt;/h1&gt;
&lt;p&gt;Scripting with cURL might solve a number of use cases, and is somewhat
portable across Linux and Windows, but it does not make for the most
convenient and readable scripts. In the next articles, we will cover how to
script against the HPE Composable Infrastructure API using Microsoft
PowerShell and Python.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Automation of Support Pack for ProLiant in HPE OneView]]></title><description><![CDATA[Support Pack for ProLiant (SPP) is the official HPE distribution of
updates to the ProLiant family of server. It is an ISO image, which…]]></description><link>https://developer.hpe.com/automation-of-support-pack-for-proliant-in-hpe-oneview/</link><guid isPermaLink="false">https://developer.hpe.com/automation-of-support-pack-for-proliant-in-hpe-oneview/</guid><pubDate>Thu, 07 Sep 2017 03:17:17 GMT</pubDate><content:encoded>&lt;p&gt;Support Pack for ProLiant (SPP) is the official HPE distribution of
updates to the ProLiant family of servers. It is an ISO image, which
customers typically download from the HPE website. Once locally available, it
can be mounted on a ProLiant server and booted from, in order to update
the system software of the server such as the BIOS, component firmware
and OS drivers. It can also be used in combination with HPE Smart Update
Manager (HPSUM). The releases are named after their release date; for
example, SPP2016.10.0.iso is an SPP released in October 2016. HPE
OneView also offers the capability of leveraging this ISO image, by
loading it as a new &quot;baseline&quot; for automatic systems updates. This
article describes how to script this operation. More information is available on the SPP web site:
&lt;a href=&quot;http://h17007.www1.hpe.com/us/en/enterprise/servers/products/service%5C_pack/spp/index.aspx&quot;&gt;http://h17007.www1.hpe.com/us/en/enterprise/servers/products/service\_pack/spp/index.aspx&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;Managing servers at scale&lt;/h1&gt;
&lt;p&gt;SPP ISO images can be mounted and booted on a single server in order to
upgrade its components. That is ok for a couple of servers but will
become a very tedious and time-consuming operation when managing
hundreds of servers in a datacenter. A more comprehensive way to use the
SPP is to combine it with HPE OneView. HPE OneView offers the option to
load an SPP and to use it as a property of a server profile.&lt;/p&gt;
&lt;h1&gt;Server Profile&lt;/h1&gt;
&lt;p&gt;Before we look at how to load SPP into HPE OneView, let us review what a
server profile is. Server Profile is the most important concept in HPE
OneView. It represents the abstraction layer, which isolates a workload
from the underlying server hardware on which it runs. The server-profile
artifact is the place where we set up BIOS settings, the hardware type, the
number of network connections, SAN attachments, and the number and size of disks
for a given workload. We can create many of these server profiles, and
even create server profile templates, and these remain completely
disconnected from the hardware until finally applied to an available
compute resource. When this happens, properties of the profile are
applied to the designated server hardware, connections are made to the
network infrastructure, and disks are carved automatically. One of the
properties of a server profile (or a server profile template) is the
Firmware Baseline. The picture below shows options available for
managing firmware baseline during the creation of a server profile.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/spp-sp-1504754621422.png&quot; alt=&quot;shows options available for managing firmware baseline during the creation of a server profile&quot;&gt;&lt;/p&gt;
&lt;p&gt;So if a server profile also has the Firmware property set to a given
baseline, this baseline will automatically be installed on the server
hardware as part of the profile deployment. This offline technique
(there are other online methods for applying firmware, using SUT - Smart
Update Tool) removes a lot of complexity for server administrators. It
adds a lot more flexibility too, as it allows you to validate and apply
certain SPPs to given workloads, and other SPPs to other workloads, as
this is just another property of a server profile.&lt;/p&gt;
&lt;h1&gt;Custom SPP&lt;/h1&gt;
&lt;p&gt;It is important to note that, while HPE provides its customers with
regular updates of the SPP ISO images, it is possible to build custom
SPP. With this technique, you can create smaller ISO images, for example
by selecting only the components that exist in the customer&apos;s datacenter.
More information on custom SPPs is available at &lt;a href=&quot;https://spp.hpe.com/custom/&quot;&gt;https://spp.hpe.com/custom/&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;Loading SPP in HPE OneView&lt;/h1&gt;
&lt;p&gt;In order to get an SPP to appear in the list of available SPPs to choose
from for a server profile, it needs to be loaded in HPE OneView. The
obvious, GUI-based option to do this is the HPE OneView Web Console
Firmware Bundles page, as shown below:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/spp-firmwarebundle-1504754856328.png&quot; alt=&quot;HPE OneView Web Console Firmware Bundles page&quot;&gt;&lt;/p&gt;
&lt;p&gt;From there it is possible to load additional Firmware Bundles from SPP
ISO images.&lt;/p&gt;
&lt;h1&gt;Automating it&lt;/h1&gt;
&lt;p&gt;Another approach to achieve this is to use PowerShell and the HPE
OneView library for PowerShell, which we discussed in the previous post.
There are a couple of PowerShell cmdlets in the library to deal with
Firmware Bundles. We have:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Get-HPOVBaseline&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Add-HPOVBaseline&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Remove-HPOVBaseline&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;There are several possible scenarios to automate the management of
baselines. For example, we could:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Permanently monitor file creation/deletion in a given folder and add
or remove ISO images as they show up. We can use the
System.IO.FileSystemWatcher PowerShell object to achieve this&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Look for the files in a repository and compare with the baselines
already loaded and keep them in sync&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Simply load ISO images from disk&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;To keep things simple we will implement option 3, as it requires less
PowerShell code. The steps of the script are the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Importing the HPE OneView library&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-postscript&quot;&gt;Import-Module HPEOneView.200
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;Connect to a given HPE OneView Appliance (if not already connected)&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-postscript&quot;&gt;if (!$global:myAppliance) {
	$global:myAppliance = Read-Host &quot;HPE OneView FQDN or IP address&quot;
}
if ($global:ConnectedSessions.Name -notcontains $global:myAppliance) {
	$connection=Connect-HPOVMgmt -appliance $global:myAppliance -User
	administrator -password 
 }
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;Enumerate Firmware Bundles already on appliance&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-postscript&quot;&gt;$bundlesloaded = get-hpovbaseline
write-host &quot;Found the following baseline on appliance&quot;
foreach ($item in $bundlesloaded) { write-host &quot; - &quot;
$item.isoFilename}
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;Enumerate ISO images in current folder&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-postscript&quot;&gt;$filesinfolder = get-item \*.iso
write-host &quot;Found the following ISO in folder&quot;
foreach ($item in $filesinfolder) { write-host &quot; - &quot; $item.Name}
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;Ask for confirmation&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-postscript&quot;&gt;Write-Host &quot;Ok to upload ISO files on disk to appliance (Y/N)?&quot;
$key = $Host.UI.RawUI.ReadKey(&quot;NoEcho, IncludeKeyDown&quot;)
if ($key.Character -ne &quot;Y&quot;) { write-host You wimp! ; break }
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;And start uploading&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-postscript&quot;&gt;foreach ($item in $filesinfolder) {
write-host &quot;Uploading file&quot; $item.name &quot;to appliance&quot;
$task=add-hpovbaseline $item.name
$t = Wait-HPOVTaskComplete $task.uri -timeout (New-TimeSpan -Minutes 30)
}
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;Terminate&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code class=&quot;language-postscript&quot;&gt;Write-Host &quot;Done!&quot;
Write-Host &quot;Disconnecting...&quot;
disconnect-hpovmgmt
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;Putting it all together&lt;/h1&gt;
&lt;p&gt;It is now time to save this PowerShell script and test it against an HPE
OneView Appliance or an HPE Synergy Composer.&lt;/p&gt;
&lt;p&gt;Before we run our script, let us check the appliance baseline view
(empty to start with):&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/spp-nofirmwarebundle-1504803548928.png&quot; alt=&quot;HPE OneView Web Console Firmware Bundles page&quot;&gt;&lt;/p&gt;
&lt;p&gt;In our first run, we can say Y(es) to confirm upload:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/spp-ps1-1504803751946.png&quot; alt=&quot;Powershell console showing firmware bundle upload&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/spp-ps2-1504803760139.png&quot; alt=&quot;Powershell console showing firmware bundle addition&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/spp-ps3-1504803766997.png&quot; alt=&quot;Powershell console showing firmware bundle addition&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/spp-ps4-1504803773690.png&quot; alt=&quot;Powershell console showing firmware bundle upload process complete&quot;&gt;&lt;/p&gt;
&lt;p&gt;After the run, we can check in HPE OneView again, to see that the
baseline is now loaded&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/spp-firmwarebundle-1504754856328.png&quot; alt=&quot;Check Baseline is loaded through HPE OneView web console&quot;&gt;&lt;/p&gt;
&lt;p&gt;In the second run, we are notified that the baseline is already there. We
can just say N(o) to exit:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/spp-ps5-1504803781691.png&quot; alt=&quot;Powershell console notifying that baseline is already there in the second run of the script&quot;&gt;&lt;/p&gt;
&lt;p&gt;&gt; &lt;strong&gt;Note: be careful as HPE OneView modifies the filename of the bundle
&gt; by replacing unwanted characters such as &quot;.&quot; and spaces, by &quot;_&quot;&lt;/strong&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Authenticating against the Composable API]]></title><description><![CDATA[Login Sessions In the previous articles, we learned how to query the version of the HPE
Composable API (also referenced here as the HPE…]]></description><link>https://developer.hpe.com/authenticating-against-the-composable-api/</link><guid isPermaLink="false">https://developer.hpe.com/authenticating-against-the-composable-api/</guid><pubDate>Wed, 06 Sep 2017 17:11:08 GMT</pubDate><content:encoded>&lt;h1&gt;Login Sessions&lt;/h1&gt;
&lt;p&gt;In the previous articles, we learned how to query the version of the HPE
Composable API (also referenced here as the HPE OneView API), using the
&lt;code&gt;GET /rest/version&lt;/code&gt; call, and we discussed the importance of specifying
which version of the API is expected by an application. Let us now use
this to open a login session to HPE OneView. First, we can check the
syntax of the login session by looking up &quot;login&quot; in the online help.
This is what we find:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/loginsession-1504732180732.png&quot; alt=&quot;check the syntax of the login session by looking up &amp;#x22;login&amp;#x22; in the online help&quot;&gt;&lt;/p&gt;
&lt;h1&gt;Getting a session token&lt;/h1&gt;
&lt;p&gt;From the documentation, we can see that in order to open a login
session, we will need to use the POST method against the URI
/rest/login-sessions. This will create (remember POST is the verb used
in HTTP to create something) a session token, which we will use in the
subsequent calls to the API. We will ignore the login domain information
for now. We can also see in the documentation, that we will need to set
three HTTP Headers:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;X-API-Version: set to the version of the API we want to use (let&apos;s
assume 200)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Content-Type: set to application/json, to tell the API, that the
payload we will be providing is in JSON notation&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Accept: set to application/json, to tell the API to return response
in JSON notation&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The payload expected by the API must contains (at minimum) the following
parameters:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;userName: set to the user name to use for authentication&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;password: set to the password for that user name&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Note: Be careful, JSON is case sensitive. userName is expected here, not
username, nor UserName. As a rule, the API expects camelCase: the first
word in lower case, and each subsequent word capitalized.&lt;/p&gt;
&lt;h1&gt;HttpRequester example&lt;/h1&gt;
&lt;p&gt;To keep it simple we can use HttpRequester (a plug-in for Firefox) or
POSTman (a similar plug-in for Chrome) to test this. Set the following
parameters and hit Submit.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Field&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;th&gt;Try it here!&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;URL&lt;/td&gt;
&lt;td&gt;https://{HPEOneViewappliance}/rest/login-sessions&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://213.30.139.22:37441/rest/login-sessions&quot;&gt;https://213.30.139.22:37441/rest/login-sessions&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Verb&lt;/td&gt;
&lt;td&gt;POST&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Header&lt;/td&gt;
&lt;td&gt;Content-Type=application/json&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Accept=application/json&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;X-API-Version=200&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Payload&lt;/td&gt;
&lt;td&gt;{&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&quot;userName&quot;:&quot;administrator&quot;,&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&quot;password&quot;:&quot;password&quot;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;}&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/httprequester-1504732233332.png&quot; alt=&quot;HttpRequester example to retrieve the login session token for Oneview login&quot;&gt;&lt;/p&gt;
&lt;p&gt;The response we get is in the following form (extracted from online
documentation):&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/httpresponse-1504732248902.png&quot; alt=&quot;HttpRequester response showing the session Id&quot;&gt;&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;sessionID&lt;/strong&gt;, also known as the session token, is what we are
looking for.&lt;/p&gt;
&lt;h1&gt;Careful with that token!&lt;/h1&gt;
&lt;p&gt;Now in order to be authorized to call any other API methods, we will
need to provide the session token as another HTTP Header called Auth. So
remember, we now have two HTTP Headers, which &lt;em&gt;have to be provided&lt;/em&gt; at
each call to the API: &lt;code&gt;Auth&lt;/code&gt; and &lt;code&gt;X-API-Version&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Note: You should also be aware that session tokens expire 24 hours after
they were last used.&lt;/p&gt;
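&lt;p&gt;In code, this simply means carrying the same pair of headers on every
request. Continuing the Python sketch above (where &lt;code&gt;session_token&lt;/code&gt; holds the
&lt;strong&gt;sessionID&lt;/strong&gt; returned at login), this could look like:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Headers to send with every subsequent API call
common_headers = {
    &quot;X-API-Version&quot;: &quot;200&quot;,          # the API version your code expects
    &quot;Accept&quot;: &quot;application/json&quot;,
    &quot;Auth&quot;: session_token,           # the sessionID obtained at login
}
&lt;/code&gt;&lt;/pre&gt;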
&lt;p&gt;We can now test this token and call another API method. Let us pick &lt;code&gt;GET /rest/global-settings&lt;/code&gt; which returns the current settings of the HPE
OneView appliance.&lt;/p&gt;
&lt;p&gt;Use HttpRequester with the following parameters and press Submit:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Field&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;th&gt;Try it here!&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;URL&lt;/td&gt;
&lt;td&gt;https://{HPEOneViewappliance}/rest/global-settings&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://213.30.139.22:37441/rest/global-settings&quot;&gt;https://213.30.139.22:37441/rest/global-settings&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Verb&lt;/td&gt;
&lt;td&gt;GET&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Header&lt;/td&gt;
&lt;td&gt;Content-Type=application/json&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Accept=application/json&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;X-API-Version=200&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Auth=your &lt;strong&gt;sessionID&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/httprequesterwithauth-1504732241236.png&quot; alt=&quot;Retrieve global settings using the generated session token in API call&quot;&gt;&lt;/p&gt;
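&lt;p&gt;The same request can be sketched in Python, reusing the &lt;code&gt;appliance&lt;/code&gt; address
and &lt;code&gt;common_headers&lt;/code&gt; defined in the earlier sketches (again, this is only an
illustration, not a complete implementation):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Sketch of GET /rest/global-settings using the session token
response = requests.get(appliance + &quot;/rest/global-settings&quot;,
                        headers=common_headers, verify=False)
print(response.status_code)  # 200 on success, 401 if the Auth header is missing or invalid
print(response.json())
&lt;/code&gt;&lt;/pre&gt;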
&lt;p&gt;As a quick check, remove the &lt;code&gt;Auth&lt;/code&gt; HTTP Header, and try again. You should
get a 401 Status code (Unauthorized). The details of the error clearly
state that you forgot to provide an &lt;code&gt;Auth&lt;/code&gt; header for that request.&lt;/p&gt;
&lt;p&gt;Note: HTTP Headers are case insensitive: Auth or auth would work fine.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/forgotauthheader-1504732225620.png&quot; alt=&quot;Response showing Auth header is missing&quot;&gt;&lt;/p&gt;
&lt;h1&gt;Cleaning up&lt;/h1&gt;
&lt;p&gt;Finally, it is best practice to delete a token when it is no longer needed,
using the &lt;code&gt;DELETE /rest/login-sessions&lt;/code&gt; call. The session ID passed in the
&lt;code&gt;Auth&lt;/code&gt; header will be deleted and will no longer be valid for authentication.
In HttpRequester, you can submit the following request:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Field&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;th&gt;Try it here!&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;URL&lt;/td&gt;
&lt;td&gt;https://{HPEOneViewappliance}/rest/login-sessions&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://213.30.139.22:37441/rest/login-sessions&quot;&gt;https://213.30.139.22:37441/rest/login-sessions&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Verb&lt;/td&gt;
&lt;td&gt;DELETE&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Header&lt;/td&gt;
&lt;td&gt;Content-Type=application/json&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Accept=application/json&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;X-API-Version=200&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Auth=your &lt;strong&gt;sessionID&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;You should get a status of 204 (anything in the 2xx range indicates success)
with no content in the response:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/deletelogintoken-1504732217113.png&quot; alt=&quot;Response showing successful token deletion&quot;&gt;&lt;/p&gt;
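&lt;p&gt;The equivalent Python sketch, reusing the variables from the previous
sketches, would be:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Sketch of DELETE /rest/login-sessions; the token in the Auth header is the one being invalidated
response = requests.delete(appliance + &quot;/rest/login-sessions&quot;,
                           headers=common_headers, verify=False)
print(response.status_code)  # expect 204 (No Content) on success
&lt;/code&gt;&lt;/pre&gt;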
&lt;p&gt;The session token was deleted. If you are curious, try to reissue, from
the HttpRequester history, the request which retrieved
&lt;code&gt;/rest/global-settings&lt;/code&gt;. This time you should get another 401
Unauthorized error. Your token is no longer valid.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://hpe-developer-portal.s3.amazonaws.com/uploads/media/2017/9/tokeninvaliderror-1504732255731.png&quot; alt=&quot;tokeninvaliderror&quot;&gt;&lt;/p&gt;
&lt;h1&gt;Key takeaways&lt;/h1&gt;
&lt;p&gt;Throughout this article, we have described what software needs to do
in order to integrate properly with the HPE OneView API and the Composable
Infrastructure.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Query for the supported version of the API and verify compliance&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Authenticate (using HTTP Headers: &lt;strong&gt;X-API-Version&lt;/strong&gt;=version) and
retrieve session token&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Continue using the API (using HTTP Headers:
&lt;strong&gt;X-API-Version&lt;/strong&gt;=version and &lt;strong&gt;Auth&lt;/strong&gt;=token)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Close the session (using HTTP Headers: &lt;strong&gt;X-API-Version&lt;/strong&gt;=version and
&lt;strong&gt;Auth&lt;/strong&gt;=token)&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;In the following articles, we will continue to investigate important
concepts of the HPE OneView API.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[HPE OneView API Version]]></title><description><![CDATA[Backward compatibility of API In a previous article, we discussed how important it is to provide an
API with software so that your ecosystem…]]></description><link>https://developer.hpe.com/hpe-oneview-api-version/</link><guid isPermaLink="false">https://developer.hpe.com/hpe-oneview-api-version/</guid><pubDate>Wed, 06 Sep 2017 13:41:38 GMT</pubDate><content:encoded>&lt;h1&gt;Backward compatibility of API&lt;/h1&gt;
&lt;p&gt;In a previous article, we discussed how important it is to provide an
API with software so that your ecosystem can grow and other software
entities can integrate with it. But what happens when a new release of
the software comes out? New capabilities may be added and older behavior
modified, which is normal, but it makes it harder to let the
software evolve without disrupting the growing ecosystem. This is
where backward compatibility becomes critical. We, as the API provider, need
to guarantee that existing software integrations are not going to break
when a new release is introduced.&lt;/p&gt;
&lt;p&gt;In order to make this happen, API providers usually require integrators
to specify the version of the API that they expect. In most REST APIs,
this is done with an HTTP header, and the API provider guarantees that
older versions of the API remain unchanged. Of course, if integrators
want to benefit from the newest functionality, they will have to update
their software to use the newest API, but the older version, running
against the older API, should continue to work without modification. It
is up to the API provider to decide how many older versions of the API
are supported at any given release, and to deprecate older ones if
necessary.&lt;/p&gt;
&lt;h1&gt;HPE OneView API history&lt;/h1&gt;
&lt;p&gt;The HPE OneView API has been around for a few years already and the
table below shows the versions of the API for each release.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Product Version&lt;/th&gt;
&lt;th&gt;API Version Number/ X-API-Version&lt;/th&gt;
&lt;th&gt;Date&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1.0&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;September 2013&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1.05&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;March 2014&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1.10&lt;/td&gt;
&lt;td&gt;101&lt;/td&gt;
&lt;td&gt;June 2014&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1.20&lt;/td&gt;
&lt;td&gt;120&lt;/td&gt;
&lt;td&gt;December 2014&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2.0&lt;/td&gt;
&lt;td&gt;200&lt;/td&gt;
&lt;td&gt;October 2015&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3.0&lt;/td&gt;
&lt;td&gt;300&lt;/td&gt;
&lt;td&gt;October 2016&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3.1&lt;/td&gt;
&lt;td&gt;500&lt;/td&gt;
&lt;td&gt;July 2017&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h1&gt;Make a decision…&lt;/h1&gt;
&lt;p&gt;Ok, so I am a partner in the HPE Composable API ecosystem, and I&apos;m about
to make use of the HPE OneView API. Which version should I use? You
should always use the latest release when starting a new integration
project. You can look up &lt;code&gt;currentVersion&lt;/code&gt; by using the &lt;code&gt;GET /rest/version&lt;/code&gt;
as we did in a previous article. If you expect a long development cycle,
engage with your contact in the Composable API Ecosystem, to check what
the best approach is. There may already be a newer version in beta,
which you may be able to leverage.&lt;/p&gt;
&lt;h1&gt;And stick to it!&lt;/h1&gt;
&lt;p&gt;Once you have decided, you should make sure that your application only
uses that particular release (and let us call it &lt;code&gt;expectedVersion&lt;/code&gt;).
Remember that several months down the road, there will be new releases,
which you do not really know about at the time you write your
code. You cannot be sure that your application will work well against a
future release.&lt;/p&gt;
&lt;p&gt;So let us recap. The first thing your code should do is implement
the following algorithm to verify that the version of the API you expect
is indeed supported by the appliance instance you are about to
integrate with (a short Python sketch follows the list).&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Call &lt;code&gt;GET /rest/version&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Retrieve &lt;code&gt;minimumVersion&lt;/code&gt; and &lt;code&gt;currentVersion&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Check that &lt;code&gt;minimumVersion&lt;/code&gt; &amp;#x3C;= &lt;code&gt;expectedVersion&lt;/code&gt; &amp;#x3C;= &lt;code&gt;currentVersion&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If not then raise an error: &quot;Sorry, cannot integrate with this
version of HPE OneView&quot;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Otherwise set HTTP Header &lt;code&gt;X-API-Version&lt;/code&gt; to &lt;code&gt;expectedVersion&lt;/code&gt; for &lt;em&gt;all&lt;/em&gt;
subsequent API calls&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
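&lt;p&gt;As an illustration, here is one way the algorithm above could be sketched in
Python with the &lt;code&gt;requests&lt;/code&gt; library. The appliance address and
&lt;code&gt;expected_version&lt;/code&gt; are placeholders to adapt to your own integration.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Sketch of the version check described in steps 1 to 5
import requests

appliance = &quot;https://{HPEOneViewappliance}&quot;  # replace with your appliance address
expected_version = 200                       # the API version your code was written against

# Step 1: call GET /rest/version
versions = requests.get(appliance + &quot;/rest/version&quot;, verify=False).json()

# Step 2: retrieve minimumVersion and currentVersion
minimum = versions[&quot;minimumVersion&quot;]
current = versions[&quot;currentVersion&quot;]

# Steps 3 and 4: check the expected version is within the supported range
if not (minimum &lt;= expected_version &lt;= current):
    raise RuntimeError(&quot;Sorry, cannot integrate with this version of HPE OneView&quot;)

# Step 5: use X-API-Version=expected_version for all subsequent calls
api_headers = {&quot;X-API-Version&quot;: str(expected_version), &quot;Accept&quot;: &quot;application/json&quot;}
&lt;/code&gt;&lt;/pre&gt;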
&lt;h1&gt;Plan your updates on a regular basis&lt;/h1&gt;
&lt;p&gt;HPE supports up to three versions prior to the current one. Therefore,
in theory, the above algorithm will allow your software to work across
many HPE OneView releases, and over a long period. Ultimately, there
will be a time when &lt;code&gt;minimumVersion&lt;/code&gt; becomes greater than
&lt;code&gt;expectedVersion&lt;/code&gt; and the error message from step 4 will show up. This is
not a great user experience for your user community, so you should
always do your best to upgrade your software and raise the
&lt;code&gt;expectedVersion&lt;/code&gt;, before this message hits your users.&lt;/p&gt;</content:encoded></item></channel></rss>